Suricata Fast.log, but in JSON?
-
TL;DR: Does anyone have an approach for getting Suricata alerts in JSON or syslog format into their own file (akin to Fast.log)?
Hey all, new poster here. This board keeps coming up anytime I look up stuff on Suricata, so I figured it may be a good place to start.
I am using Suricata as a HIDS on some hosts and have some basic alert rules set up, mostly for testing purposes. I have another side project going to eventually do some data analytics on any alerts I generate.
Herein lies the crux of my problem: I need the data in JSON to do some ETL and enrichment later on. EVE does output JSON, but I have been playing with the YAML and cannot get EVE to output only alert findings. If someone knows a good way to do that, that would be fantastic.
Another approach I have looked at is parsing Fast.log, but it does not seem like you can output that data in JSON or syslog format. The standard Fast.log format isn't one of the Apache-* formats either, so I can't even do some hacky on-host ETL. Is there an easy way to turn it into JSON, or even syslog?
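To illustrate, here is roughly the shape of the hacky on-host ETL I had in mind, assuming the default Fast.log line layout. The regex and field names below are my own guesses at the format, not anything official, so treat this as a sketch:

```python
import json
import re

# Hypothetical parser for the default Fast.log line layout; the exact
# format can vary by Suricata version, so this regex is an assumption.
FAST_RE = re.compile(
    r"(?P<timestamp>\S+)\s+\[\*\*\]\s+"
    r"\[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\]\s+"
    r"(?P<msg>.*?)\s+\[\*\*\]\s+"
    r"\[Classification:\s+(?P<classification>.*?)\]\s+"
    r"\[Priority:\s+(?P<priority>\d+)\]\s+"
    r"\{(?P<proto>\w+)\}\s+"
    r"(?P<src>\S+)\s+->\s+(?P<dst>\S+)"
)

def fast_to_json(line):
    """Turn one Fast.log line into a JSON string, or None if it doesn't parse."""
    m = FAST_RE.match(line.strip())
    return json.dumps(m.groupdict()) if m else None
```

So something like `fast_to_json("08/13/2019-14:29:09.058698  [**] [1:200002:6] ET USER_AGENTS Suspicious User Agent (BlackSun) [**] [Classification: A Network Trojan was detected] [Priority: 1] {TCP} 10.0.0.5:49152 -> 192.0.2.1:80")` would hand back a flat JSON object with the SID, message, priority, and endpoints pulled out.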
I played around with the syslog output, but from what I saw/read in the docs, it will be collated with all of my other syslogs, which I definitely do not want, and the burden of separating it back out isn't worth it to me either.
Thanks!
Edit
To add some color, this is a lazy copy-and-paste of what EVE shows, taken from my SE cross-post:
,"event_type":"stats","stats":{"uptime":168,"capture":{"kernel_packets":313,"kernel_drops":0,"errors":0},"decoder":{"pkts":313,"bytes":68519,"invalid":0,"ipv4":305,"ipv6":0,"ethernet":313,"r$
{"timestamp":"2019-08-13T14:29:09.058698+0000","event_type":"stats","stats":{"uptime":176,"capture":{"kernel_packets":313,"kernel_drops":0,"errors":0},"decoder":{"pkts":313,"bytes":68519,"invalid":0,"ipv4":305,"ipv6":0,"ethernet":313,"r$
{"timestamp":"2019-08-13T14:29:17.059944+0000","event_type":"stats","stats":{"uptime":184,"capture":{"kernel_packets":313,"kernel_drops":0,"errors":0},"decoder":{"pkts":313,"bytes":68519,"invalid":0,"ipv4":305,"ipv6":0,"ethernet":313,"r$
I would only like to see something like this, from Fast.log, but, you know... JSONified:
[**] [1:200002:6] ET USER_AGENTS Suspicious User Agent (BlackSun) [**] [Classification: A Network Trojan was detected] [Priority: 1] {TCP}
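For what it's worth, this is a minimal sketch of the kind of on-host filtering I'm after, assuming EVE's usual one-JSON-object-per-line output (the file paths are placeholders for whatever your setup uses):

```python
import json

def filter_alerts(eve_path="eve.json", out_path="alerts.json"):
    """Stream an EVE file and copy only alert events to a new file.

    Assumes one JSON object per line, which is how EVE writes eve.json.
    """
    with open(eve_path) as src, open(out_path, "w") as dst:
        for line in src:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip truncated or partial lines
            if event.get("event_type") == "alert":
                dst.write(line)
```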
-
You probably keep finding references to the pfSense forums because pfSense has a popular add-on GUI package for managing the Suricata binary's configuration.
Most folks wind up piping the EVE JSON data collected by Suricata on pfSense out to a third-party log analysis tool on another external host. Two of the more popular options are Graylog and ELK.
Here is a link to a how-to using Suricata and ELK on Ubuntu: https://www.howtoforge.com/tutorial/suricata-with-elk-and-web-front-ends-on-ubuntu-bionic-beaver-1804-lts/. This particular setup has Suricata on the Ubuntu machine as well, so pfSense would be out of the picture (which is how I understand your current situation to be).
Here is a Graylog link: https://www.graylog.org/post/visualize-and-correlate-ids-alerts-with-open-source-tools.
I believe you can create your own custom filters for either of these platforms to further refine how logged alerts are presented.
-
@bmeeks Thank you for the reply, that does make sense now that I am browsing the other board topics.
You did make a correct assumption: pfSense is out of the picture. I had already been playing around with ELK, though. The problem is that while I can move all of EVE over easily and it will index it for me, probably 80% of the lines are garbage noise (to me), which is why I was wondering whether you can pare down what EVE outputs to ONLY alerts.
To add some color for other posters: the final two destinations are a custom BI solution I built that enriches the data with GEOINT and OSINT feeds for consumption by analysts. I will also be sending the non-enriched logs into an ML model that does vector autoregression and a few other bits of time-series evil, eventually to be consumed by yet another side project for needle-in-the-needlestack insights. This is all very much hobbyist, so while ELK, Graylog, and Splunk all work great, they don't work for the madness I am over-engineering.
-
Found an answer. Took me long enough, given it was right in front of me the whole time...
On line 60 in the YAML, you can disable stats; that probably cuts out 80% of the garbage data in EVE.
You can further disable logging (in EVE) of metadata for DNS, TLS, TCP, HTTP, etc. YMMV, but I feel keeping that stuff is fine, since you can readily filter it out with something like Kibana or Splunk.
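In case it helps anyone, here is roughly what my eve-log section looks like now. This is a sketch; key names and line numbers vary between Suricata versions, so compare it against the suricata.yaml your package shipped with:

```yaml
outputs:
  - eve-log:
      enabled: yes
      filetype: regular
      filename: eve.json
      types:
        - alert          # keep only alert events
        # - stats        # disabled: the bulk of the noise
        # - dns
        # - tls
        # - http
```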