Snort + Barnyard2 + What?

  • I have Snort and Barnyard2 configured, and they seem to be working: I can see the alerts. However, that does little more than someone shouting that things are happening. I cannot possibly investigate / classify / comprehend a list like:

    12/06/18-11:45:09.859121 ,1,2402000,5021,"ET DROP Dshield Block Listed Source group 1",TCP,,50328,,16655,10185,Misc Attack,2
    12/06/18-11:45:30.962013 ,1,2402000,5021,"ET DROP Dshield Block Listed Source group 1",TCP,,50776,,46090,23758,Misc Attack,2
    12/06/18-11:46:25.528181 ,1,2402000,5021,"ET DROP Dshield Block Listed Source group 1",TCP,,49261,,7900,1529,Misc Attack,2
    12/06/18-11:48:11.548984 ,1,2402000,5021,"ET DROP Dshield Block Listed Source group 1",TCP,,40886,,8472,46713,Misc Attack,2
    12/06/18-11:48:44.555142 ,1,2402000,5021,"ET DROP Dshield Block Listed Source group 1",TCP,,12503,,40000,77,Misc Attack,2
    12/06/18-11:49:43.664125 ,1,2403440,45443,"ET CINS Active Threat Intelligence Poor Reputation IP TCP group 71",TCP,,43597,,1800,64802,Misc Attack,2

    So naturally I want a way to analyse this stream of data -- to drill down and inspect the actual payload and trace interesting events. There is nothing I can do with a generic rule-match alert like the ones above.

    So I looked at BASE - too old; it cannot be installed on anything released in the past decade. It requires PHP 5 and has not been touched in a very long time.

    Snorby - also too old; it does not install on anything modern. After fixing about 10 errors I was stumped by this:

    root@mypc snorby # bundle exec rake snorby:setup
    rake aborted!
    undefined method `yaml_as' for Module:Class
    Did you mean?  yaml_tag

    So what GUI tools exist that can:

    • Talk to Barnyard2 / interface with its MySQL DB
    • Is supported on a modern OS
    • Has the ability to categorize / inspect alerts and drill down to packet dump level

    Any opinions would be greatly appreciated.

  • @pwnell said in Snort + Barnyard2 + What?:

    So what GUI tools exist that can:

    Talk to Barnyard2 / interface with its MySQL DB
    Is supported on a modern OS
    Has the ability to categorize / inspect alerts and drill down to packet dump level

    Any opinions would be greatly appreciated.

    I use Graylog + Grafana for this.

    I am new to those platforms, so I won't be much help when you run into issues.

    The general configuration is as follows:

    • Get a Graylog VM/container set up on a machine with a good amount of RAM and SSD space. Be aware that the default OVA image from Graylog 2.4 will require some modifications. I forget exactly what I had to do, but it took some annoying tinkering just to get it up and running. This will give you something similar to an ELK stack (which is another way of doing this).

    • Configure your logging to feed into Graylog. I use Suricata, so I use the built-in EVE JSON logging, which makes setup on the Graylog side far simpler. If you're set on using Snort, then you have to either see if someone has created plugins for Barnyard/syslog output from Snort, or write your own parser. To forward logs in a more robust way, I use the Filebeat log forwarder rather than syslog-ng or rsyslog.

    • Once you can process basic events in Graylog, expand your capabilities by adding in geo location for mapping, reverse dns, and perhaps OTX data.

    • Install Grafana. I have a few templates I've modified, but you can also find reasonable starting places at the Grafana site.
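    For the log-forwarding step above, a minimal Filebeat configuration that ships Suricata's EVE JSON to a Graylog Beats input might look like the sketch below. The file path, hostname, and port are assumptions you'd adapt to your setup, and on Filebeat versions before 6.3 the top-level key is `filebeat.prospectors` rather than `filebeat.inputs`:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/suricata/eve.json   # common EVE JSON location; adjust to your install
    json.keys_under_root: true       # lift the JSON fields to the top level of the event
    json.add_error_key: true         # flag lines that fail to parse as JSON

# Graylog's Beats input speaks the same protocol as Logstash's beats input
output.logstash:
  hosts: ["graylog.example.lan:5044"]   # hypothetical Graylog host and Beats port
```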

    I found this link helpful as a quickstart for Suricata + Graylog -- but be aware that the author is pretty spare with details, the tutorial targets older versions, and some of his steps are ill-advised (his template for the Elasticsearch index, for example, maps most fields to plain text/keyword rather than richer datatypes) -- you will have to learn on your own and make changes.

    It took me a few days to get it all figured out, and you will use a ton of disk space for the logging if you have a busy network and keep logs for any period of time.

    The benefits are much more actionable alerts due to:

    • reverse DNS for every IP by default
    • immediate links to rule source and documentation at snort/et/suricata websites
    • the ability to correlate streams across interfaces
    • additional IP reputation data and links to determine if traffic is a potential issue or if an IP can be whitelisted (I use VirusTotal, RiskIQ, Hybrid-Analysis, and a few others)

    Somewhere in the forum here is a screenshot of a small portion of the dashboard I made; if you look around, you'll get an idea.

  • @boobletins said in Snort + Barnyard2 + What?:

    Install Grafana

    I have managed to get Filebeat, Grafana and Suricata all running. However I am stumped at this step:

    Configured lookup table <Service Port Translator> doesn't exist

    This happens when I try to apply the "Suricata Content Packs" content pack. I did copy the CSV file:

    cp service-names-port-numbers.csv /etc/graylog/server

    But I did not see where I need to inform Graylog of this lookup table. Any pointers? Google did not help.

    ** Nevermind - I see I had to manually create the data adapter, then the cache, then the lookup table.

    ** Update2: Seems like I was wrong. If I create it manually, I now get a duplicate key violation. I have no idea how to get this content pack installed :(

    ** Update3: For anyone in my shoes: I ended up editing the content pack file, deleting everything except the lookup table config, importing and applying it, then uninstalling it, and finally removing the lookup table definitions and applying the rest of the config. It seems like the latest Graylog does not allow creating a lookup table and using it in the same content pack? Anyway, I now have raw Suricata logs in Graylog.

    ** Update4: I am nearly there but stumped. Grafana is installed; I defined the Elasticsearch data source and then imported dashboard ID 5243, but I have only Interface, Protocol, Alert Category and Destination Service Name selectors in my Grafana dashboard - nothing else. Clicking Add Panel does nothing: dark grey screen, no errors in the logs, just this in the JavaScript console:

    TypeError: undefined is not an object (evaluating 'this.pluginInfo.exports') — DashboardGrid.tsx:166

  • Ehe, yeah, that sounds about like how my experience went, only I would have had an "** Update15" or so. Sorry I wasn't around earlier to save you some headaches.

    Your current error isn't one I ran into, but did you check the dependencies for that dashboard?

    GRAFANA 5.0.3
    GRAPH 5.0.0
    PIE CHART 1.2.0
    SINGLESTAT 5.0.0
    TABLE 5.0.0

    If you have all of those installed, then I won't be much help. For some reason I think WorldMap wasn't installed when I did this, and it isn't in that list, but it's used in the dashboard. Maybe that's the missing component?
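    If a missing panel plugin does turn out to be the culprit, Grafana plugins are installed from the command line and need a service restart to be picked up. The plugin IDs below (Worldmap and Pie Chart) are the ones from Grafana's official plugin catalog; the service name assumes a standard systemd install:

```shell
grafana-cli plugins install grafana-worldmap-panel
grafana-cli plugins install grafana-piechart-panel
systemctl restart grafana-server   # reload so Grafana picks up the new panels
```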

    That's the same dash I started with and modified. I'll see if I can scrub the ones I created and export them for you tomorrow. The base version is attractive, but not as useful as I'd like it to be.

    Some ideas if you're spending the weekend working on this:

    • Don't get married to your initial data. You will need to adjust the datatypes in your Elasticsearch indices which will purge existing data. You will also need to reconcile field names and normalize them across different logging functions as you add features.

    • If you're using Suricata in inline IPS mode, start by creating a table that contains "allowed" events. These would be Alerts that fired, but were not blocked due to your current SID management settings. This is often the most useful data as it indicates a rule that needs to be adjusted or traffic that needs to be investigated (because it's not being blocked). The query would just add something like: "AND alert_action:allowed" - as far as I know, this is difficult to get out of pfSense easily, but it's the most useful view.

    • The EVE JSON logging from Suricata can output DNS logs. This is very useful because true reverse DNS lookups (PTR records) are often missing which means if you enable automatic reverse DNS in Graylog (this is a lookup table + plugin you create -- its own complication), you can still lack human-readable data. If you use the logging from the initial DNS resolution and run the query in reverse, you can often get more information about what was happening. This also allows you to search for traffic to/from IP addresses which were not recently resolved which is its own kind of suspicious traffic. I initially tried to investigate accessing Unbound's cache to accomplish this, but found that the EVE JSON logging was easier. I suggest using actual RDNS at the same time because the PTR records provide different information in many cases (if there are CNAME records, etc).

    • Add links to table cells. Unfortunately Grafana is a little frustrating here (I wish there was a way to add arbitrary HTML easily) -- but you can still get a lot of mileage out of turning IP address, domains, and SIDs into links to useful resources (eg VirusTotal and similar services). Turning SIDs into links to the rule data can be quite useful as well. I was recently getting a lot of alerts on SID 2026487 from ET Open. I was able to click through to the alert text, read what was happening, click back to my FLOW and read the data that triggered it, and determine that the rule was either overly broad or contained a typo. I wrote the ET Open guys and they adjusted the rule (you can also modify them in Suricata).

    • Add additional JSON lookup tables. The Cache + Data Adapter + Lookup Table feature in Graylog is great for interacting with external APIs without much work. DShield, for example, has a free API to check IP addresses against their sensors. If you checked every IP address every time Graylog saw it, you'd be hammering the API. The built-in caching in Graylog is very nice: you create the Data Adapter, set some cache settings, and off you go. I'm using this API to get the "attacks" and "count" values for each IP the network sees -- without hammering the service. Obviously if you run into 1000 new IP addresses a second, this isn't a great idea. But for a home network? It's great. AlienVault OTX has a free (rate-limited) API, as does VirusTotal -- I plan to add these in the near future.
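    The cache-in-front-of-an-API pattern described above can be sketched in a few lines of Python. This is only a rough stand-in for what Graylog's Data Adapter + Cache gives you, and the DShield URL is my assumption (check their API docs):

```python
import json
import time
import urllib.request

class DShieldCache:
    """A tiny TTL cache in front of an IP-reputation API -- a rough
    stand-in for Graylog's Data Adapter + Cache combination."""

    def __init__(self, ttl=3600):
        self.ttl = ttl          # seconds a cached answer stays fresh
        self._cache = {}        # ip -> (fetched_at, data)
        self.api_calls = 0      # how many real API hits we made

    def lookup(self, ip, fetch=None):
        now = time.time()
        hit = self._cache.get(ip)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]       # fresh cache hit: no API call made
        data = (fetch or self._fetch)(ip)   # fetch is injectable for testing
        self.api_calls += 1
        self._cache[ip] = (now, data)
        return data

    def _fetch(self, ip):
        # DShield's JSON endpoint (URL is an assumption on my part)
        with urllib.request.urlopen('' % ip) as r:
            return json.load(r)
```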

    Just one note: the DShield drop list is consistently responsible for blocking the most attacks. Today it was responsible for blocking 92% of 4500+ alerts. There are lots of factors here (pfBlocker, the selection bias of blocking entire class C ranges), but it's still by far the most "valuable" rule in the ET Open set. It's interesting to watch that visually over time. It's also interesting to think about the noise reduction it provides...

  • Hey,

    Thanks for the big update. I stopped posting at Update 4 as I thought it was getting annoying. Eventually I figured it out (it was missing plugins), but I have to confess I did not look at the dependencies (doh). Instead I reviewed the panel JSON and looked up which plugin would, for instance, support geo data. What a roundabout way.

    I had to change some column names (the geo field names did not match my setup), change data types, etc., like you said. His Elasticsearch Suricata index template did not work for me at all, so I am using the Grafana one, modified a bit.

    I'll review your ideas next... Thanks so much for your help.

  • How are you faring with this? Any useful APIs or views you're willing to share? Let me know if you'd like to see the dashboards I use.

    A tip for anyone else who might try this: the EVE JSON encodes packets using base64 -- scapy can take the base64 packet and convert it into a .pcap for use in Wireshark. That would look something like this:

    from elasticsearch import Elasticsearch
    from scapy.all import Ether, wrpcap
    from flask import Flask, send_file
    import base64

    app = Flask(__name__)
    es = Elasticsearch()  # defaults to localhost:9200

    # Route added so the function is reachable; pick whatever path you like
    @app.route('/pcap/<flow_id>')
    def get_flow_single_packet(flow_id):
        # The index name was elided in the original post -- use your Suricata index pattern
        res ="",
                        q='flow_id:' + flow_id + ' AND _exists_:packet',
                        size=1)
        # Decode the base64-encoded packet from the EVE record into a scapy packet
        p = Ether(base64.b64decode(res['hits']['hits'][0]['_source']['packet']))
        pcap_name = flow_id + '-single.pcap'
        wrpcap(pcap_name, p)
        return send_file(pcap_name)

    if __name__ == '__main__':'', port=9201)

    Note: This example is wildly insecure, doesn't clean up the .pcap files after sending them, and only returns a single packet from a flow. It's just an example.

    Something like this would let you click from a flow in Grafana and open the packet in Wireshark -- without having to deal with base64 and so forth.
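    If you'd rather avoid the scapy dependency, the classic libpcap file format is simple enough to write by hand with the standard library: a fixed 24-byte global header plus a 16-byte record header per packet. This is a sketch assuming the EVE "packet" field contains a base64-encoded Ethernet frame (link type 1):

```python
import base64
import struct
import time

def b64_packet_to_pcap(b64_packet, path):
    """Decode one base64 packet (as found in Suricata's EVE 'packet'
    field) and write a minimal libpcap file Wireshark can open."""
    raw = base64.b64decode(b64_packet)
    now = time.time()
    with open(path, 'wb') as f:
        # Global header: magic, version 2.4, tz offset 0, sigfigs 0,
        # snaplen 65535, link type 1 (Ethernet)
        f.write(struct.pack('<IHHiIII', 0xa1b2c3d4, 2, 4, 0, 0, 65535, 1))
        # Record header: ts_sec, ts_usec, captured length, original length
        f.write(struct.pack('<IIII', int(now), int(now % 1 * 1e6),
                            len(raw), len(raw)))
        f.write(raw)
```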

  • @boobletins

    So far I have connected my pfSense filter log to graylog / grafana to see some firewall rule statistics. I also connected another remote node's suricata to the same stream so I have been dealing with setting up new filters to filter on source.

    I built some new panels, but this is still early days. I had to restart from scratch after I could not delete an extractor; it turned out to be not an app issue but a Safari browser memory leak.

    Will share once I have something novel :)