Maxmind database update?
-
Some time back I started to look at a way to visualize firewall and Suricata logs. I found pfELK and managed to get that up and running, which gave me some nice dashboards with e.g. source and destination heatmaps, and a world map showing which countries had the most "hits" in the logs, etc.
Some time later I discovered that none of that GeoIP data was available anymore. It turned out it had stopped working about one month after I launched the tool... I have been going back and forth in pfELK trying every which way to get it back up and running, to no avail.
Until yesterday, when out of the blue it started working again... And then I realized it wasn't in the tool itself, it was something I did in pfSense...
While investigating this issue (link), I went into pfBlocker, tried out changes to the Top Spammer list in the GeoIP selections, and ran forced Updates. That is when the GeoIP visualizations suddenly started working again... I do think I have run updates before, but without making any actual changes to the lists.
So this is all great, but why did this happen? I read somewhere that the MaxMind database needs to be updated at least once per month (can't find that info now). Could that be why it stopped working after a month? But shouldn't it be updated when the pfBlocker update job runs?
Suricata also has the ability to download the GeoLite2 database, but perhaps that's separate from pfBlocker?
-
@gblenn said in Maxmind database update?:
Suricata also has the ability to download the GeoLite2 database, but perhaps that's separated from pfBlocker?
Yes, Suricata can have the GeoLite2 database enabled, but it and pfBlockerNG-devel store the database file in different locations. Suricata downloads and maintains a copy of the DB file in
/usr/local/share/suricata/GeoLite2/.
I once floated the idea of having all the packages that use GeoIP agree on a shared location for the database, but there was not much interest at the time. As a result, different copies of the file may get downloaded and stored in different locations. Each package knows where its own file resides, so they do not step on each other, but it did strike me as a little inefficient to have multiple packages downloading the same file and then storing it in, and using it from, different locations on the same firewall box.
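If anyone wants to confirm the duplication on their own box, a small sketch along these lines can report which local copies exist. Only the Suricata directory is confirmed in this thread; the pfBlockerNG path and the exact .mmdb filename are my assumptions, so check your own install:

```python
# Sketch: list which local copies of the GeoLite2 DB actually exist.
# The Suricata path is from this thread; the pfBlockerNG path and the
# GeoLite2-Country.mmdb filename are guesses -- verify on your system.
import os

CANDIDATES = [
    "/usr/local/share/suricata/GeoLite2/GeoLite2-Country.mmdb",  # Suricata (confirmed above)
    "/usr/local/share/GeoIP/GeoLite2-Country.mmdb",              # pfBlockerNG (assumed)
]

def find_geoip_copies(paths):
    """Return (path, size_in_bytes) for each candidate file that exists."""
    return [(p, os.path.getsize(p)) for p in paths if os.path.isfile(p)]

for path, size in find_geoip_copies(CANDIDATES):
    print(f"{path}: {size} bytes")
```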
-
@gblenn IIRC pfELK configures Logstash to perform GeoIP lookups against IPs in the pfSense firewall log. So that's yet another copy of the MaxMind DB.
-
@darcey Exactly, and that is where I was spending all my efforts trying to fix the problem. The content of the firewall or Suricata logs shouldn't change just because I updated the MaxMind database in pfSense, right?
And perhaps it has nothing to do with Maxmind in pfSense at all, I just assumed that since the timing seemed to match...
-
@gblenn Perhaps something you changed in pfSense/pfBlocker/Suricata impacted remote logging or its format, and Logstash failed to extract the info (e.g. a grok pattern mismatch)?
IIRC you can manually run those patterns against log excerpts to see if they pluck out the variables as expected.
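As a rough illustration of that kind of manual pattern test (these are not the actual pfELK grok definitions, and the sample log line is made up), you can emulate grok with a few lines of Python:

```python
# Sketch: manually testing a grok-like extraction against a log excerpt.
# Grok's %{IPV4:name} is roughly a named regex group; the sample line
# below is illustrative, not real pfSense filterlog output.
import re

IPV4 = r"\d{1,3}(?:\.\d{1,3}){3}"
PATTERN = re.compile(rf"(?P<src_ip>{IPV4}).*?(?P<dst_ip>{IPV4})")

sample = "block in on igb0: 203.0.113.7 -> 198.51.100.20"
m = PATTERN.search(sample)
if m:
    print(m.group("src_ip"), "->", m.group("dst_ip"))  # 203.0.113.7 -> 198.51.100.20
```

If the pattern stops plucking out the expected variables after a logging change, that points at the parsing side rather than the GeoIP database.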
Also double check whether geo location is present in the logs sent from pfSense. I'm pretty sure pf firewall logs do not contain such information, but I'm less sure about Suricata or pfBlocker logs. It also depends on how you're shipping logs from pfSense to your ELK stack. I only used the built-in remote syslog; I'm guessing you may be doing more than that.
-
@darcey Yes you're right, it must be somewhere there that I changed something. I just don't know what, although it is likely something logging related...? However, when trying to fix it in pfELK I did also go through all the settings in pfSense and Suricata to make sure I had not missed anything there... as per the pfELK docs.
In terms of logging I also use the built-in remote logging, so nothing special there. And I don't think there is any geo-location info in the logs from pfSense, regardless of which log I look at. Well, actually pfBlocker logs contain country ISO codes identifying the list that the blocking was related to. Suricata and firewall logs have no such info... so the translation into e.g. destination.geo.country_name is happening in Logstash.
I guess I'll have to monitor this, and the other issue with pfBlocker and see if something changes in the future, hoping that it won't...
-
@gblenn I know you now have it working. However, depending on what you selected to log before, your shipped logs may have been affected by BSD rsyslog's maximum line length for remote logging. It is surprisingly small and, equally surprisingly, defined in an RFC! I think the workaround was installing syslog-ng (and maybe using TCP?). Otherwise Suricata logs get truncated if you start logging payload information etc.
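To see why that bites Suricata's EVE JSON logs in particular, here's a toy sketch. The 1024-byte cap used here is the classic RFC 3164 packet limit; your actual daemon's limit may differ:

```python
# Sketch: a length-capped syslog transport truncates a long EVE-style
# JSON record, and the truncated line is no longer parseable JSON.
import json

record = json.dumps({"event_type": "alert", "payload": "A" * 2000})
truncated = record[:1024]  # what a 1024-byte transport cap would deliver

def parses(line):
    """True if the line is still valid JSON after transport."""
    try:
        json.loads(line)
        return True
    except json.JSONDecodeError:
        return False

print(parses(record))     # True
print(parses(truncated))  # False
```

A plain-text log line that gets cut off merely loses its tail; a cut-off JSON line fails to parse at all, which is why truncation hits the Suricata pipeline so hard.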
-
@darcey Good point, that is one item I completely overlooked in this specific case... I only checked the Status/Syslog settings, where I have the pfELK IP:Port defined etc.
But I guess it must have been somewhere in the instructions, because I actually had that installed. I have the same server IP:Port set as "Object parameters" under the Advanced tab, for the Object Type destination. It seems I have only defined an object for Suricata, but I'm getting DHCP, DNS and firewall logs in pfELK as well. So somehow both syslogs are working.
I do have it set to UDP though, not TCP if that is what you referred to?
-
@gblenn I think it is perfectly valid to have both syslog daemons running, and it sounds like syslog-ng is there purely to handle shipping the larger Suricata logs, with rsyslog coping fine with everything else. TCP isn't necessary either, but it is more reliable if log messages exceed a single UDP payload. So, at a guess, I'd say truncated logs aren't your problem.
However, try turning on more Suricata log options and see if you break things: check whether the Suricata dashboard still presents recent data as expected in Kibana. Then you can be sure truncation is not an issue.
IIRC, with Suricata logs being JSON, truncated lines pretty much break the entire Logstash parsing of Suricata. I am not running it right now so cannot check.
EDIT: Also, again IIRC, there are remote syslog options within the Suricata package itself. But I cannot remember how, or if, these should be enabled when you are also running syslog-ng to ship Suricata logs. I used Suricata for a while, mainly as an exercise, but could not justify the increased resources needed with the move to v6.