NAT Logs
-
One last question, how did you get DNS lookups as part of your set up?
There are a few steps, hope I remember all of them in this post (22 o'clock here already).
1- First, you need a reverse zone in your DNS. This reverse zone can be dynamically updated or not, you choose. I opted for static IP leases and created A and PTR records for each host individually, but you don't have to.
2- Import these extractors into any input you want to use them on:
{
  "extractors": [
    {
      "title": "hostname_src",
      "extractor_type": "lookup_table",
      "converters": [],
      "order": 12,
      "cursor_strategy": "copy",
      "source_field": "sourceIPv4Address",
      "target_field": "hostname_src",
      "extractor_config": { "lookup_table_name": "hostname" },
      "condition_type": "regex",
      "condition_value": "192.168.255.25|192.168.10."
    },
    {
      "title": "hostname_dst",
      "extractor_type": "lookup_table",
      "converters": [],
      "order": 13,
      "cursor_strategy": "copy",
      "source_field": "destinationIPv4Address",
      "target_field": "hostname_dst",
      "extractor_config": { "lookup_table_name": "hostname" },
      "condition_type": "regex",
      "condition_value": "192.168.255.25|192.168.10."
    }
  ],
  "version": "6.1.5"
}
Or, through the GUI:
The condition value was added to allow only local addresses to be resolved, but if you want the entire world to be resolved, just remove that.
3- Go to lookup tables in Graylog, create an adapter, a cache and a lookup table, and point it at your DNS server.
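To see what the pieces above actually do, here is a small shell sketch (the IPs are just examples): it checks an address against the extractors' condition regex, and builds the `in-addr.arpa` name that a reverse (PTR) lookup queries — Graylog's DNS adapter performs the real query.

```shell
#!/bin/bash
# Condition regex from the extractors above: only matching sources get resolved.
cond='192.168.255.25|192.168.10.'

# Reverse DNS queries the PTR record for the reversed-octet name under
# in-addr.arpa; this builds that name for a given IPv4 address.
ptr_name() {
  echo "$1" | awk -F. '{ print $4 "." $3 "." $2 "." $1 ".in-addr.arpa" }'
}

for ip in 192.168.10.42 10.0.0.1; do
  if echo "$ip" | grep -Eq "$cond"; then
    echo "$ip matches -> would query $(ptr_name "$ip")"
  else
    echo "$ip skipped by condition"
  fi
done
```

Note the condition is a regex, so the unescaped dots also match any character; for a home LAN that looseness is usually harmless.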
That alone should do it, if you have any doubts just ask, but I will only answer tomorrow, 22:30 here already.
-
@mcury Got it working.
Ever thought about creating a blog and posting this? At the very least this is extremely useful and should be preserved
-
@mcury Had to update the JVM heap size. Looks like the IPFix widget I made was killing my VM.
-
Had to update the JVM heap size. Looks like the IPFix widget I made was killing my VM.
DNS data can be overwhelming.
I had to increase the JVM heap size and configure zswap with zsmalloc and zstd to avoid crashes.
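For reference, a hedged sketch of the zswap side of that (kernel boot parameters; the exact values here are illustrative, not necessarily my config):

```
# /boot/firmware/cmdline.txt (Raspberry Pi OS) — append to the existing single line:
# zswap.enabled=1 zswap.compressor=zstd zswap.zpool=zsmalloc zswap.max_pool_percent=20
```

On a grub-based distro the same parameters go on GRUB_CMDLINE_LINUX_DEFAULT instead. Reboot and check /sys/module/zswap/parameters/ to confirm they took effect.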
Then, checked every rule that was logging, or tracking, and optimized that also.
Created a bunch of no-log rules, changed to 5 days of live data only; older indices are closed, but I can open them anytime I need.
-
Just updated my graylog to 6.1.6, still works.
I'm doing some magic here: I've got Graylog, a Samba domain controller with FreeRADIUS, an Apache server with PHP and SSL, a NUT server for everything (APC UPS), and a UniFi controller that manages 3 devices, one AP and two switches, all working together.
All of these services running in a Raspberry pi 5 8GB hehe
It runs so well if you optimize it correctly that I've already deployed a few Raspberry Pi 5s to some small customers.
As soon as the 16GB variant reaches my region, I'll get one to see how many days of live logs I'll be able to get..
Very low cost and power usage solution, easy to replace or rebuild the sd-card in case of disaster.
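Rebuilding after a disaster is basically re-flashing an image, so keeping a dd image of the SD card is worth it. A hedged sketch (the device path /dev/mmcblk0 is an assumption; the demo below copies a scratch file instead of a real device so it runs anywhere):

```shell
#!/bin/bash
# Real usage on the Pi (run from another machine with the card attached):
#   sudo dd if=/dev/mmcblk0 of=pi-backup.img bs=4M status=progress
# Demonstration with a scratch file standing in for the SD card:
dd if=/dev/zero of=/tmp/fake-sd.img bs=1M count=4 2>/dev/null
dd if=/tmp/fake-sd.img of=/tmp/pi-backup.img bs=1M 2>/dev/null
# Verify the image is byte-identical to the source before trusting it.
cmp /tmp/fake-sd.img /tmp/pi-backup.img && echo "backup verified"
```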
Powering it up through PoE with an adapter bought on AliExpress.
-
All of these services running in a Raspberry pi 5 8GB hehe
You are a brave man! I have an XCP-NG 2x server set up.
I am planning a future migration of Graylog. When I first stood it up years ago I made the very, very bad error of storing all data on the VHD, so I have a 500GB drive attached to this virtual machine. As you can probably imagine, backing up the VM takes some time. I wanted to move it to an NFS share at the very least, but my drives are not very performant. It's a project that is on the radar, but I never have time.
-
If you are using the community version, that is possible with the Graylog 6.0 community version.
But if I were you, I wouldn't update to the 6.1 series. They are slowly dropping support for OpenSearch; if you check the installation guide, they even removed OpenSearch from it.
The alternative now is Graylog-datanode. Since "archive" is a paid feature, Graylog-datanode doesn't give you that option in the community version, the way OpenSearch used to.
I used to archive everything in my NAS and restore them once needed, now I can't do it anymore.
So I'm keeping only two months of logs, which is enough to reach 30GB. I have a script that cleans up for me, but not on a schedule:
#!/bin/bash
for counter in {0..38}; do
  curl -X DELETE \
    --key /path_to_your/_key/key_datanode.crt \
    --cert /path_to_your/cert/cert_datanode.crt \
    --cacert /path_to_your/ca/ca_datanode.crt \
    --pass datanode_password_you_configured \
    "https://raspberry.yourdomain.arpa:9200/pfsense_$counter"
done
In the example above, it deletes the closed indices pfsense_0 through pfsense_38 and keeps the others untouched.
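Before running destructive DELETEs like that, a dry-run variant that only prints the targets is a cheap safety net (the index prefix and hostname are just the placeholders from the script above):

```shell
#!/bin/bash
# Print the DELETE targets instead of executing them; once the list looks
# right, swap the echo back for the real curl command.
prefix="pfsense"
for counter in {0..38}; do
  echo "DELETE https://raspberry.yourdomain.arpa:9200/${prefix}_${counter}"
done
```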
I can run it against each of the index sets I have. Edit: For bigger customers with higher requirements, an NFS share would be a good idea indeed.
Perhaps a cluster of Graylog servers also..
-
Found something interesting..
IPFix is working out (@mcury seriously, you the man), but I noticed some data is not being logged. I have pfSense set up as a Tailscale subnet router. There is another subnet router within my tailnet.
I can reach devices behind the other subnet router. The LAN on that side is 192.168.8.0/24; I am connecting from 192.168.6.0/24. No IPFix logs were created. @stephenw10 Does pflow treat "VPNs" differently somehow?
-
Not VPNs generally, just Tailscale I think. The ACLs applied by Tailscale are not pf so flows are not logged from it.
-
@stephenw10 said in NAT Logs:
Not VPNs generally, just Tailscale I think. The ACLs applied by Tailscale are not pf so flows are not logged from it.
perhaps creating an OUT firewall rule on the destination LAN and tracking that ?
-
Hmm, the documentation says something a bit different. It's based on state creation, which dictates flow record creation. At least that's how I'm interpreting it.
-
Right, states are created by pf. Anything that creates a state should be logged. I would expect!
-
Traffic coming from the Tailscale network into the Tailscale interface in pfSense is not filtered by pf unless it is coming in on the Tailscale interface itself.
Yes you would only see an outbound state on the destination interface.
-
I believe pflow does not capture any VPN traffic.
I also have Wireguard peers and there is no data from IPFix.
I can try and test with OpenVPN, but I have a feeling it's going to be the same result. So far my guess is that physical interfaces are tracked but logical ones are not.
-
I would be very surprised if that was the case. pflow collects data directly from pf. And pf doesn't care about physical NICs, it filters and opens states on all interface types, physical or otherwise.
Of course I've been surprised before but....
-
hmm you may be right.
172.26.0.10 is a WireGuard peer and here is the IPFix record. So now I'm not sure why it's not coming up in my dashboard... hmm... OK, figured it out... @stephenw10 you are correct again, sir.
-
@stephenw10 One more thing.
Tailscale flows are indeed captured by IPFix.
The source is the pfsense and the destination is the LAN behind another subnet router.
-
Yup there we go. Tailscale acts more like a proxy in firewall terms. You can only see the traffic to/from it and not the source/destination inside the tailscale network.