Suricata InLine with igb NICs
-
I purchased a SuperMicro AOC-SGP-I4 Quad-Port 1 Gigabit Ethernet Adapter Card RJ45 from Netgate:
https://store.netgate.com/AOC-SGP-I4.aspx
It's a great ethernet adapter card. I'm looking to purchase another in the near future.
I'm presently running pfSense 2.4.4-RELEASE (amd64), FreeBSD 11.2-RELEASE-p3.
I know there has been quite a bit of conversation in the forums concerning pfSense running Suricata in Inline IPS Mode, igb(4) drivers, etc. I've practically read all of the posts that I can find. I've been reading Tuning and Troubleshooting Network Cards in the pfSense documentation and making the tweaks as suggested, however, when I check my System Logs I'm still getting the following types of entries:
kernel 363.016858 [1071] netmap_grab_packets bad pkt at 625 len 2147
kernel 362.869925 [1071] netmap_grab_packets bad pkt at 384 len 2147
In the loader.conf.local file I created, I've entered and saved the following:
kern.ipc.nmbclusters="1000000"
net.isr.dispatch=deferred
hw.igb.num_queues=0
hw.igb.fc_setting=0
hw.pci.enable_msi=0
Since those entries are in the System Logs, is there a way to map them back to what caused them? If I knew the cause, maybe I could tune my system appropriately, since the tuning I've been doing obviously isn't working as I had hoped. Any suggestions would be helpful. Thanks.
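I don't know of a direct way to map a "bad pkt" entry back to the flow that caused it, but tallying the entries by reported length can at least show whether they cluster. A sketch, run here against the sample lines above; on the firewall you'd feed it the system log instead (e.g. via `clog /var/log/system.log`, path assumed):

```shell
# Tally netmap "bad pkt" entries by reported length ($NF is the trailing "len" value).
# On pfSense you would pipe the live log in instead of the printf, e.g.:
#   clog /var/log/system.log | awk '/netmap_grab_packets bad pkt/ {c[$NF]++} END {for (l in c) print "len " l ": " c[l]}'
summary=$(printf '%s\n' \
  'kernel 363.016858 [1071] netmap_grab_packets bad pkt at 625 len 2147' \
  'kernel 362.869925 [1071] netmap_grab_packets bad pkt at 384 len 2147' \
  | awk '/netmap_grab_packets bad pkt/ {c[$NF]++} END {for (l in c) print "len " l ": " c[l]}')
echo "$summary"
```

If the lengths cluster well above a normal 1514-byte frame (as 2147 does here), that may hint that some hardware offload (TSO/LRO/checksum) is still handing oversized frames to netmap, though that's only a guess.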
-
@newuser2pfsense said in Network Card Tuning:
netmap_grab_packets bad pkt
That is an error generated by Suricata in Inline mode, which uses netmap. However, I would expect that NIC/driver to be supported.
Are you running the latest Suricata package?
Try removing or commenting out net.isr.dispatch=deferred. I'm unsure how that might play with netmap.
Steve
-
stephenw10...Thank you for the response. Yes, I'm using the latest Suricata package. Here's an update. I read the FreeBSD man page here:
https://www.freebsd.org/cgi/man.cgi?igb(4)
I used a couple of tunables listed in there.
With the information from the Netgate/pfSense documentation:
https://www.netgate.com/docs/pfsense/hardware/tuning-and-troubleshooting-network-cards.html
I used a couple of tunables listed in there.
As well, from the FreeBSD Network Performance Tuning wiki here:
https://wiki.freebsd.org/NetworkPerformanceTuning
I used a couple of tunables in there.
Now I have the following in my loader.conf.local file:
hw.igb.rxd="1024"
hw.igb.txd="1024"
hw.igb.enable_aim=1
hw.igb.num_queues=0
kern.ipc.nmbclusters="1000000"
hw.pci.enable_msi=0
hw.igb.fc_setting=0
hw.igb.max_interrupt_rate="32000"
I just went and looked, and with the above it's still throwing the netmap_grab_packets errors. Not sure what to do at this point. Any ideas?
-
Try it without any additional values in there first. The default settings should be good for most setups.
Try running Suricata in legacy mode to be sure that is what's throwing the errors.
Steve
-
There is a FreeBSD bug report on the Netmap issue here. I think this is most likely an issue with the netmap code within the FreeBSD kernel, but that's just my personal guess. Suricata makes pretty vanilla API calls to set up and utilize a Netmap pipe.
I also found another post about the netmap problems on a FreeBSD forum here. The specific case there is with Intel NICs.
-
@stephenw10 said in Network Card Tuning:
Try it without any additional values in there first. The default settings should be good for most setups.
Try running Suricata in legacy mode to be sure that is what's throwing the errors.
Steve
Steve:
So far as I know, Suricata is the only pfSense package that will utilize a Netmap pipe, and Suricata only does that in Inline IPS Mode operation. So returning Suricata to Legacy Mode will shut down all Netmap pipes in the kernel, and the error would be expected to disappear.
Suricata initiates the Netmap pipe (it actually uses two, for bi-directional traffic) by making FreeBSD system calls. The calls are to standardized Netmap API functions.
-
Yes, it is as far as I'm aware also. I expect the errors to resolve in legacy mode there; however, if they don't, that would be an interesting result.
I'm not sure there is anything else to be done until this is fixed upstream.
Steve
-
@stephenw10 said in Network Card Tuning:
Yes, it is as far as I'm aware also. I expect the errors to resolve in legacy mode there; however, if they don't, that would be an interesting result.
I'm not sure there is anything else to be done until this is fixed upstream.
Steve
I wish I was a better kernel code programmer so I could dive in and look at this, but there is a bit of magic stuff happening way down at that level of the network stack that I lack experience with.
-
stephenw10 and bmeeks...thank you both for responding. This has been ongoing for a while now in Inline IPS Mode. It doesn't happen in Legacy Mode. I've commented out all of the lines in the loader.conf.local file. I've noticed that the first release candidate of FreeBSD 12.0 has been released. I wonder how long it will take for the developers to address the bug report?
-
If you really want to get proactive you could try enabling Suricata in inline mode on FreeBSD 12.
I've never tried Suricata in FreeBSD directly. I know that the pfSense package has some patches also. Might not be that easy.
It might not be necessary though, since what you're actually testing is netmap with igb. Something else that uses netmap would probably also trigger it. Might have to make sure it does in 11.2 first.
Steve
-
Yes, as @stephenw10 suggested, you can certainly run the base Suricata package on FreeBSD 12 using Netmap. Just install the pkg from the CLI. Of course you won't have benefit of the pfSense package's GUI to configure things, but you could copy the basic suricata.yaml file from your pfSense box to the new hardware. You might have to adjust for interface names, and you would certainly have to manage the setup using the text-based tools within a CLI session. You can find the shell startup commands for the pfSense installation in this shell script: /usr/local/etc/rc.d/suricata.sh
To use Inline IPS Mode you won't need any of the pfSense-specific Suricata patches. Those only add the custom Legacy Mode blocking functionality. Inline IPS Mode works with no patches. So just load up the package straight from FreeBSD ports or even compile it yourself using the local ports tree you can install with FreeBSD.
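For reference, an inline netmap setup in a stock-FreeBSD suricata.yaml looks roughly like the following sketch (interface name is an assumption; the `^` suffix is netmap's notation for the host-stack side of the interface, which is how the two pipes of the bi-directional pair get set up):

```yaml
netmap:
  # NIC side: frames arriving on igb0 are inspected, then copied to the host stack
  - interface: igb0
    copy-mode: ips
    copy-iface: igb0^
  # Host-stack side: outbound frames are inspected, then copied back to the NIC
  - interface: igb0^
    copy-mode: ips
    copy-iface: igb0
```

You would then start it with something like `suricata --netmap -c /usr/local/etc/suricata/suricata.yaml` (config path assumed).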
The culprit in this bug is Netmap within the FreeBSD kernel, perhaps in combination with issues in specific NIC drivers as well. Unless the FreeBSD 12 source tree shows some commits related to Netmap, I would not get my hopes up, though. There seems to be not a whole lot of interest in addressing the Netmap issues. Maybe the user base is not large enough to warrant the developer time required to track down the issues.
-
stephenw10 and bmeeks...without an extra computer to test this on, I guess I'll have to wait until the developers take an interest in researching the issue and providing a fix. bmeeks - has anyone from pfSense/Netgate entered a bug report with FreeBSD by chance? Maybe another bug report might get the ball rolling specifically from a company instead of an individual. A company might be able to persuade the FreeBSD developers to take a more proactive approach. I don't know, just a thought.
-
@newuser2pfsense said in Network Card Tuning:
stephenw10 and bmeeks...without an extra computer to test this on, I guess I'll have to wait until the developers take an interest in researching the issue and providing a fix. bmeeks - has anyone from pfSense/Netgate entered a bug report with FreeBSD by chance? Maybe another bug report might get the ball rolling specifically from a company instead of an individual. A company might be able to persuade the FreeBSD developers to take a more proactive approach. I don't know, just a thought.
I am not affiliated with Netgate, so I wouldn't know about any possible bug report about Netmap submitted by them.
-
I have not seen anything.
Generally speaking, submitting additional bug reports for an issue that already has an open report is, at best, frowned upon! Better to add to existing bug reports. Even better to add to them with actual useful data.
FreeBSD devs will rightly want to see any issues replicated in FreeBSD directly. And they will want to see that done in a current version. I would suggest that is 12 right now, though maybe 11-stable might be acceptable.
Demonstrating that the bug exists in current FreeBSD and giving detailed steps to replicate it is the best way to attract developer attention.
Steve
-
stephenw10 and bmeeks...I appreciate all of your help. Unfortunately, I only have one computer like my pfSense instance, and of course it's being used for pfSense, so I wouldn't be able to replicate the issue to provide more information to the developers. I guess I had hoped that others who might see this post would have additional input to the FreeBSD bug report. Thanks.
-
You can only do what you can do. There may well be others who can do more. Inline Suricata with igb is not that uncommon.
Steve
-
I'm in a position to test this (and I've also been having issues with igb and em drivers + netmap).
I'm using a Jetway with I219-LM and I211-AT chips and it looks like the Supermicro is i350.
Would testing this with the lower-end chips be useful, or do we expect that the i350s would work where the i2XX wouldn't?
If it would be useful, I can certainly put FreeBSD 12 and Suricata on a machine with i2XX...
-
Hi boobletins...thank you for the response. That's very kind of you to offer to test this. I would only know the i350 as that's the card that I'm using. I wish I could offer more. My apologies.
-
Can you ssh and give me the result of ifconfig on the interface in question?
I just discovered, after lots of annoyance, that IPv6 Transmit Checksums were not disabled via the GUI when they appeared to be. Manually turning it off solved this issue for me (at least I can now complete speedtests over the last hour or so with Suricata in IPS mode).
You might want to double check that
TXCSUM_IPV6
does not appear in your
ifconfig igb0
output (or the interface in question).
If it does, you might try:
ifconfig igb0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
then put Suricata back into IPS mode, restart Suricata, but don't reboot pfSense. See if your connection is stable. If you reboot pfSense, txcsum6 may reappear. I don't know where to permanently disable it.
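A quick way to script that check (the options line below is a sample of what `ifconfig igb0` can print; on the firewall you'd pipe the real `ifconfig igb0` output into grep instead):

```shell
# Sample ifconfig options line (yours will differ); replace the printf with
# `ifconfig igb0` when running this on the firewall itself.
opts='options=5400b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,NETMAP,TXCSUM_IPV6>'
if printf '%s\n' "$opts" | grep -q 'TXCSUM_IPV6'; then
    status='TXCSUM_IPV6 is still set'
else
    status='TXCSUM_IPV6 is off'
fi
echo "$status"
```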
-
@boobletins said in Suricata InLine with igb NICs:
Can you ssh and give me the result of ifconfig on the interface in question?
I just discovered after lots of annoyance that IPv6 Transmit Checksums were not disabled via the GUI when they appeared to be. Manually configuring it off solved this issue for me (at least I can now complete speedtests over the last hour or so with Suricata in IPS mode).
You might want to double check that
TXCSUM_IPV6
does not appear in your
ifconfig igb0
output (or the interface in question).
If it does, you might try:
ifconfig igb0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
then put Suricata back into IPS mode, restart Suricata, but don't reboot pfSense. See if your connection is stable. If you reboot pfSense, txcsum6 may reappear. I don't know where to permanently disable it.
This issue is affecting more than just IPS mode in Suricata. It is impacting IPv6 connectivity for some folks on their WAN. I know there is a pfSense Redmine issue about it. The problem is within FreeBSD itself, I think. It's not a bug in Suricata; Suricata is a victim in this case. I think I've seen some posts about this in the IPv6 sub-forum here, and I know I've seen a Redmine bug on it, too. I just had not connected it directly to the IPS/Netmap problem. Good detective work!
-
boobletins...Yes, TXCSUM_IPV6 is in the ifconfig output on my WAN interface; it's actually on all 4 interfaces on my SuperMicro ethernet adapter. If we knew where to disable it, could we put that setting in the loader.conf.local file? Then it would stay disabled after a pfSense restart. I don't know, just a thought.
bmeeks...I don't have IPv6 enabled anywhere on my pfSense instance, at least I don't believe so.
-
@boobletins said in Suricata InLine with igb NICs:
Manually configuring it off solved this issue for me
It solved the netmap errors? Did you see any other errors that lead you to try this?
Are you actually using IPv6?
Steve
-
For me, I have IPv6 blocked on my WAN as a rule in Firewall > Rules; it's practically at the top. However, I noticed a mix of IPv4 and IPv6 addresses in my System Logs > Firewall > Normal View tab. As well, I currently have Suricata running in Legacy Mode and have a ton of SURICATA zero length padN option alerts with IPv6 addresses in the Alerts tab, and all IPv6 addresses in the Blocks tab. Maybe I'm completely wrong, but if I have IPv6 blocked on the WAN, should IPv6 addresses be showing up in the logs at all?
-
@newuser2pfsense said in Suricata InLine with igb NICs:
For me, I have IPv6 blocked on my WAN as a rule in Firewall > Rules; it's practically at the top. However, I noticed a mix of IPv4 and IPv6 addresses in my System Logs > Firewall > Normal View tab. As well, I currently have Suricata running in Legacy Mode and have a ton of SURICATA zero length padN option alerts with IPv6 addresses in the Alerts tab, and all IPv6 addresses in the Blocks tab. Maybe I'm completely wrong, but if I have IPv6 blocked on the WAN, should IPv6 addresses be showing up in the logs at all?
You will likely have IPv6 Link-Local addresses created on your interfaces by default. I have them on all of my local firewall interfaces, including my WAN even though my ISP does not provide any type of IPv6 connectivity.
A typical Windows domain will spew a lot of IPv6 stuff by default. In fact, IPv6 is a preferred communications route for Windows domain traffic unless it is explicitly disabled. Most of that will be via link-local addresses.
-
I suppose "solved" is a strong word. What I should have said is that before, I couldn't complete a single speedtest, and now I can complete an arbitrary number without netmap errors. Suricata also lasted through the night in IPS mode on my LAN interface (igb) without crashing, which is extremely rare. I won't know if it's truly solved until it lasts through more like a week or a month.
I can reliably crash the interface by enabling TXCSUM_IPV6 and running a speedtest.
I'm not a pfSense expert -- so when you ask if I'm using IPv6, all I know to say is that I have "Allow IPv6" enabled in the UI, and I see a smattering of IPv6 IPs in both Suricata Alerts and states (the majority are IPv4).
Here is what I settled on for my loader.conf.local after referring to these links:
https://calomel.org/freebsd_network_tuning.html
https://suricata.readthedocs.io/en/suricata-4.0.5/performance/packet-capture.html#rss
kern.ipc.nmbclusters="1048576"
hw.pci.enable_msix=1
hw.em.msix=1
hw.em.smart_pwr_down=0
hw.em.num_queues=1 # https://suricata.readthedocs.io/en/suricata-4.0.5/performance/packet-capture.html#rss
# below this line is all from: https://calomel.org/freebsd_network_tuning.html
if_igb_load="YES"
hw.igb.enable_msix="1"
hw.igb.enable_aim="1"
hw.igb.rx_process_limit="100" # default
hw.igb.num_queues="3" # (default 0, queues equal the number of CPU real cores if queues available on card)
hw.igb.max_interrupt_rate="16000" # double default
coretemp_load="YES"
hw.intr_storm_threshold="9000" # default
if_em_load="YES"
hw.em.enable_msix="1"
hw.em.msix=1
autoboot_delay="-1"
net.isr.maxthreads="-1"
net.isr.bindthreads="1" # (default 0, runs randomly on any one cpu core)
# Larger buffers and TCP Large Window Extensions
net.inet.tcp.rfc1323=1
net.inet.tcp.recvbuf_inc=65536 # (default 16384)
net.inet.tcp.sendbuf_inc=65536 # (default 8192)
net.inet.tcp.sendspace=65536 # (default 32768)
net.inet.tcp.mssdflt=1460 # Option 1 (default 536)
net.tcp.minmss=536 # (default 216)
# syn protection
net.inet.tcp.syncache.rexmtlimit=0 # (default 3)
-
@stephenw10 said in Suricata InLine with igb NICs:
Did you see any other errors that lead you to try this?
No -- I didn't see any specific IPv6 errors. I just started investigating the interface settings using information from here: https://calomel.org/freebsd_network_tuning.html and noticed that ifconfig showed TXCSUM_IPV6 enabled when I thought it was supposed to be disabled. Disabling it seems to have created a more stable interface with netmap enabled.
Previously I would receive two types of netmap errors: "bad pkt" errors and "netmap_transmit" errors, e.g.
[2925] netmap_transmit igb0 full hwcur 203 hwtail 204 qlen 1022 len 1514 m 0xfffff8000df20500
[1071] netmap_grab_packets bad pkt at 419 len 2167
I've tried the same -txcsum6 change on my WAN (em0) interface, but I still get bad packets there. I don't know if that has to do with the lack of msix support on that interface or my configuration settings. Still trying to figure that one out.
Here's my dmesg output for em0 and igb0 in case that helps.
em0: <Intel(R) PRO/1000 Network Connection 7.6.1-k> mem 0xdf100000-0xdf11ffff irq 16 at device 31.6 on pci0
em0: Using an MSI interrupt
em0: Ethernet address: 00:30:18:ce:19:cf
em0: netmap queues/slots: TX 1/1024, RX 1/1024
ses0 at ahciem0 bus 0 scbus6 target 0 lun 0
em0: link state changed to UP
em0: promiscuous mode enabled
igb0: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0xe000-0xe01f mem 0xdf000000-0xdf01ffff,0xdf020000-0xdf023fff irq 17 at device 0.0 on pci1
igb0: Using MSIX interrupts with 3 vectors
igb0: Ethernet address: 00:30:18:ce:19:d0
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: netmap queues/slots: TX 2/1024, RX 2/1024
igb0: link state changed to UP
igb0: permanently promiscuous mode enabled
-
@newuser2pfsense said in Suricata InLine with igb NICs:
I'm wondering if we could put that information in the loader.conf.local file?
It looks like the right place to put this is described here:
https://www.netgate.com/docs/pfsense/development/executing-commands-at-boot-time.html
I tried to offer an example but Akismet thinks it's spam. Let's see if this post will go through?
-
boobletins...I read through the link you provided but I'm not sure myself on what the syntax should be to add to the loader.conf.local file. I was looking to use the loader.conf.local file for an interim fix until the FreeBSD developers are able to solve the netmap issue(s) in a future release.
-
Look closely at https://www.netgate.com/docs/pfsense/packages/package-list.html there is a package that might help with running a command on system startup.
-
Grimson...You're right. I didn't see it. Shellcmd - The shellcmd utility is used to manage commands on system startup. Now we just need the syntax to use.
-
Shellcmd just runs commands like they would be run at the command line. The only difference is you often need the complete path to the command, as it runs as a different user. But you could use:
ifconfig igb0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
That should run OK.
Steve
-
I noticed that with the release of pfSense 2.4.4, there was a dramatic increase in Netmap errors (https://forum.netgate.com/topic/136191/netmap-alerts-gotten-worst-with-2-4-4). I also have an igb network card.
The major headache is this issue turns into finger pointing...pfSense says it's FreeBSD 11.2, Netmap says it's Suricata, FreeBSD says it's the network card, and Suricata says it's Netmap.
My feeling is, since we're ultimately using pfSense, it's pfSense's responsibility to ensure that we achieve the synergy of a robust firewall with all parts working seamlessly. Inline mode is an important part of a firewall's intrusion detection and prevention system in that it provides more efficient screening.
-
It looks like you solved the issue in that thread, is that the case? (I ask because I'm interested in knowing if I've solved this issue for myself as well.)
If you're game, could you give me the output from the following shell commands (case sensitive):
ifconfig igb0 | grep CSUM
sysctl -a | grep igb
sysctl -a | grep netmap
- How many CPU cores do you have?
- Is hyperthreading enabled?
- How much RAM do you have?
- Are you running Suricata on more than 1 interface? (If so, what's the second interface? Also: run the shell commands above on that interface)
I've gone a few days now without netmap errors on either my em0 or igb0 interface with Suricata in inline IPS mode and 2 speedtests / hour. I'm becoming more confident that I have a working configuration, but if we can eliminate them from yours as well that'd be some welcome evidence...
-
@boobletins said in Suricata InLine with igb NICs:
ifconfig igb0 | grep CSUM
Shell Output - ifconfig igb0 | grep CSUM
options=5400b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,NETMAP,TXCSUM_IPV6>
Shell Output - sysctl -a | grep igb
device igb hw.igb.tx_process_limit: -1 hw.igb.rx_process_limit: 100 hw.igb.num_queues: 0 hw.igb.header_split: 0 hw.igb.max_interrupt_rate: 8000 hw.igb.enable_msix: 1 hw.igb.enable_aim: 1 hw.igb.txd: 1024 hw.igb.rxd: 1024 dev.igb.1.host.header_redir_missed: 0 dev.igb.1.host.serdes_violation_pkt: 0 dev.igb.1.host.length_errors: 0 dev.igb.1.host.tx_good_bytes: 5014864175 dev.igb.1.host.rx_good_bytes: 344809214 dev.igb.1.host.breaker_tx_pkt_drop: 0 dev.igb.1.host.tx_good_pkt: 63 dev.igb.1.host.breaker_rx_pkt_drop: 0 dev.igb.1.host.breaker_rx_pkts: 0 dev.igb.1.host.rx_pkt: 77 dev.igb.1.host.host_tx_pkt_discard: 0 dev.igb.1.host.breaker_tx_pkt: 0 dev.igb.1.interrupts.rx_overrun: 0 dev.igb.1.interrupts.rx_desc_min_thresh: 0 dev.igb.1.interrupts.tx_queue_min_thresh: 0 dev.igb.1.interrupts.tx_queue_empty: 4315841 dev.igb.1.interrupts.tx_abs_timer: 0 dev.igb.1.interrupts.tx_pkt_timer: 4315904 dev.igb.1.interrupts.rx_abs_timer: 2921232 dev.igb.1.interrupts.rx_pkt_timer: 2921155 dev.igb.1.interrupts.asserts: 8803973 dev.igb.1.mac_stats.tso_ctx_fail: 0 dev.igb.1.mac_stats.tso_txd: 0 dev.igb.1.mac_stats.tx_frames_1024_1522: 3233544 dev.igb.1.mac_stats.tx_frames_512_1023: 62481 dev.igb.1.mac_stats.tx_frames_256_511: 72052 dev.igb.1.mac_stats.tx_frames_128_255: 119162 dev.igb.1.mac_stats.tx_frames_65_127: 781667 dev.igb.1.mac_stats.tx_frames_64: 46998 dev.igb.1.mac_stats.mcast_pkts_txd: 269918 dev.igb.1.mac_stats.bcast_pkts_txd: 118 dev.igb.1.mac_stats.good_pkts_txd: 4315904 dev.igb.1.mac_stats.total_pkts_txd: 4315904 dev.igb.1.mac_stats.total_octets_txd: 5014886629 dev.igb.1.mac_stats.good_octets_txd: 5014885349 dev.igb.1.mac_stats.total_octets_recvd: 344809463 dev.igb.1.mac_stats.good_octets_recvd: 344808248 dev.igb.1.mac_stats.rx_frames_1024_1522: 49390 dev.igb.1.mac_stats.rx_frames_512_1023: 61271 dev.igb.1.mac_stats.rx_frames_256_511: 60178 dev.igb.1.mac_stats.rx_frames_128_255: 132406 dev.igb.1.mac_stats.rx_frames_65_127: 2127900 dev.igb.1.mac_stats.rx_frames_64: 490087 
dev.igb.1.mac_stats.mcast_pkts_recvd: 0 dev.igb.1.mac_stats.bcast_pkts_recvd: 4 dev.igb.1.mac_stats.good_pkts_recvd: 2921232 dev.igb.1.mac_stats.total_pkts_recvd: 2921232 dev.igb.1.mac_stats.mgmt_pkts_txd: 0 dev.igb.1.mac_stats.mgmt_pkts_drop: 0 dev.igb.1.mac_stats.mgmt_pkts_recvd: 0 dev.igb.1.mac_stats.unsupported_fc_recvd: 0 dev.igb.1.mac_stats.xoff_txd: 0 dev.igb.1.mac_stats.xoff_recvd: 0 dev.igb.1.mac_stats.xon_txd: 0 dev.igb.1.mac_stats.xon_recvd: 0 dev.igb.1.mac_stats.coll_ext_errs: 0 dev.igb.1.mac_stats.tx_no_crs: 0 dev.igb.1.mac_stats.alignment_errs: 0 dev.igb.1.mac_stats.crc_errs: 0 dev.igb.1.mac_stats.recv_errs: 0 dev.igb.1.mac_stats.recv_jabber: 0 dev.igb.1.mac_stats.recv_oversize: 0 dev.igb.1.mac_stats.recv_fragmented: 0 dev.igb.1.mac_stats.recv_undersize: 0 dev.igb.1.mac_stats.recv_no_buff: 0 dev.igb.1.mac_stats.recv_length_errors: 0 dev.igb.1.mac_stats.missed_packets: 0 dev.igb.1.mac_stats.defer_count: 0 dev.igb.1.mac_stats.sequence_errors: 0 dev.igb.1.mac_stats.symbol_errors: 0 dev.igb.1.mac_stats.collision_count: 0 dev.igb.1.mac_stats.late_coll: 0 dev.igb.1.mac_stats.multiple_coll: 0 dev.igb.1.mac_stats.single_coll: 0 dev.igb.1.mac_stats.excess_coll: 0 dev.igb.1.queue1.lro_flushed: 0 dev.igb.1.queue1.lro_queued: 0 dev.igb.1.queue1.rx_bytes: 152608531 dev.igb.1.queue1.rx_packets: 1226723 dev.igb.1.queue1.rxd_tail: 994 dev.igb.1.queue1.rxd_head: 995 dev.igb.1.queue1.tx_packets: 257 dev.igb.1.queue1.no_desc_avail: 0 dev.igb.1.queue1.txd_tail: 339 dev.igb.1.queue1.txd_head: 339 dev.igb.1.queue1.interrupt_rate: 76923 dev.igb.1.queue0.lro_flushed: 0 dev.igb.1.queue0.lro_queued: 0 dev.igb.1.queue0.rx_bytes: 180516276 dev.igb.1.queue0.rx_packets: 1694509 dev.igb.1.queue0.rxd_tail: 812 dev.igb.1.queue0.rxd_head: 813 dev.igb.1.queue0.tx_packets: 4315647 dev.igb.1.queue0.no_desc_avail: 0 dev.igb.1.queue0.txd_tail: 442 dev.igb.1.queue0.txd_head: 442 dev.igb.1.queue0.interrupt_rate: 90909 dev.igb.1.fc_low_water: 29480 dev.igb.1.fc_high_water: 29488 
dev.igb.1.rx_buf_alloc: 34 dev.igb.1.tx_buf_alloc: 14 dev.igb.1.extended_int_mask: 2147484419 dev.igb.1.interrupt_mask: 4 dev.igb.1.rx_control: 67141658 dev.igb.1.device_control: 1087373896 dev.igb.1.watchdog_timeouts: 0 dev.igb.1.rx_overruns: 0 dev.igb.1.tx_dma_fail: 0 dev.igb.1.mbuf_defrag_fail: 0 dev.igb.1.link_irq: 2 dev.igb.1.dropped: 0 dev.igb.1.tx_processing_limit: -1 dev.igb.1.rx_processing_limit: 100 dev.igb.1.fc: 3 dev.igb.1.enable_aim: 1 dev.igb.1.nvm: -1 dev.igb.1.%parent: pci3 dev.igb.1.%pnpinfo: vendor=0x8086 device=0x10a7 subvendor=0x8086 subdevice=0x10a7 class=0x020000 dev.igb.1.%location: slot=0 function=1 dbsf=pci0:3:0:1 dev.igb.1.%driver: igb dev.igb.1.%desc: Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k dev.igb.0.host.header_redir_missed: 0 dev.igb.0.host.serdes_violation_pkt: 0 dev.igb.0.host.length_errors: 0 dev.igb.0.host.tx_good_bytes: 702511124 dev.igb.0.host.rx_good_bytes: 8859910607 dev.igb.0.host.breaker_tx_pkt_drop: 0 dev.igb.0.host.tx_good_pkt: 389 dev.igb.0.host.breaker_rx_pkt_drop: 0 dev.igb.0.host.breaker_rx_pkts: 0 dev.igb.0.host.rx_pkt: 179 dev.igb.0.host.host_tx_pkt_discard: 0 dev.igb.0.host.breaker_tx_pkt: 0 dev.igb.0.interrupts.rx_overrun: 0 dev.igb.0.interrupts.rx_desc_min_thresh: 0 dev.igb.0.interrupts.tx_queue_min_thresh: 0 dev.igb.0.interrupts.tx_queue_empty: 8008878 dev.igb.0.interrupts.tx_abs_timer: 0 dev.igb.0.interrupts.tx_pkt_timer: 8009267 dev.igb.0.interrupts.rx_abs_timer: 9004187 dev.igb.0.interrupts.rx_pkt_timer: 9004008 dev.igb.0.interrupts.asserts: 18858568 dev.igb.0.mac_stats.tso_ctx_fail: 0 dev.igb.0.mac_stats.tso_txd: 0 dev.igb.0.mac_stats.tx_frames_1024_1522: 49679 dev.igb.0.mac_stats.tx_frames_512_1023: 59868 dev.igb.0.mac_stats.tx_frames_256_511: 65957 dev.igb.0.mac_stats.tx_frames_128_255: 117544 dev.igb.0.mac_stats.tx_frames_65_127: 4903787 dev.igb.0.mac_stats.tx_frames_64: 2812432 dev.igb.0.mac_stats.mcast_pkts_txd: 217 dev.igb.0.mac_stats.bcast_pkts_txd: 321 dev.igb.0.mac_stats.good_pkts_txd: 
8009267 dev.igb.0.mac_stats.total_pkts_txd: 8009267 dev.igb.0.mac_stats.total_octets_txd: 702511679 dev.igb.0.mac_stats.good_octets_txd: 702510340 dev.igb.0.mac_stats.total_octets_recvd: 8859907035 dev.igb.0.mac_stats.good_octets_recvd: 8859915684 dev.igb.0.mac_stats.rx_frames_1024_1522: 5630206 dev.igb.0.mac_stats.rx_frames_512_1023: 67795 dev.igb.0.mac_stats.rx_frames_256_511: 155128 dev.igb.0.mac_stats.rx_frames_128_255: 445051 dev.igb.0.mac_stats.rx_frames_65_127: 765396 dev.igb.0.mac_stats.rx_frames_64: 1940609 dev.igb.0.mac_stats.mcast_pkts_recvd: 218995 dev.igb.0.mac_stats.bcast_pkts_recvd: 47673 dev.igb.0.mac_stats.good_pkts_recvd: 9004185 dev.igb.0.mac_stats.total_pkts_recvd: 9004224 dev.igb.0.mac_stats.mgmt_pkts_txd: 0 dev.igb.0.mac_stats.mgmt_pkts_drop: 0 dev.igb.0.mac_stats.mgmt_pkts_recvd: 0 dev.igb.0.mac_stats.unsupported_fc_recvd: 0 dev.igb.0.mac_stats.xoff_txd: 0 dev.igb.0.mac_stats.xoff_recvd: 1 dev.igb.0.mac_stats.xon_txd: 0 dev.igb.0.mac_stats.xon_recvd: 1 dev.igb.0.mac_stats.coll_ext_errs: 0 dev.igb.0.mac_stats.tx_no_crs: 0 dev.igb.0.mac_stats.alignment_errs: 0 dev.igb.0.mac_stats.crc_errs: 0 dev.igb.0.mac_stats.recv_errs: 0 dev.igb.0.mac_stats.recv_jabber: 0 dev.igb.0.mac_stats.recv_oversize: 0 dev.igb.0.mac_stats.recv_fragmented: 0 dev.igb.0.mac_stats.recv_undersize: 0 dev.igb.0.mac_stats.recv_no_buff: 0 dev.igb.0.mac_stats.recv_length_errors: 0 dev.igb.0.mac_stats.missed_packets: 0 dev.igb.0.mac_stats.defer_count: 0 dev.igb.0.mac_stats.sequence_errors: 0 dev.igb.0.mac_stats.symbol_errors: 0 dev.igb.0.mac_stats.collision_count: 0 dev.igb.0.mac_stats.late_coll: 0 dev.igb.0.mac_stats.multiple_coll: 0 dev.igb.0.mac_stats.single_coll: 0 dev.igb.0.mac_stats.excess_coll: 0 dev.igb.0.queue1.lro_flushed: 0 dev.igb.0.queue1.lro_queued: 0 dev.igb.0.queue1.rx_bytes: 0 dev.igb.0.queue1.rx_packets: 2432 dev.igb.0.queue1.rxd_tail: 35 dev.igb.0.queue1.rxd_head: 36 dev.igb.0.queue1.tx_packets: 1 dev.igb.0.queue1.no_desc_avail: 0 dev.igb.0.queue1.txd_tail: 0 
dev.igb.0.queue1.txd_head: 0 dev.igb.0.queue1.interrupt_rate: 16129 dev.igb.0.queue0.lro_flushed: 0 dev.igb.0.queue0.lro_queued: 0 dev.igb.0.queue0.rx_bytes: 0 dev.igb.0.queue0.rx_packets: 7244 dev.igb.0.queue0.rxd_tail: 180 dev.igb.0.queue0.rxd_head: 181 dev.igb.0.queue0.tx_packets: 9386 dev.igb.0.queue0.no_desc_avail: 0 dev.igb.0.queue0.txd_tail: 755 dev.igb.0.queue0.txd_head: 755 dev.igb.0.queue0.interrupt_rate: 16129 dev.igb.0.fc_low_water: 29480 dev.igb.0.fc_high_water: 29488 dev.igb.0.rx_buf_alloc: 34 dev.igb.0.tx_buf_alloc: 14 dev.igb.0.extended_int_mask: 2147484419 dev.igb.0.interrupt_mask: 4 dev.igb.0.rx_control: 67141658 dev.igb.0.device_control: 1490027080 dev.igb.0.watchdog_timeouts: 0 dev.igb.0.rx_overruns: 0 dev.igb.0.tx_dma_fail: 0 dev.igb.0.mbuf_defrag_fail: 0 dev.igb.0.link_irq: 70 dev.igb.0.dropped: 0 dev.igb.0.tx_processing_limit: -1 dev.igb.0.rx_processing_limit: 100 dev.igb.0.fc: 3 dev.igb.0.enable_aim: 1 dev.igb.0.nvm: -1 dev.igb.0.%parent: pci3 dev.igb.0.%pnpinfo: vendor=0x8086 device=0x10a7 subvendor=0x8086 subdevice=0x10a7 class=0x020000 dev.igb.0.%location: slot=0 function=0 dbsf=pci0:3:0:0 dev.igb.0.%driver: igb dev.igb.0.%desc: Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k dev.igb.%parent:
Shell Output - sysctl -a | grep netmap
device netmap dev.netmap.ixl_rx_miss_bufs: 0 dev.netmap.ixl_rx_miss: 0 dev.netmap.iflib_rx_miss_bufs: 0 dev.netmap.iflib_rx_miss: 0 dev.netmap.iflib_crcstrip: 1 dev.netmap.bridge_batch: 1024 dev.netmap.default_pipes: 0 dev.netmap.priv_buf_num: 4098 dev.netmap.priv_buf_size: 2048 dev.netmap.buf_curr_num: 163840 dev.netmap.buf_num: 163840 dev.netmap.buf_curr_size: 4096 dev.netmap.buf_size: 4096 dev.netmap.priv_ring_num: 4 dev.netmap.priv_ring_size: 20480 dev.netmap.ring_curr_num: 200 dev.netmap.ring_num: 200 dev.netmap.ring_curr_size: 36864 dev.netmap.ring_size: 36864 dev.netmap.priv_if_num: 1 dev.netmap.priv_if_size: 1024 dev.netmap.if_curr_num: 100 dev.netmap.if_num: 100 dev.netmap.if_curr_size: 1024
dev.netmap.if_size: 1024 dev.netmap.generic_rings: 1 dev.netmap.generic_ringsize: 1024 dev.netmap.generic_mit: 100000 dev.netmap.admode: 0 dev.netmap.fwd: 0 dev.netmap.flags: 0 dev.netmap.adaptive_io: 0 dev.netmap.txsync_retry: 2 dev.netmap.no_pendintr: 1 dev.netmap.mitigate: 1 dev.netmap.no_timestamp: 0 dev.netmap.verbose: 0 dev.netmap.ix_rx_miss_bufs: 0 dev.netmap.ix_rx_miss: 0 dev.netmap.ix_crcstrip: 0
Every couple of days I get one or two netmap bad packet alerts, even after increasing this - netmap.buf_size: 4096. I run both Suricata and Snort on WAN and LAN; however, I only enable blocking on Suricata WAN...all else are disabled. I have 8GB RAM; however, I can only use 6GB, as a failed processor killed a row/channel in my HP Pavilion 6242n trash find that I converted into a pfSense firewall.
-
Under System / Advanced / Networking, is "Allow IPv6" checked?
And how many CPU cores? Is hyperthreading enabled?
-
@boobletins said in Suricata InLine with igb NICs:
Under System / Advanced / Networking, is "Allow IPv6" checked?
And how many CPU cores? Is hyperthreading enabled?
Yes...Allow IPv6 is checked...CPU Type AMD Athlon(tm) 64 X2 Dual Core Processor 4800+
2 CPUs: 1 package(s) x 2 core(s)
AES-NI CPU Crypto: No
Not sure where to check for hyperthreading...now I will disable IPv6...thought I did.
-
So here are some initial suggestions. Please keep in mind that I've been working on this for ~1 week (in other words: not long), and I'm not a FreeBSD, pfSense, or Suricata expert.
Start by making a backup of your configuration.
Do these first:
My understanding is that flow control should be off on any netmap interface. You have bi-directional flow control enabled:
dev.igb.0.fc: 3
Disable flow control on all active interfaces using system tunables. Set dev.igb.0.fc=0 (and dev.igb.1.fc=0).
Actively set energy efficient ethernet to disabled:
dev.igb.0.eee_disabled=1
Actively force TXCSUM_IPV6 off by adding the following to config.xml in a shellcmd tag:
ifconfig igb0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
(see above in this thread for a link on where/how to do that).
Edit:
To be clear: anywhere I have a command that says "igb0" or "igb.0", you will want to duplicate that for igb1 and any other interface you're running netmap on. So you will need 2 shellcmd lines in config.xml, two new system tunables for flow control, etc.
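As a sketch, the two config.xml entries (placed inside the existing &lt;system&gt; section, per the boot-time commands doc linked earlier) might look like this; the full ifconfig path and the interface names are assumptions for your setup:

```xml
<shellcmd>/sbin/ifconfig igb0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso</shellcmd>
<shellcmd>/sbin/ifconfig igb1 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso</shellcmd>
```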
Consider changing later:
Set the rx processing limit:
dev.igb.0.rx_processing_limit=-1
It looks like your txd and rxd are both set to 1024 currently; I suggest you move those to 4096:
hw.igb.txd=4096
hw.igb.rxd=4096
By changing your txd and rxd we may need to revisit your netmap buf/ring (memory) settings.
We may also revisit your interrupt and queue settings.
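Pulled together, the values above would look roughly like this (a sketch using the names already quoted in this thread; note the dev.* entries are runtime sysctls, so they belong under System > Advanced > System Tunables rather than in loader.conf.local):

```
# /boot/loader.conf.local (boot-time tunables)
hw.igb.txd=4096
hw.igb.rxd=4096

# System > Advanced > System Tunables (runtime sysctls)
dev.igb.0.fc=0
dev.igb.1.fc=0
dev.igb.0.eee_disabled=1
dev.igb.1.eee_disabled=1
dev.igb.0.rx_processing_limit=-1
dev.igb.1.rx_processing_limit=-1
```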
-
It would be great if you could let me know what happens after the initial changes -- if you continue to get netmap errors or not.
If you do, don't jump right to the 2nd section of changes; we should verify that the changes we made above took properly. I learned the hard way that I was putting some settings in the wrong places.
-
boobletins...I apologize for not getting back sooner; other projects. I added the shellcmd line to the /cf/conf/config.xml file as you suggested. I re-enabled Suricata in Inline IPS Mode and restarted pfSense. I ran ifconfig against all four ethernet interfaces on my SuperMicro adapter and TXCSUM_IPV6 was not listed.
One thing I find interesting is in the Services > Suricata > Alerts tab, all of the text is now black in color when before making the above change it was all red in color in Inline IPS Mode. As well, there are no entries in the Blocks tab when before making the change it was automagically populated with over 300 blocked IP addresses in Inline IPS Mode. I don't know if this is normal or not. I didn't change any of the Suricata WAN Categories.
-
If you are running Suricata in inline mode, you will not see blocked IP addresses in the Blocks tab, as any traffic that matches your "drop" rules is automatically intercepted and dropped (as opposed to being logged first and then IP-banned, as in Legacy mode).
The red text in the Alerts tab is letting you know that the traffic was indeed intercepted and dropped (since you don't have any information in the Block tab anymore).
That you are missing both blocks and red text means that either no traffic has conformed to your block rules yet, or something has gone wrong.
Double check which mode Suricata is running in. Then double check that you have some drop rules defined.
But originally the issue was netmap, yes? If so, have you seen any netmap errors? Can you complete a speedtest with Suricata enabled in inline mode now?