Suricata crashing: Couldn't register igb0 with netmap: Cannot allocate memory
-
Greetings Forum. Newbie here.
My problem is that Suricata runs fine in Legacy IPS mode, but when I switch over to Inline mode it dies. The error thrown is "Cannot allocate memory". I have plenty of RAM to spare, but maybe it is not allocated correctly for Suricata in inline/netmap mode? I could use some advice here.
I've implemented pfSense as an inline transparent bridge between the WAN (igb0) and LAN (igb1) interfaces, with no IP addresses assigned. All management is on OPT1 (igb2). Suricata is set to manage the WAN interface only. After Suricata crashes, the bridge stays up and traffic passes smoothly.
Stats/Logs:
Suricata on pfSense version 2.4.2-RELEASE-p1. Hardware is a quad-core Celeron J1900 with 4 GB of RAM and Intel gigabit NICs (igb driver 2.5.3-k).
/boot/loader.conf.local:
kern.ipc.nmbclusters="131072"
hw.igb.fc_setting=0
(I've tried this with nmbclusters="1000000" with the same results after reboot.)
Error log:
5/3/2018 -- 22:12:04 - <notice>-- This is Suricata version 4.0.3 RELEASE
5/3/2018 -- 22:12:04 - <info>-- CPUs/cores online: 4
5/3/2018 -- 22:12:04 - <info>-- Netmap: Setting IPS mode
5/3/2018 -- 22:12:04 - <info>-- HTTP memcap: 67108864
5/3/2018 -- 22:12:04 - <notice>-- using flow hash instead of active packets
5/3/2018 -- 22:12:05 - <info>-- 2 rule files processed. 126 rules successfully loaded, 0 rules failed
5/3/2018 -- 22:12:05 - <info>-- Threshold config parsed: 0 rule(s) found
5/3/2018 -- 22:12:05 - <info>-- 132 signatures processed. 12 are IP-only rules, 0 are inspecting packet payload, 70 inspect application layer, 0 are decoder event only
5/3/2018 -- 22:12:05 - <info>-- fast output device (regular) initialized: alerts.log
5/3/2018 -- 22:12:05 - <info>-- http-log output device (regular) initialized: http.log
5/3/2018 -- 22:12:05 - <info>-- Using log dir /var/log/suricata/suricata_igb012748
5/3/2018 -- 22:12:05 - <info>-- using normal logging
5/3/2018 -- 22:12:05 - <info>-- storing certs in /var/log/suricata/suricata_igb012748/certs
5/3/2018 -- 22:12:05 - <info>-- Syslog output initialized
5/3/2018 -- 22:12:05 - <info>-- Going to log the md5 sum of email subject
5/3/2018 -- 22:12:05 - <info>-- Using 2 live device(s).
5/3/2018 -- 22:12:05 - <error>-- [ERRCODE: SC_ERR_NETMAP_CREATE(263)] - Couldn't register igb0 with netmap: Cannot allocate memory
5/3/2018 -- 22:12:05 - <error>-- [ERRCODE: SC_ERR_NETMAP_CREATE(263)] - Couldn't register igb0 with netmap: Cannot allocate memory
5/3/2018 -- 22:12:05 - <error>-- [ERRCODE: SC_ERR_NETMAP_CREATE(263)] - Couldn't register igb0 with netmap: Cannot allocate memory
5/3/2018 -- 22:12:05 - <info>-- Initializing PCAP ring buffer for /var/log/suricata/suricata_igb012748/log.pcap.
5/3/2018 -- 22:12:05 - <notice>-- Ring buffer initialized with 34 files.
Any help would be appreciated.
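[Editor's note: for anyone landing here with the same ENOMEM, netmap's preallocated memory pools can be inspected directly. This is an editorial diagnostic sketch using the standard netmap(4) sysctls on FreeBSD; the exact OID set varies by release.]

```shell
# Dump all of netmap's tunables and pool sizes
sysctl dev.netmap

# The pools most relevant to "Cannot allocate memory" on register:
sysctl dev.netmap.buf_num    # number of preallocated packet buffers
sysctl dev.netmap.ring_num   # number of netmap rings
sysctl dev.netmap.if_num     # number of netmap interface descriptors
```

If registering an interface pushes demand past these pools, netmap returns ENOMEM even when the machine has plenty of free RAM, since the pools are sized at module load time.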
With My Thanks
Mike D
-
If you have a dual NIC, it seems the dual Intel NIC is not natively supported, per the netmap(4) man page here: https://www.unix.com/man-page/freebsd/4/netmap/
I am not sure why the dual NIC would not be supported…just doesn't make sense.
-
Sorry, but as @NollipfSense indicated, your NIC likely does not support the netmap API, which inline IPS mode depends on. Netmap is a special kernel module that allows high-speed network connections between userland and the kernel. Inline IPS mode on Suricata needs that connection pathway, but unfortunately netmap requires support from the hardware NIC driver. As of now, just a few NIC drivers support netmap operation without problems. Some drivers work but spam the system console with warnings, while others do not work at all. Some NIC drivers behave so badly with netmap that they crash the firewall!
So, as has been said in this forum more than a hundred times, getting inline IPS mode to work flawlessly requires the right NIC and associated driver. If your hardware does not play well with the netmap kernel module, then you must abandon inline IPS mode and switch over to Legacy Mode blocking (or go buy a supported NIC). The list provided by @NollipfSense should get you started with finding a NIC that supports netmap operation on FreeBSD.
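[Editor's note: if the driver does support netmap but the module is simply exhausting its preallocated pools, the pools can be enlarged. A hedged sketch for /etc/sysctl.conf (or the corresponding loader tunables); the tunable names come from FreeBSD's netmap(4), but the values below are purely illustrative, not recommendations:]

```shell
# /etc/sysctl.conf -- enlarge netmap's preallocated pools
# (names per netmap(4); values are examples only, size them to your RAM)
dev.netmap.buf_num=163840   # packet buffers shared by all netmap ports
dev.netmap.ring_num=400     # netmap rings
dev.netmap.if_num=200       # netmap interface descriptors
```

Each buffer is roughly 2 KB by default (dev.netmap.buf_size), so buf_num translates fairly directly into wired kernel memory; raise it conservatively on a 4 GB box.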
Bill
-
Bill and Nolli ,
Thanks for that. The appliance has four embedded Ethernet ports based on the Intel I211 controller (pciconf output below). I examined the igb(4) man page and see that the chipset is supported, but it doesn't specifically mention dual/quad-port cards. Am I correct that you are referring to the lack of dual/quad being specifically listed? I notice the Netgate SG-4860 uses 2 x I211. I don't doubt the responses here, just looking for clarification.
I'll stick to Legacy mode in the meantime, as I need to move off the J1900 CPU anyway due to its lack of AES-NI support, but I'd still like to know what's wrong with the NICs in this appliance, or with Suricata/netmap running on them.
With My Thanks,
pciconf -lv looks like such:
igb0@pci0:1:0:0: class=0x020000 card=0x00008086 chip=0x15398086 rev=0x03 hdr=0x00
vendor = 'Intel Corporation'
device = 'I211 Gigabit Network Connection'
class = network
subclass = ethernet
igb1@pci0:2:0:0: class=0x020000 card=0x00008086 chip=0x15398086 rev=0x03 hdr=0x00
vendor = 'Intel Corporation'
device = 'I211 Gigabit Network Connection'
class = network
subclass = ethernet
igb2@pci0:3:0:0: class=0x020000 card=0x00008086 chip=0x15398086 rev=0x03 hdr=0x00
vendor = 'Intel Corporation'
device = 'I211 Gigabit Network Connection'
class = network
subclass = ethernet
igb3@pci0:4:0:0: class=0x020000 card=0x00008086 chip=0x15398086 rev=0x03 hdr=0x00
vendor = 'Intel Corporation'
device = 'I211 Gigabit Network Connection'
class = network
subclass = ethernet
-
I examined the igb(4) page and see the chipset is supported, but it doesn't specifically state dual/quad. Am I correct that you are referring to the lack of dual/quad being specifically listed?
Yes, it may well be that the dual/quad variety is not supported. I'm not a NIC driver expert, but I assume there is still some degree of difference between the driver for a single-port NIC and one for a multiport NIC, even if the underlying physical chipset is the same. For one thing, the multiport NIC driver has to know which "port" on the card to send and receive a given datastream from.
Suricata supports several inline IPS operating modes on various operating system platforms. The two available on FreeBSD are Netmap and IPFW divert sockets.

IPFW is the alternate firewall engine available in the FreeBSD base that pfSense is built on; the other engine, and the one pfSense actually uses, is pf (packet filter). I believe IPFW is also used as part of Captive Portal. In the distant past I tried using IPFW divert sockets mode with Suricata on pfSense, but it would not work due to some customizations made within the IPFW module at the time to support the old, limited Layer 7 inspection capability pfSense offered.

IPFW divert sockets mode is NIC-driver agnostic and thus would work with any NIC, but that mode is quite slow because it does not allow direct connections between the NIC, Suricata, and the kernel network stack. So inline IPS mode would be much slower with IPFW divert sockets than it is with Netmap.
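[Editor's note: for completeness, the divert-socket path Bill describes looks roughly like this on a stock FreeBSD box. This is an editorial sketch, not a pfSense recipe; pfSense's IPFW customizations are exactly what broke this mode in the past. Suricata's IPFW inline mode is selected with the -d option and a divert port number:]

```shell
# Load IPFW and the divert socket support
kldload ipfw ipdivert

# Send all IP traffic crossing igb0 to divert port 8000
ipfw add 100 divert 8000 ip from any to any via igb0

# Run Suricata in IPFW inline mode, attached to divert port 8000;
# verdicts (pass/drop) are returned through the same divert socket
suricata -c /usr/local/etc/suricata/suricata.yaml -d 8000
```

Every packet makes a round trip through userland here, which is why this mode is so much slower than netmap's shared-ring approach.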
Bill