Netgate Discussion Forum

    Suricata Inline on various hardware?

IDS/IPS
jeffhammett:

      Can anyone confirm whether or not Suricata inline mode works on APU14D systems? I believe they have Realtek RTL8111E network cards.

Also, can anyone confirm whether or not Suricata inline mode works on the various Netgate/ADI systems being sold? I believe all have Intel NICs.

doktornotor:

        https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4

        
        SUPPORTED DEVICES
             netmap natively supports the following devices:
        
             On FreeBSD: em(4), igb(4), ixgbe(4), lem(4), re(4).
        
             On Linux e1000(4), e1000e(4), igb(4), ixgbe(4), mlx4(4), forcedeth(4),
             r8169(4).
        
        
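As a quick sanity check against the list above: the FreeBSD drivers with native netmap support print a "netmap queues/slots" line when they attach, so grepping the boot messages tells you whether your NIC came up in native netmap mode. A minimal sketch; the sample line is a captured boot message, since on a live box you would simply pipe in `dmesg`:

```shell
# Hedged sketch: on a live pfSense/FreeBSD box you would run
# `dmesg | grep 'netmap queues'`. A captured boot line stands in
# here so the check is reproducible.
sample='igb0: netmap queues/slots: TX 2/1024, RX 2/1024'

if printf '%s\n' "$sample" | grep -q 'netmap queues'; then
    echo "native netmap: yes"
else
    echo "native netmap: no"
fi
```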
repne:

I'll drop in here out of curiosity. Does anyone know what the netmap support status is for WiFi network devices?
That document only mentions drivers which, I believe, are for wired links only.

          regards

allu:

I have a Shuttle DS68U* running 2.3.2-RELEASE-p1 (w/ latest packages). The PC has Intel i211 (igb0) and i219LM (em0) NICs, and inline Suricata does not work for me.

As soon as I enable inline mode for my WAN interface (em0), the interface dies. The link stays UP, but DHCP does not get an address and so forth. I've followed the inline configuration guides to the letter, i.e. the three hardware functions have been disabled, to no avail. As soon as I disable inline mode, the interface comes back alive.

Any ideas? I'd love to test the dev branches but don't have identical hardware to try with, and this is a "production" system. With the classic mode I get a lot of packet leakage, i.e. an offending request to, for example, dyndns can be made and responded to before Suricata blocks the offending host :-[

* http://www.shuttle.eu/products/slim/ds68u/specification/ - I can otherwise highly recommend this little box if x86 floats your boat.
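For reference, the "three hardware functions" the inline guides tell you to disable are hardware checksum offload, TCP segmentation offload, and large receive offload. A sketch of what that looks like at the shell; the pfSense checkboxes under System > Advanced > Networking do the same thing, and em0 here is simply the WAN NIC named in the post above:

```shell
# Disable the three hardware offloads that conflict with netmap/inline
# mode. Run as root; the interface name (em0) is from the post above.
ifconfig em0 -rxcsum -txcsum   # hardware checksum offload
ifconfig em0 -tso4 -tso6       # TCP segmentation offload
ifconfig em0 -lro              # large receive offload
```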
certifiable:

@jeffh:

Can anyone confirm whether or not Suricata inline mode works on APU14D systems? I believe they have Realtek RTL8111E network cards.

Probably; the 8111 is supported. I would imagine the E simply stands for PCI-E.
https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4

Under supported devices, the "re(4)" link refers to four supported Realtek NICs, one of which is the 8111. Unless the code base of the driver changed dramatically between 2011 (when the Realtek card was tested with netmap) and 2015 (when the FreeBSD support documentation was updated), it should still work. Sorry, that's not a confirmation, but it's a good bet.

@allu:

I have a Shuttle DS68U* running 2.3.2-RELEASE-p1 (w/ latest packages). The PC has Intel i211 (igb0) and i219LM (em0) nics and inline Suricata does not work for me.

This is really good information, thank you. It's interesting, since the supported Intel NICs listed are the e1000 family and the 10-gig ixgbe. I hope I don't have the same issues, because FreeBSD says your Intel card is supported here:
https://www.freebsd.org/relnotes/CURRENT/hardware/support.html

Do a search for "igb" (no quotes). The supported devices section of the netmap BSD documentation:

https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4

says igp is supported, and BSD shows the i210/211 as "igp." Is the device showing up with the name "igp" in BSD? What does it say when you issue the command "ifconfig"? (no quotes)

Are you using the i211 or the i219LM? Use the i211; it makes a difference. I would refer to the motherboard specifications to figure out which port is which. I'm getting two i210s myself and two 10-gig NICs, both on the MB. I hope I don't run into the same issue. Worst comes to worst, at least you have a full case with room for this: http://www.memory4less.com/intel-network-interface-adapter-d33682?rid=90&origin=pla&gclid=CIj2loifvtECFdaKswodTBgCDA

That's a dual-port 1000e; it's tested and it should work for you. I don't have room in my little case for it, so if my 10Gb NICs don't work for me I'll be hunting down the developer in Italy for help, as I'm not sure I need the i210s to work. It looks like he's over here:

              http://info.iet.unipi.it/~luigi/netmap/

              I only say "he" because I don't know any girls named Luigi.  That's all for now.  It's starting to feel a bit too much like work.  Take your time though, details matter, you can probably find a way to make things work.

allu:

What does it say when you issue the command "ifconfig"? (no quotes)

                With Suricata inline not enabled, i.e. running in classic mode, ifconfig:

igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=400bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO>
        ether 80:ee:nn:nn:nn:nn
        inet6 nnnn%igb0 prefixlen 64 scopeid 0x1
        inet nnnnn netmask 0xffffff00 broadcast nnnnn
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
em0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,VLAN_HWTSO>
        ether 80:ee:nn:nn:nn:nn
        inet6 nnnn%em0 prefixlen 64 scopeid 0x2
        inet nnnnnn netmask 0xfffffc00 broadcast nnnnn
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active

also, dmesg:

em0: <Intel(R) PRO/1000 Network Connection 7.6.1-k> mem 0xdf200000-0xdf21ffff irq 16 at device 31.6 on pci0
em0: Using an MSI interrupt
em0: Ethernet address: 80:ee:nn:nn:nn:nn
em0: netmap queues/slots: TX 1/1024, RX 1/1024

igb0: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0xd000-0xd01f mem 0xdf000000-0xdf01ffff,0xdf020000-0xdf023fff irq 16 at device 0.0 on pci2
igb0: Using MSIX interrupts with 3 vectors
igb0: Ethernet address: 80:ee:nn:nn:nn:nn
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: netmap queues/slots: TX 2/1024, RX 2/1024

One thing I have not tried is to swap these LAN/WAN ports the other way around and then see if I can enable inline mode on the other. This requires a little bit of downtime, though, as if I kill the LAN connection I can't easily restore the box :(

bmeeks:

                  @allu:

                  One thing I have not tried is to swap these LAN/WAN ports the other way around and then try to see if I can enable the inline mode on the other. This requires a little bit of downtime though as if I kill the LAN connection I can't easily restore the box :(

                  Are you trying this on pfSense 2.3.x or 2.4-BETA?  The current production release of pfSense is using the older 3.0.2 Suricata binary while the 2.4-BETA release uses the newer 3.1.2 Suricata binary.  There were several netmap related fixes in the move from 3.0.x Suricata to the 3.1.x version.  These fixes were all done by the upstream Suricata folks, but some of them were FreeBSD specific fixes and thus cured some bugs on pfSense.

                  You can try installing the 3.1.2 Suricata package from here:  https://beta.pfsense.org/packages/pfSense_master_amd64-pfSense_devel/All/.  It will likely need some other updated dependent packages as well such as libpcap and libhtp, so be prepared for some trial and error.  You might first want to work this out on a virtual machine running 2.3.x of pfSense and experiment with installing the 3.1.2 Suricata package there.

                  Bill
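For anyone who wants to try bmeeks' suggestion by hand, a rough sketch of the manual install. The package filename below is hypothetical; browse the linked directory listing for the real name before running anything:

```shell
# Hypothetical filename -- check the All/ directory listing for the
# actual Suricata .txz name before running. Root shell on pfSense.
pkg add https://beta.pfsense.org/packages/pfSense_master_amd64-pfSense_devel/All/suricata-3.1.2.txz
# If pkg reports missing or too-old dependencies (e.g. libpcap, libhtp),
# fetch and `pkg add` those from the same All/ directory first.
```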

allu:

I'm on 2.3.2-RELEASE-p1. Now that 3.1.2 is officially released I upgraded, and the behaviour is still the same: enable inline mode for the interface and a few moments later the interface goes totally silent. No obvious related message in the system log :(

Redyr:

                      @allu:

                      I have a Shuttle DS68U* running 2.3.2-RELEASE-p1 (w/ latest packages). The PC has Intel i211 (igb0) and i219LM (em0) nics and inline Suricata does not work for me.

                      As soon as I enable the inline for my WAN interface (em0), the interface dies. The link stays UP, but dhcp does not get an address and so forth. I've followed the inline configuration guides to the letter, i.e. the three hardware functions have been disabled to no avail. As soon as I disable the inline mode, the interface comes back alive.

                      Any ideas? I'd love to test the dev branches but don't have identical hardware to try with and this is a "production" system. With the classic mode I get a lot of package leakage, i.e. an offending request to for example dyndns can be made and responded to before Suricata blocks the offending host  :-[

* http://www.shuttle.eu/products/slim/ds68u/specification/ - I can otherwise highly recommend this little box if x86 floats your boat.

                      Hi,

I have a different Shuttle model but with the same NICs. You can run inline mode only on igb0; em0 will just die.

Switch the interfaces: assign igb0 to WAN and em0 to LAN, then run Suricata in mixed mode:

WAN (igb0) - Inline mode
LAN (em0) - Legacy mode

This worked for me.

Or assign igb0 to whichever network you want to protect the most, internal or external.

Hegemon:

                        I don't know if this applies to pfSense, but according to the FreeBSD documentation you can use a NIC that doesn't support native netmap:

                        NICs without native support can still be used in netmap mode through emulation. Performance is inferior to native netmap mode but still significantly higher than sockets, and approaching that of in-kernel solutions such as Linux's pktgen.

                        Emulation is also available for devices with native netmap support, which can be used for testing or performance comparison. The sysctl variable dev.netmap.admode globally controls how netmap mode is implemented.

Some aspects of the operation of netmap are controlled through sysctl variables on FreeBSD (dev.netmap.*) and module parameters on Linux (/sys/module/netmap_lin/parameters/):

dev.netmap.admode: 0
    Controls the use of native or emulated adapter mode. 0 uses the
    best available option, 1 forces native and fails if not available,
    2 forces emulated hence never fails.

allu:

                          @Redyr:

                          @allu:

                          I have a Shuttle DS68U* running 2.3.2-RELEASE-p1 (w/ latest packages). The PC has Intel i211 (igb0) and i219LM (em0) nics and inline Suricata does not work for me.

                          As soon as I enable the inline for my WAN interface (em0), the interface dies.

                          I have a different Shuttle model, but with the same NICs, you can run Inline mode only on igb(0) the em(0) will just die.

                          This seems to be the case, for igb0 I can get inline mode working without any issues.

Redyr:

                            @allu:

                            @Redyr:

                            @allu:

                            I have a Shuttle DS68U* running 2.3.2-RELEASE-p1 (w/ latest packages). The PC has Intel i211 (igb0) and i219LM (em0) nics and inline Suricata does not work for me.

                            As soon as I enable the inline for my WAN interface (em0), the interface dies.

                            I have a different Shuttle model, but with the same NICs, you can run Inline mode only on igb(0) the em(0) will just die.

                            This seems to be the case, for igb0 I can get inline mode working without any issues.

It is the case: FreeBSD 10.3 doesn't support the Intel i211, and what's worse, neither does FreeBSD 11.

The only way to make this work (as I see it), besides buying hardware with old chipsets, is to include drivers for the new chipsets as a kernel module.

Please read section 11.5.1, Locating the Correct Driver, at the link below:

https://www.freebsd.org/doc/en/books/handbook/config-network-setup.html

For the Intel chipsets at least, I think someone could create a driver module.
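For what it's worth, the FreeBSD convention for loading a NIC driver module looks like the sketch below. It assumes an if_igb.ko actually built for the running pfSense kernel exists, which is the hard part Redyr describes:

```shell
# Load a NIC driver module now (this fails if the driver is already
# compiled into the kernel, as it is on stock pfSense):
kldload if_igb
kldstat | grep if_igb          # verify it loaded

# Load it automatically at boot:
echo 'if_igb_load="YES"' >> /boot/loader.conf
```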

allu:

Are all of the manually loaded modules etc. lost when the pfSense OS is upgraded (CLI or webGUI)?

Redyr:

@allu:

If you're asking me, you should know that I'm not a pfSense official, so I cannot speak on pfSense's behalf.

AFAIK, pfSense doesn't have this implemented. This was a proposal; maybe someone from the dev team will see it and can respond either way.

And, to answer your question: the module can be reloaded after the update, but you should do this only if you have the knowledge and a spare machine for testing. Please note that the module should first be built for the current kernel.

A Former User:

                                  @jeffh:

                                  Can anyone confirm whether or not Suricata inline mode works on APU14D systems? I believe they have Realtek RTL8111E network cards.

                                  Also, can anyone confirm whether or not Suricata inline mode works on the various Netgate/ADI systems being sold. I believe all have Intel NICS.

I have a GA-C1007UN board with two embedded Realtek RTL8111F chips, running 2.3.2-p1 64-bit.
Suricata inline on LAN (re1) and Snort on WAN (re0); the other network interface is a PCI card, not important here.
LAN ran inline fine prior to the 3.1.2 binary upgrade and is running now with the 3.1.2 binary. The only issue showed when the first download of the 3.1.2 upgrade did not seem to take; I removed it along with settings and reinstalled after a reboot. All is well so far. System logs are clean, no warnings. Set to block, Secure option, with community rules enabled and set to policy. Loaded for bear. ;)
PS - RTL8111F, not E, for these chips. Not a typo.

ETechBuy (replying to certifiable):

                                    @certifiable Thanks for gathering and sharing all this detail—super helpful.

                                    Yeah, I’m inclined to agree about the 8111. It probably is supported via the re(4) driver, and like you said, the “E” just denotes PCIe. Unless there were big changes to the Realtek driver between 2011 and 2015, it should still work with Netmap. Of course, without a definitive confirmation, there’s always some uncertainty, but it seems like a solid bet.

                                    On the Intel side, it’s interesting (and a bit frustrating) that inline Suricata doesn’t work even though the i211 is listed as supported in both the FreeBSD hardware notes and the Netmap man page. I did check the ifconfig output—my i211 shows up as igb0, not igp. From what I understand, igb is the correct driver family for the i210/i211 series. igp might be a typo or confusion with something else (maybe the older PRO/1000 series?).

                                    I’ve been using the i211 exclusively for testing because I’ve read in several threads (and you confirmed) that the i219LM can be more problematic, especially with netmap/pf_ring compatibility. If you haven’t already, I’d definitely try binding Suricata only to the i211 to rule out issues from the onboard i219.

                                    Also appreciate the fallback suggestion on the dual-port 1000e NIC—good to know there are reliable options if the built-ins keep acting up. And yeah, if it comes to that, maybe a direct appeal to Luigi Rizzo himself will be in order.

                                    Thanks again—like you said, this stuff can start feeling like work real fast, but having all these specifics in one place makes troubleshooting a lot easier. I’ll keep digging, and if I get it working, I’ll post back with the exact configuration.

                                    Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.