Netgate Discussion Forum

    Inline IPS : can't increase threads

    IDS/IPS
    9 Posts 2 Posters 1.3k Views
    • verizu

      Hello,

      I decided to give netmap another try after the upgrade to 2.5. The good news is that it did not crash on me, but the throughput is worse than in legacy mode: it seems limited to 1 Gbps (vs. 6 Gbps) and CPU utilisation is rather low, around 20%.

      So I decided to play with threading in the template conf by raising detect-thread-ratio from 1.0 to 2.0, but there is still no change; processing seems limited to 2 threads:

      10/6/2021 -- 10:44:21 - <Notice> -- all 2 packet processing threads, 4 management threads initialized, engine started.

      I also tried cpu-affinity, but that does not change it either. Am I doing something wrong?

      I use the kernel ix driver, btw.
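For context, detect-thread-ratio lives in the threading section of suricata.yaml; a minimal sketch of that block (values illustrative, the pfSense template generates the real file) looks like:

```yaml
# threading section of suricata.yaml (illustrative sketch)
threading:
  set-cpu-affinity: no
  detect-thread-ratio: 2.0   # raised from the default 1.0 as described above
```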

      • bmeeks

        The threads setting you changed controls the detection threads; it does not apply to netmap. To increase throughput there, you need to experiment with the "threads" parameter in the netmap configuration section of the interface's suricata.yaml. I am working on incorporating this new parameter in an upcoming release of the GUI package. If you want to experiment with it early, you will need to edit one of the package PHP files.

        First, determine how many queues your NIC is exposing to netmap by running this command from a shell prompt and noting the output:

        grep netmap /var/log/dmesg.boot
        

        Search for the line with your physical NIC name and the text netmap queues/slots: followed by a number. That number is how many netmap queues your NIC driver exposes. You can't use a number higher than what is shown, so if your NIC exposes only 1 queue, that's all you can use and there is no need to proceed further. If your NIC exposes more queues, try the setting below to see if you can increase throughput.
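On an ix NIC the line in question usually looks like the sample below; a short sed can pull out just the queue count. The sample line is illustrative (your dmesg output will differ):

```shell
# Sample dmesg.boot line for an ix NIC (illustrative):
line='ix0: netmap queues/slots: TX 4/2048, RX 4/2048'
# Pull out the TX queue count -- the number between "TX " and "/":
echo "$line" | sed -n 's/.*TX \([0-9][0-9]*\)\/.*/\1/p'
```

Here the NIC exposes 4 queues, so 4 is the upper bound for the threads setting below.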

        Open up and edit the file /usr/local/pkg/suricata/suricata_generate_yaml.php. Find the following section of code at the bottom of the file:

        # Netmap
        netmap:
         - interface: default
           threads: auto
           copy-mode: ips
           disable-promisc: {$netmap_intf_promisc_mode}
           checksum-checks: auto
         - interface: {$if_real}
           copy-iface: {$if_real}^
         - interface: {$if_real}^
           copy-iface: {$if_real}
        

        Edit that section by changing the "threads: auto" line under the "- interface: default" line as follows:

        # Netmap
        netmap:
         - interface: default
           threads: <num_queues>
           copy-mode: ips
           disable-promisc: {$netmap_intf_promisc_mode}
           checksum-checks: auto
         - interface: {$if_real}
           copy-iface: {$if_real}^
         - interface: {$if_real}^
           copy-iface: {$if_real}
        

        Replace <num_queues> with the number extracted from the grep command earlier. Pay close attention to the indentation! Suricata is very picky about formatting in this file. Be careful while editing and do not change anything else! Save the new file, then go to the INTERFACE SETTINGS tab for the Suricata interface and click Save to generate a new YAML file for the interface. Now restart Suricata on that interface.

        I am interested in hearing back from you how this works. It can increase throughput, but only on some NICs that expose several netmap queues.

        • verizu

          Managed to work around the issue by modifying this file:

          /usr/local/pkg/suricata/suricata_generate_yaml.php

          Find the netmap block and replace threads: auto with threads: 2 (I have 4 queues).

          I think this should be added to the web interface, as auto does not seem to be well supported on some platforms.
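For anyone following along, the substitution can be previewed with sed on a scratch copy before touching the real file. This is a sketch: it assumes the default block literally contains "threads: auto", and it deliberately operates on a temp file rather than the live PHP file:

```shell
# Demonstrate the edit on a scratch copy, not on the live
# /usr/local/pkg/suricata/suricata_generate_yaml.php:
printf ' - interface: default\n   threads: auto\n   copy-mode: ips\n' > /tmp/netmap_block.txt
# Swap "auto" for an explicit count (2 here, per the post above):
sed 's/threads: auto/threads: 2/' /tmp/netmap_block.txt
```

Once the output looks right, apply the same substitution by hand (or with sed -i and a backup) to the real file.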

           • verizu @bmeeks

            @bmeeks said in Inline IPS : can't increase threads:

            grep netmap /var/log/dmesg.boot

             Thanks!

             I found the answer just before you posted, on a Suricata forum where someone was having the same issue with the ix driver.

             So basically I have 4 queues (4 cores), but when setting threads to 4 I get poor download (300 Mb/s) and good upload (6 Gb/s); with 2 or 3 I get good speeds both ways with few packet drops.

             • bmeeks @verizu

              @verizu said in Inline IPS : can't increase threads:

              Managed to work around the issue by modifying this file :

              /usr/local/pkg/suricata/suricata_generate_yaml.php

               Find the netmap block and replace threads: auto with threads: 2 (I have 4 queues).

              I think this should be added to the web interface as the auto does not seem to be well supported on some platforms.

              Yes, see my post above (recently edited so it shows current Suricata PHP code). I am planning to introduce this as a configurable parameter in an upcoming Suricata package update.

              The "auto" default from Suricata does not work properly on some systems. It is supposed to query for the number of supported queues, but if that call fails it defaults to a single queue. Unfortunately the call can fail on most systems.

               • bmeeks

                I've been holding off on submitting this latest Suricata update as I am waiting to see if upstream fixes a severe issue with netmap in version 6.0.x of the Suricata binary. Right now, that version will suddenly stop passing traffic on netmap-enabled interfaces after some random period of time. I've seen this in testing on my virtual machines. I have an open bug report upstream, but so far no action on it.

                • verizu @bmeeks

                  @bmeeks

                  I can test this version if you want; can you give me the compilation flags?

                  • bmeeks @verizu

                    @verizu said in Inline IPS : can't increase threads:

                    @bmeeks

                    I can test this version if you want; can you give me the compilation flags?

                    We essentially use the standard FreeBSD port for the binary portion. Suricata 6.0.2 is posted there now. Just be aware that if you compile and install that port, you will NOT have Legacy Blocking Mode, since that custom plugin is a pfSense-specific patch added on the pfSense package-builder infrastructure. However, you don't need that feature to test Inline IPS Mode.
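As a rough sketch, building the stock port follows the usual ports workflow. This assumes a FreeBSD ports tree is available (pfSense itself does not ship one, so a plain FreeBSD build box is easiest), and the option name in the dialog may vary by port revision:

```shell
# Sketch: build the stock Suricata port with netmap support
cd /usr/ports/security/suricata
make config        # tick the NETMAP option in the dialog
make install clean
```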

                    You may find swapping out the binary challenging: removing or updating the current 5.x binary will probably want to remove the GUI package pieces as well.

                    • verizu @bmeeks

                      Thanks for adding the netmap threads parameter in the latest release. I can confirm that I'm still limited to 1 Gbps with auto; setting it to 2 fixes the bandwidth issue 👍

                      Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.