Netgate Discussion Forum

Performance on IDS/IPS

IDS/IPS
11 Posts 4 Posters 1.7k Views
  • Cool_Corona
    last edited by Sep 28, 2020, 7:51 AM

    Just a heads up.

    Lowering the MTU to 1440 and the MSS to 1400 increased IDS/IPS performance about 3x on the same hardware.

    Going from 200 Mbit/s to 600+ Mbit/s on the same hardware.

    It seems that packet fragmentation is tough for IDS/IPS to deal with. (naturally)
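    The MTU/MSS pair above is not arbitrary: for IPv4 TCP, the MSS is the MTU minus 40 bytes of IP and TCP headers. A minimal sketch of that arithmetic (the header sizes assume IPv4 with no IP or TCP options):

    ```python
    # Relationship between the MTU and TCP MSS values quoted above.
    # An IPv4 TCP segment carries a 20-byte IP header and a 20-byte TCP
    # header, so the largest MSS that fits in a given MTU is MTU - 40.

    IPV4_HEADER = 20
    TCP_HEADER = 20

    def max_mss(mtu: int) -> int:
        """Largest TCP payload (MSS) that fits in one packet of size `mtu`."""
        return mtu - IPV4_HEADER - TCP_HEADER

    print(max_mss(1500))  # 1460 -- the usual Ethernet default
    print(max_mss(1440))  # 1400 -- the MTU/MSS pair used in this thread
    ```

    So 1440/1400 keeps the two values consistent; setting the MSS lower than MTU - 40 only wastes payload space, while setting it higher invites fragmentation.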

    • tman222 @Cool_Corona
      last edited by Sep 29, 2020, 12:35 PM

      @Cool_Corona - Very interesting results! Could you please provide a bit more detail describing your IDS/IPS setup and where exactly in pfSense you made those changes? What prompted you to start looking into MTU sizing? Also, what method(s) did you use to assess throughput (before and after the MTU change)?

      Thanks in advance!

      • Cool_Corona @tman222
        last edited by Sep 29, 2020, 3:14 PM

        @tman222 said in Performance on IDS/IPS: […]

        I made the changes on the Interfaces tab. (all)

        I am running full IDS with a lot of rules.

        What prompted me was someone testing on OPNsense who found that lowering the standard MTU from 1500 to 1440 and the MSS to 1400 avoids fragmentation. The speed then increased, in both real-world testing and iperf, to almost line speed minus overhead.

        iperf went up from around 230 Mbit/s to 900 Mbit/s, and real-world testing went from 190 Mbit/s to 650 Mbit/s.

        And less load on the CPUs, too.
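        For what it's worth, "line speed minus overhead" at 1 Gbit/s lands close to the 900 Mbit/s iperf figure above. A rough hedged estimate, counting IPv4/TCP headers plus Ethernet framing per packet (38 bytes on the wire: header, FCS, preamble, and inter-frame gap):

        ```python
        # Rough TCP goodput estimate for a given MSS on a full-duplex link,
        # counting IPv4 + TCP headers (40 B) and Ethernet framing overhead
        # (14 B header + 4 B FCS + 8 B preamble + 12 B inter-frame gap = 38 B).

        ETH_OVERHEAD = 38    # Ethernet framing bytes on the wire per packet
        IP_TCP_HEADERS = 40  # IPv4 (20) + TCP (20), no options

        def goodput_mbit(link_mbit: float, mss: int) -> float:
            """TCP payload rate achievable at `link_mbit` with payload size `mss`."""
            wire_bytes = mss + IP_TCP_HEADERS + ETH_OVERHEAD
            return link_mbit * mss / wire_bytes

        print(round(goodput_mbit(1000, 1400), 1))  # 947.2 -- MSS 1400, as in this thread
        print(round(goodput_mbit(1000, 1460), 1))  # 949.3 -- the default MSS 1460
        ```

        The theoretical ceiling barely moves between MSS 1400 and 1460, which supports the reading that the gain here came from avoiding fragmentation in the IDS/IPS path, not from the smaller packets themselves.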

        • bmeeks @Cool_Corona
          last edited by Sep 29, 2020, 3:19 PM

          @Cool_Corona said in Performance on IDS/IPS: […]

          Do you have a PPPoE interface on your WAN? 1500 is the typical setting for DHCP-type WAN interfaces. PPPoE, however, is different and works much better with a reduced MTU.
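          The PPPoE point above comes down to encapsulation overhead: PPPoE adds 8 bytes (6-byte PPPoE header plus 2-byte PPP protocol field) inside the Ethernet payload, so a standard 1500-byte Ethernet MTU leaves only 1492 bytes for the IP packet. A small sketch:

          ```python
          # PPPoE encapsulation consumes 8 bytes of the Ethernet payload
          # (6-byte PPPoE header + 2-byte PPP protocol field), so the usable
          # IP MTU on a PPPoE WAN drops from 1500 to 1492.

          PPPOE_OVERHEAD = 8

          def pppoe_mtu(ethernet_mtu: int = 1500) -> int:
              """Usable IP MTU on a PPPoE link carried over the given Ethernet MTU."""
              return ethernet_mtu - PPPOE_OVERHEAD

          print(pppoe_mtu())  # 1492
          ```

          This is why 1492 (or lower) is the usual interface MTU on PPPoE WANs, while plain DHCP WANs can keep 1500.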

          • tman222 @Cool_Corona
            last edited by Sep 29, 2020, 3:19 PM

            @Cool_Corona said in Performance on IDS/IPS: […]

            Thanks @Cool_Corona.

            Is this running Snort or Suricata? Is the setup running in IDS (pcap) or IPS (inline) mode? Do you have a link to the testing / discussion where it was discovered that changing the MTU helps? I'm still not sure why there would be fragmentation at the standard MTU of 1500 bytes unless perhaps the WAN connection didn't support it? Thanks again!

            • tman222
              last edited by Sep 29, 2020, 3:22 PM

              I have seen some discussion about increasing the snaplen (or snapshot length) in Snort, but I'm not sure if that would be helpful when running in inline (IPS) mode?

              https://seclists.org/snort/2015/q1/757
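              For reference, the snaplen the linked thread discusses is the snapshot length Snort captures per packet, set in snort.conf for Snort 2.x. A hedged sketch (the value shown here is the common "capture whole packets" setting, not something taken from the linked post):

              ```
              # snort.conf (Snort 2.x) -- raise the capture snapshot length so
              # full-size frames are not truncated; 65535 captures entire packets
              # regardless of the interface MTU.
              config snaplen: 65535
              ```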

              • Cool_Corona @bmeeks
                last edited by Cool_Corona Sep 29, 2020, 3:30 PM

                @bmeeks said in Performance on IDS/IPS:

                Do you have a PPPoE interface on your WAN? 1500 is the typical setting for DHCP-type WAN interfaces. PPPoE, however, is different and works much better with a reduced MTU.

                No, a dedicated 1/1 Gbit/s fiber with static IPs.

                • Cool_Corona @tman222
                  last edited by Sep 29, 2020, 3:31 PM

                  @tman222 said in Performance on IDS/IPS:

                  Is this running Snort or Suricata? Is the setup running in IDS (pcap) or IPS (inline) mode? Do you have a link to the testing / discussion where it was discovered that changing the MTU helps? I'm still not sure why there would be fragmentation at standard MTU of 1500 bytes unless perhaps the WAN connection didn't support this? Thanks again!

                  Suricata in Legacy mode, since I run VMXNET3 adapters and netmap doesn't play well with inline mode.

                  • Cool_Corona
                    last edited by Sep 29, 2020, 3:40 PM

                    [screenshot: speedtest result]

                    Just a quick test with speedtest. Nothing fancy...

                    • buggz
                      last edited by Sep 30, 2020, 3:41 PM

                      Was this change on both the WAN and LAN, or just the Snort interface, which for me is LAN?

                      Thanks!

                      • Cool_Corona @buggz
                        last edited by Sep 30, 2020, 4:53 PM

                        @buggz said in Performance on IDS/IPS:

                        Was this change on both the WAN and LAN, or just the Snort interface, which for me is LAN?

                        Both. (all)

                        Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.