Netgate Discussion Forum

    pfSense tuning for 10 Gbit throughput

    Virtualization · 6 Posts · 2 Posters · 5.0k Views
    • fwcheck

      We have pfSense running on ESXi 6.0 on good hardware (an HP DL380 with many cores).

      I did some performance measurements:

      Test1 -> pfSense -> Test2

      The tests were done using iperf3; the adapter is vmxnet3.
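
      For reference, the tests were run roughly like this (8 parallel streams for 100 seconds, matching the output further down; 10.0.0.2 is just a placeholder for the Test2 address):

      # on Test2 (receiver)
      iperf3 -s

      # on Test1 (sender): 8 parallel streams for 100 seconds through pfSense
      iperf3 -c 10.0.0.2 -P 8 -t 100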

      I was not able to reach more than 5.0 Gbit/s of throughput using MTU 1500. If I use jumbo frames I am able to
      saturate the line (9.90 Gbit/s).

      Our goal is to reach line saturation using MTU 1500.

      Is this possible with tuned settings? Or is there a pps limit which leads to performance degradation?
      I also tried to configure SR-IOV, but this seems to be difficult under pfSense (I keep running into bugs).
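
      Some rough arithmetic on the pps side (my own estimate, counting about 38 bytes of Ethernet framing overhead per packet):

      MTU 1500: 10 Gbit/s / (1538 bytes * 8)  ~ 813 kpps
      MTU 9000: 10 Gbit/s / (9038 bytes * 8)  ~ 138 kpps

      So jumbo frames cut the per-packet work by roughly a factor of six, which is presumably why they help so much here.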

      However, I noticed that the interrupt load is nearly 100 % at MTU 1500.
      This is a dedicated setup, so I am able to try nearly any tuning setting. I have already tried adding more CPUs, but they do not
      seem to be the bottleneck.
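
      For reference, the interrupt load can be watched from the shell with standard FreeBSD tools, nothing pfSense-specific, for example:

      # per-CPU usage, including interrupt time
      top -P -S

      # interrupt counts and rates per device
      vmstat -i

      # packets in/out per second
      netstat -w 1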

      [Attachments: 170207_Jumbo_Frames.PNG, 170207_MTU1500.PNG]

      • heper

        I doubt there are many tweaks that will make a difference.

        See: https://blog.pfsense.org/?p=1866
        This basically states that a bare-metal Xeon E3-1275 should/could potentially hit around 10 Gbit/s with a 1500 MTU. No clue if this is currently possible inside a hypervisor.

        At 10 GbE, every firewall rule matters. You could try to limit these or even disable firewalling to see if this makes a difference.
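
        If you want a quick test from the shell, pf can be toggled with pfctl (note that NAT stops working too while it is disabled):

        # temporarily disable the packet filter
        pfctl -d

        # run the iperf3 test, then re-enable it
        pfctl -e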

        In my experience passthrough generally doesn't improve performance (but I've only tried it briefly, years ago).

        • fwcheck

          > At 10 GbE, every firewall rule matters. You could try to limit these or even disable firewalling to see if this makes a difference.

          I did; the result increases to 7.2 Gbit/s using MTU 1500. Forwarding without firewalling is therefore faster.

          The frequency of my CPU is 2.6 GHz; scaling to 3.8 GHz (Xeon E3-1275 Turbo Boost) would be a linear
          factor of 3.8 / 2.6 ≈ 1.46, i.e. 5.0 Gbit/s -> 7.3 Gbit/s.

          ===================================
          [  4]  0.00-100.00 sec  11.5 GBytes  991 Mbits/sec  773            sender
          [  4]  0.00-100.00 sec  11.5 GBytes  991 Mbits/sec                  receiver
          [  6]  0.00-100.00 sec  10.4 GBytes  896 Mbits/sec  738            sender
          [  6]  0.00-100.00 sec  10.4 GBytes  896 Mbits/sec                  receiver
          [  8]  0.00-100.00 sec  11.6 GBytes  997 Mbits/sec  860            sender
          [  8]  0.00-100.00 sec  11.6 GBytes  997 Mbits/sec                  receiver
          [ 10]  0.00-100.00 sec  9.39 GBytes  807 Mbits/sec  933            sender
          [ 10]  0.00-100.00 sec  9.39 GBytes  807 Mbits/sec                  receiver
          [ 12]  0.00-100.00 sec  11.6 GBytes  997 Mbits/sec  991            sender
          [ 12]  0.00-100.00 sec  11.6 GBytes  996 Mbits/sec                  receiver
          [ 14]  0.00-100.00 sec  10.4 GBytes  896 Mbits/sec  857            sender
          [ 14]  0.00-100.00 sec  10.4 GBytes  896 Mbits/sec                  receiver
          [ 16]  0.00-100.00 sec  8.44 GBytes  725 Mbits/sec  857            sender
          [ 16]  0.00-100.00 sec  8.44 GBytes  725 Mbits/sec                  receiver
          [ 18]  0.00-100.00 sec  10.3 GBytes  881 Mbits/sec  709            sender
          [ 18]  0.00-100.00 sec  10.3 GBytes  881 Mbits/sec                  receiver
          [SUM]  0.00-100.00 sec  83.7 GBytes  7.19 Gbits/sec  6718            sender
          [SUM]  0.00-100.00 sec  83.7 GBytes  7.19 Gbits/sec                  receiver

          [Attachment: 170207_Forwarding.png]

          • heper

            Maybe there is an improvement when using pfSense 2.4-BETA (it uses FreeBSD 11 instead of 10.3).

            • fwcheck

              I did a comparison with a plain Debian system. It is able to forward about 8.80 Gbit/s at MTU 1500 with a minimal iptables ruleset + NAT.
              Therefore maybe FreeBSD on ESXi is not working optimally. I was able to measure a slightly higher forwarding throughput
              using two iperf3 servers on different ports (8 threads in total). The achievable rate was about 8.4 Gbit/s; the graph is attached.
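
              For reference, that test looked roughly like this (ports and address are just examples):

              # on the receiver: two iperf3 servers on different ports
              iperf3 -s -p 5201 &
              iperf3 -s -p 5202 &

              # on the sender: two clients with 4 streams each, 8 in total
              iperf3 -c 10.0.0.2 -p 5201 -P 4 -t 100 &
              iperf3 -c 10.0.0.2 -p 5202 -P 4 -t 100 &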

              I will try a plain FreeBSD 10/10.3/11 and pfSense 2.4 for comparison.

              Any other ideas to try?

              I tried SR-IOV, but I'm running into the following problem:
              ixv0: <Intel(R) PRO/10GbE Virtual Function Network Driver, Version - 1.4.1-k> mem 0xfd3f8000-0xfd3fbfff,0xfd3fc000-0xfd3fffff at device 0.0 on pci11
              ixv0: MSIX config error
              ixv0: Allocation of PCI resources failed
              device_attach: ixv0 attach returned 6

              The error seems to be very much like the one described under https://bugs.pcbsd.org/issues/4614

              [Attachment: 170208_increase_forwarding.png]

              • fwcheck

                I was able to get SR-IOV running; you need a setting in /boot/loader.conf, as described here:
                https://lists.freebsd.org/pipermail/freebsd-bugs/2015-October/064355.html

                Even without using SR-IOV, this setting improves the performance. I am able to measure rates of about 8 Gbit/s at MTU 1500
                using one system on ESXi.
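
                For anyone else hitting this: as far as I can tell the tunable in question is the PCI MSI blacklist override (please verify against the linked thread before relying on it):

                # /boot/loader.conf
                # allow MSI/MSI-X on the VMware virtual chipset, which FreeBSD blacklists by default
                hw.pci.honor_msi_blacklist="0"

                With MSI-X available, the vmxnet3 and ixv devices get their interrupt vectors set up properly, which would explain why it helps even without SR-IOV.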

                However, it seems to be difficult to reach more than 5 Mpps using FreeBSD on a hypervisor.

                [Attachment: 170518_Throughput.PNG]
