Netgate Discussion Forum

    10Gbe Tuning?

    Hardware
    83 Posts 19 Posters 40.6k Views
    • J
      jasonlitka
      last edited by

      @gonzopancho:

      @Jason:

      I was able to get ~8Gbit/s between two FreeNAS 9.x boxes without jumbo frames when using 4 threads.  That's pretty close to wire.

      OK, Jason… FreeBSD won't forward at wirespeed on 10Gbps networks.

      Since the BSDRP guy can only manage to forward (no firewall, just fast forwarding) at a pinch over 1.8 Mpps (and you were doing, by my best estimate, 5.5 Mpps), I'm going to assert that we still have work to do.

      brunoc:  we're currently engaged in a 10G performance study, but yes, part of the solution will be tuning, and part of it will be the threaded pf in pfSense version 2.2.

      One interesting thing of note is that at least one user here has had a lot of luck using pfSense on vSphere.  With virtualized NICs he seems to be getting better throughput than I am on bare-metal, even though I'm using faster CPUs, so I'm wondering how much of this is the Intel drivers.  The newest ones are better than the last, but they're still not exactly screaming along.

      I'll keep an eye on the 2.2 section of the forums.  Once it gets stable enough to run as the backup of a CARP pair (next to a 2.1.x box) maybe I'll upgrade one system at the office for testing.

      If there's any tuning that you want me to test out that can be done on 2.1.x, let me know.  I'd be glad to try a few things on my boxes.

      I can break anything.

      • T
        tojaktoty
        last edited by

        @Jason:

        One interesting thing of note is that at least one user here has had a lot of luck using pfSense on vSphere.  With virtualized NICs he seems to be getting better throughput than I am on bare-metal, even though I'm using faster CPUs, so I'm wondering how much of this is the Intel drivers.  The newest ones are better than the last, but they're still not exactly screaming along.

        Where was that discussion about pfSense on ESXi providing more throughput than your similar bare-metal setup? I looked and can't find it. Thanks

        • stephenw10S
          stephenw10 Netgate Administrator
          last edited by

           I think it was this thread. I remember this figure seeming surprisingly high at the time, and it still does:
          https://forum.pfsense.org/index.php?topic=72142.msg395165#msg395165

          Steve

          • ?
            Guest
            last edited by

            I can't imagine any real performance gain for pf when running under VMware.

            • B
              bl0815
              last edited by

               I have pfSense 2.1.4 on a new box with two CPUs: E5-2667 @ 2.90GHz, all 12 cores enabled, but hyperthreading and VT disabled.
               All traffic goes over one Intel X520-SR2.
               With my simple test setup (iperf between two VMs, traffic going through the whole datacenter with the pfSense box in the middle), I got up to 3Gbit/s (perhaps I could get more with better VMware infrastructure) with a CPU load below 2.

              my /boot/loader.conf.local:

              kern.ipc.nmbclusters="262144"
              kern.ipc.nmbjumbop="262144"
              net.isr.bindthreads=0
              net.isr.maxthreads=1
              kern.random.sys.harvest.ethernet=0
              kern.random.sys.harvest.point_to_point=0
              kern.random.sys.harvest.interrupt=0
              net.isr.defaultqlimit=2048
              net.isr.maxqlimit=40960
              

              and my changes in system-tunables:

              hw.intr_storm_threshold=10000
              kern.ipc.maxsockbuf=16777216
              net.inet.tcp.sendbuf_max=16777216
              net.inet.tcp.recvbuf_max=16777216   
              net.inet.ip.fastforwarding=1
              net.inet.tcp.sendbuf_inc=262144
              net.inet.tcp.recvbuf_inc=262144 
              net.route.netisr_maxqlen=2048
              net.inet6.ip6.redirect=0
              net.inet.ip.redirect=0
              net.inet.ip.intr_queue_maxlen=2048
              

               And make sure to switch off LRO and TSO on the ix interfaces. TSO is broken with IPv6; if it is enabled, only one packet is sent at a time and then the box waits for the ACK before sending the next one…
               Some of the options I found in the FreeBSD wiki: https://wiki.freebsd.org/NetworkPerformanceTuning
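
               For reference, a minimal sketch of switching those offloads off from a shell, assuming ix0/ix1 are the X520 ports (adjust the names to your system; pfSense also has checkboxes for disabling TSO and LRO under System > Advanced):

               ifconfig ix0 -tso -lro      # disable TCP segmentation offload and large receive offload
               ifconfig ix1 -tso -lro
               ifconfig ix0 | grep options      # verify TSO4/TSO6/LRO no longer appear in the options list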

              • Q
                q54e3w
                last edited by

                 My throughput completely sucks right now… I'm seeing 600Mbps (you read it right, not even 1 gig) when testing iperf from my desktop to my pfSense router. I've applied the Calomel tricks and tips re buffers etc. and am still seeing sucky performance, so I need to do some debugging for sure. I'm dreaming of the lofty heights of a 2Gbps connection right now!

                 BTW, this guy nails 9.x Gbps > https://forum.pfsense.org/index.php?topic=77144.msg435304#msg435304

                 FYI I'm using an A1SRM-2758F board with an Intel X520, SFP+ optical cables, etc. I'm still limited to 600Mbps on a gigabit Ethernet Cat6 wire to my quad i350 too.

                • D
                  demco
                  last edited by

                  Just an observation on the 9.22Gbps test result.

                   1. The measurement is taken on the LAN interface, which is a bridge of 4x 10Gbps + 1x 1Gbps interfaces. It would be measuring the sum of all 5 interfaces.

                   2. The test setup seems to be to connect 1 host to each of the 10Gbps ports and have these 4 hosts run iperf.

                   3. Most people report seeing around 2Gbps on 10Gbps interfaces, so 4x 2Gbps is within reach of the result.

                   4. If the 10Gbps ports were doing line rate, shouldn't the test be measuring 40Gbps instead of 9Gbps? Still, 9Gbps is impressive on older hardware.

                  • Q
                    q54e3w
                    last edited by

                     Yes, the LAN reports the traffic on the bridge (mine is set up like this also), but I'd assumed he was reporting line rate on 1 port rather than (4 * 2G + 1 * 1G) speeds. You are right though; without seeing his other ports there is ambiguity. I'd assumed, given he spent the time to post, that he had close to line rate out of 1 port, which theoretically should be possible, rather than close to line rate from 4+1… Good spot.

                    • stephenw10S
                      stephenw10 Netgate Administrator
                      last edited by

                      You must be hitting some limit. Are the NICs connecting at 10Gbps? Are you seeing errors on the interface? What does your CPU usage look like? Large interrupt load?
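
                       A rough way to check those from a shell (just a sketch; ix0 here is an assumed interface name, substitute your own):

                       ifconfig ix0 | grep media      # confirm the negotiated link speed and duplex
                       netstat -I ix0      # look for Ierrs/Oerrs on the interface
                       vmstat -i      # per-device interrupt rates
                       top -SH      # per-thread CPU usage, including interrupt threads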

                      Steve

                      • H
                        Harvy66
                        last edited by

                        @irj972:

                         My throughput completely sucks right now… I'm seeing 600Mbps (you read it right, not even 1 gig) when testing iperf from my desktop to my pfSense router. I've applied the Calomel tricks and tips re buffers etc. and am still seeing sucky performance, so I need to do some debugging for sure. I'm dreaming of the lofty heights of a 2Gbps connection right now!

                         BTW, this guy nails 9.x Gbps > https://forum.pfsense.org/index.php?topic=77144.msg435304#msg435304

                         FYI I'm using an A1SRM-2758F board with an Intel X520, SFP+ optical cables, etc. I'm still limited to 600Mbps on a gigabit Ethernet Cat6 wire to my quad i350 too.

                         pfSense 2.2 will have better multi-core, multi-stream performance. Your Atom CPU has poor single-thread performance, even though it should have decent aggregate throughput.

                         I'm getting 980Mbps, ~1.5Gbps with a bi-directional test, with iperf through pfSense NAT, all with 7.7% CPU load and no tweaking. The performance is entirely limited by my two testing computers' integrated NICs.
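
                         For anyone wanting to run a similar test, a rough sketch with classic iperf2 (the server address is a placeholder):

                         iperf -s      # on the receiving host
                         iperf -c <server-ip> -t 30 -i 1      # one-way TCP test from the sending host
                         iperf -c <server-ip> -t 30 -i 1 -d      # -d adds the reverse direction for a bi-directional test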

                        • stephenw10S
                          stephenw10 Netgate Administrator
                          last edited by

                          It still has almost double the single thread rating of, say, a D525 which can itself manage close to 600Mbps throughput.  :-
                          This test used the pfSense box as the end point though so they are not comparable.

                          Steve

                          • D
                            dmitripr
                            last edited by

                            @stephenw10:

                            It still has almost double the single thread rating of, say, a D525 which can itself manage close to 600Mbps throughput.  :-
                            This test used the pfSense box as the end point though so they are not comparable.

                            Steve

                            Steve, did you get anywhere with this?

                             I also just ran some iperf tests. I have an Atom D2550, and it's also maxing out at ~450-500 Mbps when I do UDP from my pfSense box. I see the CPU staying right at 25-27% load during tests. I'm thinking that this is getting limited by the single thread of iperf on the Atom.

                             Interestingly enough, I got a Lenovo T440 laptop with Win7; when I run the UDP test from that (Intel NIC), it's also maxing out at 450-500 Mbps.

                            I'm not sure what to make of that. Maybe an issue with 2.0.x iperf?

                            -Dmitri

                            • stephenw10S
                              stephenw10 Netgate Administrator
                              last edited by

                              Run 'top -SH' at the console to see how the usage breaks down across the cores.
                              How are the NICs connected? If they're PCI you might hit a bottleneck there.
                              Try running a test through pfSense instead of using it as an end-point.
                               The previous user who got greater than 600Mbps through his Atom had to make some tweaks. I forget the details but I think he disabled some PCI power saving options in the BIOS.
                               You could try enabling IP fast-forwarding if you're not using IPsec.

                              Steve
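
                               For reference, a minimal sketch of toggling fast forwarding from a shell (on pfSense it is normally added as a System Tunable instead, so it persists across reboots):

                               sysctl net.inet.ip.fastforwarding      # check the current value (0 = off)
                               sysctl net.inet.ip.fastforwarding=1      # enable the fast-forwarding path
                               # The IPsec caveat applies: the fast path bypasses processing that IPsec relies on.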

                              • D
                                dmitripr
                                last edited by

                                @stephenw10:

                                Run 'top -SH' at the console to see how the usage breaks down across the cores.
                                How are the NICs connected? If they're PCI you might hit a bottleneck there.
                                Try running a test through pfSense instead of using it as an end-point.
                                 The previous user who got greater than 600Mbps through his Atom had to make some tweaks. I forget the details but I think he disabled some PCI power saving options in the BIOS.
                                 You could try enabling IP fast-forwarding if you're not using IPsec.

                                Steve

                                I have embedded Broadcom NICs, not PCI.

                                 Unfortunately I don't have powerful enough equipment to handle a 1Gbps simulation through the pfSense box. I've got a Lenovo T440 with an i5, but like I said in my previous post, I can't get 1Gbps saturation via iperf on it either (it should be able to; maybe it's a Win7 issue or something). I've also got a NAS, but it has a very slow processor. I've got a MacBook Air as well, but without a gigabit adapter (WiFi only).

                                 So, using what I've got: pfSense -> Lenovo, TCP window size of 128KB:

                                [ ID] Interval      Transfer    Bandwidth
                                [  3]  0.0- 1.0 sec  37.6 MBytes  316 Mbits/sec
                                [  3]  1.0- 2.0 sec  39.1 MBytes  328 Mbits/sec
                                [  3]  2.0- 3.0 sec  38.4 MBytes  322 Mbits/sec
                                [  3]  3.0- 4.0 sec  37.8 MBytes  317 Mbits/sec
                                [  3]  4.0- 5.0 sec  37.1 MBytes  311 Mbits/sec
                                [  3]  5.0- 6.0 sec  36.9 MBytes  309 Mbits/sec
                                [  3]  6.0- 7.0 sec  37.1 MBytes  311 Mbits/sec
                                [  3]  7.0- 8.0 sec  37.0 MBytes  310 Mbits/sec
                                [  3]  8.0- 9.0 sec  40.0 MBytes  336 Mbits/sec
                                [  3]  9.0-10.0 sec  37.9 MBytes  318 Mbits/sec
                                [  3]  0.0-10.0 sec  379 MBytes  318 Mbits/sec

                                I was running top -SH in another session:

                                last pid: 65943;  load averages:  0.18,  0.04,  0.01    up 2+03:16:25  20:26:55
                                169 processes: 10 running, 139 sleeping, 3 stopped, 17 waiting
                                CPU:  0.0% user,  0.0% nice, 23.7% system, 24.9% interrupt, 51.3% idle
                                Mem: 834M Active, 1198M Inact, 699M Wired, 296K Cache, 416M Buf, 1180M Free
                                Swap: 8192M Total, 8192M Free

                                PID USERNAME PRI NICE  SIZE    RES STATE  C  TIME  WCPU COMMAND
                                  11 root    171 ki31    0K    64K CPU2    2  49.9H 91.16% idle{idle: cpu2}
                                  11 root    171 ki31    0K    64K RUN    3  50.3H 87.50% idle{idle: cpu3}
                                  11 root    171 ki31    0K    64K RUN    1  50.2H 83.25% idle{idle: cpu1}
                                  12 root    -68    -    0K  336K CPU0    0  10:10 60.89% intr{irq18: bge1
                                65943 root      76    0 13556K  2628K CPU1    1  0:08 54.88% iperf{iperf}
                                  11 root    171 ki31    0K    64K RUN    0  50.5H 43.55% idle{idle: cpu0}
                                34264 root      64  20  619M  301M bpf    1  17:53  0.00% snort{snort}
                                  258 root      76  20  6908K  1404K kqread  3  15:34  0.00% check_reload_stat
                                  12 root    -68    -    0K  336K WAIT    0  10:05  0.00% intr{irq16: bge0
                                  12 root    -32    -    0K  336K RUN    0  7:13  0.00% intr{swi4: clock}
                                64693 proxy    64  20  380M  364M kqread  2  3:35  0.00% squid
                                28093 root      44    0  5784K  1484K select  2  1:29  0.00% apinger
                                  23 root      20    -    0K    16K syncer  3  0:58  0.00% syncer
                                    0 root    -16    0    0K  176K sched  2  0:44  0.00% kernel{swapper}
                                  14 root    -16    -    0K    16K -      2  0:32  0.00% yarrow
                                20488 root      44    0 26272K  7532K kqread  0  0:24  0.00% lighttpd
                                86216 root      76  20  8296K  1932K wait    0  0:21  0.00% sh
                                  12 root    -32    -    0K  336K RUN    0  0:18  0.00% intr{swi4: clock}
                                    8 root    -16    -    0K    16K pftm    1  0:14  0.00% pfpurge
                                30278 dhcpd    44    0 15180K 10444K select  2  0:13  0.00% dhcpd

                                 I'm not sure what the bottleneck is here. On second thought, it doesn't look like a processor issue. Also, I already have IP fast-forwarding turned on (I do use IPsec, but have not had any issues with fast-forwarding yet).

                                Thanks for any help!

                                • D
                                  dmitripr
                                  last edited by

                                   Good news. I figured out the issue. The datagram length was too short (iperf defaults to 1470 bytes for UDP); once I increased it to 16000 bytes with -l, things got moving much quicker.

                                   Again, pfSense -> Lenovo:

                                  [2.1.4-RELEASE]: iperf -c 192.168.1.107 -u -b 1000m -i 1 -l 16000
                                   ------------------------------------------------------------
                                  Client connecting to 192.168.1.107, UDP port 5001
                                  Sending 16000 byte datagrams
                                  UDP buffer size: 56.0 KByte (default)

                                  [  3] local 192.168.1.1 port 46600 connected with 192.168.1.107 port 5001
                                  [ ID] Interval      Transfer    Bandwidth
                                  [  3]  0.0- 1.0 sec  104 MBytes  872 Mbits/sec
                                  [  3]  1.0- 2.0 sec  105 MBytes  884 Mbits/sec
                                  [  3]  2.0- 3.0 sec  108 MBytes  908 Mbits/sec
                                  [  3]  3.0- 4.0 sec  107 MBytes  894 Mbits/sec
                                  [  3]  4.0- 5.0 sec  109 MBytes  914 Mbits/sec
                                  [  3]  5.0- 6.0 sec  109 MBytes  915 Mbits/sec
                                  [  3]  6.0- 7.0 sec  109 MBytes  912 Mbits/sec
                                  [  3]  7.0- 8.0 sec  108 MBytes  909 Mbits/sec
                                  [  3]  8.0- 9.0 sec  106 MBytes  890 Mbits/sec
                                  [  3]  9.0-10.0 sec  105 MBytes  883 Mbits/sec
                                  [  3]  0.0-10.0 sec  1.05 GBytes  898 Mbits/sec
                                  [  3] Sent 70583 datagrams

                                  I'm pretty much hitting the practical limit of a gigabit right there.

                                   But when I switch to TCP, I'm still getting ~300Mbps.
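
                                   For what it's worth, with TCP the window size and stream count play a role similar to the datagram length above; a rough sketch of things worth trying (not a definitive fix):

                                   iperf -c 192.168.1.107 -i 1 -w 256k      # larger TCP window than the 128KB used earlier
                                   iperf -c 192.168.1.107 -i 1 -P 4      # four parallel streams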

                                  • stephenw10S
                                    stephenw10 Netgate Administrator
                                    last edited by

                                     Even though your NICs are on-board, they will still be connected via either a PCI or PCIe bus to the chipset. It seems unlikely that it would be PCI, but you never know; the exact NIC chip code will tell you. Clearly the CPU is not the restriction here; all the cores are still running idle processes.

                                    Steve

                                    • R
                                      razzfazz
                                      last edited by

                                      @dmitripr:

                                      12 root    -68    -    0K  336K CPU0    0  10:10 60.89% intr{irq18: bge1

                                      The interrupt load seems pretty high for <1Gbps throughput.

                                      • D
                                        dmitripr
                                        last edited by

                                        @razzfazz:

                                        @dmitripr:

                                        12 root    -68    -    0K  336K CPU0    0  10:10 60.89% intr{irq18: bge1

                                        The interrupt load seems pretty high for <1Gbps throughput.

                                         I'm sure these are not the best NICs out there. :) But considering 4 cores here, this is only ~15% of total CPU usage. Probably not too bad, but not great either. Intel NICs would fare better for sure.

                                        • ?
                                          Guest
                                          last edited by

                                          @dmitripr:

                                           I'm sure these are not the best NICs out there. :) But considering 4 cores here, this is only ~15% of total CPU usage. Probably not too bad, but not great either. Intel NICs would fare better for sure.

                                          And Chelsio better still.

                                          • Q
                                            q54e3w
                                            last edited by

                                             A new Intel driver, v2.5.25, for X520/X540 cards was released last week - has anybody tried it yet?

                                            https://downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=14688&lang=eng&ProdId=3412
