Netgate Discussion Forum

    10gbps performance issue

    Hardware · 32 Posts · 7 Posters · 5.8k Views
      heper

      I believe the devs should remove iperf from base installs....
      These iperf threads keep popping up every month, and the conclusion is always the same:
      Don't run iperf on pfSense.

      The only way to measure throughput is like this:
      (Iperf-server)----(pfsense)----(iperf-client)
      All other measurements are pointless and inaccurate.
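
      A minimal sketch of that setup with iperf3, assuming hypothetical addresses on hosts that sit on two different pfSense interfaces, so the traffic is actually routed and filtered by the firewall:

          # on a host behind one interface (hypothetical address 192.168.1.10)
          iperf3 -s

          # on a host behind a different interface, targeting the first host
          iperf3 -c 192.168.1.10 -t 30 -P 4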

        jazzl0ver @heper

        @heper I hope the devs don't follow your suggestion. It's like "we've got a headache, let's cut the head off". Very wise.

          stephenw10 Netgate Administrator

          I don't think iperf will be removed any time soon.

          But I agree with heper: what you're testing is not anything that can ever happen in normal use.

          It can be useful to run iperf on the firewall to test a single interface at a time if you are seeing very bad throughput testing through the firewall.

          You have two 10GbE interfaces there. Just set up another device connected to another interface and run an iperf server on that. Then test to it from a client on a different interface.

          Steve

            johnpoz LAYER 8 Global Moderator @heper

            @heper said in 10gbps performance issue:

            I believe the devs should remove iperf from base installs…

            It's not part of the base install? If it is, what's the point of the iperf package? Are you suggesting that the iperf package be removed as an install option?

            An intelligent man is sometimes forced to be drunk to spend time with his fools
            If you get confused: Listen to the Music Play
            Please don't Chat/PM me for help, unless mod related
            SG-4860 24.11 | Lab VMs 2.8, 24.11

              heper

              I see no point in having it available on pfSense.
              Time and time again, it's used to reach the wrong conclusions anyway.

                johnpoz LAYER 8 Global Moderator

                @heper said in 10gbps performance issue:

                Time and time again, it’s used to reach the wrong conclusions anyways.

                Will not disagree with you there.. But there are use cases for it, as long as you understand that you might not see full interface speed when the router itself is the endpoint; the point of a router is to route, not to act as an endpoint for a tool like this. For people who understand that, it's still useful and they won't draw the wrong conclusions.

                So I'm not sure I agree with removal... Removal will just have users asking how to install it from the FreeBSD ports/packages even if it's not part of the pfSense repository.
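
                A rough sketch of what that looks like from the pfSense shell, assuming iperf3 is available in the configured package repository (it can also be added from the GUI under System > Package Manager > Available Packages):

                    # from the console or an SSH shell
                    pkg install -y iperf3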


                  stephenw10 Netgate Administrator

                  I personally would not want to see the package removed, or iperf3 removed from our repo. I regularly use them for testing; there are many legitimate use cases.
                  For example, I often use another pfSense box as the client/server, since most of my test network is pfSense boxes.

                  Steve

                    Derelict LAYER 8 Netgate

                    Removing access to a tool that can be misused by some while being massively-useful to others sort of reeks of the "thinking" behind 🔫 control. pkg add iperf3 please.

                    (wth we still have a real gun emoji. someone's slacking.)

                    Chattanooga, Tennessee, USA
                    A comprehensive network diagram is worth 10,000 words and 15 conference calls.
                    DO NOT set a source address/port in a port forward or firewall rule unless you KNOW you need it!
                    Do Not Chat For Help! NO_WAN_EGRESS(TM)

                      jazzl0ver

                      @stephenw10 we've finally replaced the CPU with a Xeon X5560, but the issue is still there. Here are the latest measurements:
                      Single flow:

                      [2.4.3-RELEASE][admin@pfSense]/root: iperf3 -s
                      -----------------------------------------------------------
                      Server listening on 5201
                      -----------------------------------------------------------
                      Accepted connection from 10.10.10.20, port 40256
                      [  5] local 10.10.10.254 port 5201 connected to 10.10.10.20 port 40258
                      [ ID] Interval           Transfer     Bitrate
                      [  5]   0.00-1.00   sec   150 MBytes  1.26 Gbits/sec
                      [  5]   1.00-2.00   sec   219 MBytes  1.83 Gbits/sec
                      [  5]   2.00-3.00   sec   227 MBytes  1.90 Gbits/sec
                      [  5]   3.00-4.00   sec   258 MBytes  2.16 Gbits/sec
                      [  5]   4.00-5.00   sec   298 MBytes  2.50 Gbits/sec
                      [  5]   5.00-6.00   sec   298 MBytes  2.50 Gbits/sec
                      [  5]   6.00-7.00   sec   298 MBytes  2.50 Gbits/sec
                      [  5]   7.00-8.00   sec   298 MBytes  2.50 Gbits/sec
                      [  5]   8.00-9.00   sec   298 MBytes  2.50 Gbits/sec
                      [  5]   9.00-10.00  sec   299 MBytes  2.51 Gbits/sec
                      [  5]  10.00-10.01  sec  1.99 MBytes  2.48 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [ ID] Interval           Transfer     Bitrate
                      [  5]   0.00-10.01  sec  2.58 GBytes  2.22 Gbits/sec                  receiver
                      

                      4 flows (-P 4):

                      [2.4.3-RELEASE][admin@pfSense]/root: iperf3 -s
                      -----------------------------------------------------------
                      Server listening on 5201
                      -----------------------------------------------------------
                      Accepted connection from 10.10.10.20, port 40426
                      [  5] local 10.10.10.254 port 5201 connected to 10.10.10.20 port 40428
                      [  8] local 10.10.10.254 port 5201 connected to 10.10.10.20 port 40430
                      [ 10] local 10.10.10.254 port 5201 connected to 10.10.10.20 port 40432
                      [ 12] local 10.10.10.254 port 5201 connected to 10.10.10.20 port 40434
                      [ ID] Interval           Transfer     Bitrate
                      [  5]   0.00-1.00   sec  45.7 MBytes   383 Mbits/sec
                      [  8]   0.00-1.00   sec  48.9 MBytes   410 Mbits/sec
                      [ 10]   0.00-1.00   sec  40.2 MBytes   337 Mbits/sec
                      [ 12]   0.00-1.00   sec  47.4 MBytes   397 Mbits/sec
                      [SUM]   0.00-1.00   sec   182 MBytes  1.53 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]   1.00-2.00   sec  46.3 MBytes   389 Mbits/sec
                      [  8]   1.00-2.00   sec   108 MBytes   909 Mbits/sec
                      [ 10]   1.00-2.00   sec  49.1 MBytes   412 Mbits/sec
                      [ 12]   1.00-2.00   sec  38.7 MBytes   325 Mbits/sec
                      [SUM]   1.00-2.00   sec   243 MBytes  2.03 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]   2.00-3.00   sec  46.9 MBytes   394 Mbits/sec
                      [  8]   2.00-3.00   sec   108 MBytes   907 Mbits/sec
                      [ 10]   2.00-3.00   sec  36.6 MBytes   307 Mbits/sec
                      [ 12]   2.00-3.00   sec  25.9 MBytes   217 Mbits/sec
                      [SUM]   2.00-3.00   sec   218 MBytes  1.83 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]   3.00-4.00   sec  58.5 MBytes   491 Mbits/sec
                      [  8]   3.00-4.00   sec  94.0 MBytes   788 Mbits/sec
                      [ 10]   3.00-4.00   sec  44.5 MBytes   374 Mbits/sec
                      [ 12]   3.00-4.00   sec  37.4 MBytes   314 Mbits/sec
                      [SUM]   3.00-4.00   sec   234 MBytes  1.97 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]   4.00-5.00   sec  56.7 MBytes   475 Mbits/sec
                      [  8]   4.00-5.00   sec  79.0 MBytes   663 Mbits/sec
                      [ 10]   4.00-5.00   sec  44.4 MBytes   372 Mbits/sec
                      [ 12]   4.00-5.00   sec  38.5 MBytes   323 Mbits/sec
                      [SUM]   4.00-5.00   sec   219 MBytes  1.83 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]   5.00-6.00   sec  61.9 MBytes   520 Mbits/sec
                      [  8]   5.00-6.00   sec  70.0 MBytes   587 Mbits/sec
                      [ 10]   5.00-6.00   sec  48.5 MBytes   407 Mbits/sec
                      [ 12]   5.00-6.00   sec  42.3 MBytes   354 Mbits/sec
                      [SUM]   5.00-6.00   sec   223 MBytes  1.87 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]   6.00-7.00   sec  68.5 MBytes   575 Mbits/sec
                      [  8]   6.00-7.00   sec  54.1 MBytes   454 Mbits/sec
                      [ 10]   6.00-7.00   sec  54.6 MBytes   458 Mbits/sec
                      [ 12]   6.00-7.00   sec  47.7 MBytes   400 Mbits/sec
                      [SUM]   6.00-7.00   sec   225 MBytes  1.89 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]   7.00-8.00   sec  65.1 MBytes   546 Mbits/sec
                      [  8]   7.00-8.00   sec  55.4 MBytes   464 Mbits/sec
                      [ 10]   7.00-8.00   sec  49.2 MBytes   413 Mbits/sec
                      [ 12]   7.00-8.00   sec  49.9 MBytes   419 Mbits/sec
                      [SUM]   7.00-8.00   sec   220 MBytes  1.84 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]   8.00-9.00   sec  67.0 MBytes   562 Mbits/sec
                      [  8]   8.00-9.00   sec  51.9 MBytes   435 Mbits/sec
                      [ 10]   8.00-9.00   sec  48.3 MBytes   405 Mbits/sec
                      [ 12]   8.00-9.00   sec  56.3 MBytes   472 Mbits/sec
                      [SUM]   8.00-9.00   sec   224 MBytes  1.88 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]   9.00-10.00  sec  65.1 MBytes   546 Mbits/sec
                      [  8]   9.00-10.00  sec  52.0 MBytes   436 Mbits/sec
                      [ 10]   9.00-10.00  sec  54.7 MBytes   459 Mbits/sec
                      [ 12]   9.00-10.00  sec  65.1 MBytes   546 Mbits/sec
                      [SUM]   9.00-10.00  sec   237 MBytes  1.99 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [  5]  10.00-10.01  sec   636 KBytes   432 Mbits/sec
                      [  8]  10.00-10.01  sec   636 KBytes   432 Mbits/sec
                      [ 10]  10.00-10.01  sec   663 KBytes   450 Mbits/sec
                      [ 12]  10.00-10.01  sec   764 KBytes   519 Mbits/sec
                      [SUM]  10.00-10.01  sec  2.64 MBytes  1.83 Gbits/sec
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [ ID] Interval           Transfer     Bitrate
                      [  5]   0.00-10.01  sec   582 MBytes   488 Mbits/sec                  receiver
                      [  8]   0.00-10.01  sec   722 MBytes   605 Mbits/sec                  receiver
                      [ 10]   0.00-10.01  sec   471 MBytes   395 Mbits/sec                  receiver
                      [ 12]   0.00-10.01  sec   450 MBytes   377 Mbits/sec                  receiver
                      [SUM]   0.00-10.01  sec  2.17 GBytes  1.86 Gbits/sec                  receiver
                      -----------------------------------------------------------
                      

                      top output during the tests:

                      [2.4.3-RELEASE][admin@pfSense]/root: top -aSH
                      last pid: 97946;  load averages:  0.72,  0.28,  0.12                                                        up 2+12:43:41  09:10:01
                      329 processes: 17 running, 241 sleeping, 71 waiting
                      CPU:  0.1% user,  0.5% nice,  2.5% system,  5.4% interrupt, 91.6% idle
                      Mem: 214M Active, 565M Inact, 830M Wired, 232M Buf, 30G Free
                      Swap: 3712M Total, 3712M Free
                      
                        PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
                         11 root       155 ki31     0K   256K CPU7    7  60.6H 100.00% [idle{idle: cpu7}]
                         11 root       155 ki31     0K   256K CPU1    1  60.6H 100.00% [idle{idle: cpu1}]
                         11 root       155 ki31     0K   256K CPU2    2  60.5H 100.00% [idle{idle: cpu2}]
                         11 root       155 ki31     0K   256K CPU9    9  60.5H 100.00% [idle{idle: cpu9}]
                         11 root       155 ki31     0K   256K CPU11  11  60.5H 100.00% [idle{idle: cpu11}]
                         11 root       155 ki31     0K   256K CPU13  13  60.5H 100.00% [idle{idle: cpu13}]
                         11 root       155 ki31     0K   256K CPU6    6  60.6H  99.99% [idle{idle: cpu6}]
                         11 root       155 ki31     0K   256K CPU4    4  60.6H  99.88% [idle{idle: cpu4}]
                         11 root       155 ki31     0K   256K CPU3    3  60.5H  98.83% [idle{idle: cpu3}]
                         11 root       155 ki31     0K   256K RUN    12  60.5H  98.45% [idle{idle: cpu12}]
                         11 root       155 ki31     0K   256K CPU10  10  60.5H  94.15% [idle{idle: cpu10}]
                         11 root       155 ki31     0K   256K CPU14  14  60.5H  89.01% [idle{idle: cpu14}]
                         12 root       -92    -     0K  1136K WAIT    0   2:07  84.99% [intr{irq277: bxe3:fp00}]
                         11 root       155 ki31     0K   256K CPU5    5  60.6H  84.03% [idle{idle: cpu5}]
                      24259 root        52    0 19752K  5628K select  9   0:20  75.33% iperf3 -s
                         11 root       155 ki31     0K   256K CPU8    8  60.5H  70.66% [idle{idle: cpu8}]
                         11 root       155 ki31     0K   256K CPU15  15  60.5H  52.12% [idle{idle: cpu15}]
                         11 root       155 ki31     0K   256K CPU0    0  60.5H  14.83% [idle{idle: cpu0}]
                        254 root        23    0   266M 44468K accept 12   0:28   1.20% php-fpm: pool nginx (php-fpm){php-fpm}
                      97017 root        40   20   728M   570M bpf     8   4:14   0.57% /usr/local/bin/snort -R 41368 -D -q --suppress-config-log -l /var/
                         12 root       -100    -     0K  1136K WAIT    0   0:53   0.25% [intr{irq20: hpet0 uhci3}]
                         12 root       -60    -     0K  1136K WAIT    9   3:06   0.12% [intr{swi4: clock (0)}]
                      82170 root        20    0 22116K  4796K CPU12  12   0:00   0.10% top -aSH
                         12 root       -92    -     0K  1136K WAIT    1   1:36   0.09% [intr{irq273: bxe2:fp01}]
                      10462 root        20    0 20356K  6412K select 11   0:10   0.07% /usr/local/sbin/openvpn --config /var/etc/openvpn/server1.conf
                         12 root       -92    -     0K  1136K WAIT    1   1:52   0.07% [intr{irq268: bxe1:fp01}]
                         12 root       -92    -     0K  1136K WAIT    0   2:19   0.06% [intr{irq267: bxe1:fp00}]
                         12 root       -92    -     0K  1136K WAIT    1   1:30   0.06% [intr{irq263: bxe0:fp01}]
                         12 root       -92    -     0K  1136K WAIT    2   2:15   0.06% [intr{irq264: bxe0:fp02}]
                         12 root       -92    -     0K  1136K WAIT    2   1:59   0.06% [intr{irq279: bxe3:fp02}]
                      60178 www         20    0 58924K 12688K kqread  9   0:01   0.05% /usr/local/sbin/haproxy -f /var/etc/haproxy/haproxy.cfg -p /var/ru
                         12 root       -92    -     0K  1136K WAIT    3   2:28   0.05% [intr{irq280: bxe3:fp03}]
                      60322 www         20    0 58924K 12632K kqread 11   0:01   0.04% /usr/local/sbin/haproxy -f /var/etc/haproxy/haproxy.cfg -p /var/ru
                      

                      Do you still see a CPU bottleneck here?

                        xciter327

                        To me it looks like each adapter (bxeX) is using only one queue. The other queues seem to be there, but they are not in use.

                           12 root       -92    -     0K  1136K WAIT    0   2:07  84.99% [intr{irq277: bxe3:fp00}]
                           12 root       -92    -     0K  1136K WAIT    1   1:36   0.09% [intr{irq273: bxe2:fp01}]
                           12 root       -92    -     0K  1136K WAIT    1   1:52   0.07% [intr{irq268: bxe1:fp01}]
                           12 root       -92    -     0K  1136K WAIT    0   2:19   0.06% [intr{irq267: bxe1:fp00}]
                           12 root       -92    -     0K  1136K WAIT    1   1:30   0.06% [intr{irq263: bxe0:fp01}]
                           12 root       -92    -     0K  1136K WAIT    2   2:15   0.06% [intr{irq264: bxe0:fp02}]
                           12 root       -92    -     0K  1136K WAIT    2   1:59   0.06% [intr{irq279: bxe3:fp02}]
                        

                        I would expect to see the load distributed across all of them.
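
                        One way to check how the interrupt load spreads across those queues while a test is running, using stock FreeBSD tools (a sketch; the fpNN suffix is the queue number):

                            # per-IRQ interrupt counts, one line per bxeX queue
                            vmstat -i | grep bxe

                            # live per-thread CPU usage, as in the output above
                            top -aSH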

                          stephenw10 Netgate Administrator

                          Have you tried a test through the firewall as opposed to terminating on it?

                          Steve

                            jazzl0ver @xciter327

                            Thanks, @xciter327! That does look like an answer! I'm going to check it soon.

                            @stephenw10, not yet. It's on my checklist.

                              stephenw10 Netgate Administrator

                              Mmm, it certainly isn't spreading the load well. However, no CPU core is at 100% either, so that in itself should not be the restriction.

                              Steve

                                jazzl0ver

                                It appears there's a known issue with Broadcom BCM57810 adapters in FreeBSD (LACP bonding is not working well): https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213606
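
                                If the 10G ports are members of a lagg, one way to watch the LACP state of each member while testing is a sketch like this (interface name hypothetical):

                                    # per-member LACP flags; healthy members show ACTIVE,COLLECTING,DISTRIBUTING
                                    ifconfig lagg0 | grep laggport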

                                Today I tried to run some tests through the HAProxy instance on the firewall, and the box just fell over after reaching ~140,000 connections. The log contained:

                                Aug  9 05:20:17 pfSense kernel: bxe0: ERROR: ECORE: timeout waiting for state 1
                                Aug  9 05:20:17 pfSense kernel: bxe0: ERROR: Queue(3) SETUP failed (rc = -4)
                                Aug  9 05:20:17 pfSense kernel: bxe0: ERROR: Queue(3) setup failed rc = -4
                                Aug  9 05:20:18 pfSense rc.gateway_alarm[19058]: >>> Gateway alarm: WANGW (Addr:a.b.c.d Alarm:1 RTT:2000271ms RTTsd:3249226ms Loss:21%)
                                ...
                                Aug  9 05:20:28 pfSense kernel: bxe1: ERROR: TX watchdog timeout on fp[01], resetting!
                                Aug  9 05:20:34 pfSense kernel: bxe1: ERROR: ECORE: timeout waiting for state 7
                                Aug  9 05:21:02 pfSense kernel: bxe0: ERROR: FW failed to respond!
                                Aug  9 05:21:02 pfSense kernel: bxe0: ERROR: Initialization failed, stack notified driver is NOT running!
                                Aug  9 05:21:17 pfSense rc.gateway_alarm[45717]: >>> Gateway alarm: WANGW (Addr:a.b.c.d Alarm:1 RTT:0ms RTTsd:0ms Loss:100%)
                                ...
                                Aug  9 05:21:31 pfSense kernel: bxe2: Interface stopped DISTRIBUTING, possible flapping
                                Aug  9 05:21:42 pfSense sshd[82110]: Timeout, client not responding.
                                Aug  9 05:21:54 pfSense sshd[19888]: Timeout, client not responding.
                                Aug  9 05:21:55 pfSense kernel: bxe0: Interface stopped DISTRIBUTING, possible flapping
                                Aug  9 05:22:43 pfSense kernel: bxe1: ERROR: ECORE: timeout waiting for state 1
                                Aug  9 05:22:43 pfSense kernel: bxe1: ERROR: Queue(0) SETUP failed (rc = -4)
                                Aug  9 05:22:43 pfSense kernel: bxe1: ERROR: Setup leading failed! rc = -4
                                Aug  9 05:23:14 pfSense kernel: bxe1: ERROR: Initialization failed, stack notified driver is NOT running!
                                Aug  9 05:23:36 pfSense kernel: bxe3: Interface stopped DISTRIBUTING, possible flapping
                                Aug  9 05:24:23 pfSense kernel: bxe1: Interface stopped DISTRIBUTING, possible flapping
                                

                                Going to change the adapters to Intel.
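
                                For anyone repeating that kind of load test, a rough sketch of watching the firewall's state table while connections ramp up (pfctl is stock; the maximum is set under System > Advanced > Firewall & NAT):

                                    # current state-table entries and rates
                                    pfctl -si | grep -i -A 3 'state table'

                                    # configured hard limits, including states
                                    pfctl -sm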
