Netgate Discussion Forum

    Poor 10gbps WAN throughput

    General pfSense Questions
      vertigo8 @stephenw10

      @stephenw10 said in Poor 10gbps WAN throughput:

      Ok that looks more like what I'd expect. Are you able to get the full output including the headers like:

      last pid: 99664;  load averages:  0.07,  0.20,  0.21                                                                                up 33+11:14:15  13:04:25
      365 threads:   3 running, 341 sleeping, 21 waiting
      CPU 0:  0.0% user,  0.4% nice,  1.2% system,  0.0% interrupt, 98.4% idle
      CPU 1:  0.0% user,  0.0% nice,  1.6% system,  0.0% interrupt, 98.4% idle
      Mem: 206M Active, 1361M Inact, 218M Wired, 84M Buf, 206M Free
      
        PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
         10 root        187 ki31     0B    16K CPU0     0 744.7H  98.73% [idle{idle: cpu0}]
         10 root        187 ki31     0B    16K RUN      1 743.5H  97.83% [idle{idle: cpu1}]
      68975 root         20    0    12M  5700K select   1 406:19   1.03% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid{ntpd}
       8617 root         53   20   449M   418M bpf      1 112:32   0.60% /usr/local/bin/snort -R _6830 -M -D -q --suppress-config-log --daq pcap --daq-mode passiv
          0 root        -64    -     0B   176K -        0 186:10   0.36% [kernel{dummynet}]
      99664 root         20    0  7452K  3440K CPU1     1   0:00   0.30% top -HaSP
         14 root        -60    -     0B    88K -        0 105:45   0.22% [usb{usbus1}]
      
      last pid: 52380;  load averages:  1.18,  0.75,  0.44                                                                                                           up 0+00:13:11  12:42:43
      312 threads:   6 running, 272 sleeping, 34 waiting
      CPU 0:  0.0% user,  0.0% nice, 62.2% system,  0.0% interrupt, 37.8% idle
      CPU 1:  0.0% user,  0.0% nice, 41.3% system,  0.0% interrupt, 58.7% idle
      CPU 2:  0.0% user,  0.0% nice, 69.7% system,  0.0% interrupt, 30.3% idle
      CPU 3:  0.0% user,  0.0% nice, 23.7% system,  0.0% interrupt, 76.3% idle
      Mem: 127M Active, 111M Inact, 506M Wired, 56K Buf, 7038M Free
      ARC: 118M Total, 21M MFU, 93M MRU, 132K Anon, 771K Header, 3098K Other
           88M Compressed, 222M Uncompressed, 2.53:1 Ratio
      Swap: 1024M Total, 1024M Free
      
        PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
         11 root        187 ki31     0B    64K RUN      3  11:44  75.72% [idle{idle: cpu3}]
          0 root        -60    -     0B  1664K CPU2     2   1:06  71.20% [kernel{if_io_tqg_2}]
          0 root        -60    -     0B  1664K -        0   0:59  60.35% [kernel{if_io_tqg_0}]
         11 root        187 ki31     0B    64K CPU1     1  11:33  57.13% [idle{idle: cpu1}]
          0 root        -60    -     0B  1664K -        1   1:02  42.81% [kernel{if_io_tqg_1}]
         11 root        187 ki31     0B    64K CPU0     0  11:41  39.58% [idle{idle: cpu0}]
         11 root        187 ki31     0B    64K RUN      2  11:31  28.48% [idle{idle: cpu2}]
          0 root        -60    -     0B  1664K -        3   0:52  24.20% [kernel{if_io_tqg_3}]
          0 root        -60    -     0B  1664K -        2   0:02   0.13% [kernel{if_config_tqg_0}]
      24441 root         20    0    14M  4396K CPU3     3   0:00   0.09% top -HaSP
          7 root        -16    -     0B    16K pftm     3   0:00   0.04% [pf purge]
      74843 root         20    0    13M  3628K bpf      0   0:00   0.04% /usr/local/sbin/filterlog -i pflog0 -p /var/run/filterlog.pid
      99573 root         20    0    13M  2992K kqread   3   0:00   0.02% /usr/sbin/syslogd -s -c -c -l /var/dhcpd/var/run/log -P /var/run/syslog.pid -f /etc/syslog.conf
          2 root        -60    -     0B    64K WAIT     0   0:00   0.01% [clock{clock (0)}]
      85053 unbound      20    0    85M    54M kqread   1   0:00   0.01% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
          8 root        -16    -     0B    16K -        1   0:00   0.01% [rand_harvestq]
      29967 root         20    0    22M    11M select   2   0:00   0.01% sshd: admin@pts/0 (sshd)
         21 root        -16    -     0B    48K psleep   3   0:00   0.01% [pagedaemon{dom0}]
      88168 dhcpd        20    0    27M    13M select   3   0:00   0.01% /usr/local/sbin/dhcpd -user dhcpd -group _dhcp -chroot /var/dhcpd -cf /etc/dhcpd.conf -pf /var/run/dhcpd.pid ix1 ig
      

      Oops, I truncated the output before. This is what it looks like during the test.
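      For anyone else wanting to grab the same kind of snapshot, this is roughly how it's done from the pfSense shell while the test runs from a client behind the firewall (the batch flags are optional, just to make the full output with headers easy to copy):

      # interactive per-thread, per-CPU view while the test is running
      top -HaSP
      # or a one-shot batch capture that's easier to paste in full, headers included
      top -HaSP -b -d 2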

        stephenw10 Netgate Administrator

        Hmm, nothing jumps out there. What throughput were you seeing when that was taken?

          vertigo8 @stephenw10

          @stephenw10 Around 2000 there. The ISP engineer actually came out today and used an OWC-branded Thunderbolt RJ45 adapter on macOS. He was able to get closer to 8000, so I have no clue what the issue is. Would you expect 5000 to be reasonable?

            stephenw10 Netgate Administrator

            I would expect to see something in the 3-4Gbps range with a TCP test using full-sized packets. You can see from the values there that it scales to that sort of range until CPU cores start to hit 100%.

            Was the ISP engineer testing against the same servers? 'Special' ISP test site?

              vertigo8 @stephenw10

              @stephenw10 Same test site, but it does seem weird. Would it be worth setting a jumbo-frame MTU?

                stephenw10 Netgate Administrator

                Only if the WAN supports an MTU >1500, which I doubt it does.

                Are you seeing any errors or collisions on the interfaces?

                Are you able to reassign the WAN NIC to something internal and test between the 10G NICs directly?
                Or test against a local 10G device on the WAN side?
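                Roughly what I'd run from the shell for that, as a sketch; the interface name and addresses below are only examples, so adjust them for your setup:

                # cumulative error / collision counters per interface
                netstat -i
                # or watch the totals update every second (add -I ix1 for a single NIC)
                netstat -w 1

                # local iperf3 test across the firewall between two 10G-attached hosts
                # on a host behind one NIC:
                iperf3 -s
                # on a host behind the other NIC (example address, 4 parallel streams):
                iperf3 -c 192.168.1.10 -t 30 -P 4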

                  antst

                  Actually, I have the same problem. It caps at 2.5Gbps with iperf3 (regardless of the number of threads) and at 3.5Gbps with a speedtest. Both ways, up and down, the numbers are the same.
                  Wiring something directly to the ONT gives the full 8Gbps that is supposed to be there.

                  Also a Netgate 6100, attached to the ONT via 10GBase-T.

                  Looking at the specs, I was expecting to get at least 6 with iperf.

                  Are the device specs for standard frames or jumbo frames?
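                  For reference, this is roughly how I was testing; the address below is just a placeholder for the iperf3 server I used:

                  # single TCP stream
                  iperf3 -c 198.51.100.10 -t 30
                  # several parallel streams (made no difference for me)
                  iperf3 -c 198.51.100.10 -t 30 -P 8
                  # reverse direction (download instead of upload)
                  iperf3 -c 198.51.100.10 -t 30 -P 8 -R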

                    stephenw10 Netgate Administrator @antst

                    @antst said in Poor 10gbps WAN throughput:

                    Also a Netgate 6100, attached to the ONT via 10GBase-T.

                    Via an RJ-45 SFP+ module?

                    How were you running the speedtest? From a client behind the 6100?

                      antst @stephenw10

                      @stephenw10
                      I tried all possible ways :)

                      But then I looked around the web, and it turned out that this is the actual cap of the 6100.

                      Despite all my love for pfSense, I flashed VyOS on the 6100 (it took me hours to convert the config) and throughput doubled.
                      Sad that TNSR is not available for the homelab anymore.

                        stephenw10 Netgate Administrator

                        Mmm, I'd expect to see something in the 3-4Gbps range for most setups like that. There are a lot of factors though: WAN speed, latency, the speedtest server, etc.

                          antst @stephenw10

                          @stephenw10

                          Yep, in my best attempts I was able to push 3.5Gbps.
                          VyOS pushes 5.5 easily (sometimes higher) with exactly the same infra and speedtest server.
                          I still need to get a new router though; a direct connection to the WAN from decent hardware pushes 8Gbps.

                            stephenw10 Netgate Administrator

                            Mmm, TNSR will push that easily on the 6100. You might email our sales guys and see what they can do for you. At least a trial to prove it out.

                              vertigo8 @stephenw10

                              So I've been testing various things over the past few weeks.

                              I've changed my NIC to an X550-T2 and reinstalled Win11. I was previously using an ASUS 10GbE card, but many reviews said it's flaky. I tend to agree.

                              I saw a number of forum posts from various sources outside of here suggesting to disable any onboard NICs in the BIOS before installing, install the 10Gb NIC, and then re-enable the onboard NICs.

                              I have 2 onboard Intel 1Gb NICs, and to my surprise this actually worked. I'm literally using the same drivers for the 10GbE NIC and the speed has doubled. I'm now getting 4000 instead of 2000.

                              4000 is still below what it should be, so I'm inclined to believe I'm hitting hardware limitations on the 6100. I may take up the offer of trialling TNSR to see where that takes me. Failing that, I'm going to build my own pfSense box.

                                stephenw10 Netgate Administrator

                                Hmm, those were just changes on the Windows host running the test client?

                                  antst @stephenw10

                                  @stephenw10

                                  I am pretty sure TNSR will do :)
                                  A trial is a no-go; that would just be wasting time on configuration. Is there something to assist in converting a pfSense config to a TNSR one?

                                    vertigo8 @stephenw10

                                    Yes, that's right. Same drivers too.

                                    It seems a lot of Win11 users were having the same slow speeds as me. The only changes I made were enabling jumbo frames on pfSense and on the NIC itself, and upping the buffers to their max in the Win11 NIC settings. That seemed to give small differences in speed but nothing earth-shattering. Disabling the onboard NICs made the most difference.
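                                    If anyone wants to sanity-check that the jumbo frames actually took effect, a quick test from the pfSense shell looks something like this (the interface name and LAN address are only examples):

                                    # confirm the new MTU really applied on the interface (ix1 is just an example name)
                                    ifconfig ix1 | grep mtu
                                    # don't-fragment ping at the full jumbo size to a LAN host:
                                    # 8972 bytes of ICMP payload + 28 bytes of headers = a 9000-byte packet
                                    ping -D -s 8972 192.168.1.50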

                                      antst @vertigo8

                                      @vertigo8
                                      The ASUS 10GbE card, if I recall right, is based on the AQC107 chip or something like it. It can hardly push 10Gbps :)
                                      It is just a cheap chip made for the mainstream, where 5G is supposed to be enough for most people.
                                      Getting the max out of the AQC107, if I recall my experience correctly, requires a decent card on the other side and tuning on both sides. The AQC107 is tricky and sensitive to many things.
                                      If you have the choice, better to use Intel, even if it is more expensive.
                                      I had to go with it only because I didn't have much choice in terms of TB4 10GbE NICs.

                                        Gblenn @antst

                                        @antst said in Poor 10gbps WAN throughput:

                                        @vertigo8
                                        The ASUS 10GbE card, if I recall right, is based on the AQC107 chip or something like it. It can hardly push 10Gbps :)
                                        It is just a cheap chip made for the mainstream, where 5G is supposed to be enough for most people.
                                        Getting the max out of the AQC107, if I recall my experience correctly, requires a decent card on the other side and tuning on both sides. The AQC107 is tricky and sensitive to many things.
                                        If you have the choice, better to use Intel, even if it is more expensive.
                                        I had to go with it only because I didn't have much choice in terms of TB4 10GbE NICs.

                                        I get 8+ with the same chip on a TP-Link card, but as I wrote earlier, only after doing a driver repair using the Marvell installer. And I have to do that after every power off/on of the PC, otherwise it caps out at ~2Gbit.

                                          keyser Rebel Alliance @Gblenn

                                          @Gblenn Did you know you can do this:
                                          https://answers.microsoft.com/en-us/windows/forum/all/how-can-i-prevent-automatic-updating-a-specific/9967b1cf-dc6f-495d-82be-4ab3f3207ff1

                                          Love the no fuss of using the official appliances :-)

                                            antst @Gblenn

                                            @Gblenn
                                            Yep, it seems that throughput in the 7-8Gbps range is about the maximum for this chip.
                                            Occasionally I see up to 9, but that's rare.
