Netgate Discussion Forum

    Unable to hit 1Gbps connection through Bell PPPOE. Can't figure out bottleneck.

    Hardware
    10 Posts 2 Posters 2.0k Views
      Slither13
      last edited by Slither13

      I've read about 100 different posts now and can't figure out why I can't hit 1 Gbps upload or download. I'm stuck around 740 Mbps download and 540 Mbps upload. I ran tests on the pfSense box, LAN computers, and the home server, and got similar speeds everywhere. When running the HH3000 I was getting 1.35 Gbps down and 940 Mbps up, and each device in the house was hitting 1 Gbps both ways.

      Hardware:

      Dell Poweredge R210ii
      Xeon E3-1240 (3.3GHz)
      16 GB Ram
      60GB SSD
      Intel X520-DA2 10Gb
      SFP-H10GB-CU3m Cisco 10Gb Twinax Copper DAC Cable
      Mikrotik CSS326 Switch

      Setup:

      Bell 1.5 Gbps download / 940 Mbps upload package
      I removed the ONT from the Home Hub 3000 and plugged it into one of the Intel ports. I then ran the Twinax into the Mikrotik from the second SFP+ port.
      I set up PPPoE on VLAN 35. No connection problems there. MTU is set to 1500.
      I also booted the Linux update image and have the R210ii firmware updated to the December 2018 release, which was the last one. It was an update for Meltdown and Spectre.
      Interface ports are ix0 and ix1
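
      One thing worth noting about that MTU setting: PPPoE adds 8 bytes of framing (a 6-byte PPPoE header plus a 2-byte PPP protocol ID), so a full 1500-byte session MTU only works if the parent VLAN interface passes 1508-byte frames (RFC 4638 "baby jumbo" frames). If the parent is capped at 1500, the session MTU falls back to 1492. A quick sketch of the arithmetic:

      ```shell
      #!/bin/sh
      # PPPoE framing overhead: 6-byte PPPoE header + 2-byte PPP protocol ID
      pppoe_overhead=8
      session_mtu=1500

      # Parent frame size needed to carry a full 1500-byte PPPoE session MTU
      # (RFC 4638 baby jumbo frames)
      parent_mtu=$((session_mtu + pppoe_overhead))
      echo "parent interface MTU needed: $parent_mtu"   # prints 1508

      # Without baby jumbo support the parent stays at 1500, so the session shrinks
      fallback_mtu=$((1500 - pppoe_overhead))
      echo "fallback PPPoE session MTU: $fallback_mtu"  # prints 1492
      ```

      (The ifconfig output later in this thread showing the parent at mtu 1508 is consistent with this.)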

      I followed the entire series by SpaceInvader One (playlist), which covered BIOS and initial setup.

      I'm running pfBlockerNG with 6 feeds and TLD

      When running a CLI speed test, temps hit 56°C and CPU usage is around 10%. At idle it's 1% or 2%.

      loader.conf.local:
      net.isr.dispatch=deferred
      net.isr.maxthreads=-1
      net.inet.tcp.tso=0
      kern.ipc.nmbclusters="1000000"
      kern.ipc.nmbjumbop="524288"
      dev.ix.0.fc=0

      TSO and LRO are disabled. Hardware Checksum Offloading is on, but toggling either option didn't change speeds.

      Any ideas on what the bottleneck could be?

        stephenw10 Netgate Administrator
        last edited by

        Check the per core CPU usage using top -aSH at the command line while testing.

        The PPPoE session is terminated in pfSense? That will be limiting it to a single CPU core.

        Run packet capture and make sure you are not seeing packet fragmentation across the link. Adjust your MTU size to suit if you are.

        Is that one of those 2.5G devices to get the 1.5Gbps connection? I assume the NIC does not do 2.5G, it links at 1G?

        Steve

          Slither13
          last edited by

          @stephenw10 said in Unable to hit 1Gbps connection through Bell PPPOE. Can't figure out bottleneck.:

          top -aSH

          The session is terminated in pfSense.

          When testing fragmentation, I was good at 1472, and adding the 28 bytes of overhead gives 1500.
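
          (For anyone following along: the 28 bytes is the 20-byte IPv4 header plus the 8-byte ICMP echo header, so a 1472-byte don't-fragment ping corresponds to a full 1500-byte packet on the wire. A sketch of the math, with the probe command itself as a comment since it needs the live link:)

          ```shell
          #!/bin/sh
          # Why 1472 + 28 = a full 1500-byte packet:
          payload=1472   # largest payload that passed with don't-fragment set
          ip_hdr=20      # IPv4 header, no options
          icmp_hdr=8     # ICMP echo header
          echo "on-wire packet size: $((payload + ip_hdr + icmp_hdr))"   # prints 1500

          # The probe itself on FreeBSD/pfSense (-D sets don't-fragment, -s the payload);
          # needs the live link, so commented out. The target host is just an example.
          # ping -D -s 1472 8.8.8.8
          ```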

          It is a 2.5Gbps device and I assume it connects at 1Gbps.

          Peak Download:

          last pid: 15944; load averages: 0.19, 0.23, 0.25 up 0+11:28:59 08:27:12
          233 processes: 9 running, 178 sleeping, 46 waiting

          Mem: 2380M Active, 491M Inact, 623M Wired, 247M Buf, 12G Free
          Swap: 2862M Total, 2862M Free

          PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
          11 root 155 ki31 0K 128K CPU1 1 673:19 100.00% [idle{idle: cpu1}]
          11 root 155 ki31 0K 128K CPU2 2 672:28 100.00% [idle{idle: cpu2}]
          11 root 155 ki31 0K 128K CPU5 5 671:19 100.00% [idle{idle: cpu5}]
          11 root 155 ki31 0K 128K CPU6 6 670:12 100.00% [idle{idle: cpu6}]
          11 root 155 ki31 0K 128K CPU4 4 669:41 100.00% [idle{idle: cpu4}]
          11 root 155 ki31 0K 128K RUN 3 672:17 98.78% [idle{idle: cpu3}]
          11 root 155 ki31 0K 128K CPU0 0 676:19 88.09% [idle{idle: cpu0}]
          11 root 155 ki31 0K 128K CPU7 7 669:24 86.77% [idle{idle: cpu7}]
          12 root -72 - 0K 736K WAIT 4 0:43 14.45% [intr{swi1: netisr 0}]
          12 root -92 - 0K 736K WAIT 0 0:32 12.60% [intr{irq264: ix0:q0}]
          12 root -92 - 0K 736K WAIT 0 0:04 1.07% [intr{irq273: ix1:q0}]
          12 root -72 - 0K 736K WAIT 3 0:01 0.78% [intr{swi1: netisr 3}]
          87195 root 21 0 98360K 39820K piperd 7 0:01 0.68% php-fpm: pool nginx (php-fpm){php-fpm}
          12 root -72 - 0K 736K WAIT 5 0:01 0.59% [intr{swi1: netisr 1}]
          12 root -72 - 0K 736K WAIT 0 0:08 0.49% [intr{swi1: netisr 6}]
          12 root -72 - 0K 736K WAIT 4 0:02 0.49% [intr{swi1: netisr 2}]
          12 root -72 - 0K 736K WAIT 1 0:02 0.49% [intr{swi1: netisr 5}]
          12 root -72 - 0K 736K WAIT 0 0:01 0.29% [intr{swi1: netisr 4}]

          Peak Upload:

          last pid: 50852; load averages: 0.60, 0.35, 0.29 up 0+11:30:29 08:28:42
          232 processes: 10 running, 177 sleeping, 45 waiting

          Mem: 2380M Active, 491M Inact, 635M Wired, 247M Buf, 12G Free
          Swap: 2862M Total, 2862M Free

          PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
          11 root 155 ki31 0K 128K CPU5 5 672:42 95.56% [idle{idle: cpu5}]
          11 root 155 ki31 0K 128K RUN 2 673:52 94.58% [idle{idle: cpu2}]
          11 root 155 ki31 0K 128K RUN 7 670:46 94.38% [idle{idle: cpu7}]
          11 root 155 ki31 0K 128K CPU4 4 671:03 93.26% [idle{idle: cpu4}]
          11 root 155 ki31 0K 128K CPU6 6 671:34 92.48% [idle{idle: cpu6}]
          11 root 155 ki31 0K 128K CPU3 3 673:40 91.26% [idle{idle: cpu3}]
          11 root 155 ki31 0K 128K CPU1 1 674:42 89.70% [idle{idle: cpu1}]
          11 root 155 ki31 0K 128K RUN 0 677:39 83.79% [idle{idle: cpu0}]
          12 root -72 - 0K 736K WAIT 2 0:54 15.38% [intr{swi1: netisr 0}]
          12 root -92 - 0K 736K CPU0 0 0:38 11.28% [intr{irq264: ix0:q0}]
          12 root -72 - 0K 736K WAIT 4 0:12 6.88% [intr{swi1: netisr 6}]
          12 root -72 - 0K 736K WAIT 1 0:05 5.96% [intr{swi1: netisr 2}]
          12 root -72 - 0K 736K WAIT 0 0:05 5.37% [intr{swi1: netisr 5}]
          12 root -72 - 0K 736K WAIT 4 0:03 3.76% [intr{swi1: netisr 1}]
          12 root -72 - 0K 736K WAIT 1 0:04 3.56% [intr{swi1: netisr 3}]
          12 root -72 - 0K 736K WAIT 3 0:03 3.08% [intr{swi1: netisr 4}]
          12 root -72 - 0K 736K WAIT 1 0:04 2.88% [intr{swi1: netisr 7}]
          12 root -92 - 0K 736K WAIT 6 0:02 2.88% [intr{irq279: ix1:q6}]
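
          A quick way to reduce captures like those to the interesting lines is to drop the idle threads and sort by the WCPU column. A minimal sketch, using three lines pasted from the upload capture above as sample input (the live top invocation in the comment is an assumption about FreeBSD top's batch-mode flags):

          ```shell
          #!/bin/sh
          # Reduce a `top -aSH` capture to its busiest non-idle threads.
          # Sample input: three lines copied from the upload capture above.
          cat <<'EOF' > /tmp/top_sample.txt
          11 root 155 ki31 0K 128K CPU5 5 672:42 95.56% [idle{idle: cpu5}]
          12 root -92 - 0K 736K CPU0 0 0:38 11.28% [intr{irq264: ix0:q0}]
          12 root -72 - 0K 736K WAIT 2 0:54 15.38% [intr{swi1: netisr 0}]
          EOF

          # Field 10 is WCPU; numeric sort ignores the trailing '%'.
          grep -v idle /tmp/top_sample.txt | sort -k10,10 -rn
          # Busiest non-idle thread in this sample: swi1: netisr 0 at 15.38%

          # Live equivalent (run during a speed test):
          # top -aSHb 999 | grep -v idle | sort -k10,10 -rn | head
          ```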

            stephenw10 Netgate Administrator
            last edited by

            Hmm OK, well you're clearly not CPU limited there! No more than 15% on any one core.

            How are you actually testing? Where are you running the speedtest CLI?

            I would be suspicious of the link speed. I know others have gone to great lengths to get the 2.5G link by putting a 2.5G capable switch in between.
            Check Status > Interfaces for errors.

            Try running ifconfig -vm ix0 assuming your WAN is ix0.

            Steve

              Slither13
              last edited by Slither13

              @stephenw10

              I ran a speed test on a computer on the network and ran top -aSH during the test.

              I originally ran pkg install -y py27-speedtest-cli and used speedtest-cli to test on the pfSense box itself.

              I'm seeing In/Out errors = 0/0 on LAN and WAN

              This is from the ifconfig:

              ix0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1516
              options=e507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
              capabilities=f507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,NETMAP,RXCSUM_IPV6,TXCSUM_IPV6>
              ether 00:1b:21:a5:8b:74
              hwaddr 00:1b:21:a5:8b:74
              inet6 fe80::21b:21ff:fea5:8b74%ix0 prefixlen 64 scopeid 0x1
              nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
              media: Ethernet autoselect (Unknown <rxpause,txpause>)
              status: active
              supported media:
              media autoselect
              plugged: SFP/SFP+/SFP28 100G SWDM4 (SC)
              vendor: ALCATELLUCENT PN: 3FE46541AA SN: ALCLF88519E4 DATE: 2018-08-29
              module temperature: 32.85 C Voltage: 3.30 Volts
              RX: 0.02 mW (-15.36 dBm) TX: 1.77 mW (2.50 dBm)

                stephenw10 Netgate Administrator
                last edited by

                @slither13 said in Unable to hit 1Gbps connection through Bell PPPOE. Can't figure out bottleneck.:

                media: Ethernet autoselect (Unknown <rxpause,txpause>)
                supported media: autoselect

                Mmm, that's not good. I assume it doesn't give you the option of setting 1Gbps fixed in the interface settings?

                You might try a different SFP module there. One that does give that option and set it.

                Hard to know what it's actually doing there. Do you see any errors on the Status > Interfaces page?

                Trying to link to something that is trying to do 2.5G is a bit of an unknown.

                Steve

                  Slither13
                  last edited by

                  @stephenw10 said in Unable to hit 1Gbps connection through Bell PPPOE. Can't figure out bottleneck.:

                  Mmm, that's not good. I assume it doesn't give you the option of setting 1Gbps fixed in the interface settings?
                  You might try a different SFP module there. One that does give that option and set it.
                  Hard to know what it's actually doing there. Do you see any errors on the Status > Interfaces page?
                  Trying to link to something that is trying to do 2.5G is a bit of an unknown.
                  Steve

                  I'm stuck with this module, as it's what the service needs to connect (some special provisioned module). I'm not seeing errors anywhere, and I can't change the interface speed settings.

                  I can't connect at 2.5 Gbps, but others are connecting at 1 Gbps and hitting max speeds of 940 Mbps.

                  I'll keep trying and post back here if I figure anything out.

                    stephenw10 Netgate Administrator
                    last edited by

                    You might try ifconfig -vvvvvvm ix0, which I recently found gives a bit more info. Yes, really, six v's on the cable I had! Five seemed enough for others.

                    Otherwise I'd try a packet capture and see if you have bad fragmentation or packet dupes/resends etc. Maybe try disabling pf scrubbing in System > Advanced > Firewall & NAT and see if that makes any difference.

                    Steve

                      Slither13
                      last edited by

                      @stephenw10 said in Unable to hit 1Gbps connection through Bell PPPOE. Can't figure out bottleneck.:

                      ifconfig -vvvvvvm ix0

                      Packet capture is fine; disabling pf scrubbing made no difference.

                      Data:

                      ix0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1508
                      options=e000bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
                      capabilities=f507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,NETMAP,RXCSUM_IPV6,TXCSUM_IPV6>
                      ether 00:1b:21:a5:8b:74
                      hwaddr 00:1b:21:a5:8b:74
                      inet6 fe80::21b:21ff:fea5:8b74%ix0 prefixlen 64 scopeid 0x1
                      nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
                      media: Ethernet autoselect (Unknown)
                      status: active
                      supported media:
                      media autoselect
                      plugged: SFP/SFP+/SFP28 100G SWDM4 (SC)
                      vendor: ALCATELLUCENT PN: 3FE46541AA SN: ALCLF88519E4 DATE: 2018-08-29
                      Class: 1000BASE-LX
                      Length: (null)
                      Tech: (null)
                      Media: (null)
                      Speed: (null)
                      module temperature: 33.85 C Voltage: 3.30 Volts
                      RX: 0.02 mW (-15.41 dBm) TX: 1.78 mW (2.52 dBm)

                      SFF8472 DUMP (0xA0 0..127 range):
                      03 04 01 00 00 00 02 00 00 00 00 03 20 00 28 FF
                      00 00 00 00 41 4C 43 41 54 45 4C 4C 55 43 45 4E
                      54 20 20 20 20 20 20 20 33 46 45 34 36 35 34 31
                      41 41 20 20 20 20 20 20 30 30 30 31 05 1E FF DC
                      00 1A 00 00 41 4C 43 4C 46 38 38 35 31 39 45 34
                      20 20 20 20 31 38 30 38 32 39 20 20 68 F0 05 5D
                      41 4C 43 41 54 45 4C 20 33 46 45 34 36 35 34 31
                      41 41 30 31 32 42 56 4C 33 41 38 4A 4E 41 41 97
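
                      As an aside, those vendor strings are readable straight out of that dump: in SFF-8472's 0xA0 page, bytes 20-35 hold the space-padded vendor name (and bytes 40-55 the part number). A sketch decoding the vendor name from the bytes pasted above:

                      ```shell
                      #!/bin/sh
                      # SFF-8472, page 0xA0: bytes 20-35 are the vendor name (ASCII, space padded).
                      # Hex copied from the dump above (offsets 20..35).
                      vendor_hex="41 4C 43 41 54 45 4C 4C 55 43 45 4E 54 20 20 20"

                      name=""
                      for b in $vendor_hex; do
                          # printf '%03o' converts the hex byte to octal, which the
                          # POSIX printf format string can emit as a character (\ooo)
                          name="$name$(printf "\\$(printf '%03o' "0x$b")")"
                      done
                      # Command substitution strips the trailing padding spaces.
                      echo "vendor: $name"   # prints: vendor: ALCATELLUCENT
                      ```

                      The same loop over bytes 40-55 yields the part number 3FE46541AA, matching what ifconfig reported.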
                      
                        stephenw10 Netgate Administrator
                        last edited by

                        You might have more luck with a Broadcom NIC and patched FreeBSD driver:
                        http://www.dslreports.com/forum/r32230041-Internet-Bypassing-the-HH3K-up-to-2-5Gbps-using-a-BCM57810S-NIC

                        I have no way of testing that myself....

                        Steve

                        Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.