Netgate Discussion Forum

    Slow Gigabit download on a Quad-core Intel Celeron J1900 2.41 GHz

    Hardware · 5 Posts · 3 Posters · 2.2k Views
    • Nonconformist

      Hello,

      I recently upgraded to Gigabit fiber from a 500 Mbps/20 Mbps cable line. Briefly, the network setup is as follows:

      ISP modem + router (Bell HH3000) ---> pfSense box (Intel quad-core J1900 CPU, 4x Intel I211-AT 10/100/1000 gigabit NICs) ---> 5-port TP-Link switch ---> Netgear Nighthawk R6400 wireless AP.

      With this setup I always got the rated speeds (500/20) on cable, regardless of whether I used a wired or wireless connection.

      My current ISP's equipment (Bell Canada) cannot be put in bridged mode, so I've set the WAN interface to use PPPoE.

      However, after switching to Gigabit fiber, I'm not getting gigabit download speeds anywhere on the network behind the pfSense box, no matter what I try. Wireless speeds have dropped to around 300/300 as well; I have no idea why, but I'm not really worried about the wireless side. Here's what I've tried so far (each test run multiple times), with the average speeds observed:

      • Direct Ethernet connection from the ISP equipment to a laptop - >925 down / 925 up
      • The same laptop connected directly to the cable that normally feeds pfSense's WAN port - >925 down / 925 up
      • The same laptop connected to any Ethernet port on the TP-Link or Netgear behind pfSense - ~600 down / 925 up

      I changed the MTU on pfSense's WAN interface to 1492 to account for the PPPoE overhead and retested, to no avail. This was conclusive enough to tell me that the pfSense box is the bottleneck.
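
      For what it's worth, a quick sanity check that the 1492 MTU is actually in effect end-to-end is a don't-fragment ping through the box; 1492 minus 20 bytes of IP header and 8 bytes of ICMP header leaves a 1464-byte payload (8.8.8.8 is just an example target):

      # from a FreeBSD/pfSense shell; -D sets the don't-fragment bit
      ping -D -s 1464 8.8.8.8
      # equivalent from a Windows client behind the firewall
      ping -f -l 1464 8.8.8.8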

      pfSense itself isn't running any IDS/IPS or VPN connections. The only running package that could be intensive is pfBlockerNG. The mbuf usage on pfSense is barely 2%, though I did see CPU usage hit about 45% during the speed tests. Troubleshooting further, these are the current settings under System --> Advanced --> Networking:

      • Hardware Checksum Offloading - unchecked (enabled)
      • Hardware TCP Segmentation Offloading - unchecked (enabled)
      • Hardware Large Receive Offloading - checked (disabled)

      Toggling any of these on/off and disabling pfBlockerNG has made no difference.
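
      (For completeness, the offload flags the driver actually ended up with after a reboot can be checked from a shell; igb1 below is just a stand-in for whichever NIC is the WAN interface.)

      # look for RXCSUM, TXCSUM, TSO4 and LRO in the options list
      ifconfig igb1 | grep -i options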

      Could anyone point me in the right direction on the following:

      • Is this a pfSense configuration issue? Am I missing something?
      • Are there any PPPoE settings that need to be tweaked?
      • Is the hardware above simply not capable of sustained 1 Gbps? Should the CPU be hitting 45% at all? The fact that upload speeds are largely unaffected and stay close to 1 Gbps suggests the hardware itself can keep up.

      Would appreciate any insight and assistance in this regard.

      • Rico (LAYER 8, Rebel Alliance)

        Are you sure you have no speed limiter / QoS running?
        Check out https://redmine.pfsense.org/issues/4821 - maybe you have the same issue.
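
        (A quick way to double-check from a shell that nothing is shaping traffic; both commands should come back empty, or report nothing configured, if no ALTQ queues or limiters exist.)

        pfctl -s queue      # ALTQ traffic shaper queues
        dnctl pipe show     # dummynet limiters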

        -Rico

        • Nonconformist

          Thanks Rico, that does indeed seem to be the issue. I'm not running any traffic shaping either. As suggested in the linked post, just adding

          net.isr.dispatch=deferred
          

          on its own seems to have made no difference, but I was able to increase speeds to ~725 Mbps down / 925 Mbps up with the following in /boot/loader.conf.local. The problem is, I'm not sure which tuning parameters actually helped, or what else can be done to get closer to 1 Gbps. I got these settings off this post.

          # flow control off on the igb NICs
          hw.igb.fc_setting=0
          # no per-interrupt packet processing limit
          hw.igb.rx_process_limit="-1"
          hw.igb.tx_process_limit="-1"
          # larger transmit/receive descriptor rings
          hw.igb.txd="4096"
          hw.igb.rxd="4096"
          # flow control off on igb0 and igb1 specifically
          hw.igb.0.fc=0
          hw.igb.1.fc=0
          # cap on the per-queue interrupt rate
          hw.igb.max_interrupt_rate="64000"
          # bigger interface send queue and pf hash tables
          net.link.ifqmaxlen="8192"
          net.pf.states_hashsize="2097152"
          net.pf.source_nodes_hashsize="65536"
          # queue packets to netisr threads instead of direct dispatch
          net.isr.dispatch=deferred
          # disable TCP segmentation offload
          net.inet.tcp.tso=0
          net.isr.defaultqlimit=4096
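
          To figure out which of these actually took effect (and which matter), something like this after a reboot should show what the kernel ended up with; the sysctl names below are just the ones I'd spot-check:

          # loader tunables as the kernel saw them at boot
          kenv | grep -E 'hw.igb|net.isr|net.pf'
          # current runtime values
          sysctl net.isr.dispatch net.inet.tcp.tso net.link.ifqmaxlen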
          
          • stephenw10 (Netgate Administrator)

            Try running this at the command line while you're testing:
            top -aSH
            Hit q to quit out and leave the values for copy/pasting.
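
            (If it's easier to capture, FreeBSD's top also has a batch mode that just prints a set number of refreshes and exits; the interval and count here are arbitrary:)

            top -aSHb -s 5 -d 2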

            Are you hitting 100% on one CPU core?

            Steve

            • Nonconformist

              Here are the results before and during the speedtest:

              Before:

              11 root     155 ki31     0K    64K CPU0    0 154:42  98.58% [idle{idle: cpu0}]
              11 root     155 ki31     0K    64K CPU3    3 154:30  98.41% [idle{idle: cpu3}]
              11 root     155 ki31     0K    64K CPU1    1 154:18  97.95% [idle{idle: cpu1}]
              11 root     155 ki31     0K    64K RUN     2 154:51  97.35% [idle{idle: cpu2}]
              340 root      32    0 98776K 38364K accept  1   0:02   1.36% php-fpm: pool nginx (php-fpm){php-fpm}
              19 root     -16    -     0K    16K pftm    0   0:57   0.60% [pf purge]
              71606 root      20    0  7812K  4000K CPU2    2   0:00   0.12% top -aSH
              

              During:

              12 root     -72    -     0K   544K WAIT    1   2:17  98.74% [intr{swi1: netisr 0}]
               11 root     155 ki31     0K    64K CPU3    3 157:05  76.71% [idle{idle: cpu3}]
               11 root     155 ki31     0K    64K RUN     2 157:33  69.36% [idle{idle: cpu2}]
               11 root     155 ki31     0K    64K RUN     0 157:20  63.68% [idle{idle: cpu0}]
               12 root     -72    -     0K   544K WAIT    0   0:38  26.18% [intr{swi1: netisr 1}]
               12 root     -92    -     0K   544K WAIT    0   0:43  24.11% [intr{irq260: igb0:que 0}]
               12 root     -72    -     0K   544K WAIT    3   0:32  15.26% [intr{swi1: netisr 2}]
               12 root     -92    -     0K   544K WAIT    2   0:15   8.30% [intr{irq264: igb1:que 0}]
               12 root     -92    -     0K   544K WAIT    3   0:13   4.80% [intr{irq265: igb1:que 1}]
               11 root     155 ki31     0K    64K CPU1    1 156:40   2.67% [idle{idle: cpu1}]
              