Netgate Discussion Forum

    pfSense performance with Gb ONT

    • stephenw10 Netgate Administrator

      You need to use top -aSH to see how that load is using the CPU cores. One could be pegged at 100% already.
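
      For reference, a minimal sketch of how that looks from the pfSense shell (flag meanings per FreeBSD top(1)):

          # -a: show argv-derived names, -S: include kernel/system processes, -H: list each thread separately
          top -aSH
          # watch the per-thread CPU column: a single igb queue or netisr thread pinned near 100%
          # means one core is the bottleneck even if the overall load looks moderate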

      Steve

      • fr3ddie

        I have repeated the test using top as you suggested: as expected, I can see two queues for each NIC (e.g. {irq263: igb2:que 0} and {irq264: igb2:que 1}) since the CPU is dual core; because of this, I am assuming the "system tunables" tweaks have been applied and the system is correctly using multiple queues per NIC. As a side note, I have also enabled the various hardware offloads for the NICs, using the appropriate checkboxes in the pfSense webGUI.
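
        If it helps, a couple of shell commands to double-check that state (the interface name below matches the igb2 example; adjust to the actual NICs):

            # one interrupt line per queue should show up, two per NIC on this dual-core box
            vmstat -i | grep igb2
            # the enabled offloads (RXCSUM/TXCSUM/TSO4, etc.) appear in the options= flags
            ifconfig igb2 | grep options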

        During the test the load seemed spread almost equally between the two cores, with about 30% idle per core during the maximum-load period (550 Mbps downlink, 100 Mbps uplink): thus, the ~70% cumulative maximum CPU usage under load looks confirmed.

        I don't know if this is enough to say that the box won't be able to keep up with a hypothetical 1 Gbps throughput, since I don't know if the additional load would be added linearly to the current one or if the "pattern" is more on the logarithmic side...
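
        For what it's worth, a quick back-of-the-envelope number under the purely linear assumption (an assumption, not a measurement):

            # 550 Mbps down + 100 Mbps up = 650 Mbps aggregate at ~70% CPU
            # linear scaling would put 100% CPU at roughly:
            echo "scale=1; 650 / 0.70" | bc    # ~928.5 Mbps, right at the edge of line rate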

        Do you have any advice about how to perform other, maybe more meaningful, tests or how to read these results?

        • stephenw10 Netgate Administrator

          I would run a test between an iperf3 server on WAN and a client on LAN to get an idea of the maximum throughput. That's what I was doing above to see 940Mbps on the box I have.
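
          Something along these lines, assuming a test host on the WAN side (the address below is only a placeholder):

              # on the WAN-side host
              iperf3 -s
              # on a LAN client: 30-second run, 4 parallel streams, then again with -R for the reverse direction
              iperf3 -c 192.0.2.10 -t 30 -P 4
              iperf3 -c 192.0.2.10 -t 30 -P 4 -R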

          Steve

          • ck42

            Currently dealing with something somewhat similar. I have 1Gbps fiber that I'm feeding directly to pfSense.
            Using speedtest, I'm hovering around 500Mbps consistently.
            But using speedtest-cli on the pfSense CLI, I'm consistently getting around 800Mbps. Curious to see if you are seeing this as well.

            I also discovered that my previously lower speeds were being caused by my traffic shaper. I removed the shaper and 'speedtest' results immediately jumped (though not as far as I was expecting). As far as I know, the queues in the shaper are set up correctly.
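
            In case anyone wants to reproduce the on-box test, a rough sketch (speedtest-cli is not bundled with pfSense; the package name below is an assumption and varies by release):

                # from the pfSense shell; install the Python speedtest-cli client first
                pkg install -y py39-speedtest-cli    # package name is a guess, check "pkg search speedtest"
                speedtest-cli --simple               # prints ping/download/upload only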

            • fr3ddie

              Hello ck42,

              I am not able to answer your question about the difference between running the throughput test directly on the pfSense machine and running it from a different machine located on a pfSense-controlled network.
              Anyhow, I performed several tests in this second scenario during these COVID times (...) and the following are my findings.

              Sticking to the facts, without further extrapolation, the C2358 Atom running pfSense is not able to hold a sustained gigabit rate, at least over a PPPoE connection.
              These results are based on the evidence that, after a quick "swap" of the C2358 box for the modem supplied by the provider (a Technicolor running embedded Linux), the throughput of the line rose instantly from an average of 550 Mbps to an average of 750-800 Mbps, with all other factors staying the same (same cables/connections, source/target servers, power supply, etc.).
              Swapping back to the pfSense box during the same test session, the performance dropped back to ~550 Mbps.
              All of this with the same ~70% CPU usage during the test sessions, as reported in the previous posts.

              The same test session was repeated many times during the same day and on different days, in order to exclude random fluctuations in line performance or other issues, and it showed similar results (at least a 200 Mbps drop using the C2358).

              To me these results lead to two considerations:

              • obviously, the C2358 is not able to fully take advantage of an optical (GPON) 1 Gbps PPPoE-backed connection when used in conjunction with pfSense;
              • most probably there is some sort of deep networking inefficiency in the BSD kernel and/or in pfSense as a "distro" (pf? PPPoEd? Unfortunately I don't have the skills to verify this), since the CPU load of the box stayed at 70% utilization instead of showing saturation.

              To verify the second bullet beyond doubt I should install something different on the machine (Windows? Some Linux flavor?) and run the same tests but, looking at the results already obtained, I personally don't have many doubts.

              I would like to hear any comments on all of this.

              Thank you very much for your attention

              • stephenw10 Netgate Administrator

                Yup, it's because it's PPPoE. That is single threaded in pfSense currently, so you will only ever see one queue on the NIC being used.
                You can get significant improvement there by setting net.isr.dispatch=deferred.
                See: https://docs.netgate.com/pfsense/en/latest/hardware/tuning-and-troubleshooting-network-cards.html#pppoe-with-multi-queue-nics
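
                A minimal sketch of that change from the shell (to persist it, add the same name/value as an entry under System > Advanced > System Tunables):

                    # check the current dispatch policy (direct / hybrid / deferred)
                    sysctl net.isr.dispatch
                    # switch to deferred dispatch at runtime
                    sysctl net.isr.dispatch=deferred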

                Steve

                • ck42 @stephenw10

                  @stephenw10

                  How do I know if my NIC is a multi-queue NIC? (Using the onboard NIC)

                  • stephenw10 Netgate Administrator

                    Most are. It will show in the boot log for most drivers:

                    Jun 12 14:35:20 	kernel 		igb1: <Intel(R) PRO/1000 PCI-Express Network Driver> port 0xe0a0-0xe0bf mem 0xdfe60000-0xdfe7ffff,0xdff2c000-0xdff2ffff irq 20 at device 20.0 on pci0
                    Jun 12 14:35:20 	kernel 		igb1: Using 1024 TX descriptors and 1024 RX descriptors
                    Jun 12 14:35:20 	kernel 		igb1: Using 2 RX queues 2 TX queues
                    Jun 12 14:35:20 	kernel 		igb1: Using MSI-X interrupts with 3 vectors
                    Jun 12 14:35:20 	kernel 		igb1: Ethernet address: 00:10:f3:4e:1f:67
                    Jun 12 14:35:20 	kernel 		igb1: netmap queues/slots: TX 2/1024, RX 2/1024 
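
                    If the live boot messages have already scrolled away, the same lines can usually be pulled from the saved boot log (standard FreeBSD path, which pfSense follows):

                        grep -i queues /var/run/dmesg.boot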
                    

                    Steve

                    • ck42 @stephenw10

                      @stephenw10

                      Here's what I have for mine. Can't tell if it's multi-queued or not...

                      em1: Using MSIX interrupts with 3 vectors
                      em1: Ethernet address: 54:be:f7:38:b5:84
                      em1: netmap queues/slots: TX 1/1024, RX 1/1024
                      
                      • fr3ddie

                        ck42, it looks like it is single-queued (only one TX and one RX queue in that output).
