Netgate Discussion Forum

    Poor performance on igb driver

    Hardware
    • stephenw10 Netgate Administrator
      last edited by stephenw10

      You shouldn't need to set the igb queues to 1 any longer. That was a bug in much older versions.

      Just hit q in top when it's showing something useful and it will quit out and leave whatever was there available to copy and paste out.

      Are you routing traffic over OpenVPN?

      Steve

      • tman222
        last edited by

        Hi @bdaniel7

        I also agree that the CPU should be able to handle 1Gbit speeds fairly easily, especially if you are not trying to run any IDS/IPS on top of regular kernel packet processing.

        FreeBSD's network stack isn't tuned especially well for very high speed connections by default (although this is getting better in newer versions). Here is a link to a thread with some more parameters you can tune on your Intel NICs:

        https://forum.netgate.com/topic/117072/dsl-reports-speed-test-causing-crash-on-upload

        Of those parameters, I'd probably adjust the RX/TX descriptors and processing limits first and see if that yields any improvements.

        Hope this helps.

        • bdaniel7
          last edited by

          I'm only using OpenVPN to access the internal network from outside, which happens when I'm at the office.

          0_1535194760006_top.jpg

          • stephenw10 Netgate Administrator
            last edited by

            How are you testing when that is shown? What is connected to igb0 and igb1?

            Is the CPU actually running at 1.9GHz? Do you have powerd enabled?

            Try running sysctl dev.cpu.0.freq when the test is running.

            Steve

            • bdaniel7
              last edited by

              igb0 is WAN, igb1 is LAN.

              I'm starting top -aSH as you suggested, then during the peak transfer, I exit from top with q.

              I had powerd enabled, with all profiles (AC power, Battery power, Unknown power) set to Maximum.
              I disabled powerd, but there is no difference.

              And I get this: sysctl: unknown oid 'dev.cpu.0.freq'

              • tman222 @tman222
                last edited by

                @tman222 said in Poor performance on igb driver:

                Hi @bdaniel7

                I also agree that the CPU should be able to handle 1Gbit speeds fairly easily, especially if you are not trying to run any IDS/IPS on top of regular kernel packet processing.

                FreeBSD's network stack isn't tuned especially well for very high speed connections by default (although this is getting better in newer versions). Here is a link to a thread with some more parameters you can tune on your Intel NICs:

                https://forum.netgate.com/topic/117072/dsl-reports-speed-test-causing-crash-on-upload

                Of those parameters, I'd probably adjust the RX/TX descriptors and processing limits first and see if that yields any improvements.

                Hope this helps.

                Hi @bdaniel7 - have you also tried tuning some of the additional parameters that I suggested? If yes, what were the results?

                • stephenw10 Netgate Administrator
                  last edited by

                  Sorry, I meant: where are you testing between? A speedtest client on igb1 connecting to a server via igb0?

                  Steve

                  • bdaniel7 @stephenw10
                    last edited by

                    @stephenw10
                    Yes, the media converter is connected to igb0, and my Windows 10 client is connected to the igb1 port.

                    • stephenw10 Netgate Administrator
                      last edited by

                      I don't see it having been asked, so: are you connecting using PPPoE?

                      Steve

                      • bdaniel7 @stephenw10
                        last edited by

                        @stephenw10
                        Yes, I'm using PPPoE.

                        • stephenw10 Netgate Administrator
                          last edited by

                          Ah, then that is the cause of the problem. You can see that all the load is on one queue, and hence one CPU core, while the others are mostly idle. It's unfortunately a known issue with PPPoE in FreeBSD/pfSense right now:
                          https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203856

                          However there is something you can do to mitigate it to some extent, set:
                          sysctl net.isr.dispatch=deferred

                          You can add that as a system tunable in System > Advanced if it makes a significant difference.

                          Be aware that doing so may negatively impact some other things, ALTQ traffic shaping in particular.

                          Steve
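
                          For reference, a quick way to try the setting from a pfSense shell before persisting it (these are standard FreeBSD sysctl commands; the "direct" default mentioned in the comment is an assumption worth verifying on your own box):

                          ```
                          # Check the current dispatch policy (typically "direct")
                          sysctl net.isr.dispatch

                          # Apply the deferred policy on the fly for a quick test
                          sysctl net.isr.dispatch=deferred

                          # If it helps, persist it under System > Advanced > System Tunables:
                          #   Tunable: net.isr.dispatch    Value: deferred
                          ```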

                          • bdaniel7
                            last edited by

                            Thank you for the clarification.
                            I should've stated from the beginning that I'm on PPPoE.
                            I added the net.isr.dispatch setting, but I don't see any improvement in speed.

                            I am now evaluating which option is cheaper and faster: buying a different board with other (Intel) cards and keeping pfSense, or moving to Linux.

                            • bdaniel7
                              last edited by

                              These are my settings, by the way:

                              hw.igb.fc_setting=0
                              hw.igb.rxd="4096"
                              hw.igb.txd="4096"
                              net.link.ifqmaxlen="8192"
                              hw.igb.max_interrupt_rate="64000"
                              hw.igb.rx_process_limit="-1"
                              hw.igb.tx_process_limit="-1"
                              hw.igb.0.fc=0
                              hw.igb.1.fc=0
                              net.isr.defaultqlimit=4096
                              net.isr.dispatch=deferred
                              net.pf.states_hashsize="2097152"
                              net.pf.source_nodes_hashsize="65536"
                              hw.igb.enable_msix: 1
                              hw.igb.enable_aim: 1

                              • stephenw10 Netgate Administrator
                                last edited by

                                Hmm, you should see some improvement in speed with that setting. You may need to restart the PPP session, or at least clear the firewall states. Or reboot, if it's being applied via System Tunables.

                                Steve

                                • bblacey @bdaniel7
                                  last edited by

                                  @bdaniel7 said in Poor performance on igb driver:

                                  These are my settings, by the way:

                                  hw.igb.fc_setting=0
                                  hw.igb.rxd="4096"
                                  hw.igb.txd="4096"
                                  net.link.ifqmaxlen="8192"
                                  hw.igb.max_interrupt_rate="64000"
                                  hw.igb.rx_process_limit="-1"
                                  hw.igb.tx_process_limit="-1"
                                  hw.igb.0.fc=0
                                  hw.igb.1.fc=0
                                  net.isr.defaultqlimit=4096
                                  net.isr.dispatch=deferred
                                  net.pf.states_hashsize="2097152"
                                  net.pf.source_nodes_hashsize="65536"
                                  hw.igb.enable_msix: 1
                                  hw.igb.enable_aim: 1

                                  I recently went through the process of identifying the performance culprit on the Intel NICs using a Lanner FW-7525A. It turns out that, for the igb driver, you want hw.igb.enable_msix=0 or hw.pci.enable_msix=0 to nudge the driver towards using MSI interrupts instead of the less-performant MSI-X interrupts (suggested here). This made a 4x difference on my system. It is also recommended to disable TSO and LSO on the igb driver, so include net.inet.tcp.tso=0 as well. Hope this helps.

                                  • stephenw10 Netgate Administrator
                                    last edited by

                                    Hmm, interesting. I wouldn't have expected MSI to be any better than MSI-X.
                                    What sort of figures did you see?

                                    Steve

                                    • bblacey @stephenw10
                                      last edited by

                                      @stephenw10 said in Poor performance on igb driver:

                                      Hmm, interesting. I wouldn't have expected MSI to be any better than MSI-X.
                                      What sort of figures did you see?

                                      Steve

                                      Hmmm, I'm back to MSI-X interrupts, so that was a red herring. I'm able to fully saturate my 400/20 link (achieving 470/24) with both inbound and outbound firewall rules enabled. Here is my current config, which seems to achieve this:

                                      [2.4.4-RELEASE][root@firewall.home]/root: cat /boot/loader.conf 
                                      kern.cam.boot_delay=10000
                                      # Tune the igb driver
                                      hw.igb.rx_process_limit=800  #100
                                      hw.igb.rxd=4096  #default 1024
                                      hw.igb.txd=4096  #default 1024
                                      # Disable msix interrupts on igb driver either via hw.pci or the narrower hw.igb
                                      #hw.pci.enable_msix=0   #default 1 (enabled, disable to nudge to msi interrupts)
                                      #hw.igb.enable_msix=0
                                      #net.inet.tcp.tso=0  #confirmed redundant with disable in GUI
                                      #hw.igb.fc_setting=0
                                      legal.intel_ipw.license_ack=1
                                      legal.intel_iwi.license_ack=1
                                      boot_multicons="YES"
                                      boot_serial="YES"
                                      console="comconsole,vidconsole"
                                      comconsole_speed="115200"
                                      autoboot_delay="3"
                                      hw.usb.no_pf="1"
                                      

                                      Basically, I'm using the defaults other than increasing the igb driver's rx_process_limit, rxd, and txd. I have disabled TSO, LRO, and checksum offloading via the GUI under System->Advanced->Networking (checked means disabled) and set kern.ipc.nmbclusters to 262144 under System->Advanced->Tunables.

                                      Hardware:

                                      CPU: Intel(R) Atom(TM) CPU  C2358  @ 1.74GHz (1750.04-MHz K8-class CPU)
                                        Origin="GenuineIntel"  Id=0x406d8  Family=0x6  Model=0x4d  Stepping=8
                                        Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
                                        Features2=0x43d8e3bf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,TSCDLT,AESNI,RDRAND>
                                        AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
                                        AMD Features2=0x101<LAHF,Prefetch>
                                        Structured Extended Features=0x2282<TSCADJ,SMEP,ERMS,NFPUSG>
                                        VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
                                        TSC: P-state invariant, performance statistics
                                      

                                      You might want to go back to the pfSense defaults, make sure all network offloading options are disabled (checked in the GUI), then tweak the igb driver elements as I did above, test, and only then adjust key tunables such as kern.ipc.nmbclusters; keep in mind that more isn't necessarily better.

                                      • MarcoP
                                        last edited by

                                        I've just noticed you are on PPPoE. Would enabling MSS clamping on the interface or setting the MTU to 1492 help?
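
                                        As a sketch of the arithmetic behind those numbers: PPPoE consumes 8 bytes of each 1500-byte Ethernet payload, and the TCP MSS is the MTU minus the 40 bytes of IPv4 and TCP headers:

                                        ```shell
                                        #!/bin/sh
                                        # PPPoE overhead: 6-byte PPPoE header + 2-byte PPP protocol field = 8 bytes,
                                        # carried inside the standard 1500-byte Ethernet payload.
                                        pppoe_overhead=8
                                        mtu=$((1500 - pppoe_overhead))   # usable IP MTU over PPPoE
                                        mss=$((mtu - 20 - 20))           # minus 20-byte IPv4 and 20-byte TCP headers
                                        echo "MTU=$mtu MSS=$mss"         # prints MTU=1492 MSS=1452
                                        ```

                                        1452 is the value MSS clamping on a PPPoE interface typically enforces.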

                                        • stephenw10 Netgate Administrator
                                          last edited by

                                          You should put custom settings in /boot/loader.conf.local to avoid them being overwritten at upgrade. Create that file if it's not there.

                                          Steve
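
                                          For example, something like this (illustrative only; the values are the ones bdaniel7 posted earlier in this thread, not recommendations):

                                          ```
                                          # /boot/loader.conf.local -- survives pfSense upgrades, unlike loader.conf
                                          hw.igb.rxd="4096"
                                          hw.igb.txd="4096"
                                          hw.igb.rx_process_limit="-1"
                                          hw.igb.tx_process_limit="-1"
                                          ```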

                                          • Nonconformist
                                            last edited by

                                            Hi @bdaniel7, any luck on achieving gigabit speeds after your tweaks? I’ve been running into the same issues as you with the same Qotom box.

                                            Posted about it [here](https://forum.netgate.com/topic/137196/slow-gigabit-download-on-a-quadcore-intel-celeron-j1900-2-41ghz), and then used the tweaks in this thread.

                                            Still getting only about 730 Mbps on wired. 😐

                                            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.