Poor performance on igb driver
-
igb0 is WAN, igb1 is LAN. I'm starting top -aSH as you suggested, then during the peak transfer I exit from top with q.
I had powerd enabled, with all profiles (AC power, Battery power, Unknown power) set to Maximum. I disabled powerd but there is no difference. And I get this:
sysctl: unknown oid 'dev.cpu.0.freq'
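For anyone checking the same thing, these standard FreeBSD sysctls show whether frequency control is exposed at all (just a sanity check, not a fix):
# If dev.cpu.0.freq is an unknown OID, no cpufreq driver is attached,
# so powerd has nothing to adjust on this hardware anyway.
sysctl dev.cpu.0
sysctl dev.cpu.0.freq_levels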
-
@tman222 said in Poor performance on igb driver:
Hi @bdaniel7
I also agree that the CPU should be able to handle 1Gbit speeds fairly easily, especially if you are not trying to run any IDS/IPS on top of regular kernel packet processing.
FreeBSD's network defaults aren't tuned especially well for very high speed connections (although this is getting better in newer versions). Here is a link to a thread with some more parameters you can tune on your Intel NICs:
https://forum.netgate.com/topic/117072/dsl-reports-speed-test-causing-crash-on-upload
Of those parameters, I'd probably adjust the RX/TX descriptors and processing limits first and see if that yields any improvements.
Hope this helps.
Hi @bdaniel7 - have you also tried tuning some of the additional parameters that I suggested? If yes, what were the results?
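For concreteness, the descriptor and processing-limit settings in question are igb loader tunables along these lines (the values below mirror the ones posted later in this thread and are only illustrative, not a recommendation for this specific box):
hw.igb.rxd="4096"             # RX descriptors per queue (default 1024)
hw.igb.txd="4096"             # TX descriptors per queue (default 1024)
hw.igb.rx_process_limit="-1"  # -1 = no per-interrupt RX processing limit
hw.igb.tx_process_limit="-1"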
-
Sorry, I meant: where are you testing between? Speedtest client on igb1 connecting to a server via igb0?
Steve
-
@stephenw10
Yes, the media converter is connected to igb0, my Windows 10 client is connected to the igb1 port.
-
I don't see it having been asked yet, so: are you connecting using PPPoE?
Steve
-
@stephenw10
Yes, I'm using PPPoE.
-
Ah, then that is the cause of the problem. You can see that all the loading is on one queue and hence one CPU core while the others are mostly idle. It's unfortunately a known issue with PPPoE in FreeBSD/pfSense right now.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203856
However, there is something you can do to mitigate it to some extent; set:
sysctl net.isr.dispatch=deferred
You can add that as a system tunable in System > Advanced if it makes a significant difference.
Be aware that doing so may negatively impact some other things, ALTQ traffic shaping in particular.
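To sanity-check the change, the netisr configuration can be inspected directly; netstat -Q is the stock FreeBSD way to see it (a general pointer, not something from this thread):
# Apply at runtime (or reboot if it's set as a System Tunable)
sysctl net.isr.dispatch=deferred
# The "Configuration" section at the top of the output shows the
# dispatch policy currently in effect
netstat -Q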
Steve
-
Thank you for the clarification.
I should've stated from the beginning that I'm on PPPoE.
I added the net.isr.dispatch setting, but I don't see any improvement in speed.
I am now evaluating which option is cheaper and faster: buying a different board with other (Intel) cards and keeping pfSense, or moving to Linux.
-
These are my settings, by the way:
hw.igb.fc_setting=0
hw.igb.rxd="4096"
hw.igb.txd="4096"
net.link.ifqmaxlen="8192"
hw.igb.max_interrupt_rate="64000"
hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"
hw.igb.0.fc=0
hw.igb.1.fc=0
net.isr.defaultqlimit=4096
net.isr.dispatch=deferred
net.pf.states_hashsize="2097152"
net.pf.source_nodes_hashsize="65536"
hw.igb.enable_msix: 1
hw.igb.enable_aim: 1
-
Hmm, you should see some improvement in speed with that setting. You may need to restart the ppp session or at least clear the firewall state. Or reboot if it's being applied by system tunables.
Steve
-
@bdaniel7 said in Poor performance on igb driver:
These are my settings, by the way:
hw.igb.fc_setting=0
hw.igb.rxd="4096"
hw.igb.txd="4096"
net.link.ifqmaxlen="8192"
hw.igb.max_interrupt_rate="64000"
hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"
hw.igb.0.fc=0
hw.igb.1.fc=0
net.isr.defaultqlimit=4096
net.isr.dispatch=deferred
net.pf.states_hashsize="2097152"
net.pf.source_nodes_hashsize="65536"
hw.igb.enable_msix: 1
hw.igb.enable_aim: 1
I recently went through the process of identifying the performance culprit on the Intel NICs using a Lanner FW-7525A. It turns out that for the igb driver, you want
hw.igb.enable_msix=0
or hw.pci.enable_msix=0
to nudge the driver towards using MSI interrupts over the less performant MSI-X interrupts (suggested here). This made a 4x difference on my system. It is also recommended to disable TSO and LSO on the igb driver, so include net.inet.tcp.tso=0
as well. Hope this helps.
-
Hmm, interesting. I wouldn't have expected MSI to be any better than MSI-X.
What sort of figures did you see?
Steve
-
@stephenw10 said in Poor performance on igb driver:
Hmm, interesting. I wouldn't have expected MSI to be any better than MSI-X.
What sort of figures did you see?
Steve
Hmmm, I'm back to msix interrupts, so that was a red herring. I'm able to fully saturate my 400/20 link (I achieve 470/24) with both inbound and outbound firewall rules enabled. Here is my current config that seems to achieve this:
[2.4.4-RELEASE][root@firewall.home]/root: cat /boot/loader.conf
kern.cam.boot_delay=10000
# Tune the igb driver
hw.igb.rx_process_limit=800 #100
hw.igb.rxd=4096 #default 1024
hw.igb.txd=4096 #default 1024
# Disable msix interrupts on igb driver either via hw.pci or the narrower hw.igb
#hw.pci.enable_msix=0 #default 1 (enabled, disable to nudge to msi interrupts)
#hw.igb.enable_msix=0
#net.inet.tcp.tso=0 #confirmed redundant with disable in GUI
#hw.igb.fc_setting=0
legal.intel_ipw.license_ack=1
legal.intel_iwi.license_ack=1
boot_multicons="YES"
boot_serial="YES"
console="comconsole,vidconsole"
comconsole_speed="115200"
autoboot_delay="3"
hw.usb.no_pf="1"
Basically, I'm using the defaults other than increasing the igb driver rx_process_limit, rxd and txd. I have disabled tso, lro and checksum offloading via the GUI under System > Advanced > Networking (checked means disabled) and set kern.ipc.nmbclusters to 262144 under System > Advanced > Tunables.
Hardware:
CPU: Intel(R) Atom(TM) CPU C2358 @ 1.74GHz (1750.04-MHz K8-class CPU)
Origin="GenuineIntel" Id=0x406d8 Family=0x6 Model=0x4d Stepping=8
Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
Features2=0x43d8e3bf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,TSCDLT,AESNI,RDRAND>
AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
AMD Features2=0x101<LAHF,Prefetch>
Structured Extended Features=0x2282<TSCADJ,SMEP,ERMS,NFPUSG>
VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
TSC: P-state invariant, performance statistics
You might want to go back to the pfSense defaults, make sure all networking offloading options are disabled (checked in the GUI), then tweak the igb driver elements as I did above, test, and then adjust key tunables such as kern.ipc.nmbclusters; but more isn't necessarily better.
-
I've just noticed you are on PPPoE. Would adjusting MSS clamping on the interface or setting the MTU to 1492 help?
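For reference, the arithmetic behind those numbers (standard PPPoE overhead, not anything specific to this setup):
# PPPoE adds 8 bytes of header on top of Ethernet, so:
#   MTU = 1500 - 8  = 1492
# TCP MSS is the MTU minus 40 bytes of IPv4 + TCP headers:
#   MSS = 1492 - 40 = 1452   <- the value MSS clamping would enforce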
-
You should put custom settings in /boot/loader.conf.local to avoid them being overwritten at upgrade. Create that file if it's not there.
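As a sketch, a minimal /boot/loader.conf.local carrying only the custom igb loader tunables posted earlier might look like this (same values as above, shown only to illustrate where they go):
# /boot/loader.conf.local -- read at boot and not overwritten by upgrades
hw.igb.rxd="4096"
hw.igb.txd="4096"
hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"
net.link.ifqmaxlen="8192"
Runtime-settable values such as net.isr.dispatch can stay under System > Advanced > System Tunables.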
Steve
-
Hi @bdaniel7, any luck on achieving gigabit speeds after your tweaks? I've been running into the same issues as you with the same Qotom box.
I posted about it here: https://forum.netgate.com/topic/137196/slow-gigabit-download-on-a-quadcore-intel-celeron-j1900-2-41ghz and then used the tweaks in this thread.
Still getting only about 730 Mbps on wired.
-
@nonconformist
Hi, nope, I couldn't get any speed higher than 550 Mbps when I tried the tweaks.
Then I abandoned the subject due to lack of time. I will try the tweaks from the article you posted.
-
Any dropped packets?
netstat -ihw 1
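A note on reading that output (standard FreeBSD netstat behaviour, not advice specific to this thread): with -w 1 it prints one line of counters per second, so watch the error and drop columns while the speed test runs.
# -i per-interface counters, -h human-readable, -w 1 repeat every second;
# non-zero input errs/drops during the transfer point at the NIC queues
# rather than the CPU as the bottleneck.
netstat -ihw 1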
-
Since you have cores waiting, you could try to avoid locks when switching between them with:
net.isr.bindthreads="1"
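For context (net.isr.maxthreads below is my addition, not something mentioned in this thread): net.isr.bindthreads is a boot-time tunable that pins the netisr worker threads to CPU cores, and tuning guides usually pair it with net.isr.maxthreads so there is one worker per core. A loader-tunable sketch:
# Boot-time tunables (e.g. /boot/loader.conf.local); both require a reboot
net.isr.bindthreads="1"   # pin each netisr worker thread to a CPU core
net.isr.maxthreads="-1"   # -1 = one netisr worker per core (FreeBSD 11+)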
-
@marcop Couldn't check during the week, so doing this over the weekend. Long story short, no dropped packets.
net.isr.bindthreads="1" actually brought the download/upload speeds down to 680/800 from 740/934.
Reading more about this, it's beginning to look like achieving 1G download isn't possible with the igb driver over a PPPoE WAN connection.