2.5Gbps NICs only getting 1.5Gbps
-
@Stewart
"I've used -t 120 -P 10 -w 4MB on PC1-WAN-LAN-PC2"
I don't know, maybe it's some hardware limitation; I can't say until we know the motherboard layout and how you configured your pfSense. I'd start with using only WAN and LAN, two physical ports only, no bridge, and then re-test the PC1-WAN-LAN-PC2 speed.
-
There should be zero difference between a LAN and a WAN port.
What makes a WAN port 'different' is the differing firewall rules and additional tasks such as NAT (amongst other things) which require the CPU to work harder.
-
@RobbieTT There's certainly no difference from a hardware perspective. I don't know if there is some form of NAT queuing that could be slowing it down, or something in the tunables that needs to be tweaked to get higher speeds. I know that with the APU2 there were some things to change to get it to hit 1Gbps across the WAN. I haven't set any firewall rules or anything on this unit. In fact, I've tried checking "Disable All Packet Filtering" to turn off the firewall and NAT to see if it helps, and it didn't.
It is a bit peculiar.
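(For anyone wanting to repeat that check, the same on/off test can be done from a pfSense shell; this is just a sketch and the target IP is a placeholder, not from my setup.)
# disable pf entirely (firewall rules and NAT stop being evaluated), re-test, then re-enable
pfctl -d
iperf3 -c 192.0.2.10 -t 30 -P 10   # placeholder address on the far side of the box
pfctl -e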
-
There are a number of BSD and pfSense tuning guides for higher bandwidth. Having waded through a number of them, most of their suggestions seem to have already been applied to pfSense in subsequent releases.
Others I have yet to test include:
For PPPoE WANs with multi-queue NICs (single core issue), an entry for net.isr.dispatch=deferred can lead to performance gains on affected hardware.
pfSense default in sysctl = net.isr.dispatch=direct
net.isr.dispatch="deferred"

To disable flow control on all ix interfaces. For LAN use the default of "3" is appropriate.
pfSense default in sysctl = hw.ix.flow_control: 3
hw.ix.flow_control="0"

To disable flow control on an individual igc interface (eg igc.3 when used as WAN).
pfSense default in sysctl = dev.igc.3.fc: 3
dev.igc.3.fc="0"

Increasing hw.igc.max_interrupt_rate up to 20000 provides gains in some scenarios.
pfSense default in sysctl = hw.igc.max_interrupt_rate: 8000
hw.igc.max_interrupt_rate="16000"

Some FreeBSD tuning guides suggest removing the igc rx_process_limit for higher performance. This is the maximum number of received packets to process at a time; the default is 100 packets and a value of -1 means unlimited.
pfSense default in sysctl = hw.igc.rx_process_limit: 100
hw.igc.rx_process_limit="-1"

Jumbo frames support is enabled by changing the Maximum Transmission Unit (MTU) to a value larger than the default of 1500. Best throughput results are seen with a large MTU; use 9706 if possible (ref Intel) or 9216 (ref iflib stated maximum). Use the ifconfig command to confirm or increase the MTU size.
To confirm the MTU used between two specific devices: route get <destination_IP_address>
To set the MTU on an ix interface for LAN use, where 1 is the interface number: ifconfig ix1 mtu 9216

Doubling (or more) the IP input queue (intr_queue) length on highly loaded systems may be required if queue drops are above zero. To check for queue drops (ideally adjust to keep the value at zero): sysctl net.inet.ip.intr_queue_drops
net.inet.ip.intr_queue_drops: 0
pfSense default in sysctl = net.inet.ip.intr_queue_maxlen: 1000
net.inet.ip.intr_queue_maxlen="2000"

For RJ45 interface stability, Energy Efficient Ethernet (EEE) may need to be disabled. The default pfSense setting of 1 means disabled (which is somewhat counterintuitive) and 0 means enabled.
pfSense (eg igc.3) default in sysctl = dev.igc.3.eee_control=1
dev.igc.3.eee_control="1"
I attach no weight to the above; more akin to my musing than anything else - your issue is probably elsewhere but you never know.
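If anyone does want to trial the loader-level entries from that list in one go, a minimal /boot/loader.conf.local might look like the sketch below. It is untested on this igc hardware; the dev.* and net.* entries would go under System > Advanced > System Tunables instead.
# /boot/loader.conf.local - untested sketch, values as per the list above
hw.ix.flow_control="0"               # ix NICs only; the default of 3 is fine for LAN
hw.igc.max_interrupt_rate="16000"    # default is 8000; some guides go as high as 20000
hw.igc.rx_process_limit="-1"         # default is 100; -1 means unlimited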
-
Yes, pretty much the only difference there is NAT vs no NAT.
Try increasing the interrupt rate. That is a known limitation in igc.
-
@stephenw10 @RobbieTT
I added hw.igc.max_interrupt_rate with a value of 16000 in the Tunables page but it didn't make a difference. Is there a tuning guide for IGC NICs? Most everything I'm finding is for IGB.
Also, the slowdown isn't related to LAN vs WAN as I previously indicated. I'm creating a matrix of ports and iperf results, and it appears somewhat random. Most of the time it is ~1.6Gbps but sometimes I get ~2.4Gbps. While I thought it was reproducible before, I can no longer reproduce it consistently.
-
@Stewart
My examples in the scroll box are only ix or igc, no igb here!
-
hw.igc.max_interrupt_rate is a loader value; it needs to be added in /boot/loader.conf.local. Create that file if required.
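From a shell that would be something like this (the 16000 is just the value already being tried; a reboot is needed for it to take effect):
# append the loader tunable, creating the file if it doesn't exist yet
echo 'hw.igc.max_interrupt_rate="16000"' >> /boot/loader.conf.local
reboot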
-
@stephenw10 I had a fleeting thought that might be the case. How can I tell the difference between something that goes in the Tunables and something that goes into the loader.conf.local file?
-
Check the man page: https://man.freebsd.org/cgi/man.cgi?query=igc
In general though, hw.X values are loader variables and dev.X values are sysctls. But not always!
And you can usually read the loader values as sysctls after boot but not set them.
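For example (the numbers here are just the defaults mentioned earlier, not read from this particular box, and I'm assuming dev.igc.X.fc behaves like its igb equivalent):
# loader tunable: readable after boot, but not settable at runtime
sysctl hw.igc.max_interrupt_rate        # e.g. hw.igc.max_interrupt_rate: 8000
# runtime sysctl: can be read and changed live, or set via System Tunables
sysctl dev.igc.0.fc                     # e.g. dev.igc.0.fc: 3
sysctl dev.igc.0.fc=0                   # disable flow control on igc0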
-
@stephenw10 Clear as mud! Thanks! I need to go re-rack and add a switch. Hopefully I can work on this later today.
Edit: When I put that in I get this message on boot:
Setting up extended sysctls...sysctl: oid 'hw.igc.max_interrupt_rate' is a read only tunable
sysctl: Tunable values are set in /boot/loader.conf
sysctl: oid 'hw.igc.max_interrupt_rate' is a read only tunable
sysctl: Tunable values are set in /boot/loader.conf
done.
Adding the setting didn't change the speeds, but it may be because I've done it wrong, if these errors indicate anything. I put it into the file /boot/loader.conf.local.
-
You're seeing that error because it's still set in System Tunables (sysctls). If it's set correctly in loader.conf.local it will show as that value when read via sysctl after boot.
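So: remove the entry from System Tunables, leave it only in the file, and confirm after the next reboot. Roughly (the 16000 is just the value being tried here):
grep igc /boot/loader.conf.local        # expect: hw.igc.max_interrupt_rate="16000"
sysctl hw.igc.max_interrupt_rate        # should report 16000 if the loader value took effect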
-
This doesn't appear to be a pfSense thing. I've switched over to Windows and am getting the same speed results. So far the advice I've received from HUNSN is to enable Turbo, but Turbo is enabled by default, so it isn't that. I'll update this thread if I get it solved. If anyone has experience with these units or has any idea what BIOS modifications to try, please let me know.
-
Something in the way it's connected? PCIe bus using enough channels?
-
@stephenw10 The connections are about as simple as it comes. Right now it's just a PC with a 10GbE NIC plugged directly into the HUNSN box with a CAT6 cable. Link speed is 2.5Gbps. Still just getting the 1.59Gbps speeds in iperf. Using pfSense it was 2 PCs connected to 2 ports on the HUNSN, running iperf on each of the boxes. Still just 1.59Gbps.
With the 2 PCs connected directly to each other they link up at 10Gbps and I get iperf around 8Gbps.
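For reference, the test pattern is roughly this (assuming iperf3; the address is a placeholder and the flags are the ones mentioned earlier in the thread):
# on PC2, the receiving end
iperf3 -s
# on PC1, aimed at PC2 either through the HUNSN box or directly
iperf3 -c 192.0.2.10 -t 120 -P 10 -w 4M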
-
Sorry, I meant how they're connected internally. It looks like the N100 doesn't have any NIC on the SoC, so I assume they are PCIe connected. Though even 1 channel of PCIe v1 should pass more than 1.5Gbps...
-
Please provide the output of these commands:
pciconf -lcv igc0
pciconf -lcv igc1
pciconf -lcv igc2
pciconf -lcv igc3
pciconf -lcv igc4
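Or, to save some typing over SSH, the same thing in one line (just a convenience, identical output):
for i in 0 1 2 3 4; do pciconf -lcv igc$i; done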
-
@w0w I sure wish these had console ports. I'll try to type them out.
igc0@pci0:1:0: class=0x020000 rev=0x04 hdr=0x00 vendor=0x8086 device=0x125c subvendor=0x08086 subdevice=0x0000
vendor = 'Intel Corporation'
device = 'Ethernet Controller'
class = network
subclass = ethernet
cap 01[40] = powerspec 3 supports D0 D3 current D0
cap 05[50] = MSI supports 1 message, 64 bit, vector masks
cap 11[70] = MSI-X supports 5 messages, enabled. Table in map 0x1c[0x0], PBA in map 0x1c[0x2000]
cap 10[a0] = PCI-Express 2 endpoint max data 256(512) FLR RO NS
Max Read 512
link x1(x1) speed 2.5(5.0) ASPM disabled (L1)
ecap 0001[100] = AER 2 0 fatal 1 non-fatal 1 corrected
ecap 0003[140] = Serial 1 00e259ffff005232
ecap 0018[1c0] = LTR 1
ecap 001f[1f0] = Precision Time Measurement 1
ecap 001e[1e0] = L1 PM Substates 1
igc1@pci0:2:0: class=0x020000 rev=0x04 hdr=0x00 vendor=0x8086 device=0x125c subvendor=0x08086 subdevice=0x0000
vendor = 'Intel Corporation'
device = 'Ethernet Controller'
class = network
subclass = ethernet
cap 01[40] = powerspec 3 supports D0 D3 current D0
cap 05[50] = MSI supports 1 message, 64 bit, vector masks
cap 11[70] = MSI-X supports 5 messages, enabled. Table in map 0x1c[0x0], PBA in map 0x1c[0x2000]
cap 10[a0] = PCI-Express 2 endpoint max data 256(512) FLR RO NS
Max Read 512
link x1(x1) speed 2.5(5.0) ASPM disabled (L1)
ecap 0001[100] = AER 2 0 fatal 1 non-fatal 1 corrected
ecap 0003[140] = Serial 1 00e259ffff005233
ecap 0018[1c0] = LTR 1
ecap 001f[1f0] = Precision Time Measurement 1
ecap 001e[1e0] = L1 PM Substates 1
igc2@pci0:3:0: class=0x020000 rev=0x04 hdr=0x00 vendor=0x8086 device=0x125c subvendor=0x08086 subdevice=0x0000
vendor = 'Intel Corporation'
device = 'Ethernet Controller'
class = network
subclass = ethernet
cap 01[40] = powerspec 3 supports D0 D3 current D0
cap 05[50] = MSI supports 1 message, 64 bit, vector masks
cap 11[70] = MSI-X supports 5 messages, enabled. Table in map 0x1c[0x0], PBA in map 0x1c[0x2000]
cap 10[a0] = PCI-Express 2 endpoint max data 256(512) FLR RO NS
Max Read 512
link x1(x1) speed 5.0(5.0) ASPM disabled (L1)
ecap 0001[100] = AER 2 0 fatal 1 non-fatal 1 corrected
ecap 0003[140] = Serial 1 00e259ffff005234
ecap 0018[1c0] = LTR 1
ecap 001f[1f0] = Precision Time Measurement 1
ecap 001e[1e0] = L1 PM Substates 1
igc3@pci0:1:0: class=0x020000 rev=0x04 hdr=0x00 vendor=0x8086 device=0x125c subvendor=0x08086 subdevice=0x0000
vendor = 'Intel Corporation'
device = 'Ethernet Controller'
class = network
subclass = ethernet
cap 01[40] = powerspec 3 supports D0 D3 current D0
cap 05[50] = MSI supports 1 message, 64 bit, vector masks
cap 11[70] = MSI-X supports 5 messages, enabled. Table in map 0x1c[0x0], PBA in map 0x1c[0x2000]
cap 10[a0] = PCI-Express 2 endpoint max data 256(512) FLR RO NS
Max Read 512
link x1(x1) speed 5.0(5.0) ASPM disabled (L1)
ecap 0001[100] = AER 2 0 fatal 1 non-fatal 1 corrected
ecap 0003[140] = Serial 1 00e259ffff005235
ecap 0018[1c0] = LTR 1
ecap 001f[1f0] = Precision Time Measurement 1
ecap 001e[1e0] = L1 PM Substates 1
igc4@pci0:5:0: class=0x020000 rev=0x04 hdr=0x00 vendor=0x8086 device=0x125c subvendor=0x08086 subdevice=0x0000
vendor = 'Intel Corporation'
device = 'Ethernet Controller'
class = network
subclass = ethernet
cap 01[40] = powerspec 3 supports D0 D3 current D0
cap 05[50] = MSI supports 1 message, 64 bit, vector masks
cap 11[70] = MSI-X supports 5 messages, enabled. Table in map 0x1c[0x0], PBA in map 0x1c[0x2000]
cap 10[a0] = PCI-Express 2 endpoint max data 256(512) FLR RO NS
Max Read 512
link x1(x1) speed 5.0(5.0) ASPM disabled (L1)
ecap 0001[100] = AER 2 0 fatal 1 non-fatal 1 corrected
ecap 0003[140] = Serial 1 00e259ffff005236
ecap 0018[1c0] = LTR 1
ecap 001f[1f0] = Precision Time Measurement 1
ecap 001e[1e0] = L1 PM Substates 1
-
You can copy/paste that out of an SSH session.
-
@Stewart
Three of the ports have linked at the full PCIe speed of 5.0 GT/s and two are only at 2.5 GT/s.
You need to find out which ports are the 5.0 ones and re-test using only those ports.
To me it looks like some hardware limitation. I can only suppose that the two ports linking at 2.5 sit behind some external PCIe hub rather than hanging directly off the CPU.
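For what it's worth, a 2.5 GT/s x1 link would roughly explain the ~1.59Gbps figure. Back-of-envelope arithmetic only, nothing confirmed by the vendor:
# PCIe Gen1 (2.5 GT/s) x1 with 8b/10b encoding:
awk 'BEGIN { print 2.5 * 8 / 10 }'   # = 2.0 Gbit/s raw payload ceiling per lane
# minus TLP/DLLP framing and other protocol overhead, roughly 1.6-1.8 Gbit/s of
# usable throughput is typical, which lines up with the ~1.59Gbps iperf result
# on the ports that negotiated only 2.5 GT/s.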