2.5Gbps NICs only getting 1.5Gbps
-
I've been testing out a new HUNSN RJ38 appliance. It's an N100 CPU with 5x i226-V NICs. I have 3 devices in my test setup: 1 PC on the LAN, 1 PC on the WAN, and the RJ38. I have iperf installed on the 2 computers, with each computer using an ASUS 10Gb card.
If I plug the computers directly into each other then I get roughly 8Gbps. (I haven't looked into why it isn't reaching the 10Gbps that the cards are capable of.) If I connect the computers through the RJ38, I'm only getting roughly 1.5Gbps.
pfSense shows the CPU utilization on the firewall at less than 10% during the transfer. The PCs both show less than 25% CPU utilization. So it seems all the devices can handle the speeds just fine. I know the PCs are linking properly because I'm well over the 1Gbps mark.
What can I check/tune to get it to move up to the full 2.5Gbps?
-
@Stewart
What settings are you using for iperf3?
I did a similar test with a 2-port i226-V and got 2.4Gbps, but the CPU was a 4670K.
-
@w0w Just did some more testing. I've used:
iperf3.exe -c 172.16.16.16
iperf3.exe -c 172.16.16.16 -P 2
iperf3.exe -c 172.16.16.16 -P 4
I'm configuring to use the ports as:
WAN1
WAN2
LAN1
LAN2
MGMT
If I connect as such:
WAN1 - Server
LAN1 - Client
iperf3 = 1.47Gbps
iperf3 -P 2 = 1.61Gbps
If I move the server to the second LAN port then I get:
LAN2 - Server
LAN1 - Client
iperf3 = 1.50Gbps
iperf3 -P 2 = 2.37Gbps
So it seems the issue is when it is traversing LAN-WAN and not LAN-LAN.
Edit: Just tried "Disable hardware checksum offload" and "Suppress ARP Messages" but neither had any effect.
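For reference, each of the runs above can be extended with standard iperf3 options to rule out direction and short-run variance, e.g. a longer reverse-direction test against the same server address:
iperf3.exe -c 172.16.16.16 -t 30 -P 4 -R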
-
@Stewart
I've used -t 120 -P 10 -w 4MB on
PC1-WAN-LAN-PC2
I don't know, maybe it's some hardware limitation; I can't say until we know the motherboard layout and how you configured your pfSense. I'd start with using only WAN and LAN, two physical ports only, no bridge, and then re-test the
PC1-WAN-LAN-PC2 speed.
-
There should be zero difference between a LAN and a WAN port.
What makes a WAN port 'different' is the differing firewall rules and additional tasks such as NAT (amongst other things) which require the CPU to work harder.
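If it helps to rule that side out, the rules and NAT actually loaded can be inspected from the shell with stock pfctl options (nothing here is specific to this appliance):
pfctl -sr    # show loaded filter rules
pfctl -sn    # show NAT rules
pfctl -si    # show state table and packet counters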
-
@RobbieTT There's certainly no difference from a hardware perspective. I don't know if there is some form of NAT queuing that could be slowing it down, or something in the Tunables that needs to be tweaked to get higher speeds. I know that with the APU2 there were some things to change to get it to hit 1Gbps across the WAN. I haven't set any firewall rules or anything on this unit. In fact, I've tried checking "Disable All Packet Filtering" to turn off the firewall and NAT to see if it helps, and it didn't.
It is a bit peculiar.
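For anyone else testing this, a rough shell-level equivalent of that checkbox (standard pf commands, nothing pfSense-specific) should be:
pfctl -d    # temporarily disable pf (firewall + NAT)
pfctl -e    # re-enable pf afterwards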
-
There are a number of BSD and pfSense tuning guides for higher bandwidth. Having waded through a number of them, most of their suggestions seem to have already been applied to pfSense in subsequent releases.
Others I have yet to test include:
For PPPoE WANs with multi-queue NICs (single-core issue), an entry for net.isr.dispatch=deferred can lead to performance gains on affected hardware.
pfSense default in sysctl = net.isr.dispatch=direct
net.isr.dispatch="deferred"

To disable flow control on all ix interfaces. For LAN use the default of "3" is appropriate.
pfSense default in sysctl = hw.ix.flow_control: 3
hw.ix.flow_control="0"

To disable flow control on an individual igc interface (eg igc.3 when used as WAN).
pfSense default in sysctl = dev.igc.3.fc: 3
dev.igc.3.fc="0"

Increasing hw.igc.max_interrupt_rate to up to 20000 provides gains in some scenarios.
pfSense default in sysctl = hw.igc.max_interrupt_rate: 8000
hw.igc.max_interrupt_rate="16000"

Some FreeBSD tuning guides suggest removing the igc rx_process_limit for higher performance. This is the maximum number of received packets to process at a time; the default is 100 packets and a value of -1 means unlimited.
pfSense default in sysctl = hw.igc.rx_process_limit=100
hw.igc.rx_process_limit="-1"

Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU) to a value larger than the default of 1500. Best throughput results are seen with a large MTU; use 9706 if possible (ref Intel) or 9216 (ref iflib stated maximum). Use the ifconfig command to confirm or increase the MTU size.
To confirm the MTU used between two specific devices, use: route get <destination_IP_address>
To set the MTU on an ix interface for LAN use, enter the following, where 1 is the interface number: ifconfig ix1 mtu 9216

Doubling (or more) the IP Input Queue (intr_queue) length may be required on highly loaded systems, if queue drops are above zero.
To check for queue drops (ideally adjust to a zero value) via CLI use: sysctl net.inet.ip.intr_queue_drops
net.inet.ip.intr_queue_drops: 0
pfSense default in sysctl = net.inet.ip.intr_queue_maxlen: 1000
net.inet.ip.intr_queue_maxlen="2000"

For RJ45 interface stability, energy efficient ethernet (EEE) may need to be disabled. The default pfSense setting of 1 means disabled (which is somewhat counterintuitive) and 0 means enabled.
pfSense (eg igc.3) default in sysctl = dev.igc.3.eee_control=1
dev.igc.3.eee_control="1"
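Before changing any of the above it's worth confirming the current values on the box itself; a quick read-only check (the igc.0 index is just an example, adjust to your interface assignments) would be something like:
sysctl hw.igc.max_interrupt_rate hw.igc.rx_process_limit
sysctl dev.igc.0.fc dev.igc.0.eee_control
sysctl net.isr.dispatch net.inet.ip.intr_queue_maxlen net.inet.ip.intr_queue_drops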
I attach no weight to the above; it's more akin to my musing than anything else. Your issue is probably elsewhere, but you never know.
-
Yes, pretty much the only difference there is NAT vs no NAT.
Try increasing the interrupt rate. That is a known limitation in igc.
-
@stephenw10 @RobbieTT
I added hw.igc.max_interrupt_rate with a value of 16000 in the Tunables page but it didn't make a difference. Is there a tuning guide for IGC NICs? Most everything I'm finding is for IGB.
Also, the slowdown isn't related to LAN vs WAN as I previously indicated. I'm creating a matrix of ports and iperf results and it appears somewhat random. Most of the time it is ~1.6Gbps but sometimes I get ~2.4Gbps. While I thought it was reproducible before, I can no longer reproduce it consistently.
-
@Stewart
My examples in the scroll box are only ix or igc, no igb here!
-
hw.igc.max_interrupt_rate is a loader value; it needs to be added in /boot/loader.conf.local. Create that file if required.
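For reference, a minimal sketch of what that file would contain in this case (16000 being the example value from earlier in the thread):
hw.igc.max_interrupt_rate="16000"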
-
@stephenw10 I had a fleeting thought that might be the case. How can I tell the difference between something that goes in the Tunables and something that goes into the loader.conf.local file?
-
Check the man page: https://man.freebsd.org/cgi/man.cgi?query=igc
In general though hw.X values are loader variables. dev.X are sysctls. But not always!
And you can usually read the loader values as sysctls after boot but not set them.
-
@stephenw10 Clear as mud! Thanks! I need to go re-rack and add a switch. Hopefully I can work on this later today.
Edit: When I put that in I get this message on boot:
Setting up extended sysctls...sysctl: oid 'hw.igc.max_interrupt_rate' is a read only tunable
sysctl: Tunable values are set in /boot/loader.conf
sysctl: oid 'hw.igc.max_interrupt_rate' is a read only tunable
sysctl: Tunable values are set in /boot/loader.conf
done.
Adding the setting didn't change the speeds, but it may be because I've done it wrong, if these errors indicate anything. I put it into the file /boot/loader.conf.local.
-
You're seeing that error because it's still set in System Tunables (sysctls). If it's set correctly in loader.conf.local it will show as that value when read via sysctl after boot.
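A quick sanity check after reboot, if it's useful (kenv lists what the loader actually set, sysctl shows the value the kernel is using):
kenv | grep igc
sysctl hw.igc.max_interrupt_rate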
-
This doesn't appear to be a pfSense thing. I've switched over to Windows and am getting the same speed results. So far the advice I've received from HUNSN is to enable Turbo, but Turbo is enabled by default so it isn't that. I'll update this thread if I get it solved. If anyone has experience with these units or has any idea what BIOS modifications to try, please let me know.
-
Something in the way it's connected? Is the PCIe bus using enough lanes?
-
@stephenw10 The connections are about as simple as it comes. Right now it's just a PC with a 10GbE NIC plugged directly into the HUNSN box with a CAT6 cable. Link speed is 2.5Gbps. Still just getting the 1.59Gbps speeds in iperf. Using pfSense it was 2 PCs connected to 2 ports on the HUNSN, running iperf on each of the boxes. Still just 1.59Gbps.
With the 2 PCs connected directly to each other they link up at 10Gbps and I get iperf around 8Gbps.
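In case it's useful, the negotiated link speed on the pfSense side can be confirmed per interface with something like this (igc0 here is just a placeholder for whichever port is in use):
ifconfig igc0 | grep media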
-
Sorry, I meant how they're connected internally. Looks like the N100 doesn't have any NIC on the SoC, so I assume they are PCIe connected. Though even 1 lane of PCIe v1 should pass more than 1.5Gbps...
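Rough lane math, in case it helps: a PCIe 1.x lane is 2.5 GT/s with 8b/10b encoding, so about 2Gbps per lane before protocol overhead, while a PCIe 3.x lane (which I believe the i226-V uses) is 8 GT/s with 128b/130b encoding, roughly 7.9Gbps. So even a Gen1 x1 link should manage more than the 1.5Gbps being seen, and a Gen3 x1 link has headroom well beyond 2.5Gbps, assuming the link has actually trained at full width and speed.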
-
Please provide the output of these commands:
pciconf -lcv igc0
pciconf -lcv igc1
pciconf -lcv igc2
pciconf -lcv igc3
pciconf -lcv igc4
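The part to look for in that output is the PCI-Express capability, which reports the negotiated link width and speed. If it's easier to paste, a filtered version (assuming grep is available on the box) would be:
pciconf -lcv igc0 | grep -i link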