pfSense box not reaching 2.5Gbps
-
First of all, thanks for all your replies. They are helping me a ton!
@keyser I have been researching and found this doc: Intel Performance Optimization Guide
They have this note in there:
Some PCIe x8 slots are actually configured as x4 slots. These slots have insufficient bandwidth for full line rate with some dual port devices. The driver can detect this situation and will write the following message in the system log: “PCI-Express bandwidth available for this card is not sufficient for optimal performance. For optimal performance a x8 PCI-Express slot is required.” If this error occurs, moving your adapter to a true x8 slot will resolve the issue.
How can I access those logs in pfSense?
I did the following test that's making me believe more and more that the PCIe lane is being saturated on the J5040 board that's running pfSense:
- Took the NIC and plugged it into a Gigabyte Z390 board, using the secondary x16 PCIe slot that runs at x8 (172.26.1.42)
- Installed the latest drivers on Windows 11
- Took the USB 3.0 Ethernet adapter and plugged it into a MacBook Pro (172.26.1.254)
- Ran the iperf3 server on the Windows machine and the client on the MacBook Pro = 2.35Gbps
- Ran the iperf3 server on the MacBook Pro and the client on the Windows machine = 2.37Gbps (see the command sketch below)
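For reference, the commands were essentially the stock iperf3 client/server invocations, something along these lines (the 30-second duration is just illustrative):

```
# on the Windows machine (172.26.1.42): start the server
iperf3 -s

# on the MacBook Pro (172.26.1.254): run the client for 30 seconds
iperf3 -c 172.26.1.42 -t 30

# then swap roles: server on the MacBook Pro, client on the Windows machine
iperf3 -s                      # MacBook Pro
iperf3 -c 172.26.1.254 -t 30   # Windows machine
```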
The top part shows the results with the Windows machine as an iperf3 server, and the bottom with it as a client.
What other tests do you suggest?
Thanks again!
-
@python_ip Those logs are generated by the Windows driver, I assume. Regardless, that's not really relevant, as you are in an x1 slot anyway.
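(If you do want to check on the pfSense box anyway, something like this from Diagnostics > Command Prompt or an SSH shell should surface that driver message and, more usefully, the negotiated link width; the log path assumes a recent pfSense release with plain-text logs:)

```
# look for the driver's bandwidth warning in the kernel and system logs
dmesg | grep -i "bandwidth"
grep -i "PCI-Express bandwidth" /var/log/system.log

# check what link width/speed the slot actually negotiated for the NIC
pciconf -lc igc0
```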
The card certainly works as intended, as shown by your test, but what you really need is to test from LAN towards an iperf server on the WAN side - if your ISP has an iperf server you might be able to use that to test towards the 2Gbit.
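From a LAN-side client (not pfSense itself) that would look roughly like this; the hostname is only a placeholder for whatever iperf3 server your ISP or a nearby public host offers:

```
# download-direction test from a LAN client, through pfSense, to a WAN-side server
# -R makes the server send (download direction), -P 4 runs four parallel streams
iperf3 -c iperf.example-isp.net -p 5201 -R -P 4 -t 30
```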
Then see if you can pull that across pfSense without running the iperf server/client on pfSense itself.
-
Install pfSense on that Gigabyte board and retest. If it's a PCIe issue you will still see full throughput there.
But I'd be surprised if it's actually the PCIe slot restricting this.
Definitely try setting one receive queue. We set that on the 4200.
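On the igc driver that is an iflib boot-time tunable; a minimal sketch, assuming the LAN NIC is igc0:

```
# /boot/loader.conf.local -- limit the LAN NIC to a single receive queue
dev.igc.0.iflib.override_nrxqs=1
```

Loader tunables only take effect after a reboot.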
Be sure to disable any unnecessary PCIe devices (sound cards, FireWire, etc.) that may be on the same bus.
-
@keyser I did the following tests:
iperf3 in reverse mode from the fastest public server I could find = almost 1Gbps
Speed test from the Comcast Business site = 1.6Gbps
@stephenw10 I set override_nrxqs = 1 in /boot/loader.conf.local and did a reboot:
The value persisted the reboot and here are the results of a speed test afterwards:
Should I set the flag for igc1 too? That's the interface running WAN.
Should I try advanced config flags like that one?
Installing pfSense on the Z390 board is a test that's going to take some time; not an easy one for me right now.
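(For anyone following along: a quick way to double-check that the loader tunable really took after the reboot, assuming the standard iflib OID path and interrupt naming:)

```
# read back the value the driver picked up at boot
sysctl dev.igc.0.iflib.override_nrxqs

# count the receive-queue interrupt vectors that were actually created
vmstat -i | grep igc0
```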
-
@stephenw10 @keyser for more detail about the config of the interfaces, I have attached the output of sysctl dev.igc for each interface.
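(That output comes from nothing fancier than the following at a shell prompt:)

```
# dump every sysctl the igc driver exposes, one port at a time
sysctl dev.igc.0
sysctl dev.igc.1
```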
Let me know if you have any recommended settings for other parameters. Thanks!
-
@python_ip Hmm, I’m out of ideas. Intel NICs “always” play nice with pfSense, so the only other logical test is on that other motherboard. But really - I would expect the J5040 to be able to handle 2.5Gbit, and I would also expect PCIe 2.0 x1 to actually handle your 2Gbit ISP service (up and down), albeit at the very limit. So something's off.
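Rough numbers behind that “very limit” remark, assuming both igc ports sit behind the same x1 link and ignoring PCIe protocol/descriptor overhead:

$$
5\ \text{GT/s} \times \tfrac{8}{10}\ (\text{8b/10b}) = 4\ \text{Gbit/s} \approx 500\ \text{MB/s per direction}
$$

Routing roughly 2.3Gbit of traffic means about that much DMA in each direction of the lane, so well over half the raw budget is spoken for before headers, descriptors and interrupts are counted; tight, but not obviously impossible.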
-
Here is yet another test… running the iperf3 server on the MacBook and the client on the pfSense box = 1.53Gbps
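(The client side here was run from the pfSense shell itself; roughly the following, with iperf3 already installed on the box and the duration flag being illustrative:)

```
# from the pfSense shell: client towards the iperf3 server on the MacBook Pro
iperf3 -c 172.26.1.254 -t 30
```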
I have also disabled the onboard audio and the onboard LAN:
-
@python_ip said in pfSense box not reaching 2.5Gbps:
Should I set the flag for igc1 too? That's the interface running WAN.
Yes, if both igc NICs are in the path, try it on both. In a download test the WAN side is the one most affected by that receive-queue value.
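Concretely, that means a matching line for the WAN NIC alongside the existing one, e.g. in /boot/loader.conf.local (tunable names per the stock iflib/igc driver):

```
# force a single receive queue on both the LAN (igc0) and WAN (igc1) NICs
dev.igc.0.iflib.override_nrxqs=1
dev.igc.1.iflib.override_nrxqs=1
```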
-
@stephenw10 no luck…
I found this guide: https://calomel.org/network_performance.html
There is a chart where they recommend a PCIe 2.0 x1 slot for a gigabit firewall.
I will try a few more things but I think I am maxing out the PCIe lane.
-
Just to put some closure to this thread... it was the PCIe lane being saturated. I upgraded the hardware to a motherboard with a PCIe 3.0 x16 slot, and this is the result: