Cannot Achieve 10g pfsense bottleneck
-
Just for info.
When transferring large files between my TrueNAS system and my Windows 11 Pro PC, both using NVMe SSDs, I get transfer speeds above 5 Gbit/s.
The situation is as follows:
- NAS <> 10G switch <> pfSense <(lagg)> 10G switch <> PC.
- NAS, pfSense and PC all equipped with ConnectX-4 cards running at 10G.
- using jumbo frames (MTU 9000) on the connection (a quick end-to-end check is sketched after this list)
- transferring data between two NVMe SSDs
- PC to NAS: 5 Gbit/s
- NAS to PC: almost 9 Gbit/s
- my pfSense system is built around an older PC mainboard with an Intel i5-6600K (Skylake), a 4-core CPU
I am almost sure this PC is the speed-limiting factor. Its performance when transferring small files is 'dramatic'.
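A quick way to confirm that MTU 9000 actually holds end-to-end (not from the original post; the addresses below are placeholders) is a don't-fragment ping sized to the jumbo MTU minus 28 bytes of IP/ICMP headers:

# From the Windows 11 PC (-f = set don't-fragment, -l = payload bytes):
ping -f -l 8972 192.168.1.10
# From TrueNAS or pfSense (FreeBSD: -D = set don't-fragment, -s = payload bytes):
ping -D -s 8972 192.168.1.20

If these fail while a small-payload ping succeeds, some hop in the path is not passing jumbo frames.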
-
@pwood999 said in Cannot Achieve 10g pfsense bottleneck:
Maybe share your pfSense config, with any public IPs, certs, etc. obfuscated?
Or just screenshots of the VLAN firewall rules & any Limiter/Shaper queue settings?
Check this post or an XML redactor that might be helpful: link redactor
I will check what I can do about sharing the config. I think I saw some GitHub repo for anonymizing the config.
Edit: Yep, found it: GitHub pfsense-redactor
-
@Averlon said in Cannot Achieve 10g pfsense bottleneck:
Did you configure the NIC queues down to 4 as well, and did you test SpeedShift at package level? The hwpstate_intel driver works quite well with Broadwell CPUs and, according to your post, has shown improvements towards 6 Gbps on your Skylake CPU. Compared to your previously posted results, this is an improvement of almost 1 Gbps.
Yeah, I did all that. But 6G is not consistent; I am still mostly getting 5G.
I still think it is some configuration issue on the pfSense side of things. I am considering doing a fresh install, testing things out, and then reloading my config.
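For anyone retracing those steps, a sketch of where the knobs live. The SpeedShift tunable is a standard FreeBSD one; the queue sysctl name is my assumption for the ConnectX-4 mlx5en driver and should be verified against sysctl -a:

# /boot/loader.conf.local
machdep.hwpstate_pkg_ctrl=1    # hwpstate_intel/SpeedShift control at package level (0 = per-core)

# After boot, inspect the NIC channel/queue count; for ConnectX-4 (mlx5en, device "mce")
# the OID is assumed to be conf.channels -- confirm with: sysctl -a | grep channels
sysctl dev.mce.0.conf.channels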
@Averlon said in Cannot Achieve 10g pfsense bottleneck:
What about the interface counters on that Ubiquiti switch, especially the ones for the 25 Gbps uplinks - are there any error counters / drops shown?
I see no errors.
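Not from the thread, but the same kind of counters can be read on the pfSense box itself; the interface name below is just an example:

# Per-interface totals; Ierrs/Idrop/Oerrs should stay near zero under load:
netstat -i
# Live view of one interface, refreshed every second:
netstat -I mce0 -hw 1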
@louis2 said in Cannot Achieve 10g pfsense bottleneck:
I am almost sure this PC is the speed-limiting factor. Its performance when transferring small files is 'dramatic'.
Not similar to my case: I can achieve 10G at L2 with the same devices I test with, so I've ruled out the clients as the limiting factor.
I will try to adjust my settings to be as close to defaults as possible to see if it makes any difference.
-
Hi all
Very interesting topic: I'm experiencing the same issues, with similar limitations on a 10 Gbit/s link.
I have been experimenting for a year with possible settings and test scenarios, with no success so far. One session is limited to ~600 Mbit/s; 10 sessions are limited to ~5 Gbit/s.
-
I was able to increase the throughput per session from 600 Mbit/s to 1.2 Gbit/s by adding this line to /boot/loader.conf:
hw.pci.honor_msi_blacklist=0
A reboot is then required.
Source: https://lists.freebsd.org/pipermail/freebsd-bugs/2015-October/064355.html
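A note for anyone applying this on pfSense (my addition): the line can also go into /boot/loader.conf.local, the usual place to keep custom tunables, and since loader tunables land in the kernel environment, kenv can confirm the setting after the reboot:

# Persist the tunable (the file is created if it does not exist):
echo 'hw.pci.honor_msi_blacklist=0' >> /boot/loader.conf.local
# After rebooting, confirm the loader picked it up:
kenv hw.pci.honor_msi_blacklist    # should print: 0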
-
@TomTheOne Are you on VMware?
-
@Laxarus
No, it's Intel-based hardware. But I experienced the same 5 Gbit/s limitation back in the day, when the firewall was VM-based.
-
@TomTheOne this is interesting. I will try this too, but you still cannot saturate the 10G link, right?
-
@Laxarus
No, I can't. I have opened a support ticket with the hardware manufacturer. They requested some details about the test scenario, plus videos, and I delivered them. Let's see if I get an update on this. I suspect my hardware is not powerful enough, even though there are 4x 10 Gbit/s SFP+ ports on the board.
-
@Laxarus
Here are some results after the modification:

[2.8.0-RELEASE][admin@XX.XX.XX.XX]/root: iperf3 -c speedtest.init7.net -u -b 10G -R
Connecting to host speedtest.init7.net, port 5201
Reverse mode, remote host speedtest.init7.net is sending
[ 5] local XX.XX.XX.XX port 12350 connected to 82.197.188.129 port 5201
[ ID] Interval        Transfer     Bitrate         Jitter    Lost/Total Datagrams
[ 5]  0.00-1.00  sec  287 MBytes   2.41 Gbits/sec  0.003 ms  58505/264700 (22%)
[ 5]  1.00-2.00  sec  295 MBytes   2.47 Gbits/sec  0.003 ms  55304/267151 (21%)
[ 5]  2.00-3.00  sec  291 MBytes   2.44 Gbits/sec  0.004 ms  53480/262251 (20%)
[ 5]  3.00-4.00  sec  300 MBytes   2.51 Gbits/sec  0.003 ms  55269/270479 (20%)
[ 5]  4.00-5.00  sec  290 MBytes   2.43 Gbits/sec  0.003 ms  61091/269117 (23%)
[ 5]  5.00-6.00  sec  302 MBytes   2.53 Gbits/sec  0.003 ms  53292/270271 (20%)
[ 5]  6.00-7.00  sec  317 MBytes   2.65 Gbits/sec  0.003 ms  44540/272178 (16%)
[ 5]  7.00-8.02  sec  316 MBytes   2.61 Gbits/sec  0.003 ms  42222/269450 (16%)
[ 5]  8.02-9.00  sec  292 MBytes   2.50 Gbits/sec  0.003 ms  50357/260090 (19%)
[ 5]  9.00-10.00 sec  292 MBytes   2.45 Gbits/sec  0.003 ms  57960/267419 (22%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bitrate         Jitter    Lost/Total Datagrams
[ 5]  0.00-10.00 sec  3.64 GBytes  3.12 Gbits/sec  0.000 ms  0/0 (0%)  sender
[ 5]  0.00-10.00 sec  2.91 GBytes  2.50 Gbits/sec  0.003 ms  532020/2673106 (20%)  receiver

iperf Done.

[2.8.0-RELEASE][admin@XX.XX.XX.XX]/root: iperf3 -c speedtest.init7.net -u -b 10G
Connecting to host speedtest.init7.net, port 5201
[ 5] local XX.XX.XX.XX port 7880 connected to 82.197.188.129 port 5201
[ ID] Interval        Transfer     Bitrate         Total Datagrams
[ 5]  0.00-1.00  sec  142 MBytes   1.19 Gbits/sec  102167
[ 5]  1.00-2.03  sec  145 MBytes   1.19 Gbits/sec  104081
[ 5]  2.03-3.06  sec  149 MBytes   1.21 Gbits/sec  107313
[ 5]  3.06-4.03  sec  136 MBytes   1.17 Gbits/sec  97372
[ 5]  4.03-5.01  sec  124 MBytes   1.06 Gbits/sec  89114
[ 5]  5.01-6.00  sec  142 MBytes   1.21 Gbits/sec  102318
[ 5]  6.00-7.00  sec  134 MBytes   1.13 Gbits/sec  96599
[ 5]  7.00-8.03  sec  145 MBytes   1.18 Gbits/sec  104394
[ 5]  8.03-9.00  sec  133 MBytes   1.15 Gbits/sec  95249
[ 5]  9.00-10.03 sec  145 MBytes   1.18 Gbits/sec  104132
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bitrate         Jitter    Lost/Total Datagrams
[ 5]  0.00-10.03 sec  1.36 GBytes  1.17 Gbits/sec  0.000 ms  0/1002739 (0%)  sender
[ 5]  0.00-10.04 sec  1.36 GBytes  1.17 Gbits/sec  0.008 ms  0/1002739 (0%)  receiver

iperf Done.

I clearly see my hardware is not able to handle it; at 2.50 Gbit/s I'm losing 20% of the packets:
[ 5]  0.00-10.00 sec  2.91 GBytes  2.50 Gbits/sec  0.003 ms  532020/2673106 (20%)  receiver
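As a cross-check on those numbers (my arithmetic, not from the post): the sender pushed 3.64 GBytes at 3.12 Gbits/sec while the receiver got 2.91 GBytes at 2.50 Gbits/sec, and 3.12 x (1 - 0.20) ≈ 2.50, so the reported 20% datagram loss exactly accounts for the gap between sent and received bitrate.
-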
@TomTheOne unfortunately for me, it did not make a difference.
-
Try using multiple parallel streams. I've never managed to get full speed over 10G interfaces on any hardware.
-P, --parallel # number of parallel client streams to run
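For example (the server address is a placeholder; 8 streams and 30 seconds are arbitrary choices):

# Eight parallel TCP streams for 30 seconds against an iperf3 server:
iperf3 -c 192.0.2.10 -P 8 -t 30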