Cannot Achieve 10g pfsense bottleneck
-
@stephenw10 said in Cannot Achieve 10g pfsense bottleneck:
Mmm, drive speed and boot type really shouldn't make any difference to throughput.
This makes a difference if we want to migrate to a Proxmox VM. When a SATA drive is used, you can prepare a new drive with Proxmox and a pfSense VM on another system, then just move it over and reassign the interfaces in Proxmox. Just use a USB Ethernet adapter as the management interface on both PCs. With NVMe it can be more complicated and may require more downtime.
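As a side note, and only as a minimal sketch: after moving such a drive, the pfSense VM's NICs can be re-pointed at the new host's bridges from the Proxmox shell (the VM ID 100 and bridge names below are placeholders, not from this thread):
# Re-attach the pfSense VM's virtual NICs to the bridges on the new Proxmox host
qm set 100 --net0 virtio,bridge=vmbr0   # WAN
qm set 100 --net1 virtio,bridge=vmbr1   # LAN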
@Laxarus try the iperf command provided in my previous message and post back the results.
-
Right, but no difference to the throughput of the resulting install.
-
@stephenw10 said in Cannot Achieve 10g pfsense bottleneck:
Right, but no difference to the throughput of the resulting install.
Definitely yes.

-
@Averlon The reason I suggested testing each server to & from pfSense was just to verify that part of the E2E path, especially as the 25G link is used for all VLANs to the MikroTik.
Server1 --> MikroTik --> pfSense (DS & US)
Server2 --> MikroTik --> pfSense (DS & US)
It would at least verify that the firewall rules on the VLANs, and the VLANs through the MikroTik, can pass the full bandwidth.
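If it helps, a minimal sketch of those two legs (assuming the iperf package is installed on pfSense; 192.168.10.1 stands in for the firewall's address on each server's VLAN):
# On pfSense: start a temporary iperf3 server
iperf3 -s

# On Server1 (then repeat from Server2 on its own VLAN)
iperf3 -c 192.168.10.1 -P 4 -t 30        # Server1 -> pfSense (US)
iperf3 -c 192.168.10.1 -P 4 -t 30 -R     # pfSense -> Server1 (DS)
Bear in mind pfSense itself is not an ideal iperf endpoint, so treat the numbers as a sanity check of the path rather than a hard limit.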
-
@w0w said in Cannot Achieve 10g pfsense bottleneck:
@Laxarus try the iperf command provided in my previous message and post back the results.
still 5G and occasional 6G
@Averlon said in Cannot Achieve 10g pfsense bottleneck:
My suggestion is to disable HT/SMT and scale the queues down to 4; there might be another improvement. Intel SpeedShift may work better at package level rather than core level.
disabled HT but this did not make any difference
-
@Laxarus said in Cannot Achieve 10g pfsense bottleneck:
still 5G and occasional 6G
OK, so how exactly is the Intel XXV710 dual 25G connected to the Ubiquiti switch, and what is the exact switch model, ports, cables, and transceivers you’re using if any?
-
@w0w
Switch: USW-EnterpriseXG-24
Connection: Unifi SFP28 DAC cable (UC-DAC-SFP28)
I disabled the LAGG so there is only a single cable now.
Do you think these cables don't play nice with pfSense?
I also tested the built-in 10G RJ-45 port, but still no difference, so I've ruled this out.
At this point, I am entertaining the idea of putting all 10G devices in the same VLAN/switch and sticking with L2.
-
@Laxarus said in Cannot Achieve 10g pfsense bottleneck:
Do you think these cables don't play nice with pfSense?
I don’t think so. The more I look at it, the more I think it’s some software glitch — but where exactly is the bottleneck? It looks just like some queues/limiters. This CPU should do 30-40 Gbit with fw filtering and 60 Gbit just for routing. I don’t know — something is broken.
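If you want to rule that part out quickly, here is a sketch of how to check from a pfSense shell (limiters are dummynet pipes and shaper queues are ALTQ; empty output means none are active):
# List ALTQ/traffic-shaper queues, if any
pfctl -s queue

# List limiter (dummynet) pipes, if any
dnctl pipe show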
-
Maybe share your pfSense config, with any public IPs, certs, etc. obfuscated?
Or just screenshots of the VLAN firewall rules & any limiter/shaper queue settings?
Check this post for an XML redactor that might be helpful:
link redactor
-
@Laxarus said in Cannot Achieve 10g pfsense bottleneck:
disabled HT but this did not make any difference
Did you configure the NIC queues down to 4 as well and test SpeedShift at package level? The hwpstate_intel driver works quite well with Broadwell CPUs and has shown improvements (according to your post) towards 6 Gbps on your Skylake CPU. Compared to your previously posted results, that is an improvement of almost 1 Gbps.
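For reference, a minimal sketch of how those two settings are usually applied on pfSense, via loader tunables in /boot/loader.conf.local (this assumes the iflib-based ixl driver for the XXV710; the device index and exact tunable names may differ on your build, so treat it as a starting point):
# Limit the XXV710 port to 4 RX/TX queue pairs (repeat per ixl unit in use)
dev.ixl.0.iflib.override_nrxqs=4
dev.ixl.0.iflib.override_ntxqs=4

# hwpstate_intel (SpeedShift): 1 = package-level P-state control, 0 = per-core
machdep.hwpstate_pkg_ctrl=1
A reboot is needed before loader tunables take effect.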
How is the throughput if you disable the firewall (pfctl -d) and use pfSense as a router only? NAT won't be available once you disable the firewall. You can re-enable it by running pfctl -e, and it will load your last ruleset. If you don't see any significant difference with the firewall disabled, you can at least be sure it's not the firewall ruleset slowing things down.
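For reference, the sequence from the console or an SSH shell would be:
# Temporarily disable pf (packet filtering and NAT)
pfctl -d

# ... re-run the same iperf3 tests through the firewall ...

# Re-enable pf; the last loaded ruleset becomes active again
pfctl -e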
What about the interface counters on that Ubiquiti switch, especially the ones for the 25 Gbps uplinks? Are there any error counters or drops shown?
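The same kind of check is possible on the pfSense side of those links, for example (ixl0 and the sysctl unit number are placeholders for your 25G interface):
# Per-interface totals, including input/output errors
netstat -i

# Live per-second counters for the 25G port, including drops (-d)
netstat -w 1 -I ixl0 -d

# Driver-level error/drop counters for the XXV710
sysctl dev.ixl.0 | grep -Ei 'err|drop'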
-
Just for info.
When transferring large files between my TrueNAS system and my Windows 11 Pro PC, both using NVMe SSDs, I get transfer speeds above 5 Gbit/s.
The situation is as follows:
- NAS <> 10G switch <> pfSense <(lagg)> 10G switch <> PC
- NAS, pfSense and PC all equipped with ConnectX-4 cards used at a speed of 10G
- using jumbo frames (9000) on the connection
- transferring data between two NVMe SSDs
- PC to NAS: 5 Gbit/s
- NAS to PC: almost 9 Gbit/s
- my pfSense system is built around an older PC mainboard with an Intel i5-6600K (Skylake, 4-core) CPU
I am almost sure the PC is the speed-limiting factor; the PC's performance when transferring small files is 'dramatic'.
-
@pwood999 said in Cannot Achieve 10g pfsense bottleneck:
Maybe share your pfSense config, with any public IPs, certs, etc. obfuscated?
Or just screenshots of the VLAN firewall rules & any limiter/shaper queue settings?
Check this post for an XML redactor that might be helpful:
link redactor
I will check what I can do about sharing the config. I think I saw a GitHub repo for anonymizing the config.
Edit: Yep, found it:
GitHub: pfsense-redactor
@Averlon said in Cannot Achieve 10g pfsense bottleneck:
Did you configure the NIC queues down to 4 as well and test SpeedShift at package level? The hwpstate_intel driver works quite well with Broadwell CPUs and has shown improvements (according to your post) towards 6 Gbps on your Skylake CPU. Compared to your previously posted results, that is an improvement of almost 1 Gbps.
Yeah, I did all that. But 6G is not consistent; I am still getting mostly 5G.
I still think it's some configuration issue on the pfSense side of things. I am considering doing a fresh install, testing things out, and then reloading my config.
@Averlon said in Cannot Achieve 10g pfsense bottleneck:
What about the interface counters on that Ubiquiti switch, especially the ones for the 25 Gbps uplinks? Are there any error counters or drops shown?
I see no errors.
@louis2 said in Cannot Achieve 10g pfsense bottleneck:
I am almost sure the PC is the speed-limiting factor; the PC's performance when transferring small files is 'dramatic'.
Not similar to my case, since I can achieve 10G on L2 with the same devices I test with, so I've ruled out the clients as the limiting factor.
I will try to adjust my settings as close to defaults as possible to see if it makes any difference.
-
Hi all
Very interesting topic: I'm experiencing the same issues, with similar limitations on a 10 Gbit/s link.
I've been experimenting for a year with possible settings and test scenarios. No success so far.
One session limited to ~600 Mbit/s.
10 sessions limited to ~5 Gbit/s.
-
I was able to increase the throughput per session from 600 Mbit/s to 1.2 Gbit/s by adding this setting to /boot/loader.conf:
hw.pci.honor_msi_blacklist=0
Then a reboot is required.
Source: https://lists.freebsd.org/pipermail/freebsd-bugs/2015-October/064355.html
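For anyone repeating this, a quick sketch of how to confirm it took effect after the reboot (on pfSense, /boot/loader.conf.local is the usual place for custom tunables so they survive updates):
# Should report 0 after the reboot
sysctl hw.pci.honor_msi_blacklist

# Check per-queue MSI-X interrupts for the NIC (replace ixl with your driver name, e.g. ix)
vmstat -i | grep -i ixl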
-
@TomTheOne Are you on VMware?
-
@Laxarus
No, it's Intel-based hardware. But I experienced the same 5 Gbit/s limitation when the firewall was VM-based back in the day.
-
@TomTheOne This is interesting. I will try this too, but you still cannot saturate the 10G link, right?
-
@Laxarus
No, I can't. I have opened a ticket with a support request at the hardware manufacturer. They requested some details about the test scenario and videos. I delivered the details. Let's see if I get an update on this. I suspect my hardware is not powerful enough, even though there are 4x 10 Gbit/s SFP+ ports on the board.
-
@Laxarus
Here are some results after the modification:

[2.8.0-RELEASE][admin@XX.XX.XX.XX]/root: iperf3 -c speedtest.init7.net -u -b 10G -R
Connecting to host speedtest.init7.net, port 5201
Reverse mode, remote host speedtest.init7.net is sending
[  5] local XX.XX.XX.XX port 12350 connected to 82.197.188.129 port 5201
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec   287 MBytes  2.41 Gbits/sec  0.003 ms  58505/264700 (22%)
[  5]   1.00-2.00   sec   295 MBytes  2.47 Gbits/sec  0.003 ms  55304/267151 (21%)
[  5]   2.00-3.00   sec   291 MBytes  2.44 Gbits/sec  0.004 ms  53480/262251 (20%)
[  5]   3.00-4.00   sec   300 MBytes  2.51 Gbits/sec  0.003 ms  55269/270479 (20%)
[  5]   4.00-5.00   sec   290 MBytes  2.43 Gbits/sec  0.003 ms  61091/269117 (23%)
[  5]   5.00-6.00   sec   302 MBytes  2.53 Gbits/sec  0.003 ms  53292/270271 (20%)
[  5]   6.00-7.00   sec   317 MBytes  2.65 Gbits/sec  0.003 ms  44540/272178 (16%)
[  5]   7.00-8.02   sec   316 MBytes  2.61 Gbits/sec  0.003 ms  42222/269450 (16%)
[  5]   8.02-9.00   sec   292 MBytes  2.50 Gbits/sec  0.003 ms  50357/260090 (19%)
[  5]   9.00-10.00  sec   292 MBytes  2.45 Gbits/sec  0.003 ms  57960/267419 (22%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  3.64 GBytes  3.12 Gbits/sec  0.000 ms  0/0 (0%)  sender
[  5]   0.00-10.00  sec  2.91 GBytes  2.50 Gbits/sec  0.003 ms  532020/2673106 (20%)  receiver

iperf Done.

[2.8.0-RELEASE][admin@XX.XX.XX.XX]/root: iperf3 -c speedtest.init7.net -u -b 10G
Connecting to host speedtest.init7.net, port 5201
[  5] local XX.XX.XX.XX port 7880 connected to 82.197.188.129 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   142 MBytes  1.19 Gbits/sec  102167
[  5]   1.00-2.03   sec   145 MBytes  1.19 Gbits/sec  104081
[  5]   2.03-3.06   sec   149 MBytes  1.21 Gbits/sec  107313
[  5]   3.06-4.03   sec   136 MBytes  1.17 Gbits/sec  97372
[  5]   4.03-5.01   sec   124 MBytes  1.06 Gbits/sec  89114
[  5]   5.01-6.00   sec   142 MBytes  1.21 Gbits/sec  102318
[  5]   6.00-7.00   sec   134 MBytes  1.13 Gbits/sec  96599
[  5]   7.00-8.03   sec   145 MBytes  1.18 Gbits/sec  104394
[  5]   8.03-9.00   sec   133 MBytes  1.15 Gbits/sec  95249
[  5]   9.00-10.03  sec   145 MBytes  1.18 Gbits/sec  104132
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.03  sec  1.36 GBytes  1.17 Gbits/sec  0.000 ms  0/1002739 (0%)  sender
[  5]   0.00-10.04  sec  1.36 GBytes  1.17 Gbits/sec  0.008 ms  0/1002739 (0%)  receiver

iperf Done.

I clearly see my hardware is not able to handle it; at 2.50 Gbit/s I'm losing 20% of the packets:
[  5]   0.00-10.00  sec  2.91 GBytes  2.50 Gbits/sec  0.003 ms  532020/2673106 (20%)
-
@TomTheOne unfortunately for me, it did not make a difference.