Slow upload speeds on HP Z2 G9 PFSense Box
-
You replaced the i5 with an i7? Or is that the client you're testing from? Not that it should matter.
Bridging has always been somewhat fragile in pfSense/FreeBSD and can create some unexpected traffic scenarios. It would be good to rule that out entirely if you can. But, yes, I agree that it seems unlikely here since the 6100 passes it OK.
-
@stephenw10 Sorry, it’s an i5-14500.
-
@stephenw10 said in Slow upload speeds on HP Z2 G9 PFSense Box:
You replaced the i5 with an i7? Or is that the client you're testing from? Not that it should matter.
Bridging has always been somewhat fragile in pfSense/FreeBSD and can create some unexpected traffic scenarios. It would be good to rule that out entirely if you can. But, yes, I agree that it seems unlikely here since the 6100 passes it OK.
Any other suggestions for diagnostics here? I'm just about at my wit's end. This is a relatively high-end workstation with ECC RAM and otherwise all standard components.
-
Did you try an iperf test between an internal client and pfSense directly?
If it is some low level issue I'd expect to see the same issue there for the client sending. Though in that scenario it does cross the bridge differently.
You could disable filtering entirely. If the issue remains, that proves it's a driver/hardware issue rather than something in pf.
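For reference, filtering can be toggled from the pfSense shell with pfctl (a sketch; run it interactively from the console or SSH, and note that disabling pf also disables NAT, so LAN clients will lose internet access while it's off):

```shell
pfctl -d    # disable packet filtering (and NAT) entirely
# ...re-run the upload speed test here...
pfctl -e    # re-enable filtering when done
```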
-
@stephenw10 I'll try running iperf3 in server mode later tonight/tomorrow and see what my Mac Studio client (my control) gets to it.
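For reference, that test would look roughly like this ((router IP) is a placeholder for the pfSense LAN address, and -P 8 is just an assumed stream count):

```shell
# On pfSense, start an iperf3 server:
iperf3 -s

# From the Mac Studio client:
iperf3 -c (router IP) -P 8       # client -> pfSense (upload direction)
iperf3 -c (router IP) -P 8 -R    # pfSense -> client (download direction)
```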
The other data point I've got: I have two of these HP Z2 G9s as nodes with the exact same NIC running Proxmox VE (Spicy Debian), and I have none of these upload speed issues on either, whether from the command prompt or from within LXCs and VMs.
If I did attempt a reinstall, just to give it a clear slate, will my Netgate ID/registration remain the same?
-
Yes the NDI will remain unchanged. You could install 24.11 directly again.
-
@stephenw10 said in Slow upload speeds on HP Z2 G9 PFSense Box:
Yes the NDI will remain unchanged. You could install 24.11 directly again.
As a control, I ran iperf3 on the 6100 and used my Mac to see what I'd get.
# iperf3 -c (router IP) -P 120
[ ID] Interval        Transfer     Bitrate         Retr
[SUM]  0.00-10.00 sec  2.48 GBytes  2.13 Gbits/sec           sender
[SUM]  0.00-10.02 sec  2.45 GBytes  2.10 Gbits/sec           receiver

# iperf3 -c (router IP) -P 120 -R
[ ID] Interval        Transfer     Bitrate         Retr
[SUM]  0.00-10.07 sec  3.61 GBytes  3.08 Gbits/sec  29586    sender
[SUM]  0.00-10.00 sec  3.48 GBytes  2.99 Gbits/sec           receiver
I'm getting better performance through the 6100 than I am hitting it directly.
-
Good, that's what I'd expect to see. At those speeds you're probably seeing iperf using 100% of one CPU core. iperf is deliberately single threaded. And pfSense is optimised for routing not serving.
As a side note, using 120 streams is probably counterproductive. You usually won't see any increase beyond the available number of NIC queues, so 8 for the ix NICs in the 6100.
On the i5, one CPU core is capable of far higher iperf values, and the remaining cores are capable of pushing it.
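If you want to confirm the queue count and the single-core limit on the box itself, something like this from the pfSense shell works (a sketch; the `ix` device naming is an assumption for your NIC):

```shell
# Each NIC queue gets its own interrupt line, so counting them shows
# how many queues the driver actually configured:
vmstat -i | grep ix

# Watch per-thread CPU usage while iperf runs; one thread pinned near
# 100% suggests the single-threaded iperf limit is the bottleneck:
top -HaSP
```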
-
Looks like you're correct. 6100:
# iperf3 -c (router IP) -P 8
[SUM]  0.00-10.00 sec  2.48 GBytes  2.13 Gbits/sec           sender
[SUM]  0.00-10.03 sec  2.47 GBytes  2.12 Gbits/sec           receiver

# iperf3 -c (router IP) -P 8 -R
[ ID] Interval        Transfer     Bitrate         Retr
[SUM]  0.00-10.01 sec  3.68 GBytes  3.16 Gbits/sec  14       sender
[SUM]  0.00-10.00 sec  3.68 GBytes  3.16 Gbits/sec           receiver
I'll try to get i5 numbers in the next day or so. Each core on that is more powerful than the entire Atom CPU, so I'd expect to see higher numbers, unless there's a LAN or bridging issue...hopefully this'll help give us that data.
-
@Bear said in Slow upload speeds on HP Z2 G9 PFSense Box:
Each core on that is more powerful than the entire Atom CPU
Ha, yup. So it would be interesting to see what the limiting factor is there. Unknown throttling aside.
-
Running with the i5-14500...
# iperf3 -c (router IP) -P 8
[ ID] Interval        Transfer     Bitrate         Retr
[SUM]  0.00-10.00 sec  8.71 GBytes  7.48 Gbits/sec           sender
[SUM]  0.00-10.00 sec  8.70 GBytes  7.47 Gbits/sec           receiver

# iperf3 -c (router IP) -P 8 -R
[ ID] Interval        Transfer     Bitrate         Retr
[SUM]  0.00-10.01 sec  11.0 GBytes  9.40 Gbits/sec  167      sender
[SUM]  0.00-10.01 sec  10.9 GBytes  9.39 Gbits/sec           receiver
Bear in mind, the RG is connected to the same NIC that the "LAN" side of the bridge is. Just the second port.
Any other thoughts/suggestions?
-
Hmm, well that's what you might expect to see without any issues.
So it tests OK from a LAN client to pfSense, and OK from pfSense to a WAN side server, but not from the client to the server directly.
You're going to have to test without the bridge. It's the one part of your setup that's both unusual and known to cause problems.
-
@stephenw10 It’s not testing okay from PFSense to the WAN side. The up speeds are diminished. The up test isn’t long enough to show the speed drop. And even then, it’s still significantly lower.
Getting rid of the bridge is a non-starter. I've got a complex setup with multiple physical network segments on the same subnet that have rules for accessing each other. It's the primary reason I've been using pfSense since it was m0n0wall. I can't spend 8+ hours trying to figure out how to make this all work and rewriting firewall rules just for another data point, when none of the other data points have yielded any actionable remedial suggestions.
The only thing I'm left with is that maybe my transceiver works in the 6100 but not the X520 for some reason. So I'll either try another transceiver or try an X710-based copper NIC instead of the SFP+-based one, as the X710 supports NBASE-T.
-
Well in your earlier tests you were seeing >1Gbps from pfSense to the external server but you said you were still seeing ~400Mbps from a client behind it. Is that not the case?
The actual value of the upload from pfSense directly is never accurate, especially at high values. But it's a useful test when it returns above the throughput throttle.
Even if you need a bridged setup it would still be useful to run a test without the bridge. If that doesn't show a restriction then there is clearly something in the bridge config causing it, in which case we can dig into that. Though there's not much you can set there.
-
@stephenw10 I was seeing 1.4Gbit from the client behind it that, over time, tapers down to 450-700Mbit. On the 6100, same setup, seeing 2.2-3.5Gbit steady, depending on time.
-
Well if I was testing that I would still repeat that test with a very basic setup, no bridging.
However that seems to imply a WAN side issue, so if there are no errors I'd look at the sysctl MAC stats for the WAN NIC for anything that is getting exhausted without throwing an actual error.
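On the ix driver those counters live under the per-device sysctl tree, something like this (dev.ix.0 is an assumption; substitute the WAN port's device number):

```shell
# Dump all MAC-level counters for the first ix NIC:
sysctl dev.ix.0.mac_stats

# Or just watch for anything that looks dropped or exhausted
# climbing between runs:
sysctl dev.ix.0.mac_stats | grep -Ei 'miss|drop|err'
```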
Checking the output of
netstat -m
during the test might also be interesting.
-
This is a production network with active users. I can't just "turn off bridging."
I'm going to try a different transceiver/chipset to see if that has any impact, and then try a 710-based copper NIC after that to completely avoid using a transceiver if that's not successful.
I don't see why the bridge itself which has worked fine on the 6100 would suddenly be the root cause of issues with the i5 system.
-
@Bear said in Slow upload speeds on HP Z2 G9 PFSense Box:
I don't see why the bridge itself which has worked fine on the 6100 would suddenly be the root cause of issues with the i5 system.
Me either. I'm not aware of any specific issue that would present like this. But I've seen many issues with bridged interfaces behaving unexpectedly.
Can you connect a local iperf server to a different NIC/interface so you are routing it and test to/from that?
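Sketched out, with placeholder addresses (the separate-interface host is hypothetical):

```shell
# iperf3 server on a host attached to a separate, routed (non-bridged) interface:
iperf3 -s

# From the usual LAN client; traffic now routes through pfSense rather than
# crossing the bridge:
iperf3 -c (opt-host IP) -P 8
iperf3 -c (opt-host IP) -P 8 -R
```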
-
@stephenw10 Yup, I can try that, though it would be on a different Intel NIC (same ix driver though): an X550.
I'm not using the X550 for the WAN as it doesn't support multi gig, at least not under FreeBSD from what I gather. The X710 should. More data tomorrow. :)
-
The X550-T does support 2.5 and 5G. It's one of the few NICs that does. But I assume you have the SFP variant? If your module works in the X710 and X553 I would expect it to work there too.
But it should also be good for a local test at 10G.