hoping for 10Gbps, getting sub 1Gbps speed Xeon E3-1270 v5 3.6GHz
-
@cool_corona It's a single-NIC route. If you want to test throughput, you should test the THROUGH part of it.
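For illustration, a setup along these lines keeps the test traffic flowing through the firewall rather than terminating on it (the hostnames are placeholders):
# Server on a host in one subnet (e.g. on the WAN side or another VLAN):
hostA$ iperf3 -s
# Client on a host in a different subnet behind pfSense, so packets are routed through it:
hostB$ iperf3 -c hostA.example.lan -P 10
hostB$ iperf3 -c hostA.example.lan -P 10 -R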
-
Hardware
pfSense box
Dell R230
Xeon E3-1270 v5 @ 3.6GHz
16GB RAM
2x Samsung 850 SSD in a redundant ZFS pool
HP NC523SFP NIC in PCIe slot 2 (which I believe is a full 16 lanes; see the pciconf check below)
Switches, cables & optics
UniFi Aggregation 10G switches
Intel 850nm SFP+ optics
MM patch cables (the same ones used to get faster results with the 6100)
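For what it's worth, the "full 16 lanes" assumption above can be checked from the pfSense shell; pciconf is part of stock FreeBSD, and the NIC's entry shows the negotiated link width (the exact device name varies by driver):
# List PCI devices verbosely with their capabilities; look for the NIC and
# check the "link x..." field for the negotiated lane count and speed:
pciconf -lvc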
Testing
iperf3:
iperf3 -c server.fqdn.foo.bar -P 10
iperf3 -c server.fqdn.foo.bar -P 10 -R
iperf3 -c server.fqdn.foo.bar -P 10 -6
-
As I see it, when tested on the LAN the traffic never reaches pfSense.
It's only throughput on pfSense that's the issue, and that could be disk-subsystem related on the pfSense hardware if offloading is disabled.
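Whether the hardware offloads are actually enabled can be checked from the console (ix0 here is just a placeholder interface name):
# -m also lists the interface's supported capabilities; the options= line shows
# which offloads (e.g. TSO4, LRO, RXCSUM) are currently enabled:
ifconfig -m ix0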
-
Hmm, I wouldn't expect anything to be written to disk there unless something is misconfigured and it's somehow using swap. You should see that in iostat if it was, though.
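A minimal way to rule that out while a test is running (command flags as on stock FreeBSD/pfSense):
# Per-device disk activity, refreshed every 2 seconds during an iperf3 run:
iostat -x 2
# Confirm swap is not being touched:
swapinfo -h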
It's clearly not a CPU limit with those numbers; no core is close to 100%.
-
@spacebass Exactly, that's what I am doing, using a host behind it!
-
@ogghi for what it’s worth, I determined those HP NICs just aren’t great in FreeBSD. It’s unclear how many queues they use and the driver doesn’t seem to support any kind of manual or dynamic assignment.
I moved to an Intel NIC in that same box and am now getting closer to 8Gbps.
-
@spacebass To be sure, our provider will actually be changing their core router in our office, so let's see. Maybe it's not our pfSense's issue after all xD
-
“Make sure you have multiple queues attached for each NIC.”
And how do we do that?
-
@jimbob-indiana said in hoping for 10Gbps, getting sub 1Gbps speed Xeon E3-1270 v5 3.6GHz:
And how do we do that?
What I think I've learned is that it's both NIC- and driver-dependent. For instance, now that I've moved to an Intel NIC, at boot (via dmesg) I can see that the system automatically assigns TX and RX queues based on the number of CPU cores I have.
With the HP NC523SFP NIC's driver, there does not seem to be any way to set queues manually or to have the system assign them.
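For drivers that do support it, a rough way to check and (on iflib-based drivers such as the Intel ones) override the queue count; the device name and values below are only examples:
# See what the driver reported at boot:
dmesg | grep -iE 'queues|msi-x'
# iflib-based drivers accept per-device loader tunables, e.g. in
# /boot/loader.conf.local (illustrative values, device name will differ):
dev.ix.0.iflib.override_nrxqs="4"
dev.ix.0.iflib.override_ntxqs="4"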
-
Most NICs will enable multiple queues by default if possible. You will usually see 1 RX and 1 TX queue per CPU core, up to a limit defined by the NIC chip, usually 4 or 8.
However, some NICs will use 1 queue by default, notably vmx, and most can be configured to use just one, or might detect something incorrectly. So if you see poor throughput, you should check to see how many queues are in use. Most drivers report it in the boot log:
ix0: <Intel(R) X553 N (SFP+)> mem 0x80400000-0x805fffff,0x80604000-0x80607fff at device 0.0 on pci9
ix0: Using 2048 TX descriptors and 2048 RX descriptors
ix0: Using 8 RX queues 8 TX queues
ix0: Using MSI-X interrupts with 9 vectors
ix0: allocated for 8 queues
ix0: allocated for 8 rx queues
ix0: Ethernet address: 00:08:a2:12:e2:ca
ix0: eTrack 0x8000084b PHY FW V65535
ix0: netmap queues/slots: TX 8/2048, RX 8/2048
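Beyond the boot log, a quick way to confirm the queues are actually spreading load is to look at the per-queue interrupt counters (the interface name here is just an example):
# Each MSI-X vector / receive queue shows up as its own interrupt source:
vmstat -i | grep ix0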
-
@stephenw10 said in hoping for 10Gbps, getting sub 1Gbps speed Xeon E3-1270 v5 3.6GHz:
Most drivers report it in the boot log.
I can confirm that the QL driver does not report queues at boot and does not have an (apparent) setting for configuring them.
I checked in TrueNAS too and there's nothing in the boot logs or driver config about queues.
-
Update to this thread:
I've moved to an Intel X520-DA2 dual-port NIC and I'm getting much better performance. I had to do some tuning, but I'm now getting about 7-8Gbps to my ISP's iperf3 server, which seems reasonable for 3 hops away.
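For context, the tuning in question is typically along these lines on FreeBSD/pfSense; these are common 10G-oriented tunables and example values, not necessarily the exact ones used here:
# Loader tunable (reboot required), e.g. in /boot/loader.conf.local:
kern.ipc.nmbclusters="1000000"
# Runtime sysctls (System > Advanced > System Tunables in pfSense):
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216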
I get about the same when routing across subnets (VLANs) through pfSense.
I'm also not processor or thread limited any more.
At this point, I'll consider that a 'mostly win' - seems like a massive improvement from where I was. Assuming this box stays stable, I'll purchase support from Netgate since this will be my first time not running on Netgate hardware (outside of some VMs).
Thanks everyone who chimed in here.
-