Help with 25G Speeds on HA pfSense Routers (LACP) Using Mellanox ConnectX-5 NIC
-
pfSense makes a poor iperf endpoint; it's optimised as a router. You should test through it if you possibly can, rather than to or from it directly.
Seeing 25Gbps in a single stream when pfSense is sending is surprising. Impressive. Do you see more if you run multiple streams? Or multiple simultaneous iperf instances?
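To illustrate the multi-stream test being suggested, something like the following would do it (the server address 10.0.1.10 is a placeholder for an iperf3 server on the far side of pfSense; start `iperf3 -s` there first):

```shell
# Single-stream baseline:
iperf3 -c 10.0.1.10 -t 30

# Four parallel TCP streams in one iperf3 process (-P):
iperf3 -c 10.0.1.10 -t 30 -P 4

# Or multiple independent iperf3 instances against separate server ports
# (run "iperf3 -s -p 5202" etc. on the server side first):
iperf3 -c 10.0.1.10 -p 5201 -t 30 &
iperf3 -c 10.0.1.10 -p 5202 -t 30 &
wait
```

If the aggregate of several streams is much higher than a single stream, the bottleneck is per-flow (single queue/core) rather than the link or the box as a whole.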
-
What config do you have on pfSense for that test? A lot of rules? Basic install?
-
@stephenw10 Pretty much just a basic install with HA and some VLANs created. We are in the testing phase, so we wanted as clean a baseline as possible. In your previous post you asked some good questions that I will try to test later today. Right now I have to do "work", i.e. email and other admin nonsense.
-
Update: I was able to do some more testing. I rechecked the MTU settings everywhere and found some things I had missed. I then set up two Ubuntu VMs, one on each Proxmox node. Each Proxmox node had the necessary VLANs created (I'm using Open vSwitch), and I was able to get 25Gbps across the VLANs from one Ubuntu box to the other.
-
Interesting. I may want to do some additional testing in my lab on this; I've never managed to push pfSense much beyond about 10 gig, even with iperf and ideal scenarios, so this is super interesting.
-
@planedrop I spent all day at it and just started looking at everything again because it didn't add up. Here's a screenshot of one of the results. I'm using SR-IOV.
I pass through one of the ports on the X520 as WAN and give that to the pfSense VM. I then create as many VFs as I want from the remaining LAN PF, and assign one of the LAN VFs to pfSense to carry traffic between LAN and WAN.
In doing this, I'm noticing exactly what you did: the reverse (-R) direction gives full line rate, but the normal direction gives about 1/10th the speed. For me that's around 1.3Gbit.
If I create another VM, some random Linux VM like Ubuntu or Lubuntu, VF-to-VF communication using iperf3 runs at line rate (10Gbit) in both directions. It has no issues at all.
Communication from pfSense to any of the VFs (still VF-to-VF) has issues in the normal direction only; the reverse direction gives full line rate.
I've tested many settings and nothing fixes this. It's also specifically TCP that struggles more than UDP: with UDP I can achieve 3.5Gbit with iperf3 (vs. the 1.3Gbit of TCP). Changing the offload settings doesn't seem to fix it. Or rather, if I disable some of them, even the -R direction slows to a crawl.
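For anyone reproducing this asymmetry, the comparison above boils down to three iperf3 runs (10.0.2.1 is a placeholder for the pfSense VF address; run `iperf3 -s` on pfSense first):

```shell
# Normal direction (pfSense receives) -- the slow path described above:
iperf3 -c 10.0.2.1 -t 30

# Reverse direction (-R, pfSense sends) -- full line rate:
iperf3 -c 10.0.2.1 -t 30 -R

# UDP at a target bitrate (-u, -b) to compare against TCP:
iperf3 -c 10.0.2.1 -u -b 10G -t 30
```

A large gap between the first two runs, with UDP landing in between, points at the receive path on the pfSense side (descriptors, queues, or offloads) rather than the wire.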
-
@MindlessMavis This is an older thread, so forgive me if I'm missing something; I did not re-read the entirety of it.
But this thread with the super high speeds captured was on dedicated hardware, not virtualized through Proxmox.
Virtualizing a firewall is always going to result in weird behavior and shouldn't be used in production.
-
@planedrop Yep, it's an older thread; I arrived here through googling the issue I was having, and this was one of the top results.
Whilst it is a virtualised environment, the commonality between them exists, and my setup is largely indistinguishable from a non-virtualised one: I'm using dedicated cores, binding IRQs to those cores, isolating them, and using hardware passthrough.
There are a variety of other threads where people have had similar experiences with SR-IOV, too.
Just to add: this was with the default MTU of 1500 and rx/tx descriptors of 4k on the Linux side.
Improving this on the FreeBSD side (by placing the following into loader.conf.local):
# Enable iflib overrides for ixv interfaces
dev.ixv.0.iflib.override_qs_enable="1"
dev.ixv.1.iflib.override_qs_enable="1"
# Set 4K descriptors
dev.ixv.0.iflib.override_nrxds="4096"
dev.ixv.0.iflib.override_ntxds="4096"
dev.ixv.1.iflib.override_nrxds="4096"
dev.ixv.1.iflib.override_ntxds="4096"
# Enable iflib overrides for the ix interface
dev.ix.0.iflib.override_qs_enable="1"
# Set 4K descriptors for the ix interface
dev.ix.0.iflib.override_nrxds="4096"
dev.ix.0.iflib.override_ntxds="4096"
seems to have helped with the rx_no_dma_resources issues. You can check whether you are hitting the same problem by looking at:
sysctl dev.ix | grep -E "(r_drops|r_stalls|r_restarts|no_desc_avail|credits|override_n[rt]xds)"
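On the Linux side, the equivalent descriptor settings mentioned above (4k rx/tx) can be inspected and changed with ethtool; `enp1s0` here is a placeholder for your actual interface name:

```shell
# Show current and maximum supported ring sizes:
ethtool -g enp1s0

# Raise RX/TX descriptors to 4096, matching the FreeBSD tunables above
# (requires root; the NIC must support rings that large):
ethtool -G enp1s0 rx 4096 tx 4096
```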