Help with 25G Speeds on HA pfSense Routers (LACP) Using Mellanox ConnectX-5 NIC
-
@kilo40 You would use it in conjunction with your firewall... it doesn't do firewally things, really; it's just a packet-pusher.
FreeBSD just can't push more than 10-12Gbps at a time because, as I understand it, it's all done at the kernel level.
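A rough way to see that ceiling for yourself (just a sketch, assuming you can get a shell on the box during a test):
```
# From the pfSense shell while an iperf3 run is in flight:
top -HaSP

# If one kernel thread (e.g. an interrupt handler or an if_io_tqg
# worker, depending on the driver) is pinned near 100% while the other
# cores sit mostly idle, you're hitting that kernel-path limit, not
# the NIC.
```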
-
@rcoleman-netgate Well, I guess I'll be learning TNSR. Thanks for your response; I was scratching my head all day trying to figure this out.
-
@rcoleman-netgate said in Help with 25G Speeds on HA pfSense Routers (LACP) Using Mellanox ConnectX-5 NIC:
use it in conjunction with your firewall... it doesn't do firewally things, really; it's just a packet-pusher.
So what firewall is recommended for use in conjunction with TNSR?
Or is it intended that firewall functionality will be added in the future?
-
@Patch said in Help with 25G Speeds on HA pfSense Routers (LACP) Using Mellanox ConnectX-5 NIC:
So what firewall is recommended for use in conjunction with TNSR?
pfSense.
-
@Patch Yeah, you'd just put pfSense behind the TNSR router.
Maybe not the best example, but a simple one: TNSR at the edge connected to some super-high-speed fiber optic WAN, then a pfSense box for each department that handles the actual firewalling and just uses TNSR for its own default gateway.
-
Yeah, you likely won't see 25Gbps, especially not in a single-thread TCP test like that.
Where exactly are you testing between in that iperf test?
-
@stephenw10 In the test where I wasn't getting the expected speeds, pfSense was the iperf3 server and Proxmox was the client. However, if I make Proxmox the server and pfSense the client, I get the full 25G. Below are the results.
**pfSense as iperf3 server**
root@idm-node01:~# iperf3 -c 10.10.92.2 -i 1
Connecting to host 10.10.92.2, port 5201
[ 5] local 10.10.92.10 port 53808 connected to 10.10.92.2 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 107 MBytes 896 Mbits/sec 275 1.20 MBytes
[ 5] 1.00-2.00 sec 448 MBytes 3.75 Gbits/sec 0 1.41 MBytes
[ 5] 2.00-3.00 sec 475 MBytes 3.98 Gbits/sec 29 1.25 MBytes
[ 5] 3.00-4.00 sec 481 MBytes 4.04 Gbits/sec 3 1.07 MBytes
[ 5] 4.00-5.00 sec 478 MBytes 4.01 Gbits/sec 0 1.36 MBytes
[ 5] 5.00-6.00 sec 481 MBytes 4.04 Gbits/sec 35 1.20 MBytes
[ 5] 6.00-7.00 sec 476 MBytes 4.00 Gbits/sec 0 1.46 MBytes
[ 5] 7.00-8.00 sec 476 MBytes 4.00 Gbits/sec 18 1.31 MBytes
[ 5] 8.00-9.00 sec 475 MBytes 3.98 Gbits/sec 20 1.15 MBytes
[ 5] 9.00-10.00 sec 479 MBytes 4.02 Gbits/sec 0 1.43 MBytes
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 4.27 GBytes 3.67 Gbits/sec 380 sender
[ 5] 0.00-10.00 sec 4.27 GBytes 3.67 Gbits/sec receiver
iperf Done.
**Proxmox as server, pfSense as client**
Accepted connection from 10.10.92.2, port 18728
[ 5] local 10.10.92.10 port 5201 connected to 10.10.92.2 port 50384
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 2.36 GBytes 20.3 Gbits/sec
[ 5] 1.00-2.00 sec 2.73 GBytes 23.4 Gbits/sec
[ 5] 2.00-3.00 sec 2.73 GBytes 23.4 Gbits/sec
[ 5] 3.00-4.00 sec 2.49 GBytes 21.4 Gbits/sec
[ 5] 4.00-5.00 sec 2.73 GBytes 23.4 Gbits/sec
[ 5] 5.00-6.00 sec 2.73 GBytes 23.5 Gbits/sec
[ 5] 6.00-7.00 sec 2.73 GBytes 23.5 Gbits/sec
[ 5] 7.00-8.00 sec 2.73 GBytes 23.5 Gbits/sec
[ 5] 8.00-9.00 sec 2.73 GBytes 23.5 Gbits/sec
[ 5] 9.00-10.00 sec 2.72 GBytes 23.4 Gbits/sec
[ 5] 10.00-10.00 sec 193 KBytes 12.4 Gbits/sec
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 26.7 GBytes 22.9 Gbits/sec receiver
**This is the output from pfSense as the client as well**
Connecting to host 10.10.92.10, port 5201
Cookie: zbn2lcktyyydi2a56bdfequy7f7nb5un5ysv
TCP MSS: 1460 (default)
[ 5] local 10.10.92.2 port 50384 connected to 10.10.92.10 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 2.36 GBytes 2417 MBytes/sec 5 690 KBytes
[ 5] 1.00-2.00 sec 2.73 GBytes 2791 MBytes/sec 0 1.41 MBytes
[ 5] 2.00-3.00 sec 2.73 GBytes 2792 MBytes/sec 0 2.15 MBytes
[ 5] 3.00-4.00 sec 2.49 GBytes 2554 MBytes/sec 22 569 KBytes
[ 5] 4.00-5.00 sec 2.73 GBytes 2793 MBytes/sec 0 1.29 MBytes
[ 5] 5.00-6.00 sec 2.73 GBytes 2797 MBytes/sec 0 2.03 MBytes
[ 5] 6.00-7.00 sec 2.73 GBytes 2797 MBytes/sec 0 2.77 MBytes
[ 5] 7.00-8.00 sec 2.73 GBytes 2797 MBytes/sec 0 2.98 MBytes
[ 5] 8.00-9.00 sec 2.73 GBytes 2797 MBytes/sec 0 2.98 MBytes
[ 5] 9.00-10.00 sec 2.72 GBytes 2789 MBytes/sec 0 2.98 MBytes
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 26.7 GBytes 2732 MBytes/sec 27 sender
[ 5] 0.00-10.00 sec 26.7 GBytes 2732 MBytes/sec receiver
CPU Utilization: local/sender 73.3% (5.5%u/67.9%s), remote/receiver 72.3% (1.9%u/70.4%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done
-
pfSense is a bad server; it's optimised as a router. You should test through it if you possibly can, rather than to or from it directly.
Seeing 25Gbps in a single stream when pfSense is sending is surprising. Impressive. Do you see more if you run multiple streams, or multiple simultaneous iperf instances?
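Something along these lines, reusing the addresses from your output above (the ports and stream counts are just examples):
```
# Single stream, forward then reverse:
iperf3 -c 10.10.92.2 -t 10
iperf3 -c 10.10.92.2 -t 10 -R

# Multiple parallel streams in one instance:
iperf3 -c 10.10.92.2 -t 10 -P 8

# Or multiple simultaneous instances on different ports:
iperf3 -s -p 5201 &                # server side
iperf3 -s -p 5202 &
iperf3 -c 10.10.92.2 -p 5201 &     # client side
iperf3 -c 10.10.92.2 -p 5202 &
```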
-
What config do you have on pfSense for that test? A lot of rules? Basic install?
-
@stephenw10 Pretty much just a basic install with HA and some VLANs created. We are in the testing phase, so we wanted to have as much of a baseline as possible. In your previous post you asked some good questions that I will try to test later today. Right now I have to do "work", i.e. email and other admin nonsense.
-
Update: I was able to do some more testing. I rechecked the MTU settings for everything and found some things I missed. I then set up two Ubuntu VMs on each Proxmox node. Each Proxmox node had the necessary VLANs created (I'm using Open vSwitch), and I was able to get 25Gbps across the VLANs from one Ubuntu box to another.
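For anyone retracing this, the quickest end-to-end MTU check I know is a don't-fragment ping sized just under the MTU. This assumes a 9000-byte jumbo MTU for illustration; adjust the payload size for yours:
```
# Linux (the Ubuntu VMs): 9000-byte MTU minus 28 bytes of IP/ICMP header
ping -M do -s 8972 10.10.92.10

# FreeBSD/pfSense equivalent (-D sets the don't-fragment bit)
ping -D -s 8972 10.10.92.10
```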
-
Interesting. I may want to do some additional testing in my lab on this; I've never managed to push PF much beyond about 10 gig, even with iperf and ideal scenarios, so this is super interesting.
-
@planedrop I spent all day at it and just started looking at everything again because it didn't add up. Here's a screenshot of one of the results, and I'm using SR-IOV.
I pass through one of the ports on the X520 as WAN and give that to the pfSense VM. I then create as many VFs as I want from the remaining LAN PF. I assign one of the LAN VFs to pfSense to function as the method of transfer between LAN and WAN.
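For reference, the VF setup on the Proxmox host looks roughly like this; the interface name, VM ID, and PCI address below are placeholders for whatever your X520 enumerates as:
```
# Create 4 VFs on the LAN PF (ens1f1 is a placeholder name)
echo 4 > /sys/class/net/ens1f1/device/sriov_numvfs

# List the new virtual functions and their PCI addresses
lspci | grep -i "virtual function"

# Pass one VF through to the pfSense VM (placeholder VM ID and address)
qm set 100 -hostpci0 0000:03:10.0
```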
In doing this, I'm noticing exactly what you did: the reverse (-R) direction provides full line rate, but the regular direction provides 1/10th the speed, which for me is around 1.3Gbit.
If I create another VM, some random Linux VM like Ubuntu or Lubuntu, that VF-to-VF communication using iperf3 will do line rate (10Gbit) forwards and backwards. It won't have any issues.
Communication from pfSense to any of the VFs (doing VF-to-VF communication) has issues in the normal direction; the reverse direction provides full line rate.
I've tested many settings and nothing fixes this. It's also TCP that has more issues than UDP: using UDP I can achieve 3.5Gbit with iperf3 (vs the 1.3Gbit of TCP). Changing the offload settings doesn't seem to fix it. Or rather, if I disable some of them, then even the -R direction becomes a crawl.
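For anyone curious, the offload toggles I've been flipping are along these lines (the interface names are placeholders; on pfSense there are also the hardware offload checkboxes under System > Advanced > Networking):
```
# Linux guest, on the VF (placeholder name ens18):
ethtool -K ens18 tso off gso off gro off lro off

# pfSense/FreeBSD side, on the VF interface (placeholder ixv0):
ifconfig ixv0 -tso -lro -rxcsum -txcsum
```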
-
@MindlessMavis This is an older thread, so forgive me if I am missing something; I did not re-read the entirety of it.
But the super high speeds captured in this thread were on dedicated hardware, not virtualized through Proxmox.
Virtualizing a firewall is always going to result in weird behavior and shouldn't be used in production.