pfSense 2.1 + ESXi: poor network performance
-
Hi,
I observe poor network performance with pfSense when used with ESXi.
All my tests were made with iperf, with firewalling disabled.
I've run different tests with the e1000 and vmxnet3 drivers, with pfSense (2.0.3 and 2.1) 32 and 64 bit, and with FreeBSD and Debian. Here are the results of my tests:
pfsense e1000
[2.1-RC0][root@pfsense2.localdomain]/root(1): /usr/local/bin/iperf -c 172.17.8.32
------------------------------------------------------------
Client connecting to 172.17.8.32, TCP port 5001
TCP window size: 65.0 KByte (default)
[ 3] local 172.17.8.33 port 28383 connected with 172.17.8.32 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.2 sec   276 MBytes   228 Mbits/sec

pfsense vmxnet2
[2.1-RC0][root@pfsense2.localdomain]/root(2): /usr/local/bin/iperf -c 172.17.8.28
------------------------------------------------------------
Client connecting to 172.17.8.28, TCP port 5001
TCP window size: 65.0 KByte (default)
[ 3] local 172.17.8.29 port 37385 connected with 172.17.8.28 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-79.6 sec  4.02 GBytes   434 Mbits/sec

pfsense vmxnet3
[2.1-RC0][root@pfsense2.localdomain]/root(1): /usr/local/bin/iperf -c 172.17.8.28
------------------------------------------------------------
Client connecting to 172.17.8.28, TCP port 5001
TCP window size: 65.0 KByte (default)
[ 3] local 172.17.8.29 port 59092 connected with 172.17.8.28 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  3.15 GBytes  2.71 Gbits/sec

debian e1000
iperf -c 172.17.8.36
------------------------------------------------------------
Client connecting to 172.17.8.36, TCP port 5001
TCP window size: 23.5 KByte (default)
[ 3] local 172.17.8.37 port 40286 connected with 172.17.8.36 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  2.57 GBytes  2.21 Gbits/sec

debian vmxnet3
iperf -c 172.17.8.36
------------------------------------------------------------
Client connecting to 172.17.8.36, TCP port 5001
TCP window size: 23.5 KByte (default)
[ 3] local 172.17.8.37 port 41585 connected with 172.17.8.36 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  9.84 GBytes  8.45 Gbits/sec

freebsd 8.3 e1000
freebsd2# /usr/local/bin/iperf -c 172.17.8.38
------------------------------------------------------------
Client connecting to 172.17.8.38, TCP port 5001
TCP window size: 32.5 KByte (default)
[ 3] local 172.17.8.39 port 30375 connected with 172.17.8.38 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  2.26 GBytes  1.94 Gbits/sec

freebsd 8.3 vmxnet3
freebsd2# /usr/local/bin/iperf -c 172.17.8.38
------------------------------------------------------------
Client connecting to 172.17.8.38, TCP port 5001
TCP window size: 32.5 KByte (default)
[ 3] local 172.17.8.39 port 31077 connected with 172.17.8.38 port 5001
freebsd 9.1 e1000
root@freebsd2:/root # /usr/local/bin/iperf -c 172.17.8.38
------------------------------------------------------------
Client connecting to 172.17.8.38, TCP port 5001
TCP window size: 32.5 KByte (default)
[ 3] local 172.17.8.39 port 25752 connected with 172.17.8.38 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  2.94 GBytes  2.53 Gbits/sec

freebsd 9.1 vmxnet3
root@freebsd2:/root # /usr/local/bin/iperf -c 172.17.8.38
------------------------------------------------------------
Client connecting to 172.17.8.38, TCP port 5001
TCP window size: 32.5 KByte (default)
[ 3] local 172.17.8.39 port 54521 connected with 172.17.8.38 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  6.72 GBytes  5.77 Gbits/sec

I can't understand why pfSense doesn't get the same results as FreeBSD.
Is anyone aware of a culprit? By the way, I observe better performance for pfSense under KVM with virtio than under Xen.
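For reference, each client run above assumes an iperf server already listening on the target VM; a minimal sketch of that side (the exact invocation is an assumption, it was not captured here) is:

# on the receiving VM: run iperf in server mode, listening on the default TCP port 5001 used by the client runs above
/usr/local/bin/iperf -s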
Best regards.
-
Interesting question and useful numbers, which is always good. You seem to have missed out some of the iperf results above. Do you have those numbers?
Steve
-
Indeed, my fault, bad copy/paste.
I had to rebuild the VM… So here are the results for FreeBSD 8.3.

e1000
freebsd2# /usr/local/bin/iperf -c 172.17.8.226
Client connecting to 172.17.8.226, TCP port 5001
TCP window size: 32.5 KByte (default)
[ 3] local 172.17.8.227 port 21848 connected with 172.17.8.226 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  2.37 GBytes  2.04 Gbits/sec

And the result with the vmxnet3 driver:
freebsd2# /usr/local/bin/iperf -c 172.17.8.226
------------------------------------------------------------
Client connecting to 172.17.8.226, TCP port 5001
TCP window size: 32.5 KByte (default)
[ 3] local 172.17.8.227 port 56445 connected with 172.17.8.226 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  3.79 GBytes  3.26 Gbits/sec
-
Ah, well the figure that stands out is FreeBSD 8.3 with vmxnet3. I assume it was far higher than pfSense, >5Gbps?
Steve
-
Yes,
but why is performance with pfSense lower than FreeBSD on the same BSD version?
-
Had you enabled "pf" (even with a very simple ruleset) when testing stock FreeBSD 8.3 against pfSense?
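For instance, a minimal pass-all ruleset (just a sketch, not taken from this thread) would keep pf in the packet path without blocking anything:

# /etc/pf.conf - trivial ruleset so pf inspects every packet but passes it all
pass in all
pass out all

# load and enable it on the stock FreeBSD box
pfctl -e -f /etc/pf.conf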
-
Nope,
because I was only testing performance without firewalling. I know I will lose a little with firewalling enabled.
I'm very surprised that network performance under FreeBSD is lower than under Debian.
Maybe the Ethernet drivers are not so good under FreeBSD.
But I did not expect such awful performance with pfSense. Is it a known issue?
Regards.
-
How did you disable pf?
Steve
-
There is a checkbox under System > Advanced > Firewall/NAT called "Disable all packet filtering."
And when I do a pfctl -d on the console, I get "pfctl: pf not enabled".
So I think it's disabled.
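One more way to confirm it (a quick sketch to run on the pfSense console) is to ask pf for its status directly:

# first line of the output reads "Status: Disabled ..." when packet filtering is off
pfctl -s info | head -n 1
-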
Fair enough, that seems disabled.
Still routing? NATing stuff?
Steve
-
You should also check TSO and LRO; IIRC those get disabled by default on pfSense.
Also worth checking the difference in the net.isr sysctls between FreeBSD and pfSense. While iperf gives you the performance of a single stream, it does not generalize to different workload requirements.
Also, you are not testing forwarding performance when running iperf on pfSense itself, and you have to consider that.
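For example (a rough sketch, assuming console access and an em0 interface; adjust the name to your setup), compare the two systems with:

# is TSO enabled at the TCP level?
sysctl net.inet.tcp.tso

# per-interface offload flags: look for TSO4 / LRO in the options list
ifconfig em0 | grep -i options

# netisr dispatch policy and thread limits often differ between stock FreeBSD and pfSense
sysctl net.isr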
-
Yes, I'm already aware of this.
For now it is just a test to evaluate the virtualization.
I was expecting to get the same results with FreeBSD and pfSense.
-
You've missed the ESXi version and the hardware you are using.
Is it a standalone server or a vSphere cluster? Do you have all ESXi host drivers up to date?
-
One thing that seems to help your workload is polling, so enable polling and test with it. Also, in pfSense kern.hz is reduced to 100 when VMware is detected; it might be worth upping that to the same value as FreeBSD.
It used to be problematic at the time, though; if you run with vmware-tools, it is probably worth testing that scenario as well.
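Something along these lines (only a sketch; em0 and the 1000 Hz value, which matches stock FreeBSD's default, are assumptions) lets you test both from the console:

# see what timer frequency the kernel is currently using
sysctl kern.hz

# raise it back to the stock FreeBSD value; takes effect after a reboot
echo 'kern.hz=1000' >> /boot/loader.conf.local

# enable device polling on the test interface (kernel must have DEVICE_POLLING)
ifconfig em0 polling
-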
Not sure if this is related, but earlier this week, using the daily snapshots, my download speed went from 250 Mbit to 12 when running through the pfSense box; bypassing the box and hooking directly to the cable modem gives 250 again. Upload speed does not seem to be affected (15 Mbit with or without pfSense). I only started to see this earlier this week. I disabled IPv6, but the problem still persists to this day.
-
Okay, couple things.
- Which ESXi -exactly-? Version and build number.
- Are you running the vmxnet2 as Flexible?
- Can you please retest with the 'legacy' interface? Preferably in pcn(4) mode over lnc(4) mode. I'm rusty so I forget how to force that behavior. (Hell, I forget if the PCIID changes were committed.)
- How many em(4) (aka e1000) interfaces are you running during testing? Yes, this matters.
I think part of the problem is that 2.1 pulled in a bad em(4) branch, but I haven't had time to test in more depth.
EDIT: Oh, can you also please check to see if you have "calcru: runtime went backwards" messages?
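A quick way to check for those (a sketch, assuming console or SSH access):

# scan the kernel message buffer for clock trouble under the hypervisor
dmesg | grep -i calcru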