ESXi 6.0, vmxnet3 and iPerf3
-
I was running some tests for laughs and came across something unusual. I have a Dell blade acting as the ESXi 6.0 host. I created a couple of simple vSwitches for LAN1 and LAN2; the default vSwitch is the WAN. I installed pfSense with all defaults, then assigned and configured the interfaces: WAN, LAN1 and LAN2.
WAN 10.10.0.250/24
LAN1 172.16.11.1/24
LAN2 172.16.12.1/24
DHCP is enabled on both LAN1 and LAN2, and a rule was added on LAN2 to allow any to any. I then created two Lubuntu 14.10 clients and put them both on LAN1. When I run iperf3 between them I get ~6.2 Gbps. However, when I move the second client to LAN2 and run the test again, my bandwidth drops to ~1.4 Gbps…
When going client to client directly, I get 6+ Gbps. When going through the pfSense interfaces, it drops to just over 1 Gbps. I installed open-vm-tools just to see if it made any difference in my tests, but it did not.
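The runs were nothing fancy, just iperf3 defaults along these lines (the server address is only an example of what DHCP handed out on LAN1):

# on the first Lubuntu client (the iperf3 server, on LAN1)
iperf3 -s

# on the second Lubuntu client (on LAN1 for the first test, then moved to LAN2)
iperf3 -c 172.16.11.100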
Anyone have any ideas?
-
Maybe I am not following the test properly, but are you expecting the performance to be the same in both cases?
One is internal traffic and the other is routed.
-
While I don't expect them to be the same, I don't think it's normal for the traffic to be clamped down by a factor of 4.
-
I see. So the whole testbed is virtual vSwitches and vNICs? It looks like the routed traffic is at wire speed.
-
So the whole testbed is virtual vSwitches and vNICs?
Yes, I didn't want any external influences, so everything is being done within the confines of the host itself.
It looks like the routed traffic is at wire speed.
Do you mean the direct traffic? I expect full wire speed when going direct. I expect slightly less than full speed when routing, but not a drop like this.
-
https://forum.pfsense.org/index.php?topic=87675.0
Seems pretty normal… not much more to expect.
-
The testbed can be constructed without LAN2.
Create one virtual standard switch, one pfSense VM and two Lubuntu VMs on the same subnet (a scripted way to create the switch is sketched after the results).
Using iperf to measure internal traffic on my ESXi 6 host, the test results are:
lubuntu to lubuntu: ~10 Gbit/s
pfsense to lubuntu: ~2 Gbit/s
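If you'd rather script the switch than click through the vSphere client, creating it and a port group from an ESXi shell looks roughly like this (the names are just examples):

esxcli network vswitch standard add --vswitch-name=vSwitchTest
esxcli network vswitch standard portgroup add --portgroup-name=TestLAN --vswitch-name=vSwitchTest

-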
I wonder why your throughput is so much higher than mine. How do you actually get 10 Gbps? The real-world max including overhead should be in the 6-8 Gbps range.
-
What is limiting the 6-8 Gbps max? I have seen benchmark reports of 20 Gbps for internal VM-to-VM traffic. The difference may be attributable to the ESXi host processor.
CPU usage maxed at 6382 MHz (47%) during my VM-to-VM iperf test, which is nearly double the average.
-
Yeah, my little N40L does not do 6 Gbps Linux to Linux on the same vSwitch:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 2.42 GBytes 2.08 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 2.42 GBytes 2.08 Gbits/sec receiver
Then again, it's an older HP N40L; it doesn't have a lot of horsepower, but hey, for the price when I got it, it was a steal. Very happy with it; it runs all my VMs great.
But yeah, pfSense does seem sluggish compared to Linux using the native drivers. I really need to install a native FreeBSD VM and test that. This is to the exact same VM running the server in the test above, just from pfSense, which has an interface on the same vSwitch:
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 454 MBytes 381 Mbits/sec sender
[ 4] 0.00-10.00 sec 453 MBytes 380 Mbits/sec receiver
I don't have any real issues with this sort of speed, and I get about the same from a physical machine on that LAN segment to pfSense. But from the same physical machine to the Linux VM:
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 1.02 GBytes 880 Mbits/sec sender
[ 4] 0.00-10.00 sec 1.02 GBytes 880 Mbits/sec receiver
While my internet is only 50/10, it does seem strange that network performance is low on pfSense while the other VMs are pretty much at wire speed over the physical network, etc.
I would really like to see pfSense perform as well as the Linux VM. These tests were done with pfSense 2.2.2 64-bit and iperf3_11.
Edit: OK, I installed FreeBSD right from FreeBSD disk 1 and got these results without any tools installed, testing to the same Linux VM used as the iperf3 server in all the tests above:
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.01 sec 1.55 GBytes 1.33 Gbits/sec sender
[ 4] 0.00-10.01 sec 1.55 GBytes 1.33 Gbits/sec receiver
So why does a fresh FreeBSD install, with no tools at all, perform better than pfSense?
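If anyone wants to reproduce this, the clean FreeBSD side only needs the iperf3 package, nothing compiled or tuned; the address is whatever your Linux iperf3 server has:

pkg install iperf3
iperf3 -c <linux vm ip>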
-
So why does a fresh FreeBSD install, with no tools at all, perform better than pfSense?
What is running on FreeBSD? I notice that if I disable packet filtering on pfSense during the iperf test, the result gets a nice bump. If I throw more CPU and memory at the pfSense VM… nada.
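For anyone who wants to try that: pf can be toggled from a shell on the pfSense box. Disabling it also drops all firewall rules and NAT, so do this only on a test box and re-enable right away:

pfctl -d    # disable the packet filter, then rerun the iperf test
pfctl -e    # re-enable the packet filter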
-
Yeah, nothing on FreeBSD, just a clean install; sshd is probably the only thing running. The packet filter takes that big of a hit? Even for stuff from its own local interface to an IP on the same segment? I'll have to give that a test. Yeah, more memory or one or two more CPUs doesn't seem to matter all that much. When I build my new screaming ESXi host this summer, we'll see how it compares.
-
I can consistently get slightly better performance on the local LAN with E1000 on pfSense 2.1.5 than I can with vmxnet3 on 2.2.2. vmxnet3 is still better when crossing LANs, but still sucky compared to local.
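For anyone who wants to swap adapter types and compare: the model is set per vNIC, either in the vSphere client with the VM powered off or directly in the VM's .vmx file. ethernet0 here is just the first vNIC; use one line or the other, not both:

ethernet0.virtualDev = "e1000"
ethernet0.virtualDev = "vmxnet3"

With vmxnet3 the guest also needs the driver (VMware tools, or the native vmx(4) driver in FreeBSD 10 / pfSense 2.2).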
-
Yes, 2.1.5 with E1000 is faster on ESXi… it's odd, because the 2.2 series has the multithreaded pf and all.
I'm thinking there are serious performance tweaks to be made to get more out of it, but I wouldn't know how or where to start looking for them.
It would be cool if the devs would put some stuff on the wiki about getting more performance out of ESXi.