VMware vmxnet3 NIC vs. e1000 vs. hardware install - throughput performance
-
Yes, although it's pretty safe to assume that the bottleneck is due to the VM Ethernet driver …
-
Great post.
Like an oasis of fact in a desert of speculation! :)
Steve
-
Would the results be different when the hardware supports virtualization technology (Intel VT, AMD-V)?
I wonder if the vmxnet drivers would benefit from those technologies.
Thanks for the tests, by the way :)
-
Hi,
I've recently started using pfSense again, and it's running as a VM on my NAS.
I'm curious to know whether one would get the same results using VT-d; I can pass the NICs directly to the pfSense VM. The reason I haven't done this yet is that I have another VM that is a heavy downloader (WAN speed is 128 Mbit). My thinking was: with both VMs using the same controller, the traffic would stay within the hypervisor. If I dedicate the NICs to the pfSense VM only, I assume that traffic would have to leave the ESXi host and travel back through the switch.
Am I guessing correctly? Would that extra traffic be negligible compared to the CPU load I'd be saving?
Thanks.
-
Interesting test, thanks for sharing.
I wonder what the numbers would be with pf disabled (no NAT, no packet filtering).
Alright, so I disabled pf under "System - Advanced - Firewall/NAT". I then ran the test using the e1000 driver and the vmxnet3 driver. The results are similar: 100% CPU in the VMware graphs, 850 Mbit throughput.
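For anyone who prefers the console over the GUI, pf can also be switched off and back on with pfctl; a quick sketch, run from the pfSense shell:
    # show whether pf is currently enabled
    pfctl -s info | head -1
    # disable packet filtering (NAT and filter rules stop being applied)
    pfctl -d
    # ...run the throughput test...
    # re-enable packet filtering
    pfctl -e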
-
I've also tried enabling/disabling TSO, powerd, fast TCP forwarding, etc., but so far I haven't been able to get above the 850 Mbit mark.
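For reference, on the FreeBSD side those knobs roughly map to the following; this is only a sketch, em0 is an example interface name, and pfSense normally manages these via System - Advanced:
    # turn TCP segmentation offload off globally, or per interface
    sysctl net.inet.tcp.tso=0
    ifconfig em0 -tso
    # enable the IP fast-forwarding path (FreeBSD 8.x/9.x tunable)
    sysctl net.inet.ip.fastforwarding=1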
-
I notice your media is set at 10Gbase-T. I'm using open-vm-tools, and all my Intel gigabit cards with vmxnet3 are only reported as 1000baseT.
-
I notice your media is set at 10Gbase-T. I'm using open-vm-tools, and all my Intel gigabit cards with vmxnet3 are only reported as 1000baseT.
I'm using the vendor-supplied VMware Tools, and 10Gbit is their default speed. Source: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1013083
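If you want to see what the guest actually reports, the media line from ifconfig shows it; vmx3f0 is just an example name for a vmxnet3 interface under the VMware Tools driver:
    # check the media/speed reported for a vmxnet3 interface
    ifconfig vmx3f0 | grep media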
-
Thanks for doing this testing. Could you see what CPU usage you get when you're not maxing out the pipe?
Put a switch in between, set the interface on one box to 100/full while pfSense stays at gigabit, so the most you're going to see is 1/10 of what the NIC can do. Is the CPU usage lower in this mode with vmxnet3?
This would be more of a setup you might see in normal usage: the ISP is not always a gig connection, and you rarely see 100% saturation of the line, etc.
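For what it's worth, forcing 100 Mbit full duplex on a FreeBSD-based box can be done with ifconfig; em0 is just an example interface name, and pfSense also exposes a Speed and Duplex setting on the interface configuration page:
    # lock the test machine's NIC to 100 Mbit full duplex
    ifconfig em0 media 100baseTX mediaopt full-duplex
    # revert to autonegotiation afterwards
    ifconfig em0 media autoselect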
-
Ah, I knew there was a reason why I didn't pursue this further at the time: I was using VLANs, and the vmxnet driver didn't support them back then. There is a patch, but I don't want to apply it on 2.1_x64 at the moment. It also doesn't appear that the NICs are detected; I've added a new one with the vmxnet3 driver and pfSense isn't seeing it.
miloman, seeing as your interfaces have been detected under the vmxnet3 driver, could you try and see if you can add a VLAN on one of them? If so, I might push this a little further to get them working.
-
Thanks for doing this testing. Could you see what CPU usage you get when you're not maxing out the pipe?
Put a switch in between, set the interface on one box to 100/full while pfSense stays at gigabit, so the most you're going to see is 1/10 of what the NIC can do. Is the CPU usage lower in this mode with vmxnet3?
This would be more of a setup you might see in normal usage: the ISP is not always a gig connection, and you rarely see 100% saturation of the line, etc.
I see where you're going. I'll be doing this test later today.
-
Ah, I knew there was a reason why I didn't pursue this further at the time: I was using VLANs, and the vmxnet driver didn't support them back then. There is a patch, but I don't want to apply it on 2.1_x64 at the moment. It also doesn't appear that the NICs are detected; I've added a new one with the vmxnet3 driver and pfSense isn't seeing it.
miloman, seeing as your interfaces have been detected under the vmxnet3 driver, could you try and see if you can add a VLAN on one of them? If so, I might push this a little further to get them working.
VLANs are indeed supported under pfSense 2.1_beta0 with the vmxnet3 NIC. You don't need to use the patch.
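As a quick sanity check you can also tag a VLAN from the shell; vmx3f0 and VLAN 10 are example values only, and normally you'd create it under Interfaces - (assign) - VLANs in the GUI:
    # create a VLAN interface and bind it to the vmxnet3 parent
    ifconfig vlan10 create
    ifconfig vlan10 vlan 10 vlandev vmx3f0 up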
For me, the vmxnet3 NIC was essential for getting better performance/throughput, but it was useless in a production setup since VLAN tagging wasn't supported. After my tests, I don't see why I should bother installing the driver and introducing a potential VMware Tools/driver crash when the performance of the e1000 is pretty much the same.
-
Thanks for the testing. It does make me wonder why my vmxnet3 interfaces aren't showing. I'm using 2.1_x64 beta0, and when setting the driver to vmxnet3, pfSense doesn't see the additional interfaces.
vmxnet.ko is loaded and has the correct permissions, but the interfaces still don't show.
-
I see where you're going. I'll be doing this test later today.
So, for example, my internet connection is about 16 MBps sustained; sure, it boosts to around 25, but on a sustained download it levels off at about 16 MBps. So maybe in this scenario the e1000 causes 40% CPU while vmxnet3 only uses 30%?
-
Here ya go…
Throughput capped to 100 Mbit using a switch.
Test with the computers connected to each other through the switch only = 96.5 Mbit (this number is used as a reference for the speeds that are possible without any firewalling).
Test with the firewall in between doing the routing/firewalling = 94.5 Mbit.
You can see the CPU usage in the screenshot I've attached. In this test the vmxnet3 driver uses a bit less CPU than the e1000. But I'm not impressed.
-
Thanks for the testing. It does make me wonder why my vmxnet3 interfaces aren't showing. I'm using 2.1_x64 beta0, and when setting the driver to vmxnet3, pfSense doesn't see the additional interfaces.
vmxnet.ko is loaded and has the correct permissions, but the interfaces still don't show.
In this thread I've linked to the guide I used to install the VMware Tools supplied with ESXi 5.0. Those worked for me.
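If they still don't show after reinstalling the tools, a few commands can help confirm whether the driver and interfaces are actually there; the vmx naming assumes the VMware Tools vmxnet3 driver:
    # check that the vmxnet module is really loaded
    kldstat | grep -i vmx
    # look for any vmxnet interfaces the kernel created
    ifconfig -a | grep -i vmx
    # check the kernel log for driver attach messages
    dmesg | grep -i vmx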
-
Is that with ESXi, correct? I would imagine that a direct install would be closer to the direct-connect speed, i.e. gigabit speeds.
-
Is that with ESXi, correct? I would imagine that a direct install would be closer to the direct-connect speed, i.e. gigabit speeds.
My bad… I should've written that the computers were connected to each other using a switch. I've edited my post.
-
Hi,
I've recently started using pfSense again, and it's running as a VM on my NAS.
I'm curious to know whether one would get the same results using VT-d; I can pass the NICs directly to the pfSense VM. The reason I haven't done this yet is that I have another VM that is a heavy downloader (WAN speed is 128 Mbit). My thinking was: with both VMs using the same controller, the traffic would stay within the hypervisor. If I dedicate the NICs to the pfSense VM only, I assume that traffic would have to leave the ESXi host and travel back through the switch.
Am I guessing correctly? Would that extra traffic be negligible compared to the CPU load I'd be saving?
Thanks.
Hi,
I just tested this. I have a pfSense VM running with 2 NICs passed through using IOMMU. I decided to run an iperf server on the pfSense box and an iperf client on my laptop, connected to pfSense via a gigabit switch (commands sketched below the results). I found that:
- pfSense CPU usage went to 10% due to iperf
- Throughput around 94 Mbit/s
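A rough sketch of the kind of iperf run used here, with 192.168.1.1 standing in for the pfSense LAN address:
    # on pfSense: start an iperf server
    iperf -s
    # on the laptop: run a 30-second test against it
    iperf -c 192.168.1.1 -t 30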
I am slightly confused as to why this is happening; as far as I am aware, all devices on my network are gigabit capable, so I'll have to look into it, but the indications are that one can expect to see full gigabit throughput when using IOMMU passthrough.
Regarding traffic staying in the hypervisor or leaving via the switch: it would simply be a case of using an extra port on your switch (like I have). Most switches worth their salt have enough internal bandwidth to shift traffic across all their ports without slowing down, so the only cost would be the extra port used on your switch.
To comment on my particular situation: I was irritated by the increased power consumption of using a shared (virtual) device for the LAN while the WAN was still passed through. This was caused by CPU usage rising dramatically. I calculated I would only have been capable of 50-100 Mbit of throughput from LAN<->WAN (no good if my WAN speed increases, but fine for now). As a result, I also passed through the LAN device. This resulted in stable power consumption when data was passing LAN<->WAN and no noticeable CPU usage. Unfortunately, because the drivers are not as good in FreeBSD as they are in Linux (Xen hypervisor), the idle power consumption of the system as a whole has risen by a couple of watts due to the two NICs being controlled by pfSense. My choices are mostly motivated by power consumption concerns.
Hope this was helpful.
Regards,
Yax
EDIT:
The 100 Mbit speed was caused by my Cat5e cable not allowing gigabit speeds. Cat6 solved that problem. Now I see approx. 550 Mbit/s from client to server through the switch, with pfSense using approx. 40% CPU (but only 5% of that is reported as iperf). Testing with a direct connection to pfSense shows a throughput of 550 Mbit/s as well. Not really sure why.
-
"But i'm not impressed."
Yeah, it doesn't look like much there, but when we're talking CPU cycles on a VM, even a small difference adds up over time.
Again, thanks for taking the time to actually test these drivers. I run vmxnet3 on all my VMs other than pfSense. With the vmxnet3 driver I could not VPN into work from a client behind pfSense; with e1000 it connects no problem. Strange. Good to see there isn't all that much difference in performance.