Poor network throughput with ESX 3.5U4
-
Hope this has not been covered yet - the search function did not return any specific hits…
I just built a new pfSense setup in the lab using ESX 3.5U4 (quad-core 2.8 GHz CPU, 8 GB RAM). I set up a pfSense VM with 2 GB RAM, a 20 GB HDD, and three network interfaces (WAN, DMZ, LAN). I also created three new CentOS 5.3 VMs and put one on each network. Although I was able to get everything working without any issues, I am having a serious network performance problem with the setup.
On my DMZ VM, I enabled Apache and put a simple 200 MB test file in /var/www/html. On my WAN VM, I am able to get the file via wget, but the performance is very sporadic and slow. At the beginning of the transfer, wget starts around 20 MB/s, eventually slows down to 0, comes back up to 5 MB/s, drops to zero again, and so on. It is a constant up-and-down transfer rate pattern.
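For reference, the test itself is nothing fancier than something like this (the IP is a placeholder for my DMZ VM; writing to /dev/null keeps the client's disk out of the picture):

    # fetch the Apache test file from the DMZ VM, discarding the data so client disk I/O is not a factor
    wget -O /dev/null http://192.0.2.10/testfile.bin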
Some things I have tried:
- Installed the Open VM Tools package
- Disabled NAT and configured pfSense as a router (no effect on performance)
- Enabled/disabled traffic shaping (no effect on performance)
- Temporarily moved my DMZ VM onto the WAN network to prove VM-to-VM networking performance was good (it was; I got 110 MB/s from the DMZ VM)
- Verified CPU and RAM were well within acceptable ranges (no CPU spikes, not out of RAM, etc.)
- Tried both "Flexible" and "Enhanced" VM NICs
At this point, I am running out of ideas. Does pfSense require any specific NIC tuning to get adequate performance? What am I missing?
-
I use ESX 4 and have the same performance problems.
If I copy a file within the same subnet, I get transfer rates of about 30 MB/s VM to VM and 45 MB/s hardware to hardware. If I do the same test across subnets, with pfSense as the gateway, the rates are 5 MB/s and 7 MB/s… >:(
I use "flexible" NICs because with "e1000" the results get more worse 900 KB/s and 1.5 MB/s...
With Astaro or Endian VMs, both Linux, the rates are 25 MB/s and 35 MB/s! :) Why?
I googeld a lot but found no hints. Please help!!!
Update: I removed Open VM Tools and installed Original VM Tools. The rates are 1-2 MB/s higher...
-
I installed my new home server today with vSphere ESXi 4.0 Update 1.
I was hoping to be able to run pfSense on this machine as well (it saves me a lot of cash with less hardware running 24/7), but to my disappointment I am having serious throughput problems :(.
My WAN is a 60 Mbit connection and my LAN is 1000 Mbit. I'm able to get 4-5 MB/s at most instead of the roughly 7.5 MB/s a 60 Mbit WAN should allow (60 Mbit/s ÷ 8 ≈ 7.5 MB/s).
Right now I have two virtual switches (WAN & LAN), which are connected to separate Intel PRO/1000 GT gigabit adapters.
I have tried everything, but I'm afraid I'm going to have to turn my physical pfSense machine back on… Does anybody know how to solve this? Or has anybody managed to get more than 4-5 MB/s?
-
Check the interface speed on ESXi and see if it matches the setup on the switch…
A mismatch can cause slow speeds when both sides are not set to auto or 1 Gbit…
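On the ESX side, a quick way to see the negotiated speed and duplex of the physical uplinks (from the service console or tech support mode) is something like:

    # list physical NICs with their link state, speed and duplex
    esxcfg-nics -l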
-
The interface speed is correct, and I have tried different duplex modes.
-
I have rather good performance with pfSense 1.2.3 running on ESXi 3.5.0 U3, even on my P4 (NetBurst) Xeon system, i.e. no hardware virtualization.
I'm using the E1000 NICs in this case and doing pretty well. With luck, I can post metrics soon.
-
I installed my new home server today with vSphere ESXi 4.0 Update 1.
I was hoping to be able to run pfSense on this machine as well (it saves me a lot of cash with less hardware running 24/7), but to my disappointment I am having serious throughput problems :(.
My WAN is a 60 Mbit connection and my LAN is 1000 Mbit. I'm able to get 4-5 MB/s at most instead of the roughly 7.5 MB/s a 60 Mbit WAN should allow (60 Mbit/s ÷ 8 ≈ 7.5 MB/s).
Right now I have two virtual switches (WAN & LAN), which are connected to separate Intel PRO/1000 GT gigabit adapters.
I have tried everything, but I'm afraid I'm going to have to turn my physical pfSense machine back on… Does anybody know how to solve this? Or has anybody managed to get more than 4-5 MB/s?
What kind of NICs did the VM get assigned? E1000 or the "flexible" type? If flexible, there is your problem. Make sure the NICs are E1000.
The easiest way to do this: when creating the pfSense VM, choose the "Linux 64-bit" OS type, save the VM config, then edit the VM and switch it to 32-bit Linux. The NICs will stay E1000. Then install pfSense as usual…
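If you would rather check or force this on an existing VM without recreating it, the NIC type also shows up in the VM's .vmx file and can be edited while the VM is powered off (a sketch; the adapter numbering will differ per setup, and "flexible" NICs either show "vlance" or have no virtualDev line at all):

    ethernet0.virtualDev = "e1000"
    ethernet1.virtualDev = "e1000"
    ethernet2.virtualDev = "e1000"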
-
What kind of NICs did the VM get assigned? E1000 or the "flexible" type? If flexible, there is your problem. Make sure the NICs are E1000.
I get totally different results; see my previous post. With "flexible" NICs and the original VMware Tools, the throughput is 6-7 times higher!!! I tried it on different ESX servers and also with the pfSense VM as the only VM on the server…
-
I'm having network throughput issues as well. My gigabit network could only reach 480 Mbps at most when I ran iperf tests with the clients on the network. I have my pfSense set up in an ESXi 4 environment with E1000 NICs, and I forced the interfaces to 1 Gbps full duplex as well. I still can't figure out what the issue is???
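For what it's worth, the iperf runs were along these lines (the address is a placeholder; one client runs the server side, the other the client side):

    # on the receiving machine
    iperf -s
    # on the sending machine: 30-second test, reporting every second
    iperf -c 192.0.2.20 -t 30 -i 1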
-
The final word is that you WANT to use E1000 as the network driver. If you are noticing a performance improvement by changing it to flexible, that is a sure sign of something really wrong with the setup of your system. Also make sure that the Open-VM-Tools package is installed.
You are correct to have separate virtual switches for WAN and LAN.
With a clean install on good hardware, there is no reason a pfSense VM shouldn't run at full speed, as if it were installed on bare metal.
Please post the specs of the hardware as well as the configured specs of all the VMs you are running in parallel.
-
The final word is that you WANT to use E1000 as the network driver. If you are noticing a performance improvement by changing it to flexible, that is a sure sign of something really wrong with the setup of your system.
Yeah, there is a huge performance difference between flexible and E1000; the latter is 5-10 times faster on normally functioning ESX boxes.
With a clean install on good hardware, there is no reason a pfSense VM shouldn't run at full speed, as if it were installed on bare metal.
You'll never get truly full speed with any VM, since there is overhead in virtualization, but it should be close.