NIC performance with ESXi
RobinGill last edited by
I've currently got a pfSense VM with 1 vCPU and 2 GB RAM, of which 1 GB is reserved, plus 1600 MHz of CPU cycles reserved (basically all the cycles of one core). The host is a Dell PowerEdge 2900 with 1 x Xeon 5310 quad core @ 1.6 GHz, 2 x 73 GB 15k SAS drives in a RAID 1 mirror on a PERC 5/i with BBU, and it's using the onboard Broadcom NICs. The WAN link is an ADSL connection handled by a cheap TP-Link router with NAT disabled, so the TP-Link takes the first IP of a /29 and acts as the default gateway for the rest of the /29.
I'm not using VMware Tools, as my understanding is that it only helps with graceful shutdowns through the vSphere client and with memory ballooning.
I was observing slight packet loss when pinging out. I then gave up on the onboard NICs and plugged an Intel PRO/1000 MT quad-port card into a PCI-X slot. The WAN has a dedicated physical port, while the LAN shares its physical port with the ESXi management network as well as an XP VM.
Now the packet loss issue seems to have improved, but I'm still noticing a little latency. Pinging the TP-Link remotely and, at the same time, pinging the pfSense VM remotely, I'm seeing a few random spikes (normally around 20-30 ms higher than pinging the TP-Link).
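A quick way to quantify those spikes is to capture ping output against both targets and compare the RTT numbers. This is just a sketch, not anything pfSense-specific: `parse_rtts` and `spike_count` are hypothetical helper names, and the regex assumes typical Unix `ping` output lines containing `time=23.4 ms`:

```python
import re

def parse_rtts(ping_output):
    """Extract RTT values in ms from ping output ('time=12.3 ms' style lines)."""
    return [float(m) for m in re.findall(r"time[=<]([\d.]+)\s*ms", ping_output)]

def spike_count(rtts, baseline_ms, threshold_ms=20.0):
    """Count replies that exceed the baseline RTT by more than threshold_ms."""
    return sum(1 for r in rtts if r - baseline_ms > threshold_ms)
```

You could run something like `ping -c 100 <target> > capture.txt` against the TP-Link and the pfSense VM in parallel, feed each capture through `parse_rtts`, use the TP-Link's median RTT as the baseline, and see how many pfSense replies land 20+ ms above it.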
The thing is, there is virtually no traffic going through the router at the moment, and the CPU graphs show fairly low utilisation, which has me a little concerned that VoIP services may suffer once it is heavily loaded. The last time I deployed pfSense in production with Intel NICs and a local gateway, there was pretty much zero latency between them all the time. Bandwidth testing seems to be fine.
Has anyone got any ideas whether this is an issue with virtualising pfSense, and where I may be going wrong?
biggsy last edited by
It's unlikely to be down to ESXi.
If you haven't already, try changing your network cables then the NIC ports you're using.
Are you using E1000 interfaces as virtual NICs for pfSense?
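For reference, the guest adapter type is set per NIC in the VM's .vmx file: E1000 is an emulated Intel adapter that works without VMware Tools, while the vmxnet adapters need the Tools drivers installed in the guest. A typical entry looks something like this (the adapter number and value will vary per VM):

```
ethernet0.virtualDev = "e1000"
```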
tastymonkey last edited by
Not sure if you have figured out the problem yet, but I'll give you my experience on the matter.
I have the firewall as a VM.
I have a Dell PowerEdge 1950. It has two dual-core processors running at 3 GHz, and 32 GB of RAM.
It has dual onboard Broadcom NICs, and I added a third NIC, an Intel 1 Gb card.
Both Broadcom NICs are on the internal network as well as the management network.
The Intel NIC is on the WAN side.
This is running ESXi 5.
I have three vSwitches: one for the LAN side, to which the two Broadcom NICs are attached; one for the DMZ side, which has no physical NICs on it; and a third with the Intel NIC attached. I gave pfSense three NICs to play with, one on each vSwitch.
I did not limit the firewall's CPU, but I did keep it on a single core (not dedicated). I did limit the amount of RAM it uses to just 512 MB. It is running the 64-bit version of pfSense 2.0.1. I have watched the performance tab on the VM for a while and tried to make it eat up cycles, and it is rather well behaved, so I am not worried about putting CPU limits on it.
I have not noticed any speed limits on my connection beyond the tier of my U-verse internet plan. I even play games, and my ping is no worse than when I was using a simple hardware firewall/router. I have the U-verse router pass me the external IP so that pfSense can handle it.
biggsy may be right about the network cables. If you have one with a minor fault, you will get dropped packets, and often it will not work at all.
One other thing: I installed the third-party VMware Tools add-on for pfSense. It is listed in the packages area of the web interface. It makes pfSense shut down nicely when I tell the server to shut down, so I let it stay installed.
gibby916 last edited by
I'll tell you right now what your problem is: having your management network share the same NIC as your LAN is causing the issues. Without going into great detail, there is essentially a lot of broadcast traffic occurring over that same link. You need to separate the two (put your management network on its own subnet) and you will see a great increase in performance.
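If the host is short on physical ports, one way to separate the two logically (assuming a standard vSwitch and a VLAN-capable physical switch; the port group name and VLAN ID here are made up for illustration) is to give management its own port group on its own VLAN. On ESXi 5 that can be done with esxcli along these lines:

```
# Create a dedicated port group for the management vmkernel interface
esxcli network vswitch standard portgroup add --portgroup-name=Mgmt --vswitch-name=vSwitch0
# Tag it with its own VLAN so its broadcast domain is separate from the LAN's
esxcli network vswitch standard portgroup set --portgroup-name=Mgmt --vlan-id=10
```

The management vmkernel port would then be moved onto the Mgmt port group, and the physical switch port trunked accordingly.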