@quetzalcoatl:
Thanks matguy.
But when you said "pfSense won't really use much more than 2," did you mean 2 cores or 2 GB of RAM?
Since you then talk about ESX, it sounds like you are talking about cores.
Besides all the pfSense stuff going on here, I have a question for you.
If you meant cores, does that mean that if I have a 6-core CPU and 2 VMs and I assign 6 cores to each of them, those VMs will actually end up slower than if I gave them only 3 cores each?
Because if one of the VMs is idle, the other one should be able to take advantage of all 6 cores, unless the idle VM is actually slowing down all 6 cores even while idle. Maybe it also depends on the OS inside the VMs.
TIA!
Yes, I was talking about cores. Having multiple VMs with a couple of vCPUs each (assuming your VM host has, say, 4 or more cores) is fine, as ESX(i) can schedule them easily. When a single VM has as many (or close to as many) vCPUs as your host has cores, it can become difficult to schedule that VM when it's busy, as it may have to wait for enough cores to become available all at once.
Generally, ESX(i) has to schedule all the vCPUs of a multi-vCPU VM to run at the same time (I think the physical CPU may do some instruction re-shuffling, but as far as ESX(i) is concerned, they need to be fed to the cores at the same time). It has to do that whether or not anything is actually happening on those vCPUs, so even an idle vCPU needs to be scheduled as though it were a busy one.
That causes two problems:
1. Scheduling these large groups of vCPUs on an otherwise busy host: a group of 6 vCPUs may have to wait a few, or many, CPU cycles for enough cores to become free. Think of it like a large family that all wants to ride the roller coaster together; they may have to wait a train or two for enough open seats.
2. Filling an otherwise busy physical CPU with cycles that are forced idle: idle vCPUs still have to be scheduled even when only 1 or 2 of them are actually processing anything.
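If it helps make the first problem concrete, here's a minimal toy simulation of strict co-scheduling as described above. It is not ESX(i)'s actual scheduler; the core counts, busy probability, and tick model are all made up for illustration. It just enforces the one rule we're discussing: a VM only runs when ALL of its vCPUs fit on free cores at once.

```python
# Toy model of strict co-scheduling ("gang scheduling") on a 6-core host.
# Assumption: a VM can only run in a tick if all of its vCPUs can be
# placed on free physical cores simultaneously.

import random

HOST_CORES = 6
TICKS = 10_000

def simulate(vcpu_counts, busy_prob=0.7, seed=42):
    """Count ticks each VM spends ready-but-waiting because the host
    can't free enough cores for all of its vCPUs at once."""
    rng = random.Random(seed)
    waited = [0] * len(vcpu_counts)
    for _ in range(TICKS):
        free = HOST_CORES
        # Visit VMs in random order each tick, like a naive scheduler.
        for vm in rng.sample(range(len(vcpu_counts)), len(vcpu_counts)):
            if rng.random() > busy_prob:
                continue  # this VM has no runnable work this tick
            if vcpu_counts[vm] <= free:
                free -= vcpu_counts[vm]  # all vCPUs scheduled together
            else:
                waited[vm] += 1          # gang couldn't fit: it waits

    return waited

print("two 6-vCPU VMs:", simulate([6, 6]))  # each VM waits often
print("two 3-vCPU VMs:", simulate([3, 3]))  # both always fit: no waiting
```

Running it, the two 6-vCPU VMs each spend thousands of ticks waiting (whenever both are busy, only one can fit), while the two 3-vCPU VMs never wait because 3 + 3 fills the host exactly. On a real host, this waiting shows up as co-stop time (%CSTP in esxtop), which is the counter to watch if you suspect this is happening.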
Like I was saying, this may not be an issue for you if you have very few VMs running on that host, especially if the others are single-vCPU, or even 2-vCPU, VMs. I share this more for others who may read this; it's probably not doing you any harm as long as you're not seeing contention or other instability.
I come from denser environments, where a single host is probably hosting 10 to 30 VMs. Even on hosts with 12 to 16 physical cores, we generally limit VMs to 4 vCPUs, and even then we require real justification for going over 2.