How much RAM do you assign to your pfSense guest?
-
As per the title:
How much RAM, and why?
I'm debating between 1GB and 2GB.
-
Depends on how many simultaneous connections you'll have, and what packages, if any, you'll run. We run VMs with anywhere from 128MB to 4GB.
-
RAM is horrendously cheap. I think my home one has 4GB…just because 'it can'
-
*** RAM is horrendously cheap. I think my home one has 4GB…just because 'it can' ***
Oh no, no no no. I don't think you quite understand the question. In the VM hosting world, RAM is everything. (I feel like I'm talking about power for Apollo 13.)
CPU is a mostly unpredictable, reactive load. You can pile a whole bunch of VMs onto a single host and max out the CPU, and many of the VMs will still run well enough (depending on… everything). RAM is a different beast. Overload RAM on a host and you're paging to disk, and not in the nice, friendly way that Windows does (slight sarcasm); a VM host doesn't really know what RAM a guest needs to have immediately available, so what it pages out to disk may really screw over your VM(s).
While "RAM is horrendously cheap", being able to host a lot of RAM on a single system isn't, and that's where the cost bites you. While your "home one has 4GB... just because 'it can'" a VM host that only has, say, 32GB of RAM may be wasting a lot by dedicating 4GB to a simple isolating firewall, for something like a small DEV or Sandbox environment; especially if you have to run a few of them.
So I think the question is really: how little can he give a pfSense VM and still have it run well? Like cmb said, it depends on what you're doing. A simple isolating firewall for a DEV or sandbox environment, with a fairly low feature base (just routes), might be fine in 128MB; if you see issues, bump it up. If you're doing more than that, it'll need more. I've heard that simple installs doing standard routing for a small number of people (a house, a small office) do perfectly fine in 256MB. Install a few packages, though, and you'll need more. But I don't think you'll break anything long-term by trying it with less RAM than it "needs"; if it doesn't work, just shut it down and boot it with more.
In the VM hosting world, I try to give a VM less, then bump it up if it actually needs it.
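To make that "start small, bump it up" arithmetic concrete, here's a minimal Python sketch of a host RAM budget. Everything in it (the VM names, the sizes, the overhead allowance) is a made-up example, not a recommendation:

```python
# Minimal sketch of host RAM budgeting; the VM names, sizes, and the
# overhead allowance are all made-up numbers for illustration.

HOST_RAM_MB = 32 * 1024        # a hypothetical 32GB host
HOST_OVERHEAD_MB = 2 * 1024    # rough allowance for the hypervisor/host OS

guests = {
    "pfsense-dev": 256,   # simple isolating firewall: start small
    "pfsense-lab": 512,
    "web-01": 4096,
    "db-01": 8192,
}

committed = sum(guests.values()) + HOST_OVERHEAD_MB
headroom = HOST_RAM_MB - committed

print(f"committed {committed}MB of {HOST_RAM_MB}MB ({headroom}MB headroom)")
if headroom < 0:
    # Overcommitted: the host may page guest memory to disk, and it has
    # no idea which pages a guest needs resident, so performance can
    # fall off a cliff for the unlucky VM(s).
    print("WARNING: RAM overcommitted; expect host-level paging")
```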
-
I run it with 512MB for about 500 servers and cloud services to clients…
pfSense will be the frontend of choice in an upcoming scenario with 75,000 VMs migrating globally to my systems over the next 5 years (one client).
-
If you're not using snort…. 1GB is plenty. If you are using snort, go 4GB :P.
-
*** If you're not using snort…. 1GB is plenty. If you are using snort, go 4GB :P. ***
I agree. Many packages (snort, squid, etc.) will eat memory and CPU, but the base pfSense won't use much. Mine has 2GB allocated (32-bit) and the Dashboard is only reporting 6% usage.
-
I'm running my pfSense on Xen with 256MB. The only packages are iperf, cron, and the OpenVPN TAP fix. RAM sits around 47% usage at idle, and I've not seen it past 70% (but I don't run RRD graphs, so I can't know for sure). I don't give it more because I want the RAM for other VMs, and if usage never exceeds 70%, giving it 512MB or 1GB would be wasted.
-
I virtualize pfSense with VirtualBox on a host with 32GB of RAM.
Of those 32GB, 25 are assigned to pfSense.
pfSense has squid installed, with 15GB dedicated to RAM web caching; all the rest is for pfSense itself.
My server has 6 cores, but I assign only 4 to pfSense because I think it becomes less stable if you assign more than 4 cores to the pfSense box.
Maybe the instability is caused by something else… I need to test more, but it's a production environment.
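As a rough sanity check on that split, here's a small Python sketch. The VM and cache sizes are the ones above, but the 1.3× multiplier is purely my assumption, since squid's cache_mem limits only the in-RAM object cache, not the total size of the squid process:

```python
# Rough sizing sketch for a pfSense VM running squid. cache_mem limits
# only squid's in-RAM object cache; the process also needs memory for
# its cache index, in-transit objects, etc. The 1.3x multiplier below
# is a guess to be validated against real usage, not a squid figure.

vm_ram_gb = 25        # RAM assigned to the pfSense VM
cache_mem_gb = 15     # squid's RAM web cache (cache_mem in squid.conf)

squid_total_gb = cache_mem_gb * 1.3
left_for_pfsense_gb = vm_ram_gb - squid_total_gb

print(f"~{squid_total_gb:.1f}GB for squid, "
      f"~{left_for_pfsense_gb:.1f}GB left for pfSense and the OS")
```
-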
*** Maybe the instability is caused by something else… I need to test more, but it's a production environment. ***
I've attributed the instability to running 64-bit pfSense as a VM. A question, though: why not just run a separate squid VM to lessen the footprint of your firewall?
-
*** I've attributed the instability to running 64-bit pfSense as a VM. Question though; why not just run a separate squid VM to lessen the footprint of your firewall? ***
It's really easy to install the squid package from pfSense.
Also, I believe that having 2 VMs would actually make the overall footprint even bigger.
I chose to use lots of RAM and many CPU cores with pfSense not because I'm forced to, but because pfSense gives me the freedom to do it.
RAM is getting seriously cheap these days, and giving my clients a few more gigs of squid cache may not make much difference, but more is always better…
-
*** I chose to use lots of RAM and many CPU cores with pfSense not because I'm forced to, but because pfSense gives me the freedom to do it. ***
pfSense won't really use much more than 2, so in most versions of ESX(i) you'll actually slow things down and/or add contention between VMs. ESX(i) has to schedule all of a VM's vCPUs to execute at the same time, even if they're not actually doing anything. If there aren't enough free cores to schedule it on, it'll end up waiting for cores to become ready. While this may not affect you in your current configuration, it's something to be concerned about on hosts with multiple VMs.
-
Thanks matguy.
But when you said "pfSense won't really use much more than 2", did you mean 2 cores or 2 gigs of RAM?
Since you then talk about ESX, it sounds like you're talking about cores. Besides all the pfSense stuff going on here, I have a question for you.
If you meant cores, does that mean that if I have a 6-core CPU and 2 VMs and I assign 6 cores to each of them, those VMs will actually end up slower than if I gave them only 3 cores each?
Because if one of the VMs is idle, the other should be able to take advantage of all 6 cores, unless the idle VM somehow slows down all 6 cores even while idle. Maybe it also depends on the OS inside the VMs.
TIA!
-
Not for that exact reason, but yes, assigning all 6 cores to a VM and sharing them with others will slow the overall performance down.
-
*** If you meant cores, does that mean that if I have a 6-core CPU and 2 VMs and I assign 6 cores to each of them, those VMs will actually end up slower than if I gave them only 3 cores each? ***
Yes, I was talking about cores. Having multiple VMs with a couple of vCPUs each (assuming your VM host has, say, 4 or more cores) is fine, as ESX(i) can schedule them easily. When a single VM has as many (or nearly as many) vCPUs as your host has cores, a busy VM can become difficult to schedule, since it may have to wait for enough cores to become available all at once.
Generally, ESX(i) has to schedule all the vCPUs of a multi-vCPU VM to run at the same time (I think the physical CPU may do some re-shuffling, but as far as ESX(i) is concerned, they need to be fed to the CPUs at the same time). It has to do that whether or not anything is actually happening on those vCPUs, so even an idle vCPU needs to be scheduled as though it were a busy one.
That causes 2 problems: first, scheduling these large groups of vCPUs on an otherwise busy host, where a group of 6 vCPUs may have to wait a few, or many, CPU cycles for enough cores to become free (think of it like a large family that all wants to ride the roller coaster together; they may have to wait a train or two for enough open seats). Second, filling an otherwise busy physical CPU with cycles that are forced idle by idle vCPUs that still have to be scheduled, when maybe only 1 or 2 of them are actually processing anything.
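If it helps to see the effect, here's a toy Python model of that strict co-scheduling rule. It's a deliberate simplification of what ESX(i) actually does (newer versions use relaxed co-scheduling), but it shows why large vCPU gangs stall more often than small ones, even at the same total load:

```python
import random

# Toy model of strict co-scheduling: a VM runs in a tick only if ALL of
# its vCPUs can be placed on free physical cores at once. This is a
# deliberate simplification, not how ESX(i)'s scheduler really works.

CORES = 6
TICKS = 100_000

def stall_fraction(vcpu_counts, busy_prob=0.5, seed=1):
    """Fraction of run-requests that could not be scheduled that tick."""
    rng = random.Random(seed)
    stalls = requests = 0
    for _ in range(TICKS):
        free = CORES
        for vcpus in vcpu_counts:
            if rng.random() < busy_prob:   # this VM wants to run now
                requests += 1
                if vcpus <= free:
                    free -= vcpus          # whole gang scheduled together
                else:
                    stalls += 1            # gang doesn't fit: VM waits
    return stalls / requests

print("two 6-vCPU VMs:", stall_fraction([6, 6]))    # roughly 0.25
print("two 3-vCPU VMs:", stall_fraction([3, 3]))    # 0.0, both always fit
print("six 2-vCPU VMs:", stall_fraction([2] * 6))   # roughly 0.16
```

Note that the two 6-vCPU VMs and the six 2-vCPU VMs present the same total of 12 vCPUs, yet the big gangs stall more often.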
Like I was saying, this may not be an issue for you if you have very few VMs running on that host, especially if the others are single-vCPU, or even 2-vCPU, VMs. I share this more for others who may read this; it's probably not doing you any harm as long as you're not seeing contention or other instability.
I come from denser environments, where a single host may be running 10 to 30 VMs. Even on hosts with 12 to 16 physical cores, we generally limit VMs to 4 vCPUs, and even then we usually require real justification for going over 2.