Considering Hypervisor to include pfSense, NO experience.
-
I think it's funny that you two are the ones who replied to this post. I kept seeing people recommend against pfSense as a VM next to a NAS, and sometimes as a VM in general, which I thought was particularly strange considering it's commercially offered as a virtual product.
I think the general idea was that keeping all of your eggs in one basket can obviously be bad if you experience a hardware failure, but this is for home use, so downtime would be annoying to me but that's it.
Anyways, I read a few posts by you two specifically arguing for pfSense as a VM alongside all of your other services, so I figured I'd look into it. Thanks for the recommendation! I'll look into ESXi.
I know that hardware recommendations will be based on my specific needs, but do you have any general recommendations as far as how much CPU (how many cores / what clock rate) and how much RAM I'll need?
-
do you have any general recommendation
It really does depend on the use case. If all you want is a simple home install, I've read about people installing ESXi on a dinky Zotac ZBox and then running pfSense inside that as the sole VM. At the office, I use a Dell NX3000 blade with 48 GB of RAM tied to a SAN via iSCSI.
-
Well, none of my needs are intense: a 50/5 line on pfSense, probably eventually 150/15, maybe 300/30 WAY down the line, but I simply don't need that in the foreseeable future. I do use OpenVPN, but with AES-NI on a fairly modern CPU that still shouldn't require a really high clock speed for my needs (a rough way to sanity-check AES throughput is sketched at the end of this post).
HTPC: currently using a J3355B, which hardware-wise can do 4K HEVC 10-bit with the appropriate drivers. This is possible because it has the appropriate instruction set; something older would have to work really hard to do the same.
FreeNAS: I want to start out with 4x 4TB drives in a single RAIDZ2 zpool. I would probably only add one more vdev to that in the foreseeable future, and even that would be down the road. I don't want or need anything fancy like L2ARC, ZIL, dedup, etc.
I would like the potential to hit 2-gigabit network speeds if I add that second vdev. But in the beginning I will only have a gigabit network.
There won't ever be more than 10 clients connected at once, and even that would be rare. It would be light use, basically a local cloud where the family stores documents, photos, videos, etc.
From reading around, this type of setup is not incredibly CPU intensive, but that's a pretty generalized statement, so I really don't know how much CPU it needs.

So from my understanding of things (which isn't very good), I would probably be best suited by a lower-end modern CPU that includes modern instruction sets for things like AES-NI and HEVC 10-bit, and that supports virtualization and ECC.
So going from there I see the i3-7101TE; it checks all of those boxes but is only dual core with HT. I don't know how virtualization works with CPUs: can I allocate more virtual CPUs to individual VMs than I have physical CPUs? If so, is there a limit to this (practical/actual)? How does hyper-threading play into virtual CPUs?
The whole point of virtualizing all of these boxes is to ultimately save money. So I'm not trying to "future-proof" beyond what I've already mentioned (150/15 WAN, max 2-gigabit LAN throughput for the NAS, and even that is not a hard requirement). I'm looking for a machine that is capable of doing what I need without choking, but I don't want to pay for an incredibly powerful CPU that is complete overkill for what I need.
So would an i3-7101TE work for this?
If not, could an Atom with lower clocks and more cores work?
Do I ultimately need a Xeon for this? I know nothing about Xeons and which generations include what technology. My fear with a Xeon would be that since it's not consumer oriented, it wouldn't include the GPU hardware for HEVC, forcing me to either get one powerful enough to brute-force 4K HEVC 10-bit or buy a discrete GPU. Either of those options is undesirable from an initial buy-in and power-cost standpoint.
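On the AES-NI point above, here's a minimal single-core throughput sanity check. It's a sketch assuming the third-party Python "cryptography" package is installed; that package calls into OpenSSL, which picks up AES-NI automatically when the CPU has it. The result is only a rough ceiling, not an OpenVPN benchmark:

```python
# Rough single-core AES-256-GCM throughput probe. The 'cryptography' package
# wraps OpenSSL, which uses AES-NI automatically when the CPU supports it.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)           # reused here only because we discard the output;
                                 # never reuse a nonce for real traffic
chunk = os.urandom(1024 * 1024)  # encrypt 1 MiB per call

total = 0
start = time.perf_counter()
while time.perf_counter() - start < 3.0:  # measure for roughly 3 seconds
    aead.encrypt(nonce, chunk, None)
    total += len(chunk)
elapsed = time.perf_counter() - start

print(f"~{total * 8 / elapsed / 1e6:.0f} Mbit/s AES-256-GCM on one core")
```

If that number comes out far above the WAN speeds mentioned above (it usually does on anything with AES-NI), the clock rate isn't the bottleneck for OpenVPN at 50/5 or even 150/15.
-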
While I know that it is generally advised against (for very good reason I'm sure), I'm considering consolidating my machines into one and running everything from a hypervisor.
Virtualization is becoming, if not already, an industry standard, so I don't know who's advising against it. My next question was going to be whether you're talking about virtualization in general or specifically about virtualizing pfSense, but you already touched on it. Virtualizing pfSense isn't necessarily advised against, but in general you're always going to get better performance running on bare metal with the same hardware. The caveat is that you'll never notice a performance difference until you get to faster connections. For example, I have a friend who was on a virtual instance of pfSense and upgraded to 1 Gbit fiber. While virtualized, his connection could only push about 400 Mbit of data, so we had to move pfSense to a bare-metal server to get the full gigabit bandwidth.
1.) Can these three systems reasonably be run on the same (free) hypervisor?
Yes.
2.) Will I save money by virtualizing these three systems as opposed to dedicating a computer to each system? (Long-term cost savings in terms of both initial buy-in and power costs, assuming a relatively high power cost of ~$0.20/kWh.)
Absolutely (rough math on the power side is sketched at the end of this post).

3.) If 1 & 2 = yes, what free hypervisor do you recommend (if any) for a non-IT pro with no virtualization experience?
I'd also recommend ESXi. Just like with anything else, both products have strengths and weaknesses that may sway you one way or the other depending on your priorities, but if you take a poll of the masses, I think you'll find that the vast majority lean towards ESXi.
I kept seeing people recommend against pfSense as a VM next to a NAS, and sometimes as a VM in general, which I thought was particularly strange considering it's commercially offered as a virtual product.
I think the general idea was keeping all of your eggs in one basket can obviously be bad if you experience a hardware failure, but this is for home use so downtime would be annoying to me but that's it.

It's not necessarily about having all your eggs in the same basket; it's the fact that the scheduler is managing resources for all the VMs, so if you don't tweak the settings for certain VMs, they will hog resources and affect the performance of other VMs. For a firewall, I believe it's recommended to set reservations for CPU and RAM. You can also research playing with custom "shares" settings. In addition to tweaking CPU and RAM settings for your firewall VM, when possible I always dedicate two separate NICs to the VM and enable DirectPath I/O to ensure maximum performance.
The only thing I've heard you might want to stay away from virtualizing is an MS SQL server, because of I/O concerns. Outside of that, virtualize away.
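To put a rough number on the power-cost half of question 2, here's a back-of-the-envelope sketch at the quoted ~$0.20/kWh. The wattages below are placeholder guesses, not measurements; plug in your own:

```python
# Annual power cost of three always-on boxes vs. one consolidated host.
# All wattages below are assumptions for illustration only.
RATE = 0.20             # $/kWh, from the question above
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float) -> float:
    """Dollars per year for a load drawing `watts` continuously."""
    return watts / 1000 * HOURS_PER_YEAR * RATE

separate = {"pfSense box": 20, "NAS": 45, "HTPC": 25}  # guesses
consolidated_watts = 60                                # one host, also a guess

three_boxes = sum(annual_cost(w) for w in separate.values())
one_host = annual_cost(consolidated_watts)

print(f"Three boxes: ${three_boxes:.0f}/yr")
print(f"One host:    ${one_host:.0f}/yr")
print(f"Savings:     ${three_boxes - one_host:.0f}/yr")
```

With those guesses it works out to roughly $50/yr in power alone, before counting the avoided hardware purchases.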
-
Meh, I have virtualised pretty much everything… MySQL, MSSQL, Exchange 2013 & 2016, even file servers :)
But my pfSense is on bare metal, with a standby virtual machine in case it goes offline...
So yeah, I would virtualise it..
There were security considerations in the past, but they don't apply any more...
I remember that the Smoothwall people were strongly against it, but that was some 8 years ago :)
-
Thank you very much for your detailed input! It sounds like ESXi is the way to go; I'm starting to read up on it already. It'll be a while before I pull the trigger on this, since my systems are all working great right now and I have a lot of learning to do before I start migrating to VMs, but it does sound like virtualization makes the most sense for what I'm trying to accomplish.
-
The attack surface of ESXi is very small, and it has been hardened over years of use in the enterprise. I would never run Hyper-V on a full Windows server – too much Windows security baggage. I think they have a smaller version of Hyper-V Server, but I don't know for sure since I really don't care for Hyper-V. VMware for me.
-
Starting to read up on how CPU virtualization works here: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf
This statement looks generally promising for using lower-end equipment.
In most environments ESXi allows significant levels of CPU overcommitment (that is, running more vCPUs on a host than the total number of physical processor cores in that host) without impacting virtual machine performance.

If an ESXi host becomes CPU saturated (that is, the virtual machines and other loads on the host demand all the CPU resources the host has), latency-sensitive workloads might not perform well.

It does reinforce marvosa's comment on guarding resources for your firewall.
certain VMs they will hog resources and affect the performance of other VMs. For a firewall, I believe it's recommended to set reservations for CPU and RAM.
Still have more reading to do to figure out how to correlate physical CPU cores and clocks to how much I can overcommit.
-
Reading through this document: https://communities.vmware.com/servlet/JiveServlet/previewBody/21181-102-1-28328/vsphere-oversubscription-best-practices%5B1%5D.pdf
The answer to CPU overcommitment is basically "it depends", and I'll have to figure it out by monitoring my specific setup.
BUT, it does give some very encouraging generalized guidance.

In VMware, a physical CPU (pCPU) refers to either an actual core or a hyperthreaded core, so a dual-core HT CPU has 4 pCPUs as far as VMware is concerned.
It goes on to reference a Dell whitepaper stating that a vCPU-to-pCPU ratio as high as 3:1 is no problem.
So based on this broad generalization, I can expect a stable environment with up to 12 vCPUs on an i3-7101TE @ 3.4GHz.

From what I've read, FreeNAS typically runs with 2 vCPUs; I'm assuming the same for pfSense and for LibreELEC (although based on my current use of it, I think 1 vCPU would be more than adequate for LibreELEC).

That puts me at 6 vCPUs on 4 pCPUs, a 1.5:1 ratio, so it sounds like a modern low-end i3 is perfect for this application (a quick sanity check of that math is sketched at the end of this post).

Paired with something like a Supermicro X11SSH-CTF with a built-in LSI HBA for VT-d passthrough and built-in X550 10Gb Ethernet, it seems like this combo would last me for a very long time, ideally 10+ years.
The only issue here is that the 7101s aren't being sold yet (at least I can't find them). But I probably won't be going through with this for over a year anyway, as I take time to learn FreeBSD / FreeNAS / ESXi before I actually try to use them.
Hopefully by then board prices will have dropped a little and the 7101s will be commercially available.

Another appealing option is a passively cooled Supermicro Xeon D-1518 combo, but the primary concern there is the lack of an iGPU for hardware HEVC 10-bit decode. As I understand it, Intel iGPU passthrough isn't fully supported by ESXi yet; hopefully over the course of the next year+ that will get supported and this will all make sense for me.
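For what it's worth, here's the ratio math from above as a quick sketch, using VMware's convention that each hyperthread counts as a pCPU. The per-VM vCPU counts are my planned assignments, not measured requirements:

```python
# vCPU:pCPU overcommit check against the 3:1 guidance from the Dell whitepaper.
physical_cores = 2        # i3-7101TE: 2 cores
threads_per_core = 2      # Hyper-Threading enabled
pcpus = physical_cores * threads_per_core   # ESXi sees 4 pCPUs

vcpus = {"pfSense": 2, "FreeNAS": 2, "LibreELEC": 2}  # planned assignments
total_vcpus = sum(vcpus.values())

ratio = total_vcpus / pcpus
print(f"{total_vcpus} vCPUs on {pcpus} pCPUs -> {ratio:.1f}:1")
print("within 3:1 guidance" if ratio <= 3 else "over 3:1 guidance")
```

6 vCPUs on 4 pCPUs is 1.5:1, comfortably inside the 3:1 rule of thumb.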
-
You could also take a look at Proxmox.
Running perfectly fine here on a Supermicro X11SSL-F with a Core i3-7100 (16GB, old 350W Enermax EES350AWT-ERP, 2x WD Red 2TB drives in an MD mirror + a cheap SanDisk SSD: about 26 watts when idle).
14 vCPUs provisioned, running a Win10 VM, several Linux VMs (including Zabbix with MySQL) and my pfSense 2.4 beta; occasionally a few additional test machines run as well with no problem.
The only slow part is the drives when installing the weekly Ubuntu kernel upgrades :)
Windows / pfSense run from the SSD, lightning fast.
pfSense throughput is around 940 Mbit/s (iperf3 from LAN against a test VM on the WAN side); OpenVPN is around 250 Mbit/s measured locally from the WAN.
-
I have Hyper-V running on:
ASRock Rack Z97M WS
Core i3-4160
16GB (2x8GB) Crucial Ballistix Sport VLP
128GB SanDisk Z400s SSD <- for Hyper-V
240GB Intel SSD 530 <- for VM's
320GB 2.5" WD Blue <- for downloads
Delta Electronics DPS-250AB-53A (250 Watt | 80 Plus Bronze)

VMs:
pfSense > 2048 MB RAM assigned
Win8.1 > 3072 MB RAM assigned | For stuff that only works on Windows and needs to run all the time
Debian > 2048 MB RAM assigned | Webserver / MQTT server - Nginx, HiveMQ, MariaDB, InfluxDB, Grafana, ExpressionEngine
Debian > 1024 MB RAM assigned | FreeSWITCH telephony platform
Debian > 512 MB RAM assigned | Pi-hole

Power consumption is around ~20W (a quick tally of those RAM assignments is sketched at the end of this post).
I use just Hyper-V (it's free), not Server 20XX… so no GUI.
I've had it running for 2 years without problems on Hyper-V 2012 R2, and I switched to 2016 a week ago.
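Just as arithmetic on the numbers in this post, here's a tally of the static RAM assignments against the 16GB host (Hyper-V itself also needs some headroom):

```python
# Sum the static RAM assignments and see what's left for the Hyper-V host.
host_ram_mb = 16 * 1024
vms = {
    "pfSense": 2048,
    "Win8.1": 3072,
    "Debian (web/MQTT)": 2048,
    "Debian (FreeSWITCH)": 1024,
    "Debian (Pi-hole)": 512,
}

assigned = sum(vms.values())
print(f"{assigned} MB assigned of {host_ram_mb} MB "
      f"({host_ram_mb - assigned} MB left for the host)")
```

That leaves about 7.5GB free, plenty for the hypervisor and the occasional extra VM.
-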
@KOM:
The attack surface of ESXi is very small, and it has been hardened over years of use in the enterprise. I would never run Hyper-V on a full Windows server – too much Windows security baggage. I think they have a smaller version of Hyper-V Server, but I don't know for sure since I really don't care for Hyper-V. VMware for me.
It's about the same attack surface for Hyper-V as for ESXi (the kernel, the NIC driver and the hypervisor).
The rest of the Windows server should be seen as a VM without a default internet connection; therefore it doesn't expose any attack surface to the internet. However, for a multi-server installation I would use Server Core (Windows without a GUI) or, if on 2016, Nano Server for my Hyper-V hosts. Not for security, but for easier maintenance (aka less patching).

There is a 100% free version of Server Core with Hyper-V. It's called Hyper-V Server and can be found on the MS download page. It's listed as a trial without an end date and has exactly the same features as the paid version; the only difference is licensing for Windows VMs on top. For free you get zero licenses, for Standard edition you get two, and for Datacenter, unlimited.
-
Yes, if I remember right it's the same if you enable Hyper-V on your Windows installation -> Windows becomes a VM.
Also, Hyper-V is not reachable from the outside if you disable "Allow management operating system to share this network adapter" on the virtual switch that is your WAN.