Dedicated or Shared Host Hardware???
-
From a security point of view, is it best to have a pfSense VM on its own isolated host hardware, or is it okay to share the host hardware with other line-of-business servers?
My issue is that the whole point of virtualization is to utilize server hardware more fully and efficiently, but there has to be some kind of security risk when you share the hardware of your network edge with internal servers. I can go either way since I have the hardware, but I would prefer to be more green without introducing unnecessary and real risk to my network.
Either way, I enjoy the flexibility and recoverability of running pfSense as a VM.
Thanks for your input and thoughts.
-
You'll find a slew of other threads on the subject. Summary:
-
Many of us don't recommend it since it does reduce security
-
Many people don't care about the reduced security since it is only for their home network or they don't believe they would be a target
-
-
I think I did not ask my question clearly enough.
I am not asking about a VM vs. physical server setup.
I am going to do a VM for pfSense no matter what, so is it okay to put my pfSense VM on an ESXi host that has other line-of-business VMs running on it, or should it be on its own isolated ESXi host with no other VMs on it?
My guess is that if I am going to do a VM for pfSense, it should be on its own ESXi hardware.
-
My answer still stands, though if you are going to go down the VM route, having it on its own host is probably better (for some value of better) than running it on a shared host. It would still be better to run it directly on the hardware, though.
-
@Cry:
My answer still stands, though if you are going to go down the VM route, having it on its own host is probably better (for some value of better) than running it on a shared host. It would still be better to run it directly on the hardware, though.
Other than snapshots/VM copies, what is the benefit of running on a dedicated host vs. standalone hardware? What can the VM do for you that decent RAID and a backup PSU can't?
Are you suggesting pooling two dedicated hosts and then running multiple virtual pfSense instances across that pool for CARP? That's not a terrible idea, although still not the best.
At home I might consider virtual so that I can cut down on my power bill and consolidate, but for a client I'd never consider sharing a firewall host with anything else.
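One gotcha if you do try CARP between pfSense VMs on pooled ESXi hosts: the port groups carrying the CARP VIPs generally need to allow promiscuous mode, MAC address changes and forged transmits, or failover won't work. A minimal pyVmomi sketch of flipping those flags on a standard port group; the host address, credentials and the "LAN" port group name are just placeholders for this example:

    # Hypothetical example: relax the security policy on a standard vSwitch
    # port group so CARP VIPs can fail over between pfSense VMs.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab use only
    si = SmartConnect(host="esxi01.example.lan",    # placeholder host
                      user="root", pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]                          # first (only) ESXi host
        netsys = host.configManager.networkSystem

        # Find the existing "LAN" port group and rewrite it with a
        # permissive security policy (promiscuous / MAC changes / forged Tx).
        for pg in netsys.networkInfo.portgroup:
            if pg.spec.name == "LAN":                # placeholder port group
                spec = pg.spec
                spec.policy = vim.host.NetworkPolicy(
                    security=vim.host.NetworkPolicy.SecurityPolicy(
                        allowPromiscuous=True,
                        macChanges=True,
                        forgedTransmits=True))
                netsys.UpdatePortGroup(pgName="LAN", portgrp=spec)
    finally:
        Disconnect(si)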
-
For home, I would do what I can to cut costs also. I would have a dedicated NIC, though, just for WAN traffic.
I don't see the need for a VM for pfSense if the entire machine is dedicated to it anyway: a reinstall of ESX and restoring a VM takes more time than just installing pfSense and restoring a backup copy of the config. pfSense will probably run better if not virtualized, but you probably won't see any speed benefit if you have a slow internet connection (sub-100Mbit). If you have less than 300Mbit, you can get an Atom 330 based board, and it will only use 0.5 amps at 120 volts. If you have less than 100Mbit, there are boards like the ALIX that use even less. Considering the hardware you have to have just to install ESX, you are going to be looking at 2 or more amps. That would be more "green".
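To put rough numbers on that, here is a back-of-the-envelope sketch in Python using the 0.5 amp and 2 amp at 120 volt figures above; the $0.12/kWh electricity price is just an assumption for illustration:

    # Back-of-the-envelope power comparison: dedicated Atom 330 pfSense box
    # vs. an ESX-capable host, using the amp figures quoted above.
    VOLTS = 120
    PRICE_PER_KWH = 0.12          # assumed electricity price, USD
    HOURS_PER_YEAR = 24 * 365

    def yearly_cost(amps):
        watts = amps * VOLTS                      # P = I * V
        kwh = watts * HOURS_PER_YEAR / 1000.0     # energy used per year
        return watts, kwh, kwh * PRICE_PER_KWH

    for label, amps in [("Atom 330 board", 0.5), ("ESX-capable host", 2.0)]:
        watts, kwh, cost = yearly_cost(amps)
        print(f"{label}: {watts:.0f} W, {kwh:.0f} kWh/yr, ~${cost:.0f}/yr")

    # Atom 330 board: 60 W, 526 kWh/yr, ~$63/yr
    # ESX-capable host: 240 W, 2102 kWh/yr, ~$252/yr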
For security at work or for a client, I would never put a firewall of any kind on a VM host with other guests. If you are going to do a VM either way, put pfSense on its own hardware.
-
Using VMs was a big step for me, but there is a learning curve to using any type of technology. It took me a year before I could actually understand what virtualization could offer in terms of reliability and stability. I now understand it more and use it confidently in all my production environments. It has let me keep downtime to a minimum, but with anything computer related nothing ever beats backups, backups, backups… You also need to practice recovery until you can do it efficiently. I do understand why people have problems with virtualization; it is basically the setup and planning that most would like to skip past so they can just implement. I have nearly all my clients' servers running Xen (not XenServer) with pfSense as a DomU with no problems whatsoever, running between 5 and 7 other DomUs alongside it, ranging from file servers to VoIP to security cams and also desktops for remote desktop solutions. I believe it's best to play around with Xen without a GUI; it may teach you more than you think, though it gets frustrating at times (like the traffic shaper ;) ). I have chosen Xen as my platform of deployment and it hasn't failed me yet. Mostly I use Xen for PCI passthrough, with and without IOMMU, and AMD is my choice of server platform, running Debian.
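For anyone curious what a pfSense DomU with a passed-through NIC looks like in practice, here is a rough sketch. I drive xl directly, but the same definition can be scripted from the Debian dom0 with the libvirt Python bindings; the domain name, disk path, bridge name and PCI address (0000:03:00.0) below are only placeholders:

    # Rough sketch: define a pfSense HVM DomU under Xen via libvirt,
    # passing a physical NIC (PCI 0000:03:00.0) through to the guest.
    # Disk path, memory size, bridge and PCI address are placeholders.
    import libvirt

    DOMAIN_XML = """
    <domain type='xen'>
      <name>pfsense</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os>
        <type arch='x86_64' machine='xenfv'>hvm</type>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/xen/images/pfsense.img'/>
          <target dev='xvda' bus='xen'/>
        </disk>
        <interface type='bridge'>
          <source bridge='xenbr0'/>   <!-- LAN-side virtual NIC -->
        </interface>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
        </hostdev>
      </devices>
    </domain>
    """

    conn = libvirt.open("xen:///system")   # connect to the local Xen dom0
    try:
        dom = conn.defineXML(DOMAIN_XML)   # persist the domain definition
        dom.create()                       # boot the DomU
    finally:
        conn.close()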
-
I don't have a problem with virtual environments. I run ESX, VirtualBox and Xen in various deployments. I try to keep edge security at the edge. Even if it is only two dedicated machines (clustered FWs), I think that security should remain more isolated; it serves more than just the VMs. I am intrigued by the PCI passthrough with Xen. The only problems I have ever had with Xen were compatibility and stability. It could very well be just me, but I have found ESX more stable and portable. Could just be familiarity also.
I think I will try Xen again since it has been so long since I used it.
-
From a security point of view, is it best to have a pfSense VM on its own isolated host hardware, or is it okay to share the host hardware with other line-of-business servers?
My issue is that the whole point of virtualization is to utilize server hardware more fully and efficiently, but there has to be some kind of security risk when you share the hardware of your network edge with internal servers. I can go either way since I have the hardware, but I would prefer to be more green without introducing unnecessary and real risk to my network.
Either way, I enjoy the flexibility and recoverability of running pfSense as a VM.
Thanks for your input and thoughts.
To answer your question, the "safest" way is to install a bare-metal hypervisor such as ESX or Hyper-V Server from Microsoft. At that point you can create a virtual LAN with multiple virtual switches: dedicate one physical adapter bridged to the pfSense VM for WAN, and bridge the other to a separate virtual switch used to administer the host and the internal LAN. Safer, yes, since this is bare-metal virtualisation. Whether a hosted product like VMware Server running on top of an OS can be hacked is debatable, even with protocols disabled on the adapter side.
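A rough pyVmomi sketch of that layout on a standalone ESXi host, assuming vmnic1 is the physical adapter you want to dedicate to WAN; the switch, port group and NIC names here are placeholders:

    # Sketch: create a dedicated WAN vSwitch bound to one physical uplink
    # (vmnic1) plus a WAN port group for the pfSense VM only. LAN and
    # management stay on the default vSwitch0.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab use only
    si = SmartConnect(host="esxi01.example.lan",    # placeholder host
                      user="root", pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        host = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True).view[0]
        netsys = host.configManager.networkSystem

        # New vSwitch whose only uplink is the WAN-facing physical NIC.
        vswitch_spec = vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"]))
        netsys.AddVirtualSwitch(vswitchName="vSwitchWAN", spec=vswitch_spec)

        # Port group the pfSense VM's WAN vNIC attaches to; nothing else
        # (no VMkernel/management port) should be placed on this switch.
        pg_spec = vim.host.PortGroup.Specification(
            name="WAN", vlanId=0, vswitchName="vSwitchWAN",
            policy=vim.host.NetworkPolicy())
        netsys.AddPortGroup(portgrp=pg_spec)
    finally:
        Disconnect(si)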
As you already know by now, virtualisation allows for hardware consolidation, and hypervisors leverage that idea. Long gone are the days when you needed dedicated hardware for each server.