Hypervisors and near-native performance
-
A few questions - I'm a newb with pfSense and hypervisors (a project I am embarking on), so I'm a little lost with all the available hypervisor types - bear with me. My ideas are probably flawed, but that's what I'm here for. :)
Thanks in advance.
Obviously there will always be some overhead with virtualisation; correct me if I'm wrong, but from what I've read most of the performance drop with pfSense under virtualisation comes from dealing with virtualised NIC devices.
After reading through the forums, my general conclusion is that PCI passthrough (VT-d/IOMMU) is the way to go with virtualisation:
- PCI passthrough: what is the general opinion of this option in terms of throughput? Is there a noticeable difference in CPU utilisation compared with virtualised NICs? And do passed-through devices take a large performance hit compared with native devices? I would pass through all interfaces, including the WAN port. My main concern is pfSense sucking up all my CPU time and only achieving 500Mbps (not sluggish as such, but only half the bandwidth intended).
- Hypervisor type: which virtualisation type would give the best performance - full virtualisation, hardware-assisted, or PV - and which hypervisor would you then recommend? Security is obviously a concern. My current choices are KVM and Xen.
Some background on the intended setup (if it helps):
It will be a relatively busy box for personal home use only (one or two connected clients). The intention is to achieve 1Gbps over a hardware interface (i.e. physically connected clients) when needed, but the average load will be well under that - it's mainly to achieve near-local write speeds on my NAS and for the odd file transfer from other instances.
pfSense will implement: a transparent proxy + VPN client, Snort, ClamAV, and QoS.
Platform (not yet chosen):
-
CPU: Ivy Bridge LGA1155 i5 (VT-d, EPT)
-
BOARD: LGA1155 motherboard / 2x Gb Intel NICs / VT-d enabled / adequate bus speed / 4 or 8GB RAM
-
Expansion cards: 1x PCIe (x1) WiFi (access point), 1x PCI ADSL2+ modem (presents itself as an RTL8100CL NIC): http://goo.gl/vR5MK, possibly a PCIe (x4) NIC.
-
SETUP: 3x virtual servers - router (pfSense + PCI passthrough) / *NAS (Gentoo) / *P2P server (Gentoo). *Obviously these will connect through a vSwitch
-
HYPERVISOR: VT-d capable hypervisor
Thanks again.
-
Well I don't virtualize my router, but I have used a virtualized file storage server for a while. I ran a bunch of informal tests using iperf and file transfer speeds to gauge the quality of the connection.
I've found performance is pretty good with virtualized NICs, but passing through a physical NIC is better. I got about 780Mbps with a virtualized NIC and about 75-100Mbps more when I passed through a real Intel NIC.
I've since moved my storage server to a real physical host on bare metal, yet my performance is about the same as with the passed-through NIC.
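If you want to repeat that sort of informal test yourself, iperf is the easiest route, but here's a rough Python sketch of the same idea if you'd rather roll your own (run "server" on one end and "client <server-ip>" on the other; the port and the 1 GiB transfer size are arbitrary placeholders):

# Rough throughput check between two hosts - a crude stand-in for iperf.
# Usage: "python3 tput.py server" on one end,
#        "python3 tput.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5201              # arbitrary placeholder port
CHUNK = 64 * 1024        # send/receive in 64 KiB pieces
TOTAL = 1024 ** 3        # push roughly 1 GiB per run

def server():
    with socket.socket() as s:
        s.bind(("", PORT))
        s.listen(1)
        conn, addr = s.accept()
        with conn:
            received = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
        secs = time.time() - start
        print(f"{received * 8 / secs / 1e6:.0f} Mbps from {addr[0]}")

def client(host):
    buf = b"\x00" * CHUNK
    sent = 0
    start = time.time()
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL:
            s.sendall(buf)
            sent += len(buf)
    secs = time.time() - start
    print(f"{sent * 8 / secs / 1e6:.0f} Mbps to {host}")

if __name__ == "__main__":
    if sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()

It's nowhere near as thorough as iperf, but it's enough to see the gap between a virtualized NIC and a passed-through one.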
For the hypervisor I recommend ESXi. It's a fantastic bit of software: feature-packed, stable, easy to use, and free for small deployments. I wouldn't use Xen, unless you meant Citrix's XenServer, and even then it's not as easy to use as ESXi. KVM is fine, but not as good as ESXi IMO. Give ESXi a try; I'd be surprised if you weren't happy with it.
Regarding full virtualization vs paravirtualization: you'd want to go with PV whenever possible. Guests with PV drivers "know" they're VMs, so they skip certain tasks that would be too resource-intensive to virtualize and let the hypervisor handle them. This gives more performance at a lower resource cost - a win-win.
-
Thanks. I forgot to reply. :)
I intend to go PV with PCI passthrough on the NICs.
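From what I've read, if I do end up on the KVM/libvirt route the passthrough itself looks fairly simple - something like this sketch of my understanding (the guest name and the PCI address are just placeholders I'd take from lspci, so don't take it as gospel):

# Minimal sketch, assuming KVM + the libvirt Python bindings:
# attach a physical PCI NIC to the pfSense guest as a passthrough device.
import libvirt

# Placeholder PCI address (bus 03, slot 00, function 0) - taken from lspci.
HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")   # local KVM hypervisor
dom = conn.lookupByName("pfsense")      # placeholder guest name
# Persist the passthrough NIC in the guest's configuration.
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()

(With the usual caveat that VT-d has to be enabled in the BIOS for any of this to work.)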
One question I have is regarding virtual switches. On the same host I intend to run two other machines, one being a NAS (Gentoo); however, the physical NICs will have been passed through to pfSense, so the remaining two instances will have virtualised NICs. Since I intend to achieve near-gigabit write speeds, would there be a massive spike in CPU utilisation?
-
OK, I'm assuming you have two physical NICs.
Keep in mind ESXi will need one physical NIC for its management network, so you can't pass both through to pfSense.
I would pass one to pfSense for WAN and keep the other for ESXi. I'd then make a virtual switch, add ESXi's NIC to that, give every VM a virtual NIC, and add those to the same vSwitch.
So pfSense would have the passed-through physical NIC for WAN and a virtual NIC for LAN traffic.
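If you'd rather script that than click through the vSphere Client, it's roughly these esxcli calls - here wrapped in a bit of Python you could run from the ESXi shell. vSwitch1, vmnic1 and the "LAN" port group name are placeholders; check "esxcli network nic list" for your actual uplink name:

# Sketch of the vSwitch layout described above, using standard esxcli commands.
import subprocess

cmds = [
    # Create a standard vSwitch for LAN-side traffic.
    ["esxcli", "network", "vswitch", "standard", "add",
     "--vswitch-name=vSwitch1"],
    # Attach the physical NIC that ESXi keeps (not the one passed to pfSense).
    ["esxcli", "network", "vswitch", "standard", "uplink", "add",
     "--vswitch-name=vSwitch1", "--uplink-name=vmnic1"],
    # Port group that the pfSense LAN vNIC and the other VMs plug into.
    ["esxcli", "network", "vswitch", "standard", "portgroup", "add",
     "--vswitch-name=vSwitch1", "--portgroup-name=LAN"],
]

for cmd in cmds:
    subprocess.check_call(cmd)

The vSphere Client GUI gets you to the same place, of course.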
Unless you're fortunate enough to have a WAN link capable of pushing your LAN speeds to the limit, virtualized NIC vs physical NIC won't really matter. Use any of the vmxnet NICs after you install the Open VM Tools package; those are VMware's PV NICs, with high-performance drivers and low CPU overhead.
-
If your motherboard has slots, more physical interfaces are $30-100+ each (real Intel - the higher cost if you seek real Intel "server-rated" cards). My system at present has two built in and a third plugged in; you can also get multiple physical interfaces per expansion slot with the higher-end cards. That's one reason a full-sized motherboard can be a good thing.
I am utterly ignorant of VM stuff myself at present, but I stopped into this section of the forum to try to become more educated about options for open-source/freeware VMs, since I did rather overbuild my system. So it's something I'm thinking about as I start work on bringing the second one online (the first is running bare metal and loafing; the second might also, but I thought I should at least look into it).
-
On the hypervisor side, depending on your background (Windows), Hyper-V is also a good alternative for pfSense (if you use the custom pfSense ISO with integrated synthetic drivers - see "Hyper-V integration installed with pfSense 2.0.1", http://forum.pfsense.org/index.php/topic,56565.0.html). Hyper-V doesn't have some of ESXi's more advanced features, especially third-party management extensions, but for small setups it is more than adequate, you don't have the CPU and number-of-VM limitations of the free ESXi version, and the paravirtualized driver support in Windows guests can make a considerable difference in performance (i.e., it helps if all or most of your guest VMs will be running Windows or have synthetic drivers).
You can also download the free Windows Server 2012 Hyper-V Core edition (i.e., Windows Server without the GUI, with Hyper-V only).
To achieve near-native performance you should pre-allocate the virtual hard disks, but it also helps to have the option of dynamically growing disks (for disks storing files that seldom change).
As for which version to get (2008 R2 vs 2012), it is up to you. It might take some getting used to the "Modern UI" in WSrv12, but there are considerable improvements in Hyper-V 3.0, making it worth it (and you don't have to deal with the Modern UI that much once you have Hyper-V installed).
As for the network card, for a small server I typically get a VMQ (VMDq)/SR-IOV-capable Intel dual-port card and set the ports up in a team (using Intel's enhanced drivers) - see http://www.intel.com/support/network/sb/CS-030993.htm