pfSense virtio networking under KVM
-
Any news about that?
Are the virtio drivers included in the 2.0 final release?
-
Hello!
Running pfSense-2.0-RELEASE-i386
Same problem here :)
With e1000 NICs under KVM I'm getting 250-300 Mbit/s.
This isn't enough for internal routing (for example, when using pfSense to route traffic between 192.168.x.x and 10.x.x.x).
I'm just curious about that.
Okay, some tests:
Q1: Why do I have to exit the shell and enter it again (press 8…) before a newly installed command (iperf) becomes available after adding it via packages? (See the sketch at the end of this post.)
Q2: I reach about 240-250 Mbit/s in both directions: DebianVM+virtio-net -> pfSenseVM+e1000 and pfSenseVM+e1000 -> DebianVM+virtio-net.
Q3: There is no problem reaching about 900 Mbit/s from DebianVM+virtio-net to the client.
Q4: I get timeouts on the IDE drive attached to the pfSense VM. No such thing when using virtio storage in the Debian VM.
Both VMs are placed on the same storage: software mdadm RAID1 with two drives.
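A likely explanation for Q1 (an assumption: the pfSense console shell is csh/tcsh, which caches command locations, so newly installed binaries aren't found until that cache is rebuilt):
rehash     # csh/tcsh: rescan $PATH for new executables
hash -r    # POSIX sh equivalent, if the console shell is sh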
-
Maybe this could help:
http://people.freebsd.org/~kuriyama/virtio/
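For reference, a sketch of the /boot/loader.conf lines such a backported driver set typically needs (module names assumed from the FreeBSD virtio port; match them to the .ko files you actually install):
virtio_load="YES"         # core virtio bus support
virtio_pci_load="YES"     # virtio PCI transport
if_vtnet_load="YES"       # virtio network interface (vtnet)
virtio_blk_load="YES"     # virtio block storage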
-
@TooMeek, thanks for the tests. Have you tested the drivers that WetWilly posted?
Regards, Valle
-
I tried the drivers that WetWilly posted, but I can't see any virtio network interface in pfSense.
How can I catch the log of loading /boot/loader.conf?
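In the meantime, two standard FreeBSD commands should show whether the modules actually loaded:
kldstat                   # list loaded kernel modules; look for virtio/if_vtnet entries
dmesg | grep -i virtio    # any boot messages from the virtio drivers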
-
Well, the driver I posted is for FreeBSD 8.2 and 9.0.
If I'm not mistaken, pfSense 2.0 is based on FreeBSD 8.1, so the driver most likely needs (a lot of) tweaking.
-
Is inclusion of virtio drivers in pfSense planned in the near future?
I need a permanent virtual router to do inter-vnet routing on a KVM host with no NAT, firewall rules as needed, and OpenOSPFD (to distribute vnets to the main hardware firewall/router dynamically), so I first installed pfSense (2.0.1). It was functionally up to the task, of course, but as it lacks virtio (and possibly other KVM-related optimizations) the performance was horrendous.
I gave it 1 CPU core (Xeon E3-1220, 3.1 GHz) and 512 MB RAM, but it actually idled at around 50-60% CPU usage on the KVM host even though the pfSense VM itself reported being about 100% idle, under a couple of hundred connections and maybe 1-20 Mbit/s of traffic. This indicates a massive performance hit in the emulated VM interface alone.
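For anyone reproducing this, one way to sample the guest's host-side CPU cost (the process-name pattern is an assumption about how the VM was started; adjust it to your qemu command line):
pidstat -p $(pgrep -f 'name pfsense' | head -n1) 1    # per-second CPU usage of the qemu/kvm process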
Testing routing performance with iperf, I couldn't get more than about 450 Mbit/s, and during the test the CPU was completely maxed out, making the web interface unusable.
Because of this I instead installed Debian Wheezy, activated forwarding, and installed Quagga for OSPF, using virtio for both disk and network interfaces. This machine idles at 0-0.1% CPU usage on the KVM host, and during iperf tests routing performance was a full 944 Mbit/s, which is as good as it can possibly get without jumbo frames etc., using only 13-16% of CPU power on the KVM host, which is very good.
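A minimal sketch of that kind of Debian forwarding + OSPF setup (the prefixes are hypothetical placeholders, not the actual config):
# enable routing persistently
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
# /etc/quagga/ospfd.conf -- announce the vnet prefixes into area 0
router ospf
 network 10.0.0.0/24 area 0.0.0.0
 network 192.168.1.0/24 area 0.0.0.0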
Internal guest-to-guest performance using virtio and Debian guests was 19.5 Gbit/s with iperf, so the potential performance is extremely good. I don't know if FreeBSD can reach those speeds on a KVM host at all, but pfSense should at least be able to idle at around 0% host CPU usage and do plain routing at gigabit speeds without any problems to be a viable VM appliance on KVM-based hypervisors.
As I have networked power monitoring on the power receptacles, I could actually see the power usage drop by several watts when I replaced the pfSense VM with Debian, thanks to eliminating the high idle CPU usage.
-
I gave it 1 CPU core (Xeon E3-1220, 3.1 GHz) and 512 MB RAM, but it actually idled at around 50-60% CPU usage on the KVM host
I'm running Fedora 16 and KVM version 0.15.1 on a Xeon E3-1240 and not seeing these problems. I gave the VM 1 VCPU and 512 MB of RAM.
My server has 2x 82574L NICs. I configured them this way:
- WAN - Using the Intel IOMMU, I passed one of the Intel NICs through to the pfSense VM (I had to add intel_iommu=on to the kernel command line and specify --host-device in virt-install). pfSense sees the raw network card and identifies it as Intel PRO/1000 Network Connection 7.2.3.
- LAN - I configured the 2nd Intel NIC as a bridge (br0), and when I set up the VM I specified the NIC model as e1000. pfSense sees this network card as Intel PRO/1000 Legacy Network Connection 1.0.3.
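To verify the passthrough side of such a setup from the host, something like this should work (standard tools; the 82574L match is just this thread's NIC model):
dmesg | grep -i -e dmar -e iommu    # confirm the IOMMU initialized at boot
lspci -nn | grep 82574L             # find the NIC's PCI address (e.g. 03:00.0)
virsh nodedev-list --cap pci        # list PCI devices libvirt can hand to a guest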
It's hard to tell why you are having performance problems because there isn't enough information here.
What OS are you running as the hypervisor?
What version of KVM?
What NICs are you using?
What drivers are you using for the NICs (in pfSense)?
How did you set up your VM? Did you use "hvm"? I don't know how you configured your VM, so here is how I configured mine.
virt-install \
  --connect qemu:///system \
  --name=pfsense \
  --virt-type kvm \
  --hvm \
  --vcpus=1 \
  --ram=512 \
  --disk path=/home/vm/pfsense,size=4 \
  --network bridge=br0,model=e1000 \
  --cdrom=/home/iso/bsd/pfSense-2.0.1-RELEASE-amd64.iso \
  --os-variant=freebsd8 \
  --graphics vnc,listen=0.0.0.0 \
  --host-device=03:00.0
(--network bridge=br0,model=e1000 is the LAN NIC; --host-device=03:00.0 is the WAN NIC, the raw PCI device.)
-
I have fired up my pfSense VM again to answer your questions.
As long as the pfSense server is idling with almost no traffic going through it, the CPU usage is only 1-5%: still high, but not extremely so.
But with a couple of hundred connections and a few Mbit/s, the situation is as in my previous post.
My system has 4x Intel 82574L NICs (on the mainboard), and using direct hardware access is of course not even an option. That way you lose most of the good things that make you use virtualization in the first place: live migration becomes troublesome, you get hardware dependencies, and so on. I still have VT-d active in the BIOS just in case.
I use Proxmox VE 2.0 (latest beta), a Debian-based distro purely optimized for KVM/OpenVZ, using its own virtualization-optimized kernel based on Debian Squeeze's 2.6.32 kernel.
Hardware virtualization definitely works in this setup, as I get very good performance from Linux and also Windows guests when using virtio drivers.
Hardware virtualization is also active for pfSense, and I use e1000 emulated NICs.
Versions: pve-qemu-kvm v1.0.1 (you run an older version with your 0.15.1).
You are only commenting on your idle CPU usage (on the KVM host, I hope, NOT in the guest), which is almost as it should be in my setup too when there is absolutely no traffic at all. But I severely doubt that you can max out the gigabit with low CPU usage on your e1000 interfaces, even though you say you have no problems.
If you could do something like this to test pfSense's routing performance over the e1000 interfaces with iperf, it would be more helpful:
client -> NIC1 -> vmbr0 -> e1000 -> pfsense -> e1000 -> vmbr1 -> virtio or physical NIC2 -> client 2
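For example, with iperf (version 2 syntax; the address is a placeholder for client 2's IP on the far side of pfSense):
iperf -s                   # on client 2: start the server
iperf -c 10.0.1.2 -t 30    # on client 1: push traffic through the routed path for 30 s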
My guess is that you too will get very bad performance far below gigabit speeds with very high CPU usage.
-
I gave it 1 CPU core (Xeon E3-1220, 3.1 GHz) and 512 MB RAM, but it actually idled at around 50-60% CPU usage on the KVM host
This happened to me with an HP ProLiant ML110 G5 with an Intel Pentium Dual-Core CPU E2160 @ 1.80 GHz, which has no Intel Virtualization Technology (VT-x):
http://ark.intel.com/products/29739/Intel-Pentium-Processor-E2160-(1M-Cache-1_80-GHz-800-MHz-FSB)
This can also happen when VT-x is disabled in the server BIOS. KVM then runs in unaccelerated mode, causing random pfSense hangs and 100% CPU usage at idle.
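A quick way to check both cases on the host (standard Linux commands):
egrep -c '(vmx|svm)' /proc/cpuinfo    # 0 = the CPU exposes no VT-x/AMD-V flags
dmesg | grep -i kvm                   # look for "disabled by bios" if the flag is present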
-
I gave it 1 CPU core (Xeon E3-1220, 3.1 GHz) and 512 MB RAM, but it actually idled at around 50-60% CPU usage on the KVM host
This happened to me with an HP ProLiant ML110 G5 with an Intel Pentium Dual-Core CPU E2160 @ 1.80 GHz, which has no Intel Virtualization Technology (VT-x):
http://ark.intel.com/products/29739/Intel-Pentium-Processor-E2160-(1M-Cache-1_80-GHz-800-MHz-FSB)
This can also happen when VT-x is disabled in the server BIOS. KVM then runs in unaccelerated mode, causing random pfSense hangs and 100% CPU usage at idle.
Both VT-x and VT-d are definitely enabled and working, which of course the performance of all the other Linux and Windows VMs with virtio confirms.
The high CPU usage is the result of about 5 Mbit/s of traffic and a couple of hundred connections. When there is absolutely no traffic at all, the idle usage is 1-5%.
As I said, I get about 450 Mbit/s maximum from pfSense (non-routed; less than half that when routed), which is about as good as it gets with pfSense on KVM without better support in FreeBSD.
Until someone can show me hard iperf numbers that greatly surpass 450 Mbit/s on KVM for pfSense, I really don't believe the problem is in my KVM setup rather than in pfSense, at least not while the Linux pfSense replacement's VM-to-VM performance is 19.6 Gbit/s, its routed performance is 9.6 Gbit/s, and physical host to Linux VM is 36 Gbit/s with iperf on the same KVM host.
That's basically 1/40 of the network performance with pfSense vs. Linux on KVM.