pfSense virtio networking under KVM



  • Hi.

    I took it upon myself to compile the FreeBSD virtio network driver for use on the pfSense box I have running on a KVM host.

    The files and quick instructions are posted on the web:

    http://wp.area26.se/2011/03/24/virtio-for-pfsense-2-0rc1/

    Quick instructions:

    Make a backup of your pfSense VM!
    Download the driver files from the URL above.
    Unzip them, then copy the files (*.ko) to your pfSense box under /boot/kernel.
    Edit the file /boot/loader.conf and add the following lines:
        virtio_load="YES"
        virtio_pci_load="YES"
        if_vtnet_load="YES"
    Shut down pfSense, change the NIC type to “virtio”, and restart.
    Re-assign your interfaces in pfSense.
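    The copy-and-edit steps above can be sketched as a small shell snippet. This is only a sketch: the .ko filenames are assumed from the module names in loader.conf, and BOOTDIR stands in for /boot so the snippet can be dry-run anywhere.

```shell
# Sketch of the install steps above, as run from a pfSense/FreeBSD shell.
# On the real box BOOTDIR would simply be /boot; a scratch directory is
# used here so the sketch can be tried safely.
BOOTDIR=${BOOTDIR:-/tmp/pfsense-virtio-demo/boot}
mkdir -p "$BOOTDIR/kernel"

# Copy the unzipped modules into place (filenames are assumptions):
# cp virtio.ko virtio_pci.ko if_vtnet.ko "$BOOTDIR/kernel/"

# Append the loader lines so the modules load at boot:
cat >> "$BOOTDIR/loader.conf" <<'EOF'
virtio_load="YES"
virtio_pci_load="YES"
if_vtnet_load="YES"
EOF
```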

    It seems to work OK for me, so it may work for you as well - YMMV.

    The driver BTW was not written by me, this is where I found it:

    http://lists.freebsd.org/pipermail/freebsd-current/2011-January/022036.html

    All appropriate credit goes to the original author…



  • We have added this driver to 2.0. Please test a new snapshot; it should be present sometime tomorrow.



  • I've installed the latest snapshot from the ISO. The drivers are not included in the installed system (in /boot/kernel). If I install/configure them from the source mentioned above, they work fine, but an update launched from the web interface deletes the drivers from /boot/kernel. I have to copy them back after every update and reconfigure the interfaces, which is rather tedious since I need to switch back and forth between e1000 and virtio in KVM… Can you suggest a solution?



  • Hasn't anyone tried a similar setup? For example, Debian's kFreeBSD installs the virtio network drivers by default, and they also work nicely. Reinstalling the drivers could be semi-automated with init scripts, but I don't really think that would be the solution.
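    For those hitting the update-wipes-the-drivers problem above, here is one way such a semi-automated restore could look. This is a hypothetical sketch: the stash directory, the filenames, and the idea of hooking it into a boot script are illustrative assumptions, not something from the thread.

```shell
#!/bin/sh
# Hypothetical restore script: keep the .ko files stashed in a directory
# that survives firmware updates, and copy any missing module back into
# /boot/kernel. SRC and DST are illustrative defaults, overridable for testing.
SRC=${SRC:-/root/virtio-drivers}
DST=${DST:-/boot/kernel}

for ko in virtio.ko virtio_pci.ko if_vtnet.ko; do
    # Only copy when the module is missing, so normal boots are untouched.
    if [ -f "$SRC/$ko" ] && [ ! -f "$DST/$ko" ]; then
        cp "$SRC/$ko" "$DST/"
    fi
done
```

    Running something like this at boot might avoid the manual copy after each update, though the interfaces may still need re-assigning afterwards.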



  • The download link isn't available any more.

    Can anybody upload the driver?

    Thanks.



  • Same here.

    I would love to get my hands on these drivers.





  • Rapidshare isn't serious, man!



  • I guess you can use the other six services to download.



  • Apparently this mod breaks the traffic shaper: the interfaces are not available under it. I wonder if there is an easy way to fix it…

    http://forum.pfsense.org/index.php/topic,37518.0.html


  • Rebel Alliance Developer Netgate

    Try adding it to the list at line 3520 of /etc/inc/interfaces.inc

    It may not really be ALTQ-capable, but you can try.



  • Will the virtio driver be included in the final release of pfSense 2.0?

    It's mentioned in this thread that it has been included since 2.0, but I can only get it working by installing it manually.

    "We have added this driver to 2.0.  Please test a new snapshot and it should be present sometime tomorrow."



  • Hello,

    I'd like to bump this topic and ask: are there virtio drivers in the latest pfSense 2.0 RCx now?

    Regards, Valle



  • @jimp

    Can you please make a statement and tell us whether the final pfSense 2.0 will include virtio drivers for KVM?

    Thanks for the answer.

    Valshare



  • After a few tests, I can say that the current virtio driver doesn't work correctly: with more than 3 network interfaces, routing breaks.



  • Thanks for that information. Did you also test the virtio disk driver?



  • Hi,

    No, I didn't test the virtio disk driver because of the trouble with the virtio net drivers. I think it's better to wait for the official one, if it is ever released.

    I have successfully tested the e1000 network and iSCSI disk drivers with pfSense.



  • Any news about that?
    Are the virtio drivers included in the 2.0 final release?



  • Hello!
    Running pfSense-2.0-RELEASE-i386
    Same problem here :)
    On e1000 with KVM I'm getting 250-300 Mbit/s.
    This isn't enough for internal routing (for example, when using pfSense to route traffic between 192.168.x.x and 10.x.x.x).
    I'm just curious about that.

    Okay. Some tests.
    Q1: why do I have to exit the shell and go into it again (press 8…) to see the new command (iperf) after adding it via packages?
    Q2: reaching about 240-250 Mbit/s DebianVM+virtio-net -> pfSenseVM+e1000 and pfSenseVM+e1000 -> DebianVM+virtio-net.
    Q3: there is no problem reaching about 900 Mbit/s from DebianVM+virtio-net to the client.
    Q4: timeouts on the IDE drive attached to pfSenseVM; no such thing when using virtio storage in DebianVM.
    Both VMs are placed on the same storage - software mdadm RAID1 with 2 drives.
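    For reference, throughput figures like the ones above come from a plain iperf pair; the address below is a placeholder, not from the thread.

```shell
# One side runs the server ("iperf -s" on, say, the Debian VM); the other
# side pushes traffic through the pfSense router. SERVER_IP is hypothetical.
SERVER_IP=${SERVER_IP:-192.168.1.10}

# Print the client invocation: -t is the test length in seconds,
# -i the interval between bandwidth reports.
echo iperf -c "$SERVER_IP" -t 30 -i 5
```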










  • @TooMeeK, thanks for the tests. Have you tested the drivers that WetWilly posted?

    Regards, Valle



  • I tried the drivers that WetWilly has posted, but I can't see any virtio network interface in pfSense.
    How can I see the log from loading /boot/loader.conf?



  • Well, the driver I posted is for FreeBSD 8.2 and 9.0.

    If I'm not mistaken, pfSense 2.0 is based on 8.1, so the driver most likely needs (a lot of) tweaking.



  • Is inclusion of virtio drivers in pfSense planned in the near future?

    I need a permanent virtual router solution to do inter-vnet routing on a KVM host with no NAT, firewall rules as needed, and OpenOSPFd (to distribute vnets to the main hardware firewall/router dynamically), and I first installed pfSense (2.0.1). It was functionally up to the task, of course, but as it lacks virtio (and possibly other KVM-related optimizations) the performance was horrendous.

    I gave it 1 CPU core (Xeon E3-1220, 3.1 GHz) and 512 MB RAM, but it idled at around 50-60% CPU usage on the KVM host even though the pfSense VM itself reported being about 100% idle, with a couple of hundred connections and maybe 1-20 Mbit/s of traffic. This indicates a massive performance hit in the emulated VM interface alone.
    Testing routing performance with iperf, I couldn't get more than about 450 Mbit/s, and then the CPU was completely maxed out, making the web interface unusable during the test.

    Because of this I instead installed Debian Wheezy, activated forwarding, and installed quagga for OSPF, using virtio for both disk and network interfaces. This machine idles at 0-0.1% CPU usage on the KVM host, and during iperf tests the routing performance was a full 944 Mbit/s, which is as good as it can possibly get without jumbo frames etc., using only 13-16% CPU on the KVM host, which is very good.

    Internal guest-to-guest performance using virtio and Debian guests was 19.5 Gbit/s with iperf, so the performance is potentially extremely good. I don't know if FreeBSD can reach those speeds on a KVM host at all, but to be an alternative for VM appliances on KVM-based hypervisors, pfSense should at least be able to idle at around 0% host CPU usage and do plain routing at gigabit speeds without any problems.

    As I have networked power monitoring on the power receptacles, I could actually see power usage decrease by several watts after replacing the pfSense VM with Debian, because of the high idle CPU usage.



  • @marsboer:

    I gave it 1 CPU-core (Xeon E3 1220 3.1GHz) and 512MB RAM but it actually idled at around 50-60% CPU usage on the KVM host

    I'm running Fedora 16 and KVM version 0.15.1 on a Xeon E3-1240 and not seeing these problems. I gave the VM 1 VCPU and 512 MB of RAM.

    My server has 2x 82574L NICs. I configured them this way:

    • WAN - Using the Intel IOMMU, I passed one of the Intel NICs through to the pfSense VM (I had to add intel_iommu=on to the kernel command line and specify --host-device in virt-install). pfSense sees the raw network card and identifies it as Intel PRO/1000 Network Connection 7.2.3.
    • LAN - I configured the 2nd Intel NIC as a bridge (br0), and when I set up the VM I specified the NIC model as e1000. pfSense sees this network card as Intel PRO/1000 Legacy Network Connection 1.0.3.

    It's hard to tell why you are having performance problems because there isn't enough information here.

    What OS are you running as the hypervisor?
    What version of KVM?
    What NICs are you using?
    What drivers are you using for the NICs (in PFSense)?
    How did you set up your VM? Did you use "hvm"?

    I wonder if you used "hvm" when you set up your VM. I don't know how you configured yours, so here is how I configured mine.

    virt-install \
        --connect qemu:///system \
        --name=pfsense \
        --virt-type kvm \
        --hvm \
        --vcpus=1 \
        --ram=512 \
        --disk path=/home/vm/pfsense,size=4 \
        --network bridge=br0,model=e1000 \              <--------- LAN NIC
        --cdrom=/home/iso/bsd/pfSense-2.0.1-RELEASE-amd64.iso \
        --os-variant=freebsd8 \
        --graphics vnc,listen=0.0.0.0 \
        --host-device=03:00.0                           <--------- WAN NIC (raw PCI device)



  • I have fired up my pfSense VM again to answer your questions.
    As long as the pfSense server is idling with almost no traffic going through it, the CPU usage is only 1-5% - still high, but not extremely so.
    But with a couple of hundred connections and a few Mbit/s, the situation is as in my previous post.

    My system has 4x Intel 82574L NICs (on the mainboard), and using direct hardware access is of course not even an option - that way you lose most of the things that make you use virtualization in the first place: troublesome live migration, hardware dependencies, and so on. I still have VT-d active in the BIOS just in case.

    I use Proxmox VE 2.0 (latest beta), a Debian-based distro purely optimized for KVM/OpenVZ, using its own virtualization-optimized kernel based on Debian Squeeze's 2.6.32 kernel.
    Hardware virtualization definitely works in this setup, as I get very good performance from Linux and also Windows guests when using virtio drivers.

    Hardware virtualization is also active for pfSense, and I use e1000 emulated NICs.

    Versions:
    pve-qemu-kvm v1.0.1 (you are running an older version, 0.15.1).

    You are only commenting on your idle CPU usage (on the KVM host, I hope, NOT in the guest), which is almost as it should be in my setup too when there is absolutely no traffic at all, but I seriously doubt that you can max out the gigabit link with low CPU usage on your e1000 interfaces, even though you say you have no problems.

    If you could do something like this to test pfSense's routing performance using e1000 interfaces with iperf it would be more helpful:

    client -> NIC1 -> vmbr0 -> e1000 -> pfsense -> e1000 -> vmbr1 -> virtio or physical NIC2 -> client 2

    My guess is that you too will get very bad performance, far below gigabit speeds, with very high CPU usage.



  • I gave it 1 CPU-core (Xeon E3 1220 3.1GHz) and 512MB RAM but it actually idled at around 50-60% CPU usage on the KVM host

    This happened to me with an HP ProLiant ML110 G5 with an Intel Pentium Dual-Core CPU E2160 @ 1.80 GHz, which has no virtualization capabilities (Intel Virtualization Technology, VT-x):
    http://ark.intel.com/products/29739/Intel-Pentium-Processor-E2160-(1M-Cache-1_80-GHz-800-MHz-FSB)
    This can also happen with VT-x disabled in the server BIOS. KVM then runs in basic (emulation) mode, causing random pfSense hangs and 100% usage at idle.
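    A generic way to check for this on a Linux KVM host (a general technique, not something from this thread) is to look for the vmx/svm CPU flags and the vendor KVM module:

```shell
# vmx = Intel VT-x, svm = AMD-V. No flag means the CPU lacks hardware
# virtualization or the BIOS has it disabled, and KVM falls back to
# emulation - matching the 100%-at-idle symptom described above.
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization flags present"
else
    echo "no vmx/svm flags (missing or BIOS-disabled?)"
fi

# kvm_intel / kvm_amd refusing to load is another BIOS-disabled hint.
lsmod | grep -E 'kvm_(intel|amd)' || echo "kvm vendor module not loaded"
```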



  • @TooMeeK:

    I gave it 1 CPU-core (Xeon E3 1220 3.1GHz) and 512MB RAM but it actually idled at around 50-60% CPU usage on the KVM host

    This happened to me with an HP ProLiant ML110 G5 with an Intel Pentium Dual-Core CPU E2160 @ 1.80 GHz, which has no virtualization capabilities (Intel Virtualization Technology, VT-x):
    http://ark.intel.com/products/29739/Intel-Pentium-Processor-E2160-(1M-Cache-1_80-GHz-800-MHz-FSB)
    This can also happen with VT-x disabled in the server BIOS. KVM then runs in basic (emulation) mode, causing random pfSense hangs and 100% usage at idle.

    Both VT-x and VT-d are enabled and working for sure, which the performance of all the other Linux and Windows based VMs with virtio of course confirms.

    The high CPU usage is the result of about 5 Mbit/s and a couple of hundred connections. When there is absolutely no traffic at all, the idle usage is 1-5%.
    As I said, I get about 450 Mbit/s max performance from pfSense (non-routed; less than half that when routed), which is about as good as it gets with pfSense on KVM without better support in FreeBSD.

    Until someone can show me hard iperf numbers that greatly surpass 450 Mbit/s on KVM for pfSense, I really don't believe the problem is my KVM setup rather than pfSense - at least as long as the Linux pfSense-replacement VM-to-VM performance is 19.6 Gbit/s, routed performance is 9.6 Gbit/s, and physical-host-to-Linux-VM is 36 Gbit/s with iperf on the same KVM host.

    Basically 1/40 of the network performance with pfSense vs. Linux on KVM.

