pfSense and KVM



  • I'm having a bit of trouble working out the proper setup for running a pfSense VM on a KVM host, specifically how to configure the LAN interfaces. The system I'm running on currently has 2 onboard Ethernet ports, though I'm planning to add another 2 ports in the future to handle some planned expansion. The network setup that I currently have envisioned for this machine is: Internet <- Modem <- WAN interface <- pfSense VM <- LAN interface <- other misc VMs and systems on the LAN.

    Eventually I plan to move to a dual-WAN setup: Internet 1 <- Modem 1 <- WAN interface 1 <- pfSense VM <- LAN interface <- certain VMs and systems, plus Internet 2 <- WAN interface 2 <- LAN interface <- the other systems, with the separation achieved through VLANs or equivalent rules in pfSense. That leaves a spare link for either a second LAN interface, if my first idea doesn't work out, or a direct line for VoIP if it does.

    I realize that I could bypass this problem by having the modem itself handle DHCP for the WAN interface, but I plan to keep the modem running in bridge mode due to the sheer lack of configuration allowed on the ISP-supplied router/modem. Any ideas on how I could pull this off would be appreciated.



  • I just set up something very similar. I used Fedora 16 because it appears to have the most recent KVM support.

    Instead of double-posting my reply to another thread, here is a link:

    http://forum.pfsense.org/index.php/topic,34911.msg235031.html#msg235031



  • Hmm, that does make sense: bypassing the OS/virtualization layers by using IOMMU will make the WAN NIC appear as if it were running on bare hardware, which would then allow me to set things up within pfSense as I always have.

    Am I correct in assuming that I would not need to do any special configuration of the LAN interface within the host OS other than setting it up in bridged mode? If I understand the mechanics behind this correctly, then the pfSense VM should handle the routing between interfaces on its own once it's fully set up, so I wouldn't need to account for that in Fedora/CentOS itself.

    I foresee potential configuration glitches when I upgrade my setup to dual WAN, but they shouldn't be an insurmountable issue, and it's nothing I need to worry about until I get to that point anyway.
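    To make the LAN side concrete: as far as I can tell, the host-side piece really is just an ordinary bridge. A minimal sketch with the ip tool (the interface names here are assumptions, and this needs root):

```shell
# Create a bridge br0 and enslave the physical LAN NIC to it.
# eth1 is a placeholder for the actual LAN interface name.
ip link add name br0 type bridge
ip link set eth1 master br0
ip link set eth1 up
ip link set br0 up
```

    The pfSense VM then attaches a virtual NIC to br0, and routing between WAN and LAN happens entirely inside pfSense.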



  • @valunthar:

    Hmm, that does make sense: bypassing the OS/virtualization layers by using IOMMU will make the WAN NIC appear as if it were running on bare hardware, which would then allow me to set things up within pfSense as I always have.

    Am I correct in assuming that I would not need to do any special configuration of the LAN interface within the host OS other than setting it up in bridged mode? If I understand the mechanics behind this correctly, then the pfSense VM should handle the routing between interfaces on its own once it's fully set up, so I wouldn't need to account for that in Fedora/CentOS itself.

    You're not making it appear as if it's bare hardware. It IS bare hardware, which only the specified VM can use. That's the beauty of VT-d/IOMMU! The hypervisor and the other VMs can no longer see or use this NIC.
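    Before attempting the passthrough, it's worth confirming that VT-d/IOMMU is actually enabled on the host. These checks are a sketch assuming a Linux host with the appropriate kernel parameter (e.g. intel_iommu=on) already set:

```shell
# Kernel messages: Intel VT-d logs "DMAR" entries, AMD-Vi logs "AMD-Vi"
dmesg | grep -iE 'dmar|iommu|amd-vi'

# If IOMMU groups exist, passthrough is possible; the NIC at 03:00.0
# should show up under one of these groups
find /sys/kernel/iommu_groups/ -type l
```

    If the second command prints nothing, the IOMMU is not active and device passthrough will fail.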

    So the way I set mine up was to tell Fedora to give the VM two NICs,

    one of them a physical device:
    --host-device=03:00.0
    (no need to specify it as --network, since pfSense gets the whole NIC and knows what to do with it; nothing else, not even the hypervisor, can use this NIC)

    and one of them a bridged NIC:
    --network bridge=br0,model=e1000

    I didn't have to do anything special in pfSense. I just installed it, and it saw two NICs.
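    Put together, the full invocation looks roughly like this. This is only a sketch: the domain name, RAM size, disk path, and ISO path are placeholders, and the PCI address must match your own lspci output:

```shell
# pfSense guest with one passed-through physical NIC (WAN) and one
# bridged virtual NIC (LAN). All names/paths are placeholders.
virt-install \
  --name pfsense \
  --ram 1024 \
  --disk path=/var/lib/libvirt/images/pfsense.img,size=8 \
  --cdrom /path/to/pfSense.iso \
  --host-device=03:00.0 \
  --network bridge=br0,model=e1000
```

    After the install, pfSense sees the passed-through NIC and the e1000 bridged NIC as two ordinary interfaces and you assign WAN/LAN in its console as usual.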



  • Happy pfSense on KVM user here!

    I'm running pfSense on different boxes, all as KVM guests. Examples:

    • Dell PowerEdge R210 II (second generation), 2 x Gigabit onboard. Bridged one as WAN, second as LAN.
    • Dell PowerEdge R310, 2 x Gigabit onboard. Same as above.
    • HP ML110 G6. 2 x Gigabit, same as above.
    • Usual desktop with a sh**ty ASRock board, 1 x Gigabit onboard for the WAN bridge and 2 x Gigabit LX/SX PCI-X cards bonded in round-robin mode for the LAN bridge (high availability). All TCP offload engines enabled.
      In my case 200 Mbit on virtualised pfSense is more than enough, but there is no problem assigning a physical NIC to the VM to get more.

    My home config is as follows:
    lspci | grep Eth

    00:07.0 Bridge: nVidia Corporation MCP61 Ethernet (rev a2)
    01:08.0 Ethernet controller: Intel Corporation 82543GC Gigabit Ethernet Controller (Fiber) (rev 02)
    01:0a.0 Ethernet controller: Intel Corporation 82545GM Gigabit Ethernet Controller (rev 04)
    

    cat /proc/net/bonding/bond0

    Ethernet Channel Bonding Driver: v3.7.0 (June 2, 2010)
    
    Bonding Mode: load balancing (round-robin)
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 200
    Down Delay (ms): 200
    
    Slave Interface: eth1
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 1
    Permanent HW addr: 55:44:33:22:11:00
    Slave queue ID: 0
    
    Slave Interface: eth2
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 1
    Permanent HW addr: 00:11:22:33:44:55
    Slave queue ID: 0
    
    

    cat /etc/network/interfaces

    auto lo
    iface lo inet loopback
    
    # The bonded network interface
    auto bond0
    iface bond0 inet manual
        bond-slaves none
        bond-mode   balance-rr
        bond-miimon 100
        #bond_lacp_rate fast
        #bond_ad_select 0
        arp_interval 80
        up /sbin/ifenslave bond0 eth1 eth2
        down /sbin/ifenslave bond0 -d eth1 eth2
    
    # Enslave all the physical interfaces
    #Card #1 Intel PRO/1000 MF Server Adapter - FIBER
    auto eth1
    iface eth1 inet manual
        bond-master bond0
    
    #Card #2 Intel PRO/1000 F Server Adapter - FIBER
    auto eth2
    iface eth2 inet manual
        bond-master bond0
    
    # Bridge LAN to virtual network KVM
    auto br0
    iface br0 inet static
        address 192.168.0.1
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.254
        dns-nameservers 192.168.0.254 8.8.8.8
        bridge-ports  bond0
        bridge-fd     9
        bridge-hello  2
        bridge-maxage 12
        bridge-stp    off
    
    #Card #3 - modem
    auto eth0
    iface eth0 inet manual
    
    #LAN bridge for virtual network KVM - modem
    auto br1
    iface br1 inet manual
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
            metric 1
    

    And offload options for these NICs:
    /etc/rc.local

    echo "Enabling TSO on Fiber Intel PRO/1000 cards..."
    ethtool -K eth1 rx on tx on sg on gso on gro on
    ethtool -K eth2 rx on tx on sg on gso on gro on tso on
    

    Running on Debian stable, 64-bit. Hope this helps with setting up networking for KVM.
    P.S. May I ask: has anyone tried passing through 2 NICs with IOMMU and then bridging them inside the VM?



  • @TooMeeK:

    Happy pfSense on KVM user here!

    I'm running pfSense on different boxes, all as KVM guests,

    Good to hear. I'm really happy with KVM. Performance is great and best of all, I don't need Windows or a Windows client to manage my VMs.



  • Just an update.
    I'm successfully running pfSense in KVM on an HP ProLiant ML110 G5, even on a CPU without support for Intel® Virtualization Technology (VT-x) and with only 1 GB RAM, seriously.
    It has an Intel Pentium E2160 http://ark.intel.com/products/29739/Intel-Pentium-Processor-E2160-(1M-Cache-1_80-GHz-800-MHz-FSB) processor. I had some problems even starting the BSD VM on it, however..
    .. changing the emulator from:

    <emulator>/usr/bin/kvm</emulator>
    

    to:

    <emulator>/usr/bin/qemu</emulator>
    

    did the job.
    The performance impact is noticeable, but it can still shape a 10 Mbit link.
    With the KVM emulator there were kernel panics on VM start and a 100% CPU usage problem; with QEMU, no problems.
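    For reference, that emulator line lives in the libvirt domain XML, so the swap can be made without touching any files by hand. A sketch, assuming the domain is named pfsense:

```shell
# Open the domain XML in $EDITOR and change the <emulator> element
# from /usr/bin/kvm to /usr/bin/qemu. "pfsense" is a placeholder name.
virsh edit pfsense

# Restart the guest to pick up the change
# (destroy is a hard power-off of the VM, not a deletion)
virsh destroy pfsense
virsh start pfsense
```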
    The router uptime is:


