How to use non-legacy drivers (Xen XCP + OpenXenManager + 2.0 RELEASE amd64)

  • Everything sort of works… I've been researching OpenXenManager and Xen XCP, and it seems like a stripped/gutted version of the Citrix products. I'm having license-expiration issues as well as hardware/dynamic resource allocation issues, so I'm not so sure how "free & open source" Xen's XCP is vs. Citrix XenServer + XenCenter (console), since the main practical difference seems to be that Xen XCP + OpenXenManager is pretty darn buggy.

    I can install W7, Server 2k8, etc., and as soon as I install the XenServer Tools (still unsure why it's pushing Citrix's XenServer Tools, but that's another issue) the guest recognizes the proper drivers and the NICs connect at gigabit speeds and function properly. I'm still unable to assign GPUs and/or overbook resources, but again, that's not this issue.


    pfSense 2.0 amd64 RELEASE installs and boots fine, except that it's using legacy drivers, and the NICs (Intel Pro/1000 PT dual NIC) show a connection at 100Mbps full duplex rather than 1000 full duplex.

    Also (and again this may be my issue with Xen, not pfSense), it only lets me allocate one CPU to the VM, so I'm not sure whether Xen is combining resources into "one virtual CPU" or not. On my W7, Ubuntu, XP, and pfSense images the maximum number of CPUs I can see and/or allocate is 1, even though the drop-down menu goes up to 8.
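    If the one-CPU limit is stored in the VM record rather than imposed by the GUI, XCP's xe CLI can usually change it directly. A sketch only: the name label "pfSense", the UUID placeholder, and the count of 2 are assumptions, and the VM must be halted before changing the values.

    ```shell
    # Find the VM's UUID (name label "pfSense" is an assumption):
    xe vm-list name-label=pfSense

    # With the VM halted, raise the maximum first, then the startup count:
    xe vm-param-set uuid=<vm-uuid> VCPUs-max=2
    xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=2
    ```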

    I see this on the dashboard:

    CPU Type
    Intel(R) Xeon(R) CPU X5460 @ 3.16GHz

    but again, no current speed vs. max clock, so I can only assume powerd isn't running. HOWEVER, the dashboard says "CPU usage: 3-4%", whereas my VM management console shows the pfSense VM using 16-18% of 1 vCPU. I can allocate and change memory but cannot allocate or change the number of CPUs.

    At this point I'm less concerned with the RAM and CPU and disk, etc. and more concerned with getting the proper drivers vs. using the legacy drivers.

    About the stuff I'm running:

    2.0-RELEASE (amd64)
    built on Tue Sep 13 17:05:32 EDT 2011
    You are on the latest version.

    Dell Precision T7400 running 2x Intel(R) Xeon(R) CPU X5460 @ 3.16GHz (two quad-core CPUs), 32GB of DDR2 ECC RAM (8x4GB), using Dell's integrated SAS controller with 4x 500GB HDDs in a simple stripe. NICs: 1x onboard Broadcom (Dell mobo) used for the Xen management console only, 1x Intel Pro/1000 PT dual NIC, and 1x Intel Pro/1000 PT quad NIC.

    Software/VM OS:
    Xen XCP 1.1.0-50674c
    Xen/software recognizes the Broadcom NIC as eth0 and the Intel Pro/1000 PT NICs as em0, em1, etc., whereas pfSense on Xen sees them as re0, re1, etc.

    When I install a Windows or Ubuntu VM and install the XenServer Tools, the Windows/Linux OSes recognize the hardware as 'virtual devices', but it functions properly.

    Is there a way to install Xen Server Tools for BSD? I have not found anything yet.


  • Check your domU config file and maybe add something like this:

    vif = [    'type=ioemu,mac=xx:xx:xx:xx:xx:xx,bridge=eth0,model=e1000' ]

    I am using Debian Squeeze and XEN 4.0.1 .
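    For context, here is where that vif line sits in a minimal HVM domU config file (a sketch only: everything other than the vif line, including the hvmloader path, name, memory size, and disk path, is an illustrative assumption you would replace with your own values):

    ```python
    # Minimal xm/xl HVM domU config, Xen 4.0-era syntax.
    kernel  = "/usr/lib/xen-4.0/boot/hvmloader"
    builder = "hvm"
    name    = "pfsense"
    memory  = 1024

    # Emulated Intel e1000 NIC instead of the default Realtek rtl8139,
    # so pfSense attaches the em(4) driver rather than legacy re(4):
    vif = [ 'type=ioemu,mac=xx:xx:xx:xx:xx:xx,bridge=eth0,model=e1000' ]

    disk = [ 'phy:/dev/vg1/pfsense,hda,w' ]
    boot = "c"
    ```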

  • I'm reading more and more (finding random things on Google) about people having driver issues and other issues with Xen 4.0.1 - but I don't know what the exact problem is (if any) or if there are fixes.

    My WAN is 100mbit so that's fine but I'd like my LANs and optionals to be gigabit - and I'd like my wireless to work.

  • If you just need your LAN (which I presume is gigabit), then that DomU setting would work. But if you want to give a NIC exclusively to pfSense, you would have to PCI pass it through. I have absolutely no problems running gigabit speeds with or without PCI pass-through. For PCI pass-through your system needs AMD's IOMMU or Intel's VT-d enabled in the BIOS, since you can only install pfSense properly in HVM mode. I have not experienced any problems in small environments of about 10-15 computers; pfSense 2.0 is running HAVP and Squid with traffic shaping and a few VPNs (PPTP and OpenVPN). I have not used wireless yet, as there was no requirement at those sites, but with PCI pass-through there should be no difference at all to configure in pfSense, as it would detect the card.
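    For the PCI pass-through route on XCP/XenServer-style hosts, the usual steps look roughly like this. A sketch only: the PCI address 0000:0b:00.0 and the VM UUID are placeholders, the VM must be halted, and the exact other-config key behavior can vary between releases.

    ```shell
    # Find the PCI bus address of the NIC you want to dedicate:
    lspci | grep -i ethernet

    # Attach that device to the (halted) VM; address and UUID are placeholders:
    xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:0b:00.0
    ```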

  • Another config parameter you can use is


    but make sure to remove type=ioemu from the vif line, and change your disk parameter to something like this:

    disk = [ 'phy:/dev/vg1/xpsp3,hda,w', 'phy:/dev/loop/0,hdc:cdrom,r' ]

    Hope this helps

  • I'm now having trouble with pretty much every virtualization product: on Xen I can't even figure out how to make it happen (in OpenXenManager), and on XenServer I only have a trial, so that's out.

    On VMware ESXi 5.0 Enterprise Plus I can see the hardware pass-through in the configuration, but every time I've selected it the system has frozen during boot and I've had to re-image it.

    This is a Dell dual-Xeon motherboard; it's not in a server chassis, but it's a Dell Precision T7400 Workstation, with the same exact chipsets that are included in all Dell servers of that vintage.

    I have two items in my BIOS related to Virtualization - VMM and Intel VT for Direct I/O. Both are turned ON.

    I'm at a bit of a loss because the ESXi 5.0 is a legitimately licensed (non-trial), fully paid-for copy, and VMware support wasn't much help. It's frustrating to be able to see pass-through but not utilize it.

    Also, VMware ESXi uses a driver that pfSense recognizes, so my only pass-through issues are really the Cisco AIR-PI21AG-A-K9 wireless card (passing it to pfSense, or even having ESXi recognize it) and video cards (not related).

    On this version (Enterprise Plus) I should have the ability to allocate and share a GPU, as well as pass the Cisco AIR-PI21AG-A-K9 card directly through to pfSense, but I can't.

    Thx for any help, thoughts, suggestions, etc.
