[HowTo] Install pfSense 2.1 RC0 amd64 on Xen 4.3 as PV HVM
-
I've been trying to make PVHVM (or just PV) work for a long time, and finally got through it with your procedure! I've done all the steps but 2 and 4, since I built the stable version, and everything but ALTQ works fine for now! I've noticed a slight improvement in latency between the LAN and the pfSense VM, and web browsing feels a bit snappier!
It's a shame though that ALTQ is not implemented in the xennet drivers as traffic shaping is one of the big features of pfSense. I am no expert, but I might do a bit of research just to see what could be done.
Thanks a lot!
-
Hi
I'm also trying to get a pfSense PV guest running on an Intel Atom CPU without VT-x/VT-d, so it needs to be pure PV.
Sabrewarrior, would it be possible to provide your configs, and maybe a small compressed image of your PV pfSense?
I've been trying for days without success so far...
best regards
ren22 -
You're welcome KarboN, good luck with the ALTQ. I haven't had much time to mess around with it yet; I might start working on it in a week or two.
Hello Ren22, there's a difference between PVHVM and PV. I haven't tried to make a pure i386 PV pfSense. I can upload an amd64 PVHVM iso of the 2.1 Release later today, though.
Also, if you are trying to make a PVHVM build, you should easily be able to with these instructions; that way you have more control over what you want to trim down as far as the kernel goes.
-
Thanks for posting the howto Sabrewarrior!
I'm still trying to get a paravirt kernel built and working, but as a fall back, PVHVM will still make a huge difference to cpu usage (I have NAS4free running PVHVM with my LSI card - the difference between emulated e1000 and xen network driver is huge).
-
Great, the howto worked for me, building 2.1 on FreeBSD 8.3 - many thanks!
The only gotcha I encountered was that the iso image didn't build; a quick look at the logs showed that I needed to install the cdrtools package.
cd /usr/ports/sysutils/cdrtools && make depends install
After that the iso image built just fine, and I was able to install from the iso image using a simple HVM Xen config, with the extra line "serial='pty'" so that I could use the Xen console to do the install.
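For anyone wanting to reproduce this, an install-time HVM config would look something like the sketch below. The names and paths are examples (only builder='hvm' and serial='pty' come from this thread), so adjust to your setup:

```
# HVM install config sketch - names and paths are examples
name    = 'pfsense-install'
builder = 'hvm'
memory  = 1024
vcpus   = 1
vif     = [ 'bridge=xenbr0' ]
disk    = [ 'phy:/dev/vg_ssd/pfsense21,hda,w',
            'file:/root/pfSense-2.1-amd64.iso,hdc:cdrom,r' ]
boot    = 'd'         # boot from the cdrom for the install
serial  = 'pty'       # text console via 'xl console pfsense-install'
```

After the install completes, switch boot back to 'c' and drop the cdrom line.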
In the console text during boot I saw this, confirming it's running in PVHVM mode:
xenbusb_front0: <Xen Frontend Devices> on xenstore0
xenbusb_add_device: Device device/suspend/event-channel ignored. State 6
xn0: <Virtual Network Interface> at device/vif/0 on xenbusb_front0
xenbusb_back0: <Xen Backend Devices> on xenstore0
xctrl0: <Xen Control Device> on xenstore0
GEOM: ad0s1: geometry does not match label (255h,63s != 16h,63s).
xn0: backend features: feature-sg feature-gso-tcp4
xbd0: 5120MB <Virtual Block Device> at device/vbd/51712 on xenbusb_front0
SMP: AP CPU #1 Launched!
Root mount waiting for: usbus0
I did notice that it's still got the re0 device (HVM-emulated Realtek) showing up alongside the xn0 one, so I'll need to disable that in rc.local.
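One way to do that (an untested sketch - pfSense also lets you simply leave re0 unassigned in the interface setup):

```
# /etc/rc.local - keep the emulated NIC down so only xn0 carries traffic
ifconfig re0 down
```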
I also just did PCI passthrough for an Intel 82574L Gigabit ethernet card (dedicated NIC is for WAN connection, on separate physical subnet - I'm using the Xen network interface xn0 for internal LAN), although I haven't done more than boot the VM with the PCI device at this stage (it let me configure it).
Now that I've got a working install, I'll have a play with making a paravirt domU, although in my experience paravirt performance is largely the same as PVHVM, with the only difference being the ability to pass through PCI devices without needing IOMMU/VT-d.
(FYI my dom0 is Centos 6.4 using standard Centos-supported xen-4.2 and kernel-3.4.61)
-
That's odd that it's showing up twice. I would go over the VM config file, or do a brctl show in the dom0 to see what's going on.
-
Yes - not sure why it's showing up twice.
Here's the output from bridge control:
]# brctl show
bridge name     bridge id           STP enabled     interfaces
xenbr0          8000.0025900d41b6   no              eth0
                                                    vif1.0
                                                    vif5.0
                                                    vif5.0-emu

The domU config file is pretty simple - nothing out of the ordinary:
name = 'pfsense21'
builder = 'hvm'
memory = 1024
vcpus=2
acpi=1
vif = [ 'bridge=xenbr0, mac=00:16:3e:14:01:97' ]
disk = [ 'phy:/dev/vg_ssd/pfsense21,xvda,w' ]
boot='c'
serial='pty'
pci = [ '0000:05:00.0' ]

An ifconfig on the domU shows both the xn0 and re0 interfaces (along with others, including the PCI-passthrough Intel em0 one):
re0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
    ether 00:16:3e:14:01:97
    media: Ethernet autoselect (100baseTX <full-duplex,flowcontrol,rxpause,txpause>)
    status: active
xn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=3<RXCSUM,TXCSUM>
    ether 00:16:3e:14:01:97
    inet 192.168.0.10 netmask 0xffffff00 broadcast 192.168.0.255
    inet6 fe80::216:3eff:fe14:197%xn0 prefixlen 64 scopeid 0x8
    nd6 options=1<PERFORMNUD>
    media: Ethernet manual
    status: active

Note in the above, both interfaces have the same MAC address. Interestingly, a check on the dom0 shows:
~]# xl network-list pfsense21
Idx BE Mac Addr.         handle state evt-ch tx-/rx-ring-ref BE-path
0   0  00:16:3e:14:01:97 0      4     7      769/770         /local/domain/0/backend/vif/5/0

I'd leave it alone, except that I have an issue with networking which seems to be affecting some network traffic via random high latency. First thing to try is rebuilding my PVHVM kernel with a clean build environment.
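Since both interfaces ended up with the same MAC, here is a quick way to spot duplicate MACs in saved ifconfig output. This is a generic sketch (the sample data below mirrors the output above; on the domU you would pipe ifconfig straight into the awk):

```shell
# Build a sample capture of ifconfig output (stand-in for the real thing)
cat > /tmp/ifconfig.txt <<'EOF'
re0: flags=8802 metric 0 mtu 1500
    ether 00:16:3e:14:01:97
xn0: flags=8843 metric 0 mtu 1500
    ether 00:16:3e:14:01:97
EOF
# Print "iface mac" pairs, then show every MAC that appears more than once
awk '/^[a-z]/ { iface = $1 } /ether/ { print iface, $2 }' /tmp/ifconfig.txt \
  | sort -k2 | uniq -D -f1
```

Any output at all means two interfaces share a hardware address, which is exactly the bad situation above.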
-
I also just switched the active LAN device to the re0 one.. it appears to work:
re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
    ether 00:16:3e:14:01:97
    inet 192.168.0.10 netmask 0xffffff00 broadcast 192.168.0.255
    inet6 fe80::216:3eff:fe14:197%re0 prefixlen 64 scopeid 0x1
    nd6 options=1<PERFORMNUD>
    media: Ethernet autoselect (100baseTX <full-duplex>)
    status: active
xn0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=503<RXCSUM,TXCSUM,TSO4,LRO>
    ether 00:16:3e:14:01:97
    media: Ethernet manual
    status: active

Definitely time to try rebuilding my kernel and see how it goes :)
-
> Hello Ren22, there's a difference between PVHVM and PV. I haven't tried to make a pure i386 PV pfSense. I can upload an amd64 PVHVM iso of the 2.1 Release later today, though.
Hi Sabrewarrior - any chance you could upload the PVHVM amd64 image? And ideally, the contents of your "conf/kernels" directory?
I built the i386 32-bit pfSense iso ok and it boots PVHVM, but as noted it has both the re0 and xn0 interfaces - I checked a vanilla XENHVM FreeBSD 8.3 kernel and it has the same symptoms, so it appears to be a bug in the 32-bit HVM code. I decided to have a go at a 64-bit kernel, seeing as you've had success with it.
I built a vanilla amd64 FreeBSD 8.3 kernel, and it works just fine without duplicate network interfaces, so I figured a pfSense kernel should be easy. However, I'm getting a Fatal trap 12 error booting the pfSense XENHVM kernel - which means there has to be an issue in the pfSense code somewhere, unless there's something I'm missing in the options/devices.
If you could post/upload your amd64 PVHVM iso for me to try, it'll at least give me an indication whether the issue is my build process, or whether it doesn't like my hardware combo (Supermicro X8SIL and Xeon X3450 cpu).
Any help appreciated - it's getting frustrating after about 8 iso builds in the last 24 hours!
-
Hello xlot,
Sorry for the delay. I am honestly at a loss to explain the two interfaces, since brctl is only showing one interface being shared (assuming vif1.0 belongs to a different domU). Also, both of your interfaces have the same MAC address, which is a very bad thing and might be the cause of your high latency.
As promised, I have uploaded the amd64 PVHVM image to my blog, which is linked in my signature (don't worry, no ads!). I have attached the config file I used here, though. My server is running AMD, not Intel. I have not tested this image, so let me know how it works. I will be working on a 32-bit PV image now.
-
Many thanks! The dual-interface problem appears to be something to do with the xenpci driver on the i386 platform.
As suspected, your kernel conf file is identical to one of the variations I've tried (note - the same outcome can be achieved by just replacing the include statement to pull in the "XENHVM" BSD kernel conf instead of "GENERIC", which saves adding the extra lines).
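For reference, the include-based variant would look roughly like this sketch (the file name and the trailing options are illustrative, not taken from anyone's actual conf):

```
# pfSense amd64 kernel conf - sketch only
include XENHVM            # stock FreeBSD 8.3 PVHVM kernel conf instead of GENERIC
ident   pfSense_XENHVM
# pfSense-specific options then follow as in the normal pfSense conf, e.g.:
options ALTQ
options ALTQ_HFSC
```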
I just downloaded your image and will have a try and report back here. Thanks for your help - using a known-good build will tell me whether the problem lies with my hardware combo, or possibly dom0 (I'm suddenly wondering as I type this whether I should try the old xm toolset using xend, instead of the newer xl/xenstored one).
-
Ok, your image gives the same error I've been getting on the 64-bit platform. The trace shows that it's the same problem as found here:
http://lists.freebsd.org/pipermail/freebsd-xen/2013-June/001613.html which in turn links to this post that includes a patch:
http://lists.freebsd.org/pipermail/freebsd-xen/2012-September/001359.html
Here's my trace. It's the same issue, and looks like it might be a problem with xenstored on my dom0 (xen-4.2.3 on kernel 3.4.61 - stock CentOS packages).
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0xffffff010294c2ef
fault code              = supervisor write data, page not present
instruction pointer     = 0x20:0xffffffff80a56a68
stack pointer           = 0x28:0xffffffff816b19f0
frame pointer           = 0x28:0xffffffff816b1a30
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = trace trap, interrupt enabled, resume, IOPL = 0
current process         = 0 (swapper)
-
I just posted to the Development section - patching the xenstore.c file with the fix found on the freebsd-xen mailing list has worked, and I've got a pfSense VM running using the Xen-native network and block device drivers (xn0 and xbd0).
Info here:
http://forum.pfsense.org/index.php/topic,69546.0.html
Thanks for your help sabrewarrior.
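For anyone following along who hasn't applied a mailing-list patch before, the mechanics look like this in miniature. The files below are toys; the real target is xenstore.c under the pfSense source tree (/usr/pfSensesrc/src in this thread):

```shell
# Toy demonstration of applying a unified-diff patch with patch(1)
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'line to fix\n' > xenstore.c
printf 'fixed line\n'  > xenstore.c.new
diff -u xenstore.c xenstore.c.new > fix.patch || true   # diff exits 1 when files differ
patch -p0 xenstore.c < fix.patch
cat xenstore.c
```

With the real sources you would save the mailing-list patch to a file and run patch from the top of the source tree, then rebuild the iso as before.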
Incidentally, it seems that there's a problem with my PCI passthrough of an Intel 82574L NIC - I was getting IRQ errors and the interface wasn't working, but that was fixed by forcing use of MSI interrupts rather than MSI-X, by adding these lines to loader.conf:
hw.pci.enable_msi=1
hw.pci.enable_msix=0
So finally, I've got it all working as wanted, and I can start to actually test the new setup.. woohoo!
-
Yeah, I am using Ubuntu with Xen 4.3, so that might be one of the patches that fixes your problem. I get IRQ errors with my vr interface; I wonder if it's a problem related to MSI interrupts.
-
Well, I never thought I could make pure PV work, but after a ridiculous amount of time spent, I have managed to build a working pfSense i386 PV iso. I am going to make another one with a bit more cleanup to make it straightforward, plus documentation, so I can post the how-to.
Issues so far:
SMP is broken: if you use more than 1 vcpu it will not work. The issue is with the FreeBSD Xen drivers again. The first function that gives an error can be found in src under sys/xen/interface/xen.h at line 80:
#define __HYPERVISOR_vcpu_op 24
It gives the following panic with vcpus=2:
SMP: Added CPU 0 (BSP)
SMP: Added CPU 1 (AP)
gdtpfn=83ec38 pdptpfn=89c17
panic: HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, &ctxt): /usr/pfSensesrc/src/sys/i386/xen/mp_machdep.c:930
cpuid = 0
KDB: enter: panic
-
I posted some info before in another thread here:
http://forum.pfsense.org/index.php/topic,67866.msg375470.html#msg375470
The problems with pure PV on FreeBSD are described in this post from the freebsd-xen list:
http://freebsd.1045724.n5.nabble.com/Paravirt-domU-and-PCI-Passthrough-td5858307.html
IIRC, there's also a memory limit for PV on i386 - various old posts on the freebsd-xen list mentioned somewhere around 700 megs.
I've been running pfSense as my gateway for a week for one small subnet, and working through reading docs. PVHVM is seeming workable (I need PCI passthrough - I prefer to run the pfSense VM with a separate physical NIC/subnet for the external interface).
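Putting those constraints together, a pure-PV i386 domU config would presumably look something like the sketch below. The kernel path, root device, and names are assumptions for illustration, not taken from any post here:

```
# i386 PV domU config sketch - kernel path and names are examples
name   = 'pfsense-pv'
kernel = '/var/lib/xen/pfsense/kernel.gz'       # assumed: PV kernel pulled out of the guest
extra  = 'vfs.root.mountfrom=ufs:/dev/xbd0s1a'  # assumed root device
memory = 512     # stays under the ~700MB i386 PV limit mentioned above
vcpus  = 1       # SMP currently panics with more than one vcpu
disk   = [ 'phy:/dev/vg_ssd/pfsense-pv,xvda,w' ]
vif    = [ 'bridge=xenbr0' ]
```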
-
Would you mind posting the contents of your pfsense.cfg file?