Netgate Discussion Forum

    Proxmox 3.1 (KVM) + pfsense 2.1 -> no boot, no VirtIO NIC possible

    • Nachtfalke

      Hi,

      I am running Proxmox VE 3.1 on a Fujitsu Siemens Primergy TX120 S6 server with an Intel Xeon E3-1220 v2 CPU.

      I have two VMs running. One is Windows Server 2008 R2, and I have KVM hardware virtualization enabled for it on Proxmox. This is working: I installed the VirtIO drivers for the HDD and NICs and everything worked.

      The second VM is running pfSense 2.1 AMD64. If I enable KVM hardware virtualization on Proxmox, pfSense stops booting after the boot menu. I tried disabling ACPI, single user mode, and so on; it does not boot. If I disable KVM hardware virtualization and change the CPU type from "host" to "Sandy Bridge" or "kvm64", it works and pfSense boots.

      So I left KVM disabled and followed the VirtIO tutorial in the pfSense docs. Unfortunately that only worked for the HDD and the ballooning driver. If I change the NIC to VirtIO I do not get a connection; with the emulated E1000 it works.
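
      For reference, the loader tunables I added to /boot/loader.conf.local per that tutorial were roughly the following (the doc also has you check "Disable hardware checksum offload" under System > Advanced > Networking, which as far as I understand matters for the VirtIO NICs):

          # /boot/loader.conf.local - VirtIO modules as described in the pfSense VirtIO doc
          virtio_load="YES"
          virtio_pci_load="YES"
          virtio_blk_load="YES"
          if_vtnet_load="YES"
          virtio_balloon_load="YES"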

      So to me it looks like I need to do some other configuration on pfSense 2.1 to make it work properly with KVM virtualization on Proxmox. I am not sure whether this is a Proxmox problem, because it works with a Windows guest.

      I would really appreciate any help and tips, because pfSense is very slow in my VM; when changing/editing configuration on pfSense it sometimes takes 1-2 minutes until the GUI responds again. Looking at System Activity while making changes, I can see that the PHP process consumes ~70-90% of one core.

      Unfortunately I have no way to install pfSense 2.1 on bare metal hardware, so I need to get it working as a VM with the best performance possible.

      Thank you for your help in advance!

      • StefanZ

        Hi Nachtfalke,

        try using the x86 version of pfSense. I'm virtualizing a pfSense instance via KVM on CentOS 6.5 x64 and also had no luck booting the x64 version.

        I'm also experiencing relatively high load for the VM even when it's doing nothing (~20-25% of one host CPU core).

        I previously ran a Vyatta instance which didn't have such a load. Might be a FreeBSD problem…

        • pftdm007

          Try setting the CPU type to qemu64 in Proxmox. I had a similar issue with pfSense 2.1 (64-bit) not getting past the kernel bootup; it was oopsing.

          For me, changing the CPU type to qemu64 did the trick.
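
          I set it through the Proxmox web GUI, but from the host shell it should be something like this (VM ID 101 is just an example):

              # set the guest CPU type from the Proxmox host (101 is an example VM ID)
              qm set 101 --cpu qemu64
              # equivalent to having "cpu: qemu64" in /etc/pve/qemu-server/101.conf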

          • StefanZ

            Unfortunately the setting does not affect the CPU load problem :(

            The upper process is from the pfSense VM (only idling, not even a routing configuration); the bottom one is from my IPFire VM, which I'm using now because of the load issue. I would gladly go back to pfSense, but the resource hunger of FreeBSD on KVM is blocking me atm…

              PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
            21782 qemu      20   0 1560m 539m 5204 S 20.0  3.4   1:34.03 qemu-kvm           
            21202 qemu      20   0 4203m 234m 3348 S  0.0  1.5 120:13.13 qemu-kvm
            
            • pftdm007

              Let me look at my VM config and see what's different from yours… I don't have such high host core load…

              What's your host CPU?

              • pftdm007

                Just reinstalled pfSense (fresh install from scratch) on my Proxmox VE node. This is what I did (a rough sketch of the resulting VM config is below):

                Deleted the virtual HDD
                Created a new one (.raw using IDE)
                Created my NICs (3 of them, one as e1000, the others as virtio)
                Set the guest CPU to qemu64
                Booted and installed pfSense 64-bit without a glitch (of course it wouldn't detect my 2 virtio NICs, but I modified the bootloader as per the virtio wiki and rebooted)
                Shut down the VM
                Changed the HDD type from IDE to virtio and changed NIC1 from e1000 to virtio
                Rebooted the VM; all is working fine
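
                Roughly what the VM config ends up as in /etc/pve/qemu-server/<vmid>.conf (a sketch from memory; the storage name, MACs and bridges are just examples, and the bootloader change is the same set of virtio loader lines from the wiki):

                    cpu: qemu64
                    cores: 2
                    virtio0: local:101/vm-101-disk-1.raw
                    net0: e1000=52:54:00:AA:BB:01,bridge=vmbr0
                    net1: virtio=52:54:00:AA:BB:02,bridge=vmbr1
                    net2: virtio=52:54:00:AA:BB:03,bridge=vmbr2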

                No CPU over-stress: with 2 cores and Snort running on two instances, I have an average CPU load of less than 10 percent. I couldn't test with other packages, as squid, havp and squidguard were too buggy to be used (are they even still maintained??). Same for ntop.

                Firewall is blazing fast and stable now.

                If you have the same config as I do (or close to it) and still have issues, then I suspect host CPU problems or other issues at the hypervisor level.

                • StefanZ

                  Okay, some details about my config:

                  System: Fujitsu Primergy TX140S1p
                  OS: CentOS 6.5 x64
                  CPU: Intel Xeon E3-1230 v2 (Ivy Bridge)

                  I tried to set up a 64-bit pfSense the same way, but when I chose qemu64 I got an error about my CPU not supporting 'svm', which is a virtualization feature of AMD CPUs. The equivalent feature for Intel is VT-x.
                  To my surprise, vmx was not among the listed flags in /proc/cpuinfo (the CPU does support it). I checked the BIOS and saw it listed as supported there too, but I couldn't find a configuration dialog to enable/disable it. Some investigation later I flashed back to the previous BIOS version, and now vmx shows up in /proc/cpuinfo again. I'll have to check whether this bug really came in with the latest BIOS or something else was amiss. For now I'm running the older BIOS version.
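
                  For anyone who wants to run the same check, the quick test I used to see whether VT-x/AMD-V is exposed to the OS was simply:

                      # prints 'vmx' (Intel) or 'svm' (AMD) if the flag is exposed; empty output means it is not
                      egrep -o 'vmx|svm' /proc/cpuinfo | sort -u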

                  Even after this, however, I could not start the VM with the qemu64 CPU model because of the same error. This seems to be a known issue: http://www.redhat.com/archives/libvir-list/2010-January/msg00053.html

                  I decided to do a pfSense 32-bit installation from scratch (without virtio; I had some problems with DNS when using virtio on the NICs), but I still have the high load issue.

                  Here's my VM configuration:

                  
                    <domain type="kvm">
                      <name>pfSense</name>
                      <uuid>a3106783-4d62-2344-ec01-011922e4339b</uuid>
                      <memory unit="KiB">1048576</memory>
                      <currentMemory unit="KiB">1048576</currentMemory>
                      <vcpu placement="static">2</vcpu>
                      <os>
                        <type arch="i686" machine="rhel6.5.0">hvm</type>
                      </os>
                      <features>
                        <acpi/>
                        <apic/>
                        <pae/>
                      </features>
                      <cpu mode="custom" match="exact">
                        <model fallback="allow">qemu32</model>
                      </cpu>
                      <clock offset="utc"/>
                      <on_poweroff>destroy</on_poweroff>
                      <on_reboot>restart</on_reboot>
                      <on_crash>restart</on_crash>
                      <devices>
                        <emulator>/usr/libexec/qemu-kvm</emulator>
                        <disk type="file" device="disk">
                          <driver name="qemu" type="raw" cache="none"/>
                          <source file="/srv/virtualization/pfSense.img"/>
                          <target dev="hda" bus="ide"/>
                          <address type="drive" controller="0" bus="0" target="0" unit="0"/>
                        </disk>
                        <disk type="block" device="cdrom">
                          <driver name="qemu" type="raw"/>
                          <target dev="hdc" bus="ide"/>
                          <readonly/>
                          <address type="drive" controller="0" bus="1" target="0" unit="0"/>
                        </disk>
                        <controller type="usb" index="0">
                          <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x2"/>
                        </controller>
                        <controller type="ide" index="0">
                          <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x1"/>
                        </controller>
                        <interface type="bridge">
                          <mac address="52:54:00:f6:d7:01"/>
                          <source bridge="br1"/>
                          <model type="e1000"/>
                          <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0"/>
                        </interface>
                        <interface type="bridge">
                          <mac address="52:54:00:00:32:99"/>
                          <source bridge="br99"/>
                          <model type="e1000"/>
                          <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0"/>
                        </interface>
                        <serial type="pty">
                          <target port="0"/>
                        </serial>
                        <console type="pty">
                          <target type="serial" port="0"/>
                        </console>
                        <graphics type="vnc" port="-1" autoport="yes"/>
                        <video>
                          <model type="cirrus" vram="9216" heads="1"/>
                          <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0"/>
                        </video>
                        <memballoon model="virtio">
                          <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
                        </memballoon>
                      </devices>
                    </domain>

                  I guess the versions from Proxmox and CentOS are not quite the same; the CentOS ones might be outdated. I think I'll have another try with either CentOS 7 or switch to Proxmox. Just need to get some hardware for the change … :)
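
                  For the version comparison I'll probably just check what CentOS 6.5 actually ships, e.g.:

                      # hypervisor component versions on the CentOS 6.5 host
                      rpm -q qemu-kvm libvirt
                      /usr/libexec/qemu-kvm -version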
                  