Best free virtualization platform to run pfSense under a Debian host?



  • Which one can you guys recommend?
    I used VMware Server in the past, but I see it's no longer supported.


  • Rebel Alliance Global Moderator

    Why not just move on to ESXi - it's free and supported, and 5.1 just came out.  Get rid of the whole host OS thing and just run ESXi on the bare metal, then run whatever VM OSes you need.

    I run ESXi on an HP N40L; I bumped up to 8GB RAM and added a 2nd NIC.  It runs pfSense, my NAS, and Ubuntu as my 24/7/365 shell box - and then many other Linux VMs for play and testing (CentOS, Mint, FreeBSD), along with Windows test VMs (2k8r2, Win7, Win8, etc.).



  • @johnpoz:

    Why not just move on to ESXi - it's free and supported, and 5.1 just came out.  Get rid of the whole host OS thing and just run ESXi on the bare metal, then run whatever VM OSes you need.

    I run ESXi on an HP N40L; I bumped up to 8GB RAM and added a 2nd NIC.  It runs pfSense, my NAS, and Ubuntu as my 24/7/365 shell box - and then many other Linux VMs for play and testing (CentOS, Mint, FreeBSD), along with Windows test VMs (2k8r2, Win7, Win8, etc.).

    I was thinking about mentioning this idea too, but johnpoz beat me to it while I was in a meeting.  Of course, it'll depend on your hardware, as you'll need at least 2 cores/CPUs and x86-64 support.  Those are fairly trivial requirements on fairly modern hardware, but still good to know before you try a clean install.

    Otherwise, I'm sure someone has some recommendations that directly answer your question (I don't.)



  • http://wiki.debian.org/Xen

    But I would prefer ESX(i) if your hardware is supported - less hassle, and paid support if needed.



  • @heper:

    http://wiki.debian.org/Xen

    But I would prefer ESX(i) if your hardware is supported - less hassle, and paid support if needed.

    I'm not a Xen guy, but I don't think pfSense can run paravirtualized under Xen (please correct me if I'm wrong), so I think you'll have a minimum hardware requirement of VT-x (or AMD-V).  The main minimum requirements for ESXi are dual core (or dual socket) and x86-64 support.

    So check your specs: VT-x but no x86-64 = Xen; x86-64 and multi-core but no VT-x = ESXi.  All of the above?  Personally, I'd go for ESXi, but that's just me. ;)
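    On a Debian host you can read those capabilities straight out of /proc/cpuinfo before deciding.  A minimal sketch of that decision matrix (the check_cpu helper and the sample flag strings are my own, not part of any hypervisor's tooling):

```shell
#!/bin/sh
# Sketch: map /proc/cpuinfo feature flags onto the matrix above.
# "vmx"/"svm" = Intel VT-x / AMD-V; "lm" (long mode) = x86-64.
check_cpu() {
    flags=" $1 "
    # has FLAG -> "yes"/"no", matching whole space-separated words
    has() { case "$flags" in *" $1 "*) echo yes ;; *) echo no ;; esac; }
    vt=$(has vmx)
    if [ "$vt" = no ]; then vt=$(has svm); fi
    x64=$(has lm)
    if   [ "$vt" = yes ] && [ "$x64" = yes ]; then echo "either (ESXi or Xen)"
    elif [ "$vt" = yes ];  then echo "Xen"
    elif [ "$x64" = yes ]; then echo "ESXi"
    else echo "too old"
    fi
}

# On a live box: check_cpu "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
check_cpu "fpu vme vmx lm"   # prints: either (ESXi or Xen)
```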

    If you have neither VT-x nor x86-64 support, your hardware may be generally too old to really do much for virtualization, and at that age, it probably doesn't support a whole lot of RAM, either.

    <beware, tangents="" ahead="">Well, I guess an old Dell 2650 would take 12GB and supports neither VT-x nor x86-64 support, but over a few months of operation you'd probably be better off buying new hardware for all the power that thing would suck down.  (Or my favorite: an old HP or Dell Core2Duo desktop.)  Oddballs and Atom's aside, in general, the odds of having a regular machine that does neither VT-x nor have x86-64 support but can take over 4GB of RAM are pretty slim (Actually, systems with the Atom's without VT-x and x86-64 likely won't take over 4GB either, nor would they likely have multiple NICs nor PCI/PCI-E slots.)  You're back in P4 Hyper-threading era for that, as the Pentium D added across the board x86-64 support.  I don't know of much of any mainstream socket 478 motherboards that supported more than 4GB of ram, until you get in to the server markets, and then you're more than likely in Xeon land, and all bets are off with server chipsets.  Even though a 32 bit x86 OS can't natively page more than 4GB of ram (including memory space occupied by components) some applications can utilize that ram, like SQL servers and such.  So I've seen plenty of wacky servers with x86 chipsets and a whole lot of ram, but not anything you'd want to run at home.

    Ok, there's a slim chance that someone has an early socket 775 machine with a P4 HT Prescott(2) or Cedar Mill that has Hyper-Threading but no x86-64 support, in a machine that supports 8GB of RAM.  There's a slight overlap in the compatibility matrix.  There are also some P4 HTs with x86-64 support in socket 775, but I don't think socket 478 has the pins for any x86-64 support.  It almost seems like a marketing idea to have socket 775 CPUs without x86-64, as they seem to be basically the same die.  But I digress (finally.)



  • I would go further and say that for acceptable network performance (based on your requirements), VT-d is also required with Xen.  But yes, the bare minimum is VT-x, because you can't run pfSense paravirtualised out of the box.
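    Both capabilities are easy to check on the Debian host before committing (hedged: the exact kernel log wording varies by version, and VT-d also has to be enabled in the BIOS before the kernel sees the ACPI DMAR table):

```shell
# VT-x / AMD-V shows up as a CPU feature flag:
grep -q -E 'vmx|svm' /proc/cpuinfo \
    && echo "VT-x/AMD-V: present" || echo "VT-x/AMD-V: absent"

# VT-d (IOMMU) only shows up once the kernel has parsed the DMAR table:
dmesg | grep -q -i -e 'DMAR' -e 'IOMMU' \
    && echo "VT-d/IOMMU: active" || echo "VT-d/IOMMU: not detected"
```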



  • I ran pfSense 1 and then 2 on a Dell 2950 ESXi 5 server for almost 2 years with no issues.  I used the E1000 and, most recently, the vmxnet NIC drivers.  The host's dual Xeon E5440s did not have VT, but pfSense ran great anyway.  It would be great to be able to do hardware pass-through for the NICs, but I do not think I would have noticed.

    Since then I have gone to a bare-metal pfSense install on an old Dell 1750, and it runs just the same.

    So VMware ESXi is a great platform for pfSense.  The pfSense VM rebooted a lot faster than the Dell 1750… I miss being able to sneak in a reboot during lunch hours O:



  • @photonman:

    I ran pfSense 1 and then 2 on a Dell 2950 ESXi 5 server for almost 2 years with no issues.  I used the E1000 and, most recently, the vmxnet NIC drivers.  The host's dual Xeon E5440s did not have VT, but pfSense ran great anyway.  It would be great to be able to do hardware pass-through for the NICs, but I do not think I would have noticed.

    Since then I have gone to a bare-metal pfSense install on an old Dell 1750, and it runs just the same.

    So VMware ESXi is a great platform for pfSense.  The pfSense VM rebooted a lot faster than the Dell 1750… I miss being able to sneak in a reboot during lunch hours O:

    A Dell PowerEdge 2950 with E5440s should support VT-x, but that only gives you x86-64 support for your VMs in ESX(i).  VT-d is required for direct hardware passthrough, which nothing that fits in a 2950 will give you (I think; please prove this wrong.)  I have run a few ESX(i) clusters on 2950s and they're great VM hosts, for both x86 and x86-64 VMs.  Now, though, the 32GB stock limit on RAM is a bit of a hindrance; some will support 64GB with a BIOS update, but it's somewhat hilariously expensive to buy that much high-capacity DDR2.  (I think it was like $600 or $700 for the RAM when I looked a few months ago, and it's a bit silly to spend that much on a 5-to-7-year-old server.)

    Exceptions aside, VT-x became available at the 2950 point.  2650s had 2 sockets for single-core, x86-capable Xeons.  2850s had 2 sockets for single-core, x86-64-capable Xeons, but no VT-x support (in ESX(i), that means no 64-bit VMs.)  2950s had 2 sockets for dual- or quad-core, x86-64-capable Xeons with VT-x support (64-bit VMs, yay!).

    http://ark.intel.com/products/33082/Intel-Xeon-Processor-E5440-12M-Cache-2_83-GHz-1333-MHz-FSB



  • Hello! I've been running Proxmox VE with pfSense for about a year and have had no problems. I'm running Proxmox VE on a Dell 2950 III - the one with eight 2.5" drives. I've tested this locally and get near line speed (1Gb/s); I just wish I actually had that at my house. I've never put pfSense in a Proxmox VE cluster, but I'd imagine it would work just like any other VM. I have 4GB RAM, 4 cores, and CPU units of 100,000. In Proxmox VE, CPU units sets the priority of CPU time for that particular VM, with a default of 1000. Most VMs I leave at 1000, but my PBX and pfSense are higher. I didn't need to go to 100K, but did so for future headroom if I need it.

    I have the two onboard NICs as WAN and LAN, and I have just added a dual-port NIC to the mix for OPT1 and OPT2, which is incredibly easy to do in Proxmox VE. I'm getting ready to deploy this setup at work and will have a pretty detailed write-up when I'm done, if you'd like me to share it with you.
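    For reference, the CLI side of a setup like the one described above is just a handful of qm commands on the Proxmox host (hedged sketch: VMID 100 and the vmbrN bridge names are placeholders, and option spellings may differ between Proxmox VE versions):

```shell
# Hedged sketch -- VMID 100 and the bridge names are placeholders.
qm set 100 --memory 4096            # 4GB RAM
qm set 100 --sockets 1 --cores 4    # 4 cores
qm set 100 --cpuunits 100000        # CPU scheduling weight (default 1000)
# WAN/LAN stay on the onboard NICs; OPT1/OPT2 go on the added dual-port
# card, with each guest NIC attached to a host bridge:
qm set 100 --net2 e1000,bridge=vmbr2   # OPT1
qm set 100 --net3 e1000,bridge=vmbr3   # OPT2
```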



  • Agreed, Proxmox is pretty good too… I like how Proxmox uses a web interface, so you don't have to have a Windows box like you do for ESXi (correct me if I'm wrong). Proxmox is fairly simple to learn and navigate, and it is based on Debian too. But I do recommend either Proxmox or ESXi.

