VM processor selection
-
ESXi isn't nearly as picky as some people suggest. It supports a lot of standard hardware; I've run it on many OEM desktops, white boxes, OEM "enterprise" servers, etc. Also realize that there are two meanings of "supported": supported hardware and a supported configuration. Supported hardware is detected and runs fine; a supported configuration is a known-good matrix of hardware. A supported configuration is generally what's required for direct support/help from VMware; it doesn't mean anything else won't work fine.
VT is only required if you want to run 64-bit guest OSes; almost any P4 or later processor will run x86 VMs just fine with ESXi (assuming the rest of the hardware is supported). Core 2s and above can generally run x64 VMs with their VT support. There may be support issues for some types of RAID cards: most cheap and/or onboard RAID-ish controllers will show up in ESXi as individual drives, not as a RAID array. Very new hardware may not be fully supported yet, but mostly you'd run into drive controller issues, or maybe some onboard NICs; other hardware you probably don't care about anyway.
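A quick way to check for that before installing anything, assuming you can boot a Linux live CD on the candidate box (the echoed messages are just my own wording):

```shell
# The CPU flag 'vmx' (Intel VT-x) or 'svm' (AMD-V) must be present
# in /proc/cpuinfo to run 64-bit guests under ESXi.
if grep -qEm1 '\b(vmx|svm)\b' /proc/cpuinfo; then
    echo "VT present: 64-bit guests should work"
else
    echo "No VT flag: 32-bit guests only"
fi
```

Remember that some BIOSes ship with VT disabled, so check the BIOS setup too if the flag doesn't show up.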
Intel cards are 99%+ supported; I've seen very few, if any, Intel cards that aren't supported directly.
Your idea of simply "migrating" your 3TB drive over to the host is fine. The main issue with doing something like that is the lack of portability (you can't vMotion), but you won't care about that. It might be a little scary, though: be careful not to simply attach it as storage, otherwise VMware will format it as a new, blank datastore.
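If you do go that route, one way to sketch it is a Raw Device Mapping, which hands the whole disk to a guest without ESXi ever formatting it. The disk ID and datastore paths below are placeholders, not real values; substitute your own:

```shell
# From the ESXi shell: identify the 3TB disk's device ID first.
ls /vmfs/devices/disks/
# Create a physical-mode RDM pointer file on an existing datastore
# (-z = physical compatibility; use -r for virtual compatibility).
vmkfstools -z /vmfs/devices/disks/naa.EXAMPLEDISKID \
           /vmfs/volumes/datastore1/fileserver/3tb-rdm.vmdk
# Then attach 3tb-rdm.vmdk to the VM as an existing disk; the data
# on the drive passes through untouched.
```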
If you're really worried about compatibility, just finding an older HP DC7700 or some other major OEM machine (like Dell) with a Core2Duo or higher should work for you. The onboard NIC on a DC7700 is gigabit and supported by VMWare. Adding a dual port PCI Express Intel NIC will give you 3 good Gigabit ports. (I think the DC7700 has a x16 and an x1 slot, plus a couple standard PCI slots.)
To put it in perspective, I run ESXi on a DC7700 w/ 8GB of ram and an Intel dual port PCI-E card. I used to also run a pair of Dell PowerEdge 2850 servers running ESXi, each with a quad port Intel PCI-X network card; one as an ESXi host and the other as a FreeNAS iSCSI host. I eventually migrated everything to just the DC7700 and turned off the 2850s as they were taking up way too much power. The 7700 runs 2x 2008 domain controllers, vCenter on 2003, plus the random test machine(s) running XP. It generally hovers around 4GB of used RAM, 4GB free.
-
Excellent. Thank you for the input! I have been searching around and found quite a few people who have used standard consumer components with ESXi and had no problems even though they aren't on the "supported" list. I may still end up going the ESXi route after a little more research. I found that people have recommended the Biostar TH67+ board as a good low priced ESXi component and I may end up going with that. I would like vt-d support but finding a motherboard that supports it is difficult and they are usually much more expensive. When the vt-d enabled CPU is factored in it adds quite a bit to the build. I may split the difference and get a vt-d motherboard and the g530 chip so I can upgrade later if I find I want the extra features.
-
@KM:
Excellent. Thank you for the input! I have been searching around and found quite a few people who have used standard consumer components with ESXi and had no problems even though they aren't on the "supported" list. I may still end up going the ESXi route after a little more research. I found that people have recommended the Biostar TH67+ board as a good low priced ESXi component and I may end up going with that. I would like vt-d support but finding a motherboard that supports it is difficult and they are usually much more expensive. When the vt-d enabled CPU is factored in it adds quite a bit to the build. I may split the difference and get a vt-d motherboard and the g530 chip so I can upgrade later if I find I want the extra features.
Just to be clear, don't confuse VT-d with standard VT-x. I'll get to VT-d in a minute, but you need VT-x if you want to run any x64 VMs. (If you're building a new ESX host, get something with at least VT-x; it may be listed as just "VT".)
VT-d is an extension of VT-x. The only thing you'll be missing without VT-d is direct hardware passthrough to the VM: if you have a specialized piece of hardware that VMware doesn't natively support, VT-d would let you pass that device directly to the VM. It's fairly unlikely that you'd do that; it's for things like video capture cards or other specialized interface cards, which you're probably not using anyway. Or maybe, for some reason, you -really- want some other piece of hardware, like a network card or RAID card or what-have-you, passed directly through to the VM's guest OS. There are reasons to do that, mind you, but most people wouldn't, and I can't think of many situations where I've thought: "Damn, I wish I could just present that hardware directly to the VM."
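For illustration, a rough sketch of what that passthrough (DirectPath I/O) setup involves; the PCI address shown is a made-up example:

```shell
# From the ESXi shell, identify the device you'd pass through:
lspci | grep -i ethernet        # e.g. 000:002:00.0 Intel NIC
# The passthrough toggle itself lives in the vSphere Client under
# Configuration > Advanced Settings > Configure Passthrough; after
# a host reboot, the device can be added to a VM as a PCI device.
```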
On the other hand, no version of VT is required to pass a USB device through to a VM; that just works (assuming the device itself cooperates, and some don't).
-
I guess it comes down to how much you can convince your wife you need to spend but I'd second the HP N40L that johnpoz mentioned - small, relatively quiet, low power and four disk slots. Quite popular with the whitebox crowd.
If you haven't found this one already, you should have a look at: http://www.vm-help.com/
-
The G530 does support VT-x, but so does pretty much any new processor. The reason I was thinking about VT-d for a later implementation was so that I could directly pass a NIC to the router VM for security reasons, as there will also be a file server VM on the machine. If I'm wrong on this point, I'd be glad to hear it. It also leaves me room to play with different stuff, but as I said, I don't think I can justify the expense at this point.
Thanks again for the input everyone.
-
@KM:
The G530 does support VT-x, but so does pretty much any new processor. The reason I was thinking about VT-d for a later implementation was so that I could directly pass a NIC to the router VM for security reasons, as there will also be a file server VM on the machine. If I'm wrong on this point, I'd be glad to hear it. It also leaves me room to play with different stuff, but as I said, I don't think I can justify the expense at this point.
Thanks again for the input everyone.
In theory, I guess it might be more secure, but I have yet to hear of any actually exploitable security holes when the back-end VMware management isn't exposed to the front-end Internet. As long as the WAN is on a dedicated port, even if it's routed through VMware, there's no evidence to indicate that it's not plenty secure. Even if it's on a shared port with VLANs configured on the switch(es), that's still plenty secure; enough so that doing the same thing via hardware passthrough and having pfSense tag the VLANs shouldn't buy you any extra layer of security. (Not that it sounds like you intend to do anything like that.)
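As a sketch of that VLAN approach (ESXi 5-era esxcli; the portgroup name, vSwitch, and VLAN ID below are made-up examples):

```shell
# Put the WAN on its own portgroup, tagged with VLAN 100 at the
# vSwitch, so pfSense itself never has to tag anything.
esxcli network vswitch standard portgroup add -p WAN -v vSwitch0
esxcli network vswitch standard portgroup set -p WAN --vlan-id 100
# pfSense's WAN vNIC then attaches to the "WAN" portgroup, and the
# physical switch port carries VLAN 100 as a tagged member.
```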
I've heard people theorize about security holes in VMware, but I have yet to see an actually exploitable example on a properly configured ESXi host.
If you're looking for an excuse to justify the cost, the theory might be sufficient, but I wouldn't buy it. If you had some other use, like needing a guest OS to directly control a RAID card for some reason, that would seem more legit to me.
-
Matguy this is some great information you're providing, thanks! :)
Having never tried it I'm pretty vague about VMware. I would assume that if you had a CPU that supported VT-d you may be able to take advantage of the various hardware offloading features on a NIC to a greater extent?
The only time I have used VMware was on the previously mentioned HP MicroServer, and it was super slick. ;) However, the CPU in that is quite low powered, nowhere near the processing capability of the G530.
Steve
-
… if you had a CPU that supported VT-d ...
Pretty sure VT-d requires BIOS (i.e., motherboard) support as well as CPU support.
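One rough way to check that from a Linux live environment; the kernel only logs these lines if the BIOS actually publishes the ACPI DMAR table that VT-d needs (the echoed messages are my own wording):

```shell
# CPU support alone isn't enough; the board/BIOS must expose
# the DMAR (IOMMU) tables for VT-d to be usable.
if dmesg | grep -qiE 'dmar|iommu'; then
    echo "DMAR/IOMMU messages found: VT-d may be usable"
else
    echo "No DMAR messages: VT-d disabled or unsupported by this board"
fi
```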
-
Awesome, thanks for the info matguy. I think I have settled on a final build and will probably order the parts today or tomorrow.
I still haven't figured out the hard drive configuration but I can throw something together fairly easily I think.
Intel DQ67SW motherboard, which supports VT-d
G530 chip (no VT-d support)
8GB RAM (easily upgraded)

Going this route allows me to upgrade my CPU for more power and VT-d support if I find I want it at a later date, and it only adds about $60 to the build. I'll let you know how it goes!
-
Matguy this is some great information you're providing, thanks! :)
Having never tried it I'm pretty vague about VMware. I would assume that if you had a CPU that supported VT-d you may be able to take advantage of the various hardware offloading features on a NIC to a greater extent?
The only time I have used VMware was on the previously mentioned HP MicroServer, and it was super slick. ;) However, the CPU in that is quite low powered, nowhere near the processing capability of the G530.
Steve
Hey, no problem, ESX and ESXi are kind of my baby…
Standard hardware offloading (TOE) is supported by ESXi for NICs on the HCL. Now, if you're talking about specialized hardware, like, say, the Bigfoot Killer NICs, to get full use out of 'em you'd probably have to use VT-d; I didn't see 'em on the HCL, but I think there are drivers out for 'em.