VM processor selection
-
Does the G530 have hardware support for virtualization? (Some time ago, some of the low-end CPUs didn't.)
-
Good point. Looks like it has some: VT-x but not VT-d, for example. Though it also looks like you need at least an i5 for VT-d.
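If you want to double-check before committing, and you can drop the chip into a Linux box for a minute, something like this is a quick sanity check (it only shows what the CPU advertises; the BIOS can still have it disabled):

  # Non-zero output means the CPU advertises Intel VT-x (vmx) or AMD-V (svm).
  egrep -c '(vmx|svm)' /proc/cpuinfo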
Steve
-
That is a good point. I will have to look into this. Does anyone have any experience running ESXi on a G530 or similar chip? It looks like buying a chip that supports VT-d will add another $150 to the build.
-
KM,
You don't say what motherboard and NICs you plan to use, but I hope you realize ESXi is very fussy about that stuff - it's not just the processor.
In terms of sizing: on a second-hand HP dc7900 with a Core 2 Duo E7600 @ 3 GHz and 4 GB of RAM, I manage to run:
- pfSense with 30/1 Mb/s cable and 16/1 Mb/s ADSL connections, OpenVPN, a postfix/postscreen relay, and pfBlocker, but no squid
- a Windows-based mail server
- a Linux-based web server
- another Windows machine that monitors the UPS
All Intel NICs.
The whole thing isn't worked hard at all, but I'm very happy with it as a home/lab setup.
Hope that helps
-
Thanks biggsy, that information is very helpful. I have a dual-port 1 Gbps Intel NIC that I will use for this build. I am still trying to figure out motherboard and processor selection because I am planning on running ESXi, although I see that the E7600 doesn't have VT-d support, which gives me some hope that the cost of the project won't get out of control. My only concern before I buy the components is the file server VM. I have a 3TB HDD that I want to migrate from my old file server, and I want to use it as-is, without formatting it and converting it to a VMDK. I'm still looking over the VMware forums to find the answer. I'll post back when I find something.
As an aside, does anyone know where I can get a low-profile adapter for the dual-port Intel NIC? I bought mine used and it didn't come with one.
-
So, after some digging I found that for my application I don't need VT-d support. ESXi can do RDM to physical drives without too much effort, although it is unsupported. However, my digging uncovered some other problems: ESXi hardware support is quite slim, and getting the right components will add significant cost to the overall build. I really like the networking configuration tools available in ESXi, but I'm not sure I can justify the added expense (to my wife) without first investigating other implementation options. I will be checking out KVM as a solution next.
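In case anyone else goes looking, the rough recipe I kept seeing was to create the RDM mapping file from the ESXi shell with vmkfstools. This is a sketch only - the datastore name, folder, and the naa.* device ID below are placeholders, not real values:

  # List the physical disks ESXi can see; note the long naa.* identifier
  # of the drive you want to hand to the VM.
  ls -l /vmfs/devices/disks/

  # Create a physical-compatibility RDM pointer file on an existing datastore.
  vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
      /vmfs/volumes/datastore1/fileserver/3tb-rdm.vmdk

  # Then attach 3tb-rdm.vmdk to the file server VM as an existing disk.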
Thanks for all the help!
-
This is a home/lab build
Yeah, you can do RDM without much effort; I had no problems with ESXi 5. I can tell you what I set up for my home lab. I'm not currently running Snort, but I OpenVPN into my home network pretty much every day.
I'm running multiple VMs on an HP N40L (Ultra Micro Tower Server System, AMD Turion II Neo N40L 1.5GHz 2C, 2GB (1 x 2GB), 1 x 250GB LFF SATA) – a very cheap little lab box. Bump it up to 8GB ($40), add another NIC ($35) and a drive for storage if you need it, and it's ready to rock. I got mine on sale a while back for $269 with free shipping; now it's back to $349, but that's still a good price.
http://www.newegg.com/Product/Product.aspx?Item=N82E16859107052
It's a little beast: very small footprint, small power usage. With 4 drives in mine it idles at around 55 watts.
pfSense is running on there, and I don't have any issues maxing out my download pipe, which is like 25 Mbps, and I use it to serve up all the media in my house as well. I'm running a copy of 2011 Server Essentials (the bigger brother of WHS) with the DrivePool add-on and RDM to the drives. I get like 80+ MB/s read/write to the drive pool from my physical box.
Not sure what your budget is -- but so far I have been more than pleased with my very cheap little everything box. It runs my Linux VM 24/7 so I always have access to my tools, and it runs my storage VM 24/7; I use the web interface to uTorrent to have it grab the latest Linux distros I need ;) pfSense running is a given, and then there are a few other test VMs that are not always on: W7, CentOS, 2k8r2, etc.
-
ESXi isn't nearly as picky as some people seem to suggest. It supports a lot of standard hardware; I've run it on many OEM desktops, white boxes, OEM "Enterprise" servers, etc. Also realize that there are two versions of "supported": Supported Hardware and Supported Configuration. Supported Hardware detects and runs fine; a Supported Configuration is a known-good matrix of hardware. A Supported Configuration is generally required for direct support/help from VMware, but its absence doesn't necessarily mean things won't work fine.
VT is only required if you want to run x64 OSes; almost any P4+ processor is going to run x86 VMs just fine with ESXi (assuming the rest of the hardware is supported). Core 2s and above can generally run x64 VMs with their VT. There may be support issues for some types of RAID cards: most cheap and/or onboard RAID-ish controllers may show up in ESXi as individual drives, not as a RAID array. Super-new hardware may not be fully supported yet, but mostly you'd run into drive controller issues, or maybe some onboard NICs; other hardware you probably don't care about anyway.
As for Intel cards, 99%+ are supported; there are very few, if any, Intel cards that I've seen that aren't supported directly.
Your idea of simply "migrating" your 3TB drive over to the host is fine; the main issue with doing something like that is the lack of portability (you can't vMotion), but you won't care about that. It might be a little scary, though: be careful not to simply attach it as storage, or VMware will format it as a new, blank datastore.
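If you want to be extra careful, double-check the device ID and size from the ESXi shell before you touch anything, and never pick that ID in the "add storage" wizard (a sketch, assuming ESXi 5):

  # List every disk the host sees, with its identifier, size and model;
  # match the 3TB drive's ID here before creating the RDM.
  esxcli storage core device list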
If you're really worried about compatibility, just finding an older HP DC7700 or some other major-OEM machine (like a Dell) with a Core 2 Duo or higher should work for you. The onboard NIC on a DC7700 is gigabit and supported by VMware. Adding a dual-port PCI Express Intel NIC will give you 3 good gigabit ports. (I think the DC7700 has an x16 and an x1 slot, plus a couple of standard PCI slots.)
To put it in perspective, I run ESXi on a DC7700 w/ 8GB of RAM and an Intel dual-port PCI-E card. I used to also run a pair of Dell PowerEdge 2850 servers running ESXi, each with a quad-port Intel PCI-X network card: one as an ESXi host and the other as a FreeNAS iSCSI host. I eventually migrated everything to just the DC7700 and turned off the 2850s, as they were taking up way too much power. The 7700 runs 2x 2008 domain controllers and vCenter on 2003, plus the random test machine(s) running XP. It generally hovers around 4GB of used RAM, 4GB free.
-
Excellent. Thank you for the input! I have been searching around and found quite a few people who have used standard consumer components with ESXi and had no problems even though they aren't on the "supported" list. I may still end up going the ESXi route after a little more research. I found that people have recommended the Biostar TH67+ board as a good low-priced ESXi component, and I may end up going with that. I would like VT-d support, but finding a motherboard that supports it is difficult, and they are usually much more expensive. When the VT-d-enabled CPU is factored in, it adds quite a bit to the build. I may split the difference and get a VT-d motherboard and the G530 chip so I can upgrade later if I find I want the extra features.
-
@KM:
Excellent. Thank you for the input! I have been searching around and found quite a few people who have used standard consumer components with ESXi and had no problems even though they aren't on the "supported" list. I may still end up going the ESXi route after a little more research. I found that people have recommended the Biostar TH67+ board as a good low-priced ESXi component, and I may end up going with that. I would like VT-d support, but finding a motherboard that supports it is difficult, and they are usually much more expensive. When the VT-d-enabled CPU is factored in, it adds quite a bit to the build. I may split the difference and get a VT-d motherboard and the G530 chip so I can upgrade later if I find I want the extra features.
Just to be clear, don't confuse VT-d with standard VT-x. I'll get to VT-d in a minute, but you need VT-x if you want to run any x64 VMs. (If you're building a new ESX host, get something with at least VT-x; it may be listed as just "VT".)
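If you ever want to confirm what ESXi itself detected, there's a handy one-liner from the ESXi shell; as I recall, a value of 3 means VT is present and enabled, and lower values mean it's missing or disabled in the BIOS:

  # Report the host's hardware-virtualization status.
  esxcfg-info | grep "HV Support"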
VT-d is an extension of VT-x. The only thing you'll be missing without VT-d is direct hardware passthrough to the VM: if you have a specialized piece of hardware that VMware doesn't natively support, VT-d would let you pass that piece of hardware directly to the VM. It's fairly unlikely that you'd do that; that's for things like video capture cards or other specialized interface cards, which you're probably not using anyway. Or, if, for some reason, you -really- wanted some other piece of hardware, like a network card or RAID card, or what-have-you, to be directly passed through to the VM guest OS. There are reasons to do that, mind you, but most people wouldn't, and I can't think of a lot of situations where I've thought: "Damn, I wish I could just present that hardware directly to the VM."
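If you're curious what the host could even offer for passthrough, you can at least enumerate the PCI devices from the shell; actually flagging a device for passthrough is done in the vSphere Client (Configuration > Advanced Settings), and only works with VT-d enabled:

  # List every PCI device the host sees - the candidates for VT-d passthrough.
  esxcli hardware pci list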
On the other hand, no version of VT is required to pass through a USB device to a VM; that just works (assuming the device is supported or just works, and some devices don't).
-
I guess it comes down to how much you can convince your wife you need to spend, but I'd second the HP N40L that johnpoz mentioned - small, relatively quiet, low power, and four disk slots. Quite popular with the whitebox crowd.
If you haven't found this one already, you should have a look at: http://www.vm-help.com/
-
The G530 does support VT-x, but so does pretty much any new processor. The reason I was thinking about VT-d for a later implementation was so that I could directly pass a NIC to the router VM for security reasons, as there will also be a file server VM on the machine. If I'm wrong on this point, I would be glad to hear it. It also leaves me room to play with different stuff, but as I said, I don't think I can justify the expense at this point.
Thanks again for the input everyone.
-
@KM:
The G530 does support VT-x, but so does pretty much any new processor. The reason I was thinking about VT-d for a later implementation was so that I could directly pass a NIC to the router VM for security reasons, as there will also be a file server VM on the machine. If I'm wrong on this point, I would be glad to hear it. It also leaves me room to play with different stuff, but as I said, I don't think I can justify the expense at this point.
Thanks again for the input everyone.
In theory, I guess it might be more secure, but I have yet to hear of any actually exploitable security holes when the back-end VMware management isn't exposed to the front-end Internet. As long as the WAN is on a dedicated port, even if it's routed through VMware, there's no evidence to indicate that it's not plenty secure. Even if it's on a shared port with VLANs configured on the switch(es), that's still plenty secure - enough so that trying to do the same thing through hardware passthrough and having pfSense tag the VLANs shouldn't buy you any extra layer of security. (Not that it sounds like you intend to do anything like that.)
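For what it's worth, the dedicated-port layout is only a few lines from the ESXi shell (or the equivalent clicks in the vSphere Client); the vSwitch, port group, and vmnic names below are just placeholders:

  # Give the WAN its own vSwitch bound to its own physical NIC, so WAN traffic
  # never shares an uplink with the LAN or the management network.
  esxcfg-vswitch -a vSwitchWAN          # create a new vSwitch
  esxcfg-vswitch -L vmnic1 vSwitchWAN   # bind the dedicated physical NIC to it
  esxcfg-vswitch -A WAN vSwitchWAN      # add a "WAN" port group for pfSense

  # On a shared port you'd tag the VLAN on the port group instead, e.g.:
  # esxcfg-vswitch -v 100 -p WAN vSwitchWAN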
I've heard of people talking about theories of security holes in VMware, but I still have yet to see an actually exploitable example on a properly configured ESXi host.
If you're looking for an excuse to justify the cost, the theory might be sufficient, but I wouldn't buy it. If you had some other use, like needing a guest OS to directly control a RAID card for some reason, that might be more legit to me.
-
Matguy this is some great information you're providing, thanks! :)
Having never tried it, I'm pretty vague about VMware. I would assume that if you had a CPU that supported VT-d, you might be able to take advantage of the various hardware offloading features on a NIC to a greater extent?
The only time I have used VMware was on the previously mentioned HP MicroServer, and it was super slick. ;) However, the CPU in that is quite low-powered, nowhere near the processing capability of the G530.
Steve
-
… if you had a CPU that supported VT-d ...
Pretty sure VT-d requires BIOS (i.e., motherboard) support as well as CPU support.
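If you want to verify that the board/BIOS actually exposes it, booting a Linux live CD and checking the kernel log is a quick test (on Intel boards, a DMAR table showing up means the firmware is advertising VT-d):

  # Look for the ACPI DMAR table / IOMMU messages in the boot log.
  dmesg | grep -e DMAR -e IOMMU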
-
Awesome, thanks for the info, matguy. I think I have settled on a final build and will probably order the parts today or tomorrow.
I still haven't figured out the hard drive configuration, but I think I can throw something together fairly easily:
- Intel DQ67SW motherboard (supports VT-d)
- G530 chip (no VT-d support)
- 8GB RAM (easily upgraded)

Going this route allows me to upgrade my CPU for more power and VT-d support if I find I want it at a later date, and it only adds about $60 to the build. I'll let you know how it goes!
-
Matguy this is some great information you're providing, thanks! :)
Having never tried it, I'm pretty vague about VMware. I would assume that if you had a CPU that supported VT-d, you might be able to take advantage of the various hardware offloading features on a NIC to a greater extent?
The only time I have used VMware was on the previously mentioned HP MicroServer, and it was super slick. ;) However, the CPU in that is quite low-powered, nowhere near the processing capability of the G530.
Steve
Hey, no problem, ESX and ESXi are kind of my baby…
Standard hardware offloading (TOE) is supported by ESXi; the NICs just need to be on the HCL. Now, if you're talking about specialized hardware, like, say, the Bigfoot Killer NICs, to get full use out of 'em you'd probably have to use VT-d. I didn't see 'em on the HCL, but I think there are drivers out for 'em.
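If you want to see which driver ESXi actually loaded for a given NIC (and, by extension, what offloads you're likely getting), this will show you; the stock Intel drivers on the HCL handle the standard offloads:

  # Show each physical NIC with its driver, link state and speed.
  esxcli network nic list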