For part two of our series, we're going to look at a few host options for a VMware home lab. In part one, we covered some shared storage options. Now that you've decided on storage, you'll need at least one host to start with.
I know what you're thinking: you can't have a lab and do anything cool with just one host, right? In a production environment, that's correct. You'll need several hosts to do anything "cool," but not so much in a lab! The interesting thing here is that you can have a successful VMware lab using only a single host. I do it on occasion and have had great success. You'll just have to rely on nested virtualization for the rest of it.
I'll break this post down into a few host options.
Host Option #1
While this won't be an exact hardware list, one of the secrets is running on non-supported hardware that some businesses have retired and sent to the trash heap. It might not be useful to them anymore, but an old Dell PowerEdge R620 will run ESXi 6.7 pretty easily. Just don't expect any kind of support. For a few hundred dollars, you can get a box with specs similar to these:
- 2x Intel Xeon E5-2650 – 2.00 GHz
- 128 GB DDR3 RAM
- 2x 146 GB 15K SAS HDs
- iDRAC 7
- H710 RAID Controller
- Redundant Power Supplies
I've run everything from old HPs and Dells to Supermicros. One site I recommend checking out is LabGopher. It's pretty popular on the /r/homelab subreddit. The biggest issue I have with setups like these is that they are power hungry, loud, and produce far more heat than my next host option. As I mentioned in my storage post, I do actively run a Supermicro chassis in my lab as a bulk storage NAS and have been able to quiet it by running some basic cron jobs that control the IPMI fan settings. The Dells and HPs have been more tolerable in terms of sound levels, at least in my experience.
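To give you an idea of what those cron jobs look like, here's a minimal sketch of the approach: read a temperature, pick a fan duty cycle, and push it over IPMI. The raw command bytes and the temperature thresholds below are assumptions for a Supermicro-style board, not values from my actual lab; check your own board's IPMI documentation before using anything like this. The real ipmitool calls are left commented out so the script is safe to dry-run.

```shell
#!/bin/sh
# Sketch only: map CPU temperature to an IPMI fan duty cycle.
# Thresholds and the raw IPMI bytes are assumptions for a
# Supermicro-style board; verify them for your own hardware.

fan_duty() {
  temp="$1"                     # CPU temperature in Celsius
  if   [ "$temp" -ge 70 ]; then echo 100
  elif [ "$temp" -ge 55 ]; then echo 60
  elif [ "$temp" -ge 40 ]; then echo 35
  else                          echo 20
  fi
}

apply_duty() {
  duty="$1"
  hex=$(printf '0x%02x' "$duty")
  # Real call (zone 0 duty cycle on many Supermicro boards; verify first):
  # ipmitool raw 0x30 0x70 0x66 0x01 0x00 "$hex"
  echo "would set fan duty to ${duty}% (${hex})"
}

# In a real cron job you'd read the sensor, e.g. (assumed sensor name):
#   temp=$(ipmitool sdr get "CPU Temp" | awk -F': *' '/Sensor Reading/ {print int($2)}')
temp=${1:-45}
apply_duty "$(fan_duty "$temp")"
```

A cron entry like `*/2 * * * * /usr/local/bin/fan-control.sh` would then re-evaluate the fans every couple of minutes.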
Host Option #2
A classic standby and my personal favorite is the Intel NUC. It's been my go-to system over the last few years because of its size, power, and lack of loud fans. The system outlined below works great with ESXi 6.7, and after buying two of the 16 GB sticks of memory, you'll have a maxed-out NUC that's ready to do just about anything. You can either pick up multiples of this system to build a physical cluster or buy a single unit and nest your ESXi hosts on top of it. It really just depends on your overall budget.
- Intel NUC (NUC7i7BNH) Intel Core i7
- Samsung 850 EVO 500GB 2.5-Inch SATA III Internal SSD (MZ-75E500B/AM)
- SSD Samsung NVMe SM951 128GB M.2 PCIe 3.0, 2000/650MBs, IOPS 300k/83K
- 2x Crucial 16GB DDR4 2133 MT/s (PC4-17000) SODIMM 260-Pin Memory – CT16G4SFD8213
- SanDisk Cruzer Fit 16GB USB 2.0 Low-Profile Flash Drive
In most situations, you will want to pick up 2 to 3 of the same configuration above in order to cluster them. If that’s the case, you might consider only putting 16GB of memory into each unit.
Now, if budget is an issue, you can buy just a single host, but be sure to put in at least 32 GB of memory. If you decide to go this route, take a look at the third option below. It's basically a single host with nested ESXi on top of it.
Host Option #3
For the 3rd option, we're actually going to use the same host configuration as above. Nothing really changes here except that you will use nested virtualization on top of a single host. My recommendation is to install ESXi. You can run nested virtualization in VMware Workstation, but I'd only do that as a last resort. For some good resources on installing nested ESXi, check out this post by Jason right here on the Altaro VMware blog. Additionally, William Lam has great resources and even has appliances you can use. I also recommend checking out William's scripts: if you'd like to automate the process, you can deploy a 3-node ESXi cluster and a vCenter appliance on top of a single host in about 35 minutes. You can check out his site here.
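To show the shape of that kind of automation, here's a hedged sketch of deploying three nested ESXi appliances onto one physical host with VMware's ovftool. The OVA filename, credentials, datastore, and network names are placeholders I've made up for illustration, not the appliance's real defaults, and this is not William's script itself. It defaults to a dry run that just prints the commands.

```shell
#!/bin/sh
# Sketch: stamp out three nested ESXi VMs on one physical host with ovftool.
# OVA name, target locator, datastore, and network are all placeholders.

OVA="nested-esxi.ova"             # e.g. a downloaded Nested ESXi appliance
TARGET="vi://root@physical-esxi"  # the single physical host
DRY_RUN=1                         # set to 0 to actually deploy

for n in 1 2 3; do
  name="nested-esxi-0${n}"
  cmd="ovftool --acceptAllEulas --noSSLVerify \
    --name=${name} --datastore=datastore1 \
    --net:'VM Network'='VM Network' \
    ${OVA} ${TARGET}"
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: ${cmd}"   # dry run: show the command only
  else
    eval "$cmd"                # real deployment
  fi
done
```

From there, joining the three nested hosts to a vCenter appliance is what turns them into a cluster; that part is exactly what the scripted approaches mentioned above automate for you.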
In my mind, the biggest benefit of host option #3 is the ability to tear it all down quickly. I also use it to run 6.5 and 6.7 environments next to each other without having to worry about breaking anything.
To me, the best lab is a hybrid lab. I like to have two physical hosts and then nest other ESXi environments on top of them. I find it the easiest approach overall. I stopped buying so much physical gear around 2014 and have not looked back.
I've seen some unique home lab setups, from people running ESXi on re-purposed thin clients (yes, thin clients!) to people buying lots of broken laptops on eBay and using them as hosts. Honestly, it doesn't take much power to do it. Sure, the laptops have broken screens, but who cares? Not to mention they have built-in battery backups. It certainly doesn't hurt to try it on anything.
Several years ago I was into Shuttle XPC systems, and they are great as long as you're up for injecting the VIBs and drivers for the Realtek NICs. It's an easy process with PowerCLI, but for the overall value, I think the Intel NUC is still your best bet. I am definitely curious to see where VMware goes with the ESXi on ARM project they announced this past year at VMworld. From a lab perspective, it would be so cool to run it on a few Raspberry Pis, but who knows where they will take that. I'll be following it closely, though. I'll probably never be able to get rid of my full-blown ESXi hosts, but if I could save some money with a few low-cost ARM options for test hosts, I'm all in!
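For context on what "injecting drivers" involves after the fact, here's a rough sketch of installing a community Realtek driver VIB on an already-running ESXi host with esxcli (the original approach above injects drivers into the installer image with PowerCLI instead). The VIB filename and datastore path are placeholders, not a specific driver I'm endorsing. The `run` wrapper just prints the commands on any machine that doesn't actually have esxcli.

```shell
#!/bin/sh
# Sketch: install a community Realtek NIC driver VIB on a running ESXi host.
# The VIB path below is a placeholder. On a non-ESXi machine, run() only
# prints what it would do instead of executing it.

run() {
  if command -v esxcli >/dev/null 2>&1; then
    "$@"                       # real ESXi host: execute the command
  else
    echo "would run: $*"       # elsewhere: dry run
  fi
}

# Community-built VIBs are unsigned, so lower the acceptance level first:
run esxcli software acceptance set --level=CommunitySupported

# Install the driver from a datastore path (placeholder filename):
run esxcli software vib install -v /vmfs/volumes/datastore1/net55-r8168.vib

# A reboot is typically required before the NIC shows up.
```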
Hope this post has helped. Next time we’ll check out some networking gear that you can use in your lab.
How about you? Have you had good luck with a certain piece of hardware in your lab that I didn’t mention in this post? Let us know in the comments section below! The more options everyone has, the better!