Hyper-V Server 2012 actually doesn’t need much in the way of hardware in order to operate. You’re going to need more than the published minimums, however, to make it really work.
The published requirements for Hyper-V Server 2012 are actually the same as those for Windows Server 2012, which are pretty much the same as the requirements for 2008 R2. TechNet lists them all for you. They are:
- 1.4 GHz 64-bit processor
- 512 MB RAM
- 32 GB hard drive space
Not bad, huh? I had a laptop in 2005 that offered more than that. There are a few things you’ll need to make your installation successful:
- DVD drive (or some other method to load the bits onto the system)
- Monitor with at least 800 x 600 resolution
- Keyboard and mouse
Still not a dealbreaker for most people, I’d guess. But, you’re not really going to get Hyper-V Server going on that system with any real success.
For Hyper-V, your CPU absolutely must support Data Execution Prevention and hardware virtualization and, just to state the obvious, those features must be enabled. Your system’s exact implementation will vary by vendor. Second-level address translation (SLAT) support is optional but desirable, especially if you have memory-intensive workloads. If you will be using RemoteFX, see the RemoteFX section at the end of this article. If you will be employing Client Hyper-V (a component of Windows 8), SLAT is required.
CPU in your guests does not correspond directly to CPU in your physical host. The recommendation is that you assign no more than 8 vCPUs per physical core for guests prior to 2008 R2/Windows 7 and no more than 12 vCPUs per physical core for later guests. However, these numbers are difficult to apply in practice. You might have a VDI scenario where there are 100 2-vCPU desktops on a single host but only around 20 of those are ever likely to be active at a time. Strictly by the numbers, that would appear to need a 16-core system. In reality, that 16-core system is going to be nearly idle most of the time. On the other hand, you might be considering virtualizing a real-time communications system, such as a Lync mediation server. That’s pretty much going to need to be designed at a 1-to-1 ratio, and possibly with more vCPU than a physical deployment would ask for.
The takeaway is that there is some math to vCPU allotments but it’s really not going to be found in a generic 1-to-x statement. When a virtual machine wants to execute a thread, it’s first going to see if it has enough vCPUs (a virtual machine, like a physical machine, can only run one thread per logical processor). If it does, it will attempt to run that thread. Since it’s virtualized, Hyper-V will attempt to schedule that thread on behalf of the virtual machine. If there’s a core available, the thread will run. Otherwise, it will wait. Threads will be given time slices just as they would in a non-virtualized world. That means that they get a certain amount of time to complete and then, if there is any contention, they are suspended while another thread operates. All that said, the 1-to-8 and 1-to-12 numbers weren’t simply invented out of thin air. If you aren’t sure, they are likely to serve you well.
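As a sketch of that math, the rule-of-thumb ratios and the activity-based estimate from the VDI example above can be compared directly. The VM counts and the "20 active at a time" figure are the hypothetical numbers from the text, not measurements:

```python
import math

def cores_by_ratio(total_vcpus, ratio):
    """Physical cores implied by a simple 1-to-x vCPU rule of thumb."""
    return math.ceil(total_vcpus / ratio)

# 100 2-vCPU desktops, post-2008 R2 guests (1-to-12 rule of thumb):
print(cores_by_ratio(100 * 2, 12))  # 17 -- roughly the "16-core system"

# If only ~20 desktops are ever busy at once, the concurrent demand is
# just 40 vCPUs, which is why that 16-core host sits nearly idle:
print(20 * 2)
```

The point of the sketch is the gap between the two numbers: the ratio sizes for every vCPU that exists, while the real load depends on how many are active at once.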
Hyper-V Server can certainly run with as little as 512 MB of RAM. Good luck squeezing even a single virtual machine in there with it, however. If you’ve got any non-Windows processes running in the management operating system, such as backup software, those will want some memory, too. Plan your system design so that Hyper-V and the management OS have 2 GB available, and all will run smoothly. You can squeeze that down to about 1 GB without noticeably impacting performance, but after that your mileage will vary.
If high density is your aim, there’s a bit of planning to be done for the virtual machines. For the first gigabyte of guest memory, Hyper-V has an overhead of 32 MB. For each gigabyte after that, the overhead is 8 MB. If the guest is using Dynamic Memory and isn’t at its maximum, a buffer will also be in place. By default, this is 10% of the guest’s current memory demand. You can modify this buffer size, and Hyper-V can also opt to use less than the designated amount. Since Hyper-V does not overcommit memory, it is safe to squeeze in what you can. However, if you squeeze too tightly, some virtual machines will not be allowed what they need to perform well.
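Those overhead figures lend themselves to a quick back-of-the-envelope calculation. The sketch below is a rough planning aid, not an exact model; in particular, it uses a guest’s configured RAM as a stand-in for its current demand when estimating the default 10% buffer:

```python
def vm_footprint_mb(guest_ram_gb, dynamic=True, buffer_pct=0.10):
    """Approximate host memory one running VM consumes, in MB:
    guest RAM + 32 MB overhead for the first GB + 8 MB per extra GB,
    plus the Dynamic Memory buffer (10% of demand by default).
    Configured RAM stands in for demand here, which overstates the
    buffer for guests running below their maximum."""
    overhead = 32 + 8 * max(0, guest_ram_gb - 1)
    buffer = guest_ram_gb * 1024 * buffer_pct if dynamic else 0
    return guest_ram_gb * 1024 + overhead + buffer

# A hypothetical 4 GB guest with Dynamic Memory at the default buffer:
print(vm_footprint_mb(4))  # 4096 + 56 + ~410 = 4561.6 MB
```

Summing this across your planned guests, plus the 2 GB set aside for the management OS, gives a conservative ceiling for how much you can pack onto a host without overcommitting.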
The requirements don’t even list a network adapter, except to say that you need an Internet connection during installation. Of course, it would certainly be possible to run Hyper-V without any network access and I have no doubt that there are at least a few applications for such a configuration. Most of us are going to need network connectivity for the management OS and the guests. For a standalone system, one gigabit adapter is pretty much the absolute minimum and two is much preferred. This way, you can allow the management OS to have its own line while the guests share the other. For a clustered system, you’ll want at least three gigabit adapters. We’ve discussed networking fairly thoroughly elsewhere and will definitely revisit it in future posts, so I won’t dive into that here.
The minimum of 32 GB is fairly accurate, although I would recommend something more like 40 GB for the management OS. Space is rarely an issue, though. What is often asked is what sort of storage to use for Hyper-V itself. Realistically, you can use just about anything you want as long as it meets the 32 GB minimum and you can boot your host from it. The drives don’t need to be terribly fast, as the hypervisor and management OS have minimal demands. You’ll notice the speed, or lack thereof, almost exclusively at boot time. In my systems, I prefer a hardware RAID-1 of the smallest drives available, which these days is 300 GB. It’s a cheap way to get some fault tolerance for your host.
2012 does a good job of sizing its page file, but if you’re using 2008 R2, your page file is probably oversized. We published a guide last year on setting this appropriately.
The virtual machines themselves should be placed on storage other than that which contains Hyper-V. There is a science to designing this storage but it is rather involved. For the purposes of this post, just remember that I/Os per second (IOPS) is the primary metric that most sizing strategies take. You will probably design your storage in a RAID array of some form, for redundancy if nothing else. The single best way to increase IOPS in an array is to increase spindle-count (or SSD count). For spinning disks, the next most important factor is the rotational speed (measured in RPM, higher is better). Form factor also makes a difference (2.5″ vs. 3.5″) as it establishes the maximum distance that a read/write head can possibly travel; smaller is faster. Unless you have an unlimited budget, take care in the way that you design your storage. Many people dramatically overestimate what their actual IOPS requirements are (or blindly follow bad advice from “experts”) and wind up building gigantic RAID-10 arrays capable of 2000 or more IOPS when they are going to be deploying five virtual machines that will average 20 IOPS apiece — and will rarely access disk at precisely the same time. If you have an existing load to measure, take the time to measure it and size appropriately. If you don’t, try to find out what people with similar loads are actually experiencing. Almost all systems are far more dependent on reads than writes, so don’t be discouraged from using RAID-5 arrays based solely on the fact that RAID-5 writes are slower unless you’re certain that you’ll have heavy write needs. There is no value in taking the capacity-per-dollar hit of RAID-10 when a lower RAID level would provide the IOPS that you actually need.
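To make that sizing point concrete, here is a hedged back-of-the-envelope comparison of demand against array capability. All figures here (per-VM IOPS, per-spindle raw IOPS, and the conventional RAID-5 write penalty of roughly 4 back-end I/Os per write) are rule-of-thumb assumptions, not measurements of any particular hardware:

```python
def demand_iops(vm_count, avg_iops_per_vm):
    """Naive worst case: every VM hitting its average at once."""
    return vm_count * avg_iops_per_vm

def raid5_usable_iops(raw_iops, read_fraction):
    """Usable IOPS from a RAID-5 array, charging ~4 back-end I/Os
    per front-end write (read data, read parity, write both)."""
    write_fraction = 1 - read_fraction
    return raw_iops / (read_fraction + 4 * write_fraction)

# The five VMs averaging 20 IOPS apiece from the example in the text:
need = demand_iops(5, 20)
# Four 10k RPM spindles at an assumed ~140 raw IOPS each, 80% reads:
have = raid5_usable_iops(4 * 140, 0.80)
print(need, round(have))  # 100 350 -- no RAID-10 monster required
```

Even with the pessimistic assumption that all five VMs peak simultaneously, a small RAID-5 array covers the load with headroom, which is the article’s point about measuring before buying.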
RemoteFX has changed quite a bit from 2008 R2 to 2012. If you’re not familiar with RemoteFX, it is a set of technologies intended to enhance the end-user experience. As such, it is really only of value when virtualizing desktop operating systems. In 2008 R2, you needed to have a host with a SLAT-capable CPU and a GPU that was capable of both DirectX 9.0c and 10.0. What you got from these was the ability to smoothly run video-intensive operations, such as movies and 3D applications, inside a virtual machine.
For 2012, I’ve found a lot of conflicting documentation. I had seen an older document that seemed to indicate that SLAT was the only requirement and that you could get some basic video enhancements even without a dedicated GPU. Newer documentation indicates that your host must have a GPU capable of DirectX 11 with a recent WDDM driver. That same documentation insists you need to install Windows Server 2012 with GUI and run Hyper-V as a role, but all of the services required by RemoteFX are also available in Hyper-V Server 2012, so this requirement seems suspect. Here’s the link to that newer document. While I’d love to give some authoritative information on this subject to clear the air, it seems as though the safest bet is to simply ensure you have a GPU that fits the requirements. Even if it’s not strictly necessary, it will certainly provide a superior RemoteFX experience.
Have any questions?
Leave a comment below!
Backing up Hyper-V
If you’d like to make backing up your Hyper-V VMs easy, fast and reliable, check out Altaro Hyper-V Backup v4. It’s free for up to 2 VMs and supports Hyper-V Server 2012 R2! Need more? Download a 30-day trial of our Unlimited Edition here: http://www.altaro.com/hyper-v-backup/.