Hyper-V’s Actual Hardware Requirements

30 Apr, by Eric Siron

Hyper-V Server 2012 doesn’t need much in the way of hardware in order to operate. But to make it really work, you’re going to need more than the minimums.

The published requirements for Hyper-V Server 2012 are the same as those for Windows Server 2012, which in turn are pretty much the same as the requirements for 2008 R2. TechNet lists them all for you. They are:

  • 1.4 GHz 64-bit processor
  • 512 MB RAM
  • 32 GB hard drive space

Not bad, huh? I had a laptop in 2005 that offered more than that. There are a few things you’ll need to make your installation successful:

  • DVD drive (or some other method to load the bits onto the system)
  • Monitor with at least 800 x 600 resolution
  • Keyboard and mouse

Still not a dealbreaker for most people, I’d guess. But, you’re not really going to get Hyper-V Server going on that system with any real success.

CPU

For Hyper-V, your CPU absolutely must support Data Execution Prevention and hardware virtualization and, just to state the obvious, those features must be enabled. Your system’s exact implementation will vary by vendor. Second-level address translation (SLAT) support is optional but desirable, especially if you have memory-intensive workloads. If you will be using RemoteFX, see the RemoteFX section at the end of this article. If you will be employing Client Hyper-V (a component of Windows 8), SLAT is required.

CPU in your guests does not correspond directly to CPU in your physical host. The recommendation is that you assign no more than 8 vCPUs per physical core for guests prior to 2008 R2/Windows 7 and no more than 12 vCPUs per physical core for later guests. However, these numbers are difficult to apply in practice. You might have a VDI scenario with 100 2-vCPU desktops on a single host where only around 20 of those are ever likely to be active at a time. Strictly by the numbers, that would appear to need a 16-core system. In reality, that 16-core system is going to be nearly idle most of the time. On the other hand, you might be considering virtualizing a real-time communications system, such as a Lync mediation server. That’s pretty much going to need to be designed at a 1-to-1 ratio, possibly with more vCPU than a physical deployment would ask for.

The takeaway is that there is some math to vCPU allotments but it’s really not going to be found in a generic 1-to-x statement. When a virtual machine wants to execute a thread, it’s first going to see if it has enough vCPUs (a virtual machine, like a physical machine, can only run one thread per logical processor). If it does, it will attempt to run that thread. Since it’s virtualized, Hyper-V will attempt to schedule that thread on behalf of the virtual machine. If there’s a core available, the thread will run. Otherwise, it will wait. Threads will be given time slices just as they would in a non-virtualized world. That means that they get a certain amount of time to complete and then, if there is any contention, they are suspended while another thread operates. All that said, the 1-to-8 and 1-to-12 numbers weren’t simply invented out of thin air. If you aren’t sure, they are likely to serve you well.
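As a quick sanity check, the ratio guidance above reduces to simple arithmetic. The sketch below (Python; the helper name and example figures are mine, not part of any Hyper-V tooling) turns a total vCPU count and a chosen ratio into a minimum core count:

```python
# Rough host-core estimate from the vCPU-to-core ratios discussed above.
# Ratios: 8:1 for pre-2008 R2/Windows 7 guests, 12:1 for later guests.

def cores_needed(total_vcpus, vcpus_per_core):
    """Minimum physical cores for a given total vCPU count and ratio."""
    return -(-total_vcpus // vcpus_per_core)  # ceiling division

# The VDI scenario above: 100 desktops with 2 vCPUs each, modern guests.
print(cores_needed(100 * 2, 12))  # 17 cores strictly by the numbers

# A latency-sensitive guest such as a Lync mediation server: design 1-to-1.
print(cores_needed(4, 1))         # 4 vCPUs want 4 dedicated cores
```

Treat this as a starting point only; as the scheduling discussion above shows, actual thread concurrency matters far more than the raw ratio.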

Memory

Hyper-V Server can certainly run with as little as 512 MB of RAM. Good luck squeezing even a single virtual machine in there with it, however. If you’ve got any non-Windows processes running in the management operating system, such as backup software, it’s going to want some memory, too. Plan your system design so that Hyper-V and the management OS have 2 GB available, and all will run smoothly. You can pressure it down to about 1 GB without noticeably impacting performance, but after that your mileage will vary.

If high density is your aim, there’s a bit of planning to be done for the virtual machines. For the first gigabyte of guest memory, Hyper-V carries an overhead of 32 MB. For each gigabyte after that, the overhead is 8 MB. If the guest is using Dynamic Memory and isn’t at its maximum, a buffer will also be in place; by default, this is 10% of the guest’s current memory demand. You can modify this buffer size, and Hyper-V can also opt to use less than the designated amount. Since Hyper-V does not overcommit memory, it is safe to squeeze in what you can. However, if you squeeze too tightly, some virtual machines will not be allowed what they need to perform well.
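To see how that overhead adds up, here is a minimal sketch (Python; the function names and the 8 GB example guest are mine, and the 10% buffer is the default figure cited above):

```python
# Per-VM memory overhead as described above: 32 MB for the first GB of
# guest RAM and 8 MB for each additional GB, plus the Dynamic Memory
# buffer (10% of current demand by default). All figures in MB.

def vm_overhead_mb(guest_ram_gb):
    """Hyper-V's bookkeeping overhead for a guest of the given size."""
    if guest_ram_gb <= 0:
        return 0
    return 32 + 8 * (guest_ram_gb - 1)

def dynamic_buffer_mb(current_demand_mb, buffer_pct=10):
    """Extra RAM Hyper-V may hold back for a Dynamic Memory guest."""
    return current_demand_mb * buffer_pct // 100

# An 8 GB guest currently demanding 6 GB:
print(vm_overhead_mb(8))            # 88 MB of overhead
print(dynamic_buffer_mb(6 * 1024))  # 614 MB default buffer
```

Summing these figures across your planned guests, plus the management OS allowance above, gives a realistic floor for host RAM.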

Network Adapters

The requirements don’t even list a network adapter, except to say that you need an Internet connection during installation. Of course, it would certainly be possible to run Hyper-V without any network access and I have no doubt that there are at least a few applications for such a configuration. Most of us are going to need network connectivity for the management OS and the guests. For a standalone system, one gigabit adapter is pretty much the absolute minimum and two is much preferred. This way, you can allow the management OS to have its own line while the guests share the other. For a clustered system, you’ll want at least three gigabit adapters. We’ve discussed networking fairly thoroughly elsewhere and will definitely revisit it in future posts, so I won’t dive into that here.

Hard Drive

The minimum of 32 GB is fairly accurate, although I would recommend something more like 40 GB for the management OS. Space is rarely an issue, though. What is often asked is what sort of storage to use for Hyper-V itself. Realistically, you can use just about anything you want as long as it meets the 32 GB minimum and you can boot your host from it. The drives don’t need to be terribly fast, as the hypervisor and management OS have minimal demands. You’ll notice the speed, or lack thereof, almost exclusively at boot time. In my systems, I prefer a hardware RAID-1 of the smallest drives available, which these days is 300 GB. It’s a cheap way to get some fault tolerance for your host.

2012 does a good job of sizing its page file, but if you’re using 2008 R2, your page file is probably oversized. We published a guide last year on setting this appropriately.

The virtual machines themselves should be placed on storage other than that which contains Hyper-V. There is a science to designing this storage, but it is rather involved. For the purposes of this post, just remember that I/Os per second (IOPS) is the primary metric that most sizing strategies use.

You will probably design your storage as a RAID array of some form, for redundancy if nothing else. The single best way to increase IOPS in an array is to increase spindle count (or SSD count). For spinning disks, the next most important factor is rotational speed (measured in RPM; higher is better). Form factor also makes a difference (2.5″ vs. 3.5″), as it establishes the maximum distance that a read/write head can possibly travel; smaller is faster.

Unless you have an unlimited budget, take care in the way that you design your storage. Many people dramatically overestimate their actual IOPS requirements (or blindly follow bad advice from “experts”) and wind up building gigantic RAID-10 arrays capable of 2,000 or more IOPS when they are going to be deploying five virtual machines that will average 20 IOPS apiece and will rarely access disk at precisely the same time. If you have an existing load to measure, take the time to measure it and size appropriately. If you don’t, try to find out what people with similar loads are actually experiencing. Almost all systems are far more dependent on reads than writes, so don’t be discouraged from using RAID-5 arrays based solely on the fact that RAID-5 writes are slower unless you’re certain that you’ll have heavy write needs. There is no value in taking the capacity-per-dollar hit of RAID-10 when a lower RAID level would provide the IOPS that you actually need.
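If you want to put rough numbers to that advice, a back-of-the-envelope comparison like this one helps (Python; the per-spindle IOPS figures and the concurrency discount are ballpark assumptions of mine, not vendor data):

```python
# Compare a spinning-disk array's rough IOPS ceiling against the
# demand the guests will actually generate.

PER_SPINDLE_IOPS = {7200: 80, 10000: 130, 15000: 175}  # rough estimates

def raw_array_iops(spindles, rpm):
    """Crude aggregate IOPS before any RAID write penalty."""
    return spindles * PER_SPINDLE_IOPS[rpm]

def vm_demand_iops(vm_count, avg_iops_per_vm, concurrency=1.0):
    """Expected demand, discounted because guests rarely peak together."""
    return vm_count * avg_iops_per_vm * concurrency

# The example above: five VMs averaging 20 IOPS, assumed ~60% concurrent.
print(vm_demand_iops(5, 20, 0.6))  # 60.0 IOPS of demand
print(raw_array_iops(6, 10000))    # six 10k spindles: roughly 780 IOPS
```

Even with a RAID write penalty applied, a modest array comfortably covers that demand, which is exactly why the gigantic RAID-10 build is usually wasted money.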

RemoteFX Components

RemoteFX has changed quite a bit from 2008 R2 to 2012. If you’re not familiar with RemoteFX, it is a set of technologies intended to enhance the end-user experience. As such, it is really only of value when virtualizing desktop operating systems. In 2008 R2, you needed a host with a SLAT-capable CPU and a GPU capable of both DirectX 9.0c and 10.0. What you got in return was the ability to smoothly run video-intensive operations, such as movies and 3D applications, inside a virtual machine.

For 2012, I’ve found a lot of conflicting documentation. I had seen an older document that seemed to indicate that SLAT was the only requirement and that you could get some basic video enhancements even without a dedicated GPU. Newer documentation indicates that your host must have a GPU capable of DirectX 11 with a recent WDDM driver. That same documentation insists you need to install Windows Server 2012 with GUI and run Hyper-V as a role, but all of the services required by RemoteFX are also available in Hyper-V Server 2012, so this requirement seems suspect. Here’s the link to that newer document. While I’d love to give some authoritative information on this subject to clear the air, it seems as though the safest bet is to simply ensure you have a GPU that fits the requirements. Even if it’s not strictly necessary, it will certainly provide a superior RemoteFX experience.

Have any questions?

Leave a comment below!


Backing up Hyper-V

If you’d like to make backing up your Hyper-V VMs easy, fast and reliable, check out Altaro Hyper-V Backup v4. It’s free for up to 2 VMs and supports Hyper-V Server 2012 R2! Need more? Download a 30-day trial of our Unlimited Edition here: http://www.altaro.com/hyper-v-backup/.


Eric Siron

I have worked in the information technology field since 1998. I have designed, deployed, and maintained server, desktop, network, and storage systems. I provided all levels of support for businesses ranging from single-user through enterprises with thousands of seats. Along the way, I have achieved a number of Microsoft certifications and was a Microsoft Certified Trainer for four years. In 2010, I deployed a Hyper-V Server 2008 R2 system and began writing about my experiences. Since then, I have been writing regular blogs and contributing what I can to the Hyper-V community through forum participation and free scripts.

10 Responses to “Hyper-V’s Actual Hardware Requirements”

  1. Bororo

    Hi Eric,
    thank you for this article, I’m starting to consider Hyper-V as a solution for small business and unfortunately didn’t find too much about HW sizing so articles like this one are really appreciated.
    I would like to install Hyper-V on a Server 2012 Standard server (HP DL380 G8, 2x CPU, 32 GB RAM) with mirrored 2x 300 GB 15k SFF SAS disks for the system and 6x 300 GB 10k SFF disks configured as RAID50. Based on your sentence “…The virtual machines themselves should be placed on storage other than that which contains Hyper-V…”, I would like to ask whether to store virtual machine files on the RAID50 drives even though they are only 10k rpm?
    Thank you,

    • Eric Siron

      I/O needs will be determined by the virtual machines you place on the disks. RAID-50 on 10k disks is just fine as long as the virtual machines don’t demand more than it provides. I apologize for the vague answer, but it’s difficult to be more specific without knowing more about your planned virtual machines.

  2. Bororo

    It’s planned for 2 VMs: a Domain Controller and Exchange 2013 for up to 30 users. I know it’s not the best, but since we are not able to buy SBS anymore (2012 Essentials is for 25 users and MS is pushing Office 365), we consider this solution optimal. I’ll do some extensive testing and will see if there is a significant performance issue when storing VM files on 10k RAID50 drives. Thank you for your answer,

    • Eric Siron

      For a DC and an Exchange Server of that size, I think you’ll be fine.

  3. John

    We have the same server (2 CPU, 2.4 GHz Xeon, 8 cores) and want to use it for an Exchange server for 150 users, plus a GP server, SQL 2012, SharePoint 2013, and also Lync Server 2013. Is that possible? What is your recommendation for a new server?

  4. Abhishek Shukla

    hey Eric,

    We are planning to establish a data center and need to use hardware virtualization.
    A few things I need to know:
    1) How to set up VMs on the x86 platform.
    2) OS load on VMs.
    3) Relation/dependencies of CPU cores with VMs.
    4) Virtualization load on the OS.

    • Eric Siron

      This is much more information than I can possibly cover in this sort of environment, although I have an article planned on subject #3 to be published within the next month or so.
      If you are interested in working this out yourself but with some guidance, I recommend this: http://www.packtpub.com/microsoft-hyperv-cluster-design/book
      Otherwise, I recommend you try to engage a local Microsoft partner that has competency in virtualization. Microsoft consulting services should be able to help you find someone.

  5. Jeff G

    Just curious why you recommend putting VMs on different disks than the Hypervisor. Your article states that the primary load placed on storage by the Hypervisor is during boot, and that demands are minimal at other times. Seems to me to make the case for putting everything on the same array. This would also have the effect of increasing spindle count, as you’d have the benefit of all the disks’ IOPS supporting the VM storage.

    • Eric Siron

      You know Jeff, in a lot of ways, I agree with you.
      I continue to separate mine primarily because that’s Microsoft’s recommendation and second because I rarely use storage that’s inside the same system as the hypervisor.
      One reason to keep them separate would be if you ever need to blow away the management operating system and not worry about harm coming to the guests. While I consider it a horrid personal failure any time I reformat an OS disk to solve a problem, it is a valid approach when you want to upgrade the hypervisor/management OS.
      The approach that I would prefer would be very hardware-dependent. If I had a controller that supported it, I would combine all the drives into a single large physical array. Then I would use the RAID system’s firmware to create two separate logical disks, one for the management OS and one for all the guests. That way, the only drawback is that you’ll have a few slack GBs set aside for the management OS. That’s usually an acceptable price to pay.

Leave a comment