Hyper-V’s Actual Hardware Requirements

Hyper-V Server 2012 actually doesn’t need much in the way of hardware in order to operate. But you’re going to need more than the published minimums to make it really work.

The published requirements for Hyper-V Server 2012 are actually the same as those for Windows Server 2012, which are pretty much the same as the requirements for 2008 R2. TechNet lists them all for you. They are:

  • 1.4 GHz 64-bit processor
  • 512 MB RAM
  • 32 GB hard drive space

Not bad, huh? I had a laptop in 2005 that offered more than that. There are a few things you’ll need to make your installation successful:

  • DVD drive (or some other method to load the bits onto the system)
  • Monitor with at least 800 x 600 resolution
  • Keyboard and mouse

Still not a dealbreaker for most people, I’d guess. But, you’re not really going to get Hyper-V Server going on that system with any real success.

CPU

For Hyper-V, your CPU absolutely must support Data Execution Prevention and hardware virtualization (Intel VT-x or AMD-V) and, just to state the obvious, those features must be enabled. Where and how you enable them varies by vendor, but it is usually a BIOS/UEFI setting. Second-level address translation (SLAT) support is optional but desirable, especially if you have memory-intensive workloads. If you will be using RemoteFX, see the RemoteFX section at the end of this article. If you will be employing Client Hyper-V (a component of Windows 8), SLAT is required.
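If you’re not sure whether your hardware qualifies, the built-in systeminfo tool on Windows 8/Server 2012 and later reports these features directly in a “Hyper-V Requirements” block at the end of its output (once a hypervisor is actually running, systeminfo prints a hypervisor-detected notice there instead). Here’s a minimal sketch that pulls that block out, assuming it runs on the Windows machine in question:

```python
import subprocess

# systeminfo on Windows 8/Server 2012+ ends its report with a
# "Hyper-V Requirements" block covering VM Monitor Mode Extensions,
# Virtualization Enabled In Firmware, Second Level Address Translation,
# and Data Execution Prevention.
report = subprocess.run(
    ["systeminfo"], capture_output=True, text=True, check=True
).stdout

printing = False
for line in report.splitlines():
    if line.startswith("Hyper-V Requirements"):
        printing = True
    if printing and line.strip():
        print(line.strip())
```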

CPU in your guests does not correspond one-to-one to CPU in your physical host. The recommendation is that you assign no more than 8 vCPUs per physical core for guests prior to 2008 R2/Windows 7 and no more than 12 vCPUs per physical core for later guests. However, these numbers are difficult to apply in practice. You might have a VDI scenario where there are 100 2-vCPU desktops on a single host but only around 20 of those are ever likely to be active at a time. Strictly by the numbers, that would appear to need a 16-core system. In reality, that 16-core system is going to be nearly idle most of the time. On the other hand, you might be considering virtualizing a real-time communications system, such as a Lync mediation server. That’s pretty much going to need to be designed at a 1-to-1 ratio and possibly with more vCPU than a physical deployment would ask for.

The takeaway is that there is some math to vCPU allotments, but it’s really not going to be found in a generic 1-to-x statement. When a virtual machine wants to execute a thread, it first checks whether it has a free vCPU (a virtual machine, like a physical machine, can only run one thread per logical processor). If it does, it will attempt to run that thread. Since it’s virtualized, Hyper-V will attempt to schedule that thread on behalf of the virtual machine. If a physical core is available, the thread runs; otherwise, it waits. Threads are given time slices just as they would be in a non-virtualized world: they get a certain amount of time to run and then, if there is any contention, they are suspended while another thread operates. All that said, the 1-to-8 and 1-to-12 numbers weren’t simply invented out of thin air. If you aren’t sure, they are likely to serve you well.
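As a minimal sketch of the ratio math (the 12:1 guideline and the VDI example come from the paragraphs above; the core counts it prints are strictly by-the-numbers, which, as just noted, is only a starting point):

```python
import math

def min_cores(total_vcpus, ratio):
    """Minimum physical cores implied by a vCPUs-per-core guideline."""
    return math.ceil(total_vcpus / ratio)

# Hypothetical VDI host: 100 desktops with 2 vCPUs each, modern guests (12:1).
print(min_cores(100 * 2, 12))  # 17 -- strictly by the ratio
# A real-time workload such as a Lync mediation server is planned at 1:1:
print(min_cores(4, 1))         # a 4-vCPU guest wants 4 dedicated cores
```

With only 20 or so of those desktops ever active, the by-the-ratio answer buys cores that will sit mostly idle, which is exactly the article’s point.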

Memory

Hyper-V Server can certainly run with as little as 512 MB of RAM. Good luck squeezing even a single virtual machine in there with it, however. If you’ve got any non-Windows processes running in the management operating system, such as backup software, it’s going to want some memory, too. Plan your system design so that Hyper-V and the management OS have 2 GB available, and all will run smoothly. You can pressure it down to about 1 GB without noticeably impacting performance, but after that your mileage will vary.

If high-density is your aim, there’s a bit of planning to be done for the virtual machines. Against the first gigabyte of guest memory, Hyper-V has an overhead of 32 MB. For each gigabyte after that, the overhead is 8 MB. If the guest is using Dynamic Memory and isn’t at its maximum, a buffer will also be in place; by default, this is 10% of the guest’s current memory demand. You can modify this buffer size, and Hyper-V can also opt to use less than the designated amount. Since Hyper-V does not overcommit memory, it is safe to squeeze in what you can. However, if you squeeze too tightly, some virtual machines will not be allowed what they need to perform well.
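Here’s a back-of-the-envelope sketch of that arithmetic, using the 32 MB/8 MB overhead figures and the 10% default buffer from above; the host size and guest mix are hypothetical:

```python
def vm_memory_footprint_mb(assigned_gb, dynamic=False, buffer_pct=10):
    """Estimate host RAM consumed by one VM, in MB.

    Overhead: 32 MB against the first GB of guest RAM, 8 MB for each
    GB after that. Dynamic Memory guests also keep a buffer (default
    10% of current demand); as a worst case, it is applied to the
    full assignment here.
    """
    overhead_mb = 32 + 8 * max(0, assigned_gb - 1)
    buffer_mb = assigned_gb * 1024 * buffer_pct / 100 if dynamic else 0
    return assigned_gb * 1024 + overhead_mb + buffer_mb

# Hypothetical plan: 32 GB host, 2 GB held back for the management OS,
# six 4 GB Dynamic Memory guests.
available_mb = (32 - 2) * 1024
needed_mb = sum(vm_memory_footprint_mb(4, dynamic=True) for _ in range(6))
print(f"Guests need ~{needed_mb:.0f} MB of {available_mb} MB available")
```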


Network

The requirements don’t even list a network adapter, except to say that you need an Internet connection during installation. Of course, it would certainly be possible to run Hyper-V without any network access and I have no doubt that there are at least a few applications for such a configuration. Most of us are going to need network connectivity for the management OS and the guests. For a standalone system, one gigabit adapter is pretty much the absolute minimum and two is much preferred. This way, you can allow the management OS to have its own line while the guests share the other. For a clustered system, you’ll want at least three gigabit adapters. We’ve discussed networking fairly thoroughly elsewhere and will definitely revisit it in future posts, so I won’t dive into that here.

Hard Drive

The minimum of 32 GB is fairly accurate, although I would recommend something more like 40 GB for the management OS. Space is rarely an issue, though. What is often asked is what sort of storage to use for Hyper-V itself. Realistically, you can use just about anything you want as long as it meets the 32 GB minimum and you can boot your host from it. The drives don’t need to be terribly fast, as the hypervisor and management OS have minimal demands. You’ll notice the speed, or lack thereof, almost exclusively at boot time. In my systems, I prefer a hardware RAID-1 of the smallest drives available, which these days is 300 GB. It’s a cheap way to get some fault tolerance for your host.

2012 does a good job of sizing its page file, but if you’re using 2008 R2, your page file is probably oversized. We published a guide last year on setting this appropriately.

The virtual machines themselves should be placed on storage other than that which contains Hyper-V. There is a science to designing this storage, but it is rather involved. For the purposes of this post, just remember that I/Os per second (IOPS) is the primary metric that most sizing strategies work from. You will probably design your storage as a RAID array of some form, for redundancy if nothing else. The single best way to increase IOPS in an array is to increase spindle count (or SSD count). For spinning disks, the next most important factor is the rotational speed (measured in RPM; higher is better). Form factor also makes a difference (2.5″ vs. 3.5″), as it establishes the maximum distance that a read/write head can possibly travel; smaller is faster.

Unless you have an unlimited budget, take care in the way that you design your storage. Many people dramatically overestimate what their actual IOPS requirements are (or blindly follow bad advice from “experts”) and wind up building gigantic RAID-10 arrays capable of 2000 or more IOPS when they are going to be deploying five virtual machines that will average 20 IOPS apiece and will rarely access disk at precisely the same time. If you have an existing load to measure, take the time to measure it and size appropriately. If you don’t, try to find out what people with similar loads are actually experiencing. Almost all systems are far more dependent on reads than writes, so don’t be discouraged from using RAID-5 arrays based solely on the fact that RAID-5 writes are slower unless you’re certain that you’ll have heavy write needs. There is no value in taking the capacity-per-dollar hit of RAID-10 when a lower RAID level would provide the IOPS that you actually need.
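To put rough numbers on that reasoning, here’s a minimal sketch using the five-VM/20-IOPS example from above; the per-disk IOPS figure, the 70/30 read/write split, and the RAID write-penalty table are illustrative assumptions, not measurements:

```python
import math

# Common rule-of-thumb RAID write penalties: each logical write costs
# this many physical I/Os on the back end.
WRITE_PENALTY = {"RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

def disks_needed(workload_iops, read_fraction, raid_level, per_disk_iops):
    """Minimum spindle count to satisfy a workload at a given RAID level."""
    reads = workload_iops * read_fraction
    writes = workload_iops * (1 - read_fraction)
    backend_iops = reads + writes * WRITE_PENALTY[raid_level]
    return math.ceil(backend_iops / per_disk_iops)

# Five hypothetical VMs averaging 20 IOPS apiece, 70% reads,
# 10k SAS disks at roughly 140 IOPS each.
for level in ("RAID-10", "RAID-5"):
    print(level, disks_needed(5 * 20, 0.70, level, 140))
```

Either array would hit its minimum member count (four disks for RAID-10, three for RAID-5) long before IOPS became the constraint, which is exactly the point: measure the workload before buying spindles.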

RemoteFX Components

RemoteFX has changed quite a bit from 2008 R2 to 2012. If you’re not familiar with RemoteFX, it is a set of technologies intended to enhance the end-user experience. As such, it is really only of value when virtualizing desktop operating systems. In 2008 R2, you needed a host with a SLAT-capable CPU and a GPU capable of both DirectX 9.0c and 10.0. What you got from these was the ability to smoothly run video-intensive operations, such as movies and 3D applications, inside a virtual machine.

For 2012, I’ve found a lot of conflicting documentation. I had seen an older document that seemed to indicate that SLAT was the only requirement and that you could get some basic video enhancements even without a dedicated GPU. Newer documentation indicates that your host must have a GPU capable of DirectX 11 with a recent WDDM driver. That same documentation insists that you need to install Windows Server 2012 with a GUI and run Hyper-V as a role, but all of the services required by RemoteFX are also available in Hyper-V Server 2012, so this requirement seems suspect. Here’s the link to that newer document. While I’d love to give some authoritative information on this subject to clear the air, it seems as though the safest bet is to simply ensure you have a GPU that fits the requirements. Even if it’s not strictly necessary, it will certainly provide a superior RemoteFX experience.

 


45 thoughts on "Hyper-V’s Actual Hardware Requirements"

  • Bororo says:

    Hi Eric,
    thank you for this article, I’m starting to consider Hyper-V as a solution for small business and unfortunately didn’t find too much about HW sizing so articles like this one are really appreciated.
I would like to install Hyper-V on a Server 2012 Standard server (HP DL380 G8, 2x CPU, 32 GB RAM) with mirrored 2x 300 GB 15k SFF SAS disks for the system and 6x 300 GB 10k SFF disks configured as RAID50. Based on your sentence “…The virtual machines themselves should be placed on storage other than that which contains Hyper-V…”, I would like to ask whether I should store the virtual machine files on the RAID50 drives even though they are only 10k rpm?
    Thank you,
    Bororo

    • Eric Siron says:

      I/O needs will be determined by the virtual machines you place on the disks. RAID-50 on 10k disks is just fine as long as the virtual machines don’t demand more than it provides. I apologize for the vague answer, but it’s difficult to be more specific without knowing more about your planned virtual machines.

  • Bororo says:

It’s planned for 2 VMs – a Domain Controller and Exchange 2013 for up to 30 users. I know it’s not the best, but since we are not able to buy SBS anymore (2012 Essentials is for 25 users and MS is pushing Office 365), we consider this solution optimal. I’ll try to do some extensive testing and will see if there is a significant performance issue when storing VM files on 10k RAID50 drives. Thank you for your answer,
    Bororo

  • John says:

We have the same server and want to use this one (2x 2.4 GHz Xeon, 8 cores) for an Exchange server for 150 users, a GP server, SQL 2012, SharePoint 2013, and also Lync Server 2013. Is it possible? What is your recommendation for a new server?
    thanks

  • Abhishek Shukla says:

    hey Eric,

We are planning to establish a data center and need to use hardware virtualization.
A few things I need to know:
1) How to set up VMs on the x86 platform.
2) OS load on VMs.
3) Relation/dependencies of CPU cores with VMs.
4) Virtualization load on the OS.

    • Eric Siron says:

      This is much more information than I can possibly cover in this sort of environment, although I have an article planned on subject #3 to be published within the next month or so.
      If you are interested in working this out yourself but with some guidance, I recommend this: http://www.packtpub.com/microsoft-hyperv-cluster-design/book
      Otherwise, I recommend you try to engage a local Microsoft partner that has competency in virtualization. Microsoft consulting services should be able to help you find someone.

  • Jeff G says:

    Just curious why you recommend putting VMs on different disks than the Hypervisor. Your article states that the primary load placed on storage by the Hypervisor is during boot, and that demands are minimal at other times. Seems to me to make the case for putting everything on the same array. This would also have the effect of increasing spindle count, as you’d have the benefit of all the disks’ IOPS supporting the VM storage.

    • Eric Siron says:

      You know Jeff, in a lot of ways, I agree with you.
      I continue to separate mine primarily because that’s Microsoft’s recommendation and second because I rarely use storage that’s inside the same system as the hypervisor.
      One reason to keep them separate would be if you ever need to blow away the management operating system and not worry about harm coming to the guests. While I consider it a horrid personal failure any time I reformat an OS disk to solve a problem, it is a valid approach when you want to upgrade the hypervisor/management OS.
      The approach that I would prefer would be very hardware-dependent. If I had a controller that supported it, I would combine all the drives into a single large physical array. Then I would use the RAID system’s firmware to create two separate logical disks, one for the management OS and one for all the guests. That way, the only drawback is that you’ll have a few slack GBs set aside for the management OS. That’s usually an acceptable price to pay.

  • sherif says:

    thank you for this article, I’m starting to consider Hyper-V as a solution for our business
    I would like to install Hyper-V on Server 2012 Standard server (HP DL380 G8 server, 2xCPU,64GB RAM)3x300GB 10k SFF disks. I want to create application server on physical computer and SQL data base on virtual mechin,what is u r best recomandation on this and how I can make arrange the RAM and HDD space on vertual machine,r u think is it stable solution for application and database on same server ?

  • Juan says:

    Hi Eric,
In our organization we have 4 physical desktop computers hosting 3 VMs. Two of these servers use their hardware exclusively in a developer-collaborative environment to share and store code, replicating between both (these two don’t use VMs). The 3 VMs are hosted on the remaining two PCs (we have a low-end CRM, a Helpdesk service, and some older services that use low resources). We want to clean all that up, buy some high-end enterprise servers (we were thinking of a hardware-redundant solution in case of breakdown), and virtualize everything using W2012. We are talking about low processor usage but medium-high disk usage. We don’t need very powerful servers; the main idea would be to have the same Hyper-V configuration on each of them, pointing to centralized storage, and a good backup system outside the organization once a day (we do not want cloud solutions). Do you recommend this approach? 2 physical hosts, 1 centralized storage, and 1 backup system? Thanks in advance for your help.

    • Eric Siron says:

      That’s exactly what I would build. I would focus resource allocation on getting a good storage system (reliability trumps all) and a good backup solution (because even the most reliable things fail). Where you say “same HyperV configuration in each of them”, I’m thinking cluster.

  • Mit says:

    Eric,
    I am considering using Hyper-V Windows Server 2012R2 Standard for the following scenario:

60-user environment
The server will run as the Hyper-V host with one VM for a Domain Controller and another VM as a File Server.
Hardware: Dell R520, four 2 TB drives set up as RAID10

    What recommendations can you provide in regards to the setup of the File Server with the Storage on the same Server?

    Thanks.

    • Eric Siron says:

      For a typical file server with only 60 users, estimate how much space you want available and use a dynamically expanding virtual disk for it.

  • Mike says:

    Hi Eric,

Thanks for taking the time to answer these questions. The replies alone have been informative, let alone the actual article.

    I’m doing some work for a small business at the moment. They only have one ageing server and it badly needs replacing. The requirement for the time being is only basic file and print, with SQL express running a very small load.

    I had planned to put in a server with 2012 R2 running 2 VM’s. Then split the role the current server is doing into DNS & DHCP on one VM and the file/print/sql on the other VM. Keeping the host OS, just being an OS.

    The question I have is this : there is currently no AD and this would be a benefit ongoing, especially as we transition them to being more cloud based. Where would you put the DC in this instance? On the HOST? Would we need to specify another machine? Or could the HOST still run as Workgroup with the DNS,DHCP server running AD with a backup to Azure?

    Thanks in advance.

    Mike.

    • Eric Siron says:

I would definitely not make the host an AD system. You’ll get yourself a stack of problems that you don’t want, not least of which is that it sucks up a guest virtualization right for no good reason. For a small installation, I would make one guest run DC, DNS, and DHCP. All the other items can go on a second guest. If you have the money for a second Windows Server license, I would first split SQL away from the file and print roles. If you have that second license and the host has enough horsepower to run 4 VMs, I would further split print away from file because the print server role likes to have problems that are best solved by reboot. DC/DNS/DHCP can comfortably co-exist on the same system up to several hundred users.

  • Jim says:

    We are a small business looking to move from our old SBS 2003 server (the DC) and Server 2003 Standard network to one new computer running two Server 2012 R2 systems as VMs. We like the reliability of a simple RAID 1 mirror system for everything. I am considering a system where a 100 GB partition would hold the host with the rest of the drive being for the guest system OSes and another drive just for storage for the guests. One guest would be for the DC, including AD, DNS, DHCP, print server and possibly fax services. The second guest would be the file server, Remote Desktop server with Remote Web Access (With all TS functionality on the one system.), Workgroup SQL Server, applications written in house, backup software and managed anti-virus software. No great demand is ever placed on any of these services in our environment but we do need them all. As I understand it you should allocate RAM for the host and each guest and it’s possible to allocate cores. The question then becomes how to allocate resources and what’s necessary in total. What I’m thinking is the drive space allocation mentioned earlier and 4 GB of RAM for the host, 12 GB for the DC and 16 GB for the other guest. Does this configuration make sense for a business that’s just under 25 users right now (The reason I don’t think we should go with Essentials.)?

    • Eric Siron says:

      RAID-1 might work but there will be disk bottlenecks. Most any other modern RAID level will treat you better.
      16GB RAM is about 15.25GB too much for a 25 user organization’s domain controller if you run it on Windows Core. About 15GB too much if you run it on full GUI. Put faxing on the other one, not on the DC, and give the second guest all of that extra RAM.

  • Stefano Fereri says:

May I put together all that we’ve learned so far and list what is needed for a successor to an SBS 2003/2008/2011?

A common scenario for an “SBS company” is: up to one dozen users, maybe two dozen, no extra room for noisy equipment, no cloud option (for security reasons or because of small bandwidth).

1. Use RAID 1 (for few users) or a four-disk RAID 10 array with 10k disks (15k’s are too noisy) or SSDs, disk size as you need.

    2. 64GB of RAM – fits nearly every need.

    3. One of these average server CPUs like E5-2620v3 (6 cores).

4. First VM with AD/DNS/DHCP/file/print, maybe backup and AV; second VM: Exchange.

5. 2 GB of RAM for the host, 8 GB for the first VM (maybe 12 GB with backup and AV), the rest for the second VM (it’s plenty for a small Exchange, but with 64 GB there’s at least some reserve if you need another VM, to separate print services or whatever).

    6. Set RAM as static, at least for Exchange

    7. Simply give each server – host, VM 1 and VM 2 – two vCPUs, there should be no performance issues in SMB scenarios.

    8. In a typically small server box there are two network ports, use first one as dedicated port (without virtual switch) for host, connect the second one to virtual switch and connect your VMs to this virtual switch.

9. Put the host and VMs on the same array, or the host on a USB flash drive, because it does not need much I/O; for the latter you will need to use the free Hyper-V Server 2012 R2 instead of Server 2012 R2 with the Hyper-V role (whether Core or GUI).

10. If the host is placed on the array, there’s no need to add another partition (i.e. drive D:) for storing the VMs, but it does no harm to do so.

11. Place every VM with its associated files in a separate folder (whether on the host’s C: or an extra D:).

12. There’s no need to have an extra data partition (D:) within VM 1, but preferably there should be one within VM 2 for the Exchange database; don’t create a second partition within the VHDX file of VM 2, though. Instead, create an extra VHDX file as a data disk.

    Any mistakes? Any comments?

Still a mystery to me is how one can achieve as good a backup as we had in the physical era, with serialization and brick-level restore, if you refuse – as you definitely should – to install backup software on the host.

    Thanks and regards,
    Stefano

    • Eric Siron says:

      I don’t like cookie-cutter solutions in general but I suppose this isn’t a bad starting place.
      I wouldn’t do your #8 at all. Converged networking is the present.
      Not sure why anyone wouldn’t want to install a backup solution on the host. It’s much cleaner for the guests. Host-based solutions are dramatically easier to manage — scheduling, storage, visibility, portability, the whole thing. I personally only use a guest-based solution when the host-based solution just can’t handle my needs. Brick level Exchange restores are possible with host-level backups. Altaro Backup for Hyper-V can do it.

      • Stefano says:

        Thanks Eric. Not sure what you mean with “converged networking”. Isn’t dedicating one NIC to host and the second NIC to the VMs the recommended way?

Installing third-party software on the host itself, which you installed as Core because you are so hysterical that you even want to try to decrease the number of patches and fixes from the OS manufacturer ITSELF, seems to me contradictory. Since backup software needs to be patched as well – via the Internet? – you expose the most important part of your Hyper-V system to additional threats and create additional instabilities, don’t you?

And the backup of VMs could be achieved by the host itself, using a small batch file and VSS, although then there is no way to do brick-level restores. But if you want the latter, then there seems to be no alternative to installing backup software on the host and installing agents into your VMs.

        • Eric Siron says:

          That was the recommended way in 2010. In 2015, we converge. If you have two adapters, you team them and use the team to host a single virtual switch.

          Of all the arguments to use Core over a GUI installation, patching reduction is the weakest. It doesn’t extend to the backup software at all. If it’s reboots that you’re worried about, only the most incompetent of backup software vendors require a reboot after every, or even most patches. If it’s for the lower attack footprint, you achieve a net loss in security if your mitigation strategy requires you to spend extra time in tedious manual maintenance that could be better spent on security and not in repetitive tasks that encourage shortcuts. If connecting your host to the Internet bothers you, everyone has offline installation/patching methods. If you wish to avoid all the benefits of host-based backups and add all this overhead to your workload, that’s your choice. It is not something I would ever recommend to anyone.

  • Stefano Fereri says:

In addition to point 6: it is safe to make the VHDX of VM 1 “thin” (i.e. dynamically expanding), but not the VHDX for Exchange, as Microsoft does not recommend it, just as they don’t recommend dynamic RAM for Exchange.

  • Omar says:

    Hi Eric,

I have bought two HP servers, a DL360 G5 and a DL380 G5; both servers have 32 GB RAM and RAID controllers. I just want to ask what you think: how many VMs can I run smoothly on them in a lab environment? I am thinking of adding a SAN as well at some point in the future. Please reply, thanks.

    • Eric Siron says:

      You can run as many VMs as you want until you run out of resources.

      • Omar says:

        Hi Eric,

Trust you are well. I just want to ask: in a lab environment, is it OK to keep the Hyper-V host, SCVMM, and all the VMs and system files etc. on the same server? Or can you please give me an idea of what’s the best thing to do? I have two HP servers. And what RAID would you want me to configure? Please let me know, thanks.

  • Carlos Suazo says:

Excellent guidance; I have a question. If my system has multiple NICs of the same kind, is it better to team them up into a single one? When is it recommended to use them separately? Thank you. – CAS

  • David Valdez says:

    I’ve got an odd one I’m currently researching. We are using DOS-based Foxpro at the new company where I work. It’s a long and convoluted path to here, but we must remain DOS-based for the foreseeable future. We currently use a physical machine, running Windows 2008 (NOT R2) for the DOS services. We’re looking to virtualize using Hyper-V. Today, there’s a set of 8 cores that are sequestered per user to ensure decent performance.

    Got a suggestion for the HW requirements and any gotchas and/or recommendations?

    • Eric Siron says:

2008 is not a supported guest operating system for the 2016 release of Hyper-V. I believe the final version that did support it was 2008 R2. I don’t even know what would happen if you tried to install the guest services on that OS. I’m assuming that the DOS services wouldn’t work on anything past 2008. So, you’re kind of stuck.
      2008 R2 is still supported so you could run that version of Hyper-V.
      You’d have a hard time under-sizing a host so badly that it wouldn’t work for this. I have no idea what 8 cores per user does for you. I don’t recall that FoxPro ever used a multi-threaded library and the last DOS version that I saw ran perfectly on a 100MHz single-core CPU with 4MB of RAM. The last Windows version that I saw… I’m fairly certain was about the same. I don’t think that hardware is your concern.

  • Lior says:

    Hi Eric.

    We have a 512GB RAM host that runs (I know it sounds stupid, but there’s a reason for that) only 1 VM.
The maximum amount of memory Hyper-V allows for this VM is ~490 GB. Does that make sense? It doesn’t match the equation above (I also saw it in other places).
    Do I have a way to troubleshoot it (PerfMon, etc.)?

    Thanks.

  • Buster Schweiker says:

    Very good blog post. I definitely appreciate this site. Continue the good work!
