Installing a Hyper-V Cluster | Hyper-V Clusters – Part 2

30 Aug by Eric Siron – Hyper-V Articles

In the previous post – Hyper-V Failover Clusters – Part 1 – we looked at what a Hyper-V R2 failover cluster is, the basics of how it operates, and what you need to set one up. This post covers the design phase: the stage where you lay everything out and decide how the final product should look before building anything. This is arguably the most critical phase, especially if you want everything to work the first time.

Part 1 - Hyper-V Failover Clusters Overview
Part 2 - Installing a Hyper-V Cluster - This article
Part 3 - Creating your Hyper-V Cluster
Part 4 - Common Hyper-V Cluster Issues and Pitfalls

Hopefully, you’ve already visualized what your cluster will look like. You’ll want to convert that idea to a digital or paper diagram even if you’re the only administrator. On the diagram and/or attached worksheets, fill in as much information as you can to fully document the system. Necessary information will include the make/model of included hardware, logical names, and IP addresses. You’ll also need to establish the starting sizes and locations of cluster shared volumes, pass-through disks, and shared LUNs. Keep all of this in a safe place. Initially it will serve as a simple reminder as you build; later, it will help anyone who needs to take up where you left off.


Networking

Pay special attention to the network configuration of your hosts. This is the biggest tripping point for newcomers to Hyper-V clustering. There are multiple ways to configure networking, but this article covers only the best-practices configuration.

  • Have one network card (or more) dedicated to each of the following roles: virtual switches, host management, LiveMigration, and cluster communications. If you will be using cluster shared volumes, that traffic will travel on the cluster communications network. If your storage will be connected via iSCSI, you’ll need at least one dedicated network for that traffic as well.
  • The network cards for host management, LiveMigration, and cluster communications are all resources that will be controlled by the clustering service. It will group them by their subnets. That means that you must have a subnet for each separate role, and the network cards on each host that will be used in that role must be part of that subnet. In the optimal configuration, different roles cannot share a subnet (because the subnets themselves are the way that the clustering service distinguishes them). Virtual switches are not clustered resources (because the TCP/IP protocol isn’t bound to them, which means they don’t have IP addresses so the cluster service can’t even see them). Hyper-V will manage these by name, so you must ensure that each host in the cluster refers to its virtual switches by the exact same name(s). iSCSI networks are also not cluster resources.
  • All of these roles require gigabit cards at minimum. It is theoretically possible to use a slower NIC for cluster communications if CSV traffic is segregated, but doing so is neither recommended nor supported. If you have 10Gb cards, they are best suited to the virtual switches, LiveMigration networks, CSV networks, and any iSCSI connections. The host management cards see high traffic during virtual machine backups but are otherwise generally low volume. Unless it also carries CSV traffic, the cluster communications network mainly carries heartbeat data, which is small and does not benefit from a 10Gb pipe.
  • Do not team iSCSI connections. Use MPIO instead. Teaming can be used for the other roles, but be judicious; it can cause more problems than it solves.
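Because the clustering service distinguishes its networks by subnet, it's worth sanity-checking your network plan on paper before you configure anything. The following sketch shows one way to do that with Python's `ipaddress` module; the role names, subnets, and host addresses are invented for illustration, not taken from any real deployment.

```python
import ipaddress

# Hypothetical planning data: one subnet per clustered network role,
# and the per-host NIC addresses intended for each role.
role_subnets = {
    "management": ipaddress.ip_network("192.168.10.0/24"),
    "live_migration": ipaddress.ip_network("192.168.20.0/24"),
    "cluster_csv": ipaddress.ip_network("192.168.30.0/24"),
}

host_nics = {
    "HV-NODE1": {"management": "192.168.10.11",
                 "live_migration": "192.168.20.11",
                 "cluster_csv": "192.168.30.11"},
    "HV-NODE2": {"management": "192.168.10.12",
                 "live_migration": "192.168.20.12",
                 "cluster_csv": "192.168.30.12"},
}

def validate_plan(role_subnets, host_nics):
    """Return a list of problems: roles sharing a subnet, or NICs
    outside their role's subnet (the cluster groups networks by subnet)."""
    problems = []
    subnets = list(role_subnets.values())
    for i, a in enumerate(subnets):
        for b in subnets[i + 1:]:
            if a.overlaps(b):
                problems.append(f"subnets {a} and {b} overlap")
    for host, nics in host_nics.items():
        for role, addr in nics.items():
            if ipaddress.ip_address(addr) not in role_subnets[role]:
                problems.append(
                    f"{host}: {addr} not in {role} subnet {role_subnets[role]}")
    return problems

print(validate_plan(role_subnets, host_nics))  # [] means the plan is consistent
```

An empty result means every role has its own subnet and every host's NICs line up; anything else flags a configuration the clustering service would group incorrectly.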


Security

Every virtual machine adds another security concern to your Hyper-V cluster. For that reason, your cluster’s security is a high priority. Unfortunately, there isn’t a singular best practice here, and all approaches have tradeoffs. The first choice you need to make is whether or not to include your Hyper-V hosts as members of your domain. Leaving them as standalone units means that compromising a domain account won’t grant access to the parent partition(s) or vice versa. However, it’s much more difficult to work with non-domain members remotely, and local SAM databases are much easier to crack than Active Directory accounts.

My recommendation is to make all of your Hyper-V hosts members of your domain. This will make them far easier to manage without meaningfully increasing your security risks. A relatively simple and effective way to keep your Hyper-V hosts safe is to keep them entirely within non-routed subnets. If they can’t directly reach the Internet, no one on the Internet can directly reach them. Because your virtual switches support 802.1q trunking without granting access to the parent partition’s TCP/IP stack, you can still connect your virtual machines to the Internet without exposing your Hyper-V hosts. To manage the hosts, simply place a NIC from each machine that needs to reach them in the same unrouted network. See the following diagram for an example:
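One quick check you can run against your plan is that every parent-partition address sits in private (RFC 1918) space. This alone doesn't make the network unrouted (that depends on your routers and gateways), but a public address on a management NIC is a clear mistake. The addresses below are invented for illustration:

```python
import ipaddress

# Hypothetical host-management addresses for the unrouted network.
management_ips = ["10.0.50.11", "10.0.50.12", "10.0.50.13"]

# Flag any parent-partition address that is publicly routable.
exposed = [ip for ip in management_ips
           if not ipaddress.ip_address(ip).is_private]
print(exposed)  # [] -- nothing publicly addressable
```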

Hyper-V Isolation

Don’t forget about your shared storage devices! Some of them can be attached to the domain, but all will retain a local security database of some sort and will ship with a default user name and password combination that is public knowledge. At minimum, change these passwords. These devices can also be isolated on a non-routed network. You can follow the diagram above and duplicate the setup of the unrouted network for your iSCSI traffic. If you’re using managed switches, secure those as well.

For very detailed coverage of securing Hyper-V, refer to Microsoft’s guide. It was written for the original Hyper-V product, but it’s still applicable.

Load Balancing

After high availability and failover, the next most common purpose for clustering is load balancing. While you should design your cluster so that it can run all your virtual machines with at least one failed node, most of the time, all the nodes in your cluster will be fully operational. You should not fear using them all, as long as you keep the high availability and failover purposes in mind.

Load balancing in this context simply means distributing your virtual machines across your nodes in such a way that all of them bear a roughly equal burden. There are two basic approaches: the first is to fully utilize all hardware from the beginning and add resources as necessary; the second is to build beyond initial capacity in expectation of growth. Which approach you take will depend on budget, expected load, and expected growth. If it’s in your budget, it is generally better to buy more powerful hardware than you actually need. It is uncommon for loads to remain static; they usually grow. Also, the cost to improve and extend hardware is almost always higher than purchasing it with the extra capacity already installed.

The primary resource you’ll be concerned with balancing is main system memory. RAM cannot be shared, while most other resources can. Dynamic Memory allows for some increase in virtual machine density, but there will always be a certain minimum amount of RAM that you’ll need to provision.
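Because the design goal stated earlier is a cluster that can run every virtual machine with one node failed, the RAM arithmetic is worth doing explicitly. A minimal sketch, with invented VM counts and sizes; the per-node parent-partition reserve is an assumption you should replace with your own figure:

```python
# Hypothetical sizing check: can the cluster still host every VM's
# minimum RAM commitment with one node failed (N-1 planning)?
vm_min_ram_gb = [4, 4, 8, 2, 2, 16, 8, 4]   # per-VM minimum RAM
node_count = 3
node_ram_gb = 32
host_reserve_gb = 4  # assumed RAM held back for the parent partition

# Capacity available to VMs with one node down.
usable = (node_count - 1) * (node_ram_gb - host_reserve_gb)
needed = sum(vm_min_ram_gb)
print(needed, usable, needed <= usable)  # 48 56 True
```

If `needed` exceeds `usable`, a single node failure would leave some virtual machines with nowhere to run.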

CPU resources are shared. Microsoft supports eight virtual CPUs per physical core — twelve if the virtual machines are Windows 7 or Windows Server 2008 R2. The virtual CPUs are not directly mapped to physical cores; individual threads are scheduled evenly across all available physical units.
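Those ratios make the per-host vCPU ceiling simple arithmetic. A quick worked example, assuming a hypothetical eight-core host:

```python
# Supported vCPU-to-core ratios from the text: 8:1 in general,
# 12:1 for the more generous exception. Core count is invented.
cores_per_host = 8

max_vcpus_general = cores_per_host * 8
max_vcpus_exception = cores_per_host * 12
print(max_vcpus_general, max_vcpus_exception)  # 64 96
```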

Storage is the final resource you will be concerned with. The space itself cannot be shared, but access to the storage device is. All highly available virtual machines will use it fairly continuously. If you are using virtual hard disks (VHDs), whether on cluster shared volumes or host-connected LUNs, the Hyper-V hosts will broker these connections. If you are using pass-through disks, the virtual machines themselves will connect directly. The primary resource to load-balance will be IOPS (input/output operations per second). In most cases, the storage device will be the limit more than the hosts. However, if your cluster connects to multiple devices, it is possible that the hosts themselves will present a bottleneck. In any case, if the storage device(s) can handle more IOPS than your hosts, you’ll want to distribute your virtual machines so that they do not bog down one host while others sit idle. Hyper-V controls access to these devices through SCSI-3 reservations. Most devices have a hard limit on how many simultaneous reservations they can maintain. This limit is typically high, but it is worth finding out what it is.
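One simple way to spread I/O load is a greedy placement: assign the heaviest consumers first, each to the host with the lightest current load. The sketch below illustrates the idea; the VM names and IOPS figures are invented, and real placements would also have to respect RAM and failover constraints:

```python
# A sketch of IOPS-based placement: greedily assign each VM to the host
# with the lightest current I/O load. All names and numbers are invented.
vm_iops = {"web1": 300, "web2": 300, "sql1": 1200,
           "file1": 500, "dc1": 100, "app1": 700}
hosts = ["HV-NODE1", "HV-NODE2", "HV-NODE3"]

load = {h: 0 for h in hosts}
placement = {}
# Place the heaviest consumers first so they don't pile onto one host.
for vm, iops in sorted(vm_iops.items(), key=lambda kv: -kv[1]):
    target = min(load, key=load.get)   # host with the least I/O so far
    placement[vm] = target
    load[target] += iops

print(load)
```

With these sample numbers, each host ends up carrying a roughly comparable share of the total I/O, which is the goal.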

You can use the Microsoft Assessment and Planning (MAP) Toolkit to help you determine how much hardware you’ll need initially.


Licensing

Licensing was covered in detail in a previous post on Hyper-V Clusters. This is something you’ll need to consider, as all virtual machines must be covered by a proper license at all times.

Plan for Expansion

You’ll want to consider the likelihood that your deployment will need to grow in the future. You can overbuy initially, scale up by adding hardware to existing hosts, or scale out by adding hosts into your cluster. You may cluster a maximum of sixteen hosts.

Next Time…

This concludes the design portion of this series. The next installment, Creating your Hyper-V Cluster, explores the actual creation process.


Have any questions?

Leave a comment below!



Eric Siron

I have worked in the information technology field since 1998. I have designed, deployed, and maintained server, desktop, network, and storage systems. I provided all levels of support for businesses ranging from single-user through enterprises with thousands of seats. Along the way, I have achieved a number of Microsoft certifications and was a Microsoft Certified Trainer for four years. In 2010, I deployed a Hyper-V Server 2008 R2 system and began writing about my experiences. Since then, I have been writing regular blogs and contributing what I can to the Hyper-V community through forum participation and free scripts.
