How to Architect and Implement Networks for a Hyper-V Cluster


We recently published a quick tip article recommending the number of networks you should use in a cluster of Hyper-V hosts. I want to expand on that content to make it clear why we’ve changed practice from pre-2012 versions and how we arrive at this guidance. Use the previous post for quick guidance; read this one to learn the supporting concepts. These ideas apply to all versions from 2012 onward.

Why Did We Abandon Practices from 2008 R2?

If you dig on TechNet a bit, you can find an article outlining how to architect networks for a 2008 R2 Hyper-V cluster. While it was perfect for its time, we have new technologies that make its advice obsolete. I have two reasons for bringing it up:

  • Some people still follow those guidelines on new builds — worse, they recommend it to others
  • Even though we no longer follow that implementation practice, we still need to solve the same fundamental problems

We changed practices because we gained new tools to address our cluster networking problems.

What Do Cluster Networks Need to Accomplish for Hyper-V?

Our root problem has never changed: we need to ensure that we always have enough available bandwidth to prevent choking out any of our services or inter-node traffic. In 2008 R2, we could only do that by using multiple physical network adapters and designating traffic types to individual pathways. Note: It was possible to use third-party teaming software to overcome some of that challenge, but that was never supported and introduced other problems.

Starting from our basic problem, we next need to determine how to delineate those various traffic types. That original article did some of that work. We can immediately identify what appear to be four types of traffic:

  • Management (communications with hosts outside the cluster, ex: inbound RDP connections)
  • Standard inter-node cluster communications (ex: heartbeat, cluster resource status updates)
  • Cluster Shared Volume traffic
  • Live Migration

However, it turns out that some clumsy wording caused confusion. Cluster communication traffic and Cluster Shared Volume traffic are exactly the same thing. That reduces our needs to three types of cluster traffic.

What About Virtual Machine Traffic?

You might have noticed that I didn’t say anything about virtual machine traffic above. The same would be true if you were working up a different kind of cluster, such as SQL. I certainly understand the importance of that traffic; in my mind, service traffic prioritizes above all cluster traffic. Understand one thing: service traffic for external clients is not clustered. So, your cluster of Hyper-V nodes might provide high availability services for virtual machine vmabc, but all of vmabc's network traffic will only use its owning node's physical network resources. So, you will not architect any cluster networks to process virtual machine traffic.

As for preventing cluster traffic from squelching virtual machine traffic, we’ll revisit that in an upcoming section.

Fundamental Terminology and Concepts

These discussions often go awry over a misunderstanding of basic concepts.

  • Cluster Name Object: A Microsoft Failover Cluster has its own identity separate from its member nodes known as a Cluster Name Object (CNO). The CNO uses a computer name, appears in Active Directory, has an IP, and registers in DNS. Some clusters, such as SQL, may use multiple CNOs. A CNO must have an IP address on a cluster network.
  • Cluster Network: A Microsoft Failover Cluster scans its nodes and automatically creates “cluster networks” based on the discovered physical and IP topology. Each cluster network constitutes a discrete communications pathway between cluster nodes.
  • Management network: A cluster network that allows inbound traffic meant for the member host nodes and typically used as their default outbound network to communicate with any system outside the cluster (e.g. RDP connections, backup, Windows Update). The management network hosts the cluster’s primary cluster name object. Typically, you would not expose any externally-accessible services via the management network.
  • Access Point (or Cluster Access Point): The IP address that belongs to a CNO.
  • Roles: The name used by Failover Cluster Management for the entities it protects (e.g. a virtual machine, a SQL instance). I generally refer to them as services.
  • Partitioned: A status that the cluster will give to any network on which one or more nodes does not have a presence or cannot be reached.
  • SMB: ALL communications native to failover clustering use Microsoft’s Server Message Block (SMB) protocol. With the introduction of version 3 in Windows Server 2012, that now includes innate multi-channel capabilities (and more!)
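Because cluster traffic rides on SMB 3, you can verify that multichannel is actually in play using the built-in SMB cmdlets. As a quick sanity check (run on any node while inter-node traffic is flowing):

```PowerShell
# List the active SMB multichannel connections from this node.
# Multiple rows for the same server name indicate that SMB is
# balancing traffic across more than one adapter/IP pairing.
Get-SmbMultichannelConnection
```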

Are Microsoft Failover Clusters Active/Active or Active/Passive?

Microsoft Failover Clusters are active/passive. Every node can run services at the same time as the other nodes, but no single service can be hosted by multiple nodes. In this usage, “service” does not mean those items that you see in the Services Control Panel applet. It refers to what the cluster calls “roles” (see above). Only one node will ever host any given role or CNO at any given time.

How Does Microsoft Failover Clustering Identify a Network?

The cluster decides what constitutes a network; your build guides it, but you do not have any direct input. Any time the cluster’s network topology changes, the cluster service re-evaluates.

First, the cluster scans a node for logical network adapters that have IP addresses. That might be a physical network adapter, a team’s logical adapter, or a Hyper-V virtual network adapter assigned to the management operating system. It does not see any virtual NICs assigned to virtual machines.

For each discovered adapter and IP combination on that node, it builds a list of networks from the subnet masks. For instance, if it finds an adapter with an IP of 192.168.10.20 and a subnet mask of 255.255.255.0, then it creates a 192.168.10.0/24 network.

The cluster then continues through all of the other nodes, following the same process.

Be aware that every node does not need to have a presence in a given network in order for failover clustering to identify it; however, the cluster will mark such networks as partitioned.
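You can see exactly what the cluster discovered with the Failover Clusters PowerShell module; a minimal sketch:

```PowerShell
# Show every network the cluster auto-detected, the subnet it derived,
# and its state (a "Partitioned" state means at least one node has no
# reachable presence on that network)
Get-ClusterNetwork | Format-Table Name, Address, AddressMask, State, Role
```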

What Happens if a Single Adapter has Multiple IPs?

If you assign multiple IPs to the same adapter, one of two things will happen. Which of the two depends on whether or not the secondary IP shares a subnet with the primary.

When an Adapter Hosts Multiple IPs in Different Networks

The cluster identifies networks by adapter first. Therefore, if an adapter has multiple IPs, the cluster will lump them all into the same network. If another adapter on a different host has an IP in one of the networks but not all of the networks, then the cluster will simply use whichever IPs can communicate.

As an example, see the following network:

The second node has two IPs on the same adapter and the cluster has added it to the existing network. You can use this to re-IP a network with minimal disruption.

A natural question: what happens if you spread IPs for the same subnet across different existing networks? I tested it a bit and the cluster allowed it and did not bring the networks down. However, it always had the functional IP pathway to use, so that doesn’t tell us much. Had I removed the functional pathways, then it would have collapsed the remaining IPs into an all-new network and it would have worked just fine. I recommend keeping an eye on your IP scheme and not allowing things like that in the first place.

When an Adapter Hosts Multiple IPs in the Same Network

The cluster will pick a single IP in the same subnet to represent the host in that network.

What if Different Adapters on the Same Host have an IP in the Same Subnet?

The same outcome occurs as if the IPs were on the same adapter: the cluster picks one to represent the node in that network and ignores the rest.

The Management Network

All clusters (Hyper-V, SQL, SOFS, etc.) require a network that we commonly dub Management. That network contains the CNO that represents the cluster as a singular system. The management network has little importance for Hyper-V, but external tools connect to the cluster using that network. By necessity, the cluster nodes use IPs on that network for their own communications.

The management network will also carry cluster-specific traffic. More on that later.

Note: Hyper-V Replica uses the management network.

Cluster Communications Networks (Including Cluster Shared Volume Traffic)

A cluster communications network will carry:

  • Cluster heartbeat information. Each node must hear from every other node within a specific amount of time (1 second by default). If it does not hear from the minimum number of nodes required to maintain quorum, then it will begin failover procedures. Failover is more complicated than that, but beyond the scope of this article.
  • Cluster configuration changes. If any configuration item changes, whether to the cluster’s own configuration or the configuration or status of a protected service, the node that processes the change will immediately transmit to all of the other nodes so that they can update their own local information store.
  • Cluster Shared Volume traffic. When all is well, this network will only carry metadata information. Basically, when anything changes on a CSV that updates its volume information table, that update needs to be duplicated to all of the other nodes. If the change occurs on the owning node, less data needs to be transmitted, but it will never be perfectly quiet. So, this network can be quite chatty, but will typically use very little bandwidth. However, if one or more nodes lose direct connectivity to the storage that hosts a CSV, all of its I/O will route across a cluster network. Network saturation will then depend on the amount of I/O the disconnected node(s) need(s).
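The heartbeat timing mentioned above is visible (and tunable) as properties on the cluster object itself. These property names are real, although the defaults vary somewhat by version:

```PowerShell
# View the inter-node heartbeat settings; delay values are in milliseconds
# (SameSubnetDelay defaults to 1000, i.e. the 1 second mentioned above)
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, `
    CrossSubnetDelay, CrossSubnetThreshold
```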

Live Migration Networks

That heading is a bit of a misnomer. The cluster does not have its own concept of a Live Migration network per se. Instead, you let the cluster know which networks you will permit to carry Live Migration traffic. You can independently choose whether or not those networks can carry other traffic.

Other Identified Networks

The cluster may identify networks that we don’t want to participate in any kind of cluster communications at all. iSCSI serves as the most common example. We’ll learn how to deal with those.

Architectural Goals

Now we know our traffic types. Next, we need to architect our cluster networks to handle them appropriately. Let’s begin by understanding why you shouldn’t take the easy route of using a singular network. A minimally functional Hyper-V cluster only requires that “management” network. Stopping there leaves you vulnerable to three problems:

  • The cluster will be unable to select another IP network for different communication types. As an example, Live Migration could choke out the normal cluster heartbeat, causing nodes to consider themselves isolated and shut down
  • The cluster and its hosts will be unable to perform efficient traffic balancing, even when you utilize teams
  • IP-based problems in that network (even external to the cluster) could cause a complete cluster failure

Therefore, you want to create at least one other network. In the pre-2012 model, we designated specific adapters to carry specific traffic types. In the 2012 and later model, we simply create at least one additional network that allows cluster communications but not client access. Some benefits:

  • Clusters of version 2012 or newer will automatically employ SMB multichannel. Inter-node traffic (including Cluster Shared Volume data) will balance itself without further configuration work.
  • The cluster can bypass trouble on one IP network by choosing another; you can help by disabling a network in Failover Cluster Manager
  • Better load balancing across alternative physical pathways

The Second Supporting Network… and Beyond

Creating networks beyond the initial two can add further value:

  • If desired, you can specify networks for Live Migration traffic, and even exclude those from normal cluster communications. Note: For modern deployments, doing so typically yields little value
  • If you host your cluster networks on a team, matching the number of cluster networks to physical adapters allows the teaming and multichannel mechanisms the greatest opportunity to fully balance transmissions. Note: You cannot guarantee a perfectly smooth balance

Architecting Hyper-V Cluster Networks

Now we know what we need and have a nebulous idea of how that might be accomplished. Let’s get into some real implementation. Start off by reviewing your implementation choices. You have three options for hosting a cluster network:

  • One physical adapter or team of adapters per cluster network
  • Convergence of one or more cluster networks onto one or more physical teams or adapters
  • Convergence of one or more cluster networks onto one or more physical teams claimed by a Hyper-V virtual switch

A few pointers to help you decide:

  • For modern deployments, avoid using one adapter or team for a cluster network. It makes poor use of available network resources by forcing an unnecessary segregation of traffic.
  • I personally do not recommend bare teams for Hyper-V cluster communications. You would need to exclude such networks from participating in a Hyper-V switch, which would also force an unnecessary segregation of traffic.
  • The most even and simple distribution involves a singular team with a Hyper-V switch that hosts all cluster network adapters and virtual machine adapters. Start there and break away only as necessary.
  • A single 10 gigabit adapter swamps multiple gigabit adapters. If your hosts have both, don’t even bother with the gigabit.

To simplify your architecture, decide early:

  • How many networks you will use. They do not need to have different functions. For example, the old management/cluster/Live Migration/storage breakdown no longer makes sense. One management and three cluster networks for a four-member team does make sense.
  • The IP structure for each network. For networks that will only carry cluster (including intra-cluster Live Migration) communication, the chosen subnet(s) do not need to exist in your current infrastructure. As long as each adapter in a cluster network can reach all of the others at layer 2 (Ethernet), you can invent any IP network that you want.

I recommend that you start off expecting to use a completely converged design that uses all physical network adapters in a single team. Create Hyper-V network adapters for each unique cluster network. Stop there, and make no changes unless you detect a problem.
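As a sketch of that fully converged starting point, the build reduces to three cmdlets. All adapter, team, and network names below are invented placeholders; adjust the member count and load balancing algorithm to your hardware and OS version:

```PowerShell
# Build one team from all physical adapters (names are examples)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind a Hyper-V virtual switch to the team; do not auto-create a management vNIC
New-VMSwitch -Name "vSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false

# Create one management vNIC plus cluster vNICs in the management OS
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster1" -SwitchName "vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster2" -SwitchName "vSwitch"
```

Note that the Dynamic load balancing algorithm requires 2012 R2 or later; on 2012, pick one of the original algorithms instead.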

Comparing the Old Way to the New Way (Gigabit)

Let’s start with a build that would have been common in 2010 and walk through our options up to something more modern. I will only use gigabit designs in this section; skip ahead for 10 gigabit.

In the beginning, we couldn’t use teaming. So, we used a lot of gigabit adapters:


There would be some variations of this. For instance, I would have added another adapter so that I could use MPIO with two iSCSI networks. Some people used Fibre Channel and would not have had iSCSI at all.

Important Note: The “VMs” that you see there means that I have a virtual switch on that adapter and the virtual machines use it. It does not mean that I have created a VM cluster network. There is no such thing as a VM cluster network. The virtual machines are unaware of the cluster and they will not talk to it (if they do, they’ll use the Management access point like every other non-cluster system).

Then, 2012 introduced teaming. We could then do all sorts of fun things with convergence. My very least favorite:

This build takes teams to an excess. Worse, the management, cluster, and Live Migration teams will be idle almost all the time, meaning that 60% of this host’s networking capacity will be generally unavailable.

Let’s look at something a bit more common. I don’t like this one, but I’m not revolted by it either:

A lot of people like that design because, so they say, it protects the management adapter from problems that affect the other roles. I cannot figure out how they perform that calculus. Teaming addresses any probable failure scenarios. For anything else, I would want the entire host to fail out of the cluster. In this build, a failure that brought the team down but not the management adapter would cause its hosted VMs to become inaccessible because the node would remain in the cluster. That’s because the management adapter would still carry cluster heartbeat information.

My preferred design follows:

Now we are architected against almost all types of failure. In a “real-world” build, I would still have at least two iSCSI NICs using MPIO.

What is the Optimal Gigabit Adapter Count?

Because we had one adapter per role in 2008 R2, we often continue using the same adapter count in our 2012+ builds. I don’t feel that’s necessary for most builds. I am inclined to use two or three adapters in data teams and two adapters for iSCSI. For anything past that, you’ll need to have collected some metrics to justify the additional bandwidth needs.

10 Gigabit Cluster Network Design

10 gigabit changes all of the equations. In reasonable load conditions, a single 10 gigabit adapter moves data more than 10 times faster than a single gigabit adapter. When using 10 GbE, you need to change your approaches accordingly. First, if you have both 10GbE and gigabit, just ignore the gigabit. It is not worth your time. If you really want to use it, then I would consider using it for iSCSI connections to non-SSD systems. Most installations relying on iSCSI-connected spinning disks cannot sustain even 2 Gbps, so gigabit adapters would suffice.

Logical Adapter Counts for Converged Cluster Networking

I didn’t include the Hyper-V virtual switch in any of the above diagrams, mostly because it would have made them more confusing. However, I would use a single team hosting a Hyper-V virtual switch to carry all of the necessary logical adapters. For a non-Hyper-V cluster, I would create a logical team adapter for each role. Remember that on a logical team, you can only have a single logical adapter per VLAN. The Hyper-V virtual switch has no such restriction. Also remember that you should not use multiple logical team adapters on any team that hosts a Hyper-V virtual switch. Some of the behavior is undefined and your build might not be supported.

I would always use these logical/virtual adapter counts:

  • One management adapter
  • A minimum of one cluster communications adapter up to n-1, where n is the number of physical adapters in the team. You can subtract one because the management adapter acts as a cluster adapter as well

In a gigabit environment, I would add at least one logical adapter for Live Migration. That’s optional because, by default, all cluster-enabled networks will also carry Live Migration traffic.

In a 10 GbE environment, I would not add designated Live Migration networks. It’s just logical overhead at that point.

In a 10 GbE environment, I would probably not set aside physical adapters for storage traffic. At those speeds, the differences in offloading technologies don’t mean that much.

Architecting IP Addresses

Congratulations! You’ve done the hard work! Now you just need to come up with an IP scheme. Remember that the cluster builds networks based on the IPs that it discovers.

Every network needs one IP address for each node. Any network that contains an access point will need an additional IP for the CNO. For Hyper-V clusters, you only need a management access point. The other networks don’t need a CNO.

Only one network really matters: management. Your physical nodes must use that to communicate with the “real” network beyond. Choose a set of IPs available on your “real” network.

For all the rest, the member IPs only need to be able to reach each other over layer 2 connections. If you have an environment with no VLANs, then just make sure that you pick IPs in networks that don’t otherwise exist. For instance, you could use 192.168.77.0/24 for one of them, as long as that’s not a “real” range on your network. Any cluster network without a CNO does not need a gateway address, so it doesn’t matter that those networks won’t be routable. That’s preferred, in fact.
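Assigning those IPs to the management-OS virtual adapters might look like the following sketch. The adapter names and ranges are invented examples; the key point is that only the management adapter gets a gateway, and only it should register in DNS:

```PowerShell
# Management vNIC: a routable IP on the "real" network, with a gateway
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.0.1.11 `
    -PrefixLength 24 -DefaultGateway 10.0.1.1

# Cluster-only vNIC: any unused range; no gateway, no DNS registration
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster1)" -IPAddress 192.168.77.11 `
    -PrefixLength 24
Set-DnsClient -InterfaceAlias "vEthernet (Cluster1)" `
    -RegisterThisConnectionsAddress $false
```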

Implementing Hyper-V Cluster Networks

Once you have your architecture in place, you only have a little work to do. Remember that the cluster will automatically build networks based on the subnets that it discovers. You only need to assign names and set them according to the type of traffic that you want them to carry. You can choose:

  • Allow cluster communication (intra-node heartbeat, configuration updates, and Cluster Shared Volume traffic)
  • Allow client connectivity to cluster resources and cluster communications (you cannot choose client connectivity without cluster communications)
  • Prevent participation in cluster communications (often used for iSCSI and sometimes connections to external SMB storage)

As much as I like PowerShell for most things, Failover Cluster Manager makes this all very easy. Access the Networks tree of your cluster:

I’ve already renamed mine in accordance with their intended roles. A new build will have “Cluster Network”, “Cluster Network 1”, etc. Double-click on one to see which IP range(s) it assigned to that network:

Work your way through each network, setting its name and what traffic type you will allow. Your choices:

  • Allow cluster network communication on this network AND Allow clients to connect through this network: use these two options together for the management network. If you’re building a non-Hyper-V cluster that needs access points on non-management networks, use these options for those as well. Important: The adapters in these networks SHOULD register in DNS.
  • Allow cluster network communication on this network ONLY (do not check Allow clients to connect through this network): use for any network that you wish to carry cluster communications (remember that includes CSV traffic). Optionally use for networks that will carry Live Migration traffic (I recommend that). Do not use for iSCSI networks. Important: The adapters in these networks SHOULD NOT register in DNS.
  • Do not allow cluster network communication on this network: Use for storage networks, especially iSCSI. I also use this setting for adapters that will use SMB to connect to a storage server running SMB version 3.02 in order to run my virtual machines. You might want to use it for Live Migration networks if you wish to segregate Live Migration from cluster traffic (I do not do or recommend that).
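If you do prefer PowerShell, those same three choices map to the Role property of each cluster network object: 3 means cluster and client, 1 means cluster communications only, and 0 means none. The network names below are examples:

```PowerShell
# Rename the auto-generated networks to meaningful labels,
# then set the allowed traffic type on each
(Get-ClusterNetwork "Cluster Network 1").Name = "Management"
(Get-ClusterNetwork "Management").Role = 3   # cluster and client

(Get-ClusterNetwork "Cluster Network 2").Name = "Cluster1"
(Get-ClusterNetwork "Cluster1").Role = 1     # cluster communications only

(Get-ClusterNetwork "Cluster Network 3").Name = "iSCSI"
(Get-ClusterNetwork "iSCSI").Role = 0        # no cluster communications
```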

Once done, you can configure Live Migration traffic. Right-click on the Networks node and click Live Migration Settings:

Check a network’s box to enable it to carry Live Migration traffic. Use the Up and Down buttons to prioritize.
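If you would rather script the same selection, Live Migration network choices live as a parameter on the “Virtual Machine” cluster resource type. This sketch excludes every cluster network except one named “LM” (an example name):

```PowerShell
# Build a semicolon-separated list of every network ID except the one
# we want to carry Live Migration traffic, then exclude them
$exclude = (Get-ClusterNetwork | Where-Object { $_.Name -ne "LM" }).Id
Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value ($exclude -join ";")
```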

What About Traffic Prioritization?

In 2008 R2, we had some fairly arcane settings for cluster network metrics. You could use those to adjust which networks the cluster would choose as alternatives when a primary network was inaccessible. We don’t use those anymore because SMB multichannel just figures things out. However, be aware that the cluster will deliberately choose Cluster Only networks over Cluster and Client networks for inter-node communications.

What About Hyper-V QoS?

When 2012 first debuted, it brought Hyper-V networking QoS along with it. That was some really hot new tech, and lots of us dove right in and lost a lot of sleep over finding the “best” configuration. And then, most of us realized that our clusters were doing a fantastic job balancing things out all on their own. So, I recommend that you avoid tinkering with Hyper-V QoS unless you have tried going without and had problems. If you must change QoS, first determine what traffic needs to be tuned or boosted. Do not simply start flipping switches, because the rest of us already tried that and didn’t get results. If you need to change QoS, start with this TechNet article.

Your thoughts?

Does your preferred network management system differ from mine? Have you decided to give my arrangement a try? How did you get on? Let me know in the comments below; I really enjoy hearing from you!

Upgrading Hyper-V 2012 R2 to Hyper-V 2016


Ready to make the jump from Hyper-V 2012 R2 to 2016? With each successive iteration of Hyper-V, the move gets easier. You have multiple ways to make the move. If you’re on the fence about upgrading, some of the techniques involve a bit less permanence.

What This Article Will Not Cover

I’m not going to show you how to install Hyper-V. The process has not changed since 2012. We probably owe the community a brief article on installing though…

I will not teach you how to use Hyper-V or its features. You need to know:

  • How to install Hyper-V
  • How to install and access Hyper-V’s native tools: Hyper-V Manager, PowerShell, and, where applicable, Failover Cluster Manager
  • How to use Hyper-V Replica, if you will be taking any of the HVR options
  • How to use Live Migration

I won’t make any special distinctions between Hyper-V Server and Windows Server with Hyper-V.

I will not show anything about workgroup configurations. Stop making excuses and join the domain.

I’m not going to talk about Windows 10, except in passing. I’m not going to talk about versions prior to 2012 R2. I don’t know if you can skip over 2012 R2.

What This Article Will Cover

What we will talk about:

  • Virtual Machine Configuration File Versions
  • Rolling cluster upgrades: I won’t spend much time on that because we already have an article
  • Cross-version Live Migration
  • Hyper-V Replica
  • Export/import
  • In-place host upgrades

Virtual Machine Configuration File Versions

Each new iteration of Hyper-V brings a new format for the virtual machine definition file. It also brings challenges when you’re running different versions of Hyper-V. Historically, Hyper-V really only wants to run virtual machines that use its preferred definition version. If it took in an older VM, it would want to upconvert it. 2016 changes that pattern a little bit. It will happily run version 5.0 VMs (2012 R2) without any conversion at all. That means that you can freely move a version 5.0 virtual machine between a system running 2012 R2 Hyper-V and a system running 2016. The Windows 10/Windows Server 2016 version of Hyper-V Manager includes a column so that you can see the version:


The version has been included in the Msvm_VirtualSystemSettingData WMI class for some time and exposed as a property in Get-VM. However, the Get-VM cmdlet in version 2 of the Hyper-V module (ships with W10/WS2016/HV2016) now includes the version in the default view:
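To check versions yourself from PowerShell:

```PowerShell
# List each VM with its configuration file version
# (version 5.0 VMs run on both 2012 R2 and 2016 without conversion)
Get-VM | Select-Object Name, State, Version
```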


The capability of 2016 to directly operate the older version enables all of the features that we’ll talk about in this article.

Rolling Cluster Upgrades

2016 gives an all-new upgrade option. “Rolling cluster upgrade” allows you to upgrade individual cluster nodes to 2016. At least, we describe it that way. More accurately, clusters of Hyper-V hosts can contain both 2012 R2 and 2016 simultaneously. So, “upgrading” may not be the correct term to use for individual nodes. You can upgrade them, of course, but you can also wipe them out and start over or replace them with all-new hardware. Whatever you’re doing, the process boils down to: take down a 2012 R2 node, insert a 2016 node.

A feature called “cluster functional level” enables this mixing of versions. When the first 2016 node joins the cluster, it becomes a “mixed mode” cluster running at a “functional level” of 2012 R2. Once the final 2012 R2 node has been removed, you just run Update-ClusterFunctionalLevel. Then, at your convenience, you can upgrade the configuration version of the virtual machines.
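The finishing steps reduce to two cmdlets, run once the last 2012 R2 node is gone:

```PowerShell
# Commit the cluster to 2016 behavior; this cannot be undone
Update-ClusterFunctionalLevel

# Later, at your convenience, upgrade each VM's configuration version
# (each VM must be off; upgraded VMs can no longer run on 2012 R2)
Get-VM | Where-Object { $_.Version -eq "5.0" } | Update-VMVersion
```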

Adrian Costea wrote a fuller article on rolling cluster upgrades.

Cross-Version Live Migration

Due to the versioning feature that we opened the article with, Live Migration can freely move a version 5.0 virtual machine between a 2012 R2 system and a 2016 system. If both of the hosts belong to the same cluster (see the previous section), then you don’t need to do anything else. Contrary to some myths being passed around, you do not need to configure anything special for intra-cluster Live Migrations to work.

To Live Migrate between hosts that do not belong to the same cluster, you need to configure constrained delegation. That has not changed from 2012 R2. However, one thing has changed: you don’t want to restrict delegation to Kerberos on 2016 systems anymore. Instead, open it up to any protocol. I provided a PowerShell script to do the work for you. If you’d rather slog through the GUI, that same article shows a screenshot of where you’d do it.
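As an aside, 2016 also supports Kerberos resource-based constrained delegation, which some administrators configure with the ActiveDirectory module instead of the property sheet. This is a different mechanism than the checkbox described above, sketched here with placeholder host names:

```PowerShell
# Allow HV16-A to delegate to HV16-B, and vice versa
# (run with domain admin rights; host names are invented examples)
$a = Get-ADComputer "HV16-A"
$b = Get-ADComputer "HV16-B"
Set-ADComputer $b -PrincipalsAllowedToDelegateToAccount $a  # A may delegate to B
Set-ADComputer $a -PrincipalsAllowedToDelegateToAccount $b  # B may delegate to A
```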

Special note on constrained delegation configuration between 2012 R2 and 2016: Constrained Delegation’s behavior can be… odd. It gets stranger when combining 2012 R2 with 2016. On a 2016 system’s property sheet, always select “Use any authentication protocol”. On a 2012 R2 system, always select “Use Kerberos only”. I found that I was able to migrate from 2016 to 2012 R2 without setting any delegation at all, which surprised me. When moving from 2012 R2, I found that I always had to start the migration from the 2016 side. Nothing I did ever allowed for a successful move when I initiated it from the 2012 R2 side. I expect that your mileage will vary. If you get errors, just try a different combination. I promise you, this migration path does work.

Cross-Version Hyper-V Replica

If you’re reading straight through, you’ll find that this section repeats much of what you’ve already seen.

Hyper-V Replica will happily move virtual machines using configuration version 5.0 between 2012 R2 and 2016 systems. The fundamental configuration steps do not change between the two versions.

Export and Import

The export feature has changed a great deal since its inception. Once upon a time, it would create an .exp file in place of the XML file. Without that .exp file, Hyper-V would not be able to import an exported virtual machine. That limitation disappeared with 2012. Since then, Hyper-V can import a virtual machine directly from its XML file. You don’t even need to export it anymore. If you wanted, you could just copy the folder structure over to a new host.

However, the export feature remains. It does two things that a regular file copy cannot:

  • Consolidation of virtual machine components. If you’ve ever looked at the settings for a virtual machine, you’d know that you can scatter its components just about anywhere. The export feature places all of a virtual machine’s files and attached VHD/Xs into a unified folder structure.
  • Active state preservation. You can export a running virtual machine, and it will resume right where it left off when imported.

When you export a virtual machine, it retains its configuration version. The import process on 2016 does not upgrade version 5.0 virtual machines. They will remain at version 5.0 until you deliberately upgrade them. Therefore, just as with Live Migration and Replica, you can use export/import to move version 5.0 virtual machines between 2012 R2 and 2016.
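The round trip itself is two cmdlets. Paths below are invented examples, and the `<guid>` stands in for the configuration file name that the export produces:

```PowerShell
# On the 2012 R2 host: export the VM (this works while it runs)
Export-VM -Name "vmabc" -Path "E:\Exports"

# On the 2016 host: import from the copied export; -Copy places the files
# under the host's default paths, and the VM stays at version 5.0
Import-VM -Path "E:\Exports\vmabc\Virtual Machines\<guid>.xml" -Copy
```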

In-Place Host Upgrades

Windows has earned a reputation for coping poorly with operating system upgrades. Therefore, a lot of people won’t even try it anymore. I can’t say that I blame them. However, a lot of people haven’t noticed that the upgrade process has changed dramatically. Once upon a time, there was a great deal of backing up and in-place overwrites. The Windows upgrade process no longer does any of that. It renames the Windows folder to Windows.old and creates an all-new Windows folder from the install image. But, the matter of merging in the old settings remains. Most problems stem from that.

I have not personally attempted an upgrade of Windows Server for many years now. I do not exactly know what would happen if you simply upgraded a 2012 R2 system directly to 2016. On paper, it should work just fine. In principle…

If you choose the direct upgrade route, I would:

  • Get a good backup and manually verify it.
  • Schedule enough time to allow for the entire thing to finish, go horribly wrong, and be rebuilt from scratch.
  • Make a regular file copy of all of the VMs to some alternative location.

Wipe and Reinstall

If you want to split the difference a bit, you could opt to wipe out Windows/Hyper-V Server without hurting your virtual machines. Doing so allows you to make a clean install on the same hardware. Just make certain that the virtual machines’ files are not in the location that you’re wiping out. You can do that with a regular file copy or by keeping them on a separate partition from the management operating system. Once the reinstall has completed, import the virtual machines. If you’re going to run them from the same location, use the Register option.
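A minimal sketch of the register-in-place step, assuming the VM files survived on a separate data volume (the path is hypothetical):

```powershell
# Register each surviving VM in place -- no file copy occurs.
# The Where-Object filter narrows the match to actual VM configuration files.
Get-ChildItem -Path 'D:\VMs' -Recurse -Filter '*.xml' |
    Where-Object { $_.Directory.Name -eq 'Virtual Machines' } |
    ForEach-Object { Import-VM -Path $_.FullName -Register }
```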

Leveraging Cross-Version Virtual Machine Migration Options

All of these options grant you a sort of “try before you commit” capability. In-place upgrades fit that category the least; going back will require some sacrifice. However, the other options allow you to move freely between the two versions.

Some people have reported encountering performance issues on 2016 that they did not have with 2012 R2. To date, I have not seen any reason to believe that 2016 possesses any inherent flaws. I haven’t personally involved myself with any of these systems, so I can only speculate. So far, these reports seem isolated, which would indicate situational rather than endemic problems. Hardware or drivers that aren’t truly ready for 2016 might cause problems like these. If you have any concerns at all, wouldn’t you like the ability to quickly revert to a 2012 R2 environment? Wouldn’t you also like to be able to migrate to 2016 at your leisure?

Cross-Version Virtual Machine Limitations

Unfortunately, this flexibility does not come without cost. Or, to put a more positive spin on it, upgrading the configuration version brings benefits. Different version levels bring their own features. I didn’t track down a complete map of versions to features, but if you upgrade from 5.0 to the current version (8.0 as of this writing), then you will enable all of the following:

  • Hot-Add and Hot-Remove of memory and network adapters
  • Production Checkpoints, plus the ability to disable checkpoints entirely
  • Key Storage Drive (Gen 1)
  • Shielded VM (Gen 2)
  • Virtual Trusted Platform Module (vTPM) (Gen 2)
  • Linux Secure Boot
  • PowerShell Direct

When you’re ready to permanently make the leap to 2016, you can upgrade a virtual machine with Update-VMVersion. You’ll also find an Upgrade Configuration Version item on the VM’s right-click menu in Hyper-V Manager:


For either method to be successful, the virtual machine must be turned off.
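A minimal sketch of the PowerShell route, assuming a VM named 'svtest':

```powershell
# Check the current configuration versions on this host
Get-VM | Select-Object -Property Name, Version

# The upgrade requires the VM to be off, and it cannot be reversed
Stop-VM -Name 'svtest'
Update-VMVersion -Name 'svtest' -Confirm:$false
```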

How to Change Virtual Switch & VLAN During Hyper-V Live Migration


Luke wrote a fantastic article that detailed how to use Compare-VM to Live Migrate a Hyper-V virtual machine to a host with a different virtual switch name. What happens if you need to change the VLAN as well? Turns out, you can do so easily.

How to Change the VLAN ID When Using Compare-VM for Live Migration

We start out the same way that we would for a standard migration when switch names don’t match. Get a compatibility report and store it in a variable:
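Something like the following, with hypothetical VM and host names:

```powershell
# Generate the compatibility report and keep it for later modification
$Report = Compare-VM -Name 'svtest' -DestinationHost 'hv2016'
```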

If we want to verify that the problem is an incompatible virtual switch, we just look at the Message property of the Incompatibilities items:

VLAN compatibility report - performing live migration

If the destination host does have a virtual switch with the same name, you won’t get this line item in the compatibility report. In fact, you might not get a compatibility report at all. We’ll come back to that situation momentarily.

It’s not quite obvious, but the output above shows you three different incompatibility items. Let’s roll up one level and see the objects themselves. We do that by only asking for the Incompatibilities property.
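Both views come from the same property; the first expands the text, the second returns the raw objects (this assumes the compatibility report was stored in $Report):

```powershell
# Human-readable messages
$Report.Incompatibilities | Select-Object -ExpandProperty Message

# The raw incompatibility objects themselves
$Report.Incompatibilities
```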

VLAN incompatibilities compare VM

We can’t read the message (why I showed you the other way first), but we can clearly see three distinct objects. The first two have no meaningful associated action; they only tell you a story. The last one, though, we can do something with. Look at the Source item on it.


VLAN incompatibilities source live migration

If the [2] doesn’t make sense, it’s array notation. The first item is 0, second is 1, and the third (the one that we’re interested in) is 2. I could have also used Where-Object with the MessageID.
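Either approach reaches the same embedded object (again assuming the report sits in $Report; 33012 is the message ID associated with the switch-mismatch error):

```powershell
# By position
$Report.Incompatibilities[2].Source

# By message ID, which doesn't depend on ordering
($Report.Incompatibilities | Where-Object { $_.MessageId -eq 33012 }).Source
```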

Can you identify that returned object? It’s a VMNetworkAdapter.

The incompatibility report embeds a copy of the virtual machine’s virtual network adapter. Luke’s article tells you to modify the network adapter’s connection during migration. However, you can modify any setting on that virtual network adapter object that you could on any other. That includes the VLAN.

Change the VLAN and the switch in the compatibility report like this:
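A sketch in the one-liner style, with a hypothetical switch name and VLAN ID, assuming the report sits in $Report:

```powershell
# Repoint the embedded adapter at the destination's switch, then set its VLAN
$Report.Incompatibilities[2].Source |
    Connect-VMNetworkAdapter -SwitchName 'vSwitch' -Passthru |
    Set-VMNetworkAdapterVlan -Access -VlanId 42
```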

I did all of that using the popular “one-liner” format. I’ve never been a huge fan of the one-liners fad; it’s usually preposterous showboating. But, if you can follow this one, it lets you work interactively. If you’d rather go multi-line, say for automation purposes, you can build something like this:
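A multi-line equivalent, with the same hypothetical names:

```powershell
# Pull the embedded adapter out of the report, then modify it step by step
$Adapter = $Report.Incompatibilities[2].Source
Connect-VMNetworkAdapter -VMNetworkAdapter $Adapter -SwitchName 'vSwitch'
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Adapter -Access -VlanId 42
```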

Once you’ve got the settings the way that you like them, perform the Live Migration:
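Assuming the modified report still sits in $Report:

```powershell
# Move-VM honors the adapter changes embedded in the report
Move-VM -CompatibilityReport $Report
```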

Don’t forget that if the VM’s storage location(s) at the destination host will be different paths than on the source, you need to specify the location(s) when you make the Compare-VM call. Otherwise, you’ll get the networking part prepared for Move-VM, but then it will fail because of storage.

Changing the VLAN without a Compatibility Report

I tried for a while to generate a malleable compatibility report when the switch names match. You can run Compare-VM, of course. Doing so will get you a VMCompatibilityReport object. But, you won’t get the 33012 object/error combination that we need to modify. There’s no way for the VLAN itself to cause an error because every Hyper-V switch supports VLAN IDs 1-4094. The .NET objects involved (Microsoft.HyperV.PowerShell.VMCompatibilityReport and Microsoft.HyperV.PowerShell.VMCompatibilityError) do not have constructors that I can figure out how to call from PowerShell. I thought of a few ways to deal with that, but they were inelegant at best.

Instead, I chose to move the VLAN assignment out of the Live Migration:
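A sketch of that two-step approach, with hypothetical names, matching the destination copy by VMId rather than by name:

```powershell
# Capture the ID before the move; names aren't guaranteed to be unique
$VMId = (Get-VM -Name 'svtest').VMId

Move-VM -Name 'svtest' -DestinationHost 'hv2016' `
    -IncludeStorage -DestinationStoragePath 'C:\LocalVMs\svtest'

# Locate the transferred VM on the destination and set its VLAN there
$MovedVM = Get-VM -ComputerName 'hv2016' | Where-Object { $_.VMId -eq $VMId }
Set-VMNetworkAdapterVlan -VM $MovedVM -Access -VlanId 42
```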

A slightly different method would involve using Get-VM first and saving that to $MovingVM, then manipulating $MovingVM. I chose this method to save other tinkerers the trouble of exploring PassThru in this context. PassThru with Move-VM captures the original virtual machine, not the transferred virtual machine. Also, I didn’t need to match by VMId. I chose that technique because virtual machine names are not guaranteed to be unique. So, you have some room to change this script to suit your needs.

Whatever modifications you come up with, you’ll wind up with a two-step operation:

  1. Move the virtual machine to the target host
  2. Change the VLAN

I hear someone overthinking it: If we’re accustomed to Live Migration causing only a minor blip in network connectivity, won’t this two-step operation cause a more noticeable delay? Yes, it will. But that’s not because we’ve split it into two steps. It’s because the VLAN is being changed. That’s always going to cause a more noticeable interruption. The amount of effort required to combine the VLAN change into the Live Migration would not yield worthwhile results.

I should also point out the utility of the $MovedVM object. We focused on the VLAN and virtual network adapter in this article. With $MovedVM, you can modify almost any aspect of the virtual machine.

Confusing Terms and Concepts in Hyper-V




If I ever got a job at Microsoft, I’d want my title to be “He Who Fixes Stupid Names and Labels”. Depending upon my mood, I envision that working out in multiple ways. Sometimes, I see myself in a product meeting with someone locked in a condescending glare, and asking, “Really? With nearly one million unique words in the English language, we’re going with ‘Core’? Again?” Other times, I see myself stomping around like Gordon Ramsay, bellowing, “This wording is so unintelligible that it could be a plot for a Zero Wing sequel!” So, now you know one of the many reasons that I don’t work for Microsoft. But, the degree of my fitness to work in a team aside, the abundance of perplexing aspects of the Hyper-V product generates endless confusion for newcomers. I’ve compiled a shortlist to help cut through a few of them.


This particular item doesn’t have a great deal of relevance to Hyper-V for most of us. On the back end, there is a great deal of intersection in the technologies. Site Recovery allows you to replicate your on-premises virtual machines into Azure. But, there’s not a lot of confusion about the technology that I’m aware of. It’s listed here, and first, as an example of what we’re up against. Think about what the word “azure” means. It is the color of a clear, cloudless sky. You think one thing when a salesman walks in and says, “Hi, we’d like to introduce you to our cloud product called ‘Azure’.” That sounds nice, right? What if, instead, he said, “Hi, we’d like to introduce you to our cloud product called ‘Cloudless’.” What?

"Microsoft Drab" Just Doesn't have the Same Ring


Azure’s misnomer appears to be benign, as it works very well and sells very well. I just want you to be aware that, if you’re confused when reading a product label or a dialog box, it’s probably not your fault. Microsoft doesn’t appear to invest many resources in the “Thoroughly Think Through Labelling” department.

What Should I Call Hyper-V, Anyway?

Some of the confusion kicks in right at the beginning. Most people know that Hyper-V is Microsoft’s hypervisor, which is good. But, then they try to explain what they’re using, and everything immediately goes off the rails.

First, there’s Hyper-V. That part, we all understand. Or, at least we think that we understand. When you just use the word “Hyper-V”, that’s just the hypervisor. It’s completely independent of how you acquired or installed or use the hypervisor. It applies equally to Hyper-V Server and Windows Server with Hyper-V.

Second, there’s Client Hyper-V. It’s mostly Hyper-V, but with different bells and whistles. You only find Client Hyper-V in the client editions of Windows, conveniently enough. So, if you’ve installed some product whose name includes the word “Server”, then you are not using Client Hyper-V. Simple enough, right?

Third, there’s the fictitious “Hyper-V Core”. I’ve been trying to get people to stop saying this for years, but I’m giving up now. Part of it is that it’s just not working. Another part of it:


With Microsoft actively working against me, I don’t like my odds. Sure, they’ve cleaned up a lot of these references, but I suspect that they’ll never completely go away.

What I don’t like about the label/name “Hyper-V Core” is that it implies the existence of “Hyper-V not Core”. Therefore, people download Hyper-V Server and want to know why it’s all command-line based. People will also go to the forums and ask for help with “Hyper-V Core”, so then there’s at least one round of, “What product are you really using?”

What Does it Mean to “Allow management operating system to share this network adapter”?

The setting in question appears on the Virtual Switch Manager’s dialog when you create a virtual switch in Hyper-V Manager:


The corresponding PowerShell parameter for New-VMSwitch is AllowManagementOs.

If I had that job that we were talking about a bit ago, that Hyper-V Manager line would say, “Connect the management operating system to this virtual switch.” The PowerShell parameter would be ConnectManagementOs. Then the labels would be true, explainable, and comprehensible.

Whether you choose the Hyper-V Manager path or the PowerShell route, this function creates a virtual network adapter for the management operating system and attaches it to the virtual switch that you’re creating. It does not “share” anything, at least not in any sense that this phrasing evokes. For more information, we have an article that explains the Hyper-V virtual switch.
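In PowerShell terms, with hypothetical switch and adapter names, the checkbox is equivalent to:

```powershell
# Creates the virtual switch AND a management-OS virtual adapter attached to it
New-VMSwitch -Name 'vSwitch' -NetAdapterName 'Ethernet' -AllowManagementOS $true

# The same management-OS connection can be added or removed afterward
# (by default, the management-OS adapter takes the switch's name)
Add-VMNetworkAdapter -ManagementOS -SwitchName 'vSwitch'
Remove-VMNetworkAdapter -ManagementOS -Name 'vSwitch'
```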

I Downloaded and Installed Hyper-V. Where Did My Windows 7/8/10 Go?

I see this question often enough to know that there are a significant number of people that encounter this problem. The trainer in me must impart a vital life lesson: If the steps to install a product include anything like “boot from a DVD or DVD image”, then it is making substantial and potentially irreversible changes.

If you installed Hyper-V Server, your original operating environment is gone. You may not be out of luck, though. If you didn’t delete the volume, then your previous operating system is in a folder called “Windows.old”. Don’t ask me or take this to the Hyper-V forums, though, because this is not a Hyper-V problem. Find a forum for the operating system that you lost and ask how to recover it from the Windows.old folder. There are no guarantees.

Many of the people that find themselves in this position claim that Microsoft didn’t warn them, which is absolutely not true.

The first warning occurs if you attempt to upgrade. It prevents you from doing so and explicitly says what the only other option, “Custom”, will do:


If you never saw that because you selected Custom first, then you saw this warning:


That warning might be a bit too subtle, but you had another chance. After choosing Custom, you then decided to either install over the top of what you had or delete a partition. Assuming that you opted to use what was there, you saw this dialog:


The dialog could use some cleanup to cover the fact that it might have detected something other than a previous installation of Hyper-V Server, but there’s a clear warning that something new is pushing out something old. If you chose to delete the volume so that you could install Hyper-V Server on it, that warning is inescapably blatant:


If this has happened to you, then I’m sorry, but you were warned. You were warned multiple times.

How Many Hyper-V Virtual Switches Should I Use?

I often see questions in this category from administrators that have VMware experience. Hyper-V’s virtual switch is markedly different from what VMware does, so you should not expect a direct knowledge transfer.

The default answer to this question is always “one”. If you’re going to be putting your Hyper-V hosts into a cluster, that strengthens the case for only one. A single Hyper-V virtual switch performs VLAN isolation and identifies local MAC addresses to prevent superfluous trips to the physical network for intra-VM communications. So, you rarely gain anything from using two or more virtual switches. We have a more thorough article on the subject of multiple Hyper-V switches.

Checkpoint? Snapshot? Well, Which Is it?

To save time, I’m going to skip definitions here. This is just to sort out the terms. A Hyper-V checkpoint is a Hyper-V snapshot. They are not different. The original term in Hyper-V was “snapshot”. That caused confusion with the Volume Shadow Copy Service (VSS) snapshot. Hyper-V’s daddy, “Virtual Server”, used the term “checkpoint”. System Center Virtual Machine Manager has always used the term “checkpoint”. The “official” terms have been consolidated into “checkpoint”. You’ll still find many references to snapshots, such as:


But We Officially Don’t Say “Snapshot”

We writers are looking forward to many more years of saying “checkpoint (or snapshot)”.
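The cmdlet names preserve the mixed history; this sketch uses a hypothetical VM name:

```powershell
# You *checkpoint* a virtual machine...
Checkpoint-VM -Name 'svtest' -SnapshotName 'Before patching'

# ...but you list, revert to, and delete *snapshots*
Get-VMSnapshot -VMName 'svtest'
Restore-VMSnapshot -VMName 'svtest' -Name 'Before patching' -Confirm:$false
Remove-VMSnapshot -VMName 'svtest' -Name 'Before patching'
```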

Do I Delete a Checkpoint? Or Merge It? Or Apply It? Or Something Else? What is Going on Here?

If you’re the person that developed the checkpoint actions, all of these terms make a lot of sense. If you’re anyone else, they’re an unsavory word soup.

  • Delete: “Delete” is confusing because deleting a checkpoint keeps your changes. Coming into this cold, you might think that deleting a checkpoint would delete changes. Just look under the hood, though. When you create a checkpoint, it makes copies of the virtual machine’s configuration files and starts using new ones. When you delete that checkpoint, that tells Hyper-V to delete the copies of the old configuration. That makes more sense, right? Hyper-V also merges the data in post-checkpoint differencing disks back into the originals, then deletes the differencing disks.
  • Merge (checkpoint): When you delete a checkpoint (see previous bullet point), the differencing disks that were created for its attached virtual hard disks are automatically merged back into the original. You can’t merge a checkpoint, though. That’s not a thing. That can’t be a thing. How would you merge a current VM with 2 vCPUs and its previous setting with 4 vCPUs? Split the difference? Visitation of 2 vCPUs every other weekend?
  • Merge (virtual hard disk): First, make sure that you understand the previous bullet point. If there’s a checkpoint, you want to delete it and allow that process to handle the virtual hard disk merging on your behalf. Otherwise, you’ll bring death and pestilence. If the virtual hard disk in question is not related to a checkpoint but still has a differencing disk, then you can manually merge them.
  • Apply: The thought process behind this term is just like the thinking behind Delete. Remember those copies that your checkpoint made? When you apply the checkpoint, the settings in those old files are applied to the current virtual machine. That means that applying a checkpoint discards your changes. As for the virtual hard disks, Hyper-V stops using the differencing disk that was created when the virtual machine was checkpointed and starts using a new differencing disk that is a child of the original virtual hard disk. Whew! Get all of that?
  • Revert: This verb makes sense to everyone, I think. It reverts the current state of the virtual machine to the checkpoint state. Technologically, Hyper-V applies the settings from the old files and discards the differencing disk. It creates a new, empty differencing disk and starts the virtual machine from it. In fact, the only difference between Revert and Apply is the opportunity to create another checkpoint to hold the changes that you’re about to lose. If I had that job, there would be no Apply. There would only be Revert (keep changes in a new checkpoint) and Revert (discard changes).

If this is tough to keep straight, it might make you feel better to know that my generation was expected to remember that Windows boots from the system disk to run its system from the boot disk. No one has ever explained that one to me. When you’re trying to keep this checkpoint stuff straight, just try to think of it from the perspective of the files that constitute a checkpoint.

If you want more information on checkpoints, I happen to like one of my earlier checkpoint articles. I would also recommend searching the blog on the “checkpoint” keyword, as we have many articles written by myself and others.

Dynamic Disks and Dynamically Expanding Virtual Hard Disks

“Dynamically expanding virtual hard disk” is a great big jumble of words that nobody likes to say. So, almost all of us shorten it to “dynamic disk”. Then, someone sees that the prerequisites list for the product that they want to use says, “does not support dynamic disks”. Panic ensues.

Despite common usage, these terms are not synonymous.

With proper planning and monitoring, dynamically expanding hard disks are perfectly safe to use.

Conversely, Dynamic disks are mostly useless. A handful of products require them, but hopefully they’ll all die soon (or undergo a redesign, which could work too). In the absence of an absolute, defined need, you should never use Dynamic disks. The article linked in the previous paragraph explains the Dynamic disk, if you’re interested. For a quicker explanation, just look at this picture from Disk Management:

Basic and Dynamic Disks


Dynamic disks, in the truest sense of the term, are not a Hyper-V technology.

Which Live Migration Do I Want?

I was attempting to answer a forum question in which the asker was configuring Constrained Delegation so that he could Live Migrate a virtual machine from one physical cluster node to another physical node in the same cluster. I rightly pointed out that nodes in the same cluster do not require delegation. It took a while for me to understand that he was attempting to perform a Shared Nothing Live Migration of an unclustered guest between the two nodes. That does require delegation in some cases.

To keep things straight, understand that Hyper-V offers multiple virtual machine migration technologies. Despite all of them including the word “migration” and most of them including the word “live”, they are different. They are related because they all move something in Hyper-V, but they are not interchangeable terms.

This is the full list:

  • Quick Migration: Quick Migration moves a virtual machine from one host to another within a cluster. That is, the virtual machine must be clustered, not simply stored on a cluster node. It is usually the fastest of the migration techniques because nothing is transmitted across the network. If the virtual machine is on, it is first saved. Ownership is transferred to the target node. If the virtual machine was placed in a saved state for the move, it is resumed.
  • Live Migration: A Live Migration has the same requirement as a Quick Migration: it is only applicable to clustered virtual machines. Additionally, the virtual machine must be turned on (otherwise, it wouldn’t be “live”). Live Migration is slower than Quick Migration because CPU threads, memory, and pending I/O must be transferred to the target host, but it does not involve an interruption in service. The virtual machine experiences no outage except for the propagation of its network adapters’ MAC address change throughout the network.
  • Storage Live Migration: A Storage Live Migration involves the movement of any files related to a virtual machine. It could be all of them, or it could be any subset. “Storage Live Migration” is just a technology name; the phrase never appears anywhere in any of the tools. You select one of the options to “Move” and then you choose to only move storage. You can choose a new target location on the same host or remote storage, but a Storage Live Migration by itself cannot change a virtual machine’s owner to a new physical host. Unlike a “Live Migration”, the “Live” in “Storage Live Migration” is optional.
  • Shared Nothing Live Migration: The “Shared Nothing” part of this term can cause confusion because it isn’t true. The “live” bit doesn’t help, because the VM can be off or saved, if you want. The idea is that the source and destination hosts don’t need to be in the same cluster, so they don’t need to share a common storage pool. Their hosts do need to share a domain and at least one network, though. I’m not sure what I would have called this one, so maybe I’m glad that I don’t have that job. Anyway, as with Storage Live Migration, you’ll never see this phrase in any of the tools. It’s simply one of the “move” options.
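As a quick map from term to tooling, each migration type has its own PowerShell entry point (all names are hypothetical):

```powershell
# Quick or Live Migration of a clustered VM between cluster nodes
Move-ClusterVirtualMachineRole -Name 'svtest' -Node 'hv02' -MigrationType Live

# Storage Live Migration: relocate the files, same owning host
Move-VMStorage -VMName 'svtest' -DestinationStoragePath 'D:\VMs\svtest'

# Shared Nothing Live Migration: new host, no shared cluster or storage
Move-VM -Name 'svtest' -DestinationHost 'hv03' `
    -IncludeStorage -DestinationStoragePath 'D:\VMs\svtest'
```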

If you’re seeking help from others, it’s important to use the proper term. Otherwise, your confusion will become their confusion and you might never find any help.

What Else?

I’ve been doing this long enough that I might be missing other things that just don’t make sense. Let us know what’s boggled you about Hyper-V and we’ll add it to the list.

Your definitive guide to Troubleshoot Hyper-V Live Migration


As you saw in our earlier article that explained how Live Migration works, and how you can get it going in 2012 and 2012 R2, it is a process that requires a great deal of cooperation between the source and target computers. It’s also a balancing act involving a number of security concerns. Failures within a cluster are uncommon, especially if the cluster has passed the validation wizard. Failures in Shared Nothing Live Migrations are more likely. Most issues are simple to isolate and troubleshoot.

General Troubleshooting

There are several known problem-causers that I can give you direct advice on. Some are less common. If you can’t find exactly what you’re looking for in this post, I can at least give you a starting point.

Migration-Related Event Log Entries

If you’re moving clustered virtual machines, the Cluster Events node in Failover Cluster Manager usually collects all the relevant events. If they’ve been reset or expired from that display, you can still use Event Viewer at these paths:

  • Applications and Service Logs\Microsoft\Windows\Hyper-V-High-Availability\Admin
  • Applications and Service Logs\Microsoft\Windows\Hyper-V-VMMS\Admin

The “Hyper-V-High-Availability” tree usually has the better messages, although it has a few nearly useless ones, such as event ID 21111, “Live migration of ‘Virtual Machine VMName’ failed.” Most Live Migration errors come with one of three statements:

  • Migration operation for VMName failed
  • Live Migration did not succeed at the source
  • Live Migration did not succeed at the destination

These will usually, but not always, be accompanied by supporting text that further describes the problem. “Source” messages often mean that the problem is so bad and obvious that Hyper-V can’t even attempt to move the virtual machine. These usually have the most helpful accompanying text. “Destination” messages usually mean that either there is a configuration mismatch that prevents migration or the problem did not surface until the migration was either underway or nearly completed. You might find that these have no additional information or that what is given is not very helpful. In that case, specifically check for permissions issues and that the destination host isn’t having problems accessing the virtual machine’s storage.

Inability to Create Symbolic Links

As we talked about each virtual machine migration method in our explanatory article, part of the process is for the target host to create a symbolic link in C:\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines. This occurs under a special built-in credential named NT Virtual Machine\Virtual Machines, which has a “well-known” (read as: always the same) security identifier (SID) of S-1-5-83-0.

Some attempts to harden Hyper-V result in a domain policy that grants the Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment\Create symbolic links right only to the built-in Administrators group. Doing so will certainly cause Live Migrations to fail and can sometimes cause other virtual machine creation events to fail.

Your best option is to just not tinker with this branch of group policy. I haven’t ever even heard of an attack mitigated by trying to improve on the contents of this area. If you simply must override it from the domain, then add in an entry in your group policy for it. You can just type in the full name as shown in the first paragraph of this section.

Create Symbolic Link



Note: The “Log on as a service” right must also be assigned to the same account. Not having that right usually causes more severe problems than Live Migration issues, but it’s mentioned here for completeness.

Inability to Perform Actions on Remote Computers

Live Migration and Shared Nothing Live Migration invariably involve at least two computers. If you’re sitting at your personal computer with Failover Cluster Manager or a PowerShell prompt open, telling Host A to migrate a virtual machine to Host B, then three computers are involved. Most Access Denied errors during Live Migrations stem from this multi-computer arrangement.

Solution 1: CredSSP

CredSSP is kind of a terrible thing. It allows one computer to store the credentials for a user or computer and then use them on a second computer. It’s sort of like cached credentials, only transmitted over the network. It’s not overly insecure, but it’s also not something that security officers are overly fond of. You can set this option on the Authentication protocol section of the Advanced Features section of the Live Migration configuration area on the Hyper-V Settings dialog.

Live Migration Advanced Settings



CredSSP has the following cons:

  • Not as secure as Kerberos
  • Only works when logged in directly to the source host

CredSSP has only one pro: You don’t have to configure delegation. My preference? Configure delegation.

Solution 2: Delegation

Delegation can be a bit of a pain to configure, but in the long-term it is worth it. Delegation allows one computer to pass on Kerberos tickets to other computers. It doesn’t have CredSSP’s hop limit; computers can continue passing credentials on to any computer that they’re allowed to delegate to as far as is necessary.

Delegation has the following cons:

  • It can be tedious to configure.
  • If not done thoughtfully, it can needlessly expose your environment to security risks.

Delegation’s major pro is that as long as you can successfully authenticate to one host that can delegate, you can use it to Live Migrate to or from any host it has delegation authority for.

As for the “thoughtfulness” part, the first step is to use Constrained Delegation. It is possible to allow a computer to pass on credentials for any purpose, but that is unnecessary.

Delegation is done using Active Directory Users and Computers or PowerShell. I have written an article that explains both ways and includes a full PowerShell script to make this much easier for multiple machines.

Be aware that delegation is not necessary for Quick or Live Migrations between nodes of the same cluster.
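As a rough sketch of the constrained delegation entry itself (see the linked article for a complete script; the host and domain names here are hypothetical, and this shows only the classic Kerberos delegation list):

```powershell
# Allow HV1 to delegate only the services Live Migration needs against HV2
$Services = @(
    'Microsoft Virtual System Migration Service/HV2.domain.local',
    'cifs/HV2.domain.local'
)
Set-ADComputer -Identity 'HV1' -Add @{ 'msDS-AllowedToDelegateTo' = $Services }
```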

Mismatched Physical CPUs

Since you’re taking active threads and relocating them to a CPU in another computer, it seems only reasonable that the target CPU must have the same instruction set as the source. If it doesn’t, the migration will fail. There are hard and soft versions of this story. If the CPUs are from different manufacturers, that’s a hard stop. Live Migration is not possible. If the CPUs are from the same manufacturer, that could be a soft problem. Use CPU compatibility mode:

CPU Compatibility



As shown in the screenshot, the virtual machine needs to be turned off to change this setting.
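In PowerShell, the same setting looks like this (the VM name is hypothetical):

```powershell
# The VM must be off before the processor setting can change
Stop-VM -Name 'svtest'
Set-VMProcessor -VMName 'svtest' -CompatibilityForMigrationEnabled $true
```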

A very common question is: What are the effects of CPU compatibility mode? For almost every standard server usage, the answer is: none. Every CPU from a manufacturer has a common core set of available instructions and usually a few common extensions. Then, they have extra function sets. Applications can query the CPU for its CPUID information, which contains information about its available function sets. When the compatibility mode box is checked, all of those extra sets are hidden; the virtual machine and its applications can only see the common instruction sets. These extensions are usually related to graphics processing and are almost never used by any server software. So, VDI installations might have trouble when enabling this setting, but virtualized server environments usually will not.

This screenshot was taken from a virtual machine with compatibility disabled using CPU-Z software:

CPU Compatibility Off



The following screenshot shows the same virtual machine with no change made except the enabling of compatibility mode:

CPU Compatibility On



Notice how many things are the same and what is missing from the Instructions section.

Insufficient Resources

If the target system cannot satisfy the memory or disk space requirements of the virtual machine, any migration type will fail. These errors are usually very specific about what isn’t available.

Virtual Switch Name Mismatch

The virtual machine must be able to connect its virtual adapter(s) to virtual switch(es) with the same name(s) on the target host. Furthermore, a clustered virtual machine cannot move if it is using an internal or private virtual switch, even if the target host has a switch with the same name.

If it’s a simple problem with a name mismatch, you can use Compare-VM to overcome the problem while still performing a Live Migration. The basic process is to use Compare-VM to generate a report, then pass that report to Move-VM. Luke Orellan has written an article explaining the basics of using Compare-VM. If you need to make other changes, such as where the files are stored, notice that Compare-VM has all of those parameters. If you use a report from Compare-VM with Move-VM, you cannot supply any other parameters to Move-VM.
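A sketch of that process; the host and switch names are placeholders, and the adapter filter is one reasonable way to pick out switch-name incompatibilities from the report:

```powershell
# generate a compatibility report for the proposed move
$report = Compare-VM -Name 'svtest' -DestinationHost 'HV2'
$report.Incompatibilities | Format-Table -AutoSize    # review what blocks the move

# for a switch-name mismatch, reconnect each flagged adapter in the report's copy of the VM
$report.Incompatibilities.Source |
    Where-Object { $_ -is [Microsoft.HyperV.PowerShell.VMNetworkAdapter] } |
    Connect-VMNetworkAdapter -SwitchName 'vSwitch'

# hand the corrected report to Move-VM; no other parameters are allowed with it
Move-VM -CompatibilityReport $report
```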

Live Migration Error 0x8007274C

This is a very common Live Migration error that almost always traces back to network problems. If the source and destination hosts are in the same cluster, start by running the Cluster Validation Wizard, only specifying the network tests. That might tell you right away what the problem is. Other possibilities:

  • Broken or not completely attached cables
  • Physical adapter failures
  • Physical switch failures
  • Switches with different configurations
  • Teams with different configurations
  • VMQ errors
  • Jumbo frame misconfiguration

If the problem is intermittent, check teaming configurations first; one pathway might be clear while another has a problem.

Storage Connectivity Problems

A maddening cause for some “Live Migration did not succeed at the destination” messages is a problem with storage connectivity. These aren’t always obvious, because everything might appear to be in order. Do any independent testing that you can. Specifically:

  • If the virtual machine is placed on SMB 3 storage, double-check that the target host has Full Control on the SMB 3 share and the backing NTFS locations. If possible, remove and re-add the host.
  • If the virtual machine is placed on iSCSI or remote Fibre Channel storage, double-check that it can work with files in that location. iSCSI connections sometimes fail silently. Try disconnecting and reconnecting to the target LUN. A reboot might be required.
Hyper-V Live Migration methods in 2012 and 2012 R2 VMs


The purpose of this article is to demonstrate the various methods for performing Live Migrations in Hyper-V. You can check out our previous article that provides a step-by-step guide to how Hyper-V Live Migration works, or our troubleshooting guide to Hyper-V Live Migration if you run into any errors.

What is Live Migration?

Live Migration is the name of Hyper-V’s technology that moves virtual machines between physical hosts without service interruption. This feature appears in multiple forms:

  • Live Migration: While often used as an umbrella term, the most precise meaning for these two words is the transfer of ownership of a clustered virtual machine from one node to another node within the same cluster.
  • Storage Live Migration: A Storage Live Migration transfers one or more of a virtual machine’s constituent files from one location to another without interrupting service to the owning virtual machine. A Storage Live Migration might or might not occur in conjunction with a Live Migration.
  • Shared Nothing Live Migration: A Shared Nothing Live Migration is similar to a standard Live Migration in that it transfers a virtual machine from one host to another without interruption, but the primary difference is that the virtual machine cannot be clustered. The most common utilization for a Shared Nothing Live Migration is moving a virtual machine from one standalone host to another. However, it is possible to transfer from a node that is a cluster member, provided that the virtual machine is not a cluster role at the time of transfer. Shared Nothing Live Migration typically occurs in conjunction with a Storage Live Migration, but it is not necessary if the virtual machine’s files are on an SMB 3 share accessible by both the source and target physical hosts.

What is Quick Migration?

Quick Migration is an earlier technology that moved virtual machines between cluster nodes but does cause a brief service interruption. Live Migration is preferred for running virtual machines, but because the active state of the virtual machine is saved to disk instead of transferred and actively synchronized over the network, Quick Migration usually enjoys a noticeably lower total operation time. Quick Migration is the only way for non-running virtual machines to be transferred.

PowerShell vs. GUI Methods

This article will demonstrate both PowerShell and GUI methods (via Hyper-V Manager and Failover Cluster Manager). While I normally make it a point to encourage everyone to learn and use PowerShell whenever possible, I make no particular preference on Quick and Live Migrations. I recommend that you at least learn how the PowerShell techniques work, especially if you are considering building any automated solutions. For day-to-day operations, the GUI is easier. I’ll show you the PowerShell techniques first with the GUI methods following.

Quick Migrations in PowerShell

PowerShell can easily move virtual machines:
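A representative command, using the FailoverClusters module (VM and node names are placeholders):

```powershell
# Quick Migration of a clustered virtual machine to a specific node
Move-ClusterVirtualMachineRole -Name 'svtest' -Node 'HV2' -MigrationType Quick
```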

If you do not specify –Node, the cluster decides where to place the virtual machine(s).

To move multiple virtual machines simultaneously, you can use this cmdlet in a loop or pipe in output from Get-VM.

Live Migrations in PowerShell

The PowerShell syntax for Live Migrations is even easier than Quick Migrations:
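The same cmdlet handles it; a sketch with placeholder names:

```powershell
# Live Migration of a running clustered virtual machine
Move-ClusterVirtualMachineRole -Name 'svtest' -Node 'HV2' -MigrationType Live
```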

If you do not specify –Node, the cluster decides where to place the virtual machine(s).

To move multiple virtual machines simultaneously, you can use this cmdlet in a loop or pipe in output from Get-VM.

For Live Migrations, you can also use Move-VM:
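A sketch; the destination host name is a placeholder:

```powershell
# Live Migration via the Hyper-V module; the VM must be running
Move-VM -Name 'svtest' -DestinationHost 'HV2'
```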

Move-VM requires a destination host to be specified and will fail if the virtual machine is not running. If it is not run from the source host, it will also fail unless delegation is enabled. See the Troubleshooting section.

Storage Live Migrations Using PowerShell

Another capability of the Move-VM cmdlet is relocation of a virtual machine’s storage. That usage is probably better explained by its relation to Shared Nothing Live Migration, so that section will explain it further. However, there is a cmdlet just for Storage Live Migration: Move-VMStorage. This cmdlet has quite a few options, not all of which will be discussed here. Use Get-Help to see the full list or read the related page on TechNet. The cmdlet does have four separate parameter sets, but there are only two major variants. Your major decision is whether you want to move all of the VM’s files to the same location or to specify locations for individual components. You can make either of these moves by specifying the VM by its name or by using its VM object (as in, from Get-VM). The examples shown here only specify the virtual machine by its name.

Using Move-VMStorage to Relocate All of VM’s Files to a Single Location

Using a single location is the easiest:
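For example:

```powershell
# moves configuration, checkpoints, smart paging, and VHD/X files under G:\
Move-VMStorage -VMName 'svtest' -DestinationStoragePath 'G:\'
```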

The above example moves all of the files for the virtual machine named “svtest” to the G:\ drive. In this case, that is a Cluster Disk (not a CSV) that has been specifically assigned to the virtual machine svtest. This usage results in the following:

  • If a folder named Snapshots does not exist on G:, it will be created. Any VM configuration files that belong to an existing checkpoint for the VM will be placed there.
  • If a folder named Virtual Hard Disks does not exist on G:, it will be created. Any virtual hard disks attached to the VM will be placed there.
  • If a folder named Virtual Machines does not exist on G:, it will be created. The VM’s XML description file will be placed in it. A subfolder named with the virtual machine’s ID will be created. The VM’s BIN and VSV will be placed there.

Using Move-VMStorage to Separate a VM’s Files

Multiple locations can be somewhat complicated because the VHD/X files need to be in an array of hash tables. Each hash table in this array needs to be in the format: @{‘SourceFilePath’ = ‘G:\Virtual Hard Disks\svtest.vhdx’; ‘DestinationFilePath’ = ‘C:\ClusterStorage\CSV1\Virtual Hard Disks\svtest.vhdx’}. The locations and VHDX files are specific to my installation, of course, but the “SourceFilePath” and “DestinationFilePath” key names are required. Notice the @{} wrapper; this is what makes it a hash table. Every VHD/X that you’re going to move needs to be placed inside one of these, and then all of those hash tables (even if there is only one) need to be placed inside a standard PowerShell array, whose wrapper is @(). Separate the hash tables inside the array with commas. An example of such an array of hash tables:
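A sketch, using the paths from the paragraph above plus a hypothetical second data disk:

```powershell
# one hash table per VHD/X to move, gathered into a standard array
$vhdMoves = @(
    @{'SourceFilePath'      = 'G:\Virtual Hard Disks\svtest.vhdx';
      'DestinationFilePath' = 'C:\ClusterStorage\CSV1\Virtual Hard Disks\svtest.vhdx'},
    @{'SourceFilePath'      = 'G:\Virtual Hard Disks\svtest-data.vhdx';
      'DestinationFilePath' = 'C:\ClusterStorage\CSV1\Virtual Hard Disks\svtest-data.vhdx'}
)
```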

A more complete example:
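A sketch of what the full command would look like (paths are from my installation; yours will differ):

```powershell
Move-VMStorage -VMName 'svtest' `
    -VirtualMachinePath 'C:\ClusterStorage\CSV1\svtest' `
    -Vhds @(
        @{'SourceFilePath'      = 'G:\Virtual Hard Disks\svtest.vhdx';
          'DestinationFilePath' = 'C:\ClusterStorage\CSV1\Virtual Hard Disks\svtest.vhdx'}
    )
```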

Notice that I used neither the SnapshotFilePath nor the SmartPagingFilePath parameter. Whereas the Failover Cluster Manager tool will leave any files that you did not specifically instruct it to move in their original location, the cmdlet as issued here will move all unspecified components to the location given in the VirtualMachinePath parameter, if it is specified.

Preparing a Clustered Virtual Machine for a Shared Nothing Live Migration Using PowerShell

Before you can move a clustered virtual machine, you must remove it from its cluster. This operation does not cause downtime for the virtual machine, but it may cause issues for other applications. For instance, System Center Virtual Machine Manager will mark any virtual machine on a Cluster Shared Volume that isn’t clustered as “unsupported configuration” and will refuse to allow you to manipulate it within that console. Also, running a non-clustered virtual machine on a cluster member’s shared storage (besides SMB 3) can have other undesirable effects if any failures should occur, so do not perform these steps until you are ready to start the Shared Nothing Live Migration.

To prepare a clustered virtual machine for Shared Nothing Live Migration using PowerShell:
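A sketch; the group name is a placeholder:

```powershell
# find the VM's cluster group name first; it may not match the VM's name
Get-ClusterGroup

# remove the group and its resources from the cluster; the VM itself is untouched
Remove-ClusterGroup -Name 'svtest' -RemoveResources
```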

Note 1: The name of the virtual machine’s cluster group does not always match the name of the virtual machine. This is especially likely to be true if you created the virtual machine using System Center Virtual Machine Manager. You can use Get-ClusterGroup to view all clustered roles.

Note 2: Remove-VMFromCluster is an alias for Remove-ClusterGroup.

Shared Nothing Live Migration Using PowerShell

Shared Nothing Live Migration is performed with Move-VM. It works almost exactly like Move-VMStorage, but adds a DestinationHost parameter. In fact, everything you learned about Move-VMStorage above (in the Storage Live Migrations Using PowerShell section) applies to Move-VM.
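A minimal sketch; the destination host and storage path are placeholders:

```powershell
# move the running VM and all of its files to local storage on the target host
Move-VM -Name 'svtest' -DestinationHost 'HV-Standalone' -DestinationStoragePath 'D:\VMs\svtest'
```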


Performing a Quick Migration in Failover Cluster Manager

Failover Cluster Manager is the only native GUI tool that can perform a Quick Migration.

  1. Switch to the Roles tree node.
  2. To move multiple VMs, use CTRL+Click to select them.
  3. Right-click the VM(s) to be moved.
  4. In the context menu, go to Move and Quick Migration.
    1. To allow the cluster to select the destination, click Best Possible Node.
    2. To manually choose the destination, click Select Node… This will cause a dialog to appear with all cluster nodes displayed. Double-click one or highlight it and click OK.
Quick Migration in Failover Cluster Manager



Performing Live Migrations in Failover Cluster Manager

As with Quick Migrations, Failover Cluster Manager is the only native GUI tool that can perform a Live Migration between cluster nodes.

  1. Switch to the Roles tree node.
  2. To move multiple VMs, use CTRL+Click to select them.
  3. Right-click the VM(s) to be moved.
  4. In the context menu, go to Move and Live Migration.
    1. To allow the cluster to select the destination, click Best Possible Node.
    2. To manually choose the destination, click Select Node… This will cause a dialog to appear with all cluster nodes displayed. Double-click one or highlight it and click OK.
Live Migration Example



If you have selected more virtual machines than can be migrated at once, the cluster will choose the maximum number and begin moving them. The rest will show a status of Migration queued.

Performing a Storage Live Migration

A Storage Live Migration can be performed for non-clustered virtual machines using Hyper-V Manager. Only Failover Cluster Manager can handle them for clustered virtual machines.

Storage Live Migrations in Hyper-V Manager

For non-clustered virtual machines:

  1. Right-click on the virtual machine in the list and select Move…
  2. The Move Wizard will open. The first screen is informational. Click Next when ready.
  3. On the Choose Move Type screen, choose Move the virtual machine’s storage and click Next.
    Storage Live Migration Selection



  4. Now, you’ll need to make a generic choice of what you want to move. There are three options:
    1. Move all of the virtual machine’s data to the same location. This will place all of the virtual machine’s files except its VHD/Xs in a ‘Virtual Machines’ folder at the target location. All of the hard disks will be placed in a ‘Virtual Hard Disks’ subfolder.
    2. Move the virtual machine’s data to different locations. You’ll be prompted for the location of each component. This is the only option we’ll go through in this walk-through as it is a superset of the other two.
    3. Move only the virtual machine’s hard disks. This will skip the component section and have you pick which VHD/X files to move and where to put them.
      Move Options



  5. After that, you’ll make more specific decisions about what to move. If you picked the first option in step 4, you’ll just have a single target location. Otherwise, you’ll have the screen as shown in the following screen shot. Whether or not the components are visible depends on whether you chose the second or third option in step 4. Uncheck anything that you want to leave behind.
    Move Components



  6. You will now progress through a series of screens that ask you where to place each individual item. How many of these screens you see depends on your selections in steps 4 and 5. Each looks similar to the following, and you will not be allowed to proceed until you have selected a suitable destination for the current item. When the wizard validates the target location, it will fill in the Available space field. If you’re manually typing, it attempts to validate the storage location after each letter, so results are immediate.
    Destination Path Selection



  7. Once you have made all of your selections, you will be presented with a summary screen. If all of your selections are correct, click Finish to start the move.

Storage Live Migrations in Failover Cluster Manager

Failover Cluster Manager is the tool to use for clustered virtual machines.

  1. Switch to the Roles tree node.
  2. To move multiple VMs, use CTRL+Click to select them.
  3. Right-click the VM(s) to be moved.
  4. In the context menu, go to Move and Virtual Machine Storage.
    Starting Failover Cluster Storage Live Migration



  5. The Move Virtual Machine Storage dialog will open. Select destination(s) for the virtual machine’s files and click Start to begin the move. This dialog and its components are explained below.

The Storage Live Migration window has quite a few pieces to give you fine-grained control over the placement of your virtual machine’s files. The following is an index that matches with the numbers I’ve super-imposed on the screenshot:

Failover Cluster Manager Storage Live Migration Window



Legend for the Preceding Screenshot

  1. In the first column, you’ll find the virtual machine(s) that you selected to move. In this screenshot, I’ve expanded the virtual machine, which shows its constituent components. If I had selected other virtual machines to move, they would be listed below this one.
  2. This column shows you where the components of the virtual machine currently are.
  3. The Destination Folder Path is initially empty. As you select target locations, they will be indicated here.
  4. The box at the lower left shows valid destinations. Initially, only Cluster Shared Volumes will be visible. If a Cluster Disk has been assigned directly to a selected virtual machine, that will appear as well. You can expand any of the items here to show their subfolders. You can drag-drop any item from the top to any location that appears here. Alternatively, you can highlight an object in the top and click the Copy button (near the number 1 in the screenshot) and then use the Paste button (above and to the left of the number 6 in the screenshot) to set that as the item’s target location.
  5. The Add Share button shows a small dialog that allows you to manually type the name of a share. If it is reachable, it will then appear in the bottom left box under a Shares tree. It can then be used as described in number 4.
  6. Items in the lower-right box in black font are components that will be moved into the currently highlighted location on the left. Items in gray font are components that are already present. This output can be a little confusing because only VHD/X files are named. Items such as Checkpoints and Current configuration do not indicate which virtual machine they are for. Take caution when moving virtual machines simultaneously and be mindful of the contents of the third column. If an item in black font is highlighted, the Delete button (above the number 6 in the screenshot) will remove its current move target (meaning that it will be left where it is when the move is started).

One of Storage Live Migration’s powers is that the virtual machine doesn’t need to have all of its components in a single place. This window allows you to design the precise layout of its constituent files. However, remember that:

  • The Checkpoints location does not affect AVHD/X files. Those will always be placed in the same location as their parent VHD/X files.
  • If a Virtual Machines folder does not exist at the designated target location for Current configuration and Smart Paging, it will be created.
  • If a Snapshots folder does not exist at the designated target location for Checkpoints, it will be created.
  • Virtual disk files will be placed exactly where they are targeted without creation of any subfolders.
  • The source folder structure will not be altered.

Preparing a Clustered Virtual Machine for a Shared Nothing Live Migration

Before you can move a clustered virtual machine, you must remove it from its cluster. This operation does not cause downtime for the virtual machine, but it may cause issues for other applications. For instance, System Center Virtual Machine Manager will mark any virtual machine on a Cluster Shared Volume that isn’t clustered as “unsupported configuration” and will refuse to allow you to manipulate it within that console. Also, running a non-clustered virtual machine on a cluster member’s shared storage (besides SMB 3) can have other undesirable effects if any failures should occur, so do not perform these steps until you are ready to start the Shared Nothing Live Migration.

To prepare a clustered virtual machine for Shared Nothing Live Migration using Failover Cluster Manager, right-click on the virtual machine in the Roles section and click Remove. You’ll be prompted to remove its cluster resources. The virtual machine is not affected, only its status within the cluster.

Shared Nothing Live Migration in Hyper-V Manager

The only freely-provided GUI tool from Microsoft that can perform a Shared Nothing Live Migration is Hyper-V Manager. The process is similar to that of a Storage Live Migration, so you’ll likely notice some similarities between the two processes.

  1. Right-click on the virtual machine in the list and select Move…
  2. The Move Wizard will open. The first screen is informational. Click Next when ready.
  3. On the Choose Move Type screen, leave Move the virtual machine selected and click Next.
    Move Type



  4. Choose the host that you wish to migrate the virtual machine to. You can type it in or Browse to it. Click Next.
    Destination Host



  5. The Move Options dialog deals with the general placement of the virtual machine’s components.
    1. Move the virtual machine’s data to a single location. If you choose this option, the next window will ask you for the location to place the virtual machine files. The VHD/X files will be placed in a ‘Virtual Hard Disks’ subfolder of that location. All others will be placed in a ‘Virtual Machines’ subfolder. The Browse button operates from the perspective of the VMMS service on the target machine.
    2. Move the virtual machine’s data by selecting where to move the items. This option will cause a series of dialogs to be generated that will ask about each component separately. This is the only option that will be further explored in this walkthrough.
    3. Move only the virtual machine. If you choose this option, you will not be given any options for file placement. The wizard will attempt to swap the XML file registration from the source to the destination without making any other changes, just as it does with a cluster Live Migration. This can only work if the files are in an SMB 3 share opened to both the source and destination hosts or if they are in a Cluster Shared Volumes and both hosts are members of the same cluster.
      Move Options



  6. If you chose the second option in step 5, you’ll be presented with a dialog that further inquires how you wish to distribute the virtual machine’s data on the target host. It has three options.
    1. Move the virtual machine’s data automatically. The first option is the simplest. It will place the files in the same folder structure that they use on the source host.
    2. Move the virtual machine’s virtual hard disks to different locations. This choice is almost identical to the third option, except that it only deals with virtual hard disk files.
    3. Move the virtual machine’s items to different locations. If you choose this option, you’ll be presented with a series of dialogs that ask you to place each individual component, including the virtual hard disks.
      Advanced Move Options



  7. If you chose the third option in step 6, you’ll be presented with a dialog asking which of the components you wish to move. If you chose the second option, you’ll get the same dialog but it will only contain the VHD/X files. For a Shared Nothing Live Migration to succeed, all of the source files must either be moved or be in a location that both hosts can reach.
    Move Components



  8. If you made any choices in previous steps that involve moving items, you will now progress through a series of screens that ask you where to place each individual piece. How many of these screens you see depends on your earlier selections. Each looks similar to the following, and you will not be allowed to proceed until you have selected a suitable destination for the current item. When the wizard validates the target location, it will fill in the Available space field. If you’re manually typing, it attempts to validate the storage location after each letter, so results are immediate. Because you’re moving to a different host, all of these destinations are taken from the perspective of that host.
    Destination Path Selection



  9. Finally, you will be presented with a summary screen. If all looks well, press Finish to begin the transfer.
  10. If necessary, re-cluster the virtual machine with the Configure Role wizard in Failover Cluster Manager or with the Add-ClusterVirtualMachineRole cmdlet.

If all goes well, the files will be removed from the source location upon completion. Sometimes, these files are left behind even on success. Source folder structures will not be modified.
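The re-clustering in the final step can also be scripted; a sketch with placeholder names:

```powershell
# re-add the migrated VM as a clustered role on the destination cluster
Add-ClusterVirtualMachineRole -VMName 'svtest' -Cluster 'HVC1'
```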

A step-by-step guide to understand Hyper-V Live Migration


In the dawn of computer system virtualization, its primary value proposition was a dramatic increase in efficiency of hardware utilization. As virtualization matured, the addition of new features increased its scope. Virtual machine portability continues to be one of the most popular and useful enhancement categories, especially technologies that enable rapid host-to-host movements. This article will examine the migration technologies provided by Hyper-V.

If you’re looking for an article that describes how to perform Hyper-V Live Migration in 2012 and 2012 R2 just follow the link, and we also have an article that discusses how you can troubleshoot Hyper-V Live Migration.

What is Live Migration?

When Hyper-V first debuted, its only migration technology was “Quick Migration”. This technology, which still exists and is still useful, is a nearly downtime-free method for moving a virtual machine from one host to another within a failover cluster. Quick Migration was followed by Live Migration, which is a downtime-free technique for performing the same function. Live Migration has since been expanded with a number of enticing features, including the ability to move between two hosts that do not use failover clustering.

Quick and Live Migration Requirements

All migrations in this article require that the source and destination systems be joined to the same domain or to domains that trust each other, with the lone exception being a Storage Live Migration from one location on a system to another location on the same system. Quick Migration and Live Migration require the source and destination hosts to be members of the same failover cluster.

Live Migration within a cluster requires at least one cluster network to be enabled for Live Migration. Right-click on the Networks node in Failover Cluster Manager and click Live Migration Settings. The dialog seen in the following screenshot will appear. Use it to enable and prioritize Live Migration networks.

Network for Live Migration


Shared Nothing Live Migration and some migrations using PowerShell will require constrained delegation to be configured. We have an article that gives a brief explanation of the GUI method and includes a script for configuring it more quickly.

Quick Migration

Despite advances in Live Migration technology, Quick Migration has not gone away. Quick Migration has the following characteristics:

  • Noticeable downtime for the virtual machine
  • Requires the source and target hosts to be in the same failover cluster
  • Requires the source and target hosts to be able to access the same storage, although not necessarily simultaneously
  • Can be used by virtual machines on Cluster Disks, Cluster Shared Volumes, and SMB 3 shares
  • The virtual machine can be in any state
  • Only virtual machine ownership changes hands; files do not move
  • Often succeeds where other migration techniques fail, especially when pass-through disks and questionable security policies are involved
  • The virtual machine does not know that anything changed (unless something is monitoring the Hyper-V Data Exchange KVP registry section). If the Hyper-V Time Synchronization service is disabled or non-functional, the virtual machine will lose time until its next synchronization.
  • Used by a failover cluster to move low priority virtual machines during a Drain operation
  • Used by a failover cluster to move virtual machines in response to a host crash. If the host suffered some sort of physical failure that did not allow it to save its virtual machines first, they will behave as though they had lost power.

The primary reason that you will continue to use Quick Migration is that a virtual machine that is not in some active state cannot be Live Migrated. More simply, Quick Migration is how you move virtual machines that are turned off.

Anatomy of a Quick Migration

The following diagrams show the process of a Quick Migration.

The beginning operations of a Quick Migration vary based on whether or not the virtual machine is on. If it is, it is placed into a Saved State: I/O and CPU operations are suspended and their state, along with the full contents of memory, is saved to disk. The save location must be on shared storage, such as a CSV or a Cluster Disk. Once this is complete, the only thing that remains on the source host is a symbolic link that points to the actual location of the virtual machine’s XML file:

Quick Migration Phase 1



Next, the cluster makes a copy of that symbolic link on the target node and, if the virtual machine’s files are on a Cluster Disk, transfers ownership of that disk to the target node. The symbolic link is removed from the source host so that it only exists on the target. Finally, the virtual machine is resumed from its Saved State, if it was running when the process started:

Quick Migration Phase 2


If the virtual machine was running when the migration occurred, the amount of time required for the operation to complete will be determined by how much time is necessary for all the contents of memory to be written to disk on the source and then copied back to memory on the destination. Therefore, the total time necessary is a function of the size of the virtual machine’s memory and the speed of the disk subsystem. The networking speed between the two nodes is almost completely irrelevant, because only information about the symbolic link is transferred.

If the virtual machine wasn’t running to begin with, then the symbolic link is just moved. This process is effectively instantaneous. These symbolic links exist in C:\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines on every system that runs Hyper-V. Although it’s rare these days, the system drive might not be C:; you can always find the root folder through the ProgramData environment variable. At the command line, that variable is %PROGRAMDATA%. In PowerShell, it is $env:ProgramData.

The cluster keeps track of a virtual machine’s whereabouts by creating a resource that it permanently assigns to the virtual machine. The nodes communicate with each other about protected items by referring to these resources. They can be viewed under HKEY_LOCAL_MACHINE\Cluster\Resources on any cluster node. Do not modify the contents of this registry key!

Live Migration

There are a few varieties of Live Migration. In official terms, when you see only the two words “Live Migration” together, that indicates a particular operation with the following characteristics:

  • Only a brief outage occurs. Applications within the virtual machine will not notice any downtime. Clients connecting over the network might, especially if they use UDP or other lossy protocols. Live Migration should always recover the guest on the target host within the TCP timeout window.
  • Requires the source and target hosts to be in the same failover cluster
  • Requires the source and target hosts to be able to access the same storage, although not necessarily simultaneously
  • Can be used by virtual machines on Cluster Disks, Cluster Shared Volumes, and SMB 3 shares
  • The virtual machine must be running
  • Only virtual machine ownership and its active state changes hands; files do not move
  • More likely to fail than a Quick Migration, especially when pass-through disks and questionable security policies are involved
  • Designed so that failures do not affect the running virtual machine except in extremely rare situations. If a failure occurs, the virtual machine should continue running on the source host without interruption.
  • Used by a failover cluster to move all except low priority virtual machines during a Drain operation (automatic during a controlled host shutdown)
  • The virtual machine does not know that anything changed (unless something is monitoring the Hyper-V Data Exchange KVP registry section). While time in the virtual machine is affected, the differences are much less pronounced than with Quick Migration.

There are numerous similarities between Quick Migration and Live Migration, with the most important being that the underlying files do not move. The biggest difference is, of course, that the active state of the virtual machine is moved without taking it offline.

Anatomy of a Live Migration

The following diagrams show the process of a Live Migration.

To start, a shallow copy of the virtual machine is created on the target node. At this time, the cluster does some basic checking, such as validating that the target host can receive the source virtual machine – sufficient memory, disk connectivity, etc. It essentially creates a shell of the virtual machine:

Live Migration Phase 1



Once the shell is created, the cluster begins copying the contents of the virtual machine’s memory across the network to the target host. At the very end of that process, CPU and I/O are suspended and their states, along with any memory contents that were in flux when the bulk of memory was being transferred, are copied over to the target. If necessary, ownership of the storage (applicable to Cluster Disks and pass-through disks) is transferred to the target. Finally, operations are resumed on the target:

Live Migration Phase 2



While not precisely downtime-free, the interruption is almost immeasurably brief. Usually the longest delay is in the networking layer while the virtual machine’s MAC address is registered on the new physical switch port and its new location is propagated throughout the network.

The final notes in the Quick Migration section regarding the cluster resource are also applicable here.

It is not uncommon for Live Migration to be slower than Quick Migration for the same virtual machine. Where the speed of Quick Migration is mostly dependent upon the disk subsystem, Live Migration relies upon the network speed between the two hosts. It’s not unusual for the network to be slower and have more contention.

Settings that Affect Live Migration Speed

There are a few ways that you can impact Live Migration speeds.

Maximum Concurrent Live Migrations

The first is found in the host’s Hyper-V settings with a simple limit on how many Live Migrations can be active at a time:

Basic Live Migration Settings



At the top of the screen, you can select the number of simultaneous Live Migrations. Using Failover Cluster Manager or PowerShell, you can direct the cluster to move several virtual machines at once. This setting controls how many will travel at the same time. How this setting affects Live Migration speed depends upon your networking configuration performance options.

If you only have a single adapter enabled for Live Migration, then this setting has a very straightforward effect. A higher number means individual migrations take longer; a lower number means any given migration is faster, but queued machines wait longer for their turn. On its own, this does not truly affect the total time for all machines to migrate. However, remember from the previous discussion that the virtual machine must be paused while the final set of changes is transferred. If a great many virtual machines are moving at once, that pause might need to be longer, as less bandwidth is available for that final bit. The differences will likely be minimal.

Other effects this setting can have will be discussed along with the following Performance options setting.

Before moving on, note that the networks list in the preceding screenshot is not applicable to this type of Live Migration. We’ll revisit it in the Shared Nothing Live Migration section.
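If you prefer PowerShell to the GUI, the same limit can be set with Set-VMHost; the value of 2 below is only an example and should be tuned to your environment:

```powershell
# Allow two simultaneous Live Migrations on this host (run on each node)
Set-VMHost -MaximumVirtualMachineMigrations 2

# Verify the current setting
Get-VMHost | Select-Object Name, MaximumVirtualMachineMigrations
```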

Live Migration Transmission Methods

The second group of speed-related settings are found on the host’s advanced Live Migration settings tab, visible in this screenshot:

Live Migration Advanced Settings



We’re specifically concerned with the Performance options section. There are three options that you can choose:

  • TCP/IP: This is the traditional Live Migration method that started with 2008 R2. It uses a single TCP/IP stream for each Live Migration.
  • Compression: This option is new beginning in 2012. As the text on the screen says, data to be transmitted is first compressed. This can reduce the amount of time that the network is in use, and sometimes dramatically since memory contents usually compress very well. The drawback is that the source and destination CPUs have to handle the compression and decompression cycles. Unless you have old or overburdened CPUs, the Compression method is usually superior to the TCP/IP method.
  • SMB: The SMB method is also new with 2012. By using SMB, your Live Migrations are given the opportunity to make use of the new advanced features of the third generation of the SMB protocol, including SMB Direct and SMB Multichannel.

These choices can be deceptively simple or hopelessly confusing, depending on your experience with these technologies. In addition to the stated text, their efficacy is dependent upon your networking configuration and the number of simultaneous Live Migrations. There are a few ways to quickly cut through the fog:

  • When conditions are ideal, the SMB method is the fastest. Many conditions must be met. First, your adapters must support RDMA. Most gigabit adapters do not have this feature. Furthermore, RDMA doesn’t work when the adapter is hosting a Hyper-V virtual switch (this will change in the 2016 version). Without RDMA, SMB Multichannel does function, but the cumulative speed is not better than the TCP/IP method. While RDMA can work on a team when no Hyper-V switch is present, SMB won’t create multiple channels unless multiple unique IPs are designated, so only one adapter will be used. Therefore, only use SMB when all of the following are true:
    • The adapters to use for SMB support RDMA
    • The adapters to use for SMB are not part of a Hyper-V switch
    • The adapters to use for SMB are unteamed
  • The Compression method should be preferred to the TCP/IP method unless you are certain that insufficient CPU is available for the compression/decompression cycle. The network transfer portion functions identically.
  • The TCP/IP and Compression methods will create a unique TCP/IP stream for every concurrent Live Migration. Therefore, you have two methods to distribute Live Migration traffic across multiple adapters:
    • Dynamic and address hash load-balancing algorithms on a network team. The source port used in each concurrent Live Migration is unique, which will allow both of these algorithms to balance the transmission across unique adapters. Dynamic can rebalance migrations in-flight, resulting in the greatest possible throughput. This is, of course, dependent upon the configuration and conditions of the target. Consider setting your maximum concurrent Live Migrations to match the number of adapters in your team.
    • Unique physical or virtual adapters. If multiple adapters are assigned to Live Migration, the cluster will distribute unique Live Migrations across them. If the adapters are virtual and on a team of physical adapters, any of the load-balancing algorithms will attempt to distribute the load. Consider setting your maximum concurrent Live Migrations to match the number of physical adapters used.
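The Performance options radio buttons have a PowerShell equivalent on Set-VMHost; this sketch shows Compression only as an example, and should be run on each host:

```powershell
# Choose the Live Migration transmission method for this host.
# Valid values: TCPIP, Compression, SMB
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
```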

Network QoS for Live Migration

The third method involves setting network Quality of Service. This is a potentially complicated topic and is not known for paying dividends to those who overthink it. My standing recommendation is that you not manipulate QoS until you have witnessed a condition that proves its usefulness.

Microsoft has an article that covers multiple possible QoS configurations and includes the PowerShell commands necessary. It’s worth a read, but you must remember that these are just examples.

If you’re using the SMB mode for Live Migration, there are some special cmdlets that you’ll want to be aware of. Jose Barreto has documented them well on his blog.
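As a hedged example of the kind of cmdlet involved, 2012 R2 can cap SMB traffic by category once the SMB Bandwidth Limit feature is installed; the 1 GB/s figure is illustrative:

```powershell
# Install the SMB Bandwidth Limit feature (one-time step)
Add-WindowsFeature FS-SMBBW

# Cap SMB-mode Live Migration traffic at 1 GB/s on this host
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 1GB

# Review the configured limits
Get-SmbBandwidthLimit
```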

Storage [Live] Migration

Starting with the 2012 version, Hyper-V has a mechanism to relocate the storage for a virtual machine, even while it is turned on. It utilizes some of the same underlying technology that allows differencing virtual hard disks to function and to be merged back into their parents while they are in use. This feature has the following characteristics:

  • Does not require the virtual machine to be clustered
  • Target storage can be anywhere that the owning host has access to – local or remote
  • Despite the name, Storage Live Migration is not a Live Migration. Storage Live Migration does not require the virtual machine to change hosts. Whereas Quick and Live Migrations are handled by the cluster, Storage Live Migrations are handled by the Virtual Machine Management Service (VMMS).
  • Storage Live Migration is designed to remove the source files upon completion, but will often leave orphaned folders behind.
  • VHD/X files are moved first, then the remaining files.
  • If the source files are on a Cluster Disk, that disk will be unassigned from the virtual machine and placed in the cluster’s Available Storage category.
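A minimal PowerShell sketch of a Storage Live Migration; the virtual machine name and destination path are illustrative:

```powershell
# Move all of the virtual machine's files to a new location while it runs.
# The VMMS handles the transfer; the guest stays online throughout.
Move-VMStorage -VMName 'svtest' -DestinationStoragePath 'C:\LocalVMs\svtest'
```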

The Anatomy of a Storage Live Migration

Conceptually, Storage Live Migration is a much simpler process than either Quick Migration or Live Migration. If the virtual machine is off, the files are simply copied to the new location and the virtual machine’s configuration files are modified to point to it. If the virtual machine is on, the transfer operates much like a differencing disk: writes occur on the destination and reads occur from wherever the data is newest. Of course, that’s only applicable to the virtual disk files. All others are just moved normally.

Storage Live Migration and Hyper-V Replica

I have fielded several questions about moving the replica of a virtual machine. If you’ve configured Hyper-V Replica, you know that your only storage option when setting it up is a singular default storage location per remote host. You can’t get around that. However, what seems to slip past people is that it is, just as its name says, a default location — it is not a permanent location. Once the replica is created, it is a distinct virtual machine. You are free to use Storage Live Migration to relocate any Replica anywhere that its host can reach. It will continue to function normally.

Shared Nothing Live Migration

Shared Nothing Live Migration is the last of the migration modes. I placed it here because it’s a sort of “all of the above” mode, with a single major exception: you cannot perform a Shared Nothing Live Migration (which I will hereafter simply call SNLM) on a clustered virtual machine. This does not mean that you cannot use SNLM to move a virtual machine to or from a cluster. The purpose of SNLM is to move virtual machines between hosts that are not members of the same cluster. This usually involves a Storage Live Migration as well as a standard Live Migration, all in one operation.

Many of the points from Live Migration and Storage Live Migration apply to SNLM as well. To reduce reading fatigue, I’m only going to call out the most important:

  • Source and target systems only need to be Hyper-V hosts in the same or trusting domains.
  • The virtual machine cannot be clustered.
  • You can use SNLM to move a guest from a host running 2012 to a host running 2012 R2. You cannot move the other way.
  • If the VM’s files are on a mutually accessible SMB 3 location, you do not need to relocate them.
  • The settings shown under the Maximum Concurrent Live Migrations section near the top of this article also apply here.

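A minimal SNLM sketch in PowerShell; the host name, guest name, and path below are illustrative, and both hosts must already have Live Migration enabled and their migration networks configured:

```powershell
# Both hosts must have Live Migration enabled first:
Enable-VMMigration

# Move a running, non-clustered guest and its storage to another host
Move-VM -Name 'svtest' -DestinationHost 'HV02' `
    -IncludeStorage -DestinationStoragePath 'D:\LocalVMs\svtest'
```
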
Network Ports Related to Hyper-V



I firmly believe that Hyper-V is best implemented using Hyper-V Server and remote management techniques. Set it up once and never connect to its console again. With a bit of creativity, you can even deploy vendor-supplied firmware updates without accessing a local session. My approach does not enjoy community consensus; in fact, I’m unaware of any general agreement on the matter at all. One thing I do know for certain is that humans follow the path of least resistance. If option A is more difficult than option B, almost everyone will follow option B. Even people that take the hard road a few times at first will eventually fall back to the easy route, especially in times of distress.

With all of that said, I also firmly believe that digital security needs to be taken seriously. You may be in a low-security environment that doesn’t handle any sensitive information, but there is still a basic level of expected due diligence. Attackers are sometimes out to steal storage space and network bandwidth, not information. You have a responsibility to at least attempt to prevent that from happening. Even if you don’t feel like you owe anything to your current employer, you might want to work somewhere else someday. No prospective employer will be impressed if you have developed poor security habits. Good security practice involves firewalls. Yes, they’re annoying when they inhibit legitimate traffic, but they are a simple and effective way to stop the most common assaults.

This article has two purposes. The first is to succinctly lay out the TCP/IP network port information for Hyper-V management and activities. This allows you to configure firewalls as necessary. The second is to help you through some configuration dos and don’ts. This article might have information that helps you connect to a workgroup-joined Hyper-V host, but it was not written with that usage in mind. I have not heard any compelling reason for that configuration to exist; therefore, I will not waste any further energy enabling it.

Inbound Hyper-V-Related TCP/IP Ports

The rules table below is used on 2012 R2 and should also apply to 2012. Every firewall configuration is a bit different. The following diagram gives a generalized idea of where you’ll be working:

Location of Firewall Rules


Hyper-V Host Inbound Rules
Port | Target | Source | Purpose
All dynamic ports (49152-65535) | Management IP; CNO | Management servers and stations; RDS broker | Server Manager and other tools that use Remote Procedure Call
80 | Management IP | Other Hyper-V hosts that can replicate to this host | Hyper-V Replica (only if using insecure traffic)
135 | Management IP | Management servers and stations; RDS broker | RPC Endpoint Mapper (sometimes for WMI as well)
443 | Management IP | Other Hyper-V hosts that can replicate to this host; VMM servers | Hyper-V Replica (only if using secure traffic) and System Center VMM
445 | Management IP and Cluster IPs; Live Migration IPs if using SMB mode | Management servers and stations that would copy files to this host; members of the same cluster; Live Migration sources if using SMB mode; RDS broker | Inbound file copy; cluster communications; Cluster Shared Volumes redirected access; Live Migration if using SMB mode
2179 | Management IP; CNO | Management servers and stations; RDS broker | Communication to the Virtual Machine Management Service
TCP & UDP 3389 | Management IP | Management stations | Console access
5985 | Management IP | Management servers and stations; RDS broker | WSMan and WMI; notably used by PowerShell Remoting
5986 | Management IP | Management servers and stations; SCVMM | Secure WSMan and WMI
6600 | Live Migration IPs | Other Hyper-V hosts that can Live Migrate to this host | Live Migration when using standard TCP or compressed TCP modes

The above table should be mostly self-explanatory. There are a handful of notes:

  • You probably don’t need to open all dynamic ports. I’ve only seen the first five or six ever actually being used. The problem is, if Microsoft has documented the true range of used ports anywhere, I have never found it. I only stumbled on the necessity to open the ports at all via netstat and Wireshark traces from failing connections. PSS didn’t even know about them.
  • For the best outcomes, do not use any firewalls at all on cluster-only networks. The rules above will work, but (unscientifically speaking) the Wireshark traces I’ve seen on cluster networks suggest that the cluster uses other, undocumented communication methods that are likely more efficient. This does not apply to cluster-and-client networks such as the management network. Keep all of your cluster-only networks in the same layer-2 network with no routers, preferably in a VLAN or otherwise isolated network.
  • “CNO” is cluster name object. Server Manager in particular wants to be able to talk to it. In reality, it maps directly to the management IP of the host that currently maintains the cluster core resources and as long as Server Manager can reach that, everything will be fine even if Server Manager is displeased. If you can ignore Server Manager’s complaints, so can I.
  • “RDS” is “Remote Desktop Services”. I hope the clarification isn’t necessary, but this is for virtual machine RDS and not session-based RDS.
  • Very few things have any need to use 5986 because WSMan traffic is always encrypted. Your host will only be listening on 5986 if you enable secure WSMan with a specially configured WinRM listener.
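As a hedged example, a rule for the Live Migration port from the table above could be created like this in PowerShell; the remote subnet is illustrative and should match your actual Live Migration network:

```powershell
# Allow inbound Live Migration traffic (TCP 6600) only from the
# Live Migration subnet. Adjust the subnet to your environment.
New-NetFirewallRule -DisplayName 'Hyper-V Live Migration (TCP-In)' `
    -Direction Inbound -Protocol TCP -LocalPort 6600 `
    -RemoteAddress '192.168.10.0/24' -Action Allow
```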

Outbound Hyper-V-Related TCP/IP Ports

Most institutions don’t block outbound traffic. However, you might need this information to configure the inbound firewalls on other hosts, and it’s also helpful if you have hardware firewalls in between. The rules are presented from the perspective of the following diagram:

Hyper-V-Related Outbound Firewall Rules


Hyper-V Host Outbound Rules
Port | Target | Source | Purpose
All dynamic ports (49152-65535) | Management IP of other Hyper-V hosts; RDS broker | Management IP | Server Manager and other tools that use Remote Procedure Call if you will manage one host from another; RDS broker if using RDS
80 | Other Hyper-V hosts that this host will replicate to | Management IP | Hyper-V Replica (only if using insecure traffic)
135 | Management IP | Management servers and stations; RDS broker | RPC Endpoint Mapper (sometimes for WMI as well)
443 | Other Hyper-V hosts that this host will replicate to; SCVMM | Management IP | Hyper-V Replica (only if using secure traffic) and System Center VMM
445 | SMB 3 systems that present VM storage to this host; SMB 2+ targets with files this host needs to access, such as ISO; SCVMM library hosts; Live Migration target hosts if using SMB; other members of the same cluster; RDS broker | Management IPs; cluster network IPs; RDS User Profile Disk share | File copy; SMB access; Live Migrations in SMB mode; cluster communications; Cluster Shared Volume communications; RDS UPD host
3260 | iSCSI Portals and Targets | iSCSI discovery and initiator IPs | iSCSI
6600 | Other hosts this machine can Live Migrate to | Live Migration IPs | Live Migration when using standard TCP or compressed TCP modes

There isn’t much else to say with this table.

Other Hyper-V Related Traffic

Since I included port information for RDS, I might as well round it out with port information for RDS that doesn’t involve the hosts themselves. I’ll also include additional port information for System Center Virtual Machine Manager. While I’m at it, I’ll throw in some common ports just for the sake of completeness. I think the source/destination fields should be enough to help you figure out where to place these rules.

Other Hyper-V Firewall Rules
Port | Target | Source | Purpose
All dynamic ports (49152-65535) | All RDS hosts besides RDS broker | RDS broker | Server Manager and other tools that use Remote Procedure Call; the RDS broker is a central management hub
TCP/UDP 53 | DNS servers | Everyone | DNS lookups
80 | RDS Clients | RDS web | VDI’s web access is a standard IIS site
389 | Domain Controllers | Domain members | Standard AD authentication
443 | RDS Clients | RDS web | VDI’s web access is a standard IIS site
443 | RDS Gateway | RDS Clients | Client-to-gateway connections handled via 443
443 | RDS virtual machines | RDS Clients | If a gateway is not used, clients authenticate directly to their own VMs on 443
636 | Domain Controllers | Domain members | AD authentication using LDAPS; secure authentication is now also available on port 389
3389 | RDS virtual machines in unmanaged pools | RDS Broker | Client-to-VM connectivity
3389 | RDS Broker | RDS Gateway | Gateway facilitation of client-to-VM connectivity
3389 | RDS virtual machines | RDS Clients | If an RDS gateway is not used, everyone connects directly to his or her VDI virtual machine
UDP 3391 | Same rules as 3389 | Same rules as 3389 | When this transport is enabled, higher-performing UDP is used for the connection. Duplicate all of the port 3389 rules to UDP 3391.
5504 | RDS Broker | RDS Web | Hand-off from the web authenticator to the RDS broker
5985 | RDS Gateway | RDS Broker | RDS Broker control of RDS Gateway
8100 | SCVMM | Management stations | Connect VMM console to VMM server service
8530 (default) | Windows Server Update Services hosts | All Windows/Windows Server/Hyper-V Server | Windows Updates; may be configured to use another port

A few notes here:

  • I didn’t include all of the VMM rules. A more comprehensive list is available on the TechNet wiki. My real complaint with that list is that it does not include source/destination information and not all of the rules are obvious. For instance, you do need the Hyper-V hosts to be able to talk to the SCVMM server on port 443.
  • There is a similar TechNet wiki article for Remote Desktop Services, but the parts that aren’t difficult to sort out are just wrong and its overall state is such that, in good conscience, I cannot link it. Through some indirect channels, I have been left with the impression that the RDS team’s opinion on ever making that page useful or correct is “that’s not our job”. I have created the entries on these lists through Wireshark traces and observed trial and error. I can say that they have all worked just fine for me, but you might need to do some sleuthing of your own if you are doing some sort of configuration that I am not. PSS only has access to the same broken list so they will not be able to help you further, unless you don’t know how to use network traces or netstat to locate blocked ports.
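For that kind of sleuthing, a couple of built-in commands go a long way on 2012 R2 and later; the host name and port below are illustrative:

```powershell
# Check whether a specific port is reachable on a remote host
Test-NetConnection -ComputerName 'HV01' -Port 5985

# List locally listening TCP ports, similar to netstat
Get-NetTCPConnection -State Listen | Sort-Object LocalPort
```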


42 Best Practices for Balanced Hyper-V Systems


Last year, Nirmal Sharma wrote a fantastic article on this blog titled 23 Best Practices to improve Hyper-V and VM Performance. This sparked up a very lively discussion in the comments section; some were very strongly in favor of some items, some very strongly opposed to others. What I think was perhaps missed in some of these comments was that, as Nirmal stated in the title, his list was specifically “to improve Hyper-V and VM performance.” If squeezing every last drop of horsepower out of your Hyper-V host is your goal, then it’s pretty hard to find any serious flaws with his list.

Looking Forward to Hyper-V in Server 10


New year, new products! Some time in 2015, we’re all going to be graced with the newest edition of Windows and Windows Server, and along with them, Hyper-V. I wish I had a slick code name to give you, like “Viridian”, but it seems like most in-progress Microsoft products are now just code-named “vNext”. I’ve spent some time going over the published feature list. Some of the introductions will be very welcome. Some make me a bit less than enthusiastic.

Sole Sourcing

I’ve been burned more than once by writing about pre-release features and software conditions, even within a few months of release. The next version of Hyper-V is still quite a ways away, so there is still time for significant change. To that end, I’m only going to work with the officially published material on Hyper-V. Even that material could still be considered malleable at this point, but, in my opinion, it’s safer than relying on any third-party sources.

The Features I Like

We’ll start with those that I’m looking forward to.

Production Checkpoints

This is an interesting evolution of checkpoints (formerly snapshots). The first thing I want to point out is that they do not replace the existing checkpoint functionality; it’s just a different way of taking a checkpoint. There are two things of note here: first, and most important, Production Checkpoints are supported “for all production workloads.” “All.” They’re still not going to replace a proper backup solution, but now they’ll be the next best thing. The second important change is that they will, according to the wording of that document, tell Linux guests to “flush their file system buffers”. As far as I know, the current built-in and agentless systems don’t do that at all. This new power will give you a checkpoint that works like a backup that’s somewhere between crash-consistent and application-consistent. It won’t be application-consistent because not all applications’ buffers are tied to the file system buffer. It’s still about as good as you’re going to get on Linux at this phase. On this subject, I’ve also heard some rumors about changes to backup, but they’re not on this list, so…

Hyper-V Manager Improvements

Something we spend a lot of time on in the forums is explaining why Hyper-V Manager doesn’t magically work when you run it as an administrator (because it uses computer-to-computer authentication, not user-to-computer) and how to set up constrained delegation or the Trusted Hosts list. Hyper-V Manager will now work with alternate credentials, which means that at least some aspect of the computer-to-computer authentication is going away. That will certainly remove the need for constrained delegation. I assume that configuring Trusted Hosts will still be a necessity for those who insist on not joining their Hyper-V hosts to their domains, as WS-MAN does need some form of computer authentication. I certainly hope that this is how it will work, as I really hope Microsoft doesn’t start encouraging people to try to rely on workgroup-grade security. However, the documentation doesn’t say either way, so we won’t know until someone tries. At least for intra-domain authentication, this change will make things much easier.
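For reference, the current-version Trusted Hosts workaround looks something like this on the management station; 'HV01' is an illustrative host name:

```powershell
# Add a non-domain-joined Hyper-V host to this station's WSMan
# Trusted Hosts list. -Concatenate appends rather than overwrites.
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value 'HV01' -Concatenate
```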

Integration Services Delivered through Windows Update

This will be great, mainly for installation time. To have these updates synchronized with all the other updates will definitely be a blessing. Of course, this assumes that the Integrated Services are tested before being placed on Windows Update. The new feature might be a mixed blessing. For now, I’m cautiously optimistic.

Connected Standby

One of the features advertised for Hyper-V in its 2012 incarnation was that it would support sleep modes. Apparently, that wasn’t true. If it was, I didn’t encounter anyone who found the magic combination that would make it work. Why would I want sleep support on Hyper-V? Well, because not all instances of Hyper-V are production, that’s why. I use a “development” machine to work on CoreFig and all the scripts that I share here. It doesn’t get used even 2% of any given day, so why should it suck power 100% of the day? Sure, I can turn it off and on, but that’s pretty annoying when I just want to test one or two lines. I also had Hyper-V enabled on my laptop for a time, as I did quite a bit of development work there, as well. I’m the guy that likes to fold up the laptop and stick it in the bag when I’m ready to move. Enabling Client Hyper-V made that a bad plan for me, since I really didn’t want a portable heater. I also enabled Client Hyper-V on my desktop, which was a disaster. I don’t recall now if that’s because it would sleep and never wake up or if it wouldn’t sleep, but I do remember that I had to disable the role. What really makes it unforgivable for me is that sleep was a published feature of 8/2012 and it never really worked correctly.

Connected Standby is a new sleep mode introduced in Windows 8. It needs some pretty special hardware to function, so it’s still not going to work on any of the systems I have. However, many of you that carry around the latest and greatest laptops for demo purposes already have access to the features, so maybe you’ll get mileage out of it. For my personal purposes, I’d still like Hyper-V to allow my boxes to just plain old go to sleep because I don’t actually care if my test guests keep running, but this would be more welcome than the current option of going through a complete power cycle each time.

Rolling Cluster Upgrades

With Microsoft’s ever-shrinking release cycles and the community trust the company has forfeited in its ability to release quality software, an improved upgrade experience was an absolute necessity. This feature basically lets you run multiple versions of Windows/Hyper-V Server in the same cluster and freely move virtual machines between them. That allows you to perform an in-place upgrade of a cluster without all the juggling tricks we had to do in previous versions.

Multiple Virtual Machine Configuration Versions

This is essentially the technology that allows rolling cluster upgrades to work. The same host can run virtual machines with the 2012 R2 format and the Server 10 format. This makes me a little happy, as you’ll see more clearly after you read the next section.

The Features I Don’t Care So Much About

With the exception of the first, I won’t characterize these features as “bad”, because that would be untrue. It’s just that I wouldn’t consider an in-place upgrade just for them.

Déjà-New Virtual Machine File Format

The more things change, the more they stay the same. First, we had VMC, and later VMCX, files that described virtual machines under Virtual PC. And then we got XML files in Hyper-V, which were awesome because we could understand what was going on without needing an interpreter and we could make all sorts of clever manual repairs when Hyper-V (or, more likely, an antivirus package) went into hack-and-slash mode on a VM. Well, apparently, Microsoft didn’t like that, because we’re going back to VMCX files in vNext. This is not awesome. Unless something is forthcoming or I missed a publication, the VMCX file format is closed. That means anything we discover about it will be through hackery. Manually fixing them or making minor adjustments will immediately be off the reservation and completely unsupported. This is not progress.

The official line for this regression to vLastLastLastLastLast is twofold. First, they claim that it is to “increase the efficiency of reading and writing virtual machine configuration data”. While I’m sure this is technically true, I’d like to know at what scale this efficiency becomes measurable, much less meaningful. I’m thinking there aren’t many installations that qualify.

The second given reason is, “It is also designed to reduce the potential for data corruption in the event of a storage failure”. This sounds more like the silly belief I’ve been fighting against for well over a decade, that files in a “binary” format are somehow inherently resistant to modification and corruption in a way that “text” files are not. It kills me how many otherwise intelligent IT people believe this nonsense. The difference between “text” files and “binary” files is pure semantics. A computer can no more read any given “binary” file without a dedicated parser than it can a “text” file because they’re both nothing but serially-stored bits whose organization is by 100% human-imposed meaning. The only “real” difference between the two is that “binary” files usually have a much wider range of acceptable unique bit-orderings than “text” files. When you think about that for a bit, it actually means that corruption should be harder to detect in “binary” files, not easier. I mean, if you “TYPE” a “text” file and your computer starts making beeping sounds, it’s a safe bet that the file is corrupted. For “binary” files, it’s perfectly normal. The notion that “binary” somehow provides superior stability over “text” files is like saying that Japanese is more stable than English on the sole basis that I can’t read or write Japanese without an external translator. The reason that the current XML reading/writing mechanism doesn’t have this ability “to reduce the potential for data corruption” is because there is no programming in place to make it that way, not because it couldn’t have been done.

At this point, I’m highly skeptical that this is a positive change.

Hot Add and Remove of Memory and Network Adapters

I think this is really about feature parity with the competition more than anything else. As far as I’m concerned, if you’re swapping memory and NICs in and out of your guests so often that it’s a show-stopper that it can’t currently be done online, then your provisioning skills are what need to be upgraded, not your hypervisor. That said, it’s a neat feature. I’ll certainly test it in my lab, but I suspect I’ll use it somewhere around never in production. Oh, and as you’d probably expect, it’s only going to work for Generation 2 VMs.

Enhancements to Storage Quality of Service

I kind of feel bad about putting this feature in this section because it’s a good thing. It’s just a bit before its time. Unlike the existing storage QoS settings, which can limit a VM in any situation, this new feature set requires that VMs live on a Scale-Out File Server. That means Windows Server storage. Microsoft is certainly making inroads in storage, and I expect adoption to continue to climb. However, the percentage of shops ripping out their functional EMC and NetApp deployments to replace them with SOFS is probably somewhere around zero. So, it’s good that Microsoft is getting out ahead of the game and providing features that will be desirable when people do start thinking about sending the big vendors and their forklifts away, but I don’t know that it’s going to make a big splash in 2015.

Honorable Mention: Linux Secure Boot

I have no real opinion as this feature just doesn’t affect me that much. The purpose of Secure Boot is to guard against corruption in the boot process of a guest, such as the changes made by a rootkit. The “firmware”, which in this case is Hyper-V, maintains a secured database of acceptable boot image signatures and won’t allow any boot image that doesn’t match the list to start. The problem is, there are a lot of legitimate operating systems out there that aren’t in Hyper-V’s database. This change extends this protection to some Linux guests. I’m not sure why this wasn’t available in vNow, but there it is.
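The check described above can be sketched conceptually in a few lines of Python. This is a deliberate simplification, assuming a bare hash allow-list standing in for the firmware’s secured signature database (real Secure Boot verifies cryptographic signatures against trusted keys, not raw image hashes), and all of the names here are hypothetical:

```python
import hashlib

# Hypothetical allow-list standing in for the firmware's secured database
# of acceptable boot image signatures.
TRUSTED_BOOT_HASHES = {
    hashlib.sha256(b"trusted-bootloader-image").hexdigest(),
}

def allow_boot(boot_image: bytes) -> bool:
    """Refuse to start any boot image that doesn't match the database."""
    return hashlib.sha256(boot_image).hexdigest() in TRUSTED_BOOT_HASHES
```

A rootkit that modifies even one byte of the boot image produces a different hash, so the “firmware” refuses to start it; the flip side, as noted above, is that a perfectly legitimate operating system whose image isn’t in the database gets refused just the same.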

A Lot Can Happen in a Few Months

As I said before, there’s still quite a bit of time between now and the actual release date. It will be interesting to see how solid these features are in comparison to what we’ll get in the final product.

I’ve refrained from speculating, but you’re certainly welcome to speculate all you want.

Your Thoughts

What do you think? What are you looking forward to most? Leave a comment and let us know!

