How to Architect and Implement Networks for a Hyper-V Cluster

We recently published a quick tip article recommending the number of networks you should use in a cluster of Hyper-V hosts. I want to expand on that content to make it clear why we’ve changed practice from pre-2012 versions and how we arrive at this guidance. Use the previous post for quick guidance; read this one to learn the supporting concepts. These ideas apply to all versions from 2012 onward.

Why Did We Abandon Practices from 2008 R2?

If you dig on TechNet a bit, you can find an article outlining how to architect networks for a 2008 R2 Hyper-V cluster. While it was perfect for its time, we have new technologies that make its advice obsolete. I have two reasons for bringing it up:

  • Some people still follow those guidelines on new builds and, worse, recommend them to others
  • Even though we no longer follow that implementation practice, we still need to solve the same fundamental problems

We changed practices because we gained new tools to address our cluster networking problems.

What Do Cluster Networks Need to Accomplish for Hyper-V?

Our root problem has never changed: we need to ensure that we always have enough available bandwidth to prevent choking out any of our services or inter-node traffic. In 2008 R2, we could only do that by using multiple physical network adapters and designating traffic types to individual pathways. Note: It was possible to use third-party teaming software to overcome some of that challenge, but that was never supported and introduced other problems.

Starting from our basic problem, we next need to determine how to delineate those various traffic types. That original article did some of that work. We can immediately identify what appears to be four types of traffic:

  • Management (communications with hosts outside the cluster, ex: inbound RDP connections)
  • Standard inter-node cluster communications (ex: heartbeat, cluster resource status updates)
  • Cluster Shared Volume traffic
  • Live Migration

However, it turns out that some clumsy wording caused confusion. Cluster communication traffic and Cluster Shared Volume traffic are exactly the same thing. That reduces our needs to three types of cluster traffic.

What About Virtual Machine Traffic?

You might have noticed that I didn't say anything about virtual machine traffic above. The same would be true if you were working up a different kind of cluster, such as SQL. I certainly understand the importance of that traffic; in my mind, service traffic prioritizes above all cluster traffic. Understand one thing: service traffic for external clients is not clustered. So, your cluster of Hyper-V nodes might provide high availability services for virtual machine vmabc, but all of vmabc's network traffic will only use its owning node's physical network resources. So, you will not architect any cluster networks to process virtual machine traffic.

As for preventing cluster traffic from squelching virtual machine traffic, we’ll revisit that in an upcoming section.

Fundamental Terminology and Concepts

These discussions often go awry over a misunderstanding of basic concepts.

  • Cluster Name Object: A Microsoft Failover Cluster has its own identity separate from its member nodes known as a Cluster Name Object (CNO). The CNO uses a computer name, appears in Active Directory, has an IP, and registers in DNS. Some clusters, such as SQL, may use multiple CNOs. A CNO must have an IP address on a cluster network.
  • Cluster Network: A Microsoft Failover Cluster scans its nodes and automatically creates “cluster networks” based on the discovered physical and IP topology. Each cluster network constitutes a discrete communications pathway between cluster nodes.
  • Management network: A cluster network that allows inbound traffic meant for the member host nodes and typically used as their default outbound network to communicate with any system outside the cluster (e.g. RDP connections, backup, Windows Update). The management network hosts the cluster’s primary cluster name object. Typically, you would not expose any externally-accessible services via the management network.
  • Access Point (or Cluster Access Point): The IP address that belongs to a CNO.
  • Roles: The name used by Failover Cluster Management for the entities it protects (e.g. a virtual machine, a SQL instance). I generally refer to them as services.
  • Partitioned: A status that the cluster assigns to any network on which one or more nodes do not have a presence or cannot be reached.
  • SMB: All communications native to failover clustering use Microsoft's Server Message Block (SMB) protocol. With the introduction of SMB version 3 in Windows Server 2012, that now includes native multi-channel capabilities (and more!)

Are Microsoft Failover Clusters Active/Active or Active/Passive?

Microsoft Failover Clusters are active/passive. Every node can run services at the same time as the other nodes, but no single service can be hosted by multiple nodes. In this usage, “service” does not mean those items that you see in the Services Control Panel applet. It refers to what the cluster calls “roles” (see above). Only one node will ever host any given role or CNO at any given time.

How Does Microsoft Failover Clustering Identify a Network?

The cluster decides what constitutes a network; your build guides it, but you do not have any direct input. Any time the cluster’s network topology changes, the cluster service re-evaluates.

First, the cluster scans a node for logical network adapters that have IP addresses. That might be a physical network adapter, a team’s logical adapter, or a Hyper-V virtual network adapter assigned to the management operating system. It does not see any virtual NICs assigned to virtual machines.

For each discovered adapter and IP combination on that node, it builds a list of networks from the subnet masks. For instance, if it finds an adapter with an IP of 192.168.10.20 and a subnet mask of 255.255.255.0, then it creates a 192.168.10.0/24 network.

The cluster then continues through all of the other nodes, following the same process.

Be aware that not every node needs to have a presence in a given network in order for failover clustering to identify it; however, the cluster will mark such networks as partitioned.
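Once the cluster exists, you can see exactly what it discovered. A quick sketch, run from any node (or anywhere the Failover Clustering PowerShell module is installed):

  # Lists each discovered cluster network with its subnet and current state
  Get-ClusterNetwork | Format-Table -Property Name, Address, AddressMask, State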

What Happens if a Single Adapter has Multiple IPs?

If you assign multiple IPs to the same adapter, one of two things will happen. Which of the two depends on whether or not the secondary IP shares a subnet with the primary.

When an Adapter Hosts Multiple IPs in Different Networks

The cluster identifies networks by adapter first. Therefore, if an adapter has multiple IPs, the cluster will lump them all into the same network. If another adapter on a different host has an IP in one of the networks but not all of the networks, then the cluster will simply use whichever IPs can communicate.

As an example, see the following network:

The second node has two IPs on the same adapter and the cluster has added it to the existing network. You can use this to re-IP a network with minimal disruption.

A natural question: what happens if you spread IPs for the same subnet across different existing networks? I tested it a bit and the cluster allowed it and did not bring the networks down. However, it always had the functional IP pathway to use, so that doesn’t tell us much. Had I removed the functional pathways, then it would have collapsed the remaining IPs into an all-new network and it would have worked just fine. I recommend keeping an eye on your IP scheme and not allowing things like that in the first place.

When an Adapter Hosts Multiple IPs in the Same Network

The cluster will pick a single IP in the same subnet to represent the host in that network.

What if Different Adapters on the Same Host have an IP in the Same Subnet?

The same outcome occurs as if the IPs were on the same adapter: the cluster picks one to represent the host in that network and ignores the rest.

The Management Network

All clusters (Hyper-V, SQL, SOFS, etc.) require a network that we commonly dub Management. That network contains the CNO that represents the cluster as a singular system. The management network has little importance for Hyper-V, but external tools connect to the cluster using that network. By necessity, the cluster nodes use IPs on that network for their own communications.

The management network will also carry cluster-specific traffic. More on that later.

Note: Hyper-V Replica uses the management network.

Cluster Communications Networks (Including Cluster Shared Volume Traffic)

A cluster communications network will carry:

  • Cluster heartbeat information. Each node must hear from every other node within a specific amount of time (1 second by default). If a node cannot reach enough of the other nodes to maintain quorum, then it will begin failover procedures. Failover is more complicated than that, but beyond the scope of this article.
  • Cluster configuration changes. If any configuration item changes, whether to the cluster’s own configuration or the configuration or status of a protected service, the node that processes the change will immediately transmit to all of the other nodes so that they can update their own local information store.
  • Cluster Shared Volume traffic. When all is well, this network will only carry metadata information. Basically, when anything changes on a CSV that updates its volume information table, that update needs to be duplicated to all of the other nodes. If the change occurs on the owning node, less data needs to be transmitted, but it will never be perfectly quiet. So, this network can be quite chatty, but will typically use very little bandwidth. However, if one or more nodes lose direct connectivity to the storage that hosts a CSV, all of its I/O will route across a cluster network. Network saturation will then depend on the amount of I/O the disconnected node(s) need(s).

Live Migration Networks

That heading is a bit of a misnomer. The cluster does not have its own concept of a Live Migration network per se. Instead, you let the cluster know which networks you will permit to carry Live Migration traffic. You can independently choose whether or not those networks can carry other traffic.

Other Identified Networks

The cluster may identify networks that we don’t want to participate in any kind of cluster communications at all. iSCSI serves as the most common example. We’ll learn how to deal with those.

Architectural Goals

Now we know our traffic types. Next, we need to architect our cluster networks to handle them appropriately. Let’s begin by understanding why you shouldn’t take the easy route of using a singular network. A minimally functional Hyper-V cluster only requires that “management” network. Stopping there leaves you vulnerable to three problems:

  • The cluster will be unable to select another IP network for different communication types. As an example, Live Migration could choke out the normal cluster heartbeat, causing nodes to consider themselves isolated and shut down
  • The cluster and its hosts will be unable to perform efficient traffic balancing, even when you utilize teams
  • IP-based problems in that network (even external to the cluster) could cause a complete cluster failure

Therefore, you want to create at least one other network. In the pre-2012 model, we designated specific adapters to carry specific traffic types. In the 2012 and later model, we simply create at least one additional network that allows cluster communications but not client access. Some benefits:

  • Clusters of version 2012 or newer will automatically employ SMB multichannel. Inter-node traffic (including Cluster Shared Volume data) will balance itself without further configuration work.
  • The cluster can bypass trouble on one IP network by choosing another; you can help by disabling a network in Failover Cluster Manager
  • Better load balancing across alternative physical pathways

The Second Supporting Network… and Beyond

Creating networks beyond the initial two can add further value:

  • If desired, you can specify networks for Live Migration traffic, and even exclude those from normal cluster communications. Note: For modern deployments, doing so typically yields little value
  • If you host your cluster networks on a team, matching the number of cluster networks to physical adapters allows the teaming and multichannel mechanisms the greatest opportunity to fully balance transmissions. Note: You cannot guarantee a perfectly smooth balance

Architecting Hyper-V Cluster Networks

Now we know what we need and have a nebulous idea of how that might be accomplished. Let’s get into some real implementation. Start off by reviewing your implementation choices. You have three options for hosting a cluster network:

  • One physical adapter or team of adapters per cluster network
  • Convergence of one or more cluster networks onto one or more physical teams or adapters
  • Convergence of one or more cluster networks onto one or more physical teams claimed by a Hyper-V virtual switch

A few pointers to help you decide:

  • For modern deployments, avoid dedicating one adapter or team to each cluster network. It makes poor use of available network resources by forcing an unnecessary segregation of traffic.
  • I personally do not recommend bare teams for Hyper-V cluster communications. You would need to exclude such networks from participating in a Hyper-V switch, which would also force an unnecessary segregation of traffic.
  • The most even and simple distribution involves a singular team with a Hyper-V switch that hosts all cluster network adapters and virtual machine adapters. Start there and break away only as necessary.
  • A single 10 gigabit adapter provides far more bandwidth than several gigabit adapters combined. If your hosts have both, don't even bother with the gigabit.

To simplify your architecture, decide early:

  • How many networks you will use. They do not need to have different functions. For example, the old management/cluster/Live Migration/storage breakdown no longer makes sense. One management and three cluster networks for a four-member team does make sense.
  • The IP structure for each network. For networks that will only carry cluster (including intra-cluster Live Migration) communication, the chosen subnet(s) do not need to exist in your current infrastructure. As long as each adapter in a cluster network can reach all of the others at layer 2 (Ethernet), you can invent any IP network that you want.

I recommend that you start off expecting to use a completely converged design that uses all physical network adapters in a single team. Create Hyper-V network adapters for each unique cluster network. Stop there, and make no changes unless you detect a problem.
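As a rough sketch of that converged starting point, the build might look something like the following. The team name, switch name, adapter names, and IP ranges are all examples; substitute your own:

  # One team over all physical adapters (adjust TeamMembers to your NIC names)
  New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC1','NIC2','NIC3','NIC4' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
  # One virtual switch on the team; the management OS gets its own named vNICs instead of sharing the switch adapter
  New-VMSwitch -Name 'vSwitch' -NetAdapterName 'ConvergedTeam' -AllowManagementOS $false
  Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName 'vSwitch'
  Add-VMNetworkAdapter -ManagementOS -Name 'Cluster1' -SwitchName 'vSwitch'
  Add-VMNetworkAdapter -ManagementOS -Name 'Cluster2' -SwitchName 'vSwitch'
  # Only the management vNIC needs a routable IP and gateway; the cluster vNICs can use invented subnets
  New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress 192.168.10.21 -PrefixLength 24 -DefaultGateway 192.168.10.1
  New-NetIPAddress -InterfaceAlias 'vEthernet (Cluster1)' -IPAddress 192.168.77.21 -PrefixLength 24
  New-NetIPAddress -InterfaceAlias 'vEthernet (Cluster2)' -IPAddress 192.168.78.21 -PrefixLength 24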

Comparing the Old Way to the New Way (Gigabit)

Let’s start with a build that would have been common in 2010 and walk through our options up to something more modern. I will only use gigabit designs in this section; skip ahead for 10 gigabit.

In the beginning, we couldn’t use teaming. So, we used a lot of gigabit adapters:

 

There would be some variations of this. For instance, I would have added another adapter so that I could use MPIO with two iSCSI networks. Some people used Fibre Channel and would not have iSCSI at all.

Important Note: The “VMs” that you see there means that I have a virtual switch on that adapter and the virtual machines use it. It does not mean that I have created a VM cluster network. There is no such thing as a VM cluster network. The virtual machines are unaware of the cluster and they will not talk to it (if they do, they’ll use the Management access point like every other non-cluster system).

Then, 2012 introduced teaming. We could then do all sorts of fun things with convergence. My very least favorite:

This build takes teams to an excess. Worse, the management, cluster, and Live Migration teams will be idle almost all the time, meaning that 60% of this host's networking capacity will be generally unavailable.

Let's look at something a bit more common. I don't like this one, but I'm not revolted by it either:

A lot of people like that design because, so they say, it protects the management adapter from problems that affect the other roles. I cannot figure out how they perform that calculus. Teaming addresses any probable failure scenarios. For anything else, I would want the entire host to fail out of the cluster. In this build, a failure that brought the team down but not the management adapter would cause its hosted VMs to become inaccessible because the node would remain in the cluster. That’s because the management adapter would still carry cluster heartbeat information.

My preferred design follows:

Now we are architected against almost all types of failure. In a “real-world” build, I would still have at least two iSCSI NICs using MPIO.

What is the Optimal Gigabit Adapter Count?

Because we had one adapter per role in 2008 R2, we often continue using the same adapter count in our 2012+ builds. I don’t feel that’s necessary for most builds. I am inclined to use two or three adapters in data teams and two adapters for iSCSI. For anything past that, you’ll need to have collected some metrics to justify the additional bandwidth needs.

10 Gigabit Cluster Network Design

10 gigabit changes all of the equations. In reasonable load conditions, a single 10 gigabit adapter moves data more than 10 times faster than a single gigabit adapter. When using 10 GbE, you need to change your approaches accordingly. First, if you have both 10GbE and gigabit, just ignore the gigabit. It is not worth your time. If you really want to use it, then I would consider using it for iSCSI connections to non-SSD systems. Most installations relying on iSCSI-connected spinning disks cannot sustain even 2 Gbps, so gigabit adapters would suffice.

Logical Adapter Counts for Converged Cluster Networking

I didn't include the Hyper-V virtual switch in any of the above diagrams, mostly because it would have made them more confusing. However, I would use a team hosting a Hyper-V virtual switch to carry all of the necessary logical adapters. For a non-Hyper-V cluster, I would create a logical team adapter for each role. Remember that on a logical team, you can only have a single logical adapter per VLAN. The Hyper-V virtual switch has no such restriction. Also remember that you should not use multiple logical team adapters on any team that hosts a Hyper-V virtual switch. Some of the behavior is undefined and your build might not be supported.

I would always use these logical/virtual adapter counts:

  • One management adapter
  • A minimum of one cluster communications adapter up to n-1, where n is the number of physical adapters in the team. You can subtract one because the management adapter acts as a cluster adapter as well

In a gigabit environment, I would add at least one logical adapter for Live Migration. That’s optional because, by default, all cluster-enabled networks will also carry Live Migration traffic.

In a 10 GbE environment, I would not add designated Live Migration networks. It’s just logical overhead at that point.

In a 10 GbE environment, I would probably not set aside physical adapters for storage traffic. At those speeds, the differences in offloading technologies don’t mean that much.

Architecting IP Addresses

Congratulations! You’ve done the hard work! Now you just need to come up with an IP scheme. Remember that the cluster builds networks based on the IPs that it discovers.

Every network needs one IP address for each node. Any network that contains an access point will need an additional IP for the CNO. For Hyper-V clusters, you only need a management access point. The other networks don’t need a CNO.

Only one network really matters: management. Your physical nodes must use that to communicate with the “real” network beyond. Choose a set of IPs available on your “real” network.

For all the rest, the member IPs only need to be able to reach each other over layer 2 connections. If you have an environment with no VLANs, then just make sure that you pick IPs in networks that don’t otherwise exist. For instance, you could use 192.168.77.0/24 for something, as long as that’s not a “real” range on your network. Any cluster network without a CNO does not need to have a gateway address, so it doesn’t matter that those networks won’t be routable. It’s preferred, in fact.

Implementing Hyper-V Cluster Networks

Once you have your architecture in place, you only have a little work to do. Remember that the cluster will automatically build networks based on the subnets that it discovers. You only need to assign names and set them according to the type of traffic that you want them to carry. You can choose:

  • Allow cluster communication (intra-node heartbeat, configuration updates, and Cluster Shared Volume traffic)
  • Allow client connectivity to cluster resources as well as cluster communications (you cannot choose client connectivity without cluster communication)
  • Prevent participation in cluster communications (often used for iSCSI and sometimes connections to external SMB storage)

As much as I like PowerShell for most things, Failover Cluster Manager makes this all very easy. Access the Networks tree of your cluster:

I’ve already renamed mine in accordance with their intended roles. A new build will have “Cluster Network”, “Cluster Network 1”, etc. Double-click on one to see which IP range(s) it assigned to that network:

Work your way through each network, setting its name and what traffic type you will allow. Your choices:

  • Allow cluster network communication on this network AND Allow clients to connect through this network: use these two options together for the management network. If you’re building a non-Hyper-V cluster that needs access points on non-management networks, use these options for those as well. Important: The adapters in these networks SHOULD register in DNS.
  • Allow cluster network communication on this network ONLY (do not check Allow clients to connect through this network): use for any network that you wish to carry cluster communications (remember that includes CSV traffic). Optionally use for networks that will carry Live Migration traffic (I recommend that). Do not use for iSCSI networks. Important: The adapters in these networks SHOULD NOT register in DNS.
  • Do not allow cluster network communication on this network: Use for storage networks, especially iSCSI. I also use this setting for adapters that will use SMB to connect to a storage server running SMB version 3.02 in order to run my virtual machines. You might want to use it for Live Migration networks if you wish to segregate Live Migration from cluster traffic (I do not do or recommend that).
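If you prefer to script the same work, the cluster network objects expose Name and Role properties that correspond to the three choices above. A sketch using example network names (Role 3 = cluster and client, 1 = cluster only, 0 = none):

  (Get-ClusterNetwork -Name 'Cluster Network 1').Name = 'Management'
  (Get-ClusterNetwork -Name 'Management').Role = 3   # cluster and client
  (Get-ClusterNetwork -Name 'Cluster1').Role = 1     # cluster communications only
  (Get-ClusterNetwork -Name 'iSCSI').Role = 0        # no cluster communications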

Once done, you can configure Live Migration traffic. Right-click on the Networks node and click Live Migration Settings:

Check a network’s box to enable it to carry Live Migration traffic. Use the Up and Down buttons to prioritize.
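PowerShell can do this part as well, although the syntax is clumsier: the allowed networks are controlled through the MigrationExcludeNetworks parameter on the cluster's Virtual Machine resource type. A sketch that permits only a network named 'Cluster1' (an example name) to carry Live Migration:

  Get-ClusterResourceType -Name 'Virtual Machine' | Set-ClusterParameter -Name MigrationExcludeNetworks -Value ([String]::Join(';', (Get-ClusterNetwork | Where-Object { $_.Name -ne 'Cluster1' }).ID))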

What About Traffic Prioritization?

In 2008 R2, we had some fairly arcane settings for cluster network metrics. You could use those to adjust which networks the cluster would choose as alternatives when a primary network was inaccessible. We don’t use those anymore because SMB multichannel just figures things out. However, be aware that the cluster will deliberately choose Cluster Only networks over Cluster and Client networks for inter-node communications.
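If you're curious how the cluster ranks its networks, you can still view (but should rarely touch) the metrics; cluster-only networks automatically receive lower, preferred values:

  Get-ClusterNetwork | Format-Table -Property Name, Role, AutoMetric, Metric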

What About Hyper-V QoS?

When 2012 first debuted, it brought Hyper-V networking QoS along with it. That was some really hot new tech, and lots of us dove right in and lost a lot of sleep over finding the “best” configuration. And then, most of us realized that our clusters were doing a fantastic job balancing things out all on their own. So, I would recommend that you avoid tinkering with Hyper-V QoS unless you have tried going without and had problems. Before you change anything, determine what traffic needs to be throttled or boosted. Do not simply start flipping switches, because the rest of us already tried that and didn't get results. If you need to change QoS, start with this TechNet article.

Your thoughts?

Does your preferred network design differ from mine? Have you decided to give my arrangement a try? How did you get on? Let me know in the comments below; I really enjoy hearing from you guys!

Hyper-V Quick Tip: How Many Cluster Networks Should I Use?

Q: How many networks should I employ for my clustered Hyper-V Hosts?

A: At least two, architected for redundancy, not services.

This answer serves as a quick counter to oft-repeated cluster advice from the 2008/2008 R2 era. Many things have changed since then and architecture needs to keep up. It assumes that you already know how to configure networks for failover clustering.

In this context, “networks” means “IP networks”. Microsoft Failover Cluster defines and segregates networks by their subnets:

Why the Minimum of Two?

Using two or more networks grants multiple benefits:

  • The cluster automatically bypasses some problems in IP networks, preventing any one problem from bringing the entire cluster down
    • External: a logical network failure that breaks IP communication
    • Internal: a Live Migration that chokes out heartbeat information, causing nodes to exit the cluster
  • An administrator can manually exclude a network to bypass problems
  • If hosted by a team, the networking stack can optimize traffic more easily when given multiple IP subnets
  • If necessary, traffic types can be prioritized

Your two networks must contain one “management” network (allows for cluster and client connections). All other networks beyond the first should either allow cluster communications only or prevent all cluster communications (ex: iSCSI). A Hyper-V cluster does not need more than one management network.

How Many Total?

You will need to make architectural decisions to arrive at the exact number of networks appropriate for your system. Tips:

  • Do not use services as a deciding point. For instance, do not build a dedicated Live Migration or CSV network. Let the system balance traffic.
    • In some rare instances, you may have network congestion that necessitates segregation. For example, heavy Live Migration traffic over few gigabit adapters. In that case, create a dedicated Live Migration network and employ Hyper-V QoS to limit its bandwidth usage
  • Do take physical pathways into account. If you have four physical network adapters in a team that hosts your cluster networks, then create four cluster networks.
  • Avoid complicated network builds. I see people trying to make sense out of things like 6 teams and two Hyper-V switches on 8x gigabit adapters with 4x 10-gigabit adapters. You will create a micro-management nightmare situation without benefit. If you have any 10-gigabit, just stop using the gigabit. Preferably, converge onto one team and one Hyper-V switch. Let the system balance traffic.

 

Do you have a question for Eric?

Ask your question in the comments section below and we may feature it in the next “Quick Tip” blog post!

The ABC of Hyper-V – 7 Steps to Get Started

Authors commonly struggle with the blank, empty starting page of a new work. So, if you’ve just installed Hyper-V and don’t know what to do with that empty space, you’re in good company. Let’s take a quick tour of the steps from setup to production-ready.

1. Make a Virtual Machine

It might seem like jumping ahead, but go ahead and create a virtual machine now. It doesn’t matter what you name it. It doesn’t matter how you configure it. It won’t be ready to use and you might get something wrong — maybe a lot of somethings — but go ahead. You get three things out of this exercise:

  • It’s what we authors do when we aren’t certain how to get started with writing. Just do something. It doesn’t matter what. Just break up that empty white space.
  • You learn that none of it is permanent. Make a mistake? Oh well. Change it.
  • You have a focused goal. You know that the VM won’t function without some more work. Instead of some nebulous “get things going” problem, you have a specific virtual machine to fix up.

If you start here, then you’ll have no network for the virtual machine and you may wind up with it sitting on the C: drive. That’s OK.

If you want to know the basic steps for how to create a virtual machine in Hyper-V Manager, start with this article.
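If you'd rather use PowerShell for this first virtual machine, a minimal sketch (the name, path, and sizes are arbitrary examples):

  New-VM -Name 'FirstVM' -Generation 2 -MemoryStartupBytes 1GB -NewVHDPath 'C:\VMs\FirstVM.vhdx' -NewVHDSizeBytes 60GB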

2. Install Updates and Configure the System

I try to get the dull, unpleasant things out of the way before I do anything with Hyper-V. In no set order:

3. Configure Hyper-V Networking

For Hyper-V configuration, I always start with my networking stack. You will likely spend a lot of time on this, especially if you’re still new to Hyper-V.

For a server deployment, I recommend that you start with my overview article. It will help you to conceptualize and create a diagram of your configuration design before you build anything. At the end, the article contains further useful links to how-to and in-depth articles: https://www.altaro.com/hyper-v/simple-guide-hyper-v-networking/

For a Windows 10 deployment using a current release build, you’ll automatically get a “Default Switch”. If you connect your virtual machines to it, they’ll get an IP, DNS, and NAT functionality without any further effort on your part. You can read more about that (and some other nifty Windows 10 Hyper-V features) in Sarah Cooley’s article: https://blogs.technet.microsoft.com/virtualization/2017/11/13/whats-new-in-hyper-v-for-windows-10-fall-creators-update/
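For a simple server host, creating an external virtual switch is a one-liner once you know the adapter or team name (the names below are examples; use Get-NetAdapter to find yours):

  New-VMSwitch -Name 'vSwitch' -NetAdapterName 'Ethernet' -AllowManagementOS $true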

4. Prepare Storage for Virtual Machines

Storage often needs a lot of time to configure correctly as well.

First, you need to set up the basic parts of storage, such as SAN LUNs and volumes. I'd like to give you a 100% thorough walk-through (perhaps at a later date), but I couldn't possibly cover more than a few options. However, I've covered a few common methods in this article: https://www.altaro.com/hyper-v/storage-and-hyper-v-part-6-how-to-connect/. I didn't cover Fibre Channel because no two vendors are similar enough to write a good generic article. I didn't cover Storage Spaces Direct because it didn't exist yet and I still don't have an S2D cluster of my own to instruct from.

Whatever you choose to use for storage, you need at least one NTFS or ReFS location to hold your VMs. I'm not even going to entertain any discussion about pass-through disks because, seriously, join this decade and stop with that nonsense already. I'm still recommending NTFS because I'm not quite sold on ReFS for Hyper-V yet, but ReFS will work. One other thing to note about ReFS: make sure that your backup/recovery vendor supports it.

5. Configure Hyper-V Host Settings

You probably won’t want to continue using Hyper-V’s defaults for long. Storage, especially, will probably not be what you want. Let’s modify some defaults. Right-click your host in Hyper-V Manager and click Hyper-V Settings.

This window has many settings, far more than I want to cover in a quick start article. I’ll show you a few things, though.

Let’s start with the two storage tabs:

You can rehome these anywhere that you like. Note:

  • For the Virtual Hard Disks setting, all new disks created using default settings will appear directly in that folder
  • For the Virtual Machines setting, all non-disk VM files will be created in special subfolders

In my case, my host will own local and clustered VMs. I’ll set my defaults to a local folder, but one that’s not as deep as what Hyper-V starts with.

Go explore a bit. Look at the rest of the default settings. Google what they mean, if you need. If you’ll be doing Shared Nothing Live Migrations, I recommend that you enable migrations on the Live Migrations tab.
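You can set the same defaults from PowerShell if you prefer; a sketch with example paths:

  Set-VMHost -VirtualHardDiskPath 'D:\VMs\Virtual Hard Disks' -VirtualMachinePath 'D:\VMs'
  Enable-VMMigration   # only needed if you plan to perform Live Migrations from this host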

6. Fix Up Your VM’s Settings

Remember that VM that I told you to create back in step one? I hope you did that because now you get to practice working with real settings on a real virtual machine. In this step, we’ll focus on the simple, direct settings. Right-click on your virtual machine and click Settings.

If you followed right through, then the VM's virtual network adapter can't communicate because it has no switch connection. So, jump down to the Network Adapter tab. In the Virtual Switch setting, where it says Not connected, change it to the switch that you created in step 3.

Again, poke through and learn about the settings for your virtual machine. You’ll have a lot more to look at than you did for the host. Take special notice of:

  • Memory: Each VM defaults to 1GB of dynamic memory. You can only change a few settings during creation. You can change many more now.
  • Processor: Each VM defaults to a single virtual CPU. You’ll probably want to bump that up to at least 2. We have a little guidance on that, but the short version: don’t stress out about it too much.
  • Automatic start and stop actions: These only work for VMs that won’t be clustered.
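The same changes work from PowerShell; a sketch using the example VM and switch names from the earlier sketches (adjust to your own):

  Connect-VMNetworkAdapter -VMName 'FirstVM' -SwitchName 'vSwitch'
  Set-VMMemory -VMName 'FirstVM' -DynamicMemoryEnabled $true -StartupBytes 2GB
  Set-VMProcessor -VMName 'FirstVM' -Count 2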

Check out the rest of it. Look up anything that seems interesting.

7. Practice with Advanced Activities and Settings

If you followed both step one and the storage location bit of step five, then that virtual machine might not be in the location that you desire. Not a problem at all. Right-click it and choose Move. On the relevant wizard page, select Move the virtual machine’s storage:

Proceed through the wizard. If you want more instruction, follow our article.
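PowerShell performs the same move; a sketch with example names:

  Move-VMStorage -VMName 'FirstVM' -DestinationStoragePath 'D:\VMs\FirstVM'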

Not everything has a predefined setting, unfortunately. You’ll occasionally need to do some more manual work. I encourage you to look into PowerShell, if you haven’t already.

Changing a Virtual Machine’s VHDX Name

Let's say that you decided that you didn't like the name of the virtual machine that you created in step one. Or, that you were just fine with the name, but you didn't like the name of its VHDX. You can change the virtual machine's name very simply: just highlight it and press [F2] or right-click it and select Rename. Hyper-V stores virtual machines' names as properties in their XML/VMCX files, so you don't need to change those. If you put the VM in a specially-named folder, then you can use the instructions above to move it to a new one. The VHDX doesn't change so easily, though.

Let’s rename a virtual machine’s virtual hard disk file:

  1. The virtual machine must be off. Sorry.
  2. On the virtual hard disk’s tab in the virtual machine’s settings, click Remove:
  3. Click Apply. That will remove the disk but leave the window open. We’ll be coming back momentarily.
  4. Use whatever method you like to rename the VHDX file.
  5. Back in the Hyper-V virtual machine’s settings, you should have been left on the controller tab for the disk that you removed, with Hard Drive selected. Click Add:
  6. Browse to the renamed file:
  7. Click OK.

Your virtual machine can now be started with its newly renamed hard disk.

Tip: If you feel brave, you can try to rename the file in the browse dialog, thereby skipping the need to drop out to the operating system in step 4. I have had mixed results with this due to permissions and other environmental factors.

Tip: If you want to perform a storage migration and rename a VHDX, you can wait to perform the storage migration until you have detached the virtual hard disk. The remaining files will transfer almost instantly because the migration won't need to copy the VHDX. After you have performed the storage migration, you can manually move the VHDX to its new home. If the same volume hosts the destination location, the move will occur almost instantly. From there, you can proceed with the rename and attach operations. You can save substantial amounts of time that way.

Bonus round: All of these things can be scripted.
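For instance, a sketch of the detach/rename/reattach sequence above, assuming an example VM with a single virtual hard disk:

  Stop-VM -Name 'FirstVM'
  # Grab the (single, in this sketch) virtual hard disk attached to the VM
  $disk = Get-VMHardDiskDrive -VMName 'FirstVM' | Select-Object -First 1
  Remove-VMHardDiskDrive -VMHardDiskDrive $disk
  # Rename the file on disk, then reattach it at the same controller location
  Rename-Item -Path $disk.Path -NewName 'FirstVM-OS.vhdx'
  $newPath = Join-Path -Path (Split-Path -Path $disk.Path) -ChildPath 'FirstVM-OS.vhdx'
  Add-VMHardDiskDrive -VMName 'FirstVM' -ControllerType $disk.ControllerType -ControllerNumber $disk.ControllerNumber -ControllerLocation $disk.ControllerLocation -Path $newPath
  Start-VM -Name 'FirstVM'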

Moving Forward

In just a few simple steps, you learned the most important things about Hyper-V. What’s next? Installing a guest operating system, of course. Treat that virtual machine like a physical machine, and you’ll figure it out in no time.

Need any Help?

If you’re experiencing serious technical difficulties you should contact the Microsoft support team but for general pointers and advice, I’d love to help you out! Write to me using the comment section below and I’ll get back to you ASAP!

Fixing Erratic Behavior on Hyper-V with Network Load Balancers

For years, I’d never heard of this problem. Then, suddenly, I’m seeing it everywhere. It’s not easy to precisely outline a symptom tree for you. Networked applications will behave oddly. Remote desktop sessions may skip or hang. Some network traffic will not pass at all. Other traffic will behave erratically. Rather than try to give you a thorough symptom tree, we’ll just describe the setup that can be addressed with the contents of this article: you’re using Hyper-V with a third-party network load balancer and experiencing network-related problems.

Acknowledgements

Before I ever encountered it, the problem was described to me by one of my readers. Check out our Complete Guide to Hyper-V Networking article and look in the comments section for Jahn's input. I had a different experience, but that conversation helped me reach a resolution much more quickly.

Problem Reproduction Instructions

The problem may appear under other conditions, but should always occur under these:

  • The network adapters that host the Hyper-V virtual switch are configured in a team
    • Load-balancing algorithm: Dynamic
    • Teaming mode: Switch Independent (likely occurs with switch-embedded teaming as well)
  • Traffic to/from affected virtual machines passes through a third-party load-balancer
    • Load balancer uses a MAC-based system for load balancing and source verification
      • Citrix NetScaler calls its feature “MAC based forwarding”
      • F5 load balancers call it “auto last hop”
    • The load balancer’s “internal” IP address is on the same subnet as the virtual machine’s
  • Sufficient traffic must be exiting the virtual machine for Hyper-V to load balance some of it to a different physical adapter

I’ll go into more detail later. This list should help you determine if you’re looking at an article that can help you.

Resolution

Fixing the problem is very easy, and can be done without downtime. I’ll show the options in preference order. I’ll explain the impacting differences later.

Option 1: Change the Load-Balancing Algorithm

Your best bet is to change the load-balancing algorithm to “Hyper-V port”. You can change it in the lbfoadmin.exe graphical interface if your management operating system is GUI-mode Windows Server. To change it with PowerShell (assuming only one team):
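The cmdlet below is a sketch that assumes a single LBFO team; check the team name with Get-NetLbfoTeam first if you have more than one:

  Get-NetLbfoTeam | Set-NetLbfoTeam -LoadBalancingAlgorithm HyperVPort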

There will be a brief interruption of networking while the change is made. It won’t be as bad as the network problems that you’re already experiencing.

Option 2: Change the Teaming Mode

Your second option is to change your teaming mode. It’s more involved because you’ll also need to update your physical infrastructure to match. I’ve always been able to do that without downtime as long as I changed the physical switch first, but I can’t promise the same for anyone else.

Decide if you want to use Static teaming or LACP teaming. Configure your physical switch accordingly.

Change your Hyper-V host to use the same mode. If your Hyper-V system’s management operating system is Windows Server GUI, you can use lbfoadmin.exe. To change it in PowerShell (assuming only one team):
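For Static teaming, a sketch (again assuming a single team):

  Get-NetLbfoTeam | Set-NetLbfoTeam -TeamingMode Static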

or
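for LACP:

  Get-NetLbfoTeam | Set-NetLbfoTeam -TeamingMode Lacp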

In this context, it makes no difference whether you pick static or LACP. If you want more information, read our article on the teaming modes.

Option 3: Disable the Feature on the Load Balancer

You could tell the load balancer to stop trying to be clever. In general, I would choose that option last.

An Investigation of the Problem

So, what’s going on? What caused all this? If you’ve got an environment that matches the one that I described, then you’ve unintentionally created the perfect conditions for a storm.

Whose fault is it? In this case, I don’t really think that it’s fair to assign fault. Everyone involved is trying to make your network traffic go faster. They sometimes do that by playing fast and loose in that gray area between Ethernet and TCP/IP. We have lots of standards that govern each individually, but not so many that apply to the ways that they can interact. The problem arises because Microsoft is playing one game while your load balancer plays another. The games have different rules, and neither side is aware that another game is afoot.

Traffic Leaving the Virtual Machine

We’ll start on the Windows guest side (also applies to Linux). Your application inside your virtual machine wants to send some data to another computer. That goes something like this:

  1. Application: “Network, send this data to computer www.altaro.com on port 443”.
  2. Network: “DNS server, get me the IP for www.altaro.com”
  3. Network: “IP layer, determine if the IP address for www.altaro.com is on the same subnet”
  4. Network: “IP layer, send this packet to the gateway”
  5. IP layer passes downward for packaging in an Ethernet frame
  6. Ethernet layer transfers the frame

The part to understand: your application and your operating system don't really care about the Ethernet part. Whatever happens down there just happens. In particular, they don't care at all about the source MAC.


Traffic Crossing the Hyper-V Virtual Switch

Because this particular Ethernet frame is coming out of a Hyper-V virtual machine, the first thing that it encounters is the Hyper-V virtual switch. In our scenario, the Hyper-V virtual switch rests atop a team of network adapters. As you’ll recall, that team is configured to use the Dynamic load balancing algorithm in Switch Independent mode. The algorithm decides if load balancing can be applied. The teaming mode decides which pathway to use and if it needs to repackage the outbound frame.

Switch independent mode means that the physical switch doesn't know anything about a team. It only knows about two or more Ethernet endpoints connected in standard access mode. A port in that mode can “host” any number of MAC addresses; the physical switch's capability defines the limit. However, the same MAC address cannot appear on multiple access ports simultaneously. Allowing that would cause all sorts of problems.


So, if the team wants to load balance traffic coming out of a virtual machine, it needs to ensure that the traffic has a source MAC address that won’t cause the physical switch to panic. For traffic going out anything other than the primary adapter, it uses the MAC address of the physical adapter.


So, no matter how many physical adapters the team owns, one of two things will happen for each outbound frame:

  • The team will choose to use the physical adapter that the virtual machine’s network adapter is registered on. The Ethernet frame will travel as-is. That means that its source MAC address will be exactly the same as the virtual network adapter’s (meaning, not repackaged)
  • The team will choose to use an adapter other than the one that the virtual machine’s network adapter is registered on. The Ethernet frame will be altered. The source MAC address will be replaced with the MAC address of the physical adapter

Note: The visualization does not cover all scenarios. A virtual network adapter might be affinitized to the second physical adapter. If so, its load balanced packets would travel out of the shown “pNIC1” and use that physical adapter’s MAC as a source.

Traffic Crossing the Load Balancer

So, our frame arrives at the load balancer. The load balancer has a really crummy job. It needs to make traffic go faster, not slower. And, it acts like a TCP/IP router. Routers need to unpackage inbound Ethernet frames, look at their IP information, and make decisions on how to transmit them. That requires compute power and time.


If it needs too much time to do all this, then people would prefer to live without the load balancer. That means that the load balancer’s manufacturer doesn’t sell any units, doesn’t make any money, and goes out of business. So, they come up with all sorts of tricks to make traffic faster. One way to do that is by not doing quite so much work on the Ethernet frame. This is a gross oversimplification, but you get the idea:


Essentially, the load balancer only needs to remember which MAC address sent which frame, and then it doesn’t need to worry so much about all that IP nonsense (it’s really more complicated than that, but this is close enough).

The Hyper-V/Load Balancer Collision

Now we’ve arrived at the core of the problem: Hyper-V sends traffic from virtual machines using source MAC addresses that don’t belong to those virtual machines. The MAC addresses belong to the physical NIC. When the load balancer tries to associate that traffic with the MAC address of the physical NIC, everything breaks.

Trying to be helpful (remember that), the load balancer attempts to return what it deems as “response” traffic to the MAC that initiated the conversation. The MAC, in this case, belongs directly to that second physical NIC. It wasn’t expecting the traffic that’s now coming in, so it silently discards the frame.

That happens because:

  • The Windows Server network teaming load balancing algorithms are send only; they will not perform reverse translations. There are lots of reasons for that and they are all good, so don’t get upset with Microsoft. Besides, it’s not like anyone else does things differently.
  • Because the inbound Ethernet frame is not reverse-translated, its destination MAC belongs to a physical NIC. The Hyper-V virtual switch will not send any Ethernet frame to a virtual network adapter unless it owns the destination MAC
  • In typical system-to-system communications, the “responding” system would have sent its traffic to the IP address of the virtual machine. Through the normal course of typical networking, that traffic’s destination MAC would always belong to the virtual machine. It’s only because your load balancer is trying to speed things along that the frame is being sent to the physical NIC’s MAC address. Otherwise, the source MAC of the original frame would have been little more than trivia.

Stated a bit more simply: Windows Server network teaming doesn’t know that anyone cares about its frames’ source MAC addresses and the load balancer doesn’t know that anyone is lying about their MAC addresses.

Why Hyper-V Port Mode Fixes the Problem

When you select the Hyper-V port load balancing algorithm in combination with the switch independent teaming mode, each virtual network adapter’s MAC address is registered on a single physical network adapter. That’s the same behavior that Dynamic uses. However, no load balancing is done for any given virtual network adapter; all traffic entering and exiting any given virtual adapter will always use the same physical adapter. The team achieves load balancing by placing each virtual network adapter across its physical members in a round-robin fashion.


Source MACs will always be those of their respective virtual adapters, so there’s nothing to get confused about.

I like this mode as a solution because it does a good job addressing the issue without making any other changes to your infrastructure. The drawback would be if you only had a few virtual network adapters and weren’t getting the best distribution. For a 10GbE system, I wouldn’t worry.

Why Static and LACP Fix the Problem

Static and LACP teaming involve your Windows Server system and the physical switch agreeing on a single logical pathway that consists of multiple physical pathways. All MAC addresses are registered on that logical pathway. Therefore, the Windows Server team has no need of performing any source MAC substitution regardless of the load balancing algorithm that you choose.


Since no MAC substitution occurs here, the load balancer won’t get anything confused.

I don’t like this method as much. It means modifying your physical infrastructure. I’ve noticed that some physical switches don’t like the LACP failover process very much. I’ve encountered some that need a minute or more to notice that a physical link was down and react accordingly. With every physical switch that I’ve used or heard of, the switch independent mode fails over almost instantly.

That said, using a static or LACP team will allow you to continue using the Dynamic load balancing algorithm. All else being equal, you’ll get a more even load balancing distribution with Dynamic than you will with Hyper-V port mode.

Why You Should Let the Load Balancer Do Its Job

The third listed resolution suggests disabling the related feature on your load balancer. I don't like that option, personally. I don't have much experience with the Citrix product, but I know that F5 buries its “Auto Last Hop” feature fairly deeply. Also, these two manufacturers enable the feature by default. It won't be obvious to a maintainer that you've made the change.

However, your situation might dictate that disabling the load balancer’s feature causes fewer problems than changing the Hyper-V or physical switch configuration. Do what works best for you.

Using a Different Internal Router Also Addresses the Issue

In all of these scenarios, the load balancer performs routing. Actually, these types of load balancers always perform routing, because they present a single IP address for the service to the outside world and translate internally to the back-end systems.

However, nothing states that the internal source IP address of the load balancer must exist in the same subnet as the back-end virtual machines. You might do that for performance reasons; as I said above, routing incurs overhead. However, this is all a known quantity and modern routers are pretty good at what they do. If any router is present between the load balancer and the back-end virtual machines, then the MAC address issue will sort itself out regardless of your load balancing and teaming mode selections.

Have You Experienced this Phenomenon?

If so, I'd love to hear from you. On what system did you experience it? How did you resolve the situation (if you were able)? Perhaps you've just encountered it and arrived here to get a solution; if so, let me know if this explanation was helpful or if you need any further assistance regarding your particular environment. The comment section below awaits.

The Really Simple Guide to Hyper-V Networking

If you’re just getting started with Hyper-V and struggling with the networking configuration, you are not alone. I (and others) have written a great deal of introductory material on the subject, but sometimes, that’s just too much. I’m going to try a different approach. Rather than a thorough deep-dive on the topic that tries to cover all of the concepts and how-to, I’m just going to show you what you’re trying to accomplish. Then, I can just link you to the necessary supporting information so that you can make it into reality.

Getting Started

First things first. If you have a solid handle on layer 2 and layer 3 concepts, that’s helpful. If you have experience networking Windows machines, that’s also helpful. If you come to Hyper-V from a different hypervisor, then that knowledge won’t transfer well. If you apply ESXi networking design patterns to Hyper-V, then you will create a jumbled mess that will never function correctly or perform adequately.

Your Goals for Hyper-V Networking

You have two very basic goals:

  1. Ensure that the management operating system can communicate on the network
  2. Ensure that virtual machines can communicate on the network


Any other goals that you bring to this endeavor are secondary, at best. If you have never done this before, don’t try to jump ahead to routing or anything else until you achieve these two basic goals.

Hyper-V Networking Rules

Understand what you must, can, and cannot do with Hyper-V networking:

  • You can connect the management operating system to a physical network directly using a physical network adapter or a team of physical network adapters.
  • You cannot connect any virtual machine to a physical network directly using a physical network adapter or team.
  • If you wish for a virtual machine to have network access, you must use a Hyper-V virtual switch. There is no bypass or pass-through mode.
  • A Hyper-V virtual switch uses a physical network adapter or team. It completely takes over that adapter or team; nothing else can use it.
  • It is possible for the management operating system to connect through a Hyper-V virtual switch, but it is not required.
  • It is not possible for the management operating system and the virtual switch to use a physical adapter or team at the same time. The “share” terminology that you see in all of the tools is a lie.

What the Final Product Looks Like

It might help to have visualizations of correctly-configured Hyper-V virtual switches. I will only show images with a single physical adapter. You can use a team instead.

Networking for a Single Hyper-V Host, the Old Way

An old technique has survived from the pre-Hyper-V 2012 days. It uses a pair of physical adapters. One belongs to the management operating system. The other hosts a virtual switch that the virtual machines use. I don’t like this solution for a two adapter host. It leaves both the host and the virtual machines with a single point of failure. However, it could be useful if you have more than two adapters and create a team for the virtual machines to use. Either way, this design is perfectly viable whether I like it or not.


Networking for a Single Hyper-V Host, the New Way

With teaming, you can just join all of the physical adapters together and let it host a single virtual switch. Let the management operating system and all of the guests connect through it.


Networking for a Clustered Hyper-V Host

For a stand-alone Hyper-V host, the management operating system only requires one connection to the network. Clustered hosts benefit from multiple connections. Before teaming was directly supported, we used a lot of physical adapters to make that happen. Now we can just use one big team to handle our host and our guest traffic. That looks like this:


VLANs

VLANs seem to have some special power to trip people up. A few things:

  • The only purpose of a VLAN is to separate layer 2 (Ethernet) traffic.
  • VLANs are not necessary to separate layer 3 (IP) networks. Many network administrators use VLANs to create walls around specific layer 3 networks, though. If that describes your network, you will need to design your Hyper-V hosts to match. If your physical network doesn’t use VLANs, then don’t worry about them on your Hyper-V hosts.
  • Do not create one Hyper-V virtual switch per VLAN the way that you configure ESXi. Every Hyper-V virtual switch automatically supports untagged frames and VLANs 1-4096.
  • Hyper-V does not have a “default” VLAN designation.
  • Configure VLANs directly on virtual adapters, not on the virtual switch.
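If your physical network does use VLANs, the assignments look something like this (the VM name, vNIC name, and VLAN IDs are examples):

  Set-VMNetworkAdapterVlan -VMName 'vm01' -Access -VlanId 20
  Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId 10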

Other Quick Pointers

I’m going to provide you with some links so you can do some more reading and get some assistance with configuration. However, some quick things to point out:

  • The Hyper-V virtual switch does not have an IP address of its own.
  • You do not manage the Hyper-V virtual switch via an IP or management VLAN. You manage the Hyper-V virtual switch using tools in the management or a remote operating system (Hyper-V Manager, PowerShell, and WMI/CIM).
  • Network connections for storage (iSCSI/SMB): Preferably, network connections for storage will use dedicated, unteamed physical adapters. If you can’t do that, then you can create dedicated virtual NICs in the management operating system
  • Multiple virtual switches: Almost no one will ever need more than one virtual switch on a Hyper-V host. If you have VMware experience, especially do not create virtual switches just for VLANs.
  • The virtual machines’ virtual network adapters connect directly to the virtual switch. You do not need anything in the management operating system to assist them. You don’t need a virtual adapter for the management operating system that has anything to do with the virtual machines.
  • Turn off VMQ for every gigabit physical adapter that will host a virtual switch. If you team them, the logical team NIC will also have a VMQ setting that you need to disable.
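
For example, something like this handles a pair of gigabit adapters and the logical NIC of a team built from them (the adapter and team names are placeholders):

    # physical gigabit adapters that will host the virtual switch
    Disable-NetAdapterVmq -Name 'Ethernet', 'Ethernet 2'

    # if they belong to a team, the team's logical NIC has its own VMQ setting
    Disable-NetAdapterVmq -Name 'ConvergedTeam'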

For More Information

I only intend for this article to be a quick introduction to show you what you’re trying to accomplish. We have several articles to help you dive into the concepts and the necessary steps for configuration.

 

How Device Naming for Network Adapters Works in Hyper-V 2016

How Device Naming for Network Adapters Works in Hyper-V 2016

Not all of the features introduced with Hyper-V 2016 made a splash. One of the less-publicized improvements allows you to determine a virtual network adapter’s name from within the guest operating system. I don’t even see it in any official documentation, so I don’t know what to officially call it. The related settings use the term “device naming”, so we’ll call it that. Let’s see how to put it to use.

Requirements for Device Naming for Network Adapters in Hyper-V 2016

For this feature to work, you need:

  • 2016-level hypervisor: Hyper-V Server, Windows Server, Windows 10
  • Generation 2 virtual machine
  • Virtual machine with a configuration version of at least 6.2
  • Windows Server 2016 or Windows 10 guest

What is Device Naming for Hyper-V Virtual Network Adapters?

You may already be familiar with a technology called “Consistent Device Naming”. If you were hoping to use that with your virtual machines, sorry! The device naming feature utilized by Hyper-V is not the same thing. Basically, if you were expecting to see something different in the Network and Sharing Center, it won’t happen:

[Screenshot: the guest’s Network and Sharing Center, still showing the default adapter name]

Nor in Get-NetAdapter:

[Screenshot: Get-NetAdapter output in the guest, still showing the default adapter name]

In contrast, a physical system employing Consistent Device Naming would automatically name the network adapters in some fashion that reflects their physical installation. For example, “SLOT 4 Port 1” would be the name of the first port of a multi-port adapter installed in the fourth PCIe slot. It may not always be easy to determine how the manufacturers numbered their slots and ports, but it helps more than “Ethernet 5”.

Anyway, you don’t get that out of Hyper-V’s device naming feature. Instead, it shows up as an advanced feature. You can see that in several ways. First, I’ll show you how to set the value.

Setting Hyper-V’s Network Device Name in PowerShell

From the management operating system or a remote PowerShell session opened to the management operating system, use Set-VMNetworkAdapter:
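
At its simplest, the command looks something like this (sv16g2 is the virtual machine from my test environment):

    Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On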

This enables device naming for all of the virtual adapters connected to the virtual machine named sv16g2.

If you try to enable it for a generation 1 virtual machine, you get a clear error (it sometimes inexplicably complains about the DVD drive first, but it eventually gets where it’s going).

The cmdlet doesn’t know if the guest operating system supports this feature (or even if the virtual machine has an installed operating system).

If you don’t want the default “Network Adapter” name, then you can set the name at the same time that you enable the feature:
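
One way to do both in a single pipeline (the new name “Primary” is just an example):

    Get-VMNetworkAdapter -VMName sv16g2 |
        Rename-VMNetworkAdapter -NewName 'Primary' -Passthru |
        Set-VMNetworkAdapter -DeviceNaming On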

These cmdlets all accept pipeline information as well as a number of other parameters. You can review the TechNet article that I linked in the beginning of this section. I also have some other usage examples on our omnibus networking article.

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter.

Note: You must reboot the guest operating system for it to reflect the change.

Setting Hyper-V’s Network Device Name in the GUI

You can use Hyper-V Manager or Failover Cluster Manager to enable this feature. Just look at the bottom of the Advanced Features sub-tab of the network adapter’s tab. Check the Enable device naming box. If that box does not appear, you are viewing a generation 1 virtual machine.

[Screenshot: the Advanced Features sub-tab with the Enable device naming checkbox]

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter. See the preceding section for instructions.

Note: You must reboot the guest operating system for it to reflect the change.

Viewing Hyper-V’s Network Device Name in the Guest GUI

This will only work in Windows 10/Windows Server 2016 (GUI) guests. The screenshots in this section were taken from a system that still had the default name of Network Adapter.

  1. Start in the Network Connections window. Right-click on the adapter and choose Properties:
    [Screenshot: the adapter’s context menu with Properties highlighted]
  2. When the Ethernet # Properties dialog appears, click Configure:
    [Screenshot: the Ethernet Properties dialog with the Configure button]
  3. On the Microsoft Hyper-V Network Adapter Properties dialog, switch to the Advanced tab. You’re looking for the Hyper-V Network Adapter Name property. The Value holds the name that Hyper-V holds for the adapter:
    [Screenshot: the Advanced tab showing the Hyper-V Network Adapter Name property and its value]

If the Value field is empty, then the feature is not enabled for that adapter or you have not rebooted since enabling it. If the Hyper-V Network Adapter Name property does not exist, then you are using a down-level guest operating system or a generation 1 VM.

Viewing Hyper-V’s Network Device Name in the Guest with PowerShell

As you saw in the preceding section, this field appears with the adapter’s advanced settings. Therefore, you can view it with the Get-NetAdapterAdvancedProperty cmdlet. To see all of the settings for all adapters, use that cmdlet by itself.

[Screenshot: Get-NetAdapterAdvancedProperty output for all adapters]

Tab completion doesn’t work for the names, so drilling down just to that item can be a bit of a chore. The long way:
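
For instance, you can filter on the display name that you saw in the previous section:

    Get-NetAdapterAdvancedProperty |
        Where-Object -Property DisplayName -EQ 'Hyper-V Network Adapter Name'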

Slightly shorter way:
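
The cmdlet can filter on the display name directly:

    Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name'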

One of many not-future-proofed-but-works-today ways:
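
One possibility leans on the registry keyword that you’ll see in the Regedit section below (it works today, but nothing guarantees the keyword won’t change):

    Get-NetAdapterAdvancedProperty -RegistryKeyword 'HyperVNetworkAdapterName'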

For automation purposes, you need to query the DisplayValue or the RegistryValue property. I prefer the DisplayValue. It is represented as a standard System.String. The RegistryValue is represented as a System.Array of System.String (or, String[]). It will never contain more than one entry, so dealing with the array is just an extra annoyance.

To pull that field, you could use select (an alias for Select-Object), but I wouldn’t:
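
For example:

    Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name' | Select-Object -Property DisplayValue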

[Screenshot: Select-Object output, showing a custom object with only a DisplayValue property]

I don’t like select in automation because it creates a custom object. Once you have that object, you then need to take an extra step to extract the value of that custom object. The reason that you used select in the first place was to extract the value. select basically causes you to do double work.

So, instead, I recommend the more .Net way of using a dot selector:
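
Something like this:

    (Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name').DisplayValue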

You can store the output of that line directly into a variable that will be created as a System.String type that you can immediately use anywhere that will accept a String:
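
A sketch (the variable name is arbitrary):

    $AdapterName = (Get-NetAdapterAdvancedProperty -Name Ethernet -DisplayName 'Hyper-V Network Adapter Name').DisplayValue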

Notice that I injected the Name property with a value of Ethernet. I didn’t need to do that. I did it to ensure that I only get a single response. Of course, it would fail if the VM didn’t have an adapter named Ethernet. I’m just trying to give you some ideas for your own automation tasks.

Viewing Hyper-V’s Network Device Name in the Guest with Regedit

All of the network adapters’ configurations live in the registry. It’s not exactly easy to find, though. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}. Not sure if it’s a good thing or a bad thing, but I can identify that key on sight now. Expand that out, and you’ll find several subkeys with four-digit names. They’ll start at 0000 and count upward. One of them corresponds to the virtual network adapter. The one that you’re looking for will have a KVP named HyperVNetworkAdapterName. Its value will be what you came to see. If you want further confirmation, there will also be a KVP named DriverDesc with a value of Microsoft Hyper-V Network Adapter (and possibly a number, if it’s not the first).

7 Powerful Scripts for Practical Hyper-V Network Configurations

7 Powerful Scripts for Practical Hyper-V Network Configurations

I firmly believe in empowerment. I feel that I should supply you with knowledge, provide you with how-tos, share insights and experiences, and release you into the world to make your own decisions. However, I came to that approach by standing at the front of a classroom. During class, we’d almost invariably walk through exercises. Since this is a blog and not a classroom, I do things differently. We don’t have common hardware in a controlled environment, so I typically forgo the exercises bit. As a result, that leaves a lot of my readers at the edge of a cliff with no bridge to carry them from theory to practice. And, of course, there are those of you that would love to spend time reading about concepts but just really need to get something done right now. If you’re stopped at Hyper-V networking, this is the article for you.

Script Inventory

These scripts are included in this article:

Basic Usage

I’m going to show each item as a stand-alone script. First, you’ll locate the one that best aligns with what you’re trying to accomplish. You’ll copy/paste that into a .ps1 PowerShell script file on your system. You’ll need to edit the script to provide information about your environment so that it will work for you. I’ll have you set each of those items at the beginning of the script. Then, you’ll just need to execute the script on your host.

Most scripts have their own “basic usage” heading that explains a bit about how you’d use them without modification.

Enhanced Usage

I could easily compile these into standalone executables that you couldn’t tinker with. Even though I want to give you a fully prepared springboard, I also want you to learn how the system works and what you’re doing to it.

Most scripts have their own “enhanced usage” heading that gives some ideas about how you might exploit or extend them yourself.

Configure networking for a single host with a single adapter

Use this script for a standalone system that only has one physical adapter. It will:

  • Disable VMQ for the physical adapter
  • Create a virtual switch on the adapter
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it.
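
To give you an idea of the shape of such a script, here is a minimal sketch of the same steps. All of the values at the top are placeholders that you would replace with your own:

    # environment-specific values
    $PhysicalAdapterName = 'Ethernet'
    $SwitchName          = 'vSwitch'
    $ManagementVlan      = 0          # 0 = leave untagged
    $ManagementIP        = '192.168.10.20'
    $PrefixLength        = 24
    $Gateway             = '192.168.10.1'
    $DnsServers          = '192.168.10.5', '192.168.10.6'

    # gigabit adapter: turn VMQ off before binding the virtual switch
    Disable-NetAdapterVmq -Name $PhysicalAdapterName

    # create the switch without an automatic management vNIC, then add one deliberately
    New-VMSwitch -Name $SwitchName -NetAdapterName $PhysicalAdapterName -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName $SwitchName
    if ($ManagementVlan -gt 0) {
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId $ManagementVlan
    }

    # the management vNIC appears as "vEthernet (Management)"; give it IP and DNS settings
    New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress $ManagementIP -PrefixLength $PrefixLength -DefaultGateway $Gateway
    Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses $DnsServers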

Advanced Usage for this Script

As-is, this script should be complete for most typical single-adapter systems. You might choose to disable some items. For instance, if you are using this on Windows 10, you might not want to provide a fixed IP address. In that case, just put a # sign at the beginning of lines 42 onward. When the virtual network adapter is created, it will remain in DHCP mode.

 

Configure a standalone host with 2-4 gigabit adapters for converged networking

Use this script for a standalone host that has between two and four gigabit adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. Be aware that it will have problems if you already have a team.
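
The team-related portion looks something like this sketch (names are placeholders; the rest mirrors the single-adapter sketch above, with the switch bound to the team instead of a physical adapter):

    $TeamMembers = 'Ethernet', 'Ethernet 2'   # two to four gigabit adapters
    $TeamName    = 'ConvergedTeam'
    $SwitchName  = 'vSwitch'

    # gigabit: disable VMQ on the members and, after creation, on the team's logical NIC
    Disable-NetAdapterVmq -Name $TeamMembers
    New-NetLbfoTeam -Name $TeamName -TeamMembers $TeamMembers -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
    # the team's logical NIC can take a few seconds to appear
    Disable-NetAdapterVmq -Name $TeamName

    New-VMSwitch -Name $SwitchName -NetAdapterName $TeamName -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName $SwitchName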

Advanced Usage for this Script

This script serves as the base for the remaining scripts on this page. Likewise, you could use it as a base for your own. You could also use any of the items as examples for whatever similar actions you wish to accomplish in your own scripts.

 

Configure a standalone host with 2-4 10 GbE adapters for converged networking

Use this script for a standalone host that has between two and four 10GbE adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

It won’t take a great deal of sleuthing to discover that this script is identical to the preceding one, except that it does not disable VMQ.

 

Configure a clustered host with 2-4 gigabit adapters for converged networking

Use this script for a host that has between two and four gigabit adapters that will be a member of a cluster. Like the previous scripts, it will employ a converged networking configuration. The script will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.
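
The cluster-specific additions look something like this sketch (adapter names, IPs, and subnets are placeholders; $SwitchName comes from the preceding sketch):

    # alongside the 'Management' adapter, add cluster and Live Migration vNICs
    Add-VMNetworkAdapter -ManagementOS -Name 'Cluster' -SwitchName $SwitchName
    Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName $SwitchName

    # IP and subnet only; no gateway on the non-management adapters
    New-NetIPAddress -InterfaceAlias 'vEthernet (Cluster)' -IPAddress 192.168.20.20 -PrefixLength 24
    New-NetIPAddress -InterfaceAlias 'vEthernet (LiveMigration)' -IPAddress 192.168.30.20 -PrefixLength 24

    # keep the non-management adapters out of DNS
    Set-DnsClient -InterfaceAlias 'vEthernet (Cluster)', 'vEthernet (LiveMigration)' -RegisterThisConnectionsAddress $false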

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.

Advanced Usage for this Script

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.

 

Configure a clustered host with 2-4 10 GbE adapters for converged networking

This script is identical to the preceding except that it leaves VMQ enabled. It does the following:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

These notes are identical to those of the preceding script.

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.

Advanced Usage for this Script

These notes are identical to those of the preceding script.

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.

 

Set preferred order for cluster Live Migration networks

This script aligns with the two preceding scripts to ensure that the cluster chooses the named “Live Migration” adapter first when moving virtual machines between nodes. The “Cluster” virtual adapter will be used second. The management adapter will be used as the final fallback.

Basic Usage for this Script

Use this script after you’ve run one of the above two clustered host scripts and joined them into a cluster.
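
A minimal sketch of the approach, assuming you have renamed your cluster networks to “Live Migration”, “Cluster”, and “Management”:

    # preferred order, most preferred first
    $OrderedNetworks = 'Live Migration', 'Cluster', 'Management'

    # MigrationNetworkOrder expects a semicolon-delimited list of cluster network IDs
    $NetworkIDs = $OrderedNetworks | ForEach-Object { (Get-ClusterNetwork -Name $_).Id }
    Get-ClusterResourceType -Name 'Virtual Machine' |
        Set-ClusterParameter -Name MigrationNetworkOrder -Value ($NetworkIDs -join ';')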

Advanced Usage for this Script

Modify this to change the order of Live Migration adapters. You must specify all adapters recognized by the cluster. Check the “MigrationExcludeNetworks” registry key that’s in the same location as “MigrationNetworkOrder”.

 

Exclude cluster networks from Live Migration

This script is intended to be used as an optional adjunct to the preceding script. Since my scripts set up all virtual adapters to be used in Live Migration, the network names used here are fabricated.

Basic Usage for this Script

You’ll need to set the network names to match yours, but otherwise, the script does not need to be altered.
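
A sketch, using made-up network names that you would replace with your own:

    # cluster networks that should never carry Live Migration traffic
    $ExcludedNetworks = 'Storage 1', 'Storage 2'

    $NetworkIDs = $ExcludedNetworks | ForEach-Object { (Get-ClusterNetwork -Name $_).Id }
    Get-ClusterResourceType -Name 'Virtual Machine' |
        Set-ClusterParameter -Name MigrationExcludeNetworks -Value ($NetworkIDs -join ';')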

Advanced Usage for this Script

This script will need to be modified in order to be used at all.

 

How to Hot Add/Remove Virtual Network Adapters in Hyper-V 2016

How to Hot Add/Remove Virtual Network Adapters in Hyper-V 2016

Last week I showed you how to hot add/remove memory in Hyper-V 2016 and this week I’m covering another super handy new feature that system admins will also love. In fact, Hyper-V 2016 brought many fantastic features. Containers! It also added some features that indicate natural product maturation. On that list, we find “hot add/remove of virtual network adapters”. If that’s not obvious, it means that you can now add or remove virtual network adapters to/from running virtual machines.

Requirements for Hyper-V Hot Add/Remove of Virtual Network Adapters

To make hot add/remove of network adapters work in Hyper-V, you must meet these requirements:

  • Hypervisor must be 2016 version (Windows 10, Windows Server 2016, or Hyper-V Server 2016)
  • Virtual machine must be generation 2
  • To utilize the Device Naming feature, the virtual machine version must be at least 6.2. The virtual machine configuration version does not matter if you do not attempt to use Device Naming. Meaning, you can bring a version 5.0 virtual machine over from 2012 R2 to 2016 and hot add a virtual network adapter. A discussion on Device Naming will appear in a different article.

The guest operating system may need an additional push to realize that a change was made. I did not encounter any issues with the various operating systems that I tested.

How to Use PowerShell to Add or Remove a Virtual Network Adapter from a Running Hyper-V Guest

I always recommend PowerShell for working with the second and subsequent network adapters on a virtual machine. Otherwise, they’re all called “Network Adapter”, and sorting that out can be unpleasant.

Adding a Virtual Adapter with PowerShell

Use Add-VMNetworkAdapter to add a network adapter to a running Hyper-V guest. That’s the same command that you’d use for an offline guest, as well. I don’t know why the authors chose the verb “Add” instead of “New”.
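
A typical invocation looks something like this (the virtual machine, switch, and adapter names are placeholders for your own):

    Add-VMNetworkAdapter -VMName 'svtest' -SwitchName 'vSwitch' -Name 'Secondary' -DeviceNaming On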

The above will work on a virtual machine with a configuration version of at least 6.2. If the virtual machine is set to a lower version, you get a rather confusing message that talks about DVD drives.

It does eventually get around to telling you exactly what it doesn’t like. You can avoid this error by not specifying the DeviceNaming parameter. If you’re scripting, you can avoid the parameter by employing splatting or by setting DeviceNaming to Off.

You can use any of the other parameters of Add-VMNetworkAdapter normally.

Removing a Virtual Adapter with PowerShell

To remove the adapter, use Remove-VMNetworkAdapter:
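
For example, to remove the adapter added above:

    Remove-VMNetworkAdapter -VMName 'svtest' -Name 'Secondary'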

This is where things can get… interesting. Especially if you didn’t specify a unique name for the adapter. The Name parameter works like a search filter; it will remove any adapter that perfectly matches that name. So, if all of the virtual machine’s network adapters use the default name Network Adapter, and you specify Network Adapter for the Name parameter, then all of that VM’s adapters will be removed.

To address that issue, you’ll need to employ some cleverness. A quick ‘n’ dirty option would be to just remove all of the adapters, then add one. By default, that one adapter will pick up an IP from an available DHCP server. Since you can specify a static MAC address with the StaticMacAddress parameter of Add-VMNetworkAdapter, you can control that behavior with reservations.

You could also filter adapters by MAC address:
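
Something like this (the MAC address is a placeholder; Hyper-V stores it without separators):

    Get-VMNetworkAdapter -VMName 'svtest' |
        Where-Object -Property MacAddress -EQ '00155D010A1B' |
        Remove-VMNetworkAdapter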

You could also use arrays to selectively remove items:
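
For instance (a sketch; the indexes depend on which adapters you want to keep):

    $Adapters = Get-VMNetworkAdapter -VMName 'svtest'
    $Adapters[1, 2] | Remove-VMNetworkAdapter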

You could even use a loop to knock out all adapters after the first:
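
A sketch:

    $Adapters = Get-VMNetworkAdapter -VMName 'svtest'
    for ($i = 1; $i -lt $Adapters.Count; $i++) {
        $Adapters[$i] | Remove-VMNetworkAdapter
    }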

In my unscientific testing, virtual machine network adapters are always stored and retrieved in the order in which they were added, so the above script should always remove every adapter except the original. Based on the file format, I would expect that to always hold true. However, no documentation exists that outright supports that; use this sort of cleverness with caution.

I recommend naming your adapters to save a lot of grief in these instances.

How to Use the GUI to Add or Remove a Virtual Network Adapter from a Running Hyper-V Guest

These instructions work for both Hyper-V Manager and Failover Cluster Manager. Use the virtual machine’s Settings dialog in either tool.

Adding a Virtual Network Adapter in the GUI

Add a virtual network adapter to a running VM the same way that you add one to a stopped VM:

  1. On the VM’s Settings dialog, start on the Add Hardware page. The Network Adapter entry should be black, not gray. If it’s gray, then the VM is either Generation 1 or not in a valid state to add an adapter:
    [Screenshot: the Add Hardware page with the Network Adapter entry available]
  2. Highlight Network Adapter and click Add.
  3. You will be taken to a screen where you can fill out all of the normal information for a network adapter. Set all items as desired.
    [Screenshot: the new network adapter’s settings page]
  4. Once you’ve set everything to your liking, click OK to add the adapter and close the dialog or Apply to add the adapter and leave the dialog open.

Removing a Virtual Network Adapter in the GUI

As with adding an adapter, you remove an adapter from a running virtual machine the same way that you would remove one from a stopped virtual machine:

  1. Start on the Settings dialog for the virtual machine. Switch to the tab for the adapter that you wish to remove:
    [Screenshot: the settings tab for the adapter to be removed]
  2. Click the Remove button.
    [Screenshot: the Remove button on the adapter’s tab]
  3. The tab for the adapter to be removed will have all of its text crossed out. The dialog items for it will turn gray.
    [Screenshot: the adapter’s tab with its text crossed out, pending removal]
  4. Click OK to remove the adapter and close the dialog or Apply to remove the adapter and leave the dialog open. Click Cancel if you change your mind. For OK or Apply, a prompt will appear with a warning that you’ll probably disrupt network communications:
    [Screenshot: the warning prompt about disrupting network communications]

Hot Add/Remove of Hyper-V Virtual Adapters for Linux Guests

I didn’t invest a great deal of effort into testing, but this feature works for Linux guests with mixed results. A Fedora guest running on my Windows 10 system was perfectly happy with it:

[Screenshot: Fedora guest showing the hot-added network adapter]

OpenSUSE Leap… not so much:

[Screenshot: OpenSUSE Leap guest failing to show the hot-added adapter]

But then, I added another virtual network adapter to my OpenSUSE system. This time, I remembered to connect it to a virtual switch before adding. It liked that much better:

[Screenshot: OpenSUSE Leap guest showing the adapter that was added with a virtual switch connection]

So, the moral of the story: for Linux guests, always specify a virtual switch when hot adding a virtual network card. Connecting it afterward does not help.

Also notice that OpenSUSE Leap did not ever automatically configure the adapter for DHCP, whereas Fedora did. As I mentioned in the beginning of the article, you might need to give some environments an extra push.

Also, Leap seemed to get upset when I hot removed the adapter:

[Screenshot: OpenSUSE Leap console error after the hot removal]

To save your eyes, the meat of that message says: “unable to send revoke receive buffer to netvsp”. I don’t know if that’s serious or not. The second moral of this story, then: hot removing network adapters might leave some systems in an inconsistent, unhappy state.

My Thoughts on Hyper-V’s Hot Add/Remove of Network Adapters Feature

Previous versions of Hyper-V did not have this feature and I never missed it. I wasn’t even aware that other hypervisors had it until I saw posts from people scrounging for any tiny excuse to dump hate on Microsoft. Sure, I’ve had a few virtual machines with services that benefited from multiple network adapters. However, I knew of that requirement going in, so I just built them appropriately from the beginning. I suppose that’s a side effect of competent administration. Overall, I find this feature to be a hammer desperately seeking a nail.

That said, it misses the one use that I might have: it doesn’t work for generation 1 VMs. As you know, a generation 1 Hyper-V virtual machine can only PXE boot from a legacy network adapter. The legacy network adapter has poor performance. I’d like to be able to remove that legacy adapter post-deployment without shutting down the virtual machine. That said, it’s very low on my wish list. I’m guessing that we’ll eventually be using generation 2 VMs exclusively, so the problem will handle itself.

During my testing, I did not find any problems at all using this feature with Windows guests. As you can see from the Linux section, things didn’t go quite as well there. Either way, I would think twice about using this feature with production systems. Network disruptions do not always play out the way you expect. Multi-homed systems often crank the “strange” factor up somewhere near “haunted”. Multi-home a system and fire up Wireshark; I can almost promise that you’ll see something that you didn’t expect within the first five minutes.

I know that you’re going to use this feature anyway, and that’s fine; that’s why it’s there. I would make one recommendation: before removing an adapter, clear its TCP/IP settings and disconnect it from the virtual switch. That gives the guest operating system a better opportunity to deal with the removal of the adapter on familiar terms.

95 Best Practices for Optimizing Hyper-V Performance

95 Best Practices for Optimizing Hyper-V Performance

We can never get enough performance. Everything needs to be faster, faster, faster! You can find any number of articles about improving Hyper-V performance and establishing best practices; unfortunately, a lot of that information contains errors, FUD, and misconceptions, and some of it is just plain dated. Technology has changed, and experience continually teaches us new insights. From that, we can build a list of best practices that will help you tune your system to provide maximum performance.

How to optimize Hyper-V Performance

Philosophies Used in this Article

This article focuses primarily on performance. It may deviate from other advice that I’ve given in other contexts. A system designed with performance in mind will be built differently from a system with different goals. For instance, a system that tries to provide high capacity at a low price point would have a slower performance profile than some alternatives.

  • Subject matter scoped to the 2012 R2 and 2016 product versions.
  • I want to stay on target by listing the best practices with fairly minimal exposition. I’ll expand ideas where I feel the need; you can always ask questions in the comments section.
  • I am not trying to duplicate pure physical performance in a virtualized environment. That’s a wasted effort.
  • I have already written an article on best practices for balanced systems. It’s a bit older, but I don’t see anything in it that requires immediate attention. It was written for the administrator who wants reasonable performance but also wants to stay under budget.
  • This content targets datacenter builds. Client Hyper-V will follow the same general concepts with variable applicability.

General Host Architecture

If you’re lucky enough to be starting in the research phase — meaning, you don’t already have an environment — then you have the most opportunity to build things properly. Making good purchase decisions pays more dividends than patching up something that you’ve already got.

  1. Do not go in blind.
    • Microsoft Assessment and Planning Toolkit will help you size your environment: MAP Toolkit
    • Ask your software vendors for their guidelines for virtualization on Hyper-V.
    • Ask people that use the same product(s) if they have virtualized on Hyper-V.
  2. Stick with logo-compliant hardware. Check the official list: https://www.windowsservercatalog.com/
  3. Most people will run out of memory first, disk second, CPU third, and network last. Purchase accordingly.
  4. Prefer newer CPUs, but think hard before going with bleeding edge. You may need to improve performance by scaling out. Live Migration requires physical CPUs to be the same or you’ll need to enable CPU compatibility mode. If your environment starts with recent CPUs, then you’ll have the longest amount of time to be able to extend it. However, CPUs commonly undergo at least one revision, and that might be enough to require compatibility mode. Attaining maximum performance may reduce virtual machine mobility.
  5. Set a target density level, e.g. “25 virtual machines per host”. While it may be obvious that higher densities result in lower performance, finding the cut-off line for “acceptable” will be difficult. However, having a target VM number in mind before you start can make the challenge less nebulous.
  6. Read the rest of this article before you do anything.

Management Operating System

Before we carry on, I just wanted to make sure to mention that Hyper-V is a type 1 hypervisor, meaning that it runs right on the hardware. You can’t “touch” Hyper-V because it has no direct interface. Instead, you install a management operating system and use that to work with Hyper-V. You have three choices:

  • Windows Server with the full GUI
  • Windows Server Core
  • Hyper-V Server

Note: Nano Server initially offered Hyper-V, but that functionality will be removed (or has already been removed, depending on when you read this). Most people ignore the fine print of using Nano Server, so I never recommended it anyway.

TL;DR: In the absence of a blocking condition, choose Hyper-V Server. A solid blocking condition would be the Automatic Virtual Machine Activation feature of Datacenter Edition. In such cases, the next preferable choice is Windows Server in Core mode.

I organized those in order by distribution size. Volumes have been written about the “attack surface” and patching. Most of that material makes me roll my eyes. No matter what you think of all that, none of it has any meaningful impact on performance. For performance, concern yourself with the differences in CPU and memory footprint. The widest CPU/memory gap lies between Windows Server and Windows Server Core. When logged off, the Windows Server GUI does not consume many resources, but it does consume some. The space between Windows Server Core and Hyper-V Server is much tighter, especially when the same features/roles are enabled.

One difference between Core and Hyper-V Server is the licensing mechanism. On Datacenter Edition, that does include the benefit of Automatic Virtual Machine Activation (AVMA). That only applies to the technological wiring. Do not confuse it with the oft-repeated myth that installing Windows Server grants guest licensing privileges. The legal portion of licensing stands apart; read our eBook for starting information.

Because you do not need to pay for the license for Hyper-V Server, it grants one capability that Windows Server does not: you can upgrade at any time. That allows you to completely decouple the life cycle of your hosts from your guests. Such detachment is a hallmark of the modern cloud era.

If you will be running only open source operating systems, Hyper-V Server is the natural choice. You don’t need to pay any licensing fees to Microsoft at all with that usage. I don’t realistically expect any pure Linux shops to introduce a Microsoft environment, but Linux-on-Hyper-V is a fantastic solution in a mixed-platform environment. And with that, let’s get back onto the list.

Management Operating System Best Practices for Performance

  1. Prefer Hyper-V Server first, Windows Server Core second
  2. Do not install any software, feature, or role in the management operating system that does not directly aid the virtual machines or the management operating system. Hyper-V prioritizes applications in the management operating system over virtual machines. That’s because it trusts you; if you are running something in the management OS, it assumes that you really need it.
  3. Do not log on to the management operating system. Install the management tools on your workstation and manipulate Hyper-V remotely.
  4. If you must log on to the management operating system, log off as soon as you’re done.
  5. Do not browse the Internet from the management operating system. Don’t browse from any server, really.
  6. Stay current on mainstream patches.
  7. Stay reasonably current on driver versions. I know that many of my peers like to install drivers almost immediately upon release, but I can’t join that camp. While it’s not entirely unheard of for a driver update to bring performance improvements, it’s not common. With all of the acquisitions and corporate consolidations going on in the hardware space — especially networking — I feel that the competitive drive to produce quality hardware and drivers has entered a period of decline. In simple terms, view new drivers as a potential risk to stability, performance, and security.
  8. Join your hosts to the domain. Systems consume less of your time if they answer to a central authority.
  9. Use antivirus and intrusion prevention. As long as you choose your anti-malware vendor well and the proper exclusions are in place, performance will not be negatively impacted. Compare that to the performance of a compromised system.
  10. Read through our article on host performance tuning.

Leverage Containers

In the “traditional” virtualization model, we stand up multiple virtual machines running individual operating system environments. As “virtual machine sprawl” sets in, we wind up with a great deal of duplication. In the past, we could justify that as a separation of the environment. Furthermore, some Windows Server patches caused problems for some software but not others. In the modern era, containers and omnibus patch packages have upset that equation.

Instead of building virtual machine after virtual machine, you can build a few virtual machines. Deploy containers within them. Strategies for this approach exceed the parameters of this article, but you’re aiming to reduce the number of disparate complete operating system environments deployed. With careful planning, you can reduce density while maintaining a high degree of separation for your services. Fewer kernels are loaded, fewer context switches occur, less memory contains the same code bits, fewer disk seeks to retrieve essentially the same information from different locations.

  1. Prefer containers over virtual machines where possible.

CPU

You can’t do a great deal to tune CPU performance in Hyper-V. Overall, I count that among my list of “good things”; Microsoft did the hard work for you.

  1. Follow our article on host tuning; pay special attention to C States and the performance power settings.
  2. For Intel chips, leave hyperthreading on unless you have a defined reason to turn it off.
  3. Leave NUMA enabled in hardware. On your VMs’ property sheet, you’ll find a Use Hardware Topology button. Remember to use that any time that you adjust the number of vCPUs assigned to a virtual machine or move it to a host that has a different memory layout (physical core count and/or different memory distribution).
    [Screenshot: virtual machine NUMA settings with the Use Hardware Topology button]
  4. Decide whether or not to allow guests to span NUMA nodes (the global host NUMA Spanning setting). If you size your VMs to stay within a NUMA node and you are careful to not assign more guests than can fit solidly within each NUMA node, then you can increase individual VM performance. However, if the host has trouble locking VMs into nodes, then you can negatively impact overall memory performance. If you’re not sure, just leave NUMA at defaults and tinker later.
  5. For modern guests, I recommend that you use at least two virtual CPUs per virtual machine. Use more in accordance with the virtual machine’s performance profile or vendor specifications. This is my own personal recommendation; I can visibly detect the response difference between a single vCPU guest and a dual vCPU guest.
  6. For legacy Windows guests (Windows XP/Windows Server 2003 and earlier), use 1 vCPU. More will likely hurt performance more than help.
  7. Do not grant more than 2 vCPU to a virtual machine without just cause. Hyper-V will do a better job reducing context switches and managing memory access if it doesn’t need to try to do too much core juggling. I’d make exceptions for very low-density hosts where 2 vCPU per guest might leave unused cores. At the other side, if you’re assigning 24 cores to every VM just because you can, then you will hurt performance.
  8. If you are preventing VMs from spanning NUMA nodes, do not assign more vCPU to a VM than you have matching physical cores in a NUMA node (usually means the number of cores per physical socket, but check with your hardware manufacturer).
  9. Use Hyper-V’s priority, weight, and reservation settings with great care. CPU bottlenecks are highly uncommon; look elsewhere first. A poor reservation will cause more problems than it solves.

Memory

I’ve long believed that every person that wants to be a systems administrator should be forced to become conversant in x86 assembly language, or at least C. I can usually spot people that have no familiarity with programming in such low-level languages because they almost invariably carry a bizarre mental picture of how computer memory works. Fortunately, modern memory is very, very, very fast. Even better, the programmers of modern operating system memory managers have gotten very good at their craft. Trying to tune memory as a systems administrator rarely pays dividends. However, we can establish some best practices for memory in Hyper-V.

  1. Follow our article on host tuning. Most importantly, if you have multiple CPUs, install your memory such that it uses multi-channel and provides an even amount of memory to each NUMA node.
  2. Be mindful of operating system driver quality. Windows drivers differ from applications in that they can permanently remove memory from the available pool. If they do not properly manage that memory, then you’re headed for some serious problems.
  3. Do not make your CSV cache too large.
  4. For virtual machines that will perform high quantities of memory operations, avoid dynamic memory. Dynamic memory disables NUMA (out of necessity). How do you know what constitutes a “high volume”? Without performance monitoring, you don’t.
  5. Set your fixed memory VMs to a higher priority and a shorter startup delay than your Dynamic Memory VMs. This ensures that they will start first, allowing Hyper-V to plot an optimal NUMA layout and reduce memory fragmentation. It doesn’t help a lot in a cluster, unfortunately. However, even in the best case, this technique won’t yield many benefits.
  6. Do not use more memory for a virtual machine than you can prove that it needs. Especially try to avoid using more memory than will fit in a single NUMA node.
  7. Use Dynamic Memory for virtual machines that do not require the absolute fastest memory performance.
  8. For Dynamic Memory virtual machines, pay the most attention to the startup value. It sets the tone for how the virtual machine will be treated during runtime. For virtual machines running full GUI Windows Server, I tend to use a startup of either 1 GB or 2 GB, depending on the version and what else is installed.
  9. For Dynamic Memory VMs, set the minimum to the operating system vendor’s stated minimum (512 MB for Windows Server). If the VM hosts a critical application, add to the minimum to ensure that it doesn’t get choked out.
  10. For Dynamic Memory VMs, set the maximum to a reasonable amount. You’ll generally discover that amount through trial and error and performance monitoring. Do not set it to an arbitrarily high number. Remember that, even on 2012 R2, you can raise the maximum at any time.

Check the CPU section for NUMA guidance.

Networking

In the time that I’ve been helping people with Hyper-V, I don’t believe that I’ve seen anyone waste more time worrying about anything that’s less of an issue than networking. People will read whitepapers and forums and blog articles and novels and work all weekend to draw up intricately designed networking layouts that need eight pages of documentation. But, they won’t spend fifteen minutes setting up a network utilization monitor. I occasionally catch grief for using MRTG since it’s old and there are shinier, bigger, bolder tools, but MRTG is easy and quick to set up. You should know how much traffic your network pushes. That knowledge can guide you better than any abstract knowledge or feature list.

That said, we do have many best practices for networking performance in Hyper-V.

  1. Follow our article on host tuning. Especially pay attention to VMQ on gigabit and separation of storage traffic.
  2. If you need your network to go faster, use faster adapters and switches. A big team of gigabit won’t keep up with a single 10 gigabit port.
  3. Use a single virtual switch per host. Multiple virtual switches add processing overhead. Usually, you can get a single switch to do whatever you wanted multiple switches to do.
  4. Prefer a single large team over multiple small teams. This practice can also help you to avoid needless virtual switches.
  5. For gigabit, anything over 4 physical ports probably won’t yield meaningful returns. I would use 6 at the outside. If you’re using iSCSI or SMB, then two more physical adapters just for that would be acceptable.
  6. For 10GbE, anything over 2 physical ports probably won’t yield meaningful returns.
  7. If you have 2 10GbE and a bunch of gigabit ports in the same host, just ignore the gigabit. Maybe use it for iSCSI or SMB, if it’s adequate for your storage platform.
  8. Make certain that you understand how the Hyper-V virtual switch functions. Most important:
    • You cannot “see” the virtual switch in the management OS except with Hyper-V specific tools. It has no IP address and no presence in the Network and Sharing Center applet.
    • Anything that appears in Network and Sharing Center that you think belongs to the virtual switch is actually a virtual network adapter.
    • Layer 3 (IP) information in the host has no bearing on guests — unless you create an IP collision
  9. Do not create a virtual network adapter in the management operating system for the virtual machines. I did that before I understood the Hyper-V virtual switch, and I have encountered lots of other people that have done it. The virtual machines will use the virtual switch directly.
  10. Do not multi-home the host unless you know exactly what you are doing. Valid reasons to multi-home:
    • iSCSI/SMB adapters
    • Separate adapters for cluster roles. e.g. “Management”, “Live Migration”, and “Cluster Communications”
  11. If you multi-home the host, give only one adapter a default gateway. If other adapters must use gateways, use the old route command or the new New-NetRoute command.
  12. Do not try to use internal or private virtual switches for performance. The external virtual switch is equally fast. Internal and private switches are for isolation only.
  13. If all of your hardware supports it, enable jumbo frames. Ensure that you perform validation testing (e.g., ping storage-ip -f -l 8000).
  14. Pay attention to IP addressing. If traffic needs to locate an external router to reach another virtual adapter on the same host, then traffic will traverse the physical network.
  15. Use networking QoS if you have identified a problem.
    • Use datacenter bridging, if your hardware supports it.
    • Prefer the Weight QoS mode for the Hyper-V switch, especially when teaming.
    • To minimize the negative side effects of QoS, rely on limiting the maximums of misbehaving or non-critical VMs over trying to guarantee minimums for vital VMs.
  16. If you have SR-IOV-capable physical NICs, it provides the best performance. However, you can’t use the traditional Windows team for the physical NICs. Also, you can’t use VMQ and SR-IOV at the same time.
  17. Switch-embedded teaming (2016) allows you to use SR-IOV. Standard teaming does not.
  18. If using VMQ, configure the processor sets correctly.
  19. When teaming, prefer Switch Independent mode with the Dynamic load balancing algorithm. I have done some performance testing on the types (near the end of the linked article). However, a reader commented on another article that the Dynamic/Switch Independent combination can cause some problems for third-party load balancers (see comments section).

Storage

When you need to make real differences in Hyper-V’s performance, focus on storage. Storage is slow. The best way to make storage not be slow is to spend money. But, we have other ways.

  1. Follow our article on host tuning. Especially pay attention to:
    • Do not break up internal drive bays between Hyper-V and the guests. Use one big array.
    • Do not tune the Hyper-V partition for speed. After it boots, Hyper-V averages zero IOPS for itself. As a prime example, don’t put Hyper-V on SSD and the VMs on spinning disks. Do the opposite.
    • The best way to get more storage speed is to use faster disks and bigger arrays. Almost everything else will only yield tiny differences.
  2. For VHD (not VHDX), use fixed disks for maximum performance. Dynamically-expanding VHD is marginally, but measurably, slower.
  3. For VHDX, use dynamically-expanding disks for everything except high-utilization databases. I receive many arguments on this, but I’ve done the performance tests and have years of real-world experience. You can trust that (and run the tests yourself), or you can trust theoretical whitepapers from people that make their living by overselling disk space but have perpetually misplaced their copy of diskspd.
  4. Avoid using shared VHDX (2012 R2) or VHDS (2016). Performance still isn’t there. Give this technology another maturation cycle or two and look at it again.
  5. Where possible, do not use multiple data partitions in a single VHD/X.
  6. When using Cluster Shared Volumes, try to use at least as many CSVs as you have nodes. Starting with 2012 R2, CSV ownership will be distributed evenly, theoretically improving overall access.
  7. You can theoretically improve storage performance by dividing virtual machines across separate storage locations. If you need to make your arrays span fewer disks in order to divide your VMs’ storage, you will have a net loss in performance. If you are creating multiple LUNs or partitions across the same disks to divide up VMs, you will have a net loss in performance.
  8. For RDS virtual machine-based VDI, use hardware-based or Windows’ Hyper-V-mode deduplication on the storage system. The read hits, especially with caching, yield positive performance benefits.
  9. The jury is still out on using host-level deduplication for Windows Server guests, but it is supported with 2016. I personally will be trying to place Server OS disks on SMB storage deduplicated in Hyper-V mode.
  10. The slowest component in a storage system is the disk(s); don’t spend a lot of time worrying about controllers beyond enabling caching.
  11. RAID-0 is the fastest RAID type, but provides no redundancy.
  12. RAID-10 is generally the fastest RAID type that provides redundancy.
  13. For Storage Spaces, three-way mirror is fastest (by a lot).
  14. For remote storage, prefer MPIO or SMB multichannel over multiple unteamed adapters. Avoid placing this traffic on teamed adapters.
  15. I’ve read some scattered notes that say that you should format with 64 kilobyte allocation units. I have never done this, mostly because I don’t think about it until it’s too late. If the default size hurts anything, I can’t tell. Someday, I’ll remember to try it and will update this article after I’ve gotten some performance traces. If you’ll be hosting a lot of SQL VMs and will be formatting their VHDX with 64kb AUs, then you might get more benefit.
  16. I still don’t think that ReFS is quite mature enough to replace NTFS for Hyper-V. For performance, I definitely stick with NTFS.
  17. Don’t do full defragmentation. It doesn’t help. The minimal defragmentation that Windows automatically performs is all that you need. If you have some crummy application that makes this statement false, then stop using that application or exile it to its own physical server. Defragmentation’s primary purpose is to wear down your hard drives so that you have to buy more hard drives sooner than necessary, which is why employees of hardware vendors recommend it all the time. If you have a personal neurosis that causes you pain when a disk becomes “too” fragmented, use Storage Live Migration to clear and then re-populate partitions/LUNs. It’s wasted time that you’ll never get back, but at least it’s faster. Note: All retorts must include verifiable and reproducible performance traces, or I’m just going to delete them.

Clustering

For real performance, don’t cluster virtual machines. Use fast internal or direct-attached SSDs. Cluster for redundancy, not performance. Use application-level redundancy techniques instead of relying on Hyper-V clustering.

In the modern cloud era, though, most software doesn’t have its own redundancy and host clustering is nearly a requirement. Follow these best practices:

  1. Validate your cluster. You may not need to fix every single warning, but be aware of them.
  2. Follow our article on host tuning. Especially pay attention to the bits on caching storage. It includes a link to enable CSV caching.
  3. Remember your initial density target. Add as many nodes as necessary to maintain that along with sufficient extra nodes for failure protection.
  4. Use the same hardware in each node. You can mix hardware, but CPU compatibility mode and mismatched NUMA nodes will have at least some impact on performance.
  5. For Hyper-V, every cluster node should use a minimum of two separate IP endpoints. Each IP must exist in a separate subnet. This practice allows the cluster to establish multiple simultaneous network streams for internode traffic.
    • One of the addresses must be designated as a “management” IP, meaning that it must have a valid default gateway and register in DNS. Inbound connections (such as your own RDP and PowerShell Remoting) will use that IP.
    • None of the non-management IPs should have a default gateway or register in DNS.
    • One alternative IP endpoint should be preferred for Live Migration. Cascade Live Migration preference order through the others, ending with the management IP. You can configure this setting most easily in Failover Cluster Manager by right-clicking on the Networks node.
    • Further IP endpoints can be used to provide additional pathways for cluster communications. Cluster communications include the heartbeat, cluster status and update messages, and Cluster Shared Volume information and Redirected Access traffic.
    • You can set any adapter to be excluded from cluster communications but included in Live Migration in order to enforce segregation. Doing so generally does not improve performance, but may be desirable in some cases.
    • You can use physical or virtual network adapters to host cluster IPs.
    • The IP for each cluster adapter must exist in a unique subnet on that host.
    • Each cluster node must contain an IP address in the same subnet as the IPs on other nodes. If a node does not contain an IP in a subnet that exists on other nodes, then that network will be considered “partitioned” and the node(s) without a member IP will be excluded from that network.
    • If the host will connect to storage via iSCSI, segregate iSCSI traffic onto its own IP(s). Exclude it/them from cluster communications and Live Migration. Because they don’t participate in cluster communications, it is not absolutely necessary that they be placed into separate subnets. However, doing so will provide some protection from network storms.
  6. If you do not have RDMA-capable physical adapters, Compression usually provides the best Live Migration performance.
  7. If you do have RDMA-capable physical adapters, SMB usually provides the best Live Migration performance (see the example after this list).
  8. I don’t recommend spending time tinkering with the metric to shape CSV traffic anymore. It utilizes SMB, so the built-in SMB multi-channel technology can sort things out.
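
For items 6 and 7, the performance option is a per-host setting; a sketch (run it on every node, and pick only one of the two lines):

    # no RDMA: compression usually wins
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

    # RDMA-capable adapters carrying Live Migration: use SMB
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB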

Virtual Machines

The preceding guidance obliquely covers several virtual machine configuration points (check the CPU and the memory sections). We have a few more:

  1. Don’t use Shielded VMs or BitLocker. The encryption and VMWP hardening incur overhead that will hurt performance. The hit is minimal — but this article is about performance.
  2. If you have 1) VMs with very high inbound networking needs, 2) physical NICs >= 10GbE, 3) VMQ enabled, 4) spare CPU cycles, then enable RSS within the guest operating systems. Do not enable RSS in the guest OS unless all of the preceding are true.
  3. Do not use the legacy network adapter in Generation 1 VMs any more than absolutely necessary.
  4. Utilize checkpoints rarely and briefly. Know the difference between standard and “production” checkpoints.
  5. Use time synchronization appropriately. Meaning, virtual domain controllers should not have the Hyper-V time synchronization service enabled, but all other VMs should (generally speaking). The hosts should pull their time from the domain hierarchy. If possible, the primary domain controller should be pulling from a secured time source.
  6. Keep Hyper-V guest services up-to-date. Supported Linux systems can be updated via kernel upgrades/updates from their distribution repositories. Windows 8.1+ and Windows Server 2012 R2+ will update from Windows Update.
  7. Don’t do full defragmentation in the guests, either. Seriously. We’re administering multi-spindle server equipment here, not displaying a progress bar to someone with a 5400-RPM laptop drive so that they feel like they’re accomplishing something.
  8. If the virtual machine’s primary purpose is to run an application that has its own replication technology, don’t use Hyper-V Replica. Examples: Active Directory and Microsoft SQL Server. Such applications will replicate themselves far more efficiently than Hyper-V Replica.
  9. If you’re using Hyper-V Replica, consider moving the VMs’ page files to their own virtual disk and excluding it from the replica job. If you have a small page file that doesn’t churn much, that might cost you more time and effort than you’ll recoup.
  10. If you’re using Hyper-V Replica, enable compression if you have spare CPU but leave it disabled if you have spare network. If you’re not sure, use compression.
  11. If you are shipping your Hyper-V Replica traffic across an encrypted VPN or keeping its traffic within secure networks, use Kerberos. Certificate-based (SSL) en/decryption requires CPU and adds overhead to the transmitted data.

Monitoring

You must monitor your systems. Monitoring is not, and has never been, an optional activity.

  1. Be aware of Hyper-V-specific counters. Many people try to use Task Manager in the management operating system to gauge guest CPU usage, but it just doesn’t work. The management operating system is a special-case virtual machine, which means that it is using virtual CPUs. Its Task Manager cannot see what the guests are doing; use the Hyper-V counters instead (see the example after this list).
  2. Performance Monitor has the most power of any built-in tool, but it’s tough to use. Look at something like the Performance Analysis of Logs (PAL) tool, which understands Hyper-V.
  3. In addition to performance monitoring, employ state monitoring. With that, you no longer have to worry (as much) about surprise events like disk space or memory filling up. I like Nagios, as regular readers already know, but you can select from many packages.
  4. Take periodic performance baselines and compare them to earlier baselines.

 

If you’re able to address a fair proportion of points from this list, I’m sure you’ll see a boost in Hyper-V performance. Don’t forget that this list is not exhaustive; I’ll be adding to it periodically to keep it as comprehensive as possible. If you think something is missing, let me know in the comments below and you may see the number 95 increase!

Get Involved on Twitter: #How2HyperV

Get involved on Twitter, where we will be regularly posting excerpts from this article and engaging the IT community to help each other improve our use of Hyper-V. Got your own Hyper-V tips or tricks for boosting performance? Use the hashtag #How2HyperV when you tweet and share your knowledge with the world!


The Complete Guide to Hyper-V Networking


I frequently write about all sorts of Hyper-V networking topics. I was surprised to learn that we’ve never published a unified article that gives a clear and complete how-to that brings all of these related topics into one resource. We’ll fix that right now.

Understanding the Basics of Hyper-V Networking

We have produced copious amounts of material explaining the various concepts around Hyper-V networking. I want to spend as little time as possible on that here. Comprehension is very important, though, so here’s an index of expository work:

  • How the Hyper-V Virtual Switch Works: If you don’t understand the contents of that article, you will have a very difficult time administering Hyper-V. Read it, and read it again until you have absorbed it. It answers easily 90% of the questions that I receive about Hyper-V networking. If something there doesn’t make sense, ask.
  • The OSI model and Hyper-V: A quick read on the OSI model and a brief introduction to its relevance to Hyper-V. If you’ve been skimming over the terms “layer 2” and “layer 3” because you don’t have a solid understanding of them, read it.
  • Hyper-V and VLANs: That article ties closely to the OSI article. VLANs are a layer 2 technology. Due to common usage, newcomers often confuse them with layer 3 operations. I’m frequently asked about trunking multiple VLANs into a virtual machine, even though I’m fairly certain that most people don’t really want to do that. This article should help you sort out those concepts.
  • Hyper-V and IP: That article also ties closely to the OSI article and contrasts against the VLAN article. It doesn’t contain a great deal of direct Hyper-V knowledge, but it should help fill any of the most serious deficiencies in TCP/IP comprehension.
  • Hyper-V and Link Aggregation (Teaming): That article describes the concepts around NIC teaming and addresses some of the myths that I encounter. The article that you’re reading now will bring you the “how”.
  • Hyper-V and DNS: If I were to compile a list of ridiculously simple technologies that people tend to ridiculously over-complicate, I’d place DNS in the top slot. Hyper-V itself cares nothing about DNS, but its management operating systems and guests care very much. Poor DNS configurations can be blamed for nearly all of the world’s technological ills. You must learn it. It won’t take long.
  • Hyper-V and Binding Order: Lots of administrators spend lots of time wringing their hands over network binding order. Stop. Only the DNS subsystem and one other thing (that I’ve now forgotten about) pay any attention to the binding order. If you get that, then you don’t really need to read the linked article.
  • Hyper-V and Load Balancing Algorithms: The “hows” of load balancing algorithms will be on display in the article that you’re reading. If you want to understand the “what” and the “why”, then follow the link.
  • Hyper-V and MPIO and Teaming for Storage: I see lots of complaints from people that create a switch independent team on a pair of 10GbE pipes that wind back to a storage array with 5x 10,000 RPM disks. They test it with a file copy and don’t understand why they can’t move 20Gbps. Invariably, they blame Hyper-V. If you don’t want to be that guy, the linked article should help.

That should serve as a decent reference on the concepts. If you don’t understand something written below, it’s probably because you don’t understand something linked above.

Contents of this Article

I will demonstrate common PowerShell and, where available, GUI methods for working with:

  • Standard network adapter teaming
  • Hyper-V virtual switch
  • Switch Embedded Teaming
  • Hyper-V virtual adapters

PowerShell or GUI?

Use PowerShell for quick, precise, repeatable, scriptable operations. Use the GUI to do the same amount of work in twice the time following four times as many instructions. I will show all of the PowerShell methods first for the benefit of those that just want to get things done. If you prefer to plod through dozens of GUI screens, scroll to the bottom half of the article. Be aware that many things can’t be done in the GUI.

If you’re just getting started with PowerShell, remember to use tab completion! It makes all the difference!

Creating and Working with Network Adapter Teams for Hyper-V in PowerShell

If you’re interested in Switch Embedded Teaming (Server 2016 only), then look a few headings downward. This section applies to the standard Microsoft teaming methods.

First things first. You need to know which adapters to add to the team. Discover your available adapters:
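
Something like the following works; the sort and the column selection are only for readability:

Get-NetAdapter | Sort-Object -Property Name | Format-Table -Property Name, InterfaceDescription, Status, LinkSpeed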

I’ll use my system for reference. I’ve renamed all of the adapters in my system so that I can recognize them. If your hardware supports Consistent Device Naming, then you’ll likely already have actionable names (like “Slot 4 Port 1”). If not, you’ll need to find your own way to identify adapters. I use my switch’s interface to enable the ports one at a time, identifying the adapters as they switch to Connected status.


The PowerShell cmdlets for networking allow you to use names, indexes, or descriptions to manipulate adapters. The teaming cmdlets only work with names.

Create a Windows Team

Create teams with New-NetLbfoTeam.

I use my demo machines’ “P*L” adapters for Hyper-V teams. One way to create a team for them:
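
A sketch; the member names are hypothetical, so substitute the adapter names that Get-NetAdapter showed you.

# 'P1L' and 'P2L' are example adapter names
New-NetLbfoTeam -Name vSwitch -TeamMembers 'P1L', 'P2L' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic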

I usually name my team for the virtual switch that I create on it, but choose any name that you like. The TeamMembers parameter accepts a comma-separated list of the names of the physical adapters to add to the team. I promised not to go into detail on the options, and I won’t. Just remember that the other parameters and their values are selectable by tab completion. SwitchIndependent is the preferred teaming mode in most cases, with LACP being second. I have never seen any compelling reason to use a load balancing algorithm other than Dynamic; it combines the best of the Hyper-V Port and hash modes along with some special features that can relocate traffic dynamically. However, if you will be combining Switch Independent and Dynamic with an external third-party load balancer, I recommend that you read the comment section for helpful warnings from reader Jahn.

To save even more time and space, the cmdlet is smart enough to allow you to use wildcards for the adapter names:
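
For example, since all of my demo adapters match the P*L pattern:

New-NetLbfoTeam -Name vSwitch -TeamMembers 'P*L' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic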

If you want to avoid the prompt for scripting purposes, add the Force parameter.

A Note on the Team NIC

When you create a team, you also create a logical adapter that represents that team. A logical team NIC (often abbreviated as tNIC) works in a conceptually similar fashion to a Hyper-V virtual NIC. You treat it just like you would a physical adapter — give it an IP address, etc. The team determines what to do with your traffic. If you use the cmdlets as shown above, one team NIC will be created and it will have the same name as the team (“vSwitch”, in this case). You can override that name with the TeamNicName parameter.

You can also add more team NICs to a team. For a team that hosts a Hyper-V virtual switch, it’s neither recommended nor supported, although the system will allow it. Additional tNICs must be created in their own VLAN, which hides that VLAN from the team. Also, it’s not documented or clear how additional tNICs affect any QoS settings on a Hyper-V virtual switch.

For the rest of this article, the single automatically-created tNIC will be the only one referenced.

Examine Teams and tNICs

View all teams and their statuses with Get-NetLbfoTeam. You don’t need to supply any parameters. I get more use out of Get-NetLbfoTeamMember, also without parameters.
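
Both run happily without parameters:

Get-NetLbfoTeam
Get-NetLbfoTeamMember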

Remove and Add Team Members

You can easily remove team members if you have the need:
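
For example, using one of the hypothetical member names from earlier:

Remove-NetLbfoTeamMember -Name 'P1L' -Team vSwitch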

And add them:
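
Using the same assumed names:

Add-NetLbfoTeamMember -Name 'P1L' -Team vSwitch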

Removing an adapter obviously disrupts the traffic on that member, but the team will handle it well. You can add a team member at any time.

Delete a Team

Use Remove-NetLbfoTeam to get rid of a team. You can use the Name parameter to reverse what you’ve done. Since my hosts only ever use a single team, I can do this:
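
One way to express that, via the pipeline:

Get-NetLbfoTeam | Remove-NetLbfoTeam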

Working with the Hyper-V Virtual Switch

I always use Hyper-V virtual switches and Microsoft teams together, so I have a certain technique. You may choose a different path. Just understand that external switches must be created on an adapter. I will always use the default tNIC. If you’re not teaming, then you’ll pick a single physical NIC. Use Get-NetAdapter as shown in the teaming section above to determine the name of the adapter that you wish to use.

Create a Virtual Switch

Use New-VMSwitch to create a new switch. Most commonly, you’ll want the external type (refer to the articles linked at the beginning if you need an explanation). External switches require you to specify a logical or physical (but not virtual) adapter. You can use its friendly name or its less friendly description. I use the name. In my case, I’m binding to a team’s logical adapter, so, as explained a bit ago, I’ll use the team’s name.
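
A sketch, assuming the team (and therefore its default tNIC) is named vSwitch. I set the QoS mode explicitly only because, as noted below, it cannot be changed later; most people never need QoS at all.

# The NetAdapterName here is the team's tNIC; MinimumBandwidthMode is optional
New-VMSwitch -Name vSwitch -NetAdapterName vSwitch -AllowManagementOS $false -MinimumBandwidthMode Weight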

For internal or private, use the SwitchType parameter instead of the NetAdapterName parameter and do not use AllowManagementOS.
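
For example, with a hypothetical switch name:

New-VMSwitch -Name vInternal -SwitchType Internal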

Several things to note about the New-VMSwitch cmdlet:

  • New-VMSwitch is not one of the better-developed cmdlets. Usually, when tabbing through available parameters, your options are presented in a sensible order. New-VMSwitch’s parameters are all over the place.
  • The documentation for every version of New-VMSwitch always says that the default MinimumBandwidthMode is Weight. I used to classify this as an error, but it’s been going on for so long I’m starting to wonder if it’s an unfunny inside joke or a deliberate lie. The default is Absolute. Most people won’t ever need QoS, so I don’t know that it has practical importance. However, you can’t change a switch’s QoS mode after it’s been created, so I’d rather tell you this up front.
  • The “AllowManagementOS” parameter’s name is nonsense. What it really means is “immediately create a virtual adapter for the management operating system”. The only reason that I don’t allow it to create one is because it uses the same name for the virtual adapter as the virtual switch. That’s confusing for people that don’t know how all of this works. You can always add virtual adapters later, so the “allow” verb makes no sense whatsoever.

Manipulate a Virtual Switch

Use Set-VMSwitch to make changes to your switch. The cmdlet has so many options that I can’t rationally explain it all. Just scan the parameter list to find what you want. A couple of notes, though:

  • You can’t change the QoS mode of an existing virtual switch.
  • You can switch between External, Internal, and Private types.
    • To go from External to either of the other types: Set-VMSwitch -Name vSwitch -SwitchType Internal. Just use Private instead of Internal if you want that switch type.
    • To go from Private or Internal to External: Set-VMSwitch -Name vSwitch -NetAdapterName vSwitch. You’d also use this format to move a virtual switch from one physical/logical network adapter to another.
  • You can’t rename a virtual switch with this cmdlet. Use Rename-VMSwitch.

Remove a Virtual Switch

Appropriately enough, Remove-VMSwitch removes a virtual switch.
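
For a single switch, by name:

# add -Force to skip the confirmation prompt
Remove-VMSwitch -Name vSwitch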

You can remove all virtual switches in one shot:
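
One way, via the pipeline; Force suppresses the confirmation prompt:

Get-VMSwitch | Remove-VMSwitch -Force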

When a switch is removed, virtual NICs on VMs are disconnected. Virtual NICs for the management OS are destroyed.

Speaking of virtual NICs, that’s the next thing you care about if you’re using a standard virtual switch. I’ll explain them after the Switch Embedded Team section.

Working with Hyper-V Switch Embedded Teams

Server 2016 adds Switch Embedded Teaming. If you’re planning to create a team of gigabit adapters, then I recommend that you use the traditional teaming method outlined above. I wrote an article explaining why.

Create a Switch Embedded Team (SET)

Use the familiar New-VMSwitch to set it up, but add the EnableEmbeddedTeaming option. Two other options not shown in the following are EnableIov and EnablePacketDirect.
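
A sketch, again using hypothetical adapter names and the Weight QoS mode recommended below:

# 'P1L' and 'P2L' are example adapter names
New-VMSwitch -Name vSwitch -NetAdapterName 'P1L', 'P2L' -EnableEmbeddedTeaming $true -AllowManagementOS $false -MinimumBandwidthMode Weight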

The documentation continues to be wrong on MinimumBandwidthMode. If you don’t specify otherwise, you get Absolute. Prefer Weight.

Use EnableIov if, and only if, you have 10GbE adapters that support it. I cannot find any details on Packet Direct anywhere. Everyone just repeats that it provides a low-latency connection that bypasses the virtual switch. A few sources add that it will force Hyper-V Port load balancing mode. My hardware doesn’t support it, so I can’t test it. I assume that it only works on 10GbE and probably only with SR-IOV.

Once a SET has been created, you can view it with both Get-VMSwitch and Get-VMSwitchTeam. For whatever reason, the output includes the difficult-to-read interface descriptions instead of adapter names the way that Get-NetLbfoTeam does. You can see the adapter names with something like this:
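
One possible approach, feeding the interface descriptions reported by Get-VMSwitchTeam back into Get-NetAdapter:

Get-VMSwitchTeam -Name vSwitch |
    Select-Object -ExpandProperty NetAdapterInterfaceDescription |
    ForEach-Object { Get-NetAdapter -InterfaceDescription $_ } |
    Format-Table -Property Name, InterfaceDescription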

The SET cmdlets have no analog for Get-NetLbfoTeamMember.

SET does not expose a logical adapter to Windows the way that LBFO does.

Manipulate a Switch Embedded Team

You can change the members and the load balancing mode for a SET using Set-VMSwitchTeam.
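
For example, to move the assumed vSwitch SET to Hyper-V Port balancing:

Set-VMSwitchTeam -Name vSwitch -LoadBalancingAlgorithm HyperVPort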

Add and Remove SET Members

Instead of Set-VMSwitchTeam, you can use Add-VMSwitchTeamMember and Remove-VMSwitchTeamMember.
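
A sketch with a hypothetical spare adapter named P3L:

Add-VMSwitchTeamMember -VMSwitchName vSwitch -NetAdapterName 'P3L'
Remove-VMSwitchTeamMember -VMSwitchName vSwitch -NetAdapterName 'P3L'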

Remove a SET

Use Remove-VMSwitch to remove a SET. There is no Remove-VMSwitchTeam cmdlet.

Working with Virtual Network Adapters

You can attach virtual network adapters (vNICs) to the management operating system or to virtual machines. You’ll most commonly use them with virtual machines, but those usually require less work from you: their default settings tend to be sufficient, and you can work with them through their owning virtual machine’s GUI property pages.

For almost every vNIC-related cmdlet, you must indicate whether you’re working with a management OS vNIC or a VM’s vNIC. Do this with the ManagementOS switch parameter or by supplying a value for either the VM or the VMName parameters. If you have a vNIC object, such as the one output by Get-VMNetworkAdapter, then you can pipe it to most of the vNIC cmdlets or provide it as the VMNetworkAdapter parameter. You won’t need to specify any of the other identifying parameters, including those previously mentioned in this paragraph, when you provide the vNIC object.

View a Virtual Network Adapter

The simple act of creating a virtual machine, or of creating a virtual switch with AllowManagementOS set, creates a vNIC. To view them all:
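
In one pass:

# returns management OS vNICs and every VM's vNICs
Get-VMNetworkAdapter -All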

Ordinarily, we give descriptive names to management OS vNICs, especially when we use more than one. If you didn’t specify AllowManagementOS, then you’ll have a vNIC with the same name as your vSwitch.

Each management OS vNIC will appear in the Network Connections applet and Get-NetAdapter with the format vEthernet (vNICName). Avoid confusion by changing the default vNIC’s name (shown in a bit). Many newcomers believe that this vNIC is the virtual switch because of that name. You cannot “see” the virtual switch anywhere except in Hyper-V-specific management tools.
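
If the system created a management OS vNIC named after the switch, renaming it might look like this; the new name is only an example:

Rename-VMNetworkAdapter -ManagementOS -Name vSwitch -NewName Management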

Ordinarily, we leave the default name of “Network Adapter” for virtual machine vNICs. New in 2016, changes to a guest’s vNIC name will appear in the guest operating system if it supports Consistent Device Naming (CDN).

Manipulate a Virtual Network Adapter

Use Set-VMNetworkAdapter to change vNIC settings. This cmdlet is quite busy; I could write multiple full-length articles on various parameter groups. Settings categories available with this command (a brief sketch follows the list):

  • Quality of service (Qos)
  • Security (MAC spoofing, router guard, DHCP guard, storm limit)
  • Replica
  • In-guest teaming
  • Performance (VMQ, IOV, vRSS, Packet Direct)
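
A brief sketch touching the security group, using a hypothetical virtual machine named svtest:

Set-VMNetworkAdapter -VMName svtest -Name 'Network Adapter' -MacAddressSpoofing Off -DhcpGuard On -RouterGuard On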

You need a different cmdlet for VLAN manipulation, though.

Manipulate Virtual Network Adapter VLANs

Use Set-VMNetworkAdapterVlan for all things VLAN on vNICs.

To place a management OS vNIC into a VLAN:
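
A sketch, using the hypothetical Management vNIC name from earlier and an arbitrary VLAN ID:

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management -Access -VlanId 10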

Remember that the VlanId parameter requires the Access parameter.

Also remember that there is no such thing as “VLAN 0”. For some unknown reason, the cmdlet will accept it and assign the adapter to VLAN 0, but strange things might happen. Usually, it’s just that you can’t get traffic in or out of the adapter. If you want to clear the adapter’s VLAN, don’t use VLAN 0. Use Untagged:
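
Using the same assumed vNIC name:

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management -Untagged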

I’m not going to cover trunking or private VLANs. Trunking is very advanced and I don’t think more than 5 percent of the people that have asked me how to do it really wanted to do it. If you want a single virtual machine to exist in multiple VLANs, add virtual adapters and assign individual VLANs. Private VLANs require you to work with PrimaryVlanId, SecondaryVlanId, SecondaryVlanIdList, Promiscuous, Community, and Isolated as necessary. If you need to use private VLANs, then you or your networking engineer should already understand each of these terms and intuitively understand how to use the parameters.

Since we’re commonly asked, the Promiscuous parameter on Set-VMNetworkAdapterVlan does not have anything to do with accepting or participating in all passing layer 2 traffic. It is only for private VLANs.

Adding and Removing Virtual Network Adapters

Use Add-VMNetworkAdapter and Remove-VMNetworkAdapter for their respective tasks.
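
A sketch of each, reusing the hypothetical names from earlier sections:

Add-VMNetworkAdapter -ManagementOS -SwitchName vSwitch -Name 'Live Migration'
Add-VMNetworkAdapter -VMName svtest -SwitchName vSwitch -Name 'Backup'
Remove-VMNetworkAdapter -VMName svtest -Name 'Backup'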

Connecting and Disconnecting Virtual Network Adapters to/from Virtual Switches

These cmdlets only work for virtual machine vNICs. You cannot dis/connect management OS vNICs; you can only add or remove them.

Connect-VMNetworkAdapter always works. You do not need to disconnect an adapter from its current virtual switch to connect it to a new one. If you want to connect all of a VM’s vNICs to the same switch, specify only the virtual machine in VMName.
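
For example, to connect every vNIC on the hypothetical svtest to vSwitch:

Connect-VMNetworkAdapter -VMName svtest -SwitchName vSwitch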

If you provide the Name parameter, then only that vNIC will be altered:
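
Using the default vNIC name:

Connect-VMNetworkAdapter -VMName svtest -Name 'Network Adapter' -SwitchName vSwitch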

These two cmdlets do not provide a VM parameter. It is possible for two virtual machines to have the same name. If you need to discern between two VMs with the same name, use the pipeline and filter from other cmdlets:
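
One possible pattern; the $targetVmId variable is hypothetical and would hold the ID of the VM that you actually want:

Get-VM -Name svtest |
    Where-Object Id -eq $targetVmId |
    Get-VMNetworkAdapter |
    Connect-VMNetworkAdapter -SwitchName vSwitch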

Use Disconnect-VMNetworkAdapter the same way, leaving off the SwitchName parameter.

VLAN information is preserved across dis/connects.

Other vNIC Settings

I did not touch on the entire range of possible vNIC cmdlets or their settings. You can go to the root 2016 Hyper-V PowerShell page and view all available cmdlets. Search the page for adapter, and you’ll find many hits.

Using the GUI for Hyper-V Networking

The GUI lags dramatically behind PowerShell for most things related to Hyper-V. I doubt any category shows that as strongly as networking. So, whether you (or I) like it or not, using the GUI for Hyper-V networking qualifies as “beginner mode”. Most of the things that I showed you above cannot be done in the GUI at all. So, unless you’re managing a single host with a single network adapter, the GUI will probably not help you much.

The following sections show you the few things that you can do in the GUI.

Working with Windows Teams

The GUI does allow you some decent capability when working with Windows teams.

Create a Windows Team

You can use the GUI to create teams on Server 2012 and later. You can find the applet in Server Manager on the Local Server tab.


You can also run lbfoadmin.exe from the Run window or an elevated prompt.

Once open, click the Tasks drop-down in the Teams section. Click New Team.


You’ll get the NIC Teaming/New team dialog, where you’ll need to fill out most fields:

(Screenshot: the NIC Teaming New team dialog)

Manipulate a Team

To make changes to your team later, just return to the same screens and dialogs using the same methods as you used to create the team.


Delete a Team

To delete a team, use the Delete function in the same place on the main lbfoadmin screen where you found the New Team function. Make sure to highlight the team that you want to delete, first.


Working with the Hyper-V Virtual Switch

The GUI provides very limited ability to work with Hyper-V virtual switches. You can’t configure QoS (except on vNICs) and it allows nearly nothing to be done for management OS vNICs.

Create a Hyper-V Virtual Switch

When using the Add Roles wizard to enable Hyper-V, you can create a virtual switch. I won’t cover that. If you’re looking at that screen, wondering what to do, I recommend that you skip it and follow the PowerShell directions above. If you simply must use a GUI, then wait until after the role finishes installing and create one using Hyper-V Manager.

To create a new virtual switch in Hyper-V Manager:

  1. Right-click the host in Hyper-V Manager and click Virtual Switch Manager. Alternatively, you’ll find this same menu at the far right of the main screen under Actions.
  2. At the left of the dialog, highlight New virtual network switch.
  3. On the right, choose the type of switch that you want to create. I’m not entirely sure why it even asks because you can pick anything you want once you click Create Virtual Switch.
  4. The creation screen itself is very busy. I’ll tackle that in a moment. First, look to the left of the dialog at the blue text. It’s a new entry named New Virtual Switch. It represents what you’re working on now. If you change the name, you’ll see this list item change as well. You can use Apply to make changes and continue working without closing the dialog. You can even add another switch before you accept this one.

Now for the new switch screen. Look after the screenshot for an explanation of the items:

(Screenshot: the virtual switch properties page)

First item: name your switch.

I would skip the notes field, especially in a failover cluster.

For Connection Type, you decide between External, Internal, and Private. That’s why I don’t understand why it asked you on the initial dialog. If you choose External, you’ll need to pick a logical or physical adapter for binding. Unfortunately, you can only see the fairly useless adapter description fields. Look in the Network Connections applet to determine which is which. This is one of the primary reasons that I prefer switch creation in PowerShell.

Remember that the IOV setting is permanent.

I despise the item here called Allow management operating system to share this network adapter. That description has absolutely no relation to what the checkbox does. If you check it, it will automatically create a virtual NIC in the management OS for this virtual switch and give it the same name as the virtual switch. That’s all it does. There is no “sharing”, and there is no permanent allowing or disallowing going on.

The VLAN ID section ties to the nonsensical “Allow…” field. If you let the system create a management OS vNIC for you, then you can use this to give it a VLAN ID.

You can use the Remove button if you decide that you don’t want to create the virtual switch after all. Cancel would work, too.

Where’s the QoS? Oh, you can’t set the QoS mode for a virtual switch using the GUI. PowerShell only. If you use this screen to create a virtual switch, it will use the Absolute QoS mode. Forever. Another reason to choose PowerShell.

Manipulate a Virtual Switch

To make changes to a virtual switch, follow the exact steps that you did to create one, except choose the existing virtual switch at the left of the Virtual Switch Manager dialog. Of course, you can’t change much, but there it is.

Remove a Virtual Switch

Retrace your creation steps. Select the virtual switch at the left of the Virtual Switch Manager screen. Click the Remove button at the bottom right.

Working with Hyper-V Switch Embedded Teams

You can’t use the GUI to work with Hyper-V SET. PowerShell-only.

You can use the Virtual Switch Manager as described previously to remove one, though.

Working with Hyper-V Virtual Network Adapters

The GUI provides passably decent ability to work with vNICs — for guests. The only place that you can do anything with management OS vNICs is on that virtual switch creation screen. You can add or remove exactly one vNIC and you can set or remove its VLAN. You can’t use the GUI to work with two or more management OS vNICs. In fact, if you use PowerShell to add a second management OS vNIC, all related items in the dialog are grayed out and unusable.

But, for virtual machines, the GUI exposes most functionality.

Manipulate Virtual Network Adapters on Virtual Machines

In Hyper-V Manager or Failover Cluster Manager, open up the Settings dialog for the virtual machine to work with. On the left, you can find the vNIC that you want to work with. Highlight it, and the page will switch to its configuration screen. In the following screenshot, I’ve also expanded the vNIC so that you can see its subtabs, Hardware Acceleration and Advanced Features.

(Screenshot: a virtual machine’s network adapter settings page, with the Hardware Acceleration and Advanced Features subtabs expanded)

On this screen, you can change the virtual switch this adapter connects to, or disconnect it. You can change or remove its VLAN. You can set its QoS, but the numbers here are treated as absolute values since that’s the default mode; the dialog doesn’t change if your switch uses Weight mode, so I would use PowerShell for QoS in that case. You can also Remove the vNIC here.

The Hardware Acceleration subtab:

(Screenshot: the Hardware Acceleration subtab)

Here, you can change:

  • If a VMQ can be assigned to this vNIC. The host’s adapters must support VMQ and a queue must be available for this checkbox to have any effect.
  • IPSec task offloading. If the host’s physical adapter supports IPSec task offloading and has sufficient resources, the guest can offload IPSec tasks to the hardware.
  • If an SR-IOV virtual function can be assigned to this vNIC. The host’s adapters and motherboard must support IOV, it must be enabled on the adapter and in the BIOS, the virtual switch must either be unteamed or on a SET, and a virtual function must be available for this checkbox to have any effect.

The Advanced Features subtab:

(Screenshot: the Advanced Features subtab)

Note that this screen scrolls, and I didn’t capture it all.

Here, you can change:

  • MAC address (both the mode and the address itself)
  • Whether or not the guest can spoof the MAC
  • If the guest is prevented from receiving DHCP discover/request frames
  • If the guest is prevented from receiving router discovery packets
  • If a failover cluster will move the guest if it loses network connectivity (Protected network)
  • If the vNIC’s traffic is mirrored to another vNIC. This feature seems to have troubles, FYI.
  • If teaming is allowed in the guest. The guest requires at least two vNICs and the virtual switch must be placed on a team or SET for this to function.
  • The Device naming switch allows the name of the vNIC to be propagated into the guest where an OS that supports Consistent Device Naming (CDN) can use it. Note that this is disabled by default, and the GUI doesn’t allow you to rename the vNIC. Use PowerShell for that.

Remove a Virtual Network Adapter

To remove a vNIC from a guest, find its tab in the VM’s settings dialog in Hyper-V Manager or Failover Cluster Manager. Use the Remove button at the bottom right. You’ll find a screenshot above in the Manipulate Virtual Network Adapters on Virtual Machines section.

 

Note: This guide will be periodically updated to make sure it covers all possible Hyper-V Networking problems. If you think I’ve missed anything please let me know in the comments below.
