Fixing Erratic Behavior on Hyper-V with Network Load Balancers

For years, I’d never heard of this problem. Then, suddenly, I was seeing it everywhere. It’s not easy to outline a precise symptom tree: networked applications behave oddly, remote desktop sessions skip or hang, some network traffic will not pass at all, and other traffic behaves erratically. So, rather than attempt a thorough symptom tree, I’ll just describe the setup that can be addressed with the contents of this article: you’re using Hyper-V with a third-party network load balancer and experiencing network-related problems.

Acknowledgements

Before I ever encountered it, the problem was described to me by one of my readers. Check out our Complete Guide to Hyper-V Networking article and look in the comments section for Jahn’s input. I had a different experience, but that conversation helped me reach a resolution much more quickly.

Problem Reproduction Instructions

The problem may appear under other conditions, but should always occur under these:

  • The network adapters that host the Hyper-V virtual switch are configured in a team
    • Load-balancing algorithm: Dynamic
    • Teaming mode: Switch Independent (likely occurs with switch-embedded teaming as well)
  • Traffic to/from affected virtual machines passes through a third-party load-balancer
    • Load balancer uses a MAC-based system for load balancing and source verification
      • Citrix Netscaler calls its feature “MAC based forwarding”
      • F5 load balancers call it “auto last hop”
    • The load balancer’s “internal” IP address is on the same subnet as the virtual machine’s
  • Sufficient traffic must be exiting the virtual machine for Hyper-V to load balance some of it to a different physical adapter

I’ll go into more detail later. This list should help you determine if you’re looking at an article that can help you.

Resolution

Fixing the problem is very easy and can be done without downtime. I’ll present the options in order of preference and explain the practical differences later.

Option 1: Change the Load-Balancing Algorithm

Your best bet is to change the load-balancing algorithm to “Hyper-V port”. You can change it in the lbfoadmin.exe graphical interface if your management operating system is GUI-mode Windows Server. To change it with PowerShell (assuming only one team):
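Something along these lines should do it (a sketch; it assumes the host has exactly one LBFO team, per the note above):

Get-NetLbfoTeam | Set-NetLbfoTeam -LoadBalancingAlgorithm HyperVPort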

There will be a brief interruption of networking while the change is made. It won’t be as bad as the network problems that you’re already experiencing.

Option 2: Change the Teaming Mode

Your second option is to change your teaming mode. It’s more involved because you’ll also need to update your physical infrastructure to match. I’ve always been able to do that without downtime as long as I changed the physical switch first, but I can’t promise the same for anyone else.

Decide if you want to use Static teaming or LACP teaming. Configure your physical switch accordingly.

Change your Hyper-V host to use the same mode. If your Hyper-V system’s management operating system is Windows Server GUI, you can use lbfoadmin.exe. To change it in PowerShell (assuming only one team):
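For example, to switch the team to static teaming (again, a sketch that assumes a single team):

Get-NetLbfoTeam | Set-NetLbfoTeam -TeamingMode Static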

or
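to use LACP instead:

Get-NetLbfoTeam | Set-NetLbfoTeam -TeamingMode LACP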

In this context, it makes no difference whether you pick static or LACP. If you want more information, read our article on the teaming modes.

Option 3: Disable the Feature on the Load Balancer

You could tell the load balancer to stop trying to be clever. In general, I would choose that option last.

An Investigation of the Problem

So, what’s going on? What caused all this? If you’ve got an environment that matches the one that I described, then you’ve unintentionally created the perfect conditions for a storm.

Whose fault is it? In this case, I don’t really think that it’s fair to assign fault. Everyone involved is trying to make your network traffic go faster. They sometimes do that by playing fast and loose in that gray area between Ethernet and TCP/IP. We have lots of standards that govern each individually, but not so many that apply to the ways that they can interact. The problem arises because Microsoft is playing one game while your load balancer plays another. The games have different rules, and neither side is aware that another game is afoot.

Traffic Leaving the Virtual Machine

We’ll start on the Windows guest side (also applies to Linux). Your application inside your virtual machine wants to send some data to another computer. That goes something like this:

  1. Application: “Network, send this data to computer www.altaro.com on port 443”.
  2. Network: “DNS server, get me the IP for www.altaro.com”
  3. Network: “IP layer, determine if the IP address for www.altaro.com is on the same subnet”
  4. Network: “IP layer, send this packet to the gateway”
  5. IP layer passes downward for packaging in an Ethernet frame
  6. Ethernet layer transfers the frame

The part to understand: your application and your operating system don’t really care about the Ethernet part. Whatever happens down there just happens. In particular, neither of them cares at all about the source MAC address.

lb_out_traffic

 

Traffic Crossing the Hyper-V Virtual Switch

Because this particular Ethernet frame is coming out of a Hyper-V virtual machine, the first thing that it encounters is the Hyper-V virtual switch. In our scenario, the Hyper-V virtual switch rests atop a team of network adapters. As you’ll recall, that team is configured to use the Dynamic load balancing algorithm in Switch Independent mode. The algorithm decides if load balancing can be applied. The teaming mode decides which pathway to use and if it needs to repackage the outbound frame.

Switch independent mode means that the physical switch doesn’t know anything about a team. It only knows about two or more Ethernet endpoints connected in standard access mode. A port in that mode can “host” any number of MAC addresses; the physical switch’s capability defines the limit. However, the same MAC address cannot appear on multiple access ports simultaneously. Allowing that would cause all sorts of problems.

lb_broken_si_traffic

 

So, if the team wants to load balance traffic coming out of a virtual machine, it needs to ensure that the traffic has a source MAC address that won’t cause the physical switch to panic. For traffic going out anything other than the primary adapter, it uses the MAC address of the physical adapter.

lb_good_si_traffic

 

So, no matter how many physical adapters the team owns, one of two things will happen for each outbound frame:

  • The team will choose to use the physical adapter that the virtual machine’s network adapter is registered on. The Ethernet frame will travel as-is. That means that its source MAC address will be exactly the same as the virtual network adapter’s (meaning, not repackaged)
  • The team will choose to use an adapter other than the one that the virtual machine’s network adapter is registered on. The Ethernet frame will be altered. The source MAC address will be replaced with the MAC address of the physical adapter

Note: The visualization does not cover all scenarios. A virtual network adapter might be affinitized to the second physical adapter. If so, its load balanced packets would travel out of the shown “pNIC1” and use that physical adapter’s MAC as a source.

Traffic Crossing the Load Balancer

So, our frame arrives at the load balancer. The load balancer has a really crummy job. It needs to make traffic go faster, not slower. And, it acts like a TCP/IP router. Routers need to unpackage inbound Ethernet frames, look at their IP information, and make decisions on how to transmit them. That requires compute power and time.

lb_router_hard

If it took too much time to do all of this, people would prefer to live without the load balancer. That would mean that the load balancer’s manufacturer doesn’t sell any units, doesn’t make any money, and goes out of business. So, they come up with all sorts of tricks to make traffic faster. One way to do that is by not doing quite so much work on the Ethernet frame. This is a gross oversimplification, but you get the idea:

lb_router_easy

Essentially, the load balancer only needs to remember which MAC address sent which frame, and then it doesn’t need to worry so much about all that IP nonsense (it’s really more complicated than that, but this is close enough).

The Hyper-V/Load Balancer Collision

Now we’ve arrived at the core of the problem: Hyper-V sends traffic from virtual machines using source MAC addresses that don’t belong to those virtual machines. The MAC addresses belong to the physical NIC. When the load balancer tries to associate that traffic with the MAC address of the physical NIC, everything breaks.

Trying to be helpful (remember that), the load balancer attempts to return what it deems “response” traffic to the MAC address that initiated the conversation. In this case, that MAC address belongs directly to the second physical NIC. That NIC wasn’t expecting the traffic now coming in, so the frame is silently discarded.

That happens because:

  • The Windows Server network teaming load balancing algorithms are send only; they will not perform reverse translations. There are lots of reasons for that and they are all good, so don’t get upset with Microsoft. Besides, it’s not like anyone else does things differently.
  • Because the inbound Ethernet frame is not reverse-translated, its destination MAC belongs to a physical NIC. The Hyper-V virtual switch will not send any Ethernet frame to a virtual network adapter unless it owns the destination MAC
  • In typical system-to-system communications, the “responding” system would have sent its traffic to the IP address of the virtual machine. Through the normal course of typical networking, that traffic’s destination MAC would always belong to the virtual machine. It’s only because your load balancer is trying to speed things along that the frame is being sent to the physical NIC’s MAC address. Otherwise, the source MAC of the original frame would have been little more than trivia.

Stated a bit more simply: Windows Server network teaming doesn’t know that anyone cares about its frames’ source MAC addresses and the load balancer doesn’t know that anyone is lying about their MAC addresses.

Why Hyper-V Port Mode Fixes the Problem

When you select the Hyper-V port load balancing algorithm in combination with the switch independent teaming mode, each virtual network adapter’s MAC address is registered on a single physical network adapter. That’s the same behavior that Dynamic uses. However, no load balancing is done for any given virtual network adapter; all traffic entering and exiting any given virtual adapter will always use the same physical adapter. The team achieves load balancing by placing each virtual network adapter across its physical members in a round-robin fashion.

lb_si_hp

Source MACs will always be those of their respective virtual adapters, so there’s nothing to get confused about.

I like this mode as a solution because it does a good job addressing the issue without making any other changes to your infrastructure. The drawback would be if you only had a few virtual network adapters and weren’t getting the best distribution. For a 10GbE system, I wouldn’t worry.

Why Static and LACP Fix the Problem

Static and LACP teaming involve your Windows Server system and the physical switch agreeing on a single logical pathway that consists of multiple physical pathways. All MAC addresses are registered on that logical pathway. Therefore, the Windows Server team has no need to perform any source MAC substitution, regardless of the load balancing algorithm that you choose.

lb_stdlacp

Since no MAC substitution occurs here, the load balancer won’t get anything confused.

I don’t like this method as much. It means modifying your physical infrastructure. I’ve noticed that some physical switches don’t like the LACP failover process very much. I’ve encountered some that need a minute or more to notice that a physical link was down and react accordingly. With every physical switch that I’ve used or heard of, the switch independent mode fails over almost instantly.

That said, using a static or LACP team will allow you to continue using the Dynamic load balancing algorithm. All else being equal, you’ll get a more even load balancing distribution with Dynamic than you will with Hyper-V port mode.

Why You Should Let the Load Balancer Do Its Job

The third listed resolution suggests disabling the related feature on your load balancer. I don’t like that option, personally. I don’t have much experience with the Citrix product, but I know that F5 buries its “Auto Last Hop” feature fairly deeply. Also, both manufacturers enable the feature by default, so it won’t be obvious to a later maintainer that you’ve made the change.

However, your situation might dictate that disabling the load balancer’s feature causes fewer problems than changing the Hyper-V or physical switch configuration. Do what works best for you.

Using a Different Internal Router Also Addresses the Issue

In all of these scenarios, the load balancer performs routing. Actually, these types of load balancers always perform routing, because they present a single IP address for the service to the outside world and translate internally to the back-end systems.

However, nothing states that the internal source IP address of the load balancer must exist in the same subnet as the back-end virtual machines. You might do that for performance reasons; as I said above, routing incurs overhead. However, this is all a known quantity and modern routers are pretty good at what they do. If any router is present between the load balancer and the back-end virtual machines, then the MAC address issue will sort itself out regardless of your load balancing and teaming mode selections.

Have You Experienced this Phenomenon?

If so, I’d love to hear from you. On what system did you experience it? How did you resolve the situation (if you were able to)? Perhaps you’ve just encountered it and arrived here looking for a solution; if so, let me know whether this explanation was helpful or if you need further assistance with your particular environment. The comment section below awaits.

The Really Simple Guide to Hyper-V Networking

If you’re just getting started with Hyper-V and struggling with the networking configuration, you are not alone. I (and others) have written a great deal of introductory material on the subject, but sometimes, that’s just too much. I’m going to try a different approach. Rather than a thorough deep-dive on the topic that tries to cover all of the concepts and how-to, I’m just going to show you what you’re trying to accomplish. Then, I can just link you to the necessary supporting information so that you can make it into reality.

Getting Started

First things first. If you have a solid handle on layer 2 and layer 3 concepts, that’s helpful. If you have experience networking Windows machines, that’s also helpful. If you come to Hyper-V from a different hypervisor, then that knowledge won’t transfer well. If you apply ESXi networking design patterns to Hyper-V, then you will create a jumbled mess that will never function correctly or perform adequately.

Your Goals for Hyper-V Networking

You have two very basic goals:

  1. Ensure that the management operating system can communicate on the network
  2. Ensure that virtual machines can communicate on the network

rsn_goals

Any other goals that you bring to this endeavor are secondary, at best. If you have never done this before, don’t try to jump ahead to routing or anything else until you achieve these two basic goals.

Hyper-V Networking Rules

Understand what you must, can, and cannot do with Hyper-V networking:

  • You can connect the management operating system to a physical network directly using a physical network adapter or a team of physical network adapters.
    rsn_managementos_connect
  • You cannot connect any virtual machine to a physical network directly using a physical network adapter or team.
    rns_vm_nopnic
  • If you wish for a virtual machine to have network access, you must use a Hyper-V virtual switch. There is no bypass or pass-through mode.
    rsn_vm_connect
  • A Hyper-V virtual switch uses a physical network adapter or team. It completely takes over that adapter or team; nothing else can use it.
    rsn_clutchedadapter
  • It is possible for the management operating system to connect through a Hyper-V virtual switch, but it is not required.
    rsn_mos_via_vnic
  • It is not possible for the management operating system and the virtual switch to use a physical adapter or team at the same time. The “share” terminology that you see in all of the tools is a lie.
    rsn_mosorvs

What the Final Product Looks Like

It might help to have visualizations of correctly-configured Hyper-V virtual switches. I will only show images with a single physical adapter. You can use a team instead.

Networking for a Single Hyper-V Host, the Old Way

An old technique has survived from the pre-Hyper-V 2012 days. It uses a pair of physical adapters. One belongs to the management operating system. The other hosts a virtual switch that the virtual machines use. I don’t like this solution for a two adapter host. It leaves both the host and the virtual machines with a single point of failure. However, it could be useful if you have more than two adapters and create a team for the virtual machines to use. Either way, this design is perfectly viable whether I like it or not.

rsn_vswitch_split

 

Networking for a Single Hyper-V Host, the New Way

With teaming, you can just join all of the physical adapters together and let the team host a single virtual switch. Let the management operating system and all of the guests connect through it.

rsn_vswitch_unified

 

Networking for a Clustered Hyper-V Host

For a stand-alone Hyper-V host, the management operating system only requires one connection to the network. Clustered hosts benefit from multiple connections. Before teaming was directly supported, we used a lot of physical adapters to make that happen. Now we can just use one big team to handle our host and our guest traffic. That looks like this:

rns_vswitch_cluster

 

VLANs

VLANs seem to have some special power to trip people up. A few things:

  • The only purpose of a VLAN is to separate layer 2 (Ethernet) traffic.
  • VLANs are not necessary to separate layer 3 (IP) networks. Many network administrators use VLANs to create walls around specific layer 3 networks, though. If that describes your network, you will need to design your Hyper-V hosts to match. If your physical network doesn’t use VLANs, then don’t worry about them on your Hyper-V hosts.
  • Do not create one Hyper-V virtual switch per VLAN the way that you would configure ESXi. Every Hyper-V virtual switch automatically supports untagged frames and VLAN IDs 1-4094.
  • Hyper-V does not have a “default” VLAN designation.
  • Configure VLANs directly on virtual adapters, not on the virtual switch.

Other Quick Pointers

I’m going to provide you with some links so you can do some more reading and get some assistance with configuration. However, some quick things to point out:

  • The Hyper-V virtual switch does not have an IP address of its own.
  • You do not manage the Hyper-V virtual switch via an IP address or management VLAN. You manage the Hyper-V virtual switch using tools in the management operating system or from a remote system (Hyper-V Manager, PowerShell, and WMI/CIM).
  • Network connections for storage (iSCSI/SMB): Preferably, network connections for storage will use dedicated, unteamed physical adapters. If you can’t do that, then you can create dedicated virtual NICs in the management operating system.
  • Multiple virtual switches: Almost no one will ever need more than one virtual switch on a Hyper-V host. If you have VMware experience, especially do not create virtual switches just for VLANs.
  • The virtual machines’ virtual network adapters connect directly to the virtual switch. You do not need anything in the management operating system to assist them. You don’t need a virtual adapter for the management operating system that has anything to do with the virtual machines.
  • Turn off VMQ for every gigabit physical adapter that will host a virtual switch. If you team them, the logical team NIC will also have a VMQ setting that you need to disable.
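If you’d rather make that VMQ change in PowerShell, something like this should work (the adapter and team names here are placeholders for your own):

Set-NetAdapterVmq -Name 'pNIC1', 'pNIC2', 'vSwitchTeam' -Enabled $false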

For More Information

I only intend for this article to be a quick introduction to show you what you’re trying to accomplish. We have several articles to help you dive into the concepts and the necessary steps for configuration.

 

How Device Naming for Network Adapters Works in Hyper-V 2016

Not all of the features introduced with Hyper-V 2016 made a splash. One of the less-publicized improvements allows you to determine a virtual network adapter’s name from within the guest operating system. I don’t even see it in any official documentation, so I don’t know what to officially call it. The related settings use the term “device naming”, so we’ll call it that. Let’s see how to put it to use.

Requirements for Device Naming for Network Adapters in Hyper-V 2016

For this feature to work, you need:

  • 2016-level hypervisor: Hyper-V Server, Windows Server, Windows 10
  • Generation 2 virtual machine
  • Virtual machine with a configuration version of at least 6.2
  • Windows Server 2016 or Windows 10 guest

What is Device Naming for Hyper-V Virtual Network Adapters?

You may already be familiar with a technology called “Consistent Device Naming”. If you were hoping to use that with your virtual machines, sorry! The device naming feature utilized by Hyper-V is not the same thing. I don’t know for sure, but I’m guessing that the Hyper-V Integration Services enable this feature.

Basically, if you were expecting to see something different in the Network and Sharing Center, it won’t happen:

harn_nscenter

Nor in Get-NetAdapter:

harn_getnetadapter

In contrast, a physical system employing Consistent Device Naming would have automatically named the network adapters in some fashion that reflected their physical installation. For example, “SLOT 4 Port 1” would be the name of the first port of a multi-port adapter installed in the fourth PCIe slot. It may not always be easy to determine how the manufacturers numbered their slots and ports, but it helps more than “Ethernet 5”.

Anyway, you don’t get that out of Hyper-V’s device naming feature. Instead, it shows up as an advanced feature. You can see that in several ways. First, I’ll show you how to set the value.

Setting Hyper-V’s Network Device Name in PowerShell

From the management operating system or a remote PowerShell session opened to the management operating system, use Set-VMNetworkAdapter:
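A sketch of the command (sv16g2 is the sample virtual machine name used in this section):

Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On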

This enables device naming for all of the virtual adapters connected to the virtual machine named sv16g2.

If you try to enable it for a generation 1 virtual machine, you get a clear error (although sometimes it inexplicably complains about the DVD drive, but eventually it gets where it’s going):

The cmdlet doesn’t know if the guest operating system supports this feature (or even if the virtual machine has an installed operating system).

If you don’t want the default “Network Adapter” name, then you can set the name at the same time that you enable the feature:
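One way to do that in a single pass is to pipe the result into Rename-VMNetworkAdapter (a sketch; the new name is whatever you want to see inside the guest):

Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On -Passthru | Rename-VMNetworkAdapter -NewName 'CustomNicName'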

These cmdlets all accept pipeline information as well as a number of other parameters. You can review the TechNet article that I linked in the beginning of this section. I also have some other usage examples on our omnibus networking article.

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter.

Note: You must reboot the guest operating system for it to reflect the change.

Setting Hyper-V’s Network Device Name in the GUI

You can use Hyper-V Manager or Failover Cluster Manager to enable this feature. Just look at the bottom of the Advanced Features sub-tab of the network adapter’s tab. Check the Enable device naming box. If that box does not appear, you are viewing a generation 1 virtual machine.

ndn_gui

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter. See the preceding section for instructions.

Note: You must reboot the guest operating system for it to reflect the change.

Viewing Hyper-V’s Network Device Name in the Guest GUI

This will only work in Windows 10/Windows Server 2016 (GUI) guests. The screenshots in this section were taken from a system that still had the default name of Network Adapter.

  1. Start in the Network Connections window. Right-click on the adapter and choose Properties:
    ndn_netadvprops
  2. When the Ethernet # Properties dialog appears, click Configure:
    ndn_netpropsconfbutton
  3. On the Microsoft Hyper-V Network Adapter Properties dialog, switch to the Advanced tab. You’re looking for the Hyper-V Network Adapter Name property. The Value holds the name that Hyper-V holds for the adapter:
    ndn_display

If the Value field is empty, then the feature is not enabled for that adapter or you have not rebooted since enabling it. If the Hyper-V Network Adapter Name property does not exist, then you are using a down-level guest operating system or a generation 1 VM.

Viewing Hyper-V’s Network Device Name in the Guest with PowerShell

As you saw in the preceding section, this field appears with the adapter’s advanced settings. Therefore, you can view it with the Get-NetAdapterAdvancedProperty cmdlet. To see all of the settings for all adapters, use that cmdlet by itself.

ndn_psall

Tab completion doesn’t work for the names, so drilling down just to that item can be a bit of a chore. The long way:
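Roughly:

Get-NetAdapterAdvancedProperty | Where-Object -Property DisplayName -EQ 'Hyper-V Network Adapter Name'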

Slightly shorter way:
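Using the cmdlet’s own filter parameter:

Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name'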

One of many not-future-proofed-but-works-today ways:
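For instance, keying on the underlying registry keyword (mentioned in the Regedit section later in this article):

Get-NetAdapterAdvancedProperty -RegistryKeyword 'HyperVNetworkAdapterName'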

For automation purposes, you need to query the DisplayValue or the RegistryValue property. I prefer the DisplayValue. It is represented as a standard System.String. The RegistryValue is represented as a System.Array of System.String (or, String[]). It will never contain more than one entry, so dealing with the array is just an extra annoyance.

To pull that field, you could use select (an alias for Select-Object), but I wouldn’t:
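That would look something like this:

Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name' | Select-Object -Property DisplayValue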

ndn_psselectobject

I don’t like select in automation because it creates a custom object. Once you have that object, you then need to take an extra step to extract the value of that custom object. The reason that you used select in the first place was to extract the value. select basically causes you to do double work.

So, instead, I recommend the more .Net way of using a dot selector:
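For example:

(Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name').DisplayValue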

You can store the output of that line directly into a variable that will be created as a System.String type that you can immediately use anywhere that will accept a String:
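A sketch:

$HyperVName = (Get-NetAdapterAdvancedProperty -Name 'Ethernet' -DisplayName 'Hyper-V Network Adapter Name').DisplayValue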

Notice that I injected the Name property with a value of Ethernet. I didn’t need to do that. I did it to ensure that I only get a single response. Of course, it would fail if the VM didn’t have an adapter named Ethernet. I’m just trying to give you some ideas for your own automation tasks.

Viewing Hyper-V’s Network Device Name in the Guest with Regedit

All of the network adapters’ configurations live in the registry. It’s not exactly easy to find, though. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}. Not sure if it’s a good thing or a bad thing, but I can identify that key on sight now. Expand that out, and you’ll find several subkeys with four-digit names. They’ll start at 0000 and count upward. One of them corresponds to the virtual network adapter. The one that you’re looking for will have a KVP named HyperVNetworkAdapterName. Its value will be what you came to see. If you want further confirmation, there will also be KVP named DriverDesc with a value of Microsoft Hyper-V Network Adapter (and possibly a number, if it’s not the first).
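If you’d rather not click through Regedit, a rough PowerShell equivalent would be something like this (a sketch; the Properties subkey under that class key normally denies access, hence the error suppression):

Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}' -ErrorAction SilentlyContinue |
    Get-ItemProperty |
    Where-Object { $_.HyperVNetworkAdapterName } |
    Select-Object -Property DriverDesc, HyperVNetworkAdapterName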

7 Powerful Scripts for Practical Hyper-V Network Configurations

I firmly believe in empowerment. I feel that I should supply you with knowledge, provide you with how-tos, share insights and experiences, and release you into the world to make your own decisions. However, I came to that approach by standing at the front of a classroom. During class, we’d almost invariably walk through exercises. Since this is a blog and not a classroom, I do things differently. We don’t have common hardware in a controlled environment, so I typically forgo the exercises bit. As a result, that leaves a lot of my readers at the edge of a cliff with no bridge to carry them from theory to practice. And, of course, there are those of you that would love to spend time reading about concepts but just really need to get something done right now. If you’re stopped at Hyper-V networking, this is the article for you.

Script Inventory

These scripts are included in this article:

  • Configure networking for a single host with a single adapter
  • Configure a standalone host with 2-4 gigabit adapters for converged networking
  • Configure a standalone host with 2-4 10 GbE adapters for converged networking
  • Configure a clustered host with 2-4 gigabit adapters for converged networking
  • Configure a clustered host with 2-4 10 GbE adapters for converged networking
  • Set preferred order for cluster Live Migration networks
  • Exclude cluster networks from Live Migration

Basic Usage

I’m going to show each item as a stand-alone script. First, you’ll locate the one that best aligns with what you’re trying to accomplish. You’ll copy/paste that into a .ps1 PowerShell script file on your system. You’ll need to edit the script to provide information about your environment so that it will work for you. I’ll have you set each of those items at the beginning of the script. Then, you’ll just need to execute the script on your host.

Most of the scripts have their own “basic usage” heading that explains a bit about how you’d use them without modification.

Enhanced Usage

I could easily have compiled these into standalone tools that you couldn’t tinker with, but I didn’t. Even though I want to give you a fully prepared springboard, I also want you to learn how the system works and what you’re doing to it.

Most of the scripts have their own “enhanced usage” heading that gives some ideas on how you might exploit or extend them yourself.

Configure networking for a single host with a single adapter

Use this script for a standalone system that only has one physical adapter. It will:

  • Disable VMQ for the physical adapter
  • Create a virtual switch on the adapter
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it.
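To give you a concrete starting point, here is a minimal sketch of such a script. Treat it as illustrative rather than canonical; the adapter name, switch name, VLAN, and addresses are placeholders for your own values.

# items to set for your environment
$PhysicalAdapterName = 'Ethernet'        # physical adapter that will host the virtual switch
$SwitchName          = 'vSwitch'
$ManagementVlan      = 0                 # 0 = leave untagged
$ManagementIP        = '192.168.25.10'
$PrefixLength        = 24
$Gateway             = '192.168.25.1'
$DnsServers          = @('192.168.25.2', '192.168.25.3')

# disable VMQ on the gigabit physical adapter
Set-NetAdapterVmq -Name $PhysicalAdapterName -Enabled $false

# create the virtual switch without the default management vNIC so that we can add a named one
New-VMSwitch -Name $SwitchName -NetAdapterName $PhysicalAdapterName -AllowManagementOS $false

# create a virtual network adapter for the management operating system
Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName $SwitchName

# optionally place the management adapter into a VLAN
if ($ManagementVlan -gt 0) {
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId $ManagementVlan
}

# assign IP, gateway, and DNS to the management adapter
New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress $ManagementIP -PrefixLength $PrefixLength -DefaultGateway $Gateway
Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses $DnsServers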

Advanced Usage for this Script

As-is, this script should be complete for most typical single-adapter systems. You might choose to disable some items. For instance, if you are using this on Windows 10, you might not want to provide a fixed IP address. In that case, just put a # sign at the beginning of each line that assigns IP and DNS information. When the virtual network adapter is created, it will remain in DHCP mode.

 

Configure a standalone host with 2-4 gigabit adapters for converged networking

Use this script for a standalone host that has between two and four gigabit adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. Be aware that it will have problems if you already have a team.
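A sketch of what that could look like (names and addresses are placeholders; the team uses switch independent mode with the Dynamic algorithm):

# items to set for your environment
$TeamMembers    = @('pNIC1', 'pNIC2')    # two to four gigabit adapters
$TeamName       = 'ConvergedTeam'
$SwitchName     = 'vSwitch'
$ManagementVlan = 0
$ManagementIP   = '192.168.25.10'
$PrefixLength   = 24
$Gateway        = '192.168.25.1'
$DnsServers     = @('192.168.25.2', '192.168.25.3')

# create the team
New-NetLbfoTeam -Name $TeamName -TeamMembers $TeamMembers -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

# disable VMQ on the physical members and on the team adapter
Set-NetAdapterVmq -Name ($TeamMembers + $TeamName) -Enabled $false

# give the team interface a moment to come up, then build the switch and the management vNIC
Start-Sleep -Seconds 5
New-VMSwitch -Name $SwitchName -NetAdapterName $TeamName -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName $SwitchName
if ($ManagementVlan -gt 0) {
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId $ManagementVlan
}

# IP, gateway, and DNS for the management adapter
New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress $ManagementIP -PrefixLength $PrefixLength -DefaultGateway $Gateway
Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses $DnsServers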

Advanced Usage for this Script

This script serves as the base for the remaining scripts on this page. Likewise, you could use it as a base for your own. You could also use any of the items as examples for whatever similar actions you wish to accomplish in your own scripts.

 

Configure a standalone host with 2-4 10 GbE adapters for converged networking

Use this script for a standalone host that has between two and four 10GbE adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

It won’t take a great deal of sleuthing to discover that this script is identical to the preceding one, except that it does not disable VMQ.

 

Configure a clustered host with 2-4 gigabit adapters for converged networking

Use this script for a host that has between two and four gigabit adapters that will be a member of a cluster. Like the previous scripts, it will employ a converged networking configuration. The script will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.
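A sketch of the clustered variant (again, names and addresses are placeholders; it extends the standalone script above with cluster and Live Migration adapters):

# items to set for your environment
$TeamMembers     = @('pNIC1', 'pNIC2')
$TeamName        = 'ConvergedTeam'
$SwitchName      = 'vSwitch'
$ManagementIP    = '192.168.25.10'
$ClusterIP       = '192.168.26.10'
$LiveMigrationIP = '192.168.27.10'
$PrefixLength    = 24
$Gateway         = '192.168.25.1'
$DnsServers      = @('192.168.25.2', '192.168.25.3')

# team, VMQ, and virtual switch, as in the standalone script
New-NetLbfoTeam -Name $TeamName -TeamMembers $TeamMembers -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
Set-NetAdapterVmq -Name ($TeamMembers + $TeamName) -Enabled $false
Start-Sleep -Seconds 5
New-VMSwitch -Name $SwitchName -NetAdapterName $TeamName -AllowManagementOS $false

# one management-OS vNIC each for management, cluster, and Live Migration traffic
foreach ($NicName in 'Management', 'Cluster', 'LiveMigration') {
    Add-VMNetworkAdapter -ManagementOS -Name $NicName -SwitchName $SwitchName
    # optionally: Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $NicName -Access -VlanId <id>
}

# the management adapter gets the gateway and the DNS servers
New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress $ManagementIP -PrefixLength $PrefixLength -DefaultGateway $Gateway
Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses $DnsServers

# cluster and Live Migration adapters: IP only, no gateway, no DNS registration
New-NetIPAddress -InterfaceAlias 'vEthernet (Cluster)' -IPAddress $ClusterIP -PrefixLength $PrefixLength
New-NetIPAddress -InterfaceAlias 'vEthernet (LiveMigration)' -IPAddress $LiveMigrationIP -PrefixLength $PrefixLength
Set-DnsClient -InterfaceAlias 'vEthernet (Cluster)', 'vEthernet (LiveMigration)' -RegisterThisConnectionsAddress $false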

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.

Advanced Usage for this Script

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.

 

Configure a clustered host with 2-4 10 GbE adapters for converged networking

This script is identical to the preceding except that it leaves VMQ enabled. It does the following:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

These notes are identical to those of the preceding script.

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.

Advanced Usage for this Script

These notes are identical to those of the preceding script.

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.

 

Set preferred order for cluster Live Migration networks

This script aligns with the two preceding scripts to ensure that the cluster chooses the named “Live Migration” adapter first when moving virtual machines between nodes. The “Cluster” virtual adapter will be used second. The management adapter will be used as the final fallback.

Basic Usage for this Script

Use this script after you’ve run one of the above two clustered host scripts and joined them into a cluster.
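The general idea looks something like this (a sketch; it assumes cluster networks named “Live Migration”, “Cluster”, and “Management”, and that MigrationNetworkOrder takes a semicolon-delimited list of cluster network IDs):

# preferred order: Live Migration first, Cluster second, Management last
$OrderedNetworkNames = @('Live Migration', 'Cluster', 'Management')
$OrderedNetworkIds   = foreach ($NetworkName in $OrderedNetworkNames) {
    (Get-ClusterNetwork -Name $NetworkName).Id
}
Get-ClusterResourceType -Name 'Virtual Machine' |
    Set-ClusterParameter -Name MigrationNetworkOrder -Value ($OrderedNetworkIds -join ';')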

Advanced Usage for this Script

Modify this to change the order of Live Migration adapters. You must specify all adapters recognized by the cluster. Check the “MigrationExcludeNetworks” registry key that’s in the same location as “MigrationNetworkOrder”.

 

Exclude cluster networks from Live Migration

This script is intended to be used as an optional adjunct to the preceding script. Since my scripts set up all virtual adapters to be used in Live Migration, the network names used here are fabricated.

Basic Usage for this Script

You’ll need to set the network names to match yours, but otherwise, the script does not need to be altered.
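A sketch (the network names are fabricated, as noted above; it assumes MigrationExcludeNetworks takes a semicolon-delimited list of cluster network IDs, like MigrationNetworkOrder):

$ExcludedNetworkNames = @('Storage1', 'Storage2')
$ExcludedNetworkIds   = foreach ($NetworkName in $ExcludedNetworkNames) {
    (Get-ClusterNetwork -Name $NetworkName).Id
}
Get-ClusterResourceType -Name 'Virtual Machine' |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value ($ExcludedNetworkIds -join ';')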

Advanced Usage for this Script

This script will need to be modified in order to be used at all.

 

How to Hot Add/Remove Virtual Network Adapters in Hyper-V 2016

Last week I showed you how to hot add/remove memory in Hyper-V 2016 and this week I’m covering another super handy new feature that system admins will also love. In fact, Hyper-V 2016 brought many fantastic features. Containers! It also added some features that indicate natural product maturation. On that list, we find “hot add/remove of virtual network adapters”. If that’s not obvious, it means that you can now add or remove virtual network adapters to/from running virtual machines.

Requirements for Hyper-V Hot Add/Remove of Virtual Network Adapters

To make hot add/remove of network adapters work in Hyper-V, you must meet these requirements:

  • Hypervisor must be 2016 version (Windows 10, Windows Server 2016, or Hyper-V Server 2016)
  • Virtual machine must be generation 2
  • To utilize the Device Naming feature, the virtual machine version must be at least 6.2. The virtual machine configuration version does not matter if you do not attempt to use Device Naming. Meaning, you can bring a version 5.0 virtual machine over from 2012 R2 to 2016 and hot add a virtual network adapter. A discussion on Device Naming will appear in a different article.

The guest operating system may need an additional push to realize that a change was made. I did not encounter any issues with the various operating systems that I tested.

How to Use PowerShell to Add or Remove a Virtual Network Adapter from a Running Hyper-V Guest

I always recommend PowerShell for working with the second and subsequent network adapters on a virtual machine. Otherwise, they’re all called “Network Adapter”, and sorting that out can be unpleasant.

Adding a Virtual Adapter with PowerShell

Use Add-VMNetworkAdapter to add a network adapter to a running Hyper-V guest. That’s the same command that you’d use for an offline guest, as well. I don’t know why the authors chose the verb “Add” instead of “New”.
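A sketch (the VM name, switch name, and adapter name are placeholders):

Add-VMNetworkAdapter -VMName 'sv16g2' -SwitchName 'vSwitch' -Name 'NewNIC' -DeviceNaming On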

The above will work on a virtual machine with a configuration version of at least 6.2. If the virtual machine is set to a lower version, you get a rather confusing message that talks about DVD drives:

It does eventually get around to telling you exactly what it doesn’t like. You can avoid this error by not specifying the DeviceNaming parameter. If you’re scripting, you can avoid the parameter by employing splatting or by setting DeviceNaming to Off.

You can use any of the other parameters of Add-VMNetworkAdapter normally.

Removing a Virtual Adapter with PowerShell

To remove the adapter, use Remove-VMNetworkAdapter:
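For instance, to remove the adapter added above:

Remove-VMNetworkAdapter -VMName 'sv16g2' -Name 'NewNIC'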

This is where things can get… interesting. Especially if you didn’t specify a unique name for the adapter. The Name parameter works like a search filter; it will remove any adapter that perfectly matches that name. So, if all of the virtual machine’s network adapters use the default name Network Adapter, and you specify Network Adapter for the Name parameter, then all of that VM’s adapters will be removed.

To address that issue, you’ll need to employ some cleverness. A quick ‘n’ dirty option would be to just remove all of the adapters, then add one. By default, that one adapter will pick up an IP from an available DHCP server. Since you can specify a static MAC address with the StaticMacAddress parameter of Add-VMNetworkAdapter, you can control that behavior with reservations.

You could also filter adapters by MAC address:
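Something like this (the MAC address is a placeholder; Hyper-V stores it without separators):

Get-VMNetworkAdapter -VMName 'sv16g2' | Where-Object -Property MacAddress -EQ '00155D010203' | Remove-VMNetworkAdapter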

You could also use arrays to selectively remove items:
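For example, removing only the second adapter in the collection:

$Adapters = @(Get-VMNetworkAdapter -VMName 'sv16g2')
$Adapters[1] | Remove-VMNetworkAdapter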

You could even use a loop to knock out all adapters after the first:
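A sketch:

$Adapters = @(Get-VMNetworkAdapter -VMName 'sv16g2')
for ($i = 1; $i -lt $Adapters.Count; $i++) {
    $Adapters[$i] | Remove-VMNetworkAdapter
}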

In my unscientific testing, virtual machine network adapters are always stored and retrieved in the order in which they were added, so the above script should always remove every adapter except the original. Based on the file format, I would expect that to always hold true. However, no documentation exists that outright supports that; use this sort of cleverness with caution.

I recommend naming your adapters to save a lot of grief in these instances.

How to Use the GUI to Add or Remove a Virtual Network Adapter from a Running Hyper-V Guest

These instructions work for both Hyper-V Manager and Failover Cluster Manager. Use the virtual machine’s Settings dialog in either tool.

Adding a Virtual Network Adapter in the GUI

Add a virtual network adapter to a running VM the same way that you add one to a stopped VM:

  1. On the VM’s Settings dialog, start on the Add Hardware page. The Network Adapter entry should be black, not gray. If it’s gray, then the VM is either Generation 1 or not in a valid state to add an adapter:
    harn_newhardware
  2. Highlight Network Adapter and click Add.
  3. You will be taken to a screen where you can fill out all of the normal information for a network adapter. Set all items as desired.
    harn_newadapter
  4. Once you’ve set everything to your liking, click OK to add the adapter and close the dialog or Apply to add the adapter and leave the dialog open.

Removing a Virtual Network Adapter in the GUI

As with adding an adapter, you remove an adapter from a running virtual machine the same way that you would remove one from a stopped virtual machine:

  1. Start on the Settings dialog for the virtual machine. Switch to the tab for the adapter that you wish to remove:
    harn_addedadapter
  2. Click the Remove button.
    harn_removeadapter
  3. The tab for the adapter to be removed will have all of its text crossed out. The dialog items for it will turn gray.
    harn_removeadapterpending
  4. Click OK to remove the adapter and close the dialog or Apply to remove the adapter and leave the dialog open. Click Cancel if you change your mind. For OK or Apply, a prompt will appear with a warning that you’ll probably disrupt network communications:
    harn_removeprompt

Hot Add/Remove of Hyper-V Virtual Adapters for Linux Guests

I didn’t invest a great deal of effort into testing, but this feature works for Linux guests with mixed results. A Fedora guest running on my Windows 10 system was perfectly happy with it:

harn_linux

OpenSUSE Leap… not so much:

harn_noleap

But then, I added another virtual network adapter to my OpenSUSE system. This time, I remembered to connect it to a virtual switch before adding. It liked that much better:

harn_leapon

So, the moral of the story: for Linux guests, always specify a virtual switch when hot adding a virtual network card. Connecting it afterward does not help.

Also notice that OpenSUSE Leap did not ever automatically configure the adapter for DHCP, whereas Fedora did. As I mentioned in the beginning of the article, you might need to give some environments an extra push.

Also, Leap seemed to get upset when I hot removed the adapter:

harn_leapout

To save your eyes, the meat of that message says: “unable to send revoke receive buffer to netvsp”. I don’t know if that’s serious or not. The second moral of this story, then: hot removing network adapters might leave some systems in an inconsistent, unhappy state.

My Thoughts on Hyper-V’s Hot Add/Remove of Network Adapters Feature

Previous versions of Hyper-V did not have this feature and I never missed it. I wasn’t even aware that other hypervisors had it until I saw posts from people scrounging for any tiny excuse to dump hate on Microsoft. Sure, I’ve had a few virtual machines with services that benefited from multiple network adapters. However, I knew of that requirement going in, so I just built them appropriately from the beginning. I suppose that’s a side effect of competent administration. Overall, I find this feature to be a hammer desperately seeking a nail.

That said, it misses the one use that I might have: it doesn’t work for generation 1 VMs. As you know, a generation 1 Hyper-V virtual machine can only PXE boot from a legacy network adapter. The legacy network adapter has poor performance. I’d like to be able to remove that legacy adapter post-deployment without shutting down the virtual machine. That said, it’s very low on my wish list. I’m guessing that we’ll eventually be using generation 2 VMs exclusively, so the problem will handle itself.

During my testing, I did not find any problems at all using this feature with Windows guests. As you can see from the Linux section, things didn’t go quite as well there. Either way, I would think twice about using this feature with production systems. Network disruptions do not always play out the way that you expect, because networks often behave unexpectedly. Multi-homed systems often crank the “strange” factor up somewhere near “haunted”. Multi-home a system and fire up Wireshark. I can almost promise that you’ll see something that you didn’t expect within the first five minutes.

I know that you’re going to use this feature anyway, and that’s fine; that’s why it’s there. I would make one recommendation: before removing an adapter, clear its TCP/IP settings and disconnect it from the virtual switch. That gives the guest operating system a better opportunity to deal with the removal of the adapter on familiar terms.

95 Best Practices for Optimizing Hyper-V Performance

We can never get enough performance. Everything needs to be faster, faster, faster! You can find any number of articles about improving Hyper-V performance and best practices, of course. Unfortunately, a lot of that information contains errors, FUD, and misconceptions, and some of it is just plain dated. Technology has changed, and experience continually teaches us new insights. From that, we can build a list of best practices that will help you tune your system for maximum performance.

How to optimize Hyper-V Performance

Philosophies Used in this Article

This article focuses primarily on performance. It may deviate from other advice that I’ve given in other contexts. A system designed with performance in mind will be built differently from a system with different goals. For instance, a system that tries to provide high capacity at a low price point would have a slower performance profile than some alternatives.

  • Subject matter scoped to the 2012 R2 and 2016 product versions.
  • I want to stay on target by listing the best practices with fairly minimal exposition. I’ll expand ideas where I feel the need; you can always ask questions in the comments section.
  • I am not trying to duplicate pure physical performance in a virtualized environment. That’s a wasted effort.
  • I have already written an article on best practices for balanced systems. It’s a bit older, but I don’t see anything in it that requires immediate attention. It was written for the administrator who wants reasonable performance but also wants to stay under budget.
  • This content targets datacenter builds. Client Hyper-V will follow the same general concepts with variable applicability.

General Host Architecture

If you’re lucky enough to be starting in the research phase — meaning, you don’t already have an environment — then you have the most opportunity to build things properly. Making good purchase decisions pays more dividends than patching up something that you’ve already got.

  1. Do not go in blind.
    • Microsoft Assessment and Planning Toolkit will help you size your environment: MAP Toolkit
    • Ask your software vendors for their guidelines for virtualization on Hyper-V.
    • Ask people that use the same product(s) if they have virtualized on Hyper-V.
  2. Stick with logo-compliant hardware. Check the official list: https://www.windowsservercatalog.com/
  3. Most people will run out of memory first, disk second, CPU third, and network last. Purchase accordingly.
  4. Prefer newer CPUs, but think hard before going with bleeding edge. You may need to improve performance by scaling out. Live Migration requires physical CPUs to be the same or you’ll need to enable CPU compatibility mode. If your environment starts with recent CPUs, then you’ll have the longest amount of time to be able to extend it. However, CPUs commonly undergo at least one revision, and that might be enough to require compatibility mode. Attaining maximum performance may reduce virtual machine mobility.
  5. Set a target density level, e.g. “25 virtual machines per host”. While it may be obvious that higher densities result in lower performance, finding the cut-off line for “acceptable” will be difficult. However, having a target VM number in mind before you start can make the challenge less nebulous.
  6. Read the rest of this article before you do anything.

Management Operating System

Before we carry on, I just wanted to make sure to mention that Hyper-V is a type 1 hypervisor, meaning that it runs right on the hardware. You can’t “touch” Hyper-V because it has no direct interface. Instead, you install a management operating system and use that to work with Hyper-V. You have three choices:

  • Windows Server with the full GUI
  • Windows Server Core
  • Hyper-V Server

Note: Nano Server initially offered Hyper-V, but that functionality will be removed (or has already been removed, depending on when you read this). Most people ignore the fine print of using Nano Server, so I never recommended it anyway.

TL;DR: In absence of a blocking condition, choose Hyper-V Server. A solid blocking condition would be the Automatic Virtual Machine Activation feature of Datacenter Edition. In such cases, the next preferable choice is Windows Server in Core mode.

I organized those in order by distribution size. Volumes have been written about the “attack surface” and patching. Most of that material makes me roll my eyes. No matter what you think of all that, none of it has any meaningful impact on performance. For performance, concern yourself with the differences in CPU and memory footprint. The widest CPU/memory gap lies between Windows Server and Windows Server Core. When logged off, the Windows Server GUI does not consume many resources, but it does consume some. The space between Windows Server Core and Hyper-V Server is much tighter, especially when the same features/roles are enabled.

One difference between Core and Hyper-V Server is the licensing mechanism. On Datacenter Edition, that does include the benefit of Automatic Virtual Machine Activation (AVMA). That only applies to the technological wiring. Do not confuse it with the oft-repeated myth that installing Windows Server grants guest licensing privileges. The legal portion of licensing stands apart; read our eBook for starting information.

Because you do not need to pay for the license for Hyper-V Server, it grants one capability that Windows Server does not: you can upgrade at any time. That allows you to completely decouple the life cycle of your hosts from your guests. Such detachment is a hallmark of the modern cloud era.

If you will be running only open source operating systems, Hyper-V Server is the natural choice. You don’t need to pay any licensing fees to Microsoft at all with that usage. I don’t realistically expect any pure Linux shops to introduce a Microsoft environment, but Linux-on-Hyper-V is a fantastic solution in a mixed-platform environment. And with that, let’s get back onto the list.

Management Operating System Best Practices for Performance

  1. Prefer Hyper-V Server first, Windows Server Core second
  2. Do not install any software, feature, or role in the management operating system that does not directly aid the virtual machines or the management operating system. Hyper-V prioritizes applications in the management operating system over virtual machines. That’s because it trusts you; if you are running something in the management OS, it assumes that you really need it.
  3. Do not log on to the management operating system. Install the management tools on your workstation and manipulate Hyper-V remotely.
  4. If you must log on to the management operating system, log off as soon as you’re done.
  5. Do not browse the Internet from the management operating system. Don’t browse from any server, really.
  6. Stay current on mainstream patches.
  7. Stay reasonably current on driver versions. I know that many of my peers like to install drivers almost immediately upon release, but I can’t join that camp. While it’s not entirely unheard of for a driver update to bring performance improvements, it’s not common. With all of the acquisitions and corporate consolidations going on in the hardware space — especially networking — I feel that the competitive drive to produce quality hardware and drivers has entered a period of decline. In simple terms, view new drivers as a potential risk to stability, performance, and security.
  8. Join your hosts to the domain. Systems consume less of your time if they answer to a central authority.
  9. Use antivirus and intrusion prevention. As long as you choose your anti-malware vendor well and the proper exclusions are in place, performance will not be negatively impacted. Compare that to the performance of a compromised system.
  10. Read through our article on host performance tuning.

Leverage Containers

In the “traditional” virtualization model, we stand up multiple virtual machines running individual operating system environments. As “virtual machine sprawl” sets in, we wind up with a great deal of duplication. In the past, we could justify that as a separation of the environment. Furthermore, some Windows Server patches caused problems for some software but not others. In the modern era, containers and omnibus patch packages have upset that equation.

Instead of building virtual machine after virtual machine, you can build a few virtual machines. Deploy containers within them. Strategies for this approach exceed the parameters of this article, but you’re aiming to reduce the number of disparate complete operating system environments that you deploy. With careful planning, you can reduce that duplication while maintaining a high degree of separation for your services. Fewer kernels are loaded, fewer context switches occur, less memory contains the same code bits, and fewer disk seeks retrieve essentially the same information from different locations.

  1. Prefer containers over virtual machines where possible.

CPU

You can’t do a great deal to tune CPU performance in Hyper-V. Overall, I count that among my list of “good things”; Microsoft did the hard work for you.

  1. Follow our article on host tuning; pay special attention to C States and the performance power settings.
  2. For Intel chips, leave hyperthreading on unless you have a defined reason to turn it off.
  3. Leave NUMA enabled in hardware. On your VMs’ property sheet, you’ll find a Use Hardware Topology button. Remember to use that any time that you adjust the number of vCPUs assigned to a virtual machine or move it to a host that has a different memory layout (physical core count and/or different memory distribution).
    Virtual machine NUMA configuration settings
  4. Decide whether or not to allow guests to span NUMA nodes (the global host NUMA Spanning setting). If you size your VMs to stay within a NUMA node and you are careful to not assign more guests than can fit solidly within each NUMA node, then you can increase individual VM performance. However, if the host has trouble locking VMs into nodes, then you can negatively impact overall memory performance. If you’re not sure, just leave NUMA at defaults and tinker later.
  5. For modern guests, I recommend that you use at least two virtual CPUs per virtual machine. Use more in accordance with the virtual machine’s performance profile or vendor specifications. This is my own personal recommendation; I can visibly detect the response difference between a single vCPU guest and a dual vCPU guest.
  6. For legacy Windows guests (Windows XP/Windows Server 2003 and earlier), use 1 vCPU. More will likely hurt performance more than help.
  7. Do not grant more than 2 vCPU to a virtual machine without just cause. Hyper-V will do a better job reducing context switches and managing memory access if it doesn’t need to try to do too much core juggling. I’d make exceptions for very low-density hosts where 2 vCPU per guest might leave unused cores. At the other extreme, if you’re assigning 24 cores to every VM just because you can, then you will hurt performance.
  8. If you are preventing VMs from spanning NUMA nodes, do not assign more vCPU to a VM than you have matching physical cores in a NUMA node (usually means the number of cores per physical socket, but check with your hardware manufacturer).
  9. Use Hyper-V’s priority, weight, and reservation settings with great care. CPU bottlenecks are highly uncommon; look elsewhere first. A poor reservation will cause more problems than it solves.

Memory

I’ve long believed that every person that wants to be a systems administrator should be forced to become conversant in x86 assembly language, or at least C. I can usually spot people that have no familiarity with programming in such low-level languages because they almost invariably carry a bizarre mental picture of how computer memory works. Fortunately, modern memory is very, very, very fast. Even better, the programmers of modern operating system memory managers have gotten very good at their craft. Trying to tune memory as a systems administrator rarely pays dividends. However, we can establish some best practices for memory in Hyper-V.

  1. Follow our article on host tuning. Most importantly, if you have multiple CPUs, install your memory such that it uses multi-channel and provides an even amount of memory to each NUMA node.
  2. Be mindful of operating system driver quality. Windows drivers differ from applications in that they can permanently remove memory from the available pool. If they do not properly manage that memory, then you’re headed for some serious problems.
  3. Do not make your CSV cache too large.
  4. For virtual machines that will perform high volumes of memory operations, avoid Dynamic Memory. Dynamic Memory disables virtual NUMA (out of necessity). How do you know what constitutes a “high volume”? Without performance monitoring, you don’t.
  5. Set your fixed memory VMs to a higher priority and a shorter startup delay than your Dynamic Memory VMs. This ensures that they will start first, allowing Hyper-V to plot an optimal NUMA layout and reduce memory fragmentation. It doesn’t help a lot in a cluster, unfortunately. However, even in the best case, this technique won’t yield many benefits.
  6. Do not use more memory for a virtual machine than you can prove that it needs. Especially try to avoid using more memory than will fit in a single NUMA node.
  7. Use Dynamic Memory for virtual machines that do not require the absolute fastest memory performance.
  8. For Dynamic Memory virtual machines, pay the most attention to the startup value. It sets the tone for how the virtual machine will be treated during runtime. For virtual machines running full GUI Windows Server, I tend to use a startup of either 1 GB or 2 GB, depending on the version and what else is installed.
  9. For Dynamic Memory VMs, set the minimum to the operating system vendor’s stated minimum (512 MB for Windows Server). If the VM hosts a critical application, add to the minimum to ensure that it doesn’t get choked out.
  10. For Dynamic Memory VMs, set the maximum to a reasonable amount. You’ll generally discover that amount through trial and error and performance monitoring. Do not set it to an arbitrarily high number. Remember that, even on 2012 R2, you can raise the maximum at any time.

Check the CPU section for NUMA guidance.

Networking

In the time that I’ve been helping people with Hyper-V, I don’t believe that I’ve seen anyone waste more time worrying about anything that’s less of an issue than networking. People will read whitepapers and forums and blog articles and novels and work all weekend to draw up intricately designed networking layouts that need eight pages of documentation. But, they won’t spend fifteen minutes setting up a network utilization monitor. I occasionally catch grief for using MRTG since it’s old and there are shinier, bigger, bolder tools, but MRTG is easy and quick to set up. You should know how much traffic your network pushes. That knowledge can guide you better than any abstract knowledge or feature list.

That said, we do have many best practices for networking performance in Hyper-V.

  1. Follow our article on host tuning. Especially pay attention to VMQ on gigabit and separation of storage traffic.
  2. If you need your network to go faster, use faster adapters and switches. A big team of gigabit won’t keep up with a single 10 gigabit port.
  3. Use a single virtual switch per host. Multiple virtual switches add processing overhead. Usually, you can get a single switch to do whatever you wanted multiple switches to do.
  4. Prefer a single large team over multiple small teams. This practice can also help you to avoid needless virtual switches.
  5. For gigabit, anything over 4 physical ports probably won’t yield meaningful returns. I would use 6 at the outside. If you’re using iSCSI or SMB, then two more physical adapters just for that would be acceptable.
  6. For 10GbE, anything over 2 physical ports probably won’t yield meaningful returns.
  7. If you have 2 10GbE and a bunch of gigabit ports in the same host, just ignore the gigabit. Maybe use it for iSCSI or SMB, if it’s adequate for your storage platform.
  8. Make certain that you understand how the Hyper-V virtual switch functions. Most important:
    • You cannot “see” the virtual switch in the management OS except with Hyper-V specific tools. It has no IP address and no presence in the Network and Sharing Center applet.
    • Anything that appears in Network and Sharing Center that you think belongs to the virtual switch is actually a virtual network adapter.
    • Layer 3 (IP) information in the host has no bearing on guests — unless you create an IP collision
  9. Do not create a virtual network adapter in the management operating system for the virtual machines. I did that before I understood the Hyper-V virtual switch, and I have encountered lots of other people that have done it. The virtual machines will use the virtual switch directly.
  10. Do not multi-home the host unless you know exactly what you are doing. Valid reasons to multi-home:
    • iSCSI/SMB adapters
    • Separate adapters for cluster roles. e.g. “Management”, “Live Migration”, and “Cluster Communications”
  11. If you multi-home the host, give only one adapter a default gateway. If other adapters must use gateways, use the old route command or the new New-NetRoute cmdlet (see the sketch after this list).
  12. Do not try to use internal or private virtual switches for performance. The external virtual switch is equally fast. Internal and private switches are for isolation only.
  13. If all of your hardware supports it, enable jumbo frames. Ensure that you perform validation testing (i.e.: ping storage-ip -f -l 8000)
  14. Pay attention to IP addressing. If traffic needs to locate an external router to reach another virtual adapter on the same host, then traffic will traverse the physical network.
  15. Use networking QoS if you have identified a problem.
    • Use datacenter bridging, if your hardware supports it.
    • Prefer the Weight QoS mode for the Hyper-V switch, especially when teaming.
    • To minimize the negative side effects of QoS, rely on limiting the maximums of misbehaving or non-critical VMs over trying to guarantee minimums for vital VMs.
  16. SR-IOV-capable physical NICs provide the best network performance for guests. However, you can’t use a traditional Windows team for SR-IOV-enabled physical NICs, and you can’t use VMQ and SR-IOV at the same time.
  17. Switch-embedded teaming (2016) allows you to use SR-IOV. Standard teaming does not.
  18. If using VMQ, configure the processor sets correctly.
  19. When teaming, prefer Switch Independent mode with the Dynamic load balancing algorithm. I have done some performance testing on the types (near the end of the linked article). However, a reader commented on another article that the Dynamic/Switch Independent combination can cause some problems for third-party load balancers (see comments section).
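
As mentioned in item 11, here is a minimal New-NetRoute sketch for a multi-homed host. The interface alias, destination network, and next hop are all placeholders; substitute your own values:

    # Hypothetical example: route a specific remote subnet through a secondary
    # adapter without giving that adapter a default gateway.
    New-NetRoute -InterfaceAlias 'vEthernet (Storage)' -DestinationPrefix '192.168.50.0/24' -NextHop '192.168.40.1'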

Storage

When you need to make real differences in Hyper-V’s performance, focus on storage. Storage is slow. The best way to make storage not be slow is to spend money. But, we have other ways.

  1. Follow our article on host tuning. Especially pay attention to:
    • Do not break up internal drive bays between Hyper-V and the guests. Use one big array.
    • Do not tune the Hyper-V partition for speed. After it boots, Hyper-V averages zero IOPS for itself. As a prime example, don’t put Hyper-V on SSD and the VMs on spinning disks. Do the opposite.
    • The best way to get more storage speed is to use faster disks and bigger arrays. Almost everything else will only yield tiny differences.
  2. For VHD (not VHDX), use fixed disks for maximum performance. Dynamically-expanding VHD is marginally, but measurably, slower.
  3. For VHDX, use dynamically-expanding disks for everything except high-utilization databases. I receive many arguments on this, but I’ve done the performance tests and have years of real-world experience. You can trust that (and run the tests yourself), or you can trust theoretical whitepapers from people that make their living by overselling disk space but have perpetually misplaced their copy of diskspd.
  4. Avoid using shared VHDX (2012 R2) or VHDS (2016). Performance still isn’t there. Give this technology another maturation cycle or two and look at it again.
  5. Where possible, do not use multiple data partitions in a single VHD/X.
  6. When using Cluster Shared Volumes, try to use at least as many CSVs as you have nodes. Starting with 2012 R2, CSV ownership will be distributed evenly, theoretically improving overall access.
  7. You can theoretically improve storage performance by dividing virtual machines across separate storage locations. If you need to make your arrays span fewer disks in order to divide your VMs’ storage, you will have a net loss in performance. If you are creating multiple LUNs or partitions across the same disks to divide up VMs, you will have a net loss in performance.
  8. For RDS virtual machine-based VDI, use hardware-based or Windows’ Hyper-V-mode deduplication on the storage system. The read hits, especially with caching, yield positive performance benefits.
  9. The jury is still out on using host-level deduplication for Windows Server guests, but it is supported with 2016. I personally will be trying to place Server OS disks on SMB storage deduplicated in Hyper-V mode.
  10. The slowest component in a storage system is the disk(s); don’t spend a lot of time worrying about controllers beyond enabling caching.
  11. RAID-0 is the fastest RAID type, but provides no redundancy.
  12. RAID-10 is generally the fastest RAID type that provides redundancy.
  13. For Storage Spaces, three-way mirror is fastest (by a lot).
  14. For remote storage, prefer MPIO or SMB multichannel over multiple unteamed adapters. Avoid placing this traffic on teamed adapters.
  15. I’ve read some scattered notes that say that you should format with 64 KB allocation units. I have never done this, mostly because I don’t think about it until it’s too late. If the default size hurts anything, I can’t tell. Someday, I’ll remember to try it and will update this article after I’ve gotten some performance traces. If you’ll be hosting a lot of SQL VMs and will be formatting their VHDX with 64 KB allocation units, then you might get more benefit.
  16. I still don’t think that ReFS is quite mature enough to replace NTFS for Hyper-V. For performance, I definitely stick with NTFS.
  17. Don’t do full defragmentation. It doesn’t help. The minimal defragmentation that Windows automatically performs is all that you need. If you have some crummy application that makes this statement false, then stop using that application or exile it to its own physical server. Defragmentation’s primary purpose is to wear down your hard drives so that you have to buy more hard drives sooner than necessary, which is why employees of hardware vendors recommend it all the time. If you have a personal neurosis that causes you pain when a disk becomes “too” fragmented, use Storage Live Migration to clear and then re-populate partitions/LUNs. It’s wasted time that you’ll never get back, but at least it’s faster. Note: All retorts must include verifiable and reproducible performance traces, or I’m just going to delete them.

Clustering

For real performance, don’t cluster virtual machines. Use fast internal or direct-attached SSDs. Cluster for redundancy, not performance. Use application-level redundancy techniques instead of relying on Hyper-V clustering.

In the modern cloud era, though, most software doesn’t have its own redundancy and host clustering is nearly a requirement. Follow these best practices:

  1. Validate your cluster. You may not need to fix every single warning, but be aware of them.
  2. Follow our article on host tuning. Especially pay attention to the bits on caching storage. It includes a link to enable CSV caching.
  3. Remember your initial density target. Add as many nodes as necessary to maintain that along with sufficient extra nodes for failure protection.
  4. Use the same hardware in each node. You can mix hardware, but CPU compatibility mode and mismatched NUMA nodes will have at least some impact on performance.
  5. For Hyper-V, every cluster node should use a minimum of two separate IP endpoints. Each IP must exist in a separate subnet. This practice allows the cluster to establish multiple simultaneous network streams for internode traffic.
    • One of the addresses must be designated as a “management” IP, meaning that it must have a valid default gateway and register in DNS. Inbound connections (such as your own RDP and PowerShell Remoting) will use that IP.
    • None of the non-management IPs should have a default gateway or register in DNS.
    • One alternative IP endpoint should be preferred for Live Migration. Cascade Live Migration preference order through the others, ending with the management IP. You can configure this setting most easily in Failover Cluster Manager by right-clicking on the Networks node.
    • Further IP endpoints can be used to provide additional pathways for cluster communications. Cluster communications include the heartbeat, cluster status and update messages, and Cluster Shared Volume information and Redirected Access traffic.
    • You can set any adapter to be excluded from cluster communications but included in Live Migration in order to enforce segregation. Doing so generally does not improve performance, but may be desirable in some cases.
    • You can use physical or virtual network adapters to host cluster IPs.
    • The IP for each cluster adapter must exist in a unique subnet on that host.
    • Each cluster node must contain an IP address in the same subnet as the IPs on other nodes. If a node does not contain an IP in a subnet that exists on other nodes, then that network will be considered “partitioned” and the node(s) without a member IP will be excluded from that network.
    • If the host will connect to storage via iSCSI, segregate iSCSI traffic onto its own IP(s). Exclude it/them from cluster communications and Live Migration. Because they don’t participate in cluster communications, it is not absolutely necessary that they be placed into separate subnets. However, doing so will provide some protection from network storms.
  6. If you do not have RDMA-capable physical adapters, Compression usually provides the best Live Migration performance.
  7. If you do have RDMA-capable physical adapters, SMB usually provides the best Live Migration performance.
  8. I don’t recommend spending time tinkering with the metric to shape CSV traffic anymore. It utilizes SMB, so the built-in SMB multi-channel technology can sort things out.

Virtual Machines

The preceding guidance obliquely covers several virtual machine configuration points (check the CPU and the memory sections). We have a few more:

  1. Don’t use Shielded VMs or BitLocker. The encryption and VMWP hardening incur overhead that will hurt performance. The hit is minimal — but this article is about performance.
  2. If you have 1) VMs with very high inbound networking needs, 2) physical NICs >= 10GbE, 3) VMQ enabled, 4) spare CPU cycles, then enable RSS within the guest operating systems. Do not enable RSS in the guest OS unless all of the preceding are true.
  3. Do not use the legacy network adapter in Generation 1 VMs any more than absolutely necessary.
  4. Utilize checkpoints rarely and briefly. Know the difference between standard and “production” checkpoints.
  5. Use time synchronization appropriately. Meaning, virtual domain controllers should not have the Hyper-V time synchronization service enabled, but all other VMs should (generally speaking). The hosts should pull their time from the domain hierarchy. If possible, the primary domain controller should be pulling from a secured time source.
  6. Keep Hyper-V guest services up-to-date. Supported Linux systems can be updated via kernel upgrades/updates from their distribution repositories. Windows 8.1+ and Windows Server 2012 R2+ will update from Windows Update.
  7. Don’t do full defragmentation in the guests, either. Seriously. We’re administering multi-spindle server equipment here, not displaying a progress bar to someone with a 5400-RPM laptop drive so that they feel like they’re accomplishing something.
  8. If the virtual machine’s primary purpose is to run an application that has its own replication technology, don’t use Hyper-V Replica. Examples: Active Directory and Microsoft SQL Server. Such applications will replicate themselves far more efficiently than Hyper-V Replica.
  9. If you’re using Hyper-V Replica, consider moving the VMs’ page files to their own virtual disk and excluding it from the replica job. If you have a small page file that doesn’t churn much, that might cost you more time and effort than you’ll recoup.
  10. If you’re using Hyper-V Replica, enable compression if you have spare CPU but leave it disabled if you have spare network. If you’re not sure, use compression.
  11. If you are shipping your Hyper-V Replica traffic across an encrypted VPN or keeping its traffic within secure networks, use Kerberos. Certificate-based (SSL) en/decryption requires CPU and adds overhead to the transmitted data.

Monitoring

You must monitor your systems. Monitoring is not, and has never been, an optional activity.

  1. Be aware of Hyper-V-specific counters. Many people try to use Task Manager in the management operating system to gauge guest CPU usage, but it just doesn’t work. The management operating system is a special-case virtual machine, which means that it is using virtual CPUs. Its Task Manager cannot see what the guests are doing.
  2. Performance Monitor has the most power of any built-in tool, but it’s tough to use. Look at something like Performance Analysis of Logs (PAL) tool, which understands Hyper-V.
  3. In addition to performance monitoring, employ state monitoring. With that, you no longer have to worry (as much) about surprise events like disk space or memory filling up. I like Nagios, as regular readers already know, but you can select from many packages.
  4. Take periodic performance baselines and compare them to earlier baselines.

 

If you’re able to address a fair proportion of points from this list, I’m sure you’ll see a boost in Hyper-V performance. Don’t forget that this list is not exhaustive; I’ll be adding to it periodically to keep it as comprehensive as possible. If you think something is missing, let me know in the comments below and you may see the number 95 increase!

Get Involved on Twitter: #How2HyperV

Get involved on Twitter, where we will be regularly posting excerpts from this article and engaging the IT community to help each other improve our use of Hyper-V. Got your own Hyper-V tips or tricks for boosting performance? Use the hashtag #How2HyperV when you tweet and share your knowledge with the world!


The Complete Guide to Hyper-V Networking

I frequently write about all sorts of Hyper-V networking topics. I was surprised to learn that we’ve never published a unified article that gives a clear and complete how-to that brings all of these related topics into one resource. We’ll fix that right now.

Understanding the Basics of Hyper-V Networking

We have produced copious amounts of material explaining the various concepts around Hyper-V networking. I want to spend as little time as possible on that here. Comprehension is very important, though, so here’s an index of expository work:

  • How the Hyper-V Virtual Switch Works: If you don’t understand the contents of that article, you will have a very difficult time administering Hyper-V. Read it, and read it again until you have absorbed it. It answers easily 90% of the questions that I receive about Hyper-V networking. If something there doesn’t make sense, ask.
  • The OSI model and Hyper-V: A quick read on the OSI model and a brief introduction to its relevance to Hyper-V. If you’ve been skimming over the terms “layer 2” and “layer 3” because you don’t have a solid understanding of them, read it.
  • Hyper-V and VLANs: That article ties closely to the OSI article. VLANs are a layer 2 technology. Due to common usage, newcomers often confuse them with layer 3 operations. I’m frequently asked about trunking multiple VLANs into a virtual machine, even though I’m fairly certain that most people don’t really want to do that. This article should help you sort out those concepts.
  • Hyper-V and IP: That article also ties closely to the OSI article and contrasts against the VLAN article. It doesn’t contain a great deal of direct Hyper-V knowledge, but it should help fill any of the most serious deficiencies in TCP/IP comprehension.
  • Hyper-V and Link Aggregation (Teaming): That article describes the concepts around NIC teaming and addresses some of the myths that I encounter. The article that you’re reading now will bring you the “how”.
  • Hyper-V and DNS: If I were to compile a list of ridiculously simple technologies that people tend to ridiculously over-complicate, I’d place DNS in the top slot. Hyper-V itself cares nothing about DNS, but its management operating systems and guests care very much. Poor DNS configurations can be blamed for nearly all of the world’s technological ills. You must learn it. It won’t take long.
  • Hyper-V and Binding Order: Lots of administrators spend lots of time wringing their hands over network binding order. Stop. Only the DNS subsystem and one other thing (that I’ve now forgotten about) pay any attention to the binding order. If you get that, then you don’t really need to read the linked article.
  • Hyper-V and Load Balancing Algorithms: The “hows” of load balancing algorithms will be on display in the article that you’re reading. If you want to understand the “what” and the “why”, then follow the link.
  • Hyper-V and MPIO and Teaming for Storage: I see lots of complaints from people that create a switch independent team on a pair of 10GbE pipes that wind back to a storage array with 5x 10,000 RPM disks. They test it with a file copy and don’t understand why they can’t move 20Gbps. Invariably, they blame Hyper-V. If you don’t want to be that guy, the linked article should help.

That should serve as a decent reference on the concepts. If you don’t understand something written below, it’s probably because you don’t understand something linked above.

Contents of this Article

I will demonstrate common PowerShell and, where available, GUI methods for working with:

  • Standard network adapter teaming
  • Hyper-V virtual switch
  • Switch Embedded Teaming
  • Hyper-V virtual adapters

PowerShell or GUI?

Use PowerShell for quick, precise, repeatable, scriptable operations. Use the GUI to do the same amount of work in twice the time following four times as many instructions. I will show all of the PowerShell methods first for the benefit of those that just want to get things done. If you prefer to plod through dozens of GUI screens, scroll to the bottom half of the article. Be aware that many things can’t be done in the GUI.

If you’re just getting started with PowerShell, remember to use tab completion! It makes all the difference!

Creating and Working with Network Adapter Teams for Hyper-V in PowerShell

If you’re interested in Switch Embedded Teaming (Server 2016 only), then look a few headings downward. This section applies to the standard Microsoft teaming methods.

First things first. You need to know which adapters to add to the team. Discover your available adapters:
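
A plain Get-NetAdapter call with no parameters lists them all, along with their status and interface descriptions:

    Get-NetAdapter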

I’ll use my system for reference. I’ve renamed all of the adapters in my system so that I can recognize them. If your hardware supports Consistent Device Naming, then you’ll likely already have actionable names (like “Slot 4 Port 1”). If not, you’ll need to find your own way to identify adapters. I use my switch’s interface to enable the ports one at a time, identifying the adapters as they switch to Connected status.


The PowerShell cmdlets for networking allow you to use names, indexes, or descriptions to manipulate adapters. The teaming cmdlets only work with names.

Create a Windows Team

Create teams with New-NetLbfoTeam.

I use my demo machines’ “P*L” adapters for Hyper-V teams. One way to create a team for them:
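
A representative version of that command (the adapter names here are placeholders; use the Name values from your own Get-NetAdapter output):

    New-NetLbfoTeam -Name 'vSwitch' -TeamMembers 'P1L', 'P2L' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic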

I usually Name my team for the virtual switch that I create on it, but choose any name that you like. The TeamMembers field accepts a comma-separated list of the names of the physical adapters to add to the team. I promised not to go into detail on the options, and I won’t. Just remember that the other parameters and their values are selectable by tab completion. SwitchIndependent is the preferred teaming mode in most cases with LACP being second. I have never seen any compelling reason to use a load balancing algorithm other than Dynamic. Most people will want to use the Dynamic load balancing algorithm as it combines the best of Hyper-V Port mode and the hash mode along with some special features that can relocate traffic dynamically. However, if you will be combining Switch Independent and Dynamic with an external third-party load balancer, I recommend that you read the comment section for helpful warnings from reader Jahn.

To save even more time and space, the cmdlet is smart enough to allow you to use wildcards for the adapter names:
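
For example, the following matches all of my “P*L” adapters at once (adjust the pattern to your own adapter names):

    New-NetLbfoTeam -Name 'vSwitch' -TeamMembers 'P*L' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic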

If you want to avoid the prompt for scripting purposes, add the Force parameter.

A Note on the Team NIC

When you create a team, you also create a logical adapter that represents that team. A logical team NIC (often abbreviated as tNIC) works in a conceptually similar fashion to a Hyper-V virtual NIC. You treat it just like you would a physical adapter — give it an IP address, etc. The team determines what to do with your traffic. If you use the cmdlets as shown above, one team NIC will be created and it will have the same name as the team (“vSwitch”, in this case). You can override that name with the TeamNicName parameter.

You can also add more team NICs to a team. For a team that hosts a Hyper-V virtual switch, it’s neither recommended nor supported, although the system will allow it. Additional tNICs must be created in their own VLAN, which hides that VLAN from the team. Also, it’s not documented or clear how additional tNICs affect any QoS settings on a Hyper-V virtual switch.

For the rest of this article, the single automatically-created tNIC will be the only one referenced.

Examine Teams and tNICs

View all teams and their statuses with Get-NetLbfoTeam. You don’t need to supply any parameters. I get more use out of Get-NetLbfoTeamMember, also without parameters.

Remove and Add Team Members

You can easily remove team members if you have the need:
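
For example (the adapter name is a placeholder):

    Remove-NetLbfoTeamMember -Name 'P1L' -Team 'vSwitch'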

And add them:
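
Using the same placeholder names as above:

    Add-NetLbfoTeamMember -Name 'P1L' -Team 'vSwitch'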

Removing an adapter obviously disrupts the traffic on that member, but the team will handle it well. You can add a team member at any time.

Delete a Team

Use Remove-NetLbfoTeam to get rid of a team. You can use the Name parameter to reverse what you’ve done. Since my hosts only ever use a single team, I can do this:
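
Something like this removes every team on the host, which in my case is only the one:

    Get-NetLbfoTeam | Remove-NetLbfoTeam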

Working with the Hyper-V Virtual Switch

I always use Hyper-V virtual switches and Microsoft teams together, so I have a certain technique. You may choose a different path. Just understand that external switches must be created on an adapter. I will always use the default tNIC. If you’re not teaming, then you’ll pick a single physical NIC. Use Get-NetAdapter as shown in the teaming section above to determine the name of the adapter that you wish to use.

Create a Virtual Switch

Use New-VMSwitch to create a new switch. Most commonly, you’ll want the external type (refer to the articles linked at the beginning if you need an explanation). External switches require you to specify a logical or physical (but not virtual) adapter. You can use its friendly name or its less friendly description. I use the name. In my case, I’m binding to a team’s logical adapter, so, as explained a bit ago, I’ll use the team’s name.
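
A representative command, using the tNIC name from the teaming section, my preferred Weight QoS mode, and no automatically-created management OS vNIC:

    # The team NIC carries the same name as the team, "vSwitch" in this case.
    New-VMSwitch -Name 'vSwitch' -NetAdapterName 'vSwitch' -MinimumBandwidthMode Weight -AllowManagementOS $false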

For internal or private, use the SwitchType parameter instead of the NetAdapterName parameter and do not use AllowManagementOS.
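
For instance (the switch name is only an example):

    New-VMSwitch -Name 'Isolated' -SwitchType Private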

Several things to note about the New-VMSwitch cmdlet:

  • New-VMSwitch is not one of the better-developed cmdlets. Usually, when tabbing through available parameters, your options are presented in a sensible order. New-VMSwitch’s parameters are all over the place.
  • The documentation for every version of New-VMSwitch always says that the default MinimumBandwidthMode is Weight. I used to classify this as an error, but it’s been going on for so long I’m starting to wonder if it’s an unfunny inside joke or a deliberate lie. The default is Absolute. Most people won’t ever need QoS, so I don’t know that it has practical importance. However, you can’t change a switch’s QoS mode after it’s been created, so I’d rather tell you this up front.
  • The “AllowManagementOS” parameter’s name is nonsense. What it really means is “immediately create a virtual adapter for the management operating system”. The only reason that I don’t allow it to create one is because it uses the same name for the virtual adapter as the virtual switch. That’s confusing for people that don’t know how all of this works. You can always add virtual adapters later, so the “allow” verb makes no sense whatsoever.

Manipulate a Virtual Switch

Use Set-VMSwitch to make changes to your switch. The cmdlet has so many options that I can’t rationally explain it all. Just scan the parameter list to find what you want. A couple of notes, though:

  • You can’t change the QoS mode of an existing virtual switch.
  • You can switch between External, Internal, and Private types.
    • To go from External to either of the other types: Set-VMSwitch -Name vSwitch -SwitchType Internal. Just use Private instead of Internal if you want that switch type.
    • To go from Private or Internal to External: Set-VMSwitch -Name vSwitch -NetAdapterName vSwitch. You’d also use this format to move a virtual switch from one physical/logical network adapter to another.
  • You can’t rename a virtual switch with this cmdlet. Use Rename-VMSwitch.

Remove a Virtual Switch

Appropriately enough, Remove-VMSwitch removes a virtual switch.
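
To remove my demo switch by name:

    Remove-VMSwitch -Name 'vSwitch'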

You can remove all virtual switches in one shot:
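
One way to do it (Force suppresses the confirmation prompt):

    Get-VMSwitch | Remove-VMSwitch -Force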

When a switch is removed, virtual NICs on VMs are disconnected. Virtual NICs for the management OS are destroyed.

Speaking of virtual NICs, that’s the next thing you care about if you’re using a standard virtual switch. I’ll explain them after the Switch Embedded Team section.

Working with Hyper-V Switch Embedded Teams

Server 2016 adds Switch Embedded Teaming. If you’re planning to create a team of gigabit adapters, then I recommend that you use the traditional teaming method outlined above. I wrote an article explaining why.

Create a Switch Embedded Team (SET)

Use the familiar New-VMSwitch to set it up, but add the EnableEmbeddedTeaming option. Two other options not shown in the following are EnableIov and EnablePacketDirect.
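
A representative example (the adapter names are placeholders; Weight mode and the disabled management OS vNIC reflect my own preferences):

    New-VMSwitch -Name 'vSwitch' -NetAdapterName 'P1L', 'P2L' -EnableEmbeddedTeaming $true -MinimumBandwidthMode Weight -AllowManagementOS $false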

The documentation continues to be wrong on MinimumBandwidthMode. If you don’t specify otherwise, you get Absolute. Prefer Weight.

Use EnableIov if, and only if, you have 10GbE adapters that support it. I cannot find any details on Packet Direct anywhere. Everyone just repeats that it provides a low-latency connection that bypasses the virtual switch. A few sources add that it will force Hyper-V Port load balancing mode. My hardware doesn’t support it, so I can’t test it. I assume that it only works on 10GbE and probably only with SR-IOV.

Once a SET has been created, you view it with both Get-VMSwitch and Get-VMSwitchTeam. For whatever reason, they decided that the output should include the difficult-to-read interface descriptions instead of adapter names the way Get-NetLbfoTeam does. You can see the adapter names with something like this:
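
One possibility, assuming the SET is named vSwitch:

    Get-NetAdapter -InterfaceDescription (Get-VMSwitchTeam -Name 'vSwitch').NetAdapterInterfaceDescription |
        Select-Object -Property Name, InterfaceDescription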

The SET cmdlets have no analog for Get-NetLbfoTeamMember.

SET does not expose a logical adapter to Windows the way that LBFO does.

Manipulate a Switch Embedded Team

You can change the members and the load balancing mode for a SET using Set-VMSwitchTeam.
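
For example (note that the NetAdapterName list you supply replaces the current membership outright, and the adapter names are placeholders):

    Set-VMSwitchTeam -Name 'vSwitch' -NetAdapterName 'P1L', 'P2L' -LoadBalancingAlgorithm HyperVPort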

Add and Remove SET Members

Instead of Set-VMSwitchTeam, you can use Add-VMSwitchTeamMember and Remove-VMSwitchTeamMember.
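
For example, to cycle a placeholder adapter out of and back into the SET:

    Remove-VMSwitchTeamMember -VMSwitchName 'vSwitch' -NetAdapterName 'P2L'
    Add-VMSwitchTeamMember -VMSwitchName 'vSwitch' -NetAdapterName 'P2L'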

Remove a SET

Use Remove-VMSwitch to remove a SET. There is no Remove-VMSwitchTeam cmdlet.

Working with Virtual Network Adapters

You can attach virtual network adapters (vNICs) to the management operating system or virtual machines. You’ll most commonly use them with virtual machines, but you’ll also usually do less work with them. Their default settings tend to be sufficient and you can work with them through their owning virtual machine’s GUI property pages.

For almost every vNIC-related cmdlet, you must indicate whether you’re working with a management OS vNIC or a VM’s vNIC. Do this with the ManagementOS switch parameter or by supplying a value for either the VM or the VMName parameters. If you have a vNIC object, such as the one output by Get-VMNetworkAdapter, then you can pipe it to most of the vNIC cmdlets or provide it as the VMNetworkAdapter parameter. You won’t need to specify any of the other identifying parameters, including those previously mentioned in this paragraph, when you provide the vNIC object.

View a Virtual Network Adapter

The simple act of creating a virtual machine, or of creating a virtual switch with AllowManagementOS set, creates a vNIC. To view them all:
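
The All switch returns management OS vNICs and virtual machine vNICs together:

    Get-VMNetworkAdapter -All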

Ordinarily, we give descriptive names to management OS vNICs, especially when we use more than one. If you didn’t specify AllowManagementOS, then you’ll have a vNIC with the same name as your vSwitch.

Each management OS vNIC will appear in the Network Connections applet and Get-NetAdapter with the format vEthernet (vNICName). Avoid confusion by changing the default vNIC’s name (shown in a bit). Many newcomers believe that this vNIC is the virtual switch because of that name. You cannot “see” the virtual switch anywhere except in Hyper-V-specific management tools.

Ordinarily, we leave the default name of “Network Adapter” for virtual machine vNICs. New in 2016, changes to a guest’s vNIC name will appear in the guest operating system if it supports Consistent Device Naming (CDN).

Manipulate a Virtual Network Adapter

Use Set-VMNetworkAdapter to change vNIC settings. As you can see, this cmdlet is quite busy; I could write multiple full-length articles on various parameter groups. Settings categories available with this command:

  • Quality of service (Qos)
  • Security (MAC spoofing, router guard, DHCP guard, storm)
  • Replica
  • In-guest teaming
  • Performance (VMQ, IOV, vRSS, Packet Direct)
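
As a small illustration of the security and QoS groups (the VM name is a placeholder, and the bandwidth weight only means anything on a Weight-mode virtual switch):

    Set-VMNetworkAdapter -VMName 'svtest' -DhcpGuard On -RouterGuard On -MinimumBandwidthWeight 10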

You need a different cmdlet for VLAN manipulation, though.

Manipulate Virtual Network Adapter VLANs

Use Set-VMNetworkAdapterVlan for all things VLAN on vNICs.

To place a management OS vNIC into a VLAN:
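
For example, with a placeholder vNIC name and VLAN ID:

    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId 10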

Remember that the VlanId parameter requires the Access parameter.

Also remember that there is no such thing as “VLAN 0”. For some unknown reason, the cmdlet will accept it and assign the adapter to VLAN 0, but strange things might happen. Usually, it’s just that you can’t get traffic in or out of the adapter. If you want to clear the adapter’s VLAN, don’t use VLAN 0. Use Untagged:
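
Using the same placeholder vNIC name:

    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Untagged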

I’m not going to cover trunking or private VLANs. Trunking is very advanced and I don’t think more than 5 percent of the people that have asked me how to do it really wanted to do it. If you want a single virtual machine to exist in multiple VLANs, add virtual adapters and assign individual VLANs. Private VLANs require you to work with PrimaryVlanId, SecondaryVlanId, SecondaryVlanIdList, Promiscuous, Community, and Isolated as necessary. If you need to use private VLANs, then you or your networking engineer should already understand each of these terms and intuitively understand how to use the parameters.

Since we’re commonly asked, the Promiscuous parameter on Set-VMNetworkAdapterVlan does not have anything to do with accepting or participating in all passing layer 2 traffic. It is only for private VLANs.

Adding and Removing Virtual Network Adapters

Use Add-VMNetworkAdapter and Remove-VMNetworkAdapter for their respective tasks.
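
For example, to add a second management OS vNIC (with placeholder names) and then remove it again:

    Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'vSwitch'
    Remove-VMNetworkAdapter -ManagementOS -Name 'LiveMigration'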

Connecting and Disconnecting Virtual Network Adapters to/from Virtual Switches

These cmdlets only work for virtual machine vNICs. You cannot dis/connect management OS vNICs; you can only add or remove them.

Connect always works. You do not need to disconnect an adapter from its current virtual switch to connect it to a new one. If you want to connect all of a VM’s vNICs to the same switch, specify only the virtual machine in VMName.
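
For example, with a placeholder virtual machine name:

    Connect-VMNetworkAdapter -VMName 'svtest' -SwitchName 'vSwitch'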

If you provide the Name parameter, then only that vNIC will be altered:
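
Continuing the same placeholder example, using the default vNIC name of “Network Adapter”:

    Connect-VMNetworkAdapter -VMName 'svtest' -Name 'Network Adapter' -SwitchName 'vSwitch'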

These two cmdlets do not provide a VM parameter. It is possible for two virtual machines to have the same name. If you need to discern between two VMs with the same name, use the pipeline and filter from other cmdlets:
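
A sketch of that approach, filtering on the VMId of the instance that you want (the ID shown is a placeholder):

    $targetId = '00000000-0000-0000-0000-000000000000'   # VMId of the correct instance
    Get-VM -Name 'svtest' | Where-Object -Property VMId -EQ -Value $targetId |
        Get-VMNetworkAdapter | Connect-VMNetworkAdapter -SwitchName 'vSwitch'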

Use Disconnect-VMNetworkAdapter the same way, leaving off the SwitchName parameter.

VLAN information is preserved across dis/connects.

Other vNIC Settings

I did not touch on the entire range of possible vNIC cmdlets or their settings. You can go to the root 2016 Hyper-V PowerShell page and view all available cmdlets. Search the page for adapter, and you’ll find many hits.

Using the GUI for Hyper-V Networking

The GUI lags dramatically behind PowerShell for most things related to Hyper-V. I doubt any category shows that as strongly as networking. So, whether you (or I) like it or not, using the GUI for Hyper-V networking qualifies as “beginner mode”. Most of the things that I showed you above cannot be done in the GUI at all. So, unless you’re managing a single host with a single network adapter, the GUI will probably not help you much.

The following sections show you the few things that you can do in the GUI.

Working with Windows Teams

The GUI does allow you some decent capability when working with Windows teams.

Create a Windows Team

You can use the GUI to create teams on Server 2012 and later. You can find the applet in Server Manager on the Local Server tab.

Using the GUI for Hyper-V Networking

You can also run lbfoadmin.exe from the Run window or an elevated prompt.

Once open, click the Tasks drop-down in the Teams section. Click New Team.

Tasks drop-down in the Teams section

You’ll get the NIC Teaming/New team dialog, where you’ll need to fill out most fields:

NIC Teaming/New team

Manipulate a Team

To make changes to your team later, just return to the same screens and dialogs using the same methods as you used to create the team.

Manipulate a Team

Delete a Team

To delete a team, use the Delete function in the same place on the main lbfoadmin screen where you found the New Team function. Make sure to highlight the team that you want to delete, first.

Delete a Team

Working with the Hyper-V Virtual Switch

The GUI provides very limited ability to work with Hyper-V virtual switches. You can’t configure QoS (except on vNICs) and it allows nearly nothing to be done for management OS vNICs.

Create a Hyper-V Virtual Switch

When using the Add Roles wizard to enable Hyper-V, you can create a virtual switch. I won’t cover that. If you’re looking at that screen, wondering what to do, I recommend that you skip it and follow the PowerShell directions above. If you simply must use a GUI, then wait until after the role finishes installing and create one using Hyper-V Manager.

To create a new virtual switch in Hyper-V Manager:

  1. Right-click the host in Hyper-V Manager and click Virtual Switch Manager. Alternatively, you’ll find this same menu at the far right of the main screen under Actions.
    Working with the Hyper-V Virtual Switch
  2. At the left of the dialog, highlight New virtual network switch.
    Create a Hyper-V Virtual Switch
  3. On the right, choose the type of switch that you want to create. I’m not entirely sure why it even asks because you can pick anything you want once you click Create Virtual Switch.
    Create a Hyper-V Virtual Switch Type
  4. The creation screen itself is very busy. I’ll tackle that in a moment. First, look to the left of the dialog at the blue text. It’s a new entry named New Virtual Switch. It represents what you’re working on now. If you change the name, you’ll see this list item change as well. You can use Apply to make changes and continue working without closing the dialog. You can even add another switch before you accept this one.
    New Virtual Switch

Now for the new switch screen. Look after the screenshot for an explanation of the items:

Virtual Switch properties - Hyper-V Networking

First item: name your switch.

I would skip the notes field, especially in a failover cluster.

For Connection Type, you’re deciding between External, Internal, and Private. That’s why I don’t understand why it asked you on the initial dialog. If you choose External, you’ll need to pick a logical or physical adapter for binding. Unfortunately, you can only see the fairly useless adapter description fields. Look in the Network Connections applet to determine which is which. This right here is one of the primary reasons I like switch creation in PowerShell better.

Remember that the IOV setting is permanent.

I despise the item here called Allow management operating system to share this network adapter. That description has absolutely no relation to what the checkbox does. If you check it, it will automatically create a virtual NIC in the management OS for this virtual switch and give it the same name as the virtual switch. That’s all it does. There is no “sharing”, and there is no permanent allowing or disallowing going on.

The VLAN ID section ties to the nonsensical “Allow…” field. If you let the system create a management OS vNIC for you, then you can use this to give it a VLAN ID.

You can use the Remove button if you decide that you don’t want to create the virtual switch after all. Cancel would work, too.

Where’s the QoS? Oh, you can’t set the QoS mode for a virtual switch using the GUI. PowerShell only. If you use this screen to create a virtual switch, it will use the Absolute QoS mode. Forever. Another reason to choose PowerShell.

Manipulate a Virtual Switch

To make changes to a virtual switch, follow the exact steps that you did to create one, except choose the existing virtual switch at the left of the Virtual Switch Manager dialog. Of course, you can’t change much, but there it is.

Remove a Virtual Switch

Retrace your creation steps. Select the virtual switch at the left of the Virtual Switch Manager screen. Click the Remove button at the bottom right.

Working with Hyper-V Switch Embedded Teams

You can’t use the GUI to work with Hyper-V SET. PowerShell-only.

You can use the Virtual Switch Manager as described previously to remove one, though.

Working with Hyper-V Virtual Network Adapters

The GUI provides passably decent ability to work with vNICs — for guests. The only place that you can do anything with management OS vNICs is on that virtual switch creation screen. You can add or remove exactly one vNIC and you can set or remove its VLAN. You can’t use the GUI to work with two or more management OS vNICs. In fact, if you use PowerShell to add a second management OS vNIC, all related items in the dialog are grayed out and unusable.

But, for virtual machines, the GUI exposes most functionality.

Manipulate Virtual Network Adapters on Virtual Machines

In Hyper-V Manager or Failover Cluster Manager, open up the Settings dialog for the virtual machine to work with. On the left, you can find the vNIC that you want to work with. Highlight it, and the page will switch to its configuration screen. In the following screenshot, I’ve also expanded the vNIC so that you can see its subtabs, Hardware Acceleration and Advanced Features.

Manipulate Virtual Network Adapters on Virtual Machines

On this screen, you can change the virtual switch that this adapter connects to, or disconnect it. You can change or remove its VLAN. You can set its QoS. The QoS fields here are presented as Absolute values since that’s the default; the dialog doesn’t change if your switch uses Weight mode, so I would use PowerShell for that. You can also Remove the vNIC here.

The Hardware Acceleration subtab:

Hardware Acceleration

Here, you can change:

  • If a VMQ can be assigned to this vNIC. The host’s adapters must support VMQ and a queue must be available for this checkbox to have any effect.
  • IPSec task offloading. If the host’s physical adapter supports IPSec task offloading and has sufficient resources, the guest can offload IPSec tasks to the hardware.
  • An SR-IOV virtual function can be assigned to this NIC. The host’s adapters and motherboard must support IOV, it must be enabled on the adapter and in BIOS, the virtual switch must either be unteamed or on a SET, and a virtual function must be available for this checkbox to have any effect.

The Advanced Features subtab:

Advanced Features

Note that this screen scrolls, and I didn’t capture it all.

Here, you can change:

  • MAC address, mode and address both
  • Whether or not the guest can spoof the MAC
  • If the guest is prevented from receiving DHCP discover/request frames
  • If the guest is prevented from receiving router discovery packets
  • If a failover cluster will move the guest if it loses network connectivity (Protected network)
  • If the vNIC’s traffic is mirrored to another vNIC. This feature seems to have troubles, FYI.
  • If teaming is allowed in the guest. The guest requires at least two vNICs and the virtual switch must be placed on a team or SET for this to function.
  • The Device naming switch allows the name of the vNIC to be propagated into the guest where an OS that supports Consistent Device Naming (CDN) can use it. Note that this is disabled by default, and the GUI doesn’t allow you to rename the vNIC. Use PowerShell for that.

Remove a Virtual Network Adapter

To remove a vNIC from a guest, find its tab in the VM’s settings dialog in Hyper-V Manager or Failover Cluster Manager. Use the Remove button at the bottom right. You’ll find a screenshot above in the Manipulate Virtual Network Adapters on Virtual Machines section.

 

Note: This guide will be periodically updated to make sure it covers all possible Hyper-V Networking problems. If you think I’ve missed anything please let me know in the comments below.

How to Optimize Hyper-V Performance for Dell PowerEdge T20

A little while back, we published an eBook detailing how to build an inexpensive Hyper-V cluster. At that price point, you’re not going to find anything that breaks performance records. Such a system could meet the needs of a small business, though. For those of you lucky enough to have a more substantial budget, it also works well as a cheap test lab. Whatever your usage, the out-of-box performance can be improved.

The steps in this article were written using the hardware in the previously linked eBook. If you have a Dell T20 that uses a different build, you may not have access to the same options. You may also need to look elsewhere for guidance on configuring additional hardware that I do not have.

A little upfront note: Never expect software or tweaks to match better hardware. If you expect a few switches and tips to turn a T20 into competition for a latest generation PowerEdge R-series, you will leave disappointed. I am always amazed by people that buy budget hardware and then get upset because it acts like budget hardware. If you need much better performance, break out your wallet and buy better hardware.

Step 1: Disable C-States

The number one thing you should always do on all systems to improve Hyper-V performance: disable C-States. You make that change in the system’s BIOS. The T20’s relevant entry appears below. Just clear the box.

t20perf_cstates

I also recommend that you disable SpeedStep, although you probably won’t gain much by doing so.

Step 2: Update Drivers

I know, I know, updating drivers is the oldest of all so-called “performance enhancement” cliches. Bear with me, though. All of the hardware works just fine with Windows default drivers, but the drivers unlock some options that you’ll need.

Start at https://support.dell.com. You’ll be asked for the system’s service tag. At an elevated PowerShell prompt, enter gwmi win32_bios and look at the SerialNumber line:

t20perf_servicetag

Highlight and press [Enter] to copy it to the clipboard.

Select the Drivers and Downloads tab, then locate the Change OS link so that you can select the correct operating system. Dell periodically changes their support site, so you may see something different, but these named options have been the same for a while:

t20perf_driversystem

Items that you want:

  • BIOS (reboots without asking; stop your VMs first)
  • Chipset
  • Intel(R) Management Engine Components Installer
  • Intel Rapid Storage Technology Driver and Management Console
  • Intel Rapid Storage Technology F6 Driver

After gathering those files, go to Intel’s support site: https://downloadcenter.intel.com/.

This site also changes regularly. What I did was search for the “I217-LM”. On its list of downloads, I found the Intel Ethernet Adapter Connections CD. That includes drivers for just about every Intel network adapter in existence. If you have the system build that I described in the eBook, this file will update the onboard adapter and the add-in PRO/1000 PTs (and any other Intel network adapters that you might have chosen).

If you’re targeting a GUI-less system, unblock the files. An example:
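
A representative example, assuming the downloads landed in C:\Install (adjust the path to wherever you stored the files):

    Get-ChildItem -Path 'C:\Install' -Recurse -File | Unblock-File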

If you prefer the mouse, then you can use each item’s individual property dialog instead.

Also make sure that you use a GUI system to unzip the Intel CD prior to attempting to install on a GUI-less system.

I’m sure you can install drivers without my help. Do that and read on.

Step 3: Networking Performance Tweaks

Three things you want to do for networking:

  1. Enable jumbo frames for storage adapters
  2. Disable power management
  3. Disable VMQ on any team adapters

Enabling Jumbo Frames

First, make sure that jumbo frames are enabled on your physical switch. It’s always OK for a system to use smaller frames on equipment that has larger frames enabled. The other way around usually just causes excessive fragmentation. That hurts performance, but things still work. Sometimes, it causes Ethernet frames to never be delivered. Always configure your switch first. Many require a power cycle for the change to take effect.

Once jumbo frames are set on your switch, make the change on the T20’s physical adapters. You can make the change in the GUI or in PowerShell.

Enabling Jumbo Frames via the GUI

  1. In Network Connections, access an Intel adapter’s property sheet.
  2. Click the Configure button.
  3. Switch to the Advanced tab.
  4. Set Jumbo Packet to its highest number; usually 9014.

When you install the Intel network drivers and management pack, the I217-LM driver page will look like the following:

t20perf_i217jumbo

Intel adapters not under management will look like this:

t20perf_regularjumbo

Enabling Jumbo Frames in PowerShell

PowerShell makes this fast and simple:
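
A minimal sketch using the standardized registry keyword; narrow the Name filter if you only want to touch specific adapters, and confirm the result afterward with Get-NetAdapterAdvancedProperty:

    Set-NetAdapterAdvancedProperty -Name '*' -RegistryKeyword '*JumboPacket' -RegistryValue 9014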

Disabling Network Adapter Power Management

Windows likes to turn off network adapters. Unfortunately, it doesn’t always do the best job ensuring that you’re not still using it. You can disable power management using the GUI or PowerShell.

Disabling Network Adapter Power Management in the GUI

Navigate to the adapter’s properties like you did to enable jumbo frames. This time, go to the Power Management tab. For a device under the control of the Intel management system, just uncheck Reduce link speed during system idle.

t20perf_i217speedreduce

For adapters using default drivers, uncheck Allow the computer to turn off this device to save power:

t20perf_regularnetpm

Disabling Network Adapter Power Management in PowerShell

The process is a bit more involved in PowerShell, but I’ve done the hard work for you. Just copy/paste into an elevated PowerShell prompt or run as a script:
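
One way to do it is to clear each physical network adapter’s device power-down flag through the MSPower_DeviceEnable WMI class; a minimal sketch of that approach:

    # Sketch: clears "Allow the computer to turn off this device to save power"
    # for every physical network adapter by matching its PnP device ID to the
    # corresponding MSPower_DeviceEnable instance in the root\wmi namespace.
    $powerSettings = Get-CimInstance -Namespace 'root\wmi' -ClassName 'MSPower_DeviceEnable'
    foreach ($adapter in (Get-NetAdapter -Physical))
    {
        $deviceSettings = $powerSettings | Where-Object { $_.InstanceName -like ($adapter.PnPDeviceID + '*') }
        foreach ($setting in $deviceSettings)
        {
            if ($setting.Enable)
            {
                $setting.Enable = $false
                Set-CimInstance -InputObject $setting
            }
        }
    }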

Disable VMQ on Team Adapters

None of the adapters included with this system or in the eBook build support VMQ. That’s good because I don’t know of any manufacturers that properly implement VMQ on gigabit adapters. However, if you create a native Microsoft LBFO team, VMQ will be enabled on it. Whether or not it does anything… I don’t know. I do know that I seemed to clear up some strange issues when I disabled it on 2012. So, I’ve been doing it ever since. It’s quick and easy, so even if it doesn’t help, it certainly won’t hurt.

Note: If you are using the build from the eBook, only follow this section on the Hyper-V hosts. The storage server won’t use VMQ anyway.

Disabling VMQ on Team Adapters Using the GUI

Find the team adapter in Network Connections. It should be quite obvious, since the icon shows two physical adapters. Its description field will say Microsoft Network Adapter Multiplexor Driver.

t20perf_teamadapter

Open it up and get into its adapter configuration properties just as you did for the physical adapters above. Switch to the Advanced tab. Find the Virtual Machine Queues entry and set it to Disabled:

t20perf_teamadapterkillvmq

Disabling VMQ on Team Adapters in PowerShell

PowerShell can make short work of this task:
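
A one-liner that targets only the team’s multiplexor adapter; the physical members are left alone:

    Get-NetAdapter | Where-Object -Property InterfaceDescription -Like 'Microsoft Network Adapter Multiplexor*' |
        ForEach-Object { Disable-NetAdapterVmq -Name $_.Name }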

Step 4: Storage Performance Tweaks

The disks in these systems are slow. Nothing will change that. But, we can even out their transfer patterns a bit.

Changing the Disk Cache Mode for the Hyper-V Hosts

The Hyper-V hosts don’t do a great deal of disk I/O. In my personal configuration, I do place my domain controllers locally. However, for any domain these systems could reasonably handle, the domain controllers will perform very little I/O. We’ll enable the read cache on these systems. It will help, but you may not see much improvement due to their normal usage pattern.

Note: I have not attempted any of this on a GUI-less system. If the graphical interface works, you’ll find its exe at: “C:\Program Files\Intel\Intel(R) Rapid Storage Technology\IAStorUI.exe”.

Under the Intel Start menu entry, open Intel® Rapid Storage Technology. Switch to the Performance tab. You could disable Link Power Management, but it’s not going to help much on the Hyper-V hosts. Change the Cache mode to Read only.

t20perf_hvstoragecache

Changing the Disk Cache Mode for the Storage Host

The storage server does most of the heavy lifting in this build. We can set some stronger caching policies that will help its performance.

Warning: These steps are safe only if you have a battery backup system that will safely shut down the system in the event of a power outage. As shipped, these systems do not have an internal battery backup for the RAID arrays. You can purchase add-on cards that provide that functionality. My system has one external battery that powers all three hosts. However, its USB interface connects only to the storage system. Do not follow these steps for your Hyper-V hosts unless you have a mechanism to gracefully shut them down in a power outage.

Follow the same steps to access the Intel® Rapid Storage Technology’s Performance tab as you did on the Hyper-V hosts. This time, disable the power management option, enable write-cache buffer flushing, and set the cache mode to Write back:

t20perf_batterystoragecache

Microsoft’s Tuning Guide

At this point, you’ve made the best improvements that you’re likely to get with this hardware. However, Microsoft publishes tuning guides that might give you a bit more.

Access the 2016 tuning guide: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/index

The 2016 guide doesn’t contain very many instructions to follow; it contains a great deal of information. Aside from changing the power plan to High Performance, you won’t find much to do.

The 2012 R2 guide contains more activities: https://msdn.microsoft.com/en-us/library/windows/hardware/dn567657(v=vs.85).aspx. I do not know how many of these settings are still honored in 2016. I do know that any further changes that you make based on this guide involve trade-offs. For instance, you can disable the I/O balancer; that might speed up I/O for one VM that feels slow, but at the cost of allowing storage bottlenecks.

Test

After any performance change, test things out. You shouldn’t expect to see any earth-shattering improvements. You definitely don’t want things to become worse. If any issues occur, retrace your steps and undo changes until performance returns (if it returns). It’s not uncommon for performance tweaking to uncover failing hardware. For that reason, it’s best to carry out these changes soon after you spin up your new equipment.

Testing Jumbo Frames

Verify that jumbo frames work by pinging a target IP on the other side of the physical switch, using the following form:
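A representative command, with 192.168.1.10 standing in for your target IP; -f sets the don’t-fragment bit and -l sets the payload size:

ping 192.168.1.10 -f -l 8000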

If pings drop (but normal pings go through) or you receive a message that says, “Packet needs to be fragmented but DF set.”, then something along the way does not support jumbo frames.

The “8000” number doesn’t need to be exact, but it must be large enough to ensure that you are sending a jumbo Ethernet frame (anything from about 6000 bytes up). Due to variances in the way that “jumbo” can be calculated, the displayed “9014” will almost never work; Windows usually can’t send an unfragmented ping payload larger than 8972 bytes (a 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header).

Verify Settings After Driver Updates

Some driver updates return settings to their defaults. You might lose jumbo frames and power management settings, for instance. It’s tempting to automate re-applying driver settings, but network setting changes interrupt network transmission. You’re better off verifying manually after each driver update.
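If you want a quick way to spot-check after an update, these read-only commands report the relevant settings without changing anything:

# Show the current jumbo frame value on every adapter that exposes the standard keyword.
Get-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket" | Format-Table Name, DisplayValue
# Show whether Windows is allowed to power each adapter down.
Get-NetAdapterPowerManagement | Format-Table Name, AllowComputerToTurnOffDevice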

 

Hyper-V Hot Topics – January

Join Andrew Mason from Microsoft (Principal Program Manager on the Nano Server team at Microsoft), and MVP Andy Syrewicze in an AMA webinar on March 16th to discuss Nano Server. Register for the webinar and get answers directly from Andrew!

Hello again everyone! It’s a new year, and January is past us, so that means it’s time for another edition of the Altaro Hyper-V Hot Topics Series!

For those that aren’t aware, this series focuses on interesting links and useful how-tos from the Hyper-V world over the previous month. So, with that said, let’s dig right into all the cool Hyper-V stuff from January!

1. Bulk Changing Virtual Hard Disk Path

Author: Ben Armstrong

I came across this one during one of my nightly reviews of my Twitter feed, and it instantly caught my eye, mainly because I’ve been in the same situation mentioned in the article and I know many administrators who have been there as well. The use case Ben talks about is file movement: say you’ve just XCOPYed a bunch of virtual hard disks from one location to another; how do you quickly fix the paths for all the affected VMs? If this describes you, you’ll want to take a look. At the very least, add it to your bookmarks for a rainy day!

2. Nano Server PowerShell Package Management

Author: Thomas Maurer

One of the most common questions I get about Nano Server is: “How do I manage the installed software packages?” Nano Server can seem foreign to administrators that aren’t familiar with the CLI, especially if you have very little PowerShell experience. Thomas Maurer has put together a nice little how-to on the basics of managing Nano Server packages with PowerShell. If you’re looking for a quick reference on the different functions and features, this is the link for you!

3. Hyper-V vs. KVM for OpenStack Performance

Author: Ben Armstrong

Another one from Ben Armstrong of the Hyper-V team, and it’s a goody, especially if you like OpenStack. For those that aren’t aware, OpenStack is an open source platform designed for hosting cloud infrastructure, and it essentially allows you to bring your own hypervisor, including Hyper-V. Ben has included several links in his post that catalog a full performance test between Hyper-V and KVM when used for OpenStack. If you’re looking into OpenStack and you’re on the fence about which hypervisor to choose, it’s a good performance article to review.

4. System Center VMM 2016 Feature Demos on Channel 9

Author: Microsoft Server and Management Blog

Not everyone is running System Center VMM, but it’s still good to know what it is and what its capabilities are in case you ever need to add them to your environment. Whether that describes you or you’re already running SCVMM, these are good videos to watch over lunch for the next couple of days. This link contains a list of feature demos for the newest version of SCVMM, and all of them are worth a look.

5. What’s New in Windows Server 2016 Networking

Author: OEMTV

With all that’s changed in networking in Windows Server 2016, I’ve needed a refresher several times since 2016’s release back in October. One of my good friends, Keith Mayer, was featured on this episode of OEMTV on Microsoft’s Channel 9 website, and it’s a good way to get up to speed again on all the new networking features in 2016. I’ve embedded the video below for ease of viewing.

6. Deploying Nano Server Using MDT

Author: Michael Niehaus

It’s no secret that deploying Nano Server can be somewhat difficult. For one, it’s a vastly different deployment method than what was used in previous versions and editions of Windows Server, and it involves quite a bit of CLI kung-fu, which can be difficult for new administrators who aren’t yet used to PowerShell. This article shows how you can use MDT to take a little of the work out of your Nano Server deployment and make things a bit easier overall.

Wrap-Up

Well, that wraps things up for us this month! As always, if you know of a cool link or how-to that you feel should be in this list, feel free to share it in the comments section below!

With that said, we’ll be back next month for another edition of Hyper-V Hot Topics, so stay tuned!

Client Hyper-V in Windows 10

Windows 8 introduced the first incarnation of Hyper-V for the desktop. It’s the same core hypervisor that ships with the server SKUs, minus a few features. Like its big brother on the server side, the Windows 10 edition brings several new features to the desktop.

What is Client Hyper-V?

Client Hyper-V is an edition of Hyper-V geared toward desktop environments; you could think of it as “Hyper-V Lite”. It shares most of Hyper-V’s features and even brings some of its own. Highlights:

  • Type 1 hypervisor: A type 1 hypervisor is a complete kernel and performs direct hardware control. This means that when you enable Hyper-V in Windows 10, the physical hardware boots to Client Hyper-V, not your root Windows 10 installation. Client Hyper-V then starts up your pre-existing root Windows 10 environment as the management operating system. A management operating system is known as the “root partition” or “partition 0” or the “parent partition” in other hypervisors’ terminologies. It is a virtual machine, but it also has the special ability to exert control over the hypervisor. Contrast this with type 2 hypervisors, which are applications that run inside a normal host operating system and do not have direct access to hardware nor any control over the management operating system. Almost all other desktop-oriented hypervisors are type 2.
  • Guest interaction: Interoperability with guest operating systems is a crucial component for a desktop-oriented hypervisor. Client Hyper-V offers:
    • Sharing and mapping of most host hardware
    • Copy/paste of files from host-to-guest and vice versa (supported guests only)
    • Copy/paste of clipboard content from host-to-guest and vice versa (supported guests only)
  • RemoteFX: Windows 10 brings support for some RemoteFX features into Client Hyper-V. Most importantly, the full functionality of your graphics adapter will be made available to guests for both 2D and 3D acceleration.
  • Connected Standby: If your management operating system goes to sleep, your guests will be OK. When you resume, they will be exactly where they left off.
  • Linux guests: Client Hyper-V directly supports the same guest operating systems that Hyper-V does. This does not necessarily exclude other Linux distributions, but your mileage may vary.
  • Fully virtualized environment: That phrase could be taken to mean a great many things, but what I mostly intend to convey is that whether or not a specific operating system is directly supported as a guest does not indicate whether or not it will function. Hyper-V’s virtual machines are complete x86/AMD64 environments. If an operating system would otherwise run in that environment (most importantly, on your physical CPU’s architecture), then it will almost certainly operate under Hyper-V. Without direct support, however, it may run poorly.
  • Secure environment: Client Hyper-V provides the same security offerings as Hyper-V:
    • Secure boot: If Client Hyper-V doesn’t recognize the boot signature of the guest operating system, it won’t start it. This provides solid protection against rootkits.
    • Shielded VMs: The topic of Shielded VMs is very large and won’t be covered in detail in this article. Microsoft’s Windows Server blog has decent starter material on the subject. Essentially, if you’re worried about someone copying your virtual machine’s files to their own local machine, you have options.
  • Storage Live Migration: You can move a virtual machine from one physical storage location to another without taking it offline.
  • Run VMs from remote storage: Your virtual machines can be stored locally, which is the most typical configuration. You can also run virtual machines from SMB and iSCSI storage.
  • Full-screen support: You can run Client Hyper-V guests within a window, allow them to consume an entire screen, or have them consume all screens on a multi-monitor system. Unfortunately, there is no native way to use only a subset of screens in a multi-monitor setup.
  • Nested virtualization: Need to test detailed environments on Client Hyper-V? No problem! As long as you’ve got sufficient hardware, you can run Hyper-V and Client Hyper-V within Hyper-V. The software does not impose any limitations on depth.
  • Containers: Hyper-V Containers are also available with Client Hyper-V.
  • Network Address Translation in the virtual switch: One place that Microsoft’s desktop hypervisor has consistently lagged behind the competition is its guest networking capabilities. One thing that it has sorely lacked is the ability to perform NAT operations for guests, which meant that you had to have an available IP address on the existing network for each Client Hyper-V guest. Client Hyper-V in Windows 10 provides network address translation (NAT) services for its guests. This especially comes in handy when you’ve got a wireless adapter that just won’t work with the virtual switch.

What Features does Client Hyper-V Lack in Comparison to Hyper-V?

Most of the features that are “missing” in Client Hyper-V are not especially useful in a desktop-oriented hypervisor environment anyway.

  • Live Migration and Shared Nothing Live Migration: Windows 10 can’t be clustered, so it’s only natural that Live Migration wouldn’t be supported. Shared Nothing Live Migration would have its uses, but it’s not available.
  • Hyper-V Replica: Windows 10 can’t participate as a member in Hyper-V Replica. This particular feature is intended for server-side disaster recovery, so it makes sense that it’s not available in Client Hyper-V.
  • Advanced networking functionality: The only advanced networking available for Client Hyper-V guests is NAT. There is no teaming in the host, neither the standard LBFO configuration nor switch-embedded teaming (SET).
  • Advanced hardware functionality: Virtual Fibre Channel and some SR-IOV features are not available. The hardware that these features apply to is almost never found in desktop-grade equipment.

Licensing and Client Hyper-V

We’ve produced extensive work around licensing and Hyper-V with articles, eBooks, and webinars. None of them have meaningfully touched on Client Hyper-V. Simply put, a Windows 10 license provides for exactly one instance, period. It does not contain any guest instance rights whatsoever. If you want to run a guest instance of Windows 10, then you must purchase another license to cover that instance. If you wish to run any Windows Server guests on Windows 10, you must license the hardware to cover those instances in accordance with the new per-core rules. Linux distributions will follow their distributors’ rules.

Uses for Client Hyper-V

The benefits of server virtualization are quite clear. They generally center around the fact that server hardware is typically underutilized. That’s usually not the case for client hardware. Desktop and laptop computers don’t usually have as many resources as server computers before you consider cutting them up. CPU is usually the only resource with significant capacity to spare, and even that doesn’t have much availability for some users. So, why would you want to split up limited resources on your desktop system? Here are a few reasons:

  • Software development: Software developers have many reasons to use virtualization.
    • Sandbox environment: If you’re writing systems-level programs, it’s usually not a good idea to allow a bug to cripple the computer that you’re developing on. Checkpoints and kernel isolation alleviate this concern. I particularly like using virtual machines for developing Windows installer packages. Recent versions of Visual Studio rely on Client Hyper-V for testing mobile device applications.
    • Multi-OS targeting: Whatever version of Windows you’re running, almost no developer can guarantee that their users will have the same. Having virtual machines on your desktop allows you to quickly verify that your application runs the same on different operating systems and different bitnesses (32- vs. 64-bit).
  • Systems administration: Systems administrators have many uses for virtualization on the desktop, even though many suffer in silence without realizing that there are solutions to many of their daily headaches.
    • Proper security levels. You know that you are supposed to run in a lowered security environment so that your administrative account isn’t signed in all of the time. You also know that it’s much easier to say than it is to do, especially since even some of Microsoft’s tools don’t work appropriately with Run As. Using virtualization on your desktop allows you to be signed in with multiple accounts simultaneously without the headaches of Run As.
    • User access testing. Another oft-overlooked usage is testing privilege levels for non-administrative accounts. For instance, you can create a test account with the same membership as one of your domain users to test that account’s ability to connect to certain resources. Run As can only take you so far. Logging into a virtual instance with alternative credentials without interrupting anything else that you’re doing is an invaluable capability.
    • Application testing. Software developers may test their software to some degree, but you need to know how it’s going to interact in your environment before pushing it out to your users.
  • Security operations: A virtual machine provides a great many opportunities for information security workers:
    • Sandbox environment: If you’re not certain if something is malicious software, build an environment that it’s free to destroy. Virtual machines present a wonderful walled garden. You can place suspect material inside a VHDX, mark it read-only, then attach it to your checkpointed test virtual machine. If it turns out to be malicious, you can revert the checkpoint or just delete the virtual machine outright.
    • Penetration testing: Build a duplicate of your production environment inside Client Hyper-V instances and hack away. Obviously, there are cons as well as pros to testing against a duplicate of your production environment instead of the actual environment, but this is a good place to start.
    • Forensics labs: Most computer forensic tasks need to be performed on the impacted system(s), but sometimes you need a place to tear into a chunk of code or a data file. Virtual machines provide the perfect environment.
  • Down-level environments: Windows 7 Pro and Enterprise shipped with “Windows XP Mode”, a pre-built Windows XP environment that ran under the built-in Virtual PC type 2 hypervisor. We lost that free virtualized instance along with Virtual PC in Windows 8, but Client Hyper-V still provides the base capability to run down-level operating systems. Unfortunately, Windows XP isn’t on the supported list for Client Hyper-V in Windows 10, but it does work (slowly). Between the defunct Windows XP and the current Windows 10 are four versions of Microsoft’s desktop operating system. There are any number of reasons that you might need one of those environments, at least on a part-time or temporary basis. Client Hyper-V might be exactly what you need.
  • Demonstrations: If you need to demonstrate software or software environments and your simple laptop instance isn’t adequate, you can build very complex structures within Client Hyper-V for use on the road.

Client Hyper-V Requirements

With Windows 10, Client Hyper-V and the server-based Hyper-V have the same hardware requirements. Client Hyper-V is not available in every Windows 10 SKU.

  • Windows 10 Professional, Enterprise, or Education editions. Windows 10 Home edition does not contain Hyper-V, nor do any of the various mobile SKUs.
  • Hardware-assisted virtualization. Intel calls it “VT-x”; AMD calls it “AMD-V”. Most BIOSs have a simple option to enable virtualization features. This technology has been commonplace for long enough that most functional systems will support it.
  • Data execution prevention. An old malware technique involves placing malicious code into a data segment and then directing the CPU to execute it. Data execution prevention forces the system to rigidly respect data segments as not being executable. Intel calls theirs “XD” and AMD calls theirs “NX”. Microsoft unifies them as “DEP”. BIOSs will have various labels that are generally easy to identify. This technology has also been around for enough years to be ubiquitous. It’s also typically enabled by default, so you can almost always simply expect it to be present.
  • 4GB of memory. I’m not certain if there is a hard check in place for this condition, but your experience would likely be fairly miserable if you’ve got less.
  • VM Monitor Mode extensions. Intel names theirs “VT-c”. I don’t believe that AMD has any particular name for it. This is a new requirement over Client Hyper-V in Windows 8.x. Even though the name is somewhat foreign to many people, you usually won’t have difficulty providing it. It’s not quite as common as DEP and hardware-assisted virtualization, though. If Client Hyper-V won’t run on your system, this might be why.
  • Second-level Address Translation. Second-level Address Translation (SLAT) has been commonplace on CPUs for several generations, and it has always been a requirement for Client Hyper-V. It is an always-on native feature of the CPU; there is nothing to enable or disable. Check your CPU’s specification sheet to determine whether it has SLAT support.
  • (For nested virtualization) Intel VT-x and EPT technology. I don’t know the technical (or perhaps political) details, but AMD users are not welcome in Hyper-V’s nested virtualization world. You need an Intel chip with these technologies available and enabled.

You can quickly and easily verify whether you can run Hyper-V on your current system by opening an elevated command or PowerShell prompt and running systeminfo. Look toward the end of the output for the following section:
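The exact wording varies a little between Windows builds, but the section reads roughly like this when the system can run Hyper-V (if Hyper-V is already enabled, systeminfo instead reports that a hypervisor has been detected):

Hyper-V Requirements:      VM Monitor Mode Extensions: Yes
                           Virtualization Enabled In Firmware: Yes
                           Second Level Address Translation: Yes
                           Data Execution Prevention Available: Yes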

System Info Hyper-V Check

Enabling Client Hyper-V

Client Hyper-V ships as a Windows 10 component, so you don’t “install” it, per se. You enable it.

  1. Right-click on the Start button (or press Win+[X]) and select Programs and Features.
    WinX Menu

  2. In the Programs and Features window, click Turn Windows features on or off. If UAC is on, you’ll need to confirm switching to administrative mode.
    Programs and Features

  3. Choose the Hyper-V options that best suit your intent. The only required item is Hyper-V Hypervisor, but it will be difficult to do much with Hyper-V if you don’t enable the other components as well. This article isn’t going to discuss Containers, but those are enabled here as well, as shown in the screenshot.
    Client Hyper-V Features

  4. After clicking OK, you’ll need to restart the computer.

Fewer steps are required to enable Client Hyper-V in PowerShell. In an elevated prompt:
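The commonly used one-liner looks like this; you’ll still need the reboot afterward:

# Enable Hyper-V and its related components, then restart when prompted.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All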

If you’re looking to install a subset of the components from within PowerShell, our earlier article has greater detail.

Using Client Hyper-V

I want to reiterate that, as a type 1 hypervisor, Client Hyper-V is always running. You do not need to start the hypervisor and there is no service that you can go look at or start or stop. There is the Hyper-V Virtual Machine Management service (VMMS.exe), but, as its name explicitly states, it is a management service. Without it, you as the administrator cannot interact with Client Hyper-V, but it is still there and doing its job.

Management Tools

There are three built-in tools that you’ll use to interact with Client Hyper-V:

  • PowerShell. PowerShell is the most thorough way to manage Hyper-V. You can start it from an elevated command prompt by typing PowerShell and pressing [Enter]. I tend to dig it out of the Start menu and pin it to the taskbar.
  • Hyper-V Manager. Hyper-V Manager is not as robust as PowerShell for management, but it is adequate. It is also helpful for connecting to virtual machines’ consoles. Hyper-V Manager can be found in Administrative Tools. I tend to pin this to the Start menu. You must Run As Administrator if you are logged in with a non-administrative account.
  • Virtual Machine Connection. This tool can be invoked within Hyper-V Manager by right-clicking on a virtual machine and clicking Connect. You can also enter vmconnect into any elevated prompt (including the Cortana local search). You will need to Run As Administrator. Once it opens, you can pick the virtual machine that you want to connect to from the drop-down.

It’s worth reiterating that you must always interact with Hyper-V using elevated prompts or graphical applications opened with Run As Administrator. VMConnect is kind enough to tell you when you don’t have sufficient permissions, but most other tools will be silent. For instance, running Get-VM as a non-administrator will simply return nothing, even when virtual machines are present and operational. Hyper-V Manager’s virtual machine list will also be empty.

Configuring Client Hyper-V

I’m not going to take you through every option available for Client Hyper-V, but I’m going to touch on the biggest points. Client Hyper-V works perfectly well immediately after you enable it and reboot, so it’s really just a matter of putting a few final touches on it.

To get started, open up Hyper-V Manager. In Windows versions past, you could simply open the Start menu and start typing “Hyper-V Manager”, and after a few keystrokes, the built-in search tool would find it. This still works for most people, but many have encountered a bug where Cortana struggles to find things on the local computer. The bug went away for me after a few updates, but none of the suggested direct fixes that I found worked. If you find the “Windows Administrative Tools” group in the Start menu, Hyper-V Manager is there. I suggest pinning it to the Start menu or something similar so that you can easily reach it later.

Once you have Hyper-V Manager open, it should already have your local computer selected. Right-click on it and click Hyper-V Settings. If that option isn’t available, you didn’t Run as administrator.

A screenshot of the window that you’ll see is below. I’ve changed it to the Physical GPU tab so that you can see that RemoteFX is functioning and has automatically selected the video adapter. Sift through the other tabs and check that items are set as you expect. Feel free to change things to suit your needs. I would recommend keeping Enhanced Session Mode enabled everywhere you find it, or you’ll lose some host/guest interaction capabilities with your virtual machines. If you’re not certain about a setting, the best thing to do is leave it alone.

Basic Configuration

Once you’re finished there, right-click on the local computer again and click Virtual Switch Manager. Make certain that, on the right side, the selected type is External and click Create Virtual Switch.

Starting Virtual Switch Manager

Set the following options on the new virtual switch page:

  • Name the virtual switch. There is no need to get fancy with this. Use something that you can remember and that you can type. Future you will thank you.
  • Next to the External network dot, choose the physical network adapter that will host the virtual switch. You may need to look in your network adapter list if it’s not obvious which is which.
  • Unless you have another physical network adapter for the management operating system to use, leave Allow management operating system to share this network adapter checked. If you uncheck it, the physical adapter that you choose will still be able to connect virtual machines to the physical network, but you’ll have to manually create a virtual NIC for the management operating system (that’s the Windows 10 installation that’s running Client Hyper-V) to continue using the physical network (or come back in here and check the box).
  • If your management operating system currently participates in a VLAN, check the Enable virtual LAN identification for the management operating system and enter the necessary VLAN ID in the text box. If you’re at home using regular home networking equipment, you’ll definitely want to skip this box. If you’re connected to commercial-grade equipment at work and don’t already know what to do, it’s highly likely that you will skip this configuration. Otherwise, talk to your network administrator.
Virtual Switch Manager

When you’re done setting up the virtual switch and OK out, you might lose connectivity for a few moments while the networking settles down. Windows needs to recreate your networking stack on a new virtual adapter, so expect that to take a bit of time.

If you connected the virtual switch to a wireless network adapter, you might face some difficulties. You wouldn’t be the first. I don’t personally have much expertise in addressing these problems, and your odds of finding a suitable resolution are not great. Usually, it either works from the beginning or it never works at all. You can try updating drivers. If you just can’t get it to work, never fear! You can follow the steps in the next section to create a NAT network so that you don’t need a virtual switch on top of the physical adapter.

Configuring NAT Networking in Client Hyper-V

This process has changed several times since the feature was introduced in a technical preview of Client Hyper-V and a lot of the currently available instructions are wrong. Even Get-Help for New-VMSwitch shows options that don’t work. The following instructions were tested and known good for Client Hyper-V on Windows 10 build 14393.447 (run “winver” from the Start menu).

As with many things in Windows, there is more than one way to make this all happen. There is only a single step that absolutely requires PowerShell, which means that I’m going to counsel you to do the whole thing in PowerShell. You can do all the other steps in the GUI if you prefer. So, I’ll start with a basic outline of the steps, then I’ll show you the PowerShell that can make it happen:

  1. Determine what network range you want your NAT network to be. You only get a single NAT network on a Client Hyper-V system.
  2. Create an internal virtual switch.
  3. On the management operating system’s adapter for the internal virtual switch from step 2, assign an IP that will function as the router IP on that network that you thought up in step 1.
  4. Create the NAT network (PowerShell only).
  5. Attach virtual machine(s) to the virtual switch that you created in step 2 on the network that you created in step 1 using the IP that you assigned in step 3 as the router.
  6. Assign IPs from within the guest operating systems.

Be mindful of step 6. Client Hyper-V does not contain a DHCP server and will not distribute addresses for that NAT network. In case you were about to ask, no, Client Hyper-V does not support DHCP relay, either. You must either create a virtual machine running DHCP in that network or you must manually assign IPs. I’d go with the latter.

Here are steps 2-4 in PowerShell for the network 192.168.100.0/24 (192.168.100.1 through 192.168.100.254):
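A minimal version of those three lines; the name given to the NAT object itself (“NATNetwork” here) is arbitrary:

# Step 2: create the internal virtual switch.
New-VMSwitch -Name vSwitchNAT -SwitchType Internal
# Step 3: assign the "router" IP to the management OS's new virtual adapter.
New-NetIPAddress -InterfaceAlias "vEthernet (vSwitchNAT)" -IPAddress 192.168.100.1 -PrefixLength 24
# Step 4: create the NAT network itself.
New-NetNat -Name NATNetwork -InternalIPInterfaceAddressPrefix 192.168.100.0/24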

The first line creates an internal virtual switch named “vSwitchNAT” (this could be done in Hyper-V Manager, as you saw above). An internal switch allows the host operating system and guest operating systems to use the same virtual switch, which, by definition, means that a virtual adapter will be automatically created for the management operating system. When such a virtual adapter is automatically created, its name always takes the format vEthernet (name of the virtual switch), so I know that this one will be named “vEthernet (vSwitchNAT)”. If you name your virtual switch differently, use that name on your second line. I have also decided to pick the first valid IP in the network that I created, hence the 192.168.100.1 (this could be done in Network Connections). Note that I do not give it any routing information, such as a default gateway. The third line creates the NAT network. It will see the adapter with IP 192.168.100.1 and automatically treat it as the router for that network.

Now, I just need to connect virtual machines to it. On the properties for a virtual machine, I do exactly that:

Select NAT Switch

I finish up by accessing the guest’s networking and giving it an IP on that network and using the host management adapter as the default gateway:
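The screenshot below shows the GUI side of that. In a Windows guest, the equivalent PowerShell would be roughly as follows; the interface alias and DNS server are only examples:

# Static IP on the NAT network, with the host's NAT address as the gateway.
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.100.10 -PrefixLength 24 -DefaultGateway 192.168.100.1
# NAT does not proxy DNS, so point the guest at a reachable DNS server.
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 8.8.8.8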

NAT Client IPv4

NAT in Windows 10 has more capability than what I’ve shown you, but this should be enough for most. Start on the NAT PowerShell page for more information.

Continuing On…

There are a great many other things that I could show you, much more than would fit in a simple blog post. If this is your first time using Client Hyper-V, or any Hyper-V, spend some time kicking the tires. Begin by right-clicking your host in Hyper-V Manager and clicking New, then Virtual Machine. The wizard should be easy enough to follow. Once your new VM is created, right-click on it and click Connect. That opens VMConnect, where you’ll find the start and stop controls. You’re now on your way to being a Client Hyper-V guru!
