Networking in Hyper-V commonly confuses newcomers, even those with experience in other hypervisors. The Hyper-V virtual switch is one of the product's steeper conceptual hurdles, but it is quite simple once you invest the time to learn about it. Digesting this article will give you the knowledge necessary to properly plan a Hyper-V virtual switch and understand how it will operate in production. I will not spend any time on network configuration for System Center Virtual Machine Manager, but because that product needlessly overcomplicates the situation with multiple pointless layers, the solid grounding on the Hyper-V virtual switch that you can get from this article is critical if you don't want to be hopelessly lost in VMM. If you already know all about the Hyper-V virtual switch but want a guide on how to create it, we have that here.

For an overall guide to Hyper-V networking, read my post entitled "The Complete Guide to Hyper-V Networking".

What is the Hyper-V Virtual Switch?

The very first thing that you must understand is that Hyper-V's virtual switch is truly a virtual switch. That is to say, it is a software construct, operating within the active memory of a Hyper-V host, that performs Ethernet frame switching. It can use single or teamed physical network adapters as uplinks to a physical switch in order to communicate with other computers on the physical network. Hyper-V provides virtual network adapters to its virtual machines, and those communicate directly with the virtual switch.

What are Virtual Network Adapters?

Like the Hyper-V virtual switch, virtual network adapters are mostly self-explanatory. In more detail, they are software constructs that are responsible for receiving and transmitting Ethernet frames into and out of their assigned virtual machine or the management operating system. This article focuses on the virtual switch, so I will only be giving the virtual adapters enough attention to ensure understanding of the switch.

Virtual Machine Network Adapters

The most common virtual network adapters belong to virtual machines. They can be seen both in PowerShell (Get-VMNetworkAdapter) and in Hyper-V Manager's GUI. The screenshot below is an example:

Example Virtual Adapter

I have drawn a red box on the left where the adapter appears in the hardware list. On the right, I have drawn another to show the virtual switch that this particular adapter is connected to. You can change it at any time to any other virtual switch on the host or “Not Connected”, which is the virtual equivalent of not plugging the adapter into anything. There is no virtual equivalent of a “crossover” cable, so you cannot directly connect one virtual adapter to another.
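
If you prefer PowerShell, the same operations look like this. This is a minimal sketch; 'SVTEST' and 'vSwitch2' are placeholder names:

    # List the virtual adapters attached to a virtual machine
    Get-VMNetworkAdapter -VMName 'SVTEST'

    # Move an adapter to a different virtual switch at any time
    Connect-VMNetworkAdapter -VMName 'SVTEST' -SwitchName 'vSwitch2'

    # The virtual equivalent of unplugging the cable ("Not Connected")
    Disconnect-VMNetworkAdapter -VMName 'SVTEST'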

Within a guest, they appear in all the same places as a physical adapter.

Virtual Adapter from Within a Guest

Management Operating System Virtual Adapters

Virtual adapters can also be created for use by the management operating system. They can be seen in PowerShell and in the same locations that you’d find a physical adapter. By default, they will be named vEthernet (<name>).

Virtual Adapters in the Management Operating System

In contrast to virtual adapters for virtual machines, your options for managing virtual adapters in the management operating system are a bit limited. If you only have one, you can use Hyper-V Manager’s virtual network manager to set the VLAN. If you have multiple, as I do, you can’t even do that in the GUI:

Virtual Switch Manager with Multiple Host Virtual Network Adapters

PowerShell is the only option in this case. PowerShell is also the only way to modify a number of management OS virtual adapter settings that Hyper-V Manager can’t even see. All of that, however, is a topic for another article.
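
As a brief illustration, assuming a management OS adapter named 'Cluster' (a placeholder), the VLAN assignment that the GUI refuses to handle looks like this:

    # List all virtual adapters in the management operating system
    Get-VMNetworkAdapter -ManagementOS

    # Assign a VLAN to one specific management OS adapter
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Cluster' -Access -VlanId 10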


Modes for the Hyper-V Virtual Switch

The Hyper-V virtual switch presents three different operational modes, although, in truth, there are only two.

Private Virtual Switch

A Hyper-V virtual switch in private mode allows communications only between virtual adapters connected to virtual machines.

Internal Virtual Switch

A Hyper-V virtual switch in internal mode allows communications only between virtual adapters connected to virtual machines and the management operating system.

External Virtual Switch

A Hyper-V virtual switch in external mode allows communications between virtual adapters connected to virtual machines and the management operating system. It uses single or teamed physical adapters to connect to a physical switch, thereby allowing communications with other systems.

Deeper Explanation of the Hyper-V Switch Modes

The private and internal switch types differ only by the absence or presence, respectively, of a virtual adapter for the management operating system. In fact, you can turn an internal switch into a private switch just by removing any virtual adapters for the management operating system, and vice versa:

Convert Internal Virtual Switch to Private
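
In PowerShell, that conversion amounts to a couple of one-liners. The switch name 'Isolated' is a placeholder:

    # Internal -> private: remove the management OS adapter(s) attached to the switch
    Get-VMNetworkAdapter -ManagementOS | Where-Object SwitchName -eq 'Isolated' | Remove-VMNetworkAdapter

    # Private -> internal: give the management operating system an adapter on the switch
    Add-VMNetworkAdapter -ManagementOS -SwitchName 'Isolated'

    # Set-VMSwitch can also flip the switch's reported type directly
    Set-VMSwitch -Name 'Isolated' -SwitchType Private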

With both the internal and private virtual switches, adapters can only communicate with other adapters on the same switch. If you need them to be able to talk to adapters on other switches, one of the operating systems will need adapters on both switches and must be configured as a router.

The external virtual switch relies on one or more physical adapters. These adapters act as an uplink to the rest of your physical network. Like the internal and private switches, virtual adapters on an external switch cannot directly communicate with adapters on any other virtual switch.

Important Note: the terms Private and External for the Hyper-V switch are commonly confused with private and public IP addressing. They have nothing in common.

Conceptualizing the External Virtual Switch

Part of what makes understanding the external virtual switch difficult is the way that the related settings are worded. In the Hyper-V Manager GUI, the option is worded as Allow management operating system to share this network adapter. In the PowerShell New-VMSwitch cmdlet, there's a boolean -AllowManagementOS parameter, which is no better; its description, "Specifies whether the parent partition (i.e. the management operating system) is to have access to the physical NIC bound to the virtual switch to be created.", makes it worse. What happens far too often is that people read these and think of them like this:

Incorrect Visualization of the Hyper-V Virtual Switch

The number one most important thing to understand is that a physical adapter or team used by a Hyper-V virtual switch is not, and cannot be, used for anything else. The adapter is not “shared” with anything. You cannot configure TCP/IP information on it. After the Hyper-V virtual switch is bound to an adapter or team (it will appear as Hyper-V Extensible Virtual Switch), tinkering with any other clients, protocols, or services on that adapter will at best have no effect and at worst break your virtual switch.

Instead of the above, what really happens when you “share” the adapter is this:

Correct Visualization of the External Hyper-V Virtual Switch

The "sharing" happens by creating a virtual adapter for the management operating system and attaching it to the same virtual switch that the virtual machines use. You can add or remove this adapter at any time without impacting the virtual switch at all. I see many, many people checking that Allow management operating system to share this network adapter box because they believe it's the only way to get the virtual switch to carry traffic for the virtual machines. If that's you, don't feel bad; I did the very same thing on my first 2008 R2 deployment, and it's OK to blame the crummy wording in the tools, because that's exactly what threw me off. The only reason to check the box is if you need the management operating system to be able to communicate through that particular physical adapter or team. If you're going to use a dedicated physical adapter or team just for management traffic, don't use the "share" option. If you're going to use network convergence, that's when you want to "share" it.

If you’re not certain what to do in the beginning, it doesn’t really matter. You can always add or remove virtual adapters after the switch is created. Personally, I never create an adapter on the virtual switch by using the Allow… checkbox or the -AllowManagementOS parameter. I always use PowerShell to create any necessary virtual adapters later. I have my own reasons for doing so, but it also helps with any conceptual issues.
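
For illustration, this is roughly what that approach looks like; the switch, adapter, and vNIC names are placeholders:

    # Create the external switch without a management OS virtual adapter
    New-VMSwitch -Name 'vSwitch' -NetAdapterName 'PhysNIC1' -AllowManagementOS $false

    # Later, add a management OS adapter only if the host itself needs this network
    Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName 'vSwitch'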

Once you understand the above, you can easily see that the differences even between the external and internal switch types are not very large. Look at all three types visualized side-by-side:

Visualization of All Switch Modes

I don’t recommend it, but it is possible to convert any Hyper-V virtual switch to/from the external type by adding or removing the physical adapter/team.
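
If you do need to convert, a sketch with Set-VMSwitch (names are placeholders):

    # Convert an internal or private switch to external by binding a physical adapter or team
    Set-VMSwitch -Name 'vSwitch' -NetAdapterName 'PhysNIC1'

    # Convert an external switch back by releasing the physical adapter
    Set-VMSwitch -Name 'vSwitch' -SwitchType Internal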

What are the Features of the Hyper-V Virtual Switch?

The Hyper-V virtual switch exposes several features natively.

  • Ethernet Frame Switching
    The Hyper-V virtual switch is able to read the destination MAC address in an Ethernet frame and deliver the frame to the correct destination if it is present on the virtual switch. It is aware of the MAC addresses of all virtual network adapters attached to it. An external virtual switch also knows about the MAC addresses on any layer-2 networks that it has visibility to via its assigned physical adapter or team. The Hyper-V virtual switch does not have any native routing (layer 3) capability, and it will not perform NAT operations (in 2012/R2). You will need to provide a hardware or software router if that type of functionality is necessary.
  • 802.1q VLAN, Access Mode
    Virtual adapters for both the management operating system and virtual machines can be assigned to a VLAN. The switch will only deliver Ethernet frames between virtual adapters in the same VLAN, just like a physical switch. If trunking is properly configured on the connected physical switch port, VLAN traffic will extend to the physical network as expected (a PowerShell example follows this list).
  • 802.1q VLAN, Trunk Mode
    The very first point to make about this feature is: you do not need to configure trunk mode on the Hyper-V virtual switch to communicate with a physical network! Configure trunks on the physical switch's ports only; Hyper-V automatically makes its assigned physical adapter/team into an uplink.
    Second, Hyper-V's trunk mode for virtual adapters has very specific purposes that only a tiny percentage of administrators will ever need. If you're not sure what you'd do with it, you don't need it. Software in the guest operating system using a virtual network adapter configured in trunk mode must be able to process 802.1q VLAN tags; even Microsoft's Routing and Remote Access Service does not have this ability. Most people that I see asking questions about Hyper-V's trunk mode really only want access mode and just aren't aware of it.
  • 802.1p Quality of Service
    802.1p uses a special part of the Ethernet frame to mark traffic as belonging to a particular priority group. All switches along the line that can speak 802.1p will then prioritize it appropriately.
  • Hyper-V Quality of Service
    Hyper-V has its own quality of service for its virtual switch, but, unlike 802.1p, it does not extend to the physical network. You can guarantee a minimum and/or cap the outbound speed of each virtual adapter. In Absolute mode, minimums are expressed in bits per second; in Weight mode, minimums are expressed as relative weights (maximums remain absolute). The mode must be selected when the virtual switch is created and cannot be changed afterward.
  • SR-IOV (Single Root I/O Virtualization)
    SR-IOV requires compatible hardware, both on your motherboard and physical network adapter(s). When enabled, you will have the option to connect a limited number of virtual adapters directly to Virtual Functions — special constructs exposed by your physical network adapters. The Hyper-V virtual switch has only very minimal participation in any IOV functions, meaning that you will have access to very nearly the full speed of the hardware. This performance boost does come at a cost, however: SR-IOV network adapters cannot function if the virtual switch is assigned to an adapter team. I have heard that this might change somewhat in the 2016 release, but even so, be mindful that a single Virtual Function cannot exist on two separate adapters simultaneously, meaning that there might not be any way for this limitation to be truly lifted.
  • Extensibility
    Microsoft publishes an API that anyone can use to make their own filter drivers for the Hyper-V virtual switch. For instance, System Center Virtual Machine Manager provides a driver that enables Hyper-V Network Virtualization (HNV). Other possibilities include network scanning tools.
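
To ground the VLAN access mode and Hyper-V QoS items above, here is a minimal sketch. The VM name 'WEB01', the VLAN ID, and the weight are placeholders, and the weight setting only applies if the switch was created with -MinimumBandwidthMode Weight:

    # 802.1q access mode: confine a virtual machine's adapter to VLAN 42
    Set-VMNetworkAdapterVlan -VMName 'WEB01' -Access -VlanId 42

    # Hyper-V QoS, Weight mode: guarantee the adapter a relative share of outbound bandwidth
    Set-VMNetworkAdapter -VMName 'WEB01' -MinimumBandwidthWeight 20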

Why Would I Use an Internal or Private Virtual Switch?

There is exactly one reason to use an internal or private virtual switch: isolation. You can be absolutely certain that no traffic that moves on an internal or private switch will ever leave the host unless you explicitly configure routing software that operates on a virtual adapter on the same virtual switch. You can look at the first diagram on my software router post to get an idea of what I’m talking about.

Internal and private virtual switches do not provide a performance boost over the external virtual switch. This is because the virtual switch is smart enough not to use the physical network when delivering frames from one virtual adapter to the MAC address of another virtual adapter on the same virtual switch. However, this will not hold true if a router must be involved.

The following illustrates communications between two virtual adapters on the same external virtual switch within the same subnet:

Virtual Network Adapters on Same Subnet

If the virtual adapters are on different subnets, this is what happens:

Virtual Adapters on Different Subnets

What's happened in this second scenario is that the virtual adapters are using IP addresses that belong to different subnets. Because of the way that TCP/IP works (not the virtual switch!), packets between these two adapters must be transmitted through a router. Remember that the Hyper-V virtual switch is a layer 2 device that does not perform routing; it is not aware of IP addresses. If the way that Ethernet and IP work is new to you, or you need a refresher, I've got an article about it.

How Does Teaming Impact the Virtual Switch?

There are a great many ifs, ands, and buts involved when discussing the virtual switch and network adapter teams. The most important points:

  • Bandwidth aggregation does not occur the way that most people think it does.
    I see this sort of complaint a lot: "I teamed six 1GbE NICs for my Hyper-V team, then copied a file from a virtual machine to my file server, and it didn't go at 6Gbps, and now I'm really mad at Hyper-V!" There are three problems. First, file copy is not a network speed testing tool in any sense. Second, that individual probably doesn't have a hard disk subsystem at either end that can sustain 6Gbps anyway. Third, Ethernet and TCP/IP don't work that way, never mind the speed of the disks or the Hyper-V virtual switch. If you want a visualization of why teaming won't make a file copy go faster, this older post has a nice explanatory picture, and I have a more recent article with a more in-depth technical description. The TL;DR summary: adapter teaming for the Hyper-V virtual switch improves performance for all virtual adapters in aggregate, not at an individual level.
  • Almost everyone overestimates how much network performance they need. Seriously, file copy isn't just a bad testing tool; it also sets unrealistic expectations. Your average user doesn't sit around copying multi-gigabyte files all day long. They mostly move a few bits here and there and watch 1.2Mbps videos of cats when they think that no one is watching.
  • Using faster adapters gives better results than using bigger teams. If you really need performance (I can help you figure out if that's you), a few fast adapters will serve you better than a big team of slow ones. I am exasperated at how often 10GbE is oversold to institutions that can barely stress a 100Mbps network, but I am equally exasperated by people who try to get the equivalent of 10GbE out of ten 1GbE connections.
  • SR-IOV doesn't work with teams. I mentioned this above, but it's worth reiterating: a virtual switch bound to an adapter team cannot use SR-IOV.

What About the Hyper-V Virtual Switch and Clustering?

The simplest way to explain the relationship between the Hyper-V virtual switch and failover clustering is that the Hyper-V virtual switch is not a clustered role. The cluster is completely unaware of any virtual switches whatsoever. Hyper-V, of course, is very aware of them. When you attempt to migrate a virtual machine from one cluster node to another, Hyper-V performs a sort of "pre-flight" check. One of those checks looks for a virtual switch on the destination host with the same name as every virtual switch that the migrating virtual machine is connected to. If a virtual switch with the same name is not present, the virtual machine will not migrate. With versions 2012 and later, you have the option to use "resource pools" of virtual switches, in which case Hyper-V attempts to match the name of the resource pool instead of the switch, but the same name-matching rule applies.
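
You can verify ahead of time that the name-matching check will pass by comparing switch names across nodes. A quick sketch, assuming two nodes named 'HV1' and 'HV2' (placeholders):

    # List every virtual switch on each node; mismatched names will stand out
    Invoke-Command -ComputerName 'HV1','HV2' -ScriptBlock { Get-VMSwitch } |
        Select-Object PSComputerName, Name, SwitchType |
        Sort-Object Name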

Starting with the 2012 version, you cannot Live Migrate a clustered virtual machine if it is connected to an internal or a private virtual switch. The “pre-flight” check will fail. I have not tried it with a Shared Nothing Live Migration.

When a clustered virtual machine is Live Migrated, there is the potential for a minor service outage. The MAC address(es) of the virtual machine's virtual adapter(s) must be de-registered from the source virtual switch (and therefore its attached physical switch) and re-registered at the destination. If you are using dynamic MAC addresses, the MACs may be returned to the source host's pool and replaced with new MACs on the destination host, in which case a similar de-registration and registration sequence will occur. All of this happens easily within the standard TCP timeout window, so in-flight TCP communications should succeed, albeit with a brief and potentially detectable hiccup. UDP and all other traffic with non-error-correcting behavior (including ICMP and IGMP operations like PING) will be lost during this process. Hyper-V performs the de-registration and registration extremely quickly; the duration of the delay will depend upon the amount of time necessary to propagate the MAC changes throughout the network.

Should I Use Multiple Hyper-V Virtual Switches?

In a word: no. That’s not a rule, but a very powerful guideline. Multiple virtual switches can cause quite a bit of processing overhead and there’s rarely any benefit.

The exception would be if you really need to physically isolate network traffic. For instance, you might have a virtualized web server living in a DMZ and you don’t want any physical overlap between your DMZ and your internal networks. You’ll need to use multiple virtual switches to make that happen. Truthfully, VLANs should provide sufficient security and isolation.

What you should not do is create multiple virtual switches to separate roles. For instance, don’t make a virtual switch for management operating system traffic and another for virtual machine traffic. The processing overhead usually outweighs any possible benefits. Create a team of all the adapters and converge as much traffic on it as possible. If you detect an issue, implement QoS.
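
As an illustrative sketch of that converged design (the team, adapter names, and weight values are placeholders; New-NetLbfoTeam is the native teaming method of the 2012/R2 era this article covers):

    # Team the physical adapters
    New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC1','NIC2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # One virtual switch in Weight mode on top of the team
    New-VMSwitch -Name 'vSwitch' -NetAdapterName 'ConvergedTeam' -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Converge management traffic as a virtual adapter, with QoS applied if you detect contention
    Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName 'vSwitch'
    Set-VMNetworkAdapter -ManagementOS -Name 'Management' -MinimumBandwidthWeight 10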

What About VMQ on the Hyper-V Virtual Switch?

VMQ is a more advanced topic than I want to spend a lot of time on in this article, so we’re only going to touch on it briefly. What VMQ does is allow incoming data for a virtual adapter to be processed on a CPU core other than 0:0. When VMQ is not in effect, all inbound traffic is processed on the first core of the first CPU. Keep these points in mind:

  • If you are using gigabit adapters, VMQ is pointless. CPU core 0:0 can handle many 1GbE adapters before it has a problem. Disable VMQ if your 1GbE adapters support it; most of them do not implement VMQ properly anyway (see the example after this list).
  • Not all 10GbE adapters implement VMQ properly. If your virtual machines seem to be struggling to communicate, turning off VMQ is a good place to begin troubleshooting.
  • If you disable VMQ on your physical adapters and you have teamed them, make sure that you also disable VMQ on the logical team adapter.
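
A quick sketch for inspecting and disabling VMQ with the NetAdapter cmdlets; the adapter and team names are placeholders:

    # See which adapters advertise VMQ and whether it is enabled
    Get-NetAdapterVmq

    # Disable VMQ on the physical members and, if they are teamed, on the logical team adapter as well
    Disable-NetAdapterVmq -Name 'NIC1','NIC2'
    Disable-NetAdapterVmq -Name 'ConvergedTeam'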