The Really Simple Guide to Hyper-V Networking

If you’re just getting started with Hyper-V and struggling with the networking configuration, you are not alone. I (and others) have written a great deal of introductory material on the subject. But, we often throw too much into it. I’m going to try a different approach. Rather than a thorough deep-dive, I’m going to show you what you’re trying to accomplish. After that, I will provide links to the detailed information so that you can turn those goals into reality.

Getting Started with Hyper-V Networking

First things first. If you have a solid handle on layer 2 and layer 3 concepts, that’s helpful. If you have experience with networking Windows machines, that’s also helpful. If you come to Hyper-V from a different hypervisor, then that knowledge won’t transfer well. If you apply ESXi networking design patterns to Hyper-V, then you will create a jumbled mess that will never function correctly or perform adequately.

Goals for Hyper-V Networking

You have two very basic goals:

  1. Ensure that the management operating system can communicate on the network
  2. Ensure that virtual machines can communicate on the network

Hyper-V Networking Configuration

Any other goals that you bring to this endeavor are secondary, at best. If you have never done this before, don’t try to jump ahead to routing or anything else until you achieve these two basic goals.

Hyper-V Networking Rules

Understand what you must, can, and cannot do with Hyper-V networking:

  • You can connect the management operating system to a physical network directly using a physical network adapter or a team of physical network adapters.
  • You cannot dedicate any physical adapter or team to a virtual machine.
  • You can connect a virtual machine to the physical network, but you must use a Hyper-V virtual switch. There is no bypass or pass-through mode.
  • A Hyper-V virtual switch completely consumes a physical network adapter or team. It totally takes over that adapter or team; nothing else can use it.
  • You can connect the management operating system to a Hyper-V virtual switch. It is not required.
  • It is not possible for the management operating system and the virtual switch to use a physical adapter or team at the same time. The “share” terminology that you see in all of the tools is a lie.
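In the most common case, these rules collapse into a single cmdlet. The sketch below assumes Windows Server 2016 or later with the Hyper-V role installed; the adapter name "Ethernet" and the switch name "vSwitch" are placeholders you would replace with your own:

```powershell
# List physical adapters first to find the real name to use below.
Get-NetAdapter

# Create one external virtual switch on a physical adapter. With
# -AllowManagementOS $true, Hyper-V also creates a virtual NIC for the
# management OS on this switch, so both the host and the VMs can reach
# the physical network through it.
New-VMSwitch -Name "vSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```

Remember the "share" caveat from the list above: after this command, the physical adapter belongs entirely to the virtual switch, and the management OS talks through a new virtual adapter named "vEthernet (vSwitch)".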

What the Final Product Looks Like

It might help to have visualizations of correctly-configured Hyper-V virtual switches. I will only show images with a single physical adapter. The principles are identical if you use a team instead.

Networking for a Single Hyper-V Host, the Old Way

An old technique has survived from the pre-Hyper-V 2012 days. It uses a pair of physical adapters. One belongs to the management operating system. The other hosts a virtual switch that the virtual machines use. I don’t like this solution for a two-adapter host. It leaves both the host and the virtual machines with a single point of failure. However, it could be useful if you have more than two adapters and create a team for virtual machines to use. Either way, this design is perfectly viable whether I like it or not.
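A minimal sketch of the two-adapter layout described above, assuming hypothetical adapter names "NIC1" and "NIC2":

```powershell
# "NIC1" is left alone: it keeps its IP configuration and carries
# management OS traffic directly.
# "NIC2" is consumed entirely by the virtual switch for VM traffic.
# -AllowManagementOS $false stops Hyper-V from creating a management
# vNIC on this switch, keeping the two roles separate.
New-VMSwitch -Name "VMs" -NetAdapterName "NIC2" -AllowManagementOS $false
```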



Networking for a Single Hyper-V Host, the New Way

With teaming, you can join all of the physical adapters together. Let the team host a single virtual switch. Let the management operating system and all of the guests connect through that.
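On Windows Server 2016 and later, Switch Embedded Teaming (SET) lets one cmdlet do both jobs: team the adapters and host the switch. A hedged sketch, with placeholder adapter names:

```powershell
# Join two physical adapters into a Switch Embedded Team and create a
# single virtual switch on top of it. The management OS gets one vNIC
# on the switch; all guests connect to the same switch.
New-VMSwitch -Name "vSwitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true
```

On 2012/2012 R2 you would instead create an LBFO team first (`New-NetLbfoTeam`) and then point `New-VMSwitch` at the logical team adapter.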



Networking for a Clustered Hyper-V Host

For a stand-alone Hyper-V host, the management operating system only requires one connection to the network. Clustered hosts benefit from multiple connections. Before teaming was directly supported, we used a lot of physical adapters to make that happen. Now we can just use one big team to handle our host and our guest traffic. That looks like this:
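The extra host connections in that picture are additional management-OS virtual adapters on the one team-backed switch. A sketch, with illustrative names (the switch "vSwitch" is assumed to already exist):

```powershell
# Add dedicated management-OS vNICs for cluster traffic. Each one shows
# up in Windows as "vEthernet (<Name>)".
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "vSwitch"

# Assign IP settings to a vNIC exactly as you would for a physical NIC.
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" `
    -IPAddress 192.168.50.11 -PrefixLength 24
```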




What About VLANs?

VLANs seem to have some special power to trip people up. A few things:

  • The only purpose of a VLAN is to separate layer 2 (Ethernet) traffic.
  • VLANs are not necessary to separate layer 3 (IP) networks. Many network administrators use VLANs to create walls around specific layer 3 networks, though. If that describes your network, you will need to design your Hyper-V hosts to match. If your physical network doesn’t use VLANs, then don’t worry about them on your Hyper-V hosts.
  • Do not create one Hyper-V virtual switch per VLAN the way that you would configure ESXi. Every Hyper-V virtual switch automatically supports untagged frames and VLAN IDs 1-4094.
  • Hyper-V does not have a “default” VLAN designation.
  • Configure VLANs directly on virtual adapters, not on the virtual switch.
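Per-adapter VLAN assignment looks like this in PowerShell. The VM name "SVR1", the vNIC name "Management", and the VLAN IDs are all hypothetical:

```powershell
# Tag a virtual machine's virtual adapter into VLAN 10.
Set-VMNetworkAdapterVlan -VMName "SVR1" -Access -VlanId 10

# Tag a management-OS vNIC into VLAN 5.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" `
    -Access -VlanId 5

# Revert an adapter to untagged traffic.
Set-VMNetworkAdapterVlan -VMName "SVR1" -Untagged
```

Note that nothing here touches the virtual switch itself; the VLAN lives on the virtual adapter, which is why one switch serves every VLAN.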

Other Quick Pointers

I’m going to provide you with some links so you can do some more reading and get some assistance with configuration. However, some quick things to point out:

  • The Hyper-V virtual switch does not have an IP address of its own.
  • The Hyper-V virtual switch does not appear anywhere in the regular Windows GUI. You can’t see it in the regular network connections list. You might find a logical team adapter and you might find virtual adapters that belong to the management operating system, but the switch is not there.
  • You do not manage the Hyper-V virtual switch via an IP address or a management VLAN. You use tools in the management operating system or on a remote system (Hyper-V Manager, PowerShell, and WMI/CIM).
  • Network connections for storage (iSCSI/SMB): preferably, storage connections will use dedicated, unteamed physical adapters. If you can’t do that, then you can create dedicated virtual NICs in the management operating system.
  • Multiple virtual switches: almost no one will ever need more than one virtual switch on a Hyper-V host. If you have VMware experience, resist the urge to create virtual switches just for VLANs.
  • The virtual machines’ virtual network adapters connect directly to the virtual switch. You do not need anything in the management operating system to assist them. You don’t need a virtual adapter for the management operating system that has anything to do with the virtual machines.
  • Turn off VMQ for every gigabit physical adapter that will host a virtual switch. If you team them, the logical team NIC will also have a VMQ setting that you need to disable.
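The VMQ pointer above can be handled with the standard NetAdapter cmdlets. A sketch with placeholder adapter and team names:

```powershell
# Disable VMQ on each gigabit physical adapter that will host the
# virtual switch.
Set-NetAdapterVmq -Name "NIC1" -Enabled $false
Set-NetAdapterVmq -Name "NIC2" -Enabled $false

# If the adapters are teamed, the logical team adapter exposes its own
# VMQ setting, which also needs to be disabled.
Set-NetAdapterVmq -Name "Team1" -Enabled $false
```

This applies to gigabit adapters only; on 10GbE and faster hardware, VMQ is generally beneficial and should stay enabled.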

For More Information

I only intend for this article to be a quick introduction to show you what you’re trying to accomplish. We have several articles to help you dive into the concepts and the necessary steps for configuration.


Note: this page was originally published in January 2018 and has been updated to be relevant as of December 2019.


74 thoughts on "The Really Simple Guide to Hyper-V Networking"

  • Larry Johnson says:

    please add some guides to using vm firewalls. For instance pfsense with two nics on a hyperv host. thanks.

  • Steve Fiore says:

    Hi Eric,

    Nice guide.

    I’ve been working with Hyper-V for about 10 years now. I am by no means an expert, but I do have a good working experience, and your guide is right on target – should be very helpful for folks.

    I do have one question however. You mention “Almost no one will ever need more than one virtual switch on a Hyper-V host”. I currently have a configuration running RDS with an RDVH back-end (for VDI) with approximately 100 Windows 10 VMs. I have 6 NICs – one for the OS, and 5 physical NICs specifically designated to 5 virtual switches. If I allocated one physical NIC to create one virtual switch, that means I would have 100 VMs network traffic all going through one physical NIC (and subsequently one physical bus, etc.). I have seen this suggestion before (one virtual switch for all VMs), so I know it is currently trending; however, I just wanted to get your take on this specific scenario.

    Thanks again Eric,

    • Eric Siron says:

      Hi Steve, thanks for your comment!
      If I look back 10 years, then I would be doing what you are doing. My initial builds had multiple pNICs for multiple vSwitches.
      But, that was in the days before Microsoft provided native teaming. The processing overhead for, say, 5 pNICs in a team with one vSwitch is noticeably lower than the overhead for 5 vSwitches on 5 unteamed pNICs. Even better, you don’t need any system at all for determining which VMs will participate on which switch. Even if all else was equal, the reduction in management overhead alone justifies a teamed build.
      If it helps any, then I have production RDVH systems running ~100 VDI VMs. I converge everything except storage through a single vSwitch.

      • Steve Fiore says:

        Okay, so what you’re saying is that from a resource utilization perspective, in my particular scenario I should team all 5 and create one Vswitch for all hundred VMs…?

        I can see where that definitely makes sense, it’s just sometimes difficult to get past old habits and/or beliefs when you’ve been doing something or have understood something in a particular way for so long : ). I think I’ll make the change, but run a perf-mon prior and then right after, so as to obtain some empirical data.

        Last question – in your 100 VDI build, are you using 1Gb or 10Gb NICs?

        Thank you again,

        • Eric Siron says:

          I would definitely encourage you to explore the one-switch-on-a-team solution.
          I am using dual 10GbE. The 90th percentile usage statistics indicate that I could get away with dual 100Mb adapters and only get yelled at occasionally. No idea how normative that is in the scope of VDI deployments, but I have noticed that network utilization tends to be consistently an order of magnitude lower than expectation.

          • Steve Fiore says:

            I’m curious how much of a difference the 10Gb NIC(s) make/s in the grand scheme of things… but I digress.

            Thank you again Eric, I will definitely let you know how I make out.


  • Jean-Pierre DUBREUIL says:

    I am currently fighting to create a team for the host (Windows 2016 GUI) and a virtual switch for the VMs, and I am getting totally flustered.
    How do I set up the whole thing on just one single virtual switch? I have 5 physical NICs and just want to set up 2 VMs.
    Help would be thoroughly appreciated

  • Darren says:

    Networking for a Single Hyper-V Host, the New Way

    With this scenario – how do I assign a static IP and virtual NIC for the host machine, or do I use the virtual switch as the NIC for the host? A little confused.

    • Eric Siron says:

      I recommend that you look through the links at the end of the article. They provide much more how-to and cover the possibilities.
      You’ll probably want this one: the parts about adding virtual NICs are roughly halfway down for PowerShell. The GUI instructions start a bit after that.
      Once you have a vNIC created for the management OS, it shows up like a regular physical adapter and you assign IP information as normal.

  • PsychoData says:

    Any suggestions on VLANs?
    In my testing before, when I tried to migrate VMs between hosts, the vNICs on the VMs would lose their VLAN tagging.

    So I was using multiple vSwitches with one per VLAN, so that I could transfer between hosts without problems. I can’t remember whether that was before I had made identical virtual switches on the multiple hosts, though.

    Have you seen anything like that?

    • Eric Siron says:

      I have never seen a VM lose its VLAN due to a Live Migration. I would revisit that testing because having multiple virtual switches is hurting your build.

  • Lawrence says:

    I am trying to set up a Hyper-V Virtual Machine as my Host Based Security System (HBSS) Server for work. Based on this post, does this mean that Hyper-V VM does not have capability to talk outside other than the host machine? Can someone please confirm.

    Thank you!

  • Eric says:


    This is a great article. Question though. If you have 2 10Gb NICs teamed in the OS and set as the uplinks into a virtual switch as you show in the last example, where do your vNICs come from for live migration, etc? We can share the team with the OS and get one virtual NIC in one VLAN, but how do we create more for things like LiveMig? We’re not clustered, but we do want to take advantage of the 10Gb links for replication, moving shut-down VMs between hosts, etc. Your last example seems to show this, but I’m lost on the vNICs after the mgmt one… Thanks!

    • Eric Siron says:

      The second and all further virtual network adapters for the management OS must be added via PowerShell. Check the how-to links at the end. You can also use Windows Admin Center.

  • Hank says:

    Great info.

    I have a frustrating issue: I have a Win10 system running Hyper-v with a single Ubuntu VM.

    I set it up and it ran for several days perfectly connected to the network, network browsable by other network devices and host programs operating as expected. Thus was Ubuntu 18.xx and my first move into both Hyper-v and Ubuntu. I have broad, but somewhat rusty skills in networking, host/server and programming skills. I figured, “piece of cake!”

    Two days in, I allowed Ubuntu to upgrade to 20.xx and needed to reboot the Win 10 machine. When it all came back, the Hyper-V VM running Ubuntu is now its own DHCP and DNS server thus no longer on my network as even the net mask is different. WHAT THE HELL HAPPENED?

    Thanks bud.
