How to Create Virtual Machine Templates with Hyper-V Manager

As an infrastructure hypervisor, Hyper-V hits all the high notes. However, it misses on some of the management aspects. You can find many control features in System Center Virtual Machine Manager, but I don’t feel that product was well designed, and its pricing places it out of reach of many small businesses anyway. Often, we don’t even need a heavy management layer; sometimes just one or two desires go unmet by the free tools. Of those, admins commonly request the ability to create and deploy templates. The free tools don’t directly include that functionality, but you can approximate it with only a bit of work.

The Concept of the Gold Image

You will be building “gold” or “master” (or even “gold master”) images as the core of this solution. This means that you’ll spend at least a little time configuring an environment (or several environments) to your liking. Instead of sending those directly to production, you’ll let them sit cold. When you want to deploy a new system, you use one of those as a base rather than building the instance up from scratch.

As you might have guessed, we do need to take some special steps with these images. They are not merely regular systems that have been turned off. We “generalize” them first, using a designated tool called “sysprep”. That process strips away all known unique identifiers for a Windows instance. The next time anyone boots that image, they’ll be presented with the same screens that you would see after freshly installing Windows. However, most non-identifying customization, such as software installations, will remain.

Do I Need Gold Images?

The simpler your environment, the less the concept of the gold image seems to fit. I wouldn’t write it off entirely, though. Even with rare usage, you can use a gold image to jump ahead of a lot of the drudgery of setting up a new system. If you deploy from the same image only twice, it will be worth your time.

For any environment larger than a few servers, the need for gold images becomes apparent quickly. Otherwise, you wind up spending significant amounts of time designing and deploying new systems. Since major parts of new server deployments share steps (and the equivalent involved time), you get the best usage by leveraging gold images.

Usually, the resistance to such images revolves around the work involved. People often don’t wish to invest much time in something whose final product will mostly just sit idle. I think that there’s also something to that “all-new” feeling of a freshly built image that you lose with gold images. The demands of modern business don’t really allow for these archaic notions. Do the work once, maybe some maintenance effort later, and ultimately save yourself and your colleagues many hours.

Should I Image Workstation or Server Environments?

The majority of my virtualization experience involves server instances. To that end, I’ve been using some sort of template strategy ever since I started using Hyper-V. I only build all-new images when new operating systems debut or significant updates release. Even if I wasn’t sure that I’d ever deploy a server OS more than once, I would absolutely build an image for it.

Workstation OSes have a different calculus. If you’ll be building a Microsoft virtual-machine RDS deployment, then you cannot avoid gold images. If you’re using only hardware deployments, then you might still image, but probably not the way that I’m talking about in this article. I will not illustrate workstation OSes, as the particulars of the process do not deviate meaningfully from server instances.

What About OS and Software Keys?

For operating systems, you have a few basic licensing scenarios:

  • Keyed during install: This key will be retained after the sysprep, so you’ll need to use a key with enough remaining activations. KMS keys work best for this. With others, you’ll need to be prepared to change the key after deployment if the situation calls for it. If you have Windows Server Datacenter Edition as your hypervisor, then you can use the AVMA keys. If you don’t have DC edition, then you could technically still use the keys but you’ll have to immediately change it after deployment. I have no idea how this plays legally, so consider that a last-ditch risky move.
  • Keyed after install: This usually happens with volume licensing images. These are the best because you really don’t have to plan anything. Key it afterward. Of course, you also need to qualify for volume licensing in order to use this option at all, so…
  • OEM keys: I’m not even going to wade into that. Ask your reseller.

If you use the ADK (revisited a bit in an upcoming section), you have ways to address key problems.

As for software, expect a mixed bag. Most applications retain their keys through sysprep. Many also have activation routines, with everything that those entail. You will need to think each package through and test it. It will be worth the effort far more often than not.

What About Linux Gold Images?

Yes, you most certainly can create gold masters of Linux. In a way, it can be easier. Linux doesn’t use a lot of fancy computer identification techniques or have system-specific GUIDs embedded anywhere. Usually, you can duplicate a Linux system at will, rename it, and assign a new IP.

Unfortunately, that’s not always the case. Exceptions are rare enough that no single built-in tool exists to handle the items that need generalization. The only problem that I’ve encountered so far involves SSH host keys. I found one set of instructions to regenerate them:
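A common approach looks something like this (run as root on the deployed clone; exact service names vary by distribution, so treat this as a sketch rather than a universal recipe):

```shell
# Remove the host keys cloned from the gold image,
# regenerate fresh ones, and restart the SSH daemon
rm -f /etc/ssh/ssh_host_*
ssh-keygen -A
systemctl restart sshd
```

The `-A` switch tells ssh-keygen to generate every default host key type that is missing, which is exactly the state the deletion leaves you in.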

Creating Gold Images for Hyper-V Templating

The overall process:

  1. Create a virtual machine
  2. Install the operating system
  3. Customize
  4. Generalize
  5. Store
  6. Deploy

If that sounds familiar, you probably do something like that for physical systems as well.

Let’s go over the steps in more detail.

Creating the Virtual Machine and Installing the Operating System

You start by simply doing what you might have done any number of times before: create a new virtual machine. One thing really matters: the virtual machine generation. Whatever generation you choose for the gold image will be the generation of all virtual machines that you build on it. Sure, there are some conversion techniques… but why use them? If you will need Generation 1 and Generation 2 VMs, then build two templates.

The rest of the settings of the virtual machine that you use for creating a gold image do not matter (unless the dictates of some particular software package override). You have more than one option for image storage, but in all cases, you will deploy to unique virtual machines whose options can be changed.

Once you’ve got your virtual machine created, install Windows Server (or whatever) as normal (note, especially for desktop deployments: many guides mention booting in Audit mode, which I have never done; this appears to be most important when Windows Store applications are in use):

Customizing the Gold Image

If you’re working on your first image, I would not go very far with this. You want a generic image to start with. For initial images, I tend to insert things like BGInfo that I want on every system. You can then use this base image to create more specialized images.

I have plans for future articles that will expand on your options for customization. You can perform simple things, like installing software. You can do more complicated things, such as using the Windows Assessment and Deployment Kit (ADK). One of the several useful aspects of the ADK is the ability to control keying.

Tip: If you have software that requires .Net 3.5, you can save a great deal of time by having a branch of images that include that feature pre-installed:
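One way to pre-install the feature from within the running gold image (a sketch; the source path assumes the installation media is mounted at D:, since the .NET 3.5 payload is not included in the default on-disk feature store):

```powershell
# Add .NET Framework 3.5 using the payload from the install media
Install-WindowsFeature -Name NET-Framework-Core -Source 'D:\sources\sxs'
```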

Just remember that you want to create generic images. Do not try to create a carbon-copy of an intended live system. If that’s your goal (say, for quick rollback to a known-good build), then create the image that you want as a live system and store a permanent backup copy. You could use an export if you like.

Very important: Patch the image fully, but only after you have all of the roles, features, and applications installed.

Generalize the Gold Image

Once you have your image built the way that you like it, you need to seal it. That process will make the image generic, freezing it into a state from which it can be repeatedly deployed. Windows (and Windows Server) includes that tool natively: sysprep.

The best way to invoke sysprep is by a simple command-line process. Use these switches:
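For example, from an elevated prompt inside the gold image (the path shown is the standard sysprep location):

```powershell
# /generalize strips unique identifiers, /oobe shows the out-of-box
# experience on next boot, /shutdown powers off when finished, and
# /mode:vm skips hardware re-detection on deployment
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /mode:vm
```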


The first three parameters are standard. We can use the last one because we’re creating Hyper-V images. It will ensure that the image doesn’t spend a lot of time worrying about hardware.

Tip: If you want to use the aforementioned Audit Mode so that you can work with software packages, use /audit instead of /oobe.

Tip: You can also just run sysprep.exe to get the user interface where you can pick all of these options except the mode. Your image(s) will work just fine if you don’t use /mode:vm.

Once the sysprep operation completes, it will stop the virtual machine. At that point, consider it to be in a “cold” state. Starting it up will launch the continuation of a setup process. So, you don’t want to do that. Instead, store the image so that it can be used later.

Storing a Gold Image

Decide early how you want to deploy virtual machines from this image. You have the option of creating all-new virtual machines at each go and re-using a copy of the VHDX. Alternatively, you can import a virtual machine as a copy. I use both techniques, so I recommend export. That way, you’ll have the base virtual machine and the VHDX so you can use either as suits you.

Image storage tip: Use deduplicated storage. In my test lab, I keep mine on a volume using Windows Server deduplication in Hyper-V mode. That mode only targets VHDX files and was intended for running VDI deployments. It seems to work well for cold image storage as well. I have not tried it with the normal file mode.

VHDX Copy Storage

If you only want to store the VHDX, then copy it to a safe location. Give the VHDX a very clear name as to its usage. Don’t forget that Windows allows you to use very, very long filenames. Delete the root virtual machine afterward and clean up after it.

The benefit of copy storage is that you can easily lay out all of your gold image VHDXs side-by-side in the same folder and not need to keep track of all of those virtual machine definition files and folders.

Exported Image Storage

By exporting the virtual machine, you can leverage import functionality to easily deploy virtual machines without much effort on your part. There are some downsides, but they’re not awful:

  • The export process takes a tiny bit of work. It’s not much, but…
  • When importing, the name of the VHDX cannot be changed. So, you wind up with a newly-deployed virtual machine that uses the same VHDX name as your gold image. That problem can be fixed, of course, but it’s extra work.

I discovered that we’ve never written an article on exporting virtual machines. I’ll rectify that in a future article and we’ll link this one to it. Fortunately, the process is not difficult. Start by right-clicking your virtual machine and clicking the Export option. Follow the wizard from there:

Tip: Disconnect any ISO images prior to exporting. Otherwise, the export will make a copy of that ISO to go with the VM image and it will remain permanently attached and deployed with each import.

Deploying Hyper-V Virtual Machines from Gold Images

From these images that you created, you can build a new virtual machine and attach a copy of the VHDX or you can import a copy.

Deploying from a VHDX Copy

VHDX copy deployment has two pieces:

  • Creation of a virtual machine
  • Attaching to a copy of the gold image VHDX

We have an article on using Hyper-V Manager to create a virtual machine. Make sure to use the same generation as the gold image. On the disk page, skip attaching anything:

Remember that Hyper-V Manager doesn’t offer many configuration options during VM creation. Set up the memory, CPU, and network before proceeding.

To finish up, copy the gold image’s VHDX into whatever target location you like. You can use a general “Virtual Hard Disks” folder like the Hyper-V default settings do, or you can drop it in a folder named after the VM. It really doesn’t matter, as long as the Hyper-V host can reach the location. If it were me, I would also rename the VHDX copy to something that suits the VM.

Once you have the VHDX placed, use Hyper-V Manager to attach it to the virtual machine:

Once you hit OK, you can boot up the VM. It will start off like a newly-installed Windows machine, but all your customizations will be in place.

Deploying with Import

Importing saves you a bit of work in exchange for a bit of different but optional work.

We already have an article on importing (scroll down about two-thirds of the way). I will only recap the most relevant portions.

First, choose the relevant source gold image:


Be especially sure to choose Copy. Either of the other choices will cause problems for the gold image.

Now, the fun parts. It will import with the name of the exported VM. That’s probably not what you want. You’ll need to:

  • Rename the virtual machine
  • Rename the VHDX(s)
    1. Detach the VHDX(s)
    2. Rename it/them
    3. Reattach it/them
  • You may need to rename the folder(s), depending on how you deployed. That hasn’t been a problem for me, so far.

Post-Deployment Work

At this point, you are mostly finished. The one thing to keep in mind: the guest operating system will have a generic name unrelated to the virtual machine’s name. Don’t forget to fix that. Also, IP addresses will not be retained, etc.

Further Work and Consideration

What I’ve shown you only takes you through some simplistic builds. You can really turn this into a powerhouse deployment system. Things to think about:

  • After you have a basic build, import it, customize it further and sysprep it again. Repeat as necessary.
  • Microsoft places limits on how many times an image can be sysprepped. Therefore, always try to work from the first image rather than from a deep child.
  • You can use the Mount-WindowsImage cmdlet family and DISM tools. Those allow you to patch and apply items to an image without decrementing the sysprep counter. For an example, I have a script that makes a VHDX into a client for a WSUS system.
  • You can mount a VHDX in Windows Explorer to move things into it.
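As a hedged sketch of that offline-servicing idea (the image path, index, mount folder, and update file are all assumptions for illustration):

```powershell
# Mount the gold image VHDX, inject an update package offline,
# then save the changes back into the image
Mount-WindowsImage -ImagePath 'D:\Images\GoldServer.vhdx' -Index 1 -Path 'C:\Mount'
Add-WindowsPackage -Path 'C:\Mount' -PackagePath 'D:\Updates\example-update.msu'
Dismount-WindowsImage -Path 'C:\Mount' -Save
```

Because the guest never boots during this process, the sysprep rearm counter is untouched.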

Watch out for more articles that talk more about customizing your images. If you’re having trouble following any of the steps above, ping me a message in the comments below and I’ll help you out.

How to Configure a File Share Witness for a Hyper-V Cluster

Traditionally, the preferred choice for a cluster quorum witness has been some type of networked disk. Small SAN LUNs do the trick nicely. Things have changed a bit, increasing the viability of the file share witness. You can configure one easily in a few simple steps.

Prerequisites for a File Share Witness

Just as with a cluster disk witness, you need to do a bit of work in advance.

  1. First, you need to pick a system that will host the share. It needs to be reliable, although not necessarily bulletproof. We’ll talk about that after the how-to.
  2. Space used on that share will be extremely tiny. Mine doesn’t even use 100 bytes.
  3. You need to set adequate share and NTFS permissions. I used a domain group that contains the cluster and node computer accounts and gave it Full Control on both, which worked. From my observations, only the domain account that belongs to the cluster name object is used. It seems that Change on the share and Modify on NTFS are adequate permission levels.
  4. If you have firewalls configured, the cluster node needs to be able to reach the share’s host on port 445.

The cluster will create its own folder underneath the root of this share. When you configure the witness, it will generate a GUID to use as the folder name. Therefore, you can point multiple clusters at the same share.

Using PowerShell to Configure a File Share Witness

As with most things, PowerShell is the quickest and easiest way. You will use the Set-ClusterQuorum cmdlet.

From any node of the cluster, you can use that cmdlet with the FileShareWitness parameter:
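For example (the UNC path here is a hypothetical placeholder for your witness share):

```powershell
# Point the cluster quorum at a file share witness
Set-ClusterQuorum -FileShareWitness '\\storage1\ClusterWitness'
```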

Once you have the name of the sharing host entered, you can use tab completion to step through the available shares.

If you want to run it remotely, make sure that you log in with an account that has administrative access to the cluster:
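Something like this, where both the cluster name and the share path are placeholders:

```powershell
# Run from a management station against a remote cluster
Set-ClusterQuorum -Cluster 'HVCluster1' -FileShareWitness '\\storage1\ClusterWitness'
```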


Using Failover Cluster Manager to Configure a File Share Witness

Failover Cluster Manager has two different ways to get to the same screen.

  1. In Failover Cluster Manager, right-click the cluster’s root node, go to More Actions, and click Configure Cluster Quorum Settings.
  2. Click Next on the introductory screen.
  3. If you choose Advanced quorum configuration, you can change which nodes have quorum votes. That is not part of this article, but you’ll eventually get to the same screen that I’m taking you to. For my directions, choose Select the quorum witness. Click Next.
  4. Choose Configure a file share witness and click Next.
  5. You can manually enter or browse to the shared location.
    1. Manual entry:
    2. Browse:
  6. The next screen summarizes your proposed changes. Review them and click Next when ready. The cluster will attempt to establish your setting.
  7. The final screen shows the results of your action.

Checking/Verifying Your Cluster Quorum Settings in Failover Cluster Manager

The home screen in Failover Cluster Manager shows most of the pertinent information for a file share witness. Look in the Cluster Core Resources section:

You can see the most important things: the state of the share and where it lives. You cannot see the name of the sub-folder used.

Checking/Verifying Your Cluster Quorum Settings in PowerShell

The built-in PowerShell module does not join the information quite as gracefully. You can quickly see the status and mode with Get-ClusterResource:

If desired, you can pare it down a bit with Get-ClusterResource -Name 'File Share Witness' or just Get-ClusterResource 'File Share Witness'.

You can see the mode with Get-ClusterQuorum, but it doesn’t show much else.

If you want to see the settings, you’ll need to get into the registry. The pertinent keys:

  • HKEY_LOCAL_MACHINE\Cluster\Quorum
  • HKEY_LOCAL_MACHINE\Cluster\Resources\<quorum GUID>
  • HKEY_LOCAL_MACHINE\Cluster\Resources\<quorum GUID>\Parameters

The value for the quorum GUID can be found in the Resource key-value pair under HKEY_LOCAL_MACHINE\Cluster\Quorum. The PowerShell interface for the registry is a bit messy:
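One way to pull the value (a sketch, assuming the hive layout listed above):

```powershell
# Read the quorum resource GUID from the cluster registry hive
(Get-ItemProperty -Path 'HKLM:\Cluster\Quorum').Resource
```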

That will output the GUID used in the other registry keys and the subfolder that was created on the file share witness. You can use it to retrieve them:
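For instance (again assuming the hive layout shown above; the Parameters subkey holds the share path):

```powershell
# Read the quorum GUID, then fetch the witness parameters from the hive
$guid = (Get-ItemProperty -Path 'HKLM:\Cluster\Quorum').Resource
Get-ItemProperty -Path "HKLM:\Cluster\Resources\$guid\Parameters"
```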


Troubleshooting a Cluster File Share Witness

There isn’t a lot to the file share witness. If the cluster says that it’s offline, then it can’t reach the share or it doesn’t have the necessary permissions.

I have noticed that the cluster takes longer to recognize that the share has come back online than a cluster disk. You can force it to come back online more quickly by right-clicking it in Failover Cluster Manager (screenshot above) and clicking Bring Online.

Why Use a File Share Witness for a Hyper-V Cluster?

With the how-to out of the way, we can talk about why. For me, it was because of the evolution of the cluster that I use to write these articles. I’ve been running the Microsoft iSCSI Target. But, as my cluster matured, I’ve been moving toward SMB. I don’t have anything left on iSCSI except quorum, and keeping it makes no sense.

To decide for yourself, analyze your options:

  • No witness: In a cluster with an odd number of nodes, you can go without a witness. However, with Microsoft Failover Clustering’s Dynamic Quorum technology, you’ll generally want to have a witness. I’m not sure of any good reason to continue using this mode.
  • Disk witness: A disk witness requires you to configure a standard Cluster Disk and dedicate it to quorum. Historically, we’ve built a SAN LUN of 512MB. If you have a SAN or other fiber channel/iSCSI target and don’t mind setting aside an entire LUN for quorum, this is a good choice. Really, the only reason not to go with a disk witness when you have this option available is if you want your quorum to be separate from your storage target.
  • Cloud witness (2016+): The cloud witness lets you use a small space on Azure storage as your witness. It’s a cool option, but not everyone has reliable Internet. Since you’ve already got storage for your cluster, cloud quorum might be overkill. But, it’s not a bad thing if you’ve got solid Internet. I would choose this for geographically dispersed clusters but otherwise would probably skip it in favor of a disk or share witness.

Realistically, I think the share witness works great when you’re already relying on SMB storage. As those builds become more common, the file share witness will likely increase in popularity.

Reliability Requirements for a File Share Witness

You want your file share witness to be mostly reliable, but it does not need to be 100%. Do the best that you can, but do not invest an inordinate amount of time and effort trying to ensure that it never fails. If you have a scale-out file server that is always online, that’s best. But, if you just have a single SMB system that hosts your VMs, that will work too. Remember that Dynamic Quorum will work to keep your cluster at a reasonable quorum level even if the file share witness goes offline.

Need any help?

If you’re having any issues setting up and configuring a File Share Witness for a Hyper-V Cluster let me know in the comments below and I’ll help you out. Also, if you’ve got any feedback at all on what’s been written here I’d love to hear it!

Fixing Erratic Behavior on Hyper-V with Network Load Balancers

For years, I’d never heard of this problem. Then, suddenly, I’m seeing it everywhere. It’s not easy to precisely outline a symptom tree for you. Networked applications will behave oddly. Remote desktop sessions may skip or hang. Some network traffic will not pass at all. Other traffic will behave erratically. Rather than try to give you a thorough symptom tree, we’ll just describe the setup that can be addressed with the contents of this article: you’re using Hyper-V with a third-party network load balancer and experiencing network-related problems.


Before I ever encountered it, the problem was described to me by one of my readers. Check out our Complete Guide to Hyper-V Networking article and look in the comments section for Jahn’s input. I had a different experience, but that conversation helped me reach a resolution much more quickly.

Problem Reproduction Instructions

The problem may appear under other conditions, but should always occur under these:

  • The network adapters that host the Hyper-V virtual switch are configured in a team
    • Load-balancing algorithm: Dynamic
    • Teaming mode: Switch Independent (likely occurs with switch-embedded teaming as well)
  • Traffic to/from affected virtual machines passes through a third-party load-balancer
    • Load balancer uses a MAC-based system for load balancing and source verification
      • Citrix Netscaler calls its feature “MAC based forwarding”
      • F5 load balancers call it “auto last hop”
    • The load balancer’s “internal” IP address is on the same subnet as the virtual machine’s
  • Sufficient traffic must be exiting the virtual machine for Hyper-V to load balance some of it to a different physical adapter

I’ll go into more detail later. This list should help you determine if you’re looking at an article that can help you.


Fixing the problem is very easy, and can be done without downtime. I’ll show the options in preference order. I’ll explain the impacting differences later.

Option 1: Change the Load-Balancing Algorithm

Your best bet is to change the load-balancing algorithm to “Hyper-V port”. You can change it in the lbfoadmin.exe graphical interface if your management operating system is GUI-mode Windows Server. To change it with PowerShell (assuming only one team):
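A sketch, assuming the host has exactly one LBFO team:

```powershell
# Switch the team's load-balancing algorithm to Hyper-V port
Get-NetLbfoTeam | Set-NetLbfoTeam -LoadBalancingAlgorithm HyperVPort
```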

There will be a brief interruption of networking while the change is made. It won’t be as bad as the network problems that you’re already experiencing.

Option 2: Change the Teaming Mode

Your second option is to change your teaming mode. It’s more involved because you’ll also need to update your physical infrastructure to match. I’ve always been able to do that without downtime as long as I changed the physical switch first, but I can’t promise the same for anyone else.

Decide if you want to use Static teaming or LACP teaming. Configure your physical switch accordingly.

Change your Hyper-V host to use the same mode. If your Hyper-V system’s management operating system is Windows Server GUI, you can use lbfoadmin.exe. To change it in PowerShell (assuming only one team):
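A sketch, again assuming a single team (use Static or Lacp to match whatever you configured on the physical switch):

```powershell
# Switch the teaming mode to match the physical switch configuration
Get-NetLbfoTeam | Set-NetLbfoTeam -TeamingMode Lacp
```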


In this context, it makes no difference whether you pick static or LACP. If you want more information, read our article on the teaming modes.

Option 3: Disable the Feature on the Load Balancer

You could tell the load balancer to stop trying to be clever. In general, I would choose that option last.

An Investigation of the Problem

So, what’s going on? What caused all this? If you’ve got an environment that matches the one that I described, then you’ve unintentionally created the perfect conditions for a storm.

Whose fault is it? In this case, I don’t really think that it’s fair to assign fault. Everyone involved is trying to make your network traffic go faster. They sometimes do that by playing fast and loose in that gray area between Ethernet and TCP/IP. We have lots of standards that govern each individually, but not so many that apply to the ways that they can interact. The problem arises because Microsoft is playing one game while your load balancer plays another. The games have different rules, and neither side is aware that another game is afoot.

Traffic Leaving the Virtual Machine

We’ll start on the Windows guest side (also applies to Linux). Your application inside your virtual machine wants to send some data to another computer. That goes something like this:

  1. Application: “Network, send this data to computer on port 443”.
  2. Network: “DNS server, get me the IP for”
  3. Network: “IP layer, determine if the IP address for is on the same subnet”
  4. Network: “IP layer, send this packet to the gateway”
  5. IP layer passes downward for packaging in an Ethernet frame
  6. Ethernet layer transfers the frame

The part to understand: your application and your operating system don’t really care about the Ethernet part. Whatever happens down there just happens. In particular, they don’t care at all about the source MAC.



Traffic Crossing the Hyper-V Virtual Switch

Because this particular Ethernet frame is coming out of a Hyper-V virtual machine, the first thing that it encounters is the Hyper-V virtual switch. In our scenario, the Hyper-V virtual switch rests atop a team of network adapters. As you’ll recall, that team is configured to use the Dynamic load balancing algorithm in Switch Independent mode. The algorithm decides if load balancing can be applied. The teaming mode decides which pathway to use and if it needs to repackage the outbound frame.

Switch independent mode means that the physical switch doesn’t know anything about a team. It only knows about two or more Ethernet endpoints connected in standard access mode. A port in that mode can “host” any number of MAC addresses; the physical switch’s capability defines the limit. However, the same MAC address cannot appear on multiple access ports simultaneously. Allowing that would cause all sorts of problems.



So, if the team wants to load balance traffic coming out of a virtual machine, it needs to ensure that the traffic has a source MAC address that won’t cause the physical switch to panic. For traffic going out any adapter other than the one where the virtual adapter is registered, it substitutes the MAC address of that physical adapter.



So, no matter how many physical adapters the team owns, one of two things will happen for each outbound frame:

  • The team will choose to use the physical adapter that the virtual machine’s network adapter is registered on. The Ethernet frame will travel as-is. That means that its source MAC address will be exactly the same as the virtual network adapter’s (meaning, not repackaged)
  • The team will choose to use an adapter other than the one that the virtual machine’s network adapter is registered on. The Ethernet frame will be altered. The source MAC address will be replaced with the MAC address of the physical adapter

Note: The visualization does not cover all scenarios. A virtual network adapter might be affinitized to the second physical adapter. If so, its load balanced packets would travel out of the shown “pNIC1” and use that physical adapter’s MAC as a source.

Traffic Crossing the Load Balancer

So, our frame arrives at the load balancer. The load balancer has a really crummy job. It needs to make traffic go faster, not slower. And, it acts like a TCP/IP router. Routers need to unpackage inbound Ethernet frames, look at their IP information, and make decisions on how to transmit them. That requires compute power and time.


If it needs too much time to do all this, then people would prefer to live without the load balancer. That means that the load balancer’s manufacturer doesn’t sell any units, doesn’t make any money, and goes out of business. So, they come up with all sorts of tricks to make traffic faster. One way to do that is by not doing quite so much work on the Ethernet frame. This is a gross oversimplification, but you get the idea:


Essentially, the load balancer only needs to remember which MAC address sent which frame, and then it doesn’t need to worry so much about all that IP nonsense (it’s really more complicated than that, but this is close enough).

The Hyper-V/Load Balancer Collision

Now we’ve arrived at the core of the problem: Hyper-V sends traffic from virtual machines using source MAC addresses that don’t belong to those virtual machines. The MAC addresses belong to the physical NIC. When the load balancer tries to associate that traffic with the MAC address of the physical NIC, everything breaks.

Trying to be helpful (remember that), the load balancer attempts to return what it deems as “response” traffic to the MAC that initiated the conversation. The MAC, in this case, belongs directly to that second physical NIC. It wasn’t expecting the traffic that’s now coming in, so it silently discards the frame.

That happens because:

  • The Windows Server network teaming load balancing algorithms are send only; they will not perform reverse translations. There are lots of reasons for that and they are all good, so don’t get upset with Microsoft. Besides, it’s not like anyone else does things differently.
  • Because the inbound Ethernet frame is not reverse-translated, its destination MAC belongs to a physical NIC. The Hyper-V virtual switch will not send any Ethernet frame to a virtual network adapter unless it owns the destination MAC
  • In typical system-to-system communications, the “responding” system would have sent its traffic to the IP address of the virtual machine. Through the normal course of typical networking, that traffic’s destination MAC would always belong to the virtual machine. It’s only because your load balancer is trying to speed things along that the frame is being sent to the physical NIC’s MAC address. Otherwise, the source MAC of the original frame would have been little more than trivia.

Stated a bit more simply: Windows Server network teaming doesn’t know that anyone cares about its frames’ source MAC addresses and the load balancer doesn’t know that anyone is lying about their MAC addresses.

Why Hyper-V Port Mode Fixes the Problem

When you select the Hyper-V port load balancing algorithm in combination with the switch independent teaming mode, each virtual network adapter’s MAC address is registered on a single physical network adapter. That’s the same behavior that Dynamic uses. However, no load balancing is done for any given virtual network adapter; all traffic entering and exiting a given virtual adapter always uses the same physical adapter. The team achieves load balancing by distributing the virtual network adapters across its physical members in a round-robin fashion.
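As a sketch, creating such a team and binding a virtual switch to it might look like this in PowerShell (the adapter, team, and switch names are hypothetical):

```powershell
# Switch independent teaming with the Hyper-V port load balancing algorithm.
# 'NIC1' and 'NIC2' are placeholder physical adapter names.
New-NetLbfoTeam -Name 'vTeam' -TeamMembers 'NIC1', 'NIC2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Bind the virtual switch to the team's logical adapter.
New-VMSwitch -Name 'vSwitch' -NetAdapterName 'vTeam' -AllowManagementOS $true
```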


Source MACs will always be those of their respective virtual adapters, so there’s nothing to get confused about.

I like this mode as a solution because it addresses the issue without requiring any other changes to your infrastructure. The drawback: with only a few virtual network adapters, you might not get the best distribution. For a 10GbE system, I wouldn’t worry.

Why Static and LACP Fix the Problem

Static and LACP teaming involve your Windows Server system and the physical switch agreeing on a single logical pathway that consists of multiple physical pathways. All MAC addresses are registered on that logical pathway. Therefore, the Windows Server team has no need to perform any source MAC substitution, regardless of the load balancing algorithm that you choose.
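For comparison, an LACP team that retains the Dynamic algorithm could be created like this (names are hypothetical; the connected physical switch ports must be configured for LACP as well):

```powershell
# LACP teaming; Dynamic load balancing remains safe because no MAC substitution occurs.
New-NetLbfoTeam -Name 'vTeam' -TeamMembers 'NIC1', 'NIC2' `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
```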


Since no MAC substitution occurs here, the load balancer won’t get anything confused.

I don’t like this method as much. It means modifying your physical infrastructure. I’ve noticed that some physical switches don’t like the LACP failover process very much. I’ve encountered some that need a minute or more to notice that a physical link was down and react accordingly. With every physical switch that I’ve used or heard of, the switch independent mode fails over almost instantly.

That said, using a static or LACP team will allow you to continue using the Dynamic load balancing algorithm. All else being equal, you’ll get a more even load balancing distribution with Dynamic than you will with Hyper-V port mode.

Why You Should Let the Load Balancer Do Its Job

The third listed resolution suggests disabling the related feature on your load balancer. I don’t like that option, personally. I don’t have much experience with the Citrix product, but I know that the F5 buries their “Auto Last Hop” feature fairly deeply. Also, these two manufacturers enable the feature by default. It won’t be obvious to a maintainer that you’ve made the change.

However, your situation might dictate that disabling the load balancer’s feature causes fewer problems than changing the Hyper-V or physical switch configuration. Do what works best for you.

Using a Different Internal Router Also Addresses the Issue

In all of these scenarios, the load balancer performs routing. Actually, these types of load balancers always perform routing, because they present a single IP address for the service to the outside world and translate internally to the back-end systems.

However, nothing states that the internal source IP address of the load balancer must exist in the same subnet as the back-end virtual machines. You might do that for performance reasons; as I said above, routing incurs overhead. However, this is all a known quantity, and modern routers are quite good at what they do. If any router sits between the load balancer and the back-end virtual machines, then the MAC address issue will sort itself out regardless of your load balancing and teaming mode selections.

Have You Experienced this Phenomenon?

If so, I’d love to hear from you. On what system did you experience it? How did you resolve the situation (if you were able)? Perhaps you’ve just encountered it and arrived here for a solution – if so, let me know if this explanation was helpful or if you need any further assistance with your particular environment. The comment section below awaits.

How to Resize Virtual Hard Disks in Hyper-V 2016

How to Resize Virtual Hard Disks in Hyper-V 2016

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016) and Client Hyper-V (Windows 10) have this capability.

Requirements for Hyper-V Disk Resizing

If we only think of virtual hard disks as files, then we won’t have many requirements to worry about. We can easily grow both VHD and VHDX files. We can shrink VHDX files fairly easily. Shrinking VHD files requires more effort. This article focuses primarily on growth operations, so I’ll wrap up with a link to a shrink how-to article.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

If a virtual hard disk belongs to a virtual machine, the rules change a bit.

  • If the virtual machine is Off, any of its disks can be resized (in accordance with the restrictions that we just mentioned)
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the virtual disk in question belongs to the virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the virtual disk in question belongs to the virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.



Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD. As of this writing, the documentation for that cmdlet says that it operates offline only. Ignore that. Resize-VHD works under the same restrictions outlined above.
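A representative invocation looks like this (the file path is a placeholder; substitute your own):

```powershell
# Grow a VHDX to 40GB. The path shown here is hypothetical.
Resize-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx' -SizeBytes 40GB
```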

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to the VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (although b and B both mean “byte”).
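You can verify the suffix translation right at the prompt:

```powershell
PS C:\> 1kb
1024
PS C:\> 40gb
42949672960
```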

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Briefly, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
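To check that floor before attempting a shrink (again, the path is a placeholder):

```powershell
# MinimumSize reports the smallest value that Resize-VHD will currently accept.
(Get-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx').MinimumSize
```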

This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). That’s a separate step.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is Saved, start it to clear the saved state (no disk can be resized while the VM is Saved).
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink (VHDX only) the virtual hard disk. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    Enter the desired size and click Next.
  8. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change; nothing happens to the file system(s) inside the VHD/X. To make the new space usable, you’ll need to perform an additional action inside the guest. For a Windows guest, that typically means using Disk Management to extend a partition:


Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.
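If you prefer to script the guest-side step, here is a sketch using the Storage module cmdlets (the drive letter is an assumption; adjust for your guest):

```powershell
# Inside the Windows guest: extend the C: partition to consume all newly added space.
$max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $max
```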

Of course, you could also create a new partition (or partitions) if you prefer.

I have not performed this operation on any Linux guests, so I can’t tell you exactly what to do. The operation will depend on the file system and the tools that you have available. You can probably determine what to do with a quick Internet search.

VHDX Shrink Operations

I didn’t talk much about shrink operations in this article. Shrinking requires you to prepare the contained file system(s) before you can do anything in Hyper-V. You might find that you can’t shrink a particular VHDX at all. Rather than muddle this article with all of the necessary information, I’m going to point you to an earlier article that I wrote on this subject. That article was written for 2012 R2, but nothing has changed since then.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/VHDX and compacting a VHD/VHDX. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Look for a forthcoming article on that topic.

The Really Simple Guide to Hyper-V Networking

The Really Simple Guide to Hyper-V Networking

If you’re just getting started with Hyper-V and struggling with the networking configuration, you are not alone. I (and others) have written a great deal of introductory material on the subject, but sometimes, that’s just too much. I’m going to try a different approach. Rather than a thorough deep-dive on the topic that tries to cover all of the concepts and how-to, I’m just going to show you what you’re trying to accomplish. Then, I can just link you to the necessary supporting information so that you can make it into reality.

Getting Started

First things first. If you have a solid handle on layer 2 and layer 3 concepts, that’s helpful. If you have experience networking Windows machines, that’s also helpful. If you come to Hyper-V from a different hypervisor, then that knowledge won’t transfer well. If you apply ESXi networking design patterns to Hyper-V, then you will create a jumbled mess that will never function correctly or perform adequately.

Your Goals for Hyper-V Networking

You have two very basic goals:

  1. Ensure that the management operating system can communicate on the network
  2. Ensure that virtual machines can communicate on the network


Any other goals that you bring to this endeavor are secondary, at best. If you have never done this before, don’t try to jump ahead to routing or anything else until you achieve these two basic goals.

Hyper-V Networking Rules

Understand what you must, can, and cannot do with Hyper-V networking:

  • You can connect the management operating system to a physical network directly using a physical network adapter or a team of physical network adapters.
  • You cannot connect any virtual machine to a physical network directly using a physical network adapter or team.
  • If you wish for a virtual machine to have network access, you must use a Hyper-V virtual switch. There is no bypass or pass-through mode.
  • A Hyper-V virtual switch uses a physical network adapter or team. It completely takes over that adapter or team; nothing else can use it.
  • It is possible for the management operating system to connect through a Hyper-V virtual switch, but it is not required.
  • It is not possible for the management operating system and the virtual switch to use a physical adapter or team at the same time. The “share” terminology that you see in all of the tools is a lie.

What the Final Product Looks Like

It might help to have visualizations of correctly-configured Hyper-V virtual switches. I will only show images with a single physical adapter. You can use a team instead.

Networking for a Single Hyper-V Host, the Old Way

An old technique has survived from the pre-Hyper-V 2012 days. It uses a pair of physical adapters. One belongs to the management operating system. The other hosts a virtual switch that the virtual machines use. I don’t like this solution for a two adapter host. It leaves both the host and the virtual machines with a single point of failure. However, it could be useful if you have more than two adapters and create a team for the virtual machines to use. Either way, this design is perfectly viable whether I like it or not.



Networking for a Single Hyper-V Host, the New Way

With teaming, you can join all of the physical adapters together and let the resulting team host a single virtual switch. Let the management operating system and all of the guests connect through it.



Networking for a Clustered Hyper-V Host

For a stand-alone Hyper-V host, the management operating system only requires one connection to the network. Clustered hosts benefit from multiple connections. Before teaming was directly supported, we used a lot of physical adapters to make that happen. Now we can just use one big team to handle our host and our guest traffic. That looks like this:
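As a sketch, those extra host connections become virtual adapters on the one switch (the switch and adapter names are hypothetical):

```powershell
# Add management operating system virtual adapters for cluster roles
# to an existing virtual switch named 'vSwitch'.
Add-VMNetworkAdapter -ManagementOS -SwitchName 'vSwitch' -Name 'Cluster'
Add-VMNetworkAdapter -ManagementOS -SwitchName 'vSwitch' -Name 'LiveMigration'
```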




VLANs

VLANs seem to have some special power to trip people up. A few things:

  • The only purpose of a VLAN is to separate layer 2 (Ethernet) traffic.
  • VLANs are not necessary to separate layer 3 (IP) networks. Many network administrators use VLANs to create walls around specific layer 3 networks, though. If that describes your network, you will need to design your Hyper-V hosts to match. If your physical network doesn’t use VLANs, then don’t worry about them on your Hyper-V hosts.
  • Do not create one Hyper-V virtual switch per VLAN the way that you configure ESXi. Every Hyper-V virtual switch automatically supports untagged frames and VLAN IDs 1-4094.
  • Hyper-V does not have a “default” VLAN designation.
  • Configure VLANs directly on virtual adapters, not on the virtual switch.
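For example (the VM and adapter names are hypothetical):

```powershell
# Put a virtual machine's network adapter into access mode on VLAN 10.
Set-VMNetworkAdapterVlan -VMName 'svtest' -Access -VlanId 10

# Tag a management operating system virtual adapter the same way.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Cluster' -Access -VlanId 20
```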

Other Quick Pointers

I’m going to provide you with some links so you can do some more reading and get some assistance with configuration. However, some quick things to point out:

  • The Hyper-V virtual switch does not have an IP address of its own.
  • You do not manage the Hyper-V virtual switch via an IP or management VLAN. You manage the Hyper-V virtual switch using tools in the management or a remote operating system (Hyper-V Manager, PowerShell, and WMI/CIM).
  • Network connections for storage (iSCSI/SMB): Preferably, network connections for storage will use dedicated, unteamed physical adapters. If you can’t do that, then you can create dedicated virtual NICs in the management operating system.
  • Multiple virtual switches: Almost no one will ever need more than one virtual switch on a Hyper-V host. If you have VMware experience, in particular, do not create virtual switches just for VLANs.
  • The virtual machines’ virtual network adapters connect directly to the virtual switch. You do not need anything in the management operating system to assist them. You don’t need a virtual adapter for the management operating system that has anything to do with the virtual machines.
  • Turn off VMQ for every gigabit physical adapter that will host a virtual switch. If you team them, the logical team NIC will also have a VMQ setting that you need to disable.
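As a sketch (the adapter names are hypothetical):

```powershell
# Disable VMQ on the gigabit physical members and on the team's logical NIC.
Disable-NetAdapterVmq -Name 'NIC1'
Disable-NetAdapterVmq -Name 'NIC2'
Disable-NetAdapterVmq -Name 'vTeam'
```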

For More Information

I only intend for this article to be a quick introduction to show you what you’re trying to accomplish. We have several articles to help you dive into the concepts and the necessary steps for configuration.


How to Monitor Hyper-V Performance using PNP4Nagios

How to Monitor Hyper-V Performance using PNP4Nagios

At a high level, you need three things to run a trouble-free datacenter (even if your datacenter consists of two mini-tower systems stuffed in a closet): intelligent architecture, monitoring, and trend analysis. Intelligent architecture consists of making good purchase decisions and designing virtual machines that can appropriately handle their load. Monitoring allows you to prevent or respond quickly to emergent situations. Trend analysis helps you to determine how well your reality matches your projections and greatly assists in future architectural decisions. In this article, we’re going to focus on trend analysis. We will set up a data collection and graphing system called “PNP4Nagios” that will allow you to track anything that you can measure. It will hold that data for four years. You can display it in graphs on demand.

What You Get

I know that intro was a little heavy. So, to put it more simply, I’m giving you graphs. Want to know how much CPU that VM has been using? Trying to figure out how quickly your VMs are filling up your Cluster Shared Volumes? Curious about a VM’s memory usage? We have all of that.

Where I find it most useful: Getting rid of vendor excuses. We all have at least one of those vendors that claim that we’re not providing enough CPU or memory or disk or a combination. Now, you can visually determine the reasonableness of their demands.

First, the host and service screens in Nagios will get a new graph icon next to every host and service that tracks performance data. Also, hovering over one of those graph icons will show a preview of the most recent chart:


Second, clicking any of those icons will open a new tab with the performance data graph for the selected item.


Just as the Nagios pages periodically refresh, the PNP4Nagios page will update itself.

Additionally, you can do the following:

  • Click-dragging a section on a graph will cause it to zoom. If you’ve ever used the zoom feature in Performance Monitor, this is similar.
  • In the Actions bar, you can:
    • Set a custom time/date range to graph
    • Generate a PDF of the visible charts
    • Generate XML summary data
  • Create a “basket” of the graphs that you view most. The basket persists between sessions, so you can build a dashboard of your favorite charts

What You Need

Fortunately, you don’t need much to get going with PNP4Nagios.

Fiscal Cost

Let’s answer the most important question: what does it cost? PNP4Nagios does not require you to purchase anything. Their site does include a Donate button. If your organization finds PNP4Nagios useful, it would be good to throw a few dollars their way.

You’ll need an infrastructure to install PNP4Nagios on, of course. We’ll cover that in the later segments.


Nagios

As its name implies, PNP4Nagios needs Nagios. PNP4Nagios installs alongside Nagios on the same system. We have a couple of walkthroughs for installing Nagios as a Hyper-V guest, divided by distribution.

The installation really doesn’t change much between distributions. The differences lie in how you install the prerequisites and in how you configure Apache. If you know those things about your distribution, then you should be able to use either of the two linked walkthroughs to great effect. If you’d rather see something on your exact distribution, the official Nagios project has stepped up its game on documentation. If we haven’t got instructions for your distribution, maybe they do. There are still things that I do differently, but nothing of critical importance. Also, being a Hyper-V blog, I have included special items just for monitoring Hyper-V, so definitely look at the post-installation steps of my articles.

Also, if you want to use SSL and Active Directory to secure your Nagios installation, we’ve got an article for that.

Disk Space

According to the PNP4Nagios documentation, each item that you monitor will require about 400 kilobytes once it has reached maximum data retention. That assumes that you will leave the default historical interval and retention lengths. More information can be found on the PNP4Nagios site. So, 20 systems with 12 monitors apiece will use about 96 megabytes.

PNP4Nagios itself appears to use around 7 megabytes once installed and extracted.

Downloading PNP4Nagios

PNP4Nagios is distributed on SourceForge.

As always, I recommend that you download to a standard workstation and then transfer the files to the Nagios server. Since I operate using a Windows PC and run Nagios on a Linux system, WinSCP is my choice of transfer tool.

On my Linux systems, I create a “Downloads” directory in my home folder and place everything there. The install portion of my instructions will be written using the file’s location as a starting point. So, for me, I begin with cd ~/Downloads.

Installing PNP4Nagios

PNP4Nagios installs quite easily.

PNP4Nagios Prerequisites

Most of the prerequisites for PNP4Nagios automatically exist in most Linux distributions. Most of the remainder will have been satisfied when you installed Nagios. The documentation lists them:

  • Perl, at least version 5. To check your installed Perl version: perl -v
  • RRDTool: This one will not be installed automatically or during a regular Nagios build. Most distributions include it in their mainstream repositories. Install with your distribution’s package manager.
    • CentOS and most other RedHat-based distributions: sudo yum install perl-rrdtool
    • SUSE-based systems: sudo zypper install rrdtool
    • Ubuntu and most other Debian-based distributions: sudo apt install rrdtool librrds-perl
  • PHP, at least version 5. This would have been installed with Nagios. Check with: php -v
  • GD extension for PHP. You might have installed this with Nagios. Easiest way to check is to just install it; it will tell you if you’ve already got it.
    • CentOS and most other RedHat-based distributions: sudo yum install php-gd
    • SUSE-based systems: sudo zypper install php-gd
    • Ubuntu and most other Debian-based distributions: sudo apt install php-gd
  • mod_rewrite extension for Apache. This should have been installed along with Nagios. How you check depends on whether your distribution uses “apache2” or “httpd” as the name of the Apache executable:
    • CentOS and most other RedHat-based distributions: sudo httpd -M | grep rewrite
    • Ubuntu, openSUSE, and most Debian and SUSE distributions: sudo apache2ctl -M | grep rewrite
  • There will be a bit more on this in the troubleshooting section near the end of the article, but if you’re running a more current version of PHP (like 7), then you may not have the XML extension built-in. I only ran into this problem on my Ubuntu installation. I solved it with this: sudo apt install php-xml
  • openSUSE was missing a couple of PHP modules on my system: sudo zypper install php-sockets php-zlib

If you are missing anything that I did not include instructions for, you can visit one of my articles on installing Nagios. If I haven’t got one for your distribution, then you’ll need to search for instructions elsewhere.

Unpacking and Installing PNP4Nagios

As I mentioned in the download section, I place my downloaded files in ~/Downloads. I start from there (with cd ~/Downloads). Start these directions in the folder where you placed your downloaded PNP4Nagios tarball.

  1. Unpack the tarball. I wrote these directions with version 0.6.26. Modify your command as necessary (don’t forget about tab completion!): tar xzf pnp4nagios-0.6.26.tar.gz
  2. Move to the unpacked folder: cd ./pnp4nagios-0.6.26/
  3. Next, you will need to configure the installer. Most of us can just use it as-is. Some of us will need to override some things, such as the Nagios user groups. To determine if that applies to you, open /usr/local/nagios/etc/nagios.cfg. Look for the following section:

    If both nagios_user and nagios_group are “nagios”, then you don’t need to do anything special.
    Regular configuration: ./configure
    Configuration with overrides: ./configure --with-nagios-user=naguser --with-nagios-group=nagcmd
    Other overrides are available. You can view them all with ./configure --help. One useful override would be to change the location of the emitted perfdata files to an auxiliary volume to control space usage. On my Ubuntu system, I needed to override the location of the Apache conf files: ./configure --with-httpd-conf=/etc/apache2/sites-available
  4. When configure completes, check its output. Verify that everything looks OK. Especially pay attention to “Apache Config File” — note the value because you will access it later. If anything looks off, install any missing prerequisites and/or use the appropriate configure options. You can continue running ./configure until everything suits your needs.
  5. Compile the program: make all. If you have an “oh no!” moment in which you realize that you missed something, you can still re-run ./configure and then compile again.
  6. Because we’re doing a new installation, we will have it install everything: sudo make fullinstall. Be aware that we are now using sudo. That’s because it will need to copy files into locations that your regular account won’t have access to. For an upgrade, you’d likely only want sudo make install. Please check the documentation for additional notes about upgrading. If you didn’t pay attention to the output file locations during configure, they’ll be displayed to you again.
  7. We’re going to be adding a bit of flair to our Nagios links. Enable the pop-up extension with: sudo cp ./contrib/ssi/status-header.ssi /usr/local/nagios/share/ssi/

Installation is complete. We haven’t wired it into Nagios yet, so don’t expect any fireworks.

Configure Apache Security for PNP4Nagios

If you just use the default Apache security for Nagios, then you can skip this whole section. As outlined in my previous article, I use Active Directory authentication. Really, all that you need to do is duplicate your existing security configuration to the new site. Remember how I told you to pay attention to the output of configure, specifically “Apache Config File”? That’s the file to look in.

My “fixed” file looks like this:

Only a single line needed to be changed to match my Nagios virtual directories.

Initial Verification of PNP4Nagios Installation

Before we go any further, let’s ensure that our work to this point has done what we expected.

  1. If you are using a distribution whose Apache enables and disables sites by symlinking into sites-available and you instructed PNP4Nagios to place its files there (ex: Ubuntu), enable the site: sudo a2ensite pnp4nagios.conf
  2. Restart Apache.
    1. CentOS and most other RedHat-based distributions: sudo service httpd restart
    2. Almost everyone else: sudo service apache2 restart
  3. If necessary, address any issues with Apache starting. For instance, Apache on my openSUSE box really did not like the “Order” and “Allow” directives.
  4. Once Apache starts correctly, access http://yournagiosserveraddress/pnp4nagios. Remember that you copied over your Nagios security configuration, so you will log in using the same credentials that you use on a normal Nagios site.
  5. Fix any problems indicated by the web page. Continue reloading the Apache server and the page as necessary until you get the green light:
  6. Remove the file that validates the installation: sudo rm /usr/local/pnp4nagios/share/install.php

Installation was painless on my CentOS and Ubuntu systems. openSUSE gave me more drama. In particular, it complained about “PHP zlib extension not available” and “PHP socket extension not available”. Very easy to fix: sudo zypper install php-sockets php-zlib. Don’t forget to restart Apache after making these changes.

Initial Configuration of Nagios for PNP4Nagios

At this point, you have PNP4Nagios mostly prepared to do its job. However, if you try to access the URL, you’ll get a message that says that it doesn’t have any data: “perfdata directory ‘/usr/local/pnp4nagios/var/perfdata/’ is empty. Please check your Nagios config.” Nagios needs to start feeding it data.

We start by making several global changes. If you are comparing my walkthrough to the official PNP4Nagios documentation, be aware that I am guiding you to a Bulk + NPCD configuration. I’ll talk about why after the how-to.

Global Nagios Configuration File Changes

In the text editor of your choice, open /usr/local/nagios/etc/nagios.cfg. Find each of the entries that I show in the following block and change them accordingly. Some don’t need anything other than to be uncommented:
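For a Bulk + NPCD configuration, the entries in question look like the following (these follow the stock PNP4Nagios bulk-mode settings; adjust the paths if you installed PNP4Nagios somewhere other than /usr/local/pnp4nagios):

```
process_performance_data=1

# host performance data
host_perfdata_file=/usr/local/pnp4nagios/var/host-perfdata
host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$
host_perfdata_file_mode=a
host_perfdata_file_processing_interval=15
host_perfdata_file_processing_command=process-host-perfdata-file

# service performance data
service_perfdata_file=/usr/local/pnp4nagios/var/service-perfdata
service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$
service_perfdata_file_mode=a
service_perfdata_file_processing_interval=15
service_perfdata_file_processing_command=process-service-perfdata-file
```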


Next, open /usr/local/nagios/etc/objects/templates.cfg. At the end, you’ll find some existing commands that mention “perfdata”. After those, add the commands from the following block. If you don’t use the initial Nagios sample files, then just place these commands in any active cfg file that makes sense to you.
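In Bulk + NPCD mode, these commands do nothing more than move the accumulated perfdata files into NPCD’s spool directory (definitions per the stock PNP4Nagios documentation; adjust the paths to match your installation):

```
define command {
       command_name    process-host-perfdata-file
       command_line    /bin/mv /usr/local/pnp4nagios/var/host-perfdata /usr/local/pnp4nagios/var/spool/host-perfdata.$TIMET$
}

define command {
       command_name    process-service-perfdata-file
       command_line    /bin/mv /usr/local/pnp4nagios/var/service-perfdata /usr/local/pnp4nagios/var/spool/service-perfdata.$TIMET$
}
```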

Configuring NPCD

The performance collection method that we’re employing involves the Nagios Performance Data C Daemon (NPCD). The default configuration will work perfectly for this walkthrough. If you need something more from it, you can edit /usr/local/pnp4nagios/etc/npcd.cfg. We just want it to run as a daemon:
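For reference, the npcd.cfg defaults that matter for this walkthrough look like this (your file may vary slightly by version):

```
user = nagios
group = nagios
perfdata_spool_dir = /usr/local/pnp4nagios/var/spool/
perfdata_file_run_cmd = /usr/local/pnp4nagios/libexec/process_perfdata.pl
perfdata_file_run_cmd_args = -b
```

Then start the daemon: sudo service npcd start.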

Enable it to run automatically at startup.

  • Most Red Hat and SUSE based distributions: sudo chkconfig --add npcd
  • Ubuntu and most other Debian-based distributions: sudo update-rc.d npcd defaults

Configuring Hosts in Nagios for PNP4Nagios Graphing

If you made it here, you’ve successfully completed all the hard work! Now you just need to tell Nagios to start collecting performance data so that PNP4Nagios can graph it.

Note: I deviate substantially from the PNP4Nagios official documentation. If you follow those directions, you will quickly and easily set up every single host and every single service to gather data. I didn’t want that because I don’t find such a heavy hand to be particularly useful. You’ll need to do more work to exert finer control. In my opinion, that extra bit of work is worth it. I’ll explain why after the how-to.

If you followed the path of least resistance, every single host in your Nagios environment inherits from a single root source. Open /usr/local/nagios/etc/objects/templates.cfg. Find the define host object with a name of generic-host. Most likely, this is your master host object. Look at its configuration:
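In a stock sample configuration, it looks roughly like this (abridged; the important line is process_perf_data):

```
define host {
        name                            generic-host
        notifications_enabled           1
        event_handler_enabled           1
        flap_detection_enabled          1
        process_perf_data               1
        retain_status_information       1
        retain_nonstatus_information    1
        register                        0
}
```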

Now that you’ve enabled performance data processing in nagios.cfg, Nagios and PNP4Nagios will start graphing every single host in your Nagios configuration. Sound good? Well, wait a second. What it really means is that it will graph the output of the check_command for every single host in your Nagios configuration. What is check_command in this case? Probably check_ping or check_icmp. The performance data that those plugins emit is the round-trip average and packets lost during pings from the Nagios server to the host in question. Is that really useful information? To track for four years?

I don’t really need that information. Certainly not for every host. So, I modified mine to look like this:
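Here is a sketch of the result (the action_url values come from the standard PNP4Nagios integration pattern; the quote-injection on the pingdata version is what produces the mouseover preview):

```
define host {
        name                    generic-host
        # ...existing settings unchanged, except:
        process_perf_data       0
        register                0
}

define host {
        name                    perf-host
        action_url              /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=_HOST_
        register                0
}

define host {
        name                    perf-host-pingdata
        process_perf_data       1
        action_url              /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=_HOST_' class='tips' rel='/pnp4nagios/index.php/popup?host=$HOSTNAME$&srv=_HOST_
        register                0
}
```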

What we have:

  • Our existing hosts are untouched. They’ll continue not recording performance data just as they always have.
  • A new, small host definition called “perf-host”. It also does not set up the recording of host performance data. However, its “action_url” setting will cause it to display a link to any graphs that belong to this host. You can use this with hosts that have graphed services but you don’t want the ping statistics tracked. To use it, you would set up/modify hosts and host templates to inherit from this template in addition to whatever host templates they already inherit from. For example: use perf-host,generic-host.
  • A new, small host definition called “perf-host-pingdata”. It works exactly like “perf-host” except that it will capture the ping data as well. The extra bit on the end of the “action_url” will cause it to draw a little preview when you mouseover the link. To use it, you will set up/modify hosts and host templates to inherit from this template in addition to whatever host templates they already inherit from. For example: use perf-host-pingdata,generic-host.

Note: When setting the inheritance:

  • perf-host or perf-host-pingdata must come before any other host templates in a use line.
  • In some instances, including a space after the comma in a use line causes Nagios to panic if the name of the host does not also have a space (ex: if you use tabs instead of spaces on the name generic-host line). Make sure that all of your use directives have no spaces after any commas and you will never have a problem. Ex: use perf-host,generic-host.

Remember to check the configuration and restart Nagios after any changes to the .cfg files:
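Verify first, then restart:

```shell
sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
sudo service nagios restart
```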

Couldn’t You Just Set a Single Root Host for Inheritance?

An alternative to the above would be:

In this configuration, perf-host inherits directly from generic-host. You could then have all of your other systems inherit from perf-host instead of generic-host. The problem is that even in a fairly new Nagios installation, a fair number of hosts already inherit from generic-host. You’d need to determine which of those you wanted to edit and carefully consider how inheritance works. If you’re going to all of that trouble, it seems to me that maybe you should just directly edit the generic-host template and be done with it.

Truthfully, I’m only telling you what I do. Do whatever makes sense to you.

Configuring Services in Nagios for PNP4Nagios Graphing

You’ll get much more use out of service graphing than host graphing. Just as with hosts, the default configuration enables performance graphing for all services. Not all services emit performance data, and you may not want data from all services that do produce data. So, let’s fine-tune that configuration as well.

Still in /usr/local/nagios/etc/objects/templates.cfg, find the define service object with a name of generic-service. Disable performance data collection on it and add a stub service that enables performance graphing:
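A sketch of the two definitions (the same pattern as the host templates; the action_url follows the standard PNP4Nagios service-graph URL):

```
define service {
        name                    generic-service
        # ...existing settings unchanged, except:
        process_perf_data       0
        register                0
}

define service {
        name                    perf-service
        process_perf_data       1
        action_url              /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=$SERVICEDESC$' class='tips' rel='/pnp4nagios/index.php/popup?host=$HOSTNAME$&srv=$SERVICEDESC$
        register                0
}
```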

When you want to capture performance data from a service, prepend the new stub service to its use line. Ex: use perf-service,generic-service. The warnings from the host section about the order of items and the lack of a space after the comma in the use line transfer to the service definition.

Remember to check the configuration and restart Nagios after any changes to the .cfg files:

Example Configurations

In case the above doesn’t make sense, I’ll show you what I’m doing.

Most of the check_nt services emit performance data. I’m especially interested in CPU, disk, and memory. The uptime service also emits data, but for some reason, it doesn’t use the defined “counter” mode. Instead, it’s just a graph that steadily increases at each interval until you reboot, then it starts over again at zero. I don’t find that terribly useful, especially since Nagios has its own perfectly capable host uptime graphs. So, I first configure the “windows-server” host to show the performance action_url. Then I configure the desired default Windows services to capture performance data.

My /usr/local/nagios/etc/objects/windows.cfg:
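An abridged sketch of that file (the host name and check_command arguments are placeholders; yours will differ):

```
define host {
        name                    windows-server
        use                     perf-host,generic-host
        # ...existing settings unchanged...
        register                0
}

define service {
        use                     perf-service,generic-service
        host_name               winserver
        service_description     CPU Load
        check_command           check_nt!CPULOAD!-l 5,80,90
}

define service {
        use                     perf-service,generic-service
        host_name               winserver
        service_description     Memory Usage
        check_command           check_nt!MEMUSE!-w 80 -c 90
}
```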

Now, my hosts that inherit from the default Windows template have the extra action icon, but my other hosts do not:

The same story on the services page; services that track performance data have an icon, but the others do not:


Troubleshooting your PNP4Nagios Deployment

Not getting any data? First of all, be patient, especially when you’re just getting started. I have shown you how to set up the bulk mode with NPCD which means that data captures and graphing are delayed. I’ll explain why later, but for now, just be aware that it will take some time before you get anything at all.

If it’s been some time, say, 15 minutes, and you’re still not getting any data, go to the PNP4Nagios documentation site and download the verify_pnp_config file. Transfer it to your Nagios host. I just plop it into my Downloads folder as usual. Navigate to the folder where you placed yours, then run:
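The invocation looks like this; the --mode value must match the Bulk + NPCD style used in this walkthrough:

```shell
cd ~/Downloads
perl verify_pnp_config --mode bulk+npcd --config=/usr/local/nagios/etc/nagios.cfg --pnpcfg=/usr/local/pnp4nagios/etc
```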

That should give you the clues that you need to fix most any problems.

I did have one leftover problem, but only on my Ubuntu system where I had updated to PHP 7. The verify script passed everything, but trying to load any PNP4Nagios page gave me this error: “Call to undefined function simplexml_load_file()”. I only needed to install the PHP XML package to fix that: sudo apt install php-xml. I didn’t look up the equivalent on the other distributions.

Plugin Output for Performance Graphing

To determine if a plugin can be graphed, you could just look at its documentation. Otherwise, you’ll need to manually execute it from /usr/local/nagios/libexec. For instance, we’ll just use the first one that shows up on an Ubuntu system, check_apt:


See the pipe character (|) there after the available updates report? Then the jumble of characters after that? That’s all in the standard format for Nagios performance charting. That format is:

  1. A pipe character after the standard Nagios service monitoring result.
  2. A human-readable label. If the label includes any special characters, the entire label should be enclosed in single quotes.
  3. An equal sign (=)
  4. The reported value.
  5. Optionally, a unit of measure.
  6. A semicolon, optionally followed by a value for the warning level. If the warning level is visible on the produced chart, it will be indicated by a horizontal yellow line.
  7. A semicolon, optionally followed by a value for the critical level. If the critical level is visible on the produced chart, it will be indicated by a horizontal red line.
  8. A semicolon, optionally followed by the minimum value for the chart’s y-axis. Must be the same unit of measure as the value in #4. If not specified, PNP4Nagios will automatically set the minimum value. If this value would make the current value invisible, PNP4Nagios will set its own minimum.
  9. A semicolon, optionally followed by the maximum value for the chart’s y-axis. Must be the same unit of measure as the value in #4. If not specified, PNP4Nagios will automatically set the maximum value. If this value would make the current value invisible, PNP4Nagios will set its own maximum.
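To make the format concrete, here is a small Python sketch (an illustration only, not part of the walkthrough) that splits a plugin result into its text and metric fields; the sample string mimics typical check_ping output:

```python
import re

def parse_perfdata(plugin_output):
    """Split a Nagios plugin result into its human-readable text and its
    performance metrics: label=value[uom];[warn];[crit];[min];[max]"""
    text, _, perfdata = plugin_output.partition('|')
    metrics = {}
    # a label that contains special characters may be wrapped in single quotes
    for token in re.findall(r"(?:'[^']+'|[^\s=]+)=\S+", perfdata):
        label, _, data = token.partition('=')
        fields = data.split(';')
        value_match = re.match(r'^-?[\d.]+', fields[0])
        value = float(value_match.group())
        uom = fields[0][value_match.end():]          # e.g. ms, %, B; may be empty
        warn, crit, minimum, maximum = (fields + [''] * 4)[1:5]
        metrics[label.strip("'")] = {'value': value, 'uom': uom, 'warn': warn,
                                     'crit': crit, 'min': minimum, 'max': maximum}
    return text.strip(), metrics

text, metrics = parse_perfdata(
    'PING OK - Packet loss = 0%, RTA = 0.80 ms|'
    'rta=0.800000ms;3000.000000;5000.000000;0.000000 pl=0%;80;100;0')
print(metrics['rta'])   # value 0.8, uom 'ms', warn/crit/min populated, max empty
```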

This format is defined by Nagios, and PNP4Nagios conforms to it. You can read more about the format in the Nagios plugin development guidelines.

My plugins did not originally emit any performance data. I have been working on that and should hopefully have all of that work completed before you read this article.

My PNP4Nagios Configuration Philosophy

I had several decision points when setting up my system. You may choose to diverge as it meets your needs. I’ll use this section to explain why I made the choices that I did.

Why “Bulk with NPCD” Mode?

Initially, I tried to set up PNP4Nagios in “synchronous” mode. That would cause Nagios to call on PNP4Nagios to process performance data immediately after every check’s results were returned. I chose it because it seemed like the path of least resistance.

It didn’t work for me. I’m betting that I did something wrong. But, I didn’t get my problem sorted out. I found a lot more information on the NPCD mode. So, I switched. Then I researched the differences. I feel like I made the correct choice.

You can read up on the available modes yourself:

In synchronous mode, Nagios can’t do anything while PNP4Nagios processes the return information. That’s because it all occurs in the same thread; we call that behavior “blocking”. According to the PNP4Nagios documentation, that method “will work very good up to about 1,000 services in a 5-minute interval”. I assume that’s CPU-driven, but I don’t know. I also don’t know how to quantify or qualify “will work very good”. I also don’t know what sort of environments any of my readers are using.

Bulk mode moves the processing of data from per-return-of-results to gathering results for a while and then processing them all at once. The documentation says that testing showed that 2,000 services were processed in .06 seconds. That’s easier to translate to real-world systems, although I still don’t know the overall conditions that generated that benchmark.

When we add NPCD onto bulk mode, then we don’t block Nagios at all. Nagios still does the bulk gathering, but NPCD processes the data, not Nagios. I chose this method as it means that as long as your Nagios system is multi-core and not already overloaded, you should not encounter any meaningful interruption to your Nagios service by adding PNP4Nagios. It should also work well with most installation sizes. For really big Nagios/PNP4Nagios installations (also not qualified or quantified), you can follow their instructions on configuring “Gearman Mode”.

One drawback to this method: Your “4 Hour” charts will frequently show an empty space at the right of their charts. That’s because they will be drawn in-between collection/processing periods. All of the data will be filled in after a few minutes. You just may not have instant gratification.

Why Not Just Allow Every Host and Service to be Monitored?

The default configuration of PNP4Nagios results in every single host and every single service being enabled for monitoring. From an “ease-of-configuration” standpoint, that’s tempting. Once you’ve set the globals, you literally don’t have to do anything else.

However, we are also integrating directly with Nagios’ generated HTML pages. Whereas PNP4Nagios can determine that a service doesn’t have performance data because Nagios won’t have generated anything, the front-end just has an instruction to add a linked icon to every single service. So, if you just globally enable it, then you’ll get a lot of links that don’t work.

If you’re the only person using your environment, maybe that’s OK. But, if you share the environment, then you’ll start getting calls wanting you to “fix” all those broken links. It won’t take long before the time spent explaining (and re-explaining) that not all of the links have anything to show outweighs the configuration effort you saved.

Why Not Just Change the Inheritance Tree?

If you want, you could have your performance-enabled hosts and services inherit from the generic-host/generic-service templates, then have later templates, hosts, and services inherit from those. If that works for you, then take that approach.

I chose to employ multiple inheritance as a way of overriding the default templates because it seemed like less effort to me. When I went to modify the services, I simply copied “perf-service,” to the clipboard and then selectively pasted it into the use line of every service that I wanted. That was easier for me than a selective find-replace operation or manual replacement. It also seems to me that it would be easier to revert that decision if I make a mistake somewhere.

I can envision very solid arguments for handling this differently. I won’t argue. I just think that this approach was best for my situation.

Free Hyper-V Script: Virtual Machine Storage Consistency Diagnosis

Free Hyper-V Script: Virtual Machine Storage Consistency Diagnosis

Hyper-V allows you to scatter a virtual machine’s files just about anywhere. That might be good; it might be bad. That all depends on your perspective and need. No good can come from losing track of your files’ sprawl, though. Additionally, the placement and migration tools don’t always work exactly as expected. Even if the tools work perfectly, sometimes administrators don’t. To help you sort out these difficulties, I’ve created a small PowerShell script that will report on the consistency of a virtual machine’s file placement.

Free Hyper-V Script from Altaro

How the Script Works

I designed the script to be quick and simple to use and understand. Under the hood, it does these things:

  1. Gathers the location of the virtual machine’s configuration files. It considers that location to be the root.
  2. Gathers the location of the virtual machine’s checkpoint files. It compares that to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  3. Gathers the location of the virtual machine’s second-level paging files. It compares that to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  4. Gathers the location of the virtual machine’s individual virtual hard disk files. It compares each to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  5. Gathers the location of any CD or DVD image files attached to the virtual machine. It compares each to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  6. Emits a custom object that contains:
    1. The name of the virtual machine
    2. The name of the virtual machine’s Hyper-V host
    3. The virtual machine’s ID (a string-formatted GUID)
    4. A boolean ($true or $false) value that indicates if the virtual machine’s storage is consistent
    5. An array that contains a record of each location. Each entry in the array is itself a custom object with these two pieces:
      1. The component name (Configuration, Checkpoints, etc.)
      2. The location of the component

It doesn’t check for pass-through. A VM with pass-through disks would have inconsistent storage placement by definition, but this script completely ignores pass-through.

The script doesn’t go through a lot of validation to check if storage locations are truly unique. If you have a VM using “\\system\VMs” and “\\system\VMFiles” but they’re both the same physical folder, the VM will be marked as inconsistent.
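The gathering and comparison steps can be sketched in a few lines of PowerShell (a simplified illustration only; the VM name is hypothetical, and the real script adds the DisksOnly/IgnoreVHDFolder handling, cluster awareness, and the output object):

```powershell
$VM = Get-VM -Name 'svtest'                      # hypothetical VM name
$Root = $VM.ConfigurationLocation                # step 1: treat this as the root

$Locations = @()
$Locations += @{ Component = 'Checkpoints';       Location = $VM.SnapshotFileLocation }
$Locations += @{ Component = 'SecondLevelPaging'; Location = $VM.SmartPagingFilePath }
foreach ($Disk in ($VM | Get-VMHardDiskDrive))   # step 4: each VHD's folder
{
    $Locations += @{ Component = 'Virtual Hard Disk'; Location = (Split-Path -Path $Disk.Path -Parent) }
}
foreach ($Dvd in ($VM | Get-VMDvdDrive | Where-Object -Property Path))   # step 5: attached ISOs
{
    $Locations += @{ Component = 'CD/DVD Image'; Location = (Split-Path -Path $Dvd.Path -Parent) }
}

# steps 2-5: any location that differs from the root marks the VM inconsistent
$Consistent = @($Locations | Where-Object { $_.Location -ne $Root }).Count -eq 0
```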

Script Usage

I built the script to be named Get-VMStorageConsistency. All documentation and examples are built around that name. You must supply it with a virtual machine. All other parameters are optional.

Use Get-Help Get-VMStorageConsistency -Full to see the built-in help.


  • VM (aliases “VMName” and “Name”): The virtual machine(s) to check. The input type is an array, so it will accept one or more of any of the following:
    • A string that contains the virtual machine’s name
    • A VirtualMachine object (as output from Get-VM, etc.)
    • A GUID object that contains the virtual machine’s ID
    • A WMI Msvm_ComputerSystem object
    • A WMI MSCluster_Resource object
  • ComputerName: The name of the host that owns the virtual machine. If not specified, defaults to the local system. Ignored if VM is anything other than a String or GUID. Only accepts strings.
  • DisksOnly: A switch parameter (include if you want to use it, leave off otherwise). If specified, the script only checks at the physical storage level for consistency. Examples:
    • With this switch, C:\VMs and C:\VMCheckpoints are the same thing (simplifies to C:\)
    • With this switch, C:\ClusterStorage\VMs1\VMFiles and C:\ClusterStorage\VMs1\VHDXs are the same thing (simplifies to C:\ClusterStorage\VMs1)
    • With this switch, \\storage1\VMs\VMFiles and \\storage1\VMs\VHDFiles are the same thing (simplifies to \\storage1\VMs)
    • Without this switch, all of the above are treated as unique locations
  • IgnoreVHDFolder: A switch parameter (include if you want to use it, leave off otherwise). If specified, ignores the final “Virtual Hard Disks” path for virtual hard disks. Notes:
    • With this switch, VHDXs in “C:\VMs\Virtual Hard Disks” will be treated as though they were found in C:\VMs
    • With this switch, VHDXs in “C:\VMs\Virtual Hard Disks\VHDXs” will not be treated specially. This is because there is a folder underneath the one named “Virtual Hard Disks”.
    • With this switch, a VM configured to hold its checkpoints in a folder named “C:\VMs\Virtual Hard Disks\” will not treat its checkpoints especially. This is because the switch only applies to virtual hard disk files.
  • Verbose: A built-in switch parameter. If specified, the script will use the Verbose output stream to show you the exact comparison that caused a virtual machine to be marked as inconsistent. Note: The script only traps the first item that causes a virtual machine to be inconsistent. That’s because the Consistent marker is a boolean; in boolean logic, it’s not possible to become more false. Therefore, I considered it to be wasteful to continue processing. However, all locations are stored in the Location property of the report. You can use that for an accurate assessment of all the VMs’ component locations.

The Output Object

The following shows the output of the cmdlet run on my host with the -IgnoreVHDFolder and -Verbose switches set:


Output object structure:

  • Name: String that contains the virtual machine’s name
  • ComputerName: String that contains the name of the Hyper-V host that currently owns the virtual machine
  • VMId: String representation of the virtual machine’s GUID
  • Consistent: Boolean value that indicates whether or not the virtual machine’s storage is consistently placed
  • Location: An array of custom objects that contain information about the location of each component

Location object structure:

  • Component: A string that identifies the component. I made these strings up. Possible values:
    • Configuration: Location of the “Virtual Machines” folder that contains the VM’s definition files (.xml, .vmcx, .bin, etc.)
    • Checkpoints: Location of the “Snapshots” folder configured for this virtual machine. Note: That folder might not physically exist if the VM uses a non-default location and has never been checkpointed.
    • SecondLevelPaging: Location of the folder where the VM will place its .slp files if it ever uses second-level paging.
    • Virtual Hard Disk: Full path of the virtual hard disk.
    • CD/DVD Image: Full path of the attached ISO.

An example that puts the object to use:
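For instance, assuming the script file sits at C:\Scripts\Get-VMStorageConsistency.ps1:

```powershell
Get-VM |
    C:\Scripts\Get-VMStorageConsistency.ps1 |
    Where-Object -Property Consistent -EQ $false |
    Select-Object -ExpandProperty Name
```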

The above will output only the names of local virtual machines with inconsistent storage.

Script Source

The script is intended to be named “Get-VMStorageConsistency”. As shown, you would call its file (ex.: C:\Scripts\Get-VMStorageConsistency). If you want to use it dot-sourced or in your profile, uncomment lines 55, 56, and 262 (subject to change from editing; look for the commented-out function { } delimiters).


How Device Naming for Network Adapters Works in Hyper-V 2016

How Device Naming for Network Adapters Works in Hyper-V 2016

Not all of the features introduced with Hyper-V 2016 made a splash. One of the less-publicized improvements allows you to determine a virtual network adapter’s name from within the guest operating system. I don’t even see it in any official documentation, so I don’t know what to officially call it. The related settings use the term “device naming”, so we’ll call it that. Let’s see how to put it to use.

Requirements for Device Naming for Network Adapters in Hyper-V 2016

For this feature to work, you need:

  • 2016-level hypervisor: Hyper-V Server, Windows Server, Windows 10
  • Generation 2 virtual machine
  • Virtual machine with a configuration version of at least 6.2
  • Windows Server 2016 or Windows 10 guest

What is Device Naming for Hyper-V Virtual Network Adapters?

You may already be familiar with a technology called “Consistent Device Naming”. If you were hoping to use that with your virtual machines, sorry! The device naming feature utilized by Hyper-V is not the same thing. I don’t know for sure, but I’m guessing that the Hyper-V Integration Services enable this feature.

Basically, if you were expecting to see something different in the Network and Sharing Center, it won’t happen:

Nor in Get-NetAdapter:


In contrast, a physical system employing Consistent Device Naming would have automatically named the network adapters in some fashion that reflected their physical installation. For example, “SLOT 4 Port 1” would be the name of the first port of a multi-port adapter installed in the fourth PCIe slot. It may not always be easy to determine how the manufacturers numbered their slots and ports, but it helps more than “Ethernet 5”.

Anyway, you don’t get that out of Hyper-V’s device naming feature. Instead, it shows up as an advanced feature. You can see that in several ways. First, I’ll show you how to set the value.

Setting Hyper-V’s Network Device Name in PowerShell

From the management operating system or a remote PowerShell session opened to the management operating system, use Set-VMNetworkAdapter:
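A sketch of the call (sv16g2 is the VM name from my test system; substitute your own):

```powershell
Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On
```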

This enables device naming for all of the virtual adapters connected to the virtual machine named sv16g2.

If you try to enable it for a generation 1 virtual machine, you get a clear error (although it sometimes inexplicably complains about the DVD drive first, it does eventually get where it’s going):

The cmdlet doesn’t know if the guest operating system supports this feature (or even if the virtual machine has an installed operating system).

If you don’t want the default “Network Adapter” name, then you can set the name at the same time that you enable the feature:
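Something like the following sketch (the name “Cluster” is only an example):

```powershell
Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On -Passthru |
    Rename-VMNetworkAdapter -NewName 'Cluster'
```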

These cmdlets all accept pipeline information as well as a number of other parameters. You can review the TechNet article that I linked in the beginning of this section. I also have some other usage examples on our omnibus networking article.

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter.

Note: You must reboot the guest operating system for it to reflect the change.

Setting Hyper-V’s Network Device Name in the GUI

You can use Hyper-V Manager or Failover Cluster Manager to enable this feature. Just look at the bottom of the Advanced Features sub-tab of the network adapter’s tab. Check the Enable device naming box. If that box does not appear, you are viewing a generation 1 virtual machine.


Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter. See the preceding section for instructions.

Note: You must reboot the guest operating system for it to reflect the change.

Viewing Hyper-V’s Network Device Name in the Guest GUI

This will only work in Windows 10/Windows Server 2016 (GUI) guests. The screenshots in this section were taken from a system that still had the default name of Network Adapter.

  1. Start in the Network Connections window. Right-click on the adapter and choose Properties:
  2. When the Ethernet # Properties dialog appears, click Configure:
  3. On the Microsoft Hyper-V Network Adapter Properties dialog, switch to the Advanced tab. You’re looking for the Hyper-V Network Adapter Name property. The Value field holds the name that Hyper-V has recorded for the adapter:

If the Value field is empty, then the feature is not enabled for that adapter or you have not rebooted since enabling it. If the Hyper-V Network Adapter Name property does not exist, then you are using a down-level guest operating system or a generation 1 VM.

Viewing Hyper-V’s Network Device Name in the Guest with PowerShell

As you saw in the preceding section, this field appears with the adapter’s advanced settings. Therefore, you can view it with the Get-NetAdapterAdvancedProperty cmdlet. To see all of the settings for all adapters, use that cmdlet by itself.


Tab completion doesn’t work for the names, so drilling down just to that item can be a bit of a chore. The long way:

Slightly shorter way:

One of many not-future-proofed-but-works-today ways:
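All three might look something like this (sketches; the filtering parameters come from the Get-NetAdapterAdvancedProperty documentation):

```powershell
# the long way: retrieve everything, filter client-side
Get-NetAdapterAdvancedProperty | Where-Object -Property DisplayName -EQ 'Hyper-V Network Adapter Name'

# slightly shorter: let the cmdlet filter on the display name
Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name'

# works today, but keys off the underlying registry keyword rather than the display name
Get-NetAdapterAdvancedProperty -RegistryKeyword 'HyperVNetworkAdapterName'
```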

For automation purposes, you need to query the DisplayValue or the RegistryValue property. I prefer the DisplayValue. It is represented as a standard System.String. The RegistryValue is represented as a System.Array of System.String (or, String[]). It will never contain more than one entry, so dealing with the array is just an extra annoyance.

To pull that field, you could use select (an alias for Select-Object), but I wouldn’t:
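That approach would look like this:

```powershell
Get-NetAdapterAdvancedProperty -Name Ethernet -DisplayName 'Hyper-V Network Adapter Name' |
    Select-Object -Property DisplayValue
```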


I don’t like select in automation because it creates a custom object. Once you have that object, you then need to take an extra step to extract the value of that custom object. The reason that you used select in the first place was to extract the value. select basically causes you to do double work.

So, instead, I recommend the more .Net way of using a dot selector:
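For example:

```powershell
(Get-NetAdapterAdvancedProperty -Name Ethernet -DisplayName 'Hyper-V Network Adapter Name').DisplayValue
```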

You can store the output of that line directly into a variable that will be created as a System.String type that you can immediately use anywhere that will accept a String:
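For instance:

```powershell
$AdapterName = (Get-NetAdapterAdvancedProperty -Name Ethernet -DisplayName 'Hyper-V Network Adapter Name').DisplayValue
```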

Notice that I injected the Name property with a value of Ethernet. I didn’t need to do that. I did it to ensure that I only get a single response. Of course, it would fail if the VM didn’t have an adapter named Ethernet. I’m just trying to give you some ideas for your own automation tasks.

Viewing Hyper-V’s Network Device Name in the Guest with Regedit

All of the network adapters’ configurations live in the registry. It’s not exactly easy to find, though. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}. Not sure if it’s a good thing or a bad thing, but I can identify that key on sight now. Expand that out, and you’ll find several subkeys with four-digit names. They’ll start at 0000 and count upward. One of them corresponds to the virtual network adapter. The one that you’re looking for will have a KVP named HyperVNetworkAdapterName. Its value will be what you came to see. If you want further confirmation, there will also be a KVP named DriverDesc with a value of Microsoft Hyper-V Network Adapter (and possibly a number, if it’s not the first).

Understanding the Windows Server Semi-Annual Channel

Understanding the Windows Server Semi-Annual Channel

Microsoft has made major changes to the way that they build and release their operating systems. The new Windows Server “Semi-Annual Channel” (SAC) marks a substantial departure from the familiar release pattern that Microsoft has established. The change has pleased some people, upset many, and confused even more. With all the flash of new features, it’s easy to miss the finer points — specifically, how you, your organization, and your issues fit into the new model.

The Traditional Microsoft OS Release Model

Traditionally, Microsoft would work on pre-release builds internally and with a small group of customers, partners, and other interested stakeholders (such as MVPs). Then, they would finesse the builds with open betas (usually called “Release Candidates”). Then, there would be an RTM (release-to-manufacturing) event followed by GA (general availability). The release would then be considered “current”. It would enjoy regular updates including service packs and feature upgrades for a few years, then it would go into “extended support” where it would only get stability and security updates. While customers purchased and worked with the “current” version, work on the next version would begin in parallel.

Not every version followed this model exactly, but all of them were similar. The most recent Windows Server operating system to employ this model is Windows Server 2016.

Changes to the Model with Windows 10/Server 2016

The “Windows Insider Program” was the first major change to Microsoft’s OS build and release model. Initially, it was most similar to the “release candidate” phase of earlier versions. Anyone could get in and gain access to Windows 10 builds before Windows 10 could even be purchased. However, it deviated from the RC program in two major ways:

  • The Windows Insider Program includes an entire community.
  • The Windows Insider Program continues to provide builds after Windows 10 GA.

The Windows Insider Community

Most of us probably began our journey to Windows Insider by clicking an option in the Windows 10 update interface. However, you can also sign up using the dedicated Windows Insider web page. You get access to a dedicated forum. And, of course, you’ll get e-mail notifications from the program team. You can tell Microsoft what you think about your build using the Feedback Hub. That applet is not exclusive to Insiders, but they’ll know if you’re talking about an Insider build or a GA build.

Ongoing Windows Insider Builds

I expect that most Insiders prize access to new builds of Windows 10 above the other perks of the program. The Windows 10 Insider Program allows you to join one of multiple “rings” (one per joined Windows 10 installation). The ring that an installation belongs to dictates how close it will be to the “cutting edge”. You can read up on these rings and what they mean on the Insider site.

The most important thing about Windows Insider builds — and the reason that I brought them up at all in this article — is that they are not considered production-ready. The fast ring builds will definitely have problems. The preview release builds will likely have problems. You’re not going to get help for those problems outside of the Insider community, and any fix will almost certainly include the term “wait for the next build” (or the next… or the one after… or some indeterminate future build). I suspect that most software vendors will be… reluctant… to officially support any of their products on an Insider build.

Windows Server Insider Program

The Windows Server Insider Program serves essentially the same purpose as the Windows 10 Insider Program, but for the server operating system. The sign-up process is a bit different, as it goes through the Windows Insider Program for Business site. The major difference is the absence of any “rings”. Only one current Windows Server Insider build exists at any given time.

Introducing the Windows Server Semi-Annual Channel

I have no idea what you’ve already read, so I’m going to assume that you haven’t read anything. But, I want to start off with some very important points that I think others gloss over or miss entirely:

  • Releases in the Windows Server Semi-Annual Channel are not Windows Server 2016! Windows Server 2016 belongs to the Long-Term Servicing Channel (LTSC). The current SAC is simply titled “Windows Server, version 1709”.
  • You cannot upgrade from Windows Server 2016 to the Semi-Annual Channel. For all I know, that might change at some point. Today, you can only switch between LTSC and SAC via a complete wipe-and-reinstall.
  • On-premises Semi-Annual Channel builds require Software Assurance (I’d like to take this opportunity to point out: so does Nano). I haven’t been in the reseller business for a while so I don’t know the current status, but I was never able to get Software Assurance added to an existing license. It was always necessary to purchase it at the same time as its base volume Windows Server license. I don’t know of any way to get Software Assurance with an OEM build. All of these things may have changed. Talk to your reseller. Ask questions. Do your research. Do not blindly assume that you are eligible to use an SAC build.
  • The license for Windows Server is interchangeable between LTSC and SAC. Meaning that, if you are a Software Assurance customer, you’ll be able to download/use either product per license count (but not both; 1 license count = 1 license for LTSC or 1 license for SAC).
  • The keys for Windows Server are not interchangeable between LTSC and SAC. I’m not yet sure how this will work out for Automatic Virtual Machine Activation. I did try adding the WS2016 AVMA key to a WS1709 guest and it did not like that one bit.
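If you want to test key acceptance yourself, applying the key inside the guest with slmgr is the quick check. This is only a sketch: the key value below is a placeholder, not a real AVMA key, and the behavior on a WS1709 guest is exactly the mismatch described above.

```powershell
# Inside the guest, from an elevated prompt. $AvmaKey is a placeholder;
# substitute the published AVMA key for your host/guest edition.
$AvmaKey = 'XXXXX-XXXXX-XXXXX-XXXXX-XXXXX'

# Install the key; a key from the wrong channel/version is rejected here
cscript.exe //B "$env:SystemRoot\System32\slmgr.vbs" /ipk $AvmaKey

# Display detailed license status to confirm AVMA activation
cscript.exe //B "$env:SystemRoot\System32\slmgr.vbs" /dlv
```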
  • SAC does not offer the Desktop Experience. Meaning, there is no GUI. There is no way to install a GUI. You don’t get a GUI. You get only Core.
  • Any given SAC build might or might not have the same available roles and features as the previous SAC build. Case in point: Windows Server, version 1709 does not support Storage Spaces Direct.
  • SAC builds are available in Azure.
  • SAC builds are supported for production workloads. SAC follows the Windows Server Insider builds, but SAC is not an Insider build.
  • SAC builds will only be supported for 18 months. You can continue using a specific SAC build after that period, but you can’t get support for it.
  • SAC builds should release roughly every six months.
  • SAC builds will be numbered for their build month. Ex: 1709 = “2017 September (09)”.
  • SAC ships in Standard and Datacenter flavors only.


The Semi-Annual Channel is Not for Everyone

Lots of people have lots of complaints about the Windows Server Semi-Annual Channel. I won’t judge the reasonableness or validity of any of them. However, I think that many of these complaints are based on a misconception. People have become accustomed to a particular release behavior, so they expected SAC to serve as vNext of Windows Server 2016. Looking at Microsoft’s various messages on the topic, I don’t feel they did a very good job explaining the divergence. So, if that’s how you look at it, then it’s completely understandable that you’d feel like WS1709 slapped you in the face.

However, it looks different when you realize that WS1709 is not intended as a linear continuation. vNext of Windows Server 2016 will be another release in the LTSC cycle. It will presumably arrive sometime late next year or early the following year, and it will presumably be named Windows Server 2018 or Windows Server 2019. Unless there are other big changes in our future, it will have the Desktop Experience and at least the non-deprecated roles and features that you currently have available in WS2016. Basically, if you just follow the traditional release model, you can ignore the existence of the SAC releases.

Some feature updates in SAC will also appear in LTSC updates. As an example, both WS1709 and concurrent WS2016 patches introduce the ability for containers to use persistent data volumes on Cluster Shared Volumes.

Who Benefits from the Semi-Annual Channel?

If SAC is not meant for everyone, then who should use it? Let’s get one thing out of the way: no organization will use SAC for everything. The LTSC will always have a place. Do not feel like you’re going to be left behind if you stick with the LTSC.

I’ll start by simplifying some of Microsoft’s marketing-speak about targeted users:

  • Organizations with cloud-style deployments
  • Systems developers

Basically, you need to have something akin to a mission-critical level of interest in one or more of these topics:

  • Containers and related technologies (Docker, Kubernetes, etc.)
  • Software-defined networking
  • High-performance networking. I’m not talking about the “my test file copy only goes 45Mbps” flavor of “high performance” networking, but the “processing TCP packets between the real-time interface and its OLTP database causes discernible delays for my 150,000 users” flavor.
  • Multiple large Hyper-V clusters

Read the “What’s New” article for yourself. If you can’t find any must-have-yesterdays in that list, then don’t worry that you might have to wait twelve to eighteen months for vNext of LTSC to get them.

Who Benefits from the Long-Term Servicing Channel?

As I said, the LTSC isn’t going anywhere. Not only that, we will all continue to use more LTSC deployments than SAC deployments.

Choose LTSC for:

  • Stability. Even though SAC will be production-ready, the lead time between initial conception and first deployment will be much shorter. The wheel for new SAC features will be blocky.
  • Predictability: The absence of S2D in WS1709 caught almost everyone by surprise. That sort of thing won’t happen with LTSC. They’ll deprecate features first to give you at least one version’s worth of fair warning. (Note: S2D will return; it’s not going away).
  • Third-party applications: We all have vendors that are still unsure about WS2008. They’re certainly not going to sign off on SAC builds.
  • Line-of-business applications: Whether third-party or Microsoft, the big app server that holds your organization up doesn’t need to be upgraded twice each year.
  • The GUI.

What Does SAC Mean for Hyper-V?

The above deals with Windows Server Semi-Annual Channel in a general sense. Since this is a Hyper-V blog, I can’t leave without talking about what SAC means for Hyper-V.

For one thing, SAC does not have a Hyper-V Server distribution. I haven’t heard of any long-term plans, so the safest bet is to assume that future releases of Hyper-V Server will coincide with LTSC releases.

As far as the Hyper-V role goes, it fits almost everything that SAC targets. Just look at the new Hyper-V-related features in 1709:

  • Enhanced VM load-balancing
  • Storage of VMs in storage-class memory (non-volatile RAM)
  • vPMEM
  • Splitting of “guest state” information out of the .vmrs file into its own .vmgs file
  • Support for running the host guardian service as a virtual machine
  • Support for Shielded Linux VMs
  • Virtual network encryption

Looking at that list, “Shielded Linux VMs” seems to have the most appeal to a small- or medium-sized organization. As I understand it, that’s not a feature so much as a support statement. Either way, I can shield a Linux VM on my fully-patched Windows Server 2016 build 1607 (LTSC) system.

As for the rest of the features, they will find the widest adoption in larger, more involved Hyper-V installations. I obviously can’t speak for everyone, but it seems to me that anyone that needs those features today won’t have any problems accepting the terms that go along with the switch to SAC.

For the rest of us, Hyper-V in LTSC has plenty to offer.

What to Watch Out For

Even though I don’t see any serious problems that will result from sticking with the LTSC, I don’t think this SKU split will be entirely painless.

For one thing, the general confusion over “Windows Server 2016” vs. “Windows Server, version 1709” includes a lot of technology authors. I see a great many articles with titles that include “Windows Server 2016 build 1709”. So, when you’re looking for help, you’re going to need to be on your toes. I think the limited appeal of the new features will help to mitigate that somewhat. Still, if you’re going to be writing, please keep the distinction in mind.

For another, a lot of technology writers (including those responsible for documentation) work only with the newest, hottest tech. They might not even think to point out that one feature or another belongs only to SAC. I think that the smaller target audience for the new features will keep this problem under control, as well.

The Future of LTSC/SAC

All things change. Microsoft might rethink one or both of these release models. Personally, I think they’ve made a good decision with these changes. Larger customers will be able to sit out on the bleeding edge and absorb all the problems that come with early adoption. By the time these features roll into LTSC, they’ll have undergone solid vetting cycles on someone else’s production systems. Customers in LTSC will benefit from the pain of others. That might even entice them to adopt newer releases earlier.

Most importantly, effectively nothing changes for anyone that sticks with the traditional regular release cycle. Windows Server Semi-Annual Channel offers an alternative option, not a required replacement.

7 Powerful Scripts for Practical Hyper-V Network Configurations

7 Powerful Scripts for Practical Hyper-V Network Configurations

I firmly believe in empowerment. I feel that I should supply you with knowledge, provide you with how-tos, share insights and experiences, and release you into the world to make your own decisions. However, I came to that approach by standing at the front of a classroom. During class, we’d almost invariably walk through exercises. Since this is a blog and not a classroom, I do things differently. We don’t have common hardware in a controlled environment, so I typically forgo the exercises bit. As a result, that leaves a lot of my readers at the edge of a cliff with no bridge to carry them from theory to practice. And, of course, there are those of you that would love to spend time reading about concepts but just really need to get something done right now. If you’re stopped at Hyper-V networking, this is the article for you.

Script Inventory

These scripts are included in this article:

  • Configure networking for a single host with a single adapter
  • Configure a standalone host with 2-4 gigabit adapters for converged networking
  • Configure a standalone host with 2-4 10 GbE adapters for converged networking
  • Configure a clustered host with 2-4 gigabit adapters for converged networking
  • Configure a clustered host with 2-4 10 GbE adapters for converged networking
  • Set preferred order for cluster Live Migration networks
  • Exclude cluster networks from Live Migration

Basic Usage

I’m going to show each item as a stand-alone script. First, you’ll locate the one that best aligns with what you’re trying to accomplish. You’ll copy/paste that into a .ps1 PowerShell script file on your system. You’ll need to edit the script to provide information about your environment so that it will work for you. I’ll have you set each of those items at the beginning of the script. Then, you’ll just need to execute the script on your host.

Most scripts have their own “basic usage” heading that explains a bit about how you’d use them without modification.

Enhanced Usage

I could easily compile these into standalone executables that you couldn’t tinker with. Even though I want to give you a fully prepared springboard, I also want you to learn how the system works and what you’re doing to it.

Most scripts have their own “enhanced usage” heading that gives some ideas on how you might exploit or extend them yourself.

Configure networking for a single host with a single adapter

Use this script for a standalone system that only has one physical adapter. It will:

  • Disable VMQ for the physical adapter
  • Create a virtual switch on the adapter
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it.
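A minimal sketch of what this script does follows. Every name and address here is a placeholder for you to replace: the physical adapter name, switch name, VLAN ID, IP configuration, and DNS servers are all assumptions about your environment.

```powershell
# --- Edit these values for your environment (all placeholders) ---
$PhysicalAdapterName = 'Ethernet'
$SwitchName          = 'vSwitch'
$ManagementVlan      = 10                  # set to 0 to leave the vNIC untagged
$ManagementIP        = '192.168.10.20'
$PrefixLength        = 24
$Gateway             = '192.168.10.1'
$DnsServers          = @('192.168.10.5', '192.168.10.6')

# Gigabit adapters generally do not benefit from VMQ, so disable it
Set-NetAdapterVmq -Name $PhysicalAdapterName -Enabled $false

# Create the external switch; AllowManagementOS $false because we build the
# management vNIC ourselves in the next step
New-VMSwitch -Name $SwitchName -NetAdapterName $PhysicalAdapterName -AllowManagementOS $false

# Create the management-OS virtual adapter and optionally place it in a VLAN
Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName $SwitchName
if ($ManagementVlan -gt 0) {
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId $ManagementVlan
}

# Assign IP, gateway, and DNS to the new vNIC
$vNic = Get-NetAdapter -Name 'vEthernet (Management)'
New-NetIPAddress -InterfaceIndex $vNic.ifIndex -IPAddress $ManagementIP -PrefixLength $PrefixLength -DefaultGateway $Gateway
Set-DnsClientServerAddress -InterfaceIndex $vNic.ifIndex -ServerAddresses $DnsServers
```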

Advanced Usage for this Script

As-is, this script should be complete for most typical single-adapter systems. You might choose to disable some items. For instance, if you are using this on Windows 10, you might not want to provide a fixed IP address. In that case, just put a # sign at the beginning of each of the lines that assign the IP configuration (lines 42 onward). When the virtual network adapter is created, it will remain in DHCP mode.


Configure a standalone host with 2-4 gigabit adapters for converged networking

Use this script for a standalone host that has between two and four gigabit adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. Be aware that it will have problems if you already have a team.
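The teaming portion looks roughly like this; after the team exists, the switch and management vNIC steps proceed exactly as in the single-adapter sketch. Adapter, team, and switch names are placeholders.

```powershell
# --- Edit these values for your environment (all placeholders) ---
$TeamMembers = @('Ethernet', 'Ethernet 2')   # two to four gigabit NICs
$TeamName    = 'ConvergedTeam'
$SwitchName  = 'vSwitch'

# Switch-independent/dynamic is the usual teaming choice under a Hyper-V switch
New-NetLbfoTeam -Name $TeamName -TeamMembers $TeamMembers `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

# Disable VMQ on the physical members and on the logical team adapter
Set-NetAdapterVmq -Name $TeamMembers -Enabled $false
Set-NetAdapterVmq -Name $TeamName -Enabled $false

# Bind the virtual switch to the team's logical adapter
New-VMSwitch -Name $SwitchName -NetAdapterName $TeamName -AllowManagementOS $false
```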

Advanced Usage for this Script

This script serves as the base for the remaining scripts on this page. Likewise, you could use it as a base for your own. You could also use any of the items as examples for whatever similar actions you wish to accomplish in your own scripts.


Configure a standalone host with 2-4 10 GbE adapters for converged networking

Use this script for a standalone host that has between two and four 10GbE adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

It won’t take a great deal of sleuthing to discover that this script is identical to the preceding one, except that it does not disable VMQ.
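Concretely, the only lines removed from the gigabit version are the VMQ ones (variable names as in the sketch above). On 10 GbE hardware, VMQ earns its keep, so these lines stay out and VMQ stays enabled:

```powershell
# Present only in the gigabit script; the 10 GbE script omits these
# so that VMQ remains enabled on the team members and the team adapter
Set-NetAdapterVmq -Name $TeamMembers -Enabled $false
Set-NetAdapterVmq -Name $TeamName -Enabled $false
```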


Configure a clustered host with 2-4 gigabit adapters for converged networking

Use this script for a host that has between two and four gigabit adapters that will be a member of a cluster. Like the previous scripts, it will employ a converged networking configuration. The script will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.
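On top of the team and switch creation shown earlier, the cluster-specific additions look roughly like the following. The vNIC names, VLAN IDs, and addresses are all placeholders; note that only the management adapter gets a gateway.

```powershell
$SwitchName = 'vSwitch'   # placeholder; must match the switch created earlier

# Additional management-OS vNICs for cluster communications and Live Migration
Add-VMNetworkAdapter -ManagementOS -Name 'Cluster' -SwitchName $SwitchName
Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName $SwitchName
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Cluster' -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 30

# IP and subnet mask only; no gateway on the non-routed cluster networks
$cluster = Get-NetAdapter -Name 'vEthernet (Cluster)'
New-NetIPAddress -InterfaceIndex $cluster.ifIndex -IPAddress '192.168.20.20' -PrefixLength 24
$lm = Get-NetAdapter -Name 'vEthernet (LiveMigration)'
New-NetIPAddress -InterfaceIndex $lm.ifIndex -IPAddress '192.168.30.20' -PrefixLength 24

# Keep the cluster-only adapters from registering in DNS
Set-DnsClient -InterfaceIndex $cluster.ifIndex, $lm.ifIndex -RegisterThisConnectionsAddress $false
```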

Advanced Usage for this Script

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.


Configure a clustered host with 2-4 10 GbE adapters for converged networking

This script is identical to the preceding except that it leaves VMQ enabled. It does the following:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

These notes are identical to those of the preceding script.

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.

Advanced Usage for this Script

These notes are identical to those of the preceding script.

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.


Set preferred order for cluster Live Migration networks

This script aligns with the two preceding scripts to ensure that the cluster chooses the named “Live Migration” adapter first when moving virtual machines between nodes. The “Cluster” virtual adapter will be used second. The management adapter will be used as the final fallback.

Basic Usage for this Script

Use this script after you’ve run one of the above two clustered host scripts and joined them into a cluster.
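The core of the ordering script is small. The cluster stores the preference as a semicolon-delimited list of cluster network GUIDs in the “Virtual Machine” resource type’s MigrationNetworkOrder parameter. The network names below are the ones my scripts create; substitute yours, and remember this runs once per cluster, not per node.

```powershell
# Preferred order, first choice first; names must match your cluster networks
$orderedNetworkNames = @('Live Migration', 'Cluster', 'Management')

# MigrationNetworkOrder expects cluster network GUIDs, semicolon-delimited
$order = ($orderedNetworkNames | ForEach-Object { (Get-ClusterNetwork -Name $_).Id }) -join ';'

Get-ClusterResourceType -Name 'Virtual Machine' |
    Set-ClusterParameter -Name 'MigrationNetworkOrder' -Value $order
```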

Advanced Usage for this Script

Modify this to change the order of Live Migration adapters. You must specify all adapters recognized by the cluster. Check the “MigrationExcludeNetworks” registry key that’s in the same location as “MigrationNetworkOrder”.


Exclude cluster networks from Live Migration

This script is intended to be used as an optional adjunct to the preceding script. Since my scripts set up all virtual adapters to be used in Live Migration, the network names used here are fabricated.

Basic Usage for this Script

You’ll need to set the network names to match yours, but otherwise, the script does not need to be altered.
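The exclusion works the same way as the ordering, just against a different parameter. As noted, the network names here are fabricated examples; substitute the names of the networks you actually want to keep out of Live Migration.

```powershell
# Fabricated example names; replace with your own cluster network names
$excludedNetworkNames = @('Storage', 'Backup')

# MigrationExcludeNetworks also takes semicolon-delimited cluster network GUIDs
$excluded = ($excludedNetworkNames | ForEach-Object { (Get-ClusterNetwork -Name $_).Id }) -join ';'

Get-ClusterResourceType -Name 'Virtual Machine' |
    Set-ClusterParameter -Name 'MigrationExcludeNetworks' -Value $excluded
```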

Advanced Usage for this Script

This script will need to be modified in order to be used at all.

