Hyper-V Backup Best Practices: Terminology and Basics

One of my very first jobs performing server support on a regular basis was heavily focused on backup. I witnessed several heart-wrenching tragedies of permanent data loss but, fortunately, played the role of data savior much more frequently. I know that most, if not all, of the calamities could have at least been lessened had the data owners been more educated on the subject of backup. I believe very firmly in the value of a solid backup strategy, which I also believe can only be built on the basis of a solid education in the art. This article’s overarching goal is to give you that education by serving a number of purposes:

  • Explain industry-standard terminology and how to apply it to your situation
  • Address and wipe away 1990s-style approaches to backup
  • Clearly illustrate backup from a Hyper-V perspective

Backup Terminology

Whenever possible, I avoid speaking in jargon and TLAs/FLAs (three-/four-letter acronyms) unless I’m talking to a peer that I’m certain has the experience to understand what I mean. When you start exploring backup solutions, you will have these tossed at you rapid-fire with, at most, brief explanations. If you don’t understand each and every one of the following, stop and read those sections before proceeding. If you’re lucky enough to be working with an honest salesperson, it’s easy for them to forget that their target audience may not be completely following along. If you’re less fortunate, it’s simple for a dishonest salesperson to ridiculously oversell backup products through scare tactics that rely heavily on your incomplete understanding.

  • Backup
  • Full/incremental/differential backup
  • Delta
  • Deduplication
  • Inconsistent/crash-consistent/application-consistent
  • Bare-metal backup/restore (BMB/BMR)
  • Disaster Recovery/Business Continuity
  • Recovery point objective (RPO)
  • Recovery time objective (RTO)
  • Retention
  • Rotation — includes terms such as Grandfather-Father-Son (GFS)

There are a number of other terms that you might encounter, although these are the most important for our discussion. If you encounter a vendor making up their own TLAs/FLAs, take a moment to investigate their meaning in comparison to the above. Most are just marketing tactics — inherently harmless attempts by a business entity trying to turn a coin by promoting its products. Some are more nefarious — attempts to invent a nail for which the company just conveniently happens to provide the only perfectly matching hammer (with an extra “value-added” price, of course).


Backup

This heading might seem pointless — doesn’t everyone know what a backup is? In my experience, no. In order to qualify as a backup, you must have a distinct, independent copy of data. A backup cannot have any reliance on the health or well-being of its source data or the media that contains that data. Otherwise, it is not a true backup.

Full/Incremental/Differential Backups

Recent technology changes and their attendant strategies have made this terminology somewhat less popular than in past decades, but it is still important to understand because it remains in widespread use. The three types are presented as a package because they make the most sense when compared to each other. So, I’ll give you a brief explanation of each and then launch into a discussion.

  • Full Backups: Full backups are the easiest to understand. They are a point-in-time copy of all target data.
  • Differential Backups: A differential backup is a point-in-time copy of all data that is different from the last full backup that is its parent.
  • Incremental Backups: An incremental backup is a point-in-time copy of all data that is different from its parent backup, which may be a full backup or another incremental.

The full backup is the safest type because it is the only one of the three that can stand alone in any circumstances. It is a complete copy of whatever data has been selected.

Full Backup


A differential backup is the next safest type. Remember the following:

  • To fully restore the latest data, a differential backup always requires two backups: the latest full backup and the latest differential backup. Intermediary differential backups, if any exist, are not required.
  • It is not necessary to restore from the most recent differential backup if an earlier version of the data is required.
  • Depending on what data is required and the intelligence of the backup application, it may not be necessary to have both backups available to retrieve specific items.

The following is an illustration of what a differential backup looks like:

Differential Backup


Each differential backup goes all the way back to the latest full backup as its parent. Also, notice that each differential backup is slightly larger than the preceding differential backup. That growth pattern is the conventional wisdom on the matter. In theory, each differential backup contains the previous backup’s changes as well as any new changes. In reality, it truly depends on the change pattern. A file backed up on Monday might have been deleted on Tuesday, so that part of the backup certainly won’t be larger. A file that changed on Tuesday might have had half its contents removed on Wednesday, which would make that part of the backup smaller. A differential backup can range anywhere from essentially empty (if nothing changed) to as large as the source data (if everything changed). Realistically, though, you should expect each differential backup to be slightly larger than the previous one.

The following is an illustration of an incremental backup:

Incremental Backup


Incremental backups are best thought of as a chain. The above shows a typical daily backup in an environment that uses a weekly full with daily incrementals. If all data is lost and you must restore to Wednesday’s backup, then every single night’s backup from Sunday onward will be necessary. If any one is missing or damaged, then it will likely not be possible to retrieve anything from that backup or any backup afterward. Therefore, incremental backups are the riskiest; they are also the fastest and consume the least amount of space.

Historically, full/incremental/differential backups have been facilitated by an archive bit in Windows. Anytime a file is changed, Windows sets its archive bit. The backup types operate with this behavior:

  • A full backup captures all target files and clears any archive bits that it finds.
  • A differential backup captures only target files that have their archive bit set and leaves the bit as it found it.
  • An incremental backup captures only files with the archive bit set and clears it afterward.

Archive Bit Example
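If you’d like to see the archive bit in action, PowerShell can read and clear it directly. This is just a quick illustration; the file path is a placeholder.

    $file = Get-Item 'C:\Data\report.xlsx'
    # A result of True means the archive bit is set, so the file is a candidate for the next backup
    [bool]($file.Attributes -band [IO.FileAttributes]::Archive)
    # Clear the bit the way a full or incremental backup would after capturing the file
    $file.Attributes = $file.Attributes -band (-bnot [IO.FileAttributes]::Archive)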


Delta

“Delta” is probably the most overthought word in all of technology. It means “difference”. Do not analyze it beyond that. It just means “difference”. If you have $10 in your pocket and you buy an item for $8, the $8 that you spent is the “delta” between the amount of money that you had before you made the purchase and the amount of money that you have now.

The way that vendors use the term “delta” sometimes changes, but usually not by a great deal. In the earliest use of “delta” as applied to backups that I am aware of, it meant intra-file changes. All previous backup types operated with individual files as the smallest level of granularity (not counting specialty backups such as Exchange item-level). Delta backups would analyze the blocks of individual files, making the granularity one step finer.

The following image illustrates the delta concept:

Delta Backup


A delta backup is essentially an incremental backup, but at the block level instead of the file level. Somebody got the clever idea to use the word “delta”, probably so that it wouldn’t be confused with “differential”, and the world thought it must mean something extra special because it’s Greek.

The major benefit of delta backups is that they use much less space than even incremental backups. The trade-off is the computing power needed to calculate deltas. The archive bit can tell the backup application whether a file needs to be scanned, but not which of its blocks have changed. Backup systems that perform delta operations require some other method for change tracking.
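To make the idea concrete, here is a toy illustration of block-level change detection: hash each fixed-size block of a file and compare the results against the hashes captured at the previous backup. Real products use far more sophisticated tracking; the path, block size, and hash choice here are arbitrary.

    function Get-BlockHashes {
        param ([string]$Path, [int]$BlockSize = 4096)
        $md5    = [System.Security.Cryptography.MD5]::Create()
        $stream = [System.IO.File]::OpenRead($Path)
        try {
            $buffer = New-Object byte[] $BlockSize
            while (($read = $stream.Read($buffer, 0, $buffer.Length)) -gt 0) {
                # Each block's hash stands in for its contents; a changed hash marks a changed block
                [System.BitConverter]::ToString($md5.ComputeHash($buffer, 0, $read))
            }
        }
        finally { $stream.Dispose(); $md5.Dispose() }
    }

    $previous = Get-BlockHashes -Path 'D:\VMs\data.vhdx'    # captured at the last backup
    # ...time passes and the file changes...
    $current  = Get-BlockHashes -Path 'D:\VMs\data.vhdx'
    Compare-Object $previous $current                       # the differing blocks are the delta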


Deduplication

Deduplication represents the latest iteration of backup innovation. The term explains itself quite nicely. The backup application searches for identical blocks of data and reduces them to a single copy.

Deduplication involves three major feats:

  • The algorithm that discovers duplicate blocks must operate in a timely fashion
  • The system that tracks the proper location of duplicated blocks must be foolproof
  • The system that tracks the proper location of duplicated blocks must use significantly less storage than simply keeping the original blocks

So, while deduplication is conceptually simple, implementations can depend upon advanced computer science.
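The core of the concept fits in a few lines of PowerShell. This is purely illustrative: $blocks is assumed to be an array of byte arrays read from a source file, and no shipping product is this naive.

    $blockStore = @{}     # hash -> block contents; each unique block is stored exactly once
    $fileRecipe = @()     # the ordered list of hashes needed to rebuild this particular file
    $sha256     = [System.Security.Cryptography.SHA256]::Create()
    foreach ($block in $blocks) {
        $hash = [System.BitConverter]::ToString($sha256.ComputeHash($block))
        if (-not $blockStore.ContainsKey($hash)) { $blockStore[$hash] = $block }
        $fileRecipe += $hash
    }
    '{0} blocks in, {1} unique blocks stored' -f $fileRecipe.Count, $blockStore.Count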

Deduplication’s primary benefit is that it can produce backups that are even smaller than delta systems. Part of that will depend on the overall scope of the deduplication engine. If you were to run fifteen new Windows Server 2016 virtual machines through even a rudimentary deduplicator, it would reduce all of them to the size of approximately a single Windows Server 2016 virtual machine — a 93% savings.

There is risk in overeager implementations, however. With all data blocks represented by a single copy, each block becomes a single point of failure. The loss of a single vital block could spell disaster for a backup set. This risk can be mitigated by employing a single pre-existing best practice: always maintain multiple backups.


Inconsistent/Crash-Consistent/Application-Consistent

We already have an article set that explores these terms in some detail. Quickly:

  • An inconsistent backup is effectively the same thing as performing a manual file copy of a directory tree.
  • A crash-consistent backup captures data as it sits on the storage volume at a given point in time, but cannot touch anything passing through the CPU or waiting in memory. You could lose any in-flight I/O operations.
  • An application-consistent backup coordinates with the operating system and, where possible, individual applications to ensure that in-flight I/Os are flushed to disk so that there are no active file changes at the moment that the backup is taken.

I occasionally see people twisting these terms around, although I believe that’s mostly accidental. The definitions that I used above have been the most common, stretching back into the 90s. Be aware that there are some disagreements, so ensure that you clarify terminology with any salespeople.

Bare-Metal Backup/Restore

A so-called “bare-metal backup” and/or “bare-metal restore” involves capturing the entirety of a storage unit, including metadata portions such as the boot sector. These backup/restore types essentially mean that you could restore data to a completely empty physical system without needing to install an operating system and/or backup agent on it first.

Disaster Recovery/Business Continuity

The terms “Disaster Recovery” (DR) and “Business Continuity” are often used somewhat interchangeably in marketing literature. “Disaster Recovery” is the older term and more accurately reflects the nature of the involved solutions. “Business Continuity” is a newer, more exciting version that sounds more positive but mostly means the same thing. These two terms encompass not just restoring data, but restoring the organization to its pre-disaster state. “Business Continuity” is used to emphasize the notion that, with proper planning, disasters can have little to no effect on your ability to conduct normal business. Of course, the more “continuous” your solution is, the higher your costs are. That’s not necessarily a bad thing, but it must be understood and expected.

One thing that I really want to make very clear about disaster recovery and/or business continuity is that these terms extend far beyond just backing up and restoring your data. DR plans need to include downtime procedures, phone trees, alternative working sites, and a great deal more. You need to think all the way through a disaster from the moment that one occurs to the moment that everything is back to some semblance of normal.

Recovery Point Objective

The maximum acceptable span of time between the latest backup and a data loss event is called a recovery point objective (RPO). If the words don’t sound very much like their definition, that’s because someone worked really hard to couch a bad situation within a somewhat neutral term. If it helps, the “point” in RPO means “point in time.” Of all data adds and changes, anything that happens between backup events has the highest potential of being lost. Many technologies have some sort of fault tolerance built in; for instance, if your domain controller crashes and it isn’t due to a completely failed storage subsystem, you’re probably not going to need to go to backup because another domain controller holds a replica of the directory. Most other replicated databases can tell a similar story. RPOs mostly address human error and disaster. More common failures should be addressed by technology branches other than backup, such as RAID.

A long RPO means that you are willing to lose a greater span of time. A daily backup gives you a 24-hour RPO. Taking backups every two hours results in a 2-hour RPO. Remember that an RPO represents a maximum. It is highly unlikely that a failure will occur immediately prior to the next backup operation.

Recovery Time Objective

Recovery time objective (RTO) represents the maximum amount of time that you are willing to wait for systems to be restored to a functional state. This term sounds much more like its actual meaning than RPO. You need to take extra care when talking with backup vendors about RTO. They will tend to only talk about RTO in terms of restoring data to a replacement system. If your primary site is your only site and you don’t have a contingency plan for complete building loss, your RTO is however long it takes to replace that building, fill it with replacement systems, and restore data to those systems. Somehow, I suspect that a six-month or longer RTO is unacceptable for most institutions. That is one reason that DR planning must extend beyond taking backups.

In more conventional usage, RTOs will be explained as though there is always a target system ready to receive the restored data. So, if your backup drives are taken offsite to a safety deposit box by the bookkeeper when she comes in at 8 AM, your actual recovery time is essentially however long it takes someone to retrieve the backup drive plus the time needed to perform a restore in your backup application.


Retention

Retention is the desired amount of time that a backup should be kept. This deceptively simple description hides some complexity. Consider the following:

  • Legislation mandates a ten-year retention policy on customer data for your industry. A customer was added in 2007. Their address changed in 2009. Must the customer’s data be kept until 2017 or 2019?
  • Corporate policy mandates that all customer information be retained for a minimum of five years. The line-of-business application that you use to record customer information never deletes any information that was placed into it and you have a copy of last night’s data. Do you need to keep the backup copy from five years ago or is having a recent copy of the database that contains five-year-old data sufficient?

Questions such as these can plague you. Historically, monthly and annual backup tapes were simply kept for a specific minimum number of years and then discarded, which more or less answered the question for you. Tape is an expensive solution, however, and many modern small businesses do not use it. Furthermore, laws and policies only dictate that the data be kept; nothing forced anyone to ensure that the backup tapes were readable after any specific amount of time. One lesson that many people learn the hard way is that tapes stored flat can lose data after a few years. We used to joke with customers that their bits were sliding down the side of the tape. I don’t actually understand the governing electromagnetic phenomenon, but I can verify that it does exist.

With disk-based backups, the possibilities are changed somewhat. People typically do not keep stacks of backup disks lying around, and their ability to hold data for long periods of time is not the same as backup tape. The rules are different — some disks will outlive tape, others will not.


Rotation

Backup rotations deal with the media used to hold backup information. This has historically meant tape, and tape rotations often came in some very grandiose schemes. One of the most widely used rotations is called “Grandfather-Father-Son” (GFS):

  • One full backup is taken monthly. The media it is taken on is kept for an extended period of time, usually one year. One of these is often considered an annual and kept longer. This backup is called the “Grandfather”.
  • Each week thereafter, on the same day, another full backup is taken. This media is usually rotated so that it is re-used once per month. This backup is known as the “Father”.
  • On every day between full backups, an incremental backup is taken. Each day’s media is rotated so that it is re-used on the same day each week. This backup is known as the “Son”.

The purpose of rotation is to have enough backups to provide sufficient possible restore points to guard against a myriad of possible data loss instances without using so much media that you bankrupt yourself and run out of physical storage room. Grandfathers are taken offsite and placed in long-term storage. Fathers are taken offsite, but perhaps not placed in long-term storage so that they are more readily accessible. Sons are often left onsite, at least for a day or two, to facilitate rapid restore operations.
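If it helps to see the schedule spelled out, a few lines of PowerShell can label any given date. The scheme here assumes monthly fulls on the first of the month and weekly fulls on Sundays; adjust to match your own rotation.

    $date = Get-Date
    if ($date.Day -eq 1) { 'Grandfather: monthly full, retained for roughly a year' }
    elseif ($date.DayOfWeek -eq [System.DayOfWeek]::Sunday) { 'Father: weekly full, media reused monthly' }
    else { 'Son: daily incremental, media reused weekly' }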

Replacing Old Concepts with New Best Practices

Some backup concepts are simply outdated, especially for the small business. Tape used to be the only feasible mass storage device that could be written and rewritten on a daily basis and was sufficiently portable. I recall being chastised by a vendor representative in 2004 because I was “still” using tape when I “should” have been backing up to his expensive SAN. I asked him, “Oh, do employees tend to react well when someone says, ‘The building is on fire! Grab the SAN and get out!’?” He suddenly didn’t want to talk to me anymore.

The other somewhat outdated issue is that backups used to take a very, very long time. Tape was not very fast, disks were not very fast, networks were not very fast. Differential and incremental backups were partly an answer to that problem and partly an answer to limited tape capacity. Today, we have gigantic and relatively speedy portable hard drives, networks that can move at least many hundreds of megabits per second, and external buses like USB 3 that outrun both of those things. We no longer need all weekend and an entire media library to perform a full backup.

One thing that has not changed is the need for backups to exist offsite. You cannot protect against a loss of a building if all of your data stays in that building. Solutions have evolved, though. You can now afford to purchase large amounts of bandwidth and transmit your data offsite to your alternative business location(s) each night. If you haven’t got an alternative business location, there are countless vendors that would be happy to store your data each night in exchange for a modest (or not so modest) sum of money. I still counsel periodically taking an offline offsite backup copy, as that is a solid way to protect your organization against malicious attacks (some of which can come from disgruntled staff).

These are the approaches that I would take today that would not have been available to me a few short years ago:

  • Favor full backups whenever possible — incremental, differential, delta, and deduplicated backups are wonderful, but they are incomplete by nature. It must never be forgotten that the strength of backup lies in the fact that it creates duplicates of data. Any backup technique that reduces duplication dilutes the purpose of backup. I won’t argue against anyone saying that there are many perfectly valid reasons for doing so, but such usage must be balanced. Backup systems are larger and faster than ever before; if you can afford the space and time for full copies, get full copies.
  • Steer away from complicated rotation schemes like GFS whenever possible. Untrained staff will not understand them and you cannot rely on the availability of trained staff in a crisis.
  • Encrypt every backup every time.
  • Spend the time to develop truly meaningful retention policies. You can easily throw a tape in a drawer for ten years. You’ll find that more difficult with a portable disk drive. Then again, have you ever tried restoring from a ten-year-old tape?
  • Be open to the idea of using multiple backup solutions simultaneously. If using a combination of applications and media types solves your problem and it’s not too much overhead, go for it.

There are a few best practices that are just as applicable now as ever:

  • Periodically test your backups to ensure that data is recoverable
  • Periodically review what you are backing up and what your rotation and retention policies are to ensure that you are neither shorting yourself on vital data nor wasting backup media space on dead information
  • Backup media must be treated as vitally sensitive mission-critical information and guarded against theft, espionage, and damage
    • Magnetic media must be kept away from electromagnetic fields
    • Tapes must be stored upright on their edges
    • Optical media must be kept in dark storage
    • All media must be kept in a cool environment with a constant temperature and low humidity
  • Never rely on a single backup copy. Media can fail, get lost, or be stolen. Backup jobs don’t always complete.

Hyper-V-Specific Backup Best Practices

I want to dive into the nuances of backup and Hyper-V more thoroughly in later articles, but I won’t leave you here without at least bringing them up.

  • Virtual-machine-level backups are a good thing. That might seem a bit self-serving since I’m writing for Altaro and they have a virtual-machine-level backup application, but I fit well here because of shared philosophy. A virtual-machine-level backup gives you the following:
    • No agent installed inside the guest operating system
    • Backups are automatically coordinated for all guests, meaning that you don’t need to set up some complicated staggered schedule to prevent overlaps
    • No need to reinstall guest operating systems separately from restoring their data
  • Hyper-V versions prior to 2016 do not have a native changed block tracking mechanism, so virtual-machine-level backup applications that perform delta and/or deduplication operations must perform a substantial amount of processing. Keep that in mind as you are developing your rotations and scheduling.
  • Hyper-V will coordinate between backup applications that run at the virtual-machine level (like Altaro VM Backup) and the VSS writer(s) within guest Windows operating systems, as well as the integration components within Linux guest operating systems. This enables application-consistent backups without doing anything special other than ensuring that the integration components/services are up-to-date and activated (a quick way to check is shown after this list).
  • For physical installations, no application can perform a bare metal restore operation any more quickly than you can perform a fresh Windows Server/Hyper-V Server installation from media (or better yet, a WDS system). Such a physical server should only have very basic configuration and only backup/management software installed. Therefore, backing up the management operating system is typically a completely pointless endeavor. If you feel otherwise, I want to know what you installed in the management operating system that would make a bare-metal restore worth your time, as I’m betting that such an application or configuration should not be in the management operating system at all.
  • Use your backup application’s ability to restore a virtual machine next to its original so that you can test data integrity
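Regarding the integration services mentioned above, the Hyper-V PowerShell module can quickly show whether the backup-related integration service is enabled for each guest. The wildcard match allows for the slight differences in the service’s display name between versions.

    Get-VM | Get-VMIntegrationService |
        Where-Object { $_.Name -like '*Backup*' } |
        Select-Object VMName, Name, Enabled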

Follow-Up Articles

With the foundational material supplied in this article, I intend to work on further posts that expand on these thoughts in greater detail. If you have any questions or concerns about backing up Hyper-V, let me know. Anything that I can’t answer quickly in comments might find its way into an article.

PowerShell Script: Change Advanced Settings of Hyper-V Virtual Machines



Each Hyper-V virtual machine sports a number of settings that can be changed, but not by any sanctioned GUI tools. If you’re familiar with WMI, these properties are part of the Msvm_VirtualSystemSettingData class. Whether you’re familiar with WMI or not, these properties are not simple to change. I previously created a script that modifies the BIOS GUID setting, but that left out all the other available fields. So, I took that script back into the workshop and rewired it to increase its reach.

If you’re fairly new to using PowerShell as a scripting language and use other people’s scripts to learn, there are some additional notes after the script contents that you might be interested in.

What this Script Does

This script can be used to modify the following properties of a Hyper-V virtual machine:

  • BIOS GUID: The BIOS of every modern computer should contain a Universally Unique Identifier (UUID). Microsoft’s implementation of the standard UUID is called a Globally Unique Identifier (GUID). You can see it on any Windows computer with gwmi Win32_ComputerSystem | select UUID. On physical machines, I don’t believe that it’s possible to change the UUID. On virtual machines, it is possible and even necessary at times. You can provide your own GUID or have a new one generated.
  • Baseboard serial number.
  • BIOS serial number.
  • Chassis asset tag.
  • Chassis serial number.

There are other modifiable fields on this particular WMI class, but these are the only fields that I’m certain have no effect on the way that Hyper-V handles the virtual machine.

Warning 1: Changes to these fields are irreversible without restoring from backup. Modification of the BIOS GUID field is likely to trigger software activation events. Other side effects, potentially damaging, may occur. Any of these fields may be tracked and used by software inside the virtual machine. Any of these fields may be tracked and used by third-party software that manipulates the virtual machine. Use this script at your own risk.

Warning 2: These settings cannot be modified while the virtual machine is on. It must be in an Off state (not Saved or Paused). This script will turn off a running virtual machine (you are prompted first). It will not change anything on saved or paused VMs.

Warning 3: Only the active virtual machine is impacted. If the virtual machine has any checkpoints, they are left as-is. That means that if you delete the checkpoints, the new settings will be retained. If you apply or revert to a checkpoint, the old settings will be restored. I made the assumption that this behavior would be expected.

The following safety measures are in place:

  • The script is marked as High impact, which means that it will prompt before doing anything unless you supply the -Force parameter or have your confirmation preference set to a dangerous level. It will prompt up to two times: once if the virtual machine is running (because the VM must be off before the change can occur) and once when performing the change.
  • The script will only accept a single virtual machine at a time. Of course, it can operate within a foreach block so this barrier can be overcome. The intent was to prevent accidents.
  • If a running virtual machine does not shut down within the allotted time, the script exits. The default wait time is 5 minutes, overridable by specifying the Timeout parameter. The timeout is measured in seconds. If the virtual machine’s guest shutdown process was properly triggered, it will continue to attempt to shut down and this script will not try to turn it back on.
  • If a guest’s shutdown integration service does not respond (which includes guests that don’t have a shutdown integration service) the script will exit without making changes. If you’re really intent on making changes no matter what, I’d use the built-in Stop-VM cmdlet first.

Script Requirements

The script is designed to operate via WMI, so the Hyper-V PowerShell module is not required. However, it can accept output from Get-VM for its own VM parameter.

You must have administrative access on the target Hyper-V host. The script does not check for this status because you might be targeting a remote computer. I did not test the script with membership in “Hyper-V Administrators”. That group does not always behave as expected, but it might work.


Copy/paste the contents of the code block to a text editor on your system and save the file as Set-VMAdvancedSettings.ps1. As-is, you call the script directly. If you uncomment the first two lines and the last line, you will convert the script to an advanced function that can be dot-sourced or added to your profile.
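As an illustration, a direct call might look like the following. The -VM, -Timeout, and -Force parameters were described above; the field parameter shown here (-ChassisAssetTag) and the virtual machine name are examples only, so check the script’s param block for the exact names it exposes.

    # Direct invocation against a single virtual machine (names and values are examples)
    .\Set-VMAdvancedSettings.ps1 -VM 'svdemo' -ChassisAssetTag 'IT-0042' -Timeout 300
    # After uncommenting the wrapper lines, dot-source the file and call the function it defines instead
    . .\Set-VMAdvancedSettings.ps1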

Script Notes for Other Scripters

I hope that at least some of you are using my scripts to advance your own PowerShell knowledge. This script shows two internal functions used for two separate purposes.

Code/Script Reusability

Sometimes, you’ll find yourself needing the same functionality in multiple scripts. You could rewrite that functionality each time, but I’m sure you don’t need me to tell you what’s wrong with that. You could put that functionality into another script and dot-source it into each of the others that rely on it. That’s a perfectly viable approach, but I don’t use it in my public scripts because I can’t guarantee what scripts will be on readers’ systems and I want to provide a one-stop experience wherever possible. That leads to the third option: making the block itself reusable and copying it into each script that needs it.

Find the line that says function Process-WmiJob. As the comments say, I have adapted that from someone else’s work. I commonly need to use its functionality in many of my WMI-based scripts. So, I modularized it to use a universal set of parameters. Now I just copy/paste it into any other script that uses WMI jobs.

Making your code/script reusable can save you a great deal of time. Reused blocks have predictable results. The more you use them, the more likely you are to work out long-standing bugs and fine-tune them.

The problem with reused blocks is that they become disjointed. If I fix a problem that I find in the function within this script, I might or might not go back and update it everywhere else that I use it. In your own local scripting, you can address that problem by having a single copy of a script dot-sourced into all of your others. However, if each script file has its own copy of the function, it’s easier to customize it when necessary.

There’s no “right” answer or approach when it comes to code/script reusability. The overall goal is to reduce duplication of work, not to make reusable blocks. Never be so bound up in making a block reusable that you end up doing more overall work.

Don’t Repeat Yourself

A lofty goal related to code/script reusability is the Don’t Repeat Yourself principle (DRY). As I was reworking the original version of this script, I found that I was essentially taking the script block for the previous virtual machine property, copy/pasting it, and then updating the property and variable names. There was a tiny bit of customization on a few of them, but overall the blocks were syntactically identical. The script worked, and it was easy to follow along, but that’s not really an efficient way to write a script. Computers are quite skilled at repeating tasks. Humans, on the contrary, quickly tire of repetition. Therefore, it only makes sense to let the computer do whatever you’d rather skip.

DRY also addresses the issue of tiny mistakes. Let’s say that I duplicated the script block for changing the ChassisSerialNumber, but left in the property name for the ChassisAssetTag. That would mean that your update would result in the ChassisAssetTag value that you specified being applied to both the ChassisAssetTag field and the ChassisSerialNumber field. These types of errors are extremely common when you copy/paste/modify blocks.

Look at the line that says Change-VMSetting. That contains a fairly short bit of script that changes the properties of an object. I won’t dive too deeply into the details; the important part is that this particular function might be called up to five times during each iteration of the script. It’s only typed once, though. If I (or you) find a bug in it, there’s only one place that I need to make corrections.
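A generic version of that pattern, not the actual function from the script, looks something like this: one tiny helper that takes the property name as data and is called once per field. The $settingsData object and the value variables stand in for whatever your own script already has.

    function Set-SingleProperty {
        param ($TargetObject, [string]$PropertyName, $NewValue)
        # Only touch the property when the caller actually supplied a value
        if ($null -ne $NewValue) { $TargetObject.$PropertyName = $NewValue }
    }
    Set-SingleProperty -TargetObject $settingsData -PropertyName 'BIOSSerialNumber' -NewValue $BIOSSerialNumber
    Set-SingleProperty -TargetObject $settingsData -PropertyName 'ChassisAssetTag' -NewValue $ChassisAssetTag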

Internal Functions in Your Own Scripts

Notice that I put my functions into a begin {} block. If this script is on the right side of a pipe, then its process {} block might be called multiple times. The functions only need to be defined once. Leaving them in the begin block provides a minor performance boost because that part of the script won’t need to be parsed on each pass.

I also chose to use non-standard verbs “Process” and “Change” for the functions. That’s because I can never be entirely certain about function names that might already be in the global namespace or that might be in other scripts that include mine. Programming languages tend to implement namespaces to avoid such naming collisions, but PowerShell does not have that level of namespace support just yet. Keep that problem in mind when writing your own internal functions.
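Put together, the skeleton looks like this. The function and helper names are throwaways; the begin/process arrangement is the part worth copying.

    function Invoke-Demo {
        [CmdletBinding()]
        param (
            [Parameter(ValueFromPipeline = $true)]
            $InputObject
        )
        begin {
            # Defined once per pipeline, no matter how many objects flow through process {}
            function ConvertTo-Labeled ($Item) { "processed: $Item" }
        }
        process {
            ConvertTo-Labeled $InputObject
        }
    }
    1, 2, 3 | Invoke-Demo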

Critical Status in Hyper-V Manager




I’m an admitted technophile. I like blinky lights and sleek chassis and that new stuff smell and APIs and clicking through interfaces. I wouldn’t be in this field otherwise. However, if I were to compile a list of my least favorite things about unfamiliar technology, that panicked feeling when something breaks would claim the #1 slot. I often feel that systems administration sits diametrically opposite medical care. We tend to be comfortable learning by poking and prodding at things while they’re alive. When they’re dead, we’re sweating — worried that anything we do will only make the situation worse. For many of us in the Hyper-V world, that feeling first hits with the sight of a virtual machine in “Critical” status.

If you’re there, I can’t promise that the world hasn’t ended. I can help you to discover what it means and how to get back on the road to recovery.

The Various “Critical” States in Hyper-V Manager

If you ever look at the underlying WMI API for Hyper-V, you’ll learn that virtual machines have a long list of “sick” and “dead” states. Hyper-V Manager distills these into a much smaller list for its display. If you have a virtual machine in a “Critical” state, you’re only given two control options: Connect and Delete:


We’re fortunate enough in this case that the Status column gives some indication as to the underlying problem. That’s not always the case. That tiny bit of information might not be enough to get you to the root of the problem.

For starters, be aware that any state that includes the word “Critical” typically means that the virtual machine’s storage location has a problem. The storage device might have failed. The host may not be able to connect to storage. If you’re using SMB 3, the host might be unable to authenticate.

You’ll notice that there’s a hyphen in the state display. Before the hyphen will be another word that indicates the current or last known power state of the virtual machine. In this case, it’s Saved. I’ve only ever seen three states:

  • Off-Critical: The virtual machine was off last time the host was able to connect to it.
  • Saved-Critical: The virtual machine was in a saved state the last time the host was able to connect to it.
  • Paused-Critical: The paused state typically isn’t a past condition. This one usually means that the host can still talk to the storage location, but it has run out of free space.

There may be other states that I have not discovered. However, if you see the word “Critical” in a state, assume a storage issue.

Learning More About the Problem

If you have a small installation, you probably already know enough at this point to go find out what’s wrong. If you have a larger system, you might only be getting started. With only Connect and Delete, you can’t find out what’s wrong. You need to start by discovering the storage location that’s behind all of the fuss. Since Hyper-V Manager won’t help you, it’s PowerShell to the rescue:
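Something like the following two lines will do the job; the virtual machine name is a stand-in, and the exact set of storage-related properties may vary slightly between versions.

    Get-VM -Name 'svdemo' | Format-List *
    Get-VM -Name 'svdemo' | Format-List Name, Path, ConfigurationLocation, SnapshotFileLocation, SmartPagingFilePath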

Remember to use your own virtual machine name for best results. The first of those two lines will show you all of the virtual machine’s properties. It’s easier to remember in a pinch, but it also displays a lot of fields that you don’t care about. The second one pares the output list down to show only the storage-related fields. My output:


The Status field specifically mentioned the configuration location. As you can see, the same storage location holds all of the components of this particular virtual machine. We are not looking at anything related to the virtual hard disks, though. For that, we need a different cmdlet:
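Assuming the same stand-in virtual machine name, a pair like this works:

    Get-VMHardDiskDrive -VMName 'svdemo'
    Get-VMHardDiskDrive -VMName 'svdemo' | Select-Object -ExpandProperty Path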

Again, I recommend that you use the name of your virtual machine instead of mine. The first cmdlet will show a table display that includes the path of the virtual hard disk file, but it will likely be truncated. There’s probably enough to get you started. If not, the second shows the entire path.


Everything that makes up this virtual machine happens to be on the same SMB 3 share. If yours is on an iSCSI target, use iscsicpl.exe to check the status of connected disks. If you’re using Fibre Channel, your vendor’s software should be able to assist you.

Correcting the Problem

In my case, the Server service was stopped on the system that I use to host SMB 3 shares. It got that way because I needed to set up a scenario for this article. To return the virtual machine to a healthy state, I only needed to start that service and wait a few moments.

Your situation will likely be different from mine, of course. Your first goal is to rectify the root of the problem. If the storage is offline, bring it up. If there’s a disconnect, reconnect. After that, simply wait. Everything should take care of itself.

When I power down my test cluster, I tend to encounter this issue upon turning everything back on. I could start my storage unit first, but the domain controllers are on the Hyper-V hosts so nothing can authenticate to the storage unit even if it’s on. I could start the Hyper-V hosts first, but then the storage unit isn’t there to challenge authentication. So, I just power the boxes up in whatever order I come to them. All I need to do is wait — the Hyper-V hosts will continually try to reach storage, and they’ll eventually be successful.

If the state does not automatically return to a normal condition, restart the “Hyper-V Virtual Machine Management” service. You’ll find it by that name in the Services control panel applet. In an elevated PowerShell session:
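The service’s short name is vmms, so one line does it:

    Restart-Service -Name vmms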

At an administrative command prompt:
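Using the same short service name:

    net stop vmms
    net start vmms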

That should clear up any remaining status issues. If it doesn’t, there is still an issue communicating with storage. Or, in the case of the Paused condition, it still doesn’t believe that the location has sufficient space to safely run the virtual machine(s).

Less Common Corrections

If you’re certain that the target storage location does not have issues and the state remains Critical, then I would move on to repairs. Try chkdsk. Try resetting/rebooting the storage system. It’s highly unlikely that the Hyper-V host is at fault, but you can also try rebooting that.

Sometimes, the constituent files are damaged or simply gone. Make sure that you can find the actual .xml (2012 R2 and earlier) or .vmcx (2016 and later) file that represents the virtual machine. Remember that it’s named with the virtual machine’s unique identifier. You can find that with PowerShell:
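One line is enough; drop the -Name parameter (and the stand-in name) to list every virtual machine on the host.

    Get-VM -Name 'svdemo' | Select-Object Name, Id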

If the files are misplaced or damaged, your best option is restore. If that’s not an option, then Delete might be your only choice. Delete will remove any remainders of the virtual machine’s configuration files, but will not touch any virtual hard disks that belong to the virtual machine. You can create a new one and reattach those disk files.

Best of luck to you.

Performance Impact of Hyper-V CPU Compatibility Mode



If there’s anything in the Hyper-V world that’s difficult to get good information on, it’s the CPU compatibility setting. Very little official documentation exists, and it only tells you why and how. I, for one, would like to know a bit more about the what. That will be the focus of this article.

What is Hyper-V CPU Compatibility Mode?

Hyper-V CPU compatibility mode is a per-virtual machine setting that allows Live Migration to a physical host running a different CPU model (but not manufacturer). It performs this feat by masking the CPU’s feature set to one that exists on all CPUs that are capable of running Hyper-V. In essence, it prevents the virtual machine from trying to use any advanced CPU instructions that may not be present on other hosts.

Does Hyper-V’s CPU Compatibility Mode Impact the Performance of My Virtual Machine?

If you want a simple and quick answer, then: probably not. The number of people that will be able to detect any difference at all will be very low. The number of people that will be impacted to the point that they need to stop using compatibility mode will be nearly non-existent. If you use a CPU benchmarking tool, then you will see a difference, and probably a marked one. If that’s the only way that you can detect a difference, then that difference does not matter.

I will have a much longer-winded explanation, but I wanted to get that out of the way first.

How Do I Set CPU Compatibility Mode?

Luke wrote a thorough article on setting Hyper-V’s CPU compatibility mode. You’ll find your answer there.
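For reference, the PowerShell route is short. The cmdlet and parameter below come from the standard Hyper-V module; the virtual machine name is a stand-in, and the virtual machine must be off when you change the setting.

    Set-VMProcessor -VMName 'svdemo' -CompatibilityForMigrationEnabled $true
    Get-VMProcessor -VMName 'svdemo' | Select-Object VMName, CompatibilityForMigrationEnabled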

A Primer on CPU Performance

For most of us in the computer field, CPU design is a black art. It requires understanding of electrical engineering, a field that combines physics and logic. There’s no way you’ll build a processor if you can’t comprehend both how a NAND gate functions and why you’d want it to do that. It’s more than a little complicated. Therefore, most of us have settled on a few simple metrics to decide which CPUs are “better”. I’m going to do “better” than that.

CPU Clock Speed

Clock speed is typically the first thing that people generally want to know about a CPU. It’s a decent bellwether for its performance, although an inaccurate one.

A CPU is a binary device. Most people interpret that to mean that a CPU operates on zeros and ones. That’s conceptually accurate but physically untrue. A CPU interprets electrical signals above a specific voltage threshold as a “one”; anything below that threshold is a “zero”. Truthfully speaking, even that description is wrong. The silicon components inside a CPU will react one way when sufficient voltage is present and a different way in the absence of such voltage. To make that a bit simpler, if the result of an instruction is “zero”, then there’s little or no voltage. If the result of an instruction is “one”, then there is significantly more voltage.

Using low and high voltages, we solve the problem of how a CPU functions and produces results. The next problem that we have is how to keep those instructions and results from running into each other. It’s often said that “time is what keeps everything from happening at once”. That is precisely the purpose of the CPU clock. When you want to send an instruction, you ensure that the input line(s) have the necessary voltages at the start of a clock cycle. When you want to check the results, you check the output lines at the start of a clock cycle. It’s a bit more complicated than that, and current CPUs time off of more points than just the beginning of a clock cycle, but that’s the gist of it.


From this, we can conclude that increasing the clock speed gives us more opportunities to input instructions and read out results. That’s one way that performance has improved. As I said before, though, clock speed is not the most accurate predictor of performance.

Instructions per Cycle

The clock speed merely sets how often data can be put into or taken out of a CPU. It does not directly control how quickly a CPU operates. When a CPU “processes” an instruction, it’s really just electrons moving through logic gates. The clock speed can’t make any of that go any more quickly. It’s too easy to get bogged down in the particulars here, so we’ll just jump straight to the end: there is no guarantee that a CPU will be able to finish any given instruction in a clock cycle. That’s why overclockers don’t turn out earth-shattering results on modern CPUs.

That doesn’t mean that the clock speed isn’t relevant. It’s common knowledge that an Intel 386 performed more instructions per cycle than a Pentium 4. However, the 386 topped out at 33 MHz whereas the Pentium 4 started at over 1 GHz. No one would choose to use a 386 against a Pentium 4 when performance matters. When the clock speeds of two different chips are closer, though, internal efficiency trumps clock speed.

Instruction Sets

Truly exploring the depths of “internal efficiency” would take our little trip right down Black Arts Lane. I only have a 101-level education in electrical engineering, so I certainly will not be the chauffeur on that trip. However, the discussion includes instruction sets, a very large subtopic that is directly related to the subject of this article.

CPUs operate with two units: instructions and data. Data is always data, but if you’re a programmer, you probably use the term “code”. “Code” goes through an interpreter or a compiler which “decodes” it into an instruction. Every CPU that I’ve ever worked with understood several common instructions: PUSH, POP, JE, JNE, EQ, etc. (for the sake of accuracy, those are actually codes, too, but I figure it’s better than throwing a bunch of binary at you). All of these instructions appear in the 80x86 (often abbreviated as x86) and the AMD64 (often abbreviated as x64) instruction sets. If you haven’t figured it out by now, an instruction set is just a gathering of CPU instructions.

If you’ve been around for a while, you’ve probably at least heard the acronyms “CISC” and “RISC”. They’re largely marketing terms, but they have some technical merit. These acronyms stand for:

  • CISC: Complex Instruction Set Computer
  • RISC: Reduced Instruction Set Computer

In the abstract, a CISC system makes a large, rich set of instructions available. A RISC system provides only a smaller set of simpler instructions. RISC is marketed as being faster than CISC, based on these principles:

  • I can do a lot of adding and subtracting more quickly than you can do a little long division.
  • With enough adding and subtracting, I have nearly the same outcome as your long division.
  • You don’t do that much long division anyway, so what good is all of that extra infrastructure to enable long division?

On the surface, the concepts are sound. In practice, it’s muddier. Maybe I can’t really add and subtract more quickly than you can perform long division. Maybe I can, but my results are so inaccurate that my work constantly needs to be redone. Maybe I need to do long division a lot more often than I thought. Also, there’s the ambiguity of it all. There’s really no such thing as a “complete” instruction set; we can always add more. Does a “CISC” 80386 become a “RISC” chip when the 80486 debuts with a larger instruction set? That’s why you don’t hear these terms anymore.

Enhanced Instruction Sets and Hyper-V Compatibility Mode

We’ve arrived at a convenient segue back to Hyper-V. We don’t think much about RISC vs. CISC, but that’s not the only instruction set variance in the world. Instruction sets grow because electrical engineers are clever types and they tend to come up with new tasks, quicker ways to do old tasks, and ways to combine existing tasks for more efficient results. They also have employers that need to compete with other employers that have their own electrical engineers doing the same thing. To achieve their goals, engineers add instructions. To achieve their goals, employers bundle the instructions into proprietary instruction sets. Even the core x86 and x64 instruction sets go through revisions.

When you Live Migrate a virtual machine to a new host, you’re moving active processes. The system already initialized those processes to a particular instruction set. Some applications implement logic to detect the available instruction set, but no one checks it on the fly. If that instruction set were to change, your Live Migration would quickly become very dead. CPU compatibility mode exists to address that problem.

The Technical Differences of Compatibility Mode

If you use a CPU utility, you can directly see the differences that compatibility mode makes. These screen shot sets were taken of the same virtual machine on AMD and Intel systems, first with compatibility mode off, then with compatibility mode on.



The first thing to notice is that the available instruction set list shrinks just by setting compatibility mode, but everything else stays the same.

The second thing to notice is that the instruction sets are always radically different between an AMD system and an Intel system. That’s why you can’t Live Migrate between the two even with compatibility mode on.

Understanding Why CPU Compatibility Mode Isn’t a Problem

I implied in an earlier article that good systems administrators learn about CPUs and machine instructions and code. This is along the same lines, although I’m going to take you a bit deeper, to a place that I have little expectation that many of you would go on your own. My goal is to help you understand why you don’t need to worry about CPU compatibility mode.

There are two generic types of software application developers/toolsets:

  • Native/unmanaged: Native developers/toolsets work at a relatively low level. Their languages of choice will be assembler, C, C++, D, etc. The code that they write is built directly to machine instructions.
  • Interpreted/managed: The remaining developers use languages and toolsets whose products pass through at least one intermediate system. Their languages of choice will be Java, C#, Javascript, PHP, etc. Those languages rely on external systems that are responsible for translating the code into machine instructions as needed, often on the fly (Just In Time, or “JIT”).

These divisions aren’t quite that rigid, but you get the general idea.

Native Code and CPU Compatibility

As a general rule, developing native code for enhanced CPU instruction sets is a conscious decision made twice. First, you must instruct your compiler to use these sets:

There might be some hints here about one of my skunkworks projects

These are just the extensions that Visual Studio knows about. For anything more, you’re going to need some supporting files from the processor manufacturer. You might even need to select a compiler that has support built-in for those enhanced sets.

Second, you must specifically write code that calls on instructions from those sets. SSE code isn’t something that you just accidentally use.

Interpreted/Managed Code and CPU Compatibility

When you’re writing interpreted/managed code, you don’t (usually) get to decide anything about advanced CPU instructions. That’s because you don’t compile that kind of code to native machine instructions. Instead, a run-time engine operates your code. In the case of scripting languages, that happens on the fly. Languages like Java and C# are first compiled into an intermediate format: Java becomes byte code, and C# becomes Common Intermediate Language (CIL), which is itself a form of byte code. A run-time engine then executes that intermediate format, usually with just-in-time compilation to native instructions.

It’s the run-time engine that has the option of utilizing enhanced instruction sets. I don’t know offhand which of them do, but these engines all run on a wide range of hardware, so their developers certainly verify the existence of any enhancements that they intend to use before relying on them.

What These Things Mean for Compatibility

What this all means is that even if you don’t know if CPU compatibility affects the application that you’re using, the software manufacturer should certainly know. If the app requires the .Net Framework, then I would not be concerned at all. If it’s native/unmanaged code, the manufacturer should have had the foresight to list any required enhanced CPU capabilities in their requirements documentation.

In the absence of all other clues, these extensions are generally built around boosting multimedia performance. Video and audio encoding and decoding operations feature prominently in these extensions. If your application isn’t doing anything like that, then the odds are very low that it needs these extensions.

What These Things Do Not Mean for Compatibility

No matter what, your CPU’s maximum clock speed will be made available to your virtual machines. There is no throttling, there is no cache limiting, there is nothing other than a reduction of the available CPU instruction sets. Virtual machine performance is unlikely to be impacted at all.

Free Tool: Advanced Settings Editor for Hyper-V Virtual Machines



Hyper-V’s various GUI tools allow you to modify most of the common virtual machine settings. With the Hyper-V PowerShell module, you can modify a few more. Some settings, however, remain out of easy reach. One particular WMI class contains the virtual machine’s NumLock key setting and a handful of identifiers. Manipulating that specific class is unpleasant, even if you’re versed in WMI.

I’ve previously written PowerShell scripts to deal with these fields. Those scripts tend to be long, complicated, and difficult to troubleshoot. So, after taking some time to familiarize myself with the Management Interface API, I’ve produced a full-fledged graphical application so that you can make these changes more simply.

The application looks like this when run against my system:

VM Editor Main Screen


The single dialog shown above contains the entire application. Modeling it after the design philosophy of Altaro Software, I made ease of use my primary design goal. It’s a busy dialog, however, so a quick walkthrough might help you get started.

Advanced VM Settings Editor: Walkthrough

The screen is designed to work from left to right. You can use the screenshot above as a reference point, if you’d like, but it’s probably easier to install the application and follow along with the real thing.

Begin by choosing a host.

Choosing a Hyper-V Host

Use the drop-down/edit control in the top left. You can type the name of a Hyper-V host and press [Enter] or push the Refresh Virtual Machine List button to attempt to connect to a host by name. It does not work with IP addresses. You can use the Browse button to locate a system in Active Directory. I did code the browse functionality to allow you to pick workgroup-joined hosts. I did not test it for workgroup connectivity, so I don’t know (or care) if that part works or not.

If you want to work with local virtual machines, enter a period (.), “LOCALHOST”, or the name of the local computer.

Loading the Virtual Machine List

Upon choosing a host, the contained virtual machines will automatically appear in the list box at the far left of the dialog. If none appear, you should also receive an error message explaining why. Use the Refresh Virtual Machine List to refresh the list items. The application does not automatically detect changes on the host, so you’ll need to use this if you suspect something is different (ex: a virtual machine has Live Migrated away).

A valid host must be selected for this section to function.

Loading the Current Settings for a Virtual Machine

Clicking on any virtual machine item in the list at the left of the dialog will automatically populate the settings in the center of the dialog. It will also populate the power state readout at the top right.

Clicking on the same virtual machine will reload its settings. If you have made any changes and not applied them, you will be prompted.

Making Changes to a Virtual Machine

There are six settings to modify, each with its own field:

  • Enable NumLock at Boot: This field is a simple on/off toggle.
  • Baseboard Serial Number
  • BIOS GUID: If you are PXE booting virtual machines, this field contains the machine UUID. This field requires 32 hexadecimal characters. Because there are multiple ways for GUIDs/UUIDs to be formatted, I opted to allow you to enter any character that you like. Once it has found 32 hexadecimal characters (digits 0 through 9 and letters A through F, case insensitive), it will consider the field to be validly formatted. Any other characters, including hexadecimal characters after the 32nd, will be silently ignored.
  • BIOS Serial Number
  • Chassis Asset Tag
  • Chassis Serial Number

The text fields other than the BIOS GUID are limited to 32 characters; WMI imposes that limit, not the application.

Viewing and Changing the Virtual Machine’s Power State

The power state is important because you cannot save changes to a virtual machine in any state other than Off. The current state is shown in the text box at the top right. Use the buttons underneath to control the virtual machine’s power state. Be aware that the Refresh State button is the only way to re-check what the virtual machine’s power state is without affecting any of the editable fields.

This application is not multi-threaded, so it will appear to hang on any long-running operation. The worst, by far, is the Graceful Shutdown feature.

Saving and Discarding Virtual Machine Changes

Use the Apply Changes and Reset Fields buttons to save or discard any pending changes to the virtual machine, respectively. If you attempt to save changes for a virtual machine that is not in the Off state, you will receive an error.

All of the error messages generated by the Apply Changes button are sent from WMI. I did not write them, so I may not be able to help you to decipher all of them. During testing, I occasionally received a message that the provider did not support the operation. When I got it, I just clicked the button again and the changes went through. However, I also stopped receiving that error message during development, so it’s entirely possible that it was just a bug in my code that was fixed during normal code review. If you receive it, just try again.

Usability Enhancements

The application boasts a couple of features meant to make your life a bit easier.


As mentioned above, use a period (.), “LOCALHOST” or the local computer’s name to connect locally. I added some logic so that it should always know when you are connected locally. If it detects a local connection, the application will use DCOM instead of WinRM. Operations should be marginally faster. That said, I was impressed with the speed of the native MI binaries. If you’re accustomed to the delays of using WMI via PowerShell, I think you’ll be pleased as well.

However, I do have some concerns about the way that local host detection will function on systems that do not have WinRM enabled. If you can’t get the application to work locally, see if enabling WinRM fixes it (winrm qc at a command prompt or Enable-PSRemoting at an elevated PowerShell prompt).

Saving and Deleting Previously Connected Hosts

Once you successfully connect to a host, the application will remember it. If you don’t want a host in the list anymore, just hover over it and press [Delete].

If you’d like to edit the host list, look in %APPDATA%\Siron\VMEditor\vmhosts.txt. Each host is a single line in the text file. Short names and FQDNs are accepted.

Settings Validation Hints

As you type, the various fields will check that you are within ranges that WMI will accept. If you enter too many characters for any text field except BIOS GUID, the background of the text field will turn a reddish color. As long as you are within acceptable ranges, it will remain green. The BIOS GUID field will remain green as long as it detects 32 hexadecimal characters.

I realize that some people are red-green colorblind and may not be able to distinguish between the two colors. Proper validation is performed upon clicking Apply Changes.

Clear Error Messages

One of the things that drives me nuts about software developers is the utter gibberish they try to pass off as error messages. “Sorry, there was an error.” How useless is that? Sure, I know first hand how tiresome writing error messages can be. But, I figure, I voluntarily wrote this app. None of you made me do it. So, it’s not right to punish you with cryptic or pointless messages. Wherever possible, I wrote error messages that should clearly guide you toward a solution. Any time that I didn’t, it’s because I’m relaying a useless message from a lower layer, like “unknown”.

Application Security

In general, I have left security concerns to the Windows environment. The application runs under your user context, so it cannot do anything that you do not already have permission to do. WMI throws errors on invalid input, so sneaking something by my application won’t have much effect. WMI communications are always encrypted, and I can see it loading crypto DLLs in the debugger.

I did instruct the application to securely wipe the names and IDs of all virtual machines from memory on exit. I’m not certain that has any real security value, but it was trivial to implement so I did it.

Potential Future Enhancements

The application does everything that it promises, so I’m not certain that I’ll update it for anything beyond bug fixes. There are a few things that I already have in mind:

  • Multi-threaded/asynchronous. The application appears to hang on long-running operations. They aren’t usually overly annoying so it didn’t seem worth it to delay version 1 to add the necessary infrastructure.
  • Automatic detection of state changes. The API has techniques for applications to watch for changes, such as to power states. These would be nice, but they also require enough effort to implement that they would have delayed version 1.
  • Other visual indicators. I was brainstorming a few ways to give better visual feedback when a field contained invalid data, but ultimately decided to proceed along so that I could release version 1.
  • Other settings. This is already a very busy dialog. I can’t imagine widening its net without major changes. But, maybe this app does need to grow.

I think that a lot of these enhancements hinge on the popularity of the app and how much their absence impacts usability. The program does what it needs to do today, so I hate to start tinkering too much.

System Requirements

I tested extensively using Windows 7 and Windows 10 clients against Windows Server 2012 R2 Hyper-V hosts, with some testing on other platforms/clients. The supported list:

  • Hyper-V Hosts
    • Windows Server 2012 (not directly tested at all)
    • Windows Server 2012 R2
    • Windows Server 2016
    • Windows 8/8.1 (not directly tested at all)
    • Windows 10 (not directly tested at all)
  • Clients
    • Windows 7
    • Windows 8/8.1 (not directly tested at all)
    • Windows 10
    • Windows Server 2008 R2
    • Windows Server 2012 (not directly tested at all)
    • Windows Server 2012 R2
    • Windows Server 2016

Software Prerequisites

The client and the server must be running at least Windows Management Framework version 3.0. The version of PowerShell on the system matches the version of WMF, so you can run $PSVersionTable at any PowerShell prompt to determine the WMF level. The MSI and EXE installers will warn you if you do not meet this requirement on the client. The standalone executable will complain about a missing MI.dll if the framework is not at a sufficient level. WMF 3.0 shipped with Windows 8 and Windows Server 2012, so this is mostly a concern for Windows 7 and Windows Server 2008 R2.
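If you’re unsure of your level, a quick check (run in a PowerShell prompt, not cmd):

```powershell
# A major version of 3 or higher indicates WMF 3.0 or later.
$PSVersionTable.PSVersion
```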

The client must have at least the Visual Studio 2015 C++ Redistributable installed. I did not intentionally use any newer functionality, so earlier versions of the redistributable may work. If you use either of the installers, a merge module is included that will automatically add the necessary runtime files. If you use the EXE-only distribution and the required DLLs are not present, the application will not start. It also will not throw an error to tell you why it won’t start.

Downloading the Advanced VM Settings Editor for Hyper-V

You can download the application right from here. I have provided three packages.

  • Get the MSI package: SetupVMEditor1.0.0.msi.
  • Get the installer EXE: SetupVMEditor1.0.0.exe
  • Get the directly runnable EXE: VMEditor-PlainEXE.
    • Note: You must have the Windows Management Framework version 3.0 installed. You will receive an error about a missing “MI.dll” if this requirement is not met.
    • Note if not using an installer: You must have at least the Visual Studio 2015 C++ Redistributable installed. The application will not even open if the required DLLs are not present on your system.

Support, Warranty, Disclaimer, and All That

I wrote this application, not Altaro. There’s no warranty, there’s no promises, there’s no nothing. It’s 100% as-is, what you see is what you get, whatever happens, happens. It might do something that I didn’t intend for it to do. It might do things that you don’t want it to do. Do not call Altaro asking for help. They gave me a place to distribute my work and that’s the end of their involvement.

I’ll provide limited support here in the comments. However, my time is limited, so help me to help you. I will not respond to “it didn’t work” and “I tried but got an error the end” messages. If you got an error, tell me what the error was. Tell me your OS. Tell me what I can do to reproduce the error.

I am specifically excluding providing support for any problems that arise from attempting to use the application on a workgroup-joined host. I only test in domain environments. If it happens to work in a workgroup, that’s great! If not, too bad!

Undocumented Changes to Hyper-V 2016 WMI



We all know that IT is an ongoing educational experience. Most of that learning is incremental. I can only point to a few times in my career in which a single educational endeavor translated directly to a major change in the course of my career. One of those was reading Richard Siddaway’s PowerShell and WMI. It’s old enough that large portions of the examples in that work are outdated, but the lessons and principles are sound. I can tell you that it’s still worth the purchase price, and more importantly that if this man says anything about WMI, you should listen. You can imagine how excited I was to see that Richard had begun contributing to the Altaro blog.

WMI can be challenging though, and it doesn’t help when you can’t find solid information about it. I’m here to fill in some of the blanks for WMI and Hyper-V 2016.

What is WMI?

WMI stands for “Windows Management Instrumentation”. WMI itself essentially has no substance; it’s a Microsoft-specific implementation of the standardized Common Information Model (CIM), maintained by the DMTF. CIM defines common structures and interfaces that anyone can use for a wide range of purposes. Most purposes involve systems monitoring and management. The simplest way to explain WMI, and therefore CIM, is that it is an open API framework with standardized interfaces intended for usage in management systems. PowerShell has built-in capabilities to allow you to directly interact with those interfaces.

What is the Importance of Hyper-V and WMI?

When it comes to Hyper-V, all the GUIs are the beginner’s field. The PowerShell cmdlets are the intermediate level. The experts distinguish themselves in the WMI layer. Usually, when someone incredulously asks me, “How did you do that?”, WMI is the answer. WMI is the only true external interface for Hyper-V. All of the other tools that you know and love (or hate) rely on WMI. However, none of those tools touch all of the interfaces that Hyper-V exposes through WMI. That’s why we need to be able to access WMI ourselves.

How Do I Get Started with Hyper-V’s WMI Provider?

If you don’t already know WMI, then I would recommend Richard’s book that I linked in the first paragraph. The one warning that I’ll give you is not to spend a lot of time learning about associators. You won’t use them with v2 of the Hyper-V WMI provider. Instead, you’ll use $WMIObject.GetRelated(), which is much easier. There are other ways to learn WMI, of course, but that’s the one that I know. Many of the PowerShell scripts that I’ve published on this blog include WMI at some point, so feel free to tear into those. Also try to familiarize yourself with the WMI Query Language (WQL). It’s basically a baby SQL.

Get a copy of WMI Explorer and put it on a system running Hyper-V. Use this tool to navigate through the system. In this case, you’re especially interested in the root\virtualization\v2 branch. No other tool or reference material that you’ll find will be as useful or as accurate. You can use it to generate PowerShell (check the Script tab). You can also use it to generate MOF definitions for classes (right-click one). It’s a fantastic hands-on way to learn how to use WMI and discover your system.
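As a first hands-on step, here is a minimal query you might run in that namespace; it lists the virtual machines on the local host (a sketch; the Caption filter is what separates VMs from the host’s own entry):

```powershell
# Enumerate the virtual machines known to the local Hyper-V host through the v2 namespace.
Get-WmiObject -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' `
    -Filter "Caption = 'Virtual Machine'" |
    Select-Object ElementName, Name, EnabledState
```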


Microsoft does publish documentation on the Hyper-V WMI provider. For 2016, it is neither thorough nor current, and someone had the brilliant idea to leave it undated so that you can’t tell whether it has ever been updated. Even so, there are more than a few notes that make it worthwhile to use as a reference.

Do not forget search engines! If you just drop in the name of a class, you’ll find something, and often a useful something. It doesn’t hurt to include “v2” in your search criteria.

Undocumented and Partially Documented WMI Changes for Hyper-V 2016

Some of this stuff isn’t so much “undocumented” as unorganized. The goal of this section is to compile information that isn’t readily accessible elsewhere.

Security and the Hyper-V WMI Provider

It is not possible to set a permanent WMI registration on any event, class, or instance in the Hyper-V WMI provider. The reason is that permanent subscriptions operate anonymously, and this particular provider does not allow that level of access. You can create temporary subscriptions because they always operate under a named security context: specifically, a user account.
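For example, a temporary, session-scoped subscription works because it runs under your own account. The following sketch polls for changes to virtual machine instances; the five-second interval and the action are arbitrary choices of mine, not anything the provider requires:

```powershell
# Temporary subscription: lives only for this session and runs under the caller's security context.
Register-WmiEvent -Namespace 'root\virtualization\v2' `
    -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'Msvm_ComputerSystem'" `
    -SourceIdentifier 'HyperVStateWatch' `
    -Action { Write-Host "A virtual machine instance changed." }

# Remove the subscription when you're done with it.
# Unregister-Event -SourceIdentifier 'HyperVStateWatch'
```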

I don’t have much more to give you on this. You can see the symptoms, or side effects if you will, of the different security restrictions. Many things, like Get-VM, don’t produce any results unless you have sufficient permissions. Other than that, you’ll have to muddle through on your own just as I have. My best sources on the subject say that there is no documentation on this. Not just nothing public, just nothing. That means that there is probably a lot more that we could be doing in terms of providing controlled access to Hyper-V functions.

What Classes Were Removed from the Hyper-V WMI Provider in 2016?

I pulled a list of all classes from both a 2012 R2 system and a 2016 system and cross-referenced the results. The following classes appear in 2012 R2 but not in 2016:

I have never personally used any of these classes, so I’m not going to miss them. If you have any script or code that expects these classes to exist, that code will not function against a 2016 system.

One retired class of interest is “Msvm_ResourceTypeDefinition”. As we’ll see in a bit, the way that virtual machine components are tracked has changed, which could explain the removal of this particular class.

What Classes Were Added to the Hyper-V WMI Provider in 2016?

The results of the previous test produced a great many new classes in 2016.

If you’re aware of the many new features in 2016, then the existence of most of these new classes makes sense. You can’t find documentation, though. If you want to see one of the shortest Google results lists in history, go search for “Msvm_TPM”. I got a whopping three hits when I ran it, with no relation to Hyper-V. After publication of this article, we’ll be up to a staggering four!

Some of these class additions are related to a breaking change from the v2 namespace in 2012 R2: some items that were formerly a named subtype of the Msvm_ResourceAllocationSettingData class now have their own specialized classes.

What Happened to the Serial Port Subtype of Msvm_ResourceAllocationSettingData?

First, let’s look at an instance of the Msvm_ResourceAllocationSettingData class. The following was taken from WMI Explorer on a 2012 R2 system:

I’ve highlighted two items. The first is the ID of the virtual machine that this component belongs to. The second is the “ResourceSubType” field, which you can use to identify the component type. In this case, it’s a virtual serial port.

I chose to use WMI Explorer for this example because it’s a bit easier to read. The following code block shows three ways that I could have done it in WMI by starting from the virtual machine’s human-readable name:
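Roughly, those three approaches look like this. Treat it as a sketch: the VM name is hypothetical, and the ResourceSubType and InstanceID filter strings are assumptions based on the 2012 R2 v2 namespace.

```powershell
$vmName = 'svtest'   # hypothetical virtual machine name
$vm = Get-WmiObject -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' `
    -Filter "ElementName = '$vmName'"

# Method 1: walk the associations with GetRelated() and filter in PowerShell/.Net.
$vm.GetRelated('Msvm_VirtualSystemSettingData') |
    ForEach-Object { $_.GetRelated('Msvm_ResourceAllocationSettingData') } |
    Where-Object { $_.ResourceSubType -eq 'Microsoft:Hyper-V:Serial Port' }

# Method 2: a WQL query; the VM's GUID is in $vm.Name and appears in each component's InstanceID.
Get-WmiObject -Namespace 'root\virtualization\v2' -Query ("SELECT * FROM Msvm_ResourceAllocationSettingData " +
    "WHERE ResourceSubType = 'Microsoft:Hyper-V:Serial Port' AND InstanceID LIKE 'Microsoft:$($vm.Name)%'")

# Method 3: the same WQL condition, expressed through the -Class and -Filter parameters.
Get-WmiObject -Namespace 'root\virtualization\v2' -Class 'Msvm_ResourceAllocationSettingData' `
    -Filter "ResourceSubType = 'Microsoft:Hyper-V:Serial Port' AND InstanceID LIKE 'Microsoft:$($vm.Name)%'"
```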

The first technique utilizes the skills of the .Net and PowerShell savvy. The second and third methods invoke procedures familiar to SQL gurus.

Now that we’ve seen it in 2012 R2, let’s step over to 2016. I have configured the above virtual machine in Hyper-V Replica between 2012 R2 and 2016, so everything that you see is from the exact same virtual machine.

To begin, all three of the above methods return no results on 2016. The virtual machine still has its virtual serial ports, but they no longer appear as instances of Msvm_ResourceAllocationSettingData.

Now, we have:



I’ve highlighted a couple of things in that second entry that I believe are of interest. This entry certainly looks a great deal like the Msvm_ResourceAllocationSettingData class from 2012 R2, doesn’t it? However, it is an instance of Msvm_SerialPortSettingData. Otherwise, it’s structurally identical. You can even search for it using any of the three methods that I outlined above, provided that you change them to use the new class name.

I did not find any other missing subtypes, but I didn’t dig very deeply, either.

Associator Troubles?

I mentioned a bit earlier that I don’t use associators with the v2 namespace. I have seen a handful of reports that associator calls that did work in 2012 R2 do not work in 2016, although I have not investigated them myself. If that’s happened to you, just stop using associators. .Net and PowerShell automatically generate a GetRelated() method for every WMI object of type System.Management.ManagementObject. It has an optional String parameter that you can use to locate specific classes, if you know their names.

Find everything directly related to a specific virtual machine:
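A sketch, assuming $vm already holds the Msvm_ComputerSystem instance for the virtual machine (retrieved as in the earlier examples):

```powershell
# Every instance directly associated with the virtual machine, regardless of class.
$vm.GetRelated()
```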

Find a specific class related to a specific virtual machine:
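Pass the class name as the optional string parameter (again assuming $vm from above; the class shown here is just an example):

```powershell
# Only the directly associated instances of the named class.
$vm.GetRelated('Msvm_VirtualSystemSettingData')
```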

What the tools that I’ve shown you so far lack is the ability to quickly discover associations. The GetRelated() method allows you to discover connections yourself. To keep the output reasonable, filter it by the __CLASS field (that’s two leading underscores). The following shows the commands and the output from my system:
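A sketch of that filtering, with the same $vm assumption (your output will differ):

```powershell
# Reduce the output to a de-duplicated list of associated class names.
$vm.GetRelated() | Select-Object -ExpandProperty __CLASS | Sort-Object -Unique
```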

You can use this technique on the Script tab in WMI Explorer (which will run the script in an external window) and then cross-reference the results in the class list to rapidly discover how the various classes connect to each other.

You can also chain the GetRelated() method. Use the following to find all the various components of a virtual machine:
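One way to chain it, again assuming $vm from above; the hop through the settings data class is my choice of route, not the only one:

```powershell
# Hop from the VM to its settings data, then to every component attached to those settings.
$vm.GetRelated('Msvm_VirtualSystemSettingData') |
    ForEach-Object { $_.GetRelated() } |
    Select-Object __CLASS, ElementName -Unique
```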

Put WMI to Work

WMI is the most powerful tool that a Hyper-V administrator can call upon. You don’t need to worry about hurting anything, as you would need to directly invoke a method in order to make any changes. The Get-WmiObject cmdlet that I’ve shown you has no such powers.

If you’re willing to go deeper, though, you can certainly use WMI to change things. There are several properties that can only be changed through WMI, such as the BIOS GUID. In previous versions, some people would modify the XML files, but that was never supported. In 2016, the virtual machine file format is now proprietary and copes with manual alterations even more poorly than the old XML format. To truly sharpen your skillset, you need to learn WMI.

Nano Server AMA with Andrew Mason from Microsoft – Q&A Follow Up



NOTE: Please read THIS important update on the direction of Nano Server prior to using the below resources.

Hello once again everyone!

A few weeks ago, we put on a very special webinar here at Altaro where we had Andrew Mason from the Nano Server team at Microsoft on to answer all of your burning Nano Server questions. Both sessions were very well attended and the number of quality, engaging questions was amazing. It really made for a great webinar!

As we usually do after webinars, this post is intended to act as an ongoing resource for the content that was discussed during the webinar. Below you will find the recording in case you missed it, along with a written list of the questions, and their associated answers, that were not covered verbally during the Q & A due to time constraints.

Revisit the Webinar

Q & A

Q. Will we be able to run the Active Directory role on Nano Server in the future?

A. This is a frequent ask, which you can also vote for on the Windows Server User Voice HERE. We are investigating how to bring this to Nano Server, but at this time I don’t have a timeline to share.

Q. Will WSL eventually get into Nano Server? Could it eventually replace the instance of OpenSSH from GitHub?

A. WSL was added to Windows 10 to support developer scenarios, so we hadn’t been considering it for Nano Server. Since this is a remote management scenario, it would be interesting to understand how many people would want it for management, so please vote on User Voice HERE.

Q. Will there be support for boot from USB for Nano Server, Hyper-V nodes for instance?

A. This is not currently planned. There have been a lot of asks for SD boot. If this is an important scenario for you, please vote for it on user voice.

Q. Are there plans to use MS DirectAccess on Nano?

A. This is not currently planned due to the cloud focus we have for Nano Server. If this is an important scenario for you, please vote for it on User Voice.

Q. How does one manage a Nano server if Azure or an Azure Account is unavailable?

A. You can still use the standard MMC tools to remotely manage Nano Server on-prem, just like any other Windows Server.

Q. Are there any significant changes in licensing for Nano Server?

A. There are some licensing implications when using Nano Server. Altaro has an ebook on licensing Windows Server 2016 that includes some information about Nano Server HERE.

Q. Can you manage a Nano Server host with SCVMM 2012 R2?

A. Unfortunately no. SCVMM 2016 is needed to manage 2016 Nano Server hosts.

Q. Do you see a role for Nano Server in regard to on-prem Hyper-V environments?

A. Absolutely! Nano Server lends itself very well to running as a Hyper-V host. The attack surface is smaller, fewer resources are needed for the OS, and fewer reboots are needed due to patching. You can still manage it remotely just like any other Hyper-V host.

Q. How can I use the Anti-Malware options that are available in Nano Server?

A. Nano Server uses a Just-Enough-OS model, in that only the bits needed to run the OS are initially available. There is an Anti-Malware feature available; you just need to install it. More information on installing roles in Nano Server can be found HERE.

Q. Are iSCSI and MPIO usable on Nano Server?

A. Yes, they are; both can be installed and managed via PowerShell Remoting.

Q. How do you configure NIC teaming in Nano Server?

A. NIC teaming can be managed and configured via PowerShell. Take note, however, that the usual LBFO NIC teaming is not available on Nano Server and you will have to use the new Switch Embedded Teaming (SET) option that was released with Windows Server 2016.

Q. Does Altaro VM Backup support protecting VMs running on a Nano Server Hyper-V Host?

A. As Nano Server is such a radical departure from the usual Microsoft deployment options, we currently do not support backing up VMs on Nano Server hosts. We are looking at adding support for this deployment option but do not have a date to share at this time. Be sure to keep an eye on the Altaro blog for developments in this matter.


That wraps things up for our Q & A follow-up post. We had lots of great questions and loved seeing everyone actively participating in the webinar! As usual, if you think of any further follow-up questions, feel free to ask them in the comments section below and we’ll get back to you ASAP!

Thanks for reading!


Hyper-V and Linux: Changing Volume Names



I’ll say up front that this article is more about Linux than Hyper-V. It’s relevant here for anyone that duplicates a source VHDX to use with new Linux guests. In our how-to article on using Ubuntu Server as a Hyper-V guest, I counseled you to do that in order to shortcut installation steps on new systems. That article shows you how to rename the system so that it doesn’t collide with others. However, it doesn’t do anything with the volumes.

During Ubuntu installation (and, I assume, that of other distributions), the Logical Volume Manager (LVM) gives its volume groups the same base name as the system. So, using a copy of that VHDX for a new system leaves you with a name mismatch. Before you do anything, keep in mind that this state does not hurt anything! I’m going to show you how to change the volume group names, but you’re modifying a part of the system involved with booting (and probably other things). My systems still work fine, but these system-level changes only address a cosmetic problem. If you’re still interested, check your backups, take a checkpoint, and let’s get started!

Linux Volume Group Names: What We’re Solving

In case you’re in the dark as to what I’m talking about, run lvs on your system. My mismatched system looks like this:


This system is currently named “svlinuxtest”. It was built from a base system that was named “svlmon2”. I want to reiterate that nothing is broken here. I see the volume group name pop up occasionally, such as during boot. That’s it. If I leave this alone, everything will be perfectly fine. On Windows, changing a volume name is trivial because the system only cares about drive letters and volume IDs. On Linux, volume names have importance. You take a risk when changing them, especially for volume groups that contain system data.

Renaming Volume Groups on Ubuntu Server

These instructions were written using Ubuntu Server. I assume that they will work for any system using lvm.

Read step 7 before you do anything! If that bothers you, leave this whole thing alone! A consolidated sketch of the full command sequence appears after the list.

  1. Make sure that you have a fresh backup and/or a checkpoint. I haven’t had problems yet, but…
  2. If you don’t already know the current volume group, use sudo lvs as shown above to list them. Decide on a new name. Take special care to match the spelling exactly throughout these directions!
  3. Use lvm to change to a new name: sudo lvm vgrename oldname newname.
  4. Change the volume name in the /etc/fstab file to match the new name.
    1. METHOD 1: Use the nano visual editor: sudo nano /etc/fstab. Notice that the entries in fstab use TWO hyphens between the name and the “vg-volumename” suffix whereas outputs show only ONE. Leave the extra hyphen alone! Once you’ve made your changes, use [CTRL+X] to exit, pressing [Y] to save when prompted.
    2. METHOD 2: Use sed. This method is faster than nano and only gives you one shot at making a typo instead of two. You don’t get any visual feedback, though (although you could open it in nano or less or some other editor/reader, of course): sudo sed -i "s/oldname/newname/g" /etc/fstab .
  5. Change the volume name in /boot/grub/grub.cfg just as you did with /etc/fstab.
    1. METHOD 1: Use the nano visual editor: sudo nano /boot/grub/grub.cfg. This file is much larger than fstab and you will likely have many entries to change. As with fstab, there are TWO hyphens between the volume group name and the “vg-volumename” suffix. Leave them alone. If you’re going to use nano for this, I’d recommend that you employ its search and replace feature.
      1. Press [CTRL+\].
      2. Type the original name in the Search (to replace) prompt and press [Enter]:
      3. Type the new name in the Replace with prompt and press [Enter]:
      4. Working with the assumption that you didn’t use an original name that might otherwise appear in this file (like, say, “Ubuntu”), you can press [A] at the Replace this instance? prompt. If you want to be certain that you’re not overwriting anything important, press [Y] and [N] where appropriate to step through the file.
      5. Use [CTRL+X] when finished and [Y] to save the file.
    2. METHOD 2: Use sed. As with /etc/fstab, this method is faster than nano. It allows you to change all of the entries at once without prompting (even if you type the name wrong). If you used sed to change fstab, then you can press the up arrow to retrieve it from the buffer and change only the filename: sudo sed -i "s/oldname/newname/g" /boot/grub/grub.cfg.
  6. Apply the previous file changes to initramfs: sudo update-initramfs -u. This operation can take a bit of time, so don’t panic if it doesn’t return right away. I also sometimes get I/O errors mentioning “fd0”. That’s the floppy disk. Since I don’t have a floppy disk in the system, I don’t worry about those errors.
  7. Shut down the system. While it’s probably not urgent, I recommend doing this immediately: sudo shutdown now. You could use -r if you want, but it won’t matter. The system will almost undoubtedly hang on shutdown! That’s because the volumes that it wants to dismount by name no longer have that name. Just wait for the shutdown process to hang, then use the Hyper-V console’s reset button. Or, if you’re not using Hyper-V, whatever reset method works for you.
  8. Test EVERYTHING. sudo lvs for certain. Make sure your services are functioning. Once everything looks good, test a restart: sudo shutdown -r now. It should not be hanging anymore.
  9. Remove any checkpoints.
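Taken together, the steps above boil down to roughly this sequence. The names are hypothetical, and it assumes the volume group follows Ubuntu’s default “hostname-vg” naming; double-check every value before running anything.

```bash
sudo lvs                                                    # confirm the current volume group name
sudo lvm vgrename svlmon2-vg svlinuxtest-vg                 # step 3: hypothetical old/new names
sudo sed -i "s/svlmon2/svlinuxtest/g" /etc/fstab            # step 4: the double hyphens survive untouched
sudo sed -i "s/svlmon2/svlinuxtest/g" /boot/grub/grub.cfg   # step 5
sudo update-initramfs -u                                    # step 6
sudo shutdown now                                           # step 7: expect the hang described above
```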

Volume Group Rename Notes

I performed a fair bit of research to come up with these directions and found a lot of conflicting information. One place said that it wasn’t necessary to update initramfs. If you follow that advice, everything will most likely still function. However, you’ll get errors at boot up that volume groups cannot be found. Those messages will include the original volume names. They’ll also be repeated a few times, which appears to delay bootup a bit. I’m not sure if any other problems will arise if you don’t follow these directions as listed.

I’m also not entirely certain that these directions reach 100% of all places that the volume groups are named. As always, we encourage comments with any additional information!

Planning for Hyper-V Replica




Before tackling this article, ensure that you already know what Hyper-V Replica is. If not, follow this link. I also trust that you understand that you are not building a replacement for backup. This post will help you verify that you have what you need so that you can create a successful Hyper-V Replica deployment. We are saving the “how-to” for building that system for a later article.

Hyper-V Replica Prerequisites

Hyper-V Replica originally released with the 2012 Server platform. I am deliberately talking only about the 2012 R2 and 2016 platforms. Most of what I say here will apply to 2012, but I don’t believe that there are enough new installations on that version to justify covering the differences. If you’re one of the handful of exceptions, I doubt that you’ll have many troubles.

Before you begin, you must have all of these items:

  • At least two systems running a server edition of Hyper-V. For best results, use the same version. Hyper-V Replica does work up-level from 2012 R2 to 2016. As long as the virtual machine configuration version remains at 5.0, 2016 can replicate down to 2012 R2. 2012 can replicate up to 2012 R2, but 2012 R2 cannot reverse replication down to 2012.  I would not expect 2012 to 2016 to work at all, but I didn’t test it and could not find anything about it.
  • Sufficient storage on the replica server to hold the replica, with extra room for change logs and recovery points. The extra space needed depends on the rapidity of changes in the virtual machine(s) and on how many recovery points you wish to keep.
  • Reliable network connectivity between the source and replica hosts.
  • All hosts involved in Replica must be in the same or trusting domains to use Kerberos authentication.
  • All hosts involved in Replica must be able to validate certificates presented by the other host(s) in order to use certificate authentication. The certificates used must be enabled for “Server Authentication” enhanced key usage.
  • A configurable firewall. Replica defaults to port 80 for Kerberos authentication or 443 for certificate authentication.

Preparing to Use Hyper-V Replica Securely

Today’s world demands that you think securely from the beginning of a project. The most common use for Hyper-V Replica involves transmitting data across the Internet. Walk into this project knowing that your data can be intercepted.

Mutual Host Authentication is Required

Hyper-V Replica will not function if the source host cannot verify the identity of the target host and vice versa. This is a good thing, but it can also be a bothersome thing. You have two options: Kerberos authentication and certificate-based authentication.

If your replica traffic will directly traverse an unsecured network (the Internet), do not use Kerberos authentication. The source and replica servers will securely authenticate each other’s identities, but the replica traffic is not encrypted. However, if you are using a secured tunnel such as a site-to-site VPN, then feel free to use Kerberos. There is little value in using an encrypted tunnel to carry encrypted traffic. Also, because certificate-based encryption is asymmetrical, the encrypted packets are much larger than the unencrypted source. Double encryption dramatically increases the payload size.

Pros of Kerberos-based Authentication

  • If both hosts are in the same or trusting domains, Kerberos authentication is “fire-and-forget” simple. Just select that dot on their configuration pages and you’re set.
  • Synchronization traffic is unencrypted, so it requires the least amount of processing and network resources.
  • Simple, centralized emergency management of the hosts. If a system at a remote site is compromised, you can disable its object in Active Directory and it will no longer be valid for replication.

Cons of Kerberos-based Authentication

  • No option to encrypt synchronization traffic within Hyper-V. IPSec and encrypted VPN tunnels are viable alternatives.
  • Cannot fail over a domain controller covered by Hyper-V Replica unless another domain controller is available. You can eliminate this problem by allowing Active Directory to replicate itself.
  • “Only” works for domain-joined hosts. Leaving Hyper-V hosts out of the domain isn’t smart practice anyway, so competent administration eliminates this problem unless using an outside service provider.

Pros of Certificate-Based Authentication

  • Hosts do not need any other method to authenticate each other. This approach works well for service providers.
  • All traffic is encrypted. As long as the hosts’ private keys are adequately protected, it’s as safe as anything can be to transmit certificate-based Hyper-V Replica traffic directly across the Internet.

Cons of Certificate-Based Authentication

  • Certificate-based encryption results in higher CPU usage and much larger traffic requirements.
  • PKI and certificates can be difficult and confusing for those that don’t use PKI often.
  • Certificates expire periodically and must be manually renewed, redistributed, and selected in Replica configuration screens.
  • If you don’t maintain your own PKI, you’ll need to purchase certificates from a third party. This might also be necessary when working with a Hyper-V Replica service provider.

Make the decision about which type of authentication to use before proceeding.

Acquiring Certificates to Use with Hyper-V Replica

It is possible to use self-signed certificates for Hyper-V Replica, but it is not recommended. Self-signed certificates do not utilize any type of external arbiter; therefore, the hosts are not truly able to authenticate each other.

There are two recommended ways to acquire certificates for Hyper-V Replica:

  • A third-party trusted certificate provider. These certificates cost money, but all of the not-fun bits of managing a PKI are left to someone else. If you shop around, you can usually find certificates at a reasonable price. These are most useful when you do not own all of the Hyper-V hosts in the replica chain.
  • An internal Certificate Authority. If you own all of the Hyper-V hosts, then it won’t matter a great deal that they only use your resources for authentication. Even if some or all of the Hyper-V hosts aren’t in your domain, you can add your CA’s certificate to their trusted lists and then they’ll trust the certificates that it issues.

Making certificate requests is really not difficult, but there are a lot of steps involved. The most comprehensive walkthrough that I’m aware of is the one that I wrote for the Hyper-V Security book that I co-wrote with Andy Syrewicze. The bad news is that, since it seems to be one-of-a-kind, I can’t duplicate it here. You can find several other examples, although there are so many variables and possibilities that you may struggle a bit to find one that perfectly matches your situation. This certificate enrollment walkthrough looks promising: https://social.technet.microsoft.com/wiki/contents/articles/10377.create-a-certificate-request-using-microsoft-management-console-mmc.aspx. It’s for domains, but it does show you how to get the CSR text. You’ll need that if you’re going to request from a third-party or a disconnected system.

If you want to set up your own Active Directory-based PKI, be warned that you are facing a non-trivial task made worse by poorly designed and documented tools. The “official” documentation isn’t great. I’ve had better luck with this: https://www.derekseaman.com/2014/01/windows-server-2012-r2-two-tier-pki-ca-pt-1.html. It’s not perfect either, but it’s better than the “official” documentation. If you don’t have any other use for PKI, I recommend that you save your sanity by spending a few dollars on some cheap third-party certificates.

Hyper-V Replica Certificate Requirements

If you already know how to make a certificate request, this is a simple checklist of the requirements:

  • Enhanced Key Usage must be: Client Authentication, Server Authentication. This is the default for the Computer certificate template if you are using a Windows PKI.
  • Common Name (on the Subject tab) must be the name of the system as it will be presented to the other Replica server(s) that it communicates with. So, if you’re connecting over the Internet to target.mydomain.com, then that must be the Common Name on the Subject of the certificate and/or a Subject Alternate Name.
  • Subject Alternate Name (SAN). This is also on the Subject tab. You want to add DNS entries. If your replica host is going to be addressed by a name other than its computer name, then that name must at least appear in the Subject Alternate Name list. If the target system is a cluster and the other Replica server(s) will be connecting to it via its Cluster Name Object, then your certificate must use that FQDN as the Common Name or as one of the Subject Alternate Names. Because the certificate can be used for more purposes than just replica, I typically use all of these items in the SAN fields:
    • Cluster Name Object DNS name
    • Replica Name Object DNS name
    • Internal DNS name of each node
  • 2048-bit key length. The default is 1024-bit, so ensure that you change it.

A warning note on Subject Alternate Name: If you are using an internal Active Directory-based PKI, the default configuration for the Computer certificate template may prevent you from using Subject Alternate Names. You may fill out the fields correctly, but then discover that the issued certificate contains no SANs. I typically create my own certificate templates from scratch to avoid any issues.
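If you’re comfortable skipping the MMC, the checklist above can be expressed in a certreq policy file. This is only a sketch: the subject and SAN names are placeholders that you’d swap for your own, and your CA or provider may want additional fields.

```powershell
# Build a certreq policy file matching the checklist above, then generate a CSR
# that can be submitted to an internal CA or to a third-party certificate provider.
$inf = @'
[Version]
Signature = "$Windows NT$"

[NewRequest]
Subject = "CN=target.mydomain.com"
KeyLength = 2048
MachineKeySet = TRUE
Exportable = FALSE
RequestType = PKCS10

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.1 ; Server Authentication
OID = 1.3.6.1.5.5.7.3.2 ; Client Authentication

[Extensions]
2.5.29.17 = "{text}"
_continue_ = "dns=target.mydomain.com&"
_continue_ = "dns=hvcluster.mydomain.internal&"
_continue_ = "dns=svhv01.mydomain.internal&"
'@
Set-Content -Path .\replica-request.inf -Value $inf
certreq.exe -new .\replica-request.inf .\replica-request.csr
```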



Have your certificates installed on each host before you configure Hyper-V Replica.

Selecting Virtual Machines to Use with Replica

It’s not a given that you’ll want to replicate every virtual machine. The very first link in this article spends some time on this topic, so I’m only going to briefly touch upon it here. Avoid using Hyper-V Replica with any technology that has its own replication mechanisms. Active Directory, Exchange Server, and SQL Server are technologies that I strongly discourage mixing with Hyper-V Replica.

Remember that Hyper-V Replica does not save on licensing in most configurations. You cannot build a virtual machine running Active Directory Domain Services and then create a replica of it for free. The replica virtual machine must also be licensed. If you have Software Assurance on the license that covers the source virtual machine, then that does cover the replica. However, that’s not “free”. I do not know how replica is handled by the licensing terms of non-operating system software, so consult with your licensing reseller. Do not make assumptions and do not attempt to use replica to side-step licensing requirements. The fines are prohibitively high and auditors never accept ignorance as an excuse.

Selecting Virtual Machine Data to Exclude from Replica

Just because you want a virtual machine replicated doesn’t mean that you want all of that virtual machine replicated. Hyper-V Replica has the ability to skip specified disks. Many people will move the page file for a virtual machine to a separate disk just to keep it out of replica. There are other uses for this ability as well. Think through what you want left out.

Selecting Hardware to Use with Replica

If you want to make the simplest choice, buy the same hardware for Hyper-V Replica that you use for your primary systems. That’s rarely the most fiscally sound choice, however. Keep these points in mind when sizing replica hardware:


  • Hyper-V Replica is intended for recovery and/or continuity through a disaster, not as an ordinary running mode
  • Disasters tend to alter usage patterns; staff re-tasks to other duties, customers have other things to do, etc.
  • Using hardware in another physical location will likely cause other logistical access restrictions. For example, your primary office location may house fifty on-site staff. Your replica site may have sufficient room for five.

I am unable to make any solid general recommendations. If you’re not certain, I would recommend purchasing a system that is at least similar to your primary. If you’re really uncertain, hire a consultant.

If you’re thinking about using a smaller system for the replica site, remember these things:

  • You can replicate from a cluster to a standalone host and vice versa.
  • You can replicate from a cluster to a smaller cluster and vice versa.

Replica Site Networking

Set aside time to think through your networking design for replica. You absolutely do not want to be stumbling over it in the middle of a crisis. There are three basic ways to approach this.

Use Completely Separate Networks

My preferred way is to build distinct networks at each site. You’ll invest more effort in this design, but you’ll need substantially less to maintain it. You do not need to build an elaborate system.


One option that you have to make this work is DHCP. The very simplest way is to have all services configured to use DNS names and just allow DHCP and DNS to do their jobs. That concept makes a lot of people nervous, though, and you won’t always have that option anyway. In that case, set each virtual machine to use a static MAC address. Since you are hopefully keeping an active domain controller in each site, throw DHCP and DNS in as well (separate servers, if it suits your environment). Use DHCP reservations unique to each site.

If you don’t want to use DHCP, then you can configure failover IPs for each virtual machine individually. That’s the most work, but it gives you a guaranteed outcome.
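The Hyper-V module exposes this as failover TCP/IP settings on the virtual machine’s network adapter, configured on the replica side. A sketch with hypothetical names and addresses:

```powershell
# Keep the MAC stable so any DHCP reservations keep working after failover.
Set-VMNetworkAdapter -VMName 'svapp01' -StaticMacAddress '00155D017A0B'

# Set the addressing the VM should use when it fails over (run against the replica copy).
Set-VMNetworkAdapterFailoverConfiguration -VMName 'svapp01' `
    -IPv4Address '192.168.50.20' -IPv4SubnetMask '255.255.255.0' `
    -IPv4DefaultGateway '192.168.50.1' -IPv4PreferredDNSServer '192.168.50.10'
```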

The nice thing about this setup is that it will work even if you haven’t got a VPN. Active Directory also works wonderfully. You configure the second IP range as its own site in Active Directory Sites and Services, and it just knows what to do… if there’s a VPN. Active Directory replication does work over a VPN-less Internet connection, but you’ll need to do some configuration.

Use a Stretched Network

A “stretched network”, sometimes called a “stretched VLAN”, exists when layer 2 traffic can move freely from one site to another. A stretched network allows you to keep the same IP addresses for your source and replica systems. It’s conceptually simple and requires little effort on the part of the Hyper-V administrator, but networking can be a challenge.


When all is well, a stretched network isn’t a big deal. However, you’re building replica specifically for the times when all is not well. The network shown will need a router so that the virtual machines can communicate off of their VLAN. So, let’s say that we have a router with the .1 address in the primary site. What happens when site 1 is down and you’re running from replicas? That .1 address is in the primary site and unreachable. There are ways to deal with this, but someone needs to have the networking know-how to make it work. For instance, you could have a router in site 2 and have it inserted into each machine’s routing table. But how? If it’s a persistent mapping with .1 as the default, then any machines in site 2 will always route through site 1 even though it’s inefficient. If you take some sort of dynamic approach, you then have another thing to deal with during a crisis.

Active Directory won’t like this setup. It will work, but it will function as though all systems were in the same site. It’s not ideal.

I recommend against a stretched network unless you have sufficient networking knowledge available to deal with these sorts of issues.

Use a Mirrored Network

I’m fairly certain that “mirrored network” isn’t a real term, so I’m just going to make it up for this article. What I mean by a “mirrored network” is that the same IP range appears in each site, but they aren’t really the same network. This would get around the routing problem of the stretched VLAN. Unfortunately, it introduces other issues.


The big difference here is that the two sites have no direct connectivity of any kind. They’ll reference each other by external IPs. That’s what makes this “mirrored” network possible.

The issues that you’re going to encounter will be around anything Active Directory related. You won’t be able to have two sites for the same reason that you couldn’t with a stretched network. You might be able to do some finagling to get them to communicate over the Internet, but you have to be careful that you don’t inadvertently cause a collision between your two networks that have the same IPs but aren’t the same.

I can see the appeal in this design, but I don’t like it for anything but very small systems. Even then I’m not sure that I like it.

Replica Site User Access

If you wanted to describe Hyper-V Replica in the least abstract way possible, you could say that it transfers your data center to an alternative site. It doesn’t move your users, though. How you attach users to the services in the new location will depend on a great many factors. For things like Outlook Anywhere, it’s a DNS change. For other things, you’re going to need to bring people on site. I can’t give great advice here because there are so many possibilities. You need to make many decisions. They need to be made before Replica begins.

Initial (Seed) Replication

You might have a great deal of data to move to your replica site. For instance, let’s say that you have a 1 terabyte database and your remote site is at the other end of a T1 line. At the T1’s peak rate of roughly 1.5 megabits per second, that’s about two months of transfer time (8 × 10^12 bits ÷ 1.544 × 10^6 bits per second ≈ 60 days), just for the database. If asymmetric encryption roughly doubles the payload, the estimate doubles along with it.

Hyper-V Replica allows you to perform the initial replication using portable media. It’s much faster, but it’s still going to require time. And portable media. Have all of this planned and ready to go.

Planned Failovers

It’s imperative that you test failover on a regular basis. You won’t necessarily need to test every virtual machine, but you need to test at least one. Consider building a virtual machine just for this purpose. Failovers need to be on the calendar. Responsible staff need to be designated and held accountable.

Unplanned Failover Criteria

It needs to be made clear to all interested parties that a Hyper-V Replica failover is a non-trivial event. A failover requires downtime. There are often unforeseen problems with using a replica site. The decision to fail over to a replica site needs to be made by management staff. The criteria for a “crisis situation demanding a replica failover” need to be defined when there isn’t a crisis, not in the middle of one. Clearly define who will make the determination that a failover is required.

Build the Replica System

Once all of these items have been satisfied, you can begin building your replica system. We’ll have an upcoming article that explains the procedure. But, if you have all of the items in this article prepared, you’ll find that you have already done all of the hard work.

Free Hyper-V Script: Update WDS Boot ID for a Virtual Machine



For quite some time, I’ve been wanting to write an article on the wonders of Windows Deployment Services (WDS) and Hyper-V. Most of the other techniques that we use to deploy hosts and virtual machines are largely junk. WDS’s learning curve is short, and it doesn’t require many resources to stand up, operate, or maintain. It’s one of those technologies that you didn’t know that you couldn’t live without until the first time you see it in action.

This article is not the article that explains all of that to you. This article is the one that knocks out the most persistent annoyance to fully using WDS.

What is Windows Deployment Services?

Before I can explain why you want this script, I need to explain Windows Deployment Services (WDS). If you’re already familiar with WDS, I assume that you’re already familiar with your scroll wheel as well.

WDS is a small service that sits on your network listening for PXE boot requests. When it intercepts one, it ships a boot image to the requesting machine. From there, it can start an operating system (think diskless workstations) or it can fire up an operating system installer. I use it for the latter. The best part is that WDS is highly Active Directory integrated. Not only will it install the operating system for you, it will automatically place it in the directory. Even better, you can take a tiny snip of information from the target computer and place it into a particular attribute of an Active Directory computer object. WDS will match the newly installed computer to the AD object, so it will start life with the computer name, OU, and group policies that you want.

That tiny little piece of information needed for WDS to match the computer to the object is the tiny annoyance that I spoke of earlier. You must boot the machine in PXE mode and capture its GUID:



Modern server systems can take a very long time to boot; if you miss the GUID, you must start over, and you must manually transcribe the digits. Fun, right?

Well, virtual machines don’t have any of those problems. Extracting the BIOS GUID still isn’t the most pleasant thing that you’ll ever do, though. It’s worse if you don’t know how to use CIM and/or WMI. It’s almost easier to just boot the VM the way you would a physical machine. That’s where this script comes in.

Script Prerequisites

The script itself has these requirements:

  • PowerShell version 4 or later
  • The Active Directory PowerShell module. You could run it from a domain controller, although that’s about as smart as starting a land war in Asia. Use the Remote Server Administration Tools instead.
  • Must be run from a domain member. I’m not sure if the AD PS module would work otherwise anyway.

There is no dependency on the Hyper-V PowerShell module. I specifically built it to work with native CIM cmdlets since I’m already forcing you to use the AD module.

I tested from a Windows 10 system against a Hyper-V Server 2016 system. I tested from a Windows Server 2012 R2 system still using PowerShell 4.0 against a Hyper-V Server 2012 R2 system. All machines are in the same domain, which is 2012 R2 forest and domain functional level.

The Script

The script is displayed below. Copy/paste into your PowerShell editor of choice and save it to a system that has the Active Directory cmdlet module installed.


  • VM: This will accept a string (name), a Hyper-V VM object (from Get-VM, etc.), a WMI object of type Msvm_ComputerSystem, or a CIM object of type Msvm_ComputerSystem.
  • ComputerName: The name of the Hyper-V host for the VM that you want to work with. This field is only used if the VM is specified as a string. For any of the other types, the computer name is extracted from the passed-in object.
  • ADObjectName: If the Active Directory object name is different from the VM’s name, this parameter will be used to name the AD object. If not specified, the VM’s name will be used.
  • Create: A switch that indicates that you wish to create an AD object if one does not exist. If you don’t specify this, the script will error if it can’t find a matching AD object. The object will be created in the default OU. Can be used with CreateInOU, but it’s not necessary to use both.
  • CreateInOU: An Active Directory OU object where you wish to create the computer. This must be a true object; use Get-ADOrganizationalUnit to generate it. This can be used with the Create parameter, but it’s not necessary to use both.
  • WhatIf: Shows you what will happen, as usual. Useful if you just want to see what the VM’s BIOS GUID is without learning WMI/CIM or going through the hassle of booting it up.
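To give you a sense of what happens under the hood, here is a minimal sketch of the core operation, not the script itself; the names are hypothetical, and the full script adds the lookup, creation, and safety logic described above.

```powershell
# Read a VM's BIOS GUID through CIM, then stamp it into the AD computer object's netbootGUID attribute.
$vmName       = 'svtest'      # hypothetical VM name
$hyperVHost   = 'svhv01'      # hypothetical Hyper-V host
$adObjectName = $vmName

# The realized (non-snapshot) settings instance carries the BIOSGUID property.
$vssd = Get-CimInstance -ComputerName $hyperVHost -Namespace 'root\virtualization\v2' `
    -ClassName 'Msvm_VirtualSystemSettingData' `
    -Filter "ElementName = '$vmName' AND VirtualSystemType = 'Microsoft:Hyper-V:System:Realized'"

# BIOSGUID arrives wrapped in braces; convert it to the byte array that netbootGUID expects.
$guidBytes = [System.Guid]::Parse($vssd.BIOSGUID).ToByteArray()

# Replace (not append) the netbootGUID value on the matching computer object.
Set-ADComputer -Identity $adObjectName -Replace @{ netbootGUID = $guidBytes }
```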

The script includes complete support for Get-Help. It contains numerous examples to help you get started. If you’re uncertain, leverage -WhatIf until things look as you expect.

Script Discussion

Once the script completes, its results should be instantly viewable in the WDS console:


There are a few additional things to note.

Potential for Data Loss

This script executes a Replace function on the netbootGUID property of an Active Directory computer object. Target wisely.

I always set my WDS server to require users to press F12 before an image is pushed. If you’re not doing that, then the next time a configured VM starts and contacts the PXE server, it will drop right into setup. If you’ve got all the scaffolding set up for it to jump straight into unattend mode… Well, just be careful.

Other WDS-Related Fields

I elected to only set the BIOS GUID because that is the toughest part. It would be possible to set other WDS-related items, such as the WDS server, but that would have made the script quite a bit more complicated. I am using the Active Directory “Replace” function to place the BIOS GUID. I could easily slip in a few other fields, but you’d be required to specify them each time or any existing settings would be wiped out. The scaffolding necessary to adequately control that behavior would be significant. It would be easier to write other scripts that were similar in build to this one to adjust other fields.

Further Work

I still have it in my to-do list to work up a good article on Windows Deployment Services with Hyper-V. It’s not a complicated technology, so I encourage any and all self-starters to spin up a WDS system and start poking around. It’s nice to never need to scrounge for install disks/ISOs or dig for USB keys or bother with templates. I’ve also got my WDS system integrated with the automated WSUS script that I wrote earlier, so I know that my deployment images are up to date. These are all tools that can make your life much easier. I’ll do my best to get that article out soon, but I’m encouraging you to get started right away anyway.
