Performance Impact of Hyper-V CPU Compatibility Mode

If there’s anything in the Hyper-V world that’s difficult to get good information on, it’s the CPU compatibility setting. Very little official documentation exists, and what does exist only tells you the why and the how. I, for one, would like to know a bit more about the what. That will be the focus of this article.

What is Hyper-V CPU Compatibility Mode?

Hyper-V CPU compatibility mode is a per-virtual machine setting that allows Live Migration to a physical host running a different CPU model (but not manufacturer). It performs this feat by masking the CPU’s feature set to one that exists on all CPUs that are capable of running Hyper-V. In essence, it prevents the virtual machine from trying to use any advanced CPU instructions that may not be present on other hosts.

Does Hyper-V’s CPU Compatibility Mode Impact the Performance of My Virtual Machine?

If you want a simple and quick answer, then: probably not. The number of people that will be able to detect any difference at all will be very low. The number of people that will be impacted to the point that they need to stop using compatibility mode will be nearly non-existent. If you use a CPU benchmarking tool, then you will see a difference, and probably a marked one. If that’s the only way that you can detect a difference, then that difference does not matter.

I will have a much longer-winded explanation, but I wanted to get that out of the way first.

How Do I Set CPU Compatibility Mode?

Luke wrote a thorough article on setting Hyper-V’s CPU compatibility mode. You’ll find your answer there.
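For quick reference, the setting is also exposed through the Hyper-V PowerShell module. A minimal sketch (the VM name is a placeholder; the virtual machine must be off when you change the setting):

```powershell
# Compatibility mode is a per-VM processor setting; the VM must be Off to change it.
# Replace 'svtest' with your virtual machine's name.
Set-VMProcessor -VMName 'svtest' -CompatibilityForMigrationEnabled $true

# Verify the current setting:
Get-VMProcessor -VMName 'svtest' |
    Select-Object VMName, CompatibilityForMigrationEnabled
```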

A Primer on CPU Performance

For most of us in the computer field, CPU design is a black art. It requires an understanding of electrical engineering, a field that combines physics and logic. There’s no way you’ll build a processor if you can’t comprehend both how a NAND gate functions and why you’d want it to do that. It’s more than a little complicated. Therefore, most of us have settled on a few simple metrics to decide which CPUs are “better”. I’m going to do “better” than that.

CPU Clock Speed

Clock speed is typically the first thing that people want to know about a CPU. It’s a decent bellwether for performance, although an imprecise one.

A CPU is a binary device. Most people interpret that to mean that a CPU operates on zeros and ones. That’s conceptually accurate but physically untrue. A CPU interprets electrical signals above a specific voltage threshold as a “one”; anything below that threshold is a “zero”. Truthfully speaking, even that description is wrong. The silicon components inside a CPU will react one way when sufficient voltage is present and a different way in the absence of such voltage. To make that a bit simpler, if the result of an instruction is “zero”, then there’s little or no voltage. If the result of an instruction is “one”, then there is significantly more voltage.

Using low and high voltages, we solve the problem of how a CPU functions and produces results. The next problem that we have is how to keep those instructions and results from running into each other. It’s often said that “time is what keeps everything from happening at once”. That is precisely the purpose of the CPU clock. When you want to send an instruction, you ensure that the input line(s) have the necessary voltages at the start of a clock cycle. When you want to check the results, you check the output lines at the start of a clock cycle. It’s a bit more complicated than that, and current CPUs time off of more points than just the beginning of a clock cycle, but that’s the gist of it.

[Image: cpucompat_clockcycles]

From this, we can conclude that increasing the clock speed gives us more opportunities to input instructions and read out results. That’s one way that performance has improved. As I said before, though, clock speed is not the most accurate predictor of performance.

Instructions per Cycle

The clock speed merely sets how often data can be put into or taken out of a CPU. It does not directly control how quickly a CPU operates. When a CPU “processes” an instruction, it’s really just electrons moving through logic gates. The clock speed can’t make any of that go any more quickly. It’s too easy to get bogged down in the particulars here, so we’ll just jump straight to the end: there is no guarantee that a CPU will be able to finish any given instruction in a clock cycle. That’s why overclockers don’t turn out earth-shattering results on modern CPUs.

That doesn’t mean that clock speed isn’t relevant. It’s common knowledge that an Intel 386 performed more instructions per cycle than a Pentium 4. However, the 386 topped out at 33 MHz, whereas the Pentium 4 started at over 1 GHz. No one would choose a 386 over a Pentium 4 when performance matters. When the clock speeds of two different chips are closer, though, internal efficiency trumps clock speed.
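The relationship is simple arithmetic: rough throughput is instructions per cycle multiplied by clock rate. A quick sketch, with IPC figures that are purely illustrative, not measured values:

```powershell
# Back-of-envelope throughput: instructions/second ~= IPC x clock rate.
# Both IPC values below are hypothetical, chosen only to illustrate the point.
$i386 = @{ Clock = 33e6;  IPC = 0.3 }   # higher IPC, very low clock
$p4   = @{ Clock = 1.5e9; IPC = 0.2 }   # lower IPC, very high clock
'386: {0:N0} instructions/sec' -f ($i386.Clock * $i386.IPC)
'P4:  {0:N0} instructions/sec' -f ($p4.Clock * $p4.IPC)
# Even with better per-cycle efficiency, the 386's low clock leaves it far behind.
```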

Instruction Sets

Truly exploring the depths of “internal efficiency” would take our little trip right down Black Arts Lane. I only have a 101-level education in electrical engineering, so I certainly will not be the chauffeur on that trip. However, the discussion includes instruction sets, a very large subtopic that is directly related to the subject of this article.

CPUs operate with two units: instructions and data. Data is always data, but if you’re a programmer, you probably use the term “code”. “Code” goes through an interpreter or a compiler which “decodes” it into instructions. Every CPU that I’ve ever worked with understood several common instructions: PUSH, POP, JE, JNE, CMP, etc. (for the sake of accuracy, those are actually mnemonics, but I figure it’s better than throwing a bunch of binary at you). All of these instructions appear in the 80x86 (often abbreviated as x86) and the AMD64 (often abbreviated as x64) instruction sets. If you haven’t figured it out by now, an instruction set is just a gathering of CPU instructions.

If you’ve been around for a while, you’ve probably at least heard the acronyms “CISC” and “RISC”. They’re largely marketing terms, but they have some technical merit. These acronyms stand for:

  • CISC: Complex Instruction Set Computer
  • RISC: Reduced Instruction Set Computer

In the abstract, a CISC system has all of the instructions available. A RISC system has only some of those instructions available. RISC is marketed as being faster than CISC, based on these principles:

  • I can do a lot of adding and subtracting more quickly than you can do a little long division.
  • With enough adding and subtracting, I have nearly the same outcome as your long division.
  • You don’t do that much long division anyway, so what good is all of that extra infrastructure to enable long division?

On the surface, the concepts are sound. In practice, it’s muddier. Maybe I can’t really add and subtract more quickly than you can perform long division. Maybe I can, but my results are so inaccurate that my work constantly needs to be redone. Maybe I need to do long division a lot more often than I thought. Also, there’s the ambiguity of it all. There’s really no such thing as a “complete” instruction set; we can always add more. Does a “CISC” 80386 become a “RISC” chip when the 80486 debuts with a larger instruction set? That’s why you don’t hear these terms anymore.

Enhanced Instruction Sets and Hyper-V Compatibility Mode

We’ve arrived at a convenient segue back to Hyper-V. We don’t think much about RISC vs. CISC anymore, but that’s not the only instruction set variance in the world. Instruction sets grow because electrical engineers are clever types: they tend to come up with new tasks, quicker ways to do old tasks, and ways to combine existing tasks for more efficient results. They also have employers that need to compete with other employers that have their own electrical engineers doing the same thing. To achieve their goals, engineers add instructions. To achieve their goals, employers bundle the instructions into proprietary instruction sets. Even the core x86 and x64 instruction sets go through revisions.

When you Live Migrate a virtual machine to a new host, you’re moving active processes. The system already initialized those processes to a particular instruction set. Some applications implement logic to detect the available instruction set, but no one checks it on the fly. If that instruction set were to change, your Live Migration would quickly become very dead. CPU compatibility mode exists to address that problem.

The Technical Differences of Compatibility Mode

If you use a CPU utility, you can directly see the differences that compatibility mode makes. These screenshot sets were taken of the same virtual machine on AMD and Intel systems, first with compatibility mode off, then with compatibility mode on.

[Image: cpucompat_amdcompat]

[Image: cpucompat_intelcompat]

The first thing to notice is that the available instruction set list shrinks just by setting compatibility mode, but everything else stays the same.

The second thing to notice is that the instruction sets are always radically different between an AMD system and an Intel system. That’s why you can’t Live Migrate between the two even with compatibility mode on.

Understanding Why CPU Compatibility Mode Isn’t a Problem

I implied in an earlier article that good systems administrators learn about CPUs and machine instructions and code. This is along the same lines, although I’m going to take you a bit deeper, to a place that I have little expectation that many of you would go on your own. My goal is to help you understand why you don’t need to worry about CPU compatibility mode.

There are two generic types of software application developers/toolsets:

  • Native/unmanaged: Native developers/toolsets work at a relatively low level. Their languages of choice will be assembler, C, C++, D, etc. The code that they write is built directly to machine instructions.
  • Interpreted/managed: The remaining developers use languages and toolsets whose products pass through at least one intermediate system. Their languages of choice will be Java, C#, JavaScript, PHP, etc. Those languages rely on external systems that are responsible for translating the code into machine instructions as needed, often on the fly (Just In Time, or “JIT”).

These divisions aren’t quite that rigid, but you get the general idea.

Native Code and CPU Compatibility

As a general rule, developing native code for enhanced CPU instruction sets is a conscious decision made twice. First, you must instruct your compiler to use these sets:

[Image: Visual Studio’s enhanced instruction set compiler options. There might be some hints here about one of my skunkworks projects.]

These are just the extensions that Visual Studio knows about. For anything more, you’re going to need some supporting files from the processor manufacturer. You might even need to select a compiler that has support built-in for those enhanced sets.

Second, you must specifically write code that calls on instructions from those sets. SSE code isn’t something that you just accidentally use.
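Native code that does use these sets typically guards the optimized path with an explicit run-time check. A minimal sketch of that pattern, using the real Win32 API IsProcessorFeaturePresent (feature code 10 is PF_XMMI64_INSTRUCTIONS_AVAILABLE, i.e. SSE2):

```powershell
# Sketch: how native code typically guards an enhanced-instruction code path.
# IsProcessorFeaturePresent is a documented kernel32 API; 10 = SSE2 available.
Add-Type -Namespace Win32 -Name Cpu -MemberDefinition @'
[DllImport("kernel32.dll")]
public static extern bool IsProcessorFeaturePresent(int feature);
'@
if ([Win32.Cpu]::IsProcessorFeaturePresent(10)) {
    'SSE2 available: the program may take its optimized code path.'
} else {
    'SSE2 absent: the program must fall back to generic instructions.'
}
```

Inside a virtual machine with compatibility mode enabled, checks like this simply report the masked feature set, which is exactly why well-written native code keeps working.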

Interpreted/Managed Code and CPU Compatibility

When you’re writing interpreted/managed code, you don’t (usually) get to decide anything about advanced CPU instructions. That’s because you don’t compile that kind of code to native machine instructions. Instead, a run-time engine operates your code. In the case of scripting languages, that happens on the fly. Languages like Java and C# are first compiled (is that the right word for Java?) into some sort of intermediate format: Java becomes bytecode; C# becomes Common Intermediate Language (CIL). An interpreter then executes that intermediate code.

It’s the interpreter that has the option of utilizing enhanced instruction sets. I don’t know offhand which of them do, but these interpreters all run on a wide range of hardware, so their developers must verify the existence of any enhancement that they intend to use before calling on it.

What These Things Mean for Compatibility

What this all means is that even if you don’t know if CPU compatibility affects the application that you’re using, the software manufacturer should certainly know. If the app requires the .Net Framework, then I would not be concerned at all. If it’s native/unmanaged code, the manufacturer should have had the foresight to list any required enhanced CPU capabilities in their requirements documentation.

In the absence of all other clues, these extensions are generally built around boosting multimedia performance. Video and audio encoding and decoding operations feature prominently in these extensions. If your application isn’t doing anything like that, then the odds are very low that it needs these extensions.

What These Things Do Not Mean for Compatibility

No matter what, your CPU’s maximum clock speed will be made available to your virtual machines. There is no throttling, there is no cache limiting, there is nothing other than a reduction of the available CPU instruction sets. Virtual machine performance is unlikely to be impacted at all.
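If you want to audit which virtual machines on a host are running in compatibility mode, the Hyper-V PowerShell module exposes the setting directly. A quick sketch (the cmdlets and property are real; exact output shape may vary by version):

```powershell
# Inventory: which VMs on this host have CPU compatibility mode enabled?
Get-VM | Get-VMProcessor |
    Select-Object VMName, CompatibilityForMigrationEnabled |
    Sort-Object VMName
```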

Free Tool: Advanced Settings Editor for Hyper-V Virtual Machines

Hyper-V’s various GUI tools allow you to modify most of the common virtual machine settings. With the Hyper-V PowerShell module, you can modify a few more. Some settings, however, remain out of easy reach. One particular WMI class contains the setting for the virtual machine’s NumLock key setting and a handful of identifiers. Manipulating that specific class is unpleasant, even if you’re versed in WMI.

I’ve previously written PowerShell scripts to deal with these fields. Those scripts tend to be long, complicated, and difficult to troubleshoot. So, after taking some time to familiarize myself with the Management Interface API, I’ve produced a full-fledged graphical application so that you can make these changes more simply.

The application looks like this when run against my system:

[Image: VM Editor main screen]

The single dialog shown contains the entire application. Following the design philosophy of Altaro Software, I made ease of use my primary design goal. It’s a busy dialog, however, so a quick walkthrough might help you get started.

Advanced VM Settings Editor: Walkthrough

The screen is designed to work from left to right. You can use the screenshot above as a reference point, if you’d like, but it’s probably easier to install the application and follow along with the real thing.

Begin by choosing a host.

Choosing a Hyper-V Host

Use the drop-down/edit control in the top left. You can type the name of a Hyper-V host and press [Enter] or push the Refresh Virtual Machine List button to attempt to connect to a host by name. It does not work with IP addresses. You can use the Browse button to locate a system in Active Directory. I did code the browse functionality to allow you to pick workgroup-joined hosts. I did not test it for workgroup connectivity, so I don’t know (or care) if that part works or not.

If you installed the app to work with local virtual machines, enter a period (.), “LOCALHOST”, or the name of the local computer.

Loading the Virtual Machine List

Upon choosing a host, the contained virtual machines will automatically appear in the list box at the far left of the dialog. If none appear, you should also receive an error message explaining why. Use the Refresh Virtual Machine List button to refresh the list items. The application does not automatically detect changes on the host, so you’ll need to use it if you suspect something is different (e.g., a virtual machine has Live Migrated away).

A valid host must be selected for this section to function.

Loading the Current Settings for a Virtual Machine

Clicking on any virtual machine item in the list at the left of the dialog will automatically populate the settings in the center of the dialog. It will also populate the power state readout at the top right.

Clicking on the same virtual machine will reload its settings. If you have made any changes and not applied them, you will be prompted.

Making Changes to a Virtual Machine

There are six options to modify, and six corresponding fields.

  • Enable NumLock at Boot: This field is a simple on/off toggle.
  • Baseboard Serial Number
  • BIOS GUID: If you are PXE booting virtual machines, this field contains the machine UUID. This field requires 32 hexadecimal characters. Because there are multiple ways for GUIDs/UUIDs to be formatted, I opted to allow you to enter any character that you like. Once it has found 32 hexadecimal characters (digits 0 through 9 and letters A through F, case insensitive), it will consider the field to be validly formatted. Any other characters, including hexadecimal characters after the 32nd, will be silently ignored.
  • BIOS Serial Number
  • Chassis Asset Tag
  • Chassis Serial Number

The text fields other than the BIOS GUID are limited to 32 characters. WMI imposes that limit, not the application.
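The BIOS GUID rule described above is easy to express in code. This is a hypothetical re-creation of the validation logic, not the application’s actual source (the function name is mine):

```powershell
# Hypothetical re-creation of the BIOS GUID field's rule: keep only hexadecimal
# characters, require at least 32 of them, and silently ignore everything else.
function Test-BiosGuidInput {
    param([string]$Text)
    $hex = $Text -replace '[^0-9A-Fa-f]', ''
    if ($hex.Length -lt 32) { return $null }   # not enough hex characters yet
    $hex.Substring(0, 32).ToUpper()            # characters past the 32nd are ignored
}

Test-BiosGuidInput '4C4C4544-0046-3510-8052-B4C04F443358'   # standard GUID format works
Test-BiosGuidInput 'not enough hex here'                    # returns nothing
```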

Viewing and Changing the Virtual Machine’s Power State

The power state is important because you cannot save changes to a virtual machine in any state other than Off. The current state is shown in the text box at the top right. Use the buttons underneath to control the virtual machine’s power state. Be aware that the Refresh State button is the only way to re-check what the virtual machine’s power state is without affecting any of the editable fields.

This application is not multi-threaded, so it will appear to hang on any long-running operation. The worst, by far, is the Graceful Shutdown feature.

Saving and Discarding Virtual Machine Changes

Use the Apply Changes and Reset Fields buttons to save any pending changes to the virtual machine or to discard them, respectively. If you attempt to save changes for a virtual machine that is not in the Off state, you will receive an error.

All of the error messages generated by the Apply Changes button are sent from WMI. I did not write them, so I may not be able to help you to decipher all of them. During testing, I occasionally received a message that the provider did not support the operation. When I got it, I just clicked the button again and the changes went through. However, I also stopped receiving that error message during development, so it’s entirely possible that it was just a bug in my code that was fixed during normal code review. If you receive it, just try again.

Usability Enhancements

The application boasts a couple of features meant to make your life a bit easier.

LOCALHOST Detection

As mentioned above, use a period (.), “LOCALHOST” or the local computer’s name to connect locally. I added some logic so that it should always know when you are connected locally. If it detects a local connection, the application will use DCOM instead of WinRM. Operations should be marginally faster. That said, I was impressed with the speed of the native MI binaries. If you’re accustomed to the delays of using WMI via PowerShell, I think you’ll be pleased as well.

However, I do have some concerns about the way that local host detection will function on systems that do not have WinRM enabled. If you can’t get the application to work locally, see if enabling WinRM fixes it (winrm qc at a command prompt or Enable-PSRemoting at an elevated PowerShell prompt).

Saving and Deleting Previously Connected Hosts

Once you successfully connect to a host, the application will remember it. If you don’t want a host in the list anymore, just hover over it and press [Delete].

If you’d like to edit the host list, look in %APPDATA%\Siron\VMEditor\vmhosts.txt. Each host is a single line in the text file. Short names and FQDNs are accepted.

Settings Validation Hints

As you type, the various fields will check that you are within ranges that WMI will accept. If you enter too many characters for any text field except BIOS GUID, the background of the text field will turn a reddish color. As long as you are within acceptable ranges, it will remain green. The BIOS GUID field will remain green as long as it detects 32 hexadecimal characters.

I realize that some people are red-green colorblind and may not be able to distinguish between the two colors. Proper validation is performed upon clicking Apply Changes.

Clear Error Messages

One of the things that drives me nuts about software developers is the utter gibberish they try to pass off as error messages. “Sorry, there was an error.” How useless is that? Sure, I know first hand how tiresome writing error messages can be. But, I figure, I voluntarily wrote this app. None of you made me do it. So, it’s not right to punish you with cryptic or pointless messages. Wherever possible, I wrote error messages that should clearly guide you toward a solution. Any time that I didn’t, it’s because I’m relaying a useless message from a lower layer, like “unknown”.

Application Security

In general, I have left security concerns to the Windows environment. The application runs under your user context, so it cannot do anything that you do not already have permission to do. WMI throws errors on invalid input, so sneaking something by my application won’t have much effect. WMI communications are always encrypted, and I can see it loading crypto DLLs in the debugger.

I did instruct the application to securely wipe the names and IDs of all virtual machines from memory on exit. I’m not certain that has any real security value, but it was trivial to implement so I did it.

Potential Future Enhancements

The application does everything that it promises, so I’m not certain that I’ll update it for anything beyond bug fixes. There are a few things that I already have in mind:

  • Multi-threaded/asynchronous. The application appears to hang on long-running operations. They aren’t usually overly annoying so it didn’t seem worth it to delay version 1 to add the necessary infrastructure.
  • Automatic detection of state changes. The API has techniques for applications to watch for changes, such as to power states. These would be nice, but they also require enough effort to implement that they would have delayed version 1.
  • Other visual indicators. I was brainstorming a few ways to give better visual feedback when a field contained invalid data, but ultimately decided to proceed along so that I could release version 1.
  • Other settings. This is already a very busy dialog. I can’t imagine widening its net without major changes. But, maybe this app does need to grow.

I think that a lot of these enhancements hinge on the popularity of the app and how much their absence impacts usability. The program does what it needs to do today, so I hate to start tinkering too much.

System Requirements

Testing was done extensively using Windows 7 and Windows 10 clients against Windows Server 2012 R2 Hyper-V hosts. Some testing was done on other platforms/clients. The supported list:

  • Hyper-V Hosts
    • Windows Server 2012 (not directly tested at all)
    • Windows Server 2012 R2
    • Windows Server 2016
    • Windows 8/8.1 (not directly tested at all)
    • Windows 10 (not directly tested at all)
  • Clients
    • Windows 7
    • Windows 8/8.1 (not directly tested at all)
    • Windows 10
    • Windows Server 2008 R2
    • Windows Server 2012 (not directly tested at all)
    • Windows Server 2012 R2
    • Windows Server 2016

Software Prerequisites

The client and the server must be running at least the Windows Management Framework version 3.0. The version of PowerShell on the system perfectly matches the version of WMF, so you can run $PSVersionTable at any command prompt to determine the WMF level. The MSI and EXE installers will warn you if you do not meet this requirement on the client. The standalone executable will complain about a missing MI.dll if the framework is not at a sufficient level. WMF 3.0 shipped with Windows 8 and Windows Server 2012, so this is mostly a concern for Windows 7 and Windows Server 2008 R2.
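Checking the WMF level takes one line at any PowerShell prompt:

```powershell
# WMF level tracks the PowerShell engine version; check it on both client and server.
$PSVersionTable.PSVersion

# Anything at major version 3 or higher satisfies the prerequisite:
if ($PSVersionTable.PSVersion.Major -ge 3) { 'WMF 3.0+ present' } else { 'Upgrade WMF first' }
```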

The client must have at least the Visual Studio 2015 C++ Redistributable installed. I did not intentionally use any newer Windows/Windows Server functionality, so earlier versions of the redistributable may work. If you use either of the installers, a merge module is included that will automatically add the necessary runtime files. If you use the EXE-only distribution and the required DLLs are not present, the application will not start. It also will not throw an error to tell you why it won’t start.

Downloading the Advanced VM Settings Editor for Hyper-V

You can download the application right from here. I have provided three packages.

  • Get the MSI package: SetupVMEditor1.0.0.msi.
  • Get the installer EXE: SetupVMEditor1.0.0.exe
  • Get the directly runnable EXE: VMEditor-PlainEXE.
    • Note: You must have the Windows Management Framework version 3.0 installed. You will receive an error about a missing “MI.dll” if this requirement is not met.
    • Note if not using an installer: You must have at least the Visual Studio 2015 C++ Redistributable installed. The application will not even open if the required DLLs are not present on your system.

Support, Warranty, Disclaimer, and All That

I wrote this application, not Altaro. There’s no warranty, there’s no promises, there’s no nothing. It’s 100% as-is, what you see is what you get, whatever happens, happens. It might do something that I didn’t intend for it to do. It might do things that you don’t want it to do. Do not call Altaro asking for help. They gave me a place to distribute my work and that’s the end of their involvement.

I’ll provide limited support here in the comments. However, my time is limited, so help me to help you. I will not respond to “it didn’t work” and “I tried but got an error the end” messages. If you got an error, tell me what the error was. Tell me your OS. Tell me what I can do to reproduce the error.

I am specifically excluding providing support for any problems that arise from attempting to use the application on a workgroup-joined host. I only test in domain environments. If it happens to work in a workgroup, that’s great! If not, too bad!

Undocumented Changes to Hyper-V 2016 WMI

We all know that IT is an ongoing educational experience. Most of that learning is incremental. I can only point to a few times in my career in which a single educational endeavor translated directly to a major change in the course of my career. One of those was reading Richard Siddaway’s PowerShell and WMI. It’s old enough that large patches of the examples in that work are outdated, but the lessons and principles are sound. I can tell you that it’s still worth the purchase price, and more importantly that if this man says anything about WMI, you should listen. You can imagine how excited I was to see that Richard had begun contributing to the Altaro blog.

WMI can be challenging though, and it doesn’t help when you can’t find solid information about it. I’m here to fill in some of the blanks for WMI and Hyper-V 2016.

What is WMI?

WMI stands for “Windows Management Instrumentation”. WMI itself essentially has no substance; it’s a Microsoft-specific implementation of the standardized Common Information Model (CIM), maintained by the DMTF. CIM defines common structures and interfaces that anyone can use for a wide range of purposes. Most purposes involve systems monitoring and management. The simplest way to explain WMI, and therefore CIM, is that it is an open API framework with standardized interfaces intended for usage in management systems. PowerShell has built-in capabilities to allow you to directly interact with those interfaces.

What is the Importance of Hyper-V and WMI?

When it comes to Hyper-V, all the GUIs are the beginner’s field. The PowerShell cmdlets are the intermediate level. The experts distinguish themselves in the WMI layer. Usually, when someone incredulously asks me, “How did you do that?”, WMI is the answer. WMI is the only true external interface for Hyper-V. All of the other tools that you know and love (or hate) rely on WMI. However, none of those tools touch all of the interfaces that Hyper-V exposes through WMI. That’s why we need to be able to access WMI ourselves.

How Do I Get Started with Hyper-V’s WMI Provider?

If you don’t already know WMI, then I would recommend Richard’s book that I linked in the first paragraph. The one warning that I’ll give you: don’t spend a lot of time learning about associators. You won’t use them with v2 of the Hyper-V WMI provider. Instead, you’ll use $WMIObject.GetRelated(), which is much easier. There are other ways to learn WMI, of course, but that’s the one that I know. Many of the PowerShell scripts that I’ve published on this blog include WMI at some point, so feel free to tear into those. Also try to familiarize yourself with the WMI Query Language (WQL). It’s basically a baby SQL.
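To see the GetRelated() style in action, here is a minimal sketch against the v2 namespace (the VM name is a placeholder; run it on a Hyper-V host):

```powershell
# Walk Hyper-V's v2 namespace with GetRelated() instead of an ASSOCIATORS OF query.
# Replace 'svtest' with a real virtual machine name on your host.
$vm = Get-WmiObject -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' `
    -Filter "ElementName='svtest'"

# Hop from the VM to its settings objects; no associator syntax required:
$vm.GetRelated('Msvm_VirtualSystemSettingData') |
    Select-Object ElementName, InstanceID
```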

Get a copy of WMI Explorer and put it on a system running Hyper-V. Use this tool to navigate through the system. In this case, you’re especially interested in the root\virtualization\v2 branch. No other tool or reference material that you’ll find will be as useful or as accurate. You can use it to generate PowerShell (check the Script tab). You can also use it to generate MOF definitions for classes (right-click one). It’s a fantastic hands-on way to learn how to use WMI and discover your system.

[Image: undoc_wmiexplorer]

Microsoft does publish documentation on the Hyper-V WMI provider. For 2016, it is not thorough, it is not current, and someone had the brilliant idea to leave it undated so that you won’t be able to determine if it’s ever been updated. Still, it contains more than a few notes that make it worthwhile to use as a reference.

Do not forget search engines! If you just drop in the name of a class, you’ll find something, and often a useful something. It doesn’t hurt to include “v2” in your search criteria.

Undocumented and Partially Documented WMI Changes for Hyper-V 2016

Some of this stuff isn’t so much “undocumented” as unorganized. The goal of this section is to compile information that isn’t readily accessible elsewhere.

Security and the Hyper-V WMI Provider

It is not possible to set a permanent WMI registration on any event, class, or instance in the Hyper-V WMI provider. The reason is that permanent subscriptions operate anonymously, and this particular provider does not allow that level of access. You can create temporary subscriptions, because those always operate under a named security context: specifically, a user name.
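A temporary subscription is easy to sketch with Register-WmiEvent; because it runs under your own user context, the provider accepts it (the polling interval and the action body are arbitrary choices here):

```powershell
# Temporary subscription: watch for Hyper-V virtual machine changes.
# Runs under your named security context, so the provider allows it.
Register-WmiEvent -Namespace 'root\virtualization\v2' `
    -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'Msvm_ComputerSystem'" `
    -SourceIdentifier 'VmStateWatch' `
    -Action { Write-Host "Change detected on $($Event.SourceEventArgs.NewEvent.TargetInstance.ElementName)" }

# Clean up when finished:
# Unregister-Event -SourceIdentifier 'VmStateWatch'
```

The subscription disappears when your session ends, which is exactly the behavior that distinguishes temporary from permanent registrations.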

I don’t have much more to give you on this. You can see the symptoms, or side effects if you will, of the different security restrictions. Many things, like Get-VM, don’t produce any results unless you have sufficient permissions. Other than that, you’ll have to muddle through on your own just as I have. My best sources on the subject say that there is no documentation on this. Not just nothing public, just nothing. That means that there is probably a lot more that we could be doing in terms of providing controlled access to Hyper-V functions.

What Classes Were Removed from the Hyper-V WMI Provider in 2016?

I pulled a list of all classes from both a 2012 R2 system and a 2016 system and cross-referenced the results. The following classes appear in 2012 R2 but not in 2016:

I have never personally used any of these classes, so I’m not going to miss them. If you have any script or code that expects these classes to exist, that code will not function against a 2016 system.
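The cross-referencing technique itself is straightforward to reproduce. A rough sketch (the host names are placeholders):

```powershell
# Dump the class names from each host's v2 namespace, then diff the lists.
$old = Get-WmiObject -ComputerName '2012r2-host' -Namespace 'root\virtualization\v2' -List |
    Select-Object -ExpandProperty Name
$new = Get-WmiObject -ComputerName '2016-host' -Namespace 'root\virtualization\v2' -List |
    Select-Object -ExpandProperty Name

# '<=' side: classes only in 2012 R2 (removed); '=>' side: classes new in 2016.
Compare-Object -ReferenceObject $old -DifferenceObject $new
```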

One retired class of interest is “Msvm_ResourceTypeDefinition”. As we’ll see in a bit, the way that virtual machine components are tracked has changed, which could explain the removal of this particular class.

What Classes Were Added to the Hyper-V WMI Provider in 2016?

The results of the previous test produced a great many new classes in 2016.

If you’re aware of the many new features in 2016, then the existence of most of these new classes makes sense. You won’t find documentation for them, though. If you want to see one of the shortest Google results lists in history, go search for “Msvm_TPM”. I got a whopping three hits when I ran it, none of them related to Hyper-V. After publication of this article, we’ll be up to a staggering four!

Some of these class additions are related to a breaking change from the v2 namespace in 2012 R2: some items that were formerly a named subtype of the Msvm_ResourceAllocationSettingData class now have their own specialized classes.

What Happened to the Serial Port Subtype of Msvm_ResourceAllocationSettingData?

First, let’s look at an instance of the Msvm_ResourceAllocationSettingData class. The following was taken from WMI Explorer on a 2012 R2 system:

undoc_serialportsubtype

I’ve highlighted two items. The first is the ID of the virtual machine that this component belongs to. The second is the “ResourceSubType” field, which you can use to identify the component type. In this case, it’s a virtual serial port.

I chose to use WMI Explorer for this example because it’s a bit easier to read. The following code block shows three ways that I could have done it in WMI by starting from the virtual machine’s human-readable name:
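A hedged reconstruction of those three approaches (the VM name “svtest” and the exact serial-port subtype string are assumptions; verify the subtype on your own system before relying on it):

```powershell
# Sketch only: three equivalent lookups for a VM's serial-port resource.
# "svtest" and the subtype string are illustrative assumptions.

# Method 1: pull everything, then filter with PowerShell/.Net (Where-Object)
$vm = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem |
    Where-Object { $_.ElementName -eq 'svtest' }
Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ResourceAllocationSettingData |
    Where-Object {
        $_.InstanceID -match $vm.Name -and
        $_.ResourceSubType -eq 'Microsoft:Hyper-V:Serial Port'
    }

# Method 2: a WQL-style -Filter on the class query
Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ResourceAllocationSettingData `
    -Filter "InstanceID LIKE '%$($vm.Name)%' AND ResourceSubType = 'Microsoft:Hyper-V:Serial Port'"

# Method 3: a full WQL query
Get-WmiObject -Namespace root\virtualization\v2 -Query (
    "SELECT * FROM Msvm_ResourceAllocationSettingData " +
    "WHERE InstanceID LIKE '%$($vm.Name)%' " +
    "AND ResourceSubType = 'Microsoft:Hyper-V:Serial Port'")
```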

The first technique utilizes the skills of the .Net and PowerShell savvy. The second and third methods invoke procedures familiar to SQL gurus.

Now that we’ve seen it in 2012 R2, let’s step over to 2016. I have configured the above virtual machine in Hyper-V Replica between 2012 R2 and 2016, so everything that you see is from the exact same virtual machine.

To begin, all three of the above methods return no results on 2016. The virtual machine still has its virtual serial ports, but they no longer appear as instances of Msvm_ResourceAllocationSettingData.

Now, we have:

Msvm_SerialPortSettingData

Msvm_SerialPortSettingData

I’ve highlighted a couple of things in that second entry that I believe are of interest. This entry certainly looks a great deal like the Msvm_ResourceAllocationSettingData class from 2012 R2, doesn’t it? However, it is an instance of Msvm_SerialPortSettingData. Otherwise, it’s structurally identical. You can even search for it using any of the three methods that I outlined above, provided that you change them to use the new class name.

I did not find any other missing subtypes, but I didn’t dig very deeply, either.

Associator Troubles?

I mentioned a bit earlier that I don’t use associators with the v2 namespace. I have seen a handful of reports that associator calls that did work in 2012 R2 do not work in 2016, although I have not investigated them myself. If that’s happened to you, just stop using associators. .Net and PowerShell automatically generate a GetRelated() method for every WMI object of type System.Management.ManagementObject. It has an optional String parameter that you can use to locate specific classes, if you know their names.

Find everything directly related to a specific virtual machine:

Find a specific class related to a specific virtual machine:
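As hedged sketches of both calls (the VM name “svtest” and the related class chosen for the filtered call are assumptions):

```powershell
# Sketch: GetRelated() in place of associator queries.
# The VM name and the filtered class name are illustrative assumptions.
$vm = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem |
    Where-Object { $_.ElementName -eq 'svtest' }

# Everything directly related to the virtual machine:
$vm.GetRelated()

# Only instances of one specific related class:
$vm.GetRelated('Msvm_VirtualSystemSettingData')
```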

What the tools that I’ve shown you so far lack is the ability to quickly discover associations. The GetRelated() method allows you to discover connections yourself. To keep the output reasonable, filter it by the __CLASS field (that’s two leading underscores). The following shows the commands and the output from my system:

You can use this technique on the Script tab in WMI Explorer (which will run the script in an external window) and then cross-reference the results in the class list to rapidly discover how the various classes connect to each other.

You can also chain the GetRelated() method. Use the following to find all the various components of a virtual machine:
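One possible shape for that chain, as a sketch (the VM name and the intermediate settings class are assumptions; the __CLASS filter keeps the output readable):

```powershell
# Sketch: chain GetRelated() to walk from the VM through its settings object
# to every component setting class. Names are illustrative assumptions.
$vm = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem |
    Where-Object { $_.ElementName -eq 'svtest' }
$vm.GetRelated('Msvm_VirtualSystemSettingData') |
    ForEach-Object { $_.GetRelated() } |
    Sort-Object -Property __CLASS -Unique |
    Select-Object -Property __CLASS
```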

Put WMI to Work

WMI is the most powerful tool that a Hyper-V administrator can call upon. You don’t need to worry about hurting anything, as you would need to directly invoke some method in order to make any changes. The Get-WmiObject cmdlet that I’ve shown you has no such powers.

If you’re willing to go deeper, though, you can certainly use WMI to change things. There are several properties that can only be changed through WMI, such as the BIOS GUID. In previous versions, some people would modify the XML files, but that was never supported. In 2016, the virtual machine file format is now proprietary and copes with manual alterations even more poorly than the old XML format. To truly sharpen your skillset, you need to learn WMI.

Nano Server AMA with Andrew Mason from Microsoft – Q&A Follow Up


NOTE: Please read THIS important update on the direction of Nano Server prior to using the below resources.

Hello once again everyone!

A few weeks ago, we put on a very special webinar here at Altaro where we had Andrew Mason from the Nano Server team at Microsoft on to answer all of your burning Nano Server questions. Both sessions were very well attended and the number of quality, engaging questions was amazing. It really made for a great webinar!

As we usually do after webinars, this post is intended to act as an ongoing resource for the content that was discussed during the webinar. Below you will find the recording of the webinar for your viewing pleasure in case you missed it, along with a written list of questions and their associated answers that were not covered verbally during the Q & A due to time constraints.

Revisit the Webinar

Q & A

Q. Will we be able to run the Active Directory role on Nano Server in the future?

A. This is a frequent ask, which you can also vote for on the Windows Server User Voice HERE. We are investigating how to bring this to Nano Server, but at this time I don’t have a timeline to share.

Q. Will WSL eventually get into Nano Server? Could it replace the instance of OpenSSH from GitHub eventually?

A. WSL was added to Windows 10 to support developer scenarios, so we hadn’t been considering it for Nano Server. Since this is a remote management scenario, it would be interesting to understand how many people would want this for management, so please vote on User Voice HERE.

Q. Will there be support for boot from USB for Nano Server, Hyper-V nodes for instance?

A. This is not currently planned. There have been a lot of asks for SD boot. If this is an important scenario for you, please vote for it on user voice.

Q. Are there plans to use MS DirectAccess on Nano?

A. This is not currently planned due to the cloud focus we have for Nano Server. If this is an important scenario for you, please vote for it on User Voice.

Q. How does one manage a Nano Server if Azure or an Azure account is unavailable?

A. You can still use the standard MMC tools to remotely manage Nano Server on-prem, just like any other Windows Server.

Q. Are there any significant changes in licensing for Nano Server?

A. There are some licensing implications when using Nano Server. Altaro has an ebook on licensing Windows Server 2016 that includes some information about Nano Server HERE.

Q. Can you manage a Nano Server host with SCVMM 2012 R2?

A. Unfortunately no. SCVMM 2016 is needed to manage 2016 Nano Server hosts.

Q. Do you see a role for Nano Server in regard to on-prem Hyper-V environments?

A. Absolutely! Nano Server lends itself very well to running as a Hyper-V host. The attack surface is smaller, fewer resources are needed for the OS, and fewer reboots are needed due to patching. You can still manage it remotely just like any other Hyper-V host.

Q. How can I use the Anti-Malware options that are available in Nano Server?

A. Nano Server uses a Just-Enough-OS model, in that only the bits needed to run the OS are initially available. There is an Anti-Malware feature available; you just need to install it. More information on installing roles in Nano Server can be found HERE.

Q. Are iSCSI and MPIO usable on Nano Server?

A. Yes, they are. Both can be installed and managed via PowerShell Remoting.

Q. How do you configure NIC teaming in Nano Server?

A. NIC teaming can be managed and configured via PowerShell. Take note, however, that the usual LBFO NIC teaming is not available on Nano Server; you will have to use the new Switch Embedded Teaming (SET) option that was released with Windows Server 2016.

Q. Does Altaro VM Backup support protecting VMs running on a Nano Server Hyper-V Host?

A. As Nano Server is such a radical departure from the usual Microsoft deployment option, we currently do not support backing up VMs on Nano Server hosts. We are currently looking at adding support for this deployment option, but do not have a date that can be provided at this time. Be sure to keep a look out on the Altaro blog for developments in this matter.

Summary

That wraps things up for our Q & A follow up post. We had lots of great questions and loved to see everyone actively participating in the webinar! As usual, if you think of any further follow up questions, feel free to ask them in the comments section below and we’ll get back to you ASAP!

Thanks for reading!

 

Hyper-V and Linux: Changing Volume Names


I’ll say up front that this article is more about Linux than Hyper-V. It’s relevant here for anyone that duplicates a source VHDX to use with new Linux guests. In our how-to article on using Ubuntu Server as a Hyper-V guest, I counseled you to do that in order to shortcut installation steps on new systems. That article shows you how to rename the system so that it doesn’t collide with others. However, it doesn’t do anything with the volumes.

During Ubuntu installation (and, I assume, that of other distributions), the Logical Volume Manager (lvm) gives its volume groups the same base name as the system. So, using a copy of that VHDX for a new system leaves you with a name mismatch. Before you do anything, keep in mind that this state does not hurt anything! I’m going to show you how to change the volume group names, but you’re modifying a part of the system involved with booting (and probably other things). My systems still work fine, but these system-level changes only address a cosmetic problem. If you’re still interested, check your backups, take a checkpoint, and let’s get started!

Linux Volume Group Names: What We’re Solving

In case you’re in the dark as to what I’m talking about, run lvs on your system. My mismatched system looks like this:

lvg_problem

This system is currently named “svlinuxtest”. It was built from a base system that was named “svlmon2”. I want to reiterate that nothing is broken here. I see the volume group name pop up occasionally, such as during boot. That’s it. If I leave this alone, everything will be perfectly fine. On Windows, changing a volume name is trivial because the system only cares about drive letters and volume IDs. On Linux, volume names have importance. You take a risk when changing them, especially for volume groups that contain system data.

Renaming Volume Groups on Ubuntu Server

These instructions were written using Ubuntu Server. I assume that they will work for any system using lvm.

Read step 7 before you do anything! If that bothers you, leave this whole thing alone!

  1. Make sure that you have a fresh backup and/or a checkpoint. I haven’t had problems yet, but…
  2. If you don’t already know the current volume group, use sudo lvs as shown above to list them. Decide on a new name. Take special care to match the spelling exactly throughout these directions!
  3. Use lvm to apply the new name: sudo lvm vgrename oldname newname.
    lvg_vgrename
  4. Change the volume name in the /etc/fstab file to match the new name.
    1. METHOD 1: Use the nano visual editor: sudo nano /etc/fstab. Notice that the entries in fstab use TWO hyphens between the name and the “vg-volumename” suffix whereas outputs show only ONE. Leave the extra hyphen alone! Once you’ve made your changes, use [CTRL+X] to exit, pressing [Y] to save when prompted.
      lvg_fstab
    2. METHOD 2: Use sed. This method is faster than nano and only gives you one shot at making a typo instead of two. You don’t get any visual feedback, though (although you could open it in nano or less or some other editor/reader, of course): sudo sed -i "s/oldname/newname/g" /etc/fstab .
      lvg_fstab2
  5. Change the volume name in /boot/grub/grub.cfg just as you did with /etc/fstab.
    1. METHOD 1: Use the nano visual editor: sudo nano /boot/grub/grub.cfg. This file is much larger than fstab and you will likely have many entries to change. As with fstab, there are TWO hyphens between the volume group name and the “vg-volumename” suffix. Leave them alone. If you’re going to use nano for this, I’d recommend that you employ its search and replace feature.
      1. Press [CTRL+\].
      2. Type the original name in the Search (to replace) prompt and press [Enter]:
        lvg_replace
      3. Type the new name in the Replace with prompt and press [Enter]:
        lvg_replacewith
      4. Working with the assumption that you didn’t use an original name that might otherwise appear in this file (like, say, “Ubuntu”), you can press [A] at the Replace this instance? prompt. If you want to be certain that you’re not overwriting anything important, press [Y] and [N] where appropriate to step through the file.
        lvg_instance
      5. Use [CTRL+X] when finished and [Y] to save the file.
    2. METHOD 2: Use sed. As with /etc/fstab, this method is faster than nano. It allows you to change all of the entries at once without prompting (even if you type the name wrong). If you used sed to change fstab, then you can press the up arrow to retrieve it from the buffer and change only the filename: sudo sed -i "s/oldname/newname/g" /boot/grub/grub.cfg.
      lvg_sedgrub
  6. Apply the previous file changes to initramfs: sudo update-initramfs -u. This operation can take a bit of time, so don’t panic if it doesn’t return right away. I also sometimes get I/O errors mentioning “fd0”. That’s the floppy disk. Since I don’t have a floppy disk in the system, I don’t worry about those errors.
    lvg-updateinitramfs
  7. Shut down the system. While it’s probably not urgent, I recommend doing this immediately: sudo shutdown now. You could use -r if you want, but it won’t matter. The system will almost undoubtedly hang on shutdown! That’s because the volumes that it wants to dismount by name no longer have that name. Just wait for the shutdown process to hang, then use the Hyper-V console’s reset button. Or, if you’re not using Hyper-V, whatever reset method works for you.
    lvg-hanglvg_reset
  8. Test EVERYTHING. sudo lvs for certain. Make sure your services are functioning. Once everything looks good, test a restart: sudo shutdown -r now. It should not be hanging anymore.
  9. Remove any checkpoints.
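Before touching the real files in steps 4 and 5, you can rehearse the sed substitution safely against a scratch copy. This sketch uses placeholder names from my example and only writes to /tmp; the commented lines at the end show the real (destructive) sequence:

```shell
#!/bin/sh
# Rehearsal of the sed rename from steps 4-5 against a scratch copy of fstab.
# "svlmon2" (old) and "svlinuxtest" (new) are placeholder names.
old=svlmon2
new=svlinuxtest

# Build a scratch fstab entry; note the doubled hyphen in the mapper path.
cat > /tmp/fstab.test <<EOF
/dev/mapper/${old}--vg-root / ext4 errors=remount-ro 0 1
/dev/mapper/${old}--vg-swap_1 none swap sw 0 0
EOF

# The same substitution the article applies to the real files:
sed -i "s/${old}/${new}/g" /tmp/fstab.test
cat /tmp/fstab.test

# The real sequence (do NOT run until you have a backup/checkpoint):
#   sudo lvm vgrename ${old}-vg ${new}-vg
#   sudo sed -i "s/${old}/${new}/g" /etc/fstab
#   sudo sed -i "s/${old}/${new}/g" /boot/grub/grub.cfg
#   sudo update-initramfs -u
```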

Volume Group Rename Notes

I performed a fair bit of research to come up with these directions and found a lot of conflicting information. One place said that it wasn’t necessary to update initramfs. If you follow that advice, everything will most likely still function. However, you’ll get errors at boot up that volume groups cannot be found. Those messages will include the original volume names. They’ll also be repeated a few times, which appears to delay bootup a bit. I’m not sure if any other problems will arise if you don’t follow these directions as listed.

I’m also not entirely certain that these directions reach 100% of all places that the volume groups are named. As always, we encourage comments with any additional information!

Planning for Hyper-V Replica


Before tackling this article, ensure that you already know what Hyper-V Replica is. If not, follow this link. I also trust that you understand that you are not building a replacement for backup. This post will help you verify that you have what you need so that you can create a successful Hyper-V Replica deployment. We are saving the “how-to” for building that system for a later article.

Hyper-V Replica Prerequisites

Hyper-V Replica was originally released with Windows Server 2012. I am deliberately talking only about the 2012 R2 and 2016 platforms. Most of what I say here will apply to 2012, but I don’t believe that there are enough new installations on that version to justify covering the differences. If you’re one of the handful of exceptions, I doubt that you’ll have many troubles.

Before you begin, you must have all of these items:

  • At least two systems running a server edition of Hyper-V. For best results, use the same version. Hyper-V Replica does work up-level from 2012 R2 to 2016. As long as the virtual machine configuration version remains at 5.0, 2016 can replicate down to 2012 R2. 2012 can replicate up to 2012 R2, but 2012 R2 cannot reverse replication down to 2012.  I would not expect 2012 to 2016 to work at all, but I didn’t test it and could not find anything about it.
  • Sufficient storage on the replica server to hold the replica, with extra room for change logs and recovery points. The extra space needed depends on the rapidity of changes in the virtual machine(s) and on how many recovery points you wish to keep.
  • Reliable network connectivity between the source and replica hosts.
  • All hosts involved in Replica must be in the same or trusting domains to use Kerberos authentication.
  • All hosts involved in Replica must be able to validate certificates presented by the other host(s) in order to use certificate authentication. The certificates used must be enabled for “Server Authentication” enhanced key usage.
  • A configurable firewall. Replica defaults to port 80 for Kerberos authentication or 443 for certificate authentication.
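On the replica host itself, Windows ships predefined inbound rules for both listeners. As a sketch (the display names below match recent Windows Server builds; verify them on your own system with Get-NetFirewallRule if they differ):

```powershell
# Sketch: enable the built-in Hyper-V Replica listener rules on the replica host.
# Confirm the exact display names first:
Get-NetFirewallRule -DisplayName '*Replica*' | Select-Object DisplayName, Enabled

Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTP Listener (TCP-In)'
Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTPS Listener (TCP-In)'
```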

Preparing to Use Hyper-V Replica Securely

Today’s world demands that you think securely from the beginning of a project. The most common use for Hyper-V Replica involves transmitting data across the Internet. Walk into this project knowing that your data can be intercepted.

Mutual Host Authentication is Required

Hyper-V Replica will not function if the source host cannot verify the identity of the target host and vice versa. This is a good thing, but it can also be a bothersome thing. You have two options: Kerberos authentication and certificate-based authentication.

If your replica traffic will directly traverse an unsecured network (the Internet), do not use Kerberos authentication. The source and replica servers will securely authenticate each other’s identities, but the replica traffic is not encrypted. However, if you are using a secured tunnel such as a site-to-site VPN, then feel free to use Kerberos. There is little value in using an encrypted tunnel to carry already-encrypted traffic: certificate-based replication adds CPU and payload overhead, and encrypting twice compounds both for no security benefit.

Pros of Kerberos-based Authentication

  • If both hosts are in the same or trusting domains, Kerberos authentication is “fire-and-forget” simple. Just select the Kerberos radio button on each host’s configuration page and you’re set.
  • Synchronization traffic is unencrypted, so it requires the least amount of processing and network resources.
  • Simple, centralized emergency management of the hosts. If a system at a remote site is compromised, you can disable its object in Active Directory and it will no longer be valid for replication.

Cons of Kerberos-based Authentication

  • No option to encrypt synchronization traffic within Hyper-V. IPSec and encrypted VPN tunnels are viable alternatives.
  • Cannot fail over a domain controller covered by Hyper-V Replica unless another domain controller is available. You can eliminate this problem by allowing Active Directory to replicate itself.
  • “Only” works for domain-joined hosts. Leaving Hyper-V hosts out of the domain isn’t smart practice anyway, so competent administration eliminates this problem unless using an outside service provider.

Pros of Certificate-Based Authentication

  • Hosts do not need any other method to authenticate each other. This approach works well for service providers.
  • All traffic is encrypted. As long as the hosts’ private keys are adequately protected, it’s as safe as anything can be to transmit certificate-based Hyper-V Replica traffic directly across the Internet.

Cons of Certificate-Based Authentication

  • Certificate-based encryption results in higher CPU usage and much larger traffic requirements.
  • PKI and certificates can be difficult and confusing for those that don’t use PKI often.
  • Certificates expire periodically and must be manually renewed, redistributed, and selected in Replica configuration screens.
  • If you don’t maintain your own PKI, you’ll need to purchase certificates from a third party. This might also be necessary when working with a Hyper-V Replica service provider.

Make the decision about which type of authentication to use before proceeding.

Acquiring Certificates to Use with Hyper-V Replica

It is possible to use self-signed certificates for Hyper-V Replica, but it is not recommended. Self-signed certificates do not involve any type of external arbiter; therefore, the hosts are not truly able to authenticate each other.

There are two recommended ways to acquire certificates for Hyper-V Replica:

  • A third-party trusted certificate provider. These certificates cost money, but all of the not-fun bits of managing a PKI are left to someone else. If you shop around, you can usually find certificates at a reasonable price. These are most useful when you do not own all of the Hyper-V hosts in the replica chain.
  • An internal Certificate Authority. If you own all of the Hyper-V hosts, then it won’t matter a great deal that they only use your resources for authentication. Even if some or all of the Hyper-V hosts aren’t in your domain, you can add your CA’s certificate to their trusted lists and then they’ll trust the certificates that it issues.

Making certificate requests is really not difficult, but there are a lot of steps involved. The most comprehensive walkthrough that I’m aware of is the one that I wrote for the Hyper-V Security book that I co-wrote with Andy Syrewicze. The bad news is that, since it seems to be one-of-a-kind, I can’t duplicate it here. You can find several other examples, although there are so many variables and possibilities that you may struggle a bit to find one that perfectly matches your situation. This certificate enrollment walkthrough looks promising: https://social.technet.microsoft.com/wiki/contents/articles/10377.create-a-certificate-request-using-microsoft-management-console-mmc.aspx. It’s for domains, but it does show you how to get the CSR text. You’ll need that if you’re going to request from a third-party or a disconnected system.

If you want to set up your own Active Directory-based PKI, be warned that you are facing a non-trivial task made worse by poorly designed and documented tools. The “official” documentation isn’t great. I’ve had better luck with this: https://www.derekseaman.com/2014/01/windows-server-2012-r2-two-tier-pki-ca-pt-1.html. It’s not perfect either, but it’s better than the “official” documentation. If you don’t have any other use for PKI, I recommend that you save your sanity by spending a few dollars on some cheap third-party certificates.

Hyper-V Replica Certificate Requirements

If you already know how to make a certificate request, this is a simple checklist of the requirements:

  • Enhanced Key Usage must be: Client Authentication, Server Authentication. This is the default for the Computer certificate template if you are using a Windows PKI.
  • Common Name (on the Subject tab) must be the name of the system as it will be presented to the other Replica server(s) that it communicates with. So, if you’re connecting over the Internet to target.mydomain.com, then that must be the Common Name on the Subject of the certificate and/or a Subject Alternate Name.
  • Subject Alternate Name (SAN). This is also on the Subject tab. You want to add DNS entries. If your replica host is going to be addressed by a name other than its computer name, then that name must at least appear in the Subject Alternate Name list. If the target system is a cluster and the other Replica server(s) will be connecting to it via its Cluster Name Object, then your certificate must use that FQDN as the Common Name or as one of the Subject Alternate Names. Because the certificate can be used for more purposes than just replica, I typically use all of these items in the SAN fields:
    • Cluster Name Object DNS name
    • Replica Name Object DNS name
    • Internal DNS name of each node
  • 2048-bit key length. The default is 1024-bit, so ensure that you change it.

A warning note on Subject Alternate Name: If you are using an internal Active Directory-based PKI, the default configuration for the Computer certificate template may prevent you from using Subject Alternate Names. You may fill out the fields correctly, but then discover that the issued certificate contains no SANs. I typically create my own certificate templates from scratch to avoid any issues.
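Exact request steps vary by CA and template. As one illustration using OpenSSL instead of the MMC (all names here are placeholders, and -addext requires OpenSSL 1.1.1 or later), a CSR carrying the required key size, EKUs, and DNS SANs can be built like this:

```shell
# Sketch: generate a 2048-bit key and a CSR with a CN plus DNS SANs.
# replica.mydomain.com and the node names are placeholders.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout /tmp/replica.key -out /tmp/replica.csr \
  -subj "/CN=replica.mydomain.com" \
  -addext "subjectAltName=DNS:replica.mydomain.com,DNS:node1.mydomain.com,DNS:node2.mydomain.com" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"

# Inspect the request before submitting it to the CA:
openssl req -in /tmp/replica.csr -noout -text | grep -A1 "Alternative Name"
```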

 

repbootcamp_certcnsan

Have your certificates installed on each host before you configure Hyper-V Replica.

Selecting Virtual Machines to Use with Replica

It’s not a given that you’ll want to replicate every virtual machine. The very first link in this article spends some time on this topic, so I’m only going to briefly touch upon it here. Avoid using Hyper-V Replica with any technology that has its own replication mechanisms. Active Directory, Exchange Server, and SQL Server are technologies that I strongly discourage mixing with Hyper-V Replica.

Remember that Hyper-V Replica does not save on licensing in most configurations. You cannot build a virtual machine running Active Directory Domain Services and then create a replica of it for free. The replica virtual machine must also be licensed. If you have Software Assurance on the license that covers the source virtual machine, then that does cover the replica. However, that’s not “free”. I do not know how replica is handled by the licensing terms of non-operating system software, so consult with your licensing reseller. Do not make assumptions and do not attempt to use replica to side-step licensing requirements. The fines are prohibitively high and auditors never accept ignorance as an excuse.

Selecting Virtual Machine Data to Exclude from Replica

Just because you want a virtual machine replicated doesn’t mean that you want all of that virtual machine replicated. Hyper-V Replica has the ability to skip specified disks. Many people will move the page file for a virtual machine to a separate disk just to keep it out of replica. There are other uses for this ability as well. Think through what you want left out.
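The PowerShell side of this surfaces when you enable replication. As a sketch (the VM name, replica server, thumbprint, and path are all placeholders):

```powershell
# Sketch: enable replication while excluding the page-file disk.
# Every name and path here is a placeholder.
Enable-VMReplication -VMName 'svtest' `
    -ReplicaServerName 'replica.mydomain.com' `
    -ReplicaServerPort 443 `
    -AuthenticationType Certificate `
    -CertificateThumbprint 'THUMBPRINT' `
    -ExcludedVhdPath 'C:\VMs\svtest\pagefile.vhdx'
```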

Selecting Hardware to Use with Replica

If you want to make the simplest choice, buy the same hardware for Hyper-V Replica that you use for your primary systems. That’s rarely the most fiscally sound choice, however.

Consider:

  • Hyper-V Replica is intended for recovery and/or continuity through a disaster, not as an ordinary running mode
  • Disasters tend to alter usage patterns; staff re-tasks to other duties, customers have other things to do, etc.
  • Using hardware in another physical location will likely cause other logistical access restrictions. For example, your primary office location may house fifty on-site staff. Your replica site may have sufficient room for five.

I am unable to make any solid general recommendations. If you’re not certain, I would recommend purchasing a system that is at least similar to your primary. If you’re really uncertain, hire a consultant.

If you’re thinking about using a smaller system for the replica site, remember these things:

  • You can replicate from a cluster to a standalone host and vice versa.
  • You can replicate from a cluster to a smaller cluster and vice versa.

Replica Site Networking

Set aside time to think through your networking design for replica. You absolutely do not want to be stumbling over it in the middle of a crisis. There are three basic ways to approach this.

Use Completely Separate Networks

My preferred way is to build distinct networks at each site. You’ll invest more effort in this design, but you’ll need substantially less to maintain it. You do not need to build an elaborate system.

repbootcamp_separatenetworks

One option that you have to make this work is DHCP. The very simplest way is to have all services configured to use DNS names and just allow DHCP and DNS to do their jobs. That concept makes a lot of people nervous, though, and you won’t always have that option anyway. In that case, set each virtual machine to use a static MAC address. Since you are hopefully keeping an active domain controller in each site, throw DHCP and DNS in as well (separate servers, if it suits your environment). Use DHCP reservations unique to each site.
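Pinning the MAC address is a one-liner from the host. As a sketch (the VM name and MAC value are placeholders; pick a value from your host’s MAC address pool, and note that the virtual machine must be off to change it):

```powershell
# Sketch: set a static MAC so a per-site DHCP reservation follows the VM.
# VM name and MAC value are placeholders.
Set-VMNetworkAdapter -VMName 'svtest' -StaticMacAddress '00155D010203'
Get-VMNetworkAdapter -VMName 'svtest' | Select-Object -Property Name, MacAddress
```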

If you don’t want to use DHCP, then you can configure failover IPs for each virtual machine individually. That’s the most work, but it gives you a guaranteed outcome.
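The per-virtual-machine failover IPs are set on the replica copy with a dedicated cmdlet. As a sketch (all addresses are placeholders; the guest needs working integration services for the injection to take effect):

```powershell
# Sketch: an IPv4 configuration applied to the replica only when it fails over.
# Run against the replica host; all addresses are placeholders.
Set-VMNetworkAdapterFailoverConfiguration -VMName 'svtest' `
    -IPv4Address 192.168.200.50 `
    -IPv4SubnetMask 255.255.255.0 `
    -IPv4DefaultGateway 192.168.200.1 `
    -IPv4PreferredDNSServer 192.168.200.10
```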

The nice thing about this setup is that it will work even if you haven’t got a VPN. Active Directory also works wonderfully. You configure the second IP range as its own site in Active Directory Sites and Services, and it just knows what to do… if there’s a VPN. Active Directory replication does work over a VPN-less Internet connection, but you’ll need to do some configuration.

Use a Stretched Network

A “stretched network”, sometimes called a “stretched VLAN”, exists when layer 2 traffic can move freely from one site to another. A stretched network allows you to keep the same IP addresses for your source and replica systems. It’s conceptually simple and requires little effort on the part of the Hyper-V administrator, but networking can be a challenge.

repbootcamp_stretchednetworks

When all is well, a stretched network isn’t a big deal. However, you’re building replica specifically for the times when all is not well. The 192.168.100.0/24 network shown will need a router so that the virtual machines can communicate off of their VLAN. So, let’s say that we have a router at 192.168.100.1 in the primary site. What happens when site 1 is down and you’re running from replicas? 192.168.100.1 is in the primary site and unreachable. There are ways to deal with this, but someone needs to have the networking know-how to make it work. For instance, you could have a 192.168.100.2 router in site B and have it inserted into each machine’s routing table. But how? If it’s a persistent mapping with .1 as the default, then any machines in site 2 will always route through site 1, even though that’s inefficient. If you take some sort of dynamic approach, you then have another thing to deal with during a crisis.

Active Directory won’t like this setup. It will work, but it will function as though all systems were in the same site. It’s not ideal.

I recommend against a stretched network unless you have sufficient networking knowledge available to deal with these sorts of issues.

Use a Mirrored Network

I’m fairly certain that “mirrored network” isn’t a real term, so I’m just going to make it up for this article. What I mean by a “mirrored network” is that the same IP range appears in each site, but they aren’t really the same network. This would get around the routing problem of the stretched VLAN. Unfortunately, it introduces other issues.

repbootcamp_mirrorednetworks

The big difference here is that the two sites have no direct connectivity of any kind. They’ll reference each other by external IPs. That’s what makes this “mirrored” network possible.

The issues that you’re going to encounter will be around anything Active Directory related. You won’t be able to have two sites for the same reason that you couldn’t with a stretched network. You might be able to do some finagling to get them to communicate over the Internet, but you have to be careful that you don’t inadvertently cause a collision between your two networks that have the same IPs but aren’t the same.

I can see the appeal in this design, but I don’t like it for anything but very small systems. Even then I’m not sure that I like it.

Replica Site User Access

If you wanted to describe Hyper-V Replica in the least abstract way possible, you could say that it transfers your data center to an alternative site. It doesn’t move your users, though. How you attach users to the services in the new location will depend on a great many factors. For things like Outlook Anywhere, it’s a DNS change. For other things, you’re going to need to bring people on site. I can’t give great advice here because there are so many possibilities. You need to make many decisions. They need to be made before Replica begins.

Initial (Seed) Replication

You might have a great deal of data to move to your replica site. For instance, let’s say that you have a 1 terabyte database and your remote site is at the other end of a T1 line. At the T1’s peak rate of 1.544 megabits per second, that’s roughly two months of transfer time, just for the database. Encryption and protocol overhead will only stretch that further.
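It’s worth running the arithmetic yourself before committing to a wire-based seed; at a T1’s 1.544 Mbps peak rate, a terabyte takes on the order of two months, not days:

```python
# Back-of-the-envelope seed transfer time: 1 TB database over a T1 line.
payload_bytes = 1 * 10**12          # 1 terabyte (decimal)
t1_bits_per_second = 1.544 * 10**6  # T1 peak line rate

seconds = payload_bytes * 8 / t1_bits_per_second
days = seconds / 86_400
print(f"{days:.0f} days")  # roughly 60 days at a sustained peak rate
```

Run the same numbers against your own payload size and link speed before deciding between wire-based and media-based initial replication.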

Hyper-V Replica allows you to perform the initial replication using portable media. It’s much faster, but it’s still going to require time. And portable media. Have all of this planned and ready to go.

Planned Failovers

It’s imperative that you test failover on a regular basis. You won’t necessarily need to test every virtual machine, but you need to test at least one. Consider building a virtual machine just for this purpose. Failovers need to be on the calendar. Responsible staff need to be designated and held accountable.

Unplanned Failover Criteria

It needs to be made clear to all interested parties that a Hyper-V Replica failover is a non-trivial event. A failover requires downtime. There are often unforeseen problems with using a replica site. The decision to fail to a replica site needs to be made by management staff. The criteria for “crisis situation demanding a replica failover” needs to be plotted when there isn’t a crisis, not in the middle of one. Clearly define who will make the determination that a failover is required.

Build the Replica System

Once all of these items have been satisfied, you can begin building your replica system. We’ll have an upcoming article that explains the procedure. But, if you have all of the items in this article prepared, you’ll find that you have already done all of the hard work.

Free Hyper-V Script: Update WDS Boot ID for a Virtual Machine


For quite some time, I’ve been wanting to write an article on the wonders of Windows Deployment Services (WDS) and Hyper-V. Most of the other techniques that we use to deploy hosts and virtual machines are largely junk. WDS’s learning curve is short, and it doesn’t require many resources to stand up, operate, or maintain. It’s one of those technologies that you didn’t know that you couldn’t live without until the first time you see it in action.

This article is not the article that explains all of that to you. This article is the one that knocks out the most persistent annoyance to fully using WDS.

What is Windows Deployment Services?

Before I can explain why you want this script, I need to explain Windows Deployment Services (WDS). If you’re already familiar with WDS, I assume that you’re already familiar with your scroll wheel as well.

WDS is a small service that sits on your network listening for PXE boot requests. When it intercepts one, it ships a boot image to the requesting machine. From there, it can start an operating system (think diskless workstations) or it can fire up an operating system installer. I use it for the latter. The best part is that WDS is highly Active Directory integrated. Not only will it install the operating system for you, it will automatically place it in the directory. Even better, you can take a tiny snip of information from the target computer and place it into a particular attribute of an Active Directory computer object. WDS will match the newly installed computer to the AD object, so it will start life with the computer name, OU, and group policies that you want.

That tiny little piece of information needed for WDS to match the computer to the object is the tiny annoyance that I spoke of earlier. You must boot the machine in PXE mode and capture its GUID:

PXE GUID

Modern server systems can take a very long time to boot. If you miss the GUID, you must start over. And then you get to transcribe the digits by hand. Fun, right?

Well, virtual machines don’t have any of those problems. Extracting the BIOS GUID still isn’t the most pleasant thing that you’ll ever do, though. It’s worse if you don’t know how to use CIM and/or WMI. It’s almost easier to boot the VM the way you would a physical machine. That’s where this script comes in.
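For the curious, a CIM sketch of pulling a virtual machine’s BIOS GUID looks something like this (‘svhv01’ and ‘svtest’ are placeholder host and VM names; this assumes the v2 virtualization namespace found on 2012 and later):

```powershell
# Find the VM's CIM object on the Hyper-V host
$vm = Get-CimInstance -ComputerName 'svhv01' `
    -Namespace 'root/virtualization/v2' -ClassName 'Msvm_ComputerSystem' `
    -Filter "ElementName='svtest'"

# The BIOS GUID lives in the VM's realized settings data
$settings = Get-CimAssociatedInstance -InputObject $vm `
    -ResultClassName 'Msvm_VirtualSystemSettingData'
($settings |
    Where-Object VirtualSystemType -eq 'Microsoft:Hyper-V:System:Realized').BIOSGUID
```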

Script Prerequisites

The script itself has these requirements:

  • PowerShell version 4 or later
  • The Active Directory PowerShell module. You could run it from a domain controller, although that’s about as smart as starting a land war in Asia. Use the Remote Server Administration Tools instead.
  • Must be run from a domain member. I’m not sure if the AD PS module would work otherwise anyway.

There is no dependency on the Hyper-V PowerShell module. I specifically built it to work with native CIM cmdlets since I’m already forcing you to use the AD module.

I tested from a Windows 10 system against a Hyper-V Server 2016 system. I tested from a Windows Server 2012 R2 system still using PowerShell 4.0 against a Hyper-V Server 2012 R2 system. All machines are in the same domain, which is 2012 R2 forest and domain functional level.

The Script

The script is displayed below. Copy/paste into your PowerShell editor of choice and save it to a system that has the Active Directory cmdlet module installed.

Parameters:

  • VM: This will accept a string (name), a Hyper-V VM object (from Get-VM, etc.), a WMI object of type Msvm_ComputerSystem, or a CIM object of type Msvm_ComputerSystem.
  • ComputerName: The name of the Hyper-V host for the VM that you want to work with. This field is only used if the VM is specified as a string. For any of the other types, the computer name is extracted from the passed-in object.
  • ADObjectName: If the Active Directory object name is different from the VM’s name, this parameter will be used to name the AD object. If not specified, the VM’s name will be used.
  • Create: A switch that indicates that you wish to create an AD object if one does not exist. If you don’t specify this, the script will error if it can’t find a matching AD object. The object will be created in the default OU. Can be used with CreateInOU, but it’s not necessary to use both.
  • CreateInOU: An Active Directory OU object where you wish to create the computer. This must be a true object; use Get-ADOrganizationalUnit to generate it. This can be used with the Create parameter, but it’s not necessary to use both.
  • WhatIf: Shows you what will happen, as usual. Useful if you just want to see the VM’s BIOS GUID without learning WMI/CIM or going through the hassle of booting it up.

The script includes complete support for Get-Help. It contains numerous examples to help you get started. If you’re uncertain, leverage -WhatIf until things look as you expect.
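Assuming you saved the script as Set-VMNetbootGUID.ps1 (the filename is yours to choose), typical runs might look like this (‘svtest’, ‘svhv01’, and the OU name are placeholders):

```powershell
# Preview the VM's BIOS GUID and the intended change without touching AD
.\Set-VMNetbootGUID.ps1 -VM 'svtest' -ComputerName 'svhv01' -WhatIf

# Create the AD computer object in a specific OU if one doesn't exist yet
$ou = Get-ADOrganizationalUnit -Filter "Name -eq 'Deployment'"
.\Set-VMNetbootGUID.ps1 -VM 'svtest' -ComputerName 'svhv01' -CreateInOU $ou
```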

Script Discussion

Once the script completes, its results should be instantly viewable in the WDS console:

wdsscript_results

There are a few additional things to note.

Potential for Data Loss

This script executes a Replace function on the netbootGUID property of an Active Directory computer object. Target wisely.

I always set my WDS server to require users to press F12 before an image is pushed. If you’re not doing that, then the next time a configured VM starts and contacts the PXE server, it will drop right into setup. If you’ve got all the scaffolding set up for it to jump straight into unattend mode… Well, just be careful.

Other WDS-Related Fields

I elected to only set the BIOS GUID because that is the toughest part. It would be possible to set other WDS-related items, such as the WDS server, but that would have made the script quite a bit more complicated. I am using the Active Directory “Replace” function to place the BIOS GUID. I could easily slip in a few other fields, but you’d be required to specify them each time or any existing settings would be wiped out. The scaffolding necessary to adequately control that behavior would be significant. It would be easier to write other scripts that were similar in build to this one to adjust other fields.

Further Work

I still have it in my to-do list to work up a good article on Windows Deployment Services with Hyper-V. It’s not a complicated technology, so I encourage any and all self-starters to spin up a WDS system and start poking around. It’s nice to never need to scrounge for install disks/ISOs or dig for USB keys or bother with templates. I’ve also got my WDS system integrated with the automated WSUS script that I wrote earlier, so I know that my deployment images are up to date. These are all tools that can make your life much easier. I’ll do my best to get that article out soon, but I’m encouraging you to get started right away anyway.

Confusing Terms and Concepts in Hyper-V


If I ever got a job at Microsoft, I’d want my title to be “He Who Fixes Stupid Names and Labels”. Depending upon my mood, I envision that working out in multiple ways. Sometimes, I see myself in a product meeting with someone locked in a condescending glare, and asking, “Really? With nearly one million unique words in the English language, we’re going with ‘Core’? Again?” Other times, I see myself stomping around like Gordon Ramsay, bellowing, “This wording is so unintelligible that it could be a plot for a Zero Wing sequel!” So, now you know one of the many reasons that I don’t work for Microsoft. But, the degree of my fitness to work in a team aside, the abundance of perplexing aspects of the Hyper-V product generates endless confusion for newcomers. I’ve compiled a shortlist to help cut through a few of them.

Azure

This particular item doesn’t have a great deal of relevance to Hyper-V for most of us. On the back end, there is a great deal of intersection in the technologies. Site Recovery allows you to replicate your on-premises virtual machines into Azure. But, there’s not a lot of confusion about the technology that I’m aware of. It’s listed here, and first, as an example of what we’re up against. Think about what the word “azure” means. It is the color of a clear, cloudless sky. You think one thing when a salesman walks in and says, “Hi, we’d like to introduce you to our cloud product called ‘Azure’.” That sounds nice, right? What if, instead, he said, “Hi, we’d like to introduce you to our cloud product called ‘Cloudless’.” What?

“Microsoft Drab” Just Doesn’t have the Same Ring

Azure’s misnomer appears to be benign, as it works and it’s selling very well. I just want you to be aware that, if you’re confused when reading a product label or a dialog box, it’s probably not your fault. Microsoft doesn’t appear to invest many resources in the “Thoroughly Think Through Phrasing” department.

What Should I Call Hyper-V, Anyway?

Some of the confusion kicks in right at the beginning. Most people know that Hyper-V is Microsoft’s hypervisor, which is good. But, then they try to explain what they’re using, and everything immediately goes off the rails.

First, there’s Hyper-V. That part, we all understand. Or, at least we think that we understand. When you just use the word “Hyper-V”, that’s just the hypervisor. It’s completely independent of how you acquired or installed or use the hypervisor. It applies equally to Hyper-V Server, Windows Server with Hyper-V, and Nano Server with Hyper-V.

Second, there’s Client Hyper-V. It’s mostly Hyper-V, but without as many bells and whistles. Client Hyper-V is only found in the client editions of Windows, conveniently enough. So, if you’ve installed some product whose name includes the word “Server”, then you are not using Client Hyper-V. Simple enough, right?

Third, there’s the fictitious “Hyper-V Core”. I’ve been trying to get people to stop saying this for years, but I’m giving up now. Part of it is that it’s just not working. Another part of it:

confuse_hypercore

With Microsoft actively working against me, I don’t like my odds. Sure, they’ve cleaned up a lot of these references, but I suspect they’ll never completely go away.

What I don’t like about the label/name “Hyper-V Core” is that it implies the existence of “Hyper-V not Core”. Therefore, people download Hyper-V Server and want to know why it’s all command-line based. People will also go to the forums and ask for help with “Hyper-V Core”, so then there’s at least one round of, “What product are you really using?”

What Does it Mean to “Allow management operating system to share this network adapter”?

The setting in question appears on the Virtual Switch Manager’s dialog when you create a virtual switch in Hyper-V Manager:

confuse_allow

The corresponding PowerShell parameter for New-VMSwitch is AllowManagementOs.

If I had that job that we were talking about a bit ago, the Hyper-V Manager line would say, “Connect the management operating system to this virtual switch.” The PowerShell parameter would be ConnectManagementOs. Then the labels would be true, explainable, and comprehensible.

Whether you choose the Hyper-V Manager path or the PowerShell route, this function creates a virtual network adapter for the management operating system and attaches it to the virtual switch that you’re creating. It does not “share” anything, at least not in any sense that this phrasing evokes. For more information, we have an article that explains the Hyper-V virtual switch.
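In PowerShell terms, the checkbox corresponds to a single boolean on switch creation (the switch and adapter names below are placeholders):

```powershell
# Create an external virtual switch and connect the management OS to it
# via an automatically created virtual network adapter
New-VMSwitch -Name 'vSwitch' -NetAdapterName 'Ethernet' -AllowManagementOS $true
```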

I Downloaded and Installed Hyper-V. Where Did My Windows 7/8/10 Go?

I see this question often enough to know that there are a significant number of people that encounter this problem. The trainer in me must impart a vital life lesson: If the steps to install a product include anything like “boot from a DVD or DVD image”, then it is making substantial and potentially irreversible changes.

If you installed Hyper-V Server, your original operating environment is gone. You may not be out of luck, though. If you didn’t delete the volume, then your previous operating system is in a folder called “Windows.old”. Don’t ask me or take this to the Hyper-V forums, though, because this is not a Hyper-V problem. Find a forum for the operating system that you lost and ask how to recover it from the Windows.old folder. There are no guarantees.

Many of the people that find themselves in this position claim that Microsoft didn’t warn them, which is absolutely not true.

The first warning occurs if you attempt to upgrade. It prevents you from doing so and explicitly says what the only other option, “Custom”, will do:

confuse_overwrite1

If you never saw that because you selected Custom first, then you saw this warning:

confuse_overwrite2

That warning might be a bit too subtle, but you had another chance. After choosing Custom, you then decided to either install over the top of what you had or delete a partition. Assuming that you opted to use what was there, you saw this dialog:

confuse_overwrite3

The dialog could use some cleanup to cover the fact that it might have detected something other than a previous installation of Hyper-V Server, but there’s a clear warning that something new is pushing out something old. If you chose to delete the volume so that you could install Hyper-V Server on it, that warning is inescapably blatant:

confuse_overwrite4

If this has happened to you, then I’m sorry, but you were warned. You were warned multiple times.

How Many Hyper-V Virtual Switches Should I Use?

I often see questions in this category from administrators that have VMware experience. Hyper-V’s virtual switch is markedly different from what VMware does, so you should not expect a direct knowledge transfer.

The default answer to this question is always “one”. If you’re going to be putting your Hyper-V hosts into a cluster, that strengthens the case for only one. A single Hyper-V virtual switch performs VLAN isolation and identifies local MAC addresses to prevent superfluous trips to the physical network for intra-VM communications. So, you rarely gain anything from using two or more virtual switches. We have a more thorough article on the subject of multiple Hyper-V switches.

Checkpoint? Snapshot? Well, Which Is it?

To save time, I’m going to skip definitions here. This is just to sort out the terms. A Hyper-V checkpoint is a Hyper-V snapshot. They are not different. The original term in Hyper-V was “snapshot”. That caused confusion with the Volume Shadow Copy Service (VSS) snapshot. Hyper-V’s daddy, “Virtual Server”, used the term “checkpoint”. System Center Virtual Machine Manager has always used the term “checkpoint”. The “official” terms have been consolidated into “checkpoint”. You’ll still find many references to snapshots, such as:

confuse_snaporcheck

But We Officially Don’t Say “Snapshot”

We writers are looking forward to many more years of saying “checkpoint (or snapshot)”.

Do I Delete a Checkpoint? Or Merge It? Or Apply It? Or Something Else? What is Going on Here?

If you’re the person that developed the checkpoint actions, all of these terms make a lot of sense. If you’re anyone else, they’re an unsavory word soup.

  • Delete: “Delete” is confusing because deleting a checkpoint keeps your changes. Coming into this cold, you might think that deleting a checkpoint would delete changes. Just look under the hood, though. When you create a checkpoint, it makes copies of the virtual machine’s configuration files and starts using new ones. When you delete that checkpoint, that tells Hyper-V to delete the copies of the old configuration. That makes more sense, right? Hyper-V also merges the data in post-checkpoint differencing disks back into the originals, then deletes the differencing disks.
  • Merge (checkpoint): When you delete a checkpoint (see previous bullet point), the differencing disks that were created for its attached virtual hard disks are automatically merged back into the original. You can’t merge a checkpoint, though. That’s not a thing. That can’t be a thing. How would you merge a current VM with 2 vCPUs and its previous setting with 4 vCPUs? Split the difference? Visitation of 2 vCPUs every other weekend?
  • Merge (virtual hard disk): First, make sure that you understand the previous bullet point. If there’s a checkpoint, you want to delete it and allow that process to handle the virtual hard disk merging on your behalf. Otherwise, you’ll bring death and pestilence. If the virtual hard disk in question is not related to a checkpoint but still has a differencing disk, then you can manually merge them.
  • Apply: The thought process behind this term is just like the thinking behind Delete. Remember those copies that your checkpoint made? When you apply the checkpoint, the settings in those old files are applied to the current virtual machine. That means that applying a checkpoint discards your changes. As for the virtual hard disks, Hyper-V stops using the differencing disk that was created when the virtual machine was checkpointed and starts using a new differencing disk that is a child of the original virtual hard disk. Whew! Get all of that?
  • Revert: This verb makes sense to everyone, I think. It reverts the current state of the virtual machine to the checkpoint state. Technologically, Hyper-V applies the settings from the old files and discards the differencing disk. It creates a new, empty differencing disk and starts the virtual machine from it. In fact, the only difference between Revert and Apply is the opportunity to create another checkpoint to hold the changes that you’re about to lose. If I had that job, there would be no Apply. There would only be Revert (keep changes in a new checkpoint) and Revert (discard changes).
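The PowerShell verbs line up with the list above, although the cmdlet nouns still say “snapshot” (‘svtest’ is a placeholder VM name):

```powershell
# Create a checkpoint
Checkpoint-VM -Name 'svtest'

# "Apply"/"Revert": return the VM to the most recent checkpoint's state,
# discarding changes made since then
Get-VMSnapshot -VMName 'svtest' |
    Sort-Object CreationTime |
    Select-Object -Last 1 |
    Restore-VMSnapshot -Confirm:$false

# "Delete": keep the current state; Hyper-V merges the differencing disks
Get-VMSnapshot -VMName 'svtest' | Remove-VMSnapshot
```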

If this is tough to keep straight, it might make you feel better to know that my generation was expected to remember that Windows boots from the system disk to run its system from the boot disk. No one has ever explained that one to me. When you’re trying to keep this checkpoint stuff straight, just try to think of it from the perspective of the files that constitute a checkpoint.

If you want more information on checkpoints, I happen to like one of my earlier checkpoint articles. I would also recommend searching the blog on the “checkpoint” keyword, as we have many articles written by myself and others.

Dynamic Disks and Dynamically Expanding Virtual Hard Disks

“Dynamically expanding virtual hard disk” is a great big jumble of words that nobody likes to say. So, almost all of us shorten it to “dynamic disk”. Then, someone sees that the prerequisites list for the product that they want to use says, “does not support dynamic disks”. Panic ensues.

Despite common usage, these terms are not synonymous.

With proper planning and monitoring, dynamically expanding hard disks are perfectly safe to use.

Conversely, Dynamic disks are mostly useless. A handful of products require them, but hopefully they’ll all die soon (or undergo a redesign; that could work too). In the absence of an absolute, defined need, you should never use Dynamic disks. The article linked in the previous paragraph explains the Dynamic disk, if you’re interested. For a quicker explanation, just look at this picture from Disk Management:

Basic and Dynamic Disks

Dynamic disks, in the truest sense of the term, are not a Hyper-V technology.

Which Live Migration Do I Want?

I was attempting to answer a forum question in which the asker was configuring Constrained Delegation so that he could Live Migrate a virtual machine from one physical cluster node to another physical node in the same cluster. I rightly pointed out that nodes in the same cluster do not require delegation. It took a while for me to understand that he was attempting to perform a Shared Nothing Live Migration of an unclustered guest between the two nodes. That does require delegation in some cases.

To keep things straight, understand that Hyper-V offers multiple virtual machine migration technologies. Despite all of them including the word “migration” and most of them including the word “live”, they are different. They are related because they all move something in Hyper-V, but they are not interchangeable terms.

This is the full list:

  • Quick Migration: Quick Migration moves a virtual machine from one host to another within a cluster. That is, the virtual machine must be clustered, not simply hosted on a cluster node. It is usually the fastest of the migration techniques because almost nothing is transmitted across the network. If the virtual machine is on, it is first saved. Ownership is transferred to the target node. If the virtual machine was placed in a saved state for the move, it is resumed.
  • Live Migration: A Live Migration has the same requirement as a Quick Migration: it is only applicable to clustered virtual machines. Additionally, the virtual machine must be turned on (otherwise, it wouldn’t be “live”). Live Migration is slower than Quick Migration because CPU threads, memory, and pending I/O must be transferred to the target host, but it does not involve an interruption in service. The virtual machine experiences no outage except for the propagation of its network adapters’ MAC address change throughout the network.
  • Storage Live Migration: A Storage Live Migration involves the movement of any files related to a virtual machine. It could be all of them, or it could be any subset. “Storage Live Migration” is just a technology name; the phrase never appears anywhere in any of the tools. You select one of the options to “Move” and then you choose to only move storage. You can choose a new target location on the same host or remote storage, but a Storage Live Migration by itself cannot change a virtual machine’s owner to a new physical host. Unlike a “Live Migration”, the “Live” in “Storage Live Migration” is optional.
  • Shared Nothing Live Migration: The “Shared Nothing” part of this term can cause confusion because it isn’t true. The “live” bit doesn’t help, because the VM can be off or saved, if you want. The idea is that the source and destination hosts don’t need to be in the same cluster, so they don’t need to share a common storage pool. Their hosts do need to share a domain and at least one network, though. I’m not sure what I would have called this one, so maybe I’m glad that I don’t have that job. Anyway, as with Storage Live Migration, you’ll never see this phrase in any of the tools. It’s simply one of the “move” options.
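In PowerShell, the four techniques map roughly to these cmdlets (‘svtest’, ‘svhv02’, and the paths are placeholders):

```powershell
# Quick or Live Migration of a clustered VM (FailoverClusters module);
# MigrationType selects Quick or Live
Move-ClusterVirtualMachineRole -Name 'svtest' -Node 'svhv02' -MigrationType Live

# Storage Live Migration: move only the files; the owning host stays the same
Move-VMStorage -VMName 'svtest' -DestinationStoragePath 'D:\VMs\svtest'

# Shared Nothing Live Migration: new host and new storage, no cluster required
Move-VM -Name 'svtest' -DestinationHost 'svhv02' -IncludeStorage `
    -DestinationStoragePath 'D:\VMs\svtest'
```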

If you’re seeking help from others, it’s important to use the proper term. Otherwise, your confusion will become their confusion and you might never find any help.

What Else?

I’ve been doing this long enough that I might be missing other things that just don’t make sense. Let us know what’s boggled you about Hyper-V and we’ll add it to the list.

Why Hyper-V Replica Doesn’t Replace Backups


Once upon a time, insurance was the only product that you purchased in the hopes that you’d never need to use it. Then, Charles Babbage made the horrible mistake of inventing computers, which gave us all so much more to worry about. The good news is that, whereas insurance can’t do much more than pay money when you lose something, computers have the ability to recover data that you lose. The bad news is that, just like insurance, you must spend a lot of money for that power. Not only has the tech world come up with a cornucopia of schemes to protect your data, it’s produced catchy names like “Disaster Recovery” and “Business Continuity” to get you in the money-spending mood. This article compares two of those product categories within the scope of the Hyper-V world: Hyper-V Replica and virtual machine backups.

Meet the Players

If you’re here looking for a quick answer to the question of whether you should use Hyper-V Replica or a virtual machine backup product, the answer is that they are both on the same team but they play in different positions. If you can’t afford to have both, virtual machine backup is your MVP. Hyper-V Replica is certainly valuable, but ultimately nonessential.

What is Hyper-V Replica?

The server editions of Hyper-V include a built-in feature named Hyper-V Replica. It requires a minimum of two separate hosts running Hyper-V. A Hyper-V host periodically sends the changed blocks of a virtual machine to another system. That system maintains a replica of the virtual machine and incorporates the incoming changes. At any time, an administrator can initiate a “failover” event. This causes the replica virtual machine to activate from the point of the last change that it received.

What is the Purpose of Hyper-V Replica?

The general goal of Hyper-V Replica is to provide rapid Disaster Recovery protection to a virtual machine. Disaster Recovery has become more of a broad marketing buzzword than a useful technical term, but it is important here: Hyper-V Replica does not have any other purpose. To use another marketing term, it can also be said to enable Business Continuity. Little time is necessary to start up a replica, making it ideal when extended outages are unacceptable. However, this does not change the core purpose of Hyper-V Replica, nor does it qualify as an automatic edge over virtual machine backup.

What is Virtual Machine Backup?

I am generically using the phrase “virtual machine backup” in this article, not specifically referring to Altaro’s product. A virtual machine backup is a software-created duplicate copy of a virtual machine that is kept in what we usually call a cold condition. It must undergo a restore operation in order to be used. Virtual machine backups require one Hyper-V host and some sort of storage subsystem — it could be magnetic disk, optical disc, magnetic tape, or solid-state storage.

I am deliberately scoping backup to the virtual machine level in this article in order to make the fairest comparison to Hyper-V Replica. There are backup applications available with wider or different targets.

What is the Purpose of Virtual Machine Backup?

At first glance, the purpose of virtual machine backup might seem identical to Hyper-V Replica. Virtual machine backups can certainly provide disaster recovery protection. However, they also allow for multi-layered protection. Not all failures qualify as disasters (although impacted parties may disagree). Equally as important, not all uses for virtual machine backup involve a failure. The purpose of virtual machine backup is to provide an historical series of a virtual machine’s contents.

A Brief Overview of Hyper-V Replica

To understand why Hyper-V Replica can’t replace backup, you’ll first need to see what it truly does. Fortunately, all of the difficult parts are in the configuration and failover processes. A properly running replica system is easy to understand.

The Hyper-V host that owns the “real”, or maybe the “source” virtual machine tracks changes. At short, regular intervals, it transmits those changes to the Hyper-V host that owns the replica.

Hyper-V Replica in Action

For this discussion, the most important part is the short, regular interval. You can choose a different value for short, but there can be only one.
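Enabling replication and choosing that interval looks something like this (‘svtest’ and ‘svreplica’ are placeholder names, and the replica server must already be configured to accept inbound replication):

```powershell
# Enable replication and pick the change-transmission interval
Enable-VMReplication -VMName 'svtest' -ReplicaServerName 'svreplica' `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300   # 30, 300, or 900 seconds on 2012 R2 and later

# Kick off the initial (seed) replication over the network
Start-VMInitialReplication -VMName 'svtest'
```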

Where Hyper-V Replica Falls Behind Backup

As I start this section, I want it made clear that I am not attempting to disparage Hyper-V Replica. It is a fantastic technology. The stated goal of this article is to explain why Hyper-V Replica does not replace virtual machine backup. Therefore, this section will lay out what virtual machine backup gives you that Hyper-V Replica cannot.

Retention

You can instruct Hyper-V Replica to maintain multiple Recovery Points. These are similar to backups in that each one represents a complete virtual machine. You can recover from one of them independently of any other recovery points. However, these recovery points are captured once every hour and you cannot change that interval. Therefore, opting to keep more than a few recovery points will result in a great deal of space utilization in a very short amount of time. All of that space will only represent a very short period in the life of the virtual machine. You won’t be able to maintain a very long history using only Hyper-V Replica.

In contrast, you can easily configure virtual machine backups for much longer retention periods. It’s not strange to encounter retention policies measured in years.
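The recovery point count is a per-virtual-machine setting; for example (‘svtest’ is a placeholder name):

```powershell
# Keep 4 hourly recovery points in addition to the most recent replica
Set-VMReplication -VMName 'svtest' -RecoveryHistory 4
```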

No Separate Copies

When Hyper-V Replica receives new change information, it merges the data directly into the Replica virtual machine. If you are maintaining recovery points, those are essentially change block records that are temporarily spared from the normal delete process. The files generated by the replica system are useless if you separate them from the replica virtual machine’s configuration files and virtual hard disks. More simply: Hyper-V Replica maintains exactly one standalone copy of a virtual machine per target replica server. As long as that copy survives, you can use it to recover from a disaster. Any damage to that copy essentially makes its entire replica architecture pointless.

Virtual machine backup, on the other hand, grants the ability to create multiple distinct copies of a virtual machine. They exist independently. If the backup copy that you want to use is damaged, try another.
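A toy model makes the structural difference plain. Nothing here comes from Hyper-V itself; the state names and dictionaries are purely illustrative:

```python
# Toy model (illustrative only): Hyper-V Replica folds each change set into
# ONE standalone copy, while backup keeps fully independent copies.

replica = {"base": "VM state, day 1", "recovery_points": []}

def replicate(change, result):
    # Changes merge into the single copy; recovery points are just
    # change-block records hanging off that same copy.
    replica["recovery_points"].append(change)
    replica["base"] = result

backups = []  # each entry stands alone

def back_up(state):
    backups.append({"full_copy": state})

replicate("delta-1", "VM state, day 2")
back_up("VM state, day 1")
back_up("VM state, day 2")

# Simulate damage to the one replica copy: every recovery point built on
# it becomes useless, while each backup copy survives independently.
replica["base"] = None
usable_replica_points = len(replica["recovery_points"]) if replica["base"] else 0
usable_backups = sum(1 for b in backups if b["full_copy"])
print(usable_replica_points, usable_backups)  # -> 0 2
```

Damage the single replica copy and its recovery points die with it; damage one backup copy and the others remain usable.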

No Separate Storage Locations

Since a virtual machine replica only has a single set of data, each of its components exists in only one location. For Hyper-V Replica’s intended purpose, that’s not a problem. But, what if something damages a storage location? What if a drive system fails and corrupts all of its data? What if the replica site suffers a catastrophe?

Most virtual machine backup applications allow you to place your backups on separate storage media. For instance, Altaro Virtual Machine Backup is friendly to the idea of rotating through disks. Others let you cycle through different tapes. With copies on separate physical media, you can put distance between unique copies of your virtual machines’ backups. That allows some to survive when others might not. It prevents one bad event from destroying everything.

All or Nothing

You can either bring up the replica of your virtual machine or not; there really isn’t any in-between. You don’t want to tinker with the VHDX file(s) of a replica in any way, because doing so breaks the replica chain. Replication would stop writing new change blocks, and you’d be forced to restart the replica process from the beginning. There are alternatives, of course. You could perform an export on the replica; if the replica has recovery points, you could export one of them. You’d then be able to do whatever you need with the exported copy. It’s a rather messy way to extract data from a replica, but it could work.

Almost all commercial virtual machine backup vendors design their applications to handle situations in which you don’t want complete data restoration. The features list will typically mention “granular” or “file-level” restoration capabilities. You shouldn’t need to endure any intermediary complications.

Limited Support and Interoperability

Microsoft does not support Exchange Server with Hyper-V Replica. Active Directory Domain Services sometimes stutters with Hyper-V Replica. Microsoft will support SQL Server with Hyper-V Replica, but only under certain conditions. That list only includes Microsoft products; your line-of-business applications might have similar problems. Furthermore, each of the three items that I mentioned has its own built-in replication technology. In all three cases, the native capabilities vastly outpace Hyper-V Replica’s: for starters, they all allow for active/active configurations and transfer much less data than Hyper-V Replica.

I’ve seen a lot of strange things in my time, but I don’t believe that I’ve encountered any software vendor that wouldn’t support a customer taking backups of their product. You sometimes need to go through some documentation to properly restore an application. Some applications, like SQL Server, do include their own backup tools, but you can enhance them with external offerings.

Little Control Over Pacing

Once Hyper-V Replica is running, it just goes. It’s going to ship differences at each configured interval, and that’s all there is to it. You can change the interval and you can pause replication, but you don’t have any other options. Pausing replication for an extended period of time is a poor choice, as catching up might take longer than starting fresh.

One of the many nice things about backup applications is that you have the power to set the schedule. You define when backups run and when they don’t. You can restrict your large backup jobs to quiet hours. If you have a high churn virtual machine and need to take backups that are frequent, but not as frequent as Hyper-V Replica, you have that option. If you have a very small domain that doesn’t change often, you might want to only capture weekly backups of your domain controller. You might decide to interject a one-off backup around major changes. Backup applications allow you to set the pacing of each virtual machine’s backup according to what makes the most sense.
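As an illustration of per-VM pacing, here is a small sketch of the kind of scheduling logic a backup application gives you. The VM names and intervals are hypothetical:

```python
# Illustrative sketch: backup applications let you pace each VM
# independently, unlike Replica's single fixed interval. All names and
# intervals below are made up for the example.
from datetime import date

schedules = {
    "sql01":  {"every_days": 1},   # high-churn guest: nightly
    "file01": {"every_days": 2},   # moderate churn: every other day
    "dc01":   {"every_days": 7},   # sleepy domain controller: weekly
}

def due_today(vm, today, anchor=date(2016, 1, 1)):
    """True if the VM's backup should run on 'today' under its schedule."""
    interval = schedules[vm]["every_days"]
    return (today - anchor).days % interval == 0

today = date(2016, 6, 13)
print([vm for vm in schedules if due_today(vm, today)])  # -> ['sql01', 'file01']
```

The point is simply that each guest gets its own cadence; Hyper-V Replica offers nothing comparable.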

Heavy Configuration Burden

Hyper-V Replica requires a fair bit of effort to properly set up and configure. With basic settings, you can set up each involved host in just a few minutes. If you need to use HTTPS for any reason, that will need some more effort. But, initial configuration is only the beginning. By default, no virtual machines are replicated. You’ll need to touch every virtual machine that you want to be covered by Hyper-V Replica. Yes, you can use PowerShell to ease the burden, but I know how so many of you feel about that.

Even the worst virtual machine backup application that I’ve ever used at least tried to make configuring backups easy. They all employ some sort of mechanism to set up guests in bulk. The biggest reason that this matters is configuration fatigue. If it takes a great deal of effort to reach an optimal configuration, you might take some shortcuts. Even if you suffer all the way through a difficult initial setup, anyone would be resistant to revisiting the process if something in the environment changes. Whereas you’ll likely find it simple to configure a virtual machine backup program exactly as you like it, most people will have almost no variance in their Hyper-V Replica build, even if it isn’t the best choice.

Higher Software Licensing Costs

If your source virtual machine’s operating system license is covered by Software Assurance, then you can use Hyper-V Replica without any further OS licensing cost. Otherwise, a separate operating system license is required to cover the replica. I don’t keep up with application licensing requirements, but those terms might be even less favorable.

Virtual machine backup applications generate copies that Microsoft labels “cold” backups. Microsoft does not require you to purchase additional licenses for any of their operating systems or applications when they are protected like that. I don’t know of any other vendor that requires it, either.

Higher Hardware Costs

The replica host needs to be powerful enough to run the virtual machines that it hosts. You might choose a system that isn’t quite as powerful as the original, but you can’t take that too far. Most organizations employing Hyper-V Replica tend to build a secondary system that rivals the primary system.

If we accept that the primary use case for a replica system involves the loss of the primary system, we see where backup can save money. Only the most foolish business owners do not carry insurance on at least their server equipment. It’s fair to presume that whatever destroyed them would qualify as a covered event. Smaller organizations commonly rely on that fact as part of their disaster recovery strategy. After a disaster, they order replacement equipment and insurance pays for it. They drag their backup disks out of the bank vault and restore to the new equipment.

Of course, they lose out on time. A Hyper-V Replica system can return you to operational status in a few minutes. If you’ve only got backups, then leaning on local resources might have you running in a few hours at best. However, small budgets lead to compromises, and backup alone is cheaper than Hyper-V Replica alone.

The Choice is Clear

I don’t like setting down ultimatums or dictating “best practices”, but this is one case where there is little room for debate. If you can afford and justify both virtual machine backup and Hyper-V Replica, employ both. If you must choose, the only rational option is virtual machine backup.

Hyper-V 2016 Shielded Virtual Machines on Stand-Alone Hosts


One of the hot new technologies in Hyper-V 2016 is Shielded Virtual Machines. This feature plugs a few long-standing security holes in the hypervisor space that were exacerbated by the rise of hosting providers. It’s ridiculously easy to start using Shielded Virtual Machines, but its simplicity can mask some very serious consequences if the environment and guests are not properly managed. To make matters worse, the current documentation on this feature is sparse and reads more like marketing brochures than technical material.

The material that does exist implies that Shielded Virtual Machines require a complicated Host Guardian Service configuration and a cluster or two. This is not true. You can use Shielded Virtual Machines on standalone hosts without ever configuring the Host Guardian Service (HGS). Using a properly configured HGS is better, but it is not required; standalone mode is possible. “Standalone” can apply to non-domain-joined hosts and to domain-joined hosts that are not members of a cluster. I did verify that I could enable VM shielding on a non-domain-joined host, but I did not, and will not, investigate it any further. This article will discuss using Shielded Virtual Machines on a domain-joined Hyper-V host that is not a member of a cluster and is not governed by a Host Guardian Service.

What are Shielded Virtual Machines?

A Shielded Virtual Machine is protected against tampering. There are several facets to this protection.

Unauthorized Hosts Cannot Start Shielded Virtual Machines

Only systems specifically authorized to operate a Shielded Virtual Machine will be able to start it. Others will receive an error message that isn’t perfectly obvious, but should be decipherable with a bit of thought. The primary error is “The key protector could not be unwrapped. Details are included in the HostGuardianService-Client event log.” The details of the error will be different depending on your overall configuration.

No Starting Shielded VMs on Unauthorized Hosts

This feature is most useful when combined with the next.

Unauthorized Hosts Cannot Mount Virtual Hard Disks from Shielded Virtual Machines

The virtual hard disks for a Shielded Virtual Machine cannot be opened or mounted on unauthorized systems. Take care: the error message on an unauthorized host is not nearly as clear as the one you receive when trying to start a Shielded Virtual Machine on an unauthorized host, and it could be mistaken for a corrupted VHD: “Couldn’t Mount File. The disk image isn’t initialized, or contains partitions that aren’t recognizable, or contains volumes that haven’t been assigned drive letters. Please use the Disk Management snap-in to make sure that the disk, partitions, and volumes are in a usable state.”

Error When Opening a Shielded VHD on an Unauthorized Host

For small businesses, this is the primary benefit of using Shielded Virtual Machines. If your VM’s files are ever stolen, the thieves will need more than that.

VMConnect.exe Cannot be Used on a Shielded Virtual Machine

Even administrators can’t use VMConnect.exe to connect to a Shielded Virtual Machine. In case you didn’t already know, “VMConnect.exe” is a separate executable that Hyper-V Manager and Failover Cluster Manager both call upon when you instruct them to connect to the console of a virtual machine. This connection refusal provides a small level of protection against snooping by a service provider’s employees, but does more against other tenants that might inadvertently have been granted a few too many privileges on the host. Attempting to connect results in a message that “You cannot connect to a shielded virtual machine using a Virtual Machine Connection. Use a Remote Desktop Connection instead.”

No VMConnect for Shielded VMs

The upshot of the VMConnect restriction is that if you create VMs from scratch and immediately set them to be shielded, you’d better have a method in mind for installing an OS without using the console at all (as in, completely unattended WDS).

What are the Requirements for Shielded Virtual Machines?

The requirements for using Shielded Virtual Machines are:

  • Generation 2 virtual machines

That’s it. You’ll read a lot about the need for clusters and services, about conditional branches where a physical Trusted Platform Module (TPM) can be used or where administrator sign-off will do, and all other sorts of things, but all of those are in regard to Guarded Fabric and involve the Host Guardian Service. Again, HGS is a very good thing to have, and it would certainly give you a more resilient and easily managed Shielded Virtual Machine environment, but none of that is required. The only thing that you absolutely must have is a Generation 2 virtual machine. Generation 1 VMs cannot be shielded.

Generation 1 virtual machines can be encrypted by Hyper-V. That’s a topic for another article.

How Does the Shielded Virtual Machine Mechanism Work on a Standalone System?

Do not skip this section just because it might have some dry technical details! Ignorance on this topic could easily leave you with virtual machines whose data you cannot access! Imagine a situation in which you have a single, non-clustered host with a guest on a Scale Out File Server cluster and you enable the Shielded VM feature. Since all of the virtual machine’s data is on an automatically backed-up storage location, you don’t bother doing anything special for backup. One day, your Hyper-V host spontaneously combusts. You buy a new host and import the VM directly from the SOFS cluster, only to learn that you can’t turn it on. What can you do!? You could try crying or drinking or cursing or sacrificing a rubber chicken or anything else that makes you feel better, but nothing that you do short of cracking the virtual machine’s encryption will get any of that data back. If you don’t want that to be you, pay attention to this section.

Shielded Virtual Machines are Locked with Digital Keys

Access to and control of a Shielded Virtual Machine is governed by asymmetric public/private encryption keys. In a single host environment without a configured Host Guardian Service, these keys are created automatically immediately after you set the first virtual machine to be shielded. You can see these certificates in two ways.

Viewing Shielded Virtual Machine Certificates Using CERTUTIL.EXE

The CERTUTIL.EXE program is available on any system, including those without a GUI. At an elevated command prompt, type: certutil -viewstore "Shielded VM Local Certificates"

You’ll be presented with a dialog that shows the Shielded VM Encryption Certificate. Click More Choices and it will expand to show that certificate and the Shielded VM Signing Certificate:

VM Shielding Certificates

You can click either of the certificates in the bottom half of the dialog and it will update the information in the top half of the dialog. Click the Click here to view certificate properties link, and you’ll be rewarded with the Certificate Details dialog:

Certificate Details

This dialog should look fairly familiar if you’ve ever looked at a certificate in Internet Explorer or in the Certificates MMC snap-in. We’ll turn to that snap-in next.

Viewing Shielded Virtual Machine Certificates Using the Certificates MMC Snap-In

The Microsoft Management Console (MMC.EXE) has a dependency on the Explorer rendering engine, so it is only available on GUI systems. You can use it to connect to systems without a GUI, though, as long as they are in the same or a trusting domain.

  1. At an elevated prompt, run MMC.EXE. You can also enter MMC.EXE into Cortana’s search, then right-click it and choose Run as administrator.

    Starting MMC

  2. In Microsoft Management Console, click File -> Add/Remove Snap-in…

    Accessing the Snap-in Menu

  3. In the Add or Remove Snap-ins dialog, highlight Certificates and then click the Add > button.

    Choose Certificates Snap-in

  4. A prompt will appear for the target of the Certificates snap-in. We want to target the Computer account:

    Computer Account Choice

  5. After that, you’ll need to indicate which computer to control. In my example, I want the local computer so I’ll leave that selection. You can connect to any computer in the same or a trusting domain, provided that the user account that you started MMC.EXE with has administrative privileges on that computer:

    Choose Local or Remote Computer

  6. After you OK out of all of the above dialogs, MMC.EXE will populate with the certificate tree of the targeted computer account. Expand Shielded VM Local Certificates, then click the Certificates node. If you have shielded a virtual machine, you’ll see two certificates:

    VM Shielding Certificates in MMC

You can open these certificates to view them.

The Significance of Certificates and Shielded Virtual Machines

Not to put too fine a point on it, but these two certificates are absolutely vital. If they are lost, any virtual machine that they were used to shield is also permanently lost… unless you have the ability to crack 2048-bit RSA encryption. There is no backdoor. There is no plan “B”.

If you are backing up your host’s operating system using traditional backup applications, a standard System State backup will include the certificate store.

If you are not backing up the management operating system, then you need a copy of these keys. I’ll give you directions, but the one thing that you must absolutely not miss is the bit about exporting the private keys. The shielding certificates are completely useless without their private keys!

Exporting and Importing VM Shielding Keys with CERTUTIL.EXE

Using CERTUTIL.EXE is the fastest and safest way to export certificates.

  1. Open an elevated command prompt.
  2. Type the following: certutil -store "Shielded VM Local Certificates"
  3. In the output, locate the Serial Number for each of the certificates.

    VM Shielded Certificates with Serial Numbers

  4. Use the mouse to highlight the first serial number, which should be for the encryption certificate, then press [Enter] to copy it to the clipboard.
  5. To export the VM shielding encryption certificate, type the following, replacing my information with yours. Use right-click to paste the serial number when you come to that point: certutil -exportPFX -p "2Easy2Guess!" "Shielded VM Local Certificates" 169d0cacaea2a396428b62f77545682e c:\temp\SVHV02-VMEncryption.pfx
  6. Use the mouse to highlight the second serial number, which should be for the signing certificate, then press [Enter] to copy it to the clipboard.
  7. To export the VM shielding signing certificate, type the following, replacing my information with yours. Use right-click to paste the serial number when you come to that point: certutil -exportPFX -p "2Easy2Guess!" "Shielded VM Local Certificates" 5d0cb1f0fa8b34b24e1195c41d997c19 c:\temp\SVHV02-VMSigning.pfx
  8. Ensure that the PFX files that you created are moved to a SAFE place and that the password is SECURED!

If you ever need to recover the certificates, use this template for each PFX file, substituting your own file name: certutil -importPFX "Shielded VM Local Certificates" <path to exported PFX file>

You’ll be prompted for the password on each one.

Exporting and Importing VM Shielding Keys with MMC

The MMC snap-in all but encourages you to do some very silly things, so I would recommend that you use the certutil instructions above instead. If you must use the UI:

  1. Open MMC and the Certificates snap-in using instructions from the “Viewing Shielded Virtual Machine Certificates Using the Certificates MMC Snap-In” section above.
  2. Highlight both certificates. Right-click them, hover over All Tasks, and click Export…
Export Start

  3. Click Next on the informational screen.
  4. Change the dot to Yes, export the private key. The certificates might as well not exist at all without their private keys.

    Export Private Key

  5. Leave the defaults on the Export File Format page. If you know what you’re doing, you can select Enable certificate privacy. Do not select the option to Delete the private key!! The host will no longer be able to open its own shielded VMs if you do that!

    Certificate File Format

  6. On the Security tab, you must choose to lock the exported certificate to a security principal or a password. It’s tempting to lock it down by security principal, and it might even work for you. I almost always use passwords because they’ll survive where security principals won’t. If you do choose principals, only use domain principals, use groups rather than individual names, use more than one, and make double-, triple-, and quadruple-certain that your Active Directory backups are in good shape. If you’re one of those types that likes to leave your Hyper-V hosts outside of the domain for whatever reason, the Groups or user names option is a good way to lose your shielded VMs forever.

    Exported Certificate Security

  7. On the File to Export page, use a file name that indicates that you’re backing up both certificates. That’s one nice thing about the GUI.

    Choose a File

  8. The final screen is just a summary. Click Finish to complete the export.
  9. Ensure that the PFX files that you created are moved to a SAFE place and that the password is SECURED (or, if you used one or more security principals, hope that nothing ever happens to them)!

If you ever need to recover these certificates, I would again recommend using certutil.exe instead. The GUI still makes some dangerous suggestions and it takes much longer. If you insist on the GUI:

  1. Open MMC and the Certificates snap-in using instructions from the “Viewing Shielded Virtual Machine Certificates Using the Certificates MMC Snap-In” section above.
  2. Right-click in the center pane and hover over All Tasks, and click Import…
  3. Click Next on the introductory screen.
  4. On the File to Import screen, navigate to where your certificate backups are. Note that you’ll need to change the filter from *.cer, *.crt to *.pfx, *.p12 to see them.
  5. The Password part of the Private key protection screen is fairly easy to figure out (and won’t be necessary at all if you protected by security principal). Do make sure to check the Mark this key as exportable box. If you don’t, then you won’t be able to export the private key. It’s not strictly necessary, since you do have the file that you’re importing from. At least, you have it right now. Something could happen to it, and then you’d have no way to generate a new one.

    Import as Exportable

  6. Make certain that the certificate store is Shielded VM Local Certificates.

    Certificate Store Choice

  7. The final screen is just a summary. Click Finish to import the certificates.

Do take good care of these certificates. They are literally the keys to your Shielded Virtual Machines.

Why Does the Certificate Say “Untrusted Guardian”?

The consequence of not using a full Host Guardian Service is that there’s no independent control over these certificates. With HGS, there’s independent “attestation” that a host is allowed to run a particular virtual machine because the signature on the VM and the signing certificate will match up and, most importantly, the signing certificate was issued by someone else. In this case, the certificate is “self-signed”. You’ll see the term “self-signed” used often, and usually incorrectly. Most of the time, I see it used to refer to certificates that were signed by someone’s internal certificate authority, like their private domain’s Enterprise CA. That is not self-signed! A true self-signed certificate is signed and issued by a host that is not a valid certificate authority and is only used by that host. The most literal meaning of a self-signed certificate is: “I certify that this content was signed/encrypted by me because I say so.” There is no independent verification of any kind for a true self-signed certificate.
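The distinction boils down to a simple check. This toy model only illustrates the definition; the “UntrustedGuardian” certificate name mimics what you’ll see on a standalone host, while the Enterprise CA example is hypothetical:

```python
# Toy model of the trust distinction: a certificate is truly self-signed
# when its issuer and subject are the same entity and no outside authority
# vouches for it. A cert issued by a private Enterprise CA is CA-signed,
# NOT self-signed, despite common misuse of the term.

def is_self_signed(cert):
    return cert["issuer"] == cert["subject"]

untrusted_guardian = {  # mimics the standalone host's auto-generated cert
    "subject": "Shielded VM Signing Certificate (UntrustedGuardian)",
    "issuer":  "Shielded VM Signing Certificate (UntrustedGuardian)",
}
enterprise_ca_cert = {  # hypothetical names for illustration
    "subject": "hgs.example.com",
    "issuer":  "Example Enterprise Root CA",
}

print(is_self_signed(untrusted_guardian))  # -> True
print(is_self_signed(enterprise_ca_cert))  # -> False
```

Real certificate validation involves chain building and signature verification, of course; the issuer/subject comparison is just the defining trait being described here.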

Can I Use Shielded VMs from an “Untrusted Guardian” on Another Hyper-V Host?

Yes. These virtual machines are not permanently matched to their source host. That’s a good thing, because otherwise you’d never be able to restore them after a host failure. All that you need to do is import the keys that were used to sign and encrypt those virtual machines on the new target host into its “Shielded VM Local Certificates” store, and it will then be able to immediately open those VMs. There will not be any conflict with any certificates that are already there. This should work for Live Migrations as well, although I only tested export/import.

If you like, you can unshield the VMs and then reshield them. That will shield the VMs under the keyset of the new target host.

What Happens When the Certificate Expires?

I didn’t test, so I don’t know. You could try it out by forcing your clock 10 years into the future.

Realistically, nothing bad will happen when the certificate expires. An expired certificate still matches perfectly to whatever it signed and/or encrypted, so I see no reason why the VMs wouldn’t still work. You can’t renew these certificates, though, so the host will no longer be able to use them to sign or encrypt new VMs. If this is still something that you’re concerned about 9 years and 11 months after shielding your first VM, be happy that your host made it that long, then unshield all of the VMs, delete the certificates, and reshield the VMs. New 10-year certificates will be automatically created, giving you another decade to worry about the problem.

How Do I Know if a VM is Shielded?

The “easiest” way is the Enable Shielding checkbox on the Security tab of the virtual machine’s settings in the GUI. There’s also PowerShell: (Get-VMSecurity -VMName <VM name>).Shielded

Virtual hard drives are a bit tougher. Get-VHD, even on Server 2016, does not show anything about encryption. You can test it in a hex editor or something else that can poke at the actual bits, of course, but other than that I don’t know of a way to tell.

Are Shielded VMs a Good Idea on an Untrusted Host?

I’m not sure if there is a universal answer to this question. Without the Host Guardian Service being fully configured, there is a limit to the usefulness of Shielded VMs. I would say that if you have the ability to configure HGS, do that.

That said, shielding a VM on an untrusted host still protects its data if the files for the VM are ever copied to a system outside of your control. Just remember that anyone with administrative access to the host has access to the certificate. What you can do, if you’ve got an extremely solid protection plan, is export, delete, and re-import the certificate without marking the private key as exportable. That’s risky, because you’re then counting on never forgetting or losing that exported certificate. However, even a local admin won’t be able to steal virtual machines without having access to the exported key as well.
