Free Hyper-V Script: Virtual Machine Storage Consistency Diagnosis

Hyper-V allows you to scatter a virtual machine’s files just about anywhere. That might be good; it might be bad. That all depends on your perspective and need. No good can come from losing track of your files’ sprawl, though. Additionally, the placement and migration tools don’t always work exactly as expected. Even if the tools work perfectly, sometimes administrators don’t. To help you sort out these difficulties, I’ve created a small PowerShell script that will report on the consistency of a virtual machine’s file placement.

How the Script Works

I designed the script to be quick and simple to use and understand. Under the hood, it does these things (a condensed sketch of the logic appears after the list):

  1. Gathers the location of the virtual machine’s configuration files. It considers that location to be the root.
  2. Gathers the location of the virtual machine’s checkpoint files. It compares that to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  3. Gathers the location of the virtual machine’s second-level paging files. It compares that to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  4. Gathers the location of the virtual machine’s individual virtual hard disk files. It compares each to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  5. Gathers the location of any CD or DVD image files attached to the virtual machine. It compares each to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  6. Emits a custom object that contains:
    1. The name of the virtual machine
    2. The name of the virtual machine’s Hyper-V host
    3. The virtual machine’s ID (a string-formatted GUID)
    4. A boolean ($true or $false) value that indicates if the virtual machine’s storage is consistent
    5. An array that contains a record of each location. Each entry in the array is itself a custom object with these two pieces:
      1. The component name (Configuration, Checkpoints, etc.)
      2. The location of the component
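
To make the approach concrete, here is a heavily condensed sketch of that logic. This is not the actual script: the VM name is hypothetical, and the real script also handles remote hosts, the multiple input types, and the switches described below.

    # condensed sketch of the consistency check; 'svtest' is a hypothetical VM name
    $vm = Get-VM -Name 'svtest'
    $root = $vm.ConfigurationLocation
    $locations = @(
        [PSCustomObject]@{ Component = 'Checkpoints';       Location = $vm.SnapshotFileLocation }
        [PSCustomObject]@{ Component = 'SecondLevelPaging'; Location = $vm.SmartPagingFilePath }
    )
    foreach ($drive in (@($vm.HardDrives) + @($vm.DVDDrives) | Where-Object { $_.Path }))
    {
        # compare the file's containing folder, not the file itself
        $locations += [PSCustomObject]@{ Component = 'Drive'; Location = (Split-Path -Path $drive.Path -Parent) }
    }
    $consistent = @($locations | Where-Object { $_.Location -ne $root }).Count -eq 0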

It doesn’t check for pass-through. A VM with pass-through disks would have inconsistent storage placement by definition, but this script completely ignores pass-through.

The script doesn’t go through a lot of validation to check if storage locations are truly unique. If you have a VM using “\\system\VMs” and “\\system\VMFiles” but they’re both the same physical folder, the VM will be marked as inconsistent.

Script Usage

I built the script to be named Get-VMStorageConsistency. All documentation and examples are built around that name. You must supply it with a virtual machine. All other parameters are optional.

Use Get-Help Get-VMStorageConsistency -Full to see the built-in help.

Parameters:

  • VM (aliases “VMName” and “Name”): The virtual machine(s) to check. The input type is an array, so it will accept one or more of any of the following:
    • A string that contains the virtual machine’s name
    • A VirtualMachine object (as output from Get-VM, etc.)
    • A GUID object that contains the virtual machine’s ID
    • A WMI Msvm_ComputerSystem object
    • A WMI MSCluster_Resource object
  • ComputerName: The name of the host that owns the virtual machine. If not specified, defaults to the local system. Ignored if VM is anything other than a String or GUID. Only accepts strings.
  • DisksOnly: A switch parameter (include if you want to use it, leave off otherwise). If specified, the script only checks at the physical storage level for consistency. Examples:
    • With this switch, C:\VMs and C:\VMCheckpoints are the same thing (simplifies to C:\)
    • With this switch, C:\ClusterStorage\VMs1\VMFiles and C:\ClusterStorage\VMs1\VHDXs are the same thing (simplifies to C:\ClusterStorage\VMs1)
    • With this switch, \\storage1\VMs\VMFiles and \\storage1\VMs\VHDFiles are the same thing (simplifies to \\storage1\VMs)
    • Without this switch, all of the above are treated as unique locations
  • IgnoreVHDFolder: A switch parameter (include if you want to use it, leave off otherwise). If specified, ignores the final “Virtual Hard Disks” path for virtual hard disks. Notes:
    • With this switch, VHDXs in “C:\VMs\Virtual Hard Disks” will be treated as though they were found in C:\VMs
    • With this switch, VHDXs in “C:\VMs\Virtual Hard Disks\VHDXs” will not be treated specially. This is because there is a folder underneath the one named “Virtual Hard Disks”.
    • With this switch, a VM configured to hold its checkpoints in a folder named “C:\VMs\Virtual Hard Disks\” will not treat its checkpoints specially. This is because the switch only applies to virtual hard disk files.
  • Verbose: A built-in switch parameter. If specified, the script will use the Verbose output stream to show you the exact comparison that caused a virtual machine to be marked as inconsistent. Note: The script only traps the first item that causes a virtual machine to be inconsistent. That’s because the Consistent marker is a boolean; in boolean logic, it’s not possible to become more false. Therefore, I considered it to be wasteful to continue processing. However, all locations are stored in the Locations property of the report. You can use that for an accurate assessment of all the VMs’ component locations.

The Output Object

The following shows the output of the cmdlet run on my host with the -IgnoreVHDFolder and -Verbose switches set:

[Screenshot: Get-VMStorageConsistency output with -IgnoreVHDFolder and -Verbose]

Output object structure:

  • Name: String that contains the virtual machine’s name
  • ComputerName: String that contains the name of the Hyper-V host that currently owns the virtual machine
  • VMId: String representation of the virtual machine’s GUID
  • Consistent: Boolean value that indicates whether or not the virtual machine’s storage is consistently placed
  • Locations: An array of custom objects that contain information about the location of each component

Location object structure:

  • Component: A string that identifies the component. I made these strings up. Possible values:
    • Configuration: Location of the “Virtual Machines” folder that contains the VM’s definition files (.xml, .vmcx, .bin, etc.)
    • Checkpoints: Location of the “Snapshots” folder configured for this virtual machine. Note: That folder might not physically exist if the VM uses a non-default location and has never been checkpointed.
    • SecondLevelPaging: Location of the folder where the VM will place its .slp files if it ever uses second-level paging.
    • Virtual Hard Disk: Full path of the virtual hard disk.
    • CD/DVD Image: Full path of the attached ISO.

An example that puts the object to use:
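
    # a sketch; assumes you saved the script as C:\Scripts\Get-VMStorageConsistency.ps1
    C:\Scripts\Get-VMStorageConsistency.ps1 -VM (Get-VM) |
        Where-Object { -not $_.Consistent } |
        Select-Object -ExpandProperty Name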

The above will output only the names of local virtual machines with inconsistent storage.

Script Source

The script is intended to be named “Get-VMStorageConsistency”. As shown, you would call its file (ex.: C:\Scripts\Get-VMStorageConsistency). If you want to use it dot-sourced or in your profile, uncomment lines 55, 56, and 262 (subject to change from editing; look for the commented-out function { } delimiters).

 

How Device Naming for Network Adapters Works in Hyper-V 2016

Not all of the features introduced with Hyper-V 2016 made a splash. One of the less-publicized improvements allows you to determine a virtual network adapter’s name from within the guest operating system. I don’t even see it in any official documentation, so I don’t know what to officially call it. The related settings use the term “device naming”, so we’ll call it that. Let’s see how to put it to use.

Requirements for Device Naming for Network Adapters in Hyper-V 2016

For this feature to work, you need:

  • 2016-level hypervisor: Hyper-V Server, Windows Server, Windows 10
  • Generation 2 virtual machine
  • Virtual machine with a configuration version of at least 6.2
  • Windows Server 2016 or Windows 10 guest

What is Device Naming for Hyper-V Virtual Network Adapters?

You may already be familiar with a technology called “Consistent Device Naming”. If you were hoping to use that with your virtual machines, sorry! The device naming feature utilized by Hyper-V is not the same thing. I don’t know for sure, but I’m guessing that the Hyper-V Integration Services enable this feature.

Basically, if you were expecting to see something different in the Network and Sharing Center, it won’t happen:

[Screenshot: the Network and Sharing Center, showing no change]

Nor in Get-NetAdapter:

[Screenshot: Get-NetAdapter output, showing no change]

In contrast, a physical system employing Consistent Device Naming would have automatically named the network adapters in some fashion that reflected their physical installation. For example, “SLOT 4 Port 1” would be the name of the first port of a multi-port adapter installed in the fourth PCIe slot. It may not always be easy to determine how the manufacturers numbered their slots and ports, but it helps more than “Ethernet 5”.

Anyway, you don’t get that out of Hyper-V’s device naming feature. Instead, it shows up as an advanced feature. You can see that in several ways. First, I’ll show you how to set the value.

Setting Hyper-V’s Network Device Name in PowerShell

From the management operating system or a remote PowerShell session opened to the management operating system, use Set-VMNetworkAdapter:
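
    # enables device naming on every virtual adapter attached to the VM named sv16g2
    Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On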

This enables device naming for all of the virtual adapters connected to the virtual machine named sv16g2.

If you try to enable it for a generation 1 virtual machine, you get a clear error (although it sometimes inexplicably complains about the DVD drive first, it eventually gets where it’s going).

The cmdlet doesn’t know if the guest operating system supports this feature (or even if the virtual machine has an installed operating system).

If you don’t want the default “Network Adapter” name, then you can set the name at the same time that you enable the feature:
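
    # a sketch using the separate rename cmdlet; 'Management' is a hypothetical adapter name
    Rename-VMNetworkAdapter -VMName sv16g2 -NewName 'Management'
    Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On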

These cmdlets all accept pipeline information as well as a number of other parameters. You can review the TechNet article that I linked in the beginning of this section. I also have some other usage examples on our omnibus networking article.

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter.

Note: You must reboot the guest operating system for it to reflect the change.

Setting Hyper-V’s Network Device Name in the GUI

You can use Hyper-V Manager or Failover Cluster Manager to enable this feature. Just look at the bottom of the Advanced Features sub-tab of the network adapter’s tab. Check the Enable device naming box. If that box does not appear, you are viewing a generation 1 virtual machine.

[Screenshot: the Enable device naming checkbox on the Advanced Features sub-tab]

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter. See the preceding section for instructions.

Note: You must reboot the guest operating system for it to reflect the change.

Viewing Hyper-V’s Network Device Name in the Guest GUI

This will only work in Windows 10/Windows Server 2016 (GUI) guests. The screenshots in this section were taken from a system that still had the default name of Network Adapter.

  1. Start in the Network Connections window. Right-click on the adapter and choose Properties:
    [Screenshot: the adapter’s right-click menu with Properties selected]
  2. When the Ethernet # Properties dialog appears, click Configure:
    [Screenshot: the Ethernet Properties dialog with the Configure button]
  3. On the Microsoft Hyper-V Network Adapter Properties dialog, switch to the Advanced tab. You’re looking for the Hyper-V Network Adapter Name property. The Value holds the name that Hyper-V holds for the adapter:
    [Screenshot: the Hyper-V Network Adapter Name property on the Advanced tab]

If the Value field is empty, then the feature is not enabled for that adapter or you have not rebooted since enabling it. If the Hyper-V Network Adapter Name property does not exist, then you are using a down-level guest operating system or a generation 1 VM.

Viewing Hyper-V’s Network Device Name in the Guest with PowerShell

As you saw in the preceding section, this field appears with the adapter’s advanced settings. Therefore, you can view it with the Get-NetAdapterAdvancedProperty cmdlet. To see all of the settings for all adapters, use that cmdlet by itself.

[Screenshot: Get-NetAdapterAdvancedProperty output for all adapters]

Tab completion doesn’t work for the names, so drilling down just to that item can be a bit of a chore. The long way:
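
    # one possible form: pull everything, then filter on the display name
    Get-NetAdapterAdvancedProperty | Where-Object -Property DisplayName -EQ 'Hyper-V Network Adapter Name'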

Slightly shorter way:
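
    # the cmdlet can filter on DisplayName itself
    Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name'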

One of many not-future-proofed-but-works-today ways:
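
    # keys off the registry keyword, which could change in a future version
    Get-NetAdapterAdvancedProperty -RegistryKeyword HyperVNetworkAdapterName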

For automation purposes, you need to query the DisplayValue or the RegistryValue property. I prefer the DisplayValue. It is represented as a standard System.String. The RegistryValue is represented as a System.Array of System.String (or String[]). It will never contain more than one entry, so dealing with the array is just an extra annoyance.

To pull that field, you could use select (an alias for Select-Object), but I wouldn’t:
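
    Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name' | Select-Object -Property DisplayValue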

[Screenshot: Select-Object output showing the wrapped DisplayValue property]

I don’t like select in automation because it creates a custom object. Once you have that object, you then need to take an extra step to extract the value of that custom object. The reason that you used select in the first place was to extract the value. select basically causes you to do double work.

So, instead, I recommend the more .Net way of using a dot selector:
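
    (Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name').DisplayValue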

You can store the output of that line directly into a variable that will be created as a System.String type that you can immediately use anywhere that will accept a String:
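
    $adapterName = (Get-NetAdapterAdvancedProperty -Name Ethernet -DisplayName 'Hyper-V Network Adapter Name').DisplayValue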

Notice that I injected the Name property with a value of Ethernet. I didn’t need to do that. I did it to ensure that I only get a single response. Of course, it would fail if the VM didn’t have an adapter named Ethernet. I’m just trying to give you some ideas for your own automation tasks.

Viewing Hyper-V’s Network Device Name in the Guest with Regedit

All of the network adapters’ configurations live in the registry. It’s not exactly easy to find, though. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}. Not sure if it’s a good thing or a bad thing, but I can identify that key on sight now. Expand that out, and you’ll find several subkeys with four-digit names. They’ll start at 0000 and count upward. One of them corresponds to the virtual network adapter. The one that you’re looking for will have a KVP named HyperVNetworkAdapterName. Its value will be what you came to see. If you want further confirmation, there will also be a KVP named DriverDesc with a value of Microsoft Hyper-V Network Adapter (and possibly a number, if it’s not the first).
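
If you’d rather have PowerShell do the digging, a query along these lines surfaces the same values (a sketch; the key path is the one named above, and unreadable subkeys are silently skipped):

    Get-ChildItem -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}' -ErrorAction SilentlyContinue |
        Get-ItemProperty |
        Where-Object { $_.HyperVNetworkAdapterName } |
        Select-Object -Property DriverDesc, HyperVNetworkAdapterName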

Understanding the Windows Server Semi-Annual Channel

Microsoft has made major changes to the way that they build and release their operating systems. The new Windows Server “Semi-Annual Channel” (SAC) marks a substantial departure from the familiar release pattern that Microsoft has established. The change has pleased some people, upset many, and confused even more. With all the flash of new features, it’s easy to miss the finer points — specifically, how you, your organization, and your issues fit into the new model.

The Traditional Microsoft OS Release Model

Traditionally, Microsoft would work on pre-release builds internally and with a small group of customers, partners, and other interested stakeholders (such as MVPs). Then, they would finesse the builds with open betas (usually called “Release Candidates”). Then, there would be an RTM (release-to-manufacturing) event followed by GA (general availability). The release would then be considered “current”. It would enjoy regular updates including service packs and feature upgrades for a few years, then it would go into “extended support” where it would only get stability and security updates. While customers purchased and worked with the “current” version, work on the next version would begin in parallel.

Not every version followed this model exactly, but all of them were similar. The most recent Windows Server operating system to employ this model is Windows Server 2016.

Changes to the Model with Windows 10/Server 2016

The “Windows Insider Program” was the first major change to Microsoft’s OS build and release model. Initially, it was most similar to the “release candidate” phase of earlier versions. Anyone could get in and gain access to Windows 10 builds before Windows 10 could even be purchased. However, it deviated from the RC program in two major ways:

  • The Windows Insider Program includes an entire community.
  • The Windows Insider Program continues to provide builds after Windows 10 GA.

The Windows Insider Community

Most of us probably began our journey to Windows Insider by clicking an option in the Windows 10 update interface. However, you can also sign up using the dedicated Windows Insider web page. You get access to a dedicated forum. And, of course, you’ll get e-mail notifications from the program team. You can tell Microsoft what you think about your build using the Feedback Hub. That applet is not exclusive to Insiders, but they’ll know if you’re talking about an Insider build or a GA build.

Ongoing Windows Insider Builds

I expect that most Insiders prize access to new builds of Windows 10 above the other perks of the program. The Windows 10 Insider Program allows you to join one of multiple “rings” (one per joined Windows 10 installation). The ring that an installation belongs to dictates how close it will be to the “cutting edge”. You can read up on these rings and what they mean on the Insider site.

The most important thing about Windows Insider builds — and the reason that I brought them up at all in this article — is that they are not considered production-ready. The fast ring builds will definitely have problems. The preview release builds will likely have problems. You’re not going to get help for those problems outside of the Insider community, and any fix will almost certainly include the term “wait for the next build” (or the next… or the one after… or some indeterminate future build). I suspect that most software vendors will be… reluctant… to officially support any of their products on an Insider build.

Windows Server Insider Program

The Windows Server Insider Program serves essentially the same purpose as the Windows 10 Insider Program, but for the server operating system. The sign-up process is a bit different, as it goes through the Windows Insider Program for Business site. The major difference is the absence of any “rings”. Only one current Windows Server Insider build exists at any given time.

Introducing the Windows Server Semi-Annual Channel

I have no idea what you’ve already read, so I’m going to assume that you haven’t read anything. But, I want to start off with some very important points that I think others gloss over or miss entirely:

  • Releases in the Windows Server Semi-Annual Channel are not Windows Server 2016! Windows Server 2016 belongs to the Long-Term Servicing Channel (LTSC). The current SAC is simply titled “Windows Server, version 1709”.
  • You cannot upgrade from Windows Server 2016 to the Semi-Annual Channel. For all I know, that might change at some point. Today, you can only switch between LTSC and SAC via a complete wipe-and-reinstall.
  • On-premises Semi-Annual Channel builds require Software Assurance (I’d like to take this opportunity to point out: so does Nano). I haven’t been in the reseller business for a while so I don’t know the current status, but I was never able to get Software Assurance added to an existing license. It was always necessary to purchase it at the same time as its base volume Windows Server license. I don’t know of any way to get Software Assurance with an OEM build. All of these things may have changed. Talk to your reseller. Ask questions. Do your research. Do not blindly assume that you are eligible to use an SAC build.
  • The license for Windows Server is interchangeable between LTSC and SAC. Meaning that, if you are a Software Assurance customer, you’ll be able to download/use either product per license count (but not both; 1 license count = 1 license for LTSC or 1 license for SAC).
  • The keys for Windows Server are not interchangeable between LTSC and SAC. I’m not yet sure how this will work out for Automatic Virtual Machine Activation. I did try adding the WS2016 AVMA key to a WS1709 guest and it did not like that one bit.
  • SAC does not offer the Desktop Experience. Meaning, there is no GUI. There is no way to install a GUI. You don’t get a GUI. You get only Core.
  • Any given SAC build might or might not have the same available roles and features as the previous SAC build. Case in point: Windows Server, version 1709 does not support Storage Spaces Direct.
  • SAC builds are available in Azure.
  • SAC builds are supported for production workloads. SAC follows the Windows Server Insider builds, but SAC is not an Insider build.
  • SAC builds will only be supported for 18 months. You can continue using a specific SAC build after that period, but you can’t get support for it.
  • SAC builds should release roughly every six months.
  • SAC builds will be numbered for their build month. Ex: 1709 = “2017 September (09)”.
  • SAC ships in Standard and Datacenter flavors only.

[Image: What is the Windows Server Semi-Annual Channel?]

The Semi-Annual Channel is Not for Everyone

Lots of people have lots of complaints about the Windows Server Semi-annual Channel. I won’t judge the reasonableness or validity of any of them. However, I think that many of these complaints are based on a misconception. People have become accustomed to a particular release behavior, so they expected SAC to serve as vNext of Windows Server 2016. Looking at Microsoft’s various messages on the topic, I don’t feel like they did a very good job explaining the divergence. So, if that’s how you look at it, then it’s completely understandable that you’d feel like WS1709 slapped you in the face.

However, it looks different when you realize that WS1709 is not intended as a linear continuation. vNext of Windows Server 2016 will be another release in the LTSC cycle. It will presumably arrive sometime late next year or early the following year, and it will presumably be named Windows Server 2018 or Windows Server 2019. Unless there are other big changes in our future, it will have the Desktop Experience and at least the non-deprecated roles and features that you currently have available in WS2016. Basically, if you just follow the traditional release model, you can ignore the existence of the SAC releases.

Some feature updates in SAC will also appear in LTSC updates. As an example, both WS1709 and concurrent WS2016 patches introduce the ability for containers to use persistent data volumes on Cluster Shared Volumes.

Who Benefits from the Semi-Annual Channel?

If SAC is not meant for everyone, then who should use it? Let’s get one thing out of the way: no organization will use SAC for everything. The LTSC will always have a place. Do not feel like you’re going to be left behind if you stick with the LTSC.

I’ll start by simplifying some of Microsoft’s marketing-speak about targeted users:

  • Organizations with cloud-style deployments
  • Systems developers

Basically, you need to have something akin to a mission-critical level of interest in one or more of these topics:

  • Containers and related technologies (Docker, Kubernetes, etc.)
  • Software-defined networking
  • High-performance networking. I’m not talking about the “my test file copy only goes 45Mbps” flavor of “high performance” networking, but the “processing TCP packets between the real-time interface and its OLTP database causes discernible delays for my 150,000 users” flavor.
  • Multiple large Hyper-V clusters

Read the “What’s New” article for yourself. If you can’t find any must-have-yesterdays in that list, then don’t worry that you might have to wait twelve to eighteen months for vNext of LTSC to get them.

Who Benefits from the Long-Term Servicing Channel?

As I said, the LTSC isn’t going anywhere. Not only that, we will all continue to use more LTSC deployments than SAC deployments.

Choose LTSC for:

  • Stability. Even though SAC will be production-ready, the lead time between initial conception and first deployment will be much shorter. The wheel for new SAC features will be blocky.
  • Predictability: The absence of S2D in WS1709 caught almost everyone by surprise. That sort of thing won’t happen with LTSC. They’ll deprecate features first to give you at least one version’s worth of fair warning. (Note: S2D will return; it’s not going away).
  • Third-party applications: We all have vendors that are still unsure about WS2008. They’re certainly not going to sign off on SAC builds.
  • Line-of-business applications: Whether third-party or Microsoft, the big app server that holds your organization up doesn’t need to be upgraded twice each year.
  • The GUI.

What Does SAC Mean for Hyper-V?

The above deals with Windows Server Semi-Annual Channel in a general sense. Since this is a Hyper-V blog, I can’t leave without talking about what SAC means for Hyper-V.

For one thing, SAC does not have a Hyper-V Server distribution. I haven’t heard of any long-term plans, so the safest bet is to assume that future releases of Hyper-V Server will coincide with LTSC releases.

As for the Hyper-V role, it fits almost everything that SAC targets. Just look at the new Hyper-V-related features in 1709:

  • Enhanced VM load-balancing
  • Storage of VMs in storage-class memory (non-volatile RAM)
  • vPMEM
  • Splitting of “guest state” information out of the .vmrs file into its own .vmgs file
  • Support for running the host guardian service as a virtual machine
  • Support for Shielded Linux VMs
  • Virtual network encryption

Looking at that list, “Shielded Linux VMs” seems to have the most appeal to a small- or medium-sized organization. As I understand it, that’s not a feature so much as a support statement. Either way, I can shield a Linux VM on my fully-patched Windows Server 2016 build 1607 (LTSC) system.

As for the rest of the features, they will find the widest adoption in larger, more involved Hyper-V installations. I obviously can’t speak for everyone, but it seems to me that anyone that needs those features today won’t have any problems accepting the terms that go along with the switch to SAC.

For the rest of us, Hyper-V in LTSC has plenty to offer.

What to Watch Out For

Even though I don’t see any serious problems that will result from sticking with the LTSC, I don’t think this SKU split will be entirely painless.

For one thing, the general confusion over “Windows Server 2016” vs. “Windows Server, version 1709” includes a lot of technology authors. I see a great many articles with titles that include “Windows Server 2016 build 1709”. So, when you’re looking for help, you’re going to need to be on your toes. I think the limited appeal of the new features will help to mitigate that somewhat. Still, if you’re going to be writing, please keep the distinction in mind.

For another, a lot of technology writers (including those responsible for documentation) work only with the newest, hottest tech. They might not even think to point out that one feature or another belongs only to SAC. I think that the smaller target audience for the new features will keep this problem under control, as well.

The Future of LTSC/SAC

All things change. Microsoft might rethink one or both of these release models. Personally, I think they’ve made a good decision with these changes. Larger customers will be able to live out on the bleeding edge and absorb all the problems that come with early adoption. By the time these features roll into LTSC, they’ll have undergone solid vetting cycles on someone else’s production systems. Customers in LTSC will benefit from the pain of others. That might even entice them to adopt newer releases earlier.

Most importantly, effectively nothing changes for anyone that sticks with the traditional regular release cycle. Windows Server Semi-Annual Channel offers an alternative option, not a required replacement.

7 Powerful Scripts for Practical Hyper-V Network Configurations

I firmly believe in empowerment. I feel that I should supply you with knowledge, provide you with how-tos, share insights and experiences, and release you into the world to make your own decisions. However, I came to that approach by standing at the front of a classroom. During class, we’d almost invariably walk through exercises. Since this is a blog and not a classroom, I do things differently. We don’t have common hardware in a controlled environment, so I typically forgo the exercises bit. As a result, that leaves a lot of my readers at the edge of a cliff with no bridge to carry them from theory to practice. And, of course, there are those of you that would love to spend time reading about concepts but just really need to get something done right now. If you’re stopped at Hyper-V networking, this is the article for you.

Script Inventory

These scripts are included in this article:

  • Configure networking for a single host with a single adapter
  • Configure a standalone host with 2-4 gigabit adapters for converged networking
  • Configure a standalone host with 2-4 10 GbE adapters for converged networking
  • Configure a clustered host with 2-4 gigabit adapters for converged networking
  • Configure a clustered host with 2-4 10 GbE adapters for converged networking
  • Set preferred order for cluster Live Migration networks
  • Exclude cluster networks from Live Migration

Basic Usage

I’m going to show each item as a stand-alone script. First, you’ll locate the one that best aligns with what you’re trying to accomplish. You’ll copy/paste that into a .ps1 PowerShell script file on your system. You’ll need to edit the script to provide information about your environment so that it will work for you. I’ll have you set each of those items at the beginning of the script. Then, you’ll just need to execute the script on your host.

Most scripts have their own “basic usage” heading that explains a bit about how to use them without modification.

Enhanced Usage

I could easily compile these into standalone executables that you couldn’t tinker with. Even though I want to give you a fully prepared springboard, I also want you to learn how the system works and what you’re doing to it.

Most scripts have their own “enhanced usage” heading that gives some ideas about how you might exploit or extend them yourself.

Configure networking for a single host with a single adapter

Use this script for a standalone system that only has one physical adapter. It will:

  • Disable VMQ for the physical adapter
  • Create a virtual switch on the adapter
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it.
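
To give you an idea of its shape, here is a heavily abbreviated sketch of the steps the script performs. Every value below is a placeholder for your environment, and the full script wraps these calls in validation:

    # placeholder values; edit for your environment
    $AdapterName = 'Ethernet'
    $SwitchName  = 'vSwitch'
    $VlanId      = 10                           # 0 = leave the management adapter untagged
    $IPAddress   = '192.168.25.10'
    $Gateway     = '192.168.25.1'
    $DnsServers  = '192.168.25.5', '192.168.25.6'

    Disable-NetAdapterVmq -Name $AdapterName    # VMQ hurts more than it helps on gigabit
    New-VMSwitch -Name $SwitchName -NetAdapterName $AdapterName -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName $SwitchName
    if ($VlanId) { Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId $VlanId }
    New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress $IPAddress -PrefixLength 24 -DefaultGateway $Gateway
    Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses $DnsServers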

Advanced Usage for this Script

As-is, this script should be complete for most typical single-adapter systems. You might choose to disable some items. For instance, if you are using this on Windows 10, you might not want to provide a fixed IP address. In that case, just put a # sign at the beginning of lines 42 onward. When the virtual network adapter is created, it will remain in DHCP mode.

 

Configure a standalone host with 2-4 gigabit adapters for converged networking

Use this script for a standalone host that has between two and four gigabit adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. Be aware that it will have problems if you already have a team.
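
The structure mirrors the single-adapter sketch above, with the team inserted at the front; roughly like this (team, switch, and adapter names are placeholders):

    $TeamMembers = 'Ethernet', 'Ethernet 2'     # two to four gigabit adapters
    New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers $TeamMembers -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    Disable-NetAdapterVmq -Name ($TeamMembers + 'ConvergedTeam')    # physical members and the team interface
    New-VMSwitch -Name 'vSwitch' -NetAdapterName 'ConvergedTeam' -AllowManagementOS $false
    # the management adapter, VLAN, IP, and DNS steps continue as in the single-adapter sketch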

Advanced Usage for this Script

This script serves as the base for the remaining scripts on this page. Likewise, you could use it as a base for your own. You could also use any of the items as examples for whatever similar actions you wish to accomplish in your own scripts.

 

Configure a standalone host with 2-4 10 GbE adapters for converged networking

Use this script for a standalone host that has between two and four 10GbE adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

It won’t take a great deal of sleuthing to discover that this script is identical to the preceding one, except that it does not disable VMQ.

 

Configure a clustered host with 2-4 gigabit adapters for converged networking

Use this script for a host that has between two and four gigabit adapters that will be a member of a cluster. Like the previous scripts, it will employ a converged networking configuration. The script will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.
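
The pieces beyond the standalone script look roughly like this (a sketch; the names and addresses are placeholders):

    Add-VMNetworkAdapter -ManagementOS -Name 'Cluster' -SwitchName 'vSwitch'
    Add-VMNetworkAdapter -ManagementOS -Name 'Live Migration' -SwitchName 'vSwitch'
    New-NetIPAddress -InterfaceAlias 'vEthernet (Cluster)' -IPAddress '192.168.26.10' -PrefixLength 24
    New-NetIPAddress -InterfaceAlias 'vEthernet (Live Migration)' -IPAddress '192.168.27.10' -PrefixLength 24
    # keep the cluster and Live Migration adapters from registering in DNS
    Set-DnsClient -InterfaceAlias 'vEthernet (Cluster)', 'vEthernet (Live Migration)' -RegisterThisConnectionsAddress $false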

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.

Advanced Usage for this Script

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.

 

Configure a clustered host with 2-4 10 GbE adapters for converged networking

This script is identical to the preceding except that it leaves VMQ enabled. It does the following:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

These notes are identical to those of the preceding script.

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.

Advanced Usage for this Script

These notes are identical to those of the preceding script.

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.

 

Set preferred order for cluster Live Migration networks

This script aligns with the two preceding scripts to ensure that the cluster chooses the named “Live Migration” adapter first when moving virtual machines between nodes. The “Cluster” virtual adapter will be used second. The management adapter will be used as the final fallback.

Basic Usage for this Script

Use this script after you’ve run one of the above two clustered host scripts and joined them into a cluster.
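
The heart of it is the MigrationNetworkOrder parameter on the cluster’s Virtual Machine resource type, which takes a semicolon-delimited list of cluster network IDs. A sketch, assuming your cluster network names match the adapters created by the earlier scripts:

    $orderedNames = 'Live Migration', 'Cluster', 'Management'   # most preferred first
    $orderedIds = foreach ($name in $orderedNames) { (Get-ClusterNetwork -Name $name).Id }
    Get-ClusterResourceType -Name 'Virtual Machine' |
        Set-ClusterParameter -Name MigrationNetworkOrder -Value ($orderedIds -join ';')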

Advanced Usage for this Script

Modify this to change the order of Live Migration adapters. You must specify all adapters recognized by the cluster. Check the “MigrationExcludeNetworks” registry key that’s in the same location as “MigrationNetworkOrder”.

 

Exclude cluster networks from Live Migration

This script is intended to be used as an optional adjunct to the preceding script. Since my scripts set up all virtual adapters to be used in Live Migration, the network names used here are fabricated.

Basic Usage for this Script

You’ll need to set the network names to match yours, but otherwise, the script does not need to be altered.
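
A sketch of the idea, using a fabricated “iSCSI” network name:

    $excludedIds = foreach ($name in @('iSCSI')) { (Get-ClusterNetwork -Name $name).Id }
    Get-ClusterResourceType -Name 'Virtual Machine' |
        Set-ClusterParameter -Name MigrationExcludeNetworks -Value ($excludedIds -join ';')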

Advanced Usage for this Script

This script will need to be modified in order to be used at all.

 

13 Questions Answered on the Future of Hyper-V

What does the future hold for Hyper-V and its users? Technology moves fast so should Hyper-V admins be concerned about the future? Well, we don’t have a crystal ball to tell us what the future holds but we do have 3 industry experts and Microsoft MVPs to tell you what to expect. Following our hugely popular panel webinar 3 Emerging Technologies that will Change the Way you use Hyper-V we’ve decided to bring together all of the questions asked during both sessions (we hold 2 separate webinar sessions on the same topic to accommodate our European and American audiences) into one article with some extended answers to address the issue of what’s around the corner for Hyper-V and related technologies.

Let’s get started!

[Image: Questions from the webinar “3 Emerging Technologies that will Change the Way you use Hyper-V”]

The Questions

Question 1: Do you think IT Security is going to change as more and more workloads move into the cloud?

Answer: Absolutely! As long as we’re working with connected systems, no matter where they are located, we will always have to worry about security. One common misconception, though, is that a workload housed inside of Microsoft Azure is automatically less secure. It isn’t. Public cloud platforms have been painstakingly built from the ground up with the help of industry security experts. You’ll find that if best practices are observed, along with the rules of least access and just-in-time administration, the public cloud is a highly secure platform.

Question 2: Do you see any movement to establish a global “law” of data security/restrictions that are not threatened by local laws (like the patriot act)?

Answer: Until all countries of the world are on the same page, I just don’t see this happening. The US treats data privacy in a very different way than the EU, unfortunately. The upcoming General Data Protection Regulation (GDPR), coming in May 2018, is a step in the right direction, but it only applies to the EU and data traversing the boundaries of the EU. It will certainly affect US companies and organizations, but nothing similar is in the works in the US.

Question 3: In the SMB Space, where a customer may only have a single MS Essentials server and use Office 365, do you feel that this is still something that should move to the cloud?

Answer: I think the answer to that question depends greatly on the customer and the use case. As Didier, Thomas and I discussed in the webinar, the cloud is a tool, and you have to evaluate for each case, whether it makes sense or not to run that workload in the cloud. If for that particular customer, they could benefit from those services living in the cloud with little downside, then it may be a great fit. Again, it has to make sense, technically, fiscally, and operationally, before you can consider doing so.

Question 4: What exactly is a Container?

Answer: While not the same at all, it’s often easiest to see a container as a kind of ultra-stripped down VM. A container holds an ultra-slim OS image (in the case of Nano Server, 50-60 MB), any supporting code framework, such as DotNet, and then whatever application you want to run within the container. Containers are not the same as VMs because Windows containers all share the kernel of the underlying host OS. However, if you require further isolation, you can use Hyper-V containers, which run a container within an optimized VM so you can take advantage of Hyper-V’s isolation capabilities.

Question 5: On-Premises Computing is Considered to be a “cloud” now too correct?

Answer: That is correct! In my view, the term cloud doesn’t refer to a particular place, but to the new technologies and software-defined methods that are taking over datacenters today. So you can refer to your infrastructure on-prem as “private cloud”, and anything like Azure or AWS as “Public Cloud”. Then on top of that anything that uses both is referred to as “Hybrid Cloud”.

Question 6: What happens when my client goes to the cloud and they lose their internet service for 2 weeks?

Answer: The cloud, just like any technology solution, has its shortcomings that can be overcome if planned for properly. If you have a mission-critical service you’d like to host in the cloud, then you’ll want to research ways for the workload to be highly available. That would include a secondary internet connection from a different provider or some way to make that workload accessible from the on-prem location if needed. Regardless of where the workload is, you need to plan for eventualities like this.

Question 7: What Happened to Azure Pack?

Answer: Azure Pack is still around and usable; it will just be replaced by Azure Stack at some point. In the meantime, there are integrations available that allow you to manage both solutions from your Azure Stack management utility.

Question 8: What about the cost of Azure Stack? What’s the entry point?

Answer: This is something of a difficult question. The figures I’ve heard range from $75k to $250k, depending on the vendor and the load-out. You’ll want to contact your preferred hardware vendor for more information on this question.

Question 9: We’re a hosting company, is it possible to achieve high levels of availability with Azure Stack?

Answer: Just like any technology solution, you can achieve the coveted 4 9s of availability. The question is how much money you want to spend. You could do so with Azure Stack and the correct supporting infrastructure. However, one other thing to keep in mind: your SLA is only as good as your supporting vendors’. For example, if you sell 4 9s as an SLA and the internet provider for your datacenter can only provide 99%, then you’ve already broken your SLA.

Question 10: For Smaller businesses running Azure Stack, should software vendors assume these businesses will look to purchase traditionally on-prem software solutions that are compatible with this? My company’s solution does not completely make sense for the public cloud, but this could bridge the gap. 

Answer: I think for most SMBs, Azure Stack will be fiscally out of reach. In Azure Stack you’re really paying for a “Cloud Platform”, and for most SMBs it will make more sense to take advantage of public Azure if those types of features are needed. That said, to answer your question, there are already vendors doing this. Anything that will deploy on public Azure using ARM will also deploy easily on Azure Stack.

Question 11: In Azure Stack, can I use any backup software and backup the VM to remote NAS storage or to AWS?

Answer: At release, there is no support for 3rd party backup solutions in Azure Stack. Right now there is a built-in flat file backup and that is it. I suspect that it will be opened up to third-party vendors at some point in time and it will likely be protected in much the same way as public Azure resources.

Question 12: How would a lot of these [Azure Stack] services be applied to the K-12 education market? There are lots of laws that require data to be stored in the same country. Yet providers often host in a different country. 

Answer: If you wanted to leverage a provider’s Azure Stack somewhere, you would likely have to find one that actually hosts it in the geographical region you’re required to operate in. Many hosters will provide written proof of where the workload is hosted for these types of situations.

Question 13: I’m planning to move to public Azure, how many Azure cloud Instances would I need?

Answer: There is no hard-and-fast answer for this. It depends on the number of VMs/applications and whether you run them in Azure as VMs or in Azure’s PaaS fabric. The Azure Pricing Calculator will give you an idea of VM sizes and what services are available.

Watch the webinar

Did you miss the webinar when it first went out? Has this blog post instilled a desire to rewatch the session? Have no fear, we have set up an on-demand version for you to watch right now! Simply click on the link below to go to the on-demand webinar page where you can watch a recording of the webinar free.

WATCH NOW: 3 Emerging Technologies that will Change the Way you Use Hyper-V – Altaro On-demand Webinar

Join the Discussion: #FutureOfHyperV

Follow the hashtag #FutureOfHyperV on Twitter and join the discussion about the impact of the emerging technologies discussed in our webinar along with other things that will shape the future of Hyper-V for years to come.

Did we miss something?

If you have a question on the future of Hyper-V or any of the 3 emerging technologies that were discussed in the webinar, just post in the comments below and we will get straight back to you. Furthermore, if you asked a question during the webinar that you don’t see here, by all means, let us know in the comments section below and we will be sure to answer it here. Any follow-up questions are also very welcome, so feel free to let us know about those as well!

As always – thanks for reading!

How to Hot Add/Remove Virtual Network Adapters in Hyper-V 2016

Last week I showed you how to hot add/remove memory in Hyper-V 2016 and this week I’m covering another super handy new feature that system admins will also love. In fact, Hyper-V 2016 brought many fantastic features. Containers! It also added some features that indicate natural product maturation. On that list, we find “hot add/remove of virtual network adapters”. If that’s not obvious, it means that you can now add or remove virtual network adapters to/from running virtual machines.

Requirements for Hyper-V Hot Add/Remove of Virtual Network Adapters

To make hot add/remove of network adapters work in Hyper-V, you must meet these requirements:

  • Hypervisor must be 2016 version (Windows 10, Windows Server 2016, or Hyper-V Server 2016)
  • Virtual machine must be generation 2
  • To utilize the Device Naming feature, the virtual machine version must be at least 6.2. The virtual machine configuration version does not matter if you do not attempt to use Device Naming. Meaning, you can bring a version 5.0 virtual machine over from 2012 R2 to 2016 and hot add a virtual network adapter. A discussion on Device Naming will appear in a different article.

The guest operating system may need an additional push to realize that a change was made. I did not encounter any issues with the Windows operating systems that I tested; Linux results were mixed, as you’ll see below.

How to Use PowerShell to Add or Remove a Virtual Network Adapter from a Running Hyper-V Guest

I always recommend PowerShell for working with the second and subsequent network adapters on a virtual machine. Otherwise, they’re all called “Network Adapter”. Sorting that out can be unpleasant.

Adding a Virtual Adapter with PowerShell

Use Add-VMNetworkAdapter to add a network adapter to a running Hyper-V guest. That’s the same command that you’d use for an offline guest, as well. I don’t know why the authors chose the verb “Add” instead of “New”.
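
    # a minimal example; the VM, adapter, and switch names are hypothetical
    Add-VMNetworkAdapter -VMName svtest -Name 'Secondary' -SwitchName vSwitch -DeviceNaming On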

The above will work on a virtual machine with a configuration version of at least 6.2. If the virtual machine is set to a lower version, you get a rather confusing message that talks about DVD drives.

It does eventually get around to telling you exactly what it doesn’t like. You can avoid this error by not specifying the DeviceNaming parameter. If you’re scripting, you can avoid the parameter by employing splatting or by setting DeviceNaming to Off.

You can use any of the other parameters of Add-VMNetworkAdapter normally.

Removing a Virtual Adapter with PowerShell

To remove the adapter, use Remove-VMNetworkAdapter:
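
    # removes the adapter added above; names are hypothetical
    Remove-VMNetworkAdapter -VMName svtest -Name 'Secondary'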

This is where things can get… interesting. Especially if you didn’t specify a unique name for the adapter. The Name parameter works like a search filter; it will remove any adapter that perfectly matches that name. So, if all of the virtual machine’s network adapters use the default name Network Adapter, and you specify Network Adapter for the Name parameter, then all of that VM’s adapters will be removed.

To address that issue, you’ll need to employ some cleverness. A quick ‘n’ dirty option would be to just remove all of the adapters, then add one. By default, that one adapter will pick up an IP from an available DHCP server. Since you can specify a static MAC address with the StaticMacAddress parameter of Add-VMNetworkAdapter, you can control that behavior with reservations.

You could also filter adapters by MAC address:
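
    # the MAC address is a placeholder; check Get-VMNetworkAdapter output for the real one
    Get-VMNetworkAdapter -VMName svtest |
        Where-Object -Property MacAddress -EQ '00155D010205' |
        Remove-VMNetworkAdapter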

You could also use arrays to selectively remove items:
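
    # removes the second adapter in the returned collection (index 1)
    $adapters = Get-VMNetworkAdapter -VMName svtest
    $adapters[1] | Remove-VMNetworkAdapter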

You could even use a loop to knock out all adapters after the first:
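
    # removes every adapter except the first one retrieved
    $adapters = Get-VMNetworkAdapter -VMName svtest
    for ($i = 1; $i -lt $adapters.Count; $i++)
    {
        $adapters[$i] | Remove-VMNetworkAdapter
    }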

In my unscientific testing, virtual machine network adapters are always stored and retrieved in the order in which they were added, so the above script should always remove every adapter except the original. Based on the file format, I would expect that to always hold true. However, no documentation exists that outright supports that; use this sort of cleverness with caution.

I recommend naming your adapters to save a lot of grief in these instances.

How to Use the GUI to Add or Remove a Virtual Network Adapter from a Running Hyper-V Guest

These instructions work for both Hyper-V Manager and Failover Cluster Manager. Use the virtual machine’s Settings dialog in either tool.

Adding a Virtual Network Adapter in the GUI

Add a virtual network adapter to a running VM the same way that you add one to a stopped VM:

  1. On the VM’s Settings dialog, start on the Add Hardware page. The Network Adapter entry should be black, not gray. If it’s gray, then the VM is either Generation 1 or not in a valid state to add an adapter:
    [Screenshot: the Add Hardware page with Network Adapter available]
  2. Highlight Network Adapter and click Add.
  3. You will be taken to a screen where you can fill out all of the normal information for a network adapter. Set all items as desired.
    [Screenshot: the settings screen for the new network adapter]
  4. Once you’ve set everything to your liking, click OK to add the adapter and close the dialog or Apply to add the adapter and leave the dialog open.

Removing a Virtual Network Adapter in the GUI

As with adding an adapter, you remove an adapter from a running virtual machine the same way that you would from a stopped one:

  1. Start on the Settings dialog for the virtual machine. Switch to the tab for the adapter that you wish to remove:
    [Screenshot: the tab for the adapter to be removed]
  2. Click the Remove button.
    [Screenshot: the Remove button]
  3. The tab for the adapter to be removed will have all of its text crossed out. The dialog items for it will turn gray.
    [Screenshot: the adapter marked for removal, with its text crossed out]
  4. Click OK to remove the adapter and close the dialog or Apply to remove the adapter and leave the dialog open. Click Cancel if you change your mind. For OK or Apply, a prompt will appear with a warning that you’ll probably disrupt network communications:
    [Screenshot: the network disruption warning prompt]

Hot Add/Remove of Hyper-V Virtual Adapters for Linux Guests

I didn’t invest a great deal of effort into testing, but this feature works for Linux guests with mixed results. A Fedora guest running on my Windows 10 system was perfectly happy with it:

[Screenshot: Fedora guest recognizing the hot-added adapter]

OpenSUSE Leap… not so much:

[Screenshot: OpenSUSE Leap failing to recognize the hot-added adapter]

But then, I added another virtual network adapter to my OpenSUSE system. This time, I remembered to connect it to a virtual switch before adding. It liked that much better:

[Screenshot: OpenSUSE Leap recognizing the adapter connected to a virtual switch]

So, the moral of the story: for Linux guests, always specify a virtual switch when hot adding a virtual network card. Connecting it afterward does not help.

Also notice that OpenSUSE Leap did not ever automatically configure the adapter for DHCP, whereas Fedora did. As I mentioned in the beginning of the article, you might need to give some environments an extra push.

Also, Leap seemed to get upset when I hot removed the adapter:

[Screenshot: OpenSUSE Leap error message after the hot remove]

To save your eyes, the meat of that message says: “unable to send revoke receive buffer to netvsp”. I don’t know if that’s serious or not. The second moral of this story, then: hot removing network adapters might leave some systems in an inconsistent, unhappy state.

My Thoughts on Hyper-V’s Hot Add/Remove of Network Adapters Feature

Previous versions of Hyper-V did not have this feature and I never missed it. I wasn’t even aware that other hypervisors had it until I saw posts from people scrounging for any tiny excuse to dump hate on Microsoft. Sure, I’ve had a few virtual machines with services that benefited from multiple network adapters. However, I knew of that requirement going in, so I just built them appropriately from the beginning. I suppose that’s a side effect of competent administration. Overall, I find this feature to be a hammer desperately seeking a nail.

That said, it misses the one use that I might have: it doesn’t work for generation 1 VMs. As you know, a generation 1 Hyper-V virtual machine can only PXE boot from a legacy network adapter. The legacy network adapter has poor performance. I’d like to be able to remove that legacy adapter post-deployment without shutting down the virtual machine. That said, it’s very low on my wish list. I’m guessing that we’ll eventually be using generation 2 VMs exclusively, so the problem will handle itself.

During my testing, I did not find any problems at all using this feature with Windows guests. As you can see from the Linux section, things didn’t go quite as well there. Either way, I would think twice about using this feature with production systems. Network disruptions do not always play out the way you might expect. Multi-homed systems often crank the “strange” factor up somewhere near “haunted”. Multi-home a system and fire up Wireshark. I can almost promise that you’ll see something that you didn’t expect within the first five minutes.

I know that you’re going to use this feature anyway, and that’s fine; that’s why it’s there. I would make one recommendation: before removing an adapter, clear its TCP/IP settings and disconnect it from the virtual switch. That gives the guest operating system a better opportunity to deal with the removal of the adapter on familiar terms.

95 Best Practices for Optimizing Hyper-V Performance

We can never get enough performance. Everything needs to be faster, faster, faster! You can find any number of articles about improving Hyper-V performance and best practices; unfortunately, a lot of that information contains errors, FUD, and misconceptions. Some of it is just plain dated. Technology has changed, and experience continually teaches us new insights. From that, we can build a list of best practices that will help you tune your system to provide maximum performance.

Philosophies Used in this Article

This article focuses primarily on performance. It may deviate from other advice that I’ve given in other contexts. A system designed with performance in mind will be built differently from a system with different goals. For instance, a system that tries to provide high capacity at a low price point would have a slower performance profile than some alternatives.

  • Subject matter scoped to the 2012 R2 and 2016 product versions.
  • I want to stay on target by listing the best practices with fairly minimal exposition. I’ll expand ideas where I feel the need; you can always ask questions in the comments section.
  • I am not trying to duplicate pure physical performance in a virtualized environment. That’s a wasted effort.
  • I have already written an article on best practices for balanced systems. It’s a bit older, but I don’t see anything in it that requires immediate attention. It was written for the administrator who wants reasonable performance but also wants to stay under budget.
  • This content targets datacenter builds. Client Hyper-V will follow the same general concepts with variable applicability.

General Host Architecture

If you’re lucky enough to be starting in the research phase — meaning, you don’t already have an environment — then you have the most opportunity to build things properly. Making good purchase decisions pays more dividends than patching up something that you’ve already got.

  1. Do not go in blind.
    • Microsoft Assessment and Planning Toolkit will help you size your environment: MAP Toolkit
    • Ask your software vendors for their guidelines for virtualization on Hyper-V.
    • Ask people that use the same product(s) if they have virtualized on Hyper-V.
  2. Stick with logo-compliant hardware. Check the official list: https://www.windowsservercatalog.com/
  3. Most people will run out of memory first, disk second, CPU third, and network last. Purchase accordingly.
  4. Prefer newer CPUs, but think hard before going with bleeding edge. You may need to improve performance by scaling out. Live Migration requires physical CPUs to be the same or you’ll need to enable CPU compatibility mode. If your environment starts with recent CPUs, then you’ll have the longest amount of time to be able to extend it. However, CPUs commonly undergo at least one revision, and that might be enough to require compatibility mode. Attaining maximum performance may reduce virtual machine mobility.
  5. Set a target density level, e.g. “25 virtual machines per host”. While it may be obvious that higher densities result in lower performance, finding the cut-off line for “acceptable” will be difficult. However, having a target VM number in mind before you start can make the challenge less nebulous.
  6. Read the rest of this article before you do anything.

Management Operating System

Before we carry on, I just wanted to make sure to mention that Hyper-V is a type 1 hypervisor, meaning that it runs right on the hardware. You can’t “touch” Hyper-V because it has no direct interface. Instead, you install a management operating system and use that to work with Hyper-V. You have three choices:

  • Windows Server with GUI (Desktop Experience)
  • Windows Server Core
  • Hyper-V Server

Note: Nano Server initially offered Hyper-V, but that functionality will be removed (or has already been removed, depending on when you read this). Most people ignore the fine print of using Nano Server, so I never recommended it anyway.

TL;DR: In the absence of a blocking condition, choose Hyper-V Server. A solid blocking condition would be the Automatic Virtual Machine Activation feature of Datacenter Edition. In such cases, the next preferable choice is Windows Server in Core mode.

I organized those in order by distribution size. Volumes have been written about the “attack surface” and patching. Most of that material makes me roll my eyes. No matter what you think of all that, none of it has any meaningful impact on performance. For performance, concern yourself with the differences in CPU and memory footprint. The widest CPU/memory gap lies between Windows Server and Windows Server Core. When logged off, the Windows Server GUI does not consume many resources, but it does consume some. The space between Windows Server Core and Hyper-V Server is much tighter, especially when the same features/roles are enabled.

One difference between Core and Hyper-V Server is the licensing mechanism. On Datacenter Edition, that does include the benefit of Automatic Virtual Machine Activation (AVMA). That only applies to the technological wiring. Do not confuse it with the oft-repeated myth that installing Windows Server grants guest licensing privileges. The legal portion of licensing stands apart; read our eBook for starting information.

Because you do not need to pay for the license for Hyper-V Server, it grants one capability that Windows Server does not: you can upgrade at any time. That allows you to completely decouple the life cycle of your hosts from your guests. Such detachment is a hallmark of the modern cloud era.

If you will be running only open source operating systems, Hyper-V Server is the natural choice. You don’t need to pay any licensing fees to Microsoft at all with that usage. I don’t realistically expect any pure Linux shops to introduce a Microsoft environment, but Linux-on-Hyper-V is a fantastic solution in a mixed-platform environment. And with that, let’s get back onto the list.

Management Operating System Best Practices for Performance

  1. Prefer Hyper-V Server first, Windows Server Core second
  2. Do not install any software, feature, or role in the management operating system that does not directly aid the virtual machines or the management operating system. Hyper-V prioritizes applications in the management operating system over virtual machines. That’s because it trusts you; if you are running something in the management OS, it assumes that you really need it.
  3. Do not log on to the management operating system. Install the management tools on your workstation and manipulate Hyper-V remotely (see the example after this list).
  4. If you must log on to the management operating system, log off as soon as you’re done.
  5. Do not browse the Internet from the management operating system. Don’t browse from any server, really.
  6. Stay current on mainstream patches.
  7. Stay reasonably current on driver versions. I know that many of my peers like to install drivers almost immediately upon release, but I can’t join that camp. While it’s not entirely unheard of for a driver update to bring performance improvements, it’s not common. With all of the acquisitions and corporate consolidations going on in the hardware space — especially networking — I feel that the competitive drive to produce quality hardware and drivers has entered a period of decline. In simple terms, view new drivers as a potential risk to stability, performance, and security.
  8. Join your hosts to the domain. Systems consume less of your time if they answer to a central authority.
  9. Use antivirus and intrusion prevention. As long as you choose your anti-malware vendor well and the proper exclusions are in place, performance will not be negatively impacted. Compare that to the performance of a compromised system.
  10. Read through our article on host performance tuning.
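As promised in item 3, a minimal sketch of remote management (the host name is illustrative); run it from a workstation with the Hyper-V PowerShell module installed:

    # Check the state of every VM on a remote host without logging on to it
    Get-VM -ComputerName 'svhv01' | Sort-Object Name | Format-Table Name, State, Uptime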

Leverage Containers

In the “traditional” virtualization model, we stand up multiple virtual machines running individual operating system environments. As “virtual machine sprawl” sets in, we wind up with a great deal of duplication. In the past, we could justify that duplication because it kept environments separated. Furthermore, some Windows Server patches caused problems for some software but not others. In the modern era, containers and omnibus patch packages have upset that equation.

Instead of building virtual machine after virtual machine, you can build a few virtual machines and deploy containers within them. Strategies for this approach exceed the scope of this article, but you’re aiming to reduce the number of disparate complete operating system environments deployed. With careful planning, you can reduce density while maintaining a high degree of separation for your services. Fewer kernels are loaded, fewer context switches occur, less memory contains the same code bits, and fewer disk seeks retrieve essentially the same information from different locations.

  1. Prefer containers over virtual machines where possible.

CPU

You can’t do a great deal to tune CPU performance in Hyper-V. Overall, I count that among my list of “good things”; Microsoft did the hard work for you.

  1. Follow our article on host tuning; pay special attention to C States and the performance power settings.
  2. For Intel chips, leave hyperthreading on unless you have a defined reason to turn it off.
  3. Leave NUMA enabled in hardware. On your VMs’ property sheet, you’ll find a Use Hardware Topology button. Remember to use that any time that you adjust the number of vCPUs assigned to a virtual machine or move it to a host that has a different memory layout (physical core count and/or different memory distribution).
    [Screenshot: a virtual machine’s NUMA configuration settings]
  4. Decide whether or not to allow guests to span NUMA nodes (the global host NUMA Spanning setting). If you size your VMs to stay within a NUMA node and you are careful to not assign more guests than can fit solidly within each NUMA node, then you can increase individual VM performance. However, if the host has trouble locking VMs into nodes, then you can negatively impact overall memory performance. If you’re not sure, just leave NUMA at defaults and tinker later.
  5. For modern guests, I recommend that you use at least two virtual CPUs per virtual machine (see the example after this list). Use more in accordance with the virtual machine’s performance profile or vendor specifications. This is my own personal recommendation; I can visibly detect the response difference between a single vCPU guest and a dual vCPU guest.
  6. For legacy Windows guests (Windows XP/Windows Server 2003 and earlier), use 1 vCPU. More will likely hurt performance more than help.
  7. Do not grant more than 2 vCPU to a virtual machine without just cause. Hyper-V will do a better job reducing context switches and managing memory access if it doesn’t need to try to do too much core juggling. I’d make exceptions for very low-density hosts where 2 vCPU per guest might leave unused cores. At the other extreme, if you’re assigning 24 cores to every VM just because you can, then you will hurt performance.
  8. If you are preventing VMs from spanning NUMA nodes, do not assign more vCPU to a VM than you have matching physical cores in a NUMA node (usually means the number of cores per physical socket, but check with your hardware manufacturer).
  9. Use Hyper-V’s priority, weight, and reservation settings with great care. CPU bottlenecks are highly uncommon; look elsewhere first. A poor reservation will cause more problems than it solves.
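As promised in item 5, a minimal sketch of the vCPU recommendation (the VM name is illustrative); the virtual machine must be off when you change the count:

    # Give the guest two virtual CPUs
    Set-VMProcessor -VMName 'svtest' -Count 2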

Memory

I’ve long believed that every person that wants to be a systems administrator should be forced to become conversant in x86 assembly language, or at least C. I can usually spot people that have no familiarity with programming in such low-level languages because they almost invariably carry a bizarre mental picture of how computer memory works. Fortunately, modern memory is very, very, very fast. Even better, the programmers of modern operating system memory managers have gotten very good at their craft. Trying to tune memory as a systems administrator rarely pays dividends. However, we can establish some best practices for memory in Hyper-V.

  1. Follow our article on host tuning. Most importantly, if you have multiple CPUs, install your memory such that it uses multi-channel and provides an even amount of memory to each NUMA node.
  2. Be mindful of operating system driver quality. Windows drivers differ from applications in that they can permanently remove memory from the available pool. If they do not properly manage that memory, then you’re headed for some serious problems.
  3. Do not make your CSV cache too large.
  4. For virtual machines that will perform high quantities of memory operations, avoid dynamic memory. Dynamic memory disables NUMA (out of necessity). How do you know what constitutes a “high volume”? Without performance monitoring, you don’t.
  5. Set your fixed memory VMs to a higher priority and a shorter startup delay than your Dynamic Memory VMs. This ensures that they will start first, allowing Hyper-V to plot an optimal NUMA layout and reduce memory fragmentation. It doesn’t help a lot in a cluster, unfortunately. However, even in the best case, this technique won’t yield many benefits.
  6. Do not use more memory for a virtual machine than you can prove that it needs. Especially try to avoid using more memory than will fit in a single NUMA node.
  7. Use Dynamic Memory for virtual machines that do not require the absolute fastest memory performance.
  8. For Dynamic Memory virtual machines, pay the most attention to the startup value. It sets the tone for how the virtual machine will be treated during runtime. For virtual machines running full GUI Windows Server, I tend to use a startup of either 1 GB or 2 GB, depending on the version and what else is installed.
  9. For Dynamic Memory VMs, set the minimum to the operating system vendor’s stated minimum (512 MB for Windows Server). If the VM hosts a critical application, add to the minimum to ensure that it doesn’t get choked out.
  10. For Dynamic Memory VMs, set the maximum to a reasonable amount. You’ll generally discover that amount through trial and error and performance monitoring. Do not set it to an arbitrarily high number. Remember that, even on 2012 R2, you can raise the maximum at any time. (A configuration example follows this list.)
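To tie items 8 through 10 together, a minimal sketch for a general-purpose Dynamic Memory guest; the VM name and sizes are illustrative, not prescriptive:

    # 2 GB startup, vendor-stated minimum, reasonable maximum
    Set-VMMemory -VMName 'svtest' -DynamicMemoryEnabled $true `
        -StartupBytes 2GB -MinimumBytes 512MB -MaximumBytes 8GB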

Check the CPU section for NUMA guidance.

Networking

In the time that I’ve been helping people with Hyper-V, I don’t believe that I’ve seen anyone waste more time worrying about anything that’s less of an issue than networking. People will read whitepapers and forums and blog articles and novels and work all weekend to draw up intricately designed networking layouts that need eight pages of documentation. But, they won’t spend fifteen minutes setting up a network utilization monitor. I occasionally catch grief for using MRTG since it’s old and there are shinier, bigger, bolder tools, but MRTG is easy and quick to set up. You should know how much traffic your network pushes. That knowledge can guide you better than any abstract knowledge or feature list.

That said, we do have many best practices for networking performance in Hyper-V.

  1. Follow our article on host tuning. Especially pay attention to VMQ on gigabit and separation of storage traffic.
  2. If you need your network to go faster, use faster adapters and switches. A big team of gigabit won’t keep up with a single 10 gigabit port.
  3. Use a single virtual switch per host. Multiple virtual switches add processing overhead. Usually, you can get a single switch to do whatever you wanted multiple switches to do.
  4. Prefer a single large team over multiple small teams. This practice can also help you to avoid needless virtual switches.
  5. For gigabit, anything over 4 physical ports probably won’t yield meaningful returns. I would use 6 at the outside. If you’re using iSCSI or SMB, then two more physical adapters just for that would be acceptable.
  6. For 10GbE, anything over 2 physical ports probably won’t yield meaningful returns.
  7. If you have 2 10GbE and a bunch of gigabit ports in the same host, just ignore the gigabit. Maybe use it for iSCSI or SMB, if it’s adequate for your storage platform.
  8. Make certain that you understand how the Hyper-V virtual switch functions. Most important:
    • You cannot “see” the virtual switch in the management OS except with Hyper-V specific tools. It has no IP address and no presence in the Network and Sharing Center applet.
    • Anything that appears in Network and Sharing Center that you think belongs to the virtual switch is actually a virtual network adapter.
    • Layer 3 (IP) information in the host has no bearing on guests — unless you create an IP collision
  9. Do not create a virtual network adapter in the management operating system for the virtual machines. I did that before I understood the Hyper-V virtual switch, and I have encountered lots of other people that have done it. The virtual machines will use the virtual switch directly.
  10. Do not multi-home the host unless you know exactly what you are doing. Valid reasons to multi-home:
    • iSCSI/SMB adapters
    • Separate adapters for cluster roles. e.g. “Management”, “Live Migration”, and “Cluster Communications”
  11. If you multi-home the host, give only one adapter a default gateway. If other adapters must reach remote subnets, add static routes with the old route command or the new New-NetRoute cmdlet (see the example after this list).
  12. Do not try to use internal or private virtual switches for performance. The external virtual switch is equally fast. Internal and private switches are for isolation only.
  13. If all of your hardware supports it, enable jumbo frames. Ensure that you perform validation testing (i.e.: ping storage-ip -f -l 8000)
  14. Pay attention to IP addressing. If traffic needs to locate an external router to reach another virtual adapter on the same host, then traffic will traverse the physical network.
  15. Use networking QoS if you have identified a problem.
    • Use datacenter bridging, if your hardware supports it.
    • Prefer the Weight QoS mode for the Hyper-V switch, especially when teaming.
    • To minimize the negative side effects of QoS, rely on limiting the maximums of misbehaving or non-critical VMs over trying to guarantee minimums for vital VMs.
  16. If you have SR-IOV-capable physical NICs, SR-IOV provides the best performance. However, you can’t use the traditional Windows team for the physical NICs. Also, you can’t use VMQ and SR-IOV at the same time.
  17. Switch-embedded teaming (2016) allows you to use SR-IOV. Standard teaming does not.
  18. If using VMQ, configure the processor sets correctly.
  19. When teaming, prefer Switch Independent mode with the Dynamic load balancing algorithm. I have done some performance testing on the types (near the end of the linked article). However, a reader commented on another article that the Dynamic/Switch Independent combination can cause some problems for third-party load balancers (see comments section).
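As promised in item 11, a minimal sketch of a static route for a non-gateway adapter; the interface alias and addresses are illustrative:

    # Reach a remote subnet without giving this adapter a default gateway
    New-NetRoute -InterfaceAlias 'vEthernet (LiveMigration)' `
        -DestinationPrefix '192.168.50.0/24' -NextHop '192.168.10.1'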

Storage

When you need to make real differences in Hyper-V’s performance, focus on storage. Storage is slow. The best way to make storage not be slow is to spend money. But, we have other ways.

  1. Follow our article on host tuning. Especially pay attention to:
    • Do not break up internal drive bays between Hyper-V and the guests. Use one big array.
    • Do not tune the Hyper-V partition for speed. After it boots, Hyper-V averages zero IOPS for itself. As a prime example, don’t put Hyper-V on SSD and the VMs on spinning disks. Do the opposite.
    • The best way to get more storage speed is to use faster disks and bigger arrays. Almost everything else will only yield tiny differences.
  2. For VHD (not VHDX), use fixed disks for maximum performance. Dynamically-expanding VHD is marginally, but measurably, slower.
  3. For VHDX, use dynamically-expanding disks for everything except high-utilization databases (see the example after this list). I receive many arguments on this, but I’ve done the performance tests and have years of real-world experience. You can trust that (and run the tests yourself), or you can trust theoretical whitepapers from people that make their living by overselling disk space but have perpetually misplaced their copy of diskspd.
  4. Avoid using shared VHDX (2012 R2) or VHDS (2016). Performance still isn’t there. Give this technology another maturation cycle or two and look at it again.
  5. Where possible, do not use multiple data partitions in a single VHD/X.
  6. When using Cluster Shared Volumes, try to use at least as many CSVs as you have nodes. Starting with 2012 R2, CSV ownership will be distributed evenly, theoretically improving overall access.
  7. You can theoretically improve storage performance by dividing virtual machines across separate storage locations. If you need to make your arrays span fewer disks in order to divide your VMs’ storage, you will have a net loss in performance. If you are creating multiple LUNs or partitions across the same disks to divide up VMs, you will have a net loss in performance.
  8. For RDS virtual machine-based VDI, use hardware-based or Windows’ Hyper-V-mode deduplication on the storage system. The read hits, especially with caching, yield positive performance benefits.
  9. The jury is still out on using host-level deduplication for Windows Server guests, but it is supported with 2016. I personally will be trying to place Server OS disks on SMB storage deduplicated in Hyper-V mode.
  10. The slowest component in a storage system is the disk(s); don’t spend a lot of time worrying about controllers beyond enabling caching.
  11. RAID-0 is the fastest RAID type, but provides no redundancy.
  12. RAID-10 is generally the fastest RAID type that provides redundancy.
  13. For Storage Spaces, three-way mirror is fastest (by a lot).
  14. For remote storage, prefer MPIO or SMB multichannel over multiple unteamed adapters. Avoid placing this traffic on teamed adapters.
  15. I’ve read some scattered notes that say that you should format with 64 kilobyte allocation units. I have never done this, mostly because I don’t think about it until it’s too late. If the default size hurts anything, I can’t tell. Someday, I’ll remember to try it and will update this article after I’ve gotten some performance traces. If you’ll be hosting a lot of SQL VMs and will be formatting their VHDX with 64kb AUs, then you might get more benefit.
  16. I still don’t think that ReFS is quite mature enough to replace NTFS for Hyper-V. For performance, I definitely stick with NTFS.
  17. Don’t do full defragmentation. It doesn’t help. The minimal defragmentation that Windows automatically performs is all that you need. If you have some crummy application that makes this statement false, then stop using that application or exile it to its own physical server. Defragmentation’s primary purpose is to wear down your hard drives so that you have to buy more hard drives sooner than necessary, which is why employees of hardware vendors recommend it all the time. If you have a personal neurosis that causes you pain when a disk becomes “too” fragmented, use Storage Live Migration to clear and then re-populate partitions/LUNs. It’s wasted time that you’ll never get back, but at least it’s faster. Note: All retorts must include verifiable and reproducible performance traces, or I’m just going to delete them.
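As promised in item 3, a minimal sketch of creating a dynamically-expanding VHDX; the path and size are illustrative:

    # Create a 60 GB dynamically-expanding VHDX (the VHDX default, stated for clarity)
    New-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx' -SizeBytes 60GB -Dynamic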

Clustering

For real performance, don’t cluster virtual machines. Use fast internal or direct-attached SSDs. Cluster for redundancy, not performance. Use application-level redundancy techniques instead of relying on Hyper-V clustering.

In the modern cloud era, though, most software doesn’t have its own redundancy and host clustering is nearly a requirement. Follow these best practices:

  1. Validate your cluster. You may not need to fix every single warning, but be aware of them.
  2. Follow our article on host tuning. Especially pay attention to the bits on caching storage. It includes a link to enable CSV caching.
  3. Remember your initial density target. Add as many nodes as necessary to maintain that along with sufficient extra nodes for failure protection.
  4. Use the same hardware in each node. You can mix hardware, but CPU compatibility mode and mismatched NUMA nodes will have at least some impact on performance.
  5. For Hyper-V, every cluster node should use a minimum of two separate IP endpoints. Each IP must exist in a separate subnet. This practice allows the cluster to establish multiple simultaneous network streams for internode traffic.
    • One of the addresses must be designated as a “management” IP, meaning that it must have a valid default gateway and register in DNS. Inbound connections (such as your own RDP and PowerShell Remoting) will use that IP.
    • None of the non-management IPs should have a default gateway or register in DNS.
    • One alternative IP endpoint should be preferred for Live Migration. Cascade Live Migration preference order through the others, ending with the management IP. You can configure this setting most easily in Failover Cluster Manager by right-clicking on the Networks node.
    • Further IP endpoints can be used to provide additional pathways for cluster communications. Cluster communications include the heartbeat, cluster status and update messages, and Cluster Shared Volume information and Redirected Access traffic.
    • You can set any adapter to be excluded from cluster communications but included in Live Migration in order to enforce segregation. Doing so generally does not improve performance, but may be desirable in some cases.
    • You can use physical or virtual network adapters to host cluster IPs.
    • The IP for each cluster adapter must exist in a unique subnet on that host.
    • Each cluster node must contain an IP address in the same subnet as the IPs on other nodes. If a node does not contain an IP in a subnet that exists on other nodes, then that network will be considered “partitioned” and the node(s) without a member IP will be excluded from that network.
    • If the host will connect to storage via iSCSI, segregate iSCSI traffic onto its own IP(s). Exclude it/them from cluster communications and Live Migration. Because they don’t participate in cluster communications, it is not absolutely necessary that they be placed into separate subnets. However, doing so will provide some protection from network storms.
  6. If you do not have RDMA-capable physical adapters, Compression usually provides the best Live Migration performance.
  7. If you do have RDMA-capable physical adapters, SMB usually provides the best Live Migration performance (see the example after this list).
  8. I don’t recommend spending time tinkering with the metric to shape CSV traffic anymore. It utilizes SMB, so the built-in SMB multi-channel technology can sort things out.
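As promised in item 7, a minimal sketch of setting a host’s Live Migration performance option; choose Compression instead on non-RDMA hardware:

    # Use SMB (and therefore SMB Direct/RDMA, where available) for Live Migration transfers
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB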

Virtual Machines

The preceding guidance obliquely covers several virtual machine configuration points (check the CPU and the memory sections). We have a few more:

  1. Don’t use Shielded VMs or BitLocker. The encryption and VMWP hardening incur overhead that will hurt performance. The hit is minimal — but this article is about performance.
  2. If you have 1) VMs with very high inbound networking needs, 2) physical NICs >= 10GbE, 3) VMQ enabled, 4) spare CPU cycles, then enable RSS within the guest operating systems. Do not enable RSS in the guest OS unless all of the preceding are true.
  3. Do not use the legacy network adapter in Generation 1 VMs any more than absolutely necessary.
  4. Utilize checkpoints rarely and briefly. Know the difference between standard and “production” checkpoints.
  5. Use time synchronization appropriately (see the example after this list). Meaning, virtual domain controllers should not have the Hyper-V time synchronization service enabled, but all other VMs should (generally speaking). The hosts should pull their time from the domain hierarchy. If possible, the primary domain controller should be pulling from a secured time source.
  6. Keep Hyper-V guest services up-to-date. Supported Linux systems can be updated via kernel upgrades/updates from their distribution repositories. Windows 8.1+ and Windows Server 2012 R2+ will update from Windows Update.
  7. Don’t do full defragmentation in the guests, either. Seriously. We’re administering multi-spindle server equipment here, not displaying a progress bar to someone with a 5400-RPM laptop drive so that they feel like they’re accomplishing something.
  8. If the virtual machine’s primary purpose is to run an application that has its own replication technology, don’t use Hyper-V Replica. Examples: Active Directory and Microsoft SQL Server. Such applications will replicate themselves far more efficiently than Hyper-V Replica.
  9. If you’re using Hyper-V Replica, consider moving the VMs’ page files to their own virtual disk and excluding it from the replica job. If you have a small page file that doesn’t churn much, that might cost you more time and effort than you’ll recoup.
  10. If you’re using Hyper-V Replica, enable compression if you have spare CPU but leave it disabled if you have spare network. If you’re not sure, use compression.
  11. If you are shipping your Hyper-V Replica traffic across an encrypted VPN or keeping its traffic within secure networks, use Kerberos. SSL en/decryption requires CPU. Also, the asymmetrical nature of SSL encryption causes the encrypted data to be much larger than its decrypted source.
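As promised in item 5, a minimal sketch of disabling time synchronization for a virtualized domain controller; the VM name is illustrative:

    # Virtual domain controllers should keep their own time
    Get-VMIntegrationService -VMName 'svdc01' -Name 'Time Synchronization' | Disable-VMIntegrationService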

Monitoring

You must monitor your systems. Monitoring is not, and has never been, an optional activity.

  1. Be aware of Hyper-V-specific counters (see the example after this list). Many people try to use Task Manager in the management operating system to gauge guest CPU usage, but it just doesn’t work. The management operating system is a special-case virtual machine, which means that it is using virtual CPUs. Its Task Manager cannot see what the guests are doing.
  2. Performance Monitor has the most power of any built-in tool, but it’s tough to use. Look at something like the Performance Analysis of Logs (PAL) tool, which understands Hyper-V.
  3. In addition to performance monitoring, employ state monitoring. With that, you no longer have to worry (as much) about surprise events like disk space or memory filling up. I like Nagios, as regular readers already know, but you can select from many packages.
  4. Take periodic performance baselines and compare them to earlier baselines.
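As promised in item 1, a minimal sketch of reading a Hyper-V-specific counter that reflects true host-wide CPU usage:

    # Total physical CPU consumption across the host, guests included
    Get-Counter -Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'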

 

If you’re able to address a fair proportion of the points on this list, I’m sure you’ll see a boost in Hyper-V performance. Don’t forget that this list is not exhaustive; I’ll be adding to it periodically to keep it as comprehensive as possible. If you think something is missing, let me know in the comments below and you may see the number 95 increase!

Get Involved on Twitter: #How2HyperV

Get involved on Twitter, where we will be regularly posting excerpts from this article and engaging the IT community to help each other improve our use of Hyper-V. Got your own Hyper-V tips or tricks for boosting performance? Use the hashtag #How2HyperV when you tweet and share your knowledge with the world!

How to Perform Hyper-V Storage Migration

New servers? New SAN? Trying out hyper-convergence? Upgrading to Hyper-V 2016? Any number of conditions might prompt you to move your Hyper-V virtual machine’s storage to another location. Let’s look at the technologies that enable such moves.

An Overview of Hyper-V Migration Options

Hyper-V offers numerous migration options. Each has its own distinctive features. Unfortunately, we in the community often muck things up by using incorrect and confusing terminology. So, let’s briefly walk through the migration types that Hyper-V offers:

  • Quick migration: Cluster-based virtual machine migration that involves placing a virtual machine into a saved state, transferring ownership to another node in the same cluster, and resuming the virtual machine. A quick migration does not involve moving anything that most of us consider storage.
  • Live migration: Cluster-based virtual machine migration that involves transferring the active state of a running virtual machine to another node in the same cluster. A Live Migration does not involve moving anything that most of us consider storage.
  • Storage migration: Any technique that utilizes the Hyper-V management service to relocate any file-based component that belongs to a virtual machine. This article focuses on this migration type, so I won’t expand any of those thoughts in this list.
  • Shared Nothing Live Migration: Hyper-V migration technique between two hosts that does not involve clustering. It may or may not include a storage migration. The virtual machine might or might not be running. However, this migration type always includes ownership transfer from one host to another.

It Isn’t Called Storage Live Migration

I have always called this operation “Storage Live Migration”. I know lots of other authors call it “Storage Live Migration”. But, Microsoft does not call it “Storage Live Migration”. They just call it “Storage Migration”. The closest thing that I can find to “Storage Live Migration” in anything from Microsoft is a 2012 TechEd recording by Benjamin Armstrong. The title of that presentation includes the phrase “Live Storage Migration”, but I can’t determine if the “Live” just modifies “Storage Migration” or if Ben uses it as part of the technology name. I suppose I could listen to the entire hour and a half presentation, but I’m lazy. I’m sure that it’s a great presentation, if anyone wants to listen and report back.

Anyway, does it matter? I don’t really think so. I’m certainly not going to correct anyone that uses that phrase. However, the virtual machine does not necessarily need to be live. We use the same tools and commands to move a virtual machine’s storage whether it’s online or offline. So, “Storage Migration” will always be a correct term. “Storage Live Migration”, not so much. However, we use the term “Shared Nothing Live Migration” for virtual machines that are turned off, so we can’t claim any consistency.

What Can Be Moved with Hyper-V Storage Migration?

When we talk about virtual machine storage, most people think of the places where the guest operating system stores its data. That certainly comprises the physical bulk of virtual machine storage. However, it’s also only one bullet point on a list of multiple components that form a virtual machine.

Independently, you can move any of these virtual machine items:

  • The virtual machine’s core files (configuration in .xml or .vmcx format, .bin, .vsv, etc.)
  • The virtual machine’s checkpoints (essentially the same items as the preceding bullet point, but for the checkpoint(s) instead of the active virtual machine)
  • The virtual machine’s second-level paging file location. I have not tested to see if it will move a VM with active second-level paging files, but I have no reason to believe that it wouldn’t.
  • Virtual hard disks attached to a virtual machine
  • ISO images attached to a virtual machine

We most commonly move all of these things together. Hyper-V doesn’t require that, though. Also, we can move all of these things in the same operation but distribute them to different destinations.

What Can’t Be Moved with Hyper-V Storage Migration?

In terms of storage, we can move everything related to a virtual machine. But, we can’t move the VM’s active, running state with Storage Migration. Storage Migration is commonly partnered with a Live Migration in the operation that we call “Shared Nothing Live Migration”. To avoid getting bogged down in implementation details that are more academic than practical, just understand one thing: when you pick the option to move the virtual machine’s storage, you are not changing which Hyper-V host owns and runs the virtual machine.

More importantly, you can’t use any Microsoft tool-based technique to separate a differencing disk from its parent. So, if you have an AVHDX (differencing disk created by the checkpointing mechanism) and you want to move it away from its source VHDX, Storage Migration will not do it. If you instruct Storage Migration to move the AVHDX, the entire disk chain goes along for the ride.

Uses for Hyper-V Storage Migration

Out of all the migration types, storage migration has the most applications and special conditions. For instance, Storage Migration is the only Hyper-V migration type that does not always require domain membership. Granted, the one exception to the domain membership rule won’t be very satisfying for people that insist on leaving their Hyper-V hosts in insecure workgroup mode, but I’m not here to please those people. I’m here to talk about the nuances of Storage Migration.

Local Relocation

Let’s start with the simplest usage: relocation of local VM storage. Some situations in this category:

  • You left VMs in the default “C:\ProgramData\Microsoft\Windows\Hyper-V” and/or “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks” locations and you don’t like it
  • You added new internal storage as a separate volume and want to re-distribute your VMs
  • You have storage speed tiers but no active management layer
  • You don’t like the way your VMs’ files are laid out
  • You want to defragment VM storage space. It’s a waste of time, but it works.

Network Relocation

With so many ways to do network storage, it’s nearly a given that we’ll all need to move a VHDX across ours at some point. Some situations:

  • You’re migrating from local storage to network storage
  • You’re replacing a SAN or NAS and need to relocate your VMs
  • You’ve expanded your network storage and want to redistribute your VMs

Most of the reasons listed under “Local Relocation” can also apply to network relocation.

Cluster Relocation

We can’t always build our clusters perfectly from the beginning. For the most part, a cluster’s relocation needs list will look like the local and network lists above. A few others:

  • Your cluster has new Cluster Shared Volumes that you want to expand into
  • Existing Cluster Shared Volumes have a data distribution that does not balance well. Remember that data access from a CSV owner node is slightly faster than from a non-owner node

The reasons matter less than the tools when you’re talking about clusters. You can’t use the same tools and techniques to move virtual machines that are protected by Failover Clustering under Hyper-V as you use for non-clustered VMs.

Turning the VM Off Makes a Difference for Storage Migration

You can perform a very simple experiment: perform a Storage Migration for a virtual machine while it’s on, then turn it off and migrate it back. The virtual machine will move much more quickly while it’s off. This behavior can be explained in one word: synchronization.

When the virtual machine is off, a Storage Migration is essentially a monitored file copy. The ability of the constituent parts to move bits from source to destination sets the pace of the move. When the virtual machine is on, all of the rules change. The migration is subjected to these constraints:

  • The virtual machine’s operating system must remain responsive
  • Writes must be properly captured
  • Reads must occur from the most appropriate source

Even if the guest operating system does not experience much activity during the move, that condition cannot be taken as a constant. In other words, Hyper-V needs to be ready for it to start demanding lots of I/O at any time.

So, the Storage Migration of a running virtual machine will always take longer than the Storage Migration of a virtual machine in an off or saved state. You can choose the convenience of an online migration or the speed of an offline migration.

Note: You can usually change a virtual machine’s power state during a Storage Migration. It’s less likely to work if you are moving across hosts.

How to Perform Hyper-V Storage Migration with PowerShell

The nice thing about using PowerShell for Storage Migration: it works for all Storage Migration types. The bad thing about using PowerShell for Storage Migration: it can be difficult to get all of the pieces right.

The primary cmdlet to use is Move-VMStorage. If you will be performing a Shared Nothing Live Migration, you can also use Move-VM. The parts of Move-VM that pertain to storage match Move-VMStorage. Move-VM has uses, requirements, and limitations that don’t pertain to the topic of this article, so I won’t cover Move-VM here.

A Basic Storage Migration in PowerShell

Let’s start with an easy one. Use this when you just want all of a VM’s files to be in one place:
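    # Move every component of testvm under C:\LocalVMs
    Move-VMStorage -VMName 'testvm' -DestinationStoragePath 'C:\LocalVMs'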

This will move the virtual machine named testvm so that all of its components reside under the C:\LocalVMs folder. That means:

  • The configuration files will be placed in C:\LocalVMs\Virtual Machines
  • The checkpoint files will be placed in C:\LocalVMs\Snapshots
  • The VHDXs will be placed in C:\LocalVMs\Virtual Hard Disks
  • Depending on your version, an UndoLog Configuration folder will be created if it doesn’t already exist. The folder is meant to contain Hyper-V Replica files. It may be created even for virtual machines that aren’t being replicated.

Complex Storage Migrations in PowerShell

For more complicated move scenarios, you won’t use the DestinationStoragePath parameter. You’ll use one or more of the individual component parameters. Choose from the following:

  • VirtualMachinePath: Where to place the VM’s configuration files.
  • SnapshotFilePath: Where to place the VM’s checkpoint files (again, NOT the AVHDXs!)
  • SmartPagingFilePath: Where to place the VM’s smart paging files
  • Vhds: An array of hash tables that indicate where to place individual VHD/X files.

Some notes on these items:

  • You are not required to use all of these parameters. If you do not specify a parameter, then its related component is left alone. Meaning, it doesn’t get moved at all.
  • If you’re trying to use this to get away from those auto-created Virtual Machines and Snapshots folders, it doesn’t work. They’ll always be created as sub-folders of whatever you type in.
  • It doesn’t auto-create a Virtual Hard Disks folder.
  • If you were curious whether or not you needed to specify those auto-created subfolders, the answer is: no. Move-VMStorage will always create them for you (unless they already exist).
  • The VHDs hash table is the hardest part of this whole thing. I’m usually a PowerShell-first kind of guy, but even I tend to go to the GUI for Storage Migrations.

The following will move all components except VHDs, which I’ll tackle in the next section:
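    # Move the configuration, checkpoint, and smart paging components;
    # the virtual hard disks stay where they are
    Move-VMStorage -VMName 'testvm' -VirtualMachinePath 'C:\LocalVMs' `
        -SnapshotFilePath 'C:\LocalVMs' -SmartPagingFilePath 'C:\LocalVMs'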

Move-VMStorage’s Array of Hash Tables for VHDs

The three …FilePath parameters are easy: just specify the path. The Vhds parameter is tougher. It is one or more hash tables inside an array.

First, the hash tables. A hash table is a custom object that looks like an array, but each entry has a unique name. The hash tables that Vhds expects have a SourceFilePath entry and a DestinationFilePath entry. Each must be fully-qualified for a file. A hash table is contained like this: @{ }. The name of an entry and its value are joined with an =. Entries are separated by a semicolon (;). So, if you want to move the VHDX named svtest.vhdx from \\svstore\VMs to C:\LocalVMs\testvm, you’d use this hash table:
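    @{ 'SourceFilePath' = '\\svstore\VMs\svtest.vhdx'; 'DestinationFilePath' = 'C:\LocalVMs\testvm\svtest.vhdx' }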

Reading that, you might ask (quite logically): “Can I change the name of the VHDX file when I move it?” The answer: No, you cannot. So, why then do you need to enter the full name of the destination file? I don’t know!

Next, the arrays. An array is bounded by @( ). Its entries are separated by commas. So, to move two VHDXs, you would do something like this:
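    # The second file's name is illustrative
    Move-VMStorage -VMName 'svtest' -Vhds @(
        @{ 'SourceFilePath' = '\\svstore\VMs\svtest.vhdx';
            'DestinationFilePath' = 'C:\LocalVMs\testvm\svtest.vhdx' },
        @{ 'SourceFilePath' = '\\svstore\VMs\svtest_2.vhdx';
            'DestinationFilePath' = 'C:\LocalVMs\testvm\svtest_2.vhdx' }
    )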

I broke that onto multiple lines for legibility. You can enter it all on one line. Note where I used parentheses and where I used curly braces.

Tip: To move a single VHDX file, you don’t need to do the entire array notation. You can use the first example with Vhds.

A Practical Move-VMStorage Example with Vhds

If you’re looking at all that and wondering why you’d ever use PowerShell for such a thing, I have the perfect answer: scripting. Don’t do this by hand. Use it to move lots of VMs in one fell swoop. If you want to see a plain example of the Vhds parameter in action, the Get-Help examples show one. I’ve got a more practical script in mind.

The following would move all VMs on the host. All of their config, checkpoint, and second-level paging files will be placed on a share named “\\vmstore\slowstorage”. All of their VHDXs will be placed on a share named “\\vmstore\faststorage”. We will have PowerShell deal with the source paths and file names.
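    # A sketch: config, checkpoint, and smart paging files go to slow storage;
    # every virtual hard disk goes to fast storage
    foreach ($VM in Get-VM)
    {
        $MoveParameters = @{
            VM = $VM
            VirtualMachinePath = '\\vmstore\slowstorage'
            SnapshotFilePath = '\\vmstore\slowstorage'
            SmartPagingFilePath = '\\vmstore\slowstorage'
        }
        $Vhds = @()
        foreach ($Disk in $VM.HardDrives)
        {
            $Vhds += @{
                'SourceFilePath' = $Disk.Path
                'DestinationFilePath' = Join-Path -Path '\\vmstore\faststorage' -ChildPath (Split-Path -Path $Disk.Path -Leaf)
            }
        }
        if ($Vhds.Count -gt 0)
        {
            # Only add the Vhds parameter for VMs that actually have virtual disks
            $MoveParameters.Add('Vhds', $Vhds)
        }
        Move-VMStorage @MoveParameters
    }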

I used splatting for the parameters for two reasons: first, for legibility; second, to handle VMs that don’t have any virtual hard disks.

How to Perform Hyper-V Storage Migration with Hyper-V Manager

Hyper-V Manager can only be used for non-clustered virtual machines. It utilizes a wizard format. To use it to move a virtual machine’s storage:

  1. Right-click on the virtual machine and click Move.
  2. Click Next on the introductory page.
  3. Change the selection to Move the virtual machine’s storage (the same storage options would be available if you moved the VM’s ownership, but that’s not part of this article)
    [Screenshot: Move Wizard, choosing the move type]
  4. Choose how to perform the move. You can move everything to the same location, you can move everything to different locations, or you can move only the virtual hard disks.
    [Screenshot: Move Wizard, choosing how to perform the move]
  5. What screens you see next will depend on what you chose. We’ll cover each branch.

If you opt to move everything to one location, the wizard will show you this simple page:

[Screenshot: Move Wizard, choosing a single destination location]

If you choose the option to Move the virtual machine’s data to different locations, you will first see this screen:

[Screenshot: Move Wizard, selecting which items to move]

For every item that you check, you will be given a separate screen where you indicate the desired location for that item. The wizard uses the same screen for these items as it does for the hard-disks only option. I’ll show its screenshot next.

If you choose Move only the virtual machine’s virtual hard disks, then you will be given a sequence of screens where you instruct it where to move the files. These are the same screens used for the individual components from the previous selection:

[Screenshot: Move Wizard, choosing a destination for a virtual hard disk]

After you make your selections, you’ll be shown a summary screen where you can click Finish to perform the move:

[Screenshot: Move Wizard, summary page]

How to Perform Hyper-V Storage Migration with Failover Cluster Manager

Failover Cluster Manager uses a slick single-screen interface to move storage for cluster virtual machines. To access it, simply right-click a virtual machine, hover over Move, and click Virtual Machine Storage. You’ll see the following screen:

[Screenshot: Failover Cluster Manager, Virtual Machine Storage dialog]

If you just want to move the whole thing to one of the displayed Cluster Shared Volumes, just drag and drop it down to that CSV in the Cluster Storage heading at the lower left. You can drag and drop individual items or the entire VM. The Destination Folder Path will be populated accordingly.

As you can see in mine, I have all of the components except the VHD on an SMB share. I want to move the VHD to be with the rest. To get a share to show up, click the Add Share button. You’ll get this dialog:

[Screenshot: the Add Share dialog]

The share will populate underneath the CSVs in the lower left. Now, I can drag and drop that file to the share. View the differences:

[Screenshot: Virtual Machine Storage dialog with the share added]

Once you have the dialog the way that you like it, click Start.

8 Most Important Announcements from Microsoft Ignite 2017

Last week saw us close the door on Microsoft Ignite 2017, and while the conference came and went in a blur, there was no lack of information or amazing reveals from Microsoft. While this conference serves as a great way to stay informed on all the new things that Microsoft is working on, I also find that it is a good way to get a sense of the company’s overall direction. With that in mind, I wanted to not only talk about some of my favorite reveals from the week but also discuss my take on Microsoft’s overall direction.

My take on the week from an Infrastructure Engineering Perspective

To put things simply: things are changing, and they’re changing in a big way. I’ve had this gut feeling stirring for some time that the way we work with VMs and virtualization was changing, and the week of Ignite was a major confirmation of that. That’s not to mention the continued shift away from the on-premise model we’re used to and toward the new cloud model (public, private, and hybrid).

It’s very clear that Microsoft is adopting what I would call the “Azure-Everywhere” approach. Sure, you’ve always been able to consume Azure using what Microsoft has publicly available, but things really changed when Azure Stack was put into the mix. Microsoft Azure Stack (MAS) is officially on the market now, and the idea of having MAS in datacenters around the world is an interesting prospect. What I find so interesting about it is the fact that management of MAS onsite is identical to managing Azure. You use Azure Resource Manager and the same collection of tools to manage both. Pair that with the fact that Hyper-V is so abstracted and under-the-hood in MAS that you can’t even see it, and you’ve got a recipe for major day-to-day changes for infrastructure administrators.

Yes, we’ve still got Windows Server 2016 and the newly announced Honolulu management utility, but if I look out 5, or even 10, years, I’m not sure I see us working with Windows Server anymore in the way that we do today. I don’t think VM usage will be as prevalent then as it is today either. After last week, I firmly believe that containers will be the “new virtual machine”. I think VMs will stay around for legacy workloads and for workloads that require additional layers of isolation, but after seeing containers in action last week, I’m all in on that usage model.

We used to see VMs as this amazing cost-reducing technology, and it was for a long time. However, I saw containers do to VMs what VMs did to physical servers. I attended a session on moving workloads to a container-based model, and MetLife was on stage talking about moving some of their infrastructure to containers. In doing so, they achieved:

  • a 70% reduction in the number of VMs in the environment
  • a 67% reduction in needed CPU cores
  • a 66% reduction in overall cost of ownership

Those are amazing numbers that nobody can ignore. Given this level of success with containers, I see the industry moving to that deployment model from VMs over the next several years. As much as it pains me to say it, virtual machines are starting to look very “legacy”, and we all need to adjust our skill sets accordingly.

Big Reveals

As you know, Ignite is that time of year where Microsoft makes some fairly large announcements, and below I’ve compiled a list of some of my favorites. While this is by no means a comprehensive list, I feel these represent what our readers would find most interesting. Don’t agree? That’s fine! Just let me know what you think were the most important announcements in the comments. Let’s get started.

8. New Azure Exams and Certifications!

With new technologies come new things to learn, and as such there are 3 new exams on the market today for Azure technologies.

  • For Azure Stack Operators – Exam 537: Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack
  • For Azure Solutions Architects – Exam 539: Managing Linux Workloads on Azure
  • For Azure DevOps – Exam 538: Implementing Microsoft Azure DevOps Solutions

If you’re interested in pursuing any of these (which I would highly recommend), then you can get more information on them at this link.

7. SQL Server 2017 is now Available

Normally I wouldn’t make much of a fuss about SQL Server as I’m not much of a SQL guy myself, but Microsoft did something amazing with this release. SQL Server 2017 will run on Windows, Linux, and inside of Docker containers. Yes, you read correctly. SQL Server 2017 will run on Linux and inside of Docker containers, which opens up a whole new avenue for providing SQL workloads. Exciting times indeed!

6. Patch Management from the Azure Portal

Ever wanted to have WSUS available from the Azure portal? Now you have it. You can easily view and deploy patches for your Azure-based workloads directly from the Azure portal. This includes Linux VMs as well, which is great news as more and more admins are finding themselves managing Linux workloads these days!

5. PowerShell now Available in Azure CLI.

When Azure CLI was announced and released, many people were taken aback at the lack of PowerShell support. This was done for a number of reasons that I won’t get into in this article, but regardless, it has been added in now. It is now possible with Azure CLI to deploy a VM with a single PowerShell cmdlet and more. So, get those scripts ready!

4. Azure File Sync in Preview

I know many friends and colleagues that have been waiting for something like this. You can essentially view this as next-generation DFS (though it doesn’t use the same technology). It allows you to sync your on-premise file servers with an Azure Files account for distributed access to the stored information around the globe.

3. Quality of Life Improvements for Windows Containers

While there were no huge reveals in the container space, Windows Server 1709 was announced and contains a lot of improvements and optimizations for running containers on Windows Server. This includes things like smaller images and support for Linux containers running on Windows Server. I did an interview with Taylor Brown from the Containers team, which you can view below for more information.

2. Nested Virtualization in Azure for Production Workloads

Yes, I know, nested virtualization in Azure has been announced for some time. However, what I found different was Microsoft’s insistence that it could also be used for production workloads. During Scott Guthrie’s keynote, Corey Sanders actually demonstrated the use of the M-Series (Monster) VM in Azure being used to host production workloads with nested VMs. While not ideal in every scenario, obviously, this is simply another tool that we have at our disposal for added flexibility in our day-to-day operations.

If you’re interested, I actually interviewed Rick Claus from the Azure Compute team about this. That interview can be seen below.

1. Project Honolulu

This one is for the folks that are still strictly interested in only the on-prem stuff. Microsoft revealed and showed us the new Project Honolulu management utility for on-premise workloads. Honolulu takes the functionality of all the management tools and MMC snap-ins that we’ve been using for years and packages them up into a nice, easy-to-use web UI. It’s worth a look if you haven’t seen it yet. We even have a nice article on our blog about it if you’re interested in reading more!

Wrap-Up

As I mentioned, this was by no means a comprehensive list, but we’ll be talking about items from Ignite (from this list and some not mentioned) on our blogs for some time. So, be sure to keep an eye on our blog if you’re interested in more information.

Additionally, if you attended Microsoft Ignite, and you saw a feature or product you think is amazing that is not listed above, be sure to let us know in the comments section below!

Hyper-V 2016 Host Mode: GUI vs Core

Choice is a good thing, right? Well… usually. Sometimes, choice is just confusing. With most hypervisors, you get what you get. With Hyper-V, you can install in several different ways, and that’s just for the server hypervisor. In this article, we’ll balance the pros and cons of your options with the 2016 SKUs.

Server Deployment Options for Hyper-V

As of today, you can deploy Hyper-V in one of four packages.

Nano Server

When 2016 initially released, it brought a completely new install mode called “Nano”. Nano is little more than the Windows Server kernel with a tiny handful of interface bits attached. You then plug in the roles and features that you need to get to the server deployment that you want. I was not ever particularly fond of the idea of Hyper-V on Nano for several reasons, but none of them matter now. Nano Server is no longer supported as a Hyper-V host. It currently works, but that capability will be removed in the next iteration. Part of the fine print about Nano that no one reads includes the requirement that you keep within a few updates of current. So, you will be able to run Hyper-V on Nano for a while, but not forever.

If you currently use Nano for Hyper-V, I would start plotting a migration strategy now. If you are considering Nano for Hyper-V, stop.

Hyper-V Server

Hyper-V Server is the product name given to the free distribution vehicle for Hyper-V. You’ll commonly hear it referred to as “Hyper-V Core”, although that designation is both confusing and incorrect. You can download Hyper-V Server as a so-called “evaluation”, but it never expires.

A word of advice: Hyper-V Server includes a legally-binding license agreement. Violation of that licensing agreement subjects you to the same legal penalties that you would face for violating the license agreement of a paid operating system. Hyper-V Server’s license clearly dictates that it can only be used to host and maintain virtual machines. You cannot use it as a file server or a web server or anything else. Something that I need to make extremely clear: the license agreement does not provide special allowances for a test environment. I know of a couple of blog articles that guide you to doing things under the guise of “test environment”. That’s not OK. If it’s not legal in a production environment, it doesn’t magically become legal in a test environment.

Windows Server Core

When you boot to the Windows Server install media, the first listed option includes “Core” in the name. That’s not an accident; Microsoft wants you to use Core mode by default. Windows Server Core excludes the primary Windows graphical interface and explorer.exe. Some people erroneously believe that means that no graphical applications can run at all. Applications that depend on the Explorer rendering engine (such as MMC) will not function, but the base Windows Forms libraries and mechanisms exist.
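
To illustrate, here’s a minimal PowerShell sketch (safe to try in a lab) that pops a Windows Forms message box; it works on Server Core precisely because the Forms assemblies ship even without explorer.exe:

    # Windows Forms assemblies load on Server Core even though
    # explorer.exe is absent; this one-liner shows a message box.
    Add-Type -AssemblyName System.Windows.Forms
    [void][System.Windows.Forms.MessageBox]::Show('Windows Forms works without explorer.exe')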

Windows Server with GUI

I doubt that the GUI mode of Windows Server needs much explanation. You have the same basic graphical interface as Windows 10 with some modifications that make it more appropriate for a server environment. When you install from 2016 media, you will see this listed as (Desktop Experience).

The Pros and Cons of the Command-line and Graphical Modes for Hyper-V

I know that things would be easier if I would just tell you what to do. If I knew you and knew your environment, I might do that. I prefer giving you the tools and knowledge to make decisions like this on your own, though. So, we’ll complement our discussion with a pros and cons list of each option. After the lists, I’ll cover some additional guidelines and points to consider.

Hyper-V Server Pros and Cons

If you skipped the preamble, remember that “Hyper-V Server” refers to the completely free SKU that you can download at any time.

Pros of Hyper-V Server:

  • Never requires a licensing fee
  • Never requires activation
  • Smallest deployment
  • Smallest “surface area” for attacks
  • Least memory usage by the management operating system
  • Fewest patch needs
  • Includes all essential features for running Hyper-V (present, not necessarily enabled by default):
    • Hyper-V hypervisor
    • Hyper-V PowerShell interface
    • Cluster membership
    • Domain membership
    • Hyper-V Replica membership
    • Remote Desktop Virtual Host role for VDI deployments
    • RemoteFX (automatic with RDVH role)

Cons of Hyper-V Server:

  • Cannot provide Automatic Virtual Machine Activation
  • Cannot provide deduplication features
  • Impossible to enable the Windows Server GUI
  • Software manufacturers may refuse to support their software on it
  • Third-party support operations, such as independent consulting firms, may not have any experience with it
  • Switching to Windows Server requires a complete reinstall
  • Difficult to manage hardware

Hyper-V in Windows Server Core Pros and Cons

If you’ve seen the term “Hyper-V Core”, that probably means “Hyper-V Server”. This section covers the authentic Windows Server product installed in Core mode.

Pros of Windows Server Core for Hyper-V:

  • Microsoft recommends Windows Server Core for Hyper-V
  • Receives feature updates on the quickest schedule (see the bottom of the article linked in the preceding bullet)
  • Comparable deployment size to Hyper-V Server
  • Comparable surface area to Hyper-V Server
  • Comparable memory usage to Hyper-V Server
  • Comparable patch requirements to Hyper-V Server
  • Allows almost all roles and features of Windows Server
  • Can provide Automatic Activation for Windows Server in VMs (Datacenter Edition only)

Cons of Windows Server Core for Hyper-V:

  • Impossible to enable the Windows Server GUI
  • Must be licensed and activated
  • Upgrading to the next version requires paying for that version’s license, even if you will wait to deploy newer guests
  • Software manufacturers may refuse to support their software on it
  • Third-party support operations, such as independent consulting firms, may not have any experience with it
  • Difficult to manage hardware

Hyper-V in Windows Server GUI Pros and Cons

We saved what many consider the “default” option for last.

Pros of Windows Server with GUI for Hyper-V:

  • Familiar Windows GUI
  • More tools available, both native and third party
  • Widest support from software manufacturers and consultants
  • Easiest hardware management
  • Valid environment for all Windows Server roles, features, and software
  • Can provide Automatic Activation for Windows Server in VMs (Datacenter Edition only)
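
As a quick illustration of how Automatic Virtual Machine Activation works in practice: inside a guest running on an activated Datacenter host, you install the AVMA client key that Microsoft publishes for the guest’s edition, and the guest activates against the host with no KMS or MAK infrastructure. A sketch (the key below is a placeholder, not a real key):

    # Run inside the guest; substitute the documented AVMA client key
    # for the guest's edition. Activation then occurs via the host.
    slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX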

Cons of Windows Server with GUI for Hyper-V:

  • Familiarity breeds contempt
  • Slowest feature roll-out cycle (see the bottom of this article)
  • Largest attack surface, especially with explorer.exe
  • Largest deployment size
  • Largest memory usage
  • Largest patch requirements
  • Must be licensed and activated
  • Upgrading to the next version requires paying for that version’s license, even if you will wait to deploy newer guests

Side-by-Side Comparison of Server Modes for Hyper-V

Two items appear in every discussion of this topic: disk space and memory usage. I thought that it might be enlightening to see the real numbers. So, I built three virtual machines running Hyper-V in nested mode. The first contains Hyper-V Server, the second contains Windows Server Datacenter Edition in Core mode, and the third contains Windows Server Datacenter Edition in GUI mode. I have enabled Hyper-V in each of the Windows Server systems and included all management tools and subfeatures (Add-WindowsFeature -Name Hyper-V -IncludeAllSubFeature -IncludeManagementTools). All came from the latest MSDN ISOs. None are patched. None are on the network.
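
For reference, the feature install on the Windows Server systems amounts to a one-liner like this (a sketch; the -Restart switch assumes you can reboot immediately, and Add-WindowsFeature is an alias for the cmdlet shown):

    # Enable Hyper-V with every subfeature and all management tools,
    # then reboot to finish enabling the hypervisor.
    Install-WindowsFeature -Name Hyper-V -IncludeAllSubFeature -IncludeManagementTools -Restart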

Disk Usage Comparison of the Three Modes

I used the following PowerShell command to determine the used space: '{0:N0}' -f (Get-WmiObject -Class Win32_LogicalDisk | ? DeviceId -eq 'C:' | % {$_.Size - $_.FreeSpace}).
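
If you want to run the same measurement against several hosts at once, a loop like this works (the host names below are placeholders for your own):

    # Report used space on C: for a list of hosts; names are placeholders.
    $hostNames = 'hv-server', 'ws-core', 'ws-gui'
    foreach ($h in $hostNames) {
        $disk = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $h |
            Where-Object { $_.DeviceID -eq 'C:' }
        '{0}: {1:N0} bytes used' -f $h, ($disk.Size - $disk.FreeSpace)
    }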

Deployment Mode Used Disk Space (bytes)
Hyper-V Server 2016 6,044,270,592
Windows Server 2016 Datacenter Edition in Core mode 7,355,858,944
Windows Server 2016 Datacenter Edition in GUI mode 10,766,614,528

For shock value, the full GUI mode of Windows Server uses 78% more disk space than Hyper-V Server 2016 and 46% more than Core mode. That additional space amounts to less than 5 gigabytes. If 5 gigabytes will make or break your deployment, you’ve got other issues.

Memory Usage Comparison of the Three Modes

We’ll start with Task Manager while logged on:

[Screenshot: Task Manager memory usage on each of the three systems]

These show what we expect: Hyper-V Server uses the least amount of memory, Windows Server Core uses a bit more, and Windows Server with GUI uses a few ticks above both. However, I need to point out that these charts show a more dramatic difference than you should encounter in reality. Since I’m using nested VMs to host my sample systems, I only gave them 2 GB of memory apiece. The consumed memory difference between Hyper-V Server and Windows Server with GUI weighs in at a whopping 0.3 gigabytes. If that number means a lot to you in your production systems, then you’re going to have other problems.

But that’s not the whole story.

Those numbers were taken from Task Manager while logged on to the systems. Good administrators log off of servers as soon as possible. What happens, then, when we log off? To test that, I had to connect each VM to the network and join the domain. I then ran Get-WmiObject Win32_OperatingSystem | select FreePhysicalMemory with the -ComputerName parameter against each of the hosts. Check out the results:

Deployment Mode Free Memory (KB)
Hyper-V Server 2016 1,621,148
Windows Server 2016 Datacenter Edition in Core mode 1,643,060
Windows Server 2016 Datacenter Edition in GUI mode 1,558,744

Those differences aren’t so dramatic, are they? Windows Server Core even has a fair bit more free memory than Hyper-V Server… at that exact moment in time. If you don’t have much background in memory management, especially in terms of operating systems, then keep in mind that memory allocation and usage can seem very strange.

The takeaway: memory usage between all three modes is comparable when they are logged off.
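
If you’d like to collect the same numbers yourself, here’s a minimal sketch (host names are placeholders; note that FreePhysicalMemory reports kilobytes):

    # Query free physical memory (reported in KB) on each host.
    $hostNames = 'hv-server', 'ws-core', 'ws-gui'
    foreach ($h in $hostNames) {
        $os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $h
        '{0}: {1:N0} KB free' -f $h, $os.FreePhysicalMemory
    }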

Hyper-V and the “Surface Area” Argument

Look at the difference in consumed disk sizes between the three modes. Those extra bits represent additional available functionality. Within them, you’ll find things such as Active Directory Domain Services and IIS. So, when we talk about choosing between these modes, we commonly point out that all of these things add to the “attack surface”. We try to draw the conclusion that using a GUI-less system increases security.

First part: Let’s say that a chunk of malware injects itself into one of the ADDS DLLs sitting on your Windows Server host running Hyper-V. What happens if you never enable ADDS on that system? Well, it’s infected, to be sure. But in order for any piece of malware to cause any harm, something eventually needs to bring it into memory and execute it, and you know that you’re not supposed to run ADDS on a Hyper-V host. Philosophical question: if malware attacks a file and no one ever loads it, is the system still infected? Hopefully, you’ve got a decent antimalware system that will eventually catch and clean it, so you should be perfectly fine.

On one hand, I don’t want to downplay malware. I would never be comfortable with any level of infection on any system. On the other hand, I think common sense host management drastically mitigates any concerns. I don’t believe this is enough of a problem to carry a meaningful amount of weight in your decision.

Second part: Windows Server runs explorer.exe as its shell and includes Internet Explorer. Attackers love those targets. You can minimize your exposure by, you know, not browsing the Internet from a server, but you can’t realistically avoid using explorer.exe on a GUI system. However, as an infrastructure system, you should be able to safely instruct your antimalware system to keep a very close eye on Explorer’s behavior and practice solid defensive techniques to prevent malware from reaching the system.

Overall takeaway from this section: Explorer presents the greatest risk. Choose the defense-in-depth approach of using Hyper-V Server or Windows Server Core, or choose to depend on antimalware and safe operating techniques with the Windows Server GUI.

Hyper-V and the Patch Frequency Non-Issue

Another thing that we always try to bring into these discussions is the effect of monthly patch cycles. Windows Server has more going on than Hyper-V Server, so it gets more patches. From there, we often make the argument that more patches equals more reboots.

A little problem, though. Let’s say that Microsoft releases twelve patches for Windows Server and only two apply to Hyper-V Server. One of those two patches requires a reboot. In that case, both servers will reboot. One time. So, if we get hung up on downtime over patches, then we gain nothing. I believe that, in previous versions, the downtime math did favor Hyper-V Server a few times. However, patches are now delivered in only a few omnibus packages instead of smaller targeted patches. So, I suspect that we will no longer be able to even talk about reboot frequency.

One part of the patching argument remains: with less to patch, fewer things can go wrong from a bad patch. However, this argument faces the same problem as the “surface area” non-issue. What are you using on your Windows Server system that you wouldn’t also use on a Hyper-V Server system? If you’re using your Windows Server deployment correctly, then your patch risks should be roughly identical.

Most small businesses will patch their Hyper-V systems via automated processes that occur when no one is around. Larger businesses will cluster Hyper-V hosts and allow Cluster Aware Updating to prevent downtime.
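
For reference, kicking off an on-demand Cluster-Aware Updating run looks something like this (a sketch; the cluster name is a placeholder and the ClusterAwareUpdating PowerShell module must be present):

    # Drain, patch, and reboot cluster nodes one at a time so that
    # clustered VMs live-migrate away and stay online throughout.
    Import-Module ClusterAwareUpdating
    Invoke-CauRun -ClusterName 'HVCluster1' -MaxFailedNodes 1 -Force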

Overall takeaway from this section: patching does not make a convincing argument in any direction.

Discussions: Choosing Between Core and GUI for Hyper-V

Now you’ve seen the facts. You’ve seen a few generic arguments for the impact level of two of those facts. If you still don’t know what to do, that’s OK. Let’s look at some situational points.

A Clear Case for Hyper-V on Windows Server Full GUI

If you’re in a small environment with only a single physical server, go ahead and use the full GUI.

Why? Some reasons:

  • It is not feasible to manage Hyper-V without any GUI at all. I advocate for PowerShell usage as strongly as anyone else, but sometimes the GUI is a better choice. In a multi-server environment, you can easily make a GUI-less system work because you have at least one GUI-based management system somewhere. Without that, GUI-less demands too much.
  • The world has a shortage of Windows Server administrators who are willing and able to manage a GUI-less system. You will have difficulty hiring competent help at a palatable price.
  • Such a small shop will not face the density pressures that would justify the modest resource savings of the GUI-less modes.
  • The other issues that I mentioned are typically easier to manage in a small environment than in a large environment.
  • A GUI system will lag behind Core in features, but Hyper-V is quite feature-complete for smaller businesses. You probably won’t miss anything that really matters to you.
  • If you try Hyper-V Server or Windows Server Core and decide that you made a mistake, you have no choice but to reinstall. If you install the GUI and don’t want to use it, then don’t use it — switch to remote management practices. You won’t miss out on anything besides the faster feature release cycle.

We can make some very good arguments for a GUI-less system, but none are strong enough to cause crippling pain for a small business. When the GUI fits, use it.

A Clear Case for Hyper-V Server

Let’s switch gears completely. Let’s say that:

  • You’re a PowerShell whiz
  • You’re a cmd whiz
  • You run a lot of Linux servers
  • Your Windows Servers (if any) are all temporary testing systems

Hyper-V Server will suit you quite well.

Everyone Else

If you’re somewhere in the middle of the above two cases, I think that Microsoft’s recommendation of Windows Server Core with Hyper-V fits perfectly. The parts that stand out to me:

  • Flexibility: Deduplication has done such wonders for me in VDI that I’m anxious to see how I can apply it to server loads. In 2012 R2, server guests were specifically excluded; VDI only. Server 2016 maintains the same wording in the feature setup, but I can’t find a comparable statement saying that server usage is verboten in 2016. I could also see a case for building a nice VM management system in ASP.NET and hosting it locally with IIS; you can’t do that in Hyper-V Server.
  • Automatic Virtual Machine Activation. Who loves activation? Nobody loves activation! Let the system deal with that.
  • Security by terror: Not all server admins are created equal. I find that the really incompetent ones won’t even log on to a Server Core/Hyper-V Server system. That means they won’t put those systems at risk.
  • Remote management should be the default behavior. If you don’t currently practice remote management, there’s no time like the present! You can dramatically reduce the security risk to any system by never logging on to its console, even by RDP.
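
To give a flavor of that last point, here is a minimal sketch of day-to-day remote management from a workstation with the Hyper-V PowerShell module installed (host and VM names are placeholders):

    # Manage the host without ever touching its console.
    Get-VM -ComputerName 'HV01' | Format-Table Name, State, MemoryAssigned
    Start-VM -Name 'testvm' -ComputerName 'HV01'   # hypothetical VM name
    Enter-PSSession -ComputerName 'HV01'           # interactive remoting when needed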

You can manage Hyper-V systems from a Windows 10 desktop with RSAT. It’s not entirely without pain, though:

  • Drivers! Ouch! Microsoft could help us out by providing a good way to use Device Manager remotely. We should not let driver manufacturers off the hook easily, though. Note: Honolulu is coming to reduce some of that pain.
  • Not everyone who requires the GUI is an idiot. Some of them just haven’t yet learned. Some have learned their way around PowerShell but don’t know how to use it for Hyper-V. You like taking vacations sometimes, don’t you?
  • Crisis mode when you don’t know what’s wrong can be a challenge. It’s one thing to keep the top spinning; it’s another to get it going when you can’t see what’s holding it down. However, these problems have solutions. It’s a pain, but a manageable one.

I’m not here to make the decision for you, but you now have enough information to make an informed choice.
