How to Resize Virtual Hard Disks in Hyper-V 2016

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016) and Client Hyper-V (Windows 10) have this capability.

Requirements for Hyper-V Disk Resizing

If we only think of virtual hard disks as files, then we won’t have many requirements to worry about. We can grow both VHD and VHDX files easily. We can shrink VHDX files fairly easily. Shrinking VHD requires more effort. This article primarily focuses on growth operations, so I’ll wrap up with a link to a shrink how-to article.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

If a virtual hard disk belongs to a virtual machine, the rules change a bit.

  • If the virtual machine is Off, any of its disks can be resized (in accordance with the restrictions that we just mentioned)
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the virtual disk in question belongs to the virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the virtual disk in question belongs to the virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.



Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD. As of this writing, the documentation for that cmdlet says that it operates offline only. Ignore that. Resize-VHD works under the same restrictions outlined above.
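For example, to grow a VHDX to 40 GB (the path here is illustrative; substitute your own):

```powershell
# Grows the target VHDX to 40 GB; works online only under the restrictions above
Resize-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx' -SizeBytes 40gb
```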

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to the VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and both b and B mean “byte”).

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Briefly: the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
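To check that floor before attempting a shrink (path illustrative):

```powershell
# MinimumSize reports in bytes; divide by 1gb for a friendlier number
(Get-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx').MinimumSize / 1gb
```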

This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). That’s a separate step.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is in a Saved state, you must first bring it out of that state (resume it, or delete the saved state).
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink (VHDX only) the virtual hard disk. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    Enter the desired size and click Next.
  8. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition:


Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.

Of course, you could also create a new partition (or partitions) if you prefer.
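If you’d rather script the extension than click through Disk Management, the guest’s Storage cmdlets can do the same job. A sketch, assuming the volume to grow is C: and the new space sits directly after its partition:

```powershell
# Run inside the Windows guest after growing the virtual hard disk
Update-HostStorageCache    # same effect as Disk Management's "Rescan Disks"
$max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $max
```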

I have not performed this operation on any Linux guests, so I can’t tell you exactly what to do. The operation will depend on the file system and the tools that you have available. You can probably determine what to do with a quick Internet search.

VHDX Shrink Operations

I didn’t talk much about shrink operations in this article. Shrinking requires you to prepare the contained file system(s) before you can do anything in Hyper-V. You might find that you can’t shrink a particular VHDX at all. Rather than muddle this article with all of the necessary information, I’m going to point you to an earlier article that I wrote on this subject. That article was written for 2012 R2, but nothing has changed since then.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/VHDX and compacting a VHD/VHDX. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Look for a forthcoming article on that topic.

How to Perform Hyper-V Storage Migration

New servers? New SAN? Trying out hyper-convergence? Upgrading to Hyper-V 2016? Any number of conditions might prompt you to move your Hyper-V virtual machine’s storage to another location. Let’s look at the technologies that enable such moves.

An Overview of Hyper-V Migration Options

Hyper-V offers numerous migration options. Each has its own distinctive features. Unfortunately, we in the community often muck things up by using incorrect and confusing terminology. So, let’s briefly walk through the migration types that Hyper-V offers:

  • Quick migration: Cluster-based virtual machine migration that involves placing a virtual machine into a saved state, transferring ownership to another node in the same cluster, and resuming the virtual machine. A quick migration does not involve moving anything that most of us consider storage.
  • Live migration: Cluster-based virtual machine migration that involves transferring the active state of a running virtual machine to another node in the same cluster. A Live Migration does not involve moving anything that most of us consider storage.
  • Storage migration: Any technique that utilizes the Hyper-V management service to relocate any file-based component that belongs to a virtual machine. This article focuses on this migration type, so I won’t expand any of those thoughts in this list.
  • Shared Nothing Live Migration: Hyper-V migration technique between two hosts that does not involve clustering. It may or may not include a storage migration. The virtual machine might or might not be running. However, this migration type always includes ownership transfer from one host to another.

It Isn’t Called Storage Live Migration

I have always called this operation “Storage Live Migration”. I know lots of other authors call it “Storage Live Migration”. But, Microsoft does not call it “Storage Live Migration”. They just call it “Storage Migration”. The closest thing that I can find to “Storage Live Migration” in anything from Microsoft is a 2012 TechEd recording by Benjamin Armstrong. The title of that presentation includes the phrase “Live Storage Migration”, but I can’t determine if the “Live” just modifies “Storage Migration” or if Ben uses it as part of the technology name. I suppose I could listen to the entire hour and a half presentation, but I’m lazy. I’m sure that it’s a great presentation, if anyone wants to listen and report back.

Anyway, does it matter? I don’t really think so. I’m certainly not going to correct anyone that uses that phrase. However, the virtual machine does not necessarily need to be live. We use the same tools and commands to move a virtual machine’s storage whether it’s online or offline. So, “Storage Migration” will always be a correct term. “Storage Live Migration”, not so much. Then again, we use the term “Shared Nothing Live Migration” even for virtual machines that are turned off, so we can’t claim any consistency.

What Can Be Moved with Hyper-V Storage Migration?

When we talk about virtual machine storage, most people think of the places where the guest operating system stores its data. That certainly comprises the physical bulk of virtual machine storage. However, it’s also only one bullet point on a list of multiple components that form a virtual machine.

Independently, you can move any of these virtual machine items:

  • The virtual machine’s core files (the .xml or .vmcx configuration file, plus the .bin, .vsv, and similar state files)
  • The virtual machine’s checkpoints (essentially the same items as the preceding bullet point, but for the checkpoint(s) instead of the active virtual machine)
  • The virtual machine’s second-level paging file location. I have not tested to see if it will move a VM with active second-level paging files, but I have no reason to believe that it wouldn’t
  • Virtual hard disks attached to a virtual machine
  • ISO images attached to a virtual machine

We most commonly move all of these things together. Hyper-V doesn’t require that, though. Also, we can move all of these things in the same operation but distribute them to different destinations.

What Can’t Be Moved with Hyper-V Storage Migration?

In terms of storage, we can move everything related to a virtual machine. But, we can’t move the VM’s active, running state with Storage Migration. Storage Migration is commonly partnered with a Live Migration in the operation that we call “Shared Nothing Live Migration”. To avoid getting bogged down in implementation details that are more academic than practical, just understand one thing: when you pick the option to move the virtual machine’s storage, you are not changing which Hyper-V host owns and runs the virtual machine.

More importantly, you can’t use any Microsoft tool-based technique to separate a differencing disk from its parent. So, if you have an AVHDX (differencing disk created by the checkpointing mechanism) and you want to move it away from its source VHDX, Storage Migration will not do it. If you instruct Storage Migration to move the AVHDX, the entire disk chain goes along for the ride.

Uses for Hyper-V Storage Migration

Out of all the migration types, storage migration has the most applications and special conditions. For instance, Storage Migration is the only Hyper-V migration type that does not always require domain membership. Granted, the one exception to the domain membership rule won’t be very satisfying for people that insist on leaving their Hyper-V hosts in insecure workgroup mode, but I’m not here to please those people. I’m here to talk about the nuances of Storage Migration.

Local Relocation

Let’s start with the simplest usage: relocation of local VM storage. Some situations in this category:

  • You left VMs in the default “C:\ProgramData\Microsoft\Windows\Hyper-V” and/or “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks” locations and you don’t like it
  • You added new internal storage as a separate volume and want to re-distribute your VMs
  • You have storage speed tiers but no active management layer
  • You don’t like the way your VMs’ files are laid out
  • You want to defragment VM storage space. It’s a waste of time, but it works.

Network Relocation

With so many ways to do network storage, it’s nearly a given that we’ll all need to move a VHDX across ours at some point. Some situations:

  • You’re migrating from local storage to network storage
  • You’re replacing a SAN or NAS and need to relocate your VMs
  • You’ve expanded your network storage and want to redistribute your VMs

Most of the reasons listed under “Local Relocation” can also apply to network relocation.

Cluster Relocation

We can’t always build our clusters perfectly from the beginning. For the most part, a cluster’s relocation needs list will look like the local and network lists above. A few others:

  • Your cluster has new Cluster Shared Volumes that you want to expand into
  • Existing Cluster Shared Volumes have a data distribution that does not balance well. Remember that data access from a CSV’s owner node is slightly faster than from a non-owner node

The reasons matter less than the tools when you’re talking about clusters. You can’t use the same tools and techniques to move virtual machines that are protected by Failover Clustering under Hyper-V as you use for non-clustered VMs.

Turning the VM Off Makes a Difference for Storage Migration

You can perform a very simple experiment: perform a Storage Migration for a virtual machine while it’s on, then turn it off and migrate it back. The virtual machine will move much more quickly while it’s off. This behavior can be explained in one word: synchronization.

When the virtual machine is off, a Storage Migration is essentially a monitored file copy. The ability of the constituent parts to move bits from source to destination sets the pace of the move. When the virtual machine is on, all of the rules change. The migration is subjected to these constraints:

  • The virtual machine’s operating system must remain responsive
  • Writes must be properly captured
  • Reads must occur from the most appropriate source

Even if the guest operating system does not experience much activity during the move, that condition cannot be taken as a constant. In other words, Hyper-V needs to be ready for it to start demanding lots of I/O at any time.

So, the Storage Migration of a running virtual machine will always take longer than the Storage Migration of a virtual machine in an off or saved state. You can choose the convenience of an online migration or the speed of an offline migration.

Note: You can usually change a virtual machine’s power state during a Storage Migration. It’s less likely to work if you are moving across hosts.

How to Perform Hyper-V Storage Migration with PowerShell

The nice thing about using PowerShell for Storage Migration: it works for all Storage Migration types. The bad thing about using PowerShell for Storage Migration: it can be difficult to get all of the pieces right.

The primary cmdlet to use is Move-VMStorage. If you will be performing a Shared Nothing Live Migration, you can also use Move-VM. The parts of Move-VM that pertain to storage match Move-VMStorage. Move-VM has uses, requirements, and limitations that don’t pertain to the topic of this article, so I won’t cover Move-VM here.

A Basic Storage Migration in PowerShell

Let’s start with an easy one. Use this when you just want all of a VM’s files to be in one place:
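```powershell
# Moves every component of testvm under the C:\LocalVMs folder
Move-VMStorage -VMName 'testvm' -DestinationStoragePath 'C:\LocalVMs'
```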

This will move the virtual machine named testvm so that all of its components reside under the C:\LocalVMs folder. That means:

  • The configuration files will be placed in C:\LocalVMs\Virtual Machines
  • The checkpoint files will be placed in C:\LocalVMs\Snapshots
  • The VHDXs will be placed in C:\LocalVMs\Virtual Hard Disks
  • Depending on your version, an UndoLog Configuration folder will be created if it doesn’t already exist. The folder is meant to contain Hyper-V Replica files. It may be created even for virtual machines that aren’t being replicated.

Complex Storage Migrations in PowerShell

For more complicated move scenarios, you won’t use the DestinationStoragePath parameter. You’ll use one or more of the individual component parameters. Choose from the following:

  • VirtualMachinePath: Where to place the VM’s configuration files.
  • SnapshotFilePath: Where to place the VM’s checkpoint files (again, NOT the AVHDXs!)
  • SmartPagingFilePath: Where to place the VM’s smart paging files
  • Vhds: An array of hash tables that indicate where to place individual VHD/X files.

Some notes on these items:

  • You are not required to use all of these parameters. If you do not specify a parameter, then its related component is left alone. Meaning, it doesn’t get moved at all.
  • If you’re trying to use this to get away from those auto-created Virtual Machines and Snapshots folders, it doesn’t work. They’ll always be created as sub-folders of whatever you type in.
  • It doesn’t auto-create a Virtual Hard Disks folder.
  • If you were curious whether or not you needed to specify those auto-created subfolders, the answer is: no. Move-VMStorage will always create them for you (unless they already exist).
  • The Vhds array of hash tables is the hardest part of this whole thing. I’m usually a PowerShell-first kind of guy, but even I tend to go to the GUI for Storage Migrations.

The following will move all components except VHDs, which I’ll tackle in the next section:
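```powershell
# Destination paths are illustrative; virtual hard disks are left in place
Move-VMStorage -VMName 'testvm' `
    -VirtualMachinePath 'C:\LocalVMs\testvm' `
    -SnapshotFilePath 'C:\LocalVMs\testvm' `
    -SmartPagingFilePath 'C:\LocalVMs\testvm'
```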

Move-VMStorage’s Array of Hash Tables for VHDs

The three …FilePath parameters are easy: just specify the path. The Vhds parameter is tougher. It is one or more hash tables inside an array.

First, the hash tables. A hash table is a custom object that looks like an array, but each entry has a unique name. The hash tables that Vhds expects have a SourceFilePath entry and a DestinationFilePath entry. Each must be fully-qualified for a file. A hash table is contained like this: @{ }. The name of an entry and its value are joined with an =. Entries are separated by a semicolon (;). So, if you want to move the VHDX named svtest.vhdx from \\svstore\VMs to C:\LocalVMs\testvm, you’d use this hash table:
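```powershell
# Source and destination paths as described in the text above
@{ 'SourceFilePath' = '\\svstore\VMs\svtest.vhdx'; 'DestinationFilePath' = 'C:\LocalVMs\testvm\svtest.vhdx' }
```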

Reading that, you might ask (quite logically): “Can I change the name of the VHDX file when I move it?” The answer: No, you cannot. So, why then do you need to enter the full name of the destination file? I don’t know!

Next, the arrays. An array is bounded by @( ). Its entries are separated by commas. So, to move two VHDXs, you would do something like this:
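```powershell
# The second file name (svtest_2.vhdx) is illustrative
Move-VMStorage -VMName 'testvm' -Vhds @(
    @{ 'SourceFilePath' = '\\svstore\VMs\svtest.vhdx';
        'DestinationFilePath' = 'C:\LocalVMs\testvm\svtest.vhdx' },
    @{ 'SourceFilePath' = '\\svstore\VMs\svtest_2.vhdx';
        'DestinationFilePath' = 'C:\LocalVMs\testvm\svtest_2.vhdx' }
)
```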

I broke that onto multiple lines for legibility. You can enter it all on one line. Note where I used parenthesis and where I used curly braces.

Tip: To move a single VHDX file, you don’t need to do the entire array notation. You can use the first example with Vhds.

A Practical Move-VMStorage Example with Vhds

If you’re looking at all that and wondering why you’d ever use PowerShell for such a thing, I have the perfect answer: scripting. Don’t do this by hand. Use it to move lots of VMs in one fell swoop. If you want to see a plain example of the Vhds parameter in action, the Get-Help examples show one. I’ve got a more practical script in mind.

The following would move all VMs on the host. All of their config, checkpoint, and second-level paging files will be placed on a share named “\\vmstore\slowstorage”. All of their VHDXs will be placed on a share named “\\vmstore\faststorage”. We will have PowerShell deal with the source paths and file names.
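One possible shape for that script (a sketch; verify the share paths before running it against a production host):

```powershell
# Config, checkpoint, and smart paging files go to slow storage;
# virtual hard disks go to fast storage, keeping their original file names
foreach ($VM in Get-VM)
{
    $MoveParameters = @{
        VM                  = $VM
        VirtualMachinePath  = '\\vmstore\slowstorage'
        SnapshotFilePath    = '\\vmstore\slowstorage'
        SmartPagingFilePath = '\\vmstore\slowstorage'
    }
    $VhdMoves = @()
    foreach ($Drive in $VM.HardDrives)
    {
        $VhdMoves += @{
            'SourceFilePath'      = $Drive.Path
            'DestinationFilePath' = '\\vmstore\faststorage\{0}' -f (Split-Path -Path $Drive.Path -Leaf)
        }
    }
    if ($VhdMoves.Count)    # skip the Vhds parameter for VMs with no virtual hard disks
    {
        $MoveParameters.Add('Vhds', $VhdMoves)
    }
    Move-VMStorage @MoveParameters
}
```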

I used splatting for the parameters for two reasons: 1, legibility. 2, to handle VMs without any virtual hard disks.

How to Perform Hyper-V Storage Migration with Hyper-V Manager

Hyper-V Manager can only be used for non-clustered virtual machines. It utilizes a wizard format. To use it to move a virtual machine’s storage:

  1. Right-click on the virtual machine and click Move.
  2. Click Next on the introductory page.
  3. Change the selection to Move the virtual machine’s storage (the same storage options would be available if you moved the VM’s ownership, but that’s not part of this article)
  4. Choose how to perform the move. You can move everything to the same location, you can move everything to different locations, or you can move only the virtual hard disks.
  5. What screens you see next will depend on what you chose. We’ll cover each branch.

If you opt to move everything to one location, the wizard will show you this simple page:


If you choose the option to Move the virtual machine’s data to different locations, you will first see this screen:


For every item that you check, you will be given a separate screen where you indicate the desired location for that item. The wizard uses the same screen for these items as it does for the hard-disks only option. I’ll show its screen shot next.

If you choose Move only the virtual machine’s virtual hard disks, then you will be given a sequence of screens where you instruct it where to move the files. These are the same screens used for the individual components from the previous selection:


After you make your selections, you’ll be shown a summary screen where you can click Finish to perform the move:


How to Perform Hyper-V Storage Migration with Failover Cluster Manager

Failover Cluster Manager uses a slick single-screen interface to move storage for cluster virtual machines. To access it, simply right-click a virtual machine, hover over Move, and click Virtual Machine Storage. You’ll see the following screen:


If you just want to move the whole thing to one of the displayed Cluster Shared Volumes, just drag and drop it down to that CSV in the Cluster Storage heading at the lower left. You can drag and drop individual items or the entire VM. The Destination Folder Path will be populated accordingly.

As you can see in mine, I have all of the components except the VHD on an SMB share. I want to move the VHD to be with the rest. To get a share to show up, click the Add Share button. You’ll get this dialog:


The share will populate underneath the CSVs in the lower left. Now, I can drag and drop that file to the share. View the differences:


Once you have the dialog the way that you like it, click Start.

Comparing Hyper-V Generation 1 and 2 Virtual Machines


The 2012 R2 release of Hyper-V introduced a new virtual machine type: Generation 2. The words in that designation don’t convey much meaning. What are the generations? What can the new generation do for you? Are there reasons not to use it?

We’ll start with an overview of the two virtual machine types separately.

Generation 1 Virtual Machines: The Legacy

The word “legacy” often carries a connotation of “old”. In the case of the Generation 1 virtual machine, “old” paints an accurate picture. The virtual machine type isn’t that old, of course. However, the technology that it emulates has been with us for a very long time.


BIOS stands for “Basic Input/Output System”, which doesn’t entirely describe what it is or what it does.

A computer’s BIOS serves two purposes:

  1. Facilitates the power-on process of a computer. The BIOS initializes all of the system’s devices, then locates and loads the operating system.
  2. Acts as an interface between the operating system and common hardware components. Even though there are multiple vendors supplying their own BIOSes, all of them provide the same command set for operating systems to access. Real mode operating systems used those common BIOS calls to interact with keyboards, disk systems, and text output devices. Protected mode operating systems do not use BIOS for this purpose. Instead, they rely on drivers.

Hyper-V creates a digital BIOS that it attaches to all Generation 1 virtual machines.

Emulated Hardware

A virtual machine is a fake computer. Faking a computer is hard. A minimally functional computer requires several components. Have you ever looked at a motherboard and wondered what all of those chips do? I’m sure you recognize the CPU and the memory chips, but what about the others? Each has its own important purpose. Each contributes something. A virtual machine must fake them all.

One of the ways a virtual machine can fake hardware is emulation. Nearly every computer component is a digital logic device. That means that each of them processes data in binary using known, predictable methods. Since we know what those components do and since they accept and produce binary data, we can make completely digital copies of them. When we do that, we say that we have emulated that hardware. Emulated hardware is a software construct that produces behavior that is identical to the “real” item. If you look at Device Manager inside a Generation 1 virtual machine, you can see evidence of emulation:


Digitally, the IDE controller in a Hyper-V virtual machine behaves exactly like the Intel 82371AB/EB series hardware. Because almost all operating systems include drivers that can talk to Intel 82371AB/EB series hardware, they can immediately work inside a Hyper-V Generation 1 VM.

Emulated hardware provides the benefit of widespread compatibility. Very few operating systems exist that can’t immediately work with these devices. They also tend to work in the minimalist confines of PXE (pre-boot execution environment). For this reason, you’ll often see requirements to use a Generation 1 virtual machine with a legacy network adapter. The PXE system can identify and utilize that adapter; it cannot recognize the newer synthetic adapter.

Generation 2 Virtual Machines: A Step Forward

BIOS works very well, but it has a number of limitations. The most severe is security: BIOS knows how to load boot code from devices, and that’s it. It cannot make any judgment on whether or not the boot code that it found should be avoided. When it looks for an operating system on a hard disk, that hard disk must use a master boot record (MBR) partition layout, or BIOS won’t understand what to do. MBR imposes a limit of four primary partitions and 2TB of space.


Enter the Unified Extensible Firmware Interface (UEFI). As a successor to BIOS, it can do everything that BIOS can do. On some hardware systems, it can emulate a BIOS when necessary. There are three primary benefits to choosing UEFI over BIOS:

  1. Secure Boot. UEFI can securely store an internal database of signatures for known good boot loaders. If a boot device presents a boot loader that the UEFI system doesn’t recognize, it will refuse to boot. Secure Boot can be an effective shield against root kits that hijack the boot loader.
  2. GPT disk layout. The GUID partition table system (GPT) has been available for some time, but only for data disks. BIOS can’t boot to it. UEFI can. GPT allows for 128 partitions and a total disk size of 8 zettabytes, dramatically surpassing MBR.
  3. Extensibility. Examine the options available in the firmware screens of a UEFI physical system. Compare them to any earlier BIOS-only system. UEFI allows for as many options as the manufacturer can fit onto their chips. Support for hardware that didn’t exist when those chips were soldered onto the mainboard might be the most important.

When you instruct Hyper-V to create a Generation 2 virtual machine, it uses a UEFI construct.

Synthetic Hardware

Synthetic hardware diverges from emulated hardware in its fundamental design goal. Emulated hardware pretends to be a known physical device to maximize compatibility with guest operating environments and systems. Hypervisor architects design synthetic hardware to maximize interface capabilities with the hypervisor. They release drivers for guest operating systems to address the compatibility concerns. The primary benefits of synthetic hardware:

  • Controlled code base. When emulating hardware, you’re permanently constrained by that hardware’s pre-existing interface. With synthetic hardware, you’re only limited to what you can build.
  • Tight hypervisor integration. Since the hypervisor architects control the hypervisor and the synthetic device, they can build them to directly interface with each other, bypassing the translation layers necessary to work with emulated hardware.
  • Performance. Synthetic hardware isn’t always faster than emulated hardware, but the potential is there. In Hyper-V, the SCSI controller is synthetic whereas the IDE controller is emulated, but performance differences can only be detected under extremely uncommon conditions. Conversely, the synthetic network adapter is substantially faster than the emulated legacy adapter.

Generation 2 virtual machines can use less emulated hardware because of their UEFI platform. They can boot from the SCSI controller because UEFI understands how to communicate with it; BIOS does not. They can boot using PXE with a synthetic network adapter because UEFI understands how to communicate with it; BIOS does not.

Reasons to Use Generation 1 Over Generation 2

There are several reasons to use Generation 1 instead of Generation 2:

  • Older guest operating systems. Windows/Windows Server operating systems prior to Vista/2008 did not understand UEFI at all. Windows Vista/7 and Windows Server 2008/2008 R2 do understand UEFI, but require a particular component that Hyper-V does not implement. Several Linux distributions have similar issues.
  • 32-bit guest operating systems. UEFI began life with 64-bit operating systems in mind. Physical UEFI systems will emulate a BIOS mode for 32-bit operating systems. In Hyper-V, Generation 1 is that emulated mode.
  • Software vendor requirements. A number of systems built against Hyper-V were designed with the limitations of Generation 1 in mind, especially if they target older OSes. Until their manufacturers update for newer OSes and Generation 2, you’ll need to stick with Generation 1.
  • VHD requirements. If there is any requirement at all to use VHD instead of VHDX, Generation 1 is required. Generation 2 VMs will not attach VHDs. If a VHDX doesn’t exceed VHD’s limitations, it can be converted. That’s certainly not convenient for daily operations.
  • Azure interoperability. At this time, Azure uses Generation 1 virtual machines. You can use Azure Recovery Services with a Generation 2 virtual machine, but it is down-converted to Generation 1 when you fail over to Azure and then up-converted back to Generation 2 when you fail back. If I had any say in designing Azure-backed VMs, I’d just use Generation 1 to make things easier.
  • Virtual floppy disk support. Generation 2 VMs do not provide a virtual floppy disk. If you need one, then you also need Generation 1.
  • Virtual COM ports. Generation 2 VMs do not provide virtual COM ports, either.
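On the VHD point above: if a VHDX stays under the VHD format’s size ceiling (roughly 2 TB), you can convert it with Convert-VHD. A hedged sketch with example paths; the disk must not be attached to a running VM:

```powershell
# Convert a small-enough VHDX to VHD for a Generation 1 VM.
# Paths are illustrative; keep a backup copy of the original.
Convert-VHD -Path 'D:\VMs\App1\App1.vhdx' `
            -DestinationPath 'D:\VMs\App1\App1.vhd' `
            -VHDType Dynamic
```

As the text says, this works, but it is not something you would want in a daily operational loop.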

I’ll also add that a not-insignificant amount of anecdotal evidence exists that suggests stability problems with Generation 2 virtual machines. In my own experiences, I’ve had Linux virtual machines lose vital data in their boot loaders that I wasn’t able to repair. I’ve also had some networking glitches in Generation 2 VMs that I couldn’t explain that disappeared when I rebuilt the guests as Generation 1. From others, I’ve heard of VHDX performance variances and some of the same issues that I’ve seen. These reports are not substantiated, not readily reproducible, and not consistent. I’ve also had fewer problems using 2016 and newer Linux kernels with Generation 2 VMs.

Reasons to Use Generation 2 Over Generation 1

For newer builds that do not have any immediately show-stopping problems, I would default to using Generation 2. Some concrete reasons:

  • Greater security through Secure Boot. Secure Boot is the primary reason many opt for Generation 2. There are at least two issues with that, though:
    • Admins routinely make Secure Boot pointless. Every time someone says that they have a problem with Secure Boot, the very first suggestion is: “disable Secure Boot”. If that’s going to be your choice, then just leave Secure Boot unchecked. Secure Boot has exactly one job: preventing a virtual machine from booting from an unrecognized boot loader. If you’re going to stop it from doing that job, then it’s pointless to enable it in the first place.
    • Secure Boot might not work. Microsoft made a mistake. Maybe ensuring that your Hyper-V host stays current on patches will prevent this from negatively affecting you. Maybe it won’t.
  • Greater security through TPM, Device Guard, and Credential Guard. Hyper-V 2016 can project a virtual Trusted Platform Module (TPM) into Generation 2 virtual machines. If you can make use of that, Generation 2 is the way to go. I have not yet spent much time exploring Device Guard or Credential Guard, but here’s a starter article if you’re interested:
  • Higher limits on vCPU and memory assignment. 2016 adds support for extremely high quantities of vCPU and RAM for Generation 2 VMs. Most of you won’t be building VMs that large, but for the rest of you…
  • PXE booting with a synthetic adapter. Generation 1 VMs require a legacy adapter for PXE booting. That adapter is quite slow. Many admins will deal with this by using both a legacy adapter and a synthetic adapter, or by removing the legacy adapter post-deployment. Generation 2 reduces this complexity by allowing PXE-booting on a synthetic adapter.
  • Slightly faster boot. I’m including this one mainly for completion. UEFI does start more quickly than BIOS, but you’d need to be rebooting a VM lots for it to make a real difference.
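For the security items above, a minimal sketch of the relevant cmdlets (VM names are examples; the local key protector shown is suitable for testing only, not for a production guarded fabric):

```powershell
# Secure Boot on a Generation 2 VM. For Linux guests, the Microsoft
# UEFI Certificate Authority template recognizes the boot loaders of
# the common distributions.
Set-VMFirmware -VMName 'LinuxVM' -EnableSecureBoot On `
               -SecureBootTemplate 'MicrosoftUEFICertificateAuthority'

# On 2016, project a virtual TPM into the guest. A key protector must
# exist before the vTPM can be enabled.
Set-VMKeyProtector -VMName 'SecureVM' -NewLocalKeyProtector
Enable-VMTPM -VMName 'SecureVM'
```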

How to convert from Generation 1 to Generation 2

One of the many nice things about Hyper-V is just how versatile virtual hard disks are. You can just pop them off of one virtual controller and slap them onto another, no problems. You can disconnect them from one VM and attach them to another, no problems. Unless it’s the boot/system disk. Then, often, many problems. Generation 1 and 2 VMs differ in several ways, but boot/system disk differences are the biggest plague for trying to move between them.

I do not know of any successful efforts made to convert from Generation 2 to Generation 1. It’s possible on paper, but it would not be easy.

You do have options if you want to move from Generation 1 to Generation 2. The most well-known is John Howard’s PowerShell “Convert-VMGeneration” solution. This script has a lot of moving parts and does not work for everyone. Do not expect it to serve as a magic wand. Microsoft does not provide support for Convert-VMGeneration.

Microsoft has released an official tool called MBR2GPT. It cannot convert a virtual machine at all, but it can convert an MBR VHDX to GPT. It’s only supported on Windows 10, though, and it was not specifically intended to facilitate VM generation conversion. To use it for that purpose, I would detach the VHDX from the VM, copy it, mount it on a Windows 10 machine, and run MBR2GPT against the copy. Then, I would create an all-new Generation 2 VM and attach the converted disk to it. If it didn’t work, at least I’d still have the original copy.
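The procedure just described might look like the following on the Windows 10 machine. Every path is an example, and you should validate before converting:

```powershell
# Work against a copy so the original VHDX survives a failed attempt.
Copy-Item 'D:\VMs\App1\App1.vhdx' 'D:\Convert\App1-copy.vhdx'
Mount-DiskImage -ImagePath 'D:\Convert\App1-copy.vhdx'
$diskNumber = (Get-DiskImage -ImagePath 'D:\Convert\App1-copy.vhdx' |
    Get-Disk).Number

# Validate first; only run /convert if validation succeeds.
mbr2gpt.exe /validate /disk:$diskNumber
mbr2gpt.exe /convert /disk:$diskNumber

Dismount-DiskImage -ImagePath 'D:\Convert\App1-copy.vhdx'
# Create a new Generation 2 VM and attach the converted copy to it.
```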

Keep in mind that any such conversions are a major change for the guest operating system. Windows has never been fond of changes to its boot system. Conversion to GPT is more invasive than most other changes. Be pleasantly surprised at every successful conversion.


Disk Fragmentation is not Hyper-V’s Enemy

Fragmentation is the most crippling problem in computing, wouldn’t you agree? I mean, that’s what the strange guy downtown paints on his walking billboard, so it must be true, right? And fragmentation is at least five or six or a hundred times worse for a VHDX file, isn’t it? All the experts are saying so, according to my psychic.

But, when I think about it, my psychic also told me that I’d end up rich with a full head of hair. And, I watched that downtown guy lose a bet to a fire hydrant. Maybe those two aren’t the best authorities on the subject. Likewise, most of the people that go on and on about fragmentation can’t demonstrate anything concrete that would qualify them as storage experts. In fact, they sound a lot like that guy that saw your employee badge in the restaurant line and ruined your lunch break by trying to impress you with all of his anecdotes proving that he “knows something about computers” in the hopes that you’d put in a good word for him with your HR department (and that they have a more generous attitude than his previous employers on the definition of “reasonable hygiene practices”).

To help prevent you from ever sounding like that guy, we’re going to take a solid look at the “problem” of fragmentation.

Where Did All of this Talk About Fragmentation Originate?

Before I get very far into this, let me point out that all of this jabber about fragmentation is utter nonsense. Most people that are afraid of it don’t know any better. The people that are trying to scare you with it either don’t know what they’re talking about or are trying to sell you something. If you’re about to go to the comments section with some story about that one time that a system was running slowly but you set everything to rights with a defrag, save it. I once bounced a quarter across a twelve foot oak table, off a guy’s forehead, and into a shot glass. Our anecdotes are equally meaningless, but at least mine is interesting and I can produce witnesses.

The point is, the “problem” of fragmentation is mostly a myth. Like most myths, it does have some roots in truth. To understand the myth, you must know its origins.

These Aren’t Your Uncle’s Hard Disks

In the dark ages of computing, hard disks were much different from the devices that you know and love today. I’m young enough that I missed the very early years, but the first one owned by my family consumed the entire top of a desktop computer chassis. I was initially thrilled when my father presented me with my very own PC as a high school graduation present. I quickly discovered that it was a ploy to keep me at home a little longer because it would be quite some time before I could afford an apartment large enough to hold its hard drive. You might be thinking, “So what, they were physically bigger. I have a dozen magazines claiming that size doesn’t matter!” Well, those articles weren’t written about computer hard drives, were they? In hard drives, physical characteristics matter.

Old Drives Were Physically Larger

The first issue is diameter. Or, more truthfully, radius. You see, there’s a little arm inside that hard drive whose job it is to move back and forth from the inside edge to the outside edge of the platter and back, picking up and putting down bits along the way. That requires time. The further the distance, the more time required. Even if we pretend that actuator motors haven’t improved at all, less time is required to travel a shorter distance. I don’t know actual measurements, but it’s a fair guess that those old disks had over a 2.5″ radius, whereas modern 3.5″ disks are closer to a 1.5″ radius and 2.5″ disks to around a 1″ radius. It doesn’t sound like much until you compare them by percentage differences. Modern enterprise-class hard disks have less than half the maximum read/write head travel distance of those old units.


It’s not just the radius. The hard disk that I had wasn’t only wide, it was also tall. That’s because it had more platters in it than modern drives. That’s important because, whereas each platter has its own set of read/write heads, a single motor controls all of the arms. Each additional platter increases the likelihood that the read/write head arm will need to move a meaningful distance to find data between any two read/write operations. That adds time.

Old Drives Were Physically Slower

After size, there’s rotational speed. The read/write heads follow a line from the center of the platter out to the edge of the platter, but that’s their only range of motion. If a head isn’t above the data that it wants, then it must hang around and wait for that data to show up. Today, we think of 5,400 RPM drives as “slow”. That drive of mine was moping along at a meager 3,600 RPM. That meant even more time was required to get/set data.

There were other factors that impacted speed as well, although none quite so strongly as rotational speed improvements. The point is, physical characteristics in old drives meant that they pushed and pulled data much more slowly than modern drives.

Old Drives Were Dumb

Up until the mid-2000s, every drive in (almost) every desktop computer used a PATA IDE or EIDE interface (the distinction is not important for this discussion). A hard drive’s interface is the bit that sits between the connecting cable bits and the spinning disk/flying head bits. It’s the electronic brain that figures out where to put data and where to go get data. IDE brains are dumb (another word for “cheap”). They operate on a FIFO (first-in first-out) basis. This is an acronym that everyone knows but almost no one takes a moment to think about. For hard drives, it means that each command is processed in exactly the order in which it was received. Let’s say that it gets the following:

  1. Read data from track 1
  2. Write data to track 68,022
  3. Read data from track 2

An IDE drive will perform those operations in exactly that order, even though it doesn’t make any sense. If you ever wondered why SCSI drives were so much more expensive than IDE drives, that was part of the reason. SCSI drives were a lot smarter. They would receive a list of demands from the host computer, plot the optimal course to satisfy those requests, and execute them in a logical fashion.

In the mid-2000s, we started getting new technology. AHCI and SATA emerged from the primordial acronym soup as Promethean saviors, bringing NCQ (native command queuing) to the lowly IDE interface. For the first time, IDE drives began to behave like SCSI drives. … OK, that’s overselling NCQ. A lot. It did help, but not as much as it might have because…

Operating Systems Take More Responsibility

It wasn’t just hard drives that operated in FIFO. Operating systems started it. They had good excuses, though. Hard drives were slow, but so were all of the other components. A child could conceive of better access techniques than FIFO, but even PhDs struggled against the CPU and memory requirements to implement them. Time changed all of that. Those other components gained remarkable speed improvements while hard disks lagged behind. Before “NCQ” was even coined, operating systems learned to optimize requests before sending them to the IDE’s FIFO buffers. That’s one of the ways that modern operating systems manage disk access better than those that existed at the dawn of defragmentation, but it’s certainly not alone.

This Isn’t Your Big Brother’s File System

The venerated FAT file system did its duty and did it well. But the nature of disk storage changed dramatically, which is why we’ve mostly stopped using FAT. Now we have NTFS, and even that is becoming stale. Two things that it does a bit better than FAT are metadata placement and file allocation. Linux admins will be quick to point out that virtually all of their file systems are markedly better at preventing fragmentation than NTFS. However, most of the tribal knowledge around fragmentation on the Windows platform dates from the FAT days, and NTFS is certainly better than FAT.

Some of Us Keep Up with Technology

It was while I owned that gigantic, slow hard drive that the fear of fragmentation wormed its way into my mind. I saw some very convincing charts and graphs and read a very good spiel and I deeply absorbed every single word and took the entire message to heart. That was also the same period of my life in which I declined free front-row tickets to Collective Soul to avoid rescheduling a first date with a girl with whom I knew I had no future. It’s safe to say that my judgment was not sound during those days.

Over the years, I became a bit wiser. I looked back and realized some of the mistakes that I’d made. In this particular case, I slowly came to understand that everything that convinced me to defragment was marketing material from a company that sold defragmentation software. I also forced myself to admit that I never could detect any post-defragmentation performance improvements. I had allowed the propaganda to sucker me into climbing onto a bandwagon carrying a lot of other suckers, and we reinforced each others’ delusions.

That said, we were mostly talking about single-drive systems in personal computers. That transitions right into the real problem with the fragmentation discussion.

Server Systems are not Desktop Systems

I was fortunate enough that my career did not immediately shift directly from desktop support into server support. I worked through a gradual transition period. I also enjoyed the convenience of working with top-tier server administrators. I learned quickly, and thoroughly, that desktop systems and server systems are radically different.

Usage Patterns

You rely on your desktop or laptop computer for multiple tasks. You operate e-mail, web browsing, word processing, spreadsheet, instant messaging, and music software on a daily basis. If you’re a gamer, you’ve got that as well. Most of these applications use small amounts of data frequently and haphazardly; some use large amounts of data, also frequently and haphazardly. The ratio of write operations to read operations is very high, with writes commonly outnumbering reads.

Servers are different. Well-architected servers in an organization with sufficient budget will run only one application or application suite. If they use much data, they’ll rely on a database. In almost all cases, server systems perform substantially more read operations than write operations.

The end result is that server systems almost universally have more predictable disk I/O demands and noticeably higher cache hits than desktop systems. Under equal fragmentation levels, they’ll fare better.

Storage Hardware

Whether or not you’d say that server-class systems contain “better” hardware than desktop systems is a matter of perspective. Server systems usually provide minimal video capabilities and their CPUs have gigantic caches but are otherwise unremarkable. That only makes sense; playing the newest Resident Evil at the highest settings with a smooth frame rate requires substantially more resources than serving as a domain controller for 5,000 users. Despite what many lay people have come to believe, server systems typically don’t work very hard. We build them for reliability, not speed.

Where servers have an edge is storage. SCSI has a solid record as the premier choice for server-class systems. For many years, it was much more reliable, although the differences are negligible today. One advantage that SCSI drives maintain over their less expensive cousins is higher rotational speeds. Of all the improvements that I mentioned above, the most meaningful advance in IDE drives was the increase of rotational speed from 3,600 RPM to 7,200 RPM. That’s a 100% gain. SCSI drives ship with 10,000 RPM motors (~38% faster than 7,200 RPM) and 15,000 RPM motors (108% faster than 7,200 RPM!).

Spindle speed doesn’t address the reliability issue, though. Hard drives need many components, and a lot of them move. Mechanical failure due to defect or wear is a matter of “when”, not “if”. Furthermore, they are susceptible to things that other component designers don’t even think about. If you get very close to a hard drive and shout at it while it’s powered, you can cause data loss. Conversely, my solid-state phone doesn’t seem to suffer nearly as much as I do even after the tenth attempt to get “OKAY GOOGLE!!!” to work as advertised.

Due to the fragility of spinning disks, almost all server systems architects design them to use multiple drives in a redundant configuration (lovingly known as RAID). The side effect of using multiple disks like this is a speed boost. We’re not going to talk about different RAID types because that’s not important here. The real point is that in practically all cases, a RAID configuration is faster than a single disk configuration. The more unique spindles in an array, the higher its speed.

With SCSI and RAID, it’s trivial to achieve speeds that are many multipliers faster than a single disk system. If we assume that fragmentation has ill effects and that defragmentation has positive effects, they are mitigated by the inherent speed boosts of this topology.

These Differences are Meaningful

When I began taking classes to train desktop support staff to become server support staff, I managed to avoid asking any overly stupid questions. My classmates weren’t so lucky. One asked about defragmentation jobs on server systems. The echoes of laughter were still reverberating through the building when the instructor finally caught his breath enough to choke out, “We don’t defragment server systems.” The student was mortified into silence, of course. Fortunately, there were enough shared sheepish looks that the instructor felt compelled to explain it. That was in the late ’90s, so the explanation was a bit different then, but it still boiled down to differences in usage and technology.

With today’s technology, we should be even less fearful of fragmentation in the datacenter, but my observations seem to indicate that the reverse has happened. My guess is that training isn’t what it used to be and we simply have too many server administrators that were promoted off of the retail floor or the end-user help desk a bit too quickly. This is important to understand, though. Edge cases aside, fragmentation is of no concern for a properly architected server-class system. If you are using disks of an appropriate speed in a RAID array of an appropriate size, you will never realize meaningful performance improvements from a defragmentation cycle. If you are experiencing issues that you believe are due to fragmentation, expanding your array by one member (or two for RAID-10) will return substantially greater yields than the most optimized disk layout.

Disk Fragmentation and Hyper-V

To conceptualize the effect of fragmentation on Hyper-V, just think about the effect of fragmentation in general. When you think of disk access on a fragmented volume, you’ve probably got something like this in mind:

Jumpy Access

Look about right? Maybe a bit more complicated than that, but something along those lines, yes?

Now, imagine a Hyper-V system. It’s got, say, three virtual machines with their VHDX files in the same location. They’re all in the fixed format and the whole volume is nicely defragmented and pristine. As the virtual machines run, what does their disk access look like? Is it like this?

Jumpy Access

If you’re surprised that the pictures are the same, then I don’t think that you understand virtualization. All VMs require I/O and they all require their I/O more or less concurrently with I/O needs of other VMs. In the first picture, access had to skip a few blocks because of fragmentation. In the second picture, access had to skip a few blocks because it was another VM’s turn. I/O will always be a jumbled mess in a shared-storage virtualization world. There are mitigation strategies, but defragmentation is the most useless.

For fragmentation to be a problem, it must interrupt what would have otherwise been a smooth read or write operation. In other words, fragmentation is most harmful on systems that commonly perform long sequential reads and/or writes. A typical Hyper-V system hosting server guests is unlikely to perform meaningful quantities of long sequential reads and/or writes.

Disk Fragmentation and Dynamically-Expanding VHDX

Fragmentation is the most egregious of the copious, terrible excuses that people give for not using dynamically-expanding VHDX. If you listen to them, they’ll paint a beautiful word picture that will have you daydreaming that all the bits of your VHDX files are scattered across your LUNs like a bag of Trail Mix. I just want to ask anyone who tells those stories: “Do you own a computer? Have you ever seen a computer? Do you know how computers store data on disks? What about Hyper-V, do you have any idea how that works?” I’m thinking that there’s something lacking on at least one of those two fronts.

The notion fronted by the scare message is that your virtual machines are just going to drop a few bits here and there until your storage looks like a finely sifted hodge-podge of multicolored powders. The truth is that your virtual machines are going to allocate a great many blocks in one shot, maybe again at a later point in time, but will soon reach a sort of equilibrium. An example VM that uses a dynamically-expanding disk:

  • You create a new application server from an empty Windows Server template. Hyper-V writes that new VHDX copy as contiguously as the storage system can allow
  • You install the primary application. This causes Hyper-V to request many new blocks all at once. A large singular allocation results in the most contiguous usage possible
  • The primary application goes into production.
    • If it’s the sort of app that works with big gobs of data at a time, then Hyper-V writes big gobs, which are more or less contiguous.
    • If it’s the sort of app that works with little bits of data at a time, then fragmentation won’t matter much anyway
  • Normal activities cause a natural ebb and flow of the VM’s data usage (ex: downloading and deleting Windows Update files). A VM will re-use previously used blocks because that’s what computers do.
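You can watch that equilibrium yourself. A short sketch with example paths: FileSize reports what the VHDX actually consumes on the host volume, while Size reports the capacity the guest sees:

```powershell
# Create a dynamically-expanding VHDX with a 100 GB virtual size.
New-VHD -Path 'D:\VMs\Data.vhdx' -SizeBytes 100GB -Dynamic

# Compare virtual capacity (Size) to actual on-disk usage (FileSize).
Get-VHD -Path 'D:\VMs\Data.vhdx' | Select-Object Size, FileSize
```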

How to Address Fragmentation in Hyper-V

I am opposed to ever taking any serious steps to defragment a server system. It’s just a waste of time and causes a great deal of age-advancing disk thrashing. If you’re really concerned about disk performance, these are the best choices:

  • Add spindles to your storage array
  • Use faster disks
  • Use a faster array type
  • Don’t virtualize

If you have read all of this and done all of these things and you are still panicked about fragmentation, then there is still something that you can do. Get an empty LUN or other storage space that can hold your virtual machines. Use Storage Live Migration to move all of them there. Then, use Storage Live Migration to move them all back, one at a time. It will line them all up neatly end-to-end. If you want, copy in some “buffer” files in between each one and delete them once all VMs are in place. These directions come with a warning: you will never recover the time necessary to perform that operation.
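If you insist, the shuffle described above reduces to two rounds of Storage Live Migration via Move-VMStorage. Names and paths are examples; remember the warning about the time cost:

```powershell
# Round one: move every VM's storage to an empty staging LUN.
Get-VM | ForEach-Object {
    Move-VMStorage -VMName $_.Name -DestinationStoragePath 'E:\Staging'
}

# Round two: move them back one at a time; each VHDX is written as
# contiguously as the destination volume allows.
Get-VM | ForEach-Object {
    Move-VMStorage -VMName $_.Name -DestinationStoragePath 'D:\VMs'
}
```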

Cannot Delete a Virtual Hard Disk from a Cluster Shared Volume

When you use the built-in Hyper-V tools (Hyper-V Manager and PowerShell) to delete a virtual machine, all of its virtual hard disks are left behind. This is by design and is logically sound. The configuration files are components of the virtual machine and either define it or have no purposeful existence without it; the virtual hard disks are simply attached to the virtual machine and could just as easily be attached to another. After you delete the virtual machine, you can manually delete the virtual hard disk files. Usually. Sometimes, when the VHD sits on a cluster shared volume (CSV), you might have some trouble deleting it. The fix is simple.


There are a few ways that this problem will manifest. All of these conditions will be applicable, but the way that you encounter them is different.

Symptom 1: Cannot Delete a VHDX on a CSV Using Windows Explorer on the CSV’s Owner Node

When using Windows Explorer to try to delete the file from the node that owns the CSV, you receive the error: The action can’t be completed because the file is open in System. Close the file and try again.

System has VHD Open


Note: this message does sometimes appear on non-owner nodes.

Symptom 2: Cannot Delete a VHDX on a CSV Using Windows Explorer on a Non-Owning Node

When using Windows Explorer to try to delete the file from a node other than the CSV’s owner, you receive the error: The action can’t be completed because the file is open in another program. Close the file and try again.

Another Program has the VHD Open


Note: non-owning nodes do sometimes receive the “System” message from symptom 1.

Symptom 3: Cannot Delete a VHDX on a CSV Using PowerShell

The error is always the same from PowerShell whether you are on the owning node or not: Cannot remove item virtualharddisk.vhdx: The process cannot access the file ‘virtualharddisk.vhdx’ because it is being used by another process.

Another Process has the CSV


Symptom 4: Cannot Delete a VHDX on a CSV Using the Command Prompt

The error message from a standard command prompt is almost identical to the message that you receive in PowerShell: The process cannot access the file because it is being used by another process.

Another Process has the VHD



Cleanup is very simple, but it comes with a serious warning: I do not know what would happen if you ran this against a VHD that was truly in use by a live virtual machine, but you should expect the outcome to be very bad.

Open an elevated PowerShell prompt on the owner node and issue Dismount-DiskImage:
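The command looks something like this; the path is an example of an orphaned VHDX on a CSV:

```powershell
# Run from an elevated prompt on the CSV's owner node.
Dismount-DiskImage -ImagePath 'C:\ClusterStorage\Volume1\OldVM\OldVM.vhdx'
```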

You do not need to type out the -ImagePath parameter name but you must fully qualify the path! If you try to use a relative path with Dismount-DiskImage or any of the other disk image manipulation cmdlets, you will be told that The system cannot find the file specified:

DiskImage Cmdlet Fails on Relative Paths


Once the cmdlet returns, you should be able to delete the file without any further problems.

If you’re not sure which node owns the CSV, you can ask PowerShell, assuming that the Failover Cluster cmdlets are installed:
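For example:

```powershell
# List each CSV and the node that currently owns it.
# Requires the FailoverClusters PowerShell module.
Get-ClusterSharedVolume | Select-Object Name, OwnerNode
```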

CSV Owner in PowerShell


You can also use Failover Cluster Manager, on the Storage/Disks node:

CSV Owner in Failover Cluster Manager


Other Cleanup

Make sure to use Failover Cluster Manager to remove any other mention of the virtual machine. Look on the Roles node. These resources are not automatically cleaned up when the virtual machine is deleted. Try to be in the habit of removing cluster resources before deleting the related virtual machine, if possible. I assume that for most of us, deleting a virtual machine is a rare enough occurrence that it’s easy to overlook things like this. I do know that the problem can occur even if the objects are deleted in the “proper” order, so this is not the root cause.

Alternative Cleanup Approaches

The above solution has worked for me every time, but it’s a very rare event without a known cause, so it’s impossible for me to test every possibility. There are two other things that you can try.

Move the CSV to Another Node

Moving the CSV to another node might break the lock from the owning node. This has worked for me occasionally, but not as reliably as the primary method described above.

In Failover Cluster Manager, right-click the CSV, expand Move, then click one of the two options. Best Possible Node will choose where to place the CSV for you; Select Node will give you a dialog for you to choose the target.

Move CSV in Failover Cluster Manager


Alternatively, you can use PowerShell:
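A sketch with example names:

```powershell
# Move the CSV to a specific node.
Move-ClusterSharedVolume -Name 'Cluster Disk 1' -Node 'HV02'
```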

If you don’t want to tell it which node to place the CSV on, simply omit the Node parameter:
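Again with example names:

```powershell
# With no -Node, the cluster chooses the best possible node.
Move-ClusterSharedVolume -Name 'Cluster Disk 1'
```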

Move CSV in PowerShell


Rolling Cluster Reboot

The “nuclear” option is to reboot each node in the cluster, starting with the original owner node. If this does not work, then the disk image is truly in use somewhere and you need to determine where.

How to use Microsoft Virtual Machine Converter (MVMC) for Hyper-V P2V

Up through version 2012 of Microsoft’s System Center Virtual Machine Manager, the product included a physical-to-virtual (P2V) conversion tool for Hyper-V. It was taken out of the 2012 R2 version, and as most anyone could have predicted, the customer response was a general revolt. The capability was later added to the Microsoft Virtual Machine Converter (MVMC) product when it released as version 3.0.

The good news is: That particular product is provided free-of-charge, so you do not need to purchase any System Center products.
The bad news is: It’s really not that well-developed.

This article contains a how-to guide, but I strongly recommend that you read through the entire thing, especially the pros and cons, before you start. With some of the caveats, you might find that you’d rather not use this tool at all.

microsoft virtual machine converter for Hyper-V P2V

What is Microsoft Virtual Machine Converter (MVMC)?

Microsoft Virtual Machine Converter, currently at version 3.1, is a freely-available tool provided by Microsoft for the purpose of converting VMware virtual machines and physical computers to Hyper-V virtual machines. If you prefer, you can also use MVMC to create VHD or VHDX from the source disks without converting the entire system. It includes both a GUI tool and a set of PowerShell functions so you can graphically step through your conversions one at a time or bulk transfer them with only a few lines.

During P2V, an agent application is temporarily installed on the source system.

You can convert desktop operating systems running Windows Vista and later. You can convert Windows Server operating systems running version 2008 or later. No Linux systems are supported by the P2V module. Your hypervisor target can be any version of Windows or Hyper-V Server from 2008 R2 onward.

Pros and Cons of Microsoft Virtual Machine Converter

I was not very impressed with this tool and would be unlikely to use it. Overall, I feel that Disk2VHD is better suited to what most people are going to do.


  • MVMC needs a temporary conversion location, even if you installed it on the target Hyper-V host. So, you need enough space to hold the source system twice. MVMC does use dynamic VHD by default, so plan for the consumed space, not the total empty space.
  • MVMC only creates VHD files, not VHDX. It appears that what Microsoft really wants you to use this for is to convert machines so that they can be used with Azure, which still can’t work with VHDX. So, you must have enough space for the initial VHD and the converted VHDX.
  • MVMC creates one VHD for each volume, not for each physical disk. So, for modern Windows OSs, you will have that tiny 350MB system disk as its own VHD and then your boot C: disk as a separate VHD.
  • MVMC operates from the broker’s perspective, so paths may not line up as you expect on the target system.

It’s not all doom and gloom, though. Pros:

  • MVMC can convert machines in bulk via its PowerShell module. That module is one of the poorer examples of the craft, but it is workable.
  • Aside from a tiny agent that is installed and almost immediately removed during the discovery phase, nothing is installed on the source system. Contrast with Disk2VHD, which requires that you run the app within the machine to be converted.

How to Download Microsoft Virtual Machine Converter

Virtual Machine Converter is freely available from the Microsoft download site. That link was the most current as of the time of writing, but being the Internet, things are always subject to change. To ensure that you’re getting the latest version:

  1. Access the Microsoft download site.
  2. In the search box, enter Virtual Machine Converter.
  3. From the results list, choose the most recent version.

The download page will offer you mvmc_setup.msi and MVMC_cmdlets.doc. The .doc file is documentation for the PowerShell cmdlets. It is not required, especially if you don’t intend to use the cmdlets. The .msi is required.

How to Install Microsoft Virtual Machine Converter

Do not install MVMC on the computer(s) to be converted. You can install the application directly on the Hyper-V host that will contain the created virtual machine or on an independent system that reads from the source physical machine and writes to the target host. For the rest of this article, I will refer to such an independent system as the broker. During the instructions portion, I will only talk about the broker; if you install MVMC on the target Hyper-V host, then that is the system that I am referring to for you.

If you use a broker, it can be running any version of Windows Server 2008 R2 or onward. The documentation also mentions Windows 8 and later, although desktop operating systems are not explicitly listed on the supported line.

Whichever operating system you choose, you must also have its current .Net environment (3.5 for 2008/R2 or 4.5 for 2012/R2). If you intend to use the PowerShell module, version 3.0 of PowerShell must be installed. This is only of concern for 2008/R2. Enter $PSVersionTable at any PowerShell prompt to see the installed version. If you are below version 3.0, use the same steps listed above for downloading MVMC, but search for Windows Management Framework instead. Version 3 is required, but any later version supported by your broker’s operating system will suffice.

The BITS Compact Server feature must also be enabled:
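On Windows Server, you can enable it from an elevated PowerShell prompt. The feature name below is an assumption; you can confirm it on your system with Get-WindowsFeature *BITS*:

```powershell
# Enable the BITS Compact Server feature on the broker (Windows Server).
Install-WindowsFeature -Name BITS-Compact-Server
```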




Installing MVMC is very straightforward. On the broker system, execute mvmc_setup.msi. As you step through the wizard, the only page with any real choice is the Destination Folder. Once installed, it will have its own entry for Microsoft Virtual Machine Converter on the Start menu/screen.

How to use Microsoft Virtual Machine Converter’s GUI to Perform P2V

Before you start, the source system must be online and reachable via network by the broker. The broker will temporarily hold the converted disk files, so it must have sufficient free space; it will not accept an SMB path. You must also know a user name and password that can act as an administrator on the source system and the destination Hyper-V host.

When ready, start MVMC and follow these steps:

  1. The first screen (not shown) is simply informational. If you don’t want to see it again, check Do not show this page again. Click Next when ready.
  2. The next screen asks if you wish to convert from a virtual or physical source machine. This article is only about P2V, so choose Physical machine conversion and click Next.
    MVMC Physical Source

  3. Now, enter information about the source computer. You’ll need a resolvable name or IP address as well as an administrative user account. Upon clicking Next, MVMC will attempt to connect to the source system using the credentials that you specify.
    MVMC Source System

  4. If the connection from step 3 is successful, you’ll next be asked to install the agent and scan the source host. Press the Scan System button and wait for the middle screen to display the results (do not be thrown off by the appearance of a Hyper-V adapter in my screenshot; I didn’t have a suitable physical system to demonstrate with):
    MVMC Source Scan

  5. Each volume in your system will be detected and converted to an individual VHD. You can deselect any volume that you don’t want converted and you can choose to create a Fixed VHD instead of a Dynamic VHD if you prefer. Be aware that every line item will create a unique VHD.
    MVMC Disk Selection

  6. Enter the specifications for the target virtual machine. You’ll need to provide its name, the number of vCPUs to assign, and the amount of memory to assign.
    MVMC Target VM Specifications

  7. Enter the connection information for the target Hyper-V host. You’re given the option to use the credentials of your currently-logged-on user. If that will not work, clear the checkbox and manually enter the correct credentials.
    MVMC Target Host

  8. Next, you’ll be asked where to place the files. The location that you specify is from the viewpoint of the broker. So, if you enter C:\VMs, the files will be placed on the broker’s C: drive. Unless you’re placing the virtual machine’s files on an SMB 3 share, you’ll need to fix this all up afterward.
    MVMC Target File Location

  9. Choose the interim storage location, which must be on the broker system.
    MVMC Interim Storage

  10. Select the virtual switch, if any, that you will connect the virtual machine to. I recommend that you leave it on Not Connected. This helps ensure that the system doesn’t appear on the same network twice.
    MVMC Target Virtual Switch

  11. The next screen (not shown) is a simple summary. Review it, then click Back to make changes or Finish to start the conversion.
  12. You’ll now be shown the progress of the conversion job. Once it’s complete, click Close.
    MVMC Progress

If all is well, you’ll eventually be given an all-green success screen:

MVMC Success

There will be some wrap-up operations to carry out. Jump over the PowerShell section to find those steps.

How to use Microsoft Virtual Machine Converter’s PowerShell Cmdlets to Perform P2V

For some reason, MVMC’s cmdlets are not built properly to autoload. You’d think if any company could find an internal resource to show them how to set that up, it would be Microsoft. You’ll need to manually load the module each time that you want to use it. It should also be theoretically possible to place the files into an MvmcCmdlet sub-folder of any of the Modules folders listed in your system’s PSModulePath environment variable and it would then be picked up by the auto-loader. I wasn’t certain which of the DLLs were required and didn’t spend a lot of time testing it.

  1. Open an elevated PowerShell prompt on your broker system.
  2. Import the module: Import-Module -Name 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'
  3. Load the credential to use on the source physical machine. You can load a credential set in a wide variety of ways. To interactively enter a credential set and save it to a variable: $SourceCredential = Get-Credential
  4. Connect to the source machine: $SourceConnection = New-MvmcP2VSourceConnection -PhysicalServer '' -SourceCredential $SourceCredential
  5. Set up the P2V agent and retrieve information from it: $SourceInfo = Get-MvmcP2VSourceSystemInformation -P2VSourceConnection $SourceConnection. We’ve stored this information in a variable so that we can continue to use it with other cmdlets in this module, but you are more than welcome to look at the information that it gathered. Use $SourceInfo | Get-Member to see its properties. You can then look at any of the members just by entering the variable name, a dot, and the property that you’d like to see. Ex: $SourceInfo.Services
  6. Create a variable to hold the parameters of the virtual machine to be created: $TargetVMParameters = New-MvmcP2VRequestParam. This object has a few properties that you can look at in the same way that you did with the $SourceInfo variable, although they’ll all be empty this time.
  7. Populate the SelectedDrives parameter of $TargetVMParameters with all of the drives from the source machine: $TargetVMParameters.SelectedDrives.AddRange($SourceInfo.LogicalDrives). If you’d prefer, you can add individual drives, ex: $TargetVMParameters.SelectedDrives.Add($SourceInfo.LogicalDrives[0]) will add only the first drive from the source machine. You can continue using .Add to specify other drives until you have the ones that you want. Every single item here will have a VHD created just for it.
  8. The VHDs will be created as dynamically expanding, and I don’t recommend that you change that. You’re probably going to want to convert them to VHDX later anyway, so, if you’re dead set on fixed, wait until you can convert it yourself. Otherwise, your temp and destination space consumption will be higher than necessary. If you really want to create them as fixed right now: $TargetVMParameters.SelectedDrives | foreach { $_.IsFixed = $True }
  9. Populate the CPUCount parameter of $TargetVMParameters. You can enter a number or use the same as the source: $TargetVMParameters.CpuCount = $SourceInfo.CoreSystem.PhysicalProcessorCount.
  10. Populate the StartupMemoryInMB parameter of $TargetVMParameters. As with CPU, you can pull it from the source system: $TargetVMParameters.StartupMemoryInMB = $SourceInfo.CoreSystem.MemoryInMB. This is potentially a bit more dangerous, as it could create a VM that is simply too large to start. You can, of course, just specify an integer value.
  11. The final task is to set up the network adapter(s). If you skip this step, your virtual machine will be created without any virtual network adapters at all. That’s a viable option, but I recommend against it because MVMC can keep the OS-to-adapter IDs intact. You can add virtual adapters somewhat like you did with the hard drives. The differences are that you can only add one adapter at a time and you also need to specify which, if any, virtual switch to connect the adapter to. If you use an empty string, then the adapter remains disconnected. Some samples:
    1. Copy the first physical adapter and leave it disconnected: $TargetVMParameters.SelectedNetworkAdapters.Add($SourceInfo.NetworkAdapters[0], '')
    2. Add all adapters and connect them to the virtual switch named “vSwitch”: $SourceInfo.NetworkAdapters | foreach { $TargetVMParameters.SelectedNetworkAdapters.Add($_, 'vSwitch') }
  12. You’ve collected all the information from the source system and defined the target system. Let’s turn our attention to the target host. Start by gathering the credential set that will be used to create the new virtual machine: $DestinationCredential = Get-Credential. You can use the same credential as the source if that will work: $DestinationCredential = $SourceCredential
  13. Open a connection to the target Hyper-V host: $DestinationConnection = New-MVMCHyperVHostConnection -HyperVServer '' -HostCredential $DestinationCredential.
  14. All that’s left is to perform the conversion: ConvertTo-MvmcP2V -SourceMachineConnection $SourceConnection -DestinationLiteralPath '\\svhv2\c$\LocalVMs' -DestinationHyperVHostConnection $DestinationConnection -TempWorkingFolder 'C:\Temp' -VmName 'svmigrated' -P2VRequestParam $TargetVMParameters. Take a close look at the DestinationLiteralPath that I used. This cmdlet operates from the perspective of the broker, not the target host (contrast with Move-VM/Move-VMStorage).
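The steps above can be collected into a single script sketch. The source server name ('svphysical') is a placeholder; 'svhv2', 'svmigrated', and the paths are the same examples used in the steps, and all cmdlets are taken directly from them:

```powershell
# Sketch of the full MVMC P2V flow, run on the broker. Placeholders throughout.
Import-Module -Name 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

# Connect to the source physical machine and gather its inventory.
$SourceCredential = Get-Credential
$SourceConnection = New-MvmcP2VSourceConnection -PhysicalServer 'svphysical' -SourceCredential $SourceCredential
$SourceInfo = Get-MvmcP2VSourceSystemInformation -P2VSourceConnection $SourceConnection

# Define the target VM: all drives, source CPU/memory counts, adapters disconnected.
$TargetVMParameters = New-MvmcP2VRequestParam
$TargetVMParameters.SelectedDrives.AddRange($SourceInfo.LogicalDrives)
$TargetVMParameters.CpuCount = $SourceInfo.CoreSystem.PhysicalProcessorCount
$TargetVMParameters.StartupMemoryInMB = $SourceInfo.CoreSystem.MemoryInMB
$SourceInfo.NetworkAdapters | foreach { $TargetVMParameters.SelectedNetworkAdapters.Add($_, '') }

# Connect to the target Hyper-V host and run the conversion.
$DestinationCredential = Get-Credential
$DestinationConnection = New-MvmcHyperVHostConnection -HyperVServer 'svhv2' -HostCredential $DestinationCredential
ConvertTo-MvmcP2V -SourceMachineConnection $SourceConnection `
    -DestinationLiteralPath '\\svhv2\c$\LocalVMs' `
    -DestinationHyperVHostConnection $DestinationConnection `
    -TempWorkingFolder 'C:\Temp' -VmName 'svmigrated' `
    -P2VRequestParam $TargetVMParameters
```

Remember that DestinationLiteralPath is from the broker's perspective, so use a UNC path to the target host.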

Post-Conversion Fix-Up and Notes

Do not forget to turn off or otherwise disconnect the source physical system before turning on the virtual replacement!

Virtual network adapters are not placed in a VLAN. If a VLAN is needed, you’ll need to set that after the VM is created.

The virtual machine will be set to use fixed memory. If you’d like to use Dynamic Memory, you’ll need to set that after the VM is created.

The process automatically creates a sub-folder of DestinationLiteralPath with the name of the virtual machine. All of the virtual machine’s files are placed there. Feel free to use Storage Live Migration to place the files anywhere that you like.

I do not know of any way to recombine the volumes so that they are all together in a single VHD. It might be possible to use a partition manipulation tool such as Clonezilla.

Assuming that you don’t want to continue using the older VHD format, you’ll need to convert to VHDX. We have an article explaining how to do that. Remember that the new disk is created alongside the old.

When I was first running through the PowerShell steps, I didn’t realize that the DestinationLiteralPath was from the broker’s perspective so I used the local path on the Hyper-V host (C:\LocalVMs). The cmdlet accepted my input, ran for a very long time, and then failed due to the path. I then discovered that it had created an entire VM on my broker machine in C:\LocalVMs. Where it failed was in connecting that folder to the target host. So, I could have copy/pasted and imported its output rather than going through the whole thing again.

Even though I used what would ordinarily be a completely non-workable path for DestinationLiteralPath, the cmdlet automatically fixed up the VM completely so that it ran from the correct local path.

If the conversion process does fail for some reason during the final stage, it will almost always have created a virtual machine. You’ll need to manually delete it before retrying.

How To Copy or Backup a VHD File While the VM is Running


I think that we can all agree that backup software exists for a reason. Well, lots of reasons. Very good reasons. So, if you ask me in the abstract how to make a copy or backup of a virtual machine’s virtual hard disk file while the virtual machine is running, I’m probably going to refer you to your backup vendor.

If you don’t have one, or don’t have one that you can trust, then I am naturally going to recommend that you download Altaro VM Backup. Backup is their wheelhouse and they’re going to have a lot more experience in it than any of us. The outcomes will be better than anything that we administrators can do on our own.

But, I also understand that sometimes you have one-off needs and you need to get something done right now.

Or, you need to script something.

Or your backup software isn’t quite granular enough.

Or you have some other need that’s unique to your situation.

If you need to get a copy or backup of one or more of a virtual machine’s hard disks without shutting down the virtual machine, you have three options, shown in their preferred order:

  1. Use your backup application, as we discussed.
  2. Export the virtual machine with Hyper-V Manager or Export-VM. This only works for Hyper-V versions 2012 R2 and later.
  3. Copy the file manually.

I’m not going to change my mind that a backup application is the best way to get that copy. But, I’m done beating that horse in this article.

Export is the second-best solution. The biggest problem with that is that it exports the entire virtual machine, which might be undesirable for some reason or another. It also locks the virtual machine. That won’t necessarily be a bad thing, especially if all you’re doing is getting a copy, but maybe it’s a showstopper for whatever you’re trying to accomplish.

That leaves us with option 3, which I will illustrate in this article. But first, I’m going to try to talk you out of it.

You Really Shouldn’t Manually Copy the Disks of a Live Virtual Machine

Manually copying a live VHD/X file isn’t the greatest idea. The best that you can hope for is that your copy will be “crash consistent”. The copy will only contain whatever data was within the VHD/X file at the moment of the copy. Any in-flight I/Os will be completely lost if you’re lucky or partially completed if you’re not. Databases will probably be in very bad shape. I’m sure that whatever reason that you have for wanting to do this is very good, but the old adage, “Because you can do a thing, it does not follow that you must do a thing,” is applicable. Please, think of the data.

OK, the guilt trip is over.

Just remember that if you attach the copied disk to a virtual machine and start it up, the volume will be marked as dirty and your operating system is going to want to run a repair pass on it.

Manually Copying a Hyper-V Disk the Dangerous Way

That header is a bit scarier than the reality. Most importantly, you’re not going to hurt your virtual machine doing this. I tested this several times and did not have any data loss or corruption issues. I was very careful not to try this process with a disk that housed a database because I was fairly certain that would break my perfect streak that way.

Use robocopy in restartable mode:

The format is:
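A sketch of the command (the folder and file names are placeholders):

```shell
robocopy "C:\LocalVMs\Virtual Hard Disks" "\\backuphost\vmcopies" svtest.vhdx /z
```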

It is important that you do not use a trailing slash on the folder names! If you want to copy multiple files, just enter them with a space between each.

Pros of the robocopy method:

  • It’s easy to remember
  • It works on anything that your account can reach — local storage, CSVs, SMB shares, whatever
  • It’s “good enough”

Cons of the robocopy method:

  • Restartable mode (specified by the /z switch) is sssssllllllllooooooooow, especially over networks
  • There is no guarantee of data integrity. But, there’s no real guarantee of data integrity when manually copying a live VHD/X anyway, so that probably doesn’t matter.
  • I doubt that anyone will ever give you support if you use this method

For basic file storage VHD/X files, this is probably the best of these bad methods to use. I would avoid it for frequently written VHD/X files.

Manually Copying a Hyper-V Disk the Safer Way

A somewhat safer method is to invoke VSS. It’s more involved, though. The following is a sample; do not copy/paste!
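A sketch of the full sequence. The shadow copy volume number, the shadow ID, and the file paths are placeholders; yours will come from the output of the vssadmin create shadow command:

```shell
vssadmin create shadow /for=C:
mklink /d C:\vssvolume \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\
xcopy "C:\vssvolume\LocalVMs\svtest.vhdx" "D:\vmcopies\"
rmdir C:\vssvolume
vssadmin delete shadows /shadow={shadow-copy-id}
```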

This is going to need somewhat more explanation than the robocopy method. We’ll take it line-by-line.

The first line tells VSSADMIN to create a shadow copy for the C: volume. The VHD/X file that I’m targeting lives on C:. Substitute your own drive here. The shadow copy becomes a standard Windows volume.

The second line creates a symbolic link to the newly created volume so that we can access its contents with the usual tools. You can discover what that line’s contents should be from the output of the previous command.

VSS Admin Create Shadow


We use “mklink.exe” to create the symbolic link. The /D switch lets it know that we’re going to make a directory link, not a file link. After that, we only need to tell it what to call the link (I used C:\vssvolume) and then the target of our link. It is vitally important that you place a trailing slash on the target or your symbolic link will not work.

Next, we copy the file out of the shadow copy to our destination. I used XCOPY because I like XCOPY (and because it allows for tab completion of the file name, which robocopy does not). You can use any tool that you like.

That’s all for the file. You can copy anything else out of the shadow copy that you want.

We need to clean up after ourselves. Do not leave shadow copies lying around. It’s bad for your system’s health. The first step is to destroy the symbolic link:
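Assuming the link was created as C:\vssvolume, as above:

```shell
rmdir C:\vssvolume
```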

The last thing is to delete the shadow. There are multiple ways to do this, but my preferred way is to delete the exact shadow copy that we made. If you look at the output of your vssadmin create shadow command, it has all of the information that you need. Just look at the Shadow Copy ID line (it’s directly above the red box in my screen shot). Since VSSADMIN was nice enough to place all of that on one line, you can copy/paste it into the deletion command.
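The deletion command looks like this, with the GUID taken from your own vssadmin create shadow output ({shadow-copy-id} is a placeholder):

```shell
vssadmin delete shadows /shadow={shadow-copy-id}
```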

You’ll be prompted to confirm that you want to delete the shadow. Press [Y] and you’re all finished! If you want to see other ways to remove VSS shadows, type vssadmin delete shadows without any other parameters. It will show you all of the ways to use that command.

Yes, this works for Cluster Shared Volumes. Get a shadow of C: as shown above and copy from the shadow’s ClusterStorage folder. Take care to perform this only from the node that owns the CSV.

Pros of the VSSADMIN method:

  • It’s completely safe to use, if you do it right.
  • It’s not entirely perfect, but some quiescing of data is done. The copied volume is still dirty, though.
  • Faster when copying to a network destination than robocopy in restartable mode.
  • Works for local disks and CSVs. Won’t work for SMB 3 from the Hyper-V host side.

Cons of the VSSADMIN method:

  • Tough to remember (but this blog article should live for a long time, and that’s what Favorites and Bookmarks are for)
  • If you don’t clean up, you could cause your system to have problems. For instance, you might prevent your actual backup software from running later on in the same evening.
  • May not work at all if another VSS snapshot exists
  • May have problems when third-party and hardware VSS providers are in use

If the integrity of the copied data is important and/or changing frequently, you’ll likely get better results from the VSSADMIN method than the robocopy method. It’s still not as good as the other techniques that I promised not to harp on you about.

Hyper-V How To: Create a Virtual Hard Disk


Creating a new virtual hard disk is almost always included as a step during the creation of the virtual machine that owns it, but that’s certainly not the only time or the only way that you can create a VHDX. You don’t necessarily need to attach it to a virtual machine at all: Hyper-V Manager has a complete management system for virtual hard disks whether or not they have a connection to a virtual machine. You can use it to create both standalone virtual hard disks and disks directly attached to a virtual machine. PowerShell also offers similar functionality.

Use Hyper-V Manager to Create a New Virtual Disk Directly on a Virtual Machine

To get started, open Hyper-V Manager and choose your desired creation method from one of the next two sections.

  1. In Hyper-V Manager’s center pane, right-click a virtual machine and click Settings.

    Access VM Settings


  2. In the left pane of the VM’s Settings dialog, click to select the controller that you wish to connect the new disk to. Remember that you cannot make changes to the disk configuration of an IDE controller while the virtual machine is on, but this is not a restriction for the SCSI controller.
    The following two screen shots show the IDE screen and the SCSI screen, respectively. On either, click the Add button (for an IDE controller, make sure that Hard Drive is highlighted).

    Add VHD on IDE


    Add VHD to SCSI


  3. Whichever you chose, the following screen will appear:
    Disk Connection


    Notice that the connection information for Controller is the controller that you selected in step 2 and the Location is the next available slot on that controller. You can freely switch either here, with the limitation that you can’t add to an IDE controller while the VM is on and you can’t use a location that already has a disk attached. When ready, click the New button.

  4. This will open the New Virtual Hard Disk Wizard. Jump down to the New Virtual Hard Disk Wizard section below to finish up.

Using Hyper-V Manager to Create a Standalone Virtual Hard Disk

To create a new standalone virtual hard disk, there’s only a single step that you need to take to open the wizard. In Hyper-V Manager, in the Actions pane at the far right, click New, then Virtual Hard Disk.

Add New VHD


New Virtual Hard Disk Wizard

Whichever method you chose above, you’ll now be looking at the New Virtual Hard Disk Wizard. Follow the steps below to actually create the virtual hard disk.

  1. The first screen that appears is informational unless it was previously suppressed via the Do not show this page again checkbox. When ready, click Next.
  2. If you aren’t adding the disk to a Generation 2 virtual machine, the first active screen is the Choose Disk Format page. This allows you to choose between the earlier VHD format or the newer VHDX format. Unless you need to use the disk on a pre-2012 version of Hyper-V or share it with a third party hypervisor that can’t understand VHDX, you’ll likely want to choose VHDX. Make your choice and click Next.

    New VHD Format


  3. The next screen is Choose Disk Type. Your options are between fixed size, dynamically expanding, and differencing.

    New VHD Type


  4. Next, you’ll be asked to provide the name and location for your new virtual hard disk.

    New VHD Name and Location


  5. If you are creating a new differencing disk, the next screen will ask you to select the parent virtual hard disk.

    New VHD Parent


  6. If you chose either a fixed size or a dynamically expanding disk, you’ll be brought to the Configure Disk screen.
    New VHD Source


    You have three options:

    1. Create a new blank virtual hard disk does exactly that. You’ll need to supply the size for the disk that you wish to create.
    2. You can also choose to copy an existing physical disk’s contents to the file that you specified in step 4.
    3. The final option is like the second except that the source is an existing VHD file.
  7. The final screen is a summary. Review the settings; click Back if you need to correct anything, or Finish to create the disk.

Creating a VHDX in PowerShell

Use the New-VHD cmdlet to create the VHD, then Add-VMHardDiskDrive to attach it.
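A sketch of the pair in action (the VM name, path, and size are placeholders):

```powershell
# Create a 60 GB dynamically expanding VHDX...
New-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest-data.vhdx' -SizeBytes 60GB -Dynamic

# ...then attach it to an existing virtual machine's SCSI controller.
Add-VMHardDiskDrive -VMName 'svtest' -ControllerType SCSI -Path 'C:\LocalVMs\Virtual Hard Disks\svtest-data.vhdx'
```

Because the disk attaches to the SCSI chain, this works even while the virtual machine is running.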

New VHD Creation Notes

VHD creation is a fairly easy-to-understand process but there are a few things to be mindful of.

  • If you create a disk using a virtual machine’s Settings page, the wizard will automatically pre-populate with the new disk file when it is finished creating.
  • If you create a new disk from a physical disk or another virtual disk, the new VHD is built up from scratch. If it’s dynamically expanding, any empty blocks in the source will not be duplicated in the new disk. This means that it will likely be smaller than the original.
  • The new disk that you create does not need to be in the same location or on the same storage as any of a virtual machine’s other files.
  • A new differencing disk’s parent cannot be actively in use by any virtual machine, although it can be a parent of other differencing disks.
  • For Linux virtual machines, it is recommended that you specify -BlockSizeBytes 1MB for dynamically-expanding virtual hard disks. You can only provide this option with PowerShell’s New-VHD. None of the GUI tools allows for a block size override. Due to the way that common Linux file systems work, this switch will result in more efficient space utilization.
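For the Linux case, a sketch (the path and size are placeholders):

```powershell
# Dynamically expanding VHDX for a Linux guest with a 1 MB block size.
New-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svlinux.vhdx' -SizeBytes 40GB -Dynamic -BlockSizeBytes 1MB
```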


PowerShell & Hyper-V: Finding the Parents of a Differencing VHD


We’ve had quite a few posts about Hyper-V checkpoints lately (formerly snapshots). We also spend a fair bit of time warning people not to tinker with them manually. There are still those people that are going to tinker despite any warnings, and there will always be those people who don’t even find the warnings until they’re too late to be of any value. Worse, there will always be those unfortunate few that do everything right and still find themselves in a mess. The least I can do is provide a tool that can be of use to anyone that’s stuck working on a complicated tree of differencing disks.

As a refresher, a differencing virtual hard disk contains all the changes that would have been made to a fixed or dynamically expanding virtual hard disk. These disks have a number of uses. The most common is with checkpoints, which automatically create differencing disks (and prepend an A to the extension so that it becomes an AVHD or an AVHDX). They are also used by Remote Desktop Services (RDS) to deploy an individual machine in a pool from a single parent. We can create them manually with New-VHD to attach one or more dependent child virtual machines to a single master.

Once a differencing disk has been created, any virtual machine that is assigned to that differencing disk will only write changes into the differencing disk. When it reads from the differencing disk, any blocks that it does not contain are retrieved from the parent. It is possible for a differencing disk to be a parent of a differencing disk, but they must eventually trace back to a root fixed or dynamically expanding disk.

To put it mildly, problems arise when changes are made to a VHD that is the parent of a differencing disk. So that there is no misunderstanding, these problems are catastrophic to the differencing disk. When a parent-child relationship exists between a virtual disk and a differencing virtual disk, all changes must occur in the child. If any change, however minor, is made to the parent, the data in the child is invalidated. The parent is still fully usable.

So, if you’re going to tinker with a disk in a differencing relationship, you must have a clear idea of what that relationship is. Unfortunately, there’s really no way to look at a VHD and determine if it has children. It’s easiest with checkpoints, because Hyper-V will always create the AVHD/X files in the same folder as the parent. Even if you can’t visually confirm which is the newest or oldest, at least you know that none have wandered off into a neighboring pasture. For RDS, you should only be using the provided modification tools because they have their own methods of protecting the differencing children. If you’re out creating differencing disks on your own, hopefully you paid attention to where you placed them.

How to See All Disks Upward of a VHD File

While I can’t be of much help in determining what the newest disk in a differencing chain is, I can show you a very quick way to see all the disks in a chain upward of a VHD file that you provide. Just use the following script:
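A minimal sketch of such a script, matching the one-line while loop discussed in the paragraphs that follow (the $Path parameter name comes from that discussion):

```powershell
# Lists every parent disk upward of the supplied VHD/X, one path per line.
param(
    [Parameter(Mandatory = $true)]
    [String]$Path
)

# Each pass replaces $Path with its parent's path; the loop ends when a
# disk with no parent (a fixed or dynamically expanding root) is reached.
while ($Path = (Get-VHD -Path $Path).ParentPath)
{
    $Path    # emit the parent's path into the pipeline
}
```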

The above script is one of those fun ones where the introductory comments dramatically outnumber the functional parts of the script. The only problem is that it runs the risk of being a bit too clever. I like my scripts to be easy to read. If this were any longer, or were part of a bigger script, I wouldn’t do it this way. This is because one line of script does three things, and some of the functionality is obscured. Look specifically at this line:

The first thing this line does is run Get-VHD against $Path.

The second thing that it does is assign one property of the output of Get-VHD to $Path. There are two confusion points in that piece alone. First, remember that in PowerShell, the = sign is the assignment operator; -eq is the equality operator. We are not checking if $Path is equal to its own ParentPath. The second confusion point is using $Path as a parameter to a cmdlet whose output is being assigned back into $Path. It works because PowerShell evaluates Get-VHD with the original value of $Path, reads the ParentPath property of the result, and only then performs the assignment. Reusing $Path causes no collision because its original value is no longer needed once Get-VHD has run.

The third thing that this line does is supply the outcome of that assignment as the condition for the while loop. while is a deceptively simple language construct that expands to: if the condition is not zero or empty, loop until the condition is zero or empty. Pay attention to the “condition is zero or empty” part, as this is not intuitive for newcomers. It would be logical to expect it to expand to: if the condition is true, loop until the condition is not true, because that is how humans use if in natural language. The only computer language that I am familiar with that implements If in the natural language fashion is Visual Basic. If this same line of script were converted to VB, it would refuse to compile because $Path = (Get-VHD -Path $Path).ParentPath cannot be evaluated as a simple true/false condition. Because PowerShell will continue operating the while loop as long as the condition inside parentheses produces something, it will loop until it encounters a VHD that doesn’t have a parent.

There is only one remaining line of script, and that is the $Path item on a line all by itself inside the while loop. This causes the value of $Path to be placed into the pipeline. If you run this script as-is, it will just emit the text to the screen, line by line. If you pipe it into something that accepts a String object or an array of Strings, it will handle them appropriately.

Most of the programming and scripting best practices recommendations that I’ve seen will tell you to avoid writing code that doesn’t follow natural language conventions because of the potential confusion. That means that a friendlier script would look like this:
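Such a script might look like this sketch, which trades the combined assignment-and-test for an explicit emptiness check:

```powershell
# A more conventional version of the parent-chain walker.
# Note that the Get-VHD call now appears twice.
param(
    [Parameter(Mandatory = $true)]
    [String]$Path
)

$Path = (Get-VHD -Path $Path).ParentPath
while (-not [String]::IsNullOrEmpty($Path))
{
    $Path    # emit the current parent's path
    $Path = (Get-VHD -Path $Path).ParentPath
}
```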

While friendlier, it duplicates script and, in my opinion, has other issues that make it harder to read than the method that I chose. Of course, there are other ways to script this to wind up with the same output, but this is one of those few cases where increased readability introduces increased complexity.

Bonus: Re-using a Get-VHD Script

I wrote a script that looks for orphaned virtual machine files a while back, and in that same post I included a script that could use only built-in PowerShell components to check a VHD/X file for parents. The purpose of doing so is that you need the Hyper-V PowerShell module loaded in order to use Get-VHD, even though virtual disk files can be used on systems without Hyper-V, such as Windows 7. If you want to use that script with this one, making two simple changes to produce the following script will do the trick (assuming that you loaded Get-VHDDifferencingParent as a dot-sourced function):
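A sketch of that modified script follows. It assumes Get-VHDDifferencingParent has been dot-sourced and returns the parent's path (or nothing for a root disk); the -VHDPath parameter name is an assumption, so adjust it to match the actual function's signature:

```powershell
# Walks the parent chain without the Hyper-V PowerShell module, using the
# dot-sourced Get-VHDDifferencingParent function in place of Get-VHD.
param(
    [Parameter(Mandatory = $true)]
    [String]$Path
)

while ($Path = Get-VHDDifferencingParent -VHDPath $Path)
{
    $Path    # emit each parent's path into the pipeline
}
```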

The referenced script wasn’t included in the PowerShell and Hyper-V series because it’s far more complex than what I had in mind for this teaching series. You shouldn’t let that stop you from tearing into it to see what you can learn (or if you can find something I did that could be fixed).


How To Attach an Existing Virtual Disk (VHD/X) in Hyper-V


Virtual hard disks (VHD or VHDX) are usually created along with their owning virtual machines or, when a new disk is needed for an existing virtual machine, directly from the VM’s property sheet. There are times when you’ll want to connect an existing virtual disk, though. For instance:

  • A virtual machine’s definition files are damaged but the VHDX is intact, perhaps due to antivirus
  • Transferring duplicates of virtual disks to one or more test virtual machines
  • Using a virtual hard disk to move files between virtual machines or between a host and guest

Attach a VHDX Using Hyper-V Manager

To attach an existing virtual disk to a virtual machine using Hyper-V Manager, follow these steps:

  1. In Hyper-V Manager’s center pane, locate the virtual machine that you wish to attach the disk to and click Settings.

    Access VM Settings

  2. In the dialog, click on any virtual disk controller (IDE or SCSI). As a matter of course, it’s preferable to choose the controller that you intend to attach the disk to, but it doesn’t actually matter. The screens differ depending on whether you choose an IDE or a SCSI controller, but on either one, highlight Hard Drive and click Add:
    IDE Options

    SCSI Options

  3. The following screen will appear:
    New Disk Attachment

    This screen is identical to the page for an existing disk, with a couple of exceptions:

    • If you look in the left pane, you’ll see, in blue, the placeholder for the new hard drive. The controller that it appears under will be the one that you selected in step 2.
    • None of the fields that indicate the identity of the disk, whether virtual or physical, are filled in.
  4. From the drop-down, choose the Controller that you wish to attach the new disk to. The controller that you selected in step 2 will automatically be selected, but you can choose any that the virtual machine possesses. If you need to add a new SCSI controller, you’ll need to do that separately (it’s on the Add Hardware tab).
  5. From the Location drop-down, pick the controller location to attach the disk to. Any location that already has a disk will be marked as In Use. IDE controllers will only accept a maximum of 2 disks; SCSI will accept up to 64. Remember that a Generation 1 VM will boot from the hard disk in Location 0 on the first IDE controller and a Generation 2 VM will boot from the hard disk in Location 0 on the first SCSI controller.
  6. The last major step is to find the disk file that you wish to attach. Click the Browse button to find it (or if you’ve got the path handy, you can just enter it into the text box).
  7. In the main Settings dialog, click either Apply or OK (the only difference between the two is that OK will close the dialog after the work is finished whereas Apply does not).

Attaching a VHDX Using PowerShell

Use the Add-VMHardDiskDrive cmdlet.
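A typical invocation might look like the following sketch (the VM name, controller positions, and path are all examples):

```powershell
# Attach an existing VHDX to the second location of the first SCSI controller.
Add-VMHardDiskDrive -VMName 'demo-vm' -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 1 `
    -Path 'C:\VMs\VHDs\data01.vhdx'
```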


The -ControllerType parameter works with tab completion and will automatically cycle between IDE and SCSI (and Floppy).

You can validate the in-use bus locations in advance:
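One way to do that is to enumerate the drives already attached to each of the virtual machine's controllers ('demo-vm' is an example name):

```powershell
# Show every drive currently attached to the VM's IDE and SCSI controllers,
# including each drive's controller number and location.
(Get-VMIdeController -VMName 'demo-vm').Drives
(Get-VMScsiController -VMName 'demo-vm').Drives
```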

Removing a VHDX File from a Virtual Machine with Hyper-V Manager

On the virtual disk property screen in Hyper-V Manager (shown in Step 3 above), there is a Remove button. Clicking this and then Apply or OK will detach the VHDX from the virtual machine. The VHDX file is not damaged. The only change made is that the virtual machine’s GUID is no longer given NTFS permissions on the file.

Removing a VHDX File from a Virtual Machine with PowerShell

To remove a VHDX with PowerShell:
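Use the Remove-VMHardDiskDrive cmdlet, addressing the drive by its controller position (all values below are examples); the VHDX file itself is left untouched:

```powershell
# Detach the drive at SCSI controller 0, location 1 from the VM.
Remove-VMHardDiskDrive -VMName 'demo-vm' -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 1
```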

Use the Get-VMIdeController and Get-VMScsiController cmdlets as shown at the end of the section on connecting a VHDX in PowerShell to locate the VHDX to remove.

Notes About Attaching VHDX Files to Virtual Machines

This is a fairly straightforward process with little to add. There are a few points to be mindful of:

  • While not mentioned in this article, you could easily follow these directions and choose a physical disk instead of a virtual disk in step 6.
  • You cannot add a disk to an IDE controller while the virtual machine is in any state other than Off. This is due more to Microsoft’s adherence to the IDE specification than a true limitation of Hyper-V. Disks can be added to and removed from a SCSI controller at any time.
  • Adding a disk to a virtual machine requires it to have exclusive access, so it cannot be in use by another virtual machine. There is an exception with shared VHDX, but that is an advanced topic not covered here.
  • Hyper-V will automatically adjust the security of the virtual disk file as necessary. This involves adding the GUID of the new owning virtual machine to the disk’s access control list (ACL) directly. If any security-related problem arises with a virtual machine and its VHDX files, you can use this technique (detach the disk, then re-attach it) to quickly and easily correct the issue.

