How to Create Virtual Machine Templates with Hyper-V Manager

As an infrastructure hypervisor, Hyper-V hits all the high notes. However, it misses on some of the management aspects. You can find many control features in System Center Virtual Machine Manager, but I don’t feel that product was well-designed, and its pricing places it out of reach of many small businesses anyway. Often, we don’t even need a heavy management layer; sometimes just one or two desires go unmet by the free tools. Of those, admins most commonly request the ability to create and deploy templates. The free tools don’t directly include that functionality, but you can approximate it with only a bit of work.

The Concept of the Gold Image

You will be building “gold” or “master” (or even “gold master”) images as the core of this solution. This means that you’ll spend at least a little time configuring an environment (or several environments) to your liking. Instead of sending those directly to production, you’ll let them sit cold. When you want to deploy a new system, you use one of those as a base rather than building the instance up from scratch.

As you might have guessed, we do need to take some special steps with these images. They are not merely regular systems that have been turned off. We “generalize” them first, using a designated tool called “sysprep”. That process strips away all known unique identifiers for a Windows instance. The next time anyone boots that image, they’ll be presented with the same screens that you would see after freshly installing Windows. However, most non-identifying customization, such as software installations, will remain.

Do I Need Gold Images?

The simpler your environment, the less the concept of the gold image seems to fit. I wouldn’t write it off entirely, though. Even with rare usage, you can use a gold image to jump ahead of a lot of the drudgery of setting up a new system. If you deploy from the same image only twice, it will be worth your time.

For any environment larger than a few servers, the need for gold images becomes apparent quickly. Otherwise, you wind up spending significant amounts of time designing and deploying new systems. Since major parts of new server deployments share steps (and the equivalent involved time), you get the best usage by leveraging gold images.

Usually, the resistance to such images revolves around the work involved. People often don’t wish to invest much time in something whose final product will mostly just sit idle. I think that there’s also something to that “all-new” feeling of a freshly built image that you lose with gold images. The demands of modern business don’t really allow for these archaic notions. Do the work once, maybe some maintenance effort later, and ultimately save yourself and your colleagues many hours.

Should I Image Workstation or Server Environments?

The majority of my virtualization experience involves server instances. To that end, I’ve been using some sort of template strategy ever since I started using Hyper-V. I only build all-new images when new operating systems debut or significant updates release. Even if I wasn’t sure that I’d ever deploy a server OS more than once, I would absolutely build an image for it.

Workstation OSes have a different calculus. If you’ll be building a Microsoft virtual-machine RDS deployment, then you cannot avoid gold images. If you’re using only hardware deployments, then you might still image, but probably not the way that I’m talking about in this article. I will not illustrate workstation OSes, as the particulars of the process do not deviate meaningfully from server instances.

What About OS and Software Keys?

For operating systems, you have three basic types:

  • Keyed during install: This key will be retained after the sysprep, so you’ll need to use a key with enough remaining activations. KMS keys work best for this. With others, you’ll need to be prepared to change the key after deployment if the situation calls for it. If you have Windows Server Datacenter Edition as your hypervisor, then you can use the AVMA keys. If you don’t have DC edition, then you could technically still use the keys but you’ll have to immediately change it after deployment. I have no idea how this plays legally, so consider that a last-ditch risky move.
  • Keyed after install: This usually happens with volume licensing images. These are the best because you really don’t have to plan anything. Key it afterward. Of course, you also need to qualify for volume licensing in order to use this option at all, so…
  • OEM keys: I’m not even going to wade into that. Ask your reseller.
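Where a post-deployment key change is called for, the built-in slmgr script handles it. Note that the key below is a placeholder, not a real key:

```powershell
# Replace the current product key inside the deployed guest (run elevated).
# XXXXX-XXXXX-XXXXX-XXXXX-XXXXX is a placeholder -- substitute your own key.
slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

# Then attempt activation against Microsoft or your KMS host.
slmgr.vbs /ato
```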

If you use the ADK (revisited a bit in an upcoming section), you have ways to address key problems.

As for software, expect all sorts of complications. Most packages retain their keys through sysprep. Many have activation routines on top of that, with everything those entail. You will need to think it through and test. It will be worth the effort far more often than not.

What About Linux Gold Images?

Yes, you most certainly can create gold masters of Linux. In a way, it can be easier. Linux doesn’t use a lot of fancy computer identification techniques or have system-specific GUIDs embedded anywhere. Usually, you can duplicate a Linux system at will, then simply rename it and assign a new IP.

Unfortunately, that’s not always the case. Precisely because exceptions are so rare, there’s no single built-in tool to handle the items that need generalization. The only problem that I’ve encountered so far involves SSH host keys, which each clone needs to regenerate.
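A common set of steps, run as root inside the clone (assumes OpenSSH’s default key locations):

```shell
# Remove the host keys that were copied along with the gold image...
rm -f /etc/ssh/ssh_host_*

# ...and generate a fresh default set in /etc/ssh.
ssh-keygen -A

# On Debian-family distributions, "dpkg-reconfigure openssh-server"
# accomplishes the same thing. Restart the daemon to pick up the new keys.
systemctl restart sshd
```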

Creating Gold Images for Hyper-V Templating

The overall process:

  1. Create a virtual machine
  2. Install the operating system
  3. Customize
  4. Generalize
  5. Store
  6. Deploy

If that sounds familiar, you probably do something like that for physical systems as well.

Let’s go over the steps in more detail.

Creating the Virtual Machine and Installing the Operating System

You start by simply doing what you might have done any number of times before: create a new virtual machine. One thing really matters: the virtual machine generation. Whatever generation you choose for the gold image will be the generation of all virtual machines that you build on it. Sure, there are some conversion techniques… but why use them? If you will need Generation 1 and Generation 2 VMs, then build two templates.

The rest of the settings of the virtual machine that you use for creating a gold image do not matter (unless the dictates of some particular software package override). You have more than one option for image storage, but in all cases, you will deploy to unique virtual machines whose options can be changed.

Once you’ve got your virtual machine created, install Windows Server (or whatever) as normal (note, especially for desktop deployments: many guides mention booting in Audit mode, which I have never done; this appears to be most important when Windows Store applications are in use).

Customizing the Gold Image

If you’re working on your first image, I would not go very far with this. You want a generic image to start with. For initial images, I tend to insert things like BGInfo that I want on every system. You can then use this base image to create more specialized images.

I have plans for future articles that will expand on your options for customization. You can perform simple things, like installing software. You can do more complicated things, such as using the Windows Assessment and Deployment Kit (ADK). One of the several useful aspects of the ADK is the ability to control keying.

Tip: If you have software that requires .Net 3.5, you can save a great deal of time by having a branch of images that include that feature pre-installed:
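On a server image, the feature can be added like this (the D:\sources\sxs path assumes mounted install media; newer builds may be able to pull the payload from Windows Update instead):

```powershell
# Add .NET Framework 3.5 to the running gold image.
# D:\sources\sxs assumes the OS installation media is attached as drive D:.
Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs
```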

Just remember that you want to create generic images. Do not try to create a carbon-copy of an intended live system. If that’s your goal (say, for quick rollback to a known-good build), then create the image that you want as a live system and store a permanent backup copy. You could use an export if you like.

Very important: Patch the image fully, but only after you have all of the roles, features, and applications installed.

Generalize the Gold Image

Once you have your image built the way that you like it, you need to seal it. That process will make the image generic, freezing it into a state from which it can be repeatedly deployed. Windows (and Windows Server) includes that tool natively: sysprep.

The best way to invoke sysprep is by a simple command-line process. Use these switches:
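In full (the path assumes a default Windows installation):

```powershell
# /generalize strips unique identifiers, /oobe runs the out-of-box
# experience on next boot, /shutdown powers the guest off when finished,
# and /mode:vm skips hardware re-detection (virtual machines only).
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /mode:vm
```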


The first three parameters are standard. We can use the last one because we’re creating Hyper-V images. It will ensure that the image doesn’t spend a lot of time worrying about hardware.

Tip: If you want to use the aforementioned Audit Mode so that you can work with software packages, use /audit instead of /oobe.

Tip: You can also just run sysprep.exe to get the user interface where you can pick all of these options except the mode. Your image(s) will work just fine if you don’t use /mode:vm.

Once the sysprep operation completes, it will stop the virtual machine. At that point, consider it to be in a “cold” state. Starting it up will launch the continuation of a setup process. So, you don’t want to do that. Instead, store the image so that it can be used later.

Storing a Gold Image

Decide early how you want to deploy virtual machines from this image. You have the option of creating all-new virtual machines at each go and re-using a copy of the VHDX. Alternatively, you can import a virtual machine as a copy. I use both techniques, so I recommend export. That way, you’ll have the base virtual machine and the VHDX so you can use either as suits you.

Image storage tip: Use deduplicated storage. In my test lab, I keep mine on a volume using Windows Server deduplication in Hyper-V mode. That mode only targets VHDX files and was intended for running VDI deployments. It seems to work well for cold image storage as well. I have not tried the normal file mode.
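If you want to try the same approach, a sketch (the E: volume is an assumption; requires the Data Deduplication role service):

```powershell
# Install the deduplication feature, then enable Hyper-V mode on the
# volume that holds the gold images (E: is an assumption).
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume 'E:' -UsageType HyperV
```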

VHDX Copy Storage

If you only want to store the VHDX, then copy it to a safe location. Give the VHDX a very clear name as to its usage. Don’t forget that Windows allows you to use very, very long filenames. Delete the root virtual machine afterward and clean up after it.

The benefit of copy storage is that you can easily lay out all of your gold image VHDXs side-by-side in the same folder and not need to keep track of all of those virtual machine definition files and folders.

Exported Image Storage

By exporting the virtual machine, you can leverage import functionality to easily deploy virtual machines without much effort on your part. There are some downsides, but they’re not awful:

  • The export process takes a tiny bit of work. It’s not much, but…
  • When importing, the name of the VHDX cannot be changed. So, you wind up with a newly-deployed virtual machine that uses the same VHDX name as your gold image. That problem can be fixed, of course, but it’s extra work.

I discovered that we’ve never written an article on exporting virtual machines. I’ll rectify that in a future article and we’ll link this one to it. Fortunately, the process is not difficult. Start by right-clicking your virtual machine and clicking the Export option. Follow the wizard from there:
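If you prefer PowerShell to the wizard, the equivalent is a single cmdlet (the VM name and destination path are assumptions):

```powershell
# Export the gold VM (configuration plus VHDX) to a storage location.
Export-VM -Name 'gold-ws2019-base' -Path 'E:\GoldImages'
```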

Tip: Disconnect any ISO images prior to exporting. Otherwise, the export will make a copy of that ISO to go with the VM image and it will remain permanently attached and deployed with each import.

Deploying Hyper-V Virtual Machines from Gold Images

From these images that you created, you can build a new virtual machine and attach a copy of the VHDX or you can import a copy.

Deploying from a VHDX Copy

VHDX copy deployment has two pieces:

  • Creation of a virtual machine
  • Attaching to a copy of the gold image VHDX

We have an article on using Hyper-V Manager to create a virtual machine. Make sure to use the same generation as the gold image. On the disk page, skip attaching anything.

Remember that Hyper-V Manager doesn’t offer many configuration options during VM creation. Set up the memory, CPU, and network before proceeding.

To finish up, copy the gold image’s VHDX into whatever target location you like. You can use a general “Virtual Hard Disks” folder like the Hyper-V default settings do, or you can drop it in a folder named after the VM. It really doesn’t matter, as long as the Hyper-V host can reach the location. If it were me, I would also rename the VHDX copy to something that suits the VM.

Once you have the VHDX placed, use Hyper-V Manager to attach it to the virtual machine:
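In PowerShell, the equivalent attachment looks like this (VM name, controller, and path are assumptions):

```powershell
# Attach the renamed copy of the gold VHDX to the new VM's SCSI controller.
Add-VMHardDiskDrive -VMName 'newvm01' `
    -ControllerType SCSI -ControllerNumber 0 `
    -Path 'D:\VMs\newvm01\newvm01.vhdx'
```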

Once you hit OK, you can boot up the VM. It will start off like a newly-installed Windows machine, but all your customizations will be in place.

Deploying with Import

Importing saves you a bit of work in exchange for a bit of different but optional work.

We already have an article on importing (scroll down to the bottom 2/3 or so). I will only recap the most relevant portions.

First, choose the relevant source gold image.


Make especially certain that you choose Copy. Either of the other choices will cause problems for the gold image.

Now, the fun parts. It will import with the name of the exported VM. That’s probably not what you want. You’ll need to:

  • Rename the virtual machine
  • Rename the VHDX(s)
    1. Detach the VHDX(s)
    2. Rename it/them
    3. Reattach it/them
  • You may need to rename the folder(s), depending on how you deployed. That hasn’t been a problem for me, so far.
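Those rename steps can be scripted; the sketch below assumes the VM names and a single attached disk:

```powershell
# Rename the imported VM itself.
Rename-VM -Name 'gold-ws2019-base' -NewName 'newvm02'

# Detach the VHDX, rename the file, and reattach it.
$disk = Get-VMHardDiskDrive -VMName 'newvm02'
$disk | Remove-VMHardDiskDrive
$newPath = Join-Path -Path (Split-Path -Path $disk.Path) -ChildPath 'newvm02.vhdx'
Rename-Item -Path $disk.Path -NewName 'newvm02.vhdx'
Add-VMHardDiskDrive -VMName 'newvm02' -Path $newPath
```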

Post-Deployment Work

At this point, you are mostly finished. The one thing to keep in mind: the guest operating system will have a generic name unrelated to the virtual machine’s name. Don’t forget to fix that. Also, IP addresses will not be retained, etc.
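For the name, a quick fix from inside the guest (the new name is an assumption):

```powershell
# Give the guest OS a name that matches the VM, then reboot to apply it.
Rename-Computer -NewName 'newvm01' -Restart
```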

Further Work and Consideration

What I’ve shown you only takes you through some simplistic builds. You can really turn this into a powerhouse deployment system. Things to think about:

  • After you have a basic build, import it, customize it further and sysprep it again. Repeat as necessary.
  • Microsoft places limits on how many times an image can be sysprepped. Therefore, always try to work from the first image rather than from a deep child.
  • You can use the Mount-WindowsImage cmdlet family and DISM tools. Those allow you to patch and apply items to an image without decrementing the sysprep counter. For an example, I have a script that makes a VHDX into a client for a WSUS system.
  • You can mount a VHDX in Windows Explorer to move things into it.
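As an illustration of the offline-servicing idea, a hedged sketch (the paths and package name are assumptions):

```powershell
# Mount the cold VHDX's Windows image for offline servicing.
Mount-WindowsImage -ImagePath 'E:\GoldImages\gold.vhdx' -Index 1 -Path 'C:\Mount'

# Inject an update package without booting the image
# (and without consuming a sysprep pass).
Add-WindowsPackage -Path 'C:\Mount' -PackagePath 'E:\Patches\update.msu'

# Commit the changes and unmount.
Dismount-WindowsImage -Path 'C:\Mount' -Save
```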

Watch out for more articles that talk more about customizing your images. If you’re having trouble following any of the steps above, ping me a message in the comments below and I’ll help you out.

Why You Should Be Compacting Your Hyper-V Virtual Disks

Hyper-V’s dynamically expanding virtual hard disks (VHD/VHDX) provide numerous benefits. I suspect that most of us use them to achieve efficient use of physical storage. However, their convenience does come with a maintenance cost. They grow as needed, but they will never release any space unless you manually intervene (it could be scripted if you so desire). In this article, we’ll cover the reasons, prerequisites, and steps necessary to reclaim unused space from your dynamically-expanding VHDXs.

“Compact” or “Shrink” Hyper-V Virtual Hard Disks?

Sometimes definitions for “compact” and “shrink” get crossed when talking about VHDXs. They do not mean the same thing and you use completely different operations to achieve them.

Every virtual hard disk type (fixed, dynamically-expanding, and differencing) has a fixed upper size limit. “Shrinking” a VHDX reduces that limit. Any of the types can be shrunk. “Compacting” does not change the upper limit at all. Instead, it reduces the physical disk space consumed by a dynamically-expanding or differencing VHDX by removing empty blocks. I’ll explain what a “block” means to these VHDXs in the next section. You can run a compact operation on any of the types, but it will never have any effect on a fixed VHDX (the graphical interface screens will not even allow you to try). For more information on the shrink process, refer to our earlier article on the issue.

What Does “Compact” Mean for Hyper-V Virtual Hard Disks

Dynamically-expanding VHDXs function in a straightforward way. When created, each has a specific, permanently-set “block size”. When a guest operating system or some external action attempts to write information into the VHDX, the VHDX driver first checks whether the file contains sufficient space in unclaimed blocks for that data. If it does, then the write proceeds. If it does not, the VHDX driver expands the physical file by the minimum number of blocks needed to contain the write.

However, when a process deletes a file (or uses some other technique to clear data), the VHDX driver does not do anything with the block. Most importantly, it does not automatically remove empty blocks from the file. I’ve read a fair number of complaints on that, but they lack merit. To maintain such a thing would be inherently dangerous and prohibitively expensive in terms of IOPS.

Instead, we manually initiate the compact process. The system scans the data region of the VHDX, looking for completely empty blocks. When it finds one, it removes it from the file. I’m not aware of the precise mechanism that it uses, but I suspect that it simply finds the next used block and shifts it backward, eventually lopping off the end of the file.


Why Should I Compact Hyper-V Virtual Hard Disks?

Physical systems rarely use up all of their disk space. The same can be said for virtual systems. Therein lies the primary reason to use dynamically-expanding virtual hard disks. I especially like them for VHDXs that contain operating systems when you’ll be placing data on different VHDXs. I’ve got some VHDXs nearing five years of age that have a 60GB maximum but still use only around 25GB on disk. With many OS VHDXs, that quickly adds up to a great deal of saved storage. That’s only one example, though; I generally use dynamically-expanding VHDX in any case where I know it won’t cause problems (high-churn database systems would be a poor usage for them).

However, sometimes operations occur that consume an inordinate amount of disk space. Sometimes space is just used and there’s nothing you can do about it. Other times, the usage is only temporary. For example, you might copy several ISO files to the C: drive of a guest in order to perform a local software installation. If you’re doing in-place operating system upgrades, they’ll make a duplicate of the existing installation so that you can perform a rollback. A system may begin life with an oversized swap file that you later move to a different VHDX, leaving behind gigabytes of unused space. I always compact VHDXs with sysprepped operating systems that I will use as templates.

Why Should I NOT Compact Hyper-V Virtual Hard Disks?

“Premature optimization is the root of all evil.” — Donald Knuth

I often complain about administrators that arbitrarily guess that their disk performance needs will be so great that anything other than fixed (or, egads, pass-through) is unacceptable. At the other end of the spectrum, some administrators become agitated when their VHDXs consume even a few gigabytes above their minimum. Most dynamically-expanding VHDXs will eventually find some sort of “normal state” in which they can re-use space from deleted data (example: Windows Updates).

I can’t give you a precise rule for when a compact operation becomes viable, but do not use it to try to squeeze every last drop of space out of your system. You’ll eventually lose that battle one way or another. Do not compact over minor gains. Do not compact VHDXs that will never find any reasonable steady state (such as file servers). Do not compact as part of some routine maintenance cycle. If you have set up an automated script because your systems constantly approach the hard limits of your storage, spend the money to expand your storage. Compacting costs downtime and equipment wear, so it’s not free.

How Do I Use PowerShell to Compact a VHDX?

You can compact using GUI tools, but PowerShell provides the complete method. Remember two things:

  • You cannot compact a VHDX connected to a running virtual machine.
  • The computer system that runs the Optimize-VHD cmdlet must have the complete Hyper-V role installed. The PowerShell module alone does not contain the necessary system services. If you must run the compact operation on a system that does not have Hyper-V, you can use diskpart instead.

The process centers around the Optimize-VHD cmdlet. In general, you’ll want to mount the VHDX into the management operating system to get the best results. For example:
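A minimal pass, assuming a VHDX at a hypothetical path:

```powershell
# Mount read-only so the compact scan can query the guest file system.
Mount-VHD -Path 'D:\VMs\demo.vhdx' -ReadOnly
# Remove empty and unused blocks.
Optimize-VHD -Path 'D:\VMs\demo.vhdx' -Mode Full
# Detach the VHDX from the management operating system.
Dismount-VHD -Path 'D:\VMs\demo.vhdx'
```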

Be aware that even for very small files, the operation can take some time to complete. The amount of time depends mostly on your available CPU cycles and the speed of your disk subsystem.

A PowerShell Tool for Compacting VHDX Files

You can easily convert the above three lines into a re-usable utility:
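One possible shape (the function name and parameter are my own invention):

```powershell
function Compact-VHDX
{
    # Mounts a VHDX read-only, runs a full compact pass, and dismounts it.
    param(
        [Parameter(Mandatory = $true)]
        [String]$Path
    )

    Mount-VHD -Path $Path -ReadOnly
    try
    {
        Optimize-VHD -Path $Path -Mode Full
    }
    finally
    {
        # Always detach, even if the optimize pass fails.
        Dismount-VHD -Path $Path
    }
}
```

For example: `Compact-VHDX -Path 'D:\VMs\demo.vhdx'`.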

You can dot-source that or add it to your PowerShell profile to use it anytime.

What Do the Different VHDX Compact Modes Mean?

If you read the help or tab through the options for the -Mode parameter, you’ll see several choices. I feel like the built-in help describes them well enough, but more than a few people have gotten lost in terms with similar meanings. A bit of clarity:

  • An empty block means a continuous length of space within the VHDX that its header has designated as a block and contains all zeros. A single 1 anywhere in the entire block means that the VHDX driver will treat it as used.
  • An unused block may have some 1s, but the guest operating system’s file system has marked the space as not being used. This is common after file deletion; few operating systems will actually remove the bits from the disk. They simply mark it as unused in their file table(s). Two points on that:
    • At this time, the VHDX driver works most reliably with the NTFS file system. I have not yet tested for ReFS. It does not recognize any Linux file systems. In order for the system to best detect unused blocks, the VHDX must be mounted in read-only mode.
    • The VHDX driver also recognizes the Trim and Unmap commands. That allows it to work with your hardware to determine what blocks are unused.


Hopefully, that helps smooth out explanations of the modes:

  • Full: Full optimization requires the most time and resources. It will take out both empty and unused blocks. In order to detect unused blocks, the VHDX must be mounted in read-only mode. If the VHDX isn’t mounted, then it won’t be able to find unused blocks as easily (assume that it won’t find them at all).
  • Quick: The system only looks for unused blocks using the contained file system’s metadata. If you don’t mount the VHDX first, nothing will change.
  • Pretrimmed: Utilizes information from the trim/unmap commands to detect unused blocks. Does not look for empty blocks and does not query the contained file system for unused blocks.
  • Prezeroed: If the VHDX driver intercepted zero writes for existing blocks (such as from tools like sdelete), then it has already recorded in its own tables that they’re empty. Use this mode to remove only those blocks.
  • Retrim: Retrim reads from the VHDX file’s metadata for blocks marked as empty and reports them as trim or unmap commands to the underlying hardware.

Only the Full mode directly scans the file for empty blocks. The others work only with the VHDX and/or guest file system metadata.

How Do I Use Hyper-V Manager to Compact a VHDX?

Hyper-V Manager can compact a disconnected VHD/X or one attached to a powered-off virtual machine. It does not perform a very thorough job, though. I do not know which method it uses, but you’ll discover very quickly that it is not the equivalent of PowerShell’s “Full”. Whatever technique it uses, it will not automatically mount the VHDX.

You can access the Edit function directly on a VM’s disk property page or by going to the Edit Disk link in the main interface. Use the main interface for VHDXs that have no owning virtual machine.


Both ways invoke the same wizard.

  1. On the first screen, you select the target VHDX file. If you started from a VM’s property page, then the Location line is filled in and grayed out.
  2. Next, the wizard will ask what you want to do with the disk. Choose Compact. If the Compact option does not appear, the virtual machine is turned on or you selected a fixed disk.
  3. Click Finish on the final screen. You’ll get a small dialog that shows the progress.

Can I Compact a VHDX that has Differencing Children?

The system will not prevent you from compacting a VHDX that has children unless one of the children is in use. If you manage to compact the parent VHDX, you will break the parent-child relationship.

What About VHDXs that Contain Non-Microsoft File Systems?

The Full mode will detect fully empty blocks, but cannot interact with any non-Microsoft file system metadata to locate unused blocks. However, most modern filesystems now support trim/unmap. If you can perform that operation within the guest operating system, you can then perform a Full optimize pass to shrink the VHDX. I wrote an article on compacting VHDXs with Linux filesystems that has more information.

Why Does the Compact Operation Save Nothing or Very Little?

The simple answer is that the blocks are not empty. If a block contains a single non-zero bit, then the entire block must be kept. You cannot see the VHDX’s blocks from within the guest. Short of some sort of file scanner, I don’t know of any way to view the blocks aside from a hex editor.

Some tips on cleaning up space:

  • Delete as many files as possible
  • Clear the guest’s Recycle Bin(s)
  • Use utilities that write zeros to free space
  • Use a defragmentation tool that will consolidate free space

If you’re really stuck, you can use a tool like robocopy or rsync to move the files into a completely new VHDX.

What Block Sizes are Valid for a Dynamically-Expanding VHDX?

By default, all VHDXs have a 32 megabyte block size. You can only set it at creation time with New-VHD, using the BlockSizeBytes parameter. Unfortunately, the documentation doesn’t list the possible values, so finding them takes a bit of trial and error. I tried 1MB, 8MB, 16MB, 64MB, and 256MB sizes with success, in addition to the default 32MB. 512 bytes failed, as did 512MB.
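For instance, to create a dynamically-expanding VHDX with a 1MB block size (path and maximum size are assumptions):

```powershell
# BlockSizeBytes only applies at creation time; it cannot be changed later.
New-VHD -Path 'D:\VMs\linuxguest.vhdx' -SizeBytes 60GB -Dynamic -BlockSizeBytes 1MB
```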

How Can I Find a Dynamically-Expanding VHDX’s Block Size?

Only PowerShell’s Get-VHD can show you the block size.
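For example (path assumed):

```powershell
# BlockSize appears as a property on the object that Get-VHD returns.
Get-VHD -Path 'D:\VMs\demo.vhdx' | Select-Object Path, VhdType, BlockSize
```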



What Block Size Should I Use?

You have a basic trade-off: small block sizes result in more writes with less unused space. Larger block sizes result in fewer writes with potentially more unused space.

With small block sizes, scattered small files will be less likely to result in a VHDX with a great many almost-but-not-quite empty blocks. For the various ext* filesystems found on Linux, a 1MB block size can greatly reduce your VHDX sizes. The greatest downside is that you might get a lot of fragmentation of the physical VHDX. Then again, you might not. It depends on how the writes occur. Infrequent small writes will result in more fragmentation. If you use robust, modern storage, then fragmentation is not usually a major concern.

Large block sizes don’t have much to offer. If you were to store a rarely-accessed archival database in a dynamically-expanding VHDX with block sizes that match your guest OS’s allocation unit size, you might, in theory, see something improve. Probably not, though.

I recommend that you stick to the default 32MB block size for any guest that doesn’t use an ext filesystem. Use 1MB block sizes for those, if you remember.

How to Resize Virtual Hard Disks in Hyper-V 2016

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016) and Client Hyper-V (Windows 10) have this capability.

Requirements for Hyper-V Disk Resizing

If we only think of virtual hard disks as files, then we won’t have many requirements to worry about. We can grow both VHD and VHDX files easily. We can shrink VHDX files fairly easily. Shrinking VHD requires more effort. This article primarily focuses on growth operations, so I’ll wrap up with a link to a shrink how-to article.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

If a virtual hard disk belongs to a virtual machine, the rules change a bit.

  • If the virtual machine is Off, any of its disks can be resized (in accordance with the restrictions that we just mentioned)
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the virtual disk in question belongs to the virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the virtual disk in question belongs to the virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.



Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD. As of this writing, the documentation for that cmdlet says that it operates offline only. Ignore that. Resize-VHD works under the same restrictions outlined above.
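The sample below assumes a VHDX at a hypothetical path, grown to 40GB:

```powershell
# Grow the VHDX's maximum size to 40GB.
Resize-VHD -Path 'D:\VMs\demo.vhdx' -SizeBytes 40GB
```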

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to the VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (b and B both mean “byte”).

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.

This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). That’s a separate step.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is saved, start it.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink (VHDX only) the virtual hard disk. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    Enter the desired size and click Next.
  8. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition:


Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.

Of course, you could also create a new partition (or partitions) if you prefer.
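If you’d rather script the extension than click through Disk Management, the Windows Storage module can do the same thing from inside the guest. A minimal sketch, assuming the volume to grow is C: and the new free space sits adjacent to it:

```powershell
# Run inside the Windows guest: extend C: into all available contiguous free space
$MaxSize = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $MaxSize
```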

I have not performed this operation on any Linux guests, so I can’t tell you exactly what to do. The operation will depend on the file system and the tools that you have available. You can probably determine what to do with a quick Internet search.

VHDX Shrink Operations

I didn’t talk much about shrink operations in this article. Shrinking requires you to prepare the contained file system(s) before you can do anything in Hyper-V. You might find that you can’t shrink a particular VHDX at all. Rather than muddle this article with all of the necessary information, I’m going to point you to an earlier article that I wrote on this subject. That article was written for 2012 R2, but nothing has changed since then.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/VHDX and compacting a VHD/VHDX. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Look for a forthcoming article on that topic.

How to Perform Hyper-V Storage Migration

New servers? New SAN? Trying out hyper-convergence? Upgrading to Hyper-V 2016? Any number of conditions might prompt you to move your Hyper-V virtual machine’s storage to another location. Let’s look at the technologies that enable such moves.

An Overview of Hyper-V Migration Options

Hyper-V offers numerous migration options. Each has its own distinctive features. Unfortunately, we in the community often muck things up by using incorrect and confusing terminology. So, let’s briefly walk through the migration types that Hyper-V offers:

  • Quick migration: Cluster-based virtual machine migration that involves placing a virtual machine into a saved state, transferring ownership to another node in the same cluster, and resuming the virtual machine. A quick migration does not involve moving anything that most of us consider storage.
  • Live migration: Cluster-based virtual machine migration that involves transferring the active state of a running virtual machine to another node in the same cluster. A Live Migration does not involve moving anything that most of us consider storage.
  • Storage migration: Any technique that utilizes the Hyper-V management service to relocate any file-based component that belongs to a virtual machine. This article focuses on this migration type, so I won’t expand any of those thoughts in this list.
  • Shared Nothing Live Migration: Hyper-V migration technique between two hosts that does not involve clustering. It may or may not include a storage migration. The virtual machine might or might not be running. However, this migration type always includes ownership transfer from one host to another.

It Isn’t Called Storage Live Migration

I have always called this operation “Storage Live Migration”. I know lots of other authors call it “Storage Live Migration”. But, Microsoft does not call it “Storage Live Migration”. They just call it “Storage Migration”. The closest thing that I can find to “Storage Live Migration” in anything from Microsoft is a 2012 TechEd recording by Benjamin Armstrong. The title of that presentation includes the phrase “Live Storage Migration”, but I can’t determine if the “Live” just modifies “Storage Migration” or if Ben uses it as part of the technology name. I suppose I could listen to the entire hour and a half presentation, but I’m lazy. I’m sure that it’s a great presentation, if anyone wants to listen and report back.

Anyway, does it matter? I don’t really think so. I’m certainly not going to correct anyone that uses that phrase. However, the virtual machine does not necessarily need to be live. We use the same tools and commands to move a virtual machine’s storage whether it’s online or offline. So, “Storage Migration” will always be a correct term. “Storage Live Migration”, not so much. However, we use the term “Shared Nothing Live Migration” for virtual machines that are turned off, so we can’t claim any consistency.

What Can Be Moved with Hyper-V Storage Migration?

When we talk about virtual machine storage, most people think of the places where the guest operating system stores its data. That certainly comprises the physical bulk of virtual machine storage. However, it’s also only one bullet point on a list of multiple components that form a virtual machine.

Independently, you can move any of these virtual machine items:

  • The virtual machine’s core files (configuration in xml or .vmcx, .bin, .vsv, etc.)
  • The virtual machine’s checkpoints (essentially the same items as the preceding bullet point, but for the checkpoint(s) instead of the active virtual machine)
  • The virtual machine’s second-level paging file location. I have not tested to see if it will move a VM with active second-level paging files, but I have no reason to believe that it wouldn’t
  • Virtual hard disks attached to a virtual machine
  • ISO images attached to a virtual machine

We most commonly move all of these things together. Hyper-V doesn’t require that, though. Also, we can move all of these things in the same operation but distribute them to different destinations.

What Can’t Be Moved with Hyper-V Storage Migration?

In terms of storage, we can move everything related to a virtual machine. But, we can’t move the VM’s active, running state with Storage Migration. Storage Migration is commonly partnered with a Live Migration in the operation that we call “Shared Nothing Live Migration”. To avoid getting bogged down in implementation details that are more academic than practical, just understand one thing: when you pick the option to move the virtual machine’s storage, you are not changing which Hyper-V host owns and runs the virtual machine.

More importantly, you can’t use any Microsoft tool-based technique to separate a differencing disk from its parent. So, if you have an AVHDX (differencing disk created by the checkpointing mechanism) and you want to move it away from its source VHDX, Storage Migration will not do it. If you instruct Storage Migration to move the AVHDX, the entire disk chain goes along for the ride.

Uses for Hyper-V Storage Migration

Out of all the migration types, storage migration has the most applications and special conditions. For instance, Storage Migration is the only Hyper-V migration type that does not always require domain membership. Granted, the one exception to the domain membership rule won’t be very satisfying for people that insist on leaving their Hyper-V hosts in insecure workgroup mode, but I’m not here to please those people. I’m here to talk about the nuances of Storage Migration.

Local Relocation

Let’s start with the simplest usage: relocation of local VM storage. Some situations in this category:

  • You left VMs in the default “C:\ProgramData\Microsoft\Windows\Hyper-V” and/or “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks” locations and you don’t like it
  • You added new internal storage as a separate volume and want to re-distribute your VMs
  • You have storage speed tiers but no active management layer
  • You don’t like the way your VMs’ files are laid out
  • You want to defragment VM storage space. It’s a waste of time, but it works.

Network Relocation

With so many ways to do network storage, it’s nearly a given that we’ll all need to move a VHDX across ours at some point. Some situations:

  • You’re migrating from local storage to network storage
  • You’re replacing a SAN or NAS and need to relocate your VMs
  • You’ve expanded your network storage and want to redistribute your VMs

Most of the reasons listed under “Local Relocation” can also apply to network relocation.

Cluster Relocation

We can’t always build our clusters perfectly from the beginning. For the most part, a cluster’s relocation needs list will look like the local and network lists above. A few others:

  • Your cluster has new Cluster Shared Volumes that you want to expand into
  • Existing Cluster Shared Volumes have a data distribution that does not balance well. Remember that data access from a CSV owner node is slightly faster than from a non-owner node

The reasons matter less than the tools when you’re talking about clusters. You can’t use the same tools and techniques to move virtual machines that are protected by Failover Clustering under Hyper-V as you use for non-clustered VMs.

Turning the VM Off Makes a Difference for Storage Migration

You can perform a very simple experiment: perform a Storage Migration for a virtual machine while it’s on, then turn it off and migrate it back. The virtual machine will move much more quickly while it’s off. This behavior can be explained in one word: synchronization.

When the virtual machine is off, a Storage Migration is essentially a monitored file copy. The ability of the constituent parts to move bits from source to destination sets the pace of the move. When the virtual machine is on, all of the rules change. The migration is subjected to these constraints:

  • The virtual machine’s operating system must remain responsive
  • Writes must be properly captured
  • Reads must occur from the most appropriate source

Even if the guest operating system does not experience much activity during the move, that condition cannot be taken as a constant. In other words, Hyper-V needs to be ready for it to start demanding lots of I/O at any time.

So, the Storage Migration of a running virtual machine will always take longer than the Storage Migration of a virtual machine in an off or saved state. You can choose the convenience of an online migration or the speed of an offline migration.

Note: You can usually change a virtual machine’s power state during a Storage Migration. It’s less likely to work if you are moving across hosts.

How to Perform Hyper-V Storage Migration with PowerShell

The nice thing about using PowerShell for Storage Migration: it works for all Storage Migration types. The bad thing about using PowerShell for Storage Migration: it can be difficult to get all of the pieces right.

The primary cmdlet to use is Move-VMStorage. If you will be performing a Shared Nothing Live Migration, you can also use Move-VM. The parts of Move-VM that pertain to storage match Move-VMStorage. Move-VM has uses, requirements, and limitations that don’t pertain to the topic of this article, so I won’t cover Move-VM here.

A Basic Storage Migration in PowerShell

Let’s start with an easy one. Use this when you just want all of a VM’s files to be in one place:
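A minimal sketch of that command, using the VM name and destination folder discussed next:

```powershell
# Move every file-based component of testvm under C:\LocalVMs
Move-VMStorage -VMName 'testvm' -DestinationStoragePath 'C:\LocalVMs'
```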

This will move the virtual machine named testvm so that all of its components reside under the C:\LocalVMs folder. That means:

  • The configuration files will be placed in C:\LocalVMs\Virtual Machines
  • The checkpoint files will be placed in C:\LocalVMs\Snapshots
  • The VHDXs will be placed in C:\LocalVMs\Virtual Hard Disks
  • Depending on your version, an UndoLog Configuration folder will be created if it doesn’t already exist. The folder is meant to contain Hyper-V Replica files. It may be created even for virtual machines that aren’t being replicated.

Complex Storage Migrations in PowerShell

For more complicated move scenarios, you won’t use the DestinationStoragePath parameter. You’ll use one or more of the individual component parameters. Choose from the following:

  • VirtualMachinePath: Where to place the VM’s configuration files.
  • SnapshotFilePath: Where to place the VM’s checkpoint files (again, NOT the AVHDXs!)
  • SmartPagingFilePath: Where to place the VM’s smart paging files
  • Vhds: An array of hash tables that indicate where to place individual VHD/X files.

Some notes on these items:

  • You are not required to use all of these parameters. If you do not specify a parameter, then its related component is left alone. Meaning, it doesn’t get moved at all.
  • If you’re trying to use this to get away from those auto-created Virtual Machines and Snapshots folders, it doesn’t work. They’ll always be created as sub-folders of whatever you type in.
  • It doesn’t auto-create a Virtual Hard Disks folder.
  • If you were curious whether or not you needed to specify those auto-created subfolders, the answer is: no. Move-VMStorage will always create them for you (unless they already exist).
  • The VHDs hash table is the hardest part of this whole thing. I’m usually a PowerShell-first kind of guy, but even I tend to go to the GUI for Storage Migrations.

The following will move all components except VHDs, which I’ll tackle in the next section:
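A sketch of such a move, reusing the testvm name from earlier; the D:\VMData destination is a placeholder:

```powershell
# Moves configuration, checkpoint, and smart paging files; the VHDs stay put
Move-VMStorage -VMName 'testvm' -VirtualMachinePath 'D:\VMData' -SnapshotFilePath 'D:\VMData' -SmartPagingFilePath 'D:\VMData'
```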

Move-VMStorage’s Array of Hash Tables for VHDs

The three …FilePath parameters are easy: just specify the path. The Vhds parameter is tougher. It is one or more hash tables inside an array.

First, the hash tables. A hash table is a custom object that looks like an array, but each entry has a unique name. The hash tables that Vhds expects have a SourceFilePath entry and a DestinationFilePath entry. Each must be fully-qualified for a file. A hash table is contained like this: @{ }. The name of an entry and its value are joined with an =. Entries are separated by a semicolon (;). So, if you want to move the VHDX named svtest.vhdx from \\svstore\VMs to C:\LocalVMs\testvm, you’d use this hash table:
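Using the file and paths named above:

```powershell
@{ SourceFilePath = '\\svstore\VMs\svtest.vhdx'; DestinationFilePath = 'C:\LocalVMs\testvm\svtest.vhdx' }
```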

Reading that, you might ask (quite logically): “Can I change the name of the VHDX file when I move it?” The answer: No, you cannot. So, why then do you need to enter the full name of the destination file? I don’t know!

Next, the arrays. An array is bounded by @( ). Its entries are separated by commas. So, to move two VHDXs, you would do something like this:
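A sketch with the svtest.vhdx move from above plus a second, hypothetical data disk:

```powershell
@(
    @{ SourceFilePath = '\\svstore\VMs\svtest.vhdx'; DestinationFilePath = 'C:\LocalVMs\testvm\svtest.vhdx' },
    @{ SourceFilePath = '\\svstore\VMs\svtest-data.vhdx'; DestinationFilePath = 'C:\LocalVMs\testvm\svtest-data.vhdx' }
)
```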

I broke that onto multiple lines for legibility. You can enter it all on one line. Note where I used parentheses and where I used curly braces.

Tip: To move a single VHDX file, you don’t need to do the entire array notation. You can use the first example with Vhds.

A Practical Move-VMStorage Example with Vhds

If you’re looking at all that and wondering why you’d ever use PowerShell for such a thing, I have the perfect answer: scripting. Don’t do this by hand. Use it to move lots of VMs in one fell swoop. If you want to see a plain example of the Vhds parameter in action, the Get-Help examples show one. I’ve got a more practical script in mind.

The following would move all VMs on the host. All of their config, checkpoint, and second-level paging files will be placed on a share named “\\vmstore\slowstorage”. All of their VHDXs will be placed on a share named “\\vmstore\faststorage”. We will have PowerShell deal with the source paths and file names.
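A sketch of such a script, built from the parameters described in this article; the two share names come from the paragraph above, and everything else is derived from each VM at run time:

```powershell
# For every VM on the host: core files to slow storage, VHDs to fast storage
foreach ($VM in Get-VM)
{
    # Splatted parameter set; Vhds is only added when the VM actually has disks
    $MoveParameters = @{
        VM                  = $VM
        VirtualMachinePath  = '\\vmstore\slowstorage'
        SnapshotFilePath    = '\\vmstore\slowstorage'
        SmartPagingFilePath = '\\vmstore\slowstorage'
    }
    $VHDs = @()
    foreach ($VHD in ($VM | Get-VMHardDiskDrive))
    {
        # Keep the original file name; only the folder changes
        $VHDs += @{
            SourceFilePath      = $VHD.Path
            DestinationFilePath = Join-Path -Path '\\vmstore\faststorage' -ChildPath (Split-Path -Path $VHD.Path -Leaf)
        }
    }
    if ($VHDs.Count -gt 0)
    {
        $MoveParameters.Add('Vhds', $VHDs)
    }
    Move-VMStorage @MoveParameters
}
```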

I used splatting for the parameters for two reasons: 1, legibility. 2, to handle VMs without any virtual hard disks.

How to Perform Hyper-V Storage Migration with Hyper-V Manager

Hyper-V Manager can only be used for non-clustered virtual machines. It utilizes a wizard format. To use it to move a virtual machine’s storage:

  1. Right-click on the virtual machine and click Move.
  2. Click Next on the introductory page.
  3. Change the selection to Move the virtual machine’s storage (the same storage options would be available if you moved the VM’s ownership, but that’s not part of this article)
  4. Choose how to perform the move. You can move everything to the same location, you can move everything to different locations, or you can move only the virtual hard disks.
  5. What screens you see next will depend on what you chose. We’ll cover each branch.

If you opt to move everything to one location, the wizard will show you this simple page:


If you choose the option to Move the virtual machine’s data to different locations, you will first see this screen:


For every item that you check, you will be given a separate screen where you indicate the desired location for that item. The wizard uses the same screen for these items as it does for the hard-disks only option. I’ll show its screen shot next.

If you choose Move only the virtual machine’s virtual hard disks, then you will be given a sequence of screens where you instruct it where to move the files. These are the same screens used for the individual components from the previous selection:


After you make your selections, you’ll be shown a summary screen where you can click Finish to perform the move:


How to Perform Hyper-V Storage Migration with Failover Cluster Manager

Failover Cluster Manager uses a slick single-screen interface to move storage for cluster virtual machines. To access it, simply right-click a virtual machine, hover over Move, and click Virtual Machine Storage. You’ll see the following screen:


If you just want to move the whole thing to one of the displayed Cluster Shared Volumes, just drag and drop it down to that CSV in the Cluster Storage heading at the lower left. You can drag and drop individual items or the entire VM. The Destination Folder Path will be populated accordingly.

As you can see in mine, I have all of the components except the VHD on an SMB share. I want to move the VHD to be with the rest. To get a share to show up, click the Add Share button. You’ll get this dialog:


The share will populate underneath the CSVs in the lower left. Now, I can drag and drop that file to the share. View the differences:


Once you have the dialog the way that you like it, click Start.

Comparing Hyper-V Generation 1 and 2 Virtual Machines


The 2012 R2 release of Hyper-V introduced a new virtual machine type: Generation 2. The words in that designation don’t convey much meaning. What are the generations? What can the new generation do for you? Are there reasons not to use it?

We’ll start with an overview of the two virtual machine types separately.

Generation 1 Virtual Machines: The Legacy

The word “legacy” often invokes a connotation of “old”. In the case of the Generation 1 virtual machine, “old” paints an accurate picture. The virtual machine type isn’t that old, of course. However, the technology that it emulates has been with us for a very long time.


BIOS stands for “Basic Input/Output System”, which doesn’t entirely describe what it is or what it does.

A computer’s BIOS serves two purposes:

  1. Facilitates the power-on process of a computer. The BIOS initializes all of the system’s devices, then locates and loads the operating system.
  2. Acts as an interface between the operating system and common hardware components. Even though there are multiple vendors supplying their own BIOSes, all of them provide the same command set for operating systems to access. Real mode operating systems used those common BIOS calls to interact with keyboards, disk systems, and text output devices. Protected mode operating systems do not use BIOS for this purpose. Instead, they rely on drivers.

Hyper-V creates a digital BIOS that it attaches to all Generation 1 virtual machines.

Emulated Hardware

A virtual machine is a fake computer. Faking a computer is hard. A minimally functional computer requires several components. Have you ever looked at a motherboard and wondered what all of those chips do? I’m sure you recognize the CPU and the memory chips, but what about the others? Each has its own important purpose. Each contributes something. A virtual machine must fake them all.

One of the ways a virtual machine can fake hardware is emulation. Nearly every computer component is a digital logical device. That means that each of them processes data in binary using known, predictable methods. Since we know what those components do and since they accept and produce binary data, we can make completely digital copies of them. When we do that, we say that we have emulated that hardware. Emulated hardware is a software construct that produces behavior that is identical to the “real” item. If you look at Device Manager inside a Generation 1 virtual machine, you can see evidence of emulation:


Digitally, the IDE controller in a Hyper-V virtual machine behaves exactly like the Intel 82371AB/EB series hardware. Because almost all operating systems include drivers that can talk to Intel 82371AB/EB series hardware, they can immediately work inside a Hyper-V Generation 1 VM.

Emulated hardware provides the benefit of widespread compatibility. Very few operating systems exist that can’t immediately work with these devices. They also tend to work in the minimalist confines of PXE (pre-boot execution environment). For this reason, you’ll often see requirements to use a Generation 1 virtual machine with a legacy network adapter. The PXE system can identify and utilize that adapter; it cannot recognize the newer synthetic adapter.

Generation 2 Virtual Machines: A Step Forward

BIOS works very well, but has a number of limitations. On that list, the most severe is security; BIOS knows to load boot code from devices and that’s it. It cannot make any judgment on whether or not the boot code that it found should be avoided. When it looks for an operating system on a hard disk, that hard disk must use a master boot record (MBR) partition layout, or BIOS won’t understand what to do. MBR imposes a limit of four partitions and 2TB of space.


Enter the Unified Extensible Firmware Interface (UEFI). As a successor to BIOS, it can do everything that BIOS can do. On some hardware systems, it can emulate a BIOS when necessary. There are three primary benefits to choosing UEFI over BIOS:

  1. Secure Boot. UEFI can securely store an internal database of signatures for known good boot loaders. If a boot device presents a boot loader that the UEFI system doesn’t recognize, it will refuse to boot. Secure Boot can be an effective shield against root kits that hijack the boot loader.
  2. GPT disk layout. The GUID partition table system (GPT) has been available for some time, but only for data disks. BIOS can’t boot to it. UEFI can. GPT allows for 128 partitions and a total disk size of 8 zettabytes, dramatically surpassing MBR.
  3. Extensibility. Examine the options available in the firmware screens of a UEFI physical system. Compare them to any earlier BIOS-only system. UEFI allows for as many options as the manufacturer can fit onto their chips. Support for hardware that didn’t exist when those chips were soldered onto the mainboard might be the most important.

When you instruct Hyper-V to create a Generation 2 virtual machine, it uses a UEFI construct.

Synthetic Hardware

Synthetic hardware diverges from emulated hardware in its fundamental design goal. Emulated hardware pretends to be a known physical device to maximize compatibility with guest operating environments and systems. Hypervisor architects design synthetic hardware to maximize interface capabilities with the hypervisor. They release drivers for guest operating systems to address the compatibility concerns. The primary benefits of synthetic hardware:

  • Controlled code base. When emulating hardware, you’re permanently constrained by that hardware’s pre-existing interface. With synthetic hardware, you’re only limited to what you can build.
  • Tight hypervisor integration. Since the hypervisor architects control the hypervisor and the synthetic device, they can build them to directly interface with each other, bypassing the translation layers necessary to work with emulated hardware.
  • Performance. Synthetic hardware isn’t always faster than emulated hardware, but the potential is there. In Hyper-V, the SCSI controller is synthetic whereas the IDE controller is emulated, but performance differences can only be detected under extremely uncommon conditions. Conversely, the synthetic network adapter is substantially faster than the emulated legacy adapter.

Generation 2 virtual machines can use less emulated hardware because of their UEFI platform. They can boot from the SCSI controller because UEFI understands how to communicate with it; BIOS does not. They can boot using PXE with a synthetic network adapter because UEFI understands how to communicate with it; BIOS does not.

Reasons to Use Generation 1 Over Generation 2

There are several reasons to use Generation 1 instead of Generation 2:

  • Older guest operating systems. Windows/Windows Server operating systems prior to Vista/2008 did not understand UEFI at all. Windows Vista/7 and Windows Server 2008/2008 R2 do understand UEFI, but require a particular component that Hyper-V does not implement. Several Linux distributions have similar issues.
  • 32-bit guest operating systems. UEFI began life with 64-bit operating systems in mind. Physical UEFI systems will emulate a BIOS mode for 32-bit operating systems. In Hyper-V, Generation 1 is that emulated mode.
  • Software vendor requirements. A number of systems built against Hyper-V were designed with the limitations of Generation 1 in mind, especially if they target older OSes. Until their manufacturers update for newer OSes and Generation 2, you’ll need to stick with Generation 1.
  • VHD requirements. If there is any requirement at all to use VHD instead of VHDX, Generation 1 is required. Generation 2 VMs will not attach VHDs. If a VHDX doesn’t exceed VHD’s limitations, it can be converted. That’s certainly not convenient for daily operations.
  • Azure interoperability. At this time, Azure uses Generation 1 virtual machines. You can use Azure Recovery Services with a Generation 2 virtual machine, but it is down-converted to Generation 1 when you fail to Azure and then up-converted back to Generation 2 when you fail back. If I had any say in designing Azure-backed VMs, I’d just use Generation 1 to make things easier.
  • Virtual floppy disk support. Generation 2 VMs do not provide a virtual floppy disk. If you need one, then you also need Generation 1.
  • Virtual COM ports. Generation 2 VMs do not provide virtual COM ports, either.

I’ll also add that a not-insignificant amount of anecdotal evidence exists that suggests stability problems with Generation 2 virtual machines. In my own experiences, I’ve had Linux virtual machines lose vital data in their boot loaders that I wasn’t able to repair. I’ve also had some networking glitches in Generation 2 VMs that I couldn’t explain that disappeared when I rebuilt the guests as Generation 1. From others, I’ve heard of VHDX performance variances and some of the same issues that I’ve seen. These reports are not substantiated, not readily reproducible, and not consistent. I’ve also had fewer problems using 2016 and newer Linux kernels with Generation 2 VMs.

Reasons to Use Generation 2 Over Generation 1

For newer builds that do not have any immediately show-stopping problems, I would default to using Generation 2. Some concrete reasons:

  • Greater security through Secure Boot. Secure Boot is the primary reason many opt for Generation 2. There are at least two issues with that, though:
    • Admins routinely make Secure Boot pointless. Every time someone says that they have a problem with Secure Boot, the very first suggestion is: “disable Secure Boot”. If that’s going to be your choice, then just leave Secure Boot unchecked. Secure Boot has exactly one job: preventing a virtual machine from booting from an unrecognized boot loader. If you’re going to stop it from doing that job, then it’s pointless to enable it in the first place.
    • Secure Boot might not work. Microsoft made a mistake. Maybe ensuring that your Hyper-V host stays current on patches will prevent this from negatively affecting you. Maybe it won’t.
  • Greater security through TPM, Device Guard, and Credential Guard. Hyper-V 2016 can project a virtual Trusted Platform Module (TPM) into Generation 2 virtual machines. If you can make use of that, Generation 2 is the way to go. I have not yet spent much time exploring Device Guard or Credential Guard, but here’s a starter article if you’re interested:
  • Higher limits on vCPU and memory assignment. 2016 adds support for extremely high quantities of vCPU and RAM for Generation 2 VMs. Most of you won’t be building VMs that large, but for the rest of you…
  • PXE booting with a synthetic adapter. Generation 1 VMs require a legacy adapter for PXE booting. That adapter is quite slow. Many admins will deal with this by using both a legacy adapter and a synthetic adapter, or by removing the legacy adapter post-deployment. Generation 2 reduces this complexity by allowing PXE-booting on a synthetic adapter.
  • Slightly faster boot. I’m including this one mainly for completeness. UEFI does start more quickly than BIOS, but you’d need to reboot a VM very frequently for it to make a real difference.

How to Convert from Generation 1 to Generation 2

One of the many nice things about Hyper-V is the versatility of its virtual hard disks. You can pop them off of one virtual controller and slap them onto another, no problem. You can disconnect them from one VM and attach them to another, no problem. Unless it’s the boot/system disk. Then, often, many problems. Generation 1 and 2 VMs differ in several ways, but boot/system disk differences pose the biggest obstacle when trying to move between them.

I do not know of any successful efforts made to convert from Generation 2 to Generation 1. It’s possible on paper, but it would not be easy.

You do have options if you want to move from Generation 1 to Generation 2. The most well-known is John Howard’s PowerShell “Convert-VMGeneration” solution. This script has a lot of moving parts and does not work for everyone. Do not expect it to serve as a magic wand. Microsoft does not provide support for Convert-VMGeneration.

Microsoft has released an official tool called MBR2GPT. It cannot convert a virtual machine at all, but it can convert an MBR VHDX to GPT. It’s only supported on Windows 10, though, and it was not specifically intended to facilitate VM generation conversion. To use it for that purpose, I would detach the VHDX from the VM, copy it, mount it on a Windows 10 machine, and run MBR2GPT against the copy. Then, I would create an all-new Generation 2 VM and attach the converted disk to it. If it didn’t work, at least I’d still have the original copy.

Keep in mind that any such conversion is a major change for the guest operating system. Windows has never been fond of changes to its boot system. Conversion to GPT is more invasive than most other changes. Be pleasantly surprised at every successful conversion.


Hyper-V Differencing Disks Explained

Usually when we talk about Hyper-V’s virtual disk types, we focus on fixed and dynamically expanding. There’s another type that enjoys significantly less press: differencing disks. Administrators don’t deal directly with differencing disks as often as they work with the other two types, but they are hardly rare. Your Hyper-V knowledge cannot be complete without an understanding of the form and function of differencing disks, so let’s take a look.

What are Hyper-V Differencing Disks?

A differencing disk contains block data that represents changes to a parent virtual hard disk. The salient properties of differencing disks are:

  • A differencing disk must have exactly one parent. No more, no less.
  • The parent of a differencing disk must be another virtual hard disk. You cannot attach them to pass-through disks, a file system, a LUN, a remote share, or anything else.
  • The parent of a differencing disk can be any of the three types (fixed, dynamically expanding, or differencing)
  • Any modification to the data of the parent of a differencing disk effectively orphans the differencing disk, rendering it useless
  • Hyper-V can merge the change data back into the parent, destroying the differencing disk in the process. For Hyper-V versions past 2008 R2, this operation can take place while the disk is in use

Typically, differencing disks are small. They can grow, however. They can grow to be quite large. The maximum size of a differencing disk is equal to the maximum size of the root parent. I say “root” because, even though a differencing disk can be the parent of another differencing disk, there must be a non-differencing disk at the very top for any of them to be useful. Be aware that a differencing disk attached to a dynamically expanding disk does have the potential to outgrow its parent, if that disk isn’t fully expanded.

How Do Differencing Disks Work?

The concept behind the functioning of a differencing disk is very simple. When Hyper-V needs to write to a virtual disk that has a differencing child, the virtual disk driver redirects the write into the differencing disk. It tracks which block(s) in the original file were targeted and what their new contents are.

Differencing Disk Write

The most important thing to understand is that the virtual disk driver makes a choice to write to the differencing disk. The file itself is not marked read-only. You cannot scan the file and discover that it has a child. The child knows who its parent is, but that knowledge is not reciprocated.

Writes are the hard part to understand. If you’ve got that down, then reads are easy to understand. When the virtual machine requests data from its disk, the virtual disk driver first checks to see if the child has a record of the requested block(s). If it does, then the child provides the data for the read. If the child does not have a record of any changes to the block(s), the virtual disk driver retrieves them from the parent.
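As a toy model (purely illustrative; this is not how the virtual disk driver is actually implemented), the redirection logic looks something like this in PowerShell, with hashtables standing in for the parent and child files:

```powershell
# Toy model of differencing-disk behavior; block maps stand in for VHDX files
$parent = @{ 0 = 'A'; 1 = 'B'; 2 = 'C' }    # parent's data, effectively frozen
$child  = @{ }                               # differencing disk starts empty

function Write-Block($Block, $Data) {
    $child[$Block] = $Data                   # writes always land in the child
}
function Read-Block($Block) {
    if ($child.ContainsKey($Block)) { $child[$Block] }   # child holds a newer copy
    else { $parent[$Block] }                 # otherwise fall through to the parent
}

Write-Block 1 'B2'
Read-Block 1    # returns 'B2' from the child
Read-Block 0    # returns 'A' from the untouched parent
```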

This is a Hyper-V blog, so I mostly only talk about Hyper-V. However, the virtual disk driver is part of the Windows operating system. The normal tools that you have access to in Windows without Hyper-V cannot create a differencing disk, but you can mount one as long as its parent is present.

How are Differencing Disks Created?

Unlike fixed and dynamically expanding virtual hard disks, you don’t simply kick off a wizard and create a differencing disk from scratch. In fact, most Hyper-V administrators will never directly create a differencing disk at all. There are four generic methods by which differencing disks are created.

Backup Software

For most of us, backup software is the most likely source of differencing disks. When a Hyper-V aware backup application targets a virtual machine, Hyper-V will take a special checkpoint. While the disk and the state of the virtual machine are frozen in the checkpoint, the backup application can copy the contents without fear that they’ll change. When the backup is complete, Hyper-V deletes the checkpoint and merges the differencing disk that it created back into its parent. If it doesn’t, you have a problem and will need to talk to your backup vendor.

Note: backup software operations will always create differencing disks in the same location as the parent. You cannot override this behavior!

Standard and Production Checkpoints

Standard and Production Checkpoints are created by administrators, either manually or via scripts and other automated processes. As far as the disks are concerned, there isn’t much difference between any of the checkpoint types. Unlike backup checkpoints, Hyper-V will not automatically attempt to clean up standard or production checkpoints. That’s something that an administrator must do, also manually or via scripts and other automated processes.

Note: checkpoint operations will always create differencing disks in the same location as the parent. You cannot override this behavior!

Pooled Remote Desktop Services

For the rest of this article, I’m going to pretend that this method doesn’t exist. If you run a full-blown Remote Desktop Services (RDS) deployment for your virtual desktop infrastructure (VDI), then it’s using differencing disks. Your gold master is the source, and all of the virtual machines that users connect to are built on differencing disks. When a user’s session ends, the differencing disk is destroyed.

Manual Creation

Of the four techniques to create a differencing virtual hard disk, manual creation is the rarest. There aren’t a great many uses for this ability, but you might need to perform an operation similar to the gold master with many variants technique employed by VDI. It is possible to create many differencing disks from a single source and connect separate virtual machines to them. It can be tough to manage, though, and there aren’t any tools to aid you.

You can create a differencing disk based on any parent virtual hard disk using PowerShell or Hyper-V Manager.

Creating a Hyper-V Differencing Virtual Hard Disk with PowerShell

The New-VHD cmdlet is the tool for this job:
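A minimal example (the file names and paths here are purely illustrative):

```powershell
# Create a differencing disk whose parent is an existing virtual hard disk
New-VHD -Path 'C:\VMs\Virtual Hard Disks\child.vhdx' `
        -ParentPath 'C:\VMs\Virtual Hard Disks\root.vhdx' `
        -Differencing
```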

Don’t forget to use tab completion, especially with ParentPath.

Creating a Hyper-V Differencing Virtual Hard Disk with Hyper-V Manager

Use the new virtual hard disk wizard in Hyper-V Manager to create a differencing disk:

  1. In Hyper-V Manager, right-click the host that you want to create the disk on, or use the Action pane at the far right. Click New, then Hard Disk.
  2. Click Next on the informational screen.
  3. Choose VHD or VHDX. The differencing disk’s type must match its parent’s type. You cannot make a differencing disk of a VHDS file.
  4. Choose Differencing.
  5. Enter the file name and path of the differencing disk that you want to create.
  6. Select the source virtual hard disk.
  7. Check your work and click Finish if you’re satisfied, or go Back and fix things.

How Manual Differencing Disk Creation is Different

A differencing disk is a differencing disk; no matter how you create one, they are all technologically identical. There are environmental differences, however. Keep these things in mind:

  • Hyper-V will automatically use the differencing disks created by backup, standard, and production checkpoints. It will retarget the connected virtual machine as necessary. No such automatic redirection occurs when you manually create a differencing disk. Remember that modifying a virtual hard disk that has differencing disks will render the children useless.
  • During manual creation of a differencing disk, you can specify a different target path for it. While convenient, it’s tougher to identify that a virtual hard disk has children when they’re not all together.
  • Hyper-V Manager maintains a convenient tree view of standard and production checkpoints. Manually created differencing disks have no visual tools.
  • The checkpointing system will conveniently prepend an “A” (for “automatic”) to the extensions of the differencing disks it creates (and give them bizarre base file names). Both Hyper-V Manager and PowerShell will get upset if you attempt to use AVHD or AVHDX as an extension for a manually-created differencing disk. That makes sense, since “A” is for automatic, and automatic is the antonym of “manual”. Unfortunately, these tools are not supportive of “MVHD” or “MVHDX” extensions, either. If you do not give a manually-created differencing disk an obvious base name, you could cause yourself some trouble.

You can use PowerShell to detect the differencing disk type and its parent:
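For example (path illustrative):

```powershell
# VhdType reports "Differencing" for a differencing disk;
# ParentPath shows the parent virtual hard disk
Get-VHD -Path 'C:\VMs\Virtual Hard Disks\child.vhdx' |
    Select-Object Path, VhdType, ParentPath
```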


The Inspect function in Hyper-V Manager does the same thing. You can find this function in the same Action menu that you used to start the disk creation wizard.


I also wrote a PowerShell script that can plumb a VHD/X file for parent information. It’s useful when you don’t have the Hyper-V role enabled, because none of the above utilities can function without it. Head over to my orphaned Hyper-V file locator script and jump down to the Bonus Script heading. It’s a PowerShell veneer over .Net code, so it will also be of use if you’re looking to do something like that programmatically.

Merging Manually-Created Differencing Disks

Now that you know how to create differencing disks, you need to know how to merge them. Ordinarily, you’ll merge them back into their parents. You also have the option to create an entirely new disk that combines the parent and child without modifying either. The merge process can also be done in PowerShell or Hyper-V Manager, but the two tools have different feature sets.

Warning: Never use these techniques to merge a differencing disk that is part of a checkpointed VM back into its parent! Delete the checkpoint to merge the differencing disk instead. It is safe to merge a checkpointed VM’s disk into a different disk.

Merging a Hyper-V Differencing Virtual Hard Disk with PowerShell

Use the aptly-named Merge-VHD cmdlet to transfer the contents of the differencing disk into its parent:
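For example (path illustrative):

```powershell
# Merges child.vhdx into its parent, then deletes child.vhdx
Merge-VHD -Path 'C:\VMs\Virtual Hard Disks\child.vhdx'
```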

The differencing disk is destroyed at the end of this operation.

PowerShell cannot be used to create a completely new target disk, for some reason. It does include a DestinationPath parameter, but that can only be used to skip levels in the differencing chain. For instance, let’s say that you have a root.vhdx with child diff1.vhdx that has its own child diff2.vhdx that also has its own child diff3.vhdx. You can use Merge-VHD -Path .\diff3.vhdx -DestinationPath .\diff1.vhdx to combine diff3.vhdx and diff2.vhdx into diff1.vhdx in a single pass. Without the DestinationPath parameter, diff3.vhdx would only merge into diff2.vhdx. You’d need to run Merge-VHD several times to merge the entire chain. Hyper-V Manager has no such capability.
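Using the chain from that example, the two approaches look like this:

```powershell
# Chain: root.vhdx <- diff1.vhdx <- diff2.vhdx <- diff3.vhdx

# One pass: collapse diff3 and diff2 into diff1
Merge-VHD -Path .\diff3.vhdx -DestinationPath .\diff1.vhdx

# Without -DestinationPath, the same result takes multiple passes:
# Merge-VHD -Path .\diff3.vhdx    # merges into diff2.vhdx only
# Merge-VHD -Path .\diff2.vhdx    # merges into diff1.vhdx
```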

Merging a Hyper-V Differencing Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager has a disk editing wizard for this task.

  1. In Hyper-V Manager, right-click the host that holds the disk, or use the Action pane at the far right. Click Edit Disk.
  2. Click Next on the informational screen.
  3. Browse to the differencing disk that you wish to merge.
  4. Choose Merge.
  5. Choose to merge into the parent or into a new disk.
  6. Check your work and click Finish if you’re satisfied, or go Back and fix things.

If you chose to merge the disk into its parent, the differencing disk is destroyed at the end of the operation. If you chose to merge into a new disk, both the source and differencing disk are left intact.

Hyper-V Manager cannot merge multiple layers of a differencing chain the way that PowerShell can.

The Dangers of Differencing Disks

There are two risks with using differencing disks: performance and space.

Differencing Disk Performance Hits

When a differencing disk is in use, Hyper-V will need to jump back and forth from the child to the parent to find the data that it wants for reads. Writes are smoother as they all go to the differencing disk. On paper, this looks like a very scary operation. In practice, you are unlikely to detect any performance problems with a single differencing child. However, if you continue chaining differencing disk upon differencing disk, there will eventually be enough extraneous read operations that you’ll start having problems.

Also, merge operations require every single bit in the differencing disk to be transferred to the parent. That operation can cause an I/O storm. The larger the differencing disk is, the greater the impact of a merge operation.

Differencing Disk Space Issues

As mentioned earlier, a differencing disk can expand to the maximum size of its parent. If you have a root disk with a maximum size of 50 gigabytes, then any and all of its differencing disks can also grow to 50 gigabytes. If the root is dynamically expanding, then it is possible for its differencing disk(s) to exceed its size. For example, in this article I have used a completely empty root VHDX. It’s 4 megabytes in size. If I were to install an operating system into the differencing disk that I created for it, root.vhdx would remain at 4 megabytes in size while its differencing disk ballooned to whatever was necessary to hold that operating system.

A merge operation might require extra space, as well. If I were to merge that differencing disk with the OS back into the empty 4 megabyte root disk, then it would need to expand the root disk to accommodate all of those changed bits. It can’t destroy the differencing disk until the merge is complete, so I’m going to need enough space to hold that differencing disk twice. Once the merge is completed, the space used by the differencing disk will be reclaimed.

If the root disk is fixed instead of dynamically expanding, then the merges will be written into space that’s already allocated. There will always be a space growth concern when merging a chain, however, because differencing disks are themselves dynamically expanding and they merge from the bottom up.

Transplanting a Differencing Disk

Did a forgotten differencing disk run you out of space? Did a differencing disk get much larger than anticipated and now you can’t merge it back into its parent? No problem. Well… potentially no problem. All that you need is some alternate location to hold one or more of the disks. Follow these steps:

  1. Shut down any connected virtual machine. You cannot perform this operation live.
  2. Move the disk(s) in question. Which you move, and where you move them, is up to you. However, both locations must be simultaneously visible from a system that has the Merge-VHD cmdlet and the Hyper-V role installed. Remember that it is the disk that you are merging into that will grow (unless it’s fixed).
  3. If you moved the differencing disk and you’re still on the system that it came from, then you can probably just try the merge. It’s still pointed to the same source disk. Use Get-VHD or Inspect to verify.
  4. If you moved the root disk, run the following cmdlet:
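A sketch, assuming the root disk now lives on another volume (paths illustrative):

```powershell
# Re-point the differencing disk at the relocated parent
Set-VHD -Path 'C:\VMs\Virtual Hard Disks\child.vhdx' `
        -ParentPath 'E:\Relocated\root.vhdx'
```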

    Your usage will obviously be different from mine, but what you’re attempting to do is set the ParentPath to reflect the true location of the parent’s VHDX. You can now attempt to merge the disk.

If a differencing disk somehow becomes disjoined from its parent, you can use Set-VHD to correct it. What you cannot do is use it to rejoin a differencing disk to a parent that has been changed. Even though the Set-VHD cmdlet may work, any merge operation will likely wreck the data in the root disk and render both unusable.

Disk Fragmentation is not Hyper-V’s Enemy

Fragmentation is the most crippling problem in computing, wouldn’t you agree? I mean, that’s what the strange guy downtown paints on his walking billboard, so it must be true, right? And fragmentation is at least five or six or a hundred times worse for a VHDX file, isn’t it? All the experts are saying so, according to my psychic.

But, when I think about it, my psychic also told me that I’d end up rich with a full head of hair. And, I watched that downtown guy lose a bet to a fire hydrant. Maybe those two aren’t the best authorities on the subject. Likewise, most of the people that go on and on about fragmentation can’t demonstrate anything concrete that would qualify them as storage experts. In fact, they sound a lot like that guy that saw your employee badge in the restaurant line and ruined your lunch break by trying to impress you with all of his anecdotes proving that he “knows something about computers” in the hopes that you’d put in a good word for him with your HR department (and that they have a more generous attitude than his previous employers on the definition of “reasonable hygiene practices”).

To help prevent you from ever sounding like that guy, we’re going to take a solid look at the “problem” of fragmentation.

Where Did All of this Talk About Fragmentation Originate?

Before I get very far into this, let me point out that all of this jabber about fragmentation is utter nonsense. Most people that are afraid of it don’t know any better. The people that are trying to scare you with it either don’t know what they’re talking about or are trying to sell you something. If you’re about to go to the comments section with some story about that one time that a system was running slowly but you set everything to rights with a defrag, save it. I once bounced a quarter across a twelve foot oak table, off a guy’s forehead, and into a shot glass. Our anecdotes are equally meaningless, but at least mine is interesting and I can produce witnesses.

The point is, the “problem” of fragmentation is mostly a myth. Like most myths, it does have some roots in truth. To understand the myth, you must know its origins.

These Aren’t Your Uncle’s Hard Disks

In the dark ages of computing, hard disks were much different from the devices that you know and love today. I’m young enough that I missed the very early years, but the first one owned by my family consumed the entire top of a desktop computer chassis. I was initially thrilled when my father presented me with my very own PC as a high school graduation present. I quickly discovered that it was a ploy to keep me at home a little longer because it would be quite some time before I could afford an apartment large enough to hold its hard drive. You might be thinking, “So what, they were physically bigger. I have a dozen magazines claiming that size doesn’t matter!” Well, those articles weren’t written about computer hard drives, were they? In hard drives, physical characteristics matter.

Old Drives Were Physically Larger

The first issue is diameter. Or, more truthfully, radius. You see, there’s a little arm inside that hard drive whose job it is to move back and forth from the inside edge to the outside edge of the platter and back, picking up and putting down bits along the way. That requires time. The further the distance, the more time required. Even if we pretend that actuator motors haven’t improved at all, less time is required to travel a shorter distance. I don’t know actual measurements, but it’s a fair guess that those old disks had over a 2.5-inch radius, whereas modern 3.5″ disks are closer to a 1.5″ radius and 2.5″ disks something around a 1″ radius. It doesn’t sound like much until you compare them by percentage differences. Modern enterprise-class hard disks have less than half the maximum read/write head travel distance of those old units.


It’s not just the radius. The hard disk that I had wasn’t only wide, it was also tall. That’s because it had more platters in it than modern drives. That’s important because, whereas each platter has its own set of read/write heads, a single motor controls all of the arms. Each additional platter increases the likelihood that the read/write head arm will need to move a meaningful distance to find data between any two read/write operations. That adds time.

Old Drives Were Physically Slower

After size, there’s rotational speed. The read/write heads follow a line from the center of the platter out to the edge of the platter, but that’s their only range of motion. If a head isn’t above the data that it wants, then it must hang around and wait for that data to show up. Today, we think of 5,400 RPM drives as “slow”. That drive of mine was moping along at a meager 3,600 RPM. That meant even more time was required to get/set data.

There were other factors that impacted speed as well, although none quite so strongly as rotational speed improvements. The point is, physical characteristics in old drives meant that they pushed and pulled data much more slowly than modern drives.

Old Drives Were Dumb

Up until the mid-2000s, every drive in (almost) every desktop computer used a PATA IDE or EIDE interface (the distinction is not important for this discussion). A hard drive’s interface is the bit that sits between the connecting cable bits and the spinning disk/flying head bits. It’s the electronic brain that figures out where to put data and where to go get data. IDE brains are dumb (another word for “cheap”). They operate on a FIFO (first-in first-out) basis. This is an acronym that everyone knows but almost no one takes a moment to think about. For hard drives, it means that each command is processed in exactly the order in which it was received. Let’s say that it gets the following:

  1. Read data from track 1
  2. Write data to track 68,022
  3. Read data from track 2

An IDE drive will perform those operations in exactly that order, even though it doesn’t make any sense. If you ever wondered why SCSI drives were so much more expensive than IDE drives, that was part of the reason. SCSI drives were a lot smarter. They would receive a list of demands from the host computer, plot the optimal course to satisfy those requests, and execute them in a logical fashion.

In the mid-2000s, we started getting new technology. AHCI and SATA emerged from the primordial acronym soup as Promethean saviors, bringing NCQ (native command queuing) to the lowly IDE interface. For the first time, IDE drives began to behave like SCSI drives. … OK, that’s overselling NCQ. A lot. It did help, but not as much as it might have because…

Operating Systems Take More Responsibility

It wasn’t just hard drives that operated in FIFO. Operating systems started it. They had good excuses, though. Hard drives were slow, but so were all of the other components. A child could conceive of better access techniques than FIFO, but even PhDs struggled against the CPU and memory requirements to implement them. Time changed all of that. Those other components gained remarkable speed improvements while hard disks lagged behind. Before “NCQ” was even coined, operating systems learned to optimize requests before sending them to the IDE’s FIFO buffers. That’s one of the ways that modern operating systems manage disk access better than those that existed at the dawn of defragmentation, but it’s certainly not alone.

This Isn’t Your Big Brother’s File System

The venerated FAT file system did its duty and did it well. But, the nature of disk storage changed dramatically, which is why we’ve mostly stopped using FAT. Now we have NTFS, and even that is becoming stale. Two things that it does a bit better than FAT are metadata placement and file allocation. Linux admins will be quick to point out that virtually all of their file systems are markedly better at preventing fragmentation than NTFS. However, most of the tribal knowledge around fragmentation on the Windows platform stems from the FAT days, and NTFS is certainly better than FAT.

Some of Us Keep Up with Technology

It was while I owned that gigantic, slow hard drive that the fear of fragmentation wormed its way into my mind. I saw some very convincing charts and graphs and read a very good spiel and I deeply absorbed every single word and took the entire message to heart. That was also the same period of my life in which I declined free front-row tickets to Collective Soul to avoid rescheduling a first date with a girl with whom I knew I had no future. It’s safe to say that my judgment was not sound during those days.

Over the years, I became a bit wiser. I looked back and realized some of the mistakes that I’d made. In this particular case, I slowly came to understand that everything that convinced me to defragment was marketing material from a company that sold defragmentation software. I also forced myself to admit that I never could detect any post-defragmentation performance improvements. I had allowed the propaganda to sucker me into climbing onto a bandwagon carrying a lot of other suckers, and we reinforced each others’ delusions.

That said, we were mostly talking about single-drive systems in personal computers. That transitions right into the real problem with the fragmentation discussion.

Server Systems are not Desktop Systems

I was fortunate enough that my career did not immediately shift directly from desktop support into server support. I worked through a gradual transition period. I also enjoyed the convenience of working with top-tier server administrators. I learned quickly, and thoroughly, that desktop systems and server systems are radically different.

Usage Patterns

You rely on your desktop or laptop computer for multiple tasks. You operate e-mail, web browsing, word processing, spreadsheet, instant messaging, and music software on a daily basis. If you’re a gamer, you’ve got that as well. Most of these applications use small amounts of data frequently and haphazardly; some use large amounts of data, also frequently and haphazardly. The ratio of write operations to read operations is very high, with writes commonly outnumbering reads.

Servers are different. Well-architected servers in an organization with sufficient budget will run only one application or application suite. If they use much data, they’ll rely on a database. In almost all cases, server systems perform substantially more read operations than write operations.

The end result is that server systems almost universally have more predictable disk I/O demands and noticeably higher cache hits than desktop systems. Under equal fragmentation levels, they’ll fare better.

Storage Hardware

Whether or not you’d say that server-class systems contain “better” hardware than desktop systems is a matter of perspective. Server systems usually provide minimal video capabilities and their CPUs have gigantic caches but are otherwise unremarkable. That only makes sense; playing the newest Resident Evil at highest settings with a smooth frame rate requires substantially more resources than a domain controller for 5,000 users. Despite what many lay people have come to believe, server systems typically don’t work very hard. We build them for reliability, not speed.

Where servers have an edge is storage. SCSI has a solid record as the premier choice for server-class systems. For many years, it was much more reliable, although the differences are negligible today. One advantage that SCSI drives maintain over their less expensive cousins is higher rotational speeds. Of all the improvements that I mentioned above, the most meaningful advance in IDE drives was the increase of rotational speed from 3,600 RPM to 7,200 RPM. That’s a 100% gain. SCSI drives ship with 10,000 RPM motors (~38% faster than 7,200 RPM) and 15,000 RPM motors (108% faster than 7,200 RPM!).

Spindle speed doesn’t address the reliability issue, though. Hard drives need many components, and a lot of them move. Mechanical failure due to defect or wear is a matter of “when”, not “if”. Furthermore, they are susceptible to things that other component designers don’t even think about. If you get very close to a hard drive and shout at it while it’s powered, you can cause data loss. Conversely, my solid-state phone doesn’t seem to suffer nearly as much as I do even after the tenth attempt to get “OKAY GOOGLE!!!” to work as advertised.

Due to the fragility of spinning disks, almost all server systems architects design them to use multiple drives in a redundant configuration (lovingly known as RAID). The side effect of using multiple disks like this is a speed boost. We’re not going to talk about different RAID types because that’s not important here. The real point is that in practically all cases, a RAID configuration is faster than a single disk configuration. The more unique spindles in an array, the higher its speed.

With SCSI and RAID, it’s trivial to achieve speeds many times faster than a single-disk system. If we assume that fragmentation has ill effects and that defragmentation has positive effects, both are mitigated by the inherent speed boosts of this topology.

These Differences are Meaningful

When I began taking classes to train desktop support staff to become server support staff, I managed to avoid asking any overly stupid questions. My classmates weren’t so lucky. One asked about defragmentation jobs on server systems. The echoes of laughter were still reverberating through the building when the instructor finally caught his breath enough to choke out, “We don’t defragment server systems.” The student was mortified into silence, of course. Fortunately, there were enough shared sheepish looks that the instructor felt compelled to explain it. That was in the late ’90s, so the explanation was a bit different then, but it still boiled down to differences in usage and technology.

With today’s technology, we should be even less fearful of fragmentation in the datacenter, but my observations seem to indicate that the reverse has happened. My guess is that training isn’t what it used to be and we simply have too many server administrators that were promoted off of the retail floor or the end-user help desk a bit too quickly. This is important to understand, though. Edge cases aside, fragmentation is of no concern for a properly architected server-class system. If you are using disks of an appropriate speed in a RAID array of an appropriate size, you will never realize meaningful performance improvements from a defragmentation cycle. If you are experiencing issues that you believe are due to fragmentation, expanding your array by one member (or two for RAID-10) will return substantially greater yields than the most optimized disk layout.

Disk Fragmentation and Hyper-V

To conceptualize the effect of fragmentation on Hyper-V, just think about the effect of fragmentation in general. When you think of disk access on a fragmented volume, you’ve probably got something like this in mind:

Jumpy Access

Look about right? Maybe a bit more complicated than that, but something along those lines, yes?

Now, imagine a Hyper-V system. It’s got, say, three virtual machines with their VHDX files in the same location. They’re all in the fixed format and the whole volume is nicely defragmented and pristine. As the virtual machines are running, what does their disk access look like to you? Is it like this?

Jumpy Access

If you’re surprised that the pictures are the same, then I don’t think that you understand virtualization. All VMs require I/O and they all require their I/O more or less concurrently with the I/O needs of other VMs. In the first picture, access had to skip a few blocks because of fragmentation. In the second picture, access had to skip a few blocks because it was another VM’s turn. I/O will always be a jumbled mess in a shared-storage virtualization world. There are mitigation strategies, but defragmentation is the least useful of them.

For fragmentation to be a problem, it must interrupt what would have otherwise been a smooth read or write operation. In other words, fragmentation is most harmful on systems that commonly perform long sequential reads and/or writes. A typical Hyper-V system hosting server guests is unlikely to perform meaningful quantities of long sequential reads and/or writes.

Disk Fragmentation and Dynamically-Expanding VHDX

Fragmentation is the most egregious of the copious, terrible excuses that people give for not using dynamically-expanding VHDX. If you listen to them, they’ll paint a beautiful word picture that will have you daydreaming that all the bits of your VHDX files are scattered across your LUNs like a bag of Trail Mix. I just want to ask anyone who tells those stories: “Do you own a computer? Have you ever seen a computer? Do you know how computers store data on disks? What about Hyper-V, do you have any idea how that works?” I’m thinking that there’s something lacking on at least one of those two fronts.

The notion fronted by the scare message is that your virtual machines are just going to drop a few bits here and there until your storage looks like a finely sifted hodge-podge of multicolored powders. The truth is that your virtual machines are going to allocate a great many blocks in one shot, maybe again at a later point in time, but will soon reach a sort of equilibrium. An example VM that uses a dynamically-expanding disk:

  • You create a new application server from an empty Windows Server template. Hyper-V writes that new VHDX copy as contiguously as the storage system will allow.
  • You install the primary application. This causes Hyper-V to request many new blocks all at once. A large singular allocation results in the most contiguous usage possible.
  • The primary application goes into production.
    • If it’s the sort of app that works with big gobs of data at a time, then Hyper-V writes big gobs, which are more or less contiguous.
    • If it’s the sort of app that works with little bits of data at a time, then fragmentation won’t matter much anyway.
  • Normal activities cause a natural ebb and flow of the VM’s data usage (ex: downloading and deleting Windows Update files). A VM will re-use previously used blocks because that’s what computers do.

How to Address Fragmentation in Hyper-V

I am opposed to taking any serious steps to defragment a server system. It’s just a waste of time and causes a great deal of age-advancing disk thrashing. If you’re really concerned about disk performance, these are the best choices:

  • Add spindles to your storage array
  • Use faster disks
  • Use a faster array type
  • Don’t virtualize

If you have read all of this and done all of these things and you are still panicked about fragmentation, then there is still something that you can do. Get an empty LUN or other storage space that can hold your virtual machines. Use Storage Live Migration to move all of them there. Then, use Storage Live Migration to move them all back, one at a time. It will line them all up neatly end-to-end. If you want, copy in some “buffer” files in between each one and delete them once all VMs are in place. These directions come with a warning: you will never recover the time necessary to perform that operation.

Cannot Delete a Virtual Hard Disk from a Cluster Shared Volume


When you use the built-in Hyper-V tools (Hyper-V Manager and PowerShell) to delete a virtual machine, all of its virtual hard disks are left behind. This is by design and is logically sound. The configuration files are components of the virtual machine and either define it or have no purposeful existence without it; the virtual hard disks are simply attached to the virtual machine and could just as easily be attached to another. After you delete the virtual machine, you can manually delete the virtual hard disk files. Usually. Sometimes, when the VHD is placed on a cluster shared volume (CSV), you might have some troubles deleting it. The fix is simple.


There are a few ways that this problem will manifest. All of these conditions will be applicable, but the way that you encounter them is different.

Symptom 1: Cannot Delete a VHDX on a CSV Using Windows Explorer on the CSV’s Owner Node

When using Windows Explorer to try to delete the file from the node that owns the CSV, you receive the error: The action can’t be completed because the file is open in System. Close the file and try again.

System has VHD Open



Note: this message does sometimes appear on non-owner nodes.

Symptom 2: Cannot Delete a VHDX on a CSV Using Windows Explorer on a Non-Owning Node

When using Windows Explorer to try to delete the file from a node other than the CSV’s owner, you receive the error: The action can’t be completed because the file is open in another program. Close the file and try again.

Another Program has the VHD Open



Note: non-owning nodes do sometimes receive the “System” message from symptom 1.

Symptom 3: Cannot Delete a VHDX on a CSV Using PowerShell

The error is always the same from PowerShell whether you are on the owning node or not: Cannot remove item virtualharddisk.vhdx: The process cannot access the file ‘virtualharddisk.vhdx’ because it is being used by another process.

Another Process has the CSV



Symptom 4: Cannot Delete a VHDX on a CSV Using the Command Prompt

The error message from a standard command prompt is almost identical to the message that you receive in PowerShell: The process cannot access the file because it is being used by another process.

Another Process has the VHD




Cleanup is very simple, but it comes with a serious warning: I do not know what would happen if you ran this against a VHD that was truly in use by a live virtual machine, but you should expect the outcome to be very bad.

Open an elevated PowerShell prompt on the owner node and issue Dismount-DiskImage:
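The command takes the full path of the stuck file; the path below is only an example, so substitute the path of your own VHDX:

```powershell
# Example path only; use the full path of the VHDX that will not delete
Dismount-DiskImage -ImagePath 'C:\ClusterStorage\Volume1\svtest\virtualharddisk.vhdx'
```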

You do not need to type out the -ImagePath parameter name but you must fully qualify the path! If you try to use a relative path with Dismount-DiskImage or any of the other disk image manipulation cmdlets, you will be told that The system cannot find the file specified:

DiskImage Cmdlet Fails on Relative Paths



Once the cmdlet returns, you should be able to delete the file without any further problems.

If you’re not sure which node owns the CSV, you can ask PowerShell, assuming that the Failover Cluster cmdlets are installed:
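For example, the following lists each CSV along with its current owner:

```powershell
# Shows each cluster shared volume with the node that currently owns it
Get-ClusterSharedVolume | Select-Object -Property Name, OwnerNode
```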

CSV Owner in PowerShell



You can also use Failover Cluster Manager, on the Storage/Disks node:

CSV Owner in Failover Cluster Manager



Other Cleanup

Make sure to use Failover Cluster Manager to remove any other mention of the virtual machine. Look on the Roles node. These resources are not automatically cleaned up when the virtual machine is deleted. Try to get into the habit of removing cluster resources before deleting the related virtual machine, if possible. I assume that for most of us, deleting a virtual machine is a rare enough occurrence that it’s easy to overlook things like this. I do know that the problem can occur even if the objects are deleted in the “proper” order, so this is not the root cause.

Alternative Cleanup Approaches

The above solution has worked for me every time, but it’s a very rare event without a known cause, so it’s impossible for me to test every possibility. There are two other things that you can try.

Move the CSV to Another Node

Moving the CSV to another node might break the lock from the owning node. This has worked for me occasionally, but not as reliably as the primary method described above.

In Failover Cluster Manager, right-click the CSV, expand Move, then click one of the two options. Best Possible Node will choose where to place the CSV for you; Select Node will present a dialog in which you choose the target.

Move CSV in Failover Cluster Manager



Alternatively, you can use PowerShell:
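A sketch of the equivalent command follows; the CSV and node names are examples:

```powershell
# Moves ownership of the CSV named "Cluster Disk 1" to the node named HV02
Move-ClusterSharedVolume -Name 'Cluster Disk 1' -Node 'HV02'
```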

If you don’t want to tell it which node to place the CSV on, simply omit the Node parameter:
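For example (the CSV name is illustrative):

```powershell
# With no -Node, the cluster picks the best possible node itself
Move-ClusterSharedVolume -Name 'Cluster Disk 1'
```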

Move CSV in PowerShell



Rolling Cluster Reboot

The “nuclear” option is to reboot each node in the cluster, starting with the original owner node. If this does not work, then the disk image is truly in use somewhere and you need to determine where.

Free PowerShell Script: Use WSUS to Update Installation Media and Hyper-V Templates


In several articles and other works, I make the claim that backing up a Hyper-V host is largely a waste of time. A separate practice is to maintain templates and other offline images of systems for easy deployment of new systems. What these two topics have in common is the need (or at least the desire) to keep the images current with Windows patches. It doesn’t help much to save ten or fifteen minutes deploying Windows from a template or ISO if you then need to spend two hours installing updates. In the past, we would have used “slipstreamed” ISOs. That’s no longer possible with modern iterations of Windows. However, what we lack now is not the capability to update these systems, but the proper tools. Of course, if you’re willing (and able) to spend a lot of money on System Center, that suite can help you a great deal. For the rest of us, we have to resort to other options, usually homegrown.

To address that problem, I have crafted a script that will automatically update both VHDX files and WIM files. If you’re thinking that this doesn’t apply to you because you only deploy from physical media, think again! The issue of publicly-available Windows Server ISOs never being updated by Microsoft is the primary driver behind the creation of this script.

Script Features

Scripts to update WIMs and VHDs are numerous, so I’d like to take a few moments to enumerate the features in this script, especially the parts that set it apart from others that I’ve found.

  • Updates WIMs and VHDXs interchangeably
  • Updates multiple images with a single execution
  • Ability to scope updates; no more trying to apply Office 2013 patches to your Hyper-V Server image (unless that’s what you want to do; I’m not judging)
  • Subsequent runs against the same image will not try to re-apply previously installed updates
  • Designed to be run on a schedule, but can also be run interactively
  • Can update every image in a multi-pack WIM (if that’s what you want)


There are a few things that you’ll need to provide for this script to function.

  • A willingness to put away your DVDs
  • A USB flash device that is recognized on the physical systems that you use with a capacity of at least 8 GB (not applicable if you’re only here to update VHDXs)
  • An installation of Windows Server Update Services (WSUS)
  • Space to store WIMs and/or VHDXs (varies, but anywhere from 5 to 20GB per)
  • Spare space for the update operation (varies depending on the total size of all updates to be applied, but usually no more than a few gigabytes per operation)
  • On the system where you wish to run the script, the 2012 R2 or later Windows Server Update Services console must be installed. It includes the UpdateServices PowerShell module. For Windows 8.1+, download the Remote Server Administration Tools. I don’t believe that this module is available for Windows 7 or Server 2008 R2 or earlier; if it does exist for those systems, it would be in the WSUS 3.0 SP2 download. The following screenshot shows where the option appears on Windows 10. For Server, it is in the same location on the Features page of the Add Roles and Features wizard.
    WSUS in RSAT



  • PowerShell 4.0 or later on the system that will run the script.

Something I want to make clear right from the beginning is that I don’t know how to update a Windows ISO image or use this to create an ISO image that can then be burned to DVD. I have moved to USB flash deployments for any physical systems where PXE is not an option. DVD has had its day but it will soon be following the floppy into the museum. If you haven’t tried loading an operating system from a USB stick, today is a great day to learn.

Deploying a Physical Machine from a WIM and a USB Stick

If you’re only going to be using this script to update VHDX files, skip this entire section.

You may not have realized it, especially if you’re like me and have been around since the pre-WIM days, but any time you use a Microsoft-pressed DVD or a burned ISO to install Windows, you are deploying from a WIM. Check your installation media’s file structure for a file named install.wim in the Sources folder. That’s where your Windows installation comes from. The trick is to get that WIM updated with the latest patches before using it again. There are a few inefficiencies with the method that I’ve discovered, but it works perfectly without requiring any paid tools.

  1. Acquire an ISO of the Windows, Windows Server, or Hyper-V Server operating system. If you only have physical media to work from and don’t already have a tool to create an ISO from it, I use ImgBurn. It’s tough to get to through all the adwalls but it’s free and does the job.
  2. Acquire a copy of the Windows USB/DVD Download Tool.
  3. Insert your USB stick into the computer. If it’s not empty, whatever is on it will be destroyed.
  4. Run the tool that you downloaded. It’s still branded as “Windows 7” but it will work with whatever Windows ISO you give it.
    Windows 7 DVD Tool



  5. Browse to the ISO image from step 1 that you want to convert for USB.
    Windows USB Tool ISO Selection



  6. Choose USB Device.
    Windows USB Media Selection



  7. Ensure that the correct USB device from step 3 is selected. Press Begin Copying.
    Windows USB Select Output Drive



  8. You will get a small popup box warning you that your device will be erased. Click Erase USB Device. You’ll get yet another dialog telling you essentially the same thing that the previous dialog said. I guess they really want to make certain that no one can say they didn’t know that their drive was going to be erased. Click Yes.
  9. Wait for the process to complete.
    Windows USB File Copy



  10. When it’s finished, you can just close the window or click Start Over to craft another USB drive.
  11. Copy the \sources\install.wim from the USB device to a location where it can be updated — I prefer having it on my WSUS host. Because every single Windows media uses the name install.wim, I would either set up a folder structure that clearly indicates exactly where the file came from or I would rename the file.

You’re now ready to begin. The procedure going forward will be:

  1. Update the WIM.
  2. Copy the WIM back to the USB device.
  3. Use the USB device to install.

If you have more images than USB keys, that’s a workable problem. You can always rebuild the USB device from the ISO image and then copy over the latest copy of the WIM. However, now that you’ve come this far, I strongly recommend that you research deployment from a WDS server with WIM. It’s not that tough and it’s nice to never worry about installation media.

Script Usage

This script is slow. Very slow. The strongest control on speed is your hardware, but it’s still going to be slow. Part of the delay is scanning for applicable updates, so I’ve arranged for the scan of the WSUS server to occur only once per run, no matter how many VHDXs/WIMs you specify to update. To make it even better, it will record which updates were successfully applied to the WIM. As long as you don’t move or rename the log, additional runs will skip updates that have already been applied. This means that the first run against any given image will likely take hours, but subsequent runs might only require minutes.

You can run the script by hand, if you wish. Even if you update any given Windows Server 2012 R2 image only one time, that will save you at least a year’s worth of updates each time that you deploy from it. My recommendation is to schedule the update to run once a month over the weekend so that you’re always up-to-date. To make that easier, I’ll show you how to build a supporting script to call this one with your images.

There are two parameter sets. One is to run against a single image file, the other is for multiple image files. Because of the way that WIMs work, you can’t just supply a list of file names. The parameter sets are otherwise identical.

Single Image File Parameter Set

Multiple Image File Parameter Set

There are only two required parameters: the image(s) and the WSUS system’s content folder. The hardest part is the image(s), so we’ll start there.

Specifying Image File(s)

The basic issue with specifying image files is that a single WIM can contain multiple images. If you’ve ever started an installation and been asked to choose between Standard and Datacenter and Standard Core and Datacenter Core or something similar, every single line that you see is a separate image in one WIM. When you update a WIM, you must select which image to work with. VHDX files, on the other hand, only contain a single image, so you don’t need to worry about specifying an index.

Specifying a Single VHDX

This is the easiest usage. Just use the full path of the VHDX with the WSUS content folder:
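An invocation might look like the following. The parameter names ImagePath and WsusContentPath are illustrative; check the script’s built-in help for the exact names:

```powershell
# Parameter names are illustrative; see Get-Help .\Update-WindowsImage.ps1
.\Update-WindowsImage.ps1 -ImagePath 'D:\Templates\svtemplate.vhdx' -WsusContentPath 'C:\WSUS\WsusContent'
```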

This assumes that you are running the script locally on the WSUS server.

Specifying a Single WIM

You must specify the index of an image within a WIM to update. If you don’t know, or just want to update all of them, specify -1. Updating every image will take a very long time!
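A sketch, with illustrative parameter names:

```powershell
# Index 2 targets a single image inside the WIM; -1 would update every image
.\Update-WindowsImage.ps1 -ImagePath 'D:\FromISO\2k12r2\install.wim' -Index 2 -WsusContentPath 'C:\WSUS\WsusContent'
```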

If you’d like to narrow it down to a specific image but you don’t know what image to choose, you can interactively and locally run the script and you’ll be prompted:

WIM Index Menu


This list is pulled directly from Get-WindowsImage. You can look at the available indexes yourself in advance with Get-WindowsImage D:\FromISO\2k12r2\install.wim. If you do not specify -1 or a valid index when running Update-WindowsImage either from a scheduled task or in a remote PowerShell session, the script will fail.

Specifying Multiple Target Images

In order to update multiple images at once, you must supply an array of hash tables. If you’re new to PowerShell, take heart; it sounds much worse than it is.

First, make an empty array:

Then, make a hash table. This must have at least one component, a Path. For a WIM, it must also contain an Index. VHDX files can also have an index but they’ll be ignored.

Insert the hash table into the array:

Finally, submit your array to the script:
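Assembled, the four steps look something like this (the array parameter name and paths are illustrative; consult the script’s help for the real names):

```powershell
$Images = @()                                                    # 1. make an empty array
$Image = @{ Path = 'D:\FromISO\2k12r2\install.wim'; Index = 2 }  # 2. hash table; Index is required for WIMs
$Images += $Image                                                # 3. insert the hash table into the array
.\Update-WindowsImage.ps1 -Image $Images -WsusContentPath 'C:\WSUS\WsusContent'  # 4. submit the array
```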

Easy, right? Now, let’s do a bunch in one shot:
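Something like the following; the file names and parameter names are only examples:

```powershell
$Images = @(
    @{ Path = 'D:\FromISO\2k12r2\install.wim'; Index = -1 },  # -1 updates every image in this WIM
    @{ Path = 'D:\Templates\svtemplate.vhdx' },               # VHDXs need no Index
    @{ Path = 'D:\Templates\desktoptemplate.vhdx' }
)
.\Update-WindowsImage.ps1 -Image $Images -WsusContentPath 'C:\WSUS\WsusContent'
```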

Any image that can’t be found will simply be skipped. It will not impact the success or failure of the others.

Specifying the Target Product(s)

To reduce the amount of time spent attempting to apply patches, I added a filter for specific products. By default, the only scanned product is Windows Server 2012 R2 (which will include Hyper-V Server 2012 R2). You can specify what products to search for by using the TargetProduct parameter:
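For example (parameter names other than TargetProduct are illustrative):

```powershell
# Product names must match the Get-WsusProduct output verbatim
.\Update-WindowsImage.ps1 -ImagePath 'D:\Templates\svtemplate.vhdx' -WsusContentPath 'C:\WSUS\WsusContent' -TargetProduct 'Windows Server 2012 R2', 'Office 2013'
```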

The items you enter here must match their names in WSUS verbatim or the updates will not be scanned (and there will be no error). To see that list, use Get-WsusProduct. Unfortunately, the PowerShell cmdlets for WSUS leave a great deal to be desired and there’s no simple way to narrow down which products your host is receiving in synchronization.

Understanding how Available Updates will be Selected

I’ve never been the biggest fan of WSUS for a number of reasons, and you’re about to encounter one. I can easily determine if an update has been Approved in at least one place on the server and if it has been Declined in at least one place on the server. Finding out which computer groups it has been Approved or Declined for is much harder. So, the default rule is: if an update has been approved on at least one group and has not been declined on any groups, it will be eligible. If you specify the IgnoreDeclinedStatus parameter, then the rule will change to: if an update has been approved on at least one group, it will be eligible. There is also a MinimumPatchAgeInDays parameter.

Other Parameters

Let’s step through the other, more self-explanatory parameters quickly:

  • WsusServerName: this is the name (short or FQDN) or the IP address of the WSUS server to connect to. If not specified, the script will assume that WSUS is running locally.
  • WsusServerPort: the port that WSUS listens on. If not specified, the script uses 8530, the default WSUS port.
  • WsusUsesSSL: this is a switch parameter. Include it if your WSUS server is using SSL. Leave it off otherwise.
  • MinimumPatchAgeInDays: this is a numeric parameter that indicates the minimum number of days that a patch must have been on the WSUS server before it can be eligible for your images.
  • OfflineMountFolder: by default, the script will create an Offline folder on the system’s system drive (usually C:) for its working space. If this folder already exists, it must be empty. The folder is not removed at the end of the cycle. Use this parameter to override the name of the folder.

Scripting the Script

My vision is that you’ll set this script to run on a schedule. To work with multiple items, I’d make a script that calls the update script. So, save something like the following and call it every Friday at 7 PM:
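A minimal wrapper might look like this (paths and parameter names are illustrative); save it and point a scheduled task at it:

```powershell
# Update-MyImages.ps1 -- wrapper intended to be run by Task Scheduler
$Images = @(
    @{ Path = 'D:\FromISO\2k12r2\install.wim'; Index = -1 },
    @{ Path = 'D:\Templates\svtemplate.vhdx' }
)
C:\Scripts\Update-WindowsImage.ps1 -Image $Images -WsusContentPath 'C:\WSUS\WsusContent'
```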

Depending on your scripting skills, you could make this far more elaborate. Just remember that each image is going to take quite some time to update, especially on the first run.

The Script Source

As included here, you simply run the script on demand. If you’d like to dot-source it or use it in your profile, uncomment the function definition lines right after the help section and at the end. They are clearly marked.


How to use Microsoft Virtual Machine Converter (MVMC) for Hyper-V P2V


Up through version 2012 of Microsoft’s System Center Virtual Machine Manager, the product included a physical-to-virtual (P2V) conversion tool for Hyper-V. It was taken out of the 2012 R2 version, and as most anyone could have predicted, the customer response was a general revolt. The capability was later added to the Microsoft Virtual Machine Converter (MVMC) product when it released as version 3.0.


The good news is: That particular product is provided free-of-charge, so you do not need to purchase any System Center products.
The bad news is: It’s really not that well-developed.

This article contains a how-to guide, but I strongly recommend that you read through the entire thing, especially the pros and cons, before you start. With some of the caveats, you might find that you’d rather not use this tool at all.

microsoft virtual machine converter for Hyper-V P2V

What is Microsoft Virtual Machine Converter (MVMC)?

Microsoft Virtual Machine Converter, currently at version 3.1, is a freely-available tool provided by Microsoft for the purpose of converting VMware virtual machines and physical computers to Hyper-V virtual machines. If you prefer, you can also use MVMC to create VHD or VHDX from the source disks without converting the entire system. It includes both a GUI tool and a set of PowerShell functions so you can graphically step through your conversions one at a time or bulk transfer them with only a few lines.

During P2V, an agent application is temporarily installed on the source system.

You can convert desktop operating systems running Windows Vista and later. You can convert Windows Server operating systems running version 2008 or later. No Linux systems are supported by the P2V module. Your hypervisor target can be any version of Windows or Hyper-V Server from 2008 R2 onward.

Pros and Cons of Microsoft Virtual Machine Converter

I was not very impressed with this tool and would be unlikely to use it. Overall, I feel that Disk2VHD is better suited to what most people are going to do. The cons:


  • MVMC needs a temporary conversion location, even if you install it on the target Hyper-V host. So, you need enough space to hold the source system twice. MVMC does use dynamic VHD by default, so plan for the consumed space, not the total empty space.
  • MVMC only creates VHD files, not VHDX. It appears that what Microsoft really wants you to use this for is converting machines so that they can be used with Azure, which still can’t work with VHDX. So, you must have enough space for the initial VHD and the converted VHDX.
  • MVMC creates one VHD for each volume, not for each physical disk. So, for modern Windows OSs, you will have that tiny 350MB system disk as its own VHD and then your boot C: disk as a separate VHD.
  • MVMC operates from the broker’s perspective, so paths may not line up as you expect on the target system.

It’s not all doom and gloom, though. Pros:

  • MVMC can convert machines in bulk via its PowerShell module. That module is one of the poorer examples of the craft, but it is workable.
  • Aside from a tiny agent that is installed and almost immediately removed during the discovery phase, nothing is installed on the source system. Contrast with Disk2VHD, which requires that you run the app within the machine to be converted.

How to Download Microsoft Virtual Machine Converter

Virtual Machine Converter is freely available from the Microsoft download site. That link was the most current as of the time of writing, but being the Internet, things are always subject to change. To ensure that you’re getting the latest version:

  1. Access
  2. In the search box, enter Virtual Machine Converter.
  3. From the results list, choose the most recent version.

The download page will offer you mvmc_setup.msi and MVMC_cmdlets.doc. The .doc file is documentation for the PowerShell cmdlets. It is not required, especially if you don’t intend to use the cmdlets. The .msi is required.

How to Install Microsoft Virtual Machine Converter

Do not install MVMC on the computer(s) to be converted. You can install the application directly on the Hyper-V host that will contain the created virtual machine or on an independent system that reads from the source physical machine and writes to the target host. For the rest of this article, I will refer to such an independent system as the broker. During the instructions portion, I will only talk about the broker; if you install MVMC on the target Hyper-V host, then that is the system that I am referring to for you.

If you use a broker, it can be running any version of Windows Server 2008 R2 or onward. The documentation also mentions Windows 8 and later, although desktop operating systems are not explicitly listed on the supported line.

Whichever operating system you choose, you must also have its current .NET environment (3.5 for 2008/R2 or 4.5 for 2012/R2). If you intend to use the PowerShell module, version 3.0 of PowerShell must be installed. This is only of concern for 2008/R2. Enter $PSVersionTable at any PowerShell prompt to see the installed version. If you are below version 3.0, use the same steps listed above for downloading MVMC, but search for Windows Management Framework instead. Version 3 is required, but any later version supported by your broker’s operating system will suffice.

The BITS Compact Server feature must also be enabled:
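On a broker running Windows Server, something like the following should do it; the feature name is shown as it appears in Get-WindowsFeature (on a desktop OS, use the Windows Features control panel instead):

```powershell
# Run from an elevated prompt on a server-edition broker
Install-WindowsFeature -Name BITS-Compact-Server
```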




Installing MVMC is very straightforward. On the broker system, execute mvmc_setup.msi. As you step through the wizard, the only page with any real choice is the Destination Folder. Once installed, it will have its own entry for Microsoft Virtual Machine Converter on the Start menu/screen.

How to use Microsoft Virtual Machine Converter’s GUI to Perform P2V

Before you start, the source system must be online and reachable via network by the broker. The broker will temporarily hold the converted disk files, so it must have sufficient free space; it will not accept an SMB path. You must also know a user name and password that can act as an administrator on the source system and the destination Hyper-V host.

When ready, start MVMC and follow these steps:

  1. The first screen (not shown) is simply informational. If you don’t want to see it again, check Do not show this page again. Click Next when ready.
  2. The next screen asks if you wish to convert from a virtual or physical source machine. This article is only about P2V, so choose Physical machine conversion and click Next.
    MVMC Physical Source



  3. Now, enter information about the source computer. You’ll need a resolvable name or IP address as well as an administrative user account. Upon clicking Next, MVMC will attempt to connect to the source system using the credentials that you specify.
    MVMC Source System



  4. If the connection from step 3 is successful, you’ll next be asked to install the agent and scan the source host. Press the Scan System button and wait for the middle screen to display the results (do not be thrown off by the appearance of a Hyper-V adapter in my screenshot; I didn’t have a suitable physical system to demonstrate with):
    MVMC Source Scan



  5. Each volume in your system will be detected and converted to an individual VHD. You can deselect any volume that you don’t want converted and you can choose to create a Fixed VHD instead of a Dynamic VHD if you prefer. Be aware that every line item will create a unique VHD.
    MVMC Disk Selection



  6. Enter the specifications for the target virtual machine. You’ll need to provide its name, the number of vCPUs to assign, and the amount of memory to assign.
    MVMC Target VM Specifications



  7. Enter the connection information for the target Hyper-V host. You’re given the option to use the credentials of your currently-logged-on user. If that will not work, clear the checkbox and manually enter the correct credentials.
    MVMC Target Host



  8. Next, you’ll be asked where to place the files. The location that you specify is from the viewpoint of the broker. So, if you enter C:\VMs, the files will be placed on the broker’s C: drive. Unless you’re placing the virtual machine’s files on an SMB 3 share, you’ll need to fix this all up afterward.
    MVMC Target File Location

  9. Choose the interim storage location, which must be on the broker system.
    MVMC Interim Storage

  10. Select the virtual switch, if any, that you will connect the virtual machine to. I recommend that you leave it on Not Connected. This helps ensure that the system doesn’t appear on the same network twice.
    MVMC Target Virtual Switch

  11. The next screen (not shown) is a simple summary. Review it, then click Back to make changes or Finish to start the conversion.
  12. You’ll now be shown the progress of the conversion job. Once it’s complete, click Close.
    MVMC Progress

If all is well, you’ll eventually be given an all-green success screen:

MVMC Success

There will be some wrap-up operations to carry out. Skip past the PowerShell section to find those steps.

How to use Microsoft Virtual Machine Converter’s PowerShell Cmdlets to Perform P2V

For some reason, MVMC’s cmdlets are not built to autoload properly. You’d think that if any company could find an internal resource to show them how to set that up, it would be Microsoft. Instead, you’ll need to load the module manually each time that you want to use it. In theory, you could place the files into an MvmcCmdlet sub-folder of any of the Modules folders listed in your system’s PSModulePath environment variable and the auto-loader would then pick them up, but I wasn’t certain which of the DLLs were required and didn’t spend much time testing it.
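If you want to experiment with making the module auto-loadable, the following untested sketch shows the general idea. The per-user Modules path and the decision to copy everything from the MVMC installation folder are assumptions on my part, since I don’t know which DLLs the module actually requires:

```powershell
# Inspect the folders that PowerShell searches for auto-loadable modules
$env:PSModulePath -split ';'

# Untested sketch: copy all MVMC files into a per-user MvmcCmdlet module
# folder. Copying everything is the safest guess because the exact set of
# required DLLs is unknown.
$target = Join-Path ([Environment]::GetFolderPath('MyDocuments')) 'WindowsPowerShell\Modules\MvmcCmdlet'
New-Item -ItemType Directory -Path $target -Force | Out-Null
Copy-Item -Path 'C:\Program Files\Microsoft Virtual Machine Converter\*' -Destination $target -Recurse
```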

  1. Open an elevated PowerShell prompt on your broker system.
  2. Import the module: Import-Module -Name 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'
  3. Load the credential to use on the source physical machine. You can load a credential set in a wide variety of ways. To interactively enter a credential set and save it to a variable: $SourceCredential = Get-Credential
  4. Connect to the source machine: $SourceConnection = New-MvmcP2VSourceConnection -PhysicalServer '' -SourceCredential $SourceCredential
  5. Set up the P2V agent and retrieve information from it: $SourceInfo = Get-MvmcP2VSourceSystemInformation -P2VSourceConnection $SourceConnection. We’ve stored this information in a variable so that we can continue to use it with other cmdlets in this module, but you are more than welcome to look at the information that it gathered. Use $SourceInfo | Get-Member to see its properties. You can then look at any of the members just by entering the variable name, a dot, and the property that you’d like to see. Ex: $SourceInfo.Services
  6. Create a variable to hold the parameters of the virtual machine to be created: $TargetVMParameters = New-MvmcP2VRequestParam. This object has a few properties that you can look at in the same way that you did with the $SourceInfo variable, although they’ll all be empty this time.
  7. Populate the SelectedDrives parameter of $TargetVMParameters with all of the drives from the source machine: $TargetVMParameters.SelectedDrives.AddRange($SourceInfo.LogicalDrives). If you’d prefer, you can add individual drives, ex: $TargetVMParameters.SelectedDrives.Add($SourceInfo.LogicalDrives[0]) will add only the first drive from the source machine. You can continue using .Add to specify other drives until you have the ones that you want. Every single item here will have a VHD created just for it.
  8. The VHDs will be created as dynamically expanding, and I don’t recommend that you change that. You’re probably going to want to convert them to VHDX later anyway, so, if you’re dead set on fixed, wait until you can convert it yourself. Otherwise, your temp and destination space consumption will be higher than necessary. If you really want to create them as fixed right now: $TargetVMParameters.SelectedDrives | foreach { $_.IsFixed = $True }
  9. Populate the CPUCount parameter of $TargetVMParameters. You can enter a number or use the same as the source: $TargetVMParameters.CpuCount = $SourceInfo.CoreSystem.PhysicalProcessorCount.
  10. Populate the StartupMemoryInMB parameter of $TargetVMParameters. As with CPU, you can pull it from the source system: $TargetVMParameters.StartupMemoryInMB = $SourceInfo.CoreSystem.MemoryInMB. This is potentially a bit more dangerous, as it could create a VM that is simply too large to start. You can, of course, just specify an integer value.
  11. The final task is to set up the network adapter(s). If you skip this step, your virtual machine will be created without any virtual network adapters at all. That’s a viable option, but I recommend against it because MVMC can keep the OS-to-adapter IDs intact. You can add virtual adapters somewhat like you did with the hard drives. The differences are that you can only add one adapter at a time and you also need to specify which, if any, virtual switch to connect the adapter to. If you use an empty string, then the adapter remains disconnected. Some samples:
    1. Copy the first physical adapter and leave it disconnected: $TargetVMParameters.SelectedNetworkAdapters.Add($SourceInfo.NetworkAdapters[0], '')
    2. Add all adapters and connect them to the virtual switch named “vSwitch”: $SourceInfo.NetworkAdapters | foreach { $TargetVMParameters.SelectedNetworkAdapters.Add($_, 'vSwitch') }
  12. You’ve collected all the information from the source system and defined the target system. Let’s turn our attention to the target host. Start by gathering the credential set that will be used to create the new virtual machine: $DestinationCredential = Get-Credential. You can use the same credential as the source if that will work: $DestinationCredential = $SourceCredential
  13. Open a connection to the target Hyper-V host: $DestinationConnection = New-MVMCHyperVHostConnection -HyperVServer '' -HostCredential $DestinationCredential.
  14. All that’s left is to perform the conversion: ConvertTo-MvmcP2V -SourceMachineConnection $SourceConnection -DestinationLiteralPath '\\svhv2\c$\LocalVMs' -DestinationHyperVHostConnection $DestinationConnection -TempWorkingFolder 'C:\Temp' -VmName 'svmigrated' -P2VRequestParam $TargetVMParameters. Take a close look at the DestinationLiteralPath that I used. This cmdlet operates from the perspective of the broker, not the target host (contrast with Move-VM/Move-VMStorage).
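Putting the steps above together, a complete conversion run looks roughly like the following sketch. The server names (physhost01, svhv2), paths, and VM name are placeholders that you’ll need to replace with your own values:

```powershell
# Load the MVMC module (it does not auto-load)
Import-Module -Name 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

# Connect to the source physical machine and gather its information
$SourceCredential = Get-Credential -Message 'Source machine credentials'
$SourceConnection = New-MvmcP2VSourceConnection -PhysicalServer 'physhost01' -SourceCredential $SourceCredential
$SourceInfo = Get-MvmcP2VSourceSystemInformation -P2VSourceConnection $SourceConnection

# Define the target VM: all drives, same CPU/memory as the source,
# first network adapter copied but left disconnected
$TargetVMParameters = New-MvmcP2VRequestParam
$TargetVMParameters.SelectedDrives.AddRange($SourceInfo.LogicalDrives)
$TargetVMParameters.CpuCount = $SourceInfo.CoreSystem.PhysicalProcessorCount
$TargetVMParameters.StartupMemoryInMB = $SourceInfo.CoreSystem.MemoryInMB
$TargetVMParameters.SelectedNetworkAdapters.Add($SourceInfo.NetworkAdapters[0], '')

# Connect to the target Hyper-V host and run the conversion.
# Remember: DestinationLiteralPath is from the broker's perspective.
$DestinationCredential = Get-Credential -Message 'Target host credentials'
$DestinationConnection = New-MVMCHyperVHostConnection -HyperVServer 'svhv2' -HostCredential $DestinationCredential
ConvertTo-MvmcP2V -SourceMachineConnection $SourceConnection `
    -DestinationLiteralPath '\\svhv2\c$\LocalVMs' `
    -DestinationHyperVHostConnection $DestinationConnection `
    -TempWorkingFolder 'C:\Temp' `
    -VmName 'svmigrated' `
    -P2VRequestParam $TargetVMParameters
```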

Post-Conversion Fix-Up and Notes

Do not forget to turn off or otherwise disconnect the source physical system before turning on the virtual replacement!

Virtual network adapters are not placed in a VLAN. If a VLAN is needed, you’ll need to set that after the VM is created.
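You can set the VLAN afterward with the standard Hyper-V module on the target host; the VM name and VLAN ID below are examples only:

```powershell
# Place the VM's network adapter(s) in access mode on VLAN 12 (example values)
Set-VMNetworkAdapterVlan -VMName 'svmigrated' -Access -VlanId 12
```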

The virtual machine will be set to use fixed memory. If you’d like to use Dynamic Memory, you’ll need to set that after the VM is created.
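Enabling Dynamic Memory is a one-liner on the target host while the VM is off; the name and memory values here are examples only:

```powershell
# Switch the converted VM to Dynamic Memory (run while the VM is off)
Set-VMMemory -VMName 'svmigrated' -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 2GB -MaximumBytes 8GB
```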

The process automatically creates a sub-folder of DestinationLiteralPath with the name of the virtual machine. All of the virtual machine’s files are placed there. Feel free to use Storage Live Migration to place the files anywhere that you like.
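For example, to consolidate the VM’s files into a different location on the target host (path and name are placeholders):

```powershell
# Storage-migrate all of the VM's files to a new location while it runs
Move-VMStorage -VMName 'svmigrated' -DestinationStoragePath 'D:\VMs\svmigrated'
```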

I do not know of any way to recombine the volumes so that they are all together in a single VHD. It might be possible to use a partition manipulation tool such as Clonezilla.

Assuming that you don’t want to continue using the older VHD format, you’ll need to convert to VHDX. We have an article explaining how to do that. Remember that the new disk is created alongside the old.
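As a quick sketch of the conversion (hypothetical file names), Convert-VHD infers the target format from the extension; the disk must not be attached to a running VM, and you’ll need to swap the new file into the VM’s configuration yourself:

```powershell
# Creates a new VHDX alongside the old VHD; the old file is not removed
Convert-VHD -Path 'D:\VMs\svmigrated\C_Drive.vhd' -DestinationPath 'D:\VMs\svmigrated\C_Drive.vhdx'
```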

When I first ran through the PowerShell steps, I didn’t realize that DestinationLiteralPath was from the broker’s perspective, so I used the local path on the Hyper-V host (C:\LocalVMs). The cmdlet accepted my input, ran for a very long time, and then failed because of the path. I then discovered that it had created an entire VM on my broker machine in C:\LocalVMs; the failure occurred only when it tried to connect that folder to the target host. Had I known, I could have copied the output to the host and imported it rather than going through the whole conversion again.

Even though I used what would ordinarily be a completely non-workable path for DestinationLiteralPath, the cmdlet automatically fixed up the VM completely so that it ran from the correct local path.

If the conversion process does fail for some reason during the final stage, it will almost always have created a virtual machine. You’ll need to manually delete it before retrying.
