Introduction to Azure Cloud Shell: Manage Azure from a Browser

Are you finding the GUI of Azure Portal difficult to work with?

You’re not alone; it’s easy to get lost. Changes and updates land every day, and the Azure overview blades can be pretty clunky to traverse. With Azure Cloud Shell, however, we can use PowerShell or Bash to manage Azure resources instead of having to click around in the GUI.

So what is Azure Cloud Shell? It is a web-based shell accessed via a web browser. It automatically authenticates with your Azure sign-in credentials and allows you to manage all the Azure resources your account has access to, eliminating the need to load Azure modules on workstations. For developers and IT Pros who require shell access to their Azure resources, Azure Cloud Shell can be a very useful solution, as they won’t have to remote into “management” nodes that have the Azure PowerShell modules installed.


How Azure Cloud Shell Works

As of right now, Azure Cloud Shell gives users two different environments to use. One is a Bash environment, which is essentially a terminal connection to a Linux VM in Azure that gets spun up for you, free of charge. The second is a PowerShell environment, which runs Windows PowerShell on a Windows Server Core VM. You will need some storage provisioned in your Azure account in order to create the $home directory. This acts as the persistent storage for the console session and allows you to upload scripts to run in the console.

Getting Started

To get started using Azure Cloud Shell, go to shell.azure.com. You will be prompted to sign in with your Azure account credentials:

Azure Cloud Shell welcome

Now we have some options. We can select which environment we prefer to run in. We can run in a Bash shell or we can use PowerShell. Pick whichever one you’re more comfortable with. For this example, I’ve selected PowerShell:

Next, we get a prompt for storage, since we haven’t configured the shell settings with this account yet. Simply select the “Create Now” button to go ahead and have Azure create a new resource group, or select “Show Advanced Settings” to configure those settings to your preference:

Once the storage is provisioned, we will wait a little bit for the console to finish loading, and then the shell should be ready for us to use!

In the upper left corner, we have all of the various controls for the console. We can reset the console, start a new session, switch to Bash, and upload files to our cloud drive:

For an example, I uploaded an activate.bat script file to my cloud drive. In order to access it we simply reference $home and specify our CloudDrive:
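A sketch of what that looks like, assuming the default clouddrive mount point under $home (the exact folder casing can vary):

```powershell
# The clouddrive folder under $home maps to the Azure file share backing Cloud Shell
Set-Location "$home\clouddrive"
Get-ChildItem
```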

Now I can see my script:

This will allow you to deploy your custom PowerShell scripts and modules in Azure from any device, assuming you have access to a web browser, of course. Pretty neat!

Upcoming Changes and Things to Note

  • On May 21st, Microsoft announced that it is moving to a Linux platform for both the PowerShell and Bash experiences. How is this possible? Essentially, a Linux container will host the shell, with PowerShell Core 6 as the default experience. Microsoft claims that startup times will be much faster than previous versions because of the Linux container. To switch from PowerShell to Bash in the console, simply type “bash”; to get back to PowerShell Core, type “pwsh”.
  • Microsoft is planning on having “persistent settings” for Git and SSH tools so that the settings for these tools are saved to the CloudDrive and users won’t have to hassle with them all the time.
  • There is some ongoing pain with modules. Microsoft is still porting modules to .NET Core (for use with PowerShell Core), and there will be a transition period while this happens; the most commonly used modules are being ported first. In the meantime, there is one workaround that many people seem to forget: implicit remoting. This is the process of importing a module that is installed on another endpoint into your local PowerShell session, so that calling its commands executes them remotely on the node where the module is installed. It can be very useful until more modules are converted to .NET Core.
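A minimal sketch of implicit remoting; the computer name and module are placeholders for whatever node actually has the module installed:

```powershell
# Establish a session to a node that has the module you need
$session = New-PSSession -ComputerName MGMT01

# Import that module's commands into the local session; calls to the
# imported cmdlets now execute remotely on MGMT01
Import-PSSession -Session $session -Module ActiveDirectory
```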

Want to Learn More About Microsoft Cloud Services?

The development pace of Azure is among the most aggressive in the market today, and as you can see, Azure Cloud Shell is constantly being updated and improved. In the near future, it will most likely be one of the more common methods for interacting with Azure resources. It provides Azure customers with a seamless way of managing and automating their Azure resources without having to authenticate over and over or install extra snap-ins and modules, and it will continue to shape the way we do IT.

What are your thoughts regarding the Azure Cloud Shell? Have you used it yet? What are your initial thoughts? Let us know in the comments section below!

Do you have interest in more Azure Goodness? Are you wondering how to get started with the cloud and move some existing resources into Microsoft Azure? We actually have a panel styled webinar coming up in June that addresses those questions. Join Andy Syrewicze, Didier Van Hoye, and Thomas Maurer for a crash course on how you can plan your journey effectively and smoothly utilizing the exciting cloud technologies coming out of Microsoft including:

  • Windows Server 2019 and the Software-Defined Datacenter
  • New Management Experiences for Infrastructure with Windows Admin Center
  • Hosting an Enterprise Grade Cloud in your datacenter with Azure Stack
  • Taking your first steps into the public cloud with Azure IaaS


3 Tools for Automating Deployments in the Hybrid Cloud [Video]

Automating deployments has quickly become the norm for IT professionals in organizations of nearly every size. But automation can go far beyond just deploying VMs: it can automatically configure Active Directory inside a VM, file services, DNS, and more, providing a boost to productivity, accuracy, and workload management.

Earlier this year I had the privilege of speaking for the MVP Days Virtual Conference. For those that aren’t aware, the MVP Days Virtual Conference is a monthly event hosted by Dave and Cristal Kawula showcasing the skills and know-how of individuals in the Microsoft MVP Program. The idea is that Microsoft MVPs are a great resource of knowledge for IT Pros, and this virtual conference gives them a platform to share that knowledge on a monthly basis.

The following video is a recording of my presentation “3 Tools for Automating Deployments in the Era of the Modern Hybrid Cloud”.

Deployments in the Hybrid Cloud

Workloads and IT infrastructures are becoming more complex and spread out than ever. It used to be that IT Pros had little to worry about outside the confines of their network, but those days are long over. Today, a new workload is just as likely to reside in the public cloud as on premises. Cloud computing technologies like Microsoft Azure have given IT Pros capabilities that were previously unheard of in all but the most complex enterprise datacenters. The IT Pro’s purview no longer stops at the four walls of the network; it extends to wherever the workload lives at a given time.

With these new innovations and technologies comes the ability to mass deploy applications and services either on-premises or in the public cloud in a very easy way, but what happens when you need to automate the deployment of workloads and services that stretch from on-premises to the public cloud? Many IT Pros struggle to automate deployments that stretch across those boundaries for a true hybrid cloud deployment.

In this demo-heavy session, you’ll discover a number of tools to assist you with your deployment operations. Learn:

  • How PowerShell ties-together and executes your deployment strategy end-to-end
  • How PowerShell Direct is used on-premises with Hyper-V to automate more than ever before
  • How Azure IaaS is used to effortlessly extend your automated deployments to the cloud

The Video: 3 Tools for Automating Deployments in the Era of the Modern Hybrid Cloud

Thoughts? Comments?

Be sure to share your thoughts on the session with us in the comments section below! I’m especially interested if there are any 3rd party tools or other methods you’ve used in these kinds of deployment situations. I’d also like to hear about any challenges you’ve encountered in doing operations like this. We’ll be sure to take that info and put together some relevant posts to assist.

Thanks for watching!

Top 5 PowerShell Hacks for Hyper-V

We love PowerShell. The task automation tool enhances productivity and enables you to do things that simply wouldn’t be possible manually. It’s also just plain cool to sit back and watch scripts do their work! The possibilities are endless for tasks specific to individual setups and environments; however, the following five PowerShell hacks should be applicable to all Hyper-V users. Sit back and enjoy the top 5 PowerShell hacks for Hyper-V.

Before we begin, make sure you have the PowerShell module for Hyper-V installed and up to date. Assuming you have kept current, you are managing Hyper-V servers running Windows Server 2016 from a Windows 10 desktop. At least, that’s the smart way. The Windows 10 desktop does not need to have the hypervisor enabled, but you do need the PowerShell module for Hyper-V. In Control Panel, check the installed Windows Features.

 


At a minimum check the box for the “Hyper-V Module for Windows PowerShell”. Then open a PowerShell prompt and run this command:
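For example, a command along these lines lists the installed module and its version:

```powershell
Get-Module -Name Hyper-V -ListAvailable
```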

It should show a version of 2.0.0.0. It is possible you might see older versions, but you can ignore them. PowerShell will use the latest version when you run any Hyper-V command.

5. Set Server Default

When you remotely manage a Hyper-V infrastructure from your desktop, you’ll find yourself constantly specifying the server name.

You might even need to specify a credential. You can simplify your life by setting a default parameter value. There is a special variable called PSDefaultParameterValues that is a hashtable. The key takes the format “cmdletname:parameter”. So to set a default value for the -Computername parameter of Get-VM, I can run code like this:
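A minimal sketch, using a placeholder host name:

```powershell
# Default the -Computername parameter of Get-VM to a specific Hyper-V host
# ("CHI-P50" is a placeholder name)
$PSDefaultParameterValues.Add("Get-VM:Computername","CHI-P50")
```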

Now, every time I run Get-VM it will connect to the specified host. If I want to connect to a different host all I have to do is specify it.

You can add values for just the cmdlets you use, or use wildcards. Put this code in your PowerShell profile script to add all the necessary entries into $PSDefaultParameterValues.
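A sketch of such a profile snippet, with placeholder host and account names:

```powershell
# Prompt once for a credential, then default it and the host name for every
# cmdlet whose name contains "-VM" (covers most of the Hyper-V module)
$cred = Get-Credential "COMPANY\Administrator"
$PSDefaultParameterValues.Add("*-VM*:Computername","CHI-P50")
$PSDefaultParameterValues.Add("*-VM*:Credential",$cred)
```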

This will handle almost all of the cmdlets in the Hyper-V module. Every time I start PowerShell I’ll get prompted for the password. From then on, every Hyper-V command I run will automatically use the Computername and Credential value without me having to type anything.

All of that said, you will have to determine how much you want to set to a default. Adding default values might complicate some of your pipeline expressions, which you’ll see as you keep reading.

4. Measure VM Performance

Another feature I love is the ability to monitor VM performance. However, in order to do this, you must enable VM resource metering on each virtual machine.  If it is enabled, then the virtual machine will have a property called ResourceMeteringEnabled set to True. This makes it easy to find virtual machines that do not have the setting enabled.
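Something along these lines finds the virtual machines that still need the setting turned on:

```powershell
# List VMs that do not yet have resource metering enabled
Get-VM | Where-Object { -not $_.ResourceMeteringEnabled } |
    Select-Object Name, ResourceMeteringEnabled
```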


Since my primary VMs all start with CHI, I will go ahead and enable resource metering. This should take a one line command.

But because I’m using default parameter values everywhere, this fails. Without getting sidetracked into why, I can get this to work with a slight modification.
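A sketch of both forms; exactly how the default values interfere can vary, so treat the failure mode as illustrative:

```powershell
# Normally a single pipeline would do it:
Get-VM CHI* | Enable-VMResourceMetering

# With *-VM* defaults injected everywhere, the piped VM objects can conflict
# with the injected -Computername/-Credential values; naming the VMs directly
# sidesteps the problem:
Enable-VMResourceMetering -VMName CHI*
```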

Now that the virtual machines are properly configured I can gather usage information. Again, this command would normally work:

But because of my default parameter values, I need to adjust the expression:
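A sketch of the two expressions:

```powershell
# The straightforward version:
Get-VM CHI* | Measure-VM

# Adjusted to sidestep the default parameter values:
Measure-VM -VMName CHI*
```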


Even though this is a nicely formatted table, don’t forget that this is a collection of objects, so you can sort, filter, group, export or convert as much as you want. Think of the reporting possibilities. I know I am. Keep an eye out for a future article.

By the way, if you see an error about “Sequence contains no elements”, try disabling resource metering (I bet you can guess the cmdlet to use) and then re-enabling it.

3. Copy VM File

Here’s a nifty feature, especially if you are building an automatic provisioning system or spinning up a virtual machine as part of a continuous integration build pipeline. With PowerShell and Hyper-V, you can copy files from the host to the guest VM, with one caveat: the VM must be running Windows. Here’s a simple example.
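A sketch of the single-file case; the VM name and both paths are placeholders:

```powershell
# Copy a file from the host's file system into the CHI-WIN10 guest
Copy-VMFile -Name CHI-WIN10 -SourcePath C:\Scripts\Update.ps1 `
    -DestinationPath C:\Files\Update.ps1 -CreateFullPath -FileSource Host
```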

The copy occurs from the Hyper-V host. So even though I ran this from a client desktop, the copy happened from the server. I’m using my PSDefaultParameterValue for the computername, which is why you don’t see it. The SourcePath parameter value is relative to the Hyper-V server. The DestinationPath parameter is relative to the virtual machine. You can even force the command to create the full destination path structure. One thing you have to include is the -FileSource parameter, which always has a value of Host. This could be a good candidate for a PSDefaultParameterValue to save yourself some typing. This is a handy tool to copy a single file to multiple virtual machines.
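The multi-VM variation might look like this, feeding a parenthesized pipeline to the -VM parameter (paths again placeholders):

```powershell
Copy-VMFile -VM (Get-VM CHI* | Where-Object State -eq 'Running') `
    -SourcePath C:\Scripts\Update.ps1 -DestinationPath C:\Files\Update.ps1 `
    -CreateFullPath -FileSource Host
```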

In this variation, I have an implicit pipelined expression for the -VM parameter to get all running virtual machines that start with CHI*. And even though I don’t show it, the command won’t copy the file if it already exists. So if you need to overwrite existing files, include the -Force parameter.

Unfortunately, this command can only copy a single file at a time. You can’t use it to copy an entire directory. However, you can use Foreach-Object and copy multiple files. Since the copy has to occur from the host, enumerating files also needs to happen on the host. But that’s no problem. I can use PowerShell remoting from the client to copy all files from C:\Scripts on CHI-P50 to the CHI-WIN10 virtual machine.
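A sketch of that remoting approach; host, guest, and folder names are placeholders:

```powershell
# Enumerate C:\Scripts on the host and copy each file into the CHI-WIN10 guest
Invoke-Command -ComputerName CHI-P50 -ScriptBlock {
    Get-ChildItem -Path C:\Scripts -File | ForEach-Object {
        Copy-VMFile -Name CHI-WIN10 -SourcePath $_.FullName `
            -DestinationPath "C:\Scripts\$($_.Name)" -CreateFullPath -FileSource Host
    }
}
```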

Or if you are inclined, you could create your own function that wraps around Copy-VMFile to make it easier to copy entire folders, assuming you didn’t want to use PowerShell Direct.

2. Measure Snapshot Usage

It is simple enough to list all snapshots on your Hyper-V server with the Get-VMSnapshot command.
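For example:

```powershell
Get-VM | Get-VMSnapshot
```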


What would be helpful is knowing how much space each snapshot is consuming. This isn’t especially difficult to calculate, although you can’t do it remotely, since the snapshots are stored on the Hyper-V host. Each snapshot includes a property indicating where it is stored. By combining that piece of information with the virtual machine ID, you can run code like this on the server. I’ll go ahead and wrap it in an Invoke-Command expression and run it remotely.
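A sketch of that calculation; how the snapshot files are matched to each snapshot (by path plus VM ID) is an assumption here:

```powershell
Invoke-Command -ComputerName CHI-P50 -ScriptBlock {
    Get-VM | Get-VMSnapshot | ForEach-Object {
        # Find files in the snapshot's storage path that carry the VM's ID
        $files = Get-ChildItem -Path $_.Path -Recurse -File |
            Where-Object Name -match $_.VMId
        [pscustomobject]@{
            VMName   = $_.VMName
            Snapshot = $_.Name
            SizeMB   = [math]::Round(($files |
                Measure-Object -Property Length -Sum).Sum / 1MB, 2)
        }
    }
}
```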


That’s nice information. Or I might want to format the results since it is possible a virtual machine may have multiple snapshots.

In this example, I’m getting the information remotely but formatting it locally.


Or do whatever you want with the results.

1. Total Storage

Lastly, here’s a fun one line command to figure out how much disk storage all of your virtual machines are taking or are prepared to take.

The tricky part is that the Get-VHD command needs to be able to inspect the disk file where it resides which is most likely on the server. Once again, the easy fix is to use PowerShell Remoting.
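A sketch of that one-liner wrapped in remoting so Get-VHD runs where the disk files live (host name is a placeholder):

```powershell
Invoke-Command -ComputerName CHI-P50 -ScriptBlock {
    Get-VM | Get-VMHardDiskDrive | Get-VHD |
        Measure-Object -Property FileSize, Size -Sum
}
```

FileSize is the current size on disk; Size is the maximum the disks are prepared to take.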

If you try this, you’ll most likely get an eye-straining large number. It might be easier to format it to GB.
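One way to do that, reshaping the raw sums locally:

```powershell
Invoke-Command -ComputerName CHI-P50 -ScriptBlock {
    Get-VM | Get-VMHardDiskDrive | Get-VHD |
        Measure-Object -Property FileSize, Size -Sum
} | Select-Object Property,
    @{Name = 'SumGB'; Expression = { [math]::Round($_.Sum / 1GB) }}
```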


I rounded the values to the nearest GB. Isn’t that easier to read? What you do with the information is up to you. But as you can see it isn’t difficult to discover.

The more you use the Hyper-V module in PowerShell, the more opportunities you’ll discover. I love putting these things together to solve problems or fill a particular need. I’ll be back with more PowerShell and Hyper-V fun in future articles.

Comments or Suggestions?

What do you think of the list? Are there any PowerShell hacks you’d like to see here? Let me know in the comments below and if they are worthy of inclusion I’ll add them to the list! Also, if anything is unclear don’t hesitate to let me know below and I’ll ping an answer right back as soon as possible.

Why You Should Be Compacting Your Hyper-V Virtual Disks

Hyper-V’s dynamically expanding virtual hard disks (VHD/VHDX) provide numerous benefits. I suspect that most of us use them to achieve efficient use of physical storage. However, their convenience does come with a maintenance cost. They grow as needed, but they will never release any space unless you manually intervene (it could be scripted if you so desire). In this article, we’ll cover the reasons, prerequisites, and steps necessary to reclaim unused space from your dynamically-expanding VHDXs.

“Compact” or “Shrink” Hyper-V Virtual Hard Disks?

Sometimes definitions for “compact” and “shrink” get crossed when talking about VHDXs. They do not mean the same thing and you use completely different operations to achieve them.

Every virtual hard disk type (fixed, dynamically-expanding, and differencing) has a fixed upper size limit. “Shrinking” a VHDX reduces that limit. Any of the types can be shrunk. “Compacting” does not change the upper limit at all. Instead, it reduces the physical disk space consumed by a dynamically-expanding or differencing VHDX by removing empty blocks. I’ll explain what a “block” means to these VHDXs in the next section. You can run a compact operation on any of the types, but it will never have any effect on a fixed VHDX (the graphical interface screens will not even allow you to try). For more information on the shrink process, refer to our earlier article on the issue.

What Does “Compact” Mean for Hyper-V Virtual Hard Disks

Dynamically-expanding VHDXs function in a straightforward way. When created, each has a specific, permanently-set “block size”. When a guest operating system or some external action attempts to write information into the VHDX, the VHDX driver first checks to see if the file contains sufficient space in unclaimed blocks for that data. If it does, then the write proceeds. If it does not, the VHDX driver expands the physical file by the minimum number of blocks needed to contain the write.

However, when a process deletes a file (or uses some other technique to clear data), the VHDX driver does not do anything with the block. Most importantly, it does not automatically remove empty blocks from the file. I’ve read a fair number of complaints on that, but they lack merit. To maintain such a thing would be inherently dangerous and prohibitively expensive in terms of IOPS.

Instead, we manually initiate the compact process. The system scans the data region of the VHDX, looking for completely empty blocks. When it finds one, it removes it from the file. I’m not aware of the precise mechanism that it uses, but I suspect that it simply finds the next used block and shifts it backward, eventually lopping off the end of the file.

compact-blockview

Why Should I Compact Hyper-V Virtual Hard Disks?

Physical systems rarely use up all of their disk space. The same can be said for virtual systems. Therein lies the primary reason to use dynamically-expanding virtual hard disks. I especially like them for VHDXs that contain operating systems when you’ll be placing data on different VHDXs. I’ve got some VHDXs nearing five years of age that have a 60GB maximum but still use only around 25GB on disk. With many OS VHDXs, that quickly adds up to a great deal of saved storage. That’s only one example, though; I generally use dynamically-expanding VHDX in any case where I know it won’t cause problems (high-churn database systems would be a poor usage for them).

However, sometimes operations occur that consume an inordinate amount of disk space. Sometimes space is just used and there’s nothing you can do about it. Other times, the usage is only temporary. For example, you might copy several ISO files to the C: drive of a guest in order to perform a local software installation. If you’re doing in-place operating system upgrades, they’ll make a duplicate of the existing installation so that you can perform a rollback. A system may begin life with an oversized swap file that you later move to a different VHDX, leaving behind gigabytes of unused space. I always compact VHDXs with sysprepped operating systems that I will use as templates.

Why Should I NOT Compact Hyper-V Virtual Hard Disks?

“Premature optimization is the root of all evil.” — Donald Knuth

I often complain about administrators who arbitrarily guess that their disk performance needs will be so great that anything other than fixed (or, egads, pass-through) is unacceptable. At the other end of the spectrum, some administrators become agitated when their VHDXs consume even a few gigabytes above their minimum. Most dynamically-expanding VHDXs will eventually find some sort of “normal state” in which they can re-use space from deleted data (example: Windows Updates).

I can’t give you a precise rule for when a compact operation becomes worthwhile, but do not use it to try to squeeze every last drop of space out of your system. You’ll eventually lose that battle one way or another. Do not compact over minor gains. Do not compact VHDXs that will never find any reasonable steady state (such as file servers). Do not compact as part of some routine maintenance cycle. If you have set up an automated script because your systems constantly approach the hard limits of your storage, spend the money to expand your storage. Compacting requires downtime and equipment wear, so it’s not free.

How Do I Use PowerShell to Compact a VHDX?

You can compact using GUI tools, but PowerShell provides the complete method. Remember two things:

  • You cannot compact a VHDX connected to a running virtual machine.
  • The computer system that runs the Optimize-VHD cmdlet must have the complete Hyper-V role installed. The PowerShell module alone does not contain the necessary system services. If you must run the compact operation on a system that does not have Hyper-V, you can use diskpart instead.

The process centers around the Optimize-VHD cmdlet. In general, you’ll want to mount the VHDX into the management operating system to get the best results. For example:
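A sketch of that sequence; the path is a placeholder:

```powershell
# Mount read-only so Full mode can read the guest file system's metadata
Mount-VHD -Path C:\VMs\demo.vhdx -ReadOnly
Optimize-VHD -Path C:\VMs\demo.vhdx -Mode Full
Dismount-VHD -Path C:\VMs\demo.vhdx
```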

Be aware that even for very small files, this can take some time to complete. The amount of time depends mostly on your available CPU cycles and the speed of your disk subsystem.

A PowerShell Tool for Compacting VHDX Files

You can easily convert the above three lines into a re-usable utility:
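A minimal sketch of such a function; the name is hypothetical and error handling is left out:

```powershell
function Optimize-VHDXFull {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [string]$Path
    )
    process {
        # Mount read-only, run a Full optimize pass, then dismount
        Mount-VHD -Path $Path -ReadOnly
        Optimize-VHD -Path $Path -Mode Full
        Dismount-VHD -Path $Path
    }
}

# Usage: Optimize-VHDXFull -Path C:\VMs\demo.vhdx
```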

You can dot-source that or add it to your PowerShell profile to use it anytime.

What Do the Different VHDX Compact Modes Mean?

If you read the help or tab through the options for the -Mode parameter, you’ll see several choices. I feel like the built-in help describes them well enough, but more than a few people have gotten lost in terms with similar meanings. A bit of clarity:

  • An empty block means a continuous length of space within the VHDX that its header has designated as a block and contains all zeros. A single 1 anywhere in the entire block means that the VHDX driver will treat it as used.
  • An unused block may have some 1s, but the guest operating system’s file system has marked the space as not being used. This is common after file deletion; few operating systems will actually remove the bits from the disk. They simply mark it as unused in their file table(s). Two points on that:
    • At this time, the VHDX driver works most reliably with the NTFS file system. I have not yet tested for ReFS. It does not recognize any Linux file systems. In order for the system to best detect unused blocks, the VHDX must be mounted in read-only mode.
    • The VHDX driver also recognizes the Trim and Unmap commands. That allows it to work with your hardware to determine what blocks are unused.

 

Hopefully, that helps smooth out explanations of the modes:

  • Full: Full optimization requires the most time and resources. It will take out both empty and unused blocks. In order to detect unused blocks, the VHDX must be mounted in read-only mode. If the VHDX isn’t mounted, then it won’t be able to find empty blocks as easily (assume that it won’t find them at all).
  • Quick: The system only looks for unused blocks using the contained file system’s metadata. If you don’t mount the VHDX first, nothing will change.
  • Pretrimmed: Utilizes information from the trim/unmap commands to detect unused blocks. Does not look for empty blocks and does not query the contained file system for unused blocks.
  • Prezeroed: If the VHDX driver intercepted zero writes for existing blocks (such as from tools like sdelete), then it has already recorded in its own tables that they’re empty. Use this mode to remove only those blocks.
  • Retrim: Retrim reads from the VHDX file’s metadata for blocks marked as empty and reports them as trim or unmap commands to the underlying hardware.

Only the Full mode directly scans the file for empty blocks. The others work only with the VHDX and/or guest file system metadata.

How Do I Use Hyper-V Manager to Compact a VHDX?

Hyper-V Manager can compact a disconnected VHD/X or one attached to a powered-off virtual machine. It does not perform a very thorough job, though. I do not know which method it uses, but you’ll discover very quickly that it is not the equivalent of PowerShell’s “Full”. Whatever technique it uses, it will not automatically mount the VHDX.

You can access the Edit function directly on a VM’s disk property page or by going to the Edit Disk link in the main interface. Use the main interface for VHDXs that have no owning virtual machine.

compact-hvmstarts

Both ways invoke the same wizard.

  1. On the first screen, you select the target VHDX file. If you started from a VM’s property page, then the Location line is filled in and grayed out.
    compact-browse
  2. Next, the wizard will ask what you want to do with the disk. Choose Compact. If the Compact option does not appear, the virtual machine is turned on or you selected a fixed disk.
    compact-opselect
  3. Click Finish on the final screen. You’ll get a small dialog that shows the progress.
    compact-finishwiz

Can I Compact a VHDX that has Differencing Children?

The system will not prevent you from compacting a VHDX that has children unless one of the children is in use. If you manage to compact the parent VHDX, you will break the parent-child relationship.

What About VHDXs that Contain Non-Microsoft File Systems?

The Full mode will detect fully empty blocks, but cannot interact with any non-Microsoft file system metadata to locate unused blocks. However, most modern filesystems now support trim/unmap. If you can perform that operation within the guest operating system, you can then perform a Full optimize pass to shrink the VHDX. I wrote an article on compacting VHDXs with Linux filesystems that has more information.

Why Does the Compact Operation Save Nothing or Very Little?

The simple answer is that the blocks are not empty. If a block contains a single non-zero bit, then the entire block must be kept. You cannot see the VHDX’s blocks from within the guest. Short of some sort of file scanner, I don’t know of any way to view the blocks aside from a hex editor.

Some tips on cleaning up space:

  • Delete as many files as possible
  • Clear the guest’s Recycle Bin(s)
  • Use utilities that write zeros to free space
  • Use a defragmentation tool that will consolidate free space

If you’re really stuck, you can use a tool like robocopy or rsync to move the files into a completely new VHDX.

What Block Sizes are Valid for a Dynamically-Expanding VHDX?

By default, all VHDXs have a 32 megabyte block size. You can only change that at creation time, using New-VHD with the BlockSizeBytes parameter set as desired. Unfortunately, the documentation doesn’t list the possible values, so it’s a bit of trial and error. I tried 1MB, 8MB, 16MB, 64MB, and 256MB sizes with success, in addition to the default 32MB. 512 bytes failed, as did 512MB.
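For example, a sketch of creating a dynamically-expanding VHDX with a non-default block size (path and sizes are placeholders):

```powershell
# Block size can only be set at creation time
New-VHD -Path C:\VMs\linux-guest.vhdx -SizeBytes 60GB -Dynamic -BlockSizeBytes 1MB
```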

How Can I Find a Dynamically-Expanding VHDX’s Block Size?

Only PowerShell’s Get-VHD can show you the block size.
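For example:

```powershell
Get-VHD -Path C:\VMs\demo.vhdx | Select-Object Path, VhdFormat, VhdType, BlockSize
```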

compact-showblocksize

 

What Block Size Should I Use?

You have a basic trade-off: small block sizes result in more writes with less unused space. Larger block sizes result in fewer writes with potentially more unused space.

With small block sizes, scattered small files will be less likely to result in a VHDX with a great many almost-but-not-quite empty blocks. For the various ext* filesystems found on Linux, a 1MB block size can greatly reduce your VHDX sizes. The greatest downside is that you might get a lot of fragmentation of the physical VHDX. Then again, you might not. It depends on how the writes occur. Infrequent small writes will result in more fragmentation. If you use robust, modern storage, then fragmentation is not usually a major concern.

Large block sizes don’t have much to offer. If you were to store a rarely-accessed archival database in a dynamically-expanding VHDX with block sizes that match your guest OS’s allocation unit size, you might, in theory, see something improve. Probably not, though.

I recommend that you stick to the default 32MB block size for any guest that doesn’t use an ext filesystem. Use 1MB block sizes for those, if you remember.

Free Hyper-V Script: Virtual Machine Storage Consistency Diagnosis

Hyper-V allows you to scatter a virtual machine’s files just about anywhere. That might be good; it might be bad. That all depends on your perspective and need. No good can come from losing track of your files’ sprawl, though. Additionally, the placement and migration tools don’t always work exactly as expected. Even if the tools work perfectly, sometimes administrators don’t. To help you sort out these difficulties, I’ve created a small PowerShell script that will report on the consistency of a virtual machine’s file placement.


How the Script Works

I designed the script to be quick and simple to use and understand. Under the hood, it does these things:

  1. Gathers the location of the virtual machine’s configuration files. It considers that location to be the root.
  2. Gathers the location of the virtual machine’s checkpoint files. It compares that to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  3. Gathers the location of the virtual machine’s second-level paging files. It compares that to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  4. Gathers the location of the virtual machine’s individual virtual hard disk files. It compares each to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  5. Gathers the location of any CD or DVD image files attached to the virtual machine. It compares each to the root. If the locations aren’t the same, it marks the VM’s storage as inconsistent.
  6. Emits a custom object that contains:
    1. The name of the virtual machine
    2. The name of the virtual machine’s Hyper-V host
    3. The virtual machine’s ID (a string-formatted GUID)
    4. A boolean ($true or $false) value that indicates if the virtual machine’s storage is consistent
    5. An array that contains a record of each location. Each entry in the array is itself a custom object with these two pieces:
      1. The component name (Configuration, Checkpoints, etc.)
      2. The location of the component

It doesn’t check for pass-through. A VM with pass-through disks would have inconsistent storage placement by definition, but this script completely ignores pass-through.

The script doesn’t go through a lot of validation to check if storage locations are truly unique. If you have a VM using “\\system\VMs” and “\\system\VMFiles” but they’re both the same physical folder, the VM will be marked as inconsistent.

Script Usage

I built the script to be named Get-VMStorageConsistency. All documentation and examples are built around that name. You must supply it with a virtual machine. All other parameters are optional.

Use Get-Help Get-VMStorageConsistency -Full to see the built-in help.

Parameters:

  • VM (aliases “VMName” and “Name”): The virtual machine(s) to check. The input type is an array, so it will accept one or more of any of the following:
    • A string that contains the virtual machine’s name
    • A VirtualMachine object (as output from Get-VM, etc.)
    • A GUID object that contains the virtual machine’s ID
    • A WMI Msvm_ComputerSystem object
    • A WMI MSCluster_Resource object
  • ComputerName: The name of the host that owns the virtual machine. If not specified, defaults to the local system. Ignored if VM is anything other than a String or GUID. Only accepts strings.
  • DisksOnly: A switch parameter (include if you want to use it, leave off otherwise). If specified, the script only checks at the physical storage level for consistency. Examples:
    • With this switch, C:\VMs and C:\VMCheckpoints are the same thing (simplifies to C:\)
    • With this switch, C:\ClusterStorage\VMs1\VMFiles and C:\ClusterStorage\VMs1\VHDXs are the same thing (simplifies to C:\ClusterStorage\VMs1)
    • With this switch, \\storage1\VMs\VMFiles and \\storage1\VMs\VHDFiles are the same thing (simplifies to \\storage1\VMs)
    • Without this switch, all of the above are treated as unique locations
  • IgnoreVHDFolder: A switch parameter (include if you want to use it, leave off otherwise). If specified, ignores the final “Virtual Hard Disks” path for virtual hard disks. Notes:
    • With this switch, VHDXs in “C:\VMs\Virtual Hard Disks” will be treated as though they were found in C:\VMs
    • With this switch, VHDXs in “C:\VMs\Virtual Hard Disks\VHDXs” will not be treated specially. This is because there is a folder underneath the one named “Virtual Hard Disks”.
    • With this switch, a VM configured to hold its checkpoints in a folder named “C:\VMs\Virtual Hard Disks\” will not treat its checkpoints especially. This is because the switch only applies to virtual hard disk files.
  • Verbose: A built-in switch parameter. If specified, the script will use the Verbose output stream to show you the exact comparison that caused a virtual machine to be marked as inconsistent. Note: The script only traps the first item that causes a virtual machine to be inconsistent. That’s because the Consistent marker is a boolean; in boolean logic, it’s not possible to become more false. Therefore, I considered it to be wasteful to continue processing. However, all locations are stored in the Locations property of the report. You can use that for an accurate assessment of all the VMs’ component locations.

The Output Object

The following shows the output of the cmdlet run on my host with the -IgnoreVHDFolder and -Verbose switches set:

conscript_output

Output object structure:

  • Name: String that contains the virtual machine’s name
  • ComputerName: String that contains the name of the Hyper-V host that currently owns the virtual machine
  • VMId: String representation of the virtual machine’s GUID
  • Consistent: Boolean value that indicates whether or not the virtual machine’s storage is consistently placed
  • Location: An array of custom objects that contain information about the location of each component

Location object structure:

  • Component: A string that identifies the component. I made these strings up. Possible values:
    • Configuration: Location of the “Virtual Machines” folder that contains the VM’s definition files (.xml, .vmcx, .bin, etc.)
    • Checkpoints: Location of the “Snapshots” folder configured for this virtual machine. Note: That folder might not physically exist if the VM uses a non-default location and has never been checkpointed.
    • SecondLevelPaging: Location of the folder where the VM will place its .slp files if it ever uses second-level paging.
    • Virtual Hard Disk: Full path of the virtual hard disk.
    • CD/DVD Image: Full path of the attached ISO.

An example that puts the object to use:
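One way to write that, sketched here under the assumption that the script file is saved as C:\Scripts\Get-VMStorageConsistency.ps1:

```powershell
# Sketch; assumes the script file path shown above
C:\Scripts\Get-VMStorageConsistency.ps1 -VM (Get-VM) |
    Where-Object -Property Consistent -EQ $false |
    Select-Object -ExpandProperty Name
```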

The above will output only the names of local virtual machines with inconsistent storage.

Script Source

The script is intended to be named “Get-VMStorageConsistency”. As shown, you would call its file (ex.: C:\Scripts\Get-VMStorageConsistency). If you want to use it dot-sourced or in your profile, uncomment lines 55, 56, and 262 (subject to change from editing; look for the commented-out function { } delimiters).

 

How Device Naming for Network Adapters Works in Hyper-V 2016


Not all of the features introduced with Hyper-V 2016 made a splash. One of the less-publicized improvements allows you to determine a virtual network adapter’s name from within the guest operating system. I don’t even see it in any official documentation, so I don’t know what to officially call it. The related settings use the term “device naming”, so we’ll call it that. Let’s see how to put it to use.

Requirements for Device Naming for Network Adapters in Hyper-V 2016

For this feature to work, you need:

  • 2016-level hypervisor: Hyper-V Server, Windows Server, Windows 10
  • Generation 2 virtual machine
  • Virtual machine with a configuration version of at least 6.2
  • Windows Server 2016 or Windows 10 guest

What is Device Naming for Hyper-V Virtual Network Adapters?

You may already be familiar with a technology called “Consistent Device Naming”. If you were hoping to use that with your virtual machines, sorry! The device naming feature utilized by Hyper-V is not the same thing. Basically, if you were expecting to see something different in the Network and Sharing Center, it won’t happen:

harn_nscenter

Nor in Get-NetAdapter:

harn_getnetadapter

In contrast, a physical system employing Consistent Device Naming would automatically name the network adapters in some fashion that reflects their physical installation. For example, “SLOT 4 Port 1” would be the name of the first port of a multi-port adapter installed in the fourth PCIe slot. It may not always be easy to determine how the manufacturers numbered their slots and ports, but it helps more than “Ethernet 5”.

Anyway, you don’t get that out of Hyper-V’s device naming feature. Instead, it shows up as an advanced feature. You can see that in several ways. First, I’ll show you how to set the value.

Setting Hyper-V’s Network Device Name in PowerShell

From the management operating system or a remote PowerShell session opened to the management operating system, use Set-VMNetworkAdapter:
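For example, using the -DeviceNaming parameter (it accepts On or Off):

```powershell
Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On
```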

This enables device naming for all of the virtual adapters connected to the virtual machine named sv16g2.

If you try to enable it for a generation 1 virtual machine, you get a clear error (although sometimes it inexplicably complains about the DVD drive, but eventually it gets where it’s going):

The cmdlet doesn’t know if the guest operating system supports this feature (or even if the virtual machine has an installed operating system).

If you don’t want the default “Virtual Network Adapter” name, then you can set the name at the same time that you enable the feature:
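One way to do both at once, sketched below; Set-VMNetworkAdapter itself doesn’t rename, so this pairs it with Rename-VMNetworkAdapter. The name “Management” is an arbitrary example:

```powershell
# 'Management' is a placeholder name for illustration
Get-VMNetworkAdapter -VMName sv16g2 |
    Set-VMNetworkAdapter -DeviceNaming On -Passthru |
    Rename-VMNetworkAdapter -NewName 'Management'
```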

These cmdlets all accept pipeline information as well as a number of other parameters. You can review the TechNet article that I linked in the beginning of this section. I also have some other usage examples on our omnibus networking article.

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter.

Note: You must reboot the guest operating system for it to reflect the change.

Setting Hyper-V’s Network Device Name in the GUI

You can use Hyper-V Manager or Failover Cluster Manager to enable this feature. Just look at the bottom of the Advanced Features sub-tab of the network adapter’s tab. Check the Enable device naming box. If that box does not appear, you are viewing a generation 1 virtual machine.

ndn_gui

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter. See the preceding section for instructions.

Note: You must reboot the guest operating system for it to reflect the change.

Viewing Hyper-V’s Network Device Name in the Guest GUI

This will only work in Windows 10/Windows Server 2016 (GUI) guests. The screenshots in this section were taken from a system that still had the default name of Network Adapter.

  1. Start in the Network Connections window. Right-click on the adapter and choose Properties:
    ndn_netadvprops
  2. When the Ethernet # Properties dialog appears, click Configure:
    ndn_netpropsconfbutton
  3. On the Microsoft Hyper-V Network Adapter Properties dialog, switch to the Advanced tab. You’re looking for the Hyper-V Network Adapter Name property. The Value holds the name that Hyper-V holds for the adapter:
    ndn_display

If the Value field is empty, then the feature is not enabled for that adapter or you have not rebooted since enabling it. If the Hyper-V Network Adapter Name property does not exist, then you are using a down-level guest operating system or a generation 1 VM.

Viewing Hyper-V’s Network Device Name in the Guest with PowerShell

As you saw in the preceding section, this field appears with the adapter’s advanced settings. Therefore, you can view it with the Get-NetAdapterAdvancedProperty cmdlet. To see all of the settings for all adapters, use that cmdlet by itself.

ndn_psall

Tab completion doesn’t work for the names, so drilling down just to that item can be a bit of a chore. The long way:
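Something like this (a sketch):

```powershell
Get-NetAdapterAdvancedProperty |
    Where-Object -Property DisplayName -EQ 'Hyper-V Network Adapter Name'
```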

Slightly shorter way:
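Using the cmdlet’s own -DisplayName filter:

```powershell
Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name'
```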

One of many not-future-proofed-but-works-today ways:
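For instance, filtering on the underlying HyperVNetworkAdapterName registry keyword:

```powershell
Get-NetAdapterAdvancedProperty -RegistryKeyword 'HyperVNetworkAdapterName'
```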

For automation purposes, you need to query the DisplayValue or the RegistryValue property. I prefer the DisplayValue. It is represented as a standard System.String. The RegistryValue is represented as a System.Array of System.String (or, String[]). It will never contain more than one entry, so dealing with the array is just an extra annoyance.

To pull that field, you could use select (an alias for Select-Object), but I wouldn’t:
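Such a select-based approach would look like:

```powershell
Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name' |
    Select-Object -Property DisplayValue
```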

ndn_psselectobject

I don’t like select in automation because it creates a custom object. Once you have that object, you then need to take an extra step to extract the value of that custom object. The reason that you used select in the first place was to extract the value. select basically causes you to do double work.

So, instead, I recommend the more .Net way of using a dot selector:
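A sketch:

```powershell
(Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name').DisplayValue
```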

You can store the output of that line directly into a variable that will be created as a System.String type that you can immediately use anywhere that will accept a String:
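For example (a sketch; the variable name is arbitrary, and it assumes an adapter named Ethernet):

```powershell
$HyperVAdapterName = (Get-NetAdapterAdvancedProperty -Name 'Ethernet' -DisplayName 'Hyper-V Network Adapter Name').DisplayValue
```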

Notice that I injected the Name property with a value of Ethernet. I didn’t need to do that. I did it to ensure that I only get a single response. Of course, it would fail if the VM didn’t have an adapter named Ethernet. I’m just trying to give you some ideas for your own automation tasks.

Viewing Hyper-V’s Network Device Name in the Guest with Regedit

All of the network adapters’ configurations live in the registry. It’s not exactly easy to find, though. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}. Not sure if it’s a good thing or a bad thing, but I can identify that key on sight now. Expand that out, and you’ll find several subkeys with four-digit names. They’ll start at 0000 and count upward. One of them corresponds to the virtual network adapter. The one that you’re looking for will have a KVP named HyperVNetworkAdapterName. Its value will be what you came to see. If you want further confirmation, there will also be a KVP named DriverDesc with a value of Microsoft Hyper-V Network Adapter (and possibly a number, if it’s not the first).

7 Powerful Scripts for Practical Hyper-V Network Configurations


I firmly believe in empowerment. I feel that I should supply you with knowledge, provide you with how-tos, share insights and experiences, and release you into the world to make your own decisions. However, I came to that approach by standing at the front of a classroom. During class, we’d almost invariably walk through exercises. Since this is a blog and not a classroom, I do things differently. We don’t have common hardware in a controlled environment, so I typically forgo the exercises bit. As a result, that leaves a lot of my readers at the edge of a cliff with no bridge to carry them from theory to practice. And, of course, there are those of you that would love to spend time reading about concepts but just really need to get something done right now. If you’re stopped at Hyper-V networking, this is the article for you.

Script Inventory

These scripts are included in this article:

  • Configure networking for a single host with a single adapter
  • Configure a standalone host with 2-4 gigabit adapters for converged networking
  • Configure a standalone host with 2-4 10 GbE adapters for converged networking
  • Configure a clustered host with 2-4 gigabit adapters for converged networking
  • Configure a clustered host with 2-4 10 GbE adapters for converged networking
  • Set preferred order for cluster Live Migration networks
  • Exclude cluster networks from Live Migration

Basic Usage

I’m going to show each item as a stand-alone script. First, you’ll locate the one that best aligns with what you’re trying to accomplish. You’ll copy/paste that into a .ps1 PowerShell script file on your system. You’ll need to edit the script to provide information about your environment so that it will work for you. I’ll have you set each of those items at the beginning of the script. Then, you’ll just need to execute the script on your host.

Most scripts have their own “basic usage” heading that explains a bit about how you’d use them without modification.

Enhanced Usage

I could easily compile these into standalone executables that you couldn’t tinker with. Even though I want to give you a fully prepared springboard, I also want you to learn how the system works and what you’re doing to it.

Most scripts have their own “enhanced usage” heading that gives some ideas about how you might exploit or extend them yourself.

Configure networking for a single host with a single adapter

Use this script for a standalone system that only has one physical adapter. It will:

  • Disable VMQ for the physical adapter
  • Create a virtual switch on the adapter
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it.
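The bullet points above translate roughly into the following sketch. The cmdlet names are real, but every value is a placeholder assumption you’d replace with your own environment’s details:

```powershell
# Placeholder values; substitute your own
$NetAdapterName = 'Ethernet'
$SwitchName     = 'vSwitch'
$ManagementVlan = 0          # your VLAN ID, or 0 for untagged
$ManagementIP   = '192.168.1.20'
$PrefixLength   = 24
$Gateway        = '192.168.1.1'
$DnsServers     = @('192.168.1.5', '192.168.1.6')

# Disable VMQ on the gigabit physical adapter
Set-NetAdapterVmq -Name $NetAdapterName -Enabled $false

# Create the virtual switch without an automatic management adapter
New-VMSwitch -Name $SwitchName -NetAdapterName $NetAdapterName -AllowManagementOS $false

# Create a management vNIC on the switch and optionally tag it
Add-VMNetworkAdapter -ManagementOS -SwitchName $SwitchName -Name 'Management'
if ($ManagementVlan) {
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId $ManagementVlan
}

# Assign IP settings to the management adapter
$Adapter = Get-NetAdapter -Name 'vEthernet (Management)'
New-NetIPAddress -InterfaceIndex $Adapter.ifIndex -IPAddress $ManagementIP -PrefixLength $PrefixLength -DefaultGateway $Gateway
Set-DnsClientServerAddress -InterfaceIndex $Adapter.ifIndex -ServerAddresses $DnsServers
```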

Advanced Usage for this Script

As-is, this script should be complete for most typical single-adapter systems. You might choose to disable some items. For instance, if you are using this on Windows 10, you might not want to provide a fixed IP address. In that case, just put a # sign at the beginning of lines 42 onward. When the virtual network adapter is created, it will remain in DHCP mode.

 

Configure a standalone host with 2-4 gigabit adapters for converged networking

Use this script for a standalone host that has between two and four gigabit adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. Be aware that it will have problems if you already have a team.
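The teaming portion is what sets this script apart from the single-adapter version; roughly, with assumed adapter and team names:

```powershell
# Placeholder names; substitute your physical adapter names
New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'Ethernet', 'Ethernet 2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Disable VMQ on the physical members and the team interface (gigabit only)
Set-NetAdapterVmq -Name 'Ethernet', 'Ethernet 2', 'VMTeam' -Enabled $false

# Build the virtual switch on top of the team
New-VMSwitch -Name 'vSwitch' -NetAdapterName 'VMTeam' -AllowManagementOS $false
```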

Advanced Usage for this Script

This script serves as the base for the remaining scripts on this page. Likewise, you could use it as a base for your own. You could also use any of the items as examples for whatever similar actions you wish to accomplish in your own scripts.

 

Configure a standalone host with 2-4 10 GbE adapters for converged networking

Use this script for a standalone host that has between two and four 10GbE adapters that you want to use in a converged networking configuration. It will:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create a virtual network adapter for the management operating system to use
  • Optionally place the management adapter into a VLAN
  • Assign an IP, subnet, and gateway to the management adapter
  • Specify one or two DNS servers

It won’t take a great deal of sleuthing to discover that this script is identical to the preceding one, except that it does not disable VMQ.

 

Configure a clustered host with 2-4 gigabit adapters for converged networking

Use this script for a host that has between two and four gigabit adapters that will be a member of a cluster. Like the previous scripts, it will employ a converged networking configuration. The script will:

  • Create a team on the adapters
  • Disable VMQ for the physical adapters and the teamed adapter
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.
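The extra cluster-related pieces look roughly like this (names and addresses are placeholder assumptions):

```powershell
# Add vNICs for cluster and Live Migration traffic to the existing switch
Add-VMNetworkAdapter -ManagementOS -SwitchName 'vSwitch' -Name 'Cluster'
Add-VMNetworkAdapter -ManagementOS -SwitchName 'vSwitch' -Name 'LiveMigration'

# IP and subnet mask only; no gateway on non-management adapters
New-NetIPAddress -InterfaceAlias 'vEthernet (Cluster)' -IPAddress '192.168.10.20' -PrefixLength 24
New-NetIPAddress -InterfaceAlias 'vEthernet (LiveMigration)' -IPAddress '192.168.11.20' -PrefixLength 24

# Keep these adapters from registering in DNS
Set-DnsClient -InterfaceAlias 'vEthernet (Cluster)', 'vEthernet (LiveMigration)' -RegisterThisConnectionsAddress $false
```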

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.

Advanced Usage for this Script

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.

 

Configure a clustered host with 2-4 10 GbE adapters for converged networking

This script is identical to the preceding except that it leaves VMQ enabled. It does the following:

  • Create a team on the adapters
  • Create a virtual switch on the team
  • Create virtual network adapters for the management operating system to use for management traffic, cluster communications, and Live Migration
  • Optionally place the virtual adapters into VLANs
  • Assign an IP, subnet, and gateway to the management adapter
  • Assign an IP and subnet mask to the cluster and Live Migration adapters
  • Prevent the cluster and Live Migration adapters from registering in DNS
  • Specify one or two DNS servers

Basic Usage for this Script

These notes are identical to those of the preceding script.

You just need to enter the necessary information for these items and execute it. It is essentially the same as the stand-alone multi-gigabit adapter script except that it also adds adapters for cluster communications and Live Migration traffic.

It does not arrange the adapters in an optimal order for Live Migration. The cluster will automatically prioritize the cluster and Live Migration adapters over the management adapter, but it might prioritize the cluster adapter over the Live Migration adapter. Practically, that will have no meaningful effect; these designations are mostly cosmetic. If you’d like to force the issue, you’ll need to do so separately. You could, of course, use Failover Cluster Manager for this. I’ve included a script later in this article that makes the setting change for you. You cannot combine these scripts because the cluster must exist before you can specify the Live Migration adapter order. Also, you only need to specify the order one time, not once per node.

Advanced Usage for this Script

These notes are identical to those of the preceding script.

You could do a great number of things with this script. One suggestion would be to add cluster creation/join logic. It would be non-trivial, but you’d be able to combine the Live Migration adapter ordering script.

 

Set preferred order for cluster Live Migration networks

This script aligns with the two preceding scripts to ensure that the cluster chooses the named “Live Migration” adapter first when moving virtual machines between nodes. The “Cluster” virtual adapter will be used second. The management adapter will be used as the final fallback.

Basic Usage for this Script

Use this script after you’ve run one of the above two clustered host scripts and joined them into a cluster.
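A sketch of the core of such a script, assuming cluster networks named “Live Migration”, “Cluster”, and “Management”:

```powershell
# Ordered most-preferred first; the network names are assumptions
$NetworkNames = 'Live Migration', 'Cluster', 'Management'
$NetworkIds = foreach ($NetName in $NetworkNames) {
    (Get-ClusterNetwork -Name $NetName).Id
}
Get-ClusterResourceType -Name 'Virtual Machine' |
    Set-ClusterParameter -Name MigrationNetworkOrder -Value ($NetworkIds -join ';')
```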

Advanced Usage for this Script

Modify this to change the order of Live Migration adapters. You must specify all adapters recognized by the cluster. Check the “MigrationExcludeNetworks” registry key that’s in the same location as “MigrationNetworkOrder”.

 

Exclude cluster networks from Live Migration

This script is intended to be used as an optional adjunct to the preceding script. Since my scripts set up all virtual adapters to be used in Live Migration, the network names used here are fabricated.

Basic Usage for this Script

You’ll need to set the network names to match yours, but otherwise, the script does not need to be altered.
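The core would look something like this (the “Storage” network name is fabricated, as noted above):

```powershell
Get-ClusterResourceType -Name 'Virtual Machine' |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value (Get-ClusterNetwork -Name 'Storage').Id
```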

Advanced Usage for this Script

This script will need to be modified in order to be used at all.

 

How to Perform Hyper-V Storage Migration


New servers? New SAN? Trying out hyper-convergence? Upgrading to Hyper-V 2016? Any number of conditions might prompt you to move your Hyper-V virtual machine’s storage to another location. Let’s look at the technologies that enable such moves.

An Overview of Hyper-V Migration Options

Hyper-V offers numerous migration options. Each has its own distinctive features. Unfortunately, we in the community often muck things up by using incorrect and confusing terminology. So, let’s briefly walk through the migration types that Hyper-V offers:

  • Quick migration: Cluster-based virtual machine migration that involves placing a virtual machine into a saved state, transferring ownership to another node in the same cluster, and resuming the virtual machine. A quick migration does not involve moving anything that most of us consider storage.
  • Live migration: Cluster-based virtual machine migration that involves transferring the active state of a running virtual machine to another node in the same cluster. A Live Migration does not involve moving anything that most of us consider storage.
  • Storage migration: Any technique that utilizes the Hyper-V management service to relocate any file-based component that belongs to a virtual machine. This article focuses on this migration type, so I won’t expand any of those thoughts in this list.
  • Shared Nothing Live Migration: Hyper-V migration technique between two hosts that does not involve clustering. It may or may not include a storage migration. The virtual machine might or might not be running. However, this migration type always includes ownership transfer from one host to another.

It Isn’t Called Storage Live Migration

I have always called this operation “Storage Live Migration”. I know lots of other authors call it “Storage Live Migration”. But, Microsoft does not call it “Storage Live Migration”. They just call it “Storage Migration”. The closest thing that I can find to “Storage Live Migration” in anything from Microsoft is a 2012 TechEd recording by Benjamin Armstrong. The title of that presentation includes the phrase “Live Storage Migration”, but I can’t determine if the “Live” just modifies “Storage Migration” or if Ben uses it as part of the technology name. I suppose I could listen to the entire hour and a half presentation, but I’m lazy. I’m sure that it’s a great presentation, if anyone wants to listen and report back.

Anyway, does it matter? I don’t really think so. I’m certainly not going to correct anyone that uses that phrase. However, the virtual machine does not necessarily need to be live. We use the same tools and commands to move a virtual machine’s storage whether it’s online or offline. So, “Storage Migration” will always be a correct term. “Storage Live Migration”, not so much. However, we use the term “Shared Nothing Live Migration” for virtual machines that are turned off, so we can’t claim any consistency.

What Can Be Moved with Hyper-V Storage Migration?

When we talk about virtual machine storage, most people think of the places where the guest operating system stores its data. That certainly comprises the physical bulk of virtual machine storage. However, it’s also only one bullet point on a list of multiple components that form a virtual machine.

Independently, you can move any of these virtual machine items:

  • The virtual machine’s core files (configuration in .xml or .vmcx, .bin, .vsv, etc.)
  • The virtual machine’s checkpoints (essentially the same items as the preceding bullet point, but for the checkpoint(s) instead of the active virtual machine)
  • The virtual machine’s second-level paging file location. I have not tested to see if it will move a VM with active second-level paging files, but I have no reason to believe that it wouldn’t
  • Virtual hard disks attached to a virtual machine
  • ISO images attached to a virtual machine

We most commonly move all of these things together. Hyper-V doesn’t require that, though. Also, we can move all of these things in the same operation but distribute them to different destinations.

What Can’t Be Moved with Hyper-V Storage Migration?

In terms of storage, we can move everything related to a virtual machine. But, we can’t move the VM’s active, running state with Storage Migration. Storage Migration is commonly partnered with a Live Migration in the operation that we call “Shared Nothing Live Migration”. To avoid getting bogged down in implementation details that are more academic than practical, just understand one thing: when you pick the option to move the virtual machine’s storage, you are not changing which Hyper-V host owns and runs the virtual machine.

More importantly, you can’t use any Microsoft tool-based technique to separate a differencing disk from its parent. So, if you have an AVHDX (differencing disk created by the checkpointing mechanism) and you want to move it away from its source VHDX, Storage Migration will not do it. If you instruct Storage Migration to move the AVHDX, the entire disk chain goes along for the ride.

Uses for Hyper-V Storage Migration

Out of all the migration types, storage migration has the most applications and special conditions. For instance, Storage Migration is the only Hyper-V migration type that does not always require domain membership. Granted, the one exception to the domain membership rule won’t be very satisfying for people that insist on leaving their Hyper-V hosts in insecure workgroup mode, but I’m not here to please those people. I’m here to talk about the nuances of Storage Migration.

Local Relocation

Let’s start with the simplest usage: relocation of local VM storage. Some situations in this category:

  • You left VMs in the default “C:\ProgramData\Microsoft\Windows\Hyper-V” and/or “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks” locations and you don’t like it
  • You added new internal storage as a separate volume and want to re-distribute your VMs
  • You have storage speed tiers but no active management layer
  • You don’t like the way your VMs’ files are laid out
  • You want to defragment VM storage space. It’s a waste of time, but it works.

Network Relocation

With so many ways to do network storage, it’s nearly a given that we’ll all need to move a VHDX across ours at some point. Some situations:

  • You’re migrating from local storage to network storage
  • You’re replacing a SAN or NAS and need to relocate your VMs
  • You’ve expanded your network storage and want to redistribute your VMs

Most of the reasons listed under “Local Relocation” can also apply to network relocation.

Cluster Relocation

We can’t always build our clusters perfectly from the beginning. For the most part, a cluster’s relocation needs list will look like the local and network lists above. A few others:

  • Your cluster has new Cluster Shared Volumes that you want to expand into
  • Existing Cluster Shared Volumes have a data distribution that does not balance well. Remember that data access from a CSV owner node is slightly faster than from a non-owner node

The reasons matter less than the tools when you’re talking about clusters. You can’t use the same tools and techniques to move virtual machines that are protected by Failover Clustering under Hyper-V as you use for non-clustered VMs.

Turning the VM Off Makes a Difference for Storage Migration

You can perform a very simple experiment: perform a Storage Migration for a virtual machine while it’s on, then turn it off and migrate it back. The virtual machine will move much more quickly while it’s off. This behavior can be explained in one word: synchronization.

When the virtual machine is off, a Storage Migration is essentially a monitored file copy. The ability of the constituent parts to move bits from source to destination sets the pace of the move. When the virtual machine is on, all of the rules change. The migration is subjected to these constraints:

  • The virtual machine’s operating system must remain responsive
  • Writes must be properly captured
  • Reads must occur from the most appropriate source

Even if the guest operating system does not experience much activity during the move, that condition cannot be taken as a constant. In other words, Hyper-V needs to be ready for it to start demanding lots of I/O at any time.

So, the Storage Migration of a running virtual machine will always take longer than the Storage Migration of a virtual machine in an off or saved state. You can choose the convenience of an online migration or the speed of an offline migration.

Note: You can usually change a virtual machine’s power state during a Storage Migration. It’s less likely to work if you are moving across hosts.

How to Perform Hyper-V Storage Migration with PowerShell

The nice thing about using PowerShell for Storage Migration: it works for all Storage Migration types. The bad thing about using PowerShell for Storage Migration: it can be difficult to get all of the pieces right.

The primary cmdlet to use is Move-VMStorage. If you will be performing a Shared Nothing Live Migration, you can also use Move-VM. The parts of Move-VM that pertain to storage match Move-VMStorage. Move-VM has uses, requirements, and limitations that don’t pertain to the topic of this article, so I won’t cover Move-VM here.

A Basic Storage Migration in PowerShell

Let’s start with an easy one. Use this when you just want all of a VM’s files to be in one place:
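Based on the description that follows, the command is a single Move-VMStorage call (the VM name and destination path are examples; substitute your own):

```powershell
# Move all of testvm's components to a single destination folder
Move-VMStorage -VMName testvm -DestinationStoragePath C:\LocalVMs
```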

This will move the virtual machine named testvm so that all of its components reside under the C:\LocalVMs folder. That means:

  • The configuration files will be placed in C:\LocalVMs\Virtual Machines
  • The checkpoint files will be placed in C:\LocalVMs\Snapshots
  • The VHDXs will be placed in C:\LocalVMs\Virtual Hard Disks
  • Depending on your version, an UndoLog Configuration folder will be created if it doesn’t already exist. The folder is meant to contain Hyper-V Replica files. It may be created even for virtual machines that aren’t being replicated.

Complex Storage Migrations in PowerShell

For more complicated move scenarios, you won’t use the DestinationStoragePath parameter. You’ll use one or more of the individual component parameters. Choose from the following:

  • VirtualMachinePath: Where to place the VM’s configuration files.
  • SnapshotFilePath: Where to place the VM’s checkpoint files (again, NOT the AVHDXs!)
  • SmartPagingFilePath: Where to place the VM’s smart paging files
  • Vhds: An array of hash tables that indicate where to place individual VHD/X files.

Some notes on these items:

  • You are not required to use all of these parameters. If you do not specify a parameter, then its related component is left alone. Meaning, it doesn’t get moved at all.
  • If you’re trying to use this to get away from those auto-created Virtual Machines and Snapshots folders, it doesn’t work. They’ll always be created as sub-folders of whatever you type in.
  • It doesn’t auto-create a Virtual Hard Disks folder.
  • If you were curious whether or not you needed to specify those auto-created subfolders, the answer is: no. Move-VMStorage will always create them for you (unless they already exist).
  • The VHDs hash table is the hardest part of this whole thing. I’m usually a PowerShell-first kind of guy, but even I tend to go to the GUI for Storage Migrations.

The following will move all components except VHDs, which I’ll tackle in the next section:
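A sketch of that call, using the same testvm example (the paths are illustrative):

```powershell
# Move the configuration, checkpoint, and smart paging files; the VHDXs stay where they are
Move-VMStorage -VMName testvm `
  -VirtualMachinePath C:\LocalVMs\testvm `
  -SnapshotFilePath C:\LocalVMs\testvm `
  -SmartPagingFilePath C:\LocalVMs\testvm
```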

Move-VMStorage’s Array of Hash Tables for VHDs

The three …FilePath parameters are easy: just specify the path. The Vhds parameter is tougher. It is one or more hash tables inside an array.

First, the hash tables. A hash table is a custom object that looks like an array, but each entry has a unique name. The hash tables that Vhds expects have a SourceFilePath entry and a DestinationFilePath entry. Each must be fully-qualified for a file. A hash table is contained like this: @{ }. The name of an entry and its value are joined with an =. Entries are separated by a semicolon (;). So, if you want to move the VHDX named svtest.vhdx from \\svstore\VMs to C:\LocalVMs\testvm, you’d use this hash table:
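Putting those rules together (the file names and paths come from the example in the preceding paragraph):

```powershell
@{ SourceFilePath = '\\svstore\VMs\svtest.vhdx'; DestinationFilePath = 'C:\LocalVMs\testvm\svtest.vhdx' }
```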

Reading that, you might ask (quite logically): “Can I change the name of the VHDX file when I move it?” The answer: No, you cannot. So, why then do you need to enter the full name of the destination file? I don’t know!

Next, the arrays. An array is bounded by @( ). Its entries are separated by commas. So, to move two VHDXs, you would do something like this:
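A sketch with two VHDX files (the second file name, svtest2.vhdx, is hypothetical):

```powershell
Move-VMStorage -VMName testvm -Vhds @(
    @{ SourceFilePath      = '\\svstore\VMs\svtest.vhdx';
       DestinationFilePath = 'C:\LocalVMs\testvm\svtest.vhdx' },
    @{ SourceFilePath      = '\\svstore\VMs\svtest2.vhdx';
       DestinationFilePath = 'C:\LocalVMs\testvm\svtest2.vhdx' }
)
```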

I broke that onto multiple lines for legibility. You can enter it all on one line. Note where I used parentheses and where I used curly braces.

Tip: To move a single VHDX file, you don’t need to do the entire array notation. You can use the first example with Vhds.

A Practical Move-VMStorage Example with Vhds

If you’re looking at all that and wondering why you’d ever use PowerShell for such a thing, I have the perfect answer: scripting. Don’t do this by hand. Use it to move lots of VMs in one fell swoop. If you want to see a plain example of the Vhds parameter in action, the Get-Help examples show one. I’ve got a more practical script in mind.

The following would move all VMs on the host. All of their config, checkpoint, and second-level paging files will be placed on a share named “\\vmstore\slowstorage”. All of their VHDXs will be placed on a share named “\\vmstore\faststorage”. We will have PowerShell deal with the source paths and file names.
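A sketch of such a script (the share names come from the description above; error handling is omitted for brevity):

```powershell
foreach ($VM in Get-VM)
{
    # Config, checkpoint, and smart paging files go to slow storage
    $MoveParameters = @{
        VM                  = $VM
        VirtualMachinePath  = '\\vmstore\slowstorage'
        SnapshotFilePath    = '\\vmstore\slowstorage'
        SmartPagingFilePath = '\\vmstore\slowstorage'
    }

    # Build the Vhds array from the VM's current disk paths
    $VHDs = foreach ($Disk in ($VM | Get-VMHardDiskDrive))
    {
        @{
            SourceFilePath      = $Disk.Path
            DestinationFilePath = Join-Path -Path '\\vmstore\faststorage' -ChildPath (Split-Path -Path $Disk.Path -Leaf)
        }
    }

    # Only pass Vhds when the VM actually has virtual hard disks
    if ($VHDs) { $MoveParameters.Add('Vhds', @($VHDs)) }

    Move-VMStorage @MoveParameters
}
```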

I used splatting for the parameters for two reasons: 1, legibility. 2, to handle VMs without any virtual hard disks.

How to Perform Hyper-V Storage Migration with Hyper-V Manager

Hyper-V Manager can only be used for non-clustered virtual machines. It utilizes a wizard format. To use it to move a virtual machine’s storage:

  1. Right-click on the virtual machine and click Move.
  2. Click Next on the introductory page.
  3. Change the selection to Move the virtual machine’s storage (the same storage options would be available if you moved the VM’s ownership, but that’s not part of this article)
    movevm_hvmwiz1
  4. Choose how to perform the move. You can move everything to the same location, you can move everything to different locations, or you can move only the virtual hard disks.
    movevm_hvmwiz2
  5. What screens you see next will depend on what you chose. We’ll cover each branch.

If you opt to move everything to one location, the wizard will show you this simple page:

movevm_hvmwiz3

If you choose the option to Move the virtual machine’s data to different locations, you will first see this screen:

movevm_hvmwiz4

For every item that you check, you will be given a separate screen where you indicate the desired location for that item. The wizard uses the same screen for these items as it does for the hard-disks only option. I’ll show its screen shot next.

If you choose Move only the virtual machine’s virtual hard disks, then you will be given a sequence of screens where you instruct it where to move the files. These are the same screens used for the individual components from the previous selection:

movevm_hvmwiz5

After you make your selections, you’ll be shown a summary screen where you can click Finish to perform the move:

movevm_hvmwiz6

How to Perform Hyper-V Storage Migration with Failover Cluster Manager

Failover Cluster Manager uses a slick single-screen interface to move storage for cluster virtual machines. To access it, simply right-click a virtual machine, hover over Move, and click Virtual Machine Storage. You’ll see the following screen:

movecm_fcm1

If you just want to move the whole thing to one of the displayed Cluster Shared Volumes, just drag and drop it down to that CSV under the Cluster Storage heading at the lower left. You can drag and drop individual items or the entire VM. The Destination Folder Path will be populated accordingly.

As you can see in mine, I have all of the components except the VHD on an SMB share. I want to move the VHD to be with the rest. To get a share to show up, click the Add Share button. You’ll get this dialog:

movevm_fcmaddshare

The share will populate underneath the CSVs in the lower left. Now, I can drag and drop that file to the share. Note the difference:

movecm_fcm2

Once you have the dialog the way that you like it, click Start.

8 Most Important Announcements from Microsoft Ignite 2017


Last week saw us close the door on Microsoft Ignite 2017, and while the conference came and went in a blur, there was no lack of information or amazing reveals from Microsoft. While this conference serves as a great way to stay informed on all the new things that Microsoft is working on, I also find that it is a good way to get a sense of where the company is heading. With that in mind, I wanted to not only talk about some of my favorite reveals from the week but also discuss my take on Microsoft’s overall direction.

Microsoft Ignite 2017 - most important announcements

My take on the week from an Infrastructure Engineering Perspective

To put things simply: things are changing, and they’re changing in a big way. I’ve had a gut feeling for some time that the way we work with VMs and virtualization was changing, and the week of Ignite was a major confirmation of that. This is not to mention the continued shift from the on-premises model we’re used to toward the new cloud (public, private, and hybrid) model.

It’s very clear that Microsoft is adopting what I would call the “Azure-Everywhere” approach. Sure, you’ve always been able to consume Azure using what Microsoft has publicly available, but things really changed when Azure Stack was put into the mix. Microsoft Azure Stack (MAS) is officially on the market now, and the idea of having MAS in datacenters around the world is an interesting prospect. What I find so interesting about it is the fact that management of MAS onsite is identical to managing Azure. You use Azure Resource Manager and the same collection of tools to manage both. Pair that with the fact that Hyper-V is so abstracted and under the hood in MAS that you can’t even see it, and you’ve got a recipe for major day-to-day changes for infrastructure administrators.

Yes, we’ve still got Windows Server 2016, and the newly announced Honolulu management utility, but if I look out 5 or even 10 years, I’m not sure I see us working with Windows Server in the way that we do today. I don’t think VM usage will be as prevalent then as it is today, either. After last week, I firmly believe that containers will be the “new virtual machine”. I think VMs will stay around for legacy workloads, and for workloads that require additional layers of isolation, but after seeing containers in action last week, I’m all in on that usage model.

We used to see VMs as this amazing cost-reducing technology, and they were for a long time. However, last week I saw containers do to VMs what VMs did to physical servers. I attended a session on moving workloads to a container-based model, where MetLife was on stage talking about moving some of their infrastructure to containers. In doing so, they achieved:

  • A 70% reduction in the number of VMs in the environment
  • A 67% reduction in needed CPU cores
  • A 66% reduction in overall cost of ownership

Those are amazing numbers that nobody can ignore. Given this level of success with containers, I see the industry moving to that deployment model from VMs over the next several years. As much as it pains me to say it, virtual machines are starting to look very “legacy”, and we all need to adjust our skill sets accordingly.

Big Reveals

As you know, Ignite is that time of year when Microsoft makes some fairly large announcements, and below I’ve compiled a list of some of my favorites. While this is by no means a comprehensive list, I feel these represent what our readers would find most interesting. Don’t agree? That’s fine! Just let me know what you think were the most important announcements in the comments. Let’s get started.

8. New Azure Exams and Certifications!

With new technologies come new things to learn, and as such there are three new exams on the market today for Azure technologies.

  • For Azure Stack Operators – Exam 537: Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack
  • For Azure Solutions Architects – Exam 539: Managing Linux Workloads on Azure
  • For Azure DevOps – Exam 538: Implementing Microsoft Azure DevOps Solutions

If you’re interested in pursuing any of these (which I would highly recommend), you can get more information on them at this link.

7. SQL Server 2017 is now Available

Normally I wouldn’t make much of a fuss about SQL Server, as I’m not much of a SQL guy myself, but Microsoft did something amazing with this release. SQL Server 2017 will run on Windows, on Linux, and inside Docker containers. Yes, you read correctly: SQL Server 2017 will run on Linux and inside Docker containers, which opens up a whole new avenue for providing SQL workloads. Exciting times indeed!

6. Patch Management from the Azure Portal

Ever wanted to have WSUS available from the Azure portal? Now you have it. You can easily view and deploy patches for your Azure-based workloads directly from the Azure portal. This includes Linux VMs as well, which is great news as more and more admins find themselves managing Linux workloads these days!

5. PowerShell now Available in Azure CLI

When Azure CLI was announced and released, many people were taken aback by the lack of PowerShell support. This was done for a number of reasons that I won’t get into in this article, but regardless, it has now been added. It is now possible with Azure CLI to deploy a VM with a single PowerShell cmdlet, and more. So, get those scripts ready!

4. Azure File Sync in Preview

I know many friends and colleagues that have been waiting for something like this. You can essentially view this as next-generation DFS, though it doesn’t use the same technology. It allows you to sync your on-premises file servers with an Azure Files account for distributed access to the stored information around the globe.

3. Quality of Life Improvements for Windows Containers

While there were no huge reveals in the container space, Windows Server 1709 was announced and contains a lot of improvements and optimizations for running containers on Windows Server. This includes things like smaller images and support for Linux containers running on Windows Server. I did an interview with Taylor Brown from the Containers team, which you can view below for more information.

2. Nested Virtualization in Azure for Production Workloads

Yes, I know, nested virtualization in Azure has been announced for some time. However, what I found different was Microsoft’s insistence that it can also be used for production workloads. During Scott Guthrie’s keynote, Corey Sanders actually demonstrated the M-Series (Monster) VM in Azure being used to host production workloads with nested VMs. While obviously not ideal in every scenario, this is simply another tool that we have at our disposal for added flexibility in our day-to-day operations.

If you’re interested, I actually interviewed Rick Claus from the Azure Compute team about this. That interview can be seen below.

1. Project Honolulu

This one is for the folks that are still strictly interested in the on-prem stuff. Microsoft revealed and demonstrated the new Project Honolulu management utility for on-premises workloads. Honolulu takes the functionality of all the management tools and MMC snap-ins that we’ve been using for years and packages them up into an easy-to-use web UI. It’s worth a look if you haven’t seen it yet. We even have a nice article on our blog about it if you’re interested in reading more!

Wrap-Up

As I mentioned, this was by no means a comprehensive list, but we’ll be talking about items from Ignite (from this list and some not mentioned) on our blogs for some time. So, be sure to keep an eye on our blog if you’re interested in more information.

Additionally, if you attended Microsoft Ignite, and you saw a feature or product you think is amazing that is not listed above, be sure to let us know in the comments section below!

Why Your Hyper-V PowerShell Commands Don’t Work (and how to fix them)


I occasionally receive questions about Hyper-V-related PowerShell cmdlets not working as expected. Sometimes these problems arise with the module that Microsoft provides; other times, they manifest with third-party tools. Even my own tools show these symptoms. Most GUI tools are developed to avoid the problems that plague the command line, but the solutions aren’t always perfect.

The WMI Foundation

All tools, graphical or command-line, eventually work their way back to the only external interface that Hyper-V provides: its WMI/CIM provider. CIM stands for “Common Information Model”. The Distributed Management Task Force (DMTF) maintains the CIM standard. CIM defines a number of interfaces pertaining to management. Anyone can write CIM-conforming modules to work with their systems. These modules allow users, applications, and services to retrieve information and/or send commands to the managed system. By leveraging CIM, software and hardware manufacturers can provide APIs and controls with predictable, standardized behavior.

Traditionally, Microsoft has implemented CIM via Windows Management Instrumentation (WMI). Many WMI instructions involved VBS or WMIC. As PowerShell gained popularity, WMI also gained popularity due to the relative ease of Get-WmiObject. Depending on where you look in Microsoft’s vast documentation, you might see pushes away from the Microsoft-specific WMI implementation toward the more standard CIM corollaries. Get-CimInstance provides something of an analog to Get-WmiObject, but they are not interchangeable.

For any of this to ever make any sense, you need to understand one thing: anyone can write a CIM/WMI provider. The object definitions and syntax of a provider all descend from the common standard, but they do nothing more than establish the way an interface should look. The provider’s developer determines how it all functions behind the scenes.

Why Hyper-V PowerShell Cmdlets May Not Work

Beyond minor things like incorrect syntax and environmental things like failed hardware, two common reasons prevent these tools from functioning as expected.

The Hyper-V Security Model

I told you all that about WMI so that this part would be easier to follow. The developers behind the Hyper-V WMI provider decide how it will react to any given WMI/CIM command that it receives. Sometimes, it chooses to have no reaction at all.

Before I go too far, I want to make it clear that no documentation exists for the security model in Hyper-V’s WMI provider. I ran into some issues with WMI commands not working the way that I expected. I opened a case with Microsoft, and it wound up going all the way to the developers. The answer that came back pointed to the internal security coding of the module. In other words, I was experiencing a side effect of designed behavior. So, I asked if they would give me the documentation on that — basically, anything on what caused that behavior. I was told that it doesn’t exist. They obviously don’t have any externally-facing documentation, but they don’t have anything internal, either. So, everything that you’re going to see in this article originates from experienced (and repeatable) behavior. No insider secrets or pilfered knowledge were used in the creation of this material.

Seeing Effects of the Hyper-V Security Model in Action

Think about any “Get” PowerShell cmdlet. What happens when you run a “Get” against objects that don’t exist? For example, what happens when I run Get-Job when no jobs are present?

psnowork_emptygetjob

Nothing! That’s what happens. You get nothing. So, you learn to interpret “I got nothing” to mean “no objects of that type exist”.

So, if I run Get-VM and get nothing (2012/R2):

psnowork_emptygetvm

That means that the host has no virtual machines, right?

But wait:

Hyper-V Powershell commands help

What happened? A surprise Live Migration?

Look at the title bars carefully. The session on the left was started normally. The session on the right was started by using Run as administrator.

The PowerShell behavior has changed in 2016:

psnowork_emptygetvm2016

The PowerShell cmdlets that I tried now show an appropriate error message. However, only the PowerShell module has been changed. The WMI provider behaves as it always has:

psnowork_wmigetvm2016

To clarify that messy output, I ran gwmi -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem -Filter 'Caption="Virtual Machine"' as a non-privileged user and the system gave no output. That window overlaps another window that contains the output from Get-VM in an elevated session.

Understanding the Effects of the Hyper-V Security Model

When we don’t have permissions to do something, we expect that the system will alert us. If we try to open a file, we get a helpful error message explaining why the system can’t allow it. We’ve all had that experience enough times that we’ve been trained to expect a red flag. The Hyper-V WMI provider does not exhibit that expected behavior. I have never attempted to program a WMI provider myself, so I don’t want to pass any judgment. I noticed that the MSCluster namespace acts the same way, so it may be something inherent to CIM/WMI that the provider authors have no control over.

In order for a WMI query to work against Hyper-V’s provider, you must be running with administrative privileges. Confusingly, “being a member of the Administrators group” and “running with administrative privileges” are not always the same thing. When working with the Hyper-V provider on the local system, you must always ensure that you run with elevated privileges (Run as administrator) — even if you log on with an administrative account. Remote processes don’t have that problem.
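If you’re unsure whether a given session is elevated, a quick check (a standard .NET call, not specific to Hyper-V) looks like this:

```powershell
# Returns True when the current PowerShell session runs with administrative privileges
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = [Security.Principal.WindowsPrincipal]$identity
$principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)
```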

The administrative requirement presents another stumbling block: you cannot create a permanent WMI event watcher for anything in the Hyper-V provider. Permanent WMI registration operates anonymously; the Hyper-V provider requires confirmed administrative privileges. As with everything else, no errors are thrown. Permanent WMI watchers simply do not function.

The takeaway: when you unexpectedly get no output from a Hyper-V-related PowerShell command, you most likely do not have sufficient permissions. Because the behavior bubbles up from the bottom-most layer (CIM/WMI), the problem can manifest in any tool.

The Struggle for Scripters and Application Developers

People sometimes report that my tools don’t work. For example, I’ve been told that my KVP processing stack doesn’t do anything. Of course, the tool works perfectly well — as long as it has the necessary privileges. So, why didn’t I write that, and all of my other scripts, to check their privilege? Because it’s really hard, that’s why.

With a bit of searching, you’ll discover that I could just insert #requires -RunAsAdministrator at the top of all my scripts. Problem solved, right? Well, no. Sure, it would “fix” the problem when you run the script locally. But, sometimes you’ll run the script remotely. What happens if:

  • … you run the script with an account that has administrative privileges on the target host but not on the local system?
  • … you run the script with an account that has local administrative privileges but only user privileges on the target host?

The answer to both: the actual outcome will not match your desired outcome.

I would need to write a solution that:

  • Checks to see if you’re running locally (harder than you might think!)
  • Checks that you’re a member of the local administrators
  • If you’re running locally, checks if your process token has administrative privileges

That’s not too tough, right? No, it’s not awful. Unfortunately, that’s not the end of it. What if you’re running locally, but invoke PowerShell Remoting with -ComputerName or Enter-PSSession or Invoke-Command? Then the entire dynamic changes yet again, because you’re not exactly remote but you’re not exactly local, either.

I’ve only attempted to fully solve this problem one time. My advanced VM settings editor includes layers of checks to try to detect all of these conditions. I spent quite a bit of time devising what I hoped would be a foolproof way to ensure that my application would warn you of insufficient privileges. I still get messages telling me that it doesn’t show any virtual machines.

I get better mileage by asking you to run my tools properly.

How to Handle the Hyper-V WMI Provider’s Security

Simply put, always ensure that you are running with the necessary privileges. If you are working locally, open PowerShell with elevated permissions:

psnowork_runas

If running remotely, always ensure that the account that you use has the necessary permissions. If your current local administrator account does not have the necessary permissions on the target system, invoke PowerShell (or whatever tool you’re using) by [Shift]+right-clicking the icon and selecting Run as different user:

psnowork_runasdifferentuser

What About the “Hyper-V Administrators” Group?

Honestly, I do not deal with this group often. I don’t understand why anyone would be a Hyper-V Administrator but not a host administrator. I believe that a Hyper-V host should not perform any other function. Trying to distinguish between the two administrative levels gives off a strong “bad plan” odor.

That said, I’ve seen more than a few reports that membership in Hyper-V Administrators does not work as expected. I have not tested it extensively, but my experiences corroborate those reports.

The Provider Might Not Be Present

All this talk about WMI mostly covers instances when you have little or no output. What happens when you have permissions, yet the system throws completely unexpected errors? Well, many things could cause that. I can’t make this article into a comprehensive troubleshooting guide, unfortunately. However, you can be certain of one thing: you cannot tell Hyper-V to carry out an action if Hyper-V is not running!

Let’s start with an obvious example. I ran Get-VM on a Windows 10 system without Hyper-V:

psnowork_getvmnohv

Nice, clear error, right? 2012 R2/Win 8.1 have a slightly different message.

Things change a bit when using the VHD cmdlets. I don’t have any current screenshots to show you because the behavior changed somewhere along the way… perhaps with Update 1 for Windows Server 2012 R2. Windows Vista/Server 2008 and later include a native driver for mounting and reading/writing VHD files. Windows 8/Server 2012 and later include a native driver for mounting and reading/writing VHDX files. However, only Hyper-V can process any of the VHD cmdlets. Get-VHD, New-VHD, Optimize-VHD, Resize-VHD, and Set-VHD require a functioning installation of Hyper-V. Just installing the Hyper-V PowerShell module won’t do it.

Currently, all of these cmdlets will show the same or a similar message to the one above. However, older versions of the cmdlets give a very cryptic message that you can’t do much with.

How to Handle a Missing Provider

This seems straightforward enough: only run cmdlets from the Hyper-V module against a system with a functioning installation of Hyper-V. You can determine which functions the module exports with:
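For example:

```powershell
Get-Command -Module Hyper-V
```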

When running them from a system that doesn’t have Hyper-V installed, use the ComputerName parameter.

Further Troubleshooting

With this article, I wanted to knock out two very simple reasons that Hyper-V PowerShell cmdlets (and some other tools) might not work. Of course, I realize that any given cmdlet might error for a wide variety of reasons. I am currently only addressing issues that block all Hyper-V cmdlets from running.

For troubleshooting a failure of a specific cmdlet, make sure to pay careful attention to the error message. They’re not always perfect, but they do usually point you toward a solution. Sometimes they display explicit text messages. Sometimes they include the hexadecimal error code. If they’re not clear enough to understand immediately, you can use these things in Internet searches to guide you toward an answer. You must read the error, though. Far too many times, I see “administrators” go to a forum and explain what they tried to do, but then end with, “I got an error” or “it didn’t work”. If the error message had no value, the authors wouldn’t have bothered to write it. Use it.

