How to Use Nested VMs in Azure

It's not cheap, but it is cool (and useful)


This article discusses the most expensive way to host a virtual machine in Microsoft Azure: nested virtualization. While I say this with a glint of humor, it is also true; nested virtualization is not cheap. Nested virtualization means hosting a virtual machine inside another (premium-priced) virtual machine. It does, however, let you solve problems that you cannot, or may not want to, solve any other way, and we will attempt to unpack those scenarios in this article.

Nested Virtualization

Nested virtual machines are not new. Microsoft announced nested virtualization in Azure in July 2017 with the introduction of the Dv3 and Ev3 virtual machine sizes. These virtual machines run on Intel processors with VT-x and EPT technology, and the feature is supported with Windows Server 2016, 2019, and 2022 as the host operating system.

A big caveat to note on nested virtualization is that the nested virtualization host is supported; the nested virtual machines running on it, however, are not. The Microsoft documentation that introduced this topic carries the following note:

Nested Virtualization Host

Wait, but why?

The "how-to" part of any technology is usually the easiest to explain. Before we unpack the technicalities and show how easy nested virtualization is to achieve, we should understand why we might want to do this at all. The note above mentions labs, testing environments, demo environments, and the like as its main purpose.

Familiarity with tools

You like Hyper-V, and you cannot deny it…. That may be true, but there are drawbacks to this approach: cost becomes an issue, and so does supportability. Unless you have Azure credit burning a hole in your pocket, nested virtualization is not the best way to provision virtual machines. Azure supports a vast array of options here, including the Azure Resource Manager web GUI (the Azure portal) and several command-line options such as PowerShell, the Azure CLI, and bash. If infrastructure as code is more your game, consider Terraform, Ansible, or Microsoft's Bicep to build your virtual machines.

Legacy migrations

The most significant risk to mitigate in a legacy environment old enough to host Server 2003 virtual machines is often the hardware age. Nested virtualization lets us migrate a Windows Server 2008 R2 Hyper-V host containing Windows Server 2003 guests into a nested Windows Server 2016 Hyper-V host. The resulting Hyper-V host may be new on-premises hardware or an Azure Virtual machine of sufficient size.

Microsoft supports the Hyper-V role and Failover Clustering on Windows Server 2008 R2 and later operating system versions running in Azure Ev3 and Dv3 series VMs. Microsoft does not support the Azure Virtual Machine Agent for virtual machines running Windows Server 2003; deployment of the operating system itself is supported, but managing it without the agent is tricky.

Hyper-V on Windows Server 2008 and Windows Server 2012 supports integration services for Windows Server 2003 guest virtual machines. Those guests can be managed through the host operating system or System Center Virtual Machine Manager, providing clear avenues for support.

A stretched Hyper-V cluster is another way to move an environment into Azure, given that Hyper-V supports shared-nothing replication and failover. Building such a stretch cluster may allow you to move your legacy Server 2008 R2 hosts onto supported hardware first and then move those workloads into Azure, where the Windows Server host virtual machine is still supported.

Disaster Recovery

Disaster recovery is another reason you may want to use the nested virtual machine scenario. Consider an on-premises virtual environment stretched into Azure using the same stretch-cluster logic mentioned above, except that instead of migrating a legacy platform into Azure, you use Azure as the disaster recovery failover destination.

After all, this makes sense, since Azure gives you great networking, storage, and related options. However, I would only suggest this approach if Azure Site Recovery's lack of support for replicating machines that themselves run Hyper-V rules it out as a DR option for you. Otherwise, I prefer Azure Site Recovery over nested virtualization, as it is vastly cheaper.

Lab training

Azure Lab Services offers the ability to set up training labs and run them in Azure, including support for nested virtualization where required.

Developers, Developers, Developers?

In my opinion, this may be the thinnest reason of them all to deploy nested virtualization. Developers, like infrastructure folks, like to stick with the tools they know, but in the cloud, nested virtualization is probably the bluntest tool of them all, considering what we discussed above in the "familiarity with tools" section. Besides the magic of Bicep and Terraform, Azure DevTest Labs and dev/test pricing offset whatever familiarity advantage Hyper-V may bring.

Size does matter

Since large virtual machines are expensive, we use Azure's Start/Stop VMs during off-hours feature for cost containment, switching off nested virtualization hosts outside of business hours.
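If you prefer to script the shutdown yourself rather than rely on the off-hours feature, a minimal sketch using the Az PowerShell module looks like the following; the resource group and VM names are placeholders for this example.

# Minimal sketch using Az PowerShell; "rg-nested" and "vm-hvhost" are placeholder names
Connect-AzAccount
# Deallocate the host outside business hours (stops compute billing; storage is still charged)
Stop-AzVM -ResourceGroupName "rg-nested" -Name "vm-hvhost" -Force
# Start it again when the working day begins
Start-AzVM -ResourceGroupName "rg-nested" -Name "vm-hvhost"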

Not every virtual machine size is capable of nested virtualization. At the time of writing, the v3 virtual machine families and later are hyper-threaded and capable of running nested virtualization.
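As a rough, name-based way to see which sizes are available in your region, you can list VM sizes with Az PowerShell and filter on the family suffix. This is only a naming heuristic, so always confirm nested virtualization support against the Azure documentation; the region below is a placeholder.

# Rough heuristic: list sizes in a region whose names end in _v3 or later ("eastus" is a placeholder)
Get-AzVMSize -Location "eastus" | Where-Object { $_.Name -match '_v[3-9]$' } | Sort-Object Name | Select-Object Name, NumberOfCores, MemoryInMB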

Networking!

A significant restriction on the practical use of nested virtualization is networking. Due to fabric restrictions, external virtual switches do not function the way they do on your local LAN: DHCP broadcasts do not propagate, and manually assigned IP addresses are not honored by Azure networking. While double-NAT workarounds to separate subnets do exist, to my mind they fall into the realm of the impractical.

For this reason, the rest of this article uses an internal virtual switch, which is well understood and easy to manage. Remember, though, that the internal address space needs to be defined; that is why our install guide deploys DHCP services alongside Hyper-V.

But what about….?

Cost aside, there are definite benefits to nested virtualization, chief among them enabling scenarios that cannot be accommodated otherwise, backed by a practically unlimited pool of hardware. Nested virtualization makes it possible to run otherwise unsupported guest operating systems, such as Windows 2000, Server 2003, or unsupported Linux builds, as well as unsupported scenarios that are impossible inside a native Azure VM, such as custom or experimental drivers.

How to Use Nested VMs in Azure

First, we will build a virtual machine to the specification required for your use case; consider how much RAM and CPU you may need. In my case, I am only demonstrating the concept and need very light virtual hardware, so Windows Server 2019 running on a Standard D2s v3 will do nicely. Depending on your needs, you may require a larger Azure virtual machine SKU. As we configure this virtual machine, allow inbound RDP during setup; we will restrict it immediately afterwards.

Create a resource → create a virtual machine → VM info → deployment complete
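If you would rather script the host build than click through the portal, a minimal sketch using the Az PowerShell module follows. The resource group, VM name, region, and credentials are placeholders, and opening port 3389 here is only temporary, since we restrict RDP shortly.

# Sketch only: create the nested virtualization host with Az PowerShell (placeholder names throughout)
$cred = Get-Credential
New-AzResourceGroup -Name "rg-nested" -Location "eastus"
New-AzVM -ResourceGroupName "rg-nested" -Name "vm-hvhost" -Location "eastus" -Image "Win2019Datacenter" -Size "Standard_D2s_v3" -Credential $cred -OpenPorts 3389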

As we mentioned above, ensure that Auto-shutdown settings are configured unless your host is required to run 24×7.

Auto-shutdown settings

Next, navigate to Networking settings, click on the first rule, and change it to reflect your external IP address, restricting RDP access to this Virtual Machine from your network only.

Networking settings

RDP
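The same restriction can be scripted with Az PowerShell if you prefer; the NSG name, rule name, and source IP below are placeholders that you should adjust to match what the portal created for your VM.

# Sketch: limit the inbound RDP rule to a single public IP (placeholder names and address)
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "rg-nested" -Name "vm-hvhost-nsg"
Set-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "RDP" -Access Allow -Protocol Tcp -Direction Inbound -Priority 300 -SourceAddressPrefix "203.0.113.10/32" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange "3389"
$nsg | Set-AzNetworkSecurityGroup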

Navigate to the Connect tab, download the RDP file, and connect using it.

connect using the RDP file option

As you connect and authenticate, allow your machine to be discoverable by external networks.

allow your machine to be discoverable by external networks

Using an elevated PowerShell session for all of our examples, install both DHCP and Hyper-V.

Install-WindowsFeature -Name DHCP,Hyper-V -IncludeManagementTools

Next, use DISM to enable the Hyper-V feature, and reboot the VM when prompted before continuing.

dism /Online /Enable-Feature /FeatureName:Microsoft-Hyper-V /All

Note that we have installed Hyper-V and DHCP, as demonstrated by the screenshot showing our management tools.

Hyper-V and DHCP
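If you want to confirm the same thing from PowerShell rather than the screenshot, a quick check along these lines should work:

# Confirm both roles report as Installed
Get-WindowsFeature -Name Hyper-V, DHCP | Select-Object Name, InstallState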

Using Get-NetAdapter in an elevated PowerShell session, we show that the default network interface is up. In the screenshot below, you will notice that Windows recognizes only one interface. We also show that our host machine can ping Google DNS.

elevated PowerShell
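For reference, the commands behind that screenshot are roughly the following; 8.8.8.8 is simply Google's public DNS resolver used as a connectivity target.

# List network adapters; expect a single interface before the virtual switch is created
Get-NetAdapter
# Confirm outbound connectivity from the host
Test-Connection -ComputerName 8.8.8.8 -Count 2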

Next, we need a virtual switch for our virtual machines to connect to, plus an address range for them to use. We create an internal virtual switch called "vSwitchInternal" and configure a NAT rule and gateway address for that switch. Execute the following commands in the same PowerShell window so the PowerShell variable values are preserved.

$switchName = "vSwitchInternal"

New-VMSwitch -Name $switchName -SwitchType Internal

New-NetNat -Name $switchName -InternalIPInterfaceAddressPrefix "192.168.0.0/24"

$ifIndex = (Get-NetAdapter | ? {$_.Name -like "*$switchName*"}).ifIndex

New-NetIPAddress -IPAddress 192.168.0.1 -InterfaceIndex $ifIndex -PrefixLength 24

PowerShell window

configure DHCP services

Next, in the same PowerShell window, we configure a DHCP scope on the same switch with a limited address range, assign a gateway of 192.168.0.1 and the Azure-provided DNS server (168.63.129.16), and restart the DHCP service.

Add-DhcpServerV4Scope -Name “DHCP-$switchName” -StartRange 192.168.0.50 -EndRange 192.168.0.100 -SubnetMask 255.255.255.0

Set-DhcpServerV4OptionValue -Router 192.168.0.1 -DnsServer 168.63.129.16

Restart-service dhcpserver


Once the commands complete successfully, create a child virtual machine that uses the configured internal switch. If you need help at this point, consider reading our article on this topic.

create a child virtual machine
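As a rough sketch, the same child VM can also be created from PowerShell on the host; the VM name, memory, disk path, and ISO path below are placeholders, and $switchName assumes you are still in the session used earlier.

# Sketch: create a generation 2 child VM attached to the internal switch (placeholder names and paths)
New-VM -Name "nested-guest01" -MemoryStartupBytes 4GB -Generation 2 -NewVHDPath "C:\VMs\nested-guest01.vhdx" -NewVHDSizeBytes 60GB -SwitchName $switchName
Add-VMDvdDrive -VMName "nested-guest01" -Path "C:\ISO\WindowsServer2022.iso"
Set-VMFirmware -VMName "nested-guest01" -FirstBootDevice (Get-VMDvdDrive -VMName "nested-guest01")
Set-VM -Name "nested-guest01" -ProcessorCount 2
Start-VM -Name "nested-guest01"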

In the following screenshot, we have built a Windows Server 2022 virtual machine, which received a DHCP address in the configured range and can browse the internet successfully.

Windows Server 2022 virtual machine
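To verify the same thing from the host rather than from inside the guest, you can check the DHCP leases and the guest's reported addresses; the VM name matches the placeholder used in the earlier sketch.

# Confirm the guest obtained a lease from our 192.168.0.0/24 scope
Get-DhcpServerv4Lease -ScopeId 192.168.0.0
# Read the guest's IP addresses via Hyper-V integration services
Get-VMNetworkAdapter -VMName "nested-guest01" | Select-Object VMName, IPAddresses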

On legacy support

To prove the point that it can be done, I built a Windows Server 2003 virtual machine next to the Server 2022 guest on the Server 2019 host virtual machine in Azure. The virtual machine works, although the integration services offered by Windows Server 2019 Hyper-V for such an old guest are limited, and the configuration is of course completely unsupported.

Windows Server 2003 virtual machine

Windows Server 2019 Hyper-V

Conclusion

This article has shown you how to use nested virtualization in Azure for some very specific use cases; I hope you enjoyed reading it. Feel free to comment or ask any questions that you might have.


Frequently Asked Questions

When should I consider nested virtualization in Azure?

If you need to run older, unsupported operating systems in Azure, if you need to provide a training environment for Hyper-V, or if you need control over the virtualization host itself, nested virtualization might be the right solution.

Which VM sizes support nested virtualization?

Dv3 and Ev3 SKUs are supported.

How do I set up a nested VM?

Follow the instructions in this article to set up a host VM, enable nested virtualization, and then create a child VM inside the host VM.

What does the host VM cost when it is switched off?

When any Azure VM is in the stopped (deallocated) state, you only pay for its storage (plus any backup costs if you are backing up the VM).
