How to Use Hyper-V and Kali Linux to Securely Wipe a Hard Drive

The exciting time has come for my wife’s laptop to be replaced. After all the fun parts, we’ve still got this old laptop on our hands, though. Normally, we donate old computers to the local Goodwill. They’ll clean them up and sell them for a few dollars to someone else. Of course, we have no idea who will be getting the computer, and we don’t know what processes Goodwill puts them through before putting them on the shelf. A determined attacker might be able to retrieve social security numbers, bank logins, and other things that we’d prefer to keep private. As usual, I will wipe the hard drive prior to the donation. This time though, I have some new toys to use: Hyper-V and Kali Linux.

Why Use Hyper-V and Kali Linux to Securely Wipe a Physical Drive?

I am literally doing this because I can. You can easily find any number of other ways to wipe a drive. My reasons:

  • I don’t have any experience with Windows-based apps that wipe drives and didn’t find any freebies that spoke to me
  • I don’t really want to deal with booting this old laptop up to one of those security CDs
  • Kali Linux focuses on penetration testing, but Kali is also the name of the Hindu goddess of destruction. For a bit of fun, do an Internet image search on her, but maybe not around small children. What’s more appropriate than unleashing Kali on a disk you want to wipe?
  • I don’t want to deal with a Kali Live CD any more than I want to use one of the other CD-based tools, nor do I want to build a physical Kali box just for this. I already have Kali running in a virtual machine.
  • It’s very convenient for me to connect an external 2.5″ SATA disk to my Windows 10 system.

So yeah, I’m doing this mostly for fun.

Connect the Drive

I’m assuming that you’ve already got a Hyper-V installation with a Kali Linux guest. If not, get those first.

Since we’re working with a physical drive, you also need a way to physically connect the drive to the Hyper-V host. In my case, I have an old Seagate FreeAgent GoFlex that works perfectly for this. It has an enclosure for a small SATA drive and a detachable USB-to-SATA connector. I just pop the connector off its own drive, plug it onto the laptop drive, and voila! I can connect her drive to my PC via USB.

[Image: how I connected the hard drive]

You might need to come up with some other method, like cracking your case and connecting the cables. Hopefully not.

I plugged the disk into my Windows 10 system, and as expected, it appeared immediately. Next, I went into Disk Management and took the disk Offline.

[Image: hard disk management page]

I then went into Hyper-V Manager and ensured the Kali guest was running. I opened its settings page to the SCSI Controller page. There, I clicked the Add button.

[Image: adding the hard drive]

It created a new logical connection and asked me if I wanted a new VHDX or to connect a physical disk. In this case, the physical disk is what we’re after.

[Image: select physical hard disk]

After clicking OK, the disk immediately appeared in Kali.

In Kali, open the terminal from the launcher at the left:

[Image: Kali Linux terminal launch]

Use lsblk to verify that Kali can see your disk. I already had my terminal open so that I could perform a before and after for you:

[Image: Kali Linux terminal]

Remember that Linux enumerates SATA disks in order as sda, sdb, sdc, and so on. So, I would have known that the most recently detected disk was sdb even without the before and after.

Use shred to Perform the Wipe

Now that we’ve successfully connected the drive, we only need to perform the wipe. We’ll use the “shred” utility for that purpose. It ships as part of GNU coreutils, so Kali, like nearly every other distribution, already has it waiting for you.

The shred utility has a number of options. Use shred --help to view them all. In my case, I want to view progress, and I want to increase the number of passes from the default of 3 to 4. I’ve been told that analog readers can sometimes go as far as three layers deep. Apparently, even that is untrue; it seems that a single pass will do the trick. However, old paranoia dies hard. So, four passes it is.

I used:

[Image: Kali Linux terminal showing the shred command]
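The command itself only survived as a screenshot, but from the description above it would have taken this shape; /dev/sdb was the device from my lsblk check, so verify yours before running anything. The block below demonstrates against a scratch file so that it is safe to try:

```shell
# -v shows progress; -n 4 performs four overwrite passes.
# For the real wipe, target the disk instead (run as root, and
# triple-check the device name with lsblk first -- shred is irreversible):
#   shred -v -n 4 /dev/sdb
dd if=/dev/urandom of=/tmp/shred-demo.bin bs=64K count=4 status=none
shred -v -n 4 /tmp/shred-demo.bin
```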

And then, I found something else to do. As you can imagine, overwriting every spot on a 250GB laptop disk takes quite some time.

Because of the time involved, I needed to temporarily disable Windows 10 sleep mode. Otherwise, Connected Standby would interrupt the process.

[Image: disabling sleep mode]

After the process completed, I used Hyper-V Manager to remove the disk from the VM. Since I never mounted it in Kali, I didn’t need to do anything special there. After that, I bolted the drive back into the laptop. It’s on its way to its happy new owner, and I don’t need to worry about anyone stealing our information from it.

How to Securely Monitor Hyper-V with Nagios and NSClient

I’ve provided some articles on monitoring Hyper-V using Nagios. In all of them, I’ve specifically avoided the topic of securing the communications chain. On the one hand, I figure that we’re only working with monitoring data; we’re not passing credit card numbers or medical history.

On the other hand, several of my sensors use parameterized scripts. If I didn’t design my scripts well, then perhaps someone could use them as an attack vector. Rather than pretend that I can ever be certain that I’ll never make a mistake like that, I can bundle the communications into an encrypted channel. Even if you’re not worried about the scripts, you can still enjoy some positive side effects.

What’s in this Article

The end effects of following this article through to completion:

  • You will access your Nagios web interface at an https:// address.
  • You can access your Nagios web interface using your Active Directory domain credentials. You can also allow others to use their credentials. You can control what any given account can access.
  • Nagios will perform NRPE checks against your Hyper-V and Windows Server systems using a secured channel.

As you get started, be advised that this is a very long article. I did test every single line, usually multiple times. You will almost always need to use sudo. I tried to add it everywhere it was appropriate, but each time I proofread this article, I find another one that I missed. You might want to just sudo -s right at the beginning and be done with it.

General Philosophy

When I started working on this article, I fully intended to utilize a Microsoft-based certificate authority. Conceptually, PKI (public key infrastructure) is extremely simple. But, from that simplicity, implementers run off and make things as complicated as possible. I have not encountered any worse offender than Microsoft. After several days struggling against it, I ran up against problems that I simply couldn’t sort out. After trying to decipher one too many of Microsoft’s cryptic and non-actionable errors (“Error 0x450a05a1: masking tape will not eat pearl soufflé from the file that cannot be named”), I finally gave up. So, while it should be possible to use a Microsoft CA for everything that you see here, I cannot demonstrate it. Be aware that Microsoft’s tools tend to output DER (binary) certificates. Choose the Base64 option when you can. You can convert DER to Base64 (PEM).
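openssl can do that DER-to-PEM conversion in one command. The file names below are placeholders, and the first command just manufactures a throwaway DER certificate so the example is self-contained:

```shell
# create a throwaway DER-encoded certificate purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
    -keyout /tmp/demo.key -out /tmp/demo.der -outform der 2>/dev/null
# the conversion itself: binary DER in, Base64 PEM out
openssl x509 -inform der -in /tmp/demo.der -out /tmp/demo.pem
head -n 1 /tmp/demo.pem   # prints: -----BEGIN CERTIFICATE-----
```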

Rather than giving up entirely, I re-centered myself and adopted the following positions:

  • Most of my readers probably don’t want to go to the hassle of configuring a Microsoft CA anyway; many of you don’t have the extra Windows Server licenses for that sort of thing, either
  • We’re securing monitoring traffic, not state secrets. We can shift the bulk of the security responsibility to the endpoints
  • To make all of this more secure, one simply needs to use a more secure CA. The remainder of the directions stay the same

Some things could be done differently. In a few places, I’m fairly certain that I worked harder than necessary (i.e., could have used fewer arguments to openssl). Functionality and outcome were most important.

In general, certificates are used to guarantee the identity of hosts. Anything else (host names, IP addresses, MAC addresses, etc.) can be easily spoofed. In this case, we’re locking down a monitoring system. If someone manages to fool Nagios… uh… OK then. I am more concerned that, if you use the scripts that I provide, we are transmitting PowerShell commands to a service running with administrative privileges, and the transmission is sent in clear text. There are many safeguards to prevent that from being a security risk, but I want to add layers on top of that. So, while host authentication is always a good thing, my primary goal in this case is to encrypt the traffic. It’s on you to take precautions to lock down your endpoints. Maintain a good root password on your Linux boxes, maintain solid password protection policies, etc.

I borrowed heavily from many sources, but probably none quite so strongly as Nagios’ own documentation.


You need a Linux machine running Nagios. I wrote one guide for doing that on Ubuntu. I wrote another guide for doing that on CentOS. I have a third article forthcoming for doing the same with OpenSUSE. It’s totally acceptable for you to bring your own. The distributions aren’t so radically different that you won’t be able to figure out any differences that survive this article.

Also, for any of this to make sense, you need at least one Windows Server/Hyper-V Server system to monitor.

Have Patience! I can go on and on all day about how Microsoft makes a point of avoiding actionable error messages. In this case, they are far from alone. I lost many hours trying to decipher useless messages from openssl and NRPE. Solutions are usually simple, but the problems are almost always frustrating because the authors of these tools couldn’t be bothered to employ useful error handling. NSClient++ treated me much better, but even that let me down a few times. Take your time and remember that, even though there are an absurd number of configuration points, certificate exchange is fundamentally simple. Whatever problem you encounter is probably a small one.

Step 1. Acquire and Enable openssl

Every distribution that I used already had openssl installed. Just to be sure, use your distribution’s package manager. Examples:
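The exact command varies by distribution; these are the usual forms (the package name is the stock one, but check your repositories if it differs):

```shell
# Ubuntu:   sudo apt-get install openssl
# CentOS:   sudo yum install openssl
# openSUSE: sudo zypper install openssl
# whichever route you took, confirm the binary is present:
openssl version
```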

You’ll probably get a message that you already have the package. Good!

Next, we need a basic configuration. You should automatically get one along with the default installation of openssl. Look for a file named “openssl.cnf”. It will probably be in /etc/ssl or /usr/lib/ssl. Linux can help you:
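One way to let it help: openssl itself will tell you its configuration directory, and the file normally sits right inside it:

```shell
# openssl reports its configuration directory
openssl version -d
# the config file normally lives directly inside it
ls "$(openssl version -d | cut -d'"' -f2)/openssl.cnf"
```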

If you haven’t got one, then maybe removing and reinstalling openssl will create it; I never tried that. I’ll also provide the one that I used. Different sections of the file are used for different purposes. I’ll show each portion in context.

Set Up Your Directories and Environment

You will need to place your certificate files in a common place. First, look around the location where you found the openssl.cnf file. Specifically, check for “private” and “certs” directories. If they don’t exist, you can make some.
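Creating one amounts to the following; /var/certs matches the location I use later in this article, but it is a convention, not a requirement (run it from a root shell or prefix with sudo):

```shell
# one common directory for non-CA keys and certificates
mkdir -p /var/certs
chmod 755 /var/certs
```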

To keep things simple, I just dump everything there on systems that need a directory created. I will write the remainder of this document using that directory. If your system already has the split directories, use “private” to hold key files and “certs” to hold certificate files. Note that if you find these files underneath a “ca” path, that is for the certificate authority, not the client certificates that I’m talking about. I’ll specifically cover the certificate authority in the next section.

Step 2. Set Up a Certificate Authority

In this implementation, the Linux system that runs Nagios will also host a certificate authority. We’ll use that CA to generate certificates that Nagios and NRPE can use. Some people erroneously refer to those as “self-signed” because they aren’t issued by an “official” CA. However, that’s not the definition of “self-signed”. A self-signed certificate doesn’t have an authority chain. In our case, that term will apply only to the CA’s own certificate, which will then be used to sign other certificates. All of those will be authentic, not-self-signed certificates. As I describe it, you’ll use the same system for both your CA and Nagios, but you could just as easily spin up another Linux system to be the CA. You would only need to copy CSR, certificate, and key files across the separate systems as necessary to implement that.

Set Up Your Directories and Environment

You need places to put your CA’s files and certificates. openssl will require its own particular files. If you found some CA folders near your openssl.cnf, use those. Otherwise, you can create your own.
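Here is a sketch of the CA layout that the rest of this article references; /var/ca and the file names are my choices, so adjust them if your distribution already provides CA folders (run from a root shell):

```shell
# directory skeleton for the CA
mkdir -p /var/ca/private /var/ca/newcerts /var/ca/certs
chmod 700 /var/ca/private
# openssl's bookkeeping files: an empty issued-certificate database
# and a starting serial number
touch /var/ca/index.txt
echo 01 > /var/ca/serial
```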

Configure your default openssl.cnf (sometimes openssl.conf). Note the file locations that I mentioned in the previous section. Mine looks like this:
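The CA-relevant portion looked roughly like this; the paths match the /var/ca layout used throughout this article, and everything else stayed at the distribution defaults. Treat this as a sketch, not a complete file:

```
[ ca ]
default_ca      = CA_default

[ CA_default ]
dir             = /var/ca
certs           = $dir/certs
new_certs_dir   = $dir/newcerts
database        = $dir/index.txt
serial          = $dir/serial
private_key     = $dir/private/ca_key.pem
certificate     = $dir/ca_cert.pem
default_md      = sha256
default_days    = 365
policy          = policy_match

[ req ]
req_extensions  = v3_req
```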

Make certain that you walk through it and enter your localized settings in place of mine. Also note that I’ve removed the comment mark in front of “req_extensions = v3_req”.

Create the CA certificate:
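A request along these lines creates it; the paths assume the /var/ca layout from above, and v3_ca is the CA extension section that ships in the default openssl.cnf:

```shell
openssl req -new -x509 -days 3650 -extensions v3_ca \
    -config /etc/ssl/openssl.cnf \
    -keyout /var/ca/private/ca_key.pem -out /var/ca/ca_cert.pem
```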

You will first be asked to answer a series of questions. If you filled out the fields correctly, then you can just press [Enter] all the way through them. You will then be asked to provide a password for the private key. Even though we aren’t securing anything of earth-shattering importance, take this seriously.

Your CA’s private key is the most vital file out of all that you’ll be creating. We’re going to lock it down so that it can only be accessed by root:
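Assuming the key landed at /var/ca/private/ca_key.pem, the lockdown amounts to:

```shell
chown root:root /var/ca/private/ca_key.pem
chmod 600 /var/ca/private/ca_key.pem
```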

The public key is included in the public cert file (ca_cert.pem). That can safely be read by anyone, anywhere, any time.

For bonus points, research setting up access to your new CA’s certificate revocation list (CRL). I did not set that up for mine.

Step 3. Set Your Managing Computer to Trust the CA

Your management computer will access the Nagios site that will be secured by your new CA. Therefore, your management computer needs to trust the certificates issued by that CA, or you’ll get warnings in every major browser.

For a Linux machine (client, not the Nagios server), check to see if /etc/ssl/certs contains several files. If it does (Ubuntu, openSUSE), just copy the CA cert there. You can rename the file so that it stands out better, if you like. Not every app on Linux will read that folder; you’ll need to find directions for those apps specifically.

If your Linux distribution doesn’t have that folder (CentOS), then look for /etc/pki/ca-trust/source/anchors. If that exists (CentOS), copy the certificate file there. Then, run:
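The refresh command on CentOS is:

```shell
sudo update-ca-trust extract
```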

For a Windows machine:

  1. Use WinSCP to transfer the ca_cert.pem file to your Windows system (not the key; the key never needs to leave the CA).
  2. Run MMC.EXE as administrator.
  3. Click File->Add/Remove Snap-in.
  4. Choose Computer Account and click Next.
  5. Leave Local Computer selected and click Finish.
  6. Click OK back on the Add/Remove Snap-ins dialog.
  7. Back in the main screen, right-click Trusted Root Certification Authorities. Hover over All Tasks, then click Import.
  8. On the Welcome screen, you should not be allowed to change the selection from Local Machine.
  9. Browse to the file that you copied over using WinSCP. You’ll either need to change the selection to allow all files or you’ll need to have renamed the certificate to have a .cer extension.
  10. Choose Trusted Root Certification Authorities.
  11. Click Finish on the final screen.
  12. Find your new CA in the list and double-click it to verify.

The above steps can be duplicated for other computers that need to access the Nagios site. For something a bit more widespread, you can deploy the certificate using Group Policy Management Console. In the GPO’s properties, drill down to Computer Configuration\Windows Settings\Security Settings\Public Key Policies\Trusted Root Certification Authorities. You can right-click on that node and click Import to start the same wizard that you used above.

Note: Internet Explorer, Edge, and Chrome will use trusted root certificates from the Windows store. The authors of Firefox have decided that reinventing the wheel and maintaining a completely separate certificate store makes sense somehow. You’ll have to configure its trusted root certificate store within the program.

Step 4. Secure the Nagios Web Site

If you followed any of my earlier guides, you’re accessing your Nagios site over port 80 with Basic authentication. That means that any moderately enterprising soul can snag your Nagios site’s clear-text password(s) right out of the Ethernet. You have several options to fix that. I chose to use an SSL site while retaining Basic authentication. Your password still travels, but it travels encrypted. As long as you protect the site’s private key, an attacker should find cracking your password prohibitively difficult.

You could also use Kerberos authentication to have the Nagios site check your credentials against Active Directory. When that works, it appears that your password is protected, even using unencrypted HTTP. However, I could not find an elegant way to combine that with the existing file-based authentication. So, if you’re one of my readers at a smaller site with only one or two domain controllers and you lose your domain for some reason, you’d also lose your ability to log in to your monitoring environment. Also, managing Kerberos users in Nagios is kind of ugly. I didn’t find that a palatable option.

So, we’re going to keep the file-based authentication model and add LDAP authentication on top of it. You’ll be able to use your Active Directory account to log in to the Nagios site, but you’ll also be able to fall back to the existing “nagiosadmin” account when necessary.

One thing that I don’t demonstrate is updating the firewall to allow for port 443. Whatever directions you used to open up port 80, follow those for 443.

Create the Certificate for Apache

If you only use the one site address, then you can continue using the same openssl.cnf file from earlier steps; I would just proceed with what I have. However, I access my site with more than one name, and I also have a handful of other sites on the same system. I (and you) could certainly create multiple certificates to handle them all. I chose to use Subject Alternative Names instead. That means that I create a single certificate with all of the names that I want. It means less overhead and micromanagement for me. Again, we’re not hosting a stock exchange, so we don’t need to complicate things.

You have two choices:

  1. Edit your existing openssl.cnf file with the differences for the new certificate(s).
  2. Copy your existing openssl.cnf file, make the changes to the copy, and override the openssl command to use the copied file.

I suppose a third option would be to hack at the openssl command line to manually insert what you want. That requires more gumption than I can muster, and I don’t see any benefits. I’m going with option 2.

Of course, it’s not a requirement to use nano. Use the editor you prefer.
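The copy step is just a cp; openssl-site.cnf is my name for the copy, nothing more:

```shell
# keep the original intact; all of the SAN edits go into the copy
cp /etc/ssl/openssl.cnf /etc/ssl/openssl-site.cnf
# then open the copy in nano, vi, or whatever you prefer
```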

The following shows sample additions to the file. They are not sufficient on their own!
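The additions look like this; the host names and IP address are placeholders for your own values:

```
req_extensions = v3_req

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = nagios.example.int
DNS.2 = monitor.example.int
IP.1  =
```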

The req_extensions line already exists in the default sample config, but has a hash mark in front of it to comment it out. Remove that (or type a new line, whatever suits you). The [ v3_req ] section probably exists; whatever it’s got, leave it. Just add the subjectAltName line. The [ alt_names ] segment won’t exist (probably). Add it, along with the DNS and IP entries that you want.

Note: The certificates we create now are not part of the CA. I sudo mkdir /var/certs to hold non-CA cert files. That’s a convenience, not a requirement. Follow the guidance from earlier.

If you’re copy/pasting, note that I used a .cnf file from /etc/ssl. Your structure may be different.
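The request generation ran along these lines; the key/CSR names and the copied config path are my choices, so substitute your own:

```shell
openssl req -new -newkey rsa:2048 -nodes \
    -config /etc/ssl/openssl-site.cnf \
    -keyout /var/certs/nagios-site.key -out /var/certs/nagios-site.csr
```

-nodes leaves the private key unencrypted so that Apache can start without prompting for a passphrase.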

You will be asked to answer a series of questions like you did for the CA, with an additional query regarding a password and a company name. I would just [Enter] through both of those.

Verify that your CSR has the necessary Subject Alternative Names:
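Assuming the CSR name from above, the check is:

```shell
openssl req -in /var/certs/nagios-site.csr -noout -text | grep -A 1 'Subject Alternative Name'
```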

Secure the private key:
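Same treatment as the CA key, against the site key’s assumed path:

```shell
chown root:root /var/certs/nagios-site.key
chmod 600 /var/certs/nagios-site.key
```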

Submit the request to your new CA:
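With the CA config from earlier, the signing step looks like this; v3_req carries the Subject Alternative Names through to the issued certificate:

```shell
openssl ca -config /etc/ssl/openssl.cnf -extensions v3_req \
    -in /var/certs/nagios-site.csr
```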

First, openssl will ask you to supply the password for the CA’s private key. Next, you’ll be shown a preview of the certificate and asked twice to confirm its creation.

The generated file will appear in your CA’s configured output directory (the one used in these directions is /var/ca/newcerts). It will use the next serial from your /var/ca/serial file as the name. So, if you’re following straight through, that will be /var/ca/newcerts/01.pem. You can ls /var/ca/newcerts to see them all. The highest number is the one that was just generated. Verify that it’s yours:

Transfer the certificate to whatever location that you’ll have Apache call it from, and, for convenience, rename it:
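Continuing with my example paths and serial 01:

```shell
cp /var/ca/newcerts/01.pem /var/certs/nagios-site.pem
```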

Tell Apache to Use SSL with the Certificate

Apache allows so much latitude in configuration that it appears to be complicated. Every distribution that installs Apache from repositories follows its own conventions, making things even more challenging. I’ll help guide you where possible. If you feel lost, just remember these things:

  • The last occurrence of a setting that Apache reads overrides all earlier occurrences of that setting
  • Apache reads files in alphabetical order
  • Apache doesn’t care about file names, only extensions

So, any time that a configuration doesn’t work, that means that a later setting overrides it. It might be further down in the same file or it might be in another file, but it’s out there somewhere. It might be in a file with a seemingly related name, but it might not be.

Start by locating the master Apache configuration file.

  • Ubuntu and OpenSUSE: /etc/apache2/apache2.conf
  • CentOS: /etc/httpd/conf/httpd.conf

This file will help you to figure out what extensions qualify a configuration file and which directories Apache searches for those configuration files.

We will take these basic steps:

  1. Enable SSL
  2. Instruct Apache to listen on ports 80 and 443
  3. Instruct Apache to redirect all port 80 traffic to port 443
  4. Secure all 443 traffic with the certificate that we created in the preceding section

Enable SSL in Apache

Your distribution probably enabled SSL already. Verify on Ubuntu with apache2 -M | grep ssl. Verify on CentOS/OpenSUSE with httpd -M | grep ssl. If you are rewarded with a return of ssl_module, then you don’t need to do anything else.

To enable Apache SSL on Ubuntu/OpenSUSE: sudo a2enmod ssl.

To enable Apache SSL on CentOS: sudo yum install mod_ssl.

Configure SSL in Apache

We could do all of steps 2-4 in a single file or across multiple files. I tend to do step 2 in a place that makes sense for the distribution, then steps 3 and 4 in the primary site configuration file. We could also spread out certificates across multiple virtual hosts. I’m not hosting tenants, so I tend to use one virtual host per site, but each uses the same certificate.

Remember, it doesn’t really matter where any of these things are set. The only thing that matters is that they are processed by Apache after any conflicting settings. Do your best to simply eliminate any conflicts. For instance, CentOS puts a lot of SSL settings in /etc/httpd/conf.d/ssl.conf. For that distribution, I left all of the settings it creates for defaults but commented out the entire VirtualHost item. I strongly encourage you to create backup copies of any file before you modify them. Ex: cp /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.original

Somewhere, you need a Listen 443 directive. Most distributions will automatically set it when you enable SSL (look in ports.conf or a conf file with “ssl” in the name). However, I’ve had a few times when that only worked for IPv6. If you can’t get 443 to work on IPv4, try giving Listen an explicit IPv4 address (for example, Listen This resolves step 2.

Next, we need a port 80 to 443 redirect. Apache has an “easy” Redirect command, but it’s too restrictive unless you’re only hosting a single site. In my primary site file, I create an empty port 80 site that redirects all inbound requests to an otherwise identical location using https:
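A sketch of that empty redirect site, with a placeholder ServerName; Redirect permanent is what produces the 301:

```apache
<VirtualHost *:80>
    ServerName nagios.example.int
    Redirect permanent /
</VirtualHost>
```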

This sequence sends a 301 code back to the browser along with the “corrected” URL. As long as the browser understands what to do with 301s (every modern browser does), then the URL will be rewritten right in the address bar. If you’re stuck for where to place this, I recommend:

  • On Ubuntu: /etc/apache2/sites-available/000-default.conf (symlinked from /etc/apache2/sites-enabled/)
  • On CentOS: /etc/httpd/conf.d/sites.conf
  • On OpenSUSE: /etc/apache2/default-server.conf

Wherever you put it, you need to verify that there are no other virtual hosts set to port 80. If there are, comment them out. You could also replace the 80 with 443, provided that you also add in the certificate settings that I’m about to show you.

After setting up the 80->443 redirect, you next need to configure a secured virtual host. It must do two things: listen on port 443 and use a certificate to encrypt traffic. Mine looks like this:
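A minimal version of that virtual host, reusing the certificate and key paths from the earlier steps (the names are placeholders):

```apache
<VirtualHost *:443>
    ServerName nagios.example.int
    SSLEngine on
    SSLCertificateFile /var/certs/nagios-site.pem
    SSLCertificateKeyFile /var/certs/nagios-site.key
</VirtualHost>
```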

If you have other sites on the same host, create essentially the same thing but use the ServerName/ServerAlias fields to differentiate. For instance, my MRTG site is on the same server:
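It differs only in the server name and document root while reusing the same certificate; again, a sketch with placeholder values:

```apache
<VirtualHost *:443>
    ServerName mrtg.example.int
    DocumentRoot /srv/www/mrtg
    SSLEngine on
    SSLCertificateFile /var/certs/nagios-site.pem
    SSLCertificateKeyFile /var/certs/nagios-site.key
</VirtualHost>
```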

If you want, you can certainly use the instructions from the preceding section to create as many additional certificates as necessary for your other sites.

You’ve finished the hard work! Now just restart Apache. service apache2 restart on systems that name the service “apache2” (Ubuntu, OpenSUSE) or service httpd restart (CentOS). Test by accessing the site using an https prefix, then again with an http prefix to ensure that redirection works.

Step 5: Configure Nagios for Active Directory Authentication

Now that we’re securing the Nagios web site with SSL, we can feel a little bit better about entering Active Directory domain credentials into its challenge dialog. We have five phases for that process.

  1. Create (or designate) an account to use for directory reads.
  2. Select an OU to scan for valid accounts.
  3. Enable Apache to use LDAP authentication.
  4. Configure Apache directory security.
  5. Set Nagios to recognize Active Directory accounts.

Create an Account for Directory Access

Use PowerShell or Active Directory Users and Computers to create a user account. It does not matter what you call it. It does not matter where you put it. It does not need to have any group membership other than the default Domain Users group. It only requires enough powers to read the directory, which all Domain Users have by default. I recommend that you set its password to never expire or be prepared to periodically update your Nagios configuration files.

Once you’ve created it, you need its distinguished name. You can find that on the Attribute Editor tab in ADUC. You can also find it with Get-ADUser:
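Something like the following, with a made-up account name, prints just the DN:

```powershell
Get-ADUser -Identity 'svc-nagios-read' |
    Select-Object -ExpandProperty DistinguishedName
```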


Keep the DN and the password on hand. You’ll need them in a bit.

Selecting OUs for Apache LDAP

When an account logs in to the web site, Apache’s mod_authnz_ldap will search for it within locations that you specify. You need to know the distinguished name of at least one organizational unit. Apache’s mod_ldap queries cannot run against the entire directory. I found many, many, many articles claiming that it’s possible, including Apache’s official document, but they are all lies (thanks for wasting hours of my time on searches and tests, though, guys, I always appreciate that).

It will, however, search multiple locations, and it can search downward into the child OUs of whatever OU you specify. Luckily for me, I have a special Accounts OU that I’ve created to organize user accounts. Hopefully, you have something similar. If not, you can use the default Users folder. You can do both.

I’ll show you how to connect to an OU and the default Users folder.


It is not necessary for the directory read account that you created in the first part of this section to exist in the selected location(s). The target location(s), or a sub-OU, only needs to contain the accounts that will log in to Nagios.

Once you’ve made your selection(s), you need to know the distinguished name(s). You can use the Attribute Editor tab like you did for the user, or Get-ADOrganizationalUnit:
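For example (substitute your own OU name):

```powershell
Get-ADOrganizationalUnit -Filter "Name -eq 'Accounts'" |
    Select-Object -ExpandProperty DistinguishedName
```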


Enabling LDAP Authentication in Apache

Apache requires two modules for LDAP authentication: authnz_ldap_module and ldap_module. You will probably need to enable them, but you can check in advance. On Ubuntu, use apache2 -M | grep ldap. On CentOS/OpenSUSE, use httpd -M | grep ldap. If you see both of these modules, then you don’t need to do anything else.

To enable Apache LDAP authentication on Ubuntu/OpenSUSE: sudo a2enmod authnz_ldap. You might also need to: sudo a2enmod ldap.

To enable Apache LDAP authentication on CentOS: sudo yum install mod_ldap.

Make certain to perform the apachectl -M verification afterward to ensure that both modules are available.

Configuring Apache Directories to Use LDAP Authentication

Collect your OU DN(s), your user DN, and the password for that user. Now, we’re going to configure LDAP authorization sections in Apache. Again, you can put these in any conf file that pleases you. I usually find the distribution’s LDAP configuration file:

  • Ubuntu: /etc/apache2/mods-available/ldap.conf
  • CentOS: /etc/httpd/conf.modules.d/01-ldap.conf
  • OpenSUSE: no default file is created for the ldap module on OpenSUSE; you can create your own or add it to another, like /etc/apache2/global.conf

Warning: On Ubuntu, the files always exist in mods-available; when you run a2enmod, it symlinks them from mods-enabled. I highly recommend that you avoid the mods-enabled directory. Eventually, something bad will happen if you touch anything there manually (yes, that’s experience talking). Edit the files in mods-available.

My ldap.conf, for comparison:
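A configuration along these lines matches the breakdown that follows; all DNs, host names, and the password are placeholders for your own values:

```apache
# optional cache tuning; these are the defaults
LDAPSharedCacheSize 500000
LDAPCacheEntries 1024
LDAPCacheTTL 600
LDAPOpCacheEntries 1024
LDAPOpCacheTTL 600

# accounts in my Accounts OU (and its child OUs)
<AuthnProviderAlias ldap ldap-accounts>
    AuthLDAPBindDN "CN=svc-nagios-read,OU=Accounts,DC=example,DC=int"
    AuthLDAPBindPassword "placeholder-password"
    AuthLDAPURL ldap://example.int/OU=Accounts,DC=example,DC=int?sAMAccountName?sub?(objectClass=user)
</AuthnProviderAlias>

# accounts in the default Users folder (note CN=, not OU=)
<AuthnProviderAlias ldap ldap-users>
    AuthLDAPBindDN "CN=svc-nagios-read,OU=Accounts,DC=example,DC=int"
    AuthLDAPBindPassword "placeholder-password"
    AuthLDAPURL ldap://example.int/CN=Users,DC=example,DC=int?sAMAccountName?sub?(objectClass=user)
</AuthnProviderAlias>
```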

The initial lines are not required, but can make the overall experience a bit smoother. I am just using defaults; I didn’t tune any of those lines. The only thing to be aware of is that Apache will be oblivious to any directory changes that occur within the cache timeout window, including lockouts and disables.

A breakdown of the AuthnProviderAlias sections:

  • In the opening brackets, AuthnProviderAlias and ldap must be the first two parts. We are triggering the authn provider framework and telling it that we’re specifically working with the ldap provider. Where I used ldap-accounts and ldap-users, you can use anything you like. I named them after the specific OUs that I selected. Whatever you enter here will be used as reference tags in your Directory authentication sections.
  • For AuthLDAPBindDN, use the distinguished name of the read-only Active Directory user that you created at the beginning of this section. You can omit the quotes if you have no spaces in the DN, but I would recommend keeping them just in case.
  • For AuthLDAPBindPassword, use the password of the read-only Active Directory account. Do not use quotes unless there are quotes in the password. If your password contains a quote, I recommend changing it.
  • For AuthLDAPURL, use the distinguished name of the OU to search. Use one? instead of sub? if you don’t want it to search sub-OUs.

Note that the Users folder uses CN, not OU.

TLS/SSL/LDAPS Configuration for Apache and LDAP

You should be able to authenticate with TLS or LDAPS if your domain is configured for them. I couldn’t get that to work because of the state of my domain. I have made it work elsewhere, so I can confirm that it does work. If you want to try on your own, I’ll tell you right off: find the “LogLevel” line in your Apache configs and bump it up to “Debug” until you have it working, or you’ll have no idea why things don’t work. The logs are output somewhere in /var/log/httpd or /var/log/apache2, depending on your configuration/distribution (the file is usually ssl_error_log, but it can be overridden, so you might need to dig a tiny bit). You can go through Apache’s documentation on this mod for some hints. You need at least:

  • LDAPTrustedGlobalCert CA_BASE64 /path/to/your/domain/CA.pem in some Apache file. I use the built-in ldap.conf or 01-ldap.conf for Apache. If you download the certificate chain from your domain CA’s web enrollment server, you can extract the subordinate’s certificate and convert it from P7B to PEM.
  • LDAPTrustedMode SSL in some Apache file if you will be using LDAPS on port 636. I normally keep it near the previous entry. Note: you can also just append SSL to any of the AuthLDAPURL entries for local configuration instead of global. In your AuthLDAPURL lines, you must change ldap: to ldaps: and append :636 to the hostname portion. Ex: AuthLDAPURL ldaps://your.dc.address:636/DC=siron,DC=int?sAMAccountName?sub?(objectClass=user)
  • LDAPTrustedMode TLS in some Apache file if you will be using TLS on port 389. I normally keep it near the previous entry. Note: you can also just append TLS to any of the AuthLDAPURL entries for local configuration instead of global.
  • In the <Directory> fields that attach to AD, you need: LDAPTrustedClientCert CERT_BASE64 /var/certs/your-local-system.pem. It might also work in the AuthnProviderAlias blocks; I haven’t yet been able to try.
  • You might need to add LDAPVerifyServerCert off to an Apache configuration. I don't like that, because it eliminates the domain controller authentication benefit of using TLS or LDAPS. Essentially, if you can get openssl s_client -connect your.domain.address:636 -CAfile ca-cert-file-from-first-bullet.pem to work, then you will be fine.

The hardest part is usually keeping LDAPVerifyServerCert On. First, use openssl s_client -connect your.domain.controller:636 -CAfile your.addomain.cafile.pem. It will display a certificate. Paste that into a file and save it. Then, use openssl verify -CAfile your.addomain.cafile.pem your-saved-certificate.pem. If that says OK, then you should be able to get SSL/TLS to work.
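If you don't have a domain controller handy to practice against, the verification mechanics can be rehearsed entirely locally. This sketch (all file and host names are hypothetical) builds a throwaway CA, signs a stand-in "domain controller" certificate with it, and then runs the same openssl verify check you'd run against your saved s_client output:

```shell
# Create a throwaway CA (demo only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=Demo CA" -days 1
# Create and sign a stand-in "domain controller" certificate
openssl req -newkey rsa:2048 -nodes -keyout dc.key -out dc.csr \
  -subj "/CN=dc.demo.int"
openssl x509 -req -in dc.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out dc.pem -days 1
# The same check you'd run against the saved s_client output
openssl verify -CAfile ca.pem dc.pem
```

If the last command prints dc.pem: OK, the chain validates, which is exactly what LDAPVerifyServerCert On needs to succeed.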

Because security is our goal here and I couldn't get TLS or LDAPS to work, I did run a Wireshark trace on the communication between the Nagios system and my domain controller. It does pass the user name in clear text, but it does not transmit the password in clear text. I don't love it that the user names are clear, but I also know that there are much easier ways to determine domain user accounts than by sniffing Apache LDAP authentication packets. There are also easier ways to crack your domain than by spoofing a domain controller to your Nagios system. If you can't get TLS or LDAPS to work, it won't be the weakest link in your authentication system.

Note 1: Be very, very, very careful about typing. You’re handling your directory in read-only mode, so I wouldn’t worry about breaking the directory. What you need to worry about is the very poor error reporting in this module. I lost an enormous amount of time over a hyphen where an equal sign should have been. It was right on the side-scroll break of nano so I didn’t see it for a very long time. The only error that I got was AH01618: user esadmin not found: /., or whatever account I was trying to authenticate. If things don’t work, slow down, check for typos, check for overrides from other conf files.

Note 2: I will happily welcome verifiable assistance on improving this section. If you just throw URLs at me, they’d better contain something that I didn’t find on any of the 20+ pages that made big promises without delivering, and the directions had better work. For example, using port 3268 to authenticate against the global catalog vs. 389 LDAP or 636 LDAPS does not do anything special for trying to authenticate the entire directory.

Configure Apache Directory Security for LDAP Authentication

From here, the Apache portion is relatively simple. Assuming that you already have a Nagios directory configured, just compare with mine:
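For reference, here is a hedged sketch of the relevant portion of such a directory block. The paths are the Nagios defaults and the provider tags come from the previous sub-section; your file will differ:

```apache
<Directory "/usr/local/nagios/sbin">
    Options ExecCGI
    AllowOverride None
    AuthType Basic
    AuthName "Nagios Access"
    AuthUserFile /usr/local/nagios/etc/htpasswd.users
    AuthBasicProvider file ldap-users ldap-accounts
    Require valid-user
</Directory>
```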

The default Nagios site created by the Nagios installer contains a lot of fluff, which I’ve removed. For instance, I don’t check the Apache version because I know what version it is. There’s only one major change, though: look at the AuthBasicProvider line. Yours, if you’re using the default, just says file. Mine also says ldap-users ldap-accounts. Those are the tags that I applied to the providers in the previous sub-section. By leaving file in there, I can still use the original “nagiosadmin” account, as well as any others that I might have created. If you create additional providers for other OUs, just keep tacking them onto the AuthBasicProvider lines.

On the AuthBasicProvider line, order is important. I placed file first because I want accounts to be verified there first. The majority of my accounts will be found in Active Directory, but the file is only a couple of lines and can be searched in a few milliseconds. If I need to reach out to the directory for an uncached account, that will cause a noticeable delay. For the same reason, order your LDAP locations wisely.


We’re not quite done; Nagios still doesn’t know what to do with these accounts. However, stop right now and go make sure that AD authentication is working.

Restart Apache: sudo service apache2 restart or sudo service httpd restart, depending on your distribution. If Apache doesn't restart successfully, use sudo journalctl -xe to find out why. Fix that, and move on. Once Apache successfully restarts, access your site at https://yournagiosite.yourdomain.yourtld. Log in using an Active Directory account inside a selected OU. You do not need to prefix it with the domain name.

If all is well, you should be greeted with the Nagios home page. Click any of the navigation links on the left. The pages should load, but you should not be able to see anything: no hosts, no services, nothing. That means that Apache has figured out who you are, but Nagios hasn't. You can double-check that at the top left of most any of the pages. For instance, on the Tactical Overview:


Do not move past this point until AD authentication works.

Configure Nagios to Recognize Active Directory Accounts

Truthfully, Nagios doesn’t know an AD account from a file account. All it knows is that Apache is delivering an account to it. It will then look through its registered contacts for a match. So, in /usr/local/nagios/etc/objects/contacts.cfg, I have:
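A hedged example of what such a contact definition might look like (the account name matches the one used elsewhere in this article; the alias and email address are invented):

```cfg
define contact {
    contact_name    esadmin             ; must match the AD sAMAccountName
    use             generic-contact     ; inherit the stock template
    alias           Eric S. Admin
    email           esadmin@siron.int
}
```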

From there, add that account to groups, services, hosts, etc. as necessary. So, if your CFO wants a dashboard to show him that the accounting server is working, add his AD account accordingly. An account will only be shown its assigned items.

Remember, after any change to Nagios files, you must:
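That is, verify the configuration and then restart the service. The paths below assume the standard /usr/local/nagios layout and a service named nagios; adjust for your installation:

```shell
sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
sudo service nagios restart
```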

Note on cgi access: by default, only the “nagiosadmin” account can access the CGIs (most of the headings underneath the System menu item at the bottom left). That access is controlled by several “authorized_” lines in /usr/local/nagios/etc/cgi.cfg. As you become accustomed to using multiple accounts in Nagios, you’ll begin plopping them into groups for easier file maintenance. In this particular .cfg file, groups don’t mean anything. I found some Nagios documentation that insists that you can use groups in cgi.cfg, but I couldn’t make that work. You’ll have to enter each account name that you want to access any CGI.

Step 6: Configure check_nrpe and NSClient++ for SSL

After all that you’ve been through in this article, I hope that this serves as comfort: the rest is easy.

We’re going to take three major actions. First, we’ll create a “client” certificate for the check_nrpe utility, and then we’ll create a “server” certificate to be used with all of your NSClient++ systems. After that, we deploy the certificate to monitored systems and configure NSClient++ to use it.

Configure a Certificate for check_nrpe

This part is almost identical to the creation of the SSL certificate for the Apache site. You need to set up a config file to feed into openssl (or modify the default, but I don’t recommend that).

You have two choices:

  1. Edit your existing openssl.cnf file with the differences for the new certificate(s).
  2. Copy your existing openssl.cnf file, make the changes to the copy, and override the openssl command to use the copied file.

I suppose a third option would, again, be to hack at the openssl command line to manually insert what you want. I’m going with option 2 this time, as well.

Of course, it’s not a requirement to use nano. Use the editor you prefer.

The following shows sample replacements and additions to the file. They are not sufficient on their own!

The req_extensions line already exists in the default sample config, but has a hash mark in front of it to comment it out. Remove that (or type a new line, whatever suits you). The [ v3_req ] section probably exists; whatever it’s got, leave it. Just add the subjectAltName line. The [ alt_names ] segment won’t exist (probably). Add it, along with the DNS and IP entries that you want. I don’t know precisely how the check_nrpe tool presents itself to remote systems, or even if the monitored systems care beyond receiving a valid certificate, so set the DNS and IP entries to cover all of your bases.
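The pieces described above look roughly like this. The DNS names and IP address are examples only; use whatever covers your check_nrpe host:

```ini
req_extensions = v3_req

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = nagios.siron.int
DNS.2 = nagios
IP.1 = 192.168.25.240
```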

Now, create the certificate request:
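Something like the following, assuming you named your copied config file openssl-nrpe.cnf (both file names are hypothetical):

```shell
openssl req -new -newkey rsa:2048 -nodes \
  -keyout check_nrpe.key -out check_nrpe.csr \
  -config openssl-nrpe.cnf
```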

[Enter] through the questions (unless you need to change a default). Do not use a password when asked.

Lock the private key down:

Submit the CSR to your CA. I’m going to use the CA that I created on my Nagios system:
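With the /var/ca layout from earlier in this series, the submission would look something like this (the CA config file name is an assumption):

```shell
sudo openssl ca -config /var/ca/openssl-ca.cnf -in check_nrpe.csr
```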

First, openssl will ask you to supply the password for the CA’s private key. Next, you’ll be shown a preview of the certificate and asked twice to confirm its creation.

The generated file will appear in your CA’s configured output directory (the one used in these directions is /var/ca/newcerts). It will use the next serial from your /var/ca/serial file as the name. So, if you’re following straight through, that will be /var/ca/newcerts/02.pem. You can ls /var/ca/newcerts to see them all. The highest number is the one that was just generated. Verify that it’s yours:

Transfer the certificate to the location that you’ll have check_nrpe load it from, and, for convenience, rename it:

Now “all” you need to do is go around and change every instance of check_nrpe in your object files to use the new certificate, never forget to use it on every new check_nrpe command, and change all of those instances if you ever change something about the certificate. Who wants to do that? Oh, right, no one wants to do that.

So, let’s do this instead:

In the nano screen, paste this:
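What follows is a hedged sketch of the wrapper. The certificate paths are assumptions based on the layout used in this article, and the -A/-C/-K/-L switches are the SSL options added in check_nrpe version 3; verify them against your build's --help output:

```shell
#!/bin/sh
# check_nrpe_secure: wraps check_nrpe with our certificate set.
# Paths are assumptions; adjust to where you placed your files.
/usr/local/nagios/libexec/check_nrpe -A /var/ca/cacert.pem \
  -C /var/certs/check_nrpe.pem -K /var/certs/check_nrpe.key \
  -L 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' $*
```

Save, then chmod +x the file. The $* at the end passes through whatever arguments the caller supplies (such as -H), which is why it must be present.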


Note: In the original run of this document, I did not include the -L 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' portion, and both encrypted and unencrypted NSClient++ installations worked just fine. Two weeks later, all checks began failing over cipher mismatches. There was no indication as to why they worked up until that moment. So, you may need to test your unencrypted systems WITHOUT the -L bit, then add it back in later. The $* after the -L is required.

Assuming that you used a valid IP for a system running NSClient++ on the default port, you should receive something similar to:

It doesn't matter if the NSClient++ system is still configured to use insecure mode (mind the note above). If it says that you used incorrect parameters, that means that you didn't compile check_nrpe with SSL support. Go back and do that (I gave instructions in all of my Nagios articles; if you update to the most recent version of check_nrpe, then SSL is implicitly configured). For any other problems, you may need additional parameters, e.g., to override the port. Use whatever is working in your existing commands.

Once you have this working, you can go into all of your command files and swap out all instances of “check_nrpe” for “check_nrpe_secure” with a simple Find/Replace operation. That’s all that you’ll need to remember to do going forward. If you need to change anything about the certificate(s), then you only need to modify check_nrpe_secure itself.
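If you prefer to do the swap from the shell, sed can handle it. The sample file below is a stand-in for your real object files, and -i.bak keeps a backup copy of each file it touches:

```shell
# Demo: a stand-in object file with one check_nrpe command
printf 'check_command check_nrpe!check_load\n' > sample-commands.cfg
# \b word boundaries keep already-converted lines from being mangled (GNU sed)
sed -i.bak 's/\bcheck_nrpe\b/check_nrpe_secure/g' sample-commands.cfg
cat sample-commands.cfg
```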

Configure a Certificate for NSClient++

You've seen this game a couple of times now. The primary difference between this and previous certificate generation is that I will not be using any subject alternative names for the NSClient++ certificate. Working from the assumption that you don't want to sit around generating CSRs and delivering certificates to every single monitored system, I'm going to have you create one certificate to use on all of your monitored hosts. I suppose that's a slight security risk, because anyone with the certificate set could pretend to be one of your monitored hosts. I think if they can spoof your system that well, being able to fool Nagios will be the least of your concerns. This exercise satisfies our primary goal of encrypting the Nagios<->monitored system traffic with a nice touch of allowing the monitored systems to authenticate the Nagios system. Anything more is mostly extra effort.

Generate the CSR first. You can do this from pretty much anywhere, but I’ll continue to use the pattern that we’ve established in this article:

Of course, it’s not a requirement to use nano. Use the editor you prefer.

The following shows sample replacements to the file. They are not sufficient on their own!

All of the changes that we made for the other certificates are unnecessary here. In fact, you could probably feed in the default openssl.cnf and just change the commonName when prompted. To each their own.

Create the certificate request:
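As before, something like this, assuming a copied config named openssl-nsclient.cnf (both file names are hypothetical):

```shell
openssl req -new -newkey rsa:2048 -nodes \
  -keyout nsclient.key -out nsclient.csr \
  -config openssl-nsclient.cnf
```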

[Enter] through the questions (unless you need to change a default). Do not use a password when asked.

Locking down this particular private key file won’t do much because you’re going to be copying it to all of your monitored hosts anyway.

Submit the CSR to your CA. Again, I’m using my Nagios system’s CA:

First, openssl will ask you to supply the password for the CA’s private key. Next, you’ll be shown a preview of the certificate and asked twice to confirm its creation.

The generated file will appear in your CA’s configured output directory (the one used in these directions is /var/ca/newcerts). It will use the next serial from your /var/ca/serial file as the name. So, if you’re following straight through, that will be /var/ca/newcerts/03.pem. You can ls /var/ca/newcerts to see them all. The highest number is the one that was just generated. Verify that it’s yours:

Transfer the certificate and private key to some distribution point. While I’m not overly concerned about securing the key this time, do perform some basic due diligence. No need to just open the door and invite attackers in. I do generally copy it over to /var/certs just for consistency (avoids the “now, where did I put that” problem).

Configure NSClient++ for Certificate Security

Start by making sure that all checks that target this host use the new check_nrpe_secure. If they use the original check_nrpe without certificate information, all checks to the host will fail once you complete these steps.

You need three things to proceed: the certificate that you created for NSClient, the private key that you created for NSClient, and the certificate for the CA that signed the NSClient certificate. You do not need the CA’s private key. Just leave that where it is.

Copy the three files to C:\Program Files\NSClient++\security.

Edit your nsclient.ini file. Here’s mine:
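A hedged sketch of the relevant section. The setting names come from the NSClient++ documentation, and ${certificate-path} normally resolves to the security folder used above, but verify both against your installed version:

```ini
[/settings/NRPE/server]
insecure = false
verify mode = peer-cert
certificate = ${certificate-path}/nsclient.pem
certificate key = ${certificate-path}/nsclient.key
ca = ${certificate-path}/ca.pem
allowed ciphers = ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH
```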

Restart the service (Restart-Service nscp or net stop nscp && net start nscp).

From here, you just need to come up with a deployment technique that transfers the certificates, key, and ini file to all other systems, then restarts the nscp service.


Hyper-V 2016 Shielded Virtual Machines on Stand-Alone Hosts



One of the hot new technologies in Hyper-V 2016 is Shielded Virtual Machines. This feature plugs a few long-standing security holes in the hypervisor space that were exacerbated by the rise of hosting providers. It’s ridiculously easy to start using Shielded Virtual Machines, but its simplicity can mask some very serious consequences if the environment and guests are not properly managed. To make matters worse, the current documentation on this feature is sparse and reads more like marketing brochures than technical material.

The material that does exist implies that Shielded Virtual Machines require a complicated Host Guardian Service configuration and a cluster or two. This is not true. You can use Shielded Virtual Machines on standalone hosts without ever touching Host Guardian Service (HGS) setup. Using a properly configured HGS is better, but it is not required. “Standalone” can apply to non-domain-joined hosts and to domain-joined hosts that are not members of a cluster. I did verify that I could enable VM shielding on a non-domain-joined host, but I did not, and will not, investigate it any further. This article will discuss using Shielded Virtual Machines on a domain-joined Hyper-V host that is not a member of a cluster and is not governed by a Host Guardian Service.

What are Shielded Virtual Machines?

A Shielded Virtual Machine is protected against tampering. There are several facets to this protection.

Unauthorized Hosts Cannot Start Shielded Virtual Machines

Only systems specifically authorized to operate a Shielded Virtual Machine will be able to start it. Others will receive an error message that isn’t perfectly obvious, but should be decipherable with a bit of thought. The primary error is “The key protector could not be unwrapped. Details are included in the HostGuardianService-Client event log.” The details of the error will be different depending on your overall configuration.

No Starting Shielded VMs on Unauthorized Hosts


This feature is most useful when combined with the next.

Unauthorized Hosts Cannot Mount Virtual Hard Disks from Shielded Virtual Machines

The virtual hard disks for a Shielded Virtual Machine cannot be opened or mounted on unauthorized systems. Take care, as the error message on an unauthorized host is not nearly as clear as the one you receive when trying to start a Shielded Virtual Machine there, and it could be mistaken for a corrupted VHD: “Couldn’t Mount File. The disk image isn’t initialized, or contains partitions that aren’t recognizable, or contains volumes that haven’t been assigned drive letters. Please use the Disk Management snap-in to make sure that the disk, partitions, and volumes are in a usable state.”

Error When Opening a Shielded VHD on an Unauthorized Host



For small businesses, this is the primary benefit of using Shielded Virtual Machines. If your VM's files are ever stolen, the thieves will need more than the files themselves to get at any of the data.

VMConnect.exe Cannot be Used on a Shielded Virtual Machine

Even administrators can’t use VMConnect.exe to connect to a Shielded Virtual Machine. In case you didn’t already know, “VMConnect.exe” is a separate executable that Hyper-V Manager and Failover Cluster Manager both call upon when you instruct them to connect to the console of a virtual machine. This connection refusal provides a small level of protection against snooping by a service provider’s employees, but does more against other tenants that might inadvertently have been granted a few too many privileges on the host. Attempting to connect results in a message that “You cannot connect to a shielded virtual machine using a Virtual Machine Connection. Use a Remote Desktop Connection instead.”

No VMConnect for Shielded VMs


The upshot of the VMConnect restriction is that if you create VMs from scratch and immediately set them to be shielded, you’d better have some method in mind of installing an OS without using the console at all (as in, completely unattended WDS).

What are the Requirements for Shielded Virtual Machines?

The requirements for using Shielded Virtual Machines are:

  • Generation 2 virtual machines

That’s it. You’ll read a lot about the need for clusters and services and conditional branches where a physical Trusted Platform Module (TPM) can be used or when administrator sign-off will do and all other sorts of things, but all of those are in regards to Guarded Fabric and involve the Host Guardian Service. Again, HGS is a very good thing to have, and would certainly give you a more resilient and easily managed Shielded Virtual Machine environment, but none of that is required. The only thing that you must absolutely have is a Generation 2 virtual machine. Generation 1 VMs cannot be shielded.

Generation 1 virtual machines can be encrypted by Hyper-V. That’s a topic for another article.

How Does the Shielded Virtual Machine Mechanism Work on a Standalone System?

Do not skip this section just because it might have some dry technical details! Ignorance on this topic could easily leave you with virtual machines whose data you cannot access! Imagine a situation in which you have a single, non-clustered host with a guest on a Scale Out File Server cluster and you enable the Shielded VM feature. Since all of the virtual machine’s data is on an automatically backed-up storage location, you don’t bother doing anything special for backup. One day, your Hyper-V host spontaneously combusts. You buy a new host and import the VM directly from the SOFS cluster, only to learn that you can’t turn it on. What can you do!? You could try crying or drinking or cursing or sacrificing a rubber chicken or anything else that makes you feel better, but nothing that you do short of cracking the virtual machine’s encryption will get any of that data back. If you don’t want that to be you, pay attention to this section.

Shielded Virtual Machines are Locked with Digital Keys

Access to and control of a Shielded Virtual Machine is governed by asymmetric public/private encryption keys. In a single host environment without a configured Host Guardian Service, these keys are created automatically immediately after you set the first virtual machine to be shielded. You can see these certificates in two ways.

Viewing Shielded Virtual Machine Certificates Using CERTUTIL.EXE

The CERTUTIL.EXE program is available on any system, including those without a GUI. At an elevated command prompt, type:
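The store name is literal and must be quoted; it is the same store referenced later in this article:

```
certutil -viewstore "Shielded VM Local Certificates"
```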

You’ll be presented with a dialog that shows the Shielded VM Encryption Certificate. Click More Choices and it will expand to show that certificate and the Shielded VM Signing Certificate:

VM Shielding Certificates


You can click either of the certificates in the bottom half of the dialog and it will update the information in the top half of the dialog. Click the Click here to view certificate properties link, and you’ll be rewarded with the Certificate Details dialog:

Certificate Details


This dialog should look fairly familiar if you’ve ever looked at a certificate in Internet Explorer or in the Certificates MMC snap-in. We’ll turn to that snap-in next.

Viewing Shielded Virtual Machine Certificates Using the Certificates MMC Snap-In

The Microsoft Management Console (MMC.EXE) has a dependency on the Explorer rendering engine, so it is only available on GUI systems. You can use it to connect to systems without a GUI, though, as long as they are in the same or a trusting domain.

  1. At an elevated prompt, run MMC.EXE. You can also enter MMC.EXE into Cortana’s search, then right-click it and choose Run as administrator.

    Starting MMC


  2. In Microsoft Management Console, click File -> Add/Remove Snap-in…

    Accessing the Snap-in Menu


  3. In the Add or Remove Snap-ins dialog, highlight Certificates and then click the Add > button.

    Choose Certificates Snap-in


  4. A prompt will appear for the target of the Certificates snap-in. We want to target the Computer account:

    Computer Account Choice


  5. After that, you’ll need to indicate which computer to control. In my example, I want the local computer so I’ll leave that selection. You can connect to any computer in the same or a trusting domain, provided that the user account that you started MMC.EXE with has administrative privileges on that computer:

    Choose Local or Remote Computer


  6. After you OK out of all of the above dialogs, MMC.EXE will populate with the certificate tree of the targeted computer account. Expand Shielded VM Local Certificates, then click the Certificates node. If you have shielded a virtual machine, you’ll see two certificates:

    VM Shielding Certificates in MMC


You can open these certificates to view them.

The Significance of Certificates and Shielded Virtual Machines

Not to put too fine a point on it, but these two certificates are absolutely vital. If they are lost, any virtual machine that they were used to shield is also permanently lost… unless you have the ability to crack 2048-bit RSA encryption. There is no backdoor. There is no plan “B”.

If you are backing up your host’s operating system using traditional backup applications, a standard System State backup will include the certificate store.

If you are not backing up the management operating system, then you need a copy of these keys. I’ll give you directions, but the one thing that you must absolutely not miss is the bit about exporting the private keys. The shielding certificates are completely useless without their private keys!

Exporting and Importing VM Shielding Keys with CERTUTIL.EXE

Using CERTUTIL.EXE is the fastest and safest way to export certificates.

  1. Open an elevated command prompt.
  2. Type the following: certutil -store "Shielded VM Local Certificates"
  3. In the output, locate the Serial Number for each of the certificates.

    VM Shielded Certificates with Serial Numbers


  4. Use the mouse to highlight the first serial number, which should be for the encryption certificate, then press [Enter] to copy it to the clipboard.
  5. To export the VM shielding encryption certificate, type the following, replacing my information with yours. Use right-click to paste the serial number when you come to that point: certutil -exportPFX -p "2Easy2Guess!" "Shielded VM Local Certificates" 169d0cacaea2a396428b62f77545682e c:\temp\SVHV02-VMEncryption.pfx
  6. Use the mouse to highlight the second serial number, which should be for the signing certificate, then press [Enter] to copy it to the clipboard.
  7. To export the VM shielding signing certificate, type the following, replacing my information with yours. Use right-click to paste the serial number when you come to that point: certutil -exportPFX -p "2Easy2Guess!" "Shielded VM Local Certificates" 5d0cb1f0fa8b34b24e1195c41d997c19 c:\temp\SVHV02-VMSigning.pfx
  8. Ensure that the PFX files that you created are moved to a SAFE place and that the password is SECURED!

If you ever need to recover the certificates, use this template:
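A hedged template based on the export commands above (substitute your own file names; certutil will prompt for the password because none is supplied on the command line):

```
certutil -importPFX "Shielded VM Local Certificates" c:\temp\SVHV02-VMEncryption.pfx
certutil -importPFX "Shielded VM Local Certificates" c:\temp\SVHV02-VMSigning.pfx
```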

You’ll be prompted for the password on each one.

Exporting and Importing VM Shielding Keys with MMC

The MMC snap-in all but encourages you to do some very silly things, so I would recommend that you use the certutil instructions above instead. If you must use the UI:

  1. Open MMC and the Certificates snap-in using instructions from the “Viewing Shielded Virtual Machine Certificates Using the Certificates MMC Snap-In” section above.
  2. Highlight both certificates. Right-click them, hover over All Tasks, and click Export…
Export Start


  3. Click Next on the informational screen.
  4. Change the dot to Yes, export the private key. The certificates might as well not exist at all without their private keys.

    Export Private Key

  5. Leave the defaults on the Export File Format page. If you know what you’re doing, you can select Enable certificate privacy. Do not select the option to Delete the private key!! The host will no longer be able to open its own shielded VMs if you do that!

    Certificate File Format

  6. On the Security tab, you must choose to lock the exported certificate to a security principal or a password. It’s tempting to lock it down by security principal, and it might even work for you. I almost always use passwords because they’ll survive where security principals won’t. If you do choose this option, only use domain principals, use groups not names, use more than one, and make double-, triple-, and quadruple-certain that your Active Directory backups are in good shape. If you’re one of those types that likes to leave your Hyper-V hosts outside of the domain for whatever reason, the Groups or user names option is a good way to lose your shielded VMs forever.

    Exported Certificate Security

  7. On the File to Export page, use a file name that indicates that you’re backing up both certificates. That’s one nice thing about the GUI.

    Choose a File

  8. The final screen is just a summary. Click Finish to complete the export.
  9. Ensure that the PFX files that you created are moved to a SAFE place and that the password is SECURED (or if you used one or more security principals, hope that nothing ever happens to them)!

If you ever need to recover these certificates, I would again recommend using certutil.exe instead. The GUI still makes some dangerous suggestions and it takes much longer. If you insist on the GUI:

  1. Open MMC and the Certificates snap-in using instructions from the “Viewing Shielded Virtual Machine Certificates Using the Certificates MMC Snap-In” section above.
  2. Right-click in the center pane and hover over All Tasks, and click Import…
  3. Click Next on the introductory screen.
  4. On the File to Import screen, navigate to where your certificate backups are. Note that you’ll need to change the filter from *.cer, *.crt to *.pfx, *.p12 to see them.
  5. The Password part of the Private key protection screen is fairly easy to figure out (and won’t be necessary at all if you protected by security principal). Do make sure to check the Mark this key as exportable box. If you don’t, then you won’t be able to export the private key. It’s not strictly necessary, since you do have the file that you’re importing from. At least, you have it right now. Something could happen to it, and then you’d have no way to generate a new one.

    Import as Exportable


  6. Make certain that the certificate store is Shielded VM Local Certificates.

    Certificate Store Choice


  7. The final screen is just a summary. Click Finish to import the certificates.

Do take good care of these certificates. They are literally the keys to your Shielded Virtual Machines.

Why Does the Certificate Say “Untrusted Guardian”?

The consequence of not using a full Host Guardian Service is that there’s no independent control over these certificates. With HGS, there’s independent “attestation” that a host is allowed to run a particular virtual machine because the signature on the VM and the signing certificate will match up and, most importantly, the signing certificate was issued by someone else. In this case, the certificate is “self-signed”. You’ll see the term “self-signed” used often, and usually incorrectly. Most of the time, I see it used to refer to certificates that were signed by someone’s internal certificate authority, like their private domain’s Enterprise CA. That is not self-signed! A true self-signed certificate is signed and issued by a host that is not a valid certificate authority and is only used by that host. The most literal meaning of a self-signed certificate is: “I certify that this content was signed/encrypted by me because I say so.” There is no independent verification of any kind for a true self-signed certificate.

Can I Use Shielded VMs from an “Untrusted Guardian” on Another Hyper-V Host?

Yes. These virtual machines are not permanently matched to their source host. That’s a good thing, because otherwise you’d never be able to restore them after a host failure. All that you need to do is import the keys that were used to sign and encrypt those virtual machines on the new target host into its “Shielded VM Local Certificates” store, and it will then be able to immediately open those VMs. There will not be any conflict with any certificates that are already there. This should work for Live Migrations as well, although I only tested export/import.

If you like, you can unshield the VMs and then reshield them. That will shield the VMs under the keyset of the new target host.

What Happens When the Certificate Expires?

I didn’t test, so I don’t know. You could try it out by forcing your clock 10 years into the future.

Realistically, nothing bad will happen when the certificate expires. An expired certificate still matches perfectly to whatever it signed and/or encrypted, so I see no reason why the VMs wouldn’t still work. You can’t renew these certificates, though, so the host will no longer be able to use them to sign or encrypt new VMs. If this is still something that you’re concerned about 9 years and 11 months after shielding your first VM, be happy that your host made it that long and then unshield all of the VMs, delete the certificates, and reshield the VMs. New 10 year certificates will be automatically created and give you another decade to worry about the problem.

How Do I Know if a VM is Shielded?

The “easiest” way is the Enable Shielding checkbox on the virtual machine’s Security tab in the GUI. There’s also PowerShell:
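A quick check using the Hyper-V module (Get-VMSecurity is available on Server 2016 and later; “svtest” is a placeholder VM name):

```powershell
# Shielding status for one virtual machine
(Get-VMSecurity -VMName 'svtest').Shielded

# Or survey every virtual machine on the host
Get-VM | Select-Object Name, @{Name='Shielded'; Expression={(Get-VMSecurity -VM $_).Shielded}}
```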

Virtual hard drives are a bit tougher. Get-VHD, even on Server 2016, does not show anything about encryption. You can test it in a hex editor or something else that can poke at the actual bits, of course, but other than that I don’t know of a way to tell.

Are Shielded VMs a Good Idea on an Untrusted Host?

I’m not sure if there is a universal answer to this question. Without the Host Guardian Service being fully configured, there is a limit to the usefulness of Shielded VMs. I would say that if you have the ability to configure HGS, do that.

That said, shielding a VM on an untrusted host still protects its data if the files for the VM are ever copied to a system outside of your control. Just remember that anyone with administrative access to the host has access to the certificate. What you can do, if you’ve got an extremely solid protection plan, is export, delete, and re-import the certificate without marking the private key as exportable. That’s risky, because you’re then counting on never forgetting or losing that exported certificate. However, even a local admin won’t be able to steal virtual machines without having access to the exported key as well.

Enhanced Session Mode in Client Hyper-V



Hyper-V’s Enhanced Session Mode technology is available for both the standard Hyper-V editions and Client Hyper-V. I’d argue that it’s far more useful for Client Hyper-V. The only common use case involving users interacting directly with the console of a server Hyper-V guest is VDI, and that does not involve Enhanced Session Mode. Enabling Enhanced Session Mode for a server’s guest would be somewhat exceptional. Client Hyper-V reverses that convention. It is normative to directly interact with locally-hosted virtual machines. Therefore, Enhanced Session Mode is preferred for Client Hyper-V.

What is Enhanced Session Mode?

From a technological standpoint, Enhanced Session Mode ties the VMConnect.exe application into the Hyper-V host’s VMBus component. If you’re not familiar with VMConnect.exe, this is the application that you use any time you connect directly to the console of a [Client] Hyper-V guest (ex: clicking Connect on the context menu of a virtual machine in Hyper-V Manager). This allows some communications between the operating environment that you’re connecting from and the guest operating environment that you’re connecting to. Furthermore, VMConnect.exe employs several technologies from the Remote Desktop Client.

Notable capabilities enabled by Enhanced Session Mode:

  • Making local resources, such as printers, disk drives, and USB devices, available to the guest through the console connection.
  • Copy and paste of files to and from the connecting system and the target guest system.
  • Additional screen resolutions and full screen mode.
  • Smart card logins.

To state it simply, Enhanced Session Mode transforms the VMConnect.exe experience to match the Remote Desktop Client experience.

What are the Use Cases for Enhanced Session Mode in Client Hyper-V?

In general, I think it’s fair to say that if you’ve got a Client Hyper-V virtual machine that you use on a regular basis, you want Enhanced Session Mode. If you’re just testing something out quickly and only need to pop in and out of a particular guest operating system, it’s not that critical. If you’re using something for long periods of time, the small window and walled garden of a virtual machine without an enhanced session will eventually feel restrictive.

I could take the easy way out in this section by enumerating the common reasons for Client Hyper-V, such as development and testing environments, then tack on some blurb about how Enhanced Session Mode would make all of that better. You can probably figure all of that out on your own. Instead, I want to paint a very stark picture that contrasts a standard session against an enhanced session. We’ll do that through two use case scenarios.

Client Hyper-V, Enhanced Session Mode, and Security: The Sandbox Use Case

A user forwards an e-mail to you that they receive from someone that they trust, but it has an attached file. The user also says that the phrasing of the message just doesn’t feel right. So, you pop the attachment into your virtual machine to open it in a contained environment. Pop quiz: should you use Enhanced Session Mode with that virtual machine or not?

The answer might surprise you. If you said, “Do not use Enhanced Session Mode”, then you get high marks for thinking securely, but I must deduct points for not having thoroughly done your homework on Enhanced Session Mode.

Let’s start with why you would not want to use Enhanced Session Mode for poking at potential malware. Without Enhanced Session Mode, that virtual machine is essentially a walled garden. Hyper-V isolates its operating environment from the host’s operating environment naturally. Files and data cannot be transferred out of the guest to the host by any means other than a network connection, which is very easily locked down by simply not connecting the virtual machine to a network. You could place the suspect file into a VHDX, attach the VHDX to the virtual machine, and open it up without ever placing any other machine, including the host, at risk. Well done!

But, you worked harder than you had to.

These are the exact steps to accomplish the same feat using an Enhanced Session Mode guest in Client Hyper-V:

  1. In Hyper-V Manager, take a checkpoint of the virtual machine. Optionally, give that checkpoint a descriptive name.
    Checkpointing in Client Hyper-V



  2. Disconnect the virtual machine from the network, if it’s connected.
    Disconnected VM



  3. Copy and paste the file from your desktop into the virtual machine’s desktop. This will only work if Enhanced Session Mode is enabled.
    File Pasted from Host to Guest



  4. In the VMConnect console pane, click the Basic Session button.
    Basic Session Button



    Alternatively, click View and Enhanced Session to uncheck it.

  5. Your VMConnect session will end and immediately reconnect. Some guest operating systems will lock your session when this happens, causing you to need to log back in. Do so.
  6. Test the file with whatever tools you have available in the guest operating system.
  7. Once your testing is complete, revert the checkpoint. This irrevocably discards everything that was done in the virtual machine since the last checkpoint was taken.
    Revert Checkpoint in Client Hyper-V



  8. Delete the checkpoint.
    Delete Checkpoint in Client Hyper-V



  9. Clean up by deleting any remnants of the file in your desktop operating system and performing any necessary reporting on your findings.

The next time you connect to the virtual machine, the host’s normal Enhanced Session Mode setting will be in effect. That means that if it was enabled when you started this procedure, it will be enabled the next time that you connect.

The purpose of this exercise is to show you that Enhanced Session Mode can be toggled off and on easily. You can use the same virtual machine with an open copy/paste and device configuration, then switch to the isolated walled garden at a moment’s notice.
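For the keyboard-inclined, the checkpoint and network steps of that procedure can be sketched with standard Hyper-V module cmdlets (“sandbox” and the checkpoint name are placeholders):

```powershell
# Steps 1 and 2: checkpoint the VM, then disconnect it from the network
Checkpoint-VM -Name 'sandbox' -SnapshotName 'Pre-malware-test'
Get-VMNetworkAdapter -VMName 'sandbox' | Disconnect-VMNetworkAdapter

# Steps 7 and 8: revert and remove the checkpoint once testing is complete
Restore-VMSnapshot -VMName 'sandbox' -Name 'Pre-malware-test' -Confirm:$false
Remove-VMSnapshot -VMName 'sandbox' -Name 'Pre-malware-test'
```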

Client Hyper-V, Enhanced Session Mode, and Security: The Administrative Privileges Use Case

A common security best practice for administrative staff is to maintain separate administrative logins and non-administrative logins. You should conduct your menial, yet dangerous activities in a non-privileged session. Those activities include e-mail and web browsing. Administrative duties, such as Active Directory tasks, can be carried out either by Run As or inside an administrative session. There are problems with the traditional techniques.

The Problems with Common Approaches to Session Isolation

The oldest technique involving this separation is secondary logon, or Run As. The Run As approach has multiple problems. For one thing, it’s not all that well-implemented; I have problems with several applications and secondary logins. Somewhat embarrassingly (for them), most of those applications are made by Microsoft. Another problem is that Run As crosses an administrative login with an active non-administrative login. That opens up a path, however narrow, for malware to compromise the administrative session by way of the non-administrative session.

To address this, some administrators have begun using virtual machines. That allows them to have two well-isolated sessions running side-by-side. That approach has merit, but, as we’ve discovered over time, most of us do it wrong. We log in to our root session (the one that the hardware boots to) as a non-privileged user, then spin up a virtual machine running an administrative session. That makes sense on the surface, because the root partition usually provides a superior user experience for the applications that we use most, such as e-mail clients and web browsers. From a security perspective, that’s bad.

What’s wrong with it? Well, a compromised host equals a compromised guest, that’s what’s wrong with it. Consider this:

  1. I’m running my desktop Win10 as a low-privilege user and a Win10 guest as a high-privilege user.
  2. I open an e-mail with a malicious attachment, which infects my machine.
  3. The attacker uses a system exploit and gains root access to my desktop.

What’s part of my desktop? That virtual machine with the privileged session, that’s what. The attacker now owns both of my sessions. What we’ve done is grant the low security session a control link over the high security session, which is an immediate security no-no.

That’s not the only problem. With Client Hyper-V, you need to run Hyper-V Manager and/or VMConnect with administrative privileges, or you can’t even connect to your virtual machines. If you sidestep that with another desktop hypervisor, you’re still going to be at least occasionally doing things via Run As. There’s just no way to avoid being an administrator in the root console. What to do?

Solving Session Isolation Problems with Client Hyper-V

Since you can’t avoid being an administrator in the root session, don’t try. Log on to the root hardware as administrator. What!?! No, I’m not crazy. However, this practice requires discipline:

  • Never surf the web in an administrative session
  • Never use e-mail in an administrative session
  • Never run untrusted or unknown applications in an administrative session
  • etc.

With proper discipline, your management environment should never be at meaningful risk of compromise because you never expose it to such risks.

I have no doubt that you’re thinking, “easier said than done”. I agree. Let Client Hyper-V help.

  1. Log in to your root operating system as an administrative account and set it up with all of your fancy administrative tools. Enable Hyper-V, etc.
  2. Create a virtual machine.
  3. In the virtual machine, install all of your day-to-day “unsafe” applications that you use, like Outlook and Word and your music app. Leave Enhanced Session Mode on.
  4. Log on to the virtual machine as your non-privileged account.

In this configuration, the high security environment has control over the low security environment, as it should be. Do you need to download a file for use in the administrative session? Fine. Download it in the low security virtual machine, verify it there, then copy/paste it into the high security session. If your root operating system is compromised, it has the same net effect that it would if you were doing things the other way around. If your non-privileged guest is compromised, then trash it and start over. You lose nothing and stand to gain much security.

The remaining problem in this use case is, of course, usability. That’s where the rest of Enhanced Session Mode comes in to play.

Enabling Enhanced Session Mode in Client Hyper-V

Enhanced Session Mode is on by default in Client Hyper-V, so you should not need to do anything. To verify it, right-click on the local system in Hyper-V Manager and click Hyper-V Settings. Verify that both of the indicated locations in the screenshot below are enabled.

Enhanced Session Mode Configuration



If these are set, any time Client Hyper-V is able to detect that the guest operating system supports Enhanced Session Mode, it will be active.
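You can verify (or set) the same host policy in PowerShell with the standard host cmdlets:

```powershell
# Check the current host setting
Get-VMHost | Select-Object EnableEnhancedSessionMode

# Enable it, if necessary
Set-VMHost -EnableEnhancedSessionMode $true
```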

Video Enhancements in Enhanced Session Mode

If you’re balking at doing your daily operations like web browsing in a Client Hyper-V virtual machine because you do that sort of thing much more often than you work in your administrative session, this is one of the mitigating points. Make your non-privileged guest full screen. When you connect to a guest in VMConnect and it detects that Enhanced Session Mode is available, you’ll get this (unless it has a RemoteFX adapter):

Enhanced Session Mode Video Options



If you drag that slider all the way to the right, it becomes Full Screen. Then it looks a lot like when you use Remote Desktop Connection. Do your work in that full screen session, and minimize back to your primary desktop if you need to. You can also “Restore” it, and it becomes a very large window that you can move around. Double-click on its title bar to return it to Full Screen mode.

Audio, Printers, and USB Devices in Enhanced Session Mode

For me, audio rounds out what I want in my non-administrative session. I work in an environment with lots of people which means lots of distracting noise. Noise-cancelling headphones and a solid playlist constitute the thin wall that keeps me in the land of sanity. Client Hyper-V doesn’t let me down.

In that connect dialog that you saw in the previous section, click the down arrow next to Show Options, then switch to Local Resources.

Client Hyper-V Local Resources



Most things that you’ll want to connect can be found on this dialog. One thing to note is that sounds in the virtual machine will be transmitted back to the host by default, but your connected microphones do not transmit into the guest by default. To change that, click the Settings button in the Remote audio section and select Record from this computer under Remote Audio Recording.

Client Hyper-V Remote Audio



You can see the Printers and Clipboard checkboxes in the first screenshot. Those are selected by default, so copy/paste and printers should already be functional from the first time that you connect. Printers aren’t perfect, especially networked printers. For those, I just connect to them as though I were using a physical system and all is well.

But, there are other devices that you might want to use. From that first dialog, click the More button to be greeted with a dialog similar to the following:

Additional Devices in Client Hyper-V



I don’t have any USB devices to speak of, so nothing appears here for me. If you have a special USB device, this is where you’ll go to enable it. If you don’t see it, make sure to check Other supported Plug and Play (PnP) devices and Devices that I plug in later. That’s your best bet for getting that device to work.

In my case, I like to share files between host and guest without copy/paste (be mindful of the potential security risks with that and maintain discipline — execute unknown/untrusted files in the low security environment only). I have a storage disk that I map. In the above image, that’s Storage (D:). In the guest, it looks like this:

Client Hyper-V Mapped Drive



It’s much easier to bulk-manage files using a mapped drive than with host/guest clipboard copy/paste.

What’s Not Available in Enhanced Session Mode?

For a work environment, there aren’t many drawbacks to Client Hyper-V aside from the cost to properly license each guest. Beyond a natural human reluctance to change long-standing practices, it’s a winning solution.

Home users, though, even home users like me that still try to practice safe computing, face a problem. You’re not going to be doing any serious video gaming in a Client Hyper-V guest. HTML and Flash games, sure. Java games, maybe. AAA games? Absolutely not. Not even with RemoteFX. Maybe in the next version.

Understanding and Working with VHD(X) Files


A virtual machine is an abstract computer. It mimics a physical computer for the purpose of extending the flexibility of the computer system while still providing as many features of the physical environment as possible. Hyper-V, like most hypervisors, defines this abstract computer using files. The entities that abstract a physical computer’s hard drive(s) are, therefore, mere files. Specifically for Hyper-V, these files are in the VHDX format.



What is a VHD/X file?

VHDX is a semi-open file format that describes a virtual hard disk. The x was added to the current specification’s name so that it would not be confused with the earlier VHD format. Microsoft publishes this specification freely so that others can write their own applications that manipulate VHD/VHDX files, but Microsoft maintains sole responsibility for control of the format.

A VHDX mimics a hard disk. It is not tied to any file system format, such as NTFS, FAT, or ext3. It is also not concerned with partitions. VHDX presents the same characteristics as a physical hard drive, SSD, SAN LUN, or any other block storage. It is up to some other component, such as the guest operating system, to define how the blocks are used. Simply put, a VHDX containing an NTFS volume can be visualized like the following:

VHDX Visualization



Where can VHDX Be Used?

This is a Hyper-V blog, so naturally, I will usually only bring up VHDX in that context. It is not particular to Hyper-V at all. Windows 7 and Windows Server 2008 R2 were able to directly open and manipulate VHD files. Windows 8+ and Windows Server 2012+ can natively open and manipulate VHDX files. For instance, the Disk Management tool in Windows 10 allows you to create and attach VHD and VHDX files:

Windows 10 VHD Menu


When mounted, the VHDX then looks to the operating system like any other disk:

Mounted VHDX



The most important thing to know is that whether or not you can use VHDX depends entirely upon the operating system that controls the file, not anything inside the data region of the file. The contents of the data area are an issue for the operating system that will control the file system. If it’s used by Hyper-V, then that is the guest operating system. If the VHDX is mounted in Windows 10, then Windows 10 will deal with it entirely. It will do so using two different mechanisms.

The VHDX Driver

Modern Windows operating systems (desktop, server, and Hyper-V Server) all include a driver for working with VHDX files. This is what that driver sees when it looks at a VHDX:


VHDX View from Management Operating System


Windows 7 and Windows/Hyper-V Server 2008 R2 and earlier cannot mount a VHDX because they do not contain a VHDX driver. They can only work with the earlier VHD format. If a VHDX driver existed for those operating systems, they would theoretically be able to work with those file types. Logically, this is no different than attaching a disk via a SCSI card that the operating system may or may not recognize.

The File System Driver

The file system driver sees only this part:

VHDX View From Inside



There is no indication that the visible contents are held within a VHDX. They could just as easily be on a SAN LUN or a local SSD. For this reason, it does not matter at all to any guest operating system if you use VHDX or some other format. Linux guests can run perfectly well from their common ext3 and ext4 formats when inside a VHDX.

When a VHDX is mounted in Windows 10 or a server OS, it will require both the VHDX driver and the file system driver in order to be able to manipulate and read the contents of the file. You can mount a VHDX containing ext3 partitions inside Windows 10, but it will be unable to manipulate the contents because it doesn’t know what to do with ext3.

Should I Use VHD or VHDX?

If you will never use the virtual disk file with down-level management operating systems (Windows 7 or Windows/Hyper-V Server 2008 R2 or earlier), then you should always use VHDX (remember that guest operating systems don’t know or care which you use). Unless things have changed, Azure still can’t use a VHDX either, but you can replicate your VHDXs there. If you need to mount them on the Azure side, they will be automatically converted to VHD, although you do need to stay below the 1TB maximum size limit for Azure. If anyone has updated information that the Azure situation has changed, please let me know.

Configuration Information for VHDX Files

There are a few things to understand about VHDX files before creating them. I have explained the process for VHDX creation in another article.

Generation 1 (VHD) vs. Generation 2 (VHDX)

I’ve seen some people use the Generation 1 and Generation 2 labels with VHD and VHDX. They’re correct in the abstract sense, but the capitalization is wrong because these are not formal labels. I encourage you not to use these terms at all because they easily become confused with Generation 1 and Generation 2 Hyper-V virtual machines. A Generation 1 virtual machine can use both VHD and VHDX as long as the management operating system can use both. A Generation 2 virtual machine can only utilize VHDX.

VHDX Block Sizes

The block size for a VHDX has nothing to do with anything that you know about block sizes for disks in any other context. For VHDX, block size is the increment by which a dynamically-expanding disk will grow when it needs to expand to hold more data. It can only be set at creation time, and only when you use PowerShell or native WMI commands to build the VHDX. For a fixed disk, it is always 0:

Block Size for Fixed VHDX



The default block size is 32 megabytes. This means that if a VHDX does not have enough space to satisfy the latest write request and has not yet reached the maximum configured size, the VHDX driver will allocate an additional 32 megabytes for the VHDX and will perform the write. If that is insufficient, it will continue allocating space in 32 megabyte blocks until the write is fully successful. While this article is not dedicated to dynamically expanding VHDXs, I want to point out that there is persistent FUD that these expansion events kill performance.

The people making that claim have absolutely no idea what they’re talking about. A heavily-written VHDX will either reach its maximum size very quickly and stop expanding or it will re-use blocks that it has already allocated. A VHDX simply cannot spend a significant portion of its lifetime in expansion, therefore it is impossible for expansion to cause a significant performance impact.

Expansion events will cause the new data blocks to be physically placed in the next available storage block on the management operating system’s file space. This, of course, will likely lead to the VHDX file becoming fragmented. Disk fragmentation in the modern datacenter is mostly a bogeyman used to terrify new administrators and sell unnecessary software or oversell hardware to uneducated decision makers, so expect to be confronted with a great deal of FUD about it. Just remember that disk access is always scattered when multiple virtual machines share a storage subsystem and that your first, best way to reduce storage performance bottlenecks is with more array spindles/SSDs.

For Linux systems, there is a soft recommendation to use a 1 megabyte block size. Space is allocated differently with the Linux file systems than NTFS/ReFS. With the 32 megabyte default, you will find much more slack space inside a Linux-containing VHDX than you will inside a Windows-containing VHDX. The recommendation to use the smaller block size can be considered soft because it doesn’t really hurt anything to leave things be — your space utilization just won’t be as efficient.
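A non-default block size can only be set at creation time, for example with New-VHD (the path and sizes are illustrative):

```powershell
# Dynamically-expanding VHDX intended for a Linux guest, with a 1 megabyte block size
New-VHD -Path 'C:\VMs\linux-guest.vhdx' -SizeBytes 60GB -Dynamic -BlockSizeBytes 1MB

# Confirm the setting
Get-VHD -Path 'C:\VMs\linux-guest.vhdx' | Select-Object VhdFormat, VhdType, BlockSize
```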

VHDX Sector Sizes

The term sector is somewhat outdated because it refers to a method of mapping physical drives that no one uses anymore. In more current terms, it signifies the minimum amount of space that can be used to store data on a disk. So, if you write a bit of data that is smaller than the sector size, the disk will pad the rest of the space with zeroes. The logical sector size is the smallest amount of data that the operating system will work with. The physical sector size is the smallest amount of data that the physical device will use. They don’t necessarily need to be the same size.

If the logical sector size is smaller than the physical sector size, read and write operations will be larger on the actual hard drive system. The operating system will discard the excess from read operations. On write operations, the physical drive will only make changes to the data indicated by the operating system although it may need to touch more data space than was called for. There are a great many discussions over “performance” and “best” and the like but they are all largely a waste of time. The VHDX driver is an intermediate layer responsible for translating all requests as best it can. The true performance points are all in the physical disk subsystem. To make your I/Os the fastest, use faster hardware. Do not waste time trying to tinker with VHDX sector sizes.

Remember that “portability” is a major component of virtualization. If you do spend a great deal of time ensuring that your VHDX files’ sector sizes are perfectly in-line with your physical subsystem, you may find that your next storage migration places your VHDXs in a sub-optimal configuration. The “best” thing to do is use the defaults for sector sizes and let the VHDX driver take care of it.

VHDX Files Do Not Permanently Belong to a Virtual Machine

When you create a virtual machine using any of the available GUI methods, you’re also given the opportunity to create a VHDX. Behind the scenes, the virtual machine creation and the virtual hard disk creation are two separate steps, with the act of attaching the VHDX to the virtual machine being its own third operation. That final operation will set the permissions on the VHDX file so that the virtual machine can access it, but it does not mean that the VHDX requires the virtual machine in order to operate. A virtual hard disk file:

  • Can be detached from one virtual machine and attached to another
  • Can be re-assigned from one controller and/or controller location to another within the same virtual machine
  • Can be mounted in a management operating system

I have an article explaining the process for VHDX attachment; it shows the various elements for controller selection as well as remove options so that you can perform all of the above. As for the VHDX files themselves, they can be transported via copy, xcopy, robocopy, Windows Explorer, and other file manipulation tools. If a virtual machine somehow loses the ability to open its own VHDX file(s), use this process to detach and reattach the VHDX to the virtual machine; that will fix any security problems.

Mount a VHDX File to Read in Windows (Even Your Desktop!)

As briefly mentioned in the opening portion of this article, you can mount a VHDX file in Windows 10 (as well as Windows 8 and 8.1 and Windows Server 2012+). This allows you to salvage data from a VHDX file if its original operating system had some sort of fatal problem. You can use it for more benign situations, such as injecting installation files into a base image. Just right-click on the file in Windows Explorer and click Mount:

Mount VHDX Using Windows Explorer



All of the partitions will then be assigned drive letters in your management operating system and you can work with them as you would any other partition.

Note: When mounting VHDX files that contain boot partitions, you will sometimes get Access Denied messages because the file system drivers can’t read from those partitions. These messages do not impact the mount action.

When you’re done, just right-click on any of the partitions and click Eject. If there are no partitions visible in Windows Explorer, right-click the disk in Disk Management and click Detach VHD.

Mount a VHDX File Using PowerShell

You can use PowerShell to mount a VHDX only on systems that have Hyper-V installed. Just having the Hyper-V PowerShell module installed is not enough.

Mount a VHDX:
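For example (the path is a placeholder; add -ReadOnly if you only need to read from it):

```powershell
Mount-VHD -Path 'C:\VMs\template.vhdx'
```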

Dismount a VHDX:
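Using the same placeholder path:

```powershell
Dismount-VHD -Path 'C:\VMs\template.vhdx'
```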

If you know the disk number of the mounted VHDX, you can use the -DiskNumber parameter instead of the -Path parameter.

Copy a Physical Disk to a VHDX

There are many ways to get a physical disk into a VHDX. Be aware that just because you can convert a disk to VHDX does not mean that it can successfully boot inside a virtual machine! However, it will always be readable by any management operating system that can mount a VHDX and has the proper file system driver. These are the most common ways to convert a physical drive to VHDX:
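As one example, Sysinternals’ Disk2vhd is a popular GUI tool for this, and the Hyper-V module’s New-VHD can clone a physical disk directly. The sketch below assumes New-VHD’s -SourceDisk parameter, which takes a physical disk number; disk 2 and the path are illustrative:

```powershell
# Identify the physical disk number first
Get-Disk

# Copy physical disk 2 into a new VHDX
New-VHD -Path 'C:\VMs\converted.vhdx' -SourceDisk 2
```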

Backup and Restore VHDX Files

The “best” way to back up and restore a virtual machine is to use a backup application specifically built for that purpose. But, the poor man’s way involves just copying the VHDX files wherever they are needed. For a “restore”, you’ll need to attach the virtual disks to the virtual machine that will own them.

Attach an Existing VHDX to an Existing Virtual Machine or Reset VHDX File Security

Occasionally, you’ll need to (re)connect an existing VHDX file to an existing virtual machine. Sometimes, you have to rebuild the virtual machine because its XML file was damaged. I sometimes do this with VHDX files that I use as templates: I can usually patch them offline, but occasionally need to bring them online.

While not directly related to the preceding, there are times when a virtual machine loses the ability to open a VHDX. This is invariably the result of an administrator or security application removing necessary permissions from the VHDX file, often by erroneously setting inheritance from its containing folder.

Both problems have the same solution: use the attach directions. Hyper-V will automatically set the permissions as necessary when it connects the VHDX to the virtual machine.
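The attach step itself is a one-liner in PowerShell; Hyper-V sets the VHDX permissions as part of the operation (the VM name, bus position, and path are placeholders):

```powershell
Add-VMHardDiskDrive -VMName 'svtest' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -Path 'C:\VMs\svtest.vhdx'
```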

Move a VHDX from One Bus to Another on the Same Virtual Machine

Generation 1 virtual machines have both a virtual IDE bus and a virtual SCSI bus. It’s rare, but sometimes you’ll want to move a VHDX from one to the other. The volume that contains the system bootstrapper must always be on IDE controller 0, location 0 (the first disk position), but any other disk can be moved.

You might need to do this because you accidentally placed a Windows page file on a virtual SCSI disk, which usually doesn’t work (and wouldn’t help with performance if it did, so stop that) or because you discovered the hard way that online resize operations don’t work with IDE-connected VHDX files. You can, of course, also move between different controllers and positions on the same bus type, if you have some need to do that.

Remember that you cannot modify anything on a VHDX connected to the virtual IDE bus while the virtual machine is online. The virtual SCSI bus allows for online modification.

Use the GUI to Move a VHDX to another Bus or Location

You must use Hyper-V Manager for non-clustered virtual machines and Failover Cluster Manager for clustered virtual machines. Or you could use some other application not covered here, like SCVMM.

In the relevant tool, open the settings dialog for the virtual machine and click the drive that you want to change. On the right, choose the new destination bus and location:

VHDX Bus Move

Notice that as you change the controller, the dialog contents update automatically, so the drive already appears to be at the destination location before you even click OK or Apply. The dialog also shows In Use for any location that already has an attached drive.

Use PowerShell to Move a VHDX to another Bus or Location

In PowerShell, use Set-VMHardDiskDrive to relocate a VHDX:
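As a sketch, assuming a virtual machine named 'demo-vm' with the VHDX currently at IDE controller 0, location 1, moving to SCSI controller 0, location 0:

```powershell
# Identify the disk by its current controller and location, then name the destination
Set-VMHardDiskDrive -VMName 'demo-vm' `
    -ControllerType IDE -ControllerNumber 0 -ControllerLocation 1 `
    -ToControllerType SCSI -ToControllerNumber 0 -ToControllerLocation 0
```

Remember that the virtual machine must be off to change anything on the IDE side.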

Warning: If running through a remote PowerShell session, be extremely mindful of second-hop and delegation concerns. While it is not obvious, the above cmdlet first detaches the drive from its current location and then attaches it at the specified destination. Far fewer permissions are required to detach than to attach. It is entirely possible that you will successfully detach the VHDX… and then the cmdlet will error, leaving the disk detached.

Free Script: Fixing Hyper-V Folder Security


My standing recommendation for NTFS security when it comes to Hyper-V is: leave it alone. Unfortunately, most people don’t learn why until they’ve already broken something. It’s not fun to fix. I’ve produced a script for fixing Hyper-V folder security that should make things much easier for you.

Fixing Hyper-V Folder Security – The Script

Here’s the script in all its glory. The straight-through run of it just needs a Path. It will add the necessary privileges for “NT VIRTUAL MACHINE\Virtual Machines” to that path. If you specify -FullReset, then it removes inheritance and sets up a default set of permissions. You can read what they are in the help text. I had toyed around with resetting ownership as well, but doing so in PowerShell is just plain painful. If you want to modify this script to allow it, grab Lee Holmes’ instructions.

The script has partial -WhatIf support; it only really takes effect when -FullReset is specified. That’s because I didn’t want it over-confirming on a plain run, and I couldn’t find a reliable way to detect whether -WhatIf was specified without always triggering the prompt or writing my own WhatIf handler.

If you use -FullReset and it finds anything it thinks is a virtual machine file, it will make you double-confirm that you want to perform the reset. You should really only be using FullReset on the parent folder, not “Virtual Hard Disks” folders or anything like that. If you wind up ripping the virtual machine’s permissions off its VHDX files, well, I warned you. Twice. The script should only operate on the folder and not its files, but, well, scripting NTFS permissions sometimes leads to unexpected results.

All you have to do is copy and paste the contents of this box into your own .ps1 file. I’ve written it with the expectation that you’ll use the name Reset-VMFolderSecurity.


The Discovery Process

The impetus to create this script is not entirely altruistic. Most of my lab work is done without backups. I’m probably not alone in the general attitude that if it was really important, it wouldn’t be in the lab. But then, I suffer some kind of self-inflicted data loss and wish I’d had backups. So, I went about setting up Altaro Hyper-V Backup on all my lab nodes and let it go. I had a few problems, mostly because of situations I created. I sent in an e-mail to Altaro support, and, as always, the initial issues were quickly resolved. But then, I noticed that I was getting some VSS errors. I probably could have dug through the logs, but Altaro support already had them and I’m kind of lazy, so I asked if they could tell me what was wrong.

I got an e-mail back telling me that I was getting access denied errors and asking if I would check permissions. So, I thought, “I know better than to tinker with the permissions on my VM folders. How could security possibly be wrong?” But, Altaro support is rarely wrong, and checking is pretty easy even for a lazy person like me, so I did. And, sure enough, the “NT VIRTUAL MACHINE\Virtual Machines” account was not present on my virtual machines folder. I’m still not sure how that happened, but in case you’re lucky enough to be one of those people who can learn by reading, don’t ever remove the Virtual Machines account from a security list.

Setting it back is kind of a pain. For a long time, I’ve just been referring people to a blog I found that shows how to do it. The problem I have with that blog is that it sets security on the root of a drive, so I always have to couch my recommendation with a warning against doing that. It’s horrible practice and Windows really doesn’t like it. Furthermore, those instructions grant Full Control to the account. There’s nothing blatantly wrong with that, but it’s not necessary and it’s not what Hyper-V does when it builds out folder ACLs. But, as long as I personally don’t know how to do something, I have to refer everyone to someone who does. So, my goal became to figure out the process on my own and do it the right way.

The first thing I thought was that I could take some screenshots of a properly set folder and tell everyone running a GUI to duplicate those, then find some PowerShell way to set them as well. I learned pretty quickly that the GUI method would be incomplete and that the PowerShell method is excessively difficult. So, be aware that if you use the GUI security screen to transcribe permissions, it might work, but it won’t be complete.

The issue with PowerShell isn’t directly PowerShell’s fault. It relies on the .Net Framework, and as it happens, the .Net Framework’s file and folder security classes, at least up through v4.5, just aren’t quite complete.

What I did first was retrieve the ACL for a working folder, then narrow down its access entries to just the Virtual Machines object. I did that like so:
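Roughly like this, with a hypothetical folder path (match on the friendly name, or on the S-1-5-83-0 SID if your system doesn't translate it):

```powershell
# Pull the ACL and keep only the entries for the Virtual Machines account
$acl = Get-Acl -Path 'D:\VMs\demo-vm'
$acl.Access | Where-Object { $_.IdentityReference -like '*Virtual Machine*' }
```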

That produced the following:

The same folder has two separate access rules for the Virtual Machines account. The first is legible. It can be understood and manipulated. As for the second… well… What the heck is “-2147483642”?

To answer that, we need to look at something that will seem scary at first:

This seemingly incomprehensible jumble is called Security Descriptor Definition Language (SDDL). It’s actually not so bad if you have an SDDL lexicon. To discover what matters, we need to find the items here that represent the “Virtual Machines” account. I’ve seen it enough in the past to recognize its SID: S-1-5-83-0. There are two items here that contain that SID:

  • (A;;0x12008f;;;S-1-5-83-0)
  • (A;CIIO;DCLCGR;;;S-1-5-83-0)

The first one really is unintelligible to the human eye, but the second one isn’t. We’ll start with the first one, though. We want to match them both to the more readable entries from the Get-Acl results. They appear in the same order, which is our first clue. From the lexicon, we know that the second grouping (after the first semicolon) describes inheritance. The first SDDL entry has nothing between the first and second semicolons, meaning no inheritance, which aligns it with the first object from the Get-Acl pull. Likewise, the CI and IO of the second SDDL entry match the ContainerInherit and InheritOnly flags of the second Get-Acl entry. So we know that “0x12008f” means “CreateFiles, AppendData, Read, Synchronize”. That’s easy enough. If I wanted to apply that to a folder, I would do this:
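A sketch of that first rule in PowerShell, using the raw 0x12008f value and the S-1-5-83-0 SID from the SDDL (the folder path is hypothetical):

```powershell
$path = 'D:\VMs\demo-vm'
$sid  = New-Object System.Security.Principal.SecurityIdentifier('S-1-5-83-0')
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
    $sid,
    [System.Security.AccessControl.FileSystemRights]0x12008f,  # CreateFiles, AppendData, Read, Synchronize
    [System.Security.AccessControl.InheritanceFlags]::None,
    [System.Security.AccessControl.PropagationFlags]::None,
    [System.Security.AccessControl.AccessControlType]::Allow)
$acl = Get-Acl -Path $path
$acl.AddAccessRule($rule)
Set-Acl -Path $path -AclObject $acl
```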

If I do that, the folder will get the same permissions that you find when you look at the security properties of a properly configured folder in Windows Explorer. But, if I run that Get-Acl cmdlet against it, I’m not going to get the strange entry with “-2147483642”.

To solve that, I naturally began with the lexicon. Using that, we find that “(A;CIIO;DCLCGR;;;S-1-5-83-0)” parses out to:

Access Allowed;
Container Inherit, Inherit Only;
Delete Child Objects, List Contents, Generic Read

Next, I looked at the options I had available in System.Security.AccessControl.FileSystemRights:

FileSystemRights Options

They’re not all visible in that screenshot, but it doesn’t matter. I tried various combinations from that list that I thought might work, but none did. I actually wasn’t surprised, because if .Net were able to parse that number as a FileSystemRights object, then it wouldn’t show us that number.
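If you’d rather enumerate that list than read it from a screenshot, a quick sketch:

```powershell
# Print every named FileSystemRights value alongside its underlying number
[Enum]::GetValues([System.Security.AccessControl.FileSystemRights]) |
    ForEach-Object { '0x{0:X8}  {1}' -f [int]$_, $_ } |
    Sort-Object -Unique
```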

My next step was to figure out what the real number was. Obviously, a negative number doesn’t work as a group of flags. In case you’re not familiar with how flags work, here’s a visual representation:

Mailboxes, source

In the picture, we have six mailboxes. All these boxes co-exist in the same structure. Each has its own flag. If it’s up, the postal worker knows that the box contains outbound mail. At a glance, it’s easy to tell which, if any, of the six need attention. Windows uses a similar scheme for flagging various true/false properties on items. Rather than use up a lot of boolean variables which really require more than a single bit to store, it can use a more efficient variable, like an integer, to hold many at once. So, for a six mailbox stretch like this, Windows could use an 8-bit variable to represent them all. It would store them like this:

  • Bit 1 is box 770 – binary value 1
  • Bit 2 is box 768 – binary value 10
  • Bit 3 is box 766 – binary value 100
  • Bit 4 is box 764 – binary value 1000
  • Bit 5 is box 762 – binary value 10000
  • Bit 6 is box 760 – binary value 100000
  • Bit 7 is unused  – binary value 1000000
  • Bit 8 is unused  – binary value 10000000

In the picture, all the flags are up, so that would be represented by 00111111. Converted to decimal, that would be 63; hex would be 3f. If Windows wants to know whether a particular flag is up, it uses a bitwise AND against that column. For example, to check whether box 764 has a message waiting: box 764 is bit 4, so the mask to use puts a 1 in bit 4 and a zero everywhere else: 00001000. It performs a bitwise AND against the entire variable, which in this case is 00111111, then checks the result to see if bit 4 is set. If it is, then it knows that box 764 has something waiting.

If bit 4 had held a 0, the result would have been all 0s, or false, and it would have known there was no message waiting.

To go the other way, that is, to combine the values of all the flags, we use a bitwise OR. So, to record that boxes 768 and 762 have messages waiting, the formula is 10 OR 10000, which yields 10010.
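The same arithmetic in PowerShell, using the bit values from the mailbox list above (the variable names are just for illustration):

```powershell
# Raise all six mailbox flags: 1 OR 10 OR 100 OR 1000 OR 10000 OR 100000
$allUp = 1 -bor 2 -bor 4 -bor 8 -bor 16 -bor 32   # 63 decimal, 0x3f hex
# Check box 764 (bit 4, mask 00001000) with a bitwise AND
($allUp -band 8) -ne 0     # True: the flag is up
# Record that boxes 768 (bit 2) and 762 (bit 5) have mail: 10 OR 10000
2 -bor 16                  # 18 decimal, binary 10010
```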

So now that we know how flags work, we can decipher the number that represents the access rights. Let’s look at the one we already know, which showed up as 0x12008f. In binary, that’s 100100000000010001111. So, if we knew all the flags, we could visually determine what was set and what wasn’t. We’ve already figured that one out, though, so let’s move to the “-2147483642” item.

As I said before, that’s not valid, because it’s negative. No amount of ORing 0s and 1s will ever result in a negative number. What probably happened here is that a really big unsigned number found its way into a signed number’s slot. Just looking at the contents of memory, you can’t tell whether a number is signed; that’s decided by how the variable that owns that memory is defined. If it’s defined as a signed number, then when the leftmost bit is set, the number is negative; when that bit is empty, the number is positive. If it’s an unsigned number, then that leftmost bit is simply part of the number, and when it’s set, the number is just big. Whatever is going on here, that number is not usable as-is.
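You can watch that reinterpretation happen with .Net’s BitConverter: the same four bytes, read back as unsigned, turn the negative number that Get-Acl displayed into a sensible flag value:

```powershell
# Take the 32 bits of the signed value Get-Acl showed us...
$bytes = [BitConverter]::GetBytes([int]-2147483642)
# ...and read the same bits back as an unsigned 32-bit integer
$unsigned = [BitConverter]::ToUInt32($bytes, 0)
'{0} = 0x{0:X8}' -f $unsigned    # 2147483654 = 0x80000006
```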

Knowing that, I set off to find out what the actual number is. For that, I need to find out what numbers represent the letters that we saw in the SDDL (DCLCGR, if you forgot). For that, I found myself on this page. I downloaded the linked SDDL parser and fed it both sets:

ACE 1 is the one we figured out. ACE 2 is the mystery set. As you can see, they only have “ADS_RIGHT_DS_DELETE_CHILD” in common. That by itself doesn’t tell us what the numbers should be, though. To the Internet! This time, we landed in the Active Directory C++ API documentation. With this, we can get the values of the flags:

  • ADS_RIGHT_DS_DELETE_CHILD is 0x2
  • ADS_RIGHT_ACTRL_DS_LIST is 0x4
  • ADS_RIGHT_GENERIC_READ is 0x80000000

If you weren’t aware, the “0x” means that the number is given in hexadecimal. I guess it’s obvious which one put us over the edge. Converted to binary, 0x80000000 is 10000000000000000000000000000000. Our flag calculation is 0x80000000 OR 0x2 OR 0x4 = 0x80000006. In binary, that’s 10000000000000000000000000000110. So, mostly for academic purposes, I assigned that number to the $ACLAccessRights variable in the small script above and tried to execute it, and as expected, the system threw it out as not being a valid value for System.Security.AccessControl.FileSystemRights.
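Checking the arithmetic in PowerShell (the variable names mirror the flag names from the Active Directory documentation):

```powershell
$ADS_RIGHT_DS_DELETE_CHILD = 0x2          # DC
$ADS_RIGHT_ACTRL_DS_LIST   = 0x4          # LC
$ADS_RIGHT_GENERIC_READ    = 0x80000000   # GR
$combined = $ADS_RIGHT_GENERIC_READ -bor $ADS_RIGHT_ACTRL_DS_LIST -bor $ADS_RIGHT_DS_DELETE_CHILD
'0x{0:X}' -f $combined    # 0x80000006
```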

Well, now what? I tried stepping through all the other types I found under the AccessControl .Net namespace. Most of them just won’t let a programmer create an object. I worked my way up to the root AuthorizationRule, and I was told that there weren’t any accessible constructors; it seems that .Net can make one of those objects, but I can’t. Out of the listed rule object types, programmers can make FileSystemAccessRule and RegistryAccessRule objects, and that’s it. For anything else, such as “2147483654” (the actual decimal representation of the number we want), your only options are SDDL or going deep with API calls. I have the capability of doing C++ API work, but I’m just not going to do it unless I really have to. So, SDDL it is.

As we saw before, we have the ability to set the first part of this using nice, clean, legible PowerShell. But, we do know the entire SDDL that we need. So, since we can’t have legibility across the board, we’ll just go with the super short SDDL and do it all quickly:
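A sketch of the whole operation; the two ACE strings come straight from the SDDL above, and the folder path is hypothetical. Appending to the Sddl string works here because Get-Acl (without -Audit) returns a descriptor whose string ends with the DACL:

```powershell
$path = 'D:\VMs\demo-vm'
$acl  = Get-Acl -Path $path
# Append both Virtual Machines ACEs to the existing DACL in SDDL form
$acl.SetSecurityDescriptorSddlForm(
    $acl.Sddl + '(A;;0x12008f;;;S-1-5-83-0)(A;CIIO;DCLCGR;;;S-1-5-83-0)')
Set-Acl -Path $path -AclObject $acl
```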

That’s really all there is to it. It’s not easy to read, but once you understand the components, it does make sense in its own way. It would be nice if the .Net Framework, and by extension PowerShell, caught up here, because setting file and folder permissions is definitely in line with what a sysop does every day, but what I just showed you most definitely should not be. It’s because of complications like this that you find “Everyone/Full Control” being set on critical file shares. It’s also the only thing I still dread doing when I connect to Hyper-V Server or Windows Server in Core mode.



7 Keys to Hyper-V Security


For most small institutions, securing Hyper-V is often just a matter of letting domain admins control the hypervisor. This post will examine ways you can harden your Hyper-V deployment beyond the basics.