Hyper-V Key-Value Pair Data Exchange Part 3: Linux

Some time ago, I discovered uses for Hyper-V Key-Value Pair Data Exchange services and began exploiting them on my Windows guests. Now that I’ve started building Linux guests, I need similar functionality. This article covers the differences in the Linux implementation and includes version 1.0 of a program that allows you to receive, send, and delete KVPs.

For a primer on Hyper-V KVP Exchange, start with this article: Hyper-V Key-Value Pair Data Exchange Part 1: Explanation.

The second part of that series presented PowerShell scripts for interacting with Hyper-V KVP Exchange from both the host and the guest sides. The guest script won’t be as useful in the context of Linux. Even if you install PowerShell on Linux, the script won’t work because it reads and writes registry keys. It might still spark some implementation ideas, I suppose.

What is Hyper-V Key-Value Pair Data Exchange?

To save you a few clicks and other reading, I’ll give a quick summary of Hyper-V KVP Exchange.

Virtual machines are intended to be “walled gardens”. The host and guest should have limited ability to interact with each other. That distance sometimes causes inconvenience, but the stronger the separation, the stronger the security. Hyper-V’s KVP Exchange provides one method for moving data across the wall without introducing a crippling security hazard. Either “side” (host or guest) can “send” a message at any time. The other side can receive it — or ignore it. Essentially, they pass notes by leaving them stuck in slots in the “wall” of the “walled garden”.

KVP stands for “key-value pair”. Each of these messages consists of one text key and one text value. The value can be completely empty.

How is Hyper-V KVP Exchange Different on Linux?

On Windows guests, a service runs (Hyper-V Data Exchange Service) that monitors the “wall”. When the host leaves a message, this service copies the information into the guest’s Windows registry. To send a message to the host, you (or an application) create or modify a KVP within a different key in the Windows registry. The service then places that “note” in the “wall” where the host can pick it up. More details can be found in the first article in this series.

Linux runs a daemon that is the analog to the Windows service. It has slightly different names on different platforms, but I’ve been able to locate it on all of my distributions with sudo service --status-all | grep kvp. It may not always be running; more on that in a bit.

Linux doesn’t have a native analog to the Windows registry. Instead, the daemon maintains a set of files. It receives inbound messages from the host and places them in particular files that you can read (or ignore). You can write to one of the files. The daemon will transfer those messages up to the host.

On Windows, I’m not entirely certain of any special limits on KVP sizes. A registry value name can be up to 16,383 characters, and value data has no hard-coded size limit. I have not tested how KVP Exchange handles these extents on Windows. However, the Linux daemon has much tighter constraints. A key can be no longer than 512 bytes. A value can be no longer than 2,048 bytes.

The keys are case sensitive on the host and on Linux guests. So, key “LinuxKey” is not the same as key “linuxkey”. Windows guests just get confused by that, but Linux handles it easily.

How does Hyper-V KVP Exchange Function on Linux?

As with Windows guests, Data Exchange must be enabled on the virtual machine’s properties:

Hyper-V KVP Exchange on Linux

The daemon must also be installed and running within the guest. Currently-supported versions of the Linux kernel contain the Hyper-V KVP framework natively, so several distributions ship with it enabled. As mentioned in the previous section, the exact name of the daemon varies. You should be able to find it with: sudo service --status-all | grep kvp. If it’s not installed, check your distribution’s instruction page on TechNet.

All of the files that the daemon uses for Hyper-V KVP exchange can be found in the /var/lib/hyperv folder. They are hidden, but you can view them with the -a parameter of ls:

Hyper-V KVP exchange

Anyone can read any of these files. Only the root account has write permissions, but that can be misleading: writing to any of the files that are intended to carry data from the host to the guest has no real effect. The daemon owns those files; it overwrites them with the host’s data, and nothing that you place in them travels anywhere.

What is the Purpose of Each Hyper-V KVP Exchange File?

Each of the files is used for a different purpose.

  • .kvp_pool_0: When an administrative user or an application in the host sends data to the guest, the daemon writes the message to this file. It is the equivalent of HKLM\SOFTWARE\Microsoft\Virtual Machine\External on Windows guests. From the host side, the related commands are ModifyKvpItems, AddKvpItems, and RemoveKvpItems. The guest can read this file. Changing it has no useful effect.
  • .kvp_pool_1: The root account can write to this file from within the guest. It is the equivalent of HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest on Windows guests. The daemon will transfer messages up to the host. From the host side, its messages can be retrieved from the GuestExchangeItems field of the WMI object.
  • .kvp_pool_2: The daemon will automatically write information about the Linux guest into this file. However, you never see any of the information from the guest side. The host can retrieve it through the GuestIntrinsicExchangeItems field of the WMI object. It is the equivalent of the HKLM\SOFTWARE\Microsoft\Virtual Machine\Auto key on Windows guests. You can’t do anything useful with the file on Linux.
  • .kvp_pool_3: The host will automatically send information about itself and the virtual machine through this file. You can read the contents of this file, but changing it does nothing useful. It is the equivalent of the HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters key on Windows guests.
  • .kvp_pool_4: I have no idea what this file does or what it is for.

What is the Format of the Hyper-V KVP Exchange File on Linux?

Each file uses the same format.

One KVP entry is built like this:

  • 512 bytes for the key. The key is a sequence of non-null bytes, typically interpreted as char. According to the documentation, it will be processed using UTF-8 encoding. After the characters for the key, the remainder of the 512 bytes is padded with null characters.
  • 2,048 bytes for the value. As with the key, these are non-null bytes typically interpreted as char. After the characters for the value, the remainder of the 2,048 bytes is padded with null characters.

KVP entries are written end-to-end in the file with no spacing, headers, or footers.

For the most part, you’ll treat these as text strings, but that’s not strictly necessary. I’ve been on this rant before, but the difference between “text” data and “binary” data is 100% semantics, no matter how much code we write to enforce artificial distinctions. From now until the point when computers can process something other than low voltage/high voltage (0s and 1s), there will never be anything but binary data and binary files. On the Linux side, you have 512 bytes for the key and 2,048 bytes for the value. Do with them as you see fit. However, on the host side, you’ll still need to get through the WMI processing. I haven’t pushed that very far.

How Do I Use Hyper-V KVP Exchange for Linux?

This is the part where it gets fun. Microsoft only goes so far as to supply the daemon. If you want to push or pull data, that’s all up to you. Or third parties.

But really, all you need to do is write to and/or read from files. The trick is that you need to do it using the binary format that I mentioned above. If you just use a tool that writes simple strings, it will improperly pad the fields, resulting in mangled transfers. So, you’ll need a bit of proficiency in whatever tool you use. The tool itself doesn’t matter, though. Perl, Python, bash scripts… anything will do. Just remember these guidelines:

  • Writing to files _0, _2, _3, and _4 just wastes time. The host will never see it, it will break KVP clients, and the files’ contents will be reset when the daemon restarts.
  • You do not need special permission to read from any of the files.
  • _1 is the only file that it’s useful to write to. You can, of course, read from it.
    • Deleting the existing contents deletes those KVPs. You probably want to update existing or append new.
    • The host only receives the LAST time that a KVP is set. This means that if you write a KVP with key “NewKey” twice in the _1 file, the host will only receive the second one.
    • Delete a KVP by zeroing its fields.
  • If the byte lengths are not honored properly, you will damage that KVP and every KVP following.

Source Code for a Hyper-V KVP Exchange Utility on Linux

I’ve built a small utility that can be used to read, write, and delete Hyper-V KVPs on Linux. I wrote it in C++ so that it can be compiled into a single, neat executable.

Long-term, I will only be maintaining this project on my GitHub site. The listing on this article will be forever locked in a 1.0 state.

Compile Instructions

The makefile expects all of the files to live in the same directory. Use make to build the sources and sudo make install to put the executable into the /bin folder.

Install Instructions

Paste the contents of all of these files into accordingly-named files. File names are in the matching section header and in the code preview bar.

Transfer all of the files to your Linux system. It doesn’t really matter where. They just need to be in the same folder.


Usage Instructions

Get help with:

  • hvkvp --help
  • hvkvp read --help
  • hvkvp write --help
  • hvkvp delete --help

Each includes the related parameters for that command and some examples.

Code Listing

The file list:

  • makefile
  • main.cpp
  • hvkvp.h
  • hvkvp.cpp
  • hvkvpfile.h
  • hvkvpfile.cpp
  • hvkvpreader.h
  • hvkvpreader.cpp
  • hvkvpremover.h
  • hvkvpremover.cpp
  • hvkvpwriter.h
  • hvkvpwriter.cpp

More in this series:

Part 1: Explanation

Part 2: Implementation

How to Write C/C++ Code for Linux using Hyper-V and Visual Studio

Microsoft has definitely been bringing the love for Linux lately! I’ve used Linux more in 2017 than in the entirety of my previous career combined. Microsoft made that happen. Recently, they added support to their premier development product, Visual Studio, so that it can connect, deploy, and debug C/C++ code on a Linux system. I’m going to show you how to use that new functionality in conjunction with Hyper-V to ease development on Linux. I’ll provide a demo program listing that you can use to retrieve the information that Hyper-V provides to Linux guests via KVP exchange.

Why Use Visual Studio for Linux C/C++ Development?

I think it’s natural to wonder why anyone would use a Microsoft product to write code on Linux. This discussion can get very religious very quickly, and I would personally like to stay out of that. I’ve never understood why people get so emotional over their own programming preferences that they feel the need to assault the preferences of others. If using Visual Studio causes you some sort of pain, don’t use Visual Studio. Simple.

For the more open-minded people out there, there are several pragmatic reasons to use Visual Studio for Linux C/C++ development:

  • Intellisense: Visual Studio quickly shows me errors, incomplete lines, unmatched braces/parentheses, and more. Lots of other products have something similar, but I haven’t found anything that I like as much as Intellisense.
  • Autocomplete: Everyone has autocomplete. But, when it’s combined with Intellisense, you’ve got a real powerhouse. A lot of other products seem to stumble in ways that Visual Studio doesn’t. It seems to know when I want help and when to stay out of the way. It might also be my familiarity with the product, but…
  • Extensions and Marketplace: Visual Studio sports a rich extension API. A veritable cottage industry sprang up to provide plug-ins and tools. Many are free-of-charge.
  • [CTRL]+[K], [D] (Format Document). This particular key chord keeps Visual Studio right at the top of my list of must-have tools. Disagreements over how to place braces and whether to use tabs or spaces are ridiculous, but frequently cause battles that reach family-splitting levels of vitriol anyway. VS’s Format Document will almost instantly convert whatever style was in place when you opened the file into whatever style you’ve configured VS to use. Allman with tabs, OTBS with spaces — it doesn’t matter! I haven’t found any other tool that deals with this as well as Visual Studio.
  • Remote debugging. I’ve been using Visual Studio’s remote debugger on Windows for a while and have really liked it. It allows you to write code on one system but run it on another. Since VS won’t run directly on Linux, this feature makes the VS+Linux relationship possible.
  • No Linux GUI needed. Practically, this is the same as the previous bullet. I want it separate so that skimmers don’t miss it. Out of all of my Linux installations, only two have a GUI. I know that some people declare that “real programmers” only use text editors to write code. That’s part of that religious thing that I’m avoiding. I want a good IDE for my coding activities. Visual Studio allows me to have a good IDE and a GUI-less system.
  • Use the compiler of your choice. Visual Studio only provides the development environment. It calls upon the target Linux system to compile and debug your code. You can specify what tools it uses.
  • Free Community Edition. That’s free as in beer, not open source. But, Community Edition contains most of the best parts of Visual Studio. I would like to see CodeLens extended to the Community Edition, especially since the completely free Visual Studio Code provides it. Most of the rest of the features missing from Community Edition involve the testing suite. You can see a comparison for yourself.

Why Use Hyper-V for Visual Studio and Linux Development?

I don’t know about you, but I like writing code in a virtual machine. Visual Studio 2017 does not modify systems as extensively as its predecessors, but it still uses a very invasive installer. Also, you get a natural sandbox environment when coding in a virtual machine. I feel the same way about target systems. I certainly don’t want to code against a production system, and I don’t want to litter my workspace with a lot of hardware. So, I code in a virtual machine and I test in a virtual machine (several, actually).

I can do all of these things from my Windows 10 desktop. I can also target systems on my test servers. I can mix and match. Since I’m a Hyper-V guy, I can also use this to test code that’s written specifically for a Linux guest of a Hyper-V host. I’ll provide some demo code later in this article specifically for that sort of environment.

Preparing the Linux Environment for Visual Studio->Linux Connections

Visual Studio does all of its work on the Linux environment via SSH (secure shell). So, you’ll need to ensure that you can connect to TCP port 22. I don’t use SELinux, but I believe that it automatically allows the local SSH daemon as long as the default port hasn’t been changed. You’re on your own if you did that.

You need the following components installed:

  • SSH server/daemon. In most cases, this will be pre-installed, although you might need to activate it
  • The GNU Compiler Collection (GCC) and its related C and C++ compilers
  • The GNU Debugger
  • The GNU Debugger Server

Installation will vary by distribution.

openSUSE (definitely Leap, presumably Tumbleweed and SLE as well): sudo zypper install -y openssh gcc-c++ gdb gdbserver

Ubuntu: sudo apt-get install -y openssh-server g++ gdb gdbserver

CentOS, Fedora: sudo dnf install -y openssh-server gcc-c++ gdb gdb-gdbserver (substitute yum for dnf on CentOS releases that don’t include dnf)

If you needed to install SSH server, you’ll probably need to start it as well: sudo service sshd start. You may also want to look up how to autostart a service on your distribution.

You’ll need a user account on the Linux system. Visual Studio will log on as that user to transfer source code and to execute compile and debug operations. Visual Studio does not SUDO, so the account that you choose will not run as a superuser. On some of the distributions, it might be possible to just use the root account. I did not spend any time investigating that. If you need to sudo for debugging, I will show you where to do that.

That’s all for the Linux requirements. You may need to generate a private key for your user account, but that’s technically not part of preparing the Linux environment. I’ll show you how to do that as part of the Windows preparation.

Preparing the Windows Environment for Visual Studio->Linux Connections

First, you need a copy of Visual Studio. You must at least use version 2015. 2017 is preferred. You can use any edition. I will be demonstrating with the Community Edition.

Visual Studio Install Options for Linux C/C++ Development

For Visual Studio 2015, acquire the extension: https://aka.ms/vslinuxext.

For Visual Studio 2017, the new installer includes the Linux toolset.

Using Visual Studio for Linux development

You may choose any other options as necessary, of course.

Connecting Visual Studio to your Linux System(s)

You will instruct Visual Studio to maintain a master list of target Linux systems. You will connect projects to systems from that list. In this section, you’ll set up the master list.

  1. On the main Visual Studio top menu, click Tools->Options.
  2. In the Options window, click Cross Platform. You should be taken right to the Connection Manager screen.
    Connecting Visual Studio to Linux
  3. At the right of the window, click Add. You’ll fill in the fields with the relevant information. You have two separate connection options, which I’ll show separately.

Connecting Visual Studio to Linux Using a Password

Depending on the configuration of your SSH server, you might be able to use a simple password connection. By default, Ubuntu and Fedora (and probably CentOS) will allow this; openSUSE Leap will not.

Fill out the fields with the relevant information, using an account that exists on the target Linux system:

Visual studio connect to Remote System

When you click Connect, Visual Studio will validate your entries. If successful, you’ll be returned to the Options window. If not, it will highlight whatever it believes the problem to be in red. It does not display any errors. If it highlights the host name and port, then it was unable to connect. If it highlights the user name and password, then the target system rejected your credentials. If you’re certain that you’re entering the correct credentials, read the next section for a solution.

Connect Visual Studio to Linux Using Key Exchange

Potentially, using full key exchange is more secure than using a password. I’m not so sure that it’s true in this case, but we’ll go with it. If you’re using openSUSE and don’t want to reconfigure your SSH server, you’ll need to follow these steps. For the other distributions, you can use the password method above or the key method.

  1. Connect to/open the Linux system’s console as the user that you will be using in Visual Studio. Do not use sudo! On some distributions, you can use root via SSH; Ubuntu blocks it.
  2. Run ssh-keygen -t rsa. It may ask you where to create the files. Press [Enter] to accept the defaults (a hidden location in your home directory).
  3. When prompted, provide a passphrase. Use one that you can remember.
  4. You should see output similar to the following:
  5. Next, enter ssh-copy-id yourusername@thefqdn.ofthe.targetlinuxsystem. For instance, I used ssh-copy-id eric@svlmon01.siron.int. Remember, you want to use the name of the Linux system, not the remote Windows system running Visual Studio. The system may complain that it can’t verify the authenticity of the system. That’s OK in this case. Type out yes and press [Enter].
  6. You will be asked to provide a password. Use the password for your user account, not the passphrase that you created for the key.
  7. Use any tool that you like to copy the file ~/.ssh/id_rsa to your local system. The .ssh folder is hidden. If you’re using WinSCP, go to the Options menu and select Preferences. On the Panels tab, check Show hidden files (CTRL+ALT+H).
    Visual Studio preferences
  8. The id_rsa file is a private key. The target Linux system now implicitly trusts that anyone wielding the specified user name (in step 5) and encrypting with this particular private key is perfectly safe to be allowed on to the system. You must take care with this key! In my case, I just dropped it into my account’s My Documents folder. That folder already has some NTFS permission locking, and I can be reasonably certain that anyone with sufficient credentials to override it can be trusted. If not, the passphrase that I chose will serve as my last line of defense.

Now that I have my private key ready, I pick up where step 3 left off in the initial Connecting section above.

  • Fill in the target system and port
  • Fill in the user name
  • Change the Authentication type drop-down to Private Key
  • In the Private key file field, enter or browse to the id_rsa file
  • In the Passphrase field, enter the passphrase that you generated for this key

connecting your Linux system

When you click Connect, Visual Studio will validate your entries. If successful, you’ll be returned to the Options window. If not, it will highlight whatever it believes the problem to be in red. It does not display any errors. If it highlights the host name and port, then it was unable to connect. If it highlights the user name and key section, then the target system rejected your credentials. If that happens, verify that you entered the ssh-copy-id command correctly.

Note: You can also use this private key with other tools, such as WinSCP.

Once you’ve added hosts, Visual Studio will remember them. Conveniently, it will also identify the distribution and bitness:

Visual Studio connection manager

Configuring a Visual Studio C/C++ Project to Connect to a Linux System

At this point, you’ve prepared your overall environment. From this point onward, you’re going to be configuring for ongoing operational tasks. The general outlay of a Visual Studio to Linux connection:

  • Your project files and code remain on the Visual Studio system. That means the .sln, .vcxproj, etc. files.
  • During a build operation, Visual Studio transfers the source files to the target Linux system and calls on that system’s compiler to build them.
  • During a debug operation, Visual Studio calls on the gdb installation on the target Linux system and brings the output back to your local session.

You’ll find all of the transferred files under ~/projects/. Expanded, that’s /home/userid/projects. The local compiler will create bin and obj folders in that location to hold the respective files.

The following sub-sections walk through creating a project.

Creating a Linux Project in Visual Studio

You must have followed all of the preceding steps or the necessary project templates will not be available in Visual Studio.

  1. In Visual Studio, use the normal method to create a new solution or a project for an existing solution (File->New->Project).
    linux project in visual studio
  2. In the New Project dialog, expand Installed -> Templates -> Visual C++ -> Cross Platform and click Linux.
    Creating a Linux Project in Microsoft Visual Studio
  3. In the center, choose Console Application. If you choose Empty Project, you just won’t get the starter files. If you have your own process for Linux projects, you can choose Makefile Project. I will not be demonstrating that. Fill out the Name, Location, and Solution Name (if applicable) fields as necessary. If you want to connect to a source control system, such as your Github account, you can facilitate that with the Add to Source Control check box.

Your new project will include an introductory home page and a default main.cpp code file. The Getting Started page contains useful information:

visual c++ for linux development in visual studio

Default main.cpp code:

Default main.cpp code

Selecting a Target System and Changing Linux Build Options in Visual Studio

If you’ve followed through directly and gotten this far, you can begin debugging immediately. However, you might dislike the default options, especially if you added multiple target systems.

Access the root location for everything that I’m going to show you by right-clicking on the project and clicking Properties:

Selecting a Target System and Changing Linux Build Options in Visual Studio

I won’t show/discuss all available items because I trust that you can read. I’m going to touch on all of the major configuration points.

General Configuration Options

Start on the General tab. Use this section to change:

  • Directories on the remote system, such as the root project folder.
  • Project’s name as represented on the remote system.
  • Selections when using the Clean option
  • The target system to use from the list of configured connections
  • The type of build (application, standard library, dynamic library, or makefile)
  • Whether to use the Standard Library statically or as a shared resource

General Configuration Options

Directories (especially for Intellisense)

On the VC++ Directories tab, you can configure the include directories that Visual Studio knows about. This tab does not influence anything that happens on the target Linux system(s). The primary value that you will get out of configuring this section is autocomplete and Intellisense for your Linux code. For example, I have set up WinSCP to synchronize the include files from one of my Linux systems to a named local folder:

VC++ Directories

It won’t synchronize symbolic links, which means that Intellisense won’t automatically work for some items. Fortunately, you can work around that by adding the relevant targets as separate entries. I’ll show you that in a tip after the following steps.

To have Visual Studio access these include files:

  1. Start on the aforementioned VC++ Directories tab. Set the Configuration to All Configurations. Click Include Directories to highlight it. That causes the drop-down button at the right of the field to appear. Click that, then click Edit.
    include directories that Visual Studio knows
  2. In the Include Directories dialog, click the New Line button. It will create a line. At the end of that line will be an ellipsis (…) button that will allow you to browse for the folder.
    Include Directories
  3. Once completed, your dialog should look something like this:
    Include Directories
  4. OK out of the dialog.

Remember, this does not affect anything on the target Linux system.

TIP: Linux uses symbolic links to connect some of the items. Those won’t come across in a synchronization. Add a second include line (or more) for those directories. For instance, in order to get Intellisense for <sys/types.h> and <sys/stat.h> on Ubuntu, I added x86_64-linux-gnu:

Linux Directories

Compilation Options

Visual Studio’s natural behavior is to compile C code with the C++ compiler. It assumes that you’ll do the same on your Linux system. If you want to override the compiler(s) that it uses, you’ll find that setting on the General tab underneath the C/C++ tree node.

Compilation Options

TIP: In Release mode, VC++ sets the Debug Information Format to Minimal Debug Information (-g1). I’m not sure if there’s a reason for that, but I personally don’t look for any debug information in release executables. So, that default setting bloats my executable size with no benefit that I’m aware of. Knock it down to None (-g0) on the C/C++/All Options tab (make sure you select the Release configuration first):

Debug Information Format to Minimal Debug Information

Passing Arguments and Running Additional Commands

You can easily find the Pre- and Post- sections for the linker and build events in their respective sections. However, those only apply during a build cycle. In most cases, I suspect that you’ll be interested in changing things during a debug session. Visual Studio provides many options, but I’m guessing that the two of most interest will be running a command prior to the debug phase and passing arguments into the program. You’ll find both options on the Debugging tab:

Passing Arguments and Running Additional Commands

If the program needs to run with super user privileges, then you could enter  sudo -s into the Pre-Launch Command field. However, by default, you’d also need to supply the password. That password would then be saved into the project’s configuration files in clear text. Even that by itself might not be so bad if the project files live in a secure location. However, if you add the project to your Github account… So, if you need to sudo, I would recommend simply bypassing the need for this account to enter a password at all. It’s ultimately safer to know that you have configured this account that way than to try to keep track of all the places where the password might have traveled. I’ve found two places that guide how to do that: StackExchange and Tecmint. I typically prefer Stack sites but the Tecmint directions are more thorough.
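For reference, the technique that both of those guides describe amounts to a drop-in sudoers entry along these lines (shown here as a hedged sketch; substitute your own account name, keep the rule as narrow as your security posture demands, and always edit with visudo):

```
# /etc/sudoers.d/vsdebug: allow this account to run sudo without a password
eric ALL=(ALL) NOPASSWD: ALL
```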

Starting a Debug Process for Linux C/C++ Code from Visual Studio

You’ve completed all configuration work! Now you just need to write code and start debugging.

Let’s start with the sample code since we know it’s a good working program. You can press the green arrow button titled Remote GDB Debugger or press the F5 key when the code window has focus.

Debug Process for Visual Studio

You will be prompted to build the project:

Visual Studio project

If you’ve left the Windows Firewall active, you’ll need to allow Visual Studio to communicate out:

Visual Studio firewall

In the Output window, you should see something similar to the following:

Debug Process for Linux C/C++ Code from Visual Studio

If errors occur, you should get a fairly comprehensible error message.

Viewing the Output of a Remote Linux Debug Cycle in Visual Studio

After the build phase, the debug cycle will start. On the Debug output, you may get some errors that aren’t as comprehensible as compile errors:

Output of a Remote Linux Debug Cycle in Visual Studio

As far as I can tell, these errors (“Cannot find or open the symbol file. Stopped due to shared library event”) occur because the target system uses an older compiler. Changing the default compiler on a Linux distribution can be done, but it is a non-trivial task that may have unforeseen consequences. You have three choices:

  • As long as the older compiler can successfully build your application, live with the errors. If your final app will target that distribution, then you can bet that users of that distribution will also be using that older compiler.
  • Add a newer version of the compiler and use what you learned above to instruct Visual Studio to call it instead of the default. You’ll need to do some Internet searching to find out what the corrected command line needs to be.
  • Change the default compiler on the target. That would be my last choice, as it will affect all future software built on that system in a manner that is inconsistent with the distribution. If you want to do that, you’ll need to look up how.

The consequence of doing nothing is the normal effect of debugging into the code for which you have no symbols. I have not yet taken any serious steps to fix this problem on my own systems. I’m not even certain that I’m correct about the version mismatch. However, these aren’t showstoppers. Assuming that the code compiled, the debug session will start. Assuming that it successfully executed your program, it will have run through to completion and exit. If you remember the first time that you coded a Visual C++ Windows Console Application and didn’t have some way to tell the program to pause so that you could view the results, then you’ll already know what happened: you didn’t get to see any output aside from the return code.

Since you’re working in a remote session, you need to do more than just put a simple input pause at the end of your code. In the Debug menu, click Linux Console.

Debug menu Linux Console

This will open a new always-on-top window for you to view the results of the debug. Debug the default application again, and you should see this:

Linux Console

Of course, the built output will remain until you clean it, so you can always execute the app in a separate terminal window:

LinuxApp is the name that I used for my project. Substitute in the name that you used for your project.
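Running it manually looks something like the sketch below. The path follows what I believe is Visual Studio’s default remote-build layout (~/projects/&lt;project&gt;/bin/x64/Debug/&lt;project&gt;.out); verify it against your own configuration.

```shell
# Assumed path from Visual Studio's default remote-build layout; substitute
# your own project name for LinuxApp and verify the path on your target.
bin="$HOME/projects/LinuxApp/bin/x64/Debug/LinuxApp.out"
if [ -x "$bin" ]; then
  "$bin"
else
  echo "no build output at $bin"
fi
```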

Sample C++ Application: Retrieving KVP data from Hyper-V on a Linux Guest

If we’re going to have an article on Hyper-V, Linux, and C++, it seems only fair that it should include a sample program tying all three together, doesn’t it?

Hyper-V KVP on Linux

A while back, I went through the KVP exchange mechanism for Hyper-V and Windows guests. From the host side, nothing changes for Linux guests. On the Linux side, just about everything changes.

If you followed my guides, the Hyper-V KVP service will already be running on your Linux guest. Check for it: sudo service --status-all | grep kvp. If it’s not there, you can look at the relevant guide on this site for your distribution (I’ve done Ubuntu, CentOS, openSUSE Leap, and Kali). You can also check TechNet for your distribution’s instructions. Also, make sure that the service is enabled on the virtual machine’s property page in Hyper-V Manager or Failover Cluster Manager.

Linux/Hyper-V KVP Input/Output Locations

On Windows, the KVP service operates via the local registry. On Linux, the KVP daemon operates via files:

  • /var/lib/hyperv/.kvp_pool_0: an inbound file populated by the daemon. This is data that an administrative user can send from the host. Same purpose as the External key on a Windows guest. You only read this file from the Linux side. It does not require special permissions. Ordinarily, it will be empty.
  • /var/lib/hyperv/.kvp_pool_1: an outbound file that you can use to send data to the host. Same purpose as the Guest key on a Windows guest.
  • /var/lib/hyperv/.kvp_pool_2: an outbound file populated by the daemon using data that it collects from the guest. Same purpose as the Auto key on a Windows guest. This information is read by the host. You cannot do anything useful with it from the guest side.
  • /var/lib/hyperv/.kvp_pool_3: an inbound file populated by the host. This data contains information about the host. Same purpose as the Guest\Parameter key on a Windows guest. You can only read this file. It does not require special permissions. It should always contain data.

Linux/Hyper-V KVP File Format

All of the files follow the same straightforward format: individual KVP records laid end-to-end, each with a fixed length of 2,560 bytes. Each record contains two fields:

  • 512 bytes that contain the data’s key (name). Process as char. hyperv.h defines this value as HV_KVP_EXCHANGE_MAX_KEY_SIZE.
  • 2,048 bytes that contain the data’s value. By default, you’ll also process this as char, but data is data. hyperv.h defines this as HV_KVP_EXCHANGE_MAX_VALUE_SIZE.

Be aware that this differs from the Windows implementation, which doesn’t appear to use a fixed limit on value length.
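To make the layout concrete, here’s a small shell sketch that builds one fake 2,560-byte record and pulls the key and value back out with dd. The same offset arithmetic applies to the real pool files such as /var/lib/hyperv/.kvp_pool_3; the record contents below are invented for the demonstration.

```shell
# Build one fake record: a 512-byte null-padded key followed by a
# 2,048-byte null-padded value. The contents are invented for the demo.
pool=$(mktemp)
{
  printf 'HostName';   head -c $((512 - 8))   /dev/zero   # key field
  printf 'hv-host-01'; head -c $((2048 - 10)) /dev/zero   # value field
} > "$pool"

# Record i starts at offset i * 2560; the key sits at +0, the value at +512.
i=0
key=$(dd if="$pool" bs=1 skip=$((i * 2560)) count=512 2>/dev/null | tr -d '\0')
value=$(dd if="$pool" bs=1 skip=$((i * 2560 + 512)) count=2048 2>/dev/null | tr -d '\0')
echo "$key = $value"
```

A C-family reader applies the same arithmetic using the HV_KVP_EXCHANGE_MAX_KEY_SIZE and HV_KVP_EXCHANGE_MAX_VALUE_SIZE constants from hyperv.h.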

Program Listing

Armed with the above knowledge, we’re going to read the inbound file that contains the automatically-created host information.

I replaced the default main.cpp with the following code:

Debug this with the Linux Console open, and you should see something like the following:

Debug with the Linux Console

For More Information

I poached a little bit of the Visual Studio for C/C++ on Linux information from https://blogs.msdn.microsoft.com/vcblog/2016/03/30/visual-c-for-linux-development/.

I got the base information about Hyper-V/Linux KVP from this article: https://technet.microsoft.com/en-us/library/dn798287(v=ws.11).aspx. If you want to write KVP readers/writers using C rather than C++, you’ll find examples there. While I certainly don’t mind using C, I feel that the lock code detracts from the simplicity of reading and writing KVP data.

How to Use Hyper-V and Kali Linux to Securely Wipe a Hard Drive


The exciting time has come for my wife’s laptop to be replaced. After all the fun parts, we’ve still got this old laptop on our hands, though. Normally, we donate old computers to the local Goodwill. They’ll clean them up and sell them for a few dollars to someone else. Of course, we have no idea who will be getting the computer, and we don’t know what processes Goodwill puts them through before putting them on the shelf. A determined attacker might be able to retrieve social security numbers, bank logins, and other things that we’d prefer to keep private. As usual, I will wipe the hard drive prior to the donation. This time though, I have some new toys to use: Hyper-V and Kali Linux.

Why Use Hyper-V and Kali Linux to Securely Wipe a Physical Drive?

I am literally doing this because I can. You can easily find any number of other ways to wipe a drive. My reasons:

  • I don’t have any experience with Windows-based apps that wipe drives and didn’t find any freebies that spoke to me
  • I don’t really want to deal with booting this old laptop up to one of those security CDs
  • Kali Linux focuses on penetration testing, but Kali is also the name of the Hindu goddess of destruction. For a bit of fun, do an Internet image search on her, but maybe not around small children. What’s more appropriate than unleashing Kali on a disk you want to wipe?
  • I don’t want to deal with a Kali Live CD any more than I want to use one of the other CD-based tools, nor do I want to build a physical Kali box just for this. I already have Kali running in a virtual machine.
  • It’s very convenient for me to connect an external 2.5″ SATA disk to my Windows 10 system.

So yeah, I’m doing this mostly for fun.

Connect the Drive

I’m assuming that you’ve already got a Hyper-V installation with a Kali Linux guest. If not, get those first.

Since we’re working with a physical drive, you also need a way to physically connect the drive to the Hyper-V host. In my case, I have an old Seagate FreeAgent GoFlex that works perfectly for this. It has an enclosure for a small SATA drive and a detachable USB-to-SATA connector. I just popped off its drive, plugged the connector into the laptop drive, and voilà! I could connect her drive to my PC via USB.

how I connected the hard drive

You might need to come up with some other method, like cracking your case and connecting the cables. Hopefully not.

I plugged the disk into my Windows 10 system, and as expected, it appeared immediately. Next, I went into Disk Management and took the disk Offline.

hard disk management page

I then went into Hyper-V Manager and ensured the Kali guest was running. I opened its settings page to the SCSI Controller page. There, I clicked the Add button.

Adding the hard drive

It created a new logical connection and asked me if I wanted a new VHDX or to connect a physical disk. In this case, the physical disk is what we’re after.

select physical hard disk

After clicking OK, the disk immediately appeared in Kali.

In Kali, open the terminal from the launcher at the left:

Kali Linux terminal launch

Use lsblk to verify that Kali can see your disk. I already had my terminal open so that I could perform a before and after for you:

Kali Linux terminal

Remember that Linux enumerates SATA disks in order as sda, sdb, sdc, and so on. So, I would know that the most recently detected disk is sdb even without the before-and-after comparison.
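If several disks are attached and the ordering isn’t obvious, you can capture the device list before and after attaching the drive and diff the two. The sketch below simulates the two captures with literal lists; on the live system each would come from lsblk.

```shell
# Simulated captures; on the live system each list would come from:
#   lsblk -dno NAME
before=$'sda'
after=$'sda\nsdb'

# comm -13 prints lines unique to the second list: the newly attached disk.
new=$(comm -13 <(sort <<<"$before") <(sort <<<"$after"))
echo "$new"
```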

Use shred to Perform the Wipe

Now that we’ve successfully connected the drive, we only need to perform the wipe. We’ll use the “shred” utility for that purpose. It ships as part of GNU coreutils, so Kali, like most distributions, already has it waiting for you.

The shred utility has a number of options. Use shred --help to view them all. In my case, I want to view progress, and I want to increase the number of passes from the default of 3 to 4. I’ve been told that analog readers can sometimes go as far as three layers deep. Apparently, even that is untrue; it seems that a single pass will do the trick. However, old paranoia dies hard. So, four passes it is.

I used:

Kali Linux

And then, I found something else to do. As you can imagine, overwriting every spot on a 250GB laptop disk takes quite some time.
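Based on the options discussed above, the command in the screenshot was presumably something like shred -v -n 4 /dev/sdb. Since running that against a real disk is destructive, here’s the same invocation demonstrated against a throwaway temp file.

```shell
# Same options as the disk wipe (-v for progress, -n 4 for four passes),
# pointed at a scratch file instead of /dev/sdb so it is safe to run anywhere.
target=$(mktemp)
head -c 4096 /dev/zero > "$target"
shred -v -n 4 "$target"

# shred overwrites in place: same size, randomized contents.
size=$(stat -c '%s' "$target")
echo "$size"
```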

Because of the time involved, I needed to temporarily disable Windows 10 sleep mode. Otherwise, Connected Standby would interrupt the process.

disabling sleep mode

After the process completed, I used Hyper-V Manager to remove the disk from the VM. Since I never mounted it in Kali, I didn’t need to do anything special there. After that, I bolted the drive back into the laptop. It’s on its way to its happy new owner, and I don’t need to worry about anyone stealing our information from it.

How to run Kali Linux on Client Hyper-V


Personally, I find Microsoft’s recent moves to improve support for Linux and its overall relationship with open source to be very exciting. I’ve taken full advantage of these new opportunities to rekindle my love for the C and C++ languages and to explore Linux anew. Since my general line of work keeps me focused on the datacenter, I’ve similarly kept tight focus on server Linux builds and within the confines of Microsoft’s support matrix. Sure, I’ve had a good time learning other distributions and comparing them to what I knew. But, I also realize that I’ve been restricting myself to the safe walled garden of enterprise-style deployments. It’s time for something new. For my first step outside the walls, I’m going to take a crack at Kali Linux.

What is Kali Linux?

The Kali Linux project focuses on security. In most of the introductory literature, you’ll find many references to “penetration testing”. With a bit of searching, you’ll find a plethora of guides on using Kali to test the strength of your Windows computers.

The distribution itself is based on Debian. Truthfully, even though I’d like to tell you that we’re going to stray far, far away from the beaten path, we won’t. Almost no one picks up a copy of the Linux kernel and builds an all-new distribution around it. Nearly every maintained distribution connects somewhere into the general distribution categories on Microsoft’s list. Anything else falls under the category of a “source-based” distribution (like Gentoo). I’d need to drastically improve my Linux knowledge to help anyone with one of those.

Why use Kali Linux?

The distributions that I tend to cover in these articles fall best under the category of “general purpose”. In that respect, they have much in common with Windows and Windows Server. You stand up the operating system first, then install whatever applications and servers you need it to operate or provide. Web, DNS, storage, games — anything goes.

Kali Linux has a purpose. You could use it as a general purpose platform, if you want. That’s not an optimal use of the distribution, though. Kali is designed to probe the strength of your environment’s computer security. During install, there won’t be any screens asking you to pick the packages you want to install. You won’t get an opportunity to tick off boxes for LAMP or DNS servers. If you want those things, look at other distributions. Kali Linux is here to pentest, not hand out IP addresses. Err… well… I guess rogue DHCP qualifies as security testing… But, you get the idea.

A natural question, then, is, “So, Eric, what do you know about pentesting?” The answer is: very little. Where I work, we have a security team. I can notify them when I build a new system, and they’ll test it and send me a report. I accept that I will never rise to expert level, if for no other reason than that I don’t have the time. Still, I should know more than I do. Many seasoned sysadmins would be surprised at how easily an attacker can break into a system left at defaults. Since the people behind the Kali Linux project have done all the work to make a convenient entry point, I’m going to take advantage of it. I recommend that you do the same.

Why Use Client Hyper-V for Kali Linux?

I won’t tell you why you should use a Microsoft hypervisor as opposed to some other hypervisor. I use Microsoft platforms and services for almost every aspect of my home and work computing, so my natural choice is to stick with it. If your story is different, then stay with what you know.

I will tell you that Client Hyper-V makes more sense than server Hyper-V. I’ll make an exception for those of you that run Windows Server as your primary desktop. That’s not a thing that I would do, but hey, no judgment here.

Why I use Kali Linux under Client Hyper-V:

  • Kali Linux is best used interactively with a desktop interface. If I were to run Kali from within my datacenter, I’d need to use VMConnect against a remote host. I’ve never liked that.
  • Most attacks won’t come from within the datacenter, so why would your primary penetration testing tool live there? Put it into a user network. Run it from a computer that can access your wired and wireless networks.
  • Hyper-V allows you to perform all sorts of spoofing quickly and easily. You can flip MACs and hop networks in moments. You can hide Kali behind NAT to fool many network access protection schemes and then, within seconds, drop it on the network alongside your host OS.
  • I don’t want to replace my primary desktop. I don’t necessarily need to use any hypervisor; I could just install Kali right on my desktop. I could stand up a second physical machine right next to me and use Kali on that. But this is the sort of thing that hypervisors were built for: more computers in less space. I can keep my general purpose desktop and have the special-purpose Kali running happily alongside it.

Downloading Kali Linux

As a side effect of having a specific purpose, Kali Linux does not provide many install flavors. Start at the Kali Linux homepage. Click the Downloads header at the top of the page. Behold — the list. It looks long, but there’s really not that much there. You’re mostly picking the bitness (most are 64-bit) and the user interface experience that suits you.

This article uses the standard 64-bit distribution of Kali Linux 2017.1. If you choose something else, your experience may be different.

Verifying the ISO File Hash

Since we’re talking security, let’s start by verifying our file. On the Kali download page, next to the file link, you’ll find its SHA256 hash:

Downloading kali linux images(source: https://www.kali.org/downloads/, as of June 17th, 2017)

Use PowerShell to determine the hash:

You’ll get output that looks like the following:

kali linux with client hyper-v

If you’re OK with “good enough”, you can do a quick ‘n’ dirty eye scan — basically, just visually verify that the codes look more or less the same. Even minor changes to a file will throw off the hash substantially. But, it’s not impossible to have two files with a similar hash. And, since we’re talking security, trust no one.

In your PowerShell prompt, do exactly this:

  1. Ensure that you are at the beginning of a new command line; no text entered, just a prompt.
  2. Type a single quote mark: ‘
  3. Use the mouse to highlight the Hash output from the previous command. Press [Enter] to put it on the clipboard. Right-click to paste. That should place the code immediately after the single quote.
  4. Type another single quote mark to close off the file hash.
  5. Enter a space, then -match, then another space.
  6. Type another single quote mark to start a new string.
  7. Highlight the corresponding hash code on the Kali download page. Switch back to the PowerShell prompt and right-click to paste it.
  8. Type another single quote mark to close off the published hash.
  9. Press [Enter].

This is what you should see (with possibly different hash values):


If you get an error, check your input. If you get False, check your input. If the input is OK, then your file does not match the expected hash. Most likely, the download was corrupted. Maybe somebody hijacked it. Either way, download it again.
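If you happen to be doing the check from a Linux machine instead, sha256sum can do the comparison in one step. The sketch below demonstrates the pattern on a scratch file; for the real check, substitute the published hash and the downloaded ISO’s filename.

```shell
# Demonstration on a scratch file; for the real check, echo the published
# hash and the downloaded ISO's filename instead.
f=$(mktemp)
printf 'kali' > "$f"
expected=$(sha256sum "$f" | cut -d' ' -f1)

# "<hash>  <file>" piped to sha256sum -c prints "<file>: OK" on a match.
result=$(echo "$expected  $f" | sha256sum -c -)
echo "$result"
```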

Installing Kali Linux as a Guest in Client Hyper-V

On to the good stuff!

Creating a Hyper-V Virtual Machine for Kali Linux

I do not mean for this article to be a tutorial on creating VMs in Client Hyper-V. I assume that you know how to create a virtual machine, attach an ISO to it, start it up, and connect to its console.

I have a script that I use to create Linux VMs. The more I use it, the more deficiencies I notice. I will someday make this script better. Here’s what I currently have:

This script creates a dynamically-expanding VHDX using a 1 megabyte block size, in accordance with Microsoft’s recommendation. A commenter on another of my Linux articles pointed out that the 1MB block size does not result in significant space savings on every Linux distribution. I have not tested the difference on Kali, but it uses ext4, so I suspect that you’ll want the 1MB block size.

I used the script like this:

It was necessary to pre-create the target VHDX path. That’s one of the deficiencies in the script. It’s also necessary to turn off Secure Boot after creation.

During use, I learned that Kali wants much more memory than 2GB; those memory numbers are somewhat laughable. Be prepared to turn them up. It does seem to run well enough at 2GB, but I suspect that 4GB is a more reasonable working baseline.

Installing Kali Linux from ISO

In case you missed it in the previous section: disable Secure Boot. Hyper-V’s default Secure Boot template does not recognize Kali’s boot signature. I did enable TPM support for the VM, but I don’t yet know whether Kali will make use of it.

From here, I doubt that you really need much from me. Installation of Kali is very straightforward. It shares one annoyance with Ubuntu: it has an obnoxious number of input screens broken up by long file operations, rather than cohesive input gathering followed by completion operations.

An installation walkthrough:

  1. You’re given many options right from the start. I simply chose to Start installer:
    kali linux install
    Note that several errors regarding not being able to find anything on SDA will scroll by; don’t worry about them. That’s normal for an empty disk.
  2. Choose the installation language. I also want to draw attention to the Screenshot button; it appears on every page, so you can store install images for later retrieval:
    kali linux language
  3. Choose your location. Be aware that the options you see are determined by your language selection! The following two screenshots show the outcome of choosing English and French in step 2:
    English Choices

    French Options

  4. Choose your keyboard layout:
    kali linux keyboard
  5. The installer will then load some files and perform basic network configuration. I noticed that it performed IP assignment from DHCP; I did not test to see what happens if it can’t reach a DHCP server.
  6. After the component load, provide a host name. It appears to automatically choose whatever name DHCP held for that IP last. Only provide the name, no domain.
    kali linux network
  7. Provide your domain name. You can invent one if you aren’t using a domain, but you must enter something.
  8. Enter a password for root. Even though it mentions user creation, you aren’t creating a standard user account like you would in other distributions.
    kali linux password
  9. Choose your time zone. Options will be selected based on your earlier region choices. Why it appears at this point of the installer, I certainly do not know.
  10. Choose how you want your disk to be laid out and formatted. I personally choose Guided – use entire disk because I’m not the biggest fan of LVM. Any of the first three choices are fine if you’re new and/or not especially picky.
  11. Confirm disk usage:
  12. Then confirm partition usage:
  13. Confirm disk usage:
  14. And again… (this installer needs a lot of polishing):
  15. Now, your formatting options will be applied and files will be copied. This will take a while and there will be more questions, so don’t go too far.
  16. Now you need to choose whether or not you’ll allow software to be downloaded from the Internet (or a specially configured mirror). If you choose no, you’ll need to manually supply packages or add a repository later.
  17. If you need to enter proxy information, do so now:
  18. You’ll have a few more minutes of configuration, then what appears to be a completion screen.
  19. There’s still more stuff to do, though:
  20. As soon as that part completes, the system will reboot and launch into your new Kali environment.

Getting Started with Kali

Here’s your login screen! Remember to use root, because you didn’t create a regular user:


And finally, your new desktop:


Post-Install Wrap-Up

I know that you’re anxious to start exploring this wonderful new environment, but we’ve got a bit of housekeeping to take care of first.

At the left, in the quick launch bar, hover over the second icon from the top. It should be a black square and the tool tip should say Terminal. Click it to launch a terminal window:


Since we’re running as root, the terminal will already be running with the highest privileges. You can tell by the # prompt, as opposed to the $ prompt that a standard user would see.


Install Extra Hyper-V Services

The required Hyper-V components are already enabled. Let’s add the KVP, VSS, and file copy services. Enter:

This installs the file copy, KVP, and VSS services. Whether they start depends on which of the corresponding integration services are enabled for the virtual machine. The default Hyper-V setting enables all except Guest Services, so all except the file copy daemon should start automatically. Use service --status-all | grep hyperv to find out:


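As a hedge on the package name: on Debian-derived distributions (Kali included), the integration daemons typically ship in a package called hyperv-daemons; treat that name as an assumption and verify it against your own repositories. A quick presence check:

```shell
# hyperv-daemons is the assumed package name on Debian-derived distributions
# (Kali included); it supplies hv_kvp_daemon, hv_vss_daemon, and hv_fcopy_daemon.
pkg=hyperv-daemons
if command -v dpkg >/dev/null 2>&1 && dpkg -s "$pkg" >/dev/null 2>&1; then
  msg="$pkg is already installed"
else
  msg="$pkg not installed; install with: sudo apt-get install -y $pkg"
fi
echo "$msg"
```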
Change the Scheduler to NOOP

Linux has its own I/O scheduler, but so does Hyper-V; there’s no benefit to running both. Turn off Linux’s for the best experience.
Edit the GRUB loader:

This will load the GRUB configuration file. Find the line that says:

Change it to:

Press [CTRL]+[X]. You’ll then need to press [Y] to save the changes, then [Enter] to indicate that you want to save the data back to the file you found it in. That will leave you back at the prompt.

And finally:

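For reference, the change being described is typically the elevator=noop kernel parameter in /etc/default/grub, applied afterward with update-grub; treat the exact line as an assumption, since the default varies between releases. A sketch of the edit, performed on a scratch copy:

```shell
# Sketch on a scratch copy; the real file is /etc/default/grub and the real
# follow-up command is: sudo update-grub. The exact default line may differ.
grub=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$grub"

# Append elevator=noop to the kernel command line so Hyper-V handles I/O scheduling.
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 elevator=noop"/' "$grub"
line=$(cat "$grub")
echo "$line"
```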
Exploring Kali Linux

You have now completed all of your installation and preparation work! It’s time to take Kali for a spin!

If I didn’t make this clear enough earlier, I’ll be crystal clear right now: I don’t know that much about penetration testing. I recognize many of the names of the tools in Kali, but the only one I have a meaningful level of experience with is Wireshark. So, don’t ask me what this stuff does. That’s why we have the Internet and search engines.

Let’s start with the boring things to get them out of the way. In the top right you’ll find some system status icons. Click and you’ll get the system menu:


  • Hyper-V doesn’t (yet?) enable audio out of Linux systems, so the volume slider does nothing.
  • Where my screenshot shows Wired Connected, you’ll find your network settings. Click it to expand the menu where you can access them.
  • Where my screenshot shows Proxy None, you can click to access your proxy settings.
  • Where my screenshot shows root, you can click for a Log Out option and a link to your logged on user’s account settings.
  • The wrench/screwdriver icon takes you to the system settings screen. It’s analogous to Windows’ Control Panel. I don’t think you’ll need me to explain those items, so I’ll just recommend that you create users aside from root if you intend to use this desktop for more than just pentesting.
  • The padlock icon locks the desktop. From a lock screen, just press [Enter] to get a login prompt.
  • The power button icon takes you to a cancel/restart/shutdown dialog.

Move left from the system area, and you’ll see a camera icon (it appears in the screenshot above). Click that, and you can record your screen.

Now, the fun stuff! In the top left, you’ll see Applications and Places menu items. Places includes some shortcuts to common file system locations; it’s sort of like the Quick Access section in Windows Explorer. I’ll leave that to you to play with. Click Applications. You’ll immediately see why Kali is not a garden-variety distribution:


The Usual Applications group gave me a chuckle. You’ll find all the things that you’d find on a “normal” distribution there.

You met the quick launch dash earlier, when you started the terminal. It sits at the left of the screen and contains everything marked as a favorite. It will also include icons for running applications. The nine-dot grid at the bottom opens up Kali/Gnome’s equivalent to Windows’ Start menu. From there, you can launch any item on your system. You can also add items to the Favorites/Dash area:


Get Testing!

You’ve got your shiny new Kali install ready to roll. Kick the tires and see what you can accomplish.

Oh, and remember that we’re the good guys. Use these tools responsibly.

How to run openSUSE Leap Linux on Hyper-V



I’ve written articles about using two popular Linux server distributions, Ubuntu and CentOS, on Hyper-V. Those distributions have large, strong communities, but truthfully, I chose them primarily because of my own familiarity. I decided that I should start branching out into other popular offerings. So, as you probably discovered from the title, this article will introduce openSUSE Leap on Hyper-V.

If you’ve been on the fence about incorporating Linux into your environment, then you have been waiting for this article.

About openSUSE and openSUSE Leap

The SUSE distribution family provides substantial offerings. openSUSE sits at the root. SUSE, who provides the impressive enterprise stack, builds upon openSUSE, not the other way around. I didn’t spend a great deal of time researching those enterprise products, but they are doing some good work, especially in the management space. All of those products include a price tag, however. I’m not opposed to a company turning a profit from its work, but I’m assuming that most of you are here because your price range hovers at “free”.

openSUSE can meet that price point. It also offers enterprise-grade quality. There are two branches of openSUSE. The first is Tumbleweed. Its name signifies its philosophy as a rolling release. Its products, components, and packages receive near-continuous updates. According to its blurb, it targets developers and desktop users that want cutting/bleeding edge technology.

Leap is the second openSUSE offering. It operates on the more familiar regulated release cycle. So, you won’t find the absolute latest packages in Leap, but you also won’t need to worry (as much) about breaking any third-party software that your organization relies upon.

Why openSUSE Leap?

Before we can decide between Tumbleweed and Leap, we must address a more pressing question: why choose openSUSE at all? As I’ve said before, I don’t feel strongly about any distribution. I know that some rigidly adhere to a specific distribution and they all have their reasons. I just want whatever gets the job done with minimal frustration.

I like Ubuntu, but I find that its refusal to allow remote connection by the root account causes me more harm than good. sudo minimizes the issue for SSH, to be sure. However, I was recently tasked with some involved work on Apache configuration files, which are root-owned. I really needed mouse-driven copy/paste functionality. None of my solutions were elegant, and most caused me problems at one point or another. Also, I have some concerns about the long-term direction of the Ubuntu project. So, while I find the server edition of Ubuntu easy enough to use, it’s no longer my first choice.

I’ve been working with CentOS more ever since writing my article on it. It’s growing on me; I confess to having developed some level of fondness for it. However, it’s a bit slower on release cycles than I would like, so if you’re mostly chasing current versions of popular FOSS projects (such as a LAMP stack), then CentOS might not be your best choice. It’s difficult to match the certainty that CentOS offers, though. If your organization uses software provided by a third party and they prefer CentOS, then choose CentOS.

Now we arrive at openSUSE. I must say, they sort of had me at hello:


Truthfully, I was hooked by the management capabilities. As I started working with my first openSUSE system, I did what I knew from CentOS and Ubuntu. Things mostly worked, but I felt a little disappointed with the package management system. Specifically, I wasn’t entirely certain how to get it to remove unreferenced package dependencies. So, I did some searching, and was directed to a little gem called YAST:


YAST is a character-mode menu-based management system for openSUSE. If you’re not quite ready to jump from graphical Windows to command-line Linux, YAST can carry you over the divide.

Underneath all of that, openSUSE uses rpm. That means that you’ll be able to run many things on openSUSE that you could run on Red Hat’s derivations.

Why Leap instead of Tumbleweed?

Personally, I would choose Leap for my datacenter. Leap is more predictable, and in a sense more reliable. Since we’re installing under Hyper-V and don’t care about driver updates, Tumbleweed is a safer choice than it would be when directly installed on a hardware platform, but regular release cycles always make our vendors feel better. openSUSE’s Tumbleweed home page also talks about making the choice. My todo list contains an entry to fire up Tumbleweed on my Client Hyper-V installation, but I’m going to use Leap on my server platforms.

Downloading openSUSE Leap

Acquiring the software is your first step. I would start on the Leap homepage, as the download page will change with the version number. As the site exists today, a relatively large Install Leap button sits prominently in the center. Click it to go to the download page.

On the download page, you can choose between the full 4.7 gigabyte DVD package or a network-based install image. Unlike the other distributions that I’ve used, you can’t choose any sort of a minimal installer ISO. If you’re only going to be installing one or two instances or you have a really big Internet pipe and would rather not store bits, then the network installer will suit you fine. For me, I chose the full download. That’s what the following instructions use.

How to Build a Hyper-V Virtual Machine for openSUSE Leap

Like the other distributions, Leap does not demand many resources. I use the same build for Leap virtual machines that I do for Ubuntu Server and CentOS:

  • 2 vCPUs, no reservation. All modern operating systems work noticeably better when they can schedule two threads as opposed to one. You can turn it up later if your deployment needs more.
  • Dynamic Memory on; 512MB startup memory, 256MB minimum memory, 1GB maximum memory. You can always adjust Dynamic Memory’s maximum upward, even when the VM is active. Start low.
  • 40GB disk is probably much more than you’ll ever need. I use a dynamically expanding VHDX because there’s no reason not to. The published best practice is to create this with a forced 1 megabyte block size, which must be done in PowerShell. I didn’t do this on my first several Linux VMs and noticed that they do use several gigabytes more space, although still well under 10 apiece. I leave the choice to you.
  • I initially had troubles using Generation 2 VMs with Linux, but I’m having better luck recently. If you use Generation 2 with your Leap VMs on Hyper-V 2012 R2/8.1 or earlier, remember to disable Secure Boot. If using 2016, you can leave Secure Boot enabled as long as you select the “Microsoft UEFI Certificate Authority” template.
  • If your Hyper-V host is a member of a failover cluster and the Linux VM will be HA, use a static MAC address. Linux doesn’t respond well when its MAC addresses change.

The following is a sample script that you can use or modify to create a Linux virtual machine in Hyper-V:
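A sketch of what such a script looks like, built from the settings in the list above. The VM name, file paths, and virtual switch name are placeholders; adjust them for your environment:

```powershell
# Sketch only -- names and paths are examples, not requirements
$VMName = 'svlinux'
$VHDPath = 'C:\VMs\Virtual Hard Disks\svlinux.vhdx'

# Dynamically expanding VHDX with the forced 1 megabyte block size mentioned above
New-VHD -Path $VHDPath -SizeBytes 40GB -Dynamic -BlockSizeBytes 1MB

# Generation 2 VM with 512MB startup memory, attached to an existing virtual switch
New-VM -Name $VMName -Generation 2 -MemoryStartupBytes 512MB -VHDPath $VHDPath -SwitchName 'vSwitch'

# 2 vCPUs; Dynamic Memory from 256MB minimum to 1GB maximum
Set-VM -Name $VMName -ProcessorCount 2 -DynamicMemory -MemoryMinimumBytes 256MB -MemoryMaximumBytes 1GB

# On 2016, Linux Secure Boot needs the Microsoft UEFI Certificate Authority template
Set-VMFirmware -VMName $VMName -SecureBootTemplate 'MicrosoftUEFICertificateAuthority'

# Attach the installation ISO and boot from it
Add-VMDvdDrive -VMName $VMName -Path 'C:\ISO\openSUSE-Leap-DVD-x86_64.iso'
Set-VMFirmware -VMName $VMName -FirstBootDevice (Get-VMDvdDrive -VMName $VMName)
```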

Note: I have been using a 1 megabyte block size (instead of the default 32MB) for all Linux virtual machines since I learned about the dramatic difference it made for Ubuntu systems (which use the ext4 file system by default). A reader pointed out on my CentOS article that it didn’t seem to matter for that distribution. My untested assumption is that the xfs file system that CentOS uses made the difference. Leap uses btrfs by default. I have not tested alternative block sizes with that file system, either.

Installing openSUSE Leap on Hyper-V

If you followed the script above, then you have a virtual machine with the installation ISO attached. Otherwise, you’ll need to create your own VM and manually attach the ISO. However you get there, start up your new virtual machine with the ISO mounted and its virtual DVD drive selected as the primary boot device.

openSUSE’s installer is polished and smooth. I’d say its presentation shames all other Linux distributions that I’ve tried. However, it has many steps; more than I feel are necessary to install an operating system.

My VM (on Server 2016) uses Secure Boot. If yours is the same, then you can choose Yes at the initial screen requesting openSUSE certificate validation:


Choose Installation to get started:


The installation begins with a fairly strange combination of items: the EULA acceptance screen includes your language and keyboard selection. In my case, that all worked out, but if you want something other than the defaults, take care not to click through this screen too quickly:


Once you accept the EULA, the installer will scan your system and perform some preloading. Just click Next once that completes.


If you want to add non-default software repositories and packages during the install phase, you have that option here. I tend to stick with the same OS install philosophy that I’ve held since Windows 95: do the dead minimum necessary to install the OS. It’s easier to build on a successful minimal installation than it is to fix a complicated broken installation. I recommend that you leave these checkboxes empty and click Next.


This screen looks scary, but don’t worry. It’s just showing you how it wants to use the disk. Each line shows a step in that process. Many of those lines deal with “subvolumes”, which are a feature of the btrfs file system. Essentially, the system can perform targeted snapshots of each subvolume. So, if you have a mariadb installation that places its data files into /var/lib/mariadb, then you can instruct the system to snapshot that location if you have a use for that. You might be thinking, “But I’ll never use mariadb on this system!” In that case, you can do nothing; the subvolume will just exist, not doing anything or consuming any space. Alternatively, you can click the Edit Proposal Settings button to make any changes as you see fit. I simply take the defaults, myself.


Next you’ll select your time zone. If you have special circumstances, use the Other Settings button to explore related options.


Your effort so far is rewarded with another fun EULA, this time for the main openSUSE repository. Yay.


Now, choose your operating environment. I only install server mode. I don’t know what the current graphical requirements are for either KDE or Gnome and how well a Hyper-V virtual machine can suit them. I’d like to explore those options, but not today.


Create your personal administrative user account. I did test the Import User Data from a Previous Installation with a CentOS install, and it works perfectly well. It does not retain the old installation itself; it merely transfers information.


This screen looks like a simple summary page, but it needs some serious attention. Don’t just happily click Install. First, click enable next to SSH service will be disabled. Doing so ensures that you can remotely manage the system. You can enable SSH later if you miss it. For the second part, click System and Hardware Settings. Instructions follow this screenshot.


Clicking System and Hardware Settings brings you to the Detected Hardware screen. At the bottom of this screen, click the Kernel Settings button. Instructions follow this screenshot.


You should now see the Kernel Settings screen. Switch to the Kernel Settings tab. Under Global I/O Scheduler, select NOOP [noop]. Click OK on this screen and the previous screen and you will be returned to the summary page. Now you can click Install.


Your work is complete. You can relax while openSUSE installs. You can also flip through the tabs to view Details and the release notes, if you’d like. Once installation finishes, reboot into your new openSUSE install.


openSUSE Post-Install Wrap-up for Hyper-V

I’ve added this section because it appears on my Ubuntu and CentOS posts. openSUSE installs all of the Hyper-V tools right out of the box and you selected the I/O scheduler during install, so there’s really nothing left to prepare. If you’re already familiar with Linux, I have nothing else for you. If you forgot to enable SSH or change the scheduler, I’ll use them as examples when I demonstrate YAST later in this article.

Before moving along, I recommend that you double-check Microsoft’s page regarding Hyper-V support for SUSE. There wasn’t anything to do the last time that I looked, but the page might change and/or you might find some interesting data on the feature charts. At the very bottom of that page, you can find a link to Microsoft’s best practices for Linux on Hyper-V, which might also contain interesting information for you.

10 Tips for Getting Started with openSUSE Leap Linux on Hyper-V

This section is for those with Windows backgrounds. If you already know Linux, you probably won’t get anything out of this section. I will write it from the perspective of a seasoned Windows user. Nothing here should be taken as a slight against Linux.

1. Text Acts Very Differently

Above all, remember this: Linux is CaSE-SENsiTiVe.

  • yast and Yast are two different things. The first is a command. The second is a mistake.
  • File and directory names must always be typed exactly.

Password fields do not echo anything to the screen.

2. Things Go the Wrong Way

In Windows, you’re used to C: drives and D: drives and SMB shares that start with \\.

In Linux, everything begins with the root, which is just a single /. Absolutely everything hangs off of the root in some fashion. You don’t have a D: drive. Starting from /, you have a dev location, and drives are mounted there. For the SATA virtual drives in your Hyper-V machine, they’ll all be sda, sdb, sdc, etc. Partitions will then be numbered. So, /dev/sda2 would be the equivalent to your Windows D: drive if it’s the second partition on the first virtual drive.

Directory separators are slashes (/) not backslashes (\). A directory that you’ll become familiar with is usr. It lives at /usr.

Moving around the file system should be familiar, as the Windows command line uses similar commands. Linux typically uses ls where Windows uses dir, but CentOS accepts dir. cd and mkdir work as they do on Windows. Use rm to delete things. Use cp to copy things. Use mv to move things.

Running an executable in the folder that you currently occupy by just typing its name does not work. PowerShell behaves the same way, so that may not be strange to you. Use dot and slash to run a script or binary in the same folder:


Linux doesn’t use file extensions. Instead, it uses attributes. So, if you create the equivalent of a batch file and then try to execute it, Linux won’t have any idea what you want to do. You need to mark it as executable first. Do so like this:

As you might expect, -x removes the executable attribute.

The default Linux shell does have tab completion, but it’s not the same as what you find on Windows. It will only work for files and directories, for starters. Second, it doesn’t cycle through possibilities the way that PowerShell does. The first tab press works if there is only one way for the completion to work. A second tab press will show you all possible options. You can use other shells with more power than the default, although I’ve never done it.

3. Quick Help is Available

Most commands and applications have a -h and/or a --help parameter that will give you some information on running them. --help is often more detailed than -h. You can sometimes type man commandname to get other help (“man” is short for “manual”). It’s not as consistent as PowerShell help, but then PowerShell’s designers got to work with the benefits of hindsight and rigidly controlled design and distribution.
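For example, with ls:

```shell
ls --help | head -n 5   # built-in quick help, first few lines
# man ls                # full manual page, when man pages are installed
```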

4. You Can Go Home

You’ve got your own home folder, which is the rough equivalent of the “My Documents” folder in Windows. It’s at the universal alias ~. So, cd ~ takes you to your home folder. You can reference files in it with ~/filename.

5. Boss Mode

“root” is the equivalent of “Administrator” on Windows. But, the account you made has nearly the same powers — although not exactly on demand. You won’t have root powers until you specifically ask for them with “sudo”. It’s sort of like “Run as administrator” in Windows, but a lot easier. In fact, the first time you use sudo, the intro text tells you a little bit about it:


So basically, if you’re going to do something that needs admin powers, you just type “sudo” before the command, just like it says. The first time, it will ask for a password. It will remember it for a while after that. However, 99% of what I do is administrative stuff, so I pop myself into a sudo session that persists until I exit, like this:
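That looks like this:

```shell
sudo -s   # opens a root shell that lasts until you type "exit"
```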

You’ll have to enter your password once, and then you’ll be in sudo mode. You can tell that you’re in sudo mode because the dollar sign in te prompt will change to a hash sign:


I only use Linux for administrative work, so I commonly use sudo -s. However, I always log in with my “Eric” account. Remember that even when it’s not in sudo mode, it’s still respected as an admin-level account. If you will be using a Linux system as your primary (i.e., you’ll be logged in often), create a non-administrative account to use. You can flip to your admin account or root anytime:
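su (“switch user”) handles that; the account name here is just an example:

```shell
su - eric   # switch to the hypothetical "eric" admin account; prompts for its password
su -        # switch to root; prompts for root's password
```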

Always respect the power of these accounts.

6. “Exit” Means Never Having to Say Goodbye

People accustomed to GUIs with big red Xs sometimes struggle with character mode environments. “exit” works to end any session. If you’re layered in, as in with sudo or su, you may need to type “exit” a few times. “logout” works in most, but not all contexts.

7. Single-Session is for Wimps

One of the really nifty things about Linux is multiple concurrent sessions. When you first connect, you’re in terminal 1 (tty1). Press [Alt]+[Right Arrow]. Now you’re in tty2! Keep going. tty6 wraps back around to tty1. [Alt]+[Left Arrow] goes the other way.

You need to be logged in to determine which terminal you’re viewing. Just type tty.

8. Patches, Updates, and Installations, Oh My.

Pretty much all applications and OS components are “packages”. “yast”, “zypper”, and “rpm” are your package managers. yast and zypper work together, but they’re a bit disjointed from rpm. Until you get some experience, I would recommend using yast. I have an upcoming section that deals with yast, so I’ll save major package management for that part.

I think my favorite tool on openSUSE is cnf. This nifty tool, which is an acronym for “command not found”, will hunt down the package that contains a command that you want to use. So, let’s say that I’m trying to perform some DNS lookups but nslookup won’t run. I first try to use zypper to install nslookup, but there’s no such package. So, I just use cnf nslookup. In my particular case, I really do have the proper package installed, but it tells me what package it’s in anyway:

opensuse_cnfIf I didn’t have it, I could use zypper to install it:

To locate, download, and install updates for packages on your system:
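With zypper, that looks like:

```shell
sudo zypper update
```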

If you want patches:
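The patch equivalent:

```shell
sudo zypper patch
```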

I’m not entirely clear on the distinction between patches and updates; items appear in each that do not appear in the other. My assumption would be that patches would keep your software within the same version where updates would include version updates, but my simple perusal doesn’t seem to support that. If you want everything, I would use update first, then patch.

You can remove packages with zypper remove (take care to use the -u switch every time!):
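The package name here is a placeholder:

```shell
sudo zypper remove -u packagename   # -u also removes dependencies that nothing else needs
```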

zypper does not appear to include an autoremove option like apt and yum. I recommend using YAST for package management. If you want to use zypper, take care to always use -u when removing software.

List all available packages:
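One way to do that:

```shell
zypper packages
```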

Search for one or more packages by name:
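For example:

```shell
zypper search nano   # or the short form: zypper se nano
```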

Use zypper by itself to see all available options.

9. System Control

Linux’s equivalent to Task Manager is top. Type top at a command prompt and you’ll be taken right to it. Use the up and down arrows and page up and page down to move through the list. Type a question mark [?] to be taken to the help menu that will show you what else you can do. Type [Q] to quit.

10. OK, I’m Done Now

If you’ve used the shutdown command in Windows, then you’ll have little trouble transitioning to Linux. shutdown tells Linux to shut down gracefully with a 1 minute timer. All active sessions get a banner telling them what’s coming.

Immediate shutdown (my favorite):
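With the now argument:

```shell
sudo shutdown now
```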

You can use -r to reboot, if you like:
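For example:

```shell
sudo shutdown -r now   # reboot immediately instead of powering off
```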

There’s also an -H switch, which I take to mean “halt the system immediately without waiting for anything to shut down”. I don’t use that.

There’s also a convenient reboot command, if you don’t want to use shutdown. It’s very simple:

Useful Tools for openSUSE Leap

Manipulating your Leap environment from the VMConnect console will get tiring quickly. Here are some tools to make managing it much easier.


YAST (Yet Another Setup Tool) alone makes openSUSE stand above most other distributions when it comes to ease of management. Since you’re installing in a Hyper-V virtual machine, you’ll want to work with it remotely. Make sure to take a peek at the upcoming PuTTY instructions to optimize your experience (by that, I mean that YAST looks terrible under PuTTY defaults).

Get comfortable with your keyboard. You might be tempted to use your mouse in YAST — no dice. If you look at the sections and menu headings, they’ll have one letter that’s a different color from the rest. Use [Alt] with that key to jump to that section or function. For instance, [Alt]+[C] will cancel most screens and return to the previous. [Alt]+[A] usually accepts any changes (like an OK button). [Tab] cycles through sections.

Also, be patient. YAST is not terribly fast. If you have a list with many items, don’t hold your arrow key down. You’ll regret it.

Always start yast as root:
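Like so:

```shell
sudo yast
```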

Let’s look at a few common YAST functions.

YAST Software Management

YAST loads up to its Software page. [Tab] or right arrow to jump to the other side, where you can run an online update. Arrow down to Software Management and press [Enter] to work with software packages.

If you hit [Enter], every package state in every known repository will be listed. Expect that to take a very long time and be more or less unmanageable. Rather than that, press [Alt]+[P] to jump to the Search Phrase box and enter something to look for. As an example, I’ll search for “apache”, which will load entries related to the Apache web server (and other Apache projects). Pressing [Enter] from the Search Phrase box will retrieve all matches, enter them into the list box on the right, and switch input focus to that screen. From there, you can use the up and down arrow keys to scroll the list:


An item with “i” beside it is currently installed. Items with a + will be installed once you use [Alt]+[A] or [F10]. Items with a - will be removed. Use the spacebar to move through all possible actions. You can also press [Alt]+[T] to expand the actions sub-dialog. That will show you the options; pressing [Enter] on one will apply it to the currently selected item. Use [ESC] to close it, then [Shift]+[Tab] to switch back to the list. You can go back to the Search Phrase block and get new items without disturbing any selections that you’ve already made, so you can get all necessary packages in a single visit.

If you don’t have nano and/or the openssh packages on your system already, use this time to practice locating and installing packages.

YAST will automatically take care of dependencies for you. I prefer zypper when installing many packages at once so that I don’t have to hunt through several lists. However, I recommend that you use yast for all of your package removal management activities. It will ensure that you won’t wind up with orphaned dependencies.

Use yast to Change Kernel Settings

Did you forget to change the I/O scheduler to NOOP during install? No problem.

  1. From the main page of yast, arrow down to System on the left, then [Tab] or right arrow to the system submenu.
  2. Arrow down to Kernel Settings and press [Enter].
  3. Press [Alt]+[K] to switch to the Kernel Settings tab.
  4. Press [Alt]+[I] to open the scheduler options list. Arrow down to NOOP [noop] and press [Enter].
  5. Press [Alt]+[O] to apply the changes. You’ll be automatically returned to the main menu.

Use yast to Set Host Name and IP

My openSUSE system picked up some random name that got lodged in my DHCP server; I think it was for my Kindle. Whatever you got, you’d probably like to change it. If you’re going to assign a static IP, you’ll go to nearly the same place.

  1. From the yast main screen, arrow down to System, then press [Tab] or right arrow to move to the system submenu.
  2. Arrow down to Network Settings and press [Enter].
  3. [Alt]+[C] if prompted to install SuSEfirewall (unless you want the firewall, which I won’t cover)
  4. You will start on the Overview. If you want to set the IP address:
    1. [Tab] down to the desired adapter and press [Alt]+[I] to edit (IP address)
    2. Press [Alt]+[T] to change to Static assignment. [Tab] to enter data into the relevant fields.
    3. Press [Alt]+[N] to accept the assignment and return to the Overview.
  5. To change the host name:
    1. [Tab] to select the menu row and then right arrow to Hostname/DNS, or just press [Alt]+[S].
    2. [Tab] through fields, entering data as necessary; change Set Hostname via DHCP to No. If items are set via DHCP (like the DNS servers), then you don’t need to enter them.
    3. [F10] or [Alt]+[O] to accept.
  6. To change the system’s default gateway (unless delivered via DHCP):
    1. [Tab] to the menu bar or press [Alt]+[U] to jump to the Routing tab.
    2. [Tab] or hotkey through the fields. Enter IPv4/IPv6 default gateway(s) as necessary. If you have additional routing requirements, use the Add, Edit, and Delete functions as appropriate.
    3. [F10] or [Alt]+[O] to accept.

YAST has many more features than I have energy to describe. Explore!

Text Editors

My preferred character mode is nano. Just type nano at any prompt and press [Enter] and you’ll be in the nano screen. The toolbar at the bottom shows you what key presses are necessary to do things, ex: [CTRL]+[X] to exit. Don’t forget to start it with sudo if you need to change protected files.

The remote text editing tool that I use from my Windows desktop is Notepad++. You can use it alongside WinSCP (shown in a bit) for more reliable operations, but it can connect to a Linux machine on its own. It is a little flaky — I sometimes get Access Denied errors with it that I don’t get in any other remote tool (setting it to Active mode seems to help a little). But, the price is hard to beat. If I run into real problems, I run things through my home folder. To connect Notepad++ to your host:

  1. In NPP, go to Plugins->NppFTP->Show NppFTP Window (only click if it’s not checked):
    NPP FTP Window Selector



  2. The NppFTP window pane will appear at the far right. Click the icon that looks like a gear (which is, unfortunately, gray in color so it always looks disabled), then click Profile Settings:NPP FTP Profile Item
  3. In the Profile Settings window, click the Add New button. This will give you a small window where you can provide the name of the profile you’re creating. I normally use the name of the system.
    Add FTP Profile
  4. All the controls will now be activated.
    1. In the Hostname field, enter the DNS name or the IP address of the system you’re connecting to (if you’re reading straight through, you might not know this yet).
    2. Change the Connection type to SFTP.
    3. If you want, save the user name and password. I don’t know how secure this is. I usually enter my name and check Ask for password. If you don’t check that and don’t enter a password, it will assume a blank password.
      NPP FTP Profile



  5. You can continue adding others or changing anything you like (I suggest going to the Transfers tab and setting the mode to Active). Click Close when ready.
  6. To connect, click the Connect icon which will now be blue-ish. It will have a drop-down list where you can choose the profile to connect to.NPP FTP Connect
  7. On your first connection, you’ll have to accept the host’s key:NPP Host Key
  8. If the connection is successful, you’ll attach to your home folder on the remote system. Double-clicking an item will attempt to load it. Using the save commands in NPP will save back to the Linux system directly.NPP FTP Directory

Remember that NPP is a Windows app, and as a Windows app, it wants to save files in Windows format (I know, weird, right?). Windows expects that files encoded in human-readable formats will end lines using a carriage-return character and a linefeed character (CRLF, commonly seen escaped as \r\n). Linux only uses the linefeed character (LF, commonly seen escaped as \n). Some things in Linux will choke if they encounter a carriage return. Any time you’re using NPP to edit a Linux file, go to Edit -> EOL Conversion -> UNIX/OSX Format.

NPP EOL Conversion




WinSCP allows you to move files back and forth between your Windows machine and a Linux system. It doesn’t have the weird permissions barriers that Notepad++ struggles with, but it also doesn’t have its editing powers.

  1. Download and install WinSCP. I prefer the Commander view but do as you like.
  2. In the Login dialog, highlight New Site and fill in the host’s information (while not shown here, I recommend that you change the File protocol to SCP):WinSCP Profiles
  3. Click Save to keep the profile. It will present a small dialog asking you to customize how it’s saved. You can change the name or create folders or whatever you like.
  4. With the host entry highlighted, click Login. You’ll be prompted with a key on first connect:WinSCP Key
  5. Upon clicking Yes, you’ll be connected to the home folder. If you get a prompt that it’s listening on FTP, something went awry because the install process we followed does not include FTP. Check the information that you plugged in and try the connection again.
  6. WinSCP integrates with the taskbar for quick launching:WinSCP Taskbar
  7. Right-click on any remote file and hover over Edit. Then click Configure. Use the Add button to add your own editors (like Notepad++). Whatever item appears on top will be used whenever you double-click an object. I would be wary of any editor that doesn’t understand UNIX EOLs:



The biggest tool in your Linux-controlling arsenal will be PuTTY. This gem is an SSH client for Windows. SSH (secure shell) is how you remote control Linux systems. Use it instead of Hyper-V’s virtual machine connection. You can use it from just about anywhere and, even better, you can scroll the output window. At its core, SSH is really just a remote console. PuTTY, however, adds functionality on top of that. It can keep sessions and it gives you dead-simple copy/paste functionality. Highlight text, and it’s copied. Right-click the window, and it’s pasted at the cursor location.

  1. Download PuTTY. I use the installer package myself, but do as you like.
  2. Type in the host name or IP address in that field.PuTTY Profiles
  3. If you’ll be using YAST in SSH, change to the Data tab under the Connection section. In Terminal-type string, change the current setting to linux. Change back to the Session tab before proceeding.
  4. PuTTY doesn’t let you save credentials. But, you can save the session. Type a name for it in the Saved Sessions field and then click Save to add it to the list. Clicking Load on an item, or double-clicking it, will populate the connection field with the saved details.
  5. Click Open when ready. On the first connection, you’ll have to accept the host key:PuTTY Key
  6. You’ll then have to enter your login name and password. Then you’ll be brought to the same type of screen that you saw in the console.
  7. Right-click the title bar of PuTTY for a powerful menu. The menu items change based on the session status. I have restarted the operating system for the screenshot below so that you can see the Restart Session item. This allows you to quickly reconnect to a system that you dropped from… say, because you restarted it.PuTTY Menu
  8. PuTTY also has taskbar integration:PuTTY Taskbar
  9. When you’re all done, remember to use “exit” to end your session.

Your Journey Has Begun

From here, I leave you to explore your fresh new Linux environment. I’ll be back soon with an article on installing Nagios in openSUSE Leap so you can monitor your Hyper-V environment at no cost.

How to Compact a VHDX with a Linux Filesystem

How to Compact a VHDX with a Linux Filesystem


Microsoft’s compact tool for VHD/X works by deleting empty blocks. “Empty” doesn’t always mean what you might think, though. When you delete a file, almost every file system simply removes its entry from the allocation table. That means that those blocks still contain data; the system simply removes all indexing and ownership. So, those blocks are not empty. They are unused. When a VHDX contains file systems that the VHDX driver recognizes, it can work intelligently with the contained allocation table to remove unused blocks, even if they still contain data. When a VHDX contains file systems commonly found on Linux (such as the various iterations of ext), the system needs some help.

Making Some Space

Before we start, a warning: don’t even bother with this unless you can reclaim a lot of space. There is no value in compacting a VHDX just because it exists. In my case, I had something go awry in my system that caused the initramfs system to write gigabytes of data to its temporary folder. My VHDX that ordinarily used around 5 GB ballooned to 50GB in a short period of time.

Begin by getting your bearings. df can show you how much space is in use. I neglected to get a screen shot prior to writing this article, but this is what I have now:
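The -h switch presents the sizes in human-readable units:

```shell
df -h   # disk usage per mounted file system, in human-readable units
```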


At this time, I’m sitting at a healthy 5% usage. When I began, I had 80% usage.

Clean up as much as you can. Use apt autoremove, apt autoclean, and apt clean on systems that use apt. Use yum clean all on yum systems. Check your /var/tmp folder. If you’re not sure what’s consuming all of your data, du can help. To keep it manageable, target specific folders. You can save the results to a file like this:

You can then open the /home/<your account>/var-temp-du file using WinSCP. It’s a tab-delimited file, so you can manipulate it easily. Paste into Excel, and you can sort by size.

More user-friendly downloadable tools exist. I tried gt5 with some luck.

As I mentioned before, I had gigabytes of files in /var/tmp created by initramfs. I’m not sure what it used to create the names, but they all started with “initramfs”. So, I removed them that way: rm /var/tmp/initramfs* -r. That alone brought me down to the lovely number that you see above. However, as you’re well aware, the VHDX remains at its expanded size.

Don’t forget to df after cleanup! If the usage hasn’t changed much, then I’d stop here and either find something else to delete or find something else to do altogether.

Zeroing a VHDX with an ext Filesystem

I assume that this process will work with any file system at all, but I’ve only tested with ext4. Your mileage may vary.

Because the VHDX cannot parse the file system, it can only remove blocks that contain all zeros. With that knowledge, we now have a goal: zero out unused blocks. We’ll need to do that from within the guest.

Preferred Method: fstrim

My personal favorite method for handling this is the “fstrim” utility. Reasons:

  • fstrim works very quickly
  • fstrim doesn’t cause unnecessary wear on SSDs but still works on spinning rust
  • fstrim ships in the default tool set of most distributions
  • fstrim is ridiculously simple to use


On my system that had recently shed over 70 GB of fat, fstrim completed in about 5 seconds.

Note: according to some notes that I found for Ubuntu, it automatically performs an fstrim periodically. I assume that you’re here because you want this done now, so this information probably serves mostly as FYI.

Alternative Zeroing Methods

If fstrim doesn’t work for you, then we need to look at tools designed to write zeros to unused blocks.

I would caution you away from using security tools. They commonly make multiple passes of non-zero writes for security purposes on magnetic media. That’s because an analog reader can detect charge levels that are too low to register as a “1” on your drive’s internal digital head. They can interpret them as earlier write operations. After three forced writes to the same location, even analog equipment won’t read anything. On an SSD, though, those writes will mostly reduce its lifespan. Also, non-zero writes are utterly pointless for what we’re doing. Some security tools will write all zeros. That’s better, but they also make multiple passes. We only need one.

Create a File from /dev/zero

Linux includes a nifty built-in tool that just generates zeroes until you stop asking. You can leverage it by “reading” from it and outputting to a file that you create just for this purpose.

On a physical system, this operation would always take a very long time because it literally writes zeros to every unused block in the file system. Hyper-V will realize that the bits being written are zeroes. So, when it hits a block that hasn’t already been expanded, it will just ignore the write. However, the blocks that do contain data will be zeroed, so this can still take some time. So, it’s not nearly as fast as fstrim, but it’s also not going to make the VHDX grow any larger than it already is.
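A common sketch of the technique (the file name is arbitrary):

```shell
# write zeros until the file system is full; dd ends with a
# "No space left on device" error, which is expected here
sudo dd if=/dev/zero of=/zerofile bs=1M
sync               # make sure the zeros actually hit the disk
sudo rm /zerofile  # give the space back
```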


zerofree

The “zerofree” package can be installed with your package manager from the default repository (on most distributions). It has major issues that might be show-stoppers:

  • I couldn’t find any way to make it work with LVM volumes. I found some people that did, but their directions didn’t work for me. That might be because of my disk system, because…
  • It’s not recommend for ext4 or xfs file systems. If your Linux system began life as a recent version, you’re probably using ext4 or xfs.
  • Zerofree can’t work with mounted file systems. That means that it can’t work with your active primary file system.
  • You’ll need to detach it and attach it to another Linux guest. You could also use something like a bootable recovery disk that has zerofree.

If you mount it in a foreign system, run sudo lsblk -f to locate the attached disk and file systems:

Verify that the target volume/file system does not appear in df. If it shows up in that list, you’ll need to unmount it before you can work with it.

I’ve highlighted the only volume on my added disk that is safe to work with. It’s a tiny system volume in my case so zeroing it probably won’t do a single thing for me. I’m showing you this in the event that you have an ext2 or ext3 file system in one of your own Linux guests with a meaningful amount of space to free. Once you’ve located the correct partition whose free space you wish to clear:


In my research for this article, I found a number of search hits that looked somewhat promising. If nothing here works for you, look for other ways. Remember that your goal is to zero out the unused space in your Linux file system.

Compact the VHDX

The compact process itself does not differ, regardless of the contained file system. If you already know how to compact a dynamically-expanding VHDX, you’ll learn nothing else from me here.

As with the file delete process, I always recommend that you look at the VHDX in Explorer or the directory listing of a command/PowerShell prompt so that you have a “before” idea of the file.

Use PowerShell to Compact a Dynamically-Expanding VHDX

The owning virtual machine must be Off or Saved. Do not compact a VHDX that is a parent of a differencing disk. It might work, but really, it’s not worth taking any risks.

Use the Optimize-VHD cmdlet to compact a VHDX:
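The path below is a placeholder; substitute the location of your own VHDX:

```
Optimize-VHD -Path 'C:\LocalVMs\svlinux.vhdx' -Mode Full
```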

The help for that cmdlet indicates that -Mode Full “scans for zero blocks and reclaims unused blocks”. However, it then goes on to say that the VHDX must be mounted in read-only mode for that to work. The wording is unclear and can lead to confusion. The zero block scan should always work. The unused block part requires the host to be able to read the contained file system — that’s why it needs to be mounted. The contained file system must also be NTFS for that to work at all. All of that only applies to blocks that are unused but not zeroed. The above exercise zeroed those unused blocks. So, this will work for Linux file systems without mounting.

Use Hyper-V Manager to Compact a Dynamically-Expanding VHDX

Hyper-V Manager connects you to a VHDX tool to provide “editing” capabilities. The options for “editing” include compacting. It can work for VHDXs that are attached to a VM or are sitting idle.

Start the Edit Wizard on a VM-Attached VHDX

The virtual machine must be Off or Saved. If the virtual machine has checkpoints, you will be compacting the active VHDX.

Open the property sheet for the virtual machine. On the left, highlight the disk to compact. On the right, click the Edit button.


Jump past the next sub-section to continue.

Start the Edit Wizard on a Detached VHDX

The VHDX compact tool that Hyper-V Manager uses relies on a Hyper-V host. If you’re using Hyper-V Manager from a remote system, keep the host’s perspective in mind. You must first select the Hyper-V host that will perform the compact, then select the VHDX that you want that host to compact.

Select the host first:

Now, you can either right-click on that host and click Edit Disk or you can use the Edit Disk link in the far right Actions pane; they both go to the same wizard.


The first screen of the wizard is informational. Click Next on that. After that, you’ll be at the first actionable page. Read on in the next sub-section.

Using the Edit Disk Wizard to Compact a VHDX

Both of the above processes will leave you on the Locate Disk page. The difference is that if you started from a virtual machine’s property sheet, the disk selector will be grayed out. For a standalone disk, enter or browse to the target VHDX. Remember that the dialog and tool operate from the perspective of the host. If you connected Hyper-V Manager to a remote host, there may be delegation issues on SMB-hosted systems.


On the next screen, choose Compact:


The final page allows you to review and cancel if desired. Click Finish to start the process:


Depending on how much work it has to do, this could be a quick or slow process. Once it’s completed, it will simply return to the last thing you were doing. If you started from a virtual machine, you’ll return to its property sheet. Otherwise, you’ll simply return to Hyper-V Manager.

Check the Outcome

Locate your VHDX in Explorer or a directory listing to ensure that it shrank. My disk has returned to its happy 5GB size:



4 Ways to Transfer Files to a Linux Hyper-V Guest


You’ve got a straightforward problem. You have a file on your Windows machine. You need to get that file into your Linux machine. Your Windows machine runs Hyper-V, and Hyper-V runs your Linux machine as a guest. You have many options.

Method 1) Use PowerShell and Integration Services

This article highlights the PowerShell technique as it’s the newest method, and therefore the least familiar. You’ll want to use this method when the Windows system that you’re working from hosts the target Linux machine. I’ll provide a longer list of the benefits of this method after the how-to.

Prerequisite for Copying a File into a Linux Guest: Linux Integration Services

The PowerShell method that I’m going to show you makes use of the Linux Integration Services (LIS). It doesn’t work on all distributions/versions. Check for your distribution on TechNet. Specifically, look for “File copy from host to guest”.

By default, Hyper-V disables the particular service that allows you to transfer files directly into a guest.

Enabling File Copy Guest Service in PowerShell

The cmdlet to use is Enable-VMIntegrationService. You can just type it out:
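For example, with a hypothetical virtual machine named “svlinux” (the file copy component’s service name is Guest Service Interface):

```
Enable-VMIntegrationService -VMName 'svlinux' -Name 'Guest Service Interface'
```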

The Name parameter doesn’t work with tab completion, however, so you need to know exactly what to type in order to use that syntax.

You can use Get-VMIntegrationService for spelling assistance:
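Run it against the virtual machine to see the exact name of every integration service (the VM name is a placeholder):

```
Get-VMIntegrationService -VMName 'svlinux'
```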

Enable-VMIntegrationService includes a VMIntegrationService parameter that accepts an object, which can be stored in a variable or piped from Get-VMIntegrationService:
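For example, again with a placeholder VM name:

```
Get-VMIntegrationService -VMName 'svlinux' |
    Where-Object -Property Name -EQ 'Guest Service Interface' |
    Enable-VMIntegrationService
```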

You could leave out the entire where portion and pipe directly in order to enable all services for the virtual machine in one shot.

Use whatever method suits you best. You do not need to power cycle the virtual machine or make any other changes.

Enabling File Copy Guest Service in Hyper-V Manager or Failover Cluster Manager

If you’d prefer to use a GUI, either Hyper-V Manager or Failover Cluster Manager can help. To enable file copy for a guest, open the Settings dialog for the virtual machine. It does not matter which tool you use. The virtual machine can be On or Off, but it cannot be Saved or Paused.

In the dialog, switch to the Integration Services tab. Check the box for Guest services and click OK.


You do not need to power cycle the virtual machine or make any other changes.

Verifying the Linux Guest’s File Copy Service

You can quickly check that the service in the guest is prepared to accept a file from the host:
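From a shell inside the guest:

```shell
# List running processes and filter for the Hyper-V daemons.
ps -ef | grep -i hyper
```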

Look in the output for hypervfcopyd:


Of course, you can supply more of the name to grep than just “hyper” to narrow it down, but this is easier to remember.

Using Copy-VMFile to Transfer a File into a Linux Guest

All right, now the prerequisites are out of the way. Use Copy-VMFile:
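All of the names and paths below are placeholders; adjust them for your environment:

```
Copy-VMFile -VMName 'svlinux' -SourcePath 'C:\Transfer\myscript.sh' -DestinationPath '/home/linuxuser/' -FileSource Host
```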

You can run Copy-VMFile remotely:
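Just add the ComputerName parameter (the host and VM names are placeholders):

```
Copy-VMFile -ComputerName 'svhyperv1' -VMName 'svlinux' -SourcePath 'C:\Transfer\myscript.sh' -DestinationPath '/home/linuxuser/' -FileSource Host
```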

Notice that SourcePath must be from the perspective of ComputerName. Tab completion won’t work remotely, so you’ll need to know the precise path of the source file. It might be easier to use Enter-PSSession first so that tab completion will work.

You can create a directory on the Linux machine when you copy the file:
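Add the CreateFullPath switch. In this placeholder example, the /downloads directory does not yet exist on the guest:

```
Copy-VMFile -VMName 'svlinux' -SourcePath 'C:\Transfer\myscript.sh' -DestinationPath '/downloads/myscript.sh' -FileSource Host -CreateFullPath
```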

CreateFullPath can only create one folder. If you ask it to create a directory tree (ex: a destination under '/downloads/new' when neither directory exists), you’ll get an error that includes the text “failed to initiate copying files to the guest: Unspecified error (0x80004005)”.

Benefits and Notes on Using Copy-VMFile for Linux Guests

Some reasons to choose Copy-VMFile over alternatives:

  • I showed you how to use it with the VMName parameter, but Copy-VMFile also accepts VM objects. If you’ve saved the output into a variable from Get-VM or some other cmdlet that produces VM objects, you can use that variable with Copy-VMFile’s VM parameter instead of VMName.
  • The VMName and VM parameters accept arrays, so you can copy a file into multiple virtual machines simultaneously.
  • You do not need a functioning network connection within the Linux guest or between the host and the guest.
  • You do not need to open firewalls or configure any daemons inside the Linux guest.
  • The transfer occurs over the VMBus, so only your hardware capabilities can limit its speed.
  • The transfer operates under the root account, so you can place a file just about anywhere on the target system.


  • As mentioned in the preceding list, this process runs as root. Be careful what you copy and where you place it.
  • Copied files are marked as executable for some reason.
  • Copy-VMFile only works from host to guest. The existence of the FileSource parameter implies that you can copy files in the other direction, but that parameter accepts no value other than Host.

Method 2) Using WinSCP

I normally choose WinSCP for moving files to/from any Linux machine, Hyper-V guest or otherwise.

If you choose the SCP protocol when connecting to a Linux system, it will work immediately. You won’t need to install any packages first:


Once connected, you have a simple folder display for your local and target machines with simple drag and drop transfer functionality:


You can easily modify the permissions and execute bit on a file (as long as you have permission):


You can use the built-in editor on a file or attach it to external editors. It will automatically save the output from those editors back to the Linux machine:


You can even launch a PuTTY session right from WinSCP (if PuTTY is installed):


I still haven’t found all of the features of WinSCP.

Method 3) Move Files to/from Linux with the Windows FTP Client

Windows includes a command-line ftp client. It works, but barely qualifies as more than rudimentary. You can invoke it with something like:
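The host name here is a placeholder:

```
ftp mylinuxhost
```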

The above will attempt to connect to the named host and will then start an interactive session. If you’d prefer to start the interactive session first and connect from within it, that would look something like this:
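Again with a placeholder host name:

```
ftp
ftp> open mylinuxhost
```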

Use ftp /? at the command prompt for command-line assistance and help at the interactive ftp> prompt for interactive assistance.

You’ll hit one major problem using this or any other standard FTP client: most Linux distributions do not ship with an FTP daemon running. Most distributions allow you to easily acquire vsftpd. I don’t normally do that because SCP is already enabled and it’s secure.

Method 4) Move Files Between Linux Guests with a Transfer VHDX

If you have a distribution that doesn’t work with Copy-VMFile, or you just don’t want to use it, you can use a portable VHDX file instead.

  1. First, create a disk. Use PowerShell so that the sparse files don’t cause the VHDX file to grow larger than necessary:
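A sketch with a placeholder path and size:

```
New-VHD -Path 'C:\LocalVMs\transfer.vhdx' -SizeBytes 10GB -Dynamic
```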

  1. Attach the VHDX to the Linux guest. If you attach to the virtual SCSI chain, you don’t need to power down the VM.
  2. Inside the Linux guest, create an empty mount location.
  3. Determine which of the attached disks can be used for transfer with sudo fdisk -l . You are looking for a /dev/sd* item that has FAT32 partition information.
    Do not use:
  4. Enter the following as shown. Outputs will show you what you’re doing; I’m only telling you what to type:
  5. Run sudo fdisk -l to verify that your new disk now has a W95 FAT32 partition. You need FAT32 because it’s the only file system that both Linux and Windows can use without extra effort, and that effort isn’t worth it for a simple transfer disk.
  6. Format your new partition:

You have successfully created your transfer disk.
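Assembled, steps 4 through 6 look something like the following annotated session. The /dev/sdb device is an assumption for illustration, and fdisk’s exact prompts vary slightly between versions:

```
sudo fdisk /dev/sdb
   n        (create a new partition; accept the defaults)
   t        (change the partition type)
   b        (type code for W95 FAT32)
   w        (write the changes and exit)
sudo fdisk -l              (verify the W95 FAT32 partition)
sudo mkfs.vfat /dev/sdb1   (format the new partition)
```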

Use a Transfer Disk in Linux

To use a transfer disk on the Linux side, you need to attach it to the Linux machine. Then you need to mount it:

  1. Use sudo fdisk -l to verify which device Linux has assigned the disk to. Use the preceding section for hints.
  2. Once you know which device it is, mount it to your transfer mount point: sudo mount /dev/sdb1 /transfer. Move/copy files into/out of the /transfer folder.
  3. Once you’re finished, unmount the disk from the folder:

  4. Detach the VHDX from the virtual machine.
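Put together, a transfer session looks something like this, assuming the disk appeared as /dev/sdb1 and /transfer is your mount point:

```
sudo mkdir -p /transfer
sudo mount /dev/sdb1 /transfer
cp ~/somefile /transfer/
sudo umount /transfer
```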

Use a Linux Transfer Disk in Windows

You mount a VHDX in Windows via Mount-VHD (must be running Hyper-V), Mount-DiskImage, or Disk Management. Once mounted, work with it as you normally would. Mount-VHD and Disk Management will attach it to a unique drive letter; Mount-DiskImage will mount to the empty path that you specify. Once you’re finished working with it, you can use Dismount-VHD, Dismount-DiskImage (don’t forget -Save!), or Disk Management.
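For example, with Mount-DiskImage and a placeholder path:

```
Mount-DiskImage -ImagePath 'C:\LocalVMs\transfer.vhdx'
# work with the volume at the drive letter Windows assigned, then:
Dismount-DiskImage -ImagePath 'C:\LocalVMs\transfer.vhdx'
```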

Be aware that even though Windows should have no trouble reading a FAT32 partition/volume created in Linux, the opposite is not true! Do not use Windows formatting tools for a Linux transfer disk! Your mileage may vary, but formatting in Linux always works, so stick to that method.


CentOS Linux on Hyper-V




Microsoft continues turning greater attention to Linux. We can now run PowerShell on Linux, we can write .Net code for Linux, we can run MS SQL on Linux, Linux containers will run natively on Windows… the list just keeps growing. You’ve been able to find Linux-on-Hyper-V on that list for a while now, and the improvements have continued to roll in.

Microsoft provides direct support for Hyper-V running several Linux distributions as well as FreeBSD. If you have an organizational need for a particular distribution, then someone already made your choice for you. If you’re just getting started, then you need to make that decision yourself. I’m not a strong advocate for any particular distribution. I’ve written in the past about using Ubuntu Server as a guest. However, there are many other popular distributions available and I like to branch my knowledge.

Why Choose CentOS?

I’ve been using Red Hat’s products off and on for many years and have some degree of familiarity with them. At one time, there was simply “Red Hat Linux”. As a commercial venture attempting to remain profitable, Red Hat decided to create “Red Hat Enterprise Linux” (RHEL), which you must pay to use. With Red Hat being sensitive to the concept of free (as in what you normally think of when you hear “free”) being permanently attached to Linux in the collective consciousness, they also make most of RHEL available to the CentOS Project.

One of the reasons that I chose Ubuntu was its ownership by a commercial entity. That guarantees that if you’re ever really stuck on something, there will be at least one professional entity that you can pay to assist you. CentOS doesn’t have that kind of direct backing. However, I also know (from experience) that relatively few administrators ever call support. Most that do work for bigger organizations that are paying for RHEL or the like. The rest will call some sort of service provider, like a local IT outsourcer. With that particular need mitigated, we are left with:

  • CentOS is based on RHEL. This is not something that someone is assembling in their garage (not that I personally think that’s a problem, but your executives may disagree)
  • CentOS has wide community support and familiarity. You can easily find help on the Internet. You will also not struggle to find support organizations that you can pay for help.
  • CentOS has a great deal in common with other Linux distributions. Because Linux is open source software, it’s theoretically possible for a distribution to completely change everything about it. In practice, no one does. That means that the bulk of knowledge you have about any other Linux distribution is applicable to CentOS.

That hits the major points that will assure most executives that you’re making a wise decision. In the scope of Hyper-V, Microsoft’s support list specifically names CentOS. It’s even first on the list, if that matters for anything.

Stable, Yet Potentially Demanding

When you use Linux’s built-in tools to download and install software, you are working from approved repositories. Essentially, it means that someone decided that a particular package adequately measured up to a standard. Otherwise, you’d need to go elsewhere to acquire that package.

The default CentOS repositories are not large when compared to some other distributions, and do not contain recent versions of many common packages, including the Linux kernel. However, the versions offered are known to be solid and stable. If you want to use more recent versions, then you’ll need to be(come) comfortable manually adding repositories and/or acquiring, compiling, and installing software.

No GUIs Here

CentOS does make at least one GUI available, but I won’t be covering it. I don’t know if CentOS’s GUI requires 3D acceleration the way that Ubuntu’s does. If it does, then the GUI experience under Hyper-V would be miserable. However, I didn’t even attempt to use any CentOS GUIs because they’re really not valuable for anything other than your primary-use desktop. If you’re new to Linux and the idea of going GUI-free bothers you, then take heart: Linux is a lot easier than you think it is. I don’t think that any of the Linux GUIs score highly enough in the usability department to meaningfully soften the blow of transition anyway.

If you’ve already read my Ubuntu article, then you’ve already more or less seen this bit. Linux is easy because pretty much everything is a file. There are only executables, data, and configuration files. Executables can be binaries or text-based script files. So, any time you need to do anything, your first goal is to figure out what executable to call. Configuration files are almost always text-based, so you only need to learn what to set in the configuration file. The Internet can always help out with that. So, really, the hardest part about using Linux is figuring out which executable(s) you need to solve whatever problem you’re facing. The Internet can help out with that as well. You’re currently reading some of that help.

Enough talk. Let’s get going with CentOS.

Downloading CentOS

You can download CentOS for free from www.centos.org. As the site was arranged on the day that I wrote this article, there was a “Get CentOS” link in the main menu at the top of the screen and a large orange button stamped “Get CentOS Now”. From there, you are presented with a few packaging options. I chose “DVD ISO” and that’s the base used in this article. I would say that if you have a Torrent application installed, choose that option. It took me quite a bit of hunting to find a fast mirror.

For reference, I downloaded CentOS-7-x86_64-DVD-1611.iso.

How to Build a Hyper-V Virtual Machine for CentOS

There’s no GUI and CentOS is small, so don’t create a large virtual machine. These are my guidelines:

  • 2 vCPUs, no reservation. All modern operating systems work noticeably better when they can schedule two threads as opposed to one. You can turn it up later if your deployment needs more.
  • Dynamic Memory on; 512MB startup memory, 256MB minimum memory, 1GB maximum memory. You can always adjust Dynamic Memory’s maximum upward, even when the VM is active. Start low.
  • 40GB