Some time ago, I discovered uses for Hyper-V Key-Value Pair Data Exchange services and began exploiting them on my Windows guests. Now that I’ve started building Linux guests, I need similar functionality. This article covers the differences in the Linux implementation and includes version 1.0 of a program that allows you to receive, send, and delete KVPs.
The second part of that series presented PowerShell scripts for interacting with Hyper-V KVP Exchange from both the host and the guest sides. The guest script won’t be as useful in the context of Linux. Even if you install PowerShell on Linux, the script won’t work because it reads and writes registry keys. It might still spark some implementation ideas, I suppose.
What is Hyper-V Key-Value Pair Data Exchange?
To save you a few clicks and other reading, I’ll give a quick summary of Hyper-V KVP Exchange.
Virtual machines are intended to be “walled gardens”. The host and guest should have limited ability to interact with each other. That distance sometimes causes inconvenience, but the stronger the separation, the stronger the security. Hyper-V’s KVP Exchange provides one method for moving data across the wall without introducing a crippling security hazard. Either “side” (host or guest) can “send” a message at any time. The other side can receive it — or ignore it. Essentially, they pass notes by leaving them stuck in slots in the “wall” of the “walled garden”.
KVP stands for “key-value pair”. Each of these messages consists of one text key and one text value. The value can be completely empty.
How is Hyper-V KVP Exchange Different on Linux?
On Windows guests, a service runs (Hyper-V Data Exchange Service) that monitors the “wall”. When the host leaves a message, this service copies the information into the guest’s Windows registry. To send a message to the host, you (or an application) create or modify a KVP within a different key in the Windows registry. The service then places that “note” in the “wall” where the host can pick it up. More details can be found in the first article in this series.
Linux runs a daemon that is the analog to the Windows service. It has slightly different names on different platforms, but I’ve been able to locate it on all of my distributions with sudo service --status-all | grep kvp. It may not always be running; more on that in a bit.
Linux doesn’t have a native analog to the Windows registry. Instead, the daemon maintains a set of files. It receives inbound messages from the host and places them in particular files that you can read (or ignore). You can write to one of the files. The daemon will transfer those messages up to the host.
On Windows, I’m not entirely certain of any special limits on KVP sizes. A registry value name can be up to 16,383 characters, and there is no hard-coded limit on value size. I have not tested how KVP Exchange handles these extents on Windows. However, the Linux daemon has much tighter constraints. A key can be no longer than 512 bytes. A value can be no longer than 2,048 bytes.
The keys are case sensitive on the host and on Linux guests. So, key “LinuxKey” is not the same as key “linuxkey”. Windows guests just get confused by that, but Linux handles it easily.
How does Hyper-V KVP Exchange Function on Linux?
As with Windows guests, Data Exchange must be enabled on the virtual machine’s properties:
The daemon must also be installed and running within the guest. Currently-supported versions of the Linux kernel contain the Hyper-V KVP framework natively, so several distributions ship with it enabled. As mentioned in the previous section, the exact name of the daemon varies. You should be able to find it with sudo service --status-all | grep kvp. If it’s not installed, check your distribution’s instruction page on TechNet.
All of the files that the daemon uses for Hyper-V KVP exchange can be found in the /var/lib/hyperv folder. They are hidden, but you can view them with ls's -a parameter:
Anyone can read any of these files. Only the root account has write permissions, but that can be misleading. Writing to any of the files that are intended to carry data from the host to the guest has no real effect. The daemon is always monitoring them and only it can carry information from the host side.
What is the Purpose of Each Hyper-V KVP Exchange File?
Each of the files is used for a different purpose.
.kvp_pool_0: When an administrative user or an application in the host sends data to the guest, the daemon writes the message to this file. It is the equivalent of HKLM\SOFTWARE\Microsoft\Virtual Machine\External on Windows guests. From the host side, the related commands are ModifyKvpItems, AddKvpItems, and RemoveKvpItems. The guest can read this file. Changing it has no useful effect.
.kvp_pool_1: The root account can write to this file from within the guest. It is the equivalent of HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest on Windows guests. The daemon will transfer messages up to the host. From the host side, its messages can be retrieved from the GuestExchangeItems field of the WMI object.
.kvp_pool_2: The daemon will automatically write information about the Linux guest into this file. However, you never see any of the information from the guest side. The host can retrieve it through the GuestIntrinsicExchangeItems field of the WMI object. It is the equivalent of the HKLM\SOFTWARE\Microsoft\Virtual Machine\Auto key on Windows guests. You can’t do anything useful with the file on Linux.
.kvp_pool_3: The host will automatically send information about itself and the virtual machine through this file. You can read the contents of this file, but changing it does nothing useful. It is the equivalent of the HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameter key on Windows guests.
.kvp_pool_4: I have no idea what this file does or what it is for.
What is the Format of the Hyper-V KVP Exchange File on Linux?
Each file uses the same format.
One KVP entry is built like this:
512 bytes for the key. The key is a sequence of non-null bytes, typically interpreted as char. According to the documentation, it will be processed using UTF-8 encoding. After the characters for the key, the remainder of the 512 bytes is padded with null characters.
2,048 bytes for the value. As with the key, these are non-null bytes typically interpreted as char. After the characters for the value, the remainder of the 2,048 bytes is padded with null characters.
KVP entries are written end-to-end in the file with no spacing, headers, or footers.
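Viewed from code, each entry is just two fixed-size byte fields laid end to end. As a minimal sketch (the struct and helper names here are my own, not anything shipped with the daemon), the layout and padding rules look like this:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <string>

// Field sizes from the format described above.
const std::size_t KvpKeySize = 512;
const std::size_t KvpValueSize = 2048;

// One fixed-size KVP entry; unused bytes remain zeroed (null padding).
struct KvpRecord
{
    char Key[KvpKeySize];
    char Value[KvpValueSize];
};

// Build a properly padded record from ordinary strings.
// Oversized input is silently truncated to fit the fixed fields.
KvpRecord MakeKvpRecord(const std::string& key, const std::string& value)
{
    KvpRecord record;
    std::memset(&record, 0, sizeof(record));
    std::memcpy(record.Key, key.data(), std::min(key.size(), KvpKeySize));
    std::memcpy(record.Value, value.data(), std::min(value.size(), KvpValueSize));
    return record;
}
```

sizeof(KvpRecord) comes out to exactly 2,560 bytes, which is why entries can be read and written as fixed-length blocks.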
For the most part, you’ll treat these as text strings, but that’s not strictly necessary. I’ve been on this rant before, but the difference between “text” data and “binary” data is 100% semantics, no matter how much code we write to enforce artificial distinctions. From now until the point when computers can process something other than low voltage/high voltage (0s and 1s), there will never be anything but binary data and binary files. On the Linux side, you have 512 bytes for the key and 2,048 bytes for the value. Do with them as you see fit. However, on the host side, you’ll still need to get through the WMI processing. I haven’t pushed that very far.
How Do I Use Hyper-V KVP Exchange for Linux?
This is the part where it gets fun. Microsoft only goes so far as to supply the daemon. If you want to push or pull data, that’s all up to you. Or third parties.
But really, all you need to do is write to and/or read from files. The trick is that you must do it using the binary format that I mentioned above. If you just use a tool that writes simple strings, it will improperly pad the fields, resulting in mangled transfers. So, you’ll need a bit of proficiency in whatever tool you use. The tool itself doesn’t matter, though. Perl, Python, bash scripts… anything will do. Just remember these guidelines:
Writing to files _0, _2, _3, and _4 just wastes time. The host will never see it, it will break KVP clients, and the files’ contents will be reset when the daemon restarts.
You do not need special permission to read from any of the files.
_1 is the only file that it’s useful to write to. You can, of course, read from it.
Deleting the existing contents deletes those KVPs. You probably want to update existing records or append new ones.
The host only receives the LAST value written for a given key: if you write a KVP with key “NewKey” twice in the _1 file, the host will only receive the second one.
Delete a KVP by zeroing its fields.
If the byte lengths are not honored properly, you will damage that KVP and every KVP following.
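Putting those guidelines together, here is a hedged sketch of a guest-to-host writer. WriteKvp is a hypothetical helper of my own, not code from the hvkvp utility; the pool path and daemon behavior are as described above. It updates an existing key in place (so the host sees only the last value) and appends otherwise, always honoring the 512/2,048-byte padding:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <fstream>
#include <string>

const std::size_t KvpKeySize = 512;
const std::size_t KvpValueSize = 2048;
const std::size_t KvpRecordSize = KvpKeySize + KvpValueSize;

// Update an existing key in place, or append a new record at the end.
// poolFile would normally be /var/lib/hyperv/.kvp_pool_1 (root only).
// Returns false if the pool file cannot be opened or written.
bool WriteKvp(const std::string& poolFile, const std::string& key, const std::string& value)
{
    // Build a fully padded record; anything short of the field size stays null.
    char padded[KvpRecordSize];
    std::memset(padded, 0, sizeof(padded));
    std::memcpy(padded, key.data(), std::min(key.size(), KvpKeySize));
    std::memcpy(padded + KvpKeySize, value.data(), std::min(value.size(), KvpValueSize));

    std::fstream pool(poolFile, std::ios::in | std::ios::out | std::ios::binary);
    if (!pool)
    {
        // The file may not exist yet; create it, then reopen for read/write.
        std::ofstream create(poolFile, std::ios::binary);
        create.close();
        pool.open(poolFile, std::ios::in | std::ios::out | std::ios::binary);
        if (!pool)
            return false;
    }

    // Scan existing records for the key; overwrite in place if found.
    char record[KvpRecordSize];
    bool found = false;
    while (pool.read(record, KvpRecordSize))
    {
        // Keys are null-padded, so strncmp against the whole field works.
        if (std::strncmp(record, key.c_str(), KvpKeySize) == 0)
        {
            pool.seekp(pool.tellg() - static_cast<std::streamoff>(KvpRecordSize));
            found = true;
            break;
        }
    }
    if (!found)
    {
        pool.clear(); // clear the EOF state so we can seek and append
        pool.seekp(0, std::ios::end);
    }
    pool.write(padded, KvpRecordSize);
    return pool.good();
}
```

Deleting works the same way: find the record and overwrite it with 2,560 zero bytes instead of a padded key/value pair.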
Source Code for a Hyper-V KVP Exchange Utility on Linux
I’ve built a small utility that can be used to read, write, and delete Hyper-V KVPs on Linux. I wrote it in C++ so that it can be compiled into a single, neat executable.
Long-term, I will only be maintaining this project on my GitHub site. The listing on this article will be forever locked in a 1.0 state.
Each file is set so that they all live in the same directory. Use make to build the sources and sudo make install to put the executable into the /bin folder.
Paste the contents of all of these files into accordingly-named files. File names are in the matching section header and in the code preview bar.
Transfer all of the files to your Linux system. It doesn’t really matter where. They just need to be in the same folder.
Get help with:
hvkvp read --help
hvkvp write --help
hvkvp delete --help
Each includes the related keys for that command and some examples.
Microsoft has definitely been bringing the love for Linux lately! I’ve used Linux more in 2017 than in the entirety of my previous career combined. Microsoft made that happen. Recently, they added support to their premier development product, Visual Studio, so that it can connect, deploy, and debug C/C++ code on a Linux system. I’m going to show you how to use that new functionality in conjunction with Hyper-V to ease development on Linux. I’ll provide a demo program listing that you can use to retrieve the information that Hyper-V provides to Linux guests via KVP exchange.
Why Use Visual Studio for Linux C/C++ Development?
I think it’s natural to wonder why anyone would use a Microsoft product to write code on Linux. This discussion can get very religious very quickly, and I would personally like to stay out of that. I’ve never understood why people get so emotional over their own programming preferences that they feel the need to assault the preferences of others. If using Visual Studio causes you some sort of pain, don’t use Visual Studio. Simple.
For the more open-minded people out there, there are several pragmatic reasons to use Visual Studio for Linux C/C++ development:
Intellisense: Visual Studio quickly shows me errors, incomplete lines, unmatched braces/parentheses, and more. Lots of other products have something similar, but I haven’t found anything that I like as much as Intellisense.
Autocomplete: Everyone has autocomplete. But, when it’s combined with Intellisense, you’ve got a real powerhouse. A lot of other products seem to stumble in ways that Visual Studio doesn’t. It seems to know when I want help and when to stay out of the way. It might also be my familiarity with the product, but…
Extensions and Marketplace: Visual Studio sports a rich extension API. A veritable cottage industry sprang up to provide plug-ins and tools. Many are free-of-charge.
[CTRL]+[K], [D] (Format Document). This particular key chord keeps Visual Studio right at the top of my list of must-have tools. Disagreements over how to place braces and whether to use tabs or spaces are ridiculous, but frequently cause battles that reach family-splitting levels of vitriol anyway. VS’s Format Document will almost instantly convert whatever style in place when you opened the file into whatever style you’ve configured VS to use. Allman with tabs, OTBS with spaces — it doesn’t matter! I haven’t found any other tool that deals with this as well as Visual Studio.
Remote debugging. I’ve been using Visual Studio’s remote debugger on Windows for a while and have really liked it. It allows you to write code on one system but run it on another. Since VS won’t run directly on Linux, this feature makes the VS+Linux relationship possible.
No Linux GUI needed. Practically, this is the same as the previous bullet. I kept it separate so that skimmers don’t miss it. Out of all of my Linux installations, only two have a GUI. I know that some people declare that “real programmers” only use text editors to write code. That’s part of that religious thing that I’m avoiding. I want a good IDE for my coding activities. Visual Studio allows me to have a good IDE and a GUI-less system.
Use the compiler of your choice. Visual Studio only provides the development environment. It calls upon the target Linux system to compile and debug your code. You can specify what tools it uses.
Free Community Edition. That’s free as in beer, not open source. But, Community Edition contains most of the best parts of Visual Studio. I would like to see CodeLens extended to the Community Edition, especially since the completely free Visual Studio Code provides it. Most of the rest of the features missing from Community Edition involve the testing suite. You can see a comparison for yourself.
Why Use Hyper-V for Visual Studio and Linux Development?
I don’t know about you, but I like writing code in a virtual machine. Visual Studio 2017 does not modify systems as extensively as its predecessors, but it still uses a very invasive installer. Also, you get a natural sandbox environment when coding in a virtual machine. I feel the same way about target systems. I certainly don’t want to code against a production system, and I don’t want to litter my workspace with a lot of hardware. So, I code in a virtual machine and I test in a virtual machine (several, actually).
I can do all of these things from my Windows 10 desktop. I can also target systems on my test servers. I can mix and match. Since I’m a Hyper-V guy, I can also use this to test code that’s written specifically for a Linux guest of a Hyper-V host. I’ll provide some demo code later in this article specifically for that sort of environment.
Preparing the Linux Environment for Visual Studio->Linux Connections
Visual Studio does all of its work on the Linux environment via SSH (secure shell). So, you’ll need to ensure that you can connect to TCP port 22. I don’t use SELinux, but I believe that it automatically allows the local SSH daemon as long as the default port hasn’t been changed. You’re on your own if you did that.
You need the following components installed:
SSH server/daemon. In most cases, this will be pre-installed, although you might need to activate it
The GNU C Collection (GCC) and related C and C++ compilers
The GNU Debugger
The GNU Debugger Server
Installation will vary by distribution.
openSUSE (definitely Leap, presumably Tumbleweed and SLES as well):
sudo zypper install -y openssh gcc-c++ gdb gdbserver
If you needed to install the SSH server, you’ll probably need to start it as well: sudo service sshd start. You may also want to look up how to autostart a service on your distribution.
You’ll need a user account on the Linux system. Visual Studio will log on as that user to transfer source code and to execute compile and debug operations. Visual Studio does not SUDO, so the account that you choose will not run as a superuser. On some of the distributions, it might be possible to just use the root account. I did not spend any time investigating that. If you need to sudo for debugging, I will show you where to do that.
That’s all for the Linux requirements. You may need to generate a private key for your user account, but that’s technically not part of preparing the Linux environment. I’ll show you how to do that as part of the Windows preparation.
Preparing the Windows Environment for Visual Studio->Linux Connections
First, you need a copy of Visual Studio. You must at least use version 2015. 2017 is preferred. You can use any edition. I will be demonstrating with the Community Edition.
Visual Studio Install Options for Linux C/C++ Development
For Visual Studio 2017, the new installer includes the Linux toolset.
You may choose any other options as necessary, of course.
Connecting Visual Studio to your Linux System(s)
You will instruct Visual Studio to maintain a master list of target Linux systems. You will connect projects to systems from that list. In this section, you’ll set up the master list.
On the main Visual Studio top menu, click Tools->Options.
In the Options window, click Cross Platform. You should be taken right to the Connection Manager screen.
At the right of the window, click Add. You’ll fill in the fields with the relevant information. You have two separate connection options, which I’ll show separately.
Connecting Visual Studio to Linux Using a Password
Depending on the configuration of your SSH server, you might be able to use a simple password connection. By default, Ubuntu and Fedora (and probably CentOS) will allow this; openSUSE Leap will not.
Fill out the fields with the relevant information, using an account that exists on the target Linux system:
When you click Connect, Visual Studio will validate your entries. If successful, you’ll be returned to the Options window. If not, it will highlight whatever it believes the problem to be in red. It does not display any errors. If it highlights the host name and port, then it was unable to connect. If it highlights the user name and password, then the target system rejected your credentials. If you’re certain that you’re entering the correct credentials, read the next section for a solution.
Connect Visual Studio to Linux Using Key Exchange
Potentially, using full key exchange is more secure than using a password. I’m not so sure that it’s true in this case, but we’ll go with it. If you’re using openSUSE and don’t want to reconfigure your SSH server, you’ll need to follow these steps. For the other distributions, you can use the password method above or the key method.
Connect to/open the Linux system’s console as the user that you will be using in Visual Studio. Do not use sudo! On some distributions, you can use root via SSH; Ubuntu blocks it.
ssh-keygen -t rsa. It may ask you where to create the files. Press [Enter] to accept the defaults (a hidden location in your home directory).
When prompted, provide a passphrase. Use one that you can remember.
You should see output similar to the following:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/eric/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/eric/.ssh/id_rsa.
Your public key has been saved in /home/eric/.ssh/id_rsa.pub.
Next, copy the public key over to the Linux system with ssh-copy-id, specifying your user name and the name of the target system (for example, ssh-copy-id user@linuxhost). Remember, you want to use the name of the Linux system, not the remote Windows system running Visual Studio. The system may complain that it can’t verify the authenticity of the system. That’s OK in this case. Type out yes and press [Enter].
You will be asked to provide a password. Use the password for your user account, not the passphrase that you created for the key.
Use any tool that you like to copy the file ~/.ssh/id_rsa to your local system. The .ssh folder is hidden. If you’re using WinSCP, go to the Options menu and select Preferences. On the Panels tab, check Show hidden files (CTRL+ALT+H).
The id_rsa file is a private key. The target Linux system now implicitly trusts that anyone wielding the specified user name and encrypting with this particular private key is perfectly safe to be allowed on to the system. You must take care with this key! In my case, I just dropped it into my account’s My Documents folder. That folder already has some NTFS permission locking, and I can be reasonably certain that anyone with sufficient credentials to override it can be trusted. If not, the passphrase that I chose will serve as my last line of defense.
Now that I have my private key ready, I pick up where step 3 left off in the initial Connecting section above.
Fill in the target system and port
Fill in the user name
Change the Authentication type drop-down to Private Key
In the Private key file field, enter or browse to the id_rsa file
In the Passphrase field, enter the passphrase that you generated for this key
When you click Connect, Visual Studio will validate your entries. If successful, you’ll be returned to the Options window. If not, it will highlight whatever it believes the problem to be in red. It does not display any errors. If it highlights the host name and port, then it was unable to connect. If it highlights the user name and key section, then the target system rejected your credentials. If that happens, verify that you entered the ssh-copy-id command correctly.
Note: You can also use this private key with other tools, such as WinSCP.
Once you’ve added hosts, Visual Studio will remember them. Conveniently, it will also identify the distribution and bitness:
Configuring a Visual Studio C/C++ Project to Connect to a Linux System
At this point, you’ve prepared your overall environment. From this point onward, you’re going to be configuring for ongoing operational tasks. The general outlay of a Visual Studio to Linux connection:
Your project files and code remain on the Visual Studio system. That means the .sln, .vcxproj, etc. files.
During a build operation, Visual Studio transfers the source files to the target Linux system and calls on that system’s compiler to build them.
During a debug operation, Visual Studio calls on the gdb installation on the target system. It brings the output to your local system.
You’ll find all of the transferred files under ~/projects/. Expanded, that’s /home/userid/projects. The local compiler will create bin and obj folders in that location to hold the respective files.
The following sub-sections walk through creating a project.
Creating a Linux Project in Visual Studio
You must have followed all of the preceding steps or the necessary project templates will not be available in Visual Studio.
In Visual Studio, use the normal method to create a new solution or a project for an existing solution (File->New->Project).
In the New Project dialog, expand Installed -> Templates -> Visual C++ -> Cross Platform and click Linux.
In the center, choose Console Application. If you choose Empty Project, you just won’t get the starter files. If you have your own process for Linux projects, you can choose Makefile Project. I will not be demonstrating that. Fill out the Name, Location, and Solution Name (if applicable) fields as necessary. If you want to connect to a source control system, such as your Github account, you can facilitate that with the Add to Source Control check box.
Your new project will include an introductory home page and a default main.cpp code file. The Getting Started page contains useful information:
Default main.cpp code:
Selecting a Target System and Changing Linux Build Options in Visual Studio
If you’ve followed through directly and gotten this far, you can begin debugging immediately. However, you might dislike the default options, especially if you added multiple target systems.
Access the root location for everything that I’m going to show you by right-clicking on the project and clicking Properties:
I won’t show/discuss all available items because I trust that you can read. I’m going to touch on all of the major configuration points.
General Configuration Options
Start on the General tab. Use this section to change:
Directories on the remote system, such as the root project folder.
Project’s name as represented on the remote system.
Selections when using the Clean option
The target system to use from the list of configured connections
The type of build (application, standard library, dynamic library, or makefile)
Whether to use the Standard Library statically or as a shared resource
Directories (especially for Intellisense)
On the VC++ Directories tab, you can configure the include directories that Visual Studio knows about. This tab does not influence anything that happens on the target Linux system(s). The primary value that you will get out of configuring this section is autocomplete and Intellisense for your Linux code. For example, I have set up WinSCP to synchronize the include files from one of my Linux systems to a named local folder:
It won’t synchronize symbolic links, which means that Intellisense won’t automatically work for some items. Fortunately, you can work around that by adding the relevant targets as separate entries. I’ll show you that in a tip after the following steps.
To have Visual Studio access these include files:
Start on the aforementioned VC++ Directories tab. Set the Configuration to All Configurations. Click Include Directories to highlight it. That causes the drop-down button at the right of the field to appear. Click that, then click Edit.
In the Include Directories dialog, click the New Line button. It will create a line. At the end of that line will be an ellipsis (…) button that will allow you to browse for the folder.
Once completed, your dialog should look something like this:
OK out of the dialog.
Remember, this does not affect anything on the target Linux system.
TIP: Linux uses symbolic links to connect some of the items. Those won’t come across in a synchronization. Add a second include line (or more) for those directories. For instance, in order to get Intellisense for <sys/stat.h> on Ubuntu, I added x86_64-linux-gnu:
Visual Studio’s natural behavior is to compile C code with the C++ compiler. It assumes that you’ll do the same on your Linux system. If you want to override the compiler(s) that it uses, you’ll find that setting on the General tab underneath the C/C++ tree node.
TIP: In Release mode, VC++ sets the Debug Information Format to Minimal Debug Information (-g1). I’m not sure if there’s a reason for that, but I personally don’t look for any debug information in release executables. So, that default setting bloats my executable size with no benefit that I’m aware of. Knock it down to None (-g0) on the C/C++/All Options tab (make sure you select the Release configuration first):
Passing Arguments and Running Additional Commands
You can easily find the Pre- and Post- sections for the linker and build events in their respective sections. However, those only apply during a build cycle. In most cases, I suspect that you’ll be interested in changing things during a debug session. Visual Studio provides many options, but I’m guessing that the two of most interest will be running a command prior to the debug phase and passing arguments into the program. You’ll find both options on the Debugging tab:
If the program needs to run with super user privileges, then you could enter sudo -s into the Pre-Launch Command field. However, by default, you’d also need to supply the password. That password would then be saved into the project’s configuration files in clear text. Even that by itself might not be so bad if the project files live in a secure location. However, if you add the project to your Github account… So, if you need to sudo, I would recommend simply bypassing the need for this account to enter a password at all. It’s ultimately safer to know that you have configured the account that way than to try to keep track of all the places where the password might have traveled. I’ve found two guides on how to do that: StackExchange and Tecmint. I typically prefer Stack sites, but the Tecmint directions are more thorough.
Starting a Debug Process for Linux C/C++ Code from Visual Studio
You’ve completed all configuration work! Now you just need to write code and start debugging.
Let’s start with the sample code since we know it’s a good working program. You can press the green arrow button titled Remote GDB Debugger or press the F5 key when the code window has focus.
You will be prompted to build the project:
If you’ve left the Windows Firewall active, you’ll need to allow Visual Studio to communicate out:
In the Output window, you should see something similar to the following:
If errors occur, you should get a fairly comprehensible error message.
Viewing the Output of a Remote Linux Debug Cycle in Visual Studio
After the build phase, the debug cycle will start. On the Debug output, you may get some errors that aren’t as comprehensible as compile errors:
As far as I can tell, these errors (“Cannot find or open the symbol file. Stopped due to shared library event”) occur because the target system uses an older compiler. Changing the default compiler on a Linux distribution can be done, but it is a non-trivial task that may have unforeseen consequences. You have three choices:
As long as the older compiler can successfully build your application, live with the errors. If your final app will target that distribution, then you can bet that users of that distribution will also be using that older compiler.
Add a newer version of the compiler and use what you learned above to instruct Visual Studio to call it instead of the default. You’ll need to do some Internet searching to find out what the corrected command line needs to be.
Change the default compiler on the target. That would be my last choice, as it will affect all future software built on that system in a manner that is inconsistent with the distribution. If you want to do that, you’ll need to look up how.
The consequence of doing nothing is the normal effect of debugging into the code for which you have no symbols. I have not yet taken any serious steps to fix this problem on my own systems. I’m not even certain that I’m correct about the version mismatch. However, these aren’t showstoppers. Assuming that the code compiled, the debug session will start. Assuming that it successfully executed your program, it will have run through to completion and exit. If you remember the first time that you coded a Visual C++ Windows Console Application and didn’t have some way to tell the program to pause so that you could view the results, then you’ll already know what happened: you didn’t get to see any output aside from the return code.
Since you’re working in a remote session, you need to do more than just put some simple input process at the end of your code. In the Debug menu, click Linux Console.
This will open a new always-on-top window for you to view the results of the debug. Debug the default application again, and you should see this:
Of course, the built output will remain until you clean it, so you can always execute the app in a separate terminal window:
LinuxApp is the name that I used for my project. Substitute in the name that you used for your project.
Sample C++ Application: Retrieving KVP data from Hyper-V on a Linux Guest
If we’re going to have an article on Hyper-V, Linux, and C++, it seems only fair that it should include a sample program tying all three together, doesn’t it?
If you followed my guides, the Hyper-V KVP service will already be running on your Linux guest. Check for it: sudo service --status-all | grep kvp. If it’s not there, you can look at the relevant guide on this site for your distribution (I’ve covered Ubuntu, openSUSE Leap, and Kali). You can also check TechNet for your distribution’s instructions. Also, make sure that the service is enabled on the virtual machine’s property page in Hyper-V Manager or Failover Cluster Manager.
Linux/Hyper-V KVP Input/Output Locations
On Windows, the KVP service operates via the local registry. On Linux, the KVP daemon operates via files:
/var/lib/hyperv/.kvp_pool_0: an inbound file populated by the daemon. This is data that an administrative user can send from the host. Same purpose as the External key on a Windows guest. You only read this file from the Linux side. It does not require special permissions. Ordinarily, it will be empty.
/var/lib/hyperv/.kvp_pool_1: an outbound file that you can use to send data to the host. Same purpose as the Guest key on a Windows guest.
/var/lib/hyperv/.kvp_pool_2: an outbound file populated by the daemon using data that it collects from the guest. Same purpose as the Auto key on a Windows guest. This information is read by the host. You cannot do anything useful with it from the client side.
/var/lib/hyperv/.kvp_pool_3: an inbound file populated by the host. This data contains information about the host. Same purpose as the Guest\Parameters key on a Windows guest. You can only read this file. It does not require special permissions. It should always contain data.
Linux/Hyper-V KVP File Format
All of the files follow the same straightforward format: individual KVP records laid end-to-end, each a fixed 2,560 bytes long, consisting of:
512 bytes that contain the data’s key (name). Process these as char. hyperv.h defines this length as HV_KVP_EXCHANGE_MAX_KEY_SIZE.
2,048 bytes that contain the data’s value. By default, you’ll also process these as char, but data is data. hyperv.h defines this length as HV_KVP_EXCHANGE_MAX_VALUE_SIZE.
Be aware that this differs from the Windows implementation, which doesn’t appear to use a fixed limit on value length.
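Before writing any C++, you can sanity-check that layout straight from a shell. This is a rough sketch that assumes only the record sizes described above; read_kvp is just an illustrative name:

```shell
#!/bin/sh
# read_kvp: print "key = value" lines from a Hyper-V KVP pool file.
# Record layout: 512-byte key + 2,048-byte value, both NUL-padded, end-to-end.
read_kvp() {
    pool="$1"
    size=$(stat -c %s "$pool")
    count=$((size / 2560))
    i=0
    while [ "$i" -lt "$count" ]; do
        key=$(dd if="$pool" bs=1 skip=$((i * 2560)) count=512 2>/dev/null | tr -d '\0')
        value=$(dd if="$pool" bs=1 skip=$((i * 2560 + 512)) count=2048 2>/dev/null | tr -d '\0')
        printf '%s = %s\n' "$key" "$value"
        i=$((i + 1))
    done
}

# On a live guest, point it at the host-data pool:
# read_kvp /var/lib/hyperv/.kvp_pool_3
```

On an actual guest, the last (commented) line dumps the host-supplied data from pool 3.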
Armed with the above knowledge, we’re going to read the inbound information that contains the auto-created host information.
I replaced the default main.cpp with the following code:
cout<<"Opening file: "<<KVPFileName<<endl;
cout<<"Reading KVP records."<<endl;
while (!KVPFile.eof()) // slightly faster, somewhat shorter, much dodgier: while (KVPFile.read((char*)&KvpEntry, sizeof(KvpEntry)))
I got the base information about Hyper-V/Linux KVP from this article: https://technet.microsoft.com/en-us/library/dn798287(v=ws.11).aspx. If you want to write KVP readers/writers using C rather than C++, you’ll find examples there. While I certainly don’t mind using C, I feel that the lock code detracts from the simplicity of reading and writing KVP data.
The exciting time has come for my wife’s laptop to be replaced. After all the fun parts, we’ve still got this old laptop on our hands, though. Normally, we donate old computers to the local Goodwill. They’ll clean them up and sell them for a few dollars to someone else. Of course, we have no idea who will be getting the computer, and we don’t know what processes Goodwill puts them through before putting them on the shelf. A determined attacker might be able to retrieve social security numbers, bank logins, and other things that we’d prefer to keep private. As usual, I will wipe the hard drive prior to the donation. This time though, I have some new toys to use: Hyper-V and Kali Linux.
Why Use Hyper-V and Kali Linux to Securely Wipe a Physical Drive?
I am literally doing this because I can. You can easily find any number of other ways to wipe a drive. My reasons:
I don’t have any experience with Windows-based apps that wipe drives and didn’t find any freebies that spoke to me
I don’t really want to deal with booting this old laptop up to one of those security CDs
Kali Linux focuses on penetration testing, but Kali is also the name of the Hindu goddess of destruction. For a bit of fun, do an Internet image search on her, but maybe not around small children. What’s more appropriate than unleashing Kali on a disk you want to wipe?
I don’t want to deal with a Kali Live CD any more than I want to use one of the other CD-based tools, nor do I want to build a physical Kali box just for this. I already have Kali running in a virtual machine.
It’s very convenient for me to connect an external 2.5″ SATA disk to my Windows 10 system.
So yeah, I’m doing this mostly for fun.
Connect the Drive
I’m assuming that you’ve already got a Hyper-V installation with a Kali Linux guest. If not, get those first.
Since we’re working with a physical drive, you also need a way to physically connect the drive to the Hyper-V host. In my case, I have an old Seagate FreeAgent GoFlex that works perfectly for this. It has an enclosure for a small SATA drive and a detachable USB-to-SATA connector. I just pop off their drive, plug in the laptop drive, and voila! I can connect her drive to my PC via USB.
You might need to come up with some other method, like cracking your case and connecting the cables. Hopefully not.
I plugged the disk into my Windows 10 system, and as expected, it appeared immediately. Next, I went into Disk Management and took the disk Offline.
I then went into Hyper-V Manager and ensured the Kali guest was running. I opened its settings page to the SCSI Controller page. There, I clicked the Add button.
It created a new logical connection and asked me if I wanted a new VHDX or to connect a physical disk. In this case, the physical disk is what we’re after.
After clicking OK, the disk immediately appeared in Kali.
In Kali, open the terminal from the launcher at the left:
lsblk to verify that Kali can see your disk. I already had my terminal open so that I could perform a before and after for you:
Remember that Linux marks the SATA disks in order as sda, sdb, sdc, etc. So, even if I hadn’t run the before and after, I would know that the last disk it detected is sdb.
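For the before-and-after comparison, a plain lsblk works; the -d switch keeps the output to whole disks:

```shell
# -d: whole disks only (no partition rows); the entry that appears after you
# attach the drive is your target.
lsblk -d -o NAME,SIZE,TYPE
```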
Use shred to Perform the Wipe
Now that we’ve successfully connected the drive, we only need to perform the wipe. We’ll use the shred utility for that purpose. It ships as part of GNU coreutils, so Kali, like nearly every distribution, already has it waiting for you.
The shred utility has a number of options. Use shred --help to view them all. In my case, I want to view progress, and I want to increase the number of passes from the default of 3 to 4. I’ve been told that analog readers can sometimes go as far as three layers deep. Apparently, even that is untrue; it seems that a single pass will do the trick. However, old paranoia dies hard. So, four passes it is.
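The wipe itself is a one-liner. Since shredding a device node is irreversibly destructive, the sketch below demonstrates on a scratch file; the /dev/sdb device name in the comment is an assumption that you must confirm with lsblk first:

```shell
# Toy demonstration on a scratch file. On the real disk, you would target the
# device node instead, e.g.: shred -v -n 4 /dev/sdb  (confirm the name first!)
target=$(mktemp)
printf 'bank logins and other private data\n' > "$target"
shred -v -n 4 "$target"   # -v: show progress; -n 4: four overwrite passes
grep -q 'bank' "$target" || echo "contents destroyed"
rm -f "$target"
```

Note that shred overwrites in place but does not delete by default; add -u if you also want the file removed.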
And then, I found something else to do. As you can imagine, overwriting every spot on a 250GB laptop disk takes quite some time.
Because of the time involved, I needed to temporarily disable Windows 10 sleep mode. Otherwise, Connected Standby would interrupt the process.
After the process completed, I used Hyper-V Manager to remove the disk from the VM. Since I never mounted it in Kali, I didn’t need to do anything special there. After that, I bolted the drive back into the laptop. It’s on its way to its happy new owner, and I don’t need to worry about anyone stealing our information from it.
If we want to verify that the problem is an incompatible virtual switch, we just look at the Message property of the Incompatibilities items:
If the destination host does have a virtual switch with the same name, you won’t get this line item in the compatibility report. In fact, you might not get a compatibility report at all. We’ll come back to that situation momentarily.
It’s not quite obvious, but the output above shows you three different incompatibility items. Let’s roll up one level and see the objects themselves. We do that by only asking for the Incompatibilities property.
We can’t read the message (why I showed you the other way first), but we can clearly see three distinct objects. The first two have no meaningful associated action; they only tell you a story. The last one, though, we can do something with. Look at the Source item on it.
If the [2] doesn’t make sense, it’s array notation. The first item is 0, the second is 1, and the third (the one that we’re interested in) is 2. I could have also used Where-Object with the MessageID.
Can you identify that returned object? It’s a VMNetworkAdapter.
The incompatibility report embeds a copy of the virtual machine’s virtual network adapter. Luke’s article tells you to modify the network adapter’s connection during migration. However, you can modify any setting on that virtual network adapter object that you could on any other. That includes the VLAN.
Change the VLAN and the switch in the compatibility report like this:
I did all of that using the popular “one-liner” format. I’ve never been a huge fan of the one-liners fad; it’s usually preposterous showboating. But, if you can follow this one, it lets you work interactively. If you’d rather go multi-line, say for automation purposes, you can build something like this:
# ... other VMNetworkAdapter-related settings ... #
Once you’ve got the settings the way that you like them, perform the Live Migration:
Don’t forget that if the VM’s storage location(s) at the destination host will be different paths than on the source, you need to specify the location(s) when you make the Compare-VM call. Otherwise, you’ll get the networking part prepared for Move-VM, but then it will fail because of storage.
Changing the VLAN without a Compatibility Report
I tried for a while to generate a malleable compatibility report when the switch names match. You can run Compare-VM, of course. Doing so will get you a VMCompatibilityReport object. But, you won’t get the 33012 error entry that we need to modify. There’s no way for the VLAN itself to cause an error, because every Hyper-V switch supports VLANs 1-4094. The .Net objects involved (Microsoft.HyperV.PowerShell.VMCompatibilityReport and Microsoft.HyperV.PowerShell.VMCompatibilityError) do not have constructors that I can figure out how to call from PowerShell. I thought of a few ways to deal with that, but they were inelegant at best.
Instead, I chose to move the VLAN assignment out of the Live Migration:
$MovingVM = Move-VM -Name movingvm -DestinationHost desthost -Passthru # and other parameters like storage
A slightly different method would involve using Get-VM first, saving the result to $MovingVM, and then manipulating $MovingVM. I chose this method to save other tinkerers the trouble of exploring PassThru in this context: PassThru with Move-VM captures the original virtual machine, not the transferred virtual machine. Also, I didn’t strictly need to match by VMId; I chose that technique because virtual machine names are not guaranteed to be unique. So, you have some room to change this script to suit your needs.
Whatever modifications you come up with, you’ll wind up with a two-step operation:
Move the virtual machine to the target host
Change the VLAN
I hear someone overthinking it: if we’re accustomed to Live Migration causing only a minor blip in network connectivity, won’t this two-step operation cause a more noticeable delay? Yes, it will. But that’s not because we’ve split it into two steps; it’s because the VLAN is being changed. That will always cause a more noticeable interruption. The amount of effort required to fold the VLAN change into the Live Migration would not yield worthwhile results.
I should also point out the utility of the $MovedVM object. We focused on the VLAN and virtual network adapter in this article. With $MovedVM, you can modify almost any aspect of the virtual machine.
The title of this article describes the symptoms fairly well. You Live Migrate a virtual machine that’s backed by SMB storage, and the permissions shift in a way that prevents the virtual machine from being used. You’d have to be fairly sharp-eyed to notice before it causes problems, though. I didn’t catch on until virtual machines started failing because the hosts didn’t have sufficient permissions to start them. I don’t have a true fix, meaning that I can’t prevent the permissions from changing. However, I can show you how to eliminate the problem.
The root problem also affects local and Cluster Shared Volume locations, although the default permissions generally prevent blocking problems from manifesting.
I have experienced the problem on both 2012 R2 and 2016. The Hyper-V host causes the problem, so the operating system running on the SMB system doesn’t matter.
Symptom of Broken NTFS Permissions for Hyper-V
I discovered the problem when one of my nodes went down for maintenance and all of its virtual machines crashed. It only affected my test cluster, which I don’t keep a close eye on. That means that I can’t tell you when this became a problem. I do know that this behavior is fairly new (sometime in late 2016 or 1Q/2Q 2017).
Symptom 1: Cluster event logs will fill up with the generic access denied (0x80070005) message.
For example, Hyper-V-VMMS; Event ID 20100:
The Virtual Machine Management Service failed to register the configuration for the virtual machine '04C7BE1C-ECAC-4947-9D7D-775E28F3B76E' at '\\svstore\vms': General access denied error (0x80070005). If the virtual machine is managed by a failover cluster, ensure that the file is located at a path that is accessible to other nodes of the cluster.
Hyper-V-High-Availability; Event ID 21502:
Live migration of 'Virtual Machine svmanage' failed.
Virtual machine migration operation for 'svmanage' failed at migration destination 'svhv2'. (Virtual machine ID 974174B7-A3F2-471C-91C2-5081832ACB5A)
User 'NT AUTHORITY\SYSTEM' failed to create external configuration store at '\\svstore\vms': General access denied error. (0x80070005)
You will also have several of the more generic FailoverClustering IDs 1069, 1205, and 1254 and Hyper-V-High-Availability IDs 21102 and 21111 as the cluster service desperately tries to sort out the problem.
Symptom 2: Virtual machines disappear from Hyper-V Manager on all nodes while still appearing in Failover Cluster Manager.
Because the cluster can’t register the virtual machine ID on the target Hyper-V host, you won’t see it in Hyper-V Manager. The cluster still knows about it though. Remember that, even if they’re named the same, the objects that you see as Roles in Failover Cluster Manager are different objects than what you see in Hyper-V Manager. Don’t panic! As long as the cluster still knows about the objects, it can still attempt to register them once you’ve addressed the underlying problem.
I’m guessing that “helper” behavior gone awry has caused unintentional problems. When you Live Migrate a virtual machine, Hyper-V tries to “fix” permissions, even when they’re not broken. It adjusts the NTFS permissions for the host.
The GUI ACL looks like this:
The permission level that I set, and that I counsel everyone to set, is Full Control. As you can see, it’s been reduced. We click Advanced as the first investigative step and see:
The Access still only tells us Special, but we can see that inheritance did not cause this. Whatever changes the permissions is making the changes directly on this folder. This is the same folder that’s shared via SMB. Double-clicking the entry and then clicking the Show advanced permissions link at the right shows us the new permission set:
When I first found the permissions in this condition, I thought, “Huh, I wonder why/when I did that?” Then I set Full Control again. After the very next Live Migration, these permissions were back! Once I discovered that behavior, I tested other Live Migration types, such as using Cluster Shared Volumes. It does occur on those as well. However, the default permissions on CSVs have other entries that ensure that this particular issue does not prevent virtual machines from functioning. VMs on SMB shares don’t automatically have that kind of luck — but they can benefit from a similar configuration.
Permanently Correcting Live Migration NTFS Permission Problems
I don’t know why Hyper-V selects these particular permissions. I don’t know precisely which of those unchecked boxes cause these problems.
I do know how to prevent the problem from adversely affecting your virtual machines. In fact, even in the absence of the problem, I would label this as a “best practice” because it reduces overall administrative effort.
In Active Directory (I’ll use Active Directory Users and Computers; you could also use PowerShell), create a new security group. For my test environment, I call mine “Hyper-V Hosts”. In a larger domain, you’ll likely want more granular groups.
Select all of the Hyper-V hosts that you want in that new group. Right-click them and click Add to group.
In the Select Groups dialog, enter or browse to the group that you just created. Click OK to add them.
Restart the Workstation service on each of the Hyper-V hosts.
On the target SMB system, add the new group to the ACL of the folder at the root of the share. I personally recommend that you change both SMB and NTFS permissions, although the problem only manifests on NTFS. Grant the group Full Control.
You will now be able to Live Migrate and start virtual machines from this SMB share. If your virtual machines disappeared from Hyper-V Manager, use Failover Cluster Manager to start and/or Live Migrate them. It will take care of any missing registrations.
Why Does this Work?
Through group permissions, the same object can effectively appear multiple times in a single NTFS ACL (access control list). When that happens, NTFS grants the least restrictive set of permissions. So, while SVHV1’s specific ACE (access control entry) excludes Write attributes, the Hyper-V Hosts group’s ACE includes it. When NTFS accumulates all possible permissions that could apply to SVHV1, it will find an Allow entry for the Write attributes property (and the others not set on the ACE specific to SVHV1). If it found a Deny anywhere, that would override any conflicting Allow. However, there are no Deny settings, so that single Allow wins.
Do remember that when a computer accesses an NTFS folder through an SMB share, the permissions on that share must be at least as permissive as NTFS in order for access to work as expected. So, if the SMB permission only allows Read, then it won’t matter that the NTFS allows Full Control. When NTFS permissions and SMB permissions must be evaluated together, the most restrictive cumulative effect applies. I’m mostly telling you this for completeness; Hyper-V will not modify SMB permissions. If they worked before, they’ll continue to work. However, I do recommend that you add the same group with Full Control permissions to the share.
As I mentioned before, I recommend that you adopt the group membership tactic whether you need it or not. When you commission new Hyper-V hosts, you’ll only need to add them to the appropriate groups for SMB access to work automatically. When you decommission servers, you won’t need to go around cleaning up broken SID ACEs.
Hello once again everyone! Back on June 27th, we put on a webinar focused on helping Hyper-V administrators migrate to the VMware platform. I find that this is always something of a contentious topic, no matter which direction the migration goes. VMware to Hyper-V, or Hyper-V to VMware; it doesn’t matter. Everyone is quite passionate about their chosen hypervisor, it seems. Despite this, it’s actually a very important skill set to have. Many IT pros today find themselves in multi-hypervisor environments for a number of potential reasons…
Company makes an acquisition and inherits a different hypervisor
Company makes a policy decision to only support a specific vendor
IT Pro changes jobs into such an environment
You’re a service provider supporting both platforms
Whatever the reason, there are many valid situations where you may have to make a migration like this. So, between this webinar and another one we did some time ago, we now have resources showing you how to move workloads from each platform to the other. Hopefully, they will be of great use to you!
With that in mind, as usual, we have included a recording of the webinar (below) and a link to the slide deck, so that you have access to this information if you need to reference it later or missed the scheduled webinar. Additionally, below the recording we’ve included a list of the questions asked during the Q&A and their associated answers.
Let’s take a look.
Revisit the webinar: How to Migrate to VMware for Hyper-V Administrators
Q: Are there any considerations needed when migrating a version 1 Hyper-V VM to VMware vs. version 2?
A: From a migration perspective, they are treated the same.
Q: Are there PowerShell cmdlets to be used with the VMware Converter Stand-Alone for bulk conversion jobs?
A: No official PowerCLI cmdlets exist for this, but VMware does have an SDK HERE that can be used to build different types of conversion jobs if desired, though the jobs functionality in the converter utility is usually enough for most use cases.
Q: What PowerCLI command would I use to get detailed event information for a VM like who powered it off?
A: Get-VIEvent is the cmdlet for you! Run Get-Help Get-VIEvent -Full for fully detailed syntax information.
Q: Can you run a 2-node VSAN cluster?
A: As of vSphere 6.5, it is possible to run a 2-node configuration. More information on this type of setup can be found HERE.
Q: I’ve used the converter before and found that when I boot the newly created VM, it gives me the “OS Not Found” error… what gives?
A: I’ve seen this before, and the most common causes are a lack of storage drivers on the machine being converted or an incorrect boot order after the conversion job. In the advanced VM properties in vSphere, you can force entry into the BIOS upon the next reboot to fix this.
Q: Are there issues with duplicate IPs on the network after the conversion job?
A: You could potentially run into this if you allow both the source machine and the newly created VM to be powered on and on the same network segment at the same time. The converter provides options to power down the source machine and likewise power on the new VM if needed. Just something to plan for when doing conversions.
Q: Are there recommendations for offline P2V software?
A: VMware Converter Standalone can be used for this use case as well.
Q: Anything I need to be aware of when it comes to Integration Components?
A: Absolutely. As in Hyper-V, VMware has a software package that gets installed in the guest VM to provide drivers, management, and orchestration with the host system. VMware’s equivalent of the integration components is called VMware Tools, and it can be installed from the vSphere client UI by right-clicking on a VM. You will want this installed on every VM. The Hyper-V integration components should be removed after the conversion job is successful.
Q: What about migrating VMs in Hyper-V that are part of a failover cluster?
A: The process is the same as for a stand-alone Hyper-V host.
As always, thanks for reading, and if you have any follow-up questions or think of any new ones based on the content, be sure to let us know in the comments section below and we’ll be sure to get back to you!
Personally, I find Microsoft’s recent moves to improve support for Linux and its overall relationship with open source to be very exciting. I’ve taken full advantage of these new opportunities to rekindle my love for the C and C++ languages and to explore Linux anew. Since my general line of work keeps me focused on the datacenter, I’ve similarly kept tight focus on server Linux builds and within the confines of Microsoft’s support matrix. Sure, I’ve had a good time learning other distributions and comparing them to what I knew. But, I also realize that I’ve been restricting myself to the safe walled garden of enterprise-style deployments. It’s time for something new. For my first step outside the walls, I’m going to take a crack at Kali Linux.
What is Kali Linux?
The Kali Linux project focuses on security. In most of the introductory literature, you’ll find many references to “penetration testing”. With a bit of searching, you’ll find a plethora of guides on using Kali to test the strength of your Windows computers.
The distribution itself is based on Debian. Truthfully, even though I’d like to tell you that we’re going to stray far, far away from the beaten path, we won’t. Almost no one picks up a copy of the Linux kernel and builds an all-new distribution around it. Nearly every maintained distribution connects somewhere into the general distribution categories on Microsoft’s list. Anything else falls under the category of a “source-based” distribution (like Gentoo). I’d need to drastically improve my Linux knowledge to help anyone with one of those.
Why use Kali Linux?
The distributions that I tend to cover in these articles fall best under the category of “general purpose”. In that respect, they have much in common with Windows and Windows Server. You stand up the operating system first, then install whatever applications and servers you need it to operate or provide. Web, DNS, storage, games — anything goes.
Kali Linux has a purpose. You could use it as a general purpose platform, if you want. That’s not an optimal use of the distribution, though. Kali is designed to probe the strength of your environment’s computer security. During install, there won’t be any screens asking you to pick the packages you want to install. You won’t get an opportunity to tick off boxes for LAMP or DNS servers. If you want those things, look at other distributions. Kali Linux is here to pentest, not hand out IP addresses. Err… well… I guess rogue DHCP qualifies as security testing… But, you get the idea.
A natural question, then, is, “So, Eric, what do you know about pentesting?” The answer is: very little. Where I work, we have a security team. I can notify them when I build a new system, and they’ll test it and send me a report. I accept that I will never rise to expert level, if for no other reason than because I don’t have the time. Still, I should know more than I do. Many seasoned sysadmins would be surprised at how easily an attacker can break into a system set at defaults. Since the people behind the Kali Linux project have done all the work to make a convenient entry point, I’m going to take advantage of it. I recommend that you do the same.
Why Use Client Hyper-V for Kali Linux?
I won’t tell you why you should use a Microsoft hypervisor as opposed to some other hypervisor. I use Microsoft platforms and services for almost every aspect of my home and work computing, so my natural choice is to stick with it. If your story is different, then stay with what you know.
I will tell you that Client Hyper-V makes more sense than server Hyper-V. I’ll make an exception for those of you that run Windows Server as your primary desktop. That’s not a thing that I would do, but hey, no judgment here.
Why I use Kali Linux under Client Hyper-V:
Kali Linux is best used interactively with a desktop interface. If I were to run Kali from within my datacenter, I’d need to use VMConnect against a remote host. I’ve never liked that.
Most attacks won’t come from within the datacenter, so why would your primary penetration testing tool live there? Put it into a user network. Run it from a computer that can access your wired and wireless networks.
Hyper-V allows you to perform all sorts of spoofing quickly and easily. You can flip MACs and hop networks in moments. You can hide Kali behind NAT to fool many network access protection schemes and then, within seconds, drop it on the network alongside the host OS.
I don’t want to replace my primary desktop. I don’t necessarily need to use any hypervisor; I could just install Kali right to my desktop. I could stand up a second physical machine right next to me and use Kali on that. But, this is the sort of thing that hypervisors were built for; more computers in less space. I can keep my general purpose desktop and have the special-purpose Kali running happily together.
Downloading Kali Linux
As a side effect of having a specific purpose, Kali Linux does not provide many install flavors. Start at the Kali Linux homepage. Click the Downloads header at the top of the page. Behold — the list. It looks long, but there’s really not that much there. You’re mostly picking the bitness (most are 64-bit) and the user interface experience that suits you.
This article uses the standard 64-bit distribution of Kali Linux 2017.1. If you choose something else, your experience may be different.
Verifying the ISO File Hash
Since we’re talking security, let’s start by verifying our file. On the Kali download page, next to the file link, you’ll find its SHA256 hash:
If you’re OK with “good enough”, you can do a quick ‘n’ dirty eye scan — basically, just visually verify that the codes look more or less the same. Even minor changes to a file will throw off the hash substantially. But, it’s not impossible to have two files with a similar hash. And, since we’re talking security, trust no one.
In your PowerShell prompt, do exactly this:
Ensure that you have the Hash output of Get-FileHash for the downloaded ISO on screen and that you are at the beginning of a new command line; no text entered, just a prompt.
Type a single quote mark: ‘
Use the mouse to highlight the Hash output from the previous command. Press [Enter] to put it on the clipboard. Right-click to paste. That should place the code immediately after the single quote.
Type another single quote mark to close off the file hash.
Enter a space, then
-match, then another space.
Type another single quote mark to start a new string.
Highlight the corresponding hash code on the Kali download page. Switch back to the PowerShell prompt and right-click to paste it.
Type another single quote mark to close off the published hash.
This is what you should see (with possibly different hash values):
If you get an error, check your input. If you get False, check your input. If the input is OK, then your file does not match the expected hash. Most likely, the download corrupted. Maybe somebody hijacked it. Either way, get another.
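If you’d rather do the comparison from a Linux shell instead of PowerShell, sha256sum gets you the same answer. This is a small sketch; verify_iso is an illustrative name, and you still paste the published hash from the download page yourself:

```shell
# Compare a file's SHA-256 hash against a published value (case-insensitive;
# sha256sum emits lowercase hex, published hashes are sometimes uppercase).
verify_iso() {
    actual=$(sha256sum "$1" | awk '{print $1}')
    expected=$(printf '%s' "$2" | tr 'A-F' 'a-f')
    if [ "$actual" = "$expected" ]; then echo match; else echo MISMATCH; fi
}

# Usage (placeholder hash; paste the real one from the Kali download page):
# verify_iso kali-linux-2017.1-amd64.iso '<published sha256 hash>'
```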
Installing Kali Linux as a Guest in Client Hyper-V
This script creates a dynamically-expanding VHDX using a 1 megabyte block size, in accordance with Microsoft’s recommendation. A commenter on another of my Linux articles pointed out that the 1MB block size does not result in significant space savings on every Linux distribution. I have not tested the difference on Kali. It uses ext4, so I suspect that you’ll want the 1MB block size.
I used the script like this:
.\New-LinuxVM.ps1 -VMName dtkali -VMStoragePath 'D:\VMs\' -VHDStoragePath 'D:\VMs\dtkali\Virtual Hard Disks\' -InstallISOPath D:\ISO\software\kali-linux-2017.1-amd64.iso -VMSwitchName vSwitch -StartupMemory 1GB -MinimumMemory 512MB -MaximumMemory 2GB -VHDXSizeBytes 100GB
It was necessary to pre-create the target VHDX path. That’s one of the deficiencies in the script. It’s also necessary to turn off Secure Boot after creation.
During use, I learned that Kali wants much more memory than 2GB. These memory numbers are somewhat laughable; be prepared to turn them up. It does seem to run well enough at 2GB, but I suspect that 4GB would be a more reasonable working minimum.
Installing Kali Linux from ISO
In case you missed it from the previous section: disable Secure Boot. Hyper-V does not include Kali’s boot signature. I did enable TPM support for it, but I don’t yet even know if Kali will make use of it.
From here, I doubt that you really need much from me. Installation of Kali is very straightforward. It shares one annoyance with Ubuntu: it has an obnoxious number of input screens broken up by long file operations, rather than cohesive input gathering followed by completion operations.
An installation walkthrough:
You’re given many options right from the start. I simply chose to Start installer: Note that several errors regarding not being able to find anything on SDA will scroll by; don’t worry about them. That’s normal for an empty disk.
Choose the installation language. I also want to draw attention to the Screenshot button; this appears on every page, so you can store install images for later retrieval:
Choose your location. Be aware that the options you see are determined by your language selection! The following two screenshots show the outcome of choosing English and French in step 2:
Choose your keyboard layout:
The installer will then load some files and perform basic network configuration. I noticed that it performed IP assignment from DHCP; I did not test to see what happens if it can’t reach a DHCP server.
After the component load, provide a host name. It appears to automatically choose whatever name DHCP held for that IP last. Only provide the name, no domain.
Provide your domain name. You can invent one if you aren’t using a domain, but you must enter something.
Enter a password for root. Even though it mentions user creation, you aren’t creating a standard user account like you would in other distributions.
Choose your time zone. Options will be selected based on your earlier region choices. Why it appears at this point of the installer, I certainly do not know.
Choose how you want your disk to be laid out and formatted. I personally choose Guided – use entire disk because I’m not the biggest fan of LVM. Any of the first three choices are fine if you’re new and/or not especially picky.
Confirm disk usage:
Then confirm partition usage:
Confirm disk usage:
And again… (this installer needs a lot of polishing):
Now, your formatting options will be applied and files will be copied. This will take a while and there will be more questions, so don’t go too far.
Now you need to choose whether or not you’ll allow software to be downloaded from the Internet (or a specially configured mirror). If you choose no, you’ll need to manually supply packages or add a repository later.
If you need to enter proxy information, do so now:
You’ll have a few more minutes of configuration, then what appears to be a completion screen.
There’s still more stuff to do, though:
As soon as that part completes, the system will reboot and launch into your new Kali environment.
Getting Started with Kali
Here’s your login screen! Remember to use root, because you didn’t create a regular user:
And finally, your new desktop:
I know that you’re anxious to start exploring this wonderful new environment, but we’ve got a bit of housekeeping to take care of first.
At the left, in the quick launch bar, hover over the second icon from the top. It should be a black square and the tool tip should say Terminal. Click it to launch a terminal window:
Since we’re running as root, the terminal will already be running with the highest privileges. You can tell by the # prompt, as opposed to the $ prompt of a regular user.
Install Extra Hyper-V Services
The required Hyper-V components are already enabled. Let’s add the KVP, VSS, and file copy services. Enter:
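The elided command probably looks like the following. On Kali (which is Debian-based), the extra guest services ship together in a single package; the package name below follows Debian's naming and is my assumption — verify with apt search hyperv if it differs on your build:

```shell
sudo apt update
# hyperv-daemons provides hv_kvp_daemon, hv_vss_daemon, and hv_fcopy_daemon
sudo apt install hyperv-daemons
```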
This installs the file copy, KVP, and VSS services. Whether or not they start depends on whether or not the relevant integration services are enabled. The default Hyper-V setting enables all except Guest Services, so all except the file copy daemon should start automatically. Use service --status-all | grep hyperv to find out:
Change the Scheduler to NOOP
Linux has its own I/O scheduler, but so does Hyper-V. Turn off Linux’s for the best experience.
Edit the GRUB loader:
This will load the GRUB configuration file. Find the line that says:
Change it to:
Press [CTRL]+[X]. You’ll then need to press [Y] to save the changes, then [Enter] to indicate that you want to save the data back to the file you found it in. That will leave you back at the prompt.
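Reconstructing the elided edit: the file path and the kernel-parameter line below are the usual Debian/Kali defaults, so verify them against your own system before saving. Note that on Debian-family systems the change takes effect only after regenerating the GRUB configuration:

```shell
sudo nano /etc/default/grub

# Find the line that says:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet"
# Change it to:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=noop"

# After saving, rebuild the GRUB configuration and reboot:
sudo update-grub
```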
Exploring Kali Linux
You have now completed all of your installation and preparation work! It’s time to take Kali for a spin!
If I didn’t make this clear enough earlier, I’ll be crystal clear right now: I don’t know that much about penetration testing. I recognize many of the names of the tools in Kali, but the only one I have a meaningful level of experience with is Wireshark. So, don’t ask me what this stuff does. That’s why we have the Internet and search engines.
Let’s start with the boring things to get them out of the way. In the top right you’ll find some system status icons. Click and you’ll get the system menu:
Hyper-V doesn’t (yet?) enable audio out of Linux systems, so the volume slider does nothing.
Where my screenshot shows Wired Connected, you’ll find your network settings. Click it to expand the menu where you can access them.
Where my screenshot shows Proxy None, you can click to access your proxy settings.
Where my screenshot shows root, you can click for a Log Out option and a link to your logged on user’s account settings.
The wrench/screwdriver icon takes you to the system settings screen. It’s analogous to Windows’ Control Panel. I don’t think you’ll need me to explain those items, so I’ll just recommend that you create users aside from root if you intend to use this desktop for more than just pentesting.
The padlock icon locks the desktop. From a lock screen, just press [Enter] to get a login prompt.
The power button icon takes you to a cancel/restart/shutdown dialog.
Move left from the system area, and you’ll see a camera icon (it appears in the screenshot above). Click that, and you can record your screen.
Now, the fun stuff! In the top left, you’ll see Applications and Places menu items. Places includes some shortcuts to common file system locations; it’s sort of like the Quick Access section in Windows Explorer. I’ll leave that to you to play with. Click Applications. You’ll immediately see why Kali is not a garden-variety distribution:
The Usual Applications group gave me a chuckle. You’ll find all the things that you’d find on a “normal” distribution there.
You met the quick launch dash earlier, when you started the terminal. It sits at the left of the screen and contains everything marked as a favorite. It will also include icons for running applications. The nine-dot grid at the bottom opens up Kali/Gnome’s equivalent to Windows’ Start menu. From there, you can launch any item on your system. You can also add items to the Favorites/Dash area:
You’ve got your shiny new Kali install ready to roll. Kick the tires and see what you can accomplish.
Oh, and remember that we’re the good guys. Use these tools responsibly.
I’ve provided some articles on monitoring Hyper-V using Nagios. In all of them, I’ve specifically avoided the topic of securing the communications chain. On the one hand, I figure that we’re only working with monitoring data; we’re not passing credit card numbers or medical history.
On the other hand, several of my sensors use parameterized scripts. If I didn’t design my scripts well, then perhaps someone could use them as an attack vector. Rather than pretend that I can ever be certain that I’ll never make a mistake like that, I can bundle the communications into an encrypted channel. Even if you’re not worried about the scripts, you can still enjoy some positive side effects.
What’s in this Article
The end effects of following this article through to completion:
You will access your Nagios web interface at an https:// address.
You can access your Nagios web interface using your Active Directory domain credentials. You can also allow others to use their credentials. You can control what any given account can access.
Nagios will perform NRPE checks against your Hyper-V and Windows Server systems using a secured channel
As you get started, be advised that this is a very long article. I did test every single line, usually multiple times. You will almost always need to use sudo. I tried to add it everywhere it was appropriate, but each time I proofread this article, I find another spot that I missed. You might want to just sudo -s right at the beginning and be done with it.
When I started working on this article, I fully intended to utilize a Microsoft-based certificate authority. Conceptually, PKI (public key infrastructure) is extremely simple. But, from that simplicity, implementers run off and make things as complicated as possible. I have not encountered any worse offender than Microsoft. After several days struggling against it, I ran up against problems that I simply couldn’t sort out. After trying to decipher one too many of Microsoft’s cryptic and non-actionable errors (“Error 0x450a05a1: masking tape will not eat pearl soufflé from the file that cannot be named”), I finally gave up. So, while it should be possible to use a Microsoft CA for everything that you see here, I cannot demonstrate it. Be aware that Microsoft’s tools tend to output DER (binary) certificates. Choose the Base64 option when you can. You can convert DER to Base64 (PEM).
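As an illustration of that conversion, openssl can rewrite a DER certificate as Base64/PEM in one command. The demo below generates a throwaway certificate first so that it is self-contained; with a real Microsoft-exported file you would only need the final x509 line (all file names here are hypothetical, and the files land in your current directory):

```shell
# Create a throwaway self-signed certificate purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=demo"
# Simulate a DER export, as Microsoft tools tend to produce
openssl x509 -in cert.pem -outform der -out cert.der
# The actual conversion: DER (binary) in, Base64/PEM out
openssl x509 -inform der -in cert.der -out converted.pem
head -n 1 converted.pem
# prints: -----BEGIN CERTIFICATE-----
```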
Rather than giving up entirely, I re-centered myself and adopted the following positions:
Most of my readers probably don’t want to go to the hassle of configuring a Microsoft CA anyway; many of you don’t have the extra Windows Server licenses for that sort of thing, either
We’re securing monitoring traffic, not state secrets. We can shift the bulk of the security responsibility to the endpoints
To make all of this more secure, one simply needs to use a more secure CA. The remainder of the directions stay the same
Some things could be done differently. In a few places, I’m fairly certain that I worked harder than necessary (i.e., could have used fewer arguments to openssl). Functionality and outcome were most important.
In general, certificates are used to guarantee the identity of hosts. Anything else (host names, IP addresses, MAC addresses, etc.) can be easily spoofed. In this case, we’re locking down a monitoring system. If someone manages to fool Nagios… uh… OK then. I am more concerned that, if you use the scripts that I provide, we are transmitting PowerShell commands to a service running with administrative privileges, and the transmission is sent in clear text. There are many safeguards to prevent that from being a security risk, but I want to add layers on top of that. So, while host authentication is always a good thing, my primary goal in this case is to encrypt the traffic. It’s on you to take precautions to lock down your endpoints. Maintain a good root password on your Linux boxes, maintain solid password protection policies, etc.
You need a Linux machine running Nagios. I wrote one guide for doing that on Ubuntu. I wrote another guide for doing that on CentOS. I have a third article forthcoming for doing the same with OpenSUSE. It’s totally acceptable for you to bring your own. The distributions aren’t so radically different that you won’t be able to figure out any differences that survive this article.
Also, for any of this to make sense, you need at least one Windows Server/Hyper-V Server system to monitor.
Have Patience! I can go on and on all day about how Microsoft makes a point of avoiding actionable error messages. In this case, they are far from alone. I lost many hours trying to decipher useless messages from openssl and NRPE. Solutions are usually simple, but the problems are almost always frustrating because the authors of these tools couldn’t be bothered to employ useful error handling. NSClient++ treated me much better, but even that let me down a few times. Take your time and remember that, even though there are an absurd number of configuration points, certificate exchange is fundamentally simple. Whatever problem you encounter is probably a small one.
Step 1. Acquire and Enable openssl
Every distribution that I used already had openssl installed. Just to be sure, use your distribution’s package manager. Examples:
sudo apt install openssl
yum install openssl
zypper install openssl
You’ll probably get a message that you already have the package. Good!
Next, we need a basic configuration. You should automatically get one along with the default installation of openssl. Look for a file named “openssl.cnf”. It will probably be in /etc/ssl or /usr/lib/ssl. Linux can help you:
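For example (the search paths below are the likely candidates; widen them if nothing turns up):

```shell
# Look for the default openssl configuration file
find /etc /usr/lib/ssl -name openssl.cnf 2>/dev/null
```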
If you haven’t got one, then maybe removing and reinstalling openssl will create it… I never tried that. You could try this site: https://www.phildev.net/ssl/opensslconf.html. I’ll also provide the one that I used. Different sections of the file are used for different purposes. I’ll show each portion in context.
Set Up Your Directories and Environment
You will need to place your certificate files in a common place. First, look around the location where you found the openssl.cnf file. Specifically, check for “private” and “certs” directories. If they don’t exist, you can make some.
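If you need to create them, something like this works (assuming /etc/ssl as the base; adjust to wherever your openssl.cnf lives):

```shell
sudo mkdir -p /etc/ssl/private /etc/ssl/certs
# key material should not be world-readable
sudo chmod 700 /etc/ssl/private
```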
To keep things simple, I just dump everything there on systems that need a directory created. I will write the remainder of this document using that directory. If your system already has the split directories, use “private” to hold key files and “certs” to hold certificate files. Note that if you find these files underneath a “ca” path, that is for the certificate authority, not the client certificates that I’m talking about. I’ll specifically cover the certificate authority in the next section.
Step 2. Set Up a Certificate Authority
In this implementation, the Linux system that runs Nagios will also host a certificate authority. We’ll use that CA to generate certificates that Nagios and NRPE can use. Some people erroneously refer to those as “self-signed” because they aren’t issued by an “official” CA. However, that’s not the definition of “self-signed”. A self-signed certificate doesn’t have an authority chain. In our case, that term will apply only to the CA’s own certificate, which will then be used to sign other certificates. All of those will be authentic, not-self-signed certificates. As I describe it, you’ll use the same system for both your CA and your Nagios installation, but you could just as easily spin up another Linux system to be the CA. You would only need to copy the CSR, certificate, and key files between the systems as necessary.
Set Up Your Directories and Environment
You need places to put your CA’s files and certificates. openssl will require its own particular files. If you found some CA folders near your openssl.cnf, use those. Otherwise, you can create your own.
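A sketch of the scaffolding that openssl’s ca tool expects, using the /var/ca layout that later steps in this article reference (the serial and index.txt files are standard openssl ca bookkeeping; treat the exact names as assumptions and match them to your openssl.cnf):

```shell
sudo mkdir -p /var/ca/newcerts /var/ca/private
# openssl ca tracks issued certificates with these two files
echo '01' | sudo tee /var/ca/serial
sudo touch /var/ca/index.txt
sudo chmod 700 /var/ca/private
```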
Configure your default openssl.cnf (sometimes openssl.conf). Note the file locations that I mentioned in the previous section. Mine looks like this:
HOME = .
RANDFILE = $ENV::HOME/.rnd
# Extra OBJECT IDENTIFIER info:
#oid_file = $ENV::HOME/.oid
oid_section = new_oids
# To use this configuration file with the "-extfile" option of the
# "openssl x509" utility, name here the section containing the
# X.509v3 extensions to use:
# extensions =
# (Alternatively, use a configuration file that has only
# X.509v3 extensions in its main [= default] section.)
[ new_oids ]
# We can add new OIDs in here for use by 'ca', 'req' and 'ts'.
You will first be asked to answer a series of questions. If you filled out the fields correctly, then you can just press [Enter] all the way through them. You will then be asked to provide a password for the private key. Even though we aren’t securing anything of earth-shattering importance, take this seriously.
Your CA’s private key is the most vital file out of all that you’ll be creating. We’re going to lock it down so that it can only be accessed by root:
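A hedged sketch of that lockdown; the key’s path and file name are my assumptions based on the /var/ca layout used here:

```shell
sudo chown root:root /var/ca/private/ca_key.pem
# read-only, and only for root
sudo chmod 400 /var/ca/private/ca_key.pem
```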
The public key is included in the public cert file (ca_cert.pem). That can safely be read by anyone, anywhere, any time.
For bonus points, research setting up access to your new CA’s certificate revocation list (CRL). I did not set that up for mine.
Step 3. Set Your Managing Computer to Trust the CA
Your management computer will access the Nagios site that will be secured by your new CA. Therefore, your management computer needs to trust the certificates issued by that CA, or you’ll get warnings in every major browser.
For a Linux machine (client, not the Nagios server), check to see if /etc/ssl/certs contains several files. If it does (Ubuntu, openSUSE), just copy the CA cert there. You can rename the file so that it stands out better, if you like. Not every app on Linux will read that folder; you’ll need to find directions for those apps specifically.
If your Linux distribution doesn’t have that folder (CentOS), then look for /etc/pki/ca-trust/source/anchors. If that exists (CentOS), copy the certificate file there. Then, run:
sudo update-ca-trust enable
For a Windows machine:
Use WinSCP to transfer the ca_cert.pem file to your Windows system (not the key; the key never needs to leave the CA).
Run MMC.EXE as administrator.
Click File->Add/Remove Snap-in.
Choose Computer Account and click Next.
Leave Local Computer selected and click Finish.
Click OK back on the Add/Remove Snap-ins dialog.
Back in the main screen, right-click Trusted Root Certification Authorities. Hover over All Tasks, then click Import.
On the Welcome screen, you should not be allowed to change the selection from Local Machine.
Browse to the file that you copied over using WinSCP. You’ll either need to change the selection to allow all files or you’ll need to have renamed the certificate to have a .cer extension.
Choose Trusted Root Certification Authorities.
Click Finish on the final screen.
Find your new CA in the list and double-click it to verify.
The above steps can be duplicated for other computers that need to access the Nagios site. For something a bit more widespread, you can deploy the certificate using Group Policy Management Console. In the GPO’s properties, drill down to Computer Configuration\Windows Settings\Security Settings\Public Key Policies\Trusted Root Certification Authorities. You can right-click on that node and click Import to start the same wizard that you used above.
Note: Internet Explorer, Edge, and Chrome will use trusted root certificates from the Windows store. The authors of Firefox have decided that reinventing the wheel and maintaining a completely separate certificate store makes sense somehow. You’ll have to configure its trusted root certificate store within the program.
Step 4. Secure the Nagios Web Site
If you followed any of my earlier guides, you’re accessing your Nagios site over port 80 with Basic authentication. That means that any moderately enterprising soul can snag your Nagios site’s clear-text password(s) right out of the Ethernet. You have several options to fix that. I chose to use an SSL site while retaining Basic authentication. Your password still travels, but it travels encrypted. As long as you protect the site’s private key, an attacker should find cracking your password prohibitively difficult.
You could also use Kerberos authentication to have the Nagios site check your credentials against Active Directory. When that works, it appears that your password is protected, even using unencrypted HTTP. However, I could not find an elegant way to combine that with the existing file-based authentication. So, if you’re one of my readers at a smaller site with only one or two domain controllers and you lose your domain for some reason, you’d also lose your ability to log in to your monitoring environment. Also, managing Kerberos users in Nagios is kind of ugly. I didn’t find that a palatable option.
So, we’re going to keep the file-based authentication model and add LDAP authentication on top of it. You’ll be able to use your Active Directory account to log in to the Nagios site, but you’ll also be able to fall back to the existing “nagiosadmin” account when necessary.
One thing that I don’t demonstrate is updating the firewall to allow for port 443. Whatever directions you used to open up port 80, follow those for 443.
Create the Certificate for Apache
If you only use the one site address, then you can continue using the same openssl.cnf file from earlier steps. So, if I were using “https://svlmon1.siron.int/nagios” to access my site, then I would just proceed with what I have. However, I access my site with “https://nagios.siron.int”. I also have a handful of other sites on the same system. I (and you) could certainly create multiple certificates to handle them all. I chose to use Subject Alternative Names instead. That means that I create a single certificate with all of the names that I want. It means less overhead and micromanagement for me. Again, we’re not hosting a stock exchange, so we don’t need to complicate things.
You have two choices:
Edit your existing openssl.cnf file with the differences for the new certificate(s).
Copy your existing openssl.cnf file, make the changes to the copy, and override the openssl command to use the copied file.
I suppose a third option would be to hack at the openssl command line to manually insert what you want. That requires more gumption than I can muster, and I don’t see any benefits. I’m going with option 2.
Of course, it’s not a requirement to use nano. Use the editor you prefer.
The following shows sample additions to the file. They are not sufficient on their own!
[ req ]
req_extensions = v3_req # this line is commented out in the sample; remove that comment mark
[ v3_req ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = svlmon1.siron.int
DNS.2 = svlmon1
DNS.3 = nagios
DNS.4 = nagios.siron.int
DNS.5 = mrtg
DNS.6 = mrtg.siron.int
DNS.7 = cacti
DNS.8 = cacti.siron.int
IP.1 = 192.168.25.128
The req_extensions line already exists in the default sample config, but has a hash mark in front of it to comment it out. Remove that (or type a new line, whatever suits you). The [ v3_req ] section probably exists; whatever it’s got, leave it. Just add the subjectAltName line. The [ alt_names ] segment won’t exist (probably). Add it, along with the DNS and IP entries that you want.
Note: The certificates we create now are not part of the CA. I sudo mkdir /var/certs to hold non-CA cert files. That’s a convenience, not a requirement. Follow the guidance from earlier.
If you’re copy/pasting, note that I used a .cnf file from /etc/ssl. Your structure may be different.
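The elided commands probably resemble the following; the file names, the copied config name, and the key size are my assumptions, not necessarily the author's exact invocation:

```shell
# Generate a private key and a certificate signing request using the copied config
sudo openssl req -new -newkey rsa:2048 -nodes \
  -keyout /var/certs/nagios-key.pem -out /var/certs/nagios.csr \
  -config /etc/ssl/openssl-nagios.cnf
# Have the CA sign the request, carrying the v3_req extensions (the SANs) through
sudo openssl ca -in /var/certs/nagios.csr \
  -config /etc/ssl/openssl-nagios.cnf -extensions v3_req
```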
First, openssl will ask you to supply the password for the CA’s private key. Next, you’ll be shown a preview of the certificate and asked twice to confirm its creation.
The generated file will appear in your CA’s configured output directory (the one used in these directions is /var/ca/newcerts). It will use the next serial from your /var/ca/serial file as the name. So, if you’re following straight through, that will be /var/ca/newcerts/01.pem. You can ls /var/ca/newcerts to see them all. The highest number is the one that was just generated. Verify that it’s yours:
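For example, print the certificate’s subject and validity dates (path per the 01.pem example above):

```shell
sudo openssl x509 -in /var/ca/newcerts/01.pem -noout -subject -dates
```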
Transfer the certificate to whatever location that you’ll have Apache call it from, and, for convenience, rename it:
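Something like the following; the destination directory and the new name are only illustrative:

```shell
sudo cp /var/ca/newcerts/01.pem /var/certs/nagios-cert.pem
```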
Tell Apache to Use SSL with the Certificate
Apache allows so much latitude in configuration that it appears to be complicated. Every distribution that installs Apache from repositories follows its own conventions, making things even more challenging. I’ll help guide you where possible. If you feel lost, just remember these things:
The last setting that Apache reads for a given directive overrides all previous settings of that directive
Apache reads files in alphabetical order
Apache doesn’t care about file names, only extensions
So, any time that a configuration doesn’t work, a later setting is overriding it. It might be further down in the same file or it might be in another file, but it’s out there somewhere. It might be in a file with a seemingly related name, but it might not be.
Start by locating the master Apache configuration file.
Ubuntu and OpenSUSE: /etc/apache2/apache2.conf
CentOS: /etc/httpd/conf/httpd.conf
This file will help you to figure out what extensions qualify a configuration file and which directories Apache searches for those configuration files.
We will take these basic steps:
Instruct Apache to listen on ports 80 and 443
Instruct Apache to redirect all port 80 traffic to port 443
Secure all 443 traffic with the certificate that we created in the preceding section
Enable SSL in Apache
Your distribution probably enabled SSL already. Verify on Ubuntu with apache2 -M | grep ssl. Verify on CentOS/OpenSUSE with httpd -M | grep ssl. If you are rewarded with a return of ssl_module, then you don’t need to do anything else.
To enable Apache SSL on Ubuntu/OpenSUSE:
sudo a2enmod ssl.
To enable Apache SSL on CentOS:
sudo yum install mod_ssl.
Configure SSL in Apache
We could do all of steps 2-4 in a single file or across multiple files. I tend to do step 2 in a place that makes sense for the distribution, then steps 3 and 4 in the primary site configuration file. We could also spread out certificates across multiple virtual hosts. I’m not hosting tenants, so I tend to use one virtual host per site, but each uses the same certificate.
Remember, it doesn’t really matter where any of these things are set. The only thing that matters is that they are processed by Apache after any conflicting settings. Do your best to simply eliminate any conflicts. For instance, CentOS puts a lot of SSL settings in /etc/httpd/conf.d/ssl.conf. For that distribution, I left all of the settings it creates at their defaults but commented out the entire VirtualHost item. I strongly encourage you to create backup copies of any files before you modify them. Ex:
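For instance (using the CentOS ssl.conf mentioned above; adjust the path for your distribution):

```shell
sudo cp /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
```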
Somewhere, you need a Listen 443 directive. Most distributions will automatically set it when you enable SSL (look in ports.conf or a conf file with “ssl” in the name). However, I’ve had a few times when that only worked for IPv6. If you can’t get 443 to work on IPv4, try Listen 0.0.0.0:443. This resolves step 2.
Next, we need a port 80 to 443 redirect. Apache has an “easy” Redirect command, but it’s too restrictive unless you’re only hosting a single site. In my primary site file, I create an empty port 80 site that redirects all inbound requests to an otherwise identical location using https:
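A reconstruction of such an empty port 80 redirect site. The host name comes from this article’s earlier examples, and the mod_rewrite approach is my sketch of the described behavior (a 301 to the otherwise identical https URL), not necessarily the author’s exact file; mod_rewrite must be enabled (a2enmod rewrite on Ubuntu/OpenSUSE):

```apacheconf
<VirtualHost *:80>
    ServerName nagios.siron.int
    RewriteEngine On
    # Send a 301 back to the browser with the same URL, but https
    RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
</VirtualHost>
```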
This sequence sends a 301 code back to the browser along with the “corrected” URL. As long as the browser understands what to do with 301s (every modern browser does), then the URL will be rewritten right in the address bar. If you’re stuck for where to place this, I recommend:
On Ubuntu: /etc/apache2/sites-available/000-default.conf (symlinked from /etc/apache2/sites-enabled/)
On CentOS: /etc/httpd/conf.d/sites.conf
On OpenSUSE: /etc/apache2/default-server.conf
Wherever you put it, you need to verify that there are no other virtual hosts set to port 80. If there are, comment them out. You could also replace the 80 with 443, provided that you also add in the certificate settings that I’m about to show you.
After setting up the 80->443 redirect, you next need to configure a secured virtual host. It must do two things: listen on port 443 and use a certificate to encrypt traffic. Mine looks like this:
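A minimal sketch of such a virtual host; the certificate and key file names are hypothetical, so substitute whatever you named yours:

```apacheconf
<VirtualHost *:443>
    ServerName nagios.siron.int
    SSLEngine on
    SSLCertificateFile /var/certs/nagios-cert.pem
    SSLCertificateKeyFile /var/certs/nagios-key.pem
</VirtualHost>
```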
If you have other sites on the same host, create essentially the same thing but use the ServerName/ServerAlias fields to differentiate. For instance, my MRTG site is on the same server:
If you want, you can certainly use the instructions from the preceding section to create as many additional certificates as necessary for your other sites.
You’ve finished the hard work! Now just restart Apache: service apache2 restart on systems that name the service “apache2” (Ubuntu, OpenSUSE), or service httpd restart on CentOS. Test by accessing the site using an https prefix, then again with an http prefix to ensure that redirection works.
Step 5: Configure Nagios for Active Directory Authentication
Now that we’re securing the Nagios web site with SSL, we can feel a little bit better about entering Active Directory domain credentials into its challenge dialog. We have five phases for that process.
Create (or designate) an account to use for directory reads.
Select an OU to scan for valid accounts.
Enable Apache to use LDAP authentication.
Configure Apache directory security.
Set Nagios to recognize Active Directory accounts.
Create an Account for Directory Access
Use PowerShell or Active Directory Users and Computers to create a user account. It does not matter what you call it. It does not matter where you put it. It does not need to have any group membership other than the default Domain Users group. It only requires enough powers to read the directory, which all Domain Users have by default. I recommend that you set its password to never expire or be prepared to periodically update your Nagios configuration files.
Once you’ve created it, you need its distinguished name. You can find that on the Attribute Editor tab in ADUC. You can also find it with PowerShell.
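For instance, with the ActiveDirectory PowerShell module (the account name here is hypothetical):

```powershell
Get-ADUser -Identity 'svc-nagios-ldap' |
    Select-Object -ExpandProperty DistinguishedName
```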
Keep the DN and the password on hand. You’ll need them in a bit.
Selecting OUs for Apache LDAP
When an account logs in to the web site, Apache’s mod_authnz_ldap will search for it within locations that you specify. You need to know the distinguished name of at least one organizational unit. Apache’s mod_ldap queries cannot run against the entire directory. I found many, many, many articles claiming that it’s possible, including Apache’s official documentation, but they are all lies (thanks for wasting hours of my time on searches and tests, though, guys, I always appreciate that).
It will, however, search multiple locations, and it can search downward into the child OUs of whatever OU you specify. Luckily for me, I have a special Accounts OU that I’ve created to organize user accounts. Hopefully, you have something similar. If not, you can use the default Users folder. You can do both.
I’ll show you how to connect to an OU and the default Users folder.
It is not necessary for the directory read account that you created in the first part of this section to exist in the selected location(s). The target location(s), or a sub-OU, only needs to contain the accounts that will log in to Nagios.
Once you’ve made your selection(s), you need to know the distinguished name(s). You can use the Attribute Editor tab like you did for the user, or retrieve them with PowerShell.
Enabling LDAP Authentication in Apache
Apache requires two modules for LDAP authentication: authnz_ldap_module and ldap_module. You will probably need to enable them, but you can check in advance. On Ubuntu, use apache2 -M | grep ldap. On CentOS/OpenSUSE, use httpd -M | grep ldap. If you see both of these modules, then you don’t need to do anything else.
To enable Apache LDAP authentication on Ubuntu/OpenSUSE:
sudo a2enmod authnz_ldap. You might also need to:
sudo a2enmod ldap.
To enable Apache LDAP authentication on CentOS:
sudo yum install mod_ldap.
Make certain to perform the apachectl -M verification afterward to ensure that both modules are available.
Configuring Apache Directories to Use LDAP Authentication
Collect your OU DN(s), your user DN, and the password for that user. Now, we’re going to configure LDAP authorization sections in Apache. Again, you can put these in any conf file that pleases you. I usually find the distribution’s LDAP configuration file:
OpenSUSE: no default file is created for the ldap module on OpenSUSE; you can create your own or add it to another, like /etc/apache2/global.conf
Warning: On Ubuntu, the files always exist in mods-available; when you run a2enmod, it symlinks them from mods-enabled. I highly recommend that you avoid the mods-enabled directory. Eventually, something bad will happen if you touch anything there manually (yes, that’s experience talking). Edit the files in mods-available.
The initial lines are not required, but they can make the overall experience a bit smoother. I am just using defaults; I didn’t tune any of those lines. The only thing to be aware of is that Apache will be oblivious to changes that occur within the cache timeout window, including lockouts and disables.
A breakdown of the AuthnProviderAlias sections:
In the opening brackets, AuthnProviderAlias and ldap must be the first two parts. We are triggering the authn provider framework and telling it that we’re specifically working with the ldap provider. Where I used ldap-accounts and ldap-users, you can use anything you like. I named them after the specific OUs that I selected. Whatever you enter here will be used as reference tags in directories.
For AuthLDAPBindDN, use the distinguished name of the read-only Active Directory user that you created at the beginning of this section. You can omit the quotes if you have no spaces in the DN, but I would recommend keeping them just in case.
For AuthLDAPBindPassword, use the password of the read-only Active Directory account. Do not use quotes unless there are quotes in the password. If your password contains a quote, I recommend changing it.
For AuthLDAPURL, use the distinguished name of the OU to search. Use one? instead of sub? if you don’t want it to search sub-OUs.
Note that the Users folder uses CN, not OU.
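Pulling the breakdown above together, here is a sketch of what the blocks might look like. The DNs, the bind password, the siron.int domain, and the Nagios paths are illustrative assumptions based on this article’s earlier examples, not the author’s exact configuration:

```apacheconf
<AuthnProviderAlias ldap ldap-accounts>
    AuthLDAPBindDN "CN=svc-nagios-ldap,OU=Accounts,DC=siron,DC=int"
    AuthLDAPBindPassword "NotARealPassword"
    # sub? searches child OUs as well
    AuthLDAPURL "ldap://siron.int/OU=Accounts,DC=siron,DC=int?sAMAccountName?sub?(objectClass=user)"
</AuthnProviderAlias>

<AuthnProviderAlias ldap ldap-users>
    AuthLDAPBindDN "CN=svc-nagios-ldap,OU=Accounts,DC=siron,DC=int"
    AuthLDAPBindPassword "NotARealPassword"
    # the default Users folder is a CN, not an OU; one? skips child containers
    AuthLDAPURL "ldap://siron.int/CN=Users,DC=siron,DC=int?sAMAccountName?one?(objectClass=user)"
</AuthnProviderAlias>

# A directory that accepts the file-based nagiosadmin account plus AD accounts
# (Nagios paths assumed from a typical source-built install)
<Directory "/usr/local/nagios/sbin">
    AuthType Basic
    AuthName "Nagios Access"
    AuthBasicProvider file ldap-accounts ldap-users
    AuthUserFile /usr/local/nagios/etc/htpasswd.users
    Require valid-user
</Directory>
```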
TLS/SSL/LDAPS Configuration for Apache and LDAP
You should be able to authenticate with TLS or LDAPS if configured in your domain. I couldn’t get that to work because of the state of my domain. I have made it work elsewhere, so I can confirm that it does work. If you want to try on your own, I will tell you this right off: find the “LogLevel” line in your Apache configs and bump it up to “Debug” until you have it working, or you’ll have no idea why things don’t work. The logs are output somewhere in /var/log/httpd or /var/log/apache2, depending on your configuration/distribution (the file is usually ssl_error_log, but it can be overridden, so you might need to dig a tiny bit). You can go through Apache’s documentation on this mod for some hints. You need at least:
LDAPTrustedGlobalCert CA_BASE64 /path/to/your/domain/CA.pem in some Apache file. I use the built-in ldap.conf or 01-ldap.conf for Apache. If you download the certificate chain from your domain CA’s web enrollment server, you can extract the subordinate’s certificate and convert it from P7B to PEM (openssl pkcs7 -print_certs handles the conversion).
LDAPTrustedMode SSL in some Apache file if you will be using LDAPS on port 636. I normally keep it near the previous entry. Note: you can also just append SSL to any of the AuthLDAPURL entries for local configuration instead of global. In your AuthLDAPURL lines, you must change ldap: to ldaps: and append :636 to the hostname portion.
LDAPTrustedMode TLS in some Apache file if you will be using TLS on port 389. I normally keep it near the previous entry. Note: you can also just append TLS to any of the AuthLDAPURL entries for local configuration instead of global.
In the <Directory> sections that attach to AD, you need:
LDAPTrustedClientCert CERT_BASE64 /var/certs/your-local-system.pem. It might also work in the AuthnProviderAlias blocks; I haven’t yet been able to try it.
You might need to add LDAPVerifyServerCert off to an Apache configuration file. I don’t like that, because it eliminates the domain controller authentication benefit of using TLS or LDAPS. Essentially, if you can get openssl s_client -connect your.domain.address:636 -CAfile ca-cert-file-from-first-bullet.pem to work, then you will be fine.
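Pulled together, the TLS/LDAPS pieces above might look like this in your Apache configuration. Host names, paths, and DNs here are placeholders for illustration, not values from any real environment.

```apache
# Global (e.g., in ldap.conf or 01-ldap.conf):
LDAPTrustedGlobalCert CA_BASE64 /etc/apache2/mydomain-ca.pem
LDAPTrustedMode SSL

# In the <Directory> sections that attach to AD:
LDAPTrustedClientCert CERT_BASE64 /var/certs/your-local-system.pem

# In the provider aliases: note ldaps: and :636; the trailing SSL keyword
# is the per-URL alternative to the global LDAPTrustedMode directive
AuthLDAPURL "ldaps://mydomain.local:636/OU=Accounts,DC=mydomain,DC=local?sAMAccountName?sub?(objectClass=user)" SSL
```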
The hardest part is usually keeping LDAPVerifyServerCert On. First, use openssl s_client -connect your.domain.controller:636 -CAfile your.addomain.cafile.pem. It will display a certificate. Paste that into a file and save it. Then, use openssl verify -CAfile your.addomain.cafile the.file.you.pasted. If that says OK, then you should be able to get SSL/TLS to work.
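If you don’t have a domain handy to test against, the same verification flow can be rehearsed entirely locally with throwaway certificates. Every file name and subject below is invented for the demonstration; the point is only to show what a passing openssl verify looks like.

```shell
# Create a throwaway CA (stands in for your domain CA)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Demo Root CA" -keyout ca.key -out ca.pem

# Create a "server" certificate signed by that CA
# (stands in for the certificate your domain controller presents)
openssl req -newkey rsa:2048 -nodes -subj "/CN=dc.demo.local" \
    -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -days 1 -out server.pem

# The step that matters: verify the server cert against the CA file.
# "server.pem: OK" here is the same result that lets you keep
# LDAPVerifyServerCert On against a real domain controller.
openssl verify -CAfile ca.pem server.pem
```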
Because security is our goal here and I couldn’t get TLS or LDAPS to work, I ran a Wireshark trace on the communication between the Nagios system and my domain controller. It does pass the user name in clear text, but it does not transmit the password in clear text. I don’t love that the user names are exposed, but I also know that there are much easier ways to determine domain user accounts than by sniffing Apache LDAP authentication packets. There are also easier ways to crack your domain than by spoofing a domain controller to your Nagios system. If you can’t get TLS or LDAPS to work, this won’t be the weakest link in your authentication system.
Note 1: Be very, very, very careful about typing. You’re handling your directory in read-only mode, so I wouldn’t worry about breaking the directory. What you need to worry about is the very poor error reporting in this module. I lost an enormous amount of time over a hyphen where an equal sign should have been. It was right on the side-scroll break of nano so I didn’t see it for a very long time. The only error that I got was
AH01618: user esadmin not found: /., or whatever account I was trying to authenticate. If things don’t work, slow down, check for typos, and check for overrides from other .conf files.
Note 2: I will happily welcome verifiable assistance on improving this section. If you just throw URLs at me, they’d better contain something that I didn’t find on any of the 20+ pages that made big promises without delivering, and the directions had better work. For example, using port 3268 to authenticate against the global catalog vs. 389 LDAP or 636 LDAPS does not do anything special for trying to authenticate the entire directory.
Configure Apache Directory Security for LDAP Authentication
From here, the Apache portion is relatively simple. Assuming that you already have a Nagios directory configured, just compare with mine:
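A minimal version of that directory block might look like the following sketch. The path and realm name are assumptions based on a default Nagios Core install, and the ldap-users and ldap-accounts tags must match whatever you named your provider aliases.

```apache
<Directory "/usr/local/nagios/sbin">
    Options ExecCGI
    AllowOverride None
    AuthType Basic
    AuthName "Nagios Access"
    AuthUserFile /usr/local/nagios/etc/htpasswd.users
    AuthBasicProvider file ldap-users ldap-accounts
    Require valid-user
</Directory>
```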
The default Nagios site created by the Nagios installer contains a lot of fluff, which I’ve removed. For instance, I don’t check the Apache version because I know what version it is. There’s only one major change, though: look at the AuthBasicProvider line. Yours, if you’re using the default, just says file. Mine also says ldap-users ldap-accounts. Those are the tags that I applied to the providers in the previous sub-section. By leaving file in there, I can still use the original “nagiosadmin” account, as well as any others that I might have created. If you create additional providers for other OUs, just keep tacking them onto the AuthBasicProvider lines.
On the AuthBasicProvider line, order is important. I placed file first because I want accounts to be verified there first. The majority of my accounts will be found in Active Directory, but the file is only a couple of lines and can be searched in a few milliseconds. If I need to reach out to the directory for an uncached account, that will cause a noticeable delay. For the same reason, order your LDAP locations wisely.
We’re not quite done; Nagios still doesn’t know what to do with these accounts. However, stop right now and go make sure that AD authentication is working.
sudo service apache2 restart or
sudo service httpd restart, depending on your distribution. If Apache doesn’t restart successfully, use
sudo journalctl -xe to find out why. Fix that, and move on. Once Apache successfully restarts, access your site at https://yournagiosite.yourdomain.yourtld. Log in using an Active Directory account inside a selected OU. You do not need to prefix it with the domain name.
If all is well, you should be greeted with the Nagios home page. Click any of the navigation links on the left. The pages should load, but you should not be able to see anything — no hosts, no services, nothing. If so, that means that Apache has figured out who you are, but Nagios hasn’t. You can double-check that at the top left of most any of the pages. For instance, on the Tactical Overview:
Do not move past this point until AD authentication works.
Configure Nagios to Recognize Active Directory Accounts
Truthfully, Nagios doesn’t know an AD account from a file account. All it knows is that Apache is delivering an account to it. It will then look through its registered contacts for a match. So, in /usr/local/nagios/etc/objects/contacts.cfg, I have:
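As an illustration, a contact entry for a hypothetical AD account named esadmin (borrowing the account name from the error message earlier; substitute your own) could look like this:

```
define contact {
    contact_name    esadmin                  ; must match the AD sAMAccountName
    use             generic-contact          ; inherit the stock contact template
    alias           ES Admin
    email           esadmin@mydomain.local
}
```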
From there, add that account to groups, services, hosts, etc., as necessary. So, if your CFO wants a dashboard to show him that the accounting server is working, add his AD account accordingly. An account will only see the items assigned to it.
Remember, after any change to Nagios files, you must:
sudo service nagios checkconfig
sudo service nagios restart
Note on cgi access: by default, only the “nagiosadmin” account can access the CGIs (most of the headings underneath the System menu item at the bottom left). That access is controlled by several “authorized_” lines in /usr/local/nagios/etc/cgi.cfg. As you become accustomed to using multiple accounts in Nagios, you’ll begin plopping them into groups for easier file maintenance. In this particular .cfg file, groups don’t mean anything. I found some Nagios documentation that insists that you can use groups in cgi.cfg, but I couldn’t make that work. You’ll have to individually enter each account name that you want to have access to any CGI.
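For example, to grant a second account the same CGI access as nagiosadmin, each relevant line in cgi.cfg gets a comma-separated addition (esadmin is a stand-in account name):

```
authorized_for_system_information=nagiosadmin,esadmin
authorized_for_configuration_information=nagiosadmin,esadmin
authorized_for_system_commands=nagiosadmin,esadmin
authorized_for_all_services=nagiosadmin,esadmin
authorized_for_all_hosts=nagiosadmin,esadmin
authorized_for_all_service_commands=nagiosadmin,esadmin
authorized_for_all_host_commands=nagiosadmin,esadmin
```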
Step 6: Configure check_nrpe and NSClient++ for SSL
After all that you’ve been through in this article, I hope that this serves as comfort: the rest is easy.
We’re going to take three major actions. First, we’ll create a “client” certificate for the check_nrpe utility. Second, we’ll create a “server” certificate to be used with all of your NSClient++ systems. After that, we’ll deploy the certificate to monitored systems and configure NSClient++ to use it.
Configure a Certificate for check_nrpe
This part is almost identical to the creation of the SSL certificate for the Apache site. You need to set up a config file to feed into openssl (or modify the default, but I don’t recommend that).
You have two choices:
Edit your existing openssl.cnf file with the differences for the new certificate(s).
Copy your existing openssl.cnf file, make the changes to the copy, and override the openssl command to use the copied file.
I suppose a third option would, again, be to hack at the openssl command line to manually insert what you want. I’m going with option 2 this time, as well.
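Option 2 amounts to pointing openssl at the copied file with -config. Here is a self-contained sketch; in practice you would copy your system’s openssl.cnf and edit it, but for demonstration this writes a tiny stand-in config, and all file and subject names are invented.

```shell
# Stand-in for a copied-and-edited openssl.cnf; a real copy would start
# from your system's openssl.cnf and be modified for the new certificate
cat > nrpe-openssl.cnf <<'EOF'
[ req ]
distinguished_name = req_distinguished_name
prompt = no
[ req_distinguished_name ]
CN = nagios.mydomain.local
EOF

# Generate a key and self-signed certificate using the copied config;
# -config overrides the default openssl.cnf for this one command
openssl req -config nrpe-openssl.cnf -x509 -newkey rsa:2048 -nodes \
    -days 365 -keyout check_nrpe.key -out check_nrpe.pem
```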