Recently I had the great honor of hosting Microsoft’s very own Ben Armstrong (Principal Program Manager Lead, Microsoft, and co-creator of Hyper-V) for an AMA (Ask Me Anything) styled webinar about containers. Ben and the team have been hard at work for quite some time now on making containers the best that they can be in the Microsoft ecosystem, and he provided a ton of insight into their development, intended use, futures, and more.
As you can expect with an AMA styled webinar with someone of Ben’s caliber, there were many good questions asked throughout the webinar. With that in mind, we’ve compiled a list of the questions asked throughout and their associated answers below.
Haven’t Seen the Webinar Yet?
The questions and answers below are concise and straight to the point. They have not been published verbatim from the webinar so if you’d like a little more back-story and reasoning behind some of these answers, watch the hour-long webinar here. Ben did a fantastic job of explaining why things are a certain way, or why certain decisions were taken from the perspective of the developers themselves. What might seem straightforward from our side of the fence, sometimes turns out to be completely different from the inner sanctum of those creating the product itself. If you’re interested in hearing this insider info we thoroughly recommend watching the webinar recording!
Q: Is the Microsoft definition of a container different from the rest of the industry’s?
Ben: Not really. A container is still a container in the Microsoft world. Things are still managed via docker, and the only potential difference is the ability to run a container fully isolated within Hyper-V, in addition to the standard shared-kernel method that Microsoft refers to as Windows Containers.
Q: What container platform is the friendliest to use or most suitable for most mid-sized companies?
Ben: Containers on Windows are powered by docker, and they’re a good fit for mid-sized businesses as long as you’re running a more recent revision of Windows Server. Once you get started and get to a point where you need further orchestration, you can add something like Kubernetes to help you manage the solution.
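For context, standing up a docker-based container host on Windows Server 2016 roughly followed the documented OneGet steps below. The module and package names are from that era and may have changed since, so treat this as a sketch rather than a current install guide:

```powershell
# Install the Docker provider and the Docker engine from the PowerShell Gallery
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

# After the reboot, confirm the engine is running
docker version
```

From there, the standard docker CLI works the same way it does on Linux hosts.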
Q: Are micro-services on Azure Service Fabric considered as part of the “container” definition? Also, with the rise of docker, does Microsoft plan to continue pushing microservices as much in the near future?
Ben: Azure Service Fabric is actually a container management/orchestration tool for containers running within Azure. This is an area that Microsoft will continue to invest in alongside docker/containers on Windows.
Q: Is there a certification exam for containers?
Ben: Not today there isn’t, but it would be safe to assume that container management will become part of the core MCSA/MCSE certifications for Windows Server due to it becoming a core role/feature on that platform.
Q: How does Microsoft license the use of containers?
Ben: It’s surprisingly simple actually. When you purchase Windows Server you can run as many Windows Containers as you want. If you want the added isolation of Hyper-V containers, those are licensed just like VMs. 2 Hyper-V Containers with the Standard Edition of Windows Server, and unlimited Hyper-V Containers with the Datacenter Edition of Windows Server.
Q: Which Orchestrator would you prefer to manage your containers?
Ben: Kubernetes is becoming the industry favorite here and likely has the most support, but you can use any of the others out there as well.
Q: Why should I use a container as a system administrator?
Ben: Containerized applications have numerous benefits. They are light-weight, easily movable, and designed for the emerging cloud-centric world. They remove a lot of the OS-related overhead from VMs and allow you to get a greater density of workload per node in your datacenter.
Q: What servers are prime for becoming containers?
Ben: While Microsoft is working to get all roles/features in Windows Server to a place where they can run inside of a container, they’re just not there yet. As it stands today, some of the best workloads to run inside of a container are things like web apps, SQL, stateless applications, and things of that nature.
Q: Shouldn’t Core Services like Active Directory be run within a VM?
Ben: While it is true today that not all core services (AD included) can be run within a container, it won’t always be that way. SQL is a good example of a core service that runs perfectly fine within a container today!
Q: What are some things that would prevent me from putting an application inside of a container?
Ben: If the app requires an older version of Windows, that would be a deal-breaker, as containers only run on Windows Server 2016 or newer. Also, if your app requires a GUI, that would be a no-go, as containers are intended to be 100% headless.
Q: Will support for Containers on Windows Server 2016 be dropped and moved to Windows Server 2019?
Ben: Containers are fully supported today on Windows Server 2016 and will continue to be supported; however, as new features are released, you’ll likely see those innovations happening in the technical preview of Windows Server 2019 moving forward.
Q: What controls the creation of a container? For example, if I have a container with a website on it and I enter something.internal into my browser, what causes the container to spin up?
Ben: For orchestration like this you’ll want to look into something like Kubernetes.
Q: Can you run containers under VMware and, if so, what OS is supported?
Ben: As it stands today, you can only run Windows Containers on a Windows Server 2016 guest VM on VMware. Hyper-V containers are not currently supported in this instance.
Q: What are the basic requirements for using Containers?
Ben: The basic requirements are Windows Server 2016 or newer, or Windows 10 or newer. Docker is required and, if you want to run Hyper-V containers you need a machine capable of running Hyper-V.
Q: Do you have feedback from clients or case studies on a Windows Containerization project?
Q: When would you want to run Windows Containers vs. Hyper-V Containers?
Ben: Windows Containers share the kernel of the host OS. If you’re in a situation where you need further isolation, that’s when you would use Hyper-V containers over Windows Containers.
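In docker terms, the two flavors differ only by the `--isolation` flag at run time. A sketch using the era-appropriate Nano Server image name:

```powershell
# Windows (process-isolated) container: shares the host kernel
docker run --isolation=process microsoft/nanoserver cmd

# Hyper-V container: same image, wrapped in a minimal utility VM
docker run --isolation=hyperv microsoft/nanoserver cmd
```

The image and application are identical either way; only the isolation boundary changes.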
Q: With Windows Containers running on Windows Server 2016, what are the OS choices for the container?
Ben: There are two container images available in production today. A Windows Server 2016 Server Core image for legacy apps, and a Windows Server 2016 Nano Server Image that is designed for cloud-native apps.
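Both images are pulled straight from Docker Hub. These were the image names current when this webinar ran; they have since moved to the mcr.microsoft.com registry, so check current documentation:

```powershell
# Server Core image, aimed at legacy/full-framework apps
docker pull microsoft/windowsservercore

# Nano Server image, designed for cloud-native apps
docker pull microsoft/nanoserver
```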
Q: How does the resource footprint of a container compare with an equivalent VM?
Ben: CPU usage is roughly the same, while memory and storage usage is MUCH less.
Q: Is it possible to run a Failover Cluster configuration with containers?
Ben: What you’ll do in this instance is use Failover Cluster to serve up virtual machines and then run containers on those virtual machines.
Q: Could you provide some clarity on containers vs. remote app vs. APP-V?
Ben: Keep an eye out on Ben’s blog for a future article about this.
Q: How many Windows Containers can you run on average on a decently sized node (4 CPU, 16GB mem)?
Ben: Roughly 40 containers.
Q: What is the plan to reduce Windows image sizes?
Ben: The image sizes are actively being reduced in the current Insider preview of Windows Server. Both images (Nano and Core) have seen massive reductions in size. If you’d like to check these out, be sure to be running the latest version of the Insider preview and then use docker to pull the image whose name is appended with -insiderpreview. You’ll be able to see how much smaller the image is.
Q: What about Altaro Support for Containers?
Ben: Containers are still quite new to the Microsoft world and so far have primarily been used for dev workloads, but that is changing quickly, and as such, vendors such as Altaro are working on how to support them for the wider audience they cater to.
Andy (Altaro): We at Altaro are actively evaluating the unique data protection needs of containerized workloads before we roll out support within our product range and will be sure to update you via this blog once we have something that protects the containers themselves. As for persistent data used by containers but stored elsewhere, that data likely sits on a VM somewhere and we have the ability to protect that data like we always have!
We hope you’ve enjoyed this Q&A list and that it got you up to speed on the potential containers provide for virtualization tasks. Don’t forget to watch the webinar in its entirety here if you haven’t done so already. It was fascinating getting insights directly from Microsoft and the engagement we received during the webinar clearly shows you guys enjoyed it too! Thanks to Ben for joining us and thanks to everyone who asked a question!
If you asked a question that you don’t see mentioned above, or have a new question, be sure to let us know in the comments form below and we’ll be sure to get you an answer ASAP.
Microsoft has made major changes to the way that they build and release their operating systems. The new Windows Server “Semi-Annual Channel” (SAC) marks a substantial departure from the familiar release pattern that Microsoft has established. The change has pleased some people, upset many, and confused even more. With all the flash of new features, it’s easy to miss the finer points — specifically, how you, your organization, and your issues fit into the new model.
The Traditional Microsoft OS Release Model
Traditionally, Microsoft would work on pre-release builds internally and with a small group of customers, partners, and other interested stakeholders (such as MVPs). Then, they would finesse the builds with open betas (usually called “Release Candidates”). Then, there would be an RTM (release-to-manufacturing) event followed by GA (general availability). The release would then be considered “current”. It would enjoy regular updates including service packs and feature upgrades for a few years, then it would go into “extended support” where it would only get stability and security updates. While customers purchased and worked with the “current” version, work on the next version would begin in parallel.
Not every version followed this model exactly, but all of them were similar. The most recent Windows Server operating system to employ this model is Windows Server 2016.
Changes to the Model with Windows 10/Server 2016
The “Windows Insider Program” was the first major change to Microsoft’s OS build and release model. Initially, it was most similar to the “release candidate” phase of earlier versions. Anyone could get in and gain access to Windows 10 builds before Windows 10 could even be purchased. However, it deviated from the RC program in two major ways:
The Windows Insider Program includes an entire community.
The Windows Insider Program continues to provide builds after Windows 10 GA
The Windows Insider Community
Most of us probably began our journey to Windows Insider by clicking an option in the Windows 10 update interface. However, you can also sign up using the dedicated Windows Insider web page. You get access to a dedicated forum. And, of course, you’ll get e-mail notifications from the program team. You can tell Microsoft what you think about your build using the Feedback Hub. That applet is not exclusive to Insiders, but they’ll know if you’re talking about an Insider build or a GA build.
Ongoing Windows Insider Builds
I expect that most Insiders prize access to new builds of Windows 10 above the other perks of the program. The Windows 10 Insider Program allows you to join one of multiple “rings” (one per joined Windows 10 installation). The ring that an installation belongs to dictates how close it will be to the “cutting edge”. You can read up on these rings and what they mean on the Insider site.
The most important thing about Windows Insider builds — and the reason that I brought them up at all in this article — is that they are not considered production-ready. The fast ring builds will definitely have problems. The preview release builds will likely have problems. You’re not going to get help for those problems outside of the Insider community, and any fix will almost certainly include the term “wait for the next build” (or the next… or the one after… or some indeterminate future build). I suspect that most software vendors will be… reluctant… to officially support any of their products on an Insider build.
Windows Server Insider Program
The Windows Server Insider Program serves essentially the same purpose as the Windows 10 Insider Program, but for the server operating system. The sign-up process is a bit different, as it goes through the Windows Insider Program for Business site. The major difference is the absence of any “rings”. Only one current Windows Server Insider build exists at any given time.
Introducing the Windows Server Semi-Annual Channel
I have no idea what you’ve already read, so I’m going to assume that you haven’t read anything. But, I want to start off with some very important points that I think others gloss over or miss entirely:
Releases in the Windows Server Semi-Annual Channel are not Windows Server 2016! Windows Server 2016 belongs to the Long-Term Servicing Channel (LTSC). The current SAC is simply titled “Windows Server, version 1709”.
You cannot upgrade from Windows Server 2016 to the Semi-Annual Channel. For all I know, that might change at some point. Today, you can only switch between LTSC and SAC via a complete wipe-and-reinstall.
On-premises Semi-Annual Channel builds require Software Assurance (I’d like to take this opportunity to point out: so does Nano). I haven’t been in the reseller business for a while so I don’t know the current status, but I was never able to get Software Assurance added to an existing license. It was always necessary to purchase it at the same time as its base volume Windows Server license. I don’t know of any way to get Software Assurance with an OEM build. All of these things may have changed. Talk to your reseller. Ask questions. Do your research. Do not blindly assume that you are eligible to use an SAC build.
The license for Windows Server is interchangeable between LTSC and SAC. Meaning that, if you are a Software Assurance customer, you’ll be able to download/use either product per license count (but not both; 1 license count = 1 license for LTSC or 1 license for SAC).
The keys for Windows Server are not interchangeable between LTSC and SAC. I’m not yet sure how this will work out for Automatic Virtual Machine Activation. I did try adding the WS2016 AVMA key to a WS1709 guest and it did not like that one bit.
SAC does not offer the Desktop Experience. Meaning, there is no GUI. There is no way to install a GUI. You don’t get a GUI. You get only Core.
Any given SAC build might or might not have the same available roles and features as the previous SAC build. Case in point: Windows Server, version 1709 does not support Storage Spaces Direct.
SAC builds are available in Azure.
SAC builds are supported for production workloads. SAC follows the Windows Server Insider builds, but SAC is not an Insider build.
SAC builds will only be supported for 18 months. You can continue using a specific SAC build after that period, but you can’t get support for it.
SAC builds should release roughly every six months.
SAC builds will be numbered for their build month. Ex: 1709 = “2017 September (09)”.
SAC ships in Standard and Datacenter flavors only.
The Semi-Annual Channel is Not for Everyone
Lots of people have lots of complaints about the Windows Server Semi-Annual Channel. I won’t judge the reasonableness or validity of any of them. However, I think that many of these complaints are based on a misconception. People have become accustomed to a particular release behavior, so they expected SAC to serve as vNext of Windows Server 2016. Looking at Microsoft’s various messages on the topic, I don’t feel like they did a very good job explaining the divergence. So, if that’s how you look at it, then it’s completely understandable that you’d feel like WS1709 slapped you in the face.
However, it looks different when you realize that WS1709 is not intended as a linear continuation. vNext of Windows Server 2016 will be another release in the LTSC cycle. It will presumably arrive sometime late next year or early the following year, and it will presumably be named Windows Server 2018 or Windows Server 2019. Unless there are other big changes in our future, it will have the Desktop Experience and at least the non-deprecated roles and features that you currently have available in WS2016. Basically, if you just follow the traditional release model, you can ignore the existence of the SAC releases.
Some feature updates in SAC will also appear in LTSC updates. As an example, both WS1709 and concurrent WS2016 patches introduce the ability for containers to use persistent data volumes on Cluster Shared Volumes.
Who Benefits from the Semi-Annual Channel?
If SAC is not meant for everyone, then who should use it? Let’s get one thing out of the way: no organization will use SAC for everything. The LTSC will always have a place. Do not feel like you’re going to be left behind if you stick with the LTSC.
Basically, you need to have something akin to a mission-critical level of interest in one or more of these topics:
Containers and related technologies (Docker, Kubernetes, etc.)
High-performance networking. I’m not talking about the “my test file copy only goes 45Mbps” flavor of “high performance” networking, but the “processing TCP packets between the real-time interface and its OLTP database causes discernible delays for my 150,000 users” flavor.
Multiple large Hyper-V clusters
Read the “What’s New” article for yourself. If you can’t find any must-have-yesterdays in that list, then don’t worry that you might have to wait twelve to eighteen months for vNext of LTSC to get them.
Who Benefits from the Long-Term Servicing Channel?
As I said, the LTSC isn’t going anywhere. Not only that, we will all continue to use more LTSC deployments than SAC deployments.
Choose LTSC for:
Stability. Even though SAC will be production-ready, the lead time between initial conception and first deployment will be much shorter. The wheel for new SAC features will be blocky.
Predictability: The absence of S2D in WS1709 caught almost everyone by surprise. That sort of thing won’t happen with LTSC. They’ll deprecate features first to give you at least one version’s worth of fair warning. (Note: S2D will return; it’s not going away).
Third-party applications: We all have vendors that are still unsure about WS2008. They’re certainly not going to sign off on SAC builds.
Line-of-business applications: Whether third-party or Microsoft, the big app server that holds your organization up doesn’t need to be upgraded twice each year.
What Does SAC Mean for Hyper-V?
The above deals with Windows Server Semi-Annual Channel in a general sense. Since this is a Hyper-V blog, I can’t leave without talking about what SAC means for Hyper-V.
For one thing, SAC does not have a Hyper-V Server distribution. I haven’t heard of any long-term plans, so the safest bet is to assume that future releases of Hyper-V Server will coincide with LTSC releases.
Windows Server, version 1709 does bring a handful of new Hyper-V features, though:
Storage of VMs in storage-class memory (non-volatile RAM)
Splitting of “guest state” information out of the .vmrs file into its own .vmgs file
Support for running the host guardian service as a virtual machine
Support for Shielded Linux VMs
Virtual network encryption
Looking at that list, “Shielded Linux VMs” seems to have the most appeal to a small- or medium-sized organization. As I understand it, that’s not a feature so much as a support statement. Either way, I can shield a Linux VM on my fully-patched Windows Server 2016 version 1607 (LTSC) system.
As for the rest of the features, they will find the widest adoption in larger, more involved Hyper-V installations. I obviously can’t speak for everyone, but it seems to me that anyone that needs those features today won’t have any problems accepting the terms that go along with the switch to SAC.
For the rest of us, Hyper-V in LTSC has plenty to offer.
What to Watch Out For
Even though I don’t see any serious problems that will result from sticking with the LTSC, I don’t think this SKU split will be entirely painless.
For one thing, the general confusion over “Windows Server 2016” vs. “Windows Server, version 1709” includes a lot of technology authors. I see a great many articles with titles that include “Windows Server 2016 build 1709”. So, when you’re looking for help, you’re going to need to be on your toes. I think the limited appeal of the new features will help to mitigate that somewhat. Still, if you’re going to be writing, please keep the distinction in mind.
For another, a lot of technology writers (including those responsible for documentation) work only with the newest, hottest tech. They might not even think to point out that one feature or another belongs only to SAC. I think that the smaller target audience for the new features will keep this problem under control, as well.
The Future of LTSC/SAC
All things change. Microsoft might rethink one or both of these release models. Personally, I think they’ve made a good decision with these changes. Larger customers will be able to sit out on the bleeding edge and absorb all the problems that come with early adoption. By the time these features roll into LTSC, they’ll have undergone solid vetting cycles on someone else’s production systems. Customers in LTSC will benefit from the pain of others. That might even entice them to adopt newer releases earlier.
Most importantly, effectively nothing changes for anyone that sticks with the traditional regular release cycle. Windows Server Semi-Annual Channel offers an alternative option, not a required replacement.
Choice is a good thing, right? Well… usually. Sometimes, choice is just confusing. With most hypervisors, you get what you get. With Hyper-V, you can install in three different ways, and that’s just for the server hypervisor. In this article, we’ll balance the pros and cons of your options with the 2016 SKUs.
Server Deployment Options for Hyper-V
As of today, you can deploy Hyper-V in one of four packages.
Nano Server
When 2016 initially released, it brought a completely new install mode called “Nano”. Nano is little more than the Windows Server kernel with a tiny handful of interface bits attached. You then plug in the roles and features that you need to get to the server deployment that you want. I was not ever particularly fond of the idea of Hyper-V on Nano for several reasons, but none of them matter now. Nano Server is no longer supported as a Hyper-V host. It currently works, but that capability will be removed in the next iteration. Part of the fine print about Nano that no one reads includes the requirement that you keep within a few updates of current. So, you will be able to run Hyper-V on Nano for a while, but not forever.
If you currently use Nano for Hyper-V, I would start plotting a migration strategy now. If you are considering Nano for Hyper-V, stop.
Hyper-V Server
Hyper-V Server is the product name given to the free distribution vehicle for Hyper-V. You’ll commonly hear it referred to as “Hyper-V Core”, although that designation is both confusing and incorrect. You can download Hyper-V Server as a so-called “evaluation”, but it never expires.
A word of advice: Hyper-V Server includes a legally-binding license agreement. Violation of that licensing agreement subjects you to the same legal penalties that you would face for violating the license agreement of a paid operating system. Hyper-V Server’s license clearly dictates that it can only be used to host and maintain virtual machines. You cannot use it as a file server or a web server or anything else. Something that I need to make extremely clear: the license agreement does not provide special allowances for a test environment. I know of a couple of blog articles that guide you to doing things under the guise of “test environment”. That’s not OK. If it’s not legal in a production environment, it doesn’t magically become legal in a test environment.
Windows Server Core
When you boot to the Windows Server install media, the first listed option includes “Core” in the name. That’s not an accident; Microsoft wants you to use Core mode by default. Windows Server Core excludes the primary Windows graphical interface and explorer.exe. Some people erroneously believe that means that no graphical applications can be run at all. Applications that use the Explorer rendering engine will not function (such as MMC), but the base Windows Forms libraries and mechanisms exist.
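A quick way to see that for yourself on a Core install is a minimal Windows Forms call from PowerShell:

```powershell
# Windows Forms renders without the Explorer shell
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.MessageBox]::Show('Rendered on Server Core', 'WinForms test')
```

The message box appears even though explorer.exe never runs; Explorer-dependent tools like MMC are what fail.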
Windows Server with GUI
I doubt that the GUI mode of Windows Server needs much explanation. You have the same basic graphical interface as Windows 10 with some modifications that make it more appropriate for a server environment. When you install from 2016 media, you will see this listed as (Desktop Experience).
The Pros and Cons of the Command-line and Graphical Modes for Hyper-V
I know that things would be easier if I would just tell you what to do. If I knew you and knew your environment, I might do that. I prefer giving you the tools and knowledge to make decisions like this on your own, though. So, we’ll complement our discussion with a pros and cons list of each option. After the lists, I’ll cover some additional guidelines and points to consider.
Hyper-V Server Pros and Cons
If you skipped the preamble, remember that “Hyper-V Server” refers to the completely free SKU that you can download at any time.
Pros of Hyper-V Server:
Never requires a licensing fee
Never requires activation
Smallest “surface area” for attacks
Least memory usage by the management operating system
Fewest patch needs
Includes all essential features for running Hyper-V (present, not necessarily enabled by default):
Hyper-V PowerShell interface
Hyper-V Replica membership
Remote Desktop Virtual Host role for VDI deployments
RemoteFx (automatic with RDVH role)
Cons of Hyper-V Server:
Cannot provide Automatic Virtual Machine Activation
Cannot provide deduplication features
Impossible to enable the Windows Server GUI
Software manufacturers may refuse to support their software on it
Third-party support operations, such as independent consulting firms, may not have any experience with it
Switching to Windows Server requires a complete reinstall
Difficult to manage hardware
Hyper-V in Windows Server Core Pros and Cons
If you’ve seen the term “Hyper-V Core”, that probably means “Hyper-V Server”. This section covers the authentic Windows Server product installed in Core mode.
Hyper-V in Windows Server with GUI Pros and Cons
Cons of Hyper-V in Windows Server with GUI:
Largest attack surface, especially with explorer.exe
Largest deployment size
Largest memory usage
Largest patch requirements
Must be licensed and activated
Upgrading to the next version requires paying for that version’s license, even if you will wait to deploy newer guests
Side-by-Side Comparison of Server Modes for Hyper-V
Two items appear in every discussion of this topic: disk space and memory usage. I thought that it might be enlightening to see the real numbers. So, I built three virtual machines running Hyper-V in nested mode. The first contains Hyper-V Server, the second contains Windows Server Datacenter Edition in Core mode, and the third contains Windows Server Datacenter Edition in GUI mode. I have enabled Hyper-V in each of the Windows Server systems and included all management tools and subfeatures (Add-WindowsFeature -Name Hyper-V -IncludeAllSubFeature -IncludeManagementTools). All came from the latest MSDN ISOs. None are patched. None are on the network.
Disk Usage Comparison of the Three Modes
I used the following PowerShell command to determine the used space:
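The command itself didn’t survive formatting here; one straightforward way to report used space on the system drive (not necessarily the exact command used for the table) is:

```powershell
# Used and free bytes on C:; Get-PSDrive exposes these for filesystem drives
Get-PSDrive -Name C | Select-Object Used, Free
```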
Used Disk Space (bytes)
Hyper-V Server 2016
Windows Server 2016 Datacenter Edition in Core mode
Windows Server 2016 Datacenter Edition in GUI mode
For shock value, the full GUI mode of Windows Server adds 78% space utilization above Hyper-V Server 2016 and 46% space utilization above Core mode. That additional space amounts to less than 5 gigabytes. If 5 gigabytes will make or break your deployment, you’ve got other issues.
Memory Usage Comparison of the Three Modes
We’ll start with Task Manager while logged on:
These show what we expect: Hyper-V Server uses the least amount of memory, Windows Server Core uses a bit more, and Windows Server with GUI uses a few ticks above both. However, I need to point out that these charts show a more dramatic difference than you should encounter in reality. Since I’m using nested VMs to host my sample systems, I only gave them 2 GB total memory apiece. The consumed memory distance between Hyper-V Server and Windows Server with GUI weighs in at a whopping 0.3 gigabytes. If that number means a lot to you in your production systems, then you’re going to have other problems.
But that’s not the whole story.
Those numbers were taken from Task Manager while logged on to the systems. Good administrators log off of servers as soon as possible. What happens, then, when we log off? To test that, I had to connect each VM to the network and join the domain. I then ran:
Get-WmiObject Win32_OperatingSystem | select FreePhysicalMemory with the -ComputerName switch against each of the hosts. Check out the results:
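Spelled out against all three hosts (the host names here are made up), that looks like:

```powershell
# Win32_OperatingSystem reports FreePhysicalMemory in KB; divide by 1KB (1024) for MB
foreach ($vm in 'HVSERVER', 'WS-CORE', 'WS-GUI') {
    Get-WmiObject Win32_OperatingSystem -ComputerName $vm |
        Select-Object CSName, @{n='FreeMB'; e={[math]::Round($_.FreePhysicalMemory / 1KB)}}
}
```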
Free Memory (MB)
Hyper-V Server 2016
Windows Server 2016 Datacenter Edition in Core mode
Windows Server 2016 Datacenter Edition in GUI mode
Those differences aren’t so dramatic, are they? Windows Server Core even has a fair bit more free memory than Hyper-V Server… at that exact moment in time. If you don’t have much background in memory management, especially in terms of operating systems, then keep in mind that memory allocation and usage can seem very strange.
The takeaway: memory usage between all three modes is comparable when they are logged off.
Hyper-V and the “Surface Area” Argument
Look at the difference in consumed disk sizes between the three modes. Those extra bits represent additional available functionality. Within them, you’ll find things such as Active Directory Domain Services and IIS. So, when we talk about choosing between these modes, we commonly point out that all of these things add to the “attack surface”. We try to draw the conclusion that using a GUI-less system increases security.
First part: Let’s say that a chunk of malware injects itself into one of the ADDS DLLs sitting on your Windows Server host running Hyper-V. What happens if you never enable ADDS on that system? Well, it’s infected, to be sure. But, in order for any piece of malware to cause any harm, something eventually needs to bring it into memory and execute it. But, you know that you’re not supposed to run ADDS on a Hyper-V host. Philosophical question: if malware attacks a file and no one ever loads it, is the system still infected? Hopefully, you’ve got a decent antimalware system that will eventually catch and clean it, so you should be perfectly fine.
On one hand, I don’t want to downplay malware. I would never be comfortable with any level of infection on any system. On the other hand, I think common sense host management drastically mitigates any concerns. I don’t believe this is enough of a problem to carry a meaningful amount of weight in your decision.
Second part: Windows Server runs explorer.exe as its shell and includes Internet Explorer. Attackers love those targets. You can minimize your exposure by, you know, not browsing the Internet from a server, but you can’t realistically avoid using explorer.exe on a GUI system. However, as an infrastructure system, you should be able to safely instruct your antimalware system to keep a very close eye on Explorer’s behavior and practice solid defensive techniques to prevent malware from reaching the system.
Overall takeaway from this section: Explorer presents the greatest risk. Choose the defense-in-depth approach of using Hyper-V Server or Windows Server Core, or choose to depend on antimalware and safe operating techniques with the Windows Server GUI.
Hyper-V and the Patch Frequency Non-Issue
Another thing that we always try to bring into these discussions is the effect of monthly patch cycles. Windows Server has more going on than Hyper-V Server, so it gets more patches. From there, we often make the argument that more patches equals more reboots.
A little problem, though. Let’s say that Microsoft releases twelve patches for Windows Server and only two apply to Hyper-V Server. One of those two patches requires a reboot. In that case, both servers will reboot. One time. So, if we get hung up on downtime over patches, then we gain nothing. I believe that, in previous versions, the downtime math did favor Hyper-V Server a few times. However, patches are now delivered in only a few omnibus packages instead of smaller targeted patches. So, I suspect that we will no longer be able to even talk about reboot frequency.
One part of the patching argument remains: with less to patch, fewer things can go wrong from a bad patch. However, this argument faces the same problem as the “surface area” non-issue. What are you using on your Windows Server system that you wouldn’t also use on a Hyper-V Server system? If you’re using your Windows Server deployment correctly, then your patch risks should be roughly identical.
Most small businesses will patch their Hyper-V systems via automated processes that occur when no one is around. Larger businesses will cluster Hyper-V hosts and allow Cluster Aware Updating to prevent downtime.
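For clustered hosts, a Cluster-Aware Updating run can be triggered on demand from PowerShell. A minimal sketch, assuming a hypothetical cluster named HV-CLUSTER (the cmdlet and parameters are standard; the cluster name is a placeholder):

```powershell
# Kick off a CAU updating run. CAU drains each node in turn, applies
# updates, reboots if required, and fails roles back, so clustered VMs
# stay online throughout the run.
Invoke-CauRun -ClusterName 'HV-CLUSTER' -MaxRetriesPerNode 3 -Force
```

In practice most shops would configure the CAU clustered role for fully scheduled, unattended runs rather than invoking it by hand.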
Overall takeaway from this section: patching does not make a convincing argument in any direction.
Discussions: Choosing Between Core and GUI for Hyper-V
Now you’ve seen the facts. You’ve seen a few generic arguments for the impact level of two of those facts. If you still don’t know what to do, that’s OK. Let’s look at some situational points.
A Clear Case for Hyper-V on Windows Server Full GUI
If you’re in a small environment with only a single physical server, go ahead and use the full GUI.
Why? Some reasons:
It is not feasible to manage Hyper-V without any GUI at all. I advocate for PowerShell usage as strongly as anyone else, but sometimes the GUI is a better choice. In a multi-server environment, you can easily make a GUI-less system work because you have at least one GUI-based management system somewhere. Without that, GUI-less demands too much.
The world has a shortage of Windows Server administrators that are willing and able to manage a GUI-less system. You will have difficulty hiring competent help at a palatable price.
Such a small shop will not face the density problems that justify the few extra resources saved by the GUI-less modes.
The other issues that I mentioned are typically easier to manage in a small environment than in a large environment.
A GUI system will lag behind Core in features, but Hyper-V is quite feature-complete for smaller businesses. You probably won’t miss anything that really matters to you.
If you try Hyper-V Server or Windows Server Core and decide that you made a mistake, you have no choice but to reinstall. If you install the GUI and don’t want to use it, then don’t use it — switch to remote management practices. You won’t miss out on anything besides the faster feature release cycle.
We can make some very good arguments for a GUI-less system, but none are strong enough to cause crippling pain for a small business. When the GUI fits, use it.
A Clear Case for Hyper-V Server
Let’s switch gears completely. Let’s say that:
You’re a PowerShell whiz
You’re a cmd whiz
You run a lot of Linux servers
Your Windows Servers (if any) are all temporary testing systems
Hyper-V Server will suit you quite well.
If you’re somewhere in the middle of the above two cases, I think that Microsoft’s recommendation of Windows Server Core with Hyper-V fits perfectly. The parts that stand out to me:
Flexibility: Deduplication has done such wonders for me in VDI that I’m anxious to see how I can apply it to server loads. In 2012 R2, server guests were specifically excluded; VDI only. Server 2016 maintains the same wording in the feature setup, but I can’t find a comparable statement saying that server usage is verboten in 2016. I could also see a case for building a nice VM management system in ASP.NET and hosting it locally with IIS; you can’t do that in Hyper-V Server.
Automatic Virtual Machine Activation. Who loves activation? Nobody loves activation! Let the system deal with that.
Security by terror: Not all server admins are created equally. I find that the really incompetent ones won’t even log on to a Server Core/Hyper-V Server system. That means that they won’t put them at risk.
Remote management should be the default behavior. If you don’t currently practice remote management, there’s no time like the present! You can dramatically reduce the security risk to any system by never logging on to its console, even by RDP.
You can manage Hyper-V systems from a Windows 10 desktop with RSAT. It’s not entirely without pain, though:
Drivers! Ouch! Microsoft could help us out by providing a good way to use Device Manager remotely. We should not let driver manufacturers off the hook easily, though. Note: Honolulu is coming to reduce some of that pain.
Not everyone that requires the GUI is an idiot. Some of them just haven’t yet learned. Some have learned their way around PowerShell but don’t know how to use it for Hyper-V. You like taking vacations sometimes, don’t you?
Crisis mode when you don’t know what’s wrong can be a challenge. It’s one thing to keep the top spinning; it’s another to get it going when you can’t see what’s holding it down. However, these problems have solutions. It’s a pain, but a manageable one.
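To make the remote-management workflow above concrete, here is a sketch using a hypothetical GUI-less host named HV-HOST01 managed from a Windows 10 workstation with RSAT installed. The cmdlets are standard; the host name and workgroup-style trust step are illustrative assumptions:

```powershell
# In a workgroup (non-domain) scenario, trust the remote host for WinRM.
# Domain-joined machines can usually skip this step.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'HV-HOST01' -Concatenate -Force

# Query the host's VMs without ever logging on to its console.
Get-VM -ComputerName 'HV-HOST01'

# Or open an interactive remote session for ad hoc work.
Enter-PSSession -ComputerName 'HV-HOST01' -Credential (Get-Credential)
```

The same pattern works from Hyper-V Manager and the other RSAT MMC consoles once the remote host is reachable and trusted.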
I’m not here to make the decision for you. You now have enough information to make an informed decision.
I know that most of you want to learn virtualization, and since you have Hyper-V in your company this is a very important task. A common problem is that you can’t test on production systems and a lab is out of the question for a company of your size. If this sounds all too familiar but you’re still eager to learn, nested virtualization will help you out tremendously. I’m not going to give you a long definition of what nested virtualization is because Andy already has an article where he explains it very well; but long story short, nested virtualization lets us create Hyper-V virtual machines inside a Hyper-V virtual machine. Think about it: we can have an entire Hyper-V farm inside a single box. Of course, this is for testing purposes only; production environments won’t support this configuration, largely due to the performance impact of running such a setup.
To create a nested Hyper-V machine with Windows Server 2016, the guest needs hardware virtualization support (such as Intel VT-x), the same capability Hyper-V itself requires to run virtual machines. In previous Windows Server versions, such as 2012 and 2012 R2, this capability was hidden from guest VMs, which prevented them from running the Hyper-V role (or other hypervisors). In Windows Server 2016, however, Microsoft has given us the option to expose this processor capability to a VM using PowerShell.
Before moving forward, I presume you have already installed Windows Server 2016 on your host (the bare-metal server), as well as the Hyper-V role. Also, make sure virtualization support is available and enabled in the hardware’s BIOS.
Now that you’ve got that ready, let’s move on to creating our first Hyper-V virtual machine.
Creating a Hyper-V nested virtual machine
Open the Hyper-V console and from the Actions pane click New > Virtual Machine; or right-click the host and follow the same steps.
On the first page of the New Virtual Machine Wizard, name the new VM and provide a location for the virtual disk(s) and configuration files.
In the Specify Generation page choose the second option to create a generation 2 virtual machine. This one has better performance and supports the newest features.
Since this will be a Hyper-V host, as far as the virtual machines that will run on it are concerned, we will have to allocate a fair chunk of memory. Based on how much RAM you have on your physical server and how many VMs you want to run on this Hyper-V virtual machine, assign the right amount. Ensure that Use Dynamic Memory for this virtual machine is not selected, then click Next.
Select the virtual switch you want this virtual machine to be connected to then continue the wizard.
Provide the path for the virtual disk and its size. If you intend to keep and run the virtual machines on this disk, make it big; if not, use an appropriate size just for the OS.
Choose how you want to install the OS, then click Next. These settings can also be configured later on.
Click Finish to create the virtual machine, soon to be a Hyper-V host in its own right.
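For reference, the wizard steps above map roughly to the following PowerShell. The VM name, paths, and sizes are illustrative only; adjust them to your environment:

```powershell
# Create a Generation 2 VM with a static memory allocation and a new VHDX.
New-VM -Name 'Server-HV01' `
       -Generation 2 `
       -MemoryStartupBytes 8GB `
       -SwitchName 'External' `
       -NewVHDPath 'D:\VMs\Server-HV01\Server-HV01.vhdx' `
       -NewVHDSizeBytes 127GB `
       -Path 'D:\VMs'

# The wizard's "Use Dynamic Memory" checkbox maps to this setting; it must
# remain disabled for a VM that will act as a nested Hyper-V host.
Set-VMMemory -VMName 'Server-HV01' -DynamicMemoryEnabled $false
```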
Once created, power on the VM and install Windows Server 2016. I’m not going to cover the installation steps here; they consist simply of booting from the network and a few clicks through the installation wizard.
Once you get it up and running and try to install Hyper-V on the virtual machine an error will be presented:
Hyper-V cannot be installed: the processor does not have required virtualization capabilities.
The message pops up because, by default, the hypervisor hides the hardware virtualization capability from guest VMs. The next step is to use a couple of PowerShell commands to enable that capability for the guest VM. Shut down the virtual machine, then execute the commands shown below. The first one lets you see whether virtualization support is enabled for the VM, and the second one enables it. The catch with the second command is that no message is displayed if it succeeds.
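The two commands described above likely look like this (using the VM name Server-HV01, which appears in the error message quoted later; substitute your own VM name):

```powershell
# 1. Check whether virtualization extensions are exposed to the guest:
Get-VMProcessor -VMName 'Server-HV01' |
    Select-Object VMName, ExposeVirtualizationExtensions

# 2. Enable them. The VM must be powered off, and the command prints
#    nothing on success, as noted above.
Set-VMProcessor -VMName 'Server-HV01' -ExposeVirtualizationExtensions $true
```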
In case you want to speed things up and try to enable virtualization support while the virtual machine is still running, you will get an ugly error message in PowerShell; so make sure the VM is off.
Set-VMProcessor : Failed to modify device ‘Processor’. Cannot change the processor functionality of a virtual machine now. “…” Cannot change the processor functionality of virtual machine ‘Server-HV01’ while it is running.
Now power on the virtual machine and try installing Hyper-V. The wizard should let us install the role this time with no problems whatsoever.
I’ve seen people take virtualization a layer deeper: inside this Hyper-V virtual machine they deploy another Hyper-V VM, and only then start creating the required test machines. You can do that, but I don’t recommend it, because performance declines rapidly and, if you think about it, you don’t even need to. If the test you are doing requires another Hyper-V VM, all you need to do is follow the above steps and create another Hyper-V virtual machine on the bare-metal hypervisor.
Creating virtual machines inside the nested hypervisor
Once you’re done creating your nested Hyper-V machines, open the console for one of them and start creating virtual machines like you would normally do except now your virtual machines will run inside a virtualized Hyper-V server. Pretty neat, huh??
As you can see they are running smoothly and if you can afford an SSD drive they will run great.
Now that I’ve told you how to set up Hyper-V nested virtualization, here comes the disclaimer, because you’ll quickly find out there are a few features that won’t work. To be honest, I don’t know how much of a problem missing these really is, because you are probably not going to use them in a test lab anyway. Here they are:
Checkpoints are not going to work
Live migration is not going to work
Live memory resize is not possible. You will have to shut down the Hyper-V VM.
Does not currently run on AMD processors. (This is the biggest drawback I can find.)
Nested virtualization in Windows Server 2016 brings a new level to virtualization with Hyper-V that is perfect for creating lab environments and learning all there is about Microsoft technologies. I have to say, the feature was implemented very late compared to the competition, and the restriction on AMD processors can be a severe issue for some. However, if the drawbacks don’t affect you it’s a really useful tool, and we should expect some improvements from Microsoft in future updates. Until then, use it as it is and comment below with your own experiences!
When Windows Server 2016 was released last year, one of the features that I myself, and much of the community were excited about was the new installation option called Nano Server. The way I’ve always described Nano Server is that it’s like Windows Server Core, but on steroids. It is a completely gutted, only-what-you-need installation option, and it’s an installation option that really talked to my Linux and open-source roots. I loved the idea of having only what was absolutely necessary installed on a server, not just because of the attack surface reduction, but because of the reduction in software to maintain on the system as well. I remember running Gentoo Linux on some systems simply because it was a “compile from source” type of distribution and I loved the idea of again, only installing the needed bits, and with Nano Server I felt like we had arrived at something resembling that in the Microsoft world as well.
When Nano Server was released, it was stated that it would be the recommended installation path for containers and for core infrastructure workloads as well. This included things like Hyper-V, Storage Spaces Direct, DNS, IIS… and there was talk of more supported roles coming at some point. This was a lot to get excited about for sure! Hyper-V hosts running with a TINY OS image. It was amazing, it was awesome… and ultimately, not meant to be.
Nano Server Gets Gutted for Infrastructure Workloads
Last week the Windows Server team hit us with THIS bombshell. Here are the key takeaways from the announcement as far as Nano Server is concerned:
Going forward, Nano Server will be used primarily as a container image.
Support for Infrastructure Workloads and Bare-Metal will be removed from Nano Server. (This includes the removal of support for Hyper-V and Storage Spaces Direct)
Windows Server Core will now be the recommended deployment mechanism for Infrastructure roles/features
Server Core will also be able to be used as a container image for deployment of traditional applications via container services
You could see some of this coming if you read between the lines in Microsoft’s marketing and whatnot. The messaging behind Nano never really went further toward infrastructure workloads other than saying it’s a great installation option for those roles. Installation was difficult, and documentation was spread out over several pages, with different UI image builders and scripted deployment options. It was being used for too many things to be able to keep it as it was, and if I’m really honest with myself, I can understand why this happened.
The concept of something like Nano Server dictates that it be very CLEARLY defined as to what it’s going to be and what it’s going to do. By continuing to add more role and feature support to it, Microsoft was essentially creating another Server Core. Jeff Woolsey from the Hyper-V Product group said it best in that IT Pros using Nano Server for infrastructure wanted more roles/features and more drivers. Devs wanted a smaller footprint for their applications. There was no way to reconcile both of those complaints.
I think Microsoft listened to customer feedback, reviewed telemetry data, and made a decision: they were not going to continue using Nano Server for infrastructure any longer, and instead they were going to make it the best container image on the market. While it bums me out for my infrastructure stuff, it makes me feel better that it has a VERY specific goal now, and with that, I think Microsoft will succeed in that goal.
So what is the Recommended Installation Option Now?
That leaves us with Server Core as the recommended installation option for infrastructure workloads. I’ve been using Windows Server Core for Hyper-V since the 2008 R2 days and I’ve always found it to fit my needs. Additionally, Microsoft has made a TON of improvements and changes since then like including Server Core in their new Semi-Annual Release Cadence, also announced last week.
If you hadn’t heard, Microsoft will now be bringing core feature updates to Windows Server twice a year, in the spring and fall. This update branch applies to customers that have ALL of the following:
Windows Server 2016 Standard or Datacenter
Running with the Nano Server or Server Core Deployment Option
Active Software Assurance
Also below is a chart from Microsoft detailing the different installation options and their associated channels
Long-Term Servicing Channel
Now I understand that this quicker release cadence isn’t for everyone, and Microsoft gets that too. That’s why they are still providing what they call the Long-Term Servicing Channel (LTSC). LTSC is what you’ve been used to all these years: you install your Windows Server and another big release comes out in a couple of years. In the meantime you get 10 years of support (from the OS release date) on that installed operating system if you decide not to upgrade. This is the best option for those organizations that don’t need the latest and greatest features, and perhaps need the most stable branch of server software possible.
What if I already deployed Hyper-V on Nano in Production?
You have support on Nano Server (the Fall 2016 version) until Spring of 2018. You’ll want to migrate those workloads to a Windows Server Core option. Following something similar to the rolling cluster upgrade procedure should help with this process. We actually happen to have a how-to article on that HERE.
While it’s a let down to some, I feel better in that the lines are more clearly defined now. We have Server Core for infrastructure and we have Nano for Containers and both of them will get the attention they deserve moving forward.
As always if you have any follow-up questions and/or comments, be sure to leave a comment below!
NOTE: Please read THIS important update on the direction of Nano Server prior to using the below resources.
Hello once again everyone!
A few weeks ago, we put on a very special webinar here at Altaro where we had Andrew Mason from the Nano Server team at Microsoft on to answer all of your burning Nano Server questions. Both sessions were very well attended and the number of quality, engaging questions was amazing. It really made for a great webinar!
As we usually do after webinars, this post is intended to act as an ongoing resource for the content that was discussed during said webinar. Below you will find the recording of the webinar in case you missed it, along with a written list of questions and their associated answers that were not covered verbally during the Q & A due to time constraints.
Revisit the Webinar
Q & A
Q. Will we be able to run the Active Directory role on Nano Server in the future?
A. This is a frequent ask, which you can also vote for on the Windows Server User Voice HERE. We are investigating how to bring this to Nano Server, but at this time I don’t have a timeline to share.
Q. Will WSL eventually get into Nano Server? Could it replace the instance of OpenSSH from GitHub eventually?
A. WSL was added to Windows 10 to support developer scenarios, so we hadn’t been considering it for Nano Server. Since this is a remote management scenario, it would be interesting to understand how many people would want this for management, so please vote on User Voice HERE.
Q. Will there be support for boot from USB for Nano Server, Hyper-V nodes for instance?
A. This is not currently planned. There have been a lot of asks for SD boot. If this is an important scenario for you, please vote for it on user voice.
Q. Are there plans to use MS DirectAccess on Nano?
A. This is not currently planned due to the cloud focus we have for Nano Server. If this is an important scenario for you, please vote for it on User Voice.
Q. How does one manage a Nano server if Azure or an Azure Account is unavailable?
A. You can still use the standard MMC tools to remotely manage Nano Server on-prem, just like any other Windows Server.
Q. Are there any significant changes in licensing for Nano Server?
A. There are some licensing implications when using Nano Server. Altaro has an ebook on licensing Windows Server 2016 that includes some information about Nano Server HERE.
Q. Can you manage a Nano Server host with SCVMM 2012 R2?
A. Unfortunately no. SCVMM 2016 is needed to manage 2016 Nano Server hosts.
Q. Do you see a role for Nano Server in regards to on-prem Hyper-V environments?
A. Absolutely! Nano Server lends itself very well to running as a Hyper-V host. The attack surface is smaller, less resources are needed for the OS, and you have fewer reboots needed due to patching. You can still manage it remotely just like any other Hyper-V host.
Q. How can I use the Anti-Malware options that are available in Nano Server?
A. Nano Server uses a Just-Enough-OS model, in that only the bits needed to run the OS are initially available. There is an Anti-Malware feature available, you just need to install it. More information on installing roles in Nano Server can be found HERE.
Q. Are iSCSI and MPIO usable on Nano Server?
A. Yes they are, they can be installed and managed via PowerShell Remoting.
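As an illustration of the remoting approach mentioned in that answer, here is a sketch of configuring the iSCSI initiator on a hypothetical Nano Server host named NANO01 over PowerShell Remoting. The cmdlets are standard; the host name and target portal address are placeholders:

```powershell
# Open a remoting session to the Nano Server host.
$session = New-PSSession -ComputerName 'NANO01' -Credential (Get-Credential)

Invoke-Command -Session $session -ScriptBlock {
    # Start the iSCSI initiator service and set it to start automatically.
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Register a (hypothetical) target portal and connect to its targets.
    New-IscsiTargetPortal -TargetPortalAddress '192.168.1.50'
    Get-IscsiTarget | Connect-IscsiTarget
}

Remove-PSSession $session
```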
Q. How do you configure NIC teaming in Nano Server?
A. NIC teaming can be managed and configured via PowerShell. Take note however, that the usual LBFO NIC teaming is not available on Nano Server and you will have to use the new Switch Embedded Teaming (SET) option that was released with Windows Server 2016.
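A SET team is created implicitly by passing more than one physical adapter to New-VMSwitch. A minimal sketch, with illustrative adapter and switch names:

```powershell
# Create a virtual switch backed by a Switch Embedded Team of two NICs.
New-VMSwitch -Name 'SETswitch' `
             -NetAdapterName 'NIC1', 'NIC2' `
             -EnableEmbeddedTeaming $true
```

Unlike LBFO, there is no separate team object to manage; the teaming lives inside the virtual switch itself.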
Q. Does Altaro VM Backup support protecting VMs running on a Nano Server Hyper-V Host?
A. As Nano Server is such a radical departure from the usual Microsoft deployment option, we currently do not support backing up VMs on Nano Server hosts. We are currently looking at adding support for this deployment option, but do not have a date that can be provided at this time. Be sure to keep a look out on the Altaro blog for developments in this matter.
That wraps things up for our Q & A follow up post. We had lots of great questions and loved to see everyone actively participating in the webinar! As usual, if you think of any further follow up questions, feel free to ask them in the comments section below and we’ll get back to you ASAP!
I’ve put together a few different items in this follow-up post. I’ve got the recording of the webinar and the questions that were asked and the ones that went unanswered in the Q & A below. Additionally, I saw many requests during the webinar for the various scripts and code snippets that I used throughout the webinar, so I’ve included those as well along with some annotation that somewhat walks you through each particular demo.
NOTE: The script that I mentioned to automate the nested cluster deployment will be published as a separate post in the coming weeks, so keep an eye out for that.
With that said, let’s start by taking another look at the webinar!
Revisit the Webinar
Q & A
Q. Will Node Fairness adhere to host placement rules?
A: Yes. If you have preferred owner rules in place in your cluster, node fairness will adhere to those rules when attempting to load balance a cluster.
Q: Is the startup delay for start order priorities configurable?
A: Yes. There is no need to stick with the default of 20 seconds if you don’t want to. Using the Set-ClusterGroupSet cmdlet with the -StartupDelay parameter will allow you to configure the startup delay.
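A quick sketch of that cmdlet in action, assuming a hypothetical group set named SQL-Tier:

```powershell
# Raise the startup delay for the group set from the default 20s to 60s.
Set-ClusterGroupSet -Name 'SQL-Tier' -StartupDelay 60

# Verify the change.
Get-ClusterGroupSet -Name 'SQL-Tier' | Select-Object Name, StartupDelay
```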
Q: Are all the mentioned features available on the free Hyper-V Server?
A: All features covered in the webinar are available on all editions of Hyper-V.
Q: I thought it was required that each nested Hyper-V host have 4 GB of memory?
A: I’m not aware of any such requirement. All the nested hosts in my demo had a static 2 GB of memory configured.
Q: Similar to Start Up Priorities, is there a feature to Power off VMs in a specific order?
A: Not in the same sense as Start-Up Priorities where one VM requires another VM be running to boot. What you can do is create a PowerShell script that calls the Stop-VM cmdlet to stop VMs one at a time in a specific order.
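The script approach described in that answer might look something like this; the VM names and their order are illustrative:

```powershell
# Shut down VMs one at a time, in a fixed order. Stop-VM performs a guest
# OS shutdown and blocks until the VM is off, so each machine is fully
# stopped before the next one is touched.
$shutdownOrder = 'App-VM', 'SQL-VM', 'DC-VM'

foreach ($vmName in $shutdownOrder) {
    Stop-VM -Name $vmName
}
```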
Q: Does Altaro VM Backup now support Windows Server 2016?
A: Yes! Windows Server 2016 support was added as part of our version 7 release.
Q: Is ReFS file level restore now supported in version 7 of Altaro VM Backup
A: While we don’t support doing file level restores of an ReFS volume, remember that ReFS was designed primarily with Storage Spaces Direct and Hyper-V in mind. The chances of you actually having to recover an individual file on an ReFS volume are remote. However, you could take advantage of some of the new ReFS features by hosting your Hyper-V VMs on an ReFS volume. Then, as long as the file system inside of the protected VM is NTFS, you can still do a file level restore with Altaro VM Backup even though that VM is hosted on an ReFS volume. The filesystem contained within the VM is the important one.
Scripts from the Webinar
NOTE – IMPORTANT: The below code blocks aren’t meant to be executed as scripts. They simply walk you through the steps (With commenting) and needed PowerShell Cmdlets for each feature mentioned in the webinar and are *NOT* intended for production use. They are simply intended to be informational.
Join Andrew Mason from Microsoft (Principal Program Manager on the Nano Server team at Microsoft), and MVP Andy Syrewicze in an AMA webinar on March 16th to discuss Nano Server. Register for the webinar and get answers directly from Andrew!
Any regular readers of our blog will know that I like to put together a monthly digest of Interesting links, good howtos, and eyebrow-raising news. As 2016 is winding to a close, and I’ve begun gathering up stuff to share with you, I find myself looking more at 2016 holistically. Where have we been? Where are we going? The end of the year is always a good time for reflection so that we can move into the new year with a clean slate and a new purpose.
With that in mind, I would like to take this time to cover three different areas to wrap up this year on our blog.
Where have we been? How did we start this year and what were the highlights?
Where are we going? What do I think the next 12 months is going to look like?
What Certifications should you be targeting with the new year?
Where have we been?
If you think about it, 2016 has been something of a strange year for us with Hyper-V. Don’t get me wrong, Server 2016 was released, and is awesome, but think all the way back to January. We’d just gotten done with 2015, Windows Server 2016 was in one form of technical preview or another, and we were still using 2012 R2 Hyper-V hosts (or older) to drive our production environments. We’d heard a smattering of information about some of the new upcoming features in 2016, but some of them were half-baked, broken, or just rumor.
The mentality has really shifted as well if you think about it. For years, many on-premises administrators have been somewhat unsure of the cloud. Many worried that development for Windows Server would be stunted in favor of Microsoft throwing all their weight behind Azure. While there certainly has been a shift in focus to developing Azure, that doesn’t mean that 2016 was left out in the cold (It is Winter after all). On the contrary, development of Azure is what led to so many of the improvements that we have in Windows Server 2016 today!
I’ve heard it mentioned many times, in many different places, and I don’t know if it’s true or not, but it makes sense, based on how 2016 has been developed. Pre-2016, features flowed from Windows Server to Azure. With 2016, that flow has been reversed. New features in Windows Server 2016 are a result of a need that had to be filled in Azure. Some people like this, others don’t, but I find myself to be a staunch supporter of this strategy simply based on the (pardon my brevity) kick-ass features that we got when Server 2016 was released! Let’s look at a few of my favorite features.
When you’re building a massive datacenter environment (like Azure) you need as much bang for your buck as possible. Nano Server helps achieve this by providing a host operating system that consumes as few resources as possible. This way those resources can more efficiently go toward the VMs that are hosting workloads.
Now, many will say… “Well Andy, I just don’t need that level of optimization in my environment… why should I care?” My response to this has to do with 2 other HUGE enhancements Nano Server delivers.
Reduced Attack Surface – This was the number one reason I was a huge fan of Windows Server Core for 2008 R2 and 2012/R2 Hyper-V hosts. Nano Server takes this even further. With less than a 500MB footprint, there just isn’t much there to attack, and in today’s IT landscape, EVERYONE is responsible for security.
Fewer reboots due to patching – Everyone hates reboots, and with Nano Server the projected number of reboots per year due to patching is two. Yes, you read that correctly: two reboots a year for patching with Nano Server. Now, those are just projections, and time will tell if Microsoft hits that goal, but it’s a new Microsoft we’re seeing these days so I’m inclined to believe them at this stage.
With this in mind, I would suggest you take a good, hard look at Nano Server in the coming year. Everyone will benefit from running Hyper-V in this configuration.
If you follow the blog, you’ve likely heard me talk about Storage Spaces Direct (or S2D for short) many, many times. I do so because it’s one of my favorite features of Windows Server 2016. S2D provides us the ability to do hyper-converged deployments with compute and storage in the same boxes. No one likes supporting complex and problematic SANs and storage fabrics, especially for smaller 2-node host clusters. S2D now allows the option to run 2-node deployments and still maintain the N+1 status needed for clusters. The only networking needed is some nice 10Gb+ NICs and an interlink between the two hosts for the east/west storage traffic. This is going to open the door for some very cost-effective and powerful configurations moving forward.
The automation and management implications of PowerShell Direct are astronomical. The more I use it, the more I’m amazed at what can get done with it. (Check out the link if you want an example.) The fact is, most of us are being asked to do more with less, and unless we can start automating some of the more mundane processes that we have to do on a daily basis, most of us will never catch up. PowerShell Direct lends itself well to this, and you’ll want to utilize it heavily in the coming year once you have Windows Server 2016 in play in your environment.
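The distinguishing trick of PowerShell Direct is that it talks to the guest over the VMBus, so no network connectivity to the VM is required. A minimal sketch from the Hyper-V host, with an illustrative VM name:

```powershell
# Run commands inside a guest directly from its Hyper-V host.
$cred = Get-Credential   # guest OS credentials

Invoke-Command -VMName 'Server-HV01' -Credential $cred -ScriptBlock {
    # Everything in this block executes inside the guest OS.
    Get-Service | Where-Object Status -eq 'Running' | Select-Object -First 5
}

# Or work interactively inside the guest:
Enter-PSSession -VMName 'Server-HV01' -Credential $cred
```

Note the -VMName parameter in place of the usual -ComputerName; that is what makes it PowerShell Direct rather than ordinary remoting.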
Also, to note: if you still haven’t started learning PowerShell, I HIGHLY recommend that you do. PowerShell is the future (if it’s not here already) of Windows management, and you don’t want to be left behind. You’ll find a link to some beginner’s resources near the end of this article.
Those from the VMware world will know this feature as DRS (Distributed Resource Scheduler). Node Fairness is a feature that allows Hyper-V clusters to load balance VMs automagically across all the nodes in the cluster. This prevents those situations like when you were a kid and your parents asked you and your sibling to go do a job, and you did all the work while your sibling sat on their rear end. Yeah, most of us have been there. Historically, we could do this with System Center Virtual Machine Manager, but SCVMM is expensive, and many people in smaller environments are loath to run it. Be fair to your Hyper-V hosts and enable this feature once you’re running Windows Server 2016. Not only will your hosts run better, you won’t have to hear complaints from an overworked Hyper-V host, like your parents did when your sibling wouldn’t get off their lazy butt.
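Node Fairness is driven by two cluster common properties, which can be set straight from PowerShell on any cluster node. A sketch (the property values shown reflect my understanding of the 2016 defaults; verify against your own environment):

```powershell
# AutoBalancerMode: 0 = disabled, 1 = balance when a node joins,
#                   2 = on node join and periodically (the default).
# AutoBalancerLevel: 1 = low (default), 2 = medium, 3 = high aggressiveness.
$cluster = Get-Cluster
$cluster.AutoBalancerMode  = 2
$cluster.AutoBalancerLevel = 3

# Confirm the current settings.
Get-Cluster | Select-Object Name, AutoBalancerMode, AutoBalancerLevel
```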
Where Are We Going?
This small section here is strictly opinion. You can agree with it or not, it’s simply where I think Hyper-V and Microsoft are going in the next year based on where I sit. To start, let’s talk cloud.
Cloud computing represents a HUGE shift in the industry, there is no doubt about that. However, I think where the angst comes in amongst IT pros is how we should respond. Many see cloud computing as a threat to jobs and an IT administrator’s general way of life, and while I in part agree with that assessment, I’m not worried by it. Times surely are changing, but for the better. I don’t think we’re going to see on-premises deployments dry up as a result, because there are always going to be businesses that either want or require their equipment to be housed on-site. Additionally, it doesn’t always make fiscal sense to move everything into a cloud-based model.
I think the cloud needs to be embraced as an infrastructure option, not a replacement. The use of hybrid scenarios (any workload, anywhere, anytime) is going to become more and more the norm in 2017, not the exception. Azure and other cloud vendors provide the ability to have a geographic reach that most businesses wouldn’t have otherwise, so the cloud needs to be embraced as a tool and utility for hosting certain workloads, and not seen as a usurper. Microsoft’s 2016 era of products makes this hybrid deployment scenario much simpler and even allows for unified management experiences for all your workloads across sites with some of their new Azure-based management tools. It will be interesting to watch this transition as the industry adapts and software vendors produce products to support this increasingly common hybrid model.
In regards to the on-prem suite of products (Windows Server, System Center, etc.), I’ve seen and heard reassurances that new features are going to continue to be built and supported. Don’t believe the fearmongers who say Azure is going to swallow us whole and control the entire world. Remember, Azure runs on these technologies, and Microsoft has adopted the strategy of taking what they develop for Azure and packaging it up in Windows Server for us to use on-site. I would make the argument that without Azure, we may not have many of the awesome features that are part of the Windows Server 2016 release, so keep that in mind when you find yourself worrying about cloud computing.
Training for the New Year
So, with all of this in mind, what areas should you focus your training on for the new year? It’s always a good time to take stock of your skill set and where it needs to be improved to adapt to a changing industry, and below are my recommendations.
You have to continue to support your existing environments, and this includes getting them up to snuff with the new version of Windows Server and the plethora of enhancements and improvements it offers. I highly recommend getting certified in Windows Server 2016 as a part of this effort.
While not a certification in its own right, you will want to either start (if you haven’t) or continue to learn how to use PowerShell for automation. Automation is going to become increasingly more important as time goes on, and you need to keep up with this movement to stay relevant. Most employers hiring today require some level of PowerShell experience, so be sure to train up!
Once you’ve knocked out the above two, you’ll want to certify in Azure or some other cloud technology. Like I mentioned above, I don’t see the cloud ever truly replacing everything on-prem. More likely, it supports a vast array of hybrid scenarios and deployment types. Learning how these cloud technologies work will go a long way towards teaching you how to incorporate them into your existing IT strategy.
To wrap up, I’d like to say, thank you for reading our blog this year! We work hard to provide good solid content that is easily digestible and meaningful for your day to day activities. Please continue to visit us in the new year for a whole slew of Windows Server 2016 related content and much much more!
As always, if you’d like to share your thoughts and join the discussion, feel free to use the comments section below this article!
Hello again everyone! With October behind us, it’s time for another edition of Hyper-V Hot Topics!
In case you’re not aware, I do this segment once a month to showcase interesting articles, helpful howtos, and cool news from the Hyper-V world over the past month. Additionally, I like to post the last month’s worth of Hyper-V Minute recordings that I do on a weekly basis on Facebook Live, to allow viewers to catch up in case they’ve missed any. Let’s get started!
It’s no surprise that the vast majority of news and buzz in the Microsoft world over the last month has largely been about Windows Server 2016. Since Microsoft announced GA and released Windows Server 2016 at the Microsoft Ignite conference at the end of September, many people are starting to move 2016 into their test labs and ultimately their production datacenters. If you’re thinking about going this route early with Server 2016, there are a few known issues you should be aware of before doing so. This link from TechNet has all the details. You’ll want to give this a look before moving forward.
I’ve talked about this one before in a few different blog posts, but now it’s official: the image builder for Nano Server is available! I’m quite excited about this. Headless installations are near and dear to my heart as I came from the unix world originally a long, LONG time ago, so the idea of a headless Windows Server installation is very appealing, with a number of inherent benefits. A lot of people seem to agree with me on that, as the installation option was widely popular while Windows Server 2016 was in its technical preview phase. One issue that has stifled adoption a bit, though, is that it can be difficult to prep the Nano Server image if you aren’t familiar with Microsoft’s image tools and PowerShell. Nano Server Image Builder is a GUI tool that can prep the image for you, and it should make life MUCH easier for anyone trying out this new installation option.
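For context on what the GUI is saving you from, here’s roughly what the manual PowerShell route looks like with the NanoServerImageGenerator module that ships on the Server 2016 media. The drive letters, paths, and computer name below are placeholders for illustration:

```powershell
# Manual Nano Server image prep (what the Image Builder GUI automates).
# Assumes the Windows Server 2016 media is mounted at D:\
Import-Module D:\NanoServer\NanoServerImageGenerator -Verbose

# Build a guest VHDX with the Hyper-V and clustering roles baked in
New-NanoServerImage -Edition Standard -DeploymentType Guest `
    -MediaPath D:\ -BasePath C:\NanoBuild `
    -TargetPath C:\NanoBuild\Nano01.vhdx `
    -ComputerName Nano01 -Compute -Clustering
```

Perfectly doable, but you can see why a point-and-click front end lowers the barrier for people who haven’t lived in PowerShell yet.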
While this video isn’t from October, I find that it’s still relevant to many of the discussions I’ve been having with IT Pros over the last several weeks. A lot of administrators are starting to look at Storage Spaces Direct (S2D) as a viable storage solution for their environments and are wondering what the best option for learning more about it is. I find this video does an excellent job of covering all the relevant bases, and I’ve embedded it below for ease of viewing.
As much as I love Storage Spaces Direct, it’s always been a bit out of reach for those in the SMB space due to the 4-node MINIMUM limitation. That’s quite a bit of hardware for small businesses to purchase, and it put the solution out of reach for many. However, with one key announcement made at MS Ignite, that limitation is no longer an issue. We can now achieve the same goal of hyper-convergence (that being compute and storage in the same chassis) with a 2-node cluster. This brings the cost down considerably, and those in the SMB space can start looking at this solution seriously. This article covers in detail the building of Project Kepler, a small 2-node S2D solution. Certainly worth a read during lunch some day!
Continuing our trend of content from Ignite, I wanted to follow suit with Shielded VMs. This is another one of those features that draws people in from a feature perspective, but once they start looking at some of the components involved, they toss the idea out the window as being too difficult. While I’ll agree that the setup for Shielded VMs is not easy, or for the faint of heart, the benefits are well worth the work. As such, I wanted to link this guide, which has been fully updated on the TechNet site several times over the last month. If you’re interested in learning more about Shielded VMs, this is where you’re going to want to look.
While it’s a short article, I found it helpful. Like the author, Aidan, I was doing some testing with containers in Windows Server 2016 and was looking for an easy way to purge all containers from a host, and Aidan’s post here fit the bill. If you’re working with containers and you need a quick and easy way to remove them all, take a look at this article.
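The general pattern (read Aidan’s post for his exact approach — this is just the shape of it, run from a PowerShell prompt on the container host) is to enumerate every container ID, stop them, and then remove them:

```powershell
# List ALL container IDs (-a includes stopped ones, -q prints IDs only),
# then stop and remove each. This is destructive: it wipes every
# container on the host, so use only on a lab/test box.
docker ps -a -q | ForEach-Object { docker stop $_ }
docker ps -a -q | ForEach-Object { docker rm $_ }
```

Handy to keep in your back pocket when a lab host gets cluttered with half-finished container experiments.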
Last Month’s Hyper-V Minute Segments
So there you have it! Hopefully that will give you enough material to review for the next little while, and stay tuned for our next edition of Hyper-V Hot Topics next month!
As always, if you’ve found a link or helpful howto that you feel should be part of this list, feel free to post it in the comments section below!
Hello everyone! September is now well behind us and that means it’s time for another edition of our Hyper-V hot topics series! As a reminder, this series showcases useful links, interesting reads, and great howtos from the world of Hyper-V from throughout the previous month.
With that in mind, the big news on the Hyper-V front is all the new stuff that was announced and unveiled at Microsoft Ignite in late September. I attended and there certainly wasn’t a lack of good information and/or new technology to get excited about, so as we step through this month’s links, you’re going to see quite a bit from Ignite on the way. Let’s get started.
Microsoft always starts off the week at Ignite with a rather large and grandiose keynote, and this year was no different. The thing I like most about the keynotes at Ignite is that you get a real sense of the direction that Microsoft is taking as a company. You start to see the business reasons for some of their decisions and they also like to showcase some of the innovation and empowerment that is the direct result of those decisions. Obviously the way Microsoft is going is cloud, but that doesn’t mean there isn’t a slew of additional announcements to be seen. This video clocks in at just under 2 hours, but there is a lot to digest and take in here. It’s worth a view!
Microsoft did something a little different this year that was something of a surprise to me. Microsoft CEO Satya Nadella gave an evening keynote on day 1 that was simply titled the Innovation Keynote. While different, I felt this worked out quite well, as a common complaint about Microsoft keynotes in the past has been their length, so splitting it up like this seemed to be welcomed by attendees. The other cool thing I liked about this format is that it allowed Microsoft to showcase innovation and cool stuff being done with their products, and to talk about what is coming next in the world of IT, without getting bogged down by product announcements, as those were mostly taken care of in the morning keynote. This one is slightly shorter at just over an hour, but again, well worth a watch!
So this wouldn’t be much of a post about Hyper-V Hot Topics without some specific mention of Hyper-V, right? Fear not: if you’ve been waiting for the hot session from Ignite about all the cool new stuff in Microsoft virtualization, this is the session for you! Ben Armstrong (Hyper-V Program Manager) put on this session showcasing all the new features and technology they have baked into Hyper-V for Windows Server 2016, and it is a LOT! I’ve embedded the video below for easier viewing.
This was another favorite session of mine from throughout the week. Anyone who has worked with virtualization technologies in recent history knows that the lines between virtualization and storage are blurring. You can hardly manage one these days without having to manage the other. This session focuses on the new enhancements in Storage Spaces that allow you to get amazing performance and flexibility when used in conjunction with Hyper-V. Again, I’ve embedded the video below for easier viewing.
While not strictly Hyper-V related, I always take great interest in the talk by Mark and Mark every year. I find that they do a good job of talking about what’s coming in the world of IT, and they both have a unique perspective on the industry as a whole. It is often considered one of the best sessions at Ignite and is usually standing room only every year as a result.
What I’ve shown above is only a fraction of the content that was put out at MS Ignite 2016. If you’re interested in seeing and learning more, you can download the various slide decks and video recordings. Additionally, this can be done in the coolest way possible: via PowerShell! This link will send you to the TechNet gallery where you can download the PowerShell tool that will allow you to do this. Enjoy!
Likely the only non-Ignite post for this segment, but I wanted to post it all the same. This is a really useful script I stumbled across by Ben Armstrong. How many times have you wanted a basic report on the types of guest operating systems running within your Hyper-V environment? This script does just that. While you may not need it today or tomorrow, squirrel it away somewhere, because eventually it’ll be of use to you!
Catch-Up for the Altaro Hyper-V Minute
In addition to our usual links we like to catch everyone up on last month’s episodes of our Hyper-V Minute Facebook Live Feed. The Hyper-V Minute simply serves as a platform where I talk about something interesting from the Hyper-V world for a couple of minutes. With that in mind, the video segments from the last month are below!
Well that wraps up things for this month’s segment of Hyper-V Hot Topics! Hopefully you enjoy sifting through all the information I’ve linked and I hope it provides value in your day to day operations. Until next month, thanks for reading!