Microsoft has made major changes to the way that they build and release their operating systems. The new Windows Server “Semi-Annual Channel” (SAC) marks a substantial departure from the familiar release pattern that Microsoft has established. The change has pleased some people, upset many, and confused even more. With all the flash of new features, it’s easy to miss the finer points — specifically, how you, your organization, and your issues fit into the new model.
The Traditional Microsoft OS Release Model
Traditionally, Microsoft would work on pre-release builds internally and with a small group of customers, partners, and other interested stakeholders (such as MVPs). Then, they would finesse the builds with open betas (usually called “Release Candidates”). Then, there would be an RTM (release-to-manufacturing) event followed by GA (general availability). The release would then be considered “current”. It would enjoy regular updates including service packs and feature upgrades for a few years, then it would go into “extended support” where it would only get stability and security updates. While customers purchased and worked with the “current” version, work on the next version would begin in parallel.
Not every version followed this model exactly, but all of them were similar. The most recent Windows Server operating system to employ this model is Windows Server 2016.
Changes to the Model with Windows 10/Server 2016
The “Windows Insider Program” was the first major change to Microsoft’s OS build and release model. Initially, it was most similar to the “release candidate” phase of earlier versions. Anyone could get in and gain access to Windows 10 builds before Windows 10 could even be purchased. However, it deviated from the RC program in two major ways:
The Windows Insider Program includes an entire community.
The Windows Insider Program continues to provide builds after Windows 10 GA.
The Windows Insider Community
Most of us probably began our journey to Windows Insider by clicking an option in the Windows 10 update interface. However, you can also sign up using the dedicated Windows Insider web page. You get access to a dedicated forum. And, of course, you’ll get e-mail notifications from the program team. You can tell Microsoft what you think about your build using the Feedback Hub. That applet is not exclusive to Insiders, but they’ll know if you’re talking about an Insider build or a GA build.
Ongoing Windows Insider Builds
I expect that most Insiders prize access to new builds of Windows 10 above the other perks of the program. The Windows 10 Insider Program allows you to join one of multiple “rings” (one per joined Windows 10 installation). The ring that an installation belongs to dictates how close it will be to the “cutting edge”. You can read up on these rings and what they mean on the Insider site.
The most important thing about Windows Insider builds — and the reason that I brought them up at all in this article — is that they are not considered production-ready. The fast ring builds will definitely have problems. The preview release builds will likely have problems. You’re not going to get help for those problems outside of the Insider community, and any fix will almost certainly include the term “wait for the next build” (or the next… or the one after… or some indeterminate future build). I suspect that most software vendors will be… reluctant… to officially support any of their products on an Insider build.
Windows Server Insider Program
The Windows Server Insider Program serves essentially the same purpose as the Windows 10 Insider Program, but for the server operating system. The sign-up process is a bit different, as it goes through the Windows Insider Program for Business site. The major difference is the absence of any “rings”. Only one current Windows Server Insider build exists at any given time.
Introducing the Windows Server Semi-Annual Channel
I have no idea what you’ve already read, so I’m going to assume that you haven’t read anything. But, I want to start off with some very important points that I think others gloss over or miss entirely:
Releases in the Windows Server Semi-Annual Channel are not Windows Server 2016! Windows Server 2016 belongs to the Long-Term Servicing Channel (LTSC). The current SAC is simply titled “Windows Server, version 1709”.
You cannot upgrade from Windows Server 2016 to the Semi-Annual Channel. For all I know, that might change at some point. Today, you can only switch between LTSC and SAC via a complete wipe-and-reinstall.
On-premises Semi-Annual Channel builds require Software Assurance (I’d like to take this opportunity to point out: so does Nano). I haven’t been in the reseller business for a while so I don’t know the current status, but I was never able to get Software Assurance added to an existing license. It was always necessary to purchase it at the same time as its base volume Windows Server license. I don’t know of any way to get Software Assurance with an OEM build. All of these things may have changed. Talk to your reseller. Ask questions. Do your research. Do not blindly assume that you are eligible to use an SAC build.
The license for Windows Server is interchangeable between LTSC and SAC. Meaning that, if you are a Software Assurance customer, you’ll be able to download/use either product per license count (but not both; 1 license count = 1 license for LTSC or 1 license for SAC).
The keys for Windows Server are not interchangeable between LTSC and SAC. I’m not yet sure how this will work out for Automatic Virtual Machine Activation. I did try adding the WS2016 AVMA key to a WS1709 guest and it did not like that one bit.
SAC does not offer the Desktop Experience. Meaning, there is no GUI. There is no way to install a GUI. You don’t get a GUI. You get only Core.
Any given SAC build might or might not have the same available roles and features as the previous SAC build. Case in point: Windows Server, version 1709 does not support Storage Spaces Direct.
SAC builds are available in Azure.
SAC builds are supported for production workloads. SAC follows the Windows Server Insider builds, but SAC is not an Insider build.
SAC builds will only be supported for 18 months. You can continue using a specific SAC build after that period, but you can’t get support for it.
SAC builds should release roughly every six months.
SAC builds will be numbered for their build month. Ex: 1709 = “2017 September (09)”.
SAC ships in Standard and Datacenter flavors only.
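The YYMM numbering convention above can be illustrated with GNU date (a quick sketch; requires GNU coreutils):

```shell
# SAC builds are numbered by the year and month (YYMM) of the release build.
date -d "2017-09-01" +%y%m   # prints 1709 (Windows Server, version 1709)
date -d "2018-03-01" +%y%m   # prints 1803 (roughly when the next SAC is due)
```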
The Semi-Annual Channel is Not for Everyone
Lots of people have lots of complaints about the Windows Server Semi-Annual Channel. I won’t judge the reasonableness or validity of any of them. However, I think that many of these complaints are based on a misconception. People have become accustomed to a particular release behavior, so they expected SAC to serve as vNext of Windows Server 2016. Looking at Microsoft’s various messages on the topic, I don’t feel they did a very good job explaining the divergence. So, if that’s how you look at it, then it’s completely understandable that you’d feel like WS1709 slapped you in the face.
However, it looks different when you realize that WS1709 is not intended as a linear continuation. vNext of Windows Server 2016 will be another release in the LTSC cycle. It will presumably arrive sometime late next year or early the following year, and it will presumably be named Windows Server 2018 or Windows Server 2019. Unless there are other big changes in our future, it will have the Desktop Experience and at least the non-deprecated roles and features that you currently have available in WS2016. Basically, if you just follow the traditional release model, you can ignore the existence of the SAC releases.
Some feature updates in SAC will also appear in LTSC updates. As an example, both WS1709 and concurrent WS2016 patches introduce the ability for containers to use persistent data volumes on Cluster Shared Volumes.
Who Benefits from the Semi-Annual Channel?
If SAC is not meant for everyone, then who should use it? Let’s get one thing out of the way: no organization will use SAC for everything. The LTSC will always have a place. Do not feel like you’re going to be left behind if you stick with the LTSC.
Basically, you need to have something akin to a mission-critical level of interest in one or more of these topics:
Containers and related technologies (Docker, Kubernetes, etc.)
High-performance networking. I’m not talking about the “my test file copy only goes 45Mbps” flavor of “high performance” networking, but the “processing TCP packets between the real-time interface and its OLTP database causes discernible delays for my 150,000 users” flavor.
Multiple large Hyper-V clusters
Read the “What’s New” article for yourself. If you can’t find any must-have-yesterdays in that list, then don’t worry that you might have to wait twelve to eighteen months for vNext of LTSC to get them.
Who Benefits from the Long-Term Servicing Channel?
As I said, the LTSC isn’t going anywhere. Not only that, we will all continue to use more LTSC deployments than SAC deployments.
Choose LTSC for:
Stability: Even though SAC builds will be production-ready, the lead time between initial conception and first deployment will be much shorter. New SAC features will arrive with rougher edges than their LTSC counterparts.
Predictability: The absence of S2D in WS1709 caught almost everyone by surprise. That sort of thing won’t happen with LTSC. They’ll deprecate features first to give you at least one version’s worth of fair warning. (Note: S2D will return; it’s not going away).
Third-party applications: We all have vendors that are still unsure about WS2008. They’re certainly not going to sign off on SAC builds.
Line-of-business applications: Whether third-party or Microsoft, the big app server that holds your organization up doesn’t need to be upgraded twice each year.
What Does SAC Mean for Hyper-V?
The above deals with Windows Server Semi-Annual Channel in a general sense. Since this is a Hyper-V blog, I can’t leave without talking about what SAC means for Hyper-V.
For one thing, SAC does not have a Hyper-V Server distribution. I haven’t heard of any long-term plans, so the safest bet is to assume that future releases of Hyper-V Server will coincide with LTSC releases.
As for what WS1709 brings, the new Hyper-V features include:
Storage of VMs in storage-class memory (non-volatile RAM)
Splitting of “guest state” information out of the .vmrs file into its own .vmgs file
Support for running the host guardian service as a virtual machine
Support for Shielded Linux VMs
Virtual network encryption
Looking at that list, “Support for Shielded Linux VMs” seems to have the most appeal to a small- or medium-sized organization. As I understand it, that’s not a feature so much as a support statement. Either way, I can shield a Linux VM on my fully-patched Windows Server 2016, version 1607 (LTSC) system.
As for the rest of the features, they will find the widest adoption in larger, more involved Hyper-V installations. I obviously can’t speak for everyone, but it seems to me that anyone that needs those features today won’t have any problems accepting the terms that go along with the switch to SAC.
For the rest of us, Hyper-V in LTSC has plenty to offer.
What to Watch Out For
Even though I don’t see any serious problems that will result from sticking with the LTSC, I don’t think this SKU split will be entirely painless.
For one thing, the general confusion over “Windows Server 2016” vs. “Windows Server, version 1709” includes a lot of technology authors. I see a great many articles with titles that include “Windows Server 2016 build 1709”. So, when you’re looking for help, you’re going to need to be on your toes. I think the limited appeal of the new features will help to mitigate that somewhat. Still, if you’re going to be writing, please keep the distinction in mind.
For another, a lot of technology writers (including those responsible for documentation) work only with the newest, hottest tech. They might not even think to point out that one feature or another belongs only to SAC. I think that the smaller target audience for the new features will keep this problem under control, as well.
The Future of LTSC/SAC
All things change. Microsoft might rethink one or both of these release models. Personally, I think they’ve made a good decision with these changes. Larger customers will be able to sit out on the bleeding edge and absorb all the problems that come with early adoption. By the time these features roll into LTSC, they’ll have undergone solid vetting cycles on someone else’s production systems. Customers in LTSC will benefit from the pain of others. That might even entice them to adopt newer releases earlier.
Most importantly, effectively nothing changes for anyone that sticks with the traditional regular release cycle. Windows Server Semi-Annual Channel offers an alternative option, not a required replacement.
What does the future hold for Hyper-V and its users? Technology moves fast, so should Hyper-V admins be concerned about what’s coming? We don’t have a crystal ball to tell us what the future holds, but we do have three industry experts and Microsoft MVPs to tell you what to expect. Following our hugely popular panel webinar, 3 Emerging Technologies that will Change the Way you use Hyper-V, we’ve brought together all of the questions asked during both sessions (we hold two separate webinar sessions on the same topic to accommodate our European and American audiences) into one article, with extended answers, to address what’s around the corner for Hyper-V and related technologies.
Let’s get started!
Question 1: Do you think IT Security is going to change as more and more workloads move into the cloud?
Answer: Absolutely! As long as we’re working with connected systems, no matter where they are located, we will always have to worry about security. One common misconception, though, is that a workload housed inside Microsoft Azure is somehow less secure. Public cloud platforms have been painstakingly built from the ground up with the help of industry security experts. You’ll find that if best practices are followed, along with the rules of least access and just-in-time administration, the public cloud is a highly secure platform.
Question 2: Do you see any movement to establish a global “law” of data security/restrictions that are not threatened by local laws (like the patriot act)?
Answer: Until all countries of the world are on the same page, I just don’t see this happening. Unfortunately, the US treats data privacy in a very different way than the EU does. The upcoming General Data Protection Regulation (GDPR), coming in May 2018, is a step in the right direction, but it only applies to the EU and data traversing the EU’s boundaries. It will certainly affect US companies and organizations, but nothing similar in nature is in the works in the US.
Question 3: In the SMB Space, where a customer may only have a single MS Essentials server and use Office 365, do you feel that this is still something that should move to the cloud?
Answer: I think the answer to that question depends greatly on the customer and the use case. As Didier, Thomas and I discussed in the webinar, the cloud is a tool, and you have to evaluate for each case, whether it makes sense or not to run that workload in the cloud. If for that particular customer, they could benefit from those services living in the cloud with little downside, then it may be a great fit. Again, it has to make sense, technically, fiscally, and operationally, before you can consider doing so.
Question 4: What exactly is a Container?
Answer: While not the same thing at all, it’s often easiest to see a container as a kind of ultra-stripped-down VM. A container holds an ultra-slim OS image (in the case of Nano Server, 50-60 MB), any supporting code framework, such as .NET, and then whatever application you want to run within the container. Containers are not the same as VMs because Windows containers all share the kernel of the underlying host OS. However, if you require further isolation, you can use Hyper-V containers, which run a container within an optimized VM so you can take advantage of Hyper-V’s isolation capabilities.
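As a rough sketch of the workflow (assuming a Windows Server container host with Docker installed; the image name reflects the 2017-era Docker Hub listing and may differ today):

```shell
# Run a throwaway command in a process-isolated Windows container:
docker run --rm microsoft/nanoserver cmd /c "echo hello from a container"

# Run the same image as a Hyper-V container for kernel-level isolation:
docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c "echo isolated"
```

Note that the only difference between the two is the --isolation flag; the image and the application inside it are unchanged.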
Question 5: On-Premises Computing is Considered to be a “cloud” now too correct?
Answer: That is correct! In my view, the term cloud doesn’t refer to a particular place, but to the new technologies and software-defined methods that are taking over datacenters today. So you can refer to your infrastructure on-prem as “private cloud”, and anything like Azure or AWS as “Public Cloud”. Then on top of that anything that uses both is referred to as “Hybrid Cloud”.
Question 6: What happens when my client goes to the cloud and they lose their internet service for 2 weeks?
Answer: The cloud, just like any technology solution, has its shortcomings, which can be overcome if planned for properly. If you have a mission-critical service you’d like to host in the cloud, then you’ll want to research ways to make that workload highly available. That could include a secondary internet connection from a different provider, or some way to make the workload accessible from the on-prem location if needed. Regardless of where a workload lives, you need to plan for eventualities like this.
Question 7: What Happened to Azure Pack?
Answer: Azure Pack is still around and usable; it will just be replaced by Azure Stack at some point. In the meantime, there are integrations available that allow you to manage both solutions from your Azure Stack management utility.
Question 8: What about the cost of Azure Stack? What’s the entry point?
Answer: This is something of a difficult question. Figures I’ve heard range from $75k to $250k, depending on the vendor and the load-out. You’ll want to contact your preferred hardware vendor for more information on this question.
Question 9: We’re a hosting company, is it possible to achieve high levels of availability with Azure Stack?
Answer: Just like with any technology solution, you can achieve the coveted 4 9s of availability; the question is how much money you want to spend. You could do so with Azure Stack and the correct supporting infrastructure. However, keep in mind that your SLA is only as good as your supporting vendors’. For example, if you sell 4 9s as an SLA, and the internet provider for your datacenter can only provide 99%, then you’ve already broken your SLA.
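To make the compound-SLA point concrete: end-to-end availability can be no better than the product of each dependency’s availability. A quick back-of-the-envelope sketch (portable awk):

```shell
# 4 9s on your own stack multiplied by a 99% internet provider SLA:
awk 'BEGIN { printf "%.6f\n", 0.9999 * 0.99 }'   # prints 0.989901

# Allowed downtime per year (hours) at that composite availability:
awk 'BEGIN { printf "%.1f\n", (1 - 0.9999 * 0.99) * 365.25 * 24 }'   # prints 88.5
```

In other words, a 99% upstream link caps the whole service at roughly two nines, regardless of what you promise on paper.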
Question 10: For Smaller businesses running Azure Stack, should software vendors assume these businesses will look to purchase traditionally on-prem software solutions that are compatible with this? My company’s solution does not completely make sense for the public cloud, but this could bridge the gap.
Answer: I think for most SMBs, Azure Stack will be fiscally out of reach. With Azure Stack you’re really paying for a “Cloud Platform”, and for most SMBs it will make more sense to take advantage of public Azure if those types of features are needed. That said, to answer your question, there are already vendors doing this. Anything that will deploy on public Azure using ARM will also deploy easily on Azure Stack.
Question 11: In Azure Stack, can I use any backup software and backup the VM to remote NAS storage or to AWS?
Answer: At release, there is no support for third-party backup solutions in Azure Stack. Right now there is a built-in flat-file backup, and that is it. I suspect that it will be opened up to third-party vendors at some point, and it will likely be protected in much the same way as public Azure resources.
Question 12: How would a lot of these [Azure Stack] services be applied to the K-12 education market? There are lots of laws that require data to be stored in the same country. Yet providers often host in a different country.
Answer: If you wanted to leverage a provider’s Azure Stack somewhere, you would likely have to find one that actually hosts it in the geographical region you’re required to operate in. Many hosters will provide written proof of where the workload is hosted for these types of situations.
Question 13: I’m planning to move to public Azure, how many Azure cloud Instances would I need?
Answer: There is no hard set answer for this. It depends on the number of VMs/Applications and whether you run them in Azure as VMs or in Azure’s PaaS fabric. The Azure Pricing Calculator will give you an idea of VM sizes and what services are available.
Watch the webinar
Did you miss the webinar when it first went out? Has this blog post instilled a desire to rewatch the session? Have no fear, we have set up an on-demand version for you to watch right now! Simply click on the link below to go to the on-demand webinar page, where you can watch a recording of the webinar for free.
If you have a question on the future of Hyper-V or any of the three emerging technologies discussed in the webinar, just post in the comments below and we will get straight back to you. Furthermore, if you asked a question during the webinar that you don’t see here, by all means let us know in the comments section below and we will be sure to answer it here. Any follow-up questions are also very welcome, so feel free to let us know about those as well!
The I.T. landscape changes incredibly quickly (if you know a faster-changing industry, I’d love to hear about it!). I.T. professionals need to know what’s coming around the corner to stay ahead of the game, or risk being left behind. We don’t want that to happen to you, so we’ve run down what we feel are the three most important emerging technologies that will drastically change the Hyper-V landscape.
Continued Adoption of Public Cloud Platforms – It’s becoming clear that the public cloud is continuing to gain steam. It’s not just one vendor, but several, and it continues to pull workloads from on-premise to the cloud. Many people were keen to wait out this “cloud thing”, but it has become quite clear that it’s here to stay. Capabilities in online platforms such as Microsoft Azure and Amazon AWS, have increasingly made it easier, more cost-effective, and desirable to put workloads in the public cloud. These cloud platforms can often provide services that most customers don’t have available on-premise, and this paired with several other things that we’ll talk about in the webinar are leading to increased adoption of these platforms over on-premise installations.
Azure Stack and the Complete Abstraction of Hyper-V under-the-hood – With some of the latest news and release information out of Microsoft regarding their new Microsoft Azure Stack (MAS), things have taken an interesting turn for Hyper-V. As on-premise administrators have always been used to having direct access to the hypervisor, they may be surprised to learn that Hyper-V is so far under the hood in MAS that you can’t even access it. That’s right. The Hypervisor has become so simplified and automated, that there is no need to directly access it in MAS, but this is primarily because MAS follows the same usage and management guidelines as Microsoft Azure. This will bother a lot of administrators but it’s becoming the world we live in. As such, we’ll be talking about this extensively during the webinar.
Containers and Microservices and why they are a game-changer – Containers have become one of the new buzzwords in the industry. If you’re not aware, you can think of a container as similar to a VM, but fundamentally different. Whereas in a VM you’re virtualizing the OS and everything on top of it, with containers you’re only virtualizing the application. Much of the underlying support functionality is handled by the container host, as opposed to an OS built into a VM. For a long time it seemed that containers were going to be primarily a developer thing, but as the line between IT Pro and Dev continues to blur, containers can no longer be ignored by IT Pros, and we’ll be talking about that extensively during our panel discussion.
As you can see there is much to talk about, and many will be wondering how this affects them. You’re probably asking yourself questions like: “What new skills should IT Pros be learning to stay relevant?”, “Are hypervisors becoming irrelevant?”, “Will containers replace virtual machines?”, “Is the Cloud here to stay?”, “Is there still a place for Windows Server in the world?”, “What can I do now to stay relevant and what skills do I need to learn to future-proof my career?” Yep, these developments certainly raise a lot of issues which is why we decided to take this topic further.
Curious to know more? Join our Live Webinar!
As you know we love to put on webinars here at Altaro as we find them a critical tool for getting information about new technologies and features to our viewership. We’ve always stuck to the same basic educational format and it’s worked well over the years. However, we’ve always wanted to try something a bit different. There certainly isn’t anything wrong with an educational format, but with some topics, it’s often best to just have a conversation. This idea is at the core of our next webinar along with some critical changes that are occurring within our industry.
For the first time ever, Altaro will be putting on a panel-style webinar with not one, not two, but three Microsoft Cloud and Datacenter MVPs. Andy Syrewicze, Didier Van Hoye, and Thomas Maurer will all be hosting this webinar as they discuss some major changes occurring in the industry today and take your questions and feedback. These are things that will affect the way you use and consume Hyper-V.
As always, we will be hosting the webinar twice to accommodate those on both sides of the Atlantic. Both live sessions will have the same content, but the respective audiences may have region-specific questions, so we recommend sticking to your regional time slot; feel free to join the other session if you can’t make yours.
Also remember, that this panel webinar isn’t just for our 3 speakers to share their opinions! This is a perfect chance to make your voice and opinions heard as well. We’ll be sure to provide every opportunity for you to ask questions and weigh in on the discussion as well, so bring your questions and comments!
Additionally, if there are any questions you’d like to address ahead of time, be sure to use the comments below to do so!
Last week saw us close the door on Microsoft Ignite 2017, and while the conference came and went in a blur, there was no lack of information or amazing reveals from Microsoft. While the conference serves as a great way to stay informed on all the new things Microsoft is working on, I also find it a good way to get a sense of the company’s overall direction. With that in mind, I wanted to not only talk about some of my favorite reveals from the week but also share my take on where Microsoft is heading.
My take on the week from an Infrastructure Engineering Perspective
To put things simply: things are changing, and they’re changing in a big way. I’ve had a gut feeling stirring for some time that the way we work with VMs and virtualization was changing, and the week of Ignite was a major confirmation of that. That’s not to mention the continued shift from the on-premise model we’re used to, to the new cloud (public, private, and hybrid) model that things are moving to.
It’s very clear that Microsoft is adopting what I would call the “Azure-Everywhere” approach. Sure, you’ve always been able to consume Azure using what Microsoft has publicly available, but things really change when Azure Stack is put into the mix. Microsoft Azure Stack (MAS) is officially on the market now, and the idea of having MAS in datacenters around the world is an interesting prospect. What I find so interesting is that management of MAS onsite is identical to managing Azure: you use Azure Resource Manager and the same collection of tools to manage both. Pair that with the fact that Hyper-V is so abstracted and under the hood in MAS that you can’t even see it, and you’ve got a recipe for major day-to-day changes for infrastructure administrators.
Yes, we’ve still got Windows Server 2016, and the newly announced Honolulu management utility, but if I look out 5 or even 10 years, I’m not sure I see us working with Windows Server in the way that we do today. I don’t think VM usage will be as prevalent then as it is now, either. After last week, I firmly believe that containers will be the “new virtual machine”. I think VMs will stay around for legacy workloads, and for workloads that require additional layers of isolation, but after seeing containers in action last week, I’m all in on that usage model.
We used to see VMs as this amazing cost-reducing technology, and they were for a long time. However, I saw containers do to VMs what VMs did to physical servers. I attended a session on moving workloads to a container-based model, where MetLife was on stage talking about moving some of their infrastructure to containers. In doing so, they achieved:
A 70% reduction in the number of VMs in the environment
A 67% reduction in needed CPU cores
A 66% reduction in overall cost of ownership
Those are amazing numbers that nobody can ignore. Given this level of success with containers, I see the industry moving to that deployment model from VMs over the next several years. As much as it pains me to say it, virtual machines are starting to look very “legacy”, and we all need to adjust our skill sets accordingly.
As you know, Ignite is that time of year when Microsoft makes some fairly large announcements, and below I’ve compiled a list of some of my favorites. This is by no means a comprehensive list, but I feel these represent what our readers will find most interesting. Don’t agree? That’s fine! Just let me know what you think were the most important announcements in the comments. Let’s get started.
8. New Azure Exams and Certifications!
With new technologies, come new things to learn, and as such there are 3 new exams on the market today for Azure Technologies.
For Azure Stack Operators – Exam 537: Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack
For Azure Solutions Architects – Exam 539: Managing Linux Workloads on Azure
For Azure DevOps – Exam 538: Implementing Microsoft Azure DevOps Solutions
7. SQL Server 2017 on Linux and in Containers
Normally I wouldn’t make much of a fuss about SQL Server, as I’m not much of a SQL guy myself, but Microsoft did something amazing with this release: SQL Server 2017 will run on Windows, on Linux, and inside Docker containers. Yes, you read correctly. SQL Server 2017 will run on Linux and inside Docker containers, which opens up a whole new avenue for providing SQL workloads. Exciting times indeed!
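As a sketch of what that looks like (the image name and tag match the 2017-era Docker Hub listing and may differ today; the SA password is a placeholder and must meet SQL Server complexity rules):

```shell
# Pull and start SQL Server 2017 on Linux inside a Docker container:
docker pull microsoft/mssql-server-linux:2017-latest

docker run -d --name sql2017 \
  -e "ACCEPT_EULA=Y" \
  -e "SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  microsoft/mssql-server-linux:2017-latest
```

Once the container is up, any standard SQL client can connect to port 1433 on the host just as it would to a conventional SQL Server installation.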
6. Patch Management from the Azure Portal
Ever wanted to have WSUS available from the Azure portal? Now you have it. You can easily view, and deploy patches for your Azure based workloads directly from the Azure portal. This includes Linux VMs as well, which is great news as more and more admins are finding themselves managing Linux workloads these days!
5. PowerShell Now Available in Azure CLI
When Azure CLI was announced and released, many people were taken aback at the lack of PowerShell support. This was done for a number of reasons that I won’t get into in this article, but regardless, it has been added in now. It is now possible with Azure CLI to deploy a VM with a single PowerShell cmdlet and more. So, get those scripts ready!
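For comparison, the Azure CLI’s one-command style looks like the sketch below (resource names are placeholders); the simplified New-AzureRmVM cmdlet follows the same pattern from the PowerShell side:

```shell
# Create a resource group, then deploy a VM with a single CLI call:
az group create --name demo-rg --location westeurope

az vm create \
  --resource-group demo-rg \
  --name demo-vm \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys
```

Defaults fill in the virtual network, public IP, and disk, which is what makes the single-command deployment possible.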
4. Azure File Sync
I know many friends and colleagues that have been waiting for something like this. You can essentially view it as next-generation DFS (though it doesn’t use the same technology). It allows you to sync your on-premise file servers with an Azure Files account for distributed access to the stored information around the globe.
3. Quality of Life Improvements for Windows Containers
While there were no huge reveals in the container space, Windows Server 1709 was announced, and it contains a lot of improvements and optimizations for running containers on Windows Server. This includes things like smaller images and support for Linux containers running on Windows Server. I did an interview with Taylor Brown from the Containers team, which you can view below for more information.
2. Nested Virtualization in Azure for Production Workloads
Yes, I know, nested virtualization in Azure has been available for some time. However, what I found different was Microsoft’s insistence that it can also be used for production workloads. During Scott Guthrie’s keynote, Corey Sanders demonstrated an M-Series (“Monster”) VM in Azure being used to host production workloads with nested VMs. While obviously not ideal in every scenario, this is simply another tool that we have at our disposal for added flexibility in our day-to-day operations.
If you’re interested, I interviewed Rick Claus from the Azure Compute team about this. That interview can be seen below.
1. Project Honolulu
This one is for the folks that are still strictly interested in the on-prem stuff. Microsoft revealed and demonstrated the new Project Honolulu management utility for on-premises workloads. Honolulu takes the functionality of all the management tools and MMC snap-ins that we’ve been using for years and packages them up into a nice, easy-to-use web UI. It’s worth a look if you haven’t seen it yet. We even have a nice article on our blog about it if you’re interested in reading more!
As I mentioned, this is by no means a comprehensive list, but we’ll be talking about items from Ignite (both from this list and some not mentioned) on our blogs for some time. So, be sure to keep an eye on our blog if you’re interested in more information.
Additionally, if you attended Microsoft Ignite, and you saw a feature or product you think is amazing that is not listed above, be sure to let us know in the comments section below!
Q. If you’re new to Azure Stack, what are some good resources for learning more about it (other than this webinar)?
A. If you’re looking to learn more about Azure Stack, it’s best to start by learning more about Azure itself. Managing Azure Stack is so similar to Azure that learning how to handle Azure will help you with Azure Stack when you’re ready to deploy it. If you’re looking to focus on individual features, I recommend focusing on ARM (Azure Resource Manager) before other items. With that said, Microsoft has a lot of training materials about Azure and ARM, and even has an online virtual academy with some resources HERE
Q. Microsoft has already talked about scaling the solution up from the existing planned deployments, are there any mentioned plans to scale the solution down?
A. The smallest that Azure Stack scales down to is 4 nodes, with no mentioned plans to go below that. Due to the nature of the solution and what it’s capable of delivering, if 4 nodes is not small enough, it’s recommended to host the workloads directly in Azure instead.
Q. Will it be more resource-efficient to host PaaS workloads or IaaS workloads in Azure Stack?
A. While the final numbers and pricing will tell you for sure, at this point it looks like PaaS will be the more efficient route (like public Azure). This is because PaaS services are inherently more efficient than IaaS, as you’re not having to support a separate underlying OS for each workload.
Q. What are the differences between the different switch types in Azure Stack?
A. The aggregate switch acts as an aggregation layer for all the different TOR switches to connect to. The TOR switch is a top-of-rack switch that the physical hosts connect to, and the BMC switch is used by the baseboard management controllers in the hosts for things like automated power-on, power-off, and patching.
Q. Can I use Altaro VM Backup to protect workloads running on Azure Stack?
A. At release, Microsoft is not opening APIs or providing a way for third-party vendors to provide backup services inside of the stack. However, it is suspected (but not confirmed) that they will open a marketplace for MAS (Microsoft Azure Stack), much like they have for Azure. Through this, backup vendors could deploy methods for protecting Azure Stack based workloads. We will be watching this closely and will be sure to notify you via the Altaro blog of any major product enhancements centered around this.
Q. Am I able to use an Azure Stack based storage account for hosting offsite backups with Altaro VM Backup?
A. Yes! You can connect to an Azure Stack based storage account just as you would connect to a storage account hosted in public Azure. All you need to do is follow the instructions in the offsite backup location section of the application and paste in your connection string for the storage account.
Well, that wraps things up for August’s webinar! Be sure to keep an eye on this space, as we’ll be posting more information about Azure Stack as our authors find things that are interesting and of use to you!
As always, if you had a question that wasn’t answered, or you thought of a follow-up question, be sure to use the comments section below and we’ll be sure to get you your answer ASAP.
Thanks for attending, and we hope to see you for the next one!
On July 18th, we put on a webinar with Aidan Finn regarding Azure IaaS and hybrid cloud. The webinar was well attended, and we got some great questions and feedback throughout the session. As is our norm for webinars, this post contains the following:
A recording of the webinar in its entirety
A link to the slide deck used
A full list of the questions and their associated answers.
If you have any follow-up questions be sure to use the comments section below and we’ll be sure to get you an answer!
Watch Webinar – 4 Important Azure IaaS Features for Building your Hybrid Cloud
Q: If there is a trackable pending disaster such as a hurricane or a war, will Microsoft proactively move data and workloads to another Azure Datacenter Region?
A: The short answer here is no. Microsoft leaves it up to customers to design and architect their solutions across several datacenter regions themselves if they need that kind of failover and redundancy. Microsoft will not sync data between datacenters on its own in this regard; you have to set it up yourself.
Q: Is it possible to select managed or un-managed for disks during the creation of a new VM in Azure?
A: It is. In the storage section, under step 3 of the VM’s creation, you have the option of selecting managed or unmanaged storage.
Q: Is it possible to change from un-managed to managed storage at a later time?
A: Yes! There are a few PowerShell cmdlets that can do this, and the process is fairly quick. More information on this can be found HERE.
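For reference, a minimal sketch of that conversion with the AzureRM cmdlets (the resource group and VM names below are placeholders):

```powershell
# The VM must be deallocated before the conversion can run.
Stop-AzureRmVM -ResourceGroupName "MyRG" -Name "MyVM" -Force

# Converts all of the VM's unmanaged disks to managed disks and
# restarts the VM once the conversion completes.
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName "MyRG" -VMName "MyVM"
```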
Q: Does an MSDN subscription allow you to do some testing with Azure?
A: Yes. You get various credits depending on your subscription level. You can find more information on this HERE
Q: When a host “warm reboots” in Azure, how do the VMs stay online? How do they get resources?
A: They don’t stay online; however, the downtime is only 15 to 30 seconds, so it’s nearly unnoticeable unless you’re running a very connectivity-sensitive application.
Q: How can I keep track of which services are available in what regions?
It’s been a big year for us here at Altaro Software so far. We’ve had quite a number of improvements and new features added to the product since the start of the year, and we’re showing no sign of slowing down.
We’d like to introduce you to our latest exciting feature addition, one that customers have been asking about for some time. We’re proud to announce that, as of Altaro VM Backup 7.5, we can now send offsite backups directly to an Azure storage account!
Some of you will read that and say, “Haven’t we been able to do that before?” It’s true that some customers have been sending their offsite backups into Azure, but before our 7.5 release, doing so required a compute instance running our Altaro Offsite Server software inside of Azure. That entailed a running VM, page blob storage or Azure Files storage, an Azure security group, Azure WAN IPs, and potentially more. From the mere standpoint of needing a place to put your offsite backups, it was more complex than we cared for.
Now, with version 7.5, all you need is the Azure storage account and the connection string associated with it. As shown below, you simply set up the storage account, paste the connection string into the Altaro VM Backup console, and you’re connected!
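If you haven’t seen one before, an Azure storage connection string follows this general shape (the account name and key below are placeholders; copy your real string from the Access keys section of the storage account in the Azure portal):

```
DefaultEndpointsProtocol=https;AccountName=<your-account-name>;AccountKey=<your-account-key>;EndpointSuffix=core.windows.net
```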
Not only are you saving money by not having to incur the monthly cost of running a VM in Azure, you’re also using more efficient and cost-effective storage via the storage account.
When using the Altaro Offsite Server in Azure, you had to use Azure page blobs, which means you were essentially storing your data in VHDs associated with a given VM. This method is more expensive than the Azure block blobs that our new storage account integration uses. Additionally, the old method had size limitations: VHD size in Azure is limited to 1TB per VHD. Sure, you could stripe the data across multiple VHDs, but you needed larger VM sizes in order to do so, which would incur even more cost! With our new storage account integration, your only limitation is the 500TB size limit on an Azure storage account, and if you fill that up, you simply create another one and go about your day!
One common question we’ve gotten since releasing this feature to beta is whether it can take advantage of the Augmented Inline Deduplication technology we released earlier this year. The answer is yes! All data moving into an Azure storage account via our software is deduplicated before crossing the wire. This ensures that only the data that absolutely needs to go across the WAN does so. So, not only are you getting cost and data efficiency, you’re getting transfer efficiency as well! It’s a win-win!
Again, we love bringing you new and improved features within our backup software to better prepare you and your organization for the inevitable recovery situation, and it is our hope that with this feature we’ve given you one more tool you can use to keep your data and your company safe from data loss.
Join Andrew Mason (Principal Program Manager on the Nano Server team at Microsoft) and MVP Andy Syrewicze in an AMA webinar on March 16th to discuss Nano Server. Register for the webinar and get answers directly from Andrew!
Any regular readers of our blog will know that I like to put together a monthly digest of interesting links, good how-tos, and eyebrow-raising news. As 2016 winds to a close and I’ve begun gathering up stuff to share with you, I find myself looking at 2016 more holistically. Where have we been? Where are we going? The end of the year is always a good time for reflection, so that we can move into the new year with a clean slate and a new purpose.
With that in mind, I would like to take this time to cover three different areas to wrap up this year on our blog.
Where have we been? How did we start this year and what were the highlights?
Where are we going? What do I think the next 12 months are going to look like?
What certifications should you be targeting in the new year?
Where have we been?
If you think about it, 2016 has been something of a strange year for us with Hyper-V. Don’t get me wrong, Windows Server 2016 was released, and it’s awesome, but think all the way back to January. We’d just gotten done with 2015, Windows Server 2016 was in one form of technical preview or another, and we were still using 2012 R2 Hyper-V hosts (or older) to drive our production environments. We’d heard a smattering of information about some of the new upcoming features in 2016, but some of them were half-baked, broken, or just rumor.
The mentality has really shifted as well, if you think about it. For years, many on-premises administrators have been somewhat unsure of the cloud. Many worried that development of Windows Server would be stunted in favor of Microsoft throwing all its weight behind Azure. While there certainly has been a shift in focus toward developing Azure, that doesn’t mean that Windows Server 2016 was left out in the cold (it is winter, after all). On the contrary, development of Azure is what led to so many of the improvements that we have in Windows Server 2016 today!
I’ve heard it mentioned many times, in many different places, and I don’t know if it’s true or not, but it makes sense based on how 2016 was developed. Pre-2016, features flowed from Windows Server to Azure. With 2016, that flow has been reversed: new features in Windows Server 2016 are the result of needs that had to be filled in Azure. Some people like this, others don’t, but I find myself a staunch supporter of this strategy simply based on the (pardon my brevity) kick-ass features that we got when Server 2016 was released! Let’s look at a few of my favorite features.
Nano Server
When you’re building a massive datacenter environment (like Azure), you need to get as much bang for your buck as possible. Nano Server helps achieve this by providing a host operating system that consumes as few resources as possible. This way, those resources can go more efficiently toward the VMs that are hosting workloads.
Now, many will say, “Well Andy, I just don’t need that level of optimization in my environment… why should I care?” My response to this has to do with two other HUGE enhancements Nano Server delivers.
Reduced attack surface – This was the number one reason I was a huge fan of Windows Server Core for 2008 R2 and 2012/R2 Hyper-V hosts. Nano Server takes this even further. With a footprint of less than 500MB, there just isn’t much there to attack, and in today’s IT landscape, EVERYONE is responsible for security.
Fewer reboots due to patching – Everyone hates reboots, and with Nano Server the projected number of reboots per year due to patching is two. Yes, you read correctly… two reboots a year for patching with Nano Server. Now, those are just projections, and time will tell if Microsoft hits that goal, but it’s a new Microsoft we’re seeing these days, so I’m inclined to believe them at this stage.
With this in mind, I would suggest you take a good, hard look at Nano Server in the coming year. Everyone will benefit from running Hyper-V in this configuration.
Storage Spaces Direct
If you follow the blog, you’ve likely heard me talk about Storage Spaces Direct (S2D for short) many, many times. I do so because it’s one of my favorite features of Windows Server 2016. S2D provides the ability to do hyper-converged deployments, with compute and storage in the same boxes. No one likes supporting complex and problematic SANs and storage fabrics, especially for smaller two-node host clusters. S2D allows you to run two-node deployments and still maintain the N+1 status needed for clusters. The only networking needed is some nice 10Gbps+ NICs and an interlink between the two hosts for the east/west storage traffic. This is going to open the door for some very cost-effective and powerful configurations moving forward.
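To give you a feel for how simple a two-node build can be, here is a rough sketch (host, cluster, and volume names are placeholders, and this assumes two prepared Windows Server 2016 hosts with eligible local drives):

```powershell
# Form the cluster without any shared storage.
New-Cluster -Name "S2DCluster" -Node "Host01","Host02" -NoStorage

# Claim the eligible local drives on each node and build the storage pool.
Enable-ClusterStorageSpacesDirect -CimSession "S2DCluster"

# Carve a mirrored, cluster-shared volume out of the pool for VM storage.
New-Volume -FriendlyName "VMStore" -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" -Size 1TB
```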
PowerShell Direct
The automation and management implications of PowerShell Direct are astronomical. The more I use it, the more I’m amazed at what can get done with it (check out the link if you want an example). The fact is, most of us are being asked to do more with less, and unless we can start automating some of the more mundane processes that we have to do on a daily basis, most of us will never catch up. PowerShell Direct lends itself well to this, and you’ll want to utilize it heavily in the coming year once you have Windows Server 2016 in play in your environment.
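If you haven’t tried it yet, here’s a quick taste (the VM name and service are placeholders; run this from the Hyper-V host itself, as no network path into the guest is required):

```powershell
# Credentials for an account inside the guest OS.
$cred = Get-Credential

# Run a command inside the guest straight over the VMBus.
Invoke-Command -VMName "WebVM01" -Credential $cred -ScriptBlock {
    Restart-Service -Name "W3SVC"
}

# Or open an interactive session to the guest.
Enter-PSSession -VMName "WebVM01" -Credential $cred
```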
Also, to note: if you still haven’t started learning PowerShell, I HIGHLY recommend that you do. PowerShell is the future (if it’s not here already) of Windows management, and you don’t want to be left behind. You’ll find a link to some beginner’s resources near the end of this article.
Node Fairness
Those from the VMware world will know this feature as DRS (Distributed Resource Scheduler). Node Fairness is a feature that allows Hyper-V clusters to automatically load-balance VMs across all the nodes in the cluster. This prevents those situations like when you were a kid, and your parents asked you and your sibling to do a job, and you did all the work while your sibling sat on their rear end. Yeah, most of us have been there. Historically, we could do this with System Center Virtual Machine Manager, but SCVMM is expensive, and many people in smaller environments are loath to run it. Be fair to your Hyper-V hosts and enable this feature once you’re running Windows Server 2016. Not only will your hosts run better, you won’t have to hear complaints from an overworked Hyper-V host, like your parents did when your sibling wouldn’t get off their lazy butt.
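Turning it on is just a matter of two cluster properties. A minimal sketch, run from any node of a Windows Server 2016 failover cluster:

```powershell
$cluster = Get-Cluster

# 0 = disabled, 1 = balance only when a node joins,
# 2 = always (on node join and on a recurring interval).
$cluster.AutoBalancerMode = 2

# Aggressiveness of the balancing: 1 = low, 2 = medium, 3 = high.
$cluster.AutoBalancerLevel = 2
```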
Where are we going?
This small section is strictly opinion. You can agree with it or not; it’s simply where I think Hyper-V and Microsoft are going in the next year, based on where I sit. To start, let’s talk cloud.
Cloud computing represents a HUGE shift in the industry, there is no doubt about that. However, I think where the angst comes in amongst IT pros is how we should respond. Many see cloud computing as a threat to jobs and an IT administrator’s general way of life, and while I partly agree with that assessment, I’m not worried by it. Times surely are changing, but for the better. I don’t think we’re going to see on-premises deployments dry up as a result, because there are always going to be businesses that either want or require their equipment to be housed on-site. Additionally, it doesn’t always make fiscal sense to move everything into a cloud-based model.
I think the cloud needs to be embraced as an infrastructure option, not a replacement. Hybrid scenarios (any workload, anywhere, anytime) are going to become more and more the norm in 2017, not the exception. Azure and other cloud vendors provide a geographic reach that most businesses wouldn’t have otherwise, so the cloud needs to be embraced as a tool and utility for hosting certain workloads, and not seen as a usurper. Microsoft’s 2016 era of products makes hybrid deployment scenarios much simpler and even allows for unified management experiences for all your workloads across sites with some of the new Azure-based management tools. It will be interesting to watch this transition as the industry adapts and software vendors produce products to support this increasingly common hybrid model.
In regard to the on-prem suite of products (Windows Server, System Center, etc.), I’ve seen and heard reassurances that new features are going to continue to be built and supported. Don’t believe the fear mongers who say that Azure is going to swallow us whole and control the entire world. Remember, Azure runs on these technologies, and Microsoft has adopted the strategy of taking what they develop for Azure and packaging it up in Windows Server for us to use on-site. I would make the argument that without Azure, we may not have many of the awesome features that are part of the Windows Server 2016 release, so keep that in mind when you find yourself worrying about cloud computing.
Training for the New Year
So, with all of this in mind, what areas should you focus your training on for the new year? The new year is always a good time to take stock of your skill set and where it needs to be improved to adapt to a changing industry. The below would be my recommendations.
You have to continue to support your existing environments, and this includes getting them up to snuff with the new version of Windows Server and the plethora of enhancements and improvements it offers. I highly recommend getting certified in Windows Server 2016 as a part of this effort.
While not a certification in its own right, you will want to either start (if you haven’t) or continue learning how to use PowerShell for automation. Automation is going to become increasingly important as time goes on, and you need to keep up with this movement to stay relevant. Most employers hiring today require some level of PowerShell experience, so be sure to train up!
Once you’ve knocked out the above two, you’ll want to certify in Azure or some other cloud technology. As I mentioned above, I don’t see the cloud ever truly replacing everything on-prem; more likely, it will support a vast array of hybrid scenarios and deployment types. Learning how these cloud technologies work will go a long way toward teaching you how you can incorporate them into your existing IT strategy.
To wrap up, I’d like to say thank you for reading our blog this year! We work hard to provide good, solid content that is easily digestible and meaningful for your day-to-day activities. Please continue to visit us in the new year for a whole slew of Windows Server 2016 related content and much, much more!
As always, if you’d like to share your thoughts and join the discussion, feel free to use the comments section below this article!
Hello everyone! July is behind us and that means it’s time for another edition of our Hyper-V Hot Topics Series!
Again, as a reminder, this series focuses on interesting links and news regarding Hyper-V from throughout the previous month that I’ve found helpful and useful. In addition, I also like to post my Hyper-V Monday Minute recordings from the past month. For those that aren’t aware, I put on what I call the Hyper-V Monday Minute every Monday at 2:00 PM Eastern time, where I talk about some topic from the Hyper-V world. I’ve used a number of different formats for this, but have now settled on Facebook Live. If you’re interested in subscribing to that segment, you can do so by liking the Altaro Software Facebook page HERE. The idea is to serve as your one-stop-shop information source for everything to do with Hyper-V throughout the month, because I know it’s difficult to keep up with all the new developments when you’re working on IT from the trenches. With that said, let’s get started with this month’s entries!
While the title doesn’t scream it, this article basically served as the official launch announcement for Windows Server 2016. To start off, the article states that Windows Server 2016 will officially launch at the Microsoft Ignite conference in September, which is welcome news indeed! Additionally, throughout the article you can read about the various editions that will be included in the release and some pricing information. If you’re in a position where you’re looking to purchase Windows Server 2016 to be ready for the initial release, this is a good link to check out!
This next link is brought to us from fellow Cloud and Datacenter MVP Aidan Finn. I found this post particularly useful because when I talk to customers and IT pros about getting every ounce of performance out of their Hyper-V clusters, this topic always comes up. You’ll always get the best performance if a Hyper-V host owns the CSV that a VM is running on as well as actually hosting the VM itself. In this post, Aidan discusses the topic and then posts a script that will help you with this optimization. It’s a good one to put in the bookmarks in case you need it later!
This next link also features Aidan Finn, but via the RunAsRadio podcast. Aidan and RunAsRadio host Richard Campbell spend this episode talking about all things Hyper-V in the upcoming Windows Server 2016 release. The topic list includes things like rolling cluster upgrades, Nano Server, and some of the new security features baked into Hyper-V 2016. It’s certainly worth a listen on one of your lunch breaks.
With some of the new technologies that Microsoft has come out with over the past several years, the idea of VM mobility is no longer simply a host-to-host or site-to-site concept. Now we have the ability to do cloud-to-site VM migration as well. While it’s currently something of a manual process, it’s still a valuable ability to have. If you’re running VMs in Azure, you have the ability to move them to a Hyper-V server running on-premises or in a remote data center somewhere. In another article, Aidan Finn covers this process in detail.
It’s pretty clear that Linux is becoming more and more of a common workload in today’s IT world, and with good reason: it can host many critical services reliably and cheaply! Microsoft has certainly seen this trend and has added a great amount of support in Hyper-V for hosting Linux-based workloads. If you find yourself hosting a Linux-based workload, one area where you may run into issues is finding the correct version of the Linux Integration Services to run. In this article, Michael Kelley from Microsoft talks about your options for getting Integration Services running in Linux. If you find yourself hosting Linux workloads, this is a good article to look at.
Windows Server 2016 introduced some awesome new storage features. One of those is Storage Replica over stretch clusters. While it may sound like an overly complex topic, it’s actually quite simple to set up. I’ve embedded a video below by Ned Pyle from Microsoft where he configures a Storage Replica stretch cluster in 30 seconds, which is pretty impressive!
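If you’d rather follow along in PowerShell, the setup boils down to validating the topology and creating the partnership. A hedged sketch, where every server, replication group, and volume name below is a placeholder:

```powershell
# Validate the proposed topology first (writes a report to ResultPath).
Test-SRTopology -SourceComputerName "SR-SRV01" -SourceVolumeName "D:" `
    -SourceLogVolumeName "E:" -DestinationComputerName "SR-SRV02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "E:" `
    -DurationInMinutes 30 -ResultPath "C:\Temp"

# Create the replication partnership between the two servers.
New-SRPartnership -SourceComputerName "SR-SRV01" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
    -DestinationComputerName "SR-SRV02" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "E:"
```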
Hyper-V Monday Minutes from July
July 4th – How do I Get Started with Hyper-V Certification?
July 11th – Introduction to the new .VMCX File Format in Windows Server 2016
July 25th – What New Hyper-V 2016 Features Should you be Focusing your Time on?
That wraps up our links for the month! Hopefully this gives you enough content to stay informed for a bit, and we’ll be doing another segment next month around the same time! As always, if there is a hot topic or link that you feel should be included, feel free to leave it in the comments section below!
Welcome back, everyone, for Part 2 of our series on hosting an Altaro Offsite Server in Microsoft Azure! In Part 1, we covered the planning and pricing aspects of placing an Altaro Offsite Server in Microsoft Azure. While that post was light on technical how-to, this post is absolutely filled with it!
Below you’ll find a video that walks through the entire process from beginning to end. In this video we’ll be doing the following:
Provision a new 2012 R2 virtual machine inside of Azure
Configure Azure Security Group port settings
Define External DNS settings
RDP into the new server and install the Altaro Offsite Server software.
Attach a 1TB virtual data disk to the VM
Configure a new Altaro Offsite Server user account and attach the virtual disk from step 5.
Log into the on-premises instance of Altaro VM Backup and define a new offsite backup location.
Once these steps are complete, you’ll have a good starting offsite server to vault your backups to. I would like to note, however, that for the purposes of this demo, it is assumed that you have no more than 1TB’s worth of data to vault offsite. Microsoft Azure imposes a hard 1TB size limit on virtual hard disks, and while there are ways around this limitation, they are outside the scope of the basic installation and setup instructions included in this post. I will be covering those situations in the next part of this series. Outside of that, the installation instructions covered here are the same regardless.
The process is fairly straightforward, and I’ve done it in a way that doesn’t require a full understanding of Azure for this to work. However, I highly encourage you to take the time to learn how Azure functions. With that said, let’s get to the video!
As you can see, the process really isn’t that difficult once it’s broken down. If you have any follow-up questions or need clarification on anything, feel free to let me know in the comments section below, and stay tuned for more advanced scenarios coming up next in this series!