In recent years, so-called “cloud services” have become more and more interesting, and some customers are already considering going 100% cloud. There are plenty of competing cloud products out there, but is there a universal description of a cloud service? That is what I will address here.
Let’s start with the basics. Since time began (by which I mean “IT history”), we have all been running our own servers in our own datacenters with our own IT employees. As a result, every company ran its own set of servers, all configured individually, and the IT staff had to manage that growing fleet. This put a heavy and ever-increasing load on the administrators: no time for new services, and often not even time to update the existing ones to mitigate the risk of being hacked. In parallel, development teams and management expected IT to behave in an agile fashion, which was impossible under those conditions.
Defining Cloud Services
This is not a sustainable model, and it is where the cloud comes in. A cloud service is a highly optimized, standardized service that works out of the box, without individual tweaks to its configuration. Cloud services let you simply consume a service (much like drawing power from a wall socket) with a predefined and guaranteed SLA (service level agreement). If the provider breaches the SLA, you as the customer even get money back. The trade-off is that these services must run in highly standardized setups, in highly standardized datacenters that are geo-redundant around the world. In Azure’s case, these datacenters are grouped into so-called “regions”, with a minimum of three datacenters per region.
In addition to this, Microsoft runs its own backbone (rather than relying on the public internet) to provide a high quality of service; in other words, the available bandwidth meets Quality of Service (QoS) requirements.
To put it in one sentence: a cloud service is a highly standardized IT service with guaranteed SLAs, running in public datacenters and available from anywhere in the world at high quality. From the financial point of view, you generally pay per user, per service, or per some other flexible unit, and you can scale that up or down based on your current needs.
Cloud Services – your options
If you want to invest in cloud services, you will have to choose between:
A private Cloud
A public Cloud
A hybrid Cloud
A private cloud consists of IT services provided by your internal IT team, but delivered in a way you could otherwise obtain as an external service. It is hosted in your own datacenter and serves only your company or corporate group. This also means you have to provide the required SLA yourself.
A public cloud describes IT services provided by a hosting service provider with a guaranteed SLA. The services run in public datacenters and are not spun up individually just for you.
A hybrid cloud is a mixture of a public and a private cloud; in other words, “a hybrid cloud is an internet-connected private cloud with services that are consumed like public cloud services”. Hybrid cloud deployments can be especially useful when there is a reason not to move a service to a public cloud, such as:
Intellectual property needs to be stored on company-owned, dedicated systems
Highly sensitive data (e.g. in health care) is not allowed to be stored on public services
Poor connectivity in your region could cut you off from public cloud services
Responsibility for Cloud Services
If you decide to go with public cloud services, the question is always how many of your network services are you willing to move to the public cloud?
The general answer should be: the more services you can transfer to the cloud, the better the result. However, even the best-laid plans can be at the mercy of your internet connectivity, which can cut you off from these services if not planned for. Additionally, industry regulations have made a 100% cloud footprint difficult for some organizations. The hybrid solution is therefore the most practical option for the majority of business applications.
Hybrid Cloud Scenarios
These reasons drove Microsoft’s decision to provide Azure for your own datacenter as a packaged solution, based on the same technology as Azure itself. Azure is built around the concepts of REST endpoints and ARM templates (JSON files with declarative definitions for services). Additionally, Microsoft decided that this on-premises Azure solution should not provide only IaaS; it should be able to run PaaS, too, just like the public Azure cloud.
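To make the ARM template concept concrete, here is a minimal sketch of what such a declarative definition looks like. The resource name, location, and SKU below are hypothetical examples; a real template would typically also define parameters, variables, and outputs:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "mydemostorage001",
      "apiVersion": "2016-01-01",
      "location": "westeurope",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```

The template does not say *how* to create the storage account; it only declares *what* should exist, and the ARM REST endpoint (in Azure or Azure Stack) works out the rest.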
This basically means that for a service to become available in this new on-prem “Azure Stack”, it must already be generally available (GA) in public Azure.
This solution is called “Azure Stack” and comes on certified hardware only. This ensures that you as the customer get the performance, reliability, and scalability you expect from Azure with Azure Stack, too.
As of today, the following hardware OEMs are part of this initiative:
The following services are available with Azure Stack today, but as it is an agile product from Microsoft, we can expect many interesting updates in the future.
With Azure Stack, Microsoft provides a simple way to spread services between on-premises and the public cloud. Possible scenarios include:
Disconnected scenarios (Azure Stack in planes or ships)
Azure Stack as your development environment for Azure
Low latency computing
Hosting Platform for MSPs
And many more
As we all know, IT today is hybrid in most industries all over the world. With the combination of Azure Stack and Azure, you have the chance to meet those requirements and set up a unified cloud model for all of your company’s services.
As you have seen, Azure Stack brings public Azure to your datacenter with the same administration and configuration models you already know from public Azure. There is no need to learn everything twice: training costs go down, and the standardization adds flexibility and puts less load on the local IT admins, giving them time to work on new, higher-quality solutions. Cloud-style licensing also reduces complexity, since everything is simply based on a usage model. You can even link your Azure Stack licenses directly to an Azure subscription.
As hybrid cloud services will be the model for the next ten years or more, Azure and Azure Stack together can make your IT environment more successful than it has ever been.
Are you finding the GUI of Azure Portal difficult to work with?
You’re not alone, and it’s very easy to get lost. There are so many changes and updates made every day, and the Azure overview blades can be pretty clunky to traverse. However, with Azure Cloud Shell, we can use PowerShell or Bash to manage Azure resources instead of having to click around in the GUI.
So what is Azure Cloud Shell? It is a web-based shell that can be accessed via a web browser. It will automatically authenticate with your Azure sign-on credentials and allow you to manage all the Azure resources that your account has access to. This eliminates the need to load Azure modules on workstations. So for some situations where developers or IT Pros require shell access to their Azure resources, Azure Cloud Shell can be a very useful solution, as they won’t have to remote into “management” nodes that have the Azure PowerShell modules installed on them.
How Azure Cloud Shell Works
As of right now, Azure Cloud Shell gives users two different environments to use. One is a Bash environment, which is basically a terminal connection to a Linux VM in Azure that gets spun up. This VM is free of charge. The second environment available is a PowerShell environment, which runs Windows PowerShell on a Windows Server Core VM. You will need to have some storage provisioned on your Azure account in order to create the $home directory. This acts as the persistent storage for the console session and allows users to upload scripts to run on the console.
To get started using Azure Cloud Shell, go to shell.azure.com. You will be prompted to sign in with your Azure account credentials:
Now we have some options. We can select which environment we prefer to run in. We can run in a Bash shell or we can use PowerShell. Pick whichever one you’re more comfortable with. For this example, I’ve selected PowerShell:
Next, we get a prompt for storage, since we haven’t configured the shell settings with this account yet. Simply select the “Create Now” button to go ahead and have Azure create a new resource group, or select “Show Advanced Settings” to configure those settings to your preference:
Once the storage is provisioned, we will wait a little bit for the console to finish loading, and then the shell should be ready for us to use!
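Once the prompt appears, you can start managing resources right away. As a quick illustration, a first sanity check might look something like this (the cmdlet names assume the AzureRM module that Cloud Shell ships with at the time of writing, and the output will obviously depend on your subscription):

```powershell
# Show the subscriptions your signed-in account can see
Get-AzureRmSubscription

# List the resource groups in the current subscription
Get-AzureRmResourceGroup | Select-Object ResourceGroupName, Location

# List all VMs with their current power state
Get-AzureRmVM -Status | Select-Object Name, ResourceGroupName, PowerState
```

No module installation and no Login-AzureRmAccount step is needed here, because Cloud Shell has already authenticated you with your portal sign-in.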
In the upper left corner, we have all of the various controls for the console. We can reset the console, start a new session, switch to Bash, and upload files to our cloud drive:
For an example, I uploaded an activate.bat script file to my cloud drive. In order to access it we simply reference $home and specify our CloudDrive:
Now I can see my script:
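In text form, that navigation looks roughly like this (activate.bat is just my example file; yours will differ):

```powershell
# The clouddrive file share is mounted under your home directory
cd $HOME\CloudDrive

# List the files you have uploaded
Get-ChildItem

# Reference the uploaded script directly
Get-Item .\activate.bat
```

Anything you place in this CloudDrive folder persists between console sessions, since it is backed by the Azure file share provisioned earlier.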
This will allow you to deploy your custom PowerShell scripts and modules in Azure from any device (assuming you have access to a web browser, of course). Pretty neat!
Upcoming Changes and Things to Note
On May 21st, Microsoft announced that they will be moving to a Linux platform for both the PowerShell and Bash experiences. How is this possible? Essentially, they will be using a Linux container to host the shell. PowerShell Core 6 will become the default experience, and they claim that startup time will be much faster than in previous versions because of the Linux container. To switch from PowerShell to Bash in the console, simply type “bash”; to get back to PowerShell Core, just type “pwsh”.
Microsoft is planning on having “persistent settings” for Git and SSH tools so that the settings for these tools are saved to the CloudDrive and users won’t have to hassle with them all the time.
There is some ongoing pain with modules currently. Microsoft is still working on porting modules to .NET Core (for use with PowerShell Core), and there will be a transition period while this happens. They are prioritizing the most commonly used modules first. In the meantime, there is one workaround that many people seem to forget: implicit remoting. This is the process of taking a module that is already installed on another endpoint and importing it into your local PowerShell session, allowing you to call that module’s cmdlets and have them execute remotely on the node where the module is installed. It can be very useful until more modules are converted over to .NET Core.
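As a sketch of how implicit remoting works (the computer name and module below are placeholders; the remote machine must have PowerShell remoting enabled and the module installed):

```powershell
# Create a remoting session to a node that already has the module
$session = New-PSSession -ComputerName 'MGMT01'

# Import the remote module into the local session; its cmdlets become
# local proxy functions that actually execute on MGMT01
Import-PSSession -Session $session -Module ActiveDirectory

# This now looks like a local call but runs remotely on MGMT01
Get-ADUser -Filter * | Select-Object -First 5
```

The imported proxy functions disappear when the session ends, so this is a stopgap rather than a permanent installation.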
Want to Learn More About Microsoft Cloud Services?
The development pace of Azure is one of the most aggressive in the market today, and as you can see Azure Cloud Shell is constantly being updated and improved over a short period of time. In the near future, it will most likely be one of the more commonly used methods for interacting with Azure resources. It provides Azure customers with a seamless way of managing and automating their Azure resources without having to authenticate over and over again or install extra snap-ins and modules; and will continually shape the way we do IT today.
What are your thoughts regarding the Azure Cloud Shell? Have you used it yet? What are your initial thoughts? Let us know in the comments section below!
Do you have an interest in more Azure goodness? Are you wondering how to get started with the cloud and move some existing resources into Microsoft Azure? We have a panel-style webinar coming up in June that addresses those questions. Join Andy Syrewicze, Didier Van Hoye, and Thomas Maurer for a crash course on how you can plan your journey effectively and smoothly, utilizing the exciting cloud technologies coming out of Microsoft, including:
Windows Server 2019 and the Software-Defined Datacenter
New Management Experiences for Infrastructure with Windows Admin Center
Hosting an Enterprise Grade Cloud in your datacenter with Azure Stack
Taking your first steps into the public cloud with Azure IaaS
Automating deployments has quickly become the norm for IT professionals servicing organizations of almost any size. But automation can go far beyond just deploying VMs: it can automatically configure Active Directory inside a VM, file services, DNS, and more, providing a boost to productivity, accuracy, and workload management.
Earlier this year I had the privilege of speaking for the MVP Days Virtual Conference. For those that aren’t aware, the MVP Days Virtual Conference is a monthly event hosted by Dave and Cristal Kawula showcasing the skills and know-how of individuals in the Microsoft MVP Program. The idea being that Microsoft MVPs are a great resource of knowledge for IT Pros, and this virtual conference gives them a platform to share that knowledge on a monthly basis.
The following video is a recording of my presentation “3 Tools for Automating Deployments in the Era of the Modern Hybrid Cloud”.
Deployments in the Hybrid Cloud
Workloads and IT infrastructures are more complex and spread out than ever. It used to be that IT Pros had little to worry about outside the confines of their network, but those days are long over. Today a new workload is just as likely to reside in the public cloud as on-premises. Cloud computing technologies like Microsoft Azure have given IT Pros capabilities that were previously unheard of outside the most complex enterprise datacenters. The purview of the IT Pro no longer stops at the four walls of their network; it extends to wherever the workload lives at a given time.
With these new innovations and technologies comes the ability to mass deploy applications and services either on-premises or in the public cloud in a very easy way, but what happens when you need to automate the deployment of workloads and services that stretch from on-premises to the public cloud? Many IT Pros struggle to automate deployments that stretch across those boundaries for a true hybrid cloud deployment.
In this demo-heavy session, you’ll discover a number of tools to assist you with your deployment operations. Learn:
How PowerShell ties-together and executes your deployment strategy end-to-end
How PowerShell Direct is used on-premises with Hyper-V to automate more than ever before
How Azure IaaS is used to effortlessly extend your automated deployments to the cloud
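As a small taste of the PowerShell Direct technique covered in the session (the VM name and credential here are placeholders, and the command must run on the Hyper-V host itself):

```powershell
# PowerShell Direct talks to the guest over the VMBus,
# so no network connectivity to the VM is required
$cred = Get-Credential

# Run a configuration step inside the guest directly from the host
Invoke-Command -VMName 'APP01' -Credential $cred -ScriptBlock {
    Install-WindowsFeature -Name DNS -IncludeManagementTools
}
```

Because it bypasses the network entirely, PowerShell Direct is handy for configuring a freshly deployed VM before its networking is even set up.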
The Video: 3 Tools for Automating Deployments in the Era of the Modern Hybrid Cloud
Be sure to share your thoughts on the session with us in the comments section below! I’m especially interested if there are any 3rd party tools or other methods you’ve used in these kinds of deployment situations. I’d also like to hear about any challenges you’ve encountered in doing operations like this. We’ll be sure to take that info and put together some relevant posts to assist.
Microsoft has made major changes to the way that they build and release their operating systems. The new Windows Server “Semi-Annual Channel” (SAC) marks a substantial departure from the familiar release pattern that Microsoft has established. The change has pleased some people, upset many, and confused even more. With all the flash of new features, it’s easy to miss the finer points — specifically, how you, your organization, and your issues fit into the new model.
The Traditional Microsoft OS Release Model
Traditionally, Microsoft would work on pre-release builds internally and with a small group of customers, partners, and other interested stakeholders (such as MVPs). Then, they would finesse the builds with open betas (usually called “Release Candidates”). Then, there would be an RTM (release-to-manufacturing) event followed by GA (general availability). The release would then be considered “current”. It would enjoy regular updates including service packs and feature upgrades for a few years, then it would go into “extended support” where it would only get stability and security updates. While customers purchased and worked with the “current” version, work on the next version would begin in parallel.
Not every version followed this model exactly, but all of them were similar. The most recent Windows Server operating system to employ this model is Windows Server 2016.
Changes to the Model with Windows 10/Server 2016
The “Windows Insider Program” was the first major change to Microsoft’s OS build and release model. Initially, it was most similar to the “release candidate” phase of earlier versions. Anyone could get in and gain access to Windows 10 builds before Windows 10 could even be purchased. However, it deviated from the RC program in two major ways:
The Windows Insider Program includes an entire community.
The Windows Insider Program continues to provide builds after Windows 10 GA
The Windows Insider Community
Most of us probably began our journey to Windows Insider by clicking an option in the Windows 10 update interface. However, you can also sign up using the dedicated Windows Insider web page. You get access to a dedicated forum. And, of course, you’ll get e-mail notifications from the program team. You can tell Microsoft what you think about your build using the Feedback Hub. That applet is not exclusive to Insiders, but they’ll know if you’re talking about an Insider build or a GA build.
Ongoing Windows Insider Builds
I expect that most Insiders prize access to new builds of Windows 10 above the other perks of the program. The Windows 10 Insider Program allows you to join one of multiple “rings” (one per joined Windows 10 installation). The ring that an installation belongs to dictates how close it will be to the “cutting edge”. You can read up on these rings and what they mean on the Insider site.
The most important thing about Windows Insider builds — and the reason that I brought them up at all in this article — is that they are not considered production-ready. The fast ring builds will definitely have problems. The preview release builds will likely have problems. You’re not going to get help for those problems outside of the Insider community, and any fix will almost certainly include the term “wait for the next build” (or the next… or the one after… or some indeterminate future build). I suspect that most software vendors will be… reluctant… to officially support any of their products on an Insider build.
Windows Server Insider Program
The Windows Server Insider Program serves essentially the same purpose as the Windows 10 Insider Program, but for the server operating system. The sign-up process is a bit different, as it goes through the Windows Insider Program for Business site. The major difference is the absence of any “rings”. Only one current Windows Server Insider build exists at any given time.
Introducing the Windows Server Semi-Annual Channel
I have no idea what you’ve already read, so I’m going to assume that you haven’t read anything. But, I want to start off with some very important points that I think others gloss over or miss entirely:
Releases in the Windows Server Semi-Annual Channel are not Windows Server 2016! Windows Server 2016 belongs to the Long-Term Servicing Channel (LTSC). The current SAC is simply titled “Windows Server, version 1709”.
You cannot upgrade from Windows Server 2016 to the Semi-Annual Channel. For all I know, that might change at some point. Today, you can only switch between LTSC and SAC via a complete wipe-and-reinstall.
On-premises Semi-Annual Channel builds require Software Assurance (I’d like to take this opportunity to point out: so does Nano). I haven’t been in the reseller business for a while so I don’t know the current status, but I was never able to get Software Assurance added to an existing license. It was always necessary to purchase it at the same time as its base volume Windows Server license. I don’t know of any way to get Software Assurance with an OEM build. All of these things may have changed. Talk to your reseller. Ask questions. Do your research. Do not blindly assume that you are eligible to use an SAC build.
The license for Windows Server is interchangeable between LTSC and SAC. Meaning that, if you are a Software Assurance customer, you’ll be able to download/use either product per license count (but not both; 1 license count = 1 license for LTSC or 1 license for SAC).
The keys for Windows Server are not interchangeable between LTSC and SAC. I’m not yet sure how this will work out for Automatic Virtual Machine Activation. I did try adding the WS2016 AVMA key to a WS1709 guest and it did not like that one bit.
SAC does not offer the Desktop Experience. Meaning, there is no GUI. There is no way to install a GUI. You don’t get a GUI. You get only Core.
Any given SAC build might or might not have the same available roles and features as the previous SAC build. Case in point: Windows Server, version 1709 does not support Storage Spaces Direct.
SAC builds are available in Azure.
SAC builds are supported for production workloads. SAC follows the Windows Server Insider builds, but SAC is not an Insider build.
SAC builds will only be supported for 18 months. You can continue using a specific SAC build after that period, but you can’t get support for it.
SAC builds should release roughly every six months.
SAC builds will be numbered for their build month. Ex: 1709 = “2017 September (09)”.
SAC ships in Standard and Datacenter flavors only.
The Semi-Annual Channel is Not for Everyone
Lots of people have lots of complaints about the Windows Server Semi-Annual Channel. I won’t judge the reasonableness or validity of any of them. However, I think many of these complaints are based on a misconception. People have become accustomed to a particular release behavior, so they expected SAC to serve as vNext of Windows Server 2016. Looking at Microsoft’s various messages on the topic, I don’t feel they did a very good job of explaining the divergence. So, if that’s how you look at it, it’s completely understandable that you’d feel like WS1709 slapped you in the face.
However, it looks different when you realize that WS1709 is not intended as a linear continuation. vNext of Windows Server 2016 will be another release in the LTSC cycle. It will presumably arrive sometime late next year or early the following year, and it will presumably be named Windows Server 2018 or Windows Server 2019. Unless there are other big changes in our future, it will have the Desktop Experience and at least the non-deprecated roles and features that you currently have available in WS2016. Basically, if you just follow the traditional release model, you can ignore the existence of the SAC releases.
Some feature updates in SAC will also appear in LTSC updates. As an example, both WS1709 and concurrent WS2016 patches introduce the ability for containers to use persistent data volumes on Cluster Shared Volumes.
Who Benefits from the Semi-Annual Channel?
If SAC is not meant for everyone, then who should use it? Let’s get one thing out of the way: no organization will use SAC for everything. The LTSC will always have a place. Do not feel like you’re going to be left behind if you stick with the LTSC.
Basically, you need to have something akin to a mission-critical level of interest in one or more of these topics:
Containers and related technologies (Docker, Kubernetes, etc.)
High-performance networking. I’m not talking about the “my test file copy only goes 45Mbps” flavor of “high performance” networking, but the “processing TCP packets between the real-time interface and its OLTP database causes discernible delays for my 150,000 users” flavor.
Multiple large Hyper-V clusters
Read the “What’s New” article for yourself. If you can’t find any must-have-yesterdays in that list, then don’t worry that you might have to wait twelve to eighteen months for vNext of LTSC to get them.
Who Benefits from the Long-Term Servicing Channel?
As I said, the LTSC isn’t going anywhere. Not only that, we will all continue to use more LTSC deployments than SAC deployments.
Choose LTSC for:
Stability. Even though SAC will be production-ready, the lead time between initial conception and first deployment will be much shorter. The wheel for new SAC features will be blocky.
Predictability: The absence of S2D in WS1709 caught almost everyone by surprise. That sort of thing won’t happen with LTSC. They’ll deprecate features first to give you at least one version’s worth of fair warning. (Note: S2D will return; it’s not going away).
Third-party applications: We all have vendors that are still unsure about WS2008. They’re certainly not going to sign off on SAC builds.
Line-of-business applications: Whether third-party or Microsoft, the big app server that holds your organization up doesn’t need to be upgraded twice each year.
What Does SAC Mean for Hyper-V?
The above deals with Windows Server Semi-Annual Channel in a general sense. Since this is a Hyper-V blog, I can’t leave without talking about what SAC means for Hyper-V.
For one thing, SAC does not have a Hyper-V Server distribution. I haven’t heard of any long-term plans, so the safest bet is to assume that future releases of Hyper-V Server will coincide with LTSC releases. That said, Windows Server, version 1709 does bring new Hyper-V capabilities:
Storage of VMs in storage-class memory (non-volatile RAM)
Splitting of “guest state” information out of the .vmrs file into its own .vmgs file
Support for running the host guardian service as a virtual machine
Support for Shielded Linux VMs
Virtual network encryption
Looking at that list, “Shielded Linux VMs” seems to have the most appeal to a small- or medium-sized organization. As I understand it, that’s not a feature so much as a support statement. Either way, I can shield a Linux VM on my fully-patched Windows Server 2016 build 1607 (LTSC) system.
As for the rest of the features, they will find the widest adoption in larger, more involved Hyper-V installations. I obviously can’t speak for everyone, but it seems to me that anyone that needs those features today won’t have any problems accepting the terms that go along with the switch to SAC.
For the rest of us, Hyper-V in LTSC has plenty to offer.
What to Watch Out For
Even though I don’t see any serious problems that will result from sticking with the LTSC, I don’t think this SKU split will be entirely painless.
For one thing, the general confusion over “Windows Server 2016” vs. “Windows Server, version 1709” includes a lot of technology authors. I see a great many articles with titles that include “Windows Server 2016 build 1709”. So, when you’re looking for help, you’re going to need to be on your toes. I think the limited appeal of the new features will help to mitigate that somewhat. Still, if you’re going to be writing, please keep the distinction in mind.
For another, a lot of technology writers (including those responsible for documentation) work only with the newest, hottest tech. They might not even think to point out that one feature or another belongs only to SAC. I think that the smaller target audience for the new features will keep this problem under control, as well.
The Future of LTSC/SAC
All things change. Microsoft might rethink one or both of these release models. Personally, I think they’ve made a good decision with these changes. Larger customers will be able to sit out on the bleeding edge and absorb all the problems that come with early adoption. By the time these features roll into LTSC, they’ll have undergone solid vetting cycles on someone else’s production systems. Customers in LTSC will benefit from the pain of others. That might even entice them to adopt newer releases earlier.
Most importantly, effectively nothing changes for anyone that sticks with the traditional regular release cycle. Windows Server Semi-Annual Channel offers an alternative option, not a required replacement.
What does the future hold for Hyper-V and its users? Technology moves fast, so should Hyper-V admins be concerned about the future? Well, we don’t have a crystal ball to tell us what the future holds, but we do have three industry experts and Microsoft MVPs to tell you what to expect. Following our hugely popular panel webinar, 3 Emerging Technologies that will Change the Way you use Hyper-V, we’ve decided to bring together all of the questions asked during both sessions (we hold two separate webinar sessions on the same topic to accommodate our European and American audiences) into one article, with some extended answers addressing what’s around the corner for Hyper-V and related technologies.
Let’s get started!
Question 1: Do you think IT Security is going to change as more and more workloads move into the cloud?
Answer: Absolutely! As long as we’re working with connected systems, no matter where they are located, we will always have to worry about security. One common misconception, though, is that a workload housed inside Microsoft Azure is inherently less secure. Public cloud platforms have been painstakingly built from the ground up with the help of industry security experts. You’ll find that if best practices are followed, along with the rules of least access and just-in-time administration, the public cloud is a highly secure platform.
Question 2: Do you see any movement to establish a global “law” of data security/restrictions that are not threatened by local laws (like the patriot act)?
Answer: Until all the countries of the world are on the same page, I just don’t see this happening. The US treats data privacy very differently from the EU, unfortunately. The upcoming General Data Protection Regulation (GDPR), coming in May 2018, is a step in the right direction, but it only applies to the EU and to data crossing the EU’s boundaries. It will certainly affect US companies and organizations, but nothing similar is in the works there.
Question 3: In the SMB Space, where a customer may only have a single MS Essentials server and use Office 365, do you feel that this is still something that should move to the cloud?
Answer: I think the answer to that question depends greatly on the customer and the use case. As Didier, Thomas and I discussed in the webinar, the cloud is a tool, and you have to evaluate for each case, whether it makes sense or not to run that workload in the cloud. If for that particular customer, they could benefit from those services living in the cloud with little downside, then it may be a great fit. Again, it has to make sense, technically, fiscally, and operationally, before you can consider doing so.
Question 4: What exactly is a Container?
Answer: While not the same thing at all, it’s often easiest to think of a container as a kind of ultra-stripped-down VM. A container holds an ultra-slim OS image (in the case of Nano Server, 50–60 MB), any supporting code framework such as .NET, and then whatever application you want to run in the container. Containers are not the same as VMs, because Windows containers all share the kernel of the underlying host OS. However, if you require further isolation, you can use Hyper-V containers, which run a container inside an optimized VM so you can take advantage of Hyper-V’s isolation capabilities.
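To make that concrete, a Windows container image is typically described in a small Dockerfile. This sketch assumes the microsoft/nanoserver base image current at the time of writing and a hypothetical app.exe as the workload:

```dockerfile
# Start from the ultra-slim Nano Server base image
FROM microsoft/nanoserver

# Copy the application into the image
COPY app.exe C:/app/app.exe

# Run the application when the container starts
CMD ["C:/app/app.exe"]
```

Building this with docker build produces an image that shares the host kernel when run as a Windows container, or runs inside an optimized utility VM when started with Hyper-V isolation.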
Question 5: On-Premises Computing is Considered to be a “cloud” now too correct?
Answer: That is correct! In my view, the term cloud doesn’t refer to a particular place, but to the new technologies and software-defined methods that are taking over datacenters today. So you can refer to your infrastructure on-prem as “private cloud”, and anything like Azure or AWS as “Public Cloud”. Then on top of that anything that uses both is referred to as “Hybrid Cloud”.
Question 6: What happens when my client goes to the cloud and they lose their internet service for 2 weeks?
Answer: The cloud, just like any technology solution, has shortcomings that can be overcome if planned for properly. If you have a mission-critical service you’d like to host in the cloud, then you’ll want to research ways to make that workload highly available. That could include a secondary internet connection from a different provider or some way to make the workload accessible from the on-prem location if needed. Regardless of where the workload is, you need to plan for eventualities like this.
Question 7: What Happened to Azure Pack?
Answer: Azure Pack is still around and usable; it will just be replaced by Azure Stack at some point. In the meantime, there are integrations available that allow you to manage both solutions from your Azure Stack management utility.
Question 8: What about the cost of Azure Stack? What’s the entry point?
Answer: This is something of a difficult question. Figures I’ve heard range from $75k to $250k, depending on the vendor and the load-out. You’ll want to contact your preferred hardware vendor for more information.
Question 9: We’re a hosting company, is it possible to achieve high levels of availability with Azure Stack?
Answer: Just like with any technology solution, you can achieve the coveted four nines of availability. The question is how much money you want to spend. You could do so with Azure Stack and the correct supporting infrastructure. However, keep in mind that your SLA is only as good as your supporting vendors’. For example, if you sell four nines as an SLA and the internet provider for your datacenter can only provide 99%, then you’ve already broken your SLA.
Question 10: For Smaller businesses running Azure Stack, should software vendors assume these businesses will look to purchase traditionally on-prem software solutions that are compatible with this? My company’s solution does not completely make sense for the public cloud, but this could bridge the gap.
Answer: I think for most SMBs, Azure Stack will be fiscally out of reach. In Azure Stack you’re really paying for a “cloud platform”, and for most SMBs it will make more sense to take advantage of public Azure if those types of features are needed. That said, to answer your question, there are already vendors doing this. Anything that will deploy on public Azure using ARM will also deploy easily on Azure Stack.
Question 11: In Azure Stack, can I use any backup software and backup the VM to remote NAS storage or to AWS?
Answer: At release, there is no support for third-party backup solutions in Azure Stack. Right now there is a built-in flat-file backup and that is it. I suspect that it will be opened up to third-party vendors at some point, and workloads will likely be protected in much the same way as public Azure resources.
Question 12: How would a lot of these [Azure Stack] services be applied to the K-12 education market? There are lots of laws that require data to be stored in the same country. Yet providers often host in a different country.
Answer: If you wanted to leverage a provider’s Azure Stack somewhere, you would likely have to find one that actually hosts it in the geographical region you’re required to operate in. Many hosters will provide written proof of where the workload is hosted for these types of situations.
Question 13: I’m planning to move to public Azure, how many Azure cloud Instances would I need?
Answer: There is no hard-and-fast answer for this. It depends on the number of VMs/applications and whether you run them in Azure as VMs or in Azure’s PaaS fabric. The Azure Pricing Calculator will give you an idea of VM sizes and what services are available.
Watch the webinar
Did you miss the webinar when it first went out? Has this blog post instilled a desire to rewatch the session? Have no fear, we have set up an on-demand version for you to watch right now! Simply click on the link below to go to the on-demand webinar page where you can watch a recording of the webinar for free.
If you have a question on the future of Hyper-V or any of the 3 emerging technologies that were discussed in the webinar, just post in the comments below and we will get straight back to you. Furthermore, if you asked a question during the webinar that you don’t see here, by all means let us know in the comments section below and we will be sure to answer it here. Any follow-up questions are also very welcome, so feel free to let us know about those as well!
The I.T. landscape changes incredibly quickly (if you know a faster-changing industry, I’d love to hear about it!). I.T. professionals need to know what’s coming around the corner to stay ahead of the game or risk being left behind. Well, we don’t want that to happen to you, so we’ve run down what we feel are the three most important emerging technologies that will drastically change the Hyper-V landscape.
Continued Adoption of Public Cloud Platforms – It’s becoming clear that the public cloud is continuing to gain steam. It’s not just one vendor but several, and the cloud continues to pull workloads from on-premise environments. Many people were keen to wait out this “cloud thing”, but it has become quite clear that it’s here to stay. Capabilities in online platforms such as Microsoft Azure and Amazon AWS have increasingly made it easier, more cost-effective, and desirable to put workloads in the public cloud. These cloud platforms can often provide services that most customers don’t have available on-premise, and this, paired with several other things that we’ll talk about in the webinar, is leading to increased adoption of these platforms over on-premise installations.
Azure Stack and the Complete Abstraction of Hyper-V under-the-hood – With some of the latest news and release information out of Microsoft regarding their new Microsoft Azure Stack (MAS), things have taken an interesting turn for Hyper-V. As on-premise administrators have always been used to having direct access to the hypervisor, they may be surprised to learn that Hyper-V is buried so far under the hood in MAS that you can’t even access it. That’s right: the hypervisor has become so simplified and automated that there is no need to access it directly, primarily because MAS follows the same usage and management guidelines as Microsoft Azure. This will bother a lot of administrators, but it’s becoming the world we live in. As such, we’ll be talking about this extensively during the webinar.
Containers and Microservices and why they are a game-changer – Containers have become one of the new buzzwords in the industry. If you’re not aware, you can think of a container as similar to a VM, but fundamentally different. Whereas in a VM you’re virtualizing the OS and everything on top of it, with containers you’re only virtualizing the application. Much of the underlying support functionality is handled by the container host, as opposed to an OS built into a VM. For a long time it seemed that containers were going to be primarily a developer thing, but as the line between IT Pro and Dev continues to blur, containers can no longer be ignored by IT Pros, and we’ll be talking about that extensively during our panel discussion.
As you can see there is much to talk about, and many will be wondering how this affects them. You’re probably asking yourself questions like: “What new skills should IT Pros be learning to stay relevant?”, “Are hypervisors becoming irrelevant?”, “Will containers replace virtual machines?”, “Is the Cloud here to stay?”, “Is there still a place for Windows Server in the world?”, “What can I do now to stay relevant and what skills do I need to learn to future-proof my career?” Yep, these developments certainly raise a lot of issues which is why we decided to take this topic further.
Curious to know more? Join our Live Webinar!
As you know, we love to put on webinars here at Altaro, as we find them a critical tool for getting information about new technologies and features to our viewership. We’ve always stuck to the same basic educational format and it’s worked well over the years. However, we’ve always wanted to try something a bit different. There certainly isn’t anything wrong with an educational format, but with some topics, it’s often best to just have a conversation. This idea is at the core of our next webinar, along with some critical changes that are occurring within our industry.
For the first time ever, Altaro will be putting on a panel-style webinar with not 1 or 2, but 3 Microsoft Cloud and Datacenter MVPs. Andy Syrewicze, Didier Van Hoye, and Thomas Maurer will all be hosting this webinar as they talk about some major changes occurring in the industry today and take your questions and feedback. These are things that will affect the way you use and consume Hyper-V.
As always, we will be hosting the webinar twice to accommodate those on both sides of the Atlantic. Both live sessions will have the same content, but the respective audiences may have region-specific questions, so it is recommended to stick to your regional time slot; feel free to join the other session if you can’t make yours.
Also remember that this panel webinar isn’t just for our 3 speakers to share their opinions! This is a perfect chance to make your voice and opinions heard as well. We’ll be sure to provide every opportunity for you to ask questions and weigh in on the discussion, so bring your questions and comments!
Additionally, if there are any questions you’d like to address ahead of time, be sure to use the comments below to do so!
Last week saw us close the door on Microsoft Ignite 2017, and while the conference came and went in a blur, there was no lack of information or amazing reveals from Microsoft. While this conference serves as a great way to stay informed on all the new things that Microsoft is working on, I also find that it is a good way to get a sense of the company’s overall direction. With that in mind, I wanted to not only talk about some of my favorite reveals from the week but also discuss my take on Microsoft’s overall direction.
My take on the week from an Infrastructure Engineering Perspective
To put things simply: things are changing, and they’re changing in a big way. I’ve had a gut feeling stirring for some time that the way we work with VMs and virtualization was changing, and the week of Ignite was a major confirmation of that. That’s not to mention the continued shift from the on-premise model we’re used to, to the new cloud (public, private, and hybrid) model that things are moving to.
It’s very clear that Microsoft is adopting what I would call the “Azure-everywhere” approach. Sure, you’ve always been able to consume Azure using what Microsoft has publicly available, but things really change when Azure Stack is put into the mix. Microsoft Azure Stack (MAS) is officially on the market now, and the idea of having MAS in datacenters around the world is an interesting prospect. What I find so interesting about it is the fact that managing MAS onsite is identical to managing Azure: you use Azure Resource Manager and the same collection of tools to manage both. Pair that with the fact that Hyper-V is so abstracted and under the hood in MAS that you can’t even see it, and you’ve got a recipe for major day-to-day changes for infrastructure administrators.
Yes, we’ve still got Windows Server 2016 and the newly announced Honolulu management utility, but if I look out 5 or even 10 years, I’m not sure I see us working with Windows Server in the way that we do today. I don’t think VM usage will be as prevalent then as it is now, either. After last week, I firmly believe that containers will be the “new virtual machine”. I think VMs will stay around for legacy workloads and for workloads that require additional layers of isolation, but after seeing containers in action last week, I’m all in on that usage model.
We used to see VMs as an amazing cost-reducing technology, and they were for a long time. However, at Ignite I saw containers do to VMs what VMs did to physical servers. I attended a session on moving workloads to a container-based model, where MetLife was on stage talking about moving some of their infrastructure to containers. In doing so they achieved:
- 70% reduction in the number of VMs in the environment
- 67% reduction in needed CPU cores
- 66% reduction in overall cost of ownership
Those are amazing numbers that nobody can ignore. Given this level of success with containers, I see the industry moving to that deployment model from VMs over the next several years. As much as it pains me to say it, virtual machines are starting to look very “legacy”, and we all need to adjust our skill sets accordingly.
As you know, Ignite is that time of year when Microsoft makes some fairly large announcements, and below I’ve compiled a list of some of my favorites. This is by no means a comprehensive list, but I feel these represent what our readers would find most interesting. Don’t agree? That’s fine! Just let me know what you think were the most important announcements in the comments. Let’s get started.
8. New Azure Exams and Certifications!
With new technologies come new things to learn, and as such there are 3 new exams on the market today for Azure technologies.
For Azure Stack Operators – Exam 537: Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack
For Azure Solutions Architects – Exam 539: Managing Linux Workloads on Azure
For Azure DevOps – Exam 538: Implementing Microsoft Azure DevOps Solutions
7. SQL Server 2017
Normally I wouldn’t make much of a fuss about SQL Server, as I’m not much of a SQL guy myself, but Microsoft did something amazing with this release. SQL Server 2017 will run on Windows, on Linux, and inside of Docker containers. Yes, you read that correctly. SQL Server 2017 will run on Linux and inside of Docker containers, which opens up a whole new avenue for providing SQL workloads. Exciting times indeed!
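As a quick illustration of how simple this makes standing up a SQL instance, the command below pulls and starts SQL Server 2017 in a Linux container. Treat it as a sketch: the image name shown (`microsoft/mssql-server-linux`) was the Docker Hub name at the time of writing and may have moved since, and the password is a placeholder you should replace.

```powershell
# Start SQL Server 2017 in a Linux container: accept the EULA,
# set the SA password, expose the default SQL port 1433, run detached
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" `
    -p 1433:1433 -d microsoft/mssql-server-linux:2017-latest
```

Once it’s running, you can point SQL Server Management Studio or `sqlcmd` at `localhost,1433` just as you would at a conventional installation.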
6. Patch Management from the Azure Portal
Ever wanted to have WSUS available from the Azure portal? Now you have it. You can easily view and deploy patches for your Azure-based workloads directly from the Azure portal. This includes Linux VMs as well, which is great news as more and more admins find themselves managing Linux workloads these days!
5. PowerShell now Available in Azure CLI
When Azure CLI was announced and released, many people were taken aback at the lack of PowerShell support. This was done for a number of reasons that I won’t get into in this article, but regardless, it has been added in now. It is now possible with Azure CLI to deploy a VM with a single PowerShell cmdlet and more. So, get those scripts ready!
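As a hedged example of the simplified deployment syntax (cmdlet names reflect the AzureRM module of this era, and “TestVM” is a placeholder), a basic VM deployment can now look like this:

```powershell
# Sign in to the subscription first
Login-AzureRmAccount

# Deploy a VM with a single cmdlet; the simplified New-AzureRmVM
# syntax fills in defaults for the resource group, virtual network,
# public IP, and network security group when they're not specified
New-AzureRmVM -Name "TestVM" -Credential (Get-Credential)
```

For production deployments you’d still specify the resource group, image, and size explicitly, but for quick test VMs the one-liner is hard to beat.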
4. Azure File Sync
I know many friends and colleagues that have been waiting for something like this. You can essentially view this as next-generation DFS (though it doesn’t use the same technology). It allows you to sync your on-premise file servers with an Azure Files account for distributed access to the stored information around the globe.
3. Quality of Life Improvements for Windows Containers
While there were no huge reveals in the container space, Windows Server 1709 was announced and contains a lot of improvements and optimizations for running containers on Windows Server. This includes things like smaller images and support for Linux containers running on Windows Server. I did an interview with Taylor Brown from the Containers team, which you can view below for more information.
2. Nested Virtualization in Azure for Production Workloads
Yes, I know, nested virtualization in Azure has been available for some time. However, what I found different was Microsoft’s insistence that it could also be used for production workloads. During Scott Guthrie’s keynote, Corey Sanders actually demonstrated the use of the M-series (“Monster”) VM in Azure being used to host production workloads with nested VMs. While obviously not ideal in every scenario, this is simply another tool that we have at our disposal for added flexibility in our day-to-day operations.
If you’re interested, I actually interviewed Rick Claus from the Azure Compute team about this. That interview can be seen below.
1. Project Honolulu
This one is for the folks that are still strictly interested in the on-prem stuff. Microsoft revealed and showed us the new Project Honolulu management utility for on-premise workloads. Honolulu takes the functionality of all the management tools and MMC snap-ins that we’ve been using for years and packages them up into a nice, easy-to-use web UI. It’s worth a look if you haven’t seen it yet. We even have a nice article on our blog about it if you’re interested in reading more!
As I mentioned, this was by no means a comprehensive list, but we’ll be talking about items from Ignite (from this list and some not mentioned) on our blogs for some time. So be sure to keep an eye on our blog if you’re interested in more information.
Additionally, if you attended Microsoft Ignite, and you saw a feature or product you think is amazing that is not listed above, be sure to let us know in the comments section below!
Q. If you’re new to Azure Stack, what are some good resources for learning more about it (other than this webinar)?
A. If you’re looking to learn more about Azure Stack, it’s best to start by learning more about Azure. Because managing Azure Stack is so similar to managing Azure, learning how to handle Azure will help you with Azure Stack when you’re ready to deploy it. If you’re looking to focus on individual features, it is recommended that you focus on ARM (Azure Resource Manager) before other items. With that said, Microsoft has a lot of training materials about Azure and ARM, and even has an online virtual academy with some resources HERE
Q. Microsoft has already talked about scaling the solution up from the existing planned deployments; are there any mentioned plans to scale the solution down?
A. The smallest that Azure Stack scales down to is 4 nodes, with no mentioned plans to go below that. Due to the nature of the solution and what it’s capable of delivering, if 4 nodes is not small enough, it’s recommended to host the workloads directly in Azure instead.
Q. Will it be more resource-efficient to host PaaS workloads or IaaS workloads in Azure Stack?
A. While the final numbers and pricing will tell you for sure, at this point it looks like PaaS will be the more efficient route (like public Azure). This is because PaaS services are inherently more efficient than IaaS, as you’re not having to support an individual underlying OS for each workload.
Q. What are the differences between the different switch types in Azure Stack?
A. The aggregate switch acts as an aggregation layer for all the different TOR switches to connect to. The TOR switch is a top-of-rack switch that the physical hosts connect to, and the BMC switch is used by the baseboard management controllers in the hosts for things like auto-power-on, power-off, and patching.
Q. Can I use Altaro VM Backup to protect workloads running on Azure Stack?
A. At release, Microsoft is not opening APIs or providing a way for third-party vendors to provide backup services inside of the stack. However, it is suspected (but not confirmed) that they will open a marketplace for MAS, much like they have for Azure, through which backup vendors could deploy methods for protecting Azure Stack-based workloads. We will be watching this closely and will be sure to notify you via the Altaro blog of any major product enhancements centered around this.
Q. Am I able to use an Azure Stack based storage account for hosting offsite backups with Altaro VM Backup?
A. Yes! You can connect to an Azure Stack-based storage account just as you would connect to a storage account hosted in public Azure. All you need to do is follow the instructions in the offsite backup location section of the application and paste in your connection string for the storage account.
Well, that wraps things up for August’s webinar! Be sure to keep an eye on this space, as we’ll be posting more information about Azure Stack as our authors find things that are interesting and of use to you!
As always, if there was a question you have that wasn’t answered, or you thought of a follow-up question, be sure to use the comments section below and we’ll be sure to get you your answer ASAP.
Thanks for attending, and we hope to see you for the next one!
On July 18th, we put on a webinar with Aidan Finn regarding Azure IaaS and Hybrid Cloud. The webinar was well attended, and we got some great questions and feedback throughout the session. As is our norm for webinars, this post contains the following:
A recording of the webinar in its entirety
A link to the slide deck used
A full list of the questions and their associated answers.
If you have any follow-up questions be sure to use the comments section below and we’ll be sure to get you an answer!
Watch Webinar – 4 Important Azure IaaS Features for Building your Hybrid Cloud
Q: If there is a trackable pending disaster such as a hurricane or a war, will Microsoft proactively move data and workloads to another Azure Datacenter Region?
A: The short answer here is no. Microsoft leaves it up to customers to design and architect their solutions across several datacenter regions if they need that kind of failover and redundancy. Microsoft will not sync data between datacenters on its own in this regard; you have to set it up yourself.
Q: Is it possible to select managed or un-managed for disks during the creation of a new VM in Azure?
A: It is. In the storage section, under step 3 of the VM’s creation, you have the option of selecting managed or unmanaged storage.
Q: Is it possible to change from un-managed to managed storage at a later time?
A: Yes! There are a few PowerShell cmdlets that can do this, and the process is fairly quick. More information on this can be found HERE.
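For reference, a minimal sketch of that conversion looks like the following (AzureRM-era cmdlets; the resource group and VM names are placeholders):

```powershell
# The VM must be stopped and deallocated before conversion
Stop-AzureRmVM -ResourceGroupName "myRG" -Name "myVM" -Force

# Convert all of the VM's unmanaged disks to managed disks;
# the cmdlet restarts the VM once the conversion completes
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName "myRG" -VMName "myVM"
```

Note that the conversion is one-way: there is no supported cmdlet to convert managed disks back to unmanaged.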
Q: Does an MSDN subscription allow you to do some testing with Azure?
A: Yes. You get various credits depending on your subscription level. You can find more information on this HERE
Q: When a host “warm reboots” in Azure, how do the VMs stay online? How do they get resources?
A: The answer here is that they don’t stay online; however, the downtime is only 15 to 30 seconds, so it’s nearly unnoticeable unless you’re running a very connectivity-sensitive application.
Q: How can I keep track of which services are available in what regions?
It’s been a big year for us here at Altaro Software so far. We’ve made quite a number of improvements and feature additions to the product since the start of the year, and we’re not showing any sign of slowing down.
We’d like to introduce you to our latest exciting feature addition, and it’s one that customers have been asking about for some time. We’re proud to announce that, as of Altaro VM Backup 7.5, we can now send offsite backups directly to an Azure Storage Account!
Some of you will read that and say, “Haven’t we been able to do that before?” While it’s true that some customers have been sending their offsite backups into Azure, doing so before our 7.5 release required a compute instance running our Altaro Offsite Server software inside of Azure. That entailed a running VM, page blob storage or Azure Files storage, an Azure security group, Azure WAN IPs, and potentially more. From the mere standpoint of needing a place to put your offsite backups, it was more complex than we cared for.
Now, with version 7.5, all you need is the Azure Storage Account and the connection string associated with it. As shown below, you simply set up the storage account, paste the connection string into the Altaro VM Backup Console, and you’re connected!
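If you’d rather retrieve the connection string details from PowerShell than copy them out of the portal, a sketch like the following works (AzureRM-era cmdlet; the resource group and account names are placeholders):

```powershell
# List the access keys for the storage account; either key can be
# used to build the connection string
Get-AzureRmStorageAccountKey -ResourceGroupName "myRG" -Name "mystorageacct"

# The connection string you paste into Altaro VM Backup takes this form:
# DefaultEndpointsProtocol=https;AccountName=mystorageacct;AccountKey=<key1>;EndpointSuffix=core.windows.net
```

Keep the account key secret; anyone holding it has full access to the storage account.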
Not only are you saving money by not having to incur the monthly cost of running a VM in Azure, you’re also using more efficient and cost-effective storage via the storage account.
When using the Altaro Offsite Server in Azure, you had to use Azure page blobs, which means you were essentially storing your data in VHDs associated with a given VM. This method is more expensive than the Azure block blobs that our new storage account integration uses. Additionally, the old method had some size limitations: VHD size in Azure is limited to 1TB per VHD. Sure, you could stripe the data across multiple VHDs, but you needed larger VM sizes to do so, which would incur even more cost! With our new storage account integration, your only limitation is the 500TB size limit on an Azure Storage Account, and if you fill that up, you simply create another one and go about your day!
One common question that we’ve gotten since releasing this feature to beta is whether it can take advantage of the Augmented Inline Deduplication technology we released earlier this year. The answer is yes! All data moving into an Azure Storage Account via our software is first deduped before crossing the wire. This ensures that only the data that absolutely needs to go across the WAN does so. So not only are you getting cost and data efficiency, you’re getting transfer efficiency as well! It’s a win-win!
Again, we love bringing you new and improved features within our backup software to better prepare you and your organization for the inevitable recovery situation, and it is our hope that with this feature we’ve given you one more tool you can use to keep your data and your company safe from data loss.