Why We Liked Hyper-V R2 — and Why We’ll Love R3

In 2008, I was tasked with evaluating a potential virtualization strategy for a medium-sized business. A solutions provider was trying to make a sale on VMWare ESX 3.5. The business wasn’t opposed to it, but as a matter of due diligence I was asked to consider the competition. As you might recall, there weren’t many competitors at that time. I had heard that Microsoft had moved on from Virtual Server and produced Hyper-V, but I didn’t know anything about it. I evaluated Hyper-V for about two days and decided against it: it didn’t have true high availability, while the VMWare product did. In 2010, I was working for another company and had to undergo the same process, only by then the leading choices were vSphere 4.1 and Hyper-V R2. Upon re-examining Hyper-V in its R2 offering, I found a substantially matured product. Now, in 2012, we’re about to experience another game-changing evolutionary step for this hypervisor.

Past is Prologue

Part of understanding what made Hyper-V R2 so compelling is understanding where it came from. Microsoft got serious about whole-machine virtualization with its acquisition of Connectix in 2003. From that acquisition we were introduced to Virtual PC and Virtual Server. Microsoft expanded and improved upon these offerings, but they were never very popular and couldn’t really compete with VMWare on any metric except price. With its head start and superior technology, VMWare stayed at the very top of the curve as virtualization rose from novelty to norm.

In 2008, Microsoft released the first iteration of Hyper-V. It was not simply a repackaged version of Virtual Server, but it still didn’t compare well technologically to VMWare’s ESX product. It did retain one edge: the hypervisor could be had for free, and if you had a copy of Server 2008 or were willing to put up with Vista, Hyper-V Manager was also free. Virtualization was still not common at that time, but interest was growing, and this cheap entry point grabbed attention. Microsoft had also begun introducing its product in another way: if you took MCT-led Official Curriculum courses, the computer you were working on had probably been deployed from a VHD image. Many students exited the training world and entered the working world with a pre-seeded understanding of the benefits of VHDs, which fed nicely into an understanding of the hypervisor that could use them. Whether this was intentional on Microsoft’s part or just a matter of convenience for those tasked with setting up training computers, we’ll never know.

The feature set in Hyper-V’s initial offering was quite limited. Where ESX had downtime-free vMotion, Hyper-V only had QuickMigration: fast, but with a brief service interruption. Most of its other limitations were related to how many resources it could manage. Realistically, people who could live with QuickMigration’s brief interruption probably weren’t pushing R1’s resource limits anyway. Even so, those limits kept R1 off many enterprise servers.

Hello, R2

The virtualization arena changed quickly and significantly with the release of Hyper-V R2. There were two major categories of change, both positive for the computing world.

New Technology in Hyper-V R2

The biggest technology impact from Hyper-V R2 was LiveMigration, a technology that allows a virtual machine to stop running on one host and start running on another with no meaningfully perceptible downtime for the virtual machine’s services or users. For most small and medium businesses, this put Hyper-V at “good enough” feature parity with VMWare ESX. The other major technology change was Cluster Shared Volumes (CSV), which, important as it is on its own, was pivotal in making LiveMigration viable: without CSVs, a virtual machine can only be LiveMigrated if it is the only VM on its LUN, which could lead to very wasteful provisioning of hard drive space. Beyond LiveMigration and CSV, Hyper-V R2 also raised many of the resource caps, such as usable physical processors, RAM, and LUN sizes.

Environmental Changes Caused or Accelerated by the Entrance of Hyper-V R2

With Hyper-V R2, Microsoft had produced a viable entry in the server virtualization market, one attractive even to shops that could afford VMWare. The effect on the industry was quick and powerful. In 2011, Gartner placed Hyper-V on its x86 server virtualization Magic Quadrant for the first time. With a Gartner stamp of approval, IT managers had an easier time making a business case to corporate boards, consultants had an easier time selling Hyper-V solutions to clients, and the business community as a whole became more confident that Microsoft could do virtualization well.

VMWare felt the pinch, and it showed. vMotion had initially been available only to purchasers of the more expensive editions of vSphere, so many of their smaller customers immediately began evaluating a move to Hyper-V R2. In response, VMWare made vMotion available in its lower-cost SKUs as well. VMWare still had the superior technology set by quite a wide margin, but Hyper-V put real pressure on them to offer several of those technologies at a lower price point. VMWare customers at those levels gained from R2 without ever using it.

As with all large technologies, a cottage industry had formed around ESX. Much like the app stores for today’s cellphones, VMWare had created a “virtual appliance” repository where you could download and license pre-built virtual machines for any number of purposes. Hyper-V R2 entered with one serious advantage: it was based on the Windows kernel, which has a far larger developer community than VMWare’s platform. Hyper-V R2 came into the world with a gigantic pool of developers who were already most of the way to being able to code for it. Even though VMWare had a well-established application ecosystem, the learning curve for Hyper-V R2 developers was much shorter, which allowed vendors to begin closing the gap quickly.

The final, and arguably biggest, environmental change is still ongoing. With the feature and price wars fully engaged, consumers can get far more virtualization capability per dollar than they could even four years ago. We are nearing, or depending on your perspective have already reached, the point where the hypervisor is a commodity rather than a specialty product.

Looking Forward to R3

Where the biggest changes from R2 were in the larger world outside the technology itself, R3 will bring the focus firmly back to the technology. One of the biggest changes will be the “shared nothing” concept around LiveMigration: as long as the target host is reachable across the network from the source host, you’ll be able to move a virtual machine there without perceptible downtime. This will involve several new technologies, chief among them a direct competitor to VMWare’s Storage vMotion. Arguably, this is the second most-desired technology after memory-only vMotion, and it will do much to increase the attractiveness of Hyper-V. Another benefit will be the ability to merge snapshots without downtime, which has been another sticking point in Hyper-V adoption.

One change that Windows administrators will welcome is the change in domain controller behavior. With 2008 R2 and earlier domain controllers running on Hyper-V R2 or earlier, applying a snapshot or restoring a saved state could cause a USN rollback condition. With a Windows Server 8 domain controller running on Hyper-V R3, this will no longer be a concern at all. Administrators will also like the introduction of Hyper-V Replica, which can help them establish very low recovery point objectives (RPOs) with a very high probability of actually achieving them.

Even with these and all the other technological improvements in R3, the non-technological environment has also improved. R2 helped establish Hyper-V as a viable option in the server virtualization market. Its growth has added substantially to the number of administrators who are supporting it, writing about it, and promoting it, so R3 will be released into a much more informed and welcoming environment than R2 was. Organizations that are curious about and open to adopting this technology will have greater access to subject-matter experts and material, so they can more easily make an intelligent decision for their virtualized future.
