9 vExpert Views on VMware VSAN and Hyper-Convergence

We’ve seen many new trends in technology over the last several years. Perhaps none of these technologies is as polarizing as Hyper-Convergence: the concept of putting storage and compute within the same chassis, with the idea that scaling out becomes much simpler.

Many companies are starting to delve into this space, such as VMware, Nutanix, and now Microsoft with Storage Spaces Direct. With everyone developing their own Hyper-Converged solution, does that mean the industry is finally ready to adopt this technology en masse, instead of the niche adoption we’ve seen thus far?

To find out more, we asked a number of VMware vExperts what their thoughts were on the topic.

The Question

We asked them a single, two-part question:

“In the next 1 – 3 years, do you see Hyper-Converged technologies such as VMware VSAN becoming mainstream? If so, what do you see as the primary benefit and use case for businesses looking to adopt this technology?” 

The responses from our polled vExperts are below.

Bilal Hashmi


“In my opinion, hyper-converged solutions add a lot of value. Perhaps at first they weren’t seen as production-ready, but today I see them running all over the place. With the addition of VSAN, and its simplicity in deploying and, most importantly, managing, I can see this picking up even faster than before. With stretched metro clusters, VSAN essentially delivers replication across sites, which used to be an option only for customers who could pay the big bucks. I think VSAN delivers enterprise-grade solutions to customers of all sizes at an affordable cost. The technology is obviously great; the question is, will businesses adopt it? I feel that at that price point, and with its simplicity in deployment and management, VSAN will really make hyper-converged the new norm.”

Andrea Mauro


“Sure, hyper-converged solutions are becoming more and more common, not only for 2nd-tier workloads but also for business-critical applications (see, for example, the recent certifications for Oracle or SAP).
Anyway, I don’t consider this trend a “killer application”, because “centralized storage” will remain, mostly becoming AFA (or at least hybrid) and more VM-centric (here Virtual Volumes, once it matures, will be a good driver in the VMware ecosystem). For businesses, the primary benefit is the “building block” approach: the predictability of workload capacity and performance (which can simplify the design), but also the speed and simplicity of the deployment phase.”

Andrew Morgan


“Hyper-Converged is, in essence, software-based simplification of legacy storage paths and data tiering. Any technology that makes it possible to tier and distribute massive amounts of data across cheap SATA and SSD in the same chassis as the compute platform is welcome in the industry, and I’ve seen great first-hand interest from customers in this architecture.

For 2016 and onwards, as customers hit new challenges with their storage, face new point-solution requirements, or reach hardware refresh cycles, they’d be crazy not to consider a simplified software layer and shared compute platform. This market exploded with new competitors this year and last as they chase Nutanix for market share; 2016 will be an all-out war.”

Vladan Seget


“One of the problems with traditional storage systems is that they don’t evolve gracefully: every 3-5 years you must do a rip-and-replace to stay within the limits of hardware servicing.

Additionally, technologies like VSAN, which are software solutions, provide flexibility we don’t have with traditional hardware SANs. VSAN also allows businesses to make their investments in a more progressive, linear fashion. There’s no big up-front investment in a large SAN device to get started; the investment can be made in smaller steps.

It’s fairly simple to add a node to a VSAN cluster when you need more capacity. When adding a node you don’t have to create a new datastore; the single VSAN datastore is already there, and it simply gains capacity when a new node is added. Also, if a node ages out of warranty and needs to be replaced, it’s easy to put it into maintenance mode to evacuate VMs and then decommission it. So we have a simplicity in storage management that we didn’t have with traditional SAN systems.
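As an aside on the capacity point above, the linear growth can be sketched with a little arithmetic. The snippet below is a rough illustration only (not an official VMware sizing tool, and the per-node figures are invented): it shows how usable capacity in the single VSAN datastore grows as nodes join, assuming an FTT=1 policy with RAID-1 mirroring, which stores two copies of every object and therefore halves usable capacity.

```python
# Rough capacity sketch for a hypothetical VSAN cluster.
# Assumption: RAID-1 mirroring with "failures to tolerate" (FTT) = 1,
# so each object is stored ftt + 1 times.

def usable_capacity_tb(nodes, raw_tb_per_node, ftt=1):
    """Approximate usable capacity: raw capacity divided by the
    number of copies (ftt + 1) that mirroring keeps of each object."""
    raw_total = nodes * raw_tb_per_node
    return raw_total / (ftt + 1)

# Adding a node grows the existing datastore; no new datastore is created.
for nodes in (3, 4, 5):
    print(f"{nodes} nodes -> ~{usable_capacity_tb(nodes, 10):.1f} TB usable")
```

With hypothetical 10 TB of raw disk per node, each added node contributes a fixed, predictable slice of usable space, which is the "smaller steps" investment model described above.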

Flexibility in performance and data protection is another aspect of VSAN that is crucial for today’s modern enterprises. Some workloads need to perform faster than others; other workloads must keep running no matter what happens. We can apply different storage policies to some VMs than to others.”

Marcel van den Berg


“I believe Hyper-Converged is a very interesting new deployment model. It makes provisioning of storage much easier and quicker, which is of great benefit to the business. Cost-wise, it allows us to add capacity in small, granular steps without special knowledge and without huge investments. I do believe the costs for certain solutions are a bit high at the moment; costs will come down as the adoption rate increases. In about two years’ time, hyper-converged is likely to be mainstream.”

Rory Monaghan


“Hyper-Converged technologies look sure to become mainstream. There are huge benefits to having everything you require delivered in the box: all you have to do is connect a network cable, power it up, and start building whatever you want. For me, that means not relying on people from multiple teams to get a VM farm created. I can set up thousands of VMs within hours of getting the box. It provides simplicity for IT pros, and the fact that it comes through a single vendor means it provides simplicity for the organization as a whole.

As a neutral, it’s exciting and entertaining to watch Nutanix and VMware duke it out. In a model which somewhat nurtures a dependence on a single vendor, it’s interesting to see how each vendor markets their product. Clearly, we all benefit from the simplicity that comes with hyper-convergence, but right now we also benefit from the fact that it’s a competitive market space. Competition drives innovation and drives a lower price point. If you’re not watching this space right now, you need to!”

Ricky El-Qasem


“Tech like VSAN, enabling hyper-converged infrastructure, brings a more cost-effective way of providing storage to a virtual machine infrastructure. If you break it down, common off-the-shelf storage platforms nowadays are nothing more than Disk + CPU + Network + Software: all things that can be moved into a virtual world, sharing those resources with workload VMs and thus removing the need to purchase separate hardware just for storage. And once you move the storage IP into software and virtualise it, all manner of automation possibilities open up.

VMware VSAN is not a new concept, in my opinion. Virtualising storage in this manner has been around for a decade or more; technologies like DataCore or StarWind have been doing it for years. Back in 2006 I personally went on instructor training for DataCore, so I’ve been aware of such concepts since then. For me, the uptake of running such functions in the VM layer was limited by performance requirements. In days gone by, if you tried to run your storage platform in a VM, it would add tremendous load on the CPU, which had a detrimental effect on the other VMs running on the same hypervisor. Since then, tech like VSAN has integrated more tightly with the hypervisor, so the CPU is not constrained as much as before.

Tech like VSAN has been criticised for not being as cost-effective when it comes to providing capacity over performance, because at some point adding more storage means adding more compute as well. I’m on the fence with this view; I’d need to see an extensive TCO study to understand what the truth is.

The other limitation of VSAN is the lack of data services like CIFS or NFS, and the default answer is to use the free version of Nexenta for such features. To me, that sounds like a workaround. Personally, I work with enterprise clients, and my organisation would be too nervous to position free or open-source software as a solution for very big clients. That means relying on traditional storage platforms like EMC once again, and at that point we would be asking whether it’s cheaper to load up the EMC array we have purchased for data services with block storage too, or to purchase VSAN and make use of internal server disks for block storage. Again, it’s a TCO exercise that would be required to help decide which is the best approach.
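The cost exercise described above boils down to comparing total cost of ownership (TCO) over a planning horizon. A minimal sketch of that comparison, with entirely hypothetical figures chosen only to show the structure of the calculation:

```python
# Hypothetical TCO comparison (all figures invented for illustration):
# option A loads the already-purchased array with block storage too;
# option B licenses VSAN and uses internal server disks.

def tco(capex, opex_per_year, years):
    """Total cost of ownership: up-front cost plus yearly running cost."""
    return capex + opex_per_year * years

YEARS = 5
option_a = tco(capex=120_000, opex_per_year=18_000, years=YEARS)  # extend array
option_b = tco(capex=90_000, opex_per_year=22_000, years=YEARS)   # VSAN + disks

print(f"Extend existing array: {option_a:,}")
print(f"VSAN + internal disks: {option_b:,}")
print("Cheaper option:", "array" if option_a < option_b else "VSAN")
```

The real exercise would of course need real capex, licensing, support, and operational figures; the point is only that the answer flips depending on those inputs, which is why a proper study is needed before deciding.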

I don’t believe the future is all doom and gloom, though. In many scenarios hyper-converged virtualised storage is more cost-effective than purchasing separate hardware for block storage; we’ve already established that. The real value to pay attention to, though, is that VMware is bringing storage closer to the compute with VSAN. In addition, the level of automation is going to be key for organisations looking to build SDDCs. And while it’s immature in terms of the features you might expect from storage platforms, for use cases like VDI or big data it’s going to kick ass. People are already asking why they need traditional storage when they have VSAN, so imagine what VSAN will be like when it closes the gap on features like data services.

Hyper-converged is not coming in the next 1-3 years; it’s here now. My brother currently works for a popular hyper-converged vendor, and for them it’s not just about the built-in virtualised storage any more. Hyper-converged is about a wider picture that includes backup, WAN and DR. But that’s a completely different blog post.”

Wojciech Marusiak


“I don’t believe that Hyper-Converged technologies will become mainstream: Hyper-Convergence is mainstream already! I do believe that companies that don’t shift their focus from being silo-oriented to being virtual machine/application-focused will simply cease to exist, or lose their markets dramatically, in the coming years. From my personal experience and from discussions with customers, I see more and more interest in shifting to this new way of doing IT, even among larger, more “Traditional-IT” customers.

The primary benefit for customers and their business is spending less time “keeping the lights on” in their environment and more time delivering solutions to users. Those users don’t care how the underlying infrastructure works, as long as it does what it’s supposed to do in a reliable and elegant manner. Being able to develop new solutions faster and more easily gives Hyper-Converged customers an advantage over their competitors, and thus helps them win the battle in the market.”

Mike Preston


“Hyperconvergence, like any new technology, has certainly sparked a lot of interest in the IT world. We have seen all of the work that startups such as Scale, Nutanix, and SimpliVity have done get validated by VMware’s entry into the market with their EVO products. That said, I don’t necessarily think it can be deemed mainstream yet; as with any technology, getting inside the enterprise takes a long time, and politics, past practice, and current investments prevent newer technologies from taking on the number-one role in the datacenter. Still, there are many benefits for businesses moving to a Hyper-Converged architecture; efficiency, simplicity, cost, and scale are some of the big ones, and for these reasons we have seen Hyper-Converged architectures being looked at for specific use cases such as ROBO and VDI: new projects that companies want to complete quickly and future-proof. I can’t really predict what’s going to happen in 3 years, but I know that, whatever happens, there will be more Hyper-Converged designs running in the future than there are today.”


So there you have it: lots of great responses from many different vExperts!

The general consensus on the first part of the question, on whether this technology will move into the mainstream, was a resounding “Yes.” It looks like it’s time to start training up on these emerging technologies!

We’d first like to thank them for their time in providing these responses, and then we’d like to pose the same question to our readers.

So do YOU think Hyper-Convergence will be mainstream in the coming years? Leave a comment below and let us know!


4 thoughts on "9 vExpert Views on VMware VSAN and Hyper-Convergence"

  • Dan Gillman says:

    Where do you see backup solutions headed? I just spent 900,000.00 on a Commvault solution and I am still shocked at the cost. With all this new integrated technology why are companies being held hostage just to backup their data effectively?

    • Andy Syrewicze says:

      Ouch! 900K certainly isn’t pocket change… While we here at Altaro don’t think you
      should have to pay an arm, a leg, and your first-born child to protect your data, many other backup vendors
      don’t see it that way.

      I think that as more and more workloads become virtualized, we’ll see the cost of backups
      go down. The vast majority of workloads today are virtualized, and conducting backups
      at the hypervisor layer, without an agent, is quicker, more efficient, and has fewer
      moving parts.

      The problem comes from old legacy systems that can’t be virtualized, or workloads
      that must stay physical due to some sort of company policy or regulatory requirement.
      Those workloads are the ones that require additional functionality to protect, and
      they’re the reason that many backup vendors are throwing everything, including the
      kitchen sink, into their solutions, thus increasing the cost.

      Then there is tape media, which takes more effort to support and develop for…

      Over time, once these old workloads and policies are phased out/removed, the overall cost of backup should come
      down, but time will tell.
