Everything You Need to Know about Containers in VMware

Containers have long been touted as the VM killer, and for many use cases that is true, but not all.

Unquestionably, organizations today are transforming from traditional infrastructure and workloads, including virtual machines, to modern containerized applications. However, making this transition isn’t always easy, as it often requires organizations to rethink their infrastructure, workflows, and development lifecycles, and to learn new skills. Are there ways to take advantage of the infrastructure already running in the data center today to host containerized workloads? For years, many companies have been using VMware vSphere for traditional virtual machines in the data center. So what are your options to run containers in VMware?

Why Shift to Containers?

Before we look at the options available to run containers in VMware, let’s take a quick look at why we are seeing a shift to running containers in the enterprise. There are many reasons, but a few primary drivers stand out.

One of the catalysts for the shift to containerized applications is the transition from large monolithic three-tier applications to much more distributed application architectures. For example, a conventional application may have a web, application, and database tier, each running inside traditional virtual machines. With these legacy three-tier architectures, the development lifecycle is often slow, and deploying upgrades and feature enhancements can take weeks or months.

Upgrading such an application means lifting the entire tier to a new version of the code, since everything must change in lockstep as a monolithic unit. Modern applications, by contrast, are highly distributed, built from microservice components running inside containers. With this architectural design, each microservice can be upgraded separately from the other application elements, allowing much faster development lifecycles, feature enhancements, upgrades, lifecycle management, and many other benefits.

Organizations are also shifting to a DevOps approach to deploying, configuring, and maintaining infrastructure. With DevOps, infrastructure is described in code, allowing infrastructure changes to be versioned like application code. While DevOps processes can use virtual machines, containerized infrastructure is far more agile and conforms more readily to modern infrastructure management. So, the shift to a more modern approach to building applications offers benefits from both the development and IT operations perspectives. To better understand containers vs. virtual machines, let’s look at the key differences.

Comparing Containers vs. Virtual Machines

Many have used virtual machines in the enterprise data center. How do containers compare to virtual machines? To begin, let’s define each. A virtual machine is a virtual instance of a complete operating system installation. The virtual machine runs on top of a hypervisor that virtualizes the underlying hardware, so the guest operating system doesn’t know it is running on a virtualized hardware layer.

Virtual machines are much larger than containers because a virtual machine contains the entire operating system, applications, drivers, and supporting software installations. Virtual machines require operating system licenses, lifecycle management, configuration drift management, and many other operational tasks to ensure they comply with the organization’s governance policies.

Instead of containing the entire operating system, containers only package up the requirements to run the application. All of the application dependencies are bundled together to form the container image. Compared to a virtual machine with a complete installation of an operating system, containers are much smaller. Typical containers can range from a few megabytes to a few hundred megabytes, compared with the gigabytes of installation space required for a virtual machine with an entire OS.

One of the compelling advantages of containers is that they can move between container hosts without worrying about dependencies. With a traditional virtual machine, you must verify that all the underlying prerequisites, application components, and other elements are installed for your application. As mentioned earlier, containers carry all the application dependencies along with the application itself. Since the prerequisites and dependencies move with the container, developers and IT Ops can move applications and schedule containers to run on any container host much more quickly.
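
To make this concrete, below is a minimal sketch using the Docker SDK for Python; the registry path, image tag, and port are hypothetical placeholders. The same image can be pulled and started on any host running a container engine, because everything the application needs travels inside the image.

```python
# Minimal sketch: run the same container image on any Docker host.
# Requires the Docker SDK for Python (pip install docker); the registry,
# image name, tag, and port below are hypothetical placeholders.
import docker

REPO, TAG = "registry.example.com/webapp", "1.4"  # hypothetical image

# Connect to whichever Docker engine the environment points at
# (the local socket by default, or a remote host via DOCKER_HOST).
client = docker.from_env()

# Pull the image; every application dependency arrives inside it.
client.images.pull(REPO, tag=TAG)

# Start the container -- nothing needs to be pre-installed on the host
# beyond the container runtime itself.
container = client.containers.run(f"{REPO}:{TAG}", detach=True,
                                  ports={"8080/tcp": 8080})
print(f"Started {container.short_id} from {REPO}:{TAG}")
```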

Virtual machines still have their place. Installing traditional monolithic or “fat” applications inside a container is generally impractical. Virtual machines remain a great solution for interactive environments and other needs that cannot yet be satisfied by running workloads inside a container.

Containers also have benefits related to security. Managing multiple virtual machines can become tedious and difficult, primarily around lifecycle management and attack surface. Virtual machines present a larger attack surface since they contain a full operating system and a larger software footprint, and the more software installed, the greater the possibility of attack.

Lifecycle management is also more challenging with virtual machines, since they are typically maintained for the entire lifespan of an application, including upgrades. This can lead to stale software, old installations, and other baggage carried forward with the virtual machine. Organizations also have to stay on top of security updates for virtual machines.

Containers also help organizations adopt immutable, idempotent deployments: a container running the current version of the application is not upgraded in place once deployed. Instead, businesses deploy new containers with the new application version, so every deployment produces a fresh application environment.
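
As a rough sketch of this pattern, again using the Docker SDK for Python with hypothetical container and image names, an upgrade means removing the old container and starting a new one from the newer image, never patching software inside the running container:

```python
# Sketch of the "replace, don't upgrade" pattern; the container name and
# image tag are hypothetical. The running container is never patched in
# place -- it is removed and a fresh one is started from the newer image.
import docker
from docker.errors import NotFound

NAME = "webapp"                                 # hypothetical container name
NEW_IMAGE = "registry.example.com/webapp:1.5"   # hypothetical newer version

client = docker.from_env()

# Retire the container running the previous version, if one exists.
try:
    old = client.containers.get(NAME)
    old.stop()
    old.remove()
except NotFound:
    pass  # nothing to replace on the first deployment

# Deploy the new version as a brand-new container.
client.containers.run(NEW_IMAGE, name=NAME, detach=True)
```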

Note the following summary table comparing containers and virtual machines.

Characteristic                              Containers   Virtual Machines
Small in size                               Yes          No
Contains all application dependencies       Yes          No
Requires an OS license                      No           Yes
Good platform for monolithic app installs   No           Yes
Reduced attack surface                      Yes          No
Easy lifecycle management                   Yes          No
Easy DevOps processes                       Yes          No

It is easy to think the choice is either containers or virtual machines. However, most organizations will find they need both in the enterprise data center due to the variety of business use cases, applications, and technologies in play. The two technologies work hand-in-hand.

Virtual machines are often used as “container hosts.” They provide the operating system kernel that containers need to run, and, as VMs, they benefit from hypervisor capabilities such as high availability and resource scheduling.

Kubernetes (K8s) is the Modern Key to Running Containers

Businesses today are looking at running containers and refactoring their applications to be containerized, and most are planning to do so with Kubernetes. Kubernetes is the single most important piece of running containers in business-critical environments.

Simply running your application inside a container does not satisfy the needs of production environments, such as scalability, performance, and high availability. For example, suppose you have a microservice running in a single container and that container goes down. You are then in the same situation as running the service in a single virtual machine without any high availability.

Kubernetes is the container orchestration platform that allows businesses to run their containers much as they run VMs today, in a highly available configuration. Kubernetes can schedule containers across multiple container hosts and can reschedule containers from a failed host onto a healthy one.

While some companies may run standalone containers with Docker and handle scheduling through homegrown orchestration or other means, most are looking at Kubernetes to solve these challenges. Kubernetes is an open-source platform for managing containerized workloads and services, and it provides modern APIs for automation and configuration management.
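
As a small illustration of those APIs, here is a hedged sketch using the official Kubernetes Python client to declare a Deployment of three replicas; the Deployment name, labels, and image are hypothetical, and a working kubeconfig is assumed:

```python
# Minimal sketch: declare a three-replica Deployment through the
# Kubernetes API (pip install kubernetes). Names and image are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig, e.g. ~/.kube/config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="webapp"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies scheduled across nodes
        selector=client.V1LabelSelector(match_labels={"app": "webapp"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "webapp"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="webapp",
                        image="registry.example.com/webapp:1.5",  # hypothetical
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# Submit the desired state; Kubernetes handles scheduling and self-healing.
client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                body=deployment)
```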

Kubernetes provides:

  • Service discovery and load balancing – Kubernetes allows businesses to expose services using DNS names or IP addresses. It can also load balance and distribute traffic across the containers backing a service for better performance and workload balance
  • Storage orchestration – Kubernetes provides a way to mount storage systems to back containers, including local storage, public cloud provider storage, and others
  • Automated rollouts and rollbacks – Kubernetes provides a way for organizations to perform “rolling” upgrades and application deployments, including automating the deployment of new containers and removing existing containers
  • Resource scheduling – Kubernetes can run containers on nodes in an intelligent way, making the best use of your resources
  • Self-healing – If containers fail for some reason, Kubernetes provides the means to restart, replace, or kill containers that don’t respond to a health check, and it doesn’t advertise these containers to clients until they are ready to service requests
  • Secret and configuration management – Kubernetes allows sensitive information, including passwords, OAuth tokens, and SSH keys, to be stored intelligently and securely. Secrets can be updated and deployed without rebuilding your container images and without exposing secrets within the stack (a minimal sketch follows this list)
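
To give a flavor of that last item, a Secret might be created through the Kubernetes Python client roughly as follows; the secret name and value are placeholders, and this is only a sketch of the API shape:

```python
# Sketch: store a credential as a Kubernetes Secret instead of baking it
# into a container image. Names and values below are placeholders.
import base64

from kubernetes import client, config

config.load_kube_config()

secret = client.V1Secret(
    api_version="v1",
    kind="Secret",
    metadata=client.V1ObjectMeta(name="webapp-db-credentials"),
    type="Opaque",
    data={
        # Secret data is base64-encoded; Pods consume it as env vars or files.
        "DB_PASSWORD": base64.b64encode(b"changeme").decode("ascii"),
    },
)

client.CoreV1Api().create_namespaced_secret(namespace="default", body=secret)
```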

Why Run Containers in VMware?

Why would you want to run containers in VMware when vSphere has traditionally been known for running virtual machines and is aligned more heavily with traditional infrastructure? In fact, there are many reasons to look at running your containerized workloads inside VMware vSphere, and many benefits to doing so.

There have been many exciting developments from VMware over the past few years in the container space, with new solutions that allow businesses to keep pace with containerization and Kubernetes effectively. In addition, according to VMware’s numbers, some 70+ million virtual machine workloads are running inside VMware vSphere worldwide.

That figure gives a picture of the vast number of organizations using VMware vSphere for today’s business-critical infrastructure. Retooling and completely ripping and replacing one technology for something new is very costly from both a fiscal and a skills perspective. As we will see in the following overview, there are several excellent options for running containerized workloads inside VMware, one of which is a native capability of the newest vSphere version.

VMware vSphere Integrated Containers

The first option for running containers in VMware is to use vSphere Integrated Containers (VIC). So what are vSphere Integrated Containers, and how do they work? The vSphere Integrated Containers (VIC) offering, first released back in 2016, was VMware’s first supported solution for running containers side-by-side with virtual machines in VMware vSphere.

It is a container runtime for vSphere that allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters, while vSphere administrators can manage these workloads using the familiar vSphere tools.
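
In practice, this means a developer’s existing Docker tooling can simply be pointed at the VCH’s Docker API endpoint. The sketch below uses the Docker SDK for Python with a hypothetical endpoint address and certificate paths; a real VCH typically exposes a TLS-secured endpoint on port 2376.

```python
# Sketch: point a standard Docker client at a VCH's Docker API endpoint.
# The endpoint address and certificate paths below are hypothetical.
import docker
from docker.tls import TLSConfig

tls = TLSConfig(
    client_cert=("/path/to/cert.pem", "/path/to/key.pem"),
    ca_cert="/path/to/ca.pem",
    verify=True,
)

# The VCH endpoint VM exposes a Docker-compatible API; containers created
# through it are provisioned on the vSphere cluster behind it.
vch = docker.DockerClient(base_url="tcp://vch.example.com:2376", tls=tls)

print(vch.version())  # confirm connectivity to the VCH endpoint
vch.containers.run("nginx:alpine", detach=True, name="demo-web")
```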

The VIC solution is deployed via a simple OVA appliance installation that provisions the VIC management appliance, which is used to manage and control the VIC environment in vSphere. vSphere Integrated Containers takes a more traditional approach, using virtual machines as the container hosts alongside the VIC appliance. You can therefore think of the VIC option as a “bolt-on” approach that brings container functionality to traditional VMware vSphere environments.

With the introduction of VMware Tanzu, and especially vSphere with Tanzu, vSphere Integrated Containers is no longer the best option for greenfield installations looking to run containers in VMware. In addition, August 31, 2021, marked the end of general support for vSphere Integrated Containers (VIC). As a result, VMware will not release any new features for VIC.

Components of vSphere Integrated Containers (VIC)

What are the main components of vSphere Integrated Containers (VIC)? Note the following architecture:


Architecture overview of vSphere Integrated Containers (VIC)

  • Container VMs – have the characteristics of software containers, including ephemeral storage, a custom Linux guest OS, persisting and attaching read-only image layers, and automatic configuration of various network topologies
  • Virtual Container Hosts (VCH) – the equivalent of a Linux VM that runs Docker, providing many benefits, including a clustered pool of resources, a single-tenant container namespace, an isolated Docker API endpoint, and a private network to which containers are attached by default
  • VCH Endpoint VM – Runs inside the VCH vApp or resource pool. There is a 1:1 relationship between a VCH and a VCH endpoint VM.
  • The vic-machine utility – a command-line binary for Windows, Linux, and macOS used to deploy and manage VCHs in the VIC environment (a scripted example follows this list)
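
If you want to script VCH creation rather than run the CLI by hand, a rough sketch might look like the following. The flag names are based on the VIC documentation as I recall them and should be verified against vic-machine create --help for your version; all values are hypothetical placeholders.

```python
# Rough sketch of scripting VCH creation by driving the vic-machine CLI.
# Flag names should be verified against `vic-machine-<platform> create --help`
# for your VIC version; every value below is a hypothetical placeholder.
import subprocess

cmd = [
    "./vic-machine-linux", "create",
    "--target", "vcenter.example.com/Datacenter",
    "--user", "administrator@vsphere.local",
    "--password", "VMware1!",
    "--name", "vch-demo",
    "--bridge-network", "vic-bridge",
    "--thumbprint", "AA:BB:CC:...",   # vCenter certificate thumbprint
    "--no-tlsverify",                 # lab use only; use proper TLS in production
]

# Run the CLI and fail loudly if VCH creation does not succeed.
subprocess.run(cmd, check=True)
```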

How to Use vSphere Integrated Containers

As an overview of the VIC solution, getting started using vSphere Integrated Containers (VIC) is relatively straightforward. First, you need to download the VIC management appliance OVA and deploy this in your VMware vSphere environment. The download is available from the VMware customer portal.


Download the vSphere Integrated Containers appliance

Let’s walk through the deployment screens for the vSphere Integrated Containers appliance. Deploying the VIC OVA follows the standard OVA deployment process. First, choose the OVA file for the VIC management appliance.


Select the OVA template file

Name the VIC appliance.


Name the VIC appliance

Select the compute resource for deploying the VIC appliance.


Select the compute resource for deploying the VIC appliance

Review the details of the initial OVA appliance deployment.


Review the details during the initial deployment

Accept the EULA for deploying the OVA appliance.


Accept the EULA during the deployment of the OVA appliance

Select the datastore to deploy the VIC appliance.