Hyper-V Virtual Hardware: Emulated, Synthetic and SR-IOV
You may be aware that Hyper-V Server has two different virtual disk controller types (IDE and SCSI) and two different virtual network adapter types (emulated and synthetic). This post will look at these device types as well as the way that a virtual machine interacts with SR-IOV-capable devices. As we look ahead to Hyper-V Server 2012 R2 and its introduction of generation 2 VMs, understanding the differences will become more important.
This post will cover some fairly detailed concepts at a quick, high level. I have chosen to strive for brevity and comprehensibility over strict accuracy. If you have a deeper interest in a thorough analysis of hardware, both physical and virtual, there are a great many resources available.
Basic Hardware Functionality
The CPU, as its name (Central Processing Unit) implies, is the essential core of what makes up a computer. It is connected to all other devices, including internal components such as memory, hard drives, and video adapters, over a bus. There are many types of buses, and all modern computers have more than one. But regardless of the bus, all of the hardware is controlled by the CPU.
Devices function by sending and receiving messages to and from the CPU. When software sends an instruction to save a file to disk, the instruction is processed by the CPU. Data going to and from the device is stored in a special location in memory set aside just for that device. Some ubiquitous devices have fixed, well-known locations in memory. For anyone who has been in the industry for a couple of decades or more, the address "3F8" is known to belong to COM1. When testing modems connected to COM1, we could actually use DOS commands to copy "files" to COM1:
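A minimal reconstruction of that DOS session (the phone number is a placeholder; ATDT and ATH are the standard Hayes dial and hang-up commands):

```
COPY CON COM1
ATDT 5551234
ATH
^Z
```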
The above would COPY from the CONsole to COM1. Where the copy command typically works with files, the CON source replaces a disk file with input from the console. The next three lines were the data for that console “file”; in this case, the first line sends a standard modem command to dial the phone and the second sends the standard modem command to hang up. The final line represents the end-of-file character, which, from the console, means that you’re done providing the file.
What would happen behind the scenes is that the CPU would place this data in the I/O address space for COM1 (address 3F8) and then signal the device that it had data to process. When the device wants to talk back, it does so by raising an interrupt request (IRQ), which is a signal to the CPU that the owner of that IRQ has something to say. As with data heading to the device, its response will also be routed through that address space. Hopefully, a software component has registered an interrupt handler that tells the CPU who to notify of this request. Of course, in our little DOS example above, there's no handler, so the modem just makes a connection and sits there. If you look at the Resources tab of any entry in Device Manager, you can see its memory address and IRQ.
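To make that flow concrete, here is a toy Python sketch of the register-a-handler, write-to-address-space, raise-an-IRQ sequence described above. Every function name and data structure here is invented for illustration; real hardware and kernels do this in silicon and interrupt tables, not application code.

```python
# Toy model of the I/O-address / interrupt-handler dance described above.
# Addresses, handler logic, and function names are invented for illustration.

io_space = {}            # maps an I/O address to the data parked there
interrupt_handlers = {}  # maps an IRQ number to the routine the "CPU" should call

def register_interrupt_handler(irq, handler):
    """A driver registers the routine to run when its device raises this IRQ."""
    interrupt_handlers[irq] = handler

def cpu_write(address, data):
    """The CPU parks outbound data in the device's reserved address space."""
    io_space[address] = data

def device_raise_irq(irq, address, response):
    """The device answers through the same address space, then signals the CPU."""
    io_space[address] = response
    handler = interrupt_handlers.get(irq)
    if handler is None:
        return None          # no handler registered: the data just sits there
    return handler(io_space[address])

# A "driver" registers a handler for IRQ 4 (traditionally COM1's IRQ).
register_interrupt_handler(4, lambda data: f"driver saw: {data}")

cpu_write(0x3F8, "ATDT 5551234")           # CPU -> device, via address 3F8
result = device_raise_irq(4, 0x3F8, "OK")  # device -> CPU, via IRQ 4
print(result)                              # driver saw: OK
```

With no handler registered, as in the DOS example, `device_raise_irq` returns nothing and the response simply sits in the address space.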
In later paradigms, drivers handle a lot of the nasty parts of talking to hardware. They register the interrupt handlers and know how to talk to the hardware. They expose simpler, and usually standardized, instruction sets that software can reference. So, instead of an application learning the exact way to get each and every printer to generate an "A", it can send a common command to any print driver that understands that common set and let the driver worry about the particulars. This was an early form of hardware abstraction in personal computing. At that stage, drivers were often an interface between a particular application and a particular device.
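The driver abstraction described above can be sketched as one common instruction set with device-specific translation behind it. Both "printer" command languages below are invented purely for the example:

```python
# Sketch of driver-level abstraction: applications speak one common command
# set; each driver translates it into its device's own control codes.
# Both device command languages below are invented for illustration.

class PrinterDriver:
    """The common instruction set that any application can rely on."""
    def print_text(self, text):
        raise NotImplementedError

class DotMatrixDriver(PrinterDriver):
    def print_text(self, text):
        # This imaginary device wants an "ESC p" prefix before the text.
        return "\x1bp" + text

class LaserDriver(PrinterDriver):
    def print_text(self, text):
        # This imaginary device wants a job wrapper instead.
        return "@JOB " + text + " @END"

def application_print(driver: PrinterDriver, text):
    # The application sends one common command and lets the driver
    # worry about the particulars of its hardware.
    return driver.print_text(text)

print(application_print(DotMatrixDriver(), "A"))
print(application_print(LaserDriver(), "A"))
```

The application code never changes; only the driver behind the common interface does.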
In the Windows world (and other OSs), the next step was abstraction at the operating system level. Windows introduced the Hardware Abstraction Layer, which sat directly atop the physical hardware as an intermediary between it and the drivers. The HAL offered a greater degree of stability and compatibility. Drivers became less application-oriented and more operating-system-oriented. The specifics of the HAL are not overly important to this discussion except to know that it’s there.
With a basic understanding of how hardware functions, we can now transition to the virtualized world.
Emulated Hardware
Emulated hardware is a software construct that the hypervisor presents to the virtual machine as though it were actual hardware. This software component implements a unified, least-common-denominator set of instructions that are universal to all devices of that type. This all but guarantees that it will be usable by almost any operating system, even those that Hyper-V Server does not directly support. These devices can be seen even in minimalist pre-boot execution environments, which is why you can use a legacy network adapter for PXE functions and can boot from a disk on an IDE controller.
The drawback is that emulation can be computationally expensive and therefore slow. The software component is a complete representation of a hardware device, which includes the need for IRQ and memory I/O operations. Within the virtual machine, all of the translation described above occurs. Once the virtual CPU has converted the VM's communication into instructions meant for the device, it is passed over to Hyper-V's emulated device construct. Hyper-V must then perform the exact same work to interact with the real hardware. All of this happens in reverse as the device sends data back to the drivers and applications within the virtual machine.
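A rough sketch, with invented function names, of why the emulated path is expensive: the complete hardware conversation happens twice, once between the guest and the emulated device, and again between Hyper-V and the physical device.

```python
# Sketch of why emulation is expensive: the guest performs a full
# hardware-style I/O conversation with a software device, and then the
# hypervisor must repeat the same conversation with the real device.
# All names here are invented to illustrate the two passes.

def hardware_io(label, data):
    """One full CPU<->device exchange: address-space write, then an IRQ."""
    steps = [f"{label}: write to I/O space", f"{label}: raise IRQ"]
    return steps, data

def emulated_write(data):
    trace = []
    # Pass 1: guest talks to the emulated device as if it were real hardware.
    steps, data = hardware_io("guest -> emulated IDE", data)
    trace += steps
    # Pass 2: Hyper-V repeats the same work against the physical disk.
    steps, data = hardware_io("host -> physical disk", data)
    trace += steps
    return trace

trace = emulated_write(b"sector data")
print(len(trace))  # 4 steps: the whole conversation happens twice
```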
The common emulated devices you’ll work with in Hyper-V are:
- IDE hard drive controllers
- Emulated (legacy) network adapters
- Video adapters
These aren’t the only emulated devices. If you poke around in Device Manager with a discerning eye, you’ll find more. The performance impact of using an emulated device varies between device types. For the network adapter, it can be fairly substantial. The difference between the emulated IDE controller and the synthetic SCSI controller is difficult to detect.
Synthetic Hardware
Synthetic hardware is different from emulated hardware in that Hyper-V does not create a software construct to masquerade as a physical device. Instead, it uses a technique similar to the Windows HAL and presents an interface that functions more like the driver model. The guest still needs to send instructions through its virtual CPU, but it is able to use the driver model to pass these communications directly up into Hyper-V through the VMBus. The VMBus driver, and the drivers that depend on it, must be loaded in order for the guest to use the synthetic hardware at all. This is why synthetic devices, such as the SCSI controller and the synthetic network adapter, cannot be used before the guest operating system has started.
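As a loose conceptual sketch only (the names and message format are invented, and the real VMBus is a shared-memory channel mechanism, not a Python queue), the synthetic model looks less like register banging and more like message passing:

```python
# Sketch of the synthetic model: instead of pretending to be hardware,
# the guest's VMBus-aware driver drops requests straight onto a shared
# channel that Hyper-V services. Names and structure are invented.

from collections import deque

class VMBusChannel:
    """A simplified stand-in for a VMBus channel: just a message queue."""
    def __init__(self):
        self.queue = deque()

    def send(self, message):
        self.queue.append(message)

class SyntheticScsiDriver:
    """Guest-side driver. Without this driver loaded there is no device,
    which is why synthetic hardware is unavailable before the guest OS
    and its drivers have started."""
    def __init__(self, channel):
        self.channel = channel

    def write_block(self, lba, data):
        # No IRQs, no fake register traffic: one message on the bus.
        self.channel.send({"op": "write", "lba": lba, "data": data})

channel = VMBusChannel()
driver = SyntheticScsiDriver(channel)
driver.write_block(2048, b"payload")

# Host side (Hyper-V) drains the channel and performs the real I/O once.
request = channel.queue.popleft()
print(request["op"], request["lba"])  # write 2048
```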
Here, the visual models we've been using so far don't fit as well, due to some shortcuts I had to take in the diagrams to keep them from becoming too confusing. The next diagram is less technically correct, but it should help get the concept across:
SR-IOV
SR-IOV (Single Root I/O Virtualization) eliminates the need for the hypervisor to act as an intermediary at all. SR-IOV-capable devices expose virtual functions, which are unique pathways for communicating with a hardware device. A virtual function is assigned to a specific virtual machine, and the virtual machine manifests it as a device. No other device or computer system, whether in the host computer or in any other virtual machine, can use that virtual function at all. Whereas all traditional virtualization requires the hypervisor to manage resource sharing, SR-IOV is an agreement between the hardware and software to use any given virtual function for exactly one purpose. This one is easy to conceptualize and diagram:
The benefit of SR-IOV should be obvious: it's faster. There is a lot less work involved in moving data between the device and the virtual machine. A drawback is that Hyper-V has practically no idea what's going on. The virtual switch has been designed to work with adapters using SR-IOV, but teaming hasn't. Another problem is that currently available SR-IOV-capable adapters have only a few virtual functions, usually around eight or so. If a virtual machine has been assigned an SR-IOV adapter, it must have access to a virtual function or that adapter will not work; Hyper-V cannot take over because it is not part of the SR-IOV conversation. One potentially unexpected outcome is that if you attempt to move a virtual machine to a host that doesn't have sufficient virtual functions available, that guest will fail to start. For Live Migration, the migration will fail and the guest will remain on the source system.
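The one-VF-per-owner rule and the failure behavior described above can be sketched like this; the class, names, and VF count are all invented for illustration:

```python
# Sketch of SR-IOV virtual-function assignment: each VF belongs to exactly
# one VM, and a VM that needs a VF fails if none is free. The class, names,
# and VF count are invented; real adapters of this era expose only a handful.

class SriovAdapter:
    def __init__(self, vf_count):
        self.free_vfs = list(range(vf_count))
        self.assigned = {}  # VM name -> VF number (exactly one owner per VF)

    def assign_vf(self, vm_name):
        if not self.free_vfs:
            # No free VF: the hypervisor cannot step in as an intermediary,
            # so (per the behavior described above) the guest fails to start.
            raise RuntimeError(f"{vm_name}: no virtual function available")
        vf = self.free_vfs.pop(0)
        self.assigned[vm_name] = vf
        return vf

adapter = SriovAdapter(vf_count=2)
print(adapter.assign_vf("vm-a"))  # 0
print(adapter.assign_vf("vm-b"))  # 1
try:
    adapter.assign_vf("vm-c")     # third VM: no VF left, startup fails
except RuntimeError as e:
    print(e)
```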
As of today, the only devices making use of SR-IOV are network adapters. The big question, though, is how much of a difference SR-IOV actually makes in performance. The answer is that it's something of a mixed bag. The differences can certainly be seen in benchmarks, but in practical applications they are not remarkable. Most environments are a long way from pushing the limits of synthetic adapters as it is. The impact is actually more noticeable on the CPU, since most of the work is offloaded to the hardware. However, most virtualization environments are not stressing their CPUs either, so again, the impact may not be noticeable.
Where This Is Headed
With 2012 R2, these distinctions will become more important. The new generation 2 virtual machines will not be able to use emulated (legacy) hardware at all. They will use UEFI (Unified Extensible Firmware Interface) firmware instead of a traditional BIOS, so synthetic hardware will be usable right from startup. That, and all the consequences of the new platform, will be the subject of a later post.
Have any questions?
Leave a comment below!
Backing up Hyper-V
We hope you found this post useful. If you’d like to back up Hyper-V VMs, check out the new Altaro Hyper-V Backup v4 with major new features. It’s free for up to 2 VMs. Need more? Download a 30-day trial of our Unlimited Edition here: http://www.altaro.com/hyper-v-backup/.