Some of you may have recently read the semi-viral post Virtualization is Coming to Its End, which claimed that virtualization is dying out as cloud and containers rise. Now, to be fair, virtualization is old technology: IBM launched its mainframe-based virtualization solution roughly 60 years ago. But is virtualization actually being replaced by cloud and containers? That claim is unfortunately based on some pretty deep misunderstandings of how the technology, and the industry for that matter, actually works.
Misconception 1: Everything in the data center is being migrated to the cloud, so it's time for virtualization to step aside.
Before we begin, let's explore the technical meaning and history of virtualization. Generally, virtualization can mean compute virtualization, storage virtualization, or network virtualization. Compute virtualization is the oldest form, dating back to 1959, when British computer scientist Christopher Strachey presented his seminal paper Time Sharing in Large Fast Computers at the International Conference on Information Processing; this is where the concept of virtualization was first proposed. Compute virtualization is also considered an ancestor of cloud technology. Storage virtualization came later, with Berkeley researchers publishing their Redundant Arrays of Inexpensive Disks (RAID) work in 1987. RAID uses hardware and software to combine multiple disks into a single large-capacity virtual disk. Finally, network virtualization was designed to cope with the rapid growth of cloud computing: once a cluster reaches a certain scale, traditional network technologies are often not flexible enough to handle its complexity, and software-defined networking (SDN) was created to virtualize the network. Ultimately, what this all means is that virtualization is the foundational technology of cloud; without good virtualization, there would be no cloud infrastructure.
But make no mistake, virtualization and cloud are not the same thing. First, they approach the same problem from different perspectives. Virtualization focuses on resource management and O&M from the administrator's point of view: administrators provision services in a centralized manner, so centralized management and control are essential. Cloud takes the opposite view: tenants apply for resources and deploy services themselves, which emphasizes separation of privileges and independent management. Second, cloud and virtualization follow different business models. Virtualization is used for internal operations and reduces enterprise IT costs through resource coordination and automated O&M. Cloud, on the other hand, serves massive numbers of external tenants and monetizes IT resources through resource sharing, elastic scaling, and metering and billing. The last major difference is the customer pain points they are designed to solve. Virtualization improves the efficiency of IT resources by providing capabilities at the infrastructure as a service (IaaS) layer. Cloud is designed to help customers through digital transformation, offering newer technologies like big data, artificial intelligence, and blockchain as platform as a service (PaaS) and software as a service (SaaS) offerings.
As more governments and enterprises go digital, virtualization providers are seeing market growth in tenant management, as-a-service construction, and multi-cloud management. Leading virtualization vendors like VMware and Nutanix are launching solutions that support multi-tenancy and hybrid clouds. Governments and enterprises can build their own virtual resource pools and gradually add capabilities as their services demand. Alternatively, they can go the cloud route, migrating everything to the cloud and then trimming the capabilities they don't need. In short, virtualization and cloud are both valid IT infrastructure construction strategies, but they best serve different scenarios, and both will remain strong options for governments and enterprises going digital for years to come.
Fact 1: Virtualization is critical for cloud. Virtualization and cloud best serve different scenarios, and both will continue to be strong options for organizations for years to come.
Misconception 2: Virtualization is an outdated technology being replaced by containers.
Virtualization, specifically compute virtualization, has developed rapidly on the x86 platform. Full virtualization emerged first. It adds a hypervisor layer on top of the host OS and uses a purely software-based virtual machine monitor (VMM) to emulate the entire underlying hardware, including the CPU, memory, clock, and peripherals. This eliminates the need to adapt the guest OS or its apps to run on a virtual machine (VM). However, it comes with its own problems: instructions have to be trapped and translated in software, and the VMM design is complex, both of which hurt overall system performance.
That's where paravirtualization steps in. It modifies the guest OS so that privileged operations are replaced with direct calls to the VMM (hypercalls), meaning certain hardware interfaces can be exposed to the guest OS in software. This improves VM performance, but it requires the guest OS to be adapted: an unmodified, off-the-shelf OS cannot run on a paravirtualized VMM.
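To make the guest-side picture concrete, here is a minimal, Linux-only sketch in Python that checks whether the OS it runs on is itself a guest and whether any paravirtualized interfaces are visible, such as Xen's sysfs node or virtio devices (the paravirtualized I/O drivers that modern, otherwise unmodified guests commonly use). The paths and heuristics are illustrative assumptions, not a complete detector.

```python
#!/usr/bin/env python3
"""Rough, Linux-only sketch: check whether this OS is running as a guest and
whether any paravirtualized (guest-aware) interfaces are visible.
Paths and heuristics are illustrative, not a complete detector."""
import os

def running_under_hypervisor() -> bool:
    # The "hypervisor" CPU flag is set when the kernel detects it runs under a VMM.
    try:
        with open("/proc/cpuinfo") as f:
            return any(line.startswith("flags") and "hypervisor" in line for line in f)
    except OSError:
        return False

def xen_interface():
    # Xen guests expose the hypervisor type via sysfs.
    try:
        with open("/sys/hypervisor/type") as f:
            return f.read().strip()
    except OSError:
        return None

def virtio_devices():
    # virtio devices indicate paravirtualized I/O drivers (common on KVM guests).
    path = "/sys/bus/virtio/devices"
    return sorted(os.listdir(path)) if os.path.isdir(path) else []

if __name__ == "__main__":
    print("Running under a hypervisor:", running_under_hypervisor())
    print("Xen hypervisor type (sysfs):", xen_interface() or "not exposed")
    print("Paravirtualized (virtio) devices:", virtio_devices() or "none found")
```

On a typical KVM guest this would report the hypervisor flag plus a few virtio devices; on bare metal, all three checks should come back empty.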
To address the adaptation problem, hardware-assisted virtualization was eventually developed, championed by the chip vendors themselves. Intel Virtualization Technology (Intel VT) and AMD Virtualization (AMD-V) are two such technologies available on the x86 platform, while Huawei Kunpeng-V is an Arm-based hardware-assisted virtualization technology. The biggest advantage of this approach is that the CPU itself provides virtualization instructions, eliminating the step where the VMM traps and translates instructions in software, which greatly improves performance. In addition, hardware-assisted virtualization delivers many of the same benefits as full virtualization: it hides hardware differences and eliminates the need to adapt guest OSs and apps. As a result, it is currently the industry's most popular virtualization technology.
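As a quick illustration, the sketch below (Python, Linux x86 hosts assumed; paths are standard but the script is only a rough check) looks for the CPU flags that advertise these extensions, vmx for Intel VT-x and svm for AMD-V, and for the /dev/kvm device node that a VMM such as QEMU uses to get hardware acceleration instead of falling back to pure software emulation.

```python
#!/usr/bin/env python3
"""Minimal sketch: check for hardware-assisted virtualization on a Linux x86 host.
Illustrative only; assumes the standard /proc and /dev layouts."""
import os

def cpu_virt_flags():
    # Intel VT-x is advertised as the "vmx" CPU flag, AMD-V as "svm".
    flags = set()
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
    except OSError:
        pass
    return {flag for flag in ("vmx", "svm") if flag in flags}

def kvm_usable() -> bool:
    # /dev/kvm appears when the kernel's KVM module has enabled the CPU extensions.
    return os.path.exists("/dev/kvm")

if __name__ == "__main__":
    found = cpu_virt_flags()
    print("CPU virtualization extensions:", ", ".join(sorted(found)) or "none detected")
    print("KVM device node present:", kvm_usable())
    if found and kvm_usable():
        print("A VMM here can use hardware-assisted virtualization.")
    else:
        print("A VMM here would fall back to software emulation.")
```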
Typically, virtualization uses a two-layer architecture with a guest OS layer on top of a host OS layer. That makes the technology heavyweight and ill-suited to agile services that require quick release and deployment, like most Internet services. Containerization, also called OS virtualization, helps here. It is a lightweight virtualization technology implemented inside the server OS, with no VMM layer: the kernel creates multiple isolated user-space instances (containers), each with its own libraries and processes, all sharing the host kernel. Processes in different instances do not affect each other.
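To show what "no VMM layer" means in practice, here is a small, Linux-only Python sketch that lists the kernel namespaces the current process belongs to. Container runtimes are essentially built from these namespaces plus cgroups rather than from a separate guest OS, which is what makes them so much lighter than VMs. The paths are standard Linux; the script is an illustration, not a container runtime.

```python
#!/usr/bin/env python3
"""Rough, Linux-only sketch: list the kernel namespaces the current process
belongs to. Containers are built from these namespaces (plus cgroups), not from
a separate guest OS. Illustrative only; this is not a container runtime."""
import os

NS_DIR = "/proc/self/ns"

def list_namespaces():
    # Each entry (mnt, pid, net, uts, ipc, user, cgroup, ...) is a symlink whose
    # target encodes the ID of the namespace this process belongs to.
    result = {}
    for name in sorted(os.listdir(NS_DIR)):
        try:
            result[name] = os.readlink(os.path.join(NS_DIR, name))
        except OSError:
            result[name] = "unreadable"
    return result

if __name__ == "__main__":
    for name, ident in list_namespaces().items():
        print(f"{name:>8} -> {ident}")
    # Processes in the same container share these IDs; processes in different
    # containers (or on the host) show different IDs while sharing one kernel.
```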
Containerization removes the guest OS and flattens the entire system stack, creating a more lightweight and efficient system. This is very attractive for most users, which is why many think virtualization should be completely replaced by containers.
Realistically though, this isn't feasible. First, it's important to understand that containers were designed for the Internet and Internet services; they work best with agile services that require rapid upgrades and releases. However, a large number of traditional services still prioritize stability, reliability, and resilience over agility, and virtualization is more suitable in those cases. Second, the fact remains that most services have not been containerized, and containerizing many traditional services has not yielded significant customer benefit; it has only made the system architecture more complex. For the foreseeable future, there simply isn't a need for a wholesale move to containers. The transition from virtualization to containers involves a significant learning cost, and the top priority of most enterprise IT personnel is operational stability, not upgrades. It's not a simple one-off decision enterprises can make. Traditional virtualization and containers still suit different scenarios, depending on how services are developed and what they require. Mainstream vendors like VMware and Red Hat (with OpenShift) will continue to offer dual-stack support for both VMs and containers, letting customers make their own choices.
Fact 2: Containers are simply a category of virtualization technology designed for specific application scenarios that traditional virtualization did not fully serve. The fear of being left behind if you haven't moved to containers is unfounded.
Misconception 3: Virtualization is only suitable for general-purpose applications. It is not suitable for mission-critical applications.
This misconception comes from the stereotype that virtualization requires instruction translation, consumes a large amount of resources, and compromises performance and reliability. It is true that virtualization has mainly been used for general-purpose applications with modest performance and stability requirements, such as desktop clouds, office automation (OA) systems, and enterprise websites. However, newer technologies such as hardware-assisted virtualization have closed much of the performance gap by moving virtualization work out of software and into the CPU. The NoF (NVMe over Fabrics) standard protocol initiated by Intel and Huawei uses the Storage Performance Development Kit (SPDK) to implement cross-layer pass-through, which further speeds up applications' access to data on external storage systems. These improvements allow virtualization to be combined with reliable, high-performance external storage to provide deterministic service levels. More and more key applications, such as enterprise design platforms, securities trading systems, and hospital information systems (HISs), are being deployed on VMs.
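For latency-sensitive workloads, the practical question is simply whether a given VM meets the numbers. The sketch below is a rough illustration of how to check: it shells out to the common fio benchmark (assumed to be installed; the file path and job parameters are arbitrary examples) to measure 4 KiB random-read latency from inside the guest, which can then be compared against the application's requirements or a bare-metal baseline.

```python
#!/usr/bin/env python3
"""Rough sketch: measure 4 KiB random-read latency from inside a VM using fio.
Assumes fio is installed; the test file path and job parameters are illustrative."""
import shutil
import subprocess

FIO_ARGS = [
    "fio",
    "--name=vm-randread",        # arbitrary job name
    "--filename=/tmp/fio-test",  # illustrative path; point it at the storage under test
    "--rw=randread",
    "--bs=4k",
    "--size=256M",
    "--direct=1",                # bypass the guest page cache
    "--runtime=15",
    "--time_based",
    "--group_reporting",
]

if __name__ == "__main__":
    if shutil.which("fio") is None:
        raise SystemExit("fio is not installed on this guest")
    # fio's output includes completion-latency statistics and percentiles;
    # compare them against the workload's requirements or a bare-metal baseline.
    subprocess.run(FIO_ARGS, check=True)
```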
The advantages of virtualization now include high resource utilization, convenient resource sharing, much-improved performance and reliability, and a broad ecosystem of supported applications.
Fact 3: The performance and reliability of virtualization have been improved to further support mission-critical applications.
Misconception 4: Virtualization is based on software and has nothing to do with hardware.
As hardware-assisted virtualization shows, virtualization is not based on software alone: software and hardware work together and complement each other. Hyper-convergence is another important application of virtualization, in which hyper-converged hardware is defined by virtualization software and the two are integrated for one-stop deployment. Hyper-convergence is gradually becoming the main way virtualization is deployed.
Recently, there has been significant investment in dedicated hyper-converged hardware. Cisco HyperFlex defines four types of hyper-converged nodes: UCS-series hardware solely for computing, hybrid nodes for storage and computing, all-flash NVMe nodes for high-performance scenarios, and edge nodes. HPE Nimble Storage dHCI and Dell EMC VxRail likewise define series of dedicated hyper-converged hardware. Data processing units (DPUs) take dedicated hardware further by adding composability to hyper-convergence. DPUs sit at the center of data movement in a hyper-converged architecture, freeing CPU resources from monotonous, repetitive data processing. Working with disk enclosures, they offload storage virtualization to enable CPU-free storage nodes; working with CPUs, they offload compute virtualization to enable diskless computing nodes; and working with each other, they offload network virtualization to speed up data exchange and movement. As a result, each CPU can access the data it needs as if it were on a local disk, eliminating cross-node data bottlenecks. VMware's Project Monterey has been dedicated to incubating DPU technology, and in VxRail 8.0, ESXi can be automatically deployed on DPUs to offload the virtualization layer and improve performance for the entire hyper-converged system.
Virtualization, then, is built not only on software but on close collaboration between virtualization software and hardware, together with full-stack infrastructure capabilities. It can be seen as the adhesive that fuses data center software and hardware.
Fact 4: Besides software, virtualization involves hardware and full-stack capabilities. Virtualization can be seen as the adhesive that fuses data center software and hardware.
Misconception 5: Virtualization only works for small data centers and cannot be applied to medium-sized or large data centers.
For small data centers, virtualization is the obvious choice because it is a lightweight and simple solution. However, virtualization can also be applied to medium-sized and large data centers.
From the perspective of technical architecture, data center virtualization comprises compute virtualization, storage virtualization, network virtualization, and an O&M management platform. The scale that compute and storage virtualization can reach depends on the vendor's software capabilities, and mainstream vendors now use distributed architectures that support thousands of nodes; the 64-node restriction on VMware vSAN is a commercial consideration rather than a technical constraint. Network virtualization, or software-defined networking (SDN), is actually geared toward large data centers and is not cost-effective for small and medium-sized ones.
The only bottleneck is the O&M management platform. For large data centers, service provisioning is more important than common device management and routine O&M. However, this bottleneck can be resolved as the O&M management platform evolves into a private cloud management platform used by both administrators and tenants. Therefore, virtualizing medium-sized and large data centers is not a technical issue.
Medium-sized and large data centers already contain a large number of virtual resource pools, such as desktop cloud, office, and video processing resource pools. These pools used to be regarded as silos that had to be migrated to the cloud. However, with the emergence of multi-cloud technologies, more and more enterprises are realizing that relying on a single cloud inevitably leads to problems such as vendor lock-in, homogeneous competition, and data ownership disputes. Many are therefore introducing multiple clouds, both private and public, alongside a wide range of virtual resource pools, and using a multi-cloud management platform to manage these resources in a unified manner.
From the perspective of multi-cloud architectures, virtual resource pools are part of a data center. Customers can select and combine virtual resource pools based on their service requirements, vendor capabilities, and current business conditions.
Fact 5: Virtualization works for medium-sized and large data centers and will remain part of them for a long time.
There are many misconceptions about virtualization, partly because it is difficult to get an accurate overview of such a broad field. But when we look at the technology itself, we see that virtualization is developing rapidly and that breakthroughs keep being made. Virtualization meets customer requirements and provides pragmatic solutions. The fact is that virtualization is ubiquitous: it is not only a solid foundation for public clouds but also a reliable partner for enterprise IT customers. It is a core technology for data centers, the adhesive that fuses data center software and hardware, and often the first step in an enterprise's digital transformation. Ongoing research and innovation in this field, and the accumulation of expertise and experience, remain as important as ever.