Systems virtualization: a myriad of possibilities

The evolution of systems virtualization has changed the paradigm of datacenter infrastructure management and of software development and maintenance.

Systems virtualization is the ability to create multiple virtual environments or instances on top of the same physical hardware resource, increasing productivity and efficiency in both business and personal scenarios, whether in the cloud or on-premises.

This is a concept that can be applied to resources such as processing, storage or network communications, as well as data, applications and workstations.

My first memory of this concept dates back to the early years of this millennium (perhaps 2002), when the first versions of VMware Workstation were made publicly available. Since then, virtualization has evolved significantly, creating possibilities in the management of datacenter infrastructures and changing the paradigm in the development and maintenance of software and services.


Systems virtualization: the battle of the titans

Although there had been several earlier projects (from IBM, Apple's Macintosh, Connectix, among others), the turn of the millennium brought many new developments in the field of systems virtualization, making VMware one of the main players in the market.

Microsoft, with its acquisition of Connectix's Virtual PC and Virtual Server product line, followed this trend, making Hyper-V (shipped with Windows Server) available in 2008. In parallel, open-source solutions such as Virtuozzo (in 2005), KVM (in 2007) and Docker (in 2013) appeared, all with innovative and differentiating approaches.

After a series of developments and acquisitions by tech giants, in 2024 we have multiple virtualization solutions catering to all requirements, scenarios and budgets.

Microsoft has decided to discontinue the standalone Hyper-V Server product (the Hyper-V role remains available in Windows Server) in favour of Azure Stack HCI, a hybrid offering that integrates on-premises and Azure cloud resources for all types of services in the Microsoft ecosystem.

AWS (Amazon Web Services, launched in 2006) and Google (Google Compute Engine, launched in 2012) built their cloud services on open-source hypervisor technologies (Xen and later KVM in AWS's case, KVM in Google's) and have become leaders (along with Microsoft) in IaaS (Infrastructure as a Service) and PaaS (Platform as a Service).

Other solutions such as Nutanix Cloud Platform, Citrix Hypervisor, Red Hat Virtualization, Proxmox, Oracle VM Server, Virtuozzo Hybrid Server, Xen Project and QEMU continue to be positioned as alternatives to the main players, each with its own advantages and benefits.

Of all these solutions, I have high hopes for the portfolio of Canonical, the company led by Mark Shuttleworth, which develops and supports Ubuntu (a server and workstation operating system), participates in the Ceph storage project, and develops Multipass (virtualization), LXD (unified management of virtual machines and containers), Juju (an orchestration engine), MAAS (Metal as a Service), Data Fabric (data integration and processing) and MicroK8s (a lightweight Kubernetes).

Considering the partnership Canonical made with Microsoft in 2016 (which brought a Bash shell to Windows 10 via the Windows Subsystem for Linux), and its presence at the Ubuntu Summit 2023 event (held in November 2023), we could be witnessing a historic turning point for Canonical. The London-based company has been considering an initial public offering (IPO) since 2018, and counts companies such as Netflix, eBay, Walmart, AT&T and Telekom among the major customers that have chosen Canonical platforms to develop their services.


From hypervisors to Kubernetes

Regardless of the companies developing each technology, the truth is that system virtualization is a concept that has completely changed the paradigm of datacenter management, software development and maintenance.

The hypervisor creates an intermediate layer between the physical hardware (the host) and the virtual machines (usually called guests), managing the physical resources that are shared by the virtual workloads.

Hypervisors generally come in two types: type 1 (also known as bare metal), in which the hypervisor interacts directly with the hardware resources and no additional operating system is required (e.g. Proxmox or KVM), and type 2, in which the hypervisor runs like an application on top of an existing operating system (e.g. VMware Workstation Player or Oracle VirtualBox).

This intermediate layer between the host's hardware and the guest's operating system has made it possible to optimize many of the sysadmin tasks, particularly in the installation, validation and maintenance processes.

By taking advantage of templates (pre-configured models), it is now possible to deploy virtual servers in just a few minutes. Using snapshots or cloning, it is possible to test updates and restore previous versions whenever necessary. And using live migration (transferring virtual workloads between hosts), it is possible to take infrastructure components (e.g. physical servers) offline without any downtime for the virtual workloads. These are just a few examples of the advantages of virtualizing operating systems.
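As a hedged illustration of the snapshot workflow, here is a minimal sketch using the libvirt Python bindings against a KVM host; the guest name "web01" and the snapshot name are hypothetical, and equivalent operations exist in management tools such as virsh.

```python
# Minimal sketch: snapshot a KVM guest before an update, using the
# libvirt Python bindings (pip install libvirt-python).
# The guest name "web01" is hypothetical.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-update</name>
  <description>State saved before applying updates</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")    # connect to the local KVM hypervisor
try:
    dom = conn.lookupByName("web01")     # locate the guest (domain) by name
    snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)  # take the snapshot
    print(f"Snapshot '{snap.getName()}' created for domain '{dom.name()}'")

    # If the update misbehaves, the guest can be rolled back:
    # dom.revertToSnapshot(snap, 0)
finally:
    conn.close()
```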

On the other hand, the emergence of containers (e.g. Docker) makes it possible to isolate the application component, virtualizing only this layer together with its dependencies and configurations, and creating much lighter and more portable images.

This containerization platform makes it possible to package and isolate applications in containers, which are lightweight and share the kernel of the host operating system. It differs from a conventional hypervisor in that it does not create virtual machines or separate operating systems. Instead, Docker containers use the resources of the host (the physical machine) and provide an efficient way to install and run applications without worrying about dependencies or running into compatibility conflicts.
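As an illustration, the sketch below uses the Docker SDK for Python to pull an image and run a short-lived container; the image tag and command are examples only.

```python
# Minimal sketch: run an application in an isolated container with the
# Docker SDK for Python (pip install docker). Requires a running Docker daemon.
import docker

client = docker.from_env()               # connect to the local Docker daemon

# The image packages the application and its dependencies; the container
# shares the host kernel, so it starts in seconds and leaves no VM behind.
output = client.containers.run(
    "python:3.12-slim",                  # example image from a registry
    ["python", "-c", "print('hello from an isolated container')"],
    remove=True,                         # delete the container when it exits
)
print(output.decode())
```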

Although Docker and both types of hypervisors (type 1 and 2) provide a layer of isolation, the underlying technology and approach are different. Hypervisors create virtual machines that simulate hardware and run their own operating systems. Docker containers, on the other hand, share the kernel of the host operating system and provide a lighter and more efficient solution because they virtualize only the resources needed to run the applications.

The success that Docker has had in the technology community has led to the emergence of new technologies such as Kubernetes, a container orchestration system that enables the deployment, automation, scaling and management of this ecosystem.

Currently maintained by the Cloud Native Computing Foundation (the result of Google's partnership with the Linux Foundation), this technology (commonly known as k8s) is based on the idea that the administrator defines the parameters, requirements and limits, and the Kubernetes cluster ensures that these objectives are met in the most efficient way.

The main components are the "worker machines", known as "nodes", which run the containerized applications. The worker nodes host the "pods", the smallest deployable units, inside which the application containers are executed.
As mentioned, the cluster administrator defines the number of worker machines and the desired scaling parameters, while the control plane is responsible for managing how the cluster distributes the load across the processing resources. This control plane includes components such as the kube-apiserver (which exposes the Kubernetes API at the front end), etcd (a key-value store that records the cluster's state), the scheduler and the controller-manager, among others.
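As a rough illustration of how these components are consumed, the sketch below uses the official Kubernetes Python client to ask the kube-apiserver for the cluster's nodes and pods; it assumes a cluster reachable through a local kubeconfig.

```python
# Minimal sketch: list nodes and pods through the kube-apiserver with the
# official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()    # read credentials from the local kubeconfig
v1 = client.CoreV1Api()      # client for the core API group

print("Nodes (worker machines):")
for node in v1.list_node().items:
    print(f"  {node.metadata.name}")

print("Pods in the 'default' namespace:")
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(f"  {pod.metadata.name} ({pod.status.phase})")
```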

One of the most interesting features of Kubernetes is "namespaces", which allow you to create logical divisions between environments (such as separating production and test environments). Other resources and functionalities, such as services, volumes, secrets and Helm charts, round out a comprehensive solution for development environments and for the operation of information technologies and systems.
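Here is a minimal sketch of creating such a logical division with the same Python client; the namespace name "test" is only an example.

```python
# Minimal sketch: create a "test" namespace to keep test workloads
# logically separated from production ones.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

ns = client.V1Namespace(metadata=client.V1ObjectMeta(name="test"))
v1.create_namespace(ns)      # the division exists only inside the cluster

# Workloads created with namespace="test" are now isolated from those
# running in other namespaces, such as "production".
```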


What's next?

Institutions, companies, foundations and other players in this field have been presenting the market, at a breakneck pace, with constant updates and new products and services, demanding continuous upskilling and specialization from professionals in the sector.

With the recent acquisition of VMware by Broadcom (at the end of 2023, for $69 billion), reactions are expected from other players, but also from VMware's large customers (some of whom are already looking for alternatives). Will VMware (now part of Broadcom) maintain its leading position in Gartner's Magic Quadrant? Will 2024 be a banner year for virtualization technologies? Will there be new leaders in the sector in the short term? These are some of the questions that arise.


Carlos Domingues

IT & Security Coordinator