In this article, you’ll learn about virtualization technology and understand the core Docker concepts. Docker is one implementation of container-based virtualization. So, let’s understand how virtualization technology has evolved over time.
In the pre-virtualization days, we were using big server racks.
Underneath, we have the physical server. We install the desired operating system on it, then we run the application on top of the operating system. And each physical machine would only run one application.
So, what was the problem with this model?
1] First of all, we have to purchase a physical machine in order to deploy each application, and those commercial servers can be very expensive. We might end up using only a fraction of the CPU or memory of the machine, and the rest of the resources go to waste, yet we have to pay for the whole hardware up-front.
2] Deployment time is often slow. The process of purchasing and configuring new physical servers can take ages, especially for big organizations.
3] It is painful to migrate our applications to servers from a different vendor.
Let’s say we installed our application on an IBM server. It would take a lot of effort to migrate to Dell servers; a significant amount of configuration change and manual intervention would be required.
Hypervisor-based virtualization technology came to the rescue.
Let’s take a look at this virtualization model.
Underneath, we have the physical server. Then we install the desired operating system. On top of the operating system, a hypervisor layer is introduced, which allows us to install multiple virtual machines on a single physical machine.
Each VM can have a different operating system. For example, we can have Ubuntu installed on one VM and Debian on another.
In this way, we can run multiple operating systems on a single physical machine and each operating system can run a different application.
This is the traditional model of virtualization, commonly referred to as hypervisor-based virtualization. Popular hypervisor providers include VMware and VirtualBox.
In the early days, users would deploy VMs on their own physical servers. Nowadays, more and more companies have shifted to deploying VMs in the cloud with providers such as AWS and Microsoft Azure, which means we don’t even have to purchase physical machines up-front.
There are some huge benefits to this model.
1] First of all, it is more cost-effective. Each physical machine is divided into multiple VMs, and each one uses only its own CPU, memory, and storage resources. We pay only for the computing power, storage, and other resources we use, with no up-front commitment: the typical pay-as-you-go model.
2] It is easy to scale. With VMs deployed in the cloud, if we want more instances of our application, we don’t need to go through the long process of ordering and configuring new physical servers.
We can deploy more VMs in the cloud with a few clicks. The time taken to scale our application shrinks from weeks to minutes.
This hypervisor-based virtualization model has an obvious advantage over the one-application-per-server model. But it still has some limitations.
1] First of all, each virtual machine still needs to have an operating system installed: an entire guest operating system with its own memory management, device drivers, daemons, and so on.
When we talk about a Linux operating system, we are talking about a kernel. For example, with three VMs we have three guest operating systems and three kernels.
Even though they may be three different kernels, we are still replicating a lot of the core functionality of Linux.
In this traditional hypervisor-based virtualization model, we need an entire operating system simply to run our application, which is still not efficient.
2] Application portability is not guaranteed. Even though some progress has been made in getting virtual machines to run across different types of hypervisors, VM portability is still at an early stage, and a lot of work remains to be done.
Container-Based Virtualization Technology
Finally, container-based virtualization technology arrived. Docker is one implementation of container-based virtualization.
Let’s look at the following structure.
Underneath, we have our server, which can be either a physical machine or a virtual machine. Then we install our operating system on the server. On top of the OS, we install a container engine, which allows us to run multiple guest instances. Each guest instance is called a container. Within each container, we install the application and all the libraries the application depends on.
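As a concrete sketch of this layering, here is a minimal, hypothetical Dockerfile (the file name `app.py` and the Python runtime are illustrative assumptions, not part of the article):

```dockerfile
# Base image: an Ubuntu userland only -- no kernel is included,
# because containers use the host's kernel.
FROM ubuntu:22.04

# Install the library/runtime the application depends on.
RUN apt-get update && apt-get install -y python3

# Copy the application itself into the container.
COPY app.py /opt/app/app.py

# Command to run when a container is started from this image.
CMD ["python3", "/opt/app/app.py"]
```

Building this with `docker build -t myapp .` and starting it with `docker run myapp` would launch one guest instance (container) on top of the container engine.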
The key to understanding the difference between the hypervisor-based virtualization model and container-based virtualization model is the replication of the kernels.
In the traditional model, each application runs on its own copy of the kernel, and the virtualization happens at the hardware level.
In the new model, we have only one kernel: the host’s. The container engine supplies different binaries and runtimes to the applications running in isolated containers, but all containers share the host’s kernel.
In the new model, the virtualization happens at the operating-system level. Containers share the host’s OS, so this approach is much more efficient and lightweight.
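You can observe this kernel sharing directly. The following transcript assumes a Linux host with Docker installed; both commands should print the same kernel version, even though Alpine is a completely different distribution from the host:

```
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # kernel version seen inside an Alpine container
```

If these were VMs instead of containers, each guest would report its own kernel.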
You might be thinking: what do we gain by running those applications in different containers? Why can’t we run all applications in a single VM? It comes down to the nature of isolation.
As you know, most applications depend on various third-party libraries.
Let’s say we want to run two C# applications that depend on two different versions of the .NET Core framework. Running those two applications in the same VM without introducing any conflicts would be quite challenging.
By leveraging containers, we can easily isolate the two runtime environments. Let’s say application A requires .NET Core 2.2: we install .NET Core 2.2 in the first container and run application A there.
Application B requires .NET Core 3.1, so we install .NET Core 3.1 only in the second container and run application B there.
In this way, we have two containers on the same machine, running two different applications, each with a different version of the .NET Core framework. This is what we call runtime isolation.
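The two isolated runtimes above can be sketched as two Dockerfiles. The app names and paths (`AppA.dll`, `AppB.dll`, `./appA`, `./appB`) are hypothetical, and the exact image tags should be verified against Microsoft’s container registry before use:

```dockerfile
# Dockerfile for application A -- pins the .NET Core 2.2 runtime.
FROM mcr.microsoft.com/dotnet/core/runtime:2.2
COPY ./appA /app
ENTRYPOINT ["dotnet", "/app/AppA.dll"]
```

```dockerfile
# Dockerfile for application B -- pins the .NET Core 3.1 runtime.
FROM mcr.microsoft.com/dotnet/core/runtime:3.1
COPY ./appB /app
ENTRYPOINT ["dotnet", "/app/AppB.dll"]
```

Each image carries its own framework version, so the two containers never see each other’s dependencies even though they run on the same host.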
Compared to hypervisor-based virtualization, container-based virtualization has some obvious benefits.
1] It is more cost-effective.
Container-based virtualization does not create an entire virtual operating system. Instead, only the required components are packaged inside the container with the application. Containers therefore consume less CPU, RAM, and storage than VMs, which means we can run more containers than VMs on one physical machine.
2] Faster Deployment Speed.
Containers house only the minimal requirements for running the application, so a container can start almost as fast as an ordinary process and can be many times faster to boot than a VM.
3] Great portability.
Because containers are independent, self-sufficient application bundles, they can run across machines without compatibility issues.
Thank you for reading this article. I hope it helps you get started running your applications with virtualization technology.