Containers vs. Virtual Machines

Virtualization

Traditionally, a computer was bound to a single environment running on one OS and one set of infrastructure, which often left resources under-utilized. To counter this problem and make better use of the resources at hand, virtualization was introduced, first through virtual machines (VMs) and later in the form of containers. Virtualization is an abstraction that enables the user to run multiple environments on a single OS and infrastructure.

Virtual machine

A virtual machine (VM) emulates a complete computer system. It provides a strong abstraction and allows a guest OS to run on top of the host OS.

VMs sit between the OS layer and the infrastructure layer, and each VM runs its own guest OS. The hypervisor, which can be software, firmware, or hardware, sits between the VMs and the infrastructure and plays an integral part in virtualization: it allocates processor, storage, and memory resources among the VMs.
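As a rough illustration of the hypervisor's role as a resource broker, the hedged Python sketch below uses the libvirt bindings (assuming a KVM/QEMU host with the libvirt-python package installed) to list the VMs a hypervisor manages and the memory and vCPUs allocated to each. The connection URI and the presence of any defined VMs are assumptions.

```python
# Minimal sketch, assuming a KVM/QEMU host with libvirt and the
# libvirt-python bindings installed; the URI below is an assumption.
import libvirt

# Connect to the local hypervisor (read-only access is enough for inspection).
conn = libvirt.openReadOnly("qemu:///system")

# List every domain (VM) the hypervisor knows about and report the
# processor and memory resources allocated to it.
for dom in conn.listAllDomains(0):
    state, max_mem_kib, mem_kib, vcpus, _cpu_time = dom.info()
    print(f"VM {dom.name()}: {vcpus} vCPU(s), "
          f"{mem_kib // 1024} MiB of {max_mem_kib // 1024} MiB allocated")

conn.close()
```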

A Windows host can run a full copy of another OS, for example, Linux, inside a VM. Although the user benefits from running two operating systems on one machine, doing so is quite taxing on the processor and RAM. When all an application needs is a couple of dependencies and libraries, virtualizing an entirely different OS can be overkill.

Container

A container is a relatively lightweight virtualization technique that virtualizes the OS instead of the whole computer system. It enables users to run isolated applications on a single host OS.

Containers run on top of the OS layer, so all containers use and share the same underlying host OS kernel, which significantly reduces overhead. Any extra libraries and binaries an application needs are packaged inside the container alongside it. This makes containers extremely fast and lightweight.
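To make the idea of OS-level virtualization concrete, here is a minimal, hedged Python sketch that uses the Linux unshare(2) system call (via ctypes) to move the current process into its own UTS namespace and give it a private hostname. This is one of the kernel isolation mechanisms container runtimes build on; the sketch assumes a Linux host and root privileges, and the hostname used is purely illustrative.

```python
# Minimal sketch of OS-level isolation, assuming a Linux host and
# root privileges (unshare(2) needs CAP_SYS_ADMIN for new namespaces).
import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000  # new UTS namespace: private hostname/domainname

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# Detach this process into its own UTS namespace.
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))

# Changing the hostname here is invisible to the rest of the host,
# just as a container's hostname is isolated from the host's.
socket.sethostname("demo-container")
print("hostname inside the namespace:", socket.gethostname())
```

Real container runtimes combine several such namespaces (PID, mount, network, and so on) with cgroups for resource limits, all provided by the shared host kernel rather than by a separate guest OS.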
