Where Did Kubernetes Come From?
Let's go through the origins of Kubernetes.
Kubernetes was developed by a group of Google engineers partly in response to Amazon Web Services (AWS) and Docker. AWS changed the world when it invented modern cloud computing, and everyone needed to catch up.
One of the companies catching up was Google. They’d built their own cloud but needed a way to abstract away AWS infrastructure and make it as easy as possible for customers to move off AWS and onto their cloud. They also ran production apps, such as Search and Gmail, on billions of containers per week.
At the same time, Docker was taking the world by storm, and users needed help managing explosive container growth.
While all this was happening, a group of Google engineers took the lessons they’d learned using their internal container management tools and created a new tool called Kubernetes. In 2014, they open-sourced Kubernetes and donated it to the newly formed Cloud Native Computing Foundation (CNCF).
At the time of writing, Kubernetes is ~10 years old and has experienced incredible growth and adoption. However, at its core, it still does the two things Google and the rest of the industry need:
It abstracts infrastructure (such as AWS)
It simplifies moving applications between clouds
These are two of the biggest reasons Kubernetes is important to the industry.
Kubernetes and Docker
All early versions of Kubernetes shipped with Docker as their container runtime. This means Kubernetes used Docker for low-level tasks such as creating, starting, and stopping containers. However, two things happened:
Docker got bloated
People created lots of Docker alternatives
As a result, the Kubernetes project created the container runtime interface (CRI) to make the runtime layer pluggable. This means we can pick and choose the best runtimes for our needs. For example, some runtimes provide better isolation, whereas others provide better performance.
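One way the CRI’s pluggability surfaces to users is the RuntimeClass resource, which registers a named runtime handler that Pods can request. The following is a minimal sketch; the name `gvisor` and handler `runsc` are illustrative assumptions and only work if the node’s CRI runtime is actually configured with that handler.

```yaml
# Sketch of a RuntimeClass registering an alternative runtime handler.
# Assumes the node's CRI runtime (e.g., containerd) has a handler
# named "runsc" configured -- this is an example, not a given setup.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor        # name Pods will reference
handler: runsc        # handler name known to the node's CRI runtime
```

A cluster can register several RuntimeClasses like this, one per runtime, which is how we “pick and choose” runtimes with different isolation or performance characteristics.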
Kubernetes 1.24 removed the dockershim component, ending support for Docker as a runtime, as Docker was overkill for what Kubernetes needed. Since then, most new Kubernetes clusters ship with containerd (pronounced “container dee”) as the default runtime. Fortunately, containerd is a stripped-down version of Docker optimized for Kubernetes, and it fully supports applications containerized by Docker. In fact, Docker, containerd, and Kubernetes all work with images and containers that implement the Open Container Initiative (OCI) standards.
The following figure shows a four-node cluster running multiple container runtimes.
Notice how some of the nodes have multiple runtimes. Configurations like this are fully supported and increasingly common. We’ll work with a configuration like this in a later chapter when we deploy a WebAssembly (Wasm) app to Kubernetes.
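To target one of these runtimes, a Pod names the RuntimeClass it wants via `runtimeClassName`. The sketch below assumes a RuntimeClass called `wasm` already exists in the cluster and that at least one node has a matching handler; the image name is a placeholder.

```yaml
# Sketch of a Pod requesting a specific runtime.
# Assumes a RuntimeClass named "wasm" is registered in the cluster
# and some node's runtime supports its handler.
apiVersion: v1
kind: Pod
metadata:
  name: wasm-app
spec:
  runtimeClassName: wasm          # must match an existing RuntimeClass
  containers:
  - name: app
    image: example.com/wasm-app:latest   # placeholder image
```

If no node supports the requested handler, the Pod simply won’t be scheduled, which is why multi-runtime nodes like those in the figure are convenient.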
What about Docker Swarm?
In 2016 and 2017, Docker Swarm, Mesosphere DCOS, and Kubernetes competed to become the industry standard container orchestrator. Kubernetes won.
However, Docker Swarm remains under active development and is popular with small companies wanting a simple alternative to Kubernetes.
Kubernetes and Borg: Resistance is futile!
We already said that Google has been running containers at a massive scale for a very long time. Two in-house tools, Borg and Omega, orchestrated those billions of containers. So, it’s easy to make the connection with Kubernetes — all three orchestrate containers at scale, and all three are related to Google.
However, Kubernetes is not an open-source version of Borg or Omega. It’s more like Kubernetes shares its DNA and family history with them.
As things stand, Kubernetes is an open-source project owned by the CNCF and licensed under the Apache 2.0 license. Version 1.0 shipped way back in July 2015, and at the time of writing, we’re already at version 1.31, averaging three new releases per year.