Pod Theory

Every app on Kubernetes runs inside a Pod.

  • When we deploy an app, we deploy it in a Pod.

  • When we terminate an app, we terminate its Pod.

  • When we scale an app up, we add more Pods.

  • When we scale an app down, we remove Pods.

  • When we update an app, we replace its old Pods with new ones.

This makes Pods fundamental to everything we do on Kubernetes, which is why this chapter covers them in detail.
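The lifecycle events above all revolve around a Pod object. As a sketch, a minimal Pod manifest looks like this (the names and image are illustrative, not from this chapter):

```yaml
# Hypothetical minimal Pod manifest -- name, labels, and image are illustrative
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: hello-ctr
      image: nginx:latest     # any container image works here
      ports:
        - containerPort: 80
```

Saved to a file, this could be deployed with `kubectl apply -f pod.yaml` and terminated with `kubectl delete pod hello-pod`, matching the deploy/terminate events in the list above.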

Introduction to Pods

Kubernetes uses Pods for many reasons: they provide an abstraction layer, enable resource sharing, add features, enhance scheduling, and more.

Let’s take a closer look at some of those.

Pods are an abstraction layer

Pods abstract the details of different workload types. This means we can run containers, VMs, serverless functions, and Wasm apps in them, and Kubernetes doesn’t know the difference.

Using Pods as an abstraction layer benefits Kubernetes as well as the workloads:

  • Kubernetes can focus on deploying and managing Pods without having to care what’s inside them

  • Heterogeneous workloads can run side-by-side on the same cluster, leverage the full power of the declarative Kubernetes API, and get all the other benefits of Pods

Containers and Wasm apps work with standard Pods, standard workload controllers, and standard runtimes. However, serverless functions and VMs need a bit of extra help.
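One mechanism behind this is the standard RuntimeClass resource, which lets a Pod select a non-default runtime while remaining an ordinary Pod to the rest of Kubernetes. The following is a sketch; the handler name `spin` and the image are illustrative and assume a node with a Wasm-capable runtime configured:

```yaml
# Hypothetical RuntimeClass exposing a Wasm runtime
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm
handler: spin            # illustrative; must match a runtime configured on the node
---
# A Pod that selects the Wasm runtime -- to the API it's still just a Pod
apiVersion: v1
kind: Pod
metadata:
  name: wasm-pod
spec:
  runtimeClassName: wasm
  containers:
    - name: wasm-app
      image: example.com/wasm-app:latest   # illustrative image
```

The Pod spec is unchanged apart from the `runtimeClassName` field, which is exactly the abstraction benefit described above: Kubernetes schedules and manages the Pod without caring what kind of workload runs inside it.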

Serverless functions run in standard Pods but require platforms like Knative to extend the Kubernetes API with custom resources and controllers. VMs are similar, needing tools like KubeVirt to extend the API.

The following figure shows four different workloads running on the same cluster. Each workload is wrapped in a Pod, managed by a controller, and uses a standard runtime. VM workloads run in a VirtualMachineInstance (VMI) instead of a Pod, but VMIs are very similar to Pods and utilize a lot of Pod features.
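To illustrate how close VMIs are to Pods, here is a sketch of a KubeVirt VirtualMachineInstance manifest. It assumes KubeVirt is installed on the cluster; the name, sizes, and disk image are illustrative:

```yaml
# Hypothetical KubeVirt VMI -- note the Pod-like metadata/spec structure
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: test-vmi
spec:
  domain:
    devices:
      disks:
        - name: rootdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 64Mi          # illustrative sizing
  volumes:
    - name: rootdisk
      containerDisk:
        image: example.com/vm-disk:latest   # illustrative VM disk image
```

Like a Pod, the VMI is a declarative object with metadata and a spec, is created through the Kubernetes API, and is managed by controllers, which is why the figure treats it as a near-equivalent of a Pod.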
