This section shows some of the ways we can isolate workloads.

We’ll start at the cluster level, move to the node and runtime levels, and then look at network isolation and infrastructure such as firewalls.

Cluster-level workload isolation

Cutting straight to the chase, Kubernetes does not support secure multi-tenant clusters. The only way to isolate two workloads is to run them on their own clusters with their own hardware.

Let’s look a bit closer.

The only way to divide a Kubernetes cluster is by creating Namespaces. However, these are little more than a way of grouping resources and applying things such as the following (see the sketch after the list):

  • Limits

  • Quotas

  • RBAC rules
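For example, a minimal sketch of a Namespace with a ResourceQuota applied to it might look like this (the Namespace name and the quota values are illustrative, not prescriptive):

```yaml
# A hypothetical "shop" Namespace with a ResourceQuota attached.
apiVersion: v1
kind: Namespace
metadata:
  name: shop
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: shop-quota
  namespace: shop          # the quota applies to workloads in this Namespace
spec:
  hard:
    pods: "20"             # cap the number of Pods
    requests.cpu: "4"      # cap total CPU requests
    requests.memory: 8Gi   # cap total memory requests
```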

Namespaces do not prevent compromised workloads in one Namespace from impacting workloads in other Namespaces. This means we should never run hostile workloads on the same Kubernetes cluster.

Despite this, Kubernetes Namespaces are useful, and we should use them. Just don’t use them as security boundaries.

Namespaces and soft multi-tenancy

For our purposes, soft multi-tenancy is hosting multiple trusted workloads on shared infrastructure. By trusted, we mean workloads that don’t require absolute guarantees that one workload cannot impact another.

An example of a trusted workload might be an e-commerce application with a web front-end service and a back-end recommendation service. As they’re part of the same application, they’re not hostile to each other. However, we might want each service to have its own resource limits and be managed by a different team.

In situations like this, a single cluster with a Namespace for the front-end service and another for the back-end service might be a good solution.
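As a hedged sketch of the team-level separation, the following RoleBinding grants a hypothetical frontend-team group edit rights in the frontend Namespace only (the group and Namespace names are made up; the edit ClusterRole is one of Kubernetes’ built-in user-facing roles):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: frontend-team-edit
  namespace: frontend          # rights are scoped to this Namespace only
subjects:
- kind: Group
  name: frontend-team          # assumption: this group exists in your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # built-in ClusterRole granting read/write access
  apiGroup: rbac.authorization.k8s.io
```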

Namespaces and hard multi-tenancy

We’ll define hard multi-tenancy as hosting untrusted and potentially hostile workloads on shared infrastructure. However, as we said before, this isn’t currently possible with Kubernetes.

This means workloads requiring a strong security boundary need to run on separate Kubernetes clusters! Examples include:

  • Isolating production and non-production workloads

  • Isolating different customers

  • Isolating sensitive projects and business functions

Other examples exist, but the take-home point is that workloads requiring strong separation need their own clusters.

Note: The Kubernetes project has a dedicated Multi-Tenancy Working Group that’s actively working on multi-tenancy models. This means future Kubernetes releases might offer better solutions for hard multi-tenancy.

Node isolation

There will be times when we have applications that require non-standard privileges, such as running as root or executing non-standard syscalls. Isolating these on their own clusters might be overkill, but we might justify running them on a ring-fenced subset of worker nodes. Doing this limits the blast radius of a compromised workload to the other workloads on the same set of nodes.

We should also apply defense-in-depth principles by enabling stricter audit logging and tighter runtime defense options on nodes running workloads with non-standard privileges.

Kubernetes offers several technologies, such as labels, affinity and anti-affinity rules, and taints and tolerations, to help us target workloads to specific nodes.
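As a rough sketch, we might taint and label the ring-fenced nodes and then give the privileged workloads a matching node selector and toleration (the node, label, taint, and image names are all illustrative):

```yaml
# First, taint and label the dedicated nodes, for example:
#   kubectl taint nodes node1 dedicated=privileged:NoSchedule
#   kubectl label nodes node1 dedicated=privileged
apiVersion: v1
kind: Pod
metadata:
  name: privileged-app
spec:
  nodeSelector:
    dedicated: privileged      # only schedule onto the labelled nodes
  tolerations:
  - key: dedicated
    operator: Equal
    value: privileged
    effect: NoSchedule         # tolerate the taint that repels ordinary Pods
  containers:
  - name: app
    image: example.com/privileged-app:1.0   # illustrative image
```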

Runtime isolation

Containers versus virtual machines used to be a polarizing topic. However, when it comes to workload isolation, there’s only ever been one winner… virtual machines.

Most container platforms implement namespaced containers. This is a model where every container shares the host’s kernel, and isolation is provided by kernel constructs, such as namespaces and cgroups, that were never designed as strong security boundaries. Docker, containerd, and CRI-O are popular examples of container runtimes and platforms that implement namespaced containers.

This is very different from the hypervisor model, where every virtual machine gets its own dedicated kernel and is strongly isolated from other virtual machines using hardware enforcement.

However, it’s easier than ever to augment containers with security-related technologies that make them more secure and enable stronger workload isolation. These technologies include AppArmor, SELinux, seccomp, capabilities, and user namespaces, and most container runtimes and hosted Kubernetes services do a good job of implementing sensible defaults for them all. However, they can still be complex, especially when troubleshooting.
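As an illustrative example of opting in to some of these, a Pod can request the runtime’s default seccomp profile, drop all capabilities, and block privilege escalation (the Pod and image names are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: locked-down-app
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault     # use the container runtime's default seccomp profile
  containers:
  - name: app
    image: example.com/app:1.0   # illustrative image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]          # start with no Linux capabilities
```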

We should also consider different classes of container runtimes. Two examples are gVisor and Kata Containers, both of which provide stronger levels of workload isolation and are easy to integrate with Kubernetes thanks to the Container Runtime Interface (CRI) and Runtime Classes.
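For instance, assuming gVisor’s runsc handler is installed and configured on our nodes, a minimal RuntimeClass and a Pod targeting it might look like this:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                 # assumption: runsc is registered with the node's container runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor     # run this Pod under the stronger-isolation runtime
  containers:
  - name: app
    image: example.com/app:1.0   # illustrative image
```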

There are also projects that enable Kubernetes to orchestrate other workload types, such as virtual machines, serverless functions, and WebAssembly.

While some of this might overwhelm us, we need to consider all of it when determining the isolation levels our workloads require.

To summarize, the following workload isolation options exist:

  1. Virtual Machines: Every workload gets its own dedicated kernel. This provides excellent isolation, but VMs are comparatively slow and resource-intensive.

  2. Namespaced containers: All containers share the host’s kernel. These are fast and lightweight but require extra effort to improve workload isolation.

  3. Run every container in its own virtual machine: Solutions like these attempt to combine the versatility of containers with the security of VMs by running every container in its own dedicated VM. Despite using specialized lightweight VMs, these solutions lose much of the appeal of containers, and they’re not very popular.

  4. Use different runtime classes: This allows us to run all workloads as containers, but we target the workloads requiring stronger isolation to an appropriate container runtime.

  5. Wasm containers: Wasm containers package Wasm (WebAssembly) apps in OCI containers that can execute on Kubernetes. These apps only use containers for packaging and scheduling; at run time, they execute inside a secure, deny-by-default Wasm host (see the sketch after this list). See the WebAssembly chapter for more details.
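As a hedged sketch of option 5, and assuming a containerd Wasm shim (such as those from the runwasi or SpinKube projects) is installed and registered under a spin handler on our nodes, scheduling a Wasm workload follows the same RuntimeClass pattern:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm
handler: spin                  # assumption: a Wasm shim is registered under this handler name
---
apiVersion: v1
kind: Pod
metadata:
  name: wasm-app
spec:
  runtimeClassName: wasm       # execute inside the Wasm host instead of a normal container
  containers:
  - name: app
    image: example.com/wasm-app:1.0   # illustrative OCI image wrapping the Wasm module
```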

Network isolation

Firewalls are an integral part of any layered information security system. The goal is to allow only authorized communications.

In Kubernetes, Pods communicate over an internal network called the pod network. However, Kubernetes doesn’t implement the pod network itself. Instead, it implements a plugin model called the Container Network Interface (CNI) that allows third-party vendors to implement the pod network. Lots of CNI plugins exist, but they fall into two broad categories:

  • Overlay

  • BGP

Each has a different impact on firewall implementation and network security.
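Assuming the chosen CNI plugin enforces Kubernetes NetworkPolicies (not all do), a minimal default-deny sketch for a Namespace might look like this, with authorized flows then whitelisted by additional policies (the Namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: shop              # illustrative Namespace
spec:
  podSelector: {}              # select every Pod in the Namespace
  policyTypes:
  - Ingress                    # deny all inbound traffic...
  - Egress                     # ...and all outbound traffic
```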

Kubernetes and overlay networking

Most Kubernetes environments implement the pod network as a simple flat overlay network that hides any network complexity between cluster nodes. For example, we might deploy our cluster nodes across ten different networks connected by routers, but Pods connect to the flat pod network and communicate without needing to know anything about the underlying host networks. The following figure shows four nodes on two separate networks and the Pods connected to a single overlay pod network.
