Rolling Updates With Deployments

Let's see how to perform rolling updates with Deployments.

Let's take a closer look at rolling updates and rollbacks.

Rolling updates

Deployments are amazing at zero-downtime rolling updates (rollouts). But they work best if we design our apps to be:

  1. Loosely coupled via APIs

  2. Backward and forward compatible

Both are hallmarks of modern cloud-native microservices apps and work as follows.

Our microservices should always be loosely coupled and only communicate via well-defined APIs. Doing this means we can update and patch any microservice without having to worry about impacting others — all connections are via formalized APIs that expose documented interfaces and hide specifics.

Ensuring releases are backward- and forward-compatible means we can perform independent updates without caring which versions of clients are consuming the service. A simple non-tech analogy is a car. Cars expose a standard driving “API” that includes a steering wheel and foot pedals. As long as we don’t change this “API”, we can re-map the engine, change the exhaust, and fit bigger brakes, all without the driver having to learn any new skills.

With these points in mind, zero-downtime rollouts work like this.

Assume we’re running five replicas of a stateless microservice. Clients can connect to any of the five replicas as long as all clients connect via backward and forward-compatible APIs. To perform a rollout, Kubernetes creates a new replica running the new version and terminates one running the old version. At this point, we’ve got four replicas on the old version and one on the new. This process repeats until all five replicas are on the new version. As the app is stateless and multiple replicas are up and running, clients experience no downtime or interruption of service.

There’s a lot more going on behind the scenes, so let’s take a closer look.

Each microservice is built as a container and wrapped in a Pod. We then wrap each Pod in its own Deployment for self-healing, scaling, and rolling updates. Each Deployment describes all the following:

  • Number of Pod replicas

  • Container images to use

  • Network ports

  • How to perform rolling updates
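A minimal manifest covering those four items might look like the following sketch (the names, labels, image, and port are illustrative, not from the original lesson):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy                  # illustrative name
spec:
  replicas: 5                       # number of Pod replicas
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate             # how to perform rolling updates
    rollingUpdate:
      maxSurge: 1                   # at most one extra Pod during a rollout
      maxUnavailable: 0             # never drop below the desired count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # container image to use
          ports:
            - containerPort: 8080      # network port
```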

We post Deployment YAML files to the API server, and the ReplicaSet controller ensures the correct number of Pods gets created and scheduled. It also watches the cluster, ensuring observed state matches desired state. A Deployment sits above the ReplicaSet, governing its configuration and adding mechanisms for rollouts and rollbacks.

All good so far.

Now, assume we’re exposed to a known vulnerability and need to release an update with the fix. To do this, we update the same Deployment YAML file with the new Pod spec and re-post it to the API server. This updates the existing Deployment object with a new desired state requesting the same number of Pods, but all running the newer version containing the fix.
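In practice, such an update is usually just a one-line change to the image tag in the Deployment's Pod template; everything else stays the same. A hedged sketch (the image names and `kubectl` filename are illustrative):

```yaml
# Inside the Deployment's Pod template, bump only the image tag:
spec:
  template:
    spec:
      containers:
        - name: web
          image: example.com/web:1.1   # was example.com/web:1.0
# Re-post the updated file to the API server with, e.g.:
#   kubectl apply -f deploy.yml
```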

At this point, observed state no longer matches desired state — we’ve got five old Pods, but we want five new ones.

To reconcile, the Deployment controller creates a new ReplicaSet defining the same number of Pods but running the newer version. We now have two ReplicaSets — the original one for the Pods with the old version and the new one for the Pods with the new version. The Deployment controller systematically increments the number of Pods in the new ReplicaSet as it decrements the number in the old ReplicaSet. The net result is a smooth incremental rollout with zero downtime.
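The increment/decrement dance above can be sketched as a tiny simulation. This is a hypothetical illustration, not the real controller: the actual Deployment controller also honors `maxSurge`, `maxUnavailable`, and Pod readiness before each step.

```python
# Minimal sketch of a one-at-a-time rolling update across two ReplicaSets.
from dataclasses import dataclass

@dataclass
class ReplicaSet:
    version: str
    replicas: int

def roll(old: ReplicaSet, new: ReplicaSet, desired: int):
    """Shift replicas one at a time from the old ReplicaSet to the new one."""
    history = []
    while new.replicas < desired:
        new.replicas += 1                    # create one Pod on the new version
        old.replicas -= 1                    # terminate one Pod on the old version
        history.append((old.replicas, new.replicas))
    return history

old_rs = ReplicaSet("v1", 5)
new_rs = ReplicaSet("v2", 0)
steps = roll(old_rs, new_rs, desired=5)
print(steps)  # [(4, 1), (3, 2), (2, 3), (1, 4), (0, 5)]
```

After the final step the old ReplicaSet manages zero Pods but is kept around, which is exactly what makes rollbacks possible.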

The same process happens for future updates — we keep updating the same Deployment manifest, which we should store in a version control system.

The following figure shows a Deployment that’s been updated once. The initial release created the ReplicaSet on the left, and the update created the one on the right. The update has completed, as the ReplicaSet on the left is no longer managing any Pods, whereas the one on the right is managing three live Pods.
