Deployment Theory

Get introduced to another fundamental Kubernetes object: the Deployment.

Introduction to deployments

Deployments are the most popular way of running stateless apps on Kubernetes. They add self-healing, scaling, rollouts, and rollbacks.

Consider a quick example.

Assume we have a requirement for a web app that needs to be resilient, scale on demand, and be frequently updated. We write the app, containerize it, and define it in a Pod YAML so it can run on Kubernetes. We then wrap the Pod inside a Deployment and post it to Kubernetes, where the Deployment controller deploys the Pod. At this point, our cluster is running a single Deployment managing a single Pod.
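The steps above can be sketched as a single manifest. This is a minimal example, not the lesson's exact file; the names `web-deploy`, the `app: web` label, and the `example/web-app:1.0` image are hypothetical placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy              # hypothetical Deployment name
spec:
  replicas: 1                   # a single Pod, matching the example
  selector:
    matchLabels:
      app: web                  # must match the Pod template labels below
  template:                     # the Pod definition, wrapped inside the Deployment
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web-app:1.0   # hypothetical container image
        ports:
        - containerPort: 8080
```

Posting this to the cluster (for example, with `kubectl apply -f`) gives us the single Deployment managing a single Pod described above.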

If the Pod fails, the Deployment controller replaces it with a new one. If demand increases, the Deployment controller can deploy more identical Pods. When we update the app, the Deployment controller replaces the old Pods with new ones, a few at a time, in a controlled rollout.
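Scaling and controlled rollouts map directly onto fields in the Deployment spec. As a hedged sketch, the fragment below shows the relevant fields; the specific values are illustrative, not prescribed by the lesson.

```yaml
spec:
  replicas: 4                  # scale on demand by changing this number
  strategy:
    type: RollingUpdate        # replace old Pods gradually during an update
    rollingUpdate:
      maxUnavailable: 1        # at most one Pod below desired count at a time
      maxSurge: 1              # at most one extra Pod above desired count
```

Changing `replicas` and re-applying the manifest scales the app; changing the container image and re-applying triggers a rolling update that can later be rolled back.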

Assume the app has another stateless microservice, such as a shopping cart. We’d containerize this, wrap it in its own Pod, wrap the Pod in its own Deployment, and deploy it to the cluster.

At this point, we’d have two Deployments managing two different microservices.
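The cart microservice gets its own, separate manifest of the same shape. Again a sketch: the `cart-deploy` name, `app: cart` label, and image are assumed for illustration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart-deploy            # hypothetical name for the second Deployment
spec:
  replicas: 2                  # two shopping cart Pods, as in the figure below
  selector:
    matchLabels:
      app: cart
  template:
    metadata:
      labels:
        app: cart
    spec:
      containers:
      - name: cart
        image: example/cart:1.0    # hypothetical container image
```

Each Deployment is managed independently, so the web and cart microservices can be scaled, updated, and rolled back separately.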

The following figure shows this setup with the Deployment controller watching and managing both Deployments. The web Deployment manages four identical web server Pods, and the cart Deployment manages two identical shopping cart Pods.
