Container orchestration#
Containers are great. They provide an easy way to package and deploy services, offer process isolation and immutability, use resources efficiently, and are lightweight to create.
But when it comes to actually running containers in production, you can end up with dozens, hundreds, or even thousands of containers over time. These containers need to be deployed, managed, connected, and updated; if you were to do this manually, you’d need an entire team dedicated to the task.
It’s not enough to run containers; you need to be able to:
- Integrate and orchestrate these modular parts
- Scale up and scale down based on demand
- Make them fault tolerant
- Provide communication across a cluster
You might ask: aren’t containers supposed to do all that? The answer is that containers are only a low-level piece of the puzzle. The real benefits come from the tools that sit on top of containers, such as Kubernetes. These tools are known today as container orchestrators, or schedulers.
Great for multi-cloud adoption#
With many of today’s businesses gearing towards microservice architecture, it’s no surprise that containers and the tools used to manage them have become so popular.
Microservice architecture makes it easy to split your application into smaller, containerized components that can run in different cloud environments, giving you the option to choose the best host for your needs.
What’s great about Kubernetes is that it’s built to be used anywhere, so you can deploy to public, private, or hybrid clouds, enabling you to reach users where they are with greater availability and security. This flexibility also helps you avoid the hazards of vendor lock-in.
Deploy and update applications at scale for faster time-to-market#
Kubernetes allows teams to keep pace with the requirements of modern software development. Without Kubernetes, large teams would have to manually script their own deployment workflows.
Containers, combined with an orchestration tool, provide management of machines and services for you — improving the reliability of your application while reducing the amount of time and resources spent on DevOps.
Kubernetes has some great features that allow you to deploy applications faster with scalability in mind:
- Horizontal infrastructure scaling: New servers can be added or removed easily.
- Auto-scaling: Automatically change the number of running containers, based on CPU utilization or other application-provided metrics.
- Manual scaling: Manually scale the number of running containers through a command or the interface.
- Replication controller: The replication controller makes sure your cluster runs the desired number of pods. If there are too many pods, it terminates the extras; if there are too few, it starts more.
- Health checks and self-healing: Kubernetes checks the health of nodes and containers and replaces containers that fail their checks. With self-healing and auto-replacement built in, you don’t need to intervene when a container or pod fails.
- Traffic routing and load balancing: Traffic routing sends requests to the appropriate containers. Kubernetes also comes with built-in load balancing, so you can distribute traffic across containers and stay responsive during outages or periods of high traffic.
- Automated rollouts and rollbacks: Kubernetes rolls out new versions or updates without downtime while monitoring the containers’ health. If a rollout goes wrong, you can roll back to the previous version with a single command.
- Canary Deployments: Canary deployments enable you to test the new deployment in production in parallel with the previous version.
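As a rough sketch of what auto-scaling looks like in practice, the manifest below defines a HorizontalPodAutoscaler; the `web` Deployment name and the CPU threshold are hypothetical, not taken from this article:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # the Deployment to scale (assumed to exist)
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Manual scaling, by contrast, is a one-off command such as `kubectl scale deployment web --replicas=5`.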
“Before Kubernetes, our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Today, a new microservice takes less than five days to deploy. And we’re working on getting it to an hour.”
— Box
Better management of your applications#
Containers allow applications to be broken down into smaller parts which can then be managed through an orchestration tool like Kubernetes. This makes it easy to manage codebases and test specific inputs and outputs.
As mentioned earlier, Kubernetes has built-in features like self-healing and automated rollouts/rollbacks, effectively managing the containers for you.
To go even further, Kubernetes allows for declarative expressions of the desired state as opposed to an execution of a deployment script, meaning that a scheduler can monitor a cluster and perform actions whenever the actual state does not match the desired state. You can think of schedulers as operators who continually monitor the system and fix discrepancies between the desired and actual state.
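To make the declarative model concrete, here is a minimal Deployment sketch; the `hello` name and the `nginx:1.25` image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello                # hypothetical service name
spec:
  replicas: 3                # desired state: three pods at all times
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25  # placeholder image
```

After `kubectl apply -f hello.yaml`, the scheduler’s job is reconciliation: if a node dies and only two pods remain, the actual state (2) no longer matches the desired state (3), and Kubernetes starts a replacement pod without any deployment script being re-run.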
Overview/additional benefits#
- You can use it to deploy your services, roll out new releases without downtime, and scale those services up or down.
- It is portable.
- It can run on a public or private cloud.
- It can run on-premise or in a hybrid environment.
- You can move a Kubernetes cluster from one hosting vendor to another without changing (almost) any of the deployment and management processes.
- Kubernetes can be easily extended to serve nearly any needs. You can choose which modules you’ll use, and you can develop additional features yourself and plug them in.
- Kubernetes will decide where to run something and how to maintain the state you specify.
- Kubernetes can place replicas of a service on the most appropriate server, restart them when needed, replicate them, and scale them.
- Self-healing has been part of its design from the start; self-adaptation is on the way as well.
- Zero-downtime deployments, fault tolerance, high availability, scaling, scheduling, and self-healing are where Kubernetes adds the most value.
- You can use it to mount volumes for stateful applications.
- It allows you to store confidential information as secrets.
- You can use it to validate the health of your services.
- It can load balance requests and monitor resources.
- It provides service discovery and easy access to logs.
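For example, storing confidential information as a secret and injecting it into a container might look like the following sketch; the names and values here are made up for illustration:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical secret name
type: Opaque
stringData:
  password: s3cr3t-value     # illustration only; never commit real secrets
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.25      # placeholder image
      env:
        - name: DB_PASSWORD  # exposed to the process as an environment variable
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```

Because the secret lives in the cluster rather than in the image, the same container image can be promoted across environments while each environment supplies its own credentials.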