Canary Deployments with Flagger
Learn how to perform a canary deployment with Flagger.
We'll cover the following
- Performing a canary deployment with Flagger
- Creating a canary deployment with Flagger
- Ingress
- Canary
- Executing the canary deployment
- Find the external IP of the Ingress
- Configure the Ingress
- Apply the Ingress resource on the cluster
- Configure the canary
- Apply the canary resource on the cluster
- Inspect the application version
- Trigger the canary deployment
- Monitor the canary deployment
- Inspect the promoted application version
- Try it yourself
- Conclusion
Performing a canary deployment with Flagger
Canary deployments are a progressive delivery strategy supported by Flagger that allows the gradual rollout of a new application version, referred to as a canary. When a canary deployment is performed, Flagger will route an initial percentage of traffic to the canary and monitor its performance through metrics collected by Prometheus.
If the metrics appear healthy, Flagger will increase the share of traffic routed to the canary rather than the primary application while continuing to monitor it. This cycle of increasing traffic and monitoring continues until the canary reaches the configured maximum percentage of traffic it should receive. At that point, the canary is promoted and replaces the current version of the application, becoming the primary version that receives all incoming traffic.
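The full Canary resource is configured later in this lesson. As an illustrative sketch of the fields that drive this behavior (the names and values below are examples, not the lesson's configuration), the analysis section of a Flagger Canary resource might look like the following:

```yaml
# Illustrative analysis settings; this block sits under the spec of a
# flagger.app/v1beta1 Canary resource.
analysis:
  interval: 1m        # how often Flagger evaluates the metric checks
  threshold: 5        # failed checks allowed before the canary is rolled back
  maxWeight: 50       # maximum percentage of traffic routed to the canary
  stepWeight: 10      # traffic percentage added after each successful check
  metrics:
    - name: request-success-rate   # built-in metric collected via Prometheus
      thresholdRange:
        min: 99                    # minimum request success rate (%)
      interval: 1m
```

With these example values, Flagger would shift traffic to the canary in 10% steps every minute, rolling back if the success rate drops below 99%, and promoting the canary once it has healthily served 50% of the traffic.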
Creating a canary deployment with Flagger
When using Flagger, progressive delivery strategies such as canary deployments are created as Kubernetes resources on the cluster. To support the canary deployment, the following resources must be created on the cluster:
- Horizontal pod autoscaler
- Ingress
- Canary
To create these resources, we'll describe their specifications declaratively using YAML similarly to how we configured other Kubernetes resources.
Horizontal pod autoscaler
A horizontal pod autoscaler increases the number of replicas of a pod as demand on the workload grows. As the pods receive more traffic and their resource usage rises, the horizontal pod autoscaler automatically provisions additional replicas to meet the increased demand. Once demand subsides, it scales the replicas back down, removing the unnecessary resources. Here's an example of the declarative specification for a horizontal pod autoscaler:
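The values below are illustrative; they assume a Deployment named podinfo (the demo application commonly used with Flagger) that scales on CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo            # the Deployment whose replica count is scaled
  minReplicas: 2              # never scale below two replicas
  maxReplicas: 4              # never scale above four replicas
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 99   # add replicas when average CPU usage exceeds 99%
```

Flagger references this autoscaler from the Canary resource, so both the primary and the canary workloads can scale with traffic during the rollout.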