Using Canary Strategy with Flagger, Istio, and Prometheus
This lesson demonstrates how we can use the canary strategy with Flagger, Istio, and Prometheus.
Before we start exploring Canary deployments, let’s take a quick look at what we have so far to confirm that the first release using the new definition worked.
Confirming the first release
Is our application accessible through the Istio gateway? Let’s send a request to check.
curl $STAGING_ADDR
The output should say Hello from: Jenkins X golang http rolling update.
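If the response differs or the request fails, the release might simply need a bit more time to become reachable. The loop below is a minimal sketch, assuming the same STAGING_ADDR variable we used earlier, that retries the request until the gateway responds successfully.

# Retry until the application responds through the gateway.
# The five-second interval is an arbitrary choice.
until curl --silent --fail "$STAGING_ADDR"; do
    echo "Waiting for the application to become reachable..."
    sleep 5
done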
Now that we’ve confirmed that the application released with the new definition is accessible through the Istio gateway, we can take a quick look at the Pods running in the staging Namespace.
kubectl --namespace jx-staging \
    get pods
The output is as follows.
NAME                          READY STATUS  RESTARTS AGE
jx-jx-progressive-primary-... 2/2   Running 0        42s
jx-jx-progressive-primary-... 2/2   Running 0        42s
jx-jx-progressive-primary-... 2/2   Running 0        42s
There is a change, at least in the naming of the Pods belonging to jx-progressive. Now they contain the word primary. Given that we deployed only one release using Canary, those Pods represent the main release accessible to all the users. As you can imagine, the Pods were created by the corresponding jx-jx-progressive-primary ReplicaSet which, in turn, was created by the jx-jx-progressive-primary Deployment. As you can probably guess, there is also the jx-jx-progressive-primary Service that allows communication to those Pods, even though the sidecar containers injected by Istio complicate that further. Later on, we’ll see why all those are important.
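If you want to confirm that all those resources exist, a single kubectl query can list them. The command below is a sketch; the generated names may differ in your cluster, but each should contain jx-jx-progressive-primary.

# List the Deployment, ReplicaSet, and Service created for the
# primary release, filtering for "primary" to reduce the noise.
kubectl --namespace jx-staging \
    get deployments,replicasets,services \
    | grep primary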
What might matter more is the canary resource, so let’s take a look at it.
kubectl --namespace jx-staging \
    get canary
The output is as follows.
NAME              STATUS      WEIGHT LASTTRANSITIONTIME
jx-jx-progressive Initialized 0      2019-12-01T21:35:32Z
There’s not much going on there since we have only the first Canary release running. For now, please note that the canary resource can give us additional insight into the process.
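For example, describing the resource shows the events Flagger recorded so far, and it will show the progress of the analysis once we start deploying new releases. A quick sketch, assuming the resource name jx-jx-progressive from the output above:

kubectl --namespace jx-staging \
    describe canary jx-jx-progressive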
You saw that we set the Canary gateway to jx-gateway.istio-system.svc.cluster.local. As a result, when we deployed the first Canary release, it created the gateway for us. We can see it by retrieving the virtualservices.networking.istio.io resources.
kubectl --namespace jx-staging \
    get virtualservices.networking.istio.io
The output is as follows.
NAME              GATEWAYS                                    HOSTS                                                            AGE
jx-jx-progressive [jx-gateway.istio-system.svc.cluster.local] [staging.jx-progressive.104.196.199.98.nip.io jx-jx-progressive] 3m
We can see from the output that the gateway jx-gateway.istio-system.svc.cluster.local is handling external requests coming from staging.jx-progressive.104.196.199.98.nip.io as well as ...