Painting the Big Picture
Learn about the flow of requests and how applications are deployed using the ingress gateway.
Flow of requests
Before we dive into the actual usage of Knative, let’s see which components we have and how they interact with each other. We’ll approach the subject by following the flow of a request, and that flow starts with a user.
When we send a request, it goes to the external load balancer, which in our case forwards it to the Istio Gateway, accessible through a Kubernetes Service created when we installed Istio. That’s the same Service that created the external load balancer if we’re running in GKE, EKS, or AKS. In the case of Minikube and Docker Desktop, there’s no external load balancer, so we’ll have to use our imagination.
Note: It could also be internal traffic, but for simplicity, we’ll focus on users. The differences are trivial.
From the external LB, requests are forwarded to the cluster and picked up by the Istio Gateway. Its job is to forward requests to the destination Service associated with our application. However, we don’t yet have the app, so let’s deploy something.
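If we’d like to confirm that the Service and the external load balancer are indeed there, we can take a quick look at them. The command that follows is only a sketch, assuming Istio was installed with the default istio-ingressgateway Service in the istio-system Namespace.
kubectl --namespace istio-system \
    get service istio-ingressgateway
The EXTERNAL-IP column should show the address of the load balancer on GKE, EKS, or AKS, while local clusters like Minikube and Docker Desktop will typically show pending or localhost instead.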
Deploying application
Creating namespace
We’ll simulate a deployment of a serverless application to production, so we’ll start by creating a namespace.
kubectl create namespace production
Since we’re using Istio, we can just tell it to auto-inject Istio proxy sidecars (Envoy). That’s not a requirement. We could also use Istio just for Knative internal purposes, but since we already have it, why not go all the way and use it for our applications?
As we saw when we installed Knative, all we have to do is add the istio-injection label to the namespace.
kubectl label namespace production \
    istio-injection=enabled
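To confirm that the label was applied, we can retrieve the Namespace together with its labels. This is just a quick sanity check; the exact output depends on whatever other labels exist in your cluster.
kubectl get namespace production \
    --show-labels
The output should contain istio-injection=enabled, which means every Pod created in that Namespace will get an Envoy sidecar injected automatically.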
Insufficient CPU error
Now comes the big moment. We’re about to deploy our first application using Knative. To simplify the process, we’ll use the kn CLI for that.
In its simplest form, all we have to do is execute the kn service and ...