Every component service in our application should provide liveness and readiness metrics at a minimum. In short, a liveness metric advertises that a service is running and healthy, while a readiness metric advertises that the service is ready to accept requests. When a service is managed by an orchestrator (specifically Kubernetes), its definition can include references to endpoints that individually report these metrics. The endpoints don’t need to be exposed outside of a pod.

If a liveness endpoint reports that the service has failed, or doesn’t respond at all, the orchestrator terminates the pod and replaces it with a new instance.

If a readiness endpoint indicates the container is unavailable, the orchestrator will stop sending request traffic to it.

Note that these endpoints won’t have any useful function in our docker-compose development environment. Docker Compose is a simplified orchestrator that focuses on running multiple containers on a single machine. Kubernetes is a fuller-featured orchestration platform that adds capabilities such as traffic routing and monitoring service health across multiple compute nodes.

Creating liveness endpoints is relatively simple.

Liveness endpoints

It’s possible to configure a liveness probe in Kubernetes using any of three methods:

  1. Through a command probe

  2. Through an HTTP request probe

  3. Through a TCP probe
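As a sketch, the three probe types are declared in a container spec roughly as follows (the command, path, and port values here are hypothetical, not taken from the course):

```yaml
# Three ways to declare a liveness probe on a container (values are hypothetical).

# 1. Command probe: the container is considered live if the command exits with code 0.
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  periodSeconds: 10

# 2. HTTP request probe: live if the endpoint returns a success status code.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10

# 3. TCP probe: live if a TCP connection to the port can be established.
livenessProbe:
  tcpSocket:
    port: 5000
  periodSeconds: 10
```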

For the Producer service, we’ll focus on creating a Liveness API endpoint, which will be configured with an HTTP request probe.

For the Consumer service, we’ll focus on creating a TCP listener service, which will be configured with a TCP probe.
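As a rough sketch of what such a listener could look like (the port and class name are assumptions, not the course's actual code), a minimal hosted service that accepts and immediately closes TCP connections is enough to satisfy a tcpSocket probe, since Kubernetes only checks that a connection can be established:

```csharp
// Hypothetical sketch of a TCP liveness listener for the Consumer service.
// The port (5000) and class name are assumptions; the course's code may differ.
using System.Net;
using System.Net.Sockets;

public sealed class LivenessTcpListener : BackgroundService
{
    private readonly TcpListener _listener = new(IPAddress.Any, 5000);

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _listener.Start();
        while (!stoppingToken.IsCancellationRequested)
        {
            // Accept and immediately dispose the connection; the probe only
            // needs the TCP handshake to succeed.
            using TcpClient client =
                await _listener.AcceptTcpClientAsync(stoppingToken);
        }
    }
}
```

The listener would be registered in the consumer's host with `builder.Services.AddHostedService<LivenessTcpListener>();` so it runs alongside the message-consuming work.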

Adding the producer liveness endpoint

In the Program.cs file of the producer project, there are already inline definitions for simple endpoints using the MapGet() and MapPost() methods. We add the following between the / and /send endpoint definitions:
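The course's exact snippet isn't reproduced here, but a minimal liveness endpoint in an ASP.NET Core minimal API looks something like this (the `/health/live` route is a hypothetical choice, and the `/` and `/send` bodies are placeholders):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", () => "Producer");               // existing endpoint (placeholder body)

// Liveness endpoint: returns 200 OK while the process can serve requests.
// The route path is hypothetical; it must match the httpGet probe's `path`.
app.MapGet("/health/live", () => Results.Ok());

app.MapPost("/send", () => Results.Accepted());  // existing endpoint (placeholder body)

app.Run();
```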
