Measuring the Actual Memory and CPU Consumption

Learn how to measure the actual memory and CPU consumption.

Exploring the options

How did we come up with the current memory and CPU values? Why did we set the memory of the Mongo image to 100Mi? Why not 50Mi or 1Gi? It is embarrassing to admit that the values we have right now are random. We guessed that the containers based on the vfarcic/go-demo-2 image require fewer resources than the Mongo image, so their values are comparatively smaller. That was the only criterion we used to define the resources.
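To make the discussion concrete, the kind of guessed values described above would be declared in the `resources` section of a container spec. The following is a minimal sketch; the Pod name and the numbers are illustrative guesses, not measured figures:

```shell
# Apply a Pod with guessed resource requests and limits.
# The name "db" and all the values below are illustrative only.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: db
    image: mongo:3.3
    resources:
      limits:
        memory: 100Mi
        cpu: 0.5
      requests:
        memory: 50Mi
        cpu: 0.3
EOF
```

Until we have real metrics, numbers like these are nothing more than educated guesses.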

When we put random values into resource specifications, we should acknowledge that we have no metrics to back them up. Anybody's guess is as good as ours.

The only way to truly know how much memory and CPU an application uses is by retrieving metrics. We'll use Metrics Server for that purpose.

Metrics Server collects and interprets various signals like compute resource usage, lifecycle events, etc. In our case, we’re interested only in the CPU and memory consumption of the containers we’re running in our cluster.

k3d clusters come with metrics-server already deployed as a system application.
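Assuming a k3d (k3s-based) cluster, the bundled metrics-server runs as a Deployment in the `kube-system` namespace, so we can check on it directly:

```shell
# List the metrics-server Deployment that ships with k3d/k3s clusters.
# On other distributions it may be deployed differently or not at all.
kubectl --namespace kube-system get deployment metrics-server
```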

The idea of using Metrics Server as a general-purpose monitoring tool has mostly been abandoned. Its primary purpose today is to serve as an internal component required by some Kubernetes features, such as the Horizontal Pod Autoscaler.

Instead, I'd suggest combining Prometheus with the Kubernetes API as the source of metrics, and Alertmanager for our alerting needs. However, those tools are beyond the scope of this chapter, so you might need to educate yourself from their documentation.

Note: Use Metrics Server only as a quick-and-dirty way to retrieve metrics. Explore the combination of Prometheus and Alertmanager for your monitoring and alerting needs.

Now that we have clarified what Metrics Server is good for, as well as what it isn’t, we can proceed and confirm that it is indeed running inside our cluster.
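One way to confirm that Metrics Server is serving metrics is to check that its API service is registered, and then query it through `kubectl top`. A sketch, assuming the cluster has had a minute or two to collect its first samples:

```shell
# Confirm that the metrics API (metrics.k8s.io) is registered with the API server.
kubectl get apiservices | grep metrics

# Once it reports as Available, retrieve current CPU and memory consumption
# per node and per Pod.
kubectl top nodes
kubectl top pods --all-namespaces
```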
