Observe Metrics Server Data
In this lesson, we will observe the data contained in Metrics Server.
Resource usage of the nodes is useful, but it is not what we're looking for. In this chapter, we're focused on auto-scaling Pods. Before we get there, though, we should observe how much memory each of our Pods is using. We'll start with those running in the kube-system Namespace.
Memory consumption of Pods running in kube-system
Execute the following from your command line to see the memory consumption of all the Pods running in the kube-system Namespace.
kubectl -n kube-system top pod
The output (on Docker For Desktop) is as follows.
NAME                                         CPU(cores)   MEMORY(bytes)
etcd-docker-for-desktop                      16m          74Mi
kube-apiserver-docker-for-desktop            33m          427Mi
kube-controller-manager-docker-for-desktop   44m          63Mi
kube-dns-86f4d74b45-c47nh                    1m           39Mi
kube-proxy-r56kd                             2m           22Mi
kube-scheduler-docker-for-desktop            13m          23Mi
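If you want the biggest consumers at the top, `kubectl top pod` also accepts a `--sort-by` flag (`cpu` or `memory`). And once you have the captured text, a short pipe can total the memory column. The sketch below works on the sample output shown above, so the numbers will differ on your cluster.

```shell
# Sample output from `kubectl -n kube-system top pod`, captured above.
top_output='NAME                                         CPU(cores)   MEMORY(bytes)
etcd-docker-for-desktop                      16m          74Mi
kube-apiserver-docker-for-desktop            33m          427Mi
kube-controller-manager-docker-for-desktop   44m          63Mi
kube-dns-86f4d74b45-c47nh                    1m           39Mi
kube-proxy-r56kd                             2m           22Mi
kube-scheduler-docker-for-desktop            13m          23Mi'

# Skip the header row, let awk coerce "74Mi" to the number 74,
# and sum the MEMORY(bytes) column.
echo "$top_output" | awk 'NR > 1 { sum += $3 } END { print sum "Mi" }'
```

Run against this sample, the pipe prints `648Mi`, the combined memory footprint of the system Pods.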
We can see resource usage (CPU and memory) for each of the Pods currently running in the kube-system Namespace. If we do not find better tools, we could use that information to adjust the requests of those Pods to be more accurate. However, there are better ways to get that info, so we'll skip ...
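To make the idea concrete, here is a hypothetical container spec fragment with its requests nudged slightly above observed usage (for example, kube-dns measured roughly 39Mi and 1m above). The container name and values are illustrative assumptions, not taken from the lesson's cluster.

```yaml
# Hypothetical fragment of a Pod spec: requests set a bit above
# the usage reported by `kubectl top pod`.
containers:
- name: example-container   # illustrative name, not from the lesson
  resources:
    requests:
      memory: 50Mi
      cpu: 5m
```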