Exploring Prometheus Adapter
In this lesson, we will introduce and explore the Prometheus Adapter and the benefits of using it.
Given that we want to extend the metrics available through the Metrics API and that Kubernetes allows us to do so through its Custom Metrics API, one option to accomplish our goals could be to create our own adapter. Depending on the application (DB) where we store metrics, that might be a good option. But, given that it is pointless to reinvent the wheel, our first step should be to search for a solution. If someone already created an adapter that suits our needs, it would make sense to adopt it instead of creating a new one by ourselves. Even if we do choose something that provides only part of the features we’re looking for, it’s easier to build on top of it (and contribute back to the project) than to start from scratch.
Prometheus Adapter #
Given that our metrics are stored in Prometheus, we need a metrics adapter that will be capable of fetching data from it. Since Prometheus is very popular and adopted by the community, there is already a project waiting for us to use. It's called Kubernetes Custom Metrics Adapter for Prometheus. It is an implementation of the Kubernetes Custom Metrics API that uses Prometheus as the data source.

Since we adopted Helm for all our installations, we'll use it to install the adapter.
helm install prometheus-adapter \
    stable/prometheus-adapter \
    --version 1.4.0 \
    --namespace metrics \
    --set image.tag=v0.5.0 \
    --set metricsRelistInterval=90s \
    --set prometheus.url=http://prometheus-server.metrics.svc \
    --set prometheus.port=80

kubectl -n metrics \
    rollout status \
    deployment prometheus-adapter
We installed the prometheus-adapter Helm Chart from the stable repository. The resources were created in the metrics Namespace, and the image.tag is set to v0.5.0.
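If we want to double-check which of the Chart's defaults we overrode, we can ask Helm to show the user-supplied values of the release. The command below is only a sketch; it assumes Helm 3 and that the release is indeed named prometheus-adapter, just as in the installation above.

# Show only the values we supplied during the installation
helm get values prometheus-adapter \
    --namespace metrics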
We changed metricsRelistInterval from the default value of 30s to 90s. That is the interval the adapter will use to fetch metrics from Prometheus. Since our Prometheus setup is fetching metrics from its targets every sixty seconds, we had to set the adapter's interval to a value higher than that. Otherwise, the adapter's frequency would be higher than Prometheus' pulling frequency, and we'd have iterations without new data.
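If we're not sure which scrape interval our Prometheus setup uses, we can inspect its configuration. The snippet below is a sketch; it assumes the configuration is stored in a ConfigMap named prometheus-server in the metrics Namespace, which is what the stable/prometheus Chart creates by default.

# Look for the global scrape_interval in Prometheus' configuration
kubectl -n metrics \
    get configmap prometheus-server \
    --output yaml \
    | grep scrape_interval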
The last two arguments specified the URL and the port through which the adapter can access the Prometheus API. In our case, the URL is set to go through Prometheus' Service.
🔍 Please visit Prometheus Adapter Chart README for more information about all the values you can set to customize the installation.
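Since the adapter can only be as useful as its connection to Prometheus, it might be worth confirming that the address we passed through prometheus.url and prometheus.port is reachable. The example that follows is a sketch; the curl image and the /-/healthy endpoint are assumptions you may need to adjust to your setup.

# Spin up a temporary Pod and hit Prometheus' health endpoint
kubectl -n metrics run curl \
    --image curlimages/curl \
    --restart Never \
    --rm -it -- \
    curl "http://prometheus-server.metrics.svc:80/-/healthy"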
Finally, we waited until the prometheus-adapter rolled out.
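If the rollout gets stuck, or if metrics we expect never show up later on, the adapter's logs are usually the first place to look. Assuming the Deployment is named prometheus-adapter (as created by the Chart), we could retrieve them as follows.

# Inspect the adapter's logs for errors while it discovers metrics
kubectl -n metrics \
    logs deployment/prometheus-adapter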
Prometheus data provided through the adapter #
If everything is working as expected, we should be able to query Kubernetes' Custom Metrics API and retrieve some of the Prometheus data provided through the adapter.
kubectl get --raw \
    "/apis/custom.metrics.k8s.io/v1beta1" \
    | jq "."
🔍 Given the promise that each chapter will feature a different Kubernetes flavor and that AWS did not have its turn yet, all the outputs are taken from EKS. Depending on which platform you're using, your outputs might be slightly different.
The first entries of the output from querying Custom Metrics are as follows.
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/memory_max_usage_bytes",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "jobs.batch/kube_deployment_spec_strategy_rollingupdate_max_unavailable",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    ...
The list of the custom metrics available through the adapter is big, and we might be compelled to think that it contains all those stored in Prometheus. We'll find out whether that's true later. For now, we'll focus on the metrics we might need with the HPA tied to the go-demo-5 Deployment. After all, providing metrics for auto-scaling is an adapter's primary, if not the only, function.
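As an illustration of how a single metric can be retrieved once we know its name, we could query the API for a Pod-scoped metric directly. The Namespace and the metric name below are placeholders; they assume that a metric called memory_usage_bytes is exposed for the Pods in the go-demo-5 Namespace, so adjust them to whatever your adapter actually lists.

# Retrieve one custom metric for all the Pods in a Namespace
kubectl get --raw \
    "/apis/custom.metrics.k8s.io/v1beta1/namespaces/go-demo-5/pods/*/memory_usage_bytes" \
    | jq "."

If the metric exists, the output should be a MetricValueList with one item per Pod.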
Data flow of Metrics Aggregator #
From now on, Metrics Aggregator contains not only the data from the Metrics Server but also the data from the Prometheus Adapter which, in turn, fetches metrics from the Prometheus Server. We are yet to confirm whether the data we're getting through the adapter is enough and whether HPA works with custom metrics.
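One quick way to see that flow reflected in the cluster is to list the registered APIServices and confirm that the custom metrics group is served by the adapter. The exact columns depend on the kubectl version, but the entry should point to the prometheus-adapter Service in the metrics Namespace.

# Confirm that custom.metrics.k8s.io is registered with the aggregator
kubectl get apiservices \
    | grep custom.metrics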