Explore Centralized Logging
In this lesson, we will explore centralized logging through Elasticsearch, Fluentd, and Kibana.
Elasticsearch is probably the most commonly used search and analytics engine, at least if we narrow the scope to self-hosted solutions. It is designed for many scenarios and can store (almost) any type of data. As such, it is almost perfect for storing logs, which can come in many different formats. Given its flexibility, some use it for metrics as well, in which case Elasticsearch competes with Prometheus. We'll leave metrics aside for now and focus only on logs.
EFK stack #
The EFK (Elasticsearch, Fluentd, and Kibana) stack consists of three components. Data is stored in Elasticsearch, logs are collected, transformed, and pushed to the DB by Fluentd, and Kibana is used as the UI through which we can explore the data stored in Elasticsearch. If you are used to ELK (Logstash instead of Fluentd), the setup that follows should be familiar.
The first component we’ll install is Elasticsearch. Without it, Fluentd would not have a destination to ship logs, and Kibana would not have a source of data.
Elasticsearch #
As you might have guessed, we'll continue using Helm and, fortunately, an Elasticsearch chart is already available in the stable repository. I'm confident that you know how to find the chart and explore all the values you can use, so we'll jump straight into the values I prepared. They are the bare minimum and define only the resources.
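If you do want to explore the chart's full set of configurable values yourself, Helm can print them for you. A minimal sketch, assuming the stable repository is registered under the name `stable`:

```shell
# Register the (archived) stable repository, if it is not already configured
helm repo add stable https://charts.helm.sh/stable

# Refresh the local chart index
helm repo update

# Print every value the elasticsearch chart accepts
helm inspect values stable/elasticsearch
```

The output of the last command is the superset from which the values file below was distilled.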
cat logging/es-values.yml

client:
  resources:
    limits:
      cpu: 1
      memory: 1500Mi
    requests:
      cpu: 25m
      memory: 750Mi
master:
  resources:
    limits:
      cpu: 1
      memory: 1500Mi
    requests:
      cpu: 25m
      memory: 750Mi
data:
  resources:
    limits:
      cpu: 1
      memory: 3Gi
    requests:
      cpu: 100m
      memory: 1500Mi
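Before we look at the individual sections, note that applying a values file like this one is a single Helm command. A sketch, assuming a release named `elasticsearch` in a `logging` Namespace (both names are my assumptions, not prescribed above):

```shell
# Install (or upgrade) Elasticsearch with the prepared values
helm upgrade -i elasticsearch stable/elasticsearch \
    --namespace logging \
    --values logging/es-values.yml

# Wait until the data nodes are up (StatefulSet name assumed from chart defaults)
kubectl --namespace logging rollout status statefulset elasticsearch-data
```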
As you can see, there are three sections (client, master...