Implementing Message Broker Technologies
Explore essential Kafka components, configure resiliency and scalability, and use Kafka UI for management.
As we saw in the previous chapter, standing up and using a minimal Kafka instance can be done relatively quickly. This is great for localized testing. However, it does not translate into a production-grade infrastructure that’s capable of handling the raw volume of events we might see with the application. While every configuration detail is not relevant to developing the domain code and the overall application, there are some points to keep in mind when we’re setting up and configuring Kafka that can impact how software components may process events.
Now, let’s walk through a high-level overview of the components that are needed to run Kafka, as well as relevant implementations and configurations that will enable resiliency and scalability.
Reviewing essential Kafka components
There are three primary components that we must have to establish a functioning Kafka instance. We’ve already talked about the broker, as well as topics. The final piece of the puzzle is the ZooKeeper component. ZooKeeper, according to the official Apache ZooKeeper site, “is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.” In short, it oversees the brokers in our Kafka instance and helps facilitate event routing, configuration updates, replication, and leader election. These responsibilities break down as follows:
Event routing ensures that events written to topics reach the correct destination.
When configuration updates are sent to Kafka brokers or other components, ZooKeeper ensures those updates are applied uniformly.
ZooKeeper manages event replication across brokers and topic partitions based on the configuration that has been set.
ZooKeeper manages the primary active broker by declaring it as the lead node. This is known as leader election.
From an infrastructure perspective, this is something that requires a fair amount of planning and design. From a developer’s perspective, it’s good to understand what the base components are and how many brokers to use. For example, during local or integration testing, it might make sense to use only a single Kafka broker. In performance testing or production environments, the number of brokers would be increased significantly based on the anticipated volume of events moving through the system.
Let’s look at an example. Consider the following docker-compose file:
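The original file is not reproduced here, but a minimal single-broker setup along these lines is typical. This is a sketch, not the chapter's exact file: the Confluent image tags, port mappings, and environment values are illustrative assumptions you would adjust for your own environment.

```yaml
version: "3.8"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0   # illustrative tag
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181            # port brokers use to reach ZooKeeper
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.4.0       # illustrative tag
    depends_on:
      - zookeeper                            # broker needs ZooKeeper up first
    ports:
      - "9092:9092"                          # expose the broker to the host
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      # A single broker can only hold one replica, so internal topics
      # must drop their replication factor from the default of 3 to 1.
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

For a multi-broker environment, you would add further kafka services with distinct KAFKA_BROKER_ID values and raise the replication factor accordingly.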