Example: Topics & Partitions
In this lesson, we'll study how the data in our example is divided into topics and partitions.
Technical parameters of the partitions and topics #
The topic order contains the order records. Docker Compose configures the Kafka Docker container via the environment variable KAFKA_CREATE_TOPICS in the file docker-compose.yml in such a way that the topic order is created.
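The following sketch shows how such a configuration might look in docker-compose.yml. It assumes the topic:partitions:replication-factor syntax used by common Kafka Docker images such as wurstmeister/kafka; the exact image and settings in the example project may differ.

```yaml
kafka:
  image: wurstmeister/kafka
  environment:
    # Create the topic "order" with 5 partitions and a replication factor of 1.
    # Format: <topic>:<partitions>:<replication-factor>
    KAFKA_CREATE_TOPICS: "order:5:1"
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```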
The topic order is divided into five partitions. A greater number of partitions allows for more concurrency, but in the example scenario a high degree of concurrency is not important. More partitions also require more file handles on the server and more memory on the client. In addition, when a Kafka node fails, a new leader might have to be chosen for each partition, which takes longer the more partitions there are. This argues for a lower number of partitions, as used in the example, in order to save resources.

The number of partitions of a topic can still be increased after the topic has been created. However, in that case the mapping of records to partitions changes. This can cause problems because the assignment of records to consumers is then no longer unambiguous. Therefore, the number of partitions should be chosen sufficiently high from the start.
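To see why changing the number of partitions remaps records, consider this simplified sketch of keyed partition assignment. Kafka's default partitioner actually uses a murmur2 hash of the serialized key, but the consequence is the same: the target partition depends on the current partition count.

```java
// Simplified illustration -- not Kafka's exact implementation,
// which hashes the serialized key with murmur2.
static int partitionFor(String key, int numPartitions) {
    // Records with the same key always land in the same partition --
    // but only as long as numPartitions stays the same.
    return (key.hashCode() & 0x7fffffff) % numPartitions;
}
```

With five partitions a given order id might map to partition 2; after increasing the count to eight, the same id might map to partition 6, so old and new records for that order no longer share a partition.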
No replication in the example #
For a production environment, replication across multiple servers is necessary to compensate for the failure of individual servers. For a demo, this level of complexity is not needed, so only a single Kafka node is running.
Producers #
The order microservice has to send the information about an order to the other microservices. To do so, the microservice uses the KafkaTemplate. This class from the Spring Kafka framework encapsulates the producer API and facilitates the sending of records; only the method send() has to be called. This is shown in the code excerpt from the class OrderService in the listing.
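As an illustration of what such a call can look like, here is a minimal sketch of a producer built around the KafkaTemplate. The Order class, its getId() method, and the template's type parameters are assumptions made for the sake of the example; the actual OrderService in the listing may differ in detail.

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    private final KafkaTemplate<String, Order> kafkaTemplate;

    public OrderService(KafkaTemplate<String, Order> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void order(Order order) {
        // send() writes the record to the topic "order"; the key (the order id)
        // determines which of the five partitions receives the record.
        kafkaTemplate.send("order", String.valueOf(order.getId()), order);
    }
}
```

Using the order id as the key ensures that all records belonging to the same order end up in the same partition and are therefore consumed in order.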