First Steps with Kafka Scripts

Learn about Kafka servers and the location of the scripts provided in the Kafka distribution.

Kafka Streams applications are simply JVM applications that use the Kafka Streams library as a dependency. Most of the great features that come with Kafka Streams, such as fault tolerance, workload balancing, and parallelism, are actually provided by Kafka itself.

This is why, before we dive into the Streams part of Kafka Streams, we’ll pay a visit to the Kafka part. Just as web developers have tools (such as Postman and the browser’s dev tools) that aid them in the verification and debugging of REST API applications, we can also benefit from tools that will help us build our Kafka Streams applications.

This is what we will refer to as our Kafka tool belt—a collection of scripts and concepts that we’ll use to manually feed data into our application, consume outgoing data from it, understand how and when to scale it, and more. We will begin by learning about the Kafka scripts, where they are, and how to run them.

Kafka scripts

The Kafka distribution contains a directory with scripts that can be used to interact with Kafka. The console producer and consumer, kafka-console-producer.sh and kafka-console-consumer.sh, are two popular examples, but we'll be using several other useful scripts as well.
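
To get a feel for how these two scripts are invoked, here is a minimal sketch. The broker address (localhost:9092) and the topic name (example-topic) are assumptions for illustration; substitute the values that match your own setup.

    # Produce messages interactively: each line typed on stdin becomes a record
    # (localhost:9092 and example-topic are placeholder values)
    kafka-console-producer.sh --bootstrap-server localhost:9092 --topic example-topic

    # Consume the same topic, starting from the earliest available record
    kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic example-topic --from-beginning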

The scripts are found in the bin directory inside the installation directory, which, in our case, is /kafka.
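
For example, assuming a broker is reachable on localhost:9092 (a placeholder address), we can run any of these scripts by calling it with its full path, such as asking kafka-topics.sh to list the existing topics:

    # List the topics known to the broker
    /kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list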
