Serialization and Deserialization

Previously, we published unstructured strings to our new topology. In most real-world scenarios, records have a structured format, such as JSON with a specific schema.

To accept different data types, we need to provide Kafka Streams with a Serde, which is short for serializer/deserializer. A Serde is a class that maps between the byte arrays Kafka uses to store and transfer records and the meaningful objects we work with in our application. Kafka Streams ships with Serdes for several data types, including primitives, String, UUID, and Void.

However, Kafka Streams does not come with Serdes for popular formats like JSON, Protobuf, or Avro. The records in our input topic are in JSON format, so we will implement a JSON Serde for our new Track class:
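A minimal sketch of such a Serde is shown below, built on Jackson's `ObjectMapper`. The `Track` fields (`id`, `name`, `artist`) and the `JsonSerde` class name are assumptions for illustration; the real fields depend on the schema of the records in the input topic.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical Track class -- the actual fields depend on the topic's schema.
class Track {
    private String id;
    private String name;
    private String artist;

    public Track() {}  // no-arg constructor required by Jackson

    public Track(String id, String name, String artist) {
        this.id = id;
        this.name = name;
        this.artist = artist;
    }

    public String getId() { return id; }
    public String getName() { return name; }
    public String getArtist() { return artist; }
    public void setId(String id) { this.id = id; }
    public void setName(String name) { this.name = name; }
    public void setArtist(String artist) { this.artist = artist; }
}

// A generic JSON Serde: serializes a POJO into the byte array Kafka stores
// and transfers, and maps those bytes back into the POJO on read.
class JsonSerde<T> implements Serde<T> {
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private final Class<T> type;

    public JsonSerde(Class<T> type) {
        this.type = type;
    }

    @Override
    public Serializer<T> serializer() {
        return (topic, data) -> {
            try {
                return data == null ? null : MAPPER.writeValueAsBytes(data);
            } catch (Exception e) {
                throw new RuntimeException("JSON serialization failed", e);
            }
        };
    }

    @Override
    public Deserializer<T> deserializer() {
        return (topic, bytes) -> {
            try {
                return bytes == null ? null : MAPPER.readValue(bytes, type);
            } catch (Exception e) {
                throw new RuntimeException("JSON deserialization failed", e);
            }
        };
    }
}
```

With a Serde like this, the topology can consume typed records, for example: `builder.stream("tracks", Consumed.with(Serdes.String(), new JsonSerde<>(Track.class)))`.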
