A Kafka Client’s Request: There and Back Again
06-19, 14:00–14:40 (Europe/Berlin), Palais Atelier

Understand how data moves into and out of Apache Kafka® by examining the producer and consumer request life cycle. Follow a request from an initial call to send() or poll(), all the way to disk.

Do you know how your data moves into and out of your Apache Kafka® instance? From the programmer’s point of view, it’s relatively simple. But under the hood, writing to and reading from Kafka is a complex process with a fascinating life cycle that’s worth understanding.

When you call producer.send() or consumer.poll(), those calls are translated into low-level requests that are sent to the brokers for processing. In this session, we’ll dive into the world of Kafka producers and consumers to follow a request from an initial call to send() or poll(), all the way to disk, and back to the client via the broker’s final response. Along the way, we’ll explore a number of client and broker configurations that affect how these requests are handled and discuss the metrics you can monitor to keep track of every stage of the request life cycle.
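To give a flavor of the configurations in question, here is a small, illustrative sketch of standard Kafka client settings that shape the request life cycle. The property names come from the Apache Kafka client configuration reference; the values shown are only examples, not recommendations:

```properties
# Producer: how many replica acknowledgements a write waits for before the
# broker responds (0, 1, or all)
acks=all

# Producer: how long send() may buffer records client-side before a batch is
# dispatched in a produce request
linger.ms=5

# Producer: upper bound (bytes) on a record batch sent in a single request
batch.size=16384

# Consumer: minimum bytes the broker accumulates before answering a fetch
fetch.min.bytes=1

# Consumer: how long the broker may park a fetch request waiting for enough
# data to satisfy fetch.min.bytes
fetch.max.wait.ms=500
```

Each of these knobs trades latency against throughput at a different stage of the request’s journey, which is exactly the territory the session maps out.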

By the end of this session, you’ll know the ins and outs of the read and write requests that your Kafka clients make, making your next debugging or performance analysis session a breeze.


Danica Fine is a Senior Developer Advocate at Confluent where she helps others get the most out of their event-driven pipelines. In her previous role as a software engineer on a streaming infrastructure team, she predominantly worked on Kafka Streams- and Kafka Connect-based projects. She can be found on Twitter, tweeting about tech, plants, and baking @TheDanicaFine.