Part 2 - Command Line Examples

Kafka 101

Introduction

This article is the second in a series of three whose aim is to introduce Kafka and show how to use it.

This part shows how to interact with Kafka from the command line.
On my GitHub, there is a docker-compose file created for running the examples given here.

Start Kafka with Zookeeper

If you are interested in how to set up a Kafka server, here is a guide.
Kafka runs best on Linux.

  1. Download Kafka from Apache Kafka site

  2. Decompress the archive

  3. Open a command line inside kafka_<version>/bin

Start Zookeeper

zookeeper-server-start.sh ../config/zookeeper.properties

Start Kafka server

kafka-server-start.sh ../config/server.properties

Create Topics

Create a topic

kafka-topics.sh --bootstrap-server kafka-1:9092 --create --topic topic_a

Create a topic with 5 partitions

kafka-topics.sh --bootstrap-server kafka-1:9092 --create --topic topic_b --partitions 5

Create a topic with a replication factor

The replication factor is the number of copies of each partition stored across different Kafka brokers. Setting a replication factor greater than one gives Kafka high availability and prevents data loss if a broker goes down or cannot handle requests.

kafka-topics.sh --bootstrap-server kafka-1:9092 --create --topic topic_c --replication-factor 2

If there are fewer Kafka brokers than the requested replication factor, the command fails with an error and the topic is not created.
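The idea behind spreading replicas over brokers, and why the broker count caps the replication factor, can be sketched in a few lines of Python (illustrative only; Kafka's real assignment also randomizes the starting broker and can be rack-aware):

```python
# Illustrative sketch of spreading partition replicas across brokers.
# Kafka's real assignment also uses a random start broker and rack awareness.

def assign_replicas(num_partitions: int, num_brokers: int, replication_factor: int):
    if replication_factor > num_brokers:
        # Mirrors the CLI behavior: you cannot keep more copies than brokers.
        raise ValueError("replication factor cannot exceed the number of brokers")
    assignment = {}
    for partition in range(num_partitions):
        # Leader first, then followers on the next brokers, wrapping around.
        assignment[partition] = [
            (partition + i) % num_brokers for i in range(replication_factor)
        ]
    return assignment

print(assign_replicas(num_partitions=3, num_brokers=3, replication_factor=2))
# {0: [0, 1], 1: [1, 2], 2: [2, 0]}
```

With 3 brokers and a replication factor of 2, every partition ends up on two different brokers, so losing any single broker leaves a full copy of the data available.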

List topics

kafka-topics.sh --bootstrap-server kafka-1:9092 --list

Describe Topics

kafka-topics.sh --bootstrap-server kafka-1:9092 --topic topic_a --describe

Producing data

In the following examples, data is produced without keys, meaning that records are distributed across all available partitions.
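To picture how keyless records still end up on every partition, here is a toy version of the sticky strategy the default producer partitioner has used since Kafka 2.4: it sticks to one partition for a whole batch, then picks a new one. The class below is a simplified stand-in, not Kafka's actual code:

```python
import random

# Toy sticky partitioner: keep one partition until a "batch" is full,
# then jump to a new randomly chosen partition (simplified stand-in).

class StickyPartitioner:
    def __init__(self, num_partitions: int, batch_size: int):
        self.num_partitions = num_partitions
        self.batch_size = batch_size
        self.sent_in_batch = 0
        self.current = random.randrange(num_partitions)

    def partition(self) -> int:
        if self.sent_in_batch == self.batch_size:
            # Batch is full: start a new one on a new random partition.
            self.sent_in_batch = 0
            self.current = random.randrange(self.num_partitions)
        self.sent_in_batch += 1
        return self.current

picker = StickyPartitioner(num_partitions=3, batch_size=2)
print([picker.partition() for _ in range(6)])  # e.g. [2, 2, 0, 0, 1, 1]
```

Over many records the randomness spreads the batches over all partitions, while keeping each batch on a single partition for efficiency.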

Producing data to topic_a

Start Producer

kafka-console-producer.sh --bootstrap-server kafka-1:9092 --topic topic_a

When the > appears, enter the messages, one per line

>Hello World
>My name is XXX
>It's a wonderful world

To exit, press CTRL+C

Producing data with properties

With acks=all, the message is only confirmed once it has been acknowledged by all in-sync replicas.

Start Producer

kafka-console-producer.sh --bootstrap-server kafka-1:9092 --topic topic_a --producer-property acks=all

When the > appears, enter the messages, one per line

>A message that is acked

To exit, press CTRL+C

Producing to a non-existing topic

The behavior depends on how the cluster is configured

Start Producer

kafka-console-producer.sh --producer.config playground.config --bootstrap-server cluster.playground.cdkt.io:9092 --topic non_existing_topic

When the > appears, enter the messages, one per line

>Hello World

If the cluster is configured not to allow sending messages to a non-existing topic, you will receive messages like these:

[2023-05-19 16:09:42,650] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 94 : {non_existing_topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-19 16:09:42,827] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 95 : {non_existing_topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-19 16:09:43,133] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 96 : {non_existing_topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-19 16:09:43,404] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 97 : {non_existing_topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
^C[2023-05-19 16:09:43,789] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 98 : {non_existing_topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)

If the cluster has the default configuration (auto.create.topics.enable=true), you receive a warning on the first attempt, but a second or third try succeeds, because Kafka will create the new topic automatically.

Producing with keys

Records with the same key always go to the same partition.

Start Producer

kafka-console-producer.sh --bootstrap-server kafka-1:9092 --topic topic_a --property parse.key=true --property key.separator=:

When the > appears, enter the messages, one per line

>example key:example value
>name:XXX
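Under the hood, the default partitioner hashes the serialized key bytes (with murmur2) and takes the result modulo the partition count, which is why a given key always lands on the same partition. The sketch below shows the idea with a stand-in hash (sha1, chosen only because it is deterministic; the real hash function differs):

```python
import hashlib

# Key -> partition mapping, sketched with a stand-in hash.
# Kafka's default partitioner uses murmur2 on the serialized key bytes.

def partition_for(key: str, num_partitions: int) -> int:
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same key, same partition: the property behind per-key ordering.
print(partition_for("name", 3) == partition_for("name", 3))  # True
```

Note that the result depends on the partition count, so adding partitions to a topic changes where new records for an existing key land.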

Consuming Messages

Create a topic with 3 partitions

kafka-topics.sh --bootstrap-server kafka-1:9092 --create --topic topic_1 --partitions 3

Consume a Topic

Start Consumer

kafka-console-consumer.sh --bootstrap-server kafka-1:9092 --topic topic_1

Produce to a Topic with the RoundRobin Partitioner

This sends each record to the next partition in turn, producing to one partition at a time. It is useful for educational purposes only.

Start Producer

kafka-console-producer.sh --bootstrap-server kafka-1:9092 --topic topic_1 --producer-property partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner
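What RoundRobinPartitioner does can be sketched as a simple cycle over the partition numbers, one record at a time (a toy equivalent, not Kafka's code):

```python
import itertools

# Toy equivalent of RoundRobinPartitioner: each record goes to the
# next partition in a fixed cycle, regardless of its key.

def round_robin(num_partitions: int):
    return itertools.cycle(range(num_partitions))

picker = round_robin(3)
print([next(picker) for _ in range(6)])  # [0, 1, 2, 0, 1, 2]
```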

Consume a Topic from the beginning

This consumes all messages from the beginning of the topic's log. Messages are not returned in the order they were entered, but in order within each partition.

Start Producer and enter messages

kafka-console-producer.sh --bootstrap-server kafka-1:9092 --topic topic_2 --producer-property partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner
>a
>b
>c
>d
>e
>f
>g

Start Consumer

kafka-console-consumer.sh --bootstrap-server kafka-1:9092 --topic topic_2 --formatter kafka.tools.DefaultMessageFormatter --property print.timestamp=true --property print.key=true  --property print.value=true --property print.partition=true --from-beginning

Messages displayed

CreateTime:1687025760921        Partition:0     null    c
CreateTime:1687025825279        Partition:0     null    f
CreateTime:1687025760209        Partition:2     null    b
CreateTime:1687025824145        Partition:2     null    e
CreateTime:1687025758459        Partition:1     null    a
CreateTime:1687025823573        Partition:1     null    d
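The guarantee described above (interleaved across partitions, ordered within each one) can be checked against the sample output; the rows below copy the timestamps and partitions printed above:

```python
# (timestamp, partition, value) rows copied from the console output above.
rows = [
    (1687025760921, 0, "c"), (1687025825279, 0, "f"),
    (1687025760209, 2, "b"), (1687025824145, 2, "e"),
    (1687025758459, 1, "a"), (1687025823573, 1, "d"),
]

by_partition = {}
for ts, partition, _value in rows:
    by_partition.setdefault(partition, []).append(ts)

# Timestamps are ascending within every partition: order holds per partition.
print(all(ts_list == sorted(ts_list) for ts_list in by_partition.values()))  # True
```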

List Consumer Groups

kafka-consumer-groups.sh --bootstrap-server kafka-1:9092 --list

References

[1] Sagar Kudu, “Set up a Kafka Cluster in Local with multiple Kafka Brokers” Medium. [Online]. Available: https://sagarkudu.medium.com/set-up-a-kafka-cluster-in-local-with-multiple-kafka-brokers-98d4e4e7a343. [Accessed: 17-Jun-2023]

[2] Apache Kafka, “Kafka 3.5 Documentation” Apache Kafka. [Online]. Available: https://kafka.apache.org/35/documentation.html. [Accessed: 17-Jun-2023]