Setting Up Kafka Locally Using Docker | Spring Boot Connect
Apache Kafka is a widely-used distributed streaming platform designed for building real-time data pipelines, stream processing, and more. If you’re looking to set up Kafka locally to test your applications or learn how it works, Docker offers a simple and efficient way to containerize Kafka on your machine.
This guide will walk you through every step of setting up Kafka using Docker, including the requirements, Docker Compose configuration, basic producer and consumer tests, troubleshooting tips, and how to use Kafka with Spring Boot Java.
Requirements and Images Used
Before we get started, ensure your machine meets these basic requirements and that you have the necessary tools installed.
Prerequisites
- Docker installed on your machine. You can download it from Docker’s website.
- Docker Compose installed. This usually comes with Docker Desktop.
Images Used
We’ll use the following Confluent images for Kafka and Zookeeper, both available on Docker Hub:
- `confluentinc/cp-kafka`: the Kafka broker image provided by Confluent.
- `confluentinc/cp-zookeeper`: the Zookeeper image Kafka depends on.
Docker Compose File
To simplify running Kafka locally, we’ll use Docker Compose to define the Kafka and Zookeeper services. Copy the following `docker-compose.yml` file into your working directory.
Example docker-compose.yml
```yaml
version: '3.8'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Two listeners: one for clients on the host, one for containers
      # on the Docker network. Each listener needs its own port.
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,PLAINTEXT_INTERNAL://0.0.0.0:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://kafka:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT_INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
```
What’s Happening in the File:
- Zookeeper Service
Zookeeper manages cluster metadata for the Kafka broker. It’s defined here with port `2181` for client connections.
- Kafka Service
The broker accepts client connections on port `9092`. The `KAFKA_LISTENERS` and `KAFKA_ADVERTISED_LISTENERS` variables each define two listeners: one for clients on your host machine (advertised as `localhost`) and one for clients inside the Docker network (advertised as `kafka`), so containers and host applications can both reach the broker.
- Dependencies
Kafka depends on Zookeeper, which is why the `depends_on` directive is used.
To Start the Kafka Setup:
Run the following command from your terminal inside the directory containing your `docker-compose.yml` file:

```shell
docker-compose up -d
```
This will start both Kafka and Zookeeper services in detached mode.
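To confirm both services came up, you can check the containers from the host. The names below match the `container_name` values in the Compose file; this is an environment check, so the exact output depends on your machine:

```shell
# Both "kafka" and "zookeeper" should show a status of "Up" after a few seconds.
docker ps --filter "name=kafka" --filter "name=zookeeper"

# Tail the broker log; a "started" line indicates Kafka finished booting.
docker logs kafka --tail 20
```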
Basic Producer and Consumer Test
Once Kafka is up and running locally, you’ll want to test if everything works by setting up a basic producer and consumer interaction. The easiest way to test is using the Kafka CLI provided by the Kafka image itself.
Testing Producer and Consumer
- Access the Kafka Container
Run the following command to open a bash shell in your Kafka container:
```shell
docker exec -it kafka bash
```
- Create a Topic
Inside the container, create a topic called `test-topic`. The Confluent image puts the Kafka command-line tools on the `PATH`, so you call `kafka-topics` directly (there is no `bin/kafka-topics.sh` wrapper as in the Apache distribution):

```shell
kafka-topics --create --topic test-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
```
- Start a Producer
Produce messages for `test-topic` by starting the console producer:

```shell
kafka-console-producer --topic test-topic --bootstrap-server localhost:9092
```
Type a few messages into the terminal and press Enter to send them.
- Start a Consumer
Open a second terminal, attach to the container again with `docker exec -it kafka bash`, and start a consumer for the same topic:

```shell
kafka-console-consumer --topic test-topic --from-beginning --bootstrap-server localhost:9092
```
You should see the messages produced in the producer terminal appear here in real time.
If you see those messages, congratulations! Your Kafka setup is working perfectly.
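The same round trip can also be scripted non-interactively from the host, without opening a shell in the container. This is a sketch that assumes the stack above is running and the topic already exists:

```shell
# Pipe a message into the console producer inside the container...
echo "hello from the host" | docker exec -i kafka \
  kafka-console-producer --topic test-topic --bootstrap-server localhost:9092

# ...then read one message from the beginning of the topic and exit.
docker exec kafka \
  kafka-console-consumer --topic test-topic --from-beginning \
  --bootstrap-server localhost:9092 --max-messages 1
```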
Troubleshooting Tips
If you run into issues while setting up Kafka, here are some common problems and their solutions:
- Port Conflicts
If port `9092` or `2181` is already in use on your host, change the host side of the port mappings in the `docker-compose.yml` file to unused ports.
- Connection Refused
Ensure that the `advertised.listeners` setting in the `docker-compose.yml` file includes the hostname your clients actually use (e.g., `localhost` for applications running on your host).
- Kafka Container Failing to Start
Check the logs of the Kafka container for detailed error messages:
```shell
docker logs kafka
```
- Zookeeper Unreachable
Ensure Zookeeper is running by checking its container status with:
```shell
docker ps
```
Using Kafka with Spring Boot Java
Once your Kafka setup is running, you can integrate it with Spring Boot to build powerful, event-driven applications. Here’s a quick example:
Add Dependencies
Add the necessary dependencies to your `pom.xml` file:
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```
Producer Configuration
Create a `KafkaProducerConfig` class to set up the Kafka producer properties:
```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
```
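With the `KafkaTemplate` bean in place, a small service can publish messages. The class below is a hypothetical example (the name `MessageProducer` is not part of any library), sketched to show how the template is typically injected and used:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class MessageProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    // Spring injects the KafkaTemplate bean defined in KafkaProducerConfig.
    public MessageProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Sends a message asynchronously to the given topic.
    public void send(String topic, String message) {
        kafkaTemplate.send(topic, message);
    }
}
```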
Consumer Configuration
Similarly, configure the consumer:
```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProps.put(ConsumerConfig.GROUP_ID_CONFIG, "group-id");
        configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(configProps);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
```
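On the consuming side, a `@KafkaListener` method is the usual entry point. A minimal sketch (the class name `MessageConsumer` is illustrative; the topic and group id match the values used earlier):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class MessageConsumer {

    // Invoked by the listener container for each record on test-topic.
    @KafkaListener(topics = "test-topic", groupId = "group-id")
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}
```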
With these configurations, you can start building producer and consumer services within your Spring Boot application!
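As an alternative to the explicit `@Configuration` classes, Spring Boot can auto-configure an equivalent producer and consumer from properties alone; a sketch of the corresponding `application.properties`:

```properties
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=group-id
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
```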
Kafka Made Simple with Docker
Setting up Apache Kafka locally with Docker is a practical and efficient way to learn and test its features. From a simple `docker-compose.yml` file to a functioning producer/consumer environment, you can master the basics in no time.
If you’re ready to explore further, integrating Kafka with Spring Boot is your next logical step. This will empower your applications with real-time messaging and data streaming capabilities.
Start building event-driven applications today!
Meta Information
Meta Title
Set Up Kafka Locally Using Docker
Meta Description
Learn how to use Docker to set up Kafka locally. Includes a Docker Compose file, basic producer/consumer test, troubleshooting, and Spring Boot integration.