Arata

Kafka Made Simple: A Hands-On Quickstart with Docker and Spring Boot

Apache Kafka is a distributed, durable, real-time event streaming platform. It goes beyond a message queue by providing scalability, persistence, and stream processing capabilities.

In this guide, we’ll quickly spin up Kafka with Docker, explore it with CLI tools, and integrate it into a Spring Boot application.


1. What is Kafka?

Apache Kafka is a distributed, durable, real-time event streaming platform.

It was originally developed at LinkedIn and is now part of the Apache Software Foundation.

Kafka is designed for high-throughput, low-latency data pipelines, streaming analytics, and event-driven applications.

What is an Event?

An event is simply a record of something that happened in the system.

Each event usually includes:

  • Key → identifier (e.g., user ID, order ID).
  • Value → the payload (e.g., “order created with total = $50”).
  • Timestamp → when the event occurred.

Example event:

{
  "key": "order-123",
  "value": { "customer": "Alice", "total": 50 },
  "timestamp": "2025-09-19T10:15:00Z"
}
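
In the Java client, such an event maps naturally onto a ProducerRecord. A minimal sketch (the topic name "orders" is illustrative, and the JSON payload is shown as a plain string):

// From the plain Java client (org.apache.kafka.clients.producer.ProducerRecord).
ProducerRecord<String, String> record = new ProducerRecord<>(
        "orders",                                  // topic
        "order-123",                               // key
        "{\"customer\":\"Alice\",\"total\":50}");  // value (the payload)
// If no timestamp is set explicitly, the producer stamps the send time.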

What is an Event Streaming Platform?

An event streaming platform is a system designed to handle continuous flows of data — or events — in real time.

Instead of working in batches (processing data after the fact), it allows applications to react as events happen.


2. What Kafka Can Do

Kafka is more than a message queue—it's a real-time event backbone for modern systems.

Messaging Like a Message Queue

Kafka decouples producers and consumers, enabling asynchronous communication between services.
Example:
A banking system publishes transaction events to Kafka. Fraud detection, ledger updates, and notification services consume these events independently.

Event Streaming

Kafka streams data in real time, allowing systems to react instantly.
Example:
An insurance platform streams claim events to trigger automated validation, underwriting checks, and customer updates in real time.

Data Integration

Kafka Connect bridges Kafka with databases, cloud storage, and analytics platforms.
Example:
A semiconductor company streams sensor data from manufacturing equipment into a data lake for predictive maintenance and yield optimization.

Log Aggregation

Kafka centralizes logs from multiple services for monitoring and analysis.
Example:
An industrial automation system sends logs from PLCs and controllers to Kafka, where they’re consumed by a monitoring dashboard for anomaly detection.

Replayable History

Kafka retains events for reprocessing or backfilling.
Example:
An insurance company replays past policy events to train a model that predicts claim risk or customer churn. This avoids relying on static snapshots and gives the model a dynamic, time-aware view of behavior.

Scalable Microservices Communication

Kafka handles high-throughput messaging across distributed services.
Example:
A financial institution uses Kafka to coordinate customer onboarding, KYC checks, and account provisioning across multiple microservices.


3. Core Concepts

Let’s break down the key components that power Kafka’s event-driven architecture:

  • Event → the basic unit in Kafka; each one carries a key, a value, and a timestamp.
  • Topic → a named category for events, similar to a table in a database.
  • Partition → a topic can be split into multiple partitions for parallelism and scalability (events with the same key always land in the same partition; see the sketch below).
  • Producer → an application that sends events to Kafka.
  • Consumer → an application that reads events from Kafka.
  • Consumer Group → a set of consumers that share the processing load of a topic.
  • Broker → a Kafka server node that stores data and handles client requests.
  • Offset → a sequential ID that uniquely identifies each record within a partition.
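
To make the key-to-partition mapping concrete, here is a minimal sketch of the idea behind Kafka's default partitioner. It is illustrative only: the real partitioner applies a murmur2 hash to the serialized key bytes, and partitionFor is a hypothetical helper.

public class PartitionSketch {

    // Hypothetical helper: maps a record key to one of numPartitions partitions.
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the hash is non-negative, then take the remainder.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // "order-123" always maps to the same partition, so per-key ordering holds.
        System.out.println(partitionFor("order-123", 3));
    }
}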

4. Quickstart with Docker

This configuration sets up a single-node Kafka broker using KRaft mode, so no separate ZooKeeper service is required. It's ideal for development and testing scenarios.

name: kafka
services:
  kafka:
    image: apache/kafka:4.1.0
    container_name: kafka
    environment:
      # Single node that acts as both broker and KRaft controller (no ZooKeeper).
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      # Broker traffic on 9092, controller (KRaft) traffic on 9093.
      KAFKA_LISTENERS: BROKER://:9092,CONTROLLER://:9093
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_INTER_BROKER_LISTENER_NAME: BROKER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: BROKER:PLAINTEXT,CONTROLLER:PLAINTEXT
      # Address clients outside the container use to reach the broker.
      KAFKA_ADVERTISED_LISTENERS: BROKER://localhost:9092
      KAFKA_CLUSTER_ID: "kafka-1"
      # Replication factor 1 is fine for a single-broker dev setup.
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/data
    volumes:
      # Persist Kafka's log data across container restarts.
      - kafka_data:/var/lib/kafka/data
    ports:
      - "9092:9092"
volumes:
  kafka_data:


How to Run

Start the Kafka container using:

docker compose up

Kafka will be available at localhost:9092 for producers and consumers. The controller listener on port 9093 is used only for internal KRaft communication and is not published to the host.
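
If you'd rather verify the broker from code than from the CLI, here is a minimal sketch using the Admin API from the kafka-clients library (the class name ConnectivityCheck is illustrative):

import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ConnectivityCheck {

    public static void main(String[] args) throws Exception {
        // Point the admin client at the broker advertised by the compose file.
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // listTopics() returns a future; get() blocks until the broker responds.
            System.out.println("Topics: " + admin.listTopics().names().get());
        }
    }
}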


5. Kafka CLI

Before running Kafka commands, log into the Kafka container:

docker container exec -it kafka bash

Create Topic

Create a topic named quickstart with one partition and a replication factor of 1:

/opt/kafka/bin/kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 1 \
  --topic quickstart

List Topics

Check all existing topics:

/opt/kafka/bin/kafka-topics.sh --list \
  --bootstrap-server localhost:9092

Consume Messages

Read messages from the quickstart topic starting from the beginning (the consumer keeps running, so run it in a separate terminal session):

/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic quickstart \
  --from-beginning


Send Messages

You can send messages to the quickstart topic using either direct input or a file.

Option A: Send a single message

echo 'This is Event 1' | \
/opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic quickstart

Option B: Send multiple messages from a file

echo 'This is Event 2' > messages.txt
echo 'This is Event 3' >> messages.txt
cat messages.txt | \
/opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic quickstart

6. Spring Boot Integration

This configuration enables seamless integration between a Spring Boot application and an Apache Kafka broker. It defines both producer and consumer settings for message serialization, deserialization, and connection behavior.

pom.xml

<!-- spring-web -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>3.4.9</version>
</dependency>
<!-- kafka -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.3.9</version>
</dependency>
<!-- Lombok(optional) -->
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.30</version>
    <optional>true</optional>
</dependency>

application.yml

spring:
  kafka:
    bootstrap-servers: localhost:9092
    template:
      default-topic: orders
    consumer:
      group-id: quickstart-group
      auto-offset-reset: latest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring.json.trusted.packages: "dev.aratax.messaging.kafka.model"
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer


Topic Setup

@Configuration
public class KafkaTopicConfig {
    @Bean
    public NewTopic defaultTopic() {
        // 1 partition, replication factor 1 (matches the single-broker setup).
        return new NewTopic("orders", 1, (short) 1);
    }
}
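
Spring Kafka also provides a fluent TopicBuilder (org.springframework.kafka.config.TopicBuilder) if you prefer builder-style configuration; this sketch declares the same topic:

@Bean
public NewTopic ordersTopic() {
    // Equivalent declaration using Spring Kafka's fluent builder.
    return TopicBuilder.name("orders")
            .partitions(1)
            .replicas(1)
            .build();
}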

Event Model

@Data // Lombok generates the getters/setters used by the controller and JSON (de)serialization
public class OrderEvent {
    private String id;
    private Status status;
    private BigDecimal totalAmount;
    private Instant createdAt = Instant.now();
    private String createdBy;

    public enum Status {
        IN_PROGRESS,
        COMPLETED,
        CANCELLED
    }
}

Producer Example

@RestController
@RequestMapping("/api")
@RequiredArgsConstructor
public class OrderEventController {

    private final KafkaTemplate<String, OrderEvent> kafkaTemplate;

    @PostMapping("/orders")
    public String create(@RequestBody OrderEvent event) {
        event.setId(UUID.randomUUID().toString());
        event.setCreatedAt(Instant.now());
        kafkaTemplate.sendDefault(event.getId(), event);
        return "Order sent to Kafka";
    }
}
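
The call above is fire-and-forget. In Spring Kafka 3.x, sendDefault returns a CompletableFuture<SendResult<K, V>>, so you can attach a callback to confirm delivery. A sketch of what the body of create(...) could look like (the log messages are illustrative):

kafkaTemplate.sendDefault(event.getId(), event)
        .whenComplete((result, ex) -> {
            if (ex != null) {
                // The send failed after client-side retries were exhausted.
                System.err.println("Failed to send order " + event.getId()
                        + ": " + ex.getMessage());
            } else {
                // RecordMetadata tells you which partition and offset the event landed at.
                var meta = result.getRecordMetadata();
                System.out.println("Sent order " + event.getId()
                        + " to partition " + meta.partition()
                        + " at offset " + meta.offset());
            }
        });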

Consumer Example

@Component
public class OrderEventsListener {

    @KafkaListener(topics = "orders")
    public void handle(OrderEvent event) {
        System.out.println("Received order: " + event);
    }
}
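
If the listener also needs the record's key, partition, and offset (the metadata from Section 3), it can accept the full ConsumerRecord instead of just the payload. A variant sketch (swap it in for the listener above rather than running both, since with a single partition only one consumer in the group would receive events):

import org.apache.kafka.clients.consumer.ConsumerRecord;

@Component
public class OrderEventsRecordListener {

    @KafkaListener(topics = "orders")
    public void handle(ConsumerRecord<String, OrderEvent> record) {
        // ConsumerRecord exposes the per-record metadata Kafka stores.
        System.out.printf("key=%s partition=%d offset=%d value=%s%n",
                record.key(), record.partition(), record.offset(), record.value());
    }
}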

7. Demo Project

I built a demo project using Spring Boot and Kafka to demonstrate basic producer/consumer functionality.
Check it out on GitHub: springboot-kafka-quickstart


8. Key Takeaways

  • Kafka is more than a message queue—it's a scalable, durable event streaming platform.
  • Events are central to Kafka’s architecture, enabling real-time data flow across systems.
  • Docker makes setup easy, allowing you to spin up Kafka locally for development and testing.
  • Kafka CLI tools help you explore topics, produce messages, and consume events interactively.
  • Spring Boot integration simplifies Kafka usage with built-in support for producers and consumers.
  • Real-world use cases span industries like banking, insurance, semiconductor manufacturing, and industrial automation.

9. Conclusion

Apache Kafka empowers developers to build reactive, event-driven systems with ease. Whether you're streaming financial transactions, processing insurance claims, or monitoring factory equipment, Kafka provides the backbone for scalable, real-time communication.

With Docker and Spring Boot, you can get started in minutes—no complex setup required. This quickstart gives you everything you need to explore Kafka hands-on and begin building production-grade event pipelines.

Ready to go deeper? Try exploring Kafka's design and implementation, stream processing, or Kafka Connect integrations next.
