
Fatima Alam

All things Kafka

Kafka Crash Course: From Concept to a Hands-On Java Project
Have you ever wondered how giant platforms like Uber, Netflix, or LinkedIn handle millions of events per second in real-time? The answer, more often than not, is Apache Kafka.

If you are a microservices developer, data engineer, or just curious about event-driven architecture, understanding Kafka is no longer optional—it's a superpower.

In this post, we will deconstruct what Kafka is, why it exists, and build a real-world "Notify Me" backend using Java.

Let's dive in!

The Problem – "Tight Coupling"
Imagine you are building a simple e-commerce application called "AllThingsStore." In the beginning, you use a synchronous, monolithic architecture where microservices talk directly to each other.

When a customer places an order:

Order Service calls Payment Service.

Payment Service calls Inventory Service.

Inventory Service calls Notification Service.

This works fine... until Black Friday. Suddenly, thousands of orders rush in, and everything that can go wrong does.

The Problem visualized:
Here is what happens when direct, synchronous communication fails under load.

In traditional microservices, systems use Synchronous Communication. If the "Order Service" calls the "Payment Service" directly, it must wait for a response.

The Domino Effect: If one service lags or crashes (especially during peak traffic), the entire chain freezes.

Single Point of Failure: A 10-minute outage in one service can lead to hours of backlog and business loss.

For example: when the Payment Service gets overwhelmed and crashes, the entire chain freezes. The Order Service is stuck waiting for a response that will never come, leading to catastrophic system failure and lost revenue. This is tight coupling.
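To make the coupling concrete, here is a minimal plain-Java sketch (the service and method names are hypothetical) of a synchronous call chain: the Order Service blocks on the Payment Service, and when Payment slows down, the order fails even though nothing was wrong with orders themselves.

```java
import java.util.concurrent.*;

public class SyncChainDemo {

    // Hypothetical payment call that has become slow under load:
    // 5 seconds instead of the usual tens of milliseconds.
    static String chargePayment(String orderId) throws InterruptedException {
        Thread.sleep(5_000);
        return "PAID:" + orderId;
    }

    // The Order Service must block until Payment answers or its SLA expires.
    static String placeOrder(String orderId) {
        ExecutorService pool = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true); // don't let the stuck call keep the JVM alive
            return t;
        });
        try {
            return pool.submit(() -> chargePayment(orderId))
                       .get(1, TimeUnit.SECONDS); // our latency budget
        } catch (TimeoutException e) {
            return "ORDER_FAILED:" + orderId; // the whole chain collapses
        } catch (Exception e) {
            return "ERROR:" + orderId;
        } finally {
            pool.shutdownNow(); // interrupt the still-sleeping payment call
        }
    }

    public static void main(String[] args) {
        System.out.println(placeOrder("ord-1")); // prints ORDER_FAILED:ord-1
    }
}
```

The order fails purely because of Payment's latency; with Kafka in between, the Order Service would have published its event and moved on.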

The Kafka Solution – "Event-Driven"
What if we redesigned the system so that services don't need to know about each other directly? This is where Apache Kafka enters the scene as a central message broker.

Instead of Order Service calling Payment Service directly, it simply publishes an event: "An order was placed."

Order Service (now a Producer) hands this event to Kafka and continues its work. It doesn't wait. It trusts Kafka to deliver.

Kafka then organizes this event into a logical category called a Topic (e.g., "orders"). Any other service that needs this information (now a Consumer) simply subscribes to that topic.

Key Kafka Concepts:
Producers: Services that create and send events.

Consumers: Services that read and process events.

Topics: A logical bucket or category for events.

Brokers: The actual servers that store and manage the data. Unlike many traditional message brokers (RabbitMQ, ActiveMQ, classic pub/sub systems, which typically delete messages once they are consumed), Kafka persists events to disk as a log, so they can be re-read by multiple consumers.

Mastering Kafka’s Architecture
To truly harness Kafka, you must understand how it scales. Two main concepts achieve this: Partitions and Consumer Groups.

1. Partitions

A Topic isn't just one big file. To allow parallel processing, Kafka splits a Topic into multiple Partitions. Think of a partition as a commit log: data is only ever appended at the end.

This architecture enables multiple producers to write to different partitions simultaneously and multiple consumers to read from different partitions simultaneously.
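How does an event end up on a particular partition? When a record has a key, Kafka's default partitioner hashes the key (with murmur2) modulo the partition count. Here is a sketch of that idea using `String.hashCode()` instead of murmur2, just to show the mechanism:

```java
import java.util.*;

public class PartitionSketch {

    // Sketch of key-based partition selection. Kafka's default partitioner
    // uses a murmur2 hash; String.hashCode() stands in for it here.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 3;
        for (String productId : List.of("iphone15", "ps5", "iphone15")) {
            System.out.printf("%s -> partition %d%n",
                    productId, partitionFor(productId, partitions));
        }
        // The same key always maps to the same partition, which is what
        // preserves per-key ordering.
    }
}
```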

2. Consumer Groups

To read partitions in parallel, consumers are organized into Consumer Groups. Kafka automatically ensures that each partition in a topic is assigned to exactly one consumer within that group. This provides massive scalability.
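The assignment itself can be pictured as dealing partitions out across group members. This is a simplified stdlib-only sketch, similar in spirit to Kafka's round-robin assignor (the real coordinator handles rebalances, stickiness, and failures):

```java
import java.util.*;

public class GroupAssignmentSketch {

    // Deal each partition to exactly one consumer in the group,
    // round-robin style.
    static Map<String, List<Integer>> assign(List<String> consumers,
                                             int partitions) {
        Map<String, List<Integer>> out = new LinkedHashMap<>();
        consumers.forEach(c -> out.put(c, new ArrayList<>()));
        for (int p = 0; p < partitions; p++) {
            out.get(consumers.get(p % consumers.size())).add(p);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(assign(List.of("c1", "c2"), 3));
        // {c1=[0, 2], c2=[1]} — every partition has exactly one owner
    }
}
```

Adding a third consumer to this group would give each one a single partition; adding a fourth would leave one consumer idle, which is why partition count caps a group's parallelism.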

Summarizing the Scale: The KRaft Architecture
Modern Kafka has replaced its dependency on an external coordination service (ZooKeeper) with a built-in consensus protocol called KRaft (Kafka Raft). KRaft was declared production-ready in Kafka 3.3, and ZooKeeper support was removed entirely in Kafka 4.0.

This simplifies the architecture. A modern Kafka node can act as both a Broker (storing data) and a Controller (managing cluster consensus).
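For reference, these are the key settings from Kafka's sample config/kraft/server.properties that enable combined mode (exact values vary by install; treat this as a sketch, not a full config):

```properties
# This node is both a broker and a KRaft controller
process.roles=broker,controller
node.id=1
# The controller quorum: node.id@host:controller-port
controller.quorum.voters=1@localhost:9093
# Client traffic on 9092, controller traffic on 9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
```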

Hands-On Project: A Java "Notify Me" Service for an E-Commerce Store
Now for the best part: bringing these concepts to life! We will build a simple "Notification" simulator:

User clicks "Notify Me"
        ↓
Subscription saved (DB)

Inventory Service:
item restocked
        ↓
Kafka Producer publishes event
        ↓
Topic: item-restocked
        ↓
Kafka Broker
        ↓
Notification Service (Consumer)
        ↓
Fetch subscribed users
        ↓
Send Email / Push Notification

Producer (Inventory Service): simulates items coming back in stock and publishes restock events to the item-restocked topic.

Consumer (Notification Service): listens to the item-restocked topic in real time and notifies the subscribed users.

Prerequisites
Kafka Setup (Local)

Install Apache Kafka

Start the broker. On Kafka 3.x you can run in KRaft mode (no ZooKeeper needed):

KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
bin/kafka-server-start.sh config/kraft/server.properties

Or, for an older ZooKeeper-based setup:

zookeeper-server-start.sh config/zookeeper.properties
kafka-server-start.sh config/server.properties

Create topic:

kafka-topics.sh --create \
--topic item-restocked \
--bootstrap-server localhost:9092 \
--partitions 3 \
--replication-factor 1

IntelliJ Project Setup

Create Maven project

Add dependency:

<dependency>
 <groupId>org.apache.kafka</groupId>
 <artifactId>kafka-clients</artifactId>
 <version>3.6.0</version>
</dependency>

Event Model

class RestockEvent {

    private String productId;
    private int quantity;

    public RestockEvent(String productId, int quantity) {
        this.productId = productId;
        this.quantity = quantity;
    }

    public String getProductId() { return productId; }
    public int getQuantity() { return quantity; }
}
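Since the producer below sends plain String values, the event model needs to travel as text. A minimal hand-rolled JSON sketch (a real project would more likely use Jackson or Avro with a schema registry):

```java
public class RestockEventJson {

    // Serialize a restock event to the String value the producer sends.
    // Hand-rolled on purpose to stay dependency-free; this does no escaping,
    // so it is a sketch, not a production serializer.
    static String toJson(String productId, int quantity) {
        return String.format("{\"productId\":\"%s\",\"quantity\":%d}",
                productId, quantity);
    }

    public static void main(String[] args) {
        System.out.println(toJson("iphone15", 10));
        // {"productId":"iphone15","quantity":10}
    }
}
```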

Kafka Producer (Inventory Service)

Publishes an event when item stock becomes available:

import org.apache.kafka.clients.producer.*;

import java.util.Properties;

public class RestockProducer {

    private static KafkaProducer<String,String> producer;

    static {

        Properties props = new Properties();

        props.put("bootstrap.servers","localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        producer = new KafkaProducer<>(props);
    }

    public static void publishRestock(String productId) {

        // Key the record by productId so all events for the same product
        // land on the same partition, preserving per-product ordering.
        ProducerRecord<String,String> record =
                new ProducerRecord<>("item-restocked", productId, productId);

        producer.send(record);
    }
}

Usage inside Inventory Service

if(product.getStock() > 0){
    RestockProducer.publishRestock(product.getId());
}

Kafka Consumer (Notification Service)

import org.apache.kafka.clients.consumer.*;

import java.time.Duration;
import java.util.*;

public class RestockConsumer {

    public static void main(String[] args) {

        Properties props = new Properties();

        props.put("bootstrap.servers","localhost:9092");
        props.put("group.id","notification-service");
        props.put("auto.offset.reset","earliest"); // read existing events on first run
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String,String> consumer =
                new KafkaConsumer<>(props);

        consumer.subscribe(List.of("item-restocked"));

        while(true){

            ConsumerRecords<String,String> records =
                    consumer.poll(Duration.ofMillis(100));

            for(ConsumerRecord<String,String> record : records){

                String productId = record.value();

                notifyUsers(productId);
            }
        }
    }

    static void notifyUsers(String productId){

        List<String> users =
                SubscriptionRepository.getUsers(productId);

        for(String user : users){
            System.out.println("Sending notification to " + user);
        }
    }
}
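One easy refinement inside the poll loop: a single poll() batch may contain several restock events for the same product, so deduplicating product IDs before hitting the database saves subscriber lookups. A stdlib-only sketch:

```java
import java.util.*;

public class BatchNotifySketch {

    // Within one poll() batch the same product may be restocked several
    // times; notifying its subscribers once per batch is enough.
    // LinkedHashSet keeps the original event order while dropping repeats.
    static Set<String> dedupe(List<String> productIdsFromPoll) {
        return new LinkedHashSet<>(productIdsFromPoll);
    }

    public static void main(String[] args) {
        List<String> polled = List.of("iphone15", "ps5", "iphone15");
        System.out.println(dedupe(polled)); // [iphone15, ps5]
    }
}
```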

Subscription Table

When a user clicks "Notify Me", a row is saved:

Subscription
------------
userId
productId
createdAt

Query used:

SELECT userId
FROM subscription
WHERE productId = ?
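The consumer above calls SubscriptionRepository.getUsers(...). For local experimentation, an in-memory stand-in with the same shape is enough (a real service would run the SQL query above against the subscription table):

```java
import java.util.*;

// In-memory stand-in for the SubscriptionRepository the consumer uses.
// Backed by a Map instead of the subscription table shown above.
public class SubscriptionRepository {

    private static final Map<String, List<String>> SUBS = new HashMap<>();

    // Called when a user clicks "Notify Me".
    public static void subscribe(String userId, String productId) {
        SUBS.computeIfAbsent(productId, k -> new ArrayList<>()).add(userId);
    }

    // Equivalent to: SELECT userId FROM subscription WHERE productId = ?
    public static List<String> getUsers(String productId) {
        return SUBS.getOrDefault(productId, List.of());
    }

    public static void main(String[] args) {
        subscribe("user1", "iphone15");
        subscribe("user2", "iphone15");
        System.out.println(getUsers("iphone15")); // [user1, user2]
        System.out.println(getUsers("ps5"));      // []
    }
}
```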

Example Event Flow

Inventory updated
Stock = 10

Producer sends:

Topic: item-restocked
Message: productId=iphone15

Consumer receives:

Notification service

Fetch subscribers:

user1
user2
user3

Send:

email / push notification

Why Kafka Works Well Here
Benefits:

Decouples the inventory and notification services
Scalable fan-out to millions of users
Asynchronous processing
Fault tolerance

Production Improvements
Real systems add:

Redis → store top subscribers
Kafka partitions → scale notifications
Email service workers
Rate limiting
Batch notifications
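Rate limiting, for instance, can start as simple as a token bucket in front of the email sender. A stdlib-only sketch (production systems usually reach for a Redis-backed or library implementation instead):

```java
public class TokenBucket {

    // Token bucket: refill `tokensPerSecond`, cap at `capacity`.
    // Each notification send must first acquire a token.
    private final long capacity;
    private final double refillPerNano;
    private double tokens;
    private long last = System.nanoTime();

    TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity; // start full to allow an initial burst
    }

    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - last) * refillPerNano);
        last = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false; // caller should queue or drop the notification
    }

    public static void main(String[] args) {
        TokenBucket limiter = new TokenBucket(2, 1.0); // burst of 2, then 1/sec
        System.out.println(limiter.tryAcquire()); // true
        System.out.println(limiter.tryAcquire()); // true
        System.out.println(limiter.tryAcquire()); // false (bucket drained)
    }
}
```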
