Event-Driven Architecture 101: Building a Simple App with Kafka
By Gopi Gugan

Event-driven architecture (EDA) has become a cornerstone of modern backend systems — powering real-time analytics, notification pipelines, and scalable microservices. Yet for many developers, Kafka still feels intimidating.

This guide breaks Kafka down into first principles and walks through a minimal, real-world example you can understand and run locally in under 10 minutes.


What Is Kafka (In Plain English)?

Apache Kafka is a distributed event streaming platform used to:

  • Publish events (producers)
  • Store events durably (topics)
  • Consume events (consumers)

Instead of services calling each other directly, they emit events to Kafka. Other services react to those events asynchronously, when they are ready.

Think of Kafka as a highly reliable, scalable event log.


When Should You Use Kafka?

Kafka is a strong choice when you need:

  • Asynchronous communication between services
  • High-throughput data pipelines
  • Real-time processing
  • Decoupled microservices

You probably do not need Kafka if:

  • You only have one service
  • You rely on simple request/response APIs
  • Your scale is small and predictable

Kafka is powerful — but unnecessary complexity is still complexity.


High-Level Architecture

At a high level, Kafka works like this:

  1. A producer sends an event to a topic
  2. Kafka stores the event durably
  3. One or more consumers read the event at their own pace

This eliminates tight coupling and prevents cascading failures between services.
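The three steps above can be sketched as a toy in-memory "broker". This is a conceptual model only (real Kafka adds partitions, replication, and durable on-disk storage), but it captures the core idea: producers append to a log, and each consumer reads from its own offset at its own pace.

```javascript
// Toy in-memory broker illustrating the produce -> store -> consume flow.
// Conceptual sketch only -- not how Kafka is implemented internally.
class ToyBroker {
  constructor() {
    this.topics = new Map(); // topic name -> array of events (the log)
  }

  // Steps 1 + 2: a producer appends an event; the log retains it durably.
  produce(topic, event) {
    if (!this.topics.has(topic)) this.topics.set(topic, []);
    this.topics.get(topic).push(event);
  }

  // Step 3: each consumer tracks its own offset and reads at its own pace.
  consume(topic, offset) {
    const log = this.topics.get(topic) ?? [];
    return log.slice(offset); // events this consumer has not seen yet
  }
}

const broker = new ToyBroker();
broker.produce("orders", { orderId: 1 });
broker.produce("orders", { orderId: 2 });

// Two independent consumers with separate offsets:
console.log(broker.consume("orders", 0).length); // 2 -- reads everything
console.log(broker.consume("orders", 1).length); // 1 -- already saw the first event
```

Note that consuming does not delete anything: the log stays intact, which is why a slow consumer never blocks a fast producer.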


Step 1: Run Kafka Locally with Docker

The fastest way to get started is Docker.

# docker-compose.yml
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.6.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:7.6.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Start Kafka:

docker compose up -d

You now have a working Kafka broker running locally.


Step 2: Create a Producer (Node.js)

A producer’s only responsibility is to emit events.

import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "order-producer", // identifies this app in broker logs
  brokers: ["localhost:9092"],
});

const producer = kafka.producer();

async function sendEvent() {
  await producer.connect();

  await producer.send({
    topic: "orders",
    messages: [
      {
        value: JSON.stringify({
          orderId: 123,
          total: 49.99,
        }),
      },
    ],
  });

  await producer.disconnect();
}

sendEvent().catch(console.error);

Key takeaway:

The producer does not know who consumes the event.


Step 3: Create a Consumer

Consumers react to events independently.

import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "billing-consumer", // identifies this app in broker logs
  brokers: ["localhost:9092"],
});

const consumer = kafka.consumer({ groupId: "billing-service" });

async function run() {
  await consumer.connect();
  // fromBeginning lets this consumer replay events produced before it started
  await consumer.subscribe({ topic: "orders", fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value.toString());
      console.log("Processing order:", event.orderId);
    },
  });
}

run().catch(console.error);

You can now add:

  • A shipping service
  • An analytics service
  • A notification service

All without changing the producer.
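The reason new services slot in so cleanly is the consumer-group model: every group receives every event, while consumers inside one group split the work. A minimal in-memory sketch of that fan-out (conceptual only, not kafkajs internals):

```javascript
// Sketch of consumer-group fan-out: each group keeps its own offset,
// so billing, shipping, and analytics all get their own copy of every event.
class ToyTopic {
  constructor() {
    this.log = [];
    this.groupOffsets = new Map(); // groupId -> next offset to read
  }

  produce(event) {
    this.log.push(event);
  }

  // Each group advances independently; groups never steal each other's events.
  poll(groupId) {
    const offset = this.groupOffsets.get(groupId) ?? 0;
    const events = this.log.slice(offset);
    this.groupOffsets.set(groupId, this.log.length);
    return events;
  }
}

const orders = new ToyTopic();
orders.produce({ orderId: 1 });

// Each group gets its own copy of the event:
console.log(orders.poll("billing-service").length);  // 1
console.log(orders.poll("shipping-service").length); // 1

// A group that is caught up sees nothing new:
console.log(orders.poll("billing-service").length);  // 0
```

This is the key difference from a traditional message queue, where a delivered message is gone for everyone.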


Why Event-Driven Architecture Scales

  Traditional Architecture | Event-Driven Architecture
  Tight coupling           | Loose coupling
  Synchronous calls        | Asynchronous events
  Hard to scale            | Horizontally scalable
  Fragile failures         | Resilient systems

Kafka acts like a shock absorber between services.


Common Kafka Mistakes to Avoid

  1. Treating Kafka like a queue (it is a log)
  2. Creating too many tiny topics
  3. Ignoring schema evolution
  4. Using Kafka when a database and cron job would be simpler

Kafka should reduce complexity — not add to it.
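Mistake #3 deserves a concrete example. Events outlive the code that produced them, so consumers must handle old shapes gracefully. Short of a full Schema Registry, a lightweight pattern is a version field plus additive, backward-compatible changes. The `schemaVersion` field and `parseOrderEvent` name below are illustrative conventions, not part of Kafka itself:

```javascript
// Backward-compatible event parsing: old v1 events (no currency field)
// still decode cleanly after v2 adds one.
function parseOrderEvent(raw) {
  const event = JSON.parse(raw);
  switch (event.schemaVersion ?? 1) {
    case 1:
      // v1 predates the currency field; default it rather than crash.
      return { ...event, currency: "USD" };
    case 2:
      return event;
    default:
      throw new Error(`Unknown schema version: ${event.schemaVersion}`);
  }
}

const v1 = JSON.stringify({ orderId: 1, total: 49.99 });
const v2 = JSON.stringify({ schemaVersion: 2, orderId: 2, total: 10, currency: "EUR" });

console.log(parseOrderEvent(v1).currency); // "USD"
console.log(parseOrderEvent(v2).currency); // "EUR"
```

A Schema Registry with Avro or Protobuf automates exactly this kind of compatibility checking, which is why it appears in the next section.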


When Kafka Becomes a Superpower

Kafka really shines when combined with:

  • Schema Registry (Avro or Protobuf)
  • Stream processing (Kafka Streams or Flink)
  • Real-time analytics pipelines
  • Event-driven notifications

At that point, Kafka becomes your system’s central nervous system.


Kafka is not scary — it is just a durable event log with rules.

If you understand:

  • Topics
  • Producers
  • Consumers
  • Consumer groups

You already understand most of Kafka.
