Your Personal Message Delivery System
Welcome to your hands-on Kafka learning journey! In this guide, we'll build a complete Kafka system on a single machine - perfect for learning, testing, and understanding how everything works together.
📦 What We're Building
Think of this as creating your own Digital Post Office on your computer:
┌───────────────────────────────────────────┐
│        YOUR LAPTOP (Single Machine)       │
│                                           │
│   ┌─────────────────────────────────┐     │
│   │    KAFKA BROKER (Port 9092)     │     │
│   │        Your Post Office         │     │
│   ├─────────────────────────────────┤     │
│   │                                 │     │
│   │  📬 Topic: "customer-orders"    │     │
│   │  📬 Topic: "payment-alerts"     │     │
│   │  📬 Topic: "user-activity"      │     │
│   │                                 │     │
│   └─────────────────────────────────┘     │
│        ▲                     │            │
│   PRODUCERS             CONSUMERS         │
│  (Send messages)    (Receive messages)    │
└───────────────────────────────────────────┘
Important Note: Single-node setup is ONLY for:
- ✅ Learning and experimentation
- ✅ Development and testing
- ✅ Understanding Kafka concepts
- ❌ NOT for production use (no fault tolerance!)
🚀 Step 1: Installation & Setup
Option A: Using Docker (Recommended - Easiest!)
# 1. Generate a unique Cluster ID
CLUSTER_ID=$(docker run --rm apache/kafka:latest /opt/kafka/bin/kafka-storage.sh random-uuid)
echo "Generated Cluster ID: $CLUSTER_ID"
# 2. Create a docker-compose.yml file with the generated ID
cat > docker-compose.yml << EOF
services:
  kafka1:
    image: apache/kafka:latest
    container_name: kafka1
    ports:
      - "9093:9093"
    volumes:
      - ./kafka-data/kafka1:/var/lib/kafka/data
    networks:
      - kafka-bridge
    environment:
      KAFKA_NODE_ID: 1
      CLUSTER_ID: ${CLUSTER_ID}
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:29092,EXTERNAL://localhost:9093
      KAFKA_LISTENERS: CONTROLLER://:9092,EXTERNAL://0.0.0.0:9093,INTERNAL://:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka1:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/data

networks:
  kafka-bridge:
    driver: bridge
EOF
# 3. Start Kafka
docker-compose up -d
# 4. Check if it's running
docker ps
# Expected output:
# CONTAINER ID IMAGE STATUS PORTS
# abc123def456 apache/kafka:latest Up 10 seconds 0.0.0.0:9093->9093/tcp
Alternative: Manual Cluster ID Generation
# If you prefer to set it manually, you can generate and copy the ID:
docker run --rm apache/kafka:latest /opt/kafka/bin/kafka-storage.sh random-uuid
# Output example: MkU3OEVBNTcwNTJENDM2Qk
# Then paste this ID into CLUSTER_ID in docker-compose.yml
Understanding the Configuration:
┌───────────────────────────────────────────────────┐
│              YOUR DOCKER KAFKA SETUP              │
├───────────────────────────────────────────────────┤
│                                                   │
│  Container: kafka1                                │
│  ┌─────────────────────────────────────────────┐  │
│  │ Port 9093  (External)   ← You connect here  │  │
│  │ Port 29092 (Internal)   ← Container-only    │  │
│  │ Port 9092  (Controller) ← Management        │  │
│  └─────────────────────────────────────────────┘  │
│                                                   │
│  Volume: ./kafka-data/kafka1                      │
│  (Your data persists here!)                       │
│                                                   │
└───────────────────────────────────────────────────┘
What Each Port Does:
- 9093 (EXTERNAL): Use this from your laptop (localhost:9093)
- 29092 (INTERNAL): Used by containers talking to each other
- 9092 (CONTROLLER): Kafka's internal coordination
Data Persistence:
Your messages are saved in ./kafka-data/kafka1 directory. Even if you stop the container, your data remains!
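If you're curious what lives inside that directory: Kafka gives each partition its own subdirectory named `<topic>-<partition>`, and within it, log segment files named after the first offset they contain, zero-padded to 20 digits. Here's a small Python sketch of that naming convention (simplified; a real data directory also contains `.index`, `.timeindex`, and checkpoint files):

```python
# Sketch of how Kafka names partition directories and log segment files.
# Simplified: real partitions also hold index and checkpoint files.

def partition_dir(topic: str, partition: int) -> str:
    """Each partition gets its own directory: <topic>-<partition>."""
    return f"{topic}-{partition}"

def segment_file(base_offset: int) -> str:
    """Segments are named by their first offset, zero-padded to 20 digits."""
    return f"{base_offset:020d}.log"

# What you'd expect to find under ./kafka-data/kafka1 for a 3-partition topic:
for p in range(3):
    print(f"kafka-data/kafka1/{partition_dir('coffee-orders', p)}/{segment_file(0)}")
```

Spotting `coffee-orders-0`, `coffee-orders-1`, and `coffee-orders-2` on disk after the next step is a nice way to confirm persistence is working.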
🏪 Step 2: Your First Kafka Topic
Let's create our coffee shop's order board!
Create a Topic
# Using Docker:
docker exec -it kafka1 /opt/kafka/bin/kafka-topics.sh \
--create \
--topic coffee-orders \
--bootstrap-server localhost:9093 \
--partitions 3 \
--replication-factor 1
What This Creates:
Topic: "coffee-orders" (3 sections on your order board)
┌───────────────────────────────────────────┐
│            COFFEE ORDERS BOARD            │
├───────────────────────────────────────────┤
│                                           │
│   Section 0     Section 1     Section 2   │
│  (Partition)   (Partition)   (Partition)  │
│  ┌────────┐    ┌────────┐    ┌────────┐   │
│  │ Order 1│    │ Order 2│    │ Order 3│   │
│  │ Order 4│    │ Order 5│    │ Order 6│   │
│  │  ...   │    │  ...   │    │  ...   │   │
│  └────────┘    └────────┘    └────────┘   │
└───────────────────────────────────────────┘
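How does an order end up in a particular section? When a message has a key, Kafka's default partitioner hashes the key so the same key always lands in the same partition; keyless messages (like the ones we'll type below) are spread across partitions in batches. Here's a simplified model of keyed routing (real Kafka hashes the key bytes with murmur2, not the toy hash used here):

```python
# Simplified model of Kafka's keyed partitioning: same key -> same partition.
# Real Kafka uses murmur2 on the key bytes; this toy hash just illustrates
# the "hash modulo partition count" idea.

def pick_partition(key: str, num_partitions: int) -> int:
    h = sum(key.encode())  # stable toy hash, reproducible across runs
    return h % num_partitions

# All of Alice's orders land in the same section of the board:
print(pick_partition("Alice", 3) == pick_partition("Alice", 3))  # True
```

This is why keying orders by customer name keeps each customer's orders in arrival order: ordering is guaranteed within a partition, not across partitions.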
List All Topics
docker exec -it kafka1 /opt/kafka/bin/kafka-topics.sh \
--list \
--bootstrap-server localhost:9093
View Topic Details
docker exec -it kafka1 /opt/kafka/bin/kafka-topics.sh \
--describe \
--topic coffee-orders \
--bootstrap-server localhost:9093
Output Explanation:
Topic: coffee-orders
PartitionCount: 3       ← Your order board has 3 sections
ReplicationFactor: 1    ← Only 1 copy (single node)
Partition: 0
  Leader: 1             ← Broker 1 manages this section
  Replicas: 1           ← Only one copy exists
  Isr: 1                ← In-sync replicas
☕ Step 3: Sending Orders (Producer)
Now let's be the cashier and take some orders!
Start the Console Producer
docker exec -it kafka1 /opt/kafka/bin/kafka-console-producer.sh \
--topic coffee-orders \
--bootstrap-server localhost:9093
Type these orders (press Enter after each):
{"customer":"Alice","drink":"Latte","size":"Medium","price":4.50}
{"customer":"Bob","drink":"Espresso","size":"Small","price":3.00}
{"customer":"Charlie","drink":"Cappuccino","size":"Large","price":5.50}
{"customer":"Diana","drink":"Mocha","size":"Medium","price":4.75}
{"customer":"Eve","drink":"Americano","size":"Large","price":3.50}
What's Happening:
You (Cashier)               Kafka                   Storage
      │                        │                        │
      ├─Order: Alice's Latte──►│                        │
      │                        ├──Save to Partition 0──►│
      │                        │                        │
      ├─Order: Bob's Espresso─►│                        │
      │                        ├──Save to Partition 1──►│
      │                        │                        │
      ├─Order: Charlie's Capp.►│                        │
      │                        ├──Save to Partition 2──►│
      │                        │                        │
Press Ctrl+C to stop the producer
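Typing JSON by hand gets tedious fast. A short script (hypothetical filename `orders.py`) can generate the same order lines, which you could then pipe into the console producer, since each line on stdin becomes one Kafka message:

```python
# Generate coffee-order JSON lines, one per line, suitable for piping
# into kafka-console-producer.sh (each printed line = one Kafka message).
import json

orders = [
    {"customer": "Alice",   "drink": "Latte",      "size": "Medium", "price": 4.50},
    {"customer": "Bob",     "drink": "Espresso",   "size": "Small",  "price": 3.00},
    {"customer": "Charlie", "drink": "Cappuccino", "size": "Large",  "price": 5.50},
]

for order in orders:
    print(json.dumps(order))
```

You could then run something like `python orders.py | docker exec -i kafka1 /opt/kafka/bin/kafka-console-producer.sh --topic coffee-orders --bootstrap-server localhost:9093` (note `-i` without `-t`, since we're piping stdin rather than typing interactively).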
🧑‍🍳 Step 4: Processing Orders (Consumer)
Now let's be the barista and read the orders!
Start the Console Consumer (Read from Beginning)
docker exec -it kafka1 /opt/kafka/bin/kafka-console-consumer.sh \
--topic coffee-orders \
--bootstrap-server localhost:9093 \
--from-beginning
You'll see all the orders scroll by:
{"customer":"Alice","drink":"Latte","size":"Medium","price":4.50}
{"customer":"Bob","drink":"Espresso","size":"Small","price":3.00}
{"customer":"Charlie","drink":"Cappuccino","size":"Large","price":5.50}
...
What's Happening:
Storage                   Kafka                   Barista (You)
   │                        │                         │
   │                        │◄──Request all orders────┤
   │                        │                         │
   ├──Send order 1─────────►├──Alice's Latte─────────►│
   ├──Send order 2─────────►├──Bob's Espresso────────►│
   ├──Send order 3─────────►├──Charlie's Cappuccino──►│
   │                        │                         │
Start Another Consumer (Real-time Only)
Open a NEW terminal and run:
docker exec -it kafka1 /opt/kafka/bin/kafka-console-consumer.sh \
--topic coffee-orders \
--bootstrap-server localhost:9093
This consumer only sees new orders (like a new barista who just arrived).
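The difference between the two consumers comes down to the starting offset. Conceptually, each partition is an append-only list, and every consumer just tracks its own position in it. Here's a tiny model of that idea (a conceptual sketch, not real Kafka client code):

```python
# Conceptual model of consumer offsets in a single partition
# (not real client code - just the core idea).
log = ["Alice's Latte", "Bob's Espresso", "Charlie's Cappuccino"]

def read_from(log, offset):
    """A consumer simply reads everything at or after its current offset."""
    return log[offset:]

print(read_from(log, 0))         # --from-beginning: sees all three orders
print(read_from(log, len(log)))  # default (latest): only future orders -> []
```

Reading a message doesn't remove it from the log, which is exactly why a second consumer can replay everything from the beginning.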
👥 Step 5: Multiple Producers & Consumers
Let's simulate a busy coffee shop with multiple cashiers and baristas!
Experiment Setup
Terminal 1 (Cashier A - Producer):
docker exec -it kafka1 /opt/kafka/bin/kafka-console-producer.sh \
--topic coffee-orders \
--bootstrap-server localhost:9093
Terminal 2 (Cashier B - Producer):
docker exec -it kafka1 /opt/kafka/bin/kafka-console-producer.sh \
--topic coffee-orders \
--bootstrap-server localhost:9093
Terminal 3 (Barista A - Consumer):
docker exec -it kafka1 /opt/kafka/bin/kafka-console-consumer.sh \
--topic coffee-orders \
--bootstrap-server localhost:9093 \
--from-beginning
Terminal 4 (Barista B - Consumer):
docker exec -it kafka1 /opt/kafka/bin/kafka-console-consumer.sh \
--topic coffee-orders \
--bootstrap-server localhost:9093 \
--from-beginning
Now:
- Type orders in Terminal 1 and 2 (both cashiers taking orders)
- Watch them appear in Terminal 3 and 4 (both baristas see ALL orders)
Visual:
Cashier A ──┐
            ├──► ORDER BOARD ──┬──► Barista A (sees all orders)
Cashier B ──┘     (Kafka)      └──► Barista B (sees all orders)
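Why do both baristas see every order? Each console consumer started without a `--group` flag runs in its own auto-generated consumer group, so each keeps an independent offset into the log. A small sketch of that, reusing the simplified log-plus-offset model (conceptual only, not real client code):

```python
# Two independent consumers (separate groups) over the same partition log:
# each tracks its own offset, so both receive every message.
log = ["Latte", "Espresso", "Cappuccino"]

offsets = {"barista-a": 0, "barista-b": 0}  # one offset per consumer group

def poll(consumer: str) -> list:
    records = log[offsets[consumer]:]
    offsets[consumer] = len(log)  # "commit": remember how far we've read
    return records

print(poll("barista-a"))  # all three orders
print(poll("barista-b"))  # also all three orders: independent offset
print(poll("barista-a"))  # nothing new -> []
```

If both consumers shared one group instead, Kafka would split the partitions between them and each barista would see only some of the orders: useful for scaling out, and a good experiment to try next.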