Bitnami is deprecating their Kafka images, and ZooKeeper is being removed from Apache Kafka 4.0. Time to migrate to KRaft mode with Confluent's official images for a future-proof, simpler architecture.
If you're using bitnami/kafka in your Docker setup, you've probably seen this warning:
⚠️ Important Notice: Beginning August 28th, 2025, Bitnami will evolve its public catalog... All existing container images have been migrated to "Bitnami Legacy" repository where they will no longer receive updates.
But there's a bigger issue: Apache Kafka is removing ZooKeeper entirely in version 4.0. Most current setups still rely on ZooKeeper, which means double trouble ahead.
The Solution: KRaft Mode
KRaft (Kafka Raft) eliminates ZooKeeper dependency entirely, giving you:
✅ Simpler architecture (fewer moving parts)
✅ Better performance and scalability
✅ Future-proof (ZooKeeper support ends with Kafka 3.9)
✅ Faster startup and recovery times
Before: The Old Way
```yaml
version: '3.8'
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    ports:
      - "2181:2181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: bitnami/kafka:latest
    ports:
      - "9092:9092"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
    depends_on:
      - zookeeper
```
Problems:
- Deprecated Bitnami images (no more updates)
- ZooKeeper dependency (removed in Kafka 4.0)
- Two services to manage
After: The Modern KRaft Way
Here's the future-proof solution using Confluent's official image:
```yaml
version: '3.8'
services:
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    ports:
      - "9092:9092"
    environment:
      # KRaft Configuration
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM1Tk
      # Network Configuration
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      # Storage Configuration
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      # Topic Configuration
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
```
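Two optional hardening tweaks on top of this minimal setup (a sketch, not part of the compose file above; the volume name `kafka-data` is my own): mount a named volume at `KAFKA_LOG_DIRS` so topic data and the formatted cluster metadata survive container restarts, and add a healthcheck so dependent services can wait for the broker to actually be ready:

```yaml
# Hypothetical additions to the kafka service above
services:
  kafka:
    # ...existing configuration...
    volumes:
      - kafka-data:/tmp/kraft-combined-logs   # persist data and cluster metadata
    healthcheck:
      test: ["CMD", "kafka-topics", "--bootstrap-server", "localhost:9092", "--list"]
      interval: 10s
      timeout: 10s
      retries: 5

volumes:
  kafka-data:
```

Other services can then use `depends_on` with `condition: service_healthy` instead of blindly retrying connections.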
Key Configuration Explained
Let me break down the important KRaft settings:
```yaml
KAFKA_NODE_ID: 1                              # Unique node identifier
KAFKA_PROCESS_ROLES: broker,controller        # Combined mode (handles both roles)
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093  # Controller election setup
CLUSTER_ID: MkU3OEVBNTcwNTJENDM1Tk            # Unique cluster identifier
```
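The `CLUSTER_ID` is a base64url-encoded 128-bit UUID (22 characters). The canonical way to generate one is `kafka-storage random-uuid` (for example via `docker run --rm confluentinc/cp-kafka:7.4.0 kafka-storage random-uuid`); if you just want a valid-format ID without pulling an image first, a plain shell sketch produces the same shape:

```shell
#!/bin/sh
# Generate a 22-character base64url ID in the same format that
# `kafka-storage random-uuid` produces: 16 random bytes,
# base64url-encoded, padding stripped.
CLUSTER_ID=$(head -c 16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=')
echo "$CLUSTER_ID"
```

Whatever you use, keep it stable: reformatting the storage directory with a different ID makes Kafka refuse to start against existing data.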
Network Configuration
```yaml
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
```
The controller listener (port 9093) is used internally for KRaft consensus, while 9092 remains your application port.
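One caveat with advertising `PLAINTEXT://kafka:9092`: clients inside the Docker network resolve the `kafka` service name, but an application on your host that connects to `localhost:9092` will be told to continue talking to `kafka:9092`, which doesn't resolve on the host. A common pattern (a sketch; the `PLAINTEXT_HOST` listener name and port 29092 are my own choices) is to advertise a separate listener per audience:

```yaml
ports:
  - "29092:29092"
environment:
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093,PLAINTEXT_HOST://0.0.0.0:29092
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
```

Containers then use `kafka:9092` and host tools use `localhost:29092`.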
Benefits You'll See Immediately
- Faster Startup: No waiting for ZooKeeper election
- Simpler Debugging: One service instead of two
- Better Resource Usage: No ZooKeeper overhead
- Future-Proof: Ready for Kafka 4.0
Conclusion
The writing is on the wall: ZooKeeper is going away, and Bitnami is changing their strategy. Don't wait for your pipelines to break in production.
KRaft mode isn't just a workaround; it's genuinely better: simpler architecture, better performance. Your future self will thank you for making this change now.
Have you made the switch to KRaft? Let me know your experience in the comments!
Top comments (2)
Great write-up! One tip: generate/format KRaft storage before first start (kafka-storage.sh random-uuid; kafka-storage.sh format --cluster-id … --config …) and mount a volume for KAFKA_LOG_DIRS so the cluster ID survives restarts. Also pin cp-kafka to a specific version and, if you scale to 3 brokers, list all voters in KAFKA_CONTROLLER_QUORUM_VOTERS and bump replication factors. Did you run into any advertised.listeners quirks with Docker host vs service name?
Hey Ivan, thanks for the tips! You’re right about formatting storage and persisting KAFKA_LOG_DIRS, that’s a great call for production. I’ve only tested using the service name in advertised.listeners, didn’t face issues so far, did you run into problems with host vs service name?