Message queues are one of those architectural choices where the wrong pick haunts you for years. Pick Kafka when RabbitMQ would have done, and you've bought a 3-node cluster, ZooKeeper (or KRaft) operations, partition management, and consumer group coordination — all to replace what would have been a single RabbitMQ box. Pick RabbitMQ when Kafka was the right call, and you'll spend months migrating when throughput overwhelms you.
At Xenotix Labs we've shipped systems using both. This post is a concrete decision guide, with two case studies from our own work.
The one-sentence summary
RabbitMQ is a message broker. Kafka is a distributed event log. They look similar on the surface, but their internal models are completely different — and that shows up in how you use them.
RabbitMQ model: work queues
RabbitMQ is optimized for task distribution. A producer sends a message, the broker routes it to one of many competing consumers, the consumer acks, and the message is deleted from the queue.
Key properties: messages are consumed once and then gone (no replay), routing is rich (direct/topic/fanout/headers exchanges), message priorities are supported, acks are per-message, and DLQs and TTLs are built in (delayed delivery needs the delayed-message exchange plugin, or a TTL-plus-DLQ trick). This makes RabbitMQ great for work-queue patterns: "process these orders", "send these emails", "resize these images".
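A toy in-memory sketch can make the delete-on-ack semantics concrete. The `WorkQueue` class and its method names below are our illustration, not RabbitMQ's API — real code would go through a client library such as pika against a live broker:

```python
import queue

# Toy model of RabbitMQ work-queue semantics: deliver to one consumer,
# delete on ack, dead-letter on nack. Illustration only, not the pika API.
class WorkQueue:
    def __init__(self):
        self._q = queue.Queue()   # competing consumers pull from here
        self._unacked = {}        # delivery_tag -> message awaiting ack/nack
        self._next_tag = 0
        self.dead_letters = []    # messages nacked without requeue land here

    def publish(self, message):
        self._q.put(message)

    def get(self):
        """Deliver one message to whichever consumer asks first."""
        message = self._q.get_nowait()
        self._next_tag += 1
        self._unacked[self._next_tag] = message
        return self._next_tag, message

    def ack(self, tag):
        # Ack deletes the message for good -- there is no replay.
        del self._unacked[tag]

    def nack(self, tag, requeue=True):
        message = self._unacked.pop(tag)
        (self._q.put if requeue else self.dead_letters.append)(message)

q = WorkQueue()
q.publish("resize image 42")
tag, msg = q.get()
q.nack(tag, requeue=False)   # processing failed -> dead-letter it
print(q.dead_letters)        # ['resize image 42']
```

Once a consumer acks, the message is gone from the broker entirely — which is exactly the property that makes RabbitMQ wrong for event sourcing later in this post.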
Kafka model: partitioned event log
Kafka is optimized for durable, ordered, replayable event streams. A producer appends an event to the end of a partition. Consumers read at their own pace, tracking position via offsets. Messages are never "consumed" — they sit in the log until retention expires.
Key properties: events are retained (rewind offsets and reprocess), ordering is per-partition, throughput is enormous (hundreds of thousands of events/sec on modest hardware), consumers are independent, partition keys matter (design once, hard to change later). This makes Kafka great for event-sourcing: "every trade", "every user interaction", "everything that happened in the system".
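The log model is just as easy to sketch. The `EventLog` and `Consumer` classes below are our illustration, not Kafka's API (real code would use a client such as confluent-kafka against a cluster); the point is that reading advances an offset and deletes nothing:

```python
# Toy model of Kafka's partitioned-log semantics. Illustration only.
class EventLog:
    def __init__(self, num_partitions=3):
        self.partitions = [[] for _ in range(num_partitions)]

    def append(self, key, event):
        # Same key always maps to the same partition -> per-key ordering.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(event)
        return p, len(self.partitions[p]) - 1   # (partition, offset)

class Consumer:
    """Each consumer tracks its own offsets; reading deletes nothing."""
    def __init__(self, log):
        self.log = log
        self.offsets = [0] * len(log.partitions)

    def poll(self, partition):
        events = self.log.partitions[partition][self.offsets[partition]:]
        self.offsets[partition] += len(events)
        return events

    def rewind(self, partition, offset=0):
        # Replay: move the offset back and the same events come back.
        self.offsets[partition] = offset

log = EventLog()
p, _ = log.append("user-7", "clicked")
log.append("user-7", "purchased")

c = Consumer(log)
print(c.poll(p))   # ['clicked', 'purchased']
c.rewind(p)
print(c.poll(p))   # same events again -- nothing was deleted
```

Two independent consumers would each hold their own `offsets` and never interfere with one another — that independence is what lets many downstream systems read the same stream.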
Case study 1: Veda Milk — RabbitMQ
Veda Milk is our D2C dairy subscription platform. Every night at 10 p.m., the system generates tomorrow's orders for every active subscriber. A classic work queue.
Why RabbitMQ: each message represents work that must succeed exactly once — ack-on-success, nack-on-failure, DLQ for retries. We don't need replay; if an order failed, the fix is manual retry, not replaying a week of events. Throughput is low (~100k messages per night — a rounding error for RabbitMQ). Delayed messages matter for wallet-low reminders. One RabbitMQ instance on Amazon MQ runs the whole thing.
Case study 2: Cricket Winner — Kafka
Cricket Winner is our real-time cricket platform with live scores, news feeds, and opinion trading. Every trade is an event published to the trades topic, partitioned by market_id. Multiple consumers — matching engine, pricing, settlement, personalization — read the same events.
Why Kafka: multiple consumers need the same events. Replay matters — when we found a matching-engine bug, we rewound the partition offset and reprocessed. Throughput is high (~50,000 trades/minute on match days). Partitioning on market_id gives per-market ordering and cross-market parallelism simultaneously. A three-broker MSK cluster holds up under match-day load.
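Keyed partitioning is simple to sketch. `partition_for` below is our stand-in (Kafka's default Java partitioner hashes keys with murmur2, not CRC32), but the property it demonstrates is the real one: the same market_id always lands on the same partition, so per-market order is preserved while different markets spread across partitions:

```python
import zlib

# Stand-in for Kafka's key-based partitioner; crc32 instead of murmur2.
# Deterministic: the same market_id always yields the same partition.
def partition_for(market_id: str, num_partitions: int = 3) -> int:
    return zlib.crc32(market_id.encode()) % num_partitions

# Hypothetical trades (market_id, trade) in arrival order.
trades = [
    ("MKT-IND-AUS", "buy@1.92"),
    ("MKT-ENG-PAK", "sell@2.10"),
    ("MKT-IND-AUS", "buy@1.95"),
]

partitions = {}
for market_id, trade in trades:
    p = partition_for(market_id)
    partitions.setdefault(p, []).append((market_id, trade))

# Both MKT-IND-AUS trades share one partition, in order; other markets
# may land elsewhere and be consumed in parallel.
print(partitions)
```

The flip side of "design once, hard to change later": repartitioning on a different key means rewriting the topic, so choose the key that matches the ordering guarantee you actually need.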
Decision checklist
Ask these in order:
- Do consumers need to replay events? Yes → Kafka.
- Do multiple independent systems need the same events? Yes → Kafka.
- Consistently exceeding 10,000 messages/second? Yes → Kafka.
- Need rich routing, priorities, delays, DLQs out of the box? Yes → RabbitMQ.
- Otherwise → RabbitMQ is almost always simpler to operate.
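The checklist collapses to a few lines of code. `choose_broker` and its parameter names are ours, not any real API — it just encodes the order of the questions above:

```python
# Toy encoding of the decision checklist; names are illustrative.
def choose_broker(need_replay: bool, multiple_consumers: bool,
                  msgs_per_sec: int, need_rich_routing: bool) -> str:
    if need_replay or multiple_consumers or msgs_per_sec > 10_000:
        return "kafka"
    # Rich routing, priorities, delays, DLQs -- or simply the simpler default.
    return "rabbitmq"

# Veda Milk: no replay, one consumer pipeline, tiny throughput, needs DLQs/delays
print(choose_broker(False, False, 2, True))       # rabbitmq
# Cricket Winner: replay + many consumers (~833 trades/sec peak)
print(choose_broker(True, True, 1_000, False))    # kafka
```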
Common mistakes
- Picking Kafka for a classic work queue — you'll end up implementing DLQs, priorities, and delays by hand, badly.
- Picking RabbitMQ for event sourcing — you lose history the moment a consumer acks.
- Running Kafka without a real ops plan (monitor ISR, disk, partition lag).
- Mixing the two without clear boundaries. It's fine to use both — we do — but draw the line: Kafka for events, RabbitMQ for tasks.
Need help designing your messaging layer?
Picking the right message broker is cheap to get right at day zero and brutally expensive to fix at year two. If you're architecting a real-time system, event-driven platform, or high-throughput commerce stack, Xenotix Labs has shipped the full spectrum — from subscription commerce on RabbitMQ to real-time trading on Kafka. Reach out at https://xenotixlabs.com.