Young Gao
Pub/Sub Messaging Patterns: Redis vs NATS — When to Use What (2026 Comparison)

You're building a notification system. Service A needs to tell Services B, C, and D that something happened. You reach for HTTP calls. Now you have three points of failure, retry logic everywhere, and a deployment that takes down notifications when the user service restarts.

Pub/sub exists to solve exactly this.

The Three Messaging Models

Before picking a tool, know what you actually need:

Request-Response: "Do this thing and tell me the result." HTTP, gRPC. Synchronous. The caller waits.

Queue: "Do this thing eventually." One message, one consumer. Work distribution. Think background jobs.

Pub/Sub: "This thing happened." One message, many consumers. Event broadcasting. Nobody waits for anybody.

Most backend systems need all three. The mistake is using one pattern for everything.

Redis Pub/Sub: The Simple Path

Redis Pub/Sub is fire-and-forget in the purest sense. Messages go to whoever is connected right now. Nobody listening? Message is gone.

import { createClient } from "redis";

// Publisher
const pub = createClient({ url: "redis://localhost:6379" });
await pub.connect();

async function publishOrderEvent(order: { id: string; status: string }) {
  await pub.publish("orders", JSON.stringify({
    event: "order.updated",
    data: order,
    timestamp: Date.now(),
  }));
}

// Subscriber — a connection in subscribe mode can't issue other
// commands, so use a dedicated client
const sub = createClient({ url: "redis://localhost:6379" });
await sub.connect();

await sub.subscribe("orders", (message) => {
  const event = JSON.parse(message);
  console.log(`Order ${event.data.id} -> ${event.data.status}`);
});

That's it. No consumer groups, no acks, no offsets. Ridiculously simple.

When Redis Pub/Sub fits:

  • Real-time notifications (WebSocket fan-out)
  • Cache invalidation across instances
  • Live dashboards, presence indicators
  • You already run Redis and don't want another dependency

When it doesn't:

  • You need message durability (subscriber was down for 30 seconds and missed events)
  • You need replay (new service joins and wants historical events)
  • You need backpressure (slow consumer can't keep up)

For durable streaming with Redis, look at Redis Streams — a different API entirely. But at that point, consider whether NATS is a better fit.

NATS: The Messaging Swiss Army Knife

NATS core gives you pub/sub like Redis — fast, ephemeral, at-most-once delivery. But NATS JetStream adds persistence, replay, consumer groups, and exactly-once semantics on top.

import { connect, JSONCodec } from "nats";

const nc = await connect({ servers: "nats://localhost:4222" });
const jc = JSONCodec();

// --- Core NATS: ephemeral pub/sub ---
nc.subscribe("events.orders.*", {
  callback: (_err, msg) => {
    const data = jc.decode(msg.data);
    console.log(`Got ${msg.subject}:`, data);
  },
});

nc.publish("events.orders.created", jc.encode({ id: "abc", total: 99 }));

// --- JetStream: persistent pub/sub ---
const jsm = await nc.jetstreamManager();
await jsm.streams.add({
  name: "ORDERS",
  subjects: ["orders.>"],
  retention: "limits" as any,
  max_msgs: 100_000,
  max_age: 7 * 24 * 60 * 60 * 1_000_000_000, // 7 days in nanos
});

const js = nc.jetstream();

// Publish with persistence
await js.publish("orders.created", jc.encode({ id: "abc", total: 99 }));

// Durable consumer — survives restarts, tracks position
// (assumes "order-processor" was already created via jsm.consumers.add)
const consumer = await js.consumers.get("ORDERS", "order-processor");
const messages = await consumer.consume();

for await (const msg of messages) {
  const order = jc.decode(msg.data);
  try {
    await processOrder(order);
    msg.ack();
  } catch (e) {
    msg.nak(); // redelivery
  }
}

Subject wildcards (orders.*, orders.>) give you topic routing without configuring exchanges. * matches one token. > matches one or more.
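To make those semantics concrete, here's a toy matcher — a re-implementation for illustration only (real routing happens inside the NATS server):

```typescript
// Toy matcher illustrating NATS subject wildcard semantics.
// "*" matches exactly one token; ">" matches one or more trailing tokens.
function subjectMatches(pattern: string, subject: string): boolean {
  const p = pattern.split(".");
  const s = subject.split(".");
  for (let i = 0; i < p.length; i++) {
    if (p[i] === ">") return s.length > i; // ">" must cover at least one token
    if (i >= s.length) return false;      // subject ran out of tokens
    if (p[i] !== "*" && p[i] !== s[i]) return false;
  }
  return p.length === s.length; // no trailing subject tokens left over
}

console.log(subjectMatches("orders.*", "orders.created"));    // true
console.log(subjectMatches("orders.*", "orders.created.eu")); // false: "*" is one token
console.log(subjectMatches("orders.>", "orders.created.eu")); // true: ">" is one or more
console.log(subjectMatches("orders.>", "orders"));            // false: ">" needs a token
```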

Fan-Out vs Fan-In

Fan-out: one event, multiple independent consumers. Order created → inventory service adjusts stock, email service sends confirmation, analytics service logs it. Each gets every message.

In JetStream, each service creates its own durable consumer on the same stream (with core NATS, every plain subscription on the subject gets a copy). In Redis, each subscriber on the channel gets every message. Easy.

Fan-in: many producers, one logical consumer processing all messages. Multiple API servers publishing to one audit.logs subject, single aggregator consuming. Both Redis and NATS handle this naturally — many publishers, one subscriber (or one consumer group).

The pattern that trips people up is competing consumers — fan-out at the topic level, but load-balanced within each service. Three instances of the email service should share the work, not send three emails.

NATS handles this with queue groups:

// All instances with the same queue name share messages
nc.subscribe("orders.created", {
  queue: "email-service",
  callback: (_err, msg) => {
    // Only ONE instance processes each message
    sendConfirmationEmail(jc.decode(msg.data));
  },
});

Redis Pub/Sub can't do this. Every subscriber gets every message. You'd need Redis Streams with consumer groups instead.

Dead Letter Handling

Messages will fail. Your system needs a plan for that.

NATS JetStream has built-in max delivery attempts. After N redeliveries, you route to a dead letter subject:

// Consumer config with max deliveries
await jsm.consumers.add("ORDERS", {
  durable_name: "order-processor",
  ack_policy: "explicit" as any, // max_deliver only applies with explicit acks
  max_deliver: 5,
  // After 5 failed deliveries, JetStream publishes an advisory on
  // $JS.EVENT.ADVISORY.CONSUMER.MAX_DELIVERIES.ORDERS.order-processor
});

// Monitor the advisory subject for dead letters
nc.subscribe("$JS.EVENT.ADVISORY.CONSUMER.MAX_DELIVERIES.>", {
  callback: (_err, msg) => {
    const advisory = jc.decode(msg.data);
    console.error("Dead letter:", advisory);
    // Store in DB, alert on-call, push to DLQ stream
  },
});

With Redis Pub/Sub? You're on your own. There's no retry, no ack, no dead letter concept. If the handler throws, the message is gone. Build your own retry layer or accept the trade-off.
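If you do stay on Redis Pub/Sub, a minimal in-process retry layer looks something like this (names and limits are illustrative; a real dead-letter path would write somewhere durable):

```typescript
// Minimal retry wrapper for a pub/sub handler: retries with exponential
// backoff, then hands the message to a dead-letter callback.
async function withRetry<T>(
  handler: (msg: T) => Promise<void>,
  msg: T,
  opts: {
    maxAttempts: number;
    baseDelayMs: number;
    onDeadLetter: (msg: T, err: unknown) => void;
  },
): Promise<void> {
  for (let attempt = 1; attempt <= opts.maxAttempts; attempt++) {
    try {
      await handler(msg);
      return; // success — stop retrying
    } catch (err) {
      if (attempt === opts.maxAttempts) {
        opts.onDeadLetter(msg, err); // out of attempts: dead-letter it
        return;
      }
      // Exponential backoff: base, 2x base, 4x base, ...
      await new Promise((r) => setTimeout(r, opts.baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

You'd call `withRetry(handleOrderEvent, event, ...)` inside the subscribe callback. Note this only covers in-process failures — it does nothing for messages missed while disconnected.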

Quick Decision Matrix

| Need | Redis Pub/Sub | NATS Core | NATS JetStream |
| --- | --- | --- | --- |
| Ephemeral broadcast | Yes | Yes | Overkill |
| Message durability | No | No | Yes |
| Replay from history | No | No | Yes |
| Competing consumers | No | Yes (queue groups) | Yes |
| Wildcard routing | Pattern subscribe | Yes | Yes |
| Dead letters | DIY | DIY | Built-in |
| Ops complexity | Near zero | Low | Medium |

Common Mistakes

Using pub/sub when you need a queue. If only one consumer should process each message, you need competing consumers or a proper job queue. Raw pub/sub duplicates work.

Assuming Redis Pub/Sub is durable. It isn't. Not even a little. If your subscriber disconnects for one second, those messages are gone. This catches people who confuse Pub/Sub with Streams.

Ignoring backpressure. A slow consumer on NATS core gets disconnected by the server and drops messages. On JetStream, unacked messages pile up. Either way, you need monitoring. Set max_ack_pending limits and alert before you hit them.
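A sketch of the relevant JetStream consumer settings (field names follow the nats.js consumer config; the values are illustrative, not recommendations):

```typescript
// Illustrative consumer config: cap in-flight (unacked) messages so a slow
// consumer pauses delivery instead of accumulating unbounded pending state.
const consumerConfig = {
  durable_name: "order-processor",
  max_ack_pending: 1_000,       // stop delivering once 1,000 msgs await ack
  ack_wait: 30 * 1_000_000_000, // 30s (in nanoseconds) before redelivery
};
```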

Publishing large payloads. Pub/sub is for events, not data transfer. Publish { orderId: "abc" }, not the entire order with all line items. Let consumers fetch what they need.
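The thin-event idea in code (the shape is illustrative — adapt the fields to your domain):

```typescript
// Thin event: a pointer plus just enough metadata to route and order.
// Consumers fetch the full order from the source of truth when needed.
interface OrderCreatedEvent {
  event: "order.created";
  orderId: string;
  occurredAt: number;
}

function toThinEvent(order: {
  id: string;
  items: unknown[];
  total: number;
}): OrderCreatedEvent {
  // Deliberately drops items/total — only the reference crosses the wire
  return { event: "order.created", orderId: order.id, occurredAt: Date.now() };
}
```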

Not setting max age on streams. JetStream streams grow forever by default. Set max_age and max_msgs. One team I saw had 400GB of NATS data because nobody set limits.

Skipping the schema. When five services consume the same event, a schema (JSON Schema, Protobuf, whatever) is the only thing preventing "I added a field and broke three services" incidents.
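Even without a schema registry, a shared runtime guard in a common package goes a long way — a hand-rolled sketch (JSON Schema or Protobuf give you the same check plus codegen):

```typescript
interface OrderUpdated {
  event: "order.updated";
  data: { id: string; status: string };
  timestamp: number;
}

// Runtime guard every consumer runs before touching the payload:
// unknown or malformed events get rejected instead of crashing handlers.
function isOrderUpdated(value: unknown): value is OrderUpdated {
  const v = value as Partial<OrderUpdated> | null;
  return (
    v?.event === "order.updated" &&
    typeof v.data?.id === "string" &&
    typeof v.data?.status === "string" &&
    typeof v.timestamp === "number"
  );
}
```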


Part of my Production Backend Patterns series. Follow for more practical backend engineering.

