Md Zonieed Hossain

Kafka Inbox/Outbox in Go — Guaranteeing Exactly-Once Delivery in a Multi-Tenant ERP

The Problem

In a multi-tenant ERP handling concurrent stock and order updates,
duplicate event processing is not a theoretical risk — it is a
production reality.

In our system at Gononet, duplicate events happened for two reasons:

  1. Network retries — a message is delivered but the consumer crashes before committing the offset. Kafka re-delivers it.
  2. Double-submit orders — a user clicks "confirm order" twice due to slow network. Two events hit the queue simultaneously.

The result? Stock deducted twice. Orders created twice.
Inventory counts wrong. In a warehouse system, this is a critical failure.

We needed exactly-once processing of each event. This is how we built it.


Why Not Just Use Kafka Transactions?

Kafka transactions provide exactly-once semantics within Kafka
itself: atomic producer writes and read-process-write pipelines
between topics. But our problem was on the consumer side, at the
boundary between Kafka and our database. We needed to guarantee
that our business logic (stock deduction, ledger insert)
executed exactly once, not just that the message arrived once.

Kafka transactions do not solve this. The Inbox pattern does.


The Inbox Pattern — Core Idea

The idea is simple:

  1. When a message arrives, first record it in an inbox table with its unique message_id
  2. Wrap the inbox insert + business logic + ledger update in a single database transaction
  3. If the message_id already exists, the message was already processed. Return early.

Because everything happens in one atomic transaction,
there is no window for duplicates.


Implementation in Go

The Inbox Table

CREATE TABLE inventory_inbox (
    message_id   TEXT PRIMARY KEY,
    processed_at TIMESTAMPTZ DEFAULT NOW()
);

Simple. The message_id is the Kafka message key —
unique per business event.
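For illustration, here is one way such a key could be derived deterministically on the producer side. The `MessageID` helper and its inputs are hypothetical, not part of our codebase; the point is that a stable input (tenant + entity + version) means a retried or double-submitted event produces the same message_id and is caught by the inbox:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// MessageID builds a deterministic, tenant-scoped ID for a business event.
// Because the inputs are stable, a network retry or a double-submit of the
// same event hashes to the same key and deduplicates in the inbox table.
func MessageID(tenantID, orderID string, version int) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%s:%d", tenantID, orderID, version)))
	return hex.EncodeToString(sum[:])
}

func main() {
	a := MessageID("tenant-42", "order-1001", 1)
	b := MessageID("tenant-42", "order-1001", 1) // retry of the same event
	fmt.Println(a == b)                          // prints true: retries share one message_id
}
```

A random UUID generated per produce call would defeat the pattern for double-submits, since each click would get a fresh ID.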

The Repository

// inbox.go
func (r *Repository) InsertFirstAttempt(
    ctx context.Context, 
    tx bun.Tx, 
    messageID string,
) (bool, error) {
    result, err := tx.NewInsert().
        TableExpr("inventory_inbox").
        Value("message_id", "?", messageID).
        On("CONFLICT (message_id) DO NOTHING").
        Exec(ctx)
    if err != nil {
        return false, err
    }
    rows, _ := result.RowsAffected()
    return rows > 0, nil
}

If rows == 0 — the message was already processed.
We return early. No duplicate processing.

The Consumer

func (c *Consumer) HandleStockUpdate(
    ctx context.Context, 
    msg kafka.Message,
) error {
    tx, err := c.db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer tx.Rollback() // harmless no-op once Commit has succeeded

    // Step 1 — check inbox
    isFirst, err := c.repo.InsertFirstAttempt(
        ctx, tx, string(msg.Key),
    )
    if err != nil {
        return err
    }
    if !isFirst {
        // duplicate — skip silently
        return nil
    }

    // Step 2 — business logic inside same transaction
    if err := c.repo.DeductStock(ctx, tx, msg); err != nil {
        return err
    }

    // Step 3 — commit everything atomically
    return tx.Commit()
}

The inbox check and the stock deduction commit atomically
in a single transaction. Either both take effect or neither does.


Handling the Offset Commit Failure Edge Case

Q: What if the transaction commits but Kafka offset commit fails?

Kafka will re-deliver the message. But when it arrives again,
InsertFirstAttempt finds the message_id already in the inbox
and returns early. The business logic never runs twice.

This is the key insight — the inbox makes the consumer
idempotent by design, not by luck.
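The redelivery guarantee can be sketched with an in-memory stand-in for the inbox table. `memInbox` below is illustrative only (a map in place of Postgres, so it lacks the atomicity of the real database transaction), but it shows the dedup semantics of ON CONFLICT DO NOTHING:

```go
package main

import (
	"fmt"
	"sync"
)

// memInbox is an in-memory stand-in for the inventory_inbox table,
// used only to illustrate the dedup behaviour of the SQL version.
type memInbox struct {
	mu   sync.Mutex
	seen map[string]bool
}

// InsertFirstAttempt mirrors ON CONFLICT DO NOTHING: it reports true
// only the first time a given messageID is recorded.
func (i *memInbox) InsertFirstAttempt(messageID string) bool {
	i.mu.Lock()
	defer i.mu.Unlock()
	if i.seen[messageID] {
		return false
	}
	i.seen[messageID] = true
	return true
}

func main() {
	inbox := &memInbox{seen: map[string]bool{}}
	stock := 10

	handle := func(messageID string) {
		if !inbox.InsertFirstAttempt(messageID) {
			return // duplicate delivery: business logic is skipped
		}
		stock-- // stands in for DeductStock
	}

	handle("msg-1") // first delivery
	handle("msg-1") // Kafka re-delivers after a failed offset commit
	fmt.Println(stock) // prints 9: stock deducted exactly once
}
```

In production the insert and the deduction must share one database transaction, as shown earlier; the map version cannot provide that atomicity.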


Inbox Table Retention

The inbox table cannot grow forever. We run a background
cleanup worker using a cron job that deletes records older
than 7 days:

func (w *Worker) CleanInbox(ctx context.Context) error {
    _, err := w.db.NewDelete().
        TableExpr("inventory_inbox").
        Where("processed_at < NOW() - INTERVAL '7 days'").
        Exec(ctx)
    return err
}

Seven days is a generous buffer: in practice, duplicates arrive
within seconds or minutes, not days.


Results

After implementing the Kafka Inbox pattern across our
stock and order workflows at Gononet:

  • Duplicate stock deductions caused by network retries — eliminated
  • Double-submit order duplicates — eliminated
  • Zero changes required to Kafka configuration or producer code
  • Pattern reusable across any consumer in the system

When To Use This Pattern

Use the Inbox pattern when:

  • You have at-least-once Kafka delivery (the default)
  • Your consumer does database writes as part of business logic
  • Duplicate processing has real business consequences (money, inventory, orders)

Do not use it when:

  • Your consumer is read-only
  • Duplicate processing is harmless (analytics, logging)
  • You can use Kafka Streams with exactly-once semantics natively

Summary

The Kafka Inbox pattern is one of the most practical tools
in distributed systems engineering. It does not require
exotic infrastructure — just a table, a transaction and
a unique message ID.

If you are building event-driven systems where correctness
matters more than throughput, this pattern belongs in your toolkit.


I am a Senior Backend Engineer specialising in Go distributed
systems. Currently building RetailerBook ERP and GorillaMove
grocery delivery backend at Gononet. Open to senior backend
and staff engineer roles — remote or relocation worldwide.

GitHub: github.com/zonieedhossain
LinkedIn: linkedin.com/in/zonieedhossain
