
Harnoor Puniyani

Understanding Azure Service Bus: How to Manage Routing, Sessions, and Atomicity

When I first used Azure Service Bus, I thought it was “just a queue.” It’s much more: routing, ordering, retries, dead-letter queues (DLQ), and transactions, which makes it useful for high‑volume, near real‑time, stateful integrations.

Service Bus in 60 seconds

  • Queues: point‑to‑point

  • Topics & Subscriptions: pub/sub with SQL filters on message properties

  • Sessions: FIFO per key (keeps order for a customer/order)

  • Retries & DLQ: automatic retries; poison messages move to DLQ

  • Extras: duplicate detection, scheduled delivery, deferral, transactions
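
To make the baseline concrete, here is a minimal point‑to‑point sketch using the Python SDK (azure-servicebus); the connection string and queue name are placeholders, not values from this article.

```python
# Minimal point-to-point send/receive with azure-servicebus (Python SDK v7).
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder
QUEUE = "orders"                              # placeholder queue name

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Producer: send one message to the queue.
    with client.get_queue_sender(queue_name=QUEUE) as sender:
        sender.send_messages(ServiceBusMessage("order payload"))

    # Consumer: peek-lock is the default receive mode; settle explicitly.
    with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)  # removes the message on success
```

Peek‑lock receiving is what makes the settlement options in section 3 (complete, abandon, defer, dead‑letter) possible.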


Using Service Bus in Modern Architectures

I’m highlighting three common use cases that are easily handled with queues and topics.

1) Event‑Driven Architecture with Routed Messages

Scenario: accept requests, fan‑out by properties, and keep downstreams isolated.

For an asynchronous inbound flow, expose a topic that clients send messages to. Each message carries custom metadata (for example, a type application property), and SQL filters route it to the matching subscription: type = 'ERP_PRODUCT_MASTER' goes to the product subscription, type = 'ERP_CUSTOMER_MASTER' goes to the customer subscription. If a subscription needs every event (say, a logs subscription), leave its filter at the default, and it receives both product and customer events alongside the filtered subscriptions.
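
Here is a rough sketch of that routing setup with the Python SDK; the topic, subscription, and rule names are illustrative assumptions, and it presumes each filtered subscription still has its default $Default rule.

```python
# Sketch: route by a custom "type" application property using SQL filters.
from azure.servicebus import ServiceBusClient, ServiceBusMessage
from azure.servicebus.management import ServiceBusAdministrationClient, SqlRuleFilter

CONN_STR = "<service-bus-connection-string>"  # placeholder
TOPIC = "inbound-events"                      # placeholder topic name

admin = ServiceBusAdministrationClient.from_connection_string(CONN_STR)

# Replace each subscription's default rule with a SQL filter on the property.
for sub, expr in [("product", "type = 'ERP_PRODUCT_MASTER'"),
                  ("customer", "type = 'ERP_CUSTOMER_MASTER'")]:
    admin.delete_rule(TOPIC, sub, "$Default")
    admin.create_rule(TOPIC, sub, "route-by-type", filter=SqlRuleFilter(expr))
# The "logs" subscription keeps its default (match-all) rule, so it sees every event.

# Producer side: stamp the routing property on each message.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_topic_sender(topic_name=TOPIC) as sender:
        sender.send_messages(ServiceBusMessage(
            "product master payload",
            application_properties={"type": "ERP_PRODUCT_MASTER"},
        ))
```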

Figure: high‑level event‑driven architecture flow.

2) Sessions as a Keyed Index (the “HashMap” trick)

The docs focus on sessions for strict ordering, but you can also use a session as a single‑item, keyed bucket for quick lookups.

Imagine a transactional flow where you store the payload in Service Bus and keep only a reference to that message in the client system. When the client later sends the reference back, you need to fetch the matching message from Service Bus. The simplest approach is to use sessions, as in the sketch below:

  • Set SessionId to a business key (e.g., REF-123)

  • Store one message per session (acts like key → value)

  • The consumer accepts the session with that ID to fetch the message directly

Benefits:

  • No scanning/deferring through many messages

  • Avoids long lock durations while you hunt for the right payload
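
A minimal sketch of the pattern with the Python SDK, assuming a session‑enabled queue; the queue name and the REF-123 key are placeholders.

```python
# Sketch: use SessionId as a lookup key on a session-enabled queue.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder
QUEUE = "pending-payloads"   # placeholder; queue must have "requires session" enabled
REF = "REF-123"              # business key shared with the client system

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Store: one message per session, keyed by the business reference.
    with client.get_queue_sender(queue_name=QUEUE) as sender:
        sender.send_messages(ServiceBusMessage("payload for REF-123", session_id=REF))

    # Fetch later: accept exactly that session to read the message directly.
    with client.get_queue_receiver(queue_name=QUEUE, session_id=REF,
                                   max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)
```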

3) Transactional Atomicity

Goal: process a batch atomically and complete only on success. A sketch of the settlement flow follows after the notes below.

  • Receive in peek‑lock, do work, then complete

  • On transient failure: abandon (retry)

  • On business retry later: defer (preserve for targeted pickup)

  • On poison: let it exceed max deliveries → DLQ

Notes:

  • Tune MaxDeliveryCount to balance retries vs. fast DLQ

  • Renew locks if processing may exceed LockDuration

  • Make handlers idempotent (at‑least‑once delivery)
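
Putting the settlement options and notes together, here is a sketch with the Python SDK; process(), TransientError, and DeferForLater are hypothetical stand‑ins for your business logic, not SDK types, and the queue name is a placeholder.

```python
# Sketch of the settlement flow: complete on success, abandon on transient
# failure, defer for a later targeted retry, dead-letter what is unrecoverable.
from azure.servicebus import ServiceBusClient, AutoLockRenewer

CONN_STR = "<service-bus-connection-string>"  # placeholder
QUEUE = "batch-work"                          # placeholder

class TransientError(Exception): pass   # hypothetical: retryable failure
class DeferForLater(Exception): pass    # hypothetical: business says "not yet"

def process(msg):                        # hypothetical, idempotent business logic
    pass

renewer = AutoLockRenewer()  # keeps locks alive if work may exceed LockDuration

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            renewer.register(receiver, msg, max_lock_renewal_duration=300)
            try:
                process(msg)
                receiver.complete_message(msg)   # success: remove from queue
            except TransientError:
                receiver.abandon_message(msg)    # retry; counts toward MaxDeliveryCount
            except DeferForLater:
                receiver.defer_message(msg)      # park for targeted pickup by sequence number
            except Exception:
                # Explicit dead-lettering is one option; repeatedly abandoning until
                # MaxDeliveryCount is exceeded also lands the message in the DLQ.
                receiver.dead_letter_message(msg, reason="unrecoverable")
```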


Ops and Security Tips

  • Enable duplicate detection if producers can retry sends

  • Alert on DLQ > 0, retry spikes, handler failures

  • Prefer Premium + private endpoints/VNet where possible

  • Use Managed Identities/App Registrations instead of SAS keys (see the sketch below)

  • Log CorrelationId and business keys; avoid payload logging
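
For the identity point, this is roughly what a SAS‑free connection looks like with azure-identity; the namespace is a placeholder.

```python
# Sketch: connect with Microsoft Entra ID (managed identity / app registration)
# instead of a SAS connection string.
from azure.identity import DefaultAzureCredential
from azure.servicebus import ServiceBusClient

client = ServiceBusClient(
    fully_qualified_namespace="<namespace>.servicebus.windows.net",  # placeholder
    credential=DefaultAzureCredential(),  # managed identity, env vars, or az login
)
```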


Thanks for reading. If you want configuration‑focused examples (Functions/Logic Apps, sessions, filters, or DLQ replay), let me know.
