
Muhammad Arslan

Introducing @hazeljs/pubsub: Google Cloud Messaging, HazelJS Style

We shipped @hazeljs/pubsub to make Google Cloud Pub/Sub feel native inside HazelJS applications.

If your team already runs on GCP, Pub/Sub usually becomes the backbone for async communication. But most app code ends up repeating the same integration work: wiring clients, parsing payloads, handling ack/nack, and registering handlers with slightly different conventions in every service.

This package turns that into a consistent HazelJS module + decorator experience.


TL;DR

  • Use @hazeljs/pubsub to publish/consume Pub/Sub messages in HazelJS with less boilerplate.
  • You get a DI-friendly publisher service, decorator-based consumers, and explicit acknowledgement control.
  • It helps teams standardize event-driven code across multiple services.

Why we built it

We built this package because we kept seeing the same pain points in event-driven Node.js services:

  1. Too much repeated plumbing

    Every service re-implements Pub/Sub initialization and subscription handling.

  2. Inconsistent handler behavior

    Some handlers auto-ack, some forget to nack, some swallow errors, and reliability suffers.

  3. Leaky transport concerns in business code

    Product logic gets mixed with low-level broker/client setup.

  4. Harder onboarding

    New developers need to learn each service’s custom Pub/Sub pattern instead of one framework pattern.

HazelJS already gives a clean, declarative style for modules, providers, and decorators. Pub/Sub should follow the same principle.


Purpose of @hazeljs/pubsub

The purpose is simple: make Pub/Sub integration predictable, testable, and framework-native.

@hazeljs/pubsub gives you:

  • a single module entrypoint (PubSubModule.forRoot / forRootAsync)
  • one publisher service (PubSubPublisherService)
  • declarative consumers (@PubSubConsumer + @PubSubSubscribe)
  • clear acknowledgement behavior (ackOnSuccess, nackOnError, plus manual ack()/nack())

So instead of each service inventing its own event-consumer framework, your team uses one shared pattern.


What problems it solves

1) Boilerplate client setup

Without a package abstraction, every service manually creates and passes Pub/Sub clients.

With PubSubModule, setup is centralized and DI-ready.

2) Message handling drift

Ack/nack logic is usually spread across handlers and easy to get wrong.

With @PubSubConsumer + @PubSubSubscribe, defaults are explicit and overrideable.

3) Payload parsing repetition

Teams repeatedly decode and parse message payloads.

With package defaults, JSON parsing and handler payload typing are built in.
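Conceptually, the decoding that the package's JSON default automates looks like the sketch below. The `RawMessage` shape and `parsePayload` helper are illustrative, not the package's actual internals: a raw Pub/Sub message carries its payload as a Buffer, and each handler would otherwise decode and parse it by hand.

```typescript
// Illustrative sketch of the payload parsing the package centralizes.
interface RawMessage {
  data: Buffer;
  attributes: Record<string, string>;
}

function parsePayload<T>(message: RawMessage): T {
  // Decode the Buffer to UTF-8 text, then parse JSON into the expected shape.
  return JSON.parse(message.data.toString("utf8")) as T;
}

// Example: a hand-built message, as a unit test might construct it.
const msg: RawMessage = {
  data: Buffer.from(JSON.stringify({ id: "o-1", total: 42 })),
  attributes: { event: "order.created" },
};
const order = parsePayload<{ id: string; total: number }>(msg);
```

Centralizing this in the package means typed payloads reach handlers without each service repeating the decode/parse/validate dance.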

4) Uneven production behavior

Operationally, small differences in handler semantics lead to retries, duplicates, or dropped work.

Standardized patterns reduce those surprises.


What’s in the box

PubSubModule

Configure once with forRoot() or forRootAsync() and use everywhere via DI.

PubSubPublisherService

Publish events from your controllers/services:

  • publish(topic, data, options?) for string/buffer/object payloads
  • publishJson(topic, data, options?) for JSON-first workflows

Decorator-based consumers

Define consumers declaratively:

  • @PubSubConsumer({...defaults}) at class level
  • @PubSubSubscribe({...}) at method level

Acknowledgement controls

Use defaults (ackOnSuccess, nackOnError) or control each message explicitly:

  • return 'ack' | 'nack'
  • call payload.ack() / payload.nack()
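A minimal sketch of the manual path: `ack()`/`nack()` on the payload come from the package's documented API above, but the payload interface and wiring here are illustrative only, not the package's real types.

```typescript
// Illustrative payload shape for explicit acknowledgement control.
interface HandlerPayload<T> {
  data: T;
  ack: () => void;
  nack: () => void;
}

// Decide per message: ack what was processed, nack what should be redelivered.
function handleOrder(payload: HandlerPayload<{ total: number }>): void {
  if (payload.data.total >= 0) {
    payload.ack(); // done; do not redeliver
  } else {
    payload.nack(); // invalid; let Pub/Sub redeliver or dead-letter it
  }
}
```

With `ackOnSuccess`/`nackOnError` defaults enabled, most handlers never need this; the manual calls exist for cases where success and acknowledgement diverge.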

Optional subscription auto-create

Enable autoCreateSubscription to create missing subscriptions at bootstrap when a topic is provided.


Practical example: Order workflow fan-out

A common SaaS pattern:

  1. API creates an order.
  2. API publishes an order.created event.
  3. Multiple consumers react independently:
    • billing creates invoice
    • notification sends confirmation email
    • analytics tracks conversion event

This decouples services while keeping each handler focused.

Producer

@Service()
export class OrderService {
  constructor(private readonly publisher: PubSubPublisherService) {}

  async createOrder(order: { id: string; userId: string; total: number }) {
    // Persist order first...
    await this.publisher.publishJson('orders-topic', order, {
      attributes: {
        event: 'order.created',
        source: 'order-service',
      },
      orderingKey: order.id,
    });
  }
}

Consumer

@PubSubConsumer({ ackOnSuccess: true, nackOnError: true, parseJson: true })
@Service()
export class BillingConsumer {
  @PubSubSubscribe({
    subscription: 'billing-orders-subscription',
    topic: 'orders-topic',
    autoCreateSubscription: true,
  })
  async handleOrder(
    payload: PubSubSubscriptionHandlerPayload<{ id: string; userId: string; total: number }>
  ) {
    // idempotency check recommended
    // create invoice, emit internal metrics, etc.
  }
}

Quick start

Install:

npm install @hazeljs/pubsub

Register module:

@HazelModule({
  imports: [
    PubSubModule.forRoot({
      projectId: process.env.GCP_PROJECT_ID,
    }),
  ],
})
export class AppModule {}

Pub/Sub vs Queue vs Kafka

  • Use @hazeljs/pubsub when you’re on GCP and want managed Pub/Sub semantics.
  • Use @hazeljs/queue for Redis/BullMQ background job processing.
  • Use @hazeljs/kafka when Kafka is your streaming/event backbone.

Production notes

  1. Keep handlers idempotent (at-least-once delivery can reprocess messages).
  2. Attach correlation IDs in message attributes for tracing/debugging.
  3. Monitor nack/error rates for early schema/runtime regressions.
  4. Plan retry + dead-letter strategy at the platform level.
  5. Keep handlers fast and offload heavy work when needed.
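Note 1 deserves a concrete shape. Below is a minimal idempotency guard keyed on the Pub/Sub message ID; the in-memory Set is for illustration only, since production code would use a shared store (Redis, or a database unique constraint) so duplicates are caught across instances too.

```typescript
// Illustrative idempotency guard: skip work already done for a message ID.
// At-least-once delivery means the same message can arrive more than once.
const processed = new Set<string>();

function processOnce(messageId: string, work: () => void): boolean {
  if (processed.has(messageId)) {
    return false; // duplicate delivery: skip the work, but still ack
  }
  processed.add(messageId);
  work();
  return true;
}
```

The key point: a redelivered message should still be acked, just without repeating its side effects.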

Feedback

If you’re building event-driven systems on GCP with HazelJS, give @hazeljs/pubsub a try and share feedback via GitHub issues or Discord.
