Augustus
I open-sourced our Redis-based webhook replacement (And why you should try it)

TL;DR

  • Built a lightweight Redis-based message queue that replaces unreliable HTTP webhooks while keeping a familiar developer experience (it looks and feels almost exactly like sending POST requests).
  • Solves common webhook issues: service downtime, retry complexity, rate limiting, and lack of transactions.
  • Provides dead letter queues, transactions, message TTL, and more.
  • Perfect for teams scaling microservices who want reliability without the operational complexity of Kafka or RabbitMQ.
  • Open source and available on GitHub.

Every developer who has built microservices at scale knows the struggle. You start with a simple architecture and basic HTTP webhooks between services. Everything works great—until it doesn't.

Messages get lost when services go down. Rate limiting causes cascading failures. Retries become a tangled mess. And soon enough, you're debugging outages instead of shipping features.

That's exactly where we found ourselves last year—and why I built LeanMQ, an internal tool that has now become my newest open-source project.

The Problem with Internal Webhooks

I've worked with various messaging patterns over the years. When we started with a new project, we initially chose simple HTTP webhooks for service-to-service communication. This approach is common and works well at certain scales:

# Service A sends a webhook
import requests

requests.post("http://service-b/webhook/order-status", json={
    "order_id": "123",
    "status": "shipped"
})

# Service B receives it
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook/order-status", methods=["POST"])
def handle_order_status():
    data = request.json
    # Process the webhook...
    return "", 200

When a send failed, we inserted the payload into a Postgres-backed retry table and re-attempted delivery from a cron job.
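
That stopgap pattern is worth making concrete. Here is a minimal sketch of a retry table plus a cron-style drain, using sqlite3 in place of Postgres for illustration (the table and function names are mine, not from our codebase):

```python
import json
import sqlite3


def init_db(conn: sqlite3.Connection) -> None:
    # One row per failed webhook; `attempts` tracks how many retries happened.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS webhook_retries ("
        "id INTEGER PRIMARY KEY, url TEXT, payload TEXT, attempts INTEGER DEFAULT 0)"
    )


def enqueue_failed(conn: sqlite3.Connection, url: str, payload: dict) -> None:
    # Called when the original POST fails: persist the payload for later.
    conn.execute(
        "INSERT INTO webhook_retries (url, payload) VALUES (?, ?)",
        (url, json.dumps(payload)),
    )


def retry_pending(conn: sqlite3.Connection, send, max_attempts: int = 5) -> int:
    # Called from cron: re-send each pending webhook, delete on success,
    # bump the attempt counter (or give up) on failure.
    retried = 0
    rows = conn.execute(
        "SELECT id, url, payload, attempts FROM webhook_retries"
    ).fetchall()
    for row_id, url, payload, attempts in rows:
        try:
            send(url, json.loads(payload))
            conn.execute("DELETE FROM webhook_retries WHERE id = ?", (row_id,))
            retried += 1
        except Exception:
            if attempts + 1 >= max_attempts:
                conn.execute("DELETE FROM webhook_retries WHERE id = ?", (row_id,))
            else:
                conn.execute(
                    "UPDATE webhook_retries SET attempts = attempts + 1 WHERE id = ?",
                    (row_id,),
                )
    return retried
```

In production `send` would be `requests.post`; the shape of the pattern is the point, not the specific schema.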

But as we scaled to millions of webhooks per month, several architectural limitations became apparent:

  1. Service availability coupling: When a receiving service was down, we needed complex retry mechanisms
  2. Complex retry logic: Each service reimplemented similar retry patterns
  3. Rate limiting challenges: Services under heavy load would reject webhooks
  4. Debugging complexity: Limited visibility into webhook delivery status
  5. Lack of transactional guarantees: Difficult to ensure multiple services were updated atomically

We thoroughly evaluated several alternatives:

  1. Cloud-based webhook services: We tested services like Svix, Hookdeck, and others, but at our volume (millions of webhooks per month, with plans to scale to hundreds of millions) the pricing was prohibitively expensive for our use case.

  2. Enterprise message brokers: We looked at RabbitMQ, Kafka, and other established solutions. While these are excellent products with rich feature sets, they introduced significant operational complexity, required specialized knowledge, and would have necessitated substantial architectural changes.

Our services were already using Redis extensively, so we wanted to leverage our existing infrastructure if possible. We needed something that maintained the familiar webhook pattern but provided the reliability of a message queue.

Building the Internal Solution

After evaluating the tradeoffs, I realized we could build a solution that combined the best of both worlds: the simplicity and familiar developer experience of webhooks with the reliability of a proper message queue, all while leveraging our existing Redis infrastructure.

Redis Streams, an append-only log data type introduced in Redis 5.0, provided the perfect foundation. It offered persistence, consumer groups, and powerful primitives for building reliable message delivery without adding new infrastructure components to our stack.
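
To make that concrete, here is roughly what the underlying primitives look like with redis-py (the function names are illustrative, not LeanMQ's API):

```python
import json


def publish(r, stream: str, payload: dict):
    # XADD appends the message to the stream, where it is persisted
    # until consumers read and acknowledge it.
    return r.xadd(stream, {"data": json.dumps(payload)})


def ensure_group(r, stream: str, group: str) -> None:
    # A consumer group tracks, per group, which entries have been
    # delivered and which are still pending acknowledgement.
    try:
        r.xgroup_create(stream, group, id="0", mkstream=True)
    except Exception:
        pass  # BUSYGROUP: the group already exists.
```

In practice `r` would be a `redis.Redis(...)` client; `mkstream=True` creates the stream on first use, so publishers and consumers can start in any order.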

I designed an abstraction layer around Redis Streams with two key design principles:

  1. Maintain the familiar webhook-like developer experience
  2. Add production-grade reliability features like DLQs and transactions

The result was a simple API that required minimal changes to existing code:

# Replace this:
requests.post("http://service-b/webhook/order-status", json=data)

# With this:
webhook.send("/order/status/", data)

On the receiving end, the API closely resembled web frameworks everyone is already familiar with:

# Instead of a Flask or FastAPI route, use this decorator:
@webhook.get("/order/status/")
def process_order_status(data):
    # Process the webhook data...
    pass

# And run a service to process incoming webhooks
webhook.run_service()

# Smaller projects can skip the service and
# simply run this in a CRON script
webhook.process_messages()

One of the biggest advantages was the introduction of transactions. We could now atomically send multiple messages to different services, ensuring that either all operations succeeded or none did—something that was nearly impossible with our previous HTTP webhook approach:

# Either both messages are delivered or neither is
with webhook.transaction() as tx:
    tx.send("/order/status/", {"order_id": "123", "status": "shipped"})
    tx.send("/inventory/update/", {"product_id": "456", "quantity_change": -1})

Within weeks, our webhook-related challenges were addressed. Failed messages went to dead letter queues for easy inspection and reprocessing. Services could go down without affecting the reliability of our messaging. We gained atomic transactions for operations that needed to span multiple services.
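
The dead-letter flow is easy to picture in terms of raw streams: a message that keeps failing gets copied to a companion DLQ stream and acknowledged on the main one. A hedged sketch of that pattern (my own illustration, not LeanMQ's actual internals; assumes string field keys, i.e. a client with `decode_responses=True`):

```python
import json


def move_to_dlq(r, stream, group, message_id, fields, error):
    # Copy the failed message (plus error context) to the DLQ stream, then
    # ACK it on the main stream so it stops being redelivered.
    r.xadd(f"{stream}:dlq", {**fields, "error": error})
    r.xack(stream, group, message_id)


def process_with_dlq(r, stream, group, message_id, fields, handler, max_retries=3):
    # Try the handler a few times; on persistent failure, dead-letter the message.
    last_error = "unknown"
    for _attempt in range(max_retries):
        try:
            handler(json.loads(fields["data"]))
            r.xack(stream, group, message_id)
            return True
        except Exception as exc:
            last_error = str(exc)
    move_to_dlq(r, stream, group, message_id, fields, last_error)
    return False
```

Because the DLQ is itself a stream, "inspect and reprocess" is just reading it back and re-publishing.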

And the best part? The development experience remained nearly identical to our previous webhook pattern, making adoption painless across the engineering team.

Why Open Source?

I liked how this library solved our internal async problems and I thought maybe it could help other teams facing similar challenges. I spent evenings and weekends polishing the codebase, adding documentation, and preparing it for public release as LeanMQ.

As with any project, the journey from internal tool to open-source project taught me a few things, especially about developer experience and documentation.

A few lessons

1. Documentation is (almost) everything

Documentation can make or break an open-source project. I spent more time on docs than on the actual code! I aimed to create documentation that:

  • Makes it really easy to get started
  • But can go deep when developers want to explore more
  • Provides real-world examples
  • Addresses common scenarios

This led to a comprehensive documentation site with detailed guides, examples, and reference materials.

2. API design is a delicate balance

When designing APIs for others to use, there's a constant tension between:

  • Simplicity vs. flexibility
  • Convention vs. configuration
  • Opinionated vs. unopinionated design

I ultimately opted for a simple, opinionated core API with escape hatches for advanced use cases. This made the library approachable while still supporting complex scenarios.

3. Community starts before your first user

It would be awesome if an open-source community grew around this library. Maybe there are more niche use cases it can cover. And even if only a few developers use it, I want it to delight them!

Technical Deep Dive: How LeanMQ Works

For those interested in the technical details, LeanMQ has several key components:

1. The Core Message Queue (simple but powerful)

At its heart, LeanMQ provides a simple but powerful message queue abstraction:

from leanmq import LeanMQ

# Initialize the client
mq = LeanMQ(redis_host="localhost", redis_port=6379)

# Create queues
main_queue, dlq = mq.create_queue_pair("notifications")

# Send a message
message_id = main_queue.send_message({"user_id": 123, "message": "Hello"})

# Get and process messages
messages = main_queue.get_messages(count=10)
for message in messages:
    # Process the message...
    main_queue.acknowledge_messages([message.id])

2. The Webhook Pattern (the main use-case of LeanMQ)

On top of this core, LeanMQ adds a webhook-like pattern for easier service-to-service communication:

from leanmq import LeanMQWebhook

webhook = LeanMQWebhook(redis_host="localhost", redis_port=6379)

# Register handlers with a familiar decorator pattern
@webhook.get("/order/status/")
def process_order_status(data):
    print(f"Order {data['id']} status: {data['status']}")

# Run a service to process messages
webhook.run_service()

3. Advanced Features

LeanMQ includes several advanced features common in enterprise message queues:

  • Transactions: Send multiple messages atomically
  • Message TTL: Automatically expire old messages
  • Dead Letter Queues: Capture and inspect failed messages
  • Consumer Groups: Distribute processing across workers
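
Consumer groups are what enable that last bullet: each stream entry is delivered to exactly one consumer in a group, so running the same loop under several consumer names spreads the work. A rough redis-py sketch of such a worker loop (names are mine, not LeanMQ's internals; assumes a client created with `decode_responses=True` so field keys are strings):

```python
import json


def drain(r, stream, group, consumer, handler, count=10):
    # XREADGROUP with ">" asks for entries never delivered to this group;
    # each entry goes to exactly one consumer. XACK marks it processed.
    processed = 0
    entries = r.xreadgroup(group, consumer, {stream: ">"}, count=count)
    for _stream, messages in entries:
        for message_id, fields in messages:
            handler(json.loads(fields["data"]))
            r.xack(stream, group, message_id)
            processed += 1
    return processed
```

Start the same loop as `worker-1`, `worker-2`, and so on, and Redis spreads the stream across them; `XAUTOCLAIM` can recover entries from a worker that died mid-message.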

Transactions were particularly important for our use case. In a distributed system, ensuring consistent state across multiple services is challenging. For example, when an order is shipped, we might need to update the order status, decrement inventory, and notify the customer—all as an atomic operation:

# Atomic transactions ensure all messages are sent or none are
with mq.transaction() as tx:
    tx.send_message(orders_queue, {"order_id": "123", "status": "shipped"})
    tx.send_message(inventory_queue, {"product_id": "ABC", "quantity": -1})
    tx.send_message(notifications_queue, {"user_id": "456", "type": "order_shipped"})

With traditional webhooks, implementing this pattern reliably would require complex distributed transaction patterns or eventual consistency mechanisms. LeanMQ makes it trivial while maintaining the familiar webhook-like developer experience.
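
Under the hood this kind of atomicity is cheap with Redis: a pipeline opened with `transaction=True` wraps the queued commands in MULTI/EXEC, so all the XADDs apply or none do. A hedged sketch (the `send_atomic` helper is my own, not LeanMQ's API):

```python
import json


def send_atomic(r, messages):
    # `messages` is a list of (stream, payload) pairs. With transaction=True
    # the pipeline wraps the queued XADDs in MULTI/EXEC, so Redis applies
    # all of them or none.
    pipe = r.pipeline(transaction=True)
    for stream, payload in messages:
        pipe.xadd(stream, {"data": json.dumps(payload)})
    return pipe.execute()
```

Note this guarantees the sends are atomic; whether each downstream service processes its message successfully is still the job of the DLQ and retry machinery.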

The Road Ahead

Open-sourcing LeanMQ is just the beginning. I have plans to add:

  • More language bindings (Node.js, Go)
  • Additional transport options beyond Redis
  • Enhanced monitoring and observability
  • Performance optimizations for high-throughput scenarios

I'm not trying to replace established message brokers like RabbitMQ or Kafka—they're excellent solutions for many use cases. LeanMQ fills a specific niche: providing reliable asynchronous communication with minimal operational overhead and a webhook-like developer experience.

I'm looking forward to seeing how others use, extend, and improve upon this foundation, especially teams that want to upgrade their internal webhooks without adopting a full-scale message broker.

Try It Out

If you're dealing with internal webhooks or seeking a lightweight message queue solution, give LeanMQ a try:

pip install leanmq

Check out the documentation and GitHub repository.

I'd love to hear your feedback, questions, and suggestions in the comments.


Have you open-sourced an internal tool? What challenges did you face? What do you think of LeanMQ?



