Introduction
Many organizations depend on third-party vendors for services such as payment processing and merchandise reselling. While these partnerships can enhance efficiency, they come with challenges, such as keeping track of vendor performance and maintaining transparency.
One effective way to tackle these challenges is through an event-driven architecture (EDA).
The Problem
As my organisation grows in revenue and size, its network of third-party vendors expands with it. I was tasked with providing a solution that monitors all activity in our third parties' systems to improve transparency and performance review. Communicating key metrics like "sales per month" visually is a hard requirement, so a dashboard that visualizes this data is a key component of the solution.
Event-Driven Architecture
An event-driven architecture focuses on producing, detecting, and responding to events in real time.
What is a Webhook?
A webhook is a simple way for one system to send data to another when a specific event occurs. For example, when a payment is processed, a webhook can instantly notify your system by sending relevant data. This eliminates the need for constant checks on the vendor’s system, providing real-time updates.
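For illustration, the data a payment vendor might POST to your endpoint could look like this (the field names here are hypothetical, not from any specific provider):

```javascript
// Hypothetical payload a payment vendor might POST to your webhook endpoint.
const examplePayload = {
  event: 'payment.processed',        // the event type the vendor fires
  vendorId: 'vendor-123',            // which third party sent it
  amount: 4999,                      // amount in the smallest currency unit
  currency: 'USD',
  occurredAt: '2024-01-15T10:30:00Z' // when the event happened
};

console.log(examplePayload.event); // payment.processed
```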
Advantages of Webhooks
- Instant Updates: Webhooks provide immediate notifications, keeping your organization informed.
- Resource Efficiency: By avoiding constant polling, webhooks save server resources and bandwidth.
- Customization: You can set up your own endpoints and tailor the data to your needs.
- Ease of Use: Webhooks are straightforward to implement with minimal changes to existing systems.
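To put "resource efficiency" in rough numbers: suppose a vendor emits 5 events per day, and a polling client checks every 30 seconds (the numbers are illustrative):

```javascript
// Illustrative comparison: requests per day under polling vs webhooks.
const pollIntervalSeconds = 30;
const secondsPerDay = 24 * 60 * 60;
const eventsPerDay = 5;

const pollingRequests = secondsPerDay / pollIntervalSeconds; // one request per poll
const webhookRequests = eventsPerDay;                        // one request per event

console.log(pollingRequests); // 2880
console.log(webhookRequests); // 5
```

Most of those 2,880 polls return nothing new; webhooks only make a request when something actually happened.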
Functional Requirements
Accept Executed API Requests: The webhook service should be capable of receiving incoming API calls from third-party systems.
Execute Corresponding Events: Upon receiving a request, the service must trigger the appropriate events based on the webhook data.
Persist Events and Results: The service should store both the event data and the results of processed events in a database for future reference and analysis.
Non-Functional Requirements
High Availability: The webhook service must be designed for minimal downtime, ensuring it is consistently accessible.
At Least Once Delivery: Every event must be delivered at least once, even in the face of system failures.
Idempotency: The system should handle duplicate event deliveries gracefully, ensuring that processing the same event multiple times does not lead to inconsistent states.
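One common way to get idempotency, sketched here under the assumption that each event carries a unique ID, is to record processed IDs and skip duplicates:

```javascript
// Sketch of idempotent event handling: duplicate deliveries are ignored.
const processedIds = new Set();
let total = 0;

function handleEvent(event) {
  if (processedIds.has(event.id)) return false; // already seen: skip
  processedIds.add(event.id);
  total += event.amount; // the actual business logic runs once per unique event
  return true;
}

handleEvent({ id: 'evt-1', amount: 100 });
handleEvent({ id: 'evt-1', amount: 100 }); // redelivery of the same event
console.log(total); // 100, not 200
```

In production the set of seen IDs would live in the database (e.g. a unique constraint on the event ID), not in memory.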
Traditional Architecture Flaws
Figure 1.0: A simple webhook implementation
In a traditional setup, the request handler directly processes incoming requests, executes the necessary business logic, and saves results to the database. While this method may seem straightforward, it has several significant drawbacks:
- Tight Coupling: Request handling and business logic are closely integrated, making it hard to scale components independently.
- Single Point of Failure: If the request handler fails, the entire system may become unresponsive.
- Lack of Load Buffering: High traffic periods can overwhelm the system, leading to potential failures.
- No Built-in Retry Mechanism: If an operation fails, there is often no automated way to retry processing the event.
Solution: Integrating a Message Queue
Figure 1.1: A resilient webhook
To address these challenges, adding a message queue into the architecture can significantly enhance the webhook service's performance and reliability. Here’s how this integration works:
Decoupling: Instead of directly processing events, the request handler sends messages to a queue. This decouples the request handling from event processing, allowing for independent scaling.
Load Buffering: The message queue acts as a buffer, holding incoming requests during peak traffic, ensuring the system can handle bursts of activity without failure.
Scalability: Additional consumers can be added to process messages from the queue, enabling horizontal scaling as demand increases.
Failure Recovery: If an event fails to process, the message can be retried automatically, ensuring at least once delivery.
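The decoupling and buffering described above can be sketched with an in-memory queue (a toy stand-in for RabbitMQ, which the implementation below uses): the handler only enqueues and acknowledges, and a separate consumer drains at its own pace.

```javascript
// Minimal sketch of handler/consumer decoupling via a queue.
const queue = [];

// The request handler's only job: enqueue and acknowledge.
function handleRequest(payload) {
  queue.push(payload);
  return 200; // fast response, even during a traffic burst
}

// A consumer drains the queue independently of request handling.
function consumeAll(process) {
  while (queue.length > 0) {
    process(queue.shift());
  }
}

// Burst of 3 requests: all acknowledged immediately, processed later.
[1, 2, 3].forEach((n) => handleRequest({ n }));
const processed = [];
consumeAll((msg) => processed.push(msg.n));
console.log(processed); // [ 1, 2, 3 ]
```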
Benefits of Message Queue Integration
Improved Reliability: A message queue can handle traffic spikes gracefully, preventing service outages.
Enhanced Flexibility: Components can evolve independently, simplifying maintenance and upgrades.
Efficient Resource Utilization: Offloading processing to the queue keeps the request handler responsive, enhancing overall performance.
Data flow of a resilient webhook
Figure 1.2: Data flow of a resilient webhook
Handling Failure
In a robust event-driven architecture, handling failures effectively is crucial to maintaining system resilience.
1. Webhook Trigger and API Request
When an event occurs in the third-party system, the webhook sends an API request to your service with the event data.
2. Message Queue Enqueueing
Upon receiving the request, the webhook service attempts to enqueue the message into the message queue:
Success Scenario: If the message is successfully enqueued, the system returns a 200 response to the user, indicating that the event has been received and will be processed.
Failure Scenario: If the message fails to enqueue (for instance, due to a queue service outage), the system responds with an appropriate error code (e.g., 500 Internal Server Error) instead of a 200. This informs the user that the event was not successfully processed.
3. Event Processing from the Queue
Once the message is in the queue, a consumer service will pick it up for processing:
Data Persistence Check: During processing, the consumer attempts to save the event data to the database.
Success Scenario: If the data is successfully saved, the event can be dequeued, and processing is considered complete.
Failure Scenario: If the data fails to save (due to database issues, for example):
- The event remains in the queue and is not dequeued.
- If the maximum retry attempts are reached without success, an alert can be generated for system administrators.
4. Response Handling
To summarize the response handling:
- If the event is not enqueued, the user receives a 500 error response.
- If the event is enqueued but the data fails to save to the database, the event remains in the queue, and no acknowledgment of success is provided until the data is successfully saved to the database.
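The enqueue-then-respond rule can be condensed into a single function (a sketch; the real Express handler appears in Step 1 below):

```javascript
// The status code depends only on whether the enqueue succeeded;
// database persistence happens later, on the consumer side.
function webhookResponse(enqueue, payload) {
  try {
    enqueue(payload);
    return { status: 200, body: 'Event received' };
  } catch (err) {
    return { status: 500, body: 'Internal Server Error' };
  }
}

const ok = webhookResponse((p) => p, { id: 1 });
const bad = webhookResponse(() => { throw new Error('queue down'); }, { id: 1 });
console.log(ok.status, bad.status); // 200 500
```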
Handling Security with HMAC
What is HMAC?
- HMAC (Hash-based Message Authentication Code) is a method that uses a shared secret key to create a unique hash value for a message. This ensures both integrity (the message hasn't been altered) and authenticity (the message is from a trusted source).
Implementation Steps:
- Shared Secret Generation: Establish a confidential shared secret between your system and the third-party service.
- HMAC Signature Creation: When an event occurs, the vendor generates an HMAC using the shared secret and the payload.
- Sending the Webhook: The service sends the payload along with the HMAC in an HTTP header (e.g., X-Hub-Signature).
- Signature Verification: Your system recalculates the HMAC for the received payload and compares it with the received signature to ensure they match.
Overview of Services
- Database Service: Stores webhook events.
- Request Handler Service: Receives incoming webhook requests and enqueues messages to RabbitMQ.
- Queue Broker: RabbitMQ to manage the message queue.
- Consumer Service: Processes messages from the queue and interacts with the database.
Step 1: Create the Request Handler Service
//server.js
const express = require('express');
const amqp = require('amqplib');
const bodyParser = require('body-parser');
const HMAC = require('./verify-hmac');
require('dotenv').config();
const app = express();
app.use(bodyParser.json());
const RABBITMQ_URL = process.env.QUEUE_URL;
app.post('/webhook', async (req, res) => {
const payload = req.body;
if (!HMAC.verify(payload, req.headers['x-hub-signature'])) {
return res.status(403).send('Unauthorized');
}
try {
const connection = await amqp.connect(RABBITMQ_URL);
const channel = await connection.createChannel();
await channel.assertQueue('webhook_queue');
channel.sendToQueue('webhook_queue', Buffer.from(JSON.stringify(payload)));
res.status(200).send('Event received');
} catch (error) {
console.error(error);
res.status(500).send('Internal Server Error');
}
});
app.listen(process.env.PORT, () => {
console.log(`Request Handler Service running on port ${process.env.PORT}`);
});
Step 2: Create HMAC Verification
// verify-hmac.js
const crypto = require('crypto');
require('dotenv').config();
const SECRET = process.env.PW;
module.exports.verify = (payload, signature) => {
const hmac = crypto.createHmac('sha256', SECRET);
hmac.update(JSON.stringify(payload));
const calculatedSignature = hmac.digest('hex');
return calculatedSignature === signature;
};
Step 3: Create the Consumer Service
// consumer.js
const amqp = require('amqplib');
const mysql = require('mysql');
require("dotenv").config();
const RABBITMQ_URL = process.env.QUEUE_URL;
const db = mysql.createConnection({
host: "db",
user: "root",
password:"rootpassword",
database: "events_db",
port:3306
});
db.connect((err) => {
if (err) {
console.error("Error connecting to MySQL:", err);
return;
}
console.log("Connected to MySQL database.");
});
async function consume() {
const connection = await amqp.connect(process.env.QUEUE_URL);
const channel = await connection.createChannel();
await channel.assertQueue('webhook_queue');
channel.consume('webhook_queue', async (msg) => {
if (msg !== null) {
const payload = JSON.parse(msg.content.toString());
db.query("INSERT INTO events (payload) VALUES (?)", [JSON.stringify(payload)], (err) => {
if (err) {
console.error("Error saving to database:", err);
// Keep the message in the queue so it can be redelivered and retried.
channel.nack(msg, false, true);
} else {
console.log("Saved to database:", payload);
// Acknowledge only after the event is safely persisted.
channel.ack(msg);
}
});
}
});
}
consume().catch(console.error);
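The consumer assumes an events table already exists in events_db; the INSERT will fail otherwise. A minimal schema (my assumption; adjust the columns to match your payloads) could be:

```sql
CREATE TABLE IF NOT EXISTS events (
  id INT AUTO_INCREMENT PRIMARY KEY,
  payload TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```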
Step 4: Create the Relational Database
db:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: rootpassword
MYSQL_DATABASE: events_db
ports:
- "3306:3306"
volumes:
- db-data:/var/lib/mysql
networks:
webhook_network:
ipv4_address: 172.20.0.20
Step 5: Create the Message Queue
rabbitmq:
image: rabbitmq:3-management
ports:
- "15672:15672"
- "5672:5672"
networks:
webhook_network:
ipv4_address: 172.20.0.10
Step 6: Combine Everything in docker-compose.yml
# docker-compose.yml
version: '3.8'
services:
rabbitmq:
image: rabbitmq:3-management
ports:
- "15672:15672"
- "5672:5672"
networks:
webhook_network:
ipv4_address: 172.20.0.10
db:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: rootpassword
MYSQL_DATABASE: events_db
ports:
- "3306:3306"
volumes:
- db-data:/var/lib/mysql
networks:
webhook_network:
ipv4_address: 172.20.0.20
request-handler:
build:
context: ./requestHandler
dockerfile: Dockerfile
depends_on:
- rabbitmq
ports:
- "3000:3000"
networks:
webhook_network:
ipv4_address: 172.20.0.30
consumer:
build:
context: ./consumer
dockerfile: Dockerfile
depends_on:
- rabbitmq
- db
networks:
webhook_network:
ipv4_address: 172.20.0.40
networks:
webhook_network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
volumes:
db-data:
After running docker-compose up, you’ll have four services up and running: RabbitMQ, MySQL, a Consumer service, and a Request Handler service. Each of these services is assigned a static IP address to ensure stable communication within your application.
You can find the code for this setup on my GitHub.
If you found this guide helpful, please give it a like! If I receive 10 likes, I’ll create a detailed tutorial video that walks you through the entire process.