Brokers are the hello world of distributed systems, for two reasons:
- Easy to get up and running
- They enforce the hive / master-node pattern, which scales naturally
node <-->
node <--> hive / broker <--> client-facing server <--> client
node <-->
So I thought: why not bring a pure JavaScript broker to Node.js?
// broker.js
import Bunny from "bunnimq";
import path from "path";
import { fileURLToPath } from "url";
Bunny({
port: 3000,
DEBUG: true,
cwd: path.dirname(fileURLToPath(import.meta.url)), // path to the .auth file
queue: {
Durable: true,
MessageExpiry: 60 // 1 minute
}
});
To be honest, low-level Node.js is super impressive. It took a few days, but it works, with a few optimizations:
- Object → binary compiler
- SharedArrayBuffers and threads
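The object → binary step can be sketched like this (a simplified illustration, not bunnimq's actual wire format): length-prefixed JSON frames that can be written straight to a TCP socket and split apart again on the other side.

```javascript
// Encode an object into a binary frame: a 4-byte big-endian length prefix
// followed by the UTF-8 JSON body. The prefix tells the reader where one
// message ends and the next begins on a raw TCP stream.
function encode(obj) {
  const body = Buffer.from(JSON.stringify(obj), "utf8");
  const frame = Buffer.allocUnsafe(4 + body.length);
  frame.writeUInt32BE(body.length, 0); // length prefix
  body.copy(frame, 4);                 // payload
  return frame;
}

// Decode a frame back into an object.
function decode(frame) {
  const length = frame.readUInt32BE(0);
  return JSON.parse(frame.subarray(4, 4 + length).toString("utf8"));
}

const frame = encode({ queue: "transcode_queue", body: "job-1" });
console.log(decode(frame)); // { queue: 'transcode_queue', body: 'job-1' }
```

A real broker would add a message type and version byte to the header, but the length-prefix trick is the core of turning objects into a stream-safe binary protocol.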
It can get even faster with shared memory and worker threads, which Node supports natively:
import { Worker } from "node:worker_threads";
const buffer = new SharedArrayBuffer(1024); // memory visible to every thread
const worker = new Worker("./worker.js");   // <- runs on its own thread
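Here is a fuller, runnable sketch of the idea: the main thread and a worker share one counter through a SharedArrayBuffer and coordinate with Atomics, so no data is copied between threads (the inline worker code via `eval: true` is just for the demo; normally it lives in its own file).

```javascript
import { Worker } from "node:worker_threads";

// shared[0] = counter, shared[1] = done flag; both threads see the same memory.
const sab = new SharedArrayBuffer(8);
const shared = new Int32Array(sab);

new Worker(
  `
  const { workerData } = require("node:worker_threads");
  const shared = new Int32Array(workerData);
  for (let i = 0; i < 1000; i++) Atomics.add(shared, 0, 1);
  Atomics.store(shared, 1, 1); // signal completion
  Atomics.notify(shared, 1);   // wake the main thread
  `,
  { eval: true, workerData: sab }
);

// Block until the worker flips the done flag. Atomics.wait is allowed on
// Node's main thread (unlike in browsers), which is fine for a demo script.
Atomics.wait(shared, 1, 0);
console.log(Atomics.load(shared, 0)); // 1000
```

Because the buffer is shared, the worker's increments are visible to the main thread immediately; Atomics guarantees the reads and writes don't tear.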
So here’s an example of an FFmpeg distributed system running purely in Node.js.
But first, make sure you have FFmpeg installed and available in your PATH. Test it in the terminal:
ffmpeg -i img.jpg img.png
A Distributed Video Transcoding Example
Start a Node project:
npm init -y && npm i bunnimq bunnimq-driver
Folder structure:
ffmpegserver/
server.js # <- the hive
producer.js # client-facing server
consumer.js # node servers / workers
.auth # credentials for producer and consumer verification (like .env)
.auth
Put your secret credentials here: username:password:privileges
(see privileges in the repo)
sk:mypassword:4
jane:doeeee:1
john:doees:3
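A broker might parse those lines roughly like this (a hypothetical sketch; bunnimq's real parsing may differ):

```javascript
// Parse one .auth line of the form "username:password:privileges"
// into a credentials object. parseAuthLine is an illustrative name,
// not part of the bunnimq API.
function parseAuthLine(line) {
  const [username, password, privileges] = line.trim().split(":");
  return { username, password, privileges: Number(privileges) };
}

console.log(parseAuthLine("sk:mypassword:4"));
// { username: 'sk', password: 'mypassword', privileges: 4 }
```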
server.js
Simple, non-TLS setup (TLS is supported - see the GitHub repo):
import Bunny from "bunnimq";
import path from "path";
import { fileURLToPath } from "url";
Bunny({
port: 3000,
DEBUG: true,
cwd: path.dirname(fileURLToPath(import.meta.url)), // for .auth file
queue: {
Durable: true,
QueueExpiry: 0,
MessageExpiry: 3600
}
});
producer.js
This is the server browsers and other clients talk to.
It accepts requests and pushes jobs into the hive.
import BunnyMQ from "bunnimq-driver";
import fs from "node:fs/promises";
const bunny = new BunnyMQ({
port: 3000,
host: "localhost",
username: "sk",
password: "mypassword",
});
Create the queue if it doesn’t exist:
bunny.queueDeclare(
{
name: "transcode_queue",
config: {
QueueExpiry: 60,
MessageExpiry: 20,
AckExpiry: 10,
Durable: true,
noAck: false,
},
},
(res) => {
console.log("Queue creation:", res);
}
);
Usually videos come from the client.
For the demo, we’ll just read from a local folder:
async function processVideos() {
const videos = await fs.readdir(
"C:/Users/[path to a folder with videos]/Videos/Capcut/test"
); // usually a storage bucket link
for (const video of videos) {
const job = {
id: Date.now() + Math.random().toString(36).substring(2),
input: `C:/Users/[path to a folder with videos]/Videos/Capcut/test/${video}`,
outputFormat: "webm",
};
// put into the queue
bunny.publish("transcode_queue", JSON.stringify(job), (res) => {
console.log(`Job ${job.id} published:`, res ? "ok" : "failed");
});
}
}
processVideos();
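In production the producer would accept uploads over HTTP instead of reading a local folder. A minimal sketch with Node's built-in http module (the endpoint, port, and file naming are illustrative; the bunny.publish call is commented out so the sketch runs standalone):

```javascript
import http from "node:http";
import fs from "node:fs/promises";
import path from "node:path";

const UPLOAD_DIR = "./uploads";
await fs.mkdir(UPLOAD_DIR, { recursive: true });

const server = http.createServer(async (req, res) => {
  if (req.method !== "POST" || req.url !== "/upload") {
    res.writeHead(404).end();
    return;
  }

  // Buffer the upload and write it to disk (a storage bucket in real life).
  const chunks = [];
  for await (const chunk of req) chunks.push(chunk);
  const file = path.join(UPLOAD_DIR, `${Date.now()}.mp4`);
  await fs.writeFile(file, Buffer.concat(chunks));

  // Enqueue the transcode job for the hive:
  const job = { id: `${Date.now()}`, input: file, outputFormat: "webm" };
  // bunny.publish("transcode_queue", JSON.stringify(job), () => {});

  res.writeHead(202, { "content-type": "application/json" });
  res.end(JSON.stringify({ queued: job.id }));
});

server.listen(8080);
```

The 202 status fits here: the upload is accepted, but the actual transcoding happens later on whichever consumer pulls the job.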
consumer.js
These are the nodes, the workers that pull jobs, transcode videos, and report back.
import BunnyMQ from "bunnimq-driver";
import { spawn } from "child_process";
import path from "path";
const bunny = new BunnyMQ({
port: 3000,
host: "localhost",
username: "john",
password: "doees",
});
Consume the transcode queue:
bunny.consume("transcode_queue", async (msg) => {
console.log("Received message:", msg);
try {
const { input, outputFormat } = JSON.parse(msg);
// normalize paths
const absInput = path.resolve(input);
const output = absInput.replace(/\.[^.]+$/, `.${outputFormat}`);
console.log(
`Spawning: ffmpeg -i "${absInput}" -f ${outputFormat} "${output}" -y`
);
await new Promise((resolve, reject) => {
const ffmpeg = spawn(
"ffmpeg",
["-i", absInput, "-f", outputFormat, output, "-y"],
{ shell: true } // helps Windows find ffmpeg.exe
);
ffmpeg.on("error", reject);
// FFmpeg logs to stderr
ffmpeg.stderr.on("data", (chunk) => {
process.stderr.write(chunk);
});
ffmpeg.on("close", (code, signal) => {
if (code === 0) {
console.log(`Transcoding complete: ${output}`);
return resolve(
bunny.Ack((ok) => console.log("Ack sent:", ok))
);
}
reject(
new Error(
signal ? `Signaled with ${signal}` : `Exited with code ${code}`
)
);
});
});
} catch (error) {
console.error("Error processing message:", error);
if (bunny.Nack) bunny.Nack();
}
});
Open multiple terminals:
node .\server.js # terminal 1
node .\producer.js # terminal 2
Run as many consumer terminals as you like; that's the parallel, distributed part:
node .\consumer.js
This is the entire pattern.
Simple, powerful, and it scales naturally, because the hive takes care of the distribution.
You can take this exact pattern and translate it to RabbitMQ and it’ll work.
I built bunnimq mostly as a joke after reading RabbitMQ’s source and how it works… and somehow it worked.
But that’s the point:
Brokers are the hello world of distributed systems.
It’s actually really hard to fail at them.
More from me:
How I Built a Graphics Renderer in Node.js
Visualizing Evolutionary Algorithms in Node.js
Thanks for reading!