Introduction
Have you ever wondered what really happens when you upload a photo on an app like Instagram or LinkedIn?
You open the app, choose your image, and boom — you instantly see the photo on the screen. Then you add a caption, hit “Post,” and within seconds, your post is live. It feels instant. But... is it really?
Let’s pause for a second. Here’s the truth:
Your app isn't actually storing that image in a regular database. Images are heavy, and databases aren’t built to store them directly. Instead, apps use services like Cloudinary, Amazon S3, or Azure Blob Storage to store media. These services return a `publicUrl` and a `publicId` for each file.
Once the image is uploaded and the app gets this data, it then creates the post using the image URL, caption, and user details.
But here's the real brain-tickler:
When you upload a post, your image seems to appear immediately. Yet, uploading an image takes more time than saving a bit of text. So how come the post shows up so fast?
Is the image really uploaded instantly? Is your post created before the image is even fully processed?
These questions got me hooked. 🧠
So I did what any dev would do: stared at a whiteboard too long, grilled ChatGPT endlessly, and eventually... got it.
In this blog, I’ll walk you through my thought process, the options I considered, the trade-offs, and the final approach using RabbitMQ to simulate how real-world apps handle media uploads behind the scenes.
Trust me — this was fun to build, and I hope reading it will be just as fun. Let’s go! 🚀
The Extremely Naive Solution 🫠 The Monolith Way (Easy Peasy)
Let’s start with the most straightforward approach — a good old monolith.
Here’s the plan:
We create a `createPost` function in our backend that accepts a `caption` and `mediaUrls`. Pretty standard, right?
Here's how it would go down:
- The user uploads a photo.
- On the backend, you hit Cloudinary (or S3, etc.) and `await` the upload response.
- While the upload is in progress, the user can go ahead and write their caption.
- Once you get the media URL back, boom! You now have everything to create the post — the caption and the image.
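To make that blocking flow concrete, here is a minimal sketch. `uploadToCloud` and `savePost` are hypothetical stand-ins for a real Cloudinary/S3 upload and a database write, injected as parameters so the sketch stays self-contained; nothing here is the actual Cloudinary SDK.

```javascript
// Hypothetical sketch of the naive, blocking createPost handler.
// uploadToCloud and savePost are made-up stand-ins for a real
// Cloudinary/S3 upload call and a database write.
async function createPost({ caption, file, userId }, uploadToCloud, savePost) {
  // The request blocks here until the upload round-trip finishes;
  // this await is exactly where the user ends up waiting.
  const media = await uploadToCloud(file);

  // Only now do we have everything needed to create the post.
  return savePost({
    caption,
    userId,
    mediaUrls: [media.secure_url],
    publicId: media.public_id,
  });
}
```

The total response time is upload time plus database time, stacked end to end, which is exactly the friction discussed next.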
Seems simple enough. So what's the problem?
🪫 The Problem: It Feels Slow
You can't create the post until the image is uploaded. That means the user has to wait — and waiting = friction.
Even if it’s just 2-3 seconds, that’s enough to kill the “instant” vibe users are used to.
🧃 A Fake Instant Experience?
Sure, you could save the image temporarily in `localStorage` and show the user a preview — making it look like the upload was instant.
But here’s the catch:
You still don’t have the `imageUrl` from Cloudinary, so you can't actually create the post. That snappy UI? It’s just a mirage — a visual trick. Sooner or later, reality (and async logic) will catch up.
Verdict
This monolith-style approach works — but it’s not smooth.
You’re mixing concerns, delaying the response, and risking a poor UX. Not ideal for modern apps.
🚀 The Optimized Microservice Version
Before we dive in, here’s a quick recap:
In a microservice world, each core functionality — like `auth` (user identity), `post` (creating & managing posts), or `media` (uploading files) — runs as its own service on separate servers. Clean separation, clean scaling.
Let’s talk about one of the smarter ways to connect these services: webhooks.
🔹 Method 1: Webhooks — Trigger and Forget
What are webhooks?
Webhooks are a way for one service to notify another when something happens — without needing the caller to wait. It’s a decoupled pattern where, once an event is triggered, the sender can move on, and the listener handles it asynchronously.
Imagine this:
- The `media` service finishes uploading a file
- It automatically calls the `post` service’s webhook endpoint
- The `post` service reacts — e.g., attaches the media ID to the post
Simple, elegant, and powerful — especially when you want to keep services loosely coupled.
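As a sketch, the media-service side of that webhook call might look like this. The endpoint URL and payload shape are invented for illustration, and the HTTP transport is injected as a `send` function (in real code you'd just pass global `fetch`):

```javascript
// Hypothetical sketch of the media-service side of a webhook.
// The URL and payload fields are made up; `send` stands in for fetch.
async function notifyPostService(upload, send) {
  const payload = {
    draftId: upload.draftId,
    mediaId: upload.mediaId,
    url: upload.url,
  };
  try {
    await send("https://post-service.internal/webhooks/media-uploaded", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(payload),
    });
  } catch (err) {
    // This is the webhook weakness in action: if the post service is down,
    // the notification is simply lost unless we build retries ourselves.
    console.error("webhook delivery failed:", err.message);
  }
}
```

Notice the catch block: with plain webhooks, handling delivery failure is entirely on you, which leads straight into the cons below.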
⚖️ Tradeoffs
✅ Pros:
- Decoupled communication – services don’t block or depend tightly on each other
- Simple to implement – great for quick wins or small projects
- Faster user experience – the client doesn’t wait for everything to finish
❌ Cons:
- No built-in retry or buffering – if the `post` service is down, the webhook call is lost
- Low fault tolerance – you'd have to manually implement retries, logging, or queuing
- Doesn’t scale well – becomes harder to manage in larger systems
📌 Note:
I won’t go deeper into webhooks here — this article focuses more on RabbitMQ and event-driven architecture. But if this pattern interests you, it’s worth exploring further. Just know that while webhooks are great for simple use cases, they often fall short in production-grade, fault-tolerant systems — and that’s where queues truly shine.
🔹 Let’s Start With a Basic Question:
If services are hosted on different servers…
How do they talk to each other?
And even more importantly — why should they care to communicate at all?
Let’s take a real-world example:
Imagine you just created a new post on a social media app.
To improve performance, the app might use caching to store recent posts — so users see fresh content fast.
Now here’s the problem:
When your post gets created, how does the search or feed service (hosted elsewhere) know that the cached posts are outdated?
You need to tell it:
“Hey, someone just posted something new. Please update the feed!”
You could fire a POST request from one service to another. But…
- What if the other service is down?
- What if you need to tell multiple services?
- What if this direct communication makes your system fragile and tightly coupled?
That’s where simple API calls fall apart. You need something better.
💡 What You Need Instead: A Message Broker
Think of a message broker like RabbitMQ as a reliable postman 📨
- It takes a message from one service (like `NEW_POST_CREATED`)
- Delivers it to all the services that care (like feed, search, analytics)
- It can retry, queue, and decouple services cleanly
This is exactly how modern systems communicate — using events, queues, and publish-subscribe patterns.
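To make the publish-subscribe idea concrete before we get to RabbitMQ proper, here's a toy in-memory version. It has none of RabbitMQ's queuing, retries, or durability; it only shows the shape of the pattern, with all names invented:

```javascript
// Toy in-memory publish-subscribe: one publisher, many subscribers.
const subscribers = {}; // event name -> list of handler functions

function subscribe(eventName, handler) {
  (subscribers[eventName] ||= []).push(handler);
}

function publish(eventName, payload) {
  // Every subscriber of this event gets the payload; the publisher
  // doesn't know (or care) who is listening.
  for (const handler of subscribers[eventName] || []) handler(payload);
}

// Several services can react to the same event independently:
subscribe("NEW_POST_CREATED", (p) => console.log("feed service refreshes cache for", p.postId));
subscribe("NEW_POST_CREATED", (p) => console.log("search service indexes", p.postId));
publish("NEW_POST_CREATED", { postId: "p1" });
```

A broker like RabbitMQ adds the parts this toy lacks: the subscribers live in other processes, and messages wait in queues if a subscriber is down.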
How RabbitMQ Works:
Let’s briefly understand the 4 key components in RabbitMQ using a real-world example:
🔹 Publisher
The service that sends the event.
📌 Example: Media service publishes an event after an image is uploaded successfully.
🔹 Event = Message + Payload
The event is the actual data being sent through RabbitMQ.
It’s made of:
- Message: A string identifier like `'media.success'` to indicate what kind of event it is.
- Payload: The actual data sent (e.g. `draftId`, `mediaId`, `publicUrl`, `userId`).
🔹 Queue (and Channel)
Think of a queue as a named pipe that connects publishers and subscribers. (Strictly speaking, a RabbitMQ channel is the lightweight virtual connection you publish and consume over, while the queue is what actually holds messages; the two are easy to conflate.)
We use routing keys to distinguish different event types or workflows.
The queue is responsible for carrying the event from the publisher to one or more subscribers.
🔹 Subscriber (aka Consumer)
A service that listens to a specific message on a specific channel.
It only acts on relevant events — filtered by message type.
📌 Example: The Post service listens for `media.success` and updates the post with the image URL once it receives the event.
✅ Summary:
“A message is an identifier. An event is the combination of message + payload. A queue is a named pipe that carries the event, and a subscriber listens for only relevant messages to act upon.”
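Since the boilerplate later in this article uses a topic exchange, it helps to see how topic routing keys are matched: patterns are dot-separated words, where `*` matches exactly one word and `#` matches zero or more. Here's a pure-function sketch of those matching rules (my own illustration of the semantics, not RabbitMQ's implementation):

```javascript
// Sketch of AMQP topic-pattern matching:
// '*' matches exactly one word, '#' matches zero or more words.
function topicMatches(pattern, routingKey) {
  const p = pattern.split(".");
  const k = routingKey.split(".");
  function match(i, j) {
    if (i === p.length) return j === k.length; // pattern used up: key must be too
    if (p[i] === "#") {
      // '#' may consume zero or more remaining words
      for (let skip = j; skip <= k.length; skip++) {
        if (match(i + 1, skip)) return true;
      }
      return false;
    }
    if (j === k.length) return false; // key exhausted but pattern isn't
    return (p[i] === "*" || p[i] === k[j]) && match(i + 1, j + 1);
  }
  return match(0, 0);
}
```

So a subscriber bound with `media.*` receives `media.success` and `media.failed`, but not `post.created` or `media.success.thumbnail`.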
🧰 Method 2: Real-World Media Uploads Using RabbitMQ
Now that you know:
- How microservices communicate
- Why they need to communicate
- What a message broker is
- How RabbitMQ works...
Let’s see how it all comes together in a real-world app.
🖼️ Flow: Uploading Media with RabbitMQ
Here’s how media uploads work using event-driven communication:
- User uploads an image → it hits the Media Service.
- The image uploads in the background while the user writes the caption.
- We allow the user to hit "Post" — even if media upload isn’t complete yet.
- The Post Service stores this post in draft state.
- Once the media upload finishes, Media Service publishes an event: `media.success`.
- The Post Service (subscriber) listens for this event.
- It updates the corresponding draft post with `mediaId` and `mediaUrl`.
- The post is marked live and the user is notified.
- ✅ All of this happens asynchronously, using `draftId` as a shared link between the services.
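The whole flow can be simulated in a few lines of in-memory JavaScript. The data shapes below (a `posts` map, a `status` field) are illustrative only; the real services use MongoDB and RabbitMQ:

```javascript
// In-memory simulation of the draft flow (no broker, names hypothetical).
const posts = new Map(); // draftId -> post

// Post service: the user hits "Post" before the upload finishes,
// so we store the post in a draft state.
function createDraftPost(draftId, caption, userId) {
  const post = { draftId, caption, userId, mediaIds: [], status: "draft" };
  posts.set(draftId, post);
  return post;
}

// Post service subscriber: what runs when 'media.success' arrives.
function onMediaSuccess(event) {
  const post = posts.get(event.draftId);
  if (!post) return; // no matching draft: log and drop (or dead-letter)
  post.mediaIds.push(event.mediaId);
  post.status = "live";
}

createDraftPost("d1", "hello", "u1");
onMediaSuccess({ draftId: "d1", mediaId: "m42", publicUrl: "https://cdn.example/a.jpg" });
```

The `draftId` is the only thing the two sides share, which is what keeps the services decoupled.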
🌟 Why This Works Well
- ⚡ Fast user experience (no waiting on uploads)
- 🔁 Async & fault-tolerant
- 🧱 Decoupled, scalable, and clean architecture
⚖️ Pros and Cons of Using RabbitMQ
So far, RabbitMQ seems magical — but like every tool, it has trade-offs. Let’s break them down:
✅ Pros
- **Loose Coupling Between Services**: Services don’t need to know about each other — just publish or subscribe to events. This makes scaling and updating easier.
- **Asynchronous Processing = Faster User Experience**: Offloading tasks like image processing or sending emails to background jobs keeps your app snappy for users.
- **Reliable Delivery**: RabbitMQ uses message acknowledgments and persistence, meaning messages won’t just disappear if something crashes mid-process.
- **Scalable Architecture**: Want to process 1,000 images at once? Just scale up consumers. RabbitMQ helps decouple load from user interaction.
- **Retry and Dead Letter Queues (DLQs)**: If something goes wrong, failed messages can be retried or moved to a DLQ for debugging later. You don’t lose important events.
⚠️ Cons
- **Increased Complexity**: Now you’re managing RabbitMQ itself, plus writing consumers, handling retries, and monitoring failures. More moving parts = more things to maintain.
- **Operational Overhead**: RabbitMQ is another service to deploy, secure, and monitor. You'll need alerting in place for failures, queue backlogs, or memory issues.
- **Harder Debugging**: With async behavior, bugs can feel like ghosts. Tracing an error from post creation → media upload → queue → consumer can get tricky.
- **Potential Message Loss**: If not configured properly (e.g., queues not durable, no acknowledgments), you can lose messages on crashes or reboots.
- **Latency (Sometimes)**: Although async improves UX, the actual task (like showing the uploaded image) might take longer to fully process compared to inline logic.
🧯 What If RabbitMQ Goes Down?
Good question.
- Publish Fails? Your service should catch the error and either retry or log it for a retry job later.
- Consumer Crashes? Messages stay in the queue until a consumer comes back online.
- Broker Crash? If queues aren’t marked as durable, messages may be lost. Always enable durability for production queues.
- DLQ (Dead Letter Queue): Failed messages can be routed to a special queue for investigation instead of being lost or retried endlessly.
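As a sketch of what "configured properly" means, here are the queue options involved. The exchange and routing-key names (`MEDIA_DLX`, `media.failed`, `media.events`) are made up for illustration; the option keys (`durable`, `x-dead-letter-exchange`, `x-dead-letter-routing-key`) are standard RabbitMQ/amqplib ones:

```javascript
// Options for a production-ready queue (names are hypothetical).
const queueOptions = {
  durable: true, // the queue definition survives a broker restart
  arguments: {
    // rejected or expired messages get rerouted here instead of vanishing
    "x-dead-letter-exchange": "MEDIA_DLX",
    "x-dead-letter-routing-key": "media.failed",
  },
};

// With amqplib this would look like:
//   await channel.assertQueue("media.events", queueOptions);
//   channel.publish(EXCHANGE_NAME, key, buf, { persistent: true });
// Note: messages must also be published with persistent: true;
// a durable queue alone won't save message contents across restarts.
console.log(queueOptions);
```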
💡 Tip: Use monitoring tools like Prometheus + Grafana or RabbitMQ's built-in dashboard to keep tabs on queues, consumers, and delivery rates.
Talk Is Cheap, Show Me the Code (and Explain It)
Boilerplate Code
Here's a simple boilerplate setup to use RabbitMQ in a Node.js app.
It includes functions to connect, publish, and consume events using a topic exchange.

`connectToRabbitMq()`

Connects to RabbitMQ, creates a channel, and asserts a topic exchange named `FACEBOOK_EVENTS` (from `const EXCHANGE_NAME = 'FACEBOOK_EVENTS'`).

`publishEvent(routingKey, message)`

Publishes a message to the exchange under the given `routingKey`.

`consumeEvent(routingKey, callback)`

The `consumeEvent` function allows a service to listen for a specific event. It sets up a temporary queue, binds it to the correct routing key, and then starts consuming messages. Every time a new message arrives, it runs your callback function and tells RabbitMQ that the message was handled successfully using `channel.ack()`.
import amqplib from "amqplib" // Node.js library to work with RabbitMQ
import logger from "./logger.js"
import dotenv from "dotenv"
dotenv.config();
let channel = null
let connection = null
const EXCHANGE_NAME = 'FACEBOOK_EVENTS'
// connect to RabbitMQ and assert a topic exchange with the specified name
async function connectToRabbitMq() {
try {
connection = await amqplib.connect(process.env.RABBITMQ_URL);
channel = await connection.createChannel()
await channel.assertExchange(EXCHANGE_NAME, "topic", { durable: false }) // use durable: true in production so the exchange survives broker restarts
logger.info("Connected to rabbit mq");
return channel
} catch (error) {
logger.error("Error connecting to RabbitMQ", error);
throw error; // rethrow so callers don't continue with an undefined channel
}
}
// function to publish a message
// the routingKey is an identifier passed in, e.g. 'media.success'
async function publishEvent(routingKey, message) {
logger.info("Publishing event with routing key:", routingKey);
if(!channel){
await connectToRabbitMq();
// if not connected connect to rabbitMq
}
// once connected, publish the message with the routing key
channel.publish(EXCHANGE_NAME, routingKey, Buffer.from(JSON.stringify(message)))
logger.info("Event published with routing key:", routingKey);
}
// Function to subscribe to a specific event (routingKey)
// Whenever a message with that key is published, this function will call the provided callback
async function consumeEvent(routingKey, callback) {
// If not connected to RabbitMQ yet, connect first
if (!channel) {
await connectToRabbitMq();
}
// Create a temporary exclusive queue just for this consumer
const q = await channel.assertQueue("", { exclusive: true });
// Bind this queue to the exchange using the routingKey
// So only messages with that routingKey will go to this queue
await channel.bindQueue(q.queue, EXCHANGE_NAME, routingKey);
// Start listening to the queue for incoming messages
channel.consume(q.queue, (msg) => {
if (msg !== null) {
// Parse the message from Buffer to JSON
const content = JSON.parse(msg.content.toString());
// Call the user-provided callback function with the message content
callback(content);
// Acknowledge the message (let RabbitMQ know it was processed successfully)
channel.ack(msg);
}
});
// Log what we're subscribed to
logger.info(`Subscribed to event '${routingKey}' from exchange '${EXCHANGE_NAME}'`);
}
export {connectToRabbitMq, publishEvent, consumeEvent}
Media controller publishing messages:
const uploadMedia = async(req, res)=>{
logger.info("Starting media upload");
try {
if(!req.file){
logger.error("No file found. Add a file and try again.")
return res.status(400).json({
success:false,
message:"No file found. Please add a file and try again."
})
}
const {originalname, mimetype, buffer} = req.file
const userId = req.user; // userId for authenticated users
const {draftId} = req.body;
logger.info(`File details: name=${originalname}, type=${mimetype}`);
logger.info("upload to cloudinary started")
const cloudinaryUploadResult = await uploadMediaToCloudinary(req.file);
logger.info(`Cloudinary upload successful. Public ID: ${cloudinaryUploadResult.public_id}`)
const newlyCreatedMedia = new Media({
publicId: cloudinaryUploadResult.public_id,
originalName:originalname,
mimeType:mimetype,
url: cloudinaryUploadResult.secure_url,
userId
})
await newlyCreatedMedia.save();
// added this to test the automated post updation with mediaIds
await publishEvent("media.success", {
draftId: draftId,
publicUrl: newlyCreatedMedia.url,
userId:userId,
mediaId: newlyCreatedMedia._id
})
res.json({
success:true,
mediaId:newlyCreatedMedia._id,
url: newlyCreatedMedia.url,
message:"Media Upload is successful"
})
} catch (error) {
logger.error("Error happened while uploading.", error)
res.status(500).json({
success:false,
message:"Internal server error happened on our side"
})
}
}
Post Service consuming the "media.success" message:
async function startServer(){
try {
await connectToRabbitMq();
await consumeEvent("media.success",updatePostWithMedia)
app.listen(PORT, ()=>{
logger.info(`Post service started listening on port ${PORT}`);
})
} catch (error) {
logger.error("Failed to start server", error)
process.exit(1)
}
}
startServer();
Finally, handling the post update with `updatePostWithMedia`:
import logger from "../utils/logger.js";
import Post from "../model/post.js";
async function updatePostWithMedia(event) {
  logger.info(`updatePostWithMedia initialized for event ${JSON.stringify(event)}`);
  try {
    // findOne returns null when nothing matches; the original find()
    // returned an empty array, which is truthy, so the guard never fired
    const existingPost = await Post.findOne({ draftId: event.draftId });
    if (!existingPost) {
      logger.warn(`No post could be found for draftId: ${event.draftId}`);
      return;
    }
    const updatedPostDetails = await Post.findOneAndUpdate(
      { draftId: event.draftId },
      { $set: { mediaIds: [event.mediaId] } },
      { new: true }
    );
    logger.info(`Updated post: ${JSON.stringify(updatedPostDetails)}`);
  } catch (error) {
    logger.error(`Error happened while updating post with draftId: ${event?.draftId}`, error);
  }
}
export default updatePostWithMedia;
💫 Wrapping Up
Building this mini real-world social media app has been nothing short of fun, frustrating, and absolutely worth it.
You only truly appreciate RabbitMQ when you see your database update itself without you lifting a finger. That first time it works? You will jump out of your seat and yell: "Yo, it’s actually working!"
That’s the magic of event-driven architecture.
And once you feel it — not just understand it — you're hooked.
So go ahead: take the concepts here, clone the repo, and mess with the code.
Trigger new events. Chain them. Break things. Fix them.
You'll start falling in love with backend systems, architecture patterns, and this beautiful chaos called event-driven programming.
And hey — don’t just build to get it done or pad your resume.
Live a little.
- Mess with code.
- Weep.
- Cry.
- Fix it.
- Take pride.
😩 I once debugged a RabbitMQ event flow for three hours — turned out it was a typo.
Was I mad? Sure.
Was I proud? Absolutely.
Much love and peace to all who made it this far.
I’ll drop the GitHub link below — fork it, break your localhost, make it better.
Peace out. 💻🔥
https://github.com/PRASHANTSWAROOP001/Social-Media-Microservice