<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jagannath Shingne</title>
    <description>The latest articles on DEV Community by Jagannath Shingne (@jagannathshingne01).</description>
    <link>https://dev.to/jagannathshingne01</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1460552%2Fe2588511-29db-4e38-833e-349890bc426a.png</url>
      <title>DEV Community: Jagannath Shingne</title>
      <link>https://dev.to/jagannathshingne01</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jagannathshingne01"/>
    <language>en</language>
    <item>
      <title>Automated MongoDB Backups — Production‑Ready Guide</title>
      <dc:creator>Jagannath Shingne</dc:creator>
      <pubDate>Fri, 06 Feb 2026 14:16:04 +0000</pubDate>
      <link>https://dev.to/jagannathshingne01/automated-mongodb-backups-production-ready-guide-2308</link>
      <guid>https://dev.to/jagannathshingne01/automated-mongodb-backups-production-ready-guide-2308</guid>
      <description>&lt;h2&gt;
  
  
  Why Backups Matter (More Than You Think)
&lt;/h2&gt;

&lt;p&gt;Production databases &lt;em&gt;will&lt;/em&gt; fail — human error, cloud outages, bad deployments, or accidental deletes. The only real safety net is a &lt;strong&gt;reliable, automated backup system&lt;/strong&gt; that you trust enough to restore from at 3 AM.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll walk through a &lt;strong&gt;battle‑tested MongoDB backup system&lt;/strong&gt; built using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;mongodump&lt;/code&gt; / &lt;code&gt;mongorestore&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Node.js + &lt;code&gt;node-cron&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Amazon S3&lt;/li&gt;
&lt;li&gt;MongoDB (as backup metadata store)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end, you’ll understand &lt;strong&gt;how to safely back up MongoDB&lt;/strong&gt;, store it in S3, clean old backups, and fully restore your database if disaster strikes.&lt;/p&gt;




&lt;h2&gt;
  
  
  System Goals
&lt;/h2&gt;

&lt;p&gt;We wanted a system that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backs up &lt;strong&gt;multiple MongoDB databases&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Runs &lt;strong&gt;daily, weekly, and monthly&lt;/strong&gt; automatically&lt;/li&gt;
&lt;li&gt;Stores backups safely in &lt;strong&gt;S3&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Tracks everything in &lt;strong&gt;MongoDB&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Deletes expired backups automatically&lt;/li&gt;
&lt;li&gt;Can &lt;strong&gt;fully restore a deleted database&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No manual steps. No risky shortcuts.&lt;/p&gt;




&lt;h2&gt;
  
  
  Core Concept: MongoDB Dumps
&lt;/h2&gt;

&lt;p&gt;MongoDB provides a native backup tool called &lt;strong&gt;&lt;code&gt;mongodump&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We use it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongodump &lt;span class="nt"&gt;--uri&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;MONGO_URI&amp;gt;"&lt;/span&gt; &lt;span class="nt"&gt;--archive&lt;/span&gt; &lt;span class="nt"&gt;--gzip&lt;/span&gt; &lt;span class="nt"&gt;--oplog&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What this does
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--archive&lt;/code&gt; → outputs a &lt;strong&gt;single file&lt;/strong&gt; (easy to store)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--gzip&lt;/code&gt; → compresses the backup&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--oplog&lt;/code&gt; → ensures &lt;strong&gt;consistent snapshots&lt;/strong&gt; for replica sets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result: &lt;strong&gt;one &lt;code&gt;.archive.gz&lt;/code&gt; file per backup&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This backup is &lt;strong&gt;read‑only&lt;/strong&gt; — your original database is never modified.&lt;/p&gt;




&lt;h2&gt;
  
  
  Supporting Multiple Databases
&lt;/h2&gt;

&lt;p&gt;Instead of hard‑coding one database, we define a &lt;strong&gt;generic list of MongoDB connections&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;databases&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;app-db&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MONGO_URI_1&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;analytics-db&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MONGO_URI_2&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These names are &lt;strong&gt;examples only&lt;/strong&gt; and can represent any MongoDB database in any environment.&lt;/p&gt;

&lt;p&gt;Each database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses the same backup logic&lt;/li&gt;
&lt;li&gt;Generates its own backup file&lt;/li&gt;
&lt;li&gt;Stores metadata separately&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach keeps the system &lt;em&gt;generic and reusable&lt;/em&gt;.&lt;/p&gt;
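&lt;p&gt;A minimal sketch of that shared logic, assuming Node.js and the built-in &lt;code&gt;child_process&lt;/code&gt; module; the helper names (&lt;code&gt;buildDumpArgs&lt;/code&gt;, &lt;code&gt;runDump&lt;/code&gt;) are illustrative, not from a real library:&lt;/p&gt;

```typescript
// Sketch: one backup routine shared by every configured database.
import { spawn } from "node:child_process";

// Pure helper: the mongodump invocation for one database.
function buildDumpArgs(uri: string, outFile: string) {
  return ["--uri=" + uri, "--archive=" + outFile, "--gzip", "--oplog"];
}

// Run mongodump for a single database; resolves on exit code 0.
function runDump(uri: string, outFile: string) {
  return new Promise((resolve, reject) => {
    const child = spawn("mongodump", buildDumpArgs(uri, outFile));
    child.on("exit", (code) =>
      code === 0 ? resolve(outFile) : reject(new Error("mongodump failed"))
    );
  });
}
```

&lt;p&gt;Each entry in &lt;code&gt;databases&lt;/code&gt; would be passed through the same &lt;code&gt;runDump&lt;/code&gt; call, so adding a database means adding one config line, not new code.&lt;/p&gt;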




&lt;h2&gt;
  
  
  Automated Cron Jobs
&lt;/h2&gt;

&lt;p&gt;All backups run &lt;strong&gt;inside the Node.js app&lt;/strong&gt; using &lt;code&gt;node-cron&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Job Type&lt;/th&gt;
&lt;th&gt;Schedule&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Daily&lt;/td&gt;
&lt;td&gt;Every day&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Weekly&lt;/td&gt;
&lt;td&gt;Every Sunday&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monthly&lt;/td&gt;
&lt;td&gt;1st day of month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cleanup&lt;/td&gt;
&lt;td&gt;Every day&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each cron:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Runs &lt;code&gt;mongodump&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Uploads the file to S3&lt;/li&gt;
&lt;li&gt;Saves metadata in MongoDB&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No external cron servers required.&lt;/p&gt;
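&lt;p&gt;The schedule table maps naturally onto standard five-field cron expressions. A sketch, with run times chosen purely for illustration (the article does not specify them):&lt;/p&gt;

```typescript
// Illustrative cron expressions for the schedule table.
// The specific hours (02:00 onward) are assumptions, not from the article.
const SCHEDULES = {
  daily: "0 2 * * *",   // every day at 02:00
  weekly: "0 3 * * 0",  // every Sunday at 03:00
  monthly: "0 4 1 * *", // 1st day of every month at 04:00
  cleanup: "0 5 * * *", // every day at 05:00
};

// With node-cron, each job would be registered roughly like:
//   cron.schedule(SCHEDULES.daily, () => runBackup("daily"));
```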




&lt;h2&gt;
  
  
  S3 Storage Structure
&lt;/h2&gt;

&lt;p&gt;Backups are uploaded as &lt;code&gt;.archive.gz&lt;/code&gt; files to an S3 bucket.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;generic and recommended folder structure&lt;/strong&gt; looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3://mongo-backups/
  {dbName}/
    daily/
    weekly/
    monthly/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;{dbName}&lt;/code&gt; → sanitized MongoDB database name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;daily | weekly | monthly&lt;/code&gt; → backup frequency&lt;/li&gt;
&lt;/ul&gt;
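&lt;p&gt;A sketch of how such keys might be built; the sanitization rules and file-name pattern are illustrative choices, not the article's exact implementation:&lt;/p&gt;

```typescript
// Sketch: build the S3 object key for one backup file.
function sanitizeDbName(name: string) {
  // Lowercase; spaces and unsafe characters collapsed to hyphens.
  return name.toLowerCase().trim().replace(/[^a-z0-9._-]+/g, "-");
}

function backupKey(dbName: string, frequency: string, timestamp: Date) {
  // Timestamped file names keep multiple backups per prefix distinct.
  const stamp = timestamp.toISOString().replace(/[:.]/g, "-");
  return `mongo-backups/${sanitizeDbName(dbName)}/${frequency}/${stamp}.archive.gz`;
}
```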

&lt;h3&gt;
  
  
  Production Safety Rules
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Database names are &lt;strong&gt;sanitized&lt;/strong&gt; (lowercase, no spaces)&lt;/li&gt;
&lt;li&gt;Files are uploaded &lt;strong&gt;only after dump succeeds&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Local files are deleted &lt;strong&gt;only after S3 upload succeeds&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
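&lt;p&gt;The last two rules boil down to strict sequencing. A minimal sketch, with the three steps injected as functions so the ordering is explicit (the names are hypothetical):&lt;/p&gt;

```typescript
// Sketch: the safety rules as strict sequencing.
// dump, upload and removeLocal are injected so the logic stays storage-agnostic.
async function safeBackup(dump: Function, upload: Function, removeLocal: Function) {
  // 1. Dump first; if mongodump fails we never touch S3.
  const localFile = await dump();
  // 2. Upload only after the dump succeeded.
  await upload(localFile);
  // 3. Delete the local file only after the upload succeeded.
  await removeLocal(localFile);
  return localFile;
}
```

&lt;p&gt;If any step throws, everything after it is skipped, so a failed upload can never cost you the local copy.&lt;/p&gt;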

&lt;p&gt;This structure works for &lt;strong&gt;any project or organization&lt;/strong&gt; and scales cleanly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Backup Metadata in MongoDB
&lt;/h2&gt;

&lt;p&gt;MongoDB stores &lt;strong&gt;generic backup metadata&lt;/strong&gt;, independent of any company or product:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;dbName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;app-db&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;backupType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;daily&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;storageProvider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;s3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;storageKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mongo-backups/app-db/daily/backup.archive.gz&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;ISODate&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="nx"&gt;expireAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;ISODate&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why this matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB becomes the &lt;strong&gt;source of truth&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Every backup is &lt;strong&gt;auditable&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Cleanup logic is &lt;strong&gt;safe and deterministic&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing is deleted unless it is &lt;strong&gt;explicitly tracked&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Automatic Cleanup of Expired Backups
&lt;/h2&gt;

&lt;p&gt;A separate daily cron job:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Finds backups where &lt;code&gt;expireAt &amp;lt; now&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Deletes the file from S3&lt;/li&gt;
&lt;li&gt;Deletes the record from MongoDB&lt;/li&gt;
&lt;/ol&gt;
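&lt;p&gt;The three steps above can be sketched as one function; the collaborators are injected and their names are hypothetical, but the ordering matters: the S3 object goes first, the MongoDB record last, so a crash mid-pass never leaves an untracked orphan record:&lt;/p&gt;

```typescript
// Sketch of the daily cleanup pass.
async function cleanupExpired(
  findExpired: Function,  // returns metadata docs where expireAt has passed
  deleteFromS3: Function, // deletes one object by storageKey
  deleteRecord: Function  // removes the metadata document
) {
  const now = Date.now();
  const expired = await findExpired(now);
  for (const doc of expired) {
    // Only backups tracked in MongoDB are ever deleted.
    await deleteFromS3(doc.storageKey);
    await deleteRecord(doc._id);
  }
  return expired.length;
}
```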

&lt;p&gt;If it’s not in MongoDB → it’s &lt;strong&gt;not deleted&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This prevents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accidental data loss&lt;/li&gt;
&lt;li&gt;Unlimited storage growth&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Disaster Recovery (Restore Process)
&lt;/h2&gt;

&lt;p&gt;If &lt;strong&gt;any MongoDB database&lt;/strong&gt; is deleted or corrupted:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Download the latest backup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3 &lt;span class="nb"&gt;cp &lt;/span&gt;s3://mongo-backups/&lt;span class="o"&gt;{&lt;/span&gt;dbName&lt;span class="o"&gt;}&lt;/span&gt;/daily/latest.archive.gz &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Restore using &lt;code&gt;mongorestore&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongorestore &lt;span class="nt"&gt;--gzip&lt;/span&gt; &lt;span class="nt"&gt;--archive&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;latest.archive.gz &lt;span class="nt"&gt;--drop&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--drop&lt;/code&gt; clears existing data&lt;/li&gt;
&lt;li&gt;The database is rebuilt from the archive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This process works for &lt;strong&gt;any MongoDB deployment&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Safety Guarantees
&lt;/h2&gt;

&lt;p&gt;✔ Original DB is never modified&lt;br&gt;
✔ Backups are full &amp;amp; consistent&lt;br&gt;
✔ Storage growth is controlled&lt;br&gt;
✔ Every backup is tracked&lt;br&gt;
✔ Restore is always possible&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;production‑grade&lt;/strong&gt;, not a demo script.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;This guide demonstrated a &lt;strong&gt;generic, production‑safe MongoDB backup strategy&lt;/strong&gt; using &lt;code&gt;mongodump&lt;/code&gt;, scheduled jobs, S3 storage, MongoDB‑tracked metadata, automated cleanup, and a reliable restore flow — adaptable to &lt;strong&gt;any project or organization&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Happy building 🚀&lt;/p&gt;

</description>
      <category>automation</category>
      <category>mongodb</category>
      <category>node</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>From Synchronous to Scalable: Redesigning Video Processing with RabbitMQ</title>
      <dc:creator>Jagannath Shingne</dc:creator>
      <pubDate>Wed, 04 Feb 2026 14:08:48 +0000</pubDate>
      <link>https://dev.to/jagannathshingne01/from-synchronous-to-scalable-redesigning-video-processing-with-rabbitmq-3dbb</link>
      <guid>https://dev.to/jagannathshingne01/from-synchronous-to-scalable-redesigning-video-processing-with-rabbitmq-3dbb</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Handling video at scale is never trivial. What starts as a simple upload-and-process flow can quickly turn into a performance bottleneck as video length, user traffic, and processing complexity grow.&lt;/p&gt;

&lt;p&gt;In our case, we faced a major challenge when increasing video durations began to slow down our APIs and degrade the user experience.&lt;/p&gt;

&lt;p&gt;This blog walks through the problem we faced with synchronous video processing, why it didn’t scale, and how we redesigned the system into a &lt;strong&gt;RabbitMQ-driven asynchronous pipeline&lt;/strong&gt; that significantly improved performance and reliability.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Original Problem
&lt;/h2&gt;

&lt;p&gt;Initially, our backend followed a &lt;strong&gt;fully synchronous video processing approach&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user recorded a video on the client.&lt;/li&gt;
&lt;li&gt;The full video was sent to the backend API.&lt;/li&gt;
&lt;li&gt;The API handled everything:

&lt;ul&gt;
&lt;li&gt;Uploading the video to S3&lt;/li&gt;
&lt;li&gt;Processing the video&lt;/li&gt;
&lt;li&gt;Generating subtitles&lt;/li&gt;
&lt;li&gt;Uploading the subtitle &lt;code&gt;.vtt&lt;/code&gt; file to S3&lt;/li&gt;
&lt;li&gt;Updating the database with both URLs&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Only after all these steps completed did the API return a success response.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach worked fine when video lengths were limited to &lt;strong&gt;5–6 minutes&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
However, as our interview recordings gradually grew longer, this design started showing serious limitations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Didn’t Scale
&lt;/h2&gt;

&lt;p&gt;As video length grew, so did the problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Long API response times&lt;/strong&gt; due to heavy video processing
&lt;/li&gt;
&lt;li&gt;APIs blocked until FFmpeg and subtitle generation finished
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poor user experience&lt;/strong&gt;, as users had to wait until everything completed
&lt;/li&gt;
&lt;li&gt;Limited scalability as traffic increased
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The API was doing too much. Video processing, cloud uploads, and database updates were tightly coupled with user-facing requests.&lt;/p&gt;

&lt;p&gt;We needed a better architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  The New Scalable Architecture
&lt;/h2&gt;

&lt;p&gt;To overcome these limitations, we redesigned the system with &lt;strong&gt;asynchronous processing and background workers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firdlqnco8xcgmn7o1t8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firdlqnco8xcgmn7o1t8j.png" alt="Asynchronous video processing with RabbitMQ, S3, and background workers" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Chunked Video Upload with Presigned URLs
&lt;/h3&gt;

&lt;p&gt;Instead of sending the full video to the API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The client records the video in &lt;strong&gt;small chunks&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Each chunk is uploaded &lt;strong&gt;directly to S3&lt;/strong&gt; using &lt;strong&gt;presigned URLs&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This removes the API entirely from the upload path.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the last chunk is uploaded, the client sends a lightweight request to mark the interview as completed.&lt;/p&gt;
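&lt;p&gt;One way to make chunked uploads merge-friendly is deterministic, zero-padded chunk keys; the pattern below is an assumption for illustration, not our exact scheme:&lt;/p&gt;

```typescript
// Sketch: deterministic chunk naming so the merge worker can list
// chunks in order. The key pattern and padding width are illustrative.
function chunkKey(sessionId: string, index: number) {
  // Zero-pad so lexicographic S3 listing matches chunk order.
  return `uploads/${sessionId}/chunk-${String(index).padStart(5, "0")}.webm`;
}

// The API would hand the client a presigned PUT URL for each key,
// e.g. with the AWS SDK v3 presigner:
//   getSignedUrl(s3, new PutObjectCommand({ Bucket, Key: chunkKey(id, i) }), { expiresIn: 900 })
```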




&lt;h3&gt;
  
  
  2. Asynchronous Processing with RabbitMQ
&lt;/h3&gt;

&lt;p&gt;After interview completion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The API publishes a message to &lt;strong&gt;RabbitMQ&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The request immediately returns success to the user.&lt;/li&gt;
&lt;li&gt;All heavy processing happens in the background.&lt;/li&gt;
&lt;/ul&gt;
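&lt;p&gt;A sketch of the publish step, assuming &lt;code&gt;amqplib&lt;/code&gt;; the queue name and payload shape are illustrative:&lt;/p&gt;

```typescript
// Sketch: publish the "interview completed" event and return immediately.
const QUEUE = "video.process"; // illustrative queue name

function publishProcessingJob(channel: any, interviewId: string) {
  const payload = JSON.stringify({
    interviewId,
    requestedAt: new Date().toISOString(),
  });
  // persistent: true lets the message survive a broker restart
  // (when the queue itself is declared durable).
  channel.sendToQueue(QUEUE, Buffer.from(payload), { persistent: true });
}
```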

&lt;p&gt;We introduced &lt;strong&gt;two dedicated RabbitMQ workers&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Video Merge Worker
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Fetches all uploaded chunks from S3
&lt;/li&gt;
&lt;li&gt;Uses &lt;strong&gt;FFmpeg&lt;/strong&gt; to merge them into a single video
&lt;/li&gt;
&lt;li&gt;Uploads the final video back to S3
&lt;/li&gt;
&lt;li&gt;Updates the database with the merged video URL
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Subtitle Worker
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Downloads the final merged video from S3
&lt;/li&gt;
&lt;li&gt;Extracts audio from the video&lt;/li&gt;
&lt;li&gt;Converts audio into subtitles (&lt;code&gt;.vtt&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Uploads the &lt;code&gt;.vtt&lt;/code&gt; file to S3
&lt;/li&gt;
&lt;li&gt;Updates the database with the subtitle URL
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each worker is isolated, scalable, and focused on a single responsibility.&lt;/p&gt;
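&lt;p&gt;Both workers can share the same consume loop. A sketch against &lt;code&gt;amqplib&lt;/code&gt;'s &lt;code&gt;consume&lt;/code&gt;/&lt;code&gt;ack&lt;/code&gt;/&lt;code&gt;nack&lt;/code&gt; API; the injected handler does the actual merging or subtitle work:&lt;/p&gt;

```typescript
// Sketch: worker loop with explicit acknowledgements.
async function startWorker(channel: any, queue: string, handler: Function) {
  // prefetch(1): one in-flight job per worker, so scaling means adding workers.
  channel.prefetch(1);
  await channel.consume(queue, async (msg: any) => {
    try {
      await handler(JSON.parse(msg.content.toString()));
      channel.ack(msg);               // success: remove the job from the queue
    } catch (err) {
      channel.nack(msg, false, true); // failure: requeue the job for retry
    }
  });
}
```

&lt;p&gt;Requeueing on failure is part of what makes the pipeline fault-tolerant: a crashed worker's unacknowledged jobs are returned to the queue and picked up by another worker.&lt;/p&gt;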




&lt;h2&gt;
  
  
  Key Benefits
&lt;/h2&gt;

&lt;p&gt;This redesign brought immediate and measurable improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;80–90% reduction in API response time&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Heavy CPU tasks moved out of the API layer&lt;/li&gt;
&lt;li&gt;Fully asynchronous, fault-tolerant processing&lt;/li&gt;
&lt;li&gt;Easily scalable by adding more workers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No waiting time for users&lt;/strong&gt; after finishing the interview&lt;/li&gt;
&lt;li&gt;Support for &lt;strong&gt;videos longer than 30 minutes&lt;/strong&gt; with zero downtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Users can now finish their interview and move on instantly, while processing continues quietly in the background.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This shift from a synchronous to an asynchronous architecture was a turning point.&lt;br&gt;&lt;br&gt;
By decoupling video uploads and processing from the API layer and leveraging RabbitMQ workers, we built a system that is faster, more scalable, and far more resilient.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Redesigned synchronous video processing into a RabbitMQ-driven asynchronous pipeline, cutting API response time by 80–90% while enabling scalable background processing for video merging, subtitle generation, and cloud storage.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you’re dealing with long-running tasks like video processing, this pattern can dramatically improve both performance and user experience.&lt;/p&gt;

&lt;p&gt;Happy building 🚀&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>backend</category>
      <category>performance</category>
      <category>systemdesign</category>
    </item>
  </channel>
</rss>
