<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kabeer N Shah</title>
    <description>The latest articles on DEV Community by Kabeer N Shah (@kabeer_nshah_8bdaa6e7fc8).</description>
    <link>https://dev.to/kabeer_nshah_8bdaa6e7fc8</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3731690%2Fb01d5cc8-6597-497d-ba31-8bb939fe32a3.png</url>
      <title>DEV Community: Kabeer N Shah</title>
      <link>https://dev.to/kabeer_nshah_8bdaa6e7fc8</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kabeer_nshah_8bdaa6e7fc8"/>
    <language>en</language>
    <item>
      <title>How We Built a 100% Free Disaster Recovery System for Our Client</title>
      <dc:creator>Kabeer N Shah</dc:creator>
      <pubDate>Mon, 20 Apr 2026 18:05:28 +0000</pubDate>
      <link>https://dev.to/kabeer_nshah_8bdaa6e7fc8/how-we-built-a-100-free-automated-backup-system-for-our-client-417i</link>
      <guid>https://dev.to/kabeer_nshah_8bdaa6e7fc8/how-we-built-a-100-free-automated-backup-system-for-our-client-417i</guid>
      <description>&lt;p&gt;When a client’s database is the heart of their business, a simple local backup isn't enough. If the server catches fire, the local backup burns with it. What they really needed was a &lt;strong&gt;Disaster Recovery (DR) System&lt;/strong&gt;—a way to ensure that even in a worst-case scenario, their data is safe, off-site, and ready to be restored.&lt;/p&gt;

&lt;p&gt;Standard cloud recovery plans can cost hundreds of dollars a month. However, after analyzing the client's data volume, we realized we could build a high-resilience system for &lt;strong&gt;$0/month&lt;/strong&gt; using &lt;strong&gt;Docker&lt;/strong&gt;, &lt;strong&gt;Google Drive&lt;/strong&gt;, and &lt;strong&gt;rclone&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;The Strategy: Off-Site Redundancy&lt;/h3&gt;

&lt;p&gt;A true Disaster Recovery plan follows the &lt;strong&gt;3-2-1 Rule&lt;/strong&gt;: 3 copies of data, on 2 different media, with &lt;strong&gt;1 copy off-site&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Because our client had modest storage needs, we recommended skipping expensive enterprise vaults and instead using their existing 15GB of free Google Drive storage as their off-site DR site. It’s secure, global, and provides the redundancy needed to survive a local server disaster.&lt;/p&gt;

&lt;h3&gt;Step 1: The "Vault" (Docker)&lt;/h3&gt;

&lt;p&gt;We encapsulated the recovery tools into a Docker container. This ensures that the recovery process is portable; if the primary server fails, we can spin up this same "vault" on any other machine in minutes to begin the restoration.&lt;/p&gt;

&lt;p&gt;To make it bulletproof on their Windows server, we baked the settings directly into the image. For example, here is a typical &lt;code&gt;Dockerfile&lt;/code&gt; for this DR setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example Dockerfile&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; alpine:3.18&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apk add &lt;span class="nt"&gt;--no-cache&lt;/span&gt; postgresql-client rclone bash ca-certificates
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /backups /config
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./config /config&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./scripts/backup-db.sh /scripts/backup-db.sh&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /scripts/backup-db.sh
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["/bin/bash", "-c", "while true; do /scripts/backup-db.sh; sleep 86400; done"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
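&lt;p&gt;With an image like this, standing the vault up on a replacement machine takes two commands. (The image tag and host path below are illustrative, not from the original setup.)&lt;/p&gt;

```shell
# Build the vault image from the Dockerfile above (tag name is hypothetical)
docker build -t dr-vault .

# Run it detached; mounting a host folder keeps local snapshots
# outside the container so they survive restarts
docker run -d --name dr-vault -v /srv/backups:/backups dr-vault
```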



&lt;h3&gt;Step 2: The "Bridge" (Rclone)&lt;/h3&gt;

&lt;p&gt;To move data to our off-site DR location, we used &lt;strong&gt;rclone&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge:&lt;/strong&gt; A Google service account has no usable Drive storage quota of its own, so it can't upload backups into a personal account's free 15GB.&lt;br&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; We used an &lt;strong&gt;OAuth token&lt;/strong&gt; instead, allowing the system to act as the client and use their full personal storage quota for disaster protection.&lt;/p&gt;
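&lt;p&gt;After &lt;code&gt;rclone authorize "drive"&lt;/code&gt; prints the token, the remote definition in &lt;code&gt;/config/rclone.conf&lt;/code&gt; ends up looking roughly like this (the token JSON is elided here):&lt;/p&gt;

```ini
[gdrive]
type = drive
scope = drive
token = {"access_token":"...","refresh_token":"...","token_type":"Bearer","expiry":"..."}
```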

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important Security Tip for Windows Users:&lt;/strong&gt; &lt;br&gt;
If Windows blocks the connection while you're generating your token, the Windows NAT service (&lt;code&gt;winnat&lt;/code&gt;) may have reserved the local port that rclone's OAuth callback listens on. Temporarily stop it to authorize, and &lt;strong&gt;always start it again immediately afterwards&lt;/strong&gt; so the rest of your system keeps working correctly.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Stop the service: &lt;code&gt;net stop winnat&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Get your token: &lt;code&gt;rclone authorize "drive"&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dp94sm3c0e5ekikw61u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dp94sm3c0e5ekikw61u.png" alt="Login to Google via Rclone" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start the service back up:&lt;/strong&gt; &lt;code&gt;net start winnat&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhoxraurs8mudzjz9m8pt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhoxraurs8mudzjz9m8pt.png" alt="Local Login Success" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Step 3: Making it Permanent (The "Forever" Fix)&lt;/h3&gt;

&lt;p&gt;In a disaster recovery scenario, the last thing you want is a "broken link." Google tokens expire every 7 days in "Testing" mode. To ensure the recovery pipeline is always active, we "Published" the app in the &lt;a href="https://console.cloud.google.com/apis/credentials/consent" rel="noopener noreferrer"&gt;Google Cloud Console&lt;/a&gt;. This ensures the refresh token lasts indefinitely, making the DR system truly "set-and-forget."&lt;/p&gt;
&lt;h3&gt;Step 4: The "Recovery Engine" (The Script)&lt;/h3&gt;

&lt;p&gt;We wrote a script that automates the daily protection cycle. It doesn't just copy files; it ensures data integrity by creating compressed, timestamped snapshots of the entire database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example logic for the DR engine:&lt;/span&gt;
&lt;span class="c"&gt;# 1. Snapshot the data for integrity&lt;/span&gt;
pg_dump &lt;span class="nt"&gt;-h&lt;/span&gt; db_host &lt;span class="nt"&gt;-U&lt;/span&gt; user &lt;span class="nt"&gt;-d&lt;/span&gt; database &lt;span class="nt"&gt;--file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backup.dump

&lt;span class="c"&gt;# 2. Compress for faster off-site transfer&lt;/span&gt;
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-czf&lt;/span&gt; backup.tar.gz backup.dump

&lt;span class="c"&gt;# 3. Securely transfer to the off-site DR folder&lt;/span&gt;
rclone &lt;span class="nt"&gt;--config&lt;/span&gt; /config/rclone.conf copy backup.tar.gz gdrive:nordible/db-backups
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;The "Bonus" Feature: Automatic Retention&lt;/h3&gt;

&lt;p&gt;A good DR system stays lean. As a bonus, we added a cleanup rule that automatically deletes snapshots older than 5 days. This keeps the recovery site organized and ensures the client never hits a storage limit during a crisis.&lt;/p&gt;
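&lt;p&gt;A minimal sketch of that retention rule applied to a local staging folder; on the remote side the same policy can be expressed with rclone's age filter, e.g. &lt;code&gt;rclone delete gdrive:nordible/db-backups --min-age 5d&lt;/code&gt;, which selects files &lt;em&gt;older&lt;/em&gt; than five days:&lt;/p&gt;

```shell
# Hedged sketch: delete local snapshots older than a retention window.
# The directory layout follows the /backups folder from the Dockerfile above.
prune_old_backups() {
  dir="$1"          # directory holding *.tar.gz snapshots
  days="${2:-5}"    # retention window in days (default 5)
  find "$dir" -name '*.tar.gz' -type f -mtime "+$days" -delete
}
```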

&lt;h3&gt;The Result: Business Resilience&lt;/h3&gt;

&lt;p&gt;By combining these free tools, we achieved a professional-grade Disaster Recovery system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; $0/month.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Off-site Redundancy:&lt;/strong&gt; Fully Automated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resilience:&lt;/strong&gt; If the server goes down, the data is safe in the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, our client can sleep soundly knowing that even if disaster strikes, their business data is just a few clicks away from a full recovery.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshy40qqctmc69j21ny2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshy40qqctmc69j21ny2f.png" alt="Google Drive Backups" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How I Slashed API Latency by 80% (350ms -&gt; 60ms) by Ditching Serverless</title>
      <dc:creator>Kabeer N Shah</dc:creator>
      <pubDate>Sun, 25 Jan 2026 18:14:53 +0000</pubDate>
      <link>https://dev.to/kabeer_nshah_8bdaa6e7fc8/how-i-slashed-api-latency-by-80-350ms-60ms-by-ditching-serverless-28p4</link>
      <guid>https://dev.to/kabeer_nshah_8bdaa6e7fc8/how-i-slashed-api-latency-by-80-350ms-60ms-by-ditching-serverless-28p4</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/kabeer_nshah_8bdaa6e7fc8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3731690%2Fb01d5cc8-6597-497d-ba31-8bb939fe32a3.png" alt="kabeer_nshah_8bdaa6e7fc8"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kabeer_nshah_8bdaa6e7fc8/300ms-to-60ms-how-i-slashed-api-latency-by-80-with-one-config-change-3153" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;300ms to 60ms: How I Slashed API Latency by 80% with One Config Change&lt;/h2&gt;
      &lt;h3&gt;Kabeer N Shah ・ Jan 25&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#devops&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#software&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#performance&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>devops</category>
      <category>software</category>
      <category>docker</category>
      <category>performance</category>
    </item>
    <item>
      <title>300ms to 60ms: How I Slashed API Latency by 80% with One Config Change</title>
      <dc:creator>Kabeer N Shah</dc:creator>
      <pubDate>Sun, 25 Jan 2026 17:09:14 +0000</pubDate>
      <link>https://dev.to/kabeer_nshah_8bdaa6e7fc8/300ms-to-60ms-how-i-slashed-api-latency-by-80-with-one-config-change-3153</link>
      <guid>https://dev.to/kabeer_nshah_8bdaa6e7fc8/300ms-to-60ms-how-i-slashed-api-latency-by-80-with-one-config-change-3153</guid>
      <description>&lt;p&gt;When migrating an application from a serverless environment to a dedicated cloud VPS, you expect a performance boost. After all, you're moving from transient functions to a persistent server. &lt;/p&gt;

&lt;p&gt;But when I finished my migration recently, my health checks told a different story. &lt;/p&gt;

&lt;p&gt;Despite my application server and managed database being hosted in neighboring regions, the database latency was consistently hovering around &lt;strong&gt;300ms - 350ms&lt;/strong&gt;. For a simple &lt;code&gt;SELECT 1&lt;/code&gt; query, that is an eternity.&lt;/p&gt;

&lt;p&gt;Here is how I diagnosed the "Serverless Hangover" and reclaimed 80% of my performance.&lt;/p&gt;

&lt;h2&gt;The Investigation&lt;/h2&gt;

&lt;p&gt;Physics told me the connection should be fast. A standard network ping between the two servers showed a round-trip time of about &lt;strong&gt;60ms&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;So, why was my application reporting &lt;strong&gt;350ms&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;I looked at the usual suspects:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Resource Constraints:&lt;/strong&gt; CPU and RAM usage were minimal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Logic:&lt;/strong&gt; The health check endpoint was as lean as possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Congestion:&lt;/strong&gt; Consistent results across different times of day ruled this out.&lt;/li&gt;
&lt;/ol&gt;
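&lt;p&gt;A quick way to separate network latency from application-path latency (the hostname here is a placeholder):&lt;/p&gt;

```shell
# Raw network round-trip to the database host: the physical floor (~60ms here)
ping -c 5 db.example.com

# End-to-end latency of a trivial query through the full connection path
time psql "$DATABASE_URL" -c 'SELECT 1;'
```

&lt;p&gt;If ping sits near 60ms but the query takes several times longer, the overhead lives above the network layer.&lt;/p&gt;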

&lt;p&gt;The culprit was deeper: it was an architectural "best practice" that had become a bottleneck in a new context.&lt;/p&gt;

&lt;h2&gt;The Root Cause: The "Double Pooling" Trap&lt;/h2&gt;

&lt;p&gt;In a &lt;strong&gt;Serverless&lt;/strong&gt; architecture, database connection management is a major challenge. Because functions spin up and down instantly, they can easily exhaust a database's connection limit. The standard solution is to use an &lt;strong&gt;external transaction pooler&lt;/strong&gt;. This middleware sits between your functions and your database, managing a pool of persistent connections.&lt;/p&gt;

&lt;p&gt;When I moved to a &lt;strong&gt;Persistent VPS (Docker)&lt;/strong&gt;, I kept using that same external pooler. &lt;/p&gt;

&lt;p&gt;However, my new environment was fundamentally different. Unlike serverless functions, my Docker container stays alive. It uses an ORM with its own built-in, highly efficient connection pooling.&lt;/p&gt;

&lt;p&gt;By pointing my persistent server to the external transaction pooler, I was effectively &lt;strong&gt;double-pooling&lt;/strong&gt;. Every single query was forced through an extra middleware layer, incurring unnecessary SSL handshake negotiations and processing overhead, instead of holding a direct, persistent connection open.&lt;/p&gt;

&lt;h2&gt;The Fix&lt;/h2&gt;

&lt;p&gt;The solution was deceptively simple: &lt;strong&gt;Switch to the Direct Connection.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I updated my database connection string to bypass the external pooler and connect directly to the database server's standard port.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Before:&lt;/strong&gt; App -&amp;gt; ORM Pool -&amp;gt; External Pooler (SSL Negotiation) -&amp;gt; Database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;After:&lt;/strong&gt; App -&amp;gt; ORM Pool (Persistent Connection) -&amp;gt; Database&lt;/li&gt;
&lt;/ul&gt;
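&lt;p&gt;In connection-string terms the change looks something like this — the hosts, credentials, and pooler port are hypothetical; poolers typically listen on a separate port from Postgres's standard 5432:&lt;/p&gt;

```shell
# Before: every query routed through the external transaction pooler
DATABASE_URL="postgresql://app_user:secret@pooler.example.com:6543/appdb?sslmode=require"

# After: direct, persistent connection to the database itself
DATABASE_URL="postgresql://app_user:secret@db.example.com:5432/appdb?sslmode=require"
```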

&lt;h2&gt;The Result&lt;/h2&gt;

&lt;p&gt;The moment the change was deployed, the latency dropped from &lt;strong&gt;350ms&lt;/strong&gt; to &lt;strong&gt;60ms&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;By removing one unnecessary layer of "best practice" that no longer applied to my architecture, I achieved an &lt;strong&gt;80% reduction in latency&lt;/strong&gt; and a significantly snappier user experience.&lt;/p&gt;

&lt;h2&gt;The Lesson Learned&lt;/h2&gt;

&lt;p&gt;"Best practices" are not universal truths; they are context-dependent solutions. &lt;/p&gt;

&lt;p&gt;What is a life-saver in a Serverless environment can be a performance killer in a Containerized one. Always audit your configuration and middleware when changing your underlying deployment architecture.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm currently building &lt;strong&gt;HabitBuilder&lt;/strong&gt;, a privacy-first habit tracker that focuses on flexible consistency rather than rigid streaks. Check it out at &lt;a href="https://habits.planmydaily.com" rel="noopener noreferrer"&gt;habits.planmydaily.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>software</category>
      <category>docker</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
