<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kabeer N Shah</title>
    <description>The latest articles on DEV Community by Kabeer N Shah (@kabeer_nshah_8bdaa6e7fc8).</description>
    <link>https://dev.to/kabeer_nshah_8bdaa6e7fc8</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3731690%2Fb01d5cc8-6597-497d-ba31-8bb939fe32a3.png</url>
      <title>DEV Community: Kabeer N Shah</title>
      <link>https://dev.to/kabeer_nshah_8bdaa6e7fc8</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kabeer_nshah_8bdaa6e7fc8"/>
    <language>en</language>
    <item>
      <title>How I Slashed API Latency by 80% (350ms -&gt; 60ms) by Ditching Serverless</title>
      <dc:creator>Kabeer N Shah</dc:creator>
      <pubDate>Sun, 25 Jan 2026 18:14:53 +0000</pubDate>
      <link>https://dev.to/kabeer_nshah_8bdaa6e7fc8/how-i-slashed-api-latency-by-80-350ms-60ms-by-ditching-serverless-28p4</link>
      <guid>https://dev.to/kabeer_nshah_8bdaa6e7fc8/how-i-slashed-api-latency-by-80-350ms-60ms-by-ditching-serverless-28p4</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/kabeer_nshah_8bdaa6e7fc8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3731690%2Fb01d5cc8-6597-497d-ba31-8bb939fe32a3.png" alt="kabeer_nshah_8bdaa6e7fc8"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kabeer_nshah_8bdaa6e7fc8/300ms-to-60ms-how-i-slashed-api-latency-by-80-with-one-config-change-3153" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;300ms to 60ms: How I Slashed API Latency by 80% with One Config Change&lt;/h2&gt;
      &lt;h3&gt;Kabeer N Shah ・ Jan 25&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#devops&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#software&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#performance&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>devops</category>
      <category>software</category>
      <category>docker</category>
      <category>performance</category>
    </item>
    <item>
      <title>300ms to 60ms: How I Slashed API Latency by 80% with One Config Change</title>
      <dc:creator>Kabeer N Shah</dc:creator>
      <pubDate>Sun, 25 Jan 2026 17:09:14 +0000</pubDate>
      <link>https://dev.to/kabeer_nshah_8bdaa6e7fc8/300ms-to-60ms-how-i-slashed-api-latency-by-80-with-one-config-change-3153</link>
      <guid>https://dev.to/kabeer_nshah_8bdaa6e7fc8/300ms-to-60ms-how-i-slashed-api-latency-by-80-with-one-config-change-3153</guid>
      <description>&lt;p&gt;When migrating an application from a serverless environment to a dedicated cloud VPS, you expect a performance boost. After all, you're moving from transient functions to a persistent server. &lt;/p&gt;

&lt;p&gt;But when I finished my migration recently, my health checks told a different story. &lt;/p&gt;

&lt;p&gt;Despite my application server and managed database being hosted in neighboring regions, the database latency was consistently hovering around &lt;strong&gt;300ms - 350ms&lt;/strong&gt;. For a simple &lt;code&gt;SELECT 1&lt;/code&gt; query, that is an eternity.&lt;/p&gt;

&lt;p&gt;Here is how I diagnosed the "Serverless Hangover" and reclaimed 80% of my performance.&lt;/p&gt;

&lt;h2&gt;The Investigation&lt;/h2&gt;

&lt;p&gt;Physics told me the connection should be fast. A standard network ping between the two servers showed a round-trip time of about &lt;strong&gt;60ms&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;So, why was my application reporting &lt;strong&gt;350ms&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;I looked at the usual suspects:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Resource Constraints:&lt;/strong&gt; CPU and RAM usage were minimal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Logic:&lt;/strong&gt; The health check endpoint was as lean as possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Congestion:&lt;/strong&gt; Consistent results across different times of day ruled this out.&lt;/li&gt;
&lt;/ol&gt;
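&lt;p&gt;A quick way to put a floor on expected latency is to time the raw TCP handshake to the database port and compare it with the application's reported query time. A minimal sketch (the hostname and port in the example call are placeholders, not my actual setup):&lt;/p&gt;

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 3) -> float:
    """Estimate network round-trip time by timing TCP handshakes."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # A successful connect() costs roughly one network round trip.
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)  # the best case approximates the raw RTT

# e.g. tcp_rtt_ms("db.example.com", 5432)
```

&lt;p&gt;If the handshake comes back at ~60ms while a &lt;code&gt;SELECT 1&lt;/code&gt; takes 350ms, the extra time is being spent above the network layer.&lt;/p&gt;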

&lt;p&gt;The culprit was deeper: it was an architectural "best practice" that had become a bottleneck in a new context.&lt;/p&gt;

&lt;h2&gt;The Root Cause: The "Double Pooling" Trap&lt;/h2&gt;

&lt;p&gt;In a &lt;strong&gt;Serverless&lt;/strong&gt; architecture, database connection management is a major challenge. Because functions spin up and down instantly, they can easily exhaust a database's connection limit. The standard solution is to use an &lt;strong&gt;external transaction pooler&lt;/strong&gt;. This middleware sits between your functions and your database, managing a pool of persistent connections.&lt;/p&gt;

&lt;p&gt;When I moved to a &lt;strong&gt;Persistent VPS (Docker)&lt;/strong&gt;, I kept using that same external pooler. &lt;/p&gt;

&lt;p&gt;However, my new environment was fundamentally different. Unlike serverless functions, my Docker container stays alive. It uses an ORM with its own built-in, highly efficient connection pooling.&lt;/p&gt;

&lt;p&gt;By pointing my persistent server to the external transaction pooler, I was effectively &lt;strong&gt;double-pooling&lt;/strong&gt;. Every single query was forced through an extra middleware layer, incurring unnecessary SSL handshake negotiations and processing overhead, instead of holding a direct, persistent connection open.&lt;/p&gt;
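&lt;p&gt;An ORM's built-in pool is conceptually just a set of connections opened once and reused. A stdlib-only toy sketch of the idea (not the ORM's actual implementation; &lt;code&gt;sqlite3&lt;/code&gt; stands in for a real database driver):&lt;/p&gt;

```python
import queue
import sqlite3

class ConnectionPool:
    """Toy illustration of what an ORM's built-in pool does:
    open connections once, then hand them out and take them back."""

    def __init__(self, factory, size: int = 5):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            self._idle.put(factory())  # pay the connect cost up front

    def acquire(self):
        # Blocks until a connection is free; no new handshake per query.
        return self._idle.get()

    def release(self, conn):
        self._idle.put(conn)

pool = ConnectionPool(lambda: sqlite3.connect(":memory:"), size=2)
conn = pool.acquire()
assert conn.execute("SELECT 1").fetchone() == (1,)
pool.release(conn)
```

&lt;p&gt;Routing a pool like this through a second, external pooler adds a network hop and handshake on top of every checkout &amp;mdash; that is the double-pooling tax.&lt;/p&gt;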

&lt;h2&gt;The Fix&lt;/h2&gt;

&lt;p&gt;The solution was deceptively simple: &lt;strong&gt;Switch to the Direct Connection.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I updated my database connection string to bypass the external pooler and connect directly to the database server's standard port.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Before:&lt;/strong&gt; App -&amp;gt; ORM Pool -&amp;gt; External Pooler (SSL Negotiation) -&amp;gt; Database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;After:&lt;/strong&gt; App -&amp;gt; ORM Pool (Persistent Connection) -&amp;gt; Database&lt;/li&gt;
&lt;/ul&gt;
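&lt;p&gt;In practice, the change amounts to a single connection-string edit: point at the database's own port instead of the pooler's. A sketch of that rewrite (the hostname, credentials, and ports below are illustrative &amp;mdash; e.g. 6543 for a pooler, 5432 for direct Postgres &amp;mdash; not my actual configuration):&lt;/p&gt;

```python
from urllib.parse import urlsplit, urlunsplit

def to_direct_connection(url: str, direct_port: int = 5432) -> str:
    """Swap a pooler port for the database's direct port, keeping
    scheme, credentials, host, and database name unchanged."""
    parts = urlsplit(url)
    userinfo = ""
    if parts.username:
        userinfo = parts.username
        if parts.password:
            userinfo += f":{parts.password}"
        userinfo += "@"
    netloc = f"{userinfo}{parts.hostname}:{direct_port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

# Before: routed through the external pooler
pooled = "postgresql://app:secret@db.example.com:6543/appdb"
# After: a direct, persistent connection
direct = to_direct_connection(pooled)
```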

&lt;h2&gt;The Result&lt;/h2&gt;

&lt;p&gt;The moment the change was deployed, the latency dropped from &lt;strong&gt;350ms&lt;/strong&gt; to &lt;strong&gt;60ms&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;By removing one unnecessary layer of "best practice" that no longer applied to my architecture, I achieved an &lt;strong&gt;80% reduction in latency&lt;/strong&gt; and a significantly snappier user experience.&lt;/p&gt;

&lt;h2&gt;The Lesson Learned&lt;/h2&gt;

&lt;p&gt;"Best practices" are not universal truths; they are context-dependent solutions. &lt;/p&gt;

&lt;p&gt;What is a life-saver in a Serverless environment can be a performance killer in a Containerized one. Always audit your configuration and middleware when changing your underlying deployment architecture.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm currently building &lt;strong&gt;HabitBuilder&lt;/strong&gt;, a privacy-first habit tracker that focuses on flexible consistency rather than rigid streaks. Check it out at &lt;a href="https://habits.planmydaily.com" rel="noopener noreferrer"&gt;habits.planmydaily.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>software</category>
      <category>docker</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
