<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vishal Pandey</title>
    <description>The latest articles on DEV Community by Vishal Pandey (@0xv1shal).</description>
    <link>https://dev.to/0xv1shal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3627513%2Fa419cae9-0f21-48fe-ab29-366ed47a4488.png</url>
      <title>DEV Community: Vishal Pandey</title>
      <link>https://dev.to/0xv1shal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/0xv1shal"/>
    <language>en</language>
    <item>
      <title>Introducing limitngin — A Lightweight, Memory-Efficient &amp; Scalable Rate Limiter for Express</title>
      <dc:creator>Vishal Pandey</dc:creator>
      <pubDate>Tue, 03 Mar 2026 01:52:00 +0000</pubDate>
      <link>https://dev.to/0xv1shal/introducing-limitngin-a-lightweight-memory-efficient-scalable-rate-limiter-for-express-on</link>
      <guid>https://dev.to/0xv1shal/introducing-limitngin-a-lightweight-memory-efficient-scalable-rate-limiter-for-express-on</guid>
      <description>&lt;p&gt;I recently built and open-sourced &lt;strong&gt;limitngin&lt;/strong&gt;, a lightweight, zero-dependency, ESM-only rate limiter middleware for Express.&lt;/p&gt;

&lt;p&gt;The goal was simple:&lt;/p&gt;

&lt;p&gt;Build something that works efficiently for both small projects and larger systems, without forcing external infrastructure from day one.&lt;/p&gt;




&lt;h2&gt;
  
  
  Designed for Small Projects — Without Redis
&lt;/h2&gt;

&lt;p&gt;For projects like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Personal projects&lt;/li&gt;
&lt;li&gt;MVPs&lt;/li&gt;
&lt;li&gt;Internal tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You often don’t want to set up Redis just to enable basic rate limiting.&lt;/p&gt;

&lt;p&gt;limitngin uses an optimized in-memory store that is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Map-based for better memory behavior&lt;/li&gt;
&lt;li&gt;Automatically cleaned up&lt;/li&gt;
&lt;li&gt;Stable under high key churn&lt;/li&gt;
&lt;li&gt;Free of external dependencies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Drop it in and it works.&lt;/p&gt;
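&lt;p&gt;As an illustration of what a Map-based store with automatic cleanup can look like (a sketch under my own assumptions, not limitngin&#8217;s actual code; all names are hypothetical):&lt;/p&gt;

```javascript
// Illustrative Map-based store with periodic cleanup of expired entries.
// Names and structure are hypothetical; limitngin's internals may differ.
function createMemoryStore({ ttlMs, sweepIntervalMs = 60_000 }) {
  const entries = new Map(); // key -> { value, expiresAt }

  // Periodic sweep keeps memory stable under high key churn.
  const timer = setInterval(() => {
    const now = Date.now();
    for (const [key, entry] of entries) {
      if (now >= entry.expiresAt) entries.delete(key);
    }
  }, sweepIntervalMs);
  timer.unref?.(); // don't keep the process alive just for the sweeper

  return {
    get(key) {
      const entry = entries.get(key);
      if (!entry || Date.now() >= entry.expiresAt) {
        entries.delete(key);
        return undefined;
      }
      return entry.value;
    },
    set(key, value) {
      entries.set(key, { value, expiresAt: Date.now() + ttlMs });
    },
    size() {
      return entries.size;
    },
  };
}
```

&lt;p&gt;A Map handles frequent insert-and-delete churn more gracefully than a plain object, which is the same Record-to-Map trade-off discussed in the testing section below.&lt;/p&gt;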




&lt;h2&gt;
  
  
  Built With Scale in Mind
&lt;/h2&gt;

&lt;p&gt;Even though it starts in-memory, the internal architecture is designed for extensibility.&lt;/p&gt;

&lt;p&gt;It already supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sliding Window Counter (default)&lt;/li&gt;
&lt;li&gt;Token Bucket&lt;/li&gt;
&lt;li&gt;IP-based limiting&lt;/li&gt;
&lt;li&gt;Auth-token–based limiting&lt;/li&gt;
&lt;li&gt;Standard &lt;code&gt;RateLimit-*&lt;/code&gt; + &lt;code&gt;Retry-After&lt;/code&gt; headers&lt;/li&gt;
&lt;li&gt;Type-safe configuration using discriminated unions&lt;/li&gt;
&lt;/ul&gt;
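&lt;p&gt;To make the default algorithm concrete, here is a minimal sliding-window-counter sketch in plain JavaScript. It illustrates the technique only; it is not limitngin&#8217;s source, and every name in it is hypothetical.&lt;/p&gt;

```javascript
// Minimal sliding-window counter (illustrative, not limitngin's source).
// A weighted blend of the previous and current fixed windows approximates
// a true sliding window with O(1) memory per key.
function makeSlidingWindowLimiter({ limit, windowMs }) {
  const buckets = new Map(); // key -> { windowStart, prevCount, currCount }

  return function isAllowed(key, now = Date.now()) {
    const windowStart = Math.floor(now / windowMs) * windowMs;
    let b = buckets.get(key);

    if (!b || now - b.windowStart >= 2 * windowMs) {
      // New key, or so old that both windows are empty.
      b = { windowStart, prevCount: 0, currCount: 0 };
      buckets.set(key, b);
    } else if (b.windowStart !== windowStart) {
      // Window rolled over: current becomes previous.
      b.prevCount = b.currCount;
      b.currCount = 0;
      b.windowStart = windowStart;
    }

    // Fraction of the previous window still inside the sliding window.
    const prevWeight = 1 - (now - windowStart) / windowMs;
    const estimated = b.prevCount * prevWeight + b.currCount;

    if (estimated >= limit) return false;
    b.currCount++;
    return true;
  };
}
```

&lt;p&gt;The weighted blend avoids the burst-at-the-boundary problem of a plain fixed window while storing only two counters per key.&lt;/p&gt;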

&lt;p&gt;Redis and other external storage adapters are planned for future releases.&lt;/p&gt;

&lt;p&gt;So you can start simple — and scale later.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stress &amp;amp; Memory Testing
&lt;/h2&gt;

&lt;p&gt;Before publishing, I ran controlled stress tests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;~500,000 unique key simulations&lt;/li&gt;
&lt;li&gt;Continuous request churn&lt;/li&gt;
&lt;li&gt;Heap inspection under sustained load&lt;/li&gt;
&lt;li&gt;Manual GC checks using &lt;code&gt;node --expose-gc&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
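&lt;p&gt;For anyone wanting to reproduce this kind of check, here is the rough shape of such a heap test (a sketch of the methodology, not the actual test harness):&lt;/p&gt;

```javascript
// Rough shape of a churn-plus-GC heap check (illustrative only).
// Run with: node --expose-gc heap-check.js
const store = new Map();

// Simulate high key churn: insert many unique keys, expiring half of them.
for (let i = 500_000; i > 0; i--) {
  store.set(`key-${i}`, { count: 1, resetAt: Date.now() });
  if (i % 2 === 0) store.delete(`key-${i}`);
}

const before = process.memoryUsage().heapUsed;
store.clear();
if (global.gc) global.gc(); // global.gc is only defined with --expose-gc
const after = process.memoryUsage().heapUsed;

console.log(`heap before clear: ${(before / 1e6).toFixed(1)} MB`);
console.log(`heap after GC:     ${(after / 1e6).toFixed(1)} MB`);
```

&lt;p&gt;Comparing the heap before and after a forced GC is what shows whether inactive keys are actually being reclaimed.&lt;/p&gt;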

&lt;p&gt;Results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stable memory growth curve&lt;/li&gt;
&lt;li&gt;No memory leaks&lt;/li&gt;
&lt;li&gt;Proper cleanup of inactive keys&lt;/li&gt;
&lt;li&gt;~10MB reduction in peak memory after migrating from Record → Map&lt;/li&gt;
&lt;li&gt;Heap returning close to baseline after GC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The focus was on making behavior under pressure predictable.&lt;/p&gt;




&lt;p&gt;If you’re building with Express and want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A clean API&lt;/li&gt;
&lt;li&gt;Zero dependencies&lt;/li&gt;
&lt;li&gt;Algorithm flexibility&lt;/li&gt;
&lt;li&gt;Efficient in-memory performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/limitngin" rel="noopener noreferrer"&gt;limitngin&lt;/a&gt; is now available.&lt;/p&gt;

&lt;p&gt;Open to discussions, feedback, and contributions.&lt;/p&gt;

</description>
      <category>node</category>
      <category>express</category>
      <category>opensource</category>
      <category>typescript</category>
    </item>
    <item>
      <title>How I Took a Slow Node.js API from 5.7 sec to 11 ms Using Real Load Tests (Beginner-Friendly)</title>
      <dc:creator>Vishal Pandey</dc:creator>
      <pubDate>Mon, 24 Nov 2025 14:46:21 +0000</pubDate>
      <link>https://dev.to/0xv1shal/how-i-took-a-slow-nodejs-api-from-57-sec-11ms-using-real-load-tests-beginner-friendly-2bm2</link>
      <guid>https://dev.to/0xv1shal/how-i-took-a-slow-nodejs-api-from-57-sec-11ms-using-real-load-tests-beginner-friendly-2bm2</guid>
      <description>&lt;p&gt;Most tutorials show you “fast Node.js code,” but they never show you what happens when you actually measure performance under load.&lt;/p&gt;

&lt;p&gt;I wanted to learn real backend optimization, so I built a small URL redirect API:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;GET /:slug  →  returns original URL&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Simple idea.&lt;br&gt;
But the first time I load-tested it, the API took 5.73 seconds to respond.&lt;/p&gt;

&lt;p&gt;This article is a beginner → mid-level friendly breakdown of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;what slowed my API&lt;/li&gt;
&lt;li&gt;how I optimized it &lt;/li&gt;
&lt;li&gt;what actually improved performance &lt;/li&gt;
&lt;li&gt;real k6 load test results at every step&lt;/li&gt;
&lt;li&gt;and what beginners usually misunderstand about backend latency&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;🖥️ Environment Note&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;All benchmarks, optimizations, and load tests in this article were performed on a local machine, not a server or cloud instance.&lt;br&gt;
Numbers will vary depending on hardware — but the optimization principles remain the same.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;code&gt;Stage 1 — Baseline Test (No Index, Direct MySQL Connection)&lt;/code&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Avg latency: 5.73 seconds&lt;br&gt;
Requests: ~68 RPS&lt;/p&gt;

&lt;p&gt;This was the first k6 run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvuetl7kx34ztrfftevic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvuetl7kx34ztrfftevic.png" alt="before-slug-index" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screenshot above shows the baseline numbers.&lt;/p&gt;

&lt;p&gt;Why was it this slow?&lt;br&gt;
MySQL was doing a full table scan for every slug.&lt;br&gt;
No index → MySQL compares the slug against every row → the worst possible plan.&lt;br&gt;
At high concurrency, MySQL was simply overwhelmed.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;code&gt;Stage 2 — Added a Slug Index&lt;/code&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Avg latency: 27.1 ms&lt;br&gt;
Requests: ~18k RPS&lt;/p&gt;

&lt;p&gt;Index command: &lt;code&gt;CREATE INDEX idx_slug ON urls(slug);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj336m7y4qh1pqrofnvdy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj336m7y4qh1pqrofnvdy.png" alt="after-slug-index" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After adding the index:&lt;br&gt;
This alone brought latency from 5.73 seconds → 27 ms.&lt;br&gt;
For a read-heavy API, this is often the single biggest improvement you can make.&lt;br&gt;
Beginners underestimate how critical indexing is:&lt;br&gt;
no amount of Node.js code optimization can beat a proper DB index.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;code&gt;Stage 3 — Switched from mysql.createConnection to MySQL2 Connection Pool&lt;/code&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Avg latency: ~18 ms&lt;/p&gt;

&lt;p&gt;Before this, every request created a new MySQL connection, which is expensive under load. So I switched to a pool:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import mysql2 from "mysql2/promise";

// host, user, password, database are defined elsewhere (e.g. from env)
export const db = mysql2.createPool({
  host,
  user,
  password,
  database,
  connectionLimit: 10,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After switching to pooling:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed53spcu2z8bzngdans9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed53spcu2z8bzngdans9.png" alt="after-pooling" width="800" height="624"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why this helps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reuses open connections&lt;/li&gt;
&lt;li&gt;Avoids handshake overhead&lt;/li&gt;
&lt;li&gt;Much better concurrency&lt;/li&gt;
&lt;li&gt;MySQL stays stable under load&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're using MySQL + Node.js, connection pools are mandatory.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;code&gt;Stage 4 — Added Redis Cache (Final Optimization)&lt;/code&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Avg latency: 11.75 ms&lt;br&gt;
Requests: ~42k RPS&lt;/p&gt;

&lt;p&gt;Redis turns the slug lookup into an O(1) in-memory read. Updated handler:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;app.get('/:slug', async (req, res) =&amp;gt; {
  const slug = req.params.slug;

  const cached = await redis.get(slug);
  if (cached) return res.redirect(cached);

  const data = await db.query("SELECT url FROM urls WHERE slug = ?", [slug]);
  if (!data[0].length) return res.status(404).send("Not found");

  await redis.set(slug, data[0][0].url);
  return res.redirect(data[0][0].url);
});
&lt;/code&gt;&lt;/pre&gt;
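&lt;p&gt;One caveat worth adding (my observation, not part of the original handler): the cached entries above never expire, so stale URLs live forever and cache memory grows with every unique slug. Here is a hedged sketch of the same cache-aside pattern with a TTL, using a plain in-memory stub in place of a real Redis client; every name in it is hypothetical. With node-redis v4, a real TTL would be passed as an option to set().&lt;/p&gt;

```javascript
// Cache-aside lookup with a TTL. The "cache" here is an in-memory stub
// standing in for Redis so the sketch is self-contained.
function makeStubCache() {
  const data = new Map(); // key -> { value, expiresAt }
  return {
    async get(key) {
      const entry = data.get(key);
      if (!entry || Date.now() >= entry.expiresAt) return null;
      return entry.value;
    },
    async set(key, value, ttlSeconds) {
      data.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
    },
  };
}

// The cache-aside core of the handler: try the cache, fall back to the
// database (passed in as an async function), then populate the cache.
async function lookupUrl(cache, dbLookup, slug) {
  const cached = await cache.get(slug);
  if (cached) return cached;

  const url = await dbLookup(slug); // e.g. the MySQL query from above
  if (!url) return null;

  await cache.set(slug, url, 3600); // expire after an hour
  return url;
}
```

&lt;p&gt;Separating the lookup from the Express handler also makes it easy to unit-test the caching behavior without a running server.&lt;/p&gt;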

&lt;p&gt;k6 results after Redis:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno94v4x5zd3dpk0osvlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno94v4x5zd3dpk0osvlc.png" alt="after-redis" width="800" height="516"&gt;&lt;/a&gt;&lt;br&gt;
From 5.7 seconds → 11 ms.&lt;br&gt;
This is a ~520x improvement, and nothing about the API changed except internal optimizations.&lt;/p&gt;

&lt;p&gt;🔗 GitHub Repository (Full Source Code)&lt;/p&gt;

&lt;p&gt;You can check out the entire project here:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/Termux-Dark-Dev/url-shortner-practice.git" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🙏 Thanks for Reading&lt;br&gt;
If you have questions, want to discuss backend engineering, or just want to say hi —&lt;br&gt;
I’m always open to messages.&lt;/p&gt;

</description>
      <category>node</category>
      <category>learning</category>
      <category>javascript</category>
      <category>redis</category>
    </item>
  </channel>
</rss>
