<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: nicomedina</title>
    <description>The latest articles on DEV Community by nicomedina (@miltivik).</description>
    <link>https://dev.to/miltivik</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3705816%2Fc03d62ed-9e5d-4b09-a5c7-ce7a96e7fad4.jpg</url>
      <title>DEV Community: nicomedina</title>
      <link>https://dev.to/miltivik</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/miltivik"/>
    <language>en</language>
    <item>
      <title>How I built a high-performance Social API with Bun &amp; ElysiaJS on a $5 VPS (handling 3.6k reqs/min)</title>
      <dc:creator>nicomedina</dc:creator>
      <pubDate>Tue, 13 Jan 2026 05:08:47 +0000</pubDate>
      <link>https://dev.to/miltivik/how-i-built-a-high-performance-social-api-with-bun-elysiajs-on-a-5-vps-handling-36k-reqsmin-5do4</link>
      <guid>https://dev.to/miltivik/how-i-built-a-high-performance-social-api-with-bun-elysiajs-on-a-5-vps-handling-36k-reqsmin-5do4</guid>
<description>&lt;h1&gt;The Goal&lt;/h1&gt;

&lt;p&gt;I wanted to build a "Micro-Social" API—a backend service capable of handling Twitter-like feeds, follows, and likes—without breaking the bank. My constraints were simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Budget:&lt;/strong&gt; $5 - $20 / month.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt; Sub-300ms latency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale:&lt;/strong&gt; Must handle concurrent load (stress testing).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most tutorials show you &lt;code&gt;Hello World&lt;/code&gt;. This post shows you what happens when you actually hit &lt;code&gt;Hello World&lt;/code&gt; with 25 concurrent users on a cheap VPS. (Spoiler: it crashes.)&lt;br&gt;
Here is how I fixed it.&lt;/p&gt;

&lt;h2&gt;The Stack 🛠️&lt;/h2&gt;

&lt;p&gt;I chose &lt;strong&gt;Bun&lt;/strong&gt; over Node.js for its startup speed and built-in tooling.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runtime: &lt;a href="https://bun.sh" rel="noopener noreferrer"&gt;Bun&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Framework: &lt;a href="https://elysiajs.com" rel="noopener noreferrer"&gt;ElysiaJS&lt;/a&gt; (Fastest Bun framework)&lt;/li&gt;
&lt;li&gt;Database: PostgreSQL (via Dokploy)&lt;/li&gt;
&lt;li&gt;ORM: Drizzle (Lightweight &amp;amp; Type-safe)&lt;/li&gt;
&lt;li&gt;Hosting: VPS with Dokploy (Docker Compose)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The "Oh Sh*t" Moment 🚨&lt;/h2&gt;

&lt;p&gt;I deployed my first version. It worked fine for me.&lt;br&gt;
Then I ran a load test using &lt;strong&gt;k6&lt;/strong&gt; to simulate 25 virtual users browsing various feeds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k6 run tests/stress-test.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
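The post shows only the command, not `tests/stress-test.js` itself. A minimal k6 script along these lines would reproduce the setup described (25 virtual users; the base URL and endpoint path are placeholders, not the real API):

```javascript
// Hypothetical reconstruction of tests/stress-test.js -- the post only shows
// the `k6 run` command. BASE_URL and /feed are illustrative placeholders.
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  vus: 25,        // 25 concurrent virtual users, as in the post
  duration: "1m",
};

const BASE_URL = "https://api.example.com";

export default function () {
  const res = http.get(`${BASE_URL}/feed`);
  check(res, {
    "status is 200": (r) => r.status === 200,
  });
  sleep(1); // think time between iterations
}
```

This runs under the k6 binary (`k6 run tests/stress-test.js`), not under Node or Bun.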



&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;✗ http_req_failed................: 86.44%&lt;br&gt;
✗ status is 429..................: 86.44%&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The server wasn't crashing, but it was rejecting almost everyone.&lt;/p&gt;

&lt;h3&gt;Diagnosis&lt;/h3&gt;

&lt;p&gt;I initially blamed Traefik (the reverse proxy). But digging into the code, I found the culprit was &lt;em&gt;me&lt;/em&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// src/index.ts&lt;/span&gt;
&lt;span class="c1"&gt;// OLD CONFIGURATION&lt;/span&gt;
&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;rateLimit&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="c1"&gt;// 💀 100 requests per minute... GLOBAL per IP?&lt;/span&gt;
&lt;span class="p"&gt;}))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since my stress test (and likely any future NATed corporate office) sent all requests from a single IP, I was essentially DDoSing myself.&lt;/p&gt;
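To see why the numbers collapse, here is a toy model of a fixed-window, per-IP limiter (my assumption about the plugin's behavior, not its actual source):

```typescript
// Toy model of a fixed-window, per-IP rate limiter (an assumption for
// illustration, not the real plugin internals). Every IP gets `max`
// requests per `durationMs` window; everything past that gets a 429.
class FixedWindowLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();
  constructor(private max: number, private durationMs: number) {}

  // Returns true if the request is allowed, false if it should get a 429.
  allow(ip: string, now: number): boolean {
    const entry = this.counts.get(ip);
    if (!entry || now - entry.windowStart >= this.durationMs) {
      this.counts.set(ip, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.max;
  }
}

// 25 VUs looping fast from ONE source IP: say 250 requests land in a single
// 60s window against max=100 -> 150 of them are rejected (60% failure).
const limiter = new FixedWindowLimiter(100, 60_000);
let rejected = 0;
for (let i = 0; i < 250; i++) {
  if (!limiter.allow("203.0.113.7", 0)) rejected += 1;
}
console.log(`rejected ${rejected} of 250`); // rejected 150 of 250
```

The window never resets during a short burst, so everything from that one IP past the cap fails, which matches the ~86% failure rate above.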

&lt;h2&gt;The Fixes 🔧&lt;/h2&gt;

&lt;h3&gt;1. Tuning the Rate Limiter&lt;/h3&gt;

&lt;p&gt;I bumped the limit to &lt;strong&gt;2,500 req/min&lt;/strong&gt;. This prevents abuse while allowing heavy legitimate traffic (or load balancers).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// src/index.ts&lt;/span&gt;
&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;rateLimit&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2500&lt;/span&gt; &lt;span class="c1"&gt;// Much better for standard reliable APIs&lt;/span&gt;
&lt;span class="p"&gt;}))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;2. Database Connection Pooling&lt;/h3&gt;

&lt;p&gt;The default Postgres pool size is often small (e.g., 10 or 20).&lt;br&gt;
My VPS has 4GB RAM. PostgreSQL needs RAM for connections, but not &lt;em&gt;that&lt;/em&gt; much.&lt;br&gt;
I bumped the pool to &lt;strong&gt;80 connections&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// src/db/index.ts&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;postgres&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
    &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt; 
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
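A rough queueing back-of-envelope (illustrative numbers, not measurements from my VPS) shows why pool size dominates tail latency under burst load:

```typescript
// Back-of-envelope model: with `poolSize` connections and `concurrent`
// in-flight queries that each take `queryMs`, the pool serves them in
// ceil(concurrent / poolSize) batches, so the unluckiest query waits
// for every earlier batch before its own even starts.
function worstCaseWaitMs(concurrent: number, poolSize: number, queryMs: number): number {
  const batches = Math.ceil(concurrent / poolSize);
  return (batches - 1) * queryMs;
}

console.log(worstCaseWaitMs(200, 10, 20)); // default-sized pool: 380ms of pure queueing
console.log(worstCaseWaitMs(200, 80, 20)); // 80-connection pool:  40ms
```

That queueing delay lands on top of the actual query time, which is exactly where a p95 budget of 300ms dies.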



&lt;h3&gt;3. Horizontal Scaling with Docker&lt;/h3&gt;

&lt;p&gt;Node and Bun run JavaScript on a single thread, so a single container effectively uses only 1 CPU core.&lt;br&gt;
My VPS has 2 vCPUs.&lt;br&gt;
I added a &lt;code&gt;replicas&lt;/code&gt; instruction to my &lt;code&gt;docker-compose.dokploy.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="c1"&gt;# One for each core!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This instantly doubled my throughput capacity. Traefik automatically load-balances between the two containers.&lt;/p&gt;
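Traefik's default balancing policy across healthy replicas of a service is round-robin; conceptually it's just this (a sketch, not Traefik's code):

```typescript
// Minimal round-robin picker: each call returns the next replica in order,
// wrapping around -- the default policy a reverse proxy applies when one
// service name resolves to multiple healthy containers.
function roundRobin(targets: string[]): () => string {
  let i = 0;
  return () => targets[i++ % targets.length];
}

const pick = roundRobin(["api-replica-1", "api-replica-2"]);
const firstSix = Array.from({ length: 6 }, () => pick());
console.log(firstSix); // alternates: replica-1, replica-2, replica-1, ...
```

With two replicas, each container sees roughly half the requests, which is why throughput roughly doubles as long as neither core is already saturated.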

&lt;h2&gt;The Final Result 🟢&lt;/h2&gt;

&lt;p&gt;Ran k6 again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ✓ checks_succeeded...: 100.00%
  ✓ http_req_duration..: p(95)=200.45ms
  ✓ http_req_failed....: 0.00% (excluding auth checks)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;0 errors. 200ms latency.&lt;/strong&gt; On a cheap VPS.&lt;/p&gt;

&lt;h2&gt;Takeaway&lt;/h2&gt;

&lt;p&gt;You don't need Kubernetes for a side project. You just need to understand where your bottlenecks are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Application Layer:&lt;/strong&gt; Check your Rate Limits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Layer:&lt;/strong&gt; Check your Connection Pool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hardware:&lt;/strong&gt; Use all your cores (Replicas).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you want to try the API, I published it on RapidAPI as &lt;strong&gt;Micro-Social API&lt;/strong&gt;:&lt;br&gt;
&lt;a href="https://rapidapi.com/ismamed4/api/micro-social" rel="noopener noreferrer"&gt;https://rapidapi.com/ismamed4/api/micro-social&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy coding! 🚀&lt;/p&gt;

</description>
      <category>bunjs</category>
      <category>api</category>
      <category>javascript</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
