## The Goal
I wanted to build a "Micro-Social" API—a backend service capable of handling Twitter-like feeds, follows, and likes—without breaking the bank. My constraints were simple:
- **Budget:** $5 - $20 / month.
- **Performance:** Sub-300ms latency.
- **Scale:** Must handle concurrent load (stress testing).
Most tutorials show you `Hello World`. This post shows you what happens when you actually hit `Hello World` with 25 concurrent users on a cheap VPS. (Spoiler: it falls over.) Here is how I fixed it.

## The Stack 🛠️

I chose Bun over Node.js for its startup speed and built-in tooling.

- Runtime: Bun
- Framework: ElysiaJS (a fast, Bun-first framework)
- Database: PostgreSQL (via Dokploy)
- ORM: Drizzle (Lightweight & Type-safe)
- Hosting: VPS with Dokploy (Docker Compose)
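
To set the scene for the snippets below, here is a minimal sketch of how these pieces boot together. It assumes the `elysia-rate-limit` plugin; the `/feed` route and the port are illustrative, not the real app.

```ts
// src/index.ts (minimal sketch, not the full app)
import { Elysia } from "elysia";
import { rateLimit } from "elysia-rate-limit";

const app = new Elysia()
  .use(rateLimit({ duration: 60_000, max: 100 })) // the limiter tuned later in this post
  .get("/feed", () => ({ posts: [] }))            // illustrative route
  .listen(3000);

console.log("API listening on :3000");
```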
## The "Oh Sh*t" Moment 🚨
I deployed my first version. It worked fine for me.
Then I ran a load test using k6 to simulate 25 virtual users browsing various feeds.
```bash
k6 run tests/stress-test.js
```
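
(For reference, a minimal k6 script along these lines reproduces the setup; the `BASE_URL` env var and the `/feed` path are placeholders, not the exact script.)

```js
// tests/stress-test.js (illustrative sketch, not the exact script)
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  vus: 25,        // 25 virtual users
  duration: "1m", // sustained for one minute
};

export default function () {
  const res = http.get(`${__ENV.BASE_URL}/feed`); // BASE_URL is an assumption
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```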
Result:

```
✗ http_req_failed................: 86.44%
✗ status is 429..................: 86.44%
```
The server wasn't crashing, but it was rejecting almost everyone.

### Diagnosis
I initially blamed Traefik (the reverse proxy). But digging into the code, I found the culprit was me.
```ts
// src/index.ts
// OLD CONFIGURATION
.use(rateLimit({
  duration: 60_000,
  max: 100 // 💀 100 requests per minute... global, per IP?
}))
```
Since my stress test (and likely any future NATed corporate office) sent all requests from a single IP, I was essentially DDoSing myself.
## The Fixes 🔧
### 1. Tuning the Rate Limiter
I bumped the limit to 2,500 req/min. This still curbs abuse while allowing heavy legitimate traffic (or many clients behind one NAT or load balancer).
```ts
// src/index.ts
.use(rateLimit({
  duration: 60_000,
  max: 2500 // plenty of headroom for legitimate traffic
}))
```
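
A further refinement I didn't need here: key the limiter on something finer-grained than the source IP, so a single NATed office can't burn the whole budget. A sketch assuming elysia-rate-limit's `generator` option (verify the option name against your plugin version) and a hypothetical `x-api-key` header:

```ts
// src/index.ts (sketch: one bucket per API key instead of per IP)
// `generator` and the `x-api-key` header are assumptions for this sketch.
.use(rateLimit({
  duration: 60_000,
  max: 2500,
  generator: (request) => request.headers.get("x-api-key") ?? "anonymous",
}))
```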
### 2. Database Connection Pooling
The default Postgres pool size is often small (e.g., 10 or 20).
My VPS has 4GB RAM, and an idle Postgres connection only costs a few megabytes, so there was plenty of headroom.
I bumped the pool to 80 connections, which still stays under PostgreSQL's default `max_connections` of 100.
```ts
// src/db/index.ts
const client = postgres(process.env.DATABASE_URL, {
  max: 80 // pool size, up from postgres.js's default of 10
});
```
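
For completeness, here's a sketch of how that client plugs into Drizzle via the standard `drizzle-orm/postgres-js` driver (the non-null assertion on `DATABASE_URL` is my addition):

```ts
// src/db/index.ts (sketch of the full file)
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

const client = postgres(process.env.DATABASE_URL!, {
  max: 80, // pool size, up from postgres.js's default of 10
});

export const db = drizzle(client);
```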
### 3. Horizontal Scaling with Docker
Node/Bun runs your JavaScript on a single thread, so one container effectively uses one CPU core.
My VPS has 2 vCPUs.
I added a `replicas` instruction to my `docker-compose.dokploy.yml`:
```yaml
api:
  build: .
  restart: always
  deploy:
    replicas: 2 # one for each core!
```
This instantly doubled my throughput capacity. Traefik automatically load-balances between the two containers over the internal Docker network (since the replicas don't need to publish fixed host ports, they can't collide). One caveat: each replica opens its own database pool, so two replicas can hold up to 160 Postgres connections; make sure `max_connections` (default 100) is raised to match.
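
If you drive Compose yourself rather than through Dokploy's UI, you can bring both replicas up and verify them like this (assumes Compose v2, which honors `deploy.replicas` outside Swarm):

```bash
docker compose -f docker-compose.dokploy.yml up -d
docker compose -f docker-compose.dokploy.yml ps  # should list two `api` containers
```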
## The Final Result 🟢
I ran k6 again:

```
✓ checks_succeeded...: 100.00%
✓ http_req_duration..: p(95)=200.45ms
✓ http_req_failed....: 0.00% (excluding auth checks)
```
Zero errors. ~200ms p95 latency. On a cheap VPS.
## Takeaway
You don't need Kubernetes for a side project. You just need to understand where your bottlenecks are:
- Application Layer: Check your Rate Limits.
- Database Layer: Check your Connection Pool.
- Hardware: Use all your cores (replicas).

If you want to try the API, I published it on RapidAPI as Micro-Social API: https://rapidapi.com/ismamed4/api/micro-social
Happy coding! 🚀