I wanted to build a platform where AI agents could access real-world data — IP geolocation, DNS lookups, website screenshots, crypto prices, code execution, and more.
Most people would reach for Kubernetes, Docker Compose, or at minimum a managed container service.
I chose a $20/month VPS, PM2, and nginx.
46 Node.js microservices. 4.8GB RAM. 300+ daily visitors. Zero container orchestration.
Here's how it works and what I learned.
## The Stack

```
Internet → nginx (reverse proxy + SSL) → PM2 (process manager) → 46 Node.js services
```
Each service:
- Runs on its own port (3001–3099)
- Has its own `package.json` and directory
- Listens on `127.0.0.1` (internal only)
- Gets proxied through nginx
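One service, in miniature. The real services use Fastify (per the post); this sketch uses Node's built-in `http` module so it runs with no dependencies, and the port and routes are illustrative assumptions:

```javascript
// A minimal service sketch. Real services use Fastify; this uses
// node:http so the example is dependency-free.
import http from 'node:http';

const PORT = 3005; // each service owns one port in the 3001–3099 range

const server = http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, { 'content-type': 'application/json' });
    return res.end(JSON.stringify({ status: 'ok' }));
  }
  res.writeHead(404).end();
});

// Bind to loopback only — nginx is the sole public entry point
server.listen(PORT, '127.0.0.1');
```

Because the service binds to `127.0.0.1`, it is unreachable from the internet even if the firewall misses a port.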
One service — the API Gateway on port 3010 — is the only one exposed externally. It handles:
- API key authentication
- Rate limiting
- Credit tracking
- Request routing to internal services
Everything else stays behind the firewall.
## Why Not Docker?
Docker adds ~50MB overhead per container. With 46 services, that's 2.3GB just for the runtime layer before your code loads.
On a 20GB RAM machine, that's significant. On a $20 VPS with 4GB? It's a dealbreaker.
PM2 gives me:
- Process management (restart on crash, watch for changes)
- Log rotation
- Cluster mode if I need it
- Zero overhead beyond the Node.js process itself
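PM2 can manage all of this from a single ecosystem file. The post doesn't show its config, so the names, paths, and memory limit below are assumptions — a sketch of what a two-entry version might look like:

```javascript
// ecosystem.config.js — hypothetical; the post doesn't show its PM2 config.
module.exports = {
  apps: [
    {
      name: 'api-gateway',
      cwd: './services/api-gateway',
      script: 'index.js',
      env: { PORT: 3010 },
      max_memory_restart: '200M' // restart if a service leaks past 200MB
    },
    {
      name: 'ip-geolocation',
      cwd: './services/ip-geolocation',
      script: 'index.js',
      env: { PORT: 3005 },
      max_memory_restart: '200M'
    }
    // ...44 more entries, one per service
  ]
};
```

`pm2 start ecosystem.config.js` brings the whole fleet up; `pm2 save` plus `pm2 startup` makes it survive reboots.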
Each service averages ~100MB of memory. The entire platform fits in 4.8GB.
## The nginx Config Pattern
Every service gets the same nginx config:
```nginx
server {
    listen 443 ssl;
    server_name service.example.com;

    location / {
        proxy_pass http://127.0.0.1:3042;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```
Certbot handles SSL. The entire config for 46 services is ~800 lines of nginx.
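Since every service uses the same template, those ~800 lines could be generated rather than hand-written. A sketch of a hypothetical generator (not part of the post) that stamps out one `server` block per service:

```javascript
// generate-nginx.js — hypothetical helper illustrating how repetitive
// the per-service config is; the post's configs may be hand-maintained.
function serverBlock({ subdomain, port }) {
  return `server {
    listen 443 ssl;
    server_name ${subdomain}.example.com;

    location / {
        proxy_pass http://127.0.0.1:${port};
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}`;
}

// Subdomains and ports here are illustrative
const services = [
  { subdomain: 'ip', port: 3005 },
  { subdomain: 'dns', port: 3006 },
  // ...44 more
];

const config = services.map(serverBlock).join('\n\n');
```

Write `config` to a file in `sites-available`, symlink it, and `nginx -s reload` picks it up.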
## Shared Request Logger
Every service logs to a single SQLite database:
```javascript
// shared/request-logger.js
import Database from 'better-sqlite3';

const db = new Database('/path/to/analytics.db', {
  timeout: 5000
});

// WAL mode for concurrent writes from 46 services
db.pragma('journal_mode = WAL');

// Compile the statement once at startup (assumes the requests table exists)
const insertRequest = db.prepare(`
  INSERT INTO requests (service, method, path, status, response_time_ms, ip, user_agent)
  VALUES (?, ?, ?, ?, ?, ?, ?)
`);

export default async function requestLogger(fastify, opts) {
  fastify.addHook('onResponse', (request, reply, done) => {
    insertRequest.run(
      opts.serviceName,
      request.method,
      request.url,
      reply.statusCode,
      Math.round(reply.elapsedTime),
      request.ip,
      request.headers['user-agent'] || ''
    );
    done();
  });
}
```
One SQLite file. WAL mode. All 46 services write to it concurrently. No Redis, no Postgres, no external database.
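The logger's `INSERT` assumes a `requests` table. The post doesn't show the schema, so this is a plausible reconstruction from the columns the insert references, plus an id, timestamp, and index that are pure assumptions:

```javascript
// Hypothetical schema for analytics.db — reconstructed from the INSERT
// columns; the id, created_at, and index are assumptions.
const createRequestsTable = `
CREATE TABLE IF NOT EXISTS requests (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  service TEXT NOT NULL,
  method TEXT NOT NULL,
  path TEXT NOT NULL,
  status INTEGER NOT NULL,
  response_time_ms INTEGER,
  ip TEXT,
  user_agent TEXT,
  created_at TEXT DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_requests_service ON requests(service);
`;

// Run once at startup, e.g. db.exec(createRequestsTable)
```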
**Gotcha I learned the hard way:** if you restart all 46 services simultaneously (`pm2 restart all`), about 20 of them will fail to start because of SQLite WAL lock contention during initialization. The fix: restart in batches of 5–10.
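The batched restart can be scripted. A sketch, with the `exec` function injectable so the batching logic is testable without PM2 — service names here are hypothetical:

```javascript
// restart-batched.js — restart services in small groups to avoid
// SQLite WAL lock contention at startup.
import { execSync } from 'node:child_process';

// Split an array into batches of `size`
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// exec is injectable so the batching can be exercised without pm2 installed
function restartInBatches(services, size, exec = (cmd) => execSync(cmd, { stdio: 'inherit' })) {
  const commands = [];
  for (const batch of chunk(services, size)) {
    // pm2 accepts multiple process names in one restart call
    const cmd = `pm2 restart ${batch.join(' ')}`;
    commands.push(cmd);
    exec(cmd);
  }
  return commands;
}

// Example (hypothetical service names):
// restartInBatches(['gateway', 'ip-geo', 'dns-resolver', /* ... */], 5);
```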
## The API Gateway Pattern
The gateway is the brain. It:
- Validates API keys (stored in a JSON file, loaded into memory)
- Checks credit balance
- Deducts credits per request
- Routes to the appropriate internal service
- Returns the response
```javascript
// Simplified routing
const SERVICE_MAP = {
  '/v1/ip/json': 'http://127.0.0.1:3005',
  '/v1/dns/resolve': 'http://127.0.0.1:3006',
  '/v1/screenshot': 'http://127.0.0.1:3007',
  '/v1/scraper': 'http://127.0.0.1:3008',
  '/v1/crypto/prices': 'http://127.0.0.1:3009',
  '/v1/code/run': 'http://127.0.0.1:3025',
  // ... 35 more routes
};

fastify.all('/v1/*', async (request, reply) => {
  const key = request.headers['x-api-key'];
  if (!key || !isValidKey(key)) {
    return reply.code(401).send({ error: 'Invalid API key' });
  }

  const credits = getCredits(key);
  if (credits <= 0) {
    return reply.code(402).send({ error: 'Insufficient credits' });
  }

  deductCredit(key);

  const target = findRoute(request.url);
  const response = await fetch(target + request.url, {
    method: request.method,
    headers: { ...request.headers, 'x-gateway-key': INTERNAL_KEY },
    // Fastify has already parsed the body into an object, so
    // re-serialize it before forwarding
    body: request.method !== 'GET' ? JSON.stringify(request.body) : undefined
  });

  return reply.code(response.status).send(await response.json());
});
```
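The helpers the gateway calls (`isValidKey`, `getCredits`, `deductCredit`, `findRoute`) aren't shown in the post. A minimal in-memory sketch — per the post the real key store is a JSON file loaded into memory, but these internals and the explicit `serviceMap` parameter are assumptions:

```javascript
// Hypothetical helper internals; the real versions are backed by a
// JSON file loaded into memory at startup.
const keys = new Map(); // apiKey -> { credits }

function isValidKey(key) {
  return keys.has(key);
}

function getCredits(key) {
  return keys.get(key)?.credits ?? 0;
}

function deductCredit(key) {
  const record = keys.get(key);
  if (record) record.credits -= 1;
}

// Longest-prefix match, so /v1/ip/json?fields=city still routes correctly.
// (The gateway snippet closes over SERVICE_MAP; here it's an argument.)
function findRoute(url, serviceMap) {
  const path = url.split('?')[0];
  let best = null;
  for (const prefix of Object.keys(serviceMap)) {
    if (path.startsWith(prefix) && (!best || prefix.length > best.length)) {
      best = prefix;
    }
  }
  return best ? serviceMap[best] : null;
}
```

Longest-prefix matching matters once routes nest (e.g. `/v1/ip` and `/v1/ip/json`): the more specific route must win.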
## What 46 Services Actually Do
The platform covers:
| Category | Services | Examples |
|---|---|---|
| Network Intel | 5 | IP geolocation, DNS resolver, WHOIS, port scanning, email verification |
| Web Tools | 4 | Screenshots, web scraper, PDF generation, URL shortener |
| Crypto/DeFi | 6 | Price feeds, wallet balances, DEX quotes, gas tracker, token analytics |
| Developer Tools | 4 | Code runner (sandboxed), image processing, text transform, paste service |
| Infrastructure | 8 | Gateway, registry, scheduler, event bus, log drain, monitoring, secrets, webhooks |
| AI/Agent | 3 | LLM proxy, MCP server, agent memory |
| Other | 16 | Identity, file storage, referrals, status dashboard, and more |
Each service is 200–800 lines of code. No service exceeds 1000 lines. If it would, it gets split.
## Health Monitoring
A cron job runs every 5 minutes:
```
*/5 * * * * cd /path/to/workspace && node health-monitor/check.js
```
It hits `/health` on every service and sends a batched Telegram alert if anything is down. Not Datadog. Not PagerDuty. A cron job, a small Node script, and a Telegram bot.
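The check itself fits in a few lines. A sketch of what `health-monitor/check.js` might look like — the real script and its Telegram integration aren't shown, and `fetchFn` is injectable here purely so the logic can be tested offline:

```javascript
// Minimal health-check sketch; the real check.js isn't shown in the post.
async function checkServices(ports, fetchFn = fetch) {
  const down = [];
  for (const port of ports) {
    try {
      const res = await fetchFn(`http://127.0.0.1:${port}/health`);
      if (!res.ok) down.push(port);
    } catch {
      down.push(port); // connection refused = process is dead
    }
  }
  return down;
}

// One batched Telegram message instead of 46 separate alerts
function formatAlert(down) {
  if (down.length === 0) return null;
  return `${down.length} service(s) down: ports ${down.join(', ')}`;
}
```

Send the formatted string to the Telegram Bot API's `sendMessage` endpoint and the pager is done.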
## The Numbers (Honest)
After 3 months of running this:
- 307 unique IPs visit daily
- 400K requests per week
- API keys created: hundreds
- Revenue: $0
Nobody has depleted the 200 free credits yet.
The hardest part of building APIs isn't the infrastructure. It's not the code. It's not scaling.
It's getting anyone to care.
I've written 30 blog posts, submitted to 130+ directories, posted on Twitter daily. Most awesome-list PRs sit for weeks before anyone looks at them.
## What I'd Do Differently
**Start with one API, not 46.** I built a platform when I should have built a product. One really good IP geolocation API with great docs would have been better than 46 mediocre ones.
**Charge from day one.** Free tiers attract tire-kickers. The people who will pay $5/month for an API are different from the people who sign up for free credits and never come back.
**Skip the microservices.** At this scale, a monolith with route handlers would've been simpler, faster to develop, and easier to debug. I chose microservices because it was intellectually interesting, not because the problem required it.
**Focus on one distribution channel.** I spread across Dev.to, Twitter, awesome-lists, API directories, and MCP registries. None of them got enough attention to compound. Pick one and go deep.
## The Code
All services are built with Fastify (a few use Express). The MCP server is open source:
- MCP Server: github.com/Robocular/frostbyte-mcp
- API Gateway: frostbyte.world
- API Docs: api-catalog-three.vercel.app
You can get a free API key and try any endpoint:
```bash
# Get your IP's geolocation
curl https://api.frostbyte.world/ip/json

# Take a website screenshot
curl "https://api.frostbyte.world/v1/screenshot?url=example.com&key=YOUR_KEY"

# Get crypto prices
curl "https://api.frostbyte.world/v1/crypto/prices?ids=bitcoin,ethereum&key=YOUR_KEY"
```
## TL;DR
- You don't need Kubernetes for 46 services
- PM2 + nginx + SQLite is a legitimate production stack
- The bottleneck is never the infrastructure — it's distribution
- Build one thing well before building 46 things adequately
If you're considering microservices on a budget, this architecture works. It's not sexy, but it runs 24/7 for $20/month with 99.9% uptime.
The boring stack wins.