Last month I shipped 30 APIs on Cloudflare Workers in 3 days. Not prototypes — production APIs with documentation, rate limiting, and MCP support. Here is how.
## The Stack
- Runtime: Cloudflare Workers (V8 isolates)
- Storage: KV (cache), D1 (SQLite), R2 (files)
- Deployment: Wrangler CLI + GitHub Actions
- Monitoring: Cloudflare Analytics + custom logging
Total infrastructure cost: $5/month.
## Day 1: The Template (Hours 1-4)
I built one base template that every API shares:
```js
// Base API template
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    // Standard middleware
    if (request.method === 'OPTIONS') return corsResponse();
    const rateLimited = await checkRateLimit(request, env);
    if (rateLimited) return rateLimited;

    // Route handling
    const handler = routes[url.pathname];
    if (!handler) return jsonError('Not found', 404);

    try {
      const result = await handler(request, env, url);
      return Response.json(result, { headers: corsHeaders });
    } catch (err) {
      return jsonError(err.message, 500);
    }
  }
};
```
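The template leans on a few helpers that aren't shown above. Here's a minimal sketch of the shape they take (names match the template; the exact bodies here are illustrative, not the deployed code):

```javascript
// Shared headers attached to every response
const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type, Authorization',
};

// Empty 204 reply for CORS preflight requests
function corsResponse() {
  return new Response(null, { status: 204, headers: corsHeaders });
}

// Consistent JSON error envelope, with CORS headers included
function jsonError(message, status) {
  return new Response(JSON.stringify({ error: message }), {
    status,
    headers: { 'Content-Type': 'application/json', ...corsHeaders },
  });
}
```

Because every handler returns through `Response.json` or `jsonError`, every API gets the same envelope for free.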
Key decisions:
- Every API returns JSON
- Every API has CORS enabled
- Every API has rate limiting
- Every API has an MCP endpoint
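Rate limiting is the only piece of the middleware that needs state. A sketch of the fixed-window approach, written against a plain `Map` so it runs anywhere (in the real Worker the counter lives in KV, so the `get`/`set` calls become `await env.KV.get(...)` / `put(...)`; the limits here are illustrative):

```javascript
const WINDOW_MS = 60_000;   // 1-minute window
const MAX_REQUESTS = 60;    // per client per window

// Returns a 429 Response when over the limit, or null to let the request through
function checkRateLimit(clientIp, store, now = Date.now()) {
  const windowKey = `${clientIp}:${Math.floor(now / WINDOW_MS)}`;
  const count = (store.get(windowKey) ?? 0) + 1;
  store.set(windowKey, count);
  if (count > MAX_REQUESTS) {
    return new Response(JSON.stringify({ error: 'Rate limit exceeded' }), {
      status: 429,
      headers: { 'Content-Type': 'application/json', 'Retry-After': '60' },
    });
  }
  return null;
}
```

Inside a Worker, the client IP comes from the `CF-Connecting-IP` header Cloudflare adds to every request, which is why the template passes `request` and `env` rather than an IP string.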
## Day 1: First 10 APIs (Hours 4-10)
With the template ready, each new API took 20-30 minutes:
```bash
# My workflow for each API
cp -r template/ new-api/
# Edit the route handlers
# Add API-specific logic
wrangler deploy
# Test with curl
# Done
```
APIs shipped on Day 1:
- QR Code Generator
- Email Validator
- Color Palette Generator
- Password Strength Checker
- Fake Data Generator
- Text Analysis (sentiment, keywords)
- Regex Toolkit
- IP Geolocation
- Markdown to HTML
- URL Shortener
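To give a feel for the 20-30 minutes of per-API work: each handler is a plain function in the template's `(request, env, url)` shape that returns an object for `Response.json`. Here's a simplified sketch of what the Password Strength Checker's handler might look like; the scoring rules are illustrative, not the deployed logic:

```javascript
// Route handler sketch: score = length tiers + character-class variety
function checkPasswordStrength(request, env, url) {
  const password = url.searchParams.get('password') ?? '';
  let score = 0;
  if (password.length >= 8) score++;
  if (password.length >= 12) score++;
  if (/[a-z]/.test(password) && /[A-Z]/.test(password)) score++;
  if (/\d/.test(password)) score++;
  if (/[^a-zA-Z0-9]/.test(password)) score++;
  const labels = ['very weak', 'weak', 'fair', 'good', 'strong', 'very strong'];
  return { score, strength: labels[score] };
}
```

Everything else (CORS, rate limiting, error handling, JSON serialization) is already handled by the template, which is why each API reduces to writing a handful of functions like this one.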
## Day 2: The Data-Heavy APIs (Hours 11-20)
Day 2 focused on APIs that needed external data or computation:
```python
# Testing each API as I built it
import requests

apis = [
    ("AI Spend Calculator", "/ai-spend/calculate?model=gpt-4&input_tokens=1000&output_tokens=500"),
    ("Tech Stack Detector", "/tech-stack/detect?url=https://github.com"),
    ("Crypto Signals", "/crypto-signal?symbol=BTC"),
    ("SEO Analyzer", "/seo/analyze?url=https://example.com"),
]

for name, path in apis:
    resp = requests.get(f"https://api.lazy-mac.com{path}")
    status = "OK" if resp.status_code == 200 else "FAIL"
    print(f"[{status}] {name}")
```
## Day 3: MCP + Documentation (Hours 21-30)
The final day was about making every API AI-accessible:
```js
// Added to every API
if (url.pathname === '/mcp') {
  return Response.json({
    name: apiName,
    tools: apiTools,
    version: "1.0.0"
  });
}
```
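What goes in `apiTools` is a list of tool descriptors. In MCP's tool-listing format, each tool carries a `name`, a `description`, and a JSON Schema for its inputs; this particular entry is an illustrative sketch for the AI Spend Calculator, not the exact deployed manifest:

```javascript
// One MCP tool descriptor: name, description, and a JSON Schema for inputs
const apiTools = [
  {
    name: 'calculate_ai_spend',
    description: 'Estimate cost for a model run given input/output token counts',
    inputSchema: {
      type: 'object',
      properties: {
        model: { type: 'string' },
        input_tokens: { type: 'integer' },
        output_tokens: { type: 'integer' },
      },
      required: ['model', 'input_tokens', 'output_tokens'],
    },
  },
];
```

Because the schema mirrors the query parameters the HTTP endpoint already accepts, an AI agent reading `/mcp` can construct valid calls without any extra glue.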
And auto-generated docs:
```bash
# Each API self-documents at /
curl https://api.lazy-mac.com/ai-spend/
```
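The self-documenting root is just another entry in the route table: it returns the table's own keys as the endpoint list. A sketch (the handler stub and response shape here are simplifications):

```javascript
// Stub standing in for the real calculation handler
const calculateHandler = () => ({ cost_usd: 0.045 });

const routes = {
  '/ai-spend/calculate': calculateHandler,
};

// Root route: the route table doubles as the documentation
function docsHandler() {
  return {
    name: 'AI Spend Calculator',
    endpoints: Object.keys(routes),
    mcp: '/mcp',
  };
}
```

Since docs are generated from the same `routes` object the dispatcher uses, they can't drift out of sync with the actual endpoints.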
## Lessons Learned
- Template-first development saves 80% of the time: most of each API was shared boilerplate I wrote once
- Deploy early, iterate fast: a Workers deploy finishes in under 2 seconds
- KV is your best friend for caching expensive computations
- MCP support doubles your audience: AI agents can discover and call your APIs, not just human developers
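The KV caching point follows one pattern everywhere: derive a key from the inputs, return a hit if present, otherwise compute and store. A runnable sketch against a `Map`-backed stub (real Workers KV is async, so in a Worker this becomes `await env.CACHE.get(key)` and `env.CACHE.put(key, value, { expirationTtl })`):

```javascript
// Cache-aside sketch: `store` stands in for a KV namespace
function cached(store, key, compute) {
  const hit = store.get(key);
  if (hit !== undefined) return hit;  // cache hit: skip the work
  const value = compute();            // expensive computation
  store.set(key, value);              // KV would also take a TTL here
  return value;
}
```

For things like SEO analysis or tech-stack detection, where the upstream fetch dominates latency, this is the difference between a multi-second response and a ~45ms one on repeat requests.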
## The Numbers
- 30 APIs deployed
- Average response time: 45ms globally
- Infrastructure cost: $5/month
- Revenue: on track for $500/month from API subscriptions
## Try Them Out
All 30 APIs are live at api.lazy-mac.com. The AI Spend Calculator and Tech Stack Detector are the most popular.
```bash
curl "https://api.lazy-mac.com/ai-spend/calculate?model=gpt-4&input_tokens=1000&output_tokens=500"
```
AI FinOps API on Gumroad | Tech Stack API on Gumroad | Full API Hub