Nagendra Yadav

How I Built a $10/mo Headless CMS That Competes with $99/mo Solutions

Technical deep dive into building BlogNow - a production-ready headless CMS that costs 90% less than competitors

I was paying $99/month to Contentful just to serve blog posts via an API. After hitting the limit on their "generous" free tier for the third time, I did the math: $1,200/year to fetch markdown content through REST endpoints felt... excessive.

So I built BlogNow - a headless CMS that does the same job for $9.99/month. Here's the technical breakdown of how I kept costs low without compromising on features.

The Stack: Boring Technology Wins

Backend:    Python + FastAPI (AWS Lambda)
Database:   Neon PostgreSQL (serverless)
Cache:      AWS ElastiCache (Valkey)
CDN:        CloudFront
Storage:    S3
Frontend:   Next.js 14 + Clerk Auth

Why these choices?

1. FastAPI over Express/Nest.js

FastAPI gave me three massive wins:

  • Auto-generated OpenAPI docs - My /redoc endpoint is literally zero maintenance
  • Pydantic validation - Request/response validation with type safety out of the box
  • Async by default - Better resource utilization = lower server costs
from fastapi import FastAPI, Depends
from typing import List

app = FastAPI()

@app.get("/v1/posts", response_model=List[PostResponse])
async def get_posts(
    status: str = "published",
    limit: int = 10,
    offset: int = 0,
    api_key: str = Depends(validate_api_key)
):
    # Validation, serialization, docs - all automatic
    return await fetch_posts(status, limit, offset)

The auto-generated Swagger docs saved me weeks of documentation work. Competitors charge extra for "interactive API documentation" - mine came free.

2. AWS Lambda: True Serverless Backend

Running FastAPI on Lambda was a game-changer. You only pay for actual request processing time, not idle servers.

The serverless advantage:

  • Traditional EC2/Fargate: $20-50/month minimum (24/7 uptime)
  • Lambda: Pay per request, scales to zero when idle
  • Combined with aggressive caching: roughly half the cost of an always-on setup

The cold start problem? Solved with caching. Since 95% of requests hit the cache layer (Valkey), cold starts only affect ~5% of traffic. For those, CloudFront's edge caching acts as another buffer.
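
The post doesn't show the Lambda wiring itself, but as a rough sketch, a common way to run an ASGI app like FastAPI on Lambda is the Mangum adapter (the adapter choice is my assumption, not a detail confirmed above):

from fastapi import FastAPI
from mangum import Mangum  # ASGI adapter that translates Lambda events into HTTP requests

app = FastAPI()

@app.get("/v1/health")
async def health():
    return {"status": "ok"}

# Lambda entry point: point the function's handler setting at "main.handler"
handler = Mangum(app)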

3. Neon PostgreSQL: Serverless Database

This complements Lambda perfectly. Traditional PostgreSQL requires a server running 24/7. Neon scales to zero when idle.

The math:

  • Traditional RDS: $15-50/month minimum
  • Neon: $0 when idle, ~$5-10/month projected for production workloads

For a micro-SaaS with sporadic traffic, this combination (Lambda + Neon + caching) keeps infrastructure costs incredibly low.
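
To make the combination concrete, here's a minimal sketch of a Lambda-friendly query path against Neon, assuming asyncpg and a DATABASE_URL environment variable pointing at Neon's pooled connection string (my assumptions; this code isn't from the original post):

import os
import asyncpg

async def fetch_published_posts(limit: int = 10, offset: int = 0):
    # Short-lived connection per invocation; Neon's connection pooler
    # keeps this cheap even when the function scales out.
    conn = await asyncpg.connect(os.environ["DATABASE_URL"])
    try:
        return await conn.fetch(
            """
            SELECT id, title, slug, published_at
            FROM posts
            WHERE status = 'published'
            ORDER BY published_at DESC
            LIMIT $1 OFFSET $2
            """,
            limit, offset,
        )
    finally:
        await conn.close()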

Why SQL over NoSQL?

Unless you have a specific reason to choose NoSQL, go with SQL by default. Here's why:

  • Better query performance for relational data (posts → categories → tags)
  • ACID transactions - crucial for billing and API key management
  • Mature ecosystem - ORMs, migration tools, monitoring all just work
  • Cost-effective - managed NoSQL services (e.g., DynamoDB) typically charge per read/write operation. SQL? You pay for compute.
-- This query would be painful in NoSQL
SELECT p.*, c.name as category_name, array_agg(t.name) as tags
FROM posts p
LEFT JOIN categories c ON p.category_id = c.id
LEFT JOIN post_tags pt ON p.id = pt.post_id
LEFT JOIN tags t ON pt.tag_id = t.id
WHERE p.status = 'published' AND p.workspace_id = $1
GROUP BY p.id, c.name
ORDER BY p.created_at DESC
LIMIT 10;

4. Smart Caching: AWS ElastiCache (Valkey)

The secret to keeping API response times under 100ms while minimizing database hits and Lambda invocations.

Caching strategy:

import json

# `redis` and `db` are module-level client instances (Valkey and Postgres)
async def get_post_by_slug(workspace_id: str, slug: str):
    cache_key = f"post:{workspace_id}:{slug}"

    # Try cache first
    cached = await redis.get(cache_key)
    if cached:
        return json.loads(cached)

    # Cache miss - hit database
    post = await db.fetch_one(query, workspace_id, slug)

    # Cache for 15 minutes
    await redis.setex(cache_key, 900, json.dumps(post))
    return post

Cache invalidation on updates:

async def update_post(post_id: str, data: PostUpdate):
    # Update database
    updated_post = await db.execute(update_query, data, post_id)

    # Invalidate cache using the updated post's workspace and slug
    cache_key = f"post:{updated_post.workspace_id}:{updated_post.slug}"
    await redis.delete(cache_key)

    return updated_post

This means:

  • 95% of read requests never hit the database or trigger Lambda cold starts
  • Lower database costs AND lower Lambda invocation costs
  • Sub-100ms response times globally (with CloudFront on top)

5. CloudFront CDN: Edge Caching

Static blog content is perfect for edge caching. CloudFront caches responses at 400+ locations worldwide.

from fastapi import Response

@app.get("/v1/posts")
async def get_posts(response: Response):
    posts = await fetch_posts()

    # Cache at edge for 5 minutes
    response.headers["Cache-Control"] = "public, max-age=300"

    return posts

Result: Australian users get the same 50ms response time as US users, without me paying for global database replication.

The Hard Problems

Problem 1: Public API Keys in Client Code

Unlike most APIs that run server-side, BlogNow is designed for client-side use (think Next.js, React apps). This means API keys are exposed in the browser.

The solution: CORS + Rate Limiting

# Custom CORS middleware
from fastapi import Request, HTTPException

async def validate_cors_and_key(request: Request):
    api_key = request.headers.get("Authorization", "").replace("Bearer ", "")
    origin = request.headers.get("Origin")

    # Fetch API key record with allowed domains
    key_record = await get_api_key(api_key)

    if not key_record:
        raise HTTPException(401, "Invalid API key")

    # Validate origin against allowed domains
    if origin not in key_record.allowed_domains:
        raise HTTPException(403, f"CORS: {origin} not allowed")

    # Rate limit check (per key and per client IP)
    if await is_rate_limited(api_key, request.client.host):
        raise HTTPException(429, "Rate limit exceeded")

    return key_record

Multi-layer rate limiting:

  1. Per API key: 50K requests/month (enforced at middleware level)
  2. Per IP address: 100 requests/minute (prevent single-origin abuse)
  3. Per workspace: Hard limit based on plan
async def is_rate_limited(api_key: str, ip: str) -> bool:
    # Check per-IP rate limit (100 req/min)
    ip_key = f"ratelimit:ip:{ip}"
    ip_count = await redis.incr(ip_key)
    if ip_count == 1:
        await redis.expire(ip_key, 60)  # 1 minute window
    if ip_count > 100:
        return True

    # Check per-key monthly limit
    key_usage = await get_monthly_usage(api_key)
    plan_limit = await get_plan_limit(api_key)

    return key_usage >= plan_limit
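
The monthly bookkeeping itself isn't shown; as a hedged sketch, get_monthly_usage and get_plan_limit could be backed by a per-key Redis counter keyed by calendar month (this is my assumption, not the author's confirmed implementation; record_request and monthly_request_limit are hypothetical names):

from datetime import datetime, timezone

def _month_key(api_key: str) -> str:
    # One counter per key per calendar month, e.g. "usage:abc123:2025-06"
    month = datetime.now(timezone.utc).strftime("%Y-%m")
    return f"usage:{api_key}:{month}"

async def record_request(api_key: str) -> None:
    key = _month_key(api_key)
    count = await redis.incr(key)
    if count == 1:
        # Keep the counter slightly longer than a month, then let it expire
        await redis.expire(key, 60 * 60 * 24 * 35)

async def get_monthly_usage(api_key: str) -> int:
    value = await redis.get(_month_key(api_key))
    return int(value) if value else 0

async def get_plan_limit(api_key: str) -> int:
    key_record = await get_api_key(api_key)   # same lookup as the CORS check
    return key_record.monthly_request_limit   # hypothetical field on the key record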

Problem 2: Authentication & Organization Management

Mistake: I built custom auth, user management, organization switching, and invitation flows from scratch. Took 3 weeks.

Reality check: Clerk does this 10x better, especially for Next.js projects.

// Before: 500 lines of auth code
// After: 3 lines

import { ClerkProvider } from '@clerk/nextjs'

export default function RootLayout({ children }) {
  return (
    <ClerkProvider>
      {children}
    </ClerkProvider>
  )
}

Clerk's pre-built components (<UserButton />, <OrganizationSwitcher />) saved me weeks of UI work. Their webhook system made syncing organizations to my database trivial.
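
The webhook handler itself isn't shown above; as a rough sketch, a FastAPI endpoint receiving Clerk's Svix-signed organization events might look like this (the route path and the upsert_organization helper are hypothetical):

from fastapi import FastAPI, Request, HTTPException
from svix.webhooks import Webhook, WebhookVerificationError

app = FastAPI()
WEBHOOK_SECRET = "whsec_..."  # signing secret from the Clerk dashboard

@app.post("/webhooks/clerk")
async def clerk_webhook(request: Request):
    payload = await request.body()
    try:
        # Verify the Svix signature before trusting the payload
        event = Webhook(WEBHOOK_SECRET).verify(payload, dict(request.headers))
    except WebhookVerificationError:
        raise HTTPException(400, "Invalid webhook signature")

    if event["type"] == "organization.created":
        await upsert_organization(event["data"])  # hypothetical sync helper
    return {"ok": True}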

Lesson learned: Don't build what you can buy. Your time is better spent on your core product.

Problem 3: Keeping Documentation in Sync

FastAPI's auto-generated OpenAPI spec solved this beautifully:

from datetime import datetime
from typing import Optional
from pydantic import BaseModel

class PostResponse(BaseModel):
    """A published blog post"""
    id: str
    title: str
    slug: str
    content: str
    excerpt: Optional[str]
    published_at: datetime

    class Config:
        # Auto-generate examples for docs (Pydantic v1 style)
        schema_extra = {
            "example": {
                "id": "post_123",
                "title": "My First Post",
                "slug": "my-first-post",
                # ... etc
            }
        }

This single Pydantic model:

  • Validates API requests
  • Serializes responses
  • Generates OpenAPI schema
  • Creates interactive docs at /redoc

Zero documentation drift. Competitors maintain separate OpenAPI files manually - I don't.

The TypeScript SDK: Developer Experience Matters

The SDK isn't just a wrapper - it has built-in intelligence:

// RequestOptions and BlogNowError are defined elsewhere in the SDK
export class BlogNowClient {
  private retryCount = 3;
  private retryDelay = 1000;

  constructor(private apiKey: string) {}

  async get(endpoint: string, options?: RequestOptions) {
    for (let i = 0; i < this.retryCount; i++) {
      try {
        const response = await fetch(endpoint, {
          headers: {
            'Authorization': `Bearer ${this.apiKey}`,
            ...options?.headers
          }
        });

        // Handle rate limits with exponential backoff
        if (response.status === 429) {
          const retryAfter = response.headers.get('Retry-After');
          await this.sleep(retryAfter ? parseInt(retryAfter) * 1000 : this.retryDelay * Math.pow(2, i));
          continue;
        }

        if (!response.ok) {
          throw new BlogNowError(response.status, await response.json());
        }

        return await response.json();
      } catch (error) {
        // Retry transient failures; re-throw on the final attempt
        if (i === this.retryCount - 1) throw error;
      }
    }

    throw new Error('Rate limited: retries exhausted');
  }

  private sleep(ms: number) {
    return new Promise<void>((resolve) => setTimeout(resolve, ms));
  }
}

Smart features:

  • Automatic retries with exponential backoff
  • Rate limit handling
  • TypeScript types for all responses
  • Environment variable support out of the box

Cost Breakdown: Why $9.99 Works

Projected monthly costs to serve 1,000 users with 50K requests each:

Infrastructure (projected at scale):
- Neon PostgreSQL:      $10-15/mo  (10GB storage + compute)
- ElastiCache (Valkey): $13/mo     (cache.t3.micro)
- CloudFront:           $5/mo      (1TB transfer)
- S3:                   $3/mo      (media storage)
- Lambda:               $5-10/mo   (50M requests/mo with 95% cache hit)
- Domain/SSL:           $2/mo      (amortized)
-----------------------------------
Total:                  ~$40-50/mo

Revenue (1,000 users × $9.99):  $9,990/mo
Projected gross margin:         99.5%+

Key optimizations:

  1. True serverless - Lambda + Neon scale to zero when idle
  2. Aggressive caching - 95% cache hit rate = minimal DB load and Lambda invocations
  3. Edge caching - Reduces origin requests by 80%
  4. Efficient compute - Python + async = high throughput per Lambda invocation

Compare this to competitors running dedicated RDS instances ($150/mo), managed Elasticsearch ($80/mo), always-on servers ($50+/mo), and premium CDNs ($200/mo).

API Versioning: Future-Proofing

All endpoints are versioned: /v1/posts, /v2/posts, etc.

from fastapi import APIRouter, FastAPI

app = FastAPI()

v1_router = APIRouter(prefix="/v1")
v2_router = APIRouter(prefix="/v2")

@v1_router.get("/posts")
async def get_posts_v1():
    # Legacy behavior
    pass

@v2_router.get("/posts")
async def get_posts_v2():
    # New features, breaking changes OK
    pass

app.include_router(v1_router)
app.include_router(v2_router)

This means:

  • SDK v1 keeps working forever
  • I can ship breaking changes in v2 without migration headaches
  • Developers upgrade on their timeline

Database Indexing Strategy

Currently optimized for reads (90% of traffic):

-- Primary keys are indexed automatically; these cover the most common filters
CREATE INDEX idx_posts_workspace ON posts(workspace_id);
CREATE INDEX idx_posts_status ON posts(status);
CREATE INDEX idx_posts_slug ON posts(workspace_id, slug);

-- Foreign keys
CREATE INDEX idx_posts_category ON posts(category_id);
CREATE INDEX idx_post_tags_post ON post_tags(post_id);

-- Composite for common queries
CREATE INDEX idx_posts_published ON posts(workspace_id, status, published_at DESC);

Future improvement: Full-text search indexing on title and content. Currently using basic ILIKE queries - works fine for small datasets, but will add PostgreSQL full-text search as usage grows.
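
As a sketch of what that upgrade might look like (my assumption about a future implementation, not something described in the post), Postgres's built-in tsvector and GIN support can replace the ILIKE scan:

# Hypothetical one-time migration: generated tsvector column + GIN index
FULL_TEXT_MIGRATION = """
ALTER TABLE posts
    ADD COLUMN search_vector tsvector
    GENERATED ALWAYS AS (
        to_tsvector('english', title || ' ' || coalesce(content, ''))
    ) STORED;
CREATE INDEX idx_posts_search ON posts USING GIN (search_vector);
"""

# Hypothetical ranked search query replacing the ILIKE scan
SEARCH_QUERY = """
SELECT id, title, slug
FROM posts
WHERE workspace_id = $1
  AND status = 'published'
  AND search_vector @@ websearch_to_tsquery('english', $2)
ORDER BY ts_rank(search_vector, websearch_to_tsquery('english', $2)) DESC
LIMIT 10;
"""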

What I'd Do Differently

  1. Start with Clerk from day 1 - Don't build auth yourself
  2. Add observability earlier - I should've had proper logging/monitoring from the start, not as an afterthought
  3. Design for multi-tenancy from the beginning - I retrofitted workspace isolation, which was painful
  4. Over-index on DX - The best feature is one developers love using. The pre-engineered AI prompts for integration are getting the most positive feedback - developers literally paste them into Claude/Cursor and have a working blog in 10 minutes.

The Results

Just launched, but early metrics are promising:

  • Growing user base (launched publicly this week!)
  • 99.8% API uptime so far
  • Sub-100ms p95 response times globally
  • $0 spent on customer support (good docs + AI prompts + SDK design pays off)

Try It Yourself

The full platform is live at blognow.tech with a 7-day free trial.

Quick start:

npm install @blognow/sdk

# In your .env
NEXT_PUBLIC_BLOGNOW_API_KEY=your_key_here

import { BlogNow } from '@blognow/sdk';

const blognow = new BlogNow({
  apiKey: process.env.NEXT_PUBLIC_BLOGNOW_API_KEY
});

const posts = await blognow.posts.list({ status: 'published' });

Open Questions for the Community

  1. Should I open-source the SDK? (The API stays proprietary)
  2. Better caching strategy ideas for full-text search?
  3. How do you handle API versioning in production?

Drop your thoughts in the comments! And if you're building a headless CMS or similar API product, happy to answer questions about the architecture.


Tech stack summary:

  • Backend: Python, FastAPI, AWS Lambda
  • Database: Neon PostgreSQL (serverless)
  • Cache: AWS ElastiCache (Valkey)
  • CDN: CloudFront
  • Storage: S3
  • Frontend: Next.js 14, Clerk, shadcn/ui
  • SDK: TypeScript with smart retries


Building in public. Follow my journey on X/Twitter @nagendra402
