Elyvora US
Why we ditched our CMS halfway through building an affiliate site (and kept it anyway)

Last month I was refactoring our product pages when I realized we'd built something weird: half our content comes from Postgres, half from Sanity CMS, and somehow... it works better than either approach alone. Here's why we ended up with this Frankenstein architecture and why you might want to consider it too.

The original plan: CMS for everything

We started building an Amazon affiliate site (product reviews, buying guides, that kind of stuff) with Sanity CMS as our content backend. The plan was simple:

  • Editors write product reviews in Sanity Studio
  • Next.js pulls content via GROQ queries
  • Deploy with ISR, everyone's happy

Classic JAMstack setup. Clean, modern, all that good stuff.

Where it fell apart

Two weeks in, we hit a wall. Our product pages needed:

  • Real-time Amazon pricing data
  • Inventory status updates
  • Click tracking for analytics
  • Dynamic category filters
  • Search across 70+ products

Sanity could technically handle this, but we'd be fighting the framework. CMSs are built for content, not transactional data. Plus, every product update meant a write operation to Sanity, API limits, webhook processing... it felt wrong. Meanwhile, our Postgres database was just sitting there. Already handling user sessions, analytics events, that internal metrics API we built. Why not use it for products too?

The hybrid approach

What lives in Postgres:

  • Product catalog (name, slug, pricing, category)
  • Analytics events (clicks, views, conversions)
  • User data (if you have authentication)
  • Dynamic filters and search indices

What lives in Sanity CMS:

  • Blog posts (long-form content with images)
  • Editorial content (buying guides, comparisons)
  • Rich text descriptions (Portable Text is actually great for this)
  • SEO metadata overrides

The split is simple: transactional data goes to Postgres, editorial content goes to Sanity.
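One way to make that rule explicit in code is a tiny routing helper that maps each content type to its backend. This is a sketch, and the type names are illustrative, not from either API:

```typescript
// Which backend owns each content type: transactional data in
// Postgres, editorial content in Sanity. Names are illustrative.
type ContentType =
  | "product" | "analyticsEvent" | "userProfile" | "searchIndex"    // transactional
  | "blogPost" | "buyingGuide" | "richDescription" | "seoOverride"; // editorial

type DataSource = "postgres" | "sanity";

const TRANSACTIONAL: ReadonlySet<ContentType> = new Set<ContentType>([
  "product", "analyticsEvent", "userProfile", "searchIndex",
]);

export function sourceFor(type: ContentType): DataSource {
  return TRANSACTIONAL.has(type) ? "postgres" : "sanity";
}
```

Any data-fetching code can then check `sourceFor(type)` instead of relying on convention alone.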

How we actually built it

The implementation is surprisingly clean. Here's the architecture:

Product pages (database-first)

// app/products/page.tsx
import { db } from '@/lib/db';                       // Prisma client singleton
import { ProductGrid } from '@/components/ProductGrid';

export default async function ProductsPage() {
  // Fetch from Postgres
  const products = await db.product.findMany({
    where: { status: 'active' },
    include: { category: true },
    take: 100,
  });

  return <ProductGrid products={products} />;
}

export const revalidate = 60; // ISR: regenerate at most once per 60 seconds

Fast, predictable, easy to query. We can filter by category, search by name, sort by price—all the stuff databases are good at.
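Those filter, search, and sort options all reduce to building the `findMany` arguments from the URL. Here's a sketch of a pure builder function; the field names mirror the product schema above, and the `contains` / `mode: 'insensitive'` shape follows Prisma's filter API, but the helper itself is ours:

```typescript
// Build a Prisma-style query object from URL search params.
// Field names (status, category, name, price) mirror our product schema.
interface ProductQuery {
  where: Record<string, unknown>;
  orderBy?: Record<string, "asc" | "desc">;
}

export function buildProductQuery(params: URLSearchParams): ProductQuery {
  // Always restrict to active products.
  const where: Record<string, unknown> = { status: "active" };

  const category = params.get("category");
  if (category) where.category = { slug: category };

  const q = params.get("q");
  if (q) where.name = { contains: q, mode: "insensitive" };

  const sort = params.get("sort");
  return {
    where,
    ...(sort === "price" ? { orderBy: { price: "asc" as const } } : {}),
  };
}
```

Because it's a pure function, it's trivially unit-testable without a database connection.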

Blog pages (CMS-first)

// app/blog/page.tsx
import { client } from '@/lib/sanity';              // configured Sanity client
import { BlogList } from '@/components/BlogList';

export default async function BlogPage() {
  // Fetch from Sanity via GROQ
  const posts = await client.fetch(`
    *[_type == "blogPost" && status == "published"] {
      title, slug, excerpt, featuredImage, publishedAt
    } | order(publishedAt desc)
  `);

  return <BlogList posts={posts} />;
}

export const revalidate = 60;

Editors get the Sanity Studio UI for rich content, we get Portable Text for flexible layouts. Perfect match.
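For an individual post page you need the same projection narrowed to one slug. Since GROQ queries are plain strings, a small builder keeps the field list in one place. This is a sketch; `$slug` is a GROQ parameter that Sanity's `client.fetch` substitutes safely rather than string interpolation:

```typescript
// Build GROQ queries for the blog. The field names match the listing
// query above; the shared projection is our own convention.
const POST_FIELDS = "title, slug, excerpt, featuredImage, publishedAt";

export function blogListQuery(): string {
  return `*[_type == "blogPost" && status == "published"] {
    ${POST_FIELDS}
  } | order(publishedAt desc)`;
}

export function blogPostQuery(): string {
  // Pass { slug } as the params argument of client.fetch so the value
  // is escaped by the client instead of interpolated into the string.
  return `*[_type == "blogPost" && slug.current == $slug][0] {
    ${POST_FIELDS}, body
  }`;
}
```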

The fallback strategy

Here's the part that saved us: graceful degradation. Sanity goes down sometimes (it's rare, but it happens). Our blog page handles it like this:

// app/blog/page.tsx (with fallback)
import { getAllBlogPosts } from '@/lib/sanity';
import { sampleBlogPosts } from '@/lib/sample-posts'; // static fallback data
import { BlogList } from '@/components/BlogList';

export default async function BlogPage() {
  let posts = [];

  try {
    posts = await getAllBlogPosts(); // Sanity query
  } catch (error) {
    console.error('Sanity fetch failed:', error);
    posts = sampleBlogPosts; // Fall back to static data
  }

  return <BlogList posts={posts} />;
}

If Sanity's API is slow or down, we serve cached sample posts instead of a broken page. Users see something, Google still crawls content, no 500 errors. We don't do this for product pages because that data lives in Postgres, which is always available (and if it's not, we have bigger problems).
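The try/catch above generalizes to any flaky source. Here's a sketch of a reusable helper; the timeout behavior and the `withFallback` name are ours, not part of any library:

```typescript
// Run a fetcher with a timeout; on any failure (error or timeout),
// return the fallback data instead of letting the page 500.
export async function withFallback<T>(
  fetcher: () => Promise<T>,
  fallback: T,
  timeoutMs = 5000,
): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("fetch timed out")), timeoutMs),
  );
  try {
    return await Promise.race([fetcher(), timeout]);
  } catch (error) {
    console.error("fetch failed, serving fallback:", error);
    return fallback;
  }
}
```

The timeout matters as much as the catch: a CMS that hangs for 30 seconds hurts you more than one that fails fast.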

Real-world results

After running this setup for a few weeks:

Performance:

  • Product pages: 0.4s average load time (Postgres query + ISR)
  • Blog pages: 0.6s average load time (Sanity API + ISR)
  • Zero downtime from CMS issues (fallback content FTW)

Developer experience:

  • Database migrations for product schema changes (fast, predictable)
  • Sanity Studio for editorial workflows (non-technical editors love it)
  • No fighting with either system to do things it wasn't designed for

Costs:

  • Postgres: free tier (Neon, Supabase, whatever)
  • Sanity: free tier handles our ~20 blog posts just fine
  • Total: $0/month for content infrastructure

When this makes sense

This hybrid approach works well if you have:

  • Distinct content types: transactional data (products, events, user actions) and editorial content (blog posts, guides)
  • Different update frequencies: products change daily, blog posts weekly
  • Separate workflows: developers manage product data, editors manage blog content
  • Performance requirements: fast queries for filtering and search

Don't do this if:

  • You only have one type of content (just use a CMS or database, not both)
  • Your content is mostly static (use a CMS with Git-based storage)
  • You have a huge team (coordination overhead increases)

The code

We built this for elyvora.us, where we review Amazon products and write buying guides. The product catalog (40+ items) lives in Postgres, blog content lives in Sanity.

Both use the same Next.js ISR strategy, but different data sources. Works great.

Key takeaways

🔹 Don't force your CMS to be a database - Use it for what it's good at (editorial content)
🔹 Don't force your database to be a CMS - Editors shouldn't write SQL queries
🔹 Build fallbacks - Your CMS will have issues, plan for it
🔹 ISR is your friend - Revalidate both data sources on the same schedule
🔹 Keep it simple - Two data sources is manageable, five is chaos

Sometimes the "messy" solution that uses the right tool for each job beats the "clean" solution that forces everything through one system.


Have you built something similar? Or am I overthinking this and should've just stuck with one approach? Let me know in the comments.
