
gyani

I shipped 19 SEO essays in 12 days from a single Next.js page file

I have been quietly running an experiment for the last twelve days. I wanted to know how minimal the publishing pipeline for a real SEO essay corpus can be if I gave up every CMS, every markdown loader, and every static-site generator that pretended to be lightweight but turned out to require its own ecosystem.

The answer ended up being one Next.js page file, one slug allowlist, one sitemap function, and one post-deploy script that probes the live URLs after each deploy. Nineteen essays are live as I write this, all listed individually in sitemap.xml, all internally linked, all with stable URLs and zero rebuild surprises. The whole pipeline is small enough that a human in a hurry can read it in five minutes.

This post is the file. Not a tutorial about the file. The actual structure.

The shape

The route lives at apps/web/app/essays/[slug]/page.tsx. Everything an essay needs is in two arrays in that file plus one small allowlist file next to it.

// apps/web/app/essays/[slug]/page.tsx

import { notFound } from 'next/navigation';
import { ESSAY_SLUGS } from '@/lib/essay-slugs';

export const dynamic = 'force-static';
export const dynamicParams = false;

type Essay = {
  slug: (typeof ESSAY_SLUGS)[number];
  title: string;
  publishedAt: string;
  body: string;
};

const ESSAYS: Essay[] = [
  {
    slug: 'why-matching-layer-is-physically-blind',
    title: 'Why the matching layer is physically blind, on purpose',
    publishedAt: '2026-05-08',
    body: `
      ... essay prose here, plain markdown-ish strings ...
    `,
  },
  // ... 18 more
];

export function generateStaticParams() {
  return ESSAY_SLUGS.map((slug) => ({ slug }));
}

export default function EssayPage({ params }: { params: { slug: string } }) {
  const essay = ESSAYS.find((e) => e.slug === params.slug);
  if (!essay) notFound();
  return <Article essay={essay} />;
}

ESSAY_SLUGS is a const tuple in its own file so the type system catches typos at compile time. Slugs in the renderer that are not in the allowlist will not type-check; slugs in the allowlist with no renderer entry will hit notFound() and return a real 404.

// apps/web/lib/essay-slugs.ts
export const ESSAY_SLUGS = [
  'why-matching-layer-is-physically-blind',
  'letters-mode-is-mercy',
  'why-dating-apps-feel-exhausting',
  // ... 16 more
] as const;

That is the single source of truth for the corpus. Sitemap reads it. Index page reads it. Renderer reads it. Three callers, one list. When a new essay ships, one line in this file and one entry in the renderer is the whole diff.
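The compile-time guarantee is the whole point of the `as const` tuple. A minimal reproduction, with a hypothetical two-slug allowlist (the real file has nineteen entries):

```typescript
// Hypothetical two-slug allowlist, same shape as essay-slugs.ts.
const ESSAY_SLUGS = [
  'why-matching-layer-is-physically-blind',
  'letters-mode-is-mercy',
] as const;

// The union type is derived from the tuple, not declared by hand.
type EssaySlug = (typeof ESSAY_SLUGS)[number];

// OK: the literal is a member of the union.
const known: EssaySlug = 'letters-mode-is-mercy';

// A typo is rejected at compile time, never at runtime:
// const typo: EssaySlug = 'letters-mode-is-mrecy';
//   Type '"letters-mode-is-mrecy"' is not assignable to type 'EssaySlug'.
```

Because the renderer's `Essay` type uses `(typeof ESSAY_SLUGS)[number]` for its `slug` field, any entry in the `ESSAYS` array that drifts from the allowlist fails the build rather than shipping a dead URL.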

Sitemap, index, internal linking

Because ESSAY_SLUGS is a typed tuple, the sitemap generator is six lines.

// apps/web/app/sitemap.ts
import type { MetadataRoute } from 'next';
import { ESSAY_SLUGS } from '@/lib/essay-slugs';

export default function sitemap(): MetadataRoute.Sitemap {
  return ESSAY_SLUGS.map((slug) => ({
    url: `https://byvibration.com/essays/${slug}`,
    lastModified: new Date(),
  }));
}

The /essays index page does the same lookup and renders a card per essay, ordered by publishedAt descending. Adding a new essay automatically promotes it to the top of the index and into the sitemap on the next deploy. There is nothing else to remember.
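The ordering trick is that ISO-8601 dates sort correctly as plain strings, so the index never parses a `Date`. A sketch with hypothetical entries:

```typescript
// Hypothetical slice of the ESSAYS array; only the fields the index card needs.
type EssayCard = { slug: string; title: string; publishedAt: string };

const ESSAYS: EssayCard[] = [
  { slug: 'older-essay', title: 'Older essay', publishedAt: '2026-04-30' },
  { slug: 'newer-essay', title: 'Newer essay', publishedAt: '2026-05-08' },
];

// ISO dates sort lexicographically, so string comparison is enough.
const byNewest = [...ESSAYS].sort((a, b) =>
  b.publishedAt.localeCompare(a.publishedAt),
);

console.log(byNewest.map((e) => e.slug)); // [ 'newer-essay', 'older-essay' ]
```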

Internal linking is a function call inside the prose. Each essay body has a small Related block at the bottom that pulls related slugs by cluster tag (introvert cluster, friendship cluster, etc.). The cluster mapping is another tiny const next to the slug list. Total moving parts so far: three files.
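The cluster mapping described above can be sketched like this; the tags and membership here are hypothetical, standing in for the real const that lives next to the slug list:

```typescript
// Hypothetical cluster map: tag -> slugs in that cluster.
const CLUSTERS: Record<string, string[]> = {
  introvert: ['why-dating-apps-feel-exhausting', 'letters-mode-is-mercy'],
  friendship: ['why-matching-layer-is-physically-blind'],
};

// Related = every other slug that shares at least one cluster with this one.
function relatedSlugs(slug: string): string[] {
  return Object.values(CLUSTERS)
    .filter((members) => members.includes(slug))
    .flat()
    .filter((s) => s !== slug);
}

console.log(relatedSlugs('letters-mode-is-mercy'));
// [ 'why-dating-apps-feel-exhausting' ]
```

Because the Related block is computed from this map at render time, retagging an essay updates every inbound link to it on the next deploy.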

The "is it actually live" probe

Vercel deploys are usually fast, but there is one failure mode that bit me hard. A page can return HTTP 200 while serving the home shell when something upstream of the renderer crashes silently. The status code lies.

To catch this, the post-deploy probe asserts three things: the page returns 200, the rendered <title> contains a stem of the slug, and the slug is present in the live sitemap.xml. If any of those fail, the deploy is treated as not-live, even on 200.

# superbot/util/essay_liveness.py
import re

import httpx


def is_live(slug: str) -> bool:
    page = httpx.get(f"https://byvibration.com/essays/{slug}", timeout=10)
    if page.status_code != 200:
        return False
    title = re.search(r"<title>([^<]+)</title>", page.text)
    if not title or not any(
        stem in title.group(1).lower() for stem in slug.split("-") if len(stem) > 3
    ):
        return False
    sitemap = httpx.get("https://byvibration.com/sitemap.xml", timeout=10).text
    return f"/essays/{slug}" in sitemap

This is the single most-useful seven-minute piece of code in the pipeline. It caught a soft-404 for me on essay number five before I noticed the slug was being silently rewritten by middleware. The fix took ten minutes; without the probe it would have taken a week of confusion about why search was not seeing the page.

What I did not build

A markdown loader. A frontmatter parser. An MDX pipeline. A CMS adapter. A headless preview environment. A content directory. A draft state machine. A separate build pipeline for content vs. application code.

Every one of those was suggested by some part of my brain along the way. None of them earned their place. The reason is honest: the corpus is small, the cost of typing prose inline is trivial, and the type checker is the only quality gate that actually catches the bugs that ship in production. The simplest model that works is the model.

The day I have one hundred essays I will probably move to a markdown directory. Until then, the file fits on one screen of any normal editor, the array is sorted by date, and I can grep my own corpus instantly.

What this looks like in practice

The cadence has been roughly one essay every fifteen hours, written in plain prose, ported into the array, slug added to the allowlist, pushed. Vercel deploys. The probe runs. The sitemap updates. Google indexes it within forty-eight hours. The internal links to the rest of the corpus stay correct because they are computed, not hand-maintained.

The unit of friction per new essay is "write the essay." Everything downstream of that is one diff against one file.

If you have an essay practice and you are intimidated by the pipeline question, this is a version of "just ship it" that has an answer. One file. One allowlist. One probe. Nineteen essays in twelve days from that pattern, with the type checker as your friend.

I would skip every CMS until you actually need one.


I work on Byvibration, where the corpus this file feeds lives. The essays index is at byvibration.com/essays.
