Six months into building MosesTab, our church management platform, we had a grand total of twelve pages on our public site. A landing page, a pricing page, a handful of feature pages, and a contact form. We were getting maybe 40 organic visits a day, almost all of them branded searches from people who already knew the name. For a SaaS trying to grow, that is basically zero.
The advice you hear everywhere is "start a blog." So we did. We wrote a few posts about church management, tithing, and volunteer coordination. They were fine. Some ranked, most didn't, and even the ones that ranked brought in people looking for general church advice, not people evaluating software. The conversion rate from blog traffic was effectively zero.
Then I stumbled across the concept of programmatic SEO, the idea that you can generate hundreds of targeted pages from structured data instead of writing each one by hand, and everything clicked. Within a few weeks, we went from 12 pages to over 150, and the pages we generated were far more targeted than anything we could have written manually.
This is how we did it, what went wrong along the way, and what I would do differently if I started over.
The Insight That Changed Everything
People do not search for "church management software" and stop there. They search for very specific things:
- "church management software for Baptist churches"
- "church software for small churches under 100 members"
- "church management software USA"
- "volunteer management tools for churches"
Each of those queries has real search volume, and each one represents a person with a specific context and need. A Baptist church administrator is not going to click on a generic "church management software" page and feel like it speaks to them. But a page titled "Church Management Software for Baptist Churches" that addresses Baptist-specific organizational structures, committee governance, and worship planning? That converts.
The math was obvious. We were never going to write 20 country-specific pages, 10 denomination pages, 30 use-case pages, and 45 glossary entries by hand. But we could write 15 TypeScript data files and let Next.js generate all of them at build time.
Data First, Templates Second
The mistake I almost made was starting with the page template and then figuring out what data to fill in. That gets you thin, repetitive pages that Google will rightly ignore. Instead, I started with the data interfaces.
Here is the one we use for country-specific pages:
// src/data/programmatic/countries.ts
export interface Country {
  name: string;
  slug: string;
  continent: string;
  currency: { code: string; symbol: string; name: string };
  languages: string[];
  statistics: {
    churchCount: string;
    christianPopulation: string;
    growthRate: string;
  };
  localContext: string;
  specificNeeds: string[];
  popularDenominations: string[];
  metaTitle: string;
  metaDescription: string;
  faqs: { question: string; answer: string }[];
}
The key fields are localContext, specificNeeds, and faqs. These are where the unique, substantive content lives. The localContext for the United States talks about the American church landscape: 380,000+ churches, the megachurch phenomenon, early adoption of church management technology, and the highly competitive market for solutions. The localContext for Kenya talks about mobile-first communication, M-Pesa integration needs, and rapid church growth in East Africa. These are not the same page with a swapped noun. They are genuinely different content that addresses genuinely different search intent.
TypeScript catches the obvious mistakes here. Forget metaDescription on a new country entry and the compiler yells at you before you can deploy. With a CMS you would ship a blank meta tag and discover it in Search Console weeks later, or never.
We ended up building 15 of these interfaces: countries, denominations, use cases, glossary terms, church sizes, staff roles, budget tiers, organization types, integrations, and more. Each one exports an array of data objects and a couple of lookup helpers:
export const countries: Country[] = [
  {
    name: "United States",
    slug: "united-states",
    continent: "North America",
    currency: { code: "USD", symbol: "$", name: "US Dollar" },
    statistics: {
      churchCount: "~380,000+",
      christianPopulation: "~230 million",
      growthRate: "Stable",
    },
    localContext:
      "The United States has the world's largest concentration of Christian churches, from small rural congregations to megachurches with tens of thousands of members...",
    specificNeeds: [
      "Text-to-give functionality",
      "Multi-campus support",
      "Advanced reporting and analytics",
    ],
    metaTitle: "Church Management Software for US Churches | MosesTab",
    metaDescription:
      "MosesTab is an all-in-one church management platform for American churches...",
    faqs: [
      {
        question: "How does MosesTab compare to other US church management software?",
        answer: "MosesTab offers all 16 features in one integrated platform...",
      },
    ],
    // ...
  },
  // 19 more countries
];

export function getCountryBySlug(slug: string): Country | undefined {
  return countries.find((c) => c.slug === slug);
}

export function getAllCountrySlugs(): string[] {
  return countries.map((c) => c.slug);
}
Writing the data for 20 countries took an afternoon. Writing 20 individual pages would have taken weeks, and maintaining them would have been a nightmare.
Letting Next.js Do the Heavy Lifting
The actual page generation is almost anticlimactic. Next.js App Router has two functions that make this entire approach work: generateStaticParams tells Next.js which pages to build, and generateMetadata provides the SEO metadata for each one.
// src/app/(main)/for/[country]/page.tsx
import type { Metadata } from "next";
// Path alias assumed; adjust to your tsconfig.
import { getAllCountrySlugs, getCountryBySlug } from "@/data/programmatic/countries";

export async function generateStaticParams() {
  return getAllCountrySlugs().map((country) => ({ country }));
}

export async function generateMetadata({
  params,
}: {
  params: Promise<{ country: string }>;
}): Promise<Metadata> {
  const { country: slug } = await params;
  const country = getCountryBySlug(slug);
  if (!country) return { title: "Not Found" };
  return {
    title: country.metaTitle,
    description: country.metaDescription,
    openGraph: {
      title: country.metaTitle,
      description: country.metaDescription,
      url: `https://mosestab.com/for/${country.slug}`,
    },
    alternates: {
      canonical: `https://mosestab.com/for/${country.slug}`,
    },
  };
}
Run next build and you get /for/united-states, /for/united-kingdom, /for/canada, /for/ghana, each as a static HTML file with its own metadata, Open Graph tags, and canonical URL. No server-side rendering, no database queries at runtime, no API calls. Just HTML files served from a CDN.
The same pattern repeats for every page type. The denomination template generates /denominations/baptist, /denominations/catholic, /denominations/methodist. The solutions template generates /solutions/member-management, /solutions/online-giving, /solutions/event-planning. One page.tsx per category, one data file per category, dozens of pages per category.
Structured Data, or: The Part Everyone Skips
Most programmatic SEO guides stop at metadata. That is a mistake. Google uses JSON-LD structured data for rich results: the FAQ accordions you see in search results, breadcrumb trails, software info panels. If your competitors have these and you don't, you lose clicks even when you rank higher.
We built a reusable component for it:
// JsonLd is a thin wrapper that serializes a schema.org object into a
// <script type="application/ld+json"> tag (assumed implementation):
function JsonLd({ data }: { data: object }) {
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(data) }}
    />
  );
}

export function FAQJsonLd({
  questions,
}: {
  questions: { question: string; answer: string }[];
}) {
  return (
    <JsonLd
      data={{
        "@context": "https://schema.org",
        "@type": "FAQPage",
        mainEntity: questions.map((q) => ({
          "@type": "Question",
          name: q.question,
          acceptedAnswer: {
            "@type": "Answer",
            text: q.answer,
          },
        })),
      }}
    />
  );
}
Every programmatic page includes this alongside BreadcrumbList and SoftwareApplication schemas. The FAQ schema alone is worth implementing: it makes your search result take up more vertical space, which pushes competitors further down the page.
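The BreadcrumbList variant is the same idea. Here is a sketch of a plain builder function, without the component wrapper, that produces the JSON-LD object; the trail values are illustrative:

```typescript
// Build a schema.org BreadcrumbList object from an ordered trail of pages.
interface Crumb {
  name: string;
  url: string;
}

function breadcrumbJsonLd(crumbs: Crumb[]) {
  return {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: crumbs.map((crumb, i) => ({
      "@type": "ListItem",
      position: i + 1, // schema.org positions are 1-based
      name: crumb.name,
      item: crumb.url,
    })),
  };
}

const trail = breadcrumbJsonLd([
  { name: "Home", url: "https://mosestab.com" },
  { name: "Glossary", url: "https://mosestab.com/glossary" },
  { name: "Tithe", url: "https://mosestab.com/glossary/tithe" },
]);
```

Because the breadcrumb trail mirrors the URL path, the builder can be fed directly from the same slug data that generated the page, so the schema never drifts out of sync with the route.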
The Sitemap Strategy
Google does not treat all pages equally, and neither should your sitemap. We assign priority tiers based on conversion potential:
import type { MetadataRoute } from "next";

const countryPages: MetadataRoute.Sitemap = countries.map((country) => ({
  url: `https://mosestab.com/for/${country.slug}`,
  lastModified: new Date(),
  changeFrequency: "monthly",
  priority: 0.8,
}));

const glossaryPages: MetadataRoute.Sitemap = glossaryTerms.map((term) => ({
  url: `https://mosestab.com/glossary/${term.slug}`,
  lastModified: new Date(),
  changeFrequency: "monthly",
  priority: 0.7,
}));
Hub pages like /solutions and /integrations get 0.9. Country and denomination pages get 0.8 because they target people actively evaluating software. Glossary and how-to pages get 0.7; they drive traffic but convert at a lower rate. Blog posts get 0.5. Legal pages get 0.3.
Does Google actually respect the priority field? Officially, they say it is a hint, not a directive. In practice, I have noticed that pages with higher declared priority tend to get crawled sooner after deployment. Whether that is causation or correlation, I cannot say. But it costs nothing to include, so we do.
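The tiering logic condenses to a few lines. Here is a sketch of how the pieces could be combined; the tier table and slug lists are illustrative, and in a real project the function would be the default export of app/sitemap.ts:

```typescript
// Simplified sitemap entry shape (mirrors Next.js's MetadataRoute.Sitemap items).
type SitemapEntry = {
  url: string;
  lastModified: Date;
  changeFrequency: "monthly";
  priority: number;
};

const BASE = "https://mosestab.com";

// Hypothetical tier table: route prefix + slugs -> declared priority.
const tiers: { prefix: string; slugs: string[]; priority: number }[] = [
  { prefix: "/solutions", slugs: ["member-management", "online-giving"], priority: 0.9 },
  { prefix: "/for", slugs: ["united-states", "ghana"], priority: 0.8 },
  { prefix: "/glossary", slugs: ["tithe", "stewardship"], priority: 0.7 },
];

// In a real project this would be `export default function sitemap()`.
function sitemap(): SitemapEntry[] {
  return tiers.flatMap(({ prefix, slugs, priority }) =>
    slugs.map((slug) => ({
      url: `${BASE}${prefix}/${slug}`,
      lastModified: new Date(),
      changeFrequency: "monthly" as const,
      priority,
    }))
  );
}
```

Keeping the tiers in one table also makes the priority policy reviewable at a glance, instead of being scattered across a dozen per-category map calls.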
Internal Linking Is the Actual Secret
The part of this approach that has the most impact on rankings is also the least glamorous: internal linking. Every data interface includes relatedX fields that create connections between pages.
A glossary term like "tithe" links to related terms ("stewardship," "offering," "first fruits") and to relevant product features ("online-giving," "donation-tracking"). A denomination page links to related denominations with similar governance structures. A country page links to the denominations popular in that country.
{
  relatedTerms: ["tithe", "stewardship", "offering"],
  relatedFeatures: ["online-giving", "donation-tracking"],
}
These links serve two purposes. For Google, they distribute authority across the site and signal topical relationships. For visitors, they create natural browsing paths: someone reading about tithing might click through to learn about online giving, and from there to the giving feature page where they can actually sign up. Each click takes them closer to conversion without feeling like a funnel.
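A small helper can resolve those slug lists into concrete hrefs at build time. A sketch, with the route prefixes taken from the URLs mentioned in this post and the data entry illustrative:

```typescript
// Resolve relatedTerms / relatedFeatures slugs into internal link hrefs.
interface GlossaryTerm {
  slug: string;
  relatedTerms: string[];
  relatedFeatures: string[];
}

function relatedLinks(term: GlossaryTerm): string[] {
  return [
    ...term.relatedTerms.map((s) => `/glossary/${s}`),
    ...term.relatedFeatures.map((s) => `/solutions/${s}`),
  ];
}

const tithing: GlossaryTerm = {
  slug: "tithing",
  relatedTerms: ["tithe", "stewardship", "offering"],
  relatedFeatures: ["online-giving", "donation-tracking"],
};

console.log(relatedLinks(tithing));
// ["/glossary/tithe", "/glossary/stewardship", "/glossary/offering",
//  "/solutions/online-giving", "/solutions/donation-tracking"]
```

Because the slugs live in typed data files, a link to a page that doesn't exist is catchable in a build-time check rather than surfacing as a 404 in a crawl report.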
Where It Went Wrong
It was not all smooth. Three things bit us.
City-level pages were a mistake. Early on, I generated pages for 200+ cities: "church management software in Dallas," "church management software in Atlanta," and so on. Google crawled them, decided they were too similar to each other (because they were; the content was mostly identical with the city name swapped), and flagged the entire batch as "Crawled - currently not indexed." Worse, I suspect the thin content diluted the authority of our legitimate pages. We killed all of them.
The lesson: if you cannot write at least 200 words of genuinely unique content per page variant, those pages should not exist. Consolidate them into a parent page instead.
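One way to enforce that rule mechanically is a pre-build check. This is a hypothetical helper, not something from the MosesTab codebase; the 200-word threshold and the localContext field name come from this post:

```typescript
// Fail the build if any page variant's unique content is under the threshold.
function wordCount(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

interface PageVariant {
  slug: string;
  localContext: string; // the unique-content field from the Country interface
}

function assertEnoughUniqueContent(pages: PageVariant[], minWords = 200): void {
  const thin = pages.filter((p) => wordCount(p.localContext) < minWords);
  if (thin.length > 0) {
    throw new Error(
      `Thin content on: ${thin.map((p) => p.slug).join(", ")} (< ${minWords} words)`
    );
  }
}
```

Run it in a small script before `next build`, and a thin page variant fails CI instead of sitting unindexed in Search Console for a month.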
We ignored search intent at first. Some of our early glossary pages targeted informational keywords but were written like product pitches. Google is very good at matching page content to search intent. If someone searches "what is church tithing," they want an educational answer, not a sales page. We rewrote every glossary entry to lead with the educational content and only mention the product in a "Related Features" section at the bottom. Rankings improved within weeks.
The sitemap was initially flat. Our first sitemap listed all 150+ pages at the same priority. Google took over a month to index everything. After we added priority tiers, new pages started getting indexed within days of deployment. Again, I cannot prove causation, but the timing was conspicuous.
The Numbers Today
From 15 TypeScript data files, we generate:
| Page Type | Count |
|---|---|
| Solutions / Use Cases | 30 |
| Glossary Terms | 45+ |
| Country Pages | 20 |
| Competitor Comparisons | 15 |
| Region Pages | 11 |
| Denomination Pages | 10 |
| Integration Pages | 10+ |
| How-To Guides | 10+ |
| Total | 150+ |
All statically generated at build time. All with unique metadata, JSON-LD structured data, and internal links. All type-checked by TypeScript so a missing field is a build error, not a silent SEO bug.
What I Would Do Differently
If I were starting over, I would do keyword research before writing a single data object. We maintain a CSV of 877 target keywords now, organized by category and search intent. Every page maps to specific keywords with known search volume. But we built that spreadsheet after creating half the pages, which meant some early pages targeted keywords nobody actually searches for. Research first, generate second.
I would also invest more time in the content itself and less in the infrastructure. The generateStaticParams and generateMetadata pattern took maybe two hours to set up. The JSON-LD components took another afternoon. The sitemap was a few hours. But writing genuinely useful, non-generic content for each data entry: that is where the real work is, and it is the only part that actually determines whether Google indexes your pages or ignores them.
The infrastructure is the easy part. The content is the hard part. Don't let the elegance of the technical solution distract you from the fact that Google ultimately cares about whether your pages are useful to the person searching.
You can see how this all comes together at mosestab.com. Every page under /for/, /solutions/, /glossary/, /denominations/, and /compare/ is generated this way. No CMS, no database, no content writers. Just TypeScript, Next.js, and an afternoon of research per page category.