Himanshu Goswami

How I Built a Personalized Funding Newsletter That Sends Every Subscriber a Different Email

Most newsletters are one-to-many: one piece of content, broadcast to everyone. I wanted to build something different — a newsletter where every single subscriber receives a different set of recommendations based on their profile.

Here's how I built it with Next.js, Supabase, and Gemini.

The architecture

Startup911 is a funding discovery platform for founders, students, and researchers. Users sign up through a detailed questionnaire — their role, the types of opportunities they want (grants, fellowships, accelerators), their target regions, and their sector tags (AI, climate, health, fintech, etc.).

The stack:

  • Frontend: Next.js 14 (App Router) deployed on Vercel
  • Database: Supabase (PostgreSQL) for subscribers, opportunities, and content
  • AI: Gemini 2.5 Flash for content enrichment and generation
  • Email delivery: Custom send pipeline matching subscriber profiles to opportunity tags

Step 1: Ingestion pipeline

Opportunities come from multiple sources — RSS feeds from grant-making organizations, government portals, foundation websites, and manual submissions. I built a set of admin API routes that:

  1. Fetch new opportunities from configured RSS sources (/api/admin/fetch-rss)
  2. Enrich them using Gemini — extracting structured fields like eligibility criteria, funding amount, deadline, regions, and sectors (/api/admin/enrich-opportunities)
  3. Approve them into a newsletter-ready pool

The enrichment step is crucial. Raw RSS data is messy — a grant title might say "Innovation Fund for Early-Career Researchers" but not explicitly state the funding amount, geographic restrictions, or whether it's for individuals or organizations. Gemini reads the source page content and extracts this into structured JSON.
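A minimal sketch of what the enrichment step's output handling might look like. The field names (`funding_amount`, `regions`, `audience`, etc.) are illustrative, not the exact schema the post's pipeline uses — and the fence-stripping step exists because LLMs often wrap JSON replies in markdown code fences:

```typescript
// Illustrative shape for an enriched opportunity; field names are assumptions.
interface EnrichedOpportunity {
  title: string;
  fundingAmount: string | null;
  deadline: string | null;       // ISO date string when the model finds one
  regions: string[];
  sectors: string[];
  audience: "individual" | "organization" | "both" | "unknown";
}

// Normalize a raw Gemini reply into the structured record.
export function parseEnrichment(raw: string): EnrichedOpportunity {
  // Strip markdown code fences the model may wrap around the JSON.
  const stripped = raw
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "")
    .trim();
  const data = JSON.parse(stripped);
  return {
    title: String(data.title ?? ""),
    fundingAmount: data.funding_amount ?? null,
    deadline: data.deadline ?? null,
    regions: Array.isArray(data.regions) ? data.regions : [],
    sectors: Array.isArray(data.sectors) ? data.sectors : [],
    // Fall back to "unknown" rather than trusting a free-form model answer.
    audience: ["individual", "organization", "both"].includes(data.audience)
      ? data.audience
      : "unknown",
  };
}
```

Defaulting every missing field (rather than throwing) keeps one malformed model reply from blocking a whole ingestion batch.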

Step 2: Profile-based matching

Every subscriber has a profile stored in Supabase with their selected tags — think of it as a set of interests (e.g., ["climate-tech", "africa", "pre-seed", "grant"]). When it's time to send, the matching logic scores each opportunity against each subscriber's tag set and assembles a personalized email.

The scoring is straightforward: overlap between the opportunity's enriched tags and the subscriber's profile tags, weighted by how specific the match is. A climate-tech + africa match is more valuable than a generic grant match.
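The weighted-overlap idea can be sketched like this — the tag names and weight values are made up for illustration, not the production scoring table:

```typescript
// Assumed weights: specific sector/region tags count more than generic
// opportunity-type tags, so climate-tech + africa beats a bare "grant" match.
const TAG_WEIGHTS: Record<string, number> = {
  "climate-tech": 3,
  "africa": 3,
  "pre-seed": 2,
  "grant": 1,
  "fellowship": 1,
};

export function scoreOpportunity(
  opportunityTags: string[],
  subscriberTags: string[],
): number {
  const wanted = new Set(subscriberTags);
  return opportunityTags
    .filter((tag) => wanted.has(tag))
    .reduce((sum, tag) => sum + (TAG_WEIGHTS[tag] ?? 1), 0);
}

// Assemble a personalized digest: top-N by score, zero-score matches dropped.
export function pickTop<T extends { tags: string[] }>(
  opportunities: T[],
  subscriberTags: string[],
  limit = 5,
): T[] {
  return opportunities
    .map((o) => ({ o, score: scoreOpportunity(o.tags, subscriberTags) }))
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .map((x) => x.o)
    .slice(0, limit);
}
```

Dropping zero-score matches matters in practice: an empty section in a "personalized" email reads worse than a shorter email.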

Step 3: Decision guides (the SEO layer)

Beyond the newsletter, I wanted the enriched opportunity data to serve a second purpose — SEO. For each high-quality opportunity in the pipeline, I built a one-click "generate page" flow:

```
POST /api/admin/generate-opportunity-page
Body: { "opportunity_id": "<uuid>" }
```

This calls Gemini with the enriched opportunity data and generates a structured decision guide: who the program is for, who it's not for, what selectors look for, key facts, FAQs, and an editorial "should you apply?" take. The output is stored as JSON in opportunity_pages with status: draft and index_status: noindex.

I review the draft, edit if needed, then publish and flip to index — at which point it appears on the public /opportunities page and enters the sitemap.
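The draft-to-published lifecycle above can be sketched as a pair of pure helpers. Everything beyond the two fields named in the post (`status`, `index_status`) is an assumed shape, not the actual `opportunity_pages` schema:

```typescript
type PageStatus = "draft" | "published";
type IndexStatus = "noindex" | "index";

// Assumed record shape for a stored decision guide.
interface OpportunityPage {
  opportunity_id: string; // uuid of the source opportunity
  guide: {
    who_its_for: string[];
    who_its_not_for: string[];
    key_facts: Record<string, string>;
    faqs: { question: string; answer: string }[];
    editorial_take: string;
  };
  status: PageStatus;
  index_status: IndexStatus;
}

// New pages always start as unindexed drafts; nothing generated by the
// model reaches Google without an explicit publish step.
export function newDraft(
  opportunityId: string,
  guide: OpportunityPage["guide"],
): OpportunityPage {
  return {
    opportunity_id: opportunityId,
    guide,
    status: "draft",
    index_status: "noindex",
  };
}

// Publishing flips both flags together, so a page is never live but noindex.
export function publish(page: OpportunityPage): OpportunityPage {
  return { ...page, status: "published", index_status: "index" };
}
```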

Step 4: Programmatic blog posts

I also built a blog generator that creates "Top 10" listicle posts from the same enriched data:

```
POST /api/admin/generate-blog
Body: { "topic": "AI Startup Grants", "time_period": "April 2026" }
```

Gemini generates structured JSON with 10 opportunity cards (program name, organization, funding amount, deadline, summary) plus editorial content. The output includes a structured_items array that the blog template renders as interactive cards — each with a "Read Full Guide" link to the decision guide (when one exists) and a direct "Apply" button.
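The card-rendering side of that can be sketched as a small mapping function. The item fields follow the post's description; `guideSlug` is an assumed field name for the cross-link to a decision guide:

```typescript
// One entry from the generated structured_items array (assumed field names).
interface StructuredItem {
  program_name: string;
  organization: string;
  funding_amount: string;
  deadline: string;
  summary: string;
  apply_url: string;
  guideSlug?: string; // present only when a decision guide exists
}

interface CardProps {
  title: string;
  subtitle: string;
  meta: string;
  applyUrl: string;
  guideUrl: string | null; // "Read Full Guide" link, or null if no guide yet
}

export function toCard(item: StructuredItem): CardProps {
  return {
    title: item.program_name,
    subtitle: item.organization,
    meta: `${item.funding_amount} · deadline ${item.deadline}`,
    applyUrl: item.apply_url,
    // Only render the guide link when a guide has actually been published.
    guideUrl: item.guideSlug ? `/opportunities/${item.guideSlug}` : null,
  };
}
```

Keeping the template dumb like this means the same `structured_items` array can also feed JSON-LD schema generation without re-parsing any prose.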

The tech decisions I'd make differently

Slug generation was a mess at first. My auto-generated slugs looked like the-atlantic-fellows-for-healt-the-atlantic-fellows-for-health-equity-fellowship--2026. I've since cleaned up the generator, but the early slugs were embarrassing. Lesson: think about your URL structure before you have 30 pages indexed in Google.
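A cleaned-up generator along the lines of the fix might look like this — a sketch, not the exact one now in production: lowercase, collapse punctuation runs to single dashes, drop a wholesale-repeated phrase (the failure mode in the slug above), and truncate at a word boundary:

```typescript
export function slugify(title: string, maxLen = 60): string {
  let slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // any run of non-alphanumerics → one dash
    .replace(/-+/g, "-")
    .replace(/^-|-$/g, "");

  // If the whole slug is one phrase repeated twice (title glued to itself,
  // as in the example above), keep a single copy.
  const parts = slug.split("-");
  const half = Math.floor(parts.length / 2);
  if (
    parts.length % 2 === 0 &&
    parts.slice(0, half).join("-") === parts.slice(half).join("-")
  ) {
    slug = parts.slice(0, half).join("-");
  }

  // Truncate at a dash so we never cut mid-word ("healt"-style).
  if (slug.length > maxLen) {
    slug = slug.slice(0, maxLen).replace(/-[^-]*$/, "");
  }
  return slug;
}
```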

I should have used structured JSON from day one instead of asking Gemini to output markdown and then parsing it with regex. The structured items[] approach I'm using now is much more reliable for rendering cards, generating JSON-LD schema, and cross-linking between content types.

Supabase RLS (Row Level Security) matters. My early API routes didn't have proper RLS policies, which meant the admin endpoints were more exposed than they should have been. Lock this down before you have real users.
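For reference, a minimal RLS setup for a table like this looks roughly as follows — the table name matches the one in the post, but the policy is a generic sketch, not the actual Startup911 policy set:

```sql
-- Enable RLS; with no policies defined, all access through the anon and
-- authenticated roles is denied by default.
alter table opportunity_pages enable row level security;

-- Public visitors may read only published pages.
create policy "public can read published pages"
  on opportunity_pages for select
  using (status = 'published');

-- No insert/update/delete policies: writes only work through server-side
-- admin routes using the service role key, which bypasses RLS.
```

The useful property is the default-deny posture: forgetting a policy fails closed instead of exposing admin-only rows.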

Results so far

It's early — the site is about a year old. We have ~175 newsletter subscribers, around half of whom came from organic channels (Google search + ChatGPT referrals, which is an interesting signal for AI-era SEO). Google has indexed 191 pages, and we're getting around 6,500 impressions/month with a 0.6% CTR.

The AI referral traffic is the most surprising part. About 10-15% of our non-direct traffic comes from chatgpt.com and Bing (which feeds Copilot). The structured, fact-heavy format of our decision guides — with explicit eligibility criteria, deadlines, and FAQs — seems to be exactly what LLMs want to cite.

If you're building content sites in 2026, optimizing for AI answer engines (AEO) alongside traditional SEO isn't optional anymore. Structure your content with clear facts, use FAQ schema, and make your key data points extractable.

Try it

If you work with founders or know people looking for grants/fellowships, Startup911's newsletter is free. Every subscriber gets different recommendations based on their profile.

The code patterns I described here — RSS ingestion, AI enrichment, profile-based matching, and programmatic content generation — are applicable to any domain where you need to match structured data to user preferences at scale. Happy to answer questions in the comments.
