小皓Frotes G
How I Built a Zero-Cost, Auto-Updating Content Site for My Daily Word Game Addiction

The Problem

I'm addicted to the NYT Spelling Bee. Every morning at 3 AM EST, a new puzzle drops. And every morning, I found myself googling "spelling bee answers today" — only to land on ad-ridden, slow-loading sites that felt like they were built in 2005.

So I thought: How hard could it be to build something better?

Turns out, not hard at all. Here's how I built spellingbeeanswers.xyz — a clean, fast, auto-updating answer site that costs me $0 and requires zero manual maintenance.


The Stack (Keep It Simple)

I didn't want to over-engineer this. The requirements were:

  • ⚡ Fast load times
  • 🔍 SEO-friendly
  • 🤖 Zero manual updates
  • 💰 Free hosting

The stack:

  • Frontend: Vanilla HTML/CSS/JS (no framework, just speed)
  • Data: Static JSON files (commit to repo)
  • Automation: GitHub Actions + cron
  • Hosting: Cloudflare Pages (unlimited bandwidth, global CDN)
  • Analytics: Google Analytics 4 (free tier)

The Architecture

Instead of a backend server, I went fully static. Here's the flow:

NYT publishes new puzzle
    ↓
GitHub Actions (cron @ UTC 00:00)
    ↓
Scraper fetches answers from public source
    ↓
Update answers.json + sitemap.xml
    ↓
Auto-commit → Cloudflare Pages deploy
    ↓
Site live in < 30 seconds
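The "Update answers.json + sitemap.xml" step in the flow above can be sketched as a small helper. This is a minimal sketch, not the site's actual script; the one-URL-per-day layout and the base URL path format are assumptions.

```javascript
// Build a sitemap.xml body from the list of puzzle dates on disk.
// Assumes one page per date at https://spellingbeeanswers.xyz/<date>.
function buildSitemap(dates, base = "https://spellingbeeanswers.xyz") {
  const urls = dates
    .map((d) => `  <url><loc>${base}/${d}</loc><lastmod>${d}</lastmod></url>`)
    .join("\n");
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    urls +
    "\n</urlset>\n"
  );
}
```

The GitHub Actions job would write this string to `sitemap.xml` before the auto-commit step.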

Why Static?

  1. Speed: No database queries, no server-side rendering. Just HTML.
  2. SEO: Pre-rendered pages, proper meta tags, sitemap auto-generated.
  3. Cost: Cloudflare Pages = $0 for unlimited requests.
  4. Reliability: No server to crash, no database to corrupt.

The Automation Magic

The tricky part: NYT doesn't have an official API for this. So I built a scraper that runs daily via GitHub Actions.

# .github/workflows/daily-update.yml
name: Daily Answer Update
on:
  schedule:
    - cron: '0 0 * * *'  # Every day at UTC midnight
  workflow_dispatch:  # Manual trigger for testing

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install deps
        run: npm ci
      - name: Fetch today's answers
        run: node scripts/fetch-answers.js
      - name: Commit and push
        run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add .
          git diff --quiet && git diff --staged --quiet || git commit -m "Update answers for $(date +%Y-%m-%d)"
          git push

The scraper itself is ~50 lines of Node.js using node-fetch and a few regexes. No Puppeteer, no headless browser — just parsing public HTML. Fast and lightweight.


SEO Strategy (The Real Win)

This isn't just a side project — it's an experiment in SEO-driven development.

What I Did Right:

  1. Keyword in domain: spellingbeeanswers.xyz — tells Google exactly what this is
  2. Daily fresh content: New page every day = Googlebot visits frequently
  3. Structured data: JSON-LD for breadcrumbs, FAQ schema for answer lists
  4. Sitemap automation: Auto-updated with every new puzzle
  5. Core Web Vitals: 100/100 on PageSpeed Insights (vanilla JS + Cloudflare)
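The FAQ schema from point 3 might look like this in the page head; the question wording and answer text are illustrative, not the site's actual markup:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is today's Spelling Bee pangram?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Today's pangram is VETERINARIAN."
    }
  }]
}
</script>
```

This is the markup that makes answer lists eligible for rich results in search.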

The Results (So Far):

  • Indexed by Google in 48 hours
  • First organic traffic within a week
  • Average page load: 0.8s
  • Perfect mobile score

The Code (Minimal & Fast)

Here's the entire data layer:

// data/2025-03-19.json (auto-generated daily)
{
  "date": "2025-03-19",
  "centerLetter": "T",
  "outerLetters": ["A", "E", "I", "N", "R", "V"],
  "pangrams": ["VETERINARIAN"],
  "answers": ["ATTAIN", "ATTIRE", "ENTIRE", ...]
}

And the rendering:

// app.js
async function loadToday() {
  const today = new Date().toISOString().split('T')[0];
  const res = await fetch(`./data/${today}.json`);
  const data = await res.json();

  renderAnswers(data);
  updateMeta(data);  // Dynamic title & description for SEO
}

That's it. No React, no build step, no hydration overhead.
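The `updateMeta()` helper referenced above could be sketched like this; the exact title and description copy are assumptions, not the site's actual strings:

```javascript
// Build the SEO strings from the day's puzzle data (pure, easy to test).
function buildMeta(data) {
  const title = `Spelling Bee Answers for ${data.date} (${data.answers.length} words)`;
  const description =
    `Today's pangram: ${data.pangrams.join(", ")}. ` +
    `Center letter ${data.centerLetter}, all ${data.answers.length} answers inside.`;
  return { title, description };
}

// Apply them to the document (hypothetical helper named in app.js above).
function updateMeta(data) {
  const { title, description } = buildMeta(data);
  document.title = title;
  document
    .querySelector('meta[name="description"]')
    ?.setAttribute("content", description);
}
```

One caveat worth knowing: meta tags set client-side rely on Google rendering the JS, so the static HTML shipped for each day should carry sensible defaults too.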


What I Learned

1. Ship Fast, Iterate Later

I built the MVP in 3 hours. It was ugly, but it worked. The automation came on day 2; the polish came in week 2. Get it live first.

2. Automation > Manual

Setting up GitHub Actions took 30 minutes. It now saves me 5 minutes every day. That's 30+ hours per year of manual work eliminated.

3. SEO is Engineering

Good SEO isn't magic — it's:

  • Fast pages
  • Clear structure
  • Fresh content
  • Proper metadata

All of which are engineering problems, not marketing tricks.

4. Free Tier is Powerful

  • Cloudflare Pages: Unlimited requests
  • GitHub Actions: 2,000 minutes/month
  • Google Analytics: Free forever

You can run serious traffic on $0 infrastructure.


What's Next?

  • Email subscriptions: Daily answer alerts
  • Historical archive: Search past puzzles
  • API endpoint: Let others build on this data
  • More games: Expanding to Wordle, Connections, etc.

Build Your Own

This pattern works for any "daily content" site:

  • Daily crypto prices
  • Weather summaries
  • News aggregators
  • Sports scores
  • Stock market data

The formula: Static site + scheduled scraper + free hosting = passive traffic machine.


Check It Out

🔗 spellingbeeanswers.xyz

Got questions? Drop a comment or find me on Twitter/X.


Built with ☕, vanilla JS, and the desire to never manually update a website again.
