DEV Community

xuejiep-bit

How I Went From Zero Code to 27-Page SEO Website Using Only AI and Free Tools

I recently built nuzlocketracker.xyz — a free online tool for tracking Pokémon Nuzlocke challenge runs. The twist? I wrote zero lines of code myself. Every single line was generated with AI assistance (Claude).

Here's what I learned about building, deploying, and getting a site indexed on Google — without any traditional coding skills.

The Tech Stack (Stupidly Simple)

  • Frontend: Plain HTML, CSS, JavaScript. No React, no frameworks.
  • Hosting: GitHub Pages (free)
  • DNS: Cloudflare (free tier)
  • Domain: a .xyz domain (~$2/year)
  • Analytics: Google Analytics
  • Indexing: Google Search Console + Bing Webmaster Tools

Total monthly cost: $0

What I Built

27 static HTML pages, each targeting a specific long-tail keyword:

  • A main tracker tool with encounter logging
  • Game-specific trackers (Emerald, FireRed, Platinum, Black, White, X, Y, Moon)
  • Gym leader guides
  • Tier list pages
  • Blog articles answering common player questions

Each page includes localStorage for data persistence, type weakness analysis, and pre-loaded route lists for each game.
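The persistence layer for a tracker like this can be sketched roughly as follows. This is a hypothetical reconstruction, not the site's actual code: the key prefix, function names, and encounter schema are my own illustrative choices.

```javascript
// Hypothetical sketch of per-game run persistence via the Web Storage API.
// One key per game, so an Emerald run never overwrites a Platinum run.
const STORAGE_KEY_PREFIX = 'nuzlocke-run-';

function saveRun(storage, game, encounters) {
  // Serialize the encounter list; localStorage only stores strings
  storage.setItem(STORAGE_KEY_PREFIX + game, JSON.stringify(encounters));
}

function loadRun(storage, game) {
  const raw = storage.getItem(STORAGE_KEY_PREFIX + game);
  // Fall back to an empty run if nothing was saved yet
  return raw ? JSON.parse(raw) : [];
}
```

In the browser you would call `saveRun(localStorage, 'emerald', [...])`; passing the storage object in (rather than referencing `localStorage` directly) also makes the functions easy to test outside a browser.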

3 Painful Lessons About Google Indexing

  1. Cloudflare Pages + Googlebot = Redirect Hell

My site worked perfectly in every browser. But Google Search Console showed "redirect error" for 20 out of 21 pages.

The problem: Cloudflare's proxy treats Googlebot differently from regular browsers. After weeks of debugging SSL settings, Bot Fight Mode, and DNS records, I gave up on Cloudflare Pages and switched to GitHub Pages.

Fix: In Cloudflare DNS, set the CNAME record to "DNS only" (gray cloud, not orange). This bypasses the proxy entirely.

Result: Google successfully crawled all pages within 24 hours.

  2. The robots.txt That Wasn't Yours

Cloudflare has a hidden feature called "Managed robots.txt" that silently replaces your custom robots.txt. My Sitemap declaration was being stripped out for weeks.

Fix: Cloudflare dashboard → AI Crawl Control → Turn off "Managed robots.txt"

Always verify by visiting yoursite.com/robots.txt in a browser.
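Beyond eyeballing it in a browser, a small script can confirm the `Sitemap:` line survived deployment. A minimal sketch, assuming you have already fetched the robots.txt body as a string (the function name is mine, not from any library):

```javascript
// Extract every Sitemap declaration from a robots.txt body.
// Returns an empty array if the declaration was stripped out.
function findSitemaps(robotsTxt) {
  return robotsTxt
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.toLowerCase().startsWith('sitemap:'))
    .map(line => line.slice('sitemap:'.length).trim());
}
```

If this returns an empty array for your live robots.txt, something between your repo and the edge (like a managed robots.txt feature) is rewriting the file.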

  3. Non-.com Domains Have Sitemap Bugs

Google couldn't fetch my sitemap.xml from the root directory. Bing had no issues with the same file. Apparently .xyz (and .cc) domains hit a known Search Console issue where root-level sitemaps fail to fetch.

Fix: Place a copy of sitemap.xml in a subdirectory: /sitemap/sitemap.xml and submit that path in Search Console.
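The subdirectory copy is just an ordinary sitemap file; a minimal example of what `/sitemap/sitemap.xml` might contain (the page URLs here are illustrative guesses, not the site's real paths):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://nuzlocketracker.xyz/</loc>
  </url>
  <url>
    <loc>https://nuzlocketracker.xyz/emerald.html</loc>
  </url>
</urlset>
```

Then submit `/sitemap/sitemap.xml` (not the root path) in Search Console.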

Original Pixel Art Instead of Copyrighted Sprites

Since the site will eventually run ads, I couldn't use official Pokémon sprites. Instead, I created an original pixel art decoration system (pixel-decorations.js) that adds:

  • Floating pixel star particles
  • Pixel capture ball dividers between sections
  • Auto-matched pixel icons on headings
  • Pixel grass footer decoration

All SVG-based, all original, zero copyright risk.
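A generator for one of those decorations might look something like this. To be clear, this is my own sketch, not the actual `pixel-decorations.js`: it builds a plus-shaped "pixel star" as an SVG string that can be injected with `innerHTML`.

```javascript
// Hypothetical sketch: render an original pixel-art star as inline SVG markup.
// The shape is drawn on a 5x5 grid of square "pixels".
function pixelStarSVG(size, color) {
  const u = size / 5; // side length of one grid cell
  // Grid coordinates of the filled cells (a symmetric plus/star shape)
  const cells = [
    [2, 0],
    [1, 1], [2, 1], [3, 1],
    [0, 2], [1, 2], [2, 2], [3, 2], [4, 2],
    [1, 3], [2, 3], [3, 3],
    [2, 4],
  ];
  const rects = cells
    .map(([x, y]) => `<rect x="${x * u}" y="${y * u}" width="${u}" height="${u}" fill="${color}"/>`)
    .join('');
  return `<svg width="${size}" height="${size}" viewBox="0 0 ${size} ${size}">${rects}</svg>`;
}
```

Because everything is drawn from scratch as geometry, there's no sprite sheet to license and nothing copied from the games.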

Current Results

  • 5 pages indexed on Google (after 3 weeks of struggle)
  • 27 total pages live
  • Both Google and Bing sitemaps successfully crawled
  • ~95 impressions for the main keyword

Key Takeaways

  1. GitHub Pages > Cloudflare Pages for Google indexing reliability
  2. Always check robots.txt in browser after deploying
  3. Put the sitemap in a subdirectory for non-.com domains
  4. One page = one keyword for SEO
  5. AI can write the code, but it can't debug your infrastructure; that took more time than building the site itself

If you're a non-developer thinking about building a website, it's absolutely doable with AI tools. Just be prepared to spend more time on deployment and SEO than on the actual code.


What's your experience with Google indexing? Any horror stories? I'd love to hear them in the comments.
