Tomoki Ikeda

Zero Lines of Code: How Claude Code and Gemini Built My SaaS

I didn't write a single line of code. Not one.

Claude Code (Anthropic) wrote every line — frontend, backend, database, infra, tests. Gemini (Google) powers all the AI features in production.

Two competing AIs built one product. I just told them what to do.

The product is Nokos — an AI note-taking app that auto-captures your conversations from Claude Code, ChatGPT, Cursor, Copilot, and 20+ other AI tools. It's live and it works. Free tier available.

Here's exactly how it happened.

My Role: The AI Orchestrator

People ask: "If you didn't code, what did you do for 30 days?"

Everything that isn't code:

  • Product vision & design docs: I wrote the project plan and technical design with Claude.ai — describing the product concept, data model, and architecture in conversation. The AI drafted the documents; I made every decision
  • Architecture decisions: Every technical choice — database schema, auth strategy, document format — was mine. I described them in plain language, and Claude Code implemented them
  • AI cost audit: Manually reviewed every endpoint. Found 4 critical bugs Claude Code had written — including a storage limit that was tracked but never enforced
  • Pricing strategy: Researched 10+ competitors, designed a soft-gate model where free users taste every premium feature
  • Legal: Terms of Service (16 articles, 10 languages), downgrade policy, inactive account policy
  • QA: Ran every flow, filed bugs, described fixes for Claude Code to implement
  • Design review: Had Gemini 2.5 Pro review screenshots and critique the UI

This role isn't "non-technical." It's "technical without typing." You need to understand databases, APIs, and auth flows to make good decisions. You just don't type the code.

My workflow:

Me (Product Manager)
  ↓ "Build a session ingest endpoint with rate limiting"
Claude Code (Developer)
  ↓ writes code, runs tests, commits
  ↓ calls Gemini API to test AI features
Gemini Flash (Production AI)
  ↓ generates metadata, writes diaries, powers RAG

Three roles. Two AIs from competing companies. One product.

Why Two AIs?

Claude Code is the best AI developer I've found. 1M token context window holds the entire project. It refactors across dozens of files in a single pass.

Gemini Flash is the best production AI for the price. ~30x cheaper than Claude Sonnet. It powers all of Nokos's features: auto-tagging, daily diaries, natural language search, and Personal AI (RAG).

I didn't pick sides. I picked the best tool for each job.

Here's the wild part: during development, Claude Code called the Gemini API directly — testing prompts, evaluating outputs, iterating until the AI pipeline worked. An Anthropic AI invoking a Google AI, debugging its responses, and adjusting prompts to improve them.

But they didn't just coexist silently. They debated. When I asked Claude Code to consult Gemini on a design decision, Gemini would give its opinion. Claude would consider it, blend it with its own perspective, and then present me with a synthesized recommendation: "Here's what Gemini suggested, here's what I think, and here's my recommendation — what would you like to do?"

Two AIs from competing companies, having a constructive discussion, with a human making the final call.

The Tech Stack

| Layer | Tech | Why |
| --- | --- | --- |
| Frontend | Next.js 15 | App Router, RSC, single codebase for web + mobile |
| Backend | Hono | Lightweight, fast, perfect for Cloud Run |
| Database | PostgreSQL + pgvector + pg_bigm | Vector search for RAG, bigram search for Japanese |
| Production AI | Gemini Flash | ~30x cheaper than Claude. Fast metadata/diary/report generation |
| Embedding | gemini-embedding-001 | 768 dimensions, fire-and-forget on every save |
| Development | Claude Code (Opus) | 1M context, wrote 100% of the codebase |
| Design docs | Claude.ai (Sonnet) | Co-authored project plan and technical design |
| Design review | Gemini 2.5 Pro | Screenshot analysis, UI/UX feedback |
| Auth | Firebase Auth | Google + GitHub + Email |
| Billing | Stripe | 3 plans, multi-currency (JPY/USD) |
| Infra | GCP (Cloud Run, Cloud SQL, Cloud Storage) | Managed, auto-scaling |
| i18n | next-intl | 10 languages from day one |
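
One row worth unpacking: "fire-and-forget on every save" means the memo save never waits on the embedding call. A minimal sketch of that pattern, with all names (`embedMemo`, `saveMemo`, `embedQueue`) invented for illustration:

```typescript
// In-flight embedding ids. Illustration only: the real call would send the
// memo text to gemini-embedding-001 and write the 768-dim vector to pgvector.
const embedQueue: string[] = [];

async function embedMemo(memoId: string, text: string): Promise<void> {
  embedQueue.push(memoId);
  // ... call the embedding API, store the vector ...
}

interface Memo {
  id: string;
  text: string;
}

// Saving never waits on the embedding: `void` discards the promise, and a
// failure only logs. The user's save succeeds either way.
function saveMemo(memo: Memo): Memo {
  // 1. Persist the memo itself (omitted here).
  // 2. Fire and forget the embedding.
  void embedMemo(memo.id, memo.text).catch((err) =>
    console.error(`embedding failed for ${memo.id}:`, err),
  );
  return memo;
}
```

The trade-off is eventual consistency: a memo is searchable by keyword immediately, and by vector similarity a moment later.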

The Claude-Gemini Collaboration, In Practice

Here's a real example. I asked Claude Code to build the AI metadata generation feature — when you save a memo, AI automatically generates a title, tags, category, sentiment, and importance.

Claude Code:

  1. Wrote the Gemini API client
  2. Designed the prompt (in both Japanese and English)
  3. Called Gemini Flash to test the prompt with sample memos
  4. Evaluated the JSON output quality
  5. Adjusted the prompt based on Gemini's responses
  6. Built the API endpoint with proper error handling
  7. Added fire-and-forget embedding generation
  8. Wrote tests

An Anthropic AI writing code that calls a Google AI, testing the Google AI's outputs, and iterating on prompts to improve them. This happened dozens of times throughout development.
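
Step 4, evaluating the JSON output, is where LLM pipelines usually break. A sketch of what that validation layer can look like; the field names, defaults, and clamping rules are assumptions for illustration, not Nokos's actual schema:

```typescript
// Metadata the generation step produces for each saved memo.
// Field names here are assumed for illustration.
interface MemoMetadata {
  title: string;
  tags: string[];
  category: string;
  sentiment: "positive" | "neutral" | "negative";
  importance: number; // 1–5
}

// LLM output is occasionally malformed or out of range, so clamp values
// and fall back to defaults instead of failing the save.
function parseMetadata(raw: string): MemoMetadata {
  let data: any;
  try {
    data = JSON.parse(raw);
  } catch {
    data = {}; // not JSON at all: use defaults for everything
  }
  const sentiments = ["positive", "neutral", "negative"];
  return {
    title:
      typeof data.title === "string" && data.title.trim()
        ? data.title.trim()
        : "Untitled memo",
    tags: Array.isArray(data.tags)
      ? data.tags
          .filter((t: unknown): t is string => typeof t === "string")
          .slice(0, 10)
      : [],
    category: typeof data.category === "string" ? data.category : "uncategorized",
    sentiment: sentiments.includes(data.sentiment) ? data.sentiment : "neutral",
    importance: Number.isInteger(data.importance)
      ? Math.min(5, Math.max(1, data.importance))
      : 3,
  };
}
```

A wrapper like this is also what makes prompt iteration safe: Claude Code can tighten the prompt while the validator guarantees the endpoint never returns garbage.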

When it came to pricing strategy, I had Claude Code call Gemini to analyze competitor pricing and evaluate our positioning. Gemini came back with sharp criticism of our approach. Claude didn't just pass it along — it incorporated Gemini's feedback, added its own analysis, and proposed a revised strategy. Then it asked me: "What do you think?" I made the call, and Claude implemented the changes across the codebase and documentation in one session.

What Went Wrong

1. Claude Code doesn't understand your business

It writes code. It doesn't understand why. I had to constantly prevent it from adding features I didn't need or over-engineering simple functions.

My fix: CLAUDE.md — a 300+ line file at the repo root describing every architecture decision, convention, and constraint. Claude Code reads it at the start of every session. It's the most important file in the repo.
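
For readers who haven't used one: CLAUDE.md is a plain markdown file that Claude Code reads automatically at the start of a session. A minimal sketch of what such a file can look like, assembled from details mentioned in this post; the paths and specific rules are invented for illustration, not Nokos's actual file:

```markdown
# CLAUDE.md

## Architecture (do not change without asking)
- Backend is Hono on Cloud Run; all routes live under `apps/api/src/routes/`
- PostgreSQL via Prisma; vector search uses pgvector, Japanese search uses pg_bigm

## Conventions
- Every new table gets a Row-Level Security policy before merge
- All user-facing strings go through next-intl — never hardcode English

## Constraints
- Production AI calls use Gemini Flash only (cost); never swap models silently
- Billing and quota logic requires explicit human sign-off before implementation
```

The value is less in any single rule than in the fact that every session starts from the same constraints instead of Claude's best guess.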

2. AI-generated billing code is dangerous

In a single audit session, I found 4 critical issues:

  • A storage limit constant that was defined but never enforced in the upload handler
  • A backward-compatible endpoint that bypassed all rate limits
  • A memo counter that never decremented on delete (free users could get permanently locked out)
  • A monthly reset that only triggered from one code path (users hitting another path first would be blocked with stale counters)

Claude Code wrote all of this. Each piece worked in isolation. None worked together correctly.

Lesson: Always manually audit security and billing logic. AI doesn't think about exploit paths.
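
The first bug in that list is a whole class of AI failure: the constant exists, the enforcement doesn't. A minimal sketch of the missing guard; the plan names, limits, and function name are all invented for illustration:

```typescript
// Hypothetical per-plan limits. The real values live in the billing config;
// these numbers are invented for illustration.
const STORAGE_LIMIT_BYTES: Record<string, number> = {
  free: 100 * 1024 * 1024,       // 100 MB (assumed)
  pro: 10 * 1024 * 1024 * 1024,  // 10 GB (assumed)
};

// The bug class: a constant like the one above was defined but the upload
// handler never consulted it. The fix is a guard that runs before the
// file is accepted.
function assertWithinStorageLimit(
  plan: string,
  usedBytes: number,
  incomingBytes: number,
): void {
  const limit = STORAGE_LIMIT_BYTES[plan];
  if (limit === undefined) {
    throw new Error(`unknown plan: ${plan}`); // fail closed, not open
  }
  if (usedBytes + incomingBytes > limit) {
    throw new Error("storage limit exceeded"); // map to HTTP 413 in the route
  }
}
```

Note the detail an audit catches and a unit test usually doesn't: the constant and the handler each look correct in isolation; only tracing the upload path end to end reveals that nothing connects them.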

3. "It works locally" doesn't mean it deploys

Claude Code can't test against real infrastructure. I lost days to:

  • Docker builds failing because pnpm's strict dependency isolation needed `node-linker=hoisted` (a one-line fix)
  • Cloud SQL Proxy TLS failing because the slim Docker image was missing `ca-certificates`
  • Prisma 7 breaking the `url = env()` syntax that Prisma 6 required

Each fix was trivial once found. Finding them was the hard part.
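
For reference, here is what the first two one-line fixes look like. File placement is assumed from a standard pnpm + Docker setup, not taken from the Nokos repo:

```
# .npmrc (repo root): flatten pnpm's node_modules so the Docker build can resolve it
node-linker=hoisted

# Dockerfile (runtime stage): slim Debian images ship without CA certificates,
# which breaks TLS to the Cloud SQL Proxy
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates
```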

The Numbers

After 30 days of solo development with Claude Code + Gemini:

  • 24 database tables with Row-Level Security on every single one
  • 19 API routes, ~60 endpoints
  • 15 AI tool integrations (Claude Code, Codex, Cursor, Copilot Chat, Aider, and more)
  • 10 languages (Japanese, English, Chinese, Korean, Hindi, Spanish, Portuguese, German, Turkish, French)
  • 473 tests passing
  • 3 deployed services on Cloud Run
  • 27 Playwright E2E tests
  • ~$70/month infrastructure cost
  • 0 lines of code written by a human
  • 1 founder

What I Learned

  1. The PM role becomes more important, not less. When AI writes all the code, the bottleneck shifts to decision-making. What to build, why, and in what order — these questions don't go away. They become everything.

  2. Use competing AIs — and let them debate. Claude Code is great at building. Gemini is great at evaluating. When they disagree, you get a richer perspective. The human's job is to be the tiebreaker.

  3. Design docs matter more than ever. I co-authored the project plan and technical design with Claude.ai before writing any code. These documents became the shared context that kept Claude Code on track. Without them, AI writes what it thinks you want. With them, AI writes what you actually want.

  4. Audit everything that touches money or security. AI generates plausible-looking code that can have subtle, critical bugs. Trust but verify.

  5. Ship before you're ready. I spent too long on world-building and 10-language support when I should have been getting user feedback.

Try Nokos

nokos.ai — unlimited memos, AI chat, and auto-capture from 20+ AI tools. Free to start.

If you use Claude Code, Cursor, or ChatGPT daily, try connecting them to Nokos. Your AI conversations are full of knowledge that vanishes after every session. Nokos catches it all.

We're launching on Product Hunt soon — follow @tomoking1122 to catch it.


I want to hear from you:

  1. Have you used AI to build an entire product? What was your experience?
  2. Would you trust an AI-built codebase in production?
  3. Is "Product Manager + AI" the future of solo SaaS?

Drop your thoughts in the comments. I read and reply to every one.

Building in public. Follow the journey on Twitter/X.
