DEV Community

Andrew Rozumny
Your framework choice is now your biggest AI cost lever

The Wasp team published something worth reading today: they gave
Claude Code the exact same feature prompt for two functionally identical apps, one built in Next.js and one in Wasp, and measured everything.

The numbers:

| Metric | Wasp | Next.js |
| --- | --- | --- |
| Total cost | $2.87 | $5.17 |
| Total tokens | 2.5M | 4.0M |
| API calls | 66 | 96 |
| Output tokens (code written) | 5,416 | 5,395 |

The last row is the interesting one. The AI wrote almost exactly
the same amount of code. But it cost 80% more to do it in Next.js.

The reason: cache creation and cache reads. Every LLM call sends the codebase context again; prompt caching discounts the re-reads, but you still pay to load the context into the cache and then to read it back on every turn. A bigger codebase means every single call costs more before the model writes a single line.

Next.js cache creation was 113% more expensive. Not because the AI did more. Because it had more boilerplate to read before it could start.
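To see why the context side dominates, here's a back-of-the-envelope cost model. All the per-million-token rates and token counts below are illustrative assumptions, not Anthropic's actual price sheet or the post's measured numbers:

```python
# Sketch of per-call LLM cost. All rates and token counts are
# invented for illustration, not real pricing.
INPUT_PER_M = 3.00         # fresh (uncached) input tokens, $ per million
CACHE_WRITE_PER_M = 3.75   # cache creation, $ per million
CACHE_READ_PER_M = 0.30    # cache read, $ per million
OUTPUT_PER_M = 15.00       # output tokens, $ per million

def call_cost(fresh, cache_write, cache_read, output):
    """Dollar cost of one LLM call, broken down by token type."""
    return (fresh * INPUT_PER_M
            + cache_write * CACHE_WRITE_PER_M
            + cache_read * CACHE_READ_PER_M
            + output * OUTPUT_PER_M) / 1_000_000

# Same output size, different context size: the bigger codebase pays
# more on every turn even though the code written is identical.
small = call_cost(fresh=2_000, cache_write=30_000, cache_read=200_000, output=5_400)
big   = call_cost(fresh=2_000, cache_write=65_000, cache_read=400_000, output=5_400)
print(f"small context: ${small:.4f}   big context: ${big:.4f}")
```

Note that the output term is identical in both calls; the entire gap comes from cache writes and reads, which is exactly the shape of the Wasp benchmark result.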

What this actually means

We've been evaluating frameworks on DX, performance, and ecosystem.
Add a new one: context efficiency.

How much of an AI's context window goes to signal (your business logic) vs noise (framework boilerplate)?

Wasp's declarative config means auth, routing, and jobs are defined in ~10 lines. The Next.js equivalent is spread across middleware, route handlers, session files, and API directories. Same result, 4x the tokens.

The compounding problem

This test was a single feature. Real apps accumulate features. Every new route, every new model, every new API handler adds to the context that gets re-read on every single LLM call.

The performance degradation isn't linear either — research shows that AI performance degrades well before the context window fills.
You're not just paying more per call, you're getting worse output.
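A toy model makes the compounding concrete. Assume each shipped feature adds a fixed number of tokens to the context and takes a fixed number of LLM calls to build; both numbers below are invented for illustration:

```python
# Toy model of the compounding effect: each shipped feature adds tokens
# to the context that every later call must re-read.
TOKENS_PER_FEATURE = 20_000   # context added per feature (assumption)
CALLS_PER_FEATURE = 60        # LLM calls to build one feature (assumption)
BASE_CONTEXT = 50_000         # starter-project context (assumption)

def tokens_read_for_feature(n):
    """Input tokens re-read while building feature n (1-indexed)."""
    context = BASE_CONTEXT + (n - 1) * TOKENS_PER_FEATURE
    return context * CALLS_PER_FEATURE

total = sum(tokens_read_for_feature(n) for n in range(1, 21))
# Feature 20 re-reads ~8.6x what feature 1 did: linear context growth
# makes cumulative input cost grow quadratically.
print(f"feature 1:  {tokens_read_for_feature(1):,} tokens")
print(f"feature 20: {tokens_read_for_feature(20):,} tokens")
print(f"total over 20 features: {total:,} tokens")
```

The per-feature numbers are made up, but the shape isn't: if context grows linearly with features, the tokens re-read over the project's life grow quadratically.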

What to do about it

  1. Measure your codebase token count now.
    Run: find . \( -name "*.ts" -o -name "*.tsx" \) -print0 | xargs -0 wc -c
    Divide the byte total by roughly 4 and you have a ballpark token count — your AI cost baseline.

  2. Audit your boilerplate ratio.
    How much of that is business logic vs glue code?
    The higher the glue ratio, the worse your AI economics.

  3. Consider framework choices through this lens.
    Wasp, Rails, Laravel — highly opinionated frameworks
    have a new advantage they didn't have 2 years ago.
    Less boilerplate = cheaper AI = faster iteration.
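If you'd rather get a token estimate directly instead of raw bytes, here is a short Python sketch of step 1 using the common ~4-characters-per-token rule of thumb (a heuristic, not a real tokenizer):

```python
# Rough token-count estimate for a TypeScript codebase, assuming
# ~4 characters per token (heuristic; real tokenizers vary by content).
from pathlib import Path

EXTENSIONS = {".ts", ".tsx"}
CHARS_PER_TOKEN = 4  # rule-of-thumb divisor, not a tokenizer

def estimate_tokens(root="."):
    """Sum file sizes for matching extensions and divide by the heuristic."""
    total_chars = sum(
        p.stat().st_size
        for p in Path(root).rglob("*")
        if p.is_file()
        and p.suffix in EXTENSIONS
        and "node_modules" not in p.parts  # skip vendored dependencies
    )
    return total_chars // CHARS_PER_TOKEN

if __name__ == "__main__":
    print(f"~{estimate_tokens():,} estimated tokens")
```

Skipping node_modules matters here: vendored dependencies usually aren't sent to the model, so counting them would inflate the baseline.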

The irony

The same properties that make a codebase easy for humans to navigate — explicit, verbose, self-documenting — are exactly what's expensive for AI. The abstractions we used to dismiss as "magic" are now a genuine economic advantage.

I've been thinking about this for ToolDock — a browser-based
dev tools platform I'm building. Every tool page is pure business logic with almost no framework boilerplate, because everything runs statically. The AI token efficiency on it is noticeably better than client projects I've worked on with heavier stacks.

Worth checking the full Wasp post for the methodology details — they open-sourced both apps and the measurement scripts, which is the right way to publish a benchmark.

What's your current codebase token count?
