type.win is a Balatro-styled typing arcade I built — two games, a leaderboard, live on Cloudflare Workers. The interesting part isn't really the game, though. It's that the bulk of the routine commits to it are written by an autonomous Claude Code loop, not by me directly.
I review a dev branch periodically and merge it to main. The loop never touches production and never deploys. It opens PRs, auto-merges them when CI is green, and builds shared knowledge across sessions through a handful of markdown files in the repo.
This post is about how that loop is structured, what works, and what's already broken.
The MAPE-K shape
MAPE-K is a self-adaptive systems pattern from early-2000s autonomic computing: Monitor, Analyze, Plan, Execute, plus shared Knowledge. The shape maps surprisingly well onto autonomous coding agents.
Each session of the loop runs the same five phases inside an isolated git worktree.
1. Monitor — load shared knowledge before doing anything
The session has no memory of prior runs except what's persisted to the repo. So phase one is reading:
- `.claude/invariants.md` — hard rules. "Leaderboard logic must remain server-trusted." "No theme toggle." "Desktop only."
- The last 20 entries of `changelog.md` — what predecessors did, what's partial or blocked.
- `.claude/rejected-decisions.md` — proposals already explored and explicitly rejected. Don't re-propose.
- `nextideas.md` — my directional file. Priorities, ideas, occasional hard directives.
- `gh pr list --state open` — currently-open PRs, mapped to the files they touch. Those files are claimed; this session must not modify them.
This phase is non-negotiable. Skipping it produces drift, duplicate work, and re-litigation of decisions I already made.
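For illustration, here's a minimal sketch of what that monitor step gathers, written as a wrapper script. The file paths and the `gh` command come from the list above; the script itself, its names, the `--json` flag, and the assumption that changelog entries start with `## ` are mine, not the real loop (which does all of this inside the prompt).

```ts
// gather-context.ts (hypothetical helper, not the actual loop)
import { readFileSync } from "node:fs";
import { execSync } from "node:child_process";

const read = (path: string) => readFileSync(path, "utf8");

// Hard rules, rejected proposals, and my directional file
const invariants = read(".claude/invariants.md");
const rejected = read(".claude/rejected-decisions.md");
const ideas = read("nextideas.md");

// Last 20 changelog entries, assuming each entry starts with an "## " header
const changelogTail = read("changelog.md").split("\n## ").slice(-20);

// Open PRs and the files they touch: those files are claimed by other sessions
const openPrs: { number: number; files: { path: string }[] }[] = JSON.parse(
  execSync("gh pr list --state open --json number,files", { encoding: "utf8" })
);
const claimedFiles = new Set(openPrs.flatMap((pr) => pr.files.map((f) => f.path)));

console.log({ claimedFiles, changelogEntries: changelogTail.length });
```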
2. Analyze — pick exactly one task by tier
Strict priority order:
- Tier A — a real bug. Run `bun run test`, `npx tsc --noEmit`, `bun run check`. If anything fails on `main`, that's the highest priority. Otherwise scan recent commits, dead code, missing test coverage on security paths, a11y gaps.
- Tier B — planned work. Smallest shippable slice from `nextideas.md`.
- Tier C — improve an existing AI-authored PR. Rebase, fix CI, address reviewer notes.
- Tier D — a new idea. Only if A through C produced nothing.
The phrase "do not invent bugs" under Tier A is load-bearing. Without it, the loop fabricates problems to look productive.
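To make the Tier A gate concrete, here is a hedged sketch of the "is anything actually broken on main?" check. The three commands are the ones listed above; the script wrapper and its names are purely illustrative.

```ts
// pick-tier.ts (illustrative; the real loop makes this decision inside the prompt)
import { execSync } from "node:child_process";

const checks = ["bun run test", "npx tsc --noEmit", "bun run check"];

function firstFailingCheck(): string | null {
  for (const cmd of checks) {
    try {
      execSync(cmd, { stdio: "pipe" });
    } catch {
      return cmd; // a genuine failure on main: Tier A, highest priority
    }
  }
  return null; // nothing broken; do not invent bugs, fall through to Tier B/C/D
}

const broken = firstFailingCheck();
console.log(broken ? `Tier A: fix "${broken}"` : "No Tier A work: consult nextideas.md and open PRs");
```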
3. Plan — scoped, explicit, security-aware
For anything touching the leaderboard, HMAC, or session tokens, the plan has to walk through the entire tamper surface. For anything touching the PixiJS scene, the plan has to address the React-isn't-aware-of-frame-state pattern (more on that below).
4. Execute — TDD, then PR
Tests first. Code second. Open a PR against dev (never main). Auto-merge if CI is green.
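The hand-off itself is ordinary `gh` plumbing. A rough sketch of that step (the `--base dev` and auto-merge flags are standard gh; the wrapper and branch handling are my illustration):

```ts
// open-pr.ts (sketch of the Execute hand-off, not the real tooling)
import { execSync } from "node:child_process";

const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

run("git push -u origin HEAD");          // push the worktree's feature branch
run("gh pr create --base dev --fill");   // PRs target dev, never main
run("gh pr merge --auto --squash");      // merge happens only once CI is green
```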
5. Knowledge — append to changelog
The session writes a single changelog.md entry: what was done, what's left, what surprised it. The next session reads this. It's the only memory across sessions.
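The entry itself is tiny, roughly three fields. A sketch of the shape, with the caveat that the real entries are free-form markdown and this helper is hypothetical:

```ts
// append-changelog.ts (hypothetical; the session writes its own entry)
import { appendFileSync } from "node:fs";

interface ChangelogEntry {
  done: string;       // what was done
  remaining: string;  // what's left or blocked
  surprises: string;  // what surprised it
}

function appendEntry(e: ChangelogEntry): void {
  const lines = [
    `## ${new Date().toISOString()}`,
    `- Done: ${e.done}`,
    `- Left: ${e.remaining}`,
    `- Surprised by: ${e.surprises}`,
    "",
  ];
  appendFileSync("changelog.md", lines.join("\n") + "\n");
}
```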
Stack notes for anyone curious
The product is TanStack Start (React 19, file routing, SSR) on Cloudflare Workers. PixiJS v8 for Word Fall, plain DOM for Type Race. Drizzle + Neon Postgres. Clerk for auth.
A few non-obvious choices worth pulling out:
PixiJS inside React without re-render storms
The Word Fall scene is a plain TS class instantiated once in a `useEffect`. It owns the `app.ticker.add` loop and mutates Pixi objects directly every frame. Per-frame state never goes through React — that would re-render the tree at 60fps and tank everything. React only hears about gameplay through a batched event emitter for the HUD (score, WPM, lives, streak).
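A stripped-down sketch of that shape, where the class owns the ticker and React only ever sees batched HUD events. Class and member names here are illustrative, not the actual game code:

```ts
// WordFallScene.ts (illustrative sketch)
import { Application, Container, Ticker } from "pixi.js";

type HudState = { score: number; wpm: number; lives: number; streak: number };

export class WordFallScene {
  private words = new Container();
  private hud: HudState = { score: 0, wpm: 0, lives: 3, streak: 0 };
  private listeners = new Set<(s: HudState) => void>();
  private hudDirty = false;

  constructor(private app: Application) {
    app.stage.addChild(this.words);
    app.ticker.add(this.update, this); // the scene owns the frame loop
  }

  // Per-frame work mutates Pixi objects directly and never touches React state
  private update(ticker: Ticker): void {
    for (const word of this.words.children) {
      word.y += 2 * ticker.deltaTime;
    }
    if (this.hudDirty) this.flushHud(); // HUD updates are batched, at most once per frame
  }

  private flushHud(): void {
    this.hudDirty = false;
    for (const fn of this.listeners) fn({ ...this.hud });
  }

  // React subscribes here for score/WPM/lives/streak, nothing per-frame
  onHudChange(fn: (s: HudState) => void): () => void {
    this.listeners.add(fn);
    return () => this.listeners.delete(fn);
  }

  // Redundant resize path, called imperatively from a React-owned ResizeObserver
  forceSize(w: number, h: number): void {
    this.app.renderer.resize(w, h);
  }
}
```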
There's also a React-owned ResizeObserver that imperatively calls `scene.forceSize(w, h)`, in addition to PixiJS's own `resizeTo`. PixiJS's observer can silently fail on iOS Safari URL-bar animations, leaving `app.screen` stuck at 0 and freezing the loop. The redundant path is intentional — don't remove it.
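And the React side: a single effect that creates the scene once and wires the redundant resize path. The `forceSize` call and the `resizeTo` option come from the description above; the component and ref names are my guesses.

```tsx
// WordFall.tsx (sketch of the mount/resize wiring)
import { useEffect, useRef } from "react";
import { Application } from "pixi.js";
import { WordFallScene } from "./WordFallScene";

export function WordFallCanvas() {
  const containerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const el = containerRef.current!;
    const app = new Application();
    let scene: WordFallScene | undefined;
    let observer: ResizeObserver | undefined;

    (async () => {
      await app.init({ resizeTo: el }); // PixiJS's own resize handling
      el.appendChild(app.canvas);
      scene = new WordFallScene(app);

      // Redundant, React-owned path: resizeTo can silently stall during iOS
      // Safari URL-bar animations, leaving app.screen stuck at 0.
      observer = new ResizeObserver(([entry]) => {
        const { width, height } = entry.contentRect;
        scene?.forceSize(width, height);
      });
      observer.observe(el);
    })();

    return () => {
      observer?.disconnect();
      app.destroy(true);
    };
  }, []);

  return <div ref={containerRef} style={{ width: "100%", height: "100%" }} />;
}
```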
Leaderboard tamper-proofing
Client is fully untrusted. Every score submission has to satisfy:
- Clerk auth (no token → 401)
- HMAC-signed session token, single-use, 1h expiry, replay-protected via a `used_sessions` PK
- Zod discriminated union (Word Fall and Type Race have different shapes and caps; sketched below)
- Math-consistency check on the submitted score
- Server-side WPM recomputation for Type Race — client never supplies WPM
- Cloudflare native rate-limit binding, 5 req / 10s / user
```jsonc
// wrangler.jsonc
"ratelimits": [
  {
    "name": "LEADERBOARD_RATE_LIMITER",
    "namespace_id": "1001",
    "simple": { "limit": 5, "period": 10 }
  }
]
```
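On the Worker side the binding gets consulted per user before anything else touches the database. A minimal sketch (the `limit({ key })` call is the binding's documented shape; the surrounding handler and `Env` type are illustrative):

```ts
// Sketch of the rate-limit gate in the submission handler (Env shape is illustrative)
interface Env {
  LEADERBOARD_RATE_LIMITER: {
    limit(options: { key: string }): Promise<{ success: boolean }>;
  };
}

async function gateSubmission(env: Env, userId: string): Promise<Response | null> {
  const { success } = await env.LEADERBOARD_RATE_LIMITER.limit({ key: userId });
  return success ? null : new Response("Too many submissions", { status: 429 });
}
```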
20 dedicated unit tests cover tamper vectors. Any change to that path has to keep them green — and the loop knows this from the invariants file.
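To give a feel for the shape check, here's a hedged sketch of a discriminated union with per-game caps. The field names and limits are guesses for illustration, not the actual schema:

```ts
import { z } from "zod";

// Hypothetical shapes: the real schema and caps live in the repo
const wordFallScore = z.object({
  game: z.literal("word-fall"),
  score: z.number().int().min(0).max(1_000_000),
  wavesCleared: z.number().int().min(0).max(500),
});

const typeRaceScore = z.object({
  game: z.literal("type-race"),
  charsTyped: z.number().int().min(0).max(50_000),
  durationMs: z.number().int().min(1_000).max(600_000),
  // Note: no client-supplied WPM; the server recomputes it from chars and duration
});

export const scoreSubmission = z.discriminatedUnion("game", [wordFallScore, typeRaceScore]);
```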
What's already broken
A few honest failures from the first few weeks:
- Drift toward speculation. Once Tier A and B are exhausted, sessions reach for Tier D too eagerly. My current guards are rejected-decisions.md plus the strict tier order, but the loop will sometimes invent borderline-justified work to look productive.
- File-claim race. Two sessions starting within the same minute occasionally both pick the same file before one publishes its draft PR. Solvable, but ugly.
- Changelog noise. Sessions over-document the obvious. I haven't found the right prompt phrasing to keep entries terse without losing useful signal.
If you're running a similar loop on a real product, I'd love to hear what broke first for you, and how you guard against scope drift after the obvious bugs are fixed.
Try it
Live: type.win. Desktop only — PixiJS particle layer plus keyboard-first input.
Roast everything. Especially the loop architecture — I want to know where this falls apart at scale.