🔥 I burned $40 on Claude credits… fixing ONE hydration bug.
Same loop. Three times:
• Agent “fixed” the layout
• Mobile nav broke again
• I pasted more logs into chat
• Repeat
(Yes, I absolutely tried 10 console.logs before admitting I needed help.)
And here’s the part that stung: The AI wasn’t dumb. It literally couldn’t see my app.
We keep blaming prompts. But the real problem? We’re debugging blind… and expecting the AI not to.
Next.js 16.2 quietly shipped the missing pieces for this. Most of us just haven’t changed how we work yet.
Here’s what actually changed:
• AGENTS.md + local docs → your agent reads docs for the Next.js version you’re actually on, every time
• Browser logs in your terminal → no screenshots, no copy-paste
• Dev server lock file → no more ghost next dev processes fighting each other
• agent-browser → your AI can inspect the real React tree from the CLI
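For context: AGENTS.md is just plain markdown your agent reads before touching the code. A minimal sketch of what mine encodes — these lines are illustrative, not the exact rules from the report:

```markdown
# AGENTS.md

## Stack
- Next.js 16.2, App Router. Check the local docs before citing any API.

## Debugging rules
- Read the dev server terminal output before proposing a fix —
  browser logs are forwarded there.
- Never start a second dev server; the lock file means one
  `next dev` instance owns the port.
- Inspect live UI state with agent-browser instead of asking
  me for screenshots.
```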
Put it together and the loop flips: Agent writes code → next dev logs everything → agent inspects real UI state → agent fixes with context (not vibes)
No more: "Can you share a screenshot?" "Try increasing z-index…"
👉 Read the full field report
Inside the report:
• The exact AGENTS.md rules I’m using
• The exact browserToTerminal config you need
• What agent-browser actually looks like
• A PPR bug that went from 40 minutes → 5
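To give you a taste of the browser-logs piece: it boils down to one flag in next.config.ts. This is a sketch only — the option name and placement here are my assumption about where the setting lives, so grab the exact config from the report:

```typescript
// next.config.ts — sketch only; the experimental flag name below is an
// assumption, not verified against the Next.js 16.2 docs
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  experimental: {
    // Forward browser console output (logs, warnings, errors) into the
    // `next dev` terminal, so the agent reads it without screenshots
    browserDebugInfoInTerminal: true,
  },
};

export default nextConfig;
```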
If your workflow still looks like copy → paste → pray, this will feel uncomfortable. In a good way.