I want to talk about something the "AI lets anyone build" crowd doesn't discuss.
Building with AI and understanding what you built are two completely different things.
I've been shipping products for 6 months with Cursor and Claude Code. I'm proud of what I've built. Real users, real revenue.
And there are files in my own codebase I couldn't explain under pressure.
The speed gap is real
AI agents write code faster than humans can read it. A comfortable reading pace is 200–300 lines per hour with genuine comprehension. An agent produces that in minutes.
So what happens? You scroll, skim, trust, merge. Every session. Because you have to.
Why this matters more than people admit
Veracode's 2025 research found that AI coding tools pick the insecure option 45% of the time when both a secure and an insecure implementation are available. A hardcoded key. An open endpoint. No rate limiting.
The agent doesn't warn you. You don't notice. Your users find out later.
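To make that concrete, here's a minimal sketch of catching just one of those flaw classes, hardcoded credentials, in a diff before merge. The function name, regex, and sample diff are mine, not from any real tool; production scanners like gitleaks do far more than this.

```python
import re

# One narrow check: added lines that look like hardcoded credentials.
# This is an illustration of the flaw class, not a real secret scanner.
SECRET_PATTERN = re.compile(
    r"""(?i)(api[_-]?key|secret|token|password)\s*=\s*["'][^"']{8,}["']"""
)

def flag_hardcoded_secrets(diff_text: str) -> list[str]:
    """Return added diff lines that appear to embed a secret."""
    return [
        line for line in diff_text.splitlines()
        if line.startswith("+") and SECRET_PATTERN.search(line)
    ]

diff = '+API_KEY = "sk-1234567890abcdef"\n+timeout = 30'
print(flag_hardcoded_secrets(diff))  # → ['+API_KEY = "sk-1234567890abcdef"']
```

The point isn't this regex; it's that nothing in the default agent loop runs even a check this trivial before you merge.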
What I think is missing
The toolchain needs a layer between "agent writes code" and "developer ships code" that keeps humans actually informed: not reviewing after the fact, but staying in the loop as it happens.
That layer doesn't really exist yet. But it's being built.
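One piece of such a layer could be as simple as a "comprehension budget": using the 200–300 lines-per-hour reading pace above, refuse to treat a diff as reviewed if it exceeds what one attentive session can actually absorb. This is a hypothetical sketch; the names and thresholds are mine, not an existing tool's.

```python
# Assumed mid-range reading pace from the post: 200-300 lines/hour.
READING_PACE_LPH = 250   # lines per hour with genuine comprehension
SESSION_HOURS = 0.5      # length of one focused review session

def review_budget_ok(added_lines: int) -> bool:
    """True if the diff fits inside one attentive review session."""
    return added_lines <= READING_PACE_LPH * SESSION_HOURS

print(review_budget_ok(90))    # → True: a small agent diff is reviewable
print(review_budget_ok(1200))  # → False: split it before pretending to review
```

A gate like this doesn't create understanding, but it stops the "scroll, skim, trust, merge" loop from masquerading as review.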
Is this something you've felt? How are you currently handling comprehension of AI-generated code?