DEV Community

Sarkar


Why your current code review process is broken for AI-generated code

Code review exists to catch problems before they ship.

But code review was designed for code that humans wrote. Code that has an author who understood what they were writing and can answer questions about it.

AI-generated code breaks every assumption that makes code review work.

The reasoning is gone

When a developer writes code, the reasoning behind each decision exists somewhere — in their head, in the commit message, in the PR description. When an AI agent writes code, that reasoning never existed in a form that can be reviewed.

You're reviewing output. Not thinking.

The volume breaks review

A senior developer can meaningfully review maybe 200–400 lines of code per hour. An AI agent produces that in minutes. If your team is using agents seriously, you're either doing shallow review at scale or you're becoming a bottleneck.

Both outcomes are bad.
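The throughput gap above is worth making concrete. A quick back-of-envelope calculation, using the 200–400 LOC/hour review estimate from the text and an assumed (illustrative, not measured) agent output rate:

```python
# Back-of-envelope math for the review bottleneck.
# REVIEW_RATE comes from the 200-400 LOC/hour range above (mid-point);
# AGENT_RATE is an illustrative assumption, not a benchmark.
REVIEW_RATE_LOC_PER_HOUR = 300
AGENT_RATE_LOC_PER_HOUR = 300 * 12  # ~300 LOC every 5 minutes

# How many reviewer-hours does one hour of agent output consume?
review_hours_per_agent_hour = AGENT_RATE_LOC_PER_HOUR / REVIEW_RATE_LOC_PER_HOUR
print(review_hours_per_agent_hour)  # -> 12.0
```

Even if these numbers are off by a factor of two in either direction, one agent working steadily generates several reviewer-days of work per day. That's the bottleneck.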

The timing is wrong

By the time a PR exists, the context of the coding session is gone. The developer who submitted the PR often can't answer why the code was written a certain way — because they were watching the agent work, not making every decision themselves.

What actually needs to change

Review needs to move earlier — during generation, not after. The human needs to stay informed as the agent writes, not catch up after it's done.
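One way to picture "review during generation" is an approval gate inside the agent loop: the agent declares its intent for each step, and a human accepts or rejects before the next step runs. This is a minimal sketch of that idea only — `Step` and `review_in_loop` are hypothetical names, not a real agent API:

```python
# Hypothetical sketch: review moved into the generation loop.
# The agent proposes one step at a time; a human checkpoint runs
# while the session context is still fresh, instead of after the PR.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Step:
    intent: str  # what the agent says it is about to do
    diff: str    # the change it proposes for that intent


def review_in_loop(steps: Iterable[Step],
                   approve: Callable[[Step], bool]) -> List[Step]:
    """Accept steps one at a time; stop at the first rejection."""
    accepted: List[Step] = []
    for step in steps:
        if not approve(step):  # human (or policy) gate per step
            break
        accepted.append(step)
    return accepted


# Example policy: small utility changes pass, risky rewrites get stopped.
steps = [
    Step("add retry helper", "+10 lines"),
    Step("rewrite auth module", "+900 lines"),
]
accepted = review_in_loop(steps, lambda s: "rewrite" not in s.intent)
print([s.intent for s in accepted])  # -> ['add retry helper']
```

The point of the sketch isn't the gate logic itself — it's that the unit of review becomes the agent's stated intent plus a small diff, while the reasoning still exists, rather than a thousand-line PR after the fact.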

This is an unsolved problem in the current toolchain. Curious how others are thinking about it.
