DEV Community

Zac

An honest look at AI coding tools after a year of daily use

A year ago I was skeptical. Six months ago I was a convert. Now I have a more measured view.

Here's what I actually think after using Claude Code, Cursor, and Copilot in real projects.

What got better than I expected

Boilerplate speed. Writing the 15th CRUD endpoint, the 8th form component, the 12th test file following the same pattern — AI is dramatically faster at this. The mental cost of repetitive work is real, and eliminating it is valuable.

Codebase exploration. "Where is the rate limiting configured?" "Show me all places we validate user input." "What does this function actually do?" These used to require grep and reading. Now they take 10 seconds.

Documentation. Getting AI to write clear docstrings, explain complex functions, or draft a README is genuinely useful. Not because the model is a better writer than you, but because the activation energy to write docs is high and AI lowers it.

What I overestimated

Architectural decisions. The model can generate many options but doesn't have good judgment about which is right for your context. You still have to understand the tradeoffs. AI generates candidates; you decide.

Complex debugging. For tricky bugs, AI is a useful second opinion, not a reliable solver. It guesses. Sometimes it guesses right. You still need to understand the root cause.

Long-running autonomous tasks. Letting AI run for hours on a complex task produces code that often needs significant rework. The bigger the task, the more important your involvement is throughout.

The productivity boost is real but uneven

For tasks that are clear in scope, implementation-heavy, and follow existing patterns, 3-5x faster is realistic.

For tasks that are architecturally significant, poorly specified, or dependent on domain context, expect marginal improvement at best. Often you're slower, because you're correcting bad assumptions.

What you still need to know

Developers who learn to code primarily through AI assistance have gaps in their mental models. When AI produces buggy code, you need to understand why it's wrong. When it makes an architectural choice, you need to evaluate it. That requires the fundamentals.

The developers getting the most value from AI tools are the experienced ones. Not because they use it more — because they can evaluate the output.

The workflow, honestly

I write the architecture. I make the decisions. I review the output. AI does the typing, the boilerplate, the mechanical parts.

That's the collaboration that works: AI as a junior developer who executes while you design and review. Not AI as a replacement for judgment.

The tools keep getting better. The judgment part doesn't get automated.

What I've learned building with AI agents: builtbyzac.com/story.html.
