
Gabriel Porras

Claude Code is making me a better dev (and I have the data) 🧠⚙️

“Lately I have seen you take a more active role in PR review and really investigating, catching and detailing issues that come up. This is a big help to the team in my opinion.”

A tech lead told me that after the weeks when I used AI the most to write code. Not less, but more. And when I ran /insights (the command that analyzes your Claude Code session history and generates a report on patterns, frictions, and metrics), I understood why.

Quick disclaimer: I’m not affiliated with Anthropic. I use these tools because they work for me, and I’m sharing this because the data might be useful to other devs.

This isn’t “giving up programming”: it’s another stage of the craft 🧬

Software development has always been a chain of abstractions: each leap reduced keystrokes, not responsibility. Agents seem to be the next leap: less typing, more deciding and maintaining.

My setup 🧩

I went from using ChatGPT for quick questions to integrating Claude Code into my daily workflow.

  • Claude Code (Sonnet 4.5) in the terminal: full implementations, multi-file work, iterative refinement with feedback.
  • JetBrains AI (with Sonnet 4.5): focused questions in the IDE, PR reviews, specific prompts.

I’m not into what some people call vibe coding: building very fast with AI and iterating until it works, without necessarily digging into the technical “why” behind each decision. I’m into controlled, AI-assisted development: the agent accelerates execution, but I steer. Everything that gets merged goes through my line-by-line review. If something breaks, I’m responsible.

The numbers: what /insights says about my real usage 📊

I ran /insights inside Claude Code and got a report covering only my terminal activity over the last two weeks. Note: I’ve been using the tool for months—this report captures just that period.

  • 445 messages in 12 days (≈ 37.1 msgs/day)
  • +5,867 / −3,088 lines across 61 files
  • A dominant pattern: Iterative Refinement across 13 sessions

*[Screenshot: Claude Code Messages]*

The top two categories of what I asked for were tied: Feature implementation (17) and Bug fix (17), followed by UI refinement (8), PR comment classification (7), and code review response drafting (7).

I don’t use it only to “generate code.” I use it to complete the whole loop—from initial implementation to addressing review feedback.

*[Screenshot: What You Wanted / Top Tools Used]*

Where the flow breaks (and that’s the good part) 🧯

The most useful part of the report wasn’t what I did well. It was this:

  • Buggy Code (13): early attempts with type mismatches, wrong props, logic mistakes.
  • Wrong Approach (9): building from scratch when the codebase already had the right pattern to follow.
  • Others: overly aggressive changes, minor inaccuracies.

*[Screenshot: Primary Frictions]*

Yes—first attempts often have errors. But the key learning was clear: my problems with an agent aren’t solved by using it less, but by giving it better context.

When context is vague, the agent guesses. When context includes the right types and the existing codebase pattern, it converges fast.
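To make the contrast concrete, here is a sketch of the two kinds of prompts. The file and type names below are hypothetical, not from my actual repo:

```text
Vague (the agent has to guess shapes and conventions):
  "Add a status badge to the order card."

Context-rich (types and the existing pattern are named up front):
  "Add a status badge to OrderCard.tsx. Use the OrderStatus union
   from types/order.ts and follow the pattern in PaymentBadge.tsx."
```

The second prompt takes thirty extra seconds to write and, in my experience, often saves a whole correction cycle.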

The pattern that described my workflow better than I could 🪞

/insights captured my dominant flow: I delegate ambitious multi-file implementations, then actively steer through iterations until it lands. Buggy first drafts aren’t a blocker; they’re the expected cost of moving fast—as long as I’m guiding the direction.

*[Screenshot: Key Pattern]*

It’s the same approach I’d take with a talented junior dev: ask for an ambitious first draft knowing it will need refinement, then guide it with context and repo rules.

What I want to reinforce (according to the report) ✅

Three patterns I want to keep:

1) Code reviews with judgment. I separate real bugs from non-actionable comments and push back when needed with grounded reasoning. The agent helps organize and draft, but the decision of what to accept or reject is mine.

2) End-to-end iteration in one flow. Implementation → feedback → fixes → tests. A detail I liked: 86 uses of TodoWrite. With agents, planning is part of the work.

3) Pattern-driven decisions. I prefer consistency with the codebase over “new ideas” that increase maintenance cost. When a proposal diverged from conventions, I restarted using the existing pattern as the reference.

*[Screenshot: Impressive Things You Did]*

Takeaways I’ll institutionalize (and measure) 📌

From the frictions, the report suggests rules I’ll encode in CLAUDE.md… Here are two examples:

  • Pattern-first: before implementing a feature, look for an existing pattern in the codebase and use it as the reference.
  • Types-first: before proposing TypeScript fixes, read the relevant types and interfaces. No “guessing shapes.”
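As a sketch, those two rules could be encoded like this in CLAUDE.md (the wording is mine, not copied from the report):

```markdown
## Workflow rules

- **Pattern-first:** before implementing a feature, search the codebase for a
  similar existing implementation and use it as the reference. Do not invent a
  new structure when a convention already exists.
- **Types-first:** before proposing TypeScript fixes, read the relevant types
  and interfaces in the repo. Never guess the shape of an object.
```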

*[Screenshot: Suggested CLAUDE.md Additions]*

The report also suggests creating skills (reusable commands for repetitive flows). For example, one for code review responses: list comments, classify them, propose minimal fixes, draft replies, and verify the build.
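A minimal sketch of what such a skill file might look like. The exact location and frontmatter fields depend on your Claude Code version, and everything below (including the skill name) is illustrative:

```markdown
---
name: review-response
description: Triage PR review comments and draft replies
---

When asked to handle review feedback on the current PR:

1. List every open review comment.
2. Classify each one: real bug, style nit, question, or non-actionable.
3. Propose the minimal fix for each actionable comment.
4. Draft a short reply per comment, explaining the fix or the push-back.
5. Run the build and report whether it passes.
```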

*[Screenshot: Custom Skill]*

Responsibility: the part you can’t delegate 🧱

There’s a difference between using agents and turning your brain off. A friend of mine—an administrator, not a dev—builds apps insanely fast with AI tools. It’s impressive, and his goal is for the apps to work. Mine is for them to also be maintainable and defensible: understand them, debug them, and keep them running when something breaks.

Tools can go down (even big services like AWS go down). If the tool goes down, I still need to open the repo, understand what’s happening, and fix it, especially for the changes I shipped in my PRs. Any tool can produce code; no tool can produce the judgment to decide what code should exist and what shouldn’t. AI can write; ownership can’t be delegated.

If you want to try this without becoming dependent 🧪

If you’re in the same transition, the only general advice I’d give is:

  • Treat AI as a teammate, not autopilot. It accelerates execution, but you choose the direction.
  • Invest in context. Correct types and repo patterns matter more than a fancy prompt.
  • After 2–4 weeks (and if you use Claude Code), run /insights. Not to “validate yourself,” but to spot repeated frictions (and fix habits).
  • If you’re getting started, a structured starting point helped me a lot: the free AI-Assisted Programming course by Nebius x JetBrains.

What’s next 🧭

I don’t think agents “kill” development. I think they shift where the valuable work lives: less typing, more judgment, more review, more defensible technical decisions.

My next step is to measure whether the changes (especially pattern-first and types-first) reduce buggy code and wrong approach in my next /insights cycle. If the data improves, I’ll share it. If it doesn’t, I’ll share that too. For me, having data-driven feedback on my workflow is gold.

