"Lately I have seen you take a more active role in PR review and really investigating, catching and detailing issues that come up. This is a big help to the team in my opinion."
A tech lead told me that after the weeks when I used AI the most to write code. Not less; more. And when I ran /insights (the command that analyzes your Claude Code session history and generates a report on patterns, frictions, and metrics), I understood why.
Quick disclaimer: I'm not affiliated with Anthropic. I use these tools because they work for me, and I'm sharing this because the data might be useful to other devs.
This isn't "stopping programming"; it's another stage of the craft
Software development has always been a chain of abstractions: each leap reduced keystrokes, not responsibility. Agents seem to be the next leap: less typing, more deciding and maintaining.
My setup
I went from using ChatGPT for quick questions to integrating Claude Code into my daily workflow.
- Claude Code (Sonnet 4.5) in the terminal: full implementations, multi-file work, iterative refinement with feedback.
- JetBrains AI (with Sonnet 4.5): focused questions in the IDE, PR reviews, specific prompts.
I'm not into what some people call vibe coding: building very fast with AI and iterating until it works, without necessarily digging into the technical "why" behind each decision. I'm into controlled, AI-assisted development: the agent accelerates execution, but I steer. Everything that gets merged goes through my line-by-line review. If something breaks, I'm responsible.
The numbers: what /insights says about my real usage
I ran /insights inside Claude Code and got a report covering only my terminal activity over the last two weeks. Note: I've been using the tool for months; this report captures just that window.
- 445 messages in 12 days (≈37.1 msgs/day)
- +5,867 / −3,088 lines across 61 files
- A dominant pattern: Iterative Refinement across 13 sessions
The top two categories of what I asked for were tied: Feature implementation (17) and Bug fix (17), followed by UI refinement (8), PR comment classification (7), and code review response drafting (7).
I don't use it only to "generate code." I use it to complete the whole loop, from initial implementation to addressing review feedback.
Where the flow breaks (and that's the good part)
The most useful part of the report wasn't what I did well. It was this:
- Buggy Code (13): early attempts with type mismatches, wrong props, logic mistakes.
- Wrong Approach (9): building from scratch when the codebase already had the right pattern to follow.
- Others: overly aggressive changes, minor inaccuracies.
Yes, first attempts often have errors. But the key lesson was clear: my problems with an agent aren't solved by using it less; they're solved by giving it better context.
When context is vague, the agent guesses. When context includes the right types and the existing codebase pattern, it converges fast.
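A minimal TypeScript illustration of the difference (the `Invoice` type and field names are hypothetical, not from my repo):

```typescript
// Hypothetical domain type. Without this in context, an agent tends to
// guess field names like `total` or `amount` and produce a type mismatch.
interface Invoice {
  id: string;
  totalCents: number; // amounts stored as integer cents
}

// With the interface in context, the agent converges on the real shape
// on the first pass instead of after a round of compiler errors:
function formatTotal(invoice: Invoice): string {
  return (invoice.totalCents / 100).toFixed(2);
}

console.log(formatTotal({ id: "inv-1", totalCents: 1999 })); // "19.99"
```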
The pattern that described my workflow better than I could
/insights captured my dominant flow: I delegate ambitious multi-file implementations, then actively steer through iterations until it lands. Buggy first drafts aren't a blocker; they're the expected cost of moving fast, as long as I'm guiding the direction.
Itâs the same approach Iâd take with a talented junior dev: ask for an ambitious first draft knowing it will need refinement, then guide it with context and repo rules.
What I want to reinforce (according to the report)
Three patterns I want to keep:
1) Code reviews with judgment. I separate real bugs from non-actionable comments and push back when needed with grounded reasoning. The agent helps organize and draft, but the decision of what to accept or reject is mine.
2) End-to-end iteration in one flow. Implementation → feedback → fixes → tests. A detail I liked: 86 uses of TodoWrite. With agents, planning is part of the work.
3) Pattern-driven decisions. I prefer consistency with the codebase over "new ideas" that increase maintenance cost. When a proposal diverged from conventions, I restarted using the existing pattern as the reference.
Takeaways I'll institutionalize (and measure)
From the frictions, the report suggests rules I'll encode in CLAUDE.md. Here are two examples:
- Pattern-first: before implementing a feature, look for an existing pattern in the codebase and use it as the reference.
- Types-first: before proposing TypeScript fixes, read the relevant types and interfaces. No "guessing shapes."
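In CLAUDE.md form, those two rules could read something like this (the wording is mine; adapt it to your repo):

```markdown
## Workflow rules

- Pattern-first: before implementing a feature, search the codebase for an
  existing implementation of something similar and mirror its structure.
- Types-first: before proposing TypeScript fixes, read the relevant types and
  interfaces in the repo. Never guess object shapes.
```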
The report also suggests creating skills (reusable commands for repetitive flows). For example, one for code review responses: list comments, classify them, propose minimal fixes, draft replies, and verify the build.
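As a sketch, the classification step of such a skill could start from a small helper like this (the categories and keyword heuristics are my own assumptions; a real skill would likely let the model do the classifying and use this only as a fallback):

```typescript
// Hypothetical helper for a "review response" skill: bucket PR comments
// so real bugs get fixed first and style nits get quick replies.
type Category = "bug" | "question" | "style-nit" | "non-actionable";

interface ReviewComment {
  author: string;
  body: string;
}

function classify(comment: ReviewComment): Category {
  const text = comment.body.toLowerCase();
  // Crash/correctness keywords take priority over everything else.
  if (/(crash|broken|fails|null|undefined|race)/.test(text)) return "bug";
  if (text.includes("?")) return "question";
  if (/(nit|style|naming|typo)/.test(text)) return "style-nit";
  return "non-actionable";
}

console.log(classify({ author: "lead", body: "This fails when the list is empty" }));
// "bug"
```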
Responsibility: the part you can't delegate
There's a difference between using agents and turning your brain off. A friend of mine (an administrator, not a dev) builds apps insanely fast with AI tools. It's impressive, and his goal is for the apps to work. Mine is for them to also be maintainable and defensible: I want to understand them, debug them, and keep them running when something breaks.
Tools can go down (even big services like AWS go down). If the tool goes down, I still need to open the repo, understand what's happening, and fix it, especially for the changes I shipped in my PRs. Any tool can produce code. It can't produce the judgment to decide what code should exist and what shouldn't. AI can write. Ownership can't.
If you want to try this without becoming dependent
If you're in the same transition, the only general advice I'd give is:
- Treat AI as a teammate, not autopilot. It accelerates execution, but you choose the direction.
- Invest in context. Correct types and repo patterns matter more than a fancy prompt.
- After 2–4 weeks (and if you use Claude Code), run /insights. Not to "validate yourself," but to spot repeated frictions and fix habits.
- If you're getting started, a structured starting point helped me a lot: the free AI-Assisted Programming course by Nebius x JetBrains (linked below).
What's next
I don't think agents "kill" development. I think they shift where the valuable work lives: less typing, more judgment, more review, more defensible technical decisions.
My next step is to measure whether the changes (especially pattern-first and types-first) reduce Buggy Code and Wrong Approach in my next /insights cycle. If the data improves, I'll share it. If it doesn't, I'll share that too. For me, data-driven feedback on my own workflow is gold.
Credits
- The course that got me into this: AI-Assisted Programming by Nebius x JetBrains
- Terminal tool: Claude Code
- In my IDE (WebStorm): JetBrains AI
- Photo by Daniil Komov on Unsplash






