Reading time: 12 minutes | Difficulty: Intermediate
I didn't plan to write this post. But after two weeks of building across multiple projects with Claude Code, I looked at the git log and realized something interesting: the tool had gotten better at working with me — not because of a software update, but because it learned how I think.
This isn't a review or a tutorial. It's a field report. What worked, what didn't, what I'd do differently, and a few tricks I haven't seen anyone talk about.
🔷 What Two Weeks of Real Work Looks Like
Across my projects over a two-week sprint, Claude Code and I shipped roughly 40 commits. Security hardening. SEO fixes. Accessibility improvements. A comprehensive test suite overhaul. Automated blog cross-posting. Docker pipeline improvements. Translations. Refactoring.
None of it was greenfield "build me a todo app" work. It was the messy, contextual stuff that breaks most AI tools — fixing hreflang tags on a bilingual site, debugging why Docker kept serving stale pre-rendered HTML, getting Content Security Policy headers right without breaking analytics scripts.
Here's what I noticed: the first few sessions felt like onboarding a new hire. By the end of the second week, it felt more like pair programming with someone who'd read my playbook.
🔷 The Memory System Changes Everything
Most AI coding assistants are goldfish. Every conversation starts from zero. Claude Code is different — it has two memory systems that compound over time.
File-Based Memory (Project-Scoped)
Each project can have a CLAUDE.md file and a set of rules. Mine includes code style preferences, git workflow, testing standards, security rules, and architecture principles. Claude Code reads these at the start of every session.
This sounds basic, but the ROI is absurd. I spent maybe an hour writing my CLAUDE.md and rules files. Every session since then has started with Claude Code already knowing:
- Use conventional commits (`feat:`, `fix:`, `chore:`)
- Never commit `.env` files
- Run single tests during development, not the full suite
- Server Components by default, `'use client'` only when needed
- Functional components with named exports
That hour of setup has saved me hundreds of corrections.
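To make that concrete, here's a condensed sketch of what such a file can look like. The headings and exact wording are illustrative, not my verbatim file:

```markdown
# CLAUDE.md

## Git
- Conventional commits only: `feat:`, `fix:`, `chore:`
- Never commit `.env` files

## Testing
- During development, run the single affected test file, not the full suite

## React
- Server Components by default; add `'use client'` only when needed
- Functional components with named exports
```

Short, declarative rules like these work better than prose paragraphs: they're unambiguous, and they're cheap to extend every time you catch a new recurring mistake.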
Knowledge Graph (Cross-Project)
The second layer is a shared knowledge graph accessible via MCP (Model Context Protocol). This is where things get interesting — it persists across projects and tools. When I correct Claude Code's behavior in one project, that correction carries over to the next.
For example, early on I noticed Claude Code using words like "delve" and "tapestry" in generated content. I corrected it once. That preference got stored. Now, across every project, it avoids those words without me saying anything.
It also learned:
- My website is a Vite + React SPA, not Next.js (a mistake it kept making until I corrected it)
- I prefer autonomous execution after plan approval — don't ask for permission on each sub-task
- Never fabricate or exaggerate my professional experience
- Quality over speed — better to be slow and correct than fast and wrong
The Compounding Effect
This is the part most people miss. Each correction, each preference saved, each rule written makes the next session marginally better. Over two weeks, those margins stack up. By the end, I was spending almost no time on style corrections, commit message formatting, or architectural disagreements.
The AI didn't get smarter. My configuration got better.
🔷 Patterns and Tricks That Actually Work
Here are the workflow patterns that emerged. Some are obvious in retrospect. A few surprised me.
1. CLAUDE.md Is Your Highest-ROI File
I've said this already, but it bears repeating. The CLAUDE.md file and project rules are the single highest-leverage thing you can do. Every minute spent writing clear rules saves five minutes of corrections later.
My rules cover:
- **Code style** — TypeScript strict mode, no `any`, `interface` over `type` for object shapes
- **Git workflow** — always branch, conventional commits, never force push main
- **Testing** — AAA pattern, mock external deps only, descriptive test names
- **Security** — no hardcoded secrets, parameterized queries, validate at boundaries
- **Architecture** — files under 200 lines, group by feature, dependency injection for testability
None of this is rocket science. But without these rules, I'd be correcting the same mistakes every session.
2. Sub-Agents for Exploration, Main Context for Implementation
One pattern that emerged naturally: use sub-agents (background agents) for research and exploration, and keep the main conversation context for actual implementation.
When I need to understand how something works in an unfamiliar part of the codebase, I spawn a sub-agent to explore. It reads files, traces dependencies, maps architecture — and reports back with a summary. My main context window stays clean and focused on the task.
This matters because context window pressure is real. Long sessions accumulate tool results, file contents, and conversation history. If you dump all your exploration into the main thread, you lose implementation context. Sub-agents are a pressure valve.
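In practice, delegating is mostly a matter of phrasing the request as a hand-off. A prompt along these lines works for me (the wording is illustrative):

```text
Spawn a sub-agent to explore how authentication works in this codebase.
It should read the relevant files, trace the middleware chain, and report
back a short summary with file paths. Don't load the file contents into
this conversation.
```

The key phrase is the last one: the sub-agent burns its own context on the raw files and hands back only the distilled summary.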
3. Plans as Files, Not Chat
Another non-obvious trick: always save plans to files, not as ephemeral chat output. When Claude Code creates an implementation plan, I make it write the plan to a markdown file. Why?
- Plans survive context compression (long conversations get summarized, losing detail)
- Plans can be reviewed by sub-agents before implementation
- Plans create accountability — you can diff what was planned vs. what was built
- Plans can be resumed in a new session if you hit context limits
I use the SPARC methodology: Specification, Pseudocode, Architecture, Refinement, Completion. Each phase is a checkpoint where I can review and redirect before committing to implementation.
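The plan file itself can be a simple skeleton. Here's roughly the shape I use — section contents are placeholders, and you should adapt the phases to your own review cadence:

```markdown
# Plan: <feature name>

## Specification
What we're building and why; acceptance criteria.

## Pseudocode
High-level logic only, no implementation detail.

## Architecture
Files touched, new modules, data flow.

## Refinement
Edge cases, error handling, test plan.

## Completion
- [ ] Implemented
- [ ] Tests pass
- [ ] Reviewed against this plan
```

Because it's a file, a fresh session can be pointed at it and pick up exactly where the last one stopped.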
4. Correct the AI, Don't Tolerate It
This is the biggest behavioral insight. Most developers, when an AI generates slightly wrong code, will either:
a) Accept it and fix it manually later, or
b) Regenerate and hope for better output
Both are wrong. The correct move is to explicitly correct the AI and explain why. "Don't mock the database in these tests — we got burned when mocks diverged from the real schema." This creates a feedback loop. The correction gets stored. The mistake doesn't recur.
Tolerating mediocre output teaches the AI nothing. Correcting it teaches it everything.
5. Multi-Agent Code Review
One of my favorite patterns: after writing a chunk of code, I run multiple review agents in parallel — security, architecture, performance, and code simplification. Each agent has a narrow mandate and catches different things.
A security agent once flagged unsanitized user input being rendered directly into the DOM. A simplification agent caught that I'd created an abstraction for something used exactly once. Neither would have been caught by a single-pass review.
The trick is running them in parallel. Four agents reviewing simultaneously takes the same wall-clock time as one.
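The orchestration is plain fan-out. Here's a runnable sketch of the pattern, with a stub function standing in for the actual agent dispatch — in real use you'd replace `review` with however you launch a headless review run:

```shell
# Stub standing in for a single narrow-mandate review agent call.
# Swap this for a real headless agent invocation.
review() {
  echo "[$1] review of staged diff"
}

# Fan out: four narrow reviews run in parallel, then wait for all of them.
for mandate in security architecture performance simplification; do
  review "$mandate" > "review-$mandate.txt" &
done
wait  # total wall-clock time is roughly that of the slowest single review

cat review-*.txt
```

Each agent writes its findings to its own file, so the results can themselves be reviewed (or diffed against each other) afterwards.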
6. Testing Culture Compounds Too
On one project, I inherited a codebase with exactly one test. Over several weeks of consistent work — writing tests for every bug fix, every new feature — the count grew past 2,750.
Claude Code is genuinely good at writing tests. Not perfect — it occasionally writes tests that test implementation rather than behavior — but with the right rules (test WHAT it does, not HOW), the output quality is high. The key is making testing non-negotiable in your rules, not optional.
7. Git Worktrees for Parallel Feature Work
When working on multiple features simultaneously, I use git worktree to maintain separate working directories. This lets Claude Code work on one feature while I review another, without branch-switching overhead.
Combined with sub-agents, this creates a genuinely parallel workflow: one worktree for the current feature, a sub-agent exploring the codebase in another, and the main thread orchestrating both.
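The git side of this is only a few commands. The sketch below builds a throwaway demo repo so it runs end to end; in practice you'd start from your existing checkout, and the `myapp` / `feature-auth` names are placeholders:

```shell
# Throwaway demo repo so the commands below are runnable end to end;
# in practice you start from your existing project checkout.
cd "$(mktemp -d)" && git init -q myapp && cd myapp
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "chore: init"

# Create a sibling working directory on its own new branch. Claude Code
# can work in ../myapp-feature-auth while you review in the main
# checkout, with no branch switching in either directory.
git worktree add -b feature-auth ../myapp-feature-auth
git worktree list

# Remove the worktree once the feature branch is merged.
git worktree remove ../myapp-feature-auth
```

Each worktree shares the same object database, so this is much cheaper than a second clone.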
🔷 Where It Breaks Down
Honesty time. Not everything is smooth.
Over-Engineering
Claude Code has a tendency to propose complex solutions when simple ones exist. Today, for example, I asked about making blog posts update dynamically. It went deep into runtime database fetching, React context providers, and a complete frontend rewrite — when the actual answer was "just add auto-merge to the existing publish script." Ten minutes of questioning got us to the simple answer.
The antidote: ask "what's the simplest version of this?" early and often.
Context Window Pressure
Long sessions (3+ hours) start to feel sluggish. The AI has seen too many file contents, too many tool results. Important context from early in the conversation gets compressed or lost.
My workaround: for large tasks, break them into multiple sessions. Each session has a clear scope. Plans and progress get persisted to files so the next session can pick up seamlessly.
Plugin and Hook Noise
I use several plugins and hooks with Claude Code. They're powerful but noisy — sometimes triggering irrelevant skill suggestions or injecting context that has nothing to do with the current task. A Next.js skill firing when I'm working on a Vite project. A Vercel deployment guide when I'm deploying to Hetzner.
This is a tooling maturity issue, not a fundamental problem. But it does add friction.
The Permission Dance
Claude Code errs on the side of caution with destructive operations, which is generally good. But sometimes it asks for permission on things that are obviously safe — running a linter, reading a config file. The friction is small per-instance but adds up.
I've tuned my permission settings to auto-allow common operations. It's worth spending a few minutes configuring this.
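For reference, allow-rules live in the project's settings file. A minimal sketch, assuming `.claude/settings.json` and the `permissions.allow` rule format (check the current Claude Code docs for exact matcher syntax — the commands here are just examples):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)"
    ]
  }
}
```

Allow only operations you'd rubber-stamp anyway; keep the prompt for anything destructive.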
🔷 What I'd Tell Someone Starting Out
If you're considering Claude Code for real work (not just toy projects), here's what I wish I'd known:
Invest in CLAUDE.md first. Before your first coding session, spend 30 minutes writing your rules. Code style, git workflow, testing standards, security rules. This single file determines 80% of output quality.
Don't fight the planning phase. Claude Code wants to plan before implementing. Let it. The plan-then-implement workflow catches architectural mistakes that would cost hours to fix later.
Correct aggressively, not passively. Every time the AI does something you don't like, say so explicitly. "Don't do X because Y." The correction gets stored and compounds.
Use sub-agents for everything exploratory. File searches, codebase exploration, dependency analysis — all of this should happen in sub-agents, not your main thread. Protect your context window.
Persist everything important to files. Plans, decisions, progress notes. Chat is ephemeral. Files survive context compression and session boundaries.
Break large tasks into sessions. A 6-hour session is less productive than three 2-hour sessions with clear scopes. Context quality degrades over time.
Automate the boring parts. Set up hooks for commit formatting, deploy checks, and code review. The less you have to manually trigger, the more you stay in flow.
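Hooks are also configured in the settings file. A hedged sketch of an auto-format hook, assuming the documented `PostToolUse` event and the JSON-on-stdin payload (verify the event names and payload fields against the current hooks docs before relying on this):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs -r npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

With something like this in place, every file Claude Code edits comes back formatted without anyone asking.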
If you're a technical leader evaluating how AI assistants fit into your development workflow — whether for your own productivity or for your team — I help startups figure this out. Not every project needs AI tooling, but when it fits, the compounding effect is real.
🔷 Conclusion: It's Not About Speed
The most common question I get about AI coding assistants is "how much faster are you?" It's the wrong question.
The real value isn't speed — it's consistency. Claude Code doesn't forget my commit conventions at 11pm. It doesn't skip tests because it's Friday. It doesn't introduce `any` types because it's tired. The rules I wrote on day one are enforced on day thirty with the same rigor.
The second value is compounding. Two weeks of corrections and preference-tuning produced an assistant that understands my technical opinions, my writing style, and my quality standards. That investment doesn't reset. It carries forward.
Am I faster? Probably. But more importantly, the quality floor is higher. And the quality floor is what ships.
Originally published at padawanabhi.de