At NDC Manchester 2025, Aleksander Stensby gave one of the more honest talks I’ve seen about AI-assisted coding. It wasn't a hype-filled demo reel; it was a practical breakdown of why tools like Claude Code, Cursor, and GitHub Copilot often disappoint experienced developers—and how to fix it.
The core idea is simple, but uncomfortable:
If AI produces bad code for you, it’s often a workflow problem, not a model problem.
Stensby calls the fix Compounding Engineering. This isn't about "prompt engineering." It’s about teaching your tools over time, the same way you onboard a junior developer: invest in feedback early so they get better week by week.
1. Treat the AI Like a Junior Intern
This is the mindset shift everything else depends on.
If a junior developer submits messy code, you don’t fire them. You review it, explain why it’s wrong, and show what “good” looks like. Most people don’t do this with AI. They accept the output, complain about “AI slop,” and move on.
The shift: Stop being a passive user. If you want better results, you must actively guide, correct, and push back. Your value doesn't disappear just because the code compiles; it shifts to mentoring the tool.
2. Context Management: Less is More
Large language models (LLMs) don't have infinite attention. The mistake many Java devs make is trusting long-running, cluttered chats where the model eventually loses focus.
The fix is boring but effective:
- Clear chats aggressively.
- Store state explicitly.
- Use Markdown for "memory": keep architecture notes, constraints, and rejected approaches in `.md` files. When starting a new session, re-load only the context that matters.
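As a sketch, such a memory file might look like the fragment below. The file name, structure, and every entry are illustrative, not from the talk:

```markdown
<!-- docs/ai-memory.md — hypothetical example; all details are illustrative -->
# Project memory for AI sessions

## Architecture
- Spring Boot service, hexagonal layout: `api/`, `domain/`, `infra/`
- Persistence via Spring Data JPA; no native SQL outside `infra/`

## Constraints
- Java 21, no preview features
- No new runtime dependencies without review

## Rejected approaches
- Reactive WebFlux rewrite: rejected, team unfamiliar, latency is fine as-is
```

At the start of a fresh session, you paste or reference only this file instead of dragging along a stale 200-message chat.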
3. Rule Files are Living Knowledge Bases
Files like CLAUDE.md or .cursorrules aren't just for styling. They are training manuals.
When you fix a subtle bug or define a specific pattern for your Spring Boot service, don’t just fix it once. Update the rule file. Over time, the AI aligns with your specific standards, and those productivity gains compound.
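To make that concrete, here is one way a rule-file entry might grow out of a code review. The conventions below are illustrative examples, not Stensby's actual rules:

```markdown
<!-- CLAUDE.md excerpt — rules are illustrative examples -->
## Service conventions
- Controllers stay thin: validation and delegation only; business logic
  lives in `@Service` classes.
- Never call repositories directly from controllers.

## Lessons learned
- `@Transactional` on private or self-invoked methods is silently ignored
  by Spring's proxies. Annotate public methods on the bean itself.
```

The "Lessons learned" section is the compounding part: every subtle bug you explain once becomes a rule the model sees in every future session.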
4. The "Plan Mode" Strategy
If you let an AI jump straight into implementation, it will make architectural decisions for you—and some will be terrible.
Always ask for a plan first. Ask the model to outline steps, list assumptions, and ask clarifying questions. Planning is cheap; refactoring bad AI-generated architecture is expensive.
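A plan-first prompt might look something like this. The wording and the example task are mine, not from the talk:

```
Before writing any code, produce a short plan for adding idempotency keys
to the payment endpoint:

1. List the files you expect to touch and why.
2. State your assumptions about the existing schema and retry behaviour.
3. Ask me any clarifying questions.

Do not implement anything until I approve the plan.
```

Reviewing three bullet points takes a minute; reviewing five hundred lines of unplanned diff does not.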
5. Your Job is Taste and Judgment
AI can produce working code fast. It cannot reliably produce good code.
Your value lies in knowing when something "feels" wrong even if the tests pass. Don’t just issue commands; have a conversation. Challenge the design choices. This back-and-forth is where quality emerges.
6. Practical Tips for Scalability
- Atomic Tasks: Keep features small. Use GitHub issues or markdown task files.
- Model Selection: Use fast/cheap models for boilerplate; use high-reasoning models for architecture.
- MCP (Model Context Protocol): Use tools that allow the AI to actually "see" your docs, logs, and environment rather than just guessing.
- The Safety Net: AI will break your code. Use Git checkpoints religiously. If rolling back is painful, you won't experiment—and that kills the upside of AI.
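The checkpoint habit from the last bullet can be sketched in a few git commands. This runs in a throwaway repo; the file name and commit messages are illustrative:

```shell
# Illustrative checkpoint/rollback flow in a throwaway repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo

echo 'stable version' > Service.java
git add -A && git commit -qm "checkpoint: before AI refactor"

echo 'questionable AI rewrite' > Service.java   # simulate an AI edit gone wrong
git checkout -- Service.java                     # instant rollback to the checkpoint
cat Service.java                                 # prints "stable version"
```

Because the checkpoint commit costs seconds and the rollback is one command, experimenting with large AI edits stops feeling risky.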
The Real Takeaway
AI coding tools are no longer just advanced autocomplete; they are collaborators. But collaborators need onboarding, feedback, and boundaries.
If you invest in that relationship, your results compound. If you don’t, you just get faster mediocrity.
This post was originally published on The Main Thread.