I was at a client project recently, facing a situation I know well by now. New codebase, no AI tooling set up, and a developer on the team asks me: "How do you get the AI to produce such good results?"
I looked at his screen. He had Claude open - in the browser, no project context, a fresh chat for every single task.
That's the problem. Not the tool, not the model, not the prompt. The missing structure.
After a year of using AI daily in real projects, I'm convinced: the difference between "AI is pretty decent" and "AI has fundamentally changed how I work" isn't the perfect prompt. It's the context you give the AI - before it answers the first question.
CLAUDE.md Is Where Everything Starts
When I start working on a new project with Claude Code, the first thing I create is the CLAUDE.md. Not the first component, not the first feature - the context file.
What goes in it? Everything a new developer would need to know on day one:
- Tech stack and versions
- Project structure and why it's set up that way
- Design system - colors, typography, spacing - concrete, with actual values
- Coding rules: what always applies, what never does
- External links, API endpoints, important accounts
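To make that concrete, here is a minimal excerpt of what such a file can look like. The stack, values, and rules are invented placeholders, not a template you must follow:

```markdown
# CLAUDE.md (excerpt)

## Tech stack
- Nuxt 3, TypeScript (strict mode), Tailwind CSS
- i18n with locales de (default) and en

## Design system
- Primary color: #1A56DB, body text: #111827
- Spacing: multiples of 4px only

## Rules
- Tailwind only - no Styled Components, no inline styles
- Every user-facing text goes through i18n, no hardcoded strings
```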
That sounds like work. It is - once. But after that I save myself from rebuilding that context in every single AI session. I don't explain "we use Tailwind, not Styled Components" or "all texts must go through i18n" anymore. It's in the file. Claude reads it, follows it.
The practical difference: when I say "create a new section for the services page", I get code that uses my design system, my components, is TypeScript strict and contains no hardcoded strings. Not because Claude is magic - because it knows the rules.
PRODUCT_SPEC.md - The Product Vision Written Down
CLAUDE.md describes how things are built. PRODUCT_SPEC.md describes what and why.
I created one for the first time when I realized: if I haven't touched a project for three weeks and then pick it back up, I need a moment to get back into the right frame of mind myself. The AI needs that even more.

My PRODUCT_SPEC.md covers things like:
- What is the core promise of the product?
- Who are the users and what do they want?
- What features exist, and what design decisions are behind them?
- What was deliberately not built - and why?
That last point is underrated. "What we don't build" is just as important as "what we build". If I don't tell the AI that we've deliberately decided against a certain feature, it will suggest or even implement it the next time it seems relevant.
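The structure I use looks roughly like this. The sample entries are invented to show the idea:

```markdown
# PRODUCT_SPEC.md (excerpt)

## Core promise
One sentence: why does this product exist?

## Users
Who they are, what they want, what frustrates them today.

## What we deliberately don't build
- No user accounts in v1: the product works anonymously,
  and sign-up would slow down onboarding
```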
Externalizing Coding Rules - Not Explaining Them in the Prompt
I've noticed that a lot of people include their rules in the prompt every time. "Don't write inline CSS. Don't use the `any` type in TypeScript. Always create a German and English version."
And then they write that again for the next task.
That's exhausting - and error-prone. At some point you forget it, or you phrase it slightly differently, and the AI interprets it slightly differently.
My solution: a thorough section in the CLAUDE.md. Everything that always applies goes there.
A few examples from real projects:
- ESLint must pass without errors before every commit
- No eslint-disable comments without explicit approval
- Every new text must appear in i18n/de.json and i18n/en.json
- Typography: en dash with non-breaking spaces, never the English em dash
Anyone who has ever forgotten to specify that last rule will recognize it: I learned it by spending an hour fixing wrong dashes across an entire project.
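Rules like these can also be enforced mechanically instead of only being documented. Here is a sketch of a check for the i18n rule, assuming flat JSON locale files named de.json and en.json; the objects are inlined sample data so the sketch is self-contained:

```typescript
// Sketch of a locale parity check: every key must exist in both files.
// In a real project these objects would come from reading the JSON files.
const de: Record<string, string> = {
  "nav.home": "Startseite",
  "cta.contact": "Kontakt aufnehmen",
};
const en: Record<string, string> = {
  "nav.home": "Home",
  // "cta.contact" is missing on purpose, so the check below finds it
};

function missingKeys(
  source: Record<string, string>,
  target: Record<string, string>,
): string[] {
  // Keys present in `source` but absent in `target`
  return Object.keys(source).filter((key) => !(key in target));
}

console.log(missingKeys(de, en)); // ["cta.contact"]
console.log(missingKeys(en, de)); // []
```

Wired into a pre-commit hook, a check like this turns a rule the AI (or a human) might forget into one that cannot slip through.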
Recurring Tasks as Skills
This is the part that gets talked about the least - and the one that has helped me the most.
Every project has tasks that always run the same way. After a new feature: run the linter, take a browser screenshot, comment on the ticket with the result. For a new blog post: validate frontmatter, run the typography check, create the German and English versions.
I used to describe this in the prompt every time. "Don't forget to run the linter afterwards" - every single time.
Now I put these workflows into skill files: Markdown files that describe, step by step, what to do in a given context. In the prompt I just mention it briefly: "Use the publish-blog-post skill."
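A skill file is nothing exotic. A hypothetical publish-blog-post skill could look like this; the file name and steps are examples, not a fixed format:

```markdown
# Skill: publish-blog-post

1. Validate the frontmatter: title, date, description, tags must be present.
2. Run the typography check: en dashes with non-breaking spaces, no em dashes.
3. Create both language versions under content/de/ and content/en/.
4. Run the linter; it must pass without errors.
5. Report the result as a short summary.
```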
That sounds like a small thing. In practice, it's what makes the difference between an AI I have to correct and an AI that simply does the right thing.
Define Data Models Before Writing Code
This is a lesson I learned the hard way.
I had given an AI the task of implementing a new feature with several database tables. The result was technically working - but the data model was wrong. Not wrong in the sense of "syntax error", but wrong in the sense of "this will be a problem in three months". Missing constraints, an N:M relation that should have been 1:N, and a field that semantically belonged in a different table.
Since then I do it differently: before I let the AI loose on database work, I write a short model spec. In prose, no special syntax:
```
Table: orders
- id: UUID, Primary Key
- user_id: FK → users.id, NOT NULL
- status: ENUM (pending, processing, shipped, cancelled), NOT NULL
- total_cents: INTEGER, NOT NULL (no DECIMAL - we calculate in cents)
- created_at: TIMESTAMP WITH TIME ZONE, NOT NULL, DEFAULT now()

Relations:
- One order has many OrderItems (1:N)
- One order belongs to exactly one user
```
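Once the spec exists, it translates almost mechanically into code. A TypeScript sketch of the same model - the names and the validation helper are my assumptions, not generated output:

```typescript
// Hypothetical TypeScript mirror of the orders spec above.
type OrderStatus = "pending" | "processing" | "shipped" | "cancelled";

interface Order {
  id: string;          // UUID, primary key
  userId: string;      // FK → users.id
  status: OrderStatus;
  totalCents: number;  // integer cents - no decimals
  createdAt: string;   // ISO timestamp with time zone
}

// Encodes the "cents, not DECIMAL" rule at runtime as well
function assertValidTotal(order: Order): void {
  if (!Number.isInteger(order.totalCents) || order.totalCents < 0) {
    throw new Error(
      `totalCents must be a non-negative integer, got ${order.totalCents}`,
    );
  }
}
```

The point is not this particular code - it's that the AI implements against a model you have already thought through, instead of inventing one on the fly.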
That takes 15 minutes. It saves an hour of refactoring.
Context Engineering Instead of Prompt Engineering
I notice the term "prompt engineering" showing up everywhere right now. Workshops, courses, LinkedIn posts about the perfect phrasing.
I think that's the wrong focus.
The prompt is the last step. The system around it - CLAUDE.md, PRODUCT_SPEC.md, coding rules, skills, data models - is what actually determines quality. The AI is only as good as the context it receives. And that context can be built systematically, versioned and improved over time.
A good prompt in a poorly prepared project delivers mediocre results.
A mediocre prompt in a well-prepared project delivers good results.
I call it context engineering. Anyone who understands this has a real advantage - not just with Claude, but with any AI tool.
TL;DR
| What | Why |
|---|---|
| CLAUDE.md | Tech stack, project structure, design tokens, coding rules |
| PRODUCT_SPEC.md | What we build, why, and what we deliberately don't build |
| Skill files | Recurring workflows defined once, referenced forever |
| Data model specs | Written before code, not discovered through refactoring |
The better the structure around the AI, the better the results. That's not theory - that's the experience from a year of daily use in real projects.
I'm Christopher, a freelance fullstack developer from Hamburg. I write about Vue/Nuxt, TypeScript and working with AI in real projects - at grossbyte.io.