Most developers use Claude like a smart search engine — paste code, get answer, move on. That works until you realize how much you're paying for it. Token costs add up fast when Claude has to rediscover your codebase from scratch every single session. A CLAUDE.md file and a /skills folder fix that.
The Problem
If you've been using Claude as a development tool, you've probably noticed a pattern. You open a new conversation, paste in some code, ask your question, get a decent answer, and close the tab. Next time you need help, you start over. Paste the code again. Re-explain the context again. Watch Claude make the same reasonable-sounding wrong assumptions about your stack again.
That cycle has a cost — and I don't just mean the token bill, though that's real. Every time Claude starts cold, you're paying to re-establish context that should already be there. You're burning tokens on boilerplate explanation instead of actual problem solving. And the suggestions you get back are generic, because Claude is pattern-matching against codebases it's seen before — not yours.
This isn't a Claude problem. It's a workflow problem. Claude has no memory between sessions by design. Without a system to feed it context, you'll keep getting capable-but-wrong answers no matter how good the model gets.
Enter CLAUDE.md
The first piece of the system is a file called CLAUDE.md. It lives in the root of your repository, and Claude reads it automatically at the start of every session. No prompting required. No copy-pasting context into the chat. It's just there.
Think of it as an onboarding document — except instead of writing it for a new team member, you're writing it for your AI collaborator. The goal is the same: get them up to speed on what this project is, how it's structured, and what the rules of the road are before they touch a single line of code.
A good CLAUDE.md answers the questions Claude would otherwise have to guess at. What's the stack? What's the architecture? What patterns do we follow here? What should never happen in this codebase? Where does business logic live? What does a good commit message look like?
Without it, Claude fills those gaps with assumptions. With it, Claude walks into every session already knowing your project the way a senior dev would after a solid onboarding week.
The investment is maybe an hour to write it the first time. The return is every future session starting from the right place instead of zero.
What Goes In It
There's no rigid spec for a CLAUDE.md — but after building several of them across different projects, a few sections show up every time.
Stack and environment. List your languages, frameworks, major libraries, and any tooling that isn't obvious from the file structure. Include versions where they matter. Claude should never have to guess whether you're on React 17 or 19, or whether you're using Prisma or raw SQL.
Architecture overview. A short description of how the project is structured at a high level. Is it a monolith or microservices? Where does the frontend live relative to the backend? Are there multiple services that talk to each other? One paragraph is usually enough.
Conventions. This is where you save the most tokens over time. Naming conventions, file organization patterns, how you handle errors, how state is managed, what a component is expected to look like. The more explicit you are here, the less Claude improvises.
Anti-patterns. Equally important — what should never happen. Don't split this component. Don't add a new dependency without flagging it. Don't touch this file. Explicit guardrails prevent a whole category of suggestions you'd just have to reject anyway.
Agent instructions. If you're using Claude as a coding agent assigned to GitHub issues, this section matters a lot. Tell it exactly what a completed issue looks like. Scope what it's allowed to do. My biggest lesson here: agents that do full file audits on every issue are expensive and slow. A single line in CLAUDE.md that says "do not audit files unrelated to the current issue" cuts that waste immediately.
Here's a minimal template to get started:
```markdown
# CLAUDE.md

## Stack
- Framework: Next.js 16, React 19
- Language: TypeScript (strict)
- Database: PostgreSQL via Prisma 7
- Styling: Tailwind CSS v4
- Testing: Vitest, Playwright
- Package manager: pnpm

## Architecture
Monolithic Next.js App Router application. All DB routes are force-dynamic.
Business logic lives in /lib. Components in /components. API routes in /app/api.

## Conventions
- Use named exports for components
- Co-locate tests with the files they test
- Error handling via Result pattern — no raw throws in business logic

## Anti-patterns
- Do not split single-file components unless explicitly asked
- Do not install new dependencies without flagging them first
- Do not modify /prisma/schema.prisma without explicit instruction

## Agent Instructions
- Work only within the scope of the assigned issue
- Do not audit files unrelated to the current task
- A completed issue includes passing lint, typecheck, and tests
```
Adapt it to your project. The goal isn't completeness — it's giving Claude enough signal to stop guessing.
The Skill System
CLAUDE.md solves the context problem. But there's a second problem it doesn't address: repetitive procedures.
Every project has tasks that follow the same pattern every time. Fixing a bug. Writing a README. Building a new component. Reviewing a pull request. Without any guidance, Claude approaches each of these from scratch — and the output varies based on how well you happened to phrase the prompt that day.
Skills fix that.
A Skill is a Markdown file that teaches Claude a specific, repeatable process. It lives in a /skills folder in your repo, and you reference it when you need it — either in a prompt, or in an agent's instructions. Where CLAUDE.md gives Claude permanent knowledge about your project, a Skill gives it a procedure to follow for a specific type of task.
Here's what a bug fix Skill might look like:
```markdown
# SKILL: Bug Fix

## Process
1. Read the issue description and identify the expected vs actual behavior
2. Locate the relevant file(s) — do not audit unrelated files
3. Identify the root cause before writing any code
4. Write the minimal fix that resolves the issue without side effects
5. Add or update tests to cover the fixed behavior
6. Verify lint and typecheck pass before marking complete

## Rules
- Do not refactor code outside the scope of the bug
- Do not change function signatures unless the bug requires it
- If the fix requires touching more than 3 files, flag it before proceeding
```
That's it. No magic. Just a documented process in a format Claude can read and follow consistently.
The real power is that Skills are modular and reusable. Write a README Skill once and every project gets the same quality README. Write a component Skill and every new component follows your conventions without you having to re-explain them. Stack Skills together and you're not just giving Claude context — you're giving it a repeatable workflow.
Here's where it gets interesting: Skills can also generate other parts of the system. I built a CLAUDE.md generator Skill — you point it at a new project, answer a few questions, and it produces a complete, properly structured CLAUDE.md ready to drop in. No remembering the syntax. No staring at a blank file. The link is in the closing section.
And at the meta level, there's a skill-builder Skill — a Skill that helps you write new Skills. You describe what you want Claude to be able to do repeatedly, and it walks you through drafting, testing, and refining the Skill until it's solid. The system is fully composable. Once you're in it, you're building your own AI workflow toolkit.
This is where the system starts to feel less like prompting and more like onboarding a reliable collaborator.
Putting It Together
On their own, CLAUDE.md and Skills are both useful. Together, they cover the two failure modes that make Claude frustrating: starting cold, and improvising the process.
Here's the mental model: CLAUDE.md is the onboarding. Skills are the SOPs.
When a new employee joins a team, you don't just hand them a list of procedures and send them off. You first get them up to speed on what the company does, how it's structured, and what the culture is. Then you hand them the playbooks for the specific tasks they'll be doing. One without the other leaves gaps.
Claude works the same way. A Skill without CLAUDE.md means Claude knows how to fix a bug generically, but not how your project specifically handles errors, or what files are off limits. A CLAUDE.md without Skills means Claude knows your codebase but still improvises the process every time.
Together, every session starts with Claude already knowing your project — and every task starts with Claude already knowing how to approach it.
In practice, my workflow looks like this. Each repo has a CLAUDE.md in the root. A /skills folder holds Skills for the recurring task types on that project. When I open a session or assign an issue to an agent, both are already there. I don't re-explain anything. I don't re-establish context. I just describe the problem and get to work.
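Concretely, that layout looks something like this (the individual Skill file names are illustrative, not a required structure):

```text
my-project/
├── CLAUDE.md          # read automatically at the start of every session
├── skills/
│   ├── bug-fix.md     # the Skill shown earlier
│   ├── readme.md
│   └── component.md
└── src/
```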
The token savings are real. The consistency is real. But the bigger shift is psychological — Claude stops feeling like a tool you have to wrangle and starts feeling like a collaborator who already knows the codebase.
That's the whole system. And the best part is you can start with just the CLAUDE.md today, in under an hour, and feel the difference immediately.
Start Here
Most developers using Claude daily are paying — in tokens, in time, in frustration — for a workflow problem with a straightforward fix.
Create a CLAUDE.md in your current project root. The template above is a starting point. Or use the CLAUDE.md generator Skill and let Claude build it from your answers. An imperfect one written today beats a perfect one you never get around to.
Pick one recurring task and write a Skill for it. Bug fixing is the obvious first choice. README generation is another easy win. If you're stuck, the skill-builder Skill walks you through the process.
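If README generation is your pick, a starting sketch might look like this. The structure mirrors the bug-fix Skill above; the specific steps and rules are my own suggestion, not taken from the toolkit:

```markdown
# SKILL: README Generation

## Process
1. Read project metadata (package.json, config files) for name, purpose, and scripts
2. Draft sections: what it is, quick start, configuration, contributing
3. Pull real commands from the project (install, dev, test); never invent them
4. Keep it concise; link out to deeper docs instead of duplicating them

## Rules
- Do not document features that don't exist yet
- Match the tone of any existing docs in the repo
```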
The first session after setup is usually when it clicks.
Both the CLAUDE.md template and the generator Skill are available in my ai-workflow-toolkit. Fork them, adapt them, make them yours. If you build something interesting on top of this system, I'd genuinely like to see it.
This is the first article in a series on building with AI agents. Next up: I built an agentic loop that writes code, reviews it, and iterates on its own output — here's how Code Genie works and what it gets right that most AI coding tools get wrong.
Tags: claude-md, claude, ai-coding-tools, developer-productivity, llm-context-management
Series: Building With AI Agents — Article 1 of 12