The first time you look at Claude Code closely, it can feel like a box full of parts that all seem important, but not obviously related.
You see CLAUDE.md. Then rules. Then skills. Then hooks. Then subagents. Then MCP servers. Then… the list goes on. Then context management, planning before coding, verifying outputs, and avoiding letting the session turn into a mess. At first glance, it is easy to think: why are there so many pieces just to get an AI coding workflow working well?
But after spending some time with these concepts, I think the answer is actually pretty simple.
Claude Code is not one thing. It is a small operating system for AI-assisted development.
And like any good system, different parts exist for different jobs.
Start here. Claude Code needs memory, structure, and reach
A good mental model is this:
- CLAUDE.md and rules shape behaviour.
- Skills package reusable know-how.
- Hooks enforce automatic actions.
- Subagents isolate work.
- MCP servers connect Claude to the outside world.
That is the big picture, more or less. There is more to it, but this is enough to get us started.
Once you see it that way, the moving parts stop feeling random. They start feeling like layers. The interesting part is not learning each feature in isolation, but understanding what problem each one solves, and what happens when you try to use the wrong one for that problem. Anthropic’s docs make that distinction pretty clear. CLAUDE.md gives persistent instructions, rules can scope those instructions more precisely, skills extend Claude with reusable playbooks, hooks provide deterministic automation, subagents work in separate contexts with their own tools and prompts, and MCP is the bridge to external tools, data sources, and APIs.
CLAUDE.md and rules. The behavioural layer
If I had to pick the first thing to set up in Claude Code, it would be this.
CLAUDE.md is where you put persistent instructions that Claude should carry into every session for a project, for your personal workflow, or even across an organisation. The docs describe it as plain markdown that Claude reads at the start of every session. They also make an important point that is easy to miss: these instructions are context, not hard enforcement. So if the file is vague, bloated, or contradictory, performance drops. Anthropic recommends keeping it specific, concise, and well-structured, with a rough target of under 200 lines per file.
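To make that concrete, here is a sketch of a lean CLAUDE.md. The project details (commands, conventions) are hypothetical placeholders; the point is the shape: short, specific, scannable.

```markdown
# Project guidelines

## Commands
- Build: `npm run build`
- Test: `npm run test`
- Lint: `npm run lint`

## Conventions
- TypeScript strict mode; avoid `any`.
- Prefer small, pure functions over shared mutable state.
- Never edit files under `migrations/` by hand.

## Workflow
- Run the linter after making changes.
- Ask before adding new dependencies.
```

Notice there is nothing here that only applies to one corner of the codebase. That narrower guidance belongs in rules.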
That last bit matters more than it seems.
A lot of people treat instruction files like a dumping ground. They keep adding preferences, workflows, architecture notes, testing conventions, edge cases, and random reminders until the thing becomes unreadable. Then they wonder why the agent follows it inconsistently. The problem is not that the model is “bad at instructions.” The problem is that the instruction layer is carrying too much weight.
That is where rules come in.
For larger projects, Anthropic recommends organising instructions inside .claude/rules/, and those rules can even be path-specific so they only load when Claude is working in matching areas of the codebase. That means you do not need every frontend rule, testing rule, security note, and data migration convention loaded all the time. You can scope them. In other words, CLAUDE.md should define the broad operating principles of the project, while rules help keep that guidance modular and targeted.
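A path-scoped rule might look something like this. The frontmatter key used for scoping is an assumption here; check the current docs for the exact field name. The file path and the rules themselves are illustrative.

```markdown
---
# Hypothetical scoping frontmatter; confirm the exact key in the docs
paths:
  - "src/api/**"
---

# API rules

- Validate all endpoint input with the shared schema helpers.
- Return errors in the standard `{ error: { code, message } }` envelope.
- New routes require an integration test before merging.
```

Because this only loads when Claude works under `src/api/`, your frontend sessions never pay for it.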
So, if you want one simple distinction:
Use CLAUDE.md for shared default behaviour. Use rules when the project is getting too big for one instruction file to stay sharp.
Skills. The reusable playbook layer
This is where Claude Code starts getting interesting.
A skill is not just “more instructions.” A skill is a packaged capability. Anthropic defines skills around a SKILL.md file. Claude can load a skill automatically when it is relevant, or you can invoke it directly with a slash command based on the skill name (for example, /my_skill). Skills can also include frontmatter, supporting files, argument handling, restricted tool access, and even subagent execution.
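As a minimal sketch, a SKILL.md for a team workflow might look like this. The skill name, description, and steps are hypothetical; the frontmatter fields follow the format described in the docs.

```markdown
---
name: flaky-test-triage
description: Diagnose flaky tests. Use when a test fails intermittently in CI.
---

# Flaky test triage

1. Re-run the failing test in isolation several times to confirm flakiness.
2. Check for shared state, timing assumptions, and unmocked network calls.
3. Propose a fix, then re-run the test repeatedly to verify stability.
4. Summarise the root cause in one paragraph for the PR description.
```

The description matters most: it is what Claude uses to decide when the skill is relevant.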
That changes the way you should think about them.
A rule says, “here is how we do things around here.”
A skill says, “when this kind of task appears, here is the exact playbook to follow.”
That is a huge difference.
For example, suppose your team has a repeatable workflow for debugging flaky tests, preparing release notes, reviewing API design, or migrating components from one pattern to another. You could try to cram that into CLAUDE.md, but then every session pays the context cost whether that workflow is relevant or not. Skills solve that by staying optional and task-focused. Anthropic explicitly positions them that way in the docs and even notes that for task-specific instructions that do not need to live in context all the time, skills are the better fit.
This is one of those places where the abstraction really clicks.
A good Claude Code setup is not just about telling the model how your codebase works, but about identifying which workflows deserve to become first-class tools.
That is what skills are.
Hooks. The enforcement layer
If CLAUDE.md and skills are about guidance, hooks are about guarantees.
Anthropic describes hooks as user-defined shell commands that run at specific points in Claude Code’s lifecycle. Their wording is important here: hooks provide deterministic control. In other words, they are for the things that must happen, every time, without relying on the model to remember or choose correctly. The best practices guide reinforces the same distinction: unlike CLAUDE.md, which is advisory, hooks are deterministic and guarantee the action happens.
That makes hooks the right tool for a very specific category of problems:
- Run formatting after edits.
- Block writes to protected files.
- Send a notification when Claude needs input.
- Reinject context after compaction.
- Audit configuration changes.
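The first item on that list, formatting after edits, might be wired up roughly like this in .claude/settings.json. The event name and stdin JSON shape follow Anthropic's hook docs; the prettier command is just an example, and assumes `jq` is installed.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

The hook receives JSON describing the tool call on stdin, extracts the edited file path, and formats it. No prompt involved, so it happens every time.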
Hooks are workflow mechanics. If you try to solve these mechanics with prompts alone, you are basically hoping the model behaves. Hooks exist so you do not need hope.
This is also where a lot of AI workflows quietly become trustworthy. Not because the model got smarter, but because some parts of the workflow stopped depending on the model entirely.
That is a pattern worth paying attention to.
The more a task sounds like “always do X when Y happens,” the more likely it belongs in a hook, not in a prompt.
Subagents. The isolation layer
Subagents are one of the clearest signs that Claude Code is built for agentic work, not just chat.
Anthropic describes subagents as specialised AI assistants that run in their own context window, with their own system prompt, tool access, and permissions. Claude can delegate work to them when a task matches the subagent’s description, and the subagent works independently before returning the result. The docs also point out why this matters: subagents preserve context, enforce constraints, specialise behaviour, and can route work to different models.
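A subagent lives in its own markdown file, for example under .claude/agents/. The frontmatter fields here follow the documented format; the reviewer itself is a hypothetical sketch.

```markdown
---
name: security-reviewer
description: Reviews code for security issues. Use after changes to auth or data-handling code.
tools: Read, Grep, Glob
---

You are a security reviewer. Examine the referenced files for injection risks,
unsafe deserialisation, hard-coded secrets, and missing input validation.
Report findings with file, line, severity, and a suggested fix. Do not edit files.
```

Note the restricted tool list: this subagent can read and search but never write, which is a constraint no prompt alone can guarantee.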
In long coding sessions, context gets polluted fast. Research, false starts, logs, file reads, debugging dead ends, and exploratory thoughts all pile into the same conversation. Anthropic’s best practices page repeatedly emphasises that context is the most important resource to manage, because performance degrades as the context window fills.
Subagents are how you stop every task from becoming everyone’s problem:
- Need a security pass across many files? Use a security reviewer subagent.
- Need research on a subsystem before implementation? Delegate it.
- Need a focused investigation that should not clutter the main thread? Subagent.
I think this is one of the most underrated shifts in AI coding workflows. The question is no longer just “what should the model do?” It is also “which context should do it?”
That is a different level of workflow design.
MCP servers. The reach layer
Without MCP, Claude Code is mostly limited to the local environment and whatever tools it already has access to. With MCP, it can reach out.
Anthropic describes MCP, the Model Context Protocol, as an open standard that lets Claude Code connect to external tools and data sources through MCP servers. Their examples include issue trackers, monitoring platforms, databases, design tools, and workflow automation systems. They also note an important security warning: third-party MCP servers are not universally verified, and untrusted servers can introduce prompt injection or other risks.
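Project-scoped MCP servers can be declared in a .mcp.json file at the repository root. The specific server package and environment variable below are examples, not recommendations; check each server's own documentation before trusting it.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```

Keeping credentials in environment variables rather than the file itself, and adding only the servers a project genuinely needs, is the restraint the security warning above is pointing at.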
This is where Claude Code stops being “an assistant in my repo” and starts becoming “an agent in my workflow.” That sounds exciting, and it is. But it also changes the stakes.
Once Claude can read tickets, inspect production signals, query a database, read Figma, or draft emails, the design problem becomes larger than coding. You are now designing an operational system. Permissions matter. Trust boundaries matter. Server choice matters. Tool access matters.
So while MCP is easily the most expansive capability in the stack, it is also the one that benefits most from restraint.
Just because Claude can connect to everything does not mean it should.
How these parts fit together in practice
Here is the cleanest way I can describe the whole stack:
- CLAUDE.md tells Claude how to behave by default.
- Rules narrow that guidance by topic or path.
- Skills package repeatable expertise or workflows.
- Hooks automate mandatory actions.
- Subagents keep big or noisy tasks isolated.
- MCP servers connect Claude to outside systems.
A practical workflow might look like this:
Your CLAUDE.md defines coding standards, project conventions, and preferred commands. A rule file adds stricter guidance for the /api directory. A skill gives Claude your internal release checklist. A hook runs linting after edits and blocks writes to a protected migration folder. A subagent handles security review in its own context. MCP connects Claude to GitHub, Jira, Sentry, and your database.
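On disk, that setup would look something like this. The layout is a sketch; file names are illustrative.

```text
CLAUDE.md                      # project-wide defaults
.mcp.json                      # MCP server configuration
.claude/
├── settings.json              # hooks, permissions
├── rules/
│   └── api.md                 # path-scoped guidance for the API code
├── skills/
│   └── release-checklist/
│       └── SKILL.md
└── agents/
    └── security-reviewer.md
```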
At that point, Claude Code is no longer just reacting to prompts, but operating inside a designed environment. And I think that is the real lesson here.
The quality of the workflow does not come from one magical feature. It comes from putting the right responsibility in the right layer.
Best practices that actually matter
Anthropic’s best practices page covers a lot, but a few ideas stand out because they connect directly to all the moving parts above.
First, give Claude a way to verify its work. Tests, screenshots, and expected outputs dramatically improve results because Claude can check itself instead of making you the only feedback loop.
Second, explore first, then plan, then code. Anthropic recommends separating research and planning from implementation, especially for tasks that touch multiple files or unfamiliar areas. That maps nicely to plan mode, but also to skills and subagents when you want a more structured workflow.
Third, provide specific context. Reference files, constraints, patterns, and symptoms. Claude can infer a lot, but it cannot read your mind (yet 😅).
And finally, manage context aggressively. Use /clear between unrelated tasks and avoid stuffing persistent instructions with everything under the sun.
Most of Claude Code’s “moving parts” are really context management tools in disguise.
Closing thoughts
When people first look at Claude Code, they often ask which feature matters most. I think that is the wrong question.
The better question is: what kind of problem am I trying to solve?
- If the problem is persistent guidance, reach for CLAUDE.md and rules.
- If the problem is reusable know-how, reach for skills.
- If the problem is guaranteed automation, reach for hooks.
- If the problem is isolation and focus, reach for subagents.
- If the problem is external connectivity, reach for MCP.
That is when the system starts making sense.
Claude Code does not become powerful because it has many parts. It becomes powerful when each part has a clear job. And honestly, that is probably the deeper pattern behind good AI workflows in general. Not more prompts. Not more cleverness. Better boundaries.
Practical takeaway:
- Start with a small, sharp CLAUDE.md
- Add rules only when the project needs more structure
- Turn repeated workflows into skills
- Use hooks for anything that must happen every time
- Use subagents to protect context
- Add MCP carefully, with trust and permissions in mind
Which part of Claude Code feels most useful to you right now: rules, skills, hooks, subagents, or MCP servers, and why?