Learn Claude Code by doing: 5 real exercises with copy-paste solutions
Everyone has opinions about Claude Code. "It's amazing." "It deleted my repo." "It saved me 3 hours."
Here's what actually matters: you learn it by using it, not by reading about it.
These are 5 exercises I run with every new Claude Code setup. Each one teaches you something specific. Each one has a copy-paste solution you can verify against.
Exercise 1: Make it explain code you don't understand
The task: Find the most confusing function in your codebase. Ask Claude Code to explain it — but also ask it to write a test that proves its explanation is correct.
Why this works: This forces Claude to commit to an interpretation. If the test passes, the explanation is probably right. If it fails, you've learned something.
Prompt:
Explain what this function does, then write a test that would only pass
if your explanation is correct:
[paste function here]
What you learn: Claude Code is most useful when you force it to be verifiable. Explanations that can't be tested are opinions.
Exercise 2: Refactor something you've been avoiding
The task: Pick a file you haven't touched in 6 months because it's too messy. Ask Claude to refactor it — but with a constraint.
The constraint is critical. Without one, Claude will do a full rewrite. With one, it learns your style.
Prompt:
Refactor this file. Rules:
- Keep the same external API (same exports, same function signatures)
- Don't change any variable names in the exported functions
- Add JSDoc comments to each exported function
- Maximum 10% increase in line count
[paste file]
What you learn: Constraints are how you stay in control of Claude Code. The more specific your rules, the more useful the output.
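Here's a hypothetical before/after showing what output that respects those rules looks like. The function is invented for illustration; the point is that the name, signature, and return value don't change.

```javascript
// Before (hypothetical): an exported function with tangled internals.
// function totalPrice(items, taxRate) {
//   let t = 0;
//   for (let i = 0; i < items.length; i++) { t = t + items[i].price * items[i].qty; }
//   return t + t * taxRate;
// }

/**
 * After: same name, same signature, same return value. Only the
 * internals and documentation changed, per the rules in the prompt.
 * @param {{price: number, qty: number}[]} items - line items
 * @param {number} taxRate - e.g. 0.5 for 50%
 * @returns {number} total including tax
 */
function totalPrice(items, taxRate) {
  const subtotal = items.reduce((sum, { price, qty }) => sum + price * qty, 0);
  return subtotal * (1 + taxRate);
}

console.log(totalPrice([{ price: 10, qty: 2 }], 0.5)); // 30
```

If Claude renames the function, drops a parameter, or doubles the file length, you reject the refactor and restate the constraint. That feedback loop is the exercise.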
Exercise 3: Debug something that "shouldn't" be broken
The task: Find a bug that confused you — one where the code looks right but isn't working. Walk Claude through it.
Don't paste the error immediately. Describe the symptom first. See if it asks the right questions.
What to watch for: Does Claude ask about environment? Does it ask to see more context? Does it suggest adding logging before jumping to a fix?
What you learn: Claude Code is better at debugging when it can ask questions. If it jumps straight to "the problem is X", push back: "What else could cause this?"
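If you don't have a bug like this handy, here's a classic "the code looks right" example you can use instead (invented for this exercise, not from the article's codebase). The symptom to describe to Claude: "my total logs 0 even though every fetch succeeds."

```javascript
async function brokenSum(urls, fetchCount) {
  let total = 0;
  // BUG: forEach ignores the promises returned by the async callback,
  // so `total` is returned before any of the callbacks finish.
  urls.forEach(async (u) => { total += await fetchCount(u); });
  return total;
}

async function fixedSum(urls, fetchCount) {
  // Fix: collect the promises and wait for all of them.
  const counts = await Promise.all(urls.map(fetchCount));
  return counts.reduce((a, b) => a + b, 0);
}

// Simulated async data source so the example is self-contained.
const fakeFetch = (u) => Promise.resolve(u.length);

(async () => {
  console.log(await brokenSum(["aa", "bbb"], fakeFetch)); // 0 (the bug)
  console.log(await fixedSum(["aa", "bbb"], fakeFetch));  // 5
})();
```

A good debugging session has Claude asking "is anything async here?" before proposing the `Promise.all` fix; jumping straight to a rewrite means it guessed.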
Exercise 4: Write a CLAUDE.md for a real project
This is the exercise most people skip. Don't skip it.
The task: Write a CLAUDE.md file that tells Claude everything it needs to know about your project. Then test it by starting a fresh conversation with just the CLAUDE.md loaded — no other context.
Starter template:
# Project: [name]
## What this does
[One paragraph. Be specific. "It's a web app" is not specific.]
## Stack
- Language: [exact version]
- Framework: [exact version]
- Database: [what and how it's accessed]
- Auth: [how authentication works]
## Where things are
- API routes: [path]
- Business logic: [path]
- Tests: [path and how to run]
## Rules Claude must follow
- Never modify [protected files]
- Always run tests before suggesting a commit
- Use [code style details]
## What "done" looks like
- Tests pass
- No TypeScript errors
- [Any project-specific checklist]
What you learn: CLAUDE.md is the single biggest multiplier for Claude Code quality. The first conversation without it versus with it is night and day.
Exercise 5: Use it like a senior engineer, not a code generator
The task: Bring Claude a real architectural decision you're facing. Not "write me a login system" — that's code generation. Something like:
I'm building a notification system. I have two options:
Option A: Store notifications in PostgreSQL, query on page load
Option B: Use Redis pub/sub, push to client via WebSocket
My constraints:
- 10,000 users
- Notifications are low-frequency (< 5/day per user)
- I need notifications to survive server restarts
- I'm a solo developer
Which do I pick and why?
What you learn: Claude Code's reasoning on architecture is genuinely useful — but you have to give it your real constraints. Without constraints, it gives you the enterprise answer. With constraints, it gives you your answer.
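For what it's worth, under those particular constraints (must survive restarts, low write volume, solo developer) the answer usually lands on Option A. Here's a hypothetical sketch of its data model; it's in memory so it runs standalone, but the comments show the Postgres equivalent each method stands in for. Every name here is mine, not from any real system.

```javascript
// Sketch of Option A: a notifications table, queried on page load.
class NotificationStore {
  constructor() { this.rows = []; this.nextId = 1; }

  // INSERT INTO notifications (user_id, body) VALUES ($1, $2)
  // Durable once it's a real table, which satisfies "survive restarts."
  add(userId, body) {
    this.rows.push({ id: this.nextId++, userId, body, read: false });
  }

  // SELECT * FROM notifications WHERE user_id = $1 AND read = false
  // Run once on page load; trivial at < 5 notifications/day per user.
  unreadFor(userId) {
    return this.rows.filter((r) => r.userId === userId && !r.read);
  }

  // UPDATE notifications SET read = true WHERE id = ANY($1)
  markRead(ids) {
    for (const r of this.rows) if (ids.includes(r.id)) r.read = true;
  }
}

const store = new NotificationStore();
store.add("u1", "Your export finished");
store.add("u1", "New comment on your post");
store.add("u2", "Welcome!");
console.log(store.unreadFor("u1").length); // 2
store.markRead([1]);
console.log(store.unreadFor("u1").length); // 1
```

Notice there's no pub/sub, no WebSocket, no second datastore: at this scale the "boring" option is the whole system, which is exactly what the solo-developer constraint was buying.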
The setup that makes all of this work
All five exercises assume Claude Code has access to your actual codebase. If you're running claude.ai in the browser, you're doing copy-paste development — slower and less accurate.
For real agentic work, you need:
- Claude Code running locally against your repo
- A CLAUDE.md file (Exercise 4 will get you there)
- An API endpoint that doesn't throttle you mid-exercise
On that last point: Claude Code reads the ANTHROPIC_BASE_URL environment variable to decide which API endpoint it talks to. If you're running into rate limits or don't want to pay $20/month for Claude Pro just for API access, SimplyLouie gives you a Claude API endpoint for $2/month. You set it in one line:
export ANTHROPIC_BASE_URL=https://simplylouie.com
That's it. Claude Code works exactly the same. You pay $18 less per month.
Run these in order
1. Start with Exercise 1 (explain + test) — it calibrates your expectations
2. Exercise 4 (CLAUDE.md) should happen before anything agent-heavy
3. Exercise 5 (architecture) is where Claude Code earns its cost
The exercises are designed to teach you how Claude Code thinks, not just how to use it. Once you understand its reasoning patterns, you'll know when to trust it and when to push back.
What exercise surprised you most? Drop a comment — I'm curious what people find most useful.
Running Claude Code locally on a $2/month API budget? Check out SimplyLouie — it's the ANTHROPIC_BASE_URL proxy that won't break the bank.