Conversational Development With Claude Code — Part 19: Course Summary — From Prompts to Contextual Engineering
TL;DR — Claude Code doesn’t primarily make you faster. It makes you more correct.
This series walked through a full, end‑to‑end feature (ratings for PlatziFlix) and a full, end‑to‑end workflow: analysis → planning → execution → validation.
The real shift is philosophical: software development is no longer a solitary act of typing; it’s a technical conversation with full system context—where your judgment remains the steering wheel.
The thesis: “Context over perfect prompts”
Most developers approach AI like a slot machine:
Pull lever → prompt → output → hope.
Claude Code flips that posture.
The unit of work becomes a shared thinking space: codebase context + decisions + constraints + tools + validation loops.
You don’t “ask for code.” You construct intent—and then you audit the result like a senior engineer reviewing a change proposal.
That is why this workflow survives real projects:
- It scales beyond a single file.
- It produces decisions you can defend.
- It leaves artifacts (specs, CLAUDE.md, workflows) that your team can inherit.
The repeatable loop: Analysis → Planning → Execution
If there is one pattern worth keeping, it’s this:
1) Analysis — understand the existing system before you touch it
2) Planning — decide what to change, in what order, with what risks
3) Execution — implement iteratively, with guardrails and tests
4) Validation — verify behavior, security, performance, and merge safety
It’s not “AI workflow.”
It’s just professional engineering—made faster to reason about because context is now queryable.
1) Installation & setup: the boring part that decides everything
A correct setup is not optional; it’s the foundation of reliable context.
Global install (CLI)
```bash
npm install -g @anthropic-ai/claude-code
```
Sanity & diagnostics
- `claude` — interactive REPL
- `help` / `/help` — command discovery
- `status` — session + environment status
- `doctor` — verification (the “does my setup make sense?” command)
- `login` — authenticate
Why editor integration matters
Whether you use VS Code, Cursor, or a Claude‑friendly environment, editor integration is not about convenience. It’s about shortening the feedback loop:
- reference files without copy/pasting
- review multi‑file impact without losing structure
- keep “the conversation” close to “the code”
A good integration reduces context switching—and context switching is where mistakes breed.
2) The core mechanics: Context Window, Sub‑Agents, MCP
Context window: your working memory (not your documentation)
Claude Code can hold a lot, but the point is not capacity. The point is curation.
Best practice that pays dividends immediately:
- Prefer `@file` references over pasted code
- Keep decisions explicit and re‑stated as constraints
- Use context commands when sessions get long (`context`, `compact`, `reset`)
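As a sketch of what curation looks like in practice — the file path below is hypothetical, and the point is the shape of the request, not the wording:

```
> Using @app/services/ratings.py as the single source of truth, list the
> invariants a rating must satisfy. Do not propose code yet.
```

A reference plus an explicit constraint beats a pasted file plus a vague ask.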
Sub‑agents: a team, not a chatbot
Sub‑agents are how you keep reasoning clean when the system grows.
Typical roles:
- @architect — impact analysis + phased plan + risks
- @backend — models, migrations, endpoints, service boundaries
- @frontend — UI composition, state, types, integration
- @qa — tests, regressions, edge cases, validation strategy
The real win is not parallelism.
It’s separation of responsibilities—a mental firewall against “one thread trying to be everything.”
MCP: when tools become part of the conversation
Model Context Protocol (MCP) is how Claude Code stops being “a model” and becomes a programmable collaborator.
Instead of stepping outside the conversation for browser checks, test automation, or external lookups, you can bring those capabilities into the same reasoning loop.
Common examples:
- Playwright for automated UI validation and screenshots
- Notion/Linear connectors as living documentation context
- Database tools for direct queries in controlled environments
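As a sketch of what wiring a tool in looks like — this assumes a project‑level `.mcp.json` and the `@playwright/mcp` package, both worth verifying against the current Claude Code and Playwright MCP docs:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once registered, “open the page and screenshot the ratings widget” becomes a request Claude can execute, not a task you context‑switch away to do.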
3) Architecture discovery: reading a city without walking every street
The first serious step in a mature repo is not “where’s the file.”
It’s “what’s the shape of the system.”
Claude Code shines here because it can help you:
- infer domain boundaries
- locate patterns and inconsistencies
- identify coupling points
- map flows end‑to‑end
Make it persistent: CLAUDE.md as architectural memory
A repo without memory is a repo that re‑learns itself every sprint.
A good CLAUDE.md can include:
- conventions (naming, layering, error handling)
- review criteria
- “how we do migrations”
- “what we consider safe”
It turns tribal knowledge into a first‑class artifact.
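A minimal sketch of what that file can look like — the section names and rules here are illustrative, not prescribed:

```markdown
# CLAUDE.md

## Conventions
- Services never import from routers; dependencies flow inward.
- All errors surface as typed exceptions, mapped to HTTP at the edge.

## Migrations
- Every migration ships with a tested downgrade path.

## Review criteria
- New endpoints require: schema, tests, and an entry in the spec/ folder.
```

Short and opinionated beats long and exhaustive: the file is read at the start of every session, so every line should earn its place.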
4) Planning as an artifact: specs, phases, and risk budgets
A plan that lives only in chat is not a plan. It’s a mood.
The durable pattern we used:
- `spec/00-feature-name.md`
- `spec/01-backend-…`
- `spec/02-frontend-…`
This is not bureaucracy. It’s auditability:
- reviewable in PRs
- diff‑able over time
- readable by humans without the chat thread
Planning is how you pay for less rework later.
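A spec file can stay short. A sketch of the shape we mean — headings and contents are illustrative:

```markdown
# spec/00-feature-name.md

## Goal
One paragraph: what changes for the user.

## Out of scope
Explicitly listed, so neither the AI nor reviewers drift.

## Phases
1. Backend: model + migration + endpoint
2. Frontend: component + integration
3. Validation: tests + security review

## Risks
- Migration touches a hot table → needs a rollback plan and a window.
```

The value is that each phase becomes a resumable unit: a new session (or a new teammate) can pick up at phase 2 without replaying the chat.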
5) Implementation (Backend): production discipline, not demo code
For the ratings feature, backend work wasn’t “write endpoints.” It was:
- database design (constraints, indexes, rollback awareness)
- models aligned with domain language
- REST contracts with predictable shapes
- logging and error handling built-in
- unit tests as the stabilizer, not the afterthought
Docker + Make: consistent execution environment
A controlled environment turns the system into something testable, repeatable, and less fragile.
Example:
```bash
docker compose exec api pytest
```
The point is not Docker itself.
The point is repeatable validation.
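A thin Make layer keeps those invocations memorable. A minimal sketch, assuming a compose service named `api` and Alembic for migrations — both plausible for this stack, but adjust the names to your project:

```makefile
.PHONY: test migrate logs

test:     ## run the backend test suite inside the container
	docker compose exec api pytest -q

migrate:  ## apply pending database migrations
	docker compose exec api alembic upgrade head

logs:     ## tail api logs
	docker compose logs -f api
```

The win is that `make test` means the same thing on every machine, including the one Claude is reasoning about.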
6) Implementation (Frontend): design systems and truth‑preserving UI
Frontend integration is where “fast AI code” often collapses—because UI is not just logic, it’s trust.
What kept quality high:
- reusable components
- predictable states (loading/error/empty)
- strict adherence to the design system
- integration via well‑defined API contracts (Swagger/OpenAPI)
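One way to make “predictable states” concrete is a discriminated union, so the UI can never render a contradictory combination of loading, error, and data. Names here are illustrative, not the project’s actual types:

```typescript
// A ratings request is always in exactly one of these states.
type RatingsState =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "empty" }
  | { kind: "loaded"; average: number; count: number };

// Exhaustive rendering: the compiler flags any unhandled state.
function label(state: RatingsState): string {
  switch (state.kind) {
    case "loading":
      return "Loading ratings…";
    case "error":
      return `Something went wrong: ${state.message}`;
    case "empty":
      return "No ratings yet";
    case "loaded":
      return `${state.average.toFixed(1)} ★ (${state.count})`;
  }
}
```

Because the union is closed, adding a fifth state later produces a compile error in every component that forgot to handle it — which is exactly the kind of guardrail that keeps fast AI‑generated UI honest.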
And when UI needed polish, we used visual feedback loops:
- screenshots as context
- conversational iteration until the UI looked “native” to the product
7) Validation: tests, integration checks, and end‑to‑end confidence
Quality is not a phase you visit. It’s a loop you live in.
Unit + integration
Baseline → change → re‑run → stabilize.
End‑to‑end UI with Playwright via MCP
Once UI + API are connected, the fastest way to regain confidence is automated navigation + screenshots + console error detection.
Not because it’s fancy.
Because it produces evidence.
8) Security: “review” is not optional anymore
Claude Code can help you perform security reviews earlier than you usually would.
Example:
```
security review
```
A useful security output has:
- concrete findings
- OWASP‑aligned categories
- confidence levels
- actionable mitigations
But remember: security is not “find the bug.”
It’s “design so the bug is harder to introduce in the first place.”
9) GitHub integration: when the repo becomes the context
Once Claude Code lives inside GitHub, the workflow becomes social:
- automated reviews on PR creation
- conversational invocation via PR/issue comments
- a shared context that includes the actual codebase
This is where the tool becomes a team member:
- not authoritative
- but consistently present
The practical loop: PR → review → feedback → terminal fixes → green
A pattern that worked well:
- Create a branch and a minimal change
- Open PR → Claude review runs automatically
- Mention Claude in PR comments for targeted help
- Pull comments into the terminal (`pr comments`)
- Apply fixes conversationally, push, re‑run, merge
It’s a clean loop: feedback becomes input, not friction.
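On the automation side, the “review runs automatically” step is a workflow file. A sketch, assuming the `anthropics/claude-code-action` GitHub Action and a repo secret for the API key — input names and versions should be checked against the action’s README before use:

```yaml
# .github/workflows/claude-review.yml (illustrative)
name: Claude PR review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```

Once this is in place, the review arrives with the PR instead of after it.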
10) Cost management: engineer it like any other resource
AI usage is a budget like compute.
- For subscriptions: you manage daily limits
- For API billing: you manage token cost
Practical tool:
```bash
npx ccusage
```
Cost optimization is rarely “use less AI.”
It’s “use it more precisely”:
- targeted prompts
- bounded max turns
- avoid rerunning full analysis unnecessarily
- lean on caching where supported
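“Bounded max turns” maps to a real lever in headless use. A sketch, assuming the CLI’s `-p` (print) mode and `--max-turns` flag — verify both against `claude --help`, and note the file path is hypothetical:

```shell
# One bounded, non-interactive question instead of an open-ended session.
claude -p "List untested branches in app/services/ratings.py" --max-turns 3
```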
11) Claude Code v2.x: quality-of-life changes that matter
Small UX changes can reshape the workflow:
- Checkpoints: revert changes without a Git commit
- Thinking toggle: reason deeply when needed, not always
- History search: reuse strong prompts instead of rewriting them
These aren’t “features.” They’re friction reducers.
12) Command map: the minimal set that pays rent
Session & health
| Command | What it’s for |
|---|---|
| `claude` | Start interactive mode |
| `login` | Authenticate |
| `status` | See what Claude thinks is configured |
| `doctor` | Validate the environment |
Context management
| Command | What it’s for |
|---|---|
| `@file` / `@folder` | Add explicit code context |
| `context` | Inspect what’s loaded |
| `compact` | Compress the conversation while keeping decisions |
| `reset` | Start fresh when the thread becomes noisy |
| `resume` | Continue a prior conversation thread |
| `add-dir /abs/path` | Pull external directories into context |
GitHub collaboration
| Command | What it’s for |
|---|---|
| `pr comments` | Pull PR feedback into your terminal workflow |
| `@claude` | Invoke Claude in PR/issue comments |
Security & validation
| Command | What it’s for |
|---|---|
| `security review` | Automated security analysis vs base |
The real outcome: you learned a new programming stance
The ratings feature is a proof of execution.
But the bigger win is a mental upgrade:
- Think in systems before you change a component
- Plan in artifacts, not in chat vapor
- Validate continuously, not at the end
- Use AI as a reasoning amplifier, not a decision replacement
Claude Code doesn’t remove responsibility.
It makes responsibility more visible.
And that’s exactly what professional software needs.
What we built (reference)
Example project: Ratings system for PlatziFlix
Architecture: Full‑stack (FastAPI + Next.js + PostgreSQL + Docker)
Workflow: Specs + Sub‑agents + Tests + Security review + GitHub Actions automation
If you’ve been treating AI like autocomplete, you’re underusing it.
Treat it like a disciplined collaborator inside your repository context—
and you’ll ship less chaos, fewer regressions, and better decisions.
— Written by Cristian Sifuentes
Full‑stack engineer · AI‑assisted systems thinker
