Ashok Naik

How the Creator of Claude Code Actually Uses It

Boris Cherny built Claude Code. When someone asked how he actually uses it, his answer caught people off guard: it's surprisingly vanilla.

But don't let "vanilla" fool you. His setup is a masterclass in AI-assisted development that any developer can steal.

Think of his approach like conducting an orchestra: each Claude instance plays its own part while you keep the ensemble in time.

The stakes? Most developers use one AI assistant at a time. Boris runs 15+ simultaneously. Here's exactly how.

Key Principles

Mindset Shifts:

  • AI coding is asynchronous, not synchronous
  • Plan first, execute second
  • Mistakes become institutional knowledge
  • Verification loops are non-negotiable
  • Automate everything you do twice

What Makes Boris's Setup Different:

  • Running 15+ Claude instances in parallel
  • Shared team knowledge in version-controlled files
  • Automated workflows via slash commands and subagents
  • Quality gates through hooks and verification
  • Institutional learning from every mistake

The Parallel Execution Framework

Step 1: Terminal Setup (5 Parallel Claudes)

Boris runs 5 Claude instances in his terminal simultaneously, numbered 1-5 in tabs:

❌ WRONG: Run one Claude, wait for it, start another
✅ RIGHT: Run 5 Claudes, context-switch between them as they work

The Smart Setup:

  1. Number your tabs 1-5 → Quick visual reference
  2. Enable system notifications → Know when Claude needs input
  3. Let them cook → Check in periodically, not constantly
  4. Batch your reviews → Review multiple outputs together
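
The post doesn't say which terminal Boris uses, so here is a minimal sketch of the same numbered-tab idea using tmux windows; it assumes tmux and the `claude` CLI are on your PATH, and any terminal with tabs works the same way.

```bash
#!/usr/bin/env bash
# Rough approximation of the numbered-tab setup: five Claude instances, one per tmux window.
set -euo pipefail

SESSION="claude-parallel"

# First window, named "1", running its own Claude instance
tmux new-session -d -s "$SESSION" -n "1" claude

# Windows 2-5, one Claude instance each
for i in 2 3 4 5; do
  tmux new-window -t "$SESSION" -n "$i" claude
done

# Attach; cycle between instances with Ctrl-b n / Ctrl-b p
tmux attach -t "$SESSION"
```

Notifications and tab numbering will depend on your terminal emulator; the point is simply that each instance gets its own window you can cycle through while the others keep working.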

The 5-Layer Productivity Stack

Layer 1: Model Selection (Bigger = Faster)

Counterintuitive Truth:

  • Boris uses Opus 4.5 exclusively
  • Despite being larger and slower than Sonnet
  • Total time to completion is almost always shorter

Why Bigger Wins:

  • Less steering required
  • Better tool use out of the box
  • Fewer correction cycles
  • Less clarification needed
  • Better first-attempt quality
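
If you want to copy that, the model is a one-flag choice at session start; a minimal sketch, assuming the `claude` CLI's `--model` flag and its model aliases (run `claude --help` to see what your version accepts):

```bash
# Launch a session pinned to Opus instead of the default model
claude --model opus
```

If your version supports it, the model can also be pinned per project in .claude/settings.json so nobody has to remember the flag.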

Layer 2: The Living CLAUDE.md

The Claude Code team shares a single CLAUDE.md checked into git:

The Golden Rule:

Anytime Claude does something wrong, add it to CLAUDE.md

Team Workflow:

1. Everyone contributes multiple times per week
2. Tag @claude on PRs to add learnings
3. Review CLAUDE.md changes like any code
4. Mistakes compound into institutional knowledge

What Goes In:

  • Common errors Claude makes
  • Project-specific conventions
  • Domain knowledge Claude needs
  • Edge cases and gotchas
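
The post doesn't show what an entry looks like, but it doesn't need to be elaborate. A sketch, with hypothetical project gotchas standing in for real ones:

```bash
# Append a freshly learned gotcha to the shared CLAUDE.md, then ship it like any other change.
# The entries below are illustrative; write down whatever Claude actually got wrong.
cat >> CLAUDE.md <<'EOF'

## Gotchas
- Never edit files under src/generated/; rerun `make codegen` instead.
- The API client already retries; do not wrap calls in another retry loop.
EOF

git add CLAUDE.md
git commit -m "CLAUDE.md: document codegen and retry gotchas"
```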

Layer 3: Plan Mode (The 1-Shot Enabler)

Most sessions start in Plan mode (shift+tab twice):

The Workflow:

1. Enter Plan mode
2. Iterate on plan with Claude until it's right
3. Switch to auto-accept edits mode
4. Claude executes—usually in one shot

Why This Works:

  • Planning time is cheap
  • Execution rework is expensive
  • Good plans eliminate debug cycles
  • Auto-accept mode enables flow state

Layer 4: Slash Commands (Inner Loop Automation)

Boris uses slash commands for any workflow done multiple times per day:

Example: /commit-push-pr

- Used dozens of times daily
- Pre-computes git status via inline bash
- Minimizes round trips to model
- Lives in .claude/commands/
- Checked into git for team sharing

The Pattern:

  • Identify repetitive prompting patterns
  • Encode as slash command
  • Add inline bash for context pre-computation
  • Share with team via git
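
Boris's actual command file isn't in the post. As a sketch of the shape such a file takes, here is a hypothetical .claude/commands/commit-push-pr.md, assuming the frontmatter fields and the !`...` inline-bash syntax Claude Code supports for custom slash commands:

```bash
mkdir -p .claude/commands
cat > .claude/commands/commit-push-pr.md <<'EOF'
---
description: Commit current work, push the branch, and open a PR
allowed-tools: Bash(git status:*), Bash(git diff:*), Bash(git branch:*), Bash(git add:*), Bash(git commit:*), Bash(git push:*), Bash(gh pr create:*)
---

## Context (pre-computed so the model skips extra round trips)

- Status: !`git status`
- Diff: !`git diff HEAD`
- Branch: !`git branch --show-current`

## Task

Write a clear commit message for the changes above, commit them, push the
branch, and open a pull request with `gh pr create` summarizing the change.
EOF
```

Once the file is checked in, everyone on the team gets /commit-push-pr in their next session.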

Layer 5: Subagents (Specialized Automation)

Beyond slash commands, Boris maintains specialized subagents:

Active Subagents:

  • code-simplifier → Simplifies code after Claude completes work
  • verify-app → Detailed E2E testing instructions
  • Custom agents → Automate common PR workflows

Think of Subagents As:

Your code review checklist, automated and delegated to specialized agents
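
The prompts behind code-simplifier and verify-app aren't published, so the body below is hypothetical; the file layout and YAML frontmatter follow the .claude/agents/ format Claude Code uses for subagents:

```bash
mkdir -p .claude/agents
cat > .claude/agents/code-simplifier.md <<'EOF'
---
name: code-simplifier
description: Simplify code that was just written or modified, without changing behavior.
tools: Read, Grep, Glob, Edit
---

You are a code simplifier. After the main agent finishes a task, review the
files it touched and:
- Remove dead code, needless abstraction, and redundant comments.
- Prefer the project's existing idioms over introducing new patterns.
- Never change observable behavior; if a simplification looks risky, describe it instead of applying it.
EOF
```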

The Most Important Tip: Verification Loops

Boris saves his most critical advice for last:

Give Claude a way to verify its work.

Claude tests every change to claude.ai/code using the Chrome extension. It opens a browser, tests the UI, and iterates until the code works and the UX feels right.

Verification by Domain:

  • CLI tools → Run bash commands
  • Backend → Run the test suite
  • Frontend → Browser testing
  • Mobile → Phone simulator

The Impact:

  • 2-3x quality improvement on final results
  • Without verification: generating code
  • With verification: shipping working software
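
Hooks are one way to make that verification loop automatic rather than optional. A sketch of a PostToolUse hook that runs the test suite after any file edit, assuming the hook event names and settings layout in current Claude Code; swap `npm test` for your project's test command:

```bash
mkdir -p .claude
# NOTE: this overwrites .claude/settings.json; merge by hand if the file already exists.
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm test" }
        ]
      }
    ]
  }
}
EOF
```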

The Harsh Reality: Everyone has access to the same AI tools. The developers winning are those who build systems around them.

Boris's "vanilla" setup works because the systems are solid:

  • Parallel execution → Maximize throughput
  • Shared CLAUDE.md → Institutional learning
  • Slash commands → Automate inner loops
  • Subagents → Specialize workflows
  • Hooks → Quality gates
  • Verification → Ship working code

What's your Claude Code setup? Drop your tips in the comments.
