DevToolsPicks

Posted on • Originally published at devtoolpicks.com

How to Use Claude Code as a Solo Developer in 2026


You install Claude Code. You type `claude` in your terminal. Now what?

Most solo devs use maybe 10% of what Claude Code can actually do. They get stuck in a chat loop, paste code back and forth, and wonder why it feels like a fancy autocomplete. The problem isn't the tool. It's that most guides teach features one at a time instead of showing how they fit together in a real shipping workflow.

This post takes the opposite approach. Not a feature catalog. Not a full tutorial. Just the workflow that actually works for solo developers shipping production software in 2026, whatever stack you're on.

Start with CLAUDE.md. Nothing else matters as much.

The single highest-ROI setup step is a well-written CLAUDE.md at the root of every project.

Claude Code reads this file on every session. If it's good, Claude behaves like it already works on your codebase. If it's missing or vague, Claude will guess. And when it guesses wrong, it will hand you patterns from a different framework, suggest raw SQL where you have an ORM, or miss that you're using a UI library where it assumed plain components.

Here's the structure of a good CLAUDE.md:

```markdown
# Tech stack
TypeScript + Next.js 15 + Postgres. Deployed on Vercel.
Authentication via Clerk. Payments via Stripe.

## Code style
- TypeScript strict mode. No `any`. Prefer `unknown` with type guards.
- Run the linter before every commit. Errors, not warnings.
- Functional components only. No class components.

## Don't
- Don't drop tables or run destructive migrations without asking first.
- Don't add comments that explain what code does. Only why.
- Don't install a new dependency without flagging it.

## Project conventions
- API routes live in app/api/. Server actions in lib/actions/.
- All validation uses zod schemas, never manual checks.
- Components get extracted after 3+ uses. Inline before that.
```

Notice what's not in there: architecture diagrams, philosophy, full style guides. CLAUDE.md is for the things Claude will otherwise get wrong, not for documenting your project end-to-end. Two clear lines that stop a destructive migration beat ten paragraphs about hexagonal architecture.

Adapt the stack and rules to match your own project. The structure is what matters: stack summary, code style rules, things to never do, project conventions. That's it.

For advanced memory management (.claude/rules/ for path-specific overrides, CLAUDE.md precedence across repos), the Claude Skills vs MCP Connectors vs Plugins breakdown explains how this fits into the broader system. CLAUDE.md is where to start though.
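As a sketch, the layered setup can look like this. The directory names below are illustrative, not prescriptive; the exact precedence rules are covered in the linked breakdown:

```
repo/
├── CLAUDE.md                 # repo-wide: stack, style, never-do rules
├── .claude/
│   └── rules/                # path-specific overrides, loaded per matching path
└── packages/
    └── api/
        └── CLAUDE.md         # narrower rules that apply only inside packages/api/
```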

The three modes that matter

Claude Code has six permission modes. In practice, three of them get real use.

Plan mode (/plan or shift-tab). For anything that touches more than one file. Plan mode stops Claude from rushing to write code. Instead it produces a step-by-step plan you can push back on before a single line gets written. It saves a ridiculous amount of time. Catching a bad plan before Claude refactors something you didn't want touched takes thirty seconds. Cleaning up after a bad refactor takes an hour.

Default mode (`acceptEdits` with review). For focused single-file work. Fix a bug, add a method, refactor a function. Default mode shows each change before applying it. Fast feedback loop.

Auto mode. New with Opus 4.7 for Max plan users. Claude makes its own decisions about when to search files, when to run tests, when to verify changes. Useful for background batch work like running a linter fix pass across a repo or generating boilerplate for several components at once. Not ideal for architectural work where you still want to review decisions, not just outcomes. The Opus 4.7 launch review covers what else changed with this release.

One mode to avoid: `bypassPermissions`. It sounds fast. It also means Claude can run destructive commands without asking. A solo dev who just lost a day to a bad `rm -rf` is not a solo dev shipping anything that week.
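A hard backstop is a deny list in `.claude/settings.json`. This is a sketch: the rule format shown (tool name plus a command pattern) follows Claude Code's permission-rule syntax, but check the settings documentation for your version before relying on the exact matchers:

```json
{
  "permissions": {
    "deny": [
      "Bash(rm -rf:*)",
      "Bash(git push --force:*)"
    ]
  }
}
```

Deny rules apply regardless of mode, which makes them a better safety net than remembering to stay out of the wrong mode.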

Slash commands worth building yourself

The built-in slash commands are fine. The ones that actually move the needle are the ones you build for your specific stack and the specific mistakes that waste your time.

Three patterns worth copying:

/lint-fix. Runs the linter on the current file, then asks Claude to review what the linter couldn't catch: unused imports that slipped through, return types that should be stricter, error handling that's too broad, type assertions masking real bugs. Kills the loop of "run lint, review diff, ask Claude, repeat."

```markdown
---
name: lint-fix
description: Run the linter then catch what the linter misses
---

1. Run the linter on the current file
2. Review the diff for:
   - Type assertions that could be replaced with proper types
   - Return types that are implicit but should be explicit
   - Unused imports the linter did not catch
   - Error handling that is too broad (catch-all patterns)
3. Show suggested changes separately from the lint output
4. Do not auto-apply type changes. Show the diff and wait.
```

/migration-review. Scans a just-generated database migration for gotchas. Foreign key constraints pointing the wrong way. Missing indexes on foreign keys. Columns that should be nullable. Enum constraints that don't match application code. AI tools generate migrations well most of the time. The one time they miss an index on a foreign key, the cost shows up at scale.

/test-one. Bootstraps a single test for a specific function or method. Just the happy path plus one edge case. Asking Claude to "write tests" produces 400-line test files nobody maintains. Asking for exactly two cases produces tests that ship.
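Following the same pattern as the /lint-fix example above, a /test-one command might look like this. The file location (`.claude/commands/test-one.md`) and frontmatter follow Claude Code's custom-command format; the prompt body is an illustrative sketch:

```markdown
---
name: test-one
description: Write exactly two tests for one function - happy path plus one edge case
---

1. Ask which function to test if none was named
2. Write exactly two test cases:
   - One happy path with realistic input
   - One edge case (empty input, boundary value, or expected error)
3. Match the existing test file's style and test runner
4. Do not add more cases. Two is the budget.
```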

The common thread: slash commands that match your stack and your specific pain points. Generic helpers already exist. The ones you build yourself are the ones that get used every day.

Which model to pick, and when

Claude Code lets you pick between Opus 4.7, Sonnet 4.6, and Haiku 4.5. The honest split looks like this:

Opus 4.7 for architecture, multi-file refactors, debugging something subtle, and anything where being wrong costs an hour. It's $5/$25 per million tokens on the API, but on Max plans it's included in the flat rate. On Opus 4.7 specifically, the new /ultrareview command is worth running before shipping anything that touches auth or payments.

Sonnet 4.6 for about 80% of daily coding. Routine feature work, adding a column, writing a controller method, updating a component. Cheaper, faster, and good enough that the quality difference only shows up maybe one time in five. The Cursor vs GitHub Copilot vs Claude Code comparison goes deeper on this if the choice is between tools rather than models.

Haiku 4.5 for documentation, commit messages, quick lookups, and running the same prompt against ten files at once. Haiku is fast enough that it stops feeling like an LLM. Bad choice for reasoning work though. It will confidently make things up.

The common mistake is defaulting to Opus for everything. It works, but Pro plan rate limits blow through in three days. Better default: start on Sonnet. Escalate to Opus when Sonnet gets something wrong twice.
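Switching is cheap enough that escalation can be a reflex rather than a decision. For reference, the model can be set at launch or mid-session (these reflect current Claude Code usage; flags and names may vary by version):

```
claude --model sonnet     # start the session on Sonnet
/model opus               # escalate mid-session after Sonnet misses twice
/model haiku              # drop down for docs and commit messages
```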

Pro vs Max vs API: the honest call

This is the most common question about Claude Code. Here's the decision framework by actual usage pattern.

Claude Pro at $20/month. If Claude Code is a supplement to your workflow, not the main thing. Maybe one to two hours of active use a day. Rate limits will bite occasionally, especially on Opus. Pro is the right tier for the first two or three months of using Claude Code seriously.

Claude Max 5x at $100/month. If Claude Code is your primary development tool. Active sprints typically hit 50 to 80 million tokens a month, which would cost $800 to $1,500 at API rates. Max 5x pays for itself inside one month of real use. No rate limit anxiety is worth the upgrade on its own.

Claude Max 20x at $200/month. For parallel sessions across multiple repos, heavy use of the Claude Code desktop app's parallel sessions feature (covered in the Claude Code desktop redesign review), or long-horizon autonomous agents. Most solo devs don't need this tier. Anyone who does already knows.

API direct billing. Only relevant for building something on top of Claude Code, not for using it as a developer. At API rates, a typical heavy day of coding costs more than a full month of Max 5x. The math doesn't work for personal use.

If the call is also between Claude Max and ChatGPT Pro $100, the ChatGPT Pro $100 vs Claude Max vs Cursor breakdown covers the full three-way decision.

Four mistakes that cost real time

Letting sessions grow too long. Claude's context window is big but not infinite. A three-hour session where new tasks keep piling on starts producing weird mistakes around hour two. The fix is simple: end the session, summarise the state in CLAUDE.md or a task file, start fresh. Pushing through context bloat instead of resetting costs hours and degrades output quality in ways that are hard to spot in the moment.
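Two built-in commands help with the reset (both exist in current Claude Code, though exact behavior may vary by version):

```
/compact   # summarise the session so far and free up context
/clear     # start completely fresh, keeping nothing
```

/compact is the middle ground when the task isn't finished but the context is bloating; /clear plus a state summary in CLAUDE.md is the full reset.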

Skipping plan mode on "quick" changes. Anything that touches three or more files deserves plan mode, even if it looks simple. A "quick" refactor without plan mode can easily touch fourteen files. Cleaning up afterwards takes longer than writing the whole thing by hand would have. Plan mode isn't for slow jobs. It's for jobs where knowing what's about to happen matters.

Trusting generated tests without reading them. Claude writes tests that pass. That's not the same thing as tests that catch bugs. Every generated test needs a read before acceptance. Often the assertion tests the wrong thing, or the setup is missing a case, or the test passes even when the code is broken because a mock is wrong. This matters more than most people admit.

Under-using hooks. A pre-commit hook that runs the affected tests before any commit reaches git is maybe the single highest-ROI thing to add to a Claude Code workflow. Claude generates code fast. A hook that catches bad code before it reaches git history is the safety net that lets you trust the speed.
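A minimal sketch of such a git hook, assuming a TypeScript project tested with Jest — the Jest flags are real, but swap in your own test command:

```shell
#!/bin/sh
# .git/hooks/pre-commit — block the commit if tests related to staged files fail.
# Sketch assumes Jest; --findRelatedTests and --passWithNoTests are Jest CLI flags.
changed=$(git diff --cached --name-only --diff-filter=ACM 2>/dev/null | grep -E '\.(ts|tsx)$')
if [ -n "$changed" ]; then
  npx jest --findRelatedTests $changed --passWithNoTests || {
    echo "pre-commit: tests failed, commit blocked" >&2
    exit 1
  }
fi
echo "pre-commit: ok"
```

Make it executable with `chmod +x .git/hooks/pre-commit`. Because it runs on the git side, it catches bad code no matter which mode Claude generated it in.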

When NOT to reach for Claude Code

There are scenarios where Claude Code is the wrong answer.

One-line changes. Fixing a typo, adjusting a constant, renaming a variable. Opening Claude, waiting for a response, reviewing a diff, applying it. That whole loop takes longer than typing the change.

Architectural decisions that haven't been thought through. Asking Claude Code "should I use service classes or action classes for this?" produces a confident answer. It might be a good one. There's no way to tell without already having a view. Form the view first. Use Claude to pressure-test it, not to generate it.

Debugging where you're the domain expert. If the bug is in code Claude wrote, sure, let it debug. If the bug is in your understanding of your own product's business logic, Claude will happily generate plausible-sounding fixes that make the symptom go away while the actual bug stays. Domain debugging stays with you.

Anything involving payments or auth at the core level. Claude Code works well for auth-adjacent work: email verification flows, password reset pages, session lifetime config. Poor choice for designing the auth system itself. The consequences of a subtle mistake are too high. Same goes for billing logic.

Claude Code is a force multiplier, not a substitute for judgment. Most of the people who get burned by it are skipping the judgment part.

FAQ

Do you need the Max plan to use Opus 4.7?

No. Claude Pro at $20/month includes Opus 4.7 access, just with lower rate limits. Max plans give significantly more Opus usage per 5-hour window, which matters when Claude Code is a primary dev tool.

Can Claude Code and Cursor be used together?

Yes, they coexist fine. Cursor handles IDE-integrated inline suggestions well. Claude Code handles multi-file agentic work in a terminal-first flow. Different strengths, both valid in the same workflow.

What's the difference between the Claude Code CLI and the desktop app?

Same backend, different interface. The CLI is keyboard-driven and fits a terminal-first workflow. The desktop app adds a visual interface with parallel sessions, drag-and-drop panes, and an integrated terminal. Full breakdown in the Claude Code desktop redesign review.

How do you stop Claude Code from running destructive commands?

Avoid `bypassPermissions` mode. Stick with default or `acceptEdits`, which require approval for anything that modifies the filesystem. Add a pre-commit git hook as a second safety layer. For auto mode, configure `.claude/settings.json` with explicitly forbidden commands.

Is Claude Code worth it for developers who also use ChatGPT or Gemini?

For primary coding work, yes. Claude Code's terminal-first design and the quality of Opus 4.7 on multi-file refactors are tuned for software engineering in a way general-purpose chat interfaces aren't. Pair it with whatever else works for non-coding tasks.

Further reading

For deeper tutorials, visual diagrams, and copy-paste templates covering every Claude Code feature, the claude-howto repo on GitHub is the best community learning resource available right now. Ten numbered modules, updated with every Claude Code release.

For where Claude Code fits alongside the other AI coding options in 2026, the comparison posts mentioned throughout this article cover the full landscape.

Final take

Claude Code isn't something you learn once and move on from. It pays back more the more time you put into the setup. Three things separate the solo devs who get real value from it from the ones who don't: they write a CLAUDE.md for every project, they use plan mode for multi-file work, and they build a handful of slash commands that fit their stack. That's the 20% that delivers most of the value.

Everything else in the feature list (hooks, subagents, MCP servers, skills) is worth learning too. Just do them in that order. The foundation has to work before anything else matters.