DEV Community

孫昊

Claude Code vs Cursor for solo indie dev: an honest breakdown (I shipped 4 iOS apps to find out)

I'm going to give you the comparison I couldn't find when I was choosing.

Most "Claude Code vs Cursor" articles are either vibe-based or benchmarks that don't match solo indie dev workflows. I wanted something grounded in an actual multi-product project: 4 iOS apps, 5 distribution surfaces, 11 public repos, CI/CD across all of them.

So I spent 14 days building exactly that — exclusively with Claude Code Pro — while having used Cursor previously for frontend work. This is my honest breakdown.

Upfront disclaimer: I'm not paid by Anthropic or Cursor. I pay $20/mo for both (at different points). All numbers are from my actual project.


The setup

Project: autoapp — 4 iOS apps (SwiftUI, StoreKit 2, Privacy Manifest), TestFlight + App Store pipeline, 1 Gumroad product, 1 Chrome extension, 1 VSCode extension, 1 WeChat miniprogram.

Duration: 14 days of productive work (not calendar days — actual tracked hours).

Tools: Claude Code Pro ($20/mo) for the 14-day sprint. Cursor experience from prior React/Next.js projects.


TLDR matrix

| Scenario | Recommended |
| --- | --- |
| First AI coding tool, beginner | Cursor |
| Multi-repo / multi-surface project | Claude Code |
| Long-running tasks (4–8 hr) | Claude Code |
| Heavy frontend (React / Next.js) | Cursor |
| Backend + DevOps + scripting | Claude Code |
| Single-file debugging with real-time feedback | Cursor |
| Cross-file refactors, 50+ files | Claude Code |

Round 1: Setup and onboarding

Cursor: Install, open project, right-click → "Ask Cursor". Done in 5 minutes. The GUI is familiar — it feels like VS Code because it is VS Code. Autocomplete works immediately.

Claude Code: Install via `npm install -g @anthropic-ai/claude-code`, set `ANTHROPIC_API_KEY`, then run `claude` in your terminal. ~10 minutes. You need to get comfortable with CLI interaction before you're productive.

Verdict: Cursor wins for onboarding. If you've never used an AI coding tool, start there.


Round 2: Multi-repo and multi-surface projects

This is where Claude Code separates itself.

My project had 4 separate iOS repos plus a monorepo with toolkit scripts, scrapers, site HTML, WeChat miniprogram, and Chrome extension. Languages: Swift, JavaScript, Python, Bash, HTML/CSS.

Claude Code in this scenario:

  • CLI-native means I can run it across any directory: `cd repos/autoapp-hello && claude "check CI status and fix any Swift warnings"`
  • Scripting becomes trivial: loop over 4 repos, run the same verification in each, collect results
  • Sub-agents let me parallelize: I ran 4 repo checks simultaneously instead of sequentially. That's 8 minutes instead of 32.
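
The loop-over-repos pattern above can be sketched in plain POSIX shell. This is a hedged illustration, not my exact script: the repo names other than `autoapp-hello` are hypothetical, and `AGENT` defaults to a stub (`echo`) so the sketch runs even without Claude Code installed — set `AGENT='claude -p'` to drive the real agent in print mode.

```shell
#!/usr/bin/env sh
# Run the same verification across four repos in parallel.
# AGENT is a stub by default; swap in:  AGENT='claude -p'
AGENT="${AGENT:-echo}"
LOGDIR=/tmp/claude-demo
mkdir -p "$LOGDIR"

for repo in autoapp-hello autoapp-altitude autoapp-days autoapp-vault; do
  # each check runs in a background subshell, so all four proceed at once
  (
    $AGENT "check CI status and fix any Swift warnings in $repo" \
      > "$LOGDIR/$repo.log" 2>&1
  ) &
done
wait  # block until every background check has finished
```

One log per repo means you can review the four results in one sitting instead of babysitting four sequential runs.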

Cursor in this scenario:

  • GUI limits you to 1–2 workspaces at a time
  • Switching between repos is manual
  • Batch automation requires workarounds

If your project is one repo + one language, this round is a tie. If it's not — Claude Code.


Round 3: Long-running tasks (4–8 hours)

Some of my tasks took a full work session: scaffolding a new iOS app from scratch, rebuilding the Gumroad SKU pipeline, wiring TestFlight CI across 4 repos after 9 failed attempts.

Claude Code for long runs:

  • 5-hour context window with a built-in "ScheduleWakeup" reset pattern — a session can outlast a full work block
  • Persistent TodoWrite tool: tasks survive context resets. I closed my laptop, reopened it, and Claude Code continued from where it stopped
  • Sub-agent parallelism: 3 tasks run simultaneously, so a "6-hour task" becomes a "2-hour wall-clock"
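
The wall-clock arithmetic is just background jobs. A toy sketch (sleep stands in for a sub-agent; durations are obviously not real task lengths):

```shell
#!/usr/bin/env sh
# Three 2-second "tasks" launched in parallel finish in roughly
# 2 seconds of wall-clock time, not 6.
start=$(date +%s)
sleep 2 & sleep 2 & sleep 2 &
wait
end=$(date +%s)
echo "elapsed: $((end - start))s"
```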

Cursor for long runs:

  • GUI session resets if you close the window; you re-describe context manually
  • No native task persistence
  • Real-time diff preview is genuinely useful for long sessions though — you see what's changing

Verdict: For tasks over 2 hours, Claude Code's architecture is designed for it. Cursor is better for short, iterative sessions.


Round 4: Debugging

| Scenario | Cursor | Claude Code |
| --- | --- | --- |
| Red squiggles + inline suggestions | ✅ instant | ❌ requires `git diff` |
| Pasting a stack trace | ★★★★ | ★★★★ |
| Race conditions / concurrency bugs | ★★★ | ★★★★ (multi-file grep + reasoning) |
| Test output → fix loop | ★★★★ (test panel) | ★★★ (CLI bash) |
| Refactor across 50+ files | ★★★ | ★★★★ (multi-edit + sub-agent) |

The real difference: Cursor is faster for single-file bugs you can see in the editor. Claude Code is more systematic for bugs that span multiple files or repos.

My hardest bug was a Swift 6 strict concurrency error in IAPManager — it touched 4 files across 4 repos. Claude Code found and fixed it in all of them in one pass.
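
The "multi-file grep" step is nothing exotic — it's a recursive search across every checked-out repo before handing the fix over. A runnable sketch (the fixture directory is synthetic so the snippet works anywhere; in practice you'd point it at your real `repos/` directory):

```shell
#!/usr/bin/env sh
# Build a tiny two-repo fixture, then find every file mentioning IAPManager.
mkdir -p /tmp/grep-demo/repo-a /tmp/grep-demo/repo-b
echo "final class IAPManager {}" > /tmp/grep-demo/repo-a/IAPManager.swift
echo "import Foundation"         > /tmp/grep-demo/repo-b/Other.swift
grep -rln "IAPManager" /tmp/grep-demo   # lists only repo-a's file
```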


Round 5: Speed and cost

Both tools cost $20/month.

In my 14-day project:

  • Rough output rate with Claude Code: ~250–300 LOC/hour (including review time)
  • My prior Cursor experience on React projects: ~130–160 LOC/hour

The gap is partly because of sub-agent parallelism — Claude Code was running 3 things simultaneously at times. It's not that Claude's suggestions are faster; it's that the architecture eliminates serial bottlenecks.

On token economy: Claude Code uses more tokens than Cursor for the same output (CLI overhead, sub-agents). If you're on a usage-based plan, watch this. The $20/mo Pro plan gives you a reasonable budget for a full sprint.


What I'd tell a solo dev choosing today

Start with Cursor if:

  • You're new to AI tools
  • Your project is a single repo, primarily React/Next.js/TypeScript
  • You want real-time visual feedback on your code

Switch to Claude Code if:

  • You're managing 3+ repos simultaneously
  • You run tasks that last more than 2 hours
  • You want to automate repetitive multi-repo operations (CI fixes, metadata updates, schema migrations across services)
  • You're comfortable with CLI

Neither is "better". Cursor wins on approachability and real-time feedback. Claude Code wins on scale, automation, and long-horizon tasks.

The honest framing: Cursor is an AI-powered IDE. Claude Code is an AI-powered agent that also does IDE things.


What I actually built (14 days, Claude Code only)

  • 4 iOS apps (SwiftUI + StoreKit 2): AutoChoice, AltitudeNow, DaysUntil, PromptVault
  • Full TestFlight + App Store CI/CD pipeline (fastlane + GitHub Actions)
  • 1 Gumroad digital product (160-prompt pack, live)
  • autoapp-toolkit (open source orchestration layer, MIT)
  • This comparison post

All repos: github.com/jiejuefuyou


No affiliate links. No vendor money. Both tools paid out of pocket. Happy to answer specific questions in the comments.

