
Max Quimby

Posted on • Originally published at computeleap.com

DeepClaude: Run Claude Code on DeepSeek for 90% Less

This morning's #1 story on Hacker News is a four-line shell script. 606 points, 257 comments, and the top reply is the smoking gun: someone cancelled their Claude subscription and switched their entire coding workflow to DeepSeek V4 Pro — same Claude Code CLI, same agent loop, same /resume and sub-agents — for roughly 17× less money per million tokens.

📖 Read the full version with charts and embedded sources on ComputeLeap →

DeepClaude HN front page thread, 606 points and 257 comments

The script lives in a small repo called DeepClaude. It does almost nothing: it sets ANTHROPIC_BASE_URL, sets ANTHROPIC_AUTH_TOKEN to a DeepSeek API key, picks ANTHROPIC_MODEL=deepseek-v4-pro, and runs a tiny Node proxy that forwards Claude Code's tool calls to DeepSeek's Anthropic-compatible endpoint. That's the whole thing.

aattaran/deepclaude GitHub repo — Same UX, 17x cheaper

📊 The pricing gap that started the run on the bank. Claude Sonnet 4.6 lists at $3/M input, $15/M output. DeepSeek V4 Pro lists at $0.27/M input, $1.10/M output — and $0.014/M on a cache hit, which is the path you actually take in an agent loop where the system prompt and file context get re-sent every turn. The math comes out to 90%+ savings on a normal day and 99%+ on cache-heavy sessions.
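The arithmetic is easy to sanity-check. Here is a minimal sketch using the list prices above and a hypothetical session mix of 3M input tokens to 1M output tokens (the mix is my assumption, not a measured figure):

```shell
# Sanity-check the savings claim with the list prices quoted above.
# Assumed session mix: 3M input tokens, 1M output tokens (illustrative only).
awk 'BEGIN {
  claude   = 3.00 * 3 + 15.00 * 1    # Sonnet 4.6: $3/M in, $15/M out
  deepseek = 0.27 * 3 +  1.10 * 1    # V4 Pro: $0.27/M in, $1.10/M out
  cached   = 0.014 * 3 + 1.10 * 1    # same mix, every input token a cache hit
  printf "claude:   $%.2f\n", claude
  printf "deepseek: $%.2f (%.0f%% cheaper)\n", deepseek, 100 * (1 - deepseek / claude)
  printf "cached:   $%.2f (%.0f%% cheaper)\n", cached,   100 * (1 - cached / claude)
}'
```

At this mix the discount lands around 92% uncached and 95% on full cache hits; sessions where nearly all tokens are cached input push the savings higher still.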

Pair this with Hmbown/DeepSeek-TUI, which picked up +1,277 stars in 24 hours as the Rust-native sibling of DeepClaude, and you have today's actual story: the agent loop and the model are now decoupled. The harness is one product. The brain is another. You can swap them.

This guide covers the four-line shim, the real quality tradeoffs nobody on Twitter is being honest about, and the cases where you should still pay full freight to Anthropic.

The Four-Line Shim, In Full

DeepSeek shipped an Anthropic-compatible endpoint at api.deepseek.com/anthropic. It speaks the same JSON schema Claude Code expects: same tool-call format, same streaming chunks, same messages array. Claude Code already respects the ANTHROPIC_BASE_URL env var (this isn't a hack — it's documented behavior the DeepSeek docs walk through).

DeepSeek's official Claude Code integration documentation page

So the entire shim is:

export ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic
export ANTHROPIC_AUTH_TOKEN=sk-...your-deepseek-key...
export ANTHROPIC_MODEL=deepseek-v4-pro
claude

That's it. Claude Code launches. The CLI doesn't know it's not talking to Anthropic. Tool calls work. File edits work. /resume, sub-agents, MCP servers — all of it works, because none of those features live on the model side.

💡 What deepclaude adds on top of the env vars. The repo's actual binary is a ~50-line Node HTTP proxy. It's not strictly required for normal coding sessions — the env vars alone work. The proxy exists for two narrow cases: routing Claude Code's WebSocket bridge auth back to Anthropic, and injecting per-subagent model overrides.
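Because the shim is nothing but environment variables, it is also easy to scope to a single session so the overrides never touch your normal `claude` setup. A sketch (the `deepclaude` function name is mine, not the repo's binary):

```shell
# Hypothetical convenience wrapper, not part of the deepclaude repo:
# runs Claude Code with the DeepSeek overrides inside a subshell, so your
# regular `claude` invocations keep talking to Anthropic.
deepclaude() {
  (
    export ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic
    export ANTHROPIC_AUTH_TOKEN="${DEEPSEEK_API_KEY:?set DEEPSEEK_API_KEY first}"
    export ANTHROPIC_MODEL=deepseek-v4-pro
    claude "$@"
  )
}
```

Because the exports happen inside `( ... )`, they vanish as soon as the function returns.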

Setup In Two Minutes

1. Get a DeepSeek API key

Sign up at platform.deepseek.com, top up $5 of credits. The same key works for direct API calls and the Anthropic-compatible endpoint.
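Optionally, smoke-test the key before wiring it into Claude Code. This sketch assumes the endpoint mirrors Anthropic's `/v1/messages` route and header conventions (the exact path is not quoted in this post), and skips itself when no key is set:

```shell
# Quick smoke test for a DeepSeek key against the Anthropic-compatible
# endpoint. The /v1/messages path and headers are assumed to mirror
# Anthropic's Messages API; adjust if the DeepSeek docs say otherwise.
if [ -z "$DEEPSEEK_API_KEY" ]; then
  echo "DEEPSEEK_API_KEY not set -- skipping live check"
else
  curl -s https://api.deepseek.com/anthropic/v1/messages \
    -H "x-api-key: $DEEPSEEK_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    -H "content-type: application/json" \
    -d '{"model":"deepseek-v4-pro","max_tokens":16,"messages":[{"role":"user","content":"ping"}]}'
fi
```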

2. Pick your model

| Model | Input | Cache hit | Output | When to pick |
| --- | --- | --- | --- | --- |
| deepseek-v4-pro | $0.27/M | $0.014/M | $1.10/M | Default. SWE-bench-Verified parity with Claude Opus 4.6. |
| deepseek-v4-flash | $0.14/M | $0.030/M | $0.28/M | Fast, smaller, surprisingly good for read-edit-test loops. |

Source: DeepSeek pricing docs.

3. Set the env vars

export ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic
export ANTHROPIC_AUTH_TOKEN=sk-deepseek-...
export ANTHROPIC_MODEL=deepseek-v4-pro

⚠️ One gotcha: subagent routing. Claude Code's sub-agents inherit the model setting from ANTHROPIC_DEFAULT_OPUS_MODEL, ANTHROPIC_DEFAULT_SONNET_MODEL, and ANTHROPIC_DEFAULT_HAIKU_MODEL. If those are unset, sub-agents may try to call claude-opus-4-7 on the DeepSeek endpoint and 404. Set all three:

export ANTHROPIC_DEFAULT_OPUS_MODEL=deepseek-v4-pro
export ANTHROPIC_DEFAULT_SONNET_MODEL=deepseek-v4-pro
export ANTHROPIC_DEFAULT_HAIKU_MODEL=deepseek-v4-flash
export CLAUDE_CODE_SUBAGENT_MODEL=deepseek-v4-pro

The DevTk team has a walkthrough with the same fix if you hit a sub-agent error.
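A small preflight script catches the unset-variable case before a sub-agent 404s. This is my own sketch, not part of the deepclaude repo:

```shell
# Preflight: warn about any routing variable that is still unset, since
# sub-agents fall back to Anthropic model names and 404 on DeepSeek.
missing=0
for var in ANTHROPIC_BASE_URL ANTHROPIC_AUTH_TOKEN ANTHROPIC_MODEL \
           ANTHROPIC_DEFAULT_OPUS_MODEL ANTHROPIC_DEFAULT_SONNET_MODEL \
           ANTHROPIC_DEFAULT_HAIKU_MODEL CLAUDE_CODE_SUBAGENT_MODEL; do
  eval "val=\${$var}"
  if [ -z "$val" ]; then
    echo "unset: $var"
    missing=$((missing + 1))
  fi
done
[ "$missing" -eq 0 ] && echo "all routing vars set"
```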

Where DeepSeek V4 Pro Actually Wins And Loses

What V4 Pro matches or wins on

  • SWE-bench Verified: V4 Pro 80.6 vs Claude Opus 4.7's 80.8 — within the noise floor (buildfastwithai analysis).
  • Terminal-Bench: V4 Pro 67.9% vs Claude Sonnet 4.6 65.4% (benchlm.ai).
  • LiveCodeBench: V4 Pro 93.5% vs 88.8% for Claude.
  • Single-file edits, narrow refactors, regex, SQL: essentially indistinguishable.

What V4 Pro loses on

  • Multi-file architectural reasoning — Claude Opus 4.7 still produces cleaner plans (AkitaOnRails benchmarks).
  • SWE-bench Pro: V4 Pro 55.4 vs Opus 4.7 64.3.
  • Long-horizon agent loops where the agent has to recover from its own mistakes.
  • Computer Use tooling — Codex Pro's GPT-5.4 leads Terminal-Bench 2.0 at 77.3% (Builder.io).

The honest verdict: for ~80% of normal coding work, V4 Pro is indistinguishable from Sonnet 4.6 and within a hair of Opus 4.7. For the hard 20% — large refactors, architecture, agent loops that need to self-correct — Claude is still worth the money.

What The Bill Actually Looks Like

| Workload | Sonnet 4.6 / day | V4 Pro / day | Savings |
| --- | --- | --- | --- |
| Light (5 sessions) | $1.50 | $0.18 | 88% |
| Medium (15 sessions) | $4.50 | $0.55 | 88% |
| Heavy (40 sessions) | $20–40 | $1.50–3.00 | 92–93% |
| Power-user "ran all day" | $80–150 | $4–8 | 95%+ |

The percentage gap grows with usage because cache hits come to dominate the input side of the bill once a session is more than a few turns deep.
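To make the cache effect concrete, here is the math for a hypothetical 20-turn session that re-sends a 50k-token context every turn, with everything after turn one hitting the cache (all numbers illustrative, priced at the V4 Pro rates above):

```shell
# Hypothetical 20-turn agent session: 50k input tokens re-sent per turn,
# 2k output tokens per turn. After turn one, input is all cache hits.
awk 'BEGIN {
  turns = 20; in_tok = 0.05; out_tok = 0.002          # millions of tokens
  no_cache   = turns * (in_tok * 0.27 + out_tok * 1.10)
  with_cache = in_tok * 0.27 + (turns - 1) * in_tok * 0.014 \
             + turns * out_tok * 1.10
  printf "no cache:   $%.3f\n", no_cache
  printf "with cache: $%.3f\n", with_cache
}'
```

Under these assumptions the cached session costs roughly a quarter of the uncached one, and the gap widens with every additional turn.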

When You Should NOT Use DeepClaude

  1. Enterprise code with a compliance review. DeepSeek is a Chinese company. Data goes to servers in China.
  2. Computer Use or browser-based sub-agents — Codex Pro and Claude both have a real lead here.
  3. System-design work — architecture conversations are where Claude's marginal IQ shows up. A bad architecture costs days of rework.

For everything else — feature work, bug fixes, refactors, scripts, glue code, dotfile tweaks — DeepClaude is the new default.

The Rust-Native Alternative: DeepSeek-TUI

Hmbown/DeepSeek-TUI is the Rust binary that ships its own agent loop. +1,277 stars in 24 hours. Single binary, no Node, no Python, ~12MB at idle.

Hmbown/DeepSeek-TUI on GitHub trending — +1,277 stars in 24 hours

cargo install deepseek-tui-cli --locked
cargo install deepseek-tui --locked
deepseek

The aisignal.dev write-up has a clean teardown of the architecture. Pick DeepClaude if you love Claude Code's UX and just want to swap the brain. Pick DeepSeek-TUI if you want a tool designed around DeepSeek from the first line.

Why This Matters Beyond Saving Money

For two years, the bull case for the frontier labs was: the model is the product, the harness is a commodity wrapper.

DeepClaude inverts the polarity. The harness is the product. Claude Code's sub-agents, MCP, /resume, hooks, plugins, IDE integrations — that's the moat. The model is the swappable component. Anthropic now has a product that runs on a competitor's brain and a competitor's bill, with Anthropic making nothing on the inference.

The Anthropic-compatible API spec is now an industry standard. DeepSeek implements it. OpenRouter implements it. Local Ollama implements it. The genie isn't going back.

Bottom Line

If you've been paying $200/month for Claude Max and you do mostly feature-implementation work: try DeepClaude this week. Spend $5 on DeepSeek credits. Set the four env vars. If the quality is acceptable, you've bought yourself an order of magnitude more runway. If not, you've spent twenty minutes and five dollars to find out.


