How I Built a Crypto Trading Bot with Claude Code in 3 Weeks

Building a crypto trading bot sounds glamorous until you're debugging why your scoring algorithm triggered a buy at 3 AM on a coin that dropped 40% by morning.

Here's what I learned building a production crypto trading system with Claude Code as my primary development tool.

The Architecture

The bot has 22 scoring components that analyze different market signals: volume spikes, price momentum, RSI divergence, order book depth, social sentiment, and more. Each component returns a weighted score between -1 and 1. The aggregate score determines buy/sell/hold.

The backend runs on Fastify 5 with TypeScript. PostgreSQL stores historical scores and trades. A React dashboard shows real-time portfolio status, active signals, and historical performance.

Why Claude Code Changed Everything

Before Claude Code, I was constantly context-switching between docs, Stack Overflow, and my editor. With Claude Code, the workflow became:

  1. Describe the scoring component I need
  2. Claude reads the existing codebase (CLAUDE.md + shared types)
  3. It generates the component following the established pattern
  4. I review, test, iterate

The key was the CLAUDE.md file. I defined the project structure, naming conventions, and the scoring component interface once. Every new component followed the same pattern automatically.
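To make that concrete, a CLAUDE.md for a project like this might contain something like the fragment below. The exact contents are my illustration; the post only says it defines project structure, naming conventions, and the scoring component interface.

```markdown
# Project conventions (illustrative CLAUDE.md fragment)

## Structure
- src/scoring/: one file per scoring component, kebab-case (rsi-divergence.ts)
- src/shared/types.ts: ScoringComponent and Score interfaces; import, never redefine

## Scoring components
- Implement the ScoringComponent interface from src/shared/types.ts
- Score.value is -1 to 1, Score.confidence is 0 to 1
- Always populate Score.reasoning with a human-readable explanation
- Export a single const named after the file (rsiDivergence)

## Testing
- Every component gets a Vitest spec next to it (rsi-divergence.spec.ts)
- Use historical fixtures from test/fixtures/, never live API calls
```

The point is not the specific rules but that they are written down once, so every generated component conforms without repeated corrections.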

22 Scoring Components in Detail

Each component is a standalone module with the same interface:

interface ScoringComponent {
  name: string;
  weight: number;
  analyze(data: MarketData): Promise<Score>;
}

interface Score {
  value: number; // -1 to 1
  confidence: number; // 0 to 1
  reasoning: string;
}

The reasoning field was critical for debugging. When a trade goes wrong, you can trace back through each component's reasoning to understand what the aggregate model "thought."
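The post doesn't show the actual aggregation logic, but one plausible sketch is a confidence-weighted average of the per-component scores, with a threshold deciding buy/sell/hold. The threshold value here is illustrative.

```typescript
// Sketch of confidence-weighted aggregation over scoring components.
// Weights, threshold, and names are illustrative assumptions.

interface Score {
  value: number;      // -1 to 1
  confidence: number; // 0 to 1
  reasoning: string;
}

interface ScoredComponent {
  name: string;
  weight: number;
  score: Score;
}

type Signal = "buy" | "sell" | "hold";

// Low-confidence components contribute proportionally less to the total.
function aggregate(results: ScoredComponent[], threshold = 0.3): Signal {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const r of results) {
    const w = r.weight * r.score.confidence;
    weightedSum += w * r.score.value;
    totalWeight += w;
  }
  const score = totalWeight > 0 ? weightedSum / totalWeight : 0;
  if (score > threshold) return "buy";
  if (score < -threshold) return "sell";
  return "hold";
}
```

Because each component carries its own reasoning string, the debugging step is just: log the full `ScoredComponent[]` array alongside the aggregate decision.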

Some components that worked well:

  • Volume-Price Divergence: When volume spikes but price stays flat, something is about to move
  • Multi-Timeframe RSI: RSI on 5m, 15m, 1h, 4h — agreement across timeframes = strong signal
  • Order Book Imbalance: >3:1 bid/ask ratio at key levels often precedes moves
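For illustration, a component like Volume-Price Divergence might look roughly like this against the interface shown earlier. The MarketData fields, the weight, and the numeric thresholds are my assumptions, not the bot's real values.

```typescript
// Hypothetical Volume-Price Divergence component. Field names
// (currentVolume, avgVolume, priceChangePct) and thresholds are
// illustrative; the post doesn't show the real MarketData shape.

interface MarketData {
  currentVolume: number;
  avgVolume: number;      // e.g. a 20-period moving average
  priceChangePct: number; // percent price change over the same window
}

interface Score {
  value: number;      // -1 to 1
  confidence: number; // 0 to 1
  reasoning: string;
}

interface ScoringComponent {
  name: string;
  weight: number;
  analyze(data: MarketData): Promise<Score>;
}

const volumePriceDivergence: ScoringComponent = {
  name: "volume-price-divergence",
  weight: 0.08,
  async analyze(data) {
    const volumeRatio = data.currentVolume / data.avgVolume;
    const priceFlat = Math.abs(data.priceChangePct) < 0.5;
    if (volumeRatio > 2 && priceFlat) {
      return {
        value: 0.7, // volume building under a flat price: something is loading
        confidence: Math.min((volumeRatio - 2) / 3, 1),
        reasoning: `Volume ${volumeRatio.toFixed(1)}x average with flat price`,
      };
    }
    return { value: 0, confidence: 0.2, reasoning: "No divergence detected" };
  },
};
```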

Components that looked good in backtests but failed live:

  • Social Sentiment: Too noisy, too slow. By the time sentiment shifts measurably, the move already happened
  • Correlation Clustering: BTC correlation breaks down exactly when you need it most — during crashes

Testing: 1000+ Tests

Every scoring component has unit tests with historical data. But the real value came from integration tests that replay entire market days and verify the aggregate score matches expected behavior.
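Stripped to its essence, the replay idea looks something like this: walk a recorded day of snapshots through the scorer and compare the emitted signals to expectations. The `Snapshot` shape and threshold are illustrative; in the actual suite this would live inside a Vitest `it(...)` block with fixture data.

```typescript
// Simplified replay harness: map recorded aggregate scores to signals
// and compare against the expected sequence. Names are illustrative.

type Signal = "buy" | "sell" | "hold";

interface Snapshot {
  timestamp: number;
  score: number; // aggregate score computed for this point in the day
}

function replayDay(snapshots: Snapshot[], threshold = 0.3): Signal[] {
  return snapshots.map((s) =>
    s.score > threshold ? "buy" : s.score < -threshold ? "sell" : "hold"
  );
}
```

The value of this style is that a regression in any one of the 22 components shows up as a diff in the signal sequence for a known day, which is far easier to reason about than a single failing unit assertion.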

pnpm test
# 1,047 tests passing
# Coverage: 89% statements, 94% branches on scoring

Claude Code generated most of the test scaffolding. I described the edge cases, it wrote the tests. Saved days of tedious work.

15 Deploys Per Day

With Docker + Traefik on a VPS, deploying is one command:

rsync -az . vps:~/crypto-bot/ && ssh vps 'cd crypto-bot && docker compose up -d --build'

During active development, I was deploying 10-15 times per day. Hot-fixing a scoring weight, deploying, watching the next few signals, adjusting. The tight feedback loop was essential.

Lessons Learned

  1. Backtesting lies. Every backtest has survivorship bias. Your bot will encounter market conditions that don't exist in your test data.

  2. Start with paper trading. Run your bot with zero real money for at least 2 weeks. Track what it would have done.

  3. Position sizing matters more than signal quality. A mediocre signal with good risk management beats a great signal with poor position sizing.

  4. Claude Code excels at repetitive patterns. 22 components that all follow the same interface? Perfect for AI-assisted development. Novel algorithm design? Still needs human thinking.

  5. CLAUDE.md is your best investment. 30 minutes writing a good CLAUDE.md saves hours of correcting AI-generated code that doesn't match your patterns.
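Point 3 above can be sketched as fixed-fraction sizing: risk a fixed percentage of equity per trade, with the unit count derived from the stop distance. The 1% figure is a common convention, not a value from the post.

```typescript
// Fixed-fraction position sizing sketch: size the position so that
// hitting the stop loses exactly riskFraction of account equity.
// The 1% default is a common convention, assumed for illustration.

function positionSize(
  equity: number,      // account equity in quote currency
  entryPrice: number,
  stopPrice: number,
  riskFraction = 0.01, // risk 1% of equity per trade
): number {
  const riskPerUnit = Math.abs(entryPrice - stopPrice);
  if (riskPerUnit === 0) return 0; // no stop distance, no position
  return (equity * riskFraction) / riskPerUnit;
}
```

With $10,000 equity, entry at $100, and a stop at $95, this risks $100 across a $5-per-unit stop distance, so the position is 20 units regardless of how strong the signal looked.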

The Stack

  • Backend: Fastify 5 + TypeScript + Prisma 7 + PostgreSQL
  • Frontend: React 19 + Vite + Mantine (dashboard)
  • Infra: Docker + Traefik + VPS
  • AI Dev: Claude Code with custom CLAUDE.md
  • Testing: Vitest, 1000+ tests

Would I Do It Again?

Yes, but I'd skip the social sentiment component entirely and invest that time in better position sizing logic. The scoring is 30% of the system. Risk management is 70%.

And I'd use Claude Code from day one instead of starting manually and migrating halfway through. The CLAUDE.md-driven workflow is genuinely faster for structured, pattern-heavy codebases.
