DEV Community

Senthil Achari

How I Built an AI-Powered Trading Platform Solo in 30 Days (FastAPI + React + Rust + OpenRouter)
I built Chai Street — an AI-powered trading intelligence platform — solo, in one month. Nights and weekends. 6-12 hours a day. Eyes hurting, leg muscles gone, cortisol through the roof. But the pain was sweet.
This is the full technical breakdown — every major decision, what worked, what didn't, and what I'd do differently.

## The stack at a glance
Frontend → React 19 + Vite + Tailwind CSS (Vercel)
Backend → FastAPI + Uvicorn + SQLite (Railway)
Scanners → Python (LEAP) + Node.js + Rust binary (CSP)
LLM → OpenRouter (Claude/GPT/Gemini via one API key)
Auth → JWT HS256 + PBKDF2-SHA256 password hashing
Billing → Stripe
Email → Resend
13 external providers. One developer.

## Why FastAPI over Django or Flask
Django is too opinionated for a data-heavy API — I don't need its ORM, admin panel, or template engine. Flask is too bare-bones; I'd be rebuilding request validation from scratch.
FastAPI hits the sweet spot:

Pydantic validates every request automatically
Native async support
Auto-generated OpenAPI docs — invaluable when you're solo and need to remember your own endpoints at 1am

## Why SQLite in production (yes, really)
This raises eyebrows. Here's the honest case.
For a single-server application under 1,000 users, SQLite handles thousands of reads per second with zero connection pooling, zero configuration, and zero separate service to manage. Railway's persistent volume gives me durability across deploys. No ORM — raw SQL via Python's sqlite3 module.
When I need multi-instance scaling, the migration to Postgres is a query-by-query port — not a framework overhaul. That day isn't today.
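The "no ORM" approach looks roughly like this. A minimal sketch with a hypothetical schema; the real tables differ, but the pattern is raw, parameterized SQL against `sqlite3`:

```python
import sqlite3

# In production this would be a file on Railway's persistent volume,
# not an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA journal_mode=WAL")  # better concurrent reads on a real file
conn.execute(
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)"
)

def add_user(email: str) -> int:
    # Parameterized queries: no string interpolation, no injection risk.
    cur = conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    conn.commit()
    return cur.lastrowid

def get_user(email: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
```

Because the queries are plain SQL, porting to Postgres later means swapping the driver and touching each query, not unwinding an ORM.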

## The Rust binary for Black-Scholes
The CSP screener prices hundreds of option contracts per scan. Python was too slow. The solution: a compiled Rust binary communicating via stdin/stdout JSON.
Node.js CSP screener
→ shells out to Rust binary
→ passes option parameters as JSON via stdin
→ receives priced contracts via stdout
→ roughly 100x faster than equivalent Python
No FFI complexity. Clean interface. The binary just works.
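The caller's side of that stdin/stdout JSON contract is a few lines in any language. Here is a Python sketch of the same pattern (the real caller is Node.js); a trivial echo command stands in for the Rust pricer, since the binary path is project-specific:

```python
import json
import subprocess
import sys

# Stand-in for the compiled Rust pricer: echoes the JSON it receives.
# In production this would be the path to the Black-Scholes binary.
ECHO_BINARY = [sys.executable, "-c",
               "import sys, json; print(json.dumps(json.load(sys.stdin)))"]

def price_contracts(contracts, binary=None):
    """Write option parameters as JSON to stdin, read results from stdout."""
    proc = subprocess.run(binary or ECHO_BINARY,
                          input=json.dumps(contracts),
                          capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)
```

One process spawn per scan batch, not per contract, keeps the subprocess overhead negligible.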

## OpenRouter instead of direct API keys
One API key. Access to Claude, GPT-4, Gemini, Llama, Mixtral, and dozens of others.
```python
import os

# Switch models without redeploying
model = os.environ.get("STEEP_LLM_MODEL", "anthropic/claude-3-sonnet")
```


Tune cost vs. depth by changing one environment variable. No vendor lock-in. No multiple billing relationships. When a cheaper model improves, I switch in 30 seconds.
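Putting the env var to work, a hedged sketch of building an OpenRouter chat-completion request with only the stdlib (the helper name and `max_tokens` value are illustrative, not Chai Street's code):

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(prompt: str) -> urllib.request.Request:
    # Model comes from the environment, so swapping providers is a
    # config change, not a deploy.
    model = os.environ.get("STEEP_LLM_MODEL", "anthropic/claude-3-sonnet")
    body = json.dumps({
        "model": model,
        "max_tokens": 512,  # cap every call to bound cost
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(OPENROUTER_URL, data=body, headers=headers)
```

Because OpenRouter mirrors the OpenAI chat-completions shape, the same payload works across Claude, GPT, Gemini, and the rest.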

---

## The data caching strategy

External API calls were killing scan performance. Solution: aggressive SQLite caching keyed by ticker and data type.

Fundamentals → cached 7 days (change quarterly)
Options chains → cached 1-4 hrs (change frequently)
Historical prices → cached 18 hrs
VIX regime data → cached 4 hrs


This cut external API calls by ~80% and made scans dramatically faster. Every cache miss triggers a fresh fetch with graceful fallback across three data sources: Schwab API → Polygon.io → yfinance.
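The cache itself is one small table. A minimal sketch (table name, TTL keys, and helper signature are hypothetical; the real TTLs are the ones listed above):

```python
import json
import sqlite3
import time

# Per-data-type TTLs in seconds, mirroring the table above.
TTL = {
    "fundamentals": 7 * 24 * 3600,
    "options_chain": 4 * 3600,
    "prices": 18 * 3600,
    "vix": 4 * 3600,
}

def get_cached(db, ticker, kind, fetch):
    """Return cached data if fresh; otherwise call fetch() and cache it."""
    db.execute("""CREATE TABLE IF NOT EXISTS cache
                  (ticker TEXT, kind TEXT, payload TEXT, fetched_at REAL,
                   PRIMARY KEY (ticker, kind))""")
    row = db.execute(
        "SELECT payload, fetched_at FROM cache WHERE ticker=? AND kind=?",
        (ticker, kind)).fetchone()
    if row and time.time() - row[1] < TTL[kind]:
        return json.loads(row[0])  # cache hit
    data = fetch(ticker)  # miss: fresh fetch (provider fallback lives in fetch)
    db.execute("INSERT OR REPLACE INTO cache VALUES (?,?,?,?)",
               (ticker, kind, json.dumps(data), time.time()))
    db.commit()
    return data
```

The `fetch` callable is where the Schwab → Polygon.io → yfinance fallback chain would live.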

---

## The scanner architecture

The options scanner runs a 5-stage pipeline across ~550 S&P 500 and Nasdaq 100 tickers:

Stage 1 → Universe construction
Stage 2 → Fast filter: RSI + SMA(200) trend
Stage 3 → Fundamentals gate: revenue growth, FCF, margins
Stage 4 → Options chain analysis: delta, spread, OI, IV rank
Stage 5 → News sentiment scoring


Each candidate scores across four quadrants:

Technicals 30%
Fundamentals 30%
Options Quality 20%
Catalyst 20%
Output: ranked list of actionable setups classified as Conservative, Moderate, or Aggressive.
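The quadrant weighting reduces to a few lines. A sketch of the composite, assuming each quadrant is scored 0-100 (function and key names are illustrative):

```python
# The four quadrant weights from the scanner, as fractions.
WEIGHTS = {
    "technicals": 0.30,
    "fundamentals": 0.30,
    "options_quality": 0.20,
    "catalyst": 0.20,
}

def composite_score(quadrants: dict) -> float:
    """Weighted 0-100 composite across the four quadrants."""
    return sum(WEIGHTS[name] * quadrants[name] for name in WEIGHTS)
```

Ranking the survivors of stage 5 by this composite produces the final list; the Conservative/Moderate/Aggressive labels are a separate risk classification.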

## The commit-driven project management page
This is the feature I'm most proud of that nobody talks about.
No Jira. No Notion. No separate project management tool. Instead I built a Project tab directly inside Chai Street that updates automatically on every commit and push.
The gate script enforces it:
```javascript
// scripts/project_page_gate.js
// Runs as a pre-push hook
// If you push product code without updating the project log — it blocks you
// "Where we are" never drifts from the repo. Ever.
```


The workflow:

Write code
→ git commit
→ gate script runs
→ Project page updates automatically
→ push goes through
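The gate's core check is simple: if the push touches product code, the project log must be in the same changeset. A Python sketch of that logic (the paths here are hypothetical; the real hook is the Node script above):

```python
# Hypothetical repo layout; the real gate lives in scripts/project_page_gate.js.
PRODUCT_PREFIXES = ("backend/", "frontend/", "scanners/")
PROJECT_LOG = "project/SHIP_LOG.md"

def gate_allows_push(changed_files):
    """Block pushes that change product code without updating the project log."""
    touched_product = any(f.startswith(PRODUCT_PREFIXES) for f in changed_files)
    return (not touched_product) or (PROJECT_LOG in changed_files)
```

In the real hook the changed-file list would come from something like `git diff --name-only` against the remote ref.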


What makes this powerful for LLM-assisted development specifically:

When you're building with Claude or Cursor across multiple sessions, **context drift is your biggest enemy.** The LLM doesn't remember what you built last Tuesday. You might not either at 1am.

The Project page solves this:

Start new session
→ paste Project page snapshot into context
→ LLM knows exactly where you are
→ no re-explaining, no drift, no "wait what did we build already"


Every session starts clean. Every push keeps the record honest. The project page becomes your single source of truth for both you and the AI. For context-managed AI development — it's infrastructure, not a nice-to-have.

The same admin hub surfaces everything else you need as a solo operator:

Stats → P95 latency, error rate, DAU/MAU, conversion, 7-day trend charts
Users → signups, promo redeemers, API activity
Providers → per-integration health, bottleneck hints
Status → NOC-style health, LLM wallet check, API smoke tests, scanner last-run
Project → commit-linked ship log


This replaced standup meetings, sprint planning, status emails, and half my README.

---

## Managing context and cost with LLMs

The biggest practical challenge building solo with AI assistance wasn't the code — it was managing context bloat across long sessions.

**What worked:**
- Break large features into isolated Cursor sessions
- Keep a running `CONTEXT.md` with current state of each module
- Set `max_tokens` caps on every LLM call to bound costs
- Use cheaper models for iteration, better models for final output

**What didn't work:**
- Long single sessions that drift — the LLM loses track of your architecture
- Asking for too much in one prompt — surgical prompts beat broad ones every time

---

## The deployment split

Vercel → React SPA
Instant deploys, global CDN
Preview deployments per PR
Free for frontend

Railway → FastAPI backend
Persistent SQLite volume
Background scanner processes
~$5-20/month depending on usage
Monthly infra cost is intentionally tiny. The variable line item is LLM usage as Ticker IQ traffic grows.

## Security — what I learned the hard way
A user ran my site through securityheaders.io and sent me a D grade screenshot. Here's what I built in response:

PBKDF2-SHA256 password hashing with live strength meter
Email verification — 6-digit codes, 10-minute expiry
Brute-force lockout — 5 failed attempts per email+IP → 15-min lockout
Sliding-window rate limiting — 100 req/hr per user
Full security header suite — CSP, X-Frame-Options, X-Content-Type-Options, Referrer-Policy
XSS mitigation — HTML stripping on backend, safeUrl() on frontend

Grade went from D to A. Build this day one. Not day thirty.
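A sliding-window limiter like the one described (100 req/hr per user) fits in a few lines of stdlib Python. The class name and API here are illustrative, not the production code:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key."""

    def __init__(self, limit=100, window=3600.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        q = self.hits[key]
        while q and now - q[0] >= self.window:
            q.popleft()  # drop hits that have aged out of the window
        if len(q) >= self.limit:
            return False  # over the limit: reject this request
        q.append(now)
        return True
```

The same structure, keyed by email+IP with a small limit, doubles as the brute-force lockout.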

## What I'd do differently
- **Start with security headers.** I retrofitted them. Should have been day one.
- **Cache earlier.** I optimized caching in week 3. Should have been week 1.
- **Separate scanner processes sooner.** Running scanners inline with the API caused timeout issues. Background processes with status polling was the right architecture — I just got there late.
- **Use OpenRouter from day one.** I started with direct OpenAI keys and switched. The flexibility of model-switching via env var has saved me multiple times.

## What's next

Natural language screener queries — "show me oversold large-cap tech with options under $20"
More options strategies like spreads and 0DTEs (OH YEAH!), plus futures alerts based on ICT 2022 concepts, liquidity sweeps, and more
Market view — global macro overview with sector bias
Command center with VIX and capital allocation guide
Ticker IQ Overview — fundamentals, technicals, and an AI Narrative in plain English. Ford right now: “A rusted American icon trading like scrap metal, but even junk has value at the right price.”
Per-portfolio risk analysis cross-referenced against scanner output
WebSocket integration for real-time VIX and scanner progress
PostgreSQL migration when multi-instance scaling is needed

If you're building at the intersection of AI and finance, or have questions about any of these decisions — drop a comment. Always happy to talk architecture.
Explore the platform: www.chaistreet.ai
Full story on Medium: https://medium.com/@senthilachari/from-dental-school-to-devops-to-building-an-ai-trading-platform-the-chai-street-story-32785d062d5f
