DEV Community

Rakesh Paul
I Built a Full-Stack F1 Fantasy Platform in 4 Weeks — Solo, With AI Agents


This post was originally published on binaryroute.com.

In late December 2025, during a short holiday break, I opened my laptop with a simple idea — build the F1 fantasy league platform I'd always wanted as a fan.

Four weeks later, Formula1.Plus is live. 70+ database tables. 30+ API modules. Race predictions, live leaderboards, private leagues, telemetry dashboards, news aggregation, community features, and a full admin panel.

Built and shipped by one developer.

This post is a technical walkthrough of how I pulled it off — the framework I built first, the stack I chose, the AI workflow that made it possible, and what I'd do differently.

Formula1.Plus Dashboard


The Problem With Building Solo

If you've tried building a full-stack product alone, you know the pain. You're the architect, the frontend dev, the backend dev, the DBA, the DevOps person, and the QA — all at once.

Every context switch costs you. You spend more time on plumbing than on the product. Projects stall, scope shrinks, or you burn out halfway through.

A project with this scope — predictions engine, scoring system, leaderboards, leagues, news aggregation with semantic search, telemetry dashboards, background job processing, passkey auth — would have been a 6-month grind. Probably abandoned by month 3.

Two things changed the math: a framework I built to eliminate boilerplate and AI agents as pair programmers.


Step 1: Build the Framework First

Before writing a single line of F1 code, I built ProjectX — an opinionated full-stack TypeScript framework with a CLI.

The idea: define your database schema once, run one command, and get everything generated.

```shell
projectx crud --models drivers,races,predictions
```

That single command generates:

  • API routes with Hono, including input validation via Zod
  • Service layer with business logic and authorization hooks
  • Repository layer with Drizzle ORM queries
  • TanStack Query hooks for the frontend with proper cache keys
  • Unit test scaffolds for the service layer

All type-safe. All wired up with dependency injection. No manual API type definitions — the frontend imports route types from the backend via Hono's typed client.

The Architecture

```text
HTTP Layer (Hono Routes)
    ↓
Middleware (Auth, Rate Limiting, Logging, Caching)
    ↓
DI Container
    ↓
Service Layer (Business Logic + Authorization)
    ↓
Repository Layer (Drizzle ORM)
    ↓
PostgreSQL
```

Every feature follows this pattern. Every feature gets its own folder. The AI agents I used later could navigate this structure instantly because it was consistent everywhere.
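To make the layering concrete, here's a dependency-free sketch of the service/repository split with constructor injection. The names and logic are illustrative, not ProjectX's actual generated output:

```typescript
// Repository layer: owns data access (Drizzle in the real app; an interface here).
interface DriverRepository {
  findById(id: string): Promise<{ id: string; name: string } | null>
}

// Service layer: business logic lives here, depending only on the interface.
class DriverService {
  constructor(private repo: DriverRepository) {}

  async getDriver(id: string) {
    const driver = await this.repo.findById(id)
    if (!driver) throw new Error(`driver not found: ${id}`)
    // authorization hooks would run here before returning data
    return driver
  }
}

// The DI container wires a concrete repository into the service.
// Tests swap in an in-memory fake with zero changes to the service:
const fakeRepo: DriverRepository = {
  findById: async (id) =>
    id === 'VER' ? { id: 'VER', name: 'Max Verstappen' } : null,
}
const service = new DriverService(fakeRepo)
```

Because the service never touches the database directly, the unit test scaffolds the CLI generates can exercise business logic against fakes like the one above.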

The Monorepo

```text
f1plus/
├── apps/
│   ├── api/          # Hono backend
│   └── web/          # TanStack Start frontend
├── packages/
│   ├── db/           # Drizzle schemas + migrations
│   ├── db-sync/      # F1 data synchronization
│   ├── types/        # Shared TypeScript types
│   ├── ui/           # shadcn/ui component library
│   ├── emails/       # React Email templates
│   ├── env/          # Zod-based env validation
│   └── tsconfig/     # Shared TS configs
```

Seven shared packages. Two apps. One pnpm workspace. Everything shares types, nothing drifts.
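The workspace wiring itself is tiny — standard pnpm configuration, with globs matching the tree above:

```yaml
# pnpm-workspace.yaml
packages:
  - 'apps/*'
  - 'packages/*'
```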

This structure was the single most important decision. Not because it's novel — because it gave me and the AI agents a predictable codebase from day one.


Step 2: The Stack

I chose each piece deliberately. Here's what and why.

Frontend: TanStack Start + React 19

TanStack Start is still relatively new, but it clicked for this project. SSR-ready, file-based routing via TanStack Router, and TanStack Query baked in for data fetching.

The routing is fully type-safe — route params, search params, loaders — all typed. Combined with Hono's typed client on the backend, I get end-to-end type safety from the database to the component without writing a single manual type.

```typescript
// Frontend hook — generated by ProjectX CLI
export function useDriverStandings(seasonId: string) {
  return useQuery({
    queryKey: ['driver-standings', seasonId],
    queryFn: async () => {
      const res = await api.standings.drivers.$get({ query: { seasonId } })
      return res.json() // typed from the route's response schema
    },
  })
}
```

The query function is typed from the Hono route definition. Change the API response shape and TypeScript catches it in the frontend immediately.
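Hono's client derives this with type inference over the app's route definitions. The core idea can be sketched without the library — this is a simplified illustration of path-keyed typing, not `hono/client`'s real implementation:

```typescript
// A route map: path -> response shape. hono/client infers the equivalent
// from the backend's app type; here it's written by hand for illustration.
type Routes = {
  '/standings/drivers': { driverId: string; points: number }[]
  '/races/next': { name: string; round: number }
}

// A typed fetch wrapper: the return type follows from the path literal.
async function apiGet<P extends keyof Routes>(
  path: P,
  fetcher: (p: string) => Promise<unknown> = (p) => fetch(p).then((r) => r.json()),
): Promise<Routes[P]> {
  return (await fetcher(path)) as Routes[P]
}

// Change a route's shape in `Routes` and every call site that
// mishandles the response fails to compile.
```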

Styling: Tailwind CSS v4 + shadcn/ui

Tailwind v4 with CSS custom properties for theming. I defined a set of design tokens:

```css
--f1-bg-card
--f1-bg-secondary
--f1-border
--f1-text
--f1-text-muted
--f1-red
```

These resolve to different values in light and dark mode. Every component uses these tokens instead of hardcoded colors. This made the entire UI theme-able with zero per-component overrides.
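A sketch of how that resolution works — only the token names come from the project; the values and the `.dark` selector are illustrative:

```css
/* Light mode defaults */
:root {
  --f1-bg-card: #ffffff;
  --f1-text: #15151e;
  --f1-red: #e10600;
}

/* Dark mode overrides: same names, different values */
.dark {
  --f1-bg-card: #1f1f2b;
  --f1-text: #f7f4f1;
}

/* Components reference tokens, never raw colors */
.card {
  background: var(--f1-bg-card);
  color: var(--f1-text);
}
```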

shadcn/ui gives you accessible, unstyled primitives that you own. No dependency lock-in. I customized every component to match the F1 aesthetic.

Backend: Hono

Hono is a 15KB web framework that runs everywhere — Node, Cloudflare Workers, Deno, Bun. I chose it for three reasons:

  1. Typed routes — The hono/client package gives you zero-runtime-overhead type inference. The frontend knows every route's request/response shape.
  2. Middleware composition — Rate limiting, auth, logging, body limits, CORS — all composable.
  3. Performance — It's fast. Measurably fast.

```typescript
// Rate limiting tiers
const rateLimits = {
  global: { max: 200, window: '1m' },
  mutations: { max: 30, window: '1m' },
  expensive: { max: 20, window: '1m' },
}
```
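The tiers plug into middleware, but the counting logic underneath is small. A minimal fixed-window counter, for illustration — the production version would keep this state in Redis so limits hold across instances:

```typescript
type WindowState = Map<string, { windowStart: number; count: number }>

// Returns true if the request is allowed under `max` requests per `windowMs`.
function allowRequest(
  state: WindowState,
  key: string,            // e.g. `${clientIp}:${tier}`
  max: number,
  windowMs: number,
  now: number = Date.now(),
): boolean {
  const entry = state.get(key)
  if (!entry || now - entry.windowStart >= windowMs) {
    // First request in a fresh window: reset the counter.
    state.set(key, { windowStart: now, count: 1 })
    return true
  }
  entry.count += 1
  return entry.count <= max
}
```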

Database: Drizzle ORM + PostgreSQL + pgvector

Drizzle ORM is the sweet spot between raw SQL and heavy ORMs. Type-safe queries, zero runtime overhead, and the schema definitions are plain TypeScript.

```typescript
import { pgTable, text, varchar, date } from 'drizzle-orm/pg-core'

export const drivers = pgTable('drivers', {
  id: text('id').primaryKey(),
  name: text('name').notNull(),
  abbreviation: varchar('abbreviation', { length: 3 }),
  nationality: text('nationality'),
  dateOfBirth: date('date_of_birth'),
  // ...
})
```

I added pgvector for semantic search on news articles — embedding articles with HuggingFace Transformers and querying by similarity. This lets users search F1 news by meaning, not just keywords.
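pgvector does the similarity math inside Postgres (its `<=>` operator is cosine distance, so ranking is `ORDER BY embedding <=> $query LIMIT n`). The computation it ranks by looks like this — a plain TypeScript analogue for illustration, not the app's actual query path:

```typescript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Rank articles by similarity to the query embedding — the in-memory
// equivalent of pgvector's ORDER BY ... <=> ... LIMIT n.
function topMatches(
  articles: { id: string; embedding: number[] }[],
  query: number[],
  n: number,
): string[] {
  return [...articles]
    .sort((x, y) =>
      cosineSimilarity(y.embedding, query) - cosineSimilarity(x.embedding, query))
    .slice(0, n)
    .map((a) => a.id)
}
```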

Background Jobs: BullMQ + Redis

Five job workers handle async processing:

  • Scoring worker — calculates race scores and updates leaderboards
  • News sync — fetches and processes F1 news articles
  • Email worker — transactional emails via React Email + Resend
  • F1DB sync — auto-syncs historical data from F1DB, the community-maintained open-source F1 dataset on GitHub
  • Task worker — generic async tasks

Bull Board provides an admin dashboard for monitoring all queues.

Auth: BetterAuth + Passkeys

BetterAuth handles OAuth (Google, Discord, X) and passkey/WebAuthn support. Passkeys are the future of auth — no passwords, phishing-resistant, biometric. It took one afternoon to set up.


Step 3: AI Agents as the Development Team

This is the part that made 4 weeks possible.

I used Claude Code, OpenAI Codex, and Gemini throughout the entire build — not as autocomplete, but as collaborators that could hold the full codebase in context.

Claude Code was the core developer — writing, refactoring, and debugging code directly in the codebase. Claude, Codex, and Gemini served as architects: every non-trivial feature went through an extensive design process where I'd gather perspectives from multiple models before settling on a final implementation plan. Different models catch different edge cases, and the overlap builds confidence.

What AI agents actually did

Feature scaffolding. I'd describe a feature ("add a predictions system where users pick drivers for each race, with a lock-of-the-week mechanic for bonus points"), and the agent would generate the schema, the service, the routes, the validation, and the frontend hooks. Not perfect on the first pass, but 80% there. I'd review, adjust, and iterate.

Parallel code reviews. When I needed to migrate 50+ component files from hardcoded bg-white/10 opacity patterns to CSS custom properties for light mode support, I launched three AI agents in parallel — each handling a batch of files, following the same mapping guide. What would have been a full day of tedious find-and-replace was done in minutes with consistent results.

Debugging. One example: the track circuit SVG component had a blur effect from 8 stacked CSS drop-shadow filters that were compounding exponentially. The agent identified the root cause (each filter applies to the cumulative result of all previous filters), removed the outline system, and adjusted stroke widths — a fix I might have spent an hour on.

Consistency. As the codebase grew past 50+ components, the AI kept patterns consistent — same naming conventions, same file structure, same validation approach. This is where solo developers usually start cutting corners.

Why structure matters for AI

Here's the key insight: AI agents are only as good as the patterns you give them.

ProjectX's opinionated architecture meant the AI always had guardrails:

  • It knew services go in features/<name>/<name>.service.ts
  • It knew validation uses Zod schemas in features/<name>/validations.ts
  • It knew routes are thin — they delegate to services
  • It knew the frontend hooks follow TanStack Query conventions

Without that consistency, you're just generating spaghetti faster. The framework wasn't just for me — it was for the AI too.


Step 4: Deployment — Cheaper Than You Think

The deployment story is one of my favorite parts.

Frontend: Cloudflare Workers

TanStack Start builds to Cloudflare Workers via the Vite plugin. The frontend runs at the edge, globally distributed, with near-zero cold starts.

Cost: practically free at this scale. Cloudflare's free tier is generous.

Backend: Railway

Railway runs the Hono API, PostgreSQL, and Redis. Docker multi-stage build keeps the image lean. Health checks, auto-restarts, and deploy previews are built in.

```toml
# railway.toml
[deploy]
healthcheckPath = "/health/ready"
healthcheckTimeout = 30
restartPolicyType = "on_failure"
restartPolicyMaxRetries = 5
```

CI/CD: GitHub Actions

Two workflows:

  1. CI — lint, type-check, build, test on every PR
  2. Deploy — manual trigger, runs migrations first, then deploys API to Railway and frontend to Cloudflare

```text
Push to main → CI passes → Trigger deploy →
  Run migrations → Deploy API (Railway) → Deploy Web (Cloudflare)
```
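The deploy workflow roughly takes this shape — a hedged sketch where the job layout matches the flow above, but the package filters, commands, and secret names are assumptions, not the project's actual workflow file:

```yaml
# .github/workflows/deploy.yml — illustrative sketch
name: Deploy
on: workflow_dispatch   # manual trigger

jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install --frozen-lockfile
      - run: pnpm --filter db migrate
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}

  deploy-api:
    needs: migrate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: railway up --service api
        env:
          RAILWAY_TOKEN: ${{ secrets.RAILWAY_TOKEN }}

  deploy-web:
    needs: migrate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm --filter web deploy   # wrangler deploy under the hood
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
```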

Total monthly cost

The entire production infrastructure (API server, PostgreSQL, Redis, edge-deployed frontend, CI/CD) costs approximately $20/month, or closer to $10/month with serverless database and Redis options. No Kubernetes. No Terraform. No DevOps engineer.


What's Inside Formula1.Plus

Here's what shipped in 4 weeks:

Predictions & Scoring

  • Race predictions with driver and constructor picks
  • Lock-of-the-week mechanic for bold predictions (bonus points)
  • Last place picks, constructor top-3, bonus picks
  • Automated scoring engine via background workers
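To give a feel for how a lock-of-the-week bonus composes with base points, here's a simplified sketch. The point values and the wrong-lock penalty are invented for illustration; the real scoring engine covers many more pick types:

```typescript
interface Prediction {
  predictedWinner: string
  isLockOfTheWeek: boolean
}

// Illustrative rules: 25 points for a correct winner pick, doubled when
// locked; a wrong lock costs a small penalty so locking is a real gamble.
function scorePrediction(p: Prediction, actualWinner: string): number {
  const correct = p.predictedWinner === actualWinner
  const base = correct ? 25 : 0
  if (!p.isLockOfTheWeek) return base
  return correct ? base * 2 : -5
}
```

The scoring worker runs logic like this over every prediction after a race, then writes the aggregated totals to the leaderboards.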

Leaderboards & Leagues

  • Live leaderboards with all-time and per-championship rankings
  • Private leagues with invite codes
  • League-specific leaderboards and standings

Telemetry & Data

  • Driver DNA breakdowns and performance analysis
  • Historical data auto-synced from F1DB, a community-maintained open-source dataset
  • Circuit profiles with past results and statistics
  • Recharts-powered visualizations

Community

  • Grand Stand — polls, discussions, community engagement
  • News aggregation with semantic search (pgvector)
  • Activity feeds and social features

Admin

  • Event management and prediction configuration
  • Queue monitoring via Bull Board
  • Audit logs, contact management, poll templates

Auth & Infrastructure

  • OAuth (Google, Discord, X) + passkey/WebAuthn
  • Rate limiting (tiered: global, mutations, expensive queries)
  • OpenTelemetry for distributed tracing
  • Structured logging with Pino

Feature Overview


What's Next

The F1 season is approaching, and I'm getting early users in before Round 1.

More importantly: ProjectX — the framework that made this possible — is going open source soon. A single monorepo CLI that scaffolds web, mobile, and browser extensions with the full architecture out of the box.

The plugin system, the CRUD generator, the Railway and Cloudflare deployment presets — all of it.

If you want to follow the open source drop, I'll be posting updates on X/Twitter and the repo will be at github.com/your-handle/projectx.


TL;DR

  • Built a full F1 fantasy platform in 4 weeks, solo
  • Built a CLI framework first to eliminate boilerplate (ProjectX)
  • Used AI agents as pair programmers, not just autocomplete
  • Stack: TanStack Start + Hono + Drizzle + PostgreSQL + Redis
  • Deployed to Cloudflare Workers + Railway for approximately $20/month (~$10 with serverless options)
  • ProjectX going open source soon

Try it: formula1.plus


What questions do you have about the build, the stack, or AI-assisted development? I'll be in the comments.
