Ujjawal Tyagi
Real-Time Cricket at Scale: The Architecture Behind a Live Scoring + Opinion Trading Platform

webdev, architecture, node, kafka

India consumes cricket like no other country on earth. When Kohli walks out to bat, millions of users hit refresh simultaneously. When a wicket falls, chat rooms explode. When an opinion-trading market opens on the next ball, orders pour in at rates you'd expect from a small stock exchange.

Building a platform that holds up under that load — and makes money from it — is an interesting engineering problem. Here's how we built Cricket Winner at Xenotix Labs, a real-time cricket intelligence platform with live scores, news, and opinion trading in one app.

Three user experiences, one platform

Cricket Winner isn't a single product. It's three products glued together:

  1. Live score engine — ball-by-ball updates synced within seconds of the actual ball being bowled
  2. News feed — minute-by-minute cricket news and editorial content, personalized per user
  3. Opinion trading — a prediction market where users buy and sell "yes/no" contracts on cricket outcomes

Each subsystem has wildly different engineering constraints. The trick was building a shared backbone that doesn't compromise any of them.

The fan-out problem

When a ball is bowled and the score changes, every active user needs to know — within 1–2 seconds. We're talking hundreds of thousands of concurrent connections during a peak match.

Polling is out (wasteful, laggy, and kills battery life on mobile). Server-Sent Events are good but one-way. We went with WebSockets backed by a Redis pub/sub layer.

The flow: a data ingestion worker pulls from our score provider's feed. Every score delta is published to a Redis channel keyed by match_id. A cluster of WebSocket gateway nodes subscribes to Redis and fans out to connected clients. Clients receive a delta, not a full state refresh, which saves bandwidth.
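
The delta idea can be sketched as a pure function (the function name and state shape here are illustrative, not from our actual codebase): compare the previous and next match state and publish only the fields that changed.

```javascript
// Illustrative sketch: compute the minimal delta between two match states.
// Only the changed fields get published to the Redis channel.
function computeDelta(prevState, nextState) {
  const delta = {};
  for (const key of Object.keys(nextState)) {
    // JSON comparison is a simple way to handle nested values in a sketch.
    if (JSON.stringify(prevState[key]) !== JSON.stringify(nextState[key])) {
      delta[key] = nextState[key];
    }
  }
  return delta;
}

const prev = { runs: 142, wickets: 3, overs: "17.2", striker: "Kohli" };
const next = { runs: 146, wickets: 3, overs: "17.3", striker: "Kohli" };
const delta = computeDelta(prev, next);
// only runs and overs appear in the delta; wickets and striker are unchanged
```

On the client side, applying the delta is just a shallow merge into the cached state, which is why it is so much cheaper than re-sending the full scorecard on every ball.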

Horizontal scaling is easy: add more gateway nodes behind an ALB, and Redis pub/sub takes care of distributing messages.

Why Kafka (not RabbitMQ) for trading and news events

For opinion trading, the throughput and event-replay requirements are very different. Every trade, every order-book update, every price recalculation is an event that needs to be durable and replayable.

Kafka is a better fit here because:

  • High throughput — Kafka handles millions of messages per second on modest hardware
  • Replay — we can rewind and reprocess events (useful for rebuilding order books after bugs)
  • Partitioning — we partition by market_id, so each market's events are totally ordered and processed by a single consumer

The news pipeline uses the same Kafka cluster for a different reason: personalization. Every user interaction (read, skip, like, share) is a Kafka event. A ranking worker consumes these events and updates per-user feed ranking in near real time.
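
A hypothetical sketch of the ranking worker's core update (the weights and event names below are invented for illustration, not our production values): each consumed interaction event nudges the user's affinity score for a topic, and the feed is then ordered by score.

```javascript
// Invented interaction weights, purely for illustration.
const WEIGHTS = { read: 1, like: 3, share: 5, skip: -2 };

// Fold one interaction event into a user's per-topic scores.
function applyInteraction(scores, event) {
  const next = { ...scores };
  next[event.topic] = (next[event.topic] || 0) + (WEIGHTS[event.type] || 0);
  return next;
}

let scores = {};
scores = applyInteraction(scores, { type: "like", topic: "ipl" });
scores = applyInteraction(scores, { type: "skip", topic: "rankings" });
// scores now favors "ipl" over "rankings" when ordering the feed
```

Because the events are consumed from Kafka, the worker can be replayed from the beginning of the topic to rebuild every user's scores from scratch.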

Why MongoDB for the data layer

Most Xenotix Labs projects default to PostgreSQL. Cricket Winner was the exception.

Ball-by-ball match data is deeply nested. An over has 6 balls. Each ball has a batsman, bowler, runs, extras, a commentary string, and sometimes wicket details. Storing that as JSON documents is a natural fit. Schema evolution is constant — new stats, new tournament formats, new commentary types — and MongoDB's flexible schema lets us ship new features without migrations. Read patterns favor document stores: the most common query is "give me everything about this match" — one document fetch vs. six joins in a relational DB.
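
A simplified ball-by-ball document shape (field names are illustrative): the whole match lives in one nested document, so the common "everything about this match" query is a single `findOne` by match id rather than a chain of joins.

```javascript
// Illustrative document shape for one match (not the exact production schema).
const match = {
  matchId: "IND-AUS-2024-T20-03",
  innings: [
    {
      battingTeam: "India",
      overs: [
        {
          overNumber: 18,
          balls: [
            {
              batsman: "Kohli",
              bowler: "Cummins",
              runs: 4,
              extras: null,
              commentary: "Driven through the covers for four!",
              wicket: null, // populated only when a wicket falls
            },
          ],
        },
      ],
    },
  ],
};

// Drilling into the nested structure needs no joins:
const lastBall = match.innings[0].overs[0].balls[0];
// lastBall.runs === 4
```

Adding a new stat or commentary type is just a new field on future documents; old documents stay valid without a migration.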

For the wallet and trading ledger, we kept things stricter: a separate PostgreSQL database with strong ACID guarantees. Money never lives in MongoDB.
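
The core invariant the PostgreSQL side enforces can be sketched as a pure function (the real system does this inside a database transaction; the function and field names here are illustrative): a settlement is a set of ledger entries that must net to zero, and no wallet may go negative.

```javascript
// Illustrative double-entry check, not the production transaction logic.
function applyLedgerEntries(wallets, entries) {
  const total = entries.reduce((sum, e) => sum + e.amount, 0);
  if (total !== 0) throw new Error("ledger entries must net to zero");
  const next = { ...wallets };
  for (const e of entries) {
    next[e.userId] = (next[e.userId] || 0) + e.amount;
    if (next[e.userId] < 0) throw new Error(`insufficient funds: ${e.userId}`);
  }
  return next;
}

// A YES holder receives Rs 10 on resolution, funded by the market escrow:
const settled = applyLedgerEntries({ escrow: 10, alice: 0 }, [
  { userId: "escrow", amount: -10 },
  { userId: "alice", amount: 10 },
]);
// settled: escrow drops to 0, alice rises to 10
```

In the real system the same invariant falls out of running both `UPDATE`s in one transaction with a `CHECK (balance >= 0)` constraint, which is exactly the kind of guarantee we did not want to rebuild on top of MongoDB.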

The opinion trading engine

This was the hardest part to build. An opinion-trading market works like this: a market opens ("Will India win the toss?"), users buy YES at ₹3 or NO at ₹7 (prices sum to ₹10), as opinion shifts the price shifts, and when the event resolves YES holders get ₹10 each.

Behind the scenes: each market has an order book (limit orders on both sides). A matching engine pairs buyers with sellers at crossing prices. Settled orders update user wallets atomically. Market prices flow back to the client via WebSocket for live UX.

The matching engine is a single-threaded Node.js worker per market partition (Kafka guarantees per-partition ordering). Running single-threaded avoids race conditions; partitioning by market_id avoids the worker becoming a bottleneck.
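
A minimal single-market matching loop looks roughly like this (an illustrative sketch, not the production engine; order shapes and names are invented). Bids and asks are price-sorted limit orders, and a trade executes whenever the best bid crosses the best ask:

```javascript
// Illustrative price-time matching sketch. Mutates the passed-in books,
// which is fine for a single-threaded worker that owns its market's state.
function matchOrders(bids, asks) {
  bids.sort((a, b) => b.price - a.price); // highest bid first
  asks.sort((a, b) => a.price - b.price); // lowest ask first
  const trades = [];
  while (bids.length && asks.length && bids[0].price >= asks[0].price) {
    const bid = bids[0];
    const ask = asks[0];
    const qty = Math.min(bid.qty, ask.qty);
    // Execute at the resting ask price.
    trades.push({ buyer: bid.userId, seller: ask.userId, price: ask.price, qty });
    bid.qty -= qty;
    ask.qty -= qty;
    if (bid.qty === 0) bids.shift();
    if (ask.qty === 0) asks.shift();
  }
  return trades;
}

const trades = matchOrders(
  [{ userId: "u1", price: 4, qty: 10 }],
  [{ userId: "u2", price: 3, qty: 10 }]
);
// one trade: u1 buys 10 YES contracts from u2 at the resting price of Rs 3
```

Because the worker is single-threaded and owns the only copy of its market's book, no locks are needed; correctness comes from Kafka delivering that market's orders in sequence.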

Tech stack summary

  • Mobile: Flutter (iOS + Android)
  • Web: Next.js
  • Backend: Node.js + MongoDB (+ PostgreSQL for money)
  • Real-time: WebSockets + Redis pub/sub
  • Event backbone: Kafka
  • Architecture: Microservices
  • Deployment: AWS

What we'd do differently

  • Put the WebSocket gateway behind a dedicated load balancer early
  • Start with Kafka from day one instead of migrating mid-project
  • Cache the order book in Redis for fast recovery after worker restarts

Building a real-time product?

Whether it's live sports, collaborative tools, or trading platforms — real-time is a discipline. If you're building something in this space, Xenotix Labs has the stack and the scars. Get in touch at https://xenotixlabs.com.
