Gagan Kumar

Posted on • Originally published at gagankumar.me

How I Built a Real-Time Multiplayer Typing Game with React, TypeScript, Socket.IO & MongoDB

There's a specific kind of developer procrastination where you end up spending hours on a website that has nothing to do with what you were supposed to be working on. For me, that website was TypeRacer.

I'd gone down a rabbit hole — racing strangers, watching my WPM climb, refreshing leaderboards at midnight — and somewhere between race 12 and race 13, the thought hit me: I wonder how this actually works under the hood.

That curiosity turned into TypeRacrer — a full-stack competitive typing platform I built from scratch. And honestly, it ended up being one of the most technically interesting projects I've ever worked on. Not because it's the most complex thing I've built, but because it forced me to think about problems I'd never had to think about before.

This isn't a code tutorial. It's the story behind the project — the decisions, the surprises, and the concepts that finally clicked while building it.


Why This Project Is Different From Most Portfolio Projects

Most portfolio projects are essentially CRUD apps with a nice UI. You have a database, you fetch data, you display it, you maybe add some auth. That's completely fine — the real world runs on CRUD apps.

But TypeRacrer introduced a fundamentally different challenge: state that is alive.

In a regular app, state lives in a database. It's static until someone changes it. You request it when you need it. In a real-time multiplayer game, state is constantly changing — and every player needs to see everyone else's changes as they happen, not when they refresh the page.

That one difference changes everything about how you architect your application.


The Core Concept: WebSockets and Why They Matter

To understand what makes this project tick, you first need to understand the difference between how a normal web app communicates versus how a real-time one does.

In a typical web request, your browser asks the server a question and the server answers. That's it — the conversation is over. The server has no way to reach out to you first. It just waits to be asked.

WebSockets flip this model entirely. Instead of a series of one-off conversations, a WebSocket creates a persistent, open connection between the client and the server. Either side can send a message at any time. The server can push data to the client without being asked.

For a typing game, this is essential. When you type a character, your progress needs to appear on every other player's screen almost instantly. There's no "refresh to see updates" — the whole appeal of the game is watching the race happen live.

Socket.IO, the library I used, sits on top of WebSockets and adds a clean event-based API, automatic fallbacks for older browsers, and built-in room management. The concept of "rooms" in Socket.IO maps almost perfectly to race lobbies — a group of connected clients that all receive the same events.
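As a rough sketch of how a lobby maps to a room (the event names, payload shapes, and the `SocketLike` interface here are my own illustrations, not the project's actual code — with real Socket.IO the handler body would be the same, minus the stand-in interface):

```typescript
// Minimal slice of the Socket.IO server surface this sketch relies on.
interface SocketLike {
  id: string;
  join(room: string): void;
  to(room: string): { emit(event: string, payload: unknown): void };
}

// When a player joins a race, put their socket into the lobby's room
// and notify everyone already in it. `socket.to(room)` broadcasts to
// the room excluding the sender, which is exactly what we want here.
function handleJoinRace(socket: SocketLike, roomId: string, playerName: string): void {
  socket.join(roomId);
  socket.to(roomId).emit("player:joined", { id: socket.id, name: playerName });
}
```

From the server's point of view, every subsequent broadcast is just `io.to(roomId).emit(...)` — the room abstraction does the bookkeeping of who is connected.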


The Architecture Decision That Shaped Everything

One of the first big decisions I had to make: where does the game state live?

My first instinct was the database. I already had MongoDB set up. Why not just save the race state there and have all clients poll it every second or two?

I thought it through and immediately saw the problem. With multiple players typing simultaneously, you'd have dozens of database writes and reads every second for a single race. And polling every second isn't even fast enough — a good typist averages 4-5 keystrokes per second, so a one-second delay would make the game feel completely broken.

The answer is in-memory state on the server. Active race rooms live in the server's memory as plain data structures — fast to read, fast to write, no database overhead. When a player types, the server updates the in-memory state and immediately broadcasts the change to all connected clients.

MongoDB only enters the picture when a race finishes. The final results — who won, everyone's WPM, timestamps — get persisted to the database for leaderboards and history. But during the race itself, the database doesn't touch it.
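As a sketch of what that in-memory layer might look like (all names and shapes here are illustrative, not the project's actual types):

```typescript
// Hypothetical shape of a player's live state during a race.
interface PlayerState {
  id: string;
  name: string;
  progress: number; // characters typed correctly so far
  wpm: number;
}

// Hypothetical shape of an active race room.
interface RaceRoom {
  id: string;
  text: string;
  status: "waiting" | "countdown" | "racing" | "finished";
  players: Map<string, PlayerState>;
  startedAt?: number; // epoch ms, set when the race begins
}

// Active races live in a plain Map -- no database round trip per keystroke.
const activeRooms = new Map<string, RaceRoom>();

// Update a player's progress in memory; returns the room so the caller
// can broadcast the change, or null if the update isn't valid right now.
function updateProgress(roomId: string, playerId: string, progress: number): RaceRoom | null {
  const room = activeRooms.get(roomId);
  if (!room || room.status !== "racing") return null;
  const player = room.players.get(playerId);
  if (!player) return null;
  player.progress = progress;
  return room; // persistence to MongoDB happens only once status becomes "finished"
}
```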

This taught me something important: not all data needs to be persisted, and not all data should be treated the same way. The right storage mechanism depends entirely on how the data is used.


Building the Monorepo: Sharing Code Between Frontend and Backend

One of the most satisfying architectural decisions I made was setting this up as a TypeScript monorepo using pnpm workspaces and Turborepo.

What does that mean in practice? The project has three main parts: the React frontend, the Express backend, and a shared package that both of them import. That shared package contains the type definitions — the shapes of players, race rooms, and socket events — that both sides of the application agree on.

Before I understood monorepos, I would have defined these types twice: once on the frontend, once on the backend, and inevitably let them drift apart. A small mismatch in a socket payload type could cause a bug that takes an hour to track down.

With the monorepo setup, there's one source of truth. If I change what a "Player" object looks like, TypeScript immediately tells me everywhere in both the frontend and backend that needs to be updated. The compiler becomes a collaborator, not just a syntax checker.
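Concretely, the shared package might contain something like this (the specific names and events are my own guesses at the shape, not the project's real contract — the generic event-map pattern itself comes from Socket.IO's official TypeScript support):

```typescript
// Hypothetical contents of the shared package, e.g. packages/shared/src/types.ts.
// Both the Express backend and the React frontend import from here, so any
// change is a compile error on both sides until they agree again.
export interface Player {
  id: string;
  name: string;
  progress: number;
}

// Events the server emits and the client listens for.
export interface ServerToClientEvents {
  "player:joined": (player: Player) => void;
  "player:progress": (update: { id: string; progress: number }) => void;
}

// Events the client emits and the server listens for.
export interface ClientToServerEvents {
  "race:join": (roomId: string, name: string) => void;
  "race:keystroke": (progress: number) => void;
}
```

The server then instantiates `new Server<ClientToServerEvents, ServerToClientEvents>(...)` and the client calls `io<ServerToClientEvents, ClientToServerEvents>(url)`, and every `emit` and `on` is checked against the same contract.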

This is the kind of architectural thinking that scales. It's how large engineering teams maintain consistency across codebases that dozens of people touch.


The Race Lifecycle: Thinking in States

One of the most clarifying exercises in building TypeRacrer was mapping out the lifecycle of a race as a state machine.

A race room is never just "active" or "inactive." It moves through distinct phases:

Waiting — the room exists, players are joining. We're holding until enough players are ready.

Countdown — all players have hit ready. A 3-2-1 countdown begins. During this phase, inputs are locked. The tension before a race starts is actually a UX feature, not just a delay.

Racing — the timer starts, inputs unlock, and progress is tracked and broadcast in real time.

Finished — all players have completed the text, or the timeout has triggered. Results are calculated, the leaderboard is shown, and results are written to MongoDB.

Thinking in states like this prevented a whole class of bugs. Without it, you end up writing scattered if-checks everywhere: "should I accept input right now? is the race started? has it ended?" With a clearly defined state machine, the answer is always just: "what state are we in?"
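The phases above can be sketched as an explicit transition table (the state names mirror the list; the transition set is my reading of the flow, not code from the project):

```typescript
// The four phases of a race as a discrete state machine.
type RaceStatus = "waiting" | "countdown" | "racing" | "finished";

// Which states each state is allowed to move into.
const allowedTransitions: Record<RaceStatus, RaceStatus[]> = {
  waiting: ["countdown"],
  countdown: ["racing"],
  racing: ["finished"],
  finished: [],
};

// Moving between states goes through one gate, so an illegal jump
// (e.g. finished -> racing) fails loudly instead of corrupting state.
function transition(current: RaceStatus, next: RaceStatus): RaceStatus {
  if (!allowedTransitions[current].includes(next)) {
    throw new Error(`Illegal transition: ${current} -> ${next}`);
  }
  return next;
}

// The scattered if-checks collapse into one question about the state.
function acceptsInput(status: RaceStatus): boolean {
  return status === "racing";
}
```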

This mental model — breaking a complex flow into discrete, named states — is something I now apply to almost every feature I build.


The Surprisingly Hard Problem: Measuring Typing Speed

WPM (words per minute) sounds simple. Count the words, divide by time. Done.

But in practice it's messier than that. What counts as a "word"? The standard in competitive typing is to treat every 5 characters as one word, regardless of actual word boundaries. This normalizes the score so that typing "a a a a a" doesn't give you an unfair advantage over typing "strength."

What about errors? Raw WPM counts every keystroke regardless of accuracy. Adjusted WPM penalizes mistakes. For a competitive game, you want adjusted WPM — otherwise people just spam keys and don't care about accuracy.

And then there's the smoothing problem. If you calculate WPM fresh every second, the number jumps around wildly — especially at the start of a race when the sample size is tiny. Early in a race, one slow second tanks your WPM. The solution is to calculate WPM based on the full elapsed time since race start, which naturally smooths out as more time passes.
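Putting those three decisions together, the math might look roughly like this (a sketch of the conventions described above — the project's exact formulas and error-penalty scheme may differ):

```typescript
// Raw WPM: every 5 characters counts as one "word", measured over the
// full elapsed time since race start (which smooths out early jitter).
function rawWpm(charsTyped: number, elapsedMs: number): number {
  const minutes = elapsedMs / 60_000;
  return (charsTyped / 5) / minutes;
}

// Adjusted WPM: subtract one "word" per uncorrected error. This is one
// common penalty convention; others exist.
function adjustedWpm(charsTyped: number, errors: number, elapsedMs: number): number {
  const minutes = elapsedMs / 60_000;
  return Math.max(0, (charsTyped / 5 - errors) / minutes);
}
```

So 300 correct characters in one minute is 60 raw WPM, and ten uncorrected errors in that same minute pull the adjusted figure down to 50.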

None of this is conceptually hard, but it requires careful thought about what you're actually measuring and why. It's a good reminder that even "simple" features have hidden depth when you think them through properly.


Real-Time Progress: The Optimization That Mattered

When I first implemented progress broadcasting, I sent the entire room state to every client on every keystroke. The room object contains all players, all their stats, the race text, the room metadata — everything.

In testing with a few players it seemed fine. But I quickly realized this approach doesn't scale. With 4 players each typing 80 WPM, that's roughly 6-7 keystrokes per second per player — you're potentially broadcasting a large object 20+ times per second to every connected client.

The fix was straightforward once I thought about it: only send what changed. When a player's progress updates, broadcast just that player's updated data. The clients already have the full room state — they just need to know what one player's progress changed to. Smaller payloads, less bandwidth, less processing on every client.
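A sketch of what that looks like (the delta shape is my own illustration): the server broadcasts a small update, and each client merges it into the room state it already holds.

```typescript
// The only thing that changed: one player's progress and speed.
interface ProgressDelta {
  playerId: string;
  progress: number; // fraction of the text completed, 0..1
  wpm: number;
}

// Client side: merge the delta into locally held state. No full room
// object crosses the wire; with Socket.IO the server side is roughly
//   io.to(roomId).emit("player:progress", delta);
function applyDelta(
  players: Map<string, { progress: number; wpm: number }>,
  d: ProgressDelta
): void {
  const p = players.get(d.playerId);
  if (!p) return; // unknown player; ignore rather than crash
  p.progress = d.progress;
  p.wpm = d.wpm;
}
```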

This principle — send the minimum necessary data — is one of the core optimization strategies in real-time systems. It's the difference between a snappy app and one that starts lagging when more people join.


Anti-Cheat: The Problem You Don't Think About Until You Have To

Once I had a working leaderboard, I started thinking about something uncomfortable: what stops someone from just submitting a fake result of 500 WPM directly to the API?

In single-player mode, the client sends the result to the server at the end of a race. If I trust that result blindly, the leaderboard is meaningless.

The solution is server-side validation. The server knows when the race started and what the text was. When a result comes in, it can sanity-check it: is this WPM physically possible given the time elapsed and the text length? If someone claims they typed 300 WPM on a 500-character passage in 10 seconds, the math doesn't add up and the result gets flagged.
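A sanity check along those lines might look like this (a sketch only — the 250 WPM ceiling and 5% tolerance are my assumptions, not the project's actual thresholds):

```typescript
// Generous upper bound on human typing speed; anything above is flagged.
const MAX_PLAUSIBLE_WPM = 250;

// The server knows the text length and when the race started, so it can
// recompute WPM itself and compare against the client's claim.
function isPlausibleResult(textLength: number, claimedWpm: number, elapsedMs: number): boolean {
  if (elapsedMs <= 0) return false;
  const serverWpm = (textLength / 5) / (elapsedMs / 60_000);
  // The claim must roughly match the server's own math (5% tolerance for
  // rounding) and the implied speed must be humanly achievable.
  return (
    Math.abs(claimedWpm - serverWpm) <= serverWpm * 0.05 &&
    serverWpm <= MAX_PLAUSIBLE_WPM
  );
}
```

The 500-characters-in-10-seconds example above implies 600 WPM by the server's own math, so the claim is rejected no matter what number the client sends.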

In multiplayer, this is even more natural — the server is already tracking every player's progress in real time, so it knows the final state. The client doesn't submit results; the server calculates them. There's no vector for spoofed scores.

Building this made me think differently about trust in web applications. The client is user-controlled. The server is yours. Never trust the client for anything that matters.


What I'd Build Differently

Every project teaches you things you wish you'd known at the start. TypeRacrer is no different.

Redis instead of in-memory Maps. The current setup stores active race rooms in the server's memory. This means a server restart loses all active races, and it would be impossible to run multiple server instances (since they'd each have their own separate memory). Redis would solve both problems — it's fast like memory but persistent and shareable across instances.

A matchmaking queue. Right now, players need to share a room ID to play together. A real matchmaking system that pairs players by skill level would make the game far more playable for strangers.

More robust reconnection handling. If your connection drops mid-race, you're out. A proper reconnection system that restores your session would make the experience much less frustrating.

These aren't oversights — they're trade-offs. Building the perfect version of everything would have meant never shipping at all. Getting a working version live and iterating is almost always the right call.


The Bigger Picture: What Real-Time Development Teaches You

I came into this project thinking it was about learning Socket.IO. I came out of it thinking differently about software architecture in general.

Real-time applications demand that you think clearly about state — who owns it, where it lives, how it flows, and how it gets synchronized. These questions exist in every application, but in a standard app you can get away with fuzzy answers. In a real-time multiplayer game, fuzzy answers become bugs that you can watch happen live in front of you.

The monorepo taught me about shared contracts — the value of having both sides of a system agree on the shape of data before either side is built.

The anti-cheat work taught me about trust boundaries — being explicit about what the server should verify and never delegating that responsibility to the client.

The performance optimization taught me about minimal data transfer — that what you don't send is often as important as what you do.

None of these are TypeRacrer-specific lessons. They show up everywhere in software engineering. But TypeRacrer made them concrete in a way that abstract reading never quite does.


Try It

TypeRacrer is live — open two tabs and race yourself, or share the link with a friend. The source code is on GitHub if you want to dig into how any of this is actually implemented.

If you're a developer thinking about building something similar, my honest advice: build it. Not to have a portfolio piece (though that's a bonus), but because the problems you'll run into will teach you things that are genuinely hard to learn any other way.

Real-time applications are a different world. And once you understand how they work, you'll never look at the web the same way again.


