
Week 6: How GusLift Matches Rides in Real Time

Every school morning, a few drivers leave their dorms for campus, and a bunch of riders need seats. The core problem: match them fast, in the right direction, before the window closes.

Here's how the matching engine actually works.

The Matching Room Abstraction

We don't run one global pool. We partition by location, day, and departure time. A room key looks like:

Westie:mon:08:00

That string becomes the identity of a Cloudflare Durable Object (DO), a live, in-memory process that owns all real-time state for that slot. Everyone heading from Westie at 8am Monday shares one room. Leaving at 10:30? Different room entirely. This keeps each instance small and focused. The matching room doesn't know or care about any other departure window.
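As a sketch, the key derivation is just string concatenation. The helper name here is an assumption for illustration, not the actual GusLift code:

```typescript
// Hypothetical helper: derive the matching-room key from a user's slot.
// Same (location, day, time) always yields the same key, and therefore
// the same Durable Object.
function slotKey(location: string, day: string, time: string): string {
  return `${location}:${day}:${time}`;
}

console.log(slotKey("Westie", "mon", "08:00")); // → "Westie:mon:08:00"
```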

Getting Into the Right Room

When a user opens the app, a Cloudflare Worker handles the request. It authenticates, resolves the user's schedule, generates the slot key, and forwards the WebSocket connection to the right DO. The Worker is stateless, purely a router. All the interesting state lives in the object it points to.
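The hand-off can be sketched like this. The interfaces below are minimal structural stand-ins for Cloudflare's `DurableObjectNamespace` API (`idFromName` / `get`); the function name and parameters are assumptions, not the real Worker code:

```typescript
// Structural stand-ins for the Cloudflare Durable Object namespace binding.
interface RoomStub {
  // In the real Worker this forwards the WebSocket upgrade request to the DO.
}
interface RoomNamespace {
  idFromName(name: string): unknown;
  get(id: unknown): RoomStub;
}

// The Worker is a stateless router: derive the key, look up the DO, hand off.
function roomFor(ns: RoomNamespace, location: string, day: string, time: string): RoomStub {
  const key = `${location}:${day}:${time}`; // e.g. "Westie:mon:08:00"
  return ns.get(ns.idFromName(key));        // same key → same Durable Object
}
```

Because `idFromName` is deterministic, every user who resolves to the same slot key lands in the same live object without any coordination in the Worker itself.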

What the DO Tracks

Four things:

  • drivers — map of driver ID → seats remaining
  • riders_waiting — FIFO queue of riders requesting a ride
  • pending_matches — riders a driver has selected but who haven't confirmed
  • connections — live WebSocket handles, one per user

Every state change broadcasts to all connected clients. The frontend is a mirror of what the DO holds, nothing more.
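In plain TypeScript, that state and the broadcast step might look like the sketch below. The field names mirror the post; the class shape, message types, and `Send` callback are assumptions:

```typescript
type Send = (msg: string) => void;

// In-memory room state, as held by the Durable Object (illustrative sketch).
class MatchRoomState {
  drivers = new Map<string, number>();          // driver ID → seats remaining
  riders_waiting: string[] = [];                // FIFO queue of rider IDs
  pending_matches = new Map<string, string>();  // rider ID → selecting driver ID
  connections = new Map<string, Send>();        // user ID → live socket send fn

  // Every state change fans out to every connected client.
  broadcast(event: object): void {
    const payload = JSON.stringify(event);
    for (const send of this.connections.values()) send(payload);
  }

  driverOnline(driverId: string, seats: number): void {
    this.drivers.set(driverId, seats);
    this.broadcast({ type: "driver_online", driverId, seats });
  }

  riderRequest(riderId: string): void {
    this.riders_waiting.push(riderId);
    this.broadcast({ type: "rider_request", riderId });
  }
}
```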

The Flow

Driver goes online → sends driver_online with seat count → registered in the room, broadcast to everyone. Rider wants a seat → sends rider_request → added to the queue.

Driver picks someone. That rider moves out of riders_waiting into pending_matches and gets a match_request pushed to their socket. They have 30 seconds to accept. No response → back to the queue. They accept → seat decrements, ride written to Postgres, room updates for everyone.
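A minimal sketch of that select → confirm hand-off, assuming the in-memory state shape described above. The timer wiring, socket push, and Postgres write are stubbed out as comments; all names are illustrative:

```typescript
class PendingMatches {
  drivers = new Map<string, number>();          // driver ID → seats remaining
  riders_waiting: string[] = [];                // FIFO queue of rider IDs
  pending_matches = new Map<string, string>();  // rider ID → selecting driver ID

  // Driver selects a rider: move them out of the queue into pending.
  select(driverId: string, riderId: string): boolean {
    const i = this.riders_waiting.indexOf(riderId);
    if (i === -1) return false;                 // rider already taken or gone
    this.riders_waiting.splice(i, 1);
    this.pending_matches.set(riderId, driverId);
    // Real code: push match_request to the rider's socket, arm a 30s timer.
    return true;
  }

  // Rider accepts within the window: decrement the driver's seat count.
  accept(riderId: string): boolean {
    const driverId = this.pending_matches.get(riderId);
    if (driverId === undefined) return false;   // window expired or never matched
    this.pending_matches.delete(riderId);
    const seats = this.drivers.get(driverId) ?? 0;
    this.drivers.set(driverId, seats - 1);
    // Real code: write the ride to Postgres, broadcast the room update.
    return true;
  }

  // The 30-second timer fired with no answer: back to the queue.
  timeout(riderId: string): void {
    if (this.pending_matches.delete(riderId)) this.riders_waiting.push(riderId);
  }
}
```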

The part that could get hairy is concurrent events — two drivers selecting the same rider at the same time. The DO's single-threaded execution model handles that without any extra locking. One message at a time, in order. It's one of the legitimately nice properties of this architecture.
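The race resolves itself because the two select messages arrive at the DO in some order and are processed one at a time; a plain membership check is the whole "lock". A minimal sketch, with assumed names:

```typescript
// Two drivers racing for the same rider. The DO handles one message at a
// time, so whichever select is processed first wins; the second sees the
// rider already gone from the queue and fails cleanly.
const riders_waiting: string[] = ["r1"];
const pending_matches = new Map<string, string>();

function select(driverId: string, riderId: string): boolean {
  const i = riders_waiting.indexOf(riderId);
  if (i === -1) return false;            // already claimed: no lock needed
  riders_waiting.splice(i, 1);
  pending_matches.set(riderId, driverId);
  return true;
}

console.log(select("d1", "r1")); // processed first → true
console.log(select("d2", "r1")); // processed next, rider gone → false
```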

Why Durable Objects

Serverless and WebSockets are a bad pairing by default. Serverless assumes requests are stateless and short-lived; a WebSocket is neither. DOs give you a persistent, single-threaded process with in-memory state that survives across events. For a campus app where load is concentrated in a 20-minute morning window, you get rooms that spin up when needed and hibernate when not. No idle infrastructure, no connection hand-off problems.

What's Left

A few open problems the current design doesn't address: drivers who cancel after a match is accepted, rooms that never get cleaned up after their departure window passes, and rate limiting on socket events. None of these is hard in principle; they just weren't the first problems to solve.
