Building a New Poker Variant with an AWS Serverless Architecture

Building Point Game: Production-Grade Serverless Architecture

How I designed and built a real-time multiplayer game platform using AWS serverless infrastructure


Point Game was born at Virginia Tech, where it quickly became a favorite in the poker community before spreading to the UT Austin Poker Club. Unlike traditional poker variants, the game is built around point totals calculated with blackjack values, hole cards discarded from your hand, and a pot that splits into high and low sides.

I built this platform so I could play with friends online, but also to bring Point Game to the wider poker community. What makes the game special is that it's unsolved—no GTO solvers, no established playbook. Just pure poker theory applied to fresh problems. I wanted to give others the chance to experience that.

But this article isn't about the game. It's about the engineering.


Designing Before Coding

Before writing a single line of code, I wrote a 36-page Low-Level Design document. Most personal projects skip this. I didn't. The LLD forced me to think through every edge case, every state transition, every failure mode before implementation. This is how production systems are built.

The result is a system that handles real-time multiplayer gameplay with sub-second latency, maintains perfect state consistency across distributed clients, and scales automatically with zero server management.

By the numbers:

  • 7 AWS Services
  • 8 DynamoDB Tables
  • 36-Page LLD
  • 0 Servers to Manage

The Architecture

Point Game is a fully serverless, event-driven system. Clients interact through REST APIs for account and table operations, and WebSockets for real-time gameplay.

Infrastructure Overview

| Component | Technology |
| --- | --- |
| CDN & Static Assets | CloudFront + S3 |
| API Layer | API Gateway (REST + WebSocket) |
| Compute | AWS Lambda |
| Database | DynamoDB |
| Authentication | Cognito |
| Scheduling | EventBridge |

The key insight: DynamoDB is the single source of truth. Every game state, every action log, every connection mapping lives in DynamoDB. Lambda functions are stateless—they read state, process actions, write state, and broadcast. This makes the system horizontally scalable and resilient to failures.

Client Action → API Gateway → Lambda → DynamoDB → Broadcast
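To make that flow concrete, here is a minimal handler skeleton in Python. The table name, payload shape, and placeholder helpers are assumptions for illustration, not the project's actual code:

```python
import json
import boto3

# Skeleton of the stateless handler flow described above: read state, process
# the action, write state, broadcast. Table names, the event payload shape,
# and the placeholder helpers are assumptions, not the project's actual code.
dynamodb = boto3.resource("dynamodb")
game_table = dynamodb.Table("GameState")


def apply_game_rules(state, action):
    # Placeholder for the pure game-logic step (betting, discards, showdown).
    return state


def broadcast_views(table_id, state):
    # Placeholder for the privacy-filtered WebSocket broadcast (see Challenge 3).
    pass


def handle_action(event, context):
    body = json.loads(event["body"])  # action sent over the WebSocket route
    table_id = body["table_id"]

    state = game_table.get_item(Key={"table_id": table_id})["Item"]  # 1. read
    new_state = apply_game_rules(state, body["action"])              # 2. process
    game_table.put_item(Item=new_state)                              # 3. write
    broadcast_views(table_id, new_state)                             # 4. broadcast
    return {"statusCode": 200}
```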

Data Model

Eight DynamoDB tables power the system, each designed for specific access patterns:

| Table | Purpose |
| --- | --- |
| Game State | Current hand state, seats, pots, board |
| Action Log | Append-only record of every action |
| Hand Snapshots | End-of-hand state for replay/audit |
| Connection Store | WebSocket ID → Player mapping |
| Turn Timers | Scheduled timeout tracking |
| Inter-Round Queue | Pending join/leave/config actions |
| Users | Account data and balances |
| Ledger | Buy-in/cash-out history |
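To make the access patterns concrete, here is a hypothetical shape for a Game State item. The attribute names are illustrative only, not the production schema:

```python
# Hypothetical Game State item (attribute names are illustrative, not the
# actual schema). "seq" is the version used for optimistic concurrency.
game_state_item = {
    "table_id": "tbl_42",          # partition key (assumed)
    "seq": 187,                    # monotonically increasing sequence number
    "phase": "FLOP",
    "board": ["7h", "Kd", "2c"],
    "pots": {"high": 120, "low": 120},
    "seats": {
        "3": {"player_id": "p_abc", "stack": 950, "hole_cards": ["Ah", "9s"]},
        "5": {"player_id": "p_def", "stack": 780, "hole_cards": ["Td", "Tc"]},
    },
    "action_on": "5",
}
```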

The Hard Problems

Anyone can spin up a Lambda. The real engineering is in solving the problems that break multiplayer games at scale.

Challenge 1: Optimistic Concurrency Control

What happens when two players act simultaneously? Without careful handling, you get corrupted game state.

I implemented sequence-based versioning: every state mutation includes an expected sequence number. If it doesn't match, the write fails and the client resyncs. No race conditions. No lost actions.
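In DynamoDB terms, this maps naturally onto a conditional write. A minimal sketch, assuming a GameState table keyed by table_id with a numeric seq attribute:

```python
import boto3
from botocore.exceptions import ClientError

# Minimal sketch of sequence-based optimistic concurrency; table and attribute
# names are assumptions, not the project's actual schema.
dynamodb = boto3.resource("dynamodb")
game_table = dynamodb.Table("GameState")


def write_state(table_id, expected_seq, new_state):
    """Persist new_state only if the stored sequence still equals expected_seq."""
    try:
        game_table.put_item(
            Item={"table_id": table_id, "seq": expected_seq + 1, **new_state},
            ConditionExpression="#s = :expected",
            ExpressionAttributeNames={"#s": "seq"},
            ExpressionAttributeValues={":expected": expected_seq},
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another action won the race; the client must resync
        raise
```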

Challenge 2: Turn Timer System

Players need time limits. But Lambda functions can't "wait"—they execute and terminate.

Solution: EventBridge scheduled events. When a player's turn starts, I schedule a future event with a timer sequence. When it fires, the timeout Lambda checks if that sequence is still current. If the player acted, the timer is stale and ignored. If not, auto-fold.
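A sketch of one way to implement this with one-time EventBridge Scheduler schedules; the ARNs, table names, and payload shape are placeholders, not the real configuration:

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

# Sketch of the timer pattern: schedule a one-shot invocation of the timeout
# Lambda, then have that Lambda ignore itself if the timer sequence is stale.
scheduler = boto3.client("scheduler")
timers = boto3.resource("dynamodb").Table("TurnTimers")


def schedule_timeout(table_id, timer_seq, seconds=30):
    """Schedule the timeout Lambda to fire after the player's time bank runs out."""
    fire_at = datetime.now(timezone.utc) + timedelta(seconds=seconds)
    scheduler.create_schedule(
        Name=f"turn-timeout-{table_id}-{timer_seq}",
        ScheduleExpression=f"at({fire_at:%Y-%m-%dT%H:%M:%S})",
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            "Arn": "arn:aws:lambda:REGION:ACCOUNT:function:timeout-handler",  # placeholder
            "RoleArn": "arn:aws:iam::ACCOUNT:role/scheduler-invoke",          # placeholder
            "Input": json.dumps({"table_id": table_id, "timer_seq": timer_seq}),
        },
    )


def timeout_handler(event, context):
    """Fires later: auto-fold only if this timer is still the current one."""
    item = timers.get_item(Key={"table_id": event["table_id"]}).get("Item", {})
    if item.get("timer_seq") != event["timer_seq"]:
        return  # stale timer: the player already acted, so ignore it
    # ...auto-fold the player and advance the turn...
```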

Challenge 3: Privacy-Filtered Broadcasting

Every player sees a different game state. You see your hole cards; opponents see card backs.

The broadcaster loads authoritative state, then generates player-specific views by filtering out private information before sending. Each WebSocket message is tailored to its recipient.
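A minimal sketch of that filtering and fan-out, assuming the seat layout from the earlier item example and using the API Gateway Management API for the WebSocket sends; the endpoint URL is a placeholder:

```python
import copy
import json

import boto3

# Per-player view filtering plus broadcast. State shape, endpoint URL, and the
# connection mapping are assumptions for illustration.
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://WS_API_ID.execute-api.REGION.amazonaws.com/prod",  # placeholder
)


def build_player_view(state, viewer_id):
    """Copy the authoritative state and hide every other player's hole cards."""
    view = copy.deepcopy(state)
    for seat in view.get("seats", {}).values():
        if seat.get("player_id") != viewer_id:
            seat["hole_cards"] = ["??"] * len(seat.get("hole_cards", []))
    return view


def broadcast_views(state, connections):
    """connections: WebSocket connection ID -> player ID (from the Connection Store)."""
    for conn_id, player_id in connections.items():
        apigw.post_to_connection(
            ConnectionId=conn_id,
            Data=json.dumps(build_player_view(state, player_id)).encode(),
        )
```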

Challenge 4: Inter-Round Action Queue

Players can join, leave, or change settings mid-hand—but those actions can't disrupt active gameplay.

I built a queue system that stores these actions and processes them atomically between hands. The game state remains consistent while accommodating real-world player behavior.
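A simplified sketch of the drain step that runs between hands; the real system applies these changes atomically, and the table layout and action fields below are assumptions:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Drain queued join/leave/config actions once the current hand finishes.
dynamodb = boto3.resource("dynamodb")
queue = dynamodb.Table("InterRoundQueue")


def drain_queue(table_id, state):
    """Apply every pending action to the game state, oldest first, then clear it."""
    pending = queue.query(KeyConditionExpression=Key("table_id").eq(table_id))["Items"]
    for action in sorted(pending, key=lambda a: a["queued_at"]):
        if action["kind"] == "JOIN":
            state["seats"][action["seat"]] = {
                "player_id": action["player_id"],
                "stack": action["buy_in"],
            }
        elif action["kind"] == "LEAVE":
            state["seats"].pop(action["seat"], None)
        # ...config changes handled the same way...
        queue.delete_item(Key={"table_id": table_id, "queued_at": action["queued_at"]})
    return state
```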

Challenge 5: Complex Game Rules

Point Game has a large number of rules and edge cases that make gameplay non-trivial to implement correctly. Translating real-world game rules into reliable code was a challenge in itself. One area that stood out was showdown logic: accurately tracking side pots, handling split pots, and resolving multiple winners without corrupting state.
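For a flavor of that complexity, here is the standard side-pot layering algorithm as a hedged sketch (not Point Game's exact showdown code), with a small worked example; splitting each pot into high and low halves sits on top of this:

```python
# Standard side-pot construction from per-player contributions. Folded players
# fund pots but cannot win them.
def build_pots(contributions, folded):
    """contributions: player_id -> total chips committed this hand."""
    pots = []
    remaining = dict(contributions)
    while any(v > 0 for v in remaining.values()):
        level = min(v for v in remaining.values() if v > 0)
        in_layer = [p for p, v in remaining.items() if v > 0]
        pots.append({
            "amount": level * len(in_layer),
            "eligible": [p for p in in_layer if p not in folded],
        })
        for p in in_layer:
            remaining[p] -= level
    return pots


# Example: a short all-in creates a main pot and one side pot.
print(build_pots({"A": 50, "B": 200, "C": 200}, folded=set()))
# [{'amount': 150, 'eligible': ['A', 'B', 'C']}, {'amount': 300, 'eligible': ['B', 'C']}]
```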

The complexity here isn't any single feature. It's making them all work together reliably under concurrent load with zero dedicated servers.


Why This Exceeds Industry Standard

Most hobby projects never reach this breadth of cloud infrastructure or this level of technical depth. The typical portfolio project is a single HTML page or a CRUD app with a simple database. This is:

  • A real-time distributed system with WebSocket state synchronization
  • Event-driven architecture with scheduled triggers and async processing
  • Production-grade consistency guarantees via optimistic concurrency
  • Domain-specific game logic handling complex state machines with multiple simultaneous players and games

I designed it. I documented it. I built it. And I can explain every decision.


Experience It Yourself

The best way to understand Point Game is to play it: pointgame.live

Join the community: Discord


Built by Elijah Widener Ferreira • Portfolio • GitHub
