When I decided to build WireRoom, I had two goals: learn how real deployment works end to end, and understand WebSockets beyond the "it's like HTTP but persistent" explanation everyone gives.
Here's what I actually ran into.
What WireRoom does
Users sign in — via Google, GitHub, or a plain username/password — and get dropped into shareable rooms with short alphanumeric codes. The first person in becomes the host, can kick participants, and can transfer host privileges. Messages are real-time. The whole thing runs on Go + WebSockets in production.
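To make the room-code idea concrete, here's a minimal generator sketch. The length and charset are illustrative guesses, not WireRoom's actual scheme, and `newRoomCode` is a hypothetical name:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// newRoomCode returns a short random alphanumeric code suitable for a
// shareable room URL. crypto/rand avoids guessable codes.
func newRoomCode(n int) (string, error) {
	const charset = "abcdefghijklmnopqrstuvwxyz0123456789"
	code := make([]byte, n)
	for i := range code {
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			return "", err
		}
		code[i] = charset[idx.Int64()]
	}
	return string(code), nil
}

func main() {
	code, _ := newRoomCode(6)
	fmt.Println(code)
}
```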
Why Go for a chat server?
Go's concurrency model is a natural fit for WebSocket servers. Each connection is a long-lived, stateful thing — you need to read from it, write to it, and track it. Goroutines make this straightforward: spin one up per client, let the runtime handle scheduling. Compare this to Node.js where you're managing an event loop and callback chains the moment things get complex.
I used Gorilla WebSocket — the de facto Go library for this. It wraps the upgrade handshake cleanly and gives you a Conn type you can read and write on directly.
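"Wraps the upgrade handshake" is worth unpacking. The upgrade is plain HTTP: the client sends a random `Sec-WebSocket-Key`, and the server must answer with `Sec-WebSocket-Accept` derived per RFC 6455 — SHA-1 of the key plus a fixed GUID, base64-encoded. Gorilla does this for you, but the derivation itself fits in a few lines:

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// wsAccept derives the Sec-WebSocket-Accept response header from the
// client's Sec-WebSocket-Key, as specified in RFC 6455.
func wsAccept(key string) string {
	const magic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
	h := sha1.Sum([]byte(key + magic))
	return base64.StdEncoding.EncodeToString(h[:])
}

func main() {
	// Example key and expected accept value from RFC 6455 §1.3.
	fmt.Println(wsAccept("dGhlIHNhbXBsZSBub25jZQ=="))
	// → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
}
```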
The architecture: goroutine per client
Every client gets two goroutines — one for reading, one for writing. The reader blocks on conn.ReadMessage() and forwards messages to a central hub. The writer blocks on a channel and flushes messages out as they arrive.
```go
func (c *Client) readPump() {
	defer func() {
		c.hub.unregister <- c
		c.conn.Close()
	}()
	for {
		_, message, err := c.conn.ReadMessage()
		if err != nil {
			break
		}
		c.hub.broadcast <- message
	}
}
```
The hub is a single goroutine that owns all shared state — the client map, room assignments, host tracking, everything. All mutations go through it via channels. This is the design decision that kept the code race-free without fighting mutexes everywhere.
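The core of that design is a single `select` loop. Here's a minimal sketch (field names are illustrative, not WireRoom's actual ones) — note that no mutex appears anywhere, because only this goroutine ever touches the map:

```go
package main

import "fmt"

// Client is reduced to its send channel for this sketch.
type Client struct {
	send chan []byte
}

// Hub owns the client set; all mutations arrive over channels.
type Hub struct {
	clients    map[*Client]bool
	register   chan *Client
	unregister chan *Client
	broadcast  chan []byte
}

// run is the single goroutine that serializes every state change.
func (h *Hub) run() {
	for {
		select {
		case c := <-h.register:
			h.clients[c] = true
		case c := <-h.unregister:
			if _, ok := h.clients[c]; ok {
				delete(h.clients, c)
				close(c.send)
			}
		case msg := <-h.broadcast:
			for c := range h.clients {
				select {
				case c.send <- msg:
				default: // slow client: drop it rather than block the hub
					delete(h.clients, c)
					close(c.send)
				}
			}
		}
	}
}

func main() {
	h := &Hub{
		clients:    make(map[*Client]bool),
		register:   make(chan *Client),
		unregister: make(chan *Client),
		broadcast:  make(chan []byte),
	}
	go h.run()
	c := &Client{send: make(chan []byte, 1)}
	h.register <- c
	h.broadcast <- []byte("hi")
	fmt.Println(string(<-c.send)) // hi
}
```

The `default` branch in the broadcast case is the subtle part: a client that stops draining its channel gets evicted instead of stalling every other room member.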
The host system
This was the most interesting backend problem. WireRoom has room ownership — first user in becomes host, with the ability to kick others or transfer the crown. That means the state isn't just "who's connected" but "who owns this room" and "what can they do."
The hub handles this entirely. A kick event isn't just closing a connection — it's updating the host map, broadcasting a system message to the room, and gracefully closing the target client's goroutines in the right order.
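Because the hub owns all state, that ordering can be written out explicitly: check authority, announce, remove, close. A hypothetical sketch of the flow (all identifiers here are illustrative, not WireRoom's source):

```go
package main

import "fmt"

// Client carries just enough state for the sketch.
type Client struct {
	name string
	send chan []byte
}

// Hub tracks room membership and the current host per room.
type Hub struct {
	rooms map[string]map[*Client]bool // room code -> members
	hosts map[string]*Client          // room code -> host
}

// handleKick runs inside the hub goroutine, so each step is sequenced
// without locks.
func (h *Hub) handleKick(room string, requester, target *Client) {
	if h.hosts[room] != requester {
		return // only the host may kick
	}
	// 1. Announce while the target is still in the member set.
	for c := range h.rooms[room] {
		c.send <- []byte(target.name + " was kicked")
	}
	// 2. Remove the target, then close its send channel so the writer
	//    goroutine drains and exits; the reader exits when the conn closes.
	delete(h.rooms[room], target)
	close(target.send)
}

func main() {
	host := &Client{name: "alice", send: make(chan []byte, 4)}
	victim := &Client{name: "bob", send: make(chan []byte, 4)}
	h := &Hub{
		rooms: map[string]map[*Client]bool{"abc123": {host: true, victim: true}},
		hosts: map[string]*Client{"abc123": host},
	}
	h.handleKick("abc123", host, victim)
	fmt.Println(string(<-host.send)) // bob was kicked
}
```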
System events like joins, leaves, and host transfers get broadcast to the room as distinct message types so the frontend can render them differently from chat messages.
Auth: OAuth plus passwords
I implemented two auth paths — OAuth via Google and GitHub, and a plain username/password fallback. The OAuth flow uses a state token for CSRF protection: generate a random token, store it in a cookie, send it to the provider, verify it matches on the redirect back. Without this, an attacker can trick a user into completing an OAuth flow the attacker initiated, linking the wrong account.
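The state-token mechanics fit in two small functions — generate something unguessable before redirecting out, then compare in constant time on the way back in. A sketch (function names are mine, not WireRoom's):

```go
package main

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/base64"
	"fmt"
)

// newStateToken returns a random URL-safe token to store in a cookie and
// pass to the OAuth provider as the state parameter.
func newStateToken() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

// stateMatches compares the state from the callback URL against the
// cookie value in constant time.
func stateMatches(fromURL, fromCookie string) bool {
	return subtle.ConstantTimeCompare([]byte(fromURL), []byte(fromCookie)) == 1
}

func main() {
	tok, _ := newStateToken()
	fmt.Println(stateMatches(tok, tok)) // true
	fmt.Println(stateMatches(tok, "x")) // false
}
```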
The password path stores credentials against Supabase PostgreSQL and validates on login. Both paths converge at the same session — once you're in, the WebSocket layer doesn't care how you authenticated.
Deployment: Railway over Render
Render's free tier spins down services after inactivity. For a WebSocket server that's a dealbreaker — the first user after idle gets a cold start, and persistent connections can't survive a process restart. Railway keeps services alive and deploys straight from GitHub. Supabase handled PostgreSQL with connection pooling, so I didn't have to think about database connections under concurrent load.
What I'd add next
Emoji reactions. The infrastructure already supports it — adding a new message type to the hub is trivial. It just isn't shipped yet.
What building this taught me
The Go race detector (go test -race) is not optional. Use it from day one. The goroutine-per-client model with a central hub absorbed everything I threw at it — dozens of concurrent connections, rapid room switching, abrupt disconnects, host transfers mid-session. None of it caused issues once the hub ownership model was right.
The full source is on GitHub.