Go, Rust, and Node under one million users: the crashes, the surprises, and the bigger truth about scaling apps.

Most apps start sweating at 500 concurrent users. By 5,000, someone on your team is posting a “temporary outage” message in Slack. So naturally, we thought: what if we went straight to one million?
It wasn’t smart. It wasn’t necessary. But if you’ve ever sat through a late-night argument about whether Go, Rust, or Node “scales better,” you’ll understand the itch. The only way to end runtime wars is to throw absurd traffic at them and watch what happens. Spoiler: we panicked more than the servers did.
Running this test felt less like responsible benchmarking and more like hosting a DDoS competition against ourselves. Imagine sitting in front of Grafana charts that look like a heart monitor during a boss fight — that was our weekend.
But here’s the thing: the results actually tell us something useful. Not just about which runtime sweated the most, but about how we think about scaling, stack choice, and developer pain.
TLDR
- Rust: fastest, lowest latency, but you’ll fight the compiler harder than the traffic.
- Go: steady, boringly reliable, handled chaos better than expected.
- Node: fine at small scale, but the event loop tapped out fast at a million.
- Bigger lesson: scaling is about ops, architecture, and developer sanity, not just raw runtime speed.
Why we did this (and why it was dumb)
Every dev community has its comfort food: memes about tabs vs spaces, jokes about regex, and benchmarks that prove absolutely nothing but still get everyone riled up. “Rust will always win.” “Go is boring but scales forever.” “Node is fine if you know what you’re doing.” Heard it all before? Same.
At some point, you get tired of reading Hacker News threads where people throw charts at each other without context. So instead of arguing, we did the most engineer-brained thing possible: built a test we knew would hurt. One million concurrent users. No warmup, no mercy.
To be clear: nobody on our team is actually running a system with one million simultaneous active humans (if you are, props; you’re probably at Discord or Netflix). Most of us are just trying to stop staging from falling over during sprint demos. But there’s something fun about stress-testing a runtime way past its breaking point, just to see which cracks appear first.
The seed was a dumb late-night Discord argument. Someone insisted that Node “scales just fine” if you architect it correctly. Another swore Go would eat it alive at 100k. Then Rust fans showed up like Dark Souls bosses, yelling about zero-cost abstractions. By 2 a.m., we had a plan: stop talking, start breaking things.
Was it useful? Questionable. Was it fun? Absolutely. And maybe, just maybe, there are lessons in watching code crumble under traffic that looks more like a botnet than a real user base.
The setup (how we legally DDoS’d ourselves)
Before the runtime war could begin, we had to answer the most important question: how do you even fake a million users without accidentally getting your ISP to send you angry emails?
We went with the classic load-testing trifecta: Locust, k6, and some homegrown scripts duct-taped together with bash and caffeine. Each runtime got its own sandbox: same machine specs, same OS, same monitoring. No excuses, no secret tuning (okay, a little tuning, but only the obvious stuff like connection pools).
To keep things “fair,” we pointed all of them at roughly the same toy app: a simple API endpoint that pretends to be a backend service. It did a little JSON parsing, some basic data juggling, and threw back a response. Nothing fancy like databases or caching layers, because then we’d be testing infra, not the runtime.
And yes, half the time went into fixing our test harness. It turns out “let’s simulate a million users” is a good way to uncover bugs in your benchmark code itself. At one point, Node wasn’t the thing crashing; our load generator was. Nothing makes you question your life choices like debugging your stress test instead of your actual app.
We had Grafana and Prometheus dashboards glowing like a spaceship cockpit. CPU, memory, request latency, error rates: all set up so we could see exactly when each runtime started screaming. The graphs were mesmerizing. Sometimes beautiful. Sometimes terrifying.
So yeah: picture a team of devs staring at charts, cheering when lines stayed flat, groaning when spikes looked like shark fins. We basically DDoS’d ourselves, but legally, and with prettier dashboards.
Go takes the hit
Go is the language you bring to a fight when you want things to “just work.” It doesn’t have Rust’s chest-thumping performance promises or Node’s endless npm chaos, but it does have goroutines, and that’s basically Go’s cheat code for concurrency. Fire off lightweight threads like they’re nerf darts, let the scheduler do the work, and hope your machine doesn’t sound like a jet engine.
Under load, Go behaved exactly how its fans would expect: boringly stable. For the first few hundred thousand users, it cruised. Throughput stayed solid, latency hovered within acceptable limits, and the CPU graphs looked like they were sipping tea. At around 700k concurrent users, though, the cracks started showing. Latency jittered, and the graphs we were staring at suddenly resembled a heart monitor.
The thing about Go is that it doesn’t usually faceplant; it just wobbles. Where Node eventually tripped over its event loop and Rust screamed at us via compiler errors, Go kept soldiering on, throwing occasional hiccups but never fully collapsing. Think Toyota Camry: not exciting, not glamorous, but you can redline it on a road trip and it’ll still get you home.
Anecdote: during the heaviest spike, someone on the team joked, “Go’s fine; we’re the ones panicking.” And it was true. Watching the Grafana dashboards light up felt scarier than what the actual runtime was doing.
Go’s biggest win here wasn’t raw speed (Rust had it beat) but consistency. When you’re dealing with insane concurrency, sometimes “boring” is exactly what you want. It’s why Cloudflare, Uber, and half the infra companies you’ve heard of keep betting on it. Go doesn’t always win the benchmark flex-offs, but when it takes a hit, it doesn’t immediately fold.
Rust: the Ferrari with sharp edges
Rust came into this test with the loudest hype. If you’ve been anywhere near Twitter/X, you’ve seen the “Rust is the future” takes: zero-cost abstractions, fearless concurrency, memory safety without garbage collection. It’s the language that makes performance nerds glow and everyone else sigh.
And honestly? Rust lived up to the hype… mostly. Once we got the thing running, the numbers were jaw-dropping. Lowest latency of the bunch, throughput that made Go look like it was jogging, and memory graphs so clean they could’ve been framed. Rust at full tilt felt like a Formula 1 car hugging the corners at 200 mph.
But here’s the thing: even F1 cars stall. Before we even got to clean runs, we lost hours fighting the compiler. Unsafe blocks, subtle lifetimes, weird errors that looked like riddles: it felt like the language was testing us before it agreed to handle traffic. The Ferrari metaphor fits perfectly: beautiful when it works, infuriating when you’re stuck in the pits.
Once the benchmark was actually humming, Rust handled concurrency with ease. No dramatic spikes, no sweaty graphs, just raw speed. But that came with its own tax: the dev team’s collective patience. One teammate summed it up: “Rust is faster than Go, but Go didn’t waste three hours making me cry before the test.”
Real-world tie-in: this is exactly why Discord uses Rust to scale Elixir. It’s perfect for performance-critical hot paths where every millisecond counts. But for day-to-day “just get it out the door” work? Rust demands a lot.
So yeah, Ferrari energy. You win races, but you also spend a suspicious amount of time fixing the engine instead of driving.
Node at a million
If Go is the Camry and Rust is the Ferrari, Node is… the scrappy hatchback with a spoiler someone bolted on in their garage. It’s light, it’s fun, it’s everywhere, and half the web wouldn’t exist without it. But asking Node to handle one million concurrent users is like asking that hatchback to tow a semi-truck up a mountain.
For the first stretch, Node actually held up fine. A few thousand concurrent? Smooth. Tens of thousands? Manageable. But when the numbers climbed into six figures, things started to feel shaky. By the time we hit one million, the event loop was crying in the corner, latency looked like a seismograph, and memory usage resembled a balloon at a kid’s birthday party, expanding until you prayed it wouldn’t pop.
The killer here was exactly what everyone expected: Node’s single-threaded event loop. Yes, you can scale horizontally, yes, worker threads exist, but in raw “throw insane traffic at one runtime” terms, Node tapped out first.
Anecdote: at one point during the test, Node wasn’t just slow; it was leaking memory so badly that our monitoring tool looked suspiciously like it was trolling us. Cue a 3 a.m. debugging session and one teammate asking, “Why do we do this to ourselves?”
To be fair, Node was never designed for this kind of stress. Its strength has always been fast prototyping, rich libraries, and letting small teams ship big things quickly. Netflix, for example, uses Node heavily on the edge where speed-to-market matters more than raw concurrency bragging rights. That’s the sweet spot.
So yeah: Node made it to the party, but at a million users, it was sweating bullets and looking for the exit.

The ugly truth: no silver bullet
So after all the graphs, crashes, and caffeine, what did we actually learn? Mostly that there’s no single winner here. Each runtime had its moment: Rust obliterated latency, Go shrugged through concurrency like a workhorse, and Node… well, Node reminded us why it dominates startup MVPs but not benchmark wars.
We pulled the data into a simple comparison:

| Runtime | Latency | Throughput | At one million | Developer pain |
| --- | --- | --- | --- | --- |
| Rust | Lowest of the three | Highest | Stable once it compiled | High (compiler fights) |
| Go | Low, jittery past ~700k | Solid | Wobbled, never collapsed | Low |
| Node | Fine into five figures | Lowest | Tapped out first, leaked memory | Low, until scale |
Pretty charts are fun, but here’s the uncomfortable truth: raw benchmarks rarely map cleanly to real systems. Scaling to millions of real users involves load balancers, caching layers, databases that aren’t on fire, and devs who can debug things without crying. None of that shows up in a synthetic stress test.
Benchmarks are gym flexes. They look good, they make your fans cheer, but they don’t tell you how someone handles a flight of stairs or a weekend road trip. Rust looks shredded on the weight bench, Go jogs past you without breaking a sweat, and Node shows up late but brings snacks.
The point isn’t who won; it’s what these results say about trade-offs. And trade-offs are what actually decide whether your stack survives production.
What this means for you
So, if you’re not stress-testing servers with a million fake users (please don’t), what do these results actually mean? It comes down to choosing the runtime that matches your team’s pain tolerance profile, not just the leaderboard in a Hacker News thread.
Rust: the “suffer now, fly later” option
You’ll lose hours wrestling with lifetimes and the borrow checker, but the payoff is unmatched speed and reliability. Rust is for hot paths where every millisecond matters.
```rust
use std::thread;

fn main() {
    // Illustrative only: spawning a million OS threads will exhaust
    // memory long before 1M. Real Rust servers use async tasks
    // (e.g. tokio) rather than one thread per user.
    let handles: Vec<_> = (0..1_000_000)
        .map(|i| {
            thread::spawn(move || {
                println!("User {i} connected");
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```
Beautifully fast, brutally unforgiving. Ferrari energy.
Go: boring but battle-tested
Go is the Toyota Camry of backends. No frills, but you’ll trust it on a road trip. Concurrency feels effortless with goroutines.
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 1_000_000; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			fmt.Printf("User %d connected\n", i)
		}(i)
	}
	wg.Wait()
}
```
You don’t brag about Go. You rely on it.
Node: ship fast, scale later
Node isn’t made for this test, but if you need to demo tomorrow, nothing beats it.
```javascript
const http = require("http");

http.createServer((req, res) => {
  res.end("Hello, user!");
}).listen(3000, () => {
  console.log("Server ready on :3000");
});
```
Simple, quick, and great until you start piling on users like it’s Black Friday.
Conclusion
In the end, the million-user stunt wasn’t really about Go, Rust, or Node. It was about us, the devs staring at dashboards at 2 a.m., watching graphs spike like a boss fight health bar. The runtimes didn’t panic as much as we did.
Rust flexed hard: blistering speed, zero drama once it compiled. Go proved its Camry reputation: boringly reliable, even when hammered. Node? It reminded us why it’s perfect for MVPs, but not a gladiator at one million concurrents.
Here’s the controversial bit: benchmarks are memes until you run them in production. The real bottlenecks at scale are ops, infra, and human sanity. Your language is just one piece of the chaos puzzle.
The future is polyglot. You’ll see Rust cores crunching the critical paths, Go quietly running infra glue, and Node handling the edge where speed-to-market trumps raw horsepower. That’s not a compromise; that’s evolution.
# tomorrow’s stack in one command:
docker-compose up rust-core go-service node-gateway
So I’ll leave you with this: if one million users hit your app tomorrow, which runtime would you trust and which one would you blame?
Helpful resources
- Go docs: the official starting point
- The Rust book: required reading that’s actually good
- Node.js docs: because yes, you still forget http.createServer
- Discord on Rust: Rust saving Elixir’s skin
- Netflix on Node: Node’s lessons at the edge
