Bugs Rust Won't Catch: I Ran the List Against My Real Codebase and Found Exactly What I Was Told Wouldn't Exist
Back in 2021, when I was starting out as a Java Backend Developer, a colleague explained to me that Rust was "the language that eliminates bugs before they exist." It sounded like LinkedIn marketing, but it had some technical grounding: the borrow checker, no null pointers, lifetimes. I filed that phrase away in the back of my head.
Three years later, watching the HN thread "Bugs Rust won't catch" hit 648 points, I sat down and did what I always do before forming an opinion: I went looking for my own evidence. I opened the code from three projects I use or contribute to in production — one written in Rust, two with dependencies on tools written in Rust — and went line by line through the thread's list.
I found exactly the bugs they told me I wouldn't find.
My thesis: Rust guarantees memory safety. It does not guarantee logic safety. And the community — which I genuinely respect — sells the first thing as if it automatically solves the second. It doesn't. And the numbers I found in my own code confirm it.
What the HN List Says Rust Won't Prevent (And Why That Matters)
The original thread categorizes the bugs into four groups. I'm listing them plain because I'm going to dissect each one with my own code:
- Pure logic errors: off-by-one, inverted conditions, divisions that should be multiplications
- Concurrency semantics: race conditions the borrow checker can't see because they're about state, not memory
- Misuse of unsafe: when you tell the compiler "trust me" and it turns out you didn't deserve that trust
- Runtime panics: index out of bounds, unwrap() on None, integer overflow in release mode
I made my own Markdown checklist, opened three repositories, and started auditing. What follows is what I found — with real code (anonymized where needed, but structurally identical to the original).
The Concrete Bugs I Found — With Real Code and Real Context
Logic error: the off-by-one the compiler applauded
In a CLI tool written in Rust that I use to process config files, I found this:
```rust
// Iterates over adjacent pairs and sums them
fn process_window(data: &[u32]) -> Vec<u32> {
    let mut result = Vec::new();
    // Bug: the spec calls for an exclusive window that skips the final
    // pair, so the loop should run over 0..data.len() - 2. This version
    // covers every pair. Rust compiled it happily: no UB, no memory
    // error, just a pure logic error producing one extra result.
    for i in 0..data.len() {
        if i + 1 < data.len() {
            result.push(data[i] + data[i + 1]);
        }
    }
    result
}

fn main() {
    let input = vec![1, 2, 3, 4];
    // Expected result per the spec (exclusive window): [3, 5]
    // Actual result: [3, 5, 7]
    // With vec![1, 2, 3] the gap is the same: the spec says [3], you get [3, 5]
    println!("{:?}", process_window(&input));
}
```
The Rust compiler passes this without a single warning. There's no problem from its perspective: the accesses are valid, memory is under control. The problem is that the business logic was something else entirely. I needed to read the project spec to realize it.
This reminded me of something I lived through when I benchmarked TypeScript 7 beta against my real code: the kind of error that cost me the most time wasn't the one the compiler rejected — it was the one the compiler accepted with enthusiasm but was semantically wrong. Same pattern, different language.
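For what it's worth, the idiomatic iterator form in Rust removes the manual index arithmetic entirely. The compiler still has no idea what the spec wants, but there's less surface for off-by-one mistakes. A minimal sketch (not the project's actual code):

```rust
// windows(2) yields every adjacent pair; no manual bounds math to get wrong
fn process_window(data: &[u32]) -> Vec<u32> {
    data.windows(2).map(|w| w[0] + w[1]).collect()
}

fn main() {
    // Still sums every pair: whether that matches the spec is on you
    println!("{:?}", process_window(&[1, 2, 3, 4])); // [3, 5, 7]
}
```

The index arithmetic disappears, but the semantic question (which pairs does the spec actually want?) is untouched.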
Concurrency semantics: the borrow checker guards your memory, not your state logic
This one was the most expensive to find. Rust guarantees you won't have data races at the memory access level. But it guarantees nothing about the order in which operations change the state of your system.
```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Simulation of a concurrent order system
// (simplified version of the pattern I found in production)
struct Inventory {
    stock: i32,
    reserved: i32,
}

impl Inventory {
    fn available(&self) -> i32 {
        self.stock - self.reserved
    }

    fn reserve(&mut self, quantity: i32) -> bool {
        // Rust guarantees nobody else accesses self while we're here.
        // What it does NOT guarantee: that a check in one lock scope and
        // a write in another are atomic from the business logic perspective.
        if self.available() >= quantity {
            self.reserved += quantity;
            true
        } else {
            false
        }
    }
}

fn main() {
    let inventory = Arc::new(Mutex::new(Inventory { stock: 10, reserved: 0 }));
    let inv1 = Arc::clone(&inventory);
    let inv2 = Arc::clone(&inventory);

    // Two threads that read available() "correctly" under lock
    let t1 = thread::spawn(move || {
        let mut inv = inv1.lock().unwrap();
        println!("Thread 1 available: {}", inv.available());
        inv.reserve(8);
    });
    let t2 = thread::spawn(move || {
        let mut inv = inv2.lock().unwrap();
        println!("Thread 2 available: {}", inv.available());
        inv.reserve(8);
    });

    t1.join().unwrap();
    t2.join().unwrap();

    // With a single Mutex like this, Rust enforces mutual exclusion and
    // the result is correct: whichever thread runs second sees
    // available() == 2, so its reserve(8) fails.
    let inv = inventory.lock().unwrap();
    println!("Final reserved: {}", inv.reserved); // Always 8, never 16
    // The bug shows up when the real pattern has checks and writes in
    // separate transactions: something Rust can't see because it's
    // business semantics. In the real code I audited, the check and the
    // write were in two functions with separate locks. Rust didn't
    // complain. The business did.
}
```
The pattern I found in the real codebase was exactly this, but spread across three functions. The borrow checker was perfectly happy. The business logic had a classic overselling bug that in any reservation system translates directly to money or reputation.
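The multi-function version is easy to reconstruct. The sketch below is mine (the names are illustrative, the structure mirrors what I found): the check and the write happen under two separate lock acquisitions, so both threads can pass the check before either commits.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

struct Inventory {
    stock: i32,
    reserved: i32,
}

// Lock acquisition #1: read, then release
fn check_available(inv: &Arc<Mutex<Inventory>>, qty: i32) -> bool {
    let guard = inv.lock().unwrap();
    guard.stock - guard.reserved >= qty
} // lock dropped here: the "truth" is already stale

// Lock acquisition #2: write, with no re-check (classic check-then-act)
fn commit_reservation(inv: &Arc<Mutex<Inventory>>, qty: i32) {
    let mut guard = inv.lock().unwrap();
    guard.reserved += qty;
}

fn main() {
    let inv = Arc::new(Mutex::new(Inventory { stock: 10, reserved: 0 }));
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let inv = Arc::clone(&inv);
            thread::spawn(move || {
                if check_available(&inv, 8) {
                    // Another thread can interleave between check and commit
                    commit_reservation(&inv, 8);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Every single access was synchronized, so the borrow checker is
    // satisfied. But reserved can end up 16 with stock 10: overselling.
    println!("reserved = {}", inv.lock().unwrap().reserved);
}
```

Every lock is held correctly; the bug is that the decision is made on data that stops being true the moment the first lock is released.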
Misused unsafe: when you say "trust me" and you didn't deserve that trust
I found this one in a dependency I use indirectly. I won't name the project because it's already been patched, but the pattern looked like this:
```rust
// "Optimized" conversion that avoids a copy.
// The original author knew what they were doing... in the original version.
// Three refactors later, the invariant no longer held.
//
// SAFETY (the implicit contract): `data` must be valid UTF-8.
unsafe fn fast_buffer_convert(data: &[u8]) -> &str {
    // The compiler trusted it. The reviewer trusted it. The test trusted it.
    // Input from a third party did not.
    unsafe { std::str::from_utf8_unchecked(data) }
}

// The code calling this after the later refactor
fn process_external_input(raw: Vec<u8>) -> String {
    // Here's the problem: raw can now come from a socket
    // and nobody validated UTF-8 on this new code path
    unsafe { fast_buffer_convert(&raw).to_string() }
}
```
Rust has no way of knowing whether the invariant that justified that unsafe is still valid after three refactors and a change in data source. That requires human reasoning about system logic — not a compiler.
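The defensive rewrite is boring, and that's the point: validate at the boundary and pay one pass over the bytes. A sketch (the function name is mine, not the patched project's):

```rust
// String::from_utf8 validates and returns a Result instead of trusting
// an invariant that a later refactor can silently break.
fn process_external_input_checked(raw: Vec<u8>) -> Result<String, std::string::FromUtf8Error> {
    String::from_utf8(raw)
}

fn main() {
    assert!(process_external_input_checked(b"hello".to_vec()).is_ok());
    // 0xFF is never valid UTF-8: the broken invariant becomes an Err,
    // not undefined behavior.
    assert!(process_external_input_checked(vec![0xFF]).is_err());
    println!("boundary validation ok");
}
```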
When I simulated the attack Mercor suffered against my own AI data stack, the first thing I looked for were exactly these entry points: unsafe with implicit invariants that could be broken from the outside. They're gold for an attacker.
Runtime panics: the compiler lied by omission
```rust
fn calculate_average(values: &[f64]) -> f64 {
    // If values is empty this does NOT panic: 0.0 / 0.0 is NaN, which
    // then propagates silently through every later calculation.
    // No compile error. No warning.
    // The integer version is different but no better: division by zero
    // panics at runtime, and integer overflow wraps silently in release
    // mode (overflow checks are off by default there).
    let sum: f64 = values.iter().sum();
    sum / values.len() as f64 // silent division by zero in f64: NaN
}

fn main() {
    let user_data: Vec<f64> = Vec::new(); // empty input from the form
    // Rust doesn't warn you this produces garbage.
    // TypeScript doesn't either, neither does Java; but nobody sells
    // them as "we eliminate crashes before they exist".
    println!("{}", calculate_average(&user_data)); // prints NaN
}
```
I found four variants of this pattern in the code I audited. Three used .unwrap() on results that could be None in untested production paths. One was a direct index with no bounds check.
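For the record, every one of those four variants has a panic-free spelling. A sketch of the pattern (illustrative, not the audited code):

```rust
// Returning Option makes "empty input" a case the caller must handle
fn calculate_average_checked(values: &[f64]) -> Option<f64> {
    if values.is_empty() {
        return None;
    }
    Some(values.iter().sum::<f64>() / values.len() as f64)
}

fn main() {
    let user_data: Vec<f64> = Vec::new();
    // .first() instead of direct indexing: Option instead of panic
    let first = user_data.first().copied().unwrap_or(0.0);
    match calculate_average_checked(&user_data) {
        Some(avg) => println!("average: {}", avg),
        None => println!("no data yet (fallback first value: {})", first),
    }
}
```

None of this is enforced by the compiler; it's a discipline the type system supports but doesn't demand.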
The Gotchas the Rust Community Underestimates (And Why It Bugs Me)
Here's my straight take, no softening: Rust's marketing has a selective honesty problem.
I'm not saying Rust is bad. I'm saying that when someone sells you "memory safety" as if it were "correctness," they're conflating two different things. I came from the TypeScript/Node world, where you deal with undefined is not a function at 3am. Rust genuinely solves that. But it doesn't solve:
- Incorrect domain logic: the compiler doesn't know what your system is supposed to do, only that it won't corrupt memory while doing it
- Semantic race conditions: you can have perfect mutual exclusion and still have a system with inconsistent state
- Invariants in unsafe: once you write unsafe, that contract is yours, not the compiler's
- Expected panics: unwrap(), expect(), direct indexing, all potential bugs that Rust happily accepts
This resonates with what I found when I audited agent usage in my own stack: the code Claude generated passed TypeScript's type checker perfectly. But the business logic was wrong in two functions. The compiler can't save you from what it doesn't understand.
The biggest gotcha of all: Rust has a learning curve that makes people feel safe once they've finally beaten the borrow checker into submission. That feeling of "it compiled, it works" is dangerous precisely because it's partially true. The memory is fine. The logic can still be completely broken.
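To be fair, the panic-prone patterns in that list are partially addressable with tooling. Clippy (not rustc itself) ships opt-in restriction lints that fail the build on them. A sketch:

```rust
// These are Clippy restriction lints: they do nothing under plain
// `cargo build`, but `cargo clippy` rejects the commented lines below.
#![deny(clippy::unwrap_used)]
#![deny(clippy::expect_used)]
#![deny(clippy::indexing_slicing)]

fn main() {
    let v = vec![1, 2, 3];
    // let first = v[0];               // rejected: indexing_slicing
    // let first = v.first().unwrap(); // rejected: unwrap_used
    if let Some(first) = v.first() {
        println!("first = {}", first);
    }
}
```

That still does nothing for the logic and semantic-race categories, which is exactly the point of this post.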
When Rust Actually Is the Right Answer (Being Honest About It)
I don't want to leave without being precise here, because otherwise I look like a hater and I'm not:
- Systems where memory safety is the critical constraint: kernels, drivers, embedded code, parsers for untrusted input — Rust wins there without debate
- Performance with memory correctness: when you need C speed without C bugs, Rust is the right answer
- Code that manipulates external data buffers: Rust's controlled unsafe beats unrestricted C
When I was evaluating whether to move part of my data pipeline to a Rust service to cut costs on Railway (context: I'd just been through analyzing the Bedrock migration and the numbers weren't working out), the conclusion was: Rust is useful for the parsing layer. It's not useful for business logic where I need to iterate fast.
Right tool for the right problem. Not "Rust eliminates bugs."
FAQ — Frequently Asked Questions About Bugs Rust Won't Prevent
Does Rust really not have null pointer exceptions?
Correct: Rust doesn't have null pointers in the C/C++ sense. But it has Option<T>, and calling unwrap() on a None gives you a runtime panic. It's better than a silent segfault, but it's not "eliminating the problem." It's moving it from undefined behavior to an explicit panic: a real improvement, not a total solution.
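A minimal illustration of that difference:

```rust
fn main() {
    let missing: Option<i32> = None;
    // missing.unwrap() compiles fine and panics at runtime.
    // The explicit alternatives force the None case into view:
    let value = missing.unwrap_or(0);
    println!("{}", value);
}
```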
Does the borrow checker prevent all race conditions?
It prevents data races at the memory access level — two threads writing to the same location without synchronization. It does not prevent semantic race conditions where a sequence of logically correct operations produces inconsistent state. The difference between the two is exactly what high-traffic systems hit in production.
How dangerous is unsafe in Rust in real projects?
Depends on team size and code change rate. In small projects with a single author, well-documented unsafe is manageable. In projects with multiple contributors and frequent refactors, the implicit invariants that justify unsafe break silently. The compiler won't catch it. Code review can — if the reviewer knows what to look for.
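One concrete review aid: Clippy's undocumented_unsafe_blocks lint makes `cargo clippy` fail when an unsafe block has no SAFETY comment, so the invariant is at least written down where the reviewer can check it. A sketch (the function is illustrative):

```rust
// Reject (under `cargo clippy`) any unsafe block without a SAFETY comment
#![deny(clippy::undocumented_unsafe_blocks)]

fn first_byte(data: &[u8]) -> u8 {
    assert!(!data.is_empty());
    // SAFETY: the assert above guarantees index 0 is in bounds.
    unsafe { *data.get_unchecked(0) }
}

fn main() {
    println!("{}", first_byte(b"ok"));
}
```

The comment doesn't prove anything; it just makes the contract visible so a refactor that breaks it has a chance of being caught in review.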
Why doesn't the Rust community mention these limits more often?
My read: there's genuine advocacy mixed with confirmation bias. Rust had to fight hard to gain adoption against C++ and mainstream skepticism. That creates a culture of defending the language aggressively. The HN thread with 648 points exists precisely because there are people inside that community who want to be more honest.
Are these bugs unique to Rust or do they show up in every language?
They show up in every language. The difference is that no other language sells "we eliminate bugs before they exist" as a central part of its pitch. Java, TypeScript, Go — nobody makes that claim. Rust does. So the gap between what's promised and what's real is more visible.
Is it worth learning Rust if you're coming from the TypeScript/Node world?
For specific cases, yes: parsers, high-performance CLIs, WebAssembly, embedded systems. For typical CRUD with complex business logic where you're iterating fast, the learning cost and borrow checker verbosity aren't justified compared with well-typed TypeScript or Go. LocalSend is written in Flutter/Dart and it's an immaculate tool; not everything needs to be Rust.
What I'm Taking Away From This Audit — And What I'm Not Buying
I ran this audit because the HN thread created a specific discomfort in me: I'd spent months hearing "use Rust and these bugs don't exist" from people I respect technically. I wanted my own data before forming an opinion.
The data says: I found a logic off-by-one error, a concurrency semantics bug, an unsafe with a broken invariant, and four potential runtime panics — all in code that compiled without warnings, had tests, and was running in production.
What I accept about Rust: the memory safety proposition is real and valuable. If you're writing code that parses untrusted input, handles network buffers, needs system-level performance — Rust wins. No debate.
What I'm not buying: that "memory safe" is synonymous with "correct." They're orthogonal properties. You can have memory-safe code with completely broken logic. The memory is intact while the business falls apart.
The phrase I'm keeping: Rust gives you a compiler as a partner for memory. For logic, the partner is still you.
And you can be wrong. I have been. The code I audited was too.
If you want to follow this series of benchmarks and personal audits, the feed is open. More numbers coming next week.
This article was originally published on juanchi.dev