Prelude: A Predictable Controversy
On December 16, 2025, a peculiar entry appeared in the Linux kernel CVE announcement list: CVE-2025-68260.
This was no ordinary kernel vulnerability. It came from rust_binder, the Rust implementation of the Android Binder driver in the Linux kernel. This marks the first CVE officially assigned to Rust code since Rust entered the Linux kernel mainline.
The news set social media ablaze.
Rust critics felt they had finally found the "smoking gun" they'd been waiting for:
"Since unsafe is unavoidable in the kernel, and unsafe code has hidden dangers, maybe you shouldn't have been bragging everywhere that 'if it compiles, it works.' So now we need to add another condition that unsafe code doesn't count as compiled code?"
Rust supporters quickly fired back:
"Before you start trending on the first reported CVE linked to rust code on the linux kernel, note that: on the same day, 159 kernel CVEs were issued for the C portion of the codebase."
One C developer even stepped up to defend Rust:
"It's funny how as a C dev I gotta defend Rust, but there are over a hundred of these daily caused by C code. A single non-critical CVE in 5 years doesn't seem that big with this information."
The debate ultimately centered on a frequently misunderstood feature of the Rust language: Unsafe.
So today, let's use this CVE as an opportunity to thoroughly understand: What exactly is Unsafe Rust? Is it a "loophole" in Rust's safety promises, or a "necessary evil" of systems programming?
Part One: What Exactly Happened with This CVE?
Before diving into unsafe, let's first understand the vulnerability itself.
Background: Safe Abstraction in Unsafe Rust
To understand this vulnerability, we first need to understand one of Rust's core design principles: Safe Abstraction.
Rust divides code into two worlds:
Safe Rust: The compiler automatically proves memory safety at compile time through mechanisms like ownership and the borrow checker. You don't need to do anything extra—if it compiles, it means there are no data races, dangling pointers, buffer overflows, etc.
Unsafe Rust: Certain low-level operations (like dereferencing raw pointers or calling FFI functions) exceed the compiler's static analysis capabilities. In these cases, developers must mark code blocks with the unsafe keyword, personally promising the compiler: I have manually verified that this code is safe.
The key point is: code inside an unsafe block still receives the same memory safety checks as Safe Rust (Unsafe is a superset of Safe), but for certain specific operations—like manipulating raw pointers—the compiler simply cannot help, so developers must guarantee safety themselves.
This raises a question: How do you prove your promise is trustworthy?
The answer is SAFETY comments: a community-established documentation convention. Above every unsafe block, developers must clearly state:
- Why is unsafe needed here? (What operation can't the compiler verify?)
- What preconditions does safety depend on? (i.e., "invariants")
- Why do these preconditions hold? (Provide the reasoning)
This isn't optional "nice-to-have" documentation—it's a mandatory convention in the Rust community. Unsafe code without SAFETY comments will be rejected outright in code review.
Why? Because the correctness of unsafe code depends entirely on whether these invariants hold. If invariants are wrong, incomplete, or become invalid as code evolves, bugs will emerge.
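As a concrete illustration of the convention, here is how a SAFETY comment answers all three questions on a tiny function. This is a hedged sketch: `first_unchecked` is a made-up helper, not code from the kernel or standard library.

```rust
// Illustrative sketch of the SAFETY-comment convention.
fn first_unchecked(v: &[i32]) -> i32 {
    assert!(!v.is_empty(), "caller must pass a non-empty slice");
    // SAFETY:
    // - `get_unchecked` skips the bounds check, which the compiler
    //   cannot prove safe on its own (why unsafe is needed);
    // - it requires `0 < v.len()` (the invariant);
    // - the assert above guarantees that on every path reaching this
    //   line (why the invariant holds).
    unsafe { *v.get_unchecked(0) }
}

fn main() {
    assert_eq!(first_unchecked(&[7, 8, 9]), 7);
}
```

Note that the comment does not just say "this is safe"; it names the exact precondition and points at the code that establishes it, which is what a reviewer has to check.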
Now, let's look at the problematic code in this CVE.
The Problematic Code
The vulnerability appeared in the drivers/android/binder/node.rs file. To understand this bug, we need some background.
What is Binder?
Binder is the core Inter-Process Communication (IPC) mechanism in Android. When one app wants to call another app or system service, it goes through Binder. It's one of the cornerstones that makes Android work.
What is death_list?
In Binder, there's an important concept called "Death Notification." When a service process crashes unexpectedly, all clients depending on it need to be notified. The NodeDeath struct represents such death notifications, and death_list is a linked list storing all death notifications registered to a particular Binder node.
What is this code doing?
When cleaning up a NodeDeath, the code needs to remove it from its associated death_list linked list. The problem lies in this removal operation:
```rust
// SAFETY: A `NodeDeath` is never inserted into the death_list
// of any node other than its owner, so it is either in this
// death list or in no death list.
unsafe { node_inner.death_list.remove(self) };
```
The SAFETY comment claims: A NodeDeath node is only ever inserted into the death_list of its owning node, so it's either in this list or in no list at all.
Based on this assumption, the developer believed this unsafe operation was safe.
But the problem is: this assumption doesn't hold in concurrent scenarios.
Formation of the Race Condition
Let's look at the logic of the Node::release function:
Thread A (Node::release):
1. Acquire lock
2. Move all items from death_list to a temporary list on the stack
3. Release lock ← Problem is here
4. Iterate through the temporary list, processing each node
Thread B (NodeDeath cleanup):
At some point calls unsafe { death_list.remove(self) }
See the problem?
After Thread A releases the lock in step 3 but before completing step 4, Thread B might simultaneously call remove on the original list. At this point:
- Thread A is traversing via the temporary list, accessing nodes' prev/next pointers
- Thread B is removing via the original list, also modifying nodes' prev/next pointers
- Two threads are modifying the same memory simultaneously, with no synchronization mechanism
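The shape of the bug can be sketched in a few lines of userspace Rust. This is a simplified analogue with hypothetical types, not the actual binder code: in safe Rust the drained values are owned, so the code below is fine; in the driver, the nodes remained reachable through raw pointers in an intrusive list, so another thread's remove could race with the traversal.

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Simplified sketch of the racy pattern in Node::release.
fn release(list: &Mutex<VecDeque<u32>>) -> Vec<u32> {
    // Steps 1-2: take the lock and move everything to a local list.
    let drained: Vec<u32> = list.lock().unwrap().drain(..).collect();
    // Step 3: the lock guard is dropped at the end of the statement
    // above, before any processing happens.
    // Step 4: iterate without the lock. This is the window in which
    // the intrusive-list version races with a concurrent remove.
    drained
}

fn main() {
    let list = Mutex::new(VecDeque::from([1, 2, 3]));
    assert_eq!(release(&list), vec![1, 2, 3]);
    assert!(list.lock().unwrap().is_empty());
}
```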
![Diagram of the race between Node::release and the concurrent NodeDeath removal](https://pbs.twimg.com/media/G8YfVRnW4AEALhU?format=jpg&name=4096x4096)
(Diagram from x.com/forefy)
This is a classic Data Race, corrupting the linked list pointers. The end result is a kernel crash:
```
Unable to handle kernel paging request at virtual address 000bb9841bcac70e
...
Internal error: Oops: 0000000096000044 [#1] PREEMPT SMP
```
The Fix
The fix is actually simple: instead of moving items to a temporary list, pop and process items directly from the original list while holding the lock. This eliminates the window where "other threads can access the list after the lock is released."
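In the same simplified userspace terms as before, the fixed shape looks roughly like this. It is a sketch of the idea, not the actual patch: each item is either still in the list (reachable only under the lock) or already owned by the releasing thread, so the "in a temporary list, unprotected" state no longer exists.

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Sketch of the fix: pop one item at a time from the original list.
fn release_fixed(list: &Mutex<VecDeque<u32>>, mut process: impl FnMut(u32)) {
    loop {
        // Take the lock only long enough to pop a single item.
        let item = list.lock().unwrap().pop_front();
        match item {
            Some(x) => process(x), // the lock is already released here
            None => break,
        }
    }
}

fn main() {
    let list = Mutex::new(VecDeque::from([1, 2, 3]));
    let mut seen = Vec::new();
    release_fixed(&list, |x| seen.push(x));
    assert_eq!(seen, vec![1, 2, 3]);
}
```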
Key Insight
What's the essence of this bug?
It's not that unsafe itself is problematic, but that the invariant assumption in the SAFETY comment was wrong.
The developer assumed "the node is either in this list or in no list," but in reality, the Node::release implementation could move nodes to a temporary list that wasn't protected by the lock. This broke the original assumption.
In the words of Ralf Jung (Rust language team member): The most important comment in unsafe code is the one about its invariants. And this comment was precisely wrong.
Part Two: What Exactly Is Unsafe Rust?
Coincidentally, shortly before this CVE was published, Ralf Jung—an authoritative researcher on Rust's memory model—gave a talk about unsafe Rust at Scala Days 2025. This talk perfectly answers the question "What exactly is Unsafe?"
unsafe ≠ Unsafety
First, let's clarify a common misconception: unsafe doesn't mean "unsafe code," but rather "code whose safety the compiler cannot automatically verify."
In Safe Rust, the compiler can prove at compile time—through the ownership system, borrow checker, and other mechanisms—that code won't exhibit:
- Null pointer dereferences
- Dangling pointers
- Data races
- Buffer overflows
- Double frees
But some operations cannot be statically proven safe by the compiler, such as:
- Dereferencing raw pointers
- Calling FFI functions
- Accessing mutable static variables
- Implementing certain special traits
These operations require the unsafe keyword. Unsafe doesn't mean "you're allowed to write dangerous code," but rather "you're promising the compiler: I have manually verified this code's safety."
Ralf Jung's original words:
"Rust doesn't make unsafe 'free'; it makes it explicit and accountable."
The Five Operations unsafe Allows
Inside an unsafe block, you can perform these five types of operations:
| Operation | Description |
|---|---|
| Dereference raw pointers | `*const T` and `*mut T` |
| Call unsafe functions or methods | Including FFI functions |
| Access or modify mutable static variables | `static mut` |
| Implement unsafe traits | Such as `Send`, `Sync` |
| Access union fields | The compiler can't know which field currently holds a valid value |
Key point: unsafe is "permission," not "correctness." The compiler allows you to do these things, but you must ensure you don't trigger undefined behavior.
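Two of the five operations fit in a few lines of plain Rust. This is an illustrative sketch, not kernel code:

```rust
// Reading a union field and dereferencing a raw pointer, each with
// the SAFETY comment that justifies it.
union Bits {
    f: f32,
    u: u32,
}

fn main() {
    let b = Bits { f: 1.0 };
    // SAFETY: `b` was initialized through `f`, and every bit pattern
    // of an `f32` is also a valid `u32`, so reading `u` is defined.
    let bits = unsafe { b.u };
    assert_eq!(bits, 0x3f80_0000); // IEEE-754 encoding of 1.0f32

    let x = 42i32;
    let p: *const i32 = &x;
    // SAFETY: `p` was created from a live reference, so it is
    // non-null, aligned, and points to initialized memory.
    assert_eq!(unsafe { *p }, 42);
}
```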
Invariants: The Soul of unsafe
Every piece of unsafe code should have clearly documented invariants. Invariants answer three questions:
- Why is unsafe needed? What is this code doing that the compiler can't verify?
- What assumptions must be maintained? What conditions does correct execution depend on?
- What could break it? What inputs or concurrent accesses might violate these assumptions?
Looking back at CVE-2025-68260's problematic code:
```rust
// SAFETY: A `NodeDeath` is never inserted into the death_list
// of any node other than its owner, so it is either in this
// death list or in no death list.
unsafe { node_inner.death_list.remove(self) };
```
The comment does state "why it's safe," but the assumption itself is incomplete—it didn't consider that Node::release would move nodes to a temporary list. If the invariants are wrong, the unsafe code cannot possibly be safe.
Part Three: Why Must the Kernel Use unsafe?
"If unsafe has risks, why not avoid it entirely?"
The answer is: In systems programming, unsafe is unavoidable.
Three Hard Requirements of Systems Programming
Ralf Jung listed three fundamental reasons why unsafe exists:
1. Interacting with the Low-Level World
Operating system kernels must interact with hardware, C code, and device drivers. These interactions require:
- Raw pointer operations
- Specific memory layout control
- FFI calls
Rust's safe abstractions are built on top of these low-level operations, and these operations themselves cannot be covered by safe abstractions.
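A minimal FFI example of this boundary, as a sketch: the code declares a C function and calls it. `abs` lives in the C standard library, which is linked by default on most hosted targets; the compiler cannot verify anything about the C side, so the call site is necessarily unsafe.

```rust
// Declaring and calling a C standard library function through FFI.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    // SAFETY: `abs` has no preconditions other than avoiding
    // `i32::MIN`, whose result overflows and is undefined in C.
    let v = unsafe { abs(-7) };
    assert_eq!(v, 7);
}
```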
2. Performance and Predictability
Certain high-performance data structures (like lock-free queues, memory allocators, intrusive linked lists) require fine-grained control over memory layout and access patterns. The compiler's automatic checks sometimes prevent safe but efficient implementations.
3. Expressing Cross-Layer Invariants
Some safety guarantees span multiple abstraction layers that the compiler cannot statically capture. For example:
- "This pointer points to valid memory because we guarantee it at a higher logical level"
- "This trait implementation definitely satisfies certain semantic constraints"
Ralf Jung's summary:
"Unsafe is where Rust meets the world."
The Linux Kernel's Special Challenges
As a complex piece of systems software, the Linux kernel has numerous scenarios requiring unsafe:
- Memory management: Direct manipulation of physical memory and page tables
- Interrupt handling: Executing code in special contexts
- Concurrency primitives: Implementing locks, atomic operations, RCU
- Device drivers: Interacting with hardware registers
- C code interoperation: The vast majority of the kernel is still C
This isn't Rust's problem; it's the nature of systems programming. C's approach is unsafe by default: any line of code might have problems, and there are no compile-time memory safety checks. Rust's approach is safe by default with explicit unsafe blocks, which concentrates the risk at a few controlled boundaries.
Part Four: How to Use unsafe Correctly
Since unsafe is unavoidable, how do we minimize risk?
Ralf Jung offered three core principles: Minimize, Document, Encapsulate.
Principle One: Minimize
Use unsafe only where unavoidable, keeping it to the smallest possible scope.
Don't write this:
```rust
unsafe {
    // A whole bunch of code...
    // Only one line actually needs unsafe,
    // but the entire block is marked unsafe.
}
```
Write this instead:
```rust
// Safe preparation work
let ptr = get_pointer();
let len = calculate_length();

// Use unsafe only where necessary
let value = unsafe { *ptr };

// Safe follow-up processing
process(value);
```
Principle Two: Document
Write clear invariants above every unsafe block.
A good comment template:
```rust
// SAFETY:
// - `ptr` is valid because it came from `Box::into_raw` and hasn't been freed yet
// - `ptr` points to properly aligned memory because `Box` guarantees this
// - We have exclusive access because no other references exist
unsafe { Box::from_raw(ptr) }
```
Bad comments (or no comments at all) lead to:
- Future maintainers not knowing what assumptions the code depends on
- Refactoring potentially breaking assumptions inadvertently
- Inability to verify correctness during code review
Principle Three: Encapsulate
Wrap unsafe implementations with safe APIs.
This is the most important principle. Unsafe code should be encapsulated inside modules, exposing completely safe interfaces externally. Users don't need to know there's unsafe inside and cannot misuse it.
The classic example is the standard library's Vec<T>:
```rust
// Internal implementation uses unsafe for memory management,
// but the external interface is 100% safe.
let mut v = Vec::new();
v.push(1); // safe call, no unsafe needed
v.push(2);
let first = v[0]; // safe call
```
The Vec implementer bears the responsibility of unsafe; users don't need to worry about memory safety at all.
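The same pattern can be shown in miniature. This is a toy type invented for illustration: the raw pointer never leaves the module, so callers cannot violate its invariants, and the unsafe surface is three small, documented blocks.

```rust
// A toy safe wrapper over an unsafe core, in the spirit of Vec.
mod raw {
    pub struct RawBox {
        // Invariant: non-null, produced by Box::into_raw, not yet freed.
        ptr: *mut i32,
    }

    impl RawBox {
        pub fn new(v: i32) -> Self {
            RawBox { ptr: Box::into_raw(Box::new(v)) }
        }

        pub fn get(&self) -> i32 {
            // SAFETY: the field invariant guarantees `ptr` is valid
            // for reads for as long as `self` is alive.
            unsafe { *self.ptr }
        }
    }

    impl Drop for RawBox {
        fn drop(&mut self) {
            // SAFETY: `ptr` came from Box::into_raw and Drop runs at
            // most once, so the memory is freed exactly once.
            unsafe { drop(Box::from_raw(self.ptr)) };
        }
    }
}

fn main() {
    let b = raw::RawBox::new(5);
    assert_eq!(b.get(), 5); // safe API; the unsafe is fully internal
}
```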
Architecture Diagram: Proper Layering of unsafe
```
┌─────────────────────────────────────────────────────────────┐
│                   Application Layer Code                    │
│                      (100% Safe Rust)                       │
├─────────────────────────────────────────────────────────────┤
│                       Safe API Layer                        │
│             (Safe interface, encapsulating unsafe)          │
├─────────────────────────────────────────────────────────────┤
│                      unsafe Core Layer                      │
│       (Minimized unsafe code, detailed invariant docs)      │
├─────────────────────────────────────────────────────────────┤
│                  Low-level / FFI / Hardware                 │
└─────────────────────────────────────────────────────────────┘
```
Key insight: The smaller the unsafe surface area, the lower the audit cost, and the easier bugs are to find.
Part Five: Tools and Methods for Making unsafe Verifiable
The Rust community hasn't stopped at "relying on human effort to ensure unsafe correctness." A series of tools and formal methods are shifting unsafe risk management from "experience-driven" to "evidence-driven."
Miri: The Gatekeeper of unsafe
Miri is a Rust interpreter that can detect various undefined behaviors at runtime:
- Invalid memory accesses
- Aliasing rule violations (Stacked Borrows / Tree Borrows)
- Uninitialized memory reads
- Memory leaks
When developing unsafe code, running tests with Miri can catch many hidden bugs.
Loom: The Concurrency Revealer
Loom is a concurrency testing framework that discovers race conditions by exhaustively (or intelligently sampling) thread schedules.
A bug like the one in CVE-2025-68260 might well have been caught before release by this kind of exhaustive schedule exploration (though Loom targets userspace Rust, so applying it to kernel code is not straightforward).
Sanitizers: Sentinels at FFI Boundaries
AddressSanitizer (ASan) and ThreadSanitizer (TSan) can detect memory errors and data races, particularly useful for Rust-C interaction boundaries.
RustBelt: Formal Verification
RustBelt is an academic project providing formal semantic foundations for Rust's type system and unsafe encapsulation. It mathematically proves: If unsafe code satisfies specific invariants, then its safe encapsulation is safe in any usage scenario.
Ralf Jung's original words:
"Tooling makes unsafe less mystical; formal methods make it trustworthy."
Part Six: A Rational View of This CVE
Now, let's return to the original debate.
Let the Data Speak
On the same day CVE-2025-68260 was published, the C portion of the Linux kernel had 159 CVEs issued.
Let's make a simple comparison:
| Metric | Rust Code | C Code |
|---|---|---|
| Time in kernel | ~3 years | ~33 years |
| Code proportion | < 1% | > 99% |
| Cumulative CVE count | 1 | Tens of thousands |
| CVEs issued this time | 1 | 159 |
This data isn't meant to prove "Rust is perfect," but to show: Rust significantly reduces the occurrence rate of memory safety vulnerabilities.
What Does This Bug Tell Us?
- Unsafe code needs more careful review: Safety assumptions in comments must cover all scenarios, including concurrent scenarios.
- Rust hasn't eliminated bugs, but changed their distribution: Safe Rust eliminates entire classes of memory safety bugs; remaining bugs concentrate at unsafe boundaries and logical errors.
- Explicit unsafe makes problems easier to locate: Once this bug was found, developers immediately knew to check the unsafe block's invariants. With C code, locating the problem would be much harder.
Rust's Promise Was Never "Zero Bugs"
Rust's safety promise is:
If your code is 100% safe Rust (no unsafe), it won't have memory safety issues.
For code containing unsafe, the promise becomes:
The correctness of unsafe code depends on whether your maintained invariants are correct. If invariants are correct, the encapsulated safe API is safe.
This CVE precisely proves: When invariants are wrong, unsafe code will have problems. This isn't a Rust defect—it's Rust's design working as intended—concentrating risk in auditable, accountable places.
Comparison with C
What would it look like if the same code were implemented in C?
- No unsafe markers: All code is "unsafe," making it impossible to quickly locate high-risk areas
- No mandatory SAFETY comments: Invariants might not be documented at all
- More prone to other memory safety issues: use-after-free, buffer overflow, null pointer, etc.
- Harder to trace problem sources: No ownership system to assist analysis
Part Seven: Responding to Rust Haters
Let's directly address the criticism from the beginning of this article:
"Since unsafe is unavoidable in the kernel, and unsafe code has hidden dangers, maybe you shouldn't have been bragging everywhere that 'if it compiles, it works'..."
This criticism is based on a false premise.
The Rust community has never said "if it compiles, it works." The accurate statement is:
- Safe Rust code: Compilation passing = no memory safety issues (this has formal proof support)
- Unsafe Rust code: Compilation passing ≠ no problems; human verification of invariants is required
- Mixed code: Overall safety depends on the correctness of the unsafe portions
Rust's value lies in:
- Narrowing the scope requiring human review: Only unsafe blocks need special attention
- Clarifying review focus: Invariant documentation tells you what to verify
- Eliminating entire classes of bugs: In safe Rust, data races, use-after-free, etc. simply cannot occur
Reducing "100% of code needs memory safety concerns" to "only 1% of code needs concerns"—this itself is enormous progress.
Conclusion: Safety Isn't About Avoiding Low-Level Work, But Doing It the Right Way
CVE-2025-68260 is the first CVE since Rust entered the Linux kernel, but it won't be the last. As long as there's unsafe code, there's potential for mistakes.
But this precisely illustrates Rust's design philosophy:
Acknowledge the necessity of low-level operations, but use explicit markers, invariant documentation, tooling detection, and community conventions to control risk to the minimum scope.
Ralf Jung's summary at the end of his talk is worth remembering for every systems programmer:
- unsafe is a necessary interface layer, not wanton escape
- Document and encapsulate with invariants at the core, keeping danger in a small room
- Treat UB as a red line, guarding it with tools and tests
- Use community conventions and formal methods to upgrade experience into verifiable engineering
Safety isn't about avoiding low-level work, but doing it the right way.
This CVE isn't Rust's failure—it's the unsafe mechanism working as designed—exposing problems at controlled boundaries, allowing us to quickly locate, fix, and learn from them.
159 versus 1. This numerical comparison is the story truly worth discussing today.
Appendix: CVE-2025-68260 Technical Details
Vulnerability Type: Race Condition
Affected Versions: Linux 6.18 (introduced)
Fixed Versions: Linux 6.18.1, 6.19-rc1
Problem File: drivers/android/binder/node.rs
Root Cause: Node::release iterates over a temporary list after releasing the lock, creating a data race with other threads' unsafe remove operations
Fix: Pop elements directly from the original list while holding the lock, avoiding the race window after lock release
If you found this article helpful, feel free to like, share, and subscribe.
What are your thoughts on Rust and systems programming? Feel free to discuss in the comments.