Let me paint a picture.
You're writing a Rust program. It starts, runs top to bottom, and exits. One thread. One lane of traffic. Fine — until you want two things to happen at the same time. Maybe you want to download a file while showing a progress bar. Maybe you're writing a server that needs to handle multiple requests. Maybe you just want to go fast.
That's when threads enter the picture. And that's when most languages say "good luck" and hand you a loaded footgun.
Rust does something different.
## Starting a Thread is Stupidly Simple

```rust
use std::thread;

fn main() {
    thread::spawn(f);
    thread::spawn(f);
    println!("Hello from the main thread.");
}

fn f() {
    println!("Hello from another thread!");
    let id = thread::current().id();
    println!("This is my thread id: {id:?}");
}
```
`std::thread::spawn` takes a function, hands it to a new thread, and gets out of the way. That's it.

But here's the first gotcha: if `main()` returns, the whole program dies — even if other threads are still running. So if you want to wait for a thread to finish, you need to `.join()` it:
```rust
let handle = thread::spawn(f);
handle.join().unwrap(); // wait here until the thread is done
```
In practice, you'll almost always pass a closure instead of a named function, because closures let you capture values and move them into the thread:
```rust
let numbers = vec![1, 2, 3];

thread::spawn(move || {
    for n in &numbers {
        println!("{n}");
    }
}).join().unwrap();
```
That move keyword is important. Without it, the closure would borrow numbers by reference — and the compiler would immediately yell at you, because the spawned thread could potentially outlive the variable. Rust catches this at compile time. No segfaults. No "works on my machine."
## Scoped Threads — When You Know the Lifetime

Sometimes you know a thread won't outlive a certain scope. For that, `std::thread::scope` lets you borrow local variables safely:
```rust
let numbers = vec![1, 2, 3];

thread::scope(|s| {
    s.spawn(|| println!("length: {}", numbers.len()));
    s.spawn(|| {
        for n in &numbers {
            println!("{n}");
        }
    });
});
// all scoped threads are automatically joined when we reach here
```
Both threads can borrow numbers — no move, no Arc, no cloning. Clean and safe.
A quick piece of history: the original scoped thread API before Rust 1.0 had a subtle bug — it assumed an object's destructor would always run, which let you write "safe" code that was actually unsound. The fix was radical: `std::mem::forget` was made safe (to make the assumption explicit), and scoped threads were removed entirely. They came back in Rust 1.63 with a new design that doesn't rely on destructors at all. This incident is affectionately known as The Leakpocalypse.
## Okay But How Do You Actually Share Data?
So you've got two threads. Now you want them to look at the same data. Here's where things get interesting.
You have three real options.
Option 1: Statics. A static belongs to the whole program, lives forever, and any thread can borrow it freely. Simple, but inflexible — you have to know the value at compile time.
```rust
static X: [i32; 3] = [1, 2, 3];

thread::spawn(|| dbg!(&X));
```
Option 2: Leaking. You can deliberately "forget" to free memory using `Box::leak`, which gives you a `'static` reference. Works in a pinch, but leaking memory on purpose is the kind of thing that comes back to bite you if you do it repeatedly.
Option 3: `Arc<T>`. This is what you'll actually use.
Arc stands for Atomically Reference Counted. The idea is elegant: instead of one owner, you have shared ownership, tracked by a counter. Every time you clone an Arc, the counter goes up. When a clone is dropped, the counter goes down. When it hits zero, the value is freed.
```rust
use std::sync::Arc;

let a = Arc::new([1, 2, 3]);
let b = a.clone(); // same memory, counter = 2

thread::spawn(move || dbg!(a)); // each thread drops its clone when it finishes…
thread::spawn(move || dbg!(b)); // …and whichever drop brings the counter to 0 frees the memory
```
You might be wondering: what's the difference between Arc and Rc? One word — threads.
| | `Rc<T>` | `Arc<T>` |
|---|---|---|
| Thread-safe | ❌ | ✅ |
| Counter updates | Non-atomic | Atomic (hence the A) |
| Overhead | Lower | Slightly higher |
Use `Rc` when you're on a single thread and need shared ownership. Use `Arc` when multiple threads are involved. The compiler will stop you if you try to send an `Rc` across thread boundaries — and that's a feature, not a limitation.
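To make the reference counting visible, you can watch the counter move with `Arc::strong_count` — a minimal sketch (the `Vec` payload and the thread body are just illustrative):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let a = Arc::new(vec![1, 2, 3]);
    assert_eq!(Arc::strong_count(&a), 1);

    let b = Arc::clone(&a); // same allocation, counter goes to 2
    assert_eq!(Arc::strong_count(&a), 2);

    // `b` moves into the thread and is dropped when the closure returns.
    thread::spawn(move || assert_eq!(b[0], 1))
        .join()
        .unwrap();

    // The join guarantees the thread — and its clone — is gone by now.
    assert_eq!(Arc::strong_count(&a), 1);
}
```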
One small quality-of-life tip: when you need to clone an Arc before moving it into a closure, use variable shadowing to keep things readable:
```rust
let a = Arc::new([1, 2, 3]);

thread::spawn({
    let a = a.clone(); // new `a` scoped to this block
    move || dbg!(a)
});

dbg!(a); // original `a` still lives here
```
> ⚠️ One important constraint: `Arc<T>` gives you shared read access — like a `&T`. You cannot mutate through it. If you want shared and mutable, you need a `Mutex`. More on that shortly.
## Interior Mutability — Bending the Rules, Safely
Here's something that trips people up early on: in Rust, a shared reference (&T) means immutable by default. But what if you need to mutate something that's shared?
This is where interior mutability comes in. It's not a hack — it's a principled part of the language. The trick is reframing how you think about references:
- `&T` = shared reference (many can exist at once)
- `&mut T` = exclusive reference (only one can exist)
Interior mutability allows mutation through a shared reference, as long as the type enforces safety rules of its own. Here's the full lineup:
| Type | Thread-safe | How it works | Cost |
|---|---|---|---|
| `Cell<T>` | ❌ | Copy in / copy out | Zero |
| `RefCell<T>` | ❌ | Runtime borrow check | Small counter |
| `Mutex<T>` | ✅ | Exclusive OS lock | Blocks thread |
| `RwLock<T>` | ✅ | Shared + exclusive lock | Blocks thread |
| `Atomic*` | ✅ | CPU instruction | Minimal |
| `UnsafeCell<T>` | ⚠️ | Raw pointer | Zero (unsafe) |
Every single one of these is ultimately built on `UnsafeCell<T>` — the only legal way in Rust to mutate through a shared reference at the primitive level. The rest are safe wrappers that enforce the "no aliased mutation" rule at different points: compile time, runtime, or the hardware level.
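Neither `Cell` nor `RefCell` got a code sample above, so here's a minimal single-threaded sketch of the two non-thread-safe rows (the values are arbitrary):

```rust
use std::cell::{Cell, RefCell};

fn main() {
    // Cell: values are copied in and out — no reference to the interior ever exists.
    let counter = Cell::new(0);
    let shared: &Cell<i32> = &counter; // a *shared* reference…
    shared.set(shared.get() + 1);      // …through which we can still mutate
    assert_eq!(counter.get(), 1);

    // RefCell: hands out real references, with the borrow rules checked at runtime.
    let names = RefCell::new(vec![String::from("alice")]);
    names.borrow_mut().push(String::from("bob")); // exclusive borrow, released at end of statement
    assert_eq!(names.borrow().len(), 2);

    // Overlapping exclusive borrows panic at runtime instead of failing to compile:
    // let a = names.borrow_mut();
    // let b = names.borrow_mut(); // would panic: already mutably borrowed
}
```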
## Send and Sync — The Type System's Secret Weapons
Ever wonder how Rust actually prevents data races at compile time? Meet Send and Sync.
`Send` means: ownership of this type can be transferred to another thread. `Arc<i32>` is `Send`. `Rc<i32>` is not — because moving an `Rc` across threads would let two threads modify the counter at the same time without synchronisation.

`Sync` means: a shared reference to this type can be sent to another thread. `i32` is `Sync`. `Cell<i32>` is not — its interior mutability isn't safe for concurrent access.
Both traits are automatically derived. If all your struct's fields are Send, your struct is Send. You don't have to think about it unless you're doing something unusual.
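A sketch of what "automatically derived" looks like in practice — the `Stats` struct and the `assert_send` helper are made up for illustration:

```rust
use std::rc::Rc;
use std::sync::Arc;

// Every field is Send, so the struct is Send too —
// automatically, with no derive and no annotation.
struct Stats {
    count: u64,
    label: String,
}

// A generic function that only compiles for types that are Send.
fn assert_send<T: Send>() {}

fn main() {
    assert_send::<Stats>();    // fine: u64 and String are both Send
    assert_send::<Arc<i32>>(); // fine: Arc is Send
    // assert_send::<Rc<i32>>(); // ← uncommenting this fails to compile
    let _single_threaded = Rc::new(1); // Rc itself is fine — just not across threads
}
```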
What you do get to see is the compiler error when you get it wrong:
```
error[E0277]: `Rc<i32>` cannot be sent between threads safely
 --> src/main.rs:3:5
  |
3 |     thread::spawn(move || {
  |     ^^^^^^^^^^^^^ `Rc<i32>` cannot be sent between threads safely
```
No undefined behaviour. No race condition that only shows up in production. The mistake is caught before your code ever runs.
## Mutexes — Finally, Shared Mutation
You've got two threads, they share an `Arc`, and now you want both of them to write to it. This is where `Mutex<T>` comes in.

A mutex (short for mutual exclusion) is simple in concept: only one thread can hold the lock at a time. Everyone else waits.

What makes Rust's `Mutex<T>` special is that the data lives inside the mutex. You literally cannot access the data without holding the lock. Compare that to C++, where the mutex and the data it protects are completely separate — enforced only by convention and hope.
```rust
use std::sync::Mutex;

let n = Mutex::new(0);

let mut guard = n.lock().unwrap(); // acquire the lock
*guard += 1;                       // modify the data
// guard is dropped here → lock is automatically released
```
That's RAII at work. No .unlock() call to forget. When the guard goes out of scope, the lock releases. Always.
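Combining this with the scoped threads from earlier gives the classic demonstration — a small sketch (the thread and iteration counts are arbitrary) where ten threads increment one shared counter:

```rust
use std::sync::Mutex;
use std::thread;

fn main() {
    let n = Mutex::new(0);

    thread::scope(|s| {
        // Ten threads, each incrementing the shared counter 100 times.
        for _ in 0..10 {
            s.spawn(|| {
                for _ in 0..100 {
                    *n.lock().unwrap() += 1; // guard dropped at end of statement
                }
            });
        }
    }); // all threads joined here

    // No increments lost: the lock serialised every update.
    assert_eq!(n.into_inner().unwrap(), 1000);
}
```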
A few things to keep in mind:

- Keep locks short. Holding a lock while doing slow work (sleeping, I/O) forces every other thread to wait. That's serial execution with extra steps.
- Watch out for `if let`. Unlike plain `if`, the `if let` pattern keeps temporaries alive for the whole body — meaning the mutex stays locked longer than you expect. Drop the guard explicitly if needed.
- Lock poisoning. If a thread panics while holding a mutex, the mutex is marked "poisoned." Subsequent `.lock()` calls return an `Err`. Most code calls `.unwrap()`, which re-panics, propagating the failure. That's usually the right call.
When you have data that's read far more often than it's written, RwLock<T> is the better tool — multiple readers can hold the lock simultaneously, but a writer gets exclusive access.
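A minimal sketch of that read-heavy pattern (the config string and the thread counts are invented for illustration):

```rust
use std::sync::RwLock;
use std::thread;

fn main() {
    let config = RwLock::new(String::from("v1"));

    thread::scope(|s| {
        // Any number of readers can hold the lock at the same time.
        for _ in 0..4 {
            s.spawn(|| {
                let cfg = config.read().unwrap();
                assert!(cfg.starts_with('v')); // sees either "v1" or "v2"
            });
        }
        // A writer waits until no readers remain, then gets exclusive access.
        s.spawn(|| {
            *config.write().unwrap() = String::from("v2");
        });
    });

    // All scoped threads have joined, so the write is visible.
    assert_eq!(*config.read().unwrap(), "v2");
}
```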
## Waiting for Things to Happen
A Mutex lets threads take turns accessing data. But what if a thread needs to wait until the data is in a certain state — like a queue becoming non-empty?
The naive approach is a busy loop: lock, check, unlock, repeat. This works but wastes CPU spinning on nothing. There are two proper solutions.
### Thread Parking

For simple one-producer, one-consumer patterns, thread parking is elegant. One thread calls `thread::park()` and goes to sleep. Another thread calls `handle.thread().unpark()` to wake it up.
```rust
use std::collections::VecDeque;
use std::sync::Mutex;
use std::thread;

let queue = Mutex::new(VecDeque::new());

thread::scope(|s| {
    // Consumer thread (loops forever — this is an illustrative fragment)
    let t = s.spawn(|| loop {
        let item = queue.lock().unwrap().pop_front();
        if let Some(item) = item {
            dbg!(item);
        } else {
            thread::park(); // nothing here, go to sleep
        }
    });

    // Producer (here, the main thread)
    for i in 0.. {
        queue.lock().unwrap().push_back(i);
        t.thread().unpark(); // wake the consumer
    }
});
```
Two things worth knowing: if unpark() fires before park() is called, the wake-up is saved — the next park() returns immediately instead of sleeping. But tokens don't stack — two calls to unpark() still only save one wake-up.
Always re-check your condition after waking. Spurious wake-ups are real — park() can return without a matching unpark(). It's rare, but your code needs to handle it.
### Condition Variables
Thread parking doesn't scale. What if you have multiple consumers? You'd need to track which thread to unpark — messy.
Condvar is the general solution. Multiple threads can wait on the same condition variable, and any thread can wake one or all of them:
```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

let queue = Mutex::new(VecDeque::new());
let not_empty = Condvar::new();

// Consumer
let mut q = queue.lock().unwrap();
let item = loop {
    if let Some(item) = q.pop_front() {
        break item;
    } else {
        q = not_empty.wait(q).unwrap(); // atomically: unlock → sleep → re-lock
    }
};

// Producer (on another thread; `i` is whatever item is being produced)
queue.lock().unwrap().push_back(i);
not_empty.notify_one();
```
The key magic is `condvar.wait(guard)` — it atomically unlocks the mutex and starts waiting. There's no gap where a notification could arrive and be silently lost. This is the thing that makes it correct.
| | Thread Parking | Condvar |
|---|---|---|
| Multiple consumers | ❌ | ✅ |
| Spurious wake-ups | Possible | Possible |
| Notification before wait | Safe | Safe |
| Requires Mutex | No | Yes |
| Use when | Simple 1:1 wake-ups | General-purpose waiting |
## What You Actually Walk Away With
Here's the honest summary of Chapter 1:
Rust doesn't just give you threads — it gives you a model for thinking about what can go wrong with threads and a type system that enforces the rules automatically. Data races? Impossible in safe code. Sending a non-thread-safe type across threads? Compile error. Accessing shared data without a lock? Can't happen.
The primitives — Arc, Mutex, Condvar, Send, Sync — aren't arbitrary. They each solve a specific, concrete problem:
- `Arc<T>` — shared ownership across threads
- `Mutex<T>` — shared mutation with exclusive access
- `RwLock<T>` — shared reads, exclusive writes
- `Condvar` — waiting for a condition, not just a lock
- `Send` / `Sync` — the compile-time proof that your types are thread-safe
That's the foundation. Everything in the rest of the book — atomics, memory ordering, lock-free data structures — builds on exactly this.
Based on Atomics and Locks by Mara Bos — highly recommended if you want to understand how concurrency actually works under the hood.