Let me tell you about a different way to handle multiple parts of your program trying to work on the same information at the same time. It’s a problem as old as multitasking itself. Think about an office with one shared filing cabinet. If two people try to pull out the same file at once, things get messy. Papers tear. People get confused. In computing, we call this a data race, and it can cause programs to crash or give wrong answers.
For decades, the solution has been to put a lock on the cabinet. The first person to get there takes the key, does their work, and then returns the key so someone else can have a turn. This works, but it’s slow. Everyone else stands around waiting. Worse, what if the person with the key forgets to return it? Or what if two people are each waiting for a key the other has? The whole office grinds to a halt. This is a deadlock.
In most programming languages, avoiding these disasters is entirely up to the programmer. You must remember to lock the cabinet every single time, in the exact right order. The computer trusts you to do it correctly. It’s like working in an office where the only rule is "don't cause chaos," but no one is checking if you follow it until everything breaks in a strange, hard-to-repeat way.
Rust approaches this problem from a completely different angle. Instead of trusting you to follow the rules, it builds the rules into the very structure of the language. The compiler acts like a meticulous office manager who designs the workspace so that many of these classic problems are simply impossible to create by accident.
The secret lies in Rust's rules of ownership and borrowing. In Rust, a piece of data has one clear owner at any given time. You can allow others to look at it (borrow it immutably) or let one other person change it (borrow it mutably), but you can never have two people able to change it at the same time. The compiler enforces this strictly. If you try to write code that would allow that simultaneous mutation, your program won't even compile.
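A minimal sketch of those rules in action (the variable names here are invented for illustration); the commented lines show exactly what the compiler refuses:

```rust
fn main() {
    let mut count = 0;

    // Any number of immutable borrows may coexist.
    let a = &count;
    let b = &count;
    println!("readers see {} and {}", a, b);

    // Exactly one mutable borrow, and no immutable borrows
    // may still be alive while it is in use.
    let writer = &mut count;
    *writer += 1;

    // let reader = &count; // ERROR if uncommented: cannot borrow `count`
    // *writer += 1;        // as immutable while `writer` is still in use.

    assert_eq!(count, 1);
}
```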
This might sound restrictive for concurrency. If only one thread can modify data at a time, how is that helpful? The brilliance is in the tools Rust gives you to move ownership between threads or to wrap data in protective shells that can be safely shared.
Let's talk about two special concepts: Send and Sync. These are markers, like labels the compiler uses.
A type is Send if it is safe to transfer its ownership from one thread to another. Most types are: simple integers, strings, and structs made of Send types can all be packed up and sent to another thread to own and operate on.

A type is Sync if it is safe to share references to it between multiple threads. This is the trickier one. A type is Sync if you can have multiple &T references to it alive in different threads at once. Immutable references (&T) are generally fine. The compiler automatically figures out which of your types are Send and Sync based on their contents.
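To see these markers in practice, compare Rc (the standard library's single-threaded reference count) with Arc. This is a minimal sketch; the commented line is the one the compiler rejects:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc<Vec<i32>> is Send, so a handle may cross a thread boundary.
    let shared = Arc::new(vec![1, 2, 3]);
    let handle_copy = Arc::clone(&shared);
    let t = thread::spawn(move || handle_copy.iter().sum::<i32>());
    assert_eq!(t.join().unwrap(), 6);

    // Rc uses a non-atomic counter, so it is neither Send nor Sync.
    let local = Rc::new(vec![1, 2, 3]);
    // thread::spawn(move || local.len()); // ERROR: `Rc<Vec<i32>>` cannot
    //                                     // be sent between threads safely
    assert_eq!(local.len(), 3);
}
```

You didn't annotate anything: the compiler derived Send for Arc and rejected Rc purely from how each type is built.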
Here’s the key: if your type uses something that isn't thread-safe, the compiler will see that and refuse to let you share it across threads. You are stopped at your desk, before the program ever runs, and told, "The way you've built this filing cabinet won't work in a shared office." You must use the right tools.
The most common tool is a combination of Arc<T> and Mutex<T>. This sounds complex, but it's straightforward.
Arc<T> stands for "Atomically Reference Counted." Think of it as a sturdy, thread-safe box. You can put your data T inside this box. The box knows how many people are currently holding a handle to it. You can clone the handle (Arc::clone), passing new handles to other threads. The data inside the box will only be cleaned up when the last handle is gone. It manages the lifetime safely across threads.

Mutex<T> stands for "Mutual Exclusion." It's the lock for the cabinet, but a smart one. You wrap your data in it: Mutex<MyData>. To read or write the data inside, a thread must ask the mutex for the lock. If the lock is available, the mutex gives the thread a special guard (a sort of temporary key) that allows access to the inner data. When the guard goes out of scope, the lock is automatically released. No one can forget to unlock it.
Let's build a shared counter, the classic example. Multiple threads will try to add to the same number.
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Wrap our counter in a Mutex, then put that inside an Arc.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![]; // A list to keep track of our threads.

    for _ in 0..10 {
        // Clone the Arc handle. This doesn't clone the data,
        // just the pointer to the shared box.
        let counter = Arc::clone(&counter);

        // Spawn a new thread, moving the cloned handle into it.
        let handle = thread::spawn(move || {
            // Inside the thread, lock the mutex.
            // `lock()` returns a Result. We use `unwrap()` here for simplicity.
            // The `num` variable is our guard.
            let mut num = counter.lock().unwrap();

            // We can now modify the data.
            *num += 1;

            // The guard `num` goes out of scope here,
            // releasing the lock automatically.
        });
        handles.push(handle);
    }

    // Wait for all threads to finish.
    for handle in handles {
        handle.join().unwrap();
    }

    // One final lock to read the result.
    let final_count = *counter.lock().unwrap();
    println!("Final count: {}", final_count); // This will print 10.
}
Look at what we didn't have to do. We didn't have to remember to unlock the mutex. We didn't have to initialize any special threading library. The structure of the code and the types (Arc<Mutex<i32>>) force us to handle sharing correctly. The compiler would not let ten threads share a plain Mutex<i32>: moving it into the first closure would leave nothing for the others, and a borrowed reference can't satisfy the 'static lifetime that thread::spawn requires. We had to put it in the thread-safe box, Arc, which gives each thread its own owned handle to the same data.
This is what "thread safety by construction" means. By using the right type, you have constructed a program that is safe. The safety is baked into the design, verified by the compiler.
Sometimes, a full lock is too heavy. For very simple operations, like adding one to a number, modern CPUs provide atomic operations: instructions that perform the whole update in a single, uninterruptible step. Rust exposes these as types like AtomicUsize and AtomicI32. You can use these directly inside an Arc, without a Mutex, for specific operations.
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let atomic_counter = Arc::new(AtomicUsize::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&atomic_counter);
        let handle = thread::spawn(move || {
            // This `fetch_add` is a single, atomic CPU operation.
            // No lock is needed.
            counter.fetch_add(1, Ordering::SeqCst);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final count: {}", atomic_counter.load(Ordering::SeqCst));
}
The Ordering::SeqCst is about memory ordering—it's the strongest guarantee, ensuring all threads see operations in a consistent order. It's a detail you need to learn for lock-free code, but even here, the type system guides you. You can't perform an atomic operation without specifying an ordering, making you think about the consequences.
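For a standalone counter like this one, where no other memory depends on the order of the increments, the weaker Ordering::Relaxed is also correct and is common in practice; SeqCst remains the safe default when in doubt. A minimal sketch of the same counter with relaxed ordering:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let hits = Arc::new(AtomicUsize::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let hits = Arc::clone(&hits);
        handles.push(thread::spawn(move || {
            // Relaxed guarantees the increment itself is atomic, but
            // promises nothing about ordering relative to other memory.
            hits.fetch_add(1, Ordering::Relaxed);
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    assert_eq!(hits.load(Ordering::Relaxed), 10);
}
```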
This approach changes how you think. In other languages, you write your logic first and then, as an afterthought, ask "oh, is this thread-safe?" and sprinkle locks around. In Rust, the very first question is, "What type will I use to share this?" The choice of Arc<Mutex<T>> versus Arc<AtomicI32> versus a channel is a fundamental architectural decision you make upfront. The compiler then becomes your relentless partner, checking every single interaction with that data across every thread boundary.
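Channels, the third option mentioned above, sidestep shared state entirely by moving ownership of values between threads. A minimal sketch using the standard library's mpsc channel (the worker count and message values here are arbitrary):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    for worker_id in 0..4 {
        // Each worker gets its own clone of the sending half.
        let tx = tx.clone();
        thread::spawn(move || {
            // Ownership of the value moves through the channel;
            // no lock is ever taken.
            tx.send(worker_id * 10).unwrap();
        });
    }
    drop(tx); // Drop the original sender so the channel can close.

    // The iterator ends once every sender is gone.
    let total: i32 = rx.iter().sum();
    println!("Sum of all messages: {}", total); // 0 + 10 + 20 + 30 = 60
}
```

Because send takes its argument by value, a thread that has handed data off can no longer touch it, and the compiler enforces that too.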
Does this eliminate all concurrency bugs? No. You can still create logical deadlocks (Thread A needs Resource X and Y, Thread B needs Y and X). But it eliminates an entire category of the nastiest, most random-seeming bugs: data races. The memory cannot be accessed in an undefined way by two threads at once. That guarantee is provided before you ever run the program.
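The classic defense against that logical deadlock is a fixed global lock order: every thread acquires X before Y, never the reverse. A minimal sketch (the two resources here are just named integers, invented for illustration):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let x = Arc::new(Mutex::new(1));
    let y = Arc::new(Mutex::new(2));

    let mut handles = vec![];
    for _ in 0..2 {
        let (x, y) = (Arc::clone(&x), Arc::clone(&y));
        handles.push(thread::spawn(move || {
            // Both threads follow the same order: x first, then y.
            // Acquiring in opposite orders is what risks deadlock.
            let mut a = x.lock().unwrap();
            let mut b = y.lock().unwrap();
            *a += 1;
            *b += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    println!("x = {}, y = {}", *x.lock().unwrap(), *y.lock().unwrap());
    // Prints: x = 3, y = 4
}
```

Rust's type system cannot check this ordering discipline for you; it remains a design convention you must maintain yourself.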
It feels restrictive at first. The compiler says "no" a lot. But this restriction is the source of its power. It forces you into a design that is robust from the ground up. Once you get used to it, writing concurrent code becomes less of a terrifying gamble and more of a structured engineering task. You spend less time debugging bizarre, timing-related crashes and more time thinking about the actual flow of data and work. The office manager's strict rules mean everyone can work efficiently without fear of the filing cabinet exploding.