Async Rust has a reputation for being complex. You've probably encountered intimidating terms like Send, Sync, Future, Pin, and tokio. The terminology can feel overwhelming, but the core concepts are more straightforward than they appear. Understanding async programming in Rust is crucial for building performant applications that handle I/O efficiently - whether you're writing web servers, database clients, or any program that needs to wait on external resources.
In this guide, we'll demystify async/await by building a practical example: a student transcript generator. Along the way, you'll learn what Futures really are, why traits like Send and Sync matter for thread safety, and the critical difference between concurrency and parallelism. We'll progress from blocking code to async implementations, exploring the trade-offs and design patterns that make async Rust powerful.
Here's a preview of the transformation we'll make - converting blocking, sequential code into concurrent async operations:
```rust
use std::time::Duration;

// Before: One transcript at a time (blocking)
fn generate_transcript_sync(student_id: u32) -> String {
    // Fetch courses, calculate GPA, format PDF (simulated here with a sleep)
    std::thread::sleep(Duration::from_secs(3));
    format!("Transcript for Student {}: 3.8 GPA, 120 credits", student_id)
}

// After: Many transcripts at once (async)
async fn generate_transcript_async(student_id: u32) -> String {
    tokio::time::sleep(Duration::from_secs(3)).await;
    format!("Transcript for Student {}: 3.8 GPA, 120 credits", student_id)
}
```
The Problem with Blocking Code
To understand why async matters, let's first look at what happens when we use traditional blocking code. Imagine a university system where students request their academic transcripts. Each transcript generation involves several time-consuming steps: fetching data from a database, calculating GPAs, formatting PDFs, and saving files to disk.
Let's see what happens when 3 students request transcripts using traditional blocking code:
```rust
use std::time::Instant;

fn generate_transcripts_blocking() {
    let start = Instant::now();
    let transcript1 = generate_transcript_sync(101);
    let transcript2 = generate_transcript_sync(102);
    let transcript3 = generate_transcript_sync(103);
    println!("Total time: {:?}", start.elapsed());
    // Output: Total time: ~9 seconds (3s × 3)
}
```
This code has two major problems. First, everything executes sequentially - each student must wait for all previous transcripts to finish before theirs even begins. Second, if this code ran on an application's main thread, the UI would freeze for the entire 9 seconds while the thread sits idle, waiting.
What's Really Happening During Transcript Generation?
Behind the scenes, generating a transcript involves multiple distinct operations:
- Network I/O: Call an API over the network to fetch the student's data
- CPU work: Calculate averages and totals based on that data
- File I/O: Generate a PDF in memory
- Disk I/O: Save the PDF file to storage
- More network I/O: Perhaps verify data with external services
Here's the key insight: not all of those 3 seconds represent actual computational work. Much of that time is spent waiting - waiting for network responses, waiting for disk writes, waiting for database queries. While we're waiting for one task's I/O operations to complete, we could be making progress on another task. This is where async shines.
The Async Solution
Now let's see the async version that handles all three transcripts concurrently:
```rust
async fn generate_transcripts_async() {
    let start = Instant::now();
    let (transcript1, transcript2, transcript3) = tokio::join!(
        generate_transcript_async(101),
        generate_transcript_async(102),
        generate_transcript_async(103)
    );
    println!("Total time: {:?}", start.elapsed());
    // Output: Total time: ~3 seconds (all run together!)
}
```
The transformation is dramatic: instead of 9 seconds (3 tasks × 3 seconds each), all three transcripts complete in approximately 3 seconds total. The tokio::join! macro runs all three futures concurrently, switching between them whenever one is waiting for I/O. This is concurrency in action - making progress on multiple tasks by efficiently utilizing wait time.
When to use this approach:
- I/O-bound operations (network calls, file operations, database queries)
- Applications that need to handle many simultaneous requests
- Situations where tasks spend significant time waiting rather than computing
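tokio::join! works when you can name each future up front. When the number of tasks is only known at runtime, the join_all helper from the futures crate (an extra dependency - it isn't used elsewhere in this post) does the same job over a collection. A minimal sketch, reusing generate_transcript_async from above:

```rust
use futures::future::join_all;

// Generate transcripts for a whole class concurrently: each id becomes
// a future, and join_all drives all of them at once.
async fn generate_class_transcripts(ids: Vec<u32>) -> Vec<String> {
    join_all(ids.into_iter().map(generate_transcript_async)).await
}
```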
Understanding Futures: The Foundation of Async
When you write async fn in Rust, you're not creating a function that executes immediately. Instead, you're creating a function that returns a Future - a value that represents a computation that may not be ready yet but will complete at some point in the future. This is a fundamentally different model than synchronous programming, and understanding it is crucial to working effectively with async Rust.
Futures Are Lazy
One of the most important characteristics of Futures is that they are lazy - they do absolutely nothing until you explicitly ask them to run by calling .await:
```rust
// This doesn't run immediately! Just creates the future.
let future = generate_transcript_async(101);
// At this point, nothing has happened yet. No database calls, no network requests.

// This starts execution and waits for completion.
let result = future.await;
```
This lazy evaluation is a feature, not a bug. It allows you to compose futures together, pass them around, and decide exactly when and how to execute them. It also enables powerful combinators like join!, select!, and try_join! that can coordinate multiple futures efficiently.
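For instance, inside an async fn you can build several futures before any of them runs (a sketch reusing generate_transcript_async from earlier):

```rust
// Build three futures up front; no work has started yet.
let fut_a = generate_transcript_async(101);
let fut_b = generate_transcript_async(102);
let fut_c = generate_transcript_async(103);

// Only now do all three begin executing, driven concurrently by join!.
let (a, b, c) = tokio::join!(fut_a, fut_b, fut_c);
```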
Futures as State Machines
Behind the scenes, Rust compiles your async function into a state machine that can be paused at each .await point and resumed later. This is what enables cooperative multitasking - the runtime can switch between different tasks at well-defined suspension points.
The Future trait is the core abstraction that makes this possible:
```rust
pub trait Future {
    type Output;

    // Required method
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

pub enum Poll<T> {
    Ready(T),
    Pending,
}
```
When the runtime wants to make progress on a future, it calls the poll method. The future can respond in one of two ways:
- Poll::Ready(T): The future has finished running, and here's the output value
- Poll::Pending: The future is still executing; check back later
This polling mechanism is what allows the runtime to efficiently schedule thousands of concurrent tasks. Each task is polled, makes some progress, and then either completes or yields control back to the runtime.
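To make the polling contract concrete, here's a toy hand-written future - a YieldOnce type of my own invention (not from any library) that returns Pending on its first poll and Ready on the second:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A future that yields control exactly once before completing.
struct YieldOnce {
    polled: bool,
}

impl Future for YieldOnce {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.polled {
            Poll::Ready("done")
        } else {
            self.polled = true;
            // Ask the runtime to poll us again; without this wake-up,
            // a Pending future would never be revisited.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}
```

Awaiting YieldOnce { polled: false } completes with "done" after one extra trip through the scheduler. Real futures do the same dance, just with I/O readiness instead of a boolean.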
Thread Safety: Send and Sync
Async Rust introduces an additional layer of complexity because futures often need to move between threads. The async runtime might start executing your future on one thread, pause it, and then resume it on a completely different thread. This is where the Send and Sync traits become critical.
The Send Trait: Safe to Move Between Threads
Let's explore what happens when we try to return different types from async functions. This simple example works perfectly:
```rust
struct Transcript {
    student_name: String,
    courses: Vec<String>,
    gpa: f64,
}

async fn generate_full_transcript(student_id: u32) -> Transcript {
    let name = fetch_student_name(student_id).await;
    let courses = fetch_course_history(student_id).await;
    let gpa = calculate_gpa(student_id).await;
    Transcript { student_name: name, courses, gpa }
}
```
This works because String and Vec<String> both implement Send, which means they can be safely moved between threads. The async runtime can pause this future on one thread and resume it on another without any safety issues.
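If you ever want to verify this at compile time, a generic helper with a Send bound is a common trick (my addition, not from the original example):

```rust
// A zero-cost, compile-time probe: this call only type-checks if T is Send.
fn assert_send<T: Send>() {}

fn _send_checks() {
    assert_send::<String>();
    assert_send::<Vec<String>>();
    assert_send::<Transcript>(); // the struct from above
}
```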
Now let's see what happens when we try to use Rc<String>, a reference-counted pointer that is explicitly not thread-safe:
```rust
use std::rc::Rc;

struct BadTranscript {
    student_name: Rc<String>, // ❌ Not Send!
    courses: Vec<String>,
}

async fn bad_generate_transcript(student_id: u32) -> BadTranscript {
    let name = fetch_student_name(student_id).await;
    BadTranscript { student_name: Rc::new(name), courses: vec![] }
}

// This won't compile on a multi-threaded runtime:
// tokio::spawn(bad_generate_transcript(101));
// Error: `Rc<String>` cannot be sent between threads safely
```
The compiler stops us dead in our tracks the moment we hand this future to the runtime. Why? Because Rc<String> doesn't implement Send: its reference counting is not thread-safe, and multiple threads modifying the count concurrently would cause data races. Since BadTranscript contains an Rc<String>, it doesn't implement Send either - and neither does the future that produces it - so the async runtime cannot safely move it between threads.
This is why the Send trait is crucial: it prevents thread safety bugs at compile time.
If you need a reference-counted pointer in async code, use Arc<String> (Atomically Reference Counted) instead, which implements Send because it uses atomic operations for thread-safe reference counting.
When to use Send-aware types:
- Use Arc instead of Rc for shared ownership in async code
- Use Mutex or RwLock from tokio::sync instead of std::sync for async-aware locking (see the sketch after this list)
- Most standard types like String, Vec, and primitives are automatically Send
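Here's a minimal sketch of the first two swaps together - Arc for shared ownership plus tokio::sync::Mutex for a lock that can be held across .await points (the request log is a made-up example):

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

// Shared, mutable state that many tasks can touch safely.
async fn record_request(log: Arc<Mutex<Vec<u32>>>, student_id: u32) {
    // The async-aware lock yields to the runtime while waiting,
    // instead of blocking the thread like std::sync::Mutex would.
    log.lock().await.push(student_id);
}

#[tokio::main]
async fn main() {
    let log = Arc::new(Mutex::new(Vec::new()));
    let handles: Vec<_> = [101, 102, 103]
        .into_iter()
        .map(|id| tokio::spawn(record_request(Arc::clone(&log), id)))
        .collect();
    for handle in handles {
        handle.await.unwrap();
    }
    println!("Served: {:?}", log.lock().await);
}
```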
The Sync Trait: Safe to Share References
While Send is about moving values between threads, Sync is about sharing references between threads. A type T is Sync if &T (a reference to T) is Send - meaning it's safe to share references to that type across threads.
The relationship between Send and Sync is subtler than it looks: the two traits are independent. Cell<u32> is Send but not Sync, while std::sync::MutexGuard is Sync but not Send. In everyday code, though, most types implement both or neither.
The good news is that most types automatically implement both Send and Sync. Rust's compiler derives these traits for your types when it's safe to do so. In practice, for typical async code, you'll find that things "just work" most of the time:
```rust
use std::collections::HashMap;

struct StudentDatabase {
    students: HashMap<u32, String>, // Send + Sync ✅
}

// This works - we can share references safely
async fn lookup_student(db: &StudentDatabase, id: u32) -> Option<&String> {
    db.students.get(&id)
}
```
Because HashMap<u32, String> is both Send and Sync, our StudentDatabase is too. We can safely pass references to it across async boundaries without any compiler complaints.
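Sharing it across spawned tasks works the same way: tokio::spawn requires the future to be Send + 'static, and wrapping the database in an Arc satisfies that precisely because StudentDatabase is Send + Sync. A sketch:

```rust
use std::sync::Arc;

async fn serve_lookups(db: Arc<StudentDatabase>) {
    for id in [101, 102, 103] {
        // Each task gets its own handle to the same database.
        let db = Arc::clone(&db);
        tokio::spawn(async move {
            if let Some(name) = db.students.get(&id) {
                println!("Student {}: {}", id, name);
            }
        });
    }
}
```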
The Async Runtime: Executing Futures
There's a crucial piece we haven't discussed yet: how do futures actually run? Unlike synchronous code that executes immediately when called, futures need something to drive their execution. This is the job of the async runtime.
The runtime is responsible for:
- Polling futures: Repeatedly calling poll() on futures to make progress
- Task scheduling: Deciding which futures to work on and when
- I/O management: Handling network sockets, file operations, and timers
- Thread pool management: Distributing work across CPU cores efficiently
The most popular runtime in the Rust ecosystem is Tokio, though alternatives like async-std and smol exist. Let's look at two ways to set up a runtime:
Setting Up a Tokio Runtime
First, add Tokio to your Cargo.toml:
```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
```
Now you can create a runtime in two ways:
```rust
// Approach 1: Sync main with explicit runtime
fn main() {
    tokio::runtime::Runtime::new()
        .unwrap()
        .block_on(async {
            generate_transcript_async(101).await;
        });
}
```

```rust
// Approach 2: Async main with equivalent implicit runtime
#[tokio::main]
async fn main() {
    generate_transcript_async(101).await;
}
```
Both approaches create the same runtime behind the scenes. The #[tokio::main] attribute is syntactic sugar that expands to the explicit runtime creation shown in Approach 1. Most Rust developers prefer the cleaner syntax of Approach 2.
Cooperative Multitasking: The .await Discipline
A critical aspect of async Rust is that it uses cooperative multitasking. The runtime can only switch between tasks at .await points - it cannot preemptively interrupt a running task like an operating system can with threads.
This has an important implication: if a task never yields control (never hits an .await), it can block the entire runtime. Consider this problematic code:
```rust
async fn bad_cpu_intensive_work() {
    // This blocks the entire runtime!
    for _i in 0..1_000_000_000 {
        // CPU-intensive work with no .await
    }
}
```
This task monopolizes the runtime thread, preventing other tasks from making progress. For CPU-intensive work, use threads or spawn blocking tasks with tokio::task::spawn_blocking.
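A minimal sketch of the fix, with a big summation standing in for the real CPU work:

```rust
async fn good_cpu_intensive_work() -> u64 {
    // spawn_blocking moves the closure onto Tokio's dedicated blocking
    // thread pool, so the async worker threads stay free for other tasks.
    tokio::task::spawn_blocking(|| {
        (0..1_000_000_000u64).sum::<u64>()
    })
    .await
    .expect("blocking task panicked")
}
```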
Choosing the Right Tool: Async vs Threads
One of the most common questions when learning async Rust is: "When should I use async, and when should I use threads?" The answer lies in understanding the type of work you're doing.
Use Async for I/O-Bound Work
Async shines when your code spends most of its time waiting for external resources. This includes:
```rust
// Perfect for async: Waiting for external services
async fn verify_degree() {
    let transcript = fetch_from_database().await;  // I/O bound - database query
    let verification = call_external_api().await;  // I/O bound - network call
    generate_pdf(transcript).await;                // I/O bound - file operations
}
```
Why async is ideal here:
- Tasks spend most of their time waiting for I/O
- Can handle thousands of concurrent operations efficiently
- Low memory overhead per task (compared to threads)
- Excellent for web servers, API clients, database connections
Use Threads for CPU-Bound Work
When your code is doing heavy computation rather than waiting, threads are the better choice:
```rust
// Better with threads: CPU-intensive work
fn calculate_class_gpa(grades: Vec<f64>) -> f64 {
    grades.iter().sum::<f64>() / grades.len() as f64 // CPU bound
}

// For parallel CPU work, use rayon or spawn threads:
use rayon::prelude::*;

fn process_all_students(students: Vec<Student>) -> Vec<f64> {
    students.par_iter() // Parallel iterator using a thread pool
        .map(|s| calculate_complex_gpa(s))
        .collect()
}
```
Why threads are ideal here:
- Actual CPU parallelism across multiple cores
- No need for cooperative yielding
- Better utilization of available CPU resources
- Simpler mental model for compute-heavy tasks
Decision Guide
Here's a quick reference to help you choose:
| Scenario | Use | Why |
|---|---|---|
| HTTP server handling many requests | Async | I/O-bound, need to handle thousands of concurrent connections efficiently |
| Aggregating millions of rows in memory | Threads | CPU-bound computation, benefits from true parallelism |
| File system operations (reading/writing) | Async | I/O-bound, waiting for disk operations |
| Image processing or video encoding | Threads | CPU-intensive computation requiring all available cores |
| WebSocket server with many clients | Async | I/O-bound, mostly waiting for messages over network |
| Scientific computation or data analysis | Threads | CPU-bound mathematical operations |
| Making multiple API calls simultaneously | Async | I/O-bound, waiting for network responses |
Key insight: Async provides concurrency (managing many tasks by switching between them during wait times), while threads provide parallelism (actually running multiple tasks simultaneously on different CPU cores). Choose async when you're waiting, choose threads when you're computing.
Conclusion
We've journeyed from blocking, sequential code to concurrent async operations, exploring the fundamental concepts that make async Rust powerful:
- Futures as lazy state machines: They do nothing until you .await them, enabling flexible composition and efficient execution
- The polling model: Futures return Poll::Ready when complete or Poll::Pending when they need more time, allowing the runtime to efficiently schedule work
- Thread safety through Send and Sync: These traits ensure that values can be safely moved between threads and references can be shared, preventing data races at compile time
- Async runtime orchestration: The runtime manages polling, scheduling, and I/O, providing the infrastructure that makes async code run
- Cooperative multitasking discipline: Tasks yield at .await points, requiring careful consideration to avoid blocking the runtime
Async Rust might seem intimidating at first with its unfamiliar terminology and concepts. However, each piece serves a clear purpose: ensuring memory safety and thread safety while providing excellent performance for I/O-bound operations. The type system guides you toward correct concurrent code, catching potential bugs at compile time rather than at runtime.
Remember: start with async for I/O-heavy workloads like web servers and API clients. Use threads when you need true CPU parallelism for computation. Most importantly, let the compiler be your guide - when you encounter a Send or Sync error, it's catching a real concurrency issue before it becomes a bug.
Want to get better at Rust, 1% at a time? I send out a Rust fundamentals thread every Thursday - clear, practical, beginner-friendly examples.
👉 Join the newsletter here