Let me tell you about how I learned to write programs that do many things at once without falling apart. It started when I tried to build a simple web server. Every new visitor would get their own thread. This worked fine with ten visitors. With a hundred, it slowed down. With a thousand, it used all my computer's memory and stopped. I needed a better way.
That better way is called asynchronous programming. Think of it like this: if you’re cooking dinner, you don’t stand and stare at the pot of water until it boils. You put it on the stove, set a timer, and while it heats up, you chop vegetables. Your attention jumps between tasks only when something needs it. Asynchronous programming lets your code work the same way.
Rust gives us tools to write code like this. The most important are two keywords: async and await. They let you write code that looks like it runs step-by-step, but can actually pause and let other tasks run while it waits for something slow, like reading from a network.
Here’s what that looks like in its simplest form.
use tokio::time::{sleep, Duration};

async fn make_tea() {
    println!("Putting the kettle on.");
    sleep(Duration::from_secs(2)).await; // Simulate waiting for water to boil
    println!("Water boiled. Making tea.");
}

#[tokio::main]
async fn main() {
    make_tea().await;
}
When you run this, you’ll see a pause between the two messages. The .await keyword is where the magic happens. It says, "This sleep function is going to take time. Instead of blocking everything and wasting seconds, pause this specific task and go work on something else until the sleep is finished."
But what is a "task," and what is this "something else" it works on? This brings us to the core ideas.
Understanding Futures and Executors
In Rust, an asynchronous function doesn’t run immediately. When you call make_tea(), it doesn’t start boiling water. Instead, it returns something called a Future. You can think of a Future as a to-do list, or a recipe card. It describes the work that needs to be done, but hasn’t started yet.
The Future just sits there until something actively works through its steps. That "something" is called an executor. The executor’s job is to manage all the pending Futures, check which ones are ready to make progress, and drive them forward. It’s like a chef in a kitchen, looking at all the recipe cards (Futures) and performing the next step on each one when possible.
In the code above, #[tokio::main] sets up a default executor. When we say .await, we hand the Future back to the executor and say, "Wake me up when this step is done." The executor can then check on other Futures.
This is powerful because one executor can manage thousands of Futures on just a few operating system threads. Creating a thread is expensive—it needs its own stack, a chunk of memory. A Future is tiny in comparison. You can have a hundred thousand Futures waiting for network data, all managed efficiently by a handful of threads.
Writing a Real Concurrent Application
Let’s build something more practical: a tiny server that can handle multiple clients at once.
use tokio::net::{TcpListener, TcpStream};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use std::error::Error;

async fn handle_connection(mut socket: TcpStream) -> Result<(), Box<dyn Error>> {
    // A small buffer to read data into
    let mut buffer = [0; 1024];

    // Read from the socket. `.await` means we yield control
    // while waiting for data to arrive.
    let bytes_read = socket.read(&mut buffer).await?;
    let request = String::from_utf8_lossy(&buffer[..bytes_read]);
    println!("Received request: {}", request);

    // A simple response
    let response = "HTTP/1.1 200 OK\r\n\r\nHello from async Rust!";

    // Write the response. `.await` again.
    socket.write_all(response.as_bytes()).await?;
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Bind a listener to an address
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    println!("Server listening on 127.0.0.1:8080");

    loop {
        // Accept a new connection. We wait asynchronously for this.
        let (socket, _) = listener.accept().await?;

        // For each new connection, spawn a new task.
        tokio::spawn(async move {
            // Handle the connection inside this new task.
            if let Err(e) = handle_connection(socket).await {
                eprintln!("Error handling connection: {}", e);
            }
        });
    }
}
Run this code and open http://127.0.0.1:8080 in your browser. You’ll see the greeting. Now, open fifty browser tabs at once. The server won’t create fifty threads. Instead, it will create fifty lightweight tasks. The executor will juggle them all, reading and writing data for each connection as it becomes ready on the network.
The key line is tokio::spawn. This takes an async block (a Future) and hands it to the executor, saying "Run this independently." The main loop can then immediately go back to listener.accept().await to wait for the next client, without being blocked by the previous one.
Why Not Just Use Threads?
You absolutely can use threads. But they come with a cost. Each thread has its own memory stack, typically around 2 MB. Ten thousand threads means 20 GB of memory just for stack space. Context switching between them is heavy work for the operating system.
An async Task in Rust might use only a few hundred bytes. Switching between tasks is incredibly fast because it’s managed by our Rust executor, not the OS kernel. This is why async Rust can handle a massive number of simultaneous connections—like a chat server with a million users—on ordinary hardware.
The safety Rust is famous for extends perfectly into this async world. The compiler’s borrow checker ensures you can’t accidentally share mutable data between tasks in a way that causes corruption. If data needs to travel between tasks, the type system guides you to use safe channels or locks designed for async.
Common Patterns: Channels and Select
Doing work concurrently often means tasks need to communicate. A classic tool for this is a channel.
use tokio::sync::mpsc; // mpsc: Multi-Producer, Single-Consumer
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // Create a bounded channel with room for 32 messages
    let (tx, mut rx) = mpsc::channel(32);

    // Spawn a task that sends messages
    tokio::spawn(async move {
        for i in 1..=5 {
            tx.send(format!("Message {}", i)).await.expect("Failed to send");
            sleep(Duration::from_secs(1)).await;
        }
        // `tx` is dropped here, which closes the channel
    });

    // The main task receives messages until the channel closes
    while let Some(message) = rx.recv().await {
        println!("Received: {}", message);
    }
    println!("All messages received.");
}
Here, one task produces messages every second, and the main task consumes them. The .await on send and recv means the producing task will yield if the channel is full, and the receiving task will yield if the channel is empty. No CPU cycles are wasted.
Another essential tool is select!. It allows a task to wait on multiple async operations and respond to the first one that completes.
use tokio::time::{sleep, timeout, Duration};

async fn fetch_from_network() -> String {
    sleep(Duration::from_secs(3)).await; // Simulate a slow network
    "Data from network".to_string()
}

async fn fetch_from_cache() -> String {
    sleep(Duration::from_secs(1)).await; // Cache is faster
    "Data from cache".to_string()
}

#[tokio::main]
async fn main() {
    let result = tokio::select! {
        // Try to get data from the network with a timeout
        network_data = timeout(Duration::from_secs(2), fetch_from_network()) => {
            network_data.unwrap_or_else(|_| "Network timeout".to_string())
        }
        // Simultaneously try to get it from cache
        cache_data = fetch_from_cache() => {
            cache_data
        }
    };
    println!("Result: {}", result); // This will likely print the cache data
}
The select! macro runs both futures concurrently. Whichever completes first, its branch runs and the other is cancelled. This is incredibly useful for implementing timeouts, failovers, or responsive user interfaces.
Dealing with Errors and Shared State
Error handling in async feels just like in regular Rust, thanks to the ? operator.
async fn fallible_operation() -> Result<String, std::io::Error> {
    // Some operation that might fail
    Ok("Success".to_string())
}

async fn my_task() -> Result<(), Box<dyn std::error::Error>> {
    let data = fallible_operation().await?; // Propagates error if it occurs
    println!("{}", data);
    Ok(())
}
Sometimes you need to share data between tasks. For this, you need async-aware locks. A normal std::sync::Mutex will block an entire thread if you await while holding it, potentially stalling all other tasks on that thread. Tokio provides a Mutex you can .await.
use tokio::sync::Mutex;
use std::sync::Arc;

#[tokio::main]
async fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(tokio::spawn(async move {
            let mut num = counter.lock().await;
            *num += 1;
        }));
    }

    // Wait for every task to finish before reading the final value
    for handle in handles {
        handle.await.unwrap();
    }

    let final_count = counter.lock().await;
    println!("Final count: {}", *final_count);
}
The lock is acquired with .await. If it’s held by another task, this task will pause and let others run, instead of wasting CPU cycles spinning.
The Journey, Not Just the Destination
Learning async Rust had a steep curve for me. The concepts of executors, reactors, and wakers felt abstract. I built small programs that failed in confusing ways. But the payoff was immense. The first time I wrote a data scraper that managed hundreds of web requests concurrently, using minimal CPU and memory, it felt like a superpower.
This model lets you build systems that are both reliable under load and economical with resources. It combines the clear, linear flow of synchronous code with the raw efficiency of event-driven programming. You write what you want to happen, and Rust’s async runtime figures out the most efficient way to interleave the waiting parts.
Start small. Write a function that uses sleep.await. Then write two tasks that run concurrently with tokio::spawn. Try adding a channel between them. Each step builds your mental model. Before long, you’ll be designing services that handle complexity not with more threads, but with smarter coordination, all checked for safety at compile time. It’s a compelling way to write software for our connected world.