# Async/.await in Rust Asynchronous Programming

`async`/`.await` is a built-in Rust language feature that lets us write asynchronous code in a synchronous style. Let's learn how to use the `async` and `.await` keywords through examples. Before we begin, we need to introduce the `futures` crate. Edit the `Cargo.toml` file and add the following content:

```toml
[dependencies]
futures = "0.3"
```
## Using `async` to Create an Asynchronous Future

Simply put, the `async` keyword can be used to create a `Future` in the following ways:

- Define a function: `async fn`
- Define a block: `async {}`

For example, an `async` function:

```rust
async fn hello_world() {
    // ...
}
```
The `async` keyword changes the function's signature so that it returns a type implementing the `Future` trait, with the body's result wrapped in that `Future`. It is roughly equivalent to:

```rust
fn hello_world() -> impl Future<Output = ()> {
    async { /* ... */ }
}
```
Note: an `async` block compiles into an anonymous type that implements the `Future` trait, built on a generator, which is essentially a state machine. When any operation inside the `async` block returns `Poll::Pending`, the generator yields, relinquishing execution. Once resumed, it continues until all the code has run, at which point the state machine enters the `Complete` state and returns `Poll::Ready`, signaling that the `Future` has finished execution.
A code block marked with `async` is converted into a state machine that implements the `Future` trait. Unlike a synchronous call, which blocks the current thread, when a `Future` encounters a blocking operation it relinquishes control of the thread, letting other `Future`s run while it waits for its result.

A `Future` needs to run on an executor. For example, `block_on` is an executor that blocks the current thread:
```rust
// block_on blocks the current thread until the given Future completes.
// This approach is simple and direct, but other runtime executors provide more
// sophisticated behavior, such as scheduling multiple futures on the same thread.
use futures::executor::block_on;

async fn hello_world() {
    println!("hello, world!");
}

fn main() {
    let future = hello_world(); // Returns a Future, so nothing is printed yet
    block_on(future); // Runs the Future to completion; "hello, world!" is printed
}
```
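To demystify what an executor does, here is a minimal sketch of a `block_on`-style executor built only on the standard library (`mini_block_on` and `ThreadWaker` are our own names; the real `futures::executor::block_on` is more sophisticated, but the park/unpark idea is the same):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the blocked thread when the future can make progress.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Poll the future in a loop; park the thread whenever it returns Pending.
fn mini_block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    let result = mini_block_on(async { 1 + 2 });
    println!("{result}"); // prints 3
}
```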
## Using `.await` to Wait for Another Asynchronous Future to Complete

In the `main` function above, we used the `block_on` executor to wait for the `Future` to complete, making the code appear synchronous. But what if you need to call one `async fn` inside another and wait for it to finish before running the code that follows? For example:
```rust
use futures::executor::block_on;

async fn hello_world() {
    // Directly calling another async function inside an async function: will this work?
    hello_cat();
    println!("hello, world!");
}

async fn hello_cat() {
    println!("hello, kitty!");
}

fn main() {
    let future = hello_world();
    block_on(future);
}
```
Here, inside the `hello_world` async function, we first call another async function, `hello_cat`, and then print `"hello, world!"`. Let's check the output:

```text
warning: unused implementer of `futures::Future` that must be used
 --> src/main.rs:6:5
  |
6 |     hello_cat();
  |     ^^^^^^^^^^^
  = note: futures do nothing unless you `.await` or poll them
...
hello, world!
```
As expected, `block_on` ran the `Future` in `main`, but the `Future` returned by `hello_cat` was never executed. Fortunately, the compiler gives a friendly warning: futures do nothing unless you `.await` or poll them.

There are two solutions:

- Use the `.await` syntax.
- Manually poll the `Future` (which is more complex, so we won't cover it here).

Let's modify the code using `.await`:
```rust
use futures::executor::block_on;

async fn hello_world() {
    hello_cat().await;
    println!("hello, world!");
}

async fn hello_cat() {
    println!("hello, kitty!");
}

fn main() {
    let future = hello_world();
    block_on(future);
}
```
After adding `.await` to the `hello_cat()` call, the output changes significantly:

```text
hello, kitty!
hello, world!
```

The output now strictly follows the code order: we achieved asynchronous execution while maintaining a sequential coding style. This approach is simple and efficient, and it eliminates callback hell.
Internally, each `.await` point repeatedly polls the inner `Future` as part of the enclosing state machine. If the poll returns `Pending`, the enclosing future yields; otherwise it takes the result, exits the loop, and continues. The logic is roughly:

```rust
// Pseudocode for what happens at an .await point
loop {
    match some_future.poll() {
        Poll::Pending => yield,
        Poll::Ready(x) => break x,
    }
}
```
In short, using `.await` inside an `async fn` lets you wait for another asynchronous call to complete. Unlike `block_on`, however, `.await` does not block the current thread: it waits for the future asynchronously, and while waiting, the thread can continue running other `Future`s, enabling concurrency.
## An Example

Consider a scenario of singing and dancing. Without `.await`, the implementation might look like this:
```rust
use futures::executor::block_on;

struct Song {
    author: String,
    name: String,
}

async fn learn_song() -> Song {
    Song {
        author: "Rick Astley".to_string(),
        name: String::from("Never Gonna Give You Up"),
    }
}

async fn sing_song(song: Song) {
    println!(
        "Performing {}'s {} ~ {}",
        song.author, song.name, "Never gonna let you down"
    );
}

async fn dance() {
    println!("Dancing along to the song");
}

fn main() {
    let song = block_on(learn_song()); // First blocking call
    block_on(sing_song(song)); // Second blocking call
    block_on(dance()); // Third blocking call
}
```
This code runs correctly but requires three consecutive blocking calls, completing one task at a time. In reality, we could sing and dance simultaneously:
```rust
use futures::executor::block_on;

struct Song {
    author: String,
    name: String,
}

async fn learn_song() -> Song {
    Song {
        author: "Rick Astley".to_string(),
        name: String::from("Never Gonna Give You Up"),
    }
}

async fn sing_song(song: Song) {
    println!(
        "Performing {}'s {} ~ {}",
        song.author, song.name, "Never gonna let you down"
    );
}

async fn dance() {
    println!("Dancing along to the song");
}

async fn learn_and_sing() {
    let song = learn_song().await;
    sing_song(song).await;
}

async fn async_main() {
    let f1 = learn_and_sing();
    let f2 = dance();
    // The join! macro runs multiple futures concurrently
    futures::join!(f1, f2);
}

fn main() {
    block_on(async_main());
}
```
Here, learning and singing must happen in order, but both can proceed alongside dancing. Without `.await`, calling `block_on(learn_song())` would block the current thread, preventing any other task, including dancing, from running.

Thus, `.await` is crucial for asynchronous programming in Rust. It allows multiple tasks to run concurrently on the same thread instead of executing sequentially.
## Conclusion

`async`/`.await` is Rust's built-in tool for writing asynchronous code that looks synchronous. `async` converts a code block into a state machine that implements the `Future` trait and must run on an executor. Instead of blocking an entire thread, a `Future` yields control, allowing other `Future`s to execute.
Key takeaways:

- A `Future` represents a task that will yield a value in the future.
- `async` creates a `Future`.
- `.await` polls a `Future`, waiting for it to complete.
- Executors (like `block_on`) manage and run `Future`s.
- Rust's `async` is zero-cost: no mandatory heap allocation or dynamic dispatch.
- Rust does not include a built-in async runtime; third-party libraries like `tokio`, `async-std`, and `smol` provide this functionality.
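The zero-cost point can be made concrete: the value returned by an `async fn` is an ordinary stack value whose size is known at compile time, with no mandatory heap allocation (a sketch; the exact byte count is compiler-dependent):

```rust
use std::mem::size_of_val;

async fn tiny() -> u8 {
    1
}

fn main() {
    // The future is a plain inline value: its size is just the state the
    // compiler-generated state machine needs.
    let fut = tiny();
    println!("future size: {} bytes", size_of_val(&fut));
    // Note: nothing has run yet; futures are lazy until polled.
}
```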
In summary, `async`/`.await` enables efficient, concurrent task execution in Rust, eliminating callback hell and making asynchronous programming intuitive.
We are Leapcell, your top choice for hosting Rust projects.