If you're building web APIs in Rust with Axum, you'll quickly hit a very relatable question:
"How do I share a database connection, a config value, or an in-memory cache across my route handlers?"
The answer involves four tools working together: State<T>, Arc<T>, Mutex<T>, and RwLock<T>. Each one does a specific job. By the end of this article, you'll understand what each one is for, when to use which, and how to combine them confidently in Axum 0.8.
Let's go step by step. 🦀
The Problem: Handlers Don't Share Memory by Default
In Axum, each request is handled by an async function. These handlers are independent — they don't automatically have access to shared resources like:
- A database connection pool
- An API key loaded from environment variables
- An in-memory counter or key-value store
You need a way to inject shared data into your handlers safely — especially when multiple requests run concurrently on different threads.
Tool #1 — Arc<T>: Shared Ownership Across Threads
Arc stands for Atomically Reference Counted. It lives in Rust's standard library (std::sync::Arc).
Rust's ownership system normally allows only one owner per value. But in a web server, multiple request handlers run at the same time and all need to read the same data. Arc<T> solves this by letting you have multiple owners of the same data. Internally it keeps a count of how many owners exist, and when the last one is dropped, the data is cleaned up.
```rust
use std::sync::Arc;

struct AppConfig {
    db_url: String,
}

let config = Arc::new(AppConfig {
    db_url: "postgres://localhost/mydb".to_string(),
});

// Both config and config_clone point to the same allocation — no data is copied
let config_clone = Arc::clone(&config);
```
💡 `Arc::clone()` is cheap — it just increments the reference count. The actual data is never duplicated.
When is plain Arc<T> enough?
When your data is read-only — like app configuration or feature flags loaded at startup. No mutations means no synchronization needed beyond Arc.
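To see the "multiple owners" idea in isolation, here's a minimal sketch outside of Axum, using plain threads and a made-up list of feature flags: every thread gets its own clone of the `Arc`, and all the clones point at the same vector.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Hypothetical read-only data loaded once at startup
    let flags = Arc::new(vec!["beta_ui".to_string(), "dark_mode".to_string()]);

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let flags = Arc::clone(&flags); // cheap: just bumps the refcount
            thread::spawn(move || println!("thread {i} sees {} flags", flags.len()))
        })
        .collect();

    for handle in handles {
        handle.join().unwrap(); // the data is freed only after the last owner drops
    }
}
```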
Tool #2 — State<T>: Axum's Dependency Injection
State<T> is Axum's built-in extractor for shared application state. An extractor is just something you put in a handler's function signature, and Axum automatically provides it.
```rust
use axum::extract::State;

async fn hello_handler(State(config): State<Arc<AppConfig>>) -> String {
    format!("Connected to: {}", config.db_url)
}
```
You attach your state to the router using .with_state(), and from then on any handler can request it via State<T>:
```rust
let app = Router::new()
    .route("/hello", get(hello_handler))
    .with_state(Arc::new(config)); // 👈 attach once, available everywhere
```
⚠️ Order matters! `State` must come before body-consuming extractors like `Json` in your handler's parameter list, because the body extractor has to be the last argument.
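For instance, a handler that takes both state and a JSON body would look like this sketch (`CreateUser` is a hypothetical payload type):

```rust
use axum::{extract::State, Json};
use serde::Deserialize;
use std::sync::Arc;

#[derive(Deserialize)]
struct CreateUser {
    name: String,
}

// ✅ State first, the body-consuming Json extractor last
async fn create_user(
    State(config): State<Arc<AppConfig>>,
    Json(payload): Json<CreateUser>,
) -> String {
    format!("Creating '{}' against {}", payload.name, config.db_url)
}
```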
Tool #3 — Mutex<T>: Exclusive Access for Writes
What if you need to mutate your shared data? Reading and writing at the same time from multiple threads is a data race — and Rust won't let you do it without synchronization.
Mutex<T> (Mutual Exclusion) is the classic solution. It acts as a lock — only one thread can hold it at a time:
```rust
use std::sync::{Arc, Mutex};

let counter = Arc::new(Mutex::new(0u64));

// To read or write, you lock() first
let mut val = counter.lock().unwrap();
*val += 1;
// lock is automatically released when `val` is dropped
```
In an Axum handler:
```rust
// Assumes: struct AppState { counter: Mutex<u64> }
async fn increment(State(state): State<Arc<AppState>>) -> String {
    let mut count = state.counter.lock().unwrap();
    *count += 1;
    format!("Count is now: {}", *count)
}
```
The key rule with std::sync::Mutex:
❌ Never hold a `std::sync::Mutex` lock across an `.await` point!
```rust
// BAD — other tasks can't proceed while this one awaits
let data = state.items.lock().unwrap();
some_async_function().await; // 🔴 still holding the lock!
```
If you genuinely need to hold a lock across an .await, use tokio::sync::Mutex instead (more on this below).
Tool #4 — RwLock<T>: Many Readers, One Writer
Mutex<T> is simple but strict — even reading blocks all other threads. For workloads that are read-heavy, this is wasteful.
Enter RwLock<T> (Read-Write Lock). It has two modes:
- Read lock (`.read()`) — many threads can hold this simultaneously
- Write lock (`.write()`) — only one thread can hold this, and it blocks all readers
```rust
use std::sync::{Arc, RwLock};

let store = Arc::new(RwLock::new(vec!["hello".to_string()]));

// Multiple readers at the same time — fine!
{
    let data = store.read().unwrap();
    println!("{:?}", *data);
} // read guard dropped here — otherwise taking the write lock below would deadlock

// One writer at a time — blocks until all readers are done
{
    let mut data = store.write().unwrap();
    data.push("world".to_string());
}
```
In an Axum handler:
```rust
// Assumes: struct AppState { store: RwLock<Vec<String>> }
async fn get_items(State(state): State<Arc<AppState>>) -> String {
    let items = state.store.read().unwrap(); // 🔒 shared read lock
    items.join(", ")
}

async fn add_item(State(state): State<Arc<AppState>>) -> String {
    let mut items = state.store.write().unwrap(); // 🔒 exclusive write lock
    items.push("new item".to_string());
    format!("Total: {}", items.len())
}
```
💡 Use `RwLock` when your data is read far more often than it's written. A good rule of thumb: if 80%+ of accesses are reads, `RwLock` wins.
std::sync vs tokio::sync — Which Should You Use?
This trips up a lot of beginners. Here's the quick answer:
| | `std::sync::Mutex` / `RwLock` | `tokio::sync::Mutex` / `RwLock` |
|---|---|---|
| Holding across `.await` | ❌ Can deadlock / causes `!Send` errors | ✅ Safe |
| Performance | ✅ Faster (OS-level) | Slightly more overhead |
| When to use | Lock is held briefly, no `.await` inside | You must hold the lock across async calls |
The practical advice: Start with std::sync::Mutex. Only reach for tokio::sync::Mutex if you need to hold the lock while awaiting something.
```rust
// std::sync version — fine as long as you drop the lock before .await
{
    let mut data = state.cache.lock().unwrap();
    data.insert("key", "value"); // quick operation
} // lock dropped here ✅
some_async_call().await;
```

```rust
// tokio::sync version — needed if you must hold the lock across await
use tokio::sync::Mutex;

let mut data = state.cache.lock().await; // async lock acquisition
do_something_async().await; // safe to hold the lock here
```
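In handler form, that might look like the following sketch, assuming `AppState` keeps its items behind a `tokio::sync::Mutex` and `fetch_remote` is a stand-in for whatever async call you need the lock across:

```rust
use axum::extract::State;
use std::sync::Arc;

struct AppState {
    items: tokio::sync::Mutex<Vec<String>>,
}

// Stand-in for an async operation (HTTP call, DB query, ...)
async fn fetch_remote() -> String {
    "fetched".to_string()
}

async fn refresh(State(state): State<Arc<AppState>>) -> String {
    let mut items = state.items.lock().await; // async lock acquisition
    let new_item = fetch_remote().await;      // still holding the lock — safe with tokio::sync
    items.push(new_item);
    format!("Now {} item(s)", items.len())
}
```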
Choosing the Right Tool
Here's a simple decision tree:
```
Is your data read-only after startup?
├── YES → Arc<T> (no lock needed)
└── NO (you need to mutate it)
    ├── Reads and writes happen equally?
    │   └── Arc<Mutex<T>>
    └── Reads are much more frequent than writes?
        └── Arc<RwLock<T>>

Do you need to hold the lock across .await points?
├── YES → tokio::sync::Mutex / tokio::sync::RwLock
└── NO  → std::sync::Mutex / std::sync::RwLock (preferred)
```
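Putting the tree into practice, a single state struct often mixes these choices field by field. Here's a sketch of what that might look like (the field names and the `DbSession` type are hypothetical):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, RwLock};

struct DbSession; // placeholder for a connection type you hold across .await

struct AppState {
    api_key: String,                        // read-only → Arc<AppState> alone is enough
    request_count: Mutex<u64>,              // balanced reads/writes → Mutex
    cache: RwLock<HashMap<String, String>>, // read-heavy → RwLock
    session: tokio::sync::Mutex<DbSession>, // held across .await → tokio's Mutex
}

fn build_state() -> Arc<AppState> {
    // One Arc around the whole struct; the locks live inside it
    Arc::new(AppState {
        api_key: "secret".to_string(),
        request_count: Mutex::new(0),
        cache: RwLock::new(HashMap::new()),
        session: tokio::sync::Mutex::new(DbSession),
    })
}
```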
A Complete Working Example
Let's build a small in-memory key-value store that demonstrates all four tools together.
Cargo.toml
```toml
[dependencies]
axum = "0.8.9"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
```
main.rs
```rust
use axum::{
    extract::{Path, State},
    routing::{get, post},
    Json, Router,
};
use std::{
    collections::HashMap,
    sync::{Arc, RwLock},
};

// Our shared application state
struct AppState {
    // RwLock because reads (GET) will far outnumber writes (POST)
    store: RwLock<HashMap<String, String>>,
}

#[tokio::main]
async fn main() {
    let shared_state = Arc::new(AppState {
        store: RwLock::new(HashMap::new()),
    });

    let app = Router::new()
        .route("/store/{key}", get(get_value))
        .route("/store", post(set_value))
        .with_state(shared_state);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    println!("🚀 Listening on http://localhost:3000");
    axum::serve(listener, app).await.unwrap();
}

// GET /store/{key} — many of these can run concurrently
async fn get_value(
    Path(key): Path<String>,
    State(state): State<Arc<AppState>>,
) -> String {
    let store = state.store.read().unwrap(); // shared read lock
    match store.get(&key) {
        Some(val) => val.clone(),
        None => format!("Key '{}' not found", key),
    }
} // read lock released here

// POST /store — exclusive write, blocks readers only for an instant
async fn set_value(
    State(state): State<Arc<AppState>>,
    Json(payload): Json<HashMap<String, String>>,
) -> String {
    let mut store = state.store.write().unwrap(); // exclusive write lock
    let count = payload.len();
    for (k, v) in payload {
        store.insert(k, v);
    }
    format!("Inserted {} key(s). Total keys: {}", count, store.len())
} // write lock released here
```
Try it out:
```bash
# Set some values
curl -X POST http://localhost:3000/store \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "lang": "Rust"}'

# Read them back (concurrent reads are safe!)
curl http://localhost:3000/store/name
curl http://localhost:3000/store/lang
```
Common Gotchas
❌ Forgetting .with_state()
If you use State<T> in a handler but forget to call .with_state() on the router, your code simply won't compile: the mismatch in the router's state type surfaces as a trait-bound/type error rather than a runtime failure. Always pair them.
❌ Using Rc<T> instead of Arc<T>
Rc<T> is single-threaded reference counting. Axum requires state to be Send + Sync because handlers run across threads, so always use Arc<T> instead.
❌ Holding std::sync::Mutex across .await
```rust
// ❌ Can deadlock, or makes your handler's future !Send — a compile error in Axum
let lock = state.data.lock().unwrap();
some_async_fn().await; // still holding the lock!
```
Drop the lock before any .await, or switch to tokio::sync::Mutex.
❌ RwLock write starvation
Under a constant stream of readers, writers can be starved; the fairness policy of std::sync::RwLock is platform-dependent. For high-contention workloads, a plain Mutex may actually perform better in practice. Profile before optimizing.
Quick Reference Cheatsheet
```rust
// Read-only state (config, flags)
Arc<AppConfig>

// Mutable state, equal reads/writes
Arc<Mutex<T>>

// Mutable state, mostly reads
Arc<RwLock<T>>

// Mutable state, must hold lock across .await
Arc<tokio::sync::Mutex<T>>
Arc<tokio::sync::RwLock<T>>

// In your handler signature:
async fn my_handler(State(state): State<Arc<AppState>>) { ... }

// Attach to router:
Router::new()
    .route("/path", get(my_handler))
    .with_state(Arc::new(app_state))
```
Summary
Here's the full picture:
- `Arc<T>` — enables multiple handlers to own the same data without copying it, safely across threads.
- `State<T>` — Axum's extractor that delivers shared state to your handlers automatically.
- `Mutex<T>` — ensures only one handler modifies mutable data at a time (exclusive lock).
- `RwLock<T>` — allows many concurrent readers or one exclusive writer, great for read-heavy data.
Together they form the idiomatic Rust pattern for safe, performant shared state in concurrent web applications. Once you internalize this, you'll reach for it naturally in every Axum project.
Happy coding!
Got questions or spotted something? Drop a comment below — I'd love to hear from you!