Sumana
Actix Web vs Poem Framework

Modern web frameworks face a common challenge: sharing resources efficiently when handling multiple requests at the same time. Both Actix Web and Poem use Rust’s Arc (a smart pointer for thread-safe sharing), but they go about it in very different ways. Here’s a quick dive into how each one uses Arc and multithreading to manage concurrent request handling.

Understanding Arc in Web Server Context

Web servers spin up multiple worker threads to handle requests at the same time. These threads often need access to shared resources, like DB connections, app state, or configs. That’s where Arc comes in. It allows safe shared ownership across threads using atomic reference counting, and it’s a core piece of how Rust handles concurrency in web apps.

// Arc enables this pattern:
let shared_data = Arc::new(expensive_resource);
let clone1 = Arc::clone(&shared_data); // Thread 1 gets this
let clone2 = Arc::clone(&shared_data); // Thread 2 gets this  
let clone3 = Arc::clone(&shared_data); // Thread 3 gets this
// All threads share the same underlying data

Actix Web: Hidden Arc with Connection Pooling

Actix Web abstracts Arc usage behind its web::Data wrapper, implementing a connection pooling strategy for maximum concurrency.


use actix_web::{web, App, HttpServer};
use sqlx::PgPool;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let pool = create_pool().await.expect("failed to create pool");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone())) // Arc created internally
            .route("/todos", web::get().to(list_todos_handler))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}

async fn list_todos_handler(
    pool: web::Data<PgPool>,
) -> actix_web::Result<web::Json<Vec<Todo>>> {
    let todos = sqlx::query_as!(Todo, "SELECT * FROM todos")
        .fetch_all(pool.get_ref()) // Extract the pool from the Arc
        .await
        .map_err(actix_web::error::ErrorInternalServerError)?;
    Ok(web::Json(todos))
}

Actix Web Thread Distribution

┌──────────────────┐   ┌──────────────────┐   ┌──────────────────┐
│  WORKER THREAD 1 │   │  WORKER THREAD 2 │   │  WORKER THREAD 3 │
│ ┌──────────────┐ │   │ ┌──────────────┐ │   │ ┌──────────────┐ │
│ │ web::Data<T> │ │   │ │ web::Data<T> │ │   │ │ web::Data<T> │ │
│ │ (Arc wrapper)│ │   │ │ (Arc wrapper)│ │   │ │ (Arc wrapper)│ │
│ └──────┬───────┘ │   │ └──────┬───────┘ │   │ └──────┬───────┘ │
└────────┼─────────┘   └────────┼─────────┘   └────────┼─────────┘
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                │
                                ▼
                   ┌─────────────────────────┐
                   │    SHARED HEAP DATA     │
                   │       Arc<PgPool>       │
                   │  ┌───────────────────┐  │
                   │  │   ref_count: 3    │  │
                   │  └───────────────────┘  │
                   │  ┌───────────────────┐  │
                   │  │      PgPool       │  │
                   │  │ (already thread-  │  │
                   │  │  safe internally) │  │
                   │  └───────────────────┘  │
                   └─────────────────────────┘

Poem: Explicit Arc with Shared State

Poem takes an explicit approach, requiring developers to manually wrap shared state in Arc<Mutex<T>> for thread-safe access to a centralized store.

use std::sync::{Arc, Mutex};

use poem::{
    get, handler, post,
    listener::TcpListener,
    web::{Data, Json, Path},
    Route, Server,
};

#[tokio::main(flavor = "multi_thread")]
async fn main() -> Result<(), std::io::Error> {
    let store = Arc::new(Mutex::new(Store::new().unwrap()));

    let app = Route::new()
        .at("/website/:website_id", get(get_website))
        .at("/website", post(create_website))
        .data(store);

    Server::new(TcpListener::bind("0.0.0.0:8080"))
        .run(app)
        .await
}

#[handler]
pub fn get_website(
    Path(id): Path<String>,
    Data(store): Data<&Arc<Mutex<Store>>>,
) -> Json<GetWebsiteOutput> {
    // Recover the data from a poisoned lock instead of panicking.
    let mut locked = store.lock().unwrap_or_else(|poisoned| poisoned.into_inner());
    let website = locked.get_website(id).unwrap();
    Json(GetWebsiteOutput { url: website.url })
}

Poem Arc Wrapping Pattern

┌─────────────────────────────────────────┐
│ Arc wrapper: sharing across threads     │
│ ┌─────────────────────────────────────┐ │
│ │ ref_count: 3 (cloned to 3 threads)  │ │
│ └─────────────────────────────────────┘ │
│ ┌─────────────────────────────────────┐ │
│ │ Mutex wrapper: thread-safe mutation │ │
│ │ ┌─────────────────────────────────┐ │ │
│ │ │ lock_state: locked/unlocked     │ │ │
│ │ └─────────────────────────────────┘ │ │
│ │ ┌─────────────────────────────────┐ │ │
│ │ │ Store data (websites, etc.):    │ │ │
│ │ │ your actual mutable data        │ │ │
│ │ └─────────────────────────────────┘ │ │
│ └─────────────────────────────────────┘ │
└─────────────────────────────────────────┘

Framework Philosophy: Actix vs Poem

Actix Web leans into a stateless, connection-pooled setup. It hides Arc under abstractions like web::Data::new(), so you don’t have to worry about it. Each handler can independently borrow a DB connection, enabling real parallelism with zero fuss.

On the other hand, Poem gives you full control. You’ll often use patterns like Arc<Mutex<T>>, making shared state highly visible. It keeps app state in memory, which is great for fast access to cached data, but you’ll need to manage those concurrency primitives yourself.

Both tackle shared resource access across threads, but they come from different mindsets: abstraction vs control, parallelism vs consistency, and external vs internal state. Knowing how they handle Arc helps a lot when scaling web apps in Rust.

🚀 Want to see this in action? Check out the complete implementations:
