hiyoyo
Rust Async Patterns in Tauri — Keeping the UI Responsive While Rust Does Heavy Work

If this is useful, a ❤️ helps others find it.

All tests run on an 8-year-old MacBook Air.

A Tauri app has two execution contexts that matter: the main thread (UI and webview) and the async runtime tokio spawns tasks on. Block the main thread and the UI freezes. Block the async executor for too long inside a command and every other command and event stalls behind you.

Here's how I keep things responsive in practice.


The basic rule

Never do blocking work directly in a #[tauri::command]. Make the command async and hand the heavy part to a blocking thread pool.

// Bad — a sync command runs on the main thread, so this freezes the UI
#[tauri::command]
pub fn compress_pdf(path: String) -> Result<(), String> {
    heavy_compression_work(&path).map_err(|e| e.to_string())?; // takes ~3 seconds
    Ok(())
}

// Good — async, non-blocking
#[tauri::command]
pub async fn compress_pdf(path: String) -> Result<(), String> {
    tokio::task::spawn_blocking(move || {
        heavy_compression_work(&path)
    })
    .await
    .map_err(|e| e.to_string())?
    .map_err(|e| e.to_string())
}

spawn_blocking moves CPU-heavy work to a dedicated thread pool, freeing the async executor for other tasks.
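One subtlety in the good version: awaiting the `JoinHandle` yields a nested `Result` — the outer layer is tokio's `JoinError` (the task panicked or was cancelled), the inner layer is the work's own error. That's why there are two `map_err` calls. A std-only sketch of the flattening, where `simulated_join` is a hypothetical stand-in for the `.await`:

```rust
use std::io;

// Stand-in for the real routine; its error type is not String,
// which is exactly why the command needs the inner map_err.
fn heavy_compression_work() -> Result<(), io::Error> {
    Ok(())
}

// Simulates awaiting the JoinHandle: the outer layer fails if the
// task itself dies, the inner layer is the work's own Result.
fn simulated_join() -> Result<Result<(), io::Error>, String> {
    Ok(heavy_compression_work())
}

fn flatten_to_command_result() -> Result<(), String> {
    simulated_join()
        .map_err(|e| e.to_string())? // outer: the task panicked/was cancelled
        .map_err(|e| e.to_string()) // inner: the work returned an error
}
```

Both layers end up as `String` because that's what the command's signature promises the frontend.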


Progress reporting during long operations

For operations that take more than a second, report progress via events:

#[tauri::command]
pub async fn batch_process(
    paths: Vec<String>,
    window: tauri::Window,
) -> Result<(), String> {
    let total = paths.len();

    for (i, path) in paths.iter().enumerate() {
        process_single(&path).await?;

        window.emit("batch-progress", serde_json::json!({
            "current": i + 1,
            "total": total,
            "percent": ((i + 1) as f64 / total as f64 * 100.0) as u32,
        })).ok();
    }

    Ok(())
}
// Frontend shows live progress
await listen('batch-progress', (event) => {
  setProgress(event.payload.percent);
});

await invoke('batch_process', { paths });

Cancellation

Users cancel long operations. Support it with a shared flag:

use std::sync::{Arc, atomic::{AtomicBool, Ordering}};

pub struct CancelToken(Arc<AtomicBool>);

impl CancelToken {
    pub fn new() -> Self { Self(Arc::new(AtomicBool::new(false))) }
    pub fn cancel(&self) { self.0.store(true, Ordering::Relaxed); }
    pub fn is_cancelled(&self) -> bool { self.0.load(Ordering::Relaxed) }
}

#[tauri::command]
pub async fn batch_process(
    paths: Vec<String>,
    cancel_token: tauri::State<'_, CancelToken>,
    window: tauri::Window,
) -> Result<(), String> {
    for (i, path) in paths.iter().enumerate() {
        if cancel_token.is_cancelled() {
            return Err("cancelled".to_string());
        }
        process_single(&path).await?;
        window.emit("batch-progress", i + 1).ok();
    }
    Ok(())
}
// Cancel button
await invoke('cancel_batch');
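`invoke('cancel_batch')` implies a small command (not shown above) whose body just calls `cancel()` on the managed `CancelToken`. Here's a std-only sketch of the token doing its job across threads; `run_batch` is a hypothetical stand-in for the command's loop, and `reset` is an addition you'll want in practice:

```rust
use std::sync::{Arc, atomic::{AtomicBool, Ordering}};
use std::thread;
use std::time::Duration;

#[derive(Clone)]
pub struct CancelToken(Arc<AtomicBool>);

impl CancelToken {
    pub fn new() -> Self { Self(Arc::new(AtomicBool::new(false))) }
    pub fn cancel(&self) { self.0.store(true, Ordering::Relaxed); }
    pub fn is_cancelled(&self) -> bool { self.0.load(Ordering::Relaxed) }
    // Without this, every batch after the first cancellation fails instantly.
    pub fn reset(&self) { self.0.store(false, Ordering::Relaxed); }
}

// Polls the token between items, like the batch loop above.
pub fn run_batch(token: &CancelToken, items: usize) -> Result<usize, String> {
    let mut done = 0;
    for _ in 0..items {
        if token.is_cancelled() {
            return Err(format!("cancelled after {done} items"));
        }
        thread::sleep(Duration::from_millis(1)); // stand-in for process_single
        done += 1;
    }
    Ok(done)
}
```

One gotcha the original code has: the flag is never cleared, so call `reset()` (or re-manage a fresh token) at the start of each batch, or every run after the first cancel returns "cancelled" immediately.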

Parallel processing with Semaphore

Process multiple files concurrently, but not all at once:

use tokio::sync::Semaphore;
use std::sync::Arc;

pub async fn process_parallel(paths: Vec<String>) -> Vec<Result<(), String>> {
    let semaphore = Arc::new(Semaphore::new(4)); // max 4 concurrent
    let mut handles = Vec::new();

    for path in paths {
        let sem = semaphore.clone();
        let handle = tokio::spawn(async move {
            let _permit = sem.acquire().await.unwrap();
            process_single(&path).await.map_err(|e| e.to_string())
        });
        handles.push(handle);
    }

    futures::future::join_all(handles)
        .await
        .into_iter()
        .map(|r| r.unwrap_or_else(|e| Err(e.to_string())))
        .collect()
}

Four concurrent transfers run well on a 2017 MacBook Air without saturating the disk or CPU.
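If you want to see the permit mechanics without pulling in tokio, the same "at most N in flight" idea can be sketched with a std channel pre-loaded with N permits. This is an illustration, not the app's code — `process_bounded` and the `item * 2` work are hypothetical:

```rust
use std::sync::mpsc;
use std::thread;

// Std-only analogue of the semaphore pattern: a channel pre-loaded with
// N permits; taking one blocks until a slot is free.
pub fn process_bounded(items: Vec<u32>, max_concurrent: usize) -> Vec<u32> {
    let (permit_tx, permit_rx) = mpsc::channel();
    for _ in 0..max_concurrent {
        permit_tx.send(()).unwrap();
    }

    let mut handles = Vec::new();
    for item in items {
        permit_rx.recv().unwrap(); // blocks until one of the N slots frees up
        let tx = permit_tx.clone();
        handles.push(thread::spawn(move || {
            let result = item * 2; // stand-in for process_single
            tx.send(()).unwrap(); // return the permit
            result
        }));
    }

    // Joining in spawn order keeps results aligned with the input order.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

tokio's `Semaphore` does the same bookkeeping for async tasks, with the advantage that waiting for a permit yields the executor instead of blocking a thread.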


Hiyoko PDF Vault → https://hiyokoko.gumroad.com/l/HiyokoPDFVault
X → @hiyoyok
