Aliyu Adeniji
The case for a different concurrency in Rust: the difference between Rust and JavaScript concurrency

Modern programming and software development are increasingly a contest of hardware, speed, and scalability, and Rust is built to be an envoy that brings all three together.

As a project grows, different parts of it may need to perform operations at the same time, and such projects can get caught in concurrency bottlenecks. Rust helps solve this class of problems with its concurrency and parallelism support.

In this article, you will learn how concurrency works in the Rust programming language and how it differs from concurrency in JavaScript.

What is Concurrency in Rust?

Rust allows you to write highly performant and scalable applications thanks to its memory safety and type safety. These guarantees give it a high level of concurrency: you can write Rust code that compiles and executes multiple tasks within the same timespan without the tasks interfering with one another.

Rust concurrency can't be adequately explained without mentioning some of the problems you may encounter while making your programs run concurrently. They include:

  • Deadlocks, where two threads wait on each other at the same time, preventing both from making progress (a minimal example follows this list).
  • Race conditions, where multiple threads access data in an inconsistent order.
  • Bugs that occur only under specific circumstances in the course of executing multiple threads, and so are hard to discover and fix.
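
Here is a minimal sketch of a deadlock, assuming two locks acquired in opposite order by two threads. If the timing lines up (the sleeps make that likely), the program hangs forever, which is exactly the failure mode described above:

use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let (a1, b1) = (Arc::clone(&a), Arc::clone(&b));
    let t1 = thread::spawn(move || {
        let _guard_a = a1.lock().unwrap();
        thread::sleep(Duration::from_millis(10)); // widen the race window
        let _guard_b = b1.lock().unwrap(); // waits for the other thread to release b
    });

    let t2 = thread::spawn(move || {
        let _guard_b = b.lock().unwrap();
        thread::sleep(Duration::from_millis(10));
        let _guard_a = a.lock().unwrap(); // waits for t1 to release a: deadlock
    });

    t1.join().unwrap();
    t2.join().unwrap();
    println!("only printed if the threads got lucky with timing");
}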

How does Rust implement Concurrency?

Rust ensures that once your code compiles, it is safe to run in a multithreaded environment. It implements concurrency in the following ways:

  • Multithreading
  • Message passing
  • Shared state concurrency

1. Multithreading in Rust.

This is the process of running multiple parts of a program independently by splitting them into separate threads. Rust's standard library uses a 1:1 thread model, meaning each Rust thread corresponds to one operating-system thread.
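
A minimal example of spawning and joining such a thread:

use std::thread;
use std::time::Duration;

fn main() {
    // thread::spawn creates one native OS thread (the 1:1 model).
    let handle = thread::spawn(|| {
        for i in 1..5 {
            println!("from the spawned thread: {i}");
            thread::sleep(Duration::from_millis(1));
        }
    });

    for i in 1..3 {
        println!("from the main thread: {i}");
        thread::sleep(Duration::from_millis(1));
    }

    // Block until the spawned thread finishes.
    handle.join().unwrap();
}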

Multithreading in Rust is built around two fundamental traits: Send and Sync. The Send trait indicates that ownership of a type can be safely transferred across threads. For example, most types are Send, but Rc is not. This is because Rc maintains non-atomic reference counters, which can lead to race conditions and even double-free errors if multiple threads attempt to modify them concurrently.

The Sync trait ensures that references to a type can be safely shared between threads. That is, a type T is Sync if &T can be safely sent to another thread. Together, Send + Sync mark a type as thread-safe.
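
A quick sketch of the distinction: moving an Rc into a thread is rejected at compile time, while Arc, whose reference counts are updated atomically, crosses threads freely:

// use std::rc::Rc; // uncommenting the Rc lines below will not compile
use std::sync::Arc;
use std::thread;

fn main() {
    // let local = Rc::new(vec![1, 2, 3]);
    // thread::spawn(move || println!("{local:?}")); // error: Rc<Vec<i32>> is not Send

    let shared = Arc::new(vec![1, 2, 3]); // Arc<Vec<i32>> is Send + Sync
    let for_thread = Arc::clone(&shared);

    let handle = thread::spawn(move || {
        println!("in the spawned thread: {for_thread:?}");
    });

    handle.join().unwrap();
    println!("still usable in main: {shared:?}");
}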

Rust enforces thread safety at compile time, preventing data races: Mutex ensures only one thread can access the data at a time, while Arc provides thread-safe shared ownership. In the example below, five threads each increment a shared counter:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: Arc gives shared ownership, Mutex gives exclusive access.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..5 {
        // Each thread gets its own Arc handle to the same counter.
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        }));
    }

    // Wait for all five threads to finish.
    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}

With Rust's ownership model, your code stays memory safe: values are borrowed only when needed and dropped as soon as no other part of the code uses them. In a spawned thread, the move keyword makes the closure take ownership of values from its environment, and once the spawned thread is done with those values, they are dropped.
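
A small sketch of that ownership transfer; the commented-out line shows the compile error you would get by using the value after the move:

use std::thread;

fn main() {
    let message = String::from("owned by main, then moved into the thread");

    // `move` transfers ownership of `message` into the closure.
    let handle = thread::spawn(move || {
        println!("{message}");
        // `message` is dropped here, when the closure finishes.
    });

    handle.join().unwrap();
    // println!("{message}"); // error: value moved into the spawned thread
}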

2. Message Passing to Transfer Data Across Threads.

Another concurrency approach in Rust is message passing between threads; the standard library supports this approach with its channel implementation. A channel can be thought of as a route for passing messages between threads.

A channel is made up of two distinct halves: a transmitting half and a receiving half. The two halves work together, and dropping either half closes the channel.
Let us look at how to pass data between threads using channels.

Import mpsc from the standard library. The name mpsc means multiple producer, single consumer: a channel can have multiple message senders but only a single message receiver.

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Move the receiver into the spawned thread
    let handle = thread::spawn(move || {
        match rx.recv() {
            Ok(data) => {
                println!("Received: {}", data);
            }
            Err(err) => {
                eprintln!("Error receiving: {}", err);
            }
        }
    });

    let data = 42;
    match tx.send(data) {
        Ok(()) => {
            println!("Sent: {}", data);
        }
        Err(err) => {
            eprintln!("Error sending: {}", err);
        }
    }

    // Wait for the thread to finish
    handle.join().unwrap();
}

MPSC channels are a powerful concurrency primitive that lets multiple threads safely send data to a single receiving thread. Internally, MPSC channels decouple senders from the receiver by introducing an intermediate queue. Each Sender is a cloneable handle to the channel, while the Receiver continuously pulls values. The Rust standard library provides both unbounded channels (which can grow dynamically) and bounded channels (SyncSender), which enforce backpressure when the buffer is full.

An MPSC channel also maintains a shared buffer, often implemented as a linked list or ring buffer guarded by atomic operations. Each sender appends to this buffer without needing exclusive ownership, while the single receiver consumes items in FIFO order. To coordinate access, atomic pointers and memory ordering (Acquire/Release) ensure safe concurrent operations without locks.

For example, using the standard library:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    for i in 0..3 {
        let tx_clone = tx.clone();
        thread::spawn(move || {
            tx_clone.send(i).unwrap();
        });
    }

    for received in rx.iter().take(3) {
        println!("Got: {}", received);
    }
}

Here, multiple producer threads send integers concurrently into the same channel. Internally, each send enqueues data, and the receiver dequeues in order.
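
The bounded variant mentioned above can be sketched with the standard library's sync_channel; the capacity of 2 here is an arbitrary choice for illustration:

use std::sync::mpsc;
use std::thread;

fn main() {
    // A bounded channel with room for at most 2 in-flight messages.
    let (tx, rx) = mpsc::sync_channel(2);

    let producer = thread::spawn(move || {
        for i in 0..5 {
            // Once the buffer holds 2 unread values, this send blocks
            // until the receiver catches up: that blocking is backpressure.
            tx.send(i).unwrap();
            println!("sent {i}");
        }
    });

    // The iterator ends when the sender is dropped at the end of the thread.
    for received in rx {
        println!("got {received}");
    }
    producer.join().unwrap();
}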

Libraries like Hopper extend this model by introducing memory- and disk-backed queues. Unlike SyncSender, Hopper allows precise configuration of in-memory and on-disk capacities, helping ensure no events are lost under tight resource constraints. Senders coordinate to write into multiple queue files, while the receiver reclaims disk space by removing exhausted files.

3. Shared State Concurrency

Another way to handle concurrency in a Rust project is through shared state, where many different threads can access the same data at the same time. Shared-state concurrency invites problems such as data races, which occur when multiple threads try to access the same resource at the same time; Rust prevents them with mechanisms such as Mutex.

Shared state concurrency in Rust is primarily managed through synchronization primitives like Mutex. A mutex ensures that only one thread can access a critical section of code or data at a time, preventing race conditions. Internally, Rust relies on atomic operations and memory ordering to provide these guarantees.

A Mutex supports two core operations: lock and unlock. When a thread locks a mutex, it checks whether the locked flag is free. If it is, the thread sets the flag and gains exclusive access; otherwise, it must wait. Unlocking resets the flag, making the mutex available again. Rust's atomics use acquire/release memory orderings (Ordering::Acquire, Ordering::Release, or the combined Ordering::AcqRel) to enforce memory consistency, so that no reads or writes can be reordered across a lock or unlock boundary.

A mutex allows only one thread to access a data location at a time in a concurrent environment. A thread that wants the data must first acquire the mutex's lock, which keeps track of which thread has access to the data at any point in time.

Here is a spin-lock mutex example implemented with an AtomicBool:

use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

pub struct SpinMutex {
    locked: AtomicBool,
}

impl SpinMutex {
    pub fn new() -> Self {
        Self { locked: AtomicBool::new(false) }
    }

    pub fn lock(&self) {
        // swap returns the previous value: `true` means another thread
        // already holds the lock, so spin (yielding) until it is freed.
        while self.locked.swap(true, Ordering::AcqRel) {
            thread::yield_now();
        }
    }

    pub fn unlock(&self) {
        // Release ordering publishes all writes made inside the critical
        // section to the next thread that acquires the lock.
        self.locked.store(false, Ordering::Release);
    }
}
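To see the lock actually guard data, here is a self-contained sketch that folds the same flag into a counter type. The SpinCounter name, the UnsafeCell, and the unsafe impl Sync are illustrative assumptions, standing in for the safe guard type a production mutex (like std's MutexGuard) would provide:

use std::cell::UnsafeCell;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

// A counter guarded by the same spin-lock flag as SpinMutex above.
struct SpinCounter {
    locked: AtomicBool,
    value: UnsafeCell<u64>,
}

// SAFETY (illustrative): the flag serializes all access to `value`.
unsafe impl Sync for SpinCounter {}

impl SpinCounter {
    fn new() -> Self {
        Self { locked: AtomicBool::new(false), value: UnsafeCell::new(0) }
    }

    fn increment(&self) {
        // Same acquire loop as SpinMutex::lock above.
        while self.locked.swap(true, Ordering::AcqRel) {
            thread::yield_now();
        }
        // SAFETY: the flag guarantees exclusive access between lock and unlock.
        unsafe { *self.value.get() += 1 };
        self.locked.store(false, Ordering::Release);
    }
}

fn main() {
    let counter = Arc::new(SpinCounter::new());
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    c.increment();
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // Every one of the 4 * 1000 increments survives because the spin
    // lock serialized access to the cell.
    println!("count = {}", unsafe { *counter.value.get() });
}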

Differences between Concurrency in Rust and JavaScript.

Concurrency Execution Models

Rust: Threads with Ownership Guarantees

Rust achieves true parallelism through OS-level threads, with its ownership system providing compile-time thread-safety guarantees:

  1. Thread Creation: thread::spawn creates a native OS thread; each thread gets its own stack and competes for CPU time via the OS scheduler.

  2. Data Transfer: The move keyword in closures transfers ownership of captured variables to the new thread (see the sketch after this list). This is implemented via:

    • For Copy types: a bitwise copy
    • For non-Copy types: move semantics (the original variable becomes invalid)
  3. Synchronization: Rust's Mutex<T> is a wrapper that enforces that you can only access the contained data when you hold the lock. The type system prevents accessing the data without locking.
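
A sketch of the Copy vs. non-Copy distinction mentioned in the list above:

use std::thread;

fn main() {
    let n = 42;                      // i32 is Copy
    let words = vec!["moved", "in"]; // Vec is not Copy

    let handle = thread::spawn(move || {
        // `n` was bitwise-copied into the closure; `words` was moved.
        println!("{n} {words:?}");
    });

    handle.join().unwrap();
    println!("{n}");          // fine: main still owns its own copy of `n`
    // println!("{words:?}"); // error: `words` was moved into the thread
}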

JavaScript: Event Loop with Single-Threaded Concurrency

JavaScript's concurrency model differs in the following ways:

  1. Event Loop Architecture:

    • A single call stack
    • A task queue (macro tasks such as setTimeout callbacks)
    • A microtask queue (promise callbacks)
  2. Execution Phases:

   While (queue.hasTasks()):
     1. Execute oldest macro task
     2. Execute all micro tasks
     3. Render if in browser
     4. Repeat
  3. Concurrency Implementation:
    • All concurrent operations actually run on the same thread
    • I/O operations are handled by the underlying runtime (libuv in Node.js, browser APIs in Web)
    • The illusion of concurrency comes from rapid context switching between tasks

Memory Management and Sharing

Rust's Strict Shared-State

use std::sync::{Arc, Mutex};

let data = Arc::new(Mutex::new(0)); // Atomic Reference Counting
  1. Arc (Atomic Reference Counting):

    • Thread-safe version of Rc (Reference Counting)
    • Uses atomic operations for reference count updates
    • Overhead of atomic operations but safe for cross-thread use
  2. Mutex Guard:

    • The lock() method returns a MutexGuard, which implements:
      • Deref for accessing the data
      • Drop to automatically release the lock (see the sketch after this list)
  3. Memory Safety:

    • No data races possible at compile time
    • All shared access must be explicitly synchronized
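
A short sketch of the guard's Deref and Drop behavior:

use std::sync::Mutex;

fn main() {
    let data = Mutex::new(vec![1, 2, 3]);

    {
        let mut guard = data.lock().unwrap();
        guard.push(4); // Deref/DerefMut: the guard acts like &mut Vec<i32>
    } // Drop: the lock is released at the end of this scope

    // Because the first guard was dropped, this second lock succeeds
    // instead of deadlocking.
    println!("{:?}", data.lock().unwrap());
}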

JavaScript's Garbage-Collected Sharing

  1. Garbage Collection Implications:
    • No compile-time checks for race conditions
    • Relies on runtime checks and developer discipline

Performance

Rust's Threading Model

  1. Creation Cost:

    • OS threads have non-trivial creation overhead (~1-10µs)
    • Stack allocation (Rust's default thread stack size is 2 MB)
  2. Synchronization Costs:

    • Mutex lock/unlock: ~20-100ns
    • Atomic operations: ~1-10ns
    • Channels (mpsc): ~50-200ns per message
  3. Memory Overhead:

    • Per-thread stack space
    • Synchronization primitives add small overhead
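
If you want a rough feel for these numbers yourself, an unscientific sketch like the following measures the uncontended lock/unlock path; real measurements belong in a benchmarking harness such as criterion:

use std::sync::Mutex;
use std::time::Instant;

fn main() {
    let m = Mutex::new(0u64);
    const ITERS: u128 = 1_000_000;

    let start = Instant::now();
    for _ in 0..ITERS {
        // Lock, increment, and unlock (the guard drops at the semicolon).
        *m.lock().unwrap() += 1;
    }
    let nanos = start.elapsed().as_nanos();

    println!("~{} ns per uncontended lock/unlock", nanos / ITERS);
}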

JavaScript's Event Loop

  1. Task Scheduling:

    • setTimeout(0) is clamped to a minimum 4ms delay once timeouts nest deeply enough (per the HTML spec)
    • process.nextTick() and microtasks have ~1µs overhead
  2. Worker Costs:

    • Worker creation: ~5-50ms (includes JS engine initialization)
    • Message passing: ~10-100µs per message
    • Structured clone adds serialization overhead
  3. Optimizations:

    • Hidden classes optimize property access
    • Workers reuse same V8 instance when possible

Error Handling

Rust's Compile-Time Verification

use std::thread;

fn main() {
    let result = thread::spawn(|| {
        panic!("error");
    })
    .join();

    match result {
        Ok(_) => {}
        Err(_e) => { /* the panic payload arrives here */ }
    }
}
  1. Panic Propagation:

    • Thread panics are captured via JoinHandle
    • No silent failures
  2. Compile-Time Checks:

    • Borrow checker prevents data races at compile time
    • Type system enforces proper synchronization

JavaScript's Runtime Error Handling

try {
    setTimeout(() => { throw new Error("async error"); }, 0);
} catch (e) {
    // This won't catch the error!
}
  1. Error Propagation:

    • Asynchronous errors must be handled in their callbacks
    • Unhandled promise rejections may terminate the process
  2. Debugging Challenges:

    • Stack traces may be truncated across event loop ticks
    • Race conditions only appear at runtime
    • No static analysis for shared memory access

Choosing the Right Model

Use Rust when:

  • You need true parallelism and maximum performance
  • Memory safety is critical
  • You want compile-time guarantees about thread safety
  • Low-level control over threading behavior is needed

Use JavaScript when:

  • You need high-throughput I/O operations
  • A simple concurrency model is preferred
  • Rapid development is more important than raw performance

Conclusion

Rust's approach to concurrency through its ownership and borrowing system ensures memory safety without a garbage collector. This enables fine-grained control over data access and management, leading to better-performing software.

Rust provides extensive safety guarantees for systems programming, while JavaScript offers a more simplified option for event-driven applications.
