chh-itt
Reactive Programming Doesn't Need a Framework — Ownership Is Enough


How Auralis, a two-crate Rust kernel, reduces reactive programming to things Rust programmers already know.


I spent the last few months building Auralis, a reactive kernel for Rust. Two crates, zero unsafe code; the signal crate has zero dependencies. It's not a web framework — no view! macro, no DOM, no hydration. Just reactive primitives you pair with whatever you already have.

The core idea fits in one sentence:

Reactive = pausable async tasks. Lifecycle = ownership + structured concurrency.

Everything else follows from this. Let me walk through the design decisions that make it work.

Problem 1: Signal::set() can't call subscribers directly

In most reactive systems, changing a signal's value immediately notifies everyone watching it. This seems natural, but it creates a cascade of problems:

A subscriber callback calls set() on another signal → that signal's callback calls set() on the first signal → infinite loop. Or a subscriber tries to subscribe to a new signal during a callback → the subscriber list is being iterated → undefined behavior. Or a subscriber drops itself during a callback → the list iterator points to freed memory.

Every reactive system eventually invents mechanisms to deal with these: pending_unsubscriptions lists, traversal_depth counters, dirty_subscribers sets.

Auralis's answer: don't call subscribers synchronously.

// Signal::set does NOT call callbacks.
// It pushes a notification closure to the executor's deferred queue.
// The executor drains this queue at the start of the next flush.
pub fn set(&self, val: T) {
    let mut state = self.state.borrow_mut();
    state.value = val;
    state.version = state.version.wrapping_add(1); // monotonic version
    let subs = Self::prepare_notification(&mut state);
    drop(state);
    if let Some(subs) = subs {
        Self::schedule_notification(&self.state, subs);
    }
}

The notification closure is a simple Box<dyn FnOnce()>. It captures a snapshot of the subscriber list at the time set() was called. When the executor drains the deferred queue (during the next flush, before polling any tasks), it fires each notification. If a subscriber was added after the snapshot, it won't be called for this change. If a subscriber unsubscribes before the notification fires, an alive flag prevents the callback from running.
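The alive-flag mechanism can be sketched in a few lines. This is a hypothetical simplification (the names `Subscription`, `schedule`, and `demo` are illustrative, not Auralis's real internals), but it shows why a subscriber that unsubscribes between `set()` and the flush is never called:

```rust
use std::cell::Cell;
use std::rc::Rc;

// Each subscription carries a shared `alive` flag. Unsubscribing flips
// it, and the deferred notification re-checks it before running the
// callback, so a subscriber dropped mid-flight is never invoked.
struct Subscription {
    alive: Rc<Cell<bool>>,
    callback: Rc<dyn Fn()>,
}

// Snapshot the subscription at set() time; liveness is checked at flush time.
fn schedule(sub: &Subscription, queue: &mut Vec<Box<dyn FnOnce()>>) {
    let alive = sub.alive.clone();
    let cb = sub.callback.clone();
    queue.push(Box::new(move || {
        if alive.get() {
            cb();
        }
    }));
}

fn demo() -> u32 {
    let fired = Rc::new(Cell::new(0u32));
    let f = fired.clone();
    let sub = Subscription {
        alive: Rc::new(Cell::new(true)),
        callback: Rc::new(move || f.set(f.get() + 1)),
    };

    let mut queue: Vec<Box<dyn FnOnce()>> = Vec::new();
    schedule(&sub, &mut queue); // notification queued at set() time
    sub.alive.set(false);       // subscriber unsubscribes before the flush
    for n in queue.drain(..) {
        n(); // flush: the alive check suppresses the callback
    }
    fired.get()
}

fn main() {
    assert_eq!(demo(), 0); // callback never ran
}
```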

This eliminates re-entrancy entirely. No depth counters, no pending-lists, no edge cases. The cost is one microtask of latency — negligible for any actual use case.

The state machine for a single signal notification looks like this:

set() → prepare_notification → schedule_notification
         ↓ (already notifying?)    ↓ (inside batch?)
         dirty=true, return        buffer, flush at batch end
                                   ↓
                              executor flush
                                   ↓
                              call subscribers
                                   ↓ (re-entrant set during callback?)
                              dirty=true, schedule follow-up

Problem 2: Cancelling background work is hard. Dropping a scope is easy.

Every Rust UI eventually hits the same problem: the user navigates away, but the HTTP request is still in flight. The component is unmounted, but the timer is still ticking. You need cancellation tokens, AbortHandles, or manual Arc<AtomicBool> flags scattered everywhere.

Auralis replaces all of that with TaskScope. When a scope is dropped, all tasks spawned inside it — and all descendant scopes — are cancelled. No tokens, no handles, no manual cleanup.

let scope = TaskScope::with_executor(&ex);

// Spawn a task that listens to a signal.
scope.spawn(async move {
    loop {
        let val = count.changed().await;
        render(val);
    }
});

// Spawn another task that runs a timer.
scope.spawn(async move {
    timer::sleep(Duration::from_secs(5)).await;
    cleanup();
});

// Later, when the UI component is unmounted:
drop(scope);
// Both tasks are cancelled. No tokens, no handles.

The cancellation itself is iterative, not recursive. TaskScope::drop does a BFS traversal of the scope tree, collects all descendants, then cancels them leaf-to-root. This means a 200-level nested UI tree drops without blowing the stack — no recursion-depth workarounds needed.
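The traversal can be sketched like this (a simplified stand-in; TaskScope's real tree holds tasks and executor handles rather than plain names):

```rust
// Simplified stand-in for the scope tree: each node owns its children.
struct Scope {
    name: &'static str,
    children: Vec<Scope>,
}

// Collect every descendant breadth-first, then cancel in reverse
// (leaf-to-root). No recursion, so tree depth cannot overflow the stack.
fn cancel_order(root: &Scope) -> Vec<&'static str> {
    let mut order: Vec<&Scope> = vec![root];
    let mut i = 0;
    while i < order.len() {
        let node = order[i];
        i += 1;
        for child in &node.children {
            order.push(child);
        }
    }
    order.iter().rev().map(|s| s.name).collect()
}

fn demo() -> Vec<&'static str> {
    // A 3-level tree: root -> mid -> leaf
    let root = Scope {
        name: "root",
        children: vec![Scope {
            name: "mid",
            children: vec![Scope { name: "leaf", children: vec![] }],
        }],
    };
    cancel_order(&root)
}

fn main() {
    assert_eq!(demo(), vec!["leaf", "mid", "root"]); // leaves cancelled first
}
```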

The cancellation uses the same Rc<RefCell<TaskScopeInner>> ownership pattern as Memo. When the last TaskScope clone is dropped (Rc::strong_count == 1), the cancellation runs. Temporary clones — from find_scope during executor flush, from with_current_scope during spawn — are harmless: they share the same Rc and don't trigger cancellation. This is a bug we actually found and fixed during development — a temporary clone was setting cancelled = true on the shared inner, and subsequent spawns silently returned.
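The strong-count guard reduces to a few lines. This is a hypothetical sketch (the names `Inner`, `ScopeHandle`, and `demo` are illustrative), but it shows why a temporary clone's Drop is harmless while the last owner's Drop cancels:

```rust
use std::cell::{Cell, RefCell};
use std::rc::Rc;

// Shared inner state; the Cell flag lets the demo observe cancellation
// after the handle itself is gone.
struct Inner {
    cancelled: Rc<Cell<bool>>,
}

struct ScopeHandle {
    inner: Rc<RefCell<Inner>>,
}

impl Drop for ScopeHandle {
    fn drop(&mut self) {
        // Only the LAST clone triggers cancellation. A temporary clone
        // drops with strong_count > 1 and does nothing.
        if Rc::strong_count(&self.inner) == 1 {
            self.inner.borrow().cancelled.set(true);
        }
    }
}

fn demo() -> (bool, bool) {
    let flag = Rc::new(Cell::new(false));
    let scope = ScopeHandle {
        inner: Rc::new(RefCell::new(Inner { cancelled: flag.clone() })),
    };

    // A temporary clone (e.g. from find_scope during a flush) drops harmlessly:
    drop(ScopeHandle { inner: scope.inner.clone() });
    let after_tmp = flag.get(); // still false: strong_count was 2

    drop(scope); // last owner gone: cancellation fires
    (after_tmp, flag.get())
}

fn main() {
    assert_eq!(demo(), (false, true));
}
```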

Problem 3: Derived state goes stale when you forget to update it

In immediate-mode GUIs (like egui), everything is recomputed every frame. This is simple but wasteful. In retained-mode UIs, derived state must be manually updated when dependencies change. This is efficient but error-prone — one missed update and the UI shows stale data.

Memo solves this by being both lazy and automatic. You give it a closure. It runs the closure once, discovers which signals were read, and subscribes to them. When a source signal changes, the memo just flips a dirty flag. The actual recomputation is deferred until someone calls memo.read().

let raw_data = Signal::new(load_large_dataset());
let filter_params = Signal::new(FilterParams::default());

// Memo auto-tracks which signals it reads.
let filtered = memo!(raw_data, filter_params =>
    filter(&raw_data.read(), &filter_params.read())
);

let aggregated = memo!(filtered =>
    aggregate(&filtered.read())
);

let formatted = memo!(aggregated =>
    format_results(&aggregated.read())
);

// Cache hit when params haven't changed: < 0.01 ms
// Cache miss when params changed: ~14 ms (500K records)

The memo! macro handles the boilerplate of cloning signals before the move closure:

// Without macro:
let f = Memo::new({ let a = a.clone(); let b = b.clone(); move || a.read() + b.read() });

// With macro:
let f = memo!(a, b => a.read() + b.read());

We benchmarked this against two alternatives on a 500K-record 3-stage pipeline:

Change rate         Recompute every frame   Manual cache   Auralis Memo
1% (typical UI)     ~14 ms/fr               0.14 ms/fr     0.16 ms/fr
10%                 ~14 ms/fr               1.45 ms/fr     1.78 ms/fr
50% (worst case)    ~14 ms/fr               7.33 ms/fr     9.09 ms/fr

At realistic UI change rates (< 1%), Memo provides ~90x speedup. Cache-hit cost is literally a Cell::get + Rc deref — under 10 microseconds.

The 50% row is the interesting one. Memo has ~24% overhead vs. a hand-written cache (9.09 vs 7.33 ms/fr). This is the cost of not writing invalidation logic by hand. A hand-written cache for a 3-stage pipeline needs ~20 lines of if filter_changed || agg_changed || fmt_changed cascade. Add a stage, you add 2–3 more condition checks. Miss one, you get stale data.

With Memo, adding a stage is one line: memo!(prev_stage => compute(&prev_stage.read())). The dependency graph is maintained automatically.
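The dirty-flag-plus-lazy-read core fits in a small sketch. This is a hypothetical minimal cache (Auralis's Memo additionally auto-tracks dependencies and subscribes itself; none of these names are its real API):

```rust
use std::cell::{Cell, RefCell};

// Minimal dirty-flag cache: a source change only flips a flag;
// recomputation happens lazily, on the next read.
struct LazyMemo<T> {
    dirty: Cell<bool>,
    value: RefCell<Option<T>>,
    compute: Box<dyn Fn() -> T>,
    recomputes: Cell<u32>, // instrumentation for the demo
}

impl<T: Clone> LazyMemo<T> {
    fn new(compute: impl Fn() -> T + 'static) -> Self {
        Self {
            dirty: Cell::new(true),
            value: RefCell::new(None),
            compute: Box::new(compute),
            recomputes: Cell::new(0),
        }
    }

    // Called by a source signal's deferred notification: cheap, no recompute.
    fn invalidate(&self) {
        self.dirty.set(true);
    }

    // Cache hit is just a Cell::get plus cloning the stored value.
    fn read(&self) -> T {
        if self.dirty.get() {
            *self.value.borrow_mut() = Some((self.compute)());
            self.recomputes.set(self.recomputes.get() + 1);
            self.dirty.set(false);
        }
        self.value.borrow().clone().unwrap()
    }
}

fn demo() -> u32 {
    let memo = LazyMemo::new(|| 2 + 2);
    memo.read();       // miss: computes
    memo.read();       // hit: no recompute
    memo.invalidate(); // source changed: flag only
    memo.read();       // miss: recomputes
    memo.recomputes.get()
}

fn main() {
    assert_eq!(demo(), 2); // computed exactly twice across four events
}
```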

Problem 4: Arc<Mutex<T>> is wasteful when you're single-threaded

Most Rust reactive libraries reach for Arc<Mutex<T>> so they can be Send + Sync. On single-threaded Wasm — which is the primary target for many Rust UI frameworks — atomics are pure overhead. Every lock() is effectively a no-op, but you pay the code size and mental overhead anyway.

Auralis goes the other way: Signal<T> and TaskScope are deliberately !Send + !Sync. They use Rc<RefCell<T>> throughout — zero atomics, zero lock overhead. The Wasm binary for a minimal reactive counter compiles to a few tens of KB.

For multi-threaded scenarios, the escape hatch is instance isolation, not shared mutable state:

// Thread 1
let ex1 = Executor::new_instance();
let scope = TaskScope::with_executor(&ex1);
// scope holds a strong reference to ex1

// Thread 2 — completely independent
let ex2 = Executor::new_instance();
let scope2 = TaskScope::with_executor(&ex2);

// Communication between threads: channels, not shared signals.
// See examples/multi_thread_bridge.rs for a 6-line pattern.

This mirrors tokio's single-threaded runtime model: each executor is a self-contained event loop. Tasks in different executors don't interact. Cross-executor communication uses channels.
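As a minimal sketch of the channel pattern (plain values stand in for signals here; in a real app the receiving thread would call signal.set(v) inside its own executor):

```rust
use std::sync::mpsc;
use std::thread;

// Cross-executor communication: each thread owns its event loop;
// values cross the boundary via a channel, never via a shared Signal.
fn demo() -> i32 {
    let (tx, rx) = mpsc::channel();

    // "Thread 2": an independent worker with its own (omitted) executor.
    let worker = thread::spawn(move || {
        tx.send(42).unwrap();
    });

    // "Thread 1": receives the value and would feed it into a local signal.
    let v = rx.recv().unwrap();
    worker.join().unwrap();
    v
}

fn main() {
    assert_eq!(demo(), 42);
}
```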

TaskWakers carry a (slot_id, generation) pair — a thread-local slot table maps these to Weak<RefCell<Executor>>. The generation counter invalidates stale wakers after an executor is destroyed. Zero unsafe code in the entire routing system.
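The generation check can be sketched as follows (hypothetical names and a boolean stand-in for the actual waker routing; Auralis maps slots to Weak<RefCell<Executor>>):

```rust
use std::cell::RefCell;

// Thread-local slot table. Wakers carry (slot_id, generation) and are
// ignored when the executor in that slot has since been destroyed.
struct Slot {
    generation: u64,
    occupied: bool,
}

thread_local! {
    static SLOTS: RefCell<Vec<Slot>> = RefCell::new(Vec::new());
}

// Register a new executor; returns the (slot_id, generation) a waker carries.
fn register() -> (usize, u64) {
    SLOTS.with(|s| {
        let mut s = s.borrow_mut();
        s.push(Slot { generation: 1, occupied: true });
        (s.len() - 1, 1)
    })
}

// Destroying an executor bumps the generation, invalidating all
// outstanding wakers that still reference this slot.
fn destroy(slot_id: usize) {
    SLOTS.with(|s| {
        let mut s = s.borrow_mut();
        s[slot_id].occupied = false;
        s[slot_id].generation += 1;
    })
}

// A stale waker routes to nothing instead of a destroyed executor.
fn wake(slot_id: usize, generation: u64) -> bool {
    SLOTS.with(|s| {
        let s = s.borrow();
        s[slot_id].occupied && s[slot_id].generation == generation
    })
}

fn demo() -> (bool, bool) {
    let (id, gen) = register();
    let before = wake(id, gen); // live executor: waker routes
    destroy(id);
    let after = wake(id, gen);  // stale waker: safely ignored
    (before, after)
}

fn main() {
    assert_eq!(demo(), (true, false));
}
```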

What we learned

  1. Deferred callbacks are worth the microtask. The signal notification state machine went through multiple iterations, but the core idea — never call subscribers synchronously — survived every rewrite. It eliminates an entire class of bugs at the architectural level.

  2. Ownership is a better lifecycle manager than any framework API. drop(scope) is simpler, more composable, and harder to misuse than on_cleanup, AbortHandle, CancellationToken, or any other explicit cancellation mechanism. The compiler enforces it.

  3. Build the kernel, not the framework. Auralis has no DOM, no renderer, no router. This means it can be used in a CLI app (we have a demo), in a Wasm browser app (we have a demo), in an egui desktop app (we have a demo), or in any other scenario involving "signal change → async task reaction → automatic cleanup."

  4. Benchmarks are invaluable for design decisions. The incremental Memo subscription diff — keeping shared dependencies and only subscribing to new ones — was motivated by a benchmark showing 30x slowdown at high change rates. We found the bug, fixed it, and the performance returned to expected levels.

  5. Building a Wasm demo surfaced a non-obvious bug. The TaskScope clone from find_scope was setting cancelled = true on the shared inner when dropped after a task poll, silently preventing future spawns. This only manifests in long-running apps (like Wasm pages) where the scope outlives a single flush cycle. The fix — checking Rc::strong_count == 1 before cancelling — matches Memo::drop's behavior.

Try it

The crates are on crates.io at v0.1.6:

[dependencies]
auralis-signal = "0.1"
auralis-task = "0.1"

All demos are runnable — clone the repo and cargo run / trunk serve any of them.


If you're building something reactive in Rust and want a kernel that stays out of your way — or if you have opinions on the API — I'd love to hear from you.
