I've been thinking about reactive programming from the runtime layer up. The question: what's the minimum machinery you actually need to add on top of Rust's async/await to get a working reactive system?
Three things Rust already gives you for free
- Pin<Box<dyn Future>> = a suspended computation
Think about it: a reactive effect is "a piece of code paused, waiting for its dependencies to change, then re-executing." That's exactly what a future stuck at Poll::Pending is.
```rust
scope.spawn(async move {
    loop {
        count.changed().await; // suspends here
        println!("count = {}", count.read());
    }
});
```
The SignalChangedFuture::poll() checks a monotonic version number. If unchanged, it subscribes a callback and returns Poll::Pending. The async executor polls other tasks. When the signal changes, the version bumps, the callback fires, the waker wakes, the executor re-polls. That's the entire effect lifecycle — no custom scheduler, no effect graph traversal, no create_effect() abstraction.
The executor's poll loop IS the effect scheduler. You don't need a separate one.
- Waker = the notification dispatcher
A reactive library has to decide "which effects need to re-run when signal X changes." In the async model, this maps directly to "which futures should be re-polled." The waker IS that dispatch mechanism:
```rust
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
    if version_changed {
        return Poll::Ready(self.signal.read());
    }
    // subscribe a callback that calls cx.waker().wake_by_ref();
    // the executor re-polls this future on the next flush
    Poll::Pending
}
```
No separate notification queue. No topological sort of dependents. Just "wake the waker, executor picks it up."
- Drop = cleanup
Drop the scope that owns the tasks → all futures drop → each future's Drop impl proactively unsubscribes from its signals → zero dangling subscribers. No manual on_cleanup() tokens, no effect disposal bookkeeping. Rust's ownership does the work.
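A minimal sketch of that Drop-based unsubscription, with illustrative names (SubscriberList, Subscription — not the post's actual types):

```rust
use std::cell::RefCell;
use std::rc::Rc;

type SubscriberList = Rc<RefCell<Vec<usize>>>; // ids of live subscribers

struct Subscription {
    id: usize,
    list: SubscriberList,
}

impl Drop for Subscription {
    // Dropping the future (e.g. because its owning scope was dropped)
    // proactively removes its entry from the signal's subscriber list.
    fn drop(&mut self) {
        self.list.borrow_mut().retain(|&s| s != self.id);
    }
}

fn main() {
    let list: SubscriberList = Rc::new(RefCell::new(vec![7]));
    let sub = Subscription { id: 7, list: list.clone() };
    assert_eq!(list.borrow().len(), 1);
    drop(sub); // scope teardown drops the future, which unsubscribes
    assert!(list.borrow().is_empty()); // zero dangling subscribers
}
```

The future holds the Subscription as a field, so cancellation and cleanup are the same event.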
For deeply nested scopes (think UI component trees with hundreds of levels), cancellation uses BFS to collect all descendants, then iterates leaf-to-root — no recursive drop, no stack overflow.
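The BFS-then-reverse cancellation order can be sketched like this (ScopeTree and cancellation_order are illustrative names, assuming scopes are addressed by index):

```rust
use std::collections::VecDeque;

struct ScopeTree {
    children: Vec<Vec<usize>>, // children[i] = child scope indices of scope i
}

impl ScopeTree {
    // Collect the root and all descendants breadth-first, then return them
    // in reverse BFS order (leaves first, root last), so every scope is torn
    // down only after all of its descendants — iteratively, no recursion,
    // so a deep component tree cannot overflow the stack.
    fn cancellation_order(&self, root: usize) -> Vec<usize> {
        let mut order = Vec::new();
        let mut queue = VecDeque::from([root]);
        while let Some(id) = queue.pop_front() {
            order.push(id);
            queue.extend(&self.children[id]);
        }
        order.reverse();
        order
    }
}

fn main() {
    // A deep chain: 0 -> 1 -> 2 -> 3
    let tree = ScopeTree { children: vec![vec![1], vec![2], vec![3], vec![]] };
    assert_eq!(tree.cancellation_order(0), vec![3, 2, 1, 0]);
}
```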
The 20% Rust doesn't give you: re-entrancy prevention
Here's where pure async breaks down. If Signal::set() calls subscriber callbacks synchronously:
set() → callback → set() on same signal → callback → infinite loop
set() → callback → subscribe new callback → mutate subscriber list during iteration → RefCell panic
set() → callback → drop owning scope → borrow conflict
The async model alone can't prevent this. You need a deferred notification state machine.
The approach: Signal::set() never invokes callbacks directly. It bumps the version, snapshots the subscriber list, and pushes a closure into the executor's deferred callback queue. The executor drains this queue at the start of every flush, before polling any tasks.
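A sketch of that deferral, under single-threaded Rc/RefCell assumptions (Signal, DeferredQueue, and flush are illustrative names, not the post's actual API):

```rust
use std::cell::RefCell;
use std::collections::VecDeque;
use std::rc::Rc;

type Callback = Box<dyn FnOnce()>;
type DeferredQueue = Rc<RefCell<VecDeque<Callback>>>;

struct Signal {
    version: Rc<RefCell<u64>>,
    subscribers: Rc<RefCell<Vec<Rc<dyn Fn()>>>>,
    queue: DeferredQueue,
}

impl Signal {
    fn set(&self) {
        *self.version.borrow_mut() += 1;                         // bump version
        let snapshot: Vec<_> = self.subscribers.borrow().clone(); // snapshot list
        self.queue.borrow_mut().push_back(Box::new(move || {
            for sub in &snapshot { sub(); }                      // runs later, not now
        }));
    }
}

// The executor drains the queue at the start of every flush,
// before polling any tasks.
fn flush(queue: &DeferredQueue) {
    loop {
        let next = queue.borrow_mut().pop_front(); // borrow released here,
        match next {
            Some(cb) => cb(), // so callbacks may safely push new notifications
            None => break,
        }
    }
}

fn main() {
    let queue: DeferredQueue = Rc::new(RefCell::new(VecDeque::new()));
    let fired = Rc::new(RefCell::new(0));
    let f = fired.clone();
    let subs: Vec<Rc<dyn Fn()>> = vec![Rc::new(move || *f.borrow_mut() += 1)];
    let signal = Signal {
        version: Rc::new(RefCell::new(0)),
        subscribers: Rc::new(RefCell::new(subs)),
        queue: queue.clone(),
    };
    signal.set();
    assert_eq!(*fired.borrow(), 0); // nothing ran synchronously
    flush(&queue);
    assert_eq!(*fired.borrow(), 1); // callback ran during the flush
}
```

Snapshotting before deferring is what makes subscribe/unsubscribe during a callback round safe: the round iterates its own copy, not the live list.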
Three states, two flags:
- notifying = false, dirty = false → normal. Snapshot subscribers, push a notification, set dirty.
- notifying = true → re-entrant set() during a callback. Just set dirty, no new notification.
- notifying = false, dirty = true → notification already queued. No-op.
After callbacks complete: check dirty. If a re-entrant set() happened during the callback round, schedule a follow-up notification using a fresh subscriber snapshot (so newly-added subscribers from the callback round are included).
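The two flags can be sketched as a small Cell-based state machine (NotifyState, on_set, run_round are illustrative names, assuming the single-threaded executor described above):

```rust
use std::cell::Cell;

#[derive(Default)]
struct NotifyState {
    notifying: Cell<bool>,
    dirty: Cell<bool>,
}

impl NotifyState {
    // Called by set(); returns true if a new notification should be queued.
    fn on_set(&self) -> bool {
        if self.notifying.get() || self.dirty.get() {
            self.dirty.set(true); // coalesce: mark dirty, no new notification
            false
        } else {
            self.dirty.set(true); // first set since last flush: queue one
            true
        }
    }

    // Runs one callback round; returns true if a follow-up round is needed
    // (a re-entrant set() happened while the callbacks were running).
    fn run_round(&self, callbacks: impl FnOnce()) -> bool {
        self.dirty.set(false);
        self.notifying.set(true);
        callbacks();
        self.notifying.set(false);
        self.dirty.replace(false) // dirty again => re-entrant set occurred
    }
}

fn main() {
    let state = NotifyState::default();
    assert!(state.on_set());  // first set queues a notification
    assert!(!state.on_set()); // second set before the flush coalesces
    // During the round, a callback calls set() again:
    let follow_up = state.run_round(|| { state.on_set(); });
    assert!(follow_up);                 // schedule a fresh-snapshot round
    assert!(!state.run_round(|| {}));   // quiet round: no follow-up
}
```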
This covers every edge case: re-entrant set, subscribe/unsubscribe during callback iteration, scope drop during callback (cancelled flag is a Cell outside the RefCell, always writable), set-from-Drop (deferred to next flush via set_deferred()).
The tradeoff
A traditional reactive scheduler can run effects in topological order — dependencies before dependents, guaranteeing each memo recomputes at most once. An async executor runs tasks in whatever order they wake up. If two effects both read a dirty memo, they might each trigger its recomputation (the second read sees it's already clean and skips the work, but the first poll of each effect triggers the check).
Correctness is preserved — version checking + lazy memo recompute guarantees eventual consistency. But optimality is best-effort, not guaranteed.
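The "each effect triggers the check, only the first triggers the work" behavior can be sketched with a version-checked memo (Memo and read are illustrative names; the recomputes counter is instrumentation for the example, not part of the design):

```rust
use std::cell::{Cell, RefCell};

struct Memo<T> {
    source_version: Cell<u64>, // version of the signal this memo reads
    seen_version: Cell<u64>,   // last version the memo computed against
    cached: RefCell<Option<T>>,
    recomputes: Cell<u32>,     // instrumentation for the example
}

impl<T: Clone> Memo<T> {
    fn read(&self, compute: impl Fn() -> T) -> T {
        if self.seen_version.get() != self.source_version.get()
            || self.cached.borrow().is_none()
        {
            // stale: recompute once and record the version we computed against
            *self.cached.borrow_mut() = Some(compute());
            self.seen_version.set(self.source_version.get());
            self.recomputes.set(self.recomputes.get() + 1);
        }
        self.cached.borrow().clone().unwrap()
    }
}

fn main() {
    let memo = Memo {
        source_version: Cell::new(1), // the source signal changed once
        seen_version: Cell::new(0),
        cached: RefCell::new(None),
        recomputes: Cell::new(0),
    };
    // Two effects read the dirty memo in arbitrary wake order:
    let a = memo.read(|| 42); // first reader triggers the recompute
    let b = memo.read(|| 42); // second reader sees it's clean, skips the work
    assert_eq!((a, b), (42, 42));
    assert_eq!(memo.recomputes.get(), 1); // recomputed exactly once
}
```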
For UI frame-level reactivity this is fine. For a compiler's incremental analysis it probably isn't. I think the simplicity trade is worth it for most applications, but I'd be curious to hear counterarguments.
Why bother?
If you squint, reactive programming and async programming are solving the same problem from different directions. One suspends computation waiting for data changes; the other suspends computation waiting for I/O. The primitives overlap almost completely. The question isn't "can you build reactivity on async" — it's "why wouldn't you?"
The only genuinely novel piece needed is a safe way to fire subscriber callbacks without re-entrancy. Everything else is already in the language.
I built a working implementation of this (~1800 lines across two crates, #![forbid(unsafe_code)], zero external deps for the signal layer). I’d genuinely love technical feedback — especially on:
- Whether the lack of topological ordering has bitten anyone in real UI projects, or if it’s mostly a theoretical concern.
- Whether there’s a simpler alternative to the generational slot table I’m using for executor routing.
If I’ve missed an edge case, or if any of the tradeoffs feel wrong to you, I’m all ears. This is still my own learning process, and I’d rather get corrected now than find out later.