<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: chh-itt</title>
    <description>The latest articles on DEV Community by chh-itt (@chhitt).</description>
    <link>https://dev.to/chhitt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3922989%2F3523963a-36a8-47bf-9a9e-3b8aec65e527.png</url>
      <title>DEV Community: chh-itt</title>
      <link>https://dev.to/chhitt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chhitt"/>
    <language>en</language>
    <item>
      <title>Reactive Programming Doesn't Need a Framework — Ownership Is Enough</title>
      <dc:creator>chh-itt</dc:creator>
      <pubDate>Sun, 10 May 2026 08:14:27 +0000</pubDate>
      <link>https://dev.to/chhitt/reactive-programming-doesnt-need-a-framework-ownership-is-enough-3e70</link>
      <guid>https://dev.to/chhitt/reactive-programming-doesnt-need-a-framework-ownership-is-enough-3e70</guid>
      <description>&lt;h1&gt;
  
  
  Reactive Programming Doesn't Need a Framework — Ownership Is Enough
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;How Auralis, a two-crate Rust kernel, reduces reactive programming to things Rust programmers already know.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I spent the last few months building &lt;a href="https://github.com/chh-itt/auralis" rel="noopener noreferrer"&gt;Auralis&lt;/a&gt;, a reactive kernel for Rust: two crates, zero unsafe code, and the signal crate has zero dependencies. It's not a web framework — no &lt;code&gt;view!&lt;/code&gt; macro, no DOM, no hydration. Just reactive primitives you pair with whatever you already have.&lt;/p&gt;

&lt;p&gt;The core idea fits in one sentence:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Reactive = pausable async tasks. Lifecycle = ownership + structured concurrency.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Everything else follows from this. Let me walk through the design decisions that make it work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem 1: &lt;code&gt;Signal::set()&lt;/code&gt; can't call subscribers directly
&lt;/h2&gt;

&lt;p&gt;In most reactive systems, changing a signal's value immediately notifies everyone watching it. This seems natural, but it creates a cascade of problems:&lt;/p&gt;

&lt;p&gt;A subscriber callback calls &lt;code&gt;set()&lt;/code&gt; on another signal → that signal's callback calls &lt;code&gt;set()&lt;/code&gt; on the first signal → infinite loop. Or a subscriber subscribes to a new signal during a callback → the subscriber list is mutated while it is being iterated. Or a subscriber drops itself during a callback → the iteration is invalidated mid-traversal (in safe Rust, a borrow panic or a skipped entry rather than a use-after-free, but a bug either way).&lt;/p&gt;

&lt;p&gt;Every reactive system eventually invents mechanisms to deal with these: &lt;code&gt;pending_unsubscriptions&lt;/code&gt; lists, &lt;code&gt;traversal_depth&lt;/code&gt; counters, &lt;code&gt;dirty_subscribers&lt;/code&gt; sets.&lt;/p&gt;

&lt;p&gt;Auralis's answer: &lt;strong&gt;don't call subscribers synchronously.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Signal::set does NOT call callbacks.&lt;/span&gt;
&lt;span class="c1"&gt;// It pushes a notification closure to the executor's deferred queue.&lt;/span&gt;
&lt;span class="c1"&gt;// The executor drains this queue at the start of the next flush.&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.state&lt;/span&gt;&lt;span class="nf"&gt;.borrow_mut&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="py"&gt;.value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="py"&gt;.version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="py"&gt;.version&lt;/span&gt;&lt;span class="nf"&gt;.wrapping_add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// monotonic version&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;subs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;prepare_notification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;subs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;subs&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;schedule_notification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;subs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The notification closure is a simple &lt;code&gt;Box&amp;lt;dyn FnOnce()&amp;gt;&lt;/code&gt;. It captures a snapshot of the subscriber list at the time &lt;code&gt;set()&lt;/code&gt; was called. When the executor drains the deferred queue (during the next flush, before polling any tasks), it fires each notification. If a subscriber was added after the snapshot, it won't be called for this change. If a subscriber unsubscribes before the notification fires, an &lt;code&gt;alive&lt;/code&gt; flag prevents the callback from running.&lt;/p&gt;

&lt;p&gt;This eliminates re-entrancy entirely. No depth counters, no pending lists, no edge cases. The cost is one microtask of latency, which is negligible in practice.&lt;/p&gt;
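&lt;p&gt;The pattern is small enough to sketch with nothing but &lt;code&gt;std&lt;/code&gt;. The &lt;code&gt;defer&lt;/code&gt; and &lt;code&gt;flush&lt;/code&gt; names below are illustrative, not the Auralis API; the point is that flushing a snapshot of the queue means a re-entrant &lt;code&gt;set()&lt;/code&gt; from inside a callback simply lands in the next flush:&lt;/p&gt;

```rust
use std::cell::RefCell;
use std::collections::VecDeque;

// A thread-local queue of deferred notification closures.
thread_local! {
    static DEFERRED: RefCell<VecDeque<Box<dyn FnOnce()>>> =
        RefCell::new(VecDeque::new());
}

// `set()` would call this instead of invoking subscribers inline.
fn defer(f: impl FnOnce() + 'static) {
    DEFERRED.with(|q| q.borrow_mut().push_back(Box::new(f)));
}

// Drain a snapshot of the queue, then run it. Anything a callback
// enqueues during this flush lands in the (now-empty) queue and
// runs on the *next* flush, so no list is ever mutated mid-iteration.
fn flush() {
    let batch: Vec<Box<dyn FnOnce()>> =
        DEFERRED.with(|q| q.borrow_mut().drain(..).collect());
    for f in batch {
        f();
    }
}

fn main() {
    use std::cell::Cell;
    use std::rc::Rc;

    let count = Rc::new(Cell::new(0));
    let c = Rc::clone(&count);
    defer(move || c.set(c.get() + 1));
    assert_eq!(count.get(), 0); // nothing ran yet: defer only enqueues
    flush();
    assert_eq!(count.get(), 1); // callback fired on the flush

    // A callback that defers again does not run in the same flush:
    let c2 = Rc::clone(&count);
    defer(move || {
        let c3 = Rc::clone(&c2);
        defer(move || c3.set(c3.get() + 10));
    });
    flush();
    assert_eq!(count.get(), 1);  // inner callback deferred
    flush();
    assert_eq!(count.get(), 11); // inner callback ran next flush
}
```

&lt;p&gt;Because the queue is drained before any callback runs, a callback can enqueue, subscribe, or unsubscribe freely without invalidating anything mid-iteration.&lt;/p&gt;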

&lt;p&gt;The state machine for a single signal notification looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set() → prepare_notification → schedule_notification
         ↓ (already notifying?)    ↓ (inside batch?)
         dirty=true, return        buffer, flush at batch end
                                   ↓
                              executor flush
                                   ↓
                              call subscribers
                                   ↓ (re-entrant set during callback?)
                              dirty=true, schedule follow-up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Problem 2: Cancelling background work is hard. Dropping a scope is easy.
&lt;/h2&gt;

&lt;p&gt;Every Rust UI eventually hits the same problem: the user navigates away, but the HTTP request is still in flight. The component is unmounted, but the timer is still ticking. You need cancellation tokens, &lt;code&gt;AbortHandle&lt;/code&gt;s, or manual &lt;code&gt;Arc&amp;lt;AtomicBool&amp;gt;&lt;/code&gt; flags scattered everywhere.&lt;/p&gt;

&lt;p&gt;Auralis replaces all of that with &lt;code&gt;TaskScope&lt;/code&gt;. When a scope is dropped, all tasks spawned inside it — and all descendant scopes — are cancelled. No tokens, no handles, no manual cleanup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;TaskScope&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;with_executor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Spawn a task that listens to a signal.&lt;/span&gt;
&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="nf"&gt;.spawn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;move&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;loop&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="nf"&gt;.changed&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;val&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Spawn another task that runs a timer.&lt;/span&gt;
&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="nf"&gt;.spawn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;move&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nn"&gt;timer&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_secs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;cleanup&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Later, when the UI component is unmounted:&lt;/span&gt;
&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// Both tasks are cancelled. No tokens, no handles.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cancellation itself is iterative, not recursive. &lt;code&gt;TaskScope::drop&lt;/code&gt; does a BFS traversal of the scope tree, collects all descendants, then cancels them leaf-to-root. A 200-level nested UI tree therefore drops without overflowing the stack; no recursion-depth workarounds needed.&lt;/p&gt;
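&lt;p&gt;A minimal sketch of that iterative teardown, using plain &lt;code&gt;Rc&amp;lt;RefCell&amp;lt;…&amp;gt;&amp;gt;&lt;/code&gt; nodes (the &lt;code&gt;ScopeNode&lt;/code&gt; type here is hypothetical, not the Auralis internals):&lt;/p&gt;

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A stand-in for a scope: a cancelled flag plus owned child scopes.
struct ScopeNode {
    cancelled: bool,
    children: Vec<Rc<RefCell<ScopeNode>>>,
}

// BFS-collect every descendant into a flat Vec, then cancel in reverse
// (leaf-to-root). No recursion anywhere, so depth cannot overflow the stack.
fn cancel_tree(root: &Rc<RefCell<ScopeNode>>) {
    let mut order = vec![Rc::clone(root)];
    let mut i = 0;
    while i < order.len() {
        let children = order[i].borrow().children.clone();
        order.extend(children);
        i += 1;
    }
    for node in order.iter().rev() {
        node.borrow_mut().cancelled = true;
        // Clearing child lists here also makes the final Rc drops flat:
        // no parent ever recursively drops a long child chain.
        node.borrow_mut().children.clear();
    }
}

fn main() {
    // Build a 100_000-level chain of nested scopes.
    let root = Rc::new(RefCell::new(ScopeNode { cancelled: false, children: Vec::new() }));
    let mut cur = Rc::clone(&root);
    for _ in 0..100_000 {
        let child = Rc::new(RefCell::new(ScopeNode { cancelled: false, children: Vec::new() }));
        cur.borrow_mut().children.push(Rc::clone(&child));
        cur = child;
    }
    cancel_tree(&root); // no stack overflow, despite the depth
    assert!(root.borrow().cancelled);
}
```

&lt;p&gt;Note the second half of the trick: clearing each node's child list before the flat &lt;code&gt;Vec&lt;/code&gt; is released means even the final &lt;code&gt;Rc&lt;/code&gt; drops are iterative, not a recursive chain.&lt;/p&gt;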

&lt;p&gt;The cancellation uses the same &lt;code&gt;Rc&amp;lt;RefCell&amp;lt;TaskScopeInner&amp;gt;&amp;gt;&lt;/code&gt; that &lt;code&gt;Memo&lt;/code&gt; uses. When the last &lt;code&gt;TaskScope&lt;/code&gt; clone is dropped (&lt;code&gt;Rc::strong_count == 1&lt;/code&gt;), the cancellation runs. Temporary clones — from &lt;code&gt;find_scope&lt;/code&gt; during executor flush, from &lt;code&gt;with_current_scope&lt;/code&gt; during spawn — are harmless. They share the same &lt;code&gt;Rc&lt;/code&gt; and don't trigger cancellation. This is a bug we actually found and fixed in development — a temporary clone was setting &lt;code&gt;cancelled = true&lt;/code&gt; on the shared inner, and subsequent spawns silently returned.&lt;/p&gt;
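&lt;p&gt;The rule is easy to demonstrate in isolation. This sketch (hypothetical types, not the Auralis source) cancels only when the last clone drops, so temporary clones are inert:&lt;/p&gt;

```rust
use std::cell::Cell;
use std::rc::Rc;

// `refs` counts live Scope clones; `cancelled` is a shared flag we can
// observe from outside after every clone is gone.
struct Scope {
    refs: Rc<()>,
    cancelled: Rc<Cell<bool>>,
}

impl Clone for Scope {
    fn clone(&self) -> Self {
        Scope { refs: Rc::clone(&self.refs), cancelled: Rc::clone(&self.cancelled) }
    }
}

impl Drop for Scope {
    fn drop(&mut self) {
        // Only the final owner triggers cancellation. A strong count > 1
        // means another clone (temporary or not) is still alive.
        if Rc::strong_count(&self.refs) == 1 {
            self.cancelled.set(true);
        }
    }
}

fn main() {
    let flag = Rc::new(Cell::new(false));
    let scope = Scope { refs: Rc::new(()), cancelled: Rc::clone(&flag) };

    let temp = scope.clone(); // e.g. a find_scope clone during a flush
    drop(temp);               // strong count was 2: no cancellation
    assert!(!flag.get());

    drop(scope);              // last clone: cancellation fires
    assert!(flag.get());
}
```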

&lt;h2&gt;
  
  
  Problem 3: Derived state goes stale when you forget to update it
&lt;/h2&gt;

&lt;p&gt;In immediate-mode GUIs (like egui), everything is recomputed every frame. This is simple but wasteful. In retained-mode UIs, derived state must be manually updated when dependencies change. This is efficient but error-prone — one missed update and the UI shows stale data.&lt;/p&gt;

&lt;p&gt;Memo solves this by being both lazy and automatic. You give it a closure. It runs the closure once, discovers which signals were read, and subscribes to them. When a source signal changes, the memo just flips a &lt;code&gt;dirty&lt;/code&gt; flag. The actual recomputation is deferred until someone calls &lt;code&gt;memo.read()&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;raw_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Signal&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;load_large_dataset&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;filter_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Signal&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;FilterParams&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="c1"&gt;// Memo auto-tracks which signals it reads.&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;filtered&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nd"&gt;memo!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;raw_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filter_params&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;raw_data&lt;/span&gt;&lt;span class="nf"&gt;.read&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;filter_params&lt;/span&gt;&lt;span class="nf"&gt;.read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;aggregated&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nd"&gt;memo!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filtered&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="nf"&gt;aggregate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;filtered&lt;/span&gt;&lt;span class="nf"&gt;.read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;formatted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nd"&gt;memo!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;aggregated&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="nf"&gt;format_results&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;aggregated&lt;/span&gt;&lt;span class="nf"&gt;.read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Cache hit when params haven't changed: &amp;lt; 0.01 ms&lt;/span&gt;
&lt;span class="c1"&gt;// Cache miss when params changed: ~14 ms (500K records)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;memo!&lt;/code&gt; macro handles the boilerplate of cloning signals before the &lt;code&gt;move&lt;/code&gt; closure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Without macro:&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Memo&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="k"&gt;move&lt;/span&gt; &lt;span class="p"&gt;||&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="nf"&gt;.read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="nf"&gt;.read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// With macro:&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nd"&gt;memo!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="nf"&gt;.read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="nf"&gt;.read&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We benchmarked this against two alternatives on a 500K-record 3-stage pipeline:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Change rate&lt;/th&gt;
&lt;th&gt;Recompute every frame&lt;/th&gt;
&lt;th&gt;Manual cache&lt;/th&gt;
&lt;th&gt;Auralis Memo&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1% (typical UI)&lt;/td&gt;
&lt;td&gt;~14 ms/fr&lt;/td&gt;
&lt;td&gt;0.14 ms/fr&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.16 ms/fr&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10%&lt;/td&gt;
&lt;td&gt;~14 ms/fr&lt;/td&gt;
&lt;td&gt;1.45 ms/fr&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.78 ms/fr&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;50% (worst case)&lt;/td&gt;
&lt;td&gt;~14 ms/fr&lt;/td&gt;
&lt;td&gt;7.33 ms/fr&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;9.09 ms/fr&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;At realistic UI change rates (&amp;lt; 1%), Memo provides ~90x speedup. Cache-hit cost is literally a &lt;code&gt;Cell::get&lt;/code&gt; + &lt;code&gt;Rc&lt;/code&gt; deref — under 10 microseconds.&lt;/p&gt;

&lt;p&gt;The 50% row is the interesting one. Memo has ~24% overhead vs. a hand-written cache (9.09 vs 7.33 ms/fr). This is the cost of &lt;em&gt;not writing invalidation logic by hand&lt;/em&gt;. A hand-written cache for a 3-stage pipeline needs ~20 lines of &lt;code&gt;if filter_changed || agg_changed || fmt_changed&lt;/code&gt; cascade. Add a stage, you add 2–3 more condition checks. Miss one, you get stale data.&lt;/p&gt;

&lt;p&gt;With Memo, adding a stage is one line: &lt;code&gt;memo!(prev_stage =&amp;gt; compute(&amp;amp;prev_stage.read()))&lt;/code&gt;. The dependency graph is maintained automatically.&lt;/p&gt;
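&lt;p&gt;Internally this is the classic dirty-flag pattern. Here is a standalone sketch of the lazy core (illustrative names, not the Auralis source), leaving out the automatic dependency-tracking half:&lt;/p&gt;

```rust
use std::cell::{Cell, RefCell};

// A source change only flips `dirty` (via a subscriber notification);
// the closure re-runs lazily on the next `read()`.
struct LazyCache<T> {
    dirty: Cell<bool>,
    value: RefCell<Option<T>>,
    compute: Box<dyn Fn() -> T>,
}

impl<T: Clone> LazyCache<T> {
    fn new(compute: impl Fn() -> T + 'static) -> Self {
        LazyCache {
            dirty: Cell::new(true),
            value: RefCell::new(None),
            compute: Box::new(compute),
        }
    }

    // What a subscriber notification would call: O(1), no recompute.
    fn invalidate(&self) {
        self.dirty.set(true);
    }

    // Recompute only if dirty, then serve the cached value.
    fn read(&self) -> T {
        if self.dirty.get() {
            *self.value.borrow_mut() = Some((self.compute)());
            self.dirty.set(false);
        }
        self.value.borrow().clone().unwrap()
    }
}

fn main() {
    use std::cell::Cell;
    use std::rc::Rc;

    let runs = Rc::new(Cell::new(0));
    let r = Rc::clone(&runs);
    let cache = LazyCache::new(move || { r.set(r.get() + 1); 42 });

    assert_eq!(cache.read(), 42);
    assert_eq!(cache.read(), 42); // cache hit: closure not re-run
    assert_eq!(runs.get(), 1);

    cache.invalidate();
    assert_eq!(cache.read(), 42);
    assert_eq!(runs.get(), 2);    // recomputed once after invalidation
}
```

&lt;p&gt;What Memo adds on top of this is exactly the part that's tedious to hand-write: discovering the source signals on first run and wiring &lt;code&gt;invalidate&lt;/code&gt; to their notifications.&lt;/p&gt;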

&lt;h2&gt;
  
  
  Problem 4: &lt;code&gt;Arc&amp;lt;Mutex&amp;lt;T&amp;gt;&amp;gt;&lt;/code&gt; is wasteful when you're single-threaded
&lt;/h2&gt;

&lt;p&gt;Most Rust reactive libraries reach for &lt;code&gt;Arc&amp;lt;Mutex&amp;lt;T&amp;gt;&amp;gt;&lt;/code&gt; so they can be &lt;code&gt;Send + Sync&lt;/code&gt;. On single-threaded Wasm — which is the primary target for many Rust UI frameworks — atomics are pure overhead. Every &lt;code&gt;lock()&lt;/code&gt; is effectively a no-op, but you pay the code size and mental overhead anyway.&lt;/p&gt;

&lt;p&gt;Auralis goes the other way: &lt;code&gt;Signal&amp;lt;T&amp;gt;&lt;/code&gt; and &lt;code&gt;TaskScope&lt;/code&gt; are deliberately &lt;code&gt;!Send + !Sync&lt;/code&gt;. They use &lt;code&gt;Rc&amp;lt;RefCell&amp;lt;T&amp;gt;&amp;gt;&lt;/code&gt; throughout — zero atomics, zero lock overhead. The Wasm binary for a minimal reactive counter compiles to a few tens of KB.&lt;/p&gt;

&lt;p&gt;For multi-threaded scenarios, the escape hatch is &lt;strong&gt;instance isolation&lt;/strong&gt;, not shared mutable state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Thread 1&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;ex1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Executor&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new_instance&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;TaskScope&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;with_executor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;ex1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// scope holds a strong reference to ex1&lt;/span&gt;

&lt;span class="c1"&gt;// Thread 2 — completely independent&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;ex2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Executor&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new_instance&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;scope2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;TaskScope&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;with_executor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;ex2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Communication between threads: channels, not shared signals.&lt;/span&gt;
&lt;span class="c1"&gt;// See examples/multi_thread_bridge.rs for a 6-line pattern.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This mirrors tokio's single-threaded runtime model: each executor is a self-contained event loop. Tasks in different executors don't interact. Cross-executor communication uses channels.&lt;/p&gt;
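&lt;p&gt;The bridge needs nothing beyond a standard channel. A sketch using only &lt;code&gt;std::sync::mpsc&lt;/code&gt; (the executor and signal calls are elided; the point is that only plain values cross the thread boundary):&lt;/p&gt;

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u64>();

    // "Thread 2": produces values. It never touches thread 1's
    // signals or executor; it only sends plain data.
    let worker = thread::spawn(move || {
        for n in 0..3 {
            tx.send(n * 10).unwrap();
        }
        // tx dropped here, closing the channel.
    });

    // "Thread 1": drains the channel and would call `signal.set(v)`
    // on its own executor for each value; here we just collect to
    // show the hand-off.
    let received: Vec<u64> = rx.iter().collect();
    worker.join().unwrap();
    assert_eq!(received, vec![0, 10, 20]);
}
```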

&lt;p&gt;TaskWakers carry a &lt;code&gt;(slot_id, generation)&lt;/code&gt; pair — a thread-local slot table maps these to &lt;code&gt;Weak&amp;lt;RefCell&amp;lt;Executor&amp;gt;&amp;gt;&lt;/code&gt;. The generation counter invalidates stale wakers after an executor is destroyed. Zero unsafe code in the entire routing system.&lt;/p&gt;
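&lt;p&gt;Generation counters are a standard slot-map trick. A stripped-down sketch (a hypothetical &lt;code&gt;SlotTable&lt;/code&gt; with no slot reuse, unlike a real free-list table): a waker holds a &lt;code&gt;(slot_id, generation)&lt;/code&gt; pair, and a stale waker whose generation no longer matches is simply ignored:&lt;/p&gt;

```rust
// Each slot stores (current generation, payload). Removing a payload
// bumps the generation, so every previously handed-out
// (slot_id, generation) pair becomes invalid.
struct SlotTable<T> {
    slots: Vec<(u64, Option<T>)>,
}

impl<T> SlotTable<T> {
    fn new() -> Self {
        SlotTable { slots: Vec::new() }
    }

    // Returns the (slot_id, generation) pair a waker would carry.
    // (Simplified: slots are never reused, so generation starts at 0.)
    fn insert(&mut self, value: T) -> (usize, u64) {
        self.slots.push((0, Some(value)));
        (self.slots.len() - 1, 0)
    }

    // "Executor destroyed": bump the generation and drop the payload.
    fn remove(&mut self, id: usize) {
        self.slots[id].0 += 1;
        self.slots[id].1 = None;
    }

    // A stale waker (generation mismatch) resolves to None: a no-op wake.
    fn get(&self, id: usize, generation: u64) -> Option<&T> {
        let (g, payload) = &self.slots[id];
        if *g == generation { payload.as_ref() } else { None }
    }
}

fn main() {
    let mut table: SlotTable<&str> = SlotTable::new();
    let (id, generation) = table.insert("executor-1");
    assert_eq!(table.get(id, generation), Some(&"executor-1"));

    table.remove(id); // executor destroyed: generation bumped
    assert_eq!(table.get(id, generation), None); // stale waker ignored
}
```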

&lt;h2&gt;
  
  
  What we learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deferred callbacks are worth the microtask.&lt;/strong&gt; The signal notification state machine went through multiple iterations, but the core idea — never call subscribers synchronously — survived every rewrite. It eliminates an entire class of bugs at the architectural level.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ownership is a better lifecycle manager than any framework API.&lt;/strong&gt; &lt;code&gt;drop(scope)&lt;/code&gt; is simpler, more composable, and harder to misuse than &lt;code&gt;on_cleanup&lt;/code&gt;, &lt;code&gt;AbortHandle&lt;/code&gt;, &lt;code&gt;CancellationToken&lt;/code&gt;, or any other explicit cancellation mechanism. The compiler enforces it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build the kernel, not the framework.&lt;/strong&gt; Auralis has no DOM, no renderer, no router. This means it can be used in a CLI app (we have a &lt;a href="https://github.com/chh-itt/auralis/tree/main/demos/cli-multitask" rel="noopener noreferrer"&gt;demo&lt;/a&gt;), in a Wasm browser app (we have a &lt;a href="https://github.com/chh-itt/auralis/tree/main/demos/wasm-counter" rel="noopener noreferrer"&gt;demo&lt;/a&gt;), in an egui desktop app (we have a &lt;a href="https://github.com/chh-itt/auralis/tree/main/demos/egui-demo" rel="noopener noreferrer"&gt;demo&lt;/a&gt;), or in any other scenario involving "signal change → async task reaction → automatic cleanup."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Benchmarks are invaluable for design decisions.&lt;/strong&gt; The incremental Memo subscription diff — keeping shared dependencies and only subscribing to new ones — was motivated by a benchmark showing 30x slowdown at high change rates. We found the bug, fixed it, and the performance returned to expected levels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Building a Wasm demo surfaced a non-obvious bug.&lt;/strong&gt; The &lt;code&gt;TaskScope&lt;/code&gt; clone from &lt;code&gt;find_scope&lt;/code&gt; was setting &lt;code&gt;cancelled = true&lt;/code&gt; on the shared inner when dropped after a task poll, silently preventing future spawns. This only manifests in long-running apps (like Wasm pages) where the scope outlives a single flush cycle. The fix — checking &lt;code&gt;Rc::strong_count == 1&lt;/code&gt; before cancelling — matches &lt;code&gt;Memo::drop&lt;/code&gt;'s behavior.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;The crates are on crates.io at v0.1.6:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[dependencies]&lt;/span&gt;
&lt;span class="py"&gt;auralis-signal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.1"&lt;/span&gt;
&lt;span class="py"&gt;auralis-task&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All demos are runnable — clone the repo and &lt;code&gt;cargo run&lt;/code&gt; / &lt;code&gt;trunk serve&lt;/code&gt; any of them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/chh-itt/auralis" rel="noopener noreferrer"&gt;github.com/chh-itt/auralis&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docs: &lt;a href="https://docs.rs/auralis-signal" rel="noopener noreferrer"&gt;docs.rs/auralis-signal&lt;/a&gt; / &lt;a href="https://docs.rs/auralis-task" rel="noopener noreferrer"&gt;docs.rs/auralis-task&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;If you're building something reactive in Rust and want a kernel that stays out of your way — or if you have opinions on the API — I'd love to hear from you.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
