<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rahul Garg</title>
    <description>The latest articles on DEV Community by Rahul Garg (@xtmntxraphaelx).</description>
    <link>https://dev.to/xtmntxraphaelx</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3819568%2F761f2fd6-00e1-404a-a319-bddc5ec30ce1.jpg</url>
      <title>DEV Community: Rahul Garg</title>
      <link>https://dev.to/xtmntxraphaelx</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/xtmntxraphaelx"/>
    <language>en</language>
    <item>
      <title>React Native JSI Deep Dive — Part 4: Your First React Native JSI Function</title>
      <dc:creator>Rahul Garg</dc:creator>
      <pubDate>Mon, 23 Mar 2026 05:48:30 +0000</pubDate>
      <link>https://dev.to/xtmntxraphaelx/react-native-jsi-deep-dive-part-4-your-first-react-native-jsi-function-7bg</link>
      <guid>https://dev.to/xtmntxraphaelx/react-native-jsi-deep-dive-part-4-your-first-react-native-jsi-function-7bg</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Simplicity is prerequisite for reliability."
— Edsger W. Dijkstra, &lt;em&gt;How Do We Tell Truths That Might Hurt?&lt;/em&gt;, 1975&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Excerpt:&lt;/strong&gt; A JSI function is a C++ lambda disguised as a JavaScript function. No serialization, no bridge, no codegen — just a C++ callable that the runtime invokes directly. This post walks you through writing one from scratch: registering it with the runtime, reading arguments, validating types, handling errors, and calling it from JavaScript. By the end, you'll have a working native module with zero boilerplate.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: React Native JSI Deep Dive&lt;/strong&gt; (12 parts — series in progress)
&lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-1-the-runtime-you-never-see/" rel="noopener noreferrer"&gt;Part 1: React Native Architecture — Threads, Hermes, and the Event Loop&lt;/a&gt; | &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-2-the-bridge-is-dead-long-live-jsi/" rel="noopener noreferrer"&gt;Part 2: React Native Bridge vs JSI — What Changed and Why&lt;/a&gt; | &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-3-c-for-javascript-developers/" rel="noopener noreferrer"&gt;Part 3: C++ for JavaScript Developers&lt;/a&gt; | &lt;strong&gt;Part 4: Your First React Native JSI Function (You are here)&lt;/strong&gt; | &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-5-hostobjects-exposing-c-classes-to-javascript/" rel="noopener noreferrer"&gt;Part 5: HostObjects — Exposing C++ Classes to JavaScript&lt;/a&gt; | Part 6: Memory Ownership | Part 7: Platform Wiring | Part 8: Threading &amp;amp; Async | &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-9-real-time-audio-in-react-native-lock-free-pipelines-with-jsi/" rel="noopener noreferrer"&gt;Part 9: Real-Time Audio in React Native — Lock-Free Pipelines with JSI&lt;/a&gt; | Part 10: Storage Engine | Part 11: Module Approaches | Part 12: Debugging&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="quick-recap"&gt;Quick Recap&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-2-the-bridge-is-dead-long-live-jsi/" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;, we saw that JSI replaces the JSON bridge with direct C++ function calls — no serialization, no async queue. In &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-3-c-for-javascript-developers/" rel="noopener noreferrer"&gt;Part 3&lt;/a&gt;, we learned the C++ vocabulary: references (&lt;code&gt;&amp;amp;&lt;/code&gt;), pointers (&lt;code&gt;*&lt;/code&gt;), RAII, smart pointers, lambdas with explicit captures.&lt;/p&gt;

&lt;p&gt;Now we use all of it. This post is where you write your first line of native module code.&lt;/p&gt;




&lt;h2 id="installing-native-functions-in-the-javascript-runtime"&gt;Installing Native Functions in the JavaScript Runtime&lt;/h2&gt;

&lt;p&gt;In a web browser, you can add JavaScript functions to the global scope (&lt;code&gt;window.myFunc = ...&lt;/code&gt;), but you can't install &lt;em&gt;native&lt;/em&gt; functions — functions implemented in C++ that execute without the JavaScript engine interpreting them. The browser's native API surface (&lt;code&gt;fetch&lt;/code&gt;, &lt;code&gt;setTimeout&lt;/code&gt;, the DOM) is fixed by the browser vendor.&lt;/p&gt;

&lt;p&gt;In React Native, you can. JSI lets you install C++ functions directly into the JavaScript runtime. From JavaScript's perspective, they're indistinguishable from any other function. From C++'s perspective, they're lambdas that receive the runtime and arguments — executed natively, not interpreted.&lt;/p&gt;

&lt;p&gt;The primary API for doing this is one function: &lt;code&gt;jsi::Function::createFromHostFunction&lt;/code&gt;. (You can also create callable functions via &lt;code&gt;HostObject&lt;/code&gt; or &lt;code&gt;evaluateJavaScript&lt;/code&gt;, but &lt;code&gt;createFromHostFunction&lt;/code&gt; is the dedicated, purpose-built API for registering C++ functions.)&lt;/p&gt;




&lt;h2 id="step-1-the-simplest-possible-jsi-function"&gt;Step 1: The Simplest Possible JSI Function&lt;/h2&gt;

&lt;p&gt;Let's start with the absolute minimum — a function that takes no arguments and returns a number:&lt;/p&gt;

&lt;p&gt;cpp/install.cpp — the seed&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;jsi/jsi.h&amp;gt;

using namespace facebook;

void install(jsi::Runtime&amp;amp; rt) {
    auto fn = jsi::Function::createFromHostFunction(
        rt,                                         // 1. the runtime
        jsi::PropNameID::forAscii(rt, "getFortyTwo"), // 2. function name (for stack traces)
        0,                                          // 3. expected argument count
        [](jsi::Runtime&amp;amp; rt,                        // 4. the lambda
           const jsi::Value&amp;amp; thisVal,
           const jsi::Value* args,
           size_t count) -&amp;gt; jsi::Value {
            return jsi::Value(42);
        }
    );

    rt.global().setProperty(rt, "getFortyTwo", std::move(fn));
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;App.js — calling it&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const n = getFortyTwo();
console.log(n); // 42&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;output&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;42&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Four things happen in &lt;code&gt;createFromHostFunction&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;rt&lt;/code&gt;&lt;/strong&gt; — the runtime instance. Every JSI call needs this — it's the handle to the JavaScript world.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;PropNameID::forAscii(rt, "getFortyTwo")&lt;/code&gt;&lt;/strong&gt; — the function's name. This shows up in error stack traces. It doesn't determine where the function is installed — that's &lt;code&gt;setProperty&lt;/code&gt;'s job.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;0&lt;/code&gt;&lt;/strong&gt; — the expected argument count. This is informational (the JS &lt;code&gt;.length&lt;/code&gt; property) — the runtime doesn't enforce it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The lambda&lt;/strong&gt; — the actual C++ code that runs when JavaScript calls the function. It receives the runtime, &lt;code&gt;this&lt;/code&gt; value, a pointer to the arguments array, and the argument count.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The last line — &lt;code&gt;rt.global().setProperty(...)&lt;/code&gt; — installs the function on the JavaScript global object. After this call, any JavaScript code can call &lt;code&gt;getFortyTwo()&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; The function name passed to &lt;code&gt;PropNameID&lt;/code&gt; and the property name passed to &lt;code&gt;setProperty&lt;/code&gt; are independent. You could name the function &lt;code&gt;"internalMathOp"&lt;/code&gt; for stack traces but install it as &lt;code&gt;global.getFortyTwo&lt;/code&gt;. In practice, keep them the same to avoid confusion.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="step-2-reading-arguments"&gt;Step 2: Reading Arguments&lt;/h2&gt;

&lt;p&gt;A function that ignores its arguments isn't very useful. Let's add two numbers:&lt;/p&gt;

&lt;p&gt;cpp/install.cpp — reading arguments ⚠️ no validation yet&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;void install(jsi::Runtime&amp;amp; rt) {
    auto add = jsi::Function::createFromHostFunction(
        rt,
        jsi::PropNameID::forAscii(rt, "nativeAdd"),
        2,  // expects 2 arguments
        [](jsi::Runtime&amp;amp; rt,
           const jsi::Value&amp;amp; thisVal,
           const jsi::Value* args,
           size_t count) -&amp;gt; jsi::Value {
            double a = args[0].asNumber();  // ← read first argument as double
            double b = args[1].asNumber();  // ← read second argument
            return jsi::Value(a + b);
        }
    );

    rt.global().setProperty(rt, "nativeAdd", std::move(add));
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;App.js&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;console.log(nativeAdd(3, 7));    // 10
console.log(nativeAdd(1.5, 2.5)); // 4&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;output&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;10
4&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;args&lt;/code&gt; parameter is a pointer to an array of &lt;code&gt;jsi::Value&lt;/code&gt; objects (as we learned in &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-3-c-for-javascript-developers/" rel="noopener noreferrer"&gt;Part 3&lt;/a&gt; — C-style array passing). &lt;code&gt;args[0]&lt;/code&gt; is the first argument, &lt;code&gt;args[1]&lt;/code&gt; is the second. The &lt;code&gt;count&lt;/code&gt; parameter tells you how many were actually passed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; This code is deliberately unvalidated to keep it simple — &lt;strong&gt;don't ship this pattern.&lt;/strong&gt; If JavaScript calls &lt;code&gt;nativeAdd(5)&lt;/code&gt; with only one argument, &lt;code&gt;args[1]&lt;/code&gt; accesses past the end of the arguments array. That's undefined behavior in C++ — it may crash, corrupt memory, or silently produce garbage. Step 3 fixes this with proper &lt;code&gt;count&lt;/code&gt; validation. Always check &lt;code&gt;count&lt;/code&gt; before indexing into &lt;code&gt;args&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
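&lt;p&gt;The bounded-loop discipline is easy to demonstrate outside JSI. The sketch below is a self-contained, hypothetical analogue of the &lt;code&gt;(args, count)&lt;/code&gt; calling convention using plain &lt;code&gt;double&lt;/code&gt;s; the real signature passes &lt;code&gt;jsi::Value&lt;/code&gt;s, but the safety rule is identical:&lt;/p&gt;

```cpp
#include <cstddef>

// Hypothetical stand-in for a host function's argument list: a C-style
// array plus an explicit length, mirroring `const jsi::Value* args,
// size_t count`. Every index is checked against `count`, so the
// function is correct for any number of caller-supplied arguments.
double sumAll(const double* args, std::size_t count) {
    double total = 0.0;
    for (std::size_t i = 0; i < count; ++i) {
        total += args[i];  // in bounds: i < count
    }
    return total;
}
// sumAll(xs, 3) on {3.0, 7.0, 1.5} yields 11.5;
// sumAll(xs, 0) performs no reads at all and yields 0.0.
```

&lt;p&gt;A variadic-style native function (one that accepts however many arguments JavaScript sends) is just this loop with &lt;code&gt;jsi::Value&lt;/code&gt; in place of &lt;code&gt;double&lt;/code&gt;.&lt;/p&gt;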

&lt;p&gt;&lt;code&gt;asNumber()&lt;/code&gt; converts a &lt;code&gt;jsi::Value&lt;/code&gt; to a C++ &lt;code&gt;double&lt;/code&gt;. But what happens if JavaScript passes a string instead of a number?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Think about it:&lt;/strong&gt; What does &lt;code&gt;nativeAdd("hello", 7)&lt;/code&gt; do? The &lt;code&gt;args[0].asNumber()&lt;/code&gt; call encounters a string. Does it return &lt;code&gt;NaN&lt;/code&gt;? Does it throw? Does it crash?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It throws a C++ exception that the JSI runtime catches and converts into a JavaScript &lt;code&gt;Error&lt;/code&gt; — catchable with &lt;code&gt;try/catch&lt;/code&gt; on the JS side. The app doesn't crash, but the call fails with a generic error message like "expected a number." This is better than silently returning garbage, but we should validate arguments explicitly rather than relying on the conversion to throw — both for better error messages and for safety (see the Gotcha below about missing arguments).&lt;/p&gt;




&lt;h2 id="step-3-validating-arguments"&gt;Step 3: Validating Arguments&lt;/h2&gt;

&lt;p&gt;Production JSI functions must validate their inputs. The &lt;a href="https://github.com/facebook/react-native/blob/main/packages/react-native/ReactCommon/jsi/jsi/jsi.h" rel="noopener noreferrer"&gt;&lt;code&gt;jsi::Value&lt;/code&gt;&lt;/a&gt; type provides type-checking methods: &lt;code&gt;isNumber()&lt;/code&gt;, &lt;code&gt;isString()&lt;/code&gt;, &lt;code&gt;isObject()&lt;/code&gt;, &lt;code&gt;isUndefined()&lt;/code&gt;, &lt;code&gt;isNull()&lt;/code&gt;, &lt;code&gt;isBool()&lt;/code&gt;, &lt;code&gt;isSymbol()&lt;/code&gt;, and &lt;code&gt;isBigInt()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;cpp/install.cpp — with argument validation&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;void install(jsi::Runtime&amp;amp; rt) {
    auto add = jsi::Function::createFromHostFunction(
        rt,
        jsi::PropNameID::forAscii(rt, "nativeAdd"),
        2,
        [](jsi::Runtime&amp;amp; rt,
           const jsi::Value&amp;amp; thisVal,
           const jsi::Value* args,
           size_t count) -&amp;gt; jsi::Value {
            // Validate argument count
            if (count &amp;lt; 2) {                                       // ← NEW
                throw jsi::JSError(rt, "nativeAdd requires 2 arguments");
            }

            // Validate argument types
            if (!args[0].isNumber() || !args[1].isNumber()) {      // ← NEW
                throw jsi::JSError(rt, "nativeAdd arguments must be numbers");
            }

            double a = args[0].asNumber();
            double b = args[1].asNumber();
            return jsi::Value(a + b);
        }
    );

    rt.global().setProperty(rt, "nativeAdd", std::move(add));
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;App.js — error handling&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;try {
    nativeAdd("hello", 7);
} catch (e) {
    console.log(e.message); // "nativeAdd arguments must be numbers"
}

try {
    nativeAdd(5);
} catch (e) {
    console.log(e.message); // "nativeAdd requires 2 arguments"
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;output&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;"nativeAdd arguments must be numbers"
"nativeAdd requires 2 arguments"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The pattern is always the same:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Check &lt;code&gt;count&lt;/code&gt;&lt;/strong&gt; — did JavaScript pass enough arguments?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check types&lt;/strong&gt; — are the arguments the right kind?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throw &lt;code&gt;jsi::JSError&lt;/code&gt;&lt;/strong&gt; — if validation fails, this becomes a catchable JavaScript error.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; Always validate &lt;em&gt;before&lt;/em&gt; calling &lt;code&gt;asNumber()&lt;/code&gt;, &lt;code&gt;asString()&lt;/code&gt;, etc. These conversion methods throw a C++ exception on type mismatch (which the JSI runtime converts to a JS error), but the error message is generic ("Value is string, expected a number"). Your custom message — &lt;code&gt;"nativeAdd arguments must be numbers"&lt;/code&gt; — is far more useful for debugging. More importantly, validate &lt;code&gt;count&lt;/code&gt; before indexing into &lt;code&gt;args&lt;/code&gt; — accessing &lt;code&gt;args[i]&lt;/code&gt; when &lt;code&gt;i &amp;gt;= count&lt;/code&gt; is undefined behavior that no exception handler can catch.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="step-4-error-handling-jsi-jserror"&gt;Step 4: Error Handling (jsi::JSError)&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;jsi::JSError&lt;/code&gt; is the bridge between C++ exceptions and JavaScript errors. When you throw a &lt;code&gt;jsi::JSError&lt;/code&gt; inside a host function, it propagates back to JavaScript as a regular &lt;code&gt;Error&lt;/code&gt; object — catchable with &lt;code&gt;try/catch&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The JSI runtime does catch &lt;code&gt;std::exception&lt;/code&gt; subclasses thrown from host functions and converts them into JavaScript errors (per the &lt;a href="https://github.com/facebook/react-native/blob/main/packages/react-native/ReactCommon/jsi/jsi/jsi.h" rel="noopener noreferrer"&gt;&lt;code&gt;jsi.h&lt;/code&gt; documentation&lt;/a&gt;: "If a C++ exception is thrown, a JS Error will be created and thrown into JS; if the C++ exception extends std::exception, the Error's message will be whatever what() returns"). However, exceptions that don't extend &lt;code&gt;std::exception&lt;/code&gt;, or undefined behavior that doesn't throw at all (like out-of-bounds array access), will crash the app. Relying on the runtime's catch-all is fragile — the error messages are generic, and non-exception UB isn't caught.&lt;/p&gt;

&lt;p&gt;The robust pattern: wrap your native logic in a &lt;code&gt;try/catch&lt;/code&gt; that gives you control over error messages and catches everything:&lt;/p&gt;

&lt;p&gt;cpp/install.cpp — safe error boundary&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[](jsi::Runtime&amp;amp; rt,
   const jsi::Value&amp;amp; thisVal,
   const jsi::Value* args,
   size_t count) -&amp;gt; jsi::Value {
    try {
        // Your native logic here
        auto result = someCppFunction(args[0].asNumber());
        return jsi::Value(result);
    } catch (const jsi::JSError&amp;amp;) {
        throw;  // already a JS error — let it propagate
    } catch (const std::exception&amp;amp; e) {
        throw jsi::JSError(rt, std::string("Native error: ") + e.what());
    } catch (...) {
        throw jsi::JSError(rt, "Unknown native error");
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This three-level catch ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;jsi::JSError&lt;/code&gt; passes through unchanged (it's already a JS error).&lt;/li&gt;
&lt;li&gt;Standard C++ exceptions (&lt;code&gt;std::runtime_error&lt;/code&gt;, &lt;code&gt;std::invalid_argument&lt;/code&gt;, etc.) are wrapped with their error message.&lt;/li&gt;
&lt;li&gt;Unknown exceptions get a generic fallback instead of crashing the app.&lt;/li&gt;
&lt;/ul&gt;
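&lt;p&gt;The same boundary can be exercised in a self-contained sketch. Here a hypothetical &lt;code&gt;FakeJSError&lt;/code&gt; stands in for &lt;code&gt;jsi::JSError&lt;/code&gt; so the three catch levels can be seen without a runtime:&lt;/p&gt;

```cpp
#include <functional>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for jsi::JSError: the one exception type the
// "runtime" knows how to surface to JavaScript.
struct FakeJSError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// The three-level boundary: pass "JS errors" through unchanged, wrap
// std::exception subclasses with their message, and give everything
// else a generic fallback.
std::string callGuarded(const std::function<std::string()>& body) {
    try {
        return body();
    } catch (const FakeJSError&) {
        throw;  // already a "JS error": propagate as-is
    } catch (const std::exception& e) {
        throw FakeJSError(std::string("Native error: ") + e.what());
    } catch (...) {
        throw FakeJSError("Unknown native error");
    }
}
// callGuarded([]{ return std::string("ok"); }) returns "ok";
// a thrown std::invalid_argument("bad") resurfaces as
// FakeJSError("Native error: bad").
```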

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; Every JSI host function is a boundary between two worlds. The JSI runtime handles &lt;code&gt;std::exception&lt;/code&gt; subclasses automatically, but undefined behavior (dangling pointers, out-of-bounds access) bypasses all exception handling and crashes the app. The &lt;code&gt;try/catch&lt;/code&gt; wrapper adds defense in depth: clearer error messages, a catch-all for non-standard exceptions, and explicit control over what JavaScript sees. Think of it as the native equivalent of a React error boundary.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="the-jsi-value-type-system"&gt;The jsi::Value Type System&lt;/h2&gt;

&lt;p&gt;Before we build anything larger, let's understand the types you'll work with. &lt;code&gt;jsi::Value&lt;/code&gt; is a tagged union — a single type that can hold any JavaScript value.&lt;/p&gt;
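&lt;p&gt;To make the tagged-union idea concrete, here is a small, self-contained sketch built on &lt;code&gt;std::variant&lt;/code&gt;. The &lt;code&gt;MiniValue&lt;/code&gt; type is hypothetical (the real &lt;code&gt;jsi::Value&lt;/code&gt; is engine-backed and implemented differently), but the check-before-extract workflow is the same one you'll use below:&lt;/p&gt;

```cpp
#include <string>
#include <variant>

// Hypothetical mirror of the jsi::Value mental model: one type that can
// hold any of several "JavaScript" values.
struct JsUndefined {};
struct JsNull {};
using MiniValue = std::variant<JsUndefined, JsNull, bool, double, std::string>;

// The analogue of typeof dispatch: check which alternative is held
// before extracting it, just like isNumber()/isString()/isBool().
std::string typeOf(const MiniValue& v) {
    if (std::holds_alternative<double>(v))      return "number";
    if (std::holds_alternative<std::string>(v)) return "string";
    if (std::holds_alternative<bool>(v))        return "boolean";
    if (std::holds_alternative<JsNull>(v))      return "null";
    return "undefined";
}
```

&lt;p&gt;Extracting the wrong alternative (&lt;code&gt;std::get&amp;lt;double&amp;gt;&lt;/code&gt; on a string) throws, which is the same reason the type checks below come before the conversions.&lt;/p&gt;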

&lt;h3 id="reading-values-js-c"&gt;Reading Values (JS → C++)&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;JavaScript Type&lt;/th&gt;
&lt;th&gt;Type Check&lt;/th&gt;
&lt;th&gt;Conversion&lt;/th&gt;
&lt;th&gt;C++ Type&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;number&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.isNumber()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.asNumber()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;double&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;string&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.isString()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.asString(rt)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::String&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;boolean&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.isBool()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.getBool()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;bool&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;object&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.isObject()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.asObject(rt)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Object&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;null&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.isNull()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;undefined&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;val.isUndefined()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Figure 1: jsi::Value type checks and conversions. Always check the type before converting.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Note the asymmetry: &lt;code&gt;asNumber()&lt;/code&gt; doesn't take &lt;code&gt;rt&lt;/code&gt;, but &lt;code&gt;asString(rt)&lt;/code&gt; and &lt;code&gt;asObject(rt)&lt;/code&gt; do. Numbers and booleans are plain C++ values (a &lt;code&gt;double&lt;/code&gt; and a &lt;code&gt;bool&lt;/code&gt;). Strings and objects are runtime-managed — they live inside the JS engine and need the runtime handle to be accessed.&lt;/p&gt;

&lt;p&gt;To get a &lt;code&gt;std::string&lt;/code&gt; from a &lt;code&gt;jsi::String&lt;/code&gt;, call &lt;code&gt;.utf8(rt)&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;Reading a string argument&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;jsi::String jsStr = args[0].asString(rt);   // jsi::String (engine-managed)
std::string cppStr = jsStr.utf8(rt);         // std::string (C++-owned copy)&lt;/code&gt;&lt;/pre&gt;

&lt;h3 id="creating-values-c-js"&gt;Creating Values (C++ → JS)&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;C++ Value&lt;/th&gt;
&lt;th&gt;JSI Constructor&lt;/th&gt;
&lt;th&gt;JavaScript Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;42&lt;/code&gt; or &lt;code&gt;3.14&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Value(42)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;number&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;true&lt;/code&gt; / &lt;code&gt;false&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Value(true)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;boolean&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;"hello"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::String::createFromUtf8(rt, "hello")&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;string&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Value::null()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;null&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Value::undefined()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;undefined&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Object(rt)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{}&lt;/code&gt; (empty object)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Figure 2: Creating JavaScript values from C++. Numbers and booleans wrap directly. Strings and objects need the runtime.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Returning different types&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Return a number
return jsi::Value(42);

// Return a string
return jsi::String::createFromUtf8(rt, "hello from C++");

// Return an object with properties
auto obj = jsi::Object(rt);
obj.setProperty(rt, "name", jsi::String::createFromUtf8(rt, "JSI"));
obj.setProperty(rt, "version", jsi::Value(4));
return obj;  // JS receives: { name: "JSI", version: 4 }&lt;/code&gt;&lt;/pre&gt;




&lt;h2 id="putting-it-together-a-math-module"&gt;Putting It Together: A Math Module&lt;/h2&gt;

&lt;p&gt;Let's build something real — a small math module with multiple functions, installed as properties on a single object rather than polluting the global scope:&lt;/p&gt;

&lt;p&gt;cpp/MathModule.cpp — complete module&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;jsi/jsi.h&amp;gt;
#include &amp;lt;cmath&amp;gt;
#include &amp;lt;string&amp;gt;

using namespace facebook;

void installMathModule(jsi::Runtime&amp;amp; rt) {

    // Helper: validate that arg at index i is a number
    auto requireNumber = [](jsi::Runtime&amp;amp; rt,
                            const jsi::Value* args,
                            size_t count,
                            size_t index,
                            const char* fnName) {
        if (index &amp;gt;= count) {
            throw jsi::JSError(rt,
                std::string(fnName) + ": missing argument at index "
                + std::to_string(index));
        }
        if (!args[index].isNumber()) {
            throw jsi::JSError(rt,
                std::string(fnName) + ": argument " + std::to_string(index)
                + " must be a number");
        }
    };

    // --- add(a, b) ---
    auto add = jsi::Function::createFromHostFunction(
        rt, jsi::PropNameID::forAscii(rt, "add"), 2,
        [requireNumber](jsi::Runtime&amp;amp; rt, const jsi::Value&amp;amp;,
                        const jsi::Value* args, size_t count) -&amp;gt; jsi::Value {
            requireNumber(rt, args, count, 0, "add");
            requireNumber(rt, args, count, 1, "add");
            return jsi::Value(args[0].asNumber() + args[1].asNumber());
        }
    );

    // --- multiply(a, b) ---
    auto multiply = jsi::Function::createFromHostFunction(
        rt, jsi::PropNameID::forAscii(rt, "multiply"), 2,
        [requireNumber](jsi::Runtime&amp;amp; rt, const jsi::Value&amp;amp;,
                        const jsi::Value* args, size_t count) -&amp;gt; jsi::Value {
            requireNumber(rt, args, count, 0, "multiply");
            requireNumber(rt, args, count, 1, "multiply");
            return jsi::Value(args[0].asNumber() * args[1].asNumber());
        }
    );

    // --- sqrt(x) ---
    auto sqrt = jsi::Function::createFromHostFunction(
        rt, jsi::PropNameID::forAscii(rt, "sqrt"), 1,
        [requireNumber](jsi::Runtime&amp;amp; rt, const jsi::Value&amp;amp;,
                        const jsi::Value* args, size_t count) -&amp;gt; jsi::Value {
            requireNumber(rt, args, count, 0, "sqrt");
            double x = args[0].asNumber();
            if (x &amp;lt; 0) {
                throw jsi::JSError(rt, "sqrt: argument must be non-negative");
            }
            return jsi::Value(std::sqrt(x));
        }
    );

    // --- describe() — returns an object ---
    auto describe = jsi::Function::createFromHostFunction(
        rt, jsi::PropNameID::forAscii(rt, "describe"), 0,
        [](jsi::Runtime&amp;amp; rt, const jsi::Value&amp;amp;,
           const jsi::Value* args, size_t count) -&amp;gt; jsi::Value {
            auto obj = jsi::Object(rt);
            obj.setProperty(rt, "name",
                jsi::String::createFromUtf8(rt, "NativeMath"));
            obj.setProperty(rt, "version", jsi::Value(1));
            obj.setProperty(rt, "engine",
                jsi::String::createFromUtf8(rt, "JSI"));
            return obj;
        }
    );

    // Install all functions on a single object
    auto mathModule = jsi::Object(rt);
    mathModule.setProperty(rt, "add", std::move(add));
    mathModule.setProperty(rt, "multiply", std::move(multiply));
    mathModule.setProperty(rt, "sqrt", std::move(sqrt));
    mathModule.setProperty(rt, "describe", std::move(describe));

    rt.global().setProperty(rt, "NativeMath", std::move(mathModule));
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;App.js — using the module&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;console.log(NativeMath.add(3, 7));          // 10
console.log(NativeMath.multiply(6, 7));     // 42
console.log(NativeMath.sqrt(144));          // 12
console.log(NativeMath.describe());         // { name: "NativeMath", version: 1, engine: "JSI" }

try {
    NativeMath.sqrt(-1);
} catch (e) {
    console.log(e.message);                 // "sqrt: argument must be non-negative"
}

try {
    NativeMath.add("hello", 7);
} catch (e) {
    console.log(e.message);                 // "add: argument 0 must be a number"
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;output&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;10
42
12
{ name: "NativeMath", version: 1, engine: "JSI" }
"sqrt: argument must be non-negative"
"add: argument 0 must be a number"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Every concept from this post and Part 3 is at work:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;What's Happening&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jsi::Runtime&amp;amp; rt&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Reference — borrows the runtime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;const jsi::Value* args&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Pointer — C-style array of arguments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;requireNumber&lt;/code&gt; lambda&lt;/td&gt;
&lt;td&gt;Captured by value into each host function&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jsi::JSError&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;C++ exception → JavaScript &lt;code&gt;Error&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;std::move(add)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Move semantics — transfers ownership to the module object&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jsi::Object(rt)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Stack-allocated JSI object — RAII manages its handle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;setProperty&lt;/code&gt; on &lt;code&gt;mathModule&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Installs functions on an object (not global) — cleaner namespace&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2 id="global-vs-object-installation"&gt;Global vs Object Installation&lt;/h2&gt;

&lt;p&gt;You have two choices for where to install your functions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global installation&lt;/strong&gt; — the function is available everywhere:&lt;/p&gt;

&lt;p&gt;Global — available as a bare function&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;rt.global().setProperty(rt, "nativeAdd", std::move(fn));
// JS: nativeAdd(3, 7)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Object installation&lt;/strong&gt; — the function is namespaced:&lt;/p&gt;

&lt;p&gt;Object — namespaced under a module&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;auto module = jsi::Object(rt);
module.setProperty(rt, "add", std::move(fn));
rt.global().setProperty(rt, "NativeMath", std::move(module));
// JS: NativeMath.add(3, 7)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Prefer object installation. It avoids polluting the global namespace, groups related functions together, and matches how JavaScript modules work. The only reason to use global installation is for very simple, single-function modules.&lt;/p&gt;




&lt;h2 id="the-tradeoffs-what-this-approach-can-t-do"&gt;The Tradeoffs (What This Approach Can't Do)&lt;/h2&gt;

&lt;p&gt;Pure JSI functions — as shown in this post — are powerful but limited:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capability&lt;/th&gt;
&lt;th&gt;JSI Host Functions&lt;/th&gt;
&lt;th&gt;What You Need Instead&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Synchronous calls&lt;/td&gt;
&lt;td&gt;Yes — runs on JS thread&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Return values&lt;/td&gt;
&lt;td&gt;Yes — any &lt;code&gt;jsi::Value&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stateful modules&lt;/td&gt;
&lt;td&gt;Possible via &lt;code&gt;shared_ptr&lt;/code&gt; captures, but verbose — no properties, no &lt;code&gt;this&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;HostObjects&lt;/strong&gt; (Part 5) — expose C++ classes with a clean interface&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Async operations&lt;/td&gt;
&lt;td&gt;No — must return synchronously&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;CallInvoker&lt;/strong&gt; (Part 8) — background threads + Promises&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Platform APIs&lt;/td&gt;
&lt;td&gt;No — pure C++ only&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Platform wiring&lt;/strong&gt; (Part 7) — Obj-C++/JNI bridges&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Type safety from JS&lt;/td&gt;
&lt;td&gt;No — manual validation&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;TurboModules&lt;/strong&gt; (Part 11) — codegen from Flow/TS specs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Figure 3: What JSI host functions can and can't do. Parts 5–11 address each limitation.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The biggest ergonomic limitation: &lt;strong&gt;no clean stateful interface&lt;/strong&gt;. You &lt;em&gt;can&lt;/em&gt; capture &lt;code&gt;shared_ptr&lt;/code&gt; in lambdas to share state (as we did in Part 3's key-value store example), but it gets verbose fast — no property access, no &lt;code&gt;this&lt;/code&gt;, and no way to group methods on an object that JavaScript can inspect. When you want a database connection, a cache, or a streaming audio session, you need a C++ object that JavaScript can interact with as a first-class object. That's what HostObjects provide — and that's Part 5.&lt;/p&gt;
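&lt;p&gt;For reference, the capture pattern looks like this in a self-contained sketch, where a plain &lt;code&gt;std::function&lt;/code&gt; stands in for the lambda you would pass to &lt;code&gt;createFromHostFunction&lt;/code&gt;:&lt;/p&gt;

```cpp
#include <functional>
#include <memory>

// Two "host functions" sharing one counter via a captured shared_ptr.
// A plain std::function stands in for the installed host function.
struct CounterFns {
    std::function<int()> increment;
    std::function<int()> current;
};

CounterFns makeCounter() {
    auto counter = std::make_shared<int>(0);  // jointly owned state
    return {
        [counter] { return ++(*counter); },   // each capture copies the
        [counter] { return *counter; }        //   shared_ptr, not the int
    };
}
```

&lt;p&gt;Both closures keep the counter alive after &lt;code&gt;makeCounter&lt;/code&gt; returns because each holds a copy of the &lt;code&gt;shared_ptr&lt;/code&gt;. It works, but there are no properties, no &lt;code&gt;this&lt;/code&gt;, and nothing JavaScript could enumerate: exactly the gap HostObjects close.&lt;/p&gt;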




&lt;h2 id="key-takeaways"&gt;Key Takeaways&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;createFromHostFunction&lt;/code&gt; is the core API.&lt;/strong&gt; It takes a runtime, a name (for stack traces), an argument count, and a C++ lambda. The lambda is what JavaScript calls. That's the entire mechanism.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Always validate arguments.&lt;/strong&gt; Check &lt;code&gt;count&lt;/code&gt; before accessing &lt;code&gt;args[index]&lt;/code&gt;. Check &lt;code&gt;isNumber()&lt;/code&gt; / &lt;code&gt;isString()&lt;/code&gt; / &lt;code&gt;isObject()&lt;/code&gt; before calling &lt;code&gt;asNumber()&lt;/code&gt; / &lt;code&gt;asString(rt)&lt;/code&gt; / &lt;code&gt;asObject(rt)&lt;/code&gt;. Never trust that JavaScript passed what you expect.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Wrap native errors in &lt;code&gt;jsi::JSError&lt;/code&gt;.&lt;/strong&gt; The JSI runtime catches &lt;code&gt;std::exception&lt;/code&gt; subclasses automatically, but the error messages are generic. Wrapping in &lt;code&gt;try/catch&lt;/code&gt; gives you clear error messages and catches non-standard exceptions. Let &lt;code&gt;jsi::JSError&lt;/code&gt; pass through unchanged. Undefined behavior (out-of-bounds access, dangling pointers) bypasses all exception handling — validate inputs first.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Install on objects, not globals.&lt;/strong&gt; Group related functions under a namespace object (&lt;code&gt;NativeMath.add&lt;/code&gt;) rather than polluting the global scope (&lt;code&gt;nativeAdd&lt;/code&gt;). It's cleaner and matches JavaScript conventions.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Host functions are synchronous.&lt;/strong&gt; They execute on the JS thread and return immediately. If your operation takes more than ~1ms, you'll block the event loop. Async patterns (background threads + Promises) come in Part 8.&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;
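&lt;p&gt;The takeaways above combine into a single pattern. The following is a sketch, assuming a &lt;code&gt;jsi::Runtime&amp;amp; rt&lt;/code&gt; is in scope; the &lt;code&gt;NativeMath&lt;/code&gt; namespace and &lt;code&gt;divide&lt;/code&gt; function are illustrative names:&lt;/p&gt;

&lt;p&gt;Validate, wrap errors, install on a namespace object&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;auto divide = jsi::Function::createFromHostFunction(
    rt, jsi::PropNameID::forAscii(rt, "divide"), 2,
    [](jsi::Runtime&amp;amp; rt, const jsi::Value&amp;amp;,
       const jsi::Value* args, size_t count) -&amp;gt; jsi::Value {
      // 1. Check count before touching args[index].
      if (count &amp;lt; 2) {
        throw jsi::JSError(rt, "divide expects 2 arguments");
      }
      // 2. Check types before converting.
      if (!args[0].isNumber() || !args[1].isNumber()) {
        throw jsi::JSError(rt, "divide expects numbers");
      }
      double b = args[1].asNumber();
      if (b == 0.0) {
        throw jsi::JSError(rt, "division by zero");
      }
      return jsi::Value(args[0].asNumber() / b);
    });

// 3. Install on a namespace object, not the global scope.
jsi::Object nativeMath(rt);
nativeMath.setProperty(rt, "divide", std::move(divide));
rt.global().setProperty(rt, "NativeMath", std::move(nativeMath));&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From JavaScript this is just &lt;code&gt;NativeMath.divide(10, 2)&lt;/code&gt;, with a catchable &lt;code&gt;Error&lt;/code&gt; on bad input.&lt;/p&gt;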




&lt;h2 id="frequently-asked-questions"&gt;Frequently Asked Questions&lt;/h2&gt;

&lt;h3 id="how-do-you-create-a-jsi-function-in-react-native"&gt;How do you create a JSI function in React Native?&lt;/h3&gt;

&lt;p&gt;Use &lt;code&gt;jsi::Function::createFromHostFunction()&lt;/code&gt; to register a C++ lambda as a JavaScript function. The lambda receives the runtime, arguments as &lt;code&gt;jsi::Value&lt;/code&gt;, and returns a &lt;code&gt;jsi::Value&lt;/code&gt;.&lt;/p&gt;

&lt;h3 id="can-jsi-functions-be-synchronous"&gt;Can JSI functions be synchronous?&lt;/h3&gt;

&lt;p&gt;Yes — JSI functions execute synchronously on the JS thread, returning results immediately without Promises or callbacks. This is only safe for operations completing in under ~1ms.&lt;/p&gt;

&lt;h3 id="what-happens-if-a-jsi-function-throws"&gt;What happens if a JSI function throws?&lt;/h3&gt;

&lt;p&gt;C++ exceptions extending &lt;code&gt;std::exception&lt;/code&gt; are caught by the JSI runtime and converted into JavaScript &lt;code&gt;Error&lt;/code&gt; objects, catchable with try/catch on the JS side.&lt;/p&gt;




&lt;h2 id="what-s-next"&gt;What's Next&lt;/h2&gt;

&lt;p&gt;You can now install C++ functions into JavaScript. But they're stateless — each call is independent. What if you want to expose a C++ &lt;em&gt;object&lt;/em&gt; to JavaScript? A database connection that remembers its state. A cache you can read and write. An audio session you can start, pause, and stop.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-5-hostobjects-exposing-c-classes-to-javascript/" rel="noopener noreferrer"&gt;&lt;strong&gt;Part 5: HostObjects — Exposing C++ Classes to JavaScript&lt;/strong&gt;&lt;/a&gt;, you'll learn to expose C++ classes to JavaScript as first-class objects with properties and methods. HostObjects are where JSI stops being a curiosity and becomes a real native module framework. You'll build a key-value store where &lt;code&gt;storage.get('key')&lt;/code&gt; calls C++ synchronously — no await, no bridge, no serialization.&lt;/p&gt;

&lt;p&gt;Part 4 gave you functions. Part 5 gives you objects.&lt;/p&gt;





&lt;h2 id="references-further-reading"&gt;References &amp;amp; Further Reading&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/facebook/react-native/blob/main/packages/react-native/ReactCommon/jsi/jsi/jsi.h" rel="noopener noreferrer"&gt;JSI Header — jsi.h (Complete API Surface, facebook/react-native)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://reactnative.dev/docs/the-new-architecture/landing-page" rel="noopener noreferrer"&gt;React Native — The New Architecture (Official Documentation)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mrousavy/react-native-mmkv" rel="noopener noreferrer"&gt;react-native-mmkv — Production JSI Module (Source Code Reference)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mrousavy/react-native-vision-camera" rel="noopener noreferrer"&gt;react-native-vision-camera — Production JSI + HostObject Module (Source Code Reference)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.cppreference.com/w/cpp/language/lambda" rel="noopener noreferrer"&gt;cppreference — Lambda Expressions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.cppreference.com/w/cpp/error/exception" rel="noopener noreferrer"&gt;cppreference — std::exception&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>reactnative</category>
      <category>cpp</category>
      <category>mobile</category>
      <category>jsi</category>
    </item>
    <item>
      <title>The 5 C++ Concepts Every React Native Developer Needs (and Nothing More)</title>
      <dc:creator>Rahul Garg</dc:creator>
      <pubDate>Fri, 20 Mar 2026 15:41:25 +0000</pubDate>
      <link>https://dev.to/xtmntxraphaelx/the-5-c-concepts-every-react-native-developer-needs-and-nothing-more-4076</link>
      <guid>https://dev.to/xtmntxraphaelx/the-5-c-concepts-every-react-native-developer-needs-and-nothing-more-4076</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"The purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise."
— Edsger W. Dijkstra, &lt;em&gt;The Humble Programmer&lt;/em&gt;, 1972&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Excerpt:&lt;/strong&gt; You don't need to learn all of C++ to write JSI native modules. You need five concepts: stack vs heap, references and pointers, RAII, smart pointers, and lambdas. This post teaches exactly that subset — framed in JavaScript terms you already know. By the end, you'll read C++ the way you read TypeScript: not every keyword, but every &lt;em&gt;intention&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: React Native JSI Deep Dive&lt;/strong&gt; (12 parts — series in progress)
&lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-1-the-runtime-you-never-see/" rel="noopener noreferrer"&gt;Part 1: React Native Architecture — Threads, Hermes, and the Event Loop&lt;/a&gt; | &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-2-the-bridge-is-dead-long-live-jsi/" rel="noopener noreferrer"&gt;Part 2: React Native Bridge vs JSI — What Changed and Why&lt;/a&gt; | &lt;strong&gt;Part 3: C++ for JavaScript Developers (You are here)&lt;/strong&gt; | &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-4-your-first-react-native-jsi-function/" rel="noopener noreferrer"&gt;Part 4: Your First React Native JSI Function&lt;/a&gt; | &lt;a href="part-5-host-objects.md"&gt;Part 5: HostObjects — Exposing C++ Classes to JavaScript&lt;/a&gt; | Part 6: Memory Ownership | Part 7: Platform Wiring | Part 8: Threading &amp;amp; Async | &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-9-real-time-audio-in-react-native-lock-free-pipelines-with-jsi/" rel="noopener noreferrer"&gt;Part 9: Real-Time Audio in React Native — Lock-Free Pipelines with JSI&lt;/a&gt; | Part 10: Storage Engine | Part 11: Module Approaches | Part 12: Debugging&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="the-problem-c-looks-like-it-was-designed-to-cause-suffering"&gt;The Problem: C++ Looks Like It Was Designed to Cause Suffering&lt;/h2&gt;

&lt;p&gt;If you've been writing JavaScript or TypeScript, your first encounter with C++ probably looks like this:&lt;/p&gt;

&lt;p&gt;What a JSI function looks like&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;static jsi::Value multiply(
    jsi::Runtime&amp;amp; rt,
    const jsi::Value&amp;amp; thisVal,
    const jsi::Value* args,
    size_t count) {
  double a = args[0].asNumber();
  double b = args[1].asNumber();
  return jsi::Value(a * b);
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You see &lt;code&gt;&amp;amp;&lt;/code&gt;, &lt;code&gt;*&lt;/code&gt;, &lt;code&gt;const&lt;/code&gt;, &lt;code&gt;size_t&lt;/code&gt;, and you think: &lt;em&gt;I have to learn an entirely new language.&lt;/em&gt; But look again. Strip the symbols and it's a function that takes two numbers and returns their product. The &lt;code&gt;&amp;amp;&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt; are about one thing only: &lt;strong&gt;who owns the data and where it lives&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That's the entire mental shift. JavaScript hides memory management behind a garbage collector. C++ makes you state it explicitly. Everything else — classes, loops, conditionals, strings — works roughly the way you'd expect.&lt;/p&gt;

&lt;p&gt;This post teaches you the five C++ concepts that appear in every JSI module. No templates-of-templates. No operator overloading. No multiple inheritance. Just the vocabulary you need to read and write native module code.&lt;/p&gt;




&lt;h2 id="concept-1-stack-vs-heap-where-data-lives"&gt;Concept 1: Stack vs Heap (Where Data Lives)&lt;/h2&gt;

&lt;p&gt;In JavaScript, you never think about where your variables live. You write &lt;code&gt;const x = 42&lt;/code&gt; and the engine figures out the rest.&lt;/p&gt;

&lt;p&gt;In C++, data primarily lives in one of two places — the &lt;strong&gt;stack&lt;/strong&gt; or the &lt;strong&gt;heap&lt;/strong&gt; — and you choose which one. (C++ also has static storage for globals and thread-local storage, but for JSI modules, stack and heap are what matter.)&lt;/p&gt;

&lt;h3 id="the-stack"&gt;The Stack&lt;/h3&gt;

&lt;p&gt;The stack is fast, automatic memory. When a function runs, its local variables live on the stack. When the function returns, they're destroyed. No cleanup required — it's instant.&lt;/p&gt;

&lt;p&gt;Stack allocation — automatic lifetime&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;void greet() {
    int count = 42;            // lives on the stack
    std::string name = "JSI";  // also on the stack (string content may be heap-allocated internally)
    // use count and name...
}  // ← count and name are destroyed here, automatically&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The JavaScript equivalent is a &lt;code&gt;let&lt;/code&gt; inside a function — it exists while the function runs and is eligible for garbage collection after. But there's a crucial difference: in C++, stack destruction is &lt;strong&gt;immediate and deterministic&lt;/strong&gt;. It doesn't happen "eventually" when a GC gets around to it. It happens at the closing brace. Every time. Guaranteed.&lt;/p&gt;
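&lt;p&gt;You can watch that determinism directly. In this small illustrative example, the destructor message prints at the closing brace, before "after scope", on every run:&lt;/p&gt;

&lt;p&gt;Deterministic destruction — no GC involved&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;iostream&amp;gt;

struct Noisy {
    ~Noisy() { std::cout &amp;lt;&amp;lt; "destroyed\n"; }
};

int main() {
    {
        Noisy n;                          // lives on the stack
        std::cout &amp;lt;&amp;lt; "inside scope\n";
    }                                     // ← destructor fires HERE
    std::cout &amp;lt;&amp;lt; "after scope\n";
    // Output order is guaranteed:
    // inside scope
    // destroyed
    // after scope
}&lt;/code&gt;&lt;/pre&gt;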

&lt;h3 id="the-heap"&gt;The Heap&lt;/h3&gt;

&lt;p&gt;The heap is for data that needs to outlive the function that created it. In JavaScript, everything that isn't a primitive (&lt;code&gt;number&lt;/code&gt;, &lt;code&gt;boolean&lt;/code&gt;) lives on the heap — objects, arrays, strings, closures. The garbage collector handles cleanup.&lt;/p&gt;

&lt;p&gt;In C++, you explicitly allocate on the heap with &lt;code&gt;new&lt;/code&gt; and must explicitly deallocate with &lt;code&gt;delete&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;Heap allocation — manual lifetime ⚠️&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;void createBuffer() {
    int* data = new int[1024];  // allocate 1024 ints on the heap
    // use data...
    delete[] data;              // YOU must free it
}  // if you forget delete[], those 1024 ints leak forever&lt;/code&gt;&lt;/pre&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Think about it:&lt;/strong&gt; What happens if an exception is thrown between &lt;code&gt;new&lt;/code&gt; and &lt;code&gt;delete&lt;/code&gt;? The &lt;code&gt;delete&lt;/code&gt; never runs. The memory leaks. This is the fundamental problem with manual memory management — and it's why modern C++ almost never uses raw &lt;code&gt;new&lt;/code&gt; and &lt;code&gt;delete&lt;/code&gt;. The solution is RAII, which we'll get to in Concept 3.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's the mental model:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────┐
│                        STACK                             │
│                                                          │
│  Fast. Automatic. Fixed-size.                            │
│  Dies when the function returns.                         │
│                                                          │
│  JS analogy: primitive values (number, boolean)          │
│  C++ use: local variables, function arguments            │
│                                                          │
├─────────────────────────────────────────────────────────┤
│                        HEAP                              │
│                                                          │
│  Slower. Manual (or smart-pointer managed). Dynamic.     │
│  Lives until you explicitly free it — or it leaks.       │
│                                                          │
│  JS analogy: objects, arrays, closures (GC'd)            │
│  C++ use: anything that outlives its creating function    │
│                                                          │
└─────────────────────────────────────────────────────────┘&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 1: Stack vs heap. JavaScript hides this distinction behind the garbage collector. C++ requires you to choose.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For JSI modules, you'll mostly work with stack-allocated values and smart pointers (which manage heap memory for you). Raw &lt;code&gt;new&lt;/code&gt; and &lt;code&gt;delete&lt;/code&gt; almost never appear in well-written modern C++.&lt;/p&gt;




&lt;h2 id="concept-2-references-and-pointers-aliasing-data"&gt;Concept 2: References and Pointers (Aliasing Data)&lt;/h2&gt;

&lt;p&gt;In JavaScript, when you pass an object to a function, the function receives a copy of the &lt;em&gt;reference&lt;/em&gt; — it can mutate the object's properties, but reassigning the parameter doesn't affect the caller's variable. (This is technically "pass-by-sharing," not true pass-by-reference in the C++ sense.) In practice, it feels like pass-by-reference for mutations:&lt;/p&gt;

&lt;p&gt;JavaScript — object mutations are visible to the caller&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;function addItem(list) {
    list.push('new item');  // modifies the original
}

const myList = ['a', 'b'];
addItem(myList);
console.log(myList);  // ['a', 'b', 'new item']&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In C++, you choose whether to pass data by &lt;strong&gt;value&lt;/strong&gt; (copy it), by &lt;strong&gt;reference&lt;/strong&gt; (alias it), or by &lt;strong&gt;pointer&lt;/strong&gt; (hold its address). This is what the &lt;code&gt;&amp;amp;&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt; symbols mean.&lt;/p&gt;

&lt;h3 id="pass-by-value-copy"&gt;Pass by Value (Copy)&lt;/h3&gt;

&lt;p&gt;Pass by value — makes a copy&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;void process(std::string text) {   // text is a COPY
    text += " modified";           // modifies the copy only
}

std::string original = "hello";
process(original);
// original is still "hello" — the copy was modified, not the original&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is like JavaScript's behavior with primitives — &lt;code&gt;let x = 5; foo(x);&lt;/code&gt; passes a copy.&lt;/p&gt;

&lt;h3 id="pass-by-reference"&gt;Pass by Reference (`&amp;amp;`)&lt;/h3&gt;

&lt;p&gt;Pass by reference — aliases the original&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;void process(std::string&amp;amp; text) {  // &amp;amp; means "reference to"
    text += " modified";           // modifies the ORIGINAL
}

std::string original = "hello";
process(original);
// original is now "hello modified"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;&amp;amp;&lt;/code&gt; after the type means "this is not a copy — it's another name for the same data." The closest JavaScript analogy is passing an object to a function — the function can mutate the object's properties because it has a reference to the same data. But C++ references go further: reassigning the parameter &lt;em&gt;does&lt;/em&gt; affect the caller's variable, unlike JavaScript.&lt;/p&gt;
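&lt;p&gt;That last difference is worth seeing once. Assigning through a C++ reference rewrites the caller's variable, which no JavaScript parameter can do:&lt;/p&gt;

&lt;p&gt;Reassignment through a reference — visible to the caller&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;void replace(std::string&amp;amp; text) {
    text = "replaced";    // assigns to the CALLER'S variable
}

std::string original = "hello";
replace(original);
// original is now "replaced" — in JavaScript, `list = [...]` inside
// a function would only rebind the local parameter&lt;/code&gt;&lt;/pre&gt;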

&lt;h3 id="const-reference-const"&gt;Const Reference (`const &amp;amp;`)&lt;/h3&gt;

&lt;p&gt;Const reference — read-only alias&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;void print(const std::string&amp;amp; text) {  // can read but not modify
    std::cout &amp;lt;&amp;lt; text;                  // OK — reading
    // text += " nope";                // COMPILER ERROR — can't modify
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is the most common pattern in JSI code. When a function receives data it needs to read but not modify, it takes a &lt;code&gt;const &amp;amp;&lt;/code&gt;. It avoids the cost of copying while preventing accidental mutation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; When you see &lt;code&gt;const jsi::Value&amp;amp;&lt;/code&gt; in a JSI function signature, read it as: "I'm borrowing this value for the duration of this call. I won't modify it, and I won't keep it after I return." The &lt;code&gt;const&lt;/code&gt; is a &lt;em&gt;promise&lt;/em&gt; to the compiler — and to every developer who reads your code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="pointers"&gt;Pointers (`*`)&lt;/h3&gt;

&lt;p&gt;A pointer holds the &lt;strong&gt;memory address&lt;/strong&gt; of data. It's a lower-level construct than a reference — references are typically implemented as pointers under the hood, but with safer semantics (can't be null, can't be reseated).&lt;/p&gt;

&lt;p&gt;Pointers — the address operator&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;int value = 42;
int* ptr = &amp;amp;value;    // ptr holds the ADDRESS of value
                      // (&amp;amp; here means "address of", not "reference" — context matters)
std::cout &amp;lt;&amp;lt; *ptr;    // 42 — *ptr DEREFERENCES the pointer (reads the value at that address)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You'll see pointers in JSI function signatures:&lt;/p&gt;

&lt;p&gt;JSI function — pointer to argument array&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;jsi::Value myFunction(
    jsi::Runtime&amp;amp; rt,          // reference to the runtime
    const jsi::Value&amp;amp; thisVal, // const reference to "this"
    const jsi::Value* args,    // POINTER to array of arguments
    size_t count               // how many arguments
) {
    double x = args[0].asNumber();  // args[0] works like array indexing
    // ...
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;args&lt;/code&gt; parameter is a pointer to the first element of an array. &lt;code&gt;args[0]&lt;/code&gt; is the first argument, &lt;code&gt;args[1]&lt;/code&gt; is the second. The &lt;code&gt;count&lt;/code&gt; parameter tells you how many there are. This is C-style array passing — no &lt;code&gt;.length&lt;/code&gt; property, so the count is passed separately.&lt;/p&gt;
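&lt;p&gt;Because &lt;code&gt;count&lt;/code&gt; is the only length information you get, a variadic host function loops over it explicitly. A sketch — the &lt;code&gt;sum&lt;/code&gt; name is illustrative:&lt;/p&gt;

&lt;p&gt;Walking the argument array with count&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;jsi::Value sum(jsi::Runtime&amp;amp; rt, const jsi::Value&amp;amp;,
               const jsi::Value* args, size_t count) {
    double total = 0;
    for (size_t i = 0; i &amp;lt; count; ++i) {   // count guards every access
        if (!args[i].isNumber()) {
            throw jsi::JSError(rt, "sum expects only numbers");
        }
        total += args[i].asNumber();
    }
    return jsi::Value(total);   // sum(1, 2, 3) from JS yields 6
}&lt;/code&gt;&lt;/pre&gt;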

&lt;h3 id="the-cheat-sheet"&gt;The Cheat Sheet&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Symbol&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;th&gt;JS Analogy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Type x&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Value (copy)&lt;/td&gt;
&lt;td&gt;Primitive: &lt;code&gt;let x = 5&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Type&amp;amp; x&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Reference (alias)&lt;/td&gt;
&lt;td&gt;Object parameter: &lt;code&gt;function f(obj)&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;const Type&amp;amp; x&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Read-only reference&lt;/td&gt;
&lt;td&gt;A read-only view — others may still have write access&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Type* x&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Pointer (memory address)&lt;/td&gt;
&lt;td&gt;No direct analogy — closest is a weak reference&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;&amp;amp;x&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;"Address of x" (in an expression)&lt;/td&gt;
&lt;td&gt;No analogy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;*x&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;"Value at address x" (dereference)&lt;/td&gt;
&lt;td&gt;No analogy&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Figure 2: C++ parameter passing symbols. The &lt;code&gt;&amp;amp;&lt;/code&gt; does double duty — in a type declaration it means "reference," in an expression it means "address of."&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; The &lt;code&gt;&amp;amp;&lt;/code&gt; symbol has two completely different meanings depending on context. In a &lt;strong&gt;type declaration&lt;/strong&gt; (&lt;code&gt;std::string&amp;amp; text&lt;/code&gt;), it means "reference to." In an &lt;strong&gt;expression&lt;/strong&gt; (&lt;code&gt;int* ptr = &amp;amp;value&lt;/code&gt;), it means "address of." This trips up every JS developer learning C++. When you see &lt;code&gt;&amp;amp;&lt;/code&gt;, check whether it's next to a type or next to a variable name.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="concept-3-raii-the-destroyer-pattern"&gt;Concept 3: RAII (The Destroyer Pattern)&lt;/h2&gt;

&lt;p&gt;RAII — Resource Acquisition Is Initialization — is the most important C++ concept for JSI development. It has the worst name in computer science, but the idea is simple.&lt;/p&gt;

&lt;p&gt;In JavaScript, you write cleanup code manually:&lt;/p&gt;

&lt;p&gt;JavaScript — manual cleanup with try/finally&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;function readFile(path) {
    const handle = openFile(path);
    try {
        return handle.read();
    } finally {
        handle.close();  // you must remember this
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you forget the &lt;code&gt;finally&lt;/code&gt;, the file handle leaks. If an exception is thrown after &lt;code&gt;openFile()&lt;/code&gt; but before the &lt;code&gt;try&lt;/code&gt; block is entered, the handle leaks too. It's fragile.&lt;/p&gt;

&lt;p&gt;In C++, RAII means: &lt;strong&gt;the constructor acquires the resource, and the destructor releases it.&lt;/strong&gt; Since destructors run automatically when objects leave scope (stack unwinding), cleanup is guaranteed — even if exceptions are thrown.&lt;/p&gt;

&lt;p&gt;C++ — RAII makes cleanup automatic&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class FileHandle {
    FILE* file_;
public:
    FileHandle(const char* path) : file_(fopen(path, "r")) {   // acquire
        if (!file_) throw std::runtime_error("Failed to open file");
    }
    ~FileHandle() { fclose(file_); }                             // release — runs automatically

    std::string read() { /* ... */ }
};

std::string readFile(const char* path) {
    FileHandle handle(path);     // constructor opens the file
    auto content = handle.read();
    return content;
}  // ← destructor runs here — file is closed, guaranteed
   //   even if handle.read() threw an exception&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;~FileHandle()&lt;/code&gt; is a &lt;strong&gt;destructor&lt;/strong&gt; — a function that runs automatically when the object is destroyed. For stack-allocated objects, that means when the scope ends (the closing &lt;code&gt;}&lt;/code&gt;). For heap-allocated objects, it means when &lt;code&gt;delete&lt;/code&gt; is called (or when a smart pointer decides it's time).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; RAII is not about files. It's about &lt;em&gt;any&lt;/em&gt; resource — memory, network connections, locks, GPU buffers, audio sessions. The pattern is always the same: acquire in the constructor, release in the destructor, and let scope determine lifetime. In JSI modules, HostObjects use RAII for their C++ state: when JavaScript's garbage collector collects the HostObject, the C++ destructor runs and cleans up native resources.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's the mental model:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;JavaScript:                          C++ (RAII):

  const x = acquire();                {
  try {                                 Resource x(...);  // acquire
    use(x);                             use(x);
  } finally {                        }  // ← destructor releases
    release(x);                         //   automatically, even
  }                                     //   on exception&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 3: RAII eliminates manual cleanup. The closing brace IS the &lt;code&gt;finally&lt;/code&gt; block.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The reason RAII matters for JSI: native modules manage resources that the JavaScript garbage collector knows nothing about — audio buffers, file handles, database connections, native thread pools. RAII ensures these resources are cleaned up deterministically, not "whenever the GC gets around to it."&lt;/p&gt;
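&lt;p&gt;The standard library ships this pattern for locks: &lt;code&gt;std::lock_guard&lt;/code&gt; acquires a mutex in its constructor and releases it in its destructor, the same shape as the &lt;code&gt;FileHandle&lt;/code&gt; above. A minimal sketch (the &lt;code&gt;updateCache&lt;/code&gt; name is illustrative):&lt;/p&gt;

&lt;p&gt;RAII for locks — std::lock_guard&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;mutex&amp;gt;

std::mutex cacheMutex;

void updateCache() {
    std::lock_guard&amp;lt;std::mutex&amp;gt; lock(cacheMutex);  // constructor locks
    // ... touch shared state; throwing here is safe ...
}  // ← destructor unlocks, guaranteed, with no manual unlock() to forget&lt;/code&gt;&lt;/pre&gt;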




&lt;h2 id="concept-4-smart-pointers-automatic-heap-management"&gt;Concept 4: Smart Pointers (Automatic Heap Management)&lt;/h2&gt;

&lt;p&gt;Raw &lt;code&gt;new&lt;/code&gt; and &lt;code&gt;delete&lt;/code&gt; are C++'s type-safe, object-aware replacements for C's &lt;code&gt;malloc&lt;/code&gt; and &lt;code&gt;free&lt;/code&gt;. Unlike &lt;code&gt;malloc&lt;/code&gt;/&lt;code&gt;free&lt;/code&gt;, &lt;code&gt;new&lt;/code&gt; calls constructors and &lt;code&gt;delete&lt;/code&gt; calls destructors — but they're still error-prone when used manually. Modern C++ uses &lt;strong&gt;smart pointers&lt;/strong&gt; — RAII wrappers around heap pointers that automatically &lt;code&gt;delete&lt;/code&gt; the memory when it's no longer needed.&lt;/p&gt;

&lt;p&gt;There are two smart pointers you need to know. Think of them as two different ownership policies.&lt;/p&gt;

&lt;h3 id="std-unique-ptr-exclusive-ownership"&gt;`std::unique_ptr` — Exclusive Ownership&lt;/h3&gt;

&lt;p&gt;A &lt;code&gt;unique_ptr&lt;/code&gt; owns its data exclusively. Nobody else can own it. When the &lt;code&gt;unique_ptr&lt;/code&gt; is destroyed, the data is freed. You cannot copy it — you can only &lt;strong&gt;move&lt;/strong&gt; it (transfer ownership).&lt;/p&gt;

&lt;p&gt;unique_ptr — one owner, automatic cleanup&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;memory&amp;gt;

void example() {
    // Create a unique_ptr — it owns the AudioBuffer
    auto buffer = std::make_unique&amp;lt;AudioBuffer&amp;gt;(1024);
    buffer-&amp;gt;fill(0.0f);    // use it like a regular pointer

    // auto copy = buffer;  // ❌ COMPILER ERROR — can't copy a unique_ptr
    auto moved = std::move(buffer);  // ✓ Transfer ownership
    // buffer is now nullptr — moved owns the data
}  // ← moved is destroyed here, AudioBuffer is freed&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The JavaScript analogy: imagine a &lt;code&gt;const&lt;/code&gt; reference that you can't share. Only one variable can point to the data at a time. If you want to pass it somewhere else, you &lt;code&gt;move&lt;/code&gt; it — the original becomes &lt;code&gt;null&lt;/code&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;unique_ptr ownership:

  auto a = make_unique&amp;lt;X&amp;gt;();     a ──────▶ [X on heap]

  auto b = std::move(a);         a ──▶ nullptr
                                 b ──────▶ [X on heap]

  // b goes out of scope          b destroyed → [X freed]&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 4: &lt;code&gt;unique_ptr&lt;/code&gt; ownership transfer. Only one pointer to the data at a time. Moving transfers ownership and nullifies the source.&lt;/em&gt;&lt;/p&gt;

&lt;h3 id="std-shared-ptr-shared-ownership"&gt;`std::shared_ptr` — Shared Ownership&lt;/h3&gt;

&lt;p&gt;A &lt;code&gt;shared_ptr&lt;/code&gt; lets multiple owners share the same data. It maintains a &lt;strong&gt;reference count&lt;/strong&gt; — every copy increments the count, every destruction decrements it. When the count hits zero, the data is freed.&lt;/p&gt;

&lt;p&gt;shared_ptr — reference-counted ownership&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;memory&amp;gt;

void example() {
    auto config = std::make_shared&amp;lt;AppConfig&amp;gt;();  // refcount = 1

    auto copy1 = config;   // refcount = 2 (both point to same AppConfig)
    auto copy2 = config;   // refcount = 3

    copy1.reset();          // refcount = 2 (copy1 releases its share)
    copy2.reset();          // refcount = 1
}  // config destroyed → refcount = 0 → AppConfig freed&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is closest to how JavaScript's garbage collector works — the object lives as long as &lt;em&gt;someone&lt;/em&gt; references it. The difference: &lt;code&gt;shared_ptr&lt;/code&gt; uses deterministic reference counting (freed immediately when count hits zero), while JS uses tracing GC (freed "eventually" during a GC pass).&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;shared_ptr reference counting:

  auto a = make_shared&amp;lt;X&amp;gt;();     a ──────▶ [X] refcount: 1
  auto b = a;                    a ──────▶ [X] refcount: 2
                                 b ──────┘
  a.reset();                     b ──────▶ [X] refcount: 1
  b.reset();                               [X] refcount: 0 → freed&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 5: &lt;code&gt;shared_ptr&lt;/code&gt; reference counting. Multiple pointers to the same data. Freed when the last one lets go.&lt;/em&gt;&lt;/p&gt;

&lt;h3 id="which-one-for-jsi"&gt;Which One for JSI?&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Smart Pointer&lt;/th&gt;
&lt;th&gt;Use When&lt;/th&gt;
&lt;th&gt;JSI Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;unique_ptr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;One owner, no sharing needed&lt;/td&gt;
&lt;td&gt;Internal buffers, temporary computation results&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;shared_ptr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Multiple owners, or exposed to JS&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;HostObjects&lt;/strong&gt; — JS GC and C++ code both need access&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The critical one for JSI is &lt;code&gt;shared_ptr&lt;/code&gt;. When you create a HostObject (a C++ object exposed to JavaScript), it's wrapped in a &lt;code&gt;std::shared_ptr&lt;/code&gt;. The JavaScript garbage collector holds one reference, and your C++ code may hold others. The HostObject is destroyed only when &lt;em&gt;both&lt;/em&gt; JS and C++ have released their references.&lt;/p&gt;

&lt;p&gt;HostObject uses shared_ptr — preview of Part 5&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// HostObjects are always shared_ptr — JS GC holds one reference
auto storage = std::make_shared&amp;lt;StorageHostObject&amp;gt;(dbPath);
runtime.global().setProperty(
    runtime, "storage",
    jsi::Object::createFromHostObject(runtime, storage)
);
// Now: JS holds a reference (via GC) AND C++ holds 'storage'
// StorageHostObject is freed when BOTH release&lt;/code&gt;&lt;/pre&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; &lt;code&gt;shared_ptr&lt;/code&gt; has overhead — the reference count is an atomic integer (thread-safe increment/decrement), and each &lt;code&gt;shared_ptr&lt;/code&gt; is larger than a raw pointer (it carries the control block). For hot paths and real-time code, prefer &lt;code&gt;unique_ptr&lt;/code&gt;. We'll see why this matters in Parts 8 and 9 when we build threaded and audio pipeline code.&lt;/p&gt;
&lt;/blockquote&gt;
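
&lt;p&gt;Both costs are observable in plain standard C++, with no JSI headers involved. The 2x size ratio below holds on the mainstream standard libraries (libstdc++, libc++, MSVC), though the standard itself doesn't mandate it:&lt;/p&gt;

&lt;p&gt;shared_ptr overhead, observed&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;memory&amp;gt;

int main() {
    // Size: a shared_ptr stores the object pointer plus a control-block
    // pointer, so it's typically twice the size of a raw pointer.
    assert(sizeof(std::shared_ptr&amp;lt;int&amp;gt;) == 2 * sizeof(int*));
    assert(sizeof(std::unique_ptr&amp;lt;int&amp;gt;) == sizeof(int*));  // zero overhead

    // Refcount: every copy is an atomic increment, every destruction
    // an atomic decrement.
    auto p = std::make_shared&amp;lt;int&amp;gt;(42);
    assert(p.use_count() == 1);
    {
        auto q = p;                  // copy: refcount 1 -&amp;gt; 2
        assert(p.use_count() == 2);
    }                                // q destroyed: refcount 2 -&amp;gt; 1
    assert(p.use_count() == 1);
}&lt;/code&gt;&lt;/pre&gt;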




&lt;h2 id="concept-5-lambdas-c-closures"&gt;Concept 5: Lambdas (C++ Closures)&lt;/h2&gt;

&lt;p&gt;Lambdas are the C++ concept you'll recognize most immediately. They're closures — anonymous functions that can capture variables from their surrounding scope.&lt;/p&gt;

&lt;p&gt;JavaScript closure&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;function makeCounter() {
    let count = 0;
    return () =&amp;gt; ++count;
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;C++ lambda — the same pattern&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;auto makeCounter() {
    int count = 0;
    return [count]() mutable { return ++count; };
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The syntax looks different, but the observable output is the same: call the returned function three times and you get 1, 2, 3. The &lt;em&gt;mechanism&lt;/em&gt; differs — JS captures the variable binding (shared with other closures from the same scope), while C++ &lt;code&gt;[count] mutable&lt;/code&gt; captures a private copy — but for a single returned counter, the result is identical.&lt;/p&gt;
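
&lt;p&gt;Here's that claim as a runnable check, plus the one place the mechanisms visibly diverge: copying a C++ lambda copies its captured state, so two copies count independently, while two JS closures over the same &lt;code&gt;count&lt;/code&gt; binding would share it:&lt;/p&gt;

&lt;p&gt;Verifying the counter, then copying it&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;

auto makeCounter() {
    int count = 0;
    return [count]() mutable { return ++count; };
}

int main() {
    auto counter = makeCounter();
    assert(counter() == 1);
    assert(counter() == 2);
    assert(counter() == 3);

    auto counter2 = counter;   // copies the lambda AND its captured count (now 3)
    assert(counter2() == 4);
    assert(counter() == 4);    // independent copies; JS closures would share
}&lt;/code&gt;&lt;/pre&gt;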

&lt;h3 id="lambda-anatomy"&gt;Lambda Anatomy&lt;/h3&gt;

&lt;pre&gt;&lt;code&gt;[capture](parameters) -&amp;gt; return_type { body }&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;strong&gt;capture list&lt;/strong&gt; &lt;code&gt;[...]&lt;/code&gt; is what makes C++ lambdas different from JavaScript closures. In JavaScript, closures automatically capture the enclosing scope's variable bindings — they see mutations to those variables, similar in &lt;em&gt;behavior&lt;/em&gt; to C++ capture-by-reference (though JS keeps the scope alive via GC, so there's no dangling reference risk). In C++, you explicitly choose what to capture and &lt;em&gt;how&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;Capture modes&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;int x = 10;
std::string name = "JSI";

auto byValue  = [x]()        { return x; };           // copies x (snapshot)
auto byRef    = [&amp;amp;x]()       { return x; };           // references x (live alias)
auto allByVal = [=]()        { return x; };           // copies ALL variables
auto allByRef = [&amp;amp;]()        { return x; };           // references ALL variables
auto mixed    = [x, &amp;amp;name]() { return name + "!"; };  // x by value, name by ref&lt;/code&gt;&lt;/pre&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capture&lt;/th&gt;
&lt;th&gt;Syntax&lt;/th&gt;
&lt;th&gt;JS Analogy&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;By value&lt;/td&gt;
&lt;td&gt;`[x]`&lt;/td&gt;
&lt;td&gt;`const x_copy = x` then use `x_copy`&lt;/td&gt;
&lt;td&gt;Snapshot — changes to `x` outside don't affect the lambda&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;By reference&lt;/td&gt;
&lt;td&gt;`[&amp;amp;x]`&lt;/td&gt;
&lt;td&gt;Closest to JS closure behavior&lt;/td&gt;
&lt;td&gt;Live alias — sees changes to `x`, and can modify it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;All by value&lt;/td&gt;
&lt;td&gt;`[=]`&lt;/td&gt;
&lt;td&gt;No direct analogy&lt;/td&gt;
&lt;td&gt;Copies everything referenced in the body&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;All by reference&lt;/td&gt;
&lt;td&gt;`[&amp;amp;]`&lt;/td&gt;
&lt;td&gt;Closest to default JS closures&lt;/td&gt;
&lt;td&gt;References everything — most like JavaScript&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Figure 6: Lambda capture modes. JavaScript closures always share the enclosing scope's bindings. C++ makes you choose — and the choice matters for thread safety.&lt;/em&gt;&lt;/p&gt;
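
&lt;p&gt;The snapshot-versus-alias distinction is easy to verify: mutate the variable after the lambdas are created and see which one notices:&lt;/p&gt;

&lt;p&gt;Snapshot vs live alias&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;

int main() {
    int x = 10;
    auto byValue = [x]()  { return x; };   // snapshot taken here
    auto byRef   = [&amp;amp;x]() { return x; };  // live alias

    x = 99;                                // mutate after capture

    assert(byValue() == 10);  // the snapshot is unaffected
    assert(byRef() == 99);    // the alias sees the new value
}&lt;/code&gt;&lt;/pre&gt;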

&lt;h3 id="why-captures-matter-for-jsi"&gt;Why Captures Matter for JSI&lt;/h3&gt;

&lt;p&gt;This is where the JSI connection becomes critical. When you create a JSI host function, you typically use a lambda:&lt;/p&gt;

&lt;p&gt;JSI host function with lambda capture&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;void install(jsi::Runtime&amp;amp; runtime, std::shared_ptr&amp;lt;Database&amp;gt; db) {
    auto get = jsi::Function::createFromHostFunction(
        runtime,
        jsi::PropNameID::forAscii(runtime, "get"),
        1,  // argument count
        [db](jsi::Runtime&amp;amp; rt,              // ← capture db by value (shared_ptr copy)
             const jsi::Value&amp;amp; thisVal,
             const jsi::Value* args,
             size_t count) -&amp;gt; jsi::Value {
            auto key = args[0].asString(rt).utf8(rt);
            auto result = db-&amp;gt;get(key);     // use the captured database
            return jsi::String::createFromUtf8(rt, result);
        }
    );
    runtime.global().setProperty(runtime, "dbGet", std::move(get));
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Notice: the lambda captures &lt;code&gt;db&lt;/code&gt; by &lt;strong&gt;value&lt;/strong&gt; — but &lt;code&gt;db&lt;/code&gt; is a &lt;code&gt;shared_ptr&lt;/code&gt;, so capturing by value &lt;em&gt;copies the shared pointer&lt;/em&gt;, incrementing the reference count. The lambda now shares ownership of the database. Even if the original &lt;code&gt;db&lt;/code&gt; variable goes out of scope, the lambda's copy keeps the database alive.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Think about it:&lt;/strong&gt; What would happen if we captured &lt;code&gt;db&lt;/code&gt; by reference (&lt;code&gt;[&amp;amp;db]&lt;/code&gt;) instead of by value (&lt;code&gt;[db]&lt;/code&gt;)? The &lt;code&gt;install&lt;/code&gt; function would return, &lt;code&gt;db&lt;/code&gt; (a local variable) would be destroyed, and the lambda would hold a dangling reference — a pointer to memory that no longer exists. The next time JavaScript called &lt;code&gt;dbGet()&lt;/code&gt;, it would crash. This is why JSI lambdas almost always capture &lt;code&gt;shared_ptr&lt;/code&gt; by value, not by reference.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This pattern — capturing &lt;code&gt;shared_ptr&lt;/code&gt; by value inside JSI lambdas — appears in virtually every native module. It's how C++ objects stay alive as long as JavaScript needs them.&lt;/p&gt;
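
&lt;p&gt;The lifetime mechanics don't need JSI to demonstrate. In this sketch, &lt;code&gt;std::function&lt;/code&gt; stands in for a host function and &lt;code&gt;Database&lt;/code&gt; is a made-up placeholder class; what matters is the capture, which keeps the object alive after the installing function returns:&lt;/p&gt;

&lt;p&gt;Capture by value keeps the object alive&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;functional&amp;gt;
#include &amp;lt;memory&amp;gt;
#include &amp;lt;string&amp;gt;

struct Database {  // placeholder for any native resource
    std::string get(const std::string&amp;amp;) const { return "value"; }
};

std::function&amp;lt;std::string()&amp;gt; install() {
    auto db = std::make_shared&amp;lt;Database&amp;gt;();   // refcount: 1
    return [db] { return db-&amp;gt;get("key"); };   // capture copies it: 2
}  // local 'db' destroyed here: refcount back to 1, NOT zero

int main() {
    auto fn = install();
    assert(fn() == "value");  // the Database is still alive
}  // fn destroyed: refcount hits 0, Database freed (RAII)&lt;/code&gt;&lt;/pre&gt;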




&lt;h2 id="move-semantics-transferring-ownership"&gt;Move Semantics: Transferring Ownership&lt;/h2&gt;

&lt;p&gt;One more concept ties everything together. You've already seen &lt;code&gt;std::move&lt;/code&gt; with &lt;code&gt;unique_ptr&lt;/code&gt;. Let's understand what it actually does.&lt;/p&gt;

&lt;p&gt;In JavaScript, assigning an object doesn't copy it:&lt;/p&gt;

&lt;p&gt;JavaScript — objects are shared, not copied&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const a = { data: [1, 2, 3] };
const b = a;      // b and a point to the SAME object
b.data.push(4);   // a.data is also [1, 2, 3, 4]&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In C++, assigning an object &lt;strong&gt;copies&lt;/strong&gt; it by default:&lt;/p&gt;

&lt;p&gt;C++ — objects are copied by default&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;std::vector&amp;lt;int&amp;gt; a = {1, 2, 3};
std::vector&amp;lt;int&amp;gt; b = a;     // b is a COPY — a and b are independent
b.push_back(4);             // a is still {1, 2, 3}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Copying is safe but expensive. If &lt;code&gt;a&lt;/code&gt; holds a megabyte of data, &lt;code&gt;b = a&lt;/code&gt; copies that megabyte. &lt;code&gt;std::move&lt;/code&gt; says: "I'm done with &lt;code&gt;a&lt;/code&gt; — transfer its guts to &lt;code&gt;b&lt;/code&gt; without copying."&lt;/p&gt;

&lt;p&gt;Move — transfer without copying&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;std::vector&amp;lt;int&amp;gt; a = {1, 2, 3};
std::vector&amp;lt;int&amp;gt; b = std::move(a);  // b steals a's internal buffer
// a is now in a "moved-from" state — valid but unspecified (typically empty)
// b holds {1, 2, 3} — no copy happened&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Think of it like this: a normal copy is photocopying a 100-page document. A move is handing someone the document — instant, but now you don't have it anymore.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Copy:    a ──▶ [1,2,3]         b ──▶ [1,2,3]    (two copies exist)

Move:    a ──▶ []              b ──▶ [1,2,3]    (data transferred, no copy)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 7: Copy vs move. Copy duplicates the data. Move transfers ownership — the source is left in a valid but unspecified state (typically empty).&lt;/em&gt;&lt;/p&gt;
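
&lt;p&gt;You can watch the transfer happen. Moving a &lt;code&gt;std::vector&lt;/code&gt; is guaranteed to preserve pointers into its elements, so the buffer address before the move matches the destination's address after it:&lt;/p&gt;

&lt;p&gt;Observing the move&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;utility&amp;gt;
#include &amp;lt;vector&amp;gt;

int main() {
    std::vector&amp;lt;int&amp;gt; a = {1, 2, 3};
    const int* buffer = a.data();        // address of a's heap buffer

    std::vector&amp;lt;int&amp;gt; b = std::move(a);  // b steals that buffer

    assert(b.size() == 3 &amp;amp;&amp;amp; b[0] == 1);
    assert(b.data() == buffer);          // same buffer: nothing was copied
    // 'a' is moved-from: only assign to it or destroy it from here on
}&lt;/code&gt;&lt;/pre&gt;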

&lt;p&gt;You'll see &lt;code&gt;std::move&lt;/code&gt; in JSI code when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transferring a &lt;code&gt;unique_ptr&lt;/code&gt; to a new owner&lt;/li&gt;
&lt;li&gt;Passing a large object into a function without copying it&lt;/li&gt;
&lt;li&gt;Returning a constructed object from a function efficiently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Move in JSI context&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Moving a JSI function into a property (no copy needed)
auto fn = jsi::Function::createFromHostFunction(rt, name, 0, callback);
rt.global().setProperty(rt, "myFunc", std::move(fn));
// fn is now moved-from (valid but unspecified); the global object owns the function&lt;/code&gt;&lt;/pre&gt;




&lt;h2 id="putting-it-all-together-reading-real-jsi-code"&gt;Putting It All Together: Reading Real JSI Code&lt;/h2&gt;

&lt;p&gt;Let's apply all five concepts to a real-world JSI module. This is a simplified version of what you'd see in a library like &lt;code&gt;react-native-mmkv&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;A complete mini JSI module — every concept in action&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;jsi/jsi.h&amp;gt;
#include &amp;lt;memory&amp;gt;
#include &amp;lt;string&amp;gt;
#include &amp;lt;unordered_map&amp;gt;

using namespace facebook;

// A simple in-memory key-value store
class KeyValueStore {                                    // RAII: constructor acquires,
public:                                                  //       destructor releases
    void set(const std::string&amp;amp; key,                     // const&amp;amp; — read-only reference
             const std::string&amp;amp; value) {
        data_[key] = value;
    }

    std::string get(const std::string&amp;amp; key) const {      // const method — won't modify state
        auto it = data_.find(key);
        if (it != data_.end()) return it-&amp;gt;second;
        return "";
    }

private:
    std::unordered_map&amp;lt;std::string, std::string&amp;gt; data_;  // member storage: lives inside the object, freed with it
};  // destructor frees data_ automatically (RAII)

void installStorage(jsi::Runtime&amp;amp; rt) {                  // reference — aliases the runtime
    // shared_ptr: JS GC and C++ both need access
    auto store = std::make_shared&amp;lt;KeyValueStore&amp;gt;();

    // "set" function — lambda captures store by value (shared_ptr copy, refcount++)
    auto setFn = jsi::Function::createFromHostFunction(
        rt, jsi::PropNameID::forAscii(rt, "set"), 2,
        [store](jsi::Runtime&amp;amp; rt, const jsi::Value&amp;amp;,
                const jsi::Value* args, size_t count) -&amp;gt; jsi::Value {
            auto key = args[0].asString(rt).utf8(rt);    // jsi::String → std::string
            auto val = args[1].asString(rt).utf8(rt);
            store-&amp;gt;set(key, val);                        // use captured shared_ptr
            return jsi::Value::undefined();
        }
    );

    // "get" function — same capture pattern
    auto getFn = jsi::Function::createFromHostFunction(
        rt, jsi::PropNameID::forAscii(rt, "get"), 1,
        [store](jsi::Runtime&amp;amp; rt, const jsi::Value&amp;amp;,
                const jsi::Value* args, size_t count) -&amp;gt; jsi::Value {
            auto key = args[0].asString(rt).utf8(rt);
            auto result = store-&amp;gt;get(key);
            return jsi::String::createFromUtf8(rt, result);
        }
    );

    // Install into JavaScript global scope — move (no copy needed)
    auto storage = jsi::Object(rt);
    storage.setProperty(rt, "set", std::move(setFn));    // move: transfer ownership
    storage.setProperty(rt, "get", std::move(getFn));
    rt.global().setProperty(rt, "storage", std::move(storage));
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Using it from JavaScript&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;storage.set('theme', 'dark');                // synchronous — no await
const theme = storage.get('theme');          // synchronous — returns immediately
console.log(theme);                          // "dark"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;output&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;dark&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Every concept from this post appears in that code:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Line&lt;/th&gt;
&lt;th&gt;Concept&lt;/th&gt;
&lt;th&gt;What's Happening&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;`jsi::Runtime&amp;amp; rt`&lt;/td&gt;
&lt;td&gt;**Reference**&lt;/td&gt;
&lt;td&gt;Borrows the runtime — doesn't own it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;`const jsi::Value&amp;amp;`&lt;/td&gt;
&lt;td&gt;**Const reference**&lt;/td&gt;
&lt;td&gt;Read-only access to the `this` value&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;`const jsi::Value* args`&lt;/td&gt;
&lt;td&gt;**Pointer**&lt;/td&gt;
&lt;td&gt;Points to the argument array&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;`std::make_shared()`&lt;/td&gt;
&lt;td&gt;**Smart pointer**&lt;/td&gt;
&lt;td&gt;Heap allocation with shared ownership&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;`[store](...) { ... }`&lt;/td&gt;
&lt;td&gt;**Lambda + capture**&lt;/td&gt;
&lt;td&gt;Closure capturing `shared_ptr` by value&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;`std::move(setFn)`&lt;/td&gt;
&lt;td&gt;**Move**&lt;/td&gt;
&lt;td&gt;Transfers function ownership to the object&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;`~KeyValueStore()` (implicit)&lt;/td&gt;
&lt;td&gt;**RAII**&lt;/td&gt;
&lt;td&gt;Destructor frees `data_` when store is destroyed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2 id="the-concepts-you-don-t-need-yet"&gt;The Concepts You Don't Need (Yet)&lt;/h2&gt;

&lt;p&gt;C++ is enormous. Here's what you can safely ignore for JSI work:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;C++ Feature&lt;/th&gt;
&lt;th&gt;Why You Don't Need It&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Templates (advanced)&lt;/td&gt;
&lt;td&gt;JSI uses them internally, but you rarely write them&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multiple inheritance&lt;/td&gt;
&lt;td&gt;JSI uses single inheritance (`HostObject` base class)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Operator overloading (advanced)&lt;/td&gt;
&lt;td&gt;JSI uses move-assignment operators and basic comparisons internally, but you won't write custom overloads for JSI modules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;`const_cast` / `reinterpret_cast`&lt;/td&gt;
&lt;td&gt;Have legitimate uses in systems code, but you won't need them for JSI modules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Manual `new` / `delete`&lt;/td&gt;
&lt;td&gt;Use `make_unique` and `make_shared` instead&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Preprocessor macros (`#define`)&lt;/td&gt;
&lt;td&gt;Occasionally for platform `#ifdef`, but not for logic&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you encounter these in third-party native modules, you can usually understand the surrounding code without understanding the advanced feature.&lt;/p&gt;




&lt;h2 id="key-takeaways"&gt;Key Takeaways&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stack vs heap.&lt;/strong&gt; Stack memory is automatic — allocated when a function starts, freed when it returns. Heap memory outlives functions but must be managed. For JSI modules, smart pointers manage the heap for you.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;References (&lt;code&gt;&amp;amp;&lt;/code&gt;) and pointers (&lt;code&gt;*&lt;/code&gt;).&lt;/strong&gt; References are aliases — another name for existing data. &lt;code&gt;const &amp;amp;&lt;/code&gt; means "read-only borrow." Pointers hold memory addresses. In JSI code, you'll see &lt;code&gt;jsi::Runtime&amp;amp;&lt;/code&gt; (borrow the runtime) and &lt;code&gt;const jsi::Value*&lt;/code&gt; (pointer to the argument array).&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;RAII.&lt;/strong&gt; Constructors acquire resources, destructors release them. Scope determines lifetime. This is C++'s answer to &lt;code&gt;try/finally&lt;/code&gt; — but it's built into the language and can't be forgotten. Every HostObject relies on RAII to clean up native resources when JavaScript garbage-collects it.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Smart pointers.&lt;/strong&gt; &lt;code&gt;unique_ptr&lt;/code&gt; = one owner, automatic cleanup. &lt;code&gt;shared_ptr&lt;/code&gt; = shared ownership via reference counting. HostObjects use &lt;code&gt;shared_ptr&lt;/code&gt; because both JavaScript's GC and C++ code need to hold references to the same object.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Lambdas capture explicitly.&lt;/strong&gt; Unlike JavaScript closures (which automatically share the enclosing scope's variable bindings), C++ lambdas require you to declare what they capture and how. For JSI, the key pattern is: capture &lt;code&gt;shared_ptr&lt;/code&gt; by value inside lambdas so the native object stays alive as long as JavaScript needs it.&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2 id="what-s-next"&gt;What's Next&lt;/h2&gt;

&lt;p&gt;You now have the C++ vocabulary. You know where data lives (stack vs heap), how to borrow it (&lt;code&gt;&amp;amp;&lt;/code&gt;), how to manage it (&lt;code&gt;unique_ptr&lt;/code&gt;, &lt;code&gt;shared_ptr&lt;/code&gt;), how to clean it up (RAII), and how to write closures (lambdas with explicit captures).&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-4-your-first-react-native-jsi-function/" rel="noopener noreferrer"&gt;&lt;strong&gt;Part 4: Your First JSI Function&lt;/strong&gt;&lt;/a&gt;, we put it all together. You'll write a JSI function from scratch — registering it with the runtime, validating arguments, handling errors, and calling it from JavaScript. No boilerplate generators, no codegen. Just raw JSI.&lt;/p&gt;

&lt;p&gt;Part 3 gave you the vocabulary. Part 4 gives you the verb.&lt;/p&gt;





&lt;h2 id="references-further-reading"&gt;References &amp;amp; Further Reading&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://en.cppreference.com/w/cpp/memory/unique_ptr" rel="noopener noreferrer"&gt;cppreference — std::unique_ptr&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.cppreference.com/w/cpp/memory/shared_ptr" rel="noopener noreferrer"&gt;cppreference — std::shared_ptr&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.cppreference.com/w/cpp/language/raii" rel="noopener noreferrer"&gt;cppreference — RAII (Resource Acquisition Is Initialization)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.cppreference.com/w/cpp/language/lambda" rel="noopener noreferrer"&gt;cppreference — Lambda Expressions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.cppreference.com/w/cpp/language/move_constructor" rel="noopener noreferrer"&gt;cppreference — Move Semantics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines" rel="noopener noreferrer"&gt;C++ Core Guidelines — Bjarne Stroustrup &amp;amp; Herb Sutter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/facebook/react-native/blob/main/packages/react-native/ReactCommon/jsi/jsi/jsi.h" rel="noopener noreferrer"&gt;JSI Header — jsi.h (API Surface, facebook/react-native)&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>reactnative</category>
      <category>cpp</category>
      <category>jsi</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>React Native JSI Deep Dive — Part 2: The Bridge is Dead, Long Live JSI</title>
      <dc:creator>Rahul Garg</dc:creator>
      <pubDate>Wed, 18 Mar 2026 08:51:20 +0000</pubDate>
      <link>https://dev.to/xtmntxraphaelx/react-native-jsi-deep-dive-part-2-the-bridge-is-dead-long-live-jsi-20nc</link>
      <guid>https://dev.to/xtmntxraphaelx/react-native-jsi-deep-dive-part-2-the-bridge-is-dead-long-live-jsi-20nc</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs. We &lt;em&gt;should&lt;/em&gt; forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
— Donald Knuth, &lt;em&gt;Structured Programming with go to Statements&lt;/em&gt;, 1974&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Excerpt:&lt;/strong&gt; Every React Native native module call used to pass through a single chokepoint: the Bridge. It serialized every value to JSON, batched every call into an async queue, and made it impossible to build anything that needed to respond in under 16 milliseconds. JSI replaced it with something deceptively simple — a direct C++ function pointer. No serialization. No queue. No bridge. This post traces a native module call through both architectures so you can see exactly what changed and why it matters.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: React Native JSI Deep Dive&lt;/strong&gt; (12 parts — series in progress)
&lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-1-the-runtime-you-never-see/" rel="noopener noreferrer"&gt;Part 1: The Runtime You Never See&lt;/a&gt; | &lt;strong&gt;Part 2: The Bridge is Dead, Long Live JSI (You are here)&lt;/strong&gt; | Part 3: C++ Foundations | Part 4: JSI Functions | Part 5: HostObjects | Part 6: Memory Ownership | Part 7: Platform Wiring | Part 8: Threading &amp;amp; Async | Part 9: Audio Pipeline | Part 10: Storage Engine | Part 11: Module Approaches | Part 12: Debugging&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="quick-recap"&gt;Quick Recap&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-1-the-runtime-you-never-see/" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, we established that React Native runs as three execution domains — JS thread (Hermes), UI thread (platform), and native background threads — communicating via message passing. The JS engine exposes a C++ interface called &lt;code&gt;jsi::Runtime&lt;/code&gt;. And we left off with a teaser: &lt;em&gt;before&lt;/em&gt; JSI, the messaging system between these worlds was a JSON serialization layer called the Bridge.&lt;/p&gt;

&lt;p&gt;Now let's open it up and see what was actually inside.&lt;/p&gt;




&lt;h2 id="the-problem-the-invisible-tax"&gt;The Problem: The Invisible Tax&lt;/h2&gt;

&lt;p&gt;Here's a native module you might write in the old architecture:&lt;/p&gt;

&lt;p&gt;android/src/main/java/com/myapp/MathModule.java&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;@ReactMethod
public void multiply(double a, double b, Promise promise) {
    promise.resolve(a * b);
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And from JavaScript:&lt;/p&gt;

&lt;p&gt;App.js&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const result = await NativeModules.MathModule.multiply(3, 7);
console.log(result); // 21&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Two numbers in, one number out. The actual multiplication takes &lt;strong&gt;nanoseconds&lt;/strong&gt;. But in the old architecture, this call took &lt;strong&gt;milliseconds&lt;/strong&gt; — orders of magnitude slower than the work itself.&lt;/p&gt;

&lt;p&gt;Where did all that time go?&lt;/p&gt;




&lt;h2 id="the-bridge-how-it-actually-worked"&gt;The Bridge: How It Actually Worked&lt;/h2&gt;

&lt;p&gt;The Bridge — formally the &lt;code&gt;BatchedBridge&lt;/code&gt; backed by &lt;code&gt;MessageQueue.js&lt;/code&gt; — sat between JavaScript and native code. Nearly every JS ↔ native call passed through it. (A few subsystems bypassed it — notably the native animated driver, which serialized the animation graph once and then ran entirely on the UI thread — but all native module calls and most event dispatch went through the Bridge.)&lt;/p&gt;

&lt;p&gt;Here's what happened when you called &lt;code&gt;multiply(3, 7)&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;JavaScript                          Bridge                              Native
    │                                  │                                   │
    │  NativeModules.MathModule        │                                   │
    │     .multiply(3, 7)              │                                   │
    │                                  │                                   │
    │  1. Serialize call:              │                                   │
    │     moduleIDs: [42],             │                                   │
    │     methodIDs: [3],              │                                   │
    │     params: [[3, 7]]             │                                   │
    │     (three parallel arrays —     │                                   │
    │      numeric IDs, not names)     │                                   │
    │                                  │                                   │
    │  2. Enqueue in batch ──────────▶ │                                   │
    │                                  │                                   │
    │                                  │  3. Wait for batch flush          │
    │                                  │     (≥5ms between JS-initiated    │
    │                                  │      flushes, or next native poll)│
    │                                  │                                   │
    │                                  │  4. Flush batch ────────────────▶ │
    │                                  │                                   │
    │                                  │                    5. JSON.parse  │
    │                                  │                    6. Find module │
    │                                  │                    7. Invoke      │
    │                                  │                       method      │
    │                                  │                    8. Compute:    │
    │                                  │                       3 * 7 = 21  │
    │                                  │                                   │
    │                                  │                    9. Serialize   │
    │                                  │                       result      │
    │                                  │  ◀──────────────── 10. Send back  │
    │                                  │                                   │
    │  11. Deserialize result ◀─────── │                                   │
    │  12. Resolve promise             │                                   │
    │      result = 21                 │                                   │&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 1: A native module call through the Bridge. The actual work (step 8) is a single multiplication. Everything else is overhead.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Count the steps: twelve. The actual computation is step 8 — a single multiplication. The other eleven steps are pure overhead: serialization, queuing, deserialization, dispatch.&lt;/p&gt;

&lt;p&gt;Let's break down the two costs the Bridge imposed.&lt;/p&gt;




&lt;h2 id="cost-1-json-serialization"&gt;Cost 1: JSON Serialization&lt;/h2&gt;

&lt;p&gt;Every value that crossed the Bridge was serialized to JSON on one side and parsed back on the other. Numbers, strings, booleans, arrays, objects — everything was converted to a JSON string, transmitted as bytes, and reconstructed from scratch.&lt;/p&gt;

&lt;p&gt;For &lt;code&gt;multiply(3, 7)&lt;/code&gt;, that means the call is encoded into three parallel arrays — &lt;code&gt;moduleIDs&lt;/code&gt;, &lt;code&gt;methodIDs&lt;/code&gt;, and &lt;code&gt;params&lt;/code&gt; — using numeric IDs that map to registered module and method names. The arguments themselves (&lt;code&gt;[3, 7]&lt;/code&gt;) are JSON-serialized:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;JS side:  Enqueue into batch arrays:
          moduleIDs: [42]        (numeric ID for "MathModule")
          methodIDs: [3]         (numeric ID for "multiply")
          params:    [[3, 7]]    (JSON-serialized arguments)

Native:   Deserialize the batch, look up module 42 / method 3,
          parse the argument array [3, 7]&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For two numbers, this is wasteful but survivable. But consider what happens with real data:&lt;/p&gt;

&lt;p&gt;Sending a large dataset across the Bridge&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Passing 10,000 items to native for processing
NativeModules.DataProcessor.process(items);

// Bridge must:
// 1. JSON.stringify 10,000 objects (~2-5ms for complex objects)
// 2. Copy the resulting string across the bridge
// 3. JSON.parse on the native side (~2-5ms)
// Total serialization overhead: 4-10ms — before native code even starts&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And it wasn't just data size. It was data &lt;em&gt;types&lt;/em&gt;. JSON has no concept of typed arrays, binary data, or ArrayBuffers. If you needed to pass image pixels, audio samples, or any binary data to native code, you had two options: Base64-encode it (inflating size by 33% and adding encoding/decoding overhead) or write it to a temporary file and pass the file path.&lt;/p&gt;

&lt;p&gt;Neither option let you share memory. Every byte was copied at least twice.&lt;/p&gt;
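
&lt;p&gt;The 33% figure falls straight out of the encoding: Base64 turns every 3 input bytes into 4 output characters. A quick sanity check of the arithmetic:&lt;/p&gt;

&lt;p&gt;Base64 inflation, by arithmetic&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;cstddef&amp;gt;

// Encoded size for n input bytes, including '=' padding:
// each 3-byte input group becomes 4 output characters.
constexpr std::size_t base64Size(std::size_t n) {
    return 4 * ((n + 2) / 3);
}

int main() {
    assert(base64Size(3) == 4);
    assert(base64Size(3'000'000) == 4'000'000);  // 3 MB of pixels: a 4 MB string
    // ...and that's before the encode/decode CPU time and the extra
    // copies on both sides of the Bridge.
}&lt;/code&gt;&lt;/pre&gt;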

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; The serialization cost was proportional to the &lt;em&gt;size&lt;/em&gt; of data being transferred, not the &lt;em&gt;complexity&lt;/em&gt; of the operation. A native function that took 0.1ms to execute could spend 10ms just getting its arguments across the Bridge. The Bridge made cheap operations expensive and made transferring large data impractical.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="cost-2-async-only-batching"&gt;Cost 2: Async-Only Batching&lt;/h2&gt;

&lt;p&gt;The Bridge was asynchronous. Every call — even a simple multiplication that could return instantly — was enqueued in a batch queue and processed later. There was no way to make a synchronous native call.&lt;/p&gt;

&lt;p&gt;Here's why this mattered. Imagine you're building a key-value store:&lt;/p&gt;

&lt;p&gt;The async tax on a simple lookup&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// What you WANT to write (synchronous, like localStorage):
const theme = Storage.get('theme');
renderApp(theme);

// What you HAD to write (async, because the Bridge):
const theme = await NativeModules.Storage.get('theme');
renderApp(theme);&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;await&lt;/code&gt; doesn't just add syntax. It suspends the async function, and because the Bridge dispatched calls to a separate native thread, the result couldn't come back until a future event loop cycle — after the native thread received the batch, executed the call, and sent the result back across the Bridge. For a cache lookup that takes microseconds on the native side, this cross-thread round-trip added milliseconds of latency.&lt;/p&gt;

&lt;p&gt;And because the Bridge batched calls, multiple native calls from the same JS execution frame were collected and sent together:&lt;/p&gt;

&lt;p&gt;Batching behavior&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// These three calls don't execute immediately.
// They're collected into a batch:
NativeModules.Analytics.track('screen_view');
NativeModules.Storage.get('user_id');
NativeModules.Logger.info('App mounted');

// The batch is flushed on the next event loop cycle.
// All three calls cross the Bridge together.
// Results come back asynchronously, in an unspecified order.&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Batching was an optimization — sending one message with three calls is cheaper than three separate messages. But it meant you couldn't get a result &lt;em&gt;during&lt;/em&gt; the current execution frame. Every native call was a round trip through the event loop.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; The Bridge's batching behavior created subtle bugs. If you called two native methods that depended on each other — say, &lt;code&gt;write('key', 'value')&lt;/code&gt; followed by &lt;code&gt;read('key')&lt;/code&gt; — they were batched together, but execution order on the native side wasn't guaranteed to match call order. Race conditions in Bridge-based native modules were a common source of bugs that were nearly impossible to reproduce.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="where-it-broke-down"&gt;Where It Broke Down&lt;/h2&gt;

&lt;p&gt;For simple apps with occasional native calls, the Bridge was fine. Millions of React Native apps shipped on it. But it had a ceiling, and three categories of work hit that ceiling hard:&lt;/p&gt;

&lt;h3 id="high-frequency-events"&gt;High-Frequency Events&lt;/h3&gt;

&lt;p&gt;JS-driven scroll-linked animations were a common pain point. When a scroll event needed to update a JS-driven animation, each event triggered a Bridge round-trip: the event was serialized to JSON on the native side, deserialized on the JS side, processed by JavaScript (which computed the new animation value), and the result serialized back to native for the UI update. If any step took longer than the frame budget, the animation stuttered.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Scroll event driving a JS animation:

  UI Thread                Bridge                 JS Thread
     │                       │                       │
     │  Scroll offset ──▶    │                       │
     │                       │  Serialize ──▶        │
     │                       │                       │  Process event
     │                       │                       │  Compute animation
     │                       │    ◀── Serialize      │
     │  ◀── Deserialize      │                       │
     │                       │                       │
     │  Apply update         │                       │
     └───────────────────────┴───────────────────────┘
                     Must complete in &amp;lt;16ms&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 2: A scroll-driven animation round-trip through the Bridge. Each event requires serialization in both directions — all within a single frame budget.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Each round-trip involves serialization and deserialization in both directions — four JSON operations per event. At high scroll velocities, these events can fire dozens of times per second, compounding the overhead quickly. (This is exactly why React Native introduced the native animated driver — &lt;code&gt;useNativeDriver: true&lt;/code&gt; — which bypassed the Bridge entirely for animations. But any scroll-linked logic that &lt;em&gt;required&lt;/em&gt; JavaScript computation had no escape hatch.)&lt;/p&gt;

&lt;h3 id="large-data-transfers"&gt;Large Data Transfers&lt;/h3&gt;

&lt;p&gt;Passing images, audio buffers, or large datasets across the Bridge required serializing the entire payload to JSON (or Base64). There was no way to share a memory pointer. A 1MB audio buffer became a 1.33MB Base64 string that was copied, transmitted, parsed, and decoded — turning a zero-cost pointer share into a multi-millisecond copy operation.&lt;/p&gt;
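
&lt;p&gt;The 1.33x figure falls out of how Base64 works: every 3 bytes become 4 characters. A quick Node sketch (assuming Node's global &lt;code&gt;Buffer&lt;/code&gt;; the buffer contents are a stand-in for real audio):&lt;/p&gt;

```typescript
// Rough illustration of the Base64 tax the Bridge imposed on binary payloads.
// Base64 encodes every 3 bytes as 4 characters, so a payload grows by ~33%
// before it is even copied, transmitted, and parsed.
const oneMegabyte = 1_000_000;
const audioBuffer = Buffer.alloc(oneMegabyte); // pretend this is raw PCM audio
const base64Payload = audioBuffer.toString("base64");

const ratio = base64Payload.length / oneMegabyte;
console.log(ratio.toFixed(2)); // ~1.33 -- a third more data, plus the copy itself
```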

&lt;h3 id="synchronous-lookups"&gt;Synchronous Lookups&lt;/h3&gt;

&lt;p&gt;Some operations are fundamentally synchronous. Reading a cached value, checking a feature flag, getting the current timestamp from a high-resolution native timer — these operations complete in microseconds on the native side. But the Bridge forced them through an async round-trip, adding milliseconds of overhead to microsecond operations.&lt;/p&gt;

&lt;p&gt;This is why libraries like &lt;a href="https://github.com/mrousavy/react-native-mmkv" rel="noopener noreferrer"&gt;react-native-mmkv&lt;/a&gt; couldn't exist in the old architecture. MMKV's entire value proposition is synchronous key-value access — &lt;code&gt;storage.getString('key')&lt;/code&gt; returns immediately, no &lt;code&gt;await&lt;/code&gt;. That's only possible with JSI.&lt;/p&gt;




&lt;h2 id="jsi-the-replacement"&gt;JSI: The Replacement&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/facebook/react-native/tree/main/packages/react-native/ReactCommon/jsi/jsi" rel="noopener noreferrer"&gt;JavaScript Interface&lt;/a&gt; (JSI) replaces the Bridge with something fundamentally different: instead of serializing messages between two separate worlds, JSI lets JavaScript hold &lt;strong&gt;references to C++ host objects and functions&lt;/strong&gt; — managed through the runtime, without any JSON serialization layer in between.&lt;/p&gt;

&lt;p&gt;No serialization. No queue. No batch. No bridge.&lt;/p&gt;

&lt;p&gt;Here's the same &lt;code&gt;multiply&lt;/code&gt; operation with JSI:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;JavaScript                                    C++ (via JSI)
    │                                             │
    │  multiply(3, 7)                             │
    │                                             │
    │  1. Call C++ function pointer ────────────▶  │
    │     (args passed as jsi::Value,              │
    │      no serialization)                      │
    │                                             │  2. Read args directly:
    │                                             │     a = args[0].asNumber()
    │                                             │     b = args[1].asNumber()
    │                                             │  3. Compute: 3 * 7 = 21
    │                                             │  4. Return jsi::Value(21)
    │                                             │
    │  ◀──────────────────────────────────────────│
    │  5. result = 21                             │
    │     (no deserialization)                    │
    │                                             │&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 3: The same multiply call through JSI. Five steps instead of twelve. No serialization, no queue, no batch.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Five steps instead of twelve. And steps 1 and 5 are essentially free — they're a C++ function call and a return value. The entire overhead is a function pointer invocation.&lt;/p&gt;

&lt;p&gt;Let's unpack what makes this possible.&lt;/p&gt;




&lt;h2 id="how-jsi-works-function-pointers-not-messages"&gt;How JSI Works: Function Pointers, Not Messages&lt;/h2&gt;

&lt;p&gt;When you register a JSI function, you're giving the JavaScript runtime a &lt;strong&gt;C++ function pointer&lt;/strong&gt; disguised as a JavaScript function. From JavaScript's perspective, it's just a function. From C++'s perspective, it's a lambda that receives the runtime and arguments directly.&lt;/p&gt;

&lt;p&gt;Registering a JSI function (simplified)&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// C++ side: install a function into the JS runtime
runtime.global().setProperty(
    runtime,
    "multiply",
    jsi::Function::createFromHostFunction(
        runtime,
        jsi::PropNameID::forAscii(runtime, "multiply"),
        2,  // argument count
        [](jsi::Runtime&amp;amp; rt,
           const jsi::Value&amp;amp; thisVal,
           const jsi::Value* args,
           size_t count) -&amp;gt; jsi::Value {
            double a = args[0].asNumber();
            double b = args[1].asNumber();
            return jsi::Value(a * b);
        }
    )
);&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Calling it from JavaScript&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const result = multiply(3, 7);  // 21 — synchronous, no await needed&lt;/code&gt;&lt;/pre&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Think about it:&lt;/strong&gt; Notice what's missing. There's no &lt;code&gt;await&lt;/code&gt;. There's no Promise. There's no callback. The function call is &lt;strong&gt;synchronous&lt;/strong&gt; — JavaScript calls it, C++ executes, the result is returned immediately on the same thread, in the same event loop tick. How is that possible when the Bridge required everything to be async?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The answer is thread affinity. The Bridge was async because it sent messages &lt;em&gt;between&lt;/em&gt; threads — the JSON payload was produced on the JS thread and consumed on a native thread. JSI functions run &lt;strong&gt;on the JS thread itself&lt;/strong&gt;. The C++ code executes in the same thread that called it. No cross-thread messaging means no async overhead.&lt;/p&gt;

&lt;p&gt;This is why &lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-1-the-runtime-you-never-see/" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt; emphasized that &lt;code&gt;jsi::Runtime&lt;/code&gt; is confined to the JS thread. That constraint — which might have seemed limiting — is what makes synchronous calls possible.&lt;/p&gt;




&lt;h2 id="values-without-serialization"&gt;Values Without Serialization&lt;/h2&gt;

&lt;p&gt;The Bridge converted everything to JSON. JSI passes values directly as &lt;code&gt;jsi::Value&lt;/code&gt; — a C++ type that can hold any JavaScript value without converting it to a string first.&lt;/p&gt;
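
&lt;p&gt;Conceptually, &lt;code&gt;jsi::Value&lt;/code&gt; behaves like a tagged union: a type tag plus either an inline primitive or a handle into engine-owned memory. A rough TypeScript model of that idea (illustrative only; &lt;code&gt;JsiValueSketch&lt;/code&gt; is not the real C++ type, though &lt;code&gt;asNumber&lt;/code&gt; mirrors the real accessor's throw-on-mismatch behavior):&lt;/p&gt;

```typescript
// Illustrative model of jsi::Value's tagged-union shape (not the real API).
// Numbers and booleans are stored inline; strings and objects are handles
// into engine-owned memory, so nothing is ever stringified.
type JsiValueSketch =
  | { kind: "undefined" }
  | { kind: "null" }
  | { kind: "bool"; value: boolean }
  | { kind: "number"; value: number }   // stored inline as a double
  | { kind: "string"; handle: symbol }  // handle to an engine-native string
  | { kind: "object"; handle: symbol }; // handle to an engine-native object

function asNumber(v: JsiValueSketch): number {
  // Mirrors jsi::Value::asNumber: throws if the tag doesn't match.
  if (v.kind !== "number") throw new Error("value is not a number");
  return v.value;
}

const arg: JsiValueSketch = { kind: "number", value: 21 };
console.log(asNumber(arg)); // 21 -- read directly, no JSON round-trip
```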

&lt;p&gt;Here's how JavaScript types map to JSI types:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;JavaScript Type&lt;/th&gt;
&lt;th&gt;Bridge (old)&lt;/th&gt;
&lt;th&gt;JSI (new)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;number&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;JSON number → string → parse back&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Value&lt;/code&gt; wrapping &lt;code&gt;double&lt;/code&gt; — zero conversion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;string&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;JSON string → escaped → parse back&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::String&lt;/code&gt; — engine-native string, no JSON serialization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;boolean&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;JSON &lt;code&gt;true&lt;/code&gt;/&lt;code&gt;false&lt;/code&gt; → string → parse&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Value&lt;/code&gt; wrapping &lt;code&gt;bool&lt;/code&gt; — zero conversion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;object&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;JSON.stringify entire tree&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Object&lt;/code&gt; — direct handle, no copying&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;array&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;JSON.stringify entire array&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Array&lt;/code&gt; (a &lt;code&gt;jsi::Object&lt;/code&gt;) — direct handle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ArrayBuffer&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Not supported (Base64 workaround)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::ArrayBuffer&lt;/code&gt; — zero-copy pointer to raw bytes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;function&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Not passable&lt;/td&gt;
&lt;td&gt;&lt;code&gt;jsi::Function&lt;/code&gt; — callable from C++&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Figure 4: Value type mapping between the Bridge and JSI. The Bridge serialized everything to strings. JSI preserves native types.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The most important row is &lt;code&gt;ArrayBuffer&lt;/code&gt;. The Bridge had &lt;strong&gt;no way&lt;/strong&gt; to pass binary data without encoding it. JSI gives you &lt;code&gt;jsi::ArrayBuffer&lt;/code&gt; — a direct pointer to a block of raw bytes shared between JavaScript and C++. No copy, no encoding, no overhead.&lt;/p&gt;

&lt;p&gt;Zero-copy access to binary data&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// C++ reads directly from JS ArrayBuffer — no copy
auto buffer = args[0].asObject(rt).getArrayBuffer(rt);
uint8_t* data = buffer.data(rt);   // raw pointer to JS memory
size_t length = buffer.size(rt);    // size in bytes

// Process the bytes in-place — JS and C++ see the same memory
for (size_t i = 0; i &amp;lt; length; i++) {
    data[i] = processAudioSample(data[i]);
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is how audio pipelines, camera processors, and ML inference can work in React Native — binary data flows between JS and native without a single copy.&lt;/p&gt;
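
&lt;p&gt;You can see the same contract from pure JavaScript: two typed-array views over one &lt;code&gt;ArrayBuffer&lt;/code&gt; observe each other's writes because no bytes are copied. Below, the second view is a stand-in for the C++ pointer, and the audio processing is reduced to a hypothetical 2x gain:&lt;/p&gt;

```typescript
// The JS-side view of zero-copy sharing: two typed-array views over the SAME
// ArrayBuffer see each other's writes, because no bytes are ever copied.
// This is the contract jsi::ArrayBuffer gives C++: a pointer into this memory.
const shared = new ArrayBuffer(4);
const jsView = new Uint8Array(shared);
const nativeView = new Uint8Array(shared); // stand-in for C++'s uint8_t* data

jsView.set([10, 20, 30, 40]);

// "Native" processes samples in place -- here, a hypothetical 2x gain.
for (let i = 0; i !== nativeView.length; i++) {
  nativeView[i] = nativeView[i] * 2;
}

console.log(Array.from(jsView)); // [20, 40, 60, 80] -- JS sees the mutation
```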

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; JSI doesn't just make the Bridge faster. It makes an entirely new category of operations possible. Zero-copy binary data sharing, synchronous function calls, passing functions and objects between JS and C++ — none of these could work through a JSON serialization layer. JSI isn't an optimization of the Bridge. It's an elimination of the Bridge.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="bridgeless-mode-the-default-path-is-jsi"&gt;Bridgeless Mode: The Default Path Is JSI&lt;/h2&gt;

&lt;p&gt;Starting with React Native 0.76, &lt;strong&gt;Bridgeless Mode&lt;/strong&gt; is the default. All new JS ↔ native communication goes through JSI — not the classic Bridge.&lt;/p&gt;

&lt;p&gt;The Bridge code is not fully removed from the codebase yet — React Native provides an &lt;a href="https://reactnative.dev/blog/2024/10/23/the-new-architecture-is-here" rel="noopener noreferrer"&gt;automatic interop layer&lt;/a&gt; so that old-style native modules (those using &lt;code&gt;@ReactMethod&lt;/code&gt; and &lt;code&gt;BatchedBridge&lt;/code&gt;) continue to work during the migration period. But the interop layer is a compatibility shim, not the primary architecture. New native modules should target JSI directly, and the React Native team has stated that the bridge code and interop layer will be removed entirely in a future release.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;React Native ≤ 0.72:    Bridge ON, JSI optional
React Native 0.73–0.75: Bridge ON, JSI encouraged (New Architecture opt-in)
React Native 0.76+:     JSI default (Bridgeless Mode), Bridge interop layer for legacy modules&lt;/code&gt;&lt;/pre&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Terminology:&lt;/strong&gt; &lt;strong&gt;Bridgeless Mode&lt;/strong&gt; means the classic JSON bridge (&lt;code&gt;BatchedBridge&lt;/code&gt;, &lt;code&gt;MessageQueue.js&lt;/code&gt;) is no longer the primary communication path. All new JS ↔ native communication uses JSI. An automatic interop layer keeps legacy modules working, but this shim is temporary — full bridge removal is planned. Bridgeless Mode is part of the broader "New Architecture" that also includes Fabric (the new renderer) and TurboModules (codegen-based native modules built on JSI).&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="the-tradeoffs-nothing-is-free"&gt;The Tradeoffs (Nothing Is Free)&lt;/h2&gt;

&lt;p&gt;JSI isn't a free lunch. The Bridge had properties that were genuinely useful:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Bridge&lt;/th&gt;
&lt;th&gt;JSI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Thread safety&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Inherently safe — JSON messages can be sent from any thread&lt;/td&gt;
&lt;td&gt;Must only access &lt;code&gt;jsi::Runtime&lt;/code&gt; from the JS thread&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Debugging&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Messages are JSON — easy to log, intercept, replay&lt;/td&gt;
&lt;td&gt;C++ function calls — harder to trace without native debuggers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Language barrier&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Any language can produce/consume JSON&lt;/td&gt;
&lt;td&gt;Must write C++ (or Objective-C++ / JNI wrappers)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Crash surface area&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Native modules in Java/Kotlin/Swift — managed memory, fewer crash vectors&lt;/td&gt;
&lt;td&gt;C++ with manual memory — segfaults, use-after-free, and undefined behavior are possible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Simplicity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;@ReactMethod&lt;/code&gt; annotation, Java/Kotlin/Swift only&lt;/td&gt;
&lt;td&gt;C++ required for direct JSI, plus platform wiring&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The Bridge traded performance for simplicity. JSI trades simplicity for performance. For most apps — where native calls are infrequent and data payloads are small — the Bridge was perfectly adequate. JSI becomes essential when you need synchronous access, binary data, or high-frequency native calls.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Feynman Moment:&lt;/strong&gt; Here's where the "Bridge is dead" headline is slightly misleading. The &lt;em&gt;mechanism&lt;/em&gt; is dead — no more JSON serialization and async queuing. But the &lt;em&gt;pattern&lt;/em&gt; of sending messages between threads is alive and well. When you do heavy work on a background thread and send the result back to the JS thread via &lt;code&gt;CallInvoker&lt;/code&gt;, that's still message passing. JSI eliminated the Bridge as an implementation detail, but it didn't eliminate the need for async communication between threads. The three-thread architecture from Part 1 hasn't changed. What changed is the cost of crossing the boundary when you're already on the right thread.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="in-practice-seeing-the-difference"&gt;In Practice: Seeing the Difference&lt;/h2&gt;

&lt;p&gt;Let's make the performance difference concrete. Consider a storage module that reads a cached value:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridge (old architecture):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bridge-based storage read (AsyncStorage)&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Average time: ~0.24ms per read (measured via StorageBenchmark)
const value = await NativeModules.Storage.get('user_theme');
// 1. Serialize call into batch arrays
// 2. Enqueue in batch
// 3. Wait for batch flush
// 4. Send to native thread
// 5. Deserialize on native side
// 6. Read from storage
// 7. Serialize result
// 8. Send back to JS thread
// 9. Deserialize result
// 10. Resolve promise&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;JSI (new architecture):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JSI-based storage read (react-native-mmkv)&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Average time: ~0.012ms per read (measured via StorageBenchmark)
const value = storage.getString('user_theme');
// 1. Call C++ function pointer
// 2. Read from memory-mapped storage
// 3. Return jsi::String
// Done.&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Benchmarks from &lt;a href="https://github.com/mrousavy/StorageBenchmark" rel="noopener noreferrer"&gt;mrousavy/StorageBenchmark&lt;/a&gt; show MMKV at ~0.012ms per read vs AsyncStorage at ~0.24ms — roughly a &lt;strong&gt;20x speedup&lt;/strong&gt;. The MMKV README reports ~30x faster than AsyncStorage. The exact ratio varies by device and payload size, but the order of magnitude is consistent: not because the storage engine got faster, but because the serialization and async overhead disappeared.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; This doesn't mean you should make everything synchronous. A synchronous JSI call that takes 50ms blocks the JS thread for 50ms — no touch events, no timers, no callbacks. Rule of thumb: if the operation completes in under 1ms, synchronous is fine. If it might take over 5ms, use a background thread with &lt;code&gt;CallInvoker&lt;/code&gt; and return a Promise. We'll cover this pattern in detail in Part 8.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="key-takeaways"&gt;Key Takeaways&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The Bridge serialized everything to JSON.&lt;/strong&gt; Every native call — no matter how simple — paid the cost of &lt;code&gt;JSON.stringify&lt;/code&gt; on one side and &lt;code&gt;JSON.parse&lt;/code&gt; on the other. This made data size the dominant factor in call latency, not computational complexity.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;The Bridge was async-only.&lt;/strong&gt; Every call was batched and processed on the next event loop tick. There was no way to get a synchronous result, even for operations that completed in microseconds.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;JSI replaces serialization with function pointers.&lt;/strong&gt; JavaScript holds managed references to C++ host functions and objects through the runtime. Calls are synchronous (on the JS thread), values are passed as &lt;code&gt;jsi::Value&lt;/code&gt; (no JSON conversion), and binary data can be shared zero-copy via &lt;code&gt;jsi::ArrayBuffer&lt;/code&gt;.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Bridgeless Mode is the default since RN 0.76.&lt;/strong&gt; JSI is the primary communication path. An interop layer keeps legacy Bridge-based modules working during migration, but the Bridge is no longer the default and will be fully removed in a future release.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;JSI trades simplicity for performance.&lt;/strong&gt; The Bridge let you write native modules in Java/Swift with &lt;code&gt;@ReactMethod&lt;/code&gt;. JSI requires C++ for direct access. This is the cost of eliminating the serialization layer — and it's why Part 3 of this series teaches you the C++ you need.&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2 id="what-s-next"&gt;What's Next&lt;/h2&gt;

&lt;p&gt;JSI gives JavaScript direct access to C++ functions. But to write those functions, you need to write C++. And if you're a JavaScript developer, C++ probably looks like it was designed to cause suffering.&lt;/p&gt;

&lt;p&gt;Good news: you don't need to learn all of C++. You need a specific subset — the parts that matter for JSI native modules: stack vs heap, RAII, smart pointers, lambdas, and move semantics. That's it. No templates-of-templates, no operator overloading, no multiple inheritance.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;Part 3: C++ for JavaScript Developers&lt;/strong&gt;, we'll learn exactly that subset — framed in terms you already understand from JavaScript. &lt;code&gt;unique_ptr&lt;/code&gt; is an owning reference you can't copy, only hand off. &lt;code&gt;shared_ptr&lt;/code&gt; is a garbage-collected pointer with a reference count. RAII is &lt;code&gt;try/finally&lt;/code&gt; built into the language.&lt;/p&gt;

&lt;p&gt;You'll write your first JSI function in Part 4. But Part 3 gives you the vocabulary to understand what that function is doing at the memory level — and why it doesn't leak, crash, or corrupt your app.&lt;/p&gt;




&lt;h2 id="references-further-reading"&gt;References &amp;amp; Further Reading&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://reactnative.dev/docs/the-new-architecture/landing-page" rel="noopener noreferrer"&gt;React Native — The New Architecture (Official Documentation)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/facebook/react-native/tree/main/packages/react-native/ReactCommon/jsi/jsi" rel="noopener noreferrer"&gt;JSI Source Code — facebook/react-native (jsi.h API Surface)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://reactnative.dev/blog/2024/10/23/the-new-architecture-is-here" rel="noopener noreferrer"&gt;React Native 0.76 — The New Architecture Is Here&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/reactwg/react-native-new-architecture/discussions/154" rel="noopener noreferrer"&gt;React Native Working Group — Bridgeless Mode Discussion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mrousavy/react-native-mmkv" rel="noopener noreferrer"&gt;react-native-mmkv — JSI-based Synchronous Storage (Source Code)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mrousavy/StorageBenchmark" rel="noopener noreferrer"&gt;mrousavy/StorageBenchmark — MMKV vs AsyncStorage Performance Comparison&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://reactnative.dev/architecture/threading-model" rel="noopener noreferrer"&gt;React Native — Threading Model (Architecture Docs)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/facebook/react-native/blob/main/packages/react-native/Libraries/BatchedBridge/MessageQueue.js" rel="noopener noreferrer"&gt;MessageQueue.js — BatchedBridge Implementation (Source Code)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tadeuzagallo.com/blog/react-native-bridge/" rel="noopener noreferrer"&gt;Tadeu Zagallo — Bridging in React Native (Core Engineer Writeup)&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: React Native JSI Deep Dive&lt;/strong&gt; (12 parts — series in progress)
&lt;a href="https://heartit.tech/react-native-jsi-deep-dive-part-1-the-runtime-you-never-see/" rel="noopener noreferrer"&gt;Part 1: The Runtime You Never See&lt;/a&gt; | &lt;strong&gt;Part 2: The Bridge is Dead, Long Live JSI (You are here)&lt;/strong&gt; | Part 3: C++ Foundations | Part 4: JSI Functions | Part 5: HostObjects | Part 6: Memory Ownership | Part 7: Platform Wiring | Part 8: Threading &amp;amp; Async | Part 9: Audio Pipeline | Part 10: Storage Engine | Part 11: Module Approaches | Part 12: Debugging&lt;/p&gt;


&lt;/blockquote&gt;

</description>
      <category>reactnative</category>
      <category>mobile</category>
      <category>jsi</category>
      <category>newarchitecture</category>
    </item>
    <item>
      <title>React Native JSI Deep Dive — Part 1: The Runtime You Never See</title>
      <dc:creator>Rahul Garg</dc:creator>
      <pubDate>Mon, 16 Mar 2026 13:54:13 +0000</pubDate>
      <link>https://dev.to/xtmntxraphaelx/react-native-jsi-deep-dive-part-1-the-runtime-you-never-see-a87</link>
      <guid>https://dev.to/xtmntxraphaelx/react-native-jsi-deep-dive-part-1-the-runtime-you-never-see-a87</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"The most dangerous thought you can have as a creative person is to think you know what you're doing."
— Bret Victor, &lt;em&gt;The Future of Programming&lt;/em&gt;, 2013&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Excerpt:&lt;/strong&gt; You've built React Native apps for years. You know &lt;code&gt;useState&lt;/code&gt;, you know &lt;code&gt;FlatList&lt;/code&gt;, you know how to call a native module. But do you know what happens in the 16 milliseconds between your &lt;code&gt;onPress&lt;/code&gt; handler and the pixel changing on screen? Three execution threads, two runtime environments (JavaScript via Hermes, native via Objective-C/Java/C++), and a message-passing architecture that explains every performance problem you've ever had.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: React Native JSI Deep Dive&lt;/strong&gt; (12 parts — series in progress)
&lt;strong&gt;Part 1: The Runtime You Never See (You are here)&lt;/strong&gt; | Part 2: Bridge → JSI | Part 3: C++ Foundations | Part 4: JSI Functions | Part 5: HostObjects | Part 6: Memory Ownership | Part 7: Platform Wiring | Part 8: Threading &amp;amp; Async | Part 9: Audio Pipeline | Part 10: Storage Engine | Part 11: Module Approaches | Part 12: Debugging&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="the-problem-the-illusion-of-one-world"&gt;The Problem: The Illusion of One World&lt;/h2&gt;

&lt;p&gt;Here's something that should bother you: when you write a React Native component, it feels like you're writing a single program.&lt;/p&gt;

&lt;p&gt;App.tsx&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;function Counter() {
  const [count, setCount] = useState(0);
  return (
    &amp;lt;TouchableOpacity onPress={() =&amp;gt; setCount(count + 1)}&amp;gt;
      &amp;lt;Text&amp;gt;{count}&amp;lt;/Text&amp;gt;
    &amp;lt;/TouchableOpacity&amp;gt;
  );
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;One file. One function. One mental model: user taps, state changes, screen updates. It feels no different from a web app.&lt;/p&gt;

&lt;p&gt;But this is an illusion. When you press that button, your tap crosses &lt;strong&gt;three execution threads&lt;/strong&gt;, passes through &lt;strong&gt;two runtime environments&lt;/strong&gt; — JavaScript (Hermes) and native (Objective-C/Java/C++) — and triggers a cascade of &lt;strong&gt;messages&lt;/strong&gt; between worlds that don't share memory, don't share a clock, and barely speak the same language.&lt;/p&gt;

&lt;p&gt;Every performance problem you've ever encountered in React Native — janky scrolling, delayed touch responses, slow native module calls, mysterious frame drops — traces back to this hidden architecture. Understanding it doesn't just explain the problems. It makes the solutions feel inevitable.&lt;/p&gt;




&lt;h2 id="the-three-threads"&gt;The Three Threads&lt;/h2&gt;

&lt;p&gt;React Native is not a single-threaded JavaScript application that talks to native views. It behaves like a &lt;strong&gt;distributed system&lt;/strong&gt; running on your phone — independent execution domains communicating via message passing. The &lt;a href="https://reactnative.dev/architecture/threading-model" rel="noopener noreferrer"&gt;official architecture docs&lt;/a&gt; describe two primary threads — JS and UI — but in practice, native module work runs on background thread pools, making the effective model three execution domains. In the New Architecture, the Fabric renderer may use additional threads for layout and shadow tree operations, but the JS/UI/Background mental model remains the most useful conceptual starting point.&lt;/p&gt;

&lt;h3 id="thread-1-javascript"&gt;Thread 1: JavaScript&lt;/h3&gt;

&lt;p&gt;This is where your code runs. Your components, your hooks, your business logic, your API calls — all of it executes here, inside a JavaScript engine.&lt;/p&gt;

&lt;p&gt;On React Native 0.76+, that engine is &lt;strong&gt;Hermes&lt;/strong&gt; — a JavaScript VM designed specifically for React Native. Unlike V8 in Chrome — which parses your JavaScript at runtime, interprets the resulting bytecode via its Ignition interpreter, and selectively JIT-compiles hot code paths through multiple optimization tiers — Hermes takes a different approach. The Hermes compiler (&lt;code&gt;hermesc&lt;/code&gt;) pre-compiles your JavaScript to &lt;strong&gt;bytecode as a build step&lt;/strong&gt; after Metro bundles the JS. The app ships this pre-compiled bytecode. When the app starts, Hermes executes it directly — no parsing, no JIT compilation, no warm-up.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Terminology:&lt;/strong&gt; &lt;strong&gt;Hermes&lt;/strong&gt; is a JavaScript engine optimized for React Native. It uses ahead-of-time bytecode compilation (via &lt;code&gt;hermesc&lt;/code&gt; as a post-bundling build step), a concurrent generational garbage collector (Hades), and a memory model tuned for mobile constraints. It's not V8 (Chrome) or JavaScriptCore (Safari) — it's purpose-built.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The JS thread has one event loop, just like a browser. When your &lt;code&gt;onPress&lt;/code&gt; handler fires, it goes into the event loop queue. When a &lt;code&gt;fetch&lt;/code&gt; response arrives, its callback goes into the queue. When a timer fires, same queue. One thread processes them one at a time, in order.&lt;/p&gt;

&lt;p&gt;This means JavaScript is fundamentally &lt;strong&gt;single-threaded&lt;/strong&gt;. While your event handler runs, nothing else happens on this thread. No other callback fires. No state update processes. If your handler takes 100ms (say, sorting a large array), the UI is frozen for those 100ms — not because the screen can't update, but because the JS thread is busy and can't process the next item in the queue.&lt;/p&gt;
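
&lt;p&gt;You can observe this queue discipline directly. Anything scheduled during a handler, even an already-resolved Promise's callback, waits until the handler returns:&lt;/p&gt;

```typescript
// While a handler runs, nothing else on the JS thread does -- even work
// scheduled mid-handler waits for the current turn to finish.
const order: string[] = [];

function onPressHandler(): void {
  Promise.resolve().then(() => order.push("state update processed"));
  // ... imagine 100ms of synchronous sorting here ...
  order.push("handler finished");
}

onPressHandler();
// Immediately after the call, only the synchronous work has happened;
// the scheduled update still sits in the queue behind us.
const orderSnapshot = order.slice();
console.log(orderSnapshot); // ["handler finished"]
```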

&lt;h3 id="thread-2-ui-main-thread"&gt;Thread 2: UI (Main Thread)&lt;/h3&gt;

&lt;p&gt;This is the platform's main thread — the one that actually draws pixels and handles touch events. On iOS, it's the thread that runs UIKit. On Android, it's the one that runs the View system.&lt;/p&gt;

&lt;p&gt;The UI thread has one overriding constraint: &lt;strong&gt;it must submit work to the rendering pipeline within the frame budget — 16.6ms&lt;/strong&gt; (60fps) or &lt;strong&gt;8.3ms&lt;/strong&gt; (120fps on newer devices). If it misses a frame, the user sees a stutter. If it misses several, the interface feels broken.&lt;/p&gt;

&lt;p&gt;This thread does not run JavaScript. It runs native platform code — layout calculations, view hierarchy updates, animation interpolations, touch hit-testing. Your React components don't exist here. What exists here are native views — &lt;code&gt;UIView&lt;/code&gt; on iOS, &lt;code&gt;android.view.View&lt;/code&gt; on Android — that React Native's rendering system creates and manages on your behalf.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; When you write &lt;code&gt;&amp;lt;Text&amp;gt;Hello&amp;lt;/Text&amp;gt;&lt;/code&gt; in React Native, no &lt;code&gt;Text&lt;/code&gt; component exists on the UI thread. Instead, React Native creates a native platform view — on iOS, a custom &lt;code&gt;UIView&lt;/code&gt; subclass (&lt;a href="https://github.com/facebook/react-native/blob/main/packages/react-native/React/Fabric/Mounting/ComponentViews/Text/RCTParagraphComponentView.h" rel="noopener noreferrer"&gt;&lt;code&gt;RCTParagraphComponentView&lt;/code&gt;&lt;/a&gt; in Fabric) that uses TextKit for rendering; on Android, a custom &lt;code&gt;View&lt;/code&gt; subclass. These are real native views that the operating system knows how to composite. Your React tree is a &lt;em&gt;description&lt;/em&gt; of what the UI should look like. The UI thread holds the &lt;em&gt;reality&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="thread-3-native-modules-background"&gt;Thread 3: Native Modules (Background)&lt;/h3&gt;

&lt;p&gt;The third thread — or more accurately, a &lt;strong&gt;pool of background threads&lt;/strong&gt; — handles native module work. When your JavaScript calls &lt;code&gt;AsyncStorage.getItem('key')&lt;/code&gt;, the work doesn't happen on the JS thread or the UI thread. It's dispatched to a background thread where native code reads from disk, then the result is sent back to the JS thread.&lt;/p&gt;

&lt;p&gt;This is where native modules live: camera access, file I/O, biometric auth, Bluetooth — anything that talks to platform APIs.&lt;/p&gt;




&lt;h2 id="how-they-talk-messages-not-memory"&gt;How They Talk: Messages, Not Memory&lt;/h2&gt;

&lt;p&gt;Here's the critical insight: in the old architecture, these three threads &lt;strong&gt;did not share memory&lt;/strong&gt; — they communicated exclusively by passing serialized &lt;strong&gt;messages&lt;/strong&gt;. The New Architecture changes this partially: Fabric uses shared immutable C++ data structures (like the shadow tree) that multiple threads can read, and JSI allows the JS thread to directly invoke C++ code. But the core principle still holds — threads coordinate through well-defined interfaces, not by reading each other's mutable state, and most cross-thread communication still follows a message-passing pattern.&lt;/p&gt;

&lt;p&gt;Think of three people in separate rooms. In the old architecture, the rooms were soundproof — you slid notes under the door. In the New Architecture, some rooms have a shared read-only bulletin board (the immutable shadow tree) and a direct intercom (JSI) — but you still can't reach in and rearrange someone else's desk.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;┌─────────────┐     messages     ┌─────────────┐     messages     ┌─────────────┐
│             │    ──────────▶   │             │    ──────────▶   │             │
│  JS Thread  │                  │  UI Thread  │                  │   Native    │
│  (Hermes)   │    ◀──────────   │  (UIKit /   │    ◀──────────   │   Modules   │
│             │     messages     │   Android)  │     messages     │             │
└─────────────┘                  └─────────────┘                  └─────────────┘&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When your &lt;code&gt;onPress&lt;/code&gt; handler calls &lt;code&gt;setCount(count + 1)&lt;/code&gt;, here's the actual sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;JS thread&lt;/strong&gt;: React runs your handler, computes the new virtual DOM, diffs it against the previous one, and determines that the &lt;code&gt;Text&lt;/code&gt; component's content changed from "0" to "1".&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;JS thread → UI thread&lt;/strong&gt;: React Native dispatches an update: "Update the text property of native view #42 to '1'."&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;UI thread&lt;/strong&gt;: Receives the update, finds native view #42, updates its text property, and includes it in the next frame render.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The user sees "1" on screen. Total elapsed time: anywhere from 1ms (if everything aligns perfectly) to 100ms+ (if either thread is busy).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Feynman Moment:&lt;/strong&gt; The soundproof rooms analogy is useful but breaks in an important way. Real soundproof rooms have a fixed door — you always pass notes through the same slot. In React Native, the &lt;em&gt;mechanism&lt;/em&gt; for passing notes changed completely between the old and new architecture. The old architecture used a JSON serialization bridge (slow, asynchronous). The New Architecture — often called &lt;strong&gt;Bridgeless Mode&lt;/strong&gt; — uses JSI, a C++ interface that lets the rooms share certain objects directly. That's what the rest of this series is about.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="let-s-trace-a-button-press"&gt;Let's Trace a Button Press&lt;/h2&gt;

&lt;p&gt;Here's the full path a tap takes through the system — from finger to pixel:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;User Tap
   │
   ▼
UI Thread (touch hit-testing)
   │
   ▼
EventDispatcher → JSI
   │
   ▼
JS Thread (onPress handler)
   │
   ▼
React reconciliation (diff)
   │
   ▼
Fabric commit
   │
   ▼
UI Thread (native view update)
   │
   ▼
Next frame render → pixel changes&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's walk through each step. (The timing estimates below are illustrative — actual durations vary by device, load, and complexity. No official benchmarks publish sub-millisecond breakdowns for this path.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Touch begins.&lt;/strong&gt; The user's finger contacts the screen. The operating system detects this on the UI thread and begins hit-testing: which native view is under the finger?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Touch dispatched.&lt;/strong&gt; The UI thread identifies the native &lt;code&gt;TouchableOpacity&lt;/code&gt; view and dispatches the event through React Native's &lt;code&gt;EventDispatcher&lt;/code&gt;, which delivers it to the JavaScript runtime: "Touch began at coordinates (x, y) on view #37." (In the New Architecture, this delivery goes through JSI rather than the legacy bridge.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JS processes the event.&lt;/strong&gt; The JS thread picks up the event from its event loop queue. React's event system maps view #37 to your &lt;code&gt;onPress&lt;/code&gt; callback. Your handler runs: &lt;code&gt;setCount(count + 1)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;React reconciliation.&lt;/strong&gt; React runs the component again with the new state. It diffs the new virtual DOM against the previous one. It finds one change: the &lt;code&gt;Text&lt;/code&gt; node's children changed from "0" to "1".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fabric commit.&lt;/strong&gt; React Native's rendering system — &lt;strong&gt;Fabric&lt;/strong&gt; — packages the change and commits it. Fabric is the modern renderer that replaced the legacy UIManager; it coordinates layout and mounting between the JS and UI threads. The commit: "Set property 'text' to '1' on shadow node #42."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UI thread applies the update.&lt;/strong&gt; The UI thread receives the update, modifies the native view, and marks the view hierarchy as needing a redraw.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next frame.&lt;/strong&gt; The OS composites the updated view hierarchy into the next display frame. The user sees "1".&lt;/p&gt;

&lt;p&gt;For a simple counter, the entire pipeline typically completes within a single frame (~16ms at 60fps). The JS thread does its work quickly — reconciliation for a single text change is sub-millisecond. The rest is coordination: waiting for the event to reach JS, waiting for the commit to reach the UI thread, waiting for the next frame boundary.&lt;/p&gt;

&lt;p&gt;Now imagine what happens when those few milliseconds of JS work become 40ms. Or when the UI thread is busy running a complex animation. Or when a native module call inserts another round trip. The messages pile up, the threads fall out of sync, and the user feels the lag.&lt;/p&gt;




&lt;h2 id="the-event-loop-your-friend-and-your-bottleneck"&gt;The Event Loop (Your Friend and Your Bottleneck)&lt;/h2&gt;

&lt;p&gt;The JS thread runs a single event loop. Everything — touch handlers, timers, network callbacks, native module results — enters the same queue and is processed one at a time.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Event Loop Queue:
┌──────────────────────────────────────────────────┐
│ onPress() │ setTimeout() │ fetch() callback │ ... │
└──────────────────────────────────────────────────┘
     ▲                                        │
     │         Process one at a time           │
     └─────────────────────────────────────────┘&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is conceptually similar to how browsers process JavaScript tasks. And it has the same fundamental implication: &lt;strong&gt;any single task that takes too long blocks everything behind it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blocking the event loop:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;function onPress() {
  // This blocks the entire JS thread for ~200ms
  const sorted = hugeArray.sort((a, b) =&amp;gt; a.localeCompare(b));
  setData(sorted);
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;While &lt;code&gt;sort()&lt;/code&gt; runs, no touch events are processed, no animations are driven from JS, no network callbacks fire, no timers execute. The app appears frozen — not because the UI thread is stuck (it's still rendering the old state perfectly smoothly), but because the JS thread can't tell it to change anything.&lt;/p&gt;
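&lt;p&gt;When the work can't move to native code, the standard remedy is to split it into chunks and yield to the event loop between them. A minimal sketch in plain JS (&lt;code&gt;mapInChunks&lt;/code&gt; is an illustrative helper, not a React Native API):&lt;/p&gt;

```javascript
// Transform a large array in chunks, yielding to the event loop between
// chunks so queued work (touch events, timers, callbacks) runs in the gaps.
async function mapInChunks(items, fn, chunkSize) {
  const out = [];
  for (let i = 0; items.length > i; i += chunkSize) {
    const end = Math.min(i + chunkSize, items.length);
    for (let j = i; end > j; j += 1) {
      out.push(fn(items[j]));
    }
    // Yield: everything queued behind us gets a turn before the next chunk.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return out;
}

// Usage: the result is identical to items.map(fn), but the JS thread is
// never blocked for longer than one chunk.
mapInChunks([1, 2, 3, 4, 5], (x) => x * 2, 2).then((doubled) => {
  console.log(doubled); // [2, 4, 6, 8, 10]
});
```

&lt;p&gt;Total work is the same, but no single task holds the queue hostage: touch events and callbacks interleave between chunks.&lt;/p&gt;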

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; This is why React Native animations should use the native driver (&lt;code&gt;useNativeDriver: true&lt;/code&gt;) whenever possible. A JS-driven animation means the JS thread must send a position update message to the UI thread every 16ms. If the JS thread is busy, it misses frames and the animation stutters. A native-driven animation runs entirely on the UI thread — the JS thread doesn't need to participate at all.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="hermes-the-engine-under-the-hood"&gt;Hermes: The Engine Under the Hood&lt;/h2&gt;

&lt;p&gt;The JS thread doesn't run JavaScript directly. It runs &lt;strong&gt;Hermes&lt;/strong&gt;, a JavaScript engine with several properties that matter for React Native:&lt;/p&gt;

&lt;h3 id="bytecode-compilation"&gt;Bytecode Compilation&lt;/h3&gt;

&lt;p&gt;V8 in Chrome parses JavaScript source code at runtime, interprets it as bytecode (via Ignition), and then selectively JIT-compiles frequently-executed code paths through multiple optimization tiers (Sparkplug, Maglev, TurboFan). Hermes skips all of this at runtime. The &lt;code&gt;hermesc&lt;/code&gt; compiler compiles JavaScript to bytecode as a &lt;strong&gt;build step&lt;/strong&gt; — after Metro bundles your JS, but before the app is packaged. The app ships Hermes bytecode, not JavaScript source.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;V8 (Chrome):     Source code → Parse → AST → Bytecode → Interpret (Ignition)
                  [hot paths only] → JIT compile (Sparkplug/Maglev/TurboFan) → Machine code
                  (parsing + interpretation at runtime; JIT warms up over time)

Hermes (RN):     Source code → hermesc → Bytecode (at build time)
                  Bytecode → Execute directly (at runtime — fast startup)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is why React Native apps with Hermes start faster: there's no parsing step when the app launches. The tradeoff is that Hermes executes bytecode directly &lt;strong&gt;without a JIT compiler&lt;/strong&gt; — no runtime compilation to native machine code. This improves startup time and memory usage at the cost of peak compute performance compared to JIT engines like V8. For most React Native workloads (UI-driven, event-based, not compute-heavy), direct bytecode execution is fast enough. For compute-heavy work, you should be in native code anyway — which is exactly what this series teaches.&lt;/p&gt;

&lt;h3 id="the-hades-garbage-collector"&gt;The Hades Garbage Collector&lt;/h3&gt;

&lt;p&gt;Hermes uses a mostly-concurrent generational garbage collector called &lt;strong&gt;Hades&lt;/strong&gt;. "Mostly-concurrent" means it performs the bulk of collection work on a background thread &lt;em&gt;while your JavaScript is running&lt;/em&gt; — but it still requires brief stop-the-world pauses for specific phases like root marking and weak reference finalization. "Generational" means it separates objects into young and old generations — young objects (recently allocated, likely short-lived) are collected frequently and cheaply, while old objects (survived multiple collections) are collected less often. (On 32-bit platforms, Hades runs in incremental mode rather than concurrent.)&lt;/p&gt;

&lt;p&gt;This matters because GC pauses are one of the most common causes of frame drops in JavaScript applications. Hermes's predecessor GC (&lt;a href="https://hermesengine.dev/docs/hades" rel="noopener noreferrer"&gt;GenGC&lt;/a&gt;) had average pauses around 200ms on large heaps. Hades dramatically reduces these by doing most work concurrently, keeping its STW pauses short — usually well below the frame budget.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; Hades is one reason React Native 0.76+ feels smoother than older versions. JavaScriptCore (the previous default engine) has its own concurrent GC (&lt;a href="https://webkit.org/blog/7122/introducing-riptide-webkits-retreating-wavefront-concurrent-garbage-collector/" rel="noopener noreferrer"&gt;Riptide&lt;/a&gt;, introduced in 2017), but Hermes's GC is specifically tuned for mobile constraints — optimizing for memory footprint and startup time rather than peak throughput. The combination of AOT bytecode (no parse/compile at startup) and Hades (short GC pauses) gives Hermes a clear edge on mobile devices.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="the-jsi-runtime-interface"&gt;The jsi::Runtime Interface&lt;/h3&gt;

&lt;p&gt;Here's where things get interesting for the rest of this series. Hermes doesn't just run JavaScript — it exposes a &lt;strong&gt;C++ interface&lt;/strong&gt; called &lt;code&gt;jsi::Runtime&lt;/code&gt; that native code can use to interact with the JavaScript world.&lt;/p&gt;

&lt;p&gt;Through &lt;code&gt;jsi::Runtime&lt;/code&gt;, C++ code can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create JavaScript objects and functions&lt;/li&gt;
&lt;li&gt;Call JavaScript functions&lt;/li&gt;
&lt;li&gt;Read and write JavaScript values&lt;/li&gt;
&lt;li&gt;Expose C++ functions that JavaScript can call synchronously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This interface — JSI, the &lt;strong&gt;JavaScript Interface&lt;/strong&gt; — is what the New Architecture is built on. It's what replaced the JSON bridge. And it's what the rest of this series teaches you to use.&lt;/p&gt;

&lt;p&gt;But we're getting ahead of ourselves. For now, the important thing is: Hermes is not a black box. It has a C++ API surface. Native code can reach into the JavaScript world — and JavaScript can reach into native code — without serializing anything to JSON.&lt;/p&gt;




&lt;h2 id="why-this-model-matters"&gt;Why This Model Matters&lt;/h2&gt;

&lt;p&gt;Understanding the three-thread architecture isn't academic. It directly predicts the behavior of every native module you'll build in this series.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thread affinity&lt;/strong&gt;: JSI calls must run on the JS thread. Why? Because &lt;code&gt;jsi::Runtime&lt;/code&gt; is confined to the JavaScript runtime thread: JSI values are bound to a specific runtime instance, that runtime is not thread-safe, and accessing &lt;code&gt;jsi::Runtime&lt;/code&gt; or any &lt;code&gt;jsi::Value&lt;/code&gt; from another thread is undefined behavior. This single constraint shapes the entire design of every native module: heavy work goes to background threads, results come back to the JS thread via &lt;code&gt;CallInvoker&lt;/code&gt; or &lt;code&gt;RuntimeExecutor&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message-passing&lt;/strong&gt;: The old bridge serialized every call to JSON and sent it as a message. The new architecture (JSI) allows synchronous calls — but only on the JS thread. Understanding when to use synchronous calls (fast lookups, &amp;lt;1ms) vs asynchronous calls (I/O, computation, &amp;gt;5ms) is a core skill for native module design.&lt;/p&gt;
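&lt;p&gt;From the JavaScript caller's perspective, the two shapes look like this. A sketch with illustrative names (&lt;code&gt;getItemSync&lt;/code&gt;, &lt;code&gt;getItemAsync&lt;/code&gt;, an in-memory &lt;code&gt;store&lt;/code&gt;), not a real module's API:&lt;/p&gt;

```javascript
// Stand-in for native storage; in a real module this lives in C++.
const store = new Map([['theme', 'dark']]);

// Sync: fine for sub-millisecond lookups; blocks the JS thread while it runs.
function getItemSync(key) {
  return store.get(key);
}

// Async: slow work is deferred off the current task; the result comes back
// through the event loop (in a real module, via CallInvoker from a
// background thread).
function getItemAsync(key) {
  return new Promise((resolve) => {
    setImmediate(() => resolve(store.get(key)));
  });
}
```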

&lt;p&gt;&lt;strong&gt;Event loop&lt;/strong&gt;: A synchronous native module call that takes 50ms blocks the entire JS thread for 50ms. No touch events, no timers, no callbacks. This is why real-time systems like audio pipelines can't be driven from JavaScript — the event loop has deterministic &lt;em&gt;ordering&lt;/em&gt;, but its wall-clock &lt;em&gt;timing&lt;/em&gt; varies unpredictably based on what else is in the queue, GC pauses, and device load. You can't guarantee a callback will fire within a specific deadline.&lt;/p&gt;
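&lt;p&gt;This ordering-vs-timing split is easy to demonstrate. In the sketch below (plain Node-flavored JS), the &lt;em&gt;order&lt;/em&gt; of tasks is fully deterministic, but the "0ms" timer fires more than 30ms late because a synchronous task sits in front of it:&lt;/p&gt;

```javascript
const order = [];
const t0 = Date.now();

setTimeout(() => order.push('timer'), 0);           // macrotask, asked for "now"
Promise.resolve().then(() => order.push('microtask')); // microtask
order.push('sync');                                    // current task

// Hold the thread for 30ms; the "0ms" timer can't fire until we yield.
while (30 > Date.now() - t0) {}

setTimeout(() => {
  // Ordering is deterministic: sync, then microtasks, then the timer.
  // Timing is not: the timer fired 30ms+ late despite asking for 0ms.
  console.log(order); // ['sync', 'microtask', 'timer']
}, 50);
```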

&lt;p&gt;Every part of this series is a consequence of this architecture:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Part&lt;/th&gt;
&lt;th&gt;Consequence of the Architecture&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Part 2&lt;/td&gt;
&lt;td&gt;The bridge serialized messages to JSON. JSI eliminates serialization.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Part 4&lt;/td&gt;
&lt;td&gt;JSI functions run synchronously on the JS thread — fast but blocking.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Part 5&lt;/td&gt;
&lt;td&gt;HostObjects let C++ objects live in the native heap with JS handles.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Part 6&lt;/td&gt;
&lt;td&gt;JS GC and C++ heap are separate — ownership must be explicit.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Part 8&lt;/td&gt;
&lt;td&gt;Background threads need CallInvoker to send results back to JS.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Part 9&lt;/td&gt;
&lt;td&gt;Audio callbacks can't touch JSI — they run on a different thread.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2 id="key-takeaways"&gt;Key Takeaways&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;React Native behaves like a distributed system.&lt;/strong&gt; Three execution domains — JS, UI, Native background — coordinate through well-defined interfaces. The New Architecture allows shared immutable data structures and direct JSI calls, but threads still can't access each other's mutable state freely. Every performance issue traces back to this architecture.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Hermes is the JS engine.&lt;/strong&gt; It uses ahead-of-time bytecode compilation via &lt;code&gt;hermesc&lt;/code&gt; as a post-bundling build step (fast startup, no JIT), a mostly-concurrent generational garbage collector called Hades (short STW pauses), and exposes a C++ interface (&lt;code&gt;jsi::Runtime&lt;/code&gt;) that native code can call directly.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;The JS thread has one event loop.&lt;/strong&gt; One queue, one item at a time. Any task that blocks the event loop blocks everything: touch events, timers, network callbacks, animations driven from JS.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;The UI thread must produce a frame every 16ms.&lt;/strong&gt; It doesn't run JavaScript. It runs native platform code. Your React components are descriptions; the UI thread holds the reality.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;jsi::Runtime&lt;/code&gt; is confined to the JS thread.&lt;/strong&gt; Accessing the runtime or any JSI values from other threads leads to undefined behavior. Background work must return results through &lt;code&gt;CallInvoker&lt;/code&gt; or &lt;code&gt;RuntimeExecutor&lt;/code&gt;. This single constraint drives the design of every native module in the New Architecture. If you remember one thing from this post, remember this.&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2 id="what-s-next"&gt;What's Next&lt;/h2&gt;

&lt;p&gt;Now you know the architecture. Three threads, message-passing, a JS engine with a C++ API. But we skipped a crucial chapter: how did messages get from JS to native &lt;em&gt;before&lt;/em&gt; JSI?&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Legacy Architecture:          New Architecture (Bridgeless):

  JS Thread                     JS Thread
     │                             │
     ▼                             ▼
  Bridge Queue                  jsi::Runtime
  (batched JSON messages)          │
     │                             ▼
     ▼                          Direct C++ call
  Native deserializes              │
     │                             ▼
     ▼                          Native code
  Native code&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The answer is the Bridge — a JSON serialization layer that was simple, reliable, and brutally slow. In &lt;strong&gt;Part 2&lt;/strong&gt; (coming soon), we'll trace a native module call through the Bridge, understand exactly why it was a bottleneck, and see how JSI eliminates the problem entirely.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series status:&lt;/strong&gt; This is Part 1 of a 12-part series currently being written. Follow &lt;a href="https://heartit.tech" rel="noopener noreferrer"&gt;heart-IT&lt;/a&gt; for updates as new parts are published.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="references-further-reading"&gt;References &amp;amp; Further Reading&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://reactnative.dev/docs/the-new-architecture/landing-page" rel="noopener noreferrer"&gt;React Native — The New Architecture (Official Documentation)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hermesengine.dev/" rel="noopener noreferrer"&gt;Hermes — JavaScript Engine for React Native&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://reactnative.dev/blog/2024/10/23/release-0.76-new-architecture" rel="noopener noreferrer"&gt;React Native 0.76 — New Architecture by Default&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://reactnative.dev/architecture/threading-model" rel="noopener noreferrer"&gt;React Native — Threading Model (Architecture Docs)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://engineering.fb.com/2019/07/12/android/hermes/" rel="noopener noreferrer"&gt;Meta Engineering Blog — Hermes: An Open Source JavaScript Engine Optimized for Mobile&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hermesengine.dev/docs/hades" rel="noopener noreferrer"&gt;Hermes — Hades: Concurrent Garbage Collector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://webkit.org/blog/7122/introducing-riptide-webkits-retreating-wavefront-concurrent-garbage-collector/" rel="noopener noreferrer"&gt;WebKit — Introducing Riptide: Concurrent Garbage Collector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://reactnative.dev/architecture/render-pipeline" rel="noopener noreferrer"&gt;React Native — Render, Commit, and Mount (Render Pipeline Docs)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/facebook/react-native/blob/main/packages/react-native/React/Fabric/Mounting/ComponentViews/Text/RCTParagraphComponentView.h" rel="noopener noreferrer"&gt;RCTParagraphComponentView.h — Fabric Text Component (Source Code)&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: React Native JSI Deep Dive&lt;/strong&gt; (12 parts — series in progress)
&lt;strong&gt;Part 1: The Runtime You Never See (You are here)&lt;/strong&gt; | Part 2: Bridge → JSI | Part 3: C++ Foundations | Part 4: JSI Functions | Part 5: HostObjects | Part 6: Memory Ownership | Part 7: Platform Wiring | Part 8: Threading &amp;amp; Async | Part 9: Audio Pipeline | Part 10: Storage Engine | Part 11: Module Approaches | Part 12: Debugging&lt;/p&gt;


&lt;/blockquote&gt;

</description>
      <category>reactnative</category>
      <category>jsi</category>
      <category>mobile</category>
      <category>programming</category>
    </item>
    <item>
      <title>P2P from Scratch — Part 2: Encrypted Pipes</title>
      <dc:creator>Rahul Garg</dc:creator>
      <pubDate>Thu, 12 Mar 2026 06:22:35 +0000</pubDate>
      <link>https://dev.to/xtmntxraphaelx/p2p-from-scratch-part-2-encrypted-pipes-5b7n</link>
      <guid>https://dev.to/xtmntxraphaelx/p2p-from-scratch-part-2-encrypted-pipes-5b7n</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Privacy is necessary for an open society in the electronic age."
— Eric Hughes, A Cypherpunk's Manifesto&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Excerpt:&lt;/strong&gt; In Part 1, we punched a hole through two NATs and established a raw UDP path between peers. But raw UDP is the network equivalent of shouting across an open field — anyone standing between you can listen, modify, or impersonate. This post shows how Hyperswarm turns that raw path into an encrypted, multiplexed communication channel — and why a single connection can carry dozens of independent protocols simultaneously.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: P2P from Scratch — Building on the Holepunch Stack&lt;/strong&gt;
&lt;a href="https://heartit.tech/p2p-from-scratch-part-1-the-internet-is-hostile/" rel="noopener noreferrer"&gt;Part 1: The Internet is Hostile&lt;/a&gt; | &lt;strong&gt;Part 2: Encrypted Pipes (You are here)&lt;/strong&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-3-append-only-truth/" rel="noopener noreferrer"&gt;Part 3: Append-Only Truth&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-4-from-logs-to-databases/" rel="noopener noreferrer"&gt;Part 4: From Logs to Databases&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-5-finding-peers/" rel="noopener noreferrer"&gt;Part 5: Finding Peers&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-6-many-writers-one-truth/" rel="noopener noreferrer"&gt;Part 6: Many Writers, One Truth&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-7-trust-no-one-verify-everything/" rel="noopener noreferrer"&gt;Part 7: Trust No One&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-8-building-for-humans/" rel="noopener noreferrer"&gt;Part 8: Building for Humans&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="quick-recap"&gt;Quick Recap&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://heartit.tech/p2p-from-scratch-part-1-the-internet-is-hostile/" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, we punched through NATs using a DHT-coordinated timing dance and established a raw UDP path between two peers. The hole is open — but the pipe is unprotected.&lt;/p&gt;




&lt;h2 id="the-problem-an-open-pipe-is-a-dangerous-pipe"&gt;The Problem: An Open Pipe Is a Dangerous Pipe&lt;/h2&gt;

&lt;p&gt;At the end of Part 1, Alice and Bob had a working UDP path. Packets flow in both directions. The NAT doors are open.&lt;/p&gt;

&lt;p&gt;But here's the thing about UDP: it's just raw bytes on a wire. There's no encryption, no authentication, and no ordering guarantee. Three problems follow immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anyone on the network path can read the data.&lt;/strong&gt; Your ISP, the coffee shop Wi-Fi operator, any router between you and your peer — they can all see every byte. For a file-sharing app, that means someone can read your files. For a chat app, your messages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anyone can modify the data in transit.&lt;/strong&gt; A malicious router could rewrite the contents of your packets before forwarding them. You'd receive corrupted data and have no way to detect the tampering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anyone can impersonate your peer.&lt;/strong&gt; Without authentication, you have no way to verify that the packets you're receiving actually come from the person you intended to talk to. A third party could intercept the connection and pretend to be Bob.&lt;/p&gt;

&lt;p&gt;This is the classic &lt;a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack" rel="noopener noreferrer"&gt;man-in-the-middle&lt;/a&gt; problem. And solving it in peer-to-peer is harder than in client-server, because there's no certificate authority, no TLS handshake backed by a central trust hierarchy, and no domain name to verify.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; In client-server HTTPS, trust flows from certificate authorities: your browser trusts DigiCert, DigiCert vouches for &lt;code&gt;example.com&lt;/code&gt;, so you trust the connection. In P2P, there's no certificate authority. Trust must be bootstrapped from the keypairs themselves — you trust a connection because you already know the peer's public key, not because a third party vouched for them.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="secret-stream-from-raw-bytes-to-encrypted-channel"&gt;Secret Stream: From Raw Bytes to Encrypted Channel&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/holepunchto/hyperswarm-secret-stream" rel="noopener noreferrer"&gt;Secret Stream&lt;/a&gt; is the component that transforms a raw Duplex stream (like our holepunched UDP path) into an encrypted, authenticated channel. It uses two cryptographic layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Noise XX handshake&lt;/strong&gt; — for mutual authentication and session key derivation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;libsodium's secretstream&lt;/strong&gt; — for ongoing AEAD encryption of all payload data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The result is a standard Node.js Duplex stream that happens to encrypt everything transparently. Application code writes plaintext; the wire carries ciphertext.&lt;/p&gt;

&lt;h3 id="the-noise-protocol-framework"&gt;The Noise Protocol Framework&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://noiseprotocol.org/noise.html" rel="noopener noreferrer"&gt;Noise Protocol Framework&lt;/a&gt; isn't a single protocol — it's a framework for building authenticated key-agreement protocols. You compose a Noise protocol by choosing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;handshake pattern&lt;/strong&gt; — which messages carry which keys&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;DH function&lt;/strong&gt; — how keys are exchanged (scalar multiplication on Ed25519 points in Hyperswarm, via &lt;a href="https://github.com/holepunchto/noise-curve-ed" rel="noopener noreferrer"&gt;noise-curve-ed&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;cipher&lt;/strong&gt; — for encrypting handshake payloads (ChaCha20-Poly1305)&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;hash function&lt;/strong&gt; — for key derivation (BLAKE2b)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hyperswarm uses the &lt;strong&gt;XX&lt;/strong&gt; pattern. The letters describe what each side does: X means "transmit static key." Since both sides do X, both sides share their long-term public key during the handshake.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Terminology:&lt;/strong&gt; A &lt;strong&gt;handshake pattern&lt;/strong&gt; in Noise defines the sequence of messages and which cryptographic keys are exchanged at each step. The letters encode the behavior: N = no static key for that party (anonymous), K = static key Known in advance, X = static key Transmitted. XX means both sides transmit their static key — mutual authentication with no prior knowledge required.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="why-xx-and-not-ik-or-nk"&gt;Why XX? (And Not IK or NK)&lt;/h3&gt;

&lt;p&gt;The choice of handshake pattern has real consequences:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;What It Means&lt;/th&gt;
&lt;th&gt;Requires Prior Knowledge?&lt;/th&gt;
&lt;th&gt;Used When&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NK&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Initiator has no static key (anonymous)&lt;/td&gt;
&lt;td&gt;Responder's key must be known in advance&lt;/td&gt;
&lt;td&gt;Connecting to a known server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;IK&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Initiator's static key sent Immediately&lt;/td&gt;
&lt;td&gt;Responder's key must be known in advance&lt;/td&gt;
&lt;td&gt;Both keys known beforehand&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;XX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Both sides Transmit static key&lt;/td&gt;
&lt;td&gt;No prior knowledge needed&lt;/td&gt;
&lt;td&gt;General-purpose peer discovery&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In a DHT-based peer discovery system, Alice often doesn't know Bob's public key in advance — she discovered him via a topic announcement. And Bob doesn't know Alice's key either. The XX pattern handles this gracefully: both peers learn each other's identity &lt;em&gt;during&lt;/em&gt; the handshake.&lt;/p&gt;

&lt;p&gt;The tradeoff is that XX requires three messages where IK needs only two, but for Hyperswarm's use case — where peers are strangers meeting via a DHT — this is the right choice.&lt;/p&gt;




&lt;h2 id="the-three-message-dance"&gt;The Three-Message Dance&lt;/h2&gt;

&lt;p&gt;The Noise XX handshake has three messages. Each message mixes ephemeral and static keys to progressively build a shared secret.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Terminology:&lt;/strong&gt; In Noise, an &lt;strong&gt;ephemeral key&lt;/strong&gt; is a fresh keypair generated for this specific handshake. It provides forward secrecy — even if someone later steals your static key, they can't decrypt past sessions. A &lt;strong&gt;static key&lt;/strong&gt; is your long-term Ed25519 identity key. Hyperswarm uses &lt;a href="https://github.com/holepunchto/noise-curve-ed" rel="noopener noreferrer"&gt;noise-curve-ed&lt;/a&gt;, which performs Diffie-Hellman directly on Ed25519 points (&lt;code&gt;crypto_scalarmult_ed25519_noclamp&lt;/code&gt;) — no conversion to Curve25519 needed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's what flows over the wire:&lt;/p&gt;

&lt;pre&gt;
sequenceDiagram
    participant A as Alice (Initiator)
    participant B as Bob (Responder)

    Note over A: Generate ephemeral keypair (eA)

    A-&amp;gt;&amp;gt;B: Message 1: eA (Alice's ephemeral public key)
    Note over B: Generate ephemeral keypair (eB)
    Note over B: DH(eB, eA) → shared secret
    Note over B: Encrypt Bob's static key with shared secret

    B-&amp;gt;&amp;gt;A: Message 2: eB + encrypted(sB)
    Note over A: DH(eA, eB) → shared secret
    Note over A: Decrypt Bob's static key
    Note over A: DH(sA, eB) → additional shared secret
    Note over A: Encrypt Alice's static key

    A-&amp;gt;&amp;gt;B: Message 3: encrypted(sA)
    Note over B: DH(eB, sA) → additional shared secret
    Note over B: Derive final session keys

    Note over A,B: Both sides now have: session key, handshakeHash, remotePublicKey
&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 1: The Noise XX three-message handshake. Ephemeral keys go first; static keys are encrypted.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let's unpack each step:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message 1 — Alice introduces herself (ephemerally).&lt;/strong&gt; Alice generates a fresh ephemeral keypair and sends the public half. This is unencrypted — an eavesdropper can see it. But that's fine: ephemeral keys are disposable and reveal nothing about Alice's identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message 2 — Bob responds with his identity.&lt;/strong&gt; Bob generates his own ephemeral keypair, performs a Diffie-Hellman with Alice's ephemeral key to derive a shared secret, and uses that secret to &lt;em&gt;encrypt&lt;/em&gt; his static public key. An eavesdropper sees Bob's ephemeral key (plaintext) and a blob of ciphertext. They can't decrypt it without performing the DH themselves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message 3 — Alice reveals her identity.&lt;/strong&gt; Alice decrypts Bob's static key, performs additional DH operations mixing static and ephemeral keys, and sends her own static public key — encrypted. After this message, both sides have performed all the DH operations needed to derive the final session keys.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; The ephemeral keys serve two purposes. First, they provide &lt;strong&gt;forward secrecy&lt;/strong&gt; for all post-handshake traffic — if an attacker records the handshake and later compromises a static key, they still can't derive the session keys because the ephemeral keys are gone. Second, they protect &lt;strong&gt;identity hiding&lt;/strong&gt; — static keys are encrypted, so a passive eavesdropper can't determine who is talking to whom (though the responder's identity can be probed by an active attacker who initiates a fake handshake — the initiator has stronger identity protection).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="what-comes-out-of-the-handshake"&gt;What Comes Out of the Handshake&lt;/h3&gt;

&lt;p&gt;After the three messages, both peers have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;session key&lt;/strong&gt; — derived from the combined DH operations, used for all subsequent encryption&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;handshakeHash&lt;/strong&gt; — a cryptographic binding of the entire handshake transcript, useful for channel binding&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;remotePublicKey&lt;/strong&gt; — the peer's verified Ed25519 public key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;handshakeHash&lt;/code&gt; is particularly important. It cryptographically binds everything that happened during the handshake — which keys were exchanged, in what order, with what randomness. If a man-in-the-middle had tampered with any message, the hashes wouldn't match and the handshake would fail.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; Noise XX provides &lt;em&gt;authentication&lt;/em&gt; — you know you're talking to the same keypair throughout the session. But authentication is not trust. You don't know &lt;em&gt;who&lt;/em&gt; owns that keypair unless you've verified it out-of-band (pinned it, received it through an invitation flow, etc.). A stranger's keypair is authenticated but untrusted.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="post-handshake-the-encrypted-stream"&gt;Post-Handshake: The Encrypted Stream&lt;/h2&gt;

&lt;p&gt;Once the handshake completes, Secret Stream switches to &lt;a href="https://doc.libsodium.org/secret-key_cryptography/secretstream" rel="noopener noreferrer"&gt;libsodium's secretstream&lt;/a&gt; for all subsequent data. This uses &lt;strong&gt;XChaCha20-Poly1305&lt;/strong&gt; — an AEAD cipher that provides both encryption (confidentiality) and authentication (tamper detection) for every chunk of data.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Terminology:&lt;/strong&gt; &lt;strong&gt;AEAD&lt;/strong&gt; (Authenticated Encryption with Associated Data) means each encrypted message includes a cryptographic tag that proves the data hasn't been modified. If even a single bit changes in transit, the authentication tag verification fails and the recipient knows the data was tampered with.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Why XChaCha20-Poly1305 and not AES-GCM?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;XChaCha20-Poly1305&lt;/th&gt;
&lt;th&gt;AES-GCM&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Nonce size&lt;/td&gt;
&lt;td&gt;24 bytes (safe to generate randomly)&lt;/td&gt;
&lt;td&gt;12 bytes (too small to generate randomly at scale; nonce reuse is catastrophic)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hardware dependency&lt;/td&gt;
&lt;td&gt;No special instructions needed&lt;/td&gt;
&lt;td&gt;Needs AES-NI or ARM Crypto Extensions for full speed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Nonce management&lt;/td&gt;
&lt;td&gt;Automatic (libsodium secretstream handles it)&lt;/td&gt;
&lt;td&gt;Manual (application must track nonces)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Implementation safety&lt;/td&gt;
&lt;td&gt;ARX operations are naturally constant-time&lt;/td&gt;
&lt;td&gt;Cache-timing risks in table-based software implementations&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The 24-byte nonce is the key advantage. With a 12-byte nonce (AES-GCM), you risk catastrophic failure if two messages accidentally use the same nonce. With 24 bytes, the nonce space is large enough that random collision is negligible. In practice, libsodium's secretstream doesn't randomly generate a fresh nonce per message — it uses deterministic nonce evolution with an internal counter and automatic rekeying. The application never touches nonce management.&lt;/p&gt;

&lt;p&gt;The result: application code just reads and writes from a standard Node.js Duplex stream. The encryption is invisible.&lt;/p&gt;

&lt;p&gt;secret-stream-example.js&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const SecretStream = require('@hyperswarm/secret-stream')

// Wrap any raw Duplex stream (e.g., the holepunched UDP path)
const encrypted = new SecretStream(isInitiator, rawStream, {
  keyPair: { publicKey, secretKey }  // Your Ed25519 identity keypair
})

// Wait for the handshake to complete
await encrypted.opened

// Now you have:
console.log(encrypted.remotePublicKey)  // Peer's verified Ed25519 key
console.log(encrypted.handshakeHash)    // Cryptographic binding of handshake

// Read and write just like any stream — encryption is transparent
encrypted.write('Hello, authenticated peer!')
encrypted.on('data', data =&amp;gt; console.log('Received:', data.toString()))&lt;/code&gt;&lt;/pre&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; Secret Stream wraps the &lt;em&gt;entire&lt;/em&gt; connection — not individual messages. You don't choose what to encrypt and what to leave plain. Everything is encrypted, always. This is by design: selective encryption is an anti-pattern that inevitably leaks metadata.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="protomux-one-pipe-many-protocols"&gt;Protomux: One Pipe, Many Protocols&lt;/h2&gt;

&lt;p&gt;We now have an encrypted Duplex stream. One encrypted pipe between two peers. But a real P2P application needs to do many things simultaneously over that connection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replicate a Hypercore (the append-only log from Part 3)&lt;/li&gt;
&lt;li&gt;Sync an Autobase (the multi-writer system from Part 6)&lt;/li&gt;
&lt;li&gt;Send custom application messages (chat, commands, metadata)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You &lt;em&gt;could&lt;/em&gt; design a single protocol that handles all of these in one stream. But that creates a monolithic protocol where changes to one concern affect everything else.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/holepunchto/protomux" rel="noopener noreferrer"&gt;Protomux&lt;/a&gt; solves this by multiplexing multiple independent protocol &lt;strong&gt;channels&lt;/strong&gt; over the single encrypted stream. Each channel has its own message types, its own state machine, and its own lifecycle — but they all share the same underlying connection.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Feynman Moment:&lt;/strong&gt; Think of Protomux like USB. A single USB cable carries power, data, and video — but each protocol runs independently. Your mouse doesn't need to know about your monitor. Similarly, Hypercore replication doesn't need to know about your chat protocol. They share a wire but live in separate channels.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="how-channel-pairing-works"&gt;How Channel Pairing Works&lt;/h3&gt;

&lt;p&gt;When two peers want to communicate over a protocol, they each create a channel with the same &lt;strong&gt;protocol name&lt;/strong&gt; and &lt;strong&gt;id&lt;/strong&gt;. Protomux matches channels across peers by this pair.&lt;/p&gt;

&lt;p&gt;protomux-channels.js&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const Protomux = require('protomux')

// Create a muxer over the encrypted stream
const mux = Protomux.from(encryptedStream)

// Open a channel for "my-chat-protocol"
const channel = mux.createChannel({
  protocol: 'my-chat-protocol',
  id: Buffer.from('room-42'),    // Optional: distinguishes instances
  handshake: chatHandshakeCodec, // Optional: codec for opening handshake

  onopen (handshakeData) {
    console.log('Channel opened! Peer sent:', handshakeData)
  },
  onclose () {
    console.log('Channel closed by peer')
  }
})

// Define message types on the channel
const textMessage = channel.addMessage({
  encoding: c.string,          // compact-encoding codec
  onmessage (msg) {
    console.log('Chat message:', msg)
  }
})

// Open the channel (triggers pairing with the remote side)
channel.open(myHandshakePayload)

// Send a message
textMessage.send('Hello from the other side')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The pairing is &lt;strong&gt;symmetric&lt;/strong&gt;: both sides must create a channel with the same protocol name and id. If Alice creates &lt;code&gt;{ protocol: 'chat', id: roomId }&lt;/code&gt; and Bob creates the same, Protomux pairs them. If only one side creates the channel, it stays open but idle until the other side matches.&lt;/p&gt;
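&lt;p&gt;The pairing rule itself can be sketched as a lookup keyed by protocol name plus id. This is a hypothetical &lt;code&gt;TinyMux&lt;/code&gt;, not Protomux's actual internals, but it captures when a channel becomes active:&lt;/p&gt;

```javascript
// Hypothetical sketch of the pairing rule: a channel activates only once
// both the local and the remote side have announced the same (protocol, id).
class TinyMux {
  constructor () { this.local = new Map(); this.paired = [] }
  _key (protocol, id) { return protocol + '\x00' + id.toString('hex') }
  createChannel (protocol, id) {
    this.local.set(this._key(protocol, id), { protocol, id, open: false })
  }
  onRemoteOpen (protocol, id) {
    const ch = this.local.get(this._key(protocol, id))
    if (ch) { ch.open = true; this.paired.push(ch) } // match: channel opens
    // no matching local channel yet: the remote open waits (omitted here)
  }
}

const mux = new TinyMux()
mux.createChannel('chat', Buffer.from('room-42'))
mux.onRemoteOpen('chat', Buffer.from('room-41')) // different id: no pairing
mux.onRemoteOpen('chat', Buffer.from('room-42')) // same protocol + id: paired
console.log(mux.paired.length) // 1
```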

&lt;h3 id="the-three-lifecycles"&gt;The Three Lifecycles&lt;/h3&gt;

&lt;p&gt;Every Protomux channel has three phases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Opening&lt;/strong&gt; — The channel sends a handshake message to the remote peer. If both sides have opened, the &lt;code&gt;onopen&lt;/code&gt; handler fires with the remote's handshake data. This is where you exchange initial state (capabilities, versions, discovery keys).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Messages&lt;/strong&gt; — While open, either side can send messages. Each message type is registered with &lt;code&gt;channel.addMessage()&lt;/code&gt; and has its own encoding and handler. Messages within a channel are delivered in order.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Closing&lt;/strong&gt; — Either side can close the channel. The &lt;code&gt;onclose&lt;/code&gt; handler fires on the remote. Closing one channel does &lt;em&gt;not&lt;/em&gt; close the underlying connection or affect other channels.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; Hyperswarm deduplicates connections — if you join multiple topics and discover the same peer through several of them, you still get a single connection. Protomux is what makes this work: each topic or Hypercore gets its own channel on the shared connection. Without multiplexing, connection deduplication would be impossible.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="how-hypercore-uses-protomux"&gt;How Hypercore Uses Protomux&lt;/h3&gt;

&lt;p&gt;When you replicate a Hypercore, the replication protocol opens a Protomux channel with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Protocol name:&lt;/strong&gt; &lt;code&gt;'hypercore/alpha'&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Channel id:&lt;/strong&gt; The Hypercore's &lt;strong&gt;discoveryKey&lt;/strong&gt; (a keyed BLAKE2b-256 hash: &lt;code&gt;BLAKE2b-256(key=publicKey, data="hypercore")&lt;/code&gt; — not the public key itself, which would leak what data you're interested in)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Hypercore replication protocol currently defines 10 message types on this channel:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Message&lt;/th&gt;
&lt;th&gt;Direction&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;sync&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Both&lt;/td&gt;
&lt;td&gt;Announce local length and fork ID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;request&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Either&lt;/td&gt;
&lt;td&gt;Ask for a specific block&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;cancel&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Either&lt;/td&gt;
&lt;td&gt;Cancel a pending block request&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;data&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Either&lt;/td&gt;
&lt;td&gt;Respond with block + Merkle proof&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;noData&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Either&lt;/td&gt;
&lt;td&gt;Indicate requested data is unavailable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;want&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Either&lt;/td&gt;
&lt;td&gt;Express interest in a block range&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;unwant&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Either&lt;/td&gt;
&lt;td&gt;Cancel interest in a range&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;bitfield&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Either&lt;/td&gt;
&lt;td&gt;Full bitfield of available blocks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;range&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Either&lt;/td&gt;
&lt;td&gt;Download a contiguous range&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;extension&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Either&lt;/td&gt;
&lt;td&gt;Custom extension messages&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When Alice replicates three different Hypercores with Bob, three Protomux channels open — one per discoveryKey — all sharing the same encrypted connection. Each channel independently tracks which blocks Alice has, which Bob has, and what needs to be exchanged.&lt;/p&gt;




&lt;h2 id="cork-and-uncork-batching-for-performance"&gt;Cork and Uncork: Batching for Performance&lt;/h2&gt;

&lt;p&gt;When an application sends many small messages in quick succession — say, responding to multiple block requests during replication — each &lt;code&gt;send()&lt;/code&gt; call would normally trigger a separate write to the underlying stream. That means separate encryption operations, separate system calls, and separate network packets.&lt;/p&gt;

&lt;p&gt;Protomux (and individual channels) support &lt;strong&gt;corking&lt;/strong&gt;: a pattern that buffers messages and flushes them as a single batch.&lt;/p&gt;

&lt;p&gt;corking-example.js&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Without corking: 100 separate writes
for (const block of blocks) {
  dataMessage.send(block)  // Each send = separate packet
}

// With corking: 1 batched write
mux.cork()
for (const block of blocks) {
  dataMessage.send(block)  // Buffered, not sent yet
}
mux.uncork()  // All 100 messages flushed as one batch&lt;/code&gt;&lt;/pre&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; Corking is about performance, not correctness. Messages are still delivered in order whether you cork or not. But for high-throughput scenarios like replicating a large Hypercore, the difference between 1,000 individual writes and 10 batched writes is significant. Hypercore replication uses corking internally.&lt;/p&gt;
&lt;/blockquote&gt;
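&lt;p&gt;The mechanism itself is simple. Here's a hypothetical &lt;code&gt;CorkedWriter&lt;/code&gt; that captures the pattern; Protomux's real implementation additionally handles framing and backpressure:&lt;/p&gt;

```javascript
// Hypothetical sketch of the corking pattern: buffer sends while corked,
// then flush everything as one write when uncorked.
class CorkedWriter {
  constructor (sink) { this.sink = sink; this.corked = false; this.pending = [] }
  cork () { this.corked = true }
  send (buf) {
    if (this.corked) this.pending.push(buf) // buffered, not written yet
    else this.sink(buf)                     // immediate write
  }
  uncork () {
    this.corked = false
    if (this.pending.length) this.sink(Buffer.concat(this.pending)) // one batch
    this.pending = []
  }
}

let writes = 0
const w = new CorkedWriter(() => writes++)

for (let i = 0; i < 100; i++) w.send(Buffer.from('x')) // 100 separate writes
const uncorkedWrites = writes

w.cork()
for (let i = 0; i < 100; i++) w.send(Buffer.from('x')) // buffered
w.uncork()                                             // 1 batched write
console.log(uncorkedWrites, writes - uncorkedWrites) // 100 1
```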




&lt;h2 id="compact-encoding-the-wire-format"&gt;Compact Encoding: The Wire Format&lt;/h2&gt;

&lt;p&gt;Every message on a Protomux channel needs to be serialized to bytes for transmission and deserialized on the other end. Hyperswarm uses &lt;a href="https://github.com/holepunchto/compact-encoding" rel="noopener noreferrer"&gt;Compact Encoding&lt;/a&gt; — a binary serialization library that's both space-efficient and fast.&lt;/p&gt;

&lt;p&gt;The pattern is always three steps:&lt;/p&gt;

&lt;p&gt;compact-encoding-example.js&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const c = require('compact-encoding')

// Define a message schema
const myMessage = {
  preencode (state, msg) {
    c.uint.preencode(state, msg.type)      // 1. Measure: how many bytes?
    c.string.preencode(state, msg.payload)
  },
  encode (state, msg) {
    c.uint.encode(state, msg.type)          // 2. Write: serialize into buffer
    c.string.encode(state, msg.payload)
  },
  decode (state) {
    return {                                // 3. Read: deserialize from buffer
      type: c.uint.decode(state),
      payload: c.string.decode(state)
    }
  }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Preencode&lt;/strong&gt; calculates the exact byte length needed. &lt;strong&gt;Encode&lt;/strong&gt; writes the data into a pre-allocated buffer. &lt;strong&gt;Decode&lt;/strong&gt; reads it back.&lt;/p&gt;
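&lt;p&gt;The same three-step pattern can be implemented by hand, without the library. This sketch mirrors compact-encoding's cursor-style state but is simplified: single-byte lengths and types instead of the library's varints:&lt;/p&gt;

```javascript
// Self-contained sketch of the preencode/encode/decode pattern.
// State is a cursor into one pre-allocated buffer: { start, end, buffer }.
const uint8 = { // simplified: compact-encoding's real c.uint is a varint
  preencode (state, n) { state.end += 1 },
  encode (state, n) { state.buffer[state.start++] = n },
  decode (state) { return state.buffer[state.start++] }
}
const str = { // simplified: one-byte length prefix, then UTF-8 bytes
  preencode (state, s) { state.end += 1 + Buffer.byteLength(s) },
  encode (state, s) {
    const b = Buffer.from(s)
    state.buffer[state.start++] = b.length
    b.copy(state.buffer, state.start); state.start += b.length
  },
  decode (state) {
    const len = state.buffer[state.start++]
    const s = state.buffer.toString('utf8', state.start, state.start + len)
    state.start += len
    return s
  }
}

function encode (codec, value) {
  const state = { start: 0, end: 0, buffer: null }
  codec.preencode(state, value)          // 1. measure the exact byte length
  state.buffer = Buffer.alloc(state.end)
  codec.encode(state, value)             // 2. write into the buffer
  return state.buffer
}

const codec = {
  preencode (s, m) { uint8.preencode(s, m.type); str.preencode(s, m.payload) },
  encode (s, m) { uint8.encode(s, m.type); str.encode(s, m.payload) },
  decode (s) { return { type: uint8.decode(s), payload: str.decode(s) } }
}

const wire = encode(codec, { type: 3, payload: 'hello' }) // 7 bytes on the wire
const back = codec.decode({ start: 0, end: wire.length, buffer: wire })
console.log(back) // { type: 3, payload: 'hello' }
```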

&lt;p&gt;Why not just use JSON? Two reasons:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Compact Encoding&lt;/th&gt;
&lt;th&gt;JSON&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Overhead&lt;/td&gt;
&lt;td&gt;Minimal (varint lengths, raw bytes)&lt;/td&gt;
&lt;td&gt;High (key names repeated, quotes, escaping)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Speed&lt;/td&gt;
&lt;td&gt;Faster decode (binary, no parsing)&lt;/td&gt;
&lt;td&gt;Slower parse (string processing)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Types&lt;/td&gt;
&lt;td&gt;Native buffers, uints, fixed arrays&lt;/td&gt;
&lt;td&gt;Everything is a string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Consistency&lt;/td&gt;
&lt;td&gt;Matches the rest of the Holepunch stack&lt;/td&gt;
&lt;td&gt;Foreign to the protocol layer&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For a wire protocol that might exchange thousands of messages per second during replication, this matters.&lt;/p&gt;
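&lt;p&gt;A quick back-of-the-envelope comparison makes the gap concrete. The binary layout here uses fixed-width integers for simplicity, where compact-encoding would use even smaller varints:&lt;/p&gt;

```javascript
// Rough size comparison for a single block-request-like message
const msg = { type: 2, index: 123456, length: 65536 }

// JSON: key names, quotes and decimal digits all travel as text
const json = Buffer.from(JSON.stringify(msg))

// Binary: one byte for the type plus two fixed-width integers
// (simplified sketch, not compact-encoding's actual varint layout)
const bin = Buffer.alloc(1 + 4 + 4)
bin.writeUInt8(msg.type, 0)
bin.writeUInt32LE(msg.index, 1)
bin.writeUInt32LE(msg.length, 5)

console.log(json.length, bin.length) // 40 9
```

&lt;p&gt;Multiply that four-to-one ratio by thousands of messages per second and the choice of wire format becomes a throughput decision, not a style preference.&lt;/p&gt;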




&lt;h2 id="the-full-stack-from-udp-to-application"&gt;The Full Stack: From UDP to Application&lt;/h2&gt;

&lt;p&gt;Let's trace a single message through the entire transport stack to see how the pieces fit together:&lt;/p&gt;

&lt;pre&gt;
graph TD
    A["Application writes: 'Hello'"] --&amp;gt; B["Protomux: Route to correct channel"]
    B --&amp;gt; C["Compact Encoding: Serialize to bytes"]
    C --&amp;gt; D["Protomux: Frame with channel ID + message type"]
    D --&amp;gt; E["Secret Stream: Encrypt with XChaCha20-Poly1305"]
    E --&amp;gt; F["UDX: Reliable delivery over UDP"]
    F --&amp;gt; G["Wire: Encrypted bytes on the network"]

    G --&amp;gt; H["UDX: Reassemble reliable stream"]
    H --&amp;gt; I["Secret Stream: Decrypt + verify auth tag"]
    I --&amp;gt; J["Protomux: Demux to correct channel"]
    J --&amp;gt; K["Compact Encoding: Deserialize from bytes"]
    K --&amp;gt; L["Application receives: 'Hello'"]

    style A fill:#22272e,stroke:#539bf5,color:#e6edf3
    style L fill:#22272e,stroke:#539bf5,color:#e6edf3
    style E fill:#22272e,stroke:#a371f7,color:#e6edf3
    style I fill:#22272e,stroke:#a371f7,color:#e6edf3
&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 2: A message travels down the stack on one side and back up on the other. Encryption happens once at the stream level — individual channels don't re-encrypt.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Notice that encryption happens at the Secret Stream level — &lt;em&gt;below&lt;/em&gt; the multiplexing. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All channels share the same encryption session (one handshake, not one per channel)&lt;/li&gt;
&lt;li&gt;A new Protomux channel doesn't require a new Noise handshake&lt;/li&gt;
&lt;li&gt;Channel identities and protocol names are hidden from eavesdroppers (though traffic analysis — packet sizes, timing patterns — can still leak side-channel metadata)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Feynman Moment:&lt;/strong&gt; Why encrypt below the multiplexer, not above it? If you encrypted each channel separately, an eavesdropper could observe the number of channels, the timing of messages per channel, and the size distribution of each protocol's traffic. By encrypting the entire multiplexed stream, all of this metadata is hidden. The eavesdropper sees one opaque stream of bytes.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="the-tradeoffs"&gt;The Tradeoffs&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What You Gain&lt;/th&gt;
&lt;th&gt;What You Pay&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Forward secrecy via ephemeral keys&lt;/td&gt;
&lt;td&gt;1 extra message vs. IK pattern&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Identity hiding (static keys encrypted)&lt;/td&gt;
&lt;td&gt;Cannot authenticate before the handshake completes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mutual authentication without certificate authority&lt;/td&gt;
&lt;td&gt;Must distribute public keys out-of-band for trust&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multiplexed protocols over single connection&lt;/td&gt;
&lt;td&gt;Channel pairing complexity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AEAD encryption on every byte&lt;/td&gt;
&lt;td&gt;Modest CPU overhead for encryption&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Corked batch writes&lt;/td&gt;
&lt;td&gt;Must remember to cork/uncork in hot paths&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The overhead is real but modest. The Noise handshake adds three messages to connection setup (typically &amp;lt; 100ms combined). The XChaCha20-Poly1305 encryption runs at several GB/s on modern hardware. For a P2P application, the NAT traversal from Part 1 dominates the latency budget — the encryption is effectively free by comparison.&lt;/p&gt;




&lt;h2 id="in-practice-building-a-multiplexed-chat"&gt;In Practice: Building a Multiplexed Chat&lt;/h2&gt;

&lt;p&gt;Here's a minimal example that combines everything — Secret Stream for encryption, Protomux for multiplexing, and Compact Encoding for wire serialization:&lt;/p&gt;

&lt;p&gt;multiplexed-chat.js&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const Hyperswarm = require('hyperswarm')
const Protomux = require('protomux')
const c = require('compact-encoding')
const crypto = require('hypercore-crypto')

const swarm = new Hyperswarm()
// Derive a 32-byte discovery topic from a fixed room name
const topic = crypto.discoveryKey(Buffer.alloc(32).fill('heartit-chat-room'))

swarm.on('connection', (encryptedStream, info) =&amp;gt; {
  // encryptedStream is already a Secret Stream (Hyperswarm wraps it)
  const mux = Protomux.from(encryptedStream)

  // Create a chat channel
  const channel = mux.createChannel({
    protocol: 'heartit-chat',
    id: Buffer.from('general'),
    onopen () { console.log('Chat channel opened with', info.publicKey.toString('hex').slice(0, 8)) },
    onclose () { console.log('Chat channel closed') }
  })

  // Define a text message type
  const chatMsg = channel.addMessage({
    encoding: c.string,
    onmessage (text) {
      console.log(`[${info.publicKey.toString('hex').slice(0, 8)}] ${text}`)
    }
  })

  channel.open()

  // Read from stdin and send
  process.stdin.on('data', data =&amp;gt; {
    chatMsg.send(data.toString().trim())
  })
})

// Join the topic as both server and client
const discovery = swarm.join(topic, { server: true, client: true })
discovery.flushed().then(() =&amp;gt; console.log('Waiting for peers...'))&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is ~30 lines of code for an encrypted, authenticated, peer-to-peer chat over a multiplexed connection with NAT traversal. No server, no certificate authority, no monthly bill.&lt;/p&gt;




&lt;h2 id="key-takeaways"&gt;Key Takeaways&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Secret Stream wraps any Duplex stream in Noise XX + XChaCha20-Poly1305 encryption.&lt;/strong&gt; Three handshake messages establish mutual authentication and session keys. After that, libsodium's secretstream encrypts every byte with AEAD.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Noise XX is the right pattern for peer discovery.&lt;/strong&gt; Neither side needs to know the other's public key in advance. Both static keys are transmitted during the handshake, encrypted under ephemeral keys for identity hiding.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Forward secrecy means compromised keys don't expose past sessions.&lt;/strong&gt; Ephemeral keypairs are generated per handshake and discarded afterward. Recording traffic today is useless if keys leak tomorrow.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Protomux multiplexes independent protocols over a single encrypted connection.&lt;/strong&gt; Channels pair by protocol name + id. Each channel has its own message types, lifecycle, and state. Hypercore replication uses &lt;code&gt;hypercore/alpha&lt;/code&gt; channels keyed by discoveryKey.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Encrypt below the multiplexer, not above it.&lt;/strong&gt; This hides the number of active channels, per-channel message timing, and protocol-specific traffic patterns from eavesdroppers.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Cork your writes in hot paths.&lt;/strong&gt; Batching messages with &lt;code&gt;mux.cork()&lt;/code&gt; / &lt;code&gt;mux.uncork()&lt;/code&gt; reduces system calls and encryption operations for high-throughput scenarios.&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2 id="what-s-next"&gt;What's Next&lt;/h2&gt;

&lt;p&gt;We have an encrypted pipe that can carry multiple protocols. Now we need something worth transmitting.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://heartit.tech/p2p-from-scratch-part-3-append-only-truth/" rel="noopener noreferrer"&gt;Part 3&lt;/a&gt;, we'll build an append-only log — Hypercore — that uses a flat in-order Merkle tree to make every byte cryptographically verifiable. We'll see how a peer can download a single block out of millions and prove it hasn't been tampered with, using only a handful of hashes and one Ed25519 signature. This is the data structure that everything else in the Holepunch stack is built on.&lt;/p&gt;




&lt;h2 id="references-further-reading"&gt;References &amp;amp; Further Reading&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/holepunchto/hyperswarm-secret-stream" rel="noopener noreferrer"&gt;holepunchto/hyperswarm-secret-stream — Noise XX + libsodium transport encryption&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/holepunchto/protomux" rel="noopener noreferrer"&gt;holepunchto/protomux — Protocol multiplexing over encrypted streams&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/holepunchto/compact-encoding" rel="noopener noreferrer"&gt;holepunchto/compact-encoding — Binary wire serialization&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://noiseprotocol.org/noise.html" rel="noopener noreferrer"&gt;Noise Protocol Framework — Specification&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://doc.libsodium.org/secret-key_cryptography/secretstream" rel="noopener noreferrer"&gt;libsodium secretstream — XChaCha20-Poly1305 AEAD streaming&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/holepunchto/noise-curve-ed" rel="noopener noreferrer"&gt;holepunchto/noise-curve-ed — Ed25519 Diffie-Hellman (direct, without Curve25519 conversion)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/holepunchto/hypercore" rel="noopener noreferrer"&gt;holepunchto/hypercore — Append-only log (uses Protomux for replication)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack" rel="noopener noreferrer"&gt;Wikipedia — Man-in-the-middle attack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Authenticated_encryption" rel="noopener noreferrer"&gt;Wikipedia — Authenticated Encryption&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: P2P from Scratch — Building on the Holepunch Stack&lt;/strong&gt;
&lt;a href="https://heartit.tech/p2p-from-scratch-part-1-the-internet-is-hostile/" rel="noopener noreferrer"&gt;Part 1: The Internet is Hostile&lt;/a&gt; | &lt;strong&gt;Part 2: Encrypted Pipes (You are here)&lt;/strong&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-3-append-only-truth/" rel="noopener noreferrer"&gt;Part 3: Append-Only Truth&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-4-from-logs-to-databases/" rel="noopener noreferrer"&gt;Part 4: From Logs to Databases&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-5-finding-peers/" rel="noopener noreferrer"&gt;Part 5: Finding Peers&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-6-many-writers-one-truth/" rel="noopener noreferrer"&gt;Part 6: Many Writers, One Truth&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-7-trust-no-one-verify-everything/" rel="noopener noreferrer"&gt;Part 7: Trust No One&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-8-building-for-humans/" rel="noopener noreferrer"&gt;Part 8: Building for Humans&lt;/a&gt;&lt;/p&gt;


&lt;/blockquote&gt;

</description>
      <category>p2p</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>holepunch</category>
    </item>
    <item>
      <title>P2P from Scratch — Part 1: The Internet is Hostile</title>
      <dc:creator>Rahul Garg</dc:creator>
      <pubDate>Thu, 12 Mar 2026 06:17:15 +0000</pubDate>
      <link>https://dev.to/xtmntxraphaelx/p2p-from-scratch-part-1-the-internet-is-hostile-goo</link>
      <guid>https://dev.to/xtmntxraphaelx/p2p-from-scratch-part-1-the-internet-is-hostile-goo</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a footprint so large was so error-free?"
— Alan Kay&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Excerpt:&lt;/strong&gt; You want two computers to talk directly to each other. No server in the middle, no middleman, no monthly bill. Sounds simple — the Internet is a network, after all. But the moment you try it, you discover something uncomfortable: the Internet was never designed for this. Here's why, and how Hyperswarm punches through anyway.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: P2P from Scratch — Building on the Holepunch Stack&lt;/strong&gt;
&lt;strong&gt;Part 1: The Internet is Hostile (You are here)&lt;/strong&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-2-encrypted-pipes/" rel="noopener noreferrer"&gt;Part 2: Encrypted Pipes&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-3-append-only-truth/" rel="noopener noreferrer"&gt;Part 3: Append-Only Truth&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-4-from-logs-to-databases/" rel="noopener noreferrer"&gt;Part 4: From Logs to Databases&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-5-finding-peers/" rel="noopener noreferrer"&gt;Part 5: Finding Peers&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-6-many-writers-one-truth/" rel="noopener noreferrer"&gt;Part 6: Many Writers, One Truth&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-7-trust-no-one-verify-everything/" rel="noopener noreferrer"&gt;Part 7: Trust No One&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-8-building-for-humans/" rel="noopener noreferrer"&gt;Part 8: Building for Humans&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="the-problem-your-computer-doesn-t-have-an-address"&gt;The Problem: Your Computer Doesn't Have an Address&lt;/h2&gt;

&lt;p&gt;Here's something that should bother you: your laptop is connected to the Internet right now, but nobody can reach it.&lt;/p&gt;

&lt;p&gt;Try it. Find your IP address. It's probably something like &lt;code&gt;192.168.1.47&lt;/code&gt;. Now ask a friend on a different Wi-Fi network to send a packet to &lt;code&gt;192.168.1.47&lt;/code&gt;. Nothing happens. That address means nothing outside your home.&lt;/p&gt;

&lt;p&gt;The IP address the rest of the world sees — the one your ISP gave your router — belongs to your &lt;em&gt;router&lt;/em&gt;, not your laptop. And your router has no idea which of the dozens of devices behind it you're trying to reach. Worse, in many countries your ISP doesn't even give your router a real public IP. They put your router behind &lt;em&gt;their own&lt;/em&gt; router, so you're behind two layers of address translation.&lt;/p&gt;

&lt;p&gt;This is &lt;a href="https://en.wikipedia.org/wiki/Network_address_translation" rel="noopener noreferrer"&gt;Network Address Translation&lt;/a&gt; — NAT — and it's the reason peer-to-peer connectivity is hard.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Terminology:&lt;/strong&gt; &lt;strong&gt;NAT&lt;/strong&gt; (Network Address Translation) is a technique where a router rewrites the source IP address of outbound packets and maintains a mapping table so it can route responses back to the correct internal device. It was designed to conserve IPv4 addresses, not to enable direct communication.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Every time you visit a website, your router creates a temporary mapping: "outgoing traffic from &lt;code&gt;192.168.1.47:52301&lt;/code&gt; should appear as &lt;code&gt;203.0.113.5:41928&lt;/code&gt; to the outside world." When the website responds to &lt;code&gt;203.0.113.5:41928&lt;/code&gt;, the router checks its table, finds the mapping, and forwards the response to your laptop.&lt;/p&gt;
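&lt;p&gt;The mapping logic can be sketched as a toy lookup table (illustrative JavaScript, not real router code; the addresses are the ones from the example above):&lt;/p&gt;

```javascript
// Toy model of a NAT translation table (illustrative only).
// Key: the external "ip:port" the outside world sees.
// Value: the internal "ip:port" to forward responses to.
const natTable = new Map()

// The laptop at 192.168.1.47:52301 sends an outbound packet;
// the router picks external port 41928 and records the mapping.
natTable.set('203.0.113.5:41928', '192.168.1.47:52301')

// An inbound packet is forwarded only if it matches a mapping;
// anything from an unknown source is silently dropped.
function routeInbound (externalEndpoint) {
  return natTable.get(externalEndpoint) ?? null
}

console.log(routeInbound('203.0.113.5:41928')) // → '192.168.1.47:52301'
console.log(routeInbound('203.0.113.5:9999'))  // → null (dropped)
```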

&lt;p&gt;This works perfectly for client-server communication. You always initiate the connection. The server always has a fixed public address. The router's mapping table always has the right entry.&lt;/p&gt;

&lt;p&gt;But what if there's no server? What if two laptops, both behind NATs, want to talk to each other?&lt;/p&gt;

&lt;p&gt;Neither one has a public address. Neither router has a mapping entry for the other. Any packet sent to either router from an unknown source gets silently dropped.&lt;/p&gt;

&lt;p&gt;This is the fundamental problem of peer-to-peer networking. It's not a software bug — it's a consequence of address translation and stateful firewalling.&lt;/p&gt;




&lt;h2 id="the-mental-model-two-people-in-soundproof-rooms"&gt;The Mental Model: Two People in Soundproof Rooms&lt;/h2&gt;

&lt;p&gt;Imagine two people, Alice and Bob, each in a soundproof room with a locked door. The door only opens from the inside, and only for a few seconds. Neither person can hear the other through the walls.&lt;/p&gt;

&lt;p&gt;They want to have a conversation.&lt;/p&gt;

&lt;p&gt;If Alice opens her door and shouts, but Bob's door is still closed — he hears nothing. If Bob opens his door a minute later and shouts, Alice's door has already closed — she hears nothing. They could each open their doors a thousand times and never connect.&lt;/p&gt;

&lt;p&gt;But if someone &lt;em&gt;outside&lt;/em&gt; both rooms — a coordinator — passes each of them a note saying "open your door in exactly 10 seconds," and they both do it at the same moment, their voices travel through both open doors and they connect.&lt;/p&gt;

&lt;p&gt;That coordinator is the role a &lt;a href="https://github.com/holepunchto/hyperdht" rel="noopener noreferrer"&gt;DHT&lt;/a&gt; (distributed hash table) plays in peer-to-peer networking. The soundproof room is your NAT. The door opening is your router creating a mapping entry. The simultaneous timing is the critical requirement.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Feynman Moment:&lt;/strong&gt; Here's where the analogy breaks — and where the real engineering begins. In the real world, "opening the door" doesn't just mean creating a NAT mapping. Different routers create mappings with wildly different rules. Some routers assign the same external port no matter who you're talking to. Others assign a different external port for every destination. Some allow any outside address to send traffic through the mapping. Others only allow the specific address you originally contacted. These differences aren't edge cases — they're the entire battlefield.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="how-nats-actually-work-the-four-behavioral-classes"&gt;How NATs Actually Work (The Four Behavioral Classes)&lt;/h2&gt;

&lt;p&gt;Not all NATs are created equal. The way your router creates and filters its mapping table determines whether holepunching can work at all.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Terminology:&lt;/strong&gt; &lt;strong&gt;NAT Mapping&lt;/strong&gt; is the entry your router creates in its translation table when an internal device sends a packet. It links your internal IP:port to an external IP:port and governs what traffic can flow back through.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;NAT Type&lt;/th&gt;
&lt;th&gt;Mapping Behavior&lt;/th&gt;
&lt;th&gt;Inbound Filtering&lt;/th&gt;
&lt;th&gt;Holepunch Friendly?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Full Cone&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Same external port for all destinations&lt;/td&gt;
&lt;td&gt;Any source allowed through&lt;/td&gt;
&lt;td&gt;Yes — easiest&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Restricted Cone&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Same external port for all destinations&lt;/td&gt;
&lt;td&gt;Only IPs you've contacted&lt;/td&gt;
&lt;td&gt;Yes — with coordination&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Port Restricted&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Same external port for all destinations&lt;/td&gt;
&lt;td&gt;Only IP:port pairs you've contacted&lt;/td&gt;
&lt;td&gt;Yes — with precise timing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Symmetric&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Different external port per destination IP:port&lt;/td&gt;
&lt;td&gt;Only the specific destination&lt;/td&gt;
&lt;td&gt;No — port unpredictable&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note on terminology:&lt;/strong&gt; The four names above (Full Cone, Restricted Cone, Port Restricted, Symmetric) come from &lt;a href="https://www.rfc-editor.org/rfc/rfc3489" rel="noopener noreferrer"&gt;RFC 3489&lt;/a&gt; (2003). The later &lt;a href="https://www.rfc-editor.org/rfc/rfc4787" rel="noopener noreferrer"&gt;RFC 4787&lt;/a&gt; (2007) replaces this with a two-axis model — &lt;em&gt;mapping behavior&lt;/em&gt; (Endpoint-Independent / Address-Dependent / Address-and-Port-Dependent) × &lt;em&gt;filtering behavior&lt;/em&gt; — which better captures real-world NATs that don't fit neatly into one of four boxes. Internally, HyperDHT uses a three-level classification — &lt;strong&gt;OPEN&lt;/strong&gt;, &lt;strong&gt;CONSISTENT&lt;/strong&gt; (predictable port mapping), and &lt;strong&gt;RANDOM&lt;/strong&gt; (unpredictable) — which maps to what matters for holepunching: can you predict the port or not?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The first three types share a critical property: the external port stays the same regardless of destination. If your laptop sends a packet from local port &lt;code&gt;52301&lt;/code&gt; to server A and gets mapped to external port &lt;code&gt;41928&lt;/code&gt;, traffic from that same local port to server B also appears on port &lt;code&gt;41928&lt;/code&gt;. This consistency is what makes holepunching possible — a coordinator can observe the port from one connection and tell a peer to aim at that same port.&lt;/p&gt;

&lt;p&gt;Symmetric NAT breaks this entirely. Every new destination gets a fresh, unpredictable external port, and symmetric NATs typically combine this with address-and-port-dependent filtering — making both mapping and filtering unpredictable. A coordinator can observe the port your router assigned when talking to the DHT, but that port is useless for connecting to another peer — the router will assign a completely different one.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; Holepunching is fundamentally about &lt;em&gt;port prediction&lt;/em&gt;. If the coordinator can predict what external port your router will use, peers can aim their packets at it. Symmetric NAT makes this prediction impossible.&lt;/p&gt;
&lt;/blockquote&gt;
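&lt;p&gt;The prediction problem can be made concrete with toy port allocators for the two mapping behaviors (illustrative only: no real router works like this internally, but the observable behavior matches):&lt;/p&gt;

```javascript
// Endpoint-independent mapping (cone NATs): one external port per
// internal socket, reused for every destination.
function coneNat () {
  const externalPort = 41928 // chosen once for this internal socket
  return destination => externalPort
}

// Address-and-port-dependent mapping (symmetric NAT): a fresh
// external port for every new destination.
function symmetricNat () {
  const mappings = new Map()
  let nextPort = 40000
  return destination => {
    if (!mappings.has(destination)) mappings.set(destination, nextPort++)
    return mappings.get(destination)
  }
}

const cone = coneNat()
const sym = symmetricNat()

// A coordinator observes the port used toward the DHT and can
// predict the port a peer should aim at, but only for cone NATs:
console.log(cone('dht.example:49737') === cone('peer.example:1234')) // true
console.log(sym('dht.example:49737') === sym('peer.example:1234'))  // false
```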




&lt;h2 id="the-dance-how-holepunching-actually-works"&gt;The Dance: How Holepunching Actually Works&lt;/h2&gt;

&lt;p&gt;Let's walk through what &lt;a href="https://github.com/holepunchto/hyperswarm" rel="noopener noreferrer"&gt;Hyperswarm&lt;/a&gt; does when two peers want to connect. This isn't abstract protocol theory — this is what happens on your network right now.&lt;/p&gt;

&lt;h3 id="step-1-both-peers-join-the-dht"&gt;Step 1: Both Peers Join the DHT&lt;/h3&gt;

&lt;p&gt;Both Alice and Bob connect to &lt;a href="https://github.com/holepunchto/hyperdht" rel="noopener noreferrer"&gt;HyperDHT&lt;/a&gt; — a Kademlia-based distributed hash table. This establishes their presence in the network and — critically — creates NAT mappings. The DHT nodes can now observe each peer's external IP and port.&lt;/p&gt;

&lt;h3 id="step-2-signaling-via-dht-nodes"&gt;Step 2: Signaling via DHT Nodes&lt;/h3&gt;

&lt;p&gt;Alice wants to connect to Bob. She finds Bob's announcement in the DHT and sends a connection request. But she doesn't send it directly to Bob — she can't, because Bob's NAT would drop it. Instead, she sends it to one of Bob's designated &lt;em&gt;relay nodes&lt;/em&gt; in the DHT.&lt;/p&gt;

&lt;p&gt;This is a key design choice: Hyperswarm doesn't rely on external STUN/TURN servers like WebRTC does. Instead, the DHT nodes &lt;em&gt;themselves&lt;/em&gt; perform the equivalent functions — NAT type detection (STUN's role) and connection relay when holepunching fails (TURN's role). The protocol is different, but the jobs are the same. No single company controls the infrastructure.&lt;/p&gt;

&lt;h3 id="step-3-the-simultaneous-send"&gt;Step 3: The Simultaneous Send&lt;/h3&gt;

&lt;p&gt;The relay delivers Alice's intent to Bob. Now both peers know about each other's external address (IP + port, as observed by the DHT). Both peers simultaneously send UDP packets toward each other's external address.&lt;/p&gt;

&lt;p&gt;Here's the critical moment: when Alice sends a packet to Bob's external address, &lt;em&gt;Alice's&lt;/em&gt; router creates a mapping entry that says "I'm expecting a response from Bob's IP." When Bob's packet arrives at Alice's router — from Bob's IP — the router matches it against the fresh mapping and lets it through.&lt;/p&gt;

&lt;p&gt;The same thing happens on Bob's side. Both doors open at the same moment. The hole is punched.&lt;/p&gt;

&lt;pre&gt;
sequenceDiagram
    participant A as Alice (behind NAT)
    participant AR as Alice's Router
    participant DHT as DHT Relay Node
    participant BR as Bob's Router
    participant B as Bob (behind NAT)

    A-&amp;gt;&amp;gt;DHT: 1. "I want to connect to Bob"
    DHT-&amp;gt;&amp;gt;B: 2. "Alice wants to connect" (relayed)
    Note over DHT: DHT shares external addresses

    A-&amp;gt;&amp;gt;BR: 3. UDP packet → Bob's external addr
    Note over AR: Alice's NAT creates mapping
    B-&amp;gt;&amp;gt;AR: 3. UDP packet → Alice's external addr
    Note over BR: Bob's NAT creates mapping

    Note over AR,BR: Both NATs now have mappings for each other

    B--&amp;gt;&amp;gt;AR: 4. Packet arrives, matches mapping ✓
    AR--&amp;gt;&amp;gt;A: 4. Forwarded to Alice
    A--&amp;gt;&amp;gt;BR: 4. Packet arrives, matches mapping ✓
    BR--&amp;gt;&amp;gt;B: 4. Forwarded to Bob

    Note over A,B: Direct P2P connection established
&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Figure 1: The holepunching dance. Both peers must send before either receives.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Implementation detail:&lt;/strong&gt; The diagram above shows the logical flow. In practice, HyperDHT sends multiple probe rounds with retries — the first packets sent to an unopened NAT mapping are expected to be dropped. The holepunch succeeds when at least one packet from each side arrives &lt;em&gt;after&lt;/em&gt; the other side's outbound packet has created the necessary mapping. This is why timing coordination matters more than single-packet delivery.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All of this refers to &lt;strong&gt;UDP holepunching&lt;/strong&gt;. Hyperswarm uses UDP for the holepunch dance because UDP NAT mappings are simpler and more predictable. TCP holepunching is significantly harder — it requires simultaneous SYN packets and many NATs don't support it reliably. This is why Hyperswarm establishes the UDP path first and then upgrades it to a reliable, encrypted stream.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="step-4-encrypted-stream"&gt;Step 4: Encrypted Stream&lt;/h3&gt;

&lt;p&gt;Once the UDP path is established, Hyperswarm upgrades the connection to a reliable, encrypted stream using &lt;a href="https://github.com/holepunchto/hyperswarm-secret-stream" rel="noopener noreferrer"&gt;Secret Stream&lt;/a&gt; — a Noise XX handshake with Ed25519 keypairs, followed by libsodium's AEAD encryption for all payload data. We'll cover this in detail in &lt;a href="https://heartit.tech/p2p-from-scratch-part-2-encrypted-pipes/" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; The timing requirement isn't just "roughly at the same time." NAT mappings have expiry timers. If Alice sends her packet but Bob's router takes too long to relay the signal, Alice's mapping may expire before Bob's packet arrives. Connection failures that look like "peer unreachable" are often timing desynchronization in disguise.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="when-the-dance-fails-symmetric-nat-and-relay-fallback"&gt;When the Dance Fails: Symmetric NAT and Relay Fallback&lt;/h2&gt;

&lt;p&gt;Holepunching works when at least one side has a predictable port mapping. If Alice is behind a symmetric NAT but Bob is behind a cone or restricted NAT, Bob's external port is still predictable — so the holepunch can target it. Alice's side creates a fresh mapping for the outbound packet to Bob, and Bob's response arrives at that mapping. One predictable side is enough.&lt;/p&gt;

&lt;p&gt;Relay is only needed when &lt;strong&gt;both&lt;/strong&gt; peers are behind randomized (symmetric) NATs. Neither side can predict the other's port, so there's no target to aim at. No amount of timing coordination can overcome both ports being unpredictable.&lt;/p&gt;

&lt;p&gt;Hyperswarm handles this with &lt;strong&gt;relay fallback&lt;/strong&gt;: the connection routes through a DHT node that both peers can reach. Each peer can specify up to 3 relay nodes. The data still flows — just through an intermediary.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;NAT A&lt;/th&gt;
&lt;th&gt;NAT B&lt;/th&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Best case&lt;/td&gt;
&lt;td&gt;Full Cone&lt;/td&gt;
&lt;td&gt;Full Cone&lt;/td&gt;
&lt;td&gt;Direct holepunch&lt;/td&gt;
&lt;td&gt;Low latency, direct path&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Common case&lt;/td&gt;
&lt;td&gt;Restricted&lt;/td&gt;
&lt;td&gt;Port Restricted&lt;/td&gt;
&lt;td&gt;Coordinated holepunch&lt;/td&gt;
&lt;td&gt;Slightly higher latency&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;One-sided&lt;/td&gt;
&lt;td&gt;Symmetric&lt;/td&gt;
&lt;td&gt;Restricted&lt;/td&gt;
&lt;td&gt;Direct holepunch&lt;/td&gt;
&lt;td&gt;Works — B's port is predictable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Worst case&lt;/td&gt;
&lt;td&gt;Symmetric&lt;/td&gt;
&lt;td&gt;Symmetric&lt;/td&gt;
&lt;td&gt;Full relay&lt;/td&gt;
&lt;td&gt;Both ports unpredictable, must relay&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
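&lt;p&gt;The decision logic behind this table fits in a few lines, using HyperDHT's three-level classification from earlier (a toy sketch; the real implementation also handles the OPEN case and probe failures):&lt;/p&gt;

```javascript
// Toy path selection using HyperDHT's NAT classes:
// 'open' | 'consistent' (predictable port) | 'random' (symmetric).
function connectionPath (natA, natB) {
  // Relay is needed only when BOTH sides are unpredictable;
  // one predictable port is enough of a target to aim at.
  if (natA === 'random' && natB === 'random') return 'relay'
  return 'direct'
}

console.log(connectionPath('consistent', 'consistent')) // 'direct'
console.log(connectionPath('random', 'consistent'))     // 'direct'
console.log(connectionPath('random', 'random'))         // 'relay'
```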

&lt;p&gt;On typical consumer networks, Holepunch achieves roughly 95% direct connections and only ~5% relayed. The ~5% happens specifically when both peers are on randomized NATs — since it requires both sides to be unpredictable simultaneously, the probability is low. But it's not uniformly distributed: environments where symmetric NATs are common (like corporate networks) see a higher local relay rate when peers within those environments connect to each other. The application should handle both paths transparently.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; The fallback isn't a failure — it's a design requirement. Any P2P system that doesn't account for symmetric NAT will silently fail for a significant fraction of users. Hyperswarm makes the fallback automatic so applications don't need to handle it manually.&lt;/p&gt;
&lt;/blockquote&gt;
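&lt;p&gt;The ~5% figure is consistent with a simple independence estimate: relay requires both peers to be on randomized NATs at once. The 22% share below is a hypothetical input for illustration, not a measured number:&lt;/p&gt;

```javascript
// Back-of-envelope: if a fraction p of peers sit behind
// randomized (symmetric) NATs and peers pair independently,
// the expected relay rate is p squared.
const p = 0.22 // hypothetical share of randomized NATs
const relayRate = p * p
console.log(relayRate.toFixed(3)) // '0.048', roughly the ~5% observed
```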




&lt;h2 id="beyond-nat-what-makes-hyperswarm-s-dht-different"&gt;Beyond NAT: What Makes Hyperswarm's DHT Different&lt;/h2&gt;

&lt;p&gt;The DHT isn't just a signaling helper — it's the peer discovery layer, and it has its own engineering challenges.&lt;/p&gt;

&lt;h3 id="sybil-resistance-via-node-id-derivation"&gt;Sybil Resistance via Node ID Derivation&lt;/h3&gt;

&lt;p&gt;In a standard Kademlia DHT, nodes choose their own IDs. An attacker could generate thousands of IDs strategically positioned near a target, surrounding it with malicious nodes. This is a &lt;a href="https://en.wikipedia.org/wiki/Sybil_attack" rel="noopener noreferrer"&gt;Sybil attack&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/holepunchto/dht-rpc" rel="noopener noreferrer"&gt;dht-rpc&lt;/a&gt; prevents this by &lt;em&gt;deriving&lt;/em&gt; node IDs from the node's network identity: &lt;code&gt;nodeID = hash(publicIP + publicPort)&lt;/code&gt;. You can't choose your ID — the network determines it from your address. An attacker would need control of specific IP addresses to position themselves near a target in the keyspace.&lt;/p&gt;

&lt;p&gt;This is one defense layer. Round-trip tokens prove IP ownership (preventing spoofing), and the ephemeral-to-persistent transition (described below) prevents rapid routing table pollution.&lt;/p&gt;

&lt;h3 id="the-ephemeral-to-persistent-transition"&gt;The Ephemeral-to-Persistent Transition&lt;/h3&gt;

&lt;p&gt;New nodes don't immediately become permanent members of the DHT's routing tables. They start in &lt;strong&gt;ephemeral mode&lt;/strong&gt; — participating in queries but not stored in other nodes' routing tables.&lt;/p&gt;

&lt;p&gt;After approximately 20–30 minutes of stable uptime (the base threshold is 240 ticks × 5 seconds, but NAT assessment and network conditions add overhead), the node transitions to &lt;strong&gt;persistent mode&lt;/strong&gt; and takes a permanent position in the routing table. After a sleep/wake cycle, this timer resets to ~60 minutes.&lt;/p&gt;

&lt;p&gt;This protects the DHT from short-lived nodes churning the routing tables and from attackers spinning up thousands of nodes to flood the network. If you're running a server on an open NAT, you can bypass this with &lt;code&gt;ephemeral: false&lt;/code&gt;, but for consumer devices behind NATs, the transition period is a feature, not a limitation.&lt;/p&gt;
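&lt;p&gt;The base-threshold arithmetic, and the opt-out mentioned above, look like this (&lt;code&gt;ephemeral&lt;/code&gt; is a hyperdht constructor option; the tick numbers are the ones quoted above):&lt;/p&gt;

```javascript
// Base persistence threshold: 240 ticks at 5 seconds each.
const ticks = 240
const tickSeconds = 5
console.log((ticks * tickSeconds) / 60) // 20 minutes, before NAT
                                        // assessment adds overhead

// Opting out of ephemeral mode (only sensible on an open NAT):
// const DHT = require('hyperdht')
// const node = new DHT({ ephemeral: false })
```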




&lt;h2 id="the-tradeoffs-nothing-is-free"&gt;The Tradeoffs: Nothing Is Free&lt;/h2&gt;

&lt;p&gt;Holepunching and DHT-based discovery solve the fundamental connectivity problem, but they come with costs.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What You Gain&lt;/th&gt;
&lt;th&gt;What You Pay&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No central server dependency&lt;/td&gt;
&lt;td&gt;Connection setup is slower (DHT lookup + holepunch negotiation)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No monthly infrastructure bill&lt;/td&gt;
&lt;td&gt;~5% of connections relay through intermediaries (only when both sides are on randomized NATs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resistant to single-point-of-failure&lt;/td&gt;
&lt;td&gt;First connection takes seconds, not milliseconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Works across ISPs and countries&lt;/td&gt;
&lt;td&gt;Both-sides-symmetric connections get relay latency&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DHT nodes are the infrastructure&lt;/td&gt;
&lt;td&gt;~20–30 minute warmup for new DHT nodes (~60 min after wake)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The connection setup cost is a one-time tax. Once the hole is punched, the direct UDP path is as fast as any other Internet connection. But that initial negotiation — DHT lookup, signaling, simultaneous send, handshake — takes real time. Your UX needs to account for this (we'll cover P2P UX design in &lt;a href="https://heartit.tech/p2p-from-scratch-part-8-building-for-humans/" rel="noopener noreferrer"&gt;Part 8&lt;/a&gt;).&lt;/p&gt;




&lt;h2 id="in-practice-watching-it-happen"&gt;In Practice: Watching It Happen&lt;/h2&gt;

&lt;p&gt;You can observe Hyperswarm's holepunching in action with a minimal script. Install the module and create two peers that discover each other via a shared topic:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;holepunch-demo.js&lt;/code&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const Hyperswarm = require('hyperswarm')

// Both peers must join the same topic — a 32-byte buffer.
// Use a fixed value so both machines connect to the same swarm.
const topic = Buffer.from(
  'a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6a7b8c9d0e1f2a3b4c5d6a7b8c9d0e1f2',
  'hex'
)

const swarm = new Hyperswarm()
swarm.on('connection', (conn, info) =&amp;gt; {
  console.log('Connected to peer!', info.publicKey.toString('hex').slice(0, 8))
  conn.on('data', data =&amp;gt; console.log('Received:', data.toString()))
  conn.write('Hello from ' + (process.argv[2] || 'anonymous'))
})
const discovery = swarm.join(topic, { server: true, client: true })
await discovery.flushed()
console.log('Announced on topic, waiting for peers...')&lt;/code&gt;&lt;/pre&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Code examples in this series use &lt;code&gt;require()&lt;/code&gt; with top-level &lt;code&gt;await&lt;/code&gt; for clarity. To run them, either wrap the body in &lt;code&gt;(async () =&amp;gt; { ... })()&lt;/code&gt; or save with an &lt;code&gt;.mjs&lt;/code&gt; extension and use &lt;code&gt;import&lt;/code&gt; instead of &lt;code&gt;require&lt;/code&gt;. The &lt;a href="https://docs.pears.com/" rel="noopener noreferrer"&gt;Pear Runtime&lt;/a&gt; supports this syntax natively.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Run this same script on two different machines (or two different networks) — e.g., &lt;code&gt;node holepunch-demo.js Alice&lt;/code&gt; on one and &lt;code&gt;node holepunch-demo.js Bob&lt;/code&gt; on the other. Because the topic is hardcoded, both peers discover each other automatically. You'll see the connection event and the data exchange. If both peers are behind randomized (symmetric) NATs, Hyperswarm silently falls back to relay — the &lt;code&gt;connection&lt;/code&gt; event fires either way. If only one side is symmetric, holepunching still works directly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Gotcha:&lt;/strong&gt; If you run both peers on the same machine or the same LAN, you're not testing holepunching at all — you're testing local discovery. Real holepunching only happens across NAT boundaries. To test properly, use two different networks or a cloud VM as the second peer.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2 id="key-takeaways"&gt;Key Takeaways&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;NAT is the fundamental obstacle to P2P connectivity.&lt;/strong&gt; Your device doesn't have a reachable address. Your router drops unsolicited inbound packets. This isn't a bug — it's how the Internet was designed.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Holepunching is a timing dance.&lt;/strong&gt; Both peers must create outbound NAT mappings simultaneously so that each peer's inbound packet matches the other's fresh mapping. The DHT coordinates this timing.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Both-sides-symmetric is the only case that requires relay.&lt;/strong&gt; If only one peer is behind a symmetric NAT, holepunching still works — the other side's port is predictable. Relay is only needed when both peers have randomized port mappings, making prediction impossible on both ends.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Hyperswarm's DHT is more than a phone book.&lt;/strong&gt; Node IDs derived from &lt;code&gt;hash(IP + port)&lt;/code&gt; resist Sybil attacks. Ephemeral-to-persistent transitions resist routing table pollution. DHT nodes double as relay infrastructure.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Budget for ~5% relayed connections.&lt;/strong&gt; On consumer networks, Holepunch achieves ~95% direct connectivity. The ~5% relay fraction occurs specifically when both peers are on randomized NATs — since it requires both sides to be unpredictable, the probability is low. But it's not uniformly distributed: environments where symmetric NATs are common (corporate networks) see higher local relay rates. Your architecture and UX must handle relayed connections as a first-class path, not an error state.&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2 id="what-s-next"&gt;What's Next&lt;/h2&gt;

&lt;p&gt;We've established that two peers can find each other and create a connection path — even through hostile network conditions. But that path is just raw UDP packets. Anyone between the two peers can read them, modify them, or inject fake ones.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://heartit.tech/p2p-from-scratch-part-2-encrypted-pipes/" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;, we'll look at how Hyperswarm turns that raw UDP path into an encrypted, multiplexed communication channel using the Noise protocol, Secret Stream, and Protomux. We'll see how a single encrypted connection carries multiple independent protocol channels — and why that matters when you start replicating data structures in Part 3.&lt;/p&gt;




&lt;h2 id="references-further-reading"&gt;References &amp;amp; Further Reading&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/holepunchto/hyperswarm" rel="noopener noreferrer"&gt;holepunchto/hyperswarm — High-level peer discovery and connection management&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/holepunchto/hyperdht" rel="noopener noreferrer"&gt;holepunchto/hyperdht — DHT layer with keypair connections and NAT traversal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/holepunchto/dht-rpc" rel="noopener noreferrer"&gt;holepunchto/dht-rpc — Low-level Kademlia DHT with Sybil-resistant node IDs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/holepunchto/hyperswarm-secret-stream" rel="noopener noreferrer"&gt;holepunchto/hyperswarm-secret-stream — Noise XX + libsodium transport encryption&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Network_address_translation" rel="noopener noreferrer"&gt;Wikipedia — Network Address Translation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.rfc-editor.org/rfc/rfc4787" rel="noopener noreferrer"&gt;RFC 4787 — NAT Behavioral Requirements for Unicast UDP&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Hole_punching_(networking)" rel="noopener noreferrer"&gt;Wikipedia — Hole Punching (Networking)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Sybil_attack" rel="noopener noreferrer"&gt;Wikipedia — Sybil Attack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.pears.com/" rel="noopener noreferrer"&gt;Pear Runtime Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series: P2P from Scratch — Building on the Holepunch Stack&lt;/strong&gt;
&lt;strong&gt;Part 1: The Internet is Hostile (You are here)&lt;/strong&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-2-encrypted-pipes/" rel="noopener noreferrer"&gt;Part 2: Encrypted Pipes&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-3-append-only-truth/" rel="noopener noreferrer"&gt;Part 3: Append-Only Truth&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-4-from-logs-to-databases/" rel="noopener noreferrer"&gt;Part 4: From Logs to Databases&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-5-finding-peers/" rel="noopener noreferrer"&gt;Part 5: Finding Peers&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-6-many-writers-one-truth/" rel="noopener noreferrer"&gt;Part 6: Many Writers, One Truth&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-7-trust-no-one-verify-everything/" rel="noopener noreferrer"&gt;Part 7: Trust No One&lt;/a&gt; | &lt;a href="https://heartit.tech/p2p-from-scratch-part-8-building-for-humans/" rel="noopener noreferrer"&gt;Part 8: Building for Humans&lt;/a&gt;&lt;/p&gt;


&lt;/blockquote&gt;

</description>
      <category>p2p</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>holepunch</category>
    </item>
  </channel>
</rss>
