<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Felix Zeller</title>
    <description>The latest articles on DEV Community by Felix Zeller (@felix_zeller_6f3c43a7513f).</description>
    <link>https://dev.to/felix_zeller_6f3c43a7513f</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3874897%2Ff860d6bb-c1c0-406a-9f8c-57c1ec4e720e.png</url>
      <title>DEV Community: Felix Zeller</title>
      <link>https://dev.to/felix_zeller_6f3c43a7513f</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/felix_zeller_6f3c43a7513f"/>
    <language>en</language>
    <item>
      <title>Real-Time Breath Detection in the Browser: Spectral Centroid, Dual-Path State Machines, and a Nasty iOS Bug</title>
      <dc:creator>Felix Zeller</dc:creator>
      <pubDate>Sun, 12 Apr 2026 20:16:54 +0000</pubDate>
      <link>https://dev.to/felix_zeller_6f3c43a7513f/real-time-breath-detection-in-the-browser-spectral-centroid-dual-path-state-machines-and-a-nasty-56bb</link>
      <guid>https://dev.to/felix_zeller_6f3c43a7513f/real-time-breath-detection-in-the-browser-spectral-centroid-dual-path-state-machines-and-a-nasty-56bb</guid>
      <description>&lt;p&gt;Microphone-based breath detection sounds simple until you actually try it. Energy goes up, energy goes down — that's a breath, right? In practice, you run into continuous breathing with no silence gaps, noisy environments that drift over time, and (on iOS) a Web Audio API that silently returns all zeros. This post walks through how &lt;a href="https://github.com/shiihaa-app/breath-detection" rel="noopener noreferrer"&gt;@shiihaa/breath-detection&lt;/a&gt; solves each of these problems.&lt;/p&gt;

&lt;p&gt;The library was extracted from &lt;a href="https://shiihaa.app" rel="noopener noreferrer"&gt;shii·haa&lt;/a&gt;, a breathwork and biofeedback app built by Felix Zeller, a Swiss physician (intensive care and internal medicine) who also holds a Diplompsychologe (graduate psychology) degree. It's MIT-licensed, has zero dependencies, and ships TypeScript types.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @shiihaa/breath-detection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Core Idea: Spectral Centroid for Inhale/Exhale Classification
&lt;/h2&gt;

&lt;p&gt;Most breath detectors only measure &lt;em&gt;when&lt;/em&gt; breathing occurs — they can't tell you whether a given phase is an inhale or an exhale. This one can, and the reason is grounded in physiology.&lt;/p&gt;

&lt;p&gt;When you inhale through your nose, air moves through narrow nasal passages and turbinates. That turbulence generates higher-frequency acoustic energy. When you exhale, airflow is slower and more laminar — the spectral energy shifts down.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Phase&lt;/th&gt;
&lt;th&gt;Airflow&lt;/th&gt;
&lt;th&gt;Centroid range&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Inhale&lt;/td&gt;
&lt;td&gt;Turbulent&lt;/td&gt;
&lt;td&gt;~800–2500 Hz&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Exhale&lt;/td&gt;
&lt;td&gt;Laminar&lt;/td&gt;
&lt;td&gt;~200–800 Hz&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The library computes the spectral centroid from a 4096-point FFT over the 150–2500 Hz band on every tick. If the centroid of phase A is meaningfully higher than that of phase B, it labels A as the inhale and B as the exhale. The &lt;code&gt;BreathCycle&lt;/code&gt; object even exposes a &lt;code&gt;labelSwapped&lt;/code&gt; flag for cases where the centroid evidence flipped the initial threshold-based guess.&lt;/p&gt;
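&lt;p&gt;The spectral centroid itself is just the magnitude-weighted mean frequency of the spectrum, restricted to a band. As a rough sketch (not the library's internal code; the function name and defaults are illustrative), the computation looks like this:&lt;/p&gt;

```typescript
// Spectral centroid: magnitude-weighted mean frequency of the spectrum,
// restricted to a band. binHz is the FFT bin width (sampleRate / fftSize).
// Names and band defaults are illustrative, not the library's source.
function spectralCentroid(
  magnitudes: Float32Array, // one magnitude per FFT bin
  binHz: number,
  loHz = 150,
  hiHz = 2500,
): number {
  const lo = Math.ceil(loHz / binHz);
  const hi = Math.min(magnitudes.length - 1, Math.floor(hiHz / binHz));
  let weighted = 0;
  let total = 0;
  for (let i = lo; i <= hi; i++) {
    weighted += i * binHz * magnitudes[i]; // frequency × magnitude
    total += magnitudes[i];
  }
  return total > 0 ? weighted / total : 0;
}
```

&lt;p&gt;With a 4096-point FFT at a 48 kHz sample rate, the bin width is about 11.7 Hz, so the 150–2500 Hz band spans roughly bins 13–213.&lt;/p&gt;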

&lt;h2&gt;
  
  
  The Detection Pipeline
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Microphone → FFT → Energy + Centroid → State Machine → Breath Cycles
                                              ↑
                                     Peak Counter (fallback)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;State machine (primary path):&lt;/strong&gt; Energy crosses the calibrated threshold → active phase begins. Energy drops to near-noise-floor → silent phase begins. One active + one silent = one breath cycle. This works well for deliberate breathwork with natural pauses.&lt;/p&gt;
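&lt;p&gt;Stripped of calibration and timing bookkeeping, the threshold path reduces to a two-state machine. This toy version (invented names, not the library's source) counts completed cycles from a stream of energy values:&lt;/p&gt;

```typescript
type Phase = "active" | "silent";

// Toy threshold state machine: one active span followed by one silent
// span counts as one breath cycle. threshold and floor would come from
// calibration in the real library; here they are plain parameters.
function countThresholdCycles(
  energies: number[],
  threshold: number,
  floor: number,
): number {
  let phase: Phase = "silent";
  let cycles = 0;
  for (const e of energies) {
    if (phase === "silent" && e > threshold) {
      phase = "active"; // breath energy crossed the calibrated threshold
    } else if (phase === "active" && e < floor) {
      phase = "silent"; // dropped back near the noise floor
      cycles++; // active + silent completed
    }
  }
  return cycles;
}
```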

&lt;p&gt;&lt;strong&gt;Peak fallback (secondary path):&lt;/strong&gt; Some people breathe continuously without any silence gap — the energy never fully drops. In that case, the library counts energy &lt;em&gt;peaks&lt;/em&gt; instead. Two peaks = one breath cycle, with the trough between them treated as the phase boundary. The &lt;code&gt;method&lt;/code&gt; field in &lt;code&gt;BreathCycle&lt;/code&gt; tells you which path fired: &lt;code&gt;'threshold'&lt;/code&gt; or &lt;code&gt;'peak'&lt;/code&gt;.&lt;/p&gt;
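&lt;p&gt;The fallback can be sketched just as compactly: find local maxima above the threshold and pair them up. Again, an illustrative reduction rather than the actual implementation:&lt;/p&gt;

```typescript
// Toy peak-pair counter for continuous breathing: a local maximum above
// the threshold counts as one peak, and every second peak closes a cycle.
// (The real library also splits the cycle at the trough between peaks.)
function countPeakCycles(energies: number[], threshold: number): number {
  let peaks = 0;
  for (let i = 1; i < energies.length - 1; i++) {
    const e = energies[i];
    if (e > threshold && e > energies[i - 1] && e >= energies[i + 1]) {
      peaks++;
    }
  }
  return Math.floor(peaks / 2); // two peaks = one breath cycle
}
```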

&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;BreathDetector&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@shiihaa/breath-detection&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;detector&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BreathDetector&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;thresholdFactor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.35&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;// 0 = sensitive, 1 = strict&lt;/span&gt;
  &lt;span class="na"&gt;enableCentroid&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;centroidThreshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;// Hz difference for confident labeling&lt;/span&gt;
  &lt;span class="na"&gt;minCycleGapSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;2.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;detector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onCycle&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;cycle&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cycle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inhaleMs&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;ms in / &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cycle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exhaleMs&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;ms out`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Rate: &lt;/span&gt;&lt;span class="p"&gt;${(&lt;/span&gt;&lt;span class="mi"&gt;60000&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;cycle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cycleMs&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt; breaths/min`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Method: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cycle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// 'threshold' or 'peak'&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Centroid A: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cycle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;centroidA1&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;Hz, B: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cycle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;centroidA2&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;Hz`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;detector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onPhase&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// fires every tick — useful for live UI feedback&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Phase: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;phase&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;, Energy: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;energy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ok&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;detector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Mic access denied&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// 6-second calibration: 2s silence + 4s breathing&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;detector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;calibrate&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Noise floor: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;noiseFloor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Breath max: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;breathMax&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;detector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startDetection&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Auto-Recalibration
&lt;/h2&gt;

&lt;p&gt;One underrated problem: users move rooms, switch from earbuds to laptop mic, or the HVAC kicks on. The library re-samples the noise floor every 10 seconds during active detection, so the threshold adapts without requiring a fresh calibration call.&lt;/p&gt;
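&lt;p&gt;A simple way to picture this (the blend factor here is made up, not one of the library's constants): every refresh, fold the quietest recent energy reading into the stored floor, so the threshold drifts with the room instead of staying pinned to the first calibration:&lt;/p&gt;

```typescript
// Sketch of a drifting noise floor. Every refresh (every 10 s in the
// library), blend the quietest recent energy into the stored floor.
// The blend factor alpha is illustrative.
function updateNoiseFloor(
  currentFloor: number,
  recentEnergies: number[],
  alpha = 0.3, // how fast the floor adapts (0 = never, 1 = instantly)
): number {
  const quietest = Math.min(...recentEnergies);
  return currentFloor * (1 - alpha) + quietest * alpha;
}
```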

&lt;h2&gt;
  
  
  The iOS Problem (and the Fix)
&lt;/h2&gt;

&lt;p&gt;If you're building a Capacitor app, there's a well-known but poorly documented bug: &lt;code&gt;AnalyserNode.getByteFrequencyData()&lt;/code&gt; returns all zeros inside &lt;code&gt;WKWebView&lt;/code&gt; even when &lt;code&gt;getUserMedia&lt;/code&gt; succeeds and the microphone is actually capturing audio.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ Broken on iOS WKWebView&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mediaDevices&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUserMedia&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;audio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AudioContext&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;analyser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createAnalyser&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createMediaStreamSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;analyser&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Uint8Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;analyser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;frequencyBinCount&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;analyser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getByteFrequencyData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// [0, 0, 0, 0, ...] — even when mic is live&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The companion plugin &lt;a href="https://github.com/shiihaa-app/capacitor-audio-analysis" rel="noopener noreferrer"&gt;@shiihaa/capacitor-audio-analysis&lt;/a&gt; routes microphone capture through native &lt;code&gt;AVAudioEngine&lt;/code&gt; in Swift, computes RMS and band energy there, and emits the results as Capacitor events. &lt;code&gt;BreathDetector&lt;/code&gt; can then consume those values instead of touching the Web Audio API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @shiihaa/capacitor-audio-analysis
npx cap &lt;span class="nb"&gt;sync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AudioAnalysis&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@shiihaa/capacitor-audio-analysis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;AudioAnalysis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;gain&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;8.0&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;AudioAnalysis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;audioData&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;RMS:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rms&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;         &lt;span class="c1"&gt;// smoothed 0–1&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Band energy:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bandEnergy&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// 150–2500 Hz proxy&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  BreathCycle Object
&lt;/h2&gt;

&lt;p&gt;Every completed cycle delivers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;inhaleMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;      &lt;span class="c1"&gt;// inhale duration&lt;/span&gt;
  &lt;span class="nl"&gt;exhaleMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;      &lt;span class="c1"&gt;// exhale duration&lt;/span&gt;
  &lt;span class="nl"&gt;holdInMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;      &lt;span class="c1"&gt;// breath hold after inhale (0 if none)&lt;/span&gt;
  &lt;span class="nl"&gt;holdOutMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;     &lt;span class="c1"&gt;// breath hold after exhale (0 if none)&lt;/span&gt;
  &lt;span class="nl"&gt;cycleMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;       &lt;span class="c1"&gt;// total cycle duration&lt;/span&gt;
  &lt;span class="nl"&gt;peakEnergy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;    &lt;span class="c1"&gt;// 0–1&lt;/span&gt;
  &lt;span class="nl"&gt;confidence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;    &lt;span class="c1"&gt;// 0–100&lt;/span&gt;
  &lt;span class="nl"&gt;labelSwapped&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// centroid overrode threshold labeling&lt;/span&gt;
  &lt;span class="nl"&gt;centroidA1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;    &lt;span class="c1"&gt;// spectral centroid for phase 1 (Hz)&lt;/span&gt;
  &lt;span class="nl"&gt;centroidA2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;    &lt;span class="c1"&gt;// spectral centroid for phase 2 (Hz)&lt;/span&gt;
  &lt;span class="nl"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;threshold&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;peak&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What It's Designed For
&lt;/h2&gt;

&lt;p&gt;The library was built for guided breathwork (box breathing, 4-7-8, coherence breathing) and biofeedback applications where you need reliable per-cycle data with inhale/exhale distinction. It's not a medical device. Confidence scores and the &lt;code&gt;labelSwapped&lt;/code&gt; flag give you enough signal to decide whether to trust a given cycle for real-time feedback or discard it.&lt;/p&gt;
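&lt;p&gt;In practice that gating can be a one-liner. This helper is a suggestion for application code, not part of the library's API, and the cutoffs are arbitrary:&lt;/p&gt;

```typescript
// Gate cycles before feeding them to live UI feedback. The 60% cutoff
// and the extra margin for swapped labels are arbitrary choices.
interface CycleLike {
  confidence: number; // 0–100
  labelSwapped: boolean;
}

function isTrustworthy(cycle: CycleLike, minConfidence = 60): boolean {
  // A swapped label means the centroid disagreed with the first guess:
  // still usable, but you might demand higher confidence for it.
  const needed = cycle.labelSwapped ? minConfidence + 20 : minConfidence;
  return cycle.confidence >= needed;
}
```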

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;code&gt;npm install @shiihaa/breath-detection&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/shiihaa-app/breath-detection" rel="noopener noreferrer"&gt;shiihaa-app/breath-detection&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Companion iOS plugin:&lt;/strong&gt; &lt;a href="https://github.com/shiihaa-app/capacitor-audio-analysis" rel="noopener noreferrer"&gt;shiihaa-app/capacitor-audio-analysis&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App:&lt;/strong&gt; &lt;a href="https://shiihaa.app" rel="noopener noreferrer"&gt;shiihaa.app&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>typescript</category>
      <category>webdev</category>
      <category>capacitor</category>
    </item>
    <item>
      <title>Beyond the Timer: How Biofeedback Changes Everything About Breathwork Apps</title>
      <dc:creator>Felix Zeller</dc:creator>
      <pubDate>Sun, 12 Apr 2026 13:08:39 +0000</pubDate>
      <link>https://dev.to/felix_zeller_6f3c43a7513f/beyond-the-timer-how-biofeedback-changes-everything-about-breathwork-apps-5hdp</link>
      <guid>https://dev.to/felix_zeller_6f3c43a7513f/beyond-the-timer-how-biofeedback-changes-everything-about-breathwork-apps-5hdp</guid>
      <description>&lt;p&gt;You've been here before. Deadline approaching, shoulders up around your ears, chest tight. Someone tells you to take a deep breath. So you try. And then you wonder: &lt;em&gt;did that actually do anything?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Most breathwork apps are glorified timers. They show you an animation, tell you when to inhale and exhale, and leave you to guess whether your nervous system got the memo. shii·haa works differently. Instead of simply telling you when to breathe, it listens — and then shows you what your body is actually doing.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Makes shii·haa Different
&lt;/h2&gt;

&lt;p&gt;The core insight is simple but significant: breathing exercises only work if your body responds to them. The question was never &lt;em&gt;how&lt;/em&gt; to breathe. It was always: &lt;em&gt;is it working?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;shii·haa answers that question in real time through two channels.&lt;/p&gt;

&lt;p&gt;The first is &lt;strong&gt;your microphone&lt;/strong&gt;. The app detects the sound of your breath — the gentle turbulence of air moving through your nose and mouth — and translates that into a live rhythm signal. No wearable required. Just breathe normally.&lt;/p&gt;

&lt;p&gt;The second is &lt;strong&gt;your heart&lt;/strong&gt;. Connect a Bluetooth chest strap — Garmin, Polar, Wahoo, or any standard BLE heart rate monitor — and the app reads your heartbeats one by one. Not an average, but a living, moment-to-moment signal that changes with every breath.&lt;/p&gt;

&lt;p&gt;Alone, each signal tells you something. Together, they tell you something remarkable.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Science: Respiratory Sinus Arrhythmia
&lt;/h2&gt;

&lt;p&gt;Your heart rate is not steady. Even sitting still, your heartbeats speed up slightly when you inhale and slow down when you exhale. This is called &lt;strong&gt;Respiratory Sinus Arrhythmia (RSA)&lt;/strong&gt; — not a medical problem, but a sign of a healthy, flexible nervous system.&lt;/p&gt;

&lt;p&gt;The vagus nerve modulates your heart rate in sync with your breathing. When that coupling is strong, your body is in a state of physiological coherence: calm, regulated, resilient. When you're chronically stressed, that coupling weakens. Your heart rate variability flattens.&lt;/p&gt;

&lt;p&gt;This is exactly what clinical biofeedback therapists measure, using equipment that costs thousands of euros, in sessions billed at €150–200 each. shii·haa brings that same measurement to your phone.&lt;/p&gt;

&lt;p&gt;When you breathe slowly — five to six breaths per minute — and your exhale is slightly longer than your inhale, your heart and breathing fall into natural resonance. You can feel it as calm clarity. With shii·haa, you can also &lt;em&gt;see&lt;/em&gt; it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Stimmungs-Check: Start Where You Are
&lt;/h2&gt;

&lt;p&gt;Before choosing a technique, you need to know where you're starting from. That's the Stimmungs-Check (German for "mood check") — thirty seconds of free breathing. No instructions, no pattern. Just breathe naturally while the app listens.&lt;/p&gt;

&lt;p&gt;It analyzes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Breathing rate&lt;/strong&gt; — how many breaths per minute?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inhale-to-exhale ratio&lt;/strong&gt; — are your exhales longer, or are you holding without realizing?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regularity&lt;/strong&gt; — consistent or erratic and shallow?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cardiac response&lt;/strong&gt; — how strongly is your heart responding to each breath?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then it gives you a personalized recommendation. Not the same sequence every day. A suggestion built on what your body just told it.&lt;/p&gt;
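&lt;p&gt;For the curious: the first three of those quantities fall straight out of per-cycle durations. A sketch with illustrative field names (not shii·haa's internals):&lt;/p&gt;

```typescript
// Derive breathing rate, I:E ratio, and a regularity measure from
// per-cycle durations in milliseconds. Field names are illustrative.
interface SimpleCycle { inhaleMs: number; exhaleMs: number; }

function breathMetrics(cycles: SimpleCycle[]) {
  const totals = cycles.map(c => c.inhaleMs + c.exhaleMs);
  const mean = totals.reduce((a, b) => a + b, 0) / totals.length;
  const ratePerMin = 60000 / mean; // breaths per minute
  const ieRatio =
    cycles.reduce((a, c) => a + c.inhaleMs, 0) /
    cycles.reduce((a, c) => a + c.exhaleMs, 0);
  // Regularity as coefficient of variation of cycle length (lower = steadier)
  const variance = totals.reduce((a, t) => a + (t - mean) ** 2, 0) / totals.length;
  const cv = Math.sqrt(variance) / mean;
  return { ratePerMin, ieRatio, cv };
}
```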




&lt;h2&gt;
  
  
  Three Signals, One Picture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Sound&lt;/strong&gt; is the most immediate. The microphone hears the acoustics of your breath and translates them into a rhythm the app can track. It tells the app exactly when you inhale and exhale, down to the second.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heart rate&lt;/strong&gt; is the deeper layer. Beat by beat, the app watches how your cardiac rhythm rises and falls with your breathing. This is where the biofeedback becomes genuinely clinical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern recognition&lt;/strong&gt; ties it together. The app recognizes the shape of your breathing — peaks, valleys, rhythm — and identifies whether you're moving toward coherence or away from it.&lt;/p&gt;

&lt;p&gt;When all three signals agree, the picture is remarkably clear. When your breathing is smooth and your heart rises and falls in step with each breath, you're in coherence — and you can feel it, because that feeling has a name: calm.&lt;/p&gt;




&lt;h2&gt;
  
  
  How a Session Works
&lt;/h2&gt;

&lt;p&gt;A biofeedback session in shii·haa follows a natural arc — from listening, to understanding, to guided practice.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Calibrate
&lt;/h3&gt;

&lt;p&gt;Every session begins with a few seconds of calibration. Hold your phone about 30 cm in front of you. Be still for a moment, then breathe normally. The app learns the difference between silence and your breath — your personal baseline, in this room, right now. If you're wearing a chest strap, it picks up your resting heart rate at the same time.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Breathe Freely
&lt;/h3&gt;

&lt;p&gt;Then you breathe — however you want. There's no timer telling you when to inhale or exhale. The app watches and listens, tracking your natural rhythm in real time. The breathing circle expands and contracts with your actual breath. The oscillograph shows your audio signal. If a chest strap is connected, your heart rate pulses alongside it.&lt;/p&gt;

&lt;p&gt;This is the exploration phase. You discover your own pattern without being told what it should be.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Analyze
&lt;/h3&gt;

&lt;p&gt;After the free session, shii·haa shows you what it observed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Breathing rate&lt;/strong&gt; — your actual breaths per minute&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;I:E ratio&lt;/strong&gt; — how your inhale compares to your exhale&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regularity&lt;/strong&gt; — how consistent your rhythm was&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HRV metrics&lt;/strong&gt; — RMSSD, SDNN, and coherence score (with chest strap)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RSA correlation&lt;/strong&gt; — how tightly your heart followed your breath&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't abstract numbers. They're a mirror. You see exactly how your body responded to your breathing — and where there's room to go deeper.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Your Rhythm, Your Ratios
&lt;/h3&gt;

&lt;p&gt;Here is where shii·haa does something no other breathwork app does.&lt;/p&gt;

&lt;p&gt;Every technique in the app — 4-7-8 relaxation, box breathing, coherence, resonance, and many more — has a defined &lt;em&gt;ratio&lt;/em&gt;: the relationship between inhale, hold, exhale, and pause. A 4-7-8 pattern means inhaling for one unit, holding for 1.75 units, and exhaling for two units. That ratio is what creates the physiological effect. Not the absolute duration.&lt;/p&gt;

&lt;p&gt;Most apps tell everyone to breathe 4 seconds in, 7 seconds hold, 8 seconds out. But if your natural inhale is 3 seconds, forcing a 4-second inhale creates tension. If your natural rhythm is slower, 4 seconds feels rushed. The technique fights your body instead of working with it.&lt;/p&gt;

&lt;p&gt;shii·haa takes a different approach. You start by breathing a technique &lt;strong&gt;freely&lt;/strong&gt; — at your own pace, guided by the ratios but not locked to a metronome. The app observes your natural tempo: how long your inhales actually take, how your body settles into the pattern. It measures your personal base rhythm.&lt;/p&gt;

&lt;p&gt;Then, for the guided session, the app adapts the absolute timings to &lt;em&gt;your&lt;/em&gt; rhythm. If your natural inhale is 3.2 seconds and the technique calls for a 1:1.75:2 ratio, the guided session becomes 3.2s inhale, 5.6s hold, 6.4s exhale. The ratio stays precise. The tempo becomes yours.&lt;/p&gt;
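
&lt;p&gt;The adaptation is simple arithmetic: derive the length of one ratio unit from the measured inhale, then scale every phase by it. A minimal sketch (the function name and rounding are illustrative, not shii·haa's actual code):&lt;/p&gt;

```typescript
// Scale a technique's breathing ratio to a measured natural inhale.
// ratio lists [inhale, hold, exhale] in abstract units;
// naturalInhaleSec is the user's measured inhale in seconds.
function adaptTimings(ratio: number[], naturalInhaleSec: number): number[] {
  const unit = naturalInhaleSec / ratio[0]; // seconds per ratio unit
  return ratio.map((phase) => Math.round(phase * unit * 10) / 10);
}

// 4-7-8 expressed as a 1 : 1.75 : 2 ratio, adapted to a 3.2 s inhale:
const timings = adaptTimings([1, 1.75, 2], 3.2); // [3.2, 5.6, 6.4]
```

&lt;p&gt;The ratio between the phases is preserved exactly; only the absolute durations move with the user's tempo.&lt;/p&gt;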

&lt;p&gt;The app corrects the ratios — not your rhythm. If your exhale was too short relative to the pattern, it gently extends it. If your hold was cut short, it adds a moment. But the fundamental pace stays anchored to how &lt;em&gt;you&lt;/em&gt; actually breathe.&lt;/p&gt;

&lt;p&gt;This matters more than most people realize. The ratio between inhale and exhale determines the autonomic effect: a longer exhale activates the parasympathetic nervous system (calming), equal phases create balance (focus), emphasis on the inhale activates the sympathetic system (energy). The absolute seconds are just a vehicle for the ratio. Your body doesn't care whether you breathe in for 4 seconds or 3.2 seconds. It cares about the &lt;em&gt;Gestalt&lt;/em&gt; of the breath.&lt;/p&gt;

&lt;p&gt;And with the biofeedback running throughout, you can verify it in real time. Watch your HRV improve, your coherence score rise, your heart-breath coupling strengthen — all at a pace that feels natural, not forced.&lt;/p&gt;

&lt;p&gt;That's the difference between following a timer and being guided by your own body. Proof, not hope. Your rhythm, not someone else's.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who This Is For
&lt;/h2&gt;

&lt;p&gt;Anyone who wants to know if their breathing practice is actually working.&lt;/p&gt;

&lt;p&gt;People with anxiety who want evidence that "just breathe" actually helps. Athletes who track HRV for performance. Meditators who want to watch their nervous system settle. People with insomnia who use breathwork before bed. And curious humans who want to understand how their body works.&lt;/p&gt;

&lt;p&gt;You don't need a clinical diagnosis. You just need to be a person with a nervous system.&lt;/p&gt;




&lt;h2&gt;
  
  
  Built by a Doctor, Not a Startup
&lt;/h2&gt;

&lt;p&gt;shii·haa was created by Dr. Felix Zeller — an intensive care physician, emergency doctor, and clinical psychologist based in Zürich.&lt;/p&gt;

&lt;p&gt;Felix built this because he saw what clinical biofeedback could do for patients in real distress — and then watched those same patients go home without any way to continue. The equipment was too expensive. The sessions too infrequent. The gap between clinical insight and everyday life too wide.&lt;/p&gt;

&lt;p&gt;shii·haa is his attempt to close that gap. Not a simplified version of biofeedback, but a genuine measurement tool that happens to live in your pocket.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://shiihaa.app" rel="noopener noreferrer"&gt;shiihaa.app&lt;/a&gt; — runs in Chrome, no install needed. iOS and Android apps also available.&lt;/p&gt;

&lt;p&gt;A Bluetooth chest strap is optional but recommended for full biofeedback. Any standard BLE heart rate monitor works. Dana pricing — pay what you wish.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The next time you take a deep breath, you don't have to wonder if it worked. Your body will show you.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Felix Zeller is an intensive care physician, emergency doctor, and clinical psychologist based in Zürich. He built shii·haa — a breathwork and biofeedback app — with Perplexity Computer. Try it at &lt;a href="https://shiihaa.app" rel="noopener noreferrer"&gt;shiihaa.app&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>healthtech</category>
      <category>biofeedback</category>
      <category>breathwork</category>
      <category>wellness</category>
    </item>
    <item>
      <title>How We Solved iOS Audio Analysis for Breathwork Biofeedback</title>
      <dc:creator>Felix Zeller</dc:creator>
      <pubDate>Sun, 12 Apr 2026 13:05:48 +0000</pubDate>
      <link>https://dev.to/felix_zeller_6f3c43a7513f/how-we-solved-ios-audio-analysis-for-breathwork-biofeedback-5lo</link>
      <guid>https://dev.to/felix_zeller_6f3c43a7513f/how-we-solved-ios-audio-analysis-for-breathwork-biofeedback-5lo</guid>
      <description>&lt;p&gt;Real-time breath detection sounds straightforward until you try to ship it on iOS. Our app, shii·haa, guides users through breathwork and provides live biofeedback — it needs to hear you breathe and respond, frame by frame. In the browser, this is about twenty lines of Web Audio API code. On iOS inside a Capacitor app, it turned into a three-month rabbit hole that ended with us writing a native Swift plugin and open-sourcing it.&lt;/p&gt;

&lt;p&gt;This is the story of that journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plugin links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;npm: &lt;a href="https://www.npmjs.com/package/@shiihaa/capacitor-audio-analysis" rel="noopener noreferrer"&gt;npmjs.com/package/@shiihaa/capacitor-audio-analysis&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/shiihaa-app/capacitor-audio-analysis" rel="noopener noreferrer"&gt;github.com/shiihaa-app/capacitor-audio-analysis&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Challenge
&lt;/h2&gt;

&lt;p&gt;shii·haa is a breathwork and biofeedback app built with Ionic, Capacitor, and Vue 3, running as a progressive web app, an iOS app, and an Android app from a single codebase. One of its core features is real-time audio analysis: the app listens to the user's breathing through the microphone, computes the energy envelope of the signal, and uses that to detect inhale and exhale phases.&lt;/p&gt;

&lt;p&gt;On the web, this works beautifully. The Web Audio API's &lt;code&gt;AnalyserNode&lt;/code&gt; gives you a frequency-domain or time-domain buffer every animation frame. You compute the RMS (root mean square) of the time-domain signal, apply a short rolling average, and you have a breath curve. Clinical-grade signal processing in a weekend.&lt;/p&gt;
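
&lt;p&gt;That envelope really is only a few lines. A sketch of the browser-side RMS step (the helper name is ours; byte samples from &lt;code&gt;getByteTimeDomainData()&lt;/code&gt; are unsigned 8-bit, centered on 128):&lt;/p&gt;

```typescript
// RMS of an AnalyserNode time-domain byte buffer: samples are
// centered on 128 (the silence value), so normalize to [-1, 1]
// before squaring.
function computeRms(timeDomain: Uint8Array): number {
  let sum = 0;
  for (const byte of timeDomain) {
    const sample = (byte - 128) / 128;
    sum += sample * sample;
  }
  return Math.sqrt(sum / timeDomain.length);
}

// In the render loop (browser only):
//   analyser.getByteTimeDomainData(buffer);
//   const level = computeRms(buffer);
```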

&lt;p&gt;Then we packaged the app for iOS. That's when things stopped working.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bug
&lt;/h2&gt;

&lt;p&gt;Inside a Capacitor iOS app, the web content runs in a &lt;code&gt;WKWebView&lt;/code&gt;. Apple's &lt;code&gt;WKWebView&lt;/code&gt; supports &lt;code&gt;getUserMedia&lt;/code&gt; — the API that gives you access to the microphone — starting with iOS 14.5. So microphone access works. The stream arrives. The &lt;code&gt;AudioContext&lt;/code&gt; is created. No errors are thrown.&lt;/p&gt;

&lt;p&gt;But the &lt;code&gt;AnalyserNode&lt;/code&gt; returns garbage.&lt;/p&gt;

&lt;p&gt;Specifically: &lt;code&gt;getByteTimeDomainData()&lt;/code&gt; fills its array entirely with &lt;code&gt;128&lt;/code&gt; — the silence value in an unsigned 8-bit PCM encoding. &lt;code&gt;getByteFrequencyData()&lt;/code&gt; returns all zeros. No matter how loudly you breathe into the phone, the data never changes.&lt;/p&gt;
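
&lt;p&gt;This failure mode can be detected at runtime, which is useful if you want to fall back to a native path automatically. A hedged sketch of such a check (ours, not the app's actual detection logic):&lt;/p&gt;

```typescript
// WKWebView's broken bridge delivers a constant 128 (8-bit silence)
// from getByteTimeDomainData. A buffer that is perfectly flat is a
// strong hint that the Web Audio capture path is dead.
function isFlatSilence(timeDomain: Uint8Array): boolean {
  for (const byte of timeDomain) {
    if (byte !== 128) return false;
  }
  return timeDomain.length > 0;
}
```

&lt;p&gt;In practice you would sample several frames over a second or so before concluding the pipeline is dead, since even a quiet room produces small fluctuations in a working capture path.&lt;/p&gt;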

&lt;p&gt;This is a known WebKit bug. The audio stream from &lt;code&gt;getUserMedia&lt;/code&gt; inside &lt;code&gt;WKWebView&lt;/code&gt; is not actually routed into the Web Audio graph the way it is in Safari. The &lt;code&gt;MediaStreamAudioSourceNode&lt;/code&gt; receives the stream, no exception is raised, but the audio samples delivered to the &lt;code&gt;AnalyserNode&lt;/code&gt; are silent placeholders. The signal is there at the OS level — iOS is capturing your microphone — but the bridge between the native audio session and the WebView's audio rendering process is broken for analysis purposes.&lt;/p&gt;

&lt;p&gt;Related reports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://bugs.webkit.org/show_bug.cgi?id=196293" rel="noopener noreferrer"&gt;bugs.webkit.org/show_bug.cgi?id=196293&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/questions/73585430/ios-webkit-no-audio" rel="noopener noreferrer"&gt;Stack Overflow: iOS WebKit no audio&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://forum.ionicframework.com/t/how-to-enable-background-microphone-audio-access/227238" rel="noopener noreferrer"&gt;Ionic forum thread&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ionic-team/capacitor/issues/1669" rel="noopener noreferrer"&gt;Capacitor GitHub issue&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fundamental problem is architectural. &lt;code&gt;WKWebView&lt;/code&gt; runs the web content in a separate process (the WebKit renderer process) for security and stability. When you call &lt;code&gt;getUserMedia&lt;/code&gt;, the microphone permission is granted to the &lt;em&gt;host app process&lt;/em&gt;, not the renderer. Audio gets bridged across the process boundary, but in a way that works for playback and WebRTC — not for &lt;code&gt;AnalyserNode&lt;/code&gt; tap-style analysis. The audio just doesn't arrive at the right place.&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Tried (and Failed)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Adjusting getUserMedia constraints
&lt;/h3&gt;

&lt;p&gt;Our first hypothesis was that the audio processing pipeline needed specific hints. We tried every combination of constraints imaginable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mediaDevices&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUserMedia&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;audio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;echoCancellation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;noiseSuppression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;autoGainControl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;sampleRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;44100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;channelCount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: same silent data. The constraints affect what the OS attempts to configure, but the sampling gap happens downstream of that.&lt;/p&gt;

&lt;h3&gt;
  
  
  ScriptProcessorNode
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;ScriptProcessorNode&lt;/code&gt; is deprecated but still works in most browsers. It fires an &lt;code&gt;onaudioprocess&lt;/code&gt; callback every buffer, letting you inspect raw samples in JavaScript. We wired it in as an alternative to &lt;code&gt;AnalyserNode&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;processor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;audioContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createScriptProcessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;processor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onaudioprocess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;inputData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inputBuffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getChannelData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="c1"&gt;// inputData was all zeros on iOS WKWebView&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same problem. The audio samples reaching the callback were flat. This confirmed the issue isn't specific to &lt;code&gt;AnalyserNode&lt;/code&gt; — the entire &lt;code&gt;getUserMedia&lt;/code&gt;-to-WebAudio pipeline is broken for analysis in &lt;code&gt;WKWebView&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  cordova-plugin-audioinput
&lt;/h3&gt;

&lt;p&gt;This Cordova plugin routes audio through a native capture path and delivers PCM data to JavaScript via Cordova events. Some community members reported it working as a workaround. We tried bridging it into our Capacitor project.&lt;/p&gt;

&lt;p&gt;It partially worked — we could receive audio data — but the integration was fragile. The plugin's timing model didn't align cleanly with our animation-frame-based rendering loop, latency was unpredictable, and the plugin is effectively unmaintained. More critically, we couldn't control the capture format cleanly enough for our RMS computation to be reliable.&lt;/p&gt;

&lt;h3&gt;
  
  
  GainNode amplification
&lt;/h3&gt;

&lt;p&gt;One forum suggestion was that the signal was present but too quiet. We added a &lt;code&gt;GainNode&lt;/code&gt; with a gain of 50:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;gainNode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;audioContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createGain&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;gainNode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;gain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;gainNode&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;gainNode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;analyser&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No change. When the input buffer is all zeros, amplifying it by 50 still gives you zeros.&lt;/p&gt;

&lt;p&gt;We were stuck.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Solution: A Native Capacitor Plugin with AVAudioEngine
&lt;/h2&gt;

&lt;p&gt;The insight we needed: the microphone &lt;em&gt;is&lt;/em&gt; working in the iOS app. The OS is happily capturing audio. The problem is purely about getting that data into JavaScript. We didn't need the Web Audio API at all — we needed to bypass &lt;code&gt;WKWebView&lt;/code&gt;'s broken bridge entirely.&lt;/p&gt;

&lt;p&gt;The solution was to write a native Capacitor plugin in Swift that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Opens the microphone using &lt;code&gt;AVAudioEngine&lt;/code&gt; — Apple's native, fully-featured audio graph framework&lt;/li&gt;
&lt;li&gt;Installs a tap on the input node that fires on every audio buffer&lt;/li&gt;
&lt;li&gt;Computes the RMS of each buffer in Swift (fast, low-overhead)&lt;/li&gt;
&lt;li&gt;Emits the result back to JavaScript via Capacitor's event system&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The architecture looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[iOS Microphone]
       ↓
[AVAudioEngine InputNode]
       ↓  (installTap callback, runs on audio thread)
[RMS computation in Swift]
       ↓
[Capacitor notifyListeners()]
       ↓
[JavaScript event handler]
       ↓
[Breath detection logic / UI update]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No &lt;code&gt;getUserMedia&lt;/code&gt;. No &lt;code&gt;AnalyserNode&lt;/code&gt;. No broken bridge. The audio never touches the WebView's audio graph.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Swift core
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;@objc&lt;/span&gt; &lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;startListening&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="nv"&gt;call&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;CAPPluginCall&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;engine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;AVAudioEngine&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;inputNode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;inputNode&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;format&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;inputNode&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;outputFormat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;forBus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;inputNode&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;installTap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;onBus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;bufferSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;format&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="k"&gt;weak&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="n"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;
        &lt;span class="k"&gt;guard&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;channelData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;buffer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;floatChannelData&lt;/span&gt;&lt;span class="p"&gt;?[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;frameCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;frameLength&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;// Compute RMS&lt;/span&gt;
        &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="nv"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;Float&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;..&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;frameCount&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;sample&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;channelData&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="n"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;sample&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;sample&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;rms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="kt"&gt;Float&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frameCount&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

        &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;notifyListeners&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"audioLevel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"rms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;rms&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to start audio engine: &lt;/span&gt;&lt;span class="se"&gt;\(&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;localizedDescription&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;AVAudioEngine&lt;/code&gt; handles all the session management, permission checking, and hardware configuration. The &lt;code&gt;installTap&lt;/code&gt; callback runs on Apple's internal audio thread and fires every ~23ms at 44.1 kHz with a buffer size of 1024 frames — low enough latency for real-time biofeedback.&lt;/p&gt;
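
&lt;p&gt;The ~23 ms figure is just the buffer size divided by the sample rate:&lt;/p&gt;

```typescript
// Callback interval of an audio tap: frames per buffer over frames per second.
function bufferDurationMs(bufferSize: number, sampleRate: number): number {
  return (bufferSize / sampleRate) * 1000;
}

const interval = bufferDurationMs(1024, 44100); // ≈ 23.2 ms
```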

&lt;h3&gt;
  
  
  The TypeScript API
&lt;/h3&gt;

&lt;p&gt;Install the plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @shiihaa/capacitor-audio-analysis
npx cap &lt;span class="nb"&gt;sync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AudioAnalysis&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@shiihaa/capacitor-audio-analysis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Request microphone permission&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;AudioAnalysis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;requestPermission&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Listen for audio level events&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;AudioAnalysis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;audioLevel&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;rms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rms&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// 0.0 (silence) to ~0.5 (loud breathing)&lt;/span&gt;
  &lt;span class="nf"&gt;updateBreathCurve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;level&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Start capture&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;AudioAnalysis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startListening&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Stop when done&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;AudioAnalysis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stopListening&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;AudioAnalysis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;removeAllListeners&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;rms&lt;/code&gt; value is a floating-point number between &lt;code&gt;0.0&lt;/code&gt; (complete silence) and roughly &lt;code&gt;0.3–0.5&lt;/code&gt; for a loud inhale. We apply a short rolling window average in JavaScript and threshold-detect peaks for inhale/exhale phase segmentation.&lt;/p&gt;
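
&lt;p&gt;The smoothing and thresholding described here can be sketched as follows. The window length and threshold values are illustrative placeholders; the app calibrates its own values per user and environment:&lt;/p&gt;

```typescript
// Smooth raw RMS values with a fixed-length rolling window, then
// segment breath activity with a simple threshold plus hysteresis.
class BreathSmoother {
  private window: number[] = [];
  constructor(private windowSize: number = 10) {}

  // Push a new RMS sample and return the current rolling average.
  push(rms: number): number {
    this.window.push(rms);
    if (this.window.length > this.windowSize) this.window.shift();
    return this.window.reduce((a, b) => a + b, 0) / this.window.length;
  }
}

// Hysteresis: enter "breathing" above the high threshold,
// leave it only when the level drops below the low threshold.
function nextPhase(level: number, breathing: boolean, hi = 0.08, lo = 0.04): boolean {
  return breathing ? level > lo : level > hi;
}
```

&lt;p&gt;The hysteresis gap keeps the detector from chattering when the signal hovers near a single threshold.&lt;/p&gt;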

&lt;p&gt;On Android and web, the plugin gracefully falls back to the standard &lt;code&gt;getUserMedia&lt;/code&gt; + &lt;code&gt;AnalyserNode&lt;/code&gt; path — those platforms don't have the &lt;code&gt;WKWebView&lt;/code&gt; limitation, so the native layer isn't needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;After integrating the plugin, real-time breath detection on iOS worked immediately. The RMS signal from &lt;code&gt;AVAudioEngine&lt;/code&gt; is clean, low-latency, and reliable across device generations from iPhone 8 to iPhone 16 Pro.&lt;/p&gt;

&lt;p&gt;But we didn't stop at audio. shii·haa uses a three-signal approach for breath phase detection:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Signal 1 — Microphone energy (via this plugin):&lt;/strong&gt; The RMS envelope of breathing sounds, filtered with a 300ms rolling average to smooth out noise transients.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Signal 2 — Heart rate variability via BLE:&lt;/strong&gt; We connect to Garmin, Polar, and Wahoo chest straps over Bluetooth Low Energy and stream real-time R-R intervals. During inhalation, heart rate naturally accelerates; during exhalation, it decelerates. This is Respiratory Sinus Arrhythmia (RSA) — a well-characterized physiological coupling, documented extensively in cardiorespiratory research, that serves as an independent breath-phase signal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Signal 3 — Threshold analysis:&lt;/strong&gt; A configurable amplitude threshold that the user can calibrate to their breathing pattern and environment.&lt;/p&gt;

&lt;p&gt;When all three signals agree on a phase transition, the biofeedback is accurate enough for clinical-grade guidance. This matters because shii·haa is designed by Dr. Felix Zeller — an intensive care physician and clinical psychologist — not just as a wellness app, but as a precision tool.&lt;/p&gt;
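
&lt;p&gt;A minimal way to express "all three signals agree" in code (the voting rule and names are ours, a sketch rather than the shipped implementation):&lt;/p&gt;

```typescript
type Phase = "inhale" | "exhale";

// Each detector votes independently; only a unanimous vote flips
// the phase, otherwise the current phase is held.
function fusePhase(
  current: Phase,
  micVote: Phase,
  hrVote: Phase,
  thresholdVote: Phase
): Phase {
  const votes = [micVote, hrVote, thresholdVote];
  const proposed = votes[0];
  return votes.every((v) => v === proposed) ? proposed : current;
}
```

&lt;p&gt;Holding the current phase on disagreement trades a little latency for far fewer false transitions, which is the right trade for guidance that users follow breath by breath.&lt;/p&gt;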

&lt;p&gt;The biofeedback loop is now live: breathe in, watch the indicator climb, reach the target zone, shift to exhale. The app guides users through resonance frequency breathing (around 5–6 breath cycles per minute), the rhythm that maximally amplifies RSA and activates the parasympathetic nervous system. All of this running on a phone, with a chest strap, with sub-100ms feedback latency.&lt;/p&gt;




&lt;h2&gt;Open Source&lt;/h2&gt;

&lt;p&gt;We've open-sourced the plugin because this is a problem every Capacitor developer building audio analysis features on iOS will hit, and the existing workarounds are insufficient.&lt;/p&gt;

&lt;p&gt;The plugin is MIT-licensed. Get it here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/@shiihaa/capacitor-audio-analysis" rel="noopener noreferrer"&gt;npmjs.com/package/@shiihaa/capacitor-audio-analysis&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/shiihaa-app/capacitor-audio-analysis" rel="noopener noreferrer"&gt;github.com/shiihaa-app/capacitor-audio-analysis&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Contributions welcome — especially:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Android native path:&lt;/strong&gt; Currently falls back to Web Audio on Android, but a native &lt;code&gt;AudioRecord&lt;/code&gt;-based path could improve latency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Additional metrics:&lt;/strong&gt; Zero-crossing rate, spectral centroid, or other features for more sophisticated breath classification&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background audio support:&lt;/strong&gt; Keeping the session alive when the app is backgrounded (requires &lt;code&gt;UIBackgroundModes: audio&lt;/code&gt; in &lt;code&gt;Info.plist&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing:&lt;/strong&gt; Real-device testing across iOS versions — the &lt;code&gt;WKWebView&lt;/code&gt; audio stack changes between releases&lt;/li&gt;
&lt;/ul&gt;
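&lt;p&gt;For contributors curious about the "additional metrics" item, both features are straightforward to prototype in JavaScript. These are illustrative sketches; &lt;code&gt;binHz&lt;/code&gt; would be &lt;code&gt;sampleRate / fftSize&lt;/code&gt; in a Web Audio context:&lt;/p&gt;

```javascript
// Zero-crossing rate: fraction of sample pairs where the signal changes sign.
// Works directly on raw PCM samples.
function zeroCrossingRate(frame) {
  let crossings = 0;
  for (let i = 1; frame.length > i; i += 1) {
    if (Math.sign(frame[i]) !== Math.sign(frame[i - 1])) crossings += 1;
  }
  return crossings / frame.length;
}

// Spectral centroid: energy-weighted mean frequency of an FFT magnitude
// spectrum. magnitudes[k] is the magnitude of bin k; binHz is the bin width.
function spectralCentroid(magnitudes, binHz) {
  let weighted = 0;
  let total = 0;
  for (let k = 0; magnitudes.length > k; k += 1) {
    weighted += k * binHz * magnitudes[k];
    total += magnitudes[k];
  }
  return total > 0 ? weighted / total : 0;
}
```

&lt;p&gt;Breath noise is broadband, so its centroid sits well above voiced speech; that contrast is what makes the centroid useful for classification.&lt;/p&gt;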

&lt;p&gt;If you're building a meditation app, a voice analysis tool, a pitch detector, or anything else that needs real microphone data on iOS inside a Capacitor app, this plugin removes the biggest obstacle.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Dr. Felix Zeller is an intensive care physician, emergency doctor, and clinical psychologist based in Zürich. He built shii·haa — a breathwork and biofeedback app — with Perplexity Computer. Try it at &lt;a href="https://shiihaa.app" rel="noopener noreferrer"&gt;shiihaa.app&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ios</category>
      <category>capacitor</category>
      <category>javascript</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
