wellallyTech
Hack Your Sleep: Building a Browser-Based Snoring & Apnea Tracker with Whisper and FFT

Do you ever wake up feeling like a truck hit you, despite "sleeping" for eight hours? 😴 For many, the culprit is sleep apnea or chronic snoring, but clinical sleep studies are expensive and invasive. What if you could build your own privacy-first monitoring station using just a web browser?

In this tutorial, we are diving deep into real-time audio processing and machine learning in the browser. We will combine the power of the Web Audio API for frequency analysis (FFT) and Whisper.cpp (via WebAssembly) to create a "Geek-style" sleep monitor. This system doesn't just record noise; it classifies breathing patterns and detects anomalies using edge AI.

💡 Advanced Note: Implementing localized AI models requires careful memory management. If you are looking for more production-ready patterns for edge AI or distributed system architectures, check out the deep-dive articles over at WellAlly Tech Blog.


The Architecture

Building a real-time monitor requires a pipeline that balances low-latency processing with high-accuracy classification. We use FFT (Fast Fourier Transform) as a first-pass filter to save battery: we only wake up the heavy Whisper model when specific frequency thresholds are met.

graph TD
    A[Microphone Input] --> B[Web Audio API Context]
    B --> C[AnalyserNode: FFT Analysis]
    C --> D{Magnitude > Threshold?}
    D -- No --> E[Idle / Low Power]
    D -- Yes --> F[Audio Buffer Window]
    F --> G[Web Worker: Whisper.cpp Wasm]
    G --> H[Classification: Snore / Apnea / Normal]
    H --> I[React UI Dashboard]

Prerequisites

To follow this advanced guide, you'll need:

  • React (for the UI state management)
  • Whisper.cpp (compiled to WebAssembly)
  • Knowledge of Web Audio API
  • A decent microphone (built-in laptop mics work, but external is better)

Step 1: Capturing Audio with Web Audio API

First, we need to hook into the browser's audio stream and extract frequency data using an AnalyserNode. This allows us to "see" the sound before we try to "understand" it.

// useAudioProcessor.js
import { useRef } from 'react';

export const useAudioProcessor = (onPeakDetected) => {
  const audioCtx = useRef(null);
  const analyser = useRef(null);

  const initStream = async () => {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    audioCtx.current = new (window.AudioContext || window.webkitAudioContext)();
    const source = audioCtx.current.createMediaStreamSource(stream);

    analyser.current = audioCtx.current.createAnalyser();
    analyser.current.fftSize = 2048; // 1024 frequency bins
    source.connect(analyser.current);

    const freqData = new Uint8Array(analyser.current.frequencyBinCount);
    const timeData = new Float32Array(analyser.current.fftSize);

    const analyze = () => {
      analyser.current.getByteFrequencyData(freqData);
      // Low-frequency focus: at a 48 kHz sample rate, the first 10 bins
      // cover roughly 0-230 Hz, where most snoring energy lives
      const lowFreqMagnitude = freqData.slice(0, 10).reduce((a, b) => a + b, 0) / 10;

      if (lowFreqMagnitude > 150) {
        // Hand the current time-domain window to the caller for classification
        analyser.current.getFloatTimeDomainData(timeData);
        onPeakDetected(timeData.slice());
      }
      requestAnimationFrame(analyze);
    };
    analyze();
  };

  return { initStream };
};

Step 2: The ML Engine (Whisper.cpp + WebAssembly)

Whisper is an incredible general-purpose speech recognition model. By using the tiny or base model via WebAssembly, we can run inference directly in a Web Worker without sending your private bedroom audio to a cloud server.

// whisper.worker.js
import { Whisper } from 'whisper-wasm-module';

let instance;

self.onmessage = async (e) => {
  const { type, audioBuffer } = e.data;

  if (type === 'INIT') {
    instance = await Whisper.createInstance('whisper-tiny-en-q5_1.bin');
  } else if (type === 'INFERENCE') {
    if (!instance) return; // Ignore audio that arrives before the model loads

    // Process the 16kHz mono PCM buffer
    const result = await instance.transcribe(audioBuffer);

    // Logic: look for specific keywords or acoustic patterns.
    // Note: Whisper can also emit non-speech tags if prompted correctly
    self.postMessage({ result });
  }
};
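One gotcha: Whisper expects 16 kHz mono PCM, while getUserMedia streams usually arrive at 44.1 or 48 kHz. Before posting a buffer to the worker, you need to downsample it. Here is a minimal sketch using linear interpolation (the function name `resampleTo16k` is my own; for higher fidelity you could use an `OfflineAudioContext` instead):

```javascript
// Sketch: downsample a Float32 PCM buffer to the 16 kHz Whisper expects.
// Linear interpolation is crude but fine for detecting coarse acoustic events.
function resampleTo16k(input, inputRate) {
  const targetRate = 16000;
  if (inputRate === targetRate) return input.slice();

  const ratio = inputRate / targetRate;
  const outLength = Math.floor(input.length / ratio);
  const output = new Float32Array(outLength);

  for (let i = 0; i < outLength; i++) {
    // Interpolate between the two nearest source samples
    const pos = i * ratio;
    const left = Math.floor(pos);
    const right = Math.min(left + 1, input.length - 1);
    const frac = pos - left;
    output[i] = input[left] * (1 - frac) + input[right] * frac;
  }
  return output;
}
```

Call this on the buffered window right before `postMessage`, passing `audioCtx.current.sampleRate` as the input rate.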

Step 3: Bridging FFT and Whisper in React

We don't want to run Whisper 24/7; it would melt your CPU. Instead, we use a "Sliding Window" strategy. We buffer the last 5 seconds of audio, and if the FFT detects a "Peak" (potential snore), we send that buffer to the ML worker.

import React, { useState, useRef, useEffect } from 'react';
import { useAudioProcessor } from './hooks/useAudioProcessor';

const SleepMonitor = () => {
  const [events, setEvents] = useState([]);
  const worker = useRef(null);

  const handlePeak = (audioData) => {
    // Send audio to the Wasm worker for classification
    worker.current?.postMessage({
      type: 'INFERENCE',
      audioBuffer: audioData
    });
  };

  const { initStream } = useAudioProcessor(handlePeak);

  useEffect(() => {
    // Create the worker once, load the model, and listen for results
    worker.current = new Worker('whisper.worker.js');
    worker.current.postMessage({ type: 'INIT' });

    worker.current.onmessage = (e) => {
      const { text } = e.data.result;
      if (text.includes("[snoring]") || text.includes("[gasping]")) {
        setEvents(prev => [...prev, { time: Date.now(), type: text }]);
      }
    };

    return () => worker.current.terminate();
  }, []);

  return (
    <div className="p-6 bg-slate-900 text-white rounded-xl">
      <h2 className="text-2xl font-bold">🌙 Sleep Lab Console</h2>
      <button onClick={initStream} className="bg-blue-500 p-2 rounded mt-4">
        Start Monitoring
      </button>
      <div className="mt-8">
        {events.map(e => (
          <div key={e.time} className="border-l-2 border-red-500 pl-4 mb-2">
            <span className="text-gray-400">{new Date(e.time).toLocaleTimeString()}</span>
            <p className="text-lg">Detected: {e.type}</p>
          </div>
        ))}
      </div>
    </div>
  );
};
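The "last 5 seconds" buffer mentioned above can be implemented as a ring buffer that overwrites its oldest samples. Here is a sketch (the class name `SlidingWindow` is my own); feed it chunks from the audio callback and call `snapshot()` when a peak fires:

```javascript
// Sketch: a fixed-size ring buffer that always holds the most recent
// `seconds * sampleRate` samples of audio.
class SlidingWindow {
  constructor(seconds, sampleRate) {
    this.buffer = new Float32Array(seconds * sampleRate);
    this.writeIndex = 0;
    this.filled = false;
  }

  push(chunk) {
    for (let i = 0; i < chunk.length; i++) {
      this.buffer[this.writeIndex] = chunk[i];
      this.writeIndex = (this.writeIndex + 1) % this.buffer.length;
      if (this.writeIndex === 0) this.filled = true;
    }
  }

  // Return the window in chronological order (oldest sample first)
  snapshot() {
    if (!this.filled) return this.buffer.slice(0, this.writeIndex);
    const out = new Float32Array(this.buffer.length);
    out.set(this.buffer.subarray(this.writeIndex));
    out.set(this.buffer.subarray(0, this.writeIndex),
            this.buffer.length - this.writeIndex);
    return out;
  }
}
```

A `SlidingWindow(5, 16000)` holds exactly the 5-second, 16 kHz window Whisper wants, so the snapshot can go straight into `postMessage`.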

The "Official" Way: Optimizing for Production 🚀

While this DIY setup is great for a weekend project, production-grade sleep monitoring requires much more robust handling of:

  1. Background Noise Cancellation: Distinguishing between a snoring partner and a nearby fan.
  2. Model Quantization: Reducing the 75MB Whisper model to something more mobile-friendly.
  3. Data Persistence: Storing logs locally using IndexedDB.
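For point 3, a minimal IndexedDB sketch might look like this (the database and store names, 'sleep-lab' and 'events', are illustrative):

```javascript
// Sketch: persist detected sleep events locally with IndexedDB,
// so a night of logs survives a page refresh.
function openLogDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('sleep-lab', 1);
    req.onupgradeneeded = () => {
      // Key events by their timestamp
      req.result.createObjectStore('events', { keyPath: 'time' });
    };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function saveEvent(event) {
  // event shape matches the React state, e.g. { time: Date.now(), type: '[snoring]' }
  const db = await openLogDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction('events', 'readwrite');
    tx.objectStore('events').put(event);
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}
```

Calling `saveEvent` inside the worker's `onmessage` handler would make each detection durable as soon as it is classified.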

For more production-ready examples of how to handle high-throughput multimodal data and advanced WebAssembly optimization techniques, I highly recommend exploring the specialized resources at WellAlly Tech Blog. They cover deep technical patterns that go beyond simple tutorials.


Conclusion

By combining the Web Audio API for lightweight frequency analysis and Whisper.cpp for high-level classification, we've built a powerful, private, and extensible sleep monitoring tool.

The web platform has evolved significantly; we no longer need complex Python backends to do meaningful AI. The browser is now a first-class citizen for multimodal processing.

What's next?

  • Try adding a Canvas API visualizer to see your breath in real-time.
  • Implement a "Silence Detection" algorithm to stop processing when the room is quiet.
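The silence-detection idea can start as a simple RMS gate; the threshold value below is an assumption you would tune against your own room noise:

```javascript
// Sketch: RMS-based silence gate. If the room has been below the
// threshold for long enough, you can stop scheduling FFT and inference work.
function isSilent(samples, threshold = 0.01) {
  let sumSquares = 0;
  for (let i = 0; i < samples.length; i++) {
    sumSquares += samples[i] * samples[i];
  }
  const rms = Math.sqrt(sumSquares / samples.length);
  return rms < threshold;
}
```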

Did you find this "Geek-style" build helpful? Drop a comment below with your thoughts or any issues you ran into with the Wasm integration! 👇
