
diosamuel


How About We Track Snoring and Coughing While We Sleep? WellSleep, Sleep Well.

Agentic Postgres Challenge Submission

This is a submission for the Agentic Postgres Challenge with Tiger Data


[DRAFT]


Well, Sleep Is Important, Right?

As programmers, we're notorious for sacrificing sleep in the name of deadlines and caffeine; maybe you do it too. But over time, lack of quality sleep takes its toll. Tracking your sleep patterns can reveal a lot about your health: how long you actually sleep, how often you snore, when you cough, even changes in your heart rate.

So here's an idea: what if we built a simple health tracker that listens to you while you sleep?
By analyzing audio, we could detect and count coughs and snores throughout the night, turning ordinary sound into meaningful insights about your sleep health.

Let's break down the flow:
User records in the app -> extract the audio -> derive insights from it -> results

Hold on, how do we process the audio? 🤔

Surprisingly, audio files aren't as straightforward as saving numbers or text: they're binary data, and that means we have to handle them differently. But here's the cool part: we can translate audio into a more familiar format. By extracting features like the waveform or RMS (energy) values over time, we can turn sound into a ✨ time-series format.
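To make that concrete, here is a minimal sketch of the idea, without librosa or numpy, that turns raw audio samples into `(time, rms)` rows. The `samples` list stands in for a decoded mono waveform, and the tiny frame/hop sizes are illustrative only; a real pipeline would use larger frames (e.g. 2048 samples).

```python
import math

def rms_timeseries(samples, sample_rate, frame_size=4, hop_size=2):
    """Slide a window over the waveform and yield (time_seconds, rms) rows."""
    rows = []
    for start in range(0, len(samples) - frame_size + 1, hop_size):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(s * s for s in frame) / frame_size)
        rows.append((start / sample_rate, rms))
    return rows

# Example: a tiny 8-sample "recording" at 8 samples per second
rows = rms_timeseries([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0], sample_rate=8)
```

Each row is exactly the kind of record a time-series database is built to store: a timestamp and a measurement.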

Each audio recording then becomes a stream of values across thousands (or even millions) of tiny time slices. That's where traditional PostgreSQL starts to struggle: it isn't built to efficiently store or query that kind of high-frequency data.

Here comes TigerData (formerly Timescale, the company behind TimescaleDB): a time-series database designed for exactly this. It brings lightning-fast queries, hyperscale architecture, and columnar storage: everything you need to handle audio data like a pro.
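As a sketch of how the extracted features map onto storage, here is a small helper that shapes per-frame amplitudes into rows matching the feature columns used later in this post (`start_time`, `end_time`, `amplitude_mean`, `amplitude_max`, `amplitude_min`, `amplitude_rms`). The `features` table name comes from the post; the `INSERT` string is illustrative and not tested against a live database.

```python
def to_feature_rows(frames, frame_duration):
    """frames: list of amplitude lists, one per analysis frame."""
    rows = []
    for i, frame in enumerate(frames):
        start = i * frame_duration
        rows.append({
            "start_time": start,
            "end_time": start + frame_duration,
            "amplitude_mean": sum(frame) / len(frame),
            "amplitude_max": max(frame),
            "amplitude_min": min(frame),
            "amplitude_rms": (sum(s * s for s in frame) / len(frame)) ** 0.5,
        })
    return rows

# Illustrative parametrized INSERT for the hypertable (column names from the post)
INSERT_SQL = """
INSERT INTO features
  (start_time, end_time, amplitude_mean, amplitude_max, amplitude_min, amplitude_rms)
VALUES (%(start_time)s, %(end_time)s, %(amplitude_mean)s, %(amplitude_max)s,
        %(amplitude_min)s, %(amplitude_rms)s);
"""

rows = to_feature_rows([[0.0, 0.5, -0.5, 0.0], [0.1, 0.9, -0.9, -0.1]], 0.25)
# e.g. cursor.executemany(INSERT_SQL, rows) with a psycopg2 cursor
```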

Demo

How does it work?

  1. Turn the audio into a waveform time series. Here I use librosa to extract the features and convert them into a time series.

Labelling results


How I Used Agentic Postgres

  1. Hyperscale database: stores the high-frequency feature time series in hypertables.
  2. Fluid Storage: captures the raw .wav files in Fluid Storage for fast access; metadata is logged in the recording_sessions table.
  3. Tiger MCP: works on a fork (e.g., feat_fork) using Tiger MCP, extracts MFCC/RMS features, and stores them in the features hypertable.
  4. Detection Agents: CoughAgent is trained on high-frequency bursts (500–3000 Hz); SnoreAgent is trained on low-frequency sounds (60–500 Hz). Each agent runs on its own fork and writes tentative labels with confidence scores.
  5. pg_text Hybrid Search Layer: converts audio segments into embeddings (pg_text.embed()) to allow cross-night semantic search. From each recording I extract features using librosa, producing start_time, end_time, amplitude_mean, amplitude_max, amplitude_min, and amplitude_rms.
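The band split behind the two agents can be illustrated with a toy heuristic: compare the spectral energy in the snore band (60–500 Hz) against the cough band (500–3000 Hz) using a plain DFT. The real agents are trained models; this stdlib-only sketch only demonstrates the frequency ranges named above.

```python
import cmath
import math

def band_energy(samples, sample_rate, lo_hz, hi_hz):
    """Sum |X_k|^2 over DFT bins whose frequency falls in [lo_hz, hi_hz)."""
    n = len(samples)
    energy = 0.0
    for k in range(1, n // 2):  # skip DC, stop below Nyquist
        freq = k * sample_rate / n
        if lo_hz <= freq < hi_hz:
            x = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            energy += abs(x) ** 2
    return energy

def classify_segment(samples, sample_rate):
    snore = band_energy(samples, sample_rate, 60, 500)    # SnoreAgent band
    cough = band_energy(samples, sample_rate, 500, 3000)  # CoughAgent band
    return "snore" if snore > cough else "cough"

# Synthetic test tones at 8 kHz: 100 Hz (snore band) vs 1 kHz (cough band)
sr = 8000
low_tone = [math.sin(2 * math.pi * 100 * t / sr) for t in range(400)]
high_tone = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(400)]
```

In the real system each agent would write its label and a confidence score back to the hypertable instead of returning a string.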

I'm using Tiger MCP to retrieve the labels for the audio time series.

Overall Experience

Overall, I was amazed by these features.
