The Problem
I have aphantasia (no mental imagery), and I struggle to get into "Deep Work." I tried binaural beats on Spotify, but lossy audio compression (MP3/AAC) degrades the precise stereo phase relationship the effect depends on.
The Solution
I decided to build a raw audio/visual synth in the browser to mechanically force focus.
The Tech Stack
Framework: Next.js (React)
Audio: the native Web Audio API (AudioContext) for real-time oscillators.
Visuals: requestAnimationFrame to sync a 490 nm cyan strobe.
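For anyone curious what the oscillator side looks like: a binaural beat is just two sine oscillators, panned hard left and hard right, detuned by the beat frequency. Here's a rough sketch (the helper names and the 200 Hz / 10 Hz values are my own illustration, not Phantas.io's actual code):

```javascript
// Split a carrier into left/right frequencies: the brain perceives
// the difference (beatHz) as a low-frequency "beat."
function binauralFrequencies(carrierHz, beatHz) {
  return { left: carrierHz - beatHz / 2, right: carrierHz + beatHz / 2 };
}

// Browser wiring (only runs when called with a real AudioContext):
function startBinauralBeat(ctx, carrierHz, beatHz) {
  const { left, right } = binauralFrequencies(carrierHz, beatHz);
  const makeSide = (freq, pan) => {
    const osc = ctx.createOscillator();       // pure sine by default
    const panner = ctx.createStereoPanner();  // -1 = hard left, +1 = hard right
    osc.frequency.value = freq;
    panner.pan.value = pan;
    osc.connect(panner).connect(ctx.destination);
    osc.start();
    return osc;
  };
  return [makeSide(left, -1), makeSide(right, 1)]; // e.g. 195 Hz / 205 Hz
}
```

Because the oscillators are synthesized live, there's no codec in the path to smear the channels.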
The Tricky Part (Code Snippet)
The hardest part was handling drift. JavaScript timers are imprecise, so I schedule the oscillator frequency ramps against the audio context's internal clock (currentTime) to keep them from desyncing from the strobe.
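The usual pattern here is a "lookahead" scheduler: a coarse JS timer wakes up often, and each wake-up schedules every event that falls inside a small lookahead window at exact times on the audio clock. A sketch of the idea (function names, the 100 ms lookahead, and the `state` shape are illustrative, not my actual implementation):

```javascript
// Pure part: collect every tick time (in audio-clock seconds) that
// falls inside the lookahead window, so it can be scheduled precisely.
function ticksToSchedule(nextTick, now, lookahead, interval) {
  const ticks = [];
  let t = nextTick;
  while (t < now + lookahead) {
    ticks.push(t);
    t += interval;
  }
  return ticks;
}

// Browser side: a sloppy setInterval "pump" that delegates precision
// to the audio clock. The timer can jitter by tens of ms and nothing
// audible drifts, because events land at exact currentTime values.
function pump(ctx, osc, state) {
  for (const t of ticksToSchedule(state.nextTick, ctx.currentTime, 0.1, state.interval)) {
    osc.frequency.setValueAtTime(state.freq, t); // sample-accurate
    state.nextTick = t + state.interval;
  }
}
```

The key point is that `setValueAtTime` takes an absolute timestamp on `ctx.currentTime`'s timeline, so the imprecise timer only decides *when scheduling happens*, never *when audio happens*.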
The Result
It's called Phantas.io.
It generates everything locally on your CPU. No login required to use the generator.
Question for Devs
I'm currently using a simple setInterval for the React state updates on the visual timer, but I'm thinking of moving the whole timing engine to a Web Worker to prevent UI blocking. Has anyone tried this for high-precision metronomes?
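For reference, here's the shape of a worker-based metronome I've been considering: the worker re-arms its own timer from the *ideal* schedule rather than from "now," so setTimeout jitter doesn't accumulate, and posts ticks back to the main thread. All names here are hypothetical, not shipped code:

```javascript
// Pure drift correction: delay (ms) until the next ideal tick time.
function nextDelay(startMs, tickCount, intervalMs, nowMs) {
  return Math.max(0, startMs + tickCount * intervalMs - nowMs);
}

// Worker script as a string (turned into a Blob URL in the browser).
// Workers aren't throttled like background-tab main-thread timers.
const workerSrc = `
  let interval, start, count = 0;
  function tick() {
    postMessage(count++);
    // Self-correcting: aim at start + count * interval, not now + interval.
    const target = start + count * interval;
    setTimeout(tick, Math.max(0, target - performance.now()));
  }
  onmessage = (e) => { interval = e.data; start = performance.now(); tick(); };
`;

// Main-thread side (browser only): spin up the worker and forward ticks.
function startWorkerMetronome(intervalMs, onTick) {
  const url = URL.createObjectURL(new Blob([workerSrc], { type: "text/javascript" }));
  const worker = new Worker(url);
  worker.onmessage = (e) => onTick(e.data);
  worker.postMessage(intervalMs);
  return worker;
}
```

Even with this, I'd still keep audio events on the AudioContext clock; the worker would only drive React state and the visual timer, where a few ms of jitter is invisible.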