Real-Time Audio Spectrograms in the Browser Using Web Audio API and Canvas

Need to visualize audio in the browser — without external libraries, servers, or big frameworks? This post shows how to build a real-time spectrogram from microphone input using the Web Audio API and a raw HTML5 canvas. No dependencies, just powerful signal analysis right in your UI.

Why Spectrograms?


  • Great for audio tools, education, or music visualizers
  • Lightweight, performant, and fully in-browser
  • Foundation for voice activity detection or pitch tracking

Step 1: Request Microphone Access


Use navigator.mediaDevices.getUserMedia to tap into the user's mic:


const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const source = audioCtx.createMediaStreamSource(stream);
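Note that top-level await only works inside ES modules, and the user can decline the permission prompt, so in practice you'll likely wrap this setup in an async function with some error handling. A minimal sketch (the initAudio name is just for illustration):

async function initAudio() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    const source = audioCtx.createMediaStreamSource(stream);
    return { audioCtx, source };
  } catch (err) {
    // Permission denied or no input device available
    console.error('Microphone unavailable:', err);
    return null;
  }
}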

Step 2: Connect an Analyser Node


This node provides frequency domain data you can draw from:


const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);

source.connect(analyser);
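Each element of dataArray is one frequency bin: with fftSize set to 2048 you get 1024 bins covering 0 Hz up to half the sample rate, so bin i sits at roughly i * sampleRate / fftSize Hz. A quick sanity check you can run in the console (the binToHz helper is just for illustration):

// Bin index -> approximate frequency in Hz
const binToHz = (i) => i * audioCtx.sampleRate / analyser.fftSize;
console.log(binToHz(1), binToHz(bufferLength - 1)); // e.g. ~23 Hz and ~24 kHz at a 48 kHz sample rate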

Step 3: Set Up the Canvas


We’ll scroll horizontally, plotting frequency data vertically:


<canvas id="spectrogram" width="800" height="256"></canvas>

<script>
const canvas = document.getElementById('spectrogram');
const ctx = canvas.getContext('2d');

function draw() {
  requestAnimationFrame(draw);
  analyser.getByteFrequencyData(dataArray);

  // Shift the existing image one pixel to the left
  const imageData = ctx.getImageData(1, 0, canvas.width - 1, canvas.height);
  ctx.putImageData(imageData, 0, 0);

  // Draw the newest column on the right edge, one pixel per frequency bin.
  // With a 256px-tall canvas only the lowest 256 bins fit one-per-pixel;
  // scale the index if you want the full spectrum.
  const bins = Math.min(bufferLength, canvas.height);
  for (let i = 0; i < bins; i++) {
    const value = dataArray[i];
    const percent = value / 255;
    const hue = Math.floor(255 - percent * 255);
    ctx.fillStyle = `hsl(${hue}, 100%, 50%)`;
    ctx.fillRect(canvas.width - 1, canvas.height - 1 - i, 1, 1);
  }
}
draw();
</script>
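One practical gotcha: most browsers create the AudioContext in a "suspended" state until the page receives a user gesture, so the canvas can stay blank until you resume it. A small sketch, assuming a start button with id="start" exists in your markup:

// Resume the AudioContext on a user gesture so frequency data starts flowing
document.getElementById('start').addEventListener('click', () => {
  if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }
});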

Step 4: Tuning and Customization


  • Use smaller fftSize for faster updates but lower frequency resolution
  • Change color mappings for different effects (e.g., grayscale, rainbow); a grayscale variant is sketched just after this list
  • Log-scale the Y-axis if you want pitch-based scaling
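
For example, a grayscale mapping that renders loud bins bright and quiet bins dark is a drop-in replacement for the hue calculation and fillStyle assignment inside the per-bin loop above:

// Grayscale instead of the HSL rainbow: brightness tracks bin energy
const shade = Math.floor(percent * 255);
ctx.fillStyle = `rgb(${shade}, ${shade}, ${shade})`;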

Pros and Cons

✅ Pros


  • No libraries, just browser-native tech
  • Responsive and low-latency
  • Easy to embed or extend into audio apps

⚠️ Cons


  • Raw canvas drawing requires manual optimization
  • Not ideal for mobile or very low-end devices
  • Needs permission-handling UX (prompting for microphone access and recovering from denial)

🚀 Alternatives


  • Wavesurfer.js – full waveform visualizations
  • Tone.js – more abstracted audio layer
  • WebGL/ShaderToy – GPU-accelerated visualizations

Summary


With just a few lines of JavaScript and no dependencies, you can create a real-time spectrogram that turns any mic input into a colorful, informative audio visualizer. It's perfect for web tools, signal processing experiments, or just making your site look awesome with live feedback.

