When working with the Web Audio API and HTML media elements, developers frequently run into three main issues:
- The MediaElementAudioSourceNode Duplication Error
The most common error looks like this:
```
Cannot create multiple MediaElementAudioSourceNode from the same HTMLMediaElement
```
This happens because browsers allow only one `MediaElementAudioSourceNode` per `<video>` or `<audio>` element. Attempting to create a second one throws an error, and the original element may lose its audio.
- Autoplay Policy Restrictions
Modern browsers block automatic audio playback before user interaction. An `AudioContext` starts in the suspended state, and you must explicitly call `resume()` after a user gesture (a click or tap).
- HLS Manifest Loading Synchronization
The HLS manifest loads asynchronously through hls.js. If you create the `MediaElementAudioSourceNode` before the manifest has fully loaded and attached to the video element, you'll get silent audio or initialization errors.
## The Solution: Singleton Pattern for AudioContext Management
To guarantee a single AudioContext instance and correct MediaElementAudioSourceNode functionality, we use the Singleton pattern. This ensures:
- A single `AudioContext` instance per application
- A single `MediaElementAudioSourceNode` per media element
- Centralized audio graph state management
- Easy reuse across app components
### Basic Implementation
```typescript
export class HlsAudioService {
  private static instance: HlsAudioService;

  public audioContext: AudioContext;
  public source?: MediaElementAudioSourceNode;
  public splitter?: ChannelSplitterNode;
  public analysers: AnalyserNode[] = [];

  private constructor() {
    // The webkit prefix covers older Safari versions
    const AudioContextClass = window.AudioContext || (window as any).webkitAudioContext;
    this.audioContext = new AudioContextClass();
  }

  static getInstance(): HlsAudioService {
    if (!HlsAudioService.instance) {
      HlsAudioService.instance = new HlsAudioService();
    }
    return HlsAudioService.instance;
  }

  // Call from a user-gesture handler to satisfy autoplay policies
  async resumeContext(): Promise<void> {
    if (this.audioContext.state === 'suspended') {
      await this.audioContext.resume();
    }
  }
}
```
The private constructor initializes the `AudioContext` with cross-browser compatibility (falling back to the webkit prefix for older Safari versions). The static `getInstance()` method guarantees a single instance throughout the application.
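To see the pattern in isolation, here is a minimal sketch with a plain class standing in for the service. No browser APIs are involved, so it runs anywhere; `Registry` is a hypothetical name used only for illustration:

```typescript
// Minimal illustration of the Singleton pattern used above,
// with a plain class instead of AudioContext.
class Registry {
  private static instance: Registry;
  public readonly createdAt: number;

  private constructor() {
    this.createdAt = Date.now();
  }

  static getInstance(): Registry {
    if (!Registry.instance) {
      Registry.instance = new Registry();
    }
    return Registry.instance;
  }
}

const a = Registry.getInstance();
const b = Registry.getInstance();
console.log(a === b); // true — both calls return the same object
```

Both calls return the same object, which is exactly the property the service relies on to avoid creating a second `MediaElementAudioSourceNode` by accident.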
## Integration with hls.js
The key is the correct initialization sequence: wait for the `MANIFEST_PARSED` event from hls.js before creating audio nodes:
```typescript
const videoElement = document.querySelector('video') as HTMLVideoElement;
const hls = new Hls();
const audioService = HlsAudioService.getInstance();

hls.on(Hls.Events.MANIFEST_PARSED, () => {
  // Manifest loaded, safe to create the audio graph
  if (!audioService.source) {
    audioService.source = audioService.audioContext.createMediaElementSource(videoElement);
    setupAudioGraph(audioService);
  }
});

hls.loadSource('https://example.com/stream.m3u8');
hls.attachMedia(videoElement);

// Handle user gesture
videoElement.addEventListener('play', async () => {
  await audioService.resumeContext();
});
```
This approach ensures MediaElementAudioSourceNode is created only after hls.js fully initializes the stream, and only once.
## Building the Audio Graph for Analysis

After creating the source node, build the processing chain. For stereo analysis, the typical architecture looks like this:
```typescript
function setupAudioGraph(service: HlsAudioService) {
  const { audioContext, source } = service;

  // Split the stereo signal into left and right channels
  service.splitter = audioContext.createChannelSplitter(2);

  // Create an analyser for each channel
  const analyserLeft = audioContext.createAnalyser();
  const analyserRight = audioContext.createAnalyser();
  analyserLeft.fftSize = 2048; // FFT size for frequency analysis
  analyserRight.fftSize = 2048;

  // Build the chain: source → splitter → analysers → merger → destination
  source!.connect(service.splitter);
  service.splitter.connect(analyserLeft, 0);  // Left channel
  service.splitter.connect(analyserRight, 1); // Right channel

  // Recombine the channels before output so the stereo image is preserved
  // (connecting each mono analyser straight to the destination would upmix
  // both channels into both speakers)
  const merger = audioContext.createChannelMerger(2);
  analyserLeft.connect(merger, 0, 0);
  analyserRight.connect(merger, 0, 1);
  merger.connect(audioContext.destination);

  service.analysers = [analyserLeft, analyserRight];
}
```
`ChannelSplitterNode` splits the stereo signal into mono channels. Each `AnalyserNode` provides frequency and amplitude data for its channel in real time. The `fftSize` parameter determines frequency analysis detail — higher values give more resolution but increase CPU load.
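Once the graph is wired up, each frame you would fill a `Uint8Array` via `analyser.getByteFrequencyData(bins)` and reduce it to something drawable. A sketch of that reduction as a standalone function — the synthetic array below merely stands in for real analyser output:

```typescript
// Average the byte-valued frequency bins (0..255) into a 0..1 level.
// In the browser, fill `bins` each frame with
// analyser.getByteFrequencyData(bins) before calling this.
function averageLevel(bins: Uint8Array): number {
  if (bins.length === 0) return 0;
  let sum = 0;
  for (let i = 0; i < bins.length; i++) sum += bins[i];
  return sum / bins.length / 255;
}

// Synthetic data standing in for analyser output:
const bins = new Uint8Array([0, 128, 255, 128]);
console.log(averageLevel(bins)); // ≈ 0.5
```

The same function works for either channel's analyser, so a stereo level meter is just two calls per frame.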
## Handling Common Edge Cases
### CORS and Cross-Origin Streams
If the HLS stream is served from another domain, `MediaElementAudioSourceNode` outputs zeros for security reasons. The fix is to add the `crossOrigin` attribute to the video element and ensure the server sends the `Access-Control-Allow-Origin` header:
```typescript
videoElement.crossOrigin = 'anonymous';
```
Without proper CORS setup, audio analysis is impossible, though playback works fine.
### Suspended State on Mobile
On mobile platforms (especially iOS), AudioContext may enter suspended state to save battery. Check the state before each use:
```typescript
async function ensureAudioContextRunning(service: HlsAudioService) {
  if (service.audioContext.state === 'suspended') {
    await service.audioContext.resume();
    console.log('AudioContext resumed');
  }
}
```
Call this function in `play` and `canplay` event handlers.
### Switching Between Streams
When changing the HLS source (quality switching or channel change), don't recreate the `MediaElementAudioSourceNode`. Simply call `hls.loadSource()` with the new URL — the existing audio graph keeps working:
```typescript
function switchStream(newUrl: string) {
  hls.loadSource(newUrl);
  // The source node remains bound to the video element; no recreation needed
}
```
Attempting to recreate the source will throw, since it is already bound to the element.
## Performance Optimization
To reduce CPU load during visualization:
**Reduce update frequency.** Instead of updating every frame (60 FPS), limit to 30 FPS using `requestAnimationFrame` and frame counting.
**Adjust FFT size.** Simple level indicators get by with `fftSize = 256`; detailed spectrograms need 2048 or 4096. Smaller values reduce latency and load.
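The trade-off can be quantified: each FFT bin of an `AnalyserNode` spans `sampleRate / fftSize` Hz, so the resolution arithmetic is easy to sketch:

```typescript
// Frequency resolution per FFT bin: sampleRate / fftSize.
// Note that frequencyBinCount is fftSize / 2 — only positive
// frequencies are reported by the analyser.
function binWidthHz(sampleRate: number, fftSize: number): number {
  return sampleRate / fftSize;
}

// At the common 48 kHz sample rate:
console.log(binWidthHz(48000, 256));  // 187.5 Hz per bin — coarse, fine for level meters
console.log(binWidthHz(48000, 2048)); // 23.4375 Hz per bin — enough for a spectrogram
```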
**Web Workers.** You can offload processing of `AnalyserNode` data to a worker to avoid blocking the main thread. The audio nodes themselves, however, must stay on the main thread.
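The frame-limiting idea from the first point can be sketched as a small timestamp gate; in the browser the timestamps would come from `requestAnimationFrame`:

```typescript
// Returns a function that answers: has at least `minIntervalMs` elapsed
// since the last accepted frame? Used to cap a render loop at ~30 FPS.
function makeFrameGate(minIntervalMs: number) {
  let last = -Infinity;
  return (now: number): boolean => {
    if (now - last >= minIntervalMs) {
      last = now;
      return true;
    }
    return false;
  };
}

const shouldRender = makeFrameGate(1000 / 30); // ~33 ms between redraws

// In the browser:
// requestAnimationFrame(function loop(t) {
//   if (shouldRender(t)) draw();
//   requestAnimationFrame(loop);
// });
console.log(shouldRender(0));  // true  — first frame always renders
console.log(shouldRender(16)); // false — only 16 ms have passed
console.log(shouldRender(50)); // true  — 50 ms since the last render
```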
## Use Cases
With the obtained data, you can implement various visualizations and analyses:
- PPM/VU meters — display current signal level with different integration times
- Spectrograms — real-time frequency spectrum visualization using getByteFrequencyData()
- Silence detector — analyze amplitude to detect audio pauses
- Equalizers — divide frequencies into bands for volume control
All these tasks use data from AnalyserNode, which provides an array of values from 0 to 255 for each frequency band.
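As one concrete example, a silence detector reduces to an RMS check on time-domain samples, which `getByteTimeDomainData()` delivers as bytes centred on 128. A sketch:

```typescript
// Detect silence from byte time-domain samples (0..255, silence ≈ 128).
// In the browser the array would be filled each frame with
// analyser.getByteTimeDomainData(samples).
function isSilent(samples: Uint8Array, rmsThreshold = 0.01): boolean {
  let sumSquares = 0;
  for (let i = 0; i < samples.length; i++) {
    const v = (samples[i] - 128) / 128; // normalize to -1..1
    sumSquares += v * v;
  }
  const rms = Math.sqrt(sumSquares / samples.length);
  return rms < rmsThreshold;
}

const quiet = new Uint8Array(1024).fill(128);  // flat line at centre
const loud = new Uint8Array([0, 255, 0, 255]); // full-scale square wave
console.log(isSilent(quiet)); // true
console.log(isSilent(loud));  // false
```

To detect sustained pauses rather than momentary dips, you would typically require the check to hold across several consecutive frames before flagging silence.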
## Browser Compatibility
| Browser | Web Audio API | Native HLS | HLS.js (MSE) |
|---|---|---|---|
| Chrome (Desktop) | v14+ | No | Yes |
| Firefox (Desktop) | v25+ | No | Yes |
| Safari (Desktop) | v6.1+ | Yes | Not needed |
| Edge | Full | No | Yes |
| Chrome Mobile | Full | Yes (Android) | Yes |
| Safari iOS | v6+ | Yes | No (MSE) |
| Firefox Android | Full | No | Yes |
Important: Safari on iOS doesn’t support Media Source Extensions, so hls.js won’t work. Fortunately, iOS has native HLS support, and you can use video.src directly. Web Audio API works correctly with it.
## Common Errors Reference
| Error | Cause | Solution |
|---|---|---|
| Cannot create multiple MediaElementAudioSourceNode | Attempting to create MediaElementAudioSourceNode twice for same element | Use Singleton pattern to maintain single reference |
| AudioContext was not allowed to start | AudioContext created before user interaction | Call audioContext.resume() after user gesture (click, touch) |
| MediaElementAudioSourceNode outputs zeroes | CORS restrictions for cross-domain audio | Add crossOrigin="anonymous" to video element |
| Silent audio after setup | Source node created before the HLS manifest finished loading | Wait for the MANIFEST_PARSED event before creating the source node |
## Conclusion
Analyzing audio from HLS streams in the browser is achievable with the right approach. Key takeaways:
- A Singleton for `AudioContext` prevents node-recreation errors and simplifies state management
- Waiting for `MANIFEST_PARSED` from hls.js guarantees correct initialization
- Handling the autoplay policy via `resume()` ensures cross-platform compatibility
- CORS setup is essential for cross-domain streams
A complete working implementation is available at github.com/ABurov30/AudioContext, where you can study ready-made integration of all described techniques.
This approach enables building reliable web applications for streaming video with advanced audio analysis, without requiring users to install additional plugins or extensions.
