
How to Prevent Speaker Feedback in Speech Transcription Using Web Audio API

Yet another thing I needed to figure out recently: hooking up my Assembly.ai transcription engine to a frontend running in a loud environment.

Here is what I tried:

  1. Request microphone access with echo cancellation.
  2. Set up an audio processing chain with the Web Audio API, including a DynamicsCompressorNode for additional processing.
  3. Integrate this setup with speech recognition.

Step 1: Request Microphone Access with Echo Cancellation

The first step is to request access to the microphone with echo cancellation enabled. This feature is built into most modern browsers and helps keep the audio coming out of your speakers from being picked up again by the microphone.

async function getMicrophoneStream() {
    // Ask the browser to clean up the input signal for us.
    const constraints = {
        audio: {
            echoCancellation: true,  // cancel speaker output picked up by the mic
            noiseSuppression: true,  // filter out steady background noise
            autoGainControl: true    // keep the input level consistent
        }
    };

    try {
        return await navigator.mediaDevices.getUserMedia(constraints);
    } catch (err) {
        console.error('Error accessing the microphone', err);
        return null;
    }
}

Explanation

  • Constraints: We specify audio constraints to enable echo cancellation, noise suppression, and auto-gain control.
  • Error Handling: If the user denies access or if there is any other issue, we catch and log the error.
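
Browsers treat these constraints as requests rather than guarantees, so it can be worth checking what was actually applied. Here is a small sketch using the standard MediaStreamTrack.getSettings() method (the helper name is my own):

function logAppliedAudioSettings(stream) {
    const [track] = stream.getAudioTracks();
    if (!track) return;

    // getSettings() reports the values the browser is actually using.
    const settings = track.getSettings();
    console.log('echoCancellation:', settings.echoCancellation);
    console.log('noiseSuppression:', settings.noiseSuppression);
    console.log('autoGainControl:', settings.autoGainControl);
}

If echoCancellation comes back false, the browser or device could not honor it, and you may want to suggest the user switch to headphones.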

Step 2: Set Up Web Audio API Nodes

Next, we set up the Web Audio API to process the audio stream. This involves creating an AudioContext and connecting various nodes, including a DynamicsCompressorNode.

async function setupAudioProcessing(stream) {
    const audioContext = new AudioContext();
    const source = audioContext.createMediaStreamSource(stream);

    // Create a DynamicsCompressorNode for additional processing
    const compressor = audioContext.createDynamicsCompressor();
    compressor.threshold.setValueAtTime(-50, audioContext.currentTime); // dB level where compression starts
    compressor.knee.setValueAtTime(40, audioContext.currentTime);       // dB range of the soft-knee transition
    compressor.ratio.setValueAtTime(12, audioContext.currentTime);      // 12:1 gain reduction above the threshold
    compressor.attack.setValueAtTime(0, audioContext.currentTime);      // seconds before compression kicks in
    compressor.release.setValueAtTime(0.25, audioContext.currentTime);  // seconds before compression lets go

    // Connect the nodes. Note that routing to audioContext.destination plays
    // the mic back through the speakers for monitoring; the echoCancellation
    // constraint from step 1 is what keeps that loop from feeding back.
    source.connect(compressor);
    compressor.connect(audioContext.destination);

    return { audioContext, source, compressor };
}

Explanation

  • AudioContext: Represents the audio environment.
  • MediaStreamSource: Connects the microphone stream to the audio context.
  • DynamicsCompressorNode: Reduces the dynamic range of the audio signal, helping to manage background noise and feedback.
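
One caveat worth flagging: connecting the compressor to audioContext.destination plays the microphone back through the speakers, so we are leaning on the echoCancellation constraint from step 1 to keep that loop tame. If you instead want to capture the processed audio and ship it to an external transcription engine (such as Assembly.ai), you can route the compressor into a MediaStreamAudioDestinationNode, which is a standard Web Audio API node. A sketch, where sendToTranscriptionService is a hypothetical placeholder for your own upload logic:

function captureProcessedAudio(audioContext, compressor) {
    // A MediaStreamAudioDestinationNode exposes processed audio as a
    // regular MediaStream instead of playing it out loud.
    const streamDestination = audioContext.createMediaStreamDestination();
    compressor.connect(streamDestination);

    // The resulting stream can be recorded or uploaded.
    // sendToTranscriptionService(streamDestination.stream); // hypothetical
    return streamDestination.stream;
}

A node can feed several destinations at once, so you can keep the speaker connection for monitoring and capture the processed stream at the same time.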

Step 3: Integrate with Speech Recognition

Finally, we integrate our audio processing setup with the Web Speech API to perform speech recognition.

async function startSpeechRecognition() {
    const stream = await getMicrophoneStream();
    if (!stream) return;

    const { audioContext, source, compressor } = await setupAudioProcessing(stream);

    const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
    recognition.continuous = true;
    recognition.interimResults = true;

    recognition.onresult = (event) => {
        for (let i = event.resultIndex; i < event.results.length; i++) {
            const transcript = event.results[i][0].transcript;
            console.log('Transcript:', transcript);
        }
    };

    recognition.onerror = (event) => {
        console.error('Speech recognition error', event.error);
    };

    recognition.start();

    // Browsers often start an AudioContext suspended until a user gesture;
    // resume it so the processing chain actually runs.
    if (audioContext.state === 'suspended') {
        await audioContext.resume();
    }

    return recognition;
}

// Start the speech recognition process
startSpeechRecognition();

Explanation

  • Speech Recognition Setup: We configure the Web Speech API for continuous recognition with interim results.
  • Event Handling: We handle the onresult and onerror events to process recognition results and errors.
  • Start Recognition: We start the speech recognition process and ensure the audio context is not suspended.
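
One browser quirk to plan for: some implementations (Chrome in particular) end even continuous recognition sessions after a stretch of silence. A common workaround, sketched below assuming recognition is the instance created above, is to restart from the onend handler:

let shouldKeepListening = true;

recognition.onend = () => {
    // The browser may end a continuous session after silence;
    // restart unless we stopped on purpose.
    if (shouldKeepListening) {
        recognition.start();
    }
};

// Call this when you actually want to stop transcribing.
function stopListening() {
    shouldKeepListening = false;
    recognition.stop();
}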

Hopefully you found this useful.

Happy coding!

Tim.
