An introduction to the MediaRecorder API

Phil Nash for Twilio • Originally published at twilio.com

On the web we can capture media streams from the user's camera, microphone and even desktop. We can use those media streams for real time video chat over WebRTC and with the MediaRecorder API we can also record and save audio or video from our users directly in a web browser.

To explore the MediaRecorder API let's build a simple audio recorder app with just HTML, CSS and JavaScript.

Getting started

To build this application all we need is a text editor and a browser that supports the MediaRecorder API. At the time of writing, supported browsers include Firefox, Chrome and Opera, and there is ongoing work to bring the API to Edge and Safari.

To get started, create a folder to work in and save this HTML file and this CSS file to give us something to start with. Make sure they are in the same folder and that the CSS file is named web-recorder-style.css. Open the HTML file in your browser and you should see the following:

The starter project for the web recorder. It has a heading, a button and a footer and it doesn't do anything yet.

Now let's take a look at the MediaRecorder API.

MediaRecorder API

To start with the MediaRecorder API, you need a MediaStream. You can either get one from a <video> or <audio> element or by calling getUserMedia to capture the user's camera and microphone. Once you have a stream you can initialise the MediaRecorder with it and you are ready to record.

During recording, the MediaRecorder object will emit dataavailable events with the recorded data as part of the event. We will listen for those events and collate the data chunks in an array. Once the recording is complete we'll tie the array of chunks back together in a Blob object. We can control the start and end of the recording by calling start and stop on the MediaRecorder object.

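In outline, the flow we're about to build looks something like this sketch (assuming a stream we've already captured, and leaving out all of the UI):

const recorder = new MediaRecorder(stream);
const chunks = [];
recorder.addEventListener('dataavailable', event => chunks.push(event.data));
recorder.addEventListener('stop', () => {
  // stitch the chunks back together into one playable Blob
  const recording = new Blob(chunks, { type: recorder.mimeType });
});
recorder.start();
// ... some time later ...
recorder.stop();
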
Let's see this in action.

getUserMedia

We'll start by wiring up some of our UI and using the first button to get access to the user's microphone stream. Between the <script> tags at the bottom of the starter HTML you downloaded, start by registering an event listener to run after the content of the page has loaded, then gather the bits of UI that we will be using:

<script>
  window.addEventListener('DOMContentLoaded', () => {
    const getMic = document.getElementById('mic');
    const recordButton = document.getElementById('record');
    const list = document.getElementById('recordings');

  });
</script>

Next, we'll check whether the browser supports the code we're writing. If it doesn't, we'll display an error on the page instead.

<script>
  window.addEventListener('DOMContentLoaded', () => {
    const getMic = document.getElementById('mic');
    const recordButton = document.getElementById('record');
    const list = document.getElementById('recordings');
    if ('MediaRecorder' in window) {
      // everything is good, let's go ahead
    } else {
      renderError("Sorry, your browser doesn't support the MediaRecorder API, so this demo will not work.");
    }
  });
</script>

For the renderError function we will replace the contents of the <main> element with the error message. Add this function after the event listener.

    function renderError(message) {
      const main = document.querySelector('main');
      main.innerHTML = `<div class="error"><p>${message}</p></div>`;
    }

If we have access to the MediaRecorder then we now need to get access to the microphone to record from. For this we will use the getUserMedia API. We're not going to request access to the microphone straight away as that is a poor experience for any user. Instead, we will wait for the user to click the button to access the microphone, then ask.

    if ('MediaRecorder' in window) {
      getMic.addEventListener('click', async () => {
        getMic.setAttribute('hidden', 'hidden');
        try {
          const stream = await navigator.mediaDevices.getUserMedia({
            audio: true,
            video: false
          });
          console.log(stream);
        } catch {
          renderError(
            'You denied access to the microphone so this demo will not work.'
          );
        }
      });
    } else {

Making a call to navigator.mediaDevices.getUserMedia returns a promise that resolves successfully if the user permits access to the media. Since we're using modern JavaScript, we can make that promise appear to be synchronous using async/await. We declare that the click handler is an async function and then when it comes to the call to getUserMedia we await the result and then carry on after.

The user might deny access to the microphone, which we'll handle by wrapping the call in a try/catch statement. Denial will cause the catch block to execute, and we'll use our renderError function again.

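If you'd rather see the same thing written with plain promise chaining, it would look something like this sketch:

navigator.mediaDevices
  .getUserMedia({ audio: true, video: false })
  .then(stream => {
    console.log(stream);
  })
  .catch(() => {
    renderError(
      'You denied access to the microphone so this demo will not work.'
    );
  });
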
Save your file and open it in the browser. Click the Get microphone button. You will get asked if you want to give access to the microphone and when you accept you'll see the resultant MediaStream logged to the console.

If you open the browser console, press the "Get microphone" button and accept the permission, you will see a MediaStream object logged to the console.

Recording

Now we have access to the microphone, we can prepare our recorder. We'll store a couple of other variables that we'll need too. First the MIME type that we'll be working with, "audio/webm". This seems to be the most widely supported format that browsers will record to today. We'll also create an array called chunks, which we will use to store parts of the recording as it is created.

The MediaRecorder is initialised with the media stream that we captured from the user's microphone and an object of options, in which we pass the MIME type we defined earlier as the mimeType option. Replace the console.log from earlier with:

        try {
          const stream = await navigator.mediaDevices.getUserMedia({
            audio: true,
            video: false
          });
          const mimeType = 'audio/webm';
          let chunks = [];
          const recorder = new MediaRecorder(stream, { mimeType });
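
As an aside, browsers don't all record to the same formats. If you wanted to be defensive here, MediaRecorder has a static isTypeSupported method that you can use to pick a MIME type the browser is actually able to record to. A sketch, with an illustrative list of candidates:

const candidates = ['audio/webm', 'audio/ogg', 'audio/mp4'];
const mimeType = candidates.find(type => MediaRecorder.isTypeSupported(type));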

Now we've created our MediaRecorder, we need to set up some event listeners for it. The recorder emits events for a number of different reasons, many to do with interaction with the recorder itself, so you can listen for events when it starts recording, pauses, resumes and stops. The most important is the dataavailable event, which is emitted periodically while the recorder is actively recording. The events contain a chunk of the recording, which we will push onto the chunks array we just created.

For our application we're going to listen to the dataavailable event, collecting the chunks, and then when the stop event fires we'll gather all the chunks into a Blob, which we can play with an <audio> element, and reset the array of chunks.

           const recorder = new MediaRecorder(stream, { mimeType });
           recorder.addEventListener('dataavailable', event => {
             if (typeof event.data === 'undefined') return;
             if (event.data.size === 0) return;
             chunks.push(event.data);
           });
           recorder.addEventListener('stop', () => {
             const recording = new Blob(chunks, {
               type: mimeType
             });
             renderRecording(recording, list);
             chunks = [];
           });

We'll implement the renderRecording method soon. We just have a little more work to do to enable a button to start and stop the recording.

We need to unhide the recording button, then when it is clicked either start the recording or stop it, depending on the state of the recorder itself. That code looks like this:

           recordButton.removeAttribute('hidden');
           recordButton.addEventListener('click', () => {
             if (recorder.state === 'inactive') {
               recorder.start();
               recordButton.innerText = 'Stop recording';
             } else {
               recorder.stop();
               recordButton.innerText = 'Start recording';
             }
           });

To complete this little application we're going to render the recordings into <audio> elements and provide a download link so a user can save their recording to the desktop. The key here is that we can take the Blob we created and turn it into a URL using the URL.createObjectURL method. This URL can then be used as the src of an <audio> element and as the href of an anchor. To make the anchor download the file, we set the download attribute.

This function is mostly creating DOM elements and making a file name out of the time that the recording was made. Add it below your renderError function.

  function renderRecording(blob, list) {
    const blobUrl = URL.createObjectURL(blob);
    const li = document.createElement('li');
    const audio = document.createElement('audio');
    const anchor = document.createElement('a');
    anchor.setAttribute('href', blobUrl);
    const now = new Date();
    anchor.setAttribute(
      'download',
      `recording-${now.getFullYear()}-${(now.getMonth() + 1).toString().padStart(2, '0')}-${now.getDate().toString().padStart(2, '0')}--${now.getHours().toString().padStart(2, '0')}-${now.getMinutes().toString().padStart(2, '0')}-${now.getSeconds().toString().padStart(2, '0')}.webm`
    );
    anchor.innerText = 'Download';
    audio.setAttribute('src', blobUrl);
    audio.setAttribute('controls', 'controls');
    li.appendChild(audio);
    li.appendChild(anchor);
    list.appendChild(li);
  }
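
One caveat with this approach: each object URL keeps its Blob alive until the page is unloaded. This demo never removes recordings, but if yours did, you could release that memory with URL.revokeObjectURL. A sketch, using a hypothetical removeRecording helper:

// Hypothetical clean-up when removing a recording from the list.
function removeRecording(li, blobUrl) {
  URL.revokeObjectURL(blobUrl);
  li.remove();
}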

The completed application. This animation shows the full system, from getting the microphone to recording the audio, playing it back and downloading the recording.

Testing it out

Open the page in your web browser and click the Get Microphone button. Accept the permissions dialog and then click to start recording. Record yourself a message and play it back from the page.

WebM files

If you download one of your recordings, you may find you don't have a media player capable of playing a WebM file. WebM is an open media format for both audio and video, but it has mostly been supported by browsers. If you have VLC you can likely play the audio; otherwise you might want to convert it to an MP3 or WAV file using an online tool like Convertio (or, if you're feeling daring, with ffmpeg in your terminal).

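For example, if you have ffmpeg installed, a conversion along these lines should work (assuming your recording was saved as recording.webm):

ffmpeg -i recording.webm recording.mp3
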
Your browser is now a recorder

The MediaRecorder API is a powerful new addition to browsers. In this post we've seen its ability to record audio, but it doesn't stop there. Currently the application doesn't save the audio files, so a page refresh loses them. You could save them using IndexedDB or send them to a server. You could also play around with the recording; imagine passing the audio through the Web Audio API before recording it. And if the WebM format isn't your cup of tea, you could always look into re-encoding the audio on the front end, though that's likely a job for WebAssembly (or your server…).

If you want to try out the code from this post you can check out a live demo. All the code is available in this GitHub repo and you can remix the project on Glitch too.

Let me know what you think of the MediaRecorder API and what you could use it for. Hit up the comments below or drop me a line on Twitter at @philnash.

Top comments (11)

florent giraud

Show us desktop video capture please!

Phil Nash

You can capture desktop video using getDisplayMedia, as I show in this post on desktop capture in Edge. This works in Chrome now too, though you used to need an extension. Firefox supports desktop capture as well, but with a non-standard method.

Once you capture the media stream, you can use the MediaRecorder just like in this post.

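A rough sketch of that combination (inside an async function):

const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
const recorder = new MediaRecorder(stream);
recorder.start();
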
florent giraud

Maybe you will know this too!

Can we stream just a part of the screen, or can we capture just a specific window?

Do you know how to send this media stream to an RTMP stream?

Phil Nash

This isn’t something I’ve done, but I’m interested!

Within the getDisplayMedia permissions dialog, the user gets to choose whether they share the whole screen or just a window.

To stream just part of the screen, you could copy the video to a canvas element and crop it there, then export the canvas back to a MediaStream using the captureStream method.

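A rough sketch of that canvas approach (inside an async function; the crop size and offsets here are made up for illustration):

// Draw a cropped region of the captured video into a canvas on every frame,
// then record the canvas's own stream instead of the original one.
const video = document.createElement('video');
video.srcObject = displayStream; // the stream from getDisplayMedia
await video.play();

const canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
const context = canvas.getContext('2d');

(function draw() {
  // copy a 640x480 region starting at (100, 100) in the source video
  context.drawImage(video, 100, 100, 640, 480, 0, 0, 640, 480);
  requestAnimationFrame(draw);
})();

const croppedStream = canvas.captureStream(30); // 30 frames per second
const recorder = new MediaRecorder(croppedStream);
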
As for streaming to RTMP, that is not something built into the browser. You'll need to stream to a server using WebRTC or WebSockets and then have the server turn it into an RTMP stream.

florent giraud

OK, I got my answer for the specific window capture.

Jaimie Carter

This looks great. Very interested in media. The AV1 codec looks great and it will be good when there are some hardware encoders, with acceptable latency.

Phil Nash

That is interesting. It looks like the browsers that support MediaRecorder mostly also support playing video encoded with AV1:

const video = document.createElement('video');
video.canPlayType('video/webm; codecs="av1"');
// => "probably"

But they don't yet record with it.

MediaRecorder.isTypeSupported('video/webm; codecs="av1"')
// => false

Will be interesting to see where this goes!

Jaimie Carter

Indeed. Maybe the reason recording is unsupported (and I have no idea what I'm talking about here) is that the browsers can't yet cope with the level of processing the codec requires? An open source solution like this is certainly overdue in this space. Let's hope everyone (apple) supports it properly.

Phil Nash

I would reckon so! Leaving browsers to hang for a long time would not be a good experience and they already suck up plenty of memory. It already takes a while to encode a recorded video and produce it ready for playback. As you said, as hardware encoding becomes available, browsers might well be able to hand the processing off to that hardware.

That's why the MediaRecorder.isTypeSupported method exists, so we can choose the best possible format for our users and fall back where needed.

Amara Graham

WebM totally hung me up the first time I was playing around with the MediaRecorder API. Luckily the service I was sending the audio to accepted that format and I was good to go!

Phil Nash

Ha, that is useful. I am going to look into translating from WebM to other formats in the browser, but that's going to take some WebAssembly wrangling (which I'm sort of looking forward to)!