In this post, we will cover how to capture real-time video and audio streams using the WebRTC API and a touch of HTML/JavaScript.
What is WebRTC?
WebRTC is an open-source initiative, maintained by the Google WebRTC team, that offers real-time communication capabilities to web browsers and mobile applications via straightforward application programming interfaces (APIs) [1].
These are the key components of WebRTC (a minimal sketch of how they fit together follows this list):
MediaStream (obtained via getUserMedia()): allows access to a device's webcam and/or microphone and plugs their signals into an RTC connection.
RTCPeerConnection: an interface for setting up video chat and voice calls.
RTCDataChannel: provides a technique for establishing a peer-to-peer data route between browsers [2].
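To see how these pieces relate, here is a minimal, hypothetical sketch (not part of this post's demo, and with the signaling between peers omitted) that captures local media, attaches it to a peer connection, and opens a data channel:
// Hypothetical sketch: how the three WebRTC components fit together.
async function startCall() {
  // 1. Capture local media (MediaStream via getUserMedia)
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  // 2. Attach the captured tracks to a peer connection (RTCPeerConnection)
  const peerConnection = new RTCPeerConnection();
  stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));

  // 3. Open a peer-to-peer data route (RTCDataChannel)
  const dataChannel = peerConnection.createDataChannel('chat');
  dataChannel.onopen = () => dataChannel.send('hello');

  // The offer/answer exchange with the remote peer (signaling) is omitted here.
  const offer = await peerConnection.createOffer();
  await peerConnection.setLocalDescription(offer);
}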
WebRTC Development
The MediaDevices.getUserMedia() method requests permission to use a media input, resulting in a MediaStream with tracks containing the desired types of media.
It returns a Promise that resolves to a MediaStream object. The promise is rejected with a NotAllowedError DOMException if the user denies permission, or with a NotFoundError DOMException if there is no matching media.
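As an illustrative sketch (using async/await here, while the demo below uses the .then()/.catch() style), the promise can be awaited and the two error names checked explicitly:
// Sketch: await getUserMedia() and distinguish the two rejection reasons.
async function openCamera() {
  try {
    return await navigator.mediaDevices.getUserMedia({ video: true });
  } catch (error) {
    if (error.name === 'NotAllowedError') {
      console.log('The user denied permission to use the camera.');
    } else if (error.name === 'NotFoundError') {
      console.log('No matching media device was found.');
    } else {
      console.log('getUserMedia error: ', error);
    }
  }
}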
Part 1: Video Capture
We add an HTML video element <video> with the autoplay attribute to automatically play the video:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Getting Started With WebRTC.</title>
<meta name="description" content="The WebRTC project is open-source and supported by Apple, Google, Microsoft and Mozilla, amongst others. This page is maintained by the Google WebRTC team.">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="styles.css">
</head>
<body>
<video autoplay></video>
<!-- <audio autoplay controls></audio> -->
<script src="script.js" async defer></script>
</body>
</html>
Then, we use JavaScript to set a constraint object with the video property set to true in order to capture a video stream. If the user grants permission, the promise is fulfilled with a MediaStream containing one video track. If no matching devices are connected, a NotFoundError is raised, and a NotAllowedError (formerly PermissionDeniedError) is thrown if permission is denied.
// Request a video stream
const constraints = {
video: true
};
// handle success function
function handleSuccess(stream) {
document.querySelector('video').srcObject = stream;
}
// handle error function
function handleError(error) {
console.log('Error accessing media devices: ', error);
}
// triggers a permissions request with getUserMedia()
navigator.mediaDevices.getUserMedia(constraints)
.then(handleSuccess)
.catch(handleError);
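The constraints object can also express more specific requirements than a plain true. As an optional aside (not needed for this demo), a sketch requesting a preferred resolution could look like this; hdConstraints is just an illustrative name:
// Optional: ask for a preferred resolution instead of just `video: true`.
// `ideal` values are hints; the browser picks the closest supported mode.
const hdConstraints = {
  video: {
    width: { ideal: 1280 },
    height: { ideal: 720 }
  }
};

navigator.mediaDevices.getUserMedia(hdConstraints)
  .then(handleSuccess)
  .catch(handleError);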
Note 💡 - When MediaDevices.getUserMedia() is called, the browser shows a permission prompt, giving the user the choice of allowing or denying access to their webcam [3].
After the user grants permission to access the webcam, its video stream is shown in the HTML video element.
Part 2: Audio Capture
The next step is to capture an audio stream produced by a microphone.
<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]> <html class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Getting Started With WebRTC.</title>
<meta name="description" content="The WebRTC project is open-source and supported by Apple, Google, Microsoft and Mozilla, amongst others. This page is maintained by the Google WebRTC team.">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="styles.css">
</head>
<body>
<!--[if lt IE 7]>
<p class="browsehappy">You are using an <strong>outdated</strong> browser. Please <a href="#">upgrade your browser</a> to improve your experience.</p>
<![endif]-->
<audio autoplay controls></audio>
<script src="script.js" async defer></script>
</body>
</html>
Note 💡 - We switched to an HTML audio element <audio> with the autoplay attribute to play the audio automatically and the controls attribute to show the audio controls, such as the volume.
To capture an audio stream, we use JavaScript to set a constraint object with the audio property set to true. To ask for permission to use the microphone, the navigator.mediaDevices.getUserMedia() method is called. If the user grants permission, the promise is fulfilled with a MediaStream containing one audio track. If no matching devices are connected, a NotFoundError is raised, and a NotAllowedError (formerly PermissionDeniedError) is thrown if permission is denied.
// Request an audio stream
const constraints = {
audio: true
};
// handle success function
function handleSuccess(stream) {
document.querySelector('audio').srcObject = stream;
}
// handle error function
function handleError(error) {
console.log('getUserMedia error: ', error);
}
// triggers a permissions request with getUserMedia()
navigator.mediaDevices.getUserMedia(constraints)
.then(handleSuccess)
.catch(handleError);
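Like the video constraint, the audio constraint accepts finer-grained options. As an optional aside (support varies by browser), a sketch enabling common audio-processing hints could look like this; audioConstraints is just an illustrative name:
// Optional: request common audio-processing features as hints.
// Unsupported hints are ignored by the browser rather than causing an error.
const audioConstraints = {
  audio: {
    echoCancellation: true,
    noiseSuppression: true
  }
};

navigator.mediaDevices.getUserMedia(audioConstraints)
  .then(handleSuccess)
  .catch(handleError);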
Note 💡 - When MediaDevices.getUserMedia() is called, the browser shows a permission prompt, giving the user the choice of allowing or denying access to their microphone [3].
After the user grants permission to access the microphone, the HTML audio element plays the audio stream it produces, with the playback controls visible.
Conclusion
And there you have it. We demonstrated how to capture video and audio streams provided by a webcam and microphone using the MediaDevices.getUserMedia() WebRTC method.
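As a final note, both streams can be captured with a single permission request by combining the two constraints; here is a small sketch reusing the pattern from above:
// Capture audio and video together with a single getUserMedia() call.
const constraints = {
  audio: true,
  video: true
};

navigator.mediaDevices.getUserMedia(constraints)
  .then(stream => {
    // The resulting MediaStream carries one audio track and one video track.
    document.querySelector('video').srcObject = stream;
  })
  .catch(error => console.log('getUserMedia error: ', error));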