Nikhil karkra

Face recognition using JavaScript

Face detection is one of the most common applications of Artificial Intelligence, and its use has increased significantly in the last couple of years.

face-api.js provides a JavaScript API for face detection and face recognition in the browser, implemented on top of the tensorflow.js core API.

In this tutorial, we will build a face recognition app that runs in the browser. From the detected face, we will predict the emotion, gender, and age.

The output of this app will look as shown below.
[Screenshot of the finished app]

Project Steps

Step 1 - Create a folder called face-recognition

Under the face-recognition folder, create the following folder structure:

face-recognition
├── css
│   └── style.css
├── js
│   ├── face-api.min.js
│   └── main.js
├── models
└── index.html

All folders are self-explanatory except models, which I will cover shortly.

Step 2 - Download face-api.min.js

Download the face-api.min.js code from the following URL and save it as js/face-api.min.js.

https://raw.githubusercontent.com/karkranikhil/face-recognition-using-js/master/js/face-api.min.js

Step 3 - Download the model files

Models are the pre-trained data files that we will use to detect features from the face. Each model consists of a JSON weights manifest plus one or more shard files; the shard files have no file extension, which is expected.
Download the files from the following URL and place them inside the models folder.

https://github.com/karkranikhil/face-recognition-using-js/tree/master/models

Step 4 - Let's build the index.html file

In the index.html file, we import style.css for the styles, face-api.min.js for processing the model data and extracting the features, and main.js, where we will write our logic.
Inside the body tag, we create a video tag to capture the face and a result-container for showing the emotion, gender, and age.

Place the code below inside the index.html file.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
    <title>Face recognition App</title>
    <link rel="stylesheet" href="css/style.css" />
  </head>
  <body>
    <header>Face recognition in the browser using Javascript</header>
    <div class="container">
      <video id="video" height="500" width="500" autoplay muted></video>
    </div>
    <div class="result-container">
      <div id="emotion">Emotion</div>
      <div id="gender">Gender</div>
      <div id="age">Age</div>
    </div>

    <script src="./js/face-api.min.js"></script>
    <script src="./js/main.js"></script>
  </body>
</html>


Step 5 - Let's build the main.js file

Inside the main.js file, we use Promise.all to load the models into the face API. Once the promise resolves, we call the startVideo method, which starts the video stream. Below are the methods used in this demo:

  • faceapi.detectSingleFace method - detectSingleFace uses the SSD MobileNet V1 face detector by default. You can choose a different face detector by passing the corresponding options object (we pass TinyFaceDetectorOptions below). To detect multiple faces, replace detectSingleFace with detectAllFaces (see the sketch after this list).

  • withFaceLandmarks method - Detects 68 face landmark points.

  • withFaceExpressions method - Detects all faces in an image, recognizes the facial expressions of each face, and returns the array.

  • withAgeAndGender method - Detects all faces in an image, estimates the age and recognizes the gender of each face, and returns the array.
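
For example, here is a minimal sketch of the multi-face variant, assuming the same models are loaded and the same video element is used as in the code below; the function name logAllFaces is purely illustrative and not part of the app:

async function logAllFaces() {
  // detectAllFaces returns an array with one entry per detected face.
  const detections = await faceapi
    .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceLandmarks()
    .withFaceExpressions()
    .withAgeAndGender();

  detections.forEach(face => {
    console.log(face.gender, face.age, face.expressions);
  });
}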

Place the following code in main.js.

const video = document.getElementById("video");
const isScreenSmall = window.matchMedia("(max-width: 700px)");
let predictedAges = [];

/****Loading the model ****/
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri("/models"),
  faceapi.nets.faceLandmark68Net.loadFromUri("/models"),
  faceapi.nets.faceRecognitionNet.loadFromUri("/models"),
  faceapi.nets.faceExpressionNet.loadFromUri("/models"),
  faceapi.nets.ageGenderNet.loadFromUri("/models")
]).then(startVideo);

function startVideo() {
  // navigator.getUserMedia is deprecated, so we use the promise-based
  // navigator.mediaDevices.getUserMedia API instead.
  navigator.mediaDevices
    .getUserMedia({ video: {} })
    .then(stream => (video.srcObject = stream))
    .catch(err => console.error(err));
}

/****Fixing the video width based on the screen size****/
function screenResize(isScreenSmall) {
  if (isScreenSmall.matches) {
    video.style.width = "320px";
  } else {
    video.style.width = "500px";
  }
}

screenResize(isScreenSmall);
isScreenSmall.addListener(screenResize);

/****Event listener for the video****/
video.addEventListener("playing", () => {
  const canvas = faceapi.createCanvasFromMedia(video);
  let container = document.querySelector(".container");
  container.append(canvas);

  const displaySize = { width: video.width, height: video.height };
  faceapi.matchDimensions(canvas, displaySize);

  /****Running the detection in a loop and updating the canvas and DOM****/
  setInterval(async () => {
    const detections = await faceapi
      .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()
      .withAgeAndGender();

    canvas.getContext("2d").clearRect(0, 0, canvas.width, canvas.height);

    /****Skipping the rest of the frame if no face was detected****/
    if (!detections) return;

    const resizedDetections = faceapi.resizeResults(detections, displaySize);

    /****Drawing the detection box and landmarks on the canvas****/
    faceapi.draw.drawDetections(canvas, resizedDetections);
    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);

    /****Setting values to the DOM****/
    if (resizedDetections && Object.keys(resizedDetections).length > 0) {
      const age = resizedDetections.age;
      const interpolatedAge = interpolateAgePredictions(age);
      const gender = resizedDetections.gender;
      const expressions = resizedDetections.expressions;
      const maxValue = Math.max(...Object.values(expressions));
      const emotion = Object.keys(expressions).filter(
        item => expressions[item] === maxValue
      );
      document.getElementById("age").innerText = `Age - ${interpolatedAge}`;
      document.getElementById("gender").innerText = `Gender - ${gender}`;
      document.getElementById("emotion").innerText = `Emotion - ${emotion[0]}`;
    }
  }, 10);
});

/****Smoothing the age output by averaging the last 30 predictions****/
function interpolateAgePredictions(age) {
  predictedAges = [age].concat(predictedAges).slice(0, 30);
  const avgPredictedAge =
    predictedAges.reduce((total, a) => total + a) / predictedAges.length;
  return avgPredictedAge;
}

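If you ever want to switch the camera off again (for example, when leaving the page), a small sketch along the following lines should work; stopVideo is not part of the tutorial code, just an illustration of the standard MediaStream API:

function stopVideo() {
  // Stop every track of the active stream and detach it from the video tag.
  const stream = video.srcObject;
  if (stream) {
    stream.getTracks().forEach(track => track.stop());
    video.srcObject = null;
  }
}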

Step 6 - Let's add the styles to the app

Replace the contents of style.css with the following code.

body {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
  height: 100vh;
  background: #2f2f2f;
  width: calc(100% - 33px);
}

canvas {
  position: absolute;
}
.container {
  display: flex;
  width: 100%;
  justify-content: center;
  align-items: center;
}
.result-container {
  display: flex;
  width: 100%;
  justify-content: center;
  align-items: center;
  flex-direction: column;
}
.result-container > div {
  font-size: 1.3rem;
  padding: 0.5rem;
  margin: 5px 0;
  color: white;
  text-transform: capitalize;
}
#age {
  background: #1e94be;
}
#emotion {
  background: #8a1025;
}
#gender {
  background: #62d8a5;
}
video {
  width: 100%;
}
header {
  background: #42a5f5;
  color: white;
  width: 100%;
  font-size: 2rem;
  padding: 1rem;
}


Step 7 - Let's run the app using Live Server or http-server
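
For example, assuming Node.js is installed, running npx http-server from the face-recognition folder and opening the printed address in the browser works well. Keep in mind that main.js loads the models from /models relative to the server root, so opening index.html directly from the file system will not load them.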

Once you run the app, you will see the following output.
[Screenshot of the running app showing the detection box, the face landmarks, and the predicted emotion, gender, and age]

You can also try the app I deployed, using the following URL:
https://face-recognition.karkranikhil.now.sh/

References

face-api.js - https://github.com/justadudewhohacks/face-api.js/
GitHub - https://github.com/karkranikhil/face-recognition-using-js

Top comments (13)

janakiram-a

Hi Nikhil,
As I am trying to host it on local IIS, I am getting 404 errors in the console for files from the js and models folders. I have also enabled 'Static Content' in the IIS settings. Is there any other configuration that I need to do?
Please let me know. Your demo looks excellent, I would like to play with it if it can be configured on my system. Thanks.
-Janakiram

Nitesh Khatri

How do I change the color of the face dots from pink to white?

Lance Whatley

I wrote the following utility & library that uses face-api.js as well 🙂, nicely done: github.com/Risk3sixty-Labs/facerep...

Ernesto Vizcaino

I'll try it 👍

Chhe-chinyong

I don't see the camera even when I turn it on.

Nikhil karkra

Can you check if there is any error in the console?

kuba2511

How do I add the models folder into the code?

saurabh sashank

What is the extension of the model files?

age_gender_model-shard1

I am unable to host it on IIS

Bourry Xavier

Face-api is too slow. Jeeliz FaceFilter is really faster (to run and to load), especially on mobile devices ( github.com/jeeliz/jeelizFaceFilter ).

Avi Chovatiya

Interesting.
Have you created any model?

ᴼᴹᴳ ᶥᵗˢLateefknows01

Hi, please, how do I download the model files in step 3 from your GitHub?

Nikhil karkra

Download each file and place it inside the models folder.
