😎 Control the expressions 😀 of your emoji avatar 😳

Face-api.js is an amazing library: it is a JavaScript face recognition API for the browser and Node.js, implemented on top of the tensorflow.js core.
With face-api.js you can:

  • detect faces in an image;
  • retrieve the key landmarks (68 of them) on the face and track them (mouth, nose, eyes, etc.);
  • detect face characteristics (each face is encoded as 128 numbers), useful for finding the similarity between two faces;
  • detect expressions: neutral, happy, sad, angry, fearful, disgusted, surprised.

The GitHub repository of face-api.js is: https://github.com/justadudewhohacks/face-api.js

The most important thing is that someone has already performed the training on a meaningful sample for the tasks listed above, so you can reuse the pre-trained "models".
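
To give an idea of the API surface, here is a minimal sketch (not part of the PoC below) of how those detections can be chained once the corresponding models are loaded; it assumes input is an <img> or <video> element and that the face detection, landmark, recognition and expression models have already been loaded from a served /models directory.

// Hedged sketch: the face-api.js detection tasks are chainable.
// Assumes `input` is an <img>/<video> element and the needed models
// (face detector, landmarks, recognition, expressions) are already loaded.
async function describeFaces(input) {
  const results = await faceapi
    .detectAllFaces(input)      // face detection
    .withFaceLandmarks()        // 68 facial landmarks
    .withFaceExpressions()      // expression scores (0..1)
    .withFaceDescriptors();     // 128-number face descriptor
  results.forEach(r => console.log(r.expressions));
  return results;
}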

The PoC: 😎🙂😀😥😠😨🤢😳

I would like to walk through a small web app that allows you to control your avatar with your facial expressions.
The steps are:

  • init the project;
  • acquire the video stream from the webcam via getUserMedia (see the short sketch after this list);
  • load the models for face detection and expression recognition;
  • detect all faces in the video stream;
  • retrieve the expressions for each face;
  • display the emoji matching the highest-rated expression.
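
As a preview of the second step, here is a minimal sketch of the getUserMedia call; the try/catch and the attachWebcam name are illustrative additions, the PoC code later in the post keeps it simpler.

// Minimal sketch: request webcam access and attach the stream to a <video>
// element. The error handling here is illustrative; the PoC below omits it.
async function attachWebcam(videoElement) {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    videoElement.srcObject = stream;
  } catch (err) {
    // e.g. NotAllowedError if the user denies the camera permission
    console.error("Could not access the webcam:", err);
  }
}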

Init the project

As a first step, you need to create a new package.json file in a new empty directory. Fill package.json with:

{
  "name": "face-poc",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "face-api.js": "^0.22.0"
  }
}

The relevant part is the dependency on face-api.js.

Then, you can perform a classic npm install

npm i

Training / Models

You need to retrieve the models used by face-api.js.
You can download the models from this URL: https://github.com/justadudewhohacks/face-api.js/tree/master/weights and store them in a models directory. The models are fetched at runtime over HTTP (via loadFromUri), so the models directory must be served by your local web server (see "Start your local Web Server" below). For this PoC you only need the weights of the tiny face detector and of the face expression net.
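
As an optional sanity check, you can later verify from the browser that the weights are reachable; the manifest file name used below is the one from the face-api.js weights repository, adjust it if your copy differs.

// Optional sanity check: confirm the served models directory is reachable.
// The manifest file name comes from the face-api.js weights repository;
// adjust it if your local copy uses a different name.
fetch('/models/tiny_face_detector_model-weights_manifest.json')
  .then(res => console.log('models directory reachable:', res.ok))
  .catch(err => console.error('models directory not reachable', err));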

The layout

The HTML file is very simple.
It contains a video tag for displaying the webcam video stream and a div tag where the emoji will be placed.
So create an index.html file:

<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Emoji Expression Controller</title>
  <!-- 001 - include a style.css -->
  <link rel="stylesheet" type="text/css" href="style.css">
  <!-- 002 - include face-api.min.js -->
  <script defer src="./node_modules/face-api.js/dist/face-api.min.js"></script>
  <!-- 003 - include our script -->
  <script defer src="./script.js"></script>
</head>

<body>
  <div class="header">Emoji Expression Controller</div>
  <div class="content">
    <!-- 004 - define a tag for video -->
    <video id="cam" width="512" height="256" autoplay muted></video>
    <!-- 005 - define a div where to place the emoji -->
    <div id="face"></div>
  </div>
</body>

</html>

The relevant parts are:

  • 001 include an external CSS file where we will place the styles;
  • 002 include (with defer) face-api.min.js installed in node_modules (the minified version);
  • 003 include our JavaScript file, where we will place our JS code;
  • 004 define the video tag with the id, width, height, autoplay and muted attributes;
  • 005 define a div with the id attribute.

A touch of style

Create a style.css file and fill it with:

body {
  margin: 0px;
  padding: 0px;
  width: 100vw;
  height: 100vh;  
  justify-content: center;
  align-items: center;
}
.header {
  font-size:  48px;
  font-weight: 800;
  justify-content: center;
  align-items: center;
  display: flex;
}
.content {
  justify-content: center;
  align-items: center;
  display: flex;
}
#face {
  font-size: 128px;
}


The Logic

Create a script.js file where we will:

  • load the needed models via a Promise;
  • access the webcam through getUserMedia;
  • detect faces with expressions every 500 milliseconds;
  • map the expression to the right emoji;
  • display the emoji.

// 001 - Access to DOM for video and face icon
const video = document.getElementById('cam');
const face = document.getElementById('face');

// 002 - Load models for Face Detection and Face Expression
Promise.all(
  [
    faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
    faceapi.nets.faceExpressionNet.loadFromUri('/models')
  ]
).then(startvideo)


async function startvideo() {
  console.info("Models loaded, now I will access the WebCam")
  // 003 - Access the Cam and display the stream in the video tag
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true
  })
  video.srcObject = stream

}

// 004 - Define the map with the emoji
let statusIcons = {
  default: '😎',
  neutral: '🙂',
  happy: '😀',
  sad: '😥',
  angry: '😠',
  fearful: '😨',
  disgusted: '🤢',
  surprised: '😳'
}

function detectExpression() {
  // 005 - Set the default Emoji
  face.innerHTML = statusIcons.default
  // 006 - setInterval to detect face/expression periodically (every 500 milliseconds)
  const milliseconds = 500
  setInterval(async () => {
    // 007 - Wait to detect face with Expression
    const detection = await
      faceapi.detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
        .withFaceExpressions()
    // 008 - detectAllFaces returns an array of faces with some interesting attributes
    if (detection.length > 0) {
      // 009 - walk through all faces detected
      detection.forEach(element => {
        /**
         * 010 - each face element has an expressions attribute
         * for example:
         * neutral: 0.33032259345054626
         * happy: 0.0004914478631690145
         * sad: 0.6230283975601196
         * angry: 0.042668383568525314
         * fearful: 0.000010881130037887488
         * disgusted: 0.003466457361355424
         * surprised: 0.000011861078746733256
         */
        let status = "";
        let valueStatus = 0.0;
        for (const [key, value] of Object.entries(element.expressions)) {
          if (value > valueStatus) {
            status = key
            valueStatus = value;
          }
        }
        // 011 - once we have the highest scored expression (status) we display the right Emoji
        face.innerHTML = statusIcons[status]
      });
    } else {
      console.log("No Faces")
      //face.innerHTML = statusIcons.default;
    }
  }, milliseconds);
}

// 012 - Add a listener once the Video is played
video.addEventListener('playing', () => {
  detectExpression()
})


Walking through the code:

  • 001 create 2 const to access the DOM for the video and the emoji div;
  • 002 load the needed models previously downloaded. For this PoC we need 2 things: detecting the face (tinyFaceDetector) and identifying the expression (faceExpressionNet). Loading models takes time, so we use a Promise and then call the startvideo function once the loading of the models is completed;
  • 003 through getUserMedia and setting the stream we access the webcam and show the realtime video in the video tag;
  • 004 face-api is able to evaluate each detected face and assign a kind of score (from 0 to 1) to each expression: neutral, happy, sad, angry, fearful, disgusted, surprised;
  • 005 use a default emoji for the default state;
  • 006 use setInterval to periodically detect the expression (every 500 milliseconds);
  • 007 detect faces with expressions through the detectAllFaces method, also calling withFaceExpressions;
  • 008 if no face is detected, detectAllFaces returns an empty array (length == 0);
  • 009 we use detectAllFaces (All), so an array of faces is returned and we need to loop through it;
  • 010 each face element has an expressions attribute with the scores for neutral, happy, sad, angry, fearful, disgusted, surprised, and we track the highest-rated expression with status and valueStatus (see the sketch after this list for a more compact alternative);
  • 011 once we have the highest-scored expression (status), we display the right emoji from the icon map;
  • 012 add a listener in order to start the detection once the video is playing.
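
If you prefer a more compact way to pick the highest-scored expression, here is a small sketch; it relies on the same assumption as the code above, namely that element.expressions is an object mapping expression names to scores and that statusIcons is the emoji map defined earlier.

// Compact alternative to the explicit loop in steps 010/011: pick the
// expression entry with the highest score using Object.entries + reduce.
const [status] = Object.entries(element.expressions)
  .reduce((best, current) => (current[1] > best[1] ? current : best));
face.innerHTML = statusIcons[status];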

Start your local Web Server

In order to serve your index.html with the assets (js, css, models) you need to start a local web server (the models are fetched over HTTP, so opening index.html directly from the file system won't work). You can do it in multiple ways. For example, if you have PHP installed, launch this in the directory where you have your index.html:

php -S 127.0.0.1:8081

or you can use Python 3:

python -m http.server --bind 127.0.0.1 8081

Then open your browser and go to the URL: http://127.0.0.1:8081

Surprise!

Top comments (2)

Roberto B.

I pushed the code to the GitHub repo: github.com/roberto-butti/emoji-exp...

Roberto B.

Next step is to place the avatar over the real face. The avatar will have the same expression as the face...