Yuiko Koyanagi

Recognize facial expressions and change face to Emoji using face-api.js with Next.js+TypeScript

Hey guys,

I have developed an application with face-api.js that recognizes facial expressions and replaces your face with an emoji in real time!

(Demo GIF: facial expressions replaced with emoji in real time)

In this article, I will explain how to develop this application.

DEMO→https://face2emoji.vercel.app/
github→https://github.com/yuikoito/face2emoji

Why I developed this app

I've been having more and more online meetings lately, and there are times when I don't want to show my face, but I want to convey my expression.

I personally don't want to show my face, but of course I understand that facial expressions convey a lot.

So I thought it would be great if I could show only my facial expressions! lol

You can't use this application in a real web conference yet, but if there is enough demand, I will turn it into a Chrome extension so that it can be used in actual web conferences.

For now, just try it out and tell me what you think :)

App Structure and Function

I'm using Next.js + TypeScript to create the application.
The facial expression recognition is done using face-api.js, a module based on TensorFlow.js.
And the app is hosted on Vercel.

The function is simple: when the app is launched, it recognizes your face and facial expression, and places an emoji over your face.
The emojis used are Apple emojis.

face-api.js can recognize 7 patterns of facial expressions, and the following emoji is assigned to each of them.

(Table: the seven expressions returned by face-api.js — angry, disgusted, fearful, happy, neutral, sad, surprised — each paired with its Apple emoji.)
OK, now it's time to explain how to develop this application.

Setup and Install face-api.js

$ yarn create next-app <app-name>
$ cd <app-name>
$ touch tsconfig.json
$ yarn add --dev typescript @types/react

Then, rename index.js and _app.js to index.tsx and _app.tsx.

Install react-webcam.

$ yarn add react-webcam @types/react-webcam

Now we are ready to install face-api.js.

$ yarn add face-api.js

As stated in the README, face-api.js needs to load its models.
So, copy the weights folder from the face-api.js GitHub repository and place it under public.
Rename it from weights to models.

Load the models.
Here, we are using tinyFaceDetector and faceExpressionNet.
With ssdMobilenetv1 the accuracy would be better, but it is much heavier, so it will not work on some devices. (If you still want to try it, see the sketch after the code below.)

  const loadModels = async () => {
    const MODEL_URL = "/models";
    await Promise.all([
      faceapi.nets.tinyFaceDetector.load(MODEL_URL),
      faceapi.nets.faceExpressionNet.load(MODEL_URL),
    ]);
  };
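
By the way, this is roughly what the swap to ssdMobilenetv1 would look like. It is just a sketch; the rest of this article sticks with tinyFaceDetector.

  // Sketch: loading ssdMobilenetv1 instead of tinyFaceDetector.
  // More accurate, but noticeably heavier on weak devices.
  const loadModels = async () => {
    const MODEL_URL = "/models";
    await Promise.all([
      faceapi.nets.ssdMobilenetv1.load(MODEL_URL),
      faceapi.nets.faceExpressionNet.load(MODEL_URL),
    ]);
  };

  // ...and pass the matching options to detectAllFaces:
  // faceapi.detectAllFaces(video, new faceapi.SsdMobilenetv1Options()).withFaceExpressions();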

Let's check that the detection results are logged correctly.
Run face detection by clicking a button and check the log, like this:

// pages/index.tsx

import * as faceapi from "face-api.js";
import { useRef } from "react";
import Webcam from "react-webcam";
import styles from "../styles/Home.module.css";

export default function Home() {
  const webcamRef = useRef<Webcam>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const loadModels = async () => {
    const MODEL_URL = "/models";
    await Promise.all([
      faceapi.nets.tinyFaceDetector.load(MODEL_URL),
      faceapi.nets.faceExpressionNet.load(MODEL_URL),
    ]);
  };

  const faceDetectHandler = async () => {
    await loadModels();
    if (webcamRef.current && canvasRef.current) {
      const webcam = webcamRef.current.video as HTMLVideoElement;
      const canvas = canvasRef.current;
      webcam.width = webcam.videoWidth;
      webcam.height = webcam.videoHeight;
      canvas.width = webcam.videoWidth;
      canvas.height = webcam.videoHeight;
      const video = webcamRef.current.video;
      const detectionsWithExpressions = await faceapi
        .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
        .withFaceExpressions();
      console.log(detectionsWithExpressions);
    }
  };

  return (
    <div className={styles.container}>
      <main className={styles.main}>
        <Webcam audio={false} ref={webcamRef} className={styles.video} />
        <canvas ref={canvasRef} className={styles.video} />
        <button onClick={faceDetectHandler}>Detect face</button>
      </main>
    </div>
  );
}

(Screenshot: the detectionsWithExpressions array logged in the browser console.)

OK, seems good!

The detection part is the face detection result and contains the bounding box coordinates, etc. The expressions part contains a score for each facial expression.

Now all we have to do is check the location of the face and draw an emoji based on the expression with the highest score.

Change emoji according to facial expressions

The drawing part is long, so I put the logic in a separate file.
The detectionsWithExpressions and the canvas should be passed in from index.tsx.

First, convert the expressions object into a form that is easy to work with.

// expressions output
{
  angry: 0.00012402892753016204
  disgusted: 0.00000494607138534775
  fearful: 2.4963259193100384e-7
  happy: 0.00011926032311748713
  neutral: 0.9996343851089478
  sad: 0.00010264792217640206
  surprised: 0.000014418363207369111
}

Then convert it as follows.

    const entries = Object.entries(detectionsWithExpression.expressions);
    const scoresArray = entries.map((i) => i[1]);
    const expressionsArray = entries.map((i) => i[0]);

The whole code is as follows.

// utils/drawEmoji.ts

import {
  WithFaceExpressions,
  FaceDetection,
  FaceExpressions,
} from "face-api.js";

export const drawEmoji = async (
  detectionsWithExpressions: WithFaceExpressions<{
    detection: FaceDetection;
    expressions: FaceExpressions;
  }>[],
  canvas: HTMLCanvasElement
) => {
  detectionsWithExpressions.map((detectionsWithExpression) => {
    const ctx = canvas.getContext("2d");
    if (!ctx) return;
    // Pick the expression with the highest score
    const entries = Object.entries(detectionsWithExpression.expressions);
    const scoresArray = entries.map((i) => i[1]);
    const expressionsArray = entries.map((i) => i[0]);
    const max = Math.max.apply(null, scoresArray);
    const index = scoresArray.findIndex((score) => score === max);
    const expression = expressionsArray[index];
    const image = document.createElement("img");
    image.onload = () => {
      const width = detectionsWithExpression.detection.box.height * 1.2;
      const height = detectionsWithExpression.detection.box.height * 1.2;
      const x = detectionsWithExpression.detection.box.x - width * 0.1;
      const y = detectionsWithExpression.detection.box.y - height * 0.2;
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.drawImage(image, x, y, width, height);
    };
    image.src = `/emojis/${expression}.png`;
  });
};

By the way, since multiple faces can be detected, detectionsWithExpressions is an array. However, with ctx.clearRect(0, 0, canvas.width, canvas.height); placed where it is above, the map runs face by face and each draw clears the canvas first, so if there are several people only one of them ends up drawn and the others are erased.

For this reason, if you want to support multiple people, you may need to prepare a temporary canvas element, draw all of the users on it, and then draw that onto the main canvas (a rough sketch of this idea follows below).

This app is not intended for multiple people, though, so I left it as it is.
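
If you did want to handle multiple faces, a rough sketch of the temporary-canvas idea could look like the following. This is hypothetical and not what the app actually does; it assumes the same imports as drawEmoji.ts, and drawAllEmojis is just a name I made up for illustration.

// utils/drawEmoji.ts (hypothetical variant for multiple faces)

export const drawAllEmojis = async (
  detectionsWithExpressions: WithFaceExpressions<{
    detection: FaceDetection;
    expressions: FaceExpressions;
  }>[],
  canvas: HTMLCanvasElement
) => {
  // Draw everything onto an offscreen canvas first...
  const temp = document.createElement("canvas");
  temp.width = canvas.width;
  temp.height = canvas.height;
  const tempCtx = temp.getContext("2d");
  const ctx = canvas.getContext("2d");
  if (!tempCtx || !ctx) return;

  await Promise.all(
    detectionsWithExpressions.map(async ({ detection, expressions }) => {
      const entries = Object.entries(expressions);
      const scores = entries.map((i) => i[1]);
      const expression = entries[scores.indexOf(Math.max(...scores))][0];
      const image = new Image();
      image.src = `/emojis/${expression}.png`;
      await new Promise((resolve) => (image.onload = resolve));
      const size = detection.box.height * 1.2;
      tempCtx.drawImage(
        image,
        detection.box.x - size * 0.1,
        detection.box.y - size * 0.2,
        size,
        size
      );
    })
  );

  // ...then copy the combined result to the visible canvas in one go,
  // so earlier faces are not cleared by later ones.
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(temp, 0, 0);
};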

Anyway, the logic part is now done, and all that's left is to call the above function from index.tsx.
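
For reference, the call site could look roughly like this. The setInterval loop and the 100 ms interval are just one possible way to make it run continuously; check the repository for the actual implementation.

// pages/index.tsx (sketch of the call site)

import { drawEmoji } from "../utils/drawEmoji";

const faceDetectHandler = async () => {
  await loadModels();
  setInterval(async () => {
    if (webcamRef.current && canvasRef.current) {
      const webcam = webcamRef.current.video as HTMLVideoElement;
      const canvas = canvasRef.current;
      webcam.width = webcam.videoWidth;
      webcam.height = webcam.videoHeight;
      canvas.width = webcam.videoWidth;
      canvas.height = webcam.videoHeight;
      const detectionsWithExpressions = await faceapi
        .detectAllFaces(webcam, new faceapi.TinyFaceDetectorOptions())
        .withFaceExpressions();
      // Draw the emoji for each detected face onto the canvas
      drawEmoji(detectionsWithExpressions, canvas);
    }
  }, 100);
};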

Notes on deployment: handling "Module not found: Can't resolve 'fs'"

Since face-api.js also works in Node.js, parts of it use fs. Of course, fs is not needed when running in a browser, so this part causes an error at deploy time.
Therefore, you need to state explicitly that fs is not used when running in the browser.

This part was reported in the following issue.

https://github.com/justadudewhohacks/face-api.js/issues/154

Then, change next.config.js as follows.

// next.config.js
module.exports = {
  reactStrictMode: true,
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.resolve.fallback = {
        fs: false,
      };
    }
    return config;
  },
};


If you need more details, please check my GitHub repository.

That's it!

This article marks the 12th week of my challenge to write at least one article every week.

If you'd like, please take a look at my previous weekly posts!
See you soon!

Contact

Please send me a message if you want to offer a job or ask me something.
