DEV Community

Yuiko Ito
Developed the app which trains your facial expressions: face-api.js + Next.js + TypeScript

Hey guys,
I developed an app that trains your facial expressions.

Do you ever lose your facial expressions when you're alone at home?
I sure have, lol.


That gave me a sense of crisis, so I made an app that helps you train your facial expressions.

Let's try now.

DEMO: https://face-expression-challenge.vercel.app/
github: https://github.com/yuikoito/face-expression-challenge

I have only added minimal functionality so far, so the code may change drastically in the future.

Functions

The functions are simple.
When you press the start button, a countdown runs, and then a subject appears.
The subject is chosen randomly, and judging starts 1.5 seconds after the subject is shown. The judgment is based on the facial expressions you make during that window.

The expressions to be judged are the following seven patterns (angry, disgusted, fearful, happy, neutral, sad, surprised), as described in Recognize facial expressions and change face to Emoji using face-api.js with Next.js + TypeScript.
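For reference, the seven labels could be written as a small typed constant. This is a sketch under the assumption that it mirrors the ExpressionTypes array used later in this post; the exact definition in the repo may differ.

```typescript
// Hypothetical sketch of the seven expression labels face-api.js returns.
// The real ExpressionTypes in the repo may be defined differently.
const ExpressionTypes = [
  "angry",
  "disgusted",
  "fearful",
  "happy",
  "neutral",
  "sad",
  "surprised",
] as const;

type Expression = (typeof ExpressionTypes)[number];

// Pick a random subject, as the game does at the start of each round.
const randomExpression = (): Expression =>
  ExpressionTypes[Math.floor(Math.random() * ExpressionTypes.length)];
```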


I'll skip how to set up the face-api.js model, since that is covered in the article above.

Check if the subject and the facial expression match

To check whether the subject and the facial expression match, I pass the subject as an argument to the function that recognizes facial expressions in the video (faceDetectHandler).
The facial expression is then detected every 0.1 seconds.

  const faceDetectHandler = (subject: string) => {
    const video = webcamRef.current.video;
    const intervalHandler = setInterval(async () => {
      const detectionsWithExpressions = await faceapi
        .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
        .withFaceExpressions();
      if (detectionsWithExpressions.length > 0) {
        detectionsWithExpressions.forEach((detectionsWithExpression) => {
          // Pick the expression with the highest score
          const entries = Object.entries(detectionsWithExpression.expressions);
          const scoresArray = entries.map((i) => i[1]);
          const expressionsArray = entries.map((i) => i[0]);
          const max = Math.max(...scoresArray);
          const index = scoresArray.findIndex((score) => score === max);
          const expression = expressionsArray[index];
          if (expression === subject) {
            setIsMatch(true);
          }
        });
      }
    }, 100);
    setIntervalHandler(intervalHandler);
  };

Then, when the expression and the subject match, setIsMatch(true) sets the flag to true.

However, isMatch would stay true forever once set, so I reset it to false whenever I call faceDetectHandler.
The calling part looks like the following.

  const drawSubject = (expression: string) => {
    // Count how many subjects have been drawn, so the game stops after 5 rounds.
    setGameCount((gameCount) => gameCount + 1);
    // Reset to false
    setIsMatch(false);
    faceDetectHandler(expression);
    const canvas = canvasRef.current;
    // Import subject image and show it
    const ctx = canvas.getContext("2d");
    const image = document.createElement("img");
    image.onload = () => {
      coverCanvas(ctx, canvas);
      ctx.drawImage(
        image,
        (canvas.width - 300) / 2,
        (canvas.height - 300) / 2,
        300,
        300
      );
    };
    image.src = `/emojis/${expression}.png`;
    // If faceDetectHandler has already run, stop the previous interval.
    if (intervalHandler) {
      clearInterval(intervalHandler);
    }
    // Do clearRect 1.5s after showing the subject.
    setTimeout(() => {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
    }, 1500);
    // Then, judge 1.5s after it
    setTimeout(() => {
      setStage("judge");
    }, 3000);
  };

As for the flow of starting the game, showing a subject, judging, and ending the game, the state is managed by a single stage value, in a somewhat messy way.

  const [stage, setStage] = useState<
    "isNotStart" | "ready" | "start" | "judge" | "finish"
  >("isNotStart");
  useEffect(() => {
    const canvas = canvasRef.current;
    const ctx = canvas.getContext("2d");
    if (stage === "judge") {
      judge(ctx, canvas);
    }
    if (stage === "start") {
      const expression =
        ExpressionTypes[Math.floor(Math.random() * ExpressionTypes.length)];
      drawSubject(expression);
    }
    if (stage === "finish") {
      setTimeout(() => {
        coverCanvas(ctx, canvas);
        drawText(ctx, canvas, `${point}/5`);
      }, 1500);
    }
  }, [stage]);

Functions I plan to develop next

There are a few problems that I am planning to fix before the official release.

First, facial expression recognition starts running as soon as the subject is shown. The UI suggests the flow is: subject shown (1.5 seconds) -> judging (1.5 seconds), but in reality judging starts the moment the subject appears, so effectively you have the full 3 seconds to make the expression. That's not good.
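One possible fix, sketched here with hypothetical names (SUBJECT_MS, shouldJudge, and scheduleDetection are not from the actual repo), is to delay starting the detection interval until the 1.5-second subject display has passed:

```typescript
// Hypothetical sketch: keep the subject-display and judging phases separate.
const SUBJECT_MS = 1500; // subject is displayed, no judging yet
const JUDGE_MS = 1500; // judging window

// A detection happening `elapsed` ms after the subject appears
// should only count inside the judging window.
const shouldJudge = (elapsed: number): boolean =>
  elapsed >= SUBJECT_MS && elapsed < SUBJECT_MS + JUDGE_MS;

// Start the detection loop only when the judging window opens,
// e.g. scheduleDetection(faceDetectHandler, expression).
const scheduleDetection = (
  start: (subject: string) => void,
  subject: string
) => setTimeout(() => start(subject), SUBJECT_MS);
```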

Also, the judging of facial expressions is pretty lax.
That's because the judgment simply takes the expression with the highest score among the current scores, as shown below.

          const entries = Object.entries(detectionsWithExpression.expressions);
          const scoresArray = entries.map((i) => i[1]);
          const expressionsArray = entries.map((i) => i[0]);
          const max = Math.max(...scoresArray);
          const index = scoresArray.findIndex((score) => score === max);

In face-api.js, you can get the scores of all the expressions, as shown below. If the highest score (neutral in this case) is close to 1, as in this example, that's fine. But if all the scores are between 0.1 and 0.2, it's too unreliable to conclude that the face is making that expression.

{
  angry: 0.00012402892753016204
  disgusted: 0.00000494607138534775
  fearful: 2.4963259193100384e-7
  happy: 0.00011926032311748713
  neutral: 0.9996343851089478
  sad: 0.00010264792217640206
  surprised: 0.000014418363207369111
}

So I'm thinking of adding a rule that if the highest expression score is not above 0.5, the expression won't count as a match, and seeing how that goes.
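As a sketch of that rule (dominantExpression and THRESHOLD are hypothetical names, not the app's actual code), the match check could return null when no score clears 0.5:

```typescript
// Hypothetical sketch of the planned rule: only accept the strongest
// expression when its score clears a confidence threshold.
const THRESHOLD = 0.5;

const dominantExpression = (
  scores: Record<string, number>
): string | null => {
  const entries = Object.entries(scores);
  if (entries.length === 0) return null;
  // Find the expression with the highest score.
  const [expression, score] = entries.reduce((best, cur) =>
    cur[1] > best[1] ? cur : best
  );
  // Below the threshold, refuse to call it a match.
  return score > THRESHOLD ? expression : null;
};
```

With scores like the example above, dominantExpression would return "neutral"; with a flat distribution like { neutral: 0.2, happy: 0.15 } it would return null, and that frame simply wouldn't count.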

I'm also planning to add a share function, dynamic OGP, and some kind of message at the final judgment.

I'm hoping to release it sometime next week, so please play with it after the release if you like!

That's it!

This article is the 16th week of trying to write at least one article every week.

If you'd like, please take a look at my previous weekly posts!
See you soon!

๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ

Please send me a message if you need.

yuiko.dev@gmail.com
https://twitter.com/yui_active

๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ
