Hi! Last week I wrote "Developed the app which trains your facial expressions: face-api.js + Next.js + TypeScript", and this week I updated the facial-expression training app and released the new version.
The main updates are as follows:
- Multilingual support
- Selectable difficulty levels
- Dynamic OGP support
- Moving on to the next subject as soon as you succeed
- Overall design refresh
The entire code is available here:
https://github.com/yuikoito/face-expression-challenge
URL: https://face-expression-challenge.vercel.app/
Selecting the difficulty level
Playing at the same difficulty every time would get boring, so I made the difficulty level selectable.
You can choose from easy, normal, hard, and devil.
Each difficulty changes not only the time limit but also the score threshold, so the higher levels make the judgment itself stricter.
The threshold is used in the part where the expression is detected, as shown below. (I use detectSingleFace from here on.)
const detectionsWithExpression = await faceapi
  .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
  .withFaceExpressions();
if (detectionsWithExpression) {
  // Pair each expression name with its score
  const entries = Object.entries(detectionsWithExpression.expressions);
  const scoresArray = entries.map((i) => i[1]);
  const expressionsArray = entries.map((i) => i[0]);
  // Find the expression with the highest score
  const max = Math.max(...scoresArray);
  const index = scoresArray.findIndex((score) => score === max);
  const expression = expressionsArray[index];
  if (
    expression === subject &&
    // Don't count it as a match unless the score clears the level's threshold
    entries[index][1] >= levelConfig[level].threshold
  ) {
    clearInterval(intervalHandler);
    setIsMatch(true);
    setStage("result");
  }
}
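The levelConfig object itself isn't shown above; here is a minimal sketch of the shape it would need, assuming each level holds a time limit and a threshold (the values below are illustrative, not the app's actual numbers):

// Hypothetical shape of levelConfig; the values are made up for illustration
const levelConfig: Record<string, { time: number; threshold: number }> = {
  easy: { time: 5, threshold: 0.6 },
  normal: { time: 4, threshold: 0.7 },
  hard: { time: 3, threshold: 0.8 },
  devil: { time: 2, threshold: 0.9 },
};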
Moving on to the next subject on success
Originally, I showed the subject for 1.5 seconds and then made a single judgment after another 1.5 seconds, but I decided to let users move on to the next subject as soon as they succeed.
So now, after the 1.5-second presentation, there is a 3-second time limit, and if your expression matches within that window, you move on to the next subject.
If no match is made within the 3 seconds, you move on to the next subject anyway.
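The post doesn't show the timer code, but the flow could be sketched roughly like this (the function names and the polling period are assumptions, not the app's actual code):

// Rough sketch of the timing flow; names and the polling period are assumptions
const SHOW_MS = 1500;  // how long the subject is presented
const LIMIT_MS = 3000; // time limit for matching the expression

const runSubject = (checkExpression: () => void, nextSubject: () => void) => {
  setTimeout(() => {
    // Poll the camera for a matching expression during the time limit
    const intervalHandler = setInterval(checkExpression, 100);
    setTimeout(() => {
      // Move on even if no match was made in time
      clearInterval(intervalHandler);
      nextSubject();
    }, LIMIT_MS);
  }, SHOW_MS);
};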
Dynamic OGP Support
The result page now uses a dynamically generated OGP image, which makes shared results easier to understand at a glance.
The image has a simple structure: a background image with text drawn on top.
To draw the background image, I imported loadImage from canvas (node-canvas) and used it like this:
const backgroundImage = await loadImage(
  path.resolve("./images/background.jpg")
);
ctx.drawImage(backgroundImage, 0, 0, WIDTH, HEIGHT);
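For context, here is a minimal sketch of what the surrounding API route could look like with node-canvas. The route shape, dimensions, and query parameter are assumptions for illustration, not the app's actual code:

import { createCanvas, loadImage } from "canvas";
import path from "path";
import type { NextApiRequest, NextApiResponse } from "next";

// 1200x630 is the common OGP image size
const WIDTH = 1200;
const HEIGHT = 630;

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  const canvas = createCanvas(WIDTH, HEIGHT);
  const ctx = canvas.getContext("2d");

  // Draw the background image first
  const backgroundImage = await loadImage(
    path.resolve("./images/background.jpg")
  );
  ctx.drawImage(backgroundImage, 0, 0, WIDTH, HEIGHT);

  // Then draw the result text on top (query parameter is hypothetical)
  ctx.font = "60px sans-serif";
  ctx.fillStyle = "#333";
  ctx.textAlign = "center";
  ctx.fillText(String(req.query.score ?? ""), WIDTH / 2, HEIGHT / 2);

  res.setHeader("Content-Type", "image/png");
  res.send(canvas.toBuffer("image/png"));
}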
Multilingual support
Next.js has had i18n routing built in since v10, so the app can be made multilingual without installing anything extra.
The dictionaries are plain TypeScript files, and a hook called useTranslate.ts decides which one to load depending on the locale.
import { useRouter } from "next/router";
import { JaTexts } from "../locales/ja";
import { EnTexts } from "../locales/en";

// Return the dictionary that matches the current locale
const useTranslate = () => {
  const { locale } = useRouter();
  return locale === "ja" ? JaTexts : EnTexts;
};

export default useTranslate;
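The dictionary files themselves aren't shown here; a minimal sketch of what locales/en.ts could look like (the keys are made up for illustration), with locales/ja.ts exporting JaTexts in the same shape:

// locales/en.ts — hypothetical example of a dictionary file
export const EnTexts = {
  title: "Face Expression Challenge",
  start: "Start",
  result: "Your result",
};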
Then don't forget to add the following settings in next.config.js:
module.exports = {
  i18n: {
    locales: ["en", "ja"],
    defaultLocale: "en",
  },
};
This was the first time I noticed that only the top page is automatically redirected to the locale matching the visitor's language...
I wanted the OGP part to be multilingual as well, depending on whether the shared URL is in English or Japanese, so when you share, the locale is kept in the share URL. (If you are playing in Japanese, the share URL becomes /ja/share...)
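As a rough sketch of how the locale can be kept in the share URL (the hook name and the path handling are hypothetical, not the app's actual code):

import { useRouter } from "next/router";

// Hypothetical helper: prefix the share path with the current locale
const useShareUrl = (path: string) => {
  const { locale, defaultLocale } = useRouter();
  const prefix = locale && locale !== defaultLocale ? `/${locale}` : "";
  return `https://face-expression-challenge.vercel.app${prefix}${path}`;
};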
After that, I refreshed the overall design and released the update.
That's it!
This article is for the 17th week of my challenge to write at least one article every week.
If you'd like, please take a look at my previous weekly posts!
See you soon!
Please send me a message if you need anything.
yuiko.dev@gmail.com
https://twitter.com/yui_active