Hello guys,
I have developed an application with face detection that automatically applies a mask to your face in real time.
In this article, I will explain how to develop this application.
DEMO→https://mask-app-one.vercel.app/
github→https://github.com/YuikoIto/mask-app
This application has no loading animation, so you may have to wait a few seconds on the first load.
Set up a React application and install react-webcam
$ npx create-react-app face-mask-app --template typescript
$ yarn add react-webcam @types/react-webcam
Then, try setting up the web camera.
// App.tsx
import { useRef } from "react";
import "./App.css";
import Webcam from "react-webcam";

const App = () => {
  const webcam = useRef<Webcam>(null);

  return (
    <div className="App">
      <header className="header">
        <div className="title">face mask App</div>
      </header>
      <Webcam
        audio={false}
        ref={webcam}
        style={{
          position: "absolute",
          margin: "auto",
          textAlign: "center",
          top: 100,
          left: 0,
          right: 0,
        }}
      />
    </div>
  );
};

export default App;
$ yarn start
Then access http://localhost:3000/.
Yay! The web camera is now available.
Try face detection using TensorFlow
Here, we are using this model. https://github.com/tensorflow/tfjs-models/tree/master/face-landmarks-detection
$ yarn add @tensorflow-models/face-landmarks-detection @tensorflow/tfjs-core @tensorflow/tfjs-converter @tensorflow/tfjs-backend-webgl
- If you don't use TypeScript, you don't have to install all of them. Install @tensorflow/tfjs instead of @tensorflow/tfjs-core, @tensorflow/tfjs-converter, and @tensorflow/tfjs-backend-webgl.
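In that case, the install command would look something like this (my example, not from the original post):

$ yarn add @tensorflow-models/face-landmarks-detection @tensorflow/tfjs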
Library versions used in this article:
"@tensorflow-models/face-landmarks-detection": "^0.0.3",
"@tensorflow/tfjs-backend-webgl": "^3.6.0",
"@tensorflow/tfjs-converter": "^3.6.0",
"@tensorflow/tfjs-core": "^3.6.0",
// App.tsx
import { useRef, useEffect } from "react";
import "@tensorflow/tfjs-core";
import "@tensorflow/tfjs-converter";
import "@tensorflow/tfjs-backend-webgl";
import * as faceLandmarksDetection from "@tensorflow-models/face-landmarks-detection";
import { MediaPipeFaceMesh } from "@tensorflow-models/face-landmarks-detection/dist/types";

const App = () => {
  const webcam = useRef<Webcam>(null);

  const runFaceDetect = async () => {
    const model = await faceLandmarksDetection.load(
      faceLandmarksDetection.SupportedPackages.mediapipeFacemesh
    );
    /*
      Please check your library version.
      The new version is a bit different from the previous one.
      In the new version, you should write the following instead.
      You can find more information at
      https://github.com/tensorflow/tfjs-models/tree/master/face-landmarks-detection.

      const model = faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh;
      const detectorConfig = {
        runtime: 'mediapipe', // or 'tfjs'
        solutionPath: 'https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh',
      };
      const detector = await faceLandmarksDetection.createDetector(model, detectorConfig);
    */
    detect(model);
  };

  const detect = async (model: MediaPipeFaceMesh) => {
    if (webcam.current) {
      const webcamCurrent = webcam.current as any;
      // proceed only once the video is fully loaded
      if (webcamCurrent.video.readyState === 4) {
        const video = webcamCurrent.video;
        const predictions = await model.estimateFaces({
          input: video,
        });
        if (predictions.length) {
          console.log(predictions);
        }
      }
    }
  };

  useEffect(() => {
    runFaceDetect();
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [webcam.current?.video?.readyState]);
Check the logs.
OK, looks good.
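For reference, with face-landmarks-detection 0.0.3 each element of predictions is an object roughly shaped like this (the field names come from the model's README; the numbers are illustrative, not real output):

// one element of the predictions array (illustrative values)
const examplePrediction = {
  faceInViewConfidence: 0.998,
  boundingBox: {
    topLeft: [232.3, 145.3], // [x, y]
    bottomRight: [449.9, 363.0],
  },
  mesh: [[92.0, 119.5, -17.5] /* ... 468 [x, y, z] points in total */],
  scaledMesh: [[322.3, 297.6, -17.5] /* ... the same points, scaled to the video size */],
  annotations: {
    silhouette: [/* subsets of scaledMesh grouped by face region */],
    // ...
  },
};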
Set up a canvas to overlay the mask on your face
Add <canvas> under <Webcam>.
// App.tsx
const App = () => {
  const webcam = useRef<Webcam>(null);
  const canvas = useRef<HTMLCanvasElement>(null);

  return (
    <div className="App">
      <header className="header">
        <div className="title">face mask App</div>
      </header>
      <Webcam audio={false} ref={webcam} />
      <canvas ref={canvas} />
    </div>
  );
};
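For the mask to line up with your face, the canvas has to sit exactly on top of the video. One way to do that is to give both elements the same absolute positioning used in the first App.tsx; the shared style object below is my own sketch, not from the original post:

// give both elements the same absolute position so the canvas overlays the video
const mediaStyle: React.CSSProperties = {
  position: "absolute",
  margin: "auto",
  textAlign: "center",
  top: 100,
  left: 0,
  right: 0,
};

// in the JSX:
// <Webcam audio={false} ref={webcam} style={mediaStyle} />
// <canvas ref={canvas} style={mediaStyle} />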
Match the size of the canvas with the video.
const videoWidth = webcamCurrent.video.videoWidth;
const videoHeight = webcamCurrent.video.videoHeight;
canvas.current.width = videoWidth;
canvas.current.height = videoHeight;
Then, look at the face mesh keypoint map (you can find it in the model's repository) and check which area we should fill in. According to the map, point No. 195 is around the nose, so we set this point as the fulcrum. We can draw a mask easily using beginPath() through closePath().
// mask.ts
import { AnnotatedPrediction } from "@tensorflow-models/face-landmarks-detection/dist/mediapipe-facemesh";
import {
  Coord2D,
  Coords3D,
} from "@tensorflow-models/face-landmarks-detection/dist/mediapipe-facemesh/util";

const drawMask = (
  ctx: CanvasRenderingContext2D,
  keypoints: Coords3D,
  distance: number
) => {
  // keypoint indices tracing the lower outline of the face from one side to the other
  const points = [
    93, 132, 58, 172, 136, 150, 149, 176, 148, 152,
    377, 400, 378, 379, 365, 397, 288, 361, 323,
  ];

  // start from point No. 195 (around the nose)
  ctx.moveTo(keypoints[195][0], keypoints[195][1]);
  for (let i = 0; i < points.length; i++) {
    if (i < points.length / 2) {
      // first half: shift the outline slightly left and down
      ctx.lineTo(
        keypoints[points[i]][0] - distance,
        keypoints[points[i]][1] + distance
      );
    } else {
      // second half: shift the outline slightly right and down
      ctx.lineTo(
        keypoints[points[i]][0] + distance,
        keypoints[points[i]][1] + distance
      );
    }
  }
};
export const draw = (
  predictions: AnnotatedPrediction[],
  ctx: CanvasRenderingContext2D,
  width: number,
  height: number
) => {
  if (predictions.length > 0) {
    predictions.forEach((prediction: AnnotatedPrediction) => {
      const keypoints = prediction.scaledMesh;
      const boundingBox = prediction.boundingBox;
      const bottomRight = boundingBox.bottomRight as Coord2D;
      const topLeft = boundingBox.topLeft as Coord2D;
      // make the drawn mask a bit larger, in proportion to the bounding box diagonal
      const distance =
        Math.sqrt(
          Math.pow(bottomRight[0] - topLeft[0], 2) +
            Math.pow(bottomRight[1] - topLeft[1], 2)
        ) * 0.02;
      ctx.clearRect(0, 0, width, height);
      ctx.fillStyle = "black";
      ctx.save();
      ctx.beginPath();
      drawMask(ctx, keypoints as Coords3D, distance);
      ctx.closePath();
      ctx.fill();
      ctx.restore();
    });
  }
};
Import this draw function in App.tsx and use it.
const ctx = canvas.current.getContext("2d") as CanvasRenderingContext2D;
requestAnimationFrame(() => {
  draw(predictions, ctx, videoWidth, videoHeight);
});
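Putting the pieces together, detect() ends up looking roughly like this. Note this is a sketch: the re-scheduling at the end is my assumption (the original post doesn't show the loop explicitly), but something like it is needed so the mask keeps tracking your face:

// App.tsx (sketch of the final detect function)
const detect = async (model: MediaPipeFaceMesh) => {
  if (webcam.current && canvas.current) {
    const webcamCurrent = webcam.current as any;
    // proceed only once the video is fully loaded
    if (webcamCurrent.video.readyState === 4) {
      const video = webcamCurrent.video;
      // match the canvas size to the video
      const videoWidth = video.videoWidth;
      const videoHeight = video.videoHeight;
      canvas.current.width = videoWidth;
      canvas.current.height = videoHeight;
      const predictions = await model.estimateFaces({ input: video });
      if (predictions.length) {
        const ctx = canvas.current.getContext("2d") as CanvasRenderingContext2D;
        requestAnimationFrame(() => {
          draw(predictions, ctx, videoWidth, videoHeight);
        });
      }
    }
  }
  // schedule the next detection so the mask follows your face in real time
  requestAnimationFrame(() => detect(model));
};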
Finished!
Thanks for reading.
This was my first time using TensorFlow, but thanks to the official GitHub repository's well-written README, I was able to build a small application easily. I will develop more things with TensorFlow 🐣
🍎🍎🍎🍎🍎🍎
Please send me a message if you need anything.
🍎🍎🍎🍎🍎🍎
Top comments (14)
Hi, @yuikoito
Does this work on mobile as a selfie?
Yes, it works in my environment (iOS).
Thanks, Yuiko
@yuikoito, do you know how to take a screenshot automatically when doing a selfie?
@vitalii0927
Sorry for my late reply.
I don't really know how to take a screenshot "automatically", but maybe you can use the getScreenshot method, and loop it, for example, in requestAnimationFrame.
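Something like this (a rough sketch; webcam is the same ref as in the article, and what you do with each image is up to you):

// capture one frame per animation frame using react-webcam's getScreenshot()
const capture = () => {
  // getScreenshot() returns a base64-encoded data URL, or null if the video isn't ready
  const image = webcam.current?.getScreenshot();
  if (image) {
    // do something with the screenshot here, e.g. save it or send it to a server
  }
  requestAnimationFrame(capture);
};
capture();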
Thank you, @yuikoito
Fun project! 😎
Thanks!
Hey Yuiko Koyanagi,
My name is Erik O’Bryant and I’m assembling a team of developers to create an AI operating system. An OS like this would use AI to interpret and execute user commands (just imagine being able to type plain English into your terminal and having your computer do exactly what you tell it). You seem to know a lot about AI development and so I was wondering if you’d be interested in joining my team and helping me develop the first ever intelligent operating system. If you’re interested, please shoot me a message at erockthefrog@gmail.com and let me know.
Great post. Have you similarly tried MediaPipe Pose? I did try it and got mixed results, but I didn't manage to get the 3D landmarks up and running.
Thank you for reading! No, I haven't tried MediaPipe Pose yet. I will try :)
Actually, at first I tried to use face-api.js, but I couldn't manage it. I looked into how face-api.js works and which library it is based on, and that's how I found TensorFlow.
I am a real beginner in machine learning and there are still many things I don't understand, but I am happy if my article helps you!
Same here, just learning by experimenting.
Hey, I just posted a new one about TensorFlow.js. If you are interested, please have a look! It's not about 3D landmarks, but I hope my article helps your learning :)
dev.to/yuikoito/tensorflow-next-js...
great