For implementing face recognition, there are two major steps to follow (GitHub link provided below):
1. Provide images of a particular person, along with their name, to train on.
2. Provide all the images that need to be split; these will be compared against our trained data and sorted by face.
1) First, create a React app. Then we need to load the models into faceApi; these resolved models will be used across the components to recognize faces.
React - ./loadModules.js
import * as faceApi from 'face-api.js';

// Load the recognition, landmark, and detection models once at startup.
// face-api.js caches them, so components can use faceApi afterwards.
export const loadModules = () => {
  return Promise.all([
    faceApi.nets.faceRecognitionNet.loadFromUri('/src/models'),
    faceApi.nets.faceLandmark68Net.loadFromUri('/src/models'),
    faceApi.nets.ssdMobilenetv1.loadFromUri('/src/models')
  ]);
}

export default faceApi;
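The gating pattern above can be sketched in isolation: Promise.all resolves only after every loader finishes, so the app can wait on it before rendering. Here loaderA/B/C are hypothetical stand-ins for the three loadFromUri calls, for illustration only:

```typescript
// Hypothetical stand-ins for the three face-api.js model loaders.
const loaderA = async () => 'faceRecognitionNet loaded';
const loaderB = async () => 'faceLandmark68Net loaded';
const loaderC = async () => 'ssdMobilenetv1 loaded';

// Mirrors loadModules: resolve only after every model loader finishes,
// so components can safely start detecting faces once this settles.
const loadAll = (): Promise<string[]> =>
  Promise.all([loaderA(), loaderB(), loaderC()]);

loadAll().then((statuses) => {
  // All three "models" are ready here; mount the app / enable uploads.
  console.log(statuses.length); // 3
});
```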
2) Create a Home component. Add an input field for the name, a file-upload option, and a button to add the images of a particular person, which will become the training data for that person.
React - ./home.tsx
import React, { useState } from "react";
import { Button, Input } from '@material-ui/core';
import JSZip from 'jszip';
import { saveAs } from 'file-saver';
import faceApi from "../../../loadModules";
import { matchFacesBy, singleFaceDetectionWithImage } from "../../../utils/helpers";
const Home = (props:HomeProps) => {
  const [formData, setFormData] = useState({
    name: '',
    faceData: [],
    actualImages: []
  });
  const [checkPoint, setCheckPoint] = useState([]);
  const [submitDisable, setSubmitDisable] = useState(true);
  const [trainComplete, setTrainComplete] = useState(false);
  const [trainedSet, setTrainedSet] = useState([]);
  const [finalResult, setFinalResult] = useState([]);
  const [duplicate, setDuplicate] = useState(false);

  const handleNameChange = (event:any) => {
    const { value } = event.target;
    setFormData({ ...formData, name: value });
  }

  const handleSubmit = (event:any) => {
    event.preventDefault();
    checkPoint.push(formData);
    setCheckPoint(checkPoint);
    setFormData({
      name: '',
      faceData: [],
      actualImages: []
    });
  }

  const handleCompareImage = async (event:any) => {
    // will be implemented and discussed below
  }

  return (
    <React.Fragment>
      <div className="form-container">
        <div className="form-title">Upload Known Faces to split</div>
        <form onSubmit={handleSubmit}>
          <Input type="text" onChange={handleNameChange}
            placeholder="Enter The Name" value={formData.name} />
          <Button variant="contained" component="label"
            onChange={handleCompareImage} >
            Upload Known Face
            <input type="file" multiple style={{ display: "none" }}/>
          </Button>
          <Button color="primary" type="submit"
            disabled={submitDisable}>ADD
          </Button>
        </form>
      </div>
    </React.Fragment>
  )
}
Here we just provide the person's name and their images. On upload, the images are passed to handleCompareImage, where each image is run through face detection one by one and the results are pushed into our form data.
handleCompareImage - ./home.tsx
const handleCompareImage = async (event:any) => {
  const { files } = event.target;
  setSubmitDisable(true);
  let actualImages:any = [];
  let faceDetections:any = [];
  for (let index = 0; index < files?.length; index++) {
    const file = files[index];
    // Detect the single face in this image and keep its descriptor.
    const result:any = await singleFaceDetectionWithImage(file);
    if (result.singleFaceDetection) {
      actualImages.push(result.actualImage);
      faceDetections.push(result.singleFaceDetection?.descriptor);
    }
  }
  setFormData({
    ...formData,
    faceData: faceDetections,
    actualImages: actualImages
  });
  setSubmitDisable(false);
}
After executing handleCompareImage we re-enable the ADD button. Here we loop through the person's images, detect the face in each one, and store the resulting face descriptor data in formData.
The singleFaceDetectionWithImage function contains the face-detection logic and returns the image along with its face descriptor data.
React - ./helper.tsx
import faceApi from "../loadModules";

// Detect a single face in the given file and return both the image
// element and the detection (with landmarks and descriptor).
export function singleFaceDetectionWithImage(file:Blob) {
  return new Promise(async (resolve, reject) => {
    const actualImage = await faceApi.bufferToImage(file);
    const singleFaceDetection = await faceApi.detectSingleFace(actualImage)
      .withFaceLandmarks().withFaceDescriptor();
    resolve({ actualImage, singleFaceDetection });
  });
}
Here we first convert the file into an image element with faceApi.bufferToImage, then pass it to faceApi to detect the person's face; withFaceDescriptor gives us the descriptor data we train on.
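Under the hood, each descriptor is a 128-dimensional vector, and face-api.js compares faces by the Euclidean distance between descriptors (smaller distance means more similar). A minimal sketch of that comparison, using made-up short vectors instead of real 128-value descriptors:

```typescript
// Euclidean distance between two descriptors; face-api.js uses the
// same metric when matching faces (real descriptors have 128 values).
const euclideanDistance = (a: number[], b: number[]): number =>
  Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));

// Toy 3-value "descriptors" for illustration only.
const personA = [0.1, 0.2, 0.3];
const personB = [0.1, 0.2, 0.35];
const stranger = [0.9, 0.8, 0.7];

console.log(euclideanDistance(personA, personB) < 0.45); // true: likely same person
console.log(euclideanDistance(personA, stranger) < 0.45); // false: different person
```

The 0.45 figure is the same threshold this tutorial later passes to FaceMatcher.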
3) We now have the trained data set for a person (we can have multiple people by adding them one by one). We also provide an "Allow Duplicates" button so that the same image can appear in multiple people's folders. Clicking START will label each person's face descriptor data with their name and produce the trained data set for each person.
React - ./home.tsx (add this inside the Home component)
const handleTrain = () => {
  setTrainComplete(false);
  new Promise((resolve, reject) => {
    // Label each person's descriptors with their name.
    const labeledFaceDescriptors = checkPoint.map((data) => {
      return new faceApi.LabeledFaceDescriptors(data.name, data.faceData);
    });
    resolve(labeledFaceDescriptors);
  }).then((data:any) => {
    setTrainedSet(data);
    setTrainComplete(true);
  }).catch(err => {
    console.error(err);
  });
}
  return (
    <React.Fragment>
      <div className="form-container">
        <div className="form-title">Upload Known Faces to split</div>
        <form onSubmit={handleSubmit}>
          <Input type="text" onChange={handleNameChange}
            placeholder="Enter The Name" value={formData.name} />
          <Button variant="contained" component="label"
            onChange={handleCompareImage} >
            Upload Known Face
            <input type="file" multiple style={{ display: "none" }}/>
          </Button>
          <Button color="primary" type="submit"
            disabled={submitDisable}>ADD
          </Button>
        </form>
        <Button color="secondary"
          onClick={() => setDuplicate(!duplicate)}>Allow Duplicates
        </Button>
        {duplicate ?
          <div className="duplicate-warning">
            Allowing duplicates may increase the zip size
          </div>
          : ''}
      </div>
      {/* Form data display */}
      <div className="check-point-wrapper">
        <div className="form-display-container">
          {checkPoint.map((imgData, index) => (
            <div className="image-name-wrapper" key={index}>
              <img alt={imgData?.name}
                src={imgData?.actualImages[0]?.src ?
                  imgData?.actualImages[0].src : null}
              />
              <div>{imgData?.name}</div>
            </div>
          ))}
        </div>
        {checkPoint?.length ?
          <Button className="start-action" color="primary"
            variant="contained" onClick={handleTrain}>START</Button>
          : ''}
      </div>
    </React.Fragment>
  )
}
export default Home;
4) Now we need to upload all the images to be split by face recognition. We need an input to upload multiple images, then process them to get the face details in each image and compare those against the trained data set to split by face.
React - ./home.tsx (Add this in home component below "Form data display")
{/* Image to split upload */}
{trainComplete ?
  <div className="image-to-split-wrapper">
    <div>Upload All Your Images That Need to Be Split</div>
    <Button color="secondary" variant="contained" component="label"
      onChange={handleImageChange} >
      Upload File
      <input type="file" multiple style={{ display: "none" }}></input>
    </Button>
  </div>
  : ''}
And add this to the Home component (./home.tsx):
const handleImageChange = (event:any) => {
  const { files } = event.target;
  handleFiles(files);
}

const handleFiles = async (files:FileList) => {
  // 0.45 is the distance threshold: matches above it become 'unknown'.
  const faceMatcher:any = new faceApi.FaceMatcher(trainedSet, 0.45);
  for (let index = 0; index < files.length; index++) {
    const file = files[index];
    const actualImage = await faceApi.bufferToImage(file);
    // Detect every face in the image, with landmarks and descriptors.
    const allFaceDetection = await faceApi.detectAllFaces(actualImage)
      .withFaceLandmarks().withFaceDescriptors();
    const finalDataSet = matchFacesBy(allFaceDetection, file,
      faceMatcher, finalResult, duplicate);
    setFinalResult(finalDataSet);
  }
  makeZip();
}
const makeZip = () => {
  const zip = new JSZip();
  // Create one folder per recognized name and drop each file into it.
  finalResult.forEach((result) => {
    Object.keys(result).forEach((name) => {
      const file = result[name].file;
      if (file) {
        const imageFolder = zip.folder(name);
        imageFolder.file(file.name, file);
      }
    });
  });
  zip.generateAsync({ type: "blob" }).then((content) => {
    saveAs(content, 'split-images.zip');
  });
}
Here we handle the image upload; the uploaded images are passed to handleFiles. We provide the trained data set and the match threshold to the FaceMatcher API, then loop through all the images, detect all the faces in each image, and pass the face details to the matchFacesBy function in helper.tsx.
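Conceptually, the matcher finds the labeled descriptor with the smallest Euclidean distance to the query and returns 'unknown' when even the best distance exceeds the threshold. The following is an illustrative re-implementation of that idea under toy data, not face-api.js source code:

```typescript
// Illustrative sketch of what FaceMatcher.findBestMatch does conceptually.
type Labeled = { label: string; descriptor: number[] };

const distance = (a: number[], b: number[]): number =>
  Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));

const findBestMatch = (
  query: number[], labeled: Labeled[], threshold = 0.45
): string => {
  let best = { label: 'unknown', dist: Infinity };
  for (const entry of labeled) {
    const d = distance(query, entry.descriptor);
    if (d < best.dist) best = { label: entry.label, dist: d };
  }
  // Too far from every known face: treat as unknown.
  return best.dist <= threshold ? best.label : 'unknown';
};

// Toy descriptors (real ones have 128 values).
const trained: Labeled[] = [
  { label: 'alice', descriptor: [0.1, 0.2] },
  { label: 'bob', descriptor: [0.8, 0.9] },
];
console.log(findBestMatch([0.12, 0.21], trained)); // alice
console.log(findBestMatch([0.5, 0.55], trained));  // unknown
```

Lowering the threshold makes matching stricter (fewer false positives, more unknowns); raising it does the opposite.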
5) Now matchFacesBy will match each detected face from the image against the trained data.
React - ./helper.tsx
export const matchFacesBy = (allFaceDetection:any, file:File,
  faceMatcher:any, finalResult:any, duplicate:Boolean) => {
  const localFinalResult:any = {};
  for (let index = 0; index < allFaceDetection.length; index++) {
    const faceDetection = allFaceDetection[index];
    // Find the closest labeled face for this detection.
    const result = faceMatcher.findBestMatch(faceDetection?.descriptor);
    localFinalResult[result.label] = {
      result,
      file
    };
    if (result.label !== 'unknown') {
      localFinalResult['unknown'] = {};
      // Without duplicates, stop at the first recognized face.
      if (!duplicate) break;
    }
  }
  finalResult.push(localFinalResult);
  return finalResult;
}
Here we loop through all the faces detected in the image and find the best match for each one against the trained data. The data is pushed into an object keyed by the person's name (the "result.label" that was associated during training); unknown faces are pushed into the unknown object, and the final result is returned.
Now the final result for each image is stored in finalResult, and makeZip is called to create a zip file based on it; each person's images are stored in a separate folder and the zip is downloaded.
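The folder layout produced by makeZip can be sketched as a pure grouping step: collect file names under their matched label, so each name becomes one folder in the zip. groupByLabel is a hypothetical helper for illustration, not part of the tutorial code:

```typescript
// Hypothetical helper: group matched results by label, mirroring how
// makeZip creates one zip folder per recognized name.
type MatchResult = Record<string, { file?: { name: string } }>;

const groupByLabel = (results: MatchResult[]): Map<string, string[]> => {
  const folders = new Map<string, string[]>();
  for (const result of results) {
    for (const [label, entry] of Object.entries(result)) {
      if (!entry.file) continue; // skip empty 'unknown' placeholders
      if (!folders.has(label)) folders.set(label, []);
      folders.get(label)!.push(entry.file.name);
    }
  }
  return folders;
};

const grouped = groupByLabel([
  { alice: { file: { name: 'img1.jpg' } }, unknown: {} },
  { bob: { file: { name: 'img2.jpg' } } },
  { alice: { file: { name: 'img3.jpg' } } },
]);
console.log(grouped.get('alice')); // [ 'img1.jpg', 'img3.jpg' ]
```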
That's it! Note that face recognition with faceApi will not be 100% accurate.
GitHub link - github.com/Arjhun777
Working demo - splitbyface.netlify.app
Check out my blog - arjhun777.blogspot.com