Ali Zgheib

Build a face recognition tool for celebrities using React & AWS

Hello everyone! This is my first tech blog on dev.to (and in general), so I hope you hold tight while we go on this wild journey 😀

Today we will build a tool that can detect the faces of celebrities in a photo. The front-end will be built with React JS, and on the back-end we will use the AWS CDK to deploy our infrastructure (API Gateway & AWS Lambda). Our back-end will interact with AWS Rekognition, the machine learning and face recognition service provided by AWS (https://aws.amazon.com/rekognition/).

I thought that it would be fun to showcase the final results before diving into the implementation steps.

So let's say we have a photo of celebrities, as shown below:

[Image: the sample celebrities photo]

We will upload the image to our application:

[Image: the photo uploaded to the application]

Finally, after submitting the form we will get the results below:

[Image: the results modal listing the recognized celebrities]

I hope you liked the idea of the app and the accuracy of the results.

Below is a simple architecture diagram that showcases the final infrastructure and how all the services will interact with each other.

[Image: architecture diagram]

If that seems too complicated, no worries - we will be building it together step by step.

Let's start!!

First of all, we will start with our front-end (the React application). We can create a basic TypeScript React project using the command below:

npx create-react-app front-end --template typescript

After a couple of seconds/minutes, we should have a basic React application that we can use as a base.

Let's start the application in development mode:

npm run start

Our application will be simple: a single page containing the upload form and a modal to display the results. We will also make use of React state and the Fetch API to interact with our back-end API (which we will deploy later).

Let's start with the code for the form:


  const [file, setFile] = React.useState<File | null>(null);
  const [isLoading, setIsLoading] = React.useState(false);
  const [error, setError] = React.useState<string | null>(null);

  const handleSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
    // to be implemented later
  };

  const handleFileChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    if (!e.target.files || e.target.files.length === 0) {
      return setFile(null);
    }

    setFile(e.target.files[0]);
    setError(null);
  };
  return (
    <div className="container">
      <div className="card">
        <h3>Detect celebrities through pictures</h3>
        <div className="drop_box">
          <h4>Upload File here</h4>
          <p>Files Supported: PNG & JPG</p>

          <form onSubmit={handleSubmit}>
            <div className="form">
              <input
                type="file"
                accept=".png,.jpg"
                id="fileID"
                disabled={isLoading}
                onChange={handleFileChange}
              />
              <h6 id="file-name"></h6>

              <p id="error">{error}</p>
              <button
                type="submit"
                className="btn"
                id="submit-btn"
                disabled={isLoading}
              >
                {isLoading ? "Loading..." : "Submit"}
              </button>
            </div>
          </form>
        </div>
      </div>
    </div>
  );

After the form element (the div with class "card") we can add the modal as follows:

interface CelebtritiesData {
  celebrityFaces: any[];
  unrecognizedFaces: any[];
}

  const [showModal, setShowModal] = React.useState(false);
  const [celebritiesData, setCelebtritiesData] =
    React.useState<CelebtritiesData | null>(null);

  const handleModalClose = () => {
    setShowModal(false);
  };

  return (
    <div className="container">
      {/* upload file form here */}
      {showModal && (
        <div id="myModal" className="modal">
          <div className="modal-content">
            <span className="close" onClick={handleModalClose}>
              &times;
            </span>

            <div className="modal-actual-content">
              <h3>
                {`Number of recognized celebrities: ${celebritiesData?.celebrityFaces.length}`}
              </h3>
              <ul>
                {celebritiesData?.celebrityFaces.map((celebrityFace, index) => {
                  return <li key={index}>{celebrityFace.Name}</li>;
                })}
              </ul>

              {celebritiesData &&
                celebritiesData?.unrecognizedFaces.length > 0 && (
                  <h5>
                    {`We weren't able to recognize ${celebritiesData?.unrecognizedFaces.length} face(s).`}
                  </h5>
                )}
            </div>
          </div>
        </div>
      )}
    </div>
  );

Full code (including css) can be found here: https://github.com/AliZgheib/celebrities-face-recognition/tree/main/front-end

We still need to add the logic to convert the image to base64 and send it to our back-end/API (which we will build soon).

So let's take a break from our React application and deploy our back-end using the AWS CDK (AWS API Gateway + AWS Lambda).

First of all, there is a list of prerequisites before we start working with the AWS CDK - let's go through them together:

1- Install the recommended version of NodeJS - https://nodejs.org/en

2- Install AWS CDK

npm install -g aws-cdk

3- Configure your AWS credentials

aws configure

4- Make sure to install typescript

npm install -g typescript

Now let's create our base AWS CDK project using the TypeScript template:

mkdir back-end
cd back-end
cdk init app --language typescript

After a couple of seconds/minutes, we should have our base AWS CDK project generated from the TypeScript template.

An important step that we need to do when setting up AWS CDK on a new AWS account (or new region) is running the command below:

cdk bootstrap

That should be it for the setup!! Now let's continue coding.

If you are not familiar with the AWS CDK: it is a tool provided by AWS that helps us provision infrastructure as code instead of through the console (ClickOps).

Infrastructure as code is the future. It helps us write better, reusable, maintainable code that we can review before deploying updates to our production environment.

Now let's provision an API Gateway, a Lambda (which will act as an API endpoint) and add the necessary permissions for the Lambda to be able to interact with AWS Rekognition.

We add the API Gateway to our CDK stack as follows:

import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";

import * as apigateway from "aws-cdk-lib/aws-apigateway";


export class BackEndStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The code that defines your stack goes here

    const api = new apigateway.RestApi(this, "my-api", {
      description: "api gateway",
      deployOptions: {
        stageName: "dev",
      },
      // 👇 enable CORS
      defaultCorsPreflightOptions: {
        allowHeaders: [
          "Content-Type",
          "X-Amz-Date",
          "Authorization",
          "X-Api-Key",
        ],
        allowMethods: ["OPTIONS", "GET", "POST", "PUT", "PATCH", "DELETE"],
        allowCredentials: true,
        allowOrigins: ["*"],
      },
    });

    // 👇 create an Output for the API URL
    new cdk.CfnOutput(this, "apiUrl", { value: api.url });
  }
}


Now let's add the Lambda function:

import * as path from "path";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Duration } from "aws-cdk-lib";

    // 👇 define the lambda function
    const rekognitionLambda = new lambda.Function(this, "rekognition-lambda", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.main",
      code: lambda.Code.fromAsset(path.join(__dirname, "/../src/rekognition")),
      timeout: Duration.seconds(10),
    });

As you can see above, we set the Lambda's Node.js runtime to v18, pointed the "code" property at the directory containing our handler, and set "handler" to "index.main" (the "main" export of "index.js").
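For orientation, here is the project layout this configuration assumes (the `src/rekognition` directory is created in a later step, and `npm run build` compiles `index.ts` to `index.js` next to it so the asset contains the runnable handler):

```
back-end/
├── lib/
│   └── back-end-stack.ts    <- CDK stack (API Gateway, Lambda, IAM)
└── src/
    └── rekognition/
        └── index.ts         <- Lambda code; "index.main" = the `main` export
```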

We will add the Lambda Logic in a bit.

Now let's give our Lambda function the necessary IAM permissions to interact with AWS Rekognition:

import * as iam from "aws-cdk-lib/aws-iam";

    // 👇 create a policy statement
    const rekognitionLambdaPolicy = new iam.PolicyStatement({
      actions: ["rekognition:RecognizeCelebrities"],
      resources: ["*"],
    });

    // 👇 add the policy to the Function's role
    rekognitionLambda.role?.attachInlinePolicy(
      new iam.Policy(this, "rekognition-lambda-policy", {
        statements: [rekognitionLambdaPolicy],
      })
    );

We will also create a POST /rekognition endpoint under the API and assign the Lambda as a proxy integration, which will allow it to read the base64 image from the POST request body, call the AWS Rekognition service, and return the response.

    // 👇 add a /rekognition resource
    const rekognitionResource = api.root.addResource("rekognition");

    // 👇 integrate POST /rekognition with rekognitionLambda
    rekognitionResource.addMethod(
      "POST",
      new apigateway.LambdaIntegration(rekognitionLambda, { proxy: true })
    );

Now let's write the logic for the Lambda function: it will validate the "imageBase64" property in the body and forward it to the AWS Rekognition service, which will analyze the image and output a result. Finally, the result will be formatted and returned to our front-end application.

Let's create a new "src/rekognition" directory in the root of our "back-end" project. Under the "rekognition" folder we will add an "index.ts" file, which will contain the actual code of the Lambda function.
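From the root of the "back-end" project, that can be done with the commands below (installing `@aws-sdk/client-rekognition` gives us the TypeScript types for local compilation; the Node.js 18 Lambda runtime already bundles the v3 SDK):

```shell
# create the directory that lambda.Code.fromAsset points to
mkdir -p src/rekognition
touch src/rekognition/index.ts

# install the Rekognition client (for local type-checking/compilation)
npm install @aws-sdk/client-rekognition
```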

The content of the "index.ts" file should look as follows:

import {
  RekognitionClient,
  RecognizeCelebritiesCommand,
} from "@aws-sdk/client-rekognition";

const baseHandler = async (event: any) => {
  try {
    const { imageBase64 } = JSON.parse(event.body);

    console.log("imageBase64", imageBase64);

    if (!imageBase64) {
      return new Error("imageBase64 is required");
    }
    const rekognitionClient = new RekognitionClient({});

    const { CelebrityFaces, UnrecognizedFaces } = await rekognitionClient.send(
      new RecognizeCelebritiesCommand({
        Image: { Bytes: Buffer.from(imageBase64, "base64") },
      })
    );
    return {
      celebrityFaces: CelebrityFaces ? CelebrityFaces : [],
      unrecognizedFaces: UnrecognizedFaces ? UnrecognizedFaces : [],
    };
  } catch (error: any) {
    console.log(error);

    return new Error("Something went wrong");
  }
};

const main = async (event: any) => {
  const response = await baseHandler(event);

  if (response instanceof Error) {
    return {
      headers: {
        "Access-Control-Allow-Origin": "*",
      },
      body: JSON.stringify({
        message: response.message,
      }),
      statusCode: 400,
    };
  }

  return {
    headers: {
      "Access-Control-Allow-Origin": "*",
    },
    body: JSON.stringify(response),
    statusCode: 200,
  };
};

module.exports = { main };

PS: "imageBase64" is the actual celebrities image, which we will convert to base64 on the front-end and pass in the request body.

If the code above is too complex, here's what we are doing in short:

1- We parse the request body using JSON.parse
2- We validate that the body contains the "imageBase64" property
3- We initialize the "RekognitionClient" constructor
4- We pass the "imageBase64" to the Rekognition service and await a response
5- We destructure the required data from the response
6- In the main handler we return a 200 response if all is good - otherwise, we return a 400 with the error message.
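To make the parsing and decoding steps concrete, here is a minimal sketch (with a hypothetical `sampleEvent` and a tiny fake payload instead of a real image):

```typescript
// API Gateway (proxy integration) delivers the POST body as a raw JSON string,
// which is why the handler starts with JSON.parse.
const sampleEvent = {
  body: JSON.stringify({
    imageBase64: Buffer.from("fake-image-bytes").toString("base64"),
  }),
};

const { imageBase64 } = JSON.parse(sampleEvent.body);

// Rekognition's Image.Bytes expects raw bytes, so the base64 payload
// is decoded back into a Buffer before being sent.
const bytes = Buffer.from(imageBase64, "base64");

console.log(bytes.toString()); // "fake-image-bytes"
```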

The full back-end code can be found here: https://github.com/AliZgheib/celebrities-face-recognition/tree/main/back-end

Phew!! I believe that's it for the back-end code.

Now let's deploy our infrastructure/back-end to AWS by running the commands below:

npm run build

Now we deploy the code (follow the prompt - it should be pretty simple):

cdk deploy

After successfully deploying our API & Lambda, we should get the API URL in the output, as follows:

https://XXX.execute-api.us-east-1.amazonaws.com/dev/

"XXX" is your actual API ID (yours should be different)

Based on the above, our API endpoint will be:

https://XXX.execute-api.us-east-1.amazonaws.com/dev/rekognition

That's it for the back-end!! Hope you made it through with me.

Now let's go back to our React application and finalize it.

In the same main page (or in a separate helper file for better file organization), we add the following helper function, which reads the uploaded file and converts it to base64:

const convertFileToBase64 = (file: File): Promise<string> => {
  return new Promise((res, rej) => {
    const reader = new FileReader();
    reader.readAsDataURL(file);
    reader.onload = function () {
      // with readAsDataURL, result is always a data-URL string
      res(reader.result as string);
    };
    reader.onerror = function (error) {
      console.log("Error: ", error);
      rej(error);
    };
  });
};
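Note that `readAsDataURL` resolves to a full data URL (`data:<mime>;base64,<payload>`), while our API expects only the raw base64 payload - which is why `handleSubmit` below splits on `"base64,"`. A quick sketch with a made-up payload:

```typescript
// FileReader.readAsDataURL produces something like:
//   data:image/png;base64,iVBORw0KGgo=
// The API only wants the part after "base64,".
const dataUrl = "data:image/png;base64,iVBORw0KGgo=";
const rawBase64 = dataUrl.split("base64,")[1];

console.log(rawBase64); // "iVBORw0KGgo="
```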

Now we update the "handleSubmit" function that we previously left empty and add the necessary logic to call our POST /rekognition API endpoint:

  const handleSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
    try {
      e.preventDefault();

      if (!file) {
        return setError("Image not found.");
      }
      setError(null);
      setIsLoading(true);
      const fileBase64 = await convertFileToBase64(file);

      const fileNameClean = fileBase64.split("base64,")[1];

      const rawResponse = await fetch(
        "https://XXX.execute-api.us-east-1.amazonaws.com/dev/rekognition",
        {
          method: "POST",
          headers: {
            Accept: "application/json",
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ imageBase64: fileNameClean }),
        }
      );
      const content = await rawResponse.json();

      setCelebtritiesData(content);
      setShowModal(true);
    } catch (error) {
      setError("Image is invalid or too large.");
    } finally {
      setIsLoading(false);
    }
  };

PS: make sure to replace https://XXX.execute-api.us-east-1.amazonaws.com/dev/rekognition with your actual API endpoint.

And that should be it!! Now you can try out the application, and you should get results very similar to what I showcased at the start.

I hope you were able to follow along with me and that you enjoyed my article 😀

If you would like to take the React application to the next level and host it on AWS (using S3 and CloudFront as a caching solution) - Checkout the article that I recently wrote: https://dev.to/alizgheib/how-to-deploy-a-react-application-to-aws-using-aws-s3-aws-cloudfront-42km
