Abhinav Anand

Building My First Full-Stack App: Part 1 - Crafting the React Frontend for Face Detection

Introduction

This is my first deep dive into building a full-stack web application, and I'm excited to share my learning journey and the technical steps involved. My goal with this project was to solidify my understanding of frontend development with React, connect to an external API, build a robust backend server, and manage a database – culminating in a deployed application.

What is this app?

This is a simple web application where users can paste an image URL, and the app will detect and highlight faces within that image.

Key Technologies Used:

  • Front-End: Vite + React.js
  • Back-End: Node.js + Express.js
  • Database: PostgreSQL (managed with DBeaver)

You can explore the full codebase here:

https://github.com/abhinavan00/face_detection_app


The Frontend (React.js)

Setting Up the Project:

I initiated the React project using npm create vite@latest, selecting JavaScript as the language. This provided a lightweight and fast development environment. After initial setup, I streamlined the project by removing unnecessary boilerplate from App.jsx, App.css, and index.css to align with my specific design requirements.
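
For reference, the setup amounts to something like this (the project name here is a placeholder, not necessarily what I used):

```shell
# Scaffold the project with Vite (pick React + JavaScript at the prompts)
npm create vite@latest face-detection-app
cd face-detection-app

# Install dependencies and start the dev server
npm install
npm run dev
```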

Core Component Architecture:

Before writing any code, I planned the application's skeletal structure to ensure modularity and maintainability. The design included several functional components:

  • Logo: Displays the application branding.
  • Navigation: Handles user authentication links (Sign In, Register).
  • Entry Count: Shows user-specific data (e.g., images processed).
  • ImageLinkForm: Contains the input field for image URLs and the submission button.
  • FaceRecognition: Responsible for displaying the image and overlaying the detected bounding boxes.
  • Smart Components: App.jsx (the main container), SignIn, and Register pages. The SignIn and Register components were styled efficiently using Tachyons.
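
Put together, the component tree looked roughly like this (a simplified sketch based on the list above; the exact component names and props in the repo may differ):

```jsx
// Rough shape of App.jsx's render output (sketch, not the exact code)
<div className="App">
  <Navigation />      {/* Sign In / Register links */}
  <Logo />
  <EntryCount />      {/* user-specific data, e.g. images processed */}
  <ImageLinkForm />   {/* URL input + submit button */}
  <FaceRecognition /> {/* image + bounding-box overlay */}
</div>
```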

Handling User Input and Displaying Results:

For state management, I opted for React Class Components in this project, although I have also practiced with React Hooks. The core state elements were input (to capture the image URL from the form) and imageUrl (to store the URL for image display).

Upon user submission, the input value updates imageUrl. This imageUrl is then passed as a prop to the FaceRecognition component via ImageLinkForm, enabling the image to be displayed.
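
In plain JavaScript terms, the flow looks something like this (React itself is omitted so the sketch stays minimal, and the handler names are my own rather than necessarily the app's):

```javascript
// Minimal model of the state flow: `input` tracks the text field,
// and submitting copies it into `imageUrl` for display.
class AppStateSketch {
  constructor() {
    this.state = { input: '', imageUrl: '' };
  }
  setState(patch) {
    this.state = { ...this.state, ...patch };
  }
  // Wired to the text input's onChange
  onInputChange(event) {
    this.setState({ input: event.target.value });
  }
  // Wired to the submit button's onClick
  onButtonSubmit() {
    this.setState({ imageUrl: this.state.input });
  }
}
```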

To detect faces and draw bounding boxes, I utilized Clarifai's face-detection API. The initial call was made from the browser with JavaScript's Fetch API against Clarifai's REST endpoint. In a separate practice session, I also explored integrating the API via Clarifai's gRPC client for a more robust backend solution.
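
As a sketch of what that Fetch call involves (the endpoint, model name, and payload shape follow Clarifai's public v2 REST docs rather than my exact code, and the helper name is my own):

```javascript
// Builds the options object for a Clarifai-style REST predict call.
const buildClarifaiRequest = (imageUrl, apiKey) => ({
  method: 'POST',
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
    Authorization: `Key ${apiKey}`, // Clarifai API key
  },
  body: JSON.stringify({
    inputs: [{ data: { image: { url: imageUrl } } }],
  }),
});

// Usage (network call shown for illustration only):
// fetch('https://api.clarifai.com/v2/models/face-detection/outputs',
//       buildClarifaiRequest(imageUrl, API_KEY))
//   .then(response => response.json())
```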

Overcoming the CORS Challenge (A First-Time Debugging Experience):

This particular challenge significantly deepened my understanding of web security. Initially, my frontend API requests were consistently being rejected. I quickly identified the issue as a CORS (Cross-Origin Resource Sharing) policy violation, a concept I hadn't encountered directly in my coursework.

My local frontend (running on localhost) was calling the Clarifai API directly from the browser; because the response didn't include CORS headers permitting my origin, the browser blocked the request for security reasons. After extensive searching and troubleshooting (including exploring my course's Discord channel and external resources), I understood the need to proxy the request through a server.

My immediate solution for testing was to create a "fake server" (a simple local proxy): the browser talks to the proxy, and the proxy forwards the request to Clarifai server-to-server, where CORS restrictions don't apply. This temporary solution allowed me to successfully retrieve API results and proceed with development, though I knew a proper backend proxy would be essential for the final application.

Calculating and Displaying Bounding Boxes:

Once I received the data from Clarifai, the next step was to parse the API response to calculate the precise coordinates for the bounding boxes. This involved some careful geometric calculations based on the image dimensions and the API's returned data.
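
In code, the calculation looks roughly like this (the snake_case field names follow the shape of Clarifai's documented response, where each box edge is a fraction of the image; the function and variable names are mine). Expressing rightCol and bottomRow as distances from the right and bottom edges makes it easy to position an absolutely-positioned overlay div with CSS right/bottom:

```javascript
// Converts Clarifai's fractional bounding box into pixel offsets
// relative to the rendered image. All boundingBox fields are in [0, 1].
function calculateFaceBox(boundingBox, imageWidth, imageHeight) {
  return {
    leftCol: boundingBox.left_col * imageWidth,
    topRow: boundingBox.top_row * imageHeight,
    // distances from the right/bottom edges, handy for CSS right/bottom
    rightCol: imageWidth - boundingBox.right_col * imageWidth,
    bottomRow: imageHeight - boundingBox.bottom_row * imageHeight,
  };
}
```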


The calculated displayFaceBox object was then passed as a prop to the FaceRecognition component, which dynamically rendered the bounding boxes around the detected faces.


Conclusion

I hope this first part has given you a clear insight into building the frontend of a full-stack application with React. I'd love to hear your thoughts or questions about anything covered here – feel free to leave them in the comments below!
