Gorchene Bader

Deploy a YOLOv8 model locally using FastAPI x ReactJS 🐍⚡

Hello, data science enthusiasts! In this tutorial, we'll walk through the process of deploying a YOLOv8 object detection model using FastAPI for the backend microservice and ReactJS for the frontend interface.

First, make sure you have Python installed on your system. Then, let's create our project directory:

mkdir Yolov8_Fast_React
cd Yolov8_Fast_React

Next, we'll set up a Python virtual environment to keep our dependencies isolated:

python -m venv env
source env/bin/activate  # on Windows: env\Scripts\activate

With the virtual environment activated, we can install the backend dependencies: FastAPI and uvicorn for the API server, python-multipart for handling file uploads, and ultralytics, OpenCV, and NumPy for inference:

pip install fastapi uvicorn python-multipart ultralytics opencv-python-headless numpy
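Before wiring the model into an API, it's worth confirming that ultralytics runs on its own. Here is a minimal sanity check; sample.jpg is a placeholder for any local image you have on hand:

from ultralytics import YOLO

# The nano checkpoint is downloaded automatically on first use
model = YOLO("yolov8n.pt")

# Run inference on a local image and print one line per detected box
results = model.predict("sample.jpg")
for box in results[0].boxes:
    print(box.xyxy[0].tolist(), float(box.conf[0]), int(box.cls[0]))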

Now that we have the required tools, let's start building the backend API using FastAPI:

touch main.py

In the main.py file, we'll define the API endpoints and handle the object detection logic:

from fastapi import FastAPI, File, UploadFile
import cv2
import numpy as np
from ultralytics import YOLO
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
# Load the pretrained nano checkpoint; ultralytics downloads it on first use
model = YOLO("yolov8n.pt")

allowed_origins = [
    "http://localhost",
    "http://localhost:3000",
]

# Allow the React dev server (http://localhost:3000) to call this API cross-origin
app.add_middleware(
    CORSMiddleware,
    allow_origins=allowed_origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.post("/detect/")
async def detect_objects(file: UploadFile = File(...)):
    # Process the uploaded image for object detection
    image_bytes = await file.read()
    image = np.frombuffer(image_bytes, dtype=np.uint8)
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)

    # Perform object detection with YOLOv8
    results = model.predict(image)

    # Process the detection results and return a response
    detections = []
    for result in results[0].boxes:
        x1, y1, x2, y2 = result.xyxy[0]
        confidence = result.conf[0]
        class_id = result.cls[0]
        detections.append({
            "x1": float(x1),
            "y1": float(y1),
            "x2": float(x2),
            "y2": float(y2),
            "confidence": float(confidence),
            "class_id": int(class_id)
        })

    return {"detections": detections}
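With the endpoint in place, you can smoke-test it before touching the frontend. Below is a minimal sketch using the requests library (pip install requests); it assumes the server is running on uvicorn's default port 8000, and sample.jpg is a placeholder path:

# test_detect.py - quick sanity check for the /detect/ endpoint
import requests

with open("sample.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8000/detect/",
        files={"file": ("sample.jpg", f, "image/jpeg")},
    )

response.raise_for_status()
for detection in response.json()["detections"]:
    print(detection)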

Next, let's create the frontend using React:

npx create-react-app frontend
cd frontend
npm install axios

In the React app, we'll create a simple interface to upload an image and display the detected objects:

import React, { useState } from 'react';
import axios from 'axios';

function App() {
  const [image, setImage] = useState(null);
  const [detections, setDetections] = useState([]);

  const handleImageUpload = async (event) => {
    const file = event.target.files[0];
    setImage(file);

    const formData = new FormData();
    formData.append('file', file);

    // The FastAPI backend listens on port 8000 (uvicorn's default)
    const response = await axios.post('http://localhost:8000/detect/', formData);
    setDetections(response.data.detections);
  };

  return (
    <div>
      <input type="file" onChange={handleImageUpload} />
      {image && (
        <div>
          <img src={URL.createObjectURL(image)} alt="Uploaded" />
          {detections.map((detection, index) => (
            <div key={index}>
              Class {detection.class_id}: ({detection.x1}, {detection.y1}) to ({detection.x2}, {detection.y2}), confidence {detection.confidence}
            </div>
          ))}
        </div>
      )}
    </div>
  );
}

export default App;

Finally, to run the full application, start the FastAPI backend and the React frontend in separate terminals:

# Backend
uvicorn main:app --reload

# Frontend
cd frontend
npm start

The application should now be accessible at http://localhost:3000, with the FastAPI backend listening on http://localhost:8000, allowing you to upload images and view the objects detected by the YOLOv8 model.
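One easy improvement: the API currently returns numeric class ids, but ultralytics ships an id-to-name mapping on the model object, so you could attach human-readable labels to each detection. A small sketch of the mapping:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# model.names maps class ids to COCO labels, e.g. 0 -> "person"
for class_id, name in list(model.names.items())[:5]:
    print(class_id, name)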
