DEV Community

Dr. Carlos Ruiz Viquez


**Real-time Object Localization using Edge AI**


In this snippet, we use OpenCV's `dnn` module to perform edge AI-based object localization with the YOLO (You Only Look Once) algorithm on a Raspberry Pi:

```python
import cv2
import numpy as np

# Load the pre-trained YOLOv3 network from its weights and config files
net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
output_layers = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Preprocess the frame into a normalized 416x416 blob and run a forward pass
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(output_layers)
    boxes = []
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            classID = np.argmax(scores)
            confidence = scores[classID]
            if confidence > 0.5 and classID == 2:  # 2 is the COCO class ID for 'car'
                # Detections are (centerX, centerY, width, height), normalized to [0, 1]
                box = detection[0:4] * np.array([frame.shape[1], frame.shape[0], frame.shape[1], frame.shape[0]])
                (centerX, centerY, width, height) = box.astype("int")
                x = int(centerX - (width / 2))
                y = int(centerY - (height / 2))
                boxes.append([x, y, int(width), int(height)])
    # Draw a rectangle around each detected car
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```
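A note on class IDs: the COCO-trained YOLOv3 model indexes classes in the order of the `coco.names` file from the official Darknet release, where class 0 is 'person' and class 2 is 'car'. A minimal sketch of the ID-to-label lookup, with only the first few names inlined for illustration:

```python
# First few COCO class names in the Darknet coco.names ordering (80 classes total)
COCO_NAMES = ["person", "bicycle", "car", "motorbike", "aeroplane"]

def label_for(class_id):
    """Map a YOLOv3 class ID to its human-readable COCO label."""
    return COCO_NAMES[class_id]
```

In practice you would read the full list from `coco.names` shipped alongside the weights rather than hard-coding it.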

This code snippet performs real-time object localization using the YOLOv3 algorithm, detecting cars and drawing their bounding boxes on the video feed from the Raspberry Pi's camera. Each frame is first converted into a normalized blob with cv2.dnn.blobFromImage, fed to the network via net.setInput, and passed through the YOLOv3 output layers with net.forward; detections for the 'car' class above the 0.5 confidence threshold are then drawn on the frame.
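YOLOv3 typically emits several overlapping detections per object, so production code usually applies non-maximum suppression (NMS) before drawing; OpenCV provides cv2.dnn.NMSBoxes for this. To illustrate what that step does, here is a minimal pure-Python sketch of the greedy NMS algorithm, using made-up example boxes and scores:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in [x, y, w, h] format."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_threshold=0.4):
    """Greedy NMS: keep boxes in descending score order,
    dropping any box that overlaps an already-kept box too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

# Example: the first two boxes overlap heavily, so only the
# higher-scoring one survives
boxes = [[100, 100, 50, 50], [105, 102, 50, 50], [300, 300, 40, 40]]
scores = [0.9, 0.8, 0.85]
kept = nms(boxes, scores)
```

In the real pipeline you would collect the per-box confidences alongside `boxes` and keep only the surviving indices before drawing rectangles.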


Published automatically with AI/ML.
