DEV Community


Breathing Detection Using Computer vision with webcam

jemaloQiu
・2 min read

Introduction

When I worked on developing a healthcare device for detecting infant apnea (pauses in breathing), I spent some time studying possible technological solutions. I tried several types of sensors, including an RGB camera, a 3-axis accelerometer, a thermal camera, and a depth camera (Kinect). The RGB-camera-based solution is particularly interesting: cameras are low-cost, and the sensor does not need to be mounted on the baby's body, so comfort is guaranteed. That is why I spent about three weeks on this approach.

As experiments, I tried several candidate algorithms: feature-point tracking, inter-frame difference checking, optical flow, and color-variation tracking.

In this post, I want to present the last of these. The work is inspired by this study. The basic idea is to track the color variation of an ROI (region of interest) on the human face: blood circulation and breathing cause slight color changes in our skin. The change is far too subtle for our eyes to perceive, but an ordinary webcam can recover it: although each color channel is quantized to only 8 bits (256 levels), averaging over the many pixels of an ROI yields a much finer effective resolution.
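This resolution gain can be sanity-checked on synthetic data: sensor noise acts as a natural dither, so the mean over thousands of 8-bit pixels resolves intensity changes far smaller than one quantization step. A minimal sketch (all values hypothetical, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

base = 100.3   # true analog intensity level
delta = 0.05   # a shift far smaller than one 8-bit step

def quantized_frame(level, shape=(200, 200)):
    """Simulate an 8-bit sensor: additive noise, then rounding to integer levels."""
    analog = level + rng.normal(0.0, 0.5, shape)  # noise dithers the signal
    return np.clip(np.round(analog), 0, 255)

# Averaging 40,000 pixels recovers the 0.05-level shift despite 1-level quantization
mean_a = quantized_frame(base).mean()
mean_b = quantized_frame(base + delta).mean()
print(mean_b - mean_a)
```

The same effect is what makes the ROI-average signal below sensitive enough to show heartbeats.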

ROI on human forehead

I chose a small rectangular area on the forehead as the ROI (region of interest) for tracking color variation. Below is the code for this step.

import cv2

# Fractions of the detected face box that define the forehead rectangle
x1, x2 = 0.4, 0.6
y1, y2 = 0.1, 0.25

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def getFaceROI(img):
    # Haar cascades expect grayscale; OpenCV webcam frames are in BGR order
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.2, 5)

    if len(faces) > 0:
        fx, fy, fw, fh = faces[0]
        left, top = fx + int(x1 * fw), fy + int(y1 * fh)
        right, bottom = fx + int(x2 * fw), fy + int(y2 * fh)
        # Draw the forehead rectangle on the frame for visual feedback
        cv2.rectangle(img, (left, top), (right, bottom), (255, 0, 0), 2)
        return [left, top, right, bottom]
    else:
        return [0, 0, 0, 0]

Color Calculation

We simply calculate the average value of one color channel (I used the green channel in my tests) over all pixels in the selected ROI.

def getColorAverage(frame, color_id):
    # Mean of one channel (0 = B, 1 = G, 2 = R in OpenCV's BGR order) over the ROI
    return frame[:, :, color_id].mean()
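To glue the two helpers together, note that the coordinates returned by getFaceROI are [left, top, right, bottom] in image coordinates, so the crop indexes the frame rows-first as [top:bottom, left:right]. A small synthetic sketch (the frame and the ROI coordinates are made up for illustration):

```python
import numpy as np

def get_color_average(frame, color_id):
    # Same computation as the article's getColorAverage helper
    return frame[:, :, color_id].mean()

# Hypothetical 640x480 BGR frame with a uniform green channel
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[:, :, 1] = 120

# Hypothetical forehead rectangle as returned by getFaceROI: [left, top, right, bottom]
left, top, right, bottom = 260, 50, 380, 110

roi = frame[top:bottom, left:right]   # rows are y, columns are x
green = get_color_average(roi, 1)     # 1 = green channel in BGR order
print(green)  # 120.0
```

In the real program this average is computed once per webcam frame and appended to a time series, which is what gets plotted in the Result section.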

Drawbacks

The entire code has been uploaded to the GitHub repo below:
https://github.com/QiuZhaopeng/CV2_heartbeat_breathing_detection

This solution suffers from limitations in several respects: it is sensitive to lighting conditions, the subject must stay still, and camera noise degrades the signal. That is why I did not choose this solution for our product (in the end we chose the 6-axis sensor solution).

Result

Below is a snapshot of the program running. As one can see, a low-frequency wave shape is mixed with a high-frequency sawtooth (the small peaks).
I measured my pulse while the program was running and verified that each small peak corresponds to a heartbeat, while the large peak-to-valley transitions correspond to breathing cycles.
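Because breathing (roughly 0.1 to 0.5 Hz) and heartbeat (roughly 1 to 2 Hz at rest) occupy well-separated frequency bands, the two components can be pulled apart in the frequency domain. A hedged sketch on a synthetic ROI signal (the frequencies, amplitudes, and band limits are assumptions, not measurements from the article):

```python
import numpy as np

fps = 30.0                     # assumed webcam frame rate
t = np.arange(0, 30, 1 / fps)  # 30 seconds of samples

# Synthetic ROI average: a slow breathing wave plus a smaller, faster pulse wave
breathing_hz, pulse_hz = 0.25, 1.2
signal = 1.0 * np.sin(2 * np.pi * breathing_hz * t) \
       + 0.2 * np.sin(2 * np.pi * pulse_hz * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fps)

# Breathing: strongest component below ~0.5 Hz; pulse: strongest in 0.8-3 Hz
low = (freqs > 0.05) & (freqs < 0.5)
high = (freqs > 0.8) & (freqs < 3.0)
est_breathing = freqs[low][np.argmax(spectrum[low])]
est_pulse = freqs[high][np.argmax(spectrum[high])]
print(est_breathing, est_pulse)
```

The same band-splitting applied to the real ROI time series would give numeric breathing and pulse rates instead of reading them off the plot by eye.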

Anyone interested in this small experiment is encouraged to try it and take it further. Good luck.
