t3chflicks
🦉Zombie Detecting Smart Security Owl (Deep Learning)

In this Halloween tutorial, we’ll be showing you how we put a super spooky twist on a mundane household classic: the security camera. How?! We’ve made a night vision owl which uses image processing to track people. Oh, and it hoots, just like the real thing!

We’ve been super excited about this project and we’ve been waiting to do it ever since the new Raspberry Pi 4 dropped. It’s got 4GB RAM, which opens the door to loads of really exciting possibilities, including doing some image processing with deep learning models in real time.

If you want to keep an eye out for approaching zombies on Halloween, or just check your garden the rest of the year round, this is the one for you. Security doesn’t have to be boring to be effective!

Supplies

For this build, you will need:

Build 🛠️

Decapitate

a. Pull the head off the owl (sometimes you just have to be brutal) by tugging hard where it attaches to the spring.

b. The owl’s head connects to the body by a cylinder which sits on top of a large spring. Remove this cylinder by taking out the screw.

c. The cylinder you just removed is made of two parts, a plastic cup and a bearing which sits inside it. Remove the bearing from the cylinder using a screwdriver (or similar tool).

d. Remove the spring by unscrewing the three screws that secure it to the body.

e. Make a hole in the top of the owl’s body which is large enough to fit some wires and the camera cable. We used an inelegant combination of a drill and a screwdriver to do this.

Add Smart 🧠

a. 3D print the camera case and paint it to match the owl — we used some cheap acrylic paints. Painting isn’t a vital step, but it does dramatically improve the overall look!
Raspberry Pi Night Vision Camera Mount IR CUT by Minims — www.thingiverse.com

GoPro compatible visor mount by olivermandic — www.thingiverse.com

b. With the owl’s head upside down, screw the top of the camera case into the inside of its head, where the beak protrudes.

c. Put the camera into the case and connect the camera cable.

d. Glue the servo to the top panel of the spring.

e. Connect long wires to the servo pins (5V, GND, Signal).

f. Feed the camera cable and wires for the servo through the spring and through the hole you made in the top of the body so they are inside the owl’s hollow body.

g. Screw the neck back together and push the head on.

Fill Her Up 🤖

a. Remove the plug from the bottom of the owl and increase the size of this hole by cutting the plastic. Continue increasing the size until the Raspberry Pi and speaker can fit through into the body of the owl.

b. Once the hole is big enough for all the components to fit inside, pull the camera cable which you fed through the top of the owl out of the base and plug it into the Raspberry Pi.

c. Similarly, pull the servo wires through and plug them into the Raspberry Pi:

+5V Servo => +5V Pi
GND Servo => GND Pi
Signal Servo => Pin 12 Pi

d. Plug the USB speaker into the Pi.

e. Insert the SD card into the Pi.

f. Power the Pi using the portable power supply.

g. Insert the Pi, power supply and speaker into owl through the hole in the base.

Set Up the Pi 🥧

a. Download Raspbian and flash it to your SD card using Balena Etcher.

b. To access your Pi remotely:

  • Add an empty file called ssh to the boot partition of your SD card

  • Add a file called wpa_supplicant.conf containing your WiFi credentials:

    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1

    network={
        ssid="MySSID"
        psk="MyPassword"
    }

c. Insert the SD card into the Pi and try to access it via SSH.
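The two boot files above can be created straight from your computer's terminal. This is a minimal sketch: it uses `/tmp/boot-demo` as a stand-in path, so swap in wherever your SD card's boot partition is actually mounted (e.g. `/Volumes/boot` on macOS or `/media/$USER/boot` on Linux).

```shell
# Stand-in for the boot partition mount point — change this to your real one
BOOT="${BOOT:-/tmp/boot-demo}"
mkdir -p "$BOOT"

# An empty file named 'ssh' enables the SSH server on first boot
touch "$BOOT/ssh"

# WiFi credentials for headless setup
cat > "$BOOT/wpa_supplicant.conf" <<'EOF'
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="MySSID"
    psk="MyPassword"
}
EOF
```

Eject the card cleanly afterwards so both files are actually written out.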

Moving the Head ⚙️

Code tutorial for moving the head (controlling a servo with a raspberry pi) — github

To control the servo from the Pi, we are going to create a script that drives the GPIO pin the servo's signal wire is connected to.

a. Check the connections between the servo and the Pi:

+5V Servo => +5V Pi
GND Servo => GND Pi
Signal Servo => Pin 12 Pi

b. You must first set up the GPIO pins to use PWM on the servo's signal pin.

c. Then it is as simple as setting the duty cycle (explained here) of the signal pin: a duty cycle of 7.5 moves the servo to 90 degrees, 2.5 moves it to 0 degrees, and 12.5 moves it to 180 degrees.

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)   # use physical pin numbering
GPIO.setwarnings(False)
GPIO.setup(12, GPIO.OUT)

p = GPIO.PWM(12, 50)       # 50 Hz is the standard servo PWM frequency
p.start(7.5)               # start centred at 90 degrees

try:
    while True:
        p.ChangeDutyCycle(7.5)   # 90 degrees
        time.sleep(1)
        p.ChangeDutyCycle(2.5)   # 0 degrees
        time.sleep(1)
        p.ChangeDutyCycle(12.5)  # 180 degrees
        time.sleep(1)
except KeyboardInterrupt:
    p.stop()
    GPIO.cleanup()
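Those three duty-cycle values follow a simple linear mapping, so a tiny helper (our own addition, not part of the original script) makes the conversion explicit for any angle, assuming a standard hobby servo driven at 50 Hz:

```python
def angle_to_duty(angle):
    """Map a servo angle (0-180 degrees) to a PWM duty cycle percentage.

    Assumes a standard hobby servo at 50 Hz, where a 0.5 ms pulse
    (2.5% duty) is 0 degrees and a 2.5 ms pulse (12.5%) is 180 degrees.
    """
    if not 0 <= angle <= 180:
        raise ValueError("angle must be between 0 and 180")
    return 2.5 + (angle / 180) * 10

# e.g. p.ChangeDutyCycle(angle_to_duty(90))  # 7.5 — servo centred
```

This keeps the rest of the code talking in degrees rather than raw duty-cycle numbers.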

Making It Hoot 🔉

Code tutorial for making the owl hoot (playing audio with a Raspberry Pi) — github

a. Plug in the USB speaker.

b. Download a sound — we chose a spooky hoot.

c. Play the sound by running this command: omxplayer -o alsa:hw:1,0 owl_sound.mp3

d. If this doesn’t work, check which output your Pi is using and at what volume with the command alsamixer — you will be greeted with the mixer screen, where you can change the volume and select your media device. To increase the volume of the sound, add a volume flag: omxplayer -o alsa:hw:1,0 owl_sound.mp3 --vol 500. To play the sound from Python, have a look at our test script.

import subprocess

# Launch omxplayer as a child process so playback doesn't block the script
command = "omxplayer -o alsa:hw:1,0 ../../assets/owl_sound.mp3 --vol 500"
player = subprocess.Popen(command.split(' '),
                          stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE)
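If you'd rather not hard-code the command string, the call can be wrapped in a couple of small helpers. This is a sketch of our own (the function names are not from the original script) and it assumes omxplayer is installed and that alsa:hw:1,0 is your USB speaker:

```python
import subprocess

def build_omxplayer_cmd(path, device="alsa:hw:1,0", volume=500):
    """Build the omxplayer argument list (volume is in millibels)."""
    return ["omxplayer", "-o", device, path, "--vol", str(volume)]

def hoot(path="owl_sound.mp3"):
    """Fire-and-forget playback in a child process so the script keeps running."""
    return subprocess.Popen(build_omxplayer_cmd(path),
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
```

A `hoot()` function like this is what the body-detection code later in the post calls when it spots a person.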

Stream the Video From the Pi 🎦

Code tutorial for creating a Raspberry Pi camera stream — github

a. Run python app.py and view on your local network at http://raspberrypi.local:5000 .

b. This code was taken and slightly adapted from Miguel Grinberg's blog post — he explains nicely how it's done and his tutorials are great, so deffo check him out! The basic concept is that we use threading and generators to improve the streaming speed.
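The heart of that approach is a generator that yields JPEG frames in multipart/x-mixed-replace format, which browsers render as a live video. Here's a minimal sketch of the pattern, independent of Flask (the function name and frame source are our own stand-ins):

```python
def mjpeg_stream(frame_source):
    """Yield frames in multipart/x-mixed-replace ("MJPEG") format.

    frame_source is any iterable of JPEG-encoded bytes; in the real app
    it would be the camera thread handing over its latest frame.
    """
    for jpeg in frame_source:
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg + b"\r\n")
```

In a Flask app this generator would back the video route, roughly as `Response(mjpeg_stream(camera.frames()), mimetype="multipart/x-mixed-replace; boundary=frame")`.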

The camera works in both light and dark conditions.

Body Detection 🕵️‍♀️

Code for body detection (MobileNet SSD on a video stream with a Raspberry Pi) — github

a. Since we’re using the Raspberry Pi 4, we thought it was best to try out some deep learning models on it instead of the basic HaarCascade method we’ve been limited to so far.

b. We had a look at some of the pre-trained models out there, like YOLOv3, which looks super cool. The YOLOv3-tiny weights would have been perfect for the Pi, but we couldn't get it running :(

c. Instead, we opted for the MobileNet SSD model, which we can run using OpenCV's DNN (deep neural network) module — something we learnt from this code and from the hero of image processing tutorials, Adrian Rosebrock.

d. However, since we are trying to stream the video and run the model on every frame, the result is a laggy, fragmented stream. Learning once again from Adrian Rosebrock, we used the Python multiprocessing module to put our images into queues, where they can be processed without blocking the camera stream so heavily.

e. Try running the code yourself.

import cv2
import imutils
import numpy as np
from PIL import Image

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))
net = cv2.dnn.readNetFromCaffe('../../model/MobileNetSSD_deploy.prototxt.txt',
                               '../../model/MobileNetSSD_deploy.caffemodel')

# Grab the latest frame from the camera stream and prepare it for the model
img = Image.open(stream)
img = np.array(img)
img = imutils.resize(img, width=WIDTH, height=HEIGHT)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

(fH, fW) = img.shape[:2]
if inputQueue.empty():
    inputQueue.put(img)          # hand the frame to the detection process

# Collect any detections the worker process has finished
detections = None
if not outputQueue.empty():
    detections = outputQueue.get()
if detections is not None:
    for i in np.arange(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence < 0.1:
            continue
        idx = int(detections[0, 0, i, 1])
        dims = np.array([fW, fH, fW, fH])
        box = detections[0, 0, i, 3:7] * dims
        (startX, startY, endX, endY) = box.astype("int")
        label = f"{CLASSES[idx]}: {confidence * 100:.1f}%"
        if CLASSES[idx] == "person":
            hoot()               # play the owl sound
            cv2.rectangle(img, (startX, startY), (endX, endY),
                COLORS[idx], 2)
            pwm = find_new_position(pwm, startX, endX)  # turn head toward person
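The snippet above relies on a couple of pieces that aren't shown: the worker process feeding outputQueue, and the find_new_position() helper that steers the servo (hoot() just wraps the omxplayer subprocess call from the audio section). Here is a hedged sketch of both — the names, frame width, and step size are our assumptions, and detect_fn stands in for the MobileNet SSD forward pass:

```python
from multiprocessing import Process, Queue

def detect_worker(input_q, output_q, detect_fn):
    """Pull frames from input_q, run the model, push detections to output_q.

    detect_fn stands in for the real blobFromImage + net.forward call,
    so the expensive inference never blocks the camera stream.
    """
    while True:
        frame = input_q.get()
        if frame is None:        # sentinel value shuts the worker down
            break
        output_q.put(detect_fn(frame))

def find_new_position(pwm_duty, start_x, end_x, frame_width=400, step=0.2):
    """Nudge the servo duty cycle to keep the detected person centred.

    If the bounding box centre is left of the frame centre, decrease the
    duty cycle; if right, increase it. Clamped to the 2.5-12.5 servo range.
    """
    box_centre = (start_x + end_x) / 2
    if box_centre < frame_width / 2:
        pwm_duty -= step
    else:
        pwm_duty += step
    return max(2.5, min(12.5, pwm_duty))
```

The worker would be started once at boot with `Process(target=detect_worker, args=(inputQueue, outputQueue, run_ssd)).start()`, where `run_ssd` wraps the OpenCV DNN call.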

Sending Zombie Notifications

Code for sending a notification (python to phone) — github

a. We decided to use the https://pushed.co notification service.

b. You can get a free account, download the app, and get set up sending mobile notifications really quickly. We created the notifications using a Python script like this.

import requests 

payload = {
          "app_key": "APP_KEY",
          "app_secret": "APP_SECRET",
          "target_type": "app",
          "content": "Zombie approaching!"
          }

r = requests.post("https://api.pushed.co/1/push", data=payload)
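For anything beyond a one-off test, it helps to separate building the payload from sending it. This is a small sketch of our own (the helper name is not from the original script); the payload fields are the same ones used above, and you'd still substitute your real app key and secret:

```python
PUSHED_URL = "https://api.pushed.co/1/push"

def build_payload(content, app_key="APP_KEY", app_secret="APP_SECRET"):
    """Assemble the Pushed.co push payload used in the script above."""
    return {
        "app_key": app_key,
        "app_secret": app_secret,
        "target_type": "app",
        "content": content,
    }

# Sending it is then one line:
# requests.post(PUSHED_URL, data=build_payload("Zombie approaching!"))
```

Checking `r.status_code` on the response is a cheap way to confirm Pushed.co actually accepted the notification.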

What a Hoot!

We hope you enjoyed our Smart Security Owl project! This has been a super fun make, and I feel a whole lot safer knowing my house is being guarded by our trusty owl. Check out the YouTube video.

Thanks For Reading

I hope you have enjoyed this article. If you like the style, check out T3chFlicks.org for more tech focused educational content (YouTube, Instagram, Facebook, Twitter).
