
Pubudu Jayawardana for AWS Community Builders


How I created a doorbell with AWS Serverless

Intro

Recently, there was a hackathon at my workplace, and together with one of my colleagues, I created an intelligent doorbell using AWS Serverless services and a Raspberry Pi.

Whenever someone presses the button of the 'doorbell', it captures an image and checks it against an Amazon Rekognition faces collection to see whether the faces in the image are already indexed. It then sends a message to Slack with a scaled, timestamp-watermarked version of the image, indicating the number of people in the image and their names if they are already in the faces collection.

This post describes how we built this project and some of the lessons we learned.

Architecture

Image: Serverless Bell architecture

Image: Serverless Bell state machine

How it works

There are two main components: face indexing and face recognition.

Face Indexing

  1. We created a simple frontend with VueJS, hosted in an S3 bucket. Here, the user is asked to upload an image of a face along with the person's name.

  2. Once submitted, a Lambda function proxied via API Gateway creates a pre-signed URL, and using this pre-signed URL the image is uploaded to an S3 bucket with the person's name as a metadata value.

  3. Once the image is in the S3 bucket, a Lambda function is triggered which detects the face in the image and creates an entry in the pre-defined AWS Rekognition collection (the faces collection) with the name as the external ID, as sketched below.
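
As a rough illustration, here is a minimal sketch of what these two Lambda functions could look like in Python with boto3. The handler names, the "fullname" metadata key, and the environment variables are assumptions for illustration only, not the exact code from the repository.

    import json
    import os
    import urllib.parse

    import boto3

    s3 = boto3.client("s3")
    rekognition = boto3.client("rekognition")

    COLLECTION_ID = os.environ.get("FACES_COLLECTION_ID", "serverless-bell-faces-collection")


    def generate_presigned_url_handler(event, context):
        # Sketch of the API Gateway-backed Lambda that returns a pre-signed PUT URL.
        # The caller must send a matching x-amz-meta-fullname header when uploading,
        # since the metadata is part of the signature.
        body = json.loads(event["body"])
        url = s3.generate_presigned_url(
            "put_object",
            Params={
                "Bucket": os.environ["UPLOAD_BUCKET"],
                "Key": body["fileName"],
                "Metadata": {"fullname": body["personName"]},
            },
            ExpiresIn=300,
        )
        return {"statusCode": 200, "body": json.dumps({"uploadUrl": url})}


    def index_face_handler(event, context):
        # Sketch of the Lambda triggered by the S3 upload: read the name from the
        # object's metadata and index the face with that name as ExternalImageId.
        record = event["Records"][0]
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # The S3 event does not carry the object's metadata, so fetch it explicitly.
        head = s3.head_object(Bucket=bucket, Key=key)
        person_name = head["Metadata"].get("fullname", "unknown")

        # ExternalImageId cannot contain spaces, so store underscores instead.
        rekognition.index_faces(
            CollectionId=COLLECTION_ID,
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            ExternalImageId=person_name.replace(" ", "_"),
            MaxFaces=1,
            QualityFilter="AUTO",
        )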

Face Recognition

  1. Using a Raspberry Pi with its camera module, a solderless breadboard, and a button, we built the image-capturing part that takes a picture when the button is pressed - 'the doorbell'.

  2. When the captured image is uploaded to Amazon S3, a Lambda function is triggered to start a Step Functions execution.

  3. Within the Step Functions state machine, there are two parallel flows.

  4. One flow detects the faces in the image and searches for them in the faces collection. This function outputs the total number of faces detected and, if any faces are recognized, their names as well.

  5. The other flow resizes the image and adds a watermark with the timestamp. Lambda functions are used for all of this functionality.

  6. After both flows complete, another Lambda function is triggered to compose the message and send it to the Slack channel. A sketch of the face-search step follows this list.
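
To illustrate the face-search step, here is a minimal sketch in Python with boto3. The event shape, function name, and environment variable are assumptions; note also that search_faces_by_image only matches the largest face in the image, so the actual implementation may crop and search each detected face separately.

    import os

    import boto3

    rekognition = boto3.client("rekognition")
    COLLECTION_ID = os.environ.get("FACES_COLLECTION_ID", "serverless-bell-faces-collection")


    def search_faces_handler(event, context):
        image = {"S3Object": {"Bucket": event["bucket"], "Name": event["key"]}}

        # Count all faces present in the captured image.
        detected = rekognition.detect_faces(Image=image)
        face_count = len(detected["FaceDetails"])

        # Look the image up in the collection; matches carry the ExternalImageId
        # that was stored when the face was indexed.
        names = []
        try:
            result = rekognition.search_faces_by_image(
                CollectionId=COLLECTION_ID,
                Image=image,
                FaceMatchThreshold=90,
            )
            for match in result["FaceMatches"]:
                external_id = match["Face"].get("ExternalImageId")
                if external_id:
                    names.append(external_id.replace("_", " "))
        except rekognition.exceptions.InvalidParameterException:
            # Raised when no face can be detected in the image.
            pass

        return {"faceCount": face_count, "recognizedNames": names}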

Output

In the Slack channel, the output looks as follows:

Image: Example output

Here, my sons Wanuja and Thenuja are already indexed in the faces collection, but I am not.

Code

Complete source code can be found at: https://github.com/pubudusj/serverless-bell

How to set up

You can easily deploy the stack using the AWS SAM framework.

Prerequisites:

  • AWS SAM CLI + an AWS profile set up
  • npm (to build the frontend)
  • Slack webhook URL

Create a Slack app at https://api.slack.com/apps/. Enable 'Incoming Webhooks' and add the created webhook to your workspace, choosing a channel. This generates a webhook URL in the format https://hooks.slack.com/services/XXXX/XXXX/XXXXXX.
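
The Step Functions flow ultimately posts to this webhook. As a quick reference, posting a message is just an HTTP POST with a JSON payload; a minimal sketch (the webhook URL below is the placeholder format from above):

    import json
    import urllib.request

    WEBHOOK_URL = "https://hooks.slack.com/services/XXXX/XXXX/XXXXXX"


    def notify_slack(text):
        payload = json.dumps({"text": text}).encode("utf-8")
        request = urllib.request.Request(
            WEBHOOK_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return response.status


    notify_slack("2 people at the door: Wanuja, Thenuja")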

Deployment

  1. First, create an AWS Rekognition collection in the region where you are going to deploy the stack:

    aws rekognition create-collection \
    --collection-id serverless-bell-faces-collection
    
  2. Clone the GitHub repo. It has several directories for different purposes, as described below:

    • backend - source code to be deployed using SAM
    • face_index_frontend - source code for the face index frontend
    • testing - for local testing without a Raspberry Pi, you can use this code to test the face recognition functionality. It uploads the provided image to S3 in the same way the Pi does.
    • scripts_in_pi - a simple Python script to use on the Pi, which captures an image from the camera module and uploads it to S3.
  3. In the CLI, go to the backend directory.

  4. Run the command: sam build --use-container
    This builds the Python functions with the necessary dependencies.

  5. Then, to deploy the resources, run: sam deploy -g
    This will prompt you for the details of the stack to be created in AWS, including the stack name, region, Rekognition faces collection, and Slack webhook URL.

    Please make sure you create the stack in the same region as the Rekognition faces collection.

  6. Once the deployment is done, copy these output values, as they are required in the next steps: FaceIndexHostingS3Bucket, FaceIndexWebsiteURL, GeneratePresignedUrl, GeneratePresignedUrlForTesting, FaceDetectUploadBucketName.

  7. Now go to the face_index_frontend directory, where the face index frontend source code is located.

  8. Create a new .env file by copying .env.example. For the VUE_APP_GENERATE_URL_API variable, use the GeneratePresignedUrl output value.

  9. Run npm install to install the required modules, then run npm run build to build the project. This creates a dist directory.

  10. Then upload the contents of the dist directory to S3 to be used as an S3-hosted website. Use the value of the FaceIndexHostingS3Bucket output as the S3 bucket:

    aws s3 cp dist s3://[BucketName] --recursive
    
  11. Now you will be able to access the face index website using the FaceIndexWebsiteURL output value.

  12. Upload a face image with a name, and you will see the face indexed in the faces collection:
    aws rekognition list-faces --collection-id "serverless-bell-faces-collection"

Within the Raspberry Pi

  1. Set up the Raspberry Pi with the camera module and an AWS profile.

  2. Use the example script in the scripts_in_pi directory to capture an image and upload it to S3 (a minimal sketch of such a script follows this list).

    Replace bucket-name with the FaceDetectUploadBucketName output value.
    Use the relevant gpiozero Button pin number as per your setup.

  3. Once an image is captured, you will see the message in your Slack channel.
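
For reference, here is a minimal sketch of such a Pi script, assuming the picamera and gpiozero libraries with a button on GPIO pin 4. The pin number and bucket name are placeholders; adapt them to your setup and see the actual script in scripts_in_pi.

    from datetime import datetime
    from signal import pause

    import boto3
    from gpiozero import Button
    from picamera import PiCamera

    BUCKET = "your-FaceDetectUploadBucketName"  # placeholder

    s3 = boto3.client("s3")
    camera = PiCamera()
    button = Button(4)  # placeholder GPIO pin


    def capture_and_upload():
        # Capture a photo and upload it to the face-detect bucket.
        filename = "/tmp/{}.jpg".format(datetime.now().strftime("%Y%m%d-%H%M%S"))
        camera.capture(filename)
        s3.upload_file(filename, BUCKET, filename.split("/")[-1])


    button.when_pressed = capture_and_upload
    pause()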

Local testing without a Raspberry Pi

  1. Go to the testing directory.

  2. Create a new .env file by copying .env.example. For the VUE_APP_GENERATE_URL_API variable, use the GeneratePresignedUrlForTesting output value.

  3. Run npm install and then npm run serve.

  4. With the provided URL, you can access the frontend to upload images for face detection.

  5. Once an image is uploaded, you can see the message in your Slack channel.

Some lessons learnt

  1. In a Rekognition faces collection, ExternalImageId only allows alphanumeric characters and a few separators, but not spaces. So, to store names made up of multiple parts with spaces in between, we had to replace the spaces with underscores when indexing and do the reverse when retrieving.

  2. When a Lambda function is triggered by an S3 file upload, it does not receive the metadata of the uploaded file in the event. So, to retrieve the file's metadata, the function needs to read the object again.

  3. In SAM, it is not possible to use automatically generated S3 bucket names in the policy objects of a function (reference).
    Because of this, we had to build the S3 bucket names ourselves instead of using SAM-generated buckets with random names, as described there.
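
To make the first two lessons concrete, here is a small sketch; the helper names are mine, not from the repository:

    import boto3

    s3 = boto3.client("s3")


    def to_external_image_id(name):
        # "Pubudu Jayawardana" -> "Pubudu_Jayawardana"
        return name.strip().replace(" ", "_")


    def from_external_image_id(external_id):
        # "Pubudu_Jayawardana" -> "Pubudu Jayawardana"
        return external_id.replace("_", " ")


    def read_upload_metadata(bucket, key):
        # The S3 event that invokes the Lambda carries only bucket/key details,
        # so the object's metadata has to be fetched explicitly.
        return s3.head_object(Bucket=bucket, Key=key)["Metadata"]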

Possible improvements

  1. Implement authentication for the face index frontend, APIs, and Lambda functions.
  2. Handle failure scenarios in the Step Functions execution.
  3. Process the EXIF orientation data of an uploaded image to get the correct orientation (a quick sketch of this follows).
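
For the third point, Pillow can normalize the orientation before resizing. A minimal sketch of that idea (not part of the current stack):

    from PIL import Image, ImageOps


    def normalize_orientation(path):
        # Rotate/flip the image according to its EXIF Orientation tag, so the
        # resized and watermarked output is the right way up.
        with Image.open(path) as img:
            return ImageOps.exif_transpose(img)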

Please feel free to try this and let me know your thoughts.

Top comments (16)

Sheen Brisals

Nicely written with all the details. Possibly, with a bit of IoT, you could extend it to automatically unlock the door for the known faces! :-)

Remember reading a similar theme on Azure a few years ago.

Pubudu Jayawardana

Thanks for the feedback & suggestion @sheenbrisals
Yeah, with IoT enabled on the Pi, there are great features that could be enabled :)

crazyRubix

Awesome. Will try it.

Mujahid Bappai

Hacks like these make me happy to be in this industry, knowing I will one day create something as badass as this. Thanks a lot man! Really inspiring.

Pubudu Jayawardana

Thanks for the kind words @mujeex . Keep building, keep sharing knowledge!

Amit Kumar Sahu

Interesting.

Mohamed Anwar

Awesome, do you know the average latency of the request for AWS Rekognition?

How many ms does it take to compare the faces or to get the identity of faces in a picture?

Pubudu Jayawardana

Thanks Mohamed. Face recognition with AWS Rekognition is quite fast and you normally get the results with single-digit latency. For the whole lifecycle from image upload to Slack notification, it normally completes within a second, or a couple of seconds at most. However, if a Lambda cold start is in play, it will naturally increase by 15-20 milliseconds.

cbloss

This is super crafty! Well done!

Andrew

Nice work, some well-thought-out architecture there, and creativity. Keen to give it a go.

🚩 Atul Prajapati 🇮🇳

Nice invention

Andy Zhu

Awesome! Very inspiring, I would love to try it.

Could you please let me know what version of the Raspberry Pi you used and where I can get the camera module? Thank you.

Pubudu Jayawardana

Thanks @andyoverlord.
I used a Raspberry Pi 3 Model B+.
And I ordered the camera module from Amazon. There are a lot of options, and I bought mine for around USD 20.

Mohammed Nabeel

Wonderful. Would love to give it a shot. Thanks.
