
Build a robot that plays hide and seek (Raspberry Pi + AI)

Building a robot from scratch can be an intimidating task. We nevertheless accepted the challenge and built a robot you can play hide and seek with. The project has three key elements: a frontend for players, a backend for the game logic, and the robot itself. In this article we mostly cover the robot's hardware and software, and explain at a high level how we made it work together with the backend and frontend.

The game.

The idea behind the game is to play hide and seek with a robot. Using a web application, a game master can start a new game which other players can join. A robot in the area will then autonomously join the game too. It's the robot that has to find all the players in order to win. With the help of AI, sensors and a camera, the robot navigates through the room looking for players. If it doesn't manage to find all the players within a certain time, the players win. Cool, right? Now let's dig into the bits and bytes of how we made this work.

Victor the robot.

Please meet Victor, our three-wheeled robot, which we'll tell you more about below.

Victor robot

Hardware

To build the robot, we used:

  • a Raspberry Pi
  • the CamJam EduKit (motor board, motors, wheels and an ultrasonic distance sensor)
  • a camera module for the Pi

Thanks to the CamJam EduKit, building the robot was a pretty easy task. It took us a couple of hours to put all the parts together.

We made sure the camera is tilted upwards so the robot won't have a hard time detecting and recognizing humans.

Software

Once the robot was put together, we moved on to the next step: writing its software.

We run Python code on our Pi that takes care of various things:

  • Human detection
  • Facial recognition
  • Autonomously driving in a space
  • Communicating with the game’s API service
  • Orchestrating all the different tasks

Human detection (mobilenet-ssd model)

We struggled for a long time to find a fast and accurate human detection model that works well on our Pi, which has limited computing power.

After trying out lots of different models, we decided to use the pre-trained MobileNet-SSD model, which is intended for real-time object detection. One reason we chose it is that it gives good detection accuracy while being faster than alternatives such as YOLO, especially when detecting humans in real time on low-powered devices like ours.

Under the hood we also used the open-source library OpenCV, which we need to capture and process the camera's output.
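To give an idea of what this looks like in code, here is a minimal sketch of person detection with MobileNet-SSD through OpenCV's dnn module. The model file names and the person class id (15 in the VOC label map) refer to the commonly used pre-trained Caffe version of MobileNet-SSD, so treat them as assumptions rather than our exact setup.

import cv2

# Load the pre-trained Caffe MobileNet-SSD model (file names are assumptions).
net = cv2.dnn.readNetFromCaffe('MobileNetSSD_deploy.prototxt', 'MobileNetSSD_deploy.caffemodel')
PERSON_CLASS_ID = 15  # 'person' in the VOC label map used by this model

def detect_humans(frame, min_confidence=0.5):
    height, width = frame.shape[:2]
    # The network expects 300x300 inputs, scaled and mean-subtracted.
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    boxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        class_id = int(detections[0, 0, i, 1])
        if class_id == PERSON_CLASS_ID and confidence > min_confidence:
            # Bounding boxes come back as relative coordinates; scale them to pixels.
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [width, height, width, height]).astype(int)
            boxes.append((x1, y1, x2, y2, float(confidence)))
    return boxes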

Facial recognition

The robot should be able to recognize faces. To make this possible, we used the well-known face-recognition Python library.

Example of face recognition
Source: face-recognition documentation

The library can recognize and manipulate faces from Python using dlib's state-of-the-art face recognition built with deep learning. It's also lightweight, which is good for our Pi. Finally, it achieves very good accuracy (99.38% on the LFW benchmark). That's exactly what we were looking for in a face recognition model.
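As a rough illustration of how the library is used (the image file names and player name are hypothetical):

import face_recognition

# Encode a reference photo of a player (uploaded via the web app).
known_image = face_recognition.load_image_file('player_alice.jpg')
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode whatever faces the robot's camera currently sees.
frame = face_recognition.load_image_file('camera_frame.jpg')
unknown_encodings = face_recognition.face_encodings(frame)

# Compare every face in the frame against the known player.
for encoding in unknown_encodings:
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print('Found Alice!')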

Autonomous driving (ultrasonic distance sensor)

To make autonomous driving possible, we used the Python library gpiozero. This library provides easy commands to steer the CamJam robot and read the distance sensor.

While driving, the robot avoids possible obstacles by using the ultrasonic distance sensor.

😵‍💫 Ultrasonic distance what?!
An ultrasonic distance sensor sends out pulses of ultrasound and detects the echo that is sent back when the sound bounces off a nearby object. It then uses the speed of sound to calculate the distance from the object.
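To show how little code gpiozero needs, here is a minimal sketch of the robot and sensor setup together with a simple obstacle-avoidance loop. The GPIO pins, speed and threshold are example values, not necessarily the ones we used.

import time
from gpiozero import CamJamKitRobot, DistanceSensor

# gpiozero ships with a ready-made class for the CamJam EduKit robot.
robot = CamJamKitRobot()
# Example GPIO pins; use the pins your sensor is actually wired to.
sensor = DistanceSensor(echo=18, trigger=17)

speed = 0.4
distance_threshold_obstacle = 20  # centimetres

def drive_and_avoid_obstacles():
    while True:
        if sensor.distance * 100 < distance_threshold_obstacle:
            # Obstacle ahead: back up a little and turn before continuing.
            robot.backward(speed)
            time.sleep(0.3)
            robot.right(speed)
            time.sleep(0.4)
        else:
            robot.forward(speed)
        time.sleep(0.1)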

When the camera detects a person, a more precise steering mechanism takes over and makes the robot drive directly towards the detected person. To make this work, we implemented an algorithm that calculates how many degrees the robot should turn to get the detected person into the center of its sight. Like this, the robot can drive and turn autonomously through a room.
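A simplified version of that idea: take the bounding box from the detector, measure how far the person's centre is from the centre of the frame, and map that pixel offset to degrees using the camera's horizontal field of view. The field-of-view value and frame width below are assumptions for illustration.

CAMERA_FOV_DEGREES = 62   # horizontal field of view, depends on the camera
FRAME_WIDTH = 640         # width of the processed camera frames in pixels

def degrees_to_turn(person_box):
    """How many degrees to turn so the person ends up in the centre of sight.
    Negative means turn left, positive means turn right."""
    x1, _, x2, _ = person_box[:4]
    person_centre_x = (x1 + x2) / 2
    offset = person_centre_x - FRAME_WIDTH / 2            # pixels off-centre
    return (offset / FRAME_WIDTH) * CAMERA_FOV_DEGREES    # pixels -> degrees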

Here is an example of how we used the ultrasonic distance sensor to drive towards a human:

# robot, sensor, speed and distance_threshold_human are set up elsewhere (see above).

def is_not_at_human():
    # sensor.distance is in metres; convert to centimetres.
    distance = sensor.distance * 100
    return distance > distance_threshold_human

def approach_human():
    logging.info('Approaching human.')

    # Keep driving forward until the distance sensor says we are close enough.
    while is_not_at_human():
        robot.forward(speed)
        time.sleep(0.1)

    logging.info('Human reached.')
    robot.stop()

Communication with game API

Communication with the API is important to make sure the robot plays the game correctly, but first the robot needs to connect to an open game.

To make sure the robot can play along, we set up communication between the robot and the backend service through an API. When the robot is switched on, it starts polling: it keeps checking whether there is an open game in its vicinity.

💡 Ehm, what’s polling?
The simplest way to get new information from the server is periodic polling. This means sending regular requests to the server: “Hey there, it’s Victor the robot here, do you have anything new for me?”. For example, once every 10 seconds.

When a game is found, the robot keeps polling to retrieve player information and to check whether the game has started. Once it has, the robot stops polling and starts hunting the players.
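A rough sketch of such a polling loop (the endpoint URLs and JSON fields are hypothetical, not the exact API of our backend):

import time
import requests

API_BASE = 'https://hide-and-seek-api.example.com'  # hypothetical base URL

def wait_for_open_game(poll_interval=10):
    """Poll the backend until an open game is available, then return it."""
    while True:
        games = requests.get(f'{API_BASE}/games?status=open').json()
        if games:
            return games[0]
        time.sleep(poll_interval)

def wait_for_game_start(game_id, poll_interval=10):
    """Keep polling the joined game until it has started."""
    while True:
        game = requests.get(f'{API_BASE}/games/{game_id}').json()
        if game['status'] == 'STARTED':
            return game
        time.sleep(poll_interval)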

When a player is found, the robot sends this information to the API. When all the players have been found, or the seeking time is over, the robot disconnects from the game and starts looking for another game to join.
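Continuing the sketch above, reporting a found player can then be as simple as one extra call to the same (hypothetical) API:

def report_player_found(game_id, player_id):
    # Tell the backend this player has been found by the robot.
    requests.post(f'{API_BASE}/games/{game_id}/players/{player_id}/found')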

Orchestrating all the different tasks with threading

One of the biggest challenges was to orchestrate all the different tasks of the robot in a proper way. The robot’s tasks are:

  • Driving with the distance sensor
  • Calculating how to follow a human
  • Human detection
  • Facial recognition

To do this, we used Python's threading capabilities. Each thread starts executing its task once a certain event is fired. For example, if a human is detected (event), another thread executes the code to approach the human. Once the human has been approached (event), yet another thread performs its actions, and so on.
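Here is a simplified sketch of how such event-driven threads can be wired together with Python's threading module. The task bodies are placeholders (prints and a timer stand in for the real detection, approach and recognition code described above).

import threading
import time

human_detected = threading.Event()
human_reached = threading.Event()

def detection_task():
    # Stand-in for the MobileNet-SSD loop: fire the event when a person is seen.
    while True:
        time.sleep(1)
        human_detected.set()

def approach_task():
    while True:
        human_detected.wait()            # block until the detection thread fires the event
        print('Approaching human...')    # in the real robot: drive towards the person
        human_detected.clear()
        human_reached.set()

def recognition_task():
    while True:
        human_reached.wait()             # only run face recognition once we are close
        print('Running face recognition...')
        human_reached.clear()

for task in (detection_task, approach_task, recognition_task):
    threading.Thread(target=task, daemon=True).start()
# In the real program the main thread keeps running the overall game loop.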

Short overview of the flows:
Flowcharts

A user-friendly web app with React.

Players need a web interface to interact with the game. Therefore we built a web app on which players can start a game, join a game, follow the game's progress and so on.

When joining a game, the player has to provide a name and up to six photos. These photos will then be used for the robot’s facial recognition.

The app is built with React and hosted on Firebase. It continuously uses the backend API to fetch information about the games and players. For a user-friendly UI, we chose to work with the well-known MUI design framework for React. All this together resulted in an easy-to-use, fast and reliable frontend for players.

Mockup 1

Mockup 2

Building the API with Java Spring Boot.

The robot and the frontend need to retrieve and manipulate data about the game somehow. To make this possible, we made a simple REST API with Java Spring Boot.

The backend’s main responsibility is to store data provided by users and make sure the robot can retrieve it. To do this, the backend makes use of a Firestore database.

Another key thing the backend does is handle incoming events. These events include creating, starting and ending a game, and a player being found by the robot.

To make the backend (API) available to the clients, we packaged it in a Docker container and deployed it on Google Cloud Run with CI/CD.

That's about it.

Congrats if you made it all the way here. While we're already playing hide and seek with Victor, we hope you manage to build a cute and smart sibling for him too.

Credits for the R&D and the article:
Thijs Hoppenbrouwers
Joris Rombauts
Nathan Tetroashvili

This project was commissioned by KdG University College.

Thank you to our mentors at KdG (Geert De Paepe, Toni Mini) for guiding us through this project.
