
Jorge Martin

Blog Robótica de Servicios

Index

P1: Localized Vacuum Cleaner

Objective

The objective was to program a high-end robotic vacuum cleaner to clean a house efficiently. To achieve this, a BSA (Backtracking Spiral Algorithm) coverage algorithm is used and, taking advantage of the fact that it is a high-end vacuum cleaner, a self-localization algorithm is also employed.

Procedure

1. Coordinate conversion

The robot is simulated in Gazebo’s 3D environment, and I need to convert its 3D world coordinates into 2D pixel coordinates. For this purpose, the following formula is applied:
Formula
However, since the goal is a 2D transformation, the z-coordinate can be discarded, and as the angular component remains unchanged, the formula can be reduced to:
Formula simple
To determine the scale, I recorded several pixel positions and their corresponding simulator coordinates. This was done roughly by eye, so multiple points were used to reduce error. The calculated scale is 101.5, with the x-scale equal in magnitude but negative compared to the y-scale, reflecting the axis inversion between pixel and simulator coordinates.
Scale
Using the reference pixel position (0, 0), which corresponds to world coordinates (5.6, −4.17), the translations were calculated as the differences between pixel and simulator coordinates, resulting in t_x = −5.6 and t_y = 4.17.
TXTY
Now that we have calculated everything, the transformation matrix can be expressed as follows:
Matrix
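
For reference, here is a minimal sketch of how that matrix can be applied, assuming the x axis is the one that carries the negative scale (the matrix layout and the function name are mine, not the exact code from the project):

import numpy as np

# Scale and translations from the calibration above
S = 101.5               # pixels per metre
tx, ty = -5.6, 4.17     # chosen so that world (5.6, -4.17) maps to pixel (0, 0)

# Homogeneous 2D transform: [px, py, 1]^T = M @ [x, y, 1]^T
M = np.array([[-S, 0.0, -S * tx],
              [0.0,  S,  S * ty],
              [0.0, 0.0,    1.0]])

def world_to_pixel(x, y):
    px, py, _ = M @ np.array([x, y, 1.0])
    return int(round(px)), int(round(py))

print(world_to_pixel(5.6, -4.17))   # -> (0, 0), the reference pixel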

2. Grid map

To divide the map into cells, the first step is to determine their size. Since the robot measures 35x35 pixels, the cells should be slightly larger without being excessive. Therefore, I decided to make each cell 37x37 pixels.

Next, the obstacles are expanded so that if the percentage of black pixels within a cell exceeds 12%, that cell is assigned a value of 0, corresponding to an obstacle. Otherwise, it is assigned a value of 2, representing a dirty cell.

# Cells values
CELL_COLORS = {
    0: 0,    # Obstacle: black
    1: 131,  # Clean: green
    2: 129,  # Dirty: orange
    3: 132,  # Return: blue
    4: 134,  # Critic: violet
    5: 130   # Plan: yellow
}

Each cell has a value between 0 and 5, depending on the type of cell it represents.
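
As an illustration, a rough sketch of how the grid can be built from the map image under these rules (the function and variable names are mine):

import numpy as np

CELL_SIZE = 37          # cell size in pixels, as chosen above
OBSTACLE_RATIO = 0.12   # fraction of black pixels that turns a cell into an obstacle

def build_grid(map_img):
    # map_img: grayscale map where 0 is a black (obstacle) pixel
    rows = map_img.shape[0] // CELL_SIZE
    cols = map_img.shape[1] // CELL_SIZE
    grid = np.full((rows, cols), 2, dtype=int)      # everything starts as dirty (2)
    for r in range(rows):
        for c in range(cols):
            block = map_img[r * CELL_SIZE:(r + 1) * CELL_SIZE,
                            c * CELL_SIZE:(c + 1) * CELL_SIZE]
            if np.mean(block == 0) > OBSTACLE_RATIO:
                grid[r, c] = 0                      # obstacle
    return grid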

3. Return and critical points

Among the different types of cells, two are essential: critical points and return points.

Critical points are determined based on the immediate neighbors of a cell (N, S, E, W). If all of its neighboring cells are obstacles and/or already cleaned cells, the cell becomes a critical point.

Critic

Return points are checkpoints where the vacuum can return once it reaches a critical point. These are generated in all immediate neighboring cells (N, S, E, W) that are neither obstacles nor already cleaned cells.

Return
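
A possible sketch of both checks, using the cell values from section 2 (the helper names are mine):

def neighbors(grid, r, c):
    # In-bounds N, S, E, W neighbours of a cell
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(nr, nc) for nr, nc in candidates
            if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]]

def is_critical_point(grid, r, c):
    # Critical: every neighbour is an obstacle (0) or already cleaned (1)
    return all(grid[nr, nc] in (0, 1) for nr, nc in neighbors(grid, r, c))

def return_points_around(grid, r, c):
    # Return points: neighbours that are neither obstacles nor already cleaned
    return [(nr, nc) for nr, nc in neighbors(grid, r, c)
            if grid[nr, nc] not in (0, 1)]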

4. Route planning

To plan the route, the BSA algorithm is used, where initially the entire map consists of obstacles and dirty cells. As the robot moves, cells are marked as cleaned, while critical points and return points are identified. The algorithm enables the robot to systematically traverse all accessible areas, avoiding obstacles and optimizing the path to cover the maximum space without unnecessary movements.

Plan

During the route planning phase, the algorithm evaluates the neighboring cells of the current position in a fixed order: North, East, South, and West. The first neighboring cell that is neither an obstacle nor already planned is selected as the next cell to visit.
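
Sketched as code, that selection rule might look like this (the row/column offsets assume the usual image convention of rows growing downwards, which is my assumption):

# Fixed priority: North, East, South, West as (row, col) offsets
DIRECTIONS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def next_cell(grid, r, c):
    for dr, dc in DIRECTIONS:
        nr, nc = r + dr, c + dc
        if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                and grid[nr, nc] not in (0, 5)):    # skip obstacles and already planned cells
            return nr, nc
    return None                                     # no free neighbour: critical point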

When the robot reaches a critical point, it cleans it and selects the return point based on the smallest Manhattan distance.

# Manhattan distance from the current cell (r, c) to each return point
for rc in return_points:
    dist = abs(rc[0] - r) + abs(rc[1] - c)

Once the return point is selected, the robot uses a Breadth-First Search (BFS) algorithm to determine the shortest path to that point. BFS explores the neighboring cells systematically, avoiding obstacles and already cleaned cells, ensuring that the robot reaches the return point efficiently while covering all accessible areas.

BFS
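
A standard grid BFS along those lines looks roughly like this (a generic sketch, not necessarily the exact implementation used in the project):

from collections import deque

def bfs_path(grid, start, goal):
    # Shortest path from start to goal, expanding only non-obstacle cells
    queue = deque([start])
    parents = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]                       # start -> goal
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                    and grid[nr, nc] != 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                                     # goal unreachable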

5. Movement

During the planning stage, the program stores in an array the positions of the centers of the cells in the order they should be visited. This ensures that the robot stays as centered as possible while moving through each cell, guaranteeing full cleaning coverage and reducing the risk of collisions with obstacles.
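
For example, the centre pixel of a cell can be derived directly from its row/column index and the 37-pixel cell size (sketch, helper name is mine):

CELL_SIZE = 37

def cell_center(r, c):
    # Pixel coordinates of the centre of grid cell (r, c)
    u = c * CELL_SIZE + CELL_SIZE // 2
    v = r * CELL_SIZE + CELL_SIZE // 2
    return u, v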

Once the robot has identified the next cell center to visit, it must decide which direction to move in. The robot can move in four possible directions — North, South, East, and West — and the decision is based on the difference between the robot’s current position and the position of the target cell center.

if abs(diff_u) > abs(diff_v):
    if diff_u < 0:
        direction = "East"
    else:
        direction = "West"
else:
    if diff_v < 0:
        direction = "South"
    else:
        direction = "North"

To rotate the robot, the system uses the yaw angle. Knowing the ideal yaw value for each direction, the robot turns until it reaches the angle corresponding to the direction it needs to follow.

Angles

if direction == "North":
        target_yaw = -np.pi/2
    elif direction == "South":
        target_yaw = np.pi/2
    elif direction == "East":
        target_yaw = np.pi
    elif direction == "West":
        target_yaw = 0.0
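
A rough sketch of that rotation, assuming the exercise's HAL exposes getPose3d() and setW() as in the standard vacuum template (the gain and tolerance values are illustrative):

import numpy as np

def rotate_to(target_yaw, kw=0.5, tolerance=0.05):
    # Turn in place until the yaw error is small enough
    while True:
        yaw = HAL.getPose3d().yaw
        error = np.arctan2(np.sin(target_yaw - yaw), np.cos(target_yaw - yaw))
        if abs(error) < tolerance:
            break
        HAL.setW(kw * error)
    HAL.setW(0.0)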

Challenges

  • One of the main challenges I faced throughout the project was the coordinate transformations from 3D to 2D. I had to identify multiple reference points to minimize errors, determine the correct scale, and construct the transformation matrix accurately.

  • Another challenge was planning the return route from a critical point to a return point, as I had to implement a BFS algorithm to determine the optimal path.

Functionality


(In case the video doesn't play, try this link: https://youtu.be/wA_xdow-XGE)

P2: Rescue People

Objective

The objective of this exercise was to program the behavior of a drone that must go to an area where people are floating in the sea, recognize the faces of those people, and record their coordinates so that rescue services can save them.

Procedure

1. Coordinate conversion

The first step is to convert the GPS coordinates to local coordinates. To do this, I first transform the GPS data into UTM coordinates.

The next step is to interpret the UTM coordinates to obtain the global positions of both the base and the rescue area. Once this is done, these coordinates are converted into the local reference frame, assuming that the base is located at (0, 0) in the local system.

Coords
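
As a sketch, using the utm package (the coordinate values below are made up for illustration; the real ones come from the exercise's GPS data):

import utm

BASE_GPS = (40.2798, -3.8178)        # hypothetical base latitude/longitude
RESCUE_GPS = (40.2800, -3.8173)      # hypothetical rescue-area latitude/longitude

base_e, base_n, _, _ = utm.from_latlon(*BASE_GPS)

def gps_to_local(lat, lon):
    # Local frame: metres East/North with the base at (0, 0)
    e, n, _, _ = utm.from_latlon(lat, lon)
    return e - base_e, n - base_n

print(gps_to_local(*RESCUE_GPS))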

2. Camera FOV

The next fundamental step was to determine the drone camera’s FOV, since it is needed to calculate the spacing between the route points and, later, to determine the local coordinates of detected faces.

I determined the FOV by using the drone’s starting base as a reference and moving the drone a certain distance at a fixed height of 5 m until the base was no longer visible.
Drone FOV
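
The underlying geometry, written out (the offset distance below is a placeholder for the value measured in the simulator):

import math

h = 5.0   # flight height in metres, as described above
d = 3.0   # hypothetical horizontal offset at which the base left the image

fov = 2 * math.atan(d / h)              # full field of view, in radians
footprint = 2 * h * math.tan(fov / 2)   # ground width covered at height h (== 2 * d)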

3. Generate route

Knowing the coordinates of the rescue area, I defined a search zone. Considering the area covered by the drone’s camera, I divided this zone into points for the drone to follow, ensuring complete coverage of the search area. To avoid gaps, I set the spacing between points to 0.8 times the drone’s camera range, reducing the risk of leaving any areas unscanned.

Route
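
One common way to lay out such points is a back-and-forth sweep; here is a sketch under that assumption (the bounds and footprint are placeholders, and the function name is mine):

def generate_route(x_min, x_max, y_min, y_max, footprint):
    step = 0.8 * footprint          # 0.8x the camera coverage, as explained above
    points = []
    y, flip = y_min, False
    while y <= y_max:
        row = []
        x = x_min
        while x <= x_max:
            row.append((x, y))
            x += step
        points.extend(reversed(row) if flip else row)   # alternate sweep direction
        flip = not flip
        y += step
    return points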

4. FSM Behaviour

FSM
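
In outline, the state machine can be sketched like this; the state names and the reached(), detect_faces() and go_to_next_point() helpers are mine, while zone_coords and scan_points are the variables used in sections 4.1 and 4.3:

state = "GO_TO_ZONE"

while state != "DONE":
    if state == "GO_TO_ZONE":
        HAL.set_cmd_pos(zone_coords[0], zone_coords[1], zone_coords[2], 0)
        if reached(zone_coords):
            state = "SCANNING"
    elif state == "SCANNING":
        detect_faces()                    # section 4.2
        if scan_points:
            go_to_next_point()            # section 4.3
        else:
            state = "RETURN_TO_BASE"
    elif state == "RETURN_TO_BASE":
        HAL.set_cmd_pos(0, 0, 1.5, 0)     # section 4.4
        if reached((0, 0, 1.5)):
            HAL.land()
            state = "DONE"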

4.1 Reach the rescue zone

Knowing the position of the rescue area, position-based control is the simplest way to move the drone to that location.

zone_coords = (40, -30, 5)
HAL.takeoff(5)
HAL.set_cmd_pos(zone_coords[0], zone_coords[1], zone_coords[2], 0)

4.2 Face detection

To detect people’s faces, I used the drone’s camera along with the Haar Cascade classifier. Since the faces appear relatively small due to the drone’s altitude, it was necessary to adjust the detection parameters. Specifically, I modified the minimum size and the number of neighbors.

faces = face_cascade.detectMultiScale(
    gray, scaleFactor=1.1, minNeighbors=4,
    minSize=(20, 20), maxSize=(100, 100)
)

To avoid capturing duplicate faces, I store the local coordinates of detected faces. If the distance between a new detection and an existing one is below a threshold, I assume it’s the same face and discard it to prevent duplicates.

for x, y in coords:
    distance = math.sqrt((x - x_new)**2 + (y - y_new)**2)
    if distance < coord_threshold:
        return True
return False

4.3 Select next point

Since the route points are stored in an array, moving to the next point is straightforward: simply pop the next point from the array and use position-based control to reach it.

if scan_points:
    current_scan_point = scan_points.pop(0)
    HAL.set_cmd_pos(current_scan_point[0],
                    current_scan_point[1],
                    5,
                    0)

4.4 Return to base

To return to the base, I again use position-based control, since the base’s position is known.

HAL.set_cmd_pos(0, 0, 1.5, 0)
HAL.land()

Challenges

  • The only difficulty I faced was the multiple detection of the same face due to the detectMultiScale parameters. I solved it by applying a coordinate filter to remove duplicate detections.

Functionality


(In case the video doesn't play, try this link: https://youtu.be/-RhBi4Nvp_I)
