
The journey towards creating a Basketball mini-map

One specific goal of the open source basketball analytics machine learning project is to provide a mini-map of the players: basically a top-down view of the court with the players represented as coloured circles.

(Image: 2D coloured mini-map of the court)

Eventually we could also draw the players' movements on the 2D view to detect patterns of basketball plays.

Let's have a closer look at how this can be accomplished using Python, OpenCV and machine learning libraries.

BTW, suggestions and comments are always very welcome to improve this open source project. I've also included a fully working tutorial for this article so you can experiment with the provided code (link in the footer of the article).

Where to place the camera?

I've done two camera experiments, one where the camera is positioned in the corner and another where it's placed in the middle (as shown in the pictures below).

(Image: the two camera positions on the court)

The best result for the image transformations was achieved when the camera was positioned in the middle.

(Image: 3D view from the camera in the middle)

Read also my article on how to record a basketball game on a budget

Identify the players with Detectron2

With Mask R-CNN models you can easily identify objects in an image. #Yolo

I played with Yolo last week but wanted to experiment with Detectron2 (powered by PyTorch). This is an open source project from Facebook that implements state-of-the-art object detection algorithms. It's amazing what it can detect, so let's have a closer look.

(Image: Detectron2 object detections on the court)

Funnily enough, the model thinks that the right basketball hoop is a TV with 56% probability. It also correctly found a chair with 61% probability.

We'll need to filter the detections down to persons and work only with the players that are on the court. The picture used has all the players grouped together because it's the start of the game; as a result, only 8 out of the 10 players were found.

The COCO Panoptic Segmentation model detects the ceiling, walls and floor and colours them accordingly. This will be very interesting input for the court detection, because we can then limit the "search" to the floor polygon.

(Image: COCO panoptic segmentation of the scene)
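For the curious, a minimal sketch of running the panoptic model with Detectron2's model zoo (not wired into the pipeline yet; the frame path is just a placeholder):

```python
# Minimal sketch: run the COCO panoptic segmentation model from the
# Detectron2 model zoo (not part of the current pipeline yet).
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
predictor = DefaultPredictor(cfg)

im = cv2.imread('./frame.jpg')                 # placeholder frame path
panoptic_seg, segments_info = predictor(im)["panoptic_seg"]

# segments_info lists the category of each segment (walls, ceiling, floor, ...);
# the floor segment is what could eventually drive the automatic court polygon.
```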

Detectron2 also supports Human Pose Estimation which we'll use in the future to classify basketball actions of players.

(Image: human pose estimation result)

Retrieving the position of each player is accomplished using the following Python code.

Calling the DefaultPredictor on an image returns a list of rectangle coordinates (pred_boxes) for each identified object. The object classes are stored in pred_classes, where person objects are marked as class 0.

(Image: player position extraction code)
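For reference, a minimal sketch of what that prediction step looks like (assuming the standard COCO Mask R-CNN config from the Detectron2 model zoo; the frame path is just a placeholder, not the exact notebook code):

```python
# Minimal sketch: run Detectron2's DefaultPredictor on a single frame and
# grab the bounding boxes / classes used in the rest of this article.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5   # keep detections above 50%

predictor = DefaultPredictor(cfg)

im = cv2.imread('./frame.jpg')          # placeholder frame path
outputs = predictor(im)

instances = outputs["instances"].to("cpu")
pred_boxes = instances.pred_boxes       # one (x1, y1, x2, y2) box per object
pred_classes = instances.pred_classes   # COCO class ids, person == 0
```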

Because the automatic court detection is not yet ready, I had to provide the polygon coordinates of the court manually.

import numpy as np

src_pts = np.array([
    [1, 258],       # left bottom - bottom corner
    [400, 308],     # middle bottom corner
    [798, 280],     # right bottom - bottom corner
    [798, 220],     # right bottom - top corner
    [612, 176],     # top right corner
    [186, 168],     # top left corner
    [3, 201]        # left bottom - top corner
    ])

Drawing this polygon onto the image allowed me to debug my court coordinates and adjust them when needed.

(Image: court polygon overlaid on the frame)
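Overlaying the polygon can be done with cv2.polylines; a minimal sketch, assuming im is the camera frame and src_pts the polygon defined above:

```python
# Minimal sketch: draw the court polygon on a copy of the frame to
# visually check (and adjust) the manually provided points.
import numpy as np
import cv2
from google.colab.patches import cv2_imshow   # running in a Colab notebook

debug_img = im.copy()
pts = src_pts.reshape((-1, 1, 2)).astype(np.int32)
cv2.polylines(debug_img, [pts], isClosed=True, color=(0, 255, 255), thickness=2)
cv2_imshow(debug_img)
```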

Representing a player on the court

We will draw a blue circle for each player by iterating over the predicted coordinates of the detected objects (boxes). We should only include Person objects that are positioned within the court polygon, using the Point(player_pos).within(court) check.

# Use the boxes info from the tensor prediction result
#
# x1,y1 ------
# |          |
# |          |
# |          |
# --------x2,y2
#

from shapely.geometry import Point, Polygon

color = [255, 0, 0]   # BLUE (OpenCV uses BGR)
thickness = 2
radius = 2

# The court polygon only needs to be built once
court = Polygon(src_pts)

for i, box in enumerate(pred_boxes):

  # Include only class Person (class 0 in COCO)
  if pred_classes[i] != 0:
    continue

  x1 = int(box[0])
  y1 = int(box[1])

  x2 = int(box[2])
  y2 = int(box[3])

  # Take the middle of the bottom edge of the box: roughly the player's feet
  xc = x1 + int((x2 - x1) / 2)
  player_pos = (xc, y2)

  # Draw only players that are within the basketball court
  if Point(player_pos).within(court):
    cv2.circle(im, player_pos, radius, color, thickness, lineType=8, shift=0)

Great, we have now marked 8 players on the basketball court; the other two are hidden in the back 🏀💪🏻

(Image: players marked with blue dots on the court)

Image transformations

Using a homography image transformation, we can morph the above image onto the 2D court image shown below.

(Image: 2D basketball court)

We declare the corresponding court coordinates (the same 7 points, starting with the left bottom - bottom corner, etc.) but now on the 2D image.

# The same 7 court points, but now in the destination (2D) image
# (same order as src_pts, starting at the left bottom - bottom corner)
dst_pts = np.array([
    [43, 355],       # left bottom - bottom corner
    [317, 351],      # middle bottom corner
    [563, 351],      # right bottom - bottom corner
    [629, 293],      # right bottom - top corner
    [628, 3],        # top right corner
    [8, 4],          # top left corner
    [2, 299]         # left bottom - top corner
    ])

Now for the homography call, which behind the scenes boils down to matrix mathematics: cv2.findHomography estimates the 3x3 transformation matrix from the point correspondences and cv2.warpPerspective applies it to the image.

# Calculate the homography matrix from the 7 point correspondences
h, status = cv2.findHomography(src_pts, dst_pts)

# Warp the camera frame onto the dimensions of the 2D court image (img_dst)
img_out = cv2.warpPerspective(im, h, (img_dst.shape[1], img_dst.shape[0]))
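To make the "matrix mathematics" concrete, here is a purely illustrative sketch of what the homography does to a single point (the numbers are the middle bottom corner from the point lists above):

```python
# Purely illustrative: the homography is just a 3x3 matrix multiplication
# in homogeneous coordinates.
import numpy as np

pt = np.array([400, 308, 1.0])    # middle bottom corner in the source image
x, y, w = h @ pt                  # h is the matrix returned by cv2.findHomography
print(x / w, y / w)               # should land close to (317, 351) on the 2D court
```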

The output image (img_out) shows the player dots within a 2D view of the court!! 😱

(Image: warped frame with the player dots on the 2D court)

The basketball mini-map solution is almost here.

Mask strategy

One approach to get the player coordinates on the transformed basketball court image is via a colour mask.

lower_range = np.array([255, 0, 0])                       # Lower range of blue in BGR
upper_range = np.array([255, 155, 155])                   # Upper range of blue in BGR
mask = cv2.inRange(img_out, lower_range, upper_range)     # Create a mask for the blue player dots
result = cv2.bitwise_and(img_out, img_out, mask=mask)     # Keep only the masked pixels of the warped image

mask = cv2.inRange(result, lower_range, upper_range)
cv2_imshow(mask)

(Image: resulting mask of the player dots)

Now we can retrieve the coordinates of the non-zero pixels in the mask and use these coordinates to draw a circle on the 2D basketball court image.

# Get the coordinates of all non-zero pixels in the mask
coord = cv2.findNonZero(mask)

# Radius of circle 
radius = 3

# Blue color in BGR 
color = (255, 0, 0) 

# Line thickness of 2 px 
thickness = 2

court_img = cv2.imread('./court.jpg')
for pos in coord:
  center_coordinates = (pos[0][0], pos[0][1])
  cv2.circle(court_img, center_coordinates, radius, color, thickness) 

cv2_imshow(court_img)

Update (29 Dec 2019)

While studying the player trail tracking visualisation I ran into this ball tracking example. That demo uses the OpenCV findContours method to retrieve the coordinates of the masked ball. So instead of using cv2.findNonZero(mask), which returns all the non-zero pixels in the mask, I can now retrieve just 8 coordinates for the players within the mask using the following code:

import imutils

cnts = cv2.findContours(mask.copy(),
                        cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
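The contours still need to be reduced to one coordinate per player; a minimal sketch of how that could be done with cv2.minEnclosingCircle (my assumption, not part of the original snippet), reusing the drawing variables from above:

```python
# Minimal sketch: reduce each contour to a single (x, y) player coordinate
# via its minimum enclosing circle, then draw the dots on the 2D court.
player_coords = []
for c in cnts:
    (x, y), _ = cv2.minEnclosingCircle(c)
    player_coords.append((int(x), int(y)))

court_img = cv2.imread('./court.jpg')
for pos in player_coords:
    cv2.circle(court_img, pos, radius, color, thickness)

cv2_imshow(court_img)
```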

Great, a (draft) workable solution has arrived :)

(Image: resulting basketball mini-map)

See also video example output on YouTube.

What is still missing?

We still need to identify the players per team, which can be achieved using colour detection (see the sketch below).
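A minimal sketch of how such a colour check could look (the team colours and the helper function are purely hypothetical, just to illustrate the idea):

```python
# Minimal sketch (hypothetical team colours and helper): classify a player
# by the average jersey colour inside the upper half of the bounding box.
import numpy as np

TEAM_A_BGR = np.array([30, 30, 200])    # e.g. red jerseys
TEAM_B_BGR = np.array([200, 120, 30])   # e.g. blue jerseys

def classify_team(frame, box):
    x1, y1, x2, y2 = [int(v) for v in box]
    jersey = frame[y1:y1 + (y2 - y1) // 2, x1:x2]   # upper half of the box
    mean_bgr = jersey.reshape(-1, 3).mean(axis=0)
    dist_a = np.linalg.norm(mean_bgr - TEAM_A_BGR)
    dist_b = np.linalg.norm(mean_bgr - TEAM_B_BGR)
    return "team A" if dist_a < dist_b else "team B"
```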

If we can identify each individual player we could also do player tracking on the mini map.

I did create a full tutorial which will take you step-by-step through the above journey.

Hopefully this is enough to experiment with and maybe come up with some practical suggestions on how to finalise the 2D mapping?!

Peace,

Stephan

Top comments (2)

oguchi ebube

Hello, thanks for this wonderful post. Just a quick question: for your code base did you use Colab? I keep running into issues with Colab when trying to process videos.

Stephan

Apologies for the very late reply... but yes, I've used Colab!