<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kennedy Antonio</title>
    <description>The latest articles on DEV Community by Kennedy Antonio (@kennedy_antonio_90d664580).</description>
    <link>https://dev.to/kennedy_antonio_90d664580</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2540034%2Fed460701-e5b2-4d44-bb98-2993b00419d2.png</url>
      <title>DEV Community: Kennedy Antonio</title>
      <link>https://dev.to/kennedy_antonio_90d664580</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kennedy_antonio_90d664580"/>
    <language>en</language>
    <item>
      <title>How to count the number of black and white squares on a chessboard?</title>
      <dc:creator>Kennedy Antonio</dc:creator>
      <pubDate>Sun, 08 Dec 2024 02:04:36 +0000</pubDate>
      <link>https://dev.to/kennedy_antonio_90d664580/how-to-count-the-number-of-black-and-white-squares-on-a-chessboard-5c32</link>
      <guid>https://dev.to/kennedy_antonio_90d664580/how-to-count-the-number-of-black-and-white-squares-on-a-chessboard-5c32</guid>
      <description>&lt;p&gt;&lt;strong&gt;The main purpose of the code is to detect the squares of the chessboard, draw lines around them, and count how many of these squares are black and white based on their average pixel intensity. Here’s a detailed explanation of the code's purpose and functionality.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Import Libraries&lt;/strong&gt;: 

&lt;ul&gt;
&lt;li&gt;The code begins by importing the necessary libraries: OpenCV (&lt;code&gt;cv2&lt;/code&gt;) for image processing, NumPy for numerical operations, Matplotlib for displaying images, and Pandas (imported here but not used in the core logic). The &lt;code&gt;cv2_imshow&lt;/code&gt; patch is Google Colab's notebook-friendly replacement for &lt;code&gt;cv2.imshow()&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import cv2
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from google.colab.patches import cv2_imshow

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Load and Prepare the Image&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The original image is loaded using &lt;code&gt;cv2.imread()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The image is converted from BGR (OpenCV default) to RGB format for proper color representation.&lt;/li&gt;
&lt;li&gt;A grayscale version of the image is created for processing.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;original_image = cv2.imread('original.png')

rgb_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2RGB)
gray_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Image Preprocessing&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gaussian Blur&lt;/strong&gt;: A Gaussian blur is applied to the grayscale image to reduce noise and improve edge detection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Otsu's Thresholding&lt;/strong&gt;: Otsu's method is used to convert the blurred grayscale image into a binary image, where pixels are classified as either black or white based on their intensity.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gaussian_blur = cv2.GaussianBlur(gray_image, (5, 5), 0)
ret, otsu_binary = cv2.threshold(gaussian_blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Edge Detection&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The Canny edge detection algorithm is applied to the binary image to find edges.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;canny = cv2.Canny(otsu_binary,20,255)

kernel = np.ones((7, 7), np.uint8)

img_dilation = cv2.dilate(canny, kernel, iterations=1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Morphological Operations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dilation is performed on the Canny image to enhance the detected edges, making it easier to identify lines in the next step.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Line Detection&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Hough Line Transform (&lt;code&gt;cv2.HoughLinesP&lt;/code&gt;) is used to detect lines in the dilated image. Detected lines are drawn on the image for visualization.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lines = cv2.HoughLinesP(img_dilation, 1, np.pi/180, threshold=200, minLineLength=100, maxLineGap=50)

if lines is not None:
    for i, line in enumerate(lines):
        x1, y1, x2, y2 = line[0]

        # draw lines
        cv2.line(img_dilation, (x1, y1), (x2, y2), (100,100,255), 2)

kernel = np.ones((3, 3), np.uint8)

img_dilation_2 = cv2.dilate(img_dilation, kernel, iterations=1)

plt.imshow(img_dilation_2,cmap="gray")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="7"&gt;
&lt;li&gt;
&lt;strong&gt;Contour Detection&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Contours of potential squares (chessboard cells) are found using &lt;code&gt;cv2.findContours()&lt;/code&gt;. This helps in identifying rectangular shapes that represent squares on the chessboard.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;board_contours, hierarchy = cv2.findContours(img_dilation_2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="8"&gt;
&lt;li&gt;
&lt;strong&gt;Filtering Rectangles&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Each contour is analyzed, and only those with an area between 4000 and 40000 pixels are considered (to filter out noise).&lt;/li&gt;
&lt;li&gt;The contours are approximated to polygons, and only those with four vertices (quadrilaterals) are retained as valid squares.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if 4000 &amp;lt; cv2.contourArea(contour) &amp;lt; 40000:
        # Approximate the contour to a simpler shape
        epsilon = 0.02 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)

        # Ensure the approximated contour has 4 points (quadrilateral)
        if len(approx) == 4:
            pts = [pt[0] for pt in approx]  # Extract coordinates

            # Define the points explicitly
            pt1 = tuple(pts[0])
            pt2 = tuple(pts[1])
            pt4 = tuple(pts[2])
            pt3 = tuple(pts[3])

            x, y, w, h = cv2.boundingRect(contour)
            center_x = (x + (x + w)) / 2
            center_y = (y + (y + h)) / 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="9"&gt;
&lt;li&gt;
&lt;strong&gt;Storing Square Centers&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The centers of valid squares are calculated and stored along with their corner points for further processing.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;square_centers.append([center_x,center_y,pt2,pt1,pt3,pt4])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="10"&gt;
&lt;li&gt;
&lt;strong&gt;Sorting Coordinates&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The detected square centers are sorted by their y-coordinates (row-wise) and grouped based on proximity in their y-values.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sorted_coordinates = sorted(square_centers, key=lambda x: x[1], reverse=True)

groups = []
current_group = [sorted_coordinates[0]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
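&lt;p&gt;The grouping loop itself is not shown above, only its initialization. A minimal sketch of how the row grouping might proceed (the &lt;code&gt;group_by_rows&lt;/code&gt; name and the 50-pixel tolerance are assumptions for illustration, not values from the original code):&lt;/p&gt;

```python
# Sketch: group sorted square centers into board rows by y-proximity.
# y_tol is an assumed tolerance, not a value from the original article.
def group_by_rows(sorted_coordinates, y_tol=50):
    groups = []
    current_group = [sorted_coordinates[0]]
    for coord in sorted_coordinates[1:]:
        if abs(coord[1] - current_group[-1][1]) > y_tol:
            # y jumped by more than the tolerance: start a new row
            groups.append(current_group)
            current_group = [coord]
        else:
            current_group.append(coord)
    groups.append(current_group)
    return groups

# Example: four centers forming two rows (y near 200 and y near 100)
centers = sorted([[10, 100], [110, 105], [10, 200], [110, 195]],
                 key=lambda c: c[1], reverse=True)
rows = group_by_rows(centers)
print(len(rows))     # 2 rows
print(len(rows[0]))  # 2 squares in the first row
```

&lt;p&gt;Each resulting group corresponds to one rank of the board, which makes it easy to spot rows with fewer than eight detected squares.&lt;/p&gt;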



&lt;ol start="11"&gt;
&lt;li&gt;
&lt;strong&gt;Handling Undetected Squares&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Additional logic checks for undetected squares between detected ones based on their coordinates and draws lines to connect them if they fall within certain criteria.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for num in range(len(sorted_coordinates)-1):
    if abs(sorted_coordinates[num][1] - sorted_coordinates[num+1][1]) &amp;lt; 100:
        if sorted_coordinates[num+1][0] - sorted_coordinates[num][0] &amp;gt; 200:
            x = (sorted_coordinates[num+1][0] + sorted_coordinates[num][0]) / 2
            y = (sorted_coordinates[num+1][1] + sorted_coordinates[num][1]) / 2
            p1 = sorted_coordinates[num][5]
            p2 = sorted_coordinates[num+1][4]
            p3 = sorted_coordinates[num+1][3]
            p4 = sorted_coordinates[num][2]
            cv2.line(otsu_binary, p1, p2, (255, 255, 0), 7)
            cv2.line(otsu_binary, p1, p4, (255, 255, 0), 7)
            cv2.line(otsu_binary, p2, p3, (255, 255, 0), 7)
            cv2.line(otsu_binary, p3, p4, (255, 255, 0), 7)
            sorted_coordinates.insert(num+1,[x,y,p1,p2,p3,p4])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="12"&gt;
&lt;li&gt;
&lt;strong&gt;Counting Black and White Squares&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;For each detected square, its bounding box is extracted from the binary image.&lt;/li&gt;
&lt;li&gt;The average pixel intensity of each square is calculated: if it's greater than 127, it’s counted as white; otherwise, it’s counted as black.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for coordinate in sorted_coordinates:
    points = coordinate[2:]  # Get only the tuple points

    # Extract x and y coordinates from the points
    x_coords = [point[0] for point in points]
    y_coords = [point[1] for point in points]

    # Determine the bounding box of the rectangle
    x_min = int(min(x_coords))
    x_max = int(max(x_coords))
    y_min = int(min(y_coords))
    y_max = int(max(y_coords))

    # Extract the rectangle from the binary image
    rectangle = otsu_binary[y_min:y_max, x_min:x_max]

    # Calculate the average color of the rectangle
    avg_color = np.mean(rectangle)

    # Count based on average color
    if avg_color &amp;gt; 127:  # Assuming average color &amp;gt; 127 is white
        white_count += 1
    else:
        black_count += 1

print(white_count)
print(black_count)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="13"&gt;
&lt;li&gt;
&lt;strong&gt;Output Results&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Finally, the counts of black and white squares are printed, and the processed binary image with drawn contours is displayed using Matplotlib.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
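&lt;p&gt;As a quick sanity check on the printed counts (a hypothetical helper, not part of the original code): a standard 8x8 board has 64 squares, so both counts should come out to 32.&lt;/p&gt;

```python
# Hypothetical sanity check: a standard 8x8 chessboard has
# 32 squares of each color.
def counts_plausible(white_count, black_count, board_size=8):
    expected = board_size * board_size // 2
    return white_count == expected and black_count == expected

print(counts_plausible(32, 32))  # True
print(counts_plausible(30, 32))  # False: some squares were missed
```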

&lt;h3&gt;Custom Control&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The code can be customized based on specific requirements or characteristics of different chessboards or images. 
For example:

&lt;ul&gt;
&lt;li&gt;Adjusting thresholds in Otsu's method or Canny edge detection can help in better detecting edges depending on lighting conditions.&lt;/li&gt;
&lt;li&gt;Modifying area constraints when filtering contours can help include or exclude certain sizes of detected squares.&lt;/li&gt;
&lt;li&gt;Additional features such as color detection could be implemented if colored chessboards are being analyzed rather than just black-and-white ones.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
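&lt;p&gt;One way to make that tuning explicit is to gather the adjustable values in a single place. The &lt;code&gt;PARAMS&lt;/code&gt; dictionary below is a hypothetical arrangement; the defaults mirror the values used earlier in the article:&lt;/p&gt;

```python
# Hypothetical collection of the tunable parameters discussed above.
# Names are illustrative; defaults mirror the values used in the article.
PARAMS = {
    "blur_kernel": (5, 5),   # Gaussian blur kernel size
    "canny_low": 20,         # Canny lower threshold
    "canny_high": 255,       # Canny upper threshold
    "min_area": 4000,        # smallest accepted contour area (px)
    "max_area": 40000,       # largest accepted contour area (px)
}

def area_ok(area, params=PARAMS):
    """Return True if a contour area passes the size filter."""
    return area > params["min_area"] and params["max_area"] > area

print(area_ok(10000))  # True
print(area_ok(1000))   # False
```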

&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;This code effectively demonstrates how to use OpenCV for complex image processing tasks such as detecting shapes (in this case, chessboard squares), analyzing their properties, and counting them based on color criteria. It serves as a practical example of applying computer vision techniques in Python for real-world applications like game analysis or board recognition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzka8gty7lk80xo0qrf0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzka8gty7lk80xo0qrf0f.png" alt="Image description" width="560" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
      <category>machinelearning</category>
      <category>algorithms</category>
    </item>
  </channel>
</rss>
