vivekvohra

CS50 PSet Filter (more): helpers.c

In this post, we continue our series with a more detailed look at the helpers.c file, breaking down its key components and functionality.
helpers.c is the file where we implement the helper functions used by the main program, filter.c.
This is the part where we have to write a series of functions in C that apply various image filters to BMP inputs.


Image Filtering

What does it even mean to filter an image? We can think of filtering an image as taking the pixels of some original image and modifying each pixel in such a way that a particular effect is apparent in the resulting image.
Here we have to code the following filters:

  • Grayscale function takes an image and turns it into a black-and-white version of the same image.

  • Reflect function takes an image and reflects it horizontally.

  • Blur function takes an image and turns it into a box-blurred version of the same image.

  • Edge function takes an image and highlights the edges between objects, according to the Sobel operator.

void grayscale(int height, int width, RGBTRIPLE image[height][width])

Grayscale:

This function takes an image and converts it into a black-and-white version of the same image. This is done by taking an average of the RGB values of each pixel and setting them all equal to the average.

This is because as long as the RGB values are all equal, the result will be varying shades of gray along the black-white spectrum, with higher values meaning lighter shades (closer to white) and lower values meaning darker shades (closer to black).

To ensure each pixel of the new image still has the same general brightness or darkness as the old image, we can take the average of the red, green, and blue values to determine what shade of gray to make the new pixel.
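A minimal sketch of that averaging (assuming the RGBTRIPLE type from bmp.h and round from math.h, as provided in the problem's distribution code):

    void grayscale(int height, int width, RGBTRIPLE image[height][width])
    {
        for (int i = 0; i < height; i++)
        {
            for (int j = 0; j < width; j++)
            {
                // Average the three channels in floating point so that,
                // e.g., (85 + 86 + 86) / 3 rounds to 86 instead of truncating to 85
                float avg = (image[i][j].rgbtRed + image[i][j].rgbtGreen + image[i][j].rgbtBlue) / 3.0;
                int shade = round(avg);
                image[i][j].rgbtRed = shade;
                image[i][j].rgbtGreen = shade;
                image[i][j].rgbtBlue = shade;
            }
        }
    }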

Reflect:

This function flips an image about the vertical axis, which returns a mirror image.


Since a pixel is, in essence, just a block of color, we don't have to mirror the pixel itself; we only have to swap pixels. Any pixel on the left side of the image should end up on the right, and vice versa.
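A minimal sketch of that swap; iterating j only up to width / 2 ensures each pair is swapped exactly once:

    void reflect(int height, int width, RGBTRIPLE image[height][width])
    {
        for (int i = 0; i < height; i++)
        {
            for (int j = 0; j < width / 2; j++)
            {
                // Swap the j-th pixel from the left with the j-th pixel from the right
                RGBTRIPLE tmp = image[i][j];
                image[i][j] = image[i][width - 1 - j];
                image[i][width - 1 - j] = tmp;
            }
        }
    }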

Blur:

Things get a bit trickier here. The purpose here is to return a blurred version of the input image. We do this by implementing the “box blur,” which works by taking each pixel and, for each color value, giving it a new value by averaging the color values of neighboring pixels.


The color values for pixel 11 would be obtained by averaging the color values of pixels 6, 7, 8, 10, 11, 12, 14, 15, and 16 of the original picture.

For a pixel along an edge or in a corner, such as pixel 15, we only average the pixels that actually exist: 10, 11, 12, 14, 15, and 16.

How to implement it in code?

    // Create temp array as we require original values
    RGBTRIPLE temp[height][width];
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            temp[i][j] = image[i][j];
        }
    }

First, we create a temporary array, duplicating the image into a separate array of RGBTRIPLEs. This is required because we need the original values of the surrounding pixels at all times to calculate the averages; those values would otherwise be overwritten in the original image once we start blurring.

    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            // Initialise values for each i,j pixel
            float sum_red;
            float sum_blue;
            float sum_green;
            int counter; // to count the number of surrounding pixels
            sum_red = sum_blue = sum_green = counter = 0;

            for (int k = i - 1; k <= i + 1; k++)  // for surrounding pixels
            {
                for (int l = j - 1; l <= j + 1; l++)
                {
                    if (k >= 0 && l >= 0 && k < height && l < width)
                    { // if that pixel is inside the image

                        sum_red += temp[k][l].rgbtRed;
                        sum_blue += temp[k][l].rgbtBlue;
                        sum_green += temp[k][l].rgbtGreen;
                        counter++;
                    }
                }
            }
            // take average
            image[i][j].rgbtRed = round(sum_red / counter);
            image[i][j].rgbtGreen = round(sum_green / counter);
            image[i][j].rgbtBlue = round(sum_blue / counter);
        }
    }
    return;
}

To determine the existence of adjacent pixels, we iterate through the pixels above and below, as well as to the left and right, utilizing the variables k and l respectively. Should k fall outside the bounds of 0 and the image's height, it indicates that the pixel doesn’t exist, allowing us to skip it. The same logic is applied to l in relation to the image's width.

Finally, we calculate the average for each pixel and set the original pixel values equal to this average, thereby producing the desired blurring effect.

Edge:

In artificial intelligence algorithms for image processing, it is often useful to detect edges in an image: lines in the image that create a boundary between one object and another. One way to achieve this effect is by applying the Sobel operator to the image.

The Sobel operator uses two 3×3 kernels, which are convolved with the original image.

CONVOLUTION

Convolution is a mathematical operation, written with the operator "∗". Graphically, it expresses how the 'shape' of one function is modified by the other.

In systems theory, it is the operation that characterizes linear time-invariant (LTI) systems.

(You can skip this part.)

Time-invariant: a system where a shift in the input results in an identical shift in the output. For example:


**Linear:** In simple terms, the graph of output against input must be a straight line through the origin, with no saturation or dead time.
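In symbols, a compact restatement of the two properties (with T denoting the system acting on an input signal):

    % Time-invariance: a shift in the input produces the same shift in the output
    y(t) = T\{x(t)\} \implies T\{x(t - \tau)\} = y(t - \tau)

    % Linearity: scaling and summing inputs scales and sums the outputs
    T\{a\,x_1(t) + b\,x_2(t)\} = a\,T\{x_1(t)\} + b\,T\{x_2(t)\}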

For discrete systems, the formula is:

$(f * g)[n] = \sum_{m=-\infty}^{\infty} f[m]\, g[n - m]$

https://youtu.be/QmcoPYUfbJ8?si=2CCf64_x9fGl7jx5

In this video, we understand convolution using a great analogy. Suppose we want to find how much smoke burning several matchsticks produces, over a time frame of 5 minutes. Let's define two functions:

Smoke function: S(t) - It describes the amount of smoke produced by a single matchstick across time. It might be an exponential decay graph.

Firework function: F(t) - It describes the number of matchsticks lit per minute.
Let it be linear: F(t) = t + 1, so one match is lit at minute 0, two at minute 1, and so on.

  • At t = 0, 1 match is lit: total smoke = 1·S(0).
  • At t = 1, 2 more matches are lit: total smoke = smoke from the 2 fresh sticks, 2·S(0) (they have just started burning), plus the smoke of the previous stick after 1 minute of burning, 1·S(1).

at minute 0: 1·S(0)
at minute 1: 2·S(0) + 1·S(1)
at minute 2: 3·S(0) + 2·S(1) + 1·S(2)

Thus the total smoke at minute n is the sum of F(m)·S(n − m) over all m up to n — it indeed follows the convolution equation above.
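As a quick sanity check, here is a tiny program computing those totals via the convolution sum (the values of S are hypothetical, just an assumed decaying smoke curve; F(t) = t + 1 as above):

    #include <stdio.h>

    #define MINUTES 5

    int main(void)
    {
        double S[MINUTES] = {1.0, 0.5, 0.25, 0.125, 0.0625}; // assumed smoke of one stick after m minutes
        double F[MINUTES] = {1, 2, 3, 4, 5};                 // sticks lit at minute t: F(t) = t + 1

        for (int n = 0; n < MINUTES; n++)
        {
            // Convolution sum: smoke(n) = sum over m of F[m] * S[n - m]
            double smoke = 0;
            for (int m = 0; m <= n; m++)
            {
                smoke += F[m] * S[n - m];
            }
            printf("minute %d: total smoke = %.4f\n", n, smoke);
        }
    }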

We can visualize convolution as a sliding window: one function is flipped and slid across the other, and at each offset the overlapping values are multiplied and summed.


Now, if our first function is very large compared to our second function, the second function acts as a small window sliding across the first.

The 2-D version (image processing)

Consider a 3×3 image patch and a 3×3 kernel:

    a b c        1 2 3
    d e f        4 5 6
    g h i        7 8 9

We can solve this matrix convolution at the centre (2,2) by flipping the kernel and multiplying element-wise:

i.e. (i·1) + (h·2) + (g·3) + (f·4) + (e·5) + (d·6) + (c·7) + (b·8) + (a·9)

In image processing, the second function is known as the kernel, which in this case is the 3×3 matrix:

    [1/9, 1/9, 1/9]
    [1/9, 1/9, 1/9]
    [1/9, 1/9, 1/9]

We could have solved our blur function by choosing a kernel whose entries sum to 1 and then performing the convolution, as sketched below. (Remember that we have to flip the kernel when multiplying; for this symmetric kernel, flipping changes nothing.)
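A sketch of how that kernel view might look in C, reusing the temp copy and the i, j loops from the blur code above (the kernel is the box-blur matrix just shown):

    // Box-blur kernel: every weight is 1/9, so the weights sum to 1
    float kernel[3][3] = {
        {1 / 9.0, 1 / 9.0, 1 / 9.0},
        {1 / 9.0, 1 / 9.0, 1 / 9.0},
        {1 / 9.0, 1 / 9.0, 1 / 9.0},
    };

    float sum_red = 0;
    for (int k = i - 1; k <= i + 1; k++)
    {
        for (int l = j - 1; l <= j + 1; l++)
        {
            if (k >= 0 && l >= 0 && k < height && l < width)
            {
                // Weight each neighbour by its kernel entry
                sum_red += temp[k][l].rgbtRed * kernel[k - i + 1][l - j + 1];
            }
        }
    }

Note that at the image borders this weighted version differs slightly from the averaging blur above, which renormalizes by the count of pixels actually present.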

Edge Detection

Here the Kernels for convolution are Gx and Gy for respective cases:

Gx → For vertical edge detection or edges in the x direction

Gy → For horizontal edge detection or edges in the y direction

The standard Sobel kernels are:

    Gx:              Gy:
    -1  0  1         -1 -2 -1
    -2  0  2          0  0  0
    -1  0  1          1  2  1
For each of the three color values for each pixel, we’ll compute two values Gx and Gy. To compute Gx for the red channel value of a pixel, for instance, we’ll take the original red values for the nine pixels that form a 3x3 box around the pixel, multiply them each by the corresponding value in the Gx kernel, and take the sum of the resulting values.


Take Gx, for example: we're multiplying the pixels to the right of the target pixel by a positive number and the pixels to the left by a negative number. If the sum is close to zero, the values cancel out, and thus the colors on both sides are similar.

But if the pixels on the right are very different from the pixels on the left, then the resulting value will be very positive or very negative, indicating a color change that likely is the result of a boundary between objects.

For pixels along the border, treat any pixel past the edge as black (all channel values 0). This effectively removes those pixels from our calculations of Gx and Gy.

To find each channel value of the new pixel, we combine Gx and Gy using the Sobel gradient magnitude, i.e.

$\text{new value} = \min\!\left(255,\ \operatorname{round}\!\left(\sqrt{G_x^2 + G_y^2}\right)\right)$

Code Implementation

It is similar to Blur, with the main difference being the operation, i.e.:

Gx_red += temp[k][l].rgbtRed * Gx[k - i + 1][l - j + 1];
(1,1) (1,2) (1,3)
(2,1) (2,2) (2,3)
(3,1) (3,2) (3,3)

Thus, for the top-left neighbour (1,1) of the centre pixel (2,2), k = 1 and i = 2, so the value of [k - i + 1] is 0; for the bottom-right neighbour it is 2.

So the kernel index iterates from 0 to 2.

Also, if the resulting value of G exceeds the limit of 255, we assign it the maximum value possible, i.e. 255.

            // if the value is above the limit; assg. it maxm possible
            if (red > 255)
            {
                red = 255;
            }
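Putting this together, a minimal sketch of the per-channel Sobel computation for one pixel (i, j) — assuming, as in Blur, a temp copy of the image inside the same double loop, plus sqrt and round from math.h:

    // The two Sobel kernels
    int Gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    int Gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    float gx_red = 0, gy_red = 0;
    for (int k = i - 1; k <= i + 1; k++)
    {
        for (int l = j - 1; l <= j + 1; l++)
        {
            // Out-of-range neighbours count as black (0), so skipping them adds nothing
            if (k >= 0 && l >= 0 && k < height && l < width)
            {
                gx_red += temp[k][l].rgbtRed * Gx[k - i + 1][l - j + 1];
                gy_red += temp[k][l].rgbtRed * Gy[k - i + 1][l - j + 1];
            }
        }
    }

    // Combine the two gradients and cap the result at 255
    int red = round(sqrt(gx_red * gx_red + gy_red * gy_red));
    image[i][j].rgbtRed = (red > 255) ? 255 : red;

The green and blue channels follow the same pattern.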

With this, we end our three-part series. We learned what an image (especially a BMP) is, its structure, and how it's defined. We looked through the various files provided to us (bmp.h, helpers.h, Makefile, filter.c, etc.) and understood the underlying code and its functionality. Finally, we got to know filters, the theory behind them, and how to implement them.


And so we conclude this series here. I hope you learned something new. Until we meet again, continue to code with passion. 🚀

As a novice writer, I’m eager to learn and improve. Should you spot any errors, I warmly welcome your insights in the comments below. Your feedback is invaluable to me.

My code:

GitHub: vivekvohra / filter

This code requires the cs50 library to run.





Credits & References
Harvard University
