During my learning of the ways of the force in OpenCV, it was only a matter of time before I encountered edge detection. This concept, which also falls under image processing, uses the same principle of convolution that I wrote about before. The aim of edge detection is to highlight the outlines of the most important information in an image. It is one of the essential steps in object detection and directly affects a machine’s understanding of the world.

# So what is an edge?

First of all, it is important to understand that the real world is continuous whereas a picture of the real world is made up of discrete (individual) pixels. Now let’s zoom in on the pixels for one second:

It is obvious to all of us that the edge is where the red line indicates; however, it is not possible to draw the edge between the two smallest units (not entirely true, there are methods like super-sampling). Therefore, the computer checks whether a pixel is on an edge by comparing it with its adjacent neighbors. Here, if we were to go from left to right, pixel number 2 is an edge, as there is a significant difference between pixels 1 and 3. The same applies to pixel number 3. The result would look something like this:

Nevertheless, in real life, pictures also include gray pixels of varying intensity (values between 0 and 255). Therefore, to measure how much the intensity changes (see also gradient magnitude if you want to dig deeper), we use the first derivative to indicate the magnitude of change. It is then up to us to establish a threshold that determines whether the magnitude of change is large enough to be considered an edge. To take it one step further, we could also use the second derivative and look for zero-crossings, which correspond to local maxima of the image gradient (see image below).

In addition, due to their derivative-based nature, edge detectors are quite sensitive to noise (think about poor image quality); therefore, it is highly recommended to blur the image first in order to smooth the noise out and thereby increase the accuracy of the detected edges.

Said in simpler words, first-order derivatives are best for identifying strong edges by establishing a threshold, whereas second-order derivatives are best for locating exactly where the edge is. Both types are noise-sensitive, so blur the images first to obtain the best results.
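The idea above can be sketched in a few lines of NumPy on a single row of pixels (the intensity values below are made up for illustration): the first derivative plus a threshold finds the strong jump, and the zero-crossing of the second derivative pinpoints where it happens.

```python
import numpy as np

# A 1-D row of pixel intensities with a soft edge in the middle
row = np.array([10, 10, 12, 60, 200, 210, 210], dtype=float)

# First derivative: magnitude of change between neighboring pixels
grad = np.diff(row)                    # [0, 2, 48, 140, 10, 0]
threshold = 40                         # arbitrary cut-off for a "strong" edge
edges = np.abs(grad) > threshold       # True where the jump is large enough

# Second derivative: a zero-crossing marks the center of the edge
second = np.diff(row, n=2)             # [2, 46, 92, -130, -10]
sign_change = np.diff(np.sign(second)) != 0
```

Here `edges` flags positions 2 and 3 (the steep climb from 12 to 200), and the second derivative changes sign right at the peak of that climb.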

# Examples of edge detection

I will be applying various types of edge detection methods on this original image below. Be sure to check out the code on my GitHub.

### First order derivative - Sobel operator

The Sobel edge detector is a first-order derivative edge detection method. It calculates the gradients separately along the X axis and the Y axis, and its kernels already incorporate a smoothing effect. There are many other kernels of this kind, like Scharr or Prewitt.

### Second order derivative - Laplacian operator

Unlike the Sobel operator, the Laplacian uses only a single kernel; nevertheless, the more popular approach is to combine it with a Gaussian blur in order to reduce noise (the so-called Laplacian of Gaussian).

### Canny edge detection

The Canny edge detector is probably the most widely used edge detection method. The reason it is superior to the methods mentioned so far is its “non-maximum suppression” step, which produces edges one pixel thick. The detection consists of several stages: Gaussian smoothing, gradient computation (typically with Sobel), non-maximum suppression, and hysteresis thresholding.

# Real life application

Edge detection is just a first step, necessary for further image analysis such as shape detection or object identification. Apart from its most obvious use in machine vision (e.g. self-driving vehicles), it is also widely used in the analysis of medical images to help with diagnostics or even identify pathological structures like tumors.

All in all, this was yet again a fun way to play around with OpenCV. Let me know what you think, and may the Python be with you.
