Deepak Raj

Originally published at codeperfectplus.com

How do Convolutional Neural Networks work?

Today we are going to talk about convolutional neural networks, an important part of deep learning. Convolutional neural networks are similar to artificial neural networks: each neuron receives some inputs, performs a dot product, and optionally follows it with a non-linearity.
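To make that concrete, here is a minimal sketch of a single neuron in NumPy; the input values, weights and bias below are made-up numbers for illustration, not learned parameters.

```python
import numpy as np

# A single artificial neuron: dot product of inputs and weights,
# plus a bias, followed by an optional non-linearity (here ReLU).
inputs  = np.array([0.5, -1.2, 3.0])   # illustrative input values
weights = np.array([0.8,  0.1, -0.4])  # parameters the network would learn
bias    = 0.2

z = np.dot(inputs, weights) + bias     # linear part: dot product + bias
output = max(0.0, z)                   # ReLU non-linearity
print(output)
```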

According to Wikipedia

In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery.

How do Convolutional Neural Networks work?

Convolutional neural networks are mainly used in computer vision. Like ANNs, they have learnable parameters in the form of weights and biases. These networks consist of an input layer, an output layer and several hidden layers.

To train the deep-learning model, each input image passes through convolutional layers with their filters, then pooling layers, and finally fully connected layers with an activation function that turns the output into class probabilities.
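As a rough sketch of that pipeline, here is a small Keras model (assuming TensorFlow 2.x is installed); the 28x28 grayscale input shape, layer sizes and 10 output classes are illustrative assumptions, not values from this article.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",
                  input_shape=(28, 28, 1)),      # convolution layer + filters
    layers.MaxPooling2D((2, 2)),                 # pooling layer
    layers.Flatten(),
    layers.Dense(64, activation="relu"),         # fully connected layer
    layers.Dense(10, activation="softmax"),      # class probabilities
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```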

Convolution Layer:- The convolution layer is the first layer in a CNN and extracts features from the input image. Convolving an image with different filters can perform operations such as edge detection, blurring and sharpening.

The convolutional layers operate on 3D tensors, called feature maps, with height, width and depth (channels); the small filters that slide over them are called kernels.
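To show what a filter does, here is a minimal NumPy sketch of a 2-D convolution using a classic edge-detection kernel; the 8x8 random image is just a stand-in for a real grayscale input.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over a 2-D image and take the dot product
    at each position (valid padding, stride 1)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    output = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            output[i, j] = np.sum(patch * kernel)
    return output

# A classic edge-detection filter (illustrative choice).
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])

image = np.random.rand(8, 8)          # stand-in for a grayscale image
feature_map = convolve2d(image, edge_kernel)
print(feature_map.shape)              # (6, 6)
```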


Pooling Layer:- The next layer in a CNN is the pooling layer, also known as downsampling or subsampling.
Pooling layers simplify the information collected by the convolutional layer, reducing the number of parameters and the computation needed for complex images.

Max pooling simply takes the largest value from one patch of an image, places it in a new matrix next to the max values from other patches, and discards the rest of the information contained in the activation maps.

Max pooling

Credits to [Andrej Karpathy](https://cs231n.github.io/).

The most common approach used in pooling is max pooling.

Average pooling can also be used instead of max pooling: each entry is transformed into the average value of its group of points instead of the maximum value.
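Here is a small NumPy sketch of both pooling modes on a 4x4 feature map with non-overlapping 2x2 windows; the numbers are made up for illustration.

```python
import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    """Downsample a 2-D feature map with non-overlapping windows."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size            # drop ragged edges
    patches = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    if mode == "max":
        return patches.max(axis=(1, 3))          # max pooling
    return patches.mean(axis=(1, 3))             # average pooling

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 7, 8],
                 [3, 2, 1, 0],
                 [1, 2, 3, 4]], dtype=float)

print(pool2d(fmap, mode="max"))   # [[6. 8.]
                                  #  [3. 4.]]
print(pool2d(fmap, mode="avg"))   # [[3.75 5.25]
                                  #  [2.   2.  ]]
```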

The pooled feature maps are then flattened and passed into a fully connected neural network that produces the final output.

Activation Function:- The activation function makes the neural network non-linear. At the output layer, functions such as sigmoid or softmax squash the result into probabilities between 0 and 1.

ReLU is the most commonly used activation function inside CNN models, with softmax typically used at the output; Adam, which is often mentioned alongside it, is an optimizer rather than an activation function.
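For illustration, here is a minimal NumPy sketch of ReLU and softmax applied to a vector of scores; the score values are arbitrary.

```python
import numpy as np

def relu(x):
    """ReLU: pass positive values through, zero out negatives."""
    return np.maximum(0, x)

def softmax(x):
    """Turn a vector of scores into probabilities that sum to 1."""
    e = np.exp(x - np.max(x))     # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, -1.0, 0.5])
print(relu(scores))               # [2.  0.  0.5]
print(softmax(scores))            # probabilities summing to 1
```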

