Ajay Krupal K


Neural Networks: The Artificial Brain behind AI

AI accomplishes a myriad of tasks at a level comparable to human capabilities, thanks to its foundation in neural networks, structures inspired by the workings of the human brain. This article provides a brief overview of how neural networks work.

Neural Networks

Neural networks take in data as input, train themselves to recognize the patterns in that data, and then predict outputs for new sets of similar data.
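As a minimal sketch of that train-then-predict workflow (assuming Python with scikit-learn and a tiny made-up dataset, neither of which is part of this article):

```python
from sklearn.neural_network import MLPClassifier

# Toy XOR-style dataset: inputs and the pattern we want the network to learn.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# A small network; the lbfgs solver works well on tiny datasets like this one.
model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                      max_iter=1000, random_state=1)
model.fit(X, y)          # train on the data

# Predict the output for a new input of the same shape.
print(model.predict([[1, 0]]))
```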

How do Neural Networks work?

Neural networks are made of neurons, their basic processing units. A simple neural network has three layers: an input layer, a hidden layer, and an output layer (deeper networks have several hidden layers). The input layer takes in the input and passes it on to the hidden layer, which processes it and passes the result on to the output layer, which produces the prediction.
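A rough sketch of that structure in code might look like this (the layer sizes and NumPy setup below are arbitrary choices for illustration, not something the article prescribes):

```python
import numpy as np

# Arbitrary layer sizes, chosen purely for illustration:
# 4 input neurons -> 5 hidden neurons -> 3 output neurons.
n_input, n_hidden, n_output = 4, 5, 3

rng = np.random.default_rng(0)

# One weight per channel, stored as a (to_layer, from_layer) matrix
# for each pair of connected layers.
W_hidden = rng.normal(size=(n_hidden, n_input))   # input  -> hidden channels
W_output = rng.normal(size=(n_output, n_hidden))  # hidden -> output channels
```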

Let's say a neural network is being trained to recognize images. Each image pixel is fed as input to a neuron in the input layer. Neurons are connected to the next layer through channels, and each channel is assigned a numerical value known as a weight. Each input is multiplied by the weight of its channel, and the sum of these products is sent as input to the hidden layer.
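Continuing the toy setup above, the "multiply each input by its channel's weight and add everything up" step is just a matrix-vector product (the pixel values here are made up):

```python
# A toy "image" of 4 pixel values, standing in for the flattened pixels
# fed to the neurons of the input layer.
pixels = np.array([0.2, 0.8, 0.5, 0.1])

# Each hidden neuron receives the sum of (input * channel weight) over all
# input neurons -- one dot product per hidden neuron.
hidden_sum = W_hidden @ pixels    # shape: (n_hidden,)
```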

Forward Propagation

Each neuron in the hidden layer is associated with a numerical value called a bias, which is added to the weighted sum received from the previous layer. The total is then passed to a threshold function called an activation function, whose result decides whether the neuron is activated. If a neuron is activated, it transmits data to the next layer. This flow of data from the input layer towards the output layer is called forward propagation.
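Putting the pieces together, a forward pass through the toy network might look like the sketch below (the sigmoid activation and zero-initialised biases are illustrative choices, not the only options):

```python
def sigmoid(z):
    # One common choice of activation function: values near 0 mean
    # "not activated", values near 1 mean "activated".
    return 1.0 / (1.0 + np.exp(-z))

# One bias per hidden and output neuron (zeros here, just for illustration).
b_hidden = np.zeros(n_hidden)
b_output = np.zeros(n_output)

# Forward propagation: weighted sum + bias, then activation, layer by layer.
hidden_activation = sigmoid(W_hidden @ pixels + b_hidden)
output_scores = sigmoid(W_output @ hidden_activation + b_output)
```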

In the output layer, the neuron with the highest value fires and determines the output. That output might not be correct; during training the network is also given the correct output, and the predicted output is compared with the actual one to compute the error. The magnitude of the error indicates how wrong the network was, and its sign shows whether the prediction was higher or lower than expected. This information is transferred backwards through the network, a process known as back propagation, and the weights and biases are adjusted to reduce the error.
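A sketch of that compare-and-adjust step for the same toy network, assuming a squared-error loss and a single gradient-descent update (the article doesn't commit to a specific loss or update rule):

```python
# Suppose the correct class for this toy input is class 1 (an assumption
# made purely for illustration), encoded as a one-hot target vector.
target = np.zeros(n_output)
target[1] = 1.0

# The output neuron with the highest value determines the prediction.
predicted_class = int(np.argmax(output_scores))

# Compare the predicted output with the actual output (squared-error loss).
error = output_scores - target       # sign: higher or lower than expected
loss = 0.5 * np.sum(error ** 2)      # magnitude: how wrong the network was

# Back propagation: push the error backwards and nudge weights and biases
# in the direction that reduces the loss (one gradient-descent step).
learning_rate = 0.1
delta_output = error * output_scores * (1 - output_scores)
delta_hidden = (W_output.T @ delta_output) * hidden_activation * (1 - hidden_activation)

W_output -= learning_rate * np.outer(delta_output, hidden_activation)
b_output -= learning_rate * delta_output
W_hidden -= learning_rate * np.outer(delta_hidden, pixels)
b_hidden -= learning_rate * delta_hidden
```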

This cycle of forward propagation and back propagation is repeated over the training data until the network's predictions are accurate enough, and that is how training ends. Though the training process can be time-consuming, the benefits of neural networks are large. Google Lens and facial recognition are a few of the applications of neural networks.

Follow me on Twitter for more here.
