Hello, I'm Ganesh. I'm building git-lrc, an AI code reviewer that runs on every commit. It is free, unlimited, and source-available on GitHub. Star git-lrc on GitHub to help more developers discover the project. Do give it a try and share your feedback to help improve the product.
In the previous article, we discussed neural networks and how they work.
What Is a Neural Network?
A neural network consists of nodes and connections between those nodes.
The connections between nodes are called parameters or weights. These values are estimated and updated during training so the model can make better predictions.
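To make this concrete, here is a minimal sketch of a single node: each input is multiplied by its weight, the products are summed, and a bias is added. The function name `node` and the specific numbers are just illustrative assumptions, not part of any particular library.

```python
# A single node: multiply each input by its weight, sum the products,
# and add a bias. The weights and bias are the parameters that get
# estimated and updated during training.
def node(inputs, weights, bias):
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Example: 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
print(node([1.0, 2.0], [0.5, -0.25], 0.1))
```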
In the image above, we can see how curved lines are created to fit the data points.
Neural networks start with unknown parameter values.
The model then tries to fit the data points using those parameters and make predictions.
If the prediction is not accurate, the model updates the parameters and tries again.
This process is done using the backpropagation algorithm, which we will discuss in a later article.
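The predict-check-update loop described above can be sketched with a toy one-parameter model. This is a simplified stand-in for full backpropagation: the data, the starting guess, and the learning rate of 0.01 are all illustrative assumptions.

```python
# Toy training loop: fit y = 2x with a single parameter w.
# Start with a wrong guess, measure the prediction error,
# nudge w to reduce it, and repeat.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0                      # unknown parameter, poor initial guess
lr = 0.01                    # learning rate (step size), an assumption

for _ in range(200):
    for x, y in data:
        pred = w * x
        error = pred - y
        w -= lr * error * x  # gradient of the squared error w.r.t. w

print(round(w, 3))           # converges close to 2.0
```

Each pass nudges `w` toward the value that makes the predictions match the data, which is the same idea backpropagation applies across all the parameters of a full network.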
Building Blocks of Neural Networks
The curved lines created to fit the data points are represented using mathematical functions.
We can reshape these functions to better fit the data points.
These curved functions are called activation functions. There are many activation functions commonly used in neural networks, including:
1. Softplus
2. ReLU
3. Sigmoid
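The three activation functions listed above have simple closed forms, sketched here with Python's standard `math` module:

```python
import math

def softplus(x):
    # Smooth curve: log(1 + e^x), a soft approximation of ReLU
    return math.log(1.0 + math.exp(x))

def relu(x):
    # Zero for negative inputs, identity for positive inputs
    return max(0.0, x)

def sigmoid(x):
    # Squashes any input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-3.0), sigmoid(0.0))  # 0.0 0.5
```

Reshaping and combining copies of these curves is how a network builds the fitted line through the data points.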
We choose different activation functions depending on how we want the neural network to learn and fit the data.
Conclusion
We now have a basic understanding of how neural networks work and how activation functions help fit data points.
As we continue, we will explore concepts like backpropagation, weights, biases, and training neural networks in more detail.
Feedback and contributors are welcome! It's online, source-available, and ready for anyone to use.




