# Build your perceptron neural net from scratch

### smakosh

*This was originally published on Medium*

Let's start by defining the terms first: AI (Artificial Intelligence), ANN (Artificial Neural Networks), Machine Learning & Deep Learning.

The field of **AI research** defines itself as an area of computer science that deals with giving machines the ability to appear intelligent, a goal we haven't fully reached yet, since neuroscience hasn't solved intelligence itself.

**An ANN, or neural net,** is a computing system inspired by the biological neural networks that constitute biological brains, although the two look quite different.

**Machine learning** is a field within AI that focuses on the design of algorithms that can learn from given data and results, which we call "training data", in order to make predictions about new input data.

**Deep learning** is a sub-field of machine learning with the same goal, but one that uses neural networks, and in particular deep neural networks.

So what's the perceptron then? The perceptron is an algorithm for supervised learning, which means we know the result we're trying to get; for example, feeding in the size & location of a house to predict its price. (Unsupervised learning, on the other hand, draws results from datasets consisting of input data without labeled responses.)

The perceptron receives input data multiplied by random weights, adds a bias value, and passes the result through an activation function to get an output. If the output is wrong, it uses backpropagation & gradient descent to go back and tweak the weights until it gets a correct result.
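As a rough sketch of that forward pass, here's what "inputs times weights, plus bias, through an activation" looks like; the function names and the simple step activation are my own illustration (the post switches to sigmoid later):

```python
import random

def forward(inputs, weights, bias):
    # weighted sum of the inputs plus the bias, passed through a step activation
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# random starting weights & bias, as the article describes
weights = [random.random(), random.random()]
bias = random.random()
print(forward([2.0, 1.0], weights, bias))
```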

I know there are some terms there you didn't get, so let's go ahead and explain them slowly. Let's say you're a farmer and you want to classify two types of flowers manually. The best way to do this by hand is to take the length & width of each flower's petal and plot them on a graph, just like shown below:

Notice the unknown flower whose type we're trying to figure out: as you can see on the graph, it sits among the type 2 flowers, which means it is a type 2 flower.

What you've just done manually is called linear regression: you've represented as many flowers as you have on the graph as training data, then noticed the pattern and drawn a line between the two types.

So now let's automate it using our own perceptron built from scratch. The activation function we'll be using is the sigmoid; here's what it looks like on a graph:
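The sigmoid squashes any real number into the range (0, 1), which is what gives it that S-shaped curve. A minimal definition (the name `sigmoid` is mine, not from the post):

```python
import math

def sigmoid(x):
    # S-shaped curve: approaches 0 for large negative x,
    # equals 0.5 at x = 0, approaches 1 for large positive x
    return 1 / (1 + math.exp(-x))

print(sigmoid(-5), sigmoid(0), sigmoid(5))
```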

Now let's understand how backpropagation & gradient descent work. To do so we will need:

**The cost function**, also known as the squared error function:
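The post's formula image isn't reproduced here; a common form of the squared error (some texts add a 1/2 factor in front) is:

```python
def cost(target, output):
    # squared error between the desired result and the perceptron's output
    return (target - output) ** 2
```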

**The derivative of the sigmoid function**

**The derivative of the cost function**
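Those two derivatives can be written out directly; the function names are mine, and the cost derivative below corresponds to the squared error `(target - output) ** 2`:

```python
def sigmoid_derivative(output):
    # derivative of sigmoid, written in terms of the sigmoid's output: s * (1 - s)
    return output * (1 - output)

def cost_derivative(target, output):
    # derivative of (target - output) ** 2 with respect to output
    return 2 * (output - target)
```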

**The slope of the cost function** is the derivative of the cost function multiplied by the derivative of the sigmoid.

**Learning rate**: we will go with 0.2

**For the derivatives of the weights**, it's the inputs' values, because the derivative of a constant multiplied by a non-constant is the constant. For **the derivative of the bias**, it's 1, because the bias is a non-constant multiplied by 1.

Then we'll be able to get **the new weights & bias** values, as shown below:
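Putting the pieces above together, the whole update step can be sketched as a small training loop. The variable names and the toy data are mine, standing in for the flower measurements; the learning rate of 0.2 is the one the post chose:

```python
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def train(data, iterations=20000, learning_rate=0.2):
    # data: list of ([length, width], target) pairs
    weights = [random.random(), random.random()]
    bias = random.random()
    for _ in range(iterations):
        inputs, target = random.choice(data)
        # forward pass: weighted sum plus bias, through the sigmoid
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        output = sigmoid(total)
        # slope of the cost = derivative of cost * derivative of sigmoid
        slope = 2 * (output - target) * output * (1 - output)
        # new weight = old weight - learning rate * slope * input
        # (the derivative with respect to each weight is that weight's input)
        weights = [w - learning_rate * slope * x for w, x in zip(weights, inputs)]
        # the derivative with respect to the bias is 1
        bias = bias - learning_rate * slope
    return weights, bias

random.seed(0)  # seeded so the run is reproducible
# toy stand-in data: type 1 flowers have small petals (target 0), type 2 large (target 1)
data = [([1.0, 1.5], 0), ([2.0, 1.0], 0), ([3.0, 3.5], 1), ([4.0, 3.0], 1)]
weights, bias = train(data)
```

After training, feeding a flower's measurements through the forward pass should yield a value close to 0 for type 1 and close to 1 for type 2.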