Machine learning is about a computer learning the patterns that distinguish things.

**Let's start with a very simple question:**

X = -1 , 0 , 1 , 2 , 3 , 4

Y = -3 , -1 , 1 , 3 , 5 , 7

What is the formula that maps X to Y?

→ Y = 2X - 1

e.g. 2(-1) - 1 = -3
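A quick sanity check of the formula in plain Python (the variable names are just for illustration):

```python
xs = [-1, 0, 1, 2, 3, 4]
ys = [-3, -1, 1, 3, 5, 7]

# Verify that Y = 2X - 1 holds for every pair
print(all(2 * x - 1 == y for x, y in zip(xs, ys)))  # True
```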

## Neural Network

**A neural network is a set of functions that can learn patterns.**

```
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
```

- The above code is written in Python using TensorFlow (TF) and a TF API called "Keras".
- **Keras** makes it easy to define neural networks.
- **Dense** defines a layer of connected neurons → here, 1 dense layer with 1 unit, i.e. 1 neuron.
- Successive layers are defined in sequence with **Sequential** → here, just that 1 neuron.
- The shape of what's input to the NN in the 1st layer → 1 value.
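To see why 1 unit with 1 input is so simple, here is what that single neuron computes, written by hand (the function name `neuron` is our own, not a Keras API):

```python
# One dense neuron with one input is just: output = weight * x + bias
def neuron(x, weight, bias):
    return weight * x + bias

print(neuron(3, 2.0, -1.0))  # 5.0 -- with weight 2 and bias -1, it matches Y = 2X - 1
```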

## Important functions:

- Optimizers
- Loss

**Very simple:**

As mentioned before, the neural network has no idea what the relationship between X and Y is. So,

It guesses a formula, for example → Y = 10X - 10, and then uses the data it knows about (the sets of Xs and Ys) to measure how good or bad that guess was. The **LOSS** function measures this and passes the result to the **OPTIMIZER**, which figures out the next guess. In other words, the optimizer decides how to improve the guess using the data from the loss function.

- Each guess should be better than the previous one.
- As the guesses get better and better → accuracy approaches 100% (**convergence**).
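The guess → measure → improve loop can be sketched by hand with plain gradient descent. This is only an illustration of the idea, not what Keras does internally; the names `w`, `b`, and `lr` are our own:

```python
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]

w, b = 10.0, -10.0   # the initial guess: Y = 10X - 10
lr = 0.05            # learning rate: how far each new guess moves

for _ in range(500):
    # Gradients of the mean squared error with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # The "optimizer" step: nudge the guess against the gradient
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # settles at about 2.0 and -1.0
```

Each pass through the loop is one "better guess"; after enough passes the guesses stop changing, which is exactly the convergence described above.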

## Convergence

- A machine learning model reaches convergence when, during training, its loss settles to within an error range around a final value → a model has converged when additional training will not improve it.

## Loss

A loss (or cost) function measures how much a prediction varies from the true value. This tells us how well our model is performing.

Unlike accuracy, loss is not a percentage → it is a summation of the errors made for each sample in the training or validation set. Loss is often used in the training process to find the "best" parameter values for the model (e.g. the weights in a neural network). During training, the goal is to minimize this value.


E.g. → mean squared error.

## mean_squared_error():

Computes the mean squared error between labels and predictions.
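For example, here is the mean squared error of the bad first guess Y = 10X - 10 against the true Ys, computed by hand in plain Python (no Keras):

```python
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]

preds = [10 * x - 10 for x in xs]                   # the guess Y = 10X - 10
errors = [(p - y) ** 2 for p, y in zip(preds, ys)]  # squared error per sample
mse = sum(errors) / len(errors)                     # their mean
print(mse)  # about 195.67 -- a large loss, so a bad guess
```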

## Optimizer

- SGD (Stochastic Gradient Descent).

```
model.compile(optimizer='sgd', loss='mean_squared_error')
```

Now, Let's get back to our example and our sets (X & Y):

```
import numpy as np

Xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
Ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
```

We used the **NumPy** Python library (imported as `np`) for data representation.

```
model.fit(Xs,Ys, epochs=100)
```

As discussed above: `epochs=100` means the training loop runs 100 times → make a guess, measure how good or bad it was with the LOSS function, then let the OPTIMIZER use that data to make another guess, and repeat again and again.

```
print(model.predict(np.array([10.0])))
```

When you run the whole code, you'll notice that the prediction is [[17.862192]] and not 19 as expected. That's because in neural networks we deal in "probability"!

Wait for more blogs explaining the glorious role of probability in the art of data!

**Resources, and they're pretty good to explore more:**

The main reference

Also, this video explains neural networks very well!
