Gradient descent is used to minimize the cost function.

**Now, what is the cost function?**

The cost function measures the error between the actual output and the predicted output.

For a single example, the squared-error cost function is:

**cost = 1/2 (y − ŷ)²**

here, y = actual output

ŷ = predicted output

The goal of a neural network is to predict an output that is as close as possible to the actual output.

The lower the cost function, the closer the predicted output is to the actual output.
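As a quick sketch, this cost can be computed in a few lines (NumPy and the variable names here are my own choices for illustration, not from the article):

```python
import numpy as np

def cost(y, y_hat):
    """Squared-error cost: 1/2 * (y - y_hat)^2, averaged over examples."""
    return 0.5 * np.mean((y - y_hat) ** 2)

y = np.array([1.0, 2.0, 3.0])
print(cost(y, y))                          # a perfect prediction gives 0.0
print(cost(y, np.array([1.1, 1.9, 3.2])))  # a close prediction gives a small value
```

A perfect prediction gives zero cost, and the cost grows as predictions drift away from the targets.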

So, to minimize this cost function, we use Gradient Descent.
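A minimal sketch of how gradient descent minimizes this cost, using a hypothetical one-parameter model ŷ = w·x (the model, data, and learning rate are assumptions for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x            # toy targets generated with a "true" weight of 2

w = 0.0                # initial guess
lr = 0.05              # learning rate (assumed value)
for _ in range(200):
    y_hat = w * x
    # Gradient of the mean of 1/2 * (y - y_hat)^2 with respect to w
    grad = np.mean((y_hat - y) * x)
    w -= lr * grad     # step in the downhill direction

print(round(w, 3))     # prints 2.0 -- w has converged to the true weight
```

Each step moves w opposite to the gradient of the cost, so the cost shrinks until the predictions match the targets.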

Gradient Descent comes in three types:

- Stochastic Gradient Descent.
- Batch Gradient Descent.
- Mini-Batch Gradient Descent.

Stochastic Gradient Descent updates the weights after each individual training example; the noise in these updates can help the optimizer escape shallow local minima.
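The three types differ only in how many examples feed each weight update. A sketch of the mini-batch variant (the batch size, model, and data are my own choices, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X            # toy targets with a true weight of 3

def step(w, xb, yb, lr=0.1):
    """One gradient-descent update of w for the model y_hat = w * x."""
    grad = np.mean((w * xb - yb) * xb)
    return w - lr * grad

w = 0.0
batch = 10             # batch = len(y) gives Batch GD; batch = 1 gives Stochastic GD
for epoch in range(50):
    for start in range(0, len(y), batch):
        w = step(w, X[start:start + batch], y[start:start + batch])

print(round(w, 2))     # converges toward 3.0
```

Setting `batch` to the full dataset size recovers Batch Gradient Descent (one precise update per epoch), while `batch = 1` recovers Stochastic Gradient Descent (many noisy updates per epoch); mini-batch sits between the two.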

For more details regarding the types of Gradient Descent, you can check this article 👇
