DEV Community

tut_ml

What is Gradient Descent in Neural Network?

Gradient descent is an optimization algorithm used to minimize the cost function.

Now, what is the cost function?

The cost function measures the error between the actual output and the predicted output.

The formula of the cost function is:

C = 1/2 (y − ŷ)²

where y = actual output and ŷ = predicted output.
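As a quick sketch of this formula (the function name `cost` and the example values are mine, not from the article):

```python
def cost(y, y_hat):
    """Squared-error cost for a single prediction: C = 1/2 * (y - y_hat)^2."""
    return 0.5 * (y - y_hat) ** 2

# A prediction of 0.6 against an actual output of 1.0 gives an error of 0.4,
# so the cost is 0.5 * 0.4^2 = 0.08.
print(cost(1.0, 0.6))
# A perfect prediction gives a cost of 0.
print(cost(2.0, 2.0))
```

Note how the cost shrinks toward zero as the prediction approaches the actual output, which is exactly the quantity gradient descent drives down.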

The goal of a neural network is to predict an output that is as close as possible to the actual output.

The lower the cost function, the closer the predicted output is to the actual output.

So, to minimize this cost function, we use gradient descent.

Gradient descent comes in three types:

  1. Stochastic Gradient Descent.
  2. Batch Gradient Descent.
  3. Mini-Batch Gradient Descent.
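The three types differ only in how many training samples feed each weight update: one sample (stochastic), the whole dataset (batch), or a small subset (mini-batch). Here is a minimal sketch using a toy one-weight linear model ŷ = w·x; the helper names (`grad`, `train`), the learning rate, and the made-up dataset are all illustrative, not from the article:

```python
import random

# Toy data generated from y = 3x; gradient descent should recover w close to 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

def grad(w, batch):
    # Gradient of the averaged cost 1/2 * (y - w*x)^2 with respect to w.
    return sum(-(y - w * x) * x for x, y in batch) / len(batch)

def train(batch_size, steps=200, lr=0.1, seed=0):
    random.seed(seed)
    w = 0.0
    for _ in range(steps):
        batch = random.sample(data, batch_size)  # samples used for this update
        w -= lr * grad(w, batch)                 # the gradient-descent step
    return w

print(train(1))          # stochastic: one sample per update
print(train(len(data)))  # batch: the entire dataset per update
print(train(2))          # mini-batch: a small subset per update
```

All three runs recover roughly the same weight; they trade off update noise (stochastic) against cost per step (batch), with mini-batch as the common middle ground.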

Stochastic gradient descent helps with the local-minimum problem: because it updates the weights after every single sample, its noisy updates can help the optimizer escape shallow local minima.

For more details regarding the types of Gradient Descent, you can check this article 👇

Stochastic Gradient Descent- A Super Easy Complete Guide!
