Learn Tesseract

Introduction to the Adam Optimization Algorithm

The Adam optimization algorithm is an extension to stochastic gradient descent that has recently seen broader adoption for deep learning applications in computer vision and natural language processing.
(https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/)
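To make that concrete, here is a minimal sketch of a single Adam update in plain NumPy. The function name `adam_step` is my own, and the hyperparameters (lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8) are just the commonly quoted defaults, not values taken from the article above:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # m: running average of the gradient (first moment)
    m = beta1 * m + (1 - beta1) * grad
    # v: running average of the squared gradient (second moment)
    v = beta2 * v + (1 - beta2) * grad ** 2
    # bias-correct both averages (t is the step count, starting at 1)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # scale the step by the smoothed gradient magnitude
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```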

...But what is stochastic gradient descent?

It is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It is called stochastic because the method uses randomly selected (or shuffled) samples to evaluate the gradients, so SGD can be regarded as a stochastic approximation of gradient descent optimization.
(wiki: https://en.wikipedia.org/wiki/Stochastic_gradient_descent)
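In other words: instead of computing the gradient over the whole dataset, you take a small step after looking at one shuffled sample at a time. A tiny, self-contained sketch, fitting y ≈ 3x with a single weight (the toy data and learning rate are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 3.0 * x + rng.normal(scale=0.1, size=1000)   # toy data: y = 3x + noise

w, lr = 0.0, 0.01
for epoch in range(5):
    for i in rng.permutation(len(x)):            # shuffled samples -> "stochastic"
        grad = 2 * (w * x[i] - y[i]) * x[i]      # gradient of (w*x[i] - y[i])^2
        w -= lr * grad                           # step against the gradient
print(w)                                         # ends up close to 3
```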

What do I do with this information? Tesseract's newer recognition engine is built around an LSTM network, and that network is trained with exactly this kind of optimizer:

An RNN using LSTM units can be trained in a supervised fashion, on a set of training sequences, using an optimization algorithm, like gradient descent, combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.
(wiki: https://en.wikipedia.org/wiki/Long_short-term_memory)
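Putting the two quotes together, this is roughly what supervised LSTM training looks like, sketched here in PyTorch: the forward pass runs the sequence through the LSTM, loss.backward() performs backpropagation through time, and torch.optim.Adam applies the update rule from the top of the post. The shapes and hyperparameters are arbitrary placeholders, not taken from any real dataset:

```python
import torch
import torch.nn as nn

# made-up shapes: 32 sequences, 10 time steps, 8 features per step, 1 target value
inputs = torch.randn(32, 10, 8)
targets = torch.randn(32, 1)

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
optimizer = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    optimizer.zero_grad()
    out, _ = lstm(inputs)            # hidden states for every time step
    pred = head(out[:, -1, :])       # predict from the last time step
    loss = loss_fn(pred, targets)
    loss.backward()                  # backpropagation through time
    optimizer.step()                 # Adam adjusts every LSTM weight
```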
