Auto-Encoder: What is it? & What is it used for?

Autoencoders have a simple but effective task: the output they generate must be equal to the input they receive. The goal is to compress the network's input into a smaller form through a set of central nodes (h, also called the space of latent variables), fewer in number than the input nodes, and then to reconstruct the output from this compressed representation.

This objective therefore makes it possible to obtain a compressed form of the input, and with it an encoder. An Autoencoder network basically consists of the following two components: an encoder and a decoder.


Encoder is the component of the network that compresses the input into the space of latent variables; it can be represented by the encoding function h = f(x).
Decoder is the component that reconstructs the input from the information previously collected; it is represented by the decoding function r = g(h).
In other words, the encoder compresses the input to generate the code, while the decoder reconstructs the input using only that generated code. The effectiveness of the network is measured by evaluating the similarity between the input and the generated output: the more similar they are, the better the network has learned to compress and decompress the information it receives.
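
To make the two functions f and g concrete, here is a minimal sketch of such a network in PyTorch. The layer sizes, the 784-dimensional input, and the 32-dimensional code are illustrative assumptions, not values taken from the article:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: h = f(x), compresses the input into the latent space
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: r = g(h), reconstructs the input from the code alone
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # assumes inputs scaled to [0, 1]
        )

    def forward(self, x):
        h = self.encoder(x)   # code
        r = self.decoder(h)   # reconstruction
        return r
```

The reconstruction r is then compared with the original x (for example with a mean squared error) to measure how well the network has learned to compress and decompress.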

Uses of Autoencoders

To date, noise reduction and dimensionality reduction for data visualization are considered the most interesting applications of Autoencoders. By appropriately setting the dimensionality and the related constraints on the dispersion of the data, an Autoencoder can produce projections into subspaces of greater interest than those obtained with linear methods such as principal component analysis (PCA).

Autoencoders train automatically from sample data. This makes it easy to train the network to perform well on similar types of input, without the need to generalize: the compression will give good results only on data similar to those used in the training set, and will not be very effective on different data. A training sketch follows below.
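
As a sketch of how such a network trains from sample data, the loop below minimizes the reconstruction error with mean squared error. Here `loader` is a hypothetical iterable yielding batches of flattened inputs of shape (batch, 784), and the 2-dimensional code is an assumption chosen only so the learned projection can be plotted for visualization, in the same spirit as a PCA projection:

```python
import torch
import torch.nn as nn

# Assumes the Autoencoder class sketched earlier and a hypothetical `loader`.
model = Autoencoder(input_dim=784, latent_dim=2)  # 2-D code for visualization
criterion = nn.MSELoss()  # similarity between input and reconstruction
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for x in loader:
        reconstruction = model(x)
        loss = criterion(reconstruction, x)  # output should equal the input
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# After training, the 2-D codes can be plotted directly for visualization.
with torch.no_grad():
    codes = model.encoder(x)  # shape (batch, 2)
```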

Other compression techniques, such as JPEG, will perform this function better. These neural networks are not only trained to preserve as much information as possible as it passes through the encoder and then the decoder, but also to build new representations of the data that acquire different types of useful properties.
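
One example of such a property is the noise reduction mentioned above: in a denoising setup, the input is corrupted with noise but the reconstruction target remains the clean original. A minimal sketch, assuming the `Autoencoder` model from the earlier example:

```python
import torch
import torch.nn as nn

def denoising_step(model, x, noise_std=0.1):
    # Corrupt the input with Gaussian noise (noise_std is an assumed value)
    noisy_x = x + noise_std * torch.randn_like(x)
    reconstruction = model(noisy_x)
    # The target is the clean input, so the code must learn to ignore the noise
    return nn.functional.mse_loss(reconstruction, x)
```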
