

If you are new to deep learning and would like to understand neural network architecture, or want to tinker with CNNs and ANNs, then the autoencoder is a great place to start. In this post, we will go through a quick introduction to autoencoders.

Visit machine.learns, where you can visualize the working of autoencoders in different configurations and more.

Before jumping into the technical details, let's first look at some of its applications.

  1. Noise reduction: Autoencoders can be used to reduce noise, whether in an image or in sound.
  2. Image compression: Using an autoencoder, an image of 784 pixels can be compressed down to a 64-dimensional representation, and the original image can then be reconstructed from that low-dimensional representation.
  3. Converting a black-and-white image to a colored image.
  4. Removing watermarks from an image.
  5. Fraud detection, e.g. credit card fraud detection.
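As a minimal sketch of the compression pipeline mentioned above, here is the 784 → 64 → 784 data flow in plain numpy. The weights are untrained and random, chosen only to illustrate shapes; a real autoencoder would learn them from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened 28x28 image (784 values) compressed to 64.
input_dim, latent_dim = 784, 64

# Untrained random weights, only to illustrate shapes and data flow.
W_enc = rng.normal(scale=0.01, size=(input_dim, latent_dim))
W_dec = rng.normal(scale=0.01, size=(latent_dim, input_dim))

x = rng.random(input_dim)     # a fake "image"
z = x @ W_enc                 # encoder: 784 -> 64 (the bottleneck)
x_hat = z @ W_dec             # decoder: 64 -> 784 (the reconstruction)

print(z.shape, x_hat.shape)   # (64,) (784,)
```

The point to notice is that `z` is far smaller than `x`, yet the decoder maps it back to the original size.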

What is an Autoencoder?

An autoencoder is an artificial neural network and an unsupervised learning technique. It is mainly used for representation learning, i.e. we come up with an architecture that forces the model to learn a compressed representation of the input data.


This compressed representation is also known as the bottleneck or latent representation. The bottleneck learns the features of the data, i.e. it learns to represent each data point in terms of a small number of features. For example, if the input is a digit dataset, it may learn features like the number of horizontal and vertical edges, and identify each data point based on these features.

Latent Representation (Bottleneck)

(Figure: a plot of the latent space, or bottleneck.)

In such a plot we can see clusters of similar colors. Each cluster represents a similar type of object; for example, the blue dots all represent asphalt. It is important to note that each dot represents one unique object.

So, in total, we can divide an autoencoder into three parts:

  1. Encoder: the part of the architecture that compresses the input, forcing the model to capture the important information in the data.
  2. Bottleneck (latent representation): the compressed representation of the original data.
  3. Decoder: the part that tries to reconstruct the original input from the latent representation.
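The three parts above can be sketched as a tiny trainable autoencoder in numpy. This is a toy linear autoencoder on made-up data (all sizes and values are illustrative assumptions, not from the post): the encoder and decoder are single weight matrices, and a few gradient-descent steps on the reconstruction error show the model learning to compress and reconstruct.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 100 samples of 8-dimensional inputs that really live in 2 dimensions,
# so a 2-unit bottleneck can capture them. (Made-up data, for illustration only.)
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(100, 2)) @ basis

n_in, n_latent = 8, 2
W_enc = rng.normal(scale=0.1, size=(n_in, n_latent))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_latent, n_in))   # decoder weights

def loss(X, W_enc, W_dec):
    X_hat = (X @ W_enc) @ W_dec          # encode, then decode
    return np.mean((X - X_hat) ** 2)     # reconstruction error (MSE)

lr = 0.01
loss_before = loss(X, W_enc, W_dec)
for _ in range(200):
    Z = X @ W_enc                        # bottleneck activations
    X_hat = Z @ W_dec                    # reconstruction
    err = X_hat - X                      # gradient of MSE w.r.t. X_hat (up to a constant)
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

print(loss_before, loss(X, W_enc, W_dec))  # reconstruction error drops after training
```

Because the data genuinely lies in 2 dimensions, the 2-unit bottleneck can reconstruct it well; shrink the bottleneck below the data's true dimensionality and the reconstruction error stays high, which is exactly the compression trade-off the architecture forces.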

This concept of Autoencoders can be applied using architectures like CNNs and LSTMs.


Despite being a very basic architecture, the autoencoder helps a lot in understanding the various concepts behind neural networks and their architectures. So I highly encourage you to learn how to implement autoencoders too.

I hope you learned something valuable.

Visit: machine.learns 😄
