Building an Autoencoder for Generative Models

Cover image by Justin Lincoln, on Flickr

Learn about autoencoders in this article by Patrick D. Smith, the data science lead for Excella in Arlington, Virginia, where he founded the data science and machine learning team.

Autoencoders

Autoencoders, and the encoder/decoder frameworks they are built on, are the inspiration behind generative models. They are a self-supervised technique for representation learning, in which a network learns a representation of its input so that it can generate new data resembling that input. In this section, we'll learn about their architecture and uses as an introduction to the generative networks they inspire.

Network Architecture

Autoencoders work by taking an input and compressing it into a smaller vector representation that is later used to reconstruct the input. They do this by using an encoder to impose an information bottleneck on incoming data, and then utilizing a decoder to recreate the input data from that compressed representation. This is based on the idea that data contains structures (correlations, and so on) that exist but are not readily apparent. Autoencoders are a means of learning these relationships automatically, without being told explicitly what to look for.

Structurally, autoencoders consist of an input layer, a hidden layer, and an output layer, as demonstrated in the following diagram:

[Figure: autoencoder structure, showing the input layer, hidden encoding layer, and output layer]

The encoder learns to preserve as much of the relevant information as possible in the limited encoding, and intelligently discards the irrelevant parts. This forces the network to maintain only the data required to recreate the input. Since the task of an autoencoder is to recreate its input at the output, it is trained with a loss function known as reconstruction loss, usually combined with a regularization term to prevent overfitting. Reconstruction losses are typically mean squared error or cross-entropy loss functions that penalize the network for creating an output that is markedly different from the input.
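
As a rough sketch of what those two options look like in TensorFlow (here, x is the input batch and x_prime the reconstruction, both defined later in this article; x_prime_logits is a hypothetical pre-activation output of the decoder, introduced only for this illustration):

# Mean squared error: the average squared pixel-wise difference
mse_loss = tf.reduce_mean(tf.pow(x - x_prime, 2))

# Cross entropy: treats each pixel as a Bernoulli variable; note that it
# must be computed on the decoder's pre-activation outputs (logits)
ce_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_prime_logits))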

The information bottleneck is the key to making this reconstruction loss useful; if there were no bottleneck, the network could simply pass information straight from the input to the output, memorizing the data rather than learning a generalizable representation. The ideal autoencoder is both of the following:

  • Sensitive enough to its input data that it can accurately reconstruct it
  • Insensitive enough to its input data that the model doesn't suffer from overfitting that data

The process of going from a high input dimension to a low input dimension in the encoder is a dimensionality reduction method that is almost identical to principal component analysis (PCA). The difference lies in the fact that PCA is restricted to linear manifolds, while autoencoders can handle nonlinear manifolds. A manifold is a continuous, non-intersecting surface; in the context of neural networks and loss functions, you can think of it as a topological surface on which the data lies.
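
To make the comparison concrete, here is a minimal sketch of PCA performing the same 784-to-128 reduction our encoder will perform; this assumes scikit-learn is installed and that the MNIST images from the next section are loaded as mnist.train.images (neither is part of the original code):

from sklearn.decomposition import PCA

# Linear projection of the 784-pixel images down to 128 components
pca = PCA(n_components=128)
codes = pca.fit_transform(mnist.train.images)      # shape: (55000, 128)

# PCA "decodes" by inverting its linear projection; an autoencoder's
# decoder learns a nonlinear version of this mapping
reconstructions = pca.inverse_transform(codes)     # shape: (55000, 784)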

Building an autoencoder

If you're thinking that the task of reconstructing an output doesn't appear that useful, you're not alone. What exactly do we use these networks for? Autoencoders help to extract features when there are no known labeled features at hand. To illustrate how this works, let's walk through an example using TensorFlow. We're going to reconstruct the MNIST dataset here, and, later on, we will compare the performance of the standard autoencoder against the variational autoencoder in relation to the same task.

Let's get started with our imports and data. MNIST is contained natively within TensorFlow, so we can easily import it:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
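
As a quick, optional sanity check (not part of the original code), you can confirm what was loaded:

# 55,000 training images, each flattened into a 784-pixel vector,
# paired with one-hot labels of length 10
print(mnist.train.images.shape)   # (55000, 784)
print(mnist.train.labels.shape)   # (55000, 10)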

For ease, we can build the autoencoder with the tf.layers library. We'll want our autoencoder architecture to follow a convolutional/deconvolutional-style pattern, where the first layer of the encoder matches the size of the input and the subsequent layers squash the data into a smaller and smaller representation. The decoder will be the same architecture reversed, starting with the small representation and working back up to the input size.

All together, we want it to look something like the following:

[Figure: the full autoencoder, with layer sizes 784, 256, 128 (the encoding), 256, and 784]

Let's start with the encoder; we'll define an initializer for the weight and bias factors first, and then define the encoder as a function that takes an input, x. We'll then use the tf.layers.dense function to create standard, fully connected neural network layers. The encoder will have three layers, with the first layer size matching the input dimensions of the input data (784), and the subsequent layers getting continually smaller:

initializer = tf.contrib.layers.xavier_initializer()


def encoder(x):
    # First layer matches the 784-pixel input dimension
    input_layer = tf.layers.dense(inputs=x, units=784,
                                  activation=tf.nn.relu,
                                  kernel_initializer=initializer,
                                  bias_initializer=initializer)
    # Squash the representation down to 256 units
    z_prime = tf.layers.dense(inputs=input_layer, units=256,
                              activation=tf.nn.relu,
                              kernel_initializer=initializer,
                              bias_initializer=initializer)
    # The 128-unit encoding: our information bottleneck
    z = tf.layers.dense(inputs=z_prime, units=128,
                        activation=tf.nn.relu,
                        kernel_initializer=initializer,
                        bias_initializer=initializer)
    return z

Next, let's build our decoder; it uses the same layer type and initializer as the encoder, only now we invert the layers, so that the first layer of the decoder is the smallest and the last is the largest:

def decoder(x):
    # First decoder layer works on the 128-unit encoding
    x_prime_one = tf.layers.dense(inputs=x, units=128,
                                  activation=tf.nn.relu,
                                  kernel_initializer=initializer,
                                  bias_initializer=initializer)
    # Expand the representation back up to 256 units
    x_prime_two = tf.layers.dense(inputs=x_prime_one, units=256,
                                  activation=tf.nn.relu,
                                  kernel_initializer=initializer,
                                  bias_initializer=initializer)
    # Output layer reconstructs the full 784-pixel image
    output_layer = tf.layers.dense(inputs=x_prime_two, units=784,
                                   activation=tf.nn.relu,
                                   kernel_initializer=initializer,
                                   bias_initializer=initializer)
    return output_layer
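
One design note: the output layer above uses ReLU, as in the book's code. Since MNIST pixel values lie in [0, 1], a sigmoid activation on the final layer is a common alternative that bounds the reconstruction to that same range; a variant of the last layer would look like this (a sketch, not part of the original code):

# Alternative output layer: sigmoid keeps reconstructed pixels in [0, 1]
output_layer = tf.layers.dense(inputs=x_prime_two, units=784,
                               activation=tf.nn.sigmoid,
                               kernel_initializer=initializer,
                               bias_initializer=initializer)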

Before we get to training, let's define some hyperparameters that will be needed during the training cycle. We'll set the size of our input, the learning rate, the number of training steps, the batch size for the training cycle, and how often we want to display information about our training progress:

input_dim = 784        # each MNIST image is 28 x 28 = 784 pixels
learning_rate = 0.001
num_steps = 1000
batch_size = 256
display = 1            # print the loss every `display` steps

We'll then define the placeholder for our input data so that we can compile the model:

x = tf.placeholder("float", [None, input_dim])

And subsequently, we compile the model and the optimizer:

# Construct the full autoencoder
z = encoder(x)

# x_prime represents our predicted distribution
x_prime = decoder(z)

# Define the loss function and the optimizer
loss = tf.reduce_mean(tf.pow(x - x_prime, 2))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(loss)

Lastly, we'll code up the training cycle. Start a TensorFlow session and iterate over the batches, computing the loss at each step:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Training loop
    for i in range(1, num_steps + 1):

        # Feed batches of MNIST data (the labels are discarded)
        batch_x, _ = mnist.train.next_batch(batch_size)

        # Run the optimization process
        _, l = sess.run([optimizer, loss], feed_dict={x: batch_x})

        # Display the loss every `display` steps
        if i % display == 0 or i == 1:
            print('Step %i: Loss: %f' % (i, l))

For this particular example, let's add a little something more to this process: a way to plot the reconstructed images alongside their original versions. Keep in mind that this code is still contained within the training session, just outside of the training loop:

    n = 4
    canvas_orig = np.empty((28 * n, 28 * n))
    canvas_recon = np.empty((28 * n, 28 * n))

    for i in range(n):

        # Grab a batch of test digits (the labels are discarded)
        batch_x, _ = mnist.test.next_batch(n)

        # Encode and decode each individual written digit
        g = sess.run(x_prime, feed_dict={x: batch_x})

        # Draw the original digits
        for j in range(n):
            canvas_orig[i * 28:(i + 1) * 28, j * 28:(j + 1) * 28] = \
                batch_x[j].reshape([28, 28])

        # Draw the reconstructed digits
        for j in range(n):
            canvas_recon[i * 28:(i + 1) * 28, j * 28:(j + 1) * 28] = \
                g[j].reshape([28, 28])

    # Plot the original images against their reconstructions
    print("Original Images")
    plt.figure(figsize=(n, n))
    plt.imshow(canvas_orig, origin="upper", cmap="gray")
    plt.show()

    print("Reconstructed Images")
    plt.figure(figsize=(n, n))
    plt.imshow(canvas_recon, origin="upper", cmap="gray")
    plt.show()

After training, you should end up with a result along the lines of the following, with the actual digits on the left, and the reconstructed digits on the right:

[Figure: original digits on the left, reconstructed digits on the right]

So what have we done here? By training the autoencoder on unlabeled digits, we've done the following:

  • Learned the latent features of the dataset without having explicit labels
  • Successfully learned the distribution of the data and reconstructed images from scratch, using that distribution

Now, let's say that we wanted to take this further and generate or classify new digits that we haven't seen yet. To do this, we could remove the decoder and attach a classifier or generator network:

[Figure: the encoder with its decoder replaced by a classifier network]

The encoder, therefore, becomes a means of initializing a supervised training model.
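
As a rough illustration of that idea, here is a minimal sketch of attaching a softmax classifier head to the 128-unit encoding. The names y, logits, and classifier_loss are introduced here for illustration; in practice, you would restore the trained encoder weights (for example, with tf.train.Saver or variable scope reuse) rather than train the encoder from scratch:

# Hypothetical classifier head attached to the encoder defined above
y = tf.placeholder("float", [None, 10])      # one-hot digit labels

z = encoder(x)                               # 128-dimensional encoding
logits = tf.layers.dense(inputs=z, units=10,
                         kernel_initializer=initializer,
                         bias_initializer=initializer)

classifier_loss = tf.losses.softmax_cross_entropy(onehot_labels=y,
                                                  logits=logits)
classifier_optimizer = tf.train.RMSPropOptimizer(learning_rate) \
                               .minimize(classifier_loss)

Training this head on labeled (image, label) pairs uses the encoder as the initialization for a supervised model, as described above.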

If you found this article interesting, you can explore Hands-On Artificial Intelligence for Beginners to grasp the fundamentals of artificial intelligence and build your own intelligent systems with ease. The book will teach you what artificial intelligence is and how to design and build smart applications.

Top comments (2)

TheRealZeljko

Depending on the shape of the data, I could imagine that a function that can produce negative outputs would be useful, thinking of a leaky ReLU, sigmoid, or even a linear function.

Thank you for this interesting post :)

Manuel Romero

👏👏👏