Jordan Wallingsford


Beginner's Guide to TensorFlow

Introduction

I recently began using TensorFlow for an ML project and found that a lot of tutorials and walkthroughs can be long and/or complicated to fully digest. So, I decided I would create a simple, distilled guide to getting your first model up and working in TensorFlow. In this post, I give an introduction to TensorFlow, go over the basics of a tensor, and walk through creating a model using the MNIST handwritten digit dataset.

TensorFlow

TensorFlow is a library built by Google, used with Python, for data preprocessing, analysis, manipulation, and modeling. It is an end-to-end platform built to simplify the process of building, training, testing, and deploying large-scale machine learning models. It is open-source and gives you access to many pre-trained machine learning models that can be used in your own products.

Setup

Assuming you have a Python environment set up, you can simply install TensorFlow with pip:
pip install tensorflow

Once it is installed, you can create a Python file, or open an existing one, and import TensorFlow into the file:
import tensorflow as tf

Now you're ready to begin using TensorFlow.
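
To quickly confirm the install worked, you can print the installed version (this guide assumes TensorFlow 2.x) and check whether TensorFlow can see a GPU:

# Print the installed TensorFlow version
print(tf.__version__)
# List any GPUs TensorFlow can use (an empty list just means it will run on CPU)
print(tf.config.list_physical_devices('GPU'))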

Brief Overview of Tensors

Before we get into using the actual module, I want to give a brief overview of tensors, the core objects that are passed around during any computation or modeling in TensorFlow. They are comparable to NumPy arrays: multi-dimensional arrays with a uniform data type.

You can declare a tensor with tf.Variable(), passing an initial value and, optionally, a dtype:
tensor = tf.Variable("hello world", dtype=tf.string)
tensor_int = tf.Variable(12, dtype=tf.int16)

Tensors also have a rank, or degree, which is simply the number of dimensions the tensor has. Below is an example of declaring both a one-dimensional and a two-dimensional tensor:

one_dim_tensor = tf.Variable(["one"], dtype=tf.string)
two_dim_tensor = tf.Variable([["one", "one"], ["two", "two"]], dtype=tf.string)

As you can see, the one_dim_tensor is just a single-dimensional array, and the two_dim_tensor is a 2D array.
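
If you want to inspect a tensor's rank and shape directly, you can use tf.rank() and the shape attribute. A quick sketch using the tensors declared above:

print(tf.rank(one_dim_tensor))   # rank 1
print(tf.rank(two_dim_tensor))   # rank 2
print(two_dim_tensor.shape)      # (2, 2)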

Implementation

Now that we have an idea of how to install, import, and create tensors with TensorFlow, let's move into building, training, and evaluating our first model.

Keras API

According to TensorFlow's website, Keras is "the high-level API of the TensorFlow platform. It provides an approachable, highly-productive interface for solving machine learning (ML) problems, with a focus on modern deep learning. Keras covers every step of the machine learning workflow, from data processing to hyperparameter tuning to deployment. It was developed with a focus on enabling fast experimentation." We will be using the Keras Sequential model for our work moving forward.

The Dataset

The dataset we will be using is the MNIST dataset of handwritten digits. It is a large collection of handwritten digits, with a training set of 60,000 samples and a test set of 10,000 samples. The digits are size-normalized and have a fixed location in the center of a 28x28-pixel image. Each pixel in the image contains an integer value in the range [0, 255] representing the gray-scale value of that pixel. Our goal for the model is to accurately classify each handwritten digit image as the number it represents.

Importing the Dataset

With TensorFlow and Keras, we get many built-in datasets. This makes it easy to import data to train and test our models. For MNIST, we can simply declare a variable that points to the MNIST dataset in Keras:
mnist = tf.keras.datasets.mnist

Once that is done, we can use the load_data() function to load our training and testing sets/labels into their own variables:

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
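
If you want to sanity-check what load_data() returned, printing a few shapes should line up with the dataset description above:

print(train_images.shape)   # (60000, 28, 28) -> 60,000 images, each 28x28 pixels
print(test_images.shape)    # (10000, 28, 28)
print(train_images.dtype)   # uint8, with pixel values in [0, 255] before normalization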

Then, to normalize the data and bring all of the pixel values into the range [0, 1], we divide both image sets by 255.0:

train_images, test_images = train_images / 255.0, test_images / 255.0
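
To double-check the normalization, the minimum and maximum pixel values should now be 0.0 and 1.0:

print(train_images.min(), train_images.max())   # 0.0 1.0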

Once it is loaded and normalized, we can use Matplotlib to show an example of one of the images:

import numpy as np
import matplotlib.pyplot as plt

plt.figure()
plt.imshow(train_images[2])
plt.colorbar()
plt.show()
print(f"The label for this image is : {train_labels[2]}")

[Image: preview of the training image and its colorbar]
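
If you'd rather preview several samples at once, a small Matplotlib grid works too. This is just a sketch that plots the first nine training images with their labels:

plt.figure(figsize=(6, 6))
for i in range(9):
    plt.subplot(3, 3, i + 1)               # 3x3 grid, position i + 1
    plt.imshow(train_images[i], cmap='gray')
    plt.title(str(train_labels[i]))        # show the label above each digit
    plt.axis('off')
plt.show()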

Awesome. Now we know our dataset actually contains numbers, we've set up the training and testing sets, and we are ready to build the model.

Model Time

To do this, we will be using the Keras Sequential model, a linear stack of layers that we will use to build a simple feed-forward neural network. The first layer flattens each 28x28 input image into a 784-length vector (a single-dimensional array), which serves as the input layer of our network. The second layer is densely connected, meaning every neuron in the previous layer is connected to every neuron in this layer, with 128 neurons. Finally, we have another densely connected layer of 10 output neurons, one for each of the digits 0-9 (the classes we will classify our images into).

model = tf.keras.Sequential()

model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(10))
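
Once the layers are added, model.summary() is a handy way to confirm the shapes and parameter counts. The flattened 28x28 input gives 784 values, so the first Dense layer has 784 * 128 + 128 = 100,480 parameters and the output layer has 128 * 10 + 10 = 1,290:

# Prints each layer's output shape and number of trainable parameters
model.summary()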

Now that the model is built, we will compile it in order to specify our loss function, optimizer, and the metrics we want to measure:

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy']
)

We are using the Adam optimizer (an adaptive optimization algorithm used in place of the traditional stochastic gradient descent algorithm), the SparseCategoricalCrossentropy loss function (used for multi-class classification; with from_logits=True it applies softmax to the raw outputs before computing the cross-entropy loss), and we are measuring the accuracy metric for our model.
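
The string 'adam' uses Adam's default settings. If you want more control, you can pass an optimizer instance instead, for example with an explicit learning rate (0.001 below is just Adam's default, shown for illustration):

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # same as 'adam' with the default learning rate
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy']
)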

Now that we have declared and compiled our model we can fit the training data:
model.fit(train_images, train_labels, epochs=5)

Here we only train for 5 epochs (full passes over the training data) so we don't overfit our model.
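
If you want to keep an eye on overfitting while training, one simple option is to hold out part of the training data as a validation set with the validation_split argument. A sketch holding out 10%:

model.fit(train_images, train_labels, epochs=5, validation_split=0.1)
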
Once the model is done training, we can use the evaluate() function to measure the metrics we declared against our test dataset:
model.evaluate(test_images, test_labels, verbose=2)

Evaluation

As you can see, our model did pretty well, classifying the test images with around 97.69% accuracy.
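
Because the final Dense layer outputs raw logits, you can wrap the trained model with a Softmax layer when you want per-digit probabilities for predictions. This mirrors the approach in the TensorFlow beginner quickstart linked below:

# Attach a Softmax layer so the outputs are probabilities instead of logits
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])

predictions = probability_model.predict(test_images[:1])
print(predictions.argmax())   # predicted digit for the first test image
print(test_labels[0])         # actual label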

Conclusion

This was a very basic introduction to using TensorFlow to build, compile, and train a machine learning model, as well as to import and visualize data. I hope it has been helpful!


References

https://www.tensorflow.org/tutorials/quickstart/beginner

https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam

https://developers.google.com/machine-learning/glossary
