Hi devs,
If you're new to deep learning, you've likely come across the name Keras. But what is it exactly, and how does it work? In this post, I'll explain everything from the ground up and show you a step-by-step example using Keras to build a simple deep learning model. I'll explain key concepts like the MNIST dataset as well, so that you can follow along easily!
1. What is Keras?
Keras is an open-source high-level neural networks API written in Python. It allows developers to quickly and easily build deep learning models using a user-friendly interface. Keras sits on top of more complex deep learning frameworks like TensorFlow, allowing you to focus on building your model without getting bogged down by the underlying complexity.
2. Why Use Keras?
- Ease of Use: Keras is designed to be easy to read and understand, which makes it great for beginners.
- Modular: It's highly modular, meaning you can put together models like building blocks.
- Multi-backend support: Keras originally ran on top of TensorFlow, Theano, or CNTK; the current Keras 3 release supports TensorFlow, JAX, and PyTorch backends, making it flexible.
- Quick Prototyping: You can build, compile, and train deep learning models in just a few lines of code.
3. What is MNIST?
The MNIST dataset is one of the most famous datasets in machine learning. It contains 70,000 images of handwritten digits (0-9). Each image is a grayscale picture, 28x28 pixels in size. The goal is to classify these images into one of the ten digit categories.
Here’s an example of some digits from the MNIST dataset:
[Image: sample handwritten digits 0 through 9]
When working with Keras, you'll often see the MNIST dataset used in tutorials because it's simple, well understood, and great for testing out new models.
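If you want to take a quick look at one of these digits yourself, a minimal sketch like the one below displays the first training image along with its label (it assumes you also have matplotlib installed, which isn't otherwise needed for this tutorial):
import matplotlib.pyplot as plt
import tensorflow as tf

# Load MNIST and display the first training image with its label
(train_images, train_labels), _ = tf.keras.datasets.mnist.load_data()
plt.imshow(train_images[0], cmap='gray')
plt.title(f'Label: {train_labels[0]}')
plt.show()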
4. Building a Simple Neural Network with Keras (Step-by-Step)
Let's now build a simple neural network using Keras to classify these handwritten digits. We'll go through it step by step.
Step 1: Install TensorFlow (Keras comes bundled with TensorFlow)
First, you need to have TensorFlow installed, as Keras is part of TensorFlow in the latest versions. You can install it via pip:
pip install tensorflow
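To make sure the installation worked, you can import TensorFlow and print its version (the exact version number will vary depending on when you install):
import tensorflow as tf
print(tf.__version__)  # prints whatever version pip installed, e.g. 2.x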
Step 2: Import the Required Libraries
We'll import TensorFlow and Keras-specific libraries that we'll need to build and train the model.
import tensorflow as tf
from tensorflow.keras import layers, models
Here, tensorflow.keras is the Keras API within TensorFlow.
Step 3: Load the MNIST Dataset
Keras provides easy access to datasets like MNIST. We’ll load the dataset and split it into training and test sets.
# Load the MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
In this step, train_images and train_labels hold the training data, while test_images and test_labels hold the test data. Each image in train_images is a 28x28 pixel grayscale image, and train_labels contains the digit labels (0-9) corresponding to each image.
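A quick way to confirm what you've loaded is to print the array shapes. MNIST ships with 60,000 training images and 10,000 test images:
print(train_images.shape)  # (60000, 28, 28)
print(train_labels.shape)  # (60000,)
print(test_images.shape)   # (10000, 28, 28)
print(test_labels.shape)   # (10000,)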
Step 4: Preprocess the Data
Next, we need to normalize the pixel values of the images to make model training more efficient. Each pixel value is an integer between 0 and 255; we'll scale these values to be between 0 and 1 by dividing every pixel value by 255.
# Normalize pixel values to be between 0 and 1
train_images = train_images / 255.0
test_images = test_images / 255.0
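As a quick sanity check, the minimum and maximum pixel values should now be 0.0 and 1.0:
print(train_images.min(), train_images.max())  # 0.0 1.0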
Step 5: Build the Model
Now let's build our neural network using Keras. We’ll create a Sequential model, which allows us to stack layers one on top of another.
# Build the model
model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),  # Flatten the 28x28 images into a 1D vector of 784 pixels
    layers.Dense(128, activation='relu'),  # Add a fully-connected (Dense) layer with 128 neurons
    layers.Dense(10, activation='softmax')  # Output layer with 10 neurons (one for each digit 0-9)
])
- Flatten: The Flatten layer converts the 28x28 2D image into a 1D array of 784 values.
- Dense: A Dense layer is a fully-connected layer. Here we have 128 neurons in the hidden layer and 10 neurons in the output layer (because we have 10 digit classes). We use ReLU as the activation function for the hidden layer and softmax for the output layer.
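To see how these layers fit together, you can print a summary of the model. The Flatten layer has no trainable weights, the hidden Dense layer has 784 × 128 weights plus 128 biases (100,480 parameters), and the output layer has 128 × 10 weights plus 10 biases (1,290 parameters), for 101,770 trainable parameters in total:
model.summary()
# Flatten        -> output shape (None, 784), 0 parameters
# Dense (hidden) -> output shape (None, 128), 100,480 parameters
# Dense (output) -> output shape (None, 10),  1,290 parameters
# Total trainable parameters: 101,770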
Step 6: Compile the Model
Next, we need to compile the model. This is where we specify the optimizer, loss function, and evaluation metrics.
# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
- Adam optimizer: This is a popular optimizer for training deep learning models.
- Sparse categorical crossentropy: This loss function is used for multi-class classification problems like ours, where the labels are plain integers (0-9) rather than one-hot vectors.
- Accuracy: We'll use accuracy as a metric to evaluate the model's performance.
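If you prefer to be explicit, the same step can be written with optimizer and loss objects instead of strings; this sketch is equivalent to the version above (the learning rate shown is simply Adam's default):
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])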
Step 7: Train the Model
Now, we’re ready to train the model! We’ll train it for 5 epochs (i.e., the model will go through the entire training dataset 5 times).
# Train the model
model.fit(train_images, train_labels, epochs=5)
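As an optional variation, you can hold out part of the training data for validation while training and keep the returned history object, for example with a 10% validation split:
# Train with a 10% validation split and keep the training history
history = model.fit(train_images, train_labels, epochs=5, validation_split=0.1)
print(history.history['accuracy'])      # training accuracy per epoch
print(history.history['val_accuracy'])  # validation accuracy per epoch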
Step 8: Evaluate the Model
Once the model is trained, we can evaluate its performance on the test data.
# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')
This will give us the model’s accuracy on the test dataset.
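You can also use the trained model to predict individual digits. model.predict returns a row of 10 softmax probabilities for each image, and the index of the largest probability is the predicted digit:
import numpy as np

# Predict the classes of the first 5 test images
predictions = model.predict(test_images[:5])
predicted_digits = np.argmax(predictions, axis=1)
print(predicted_digits)  # predicted digits, e.g. [7 2 1 0 4]
print(test_labels[:5])   # true labels for comparison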
5. What’s Happening Behind the Scenes?
To put it simply:
- Data Preprocessing: We normalized the data to make training more efficient.
- Model Definition: We built a simple feedforward neural network using the Sequential API.
- Compilation: We selected the right loss function and optimizer to guide the model’s learning.
- Training: The model learned to map images to digits over multiple passes through the dataset.
- Evaluation: Finally, we checked how well the model generalized to unseen data.
6. Where to Go From Here?
Keras simplifies the process of building and training neural networks, making it an ideal starting point for beginners. Once you're comfortable with basic models, you can experiment with more complex architectures like convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
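As a taste of what that next step might look like, here's a minimal sketch of a convolutional network for the same MNIST data. Note that Conv2D layers expect a channel dimension, so the 28x28 images are first reshaped to 28x28x1; this is just an illustration, not part of the tutorial above:
# A small CNN sketch for MNIST (adds a channel axis, then stacks conv/pooling layers)
cnn = models.Sequential([
    layers.Reshape((28, 28, 1), input_shape=(28, 28)),  # add the channel dimension
    layers.Conv2D(32, (3, 3), activation='relu'),       # learn 32 local filters
    layers.MaxPooling2D((2, 2)),                        # downsample the feature maps
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
cnn.compile(optimizer='adam',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])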
Feel free to dive deeper into the world of deep learning with Keras, experiment with different models, and push the boundaries of what's possible!
What do you think of Keras so far?