<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Paul Chibueze</title>
    <description>The latest articles on DEV Community by Paul Chibueze (@chibueze).</description>
    <link>https://dev.to/chibueze</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F842502%2F2a37d9b5-785a-4a04-acec-13b8e4b004a9.jpg</url>
      <title>DEV Community: Paul Chibueze</title>
      <link>https://dev.to/chibueze</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chibueze"/>
    <language>en</language>
    <item>
      <title>Building and Training a Neural Network with PyTorch: A Step-by-Step Guide</title>
      <dc:creator>Paul Chibueze</dc:creator>
      <pubDate>Fri, 26 Jul 2024 08:36:36 +0000</pubDate>
      <link>https://dev.to/chibueze/building-and-training-a-neural-network-with-pytorch-a-step-by-step-guide-o52</link>
      <guid>https://dev.to/chibueze/building-and-training-a-neural-network-with-pytorch-a-step-by-step-guide-o52</guid>
      <description>&lt;p&gt;Imagine a world where machines can not only see but also understand and classify images as effortlessly as humans. This capability has been at the heart of many breakthroughs in artificial intelligence, revolutionizing fields from healthcare to retail.&lt;/p&gt;

&lt;p&gt;In recent years, advancements in &lt;a href="https://www.sciencedirect.com/topics/computer-science/deep-learning" rel="noopener noreferrer"&gt;deep learning&lt;/a&gt; have enabled computers to recognize objects, identify faces, and even understand emotions depicted in images. One of the pivotal tasks in this domain is &lt;a href="https://desktop.arcgis.com/en/arcmap/latest/extensions/spatial-analyst/image-classification/what-is-image-classification-.htm#:~:text=Image%20classification%20refers%20to%20the,used%20to%20create%20thematic%20maps." rel="noopener noreferrer"&gt;image classification&lt;/a&gt; — teaching computers to categorize images into predefined classes based on their visual features.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll embark on a journey to build and train a neural network using &lt;a href="https://pytorch.org/" rel="noopener noreferrer"&gt;PyTorch&lt;/a&gt;. We’ll start by preparing our data: transforming raw images into a format suitable for training our model. Then, we’ll delve into defining our neural network architecture, which will learn to recognize various clothing items based on their pixel patterns. For this project we will use the &lt;a href="https://www.kaggle.com/datasets/zalando-research/fashionmnist" rel="noopener noreferrer"&gt;FashionMNIST dataset&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;FashionMNIST, a dataset of grayscale images of clothing items, serves as an excellent playground for learning and mastering image classification techniques. Like its predecessor MNIST (which consists of handwritten digits), FashionMNIST challenges us to distinguish between different types of apparel with the aid of deep learning models. PyTorch provides tools to download and load such datasets conveniently.&lt;/p&gt;

&lt;p&gt;As we progress, we’ll explore how to train our model using backpropagation and gradient descent, evaluate its performance on unseen data, and ensure it generalizes well to new examples.&lt;/p&gt;

&lt;p&gt;Finally, we’ll learn how to save our trained model’s parameters, enabling us to deploy it in real-world applications or continue refining its capabilities.&lt;/p&gt;

&lt;p&gt;I guess you are already excited; I am too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a Neural Network?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://www.sciencedirect.com/topics/earth-and-planetary-sciences/artificial-neural-network" rel="noopener noreferrer"&gt;neural network&lt;/a&gt; is a series of interconnected nodes, inspired by the structure of the human brain. It learns by processing data and adjusting its internal connections based on the results. In this case, the neural network will learn to recognize patterns in images of clothing and predict the corresponding category (t-shirt, dress, etc.).&lt;/p&gt;

&lt;p&gt;Throughout this tutorial, we will cover the essential steps in deep learning, especially for building classification models. The steps we will employ include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Preparation&lt;/strong&gt;: We will download and prepare our dataset, transforming it into a format suitable for training with PyTorch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Model Definition&lt;/strong&gt;: We will also define a neural network architecture using PyTorch’s &lt;code&gt;nn.Module&lt;/code&gt; that will learn to classify images into different clothing categories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Training and Evaluation&lt;/strong&gt;: We will then implement the training loop to optimize our model’s parameters using gradient descent, evaluate its performance on test data, and monitor its progress.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Model Persistence&lt;/strong&gt;: You will also see how to save and load trained models, allowing you to reuse them for predictions or further training.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of this journey, you will not only have a grasp of the fundamental concepts of deep learning with PyTorch but also a practical understanding of how to apply them to real-world datasets.&lt;/p&gt;

&lt;p&gt;Let’s embark on this learning adventure together!&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Dataset Preparation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The first step is to prepare our dataset. Like I initially said, we will use the &lt;a href="https://www.kaggle.com/datasets/zalando-research/fashionmnist" rel="noopener noreferrer"&gt;FashionMNIST&lt;/a&gt; dataset, which is readily available in PyTorch’s torchvision library. This dataset contains 70,000 grayscale images of 10 different classes of clothing items.&lt;/p&gt;
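&lt;p&gt;For reference, the ten classes map to the integer labels 0 through 9. A quick sketch of that mapping in plain Python (the order below is the one the dataset uses):&lt;/p&gt;

```python
# FashionMNIST label-to-class mapping (labels 0-9)
classes = [
    "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
    "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot",
]

def label_to_name(label):
    """Translate an integer label from the dataset into a readable class name."""
    return classes[label]

print(label_to_name(9))  # Ankle boot
```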

&lt;p&gt;We start by importing the necessary libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;torch.utils.data&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DataLoader&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;torchvision&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datasets&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;torchvision.transforms&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ToTensor&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;torch&lt;/code&gt;: The core PyTorch library for building and training neural networks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;nn&lt;/code&gt;: A submodule of &lt;code&gt;torch&lt;/code&gt; containing building blocks for neural networks like layers and activation functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;DataLoader&lt;/code&gt;: A class from &lt;code&gt;torch.utils.data&lt;/code&gt; that helps us load and iterate over datasets in batches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;datasets&lt;/code&gt;: A submodule of &lt;code&gt;torchvision&lt;/code&gt; providing access to popular datasets like FashionMNIST, which it can download on demand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ToTensor&lt;/code&gt;: A data transform that converts images to PyTorch tensors.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After we are done importing libraries, it’s time to download the training and test datasets from the FashionMNIST platform and load them into our environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# download training data from the FashionMNISTdataset.
&lt;/span&gt;&lt;span class="n"&gt;training_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datasets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;FashionMNIST&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;train&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;transform&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ToTensor&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;download&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# download test data from the FashionMNIST dataset.
&lt;/span&gt;&lt;span class="n"&gt;test_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datasets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;FashionMNIST&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;train&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;transform&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ToTensor&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;download&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above code downloads the FashionMNIST dataset. We request the training split by setting &lt;code&gt;train=True&lt;/code&gt; and the test split with &lt;code&gt;train=False&lt;/code&gt;. We also apply the &lt;code&gt;ToTensor&lt;/code&gt; transform, which converts the raw image data (pixel intensities between 0 and 255) into PyTorch tensors with values scaled to the range 0 to 1.&lt;/p&gt;
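&lt;p&gt;To make that scaling concrete, here is a tiny pure-Python sketch of what &lt;code&gt;ToTensor&lt;/code&gt; does to pixel intensities (the real transform also rearranges each image into a [channel, height, width] tensor):&lt;/p&gt;

```python
def scale_pixels(pixels):
    """Mimic ToTensor's intensity scaling: map 0-255 integers to 0.0-1.0 floats."""
    return [p / 255.0 for p in pixels]

row = [0, 128, 255]       # raw grayscale intensities
print(scale_pixels(row))  # [0.0, ~0.502, 1.0]
```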

&lt;h3&gt;
  
  
  Data Loaders
&lt;/h3&gt;

&lt;p&gt;The next step is to define our data loaders. Data loaders load the dataset in batches, making it easier to manage memory and speeding up the training of our model. To define the data loaders, we first declare the batch size.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;batch_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;

&lt;span class="c1"&gt;# create data loaders
&lt;/span&gt;&lt;span class="n"&gt;training_loader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DataLoader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;training_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;test_loader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DataLoader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;test_loader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Shape of X [N C H W]: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Shape of y: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;break&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We first define the batch size, which controls how many images are processed at once during training. We then create data loaders for both the training and test data; they will feed the data into the neural network in batches during training and evaluation. (In practice you would usually also pass &lt;code&gt;shuffle=True&lt;/code&gt; for the training loader so the model sees the examples in a different order each epoch.)&lt;/p&gt;

&lt;p&gt;We then use a &lt;code&gt;for&lt;/code&gt; loop to iterate through the batches of data and print the shapes of the input images (&lt;code&gt;X&lt;/code&gt;) and their corresponding labels (&lt;code&gt;y&lt;/code&gt;). We see that &lt;code&gt;X&lt;/code&gt; has a shape of &lt;code&gt;[batch_size, channel, height, width]&lt;/code&gt;, where &lt;code&gt;batch_size&lt;/code&gt; is 64 in this case, &lt;code&gt;channel&lt;/code&gt; is 1 (grayscale images), and &lt;code&gt;height&lt;/code&gt; and &lt;code&gt;width&lt;/code&gt; are both 28 (the images are 28x28 pixels). The labels &lt;code&gt;y&lt;/code&gt; are a one-dimensional tensor of integers representing the clothing categories.&lt;/p&gt;
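&lt;p&gt;You can work out how many batches each loader will yield with a little arithmetic: FashionMNIST has 60,000 training images and 10,000 test images, and the last batch simply holds whatever is left over. A sketch in plain Python:&lt;/p&gt;

```python
import math

def batch_stats(num_examples, batch_size):
    """Number of batches a DataLoader yields, plus the size of the last (partial) batch."""
    num_batches = math.ceil(num_examples / batch_size)
    last_batch = num_examples - (num_batches - 1) * batch_size
    return num_batches, last_batch

print(batch_stats(60_000, 64))  # (938, 32) -> training loader
print(batch_stats(10_000, 64))  # (157, 16) -> test loader
```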

&lt;p&gt;Since we have defined and configured our data loaders for both the training and test datasets, let’s now choose the device our model will run on; in our case that will be the CPU.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# get cpu, gpu or mps device for training.
&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cuda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_available&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mps&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;backends&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mps&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_available&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cpu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Using &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; device&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# OUTPUT&lt;/span&gt;

Using cpu device
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our code checks whether a CUDA GPU or an MPS device is available and uses it for training if possible; otherwise it defaults to the CPU. Using a GPU or MPS can significantly speed up training, since training large neural networks requires substantial compute.&lt;/p&gt;

&lt;p&gt;With that settled, we can continue with the next step: defining our network.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Defining the Neural Network Model&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We define a simple fully connected neural network. Our model will have three layers with &lt;a href="https://builtin.com/machine-learning/relu-activation-function#:~:text=ReLU%2C%20short%20for%20rectified%20linear,as%20the%20rectifier%20activation%20function" rel="noopener noreferrer"&gt;ReLU&lt;/a&gt; activations in between.&lt;/p&gt;

&lt;p&gt;To define a neural network in PyTorch, we create a class that inherits from &lt;code&gt;nn.Module&lt;/code&gt;. We define the layers of the network in the &lt;code&gt;__init__&lt;/code&gt; method and specify how data will pass through the network in the &lt;code&gt;forward&lt;/code&gt; method. To accelerate operations in the neural network, we move it to the GPU or MPS if available.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;NeuralNetwork&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Module&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;NeuralNetwork&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Flatten&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Flatten&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;linear_relu_stack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Linear&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ReLU&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Linear&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ReLU&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Linear&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;forward&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Flatten&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;logits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;linear_relu_stack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;logits&lt;/span&gt;

&lt;span class="n"&gt;device&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;device&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cuda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_available&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cpu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;NeuralNetwork&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# OUTPUT&lt;/span&gt;

NeuralNetwork&lt;span class="o"&gt;(&lt;/span&gt;
  &lt;span class="o"&gt;(&lt;/span&gt;Flatten&lt;span class="o"&gt;)&lt;/span&gt;: Flatten&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;start_dim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1, &lt;span class="nv"&gt;end_dim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-1&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;(&lt;/span&gt;linear_relu_stack&lt;span class="o"&gt;)&lt;/span&gt;: Sequential&lt;span class="o"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;(&lt;/span&gt;0&lt;span class="o"&gt;)&lt;/span&gt;: Linear&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;in_features&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;784, &lt;span class="nv"&gt;out_features&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512, &lt;span class="nv"&gt;bias&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;: ReLU&lt;span class="o"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;(&lt;/span&gt;2&lt;span class="o"&gt;)&lt;/span&gt;: Linear&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;in_features&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512, &lt;span class="nv"&gt;out_features&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512, &lt;span class="nv"&gt;bias&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;(&lt;/span&gt;3&lt;span class="o"&gt;)&lt;/span&gt;: ReLU&lt;span class="o"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;(&lt;/span&gt;4&lt;span class="o"&gt;)&lt;/span&gt;: Linear&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;in_features&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512, &lt;span class="nv"&gt;out_features&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10, &lt;span class="nv"&gt;bias&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things you should know about our neural network:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;nn.Module&lt;/strong&gt;: Base class for all neural network modules in PyTorch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;nn.Flatten&lt;/strong&gt;: Flattens each input image into a one-dimensional tensor (here, 28x28 becomes 784).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;nn.Sequential&lt;/strong&gt;: A sequential container to define the layers of the model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;nn.Linear&lt;/strong&gt;: Fully connected layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;nn.ReLU&lt;/strong&gt;: ReLU activation function.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
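&lt;p&gt;As a sanity check on the printed architecture, you can count the parameters by hand: each &lt;code&gt;nn.Linear&lt;/code&gt; layer holds one weight per input-output pair plus one bias per output. A sketch in plain Python:&lt;/p&gt;

```python
def linear_params(n_in, n_out):
    """Weights plus biases for one fully connected (nn.Linear) layer."""
    return n_in * n_out + n_out

# the three Linear layers in our model: 784 -> 512 -> 512 -> 10
layers = [(28 * 28, 512), (512, 512), (512, 10)]
total = sum(linear_params(i, o) for i, o in layers)
print(total)  # 669706
```

&lt;p&gt;So this small fully connected model already has roughly 670 thousand trainable parameters.&lt;/p&gt;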

&lt;p&gt;Now that we are all set, let’s move on to defining our &lt;a href="https://pytorch.org/docs/stable/nn.html" rel="noopener noreferrer"&gt;loss function&lt;/a&gt; and &lt;a href="https://pytorch.org/docs/stable/optim.html" rel="noopener noreferrer"&gt;optimizer&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Defining the Loss Function and Optimizer&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The loss function measures how well the model’s predictions match the actual labels, while the optimizer updates the model parameters to minimize the loss.&lt;/p&gt;

&lt;p&gt;To handle this, we define the following variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;loss_fn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;CrossEntropyLoss&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;optimizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;optim&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SGD&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1e-3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;momentum&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.9&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s explain each of these concepts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;nn.CrossEntropyLoss&lt;/strong&gt;: a loss function used primarily for classification tasks where the model predicts probabilities for each class. It combines &lt;code&gt;nn.LogSoftmax()&lt;/code&gt; and &lt;code&gt;nn.NLLLoss()&lt;/code&gt; in one single class. CrossEntropyLoss expects raw logits (the output of the model before applying softmax) as input. It computes the softmax internally to normalize the logits and then computes the negative log likelihood loss between the predicted class probabilities and the actual target labels.&lt;/p&gt;
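&lt;p&gt;To demystify that, here is a hand-rolled version of what cross-entropy computes for a single example: softmax over the raw logits, then the negative log of the probability assigned to the true class. This is a pure-Python sketch for intuition only; PyTorch’s implementation works on batched tensors and is numerically more careful.&lt;/p&gt;

```python
import math

def cross_entropy(logits, target):
    """Softmax over the logits, then negative log-likelihood of the target class."""
    exps = [math.exp(z) for z in logits]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[target])

# three-class example: the model favors class 0, which is also the true label
loss = cross_entropy([2.0, 1.0, 0.1], target=0)
print(round(loss, 3))  # 0.417
```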

&lt;p&gt;&lt;strong&gt;torch.optim.SGD&lt;/strong&gt;: an optimizer that implements Stochastic Gradient Descent (SGD), a fundamental optimization algorithm for training neural networks. SGD updates the model parameters in the direction of the negative gradient of the loss function with respect to the parameters. The &lt;code&gt;model.parameters()&lt;/code&gt; argument specifies which parameters of the model should be optimized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;lr (learning rate)&lt;/strong&gt;: a scalar factor that controls the step size taken during optimization. It determines how much to change the model parameters with respect to the gradient of the loss function. A higher learning rate can speed up convergence, but if it’s too high, it may cause the model to overshoot optimal values. Conversely, a lower learning rate can improve stability and precision but may require more iterations to converge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;momentum&lt;/strong&gt;: a parameter that accelerates SGD in the relevant direction and dampens oscillations. It improves the convergence rate and helps SGD escape shallow local minima more effectively. A common value for momentum is 0.9, but it can be tuned depending on the specific problem and dataset characteristics.&lt;/p&gt;
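&lt;p&gt;The update rule itself is short enough to write out. Below is a minimal sketch of SGD with momentum for a single parameter, minimizing the toy function f(p) = p * p; it mirrors the velocity-based form of the update, but it is for illustration only, since in practice the optimizer handles all of this for you.&lt;/p&gt;

```python
def sgd_momentum_step(p, v, grad, lr=1e-3, momentum=0.9):
    """One SGD-with-momentum update: accumulate velocity, then step against it."""
    v = momentum * v + grad
    p = p - lr * v
    return p, v

# minimize f(p) = p * p, whose gradient is 2 * p
p, v = 1.0, 0.0
for _ in range(100):
    p, v = sgd_momentum_step(p, v, grad=2 * p, lr=0.1)
print(abs(p))  # close to 0: the parameter has converged toward the minimum
```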

&lt;p&gt;In summary, these components together form the backbone of the optimization process during training. &lt;code&gt;nn.CrossEntropyLoss&lt;/code&gt; computes the loss based on model predictions and target labels, &lt;code&gt;torch.optim.SGD&lt;/code&gt; updates the model parameters based on the computed gradients, and &lt;code&gt;lr&lt;/code&gt; and &lt;code&gt;momentum&lt;/code&gt; are crucial hyperparameters that affect how quickly and effectively the model learns from the data. Adjusting these parameters can significantly impact the training process and model performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Defining our Training Function&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The training function iterates over the data loader, computes predictions, calculates the loss, and updates the model parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;train&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataloader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;loss_fn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataloader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;size: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataloader&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Move input data to the device (GPU or CPU)
&lt;/span&gt;        &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Move target labels to the device (GPU or CPU)
&lt;/span&gt;
        &lt;span class="c1"&gt;# compute predicted y by passing X to the model
&lt;/span&gt;        &lt;span class="n"&gt;prediction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# compute the loss
&lt;/span&gt;        &lt;span class="n"&gt;loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;loss_fn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prediction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

      &lt;span class="c1"&gt;#  apply zero gradients, perform a backward pass, and update the weights
&lt;/span&gt;        &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;zero_grad&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  
        &lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;backward&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  
        &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;step&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  

        &lt;span class="c1"&gt;# print training progress
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;loss_value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  
            &lt;span class="n"&gt;current&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;loss: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;loss_value&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;  [&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;current&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, to check the model’s performance against the test dataset and confirm that it is learning, let’s define a test function&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataloader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;loss_fn&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataloader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;num_batches&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataloader&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;eval&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;test_loss&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;correct&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;no_grad&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;dataloader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;prediction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;test_loss&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nf"&gt;loss_fn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prediction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;correct&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prediction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;argmax&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;test_loss&lt;/span&gt; &lt;span class="o"&gt;/=&lt;/span&gt; &lt;span class="n"&gt;num_batches&lt;/span&gt;
        &lt;span class="n"&gt;correct&lt;/span&gt; &lt;span class="o"&gt;/=&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Test Error: &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt; Accuracy: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;correct&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;%, Avg loss: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;test_loss&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
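&lt;p&gt;The accuracy bookkeeping inside this loop can be sketched without tensors: take the argmax of each row of output scores and count matches against the labels. This is an illustrative plain-Python sketch with made-up data, not the tensor version used above:&lt;/p&gt;

```python
# Pure-Python sketch of the accuracy computation in the test loop:
# argmax each row of logits, count matches with the labels, divide by the
# number of samples (illustrative only; the article uses tensor ops).
def batch_accuracy(logits, labels):
    correct = 0
    for row, label in zip(logits, labels):
        predicted = max(range(len(row)), key=row.__getitem__)  # argmax
        correct += int(predicted == label)
    return correct / len(labels)

logits = [[0.1, 2.3, 0.4],   # argmax -> 1
          [1.9, 0.2, 0.3],   # argmax -> 0
          [0.5, 0.1, 3.0]]   # argmax -> 2
print(batch_accuracy(logits, [1, 0, 1]))  # 2 of 3 correct
```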



&lt;p&gt;It’s time to train our model; let’s do that in the next step.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Defining the training loop&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The training process is conducted over several iterations (epochs). During each epoch, the model updates its parameters to make better predictions. We print the model’s accuracy and loss at each epoch; we’d like to see the accuracy increase and the loss decrease with every epoch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;epoch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;epoch&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Epoch &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;-------------------------------&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nf"&gt;train&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;training_loader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;loss_fn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_loader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;loss_fn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Done!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# OUTPUT &lt;/span&gt;

Epoch 1
&lt;span class="nt"&gt;-------------------------------&lt;/span&gt;
size: 60000
loss: 2.301722 &lt;span class="o"&gt;[&lt;/span&gt;    0/60000]
loss: 2.196219 &lt;span class="o"&gt;[&lt;/span&gt; 6400/60000]
loss: 1.919408 &lt;span class="o"&gt;[&lt;/span&gt;12800/60000]
loss: 1.602865 &lt;span class="o"&gt;[&lt;/span&gt;19200/60000]
loss: 1.206242 &lt;span class="o"&gt;[&lt;/span&gt;25600/60000]
loss: 1.089895 &lt;span class="o"&gt;[&lt;/span&gt;32000/60000]
loss: 1.010409 &lt;span class="o"&gt;[&lt;/span&gt;38400/60000]
loss: 0.888665 &lt;span class="o"&gt;[&lt;/span&gt;44800/60000]
loss: 0.871484 &lt;span class="o"&gt;[&lt;/span&gt;51200/60000]
loss: 0.801176 &lt;span class="o"&gt;[&lt;/span&gt;57600/60000]
Test Error: 
 Accuracy: 70.4%, Avg loss: 0.797208 

Epoch 2
&lt;span class="nt"&gt;-------------------------------&lt;/span&gt;
size: 60000
loss: 0.793278 &lt;span class="o"&gt;[&lt;/span&gt;    0/60000]
loss: 0.839569 &lt;span class="o"&gt;[&lt;/span&gt; 6400/60000]
loss: 0.590993 &lt;span class="o"&gt;[&lt;/span&gt;12800/60000]
loss: 0.796638 &lt;span class="o"&gt;[&lt;/span&gt;19200/60000]
loss: 0.679180 &lt;span class="o"&gt;[&lt;/span&gt;25600/60000]
loss: 0.645485 &lt;span class="o"&gt;[&lt;/span&gt;32000/60000]
loss: 0.705061 &lt;span class="o"&gt;[&lt;/span&gt;38400/60000]
loss: 0.694501 &lt;span class="o"&gt;[&lt;/span&gt;44800/60000]
loss: 0.680406 &lt;span class="o"&gt;[&lt;/span&gt;51200/60000]
loss: 0.634787 &lt;span class="o"&gt;[&lt;/span&gt;57600/60000]
Test Error: 
 Accuracy: 78.1%, Avg loss: 0.632338 

Epoch 3
&lt;span class="nt"&gt;-------------------------------&lt;/span&gt;
size: 60000
loss: 0.558544 &lt;span class="o"&gt;[&lt;/span&gt;    0/60000]
loss: 0.660779 &lt;span class="o"&gt;[&lt;/span&gt; 6400/60000]
loss: 0.436486 &lt;span class="o"&gt;[&lt;/span&gt;12800/60000]
loss: 0.679563 &lt;span class="o"&gt;[&lt;/span&gt;19200/60000]
loss: 0.600478 &lt;span class="o"&gt;[&lt;/span&gt;25600/60000]
loss: 0.567539 &lt;span class="o"&gt;[&lt;/span&gt;32000/60000]
loss: 0.587003 &lt;span class="o"&gt;[&lt;/span&gt;38400/60000]
loss: 0.657008 &lt;span class="o"&gt;[&lt;/span&gt;44800/60000]
loss: 0.643853 &lt;span class="o"&gt;[&lt;/span&gt;51200/60000]
loss: 0.547364 &lt;span class="o"&gt;[&lt;/span&gt;57600/60000]
Test Error: 
 Accuracy: 80.3%, Avg loss: 0.560929 

Epoch 4
&lt;span class="nt"&gt;-------------------------------&lt;/span&gt;
size: 60000
loss: 0.462072 &lt;span class="o"&gt;[&lt;/span&gt;    0/60000]
loss: 0.580780 &lt;span class="o"&gt;[&lt;/span&gt; 6400/60000]
loss: 0.374757 &lt;span class="o"&gt;[&lt;/span&gt;12800/60000]
loss: 0.618166 &lt;span class="o"&gt;[&lt;/span&gt;19200/60000]
loss: 0.552829 &lt;span class="o"&gt;[&lt;/span&gt;25600/60000]
loss: 0.526478 &lt;span class="o"&gt;[&lt;/span&gt;32000/60000]
loss: 0.529090 &lt;span class="o"&gt;[&lt;/span&gt;38400/60000]
loss: 0.666382 &lt;span class="o"&gt;[&lt;/span&gt;44800/60000]
loss: 0.634566 &lt;span class="o"&gt;[&lt;/span&gt;51200/60000]
loss: 0.482042 &lt;span class="o"&gt;[&lt;/span&gt;57600/60000]
Test Error: 
 Accuracy: 81.2%, Avg loss: 0.523512 

Epoch 5
&lt;span class="nt"&gt;-------------------------------&lt;/span&gt;
size: 60000
loss: 0.403316 &lt;span class="o"&gt;[&lt;/span&gt;    0/60000]
loss: 0.539046 &lt;span class="o"&gt;[&lt;/span&gt; 6400/60000]
loss: 0.340361 &lt;span class="o"&gt;[&lt;/span&gt;12800/60000]
loss: 0.577453 &lt;span class="o"&gt;[&lt;/span&gt;19200/60000]
loss: 0.509404 &lt;span class="o"&gt;[&lt;/span&gt;25600/60000]
loss: 0.496750 &lt;span class="o"&gt;[&lt;/span&gt;32000/60000]
loss: 0.495348 &lt;span class="o"&gt;[&lt;/span&gt;38400/60000]
loss: 0.670772 &lt;span class="o"&gt;[&lt;/span&gt;44800/60000]
loss: 0.620382 &lt;span class="o"&gt;[&lt;/span&gt;51200/60000]
loss: 0.439184 &lt;span class="o"&gt;[&lt;/span&gt;57600/60000]
Test Error: 
 Accuracy: 82.2%, Avg loss: 0.500474 

Done!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;epochs&lt;/strong&gt;: Number of times to iterate over the entire training dataset; in our case, 5.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;train()&lt;/strong&gt;: Calls the training function.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;test()&lt;/strong&gt;: Calls the evaluation(test) function.&lt;/p&gt;

&lt;p&gt;At this point, we have a trained model that can predict and classify images with reasonable accuracy (about 82% on the test set after 5 epochs).&lt;/p&gt;

&lt;p&gt;Next, let’s consider how to save our trained model, so that when we want to deploy it in an application we can simply load it and provide the required classes and values.&lt;/p&gt;

&lt;p&gt;To save our model, we do the following;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;state_dict&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model.pth&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Saved PyTorch Model State to model.pth&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="c"&gt;# OUTPUT Saved PyTorch Model State to model.pth&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach serializes the model’s internal state dictionary (containing the model parameters) and saves it to disk.&lt;/p&gt;

&lt;p&gt;The next time we want to use our model for predictions, we first load it back into memory. To do that;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;NeuralNetwork&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_state_dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model.pth&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# OUTPUT &amp;lt;All keys matched successfully&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Loading the model involves re-creating the model structure and then loading the saved state dictionary into it.&lt;/p&gt;

&lt;p&gt;Finally, let’s make use of our loaded model for prediction or classification.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Model Usage for prediction&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;classes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;T-shirt/top&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Trouser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Pullover&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Dress&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Coat&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sandal&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Shirt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sneaker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bag&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Ankle boot&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;



&lt;span class="c1"&gt;# set model to evaluation mode
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;eval&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;


&lt;span class="n"&gt;sample_index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# sample Index (Change this index to select a different sample)
&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;test_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;sample_index&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;test_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;sample_index&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# make prediction without gradient calculation
&lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;no_grad&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;prediction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;unsqueeze&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="c1"&gt;# get predicted and actual classes
&lt;/span&gt;    &lt;span class="n"&gt;predicted&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;prediction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;argmax&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()],&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Predicted: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;predicted&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, Actual: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# OUTPUT: Predicted: "Pullover", Actual: "Pullover"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Preparation and Data&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;classes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;T-shirt/top&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Trouser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Pullover&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Dress&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Coat&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sandal&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Shirt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sneaker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bag&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Ankle boot&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;classes&lt;/code&gt;: This is a list of class labels that correspond to the categories the model is trained to recognize. Each index in this list represents a specific class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set Model to Evaluation Mode&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;eval&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;model.eval()&lt;/code&gt;: Sets the model to evaluation mode. This is important because some layers (e.g., dropout, batch normalization) behave differently during training and evaluation. In evaluation mode, these layers operate in inference mode, ensuring consistent results during testing.&lt;/p&gt;
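&lt;p&gt;Dropout is the classic example of this train/eval difference. The behaviour can be sketched in plain Python; this is an illustrative sketch of inverted dropout with a made-up function name, not PyTorch’s implementation:&lt;/p&gt;

```python
import random

# Illustrative sketch of inverted dropout: units are randomly zeroed during
# training (survivors scaled by 1/(1-p)), but the layer is the identity
# function in eval mode -- which is why model.eval() matters at inference.
def dropout(xs, p=0.5, training=True):
    if not training:  # eval mode: pass inputs through unchanged
        return list(xs)
    # train mode: drop each unit with probability p, rescale the survivors
    return [0.0 if random.random() < p else x / (1 - p) for x in xs]

xs = [1.0, 2.0, 3.0]
print(dropout(xs, training=False))  # deterministic in eval mode: [1.0, 2.0, 3.0]
print(dropout(xs, training=True))   # random in train mode: each value 0.0 or doubled
```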

&lt;p&gt;&lt;strong&gt;Select a Single Test Sample&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;test_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;test_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;x, y = test_data[0][0], test_data[0][1]&lt;/code&gt;: Selects the first sample from the &lt;code&gt;test_data&lt;/code&gt; dataset. &lt;code&gt;x&lt;/code&gt; is the input data (e.g., an image), and &lt;code&gt;y&lt;/code&gt; is the corresponding label (e.g., the class index).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make Prediction Without Gradient Calculation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;no_grad&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;with torch.no_grad():&lt;/code&gt;: Disables gradient calculation, which is not needed for evaluation and reduces memory usage and computation time.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;x = x.to(device)&lt;/code&gt;: Moves the input data to the specified device (CPU or GPU) where the model is located.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pred = model(x)&lt;/code&gt;: Passes the input data through the model to obtain the predictions. &lt;code&gt;pred&lt;/code&gt; is typically a tensor containing the output logits or probabilities for each class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To Determine Predicted and Actual Class Labels&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;predicted&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;pred&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;argmax&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)],&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;pred[0].argmax(0)&lt;/code&gt;: Finds the index of the class with the highest score in the model's output for the first (and only) sample in the batch. This index corresponds to the predicted class.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;classes[pred[0].argmax(0)]&lt;/code&gt;: Uses the index to look up the predicted class label from the &lt;code&gt;classes&lt;/code&gt; list.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;classes[y]&lt;/code&gt;: Uses the true label index &lt;code&gt;y&lt;/code&gt; to look up the actual class label from the &lt;code&gt;classes&lt;/code&gt; list.&lt;/p&gt;
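&lt;p&gt;To make the lookup concrete, here is a tiny pure-Python sketch of the same idea, with made-up scores and a shortened class list for illustration:&lt;/p&gt;

```python
# Hypothetical output scores for one sample, one score per class
scores = [0.1, 2.3, 0.7]
classes = ["T-shirt/top", "Trouser", "Pullover"]

# Equivalent of pred[0].argmax(0): the index of the highest score
best_index = max(range(len(scores)), key=lambda i: scores[i])

# Equivalent of classes[pred[0].argmax(0)]: map that index to a label
predicted = classes[best_index]
print(predicted)  # Trouser
```

&lt;p&gt;In the tutorial, &lt;code&gt;pred[0]&lt;/code&gt; is a tensor of ten scores (one per FashionMNIST class) and &lt;code&gt;argmax&lt;/code&gt; performs this lookup in a single call.&lt;/p&gt;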

&lt;p&gt;&lt;strong&gt;Print the Predicted and Actual Class Labels&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Predicted: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;predicted&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, Actual: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Prints the predicted and actual class labels in a formatted string.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this guide, we walked through the entire process of building, training, and evaluating a neural network using PyTorch with the FashionMNIST dataset. We covered essential concepts such as dataset preparation, defining a neural network model, setting up training and evaluation loops, saving and loading models, and making predictions.&lt;/p&gt;

&lt;p&gt;Lastly, constant practice leads to mastery, so experiment with different models, hyperparameters, and datasets to deepen your understanding and improve your skills in deep learning and image classification.&lt;/p&gt;

&lt;p&gt;Till next time, but for now all I can say is, &lt;strong&gt;Happy coding&lt;/strong&gt;! 🚀&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ibm.com/topics/neural-networks" rel="noopener noreferrer"&gt;&lt;strong&gt;What is a Neural Network? | IBM&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Neural networks allow programs to recognize patterns and solve common problems in artificial intelligence, machine…*www.ibm.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kaggle.com/datasets/zalando-research/fashionmnist" rel="noopener noreferrer"&gt;&lt;strong&gt;Fashion MNIST&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
An MNIST-like dataset of 70,000 28x28 labeled fashion images*www.kaggle.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Quickstart - PyTorch Tutorials 2.3.0+cu121 documentation&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Read the PyTorch Domains documentation to learn more about domain-specific libraries*pytorch.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/bitgrit-data-science-publication/building-an-image-classification-model-with-pytorch-from-scratch-f10452073212" rel="noopener noreferrer"&gt;&lt;strong&gt;Building an Image Classification model with PyTorch from scratch&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A step-by-step guide to building a CNN model with PyTorch.medium.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/chibuezedev/Machine-learning/blob/main/cloth-classification-using-pytorch.ipynb" rel="noopener noreferrer"&gt;&lt;strong&gt;Machine-learning/cloth-classification-using-pytorch.ipynb at main · chibuezedev/Machine-learning&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Collection of my Machine learning models. Contribute to chibuezedev/Machine-learning development by creating an account…github.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>ai</category>
      <category>learning</category>
    </item>
    <item>
      <title>Analogy of Hypertext-Driven RESTful APIs: Unleashing the Power of Hyperlinks</title>
      <dc:creator>Paul Chibueze</dc:creator>
      <pubDate>Fri, 21 Jul 2023 07:52:07 +0000</pubDate>
      <link>https://dev.to/chibueze/analogy-of-hypertext-driven-restful-apis-unleashing-the-power-of-hyperlinks-4off</link>
      <guid>https://dev.to/chibueze/analogy-of-hypertext-driven-restful-apis-unleashing-the-power-of-hyperlinks-4off</guid>
      <description>&lt;p&gt;When we think about developing RESTful APIs, one term that often comes to mind is HATEOAS (Hypermedia as the Engine of Application State). HATEOAS enables the API to guide the client’s navigation through the application by including hyperlinks in the API responses. While HATEOAS is a powerful concept, there is another approach to designing RESTful APIs that deserves attention: Hypertext-Driven APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding Hypertext-Driven APIs&lt;/strong&gt;&lt;br&gt;
Hypertext-Driven APIs, as the name suggests, leverage hypertext and hyperlinks to drive client interaction with the API. Unlike traditional APIs that require clients to have prior knowledge of resource URIs and actions, hypertext-driven APIs provide links within the API responses that allow clients to navigate and discover resources.&lt;/p&gt;

&lt;p&gt;In this approach, the API acts as a sort of “hypermedia document” that not only provides data but also includes links to related resources and actions that can be performed. This empowers clients to dynamically explore the API without relying on hard-coded URLs or predefined navigation paths.&lt;/p&gt;
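&lt;p&gt;The idea can be sketched in a few lines of Python (the payload and field names here are purely illustrative, not from any real API): the client asks for the link by its relation name instead of hardcoding a URL.&lt;/p&gt;

```python
# A hypothetical hypermedia-style response: data plus links to next actions
response = {
    "user": {"id": 42, "name": "Ada"},
    "links": {
        "orders": "/users/42/orders",
        "update": "/users/42",
    },
}

def follow(resp, relation):
    """Look up the URL for an action by its link relation, not a hardcoded path."""
    return resp["links"][relation]

print(follow(response, "orders"))  # /users/42/orders
```

&lt;p&gt;If the server later moves its orders resource, only the link value changes; the client code above keeps working.&lt;/p&gt;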

&lt;p&gt;Now, before you continue reading, there are a couple of things you need to have installed to get started with testing the code in each example.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make sure you have Node.js &amp;amp; npm installed on your machine. You can download and install Node.js from the official website: &lt;a href="https://nodejs.org" rel="noopener noreferrer"&gt;https://nodejs.org&lt;/a&gt; or NPM from &lt;a href="https://docs.npmjs.com/downloading-and-installing-node-js-and-npm" rel="noopener noreferrer"&gt;https://docs.npmjs.com/downloading-and-installing-node-js-and-npm&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a new directory for your project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open a terminal or command prompt and navigate to the project directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the code example into a new file and save it with a &lt;code&gt;.js&lt;/code&gt; extension, for example &lt;code&gt;ecommerce.js&lt;/code&gt;. Then, in the terminal or command prompt, install the required dependencies by running the following command:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install express
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="5"&gt;
&lt;li&gt;Once the installation is complete, start the server by running the following command:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node ecommerce.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="6"&gt;
&lt;li&gt;&lt;p&gt;You should see a message indicating that the server is running on a specific port (in this case, 3000).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open a web browser and navigate to &lt;code&gt;http://localhost:3000/products&lt;/code&gt; to test the product listing endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should see a JSON response containing information about the available products.&lt;/p&gt;

&lt;ol start="8"&gt;
&lt;li&gt;Try accessing other endpoints such as &lt;code&gt;/cart&lt;/code&gt;, &lt;code&gt;/cart/checkout&lt;/code&gt;, and &lt;code&gt;/cart/payment_info&lt;/code&gt; to test different functionalities of the e-commerce platform.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Feel free to explore the code and make modifications to suit your needs. You can add new routes, implement functionality for adding/removing items from the cart, or integrate with a database to persist data.&lt;/p&gt;

&lt;p&gt;Please note that this is a basic example for testing purposes. In a real-world scenario, you would need to consider additional security measures, handle database operations, and handle user authentication/authorization.&lt;/p&gt;
&lt;h2&gt;
  
  
  Rethinking API Interaction
&lt;/h2&gt;

&lt;p&gt;Imagine a RESTful API for an e-commerce platform. Traditionally, a client would need to know the specific URIs for retrieving a list of products, adding items to the shopping cart, and checking out. With a hypertext-driven API, the API response itself contains hyperlinks for these actions and more.&lt;/p&gt;

&lt;p&gt;For example, instead of relying on a fixed URL for retrieving products, the API response may include a link with the relation type “products” that the client can follow to get the list of available products. Similarly, the response may include links to add items to the cart or proceed to the checkout.&lt;/p&gt;

&lt;p&gt;Let's look at some code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express'); 
const app = express(); 
const PORT = 3000; 

app.get('/products', (req, res) =&amp;gt; { 
const productData = [{ 
  id: '1', 
  name: 'Product 1', 
  description: 'This is product 1', 
  price: 10.99, 
  imageLink: 'https://via.placeholder.com/150', 
  addToCartLink: '/cart/add/1' }, 

{ 
  id: '2', 
  name: 'Product 2', 
  description: 'This is product 2', 
  price: 19.99, 
  imageLink: 'https://via.placeholder.com/150', 
  addToCartLink: '/cart/add/2' 
}]; 

  res.json(productData); 
}); 

app.get('/cart', (req, res) =&amp;gt; {

const cartData = { 
  items: [ { 
  id: '1', 
  name: 'Product 1', 
  description: 'This is product 1', 
  price: 10.99, 
  quantity: 2, 
  imageLink: 'https://via.placeholder.com/150', 
  removeFromCartLink: '/cart/remove/1' } ], 
  subTotal: 21.98, 
  tax: 1.77, 
  total: 23.75, 
  checkoutLink: '/cart/checkout' 
}; 

res.json(cartData); 
}); 


app.get('/cart/checkout', (req, res) =&amp;gt; {

const checkoutData = { 
  firstName: 'John', 
  lastName: 'Doe', 
  email: 'johndoe@example.com', 
  address: '123 Main St.',
  city: 'Anytown', 
  state: 'CA', 
  zipCode: '90210', 
  paymentInfoLink: '/cart/payment_info' 
}; 

res.json(checkoutData); 
}); 

app.get('/cart/payment_info', (req, res) =&amp;gt; {

const paymentData = { 
  cardNumber: '**** **** **** 1234', 
  expDate: '06/25', cvv: '123', 
  paymentLink: '/cart/submit_payment' 
}; 
res.json(paymentData); 
}); 


app.post('/cart/submit_payment', (req, res) =&amp;gt; { // handle payment submission 

res.sendStatus(200); }); 

app.listen(PORT, () =&amp;gt; { 
console.log(`Server running on port ${PORT}`); 
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, a GET request to &lt;code&gt;/products&lt;/code&gt; returns information about available products, including hyperlinks for adding each product to the shopping cart. A GET request to &lt;code&gt;/cart&lt;/code&gt; returns cart information, including the items in the cart, a sub-total, tax, and a checkout hyperlink. A GET request to &lt;code&gt;/cart/checkout&lt;/code&gt; returns checkout information, such as billing and shipping information, and a hyperlink for entering payment information. A GET request to &lt;code&gt;/cart/payment_info&lt;/code&gt; returns payment information and a hyperlink for submitting the payment.&lt;/p&gt;

&lt;p&gt;By breaking down the overall user flow into smaller resource interactions, each with hyperlinks to related resources, this e-commerce platform can be designed to be more flexible and adaptable, allowing clients to interact with the API in a more dynamic way.&lt;/p&gt;
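&lt;p&gt;The client side of this flow can be simulated in a few lines of Python. The dictionaries below stand in for the JSON responses of the routes above; the only path the client hardcodes is the entry point, &lt;code&gt;/cart&lt;/code&gt;.&lt;/p&gt;

```python
# Simulated responses keyed by path, mirroring the e-commerce routes above
responses = {
    "/cart": {"total": 23.75, "checkoutLink": "/cart/checkout"},
    "/cart/checkout": {"paymentInfoLink": "/cart/payment_info"},
    "/cart/payment_info": {"paymentLink": "/cart/submit_payment"},
}

def get(path):
    """Stand-in for an HTTP GET against the API."""
    return responses[path]

# The client discovers each next step from the previous response
cart = get("/cart")
checkout = get(cart["checkoutLink"])
payment_info = get(checkout["paymentInfoLink"])
print(payment_info["paymentLink"])  # /cart/submit_payment
```

&lt;p&gt;Because every step after the entry point is discovered from a link, the server can reorganize its URL layout without breaking this client.&lt;/p&gt;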

&lt;p&gt;For another example, let's consider an API for a social networking platform. A traditional RESTful API for this platform may have endpoints for retrieving users, posts, and comments, with specific URLs and routes for each of these resources.&lt;/p&gt;

&lt;p&gt;However, a hypertext-driven API for this platform would include hyperlinks within the responses that allow clients to navigate and interact with the API in a more dynamic way.&lt;/p&gt;

&lt;p&gt;For instance, a response for retrieving a user may not only include the user’s profile information but also hyperlinks to the user’s posts, followers, and following list. Instead of requiring clients to know the URLs for these resources and making multiple API calls to retrieve them, clients can simply follow the hyperlinks to access additional information and resources.&lt;/p&gt;

&lt;p&gt;Similarly, a response for retrieving a post may include hyperlinks to the post’s author, comments, and related posts. Clients can follow these links to explore the post’s context and engage with other relevant resources.&lt;/p&gt;

&lt;p&gt;By including hyperlinks within the responses, a hypertext-driven API for a social networking platform can make the user experience more interactive and seamless. Clients can explore and interact with the API in a more natural and intuitive way, without having to rely on prior knowledge of specific URLs and APIs.&lt;/p&gt;

&lt;p&gt;Here is the code for this illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express'); 
const app = express();

const PORT = 3000; 

app.get('/users/:id', (req, res) =&amp;gt; { 

  const userId = req.params.id; 

  const userData = { 
   id: userId, 
   name: 'John Doe', 
   bio: 'Software developer and music lover', 
   postsLink: `/users/${userId}/posts`, 
   followersLink: `/users/${userId}/followers`, 
   followingLink: `/users/${userId}/following` 
};
   res.json(userData); 
}); 


app.get('/users/:id/posts', (req, res) =&amp;gt; { 

   const userId = req.params.id; 

   const postsData = [ 
{ 
   id: '1', 
   authorId: userId, 
   content: 'My first post', 
   commentsLink: `/posts/1/comments`, 
   relatedPostsLink: `/users/${userId}/related_posts` 
}, 
{ 
   id: '2', 
   authorId: userId, 
   content: 'My second post', 
   commentsLink: `/posts/2/comments`, 
   relatedPostsLink: `/users/${userId}/related_posts` 
  } 
]; 

res.json(postsData); }); // other endpoints for comments, following, related posts, etc. 


app.listen(PORT, () =&amp;gt; { 
console.log(`Server running on port ${PORT}`); 
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the example above, a GET request to &lt;code&gt;/users/:id&lt;/code&gt; returns information about a specific user, including hyperlinks for accessing their posts, followers, and following list. A GET request to &lt;code&gt;/users/:id/posts&lt;/code&gt; returns the user’s posts, along with hyperlinks for accessing comments and related posts.&lt;/p&gt;

&lt;p&gt;By including hyperlinks within the responses, clients can dynamically navigate and interact with the API without having to rely on hardcoded URLs or predefined navigation paths. This simplifies client development and promotes loose coupling between the client and server components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Hypertext-Driven APIs
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Improved Discoverability
&lt;/h2&gt;

&lt;p&gt;Hypertext-driven APIs enhance discoverability by providing a roadmap for clients to explore available resources and actions. Clients no longer need to rely on API documentation or prior knowledge to navigate the API. They can simply follow links and let the API guide them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Flexibility and Adaptability
&lt;/h2&gt;

&lt;p&gt;By using hyperlinks to drive interaction, hypertext-driven APIs become more flexible and adaptable. Adding or modifying resources and actions does not require changing client code, as clients can dynamically discover and interact with new features through the hyperlinks provided in the API responses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Loose Coupling
&lt;/h2&gt;

&lt;p&gt;Hypertext-driven APIs promote loose coupling between the client and the API. The client does not need to have hardcoded knowledge of resource URLs or specific API routes. Instead, it relies on the hyperlinks provided by the API to navigate and interact. This decoupling makes the API more resilient to future changes and allows for easier evolution of both the client and server components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simplified Client Development
&lt;/h2&gt;

&lt;p&gt;Clients interacting with hypertext-driven APIs can focus more on business logic and user experience, as they can delegate the navigation and interaction aspects to the API itself. This can streamline client development, reduce complexity, and improve maintainability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While HATEOAS is a well-known approach for designing RESTful APIs, hypertext-driven APIs offer an alternative perspective that emphasizes the power of hyperlinks. By embedding hyperlinks within the API responses, these APIs enable dynamic navigation and discovery, improving discoverability, flexibility, and the overall client experience.&lt;/p&gt;

&lt;p&gt;Hypertext-driven APIs empower clients to rely on the guidance provided by the API, reducing the need for hardcoded URLs and predefined navigation paths. This promotes loose coupling, simplifies client development, and allows for easier evolution of both the client and server components.&lt;/p&gt;

&lt;p&gt;So, next time you design a RESTful API, consider the potential of hypertext-driven APIs and embrace the power of hyperlinks. Let your API become a hypermedia document that guides clients on their journey and unlocks a whole new level of flexibility and adaptability.&lt;/p&gt;

&lt;p&gt;Reference:&lt;br&gt;
&lt;a href="https://restfulapi.net/hateoas/" rel="noopener noreferrer"&gt;https://restfulapi.net/hateoas/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven" rel="noopener noreferrer"&gt;https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow on &lt;a href="https://twitter.com/chibuezeai" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>restapi</category>
      <category>node</category>
      <category>softwareengineering</category>
      <category>programming</category>
    </item>
    <item>
      <title>How then shall a developer stay healthy and mentally fit?</title>
      <dc:creator>Paul Chibueze</dc:creator>
      <pubDate>Fri, 10 Feb 2023 16:33:15 +0000</pubDate>
      <link>https://dev.to/chibueze/how-then-shall-a-developer-stay-healthy-and-mentally-fit-4jp5</link>
      <guid>https://dev.to/chibueze/how-then-shall-a-developer-stay-healthy-and-mentally-fit-4jp5</guid>
      <description>&lt;p&gt;"I shall get up as early as 4:00 am, make myself a cup of tea and probably start hitting the keyboard till whenever it's time for standup or Mr. A meeting time". This is probably the most morning ritual for 80% of developers, they pay more attention to the calendar schedules and pay almost no attention to their state of health.&lt;/p&gt;

&lt;p&gt;However, I am writing this as a developer who was caught up in the 2020 coronavirus outbreak and was forced to adopt this generation's newer working style: remote. Before then, I had the normal routine of almost every developer: wake up in the morning, look at my schedule for the day, and head over to the office. While in the office, it was mostly about writing that piece of code for the next feature for my company, plus meetings, from HR to the tribes. This cycle ran from Monday till Friday. On Saturday I would feel so weak that I'd turn off my alarm and sleep till my neighbors came knocking at my door the following morning.&lt;/p&gt;

&lt;p&gt;Moreover, this repetitive lifestyle of most developers, mine included, is one of the problems we have as software engineers. It might not bother you, but everything you do as a developer or a non-developer largely depends on how alert you are to notice things around you.&lt;/p&gt;

&lt;p&gt;One of the biggest factors we as developers should pay attention to is our ability to think critically in a healthy state of mind. Most times, it's not just about writing the code but ensuring we are actually in the best state to solve those problems with our code. I will go on and list some of the things developers should do to maintain a healthy state of mind and increase productivity at work.&lt;/p&gt;

&lt;p&gt;Develop the habit of exercising at least twice a week. However busy your schedule may be, always include free time to exercise and make sure you stay fit. Being physically active can improve your brain health, help manage weight, reduce the risk of disease, strengthen bones and muscles, and improve your ability to do everyday activities.&lt;/p&gt;

&lt;p&gt;Build self-awareness and observe what's going on inside your mind. Let go of all those things that create clutter in your mind. Try to forgive people unconditionally and be free. Try to figure out your dharma.&lt;/p&gt;

&lt;p&gt;Socialization often prepares people to participate in a social group by teaching them its norms and expectations. Socialization has three primary goals: teaching impulse control and developing a conscience, preparing people to perform certain social roles, and cultivating shared sources of meaning and value.&lt;/p&gt;

&lt;p&gt;Take good care of your diet. A healthy diet is essential for good health and nutrition. It protects you against many chronic non-communicable diseases, such as heart disease, diabetes and cancer. Eating a variety of foods and consuming less salt, sugars and saturated and industrially-produced trans-fats, are essential for a healthy diet.&lt;/p&gt;

&lt;p&gt;Don't forget to pay attention to your eyes; protect them at all costs. I was a victim of this and got my right eye damaged after staring at the screen for a long time without taking eye precautions. Use anti-blue-light glasses and make sure you give your eyes the care they demand. Eat more fruits. Reduce the use of blue lights in your setup rooms and bedrooms to give your eyes the relaxation they require.&lt;/p&gt;

&lt;p&gt;Finally, make sure you don't keep entirely to yourself; relationships are one of the most powerful things we have as humans. Meet with your family members and loved ones in your free time.&lt;/p&gt;

&lt;p&gt;Now, at this point I feel like you've learned one thing or another to help you maintain a stable state of mind as a creative and productive developer. By paying attention to our health, we prevent serious threats to our productivity, helping us focus more and get more done.&lt;/p&gt;

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>Roadmap To Backend Developer in 2022.</title>
      <dc:creator>Paul Chibueze</dc:creator>
      <pubDate>Sun, 10 Jul 2022 00:07:48 +0000</pubDate>
      <link>https://dev.to/chibueze/roadmap-to-backend-developer-in-2022-50le</link>
      <guid>https://dev.to/chibueze/roadmap-to-backend-developer-in-2022-50le</guid>
      <description>&lt;p&gt;Backend just like the kitchen is where the whole mixing and fries are done. Our web applications are incomplete or almost nothing without the backend, it could be seen as the backbone of every web application. It serves the informations we have in our database to the users once such demand is called for. There's always the unseens part of every restaurant, where all the fries are done before it is served to the customers. When we talk about the internet , the backend is considered the kitchen.&lt;br&gt;
 However, it is so important to know how to start and where to start as a beginner who wishes to go into backend development. In this blog we will explore the core backend super master Roadmap covering steps on how to start and resources to refer to along your journey.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Know the Basics of Coding
The most fundamental step in learning backend development is learning to code. Learn fundamental syntax, variables, functions, objects, data types, and execution. Some of the common programming languages used in the backend are PHP, JavaScript, Python, and C#. Learning these languages will help you speed up your career in backend development.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Besides, one must learn various databases to help store data electronically. Traditionally, backend developers used either relational or NoSQL databases. NoSQL databases are document-based and have a dynamic schema, whereas SQL databases are table-based and have a fixed or predefined schema.&lt;/p&gt;

&lt;p&gt;Relational Databases&lt;br&gt;
MySQL&lt;br&gt;
Oracle&lt;br&gt;
PostgreSQL&lt;br&gt;
NoSQL Databases&lt;br&gt;
Firebase&lt;br&gt;
MongoDB&lt;br&gt;
Cassandra&lt;br&gt;
Furthermore, students should be well-read with data structures and algorithms to ease their workflow and improve efficiency. Learning version control systems (VCS) is also essential in the basics. Version control systems such as Git, Github, and GitLab are the most commonly used solutions to assist frontend and backend developers in interactions and managing the changes made over time.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Gain Intermediate Skills
Frameworks in backend development form the essential skills after the fundamentals of coding and databases. They are crucial as using frameworks allows the creation of templates and code that may be reused in the future. They minimise the amount of code one must write. Hence, programming becomes more efficient. As a result, knowing a framework is also an excellent idea.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Common frameworks used in backend development are -&lt;/p&gt;

&lt;p&gt;Microsoft's ASP.NET is a web application platform that allows programmers to create dynamic websites. It enables you to create web applications using a full-featured programming language such as C#.&lt;br&gt;
Laravel is regarded as among the best PHP frameworks for developing online applications. It aids in the creation of fantastic apps through its expressive, elegant syntax.&lt;br&gt;
Rails, often known as Ruby on Rails, is a free and open-source framework built on the Ruby programming language. When using RoR, developers do not have to build every single piece of the web application from scratch.&lt;br&gt;
Django is a Python web framework that allows you to rapidly and efficiently develop high-quality web applications.&lt;br&gt;
Node.js is a runtime environment that allows software developers to use JavaScript for both frontend and backend web projects.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Advanced Topics
The topics and tools mentioned in the advanced sections are relatively more challenging than the previous skills. Therefore, learners should focus on APIs, Security, Caching, and Testing.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;APIs&lt;br&gt;
An Application Programming Interface enables two apps to communicate with one another. Backend developers utilise APIs to connect various apps or services to improve user experience on the frontend. Some of the APIs to learn are -&lt;/p&gt;

&lt;p&gt;REST&lt;br&gt;
JSON&lt;br&gt;
GSON&lt;br&gt;
SOAP&lt;br&gt;
XML-RPC&lt;br&gt;
Caching&lt;br&gt;
It is the technique of storing a copy of a given resource in a cache (temporary storage site) and quickly providing the data when requested. Caching's primary purpose is to increase data retrieval performance while removing the need to contact the underlying storage layer, which is slow to process. Some caching tools are -&lt;/p&gt;

&lt;p&gt;CDN&lt;br&gt;
Server-Side&lt;br&gt;
Redis&lt;br&gt;
Client-Side&lt;br&gt;
Security&lt;br&gt;
Web security knowledge is crucial for backend development. You can learn some of these topics to enhance your understanding of web security:&lt;/p&gt;

&lt;p&gt;HTTPS&lt;br&gt;
SSL&lt;br&gt;
CORS&lt;br&gt;
Hashing Algorithms&lt;br&gt;
Testing&lt;br&gt;
Backend testing is the process of checking a web application's database or server side. The goal of backend testing is to evaluate the database layer's efficiency while ensuring it is free of data corruption, deadlocks, and data loss. The following commonly used testing methods, which also appear across other software-related roles, apply to backend testing:&lt;/p&gt;

&lt;p&gt;Integration testing&lt;br&gt;
Functional testing&lt;br&gt;
Unit testing&lt;/p&gt;
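&lt;p&gt;As a minimal illustration of unit testing, here is a sketch using Python's built-in &lt;code&gt;unittest&lt;/code&gt; module; the &lt;code&gt;cart_total&lt;/code&gt; function is a hypothetical backend helper invented for this example:&lt;/p&gt;

```python
import unittest

def cart_total(prices, tax_rate):
    # Hypothetical backend helper: sum item prices and apply tax
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

class CartTotalTest(unittest.TestCase):
    def test_applies_tax(self):
        self.assertEqual(cart_total([10.00, 5.00], 0.10), 16.5)

    def test_empty_cart(self):
        self.assertEqual(cart_total([], 0.10), 0)

# Run the tests programmatically
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CartTotalTest)
)
```

&lt;p&gt;Each unit test exercises one behavior of one unit in isolation; integration and functional tests then check how units work together and whether the whole feature behaves as specified.&lt;/p&gt;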

&lt;ol start="4"&gt;
&lt;li&gt;Learn Additional Tools
In addition to the different fundamental tools and advanced topics, here are some topics that add value to your knowledge of backend development.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Code Analysis Tools&lt;br&gt;
Code analysis is a technique for troubleshooting and evaluating code to ensure smooth work. Some analysis tools include -&lt;/p&gt;

&lt;p&gt;SonarLint&lt;br&gt;
PMD&lt;br&gt;
SonarQube&lt;br&gt;
JUnit&lt;br&gt;
JaCoCo&lt;br&gt;
Architectural Pattern&lt;br&gt;
An architectural pattern is a reusable solution to problems encountered when designing software. Among the most prevalent architectural patterns are SOA, Microservices, and CQRS.&lt;/p&gt;

&lt;p&gt;Message Broker&lt;br&gt;
A message broker is a piece of software that allows apps, systems, and services to communicate and exchange data. The primary function of a broker is to translate the server's formal messaging protocol into the client's (the receiver's). You should learn one of the popular message brokers, such as RabbitMQ or Apache Kafka, and use it in different projects.&lt;/p&gt;

&lt;p&gt;Containerization&lt;br&gt;
Containerization is the process of packaging software code with all of the necessary components, such as frameworks, dependencies, and other libraries, to build isolated services in a container. A backend developer performs containerization to quickly migrate or run a container regardless of its infrastructure or environment. Some of the most commonly used containers are tools like Docker.&lt;/p&gt;

&lt;p&gt;Web Servers&lt;br&gt;
Apache, often known as the Apache HTTP Server, is a cross-platform open-source web server created by the Apache Software Foundation. NGINX is another open-source web server that can also be used for reverse proxying, load balancing, caching, mail proxying, and other purposes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Practicing with Real-World Applications
Since each backend development tool has its own use cases and requirements, an aspiring backend developer needs to know which tool fits a particular problem.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hence, practice is an essential step. Look for small projects and tasks to learn backend development hands-on; building a simple application such as a business website or blog helps you apply what you have learned and understand the different tools better.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating a Clone
This is an advanced step that will test all your backend development skills. The idea is to recreate the product of an existing startup or business using backend development.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Since these businesses ship products with real-world complexity, cloning one is a good way to broaden your knowledge. It can also help you come up with ideas for your own online products, which is one of the best ways to showcase your skills.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
Backend development is a lucrative area of the IT industry that opens up strong career options. The field also advances rapidly, so there is always something new to keep up with each year.&lt;/p&gt;

&lt;p&gt;For more guidance on getting started as a backend developer, visit &lt;a href="https://github.com/kamranahmedse/developer-roadmap" rel="noopener noreferrer"&gt;Roadmap.sh&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>django</category>
      <category>node</category>
    </item>
  </channel>
</rss>
