<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Muiz Alvi</title>
    <description>The latest articles on DEV Community by Muiz Alvi (@muizalvi).</description>
    <link>https://dev.to/muizalvi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F476549%2F36a0c8b8-5ad8-48bc-aa0b-c18db903c3e0.png</url>
      <title>DEV Community: Muiz Alvi</title>
      <link>https://dev.to/muizalvi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/muizalvi"/>
    <language>en</language>
    <item>
      <title>AI 101 | Introduction to Artificial Intelligence</title>
      <dc:creator>Muiz Alvi</dc:creator>
      <pubDate>Fri, 08 Jan 2021 06:00:25 +0000</pubDate>
      <link>https://dev.to/muizalvi/ai-101-introduction-to-artificial-intelligence-39fi</link>
      <guid>https://dev.to/muizalvi/ai-101-introduction-to-artificial-intelligence-39fi</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;If you are someone who is intrigued by self-driving cars, androids that can converse fluently or software that can diagnose tens of thousands of patients in a matter of minutes then artificial intelligence is the field for you!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnhw6zortbuijmyxplet6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnhw6zortbuijmyxplet6.png" alt="1" width="800" height="450"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This post covers the basic components of the field, goes over a recommended learning path and includes a number of resources to help get you started. So there is no need to worry about all those complicated terms, super lengthy videos and numerous courses available online. Just follow this post and watch as we simplify AI for you!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzq7xjby52zckea63bhjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzq7xjby52zckea63bhjl.png" alt="2" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;About this Post&lt;/li&gt;
&lt;li&gt;Learning Objectives&lt;/li&gt;
&lt;li&gt;What is Artificial Intelligence?&lt;/li&gt;
&lt;li&gt;The Power &amp;amp; Uses of AI&lt;/li&gt;
&lt;li&gt;Components of AI&lt;/li&gt;
&lt;li&gt;Python Learning Resources&lt;/li&gt;
&lt;li&gt;Logic &amp;amp; Rules-Based Approach&lt;/li&gt;
&lt;li&gt;Intro to Machine Learning&lt;/li&gt;
&lt;li&gt;First AI Classifier (hands-on)&lt;/li&gt;
&lt;li&gt;Classic Machine Learning&lt;/li&gt;
&lt;li&gt;Intro to Deep Learning&lt;/li&gt;
&lt;li&gt;Machine Learning vs. Deep Learning&lt;/li&gt;
&lt;li&gt;More on Deep Learning&lt;/li&gt;
&lt;li&gt;Theoretical &amp;amp; Hands-on Notebooks&lt;/li&gt;
&lt;li&gt;Problems with ML &amp;amp; DL&lt;/li&gt;
&lt;li&gt;Cloud Services: Microsoft Azure&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  About this Post
&lt;/h2&gt;

&lt;p&gt;This post covers content discussed in the &lt;a href="https://www.facebook.com/events/429419051422702" rel="noopener noreferrer"&gt;AI 101 | Introduction to Artificial Intelligence&lt;/a&gt; hands-on online workshop conducted by &lt;a href="https://www.facebook.com/mlsaislamabad" rel="noopener noreferrer"&gt;Microsoft Learn Student Ambassadors - Islamabad&lt;/a&gt;, &lt;a href="https://www.facebook.com/mlsa.community" rel="noopener noreferrer"&gt;Student Ambassadors Club - Karachi&lt;/a&gt; and &lt;a href="https://www.facebook.com/NUST.ACM" rel="noopener noreferrer"&gt;NUST ACM Student Chapter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcjr77r8en21p1yw0dmdj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcjr77r8en21p1yw0dmdj.png" alt="3" width="800" height="420"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;If you prefer watching the recording of the session, it is available &lt;a href="https://youtu.be/3K6bz7KCdFc" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: You do not need to watch the recording to understand this post; the link has only been provided for learning purposes.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning Objectives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Understanding the applications of AI&lt;/li&gt;
&lt;li&gt;Understanding &amp;amp; differentiating AI components &lt;/li&gt;
&lt;li&gt;Implementing a Basic AI classifier (Hands-on)&lt;/li&gt;
&lt;li&gt;Discussing Learning Paths &amp;amp; Resources&lt;/li&gt;
&lt;li&gt;Understanding Cloud Technologies in AI
&lt;/li&gt;
&lt;li&gt;Discussing Opportunities in AI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Artificial Intelligence?
&lt;/h2&gt;

&lt;p&gt;We are living in the modern age of problem solving, where we have built entire fields on ideas from science and mathematics to identify and solve problems. Artificial Intelligence, although an extremely broad term, can be thought of as a means of &lt;em&gt;using computers to solve problems or make automated decisions for tasks that, when done by humans, typically require intelligence&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbwentaouvnb05z4rfbsl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbwentaouvnb05z4rfbsl.jpg" alt="1" width="770" height="400"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;A classic example would be shape identification. The following picture contains a few shapes that can easily be identified by a toddler.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzugxgboboge2z4ulcba4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzugxgboboge2z4ulcba4.png" alt="2" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This task requires some degree of intelligence, so if a computer program were able to do it, we could say it does so using artificial intelligence!&lt;/p&gt;
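&lt;p&gt;As a small taste of what such a program might look like, here is a minimal sketch of a shape identifier that guesses a shape from its number of corners. This is an illustrative assumption on my part, not code from the workshop, and it sidesteps the genuinely hard part (getting a corner count out of an image in the first place):&lt;/p&gt;

```python
# Illustrative sketch: guess a shape from its corner count.
# The name-to-count mapping below is a simplifying assumption.
def identify_shape(corners):
    names = {0: "circle", 3: "triangle", 4: "rectangle", 5: "pentagon"}
    return names.get(corners, "unknown")

print(identify_shape(3))  # triangle
print(identify_shape(6))  # unknown
```

&lt;p&gt;Even this toy example hints at the limits of hand-written rules: real images do not come with a neat corner count, which is exactly where learning-based approaches shine.&lt;/p&gt;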

&lt;h2&gt;
  
  
  The Power &amp;amp; Uses of AI
&lt;/h2&gt;

&lt;p&gt;Shape identification was pretty impressive back in the 1990s, but the applications of AI have grown immensely since then. Decades of research and technological progress have allowed us to do things that were once considered impossible! Let us dive into the vast capabilities of this field.&lt;/p&gt;

&lt;p&gt;The following is a short video of a &lt;a href="https://en.wikipedia.org/wiki/Humanoid_robot" rel="noopener noreferrer"&gt;humanoid robot&lt;/a&gt; called &lt;a href="https://en.wikipedia.org/wiki/Sophia_(robot)" rel="noopener noreferrer"&gt;Sophia&lt;/a&gt; being interviewed by the &lt;a href="https://www.stylist.co.uk/" rel="noopener noreferrer"&gt;Stylist Magazine&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ZQrKFAAlxO4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Now a lot of people are under the impression that AI refers to such robots (like Sophia) that can, to an extent, mimic human brain functionality to produce abstract thought. This is not the case: robots like Sophia make use of an AI backend that allows them to &lt;em&gt;indulge in conversation&lt;/em&gt;, and that AI works regardless of the humanoid exterior.&lt;/p&gt;

&lt;p&gt;So what exactly are the applications of AI other than robotics? Well, here are a few to help you get started:&lt;/p&gt;

&lt;h4&gt;
  
  
  Data Processing &amp;amp; Cleaning
&lt;/h4&gt;

&lt;p&gt;Corporate giants maintain tens of thousands of customer records on a daily basis. This task is both tedious and prone to error if assigned to employees, so they instead opt for an AI solution that learns patterns and trends over time to clean or process incorrect, incomplete, irrelevant, duplicated, or improperly formatted customer data. &lt;/p&gt;
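&lt;p&gt;To make the idea concrete, here is a tiny hand-rolled sketch of such a cleaning step. The records and the rules are made-up assumptions; the point of an AI solution is precisely that it learns such rules from data instead of having them hard-coded:&lt;/p&gt;

```python
# Hypothetical customer records: one duplicate, one incomplete.
records = [
    {"name": "Ada", "email": "ada@example.com"},
    {"name": "Ada", "email": "ada@example.com"},  # duplicate
    {"name": "Bob", "email": ""},                 # incomplete
    {"name": "Eve", "email": "eve@example.com"},
]

seen = set()
clean = []
for rec in records:
    key = (rec["name"], rec["email"])
    # keep only complete, previously unseen records
    if rec["email"] and key not in seen:
        seen.add(key)
        clean.append(rec)

print([r["name"] for r in clean])  # ['Ada', 'Eve']
```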

&lt;h4&gt;
  
  
  Speech Recognition
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy1aavfhnd8h3j12w4rfe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy1aavfhnd8h3j12w4rfe.jpg" alt="3" width="626" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ever used &lt;a href="https://www.apple.com/siri/" rel="noopener noreferrer"&gt;Siri&lt;/a&gt;, &lt;a href="https://www.alexa.com/" rel="noopener noreferrer"&gt;Alexa&lt;/a&gt;, &lt;a href="https://assistant.google.com/" rel="noopener noreferrer"&gt;Google Assistant&lt;/a&gt; or any voice-activated piece of technology? Ever wondered &lt;em&gt;how&lt;/em&gt; a machine that communicates in binary (1s and 0s) is able to understand you? The answer lies in the vast field of speech recognition, where an AI is trained to understand words, phrases and even full commands! Speech recognition is used almost everywhere: in the virtual assistants on our phones, in biometric verification for security purposes and even in online banking!&lt;/p&gt;

&lt;h4&gt;
  
  
  Natural Language Processing
&lt;/h4&gt;

&lt;p&gt;This powerful field of AI involves analyzing human language and deriving understanding based on language structure and context.&lt;/p&gt;

&lt;p&gt;For example, in the fill-in-the-blank &lt;strong&gt;Man&lt;/strong&gt; is to &lt;strong&gt;Woman&lt;/strong&gt; what &lt;strong&gt;King&lt;/strong&gt; is to &lt;strong&gt;____&lt;/strong&gt;, we as humans know that the most suitable word for the blank is &lt;strong&gt;Queen&lt;/strong&gt;, as it completes the analogy. This is what we are trying to teach our AI as well, and it can be done by training the AI on huge amounts of linguistic data. &lt;/p&gt;
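&lt;p&gt;One popular way this is done in practice is with word vectors, where the analogy becomes simple arithmetic: king - man + woman lands near queen. The tiny 2-D vectors below are made-up numbers purely for illustration; real embeddings have hundreds of dimensions and are learned from huge text corpora:&lt;/p&gt;

```python
# Toy word vectors (made-up 2-D numbers for illustration only)
vectors = {
    "man":   [1.0, 0.0],
    "woman": [1.0, 1.0],
    "king":  [3.0, 0.0],
    "queen": [3.0, 1.0],
}

# king - man + woman, computed component-wise
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# squared Euclidean distance between two vectors
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# the word closest to the target vector completes the analogy
best = min(vectors, key=lambda word: distance(vectors[word], target))
print(best)  # queen
```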

&lt;p&gt;Applications of this field include the predictive keyboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frz2lfiyopokz6tyg0p8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frz2lfiyopokz6tyg0p8y.png" alt="4" width="394" height="265"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h4&gt;
  
  
  Image Processing &amp;amp; Transformation
&lt;/h4&gt;

&lt;p&gt;Many amazing artworks have effortlessly been created with the help of AI in this field. Below is an example of convolving a picture of a dog with &lt;a href="https://en.wikipedia.org/wiki/Vincent_van_Gogh" rel="noopener noreferrer"&gt;Vincent van Gogh&lt;/a&gt;'s famous &lt;a href="https://en.wikipedia.org/wiki/The_Starry_Night" rel="noopener noreferrer"&gt;Starry Night&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7fuh8dzq6vgtw3w01hlr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7fuh8dzq6vgtw3w01hlr.jpeg" alt="6" width="800" height="170"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In other words, an AI is needed to extract the essence of both pictures and combine them into one.&lt;/p&gt;

&lt;h4&gt;
  
  
  Computer Vision
&lt;/h4&gt;

&lt;p&gt;This field makes use of the ideas behind object detection and identification in images to make applications like self-driving cars a reality. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fumy7dtbndljff0b4422p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fumy7dtbndljff0b4422p.png" alt="7" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As seen in the image above, an AI has successfully identified each object and can now make decisions based on this information. In the case of a self-driving car, it will either go or stop depending on the number and position of people on the street.&lt;/p&gt;
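&lt;p&gt;The decision step on top of the detections can be surprisingly simple. Below is a hedged sketch of that final step only; the detection list, field names and go/stop rule are illustrative assumptions, and a real self-driving stack is vastly more involved:&lt;/p&gt;

```python
# Hypothetical output of an object detector for one camera frame
detections = [
    {"label": "person", "on_street": True},
    {"label": "person", "on_street": False},  # on the sidewalk
    {"label": "car",    "on_street": True},
]

# count people who are actually on the street
people_on_street = sum(
    1 for d in detections if d["label"] == "person" and d["on_street"]
)

# stop whenever at least one person is on the street
decision = "stop" if people_on_street else "go"
print(decision)  # stop
```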

&lt;h2&gt;
  
  
  Components of AI
&lt;/h2&gt;

&lt;p&gt;Artificial Intelligence has been around for a while now, and modern breakthroughs in technology have allowed for great advancements in the field. Using ideas from &lt;strong&gt;Machine Learning&lt;/strong&gt; and &lt;strong&gt;Deep Learning&lt;/strong&gt;, we are now able to achieve the extraordinary. As you can observe in the diagram below, Machine Learning (ML) and Deep Learning (DL) are simply subfields of AI, and both will be discussed to some extent in this post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8ht6yrjdwan8g2csiumk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8ht6yrjdwan8g2csiumk.png" alt="10" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So how should one go about studying this field? Maybe you want to start with AI, maybe you are stuck on ML, or perhaps you are confused about where to go once you are done with DL. Should you start with DL, then move to ML, and only then learn about the broader field of AI? With so much content out there, one can get quite confused about which learning path to take. To make things easier, I'll be sharing my learning path here. It may not be the best path to take, but it will give you some structure while studying AI.&lt;/p&gt;

&lt;h4&gt;
  
  
  Recommended Learning Path
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Learn Python&lt;/li&gt;
&lt;li&gt;Study Classic Machine Learning Examples &lt;/li&gt;
&lt;li&gt;Make use of Deep Learning Libraries&lt;/li&gt;
&lt;li&gt;Implement Cloud Technology Practices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: Key elements of this learning path will be discussed throughout this post.&lt;/p&gt;

&lt;h3&gt;
  
  
  Python Learning Resources
&lt;/h3&gt;

&lt;p&gt;Python is a powerful &lt;a href="https://en.wikipedia.org/wiki/Programming_language" rel="noopener noreferrer"&gt;programming language&lt;/a&gt; that will allow us to create our AI &lt;a href="https://en.wikipedia.org/wiki/Programming_model" rel="noopener noreferrer"&gt;models&lt;/a&gt;. We choose Python because it gives us access to many Deep Learning libraries that make programming much easier. More on that when we talk about Deep Learning.&lt;/p&gt;

&lt;p&gt;Follow these resources to get a command of the language in no time! Also, if you are already proficient in another language like C, C++ or Java, the learning process will be easier for you.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resource 1: Watch Python in Action
&lt;/h4&gt;

&lt;p&gt;The following is a video by &lt;a href="https://www.youtube.com/channel/UCovR8D97-8qmQ8hWQW0d3ew" rel="noopener noreferrer"&gt;howCode&lt;/a&gt; and goes over all the basics of Python in just 5 minutes!&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/I2wURDqiXdM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h4&gt;
  
  
  Resource 2: Signup for HackerRank
&lt;/h4&gt;

&lt;p&gt;Programming requires practice, and this website has a ton of Python problems for you to solve to help improve your skills, for free.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Feu1dlzf9vj2zkdreitka.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Feu1dlzf9vj2zkdreitka.PNG" alt="11" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can signup &lt;a href="https://www.hackerrank.com/auth/signup" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resource 3: Python Repository
&lt;/h4&gt;

&lt;p&gt;The following &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; repository contains Python problems taken from HackerRank along with solutions. You can make use of the code here for learning and evaluation purposes.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/MuizAlvi" rel="noopener noreferrer"&gt;
        MuizAlvi
      &lt;/a&gt; / &lt;a href="https://github.com/MuizAlvi/Python_Programs" rel="noopener noreferrer"&gt;
        Python_Programs
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Solved programming problems using Python 
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Sample Python problem solutions&lt;/h1&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;List of Problems:&lt;/h2&gt;

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;Sock Merchant: &lt;a href="https://www.hackerrank.com/challenges/sock-merchant/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/sock-merchant/problem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Counting Valleys: &lt;a href="https://www.hackerrank.com/challenges/counting-valleys/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/counting-valleys/problem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Jumping on the Clouds: &lt;a href="https://www.hackerrank.com/challenges/jumping-on-the-clouds/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/jumping-on-the-clouds/problem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Repeating Strings: &lt;a href="https://www.hackerrank.com/challenges/repeated-string/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/repeated-string/problem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Pyramid Problem: Create a simple program that takes the number of rows as input and prints out an asterisk pyramid.&lt;/li&gt;
&lt;li&gt;2D Array - DS: &lt;a href="https://www.hackerrank.com/challenges/2d-array/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/2d-array/problem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Arrays - Left Rotation: &lt;a href="https://www.hackerrank.com/challenges/ctci-array-left-rotation/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/ctci-array-left-rotation/problem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Anagram: &lt;a href="https://www.hackerrank.com/challenges/anagram/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/anagram/problem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Jim and the Skyscrapers: &lt;a href="https://www.hackerrank.com/challenges/jim-and-the-skyscrapers/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/jim-and-the-skyscrapers/problem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Simple Chess Engine: &lt;a href="https://www.hackerrank.com/challenges/simplified-chess-engine/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/simplified-chess-engine/problem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;King Richard's Knights: &lt;a href="https://www.hackerrank.com/challenges/king-richards-knights/problem" rel="nofollow noopener noreferrer"&gt;https://www.hackerrank.com/challenges/king-richards-knights/problem&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/MuizAlvi/Python_Programs" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;h2&gt;
  
  
  Logic &amp;amp; Rules-Based Approach
&lt;/h2&gt;

&lt;p&gt;Remember this diagram?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8ht6yrjdwan8g2csiumk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8ht6yrjdwan8g2csiumk.png" alt="11" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You've already been made familiar with the Machine Learning and Deep Learning subsets of this field, but what about the portion of AI that is not enclosed by either subset? Well, that is the part of AI that makes use of a logical and rules-based approach, and by that we simply mean &lt;a href="https://www.geeksforgeeks.org/python-if-else/" rel="noopener noreferrer"&gt;if-else statements&lt;/a&gt;! &lt;/p&gt;

&lt;p&gt;There was this &lt;a href="https://en.wikipedia.org/wiki/Internet_meme" rel="noopener noreferrer"&gt;meme&lt;/a&gt; that circulated back in 2014:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1moognwaegl3uyaxjh7v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1moognwaegl3uyaxjh7v.jpg" alt="Meme" width="700" height="821"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So how accurate is this statement anyway? Can we create AI using simple if-else statements? The answer is yes, we can, but this approach comes with a lot of limitations.&lt;/p&gt;

&lt;p&gt;First of all, to program an AI like &lt;a href="https://en.wikipedia.org/wiki/Sophia_(robot)" rel="noopener noreferrer"&gt;Sophia&lt;/a&gt;, one would have to write about 50,000 such statements, which is quite laborious. Secondly, it would take a processor a long time to execute such code, making it quite inefficient as well. &lt;/p&gt;

&lt;p&gt;So is there a way to make use of this approach? Yes! Some Deep Learning models make use of this method by combining if-else statements with &lt;a href="https://www.geeksforgeeks.org/object-oriented-programming-in-python-set-1-class-and-its-members/" rel="noopener noreferrer"&gt;Object Oriented Programming&lt;/a&gt;. Here is some sample code to aid your understanding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;road&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;people&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
   &lt;span class="n"&gt;car&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;move&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
   &lt;span class="n"&gt;car&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In this program, the car only moves when people are not seen on the road.&lt;/p&gt;

&lt;p&gt;So in conclusion, &lt;em&gt;If-else Statements are a very limited approach to solving AI problems&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Intro to Machine Learning
&lt;/h2&gt;

&lt;p&gt;Now let us move towards a more refined version of AI, known as Machine Learning. In this subfield, we make use of &lt;a href="https://en.wikipedia.org/wiki/Algorithm#:~:text=In%20mathematics%20and%20computer%20science,or%20to%20perform%20a%20computation." rel="noopener noreferrer"&gt;algorithms&lt;/a&gt; that find patterns in data and infer rules on their own. So what do we mean by learning through data? Let's first look at the way we humans learn. Consider the picture below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl8p6g0pjq4hi8qhip4h4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl8p6g0pjq4hi8qhip4h4.jpg" alt="6" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this image, we are able to identify the animal on the left as a 'dog' and the one on the right as a 'cat'. Ever wondered how we are able to do this? Chances are that you have seen multiple cats and dogs in real life, in books, on television etc. and are now able to identify them with ease. Machine Learning works in a similar way: we give a program some data and tell it to learn features (like whiskers, ear size etc.); it then adjusts its learnable parameter(s) according to this data, and only then is it able to produce the desired output. More on learnable parameters in the following example. &lt;/p&gt;
&lt;h2&gt;
  
  
  First AI Classifier (hands-on)
&lt;/h2&gt;

&lt;p&gt;Now let us try to build our first AI classifier. Read the problem statement below to understand what we are trying to build:&lt;/p&gt;
&lt;h4&gt;
  
  
  Problem Statement:
&lt;/h4&gt;

&lt;p&gt;A computer is given a dataset of 'labeled' animal heights to help train its learnable parameter 'x'. Once trained, the computer will be given a height value and will have to determine whether this height belongs to a 'cat' or a 'dog'.&lt;/p&gt;

&lt;p&gt;Algorithm: When the computer receives a height value greater than x, it outputs 'dog'; if the value is less than or equal to x, it outputs 'cat'.&lt;/p&gt;

&lt;p&gt;Dataset:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc8r54qo0ubivfopc3qjs.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc8r54qo0ubivfopc3qjs.PNG" alt="7" width="157" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now before moving towards the code, you should first try to figure out the unknown label for the 50 cm height on your own. (Hint: It will help to write down all heights and labels in ascending order). Once done, follow the steps below to create your first AI classifier:&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1: Create Data Variables &amp;amp; Lists:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; 
&lt;span class="n"&gt;label&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dog&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dog&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The code above is our training data, taken from the table in the problem statement. Each data point and its label have been placed in the respective lists in matching order. Our program will learn from this data.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="c1"&gt;# learning parameter
&lt;/span&gt;&lt;span class="n"&gt;H&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We have now initialized our learnable parameter 'x'. The program will set the value of x according to our data. The variable 'H' is the height of the animal we need to classify.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2: Train the Program
&lt;/h4&gt;

&lt;p&gt;The code snippet below simply sets a value for 'x' on the basis of our data and labels being used.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt; 
    &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;label&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once this code has run, 'x' holds the largest cat height in the training data (25 for our dataset), which acts as the decision threshold.&lt;/p&gt;
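&lt;p&gt;To make the training step concrete, here is a plain, un-highlighted sketch of the same loop; the extra print is only there to trace how 'x' changes:&lt;/p&gt;

```python
# Trace of the training loop, using the data from the problem
# statement. 'x' keeps the largest height that was labelled 'cat',
# which becomes our decision threshold.
data = [20, 60, 25, 18, 40]
label = ['cat', 'dog', 'cat', 'cat', 'dog']

x = 0
for i in range(len(data)):
    if data[i] > x and label[i] == 'cat':
        x = data[i]
    print(i, data[i], label[i], '-- x =', x)

# After the loop, x == 25: the tallest cat seen in training.
```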
&lt;h4&gt;
  
  
  Step 3: Test the Program
&lt;/h4&gt;

&lt;p&gt;This is a simple if-else statement that compares 'H' with 'x' and determines whether the animal is a 'cat' or 'dog' based on its height.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;H&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dog&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;print &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Since 'H' (50) is greater than the tallest cat height in the data (25), the output should be &lt;strong&gt;dog&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And that's it! You have successfully built, trained and tested your first AI classifier! The key takeaway here is that a model extracts features from data and sets its learnable parameter according to those features. It then uses the learnable parameter to predict outcomes on test data.&lt;/p&gt;
&lt;h4&gt;
  
  
  Full Code
&lt;/h4&gt;

&lt;p&gt;This is what the final form of your code should look like after completing all the steps. I have added comments as well for better understanding.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# Training
&lt;/span&gt;
&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Training dataset 
&lt;/span&gt;&lt;span class="n"&gt;label&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dog&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dog&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Training dataset labels
&lt;/span&gt;
&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="c1"&gt;# initial value for x
&lt;/span&gt;&lt;span class="n"&gt;H&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="c1"&gt;# Test data
&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt; 
    &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;label&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Testing
&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;H&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dog&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;print &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
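&lt;p&gt;As a side note, the same classifier can be written more idiomatically in plain Python. The function names below ('train', 'classify') are my own illustrative choices, and note one small boundary difference: a height exactly equal to the tallest training cat is classified as 'cat' here:&lt;/p&gt;

```python
# An equivalent, more idiomatic sketch of the full classifier.
# The threshold is the height of the tallest training cat, and
# anything taller is predicted to be a dog.
def train(data, label):
    """Return the decision threshold: the largest 'cat' height."""
    return max((d for d, l in zip(data, label) if l == 'cat'), default=0)

def classify(height, threshold):
    """Predict 'dog' for anything taller than the tallest known cat."""
    return 'dog' if height > threshold else 'cat'

data = [20, 60, 25, 18, 40]
label = ['cat', 'dog', 'cat', 'cat', 'dog']

threshold = train(data, label)   # 25 for this dataset
print(classify(50, threshold))   # prints dog
```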

&lt;h2&gt;
  
  
  Classic Machine Learning &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Machine Learning has been around for quite some time now. We know that Deep Learning is a subset of ML, so let us first discuss the part of ML that does not fall into the DL category: the 'Classic' version of ML. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm9r44wf4cex9awxl1yw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm9r44wf4cex9awxl1yw1.png" alt="10" width="800" height="328"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;I would now like to ask you a question. &lt;strong&gt;Was the above AI classifier an example of Classic Machine Learning?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The answer is &lt;strong&gt;no&lt;/strong&gt;. This is because of &lt;a href="https://en.wikipedia.org/wiki/Tom_M._Mitchell" rel="noopener noreferrer"&gt;Tom Mitchell&lt;/a&gt;'s modern definition of ML: &lt;em&gt;"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."&lt;/em&gt; In our classifier, the program lacks a performance measure and so is &lt;strong&gt;not&lt;/strong&gt; an example of Machine Learning. It was only presented to give you an idea about how computers learn.&lt;/p&gt;

&lt;p&gt;Classic Machine Learning follows the same principles: it extracts features from data and adjusts its learnable parameters (e.g. weights and biases), but it also accounts for performance. In layman's terms, this means the program learns from its mistakes. We will not go into how this works, or the math behind it, as that is beyond the scope of an introductory post, but I hope the example gave you some basic intuition about &lt;em&gt;how&lt;/em&gt; ML works.&lt;/p&gt;
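&lt;p&gt;To give some intuition for what a performance measure adds, here is a hypothetical sketch (not part of the original example) of a learner that explicitly scores candidate thresholds by training accuracy and keeps the best one:&lt;/p&gt;

```python
# Illustrative only: a tiny learner with an explicit performance
# measure P (training accuracy). Unlike the earlier classifier, it
# evaluates every candidate threshold and keeps the one that makes
# the fewest mistakes -- i.e. it improves with experience, in the
# spirit of Mitchell's definition.
data = [20, 60, 25, 18, 40]
label = ['cat', 'dog', 'cat', 'cat', 'dog']

def accuracy(threshold):
    # P: fraction of training examples classified correctly
    preds = ['dog' if d > threshold else 'cat' for d in data]
    return sum(p == l for p, l in zip(preds, label)) / len(label)

# Candidate thresholds: the data points themselves
best = max(data, key=accuracy)
print(best, accuracy(best))   # a perfect threshold for this tiny dataset
```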
&lt;h4&gt;
  
  
  Applications
&lt;/h4&gt;

&lt;p&gt;There are many applications of Classic Machine Learning. Corporate giants make use of spam filters that train on spam emails, learn phrases like 'Nigerian prince' or 'free survey' over time, and filter out such emails, allowing for a cleaner inbox. &lt;/p&gt;
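&lt;p&gt;As a rough illustration of the idea (real spam filters learn word probabilities from labelled mail, e.g. with naive Bayes, rather than using a hand-written list), a toy keyword scorer might look like this:&lt;/p&gt;

```python
# Toy sketch of keyword-based spam scoring. The phrase list is an
# illustrative assumption; a trained filter would learn it from data.
SPAM_PHRASES = ['nigerian prince', 'free survey', 'act now']

def spam_score(email_text):
    text = email_text.lower()
    # Count how many known spammy phrases appear in the email
    return sum(phrase in text for phrase in SPAM_PHRASES)

print(spam_score('Dear friend, I am a Nigerian prince...'))  # 1
print(spam_score('Meeting moved to 3pm'))                    # 0
```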

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7u90ywodztpc7fra6h9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7u90ywodztpc7fra6h9i.png" alt="11" width="712" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another application is image classification, which will be discussed in detail later in this post.&lt;/p&gt;
&lt;h4&gt;
  
  
  Resources
&lt;/h4&gt;

&lt;p&gt;The best way to start with ML would be with a course by &lt;a href="https://en.wikipedia.org/wiki/Andrew_Ng" rel="noopener noreferrer"&gt;Andrew Ng&lt;/a&gt; available on &lt;a href="//www.coursera.org"&gt;Coursera&lt;/a&gt;. This is a free course that covers ML topics extensively. You can check it out &lt;a href="https://www.coursera.org/learn/machine-learning" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For those who have already completed an ML course and wish to strengthen their concepts with some intro to DL, there is a &lt;a href="https://deeplizard.com/" rel="noopener noreferrer"&gt;DeepLizard&lt;/a&gt; playlist covering the fundamentals of both ML and DL. You can find the playlist &lt;a href="https://www.youtube.com/playlist?list=PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Intro to Deep Learning &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Deep Learning is a sub-field of Machine Learning that allows us to overcome the limitations of Classic Machine Learning and train powerful models with revolutionary applications. The field makes use of &lt;a href="https://en.wikipedia.org/wiki/Artificial_neural_network" rel="noopener noreferrer"&gt;Neural Networks&lt;/a&gt; (shown below) for its calculations and is quite &lt;a href="https://en.wikipedia.org/wiki/Graphics_processing_unit" rel="noopener noreferrer"&gt;GPU&lt;/a&gt; intensive. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnj4upc7nvefcpxjrfpm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnj4upc7nvefcpxjrfpm6.png" alt="12" width="800" height="357"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In the diagram above, each circle represents a node and each column of nodes represents a layer. Check out the &lt;a href="https://dev.to/muizalvi/build-your-first-neural-network-with-the-keras-api-35b4"&gt;Build your first Neural Network with the Keras API&lt;/a&gt; tutorial to learn all about Neural Networks and their implementation.&lt;/p&gt;
&lt;h2&gt;
  
  
  Machine Learning vs. Deep Learning &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If you understand the concepts of ML and DL but are not exactly aware of the technical differences between the two, then this section is just for you! Let us look at the differences between the approaches taken by both subfields with the help of an image classification example. The following is a picture of me taken a few weeks ago. Our models will try to train themselves on this image. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbjbafffh3spqnasrnjlr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbjbafffh3spqnasrnjlr.png" alt="1" width="367" height="412"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;h4&gt;
  
  
  Machine Learning Approach
&lt;/h4&gt;

&lt;p&gt;An ML model will first extract features, for instance face shape and distance between pupils &lt;em&gt;relative&lt;/em&gt; to distance between forehead and chin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm2a2ga7md6dc1tuy6kmc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm2a2ga7md6dc1tuy6kmc.png" alt="2" width="367" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The model will then adjust its learnable parameter according to these features.&lt;/p&gt;
&lt;h4&gt;
  
  
  Deep Learning Approach
&lt;/h4&gt;

&lt;p&gt;A DL model, on the other hand, takes the image as a whole and passes it through a Neural Network. The Neural Net for our problem looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg1khbvrp6p01x5aw9at9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg1khbvrp6p01x5aw9at9.jpeg" alt="8" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This Network has been programmed to &lt;em&gt;slice up&lt;/em&gt; an image and take each slice as an input for each input node (nodes on the left most side). The image will be split as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzyjt7omq5rjjwtnrnsmx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzyjt7omq5rjjwtnrnsmx.png" alt="3" width="367" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now each input node will extract some feature from its slice on its own and pass it on to the next node, which finds another feature, and so on. The nodes learn the features on their own. These features could be anything, like &lt;em&gt;hair color&lt;/em&gt;, &lt;em&gt;eye shape&lt;/em&gt; or &lt;em&gt;lip shape&lt;/em&gt;. This is exactly why DL is GPU intensive: each node carries out complex calculations to extract and learn features. It is also why DL models are often far more accurate than classic ML models.&lt;/p&gt;
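&lt;p&gt;A minimal sketch of the slicing idea, using a flattened 4x4 stand-in for an image (the slice size and node count are illustrative assumptions, not how any particular framework does it):&lt;/p&gt;

```python
# Hypothetical illustration of "slicing" an input for a network's
# input layer: 16 stand-in pixel values split into four equal
# slices, one per input node.
image = list(range(16))          # stand-in for 16 pixel intensities
n_nodes = 4
slice_len = len(image) // n_nodes
slices = [image[i * slice_len:(i + 1) * slice_len] for i in range(n_nodes)]
print(slices)   # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
```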
&lt;h2&gt;
  
  
  More on Deep Learning &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The applications of DL include self-driving cars, object detection algorithms (like the one discussed above) and &lt;a href="https://en.wikipedia.org/wiki/Deepfake" rel="noopener noreferrer"&gt;Deepfakes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The most efficient way to create DL models is simply to use certain Python libraries in your code. These libraries make it easy to build, train, validate and test your Deep Learning models. Examples of such libraries are &lt;a href="https://www.tensorflow.org/" rel="noopener noreferrer"&gt;TensorFlow&lt;/a&gt;, &lt;a href="https://keras.io/" rel="noopener noreferrer"&gt;Keras&lt;/a&gt;, &lt;a href="https://pytorch.org/" rel="noopener noreferrer"&gt;PyTorch&lt;/a&gt; and &lt;a href="https://www.fast.ai/" rel="noopener noreferrer"&gt;fast.ai&lt;/a&gt;. &lt;/p&gt;
&lt;h4&gt;
  
  
  Resources
&lt;/h4&gt;

&lt;p&gt;You can access these libraries and many more using &lt;a href="https://www.anaconda.com/" rel="noopener noreferrer"&gt;Anaconda&lt;/a&gt;. You can learn all about acquiring Anaconda and installing DL libraries in the &lt;a href="https://dev.to/muizalvi/setting-up-python-environments-using-anaconda-1a2m"&gt;Setting up Python environments using Anaconda&lt;/a&gt; tutorial.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn67jc48dadonxfqnv2ti.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn67jc48dadonxfqnv2ti.PNG" alt="13" width="800" height="426"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You can also check out the Deep Learning Specialization by &lt;a href="https://en.wikipedia.org/wiki/Andrew_Ng" rel="noopener noreferrer"&gt;Andrew Ng&lt;/a&gt; available on &lt;a href="https://www.coursera.org/" rel="noopener noreferrer"&gt;Coursera&lt;/a&gt;. This is a powerful five-course specialization that covers DL in great detail. It is &lt;strong&gt;not&lt;/strong&gt; free, but you can apply for financial aid if necessary. Check out the course &lt;a href="https://www.coursera.org/specializations/deep-learning" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another recommended course is a video series by &lt;a href="//fast.ai"&gt;fast.ai&lt;/a&gt; that covers the use of the fast.ai Python library. The series has eight videos, and you can check them out &lt;a href="https://course.fast.ai/videos/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Deep Learning Models
&lt;/h4&gt;

&lt;p&gt;The following GitHub repository contains a number of DL models that you can use to your liking. I created this repo so that people studying math-intensive DL courses can see the programs in action. It contains actual image classifiers built with Python libraries.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/MuizAlvi" rel="noopener noreferrer"&gt;
        MuizAlvi
      &lt;/a&gt; / &lt;a href="https://github.com/MuizAlvi/Machine_Learning_and_Deep_Learning_models" rel="noopener noreferrer"&gt;
        Machine_Learning_and_Deep_Learning_models
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Repository containing models based on ideas of Machine learning and Deep learning
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Machine_Learning_and_Deep_Learning_models&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;Repository containing models based on ideas of Machine learning and Deep learning. List of files:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Simple Sequential Model&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Uses a randomly generated training set (10% of which is used as a validation set) and test data&lt;/li&gt;
&lt;li&gt;Shows final predictions in a confusion matrix&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cat and Dog Classifier - Convolution Neural Network&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Uses a data set of 1300 images (1000 for training set, 200 for validation set, 100 for test set) randomly picked out of a larger data set of 25000 images&lt;/li&gt;
&lt;li&gt;Image Data: &lt;a href="https://www.kaggle.com/c/dogs-vs-cats/data" rel="nofollow noopener noreferrer"&gt;https://www.kaggle.com/c/dogs-vs-cats/data&lt;/a&gt; (25000 images of cats and dogs)&lt;/li&gt;
&lt;li&gt;Model experiences overfitting and needs to be improved&lt;/li&gt;
&lt;li&gt;Model has not been tested for now due to overfitting on the training set&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cat and Dog Classifier 2.0 [using existing model] - Convolution Neural Network&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Trains existing model VGG16 (with some alterations)&lt;/li&gt;
&lt;li&gt;Uses the data preparation used in the previous upload (Cat and Dog Classifier - Convolution Neural Network)&lt;/li&gt;
&lt;li&gt;Highly accurate model with…&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/MuizAlvi/Machine_Learning_and_Deep_Learning_models" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;h2&gt;
  
  
  Theoretical &amp;amp; Hands-on Notebooks &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Although videos, books and courses are a great source of learning, fields like AI require a more hands-on approach. The following repository contains a number of &lt;a href="https://jupyter.org/" rel="noopener noreferrer"&gt;Jupyter notebooks&lt;/a&gt; to help further your understanding of ML and DL. It also includes the code for the AI classifier example, along with a number of resources to guide you on the path of Artificial Intelligence.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/MuizAlvi" rel="noopener noreferrer"&gt;
        MuizAlvi
      &lt;/a&gt; / &lt;a href="https://github.com/MuizAlvi/AI101" rel="noopener noreferrer"&gt;
        AI101
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Repository containing guidance material for participants attending the AI101 Workshop 
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;AI101&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;Repository containing hands-on, open-source, practise notebooks for participants attending the &lt;a href="https://www.facebook.com/events/429419051422702" rel="nofollow noopener noreferrer"&gt;AI 101: Introduction to Artificial Intelligence&lt;/a&gt; Workshop.&lt;/p&gt;
&lt;p&gt;List of files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Workshop Resources&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Resources.txt&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Workshop Example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Feature Classifier Example&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For Machine Learning:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;01 - Data Exploration&lt;/li&gt;
&lt;li&gt;02 - Regression&lt;/li&gt;
&lt;li&gt;03 - Classification&lt;/li&gt;
&lt;li&gt;04 - Clustering&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For Deep Learning:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;05a - Deep Neural Networks (PyTorch)&lt;/li&gt;
&lt;li&gt;05a - Deep Neural Networks (TensorFlow)&lt;/li&gt;
&lt;li&gt;05b - Convolutional Neural Networks (PyTorch)&lt;/li&gt;
&lt;li&gt;05b - Convolutional Neural Networks (Tensorflow)&lt;/li&gt;
&lt;li&gt;05c - Transfer Learning (PyTorch)&lt;/li&gt;
&lt;li&gt;05c - Transfer Learning (Tensorflow)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;

  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/MuizAlvi/AI101" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Note: Please do make use of these notebooks, as hands-on practice will greatly improve your understanding.&lt;/p&gt;
&lt;h2&gt;
  
  
  Problems with ML &amp;amp; DL &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Classic Machine Learning comes with a number of limitations, chief among them that features must be chosen and engineered by hand, as we saw in the image classification problem above. &lt;/p&gt;

&lt;p&gt;The problem with Deep Learning is that it is GPU intensive (as mentioned several times above), requires huge amounts of training data, and involves fairly complicated code.&lt;/p&gt;
&lt;h2&gt;
  
  
  Cloud Services: Microsoft Azure &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/" rel="noopener noreferrer"&gt;Microsoft Azure&lt;/a&gt; is your one-stop-shop solution to all the above mentioned problems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fohe7vbfba2f0b713pxoh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fohe7vbfba2f0b713pxoh.png" alt="microsoft-azure-500x500" width="480" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Azure provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Virtual_machine" rel="noopener noreferrer"&gt;Virtual Machines&lt;/a&gt; that do complex computations for you!&lt;/li&gt;
&lt;li&gt;Cloud storage space to store your models and datasets&lt;/li&gt;
&lt;li&gt;Open datasets that you can use to train your models&lt;/li&gt;
&lt;li&gt;Automated ML (drag and drop), so you don't even have to code anything!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now a lot of individuals hesitate to use this platform because it's a paid service, but there is a way around this! &lt;/p&gt;
&lt;h4&gt;
  
  
  Azure Free Credits
&lt;/h4&gt;

&lt;p&gt;I will be listing down two ways to obtain free Azure credits to help you get started on the platform:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You can get $200 worth of free Azure credits simply by signing up for an Azure account &lt;a href="https://azure.microsoft.com/en-us/free/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;You can obtain $100 worth of free Azure credits and much more by applying for the GitHub Student Developer Pack &lt;a href="https://education.github.com/pack" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Budgeting
&lt;/h4&gt;

&lt;p&gt;Now the question you're probably asking yourself is &lt;em&gt;how much can be done with this amount of free credits?&lt;/em&gt; Well, to answer that: I built, trained, tested and deployed both an ML and a DL model on Azure, and it cost me only $0.64! You can see the details below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdsknct1frc033i5gg2gb.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdsknct1frc033i5gg2gb.PNG" alt="9" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So either package is more than enough for learning purposes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;AI is a powerful field with immense research and implementation opportunities. You should now have a good idea about the field, its subfields and a possible path to take when it comes to learning. If you are starting out with AI, please feel free to make use of the provided resources, as they will help you immensely on your journey. I would also urge you to share this post in your circle, as it addresses a lot of the issues people face in the AI learning process. It's all here, we just need to get the word out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx749s8bckfq0rfvy33sy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx749s8bckfq0rfvy33sy.jpeg" alt="DL is a superpower" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The picture above is a reference to &lt;a href="https://en.wikipedia.org/wiki/Andrew_Ng" rel="noopener noreferrer"&gt;Andrew Ng&lt;/a&gt;'s &lt;a href="https://www.coursera.org/specializations/deep-learning" rel="noopener noreferrer"&gt;Deep Learning Specialization&lt;/a&gt;, in which he tells us that DL is no less than a superpower and that it is our job to make the best use of it.&lt;/p&gt;

&lt;p&gt;I hope the post was clear and covered everything. Please use the discussion/comment section to let me know if you faced any difficulty or have any questions. &lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read this article!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>github</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Build your first Neural Network with the Keras API</title>
      <dc:creator>Muiz Alvi</dc:creator>
      <pubDate>Tue, 29 Dec 2020 17:12:37 +0000</pubDate>
      <link>https://dev.to/muizalvi/build-your-first-neural-network-with-the-keras-api-35b4</link>
      <guid>https://dev.to/muizalvi/build-your-first-neural-network-with-the-keras-api-35b4</guid>
      <description>&lt;h2&gt;
  
  
  Objective
&lt;/h2&gt;

&lt;p&gt;This tutorial will help programmers build, train and test their first Neural Network using the powerful Keras python library.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;About Neural Networks and Keras&lt;/li&gt;
&lt;li&gt;Github code&lt;/li&gt;
&lt;li&gt;Problem Statement&lt;/li&gt;
&lt;li&gt;Generating Dataset&lt;/li&gt;
&lt;li&gt;Building a Sequential Model&lt;/li&gt;
&lt;li&gt;Training the Model&lt;/li&gt;
&lt;li&gt;Testing the Model using Predictions&lt;/li&gt;
&lt;li&gt;Plotting Predictions using Confusion Matrix&lt;/li&gt;
&lt;li&gt;Final Code&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In pursuit of learning about the field of artificial intelligence, many come across the term 'Neural Networks'. They realize the importance of these algorithms and their application in the field of deep learning, but often face difficulty building their own. &lt;/p&gt;

&lt;p&gt;This tutorial will not only show you how to build a neural network from scratch, but will also walk you over the code for training and testing your model. All while solving an actual deep learning problem in the process! &lt;/p&gt;

&lt;p&gt;Note: The Keras API, along with thousands of other Python libraries, can be accessed from the Anaconda Navigator. You can learn all about acquiring Anaconda and installing the Keras API in the &lt;a href="https://dev.to/muizalvi/setting-up-python-environments-using-anaconda-1a2m"&gt;Setting up Python environments using Anaconda&lt;/a&gt; tutorial.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Neural Networks and Keras
&lt;/h2&gt;

&lt;p&gt;Artificial Neural Networks (ANNs), or simply Neural Networks, are a family of algorithms modeled after the biological activity of the human brain. Neural Networks are composed of layers, and each layer consists of nodes (also called neurons).&lt;/p&gt;

&lt;p&gt;You can learn more about Neural Networks from the following video created by &lt;a href="https://deeplizard.com/" rel="noopener noreferrer"&gt;DeepLizard&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/hfK_dvC-avg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This video is part of the &lt;a href="https://www.youtube.com/playlist?app=desktop&amp;amp;list=PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU" rel="noopener noreferrer"&gt;Machine Learning &amp;amp; Deep Learning Fundamentals&lt;/a&gt; playlist and has been taken from the &lt;a href="https://www.youtube.com/channel/UC4UJ26WkceqONNF5S26OiVw" rel="noopener noreferrer"&gt;DeepLizard YouTube channel&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Keras is a Python library that uses &lt;a href="https://www.tensorflow.org/" rel="noopener noreferrer"&gt;TensorFlow&lt;/a&gt; as its backend. This library allows us to build, train and test models effectively, and also lets us make use of its pre-existing models. You can learn more about the deep learning API on the &lt;a href="https://keras.io/" rel="noopener noreferrer"&gt;Keras website&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Github Code
&lt;/h2&gt;

&lt;p&gt;The following problem statement along with the code for this blog are available on &lt;a href="https://github.com/MuizAlvi" rel="noopener noreferrer"&gt;my Github profile&lt;/a&gt;, compiled in a Jupyter Notebook titled &lt;a href="https://github.com/MuizAlvi/Machine_Learning_and_Deep_Learning_models/blob/master/First%20Neural%20Network%20with%20Keras%20API.ipynb" rel="noopener noreferrer"&gt;First Neural Network with Keras API&lt;/a&gt;. You can view and make use of the code as you like, though I encourage you to read through this blog as well for a better explanation of the written code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem Statement
&lt;/h2&gt;

&lt;p&gt;An experimental drug was tested on 2100 individuals in a clinical trial. The ages of participants ranged from 13 to 100. Half of the participants were under 65 years of age; the other half were 65 years or older. &lt;/p&gt;

&lt;p&gt;Ninety-five percent of the patients that were 65 years or older experienced side effects, while ninety-five percent of the patients under 65 years of age experienced none.&lt;/p&gt;

&lt;p&gt;You have to build a program that takes the age of a participant as input and predicts whether that participant suffered from a side effect or not.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate a random dataset that adheres to these statements&lt;/li&gt;
&lt;li&gt;Divide the dataset into Training (90%) and Validation (10%) set&lt;/li&gt;
&lt;li&gt;Build a Simple Sequential Model&lt;/li&gt;
&lt;li&gt;Train and Validate the Model using the dataset&lt;/li&gt;
&lt;li&gt;Randomly choose 20% data from dataset as Test set&lt;/li&gt;
&lt;li&gt;Plot predictions made by the Model on the Test set&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Generating Dataset
&lt;/h2&gt;

&lt;p&gt;First we import some of the libraries needed for generating the dataset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;randint&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.utils&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;shuffle&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.preprocessing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MinMaxScaler&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We are importing &lt;a href="https://numpy.org/" rel="noopener noreferrer"&gt;numpy&lt;/a&gt; as all our variables are n-dimensional arrays. The rest of the libraries will be used for randomizing, shuffling and scaling the data respectively.&lt;/p&gt;

&lt;p&gt;Next we initialize the empty lists for training samples along with their labels.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;train_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;train_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The train_samples list holds participant ages and the train_labels list records whether each participant suffered from side effects (denoted by '1') or not (denoted by '0'). &lt;/p&gt;

&lt;p&gt;We now move towards randomly generating values for both lists.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# The 5% of younger individuals who did experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_younger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_younger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# The 5% of older individuals who did not experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_older&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_older&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# The 95% of younger individuals who did not experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_younger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_younger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# The 95% of older individuals who did experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_older&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_older&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The first 50 iterations generate random ages for &lt;em&gt;participants younger than 65 that did suffer from side effects&lt;/em&gt; (labeled '1') and &lt;em&gt;participants 65 and older that did not suffer from side effects&lt;/em&gt; (labeled '0'). Each random age is generated, placed in the random_younger or random_older variable and then appended to the train_samples list, while the corresponding label is appended to the train_labels list. The next 1000 iterations take a similar approach, but for &lt;em&gt;participants younger than 65 that did not suffer from side effects&lt;/em&gt; (labeled '0') and &lt;em&gt;participants 65 and older that did suffer from side effects&lt;/em&gt; (labeled '1').&lt;/p&gt;
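&lt;p&gt;As a quick sanity check on the loop bounds, simple arithmetic (worked out by hand here, not taken from the notebook) confirms that the two loops produce the 2100 participants described in the problem statement, split evenly between the age groups:&lt;/p&gt;

```python
# Each iteration of either loop appends one younger and one older participant.
total = 50 * 2 + 1000 * 2          # samples generated in total
younger = 50 + 1000                # participants under 65
older = 50 + 1000                  # participants 65 or older
minority_share = 50 / (50 + 1000)  # share of "atypical" cases per age group
print(total, younger, older, round(minority_share, 3))
```

&lt;p&gt;The minority share works out to roughly 4.8%, close to the 5% stated in the problem.&lt;/p&gt;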

&lt;p&gt;Once the iterations are complete, the lists are converted to arrays, and the data in these arrays is shuffled.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;train_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;train_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;train_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The conversion is required as calculations are done on n dimensional arrays and not lists. The shuffling is done to remove any order imposed on the data set during the creation process.&lt;/p&gt;
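&lt;p&gt;Shuffling this way is safe because sklearn's shuffle permutes all of its arguments with the &lt;em&gt;same&lt;/em&gt; permutation, so every age keeps its label. A tiny sketch with made-up values:&lt;/p&gt;

```python
import numpy as np
from sklearn.utils import shuffle

ages = np.array([20, 70, 30, 80])
labels = np.array([0, 1, 0, 1])

# Both arrays are permuted identically, so (age, label) pairs stay intact.
shuffled_labels, shuffled_ages = shuffle(labels, ages, random_state=0)
print(list(zip(shuffled_ages, shuffled_labels)))
```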

&lt;p&gt;The next step requires us to transform and scale data in order to pass it through our model.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;scaler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MinMaxScaler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;feature_range&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;scaled_train_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit_transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In the first line we specify the range for the imported MinMaxScaler, and in the next line we scale the values of train_samples to that range while also reshaping the array into the single-column shape our model expects.&lt;/p&gt;
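&lt;p&gt;For reference, MinMaxScaler applies x' = (x - min) / (max - min) to each feature, mapping the smallest observed value to 0 and the largest to 1. A tiny illustration with made-up ages:&lt;/p&gt;

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

ages = np.array([13, 50, 100])
scaler = MinMaxScaler(feature_range=(0, 1))

# reshape(-1, 1) turns the flat array into one column, as the scaler expects
scaled = scaler.fit_transform(ages.reshape(-1, 1))
print(scaled.ravel())   # 13 maps to 0.0, 100 maps to 1.0, 50 maps to (50-13)/87
```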
&lt;h2&gt;
  
  
  Building a Sequential Model
&lt;/h2&gt;

&lt;p&gt;Now we move towards building our Neural Network. The first step is to import the TensorFlow and Keras libraries, along with certain components required for our model. These are all imported from within Keras, which uses a TensorFlow backend.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;keras&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow.keras.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Sequential&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow.keras.layers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Activation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Dense&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow.keras.optimizers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Adam&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow.keras.metrics&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;categorical_crossentropy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Our model is a simple sequential model, meaning that its layers are linearly stacked, hence we import &lt;a href="https://www.tensorflow.org/api_docs/python/tf/keras/Sequential" rel="noopener noreferrer"&gt;Sequential&lt;/a&gt;. From layers we import the fully-connected or &lt;a href="//tensorflow.org/api_docs/python/tf/keras/layers/Dense"&gt;Dense&lt;/a&gt; layer, along with the &lt;a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Activation" rel="noopener noreferrer"&gt;activation&lt;/a&gt; function applied to each node's output. &lt;a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers" rel="noopener noreferrer"&gt;Optimizers&lt;/a&gt; are also required to minimize the loss, which is a crucial step in a neural network's training. Finally, categorical_crossentropy is the type of &lt;a href="https://www.tensorflow.org/api_docs/python/tf/keras/losses" rel="noopener noreferrer"&gt;loss function&lt;/a&gt; that we will be using for our model. &lt;/p&gt;

&lt;p&gt;If you are facing difficulty understanding any of these terms, it is a good idea to click on them, as each link takes you directly to the documentation. There is no need to worry, however, as these concepts take time to sink in and improve with practice.&lt;/p&gt;

&lt;p&gt;Let us now build our model. I am creating a model with one input layer (16 units), one hidden layer (32 units) and one output layer (2 units). The right number of layers and units differs from problem to problem, and choosing them well improves over time with practice.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;input_shape&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; 
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; 
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;softmax&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This code is pretty simple, we are creating a sequential model with three dense layers. The input layer takes in a tuple of integers that matches the shape of the input data, hence (1,). 'relu' and 'softmax' are types of activation functions.&lt;/p&gt;
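&lt;p&gt;The parameter counts Keras reports for this model can be worked out by hand: each Dense layer has (inputs × units) weights plus one bias per unit. The arithmetic below is computed from the layer sizes above, not taken from Keras output:&lt;/p&gt;

```python
# Params per Dense layer = (number of inputs * units) + one bias per unit
layer1 = 1 * 16 + 16    # 1 input feature -> 16 units
layer2 = 16 * 32 + 32   # 16 units -> 32 units
layer3 = 32 * 2 + 2     # 32 units -> 2 output units
total_params = layer1 + layer2 + layer3
print(layer1, layer2, layer3, total_params)   # 32 544 66 642
```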

&lt;p&gt;Here is what the summary of the model should look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fifw5861jam3nect1yl1q.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fifw5861jam3nect1yl1q.PNG" alt="2" width="526" height="243"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Training the Model
&lt;/h2&gt;

&lt;p&gt;To train our model we simply use the &lt;em&gt;model.compile()&lt;/em&gt; function followed by the &lt;em&gt;model.fit()&lt;/em&gt; function.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;optimizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Adam&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0001&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sparse_categorical_crossentropy&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;accuracy&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaled_train_samples&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;validation_split&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;batch_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;epochs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shuffle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;verbose&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The compile function pieces our model together while the fit function begins the training process. We specified the 'accuracy' metric as we want to see the accuracy of our model during training. The validation_split parameter automatically splits the dataset into training and validation sets; '0.1' means that 10% goes to the validation set and the remaining 90% goes to the training set. The training data is also split into batches of 10 (batch_size), and the full dataset is passed through the model 30 times; each full pass is referred to as an 'epoch'. Finally, verbose controls how detailed the training output is; setting it to '2' prints one summary line per epoch. &lt;/p&gt;
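&lt;p&gt;As a rough sketch of what these settings imply for the 2100-sample dataset generated earlier (plain arithmetic, assuming Keras holds out exactly 10% for validation):&lt;/p&gt;

```python
total_samples = 2100
val_count = int(total_samples * 0.1)     # validation_split=0.1 -> samples held out
train_count = total_samples - val_count  # samples actually used for training

# batch_size=10 -> number of gradient updates per epoch
batches_per_epoch = train_count // 10
print(val_count, train_count, batches_per_epoch)   # 210 1890 189
```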
&lt;h2&gt;
  
  
  Preprocessing Test Data
&lt;/h2&gt;

&lt;p&gt;This is similar to the data scaling and transforming done for the training and validation data sets. &lt;/p&gt;

&lt;p&gt;First we initialize the test lists.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;test_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;test_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;One for ages and one for labels, recording whether the participant &lt;em&gt;suffered from side effects&lt;/em&gt; (denoted as '1') or &lt;em&gt;did not&lt;/em&gt; (denoted as '0'). &lt;/p&gt;

&lt;p&gt;Now we randomly generate values for both lists.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# The 5% of younger individuals who did experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_younger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_younger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# The 5% of older individuals who did not experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_older&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_older&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# The 95% of younger individuals who did not experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_younger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_younger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# The 95% of older individuals who did experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_older&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_older&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The first 10 iterations generate random ages for &lt;em&gt;participants younger than 65 that did suffer from side effects&lt;/em&gt; (labeled '1') and &lt;em&gt;participants 65 and older that did not suffer from side effects&lt;/em&gt; (labeled '0'). Each random age is generated, placed in the random_younger or random_older variable and then appended to the test_samples list, while the corresponding label is appended to the test_labels list. The next 200 iterations take a similar approach, but for &lt;em&gt;participants younger than 65 that did not suffer from side effects&lt;/em&gt; (labeled '0') and &lt;em&gt;participants 65 and older that did suffer from side effects&lt;/em&gt; (labeled '1').&lt;/p&gt;

&lt;p&gt;We will now convert the lists to numpy arrays and shuffle, similar to what we did with the training/validation set.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;test_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;test_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We will also scale, transform and reshape our data to make it appropriate for our model, similar to the process done for the training/validation set. (Note that since the scaler was already fitted on the training data, calling scaler.transform() here would reuse that fitted scaling, whereas fit_transform refits the scaler on the test data.)&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;scaled_test_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit_transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Testing the Model using Predictions
&lt;/h2&gt;

&lt;p&gt;In order to test our model we will make use of the predict() function, which takes each individual test age and outputs the probability of it belonging to each label. The argmax function is then used to keep only the index of the more probable label and discard the other.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaled_test_samples&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;batch_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;verbose&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;rounded_predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;argmax&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;predictions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;axis&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
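&lt;p&gt;To see what np.argmax() is doing here, consider this small sketch with made-up softmax outputs (the two columns correspond to the two output units of the model):&lt;/p&gt;

```python
import numpy as np

# Hypothetical predict() output: one row per test sample,
# columns hold the probabilities for label 0 and label 1
predictions = np.array([[0.9, 0.1],
                        [0.2, 0.8],
                        [0.6, 0.4]])

# argmax along the last axis keeps the index of the higher probability
rounded_predictions = np.argmax(predictions, axis=-1)
print(rounded_predictions)  # [0 1 0]
```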

&lt;h2&gt;
  
  
  Plotting Predictions using Confusion Matrix
&lt;/h2&gt;

&lt;p&gt;In order to plot the results, I have used a confusion matrix. The code can be found on the scikit-learn website &lt;a href="https://scikit-learn.org/0.18/auto_examples/model_selection/plot_confusion_matrix.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Simply copy the code from the website and run it.&lt;/p&gt;

&lt;p&gt;Now make use of appropriate labels and plot the matrix.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;cm_plot_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;no_side_effects&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;had_side_effects&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nf"&gt;plot_confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cm_plot_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Confusion Matrix&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
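&lt;p&gt;The confusion matrix itself is just a table of counts of true-label/predicted-label pairs. A minimal sketch with made-up labels (the values below are assumptions for illustration, not the model's actual output):&lt;/p&gt;

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical true labels and model predictions
test_labels = np.array([0, 0, 1, 1, 1, 0])
rounded_predictions = np.array([0, 1, 1, 1, 0, 0])

# cm[i, j] counts samples whose true label is i and predicted label is j
cm = confusion_matrix(y_true=test_labels, y_pred=rounded_predictions)
print(cm)  # [[2 1]
           #  [1 2]]
```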


&lt;p&gt;Your output should look something like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F08un0tvdwkqhy0oz04lz.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F08un0tvdwkqhy0oz04lz.PNG" alt="4" width="364" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it! You've successfully created your first neural network that actually solves a real problem!&lt;/p&gt;
&lt;h2&gt;
  
  
  Final Code
&lt;/h2&gt;

&lt;p&gt;Now that we're done with all the steps, your code should look something like this.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;randint&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.utils&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;shuffle&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.preprocessing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MinMaxScaler&lt;/span&gt;

&lt;span class="n"&gt;train_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;     &lt;span class="c1"&gt;# one means side effect experienced, zero means no side effect experienced
&lt;/span&gt;&lt;span class="n"&gt;train_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# The 5% of younger individuals who did experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_younger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_younger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# The 5% of older individuals who did not experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_older&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_older&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# The 95% of younger individuals who did not experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_younger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_younger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# The 95% of older individuals who did experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_older&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_older&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;train_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;train_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;train_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# randomly shuffles each individual array, removing any order imposed on the data set during the creation process
&lt;/span&gt;
&lt;span class="n"&gt;scaler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MinMaxScaler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;feature_range&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="c1"&gt;# specifying scale (range: 0 to 1)
&lt;/span&gt;&lt;span class="n"&gt;scaled_train_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit_transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="c1"&gt;# transforms our data scale (range: 13 to 100) into the one specified above (range: 0 to 1), we use the reshape fucntion as fit_transform doesnot accept 1-D data by default hence we need to reshape accordingly here
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;keras&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow.keras.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Sequential&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow.keras.layers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Activation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Dense&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow.keras.optimizers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Adam&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorflow.keras.metrics&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;categorical_crossentropy&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;input_shape&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; 
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; 
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;softmax&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;optimizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Adam&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0001&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sparse_categorical_crossentropy&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;accuracy&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaled_train_samples&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;train_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;validation_split&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;batch_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;epochs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shuffle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;verbose&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;test_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;test_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# The 5% of younger individuals who did experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_younger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_younger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# The 5% of older individuals who did not experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_older&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_older&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# The 95% of younger individuals who did not experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_younger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_younger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# The 95% of older individuals who did experience side effects
&lt;/span&gt;    &lt;span class="n"&gt;random_older&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_older&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;test_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;test_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;scaled_test_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit_transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="n"&gt;scaled_test_samples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit_transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_samples&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="n"&gt;rounded_predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;argmax&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;predictions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;axis&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.metrics&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;confusion_matrix&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;itertools&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;matplotlib.pyplot&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;

&lt;span class="n"&gt;cm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_true&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;test_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rounded_predictions&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# This function has been taken from the website of scikit Learn. link: https://scikit-learn.org/0.18/auto_examples/model_selection/plot_confusion_matrix.html
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;plot_confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;normalize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Confusion matrix&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;cmap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Blues&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imshow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;interpolation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;nearest&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cmap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cmap&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;colorbar&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;tick_marks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;arange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xticks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tick_marks&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rotation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;yticks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tick_marks&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;normalize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;cm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;float&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)[:,&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;newaxis&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Normalized confusion matrix&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Confusion matrix, without normalization&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;thresh&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mf"&gt;2.&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;itertools&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;product&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])):&lt;/span&gt;
        &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                 &lt;span class="n"&gt;horizontalalignment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;center&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                 &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;white&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;thresh&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;black&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tight_layout&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ylabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;True label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xlabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Predicted label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;cm_plot_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;no_side_effects&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;had_side_effects&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nf"&gt;plot_confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cm_plot_labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Confusion Matrix&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This code is also available on &lt;a href="https://github.com/MuizAlvi" rel="noopener noreferrer"&gt;my GitHub profile&lt;/a&gt;, compiled in a Jupyter Notebook titled &lt;a href="https://github.com/MuizAlvi/Machine_Learning_and_Deep_Learning_models/blob/master/First%20Neural%20Network%20with%20Keras%20API.ipynb" rel="noopener noreferrer"&gt;First Neural Network with Keras API&lt;/a&gt;. Feel free to use the code however you like, and to suggest improvements there as well.&lt;/p&gt;
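For context, the cm that gets passed to plot_confusion_matrix above is just a square array of counts with true labels along the rows and predicted labels along the columns. A minimal, self-contained sketch of how such counts are tallied (the toy labels here are illustrative, not the notebook's actual predictions):

```python
import numpy as np

def confusion_counts(y_true, y_pred, n_classes=2):
    """Tally a confusion matrix: rows = true label, cols = predicted label."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels standing in for the notebook's test-set results
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
print(confusion_counts(y_true, y_pred))
```

In practice the notebook obtains the same kind of array from a library call rather than tallying it by hand, but the structure being plotted is identical.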
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You should now have a good idea about how Neural Networks are built, trained, validated and tested. You can also check out other cool deep learning models in the following GitHub repository: &lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/MuizAlvi" rel="noopener noreferrer"&gt;
        MuizAlvi
      &lt;/a&gt; / &lt;a href="https://github.com/MuizAlvi/Machine_Learning_and_Deep_Learning_models" rel="noopener noreferrer"&gt;
        Machine_Learning_and_Deep_Learning_models
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Repository containing models based on ideas of Machine learning and Deep learning
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Machine_Learning_and_Deep_Learning_models&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;Repository containing models based on ideas of Machine learning and Deep learning. List of files:&lt;/p&gt;


&lt;ol&gt;

&lt;li&gt;

&lt;p&gt;Simple Sequential Model&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses a randomly generated training set (10% of which is used as a validation set) and test data&lt;/li&gt;
&lt;li&gt;Shows final predictions in a confusion matrix&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Cat and Dog Classifier - Convolution Neural Network&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses a data set of 1300 images (1000 for training set, 200 for validation set, 100 for test set) randomly picked out of a larger data set of 25000 images&lt;/li&gt;
&lt;li&gt;Image Data: &lt;a href="https://www.kaggle.com/c/dogs-vs-cats/data" rel="nofollow noopener noreferrer"&gt;https://www.kaggle.com/c/dogs-vs-cats/data&lt;/a&gt; (25000 images of cats and dogs)&lt;/li&gt;
&lt;li&gt;Model experiences overfitting and needs to be improved&lt;/li&gt;
&lt;li&gt;Model has not been tested for now due to overfitting on the training set&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Cat and Dog Classifier 2.0 [using existing model] - Convolution Neural Network&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trains existing model VGG16 (with some alterations)&lt;/li&gt;
&lt;li&gt;Uses the data preparation from the previous upload (Cat and Dog Classifier - Convolution Neural Network)&lt;/li&gt;
&lt;li&gt;Highly accurate model with…&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ol&gt;
&lt;/div&gt;
&lt;br&gt;
  &lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/MuizAlvi/Machine_Learning_and_Deep_Learning_models" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;I hope the tutorial was clear and covered everything. Please use the discussion/comment section to let me know if you faced any difficulty or if any step is unclear. Thank you for taking the time to read this article!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>python</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Setting up Python environments using Anaconda</title>
      <dc:creator>Muiz Alvi</dc:creator>
      <pubDate>Fri, 13 Nov 2020 19:10:01 +0000</pubDate>
      <link>https://dev.to/muizalvi/setting-up-python-environments-using-anaconda-1a2m</link>
      <guid>https://dev.to/muizalvi/setting-up-python-environments-using-anaconda-1a2m</guid>
      <description>&lt;h2&gt;
  
  
  Objective
&lt;/h2&gt;

&lt;p&gt;This tutorial will help programmers setup their first python-based environment using a powerful open-source distribution known as Anaconda.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Acquiring Anaconda&lt;/li&gt;
&lt;li&gt;Creating a new Environment&lt;/li&gt;
&lt;li&gt;Adding packages to your Environment&lt;/li&gt;
&lt;li&gt;Activating your Environment&lt;/li&gt;
&lt;li&gt;Testing your Environment&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Students starting out in advanced data science fields, especially deep learning, often struggle with code implementation when making use of libraries. I have often observed programmers install a number of libraries each time they build a solution, only to reinstall them whenever they restart their builder or system. This is slow, inefficient and quite irritating. Anaconda is a powerful open-source distribution that not only provides over 10,000 Python libraries at a user's disposal but also lets you compartmentalize and preload any number of library packages in the form of environments. A user can simply build an environment, download the libraries into it and use them whenever needed. &lt;/p&gt;

&lt;h2&gt;
  
  
  Acquiring Anaconda
&lt;/h2&gt;

&lt;p&gt;Anaconda is free for individual use, but if your company has over 200 employees it is recommended that you purchase a commercial license. Below is a two-step guide for downloading and installing Anaconda. &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Download
&lt;/h3&gt;

&lt;p&gt;For this step, simply visit the Anaconda &lt;a href="https://www.anaconda.com/" rel="noopener noreferrer"&gt;website&lt;/a&gt; and select the &lt;em&gt;Individual Edition&lt;/em&gt; tab under the products dropdown.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Forz5wntlz1hnauzfbl1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Forz5wntlz1hnauzfbl1z.png" alt="8" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or simply click this &lt;a href="https://www.anaconda.com/products/individual" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now click the &lt;strong&gt;Download&lt;/strong&gt; button&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn6ximwg30q9igmwkuhuh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn6ximwg30q9igmwkuhuh.png" alt="9" width="752" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will navigate you to the lower part of the tab where you can choose an installer according to your device.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3aqig9lb41m6ozo9m0zr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3aqig9lb41m6ozo9m0zr.png" alt="10" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking a link will download the installer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Install
&lt;/h3&gt;

&lt;p&gt;Clicking the downloaded file will start the installation wizard. The installation is fairly simple and the installation wizard will walk you through this process with ease.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjnweadtb8gvm7d03rrbu.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjnweadtb8gvm7d03rrbu.PNG" alt="13" width="499" height="388"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In case of any difficulty with this step, you can view the official documentation for installing Anaconda &lt;a href="https://docs.anaconda.com/anaconda/install/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once on this page, you can choose the guide for your operating system from a number of given hyperlinks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a new Environment
&lt;/h2&gt;

&lt;p&gt;Navigate to where you installed Anaconda. You will see a variety of Anaconda related applications there. Windows users can simply view all this from the &lt;em&gt;start menu&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjm2clw7c4vw66s2eooit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjm2clw7c4vw66s2eooit.png" alt="14" width="650" height="686"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once in the desired folder or menu, open the &lt;strong&gt;Anaconda Navigator&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgn3nwejz7due4f2tf907.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgn3nwejz7due4f2tf907.png" alt="15" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the navigator, click on the &lt;em&gt;Environments&lt;/em&gt; tab. On the left you will see a list of environments and on the right you can see the python packages included in each environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsfz47rax5e992diu1xwf.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsfz47rax5e992diu1xwf.PNG" alt="16" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;Create&lt;/strong&gt; button in the lower part of the list of environments; this will open a prompt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5k9pc2d7jit0vra5hv2r.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5k9pc2d7jit0vra5hv2r.PNG" alt="17" width="451" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inside this prompt, give an appropriate name for your environment and click the create button. Now wait for your environment to be created.&lt;/p&gt;
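If you prefer the terminal, the same step can be done from the command line. A minimal sketch, assuming the environment name irisproject that I use later in this tutorial (pick any name you like):

```shell
# Create a new environment named "irisproject"; pinning a Python
# version is optional but makes the environment reproducible.
conda create --name irisproject python=3.8

# List all environments to confirm it was created
conda env list
```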

&lt;h2&gt;
  
  
  Adding packages to your Environment
&lt;/h2&gt;

&lt;p&gt;Now that you have successfully created an environment, it is important to install some Python packages into it. These packages can then be accessed from any IDE or solution builder used by your environment; more on this later.&lt;/p&gt;

&lt;p&gt;To add packages, simply click on your environment (my environment is called &lt;em&gt;irisproject&lt;/em&gt;) and view the installed packages on the right side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fympl0tizhrnzcgjuyqoz.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fympl0tizhrnzcgjuyqoz.PNG" alt="18" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice how you are viewing the &lt;em&gt;installed&lt;/em&gt; packages. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fejp9oyotwlnumn3dqk8h.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fejp9oyotwlnumn3dqk8h.PNG" alt="19" width="572" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the drop down select &lt;em&gt;not installed&lt;/em&gt; and use the &lt;em&gt;search packages&lt;/em&gt; bar to search for a package. I will be installing Keras (you can learn more about this library &lt;a href="https://keras.io/" rel="noopener noreferrer"&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxpc79m4aiqm6r8hb6omi.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxpc79m4aiqm6r8hb6omi.PNG" alt="20" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now select the package or packages you wish to install and click &lt;em&gt;apply&lt;/em&gt; on the lower right of the screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3xhn2xqcq4cgxohldtjo.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3xhn2xqcq4cgxohldtjo.PNG" alt="21" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have now successfully installed a package to your environment.&lt;/p&gt;

&lt;p&gt;Note: You should also install Jupyter Notebook, Spyder and/or any other IDE using the steps mentioned above. These are all categorized as packages and are installed to an environment using the same steps I have used to install Keras. &lt;/p&gt;
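For reference, the command-line equivalent of the Navigator steps above is conda install. A sketch, assuming the irisproject environment and the example packages from this tutorial:

```shell
# Install packages into a specific environment without activating it first.
# "keras" and "jupyter" are the examples used in this tutorial.
conda install --name irisproject keras jupyter

# List everything installed in that environment
conda list --name irisproject
```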

&lt;h2&gt;
  
  
  Activating your Environment
&lt;/h2&gt;

&lt;p&gt;You can access and make use of your environment in a variety of different ways. However, I will show you the fastest and simplest method of activating your environment. Simply open the Anaconda Prompt. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faf48kgo1b3s61t314tj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faf48kgo1b3s61t314tj0.png" alt="22" width="784" height="678"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the prompt opens, type in the &lt;em&gt;activate&lt;/em&gt; command followed by the name of your environment (this name is case sensitive).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh34j5ri80c43bmm7xmyo.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh34j5ri80c43bmm7xmyo.PNG" alt="23" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can see that the environment updated from &lt;em&gt;base&lt;/em&gt; to your environment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmyqw9ilfrf55b91kb2oo.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmyqw9ilfrf55b91kb2oo.PNG" alt="24" width="800" height="418"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You have successfully activated your custom environment!&lt;/p&gt;
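In full, the activation step in the Anaconda Prompt looks something like this (irisproject is the example environment name from earlier):

```shell
# Switch the prompt from the default "base" environment to your own.
# On current conda versions the command is "conda activate"; very old
# Windows installs also accept plain "activate".
conda activate irisproject

# The active environment is marked with an asterisk in this list
conda env list

# Return to base when you are done
conda deactivate
```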

&lt;h2&gt;
  
  
  Testing Your Environment
&lt;/h2&gt;

&lt;p&gt;Now let's see if we can make use of the packages we installed in one of the previous steps. Use the Anaconda Prompt to open an IDE; if you haven't installed an IDE in your environment yet, do so now using the same steps as in &lt;em&gt;adding packages to your environment&lt;/em&gt;. I will be using Jupyter Notebook. Simply type in the name of the IDE to open it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frhvq60undfeedisi2okn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frhvq60undfeedisi2okn.PNG" alt="25" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new notebook or workspace (depending on the IDE you're using) and then import a package as I've done below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8yxpng04abr4ytzhmc39.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8yxpng04abr4ytzhmc39.PNG" alt="26" width="426" height="63"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the package has been imported successfully, and we now have an ideal solution environment that does not require us to install libraries each time it is used. You can now install new packages through the Navigator and use them with ease. &lt;/p&gt;
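If you would rather verify from code, a quick way to check whether a package is visible in the active environment is to look it up with the standard library's importlib (keras is just the example package from this tutorial; substitute whatever you installed):

```python
import importlib.util

def package_available(name):
    """Return True if `name` can be imported in the active environment."""
    return importlib.util.find_spec(name) is not None

# "keras" is the example from this tutorial; any top-level package
# name works the same way.
for pkg in ["keras", "numpy"]:
    status = "found" if package_available(pkg) else "missing"
    print(f"{pkg}: {status}")
```

Unlike a bare import, this check doesn't raise an error when the package is missing, so it is handy at the top of a notebook.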

&lt;p&gt;Note: You do &lt;em&gt;not&lt;/em&gt; have to create a new environment each time you wish to install a new package; simply add said package to the environment using the Anaconda Navigator.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You should now be able to create and modify your own environment(s) using Anaconda. I hope the tutorial was clear and covered everything, please use the discussion/comment section to let me know if you faced any difficulty or if any step is unclear. &lt;/p&gt;

&lt;h2&gt;
  
  
  Thank You for reading!
&lt;/h2&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>python</category>
      <category>deeplearning</category>
    </item>
  </channel>
</rss>
