<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Afroz Chakure</title>
    <description>The latest articles on DEV Community by Afroz Chakure (@afrozchakure).</description>
    <link>https://dev.to/afrozchakure</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F214871%2Ff293ca88-789a-461a-bf3a-12a991b345ba.jpeg</url>
      <title>DEV Community: Afroz Chakure</title>
      <link>https://dev.to/afrozchakure</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/afrozchakure"/>
    <language>en</language>
    <item>
      <title>Yay! Reached 1035+ days Daily Coding Streak on Leetcode!</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Sun, 29 Dec 2024 17:11:20 +0000</pubDate>
      <link>https://dev.to/afrozchakure/yay-reached-1035-days-daily-coding-streak-on-leetcode-13ae</link>
      <guid>https://dev.to/afrozchakure/yay-reached-1035-days-daily-coding-streak-on-leetcode-13ae</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fillg730qmr433674gjsi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fillg730qmr433674gjsi.png" alt=" " width="406" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>leetcode</category>
      <category>programming</category>
      <category>algorithms</category>
      <category>datastructures</category>
    </item>
    <item>
      <title>52 Tricks to Media Manipulation</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Thu, 19 Dec 2024 13:06:39 +0000</pubDate>
      <link>https://dev.to/afrozchakure/52-tricks-to-media-manipulation-3i5k</link>
      <guid>https://dev.to/afrozchakure/52-tricks-to-media-manipulation-3i5k</guid>
      <description>&lt;p&gt;Here's a list of 52 major media manipulation tactics used in the modern world to hide the truth:&lt;/p&gt;

&lt;h3&gt;Media Manipulation Tactics&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Agenda Setting&lt;/strong&gt;: How narratives are chosen to dominate public discourse.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Astroturfing&lt;/strong&gt;: Creating fake grassroots support to manipulate opinions.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clickbait Headlines&lt;/strong&gt;: Misleading titles to draw attention while obscuring the real story.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smear Campaigns&lt;/strong&gt;: Targeted attacks to discredit individuals or groups.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Censorship by Omission&lt;/strong&gt;: Ignoring inconvenient facts or stories.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manufactured Consent&lt;/strong&gt;: Using media to gain public approval for controversial decisions.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Echo Chambers&lt;/strong&gt;: How reinforcing certain views limits critical thinking.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Misinformation vs. Disinformation&lt;/strong&gt;: Spotting the difference.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Overton Window Manipulation&lt;/strong&gt;: Shifting what’s seen as acceptable in public debate.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;False Equivalence&lt;/strong&gt;: Creating the illusion of balance where none exists.
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Narrative Framing&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Fearmongering&lt;/strong&gt;: Leveraging fear to control public opinion.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Scapegoat Strategy&lt;/strong&gt;: Diverting blame to a convenient target.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Diversionary Tactics&lt;/strong&gt;: The art of misdirection.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sloganeering&lt;/strong&gt;: Catchy phrases used to oversimplify complex issues.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Polarization&lt;/strong&gt;: Dividing people to conquer attention.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Whataboutism&lt;/strong&gt;: Avoiding criticism by pointing fingers elsewhere.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cherry-Picking Data&lt;/strong&gt;: Presenting partial truths to mislead.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gaslighting the Masses&lt;/strong&gt;: Denying reality to confuse and control.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controlled Opposition&lt;/strong&gt;: Simulating dissent to manipulate outcomes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dog Whistle Politics&lt;/strong&gt;: Using coded language to rally specific groups.
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Techniques to Manipulate Public Sentiment&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Emotional Manipulation&lt;/strong&gt;: Tugging at heartstrings to bypass logic.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hero-Villain Narratives&lt;/strong&gt;: Simplifying stories into good vs. bad.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaponizing Shame&lt;/strong&gt;: Using guilt to silence critics.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bandwagon Effect&lt;/strong&gt;: Encouraging conformity by portraying widespread support.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Appeal to Authority&lt;/strong&gt;: Leveraging "experts" to sell questionable ideas.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fake Fact-Checking&lt;/strong&gt;: Discrediting truths under the guise of verification.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leaked “Exclusives”&lt;/strong&gt;: Controlled releases to steer narratives.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Forced Virality&lt;/strong&gt;: Coordinated campaigns to make topics trend.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Streisand Effect&lt;/strong&gt;: When censorship backfires.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Propaganda by Numbers&lt;/strong&gt;: Inflating statistics to create urgency or panic.
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Structural Control of Media&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Media Monopolies&lt;/strong&gt;: The dangers of concentrated ownership.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advertising Pressure&lt;/strong&gt;: When sponsors dictate editorial policy.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Revolving Doors&lt;/strong&gt;: The media-politics nexus.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Silencing Journalists&lt;/strong&gt;: Subtle and overt threats to free press.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Influencer Manipulation&lt;/strong&gt;: Co-opting voices of trust.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Algorithmic Bias&lt;/strong&gt;: How platforms amplify manipulation.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Saturation Coverage&lt;/strong&gt;: Drowning other stories out.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token Opposition&lt;/strong&gt;: Allowing controlled dissent for credibility.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestrated Leaks&lt;/strong&gt;: When whistleblowing is weaponized.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rebranding Controversy&lt;/strong&gt;: Euphemisms to sanitize wrongdoing.
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Fighting Back and Staying Aware&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Critical Thinking&lt;/strong&gt;: Cultivating skepticism in media consumption.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fact-Checking Tools&lt;/strong&gt;: Essential resources to verify claims.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Media Literacy Education&lt;/strong&gt;: Teaching people to decode media.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recognizing Bias&lt;/strong&gt;: Spotting partiality in reporting.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dismantling Echo Chambers&lt;/strong&gt;: Diversifying your information sources.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Whistleblower Protections&lt;/strong&gt;: Supporting those who expose manipulation.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Citizen Journalism&lt;/strong&gt;: Empowering independent reporting.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Digital Detox&lt;/strong&gt;: Breaking free from constant manipulation.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Call-Out Culture&lt;/strong&gt;: Holding media accountable.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supporting Ethical Journalism&lt;/strong&gt;: Funding independent voices.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spotting Red Flags&lt;/strong&gt;: Identifying signs of manipulation in headlines.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Role of Transparency&lt;/strong&gt;: How openness combats misinformation.
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>socialmedia</category>
      <category>propaganda</category>
      <category>news</category>
      <category>media</category>
    </item>
    <item>
      <title>Solved 1000+ Problems on Leetcode, Yay!</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Wed, 01 May 2024 18:19:11 +0000</pubDate>
      <link>https://dev.to/afrozchakure/solved-1000-problems-on-leetcode-yay-50kk</link>
      <guid>https://dev.to/afrozchakure/solved-1000-problems-on-leetcode-yay-50kk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwngvp217rtrcil6397a2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwngvp217rtrcil6397a2.png" alt=" " width="395" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>leetcode</category>
    </item>
    <item>
      <title>Argh! Lost my 649 Days Hot streak on Leetcode</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Wed, 03 Jan 2024 18:20:32 +0000</pubDate>
      <link>https://dev.to/afrozchakure/lost-my-649-days-hot-streak-on-leetcode-584h</link>
      <guid>https://dev.to/afrozchakure/lost-my-649-days-hot-streak-on-leetcode-584h</guid>
      <description>&lt;p&gt;Funny enough, the only badge I missed was my December 2023 badge (exhausted my time travels). Ended up losing my streak because of it. :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk6ucgb6tldlp5f290sf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk6ucgb6tldlp5f290sf.png" alt=" " width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can follow me on Linkedin: &lt;a href="https://www.linkedin.com/in/afrozchakure/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/afrozchakure/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>leetcode</category>
      <category>programming</category>
    </item>
    <item>
      <title>365 days and counting</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Wed, 08 Mar 2023 16:38:07 +0000</pubDate>
      <link>https://dev.to/afrozchakure/365-days-and-counting-4k2k</link>
      <guid>https://dev.to/afrozchakure/365-days-and-counting-4k2k</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsh9v4szr891kx03u4tc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsh9v4szr891kx03u4tc.png" alt=" " width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>leetcode</category>
      <category>programming</category>
    </item>
    <item>
      <title>Ouch! Just lost my streak on Leetcode</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Wed, 02 Nov 2022 02:18:35 +0000</pubDate>
      <link>https://dev.to/afrozchakure/ouch-just-lost-my-streak-on-leetcode-4khf</link>
      <guid>https://dev.to/afrozchakure/ouch-just-lost-my-streak-on-leetcode-4khf</guid>
      <description>&lt;p&gt;Was closing in on 250 days, damn!&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23t6764pjvep57qmagp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23t6764pjvep57qmagp4.png" alt=" " width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>leetcode</category>
      <category>dailydev</category>
    </item>
    <item>
      <title>Completed 500 problems on Leetcode!</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Thu, 18 Aug 2022 18:22:00 +0000</pubDate>
      <link>https://dev.to/afrozchakure/completed-500-problems-on-leetcode-4ba6</link>
      <guid>https://dev.to/afrozchakure/completed-500-problems-on-leetcode-4ba6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6zocbxwbouijcyegrqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6zocbxwbouijcyegrqi.png" alt="Completed 500 problems on Leetcode screenshot" width="432" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>leetcode</category>
    </item>
    <item>
      <title>Yo! I completed 100 days of Leetcode</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Wed, 08 Jun 2022 17:43:04 +0000</pubDate>
      <link>https://dev.to/afrozchakure/yo-i-completed-100-days-of-leetcode-bdm</link>
      <guid>https://dev.to/afrozchakure/yo-i-completed-100-days-of-leetcode-bdm</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwhvparfxsd6prg1jvfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwhvparfxsd6prg1jvfc.png" alt="Screenshot of 100 days of Leetcode badge" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>100daysofcode</category>
      <category>leetcode</category>
    </item>
    <item>
      <title>All you need to know about YOLO v3 (You Only Look Once)</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Mon, 01 Mar 2021 19:34:25 +0000</pubDate>
      <link>https://dev.to/afrozchakure/all-you-need-to-know-about-yolo-v3-you-only-look-once-e4m</link>
      <guid>https://dev.to/afrozchakure/all-you-need-to-know-about-yolo-v3-you-only-look-once-e4m</guid>
      <description>&lt;p&gt;This blog will provide an exhaustive study of YOLOv3 (You only look once), which is one of the most popular deep learning models extensively used for object detection, semantic segmentation, and image classification. In this blog, I'll explain the architecture of YOLOv3 model, with its different layers, and see some results for object detection that I got while running the inference program on some test images using the model. &lt;/p&gt;

&lt;p&gt;So keep reading the blog to find out more about YOLOv3.&lt;/p&gt;

&lt;h2&gt;What is YOLO?&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;"You Only Look Once" or YOLO is a family of deep learning models designed for fast object Detection.&lt;/li&gt;
&lt;li&gt;There are three main variations of YOLO, they are YOLOv1, YOLOv2, and YOLOv3. &lt;/li&gt;
&lt;li&gt;The first version proposed the general architecture, where the second version refined the design and made use of predefined anchor boxes to improve the bounding box proposal, and version three further refined the model architecture and training process.&lt;/li&gt;
&lt;li&gt;It is based on the idea that :
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;" A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. "&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;1. Network Architecture Diagram of YOLOv3&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr5oq1n6ukg89z8e6ytl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr5oq1n6ukg89z8e6ytl.png" alt="yolov3_architecture" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;2. Description of Architecture&lt;/h2&gt;

&lt;h4&gt;Steps for object detection using YOLO v3:&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The input is a batch of images of shape (m, 416, 416, 3).&lt;/li&gt;
&lt;li&gt;YOLO v3 passes this image to a convolutional neural network (CNN).&lt;/li&gt;
&lt;li&gt;The last two dimensions of the above output are flattened to get an output volume of (19, 19, 425):

&lt;ul&gt;
&lt;li&gt;Here, each cell of the 19 x 19 grid returns 425 numbers.&lt;/li&gt;
&lt;li&gt;425 = 5 * 85, where 5 is the number of anchor boxes per grid cell.&lt;/li&gt;
&lt;li&gt;85 = 5 + 80, where 5 is (pc, bx, by, bh, bw) and 80 is the number of classes we want to detect (see the sketch after this list).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;The output is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers &lt;strong&gt;(pc, bx, by, bh, bw, c)&lt;/strong&gt;. If we expand c into an 80-dimensional vector, each bounding box is represented by 85 numbers.&lt;/li&gt;

&lt;li&gt;Finally, we apply IoU (Intersection over Union) and non-max suppression to avoid selecting overlapping boxes.&lt;/li&gt;

&lt;/ul&gt;
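
&lt;p&gt;As a quick illustration of these numbers (a minimal NumPy sketch of my own; the variable names are illustrative, not from the original inference program), the (19, 19, 425) volume can be reshaped so that the 5 anchor boxes and their 85 numbers become explicit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

# Dummy network output for one image: a 19 x 19 grid with 425 numbers per cell
output = np.random.rand(19, 19, 425)

# Make the 5 anchor boxes explicit: 425 = 5 * 85
boxes = output.reshape(19, 19, 5, 85)

# For each anchor box: 85 = 5 + 80
box_attrs = boxes[..., :5]    # (pc, bx, by, bh, bw)
class_probs = boxes[..., 5:]  # 80 class probabilities
print(box_attrs.shape, class_probs.shape)  # (19, 19, 5, 5) (19, 19, 5, 80)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;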

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr8792eyt5d5q8479es9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr8792eyt5d5q8479es9.png" alt="attributes_of_bounding_box" width="600" height="819"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Regarding the architecture:&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;YOLO v3 uses a variant of Darknet, which originally is a &lt;strong&gt;53-layer network trained on ImageNet&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;For the task of detection, 53 more layers are stacked onto it, giving us a 106-layer fully convolutional architecture for YOLO v3.&lt;/li&gt;
&lt;li&gt;In YOLO v3, detection is done by applying 1 x 1 detection kernels on feature maps of three different sizes at three different places in the network. &lt;/li&gt;
&lt;li&gt;The shape of the detection kernel is &lt;strong&gt;1 x 1 x (B x (5 + C))&lt;/strong&gt;. Here &lt;strong&gt;B is the number of bounding boxes a cell on the feature map can predict&lt;/strong&gt;, &lt;strong&gt;'5' is for the 4 bounding box attributes and one object confidence&lt;/strong&gt;, and &lt;strong&gt;C is the number of classes&lt;/strong&gt; (a worked example follows this list).&lt;/li&gt;
&lt;li&gt;YOLO v3 uses &lt;strong&gt;binary cross-entropy&lt;/strong&gt; for calculating the &lt;strong&gt;classification loss&lt;/strong&gt; for each label, while &lt;strong&gt;object confidence and class predictions are obtained through logistic regression&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
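
&lt;p&gt;To make the kernel shape concrete, here is a small worked example using the values commonly cited for YOLO v3 (B = 3 boxes per scale and C = 80 COCO classes; these specific numbers are assumptions for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;B = 3   # bounding boxes predicted per cell at each scale
C = 80  # number of classes (COCO)

# Depth of the 1 x 1 detection kernel: B x (5 + C)
depth = B * (5 + C)
print(depth)  # 255, so each detection feature map has 255 channels
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;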

&lt;h4&gt;Hyper-parameters used&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;class_threshold&lt;/strong&gt; - Defines the probability threshold for a predicted object.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-max suppression threshold&lt;/strong&gt; - Helps overcome the problem of detecting the same object multiple times in an image. It does this by keeping the box with the maximum probability and suppressing nearby boxes whose overlap with it exceeds the threshold (a minimal sketch follows this list).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;input_height &amp;amp; input_shape&lt;/strong&gt; - The image size fed to the model.&lt;/li&gt;
&lt;/ul&gt;
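
&lt;p&gt;Below is a minimal NumPy sketch of how such a non-max suppression threshold can be applied (my own illustrative implementation, not the exact code from the inference program): boxes are kept greedily by score, and any remaining box whose IoU with a kept box exceeds the threshold is suppressed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def iou(a, b):
    # Boxes are given as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, threshold=0.45):
    order = list(np.argsort(scores)[::-1])  # highest score first
    keep = []
    while len(order) &gt; 0:
        best = order[0]
        keep.append(best)
        # Drop boxes that overlap the best one beyond the threshold
        order = [i for i in order[1:] if iou(boxes[best], boxes[i]) &lt; threshold]
    return keep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;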

&lt;h3&gt;CNN architecture of Darknet-53&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Darknet-53&lt;/strong&gt; is used as a feature extractor. &lt;/li&gt;
&lt;li&gt;Darknet-53 is mainly composed of 3 x 3 and 1 x 1 filters, with skip connections like the residual connections in ResNet. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqbj6s9qlivvkncmft6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqbj6s9qlivvkncmft6u.png" alt="darknet53" width="317" height="445"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;2. Architecture diagram of YOLOv3&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdmk2adlckbnm8k9n0p8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdmk2adlckbnm8k9n0p8.png" alt="cnn_architecture_darknet" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;3. Layers Details&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;YOLO makes use of only convolutional layers, making it a fully convolutional network (FCN).&lt;/li&gt;
&lt;li&gt;YOLOv3 uses a deeper feature-extractor architecture called Darknet-53.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Convolution layers in YOLOv3&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It contains 53 convolutional layers, each followed by a batch normalization layer and Leaky ReLU activation.&lt;/li&gt;
&lt;li&gt;Each convolutional layer convolves multiple filters over the image and produces multiple feature maps.&lt;/li&gt;
&lt;li&gt;No form of pooling is used; instead, convolutional layers with stride 2 downsample the feature maps.&lt;/li&gt;
&lt;li&gt;This helps prevent the loss of low-level features often attributed to pooling (a sketch of a conv2d helper that implements this pattern follows this list).&lt;/li&gt;
&lt;/ul&gt;
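
&lt;p&gt;The code snippets in the next section rely on a conv2d() helper. A minimal sketch of what such a helper might look like in TensorFlow/Keras is shown below (this is my assumption, matching the description above: each convolution is followed by batch normalization and Leaky ReLU, and stride-2 calls do the downsampling):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import tensorflow as tf

def conv2d(inputs, filters, kernel_size, strides=1):
    # Convolution + batch normalization + Leaky ReLU, as described above.
    # Stride-2 calls downsample the feature maps in place of pooling.
    net = tf.keras.layers.Conv2D(filters, kernel_size, strides=strides,
                                 padding='same', use_bias=False)(inputs)
    net = tf.keras.layers.BatchNormalization()(net)
    return tf.keras.layers.LeakyReLU(alpha=0.1)(net)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;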




&lt;h3&gt;4. Different Layers inside YOLO&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code for layers 1 to 53 in TensorFlow&lt;/strong&gt;: the res_block() method below is used throughout the following snippets.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;res_block&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;  
    &lt;span class="n"&gt;shortcut&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;inputs&lt;/span&gt;  
    &lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filters&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  
    &lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filters&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  
    &lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;shortcut&lt;/span&gt;  
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First two conv2d layers with 32 and 64 filters&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;strides&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;strides&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;res_block * 1&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;    &lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;res_block&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Convolutional block with 128 filters
&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;strides&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;res_block * 2&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;res_block&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Convolutional layer with 256 filters
&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;strides&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;res_block * 8&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;res_block&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Convolutional layer with 512 filters
&lt;/span&gt;&lt;span class="n"&gt;route_1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;
&lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;strides&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;res_block * 8&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;res_block&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Convolutional layer with 1024 filters
&lt;/span&gt;&lt;span class="n"&gt;route_2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;
&lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;strides&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;res_block * 4&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;res_block&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;route_3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;5. Input Details for Model Inference&lt;/h3&gt;

&lt;h4&gt;i. Input Preprocessing:&lt;/h4&gt;

&lt;p&gt;Images need to be resized to 416 x 416 pixels before being fed into the model; alternatively, the dimensions can be specified while running the Python file.&lt;/p&gt;

&lt;h4&gt;ii. Input Dimensions:&lt;/h4&gt;

&lt;p&gt;The model expects inputs to be color images with a &lt;strong&gt;square shape of 416 x 416 pixels&lt;/strong&gt;, unless a different size is specified by the user.&lt;/p&gt;
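
&lt;p&gt;A minimal preprocessing sketch (my own illustration; the original program may differ in details such as the interpolation method or letterboxing):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np
from PIL import Image

def load_image(path, size=(416, 416)):
    image = Image.open(path)
    original_size = image.size  # kept for rescaling the boxes later
    image = image.resize(size)
    # Scale pixel values to [0, 1] and add a batch dimension
    array = np.asarray(image, dtype=np.float32) / 255.0
    return np.expand_dims(array, axis=0), original_size
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;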




&lt;h3&gt;6. Details on Model Inference program and output&lt;/h3&gt;

&lt;h4&gt;1. The output of the model&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The output is a list of bounding boxes along with the recognized classes. &lt;/li&gt;
&lt;li&gt;Each bounding box is represented by 6 numbers &lt;strong&gt;(pc, bx, by, bh, bw, c)&lt;/strong&gt;. Here c is expanded into an 80-dimensional vector (one entry per class we want to predict), so each bounding box ends up represented by 85 numbers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;2. Output of Post-processing&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The inference program &lt;strong&gt;produces a list of NumPy arrays&lt;/strong&gt;, whose shapes are displayed as the output. These arrays &lt;strong&gt;predict both the bounding boxes and class labels, but in encoded form&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Next, we take each of the NumPy arrays and decode the candidate bounding boxes and class predictions. Any &lt;strong&gt;bounding boxes that don't confidently describe an object (class probability below the threshold, 0.3) are ignored&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Here a &lt;strong&gt;maximum of 200 bounding boxes&lt;/strong&gt; can be considered in an image.&lt;/li&gt;
&lt;li&gt;I have used the &lt;strong&gt;correct_yolo_boxes()&lt;/strong&gt; function to &lt;strong&gt;translate the bounding box coordinates so that we can plot the original image and draw the bounding boxes&lt;/strong&gt; (the idea is sketched after this list).&lt;/li&gt;
&lt;li&gt;To &lt;strong&gt;remove candidate bounding boxes that likely refer to the same object&lt;/strong&gt;, we define the allowed overlap as a &lt;strong&gt;non-max suppression threshold of 0.45&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The coordinates of the bounding boxes also need to be rescaled to the original image, with the label and score displayed on top of each box.&lt;/li&gt;
&lt;li&gt;All these post-processing steps need to be performed before we get the bounding boxes along with the recognized classes in our output image.&lt;/li&gt;
&lt;/ul&gt;
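
&lt;p&gt;As an illustration of the coordinate translation step (a sketch of the idea only, not the actual correct_yolo_boxes() source; it ignores letterbox padding for brevity), boxes predicted relative to the 416 x 416 network input can be mapped back to the original image like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def rescale_boxes(boxes, input_size, image_w, image_h):
    # boxes: list of (x1, y1, x2, y2) tuples relative to the network input
    x_scale = image_w / input_size
    y_scale = image_h / input_size
    return [(x1 * x_scale, y1 * y_scale, x2 * x_scale, y2 * y_scale)
            for (x1, y1, x2, y2) in boxes]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;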

&lt;h4&gt;3. Output of inference program for the model&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp4bfay0kauseh5qi7ee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp4bfay0kauseh5qi7ee.png" alt="predictions3" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi03i6zfwe7v0o8xcpmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi03i6zfwe7v0o8xcpmq.png" alt="predictions4" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0334dn2man69a6am3caf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0334dn2man69a6am3caf.png" alt="predictions6" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;7. Speed and Accuracy of YOLOv3&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model Speed (in FPS)&lt;/strong&gt; - On a Pascal Titan X, it processes images at 30 FPS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Accuracy (on the test dataset)&lt;/strong&gt; - It has a mAP (Mean Average Precision) of 87.54% on the VOC test set.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;8. Run the model&lt;/h3&gt;

&lt;p&gt;Find the link to my Dockerfile here: &lt;a href="https://hub.docker.com/repository/docker/afrozchakure/yolov3-tensorflow" rel="noopener noreferrer"&gt;afrozchakure/yolov3-tensorflow&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To run the model using Docker (Docker should be preinstalled) on your machine, use the commands below in the terminal :)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xhost +

&lt;span class="nb"&gt;sudo &lt;/span&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-ti&lt;/span&gt; &lt;span class="nt"&gt;--net&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;host &lt;span class="nt"&gt;--ipc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;host &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DISPLAY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$DISPLAY&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; /tmp/.X11-unix:/tmp/.X11-unix &lt;span class="nt"&gt;--env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"QT_X11_NO_MITSHM=1"&lt;/span&gt; afrozchakure/yolov3-tensorflow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
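
&lt;p&gt;Here xhost + allows clients (including the container) to connect to your X server, and mounting /tmp/.X11-unix together with the DISPLAY variable lets the container open windows on your screen to display the detection results.&lt;/p&gt;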



&lt;h3&gt;9. Learn more about the working principles of YOLOv3 from the resources below&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://towardsdatascience.com/yolo-v3-object-detection-53fb7d3bfe6b" rel="noopener noreferrer"&gt;Referrence Link 1&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.analyticsvidhya.com/blog/2018/12/practical-guide-object-detection-yolo-framewor-python/" rel="noopener noreferrer"&gt;Referrence Link 2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well done if you came this far. Cheers 😇💖&lt;br&gt;
Connect with me: &lt;a href="https://www.linkedin.com/in/afrozchakure" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/afrozchakure&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can support my work using Buy me a Coffee: &lt;br&gt;
&lt;a href="https://buymeacoffee.com/afrozchakure" rel="noopener noreferrer"&gt;https://buymeacoffee.com/afrozchakure&lt;/a&gt;&lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>architecture</category>
      <category>docker</category>
    </item>
    <item>
      <title>Transfer Learning with MobileNet-v2</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Wed, 17 Feb 2021 08:14:23 +0000</pubDate>
      <link>https://dev.to/afrozchakure/transfer-learning-with-mobilenet-v2-4ijn</link>
      <guid>https://dev.to/afrozchakure/transfer-learning-with-mobilenet-v2-4ijn</guid>
      <description>&lt;h3&gt;
  
  
  What is Transfer Learning ?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In transfer learning, we &lt;strong&gt;use a pre-trained model&lt;/strong&gt; which is trained on a large and general enough dataset to serve as a &lt;strong&gt;generic model for our needs.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;We can use these pre-trained models without having to train a model from scratch on a large dataset.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0rozoitwmsojodas745.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0rozoitwmsojodas745.png" alt="transfer_learning" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Code&lt;/h4&gt;

&lt;p&gt;Find the Jupyter Notebook with Code for Transfer Learning &lt;a href="https://github.com/afrozchakure/MobileNet-v2-Transfer-Learning-Hyper-Parameter-Tuning/blob/main/Transfer_Learning_using_MobileNet_v2.ipynb" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;Model used to perform transfer learning - MobileNet V2&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It is a model developed at Google.&lt;/li&gt;
&lt;li&gt;It is trained on the ImageNet dataset, a large dataset of 1.4M images and 1000 classes. &lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Using the “Bottleneck Layer” for Feature Extraction&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Here, for the &lt;strong&gt;cats_vs_dogs dataset&lt;/strong&gt;, we use the very last layer before the flatten operation for feature extraction. This layer is called the &lt;strong&gt;‘bottleneck layer’.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The bottleneck layer features retain more generality compared to the final/top layer. &lt;/li&gt;
&lt;li&gt;Here we set include_top=False so that &lt;strong&gt;we load a model that doesn’t include the classification layers at the top.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;General Workflow for using a Pre-trained Model:&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Examine and understand the data.&lt;/li&gt;
&lt;li&gt;Build an input pipeline.&lt;/li&gt;
&lt;li&gt;Compose the model (see the sketch after this list):
a. Load the pre-trained model and its pre-trained weights.
b. Stack the classification layers on top.&lt;/li&gt;
&lt;li&gt;Train.&lt;/li&gt;
&lt;li&gt;Evaluate.&lt;/li&gt;
&lt;/ol&gt;
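
&lt;p&gt;A minimal sketch of these steps with tf.keras (illustrative only; the exact input size, layer sizes, and training settings are in the notebook linked above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import tensorflow as tf

# 3a. Load MobileNet V2 without its classification head
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights='imagenet')
base_model.trainable = False  # freeze the pre-trained weights

# 3b. Stack the classification layers on top of the bottleneck features
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1)  # a single logit: cat vs. dog
])

# 4-5. Train and evaluate
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
# model.fit(train_batches, validation_data=validation_batches, epochs=10)
# model.evaluate(test_batches)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;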

&lt;h4&gt;Advantages of using a pre-trained model for feature extraction:&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;When working with a small dataset, we can &lt;strong&gt;take advantage of features learned by a model trained on a larger dataset in the same domain.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;This is done by &lt;strong&gt;instantiating the pre-trained model and adding a fully-connected classifier on top.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;pre-trained model is ‘frozen’&lt;/strong&gt; and &lt;strong&gt;only the weights of the classifier get updated during training.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;This in turn helps us achieve &lt;strong&gt;better accuracy for our model&lt;/strong&gt;, even with a &lt;strong&gt;small dataset and far fewer computations&lt;/strong&gt; for training.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Accuracy and loss of model&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The model has an initial accuracy of 0.54 and an initial loss of 0.63 on the validation set before training.&lt;/li&gt;
&lt;li&gt;After training on the training set, the model reaches an accuracy of 0.9501 and a loss of 0.1020.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Link to Dockerfile:&lt;/strong&gt; &lt;a href="https://hub.docker.com/repository/docker/afrozchakure/tl_tensorflow" rel="noopener noreferrer"&gt;https://hub.docker.com/repository/docker/afrozchakure/tl_tensorflow&lt;/a&gt;&lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>architecture</category>
      <category>docker</category>
    </item>
    <item>
      <title>Hyper-Parameter Tuning to Optimize ML Models (MobileNet v2)</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Wed, 17 Feb 2021 08:11:39 +0000</pubDate>
      <link>https://dev.to/afrozchakure/hyper-parameter-tuning-to-optimize-ml-models-mobilenet-v2-4jnc</link>
      <guid>https://dev.to/afrozchakure/hyper-parameter-tuning-to-optimize-ml-models-mobilenet-v2-4jnc</guid>
      <description>&lt;h3&gt;
  
  
  What is meant by Fine-Tuning a model ?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The goal of fine-tuning is to adapt specialized features from a generic dataset to work with the new dataset, rather than overwrite the generic learning.&lt;/li&gt;
&lt;li&gt;Here I have trained (or "fine-tuned") the weights of the top layers of the pre-trained model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Code&lt;/h3&gt;

&lt;p&gt;Find the Jupyter Notebook with the code &lt;a href="https://github.com/afrozchakure/MobileNet-v2-Transfer-Learning-Hyper-Parameter-Tuning/blob/main/Hyper_Param_Tuning_MobileNetv2.ipynb" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Parameters to tune&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Learning rate&lt;/strong&gt;: Choosing an &lt;strong&gt;appropriate learning_rate helps improve the accuracy&lt;/strong&gt;. Here we have used &lt;strong&gt;learning_rate = 0.0001&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No. of epochs&lt;/strong&gt;: Increasing the number of epochs for which the model trains can increase its accuracy. Here we have trained the model for 10 more epochs (20 in total).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unfreezing the top layers of the model&lt;/strong&gt;: We fine-tune a small number of top layers rather than the whole MobileNet model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Un-freezing the top layers of the model&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The first step is to unfreeze the &lt;code&gt;base_model&lt;/code&gt; and then set the bottom layers to be un-trainable.&lt;/li&gt;
&lt;li&gt;We then recompile the model using a much lower learning rate (base_learning_rate / 10 in the code below); recompiling is necessary for these changes to take effect.&lt;/li&gt;
&lt;li&gt;Finally, we resume training for 10 more epochs.&lt;/li&gt;
&lt;li&gt;After fine-tuning, the model nearly reaches 98% accuracy.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Setting the top layers as trainable
&lt;/span&gt;&lt;span class="n"&gt;base_mode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;trainable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;

&lt;span class="c1"&gt;# To check how many layers are in the base model
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Number of layers in the base model: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="c1"&gt;# Fine-tune from this layer onwards 
&lt;/span&gt;&lt;span class="n"&gt;fine_tune_at&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;

&lt;span class="c1"&gt;# Freeze all the layers before the `fine_tune_at` layer
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;layer&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;base_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="n"&gt;fine_tune_at&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="n"&gt;layer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;trainable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;

&lt;span class="c1"&gt;# This step improves the accuracy of the model by a few steps
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;losses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BinaryCrossentropy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;from_logits&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
              &lt;span class="n"&gt;optimizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;optimizers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;RMSprop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;base_learning_rate&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
              &lt;span class="n"&gt;metrics&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;accuracy&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Fine Tuning the parameter weights&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To further improve performance, &lt;strong&gt;we can repurpose the top-level layers of the pre-trained models to the new dataset via fine-tuning&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;In this case, &lt;strong&gt;we tuned our weights such that our model learned high-level features specific to the dataset&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;This technique is usually recommended &lt;strong&gt;when the training dataset is large&lt;/strong&gt; and &lt;strong&gt;very similar to the original dataset that the pre-trained model was trained on&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Results after Fine-Tuning the model&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Before fine-tuning the model, the model has an &lt;strong&gt;accuracy of 93.25%&lt;/strong&gt; and &lt;strong&gt;loss of 0.14 on the Test set&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;After fine-tuning the model, the model has an &lt;strong&gt;accuracy of 97.89%&lt;/strong&gt; and &lt;strong&gt;loss of 0.058 on the Test set&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Accuracy and Loss on Test set (Before Fine Tuning)&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxndgjwn7xbcuqid782nw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxndgjwn7xbcuqid782nw.png" alt="before_fine_tuning_1" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Accuracy and Loss on Test set (After Fine Tuning)&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63r2td4mzysthzg4rd9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63r2td4mzysthzg4rd9d.png" alt="after_fine_tuning_1" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>architecture</category>
      <category>docker</category>
    </item>
    <item>
      <title>Deep Neural Network (DNN) in a Brief</title>
      <dc:creator>Afroz Chakure</dc:creator>
      <pubDate>Sun, 07 Feb 2021 13:29:18 +0000</pubDate>
      <link>https://dev.to/afrozchakure/deep-neural-network-dnn-summary-444l</link>
      <guid>https://dev.to/afrozchakure/deep-neural-network-dnn-summary-444l</guid>
      <description>&lt;h3&gt;
  
  
  Neural Network (NN):
&lt;/h3&gt;

&lt;p&gt;It is a series of algorithms that &lt;strong&gt;endeavors to recognize underlying relationships&lt;/strong&gt; in a set of data through a process that mimics the way the human brain operates.&lt;/p&gt;

&lt;h3&gt;Examples and Applications of NN:&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Convolutional neural network&lt;/strong&gt; =&amp;gt; 
Good for image recognition&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long Short-Term Memory network&lt;/strong&gt; =&amp;gt;
Good for speech recognition&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Neuron:&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;It is a mathematical function that models the functioning of a biological neuron.&lt;/li&gt;
&lt;li&gt;It computes a weighted sum of its inputs and passes the sum through a non-linear function called the activation function (such as the sigmoid); see the toy sketch after this list.&lt;/li&gt;
&lt;/ol&gt;
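
&lt;p&gt;In code, a single neuron boils down to a few lines (a toy sketch):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # Weighted sum of the inputs, passed through the activation function
    return sigmoid(np.dot(w, x) + b)

print(neuron(x=np.array([0.5, -1.0]), w=np.array([0.8, 0.2]), b=0.1))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;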

&lt;h3&gt;Training an Artificial Neural Network:&lt;/h3&gt;


&lt;center&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk0pktxju3n7czufy99jy.png" width="800" height="433"&gt;Fig 1. Training an Artificial Neural Network
&lt;/center&gt; 
&lt;h3&gt;Gradient Descent:&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The process of repeatedly nudging the input of a function by some multiple of the negative gradient is called gradient descent.&lt;/li&gt;
&lt;li&gt;When there are one or more inputs, you can use gradient descent for &lt;strong&gt;optimizing the values of the coefficients by iteratively minimizing the error of the model on your training data&lt;/strong&gt; (a toy sketch follows this list).&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;learning rate&lt;/strong&gt; is used as a scale factor, and the coefficients are updated in the direction towards &lt;strong&gt;minimizing the error&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;This process is &lt;strong&gt;repeated until a minimum sum squared error is achieved or no further improvement is possible.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
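
&lt;p&gt;A toy sketch of this loop for a single coefficient, minimizing the error function f(w) = (w - 3)^2, whose gradient is 2(w - 3):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;w = 0.0              # initial coefficient
learning_rate = 0.1  # scale factor for each update

for step in range(50):
    gradient = 2 * (w - 3)         # derivative of (w - 3)**2
    w -= learning_rate * gradient  # nudge w against the gradient

print(w)  # approaches 3, the minimum of the error function
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;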


&lt;center&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmw267omsid15qiwtoeed.png" width="800" height="405"&gt;Fig 2. Gradient Descent Image
&lt;/center&gt;
&lt;h3&gt;Backward Propagation:&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;It is an algorithm for supervised learning of an ANN using Gradient Descent.&lt;/li&gt;
&lt;li&gt;Given an ANN and an error function, it calculates the gradient of the error function with respect to the network's weights.&lt;/li&gt;
&lt;li&gt;Here the partial computations of the gradient from one layer are reused in the computation of the gradient for the previous layer.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Backpropagation Calculation:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The chain rule gives us the derivatives that determine each component of the gradient, letting us minimize the cost of the network by repeatedly stepping downhill (see the sketch after this list).&lt;/li&gt;
&lt;/ul&gt;
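
&lt;p&gt;For a single-neuron network with z = w * a_in + b, a = sigmoid(z), and cost C = (a - y)^2, the chain rule gives dC/dw = dC/da * da/dz * dz/dw. In code (a toy sketch):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass for one neuron
w, b, a_in, y = 0.5, 0.1, 1.0, 0.0
z = w * a_in + b
a = sigmoid(z)

# Backward pass via the chain rule: dC/dw = dC/da * da/dz * dz/dw
dC_da = 2 * (a - y)  # derivative of the cost (a - y)**2
da_dz = a * (1 - a)  # derivative of the sigmoid
dz_dw = a_in
dC_dw = dC_da * da_dz * dz_dw
print(dC_dw)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;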

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fid6w0b54ncpc74corw0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fid6w0b54ncpc74corw0c.png" alt="calculus1" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqv4wn312nvkdnngi2zqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqv4wn312nvkdnngi2zqk.png" alt="backpropagation_calculus" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>architecture</category>
      <category>datascience</category>
    </item>
  </channel>
</rss>
