<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Benjamin Blouin</title>
    <description>The latest articles on DEV Community by Benjamin Blouin (@benjaminblouin).</description>
    <link>https://dev.to/benjaminblouin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F47819%2F30220d4e-51ed-4b73-92bd-4d6a705a7390.png</url>
      <title>DEV Community: Benjamin Blouin</title>
      <link>https://dev.to/benjaminblouin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/benjaminblouin"/>
    <language>en</language>
    <item>
      <title>TensorFlow with Interactive Example</title>
      <dc:creator>Benjamin Blouin</dc:creator>
      <pubDate>Mon, 01 Feb 2021 17:27:46 +0000</pubDate>
      <link>https://dev.to/benjaminblouin/tensorflow-with-interactive-example-1lfm</link>
      <guid>https://dev.to/benjaminblouin/tensorflow-with-interactive-example-1lfm</guid>
      <description>&lt;p&gt;I taught myself TensorFlow and used Jupyter Notebooks for part of my Capstone project for Electrical and Computer Engineering, training a model that can decide if an image has fire and/or smoke. I've included a link to the Binder notebook, where you can run each cell and play with the notebook to see what happens.&lt;br&gt;
Every cell runs inside the hosted notebook, so you don't even use your own computer for the model training; that means you should be able to train from any browser.&lt;br&gt;&lt;br&gt;
I will try to explain as best I can what is happening.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://mybinder.org/v2/gh/3keepmovingforward3/Wildfire_Warning_System/main"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mEjI_Bhc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://mybinder.org/badge_logo.svg" alt="Binder"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;h2&gt;
  
  
  Modules
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WGqEMR2R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tl9d5nb3gb2y4lf1db8k.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WGqEMR2R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tl9d5nb3gb2y4lf1db8k.PNG" alt="cell_1"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;The line with the percent prefix is a Jupyter magic function that helps display the plots used later.&lt;br&gt;&lt;br&gt;
The imports should be self-explanatory.&lt;br&gt;&lt;br&gt;
Setting TensorFlow's logging level makes the API log less, which speeds it up a little.&lt;br&gt;&lt;br&gt;
The last variable is used later.  &lt;/p&gt;
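&lt;p&gt;For anyone who can't open the screenshot, here is a sketch of what this cell likely contains. The module list and the variable name are assumptions on my part, not copied from the notebook (in Jupyter this would be preceded by the &lt;code&gt;%matplotlib inline&lt;/code&gt; magic, which can't run in a plain script):&lt;/p&gt;

```python
# A sketch of the imports cell, assuming TensorFlow 2.x.
import os

# Quiet TensorFlow's C++ logging: '2' keeps errors, drops info/warnings.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

# The "last variable, used later" -- a hypothetical name for the dataset folder.
data_dir = 'fire_dataset'
```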

&lt;h2&gt;
  
  
  Download Dataset
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ryQvrS89--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gbn6vluftgencem4im80.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ryQvrS89--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gbn6vluftgencem4im80.PNG" alt="cell_2"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;This cell downloads the dataset I used for my Capstone into the notebook's local storage.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Unpack Dataset
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VfgpV5IC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/osoxtpeqmoiarvzvtjho.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VfgpV5IC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/osoxtpeqmoiarvzvtjho.PNG" alt="cell_3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We import the modules needed to unpack the dataset and try to make a folder for decompression. This could probably be more efficient, but I didn't actually need this step for the Capstone project itself.  &lt;/p&gt;
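&lt;p&gt;The download and unpack cells might look roughly like this; the URL and file names below are placeholders, not the real dataset location (that is only visible in the screenshots):&lt;/p&gt;

```python
import os
import tarfile
import urllib.request

def download_and_unpack(url, archive_path, extract_dir):
    # Fetch the archive only if we don't already have a local copy.
    if not os.path.exists(archive_path):
        urllib.request.urlretrieve(url, archive_path)
    # Try to make a folder for decompression, then extract into it.
    os.makedirs(extract_dir, exist_ok=True)
    with tarfile.open(archive_path) as tar:
        tar.extractall(extract_dir)

# Hypothetical usage -- the real URL is in the notebook screenshot:
# download_and_unpack('https://example.com/fire_dataset.tar.gz',
#                     'fire_dataset.tar.gz', 'fire_dataset')
```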

&lt;h2&gt;
  
  
  Create Dataset Objects
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lFyxFxR9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n3hbjwg8pfjpnzwbxorh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lFyxFxR9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n3hbjwg8pfjpnzwbxorh.PNG" alt="cell_4"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;First, let's set some variables; no magic numbers!&lt;br&gt;&lt;br&gt;
We unpacked the dataset; now we have to turn it into something TensorFlow can use. The API has a very nice function that can create a dataset from a folder. The dataset folder, in this case, has two sub-folders, 'fire_smoke' and 'no-fire'. These are the classes, or categories.&lt;br&gt;&lt;br&gt;
In this function's arguments we must give:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The data directory path, which we made in cell 1.&lt;/li&gt;
&lt;li&gt; The subset: training and validation are our only choices. This works because we've set up the dataset directory in an orderly manner.
&lt;/li&gt;
&lt;li&gt; The validation split is the fraction of each directory's images held out for validation; the rest are used for training. This might be confusing: passing 0.21 with the training subset keeps 79% of the images for training, while the same 0.21 with the validation subset takes the remaining 21%. Keeping the two values equal is a safe simplification.
&lt;/li&gt;
&lt;li&gt; The seed initializes the pseudorandom choice of images for each subset: validation and training don't overlap, and the same seed reproduces the same split on each run.
&lt;/li&gt;
&lt;li&gt; image_size resizes the images to standardize the matrix sizes to 160px x 120px; we cannot do the matrix multiplication if the sizes differ, so we resize them.
&lt;/li&gt;
&lt;li&gt; color_mode makes the images grayscale. This makes sure the matrix for each image is (160,120,1). Now all our pictures are exactly the same size, and grayscale.
&lt;/li&gt;
&lt;li&gt; batch_size is a little arbitrary; the value is chosen so the computer running the training can actually finish. If it's too big, you run out of memory.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now we save the class names (fire_smoke, no-fire) into a variable, used to label examples from the datasets. The final section plots a few sample images, not unlike how MATLAB plotting works, and the autotune setting lets TensorFlow tune its data loading to the computer it runs on.  &lt;/p&gt;
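&lt;p&gt;Putting those arguments together, the dataset-creation calls could be sketched like this. The batch size, seed, and directory name below are illustrative values, not necessarily the ones in the screenshot:&lt;/p&gt;

```python
import tensorflow as tf

# Illustrative values; the real cell's numbers are in the screenshot.
BATCH_SIZE = 32
IMG_HEIGHT = 160
IMG_WIDTH = 120
VAL_SPLIT = 0.21
SEED = 123  # any fixed integer keeps the two subsets disjoint and reproducible
DATA_DIR = 'fire_dataset'  # hypothetical path from the unpack step

def make_datasets(data_dir):
    # Shared keyword arguments for both subset calls.
    common = dict(
        validation_split=VAL_SPLIT,
        seed=SEED,
        image_size=(IMG_HEIGHT, IMG_WIDTH),
        color_mode='grayscale',
        batch_size=BATCH_SIZE,
    )
    train_ds = tf.keras.utils.image_dataset_from_directory(
        data_dir, subset='training', **common)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        data_dir, subset='validation', **common)
    return train_ds, val_ds
```

&lt;p&gt;The class names come from the sub-folder names, so &lt;code&gt;train_ds.class_names&lt;/code&gt; gives back the two categories automatically.&lt;/p&gt;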

&lt;h2&gt;
  
  
  Create Model
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MJXq6WHi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lvqmd32rtpja115i8592.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MJXq6WHi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lvqmd32rtpja115i8592.PNG" alt="cell_5"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;We only have two choices, either fire_smoke or no-fire; therefore, we have 2 classes.&lt;/p&gt;

&lt;p&gt;A sequential model means that each layer of the model has exactly one input and one output, stacked in order.  &lt;/p&gt;

&lt;p&gt;The first layer normalizes the input to values between 0 and 1, which are easier numbers for TensorFlow to work with.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Dense layers
&lt;/h3&gt;

&lt;p&gt;Every value in the domain is connected to every value in the range. This layer provides a larger number of units, 64, whose outputs feed each unit of the next layer.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Conv2D Layers
&lt;/h3&gt;

&lt;p&gt;Extracts features from the image, or parts of the image, by doing convolution. Explaining convolution is beyond the scope of this article.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Dropout
&lt;/h3&gt;

&lt;p&gt;Dropout causes some of the learned values to be thrown away at random; this helps check whether our model is actually learning, rather than just getting better at this particular dataset. We want models to be general so they can be used in many different scenarios.  &lt;/p&gt;

&lt;h3&gt;
  
  
  MaxPooling2D
&lt;/h3&gt;

&lt;p&gt;This is a different way to avoid overfitting: it reduces (downsamples) the learned values instead of throwing them away.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Flatten
&lt;/h3&gt;

&lt;p&gt;Turns the learned values into a vector, so the math becomes plain matrix multiplication instead of some harder operation.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Output
&lt;/h3&gt;

&lt;p&gt;The output layer is our last layer, which has the same number of neurons as classes. This is where a decision is made, fire_smoke or no-fire.  &lt;/p&gt;

&lt;p&gt;The next set of settings requires a delve into gradient-descent optimizers. Basically: are my guesses getting better? If yes, keep these values for my next guesses; if not, go a different way. The learning rate is how far away from my current guess the next guess will be. Adam is a gradient-descent optimizer that includes momentum.&lt;br&gt;&lt;br&gt;
The loss argument tells the model what to minimize during training; driving the loss down makes the guesses more accurate. We use the metrics argument to record the actual accuracy values of our learning progress as we go through each round of learning.  &lt;/p&gt;
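&lt;p&gt;Putting the layers and settings above together, a model in this spirit might be built like this. The filter counts, dropout rate, and layer order are my guesses; the exact architecture in the screenshot may differ:&lt;/p&gt;

```python
import tensorflow as tf

num_classes = 2  # fire_smoke or no-fire

model = tf.keras.Sequential([
    tf.keras.Input(shape=(160, 120, 1)),       # grayscale 160x120 images
    tf.keras.layers.Rescaling(1.0 / 255),      # normalize pixels to 0..1
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),            # downsample to fight overfitting
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Dropout(0.2),              # randomly drop learned activations
    tf.keras.layers.Flatten(),                 # to a vector for the dense layers
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(num_classes),        # one output neuron per class
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
```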

&lt;h2&gt;
  
  
  Train Model
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_ejdIDQd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vcoezl1tvoqd3rpsnot9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_ejdIDQd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vcoezl1tvoqd3rpsnot9.PNG" alt="cell_6"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;This is where the training actually starts. Epochs is the number of passes the model makes over the training data, carrying the learned variables forward each time. The other arguments are self-explanatory; we want to see the results in real time, so we set verbose. The summary call gives you a better idea of the shape and size of each layer we made earlier.  &lt;/p&gt;
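&lt;p&gt;As a sketch, the training step can be written as a small helper; the epoch count here is an assumed placeholder, not the notebook's value:&lt;/p&gt;

```python
import tensorflow as tf

def train(model, train_ds, val_ds, epochs=10):
    # epochs=10 is an assumed default; the notebook's value is in the screenshot.
    history = model.fit(train_ds, validation_data=val_ds,
                        epochs=epochs, verbose=1)  # verbose=1 shows live progress
    model.summary()  # prints the shape and parameter count of each layer
    return history
```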

&lt;h2&gt;
  
  
  Plot Model
&lt;/h2&gt;

&lt;p&gt;The next cell is really more about plotting the results, so I'll include the most recent run, which I ran while writing this post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RTmCy2P5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gu2uaqd1gnfddx2qyb4w.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RTmCy2P5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gu2uaqd1gnfddx2qyb4w.PNG" alt="model_output"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Success! Our model got better each epoch, the accuracy curve is fairly linear, and it ends up really good at guessing: over 90%.  &lt;/p&gt;
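&lt;p&gt;The plotting cell likely does something along these lines with the history object returned by fit (the exact labels and layout in the screenshot may differ):&lt;/p&gt;

```python
import matplotlib.pyplot as plt

def plot_history(history):
    # One accuracy value per epoch, for training and validation.
    acc = history.history['accuracy']
    val_acc = history.history['val_accuracy']
    epochs = range(1, len(acc) + 1)
    plt.plot(epochs, acc, label='training accuracy')
    plt.plot(epochs, val_acc, label='validation accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.show()
```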

&lt;h2&gt;
  
  
  Prediction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iogCLKtj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4trqrrxfgoubgkrezpy8.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iogCLKtj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4trqrrxfgoubgkrezpy8.PNG" alt="cell_7_and_output"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
I won't walk through this cell line by line; the output is self-explanatory. We asked the model to make a prediction on an image that isn't part of the dataset. It guesses correctly, at least on these two images.&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0wdxne2u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/slc42emfycn3e7jalgxp.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0wdxne2u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/slc42emfycn3e7jalgxp.PNG" alt="cell_8_and_output"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Hope this helps.&lt;/p&gt;
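&lt;p&gt;For reference, a single-image prediction in this style could be sketched as follows; the helper name is mine, and the 160x120 grayscale preprocessing mirrors the earlier dataset cells:&lt;/p&gt;

```python
import numpy as np
import tensorflow as tf

def predict_image(model, image_path, class_names):
    # Load one image the same way the training data was prepared:
    # grayscale, resized to 160x120 (sizes assumed from the earlier cells).
    img = tf.keras.utils.load_img(image_path, color_mode='grayscale',
                                  target_size=(160, 120))
    arr = tf.keras.utils.img_to_array(img)
    arr = np.expand_dims(arr, axis=0)  # the model expects a batch dimension
    logits = model.predict(arr, verbose=0)
    probs = tf.nn.softmax(logits[0]).numpy()  # convert logits to probabilities
    return class_names[int(np.argmax(probs))], float(np.max(probs))
```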

</description>
      <category>tutorial</category>
      <category>jupyter</category>
      <category>machinelearning</category>
      <category>python</category>
    </item>
    <item>
      <title>Part 1: Installing ROSS Can Be Hard, Let's Make it Easy</title>
      <dc:creator>Benjamin Blouin</dc:creator>
      <pubDate>Sat, 13 Apr 2019 03:28:32 +0000</pubDate>
      <link>https://dev.to/benjaminblouin/part-1-installing-ross-can-be-hard-let-s-make-it-easy-45hg</link>
      <guid>https://dev.to/benjaminblouin/part-1-installing-ross-can-be-hard-let-s-make-it-easy-45hg</guid>
      <description>&lt;p&gt;This is a post to keep track of setting up ROS on Ubuntu 18.04LTS because the first tries made me sigh more than I wanted to.&lt;/p&gt;

&lt;h1&gt;ROS Melodic Morenia (EOL May, 2023)&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://wiki.ros.org/melodic/Installation/Ubuntu"&gt;ROS' Installation Guide for Ubuntu&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;I'm assuming you have a fresh, unmodified Ubuntu 18.04 LTS install!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;Repository Initializations&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Allow Ubuntu repositories "restricted," "universe," and "multiverse."
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ol3fz8Hu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/9nd9tjyhv2iq0s6j6xj2.png" alt="Software &amp;amp; Updates Settings"&gt;
&lt;/li&gt;
&lt;li&gt;In the same window, select the "Other Software" tab, click Add, and paste the line below: &lt;code&gt;deb http://mirror.umd.edu/packages.ros.org/ros/ubuntu bionic main&lt;/code&gt;
&lt;em&gt;This repo is hosted at the University of Maryland. The link given in ROS's guide is broken. If you want to use a different one, click below.&lt;/em&gt;
&lt;a href="https://wiki.ros.org/ROS/Installation/UbuntuMirrors"&gt;ROS Mirrors&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Apply/Close the window.&lt;/li&gt;
&lt;li&gt;There's a good chance a missing public-key error is thrown. Open Terminal and run the command below:&lt;p&gt; 
&lt;code&gt;sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 5523BAEEB01FA116&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
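&lt;p&gt;If you'd rather skip the GUI, the repository steps above can be done entirely from the terminal; a sketch using the same UMD mirror (run at your own risk, and check the mirror list first if it's unreachable):&lt;/p&gt;

```shell
# Add the UMD-mirrored ROS repository (equivalent to the GUI steps above)
echo "deb http://mirror.umd.edu/packages.ros.org/ros/ubuntu bionic main" | sudo tee /etc/apt/sources.list.d/ros-latest.list

# Import the missing public key, then refresh the package index
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 5523BAEEB01FA116
sudo apt update
```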

&lt;p&gt;It's 12:37 AM, I’m going to 🛌 😴 💤 &lt;br&gt;
Let the 🤖 🌚 💀 😂 &lt;/p&gt;

</description>
      <category>ross</category>
      <category>robotics</category>
    </item>
    <item>
      <title>Hello, world!</title>
      <dc:creator>Benjamin Blouin</dc:creator>
      <pubDate>Mon, 04 Dec 2017 19:02:34 +0000</pubDate>
      <link>https://dev.to/benjaminblouin/hello-world-80f</link>
      <guid>https://dev.to/benjaminblouin/hello-world-80f</guid>
      <description>

&lt;p&gt;Hello, world!&lt;/p&gt;


</description>
      <category>helloworldfirst1st</category>
    </item>
  </channel>
</rss>
