<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: YASH GARG</title>
    <description>The latest articles on DEV Community by YASH GARG (@ygarg704).</description>
    <link>https://dev.to/ygarg704</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F504683%2F39d88f3a-1eb8-47bf-8308-0d0d3c323c85.jpeg</url>
      <title>DEV Community: YASH GARG</title>
      <link>https://dev.to/ygarg704</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ygarg704"/>
    <language>en</language>
    <item>
      <title>How to Build a Flower Recognition App with Teachable Machine and SashiDo</title>
      <dc:creator>YASH GARG</dc:creator>
      <pubDate>Sat, 03 Jul 2021 12:52:46 +0000</pubDate>
      <link>https://dev.to/ygarg704/flora-3b44</link>
      <guid>https://dev.to/ygarg704/flora-3b44</guid>
      <description>&lt;p&gt;Hello! I recently built a machine learning based web application that can identify a flower (daisy, dandelion, sunflower, rose and tulip for now) using &lt;a href="https://teachablemachine.withgoogle.com/" rel="noopener noreferrer"&gt;Google's Teachable Machine&lt;/a&gt; for training a machine learning model and &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo&lt;/a&gt; for storing images. I thought this was an interesting idea where users can either upload an image or use their webcam to get predicted results and now I'll walk you through it.&lt;/p&gt;

&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;Here's a short demo video showing how the application works:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/4JGH3JQjizc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;Table of Contents&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Google's Teachable Machine&lt;/li&gt;
&lt;li&gt;SashiDo&lt;/li&gt;
&lt;li&gt;Frontend&lt;/li&gt;
&lt;li&gt;WebCam Based Prediction&lt;/li&gt;
&lt;li&gt;Uploaded Image Based Prediction&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;li&gt;References&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Google's Teachable Machine&lt;/h4&gt;

&lt;p&gt;For classifying flowers, the first step is to generate an ML model. &lt;a href="https://teachablemachine.withgoogle.com/" rel="noopener noreferrer"&gt;Teachable Machine&lt;/a&gt; is a web-based tool that can generate three types of models depending on the input type: image, audio and pose. I created an image project and uploaded flower images taken from a &lt;a href="https://www.kaggle.com/alxmamaev/flowers-recognition" rel="noopener noreferrer"&gt;Kaggle dataset&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ia445owel76wau31ouq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ia445owel76wau31ouq.png" alt="TML"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are a few advanced settings for epochs, learning rate and batch size, but I felt the defaults were good enough for the task. After training, I exported the model and uploaded it. This stores the model in the cloud and gives a shareable public link, which can then be used in the project.&lt;/p&gt;

&lt;p&gt;The next step is to use the model to perform classification. There are two ways of providing input, and we'll go through both of them.&lt;/p&gt;

&lt;h4&gt;SashiDo&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo&lt;/a&gt; is a beautiful backend as a service platform and has a lot of built in functions.In this project, I've used only the Files functionality to store images uploaded by users. I agree that this isn't totally necessary but it is a great way to obtain more samples from the public and build a better dataset. To connect the application with &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo&lt;/a&gt; copy the code in the getting started page in &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo's&lt;/a&gt; Dashboard to the javascript file and also add the following script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script src=https://unpkg.com/parse/dist/parse.min.js&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
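The code from the Getting Started page boils down to initializing the Parse SDK with your app's credentials. Here is a minimal sketch; all three values below are placeholders, and the exact keys and server URL come from your own SashiDo dashboard:

```javascript
// Initialize the Parse SDK with the credentials shown on SashiDo's
// Getting Started page. All three values below are placeholders —
// replace them with your app's own keys and server URL.
Parse.initialize("YOUR_APP_ID", "YOUR_JAVASCRIPT_KEY");
Parse.serverURL = "https://pg-app-xxxxxxxx.scalabl.cloud/1/";
```

Once this runs, the `Parse` global (loaded from the script tag above) is connected to your app's backend.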



&lt;p&gt;The next step is to build the frontend.&lt;/p&gt;

&lt;h4&gt;Frontend&lt;/h4&gt;

&lt;p&gt;I've created two buttons, one to start/stop the webcam and one to upload an image, an input element for file upload, and three empty divs to display the webcam input, the uploaded image and the output (the prediction result). I have used &lt;a href="https://getbootstrap.com/" rel="noopener noreferrer"&gt;Bootstrap&lt;/a&gt;, so if you're not familiar with it, the class names correspond to its utility classes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    &amp;lt;div class="container" id="main"&amp;gt;
        &amp;lt;div class="row justify-content-center"&amp;gt;
            &amp;lt;div class="col-lg-10 col-md-12"&amp;gt;
                &amp;lt;div class="card m-4"&amp;gt;
                    &amp;lt;div class="card-body" id="box-cont" style="text-align: center;"&amp;gt;
                        &amp;lt;h3 class="card-title py-3 title" id="detect"&amp;gt;Flower Recognition Application
                        &amp;lt;/h3&amp;gt;
                        &amp;lt;p class="px-3"&amp;gt;
                            To identify a &amp;lt;strong&amp;gt;&amp;lt;span class="yellow"&amp;gt;Daisy&amp;lt;/span&amp;gt;, &amp;lt;span style="color: pink;"&amp;gt;Rose&amp;lt;/span&amp;gt;, &amp;lt;span class="yellow"&amp;gt;Dandelion&amp;lt;/span&amp;gt;, &amp;lt;span style="color: pink;"&amp;gt;Tulip&amp;lt;/span&amp;gt;, or &amp;lt;span class="yellow"&amp;gt;Sunflower&amp;lt;/span&amp;gt;&amp;lt;/strong&amp;gt;, either use your web camera and show the flower
                            or upload an image from your device.
                        &amp;lt;/p&amp;gt;
                        &amp;lt;label for="webcam" class="ps-3 pt-3 pb-3" style="color: #fcfcfc"&amp;gt;USE WEBCAM:&amp;lt;/label&amp;gt;
                        &amp;lt;button id="webcam" type="button" class="btn btn-primary ms-3"
                            onclick="useWebcam()"&amp;gt;Start
                            webcam
                        &amp;lt;/button&amp;gt;
                        &amp;lt;div id="webcam-container" class="px-3"&amp;gt;&amp;lt;/div&amp;gt;
                        &amp;lt;div id="label-container" class="px-3 pt-3" style="color: #fcfcfc;"&amp;gt;&amp;lt;/div&amp;gt;
                        &amp;lt;label class="p-3" for="fruitimg" style="color: #fcfcfc"&amp;gt;UPLOAD IMAGE:&amp;lt;/label&amp;gt;
                        &amp;lt;div class="input-group px-3 pb-3" id="inputimg" style="text-align: center;"&amp;gt;
                            &amp;lt;input type="file" class="form-control" accept="image/*" id="fruitimg"&amp;gt;
                            &amp;lt;button class="btn btn-primary" id="loadBtn"&amp;gt;Load&amp;lt;/button&amp;gt;
                        &amp;lt;/div&amp;gt;
                        &amp;lt;div id="uploadedImage" class="px-3"&amp;gt;&amp;lt;/div&amp;gt;
                        &amp;lt;div id="label-container-cam" class="px-3 pt-3"&amp;gt;&amp;lt;/div&amp;gt;
                    &amp;lt;/div&amp;gt;
                &amp;lt;/div&amp;gt;
            &amp;lt;/div&amp;gt;
        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;WebCam Based Prediction&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypn3nl7mawi3rjhmomqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypn3nl7mawi3rjhmomqi.png" alt="Web Based"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The model can easily be used in our JavaScript project via the &lt;a href="https://teachablemachine.withgoogle.com/" rel="noopener noreferrer"&gt;Teachable Machine&lt;/a&gt; library for images. To use the library, just add the following scripts at the bottom of the HTML file. Alternatively, you can install the library from npm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.3.1/dist/tf.min.js"&amp;gt;&amp;lt;/script&amp;gt;
&amp;lt;script
    src="https://cdn.jsdelivr.net/npm/@teachablemachine/image@0.8/dist/teachablemachine-image.min.js"&amp;gt;
&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following code toggles the webcam button and declares some variables. The URL constant is set to the model's shareable link.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let model, webcam, newlabel, canvas, labelContainer, maxPredictions, camera_on = false, image_upload = false;

function useWebcam() {
    camera_on = !camera_on;

    if (camera_on) {
        init();
        document.getElementById("webcam").innerHTML = "Close Webcam";
    }
    else {
        stopWebcam();
        document.getElementById("webcam").innerHTML = "Start Webcam";
    }
}

async function stopWebcam() {
    await webcam.stop();
    document.getElementById("webcam-container").removeChild(webcam.canvas);
    labelContainer.removeChild(newlabel);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can load the model, run the prediction, and display the class with the highest probability.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Load the image model and setup the webcam
async function init() {

    const modelURL = URL + "model.json";
    const metadataURL = URL + "metadata.json";

    // load the model and metadata
    model = await tmImage.load(modelURL, metadataURL);
    maxPredictions = model.getTotalClasses();

    // Convenience function to setup a webcam
    const flip = true; // whether to flip the webcam
    webcam = new tmImage.Webcam(200, 200, flip); // width, height, flip
    await webcam.setup(); // request access to the webcam
    await webcam.play();
    window.requestAnimationFrame(loop);

    // append element to the DOM
    document.getElementById("webcam-container").appendChild(webcam.canvas);

    newlabel = document.createElement("div");
    labelContainer = document.getElementById("label-container");
    labelContainer.appendChild(newlabel);
}

async function loop() {
    webcam.update(); // update the webcam frame
    await predict(webcam.canvas);
    window.requestAnimationFrame(loop);
}

// run the image through the image model
async function predict(input) {
    // predict can take in an image, video or canvas html element
    const prediction = await model.predict(input);

    var highestVal = 0.00;
    var bestClass = "";
    result = document.getElementById("label-container");
    for (let i = 0; i &amp;lt; maxPredictions; i++) {
        var classPrediction = prediction[i].probability; // compare raw probabilities, not strings
        if (classPrediction &amp;gt; highestVal) {
            highestVal = classPrediction;
            bestClass = prediction[i].className;
        }
    }

    if (bestClass == "Daisy" || bestClass == "Dandelion" || bestClass == "Sunflower") {
        newlabel.className = "alert alert-warning";
    }
    else {
        newlabel.className = "alert alert-danger";
    }

    newlabel.innerHTML = bestClass;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
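The class-selection step inside predict() can also be isolated into a small pure helper, which makes it easy to test outside the browser. This is just a sketch: it assumes prediction objects carry className and probability fields, as the Teachable Machine library returns them.

```javascript
// Given an array of { className, probability } objects, return the
// name of the class with the highest probability.
function bestClass(predictions) {
    let highest = { className: "", probability: -Infinity };
    for (const p of predictions) {
        if (p.probability > highest.probability) {
            highest = p;
        }
    }
    return highest.className;
}
```

predict() could then call this helper on the array returned by model.predict(input) and only handle the DOM updates itself.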



&lt;h4&gt;Uploaded Image Based Prediction&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpmz62vxzdjw4uzi0fjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpmz62vxzdjw4uzi0fjg.png" alt="Image Based"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second way of providing input is by uploading an image. I've used a bit of jQuery to do this. Essentially, once a user selects an image file with the input element and clicks Load, a click handler obtains a reference to the file and creates a new Parse file. A Parse file lets us store application files in the cloud that would be too large to fit in an ordinary object. Next, I created a canvas element to display the saved image and used it to predict the class of the uploaded image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$(document).ready(function () {
    $("#loadBtn").on("click", async function () {

        labelContainer = document.getElementById("label-container-cam");

        image_upload = !image_upload;

        if (!image_upload) {
            labelContainer.removeChild(newlabel);
            document.getElementById("uploadedImage").removeChild(canvas);
        }

        const fileUploadControl = $("#fruitimg")[0];
        if (fileUploadControl.files.length &amp;gt; 0) {

            const modelURL = URL + "model.json";
            const metadataURL = URL + "metadata.json";

            // load the model and metadata
            model = await tmImage.load(modelURL, metadataURL);
            maxPredictions = model.getTotalClasses();

            const file = fileUploadControl.files[0];

            const name = "photo.jpg";
            const parseFile = new Parse.File(name, file);

            parseFile.save().then(async function () {
                //The file has been saved to the Parse server

                img = new Image(224, 224);
                img.crossOrigin = "Anonymous";
                img.addEventListener("load", getPredictions, false);
                img.src = parseFile.url();

            }, function (error) {
                // The file either could not be read, or could not be saved to Parse.
                labelContainer.innerHTML = "Uploading your image failed!";
            });
        }
        else {
            labelContainer.innerHTML = "Try Again!";
        }
    });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code below, a canvas is created to display the image, and the prediction is made with the same predict function that was used for the webcam.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function getPredictions() {

    canvas = document.createElement("canvas");
    var context = canvas.getContext("2d");
    canvas.width = 224;
    canvas.height = 224;
    context.drawImage(img, 0, 0);
    document.getElementById("uploadedImage").appendChild(canvas);

    newlabel = document.createElement("div");
    labelContainer = document.getElementById("label-container-cam");
    labelContainer.appendChild(newlabel);

    await predict(canvas);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! The project is now ready to classify flowers.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I learned a lot from this project. This was my first time using &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo&lt;/a&gt;, and it made the backend work really simple. It was also my first time writing jQuery, so I had to learn it along the way. &lt;a href="https://teachablemachine.withgoogle.com/" rel="noopener noreferrer"&gt;Google's Teachable Machine&lt;/a&gt; helped a lot with the machine learning model and made the overall process smooth and efficient. I hope you enjoyed reading this. It's a fairly simple project, so if you have some time and are interested, go ahead and try building it yourself!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ygarg704/Flora" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://yashgarg.xyz/Flora/" rel="noopener noreferrer"&gt;Project Link&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;References&lt;/h3&gt;

&lt;p&gt;SashiDo Starter Guide &lt;a href="https://blog.sashido.io/sashidos-getting-started-guide/" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;&lt;br&gt;
SashiDo Starter Guide &lt;a href="https://blog.sashido.io/sashidos-getting-started-guide-part-2/" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;&lt;br&gt;
The Awesome Teachable Machine Learning &lt;a href="https://github.com/SashiDo/awesome-teachable-machine" rel="noopener noreferrer"&gt;List&lt;/a&gt;&lt;br&gt;
Teachable Machine &lt;a href="https://github.com/SashiDo/teachablemachine-node" rel="noopener noreferrer"&gt;Node&lt;/a&gt;&lt;br&gt;
Parse &lt;a href="https://github.com/parse-community/Parse-SDK-JS" rel="noopener noreferrer"&gt;SDK&lt;/a&gt;&lt;br&gt;
Parse &lt;a href="https://docs.parseplatform.org/js/guide/#creating-a-parsefile" rel="noopener noreferrer"&gt;File&lt;/a&gt;&lt;br&gt;
&lt;a href="https://teachablemachine.withgoogle.com/" rel="noopener noreferrer"&gt;Teachable Machine&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>javascript</category>
      <category>html</category>
    </item>
  </channel>
</rss>
