
Building & Deploying an Image Classification Web App with GCP AutoML Vision Edge, Tensorflow.js & GCP App Engine

Prashant Singh · 10 min read

In this article, we’ll learn how to build a custom image classification web application with a minimal amount of code using AutoML, and deploy it on Google Cloud App Engine.

For this article, I will be building a model which will classify whether a person is wearing a face mask or not.

You can check my project by visiting this link on a laptop/PC with a webcam; ensure that your face is well lit for better predictions.

By the end of this article, you will be able to build your image classification model just by changing the dataset.

What is AutoML Vision?

AutoML allows us to train a custom model with our own data. It uses NAS (Neural Architecture Search) to find the best way to train our models. The only thing we need to do is gather data; more and better data enhances the accuracy of the model. This dramatically reduces the effort of writing code to train a model from scratch.

Steps Involved

  1. Set-up & Configure a Google Cloud Project
  2. Set-up AutoML Vision
  3. Creating a Dataset
  4. Training The Model
  5. Evaluate The Model
  6. Testing The Model
  7. Exporting the Model as a TensorFlow.js model
  8. Building the Web Application
  9. Hosting the Application on App Engine

Want to skip to these steps and jump over to the code? It’s all available in my GitHub repository at:

GitHub repository: singh08prashant/MaskDetection (Image Classification Web App built using TensorFlow.js & Google Cloud AutoML Vision)


1. Set-up & Configure a Google Cloud Project

Head over to Google Cloud Console, log in with your Google account, and create a new project or choose an existing one by following this link. Next, you need to set up a billing account for this project. Learn how to confirm billing is enabled for your project.

If you are a student, you can apply for free credits worth $50/year, access to free hands-on training and more at edu.google.com without submitting any credit card details.

2. Set-up AutoML Vision

Now let’s enable the Cloud AutoML API by following these steps.
You can view the menu with the list of Google Cloud products and services by clicking the Navigation Menu at the top-left.
From the Navigation Menu, select APIs & Services > Library.

Navigation Menu > APIs & Services > Library

Find & enable the Cloud AutoML API. This may take a few minutes.
Cloud AutoML API enabled

Activate the Google Cloud Shell from the top-right of the toolbar. When you are connected, the Cloud Shell should look like this:
Cloud Shell

In Cloud Shell, use the following commands to create environment variables for your Project ID and Username, replacing <USERNAME> with the Google account username you used to log in to the console:

export PROJECT_ID=$DEVSHELL_PROJECT_ID
export USERNAME=<USERNAME>

Now run the following command to give AutoML admin permissions to your account:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="user:$USERNAME" \
    --role="roles/automl.admin"

Next, we’ll create a Cloud Storage bucket in the region us-central1 to store our images, using the following command (alternatively, you can do the same in the UI by going to Navigation Menu > Storage > Browser > Create Bucket):

gsutil mb -p $PROJECT_ID \
   -c standard \
   -l us-central1 \
   gs://$PROJECT_ID-vcm/

3. Creating a Dataset

Download and store the images in separate folders according to their labels. For example, I have stored the images of people wearing a mask in a directory named ‘mask’ and images of people without a mask in a directory named ‘No Mask’, and compressed those folders for upload.

Datasets on my local hard drive

You can use this tool for downloading images in bulk from the internet.

GitHub repository: singh08prashant/Image-Scrapping-with-python-and-selenium (a Python codebase that downloads images from Google for a given search term)



Now, let’s head over to the AutoML UI in a new browser window & click on New Dataset.

Create new Dataset


Enter a dataset name and choose your model’s objective. As you can see, I have selected Multi-Label Classification to see the prediction percentage for both labels, ‘mask’ & ‘no mask’. You can select Single-Label Classification if you want each image to belong to a single class at a time. Click Create Dataset.

Import Images to Dataset



Now select Upload images from your computer, click Select Files, and select the zip files that we created earlier on our local disk.

Next, you need to specify a destination where the uploaded files will be stored on Cloud Storage. Click BROWSE & select the bucket that we created earlier (the one ending in -vcm).

Click CONTINUE and wait for the files to get imported in your Dataset.

After the images have finished importing, you will be able to see them under the IMAGES tab in the Dataset with their respective labels. If you are not happy with an image or a label, you can edit or delete it using the UI tools.

Ideally, we should provide at least 100 images for each label, and an equal number of images per label, for better model performance. However, you can proceed to train if you have at least 10 images per label. The Dataset is automatically split into Train, Validation & Test sets in a ratio of 8:1:1.

You can read more about preparing your training data at: Prepare your Training Data

Images imported in the dataset

4. Training The Model

Once you are satisfied with the imported & labelled Dataset, proceed to the TRAIN tab and click START TRAINING.
Now you’ll be asked to define your model. Give your model a name and select Edge to create a downloadable model. Click CONTINUE.
Next, you need to specify whether you want the model to be faster or more accurate. To find a balance between the two, select the Best trade-off option & click CONTINUE.
Then you’ll be asked to set a node-hour budget for training your model. The more training hours you give, the higher the accuracy will be. But the documentation also tells us that if the model stops improving, the training will stop. It’s a good practice to choose the recommended node-hour budget.

Set a node-hour budget

You’re now ready to start training your model by clicking START TRAINING.

While your model is training you can close the tab and maybe relax a bit by having a cup of coffee ☕. Once training is completed, you’ll get an email notification.

5. Evaluate Your Model

Once you receive the email notification, you can head over to evaluate your model. But before we evaluate it, we should understand the meaning of precision and recall. Precision refers to the percentage of your results that are relevant, while recall refers to the percentage of the total relevant results correctly classified by your algorithm. You can learn more about precision and recall here.
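To make these definitions concrete, here is a small sketch of how precision and recall are computed from counts of true positives, false positives and false negatives (the function name and the numbers are my own, purely illustrative):

```javascript
// Precision: of everything predicted "mask", how much really was a mask?
// Recall: of all real masks, how many did the model catch?
function precisionRecall(truePositives, falsePositives, falseNegatives) {
  const precision = truePositives / (truePositives + falsePositives);
  const recall = truePositives / (truePositives + falseNegatives);
  return { precision, recall };
}

// e.g. 90 correct "mask" predictions, 10 wrong ones, 30 masks missed:
// precision = 90 / (90 + 10) = 0.9, recall = 90 / (90 + 30) = 0.75
```

AutoML reports these metrics per label and at different score thresholds, but the underlying idea is the same.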

Model Metrics

You can see many new terminologies, about which you can learn more here.

6. Testing The Model

Under the TEST & USE tab, click the UPLOAD IMAGES button to upload images and generate predictions. You can upload up to 10 images at a time.

Predictions Online

You may want to Remove Deployment to avoid any unwanted billing costs.

Congratulations, you’ve successfully created an image classification model. But wait, it isn’t available for other users to interact with.

7. Exporting the Model as a TensorFlow.js model

Before we proceed further, we need gsutil: a Python application that lets us access Cloud Storage from the command line.

If you have PyPI (the Python Package Index) installed, you can run pip install gsutil, or you can install gsutil as part of the Google Cloud SDK.
Under the TEST & USE tab, you can see many different options for exporting and using your model; we’ll use the TensorFlow.js model.

TensorFlow.js is a library that helps to use ML directly in the browser. AutoML takes very little time to create a model and TensorFlow.js is the easiest and the most efficient way to run models directly inside the browser.

Use your model

Select TensorFlow.js > specify or create a bucket in the same region (us-central1) for exporting the model as a TensorFlow.js package, and click EXPORT.
After the export is complete, run the following command in a command prompt or terminal to copy the files from the Cloud Storage bucket to a local directory, replacing <cloud-storage-bucket> with the name of the bucket the model was exported to & <local-folder> with the path of the local directory where you wish to save the model:

gsutil cp gs://<cloud-storage-bucket>/model-export/icn/* <local-folder>

For example:

gsutil cp gs://mask-detection-pbl-vcm/model-export/icn/* Downloads

Once the model is downloaded, you’ll see a model.json file containing the tensor information along with the weight file names, one or more .bin files containing the model weights, and a dict.txt file containing the labels, which in my case are Mask & No Mask.

Downloaded Model

8. Building the Web Application

8.1 Creating the index.html file

In the same folder, create an index.html file and copy the following code:

<html>
<head>
<script src="https://unpkg.com/@tensorflow/tfjs"></script>
<script src="https://unpkg.com/@tensorflow/tfjs-automl"></script>
</head>
<body>

<video autoplay playsinline muted id="webcam" width="224" height="224">
</video>

<div id="predictions-mask"></div>
<div id="predictions-no-mask"></div>

<button type="button" id="startPredicting" onclick="startPredicting()">Start Predicting</button>
<button type="button" id="stopPredicting" onclick="stopPredicting()">Stop Predicting</button>
<script src="index.js"></script>
</body>
</html>

The tfjs and tfjs-automl scripts contain the functions required to run the model. If you want to use the model offline, you can download a copy of these scripts and include them in your HTML file.
The <video> tag creates a video element on the webpage. The two <div> elements will be used to display the predictions from the model. The buttons will be used to start & stop predictions, respectively. The script index.js is where we will implement the model.

8.2 Creating the index.js file

Now, we’ll create an index.js file in the same folder and copy the following code:

const webcamElement = document.getElementById('webcam');
let net;
let isPredicting = false;

function startPredicting() {
  isPredicting = true;
  app();
}

function stopPredicting() {
  isPredicting = false;
  app();
}

async function app() {
  console.log('Loading model..');
  // Load the exported AutoML model from model.json.
  net = await tf.automl.loadImageClassification('model.json');
  console.log('Successfully loaded model');

  // Set up the webcam as an input source.
  const webcam = await tf.data.webcam(webcamElement);
  while (isPredicting) {
    // Capture a frame and classify it.
    const img = await webcam.capture();
    const result = await net.classify(img);

    console.log(result);

    document.getElementById('predictions-mask').innerText =
      result['0']['label'] + ': ' + Math.round(result['0']['prob'] * 100) + '%';
    document.getElementById('predictions-no-mask').innerText =
      result['1']['label'] + ': ' + Math.round(result['1']['prob'] * 100) + '%';

    // Release the tensor's memory, then wait for the next animation frame.
    img.dispose();
    await tf.nextFrame();
  }
}

You may feel overwhelmed looking at this code, so let me explain it. Here, inside the asynchronous function app(), tf.automl.loadImageClassification() loads the model model.json for us and stores it as net.
tf.data.webcam() will set up the webcam. webcam.capture() will capture images from the live input through webcam and store it as img.
We then pass this image to the model using net.classify(img) and the predictions are stored in the variable result.
The functions startPredicting() & stopPredicting() act as switches to trigger an infinite loop for image classification.
Now you might wonder what these lines do:

document.getElementById("predictions-mask").innerText=result['0']['label']+": "+Math.round(result['0']['prob']*100)+"%";
document.getElementById("predictions-no-mask").innerText=result['1']['label']+": "+Math.round(result['1']['prob']*100)+"%";

The output that we get is stored in result in this form:

[{label: "Mask", prob: 0.048721060156822205}, {label: "No Mask", prob: 0.9479466080665588}]

The above code is used to write this result on the HTML page as Mask: 5% & No Mask: 95%.
Lastly, img.dispose() disposes of the tensor to release the memory & tf.nextFrame() gives some breathing room by waiting for the next animation frame to fire.
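One caveat: the code above assumes result[0] is always the ‘Mask’ prediction and result[1] is always ‘No Mask’. If your model’s labels come back in a different order, looking predictions up by label name is safer. A minimal sketch (probFor is a helper name I made up, not part of the TensorFlow.js API):

```javascript
// Return the rounded percentage for a given label, or 0 if the
// label is not present in the classification result.
function probFor(result, label) {
  const match = result.find(r => r.label === label);
  return match ? Math.round(match.prob * 100) : 0;
}

// Usage inside the prediction loop (same element ids as index.html):
// document.getElementById('predictions-mask').innerText =
//   'Mask: ' + probFor(result, 'Mask') + '%';
```

This way the displayed percentages stay correct regardless of the order of entries in dict.txt.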

To run the web app, download & launch Web Server for Chrome and set it up by choosing the directory where the code is located.

The app should look like this:

Web App

Feel free to add some CSS to the web-page to make it look fancier.

9. Hosting the application on App Engine

Log in to your Google Cloud Console, launch the Cloud Shell, and open the Editor.
Inside the editor, create a directory, let’s call it www, and upload all the files related to your model and web page into this directory.
In the root directory (outside www), create a file app.yaml & paste the following code in it:

runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /
  static_files: www/index.html
  upload: www/index.html

- url: /(.*)
  static_files: www/\1
  upload: www/(.*)

The app.yaml file is a configuration file that tells App Engine how to map URLs to your static files. The handlers above load www/index.html when someone visits your website, and serve all other static files from the www directory.

Now switch to the cloud shell and run the following command to deploy the web app:

gcloud app deploy

To launch your browser and view the app at https://PROJECT_ID.REGION_ID.r.appspot.com, run the following command:

gcloud app browse

Conclusion

With this, we’ve successfully built and deployed an image classification web app on Google Cloud. Thank you so much for reading patiently. 😃
