Integrate computer vision with flutter (using Google Teachable)

Introduction

In recent years, computer vision AI has become increasingly popular in the development of mobile applications. With computer vision AI, mobile apps can perform a variety of tasks, such as image recognition, object detection, emotion detection, and facial recognition. Google Teachable is a platform that allows users to create and train their own machine learning models without requiring any programming knowledge. By integrating computer vision AI with Google Teachable in a Flutter app, developers can create intelligent mobile applications that can recognize and classify images with high accuracy.

In this article, we will explore the process of integrating computer vision AI with Google Teachable in a Flutter app. We will discuss the steps required to create and train a machine learning model using Google Teachable, and how to export it as a TensorFlow Lite model that can be used in a Flutter app. We will also demonstrate how to use the TensorFlow Lite interpreter to perform image classification (of fruits, in our case) in a Flutter app, and how to build a user interface that captures and processes images.

Finally, we will discuss the benefits of using computer vision AI in mobile apps, and the potential applications of this technology in the future.

Getting Started

In this section, we will discuss how to integrate computer vision AI into a Flutter app using Google Teachable. Specifically, we will cover the initial steps of creating a new Flutter project and installing the necessary packages and dependencies.

Creating a New Flutter Project
Before we can begin integrating computer vision AI into our Flutter app, we need to create a new Flutter project. This can be done using the following command in the terminal:

flutter create <project-name>

This command will create a new Flutter project with the specified name. Once the project is created, navigate to the project directory using the cd command in the terminal.
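For example:

cd <project-name>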

Installing Necessary Packages and Dependencies
Now that we have created our new Flutter project, we need to install the necessary packages and dependencies for integrating computer vision AI. The packages required for this integration are:

  • camera: This package provides access to the device's camera, which we will need for capturing images for computer vision AI analysis.

  • tflite: This package provides support for TensorFlow Lite models, which are used for machine learning and computer vision AI.

To install these packages, add them to the pubspec.yaml file in the project directory.
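A minimal dependencies section might look like this (the version constraints below are illustrative; check pub.dev for the latest releases of camera and tflite):

dependencies:
  flutter:
    sdk: flutter
  camera: ^0.10.5
  tflite: ^1.1.2

Then fetch the packages by running the following command in the terminal: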

flutter pub get
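A note for Android builds: the tflite package's README asks you to stop Gradle from compressing the model file. If you hit model-loading errors on Android, add the following inside the android block of android/app/build.gradle (and make sure minSdkVersion is at least 21, which the camera plugin requires):

android {
    aaptOptions {
        noCompress 'tflite'
        noCompress 'lite'
    }
}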

Integrating Google Teachable with Flutter App

Google Teachable is a platform that allows developers to train custom machine learning models without writing any code. With Google Teachable, we can easily create and train machine learning models using a simple drag-and-drop interface. This makes it an excellent tool for adding computer vision AI capabilities to our Flutter app.

Setting up a Google Teachable Account
To get started with Google Teachable, we need to create an account. To do this, visit https://teachablemachine.withgoogle.com/ and sign in with your Google account. Once you have signed in, you will be taken to the Teachable Machine dashboard, where you can create a new Image Project, as shown below:

[Image: Teachable Machine dashboard]

Creating a Custom Machine Learning Model in Google Teachable
Now that we have set up our Google Teachable account and a new project, we can create a custom machine learning model. For the purpose of this tutorial, we will be creating an image classification model that can identify different fruits (strawberries, oranges, and apples).

To create the model, we will use Teachable Machine's webcam feature to capture images of the different fruits. Each fruit gets its own class: rename each class according to its fruit category (strawberries, oranges, apples). Alternatively, we could upload existing images of the fruits we want to classify.


Once we have captured enough images for each fruit (300 images per category in this case), we can train the model by clicking the Train Model button. This trains the machine learning model and generates a classification model along with a set of classification labels. Training can take some time, so be patient until it's done.

[Image: Dataset ready to be trained]

Exporting the Model as a TensorFlow Lite Model
After training the model, we need to export it as a TensorFlow Lite model. To do this, select the TensorFlow Lite option from the Export Model menu, then click the download button. This generates a TensorFlow Lite model file (a zip containing the model and its labels) that we can integrate into our Flutter app.


Integrating the TensorFlow Lite Model in Flutter App
Now that we have exported the TensorFlow Lite model from Google Teachable, we can integrate it into our Flutter app. To do this, unzip the downloaded file and copy the model file and the label file into the assets folder of our Flutter app.
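The model and labels also need to be registered as assets in pubspec.yaml so Flutter bundles them with the app (the file names below match what Teachable Machine exported for this project, and are the same paths we load later in loadModel()):

flutter:
  assets:
    - assets/model_unquant.tflite
    - assets/labels.txt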

Once the model is bundled with the app, we can load it and use it to perform image classification on frames captured from the device's camera using the camera package, then display the classification results in the Flutter app UI.

Implementing Computer Vision AI in Flutter App

Using TensorFlow Lite Interpreter in Flutter app to perform image classification
First, we need to initialize the camera in Flutter (the snippets below assume import 'package:camera/camera.dart'; and import 'package:tflite/tflite.dart'; at the top of the file). We add a CameraController to our State class and set it up from initState(). We list the device's cameras with the camera package's availableCameras() method, pick the back-facing camera, and initialize the controller with initialize(). Once initialization completes, we start the image stream with startImageStream(), which delivers frames from the camera continuously; each incoming frame is stored and handed to runModel(), which is defined in the next snippet. Note that we also call loadModel() in initState() so the TensorFlow Lite model is loaded before frames start arriving.

  CameraImage? cameraImage;
  CameraController? cameraController;

  @override
  void initState() {
    super.initState();
    loadModel(); // load the TensorFlow Lite model and labels (defined further below)
    loadCamera();
  }

  loadCamera() async {
    final cameras = await availableCameras();
    // Pick the back-facing camera from the list of available cameras.
    final backCamera = cameras.firstWhere(
        (camera) => camera.lensDirection == CameraLensDirection.back);
    cameraController = CameraController(
      backCamera,
      ResolutionPreset.medium,
      enableAudio: false,
    );

    cameraController!.initialize().then((_) {
      if (!mounted) {
        return;
      }
      setState(() {
        // Stream frames continuously; store the latest frame and classify it.
        cameraController!.startImageStream((imageStream) {
          cameraImage = imageStream;
          runModel();
        });
      });
    });
  }

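Although not shown in the original walkthrough, it is good practice to release the camera and the interpreter when the widget goes away; a minimal dispose() would be:

  @override
  void dispose() {
    cameraController?.dispose();
    Tflite.close();
    super.dispose();
  }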

Next, we use the tflite package to perform image classification. We load the model and labels with loadModel(). We then use the runModelOnFrame() method to classify each frame from the camera stream; it takes the bytesList, image height, image width, imageMean, imageStd, rotation, numResults, and threshold as parameters. We set the asynch parameter to true so that the model runs off the main thread, preventing the UI from freezing. Finally, we take the top prediction (results come back sorted by confidence) and update the UI with its label.

  String output = '';

  runModel() async {
    if (cameraImage != null) {
      var prediction = await Tflite.runModelOnFrame(
        bytesList: cameraImage!.planes.map((plane) {
          return plane.bytes;
        }).toList(),
        imageHeight: cameraImage!.height,
        imageWidth: cameraImage!.width,
        imageMean: 127.5, // normalize pixel values for the unquantized model
        imageStd: 127.5,
        rotation: 90, // compensate for the sensor orientation in portrait mode
        numResults: 2,
        threshold: 0.1,
        asynch: true, // run inference off the main thread
      );
      if (prediction != null && prediction.isNotEmpty) {
        setState(() {
          // Results are sorted by confidence; show the top label.
          output = prediction.first['label'];
        });
      }
    }
  }

  loadModel() async {
    await Tflite.loadModel(
      model: 'assets/model_unquant.tflite',
      labels: 'assets/labels.txt',
    );
  }
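One caveat (our own addition, not part of the original tutorial): startImageStream can deliver frames faster than the interpreter can process them. A simple guard flag that skips frames while a prediction is in flight keeps the UI responsive; a minimal sketch:

  bool isDetecting = false;

  // Use this as the startImageStream callback in place of the inline closure.
  void onNewFrame(CameraImage image) async {
    cameraImage = image;
    if (isDetecting) return; // drop frames while a prediction is running
    isDetecting = true;
    await runModel();
    isDetecting = false;
  }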

Building a User Interface to Capture and Process Images

Next, we will build the user interface to capture and display images. In the Scaffold() widget, we create an AppBar() with the title "Fruit Classification". We wrap the body in a SafeArea() widget so the UI doesn't overlap the device's system UI. A container sized to 70% of the device height and the full device width holds the camera preview. We check that the camera controller exists and is initialized before showing the preview, and use the AspectRatio() widget to set the preview's aspect ratio. Finally, a Text() widget displays the output label from the machine learning model.

// Inside the build() method:
return Scaffold(
  appBar: AppBar(
    title: const Text("Fruit Classification"),
  ),
  body: SafeArea(
    child: Padding(
      padding: const EdgeInsets.all(16.0),
      child: Column(
        crossAxisAlignment: CrossAxisAlignment.start,
        children: [
          SizedBox(
            // Reserve 70% of the screen height for the camera preview.
            height: MediaQuery.of(context).size.height * 0.7,
            width: MediaQuery.of(context).size.width,
            child: cameraController == null ||
                    !cameraController!.value.isInitialized
                ? Container() // camera not ready yet
                : AspectRatio(
                    aspectRatio: cameraController!.value.aspectRatio,
                    child: CameraPreview(cameraController!),
                  ),
          ),
          // The predicted fruit label, updated by runModel().
          Text(
            output,
            style: const TextStyle(
                fontWeight: FontWeight.bold, fontSize: 20),
          )
        ],
      ),
    ),
  ),
);


That concludes the logic. When you run the project, you should see the live camera preview with the predicted fruit label updating underneath it in real time.

Conclusion

In conclusion, this article has demonstrated how to integrate computer vision AI into a Flutter app using Google Teachable. We have explored how to create a custom machine learning model in Google Teachable, export the model as a TensorFlow Lite model, and integrate it into a Flutter app using the TensorFlow Lite interpreter. We have also built a user interface to capture and process images, and displayed the classification results in real time.

Future possibilities and improvements
As for future possibilities and improvements, there are various ways to enhance the user experience by integrating computer vision AI with other technologies such as augmented reality (AR). For instance, we could build an AR app that displays additional information about the classified fruits, such as their nutritional value and recipes.

Importance of computer vision AI in mobile app development
The importance of computer vision AI in mobile app development cannot be overstated. It opens up endless possibilities for creating innovative and intelligent mobile apps that can recognize and understand the world around them. With the help of Google Teachable and Flutter, developers can easily integrate computer vision AI into their mobile apps and enhance the user experience.

You can find the complete code for this project at the following link: Repo
