Flutter liveness: 300% performance enhancement

Stepping forward!!

Hello there again! I'm glad to be here sharing results and great improvements. If you're short on time and only want to know how to improve your app's performance, I recommend you peek at the code here and check out this Flutter doc about Isolates. On the other hand, if you want the full story, I invite you to sit comfortably and drop your comments along the way.

The UI has changed

In the last article of this series, we ended up with a simple screen: just a widget live-streaming the camera and a custom painter drawing a square where the face was located. I decided to enhance it a bit and provide a few things:

  • Start and Stop buttons (or anything similar) so I could start and stop the process without hot-reloading the app. This was needed mainly because constant camera usage and processing drains a lot of battery, and I had to recharge my phone a few times during development 😅.
  • Remove painters and provide something less visually invasive.
  • Export statistics and data from the face detection layer to the parent so it can decide what to do with them, meaning which widgets to paint or display on top of the camera live stream.

Sketch of the app screen

I planned it a bit (the sketch above is only here as proof) and decided to use routes to control start and stop by navigating between the live data stream page and the home page. Also, as a substitute for the square around my face, I decided to use a cube oriented by the Euler angles captured by the face detector.

With some sort of design in hand, I started coding the changes and reached the result with few difficulties. The interaction between the face detector layer and its parent was built with callbacks. This kept things simpler than any state manager, with the drawback of requiring the parent to provide a specific callback (hence, no real drawbacks hehehe). Once the data was available one layer up, I just needed to share it with the other children.
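To make that contract concrete, here's a minimal sketch of the idea (the type and parameter names are illustrative placeholders, not the exact ones from the repo):

import 'package:flutter/widgets.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

// Placeholder signature: the record type mirrors what _processImage
// returns later in this article.
typedef AnalysisCallback = void Function((InputImage, List<Face>) data);

class FaceDetectionLayer extends StatelessWidget {
  const FaceDetectionLayer({
    super.key,
    this.onAnalysisData,
    required this.preview,
  });

  // The parent hands in a callback instead of a state-management solution.
  final AnalysisCallback? onAnalysisData;
  final Widget preview; // the camera preview widget

  @override
  Widget build(BuildContext context) {
    // Whenever a frame produces data, the layer simply calls:
    // onAnalysisData?.call((inputImage, faces));
    return preview;
  }
}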

For the cube, I used some code from Stack Overflow 🙃 (you can check the final result here). I built the chart with fl_chart (which I found way too complicated for what it does), and the rest of the widgets were made with Text and spacing. The new UI ended up like this:

New parts of the UI
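As a taste of the cube part: the Euler angles ML Kit reports per face can drive a Flutter Transform directly. A sketch (the repo's actual cube widget comes from the Stack Overflow answer; this only shows the orientation idea):

import 'dart:math' as math;

import 'package:flutter/widgets.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

Widget orientedCube(Face face, Widget cube) {
  // ML Kit reports angles in degrees; Matrix4 rotations expect radians.
  double rad(double? degrees) => (degrees ?? 0) * math.pi / 180;

  return Transform(
    alignment: Alignment.center,
    transform: Matrix4.identity()
      ..setEntry(3, 2, 0.001)               // small perspective
      ..rotateX(rad(face.headEulerAngleX))  // head pitch
      ..rotateY(rad(face.headEulerAngleY))  // head yaw
      ..rotateZ(rad(face.headEulerAngleZ)), // head roll
    child: cube,
  );
}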

Time to understand the performance-related problems of the app.

Performance: the problem

Just as I finished the UI refactor and started testing the app, I noticed a "latency" of ~700ms to run all the steps needed to generate data from the live stream. Initially, I had a single Isolate spawned with Isolate.run() running three main steps:

  1. Parse the image data from the camera stream into an InputImage, including the conversion from the YUV_420_888 format to NV21
  2. Instantiate the FaceDetector object, since I had problems passing it down from the main Isolate to the spawned one
  3. Run the detection on the generated data

The pipeline was working, but with such a long per-frame time I would lose too much of the head movement, and the lack of data points could compromise the decision about whether the detected face is a person or just a photo being moved in front of the camera. Since this first version had issues, I had to work them out.
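For reference, that first version looked roughly like this, with everything inside a single Isolate.run call (a sketch: detectInSingleIsolate is a placeholder name, and CameraImageMetaData plus the conversion helper are the repo's own types):

Future<List<Face>> detectInSingleIsolate(CameraImageMetaData metaData) {
  final rootIsolateToken = RootIsolateToken.instance!;
  return Isolate.run<List<Face>>(() async {
    BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken);

    // Step 1: YUV_420_888 frame -> NV21 -> InputImage.
    final inputImage = metaData.inputImageFromCameraImageMetaData(metaData);
    if (inputImage == null) return [];

    // Step 2: the detector is created here because it couldn't be passed
    // down from the main Isolate.
    final faceDetector = FaceDetector(options: FaceDetectorOptions());

    // Step 3: run detection and release the native resources.
    final faces = await faceDetector.processImage(inputImage);
    await faceDetector.close();
    return faces;
  });
}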

Performance: first approach

My first thought was to divide and conquer: split the work across more Isolates. With this in mind, I refactored the code, and the core change was turning _processImage into this:



  // Requires dart:isolate, package:flutter/services.dart and
  // package:google_mlkit_face_detection.
  Future<void> _runAnalysisPipeline(CameraImageMetaData metaData) async {
    // Drop incoming frames while a previous one is still being processed.
    if (_isBusy) return;
    _isBusy = true;

    final analysisData = await _processImage(metaData);
    if (analysisData != null && widget.onAnalysisData != null) {
      widget.onAnalysisData!(analysisData);
    }

    _isBusy = false;
  }

  Future<(InputImage, List<Face>)?> _processImage(
    CameraImageMetaData metaData,
  ) async {
    final inputImage = await parseMetaData(metaData);

    if (inputImage == null ||
        inputImage.metadata?.size == null ||
        inputImage.metadata?.rotation == null) return null;

    final faceList = await runFaceDetection(inputImage);
    if (faceList.isEmpty) return null;

    return (inputImage, faceList);
  }

  // First Isolate: converts the camera frame into an ML Kit InputImage.
  Future<InputImage?> parseMetaData(CameraImageMetaData metaData) async {
    RootIsolateToken rootIsolateToken = RootIsolateToken.instance!;
    return await Isolate.run<InputImage?>(() async {
      BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken);

      return metaData.inputImageFromCameraImageMetaData(metaData);
    });
  }

  // Second Isolate: instantiates the detector and runs it on the frame.
  Future<List<Face>> runFaceDetection(InputImage inputImage) async {
    RootIsolateToken rootIsolateToken = RootIsolateToken.instance!;
    return await Isolate.run<List<Face>>(() async {
      BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken);

      final FaceDetector faceDetector = FaceDetector(
        options: FaceDetectorOptions(
          enableContours: true,
          enableLandmarks: true,
          enableTracking: true,
          enableClassification: true,
        ),
      );

      final faces = await faceDetector.processImage(inputImage);
      await faceDetector.close();
      return faces;
    });
  }



This code splits the function and uses two Isolates to handle the work. I admit I didn't think it through, though: the detection step needs the parsed InputImage, so the second Isolate has to wait for the first, and I also didn't consider the overhead of spawning and killing each Isolate. I still think the refactor made the code easier to read, but it pushed the "latency" from ~700ms to ~1000ms. It got worse!

To decide what to do next, I took some measurements and discovered I was spending too much time instantiating heavy objects and spawning the Isolates, so I decided to get rid of that overhead.
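The measurements were nothing fancy: a Stopwatch around each stage. Something along these lines (assuming the parseMetaData and runFaceDetection functions from the snippet above; the actual logging in the repo may differ):

import 'package:flutter/foundation.dart';

Future<void> measurePipeline(CameraImageMetaData metaData) async {
  final stopwatch = Stopwatch()..start();

  final inputImage = await parseMetaData(metaData);
  debugPrint('parse (incl. Isolate spawn): ${stopwatch.elapsedMilliseconds}ms');
  if (inputImage == null) return;

  stopwatch.reset();
  final faces = await runFaceDetection(inputImage);
  debugPrint('detect (incl. spawn + detector setup): '
      '${stopwatch.elapsedMilliseconds}ms, ${faces.length} face(s)');
}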

Performance: long-lived Isolates!

Here is where things get interesting. During my first ever usage of Isolates in this project, I went through the docs and chose the simplest approach: Isolate.run. It spawns the Isolate in a way that you don't have to handle the communication between the main and the spawned Isolate. What I actually needed was a long-lived Isolate (also called a worker Isolate). This approach has more complexity, but it allows me to create the Isolate when my face detection layer is mounted and kill it on dispose, saving the spawn time between detections. It also lets me instantiate the FaceDetector once inside the Isolate, saving its instantiation time as well.
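The full worker is in the repo, but its shape follows the long-lived Isolate pattern from the Dart docs: spawn once, exchange SendPorts, then answer detection requests as plain messages. A simplified sketch (FaceDetectorWorker here is trimmed down, not the repo's exact class):

import 'dart:async';
import 'dart:isolate';

import 'package:flutter/services.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

class FaceDetectorWorker {
  FaceDetectorWorker._(this._isolate, this._commands);

  final Isolate _isolate;
  final SendPort _commands;

  // Spawn once (e.g. when the detection layer is mounted) and reuse.
  static Future<FaceDetectorWorker> spawn() async {
    final readyPort = ReceivePort();
    final token = RootIsolateToken.instance!;
    final isolate =
        await Isolate.spawn(_workerMain, (readyPort.sendPort, token));
    // The worker answers with the port we should send commands to.
    final commands = await readyPort.first as SendPort;
    return FaceDetectorWorker._(isolate, commands);
  }

  // One request/response round-trip per frame; no Isolate spawn involved.
  Future<List<Face>> detect(InputImage image) async {
    final answerPort = ReceivePort();
    _commands.send((image, answerPort.sendPort));
    return await answerPort.first as List<Face>;
  }

  // Call from dispose() to tear the Isolate down.
  void dispose() => _isolate.kill(priority: Isolate.immediate);

  static Future<void> _workerMain((SendPort, RootIsolateToken) args) async {
    final (readyPort, token) = args;
    BackgroundIsolateBinaryMessenger.ensureInitialized(token);

    // The heavy object is created exactly once, not once per frame.
    final faceDetector = FaceDetector(
      options: FaceDetectorOptions(
        enableContours: true,
        enableLandmarks: true,
        enableTracking: true,
        enableClassification: true,
      ),
    );

    final commands = ReceivePort();
    readyPort.send(commands.sendPort);

    await for (final message in commands) {
      final (InputImage image, SendPort reply) =
          message as (InputImage, SendPort);
      reply.send(await faceDetector.processImage(image));
    }
  }
}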

New detection time

These changes granted me an average processing time (in debug mode) of ~340ms, almost a 3x improvement over the ~1000ms version (the "300%" of the title)!!! The component responsible is the face detector worker, which holds all the logic to spawn, maintain, and use the Isolate that runs the steps described before.

Results

Let's do a recap of the results we have so far:

  • Enabled face detection and live streaming of data
  • Provided visual feedback of the face Euler angles and facial expressions
  • Measured the detection performance

Regarding the performance, we have

  • Initial average processing time of 700ms
  • Worsening of ~43% with the first refactor (from 700ms to 1000ms average)
  • Improvement of almost 3x (the "300%") with the Isolate worker approach (from 1000ms down to 340ms average)

Solving the detection time bottleneck was essential because, in the end, we will be performing mathematical operations to decide whether the variation of the angles is real or not. For that decision to be accurate, we need more points in time during the tilt of the head in any direction, and that's exactly where the detection time is crucial.
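Just to illustrate the kind of math I mean, a toy check could look at the spread of the yaw angle over a window of samples (purely hypothetical; the window size and threshold are made up, and the real criteria will come from the base article):

import 'dart:math' as math;

// Does the yaw angle vary enough over the window to look like a real
// head movement rather than a static photo?
bool looksLikeLiveMotion(List<double> yawSamples, {double minStdDev = 3.0}) {
  if (yawSamples.length < 2) return false;
  final mean = yawSamples.reduce((a, b) => a + b) / yawSamples.length;
  final variance =
      yawSamples.map((y) => (y - mean) * (y - mean)).reduce((a, b) => a + b) /
          yawSamples.length;
  return math.sqrt(variance) >= minStdDev;
}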

So far, the achievements of this project are small, but the journey has been full of new concepts, techniques, and tools. Regarding the UI, the visual choices gave me the opportunity to work with 3D figures, object transformations, and charts, which I hadn't had the chance to build with Flutter before. As for the performance, noticing the bottleneck and being able to evolve from one approach to another was a fantastic chance to build up knowledge on using Isolates for processing time problems.

What's next

Now, I think we can start to build the liveness check itself. I have to re-read the "base article" I'm using to back up this whole idea and find out which steps I need to determine whether a face is a live person or a photo. I'll also try some things related to the variation of the data (if not provided in the article as well), since I have the feeling that part of the criteria is related to the variation itself.

Don't be shy, let your comments out if you have any.
