Hrishikesh Pathak

Detect and blur faces in Flutter using the PixLab API

We have all seen the tagging feature on Facebook or Instagram, where the app recognizes faces and suggests tagging them in our photos. The process works so smoothly that it makes for a very pleasant user experience.

Sometimes those apps blur faces instead. This can be done for privacy reasons, or because a user has uploaded explicit content that is not appropriate for a general audience.

If we analyze these two workflows, the first step in both is to detect the faces in an image. After detection, we can apply other kinds of photo manipulation, for example blurring, applying filters, or suggesting someone to tag.

If you want to implement face detection in your Flutter app and you are not an AI/ML expert, it will surely be very hard for you. You would have to learn OpenCV or tons of other bits and pieces just to ship a feature that is merely nice to have for the user.

In programming, it is often said: "never reinvent the wheel". Time is valuable and, as a programmer, you have to move fast, so you can use a service that does all the heavy lifting for you: detecting faces, blurring faces, extracting data from images or ID cards, and so on. A popular service offering all of these features is PixLab.

What is PixLab

Now let us take a look at what PixLab is. PixLab is a Software as a Service (SaaS) platform that provides machine vision and media processing APIs. It has 130 API endpoints, and the number keeps growing.

You can confidently use their API for your business, as it comes with a 99.9% uptime guarantee and 24/7 support.

They have 4 pricing tiers; you can find more details about them on the PixLab pricing page.

What are we building

In this article, we are building a face detection app with Flutter. The app will detect human faces in an image; the detected faces will then be blurred automatically and displayed inside the app.

The final version of the app will look like this.

[Screenshot: human faces detected and blurred in the app]

Supported Devices

As we are using Flutter without any platform-specific dependency, you can run this app on all the platforms Flutter supports. I have personally tested it on Android, Windows, and Linux, and it runs smoothly.

If you get any error during this tutorial, please let me know in the comments.

App structure

First, create a new Flutter project. Install Flutter if you don't already have it on your system. Then navigate to a folder of your choice and run

flutter create pixlab_demo

A new Flutter project will be created inside the pixlab_demo folder. Open this folder in a text editor. I am using Visual Studio Code for this purpose.

Then inside the lib/main.dart file, delete all the demo code and write

import 'package:flutter/material.dart';

void main() {
  runApp(
    const MaterialApp(
      home: FaceBlurPage(),
    ),
  );
}

Now create a stateful widget named FaceBlurPage().

class FaceBlurPage extends StatefulWidget {
  const FaceBlurPage({super.key});

  @override
  State<FaceBlurPage> createState() => _FaceBlurPageState();
}

class _FaceBlurPageState extends State<FaceBlurPage> {
  @override
  Widget build(BuildContext context) {
    // Placeholder UI; we will build the real interface later in this article
    return const Scaffold();
  }
}

Install dependencies

In this app, we are going to make HTTP requests to the PixLab API endpoints, so we need an HTTP client. In Flutter, dio is a very powerful and easy-to-use HTTP client. To install dio in your project, run

flutter pub add dio

After dio is installed in your project, you need an API key to interact with the PixLab endpoints. For that, sign up for a PixLab plan and acquire an API key.

Your API key may look like this: 74fg2erg46a1fe159er8rd8a4545a4.
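The snippets in the rest of this article assume the key is stored in a variable named pixlabkey; the value below is just a placeholder, so replace it with your own key.

// Your PixLab API key (placeholder, not a real key)
String pixlabkey = "YOUR_PIXLAB_API_KEY";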

HTTP requests in Flutter

Making an HTTP request in Flutter is not very hard, and the dio package we just installed makes it even easier.

If you are a beginner and don't know much about HTTP requests, it is worth going through a quick guide on the basics first.

To make a GET request in Flutter, first create an instance of Dio and write

import 'package:dio/dio.dart';

// Instantiating Dio
var dio = Dio();

// Making a GET request with dio
dio.get("https://yourdomain.com");

To make a POST request, we similarly create an instance of Dio and additionally provide a body.

import 'package:dio/dio.dart';

// Instantiating Dio
var dio = Dio();

// Making a POST request with an empty body
dio.post("https://yourdomain.com", data: {});

Inside the data parameter, you put the body of the request.

In this article, we will use only these two types of HTTP request, so this quick introduction should be enough to follow along.

Detect faces from an image in Flutter

First, create a variable named imagelink inside the stateful widget and assign your image URL to it. I am using this image for the tutorial; you can use any image you want.

String imagelink = "https://pixlab.io/images/m3.jpg";

Now create a function named detectFaces(). It will be an async function that returns a Future of Response. Inside it, we make a GET request to the PixLab facedetect API endpoint.

In the request, we have to provide two query parameters: the image URL and the API key. For the image URL, we use the imagelink variable we defined earlier.

The complete function will look like this.

import 'package:dio/dio.dart';

var dio = Dio();

// Detect faces using the PixLab API
Future<Response> detectFaces(String image) async {
  return dio.get(
    "https://api.pixlab.io/facedetect",
    queryParameters: {
      "img": image,
      "key": pixlabkey,
    },
  );
}

This function returns a JSON response, which we can access through the response.data property.

The response will look something like this.

{
    "faces": [
        {
            "face_id": 1,
            "bottom": 118,
            "right": 386,
            "top": 74,
            "left": 343,
            "width": 44,
            "height": 45
        },
        {
            "face_id": 2,
            "bottom": 159,
            "right": 210,
            "top": 107,
            "left": 158,
            "width": 53,
            "height": 53
        },
        {
            "face_id": 3,
            "bottom": 127,
            "right": 516,
            "top": 84,
            "left": 472,
            "width": 45,
            "height": 44
        },
        {
            "face_id": 4,
            "bottom": 156,
            "right": 135,
            "top": 94,
            "left": 73,
            "width": 63,
            "height": 63
        },
        {
            "face_id": 5,
            "bottom": 211,
            "right": 504,
            "top": 159,
            "left": 452,
            "width": 53,
            "height": 53
        },
        {
            "face_id": 6,
            "bottom": 163,
            "right": 301,
            "top": 101,
            "left": 239,
            "width": 63,
            "height": 63
        },
        {
            "face_id": 7,
            "bottom": 136,
            "right": 631,
            "top": 84,
            "left": 579,
            "width": 53,
            "height": 53
        }
    ],
    "status": 200
}

The facedetect endpoint gives us the coordinates of all the faces present in the image. Using these coordinates, you can draw rectangles or blur the faces. Use your creativity and tell me in the comments what else you can do with this data.
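As a quick illustration, here is a minimal sketch that pulls the coordinate list out of the response and prints each bounding box. It reuses detectFaces() and imagelink from above and must run inside an async function:

// Fetch the faces and print each bounding box
Response response = await detectFaces(imagelink);
List faces = response.data["faces"]; // list of coordinate maps
for (var face in faces) {
  // Each map holds the bounding box of one detected face
  print("Face ${face["face_id"]}: "
      "top=${face["top"]}, left=${face["left"]}, "
      "width=${face["width"]}, height=${face["height"]}");
}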

Blur detected faces in Flutter

Since the objective of this article is to detect and blur faces, we have finally reached the stage where we blur them. Are you excited? Let's make it happen together.

Make a function named blurface(). It is an async function that makes a POST request to the PixLab mogrify API endpoint and returns a Future of Response. Inside the body of the POST request, we provide a JSON object consisting of our image, the API key, and the face coordinates we got from the face detection function above.

The function will look like this.

import 'package:dio/dio.dart';

var dio = Dio();

// Blurring faces using facial coordinates
Future<Response> blurface(String image, List coordinates) async {
  return await dio.post(
    "https://api.pixlab.io/mogrify",
    data: {
      "img": image,
      "key": pixlabkey,
      "cord": coordinates,
    },
    options: Options(contentType: "application/json"),
  );
}

This endpoint responds with a JSON object containing a link to the blurred image, which we can download or display directly inside our Flutter app with the Image.network() widget.
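Concretely, the blurred image URL comes back in the response's ssl_link field, as you can see used in the complete code at the end. A minimal sketch of chaining the two calls, again inside an async function:

// Detect the faces, blur them, then read the blurred image URL
Response faces = await detectFaces(imagelink);
Response blurred = await blurface(imagelink, faces.data["faces"]);
String blurredUrl = blurred.data["ssl_link"];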

Building the UI

With the utility functions detectFaces() and blurface() in place, it is time to build our user interface.

We keep the interface very minimal. We define a Column inside the body of a Scaffold. The first element of the Column is a preview of our image, using a network image.

Below the preview image, we define a placeholder text widget for the final blurred image we get back from PixLab.

Finally, we add a floating action button; when we press it, the previously defined detectFaces() and blurface() functions run. A condensed sketch of this layout follows.
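Here is the condensed sketch, assuming a nullable String state variable blurImagelink holds the blurred image URL (both are defined in the complete code at the end):

// A condensed sketch of the build method (full version at the end)
@override
Widget build(BuildContext context) {
  return Scaffold(
    body: Column(
      children: [
        Image.network(imagelink), // preview of the original image
        blurImagelink != null
            ? Image.network(blurImagelink!) // blurred result
            : const Text("No image provided"), // placeholder text
      ],
    ),
    floatingActionButton: FloatingActionButton(
      onPressed: () async {
        // run detectFaces() and blurface(), then update the state
      },
      child: const Icon(Icons.auto_mode_rounded),
    ),
  );
}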

How the app works

When we press the floating action button, we first make a GET request to the PixLab facedetect endpoint with our preview image to detect all the faces inside it.

The endpoint responds with a JSON object containing the coordinates of every face present in the image.

We then pass those coordinates to the blurface() function, which makes a POST request with all the face coordinates inside the request body.

Now the PixLab API does its magic and responds with the URL of a face-blurred image, which we display inside our app as the final output.

Final result and code

The final result of the app looks like this.

[Screenshot: human faces detected and blurred in the app]

I am using setState() to update the blurImagelink variable so that the final image can be displayed inside the app.

The complete code of the app looks like this.

import 'package:flutter/material.dart';
import 'package:dio/dio.dart';

void main() {
  runApp(
    const MaterialApp(
      home: FaceBlurPage(),
    ),
  );
}

class FaceBlurPage extends StatefulWidget {
  const FaceBlurPage({super.key});

  @override
  State<FaceBlurPage> createState() => _FaceBlurPageState();
}

class _FaceBlurPageState extends State<FaceBlurPage> {
  String pixlabkey = "74389de25cb37a10adf615e8a79c8da4";
  String imagelink = "https://pixlab.io/images/m3.jpg";
  String? blurImagelink;

  // Instantiating Dio
  var dio = Dio();

  // Detect faces using the PixLab API
  Future<Response> detectFaces(String image) async {
    return dio.get(
      "https://api.pixlab.io/facedetect",
      queryParameters: {
        "img": image,
        "key": pixlabkey,
      },
    );
  }

  // Blurring faces using facial coordinates
  Future<Response> blurface(String image, List coordinates) async {
    return await dio.post(
      "https://api.pixlab.io/mogrify",
      data: {
        "img": image,
        "key": pixlabkey,
        "cord": coordinates,
      },
      options: Options(contentType: "application/json"),
    );
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text("Face Blur Example"),
      ),
      body: Column(
        children: [
          Image.network(imagelink),
          blurImagelink != null
              ? Image.network(blurImagelink!)
              : const Text("No image provided"),
        ],
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: () async {
          Response faces = await detectFaces(imagelink);
          Response blurfaceImageResponse =
              await blurface(imagelink, faces.data["faces"]);
          setState(() {
            blurImagelink = blurfaceImageResponse.data["ssl_link"];
          });
        },
        child: const Icon(Icons.auto_mode_rounded),
      ),
    );
  }
}

Conclusion

This was a very fun project for learning how to use the PixLab API to integrate AI/ML detection features into your app. If you get stuck on something, please let me know. My Twitter ID is @hrishikshpathak.

Bonus tip: If you don't want to make multiple requests for your image and drive up your CDN cost, you can use the cached_network_image package. It caches network images, making your app smoother and using less data on the user's device.
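A minimal sketch of swapping Image.network() for the package's widget, assuming you have added it with flutter pub add cached_network_image:

import 'package:cached_network_image/cached_network_image.dart';

// Drop-in replacement for Image.network(blurImagelink!) in the widget tree
CachedNetworkImage(
  imageUrl: blurImagelink!,
  placeholder: (context, url) => const CircularProgressIndicator(),
  errorWidget: (context, url, error) => const Icon(Icons.error),
);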
