Developing a Smile-Photographing Function Using Machine Learning

Irene (irene83018774) ・4 min read

1 Introduction

What is Machine Learning Kit (ML Kit)? What can ML Kit do? Which problems can it solve during application development?

Today, let's take face detection as an example to show the powerful functions of ML Kit and the convenience it provides for developers.

1.1 Capabilities Provided by ML Kit Face Detection

First, let's look at the face detection capability of Huawei Machine Learning Service (ML Kit).
As shown in the animation, face detection can recognize the face orientation; detect facial expressions (such as happy, disgusted, surprised, sad, and angry); detect facial attributes (such as gender, age, and wearables); detect whether the eyes are open or closed; and support coordinate detection of features such as the face, nose, eyes, lips, and eyebrows. In addition, multiple faces can be detected at the same time.

Tip: This function is free of charge and covers all Android models.

2 Development of the Multi-Face Smile Photographing Function

Today, I will use the multi-face detection and expression detection capabilities of ML Kit to write a small smile-snapshot demo and walk through the practice.

To download the Github demo source code, click here (the project directory is Smile-Camera).

2.1 Development Preparations

The preparations for developing any Huawei HMS kit are similar: the only differences are the Maven dependency and the SDK that is introduced.
1. Add the Huawei Maven repository to the project-level build.gradle file.
Incrementally add the following Maven addresses:

buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

2. Add the SDK dependencies to the app-level build.gradle file.
Introduce the face detection SDK and the basic SDK.

// Introduce the basic SDK (append the SDK version number after the colon).
implementation 'com.huawei.hms:ml-computer-vision:'
// Introduce the face detection capability package (append the version number).
implementation 'com.huawei.hms:ml-computer-vision-face-recognition-model:'

3. Add the model declaration to the AndroidManifest.xml file in incremental mode so that the model is downloaded automatically.
This is mainly used for model updates: after the algorithm is optimized, the updated model can be automatically downloaded to the phone.

<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="face" />

4. Apply for the camera and storage permissions in the AndroidManifest.xml file.

<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
<!--Use the storage permission.-->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

2.2 Code Development

1. Create a face analyzer and take a photo when a smile is detected.
The process is as follows:
1) Configure the analyzer parameters.
2) Pass the parameter settings to the analyzer.
3) In analyzer.setTransactor, override transactResult to process the face detection results. For each detected face, a confidence value (the smiling probability) is returned; you only need to compare it against a threshold.

private MLFaceAnalyzer analyzer;
private void createFaceAnalyzer() {
    // Enable feature detection so that emotions (smiling probability) are returned.
    MLFaceAnalyzerSetting setting =
            new MLFaceAnalyzerSetting.Factory()
                    .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
                    .create();
    this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
    this.analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLFace>() {
        @Override
        public void destroy() {
        }

        @Override
        public void transactResult(MLAnalyzer.Result<MLFace> result) {
            SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
            int flag = 0;
            for (int i = 0; i < faceSparseArray.size(); i++) {
                MLFaceEmotion emotion = faceSparseArray.valueAt(i).getEmotions();
                if (emotion.getSmilingProbability() > smilingPossibility) {
                    flag++;
                }
            }
            // Take a photo only when enough of the detected faces are smiling.
            if (flag > faceSparseArray.size() * smilingRate && safeToTakePicture) {
                safeToTakePicture = false;
                // Trigger the photo capture here (see the demo source for details).
            }
        }
    });
}
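The capture decision boils down to a ratio check: count the faces whose smiling probability exceeds a threshold, then trigger only when that count exceeds a fraction of all detected faces. As a plain-Java sketch (the class and method names below are illustrative, not part of the ML Kit API):

```java
// Illustrative helper: decide whether to take a photo given per-face
// smiling probabilities. Hypothetical names, not ML Kit API.
class SmileTrigger {
    private final float smilingPossibility; // per-face smile confidence threshold
    private final float smilingRate;        // fraction of faces that must smile

    SmileTrigger(float smilingPossibility, float smilingRate) {
        this.smilingPossibility = smilingPossibility;
        this.smilingRate = smilingRate;
    }

    boolean shouldTakePhoto(float[] smileProbabilities) {
        if (smileProbabilities.length == 0) {
            return false; // no faces detected, nothing to capture
        }
        int smiling = 0;
        for (float p : smileProbabilities) {
            if (p > smilingPossibility) {
                smiling++;
            }
        }
        // Trigger only when more than smilingRate of the faces are smiling.
        return smiling > smileProbabilities.length * smilingRate;
    }
}
```

For instance, with a 0.8 confidence threshold and a 0.6 rate, two smiling faces out of three trigger a capture, but one out of three does not.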

2. Create a vision engine to capture dynamic video streams from the camera and send the streams to the analyzer.

private void createLensEngine() {
    Context context = this.getApplicationContext();
    // Create LensEngine.
    this.mLensEngine = new LensEngine.Creator(context, this.analyzer)
            .setLensType(this.lensType)
            .applyDisplayDimension(640, 480)
            .applyFps(25.0f)
            .enableAutomaticFocus(true)
            .create();
}

3. Apply for permissions dynamically, then create the analyzer and the vision engine.

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    this.setContentView(R.layout.activity_live_face_analyse); // layout name as used in the demo project
    if (savedInstanceState != null) {
        this.lensType = savedInstanceState.getInt("lensType");
    }
    this.mPreview = this.findViewById(R.id.preview);
    // Checking Camera Permissions
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
    } else {
        this.requestCameraPermission();
    }
}

private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE};
    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
        ActivityCompat.requestPermissions(this, permissions, LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE);
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    if (requestCode != LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        return;
    }
    if (grantResults.length != 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
    }
}

Isn't the development process simple? A new feature can be developed in 30 minutes. Now, let's experience the effect of multi-face smile capture.

Based on the face detection capability, what other functions can be built? Use your imagination! Here are a few hints:
1. Add interesting decorative effects by identifying the locations of facial features such as the ears, eyes, nose, mouth, and eyebrows.
2. Identify facial contours and stretch them to generate amusing portraits, or develop beautification functions for the contour areas.
3. Develop parental control functions based on age identification, to manage children's use of electronic products.
4. Develop an eye-comfort feature by detecting how long the eyes stare at the screen.
5. Implement liveness detection through random commands (such as shaking the head, blinking, and opening the mouth).
6. Recommend offerings to users based on their age and gender.
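Taking hint 4 as an example, an eye-comfort feature could accumulate the continuous time during which the detector reports the eyes open and facing the screen, and suggest a break past a limit. A minimal plain-Java sketch (all names here are hypothetical, not ML Kit APIs):

```java
// Illustrative sketch for an eye-comfort timer: accumulate continuous
// screen-watching time and flag when a limit is exceeded. Hypothetical names.
class EyeComfortTimer {
    private final long limitMillis; // maximum continuous watching time
    private long watchedMillis = 0; // accumulated continuous watching time

    EyeComfortTimer(long limitMillis) {
        this.limitMillis = limitMillis;
    }

    // Call once per analyzed frame with the frame interval and whether
    // the face detector reported the eyes open and facing the screen.
    boolean onFrame(long frameIntervalMillis, boolean eyesOnScreen) {
        if (eyesOnScreen) {
            watchedMillis += frameIntervalMillis;
        } else {
            watchedMillis = 0; // looking away resets the counter
        }
        return watchedMillis >= limitMillis; // true => suggest a break
    }
}
```

In a real app the "eyes on screen" signal could come from the per-frame face results (eye-open probabilities and face orientation), and the limit would be on the order of tens of minutes rather than milliseconds.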

For details about the development guide, visit the HUAWEI Developer official website: https://developer.huawei.com/consumer/en/

HUAWEI Developer Machine Learning Service Developer Guide: https://developer.huawei.com/consumer/en/hms/huawei-mlkit
