<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vivi-clevercoder</title>
    <description>The latest articles on DEV Community by Vivi-clevercoder (@viviclevercoder).</description>
    <link>https://dev.to/viviclevercoder</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F546772%2Fe07f4807-c1e9-4873-9278-3d224a51ddd2.jpg</url>
      <title>DEV Community: Vivi-clevercoder</title>
      <link>https://dev.to/viviclevercoder</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/viviclevercoder"/>
    <language>en</language>
    <item>
      <title>How a Programmer Developed a Text Reader App for His 80-Year-Old Grandpa</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Fri, 06 Aug 2021 10:29:24 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/how-a-programmer-developed-a-text-reader-app-for-his-80-year-old-grandpa-k4g</link>
      <guid>https://dev.to/viviclevercoder/how-a-programmer-developed-a-text-reader-app-for-his-80-year-old-grandpa-k4g</guid>
      <description>&lt;p&gt;"John, have you seen my glasses?"&lt;/p&gt;

&lt;p&gt;Our old friend John, a programmer at Huawei, has a grandpa who, despite his old age, is an avid reader. "Leaning back, struggling to make out what was written in the newspaper through his glasses, but unable to take his eyes off the text — that was how my grandpa used to read," John explained.&lt;/p&gt;

&lt;p&gt;Reading this way was harmful to his grandpa's vision, and it occurred to John that the ears could take over the role of "reading" from the eyes. He soon developed a text-reading app that follows this logic, first recognizing and then reading out text from a picture. Thanks to this app, John's grandpa can now "read" from the comfort of his rocking chair, without having to strain his eyes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Implement&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user takes a picture of a text passage. The app automatically locates the text within the picture, and corrects the image so that the text appears as if photographed head-on.&lt;/li&gt;
&lt;li&gt;The app recognizes and extracts the text from the picture.&lt;/li&gt;
&lt;li&gt;The app converts the recognized text into audio output by leveraging text-to-speech technology.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These functions are easy to implement with three services in HUAWEI ML Kit: document skew correction, text recognition, and text to speech (TTS).&lt;/p&gt;
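&lt;p&gt;The three steps form a simple pipeline: each stage hands its output to the next. The sketch below uses hypothetical stub methods (correctSkew, recognizeText, speak) purely to show the data flow; it is not the ML Kit API, which is integrated step by step in the sections that follow.&lt;/p&gt;

```java
// Hypothetical sketch of the read-aloud flow: photo, deskewed image,
// recognized text, speech. Every method here is a placeholder for the
// corresponding ML Kit service, not a real API call.
public class ReadAloudPipeline {

    // Stage 1: document skew correction (stub: returns the input unchanged).
    static byte[] correctSkew(byte[] photo) {
        return photo;
    }

    // Stage 2: text recognition (stub: pretends the image bytes are UTF-8 text).
    static String recognizeText(byte[] image) {
        return new String(image, java.nio.charset.StandardCharsets.UTF_8);
    }

    // Stage 3: text to speech (stub: returns the text it would read out).
    static String speak(String text) {
        return "reading aloud: " + text;
    }

    public static String readAloud(byte[] photo) {
        return speak(recognizeText(correctSkew(photo)));
    }

    public static void main(String[] args) {
        byte[] photo = "Good morning!".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        System.out.println(readAloud(photo));
    }
}
```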

&lt;p&gt;&lt;strong&gt;Preparations&lt;/strong&gt;&lt;br&gt;
Configure the Maven repository address for the HMS Core SDK in the project-level build.gradle file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buildscript {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        classpath "com.android.tools.build:gradle:4.1.1"
        classpath 'com.huawei.agconnect:agcp:1.4.2.300'
        // NOTE: Do not place your app dependencies here; they belong
        // in the individual module build.gradle files.
    }
}
allprojects {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Add the build dependencies for the HMS Core SDK.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dependencies {

    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-voice-tts:2.1.0.300'
    // Import the bee voice package.
    implementation 'com.huawei.hms:ml-computer-voice-tts-model-bee:2.1.0.300'
    // Import the eagle voice package.
    implementation 'com.huawei.hms:ml-computer-voice-tts-model-eagle:2.1.0.300'
    // Import a PDF file analyzer.
    implementation 'com.itextpdf:itextg:5.5.10'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tap PREVIOUS or NEXT to turn to the previous or next page. Tap speak to start reading; tap it again to pause reading.&lt;/p&gt;
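&lt;p&gt;The PREVIOUS and NEXT buttons boil down to moving a page index while keeping it inside the document. The helper below is a hypothetical illustration; the sample app's actual paging code is not shown in this article.&lt;/p&gt;

```java
public class PageNav {

    // Clamp a requested page index into the valid range 0 .. pageCount - 1.
    // Hypothetical helper for the PREVIOUS/NEXT buttons.
    static int clampPage(int requested, int pageCount) {
        return Math.max(0, Math.min(requested, pageCount - 1));
    }

    public static void main(String[] args) {
        int current = 0;
        current = clampPage(current - 1, 10);  // PREVIOUS on the first page stays at page 0
        System.out.println(current);           // 0
        current = clampPage(current + 1, 10);  // NEXT moves to page 1
        System.out.println(current);           // 1
    }
}
```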

&lt;p&gt;&lt;strong&gt;Development process&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a TTS engine by using the custom configuration class MLTtsConfig. Here, on-device TTS is used as an example.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private void initTts() {
    // Set authentication information for your app to download the model package from the server of Huawei.
    MLApplication.getInstance().setApiKey(AGConnectServicesConfig.
            fromContext(getApplicationContext()).getString("client/api_key"));
    // Create a TTS engine by using MLTtsConfig.
    mlTtsConfigs = new MLTtsConfig()
            // Set the language of the input text to English.
            .setLanguage(MLTtsConstants.TTS_EN_US)
            // Set the speaker with the English male voice (eagle).
            .setPerson(MLTtsConstants.TTS_SPEAKER_OFFLINE_EN_US_MALE_EAGLE)
            // Set the speech speed whose range is (0, 5.0]. 1.0 indicates a normal speed.
            .setSpeed(0.8f)
            // Set the volume whose range is (0, 2). 1.0 indicates a normal volume.
            .setVolume(1.0f)
            // Set the TTS mode to on-device.
            .setSynthesizeMode(MLTtsConstants.TTS_OFFLINE_MODE);
    mlTtsEngine = new MLTtsEngine(mlTtsConfigs);
    // Update the configuration when the engine is running.
    mlTtsEngine.updateConfig(mlTtsConfigs);
    // Pass the TTS callback function to the TTS engine to perform TTS.
    mlTtsEngine.setTtsCallback(callback);
    // Create an on-device TTS model manager.
    manager = MLLocalModelManager.getInstance();
    isPlay = false;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a TTS callback function for processing the TTS result.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MLTtsCallback callback = new MLTtsCallback() {
    @Override   
    public void onError(String taskId, MLTtsError err) {
        // Processing logic for TTS failure.
    }
    @Override
    public void onWarn(String taskId, MLTtsWarn warn) {
        // Handle warnings that do not affect the service logic.
    }
    @Override
    // Return the mapping between the currently played segment and text. start: start position of the audio segment in the input text; end (excluded): end position of the audio segment in the input text.
    public void onRangeStart(String taskId, int start, int end) {
        // Process the mapping between the currently played segment and text.
    }
    @Override
    // taskId: ID of a TTS task corresponding to the audio.
    // audioFragment: audio data.
    // offset: offset of the audio segment to be transmitted in the queue. One TTS task corresponds to a TTS queue.
    // range: text area where the audio segment to be transmitted is located; range.first (included): start position; range.second (excluded): end position.
    public void onAudioAvailable(String taskId, MLTtsAudioFragment audioFragment, int offset,
                                 Pair&amp;lt;Integer, Integer&amp;gt; range, Bundle bundle) {
        // Audio stream callback API, which is used to return the synthesized audio data to the app.
    }
    @Override
    public void onEvent(String taskId, int eventId, Bundle bundle) {
        // Callback method of a TTS event. eventId indicates the event name.
        boolean isInterrupted;
        switch (eventId) {
            case MLTtsConstants.EVENT_PLAY_START:
                // Called when playback starts.
                break;
            case MLTtsConstants.EVENT_PLAY_STOP:
                // Called when playback stops.
                isInterrupted = bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED);
                break;
            case MLTtsConstants.EVENT_PLAY_RESUME:
                // Called when playback resumes.
                break;
            case MLTtsConstants.EVENT_PLAY_PAUSE:
                // Called when playback pauses.
                break;
            // Pay attention to the following callback events when you focus on only the synthesized audio data but do not use the internal player for playback.
            case MLTtsConstants.EVENT_SYNTHESIS_START:
                // Called when TTS starts.
                break;
            case MLTtsConstants.EVENT_SYNTHESIS_END:
                // Called when TTS ends.
                break;
            case MLTtsConstants.EVENT_SYNTHESIS_COMPLETE:
                // TTS is complete. All synthesized audio streams are passed to the app.
                isInterrupted = bundle.getBoolean(MLTtsConstants.EVENT_SYNTHESIS_INTERRUPTED);
                break;
            default:
                break;
        }
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Extract text from a PDF file.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private String loadText(String path) {
    String result = "";
    try {
        PdfReader reader = new PdfReader(path);
        result = result.concat(PdfTextExtractor.getTextFromPage(reader,
                mCurrentPage.getIndex() + 1).trim() + System.lineSeparator());
        reader.close();
    } catch (IOException e) {
        showToast(e.getMessage());
    }
    // Obtain the position of the header (the first line break).
    int header = result.indexOf(System.lineSeparator());
    // Obtain the position of the footer (the last line break).
    int footer = result.lastIndexOf(System.lineSeparator());
    if (header != -1 &amp;amp;&amp;amp; footer &amp;gt; header + 5) {
        // Skip the header and footer text. The fixed 5-character offset
        // trims the page number in this sample's footer.
        return result.substring(header, footer - 5);
    } else {
        return result;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
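&lt;p&gt;The header/footer trimming above is easier to see on a plain string. The standalone sketch below is a simplified version of the same idea: it drops the whole first and last lines instead of the sample's fixed 5-character page-number offset.&lt;/p&gt;

```java
public class PageTrim {

    // Drop the first line (header) and the last line (footer) of a page.
    // Simplified from loadText above; the sample additionally trims a fixed
    // 5-character page-number suffix that is specific to its PDF.
    static String stripHeaderAndFooter(String page) {
        String sep = "\n";
        int header = page.indexOf(sep);
        int footer = page.lastIndexOf(sep);
        if (header == -1) {
            return page;  // a single line: nothing to strip
        }
        if (footer == header) {
            return page;  // only two lines: keep the page as-is
        }
        return page.substring(header + sep.length(), footer);
    }

    public static void main(String[] args) {
        String page = "Chapter 1\nOnce upon a time...\nPage 12";
        System.out.println(stripHeaderAndFooter(page));  // Once upon a time...
    }
}
```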



&lt;ol&gt;
&lt;li&gt;Perform TTS in on-device mode.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create an MLTtsLocalModel instance to set the speaker so that the language model corresponding to the speaker can be downloaded through the model manager.
MLTtsLocalModel model = new MLTtsLocalModel.Factory(MLTtsConstants.TTS_SPEAKER_OFFLINE_EN_US_MALE_EAGLE).create();
manager.isModelExist(model).addOnSuccessListener(new OnSuccessListener&amp;lt;Boolean&amp;gt;() {
    @Override
    public void onSuccess(Boolean aBoolean) {
        // If the model is not downloaded, call the download API. Otherwise, call the TTS API of the on-device engine.
        if (aBoolean) {
            String source = loadText(mPdfPath);
            // Call the speak API to perform TTS. source indicates the text to be synthesized.
            mlTtsEngine.speak(source, MLTtsEngine.QUEUE_APPEND);
            if (isPlay){
                // Pause playback.
                mlTtsEngine.pause();
                tv_speak.setText("speak");
            }else {
                // Resume playback.
                mlTtsEngine.resume();
                tv_speak.setText("pause");
            }
            isPlay = !isPlay;
        } else {
            // Call the API for downloading the on-device TTS model.
            downloadModel(MLTtsConstants.TTS_SPEAKER_OFFLINE_EN_US_MALE_EAGLE);
            showToast("The offline model has not been downloaded!");
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        showToast(e.getMessage());
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Release resources when the current UI is destroyed.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Override
protected void onDestroy() {
    super.onDestroy();
    try {
        if (mParcelFileDescriptor != null) {
            mParcelFileDescriptor.close();
        }
        if (mCurrentPage != null) {
            mCurrentPage.close();
        }
        if (mPdfRenderer != null) {
            mPdfRenderer.close();
        }
        if (mlTtsEngine != null){
            mlTtsEngine.shutdown();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Other Applicable Scenarios&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;TTS can be used across a broad range of scenarios. For example, you could integrate it into an education app that reads bedtime stories to children, or into a navigation app that reads out driving instructions.&lt;/p&gt;

&lt;p&gt;To learn more, visit the following links:&lt;br&gt;
&lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-locationkit"&gt;Documentation on the HUAWEI Developers website&lt;/a&gt;&lt;br&gt;
&lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-MapKit"&gt;https://developer.huawei.com/consumer/en/hms/huawei-MapKit&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms?ha_source=hms1"&gt;HUAWEI Developers official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development?ha_source=hms1"&gt;Development Guide&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/r/HMSCore/"&gt;Reddit&lt;/a&gt;to join developer discussions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/HMS-Core"&gt;GitHub&lt;/a&gt; or Gitee to download the &lt;a href="https://github.com/HMS-Core"&gt;demo&lt;/a&gt; and sample code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How a Programmer Used 300 Lines of Code to Help His Grandma Shop Online with Voice Input</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Fri, 30 Jul 2021 08:38:24 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/how-a-programmer-used-300-lines-of-code-to-help-his-grandma-shop-online-with-voice-input-2446</link>
      <guid>https://dev.to/viviclevercoder/how-a-programmer-used-300-lines-of-code-to-help-his-grandma-shop-online-with-voice-input-2446</guid>
      <description>&lt;p&gt;"John, why the writing pad is missing again?"&lt;/p&gt;

&lt;p&gt;John, a programmer at Huawei, has a grandma who loves novelty, and lately she's been obsessed with online shopping. Familiarizing herself with major shopping apps and their functions proved to be a piece of cake, and she had thought that her online shopping experience would be effortless — unfortunately, she was hindered by product searching.&lt;/p&gt;

&lt;p&gt;John's grandma tended to use handwriting input. When using it, she would often make mistakes, like switching to another input method she found unfamiliar, or tapping on undesired characters or signs. Not only shopping apps: most mobile apps feature interface designs oriented toward younger users, so it's no wonder that elderly users often struggle to figure out how to use them.&lt;/p&gt;

&lt;p&gt;John patiently helped his grandma search for products with handwriting input several times. But then, he decided to use his skills as a veteran coder to give his grandma the best possible online shopping experience. More specifically, instead of helping her adjust to the available input method, he was determined to create an input method that would conform to her usage habits.&lt;/p&gt;

&lt;p&gt;Since his grandma tended to err during manual input, John developed an input method that converts speech into text. Grandma was enthusiastic about the new method, because it is remarkably easy to use. All she has to do is tap the recording button and say the product's name. The input method then recognizes what she has said, and converts her speech into text.&lt;/p&gt;

&lt;p&gt;Real-time speech recognition and speech to text are ideal for a broad range of apps, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Game apps (online): Real-time speech recognition comes to users' aid when they team up with others. It frees up users' hands for controlling the action, sparing them from having to type to communicate with their partners. It can also free users from any potential embarrassment related to voice chatting during gaming.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Work apps: Speech to text can play a vital role during long conferences, where typing to keep meeting minutes can be tedious and inefficient, with key details being missed. Using speech to text is much more efficient: during a conference, users can use this service to convert audio content into text; after the conference, they can simply retouch the text to make it more logical.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learning apps: Speech to text can offer users an enhanced learning experience. Without the service, users often have to pause audio materials to take notes, resulting in a fragmented learning process. With speech to text, users can concentrate on listening intently to the material while it is being played, and rely on the service to convert the audio content into text. They can then review the text after finishing the entire course, to ensure that they've mastered the content.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How to Implement&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Two services in HUAWEI ML Kit: automatic speech recognition (ASR) and audio file transcription, make it easy to implement the above functions.&lt;/p&gt;

&lt;p&gt;ASR can recognize up to 60 seconds of speech, and convert the input speech into text in real time, with recognition accuracy of over 95%. It currently supports Mandarin Chinese (including Chinese-English bilingual speech), English, French, German, Spanish, Italian, and Arabic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time result output&lt;/li&gt;
&lt;li&gt;Available options: with and without speech pickup UI&lt;/li&gt;
&lt;li&gt;Endpoint detection: start and end points can be accurately located&lt;/li&gt;
&lt;li&gt;Silence detection: no voice packet is sent for silent portions&lt;/li&gt;
&lt;li&gt;Intelligent conversion to digital formats: for example, the year 2021 is recognized from voice input&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Audio file transcription can convert an audio file of up to five hours into text with punctuation, and automatically segment the text for greater clarity. In addition, this service can generate text with timestamps, facilitating further function development.&lt;/p&gt;

&lt;p&gt;In this version, both Chinese and English are supported.&lt;/p&gt;
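&lt;p&gt;Timestamped output is returned per segment (see getStartTime/getEndTime in the long-audio callback), and an app has to format those offsets itself for display. A minimal hypothetical helper, not part of the ML Kit SDK and assuming offsets in milliseconds, might look like this:&lt;/p&gt;

```java
public class TimestampFormat {

    // Format a millisecond offset as m:ss for transcript display.
    // Hypothetical helper, not part of the ML Kit SDK; it assumes the
    // segment time offsets are given in milliseconds.
    static String mmss(long millis) {
        long totalSeconds = millis / 1000;
        return String.format("%d:%02d", totalSeconds / 60, totalSeconds % 60);
    }

    public static void main(String[] args) {
        System.out.println(mmss(0));      // 0:00
        System.out.println(mmss(83000));  // 1:23
    }
}
```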

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xh5hvh76d5nd2wnjbil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xh5hvh76d5nd2wnjbil.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development Procedures&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Preparations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;（1） Configure the Huawei Maven repository address, and put the agconnect-services.json file under the app directory.&lt;br&gt;
Open the build.gradle file in the root directory of your Android Studio project, then add the AppGallery Connect plugin and the Maven repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to allprojects &amp;gt; repositories and configure the Maven repository address for the HMS Core SDK.&lt;/li&gt;
&lt;li&gt;Go to buildscript &amp;gt; repositories and configure the Maven repository address for the HMS Core SDK.&lt;/li&gt;
&lt;li&gt;If the agconnect-services.json file has been added to the app, go to buildscript &amp;gt; dependencies and add the AppGallery Connect plugin configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.5.4'
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
        // NOTE: Do not place your app dependencies here; they belong
        // in the individual module build.gradle files.
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Set the app authentication information. For details, see Notes on Using Cloud Authentication Information.&lt;/p&gt;

&lt;p&gt;（2） Add the build dependencies for the HMS Core SDK.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dependencies {
    //The audio file transcription SDK.
    implementation 'com.huawei.hms:ml-computer-voice-aft:2.2.0.300'
    // The ASR SDK.
    implementation 'com.huawei.hms:ml-computer-voice-asr:2.2.0.300'
    // Plugin of ASR.
    implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.2.0.300'
    ...
}
apply plugin: 'com.huawei.agconnect'  // AppGallery Connect plugin.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;（3） Configure the signing certificate in the build.gradle file under the app directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;signingConfigs {
    release {
        storeFile file("xxx.jks")
        keyAlias xxx
        keyPassword xxxxxx
        storePassword xxxxxx
        v1SigningEnabled true
        v2SigningEnabled true
    }

}

buildTypes {
    release {
        minifyEnabled false
        proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
    }

    debug {
        signingConfig signingConfigs.release
        debuggable true
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;（4） Add permissions in the AndroidManifest.xml file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;uses-permission android:name="android.permission.INTERNET" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.ACCESS_WIFI_STATE" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&amp;gt;

&amp;lt;application
    android:requestLegacyExternalStorage="true"
  ...
&amp;lt;/application&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Integrating the ASR Service&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;（1） Dynamically apply for the permissions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (ActivityCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
    requestCameraPermission();
}

private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.RECORD_AUDIO};
    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.RECORD_AUDIO)) {
        ActivityCompat.requestPermissions(this, permissions, Constants.AUDIO_PERMISSION_CODE);
        return;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;（2） Create an Intent to set parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Set authentication information for your app.
MLApplication.getInstance().setApiKey(AGConnectServicesConfig.fromContext(this).getString("client/api_key"));
// Use Intent for recognition parameter settings.
Intent intentPlugin = new Intent(this, MLAsrCaptureActivity.class)
        // Set the language that can be recognized to English. If this parameter is not set, English is recognized by default. Example: "zh-CN": Chinese; "en-US": English.
        .putExtra(MLAsrCaptureConstants.LANGUAGE, MLAsrConstants.LAN_EN_US)
        // Set whether to display the recognition result on the speech pickup UI.
        .putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX);
// The request code (1 here) must match the one checked in onActivityResult.
startActivityForResult(intentPlugin, 1);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;（3） Override the onActivityResult method to process the result returned by ASR.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    String text = "";
    if (data == null) {
        addTagItem("Intent data is null.", true);
        return;
    }
    if (requestCode == 1) {
        Bundle bundle = data.getExtras();
        if (bundle == null) {
            return;
        }
        switch (resultCode) {
            case MLAsrCaptureConstants.ASR_SUCCESS:
                // Obtain the text information recognized from speech.
                if (bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
                    text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT);
                }
                if (text == null || "".equals(text)) {
                    text = "Result is null.";
                    Log.e(TAG, text);
                } else {
                    // Display the recognition result in the search box.
                    searchEdit.setText(text);
                    goSearch(text, true);
                }
                break;
            // MLAsrCaptureConstants.ASR_FAILURE: Recognition fails.
            case MLAsrCaptureConstants.ASR_FAILURE:
                // Check whether an error code is contained.
                if (bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_CODE)) {
                    text = text + bundle.getInt(MLAsrCaptureConstants.ASR_ERROR_CODE);
                    // Troubleshoot based on the error code.
                }
                // Check whether error information is contained.
                if (bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_MESSAGE)) {
                    String errorMsg = bundle.getString(MLAsrCaptureConstants.ASR_ERROR_MESSAGE);
                    // Troubleshoot based on the error information.
                    if (errorMsg != null &amp;amp;&amp;amp; !"".equals(errorMsg)) {
                        text = "[" + text + "]" + errorMsg;
                    }
                }
                // Check whether a sub-error code is contained.
                if (bundle.containsKey(MLAsrCaptureConstants.ASR_SUB_ERROR_CODE)) {
                    int subErrorCode = bundle.getInt(MLAsrCaptureConstants.ASR_SUB_ERROR_CODE);
                    // Troubleshoot based on the sub-error code.
                    text = "[" + text + "]" + subErrorCode;
                }
                Log.e(TAG, text);
                break;
            default:
                break;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Integrating the Audio File Transcription Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;（1） Dynamically apply for the permissions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private static final int REQUEST_EXTERNAL_STORAGE = 1;
private static final String[] PERMISSIONS_STORAGE = {
        Manifest.permission.READ_EXTERNAL_STORAGE,
        Manifest.permission.WRITE_EXTERNAL_STORAGE };
public static void verifyStoragePermissions(Activity activity) {
    // Check if the write permission has been granted.
    int permission = ActivityCompat.checkSelfPermission(activity,
            Manifest.permission.WRITE_EXTERNAL_STORAGE);
    if (permission != PackageManager.PERMISSION_GRANTED) {
        // The permission has not been granted. Prompt the user to grant it.
        ActivityCompat.requestPermissions(activity, PERMISSIONS_STORAGE,
                REQUEST_EXTERNAL_STORAGE);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;（2）   Create and initialize an audio transcription engine, and create an audio file transcription configurator.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Set the API key.
MLApplication.getInstance().setApiKey(AGConnectServicesConfig.fromContext(getApplication()).getString("client/api_key"));
MLRemoteAftSetting setting = new MLRemoteAftSetting.Factory()
        // Set the transcription language code, complying with the BCP 47 standard. Currently, Mandarin Chinese and English are supported.
        .setLanguageCode("zh")
        // Set whether to automatically add punctuations to the converted text. The default value is false.
        .enablePunctuation(true)
        // Set whether to generate the text transcription result of each audio segment and the corresponding audio time shift. The default value is false. (This parameter needs to be set only when the audio duration is less than 1 minute.)
        .enableWordTimeOffset(true)
        // Set whether to output the time shift of a sentence in the audio file. The default value is false.
        .enableSentenceTimeOffset(true)
        .create();

// Create an audio transcription engine.
MLRemoteAftEngine engine = MLRemoteAftEngine.getInstance();
engine.init(this);
// Pass the listener callback to the audio transcription engine created beforehand.
engine.setAftListener(aftListener);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;（3）   Create a listener callback to process the audio file transcription result.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; Transcription of short audio files with a duration of 1 minute or shorter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private MLRemoteAftListener aftListener = new MLRemoteAftListener() {
    public void onResult(String taskId, MLRemoteAftResult result, Object ext) {
        // Obtain the transcription result notification.
        if (result.isComplete()) {
            // Process the transcription result.
        }
    }
    @Override
    public void onError(String taskId, int errorCode, String message) {
        // Callback upon a transcription error.
    }
    @Override
    public void onInitComplete(String taskId, Object ext) {
        // Reserved.
    }
    @Override
    public void onUploadProgress(String taskId, double progress, Object ext) {
        // Reserved.
    }
    @Override
    public void onEvent(String taskId, int eventId, Object ext) {
        // Reserved.
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; Transcription of audio files with a duration longer than 1 minute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private MLRemoteAftListener asrListener = new MLRemoteAftListener() {
    @Override
    public void onInitComplete(String taskId, Object ext) {
        Log.e(TAG, "MLAsrCallBack onInitComplete");
        // The long audio file is initialized and the transcription starts.
        start(taskId);
    }
    @Override
    public void onUploadProgress(String taskId, double progress, Object ext) {
        Log.e(TAG, " MLAsrCallBack onUploadProgress");
    }
    @Override
    public void onEvent(String taskId, int eventId, Object ext) {
        // Used for the long audio file.
        Log.e(TAG, "MLAsrCallBack onEvent" + eventId);
        if (MLAftEvents.UPLOADED_EVENT == eventId) { // The file is uploaded successfully.
            // Obtain the transcription result.
            startQueryResult(taskId);
        }
    }
    @Override
    public void onResult(String taskId, MLRemoteAftResult result, Object ext) {
        Log.e(TAG, "MLAsrCallBack onResult taskId is :" + taskId + " ");
        if (result != null) {
            Log.e(TAG, "MLAsrCallBack onResult isComplete: " + result.isComplete());
            if (result.isComplete()) {
                TimerTask timerTask = timerTaskMap.get(taskId);
                if (null != timerTask) {
                    timerTask.cancel();
                    timerTaskMap.remove(taskId);
                }
                if (result.getText() != null) {
                    Log.e(TAG, taskId + " MLAsrCallBack onResult result is : " + result.getText());
                    tvText.setText(result.getText());
                }
                List&amp;lt;MLRemoteAftResult.Segment&amp;gt; words = result.getWords();
                if (words != null &amp;amp;&amp;amp; words.size() != 0) {
                    for (MLRemoteAftResult.Segment word : words) {
                        Log.e(TAG, "MLAsrCallBack word  text is : " + word.getText() + ", startTime is : " + word.getStartTime() + ". endTime is : " + word.getEndTime());
                    }
                }
                List&amp;lt;MLRemoteAftResult.Segment&amp;gt; sentences = result.getSentences();
                if (sentences != null &amp;amp;&amp;amp; sentences.size() != 0) {
                    for (MLRemoteAftResult.Segment sentence : sentences) {
                        Log.e(TAG, "MLAsrCallBack sentence  text is : " + sentence.getText() + ", startTime is : " + sentence.getStartTime() + ". endTime is : " + sentence.getEndTime());
                    }
                }
            }
        }
    }
    @Override
    public void onError(String taskId, int errorCode, String message) {
        Log.i(TAG, "MLAsrCallBack onError: " + message + ", errorCode: " + errorCode);
        switch (errorCode) {
            case MLAftErrors.ERR_AUDIO_FILE_NOTSUPPORTED:
                // The audio file format is not supported.
                break;
        }
    }
};
// Start the transcription task.
private void start(String taskId) {
    Log.e(TAG, "start");
    engine.setAftListener(asrListener);
    engine.startTask(taskId);
}
// Obtain the transcription result.
private Map&amp;lt;String, TimerTask&amp;gt; timerTaskMap = new HashMap&amp;lt;&amp;gt;();
private void startQueryResult(final String taskId) {
    Timer mTimer = new Timer();
    TimerTask mTimerTask = new TimerTask() {
        @Override
        public void run() {
            getResult(taskId);
        }
    };
    // Query the long audio transcription result every 10s, after an initial 5s delay.
    mTimer.schedule(mTimerTask, 5000, 10000);
    // Clear timerTaskMap before destroying the UI.
    timerTaskMap.put(taskId, mTimerTask);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
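A side note on the polling code above: java.util.Timer runs all tasks on a single thread, and an uncaught exception in any task kills the timer entirely, so a ScheduledExecutorService is often the more robust choice on modern Android. Below is a minimal, SDK-free sketch of the same query-every-10-seconds pattern; the 5s initial delay and 10s period mirror the mTimer.schedule call above, parameterized in milliseconds for testability.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// SDK-free sketch of the result-polling pattern: run getResult periodically
// until the transcription completes, then shut the scheduler down.
public class ResultPoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Equivalent of mTimer.schedule(mTimerTask, 5000, 10000):
    // an initial delay, then a fixed period, both in milliseconds.
    public ScheduledFuture startQueryResult(Runnable getResult,
                                            long initialDelayMs, long periodMs) {
        return scheduler.scheduleAtFixedRate(getResult, initialDelayMs, periodMs,
                TimeUnit.MILLISECONDS);
    }

    // Call when the result is complete or the UI is destroyed.
    public void shutdown() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        ResultPoller poller = new ResultPoller();
        poller.startQueryResult(() -> System.out.println("getResult(taskId)"), 0, 50);
        Thread.sleep(200);  // let the task fire a few times
        poller.shutdown();
    }
}
```

Unlike a Timer, a scheduler shared by several tasks keeps running even if one task throws.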



&lt;p&gt;&lt;strong&gt;(4) Obtain an audio file and upload it to the audio transcription engine.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Obtain the URI of an audio file.
Uri uri = getFileUri();
// Obtain the audio duration.
Long audioTime = getAudioFileTimeFromUri(uri);
// Choose the transcription API based on whether the duration exceeds 60s (60,000 ms).
if (audioTime &amp;lt; 60000) {
    // uri indicates audio resources read from the local storage or recorder. Only local audio files with a duration not longer than 1 minute are supported.
    this.taskId = this.engine.shortRecognize(uri, this.setting);
    Log.i(TAG, "Short audio transcription.");
} else {
    // longRecognize is an API used to convert audio files with a duration from 1 minute to 5 hours.
    this.taskId = this.engine.longRecognize(uri, this.setting);
    Log.i(TAG, "Long audio transcription.");
}

private Long getAudioFileTimeFromUri(Uri uri) {
    Long time = null;
    Cursor cursor = this.getContentResolver()
            .query(uri, null, null, null, null);
    if (cursor != null) {
        if (cursor.moveToFirst()) {
            time = cursor.getLong(cursor.getColumnIndexOrThrow(MediaStore.Video.Media.DURATION));
        }
        cursor.close();
    } else {
        MediaPlayer mediaPlayer = new MediaPlayer();
        try {
            mediaPlayer.setDataSource(String.valueOf(uri));
            mediaPlayer.prepare();
            time = Long.valueOf(mediaPlayer.getDuration());
        } catch (IOException e) {
            Log.e(TAG, "Failed to read the audio duration.");
        } finally {
            mediaPlayer.release();
        }
    }
    return time;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To learn more, visit the following links:&lt;br&gt;
&lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-locationkit" rel="noopener noreferrer"&gt;Documentation on the HUAWEI Developers website&lt;/a&gt;&lt;br&gt;
&lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-MapKit" rel="noopener noreferrer"&gt;https://developer.huawei.com/consumer/en/hms/huawei-MapKit&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms?ha_source=hms1" rel="noopener noreferrer"&gt;HUAWEI Developers official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development?ha_source=hms1" rel="noopener noreferrer"&gt;Development Guide&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/r/HMSCore/" rel="noopener noreferrer"&gt;Reddit&lt;/a&gt;to join developer discussions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/HMS-Core" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; or Gitee to download the &lt;a href="https://github.com/HMS-Core" rel="noopener noreferrer"&gt;demo&lt;/a&gt; and sample code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services" rel="noopener noreferrer"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

</description>
      <category>tutorial</category>
    </item>
    <item>
      <title>
Building High-Precision Location Services with Location Kit
</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Fri, 30 Jul 2021 06:43:44 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/building-high-precision-location-services-with-location-kit-41pj</link>
      <guid>https://dev.to/viviclevercoder/building-high-precision-location-services-with-location-kit-41pj</guid>
      <description>&lt;p&gt;HUAWEI Location Kit provides you with the tools to build ultra-precise location services into your apps, by utilizing GNSS, Wi-Fi, base stations, and a range of cutting-edge hybrid positioning technologies. Location Kit-supported solutions give your apps a leg up in a ruthlessly competitive marketplace, making it easier than ever for you to serve a vast, global user base. &lt;/p&gt;

&lt;p&gt;Location Kit currently offers three main functions: fused location, geofence, and activity identification. When used in conjunction with the Map SDK, which is supported in 200+ countries and regions and 100+ languages, you'll be able to bolster your apps with premium mapping services that enjoy a truly global reach.&lt;/p&gt;

&lt;p&gt;Fused location provides easy-to-use APIs that are capable of obtaining the user's location with meticulous accuracy, while consuming a minimal amount of power. HW NLP, Huawei's exclusive network location service, makes use of crowdsourced data to achieve heightened accuracy. Such high-precision, cost-effective positioning has enormous implications for a broad array of mobile services, including ride hailing, navigation, food delivery, travel, and lifestyle services, providing customers and service providers alike with the high-value, real-time information that they need. &lt;/p&gt;

&lt;p&gt;To avoid boring you with the technical details, we've provided some specific examples of how positioning systems, geofence, activity identification, map display and route planning services can be applied in the real world.&lt;/p&gt;

&lt;p&gt;For instance, you can use Location Kit to obtain the user's current location and create a 500-meter geofence radius around it, which can be used to determine the user's activity status when the geofence is triggered, then automatically plan a route based on this activity status (for example, plan a walking route when the activity is identified as walking), and have it shown on the map.&lt;/p&gt;
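Location Kit evaluates geofence triggers for you (on the chipset, as noted in the advantages below), but as a rough, SDK-free illustration of the underlying idea: deciding whether a point lies inside a 500-meter circular geofence is just a great-circle distance comparison.

```java
// SDK-free sketch: is a point inside a circular geofence?
// Uses the haversine great-circle distance on a spherical Earth model.
public class GeofenceCheck {
    static final double EARTH_RADIUS_M = 6371000.0;

    // Distance between two lat/lng points, in meters.
    static double distanceMeters(double lat1, double lng1, double lat2, double lng2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLng = Math.toRadians(lng2 - lng1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLng / 2) * Math.sin(dLng / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    static boolean insideGeofence(double centerLat, double centerLng,
                                  double lat, double lng, double radiusMeters) {
        return radiusMeters >= distanceMeters(centerLat, centerLng, lat, lng);
    }

    public static void main(String[] args) {
        // A point about 111 m north of the center is inside a 500 m geofence.
        System.out.println(insideGeofence(48.0, 2.0, 48.001, 2.0, 500));  // true
    }
}
```

The real service adds what this sketch lacks: enter/exit/dwell conversions, low-power monitoring, and positioning itself.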

&lt;p&gt;&lt;em&gt;This article addresses the following functions:&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Fused location: Incorporates GNSS, Wi-Fi, and base station data via easy-to-use APIs, making it easy for your app to obtain device location information.&lt;/li&gt;
&lt;li&gt; Activity identification: Identifies the user's motion status, using the acceleration sensor, network information, and magnetometer, so that you can tailor your app to account for the user's behavior.&lt;/li&gt;
&lt;li&gt; Geofence: Allows you to set virtual geographic boundaries via APIs, to send out timely notifications when users enter, exit, or remain within the boundaries.&lt;/li&gt;
&lt;li&gt; Map display: Includes the map display, interactive features, map drawing, custom map styles, and a range of other features.&lt;/li&gt;
&lt;li&gt; Route planning: Provides HTTPS APIs for initiating route requests and obtaining the returned data in JSON format.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Usage scenarios:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Using high-precision positioning technology to obtain real time location and tracking data for delivery or logistics personnel, for optimally efficient services. In the event of accidents or emergencies, the location of personnel could also be obtained with ease, to ensure their quick rescue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creating a geofence in the system, which can be used to monitor an important or dangerous area at all times. If someone enters such an area without authorization, the system could send out a proactive alert. This solution can also be linked with onsite video surveillance equipment. When an alert is triggered, the video surveillance camera could pop up to provide continual monitoring, free of any blind spots. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tracking patients with special needs in hospitals and elderly residents in nursing homes, in order to provide them with the best possible care. Positioning services could be linked with wearable devices, for attentive 24/7 care in real time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using the map to directly find destinations, and perform automatic route planning.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;I.    Advantages of Location Kit and Map Kit&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Low-power consumption (Location Kit): Implements geofence using the chipset, for optimized power efficiency&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;High precision (Location Kit): Optimizes positioning accuracy in urban canyons, correctly identifying the roadside of the user. Sub-meter positioning accuracy in open areas, with RTK (Real-time kinematic) technology support. Personal information, activity identification, and other data are not uploaded to the server while location services are performed. As the data processor, Location Kit only uses data, and does not store it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Personalized map displays (Map Kit): Offers enriching map elements and a wide range of interactive methods for building your map.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Broad-ranging place searches (Map Kit): Covers 130+ million POIs and 150+ million addresses, and supports place input prompts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Global coverage: Supports 200+ countries/regions, and 40+ languages.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For more information and development guides, please visit the &lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-MapKit"&gt;Map Kit documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;II.   Demo App Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In order to illustrate how to integrate Location Kit and Map Kit both easily and efficiently, we've provided a case study here, which shows the simplest coding method for running the demo.&lt;br&gt;
This app creates a geofence on the map based on the user's location when the app is opened. The user can drag the red marker to set a destination. Once the destination is confirmed and the user triggers the geofence condition, the app automatically detects their activity status and plans a route accordingly, such as a walking route if the activity status is walking, or a cycling route if it is cycling. You can also implement real-time voice navigation for the planned route. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;III.  Development Practice&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;You need to set the priority (which is 100 by default) before requesting locations. To request the precise GPS location, set the priority to 100. To request the network location, set the priority to 102 or 104. If you only need to passively receive locations, set the priority to 105.&lt;/p&gt;

&lt;p&gt;Parameters related to activity identification include VEHICLE (100), BIKE (101), FOOT (102), and STILL (103).&lt;br&gt;
Geofence-related parameters include ENTER_GEOFENCE_CONVERSION (1), EXIT_GEOFENCE_CONVERSION (2), and DWELL_GEOFENCE_CONVERSION (4).&lt;br&gt;
The following describes how to run the demo using source code, helping you understand the implementation details.&lt;/p&gt;
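Before diving in, these numeric constants can be sketched in plain Java (hard-coded from the list above rather than referenced from the SDK, so the snippet runs standalone). Note that combining the three geofence conversion flags gives 1 + 2 + 4 = 7, which is exactly the value passed to setInitConversions(7) later in this article.

```java
// Plain-Java sketch of the activity-identification and geofence constants
// listed above, hard-coded here so the snippet runs without the HMS SDK.
public class LocationConstants {
    // Activity identification types.
    public static final int VEHICLE = 100;
    public static final int BIKE = 101;
    public static final int FOOT = 102;
    public static final int STILL = 103;

    // Geofence conversion types (bit flags).
    public static final int ENTER_GEOFENCE_CONVERSION = 1;
    public static final int EXIT_GEOFENCE_CONVERSION = 2;
    public static final int DWELL_GEOFENCE_CONVERSION = 4;

    // The demo's route choice: map the identified activity to a route mode.
    public static String routeTypeFor(int activity) {
        switch (activity) {
            case FOOT:    return "walking";
            case BIKE:    return "bicycling";
            case VEHICLE: return "driving";
            default:      return "none";
        }
    }

    // Combining all three conversion flags yields 7, the value the demo
    // passes to setInitConversions.
    public static int allConversions() {
        return ENTER_GEOFENCE_CONVERSION | EXIT_GEOFENCE_CONVERSION | DWELL_GEOFENCE_CONVERSION;
    }

    public static void main(String[] args) {
        System.out.println(routeTypeFor(FOOT));  // walking
        System.out.println(allConversions());    // 7
    }
}
```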

&lt;p&gt;&lt;strong&gt;Preparations&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Preparing Tools&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;1)  Huawei phones (testing on multiple devices is recommended)&lt;br&gt;
2)  Android Studio&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt; Registering as a Developer&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;1)  Register as a &lt;a href="https://developer.huawei.com/consumer/en/?ha_source=hms1"&gt;Huawei developer&lt;/a&gt;.&lt;br&gt;
2)  Create an app in AppGallery Connect.&lt;br&gt;
Create an app in AppGallery Connect by referring to &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/config-agc-0000001057629153?ha_source=hms1"&gt;Location Kit development preparations&lt;/a&gt; or &lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-MapKit?ha_source=hms1"&gt;Map Kit development preparations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt; Enable Location Kit and Map Kit for the app on the Manage APIs page.&lt;br&gt;
 Add the SHA-256 certificate fingerprint.&lt;br&gt;
 Download the agconnect-services.json file and add it to the app directory of the project.&lt;br&gt;
3)  Create an Android demo project.&lt;br&gt;
4)  Learn about the function restrictions.&lt;br&gt;
To use the route planning function of Map Kit, refer to &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/supported-countries-and-regions-route-planning-0000001091168530-V5?ha_source=hms1"&gt;Supported Countries/Regions (Route Planning)&lt;/a&gt;.&lt;br&gt;
To use other services of Map Kit, refer to &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/supported-countries-and-regions-0000001050160946-V5?ha_source=hms1"&gt;Supported Countries/Regions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BxQ9a4k1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cvdnfpbudyiey978f8qz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BxQ9a4k1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cvdnfpbudyiey978f8qz.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt; Running the Demo App&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;1)  Install the app on the test device after successfully debugging the project in Android Studio.&lt;br&gt;
2)  Replace the project package name and JSON file with those of your own.&lt;br&gt;
3)  Tap the related button in the demo app to create a geofence with a radius of 200 meters, centered on the current location automatically pinpointed by the demo app.&lt;br&gt;
4)  Drag the mark point on the map to select a destination.&lt;br&gt;
5)  View the route that is automatically planned based on the current activity status when the geofence is triggered. &lt;br&gt;
The following figure shows the demo effect:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mab9BekD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7b4oqpd4daqncmyeq5m0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mab9BekD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7b4oqpd4daqncmyeq5m0.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Add the Huawei Maven repository to the project-level build.gradle file.
Add the following Maven repository address to the project-level build.gradle file of your Android Studio project:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/'}
    }
dependencies {
        ...
        // Add the AppGallery Connect plugin configuration.
        classpath 'com.huawei.agconnect:agcp:1.4.2.300'
    }
}allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/'}
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt; Add dependencies on the SDKs in the app-level build.gradle file.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dependencies {
   implementation 'com.huawei.hms:location:5.1.0.300'
   implementation 'com.huawei.hms:maps:5.2.0.302' }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt; Add the following configuration to the next line under apply plugin: 'com.android.application' in the file header:
apply plugin: 'com.huawei.agconnect'&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note:&lt;br&gt;
• You must configure apply plugin: 'com.huawei.agconnect' under apply plugin: 'com.android.application'.&lt;br&gt;
• The minimum Android API level (minSdkVersion) required for the HMS Core Map SDK is 19.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt; Declare system permissions in the AndroidManifest.xml file.
Location Kit uses GNSS, Wi-Fi, and base station data for fused location, enabling your app to quickly and accurately obtain users' location information. Therefore, Location Kit requires permissions to access Internet, obtain the fine location, and obtain the coarse location. If your app needs to continuously obtain the location information when it runs in the background, you also need to declare the ACCESS_BACKGROUND_LOCATION permission in the AndroidManifest.xml file:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;uses-permission android:name="android.permission.INTERNET" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.ACCESS_WIFI_STATE" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.WAKE_LOCK" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" /&amp;gt;
&amp;lt;uses-permission android:name="com.huawei.hms.permission.ACTIVITY_RECOGNITION" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Note: Because the ACCESS_FINE_LOCATION, WRITE_EXTERNAL_STORAGE, READ_EXTERNAL_STORAGE, and ACTIVITY_RECOGNITION permissions are dangerous system permissions, you need to request them dynamically at runtime. Without these permissions, Location Kit will refuse to provide services for your app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I.  Map Display&lt;br&gt;
Currently, the Map SDK supports two map containers: SupportMapFragment and MapView. This document uses the SupportMapFragment container.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Add a Fragment object in the layout file (for example: activity_main.xml), and set map attributes in the file.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;fragment
    android:id="@+id/mapfragment_routeplanningdemo"
    android:name="com.huawei.hms.maps.SupportMapFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent" /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt; To use a map in your app, implement the OnMapReadyCallback API.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RoutePlanningActivity extends AppCompatActivity implements OnMapReadyCallback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt; Load the SupportMapFragment in the onCreate method and call getMapAsync to register the callback.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fragment fragment = getSupportFragmentManager().findFragmentById(R.id.mapfragment_routeplanningdemo);
if (fragment instanceof SupportMapFragment) {
    SupportMapFragment mSupportMapFragment = (SupportMapFragment) fragment;
    mSupportMapFragment.getMapAsync(this);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="4"&gt;
&lt;li&gt; Call the onMapReady callback to obtain the HuaweiMap object.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Override
public void onMapReady(HuaweiMap huaweiMap) {

    hMap = huaweiMap;
    hMap.setMyLocationEnabled(true);
    hMap.getUiSettings().setMyLocationButtonEnabled(true);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;II.   Function Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Check the permissions.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (Build.VERSION.SDK_INT &amp;lt;= Build.VERSION_CODES.P) {
    if (ActivityCompat.checkSelfPermission(context,
            "com.huawei.hms.permission.ACTIVITY_RECOGNITION") != PackageManager.PERMISSION_GRANTED) {
        String[] permissions = {"com.huawei.hms.permission.ACTIVITY_RECOGNITION"};
        ActivityCompat.requestPermissions((Activity) context, permissions, 1);
        Log.i(TAG, "requestActivityTransitionButtonHandler: apply permission");
    }
} else {
    if (ActivityCompat.checkSelfPermission(context,
            "android.permission.ACTIVITY_RECOGNITION") != PackageManager.PERMISSION_GRANTED) {
        String[] permissions = {"android.permission.ACTIVITY_RECOGNITION"};
        ActivityCompat.requestPermissions((Activity) context, permissions, 2);
        Log.i(TAG, "requestActivityTransitionButtonHandler: apply permission");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt; Check whether the location permissions have been granted. If not, the location cannot be obtained.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;settingsClient.checkLocationSettings(locationSettingsRequest)
        .addOnSuccessListener(locationSettingsResponse -&amp;gt; {
                       fusedLocationProviderClient
                    .requestLocationUpdates(mLocationRequest, mLocationCallback, Looper.getMainLooper())
                    .addOnSuccessListener(aVoid -&amp;gt; {
                        //Processing when the API call is successful.
                    });
        })
        .addOnFailureListener(e -&amp;gt; {});
if (null == mLocationCallbacks) {
    mLocationCallbacks = new LocationCallback() {
        @Override
        public void onLocationResult(LocationResult locationResult) {
            if (locationResult != null) {
                List&amp;lt;HWLocation&amp;gt; locations = locationResult.getHWLocationList();
                if (!locations.isEmpty()) {
                    for (HWLocation location : locations) {
                        hMap.moveCamera(CameraUpdateFactory.newLatLngZoom(new LatLng(location.getLatitude(), location.getLongitude()), 14));
                        latLngOrigin = new LatLng(location.getLatitude(), location.getLongitude());
                        if (null != mMarkerOrigin) {
                            mMarkerOrigin.remove();
                        }
                        MarkerOptions options = new MarkerOptions()
                                .position(latLngOrigin)
                                .title("Hello Huawei Map")
                                .snippet("This is a snippet!");
                        mMarkerOrigin = hMap.addMarker(options);
                        removeLocationUpdatesWith();
                    }
                }
            }
        }

        @Override
        public void onLocationAvailability(LocationAvailability locationAvailability) {
            if (locationAvailability != null) {
                boolean flag = locationAvailability.isLocationAvailable();
                Log.i(TAG, "onLocationAvailability isLocationAvailable:" + flag);
            }
        }
    };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;III.  Geofence and Ground Overlay Creation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a geofence based on the current location and add a round ground overlay on the map.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GeofenceRequest.Builder geofenceRequest = new 
GeofenceRequest.Builder geofenceRequest = new GeofenceRequest.Builder();
geofenceRequest.createGeofenceList(GeoFenceData.returnList());
geofenceRequest.setInitConversions(7); 
try {
    geofenceService.createGeofenceList(geofenceRequest.build(), pendingIntent)
            .addOnCompleteListener(new OnCompleteListener&amp;lt;Void&amp;gt;() {
                @Override
                public void onComplete(Task&amp;lt;Void&amp;gt; task) {
                    if (task.isSuccessful()) {
                        Log.i(TAG, "add geofence success!");
                        if (null == hMap) {
                            return;
                        }
                        if (null != mCircle) {
                            mCircle.remove();
                            mCircle = null;
                        }
                        mCircle = hMap.addCircle(new CircleOptions()
                                .center(latLngOrigin)
                                .radius(500)
                                .strokeWidth(1)
                                .fillColor(Color.TRANSPARENT));
                    } else {
                        Log.w(TAG, "add geofence failed : " + task.getException().getMessage());
                    }
                }
            });
} catch (Exception e) {
    Log.i(TAG, "add geofence error:" + e.getMessage());
}

// Geofence service
&amp;lt;receiver
    android:name=".GeoFenceBroadcastReceiver"
    android:exported="true"&amp;gt;
    &amp;lt;intent-filter&amp;gt;
        &amp;lt;action android:name=".GeoFenceBroadcastReceiver.ACTION_PROCESS_LOCATION" /&amp;gt;
    &amp;lt;/intent-filter&amp;gt;
&amp;lt;/receiver&amp;gt;

if (intent != null) {
    final String action = intent.getAction();
    if (ACTION_PROCESS_LOCATION.equals(action)) {
        GeofenceData geofenceData = GeofenceData.getDataFromIntent(intent);
        if (geofenceData != null &amp;amp;&amp;amp; isListenGeofence) {
            int conversion = geofenceData.getConversion();
            MainActivity.setGeofenceData(conversion);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mark the selected point on the map to obtain the destination information, check the current activity status, and plan routes based on the detected activity status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hMap.setOnMapClickListener(latLng -&amp;gt; {
    latLngDestination = new LatLng(latLng.latitude, latLng.longitude);
    if (null != mMarkerDestination) {
        mMarkerDestination.remove();
    }
    MarkerOptions options = new MarkerOptions()
            .position(latLngDestination)
            .title("Hello Huawei Map");
    mMarkerDestination = hMap.addMarker(options);
    if (identification.getText().equals("To exit the fence,Your activity is about to be detected.")) {
        requestActivityUpdates(5000);
    }

});
// Activity identification API
activityIdentificationService.createActivityIdentificationUpdates(detectionIntervalMillis, pendingIntent)
        .addOnSuccessListener(new OnSuccessListener&amp;lt;Void&amp;gt;() {
            @Override
            public void onSuccess(Void aVoid) {
                Log.i(TAG, "createActivityIdentificationUpdates onSuccess");
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                Log.e(TAG, "createActivityIdentificationUpdates onFailure:" + e.getMessage());
            }
        });
// URL of the route planning API (cycling route is used as an example): https://mapapi.cloud.huawei.com/mapApi/v1/routeService/bicycling?key=API KEY
 NetworkRequestManager.getBicyclingRoutePlanningResult(latLngOrigin, latLngDestination,
        new NetworkRequestManager.OnNetworkListener() {
            @Override
            public void requestSuccess(String result) {
                generateRoute(result);
            }

            @Override
            public void requestFail(String errorMsg) {
                Message msg = Message.obtain();
                Bundle bundle = new Bundle();
                bundle.putString("errorMsg", errorMsg);
                msg.what = 1;
                msg.setData(bundle);
                mHandler.sendMessage(msg);
            }
        });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note:&lt;/p&gt;

&lt;p&gt;The route planning function provides a set of HTTPS-based APIs for planning walking, cycling, and driving routes and calculating route distances. The APIs return route data in JSON format.&lt;/p&gt;

&lt;p&gt;The route planning function can plan walking, cycling, and driving routes. &lt;br&gt;
You can plan a route from one point to another and draw it on the map, achieving a navigation effect.&lt;/p&gt;
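As an illustration, the request body for these APIs is a JSON object carrying the origin and destination coordinates. The field layout below is an assumption based on the Directions API documentation; verify it against the current API reference before relying on it.

```java
// Sketch: build the JSON request body for a route planning (Directions API)
// request. The lat/lng field layout is an assumption from the API docs;
// check the current Directions API reference before use.
public class RouteRequestBuilder {
    public static String buildBody(double originLat, double originLng,
                                   double destLat, double destLng) {
        return "{"
                + "\"origin\":{\"lat\":" + originLat + ",\"lng\":" + originLng + "},"
                + "\"destination\":{\"lat\":" + destLat + ",\"lng\":" + destLng + "}"
                + "}";
    }

    public static void main(String[] args) {
        // POST this body to the routeService endpoint shown earlier, e.g.
        // https://mapapi.cloud.huawei.com/mapApi/v1/routeService/walking?key=API KEY
        System.out.println(buildBody(48.8566, 2.3522, 48.8606, 2.3376));
    }
}
```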

&lt;p&gt;&lt;strong&gt;Related Parameters&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; In indoor environments, the navigation satellite signals are usually weak. Therefore, HMS Core (APK) will use the network location mode, which is relatively slow compared with the GNSS location. It is recommended that the test be performed outdoors.&lt;/li&gt;
&lt;li&gt; In Android 9.0 or later, you are advised to test the geofence outdoors. In versions earlier than Android 9.0, you can test the geofence indoors. &lt;/li&gt;
&lt;li&gt; Map Kit is unavailable in the Chinese mainland, so the Android SDK, JavaScript API, Static Map API, and Directions API cannot be used there. For details, please refer to Supported Countries/Regions.&lt;/li&gt;
&lt;li&gt; In the Map SDK for Android 5.0.0.300 and later versions, you must set the API key before initializing a map. Otherwise, no map data will be displayed.&lt;/li&gt;
&lt;li&gt; Currently, the driving route planning is unavailable in some countries and regions outside China. For details about the supported countries and regions, please refer to the Huawei official website.&lt;/li&gt;
&lt;li&gt; Before building the APK, configure the obfuscation configuration file to prevent the HMS Core SDK from being obfuscated.
 Open the obfuscation configuration file proguard-rules.pro in the app's root directory of your project and add configurations to exclude the HMS Core SDK from obfuscation.
 If you are using AndResGuard, add its trustlist to the obfuscation configuration file.&lt;/li&gt;
&lt;/ol&gt;
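The obfuscation configuration mentioned in item 6 typically looks like the following (rules as given in the HMS Core obfuscation guide; verify against the current documentation before shipping):

```
# proguard-rules.pro: exclude the HMS Core SDK from obfuscation.
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keep class com.huawei.hianalytics.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
```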

&lt;p&gt;For details, please visit: &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/android-sdk-config-obfuscation-scripts-0000001061882229"&gt;https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/android-sdk-config-obfuscation-scripts-0000001061882229&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To learn more, visit the following links:&lt;br&gt;
&lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-locationkit"&gt;Documentation on the HUAWEI Developers website&lt;/a&gt;&lt;br&gt;
&lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-MapKit"&gt;https://developer.huawei.com/consumer/en/hms/huawei-MapKit&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms?ha_source=hms1"&gt;HUAWEI Developers official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development?ha_source=hms1"&gt;Development Guide&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/r/HMSCore/"&gt;Reddit&lt;/a&gt;to join developer discussions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/HMS-Core"&gt;GitHub&lt;/a&gt; or Gitee to download the &lt;a href="https://github.com/HMS-Core"&gt;demo&lt;/a&gt; and sample code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How a Programmer Developed a Live-Streaming App with Gesture-Controlled Virtual Backgrounds</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Tue, 27 Jul 2021 11:42:48 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/how-a-programmer-developed-a-live-streaming-app-with-gesture-controlled-virtual-backgrounds-5e7m</link>
      <guid>https://dev.to/viviclevercoder/how-a-programmer-developed-a-live-streaming-app-with-gesture-controlled-virtual-backgrounds-5e7m</guid>
      <description>&lt;p&gt;"What's it like to date a programmer?"&lt;/p&gt;

&lt;p&gt;John is a Huawei programmer. His girlfriend Jenny, a teacher, has an interesting answer to that question: "Thanks to my programmer boyfriend, my course ranked among the most popular online courses at my school".&lt;/p&gt;

&lt;p&gt;Let's go over how this came to be. Due to COVID-19, the school where Jenny taught went entirely online. Jenny, who was new to live streaming, wanted her students to experience the full immersion of traveling to Tokyo, New York, Paris, the Forbidden City, Catherine Palace, and the Louvre Museum, so that they could absorb all of the relevant geographic and historical knowledge related to those places. But how to do so?&lt;/p&gt;

&lt;p&gt;Jenny was stuck on this issue, but John quickly came to her rescue.&lt;/p&gt;

&lt;p&gt;After analyzing her requirements in detail, John developed a tailored online course app that brings its users an uncannily immersive experience. It enables users to change the background while live streaming. The video imagery within the app looks true-to-life, as each pixel is labeled, and the entire body image — down to a single strand of hair — is completely cut out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Implement&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Changing live-streaming backgrounds by gesture can be realized by using image segmentation and hand gesture recognition in HUAWEI ML Kit.&lt;/p&gt;

&lt;p&gt;The image segmentation service segments specific elements from static images or dynamic video streams, with 11 types of image elements supported: human bodies, sky scenes, plants, foods, cats and dogs, flowers, water, sand, buildings, mountains, and others.&lt;br&gt;
The hand gesture recognition service offers two capabilities: hand keypoint detection and hand gesture recognition. Hand keypoint detection can detect 21 hand keypoints (including fingertips, knuckles, and wrists) and return their positions. Hand gesture recognition detects and returns the rectangular areas of the hand in images and videos, as well as the type and confidence of a gesture. It can recognize 14 gestures, including thumbs-up/down, the OK sign, fist, finger heart, and number gestures from 1 to 9. Both capabilities support detection from static images and real-time video streams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development Process&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Add the AppGallery Connect plugin and the Maven repository.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    buildscript {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Integrate required services in the full SDK mode.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    dependencies{
     // Import the basic SDK of image segmentation.
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.0.4.300'
    // Import the multiclass segmentation model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-multiclass-model:2.0.4.300'
    // Import the human body segmentation model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.0.4.300'
    // Import the basic SDK of hand gesture recognition.
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
    // Import the model package of hand keypoint detection.
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Add configurations in the file header.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Add apply plugin: 'com.huawei.agconnect' after apply plugin: 'com.android.application'.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Automatically update the machine learning model.&lt;/strong&gt;&lt;br&gt;
Add the following statements to the AndroidManifest.xml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;manifest
    ...
    &amp;lt;meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="imgseg,handkeypoint" /&amp;gt;
    ...
&amp;lt;/manifest&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Create the image segmentation and hand gesture recognition analyzers.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MLImageSegmentationAnalyzer imageSegmentationAnalyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer();// Image segmentation analyzer.
MLHandKeypointAnalyzer handKeypointAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer();// Hand gesture recognition analyzer.

MLCompositeAnalyzer analyzer = new MLCompositeAnalyzer.Creator()
                                    .add(imageSegmentationAnalyzer)
                                   .add(handKeypointAnalyzer)
                                   .create();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. Create a class for processing the recognition result.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    public class ImageSegmentAnalyzerTransactor implements MLAnalyzer.MLTransactor&amp;lt;MLImageSegmentation&amp;gt; {
    @Override
    public void transactResult(MLAnalyzer.Result&amp;lt;MLImageSegmentation&amp;gt; results) {
        SparseArray&amp;lt;MLImageSegmentation&amp;gt; items = results.getAnalyseList();
        // Process the recognition result as required. Note that only the detection results are processed.
        // Other detection-related APIs provided by ML Kit cannot be called.
    }
    @Override
    public void destroy() {
        // Callback method used to release resources when the detection ends.
    }
}

public class HandKeypointTransactor implements MLAnalyzer.MLTransactor&amp;lt;List&amp;lt;MLHandKeypoints&amp;gt;&amp;gt; {
    @Override
    public void transactResult(MLAnalyzer.Result&amp;lt;List&amp;lt;MLHandKeypoints&amp;gt;&amp;gt; results) {
        SparseArray&amp;lt;List&amp;lt;MLHandKeypoints&amp;gt;&amp;gt; analyseList = results.getAnalyseList();
        // Process the recognition result as required. Note that only the detection results are processed.
        // Other detection-related APIs provided by ML Kit cannot be called.
    }
    @Override
    public void destroy() {
        // Callback method used to release resources when the detection ends.
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
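The transactResult callbacks above are where the background replacement itself would happen. As a minimal pure-Java sketch of mask-based compositing (Bitmap handling and the exact MLImageSegmentation accessors are omitted here; in the real app the mask and pixel arrays would come from the segmentation result and the camera frame):

```java
// Minimal sketch of mask-based background replacement over ARGB pixel arrays.
// In the real app the foreground mask would come from the MLImageSegmentation
// result and the pixel arrays from Bitmaps; both are simplified here.
public class BackgroundCompositor {

    // For each pixel, keep the camera pixel where the mask marks foreground (1),
    // otherwise take the pixel from the virtual background image.
    public static int[] composite(int[] cameraFrame, byte[] foregroundMask, int[] background) {
        int[] out = new int[cameraFrame.length];
        for (int i = 0; i < cameraFrame.length; i++) {
            out[i] = (foregroundMask[i] == 1) ? cameraFrame[i] : background[i];
        }
        return out;
    }
}
```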



&lt;p&gt;&lt;strong&gt;7. Set the detection result processor to bind the analyzer to the result processor.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    imageSegmentationAnalyzer.setTransactor(new ImageSegmentAnalyzerTransactor());
handKeypointAnalyzer.setTransactor(new HandKeypointTransactor());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;8. Create a LensEngine object.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context,analyzer)
    // Set the front or rear camera mode. LensEngine.BACK_LENS indicates the rear camera, and LensEngine.FRONT_LENS indicates the front camera.
    .setLensType(LensEngine.FRONT_LENS)
    .applyDisplayDimension(1280, 720)
    .applyFps(20.0f)
    .enableAutomaticFocus(true)
    .create();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;9. Start the camera, read video streams, and start recognition.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    // Implement other logics of the SurfaceView control by yourself.
SurfaceView mSurfaceView = new SurfaceView(this);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;10. Stop the analyzer and release the recognition resources when recognition ends.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    if (analyzer != null) {
    try {
        analyzer.stop();
    } catch (IOException e) {
        // Exception handling.
    }
}
if (lensEngine != null) {
    lensEngine.release();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
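Tying the steps together, the gesture-controlled switching logic itself can be sketched in pure Java. The integer gesture IDs and the confidence threshold below are hypothetical placeholders; real code would use the gesture type constants and confidence values returned by the ML Kit hand gesture recognition SDK.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative gesture-to-background mapping. The gesture IDs (1, 2, 3) and
// the 0.8 confidence threshold are hypothetical placeholders for the constants
// and values returned by the ML Kit hand gesture recognition SDK.
public class BackgroundSwitcher {
    private static final float MIN_CONFIDENCE = 0.8f;

    private final Map<Integer, String> gestureToBackground = new HashMap<>();
    private String currentBackground = "none";

    public BackgroundSwitcher() {
        gestureToBackground.put(1, "tokyo.jpg");    // e.g. number gesture "1"
        gestureToBackground.put(2, "new_york.jpg"); // e.g. number gesture "2"
        gestureToBackground.put(3, "paris.jpg");    // e.g. number gesture "3"
    }

    // Called with the gesture type and confidence from the recognition result;
    // switches the background only for a confidently recognized, mapped gesture.
    public String onGesture(int gestureType, float confidence) {
        if (confidence >= MIN_CONFIDENCE && gestureToBackground.containsKey(gestureType)) {
            currentBackground = gestureToBackground.get(gestureType);
        }
        return currentBackground;
    }
}
```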



&lt;p&gt;For more information, please visit:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms?ha_source=hms1"&gt;HUAWEI Developers official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development?ha_source=hms1"&gt;Development Guide&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/r/HMSCore/"&gt;Reddit&lt;/a&gt;to join developer discussions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/HMS-Core"&gt;GitHub&lt;/a&gt; or Gitee to download the &lt;a href="https://github.com/HMS-Core"&gt;demo&lt;/a&gt; and sample code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>virtual</category>
    </item>
    <item>
      <title>Implementing Real-Time Transcription in an Easy Way</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Fri, 23 Jul 2021 12:36:54 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/implementing-real-time-transcription-in-an-easy-way-n3d</link>
      <guid>https://dev.to/viviclevercoder/implementing-real-time-transcription-in-an-easy-way-n3d</guid>
      <description>&lt;p&gt;&lt;strong&gt;Background&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The real-time onscreen subtitle is a must-have function in an ordinary video app. However, developing such a function can prove costly for small- and medium-sized developers. And even when implemented, speech recognition is often prone to inaccuracy. Fortunately, there's a better way — HUAWEI ML Kit, which is remarkably easy to integrate, and makes real-time transcription an absolute breeze!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction to ML Kit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ML Kit allows your app to leverage Huawei's longstanding machine learning prowess to apply cutting-edge artificial intelligence (AI) across a wide range of contexts. With Huawei's expertise built in, ML Kit is able to provide a broad array of easy-to-use machine learning capabilities, which serve as the building blocks for tomorrow's cutting-edge AI apps. ML Kit capabilities include those related to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text (including text recognition, document recognition, and ID card recognition)&lt;/li&gt;
&lt;li&gt;Language/Voice (such as real-time/on-device translation, automatic speech recognition, and real-time transcription)&lt;/li&gt;
&lt;li&gt;Image (such as image classification, object detection and tracking, and landmark recognition)&lt;/li&gt;
&lt;li&gt;Face/Body (such as face detection, skeleton detection, liveness detection, and face verification)&lt;/li&gt;
&lt;li&gt;Natural language processing (text embedding)&lt;/li&gt;
&lt;li&gt;Custom model (including the on-device inference framework and model development tool)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Real-time transcription is the service used to implement the function described above. Now let's move on to how to integrate it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrating Real-Time Transcription&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1. Registering as a Huawei developer on &lt;a href="https://developer.huawei.com/consumer/en/?ha_source=hms1"&gt;HUAWEI Developers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. Creating an app&lt;/p&gt;

&lt;p&gt;Create an app in &lt;a href="https://developer.huawei.com/consumer/en/service/josp/agc/index.html#/?ha_source=hms1"&gt;AppGallery Connect&lt;/a&gt;. For details, see &lt;a href="https://developer.huawei.com/consumer/en/doc/development/AppGallery-connect-Guides/agc-get-started#createproject?ha_source=hms1"&gt;Getting Started with Android&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We've provided some screenshots for your reference:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jPGU7MM9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82lebqbt9apzwladkb7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jPGU7MM9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82lebqbt9apzwladkb7w.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f55iPlBr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1zjc2rbnr35mamlv501q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f55iPlBr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1zjc2rbnr35mamlv501q.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7mBzqyJE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00zdfzcunabe7yzknjz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7mBzqyJE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00zdfzcunabe7yzknjz5.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A3KihMvb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8hi1alljzr6qd0x5saz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A3KihMvb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8hi1alljzr6qd0x5saz.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. Enabling ML Kit&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DQKVFKQY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iq4gc3qw684dv8xfegtx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DQKVFKQY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iq4gc3qw684dv8xfegtx.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. Integrating the HMS Core SDK&lt;/p&gt;

&lt;p&gt;Add the AppGallery Connect configuration file by completing the steps below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download and copy the agconnect-services.json file to the app directory of your Android Studio project.&lt;/li&gt;
&lt;li&gt;Call setApiKey during app initialization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To learn more, go to &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/add-appgallery-0000001050038080-V5?ha_source=hms1"&gt;Adding the AppGallery Connect Configuration File&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;5. &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/config-maven-0000001050040031-V5?ha_source=hms1"&gt;Configuring the Maven repository address&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add build dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j0wCzbL_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uxb02wcznn0acbcpkvkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j0wCzbL_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uxb02wcznn0acbcpkvkq.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;implementation 'com.huawei.hms:ml-computer-voice-realtimetranscription:2.2.0.300'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the AppGallery Connect plugin configuration.&lt;/p&gt;

&lt;p&gt;Method 1: Add the following information under the declaration in the file header:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apply plugin: 'com.huawei.agconnect'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Method 2: Add the plugin configuration in the plugins block.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;plugins {
id 'com.android.application'
// Add the following configuration:
id 'com.huawei.agconnect'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please refer to &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/real-time-transcription-sdk-0000001055762756-V5?ha_source=hms1"&gt;Integrating the Real-Time Transcription SDK&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;6. Setting the cloud authentication information&lt;/p&gt;

&lt;p&gt;When using on-cloud services of ML Kit, you can set the API key or access token (recommended) in either of the following ways:&lt;/p&gt;

&lt;p&gt;Access token&lt;/p&gt;

&lt;p&gt;You can use the following API to initialize the access token when the app is started. The access token does not need to be set again once initialized.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MLApplication.getInstance().setAccessToken("your access token");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;API key&lt;/p&gt;

&lt;p&gt;You can use the following API to initialize the API key when the app is started. The API key does not need to be set again once initialized.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MLApplication.getInstance().setApiKey("your ApiKey");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For details, see Notes on &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/sdk-data-security-0000001050040129-V5#EN-US_TOPIC_0000001050750251__section2688102310166?ha_source=hms1"&gt;Using Cloud Authentication Information&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Development&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create and configure a speech recognizer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MLSpeechRealTimeTranscriptionConfig config = new MLSpeechRealTimeTranscriptionConfig.Factory()

// Set the language. Currently, this service supports Mandarin Chinese, English, and French.

.setLanguage(MLSpeechRealTimeTranscriptionConstants.LAN_ZH_CN)

// Punctuate the text recognized from the speech.

.enablePunctuation(true)

// Set the sentence offset.

.enableSentenceTimeOffset(true)

// Set the word offset.

.enableWordTimeOffset(true)

// Set the application scenario. MLSpeechRealTimeTranscriptionConstants.SCENES_SHOPPING indicates shopping, which is supported only for Chinese. Under this scenario, recognition for the name of Huawei products has been optimized.

.setScenes(MLSpeechRealTimeTranscriptionConstants.SCENES_SHOPPING)

.create();

MLSpeechRealTimeTranscription mSpeechRecognizer = MLSpeechRealTimeTranscription.getInstance();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a speech recognition result listener callback.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Use the callback to implement the MLSpeechRealTimeTranscriptionListener API and methods in the API.

protected class SpeechRecognitionListener implements

MLSpeechRealTimeTranscriptionListener{
u/Override
public void onStartListening() {
// The recorder starts to receive speech.
}
u/Override
public void onStartingOfSpeech() {
// The user starts to speak, that is, the speech recognizer detects that the user starts to speak.
}
u/Override
public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
// Return the original PCM stream and audio power to the user. This API is not running in the main thread, and the return result is processed in a sub-thread.
}
u/Override
public void onRecognizingResults(Bundle partialResults) {
// Receive the recognized text from MLSpeechRealTimeTranscription.
}
u/Override
public void onError(int error, String errorMessage) {
// Called when an error occurs in recognition.
}
u/Override
public void onState(int state,Bundle params) {
// Notify the app of the status change.
}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The recognition result can be obtained from the listener callbacks, including onRecognizingResults. Design the UI content according to the obtained results. For example, display the text transcribed from the input speech.&lt;/p&gt;
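One way to turn the stream of partial and final results into an onscreen subtitle is to keep a committed transcript plus an in-progress segment. This is a pure-Java sketch; how partial and final results are distinguished in the result Bundle must be taken from the SDK's constants, so that decision is left to the caller here.

```java
// Accumulates transcription results for display: final segments are committed,
// while the latest partial segment is shown provisionally and replaced as it
// is refined. Distinguishing partial from final results is up to the caller.
public class SubtitleBuffer {
    private final StringBuilder committed = new StringBuilder();
    private String pending = "";

    // Called with an interim recognition result; replaces the previous one.
    public void onPartial(String text) {
        pending = text;
    }

    // Called when a segment is finalized; it joins the committed transcript.
    public void onFinal(String text) {
        committed.append(text);
        pending = "";
    }

    // Text to render on screen.
    public String display() {
        return committed + pending;
    }
}
```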

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--feImw4lb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61doy0g3prfeqlyusj8e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--feImw4lb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61doy0g3prfeqlyusj8e.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mSpeechRecognizer.setRealTimeTranscriptionListener(new SpeechRecognitionListener());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Call startRecognizing to start speech recognition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mSpeechRecognizer.startRecognizing(config);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Release resources after recognition is complete.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (mSpeechRecognizer!= null) {
mSpeechRecognizer.destroy();
}
l (Optional) Obtain the list of supported languages.
MLSpeechRealTimeTranscription.getInstance()
.getLanguages(new MLSpeechRealTimeTranscription.LanguageCallback() {
u/Override
public void onResult(List&amp;lt;String&amp;gt; result) {
 Log.i(TAG, "support languages==" + result.toString());
}
u/Override
public void onError(int errorCode, String errorMsg) {
Log.e(TAG, "errorCode:" + errorCode + "errorMsg:" + errorMsg);
}
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've finished integration here, so let's test it out on a simple screen.&lt;/p&gt;

&lt;p&gt;Tap START RECORDING. The text recognized from the input speech will display in the lower portion of the screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H7UavBpg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pkr37khd00ibvfoqtzdf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H7UavBpg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pkr37khd00ibvfoqtzdf.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P5J7IxJD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t1nta6yplw1f33k8bmnh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P5J7IxJD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t1nta6yplw1f33k8bmnh.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've now built a simple audio transcription function.&lt;/p&gt;

&lt;p&gt;Eager to build a fancier UI, with stunning animations, and other effects? By all means, take your shot!&lt;/p&gt;

&lt;p&gt;For more information, please visit:&lt;br&gt;
&lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/real-time-transcription-0000001054964200?ha_source=hms1"&gt;Real-Time Transcription&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/sdk-data-security-0000001050040129-V5#EN-US_TOPIC_0000001050750251__section2688102310166?ha_source=hms1"&gt;Sample Code for ML Kit&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-arengine/?ha_source=hms1"&gt;Documentation on the HUAWEI Developers website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms?ha_source=hms1"&gt;HUAWEI Developers official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development?ha_source=hms1"&gt;Development Guide&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/r/HMSCore/"&gt;Reddit&lt;/a&gt;to join developer discussions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/HMS-Core"&gt;GitHub&lt;/a&gt; or Gitee to download the &lt;a href="https://github.com/HMS-Core"&gt;demo&lt;/a&gt; and sample code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

</description>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Monitor Real-time Health during Workouts with Body and Face Tracking</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Wed, 21 Jul 2021 12:06:54 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/monitor-real-time-health-during-workouts-with-body-and-face-tracking-40n6</link>
      <guid>https://dev.to/viviclevercoder/monitor-real-time-health-during-workouts-with-body-and-face-tracking-40n6</guid>
      <description>&lt;p&gt;Still wearing a smart watch to monitor health indicators during workouts? Curious at what makes AR apps so advanced? Still think that AR is only used in movies? With HUAWEI AR Engine, you can integrate AR capabilities into your own apps in just a few easy steps. If this has piqued your interest, read on to learn more!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is AR Engine?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;HUAWEI AR Engine is an engine designed for building augmented reality (AR) apps that run on Android smartphones. It is based on the HiSilicon chipset and integrates AR core algorithms to provide a range of basic AR capabilities, such as motion tracking, environment tracking, body tracking, and face tracking, enabling your app to bridge the real and virtual worlds and offer a brand-new, visually interactive user experience.&lt;/p&gt;

&lt;p&gt;AR Engine provides high-level health status detection via facial information, covering a range of data indicators including heart rate, respiratory rate, facial health status, and heart rate waveform signals.&lt;/p&gt;
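AR Engine returns these indicators directly, so no signal processing is required on your side. Purely to make the "heart rate waveform signal" concept concrete, here is a toy peak-counting estimator; it is not part of the AR Engine API, and all names are illustrative.

```java
// Toy illustration of deriving beats per minute from a sampled waveform by
// counting rising-edge threshold crossings. Real heart-rate values come
// directly from AR Engine; this is not its API.
public class WaveformDemo {

    public static int estimateBpm(double[] samples, double sampleRateHz, double threshold) {
        int peaks = 0;
        boolean above = false;
        for (double s : samples) {
            if (!above && s > threshold) { // rising edge crossing the threshold
                peaks++;
                above = true;
            } else if (s <= threshold) {
                above = false;
            }
        }
        double seconds = samples.length / sampleRateHz;
        return (int) Math.round(peaks * 60.0 / seconds);
    }
}
```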

&lt;p&gt;With the human body and face tracking capability, one of the engine's three major capabilities (the other two being motion tracking and environment tracking), HUAWEI AR Engine is able to monitor and display the user's real-time health status during workouts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application scenarios&lt;/strong&gt;:&lt;br&gt;
Gym: Checking real-time body indicators during workouts.&lt;br&gt;
Medical treatment: Monitoring patients' physical status in real time.&lt;br&gt;
Caregiving: Monitoring health indicators of the elderly in real time.&lt;/p&gt;

&lt;p&gt;Next, let's take a look at how to implement these powerful functions.&lt;/p&gt;

&lt;p&gt;Advantages of AR monitoring and requirements for hardware:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Detects facial health information and calculates key health indicators, such as the real-time heart rate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The human body and face tracking capabilities also equip your device to better understand users. By identifying hand locations and recognizing specific gestures, AR Engine can assist in placing a virtual object in the real world, or overlaying special effects on a hand. With depth-sensing components, the hand skeleton tracking capability can track 21 hand skeleton points to implement precise interactive controls and special effect overlays. As for body tracking, the capability can track 23 body skeleton points to detect human posture in real time, providing a strong foundation for motion-sensing and fitness &amp;amp; health apps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For details about supported models, please refer to the software and hardware dependencies on the HUAWEI Developers website.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bDFoJVQL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzaja7fkpoti6wmwb6nc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bDFoJVQL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzaja7fkpoti6wmwb6nc.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Demo Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A demo is offered here for you to learn how to integrate AR Engine with the simplest code in the fastest way.&lt;br&gt;
 Enable health check by using ENABLE_HEALTH_DEVICE.&lt;br&gt;
 FaceHealthCheckStateEvent functions as a parameter of ServiceListener.handleEvent(EventObject eventObject) that passes health check status information to the app.&lt;br&gt;
 The health check HealthParameter includes the heart rate, respiratory rate, facial attributes (like age and gender), and heart rate waveform signal.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Development Practice
The following describes how to run the demo using source code, enabling you to understand the implementation details.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Preparations&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Get the tools prepared.&lt;br&gt;
a)  A Huawei P30 running Android 11.&lt;br&gt;
b)  Development tool: Android Studio; development language: Java.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Complete registration and app creation.&lt;br&gt;
a)  Register as a Huawei developer.&lt;br&gt;
b)  Create an app.&lt;br&gt;
Follow instructions in the AR Engine Development Guide to add an app in AppGallery Connect.&lt;br&gt;
c)  Build the demo app.&lt;br&gt;
 Import the source code to Android Studio.&lt;br&gt;
 Download the agconnect-services.json file of the created app from AppGallery Connect, and add it to the app directory in the sample project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the demo app.&lt;br&gt;
a)  Install the demo app on the test device.&lt;br&gt;
b)  After the app is started, access facial recognition. During recognition, the progress will be displayed on the screen in real time.&lt;br&gt;
c)  Your heart rate, respiratory rate, and real-time heart rate waveform will be displayed after successful recognition.&lt;br&gt;
The results are as shown in the following figure.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Add the Huawei Maven repository to the project-level build.gradle file.
Add the following Maven repository address to the project-level build.gradle file of your Android Studio project:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/'}
    }
dependencies {
        ...
        // Add the AppGallery Connect plugin configuration.
        classpath 'com.huawei.agconnect:agcp:1.4.2.300'
    }
}allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/'}
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;2.    Add dependencies on the SDKs in the app-level build.gradle file.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dependencies {
   implementation 'com.huawei.hms:arenginesdk:2.15.0.1'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3.    Declare system permissions in the AndroidManifest.xml file.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The required permissions include the camera permission and network permission.&lt;/p&gt;

&lt;p&gt;Camera permission: android.permission.CAMERA, which is indispensable for using AR Engine.&lt;br&gt;
Network permission: android.permission.INTERNET, which is used to analyze API calling status and guide continuous capability optimization.&lt;/p&gt;

&lt;p&gt;Note: The AR Engine SDK processes data only on the device side, and does not report data to the server.&lt;/p&gt;
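&lt;p&gt;The two declarations described above use standard Android manifest syntax and can be added to AndroidManifest.xml like so:&lt;/p&gt;

```xml
<!-- Camera permission, indispensable for AR Engine to capture frames. -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Network permission, used to analyze API calling status. -->
<uses-permission android:name="android.permission.INTERNET" />
```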

&lt;p&gt;&lt;strong&gt;Key Code Description&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Check the AR Engine availability.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Check whether AR Engine has been installed on the current device. If yes, the app can run properly. If not, the app automatically redirects the user to AppGallery to install AR Engine. Sample code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
        if (!isInstallArEngineApk) {
            // ConnectAppMarketActivity.class is the activity for redirecting to AppGallery.
            startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
            isRemindInstall = true;
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Create an ARFaceTrackingConfig scene.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create an ARSession.
mArSession = new ARSession(this);
// Select a specific Config to initialize the ARSession based on the application scenario.
ARWorldTrackingConfig config = new ARWorldTrackingConfig(mArSession);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Add the listener for passing information such as the health check status and progress.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mArSession.addServiceListener(new FaceHealthServiceListener() {
    @Override
    public void handleEvent(EventObject eventObject) {
        // FaceHealthCheckStateEvent passes the health check status information to the app.
        if (!(eventObject instanceof FaceHealthCheckStateEvent)) {
            return;
        }
        // Obtain the health check status.
        final FaceHealthCheckState faceHealthCheckState =
                ((FaceHealthCheckStateEvent) eventObject).getFaceHealthCheckState();
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mHealthCheckStatusTextView.setText(faceHealthCheckState.toString());
            }
        });
    }
    // handleProcessProgressEvent passes the health check progress.
    @Override
    public void handleProcessProgressEvent(final int progress) {
        mHealthRenderManager.setHealthCheckProgress(progress);
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                setProgressTips(progress);
            }
        });
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more information, please visit:&lt;br&gt;
&lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-arengine/?ha_source=hms1"&gt;Documentation on the HUAWEI Developers website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms?ha_source=hms1"&gt;HUAWEI Developers official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development?ha_source=hms1"&gt;Development Guide&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/r/HMSCore/"&gt;Reddit&lt;/a&gt;to join developer discussions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/HMS-Core"&gt;GitHub&lt;/a&gt; or Gitee to download the &lt;a href="https://github.com/HMS-Core"&gt;demo&lt;/a&gt; and sample code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

</description>
      <category>ar</category>
    </item>
    <item>
      <title>How to Build a 3D Product Model Within Just 5 Minutes</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Mon, 19 Jul 2021 09:47:20 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/how-to-build-a-3d-product-model-within-just-5-minutes-1inh</link>
      <guid>https://dev.to/viviclevercoder/how-to-build-a-3d-product-model-within-just-5-minutes-1inh</guid>
      <description>&lt;p&gt;Displaying products with 3D models is something too great to ignore for an e-commerce app. Using those fancy gadgets, such an app can leave users with the first impression upon products in a fresh way!&lt;/p&gt;

&lt;p&gt;The 3D model plays an important role in boosting user conversion. It allows users to carefully view a product from every angle, before they make a purchase. Together with the AR technology, which gives users an insight into how the product will look in reality, the 3D model brings a fresher online shopping experience that can rival offline shopping.&lt;/p&gt;

&lt;p&gt;Despite its advantages, the 3D model has yet to be widely adopted. The underlying reason for this is that applying current 3D modeling technology is expensive:&lt;br&gt;
 Technical requirements: Learning how to build a 3D model is time-consuming.&lt;br&gt;
 Time: It takes at least several hours to build a low polygon model for a simple object, and even longer for a high polygon one.&lt;br&gt;
 Spending: The average cost of building a simple model can be more than one hundred dollars, and even higher for building a complex one.&lt;/p&gt;

&lt;p&gt;Luckily, 3D object reconstruction, a capability in 3D Modeling Kit newly launched in HMS Core, makes 3D model building straightforward. This capability automatically generates a textured 3D model of an object from images shot at different angles with a standard RGB camera. It gives an app the ability to build and preview 3D models. For instance, when an e-commerce app has integrated 3D object reconstruction, it can generate and display 3D models of shoes. Users can then freely zoom in and out on the models for a more immersive shopping experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Actual Effect&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--479AHHOf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lp3puw052mpqavoc45n2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--479AHHOf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lp3puw052mpqavoc45n2.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Solutions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Tx_DRL7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2eopd63j791gm6wx53cg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Tx_DRL7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2eopd63j791gm6wx53cg.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3D object reconstruction is implemented on both the device and cloud. RGB images of an object are collected on the device and then uploaded to the cloud. Key technologies involved in the on-cloud modeling process include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Finally, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.&lt;/p&gt;
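&lt;p&gt;To get a concrete feel for the output, the following plain-Java sketch (not part of the 3D Modeling SDK; just an illustration of the OBJ text format) counts the geometry lines in a downloaded model file's content:&lt;/p&gt;

```java
// Illustrative helper for inspecting an OBJ model file's text:
// "v " lines are vertices, "f " lines are faces (the "patches"
// mentioned above). Not an SDK API.
class ObjStats {
    static int count(String objText, String prefix) {
        int n = 0;
        for (String line : objText.split("\n")) {
            if (line.startsWith(prefix)) n++;  // e.g. "f 1 2 3"
        }
        return n;
    }
}
```

For a real model, the face count reported this way should fall in the 40,000 to 200,000 range mentioned above.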

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z-u18om7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hgywcnyr50vn8cvmlgmx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z-u18om7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hgywcnyr50vn8cvmlgmx.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Preparations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.    Configuring a Dependency on the 3D Modeling SDK&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open the app-level build.gradle file and add a dependency on the 3D Modeling SDK in the dependencies block.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Build a dependency on the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2.    Configuring AndroidManifest.xml&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open the AndroidManifest.xml file in the main folder. Add the following information before the &amp;lt;application&amp;gt; element to apply for the storage read and write permissions and the camera permission.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- Permission to read data from and write data into storage. --&amp;gt;
&amp;lt;uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /&amp;gt;
&amp;lt;uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" /&amp;gt;
&amp;lt;!-- Permission to use the camera. --&amp;gt;
&amp;lt;uses-permission android:name="android.permission.CAMERA" /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Development Procedure&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1.    Configuring the Storage Permission Application&lt;/strong&gt;&lt;br&gt;
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
    Log.i(TAG, "Permissions OK");
} else {
    EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
            RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the application result. If the permissions are not granted, prompt the user to grant them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Override
public void onPermissionsGranted(int requestCode, @NonNull List&amp;lt;String&amp;gt; perms) {
    Log.i(TAG, "permissions = " + perms);
    if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE &amp;amp;&amp;amp; PERMISSIONS.length == perms.size()) {
        initView();
        initListener();
    }
}

@Override
public void onPermissionsDenied(int requestCode, @NonNull List&amp;lt;String&amp;gt; perms) {
    if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
        new AppSettingsDialog.Builder(this)
                .setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
                .setRationale("To use this app, you need to enable the permission.")
                .setTitle("Insufficient permissions")
                .build()
                .show();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2.    Creating a 3D Object Reconstruction Configurator&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Set the PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
        .setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
        .create();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3.    Creating a 3D Object Reconstruction Engine and Initializing the Task&lt;/strong&gt;&lt;br&gt;
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create an engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the engine to initialize the task.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Initialize the 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4.    Creating a Listener Callback to Process the Image Upload Result&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a listener callback that allows you to configure the operations triggered upon upload success and failure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create an upload listener callback.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
    @Override
    public void onUploadProgress(String taskId, double progress, Object ext) {
        // Upload progress.
    }

    @Override
    public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
        if (result.isComplete()) {
            isUpload = true;
            ScanActivity.this.runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    progressCustomDialog.dismiss();
                    Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
                }
            });
            TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
        }
    }

    @Override
    public void onError(String taskId, int errorCode, String message) {
        isUpload = false;
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                progressCustomDialog.dismiss();
                Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
                LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
            }
        });

    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5.    Passing the Upload Listener Callback to the Engine to Upload Images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pass the upload listener callback to the engine. Call uploadFile(), passing the task ID obtained in step 3 and the path of the images to be uploaded, to upload the images to the cloud server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Pass the listener callback to the engine.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Start uploading.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6.    Querying the Task Status&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Call getInstance of Modeling3dReconstructTaskUtils to create a task processing instance. Pass the current context.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create a task processing instance.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Call queryTask of the task processing instance to query the status of the 3D object reconstruction task.&lt;/p&gt;

&lt;p&gt;// Query the task status, which can be: 0 (images to be uploaded), 1 (image upload completed), 2 (model being generated), 3 (model generation completed), or 4 (model generation failed).&lt;br&gt;
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());&lt;/p&gt;
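&lt;p&gt;For readability, the numeric status codes above can be mapped to descriptions with a small helper. This is a plain-Java sketch, not an SDK API; the strings simply mirror the codes listed above:&lt;/p&gt;

```java
// Map the 3D object reconstruction task status codes described above
// to readable descriptions. Illustrative only, not part of the SDK.
class ReconstructStatus {
    static String describe(int status) {
        switch (status) {
            case 0:  return "Images to be uploaded";
            case 1:  return "Image upload completed";
            case 2:  return "Model being generated";
            case 3:  return "Model generation completed";
            case 4:  return "Model generation failed";
            default: return "Unknown status: " + status;
        }
    }
}
```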

&lt;p&gt;&lt;strong&gt;7.    Creating a Listener Callback to Process the Model File Download Result&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a listener callback that allows you to configure the operations triggered upon download success and failure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create a download listener callback.
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
    @Override
    public void onDownloadProgress(String taskId, double progress, Object ext) {
        ((Activity) mContext).runOnUiThread(new Runnable() {
            @Override
            public void run() {
                dialog.show();
            }
        });
    }

    @Override
    public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
        ((Activity) mContext).runOnUiThread(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
                TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
                dialog.dismiss();
            }
        });
    }

    @Override
    public void onError(String taskId, int errorCode, String message) {
        LogUtil.e(taskId + " &amp;lt;---&amp;gt; " + errorCode + message);
        ((Activity) mContext).runOnUiThread(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
                dialog.dismiss();
            }
        });
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;8.    Passing the Download Listener Callback to the Engine to Download the File of the Generated Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pass the download listener callback to the engine. Call downloadModel, passing the task ID obtained in step 3 and the path for saving the model file, to download it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Pass the download listener callback to the engine.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;More Information&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The object should have a rich texture, be medium-sized, and be a rigid body. The object should not be reflective, transparent, or semi-transparent. Suitable object types include goods (like plush toys, bags, and shoes), furniture (like sofas), and cultural relics (such as bronzes, stone artifacts, and wooden artifacts).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The object dimension should be within the range from 15 x 15 x 15 cm to 150 x 150 x 150 cm. (A larger dimension requires a longer time for modeling.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;3D object reconstruction does not support modeling for the human body and face.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure the following requirements are met during image collection:&lt;br&gt;
Put a single object on a stable, solid-colored plane.&lt;br&gt;
The environment should not be dark or dazzling.&lt;br&gt;
Keep all images in focus, free from blur caused by motion or shaking.&lt;br&gt;
Ensure images are taken from various angles, including the bottom, flat, and top (it is advised that you upload more than 50 images for an object).&lt;br&gt;
Move the camera as slowly as possible, and do not change the angle during shooting.&lt;br&gt;
Lastly, ensure the object-to-image ratio is as big as possible, and all parts of the object are present.&lt;br&gt;
That's all for the sample code of 3D object reconstruction. Try integrating it into your app and build your own 3D models!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
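&lt;p&gt;As a quick sanity check before starting a task, the dimension requirement in point 2 can be verified in code. This helper is illustrative only, not an SDK API:&lt;/p&gt;

```java
// Pre-check an object's dimensions against the supported modeling
// range of 15 x 15 x 15 cm to 150 x 150 x 150 cm stated above.
// Illustrative sketch, not part of the 3D Modeling SDK.
class DimensionCheck {
    static final double MIN_CM = 15.0;
    static final double MAX_CM = 150.0;

    static boolean isSupported(double widthCm, double heightCm, double depthCm) {
        return inRange(widthCm) && inRange(heightCm) && inRange(depthCm);
    }

    private static boolean inRange(double sideCm) {
        return sideCm >= MIN_CM && sideCm <= MAX_CM;
    }
}
```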

&lt;p&gt;To learn more, please visit:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms?ha_source=hms1"&gt;HUAWEI Developers official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development?ha_source=hms1"&gt;Development Guide&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/r/HMSCore/"&gt;Reddit&lt;/a&gt;to join developer discussions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/HMS-Core"&gt;GitHub&lt;/a&gt; or Gitee to download the &lt;a href="https://github.com/HMS-Core"&gt;demo&lt;/a&gt; and sample code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

</description>
      <category>3d</category>
    </item>
    <item>
      <title>Communicating Between JavaScript and Java Through the Cordova Plugins in HMS Core Kits</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Thu, 15 Jul 2021 08:13:56 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/communicating-between-javascript-and-java-through-the-cordova-plugins-in-hms-core-kits-kc7</link>
      <guid>https://dev.to/viviclevercoder/communicating-between-javascript-and-java-through-the-cordova-plugins-in-hms-core-kits-kc7</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Background&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cordova is an open-source cross-platform development framework that allows you to use HTML and JavaScript to develop apps across multiple platforms, such as Android and iOS. So how exactly does Cordova enable apps to run on different platforms and implement their functions? The abundant plugins in Cordova are the main reason: they free you to focus solely on app functions, without having to interact with the APIs at the OS level.&lt;/p&gt;

&lt;p&gt;HMS Core provides a set of Cordova-related plugins, which enable you to integrate kits with greater ease and efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here, I'll use the Cordova plugin in HUAWEI Push Kit as an example to demonstrate how to call Java APIs in JavaScript through JavaScript-Java messaging.&lt;br&gt;
The following implementation principles can be applied to all other kits, except for Map Kit and Ads Kit (which will be detailed later), and help you master troubleshooting solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Basic Structure of Cordova&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you call loadUrl in MainActivity, CordovaWebView will be initialized and Cordova starts up. In this case, CordovaWebView will create PluginManager, NativeToJsMessageQueue, as well as ExposedJsApi of JavascriptInterface. ExposedJsApi and NativeToJsMessageQueue will play a role in the subsequent communication.&lt;/p&gt;

&lt;p&gt;During the plugin loading, all plugins in the configuration file will be read when the PluginManager object is created, and plugin mappings will be created. When the plugin is called for the first time, instantiation is conducted and related functions are executed.&lt;/p&gt;
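&lt;p&gt;The lazy instantiation described above can be sketched in plain Java. Note that this is a simplified illustration, not Cordova's actual PluginManager (which resolves the mapped class names via reflection); the class name used below is hypothetical:&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of lazy plugin loading: name-to-class mappings are built up
// front from the configuration file, but a plugin object is only
// created on its first call, then cached for later calls.
class LazyPluginMap {
    private final Map<String, String> mappings = new HashMap<>();   // name -> class name
    private final Map<String, Object> instances = new HashMap<>();  // created on demand
    int instantiations = 0;  // exposed for the sketch

    void addMapping(String name, String className) {
        mappings.put(name, className);
    }

    Object getPlugin(String name) {
        if (!mappings.containsKey(name)) return null;
        Object plugin = instances.get(name);
        if (plugin == null) {        // first call: instantiate now
            plugin = new Object();   // real Cordova reflects on the class name
            instances.put(name, plugin);
            instantiations++;
        }
        return plugin;
    }
}
```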

&lt;p&gt;A message can be returned from Java to JavaScript in synchronous or asynchronous mode. In Cordova, the async flag set in the method call distinguishes the two modes.&lt;/p&gt;

&lt;p&gt;In synchronous mode, Cordova obtains data from the header of the NativeToJsMessageQueue queue, finds the message request based on callbackID, and returns the data to the success method of the request.&lt;/p&gt;

&lt;p&gt;In asynchronous mode, Cordova calls the loop method to continuously obtain data from the NativeToJsMessageQueue queue, finds the message request, and returns the data to the success method of the request.&lt;br&gt;
The Cordova plugin of Push Kit uses the synchronous mode.&lt;/p&gt;
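&lt;p&gt;The queue-and-callbackID matching described above can be sketched as follows. This is a simplified plain-Java illustration of the idea, not Cordova's actual NativeToJsMessageQueue:&lt;/p&gt;

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Sketch of bridge dispatch: each native result carries a callbackId,
// and draining the queue routes every result back to the request that
// registered that id (the request's "success" sink here).
class JsBridgeSketch {
    static class Message {
        final String callbackId;
        final String payload;
        Message(String callbackId, String payload) {
            this.callbackId = callbackId;
            this.payload = payload;
        }
    }

    private final Queue<Message> nativeToJsQueue = new ArrayDeque<>();
    private final Map<String, StringBuilder> pendingRequests = new HashMap<>();

    void register(String callbackId) {                       // JS-side request
        pendingRequests.put(callbackId, new StringBuilder());
    }

    void postFromNative(String callbackId, String payload) { // Java-side result
        nativeToJsQueue.add(new Message(callbackId, payload));
    }

    // The loop the article describes: obtain data from the queue and
    // deliver it to the matching request's success sink.
    void drain() {
        Message m;
        while ((m = nativeToJsQueue.poll()) != null) {
            StringBuilder sink = pendingRequests.get(m.callbackId);
            if (sink != null) sink.append(m.payload);
        }
    }

    String resultOf(String callbackId) {
        return pendingRequests.get(callbackId).toString();
    }
}
```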

&lt;p&gt;&lt;strong&gt;4. Plugin Call&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You may still be unclear on how the process works, based on the description above, so I've provided the following procedure:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.    Install the plugin.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the cordova plugin add @hmscore/cordova-plugin-hms-push command to install the latest plugin. After the command is executed, the plugin information is added to the plugins directory.&lt;/p&gt;

&lt;p&gt;The plugin.xml file records all information to be used, such as the JavaScript and Android classes. During the plugin initialization, these classes will be loaded into Cordova. If a method or API is not configured in the file, it cannot be used.&lt;/p&gt;
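&lt;p&gt;For reference, a plugin.xml mapping typically looks like the fragment below. The element names follow standard Cordova conventions, but the class, package, and file names here are illustrative assumptions, not copied from the actual plugin source:&lt;/p&gt;

```xml
<!-- Expose the JavaScript module to the app. -->
<js-module src="www/HmsPush.js" name="HmsPush">
    <clobbers target="HmsPush" />
</js-module>
<!-- Map the plugin name to its Android class. -->
<platform name="android">
    <config-file target="res/xml/config.xml" parent="/*">
        <feature name="HmsPushMessaging">
            <param name="android-package" value="com.example.HmsPushMessaging" />
        </feature>
    </config-file>
</platform>
```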

&lt;p&gt;&lt;strong&gt;2.    Create a message mapping.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The plugin provides the methods for creating mappings for the following messages:&lt;/p&gt;

&lt;p&gt;1)  HmsMessaging&lt;/p&gt;

&lt;p&gt;In the HmsPush.js file, call the runHmsMessaging API in asynchronous mode to transfer the message to the Android platform. The Android platform returns the result through Promise.&lt;br&gt;
The message will be transferred to the HmsPushMessaging class. The execute method in HmsPushMessaging can transfer the message to a method for processing based on the action type in the message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void execute(String action, final JSONArray args, final CallbackContext callbackContext)
        throws JSONException {
    hmsLogger.startMethodExecutionTimer(action);
    switch (action) {
        case "isAutoInitEnabled":
            isAutoInitEnabled(callbackContext);
            break;
        case "setAutoInitEnabled":
            setAutoInitEnabled(args.getBoolean(1), callbackContext);
            break;
        case "turnOffPush":
            turnOffPush(callbackContext);
            break;
        case "turnOnPush":
            turnOnPush(callbackContext);
            break;
        case "subscribe":
            subscribe(args.getString(1), callbackContext);
            break;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The processing method returns the result to JavaScript. The result will be written to the nativeToJsMessageQueue queue.&lt;br&gt;
            callBack.sendPluginResult(new PluginResult(PluginResult.Status.OK,autoInit));&lt;/p&gt;

&lt;p&gt;2)  HmsInstanceId&lt;/p&gt;

&lt;p&gt;In the HmsPush.js file, call the runHmsInstance API in asynchronous mode to transfer the message to the Android platform. The Android platform returns the result through a Promise.&lt;br&gt;
The message is transferred to the HmsPushInstanceId class, whose execute method routes the message to the appropriate processing method based on the action type in the message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void execute(String action, final JSONArray args, final CallbackContext callbackContext) throws JSONException {
    if (!action.equals("init"))
        hmsLogger.startMethodExecutionTimer(action);

    switch (action) {
        case "init":
            Log.i("HMSPush", "HMSPush initialized ");
            break;
        case "enableLogger":
            enableLogger(callbackContext);
            break;
        case "disableLogger":
            disableLogger(callbackContext);
            break;
        case "getToken":
            getToken(args.length() &amp;gt; 1 ? args.getString(1) : Core.HCM, callbackContext);
            break;
        case "getAAID":
            getAAID(callbackContext);
            break;
        case "getCreationTime":
            getCreationTime(callbackContext);
            break;
        // ... other actions handled similarly ...
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, the processing method returns the result to JavaScript. The result will be written to the nativeToJsMessageQueue queue.&lt;br&gt;
            callBack.sendPluginResult(new PluginResult(PluginResult.Status.OK,autoInit));&lt;/p&gt;

&lt;p&gt;This process is similar to that for HmsPushMessaging. The main difference is that HmsInstanceId is used for HmsPushInstanceId-related APIs, and HmsMessaging is used for HmsPushMessaging-related APIs.&lt;/p&gt;

&lt;p&gt;3)  localNotification&lt;/p&gt;

&lt;p&gt;In the HmsLocalNotification.js file, call the run API in asynchronous mode to transfer the message to the Android platform. The Android platform returns the result through a Promise.&lt;br&gt;
The message is transferred to the HmsLocalNotification class, whose execute method routes the message to the appropriate processing method based on the action type in the message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void execute(String action, final JSONArray args, final CallbackContext callbackContext) throws JSONException {
    switch (action) {
        case "localNotification":
            localNotification(args, callbackContext);
            break;
        case "localNotificationSchedule":
            localNotificationSchedule(args.getJSONObject(1), callbackContext);
            break;
        case "cancelAllNotifications":
            cancelAllNotifications(callbackContext);
            break;
        case "cancelNotifications":
            cancelNotifications(callbackContext);
            break;
        case "cancelScheduledNotifications":
            cancelScheduledNotifications(callbackContext);
            break;
        case "cancelNotificationsWithId":
            cancelNotificationsWithId(args.getJSONArray(1), callbackContext);
            break;
        // ... other actions handled similarly ...
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Call sendPluginResult to return the result. For localNotification, however, the result is returned only after the notification is sent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.    Perform message push event callbacks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to the method calling, message push involves listening for many events, for example, receiving common messages, data messages, and tokens.&lt;/p&gt;

&lt;p&gt;The callback process starts from Android.&lt;br&gt;
In Android, the callback method is defined in HmsPushMessageService.java.&lt;/p&gt;

&lt;p&gt;Based on the SDK requirements, you can opt to redefine certain callback methods, such as onMessageReceived, onDeletedMessages, and onNewToken.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U7TX7oIP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1yemh8qmg2c3avjt8jkp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U7TX7oIP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1yemh8qmg2c3avjt8jkp.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When an event is triggered, an event notification is sent to JavaScript.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static void runJS(final CordovaPlugin plugin, final String jsCode) {
    if (plugin == null)
        return;
    Log.d(TAG, "runJS()");

    plugin.cordova.getActivity().runOnUiThread(() -&amp;gt; {
        CordovaWebViewEngine engine = plugin.webView.getEngine();
        if (engine == null) {
            plugin.webView.loadUrl("javascript:" + jsCode);

        } else {
            engine.evaluateJavascript(jsCode, (result) -&amp;gt; {

            });
        }
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each event is defined and registered in HmsPushEvent.js.&lt;br&gt;
exports.REMOTE_DATA_MESSAGE_RECEIVED = "REMOTE_DATA_MESSAGE_RECEIVED";&lt;br&gt;
exports.TOKEN_RECEIVED_EVENT = "TOKEN_RECEIVED_EVENT";&lt;br&gt;
exports.ON_TOKEN_ERROR_EVENT = "ON_TOKEN_ERROR_EVENT";&lt;br&gt;
exports.NOTIFICATION_OPENED_EVENT = "NOTIFICATION_OPENED_EVENT";&lt;br&gt;
exports.LOCAL_NOTIFICATION_ACTION_EVENT = "LOCAL_NOTIFICATION_ACTION_EVENT";&lt;br&gt;
exports.ON_PUSH_MESSAGE_SENT = "ON_PUSH_MESSAGE_SENT";&lt;br&gt;
exports.ON_PUSH_MESSAGE_SENT_ERROR = "ON_PUSH_MESSAGE_SENT_ERROR";&lt;br&gt;
exports.ON_PUSH_MESSAGE_SENT_DELIVERED = "ON_PUSH_MESSAGE_SENT_DELIVERED";&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function onPushMessageSentDelivered(result) {
  window.registerHMSEvent(exports.ON_PUSH_MESSAGE_SENT_DELIVERED, result);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;exports.onPushMessageSentDelivered = onPushMessageSentDelivered;&lt;/p&gt;

&lt;p&gt;Please note that event initialization must be performed in your app. Otherwise, event listening will fail. For more details, please refer to eventListeners.js in the demo.&lt;/p&gt;

&lt;p&gt;If a callback is triggered in Java but not received in JavaScript, check whether the event initialization has been performed.&lt;/p&gt;

&lt;p&gt;Once this is done, JavaScript will be able to receive and process events triggered in Android. You can also follow this process to add a new event.&lt;/p&gt;
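The register-then-dispatch flow can be illustrated with a minimal, framework-free sketch. The EventRegistry class below is hypothetical; it merely mirrors what window.registerHMSEvent on the JavaScript side and runJS on the Android side do together:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch of event registration and dispatch: a handler must be
// registered before the native side fires the event, or the event is dropped.
public class EventRegistry {
    private final Map<String, Consumer<String>> listeners = new HashMap<>();

    // Plays the role of registerHMSEvent in HmsPushEvent.js.
    public void register(String event, Consumer<String> handler) {
        listeners.put(event, handler);
    }

    // Plays the role of runJS(): invoked when Android pushes an event.
    // Returns false when no listener was registered, i.e. the event is lost.
    public boolean dispatch(String event, String payload) {
        Consumer<String> handler = listeners.get(event);
        if (handler == null) {
            return false;
        }
        handler.accept(payload);
        return true;
    }

    public static void main(String[] args) {
        EventRegistry registry = new EventRegistry();
        StringBuilder token = new StringBuilder();
        registry.register("TOKEN_RECEIVED_EVENT", token::append);
        registry.dispatch("TOKEN_RECEIVED_EVENT", "sample-token");
        System.out.println(token); // sample-token
    }
}
```

Because dispatch drops events that have no registered handler, registering listeners before any event can fire is exactly the initialization requirement described above.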

&lt;p&gt;&lt;strong&gt;5. Summary&lt;/strong&gt;&lt;br&gt;
The description above illustrates how the plugin implements JavaScript-Java communication. The methods of most kits can be called in a similar manner. However, Map Kit, Ads Kit, and other kits that need to display images or videos (such as maps and native ads) require a different method, which will be introduced in a later article.&lt;/p&gt;

&lt;p&gt;To learn more, please visit:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms?ha_source=hms1"&gt;HUAWEI Developers official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development?ha_source=hms1"&gt;Development Guide&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/r/HMSCore/"&gt;Reddit&lt;/a&gt;to join developer discussions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/HMS-Core"&gt;GitHub&lt;/a&gt; or Gitee to download the &lt;a href="https://github.com/HMS-Core"&gt;demo&lt;/a&gt; and sample code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

</description>
      <category>cordova</category>
    </item>
    <item>
      <title>How Fingerprint and Facial Authentication in Mission: Impossible Can be Brought to Life</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Thu, 15 Jul 2021 02:51:23 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/how-fingerprint-and-facial-authentication-in-mission-impossible-can-be-brought-to-life-780</link>
      <guid>https://dev.to/viviclevercoder/how-fingerprint-and-facial-authentication-in-mission-impossible-can-be-brought-to-life-780</guid>
      <description>&lt;p&gt;Have you ever marveled at the impressive technology in sci-fi movies, such as the floating touchscreen in Iron Man and the fingerprint and iris authentication in Mission: Impossible?&lt;/p&gt;

&lt;p&gt;Such cutting-edge technology has already entered our day-to-day lives, with fingerprint and facial authentication being widely used.&lt;/p&gt;

&lt;p&gt;Users are paying more and more attention to privacy protection and thus have higher requirements for app security, which can be met by authentication based on the unique nature of fingerprint and facial data.&lt;br&gt;
When used for unlocking devices, making payments, and accessing files, fingerprint and facial authentication effectively reduces the risk of account theft and information leakage.&lt;/p&gt;

&lt;p&gt;Such an authentication mode can be realized with &lt;a href="https://developer.huawei.com/consumer/en/hms/huawei-fido?ha_source=hms1"&gt;HUAWEI FIDO&lt;/a&gt;: it arms your app with FIDO2 client capabilities based on the WebAuthn standard, as well as the fingerprint and facial authentication capabilities of BioAuthn.&lt;/p&gt;

&lt;p&gt;FIDO ensures that the authentication result is secure and reliable by checking the system integrity and using cryptographic key verification. It allows password-free authentication during sign-in, a general solution that can be easily integrated with the existing account infrastructure.&lt;/p&gt;

&lt;p&gt;Let's see how to integrate the fingerprint and facial authentication capabilities in FIDO.&lt;br&gt;
Perform the steps below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/config-agc-0000001050262772?ha_source=hms1"&gt;Configure app information in AppGallery Connect.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/integrating-sdk-0000001050176697?ha_source=hms1"&gt;Integrate the HMS Core SDK.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/bioauthn-androidx-sdk-0000001055876259?ha_source=hms1"&gt;Integrate the BioAuthn-AndroidX SDK.&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Click the hyperlinks in steps 1 and 2 to learn more about them.&lt;br&gt;
Note that step 2 involves two SDKs:&lt;/p&gt;

&lt;p&gt;Bioauthn-AndroidX: implementation 'com.huawei.hms:fido-bioauthn-androidx:5.2.0.301'&lt;/p&gt;

&lt;p&gt;BioAuthn: implementation 'com.huawei.hms:fido-bioauthn:5.2.0.301'&lt;br&gt;
They're slightly different from each other:&lt;br&gt;
The BioAuthn-AndroidX SDK provides a unified UI for fingerprint authentication, so you do not need to design one yourself, whereas the BioAuthn SDK requires you to design a fingerprint authentication UI for your app.&lt;/p&gt;

&lt;p&gt;Below is the detailed description of the difference in the &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/faq-0000001051069994-V5?ha_source=hms1"&gt;FAQs&lt;/a&gt; section of this kit:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oCLTPwD0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1tm8kgvwbl4rg006kkd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oCLTPwD0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1tm8kgvwbl4rg006kkd.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article gives an introduction about how to integrate the BioAuthn-AndroidX SDK. You can download its demo &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Examples-V5/sample-code-0000001050158985-V5?ha_source=hms1"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrating the BioAuthn-AndroidX SDK&lt;/strong&gt;&lt;br&gt;
Notes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The fingerprint and facial authentication capabilities cannot be used on a rooted device.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Before testing, make sure that you've enrolled facial data and a fingerprint in the testing device. Otherwise, an error code will be reported.&lt;br&gt;
Go to Settings &amp;gt; Biometrics &amp;amp; password on the device to enroll facial data and a fingerprint.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Fingerprint Authentication&lt;/strong&gt;&lt;br&gt;
To use the fingerprint authentication capability, perform the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Initialize the &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-References-V5/bioauthnprompt_x-0000001050267874-V5?ha_source=hms1"&gt;BioAuthnPrompt&lt;/a&gt; object:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BioAuthnPrompt bioAuthnPrompt = new BioAuthnPrompt(this, ContextCompat.getMainExecutor(this), new BioAuthnCallback() {
    @Override
    public void onAuthError(int errMsgId, CharSequence errString) {
        showResult("Authentication error. errorCode=" + errMsgId + ",errorMessage=" + errString);
    }
    @Override
    public void onAuthSucceeded(BioAuthnResult result) {
        showResult("Authentication succeeded. CryptoObject=" + result.getCryptoObject());
    }
    @Override
    public void onAuthFailed() {
        showResult("Authentication failed.");
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;2. Configure prompt information and perform authentication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Customize the prompt information.
BioAuthnPrompt.PromptInfo.Builder builder =
        new BioAuthnPrompt.PromptInfo.Builder().setTitle("This is the title.")
                .setSubtitle("This is the subtitle.")
                .setDescription("This is the description.");

// The user is allowed to authenticate with methods other than biometrics.
builder.setDeviceCredentialAllowed(true);

BioAuthnPrompt.PromptInfo info = builder.build();

// Perform authentication.
bioAuthnPrompt.auth(info);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the configuration is complete, fingerprint authentication can be performed on a screen similar to the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2FYclDFa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jcjum04ivjkpvsqcn0h3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2FYclDFa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jcjum04ivjkpvsqcn0h3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Facial Authentication&lt;/strong&gt;&lt;br&gt;
There are many restrictions on using the facial authentication capability. For details, please refer to the corresponding &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/faq-0000001051069994?ha_source=hms1"&gt;FAQs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2vY4CGfx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/me6q6xwjla647qfzv206.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2vY4CGfx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/me6q6xwjla647qfzv206.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Check whether the camera permission has been granted to your app. (Note that this permission is not needed on devices running EMUI 10.1 or later.)
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    int permissionCheck = 
    ContextCompat.checkSelfPermission(MainActivity.this, 
    Manifest.permission.CAMERA);
    if (permissionCheck != PackageManager.PERMISSION_GRANTED) {
        showResult("Grant the camera permission first.");

        ActivityCompat.requestPermissions(MainActivity.this, new 
    String[] {Manifest.permission.CAMERA}, 1);
        return;
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt; Check whether the device supports facial authentication.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FaceManager faceManager = new FaceManager(this);

int errorCode = faceManager.canAuth();
if (errorCode != 0) {
    resultTextView.setText("");
    showResult("The device does not support facial authentication. errorCode=" + errorCode);
    return;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt; Perform facial authentication.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;int flags = 0;
Handler handler = null;
CryptoObject crypto = null;

faceManager.auth(crypto, cancellationSignal, flags, new BioAuthnCallback() {
    @Override
    public void onAuthError(int errMsgId, CharSequence errString) {
        showResult("Authentication error. errorCode=" + errMsgId + ",errorMessage=" + errString
                + (errMsgId == 1012 ? " The camera permission has not been granted." : ""));
    }

    @Override
    public void onAuthHelp(int helpMsgId, CharSequence helpString) {
        showResult("This is the prompt information during authentication. helpMsgId=" + helpMsgId + ",helpString=" + helpString + "\n");
    }

    @Override
    public void onAuthSucceeded(BioAuthnResult result) {
        showResult("Authentication succeeded. CryptoObject=" + result.getCryptoObject());
    }

    @Override
    public void onAuthFailed() {
        showResult("Authentication failed.");
    }
}, handler);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is all the code needed for facial authentication; call it to use the capability.&lt;br&gt;
Note that this capability has no default UI, so you need to design one as needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application Scenarios&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Fingerprint Authentication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fingerprint authentication is commonly used to verify users before they make payments.&lt;br&gt;
It can also be integrated into file protection apps so that only users who pass fingerprint authentication can access the relevant files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Facial Authentication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This capability works well in any scenario where fingerprint authentication can be used. For file protection apps, facial authentication can even provide stronger protection than fingerprint authentication alone.&lt;/p&gt;

&lt;p&gt;This is because such apps share a common flaw: they make it obvious that a file is important or sensitive.&lt;br&gt;
Therefore, a hacker can access the file once they find a way to bypass the app's fingerprint authentication, which is difficult but not impossible.&lt;/p&gt;

&lt;p&gt;To avoid this, in addition to fingerprint authentication, a file protection app can adopt facial authentication "secretly", since this capability does not require a UI. The app displays the real file only after the user passes both fingerprint and facial authentication; otherwise, it displays a fake file.&lt;/p&gt;

&lt;p&gt;In this way, it can improve the protection of user privacy.&lt;br&gt;
The following is the sample code for developing such a function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;faceManager.auth(crypto, cancellationSignal, flags, new BioAuthnCallback() {
    @Override
    public void onAuthError(int errMsgId, CharSequence errString) {
        if(isFingerprintSuccess){// Fingerprint authentication succeeded but facial authentication failed.
            // Display a fake file.
            showFakeFile();
        }
    }

    @Override
    public void onAuthHelp(int helpMsgId, CharSequence helpString) {
    }

    @Override
    public void onAuthSucceeded(BioAuthnResult result) {
        if(isFingerprintSuccess){// Fingerprint authentication succeeded.
            // Display the real file.
            showRealFile();
        }else {// Fingerprint authentication failed.
            // Display a fake file.
            showFakeFile();
        }

    }

    @Override
    public void onAuthFailed() {
        if(isFingerprintSuccess){// Fingerprint authentication succeeded but facial authentication failed.
            // Display a fake file.
            showFakeFile();
        }

    }
}, handler);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
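The decision logic above boils down to a small pure function, which can be sketched and tested independently of the biometric callbacks. The FileGuard class below is illustrative, not part of the SDK:

```java
// Illustrative sketch of the dual-authentication decision: the real file is
// shown only when BOTH the fingerprint and the "secret" facial check pass.
// Any other combination yields the fake file, so a failed facial check is
// indistinguishable from success to an attacker.
public class FileGuard {
    public static String fileToShow(boolean fingerprintOk, boolean faceOk) {
        return (fingerprintOk && faceOk) ? "real" : "fake";
    }

    public static void main(String[] args) {
        System.out.println(fileToShow(true, false)); // fake
    }
}
```

In the callbacks above, onAuthSucceeded with a failed fingerprint, onAuthFailed, and onAuthError all map to the "fake" branch of this function.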



&lt;p&gt;To learn more, please visit:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/hms?ha_source=hms1"&gt;HUAWEI Developers official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.huawei.com/consumer/en/doc/development?ha_source=hms1"&gt;Development Guide&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/r/HMSCore/"&gt;Reddit&lt;/a&gt;to join developer discussions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/HMS-Core"&gt;GitHub&lt;/a&gt; or Gitee to download the &lt;a href="https://github.com/HMS-Core"&gt;demo&lt;/a&gt; and sample code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Audio File Transcription, for Super-Efficient Recording</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Mon, 12 Jul 2021 08:20:10 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/audio-file-transcription-for-super-efficient-recording-33f5</link>
      <guid>https://dev.to/viviclevercoder/audio-file-transcription-for-super-efficient-recording-33f5</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Converting audio into text has a wide range of applications: generating video subtitles, taking meeting minutes, and writing interview transcripts. HUAWEI ML Kit's audio file transcription service makes doing so easier than ever before, converting audio files into meticulously accurate text, with correct punctuation as well!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Actual Effects&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Build and run an app with audio file transcription integrated. Then, select a local audio file and convert it into text.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6s0ep7ao1axd08c80gm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6s0ep7ao1axd08c80gm.png" alt="Transcription result screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development Preparations&lt;/strong&gt; &lt;br&gt;
For details about configuring the Huawei Maven repository and integrating the audio file transcription SDK, please refer to the &lt;a href="https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/audio-sdk-0000001050038090-V5" rel="noopener noreferrer"&gt;Development Guide&lt;/a&gt; of ML Kit on HUAWEI Developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Declaring Permissions in the AndroidManifest.xml File&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Open the AndroidManifest.xml file in the main folder. Add the network connection, network status access, and storage read permissions (android.permission.INTERNET, android.permission.ACCESS_NETWORK_STATE, and android.permission.READ_EXTERNAL_STORAGE) before the application element.&lt;br&gt;
Please note that these permissions need to be applied for dynamically. Otherwise, Permission Denied will be reported.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development Procedure&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Creating and Initializing an Audio File Transcription Engine&lt;/strong&gt;&lt;br&gt;
Override onCreate in MainActivity to create an audio transcription engine.&lt;br&gt;
private MLRemoteAftEngine mAnalyzer;&lt;/p&gt;

&lt;p&gt;mAnalyzer = MLRemoteAftEngine.getInstance();&lt;br&gt;
mAnalyzer.init(getApplicationContext());&lt;br&gt;
mAnalyzer.setAftListener(mAsrListener);&lt;/p&gt;

&lt;p&gt;Use MLRemoteAftSetting to configure the engine. The service currently supports Mandarin Chinese and English, that is, the options of mLanguage are zh and en.&lt;br&gt;
MLRemoteAftSetting setting = new MLRemoteAftSetting.Factory()&lt;br&gt;
        .setLanguageCode(mLanguage)&lt;br&gt;
        .enablePunctuation(true)&lt;br&gt;
        .enableWordTimeOffset(true)&lt;br&gt;
        .enableSentenceTimeOffset(true)&lt;br&gt;
        .create();&lt;/p&gt;

&lt;p&gt;enablePunctuation indicates whether to automatically punctuate the converted text. The default value is false.&lt;br&gt;
If this parameter is set to true, the converted text is automatically punctuated; otherwise, it is not.&lt;/p&gt;

&lt;p&gt;enableWordTimeOffset indicates whether to generate the text transcription result of each audio segment with the corresponding offset. The default value is false. You need to set this parameter only when the audio duration is less than 1 minute.&lt;br&gt;
If this parameter is set to true, the offset information is returned along with the text transcription result. This applies to the transcription of short audio files with a duration of 1 minute or shorter.&lt;br&gt;
If this parameter is set to false, only the text transcription result of the audio file will be returned.&lt;/p&gt;

&lt;p&gt;enableSentenceTimeOffset indicates whether to output the offset of each sentence in the audio file. The default value is false.&lt;br&gt;
If this parameter is set to true, the offset information is returned along with the text transcription result.&lt;br&gt;
If this parameter is set to false, only the text transcription result of the audio file will be returned.&lt;/p&gt;
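As a side note, the sentence-level offsets make use cases such as subtitle generation straightforward. Below is a minimal, SDK-independent sketch; the Segment record is a hypothetical stand-in for MLRemoteAftResult.Segment:

```java
import java.util.List;

// Illustrative sketch: turn (text, startTime, endTime) segments, like those
// returned by getSentences(), into SRT-style subtitle entries.
public class SubtitleFormatter {
    public record Segment(String text, int startMs, int endMs) {}

    // Format milliseconds as an SRT timestamp: HH:MM:SS,mmm.
    static String timestamp(int ms) {
        return String.format("%02d:%02d:%02d,%03d",
                ms / 3600000, (ms / 60000) % 60, (ms / 1000) % 60, ms % 1000);
    }

    public static String toSrt(List<Segment> sentences) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < sentences.size(); i++) {
            Segment s = sentences.get(i);
            sb.append(i + 1).append('\n')
              .append(timestamp(s.startMs())).append(" --> ")
              .append(timestamp(s.endMs())).append('\n')
              .append(s.text()).append("\n\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(toSrt(List.of(new Segment("Hello there.", 0, 1500))));
    }
}
```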

&lt;p&gt;&lt;strong&gt;Creating a Listener Callback to Process the Transcription Result&lt;/strong&gt;&lt;br&gt;
private MLRemoteAftListener mAsrListener = new MLRemoteAftListener()&lt;/p&gt;

&lt;p&gt;After the listener is initialized, call startTask in the listener to start the transcription.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Override
public void onInitComplete(String taskId, Object ext) {
    Log.i(TAG, "MLRemoteAftListener onInitComplete" + taskId);
    mAnalyzer.startTask(taskId);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Override onUploadProgress, onEvent, and onResult in MLRemoteAftListener.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
    Log.i(TAG, " MLRemoteAftListener onUploadProgress is " + taskId + " " + progress);
}

@Override
public void onEvent(String taskId, int eventId, Object ext) {
    Log.e(TAG, "MLAsrCallBack onEvent" + eventId);
    if (MLAftEvents.UPLOADED_EVENT == eventId) { // The file is uploaded successfully.
        showConvertingDialog();
        startQueryResult(); // Obtain the transcription result.
    }
}

@Override
public void onResult(String taskId, MLRemoteAftResult result, Object ext) {
    Log.i(TAG, "onResult get " + taskId);
    if (result != null) {
        Log.i(TAG, "onResult isComplete " + result.isComplete());
        if (!result.isComplete()) {
            return;
        }
        if (null != mTimerTask) {
            mTimerTask.cancel();
        }
        if (result.getText() != null) {
            Log.e(TAG, result.getText());
            dismissTransferringDialog();
            showCovertResult(result.getText());
        }

        List&amp;lt;MLRemoteAftResult.Segment&amp;gt; segmentList = result.getSegments();
        if (segmentList != null &amp;amp;&amp;amp; segmentList.size() != 0) {
            for (MLRemoteAftResult.Segment segment : segmentList) {
                Log.e(TAG, "MLAsrCallBack segment  text is : " + segment.getText() + ", startTime is : " + segment.getStartTime() + ". endTime is : " + segment.getEndTime());
            }
        }

        List&amp;lt;MLRemoteAftResult.Segment&amp;gt; words = result.getWords();
        if (words != null &amp;amp;&amp;amp; words.size() != 0) {
            for (MLRemoteAftResult.Segment word : words) {
                Log.e(TAG, "MLAsrCallBack word  text is : " + word.getText() + ", startTime is : " + word.getStartTime() + ". endTime is : " + word.getEndTime());
            }
        }

        List&amp;lt;MLRemoteAftResult.Segment&amp;gt; sentences = result.getSentences();
        if (sentences != null &amp;amp;&amp;amp; sentences.size() != 0) {
            for (MLRemoteAftResult.Segment sentence : sentences) {
                Log.e(TAG, "MLAsrCallBack sentence  text is : " + sentence.getText() + ", startTime is : " + sentence.getStartTime() + ". endTime is : " + sentence.getEndTime());
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Processing the Transcription Result in Polling Mode&lt;/strong&gt;&lt;br&gt;
After the transcription is completed, call getLongAftResult to obtain the transcription result. Process the obtained result every 10 seconds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private void startQueryResult() {
    Timer mTimer = new Timer();
    mTimerTask = new TimerTask() {
        @Override
        public void run() {
            getResult();
        }
    };
    mTimer.schedule(mTimerTask, 5000, 10000); // Process the obtained long speech transcription result every 10s.
}

private void getResult() {
    Log.e(TAG, "getResult");
    mAnalyzer.setAftListener(mAsrListener);
    mAnalyzer.getLongAftResult(mLongTaskId);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/tagged/huawei-mobile-services"&gt;Stack Overflow&lt;/a&gt; to solve integration problems&lt;/p&gt;

&lt;p&gt;Follow our official account for the latest HMS Core-related news and updates.&lt;/p&gt;
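The polling approach used in startQueryResult can be exercised with plain java.util.Timer. The ResultPoller class below is an illustrative, SDK-independent sketch of the same schedule-then-cancel pattern (short delays are used in place of the 5 s / 10 s intervals):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the polling pattern: schedule a TimerTask with an
// initial delay and a fixed period, and cancel it once the result is complete.
public class ResultPoller {
    public static int poll(long delayMs, long periodMs, int completeAfter) {
        Timer timer = new Timer();
        CountDownLatch done = new CountDownLatch(1);
        final int[] queries = {0};
        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                queries[0]++;                      // plays the role of getResult()
                if (queries[0] >= completeAfter) { // like result.isComplete()
                    cancel();                      // stop polling, like mTimerTask.cancel()
                    done.countDown();
                }
            }
        };
        timer.schedule(task, delayMs, periodMs);
        try {
            done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        timer.cancel();
        return queries[0];
    }

    public static void main(String[] args) {
        System.out.println(poll(10, 20, 3)); // 3
    }
}
```

Cancelling the task from inside run(), as onResult does once isComplete() is true, guarantees no further queries are issued after the final result arrives.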

</description>
    </item>
    <item>
      <title>AppGallery and HMS Core Enable Game Developers in Vietnam</title>
      <dc:creator>Vivi-clevercoder</dc:creator>
      <pubDate>Fri, 25 Dec 2020 03:42:35 +0000</pubDate>
      <link>https://dev.to/viviclevercoder/appgallery-and-hms-core-enable-game-developers-in-vietnam-bih</link>
      <guid>https://dev.to/viviclevercoder/appgallery-and-hms-core-enable-game-developers-in-vietnam-bih</guid>
      <description>&lt;p&gt;On November 19 2020, Vietnam's Huawei Developers Day ended in Hanoi. The theme of the seminar was Unlock the global market, focusing on the game field, attracting nearly 60 leading game developers from Vietnam.&lt;/p&gt;

&lt;p&gt;Huawei Advertising Network: Leading Game Revenue Growth Strategy in Vietnam&lt;/p&gt;

&lt;p&gt;Maintaining revenue growth is critical for a game's long-term success, but this is difficult for Vietnamese developers when foreign giants hold a large share of the domestic market. For game developers who want to grow over the long term, expanding into the international market is therefore an inevitable trend.&lt;/p&gt;

&lt;p&gt;To address this issue, Huawei AppGallery, Huawei's mobile app store, has invested in developing Huawei's advertising network. Through it, developers can realize multiple revenue streams, even from free apps. Developers can quickly establish advertising partnerships with Huawei, and by integrating Ads Kit of HMS Core they can earn revenue from advertisements in their games. Ads Kit gives developers access to high-quality promotions from advertisers in more than 220 countries and regions.&lt;/p&gt;

&lt;p&gt;One of the guests at the event, Mr. Dang Thanh Long, Chief Technology Officer of Segu Company, a well-known developer, said, "Huawei AppGallery is a huge partner for us and will help us quickly realize our desire to bring value to global users through mobile apps and make life better and happier." With the advantages of Huawei AppGallery, Vietnamese developers no longer need to rely on a single market for revenue. Instead, they can proactively look for opportunities to increase sales and penetrate global markets.&lt;/p&gt;

&lt;p&gt;Local policy support and technical support: Huawei AppGallery Plus HMS Core&lt;/p&gt;

&lt;p&gt;The local developer support policy is an innovation that sharpens Huawei AppGallery's competitive edge. The Huawei Developer Fund was established to help developers obtain resources and reinvest in game quality and content for long-term development. On the technical side, HMS Core can be integrated into applications to meet four key requirements: graphics development, network acceleration, technical innovation, and monetization. Throughout this process, a Vietnamese technical team and local experts are also provided to help developers overcome language barriers and cultural differences when releasing games in the marketplace.&lt;/p&gt;

&lt;p&gt;In addition to improving its content discovery and recommendation system, Huawei AppGallery also stands out for its revenue-sharing rate between the app store and developers. Specifically, by working with Huawei AppGallery, developers can earn up to 90% of the revenue from ads in their apps. This share applies until the end of 2020, after which an 80/20 split will apply in 2021.&lt;/p&gt;

&lt;p&gt;Huawei AppGallery is set to become a strong platform for Vietnamese developers seeking long-term growth and access to international markets, while HMS Core helps their apps gain richer functionality after integration.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
