Jackson for HMS Core

Top Tips for Developing a Recordist Function

Efficient records management is more relevant now than ever. In the digital age, an ever-growing volume of information, including audio and video, has to be processed in a limited amount of time. This is what makes a real-time transcription function so valuable: it is useful in many scenarios.
In audio or video conferencing, such a function records meeting minutes that can be referred to later, which is far more convenient than writing them down by hand. I've seen my kids struggling to take notes during their online courses, so I know how much easier this process becomes with the help of transcription: it removes the need to write down everything the teacher says, allowing the kids to focus on the lecture itself and easily review the content later. Live captions likewise provide viewers with real-time subtitles for a better watching experience.
As a coder, I'm a believer in "actions speak louder than words". That's why I built my own real-time transcription function on top of the real-time transcription capability of ML Kit. Here's how it turned out.

Demo

The function transcribes up to five hours of speech in real time, supporting Chinese, English, a mix of Chinese and English, and French. In addition, the output text is punctuated and contains timestamps.
There are a couple of requirements: support for French depends on the phone model, whereas Chinese and English are available on all phone models, and the function needs an Internet connection.
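Because the capability relies on the network, it is worth checking connectivity before kicking off recognition. Below is a minimal sketch using Android's ConnectivityManager; the helper name isNetworkAvailable is my own.

import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

// Returns true if the device currently has an active, connected network.
private boolean isNetworkAvailable(Context context) {
    ConnectivityManager cm =
            (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
    if (cm == null) {
        return false;
    }
    NetworkInfo activeNetwork = cm.getActiveNetworkInfo();
    return activeNetwork != null && activeNetwork.isConnected();
}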
Okay, let's move on to the point of this article: How I developed this real-time transcription function.

Development Procedure

i. Make necessary preparations. These are described in detail in the References section; a minimal sketch of the permission setup is shown below.
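Among other things, the preparations involve declaring the INTERNET and RECORD_AUDIO permissions in the manifest and requesting the microphone permission at runtime. Here is a minimal sketch of the runtime part, assuming it lives inside an Activity; the request code REQUEST_RECORD_AUDIO is an arbitrary value of my own.

import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

private static final int REQUEST_RECORD_AUDIO = 1001;

// Ask for the microphone permission before starting transcription.
private void checkRecordAudioPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.RECORD_AUDIO}, REQUEST_RECORD_AUDIO);
    }
}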
ii. Create and then configure a speech recognizer.

MLSpeechRealTimeTranscriptionConfig config = new MLSpeechRealTimeTranscriptionConfig.Factory()
    // Set the language, which can be Chinese, English, both Chinese and English, or French.
    .setLanguage(MLSpeechRealTimeTranscriptionConstants.LAN_ZH_CN)
    // Punctuate the text recognized from the speech.
    .enablePunctuation(true)
    // Enable the sentence-level time offset.
    .enableSentenceTimeOffset(true)
    // Enable the word-level time offset.
    .enableWordTimeOffset(true)
    .create();
MLSpeechRealTimeTranscription mSpeechRecognizer = MLSpeechRealTimeTranscription.getInstance();

iii. Create a callback for the speech recognition result listener.

// Use the callback to implement the MLSpeechRealTimeTranscriptionListener API and its methods
// (https://developer.huawei.com/consumer/en/doc/development/hiai-References/mlspeechrealtimetranscriptionlistener-0000001159518088).
protected class SpeechRecognitionListener implements MLSpeechRealTimeTranscriptionListener {
    @Override
    public void onStartListening() {
        // The recorder starts to receive speech.
    }

    @Override
    public void onStartingOfSpeech() {
        // The speech recognizer detects the user speaking.
    }

    @Override
    public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
        // Return the original PCM stream and audio power to the user. The API does not run in the main thread, and the return result is processed in a sub-thread.
    }

    @Override
    public void onRecognizingResults(Bundle partialResults) {
        // Receive recognized text from MLSpeechRealTimeTranscription.
    }

    @Override
    public void onError(int error, String errorMessage) {
        // Callback when an error occurs during recognition.
    }

    @Override
    public void onState(int state, Bundle params) {
        // Notify the app of the recognizer status change.
    }
}
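As noted in the callback comments, these methods are not guaranteed to run on the main thread, so results should be posted to the UI thread before being displayed. Here is a minimal sketch, assuming the listener is an inner class of an Activity that holds a TextView named transcriptView; the bundle key used to read the text is my assumption, so verify it against MLSpeechRealTimeTranscriptionConstants in the API reference.

@Override
public void onRecognizingResults(Bundle partialResults) {
    if (partialResults == null) {
        return;
    }
    // Assumed key; check MLSpeechRealTimeTranscriptionConstants for the exact one.
    final String text = partialResults.getString(MLSpeechRealTimeTranscriptionConstants.RESULTS_RECOGNIZING);
    if (text != null) {
        // Touch the UI only from the main thread.
        runOnUiThread(() -> transcriptView.setText(text));
    }
}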

iv. Bind the result listener to the speech recognizer.

mSpeechRecognizer.setRealTimeTranscriptionListener(new SpeechRecognitionListener());

v. Call startRecognizing to begin speech recognition.

mSpeechRecognizer.startRecognizing(config);

vi. Stop recognition and release resources occupied by the recognizer when the recognition is complete.

if (mSpeechRecognizer != null) {
    mSpeechRecognizer.destroy();
}
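A natural place to do this in an Android app is the Activity's onDestroy, so the recognizer is released even if the user leaves the screen while recording. A minimal sketch, assuming the recognizer is held as a field of the Activity:

@Override
protected void onDestroy() {
    super.onDestroy();
    // Release the recognizer together with the Activity.
    if (mSpeechRecognizer != null) {
        mSpeechRecognizer.destroy();
        mSpeechRecognizer = null;
    }
}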

References
