Deepak Raj

Real-time sign language detection Android application using TensorFlow Lite

In this article, we are going to convert a TensorFlow model to a TensorFlow Lite (tflite) model and use it in a real-time sign language detection app. We will not cover the training part here; I used the TensorFlow Object Detection API for that. First, we will export the inference model from the checkpoint created while training our model. This article is aimed at those who are getting started with TensorFlow Lite, and I hope it will be helpful. The application link is at the end of the article.


1. First, we will freeze the inference graph using the TensorFlow OD API.

Freezing the inference graph
Freezing is the process of identifying and saving all of the required pieces (graph, weights, etc.) in a single file that you can use easily.

python models/research/object_detection/exporter_main_v2.py \
 --input_type image_tensor \
 --pipeline_config_path output/exported_models/training/001/pipeline.config \
 --trained_checkpoint_dir output/exported_models/training/001/ \
 --output_directory output/exported_models/inference_model
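
If you want to sanity-check the export before moving on, you can load the resulting SavedModel and list its signatures. This is a minimal sketch, assuming the output directory above; the script name is arbitrary.

# check_export.py -- optional sanity check of the exported SavedModel
import tensorflow as tf

model = tf.saved_model.load("output/exported_models/inference_model/saved_model")
print(list(model.signatures.keys()))  # expect ['serving_default']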

2. Then we will convert the model to a TFLite-compatible inference graph.

python models/research/object_detection/export_tflite_graph_tf2.py \
 --pipeline_config_path output/exported_models/inference_model/pipeline.config \
 --trained_checkpoint_dir output/exported_models/inference_model/checkpoint \
 --output_directory output/exported_models/tflite_inference
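
This writes a TFLite-friendly SavedModel under output/exported_models/tflite_inference/saved_model, which the quantization step below converts into the final .tflite file.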

3. Then we will apply post-training quantization and save the .tflite model.

# save this file as postQuantization.py
import numpy as np
import tensorflow as tf

saved_model_dir = "output/exported_models/tflite_inference/saved_model"
# Output path for the quantized model (adjust as needed).
tf_lite_model_path = "output/exported_models/tflite_inference/model.tflite"


def representative_dataset():
    # Random inputs used only for calibration; swap in real preprocessed
    # images from your dataset for better quantization accuracy.
    for _ in range(100):
        data = np.random.rand(1, 320, 320, 3)
        yield [data.astype(np.float32)]


converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.allow_custom_ops = True  # needed for the detection post-processing custom op
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.inference_input_type = tf.uint8   # quantized input
converter.inference_output_type = tf.uint8  # quantized output
tflite_quant_model = converter.convert()

with tf.io.gfile.GFile(tf_lite_model_path, 'wb') as f:
    f.write(tflite_quant_model)
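
After conversion, it can be worth loading the quantized model back with the TFLite interpreter and checking the input and output tensor details. This is a minimal sketch, assuming the tf_lite_model_path used above.

# verify_tflite.py -- optional check of the quantized model
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="output/exported_models/tflite_inference/model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
print(input_details["shape"], input_details["dtype"])  # expect [1 320 320 3] and uint8
print([d["dtype"] for d in interpreter.get_output_details()])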

4. Write metadata into the tflite model so it can be used with the Android app.

'''Write metadata to the TFLite model.
Save this file as MetaDataWriterFile.py.
'''

from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils
from tflite_support import metadata

ObjectDetectorWriter = object_detector.MetadataWriter
_MODEL_PATH = <tf_lite_model_path>
_LABEL_FILE = <label_path>
_SAVE_TO_PATH = <path_to_tflite_path/tflite_with_metadata.tflite>

writer = ObjectDetectorWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [127.5], [127.5], [_LABEL_FILE])
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)

# Verify the populated metadata and associated files.
displayer = metadata.MetadataDisplayer.with_model_file(_SAVE_TO_PATH)
print("Metadata populated:")
print(displayer.get_metadata_json())
print("Associated file(s) populated:")
print(displayer.get_packed_associated_file_list())
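
Fill in the three placeholder paths above, then run the script with python MetaDataWriterFile.py. The printed metadata JSON and associated file list let you confirm that the normalization parameters and the label file were packed into the model.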

5. Clone the TensorFlow examples repository from the TensorFlow GitHub account.

Download Android Studio and the Android SDK to build and run the detection app.

git clone https://github.com/tensorflow/examples

Copy your tflite_with_metadata.tflite file, rename it to `detect.tflite`, and save it as `app/src/main/assets/detect.tflite`.

Set TF_OD_API_INPUT_SIZE to 320 (the model's input size) in app\src\main\java\org\tensorflow\lite\examples\detection\DetectorActivity.java.

Create a virtual device or connect your phone, then build and run the object detection application.

If this article was helpful, please hit like and subscribe to the channel to encourage us to make more videos and articles.

In the next article, I will share how to train the TensorFlow SSD MobileNet model for object detection.

GitHub link with all the data, including the Android app:
American_sign_Language_detection

Download the APK for testing from Google Drive

Thanks to David Lee and Roboflow for the dataset.
