Aldo Ferlatti

Posted on • Originally published at Medium

React Native + Tensorflow.js - implementing a model


There are three reasons why I decided to write this post:

  1. Some time ago I came across an article about how to implement a machine learning model with React. It covered a simple Gaussian Naïve Bayes binary classifier built with scikit-learn that ran on a Flask backend, while the frontend was made in React. That is a genuinely useful skill and I recommend everyone to read it. However, I had a different problem. What if my model needs to be loaded onto the device, has to be mobile compatible, is more than 200 MB in size, and is made with TensorFlow? The ‘simple’ server solution doesn’t work anymore.
  2. A claim made on VentureBeat says that 87% of data science projects never make it into production. That means only about 1 in 10 projects is actually used. Considering all the money and time (a lot of time) needed to develop a model, the odds are not very motivating. After nine projects you spent your time and effort on end up in some cloud folder (because maybe someday they will be used), you start to question whether this is the right way and whether your next project will also be a waste of time.
  3. Lastly, not every company has a data science team to build models and a development team to implement those models. Sometimes, if you want your model to be used by people, you need to put it out there yourself or nobody will.

Following these points, I wanted to write about a way to get our hard-won, time-consuming models out there, in the world, by ourselves.

Here I will not write about building the model (it has already been built) but only about its implementation and use.

There are two paths to choose from for mobile development: native code or cross-platform. Just as the choice of development approach can vary, so can the choice for model processing. If you prefer native code, then a TensorFlow Lite approach would be the better option; on the other hand, a cross-platform approach like React Native lets you transfer knowledge from web development to mobile, consequently making TensorFlow.js (tfjs) a good choice.


As you probably already guessed, in this article I’ll be using the cross-platform path, therefore tfjs will be used as the central library. For the conversion part we need the Python library:

pip install tensorflowjs

And because we are trying to implement it with React Native, we need the adapter for the framework:

npm i @tensorflow/tfjs
npm i @tensorflow/tfjs-react-native

The implementation is done in four steps:

  1. Transform the model so it can be loaded onto the device and be used with tfjs
  2. Load the model
  3. Transform the input (an image) so it can be fed to the model
  4. And finally make predictions

Model transformation

Once you have trained your model and are satisfied with the results, you save the entire model in the SavedModel format. The SavedModel format is a directory containing a protobuf binary and a TensorFlow checkpoint, which can be loaded back in TensorFlow with the load_model function. But this format is not suitable for mobile and cannot be loaded by the TensorFlow.js library. For that, tfjs has a built-in converter which can turn a SavedModel into a JavaScript-compatible format (JSON + weights: more about it later).

To convert a saved model, use the following command. Be sure you are inside the root directory where your model is saved:

>tensorflowjs_converter --input_format=tf_saved_model --saved_model_tags=serve --weight_shard_size_bytes=30000000 "path_to_your/model/" "converted_model"

What does it do? The converter can handle three types of models (SavedModel, frozen model, and TensorFlow Hub modules), so we need to specify the type of the input model (the --input_format flag). The output is a JSON file containing the model’s dataflow graph and weight manifest, together with a collection of binary weight files.
If the shard size is smaller than the total size of the model’s weights, the weights are split into multiple files. However, to load the model with tfjs, we need the weights in a single file. Therefore, if you set a small shard size, keep in mind that you will need to merge the output files into one. The last two arguments are the input path, i.e. the location of your model, and the output directory where the generated files will be stored.

After running the above command, the result should be a model.json file and a set of group1-shardXofY binary files (in our case, it should be just one weight shard).
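
As a rough sketch, assuming a single shard, the output directory should look something like this (the exact shard file name depends on how many shards the converter produces):

converted_model/
  model.json
  group1-shard1of1.bin
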

Before jumping into the application, we need to import all the necessary packages and, more importantly, the files we just created with the converter.
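
A minimal sketch of what those imports might look like; the asset paths and file names here are assumptions based on my converter output, so adjust them to wherever you placed model.json and the weight shard in your project. Keep in mind that Metro has to be told to treat .bin files as assets (by adding 'bin' to assetExts in metro.config.js), otherwise the require call for the weights will fail.

// Core library plus the React Native adapter helpers.
import * as tf from '@tensorflow/tfjs';
import { bundleResourceIO, decodeJpeg } from '@tensorflow/tfjs-react-native';

// Metro bundles these static require() calls together with the app.
const modelJson = require('./assets/model/model.json');
const modelWeights = require('./assets/model/group1-shard1of1.bin');
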

Loading the model

The next step is to load the model which, thanks to TensorFlow and its simple API, is basically a one-liner. tfjs allows loading both graph models and layers models. Since this is a Keras sequential model, we will load it as a layers model with the loadLayersModel function. We load the weights and the JSON in one go, and to do so we use the bundleResourceIO helper from the react-native adapter.
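
A sketch of that one-liner wrapped in a small helper (loadModel is just a name I’m using here), building on the imports from the previous snippet:

// Load the converted Keras model from the bundled JSON + weights.
const loadModel = async () => {
  const model = await tf.loadLayersModel(
    bundleResourceIO(modelJson, modelWeights)
  );
  return model;
};
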

After this, the model is loaded and ready to use.

Input image transformations

Now that we have our model loaded, we need to feed it data. But before that, we need to do some transformations so the data is compatible with the model’s input shape. My model is an image classifier and requires a tensor the size of a 300x300-pixel image. The input depends on the model and how it was trained, so you need to transform your data the same way the model saw it during training. For this, I will read a local image as a base64 string and then turn it into a tensor.
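
Something along these lines, assuming you already have the image as a base64-encoded JPEG string (for example via expo-file-system or react-native-fs); the 300x300 size and the division by 255 are assumptions tied to my model, so adapt them to your own preprocessing:

// Turn a base64-encoded JPEG into a [1, 300, 300, 3] float tensor.
const imageToTensor = (base64) => {
  // Decode the base64 string into raw JPEG bytes.
  const rawBytes = new Uint8Array(tf.util.encodeString(base64, 'base64').buffer);
  // Decode the JPEG bytes into a [height, width, 3] tensor.
  const imageTensor = decodeJpeg(rawBytes);
  // Resize to the model's input size, scale pixels to [0, 1],
  // and add a batch dimension.
  return tf.image.resizeBilinear(imageTensor, [300, 300])
    .div(255)
    .expandDims(0);
};
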

Make predictions

Just as it was easy to load the model, making a prediction is just as simple: again, a one-liner. The predict function can run a prediction on a batch of images; we only need to split the result based on the batch size.
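
A minimal sketch of that step (makePrediction is my name for it, not the post’s exact code); predict returns one row of scores per image in the batch, so for a single image the flat array returned by data() is already the score vector:

// Run the model on a batch of preprocessed images and read the scores.
const makePrediction = async (model, inputTensor) => {
  const output = model.predict(inputTensor);
  // data() resolves to a flat array of length batchSize * numClasses.
  const scores = await output.data();
  return scores;
};
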

Wrap everything together

The only thing left to do is to call our functions. However, before using any tfjs methods, we need to wait for the package to initialize with tf.ready(); only after that can we use the TensorFlow package. We export this function so we can call it later from wherever we want in the application.
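
Put together, it could look roughly like this (getPredictions is a hypothetical name, and loadModel, imageToTensor and makePrediction are the sketches from the previous snippets):

// Initialize tfjs, load the model, preprocess the image and predict.
export const getPredictions = async (base64Image) => {
  await tf.ready();
  const model = await loadModel();
  const input = imageToTensor(base64Image);
  return makePrediction(model, input);
};

In a real app you would typically call tf.ready() and loadModel() only once, for example on startup, and reuse the loaded model for every prediction instead of reloading it each time.
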


Congratulations! Now you can run inference on your mobile device. In this case, the tfjs library was used only for loading and predictions, but it also has all the tools for training models. I invite you to experiment with it and let me know if it even makes sense to train a model on a mobile device, and if it does, up to what reasonable point.


Sometimes it is the people no one can imagine anything of who do the things no one can imagine. ― Alan Turing
