Ganesh Kumar T K
Tensorflow + Node.js = Magic🎉💥

If you're a Node.js developer who also works with machine learning, you know that deploying a pre-trained TensorFlow SavedModel requires model conversion, which can prove costly. But not anymore... 😀

ICYMI, TensorFlow.js is an open-source library that lets you define, train, and run machine learning models in JavaScript. The library has empowered a new set of developers from the extensive JavaScript community to build and deploy machine learning models, and has enabled new use cases of machine learning. For example, TensorFlow.js runs in all major browsers, server-side in Node.js, and more recently in WeChat and React Native, giving hybrid mobile apps access to ML without having to leave the JS ecosystem.

One of the key benefits of TensorFlow.js is that JavaScript developers can easily deploy a pre-trained TensorFlow model for inference. TensorFlow.js provides the tfjs-converter tool, which can convert a TensorFlow SavedModel, TFHub module, or Keras model to a JavaScript-compatible format. However, the converter requires JavaScript developers to install the Python TensorFlow package and learn how to use it. Furthermore, the converter does not support the full set of TensorFlow ops (see the list of supported ops), so a model containing an unsupported op cannot be converted with the tool.

🐧 Native SavedModel execution in Node.js

"We are now excited to bring a new way for Node.js developers to easily deploy a pre-trained TensorFlow SavedModel directly, with high performance and without any need for model conversion." - TensorFlow Team

Now I'm excited to say that the TensorFlow team has announced native TensorFlow SavedModel execution in Node.js. You can now take a pre-trained TensorFlow model in SavedModel format, load it in Node.js through the @tensorflow/tfjs-node (or tfjs-node-gpu) package, and run it for inference without using tfjs-converter.
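If you haven't set the package up yet, getting started is a one-liner (a minimal sketch; the version pin reflects the release that introduced this feature, as noted later in this post):

// Install with: npm install @tensorflow/tfjs-node@^1.3.2
// (or @tensorflow/tfjs-node-gpu for CUDA-enabled machines)
const tf = require('@tensorflow/tfjs-node');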

A TensorFlow SavedModel has one or more named functions, called SignatureDefs. If you have a pre-trained TensorFlow SavedModel, you can load its SignatureDef in JavaScript with a single line of code, and the model is ready to use for inference.

const model = await tf.node.loadSavedModel(path, [tag], signatureKey);
const output = model.predict(input);
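To make that concrete, here is a minimal end-to-end sketch. The model path, tag, signature key, and input shape below are placeholders I've made up for illustration; substitute the values from your own SavedModel:

const tf = require('@tensorflow/tfjs-node');

async function main() {
  // 'serve' and 'serving_default' are the usual defaults for models
  // exported for TensorFlow Serving; your model may differ.
  const model = await tf.node.loadSavedModel('./my_saved_model', ['serve'], 'serving_default');

  // Dummy input: a batch of one 224x224 RGB image (a MobileNet-style shape).
  const input = tf.zeros([1, 224, 224, 3]);

  // For a single-output signature, predict() returns a single tensor.
  const output = model.predict(input);
  output.print();

  // Free the native memory backing the tensors and the session.
  input.dispose();
  output.dispose();
  model.dispose();
}

main();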

You can also feed multiple inputs to the model as an array or a map:

const model1 = await tf.node.loadSavedModel(path1, [tag], signatureKey);
const outputArray = model1.predict([inputTensor1, inputTensor2]);

const model2 = await tf.node.loadSavedModel(path2, [tag], signatureKey);
const outputMap = model2.predict({input1: inputTensor1, input2: inputTensor2});
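Note that the array form matches tensors to the signature's inputs by position, whereas the map form matches them by input name; when a signature has several inputs, the named form is generally the safer choice.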

If you want to inspect the details of a TensorFlow SavedModel to find its tags and signature information (also known as MetaGraphs), they can be parsed through a JavaScript helper API, similar to the TensorFlow SavedModel command-line tool:

const modelInfo = await tf.node.getMetaGraphsFromSavedModel(path);
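As a sketch of how you might walk the returned structure (the field names tags and signatureDefs below match the tfjs-node typings as I understand them; double-check against your installed version):

// Inside an async function:
const modelInfo = await tf.node.getMetaGraphsFromSavedModel('./my_saved_model');
for (const metaGraph of modelInfo) {
  console.log('tags:', metaGraph.tags);
  for (const [key, signatureDef] of Object.entries(metaGraph.signatureDefs)) {
    console.log(`signature "${key}"`);
    console.log('  inputs:', Object.keys(signatureDef.inputs));
    console.log('  outputs:', Object.keys(signatureDef.outputs));
  }
}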

This new feature is available in the @tensorflow/tfjs-node package version 1.3.2 and newer, for both CPU and GPU. It supports SavedModels trained and exported in both TensorFlow Python 1.x and 2.0. Besides removing the need for any conversion, native execution of a TensorFlow SavedModel means you can run models containing ops that are not yet available in TensorFlow.js, by loading the SavedModel as a TensorFlow session through the C++ bindings.

In addition to the usability benefits, this also has performance implications. In the TensorFlow team's performance benchmarks with the MobileNetV2 model, inference time improves on both CPU and GPU when executing SavedModels directly in Node.js.
[Figure: Runtime vs. inference time benchmark for MobileNetV2]

Source: TensorFlow Blog.

Top comments (1)

outOfBounds

Loved the links and the code. I have made tensorflow models but stalled on getting them hooked up with a front-end. You make it sound very do-able. Cheers!