<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Moses Odhiambo</title>
    <description>The latest articles on DEV Community by Moses Odhiambo (@badasstechie).</description>
    <link>https://dev.to/badasstechie</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F300669%2F5c9e11ce-3400-43fe-9f4a-9d5958c76660.jpg</url>
      <title>DEV Community: Moses Odhiambo</title>
      <link>https://dev.to/badasstechie</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/badasstechie"/>
    <language>en</language>
    <item>
      <title>Latent Variables in Computer Vision</title>
      <dc:creator>Moses Odhiambo</dc:creator>
      <pubDate>Wed, 25 Aug 2021 08:23:19 +0000</pubDate>
      <link>https://dev.to/badasstechie/latent-variables-in-computer-vision-14fa</link>
      <guid>https://dev.to/badasstechie/latent-variables-in-computer-vision-14fa</guid>
      <description>&lt;p&gt;Imagine prisoners chained together in a cave, with all they can see being a wall in front of them. Behind the prisoners is a fire, and between the fire and the prisoners are people carrying sculptures that cast shadows on the wall. The prisoners watch these shadows, believing them to be real.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jchd7jif793qcdwnzp3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jchd7jif793qcdwnzp3.jpg" alt="Allegory of the cave" width="705" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If an object (a book, let us say) is carried past behind them and casts a shadow on the wall, and one of the prisoners says “I see a book”, he thinks that the word “book” refers to the very thing he is looking at. But he would be wrong: he is only looking at a shadow. The real referent of the word “book” he cannot see. To see it, he would have to turn his head around.&lt;/p&gt;

&lt;p&gt;In a lot of computer vision tasks, images are to neural networks what shadows are to the prisoners. To gain an understanding of an image's contents, the networks need to find and use variables that are not directly observable to them but are the true explanatory factors behind the image. These factors are known as &lt;strong&gt;latent variables&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Latent Space?
&lt;/h2&gt;

&lt;p&gt;A latent space is the space in which the latent variables above live, and a latent vector is a single point in that space, i.e. one particular assignment of values to those variables. A latent vector is also commonly referred to as a &lt;strong&gt;feature representation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Think of a latent vector as a collection of an image's 'features', i.e. variables that describe what is going on in the image, such as the setting (medieval or modern), the time of day, and so on. This is not exactly how it works in practice - it is just an intuition. The idea is that the latent variables represent high-level attributes, rather than raw pixels with little meaning on their own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where are Latent Variables Used?
&lt;/h2&gt;

&lt;p&gt;Latent variables can be used when connecting computer vision models to models from other domains that do not deal with image data. For example, a common task at the intersection of computer vision and natural language processing is image captioning. To generate a caption for an image, a language model needs the image's latent variables; they provide the understanding necessary to describe what is going on in the image.&lt;/p&gt;

&lt;p&gt;Latent variables are also used in image manipulation. When learned well, these variables can be used to adjust high-level properties of an image. A common application of this is the creation of deepfakes.&lt;/p&gt;

&lt;p&gt;Face recognition is a problem that heavily leverages latent variables. A model can be trained on face recognition data and used to obtain a latent vector for every face it encounters. That vector can then be compared to the one extracted from another image to see if the two faces match.&lt;/p&gt;
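
&lt;p&gt;As a rough sketch of that comparison step (the metric and the 0.8 threshold below are illustrative assumptions, not a standard): cosine similarity between two latent vectors is one common way to decide whether two faces match.&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    # how closely two latent vectors point in the same direction, in [-1, 1]
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_face(latent_a, latent_b, threshold=0.8):
    # the threshold is a tunable hyperparameter chosen for this toy example
    return cosine_similarity(latent_a, latent_b) >= threshold

face_1 = [0.9, 0.1, 0.4]    # toy latent vectors; real ones have hundreds of dimensions
face_2 = [0.88, 0.12, 0.41]
print(same_face(face_1, face_2))   # True - the two vectors are nearly parallel
```

&lt;p&gt;Real systems train the model so that latent vectors of the same person cluster together, which is what makes a simple distance check like this meaningful.&lt;/p&gt;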

&lt;p&gt;Another application of latent vectors is debiasing computer vision systems. By learning a latent space, a model gains a high-level understanding of the data and can be used to tell which features are underrepresented - for instance, in the context of face recognition, faces of a certain race or skin complexion.&lt;/p&gt;

&lt;h2&gt;
  
  
  How is a Latent Space Learned?
&lt;/h2&gt;

&lt;p&gt;A popular model used to learn a latent space for images of a given distribution is the autoencoder, which consists of two parts - an encoder and a decoder. The encoder maps an image to its latent representation, and the decoder reconstructs the original image from that representation. Training the model end to end until the reconstruction is as close as possible to the original image forces the encoder to learn a meaningful latent space.&lt;/p&gt;
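
&lt;p&gt;A minimal sketch of this idea, using a purely linear autoencoder in NumPy on synthetic data (real autoencoders are deep nonlinear networks; all sizes and names here are illustrative):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "images": 64-dimensional vectors generated from 4 hidden factors,
# so a good latent space only needs a handful of dimensions
hidden_factors = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 64))
images = hidden_factors @ mixing

# linear autoencoder: encoder compresses 64 -> 8, decoder expands 8 -> 64
W_enc = rng.normal(scale=0.1, size=(64, 8))
W_dec = rng.normal(scale=0.1, size=(8, 64))

def reconstruction_loss():
    latent = images @ W_enc        # encode each image to a latent vector
    recon = latent @ W_dec         # decode the latent vector back to an image
    return ((recon - images) ** 2).mean()

loss_before = reconstruction_loss()
lr = 0.01
for _ in range(300):
    latent = images @ W_enc
    recon = latent @ W_dec
    err = 2.0 * (recon - images) / images.size   # gradient of the mean squared error
    grad_dec = latent.T @ err
    grad_enc = images.T @ (err @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(loss_before, reconstruction_loss())   # loss drops as the latent space is learned
```

&lt;p&gt;The encoder's output after training is the latent vector the rest of this article talks about.&lt;/p&gt;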

&lt;h2&gt;
  
  
  Feature Disentanglement
&lt;/h2&gt;

&lt;p&gt;Ideally, we want the learned latent variables to be as independent of one another as possible, so that when we vary a given latent variable, only the aspect of the image that variable represents changes. There are many ways to enforce this - most of them are beyond the scope of this article.&lt;/p&gt;
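
&lt;p&gt;A toy illustration of the difference (both decoders below are made-up stand-ins, not real models): in a disentangled latent space, changing one latent variable moves exactly one attribute, while in an entangled one it moves several at once.&lt;/p&gt;

```python
def decode_disentangled(z):
    # each latent variable controls exactly one image attribute
    brightness, rotation = z
    return {"brightness": 2.0 * brightness, "rotation": 90.0 * rotation}

def decode_entangled(z):
    # both latent variables leak into both attributes
    a, b = z
    return {"brightness": 2.0 * a + 0.5 * b, "rotation": 90.0 * b - 30.0 * a}

base = decode_disentangled([0.5, 0.2])
moved = decode_disentangled([0.9, 0.2])        # vary only the first latent variable
print(base["rotation"] == moved["rotation"])   # True - rotation is untouched
```

&lt;p&gt;With the entangled decoder, the same single-variable change would alter both brightness and rotation, which is exactly what disentanglement methods try to prevent.&lt;/p&gt;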

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>computervision</category>
      <category>ai</category>
    </item>
    <item>
      <title>TinyML: Deploying TensorFlow models to Android</title>
      <dc:creator>Moses Odhiambo</dc:creator>
      <pubDate>Tue, 24 Aug 2021 17:21:30 +0000</pubDate>
      <link>https://dev.to/badasstechie/tinyml-deploying-tensorflow-models-to-android-2i73</link>
      <guid>https://dev.to/badasstechie/tinyml-deploying-tensorflow-models-to-android-2i73</guid>
      <description>&lt;h2&gt;
  
  
  What is TinyML?
&lt;/h2&gt;

&lt;p&gt;Tiny machine learning (TinyML) is a field that focuses on running machine learning (mostly deep learning) algorithms directly on edge devices such as microcontrollers and mobile devices. The algorithms have to be highly optimized to run on such systems, since most of them are low-powered and memory-constrained.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wait, what do you mean by 'edge devices'?
&lt;/h2&gt;

&lt;p&gt;An edge device is a device that makes use of the &lt;strong&gt;final&lt;/strong&gt; output of machine learning algorithms - for instance, a camera that displays the result of image recognition, or a smartphone that plays speech synthesized from text. Most practitioners run machine learning models on more powerful machines and then send the output to edge devices, but this is starting to change with the advent of TinyML.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why TinyML?
&lt;/h2&gt;

&lt;p&gt;Running machine learning directly on edge devices avoids the round trip to a server, and the convenience this brings has made TinyML one of the fastest-growing fields in deep learning.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does one go about deploying ML to edge devices?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Train a machine learning model on a more powerful environment such as a cloud virtual machine or a faster computer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optimize the model, say, by reducing the number of parameters, or by using low-precision data types such as 16-bit floats. This makes the model smaller and inference faster and more power-efficient, at the cost of some accuracy - a compromise you'll usually have to make.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the model 'on the edge'!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
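
&lt;p&gt;Step 2's trade-off can be seen without any TensorFlow at all - casting weights to 16-bit floats halves their size but introduces rounding error (the array below is a made-up stand-in for real model weights):&lt;/p&gt;

```python
import numpy as np

# pretend these are a trained model's weights
weights_fp32 = np.random.default_rng(0).normal(size=10_000).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)   # drop to 16-bit precision

print(weights_fp32.nbytes)   # 40000 bytes
print(weights_fp16.nbytes)   # 20000 bytes - the storage cost is halved
rounding_error = np.abs(weights_fp32 - weights_fp16.astype(np.float32)).max()
print(rounding_error)        # nonzero: this is the precision you trade away
```

&lt;p&gt;Frameworks like TensorFlow Lite apply this kind of quantization (and more aggressive 8-bit schemes) automatically during conversion.&lt;/p&gt;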

&lt;h2&gt;
  
  
  TensorFlow Lite Quick Start
&lt;/h2&gt;

&lt;p&gt;TensorFlow Lite is TensorFlow's take on TinyML.&lt;/p&gt;

&lt;h3&gt;
  
  
  Converting a saved model from TensorFlow to TensorFlow Lite
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/path_to_model.h5&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;converter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;lite&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TFLiteConverter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_keras_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;tflite_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;converter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;convert&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tflite_model.tflite&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tflite_model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, it only takes a few lines of code 😊.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running a TensorFlow Lite model in TensorFlow Lite's Python Interpreter
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;
&lt;span class="n"&gt;interpreter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;lite&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Interpreter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tflite_model.tflite&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;    &lt;span class="c1"&gt;#initialize interpreter with model
&lt;/span&gt;&lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;allocate_tensors&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;input_details&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_input_details&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;output_details&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_output_details&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;inputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;  &lt;span class="c1"&gt;#list of input tensors
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_tensor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_details&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;index&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;    &lt;span class="c1"&gt;#run model
&lt;/span&gt;
&lt;span class="n"&gt;outputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="c1"&gt;#output tensors
&lt;/span&gt;&lt;span class="n"&gt;num_outputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output_details&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;#number of output tensors
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_outputs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tensor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output_details&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;index&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Running a TensorFlow Lite model in an Android application
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Create a new Android Studio Project
&lt;/h4&gt;

&lt;h4&gt;
  
  
  2. Import the model into Android Studio
&lt;/h4&gt;

&lt;p&gt;Copy the &lt;code&gt;.tflite&lt;/code&gt; model to &lt;code&gt;app/src/main/assets/&lt;/code&gt; - create the assets folder if it does not exist.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Import TensorFlow Lite into your project
&lt;/h4&gt;

&lt;p&gt;Add the following dependency to your app-level &lt;code&gt;build.gradle&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;implementation 'org.tensorflow:tensorflow-lite:+'&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Load Model
&lt;/h4&gt;

&lt;p&gt;Load the &lt;code&gt;.tflite&lt;/code&gt; model you placed in your assets folder as a &lt;code&gt;MappedByteBuffer&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;MappedByteBuffer&lt;/span&gt; &lt;span class="nf"&gt;loadModelFile&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Context&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="no"&gt;MODEL_FILE&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;throws&lt;/span&gt; &lt;span class="nc"&gt;IOException&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;AssetFileDescriptor&lt;/span&gt; &lt;span class="n"&gt;fileDescriptor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getAssets&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;openFd&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;MODEL_FILE&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="nc"&gt;FileInputStream&lt;/span&gt; &lt;span class="n"&gt;inputStream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FileInputStream&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fileDescriptor&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getFileDescriptor&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
    &lt;span class="nc"&gt;FileChannel&lt;/span&gt; &lt;span class="n"&gt;fileChannel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;inputStream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getChannel&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;startOffset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fileDescriptor&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getStartOffset&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;declaredLength&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fileDescriptor&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getDeclaredLength&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;fileChannel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;map&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;FileChannel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;MapMode&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;READ_ONLY&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;startOffset&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;declaredLength&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;loadModelFile&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;name_of_model&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;tflite&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  5. Initialize Interpreter
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;interpreter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Interpreter&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;IOException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;printStackTrace&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  6. Run Model
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;Object&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;inputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="n"&gt;input1&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;input2&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="o"&gt;...}&lt;/span&gt; &lt;span class="c1"&gt;//the objects in inputs{} are jagged arrays - what in TensorFlow would be considered tensors&lt;/span&gt;

&lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Integer&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Object&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;outputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HashMap&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt; &lt;span class="c1"&gt;//same for outputs&lt;/span&gt;
&lt;span class="n"&gt;outputs&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;output1&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;    &lt;span class="c1"&gt;//add outputs to the map&lt;/span&gt;

&lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;runForMultipleInputsOutputs&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;outputs&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;   &lt;span class="c1"&gt;//get inference from interpreter&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And there you go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample project
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/badass-techie/IAmNotReal" rel="noopener noreferrer"&gt;Here is the source code&lt;/a&gt; for a &lt;a href="https://en.wikipedia.org/wiki/Generative_adversarial_network" rel="noopener noreferrer"&gt;GAN&lt;/a&gt; deployed to an Android app with TensorFlow Lite. &lt;a href="https://play.google.com/store/apps/details?id=com.apptasticmobile.iamnotreal" rel="noopener noreferrer"&gt;Here is the Android app&lt;/a&gt; for you to play with.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>tinyml</category>
      <category>tensorflow</category>
    </item>
  </channel>
</rss>
