<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mariano Martinez Peck</title>
    <description>The latest articles on DEV Community by Mariano Martinez Peck (@martinezpeck).</description>
    <link>https://dev.to/martinezpeck</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F185978%2F5c09a21f-b9b4-4cfc-b7f7-9f04301c481c.jpg</url>
      <title>DEV Community: Mariano Martinez Peck</title>
      <link>https://dev.to/martinezpeck</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/martinezpeck"/>
    <language>en</language>
    <item>
      <title>Segfaults: GDB Scripts to the Rescue</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Tue, 10 Nov 2020 11:53:36 +0000</pubDate>
      <link>https://dev.to/martinezpeck/segfaults-gdb-scripts-to-the-rescue-51cb</link>
      <guid>https://dev.to/martinezpeck/segfaults-gdb-scripts-to-the-rescue-51cb</guid>
      <description>&lt;p&gt;We recently discovered that we would very sporadically get a segmentation fault when running the whole test suite of the &lt;a href="https://www.instantiations.com/products/vasmalltalk/index.html"&gt;VAST Platform (VA Smalltalk)&lt;/a&gt; on Linux.&lt;/p&gt;

&lt;p&gt;During our initial investigation, we realized this was not something new, but that the problem had existed for many years. It happens only on Linux (not Windows), and with all program variants: with and without JIT compiler, with and without memory sharing, 32 and 64-bit, etc. At least we knew it was not related to those broader differences.&lt;/p&gt;

&lt;p&gt;In the end, we were able to find and fix the bug. In this post, I’ll outline some of the experiments we performed and conclude with the final GDB approach that ultimately did the trick.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attempts at running the whole test suite
&lt;/h2&gt;

&lt;p&gt;For the internal development of the VAST Platform, we use a huge test suite of about 16,000 total tests. When diagnosing an issue like this, the first obvious step was to just run the suite and see if we would get the “segmentation fault”. Obviously we did not. Otherwise, we would have fixed this long ago. We then tried randomizing the order of the tests, running the suite multiple times, etc. Still no luck.&lt;/p&gt;

&lt;p&gt;One of the peculiarities of this issue is that there are very few scenarios that could cause a segmentation fault in VAST. But when one does occur, VAST normally dumps a &lt;code&gt;vmtrap.log&lt;/code&gt; which provides a lot of useful information for debugging. Unfortunately, in this case, the log was not being produced. In addition, when this segmentation fault occurred, VAST appeared to perform a regular, successful exit (quitting the image and shutting down the VM).&lt;/p&gt;

&lt;p&gt;That made us think that the problem could be related to the “exit code” in the Virtual Machine. The next step was to write a bash script that would do a lot of image starts &amp;amp; exits so we could observe the exit codes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# "Settings"&lt;/span&gt;
&lt;span class="nv"&gt;vaRoot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/usr/local/VASmalltalk/10.0.0x64"&lt;/span&gt;
&lt;span class="nv"&gt;vaVMPath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vaRoot&lt;/span&gt;&lt;span class="s2"&gt;/bin/es"&lt;/span&gt;
&lt;span class="nv"&gt;workingDirectoryPath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;log&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workingDirectoryPath&lt;/span&gt;&lt;span class="s2"&gt;/ctest.log"&lt;/span&gt;

&lt;span class="c"&gt;# copied them from environments directory.&lt;/span&gt;
&lt;span class="c"&gt;# The icx needs to have SUnitOneClick map loaded&lt;/span&gt;
&lt;span class="nv"&gt;imagePath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workingDirectoryPath&lt;/span&gt;&lt;span class="s2"&gt;/abt.icx"&lt;/span&gt;
&lt;span class="nv"&gt;iniPath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workingDirectoryPath&lt;/span&gt;&lt;span class="s2"&gt;/abt.ini"&lt;/span&gt;

&lt;span class="nv"&gt;cnfPath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vaRoot&lt;/span&gt;&lt;span class="s2"&gt;/abt.cnf"&lt;/span&gt;
&lt;span class="c"&gt;# The -no_break is strictly necessary because else it looks like&lt;/span&gt;
&lt;span class="c"&gt;# the VM randomly freezes and we get defunct processes ..&lt;/span&gt;
&lt;span class="c"&gt;# so the only workaround for now is to disable the break process.&lt;/span&gt;
&lt;span class="nv"&gt;vaVMArguments&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;" -no_break -mcd -msd -i&lt;/span&gt;&lt;span class="nv"&gt;$imagePath&lt;/span&gt;&lt;span class="s2"&gt; -ini:&lt;/span&gt;&lt;span class="nv"&gt;$iniPath&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="c"&gt;# This testcases.txt file can be autogenerated with the Smalltalk code:&lt;/span&gt;
&lt;span class="c"&gt;# CtestSunitRunner basicNew createTestCaseList&lt;/span&gt;
&lt;span class="nv"&gt;testCaseFile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workingDirectoryPath&lt;/span&gt;&lt;span class="s2"&gt;/testcases.txt"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Testcase file: &lt;/span&gt;&lt;span class="nv"&gt;$testCaseFile&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Doing a cd is important here because else the ICs won't be found in the&lt;/span&gt;
&lt;span class="c"&gt;# defined IC_ROOT in the .ini&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$vaRoot&lt;/span&gt;
&lt;span class="c"&gt;# If we don't specify this LANG or any other compatible one&lt;/span&gt;
&lt;span class="c"&gt;# we would get quite some test failures&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LANG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;en_US.iso88591
&lt;span class="c"&gt;# Necessary, else the esvm40.so for example is not found&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LD_LIBRARY_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vaRoot&lt;/span&gt;&lt;span class="s2"&gt;/bin"&lt;/span&gt;
&lt;span class="c"&gt;# Just to always start with a fresh log&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$log&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$log&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" ======== Starting process ======="&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; line &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt;&lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$line&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="c"&gt;# Print some status on the console...the rest..directly into the log.&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Line: &lt;/span&gt;&lt;span class="nv"&gt;$line&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="c"&gt;# Below two lines create forces a new abt.cnf files and appends&lt;/span&gt;
    &lt;span class="c"&gt;# the necessary code to run the test with Ctest&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"PostStartUp ! "&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$cnfPath&lt;/span&gt;
    &lt;span class="c"&gt;# This line just extracts the class name from the line and appends the&lt;/span&gt;
    &lt;span class="c"&gt;# code to abt.cnf&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"SunitRunner run: #&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$line&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'#'&lt;/span&gt; &lt;span class="nt"&gt;-f2-&lt;/span&gt; | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'-'&lt;/span&gt; &lt;span class="nt"&gt;-f1&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt; !"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$cnfPath&lt;/span&gt;

    &lt;span class="c"&gt;# Start VA VM with all the necessary arguments and redirect stdout/stderr&lt;/span&gt;
    &lt;span class="c"&gt;# to the specified log file&lt;/span&gt;
    &lt;span class="nv"&gt;$vaVMPath&lt;/span&gt; &lt;span class="nv"&gt;$vaVMArguments&lt;/span&gt; &amp;lt; /dev/null &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$log&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;&amp;amp;1

    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Line finished"&lt;/span&gt;

&lt;span class="k"&gt;done&lt;/span&gt; &amp;lt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$testCaseFile&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" ======== Process finished ======="&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$log&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"CTestFailure"&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I don’t want to get into the gory details as that’s not the focus of this post, but basically, the script does the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reads the file &lt;code&gt;testcases.txt&lt;/code&gt; which has all &lt;code&gt;TestCase&lt;/code&gt; subclasses (one test class name per line) of the whole test suite.&lt;/li&gt;
&lt;li&gt;For each test class, it auto-generates a file called &lt;code&gt;abt.cnf&lt;/code&gt; that is automatically read by VAST at startup, and that file includes the code to run that test (&lt;code&gt;SunitRunner run: #MyTestCaseSubclassExample&lt;/code&gt;) .&lt;/li&gt;
&lt;li&gt;For each test class, it launches VAST. At startup, it runs the test. When done, it prints the test results and exits.&lt;/li&gt;
&lt;li&gt;It repeats steps 2 and 3 until the end of &lt;code&gt;testcases.txt&lt;/code&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Finally, when running this bash script, we were able to sporadically reproduce that segmentation fault. (Sometimes we needed to run this script a few times to get to the crash.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/10/Screen-Shot-2020-10-15-at-5.02.06-PM.png?ssl=1"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IrLRJ2nz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/10/Screen-Shot-2020-10-15-at-5.02.06-PM.png%3Fresize%3D748%252C425%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Trying to run with GDB
&lt;/h2&gt;

&lt;p&gt;Ok…we were able to somehow, every once in a while, reproduce the crash with the “release” VM. The next step was to compile the VAST VM in “debug” mode and cross our fingers that we’d be able to still reproduce the bug. (Sometimes life isn’t easy and bugs don’t show up with the debug flavor).&lt;/p&gt;

&lt;p&gt;I normally compile and run the VAST VM under GDB with a command like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VM_COMPILED_BIN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/mpeck/Instantiations/git/vm-devel/devel/build/bin&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VA_ROOT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/VASmalltalk/10.0.0x64&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$VA_ROOT&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LANG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;en_US.iso88591 &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LD_LIBRARY_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$VM_COMPILED_BIN&lt;/span&gt;:&lt;span class="nv"&gt;$VA_ROOT&lt;/span&gt;/bin&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
gdb &lt;span class="nt"&gt;-ex&lt;/span&gt; run &lt;span class="nt"&gt;--args&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;$VM_COMPILED_BIN&lt;/span&gt;/es &lt;span class="nt"&gt;-no_break&lt;/span&gt; &lt;span class="nt"&gt;-ini&lt;/span&gt;:/home/mpeck/Instantiations/SUnitOneClick/abt.ini &lt;span class="nt"&gt;-i&lt;/span&gt;/home/mpeck/Instantiations/SUnitOneClick/abt.icx |&amp;amp; &lt;span class="nb"&gt;tee &lt;/span&gt;gdb.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Therefore, I altered the above bash script and tried to replace the line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$vaVMPath&lt;/span&gt; &lt;span class="nv"&gt;$vaVMArguments&lt;/span&gt; &amp;lt; /dev/null &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$log&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;&amp;amp;1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;with something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
  gdb &lt;span class="nt"&gt;-ex&lt;/span&gt; run &lt;span class="nt"&gt;--args&lt;/span&gt; &lt;span class="nv"&gt;$vaVMPath&lt;/span&gt; &lt;span class="nv"&gt;$vaVMArguments&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$log&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;&amp;amp;1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, after running that new version of the bash script, we received both good and bad news. The good news was that the bug still showed up with the debug VM. The bad news was that &lt;code&gt;gdb&lt;/code&gt; ends the session when run non-interactively from a bash script. Therefore, I wasn’t able to get into the GDB console once the segmentation fault was triggered.&lt;/p&gt;
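
&lt;p&gt;A commonly suggested workaround (a sketch we did not pursue; &lt;code&gt;$vaVMPath&lt;/code&gt; and &lt;code&gt;$vaVMArguments&lt;/code&gt; are the variables from the script above) is to reconnect GDB’s stdin to the controlling terminal, since inside the loop GDB otherwise presumably inherits its stdin from the redirected &lt;code&gt;testcases.txt&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# Reconnect gdb's stdin to the controlling terminal so the interactive
# prompt survives even though the loop redirects stdin and the VM's output.
gdb -ex run --args $vaVMPath $vaVMArguments &amp;lt; /dev/tty &amp;gt;&amp;gt; "$log" 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;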

&lt;h2&gt;
  
  
  Narrowing down the reproducible case
&lt;/h2&gt;

&lt;p&gt;Reproducing the crash with the bash script further fueled our suspicion that the problem was related to the “exit code” in the VM. We thought that maybe it wasn’t necessary to run the whole suite of tests; rather, just running one single test class hundreds of times could still reproduce it.&lt;/p&gt;

&lt;p&gt;In other words, we replaced the contents of &lt;code&gt;testcases.txt&lt;/code&gt; with a single line containing a single test class (&lt;code&gt;TestProcessPrims&lt;/code&gt; in our case) and added an outer loop in bash to repeat the process 1000 times.&lt;/p&gt;
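
&lt;p&gt;The outer loop can be sketched like this (a minimal sketch, where &lt;code&gt;run_single_test&lt;/code&gt; is a hypothetical stand-in for generating &lt;code&gt;abt.cnf&lt;/code&gt; and launching the VAST VM as in the script above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
#!/bin/bash
# Minimal sketch of the outer loop; run_single_test is a hypothetical
# placeholder for generating abt.cnf and launching the VAST VM.
run_single_test() {
  true   # real command: "$vaVMPath" $vaVMArguments ...
}

for i in $(seq 1 1000); do
  run_single_test || { echo "Crash detected at iteration $i"; break; }
done
echo "Completed $i iterations"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;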

&lt;p&gt;Our hunch was right and it still reproduced the problem. Now this was going to be much easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run a program 1000 times with GDB
&lt;/h2&gt;

&lt;p&gt;Finally…we are getting close to the most helpful piece of this post.&lt;/p&gt;

&lt;p&gt;We knew that all we needed to do was to run VAST 1000 times with &lt;code&gt;gdb&lt;/code&gt; with an &lt;code&gt;abt.cnf&lt;/code&gt; like below, but without ending the GDB session when the segmentation fault was hit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
PostStartUp &lt;span class="o"&gt;!&lt;/span&gt; 
SunitRunner run: &lt;span class="c"&gt;#TestProcessPrims !&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Researching online how to do this, we arrived at &lt;a href="https://stackoverflow.com/questions/37299926/how-to-re-run-program-in-gdb-several-times/37305474#37305474"&gt;this useful post&lt;/a&gt; that shows how you can write a GDB script for that.&lt;/p&gt;

&lt;p&gt;We created the file &lt;code&gt;/tmp/script_file.txt&lt;/code&gt; as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;set &lt;/span&gt;pagination off
handle SIGUSR1 noprint nostop
handle SIGUSR2 noprint nostop
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nv"&gt;$n&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; 1000
&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="nv"&gt;$n&lt;/span&gt;&lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 0
  &lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;=========================&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;starting program: %d&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;, 1000-&lt;span class="nv"&gt;$n&lt;/span&gt;
  run
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nv"&gt;$_siginfo&lt;/span&gt;
    &lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"Received signal %d, stopping&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;, &lt;span class="nv"&gt;$_siginfo&lt;/span&gt;.si_signo
    loop_break
  &lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"program exited&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  end
end

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then ran GDB and VAST like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VM_COMPILED_BIN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/mpeck/Instantiations/git/vm-devel/devel/build/bin&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VA_ROOT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/VASmalltalk/10.0.0x64&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$VA_ROOT&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LANG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;en_US.iso88591 &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LD_LIBRARY_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$VM_COMPILED_BIN&lt;/span&gt;:&lt;span class="nv"&gt;$VA_ROOT&lt;/span&gt;/bin&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
gdb &lt;span class="nt"&gt;-x&lt;/span&gt; /tmp/script_file.txt &lt;span class="nt"&gt;--args&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;$VM_COMPILED_BIN&lt;/span&gt;/es &lt;span class="nt"&gt;-no_break&lt;/span&gt; &lt;span class="nt"&gt;-ini&lt;/span&gt;:/home/mpeck/Instantiations/SUnitOneClick/abt.ini &lt;span class="nt"&gt;-i&lt;/span&gt;/home/mpeck/Instantiations/SUnitOneClick/abt.icx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the &lt;code&gt;gdb -x /tmp/script_file.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally!!!! When running that, we were able to hit the segmentation fault, get into the GDB console, and debug.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i1.wp.com/marianopeck.blog/wp-content/uploads/2020/10/Screen-Shot-2020-10-19-at-6.50.31-PM.png?ssl=1"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KxVPj-pi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2020/10/Screen-Shot-2020-10-19-at-6.50.31-PM.png%3Fresize%3D748%252C402%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Line 96 of &lt;code&gt;estimer.c&lt;/code&gt; was exactly what we needed to understand the problem and fix it.&lt;/p&gt;

&lt;p&gt;This was an ancient bug. VAST’s main thread sets the ‘done’ flag, to be read later by the ‘delay’ thread (&lt;code&gt;estimer.c&lt;/code&gt;), and then continues executing. Occasionally a race condition appears: the main thread finishes shutting down and frees the ‘platform globals’ struct where the ‘done’ flag is located. Thus, by the time the ‘delay’ thread attempts to read the ‘done’ flag, it’s actually reading already-freed memory, causing a segfault. The easy fix is to wait for the ‘delay’ thread to finish after the ‘done’ flag is set, though there are of course other possible fixes too. In any case, the bug and its fix were not the purpose of this blog post; rather, the goal was to show how we approached the problem and share some tips with GDB.&lt;/p&gt;
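
&lt;p&gt;As an illustration only (hypothetical names and simplified logic, not the actual VAST sources), the fix pattern amounts to joining the worker thread after setting the flag and before freeing the shared struct:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;
#include &amp;lt;pthread.h&amp;gt;
#include &amp;lt;stdatomic.h&amp;gt;
#include &amp;lt;stdbool.h&amp;gt;
#include &amp;lt;stdlib.h&amp;gt;

/* Hypothetical sketch of the fix pattern, not the actual VAST code:
 * the 'delay' thread polls a 'done' flag stored in a heap-allocated
 * globals struct that the main thread frees at shutdown. */
typedef struct { atomic_bool done; } Globals;

static void *delay_thread(void *arg) {
    Globals *g = arg;
    while (!atomic_load(&amp;amp;g-&amp;gt;done))
        ;  /* the real thread sleeps on a timer here */
    return NULL;
}

static int run_shutdown(void) {
    Globals *g = malloc(sizeof *g);
    atomic_init(&amp;amp;g-&amp;gt;done, false);
    pthread_t t;
    pthread_create(&amp;amp;t, NULL, delay_thread, g);

    atomic_store(&amp;amp;g-&amp;gt;done, true);  /* main thread signals shutdown */
    pthread_join(t, NULL);  /* the fix: wait for the thread to finish... */
    free(g);                /* ...BEFORE freeing the globals it reads */
    return 0;
}

int main(void) { return run_shutdown(); }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;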

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Reproducing bugs and debugging them is not usually as easy as one would like. In this particular case, learning how to run a program multiple times from within GDB was a valuable lesson for me.&lt;/p&gt;

</description>
      <category>gdb</category>
      <category>debugging</category>
      <category>troubleshooting</category>
      <category>showdev</category>
    </item>
    <item>
      <title>“DLL Hell”: Tips &amp; Tricks to Avoid it in VAST</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Wed, 21 Oct 2020 13:50:22 +0000</pubDate>
      <link>https://dev.to/martinezpeck/dll-hell-tips-tricks-to-avoid-it-in-vast-27gm</link>
      <guid>https://dev.to/martinezpeck/dll-hell-tips-tricks-to-avoid-it-in-vast-27gm</guid>
      <description>&lt;p&gt;Delegating tasks from high-level languages like the &lt;a href="https://www.instantiations.com/products/vasmalltalk/index.html"&gt;VAST Platform (VA Smalltalk)&lt;/a&gt; to languages like C, C++, Rust via some kind of FFI (Foreign Function Interface) is becoming more and more common.&lt;/p&gt;

&lt;p&gt;Ideally, you would like to have everything implemented in your preferred high-level language, but I believe in using the appropriate tool for each problem. Sometimes you need more performance, or sometimes you want to save yourself the costs of implementing and maintaining a library that’s already implemented in another language. The reasons could be many, but in this case, you will end up making a thin FFI binding (a.k.a wrapper) against that library.&lt;/p&gt;

&lt;p&gt;Then, when you are in your high-level language and you want to call, for example, a function from a C library compiled into a DLL, you need to search for that DLL (Dynamic-link library) and load it into your application’s process space.&lt;/p&gt;

&lt;p&gt;That may sound like an easy step, but it can be one of those situations where developers spend a lot of time trying to figure out what is wrong: why a DLL couldn’t be loaded, why another DLL (with a different path) was loaded instead of the expected one, and so forth. This is a frequent problem that is widely known as “&lt;a href="https://en.wikipedia.org/wiki/DLL_Hell"&gt;DLL Hell&lt;/a&gt;“.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i0.wp.com/marianopeck.blog/wp-content/uploads/2020/10/1_DMv_Km8Ml49BuRXwWHydpg.png?ssl=1"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qu8aZQYv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/marianopeck.blog/wp-content/uploads/2020/10/1_DMv_Km8Ml49BuRXwWHydpg.png%3Fresize%3D335%252C335%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post I’ll show some tips, tricks, and scripts to help you get out of this situation when loading DLLs from the VAST Platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting VAST .ini specifications
&lt;/h2&gt;

&lt;p&gt;In VAST, most of the time, the library filenames of the different wrappers are specified in the main &lt;code&gt;.ini&lt;/code&gt; file under the section &lt;code&gt;[PlatformLibrary Name Mappings]&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For example, below is an extract of it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
[PlatformLibrary Name Mappings]
; Almost all .DLL files referenced by VA Smalltalk have aliases (logical names) that are mapped here.
;
; The keywords (logical names) are case sensitive; the values (dll names) are not.
Abt_Primitives=
AbtNativeQ=abtvof40
AbtNativeV=abtvmf40
BROTLI_LIB=esbrotli40
CgImageSupport=escgi40
CRYPTO_LIB=libcrypto

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That way you customize the “physical name” (ex. &lt;code&gt;libcrypto&lt;/code&gt;) of a given “logical name” (ex. &lt;code&gt;CRYPTO_LIB&lt;/code&gt;). You can use different names or even full paths…for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CRYPTO_LIB=my-special-libcrypto
CRYPTO_LIB=c:\Users\mpeck\Documents\Instantiations\libs\OpenSSL_x64\libcrypto

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So…let’s see the typical issues and possible solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Be certain of which .ini file VAST is reading
&lt;/h3&gt;

&lt;p&gt;Before you start digging into why a certain DLL didn’t load properly and you start randomly editing &lt;code&gt;.ini&lt;/code&gt; files… just be sure you are editing the correct one.&lt;/p&gt;

&lt;p&gt;This may be obvious, but I have seen cases where VAST is started from a chain of &lt;code&gt;.bat&lt;/code&gt; files (where it’s hard to get to the exact line that ends up executing VAST) or cases where there are also many custom &lt;code&gt;.ini&lt;/code&gt; files (read by end-user applications). In these situations, it’s not always easy to “know” which is the exact &lt;code&gt;.ini&lt;/code&gt; file being read by VAST itself at startup to map logical to physical names.&lt;/p&gt;

&lt;p&gt;The first solution here is to ask the system which exact &lt;code&gt;.ini&lt;/code&gt; file was read at startup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;
&lt;span class="nf"&gt;Transcript&lt;/span&gt; 
    &lt;span class="nv"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 
    &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'INI File: '&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="nc"&gt;System&lt;/span&gt; &lt;span class="nf"&gt;primitiveIniFileFullPathAndName&lt;/span&gt; &lt;span class="nf"&gt;asString&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will only help if you are having the problem on a development image, or on a runtime one that you are able to modify and re-package to print this information.&lt;/p&gt;

&lt;p&gt;However, if the problem is in a runtime image you can’t re-package, then it will help to know the exact algorithm the VM uses to look for the primary &lt;code&gt;.ini&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;
 &lt;span class="cm"&gt;/**
 * Finds the .INI filename.
 * This is called AFTER EsFindImageFileName().
 * Algorithm:
 * 1. If globalInfo-&amp;gt;iniName has a name in it (came from -ini: commandline switch),
 * look for (globalInfo-&amp;gt;iniName).ini file, answer TRUE if present; else answer FALSE.
 * 2. Look for (globalInfo-&amp;gt;imagePath)(globalInfo-&amp;gt;imageName).ini, answer TRUE if present.
 * 3. Look for &amp;lt;exePath&amp;gt;&amp;lt;exeName&amp;gt;.ini, answer TRUE if present.
 * 4. Look for es.ini, answer TRUE if present.
 * 5. Answer FALSE.
 */&lt;/span&gt;
&lt;span class="n"&gt;BOOLEAN&lt;/span&gt; &lt;span class="n"&gt;EsFindIniFileName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;EsGlobalInfo&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;globalInfo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way, you can manually check for the existence of these files step-by-step until you find which one is being used.&lt;/p&gt;
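
&lt;p&gt;The step-by-step check can be scripted; below is a sketch of that search order using hypothetical placeholder variables (&lt;code&gt;INI_SWITCH&lt;/code&gt;, &lt;code&gt;IMAGE_PATH&lt;/code&gt;, etc.) that you would adapt to your installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# Sketch of the VM's .ini search order with hypothetical variable names.
# IMAGE_PATH/EXE_PATH are assumed to end in a path separator, matching
# the concatenation in the algorithm above.
find_ini() {
  # 1. The -ini: switch, if given, is the only candidate checked.
  if [ -n "$INI_SWITCH" ]; then
    [ -e "$INI_SWITCH.ini" ] &amp;amp;&amp;amp; { echo "$INI_SWITCH.ini"; return 0; }
    return 1
  fi
  # 2-4. Image directory, executable directory, then es.ini.
  for f in "$IMAGE_PATH$IMAGE_NAME.ini" "$EXE_PATH$EXE_NAME.ini" es.ini; do
    [ -e "$f" ] &amp;amp;&amp;amp; { echo "$f"; return 0; }
  done
  return 1  # 5. nothing found
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;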

&lt;blockquote&gt;
&lt;p&gt;BTW: VAST 2021 (v10.x.x) includes this information as part of the default walkback header.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Be sure your .ini edits are being taken into account
&lt;/h3&gt;

&lt;p&gt;Again, this sounds obvious, but I’ve seen end-user applications that override the default VAST startup mechanism, do their own &lt;code&gt;.ini&lt;/code&gt; reading, or hardcode settings, just to name a few…&lt;/p&gt;

&lt;p&gt;So… once you start modifying the correct &lt;code&gt;.ini&lt;/code&gt; file to troubleshoot the problem, be sure that those changes are being taken into account. The code below queries VAST to see exactly what &lt;code&gt;CRYPTO_LIB&lt;/code&gt; has been mapped to after startup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;
&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; logicalName aliases sharedLibs lib &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;
&lt;span class="nv"&gt;logicalName&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;'CRYPTO_LIB'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;aliases&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;PlatformLibrary&lt;/span&gt; &lt;span class="nf"&gt;classPool&lt;/span&gt; &lt;span class="nf"&gt;at:&lt;/span&gt; &lt;span class="s"&gt;'Aliases'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;sharedLibs&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;PlatformLibrary&lt;/span&gt; &lt;span class="nf"&gt;classPool&lt;/span&gt; &lt;span class="nf"&gt;at:&lt;/span&gt; &lt;span class="s"&gt;'SharedLibraries'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'Alias Key: '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;logicalName&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="s"&gt;' Value: '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;aliases&lt;/span&gt; &lt;span class="nf"&gt;at:&lt;/span&gt; &lt;span class="nv"&gt;logicalName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;lib&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nv"&gt;sharedLibs&lt;/span&gt; &lt;span class="nf"&gt;detect:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="nf"&gt;logicalName=&lt;/span&gt; &lt;span class="nv"&gt;logicalName&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nf"&gt;ifNone:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="bp"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;
&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'SharedLibraries LogicalName: '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;lib&lt;/span&gt; &lt;span class="nf"&gt;logicalName,&lt;/span&gt; &lt;span class="s"&gt;' PhysicalName: '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;lib&lt;/span&gt; &lt;span class="nf"&gt;physicalName&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So for example, if I edited my &lt;code&gt;.ini&lt;/code&gt; file with this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CRYPTO_LIB=c:\Users\mpeck\Documents\Instantiations\libs\OpenSSL_x64\libcrypto

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But if the above code still prints plain &lt;code&gt;libcrypto&lt;/code&gt;, then it means that for some reason my changes are being ignored. So… be sure that the above code prints what you specified in the file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting DLL load failure
&lt;/h2&gt;

&lt;p&gt;Assuming you are now sure which main &lt;code&gt;.ini&lt;/code&gt; file VAST is reading, and that your changes are being taken into account, you can start troubleshooting why a DLL failed to load or why the wrong one was loaded.&lt;/p&gt;

&lt;h3&gt;
  
  
  Windows DLL lookup order
&lt;/h3&gt;

&lt;p&gt;The most common problem when loading a DLL is that either Windows couldn’t find it through its lookup mechanism, or it found a different version than the one you expected (which could be problematic).&lt;/p&gt;

&lt;p&gt;If you look at the official &lt;a href="https://docs.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-search-order#search-order-for-desktop-applications"&gt;documentation about the lookup mechanism&lt;/a&gt; you will see that there are many factors that could affect it: registry keys such as &lt;code&gt;KnownDLLs&lt;/code&gt; or &lt;code&gt;SafeDllSearchMode&lt;/code&gt;, custom loads via &lt;code&gt;LOAD_WITH_ALTERED_SEARCH_PATH&lt;/code&gt;, etc. However… most of the time you end up using the “default” behavior.&lt;/p&gt;
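&lt;p&gt;If you want to double-check those registry values outside of VAST, the built-in &lt;code&gt;reg&lt;/code&gt; tool can query them from a CMD console (these are the standard key paths; if &lt;code&gt;SafeDllSearchMode&lt;/code&gt; does not exist, the default search behavior applies):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v SafeDllSearchMode
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;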

&lt;p&gt;Frequently, it’s hard to identify the working directory, what and where the VAST executable binary is, which directories &lt;code&gt;GetSystemDirectory&lt;/code&gt; and &lt;code&gt;GetWindowsDirectory&lt;/code&gt; resolve to, etc.&lt;/p&gt;

&lt;p&gt;The script below mimics the Windows lookup order and displays the directory used at each step (step 3, the 16-bit system directory, is skipped):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;
&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; string m p &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt; 
&lt;span class="nv"&gt;string&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;new:&lt;/span&gt; &lt;span class="m"&gt;256&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="nv"&gt;m&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;OSHmodule&lt;/span&gt; &lt;span class="nf"&gt;getModuleHandle:&lt;/span&gt; &lt;span class="bp"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;m&lt;/span&gt; &lt;span class="nf"&gt;isNull&lt;/span&gt; &lt;span class="nb"&gt;ifFalse:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="nv"&gt;p&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;new:&lt;/span&gt; &lt;span class="m"&gt;256&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
  &lt;span class="nv"&gt;m&lt;/span&gt; &lt;span class="nf"&gt;getModuleFileName:&lt;/span&gt; &lt;span class="nv"&gt;p&lt;/span&gt; &lt;span class="nf"&gt;cbFileName:&lt;/span&gt; &lt;span class="nv"&gt;p&lt;/span&gt; &lt;span class="nf"&gt;size&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
  &lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'1: The directory containing the EXE file: '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;p&lt;/span&gt; &lt;span class="nf"&gt;trimNull&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nb"&gt;ifTrue:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'Can''t find module!'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.].&lt;/span&gt;

&lt;span class="nc"&gt;OSCall&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;getSystemDirectory:&lt;/span&gt; &lt;span class="nv"&gt;string&lt;/span&gt; &lt;span class="nf"&gt;cbSysPath:&lt;/span&gt; &lt;span class="m"&gt;256&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'2: GetSystemDirectory : '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="nc"&gt;OSCall&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;getWindowsDirectory:&lt;/span&gt; &lt;span class="nv"&gt;string&lt;/span&gt; &lt;span class="nf"&gt;cbSysPath:&lt;/span&gt; &lt;span class="m"&gt;256&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'4: GetWindowsDirectory : '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'5: Current Directory : '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;CfsDirectoryDescriptor&lt;/span&gt; &lt;span class="nf"&gt;getcwd&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'6: $Path : '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'Path'&lt;/span&gt; &lt;span class="nf"&gt;abtScanEnv&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that script could print something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
1: The directory containing the EXE file: Z:\Common\Development\VAST\10.0.0x64-b466\10.0.0x64\abt.exe
2: GetSystemDirectory : C:\WINDOWS\system32
4: GetWindowsDirectory : C:\WINDOWS
5: Current Directory : Z:\Common\Development\Images\10.0.0.x64-b466-dev
6: $Path : Z:\Common\Development\VAST\10.0.0x64-b466\10.0.0x64\;Z:\Common\Development\VAST\10.0.0x64-b466\10.0.0x64;C:\Python27\;C:\Python27\Scripts;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\PuTTY\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files (x86)\Git\cmd;C:\Program Files\LLVM\bin;C:\Program Files\CMake\bin;C:\Users\mpeck\AppData\Local\Programs\Python\Python37\Scripts\;C:\Users\mpeck\AppData\Local\Programs\Python\Python37\;C:\Users\mpeck\AppData\Local\Microsoft\WindowsApps;C:\Program Files\PuTTY;C:\Users\mpeck\AppData\Local\Microsoft\WindowsApps;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you know which directories Windows will search, and in which order.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pro Tip: Are you running VAST 32-bit on Windows 64-bit? If so, bear in mind that the function &lt;code&gt;GetSystemDirectory&lt;/code&gt; will answer &lt;code&gt;C:\WINDOWS\system32&lt;/code&gt; even though the real physical directory mapped to it is &lt;code&gt;C:\Windows\SysWOW64\&lt;/code&gt;. You can read &lt;a href="https://www.howtogeek.com/326509/whats-the-difference-between-the-system32-and-syswow64-folders-in-windows/"&gt;this post&lt;/a&gt; for more details.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Oh, by the way, do you also want to grab any of the involved Windows registry keys that could affect the load order? You can do that from VAST too!&lt;/p&gt;

&lt;p&gt;The example below gets the value of &lt;code&gt;SafeDllSearchMode&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;
&lt;span class="c"&gt;"Registry Hive: HKEY_LOCAL_MACHINE
Registry Path: \System\CurrentControlSet\Control\Session Manager\

Value Name: SafeDllSearchMode

Value Type: REG_DWORD
Value: 1
"&lt;/span&gt;

&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; subKey buffer key answer valueName &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;

&lt;span class="nv"&gt;subKey&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;ByteArray&lt;/span&gt; &lt;span class="nf"&gt;new:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;buffer&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;new:&lt;/span&gt; &lt;span class="m"&gt;256&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;key&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;'System\CurrentControlSet\Control\Session Manager'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;valueName&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;'SafeDllSearchMode'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

 &lt;span class="nv"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OSHkey&lt;/span&gt; &lt;span class="nf"&gt;immediate:&lt;/span&gt; &lt;span class="err"&gt;PlatformConstants::&lt;/span&gt;&lt;span class="nc"&gt;HkeyLocalMachine&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;regOpenKeyEx:&lt;/span&gt; &lt;span class="nv"&gt;key&lt;/span&gt; &lt;span class="nf"&gt;asParameter&lt;/span&gt;
    &lt;span class="nf"&gt;ulOptions:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="nf"&gt;samDesired:&lt;/span&gt; &lt;span class="err"&gt;PlatformConstants::&lt;/span&gt;&lt;span class="nc"&gt;KeyQueryValue&lt;/span&gt;
    &lt;span class="nf"&gt;phkResult:&lt;/span&gt; &lt;span class="nv"&gt;subKey&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

 &lt;span class="nv"&gt;answer&lt;/span&gt; &lt;span class="nf"&gt;=&lt;/span&gt; &lt;span class="err"&gt;PlatformConstants::&lt;/span&gt;&lt;span class="nc"&gt;ErrorSuccess&lt;/span&gt;
  &lt;span class="nb"&gt;ifTrue:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nv"&gt;subKey&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OSHkey&lt;/span&gt; &lt;span class="nf"&gt;immediate:&lt;/span&gt; &lt;span class="nv"&gt;subKey&lt;/span&gt; &lt;span class="nf"&gt;abtAsInteger&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;regQueryValueEx:&lt;/span&gt; &lt;span class="nv"&gt;valueName&lt;/span&gt;
        &lt;span class="nf"&gt;lpReserved:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
        &lt;span class="nf"&gt;lpType:&lt;/span&gt; &lt;span class="bp"&gt;nil&lt;/span&gt;
        &lt;span class="nf"&gt;lpData:&lt;/span&gt; &lt;span class="nv"&gt;buffer&lt;/span&gt; &lt;span class="nf"&gt;asParameter&lt;/span&gt;
        &lt;span class="nf"&gt;lpcbData:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OSUInt32&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;uint32At:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="nf"&gt;put:&lt;/span&gt; &lt;span class="nv"&gt;buffer&lt;/span&gt; &lt;span class="nf"&gt;size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nf"&gt;yourself&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;asParameter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;buffer&lt;/span&gt; &lt;span class="nf"&gt;asByteArray&lt;/span&gt; &lt;span class="nf"&gt;uint64At:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;inspect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
  &lt;span class="p"&gt;].&lt;/span&gt; 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What’s the path of my loaded DLL?
&lt;/h3&gt;

&lt;p&gt;Another typical problem when dealing with DLLs is that loading works for certain VAST images or machines, but not for others. If you didn’t specify a full path for that DLL in the &lt;code&gt;.ini&lt;/code&gt; file, then it is useful to know exactly which DLL (with its full path) is being picked up by Windows in both cases.&lt;/p&gt;

&lt;p&gt;For that, you can use the VAST script below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;
&lt;span class="c"&gt;"Disclaimer: this assumes you already loaded the dll into the running process"&lt;/span&gt;
 &lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; m p &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;
 &lt;span class="nv"&gt;m&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;OSHmodule&lt;/span&gt; &lt;span class="nf"&gt;getModuleHandle:&lt;/span&gt; &lt;span class="s"&gt;'libeay32.dll'&lt;/span&gt; &lt;span class="nf"&gt;asPSZ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;m&lt;/span&gt; &lt;span class="nf"&gt;isNull&lt;/span&gt; &lt;span class="nb"&gt;ifFalse:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="nv"&gt;p&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;new:&lt;/span&gt; &lt;span class="m"&gt;256&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
  &lt;span class="nv"&gt;m&lt;/span&gt; &lt;span class="nf"&gt;getModuleFileName:&lt;/span&gt; &lt;span class="nv"&gt;p&lt;/span&gt; &lt;span class="nf"&gt;cbFileName:&lt;/span&gt; &lt;span class="nv"&gt;p&lt;/span&gt; &lt;span class="nf"&gt;size&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
  &lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'Loaded DLL: '&lt;/span&gt;&lt;span class="nf"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;p&lt;/span&gt; &lt;span class="nf"&gt;trimNull&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nb"&gt;ifTrue:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;'Can''t find module!'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.]&lt;/span&gt; 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which prints something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Loaded DLL: z:\Common\Development\Images\10.0.0.x64-b466-dev\libeay32.dll

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another way to do this (outside of VAST) is using the “Process Explorer” tool as explained in &lt;a href="https://dev.to/martinezpeck/troubleshooting-applications-running-on-windows-107m"&gt;this previous post&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Check DLL dependencies
&lt;/h3&gt;

&lt;p&gt;Many times I’ve thought: “I swear the DLL is there but VAST fails to load it”. It could be that the DLL you are trying to load is in the correct place and Windows is indeed trying to load it, but the load fails because a &lt;strong&gt;dependency&lt;/strong&gt; of that DLL couldn’t be found.&lt;/p&gt;

&lt;p&gt;For example… say we tried to load &lt;code&gt;ssleay32.dll&lt;/code&gt; and it failed. &lt;a href="https://dev.to/martinezpeck/troubleshooting-applications-running-on-windows-107m"&gt;As shown in a previous post&lt;/a&gt;, we can use the &lt;code&gt;dumpbin&lt;/code&gt; tool to identify the dependencies of that DLL and make sure that we are not missing anything:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i1.wp.com/marianopeck.blog/wp-content/uploads/2020/10/Screen-Shot-2020-10-16-at-4.33.55-PM.png?ssl=1"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FgK8DUaV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2020/10/Screen-Shot-2020-10-16-at-4.33.55-PM.png%3Fresize%3D748%252C327%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, in this example, &lt;code&gt;ssleay32.dll&lt;/code&gt; depends on &lt;code&gt;libeay32.dll&lt;/code&gt; (as well as others). So… if &lt;code&gt;libeay32.dll&lt;/code&gt; or any other of the dependencies cannot be found, the load of the root DLL (&lt;code&gt;ssleay32&lt;/code&gt; in this example) will just fail.&lt;/p&gt;

&lt;p&gt;Always check dependencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify the bitness of your executable and the DLL
&lt;/h3&gt;

&lt;p&gt;A variation of the previous problem: “I swear the DLL is there but VAST fails to load it”. If you use both 32 and 64-bit programs, you may eventually face this issue. When you are looking at a DLL and thinking Windows should have been able to load it, but it couldn’t even though the dependencies are fine, the other thing to check is the bitness. VAST 64-bit can only load 64-bit DLLs… and the same goes for 32-bit.&lt;/p&gt;

&lt;p&gt;For this, I don’t use a VAST script but rather a &lt;a href="https://superuser.com/a/889267"&gt;simple technique using a text editor&lt;/a&gt;. Very useful to keep in mind.&lt;/p&gt;
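&lt;p&gt;If you already have &lt;code&gt;dumpbin&lt;/code&gt; around, it can also answer the bitness question: the &lt;code&gt;machine&lt;/code&gt; field of the file headers tells you the architecture (&lt;code&gt;mylib.dll&lt;/code&gt; below is just a placeholder name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
dumpbin /headers mylib.dll | findstr machine

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A 64-bit DLL reports something like &lt;code&gt;8664 machine (x64)&lt;/code&gt;, while a 32-bit one reports &lt;code&gt;14C machine (x86)&lt;/code&gt;.&lt;/p&gt;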

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Hopefully these tips and tricks are useful for you! Do you have another trick to share? I would love to hear about it.&lt;/p&gt;

</description>
      <category>debugging</category>
      <category>dll</category>
      <category>windows</category>
      <category>smalltalk</category>
    </item>
    <item>
      <title>Troubleshooting Applications Running on Windows</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Thu, 08 Oct 2020 19:33:13 +0000</pubDate>
      <link>https://dev.to/martinezpeck/troubleshooting-applications-running-on-windows-107m</link>
      <guid>https://dev.to/martinezpeck/troubleshooting-applications-running-on-windows-107m</guid>
      <description>&lt;p&gt;Over the past few months, I’ve been troubleshooting hard problems that have appeared when running the &lt;a href="https://www.instantiations.com/products/vasmalltalk/index.html" rel="noopener noreferrer"&gt;VAST Platform (VA Smalltalk)&lt;/a&gt; on Windows. Some of the problems were indeed bugs (like sockets leaking under a particular scenario) and some were just Windows or customer issues.&lt;/p&gt;

&lt;p&gt;Regardless of where the problem was, I learned much about certain tools and tricks that could help me again in the future or possibly help others too. Hence, the reason for this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Process Explorer
&lt;/h2&gt;

&lt;p&gt;I had heard about the “&lt;a href="https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer" rel="noopener noreferrer"&gt;Process Explorer&lt;/a&gt;” tool before, but I had not used it until recently. It is a really small application (3MB) that allows you to analyze many aspects of running processes: loaded DLLs, threads, open file handles, and many more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-10.51.29-AM.png%3Fresize%3D748%252C645%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-10.51.29-AM.png%3Fresize%3D748%252C645%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-10.52.37-AM.png%3Fresize%3D748%252C653%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-10.52.37-AM.png%3Fresize%3D748%252C653%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my case, the two features I used the most were the ability to know:&lt;/p&gt;

&lt;p&gt;(1) Which exact DLLs (with full path) were loaded in the process (for one of the problems a wrong version of the DLL was being loaded)&lt;/p&gt;

&lt;p&gt;(2) The list of open file handles (another problem was a leak getting close to the maximum allowed file handles per process)&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;PRO TIP:&lt;/strong&gt; You don’t even need to install “Process Explorer” (sometimes installing programs at large companies isn’t easy), as it comes as a kind of “portable” executable that you just double-click.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Dumpbin
&lt;/h2&gt;

&lt;p&gt;A shared library could (and very likely will) depend on other shared libraries (dependencies).&lt;/p&gt;

&lt;p&gt;One typical problem when you dynamically load shared libraries is that the load of the library fails because the operating system can’t find one or more of the required libraries.&lt;/p&gt;

&lt;p&gt;On Linux, we would normally use something like &lt;code&gt;ldd mylib.so&lt;/code&gt; and check for any “not found” entries. But for me, it wasn’t obvious how to do that on Windows. I knew about “&lt;a href="https://www.dependencywalker.com/" rel="noopener noreferrer"&gt;Dependency Walker&lt;/a&gt;”, but I understand that it does not support Windows 10 very well, nor is it officially supported by Microsoft.&lt;/p&gt;
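&lt;p&gt;For reference, a missing dependency on the Linux side shows up quite literally as “not found” in the &lt;code&gt;ldd&lt;/code&gt; output (&lt;code&gt;mylib.so&lt;/code&gt; and &lt;code&gt;libfoo&lt;/code&gt; here are made-up names):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
ldd mylib.so
        libfoo.so.1 =&amp;gt; not found
        libc.so.6 =&amp;gt; /lib/x86_64-linux-gnu/libc.so.6 (0x...)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;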

&lt;p&gt;Finally, I found the “&lt;a href="https://docs.microsoft.com/en-us/cpp/build/reference/dumpbin-reference?view=vs-2019" rel="noopener noreferrer"&gt;dumpbin&lt;/a&gt;” tool which &lt;em&gt;is&lt;/em&gt; officially supported by Microsoft and allowed me to do what I needed…and more. If you have Microsoft Visual Studio, then you probably already have the tool (although it’s very very hidden). Otherwise, you can install the “&lt;a href="https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk/" rel="noopener noreferrer"&gt;Windows 10 SDK&lt;/a&gt;“.&lt;/p&gt;

&lt;p&gt;The feature I needed this time was &lt;code&gt;/dependents&lt;/code&gt;, as you can see in the example below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-11.18.33-AM.png%3Fresize%3D748%252C382%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-11.18.33-AM.png%3Fresize%3D748%252C382%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check the documentation for all the possible command line arguments.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Microsoft Error Lookup Tool
&lt;/h2&gt;

&lt;p&gt;I am a little bit ashamed that I only discovered this tool recently. A customer reported a Windows error in hexadecimal format. Sure, I could Google that exact hex error code, but the &lt;a href="https://docs.microsoft.com/en-us/windows/win32/debug/system-error-code-lookup-tool" rel="noopener noreferrer"&gt;Microsoft Error Lookup Tool&lt;/a&gt; is very handy. You just pass the hex code as an argument, and it will show all the matches Windows found for that error and also print the C header file in which each one is defined.&lt;/p&gt;

&lt;p&gt;Here is a simple example with the hex code 0x80070583 that I was investigating due to a COM initialization problem:&lt;/p&gt;
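&lt;p&gt;The tool is a single console executable (the download has a versioned name such as &lt;code&gt;Err_6.4.5.exe&lt;/code&gt;; I renamed my copy to &lt;code&gt;err.exe&lt;/code&gt;), and you simply pass the code as an argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
err.exe 0x80070583

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;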

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-11.34.27-AM.png%3Fresize%3D748%252C222%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-11.34.27-AM.png%3Fresize%3D748%252C222%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is an example with more than one result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-11.36.28-AM.png%3Fresize%3D748%252C325%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-11.36.28-AM.png%3Fresize%3D748%252C325%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Forcing Windows to re-load manifest files
&lt;/h2&gt;

&lt;p&gt;VAST Platform 9.2 comes with full native HiDPI support for Windows. For that, it needs some lines in the manifest file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt; &lt;span class="nt"&gt;&amp;lt;compatibility&lt;/span&gt; &lt;span class="na"&gt;xmlns=&lt;/span&gt;&lt;span class="s"&gt;"urn:schemas-microsoft-com:compatibility.v1"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt; 
   &lt;span class="nt"&gt;&amp;lt;application&amp;gt;&lt;/span&gt; 
       &lt;span class="c"&gt;&amp;lt;!-- Windows 10 --&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;supportedOS&lt;/span&gt; &lt;span class="na"&gt;Id=&lt;/span&gt;&lt;span class="s"&gt;"{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
       &lt;span class="c"&gt;&amp;lt;!-- Windows 8.1 --&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;supportedOS&lt;/span&gt; &lt;span class="na"&gt;Id=&lt;/span&gt;&lt;span class="s"&gt;"{1f676c76-80e1-4239-95bb-83d0f6d0da78}"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
       &lt;span class="c"&gt;&amp;lt;!-- Windows 8 --&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;supportedOS&lt;/span&gt; &lt;span class="na"&gt;Id=&lt;/span&gt;&lt;span class="s"&gt;"{4a2f28e3-53b9-4441-ba9c-d69d4a4a6e38}"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
       &lt;span class="c"&gt;&amp;lt;!-- Windows 7 --&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;supportedOS&lt;/span&gt; &lt;span class="na"&gt;Id=&lt;/span&gt;&lt;span class="s"&gt;"{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
       &lt;span class="c"&gt;&amp;lt;!-- Windows Vista --&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;supportedOS&lt;/span&gt; &lt;span class="na"&gt;Id=&lt;/span&gt;&lt;span class="s"&gt;"{e2011457-1546-43c5-a5fe-008deee3d3f0}"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt; 
   &lt;span class="nt"&gt;&amp;lt;/application&amp;gt;&lt;/span&gt; 
 &lt;span class="nt"&gt;&amp;lt;/compatibility&amp;gt;&lt;/span&gt;
 &lt;span class="nt"&gt;&amp;lt;application&amp;gt;&lt;/span&gt; 
   &lt;span class="nt"&gt;&amp;lt;windowsSettings&amp;gt;&lt;/span&gt; 
     &lt;span class="nt"&gt;&amp;lt;dpiAwareness&lt;/span&gt; &lt;span class="na"&gt;xmlns=&lt;/span&gt;&lt;span class="s"&gt;"http://schemas.microsoft.com/SMI/2016/WindowsSettings"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;PerMonitor&lt;span class="nt"&gt;&amp;lt;/dpiAwareness&amp;gt;&lt;/span&gt; 
     &lt;span class="nt"&gt;&amp;lt;dpiAware&lt;/span&gt; &lt;span class="na"&gt;xmlns=&lt;/span&gt;&lt;span class="s"&gt;"http://schemas.microsoft.com/SMI/2005/WindowsSettings"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;True/PM&lt;span class="nt"&gt;&amp;lt;/dpiAware&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;/windowsSettings&amp;gt;&lt;/span&gt; 
 &lt;span class="nt"&gt;&amp;lt;/application&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One problem we detected is that Windows seems to cache the data of the manifest file, even if you edit it and reboot. We researched possible workarounds, and a good one was described in &lt;a href="http://csi-windows.com/blog/all/27-csi-news-general/245-find-out-why-your-external-manifest-is-being-ignored" rel="noopener noreferrer"&gt;this post&lt;/a&gt;. However, we found an easier trick that worked for us. Example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-11.52.57-AM.png%3Fresize%3D748%252C218%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-11.52.57-AM.png%3Fresize%3D748%252C218%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrong Locale when running your program as a Windows Service
&lt;/h2&gt;

&lt;p&gt;If you normally use the “US” region on Windows, you may never have faced this issue. But if you use custom locale settings (e.g. German/Germany), you may find that they are not properly detected by your program when it runs as a Windows service, even though they are when it runs from a console.&lt;/p&gt;

&lt;p&gt;When you run a program from a normal CMD console, it runs as the currently logged-in Windows user (e.g. the “Mariano” user). But when you register your application as a service, the log-on user will probably be “Local System” (the default). You can confirm this in the Services list by checking the “Log On As” column.&lt;/p&gt;

&lt;p&gt;The problem is that the “Local System” locale settings are not necessarily the same as those of your current Windows user (e.g. “Mariano”). The solution is to apply the locale settings to the “Local System” user as well. &lt;a href="http://www.perennitysoft.com/support/kb/faq.php?id=86" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is how you do it.&lt;/p&gt;
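&lt;p&gt;As a sketch of where those settings live (the key path is standard, but the value shown is a hypothetical example): services running as “Local System” read their locale from the &lt;code&gt;HKEY_USERS\.DEFAULT&lt;/code&gt; hive rather than from your own &lt;code&gt;HKEY_CURRENT_USER&lt;/code&gt;:&lt;/p&gt;

```plaintext
Windows Registry Editor Version 5.00

; Hypothetical example: give the "Local System" account a German locale.
; Services running as Local System read this hive, not HKEY_CURRENT_USER.
[HKEY_USERS\.DEFAULT\Control Panel\International]
"LocaleName"="de-DE"
```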

&lt;h2&gt;
  
  
  OpenSSL for Windows
&lt;/h2&gt;

&lt;p&gt;OpenSSL is becoming more critical every day, and not only for HTTPS but also for SMTP, MQ, SFTP, and SSH, just to name a few. As you probably know, &lt;a href="https://www.openssl.org/community/binaries.html" rel="noopener noreferrer"&gt;there is no “official” binary download for Windows&lt;/a&gt;. That means there are &lt;a href="https://wiki.openssl.org/index.php/Binaries" rel="noopener noreferrer"&gt;a few different 3rd-party places&lt;/a&gt; from which you can download pre-compiled binaries. You can even compile it yourself. This, combined with OpenSSL’s growing ubiquity, can become a nightmare if you are trying to load OpenSSL dynamically and are relying on the &lt;a href="https://docs.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-search-order#search-order-for-desktop-applications" rel="noopener noreferrer"&gt;Windows DLL lookup mechanism&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The previously mentioned Process Explorer may help you discover which exact OpenSSL DLL you are loading, but after that, it helps to know more about that DLL. Which version is it? Which parts were included when it was compiled? To help answer those questions, we found this great website: &lt;a href="https://www.howsmyssl.com/" rel="noopener noreferrer"&gt;https://www.howsmyssl.com/&lt;/a&gt;. You can do a normal HTTP GET with your favorite client (curl, wget, a web browser, etc.) and it will answer with a nice HTML page containing a lot of info. Example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-12.11.58-PM.png%3Fresize%3D748%252C448%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2020%2F10%2FScreen-Shot-2020-10-08-at-12.11.58-PM.png%3Fresize%3D748%252C448%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This will probably just be a reminder for “future me” when I encounter these problems again. But in the meantime, I hope this was helpful for others too.&lt;/p&gt;

&lt;p&gt;Thanks for reading.&lt;/p&gt;

</description>
      <category>debugging</category>
      <category>troubleshooting</category>
      <category>windows</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Deploying VASmalltalk on Amazon AWS ARM servers</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Tue, 14 Apr 2020 12:26:22 +0000</pubDate>
      <link>https://dev.to/martinezpeck/deploying-vasmalltalk-on-amazon-aws-arm-servers-aan</link>
      <guid>https://dev.to/martinezpeck/deploying-vasmalltalk-on-amazon-aws-arm-servers-aan</guid>
      <description>&lt;p&gt;A while ago, I did some experiments with &lt;a href="https://aws.amazon.com/"&gt;Amazon AWS&lt;/a&gt; and &lt;a href="https://aws.amazon.com/ec2/instance-types/"&gt;EC2 instances&lt;/a&gt;. Back then, they weren’t very different from a VPS except that you were charged based on the usage and the servers were easier to grow or shrink.&lt;/p&gt;

&lt;p&gt;But… much has changed since then and AWS now offers many services and features. One of them particularly caught my attention since it was announced: &lt;a href="https://aws.amazon.com/blogs/aws/new-ec2-instances-a1-powered-by-arm-based-aws-graviton-processors/"&gt;ARM-based instances&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As you may have read in my previous posts, I have been doing a lot of work with the VA Smalltalk platform together with &lt;a href="https://dev.to/martinezpeck/docker-swarm-cloud-on-a-arm64-diy-sbc-cluster-running-a-smalltalk-webapp-9l1"&gt;Docker on ARM&lt;/a&gt;, &lt;a href="https://dev.to/martinezpeck/getting-started-with-vasmalltalk-raspberry-pi-and-other-devices-25g3-temp-slug-2060664"&gt;Raspberry Pi and IoT&lt;/a&gt;, &lt;a href="https://dev.to/martinezpeck/getting-started-with-nvidia-jetson-nano-tensorflow-and-smalltalk-23mk"&gt;Machine Learning on the Edge&lt;/a&gt;, etc. Therefore, trying an EC2 ARM instance with Docker and running a VA Smalltalk program inside the container was just logical.&lt;/p&gt;

&lt;p&gt;In this post, you will read how easy it is to deploy a VA Smalltalk web application into an A1 EC2 instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Account Setup
&lt;/h2&gt;

&lt;p&gt;Funnily enough, this is the step that took the most time! Hahaha. If you don’t have an AWS account, you need to set one up with payment information and all the other necessary info.&lt;/p&gt;

&lt;p&gt;Once that is done, the next step is to create an ACCESS_KEY and a SECRET_ACCESS_KEY. You can read &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey"&gt;this guide for details&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Selecting the EC2 Instance
&lt;/h2&gt;

&lt;p&gt;Here you can see the list of &lt;a href="https://aws.amazon.com/ec2/instance-types/"&gt;all possible EC2 instances&lt;/a&gt;. There are both ARM and Intel-based instance families (A1, T3, M6g, etc.). For each family there are several possible configurations (number of processors, memory, disk, GPU, bandwidth, etc.), for example: &lt;code&gt;a1.medium&lt;/code&gt;, &lt;code&gt;a1.large&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In addition, the instance types are split into categories such as “General Purpose”, “Accelerated Computing”, etc., so it’s really easy to find an instance and configuration that matches your specific needs and budget.&lt;/p&gt;

&lt;p&gt;For this demo, I will use an A1 large, which is a general purpose instance with an AWS Graviton Processor featuring 64-bit ARM Neoverse cores.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Instance
&lt;/h2&gt;

&lt;p&gt;Once you have selected the type and configuration of instance you want, the obvious next step is to create the instance and start it. There are many ways of doing that. One way is using the AWS web console: go to “Services” -&amp;gt; “EC2” and then “Launch Instance”. This opens a wizard that lets you choose the OS you want, the network configuration, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Z34vDC3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-12.49.49-PM.png%3Ffit%3D748%252C335%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Z34vDC3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-12.49.49-PM.png%3Ffit%3D748%252C335%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another way is using &lt;code&gt;docker-machine&lt;/code&gt; to create the instance which is described in the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating and managing instances with docker-machine
&lt;/h2&gt;

&lt;p&gt;Because I wanted to use Docker containers to run my demo web application, I thought it would also be useful to give &lt;a href="https://docs.docker.com/machine/overview/"&gt;Docker Machine&lt;/a&gt; a try. Docker Machine is a set of tools to create, delete, start, stop, and manage different kinds of machines. It supports different backends (drivers) like EC2, VirtualBox, etc.&lt;/p&gt;

&lt;p&gt;What is great is that, once you get used to the commands, you can use that common API across different providers. The tool comes with the “Docker Desktop” app (at least on macOS and Windows).&lt;/p&gt;

&lt;p&gt;So, the first thing I did was to create &lt;code&gt;~/.aws/credentials&lt;/code&gt; to avoid having to pass &lt;code&gt;ACCESS_KEY&lt;/code&gt; and &lt;code&gt;SECRET_ACCESS_KEY&lt;/code&gt; as arguments to &lt;code&gt;docker-machine&lt;/code&gt; commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[default] 
aws_access_key_id = XXX 
aws_secret_access_key = YYY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
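&lt;p&gt;Since this file holds your secret key, it’s worth restricting it so only your user can read it (a small, general precaution; not specific to &lt;code&gt;docker-machine&lt;/code&gt;):&lt;/p&gt;

```shell
# Create the AWS credentials file if it does not exist yet,
# then make it readable and writable by the owner only.
mkdir -p ~/.aws
touch ~/.aws/credentials
chmod 600 ~/.aws/credentials
ls -l ~/.aws/credentials   # permissions should now read -rw-------
```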



&lt;p&gt;Now, let’s create the machine!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-machine create &lt;span class="nt"&gt;--driver&lt;/span&gt; amazonec2 &lt;span class="nt"&gt;--amazonec2-region&lt;/span&gt; us-east-2 &lt;span class="nt"&gt;--amazonec2-ami&lt;/span&gt; ami-0959e8feedaf156bf &lt;span class="nt"&gt;--amazonec2-instance-type&lt;/span&gt; a1.large &lt;span class="nt"&gt;--amazonec2-open-port&lt;/span&gt; 9000 &lt;span class="nt"&gt;--amazonec2-open-port&lt;/span&gt; 7777 &lt;span class="nt"&gt;--amazonec2-open-port&lt;/span&gt; 80 arm-a1-docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few details here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By specifying &lt;code&gt;--driver amazonec2&lt;/code&gt; you are saying you will use EC2 instances. You can check all possible arguments for this driver &lt;a href="https://github.com/Nordstrom/docker-machine/blob/master/docs/drivers/aws.md"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--amazonec2-ami ami-0959e8feedaf156bf&lt;/code&gt; is the AMI (Amazon Machine Image) to use. In this case, &lt;code&gt;ami-0959e8feedaf156bf&lt;/code&gt; is Ubuntu 18.04 for ARM 64-bit. Make sure the selected AMI is available in the specified &lt;code&gt;--amazonec2-region&lt;/code&gt;; as far as I understand, not every AMI is available in every region. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--amazonec2-instance-type a1.large&lt;/code&gt; specifies the A1 large instance, as already stated.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--amazonec2-open-port&lt;/code&gt; opens ports in the VM; in particular, I am opening 80 and 7777 (for my web app) and 9000 for the &lt;a href="https://www.portainer.io/"&gt;portainer.io&lt;/a&gt; tool. Note that when using &lt;code&gt;docker-machine&lt;/code&gt; to create the instance, a security group called &lt;code&gt;docker-machine&lt;/code&gt; will also be added in AWS with ports 22 (SSH) and 2376 (Docker) already open. You may want to &lt;a href="https://docs.docker.com/engine/security/https/"&gt;protect the Docker port&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;Finally, &lt;code&gt;arm-a1-docker&lt;/code&gt; is the name you give to the instance. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the create command is finished, you can check in the web console and confirm the instance is present:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OK51oaKq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-1.59.00-PM.png%3Ffit%3D748%252C235%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OK51oaKq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-1.59.00-PM.png%3Ffit%3D748%252C235%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it, now you can use whatever &lt;code&gt;docker-machine&lt;/code&gt; command you want, for example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZgGdNSLf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-2.04.04-PM.png%3Fw%3D748%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZgGdNSLf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-2.04.04-PM.png%3Fw%3D748%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running Docker in the created instance
&lt;/h2&gt;

&lt;p&gt;When creating the instance with &lt;code&gt;docker-machine&lt;/code&gt; and the AMI I mentioned, &lt;code&gt;docker&lt;/code&gt; comes already installed in the VM. The only thing I had to do was connect to the server via SSH and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;USER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
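&lt;p&gt;A note on this step: the new group membership only takes effect on a new login session, so reconnect via SSH first. A quick way to check whether it is active:&lt;/p&gt;

```shell
# List the current user's groups and look for "docker".
# If it is missing, log out and back in after the usermod above.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active"
else
  echo "docker group not active yet (log out and back in)"
fi
```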



&lt;p&gt;After I did that, I was able to successfully run Docker hello world:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XP-hs7yU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-2.09.19-PM.png%3Fw%3D748%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XP-hs7yU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-2.09.19-PM.png%3Fw%3D748%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running VA Smalltalk Web Application with Docker on the A1 Instance
&lt;/h2&gt;

&lt;p&gt;The server and docker are now running. Deploying the real project is the only thing left to do. For that, I will use the same web app example “Seaside Traffic Light” that I have been using in my previous posts. There is even a working &lt;a href="https://github.com/vasmalltalk/docker-examples/tree/master/source/SeasideTrafficLights/Raspberry64"&gt;ARM docker example for it on Github&lt;/a&gt;:&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--oIyBseCK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1062868826755424256/Mmsm-X5G_normal.jpg" alt="Mariano Martinez Peck profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Mariano Martinez Peck
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        &lt;a class="comment-mentioned-user" href="https://dev.to/martinezpeck"&gt;@martinezpeck&lt;/a&gt;

      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      My first &lt;a href="https://twitter.com/hashtag/ARM"&gt;#ARM&lt;/a&gt; 64 &lt;a href="https://twitter.com/hashtag/Docker"&gt;#Docker&lt;/a&gt; Swarm "2 nodes cluster" (&lt;a href="https://twitter.com/picocluster"&gt;@picocluster&lt;/a&gt; I will get to you sooner or later) with a &lt;a href="https://twitter.com/hashtag/RaspberryPi"&gt;#RaspberryPi&lt;/a&gt; 3 (Ubuntu 18.04) and a &lt;a href="https://twitter.com/hashtag/Rock64"&gt;#Rock64&lt;/a&gt; (Armbian).  &lt;a href="https://twitter.com/traefik"&gt;@traefik&lt;/a&gt; load balancing 10 &lt;a href="https://twitter.com/hashtag/VASmalltalk"&gt;#VASmalltalk&lt;/a&gt; 64 images running a &lt;a href="https://twitter.com/hashtag/Seaside"&gt;#Seaside&lt;/a&gt; webapp. &lt;a href="https://twitter.com/hashtag/Smalltalk"&gt;#Smalltalk&lt;/a&gt; Code: &lt;a href="https://t.co/fQYmH32Bt7"&gt;github.com/vasmalltalk/do…&lt;/a&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
15:15 - 08 May 2019
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1126143503262728197" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1126143503262728197" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      5
      &lt;a href="https://twitter.com/intent/like?tweet_id=1126143503262728197" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      11
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;So all I have to do is clone that repo, build the Docker image, and start the Docker Swarm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;
&lt;span class="nb"&gt;mkdir &lt;/span&gt;Instantiations
&lt;span class="nb"&gt;cd &lt;/span&gt;Instantiations
git clone https://github.com/vasmalltalk/docker-examples.git
&lt;span class="nb"&gt;cd &lt;/span&gt;docker-examples/source/SeasideTrafficLights/Raspberry64/
docker build &lt;span class="nt"&gt;-f&lt;/span&gt; ./debian_slim_Dockerfile &lt;span class="nt"&gt;-t&lt;/span&gt; seaside-debian-slim .
./startSwarm.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it. We should now have a Docker Swarm running 10 VA Smalltalk containers (as specified in the &lt;a href="https://github.com/vasmalltalk/docker-examples/blob/master/source/SeasideTrafficLights/Raspberry64/docker-compose.yml#L15"&gt;docker-compose file&lt;/a&gt; of that example) and you should be able to access your web app at &lt;a href="http://YOUR_SERVER_IP/trafficlight"&gt;http://YOUR_SERVER_IP/trafficlight&lt;/a&gt;. Pro tip: you can use the command &lt;code&gt;docker-machine ip arm-a1-docker&lt;/code&gt; to get the external IP of the instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dV9NwUF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-3.48.49-PM.png%3Ffit%3D400%252C920%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dV9NwUF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-3.48.49-PM.png%3Ffit%3D400%252C920%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Docker containers with Portainer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.portainer.io/"&gt;Portainer&lt;/a&gt; is a nice tool for managing and monitoring Docker containers from within a web console. The best thing in my opinion is how easy it is to run. Guess how? Of course, it’s another Docker container! All you need to do is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume create portainer_datadocker
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 9000:9000 &lt;span class="nt"&gt;-v&lt;/span&gt; /var/run/docker.sock:/var/run/docker.sock &lt;span class="nt"&gt;-v&lt;/span&gt; portainer_data:/data portainer/portainer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then you can open a web browser on: &lt;a href="http://YOUR_SERVER_IP:9000/"&gt;http://YOUR_SERVER_IP:9000/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6X_tklsA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-3.59.04-PM.png%3Ffit%3D748%252C340%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6X_tklsA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2020/04/Screen-Shot-2020-04-06-at-3.59.04-PM.png%3Ffit%3D748%252C340%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;This was a very short introduction, but it shows how easy it is to deploy a VA Smalltalk project into AWS instances. Hopefully, there will be more experiments to come… Stay tuned!&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>aws</category>
      <category>docker</category>
      <category>smalltalk</category>
    </item>
    <item>
      <title>"El Carrillon" project featured in the RaspberryPi magazine MagPi</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Fri, 28 Feb 2020 15:05:38 +0000</pubDate>
      <link>https://dev.to/martinezpeck/el-carrillon-project-featured-in-the-raspberrypi-magazine-magpi-2fpp</link>
      <guid>https://dev.to/martinezpeck/el-carrillon-project-featured-in-the-raspberrypi-magazine-magpi-2fpp</guid>
      <description>&lt;p&gt;Some time ago &lt;a href="https://dev.to/martinezpeck/developing-testing-and-mocking-the-largest-midi-instrument-of-the-world-with-a-raspberry-pi-and-a-diy-leds-piano-7el"&gt;I blogged about a fantastic project Gerardo Richarte and I were doing with an 18-bell tower located in Argentina&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Back then, I showed some details of the architecture, code, and how I mocked-up “El Carrillon” with a homemade LED piano so I could test it at home.&lt;/p&gt;

&lt;p&gt;Later on, I wrote &lt;a href="https://dev.to/martinezpeck/el-carrillon-playing-midi-songs-on-an-18-bell-tower-with-a-raspberry-pi-and-smalltalk-3f1f"&gt;another post&lt;/a&gt;, where you can see it running live and read some great news about the project (including an award!).&lt;/p&gt;

&lt;p&gt;In this quick post, I am so happy to share with you some BIG NEWS:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/gerasdf/carrillon"&gt;El Carrillon&lt;/a&gt; is the first &lt;a href="https://www.instantiations.com/iot/index.html"&gt;VASmalltalk IoT&lt;/a&gt; project to appear as a showcase in &lt;a href="https://magpi.raspberrypi.org/issues/91"&gt;the official RaspberryPi magazine, The MagPi!&lt;/a&gt;. It was a pleasure for me to work on the project with Gerardo Richarte, and thanks to Instantiations for their support! &lt;/p&gt;

&lt;p&gt;Thanks for re-sharing via Twitter or the &lt;a href="https://www.linkedin.com/posts/mariano-martinez-peck_vasmalltalk-iot-raspberrypi-activity-6638911668254490625-qLQ7"&gt;LinkedIn post&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>iot</category>
      <category>raspberrypi</category>
      <category>coding</category>
      <category>smalltalk</category>
    </item>
    <item>
      <title>El Carrillon: playing MIDI songs on an 18-bell tower with a Raspberry Pi and Smalltalk</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Wed, 04 Dec 2019 11:49:56 +0000</pubDate>
      <link>https://dev.to/martinezpeck/el-carrillon-playing-midi-songs-on-an-18-bell-tower-with-a-raspberry-pi-and-smalltalk-3f1f</link>
      <guid>https://dev.to/martinezpeck/el-carrillon-playing-midi-songs-on-an-18-bell-tower-with-a-raspberry-pi-and-smalltalk-3f1f</guid>
      <description>&lt;p&gt;Some time ago &lt;a href="https://dev.to/martinezpeck/developing-testing-and-mocking-the-largest-midi-instrument-of-the-world-with-a-raspberry-pi-and-a-diy-leds-piano-7el"&gt;I blogged about a fantastic project Gerardo Richarte and I were doing with an 18-bell tower located in Argentina&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Back then, I showed some details of the architecture, code, and how I mocked-up “El Carrillon” with a homemade LED piano so I could test it at home.&lt;/p&gt;

&lt;p&gt;In this post, you will see it running live and read some great news about the project!&lt;/p&gt;

&lt;h3&gt;
  
  
  Playing live at “La Fiesta Nacional de la Flor”
&lt;/h3&gt;

&lt;p&gt;“El Carrillon” had to play not only during my LED piano testing, but also on the real hardware with real bells during a huge event last November called: &lt;a href="http://www.fiestadelaflor.org.ar/web/"&gt;“La Fiesta Nacional de la Flor”&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We first started testing at home with the real hardware, but without the bells, to confirm the GPIOs were responding correctly.&lt;/p&gt;


&lt;p&gt;Once that was working, the next obvious step was to move to production and test with physical bells!! The mechanical/hardware part of some bells needed repair, so it took some time until we were able to test in person. Once we did, everything worked as expected and without much trouble.&lt;/p&gt;

&lt;p&gt;Deploying was really easy thanks to the “image” concept of Smalltalk and &lt;a href="https://github.com/gerasdf/carrillon/tree/master/scripts"&gt;a few bash scripts that we had prepared&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The good news is that it worked and was a huge success.&lt;/p&gt;


&lt;h3&gt;
  
  
  Won 3rd place at the Innovation Technology Awards at ESUG 2019
&lt;/h3&gt;

&lt;p&gt;We presented this project at the &lt;a href="https://esug.github.io/2019-Conference/awardsSubmissions.html"&gt;Innovation Technology Awards at ESUG 2019&lt;/a&gt; held in Koln, Germany. Below is the teaser video we submitted before the competition:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/mP-7XB4fnao"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Our efforts were worthwhile as we won 3rd place! Thanks to all that voted for us!&lt;/p&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I personally believe this was a great example of using IoT and Smalltalk. We used &lt;a href="https://www.instantiations.com/products/vasmalltalk/"&gt;VAST (VA Smalltalk)&lt;/a&gt; for managing the GPIOs and running the web application, all within a Raspberry Pi Zero.&lt;/p&gt;

&lt;p&gt;VAST has great development and debugging tools, minimal Smalltalk images, good GPIO libraries, and excellent ARM support.&lt;/p&gt;

&lt;p&gt;Let’s do more IoT projects with Smalltalk!!!&lt;/p&gt;

&lt;p&gt;PS: The official Raspberry Pi Twitter account liked our project! :)&lt;/p&gt;


</description>
      <category>iot</category>
      <category>raspberrypi</category>
      <category>coding</category>
      <category>smalltalk</category>
    </item>
    <item>
      <title>It’s 2019: stop reading text and start debugging!</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Tue, 17 Sep 2019 11:31:41 +0000</pubDate>
      <link>https://dev.to/martinezpeck/it-s-2019-stop-reading-text-and-start-debugging-2460</link>
      <guid>https://dev.to/martinezpeck/it-s-2019-stop-reading-text-and-start-debugging-2460</guid>
      <description>&lt;p&gt;If you are a programmer, I am sure you have found yourself many many times “debugging” a system by writing strings to the console / stdout / file. Those messages were usually something like “passed here!”, “false branch”, “now about to enter loop” and maybe even some “fuc..” or “WTF” too.&lt;/p&gt;

&lt;p&gt;If you were “lucky” and your system went into production, it’s very likely that you had to support and maintain it. How many times have you found yourself “debugging” an error in production by reading hundreds of log files?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;That’s not debugging. That’s text reading.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BUOzZWK_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/textDebugging.jpg%3Fw%3D748%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BUOzZWK_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/textDebugging.jpg%3Fw%3D748%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Imagine if you could live-debug a deployed application for real (with a real debugger), even one running remotely. Or suppose that when the application hit an error, instead of writing a text-based stack trace, it could dump a snapshot of the execution stack (with all its variables and state) so you could do a post-mortem analysis in your development environment (yes, again, with a real debugger).&lt;/p&gt;

&lt;p&gt;Is this how debugging will work in the future? No. I’ve already been doing it for years. Smalltalk has had this ability since at least 1995.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Don’t text the console. Inquire the process.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What is wrong with text reading and poor man’s debugging?
&lt;/h3&gt;

&lt;p&gt;I could write a whole blog post about this, but here are some thoughts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It’s a very unproductive and unpleasant task. It’s even worse if you have to do the stop – change (add logging) – start cycle.&lt;/li&gt;
&lt;li&gt;It’s error-prone. For example, you could break the code while adding or removing your print statements. &lt;/li&gt;
&lt;li&gt;Whenever you finish your “debugging”, you must spend time rolling back all the changes. Even worse, you may forget some debugging code which makes it into production.&lt;/li&gt;
&lt;li&gt;When you are looking at a log, you are blind. You have no information about the execution stack (what was the state of this variable at this point? No clue...). At most, you could get “some” info thanks to the printing of the temp variables of each frame.&lt;/li&gt;
&lt;li&gt;It’s not enough for mission critical systems. Many times, when there is an error and a system is down, it may be losing millions of dollars per hour. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Do you think that reading a log really helps? Maybe. But I hope I have already convinced you that we can do a lot better.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo: a simple web application
&lt;/h3&gt;

&lt;p&gt;For this post I will be using a demo web application (written with the &lt;a href="https://github.com/seasidest/seaside"&gt;Seaside framework&lt;/a&gt;) that represents a simple traffic light. When you click one of the three lights, the clicked one turns on and the others turn off:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7l_5xzvH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-11-at-9.43.39-AM-1-3806457451-1568224546506.png%3Fw%3D748%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7l_5xzvH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-11-at-9.43.39-AM-1-3806457451-1568224546506.png%3Fw%3D748%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One important detail is that, for demoing purposes, we put an error on one method that gets called when the green button is clicked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="nf"&gt;toggleGreenLight&lt;/span&gt;
    &lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; boolean allLights &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;
    &lt;span class="nv"&gt;boolean&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="bp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="nv"&gt;allLights&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;Array&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="nv"&gt;allLights&lt;/span&gt; &lt;span class="nf"&gt;first&lt;/span&gt; &lt;span class="nf"&gt;turnOff&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="nv"&gt;allLights&lt;/span&gt; &lt;span class="nf"&gt;second&lt;/span&gt; &lt;span class="nf"&gt;turnOff&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="bp"&gt;self&lt;/span&gt; &lt;span class="nf"&gt;setGreenLight:&lt;/span&gt; &lt;span class="nv"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Did you spot it? No? Then keep reading ;) This example shows one particular error, but everything I will demonstrate works exactly the same for any kind of runtime error.&lt;/p&gt;

&lt;p&gt;Now, we would need another machine that acts as the “production” server in which we could deploy our application and see how we can debug it. OK, for that, I will be using a Raspberry Pi that could be running anywhere in the world. This would also help me demonstrate &lt;a href="https://dev.to/martinezpeck/why-is-smalltalk-a-good-fit-for-iot-and-edge-computing-9jl-temp-slug-3509166"&gt;how cool these unique debugging techniques are for IoT&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The debugging features I will be showing here are part of &lt;a href="https://www.instantiations.com/products/vasmalltalk/index.html"&gt;VASmalltalk&lt;/a&gt; but some of these features exist on other Smalltalk dialects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 1: running on development
&lt;/h3&gt;

&lt;p&gt;This is simple. We have the application running locally in our development environment, and if we click the green button, we get a debugger:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9GBXJMpr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-12-at-5.54.52-PM.png%3Ffit%3D748%252C530%26ssl%3D1" alt=""&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this situation there is not much magic. Well, at least not for Smalltalkers. For non-Smalltalkers there is some magic: you can inspect objects, send messages to them, evaluate code, &lt;strong&gt;change code&lt;/strong&gt; and then just resume the execution. Yes, no stop-compile-start cycle. Live coding, as they say. Do you now understand why we (Smalltalkers) love developing in the debugger?&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 2: running on production with text logs
&lt;/h3&gt;

&lt;p&gt;We are now running our application in our production machine (in this example a Raspberry Pi) and we will be using the classical log file for error handling.&lt;/p&gt;

&lt;p&gt;Below you can see an extract of the generated log file (here it is called &lt;code&gt;walkback.log&lt;/code&gt;). This is likely what you are most familiar with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q5KRQt1T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-12-at-5.59.41-PM.png%3Ffit%3D748%252C681%26ssl%3D1" alt=""&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Taking a detailed look at the above log file illustrates the limitations I mentioned earlier in this post.&lt;/p&gt;
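&lt;p&gt;To make the limitation concrete for non-Smalltalkers, this is roughly what such a text log amounts to in Python terms (an illustrative sketch, not anything VASmalltalk does): the traceback text survives, but the live frames and the objects behind them are already gone by the time you read it.&lt;/p&gt;

```python
import traceback

def button_callback():
    return 1 / 0  # stand-in for a runtime error inside a web callback

try:
    button_callback()
except Exception:
    # All we can keep is text: the frames and their objects are lost.
    with open("walkback.log", "w") as f:
        f.write(traceback.format_exc())
```

&lt;p&gt;Opening &lt;code&gt;walkback.log&lt;/code&gt; later tells you &lt;em&gt;where&lt;/em&gt; it failed, but you can no longer ask any object about its state.&lt;/p&gt;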

&lt;h3&gt;
  
  
  Scenario 3: running on production with binary stack dumping
&lt;/h3&gt;

&lt;p&gt;Finally!!! The cool stuff begins :) In this case, when I clicked the green button the application generated not a text log but a binary file, &lt;code&gt;seaside.sdf&lt;/code&gt;, which contains a snapshot of the execution stack at the moment the exception happened.&lt;/p&gt;

&lt;p&gt;The magic is that later on I can then take any development environment, open a debugger, attach that &lt;code&gt;seaside.sdf&lt;/code&gt; file and do a post-mortem analysis!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dhKk1aQB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-11-at-3.02.55-PM.png%3Ffit%3D748%252C537%26ssl%3D1" alt=""&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this example, I copy the file from the server into my development machine and here is the result:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aE0TiGsL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-12-at-6.03.48-PM.png%3Ffit%3D748%252C531%26ssl%3D1" alt=""&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice that in this debugger the &lt;code&gt;Into&lt;/code&gt;, &lt;code&gt;Over&lt;/code&gt; and the rest of the navigation buttons are disabled: this is a snapshot of a stack, so you can’t step through or change it (wait for the next section!). Still, you can see how much friendlier this is than dealing with log files: I can inspect objects, see their state, and so on.&lt;/p&gt;

&lt;p&gt;At this point, the error should be clear: the variable &lt;code&gt;allLights&lt;/code&gt; is empty and so sending &lt;code&gt;first&lt;/code&gt; to it throws an exception. The fix here was to simply use an existing method &lt;code&gt;turnAllOff&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="nf"&gt;toggleGreenLight&lt;/span&gt;

    &lt;span class="bp"&gt;self&lt;/span&gt; &lt;span class="nf"&gt;turnAllOff&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="bp"&gt;self&lt;/span&gt; &lt;span class="nf"&gt;setGreenLight:&lt;/span&gt; &lt;span class="bp"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
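&lt;p&gt;For non-Smalltalkers, here is the original bug (the empty &lt;code&gt;allLights&lt;/code&gt; array) translated to Python, purely as an illustrative analogy and not the original code: creating an empty collection and then asking for its first element.&lt;/p&gt;

```python
all_lights = []  # like "Array new": a collection of size zero

try:
    all_lights[0]  # like sending "first" to the empty array
except IndexError as exc:
    print("runtime error:", exc)  # prints "runtime error: list index out of range"
```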



&lt;blockquote&gt;
&lt;p&gt;Just to summarize: I have a local debugger opened with a snapshot of the execution stack when an exception was triggered on a remote server.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Scenario 4: running on production with live remote debugging
&lt;/h3&gt;

&lt;p&gt;The previous binary stack dump is great for doing post-mortem analysis of production errors, but what if we want to live debug what is happening on the server? Or on an IoT board? For that, we can use the remote debugger feature.&lt;/p&gt;

&lt;p&gt;For this demo what I want to do is to halt the execution (like a breakpoint) at a certain place so that I could open a debugger and see what is going on. For that I added a &lt;code&gt;Halt halt&lt;/code&gt; at the beginning of the method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="nf"&gt;toggleGreenLight&lt;/span&gt;
    &lt;span class="nc"&gt;Halt&lt;/span&gt; &lt;span class="nf"&gt;halt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; 
    &lt;span class="bp"&gt;self&lt;/span&gt; &lt;span class="nf"&gt;turnAllOff&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="bp"&gt;self&lt;/span&gt; &lt;span class="nf"&gt;setGreenLight:&lt;/span&gt; &lt;span class="bp"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The remote debugger works via TCP which means:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The “target” (remote system) must know the IP (or hostname) of the “development” environment.&lt;/li&gt;
&lt;li&gt;We need a certain port to be open on the network.&lt;/li&gt;
&lt;li&gt;We can remotely debug ANY remote system running anywhere on the globe. In this case, the Raspberry Pi is next to me, but it would be the same if it were 14,000 km away.&lt;/li&gt;
&lt;/ol&gt;
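&lt;p&gt;Point 2 is easy to verify up front. A small generic Python check (just a sketch; the host and port below are placeholders for whatever you configure, and none of this is part of VASmalltalk) confirms that a TCP port on the development machine is reachable from the target:&lt;/p&gt;

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical host and port, to be replaced with your own values:
# tcp_reachable("dev-machine.local", 4777)
```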

&lt;p&gt;The first thing I need to do is to tell my “development” environment to listen for possible remote connections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NmZrejyL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-11-at-3.11.07-PM.png%3Fw%3D748%26ssl%3D1" alt=""&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, I can simply run the web application and click the green button. The browser then appears to be “waiting” on the button’s callback.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v2FB_zUE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-11-at-11.34.02-AM.png%3Fw%3D748%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v2FB_zUE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-11-at-11.34.02-AM.png%3Fw%3D748%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But… if I now go to my development machine… a debugger has popped up!!!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sFCNyRZK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-13-at-9.37.42-PM.png%3Ffit%3D748%252C529%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sFCNyRZK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-13-at-9.37.42-PM.png%3Ffit%3D748%252C529%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Into&lt;/code&gt;, &lt;code&gt;Over&lt;/code&gt; and the rest of the buttons are now available. Again, I can inspect objects, see their state and send messages to them. But in addition, I can &lt;strong&gt;change code&lt;/strong&gt; and resume the execution (just as I did when developing on my local machine!). For example, let’s change the code so that now the green light is red:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jrzUDWS8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-13-at-9.38.47-PM.png%3Ffit%3D748%252C547%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jrzUDWS8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-13-at-9.38.47-PM.png%3Ffit%3D748%252C547%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then I just click the &lt;code&gt;Resume&lt;/code&gt; button. The debugger disappears and the web browser continues its rendering (but now, obviously, the green light is red):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2HI4lmub--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-11-at-11.43.31-AM.png%3Fresize%3D273%252C351%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2HI4lmub--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/marianopeck.blog/wp-content/uploads/2019/09/Screen-Shot-2019-09-11-at-11.43.31-AM.png%3Fresize%3D273%252C351%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Just to summarize: I have a local debugger opened on the live execution stack of an exception triggered on a remote server. I could even modify code and resume the execution so it continues normally.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Too magical to believe it? Am I cheating?
&lt;/h3&gt;

&lt;p&gt;The text and the screenshots might not be enough to properly show all these features, therefore, I recorded this short video to show things working live:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/XZiJTdxrsvY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Other possible uses?
&lt;/h3&gt;

&lt;p&gt;I have only shown a few scenarios, but there are more! Some others I am aware of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://marianopeck.blog/2012/07/25/reviving-ci-test-failures-in-local-machine/"&gt;Debugging Continues Integration test failures&lt;/a&gt;: sometimes a test failed on the CI but not on your local machine. So… we just added stack dumping to tests that fail. That way, you simply download the stack file and see what happened.&lt;/li&gt;
&lt;li&gt;Sharing “errors” within the community: instead of sharing a plain-text stack trace, you share a stack dump.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gitlab.com/macta/PharoLambda"&gt;Debugging AWS Lamba services&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;Debugging on IoT.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;We have been doing this for years. The binary stack dump and remote debugging of &lt;a href="https://www.instantiations.com/products/vasmalltalk/"&gt;VASmalltalk&lt;/a&gt; dates from around 1995. Yes, 24 years ago. &lt;a href="https://pharo.org"&gt;Pharo&lt;/a&gt; and &lt;a href="https://gemtalksystems.com/products/gs64/"&gt;GemStone&lt;/a&gt; Smalltalks both have similar features. In fact, I &lt;a href="https://marianopeck.blog/2012/01/19/moving-contexts-and-debuggers-between-images-with-fuel/"&gt;co-authored that Pharo development&lt;/a&gt; in 2012 which &lt;a href="http://esug.org/wiki/pier/Conferences/2011/InnovationTechnologyAwards"&gt;won first place at the ESUG Innovation Technology Awards&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So this is what “&lt;strong&gt;debug to the future&lt;/strong&gt; ” looks like. The pun is doubly intended: Smalltalk has allowed this kind of debugging since ages ago, although it may look like a futuristic approach to programmers who haven’t tried it. Also, with the ideas shown in the post, you can see how you can have a stack trace now, but debug it much later, even in a different environment, kinda like suspended animation for computing processes.&lt;/p&gt;

&lt;p&gt;Being a software professional also means caring about the tools you choose to use. I can use &lt;code&gt;vim&lt;/code&gt; and know lots of great hotkeys, but I also have a nice monitor and a mouse, so why can’t I just enjoy and take some advantage of modernity?&lt;/p&gt;

&lt;p&gt;I’d love for you to use Smalltalk (that would be very cool), but at the very least fight for having decent debugging capabilities in whatever language you are using.&lt;/p&gt;

&lt;p&gt;People now use the trendy words “Technical debt”. Well, I promise that replacing a text editor and &lt;code&gt;grep&lt;/code&gt; with a debugger does help to reduce it.&lt;/p&gt;

&lt;p&gt;Finally, don’t stay with the idea that this is just something “cool”. It is cool, but it also brings a &lt;strong&gt;lot of value&lt;/strong&gt; to businesses and customers.&lt;/p&gt;

&lt;p&gt;It’s time to move forward.&lt;/p&gt;

&lt;p&gt;PS: Thanks to all the BA-ST Slack group and the Instantiations team, who helped me a lot with ideas, quotes, revisions and much more!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>coding</category>
      <category>showdev</category>
      <category>smalltalk</category>
    </item>
    <item>
      <title>Getting Started with Nvidia Jetson Nano, TensorFlow and Smalltalk</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Thu, 05 Sep 2019 11:40:50 +0000</pubDate>
      <link>https://dev.to/martinezpeck/getting-started-with-nvidia-jetson-nano-tensorflow-and-smalltalk-23mk</link>
      <guid>https://dev.to/martinezpeck/getting-started-with-nvidia-jetson-nano-tensorflow-and-smalltalk-23mk</guid>
      <description>&lt;p&gt;On all my previous posts (like &lt;a href="https://dev.to/martinezpeck/docker-swarm-cloud-on-a-arm64-diy-sbc-cluster-running-a-smalltalk-webapp-9l1"&gt;this one&lt;/a&gt;) you can see &lt;a href="https://www.instantiations.com/products/vasmalltalk/" rel="noopener noreferrer"&gt;VASmalltalk&lt;/a&gt; running on any Raspberry Pi, on Rock64 and even on Nvidia Jetson TX2:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1141775466384306182-579" src="https://platform.twitter.com/embed/Tweet.html?id=1141775466384306182"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;In addition, you can also see &lt;a href="https://dev.to/martinezpeck/object-detection-with-tensorflow-and-smalltalk-15p7"&gt;previous posts&lt;/a&gt; where I show how to use TensorFlow from Smalltalk to recognize objects in images.&lt;/p&gt;

&lt;p&gt;Last week, at &lt;a href="https://esug.github.io/2019-Conference/conf2019.html" rel="noopener noreferrer"&gt;ESUG 2019&lt;/a&gt;, I demoed a VA Smalltalk and TensorFlow project on an Nvidia Jetson Nano provided by &lt;a href="https://twitter.com/instantiations" rel="noopener noreferrer"&gt;Instantiations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this post, I will show you how to get started with the Jetson Nano, how to run VASmalltalk and finally how to use the TensorFlow wrapper to take advantage of the 128 GPU cores.&lt;/p&gt;

&lt;h3&gt;
  
  
  What do you need before starting
&lt;/h3&gt;

&lt;p&gt;Below is the whole list of supplies I gathered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/NVIDIA-Jetson-Nano-Developer-Kit/dp/B07PZHBDKT/ref=sr_1_3?crid=3TX2GTYZE1PQO&amp;amp;keywords=jetson+nano&amp;amp;qid=1567444057&amp;amp;s=gateway&amp;amp;sprefix=jetson+nano%2Caps%2C347&amp;amp;sr=8-3" rel="noopener noreferrer"&gt;NVIDIA Jetson Nano Developer Kit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;MicroSD card: I got a &lt;a href="https://www.amazon.com/Samsung-MicroSDXC-Adapter-MB-ME128GA-AM/dp/B06XWZWYVP/ref=pd_bxgy_147_img_3/144-0517921-1924023?_encoding=UTF8&amp;amp;pd_rd_i=B06XWZWYVP&amp;amp;pd_rd_r=3351843b-8299-4a77-bbb3-d1dd44b03579&amp;amp;pd_rd_w=2dh8W&amp;amp;pd_rd_wg=eLEPI&amp;amp;pf_rd_p=a2006322-0bc0-4db9-a08e-d168c18ce6f0&amp;amp;pf_rd_r=KFRNVE50K798SYHVYKX3&amp;amp;psc=1&amp;amp;refRID=KFRNVE50K798SYHVYKX3" rel="noopener noreferrer"&gt;Samsung 128GB U3&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Power supply: you can choose either a limited USB or a much more powerful DC switching power supply. I got &lt;a href="https://www.amazon.com/SMAKN-Switching-Supply-Adapter-100-240/dp/B01N4HYWAM/ref=pd_bxgy_147_img_2/144-0517921-1924023?_encoding=UTF8&amp;amp;pd_rd_i=B01N4HYWAM&amp;amp;pd_rd_r=3351843b-8299-4a77-bbb3-d1dd44b03579&amp;amp;pd_rd_w=2dh8W&amp;amp;pd_rd_wg=eLEPI&amp;amp;pf_rd_p=a2006322-0bc0-4db9-a08e-d168c18ce6f0&amp;amp;pf_rd_r=KFRNVE50K798SYHVYKX3&amp;amp;psc=1&amp;amp;refRID=KFRNVE50K798SYHVYKX3" rel="noopener noreferrer"&gt;the latter&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;Case (optional): I like metal cases so I got &lt;a href="https://www.amazon.com/Geekworm-NVIDIA-Enclosure-Control-Developer/dp/B07RRRX121/ref=sr_1_4?keywords=jetson+nano+case&amp;amp;qid=1565123115&amp;amp;s=electronics&amp;amp;sr=1-4" rel="noopener noreferrer"&gt;one&lt;/a&gt; with Power &amp;amp; Reset Control Switch.&lt;/li&gt;
&lt;li&gt;Fan (optional): I could only find &lt;a href="https://www.amazon.com/Noctua-NF-A4x20-5V-PWM-Premium-Quality/dp/B071FNHVXN/ref=sr_1_3?keywords=noctua+nfa4x20+5v+pwm&amp;amp;qid=1567445024&amp;amp;s=gateway&amp;amp;sr=8-3" rel="noopener noreferrer"&gt;one fan that would fit on the Nano&lt;/a&gt; but it was very expensive. &lt;/li&gt;
&lt;li&gt;Wireless Module (optional): the board does not come with built-in WiFi or Bluetooth so I decided to buy &lt;a href="https://www.amazon.com/Waveshare-AC8265-Wireless-Supports-Bluetooth/dp/B07SGDRG34/ref=sr_1_2_sspa?keywords=jetson+nano+wifi&amp;amp;qid=1565122735&amp;amp;s=electronics&amp;amp;sr=1-2-spons&amp;amp;psc=1&amp;amp;spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEzRDhMM1hZQURCMDdVJmVuY3J5cHRlZElkPUEwNTAyNjMzMk1FWU5aNEhVNkJKNSZlbmNyeXB0ZWRBZElkPUEwMzAyMDcxM0JIS1VGSE43TTFPVyZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=" rel="noopener noreferrer"&gt;this module&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Assembling the Nano and related hardware
&lt;/h3&gt;

&lt;p&gt;For this step, I first followed &lt;a href="https://towardsdatascience.com/getting-started-with-nvidia-jetson-nano-and-installing-tensorflow-gpu-ad4a3da8ed26" rel="noopener noreferrer"&gt;this short guide&lt;/a&gt; but finally moved to &lt;a href="https://blog.hackster.io/getting-started-with-the-nvidia-jetson-nano-developer-kit-43aa7c298797" rel="noopener noreferrer"&gt;this one which was super detailed&lt;/a&gt;. I won’t repeat everything written there but instead I will add my own bits below.&lt;/p&gt;

&lt;p&gt;I started by formatting the SD card; for this I always use the “SD Card Formatter” program. Downloading the operating system image and flashing the SD was easy… But the first downside is that for the first boot you NEED an external monitor, keyboard and mouse. No way to do it headless :( After the first boot you can indeed enable SSH and VNC, but the very first boot requires a display.&lt;/p&gt;

&lt;p&gt;The next step was to assemble the WiFi and Bluetooth module. It was not a walk in the park, but not that difficult either: you need to disassemble the Nano a bit, connect some wires, etc.:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1165271816224563200-706" src="https://platform.twitter.com/embed/Tweet.html?id=1165271816224563200"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;One annoyance is that the board is configured by default to use a USB power supply. I ordered a DC supply instead, since it is the better choice if you want to draw maximum power. But to tell the Nano whether to use USB or DC, you must change jumper J48. And guess what? The developer kit (USD 100) does not include a single jumper. So you have your Nano and your DC power supply and you are stuck: you can’t boot it. Seriously? (BTW, before you ask: no, I didn’t have a female-to-female cable with me that day to use as a workaround for the jumper.)&lt;/p&gt;

&lt;p&gt;The other complicated part was to assemble the case and the fan. For that, I needed to &lt;a href="https://www.youtube.com/watch?v=SpUB6h4Akp4&amp;amp;t=365s" rel="noopener noreferrer"&gt;carefully watch this video&lt;/a&gt; a few times. Once it was built, it felt really solid and nice. BTW the case did come with a jumper for J48 which was really nice since that meant I could use the DC power supply.&lt;/p&gt;

&lt;p&gt;The fan itself was also complicated. The Noctua NF-A4x20 5V PWM I bought wouldn’t fit easily. The NA-AV3 silicone anti-vibration mounts would not get through the holes of the Nano. And the screws for the fan provided by the case were too short. So I had to buy some other extra screws that were long enough.&lt;/p&gt;

&lt;p&gt;When I was ready to try the fan, I powered it and nothing happened. I thought I had done something wrong, and I had to re-open the case a few times… a painful process. I almost gave up when I found &lt;a href="https://devtalk.nvidia.com/default/topic/1049589/jetson-nano/fan-not-working/" rel="noopener noreferrer"&gt;some help on the internet&lt;/a&gt;. Believe it or not, you must run a console command in order to start the fan: &lt;code&gt;sudo jetson_clocks&lt;/code&gt;. After that, it started working.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1168873407116644352-384" src="https://platform.twitter.com/embed/Tweet.html?id=1168873407116644352"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting it to run headless
&lt;/h3&gt;

&lt;p&gt;While on most boards and operating systems this is easy, on the Nano it is a challenging part. The SSH part is simple and requires almost nothing in particular. But for VNC… OMG… I followed all the recommendations provided in &lt;a href="https://blog.hackster.io/getting-started-with-the-nvidia-jetson-nano-developer-kit-43aa7c298797" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;. In my case, I could never get &lt;code&gt;xrdp&lt;/code&gt; working: when I try to connect from my Mac, it simply crashes…&lt;/p&gt;

&lt;p&gt;As for VNC, after all the workarounds/corrections mentioned there, I was able to connect but the resolution was too bad (640×480). I spent quite some time googling until I found a workaround mentioned &lt;a href="https://devtalk.nvidia.com/default/topic/995621/jetson-tx1/jetson-tx1-desktop-sharing-resolution-problem-without-real-monitor/1" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Basically, I did &lt;code&gt;sudo vim /etc/X11/xorg.conf&lt;/code&gt; and I added these lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Section "Screen"
   Identifier    "Default Screen"
   Monitor        "Configured Monitor"
   Device        "Default Device"
   SubSection "Display"
       Depth    24
       Virtual 1280 800
   EndSubSection
EndSection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In other words, I needed to change the size of the Virtual Display that is used if no monitor is connected (by default it was 640×480).&lt;/p&gt;

&lt;p&gt;After rebooting, I was finally able to get a decent resolution with VNC.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2019%2F09%2FScreen-Shot-2019-09-02-at-4.16.38-PM.png%3Ffit%3D748%252C468%26ssl%3D1"&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Installing VASmalltalk dependencies
&lt;/h3&gt;

&lt;p&gt;This part was easy and I basically followed the bash script of a &lt;a href="https://dev.to/martinezpeck/getting-started-with-vasmalltalk-raspberry-pi-and-other-devices-25g3-temp-slug-2060664"&gt;previous post&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install VA Dependencies for running headfull and VA Environments tool&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--assume-yes&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  libc6 &lt;span class="se"&gt;\&lt;/span&gt;
  locales &lt;span class="se"&gt;\&lt;/span&gt;
  xterm &lt;span class="se"&gt;\&lt;/span&gt;
  libxm4 &lt;span class="se"&gt;\&lt;/span&gt;
  xfonts-base &lt;span class="se"&gt;\&lt;/span&gt;
  xfonts-75dpi &lt;span class="se"&gt;\&lt;/span&gt;
  xfonts-100dpi

&lt;span class="c"&gt;# Only necessary if we are using OpenSSL from Smalltalk&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--assume-yes&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  libssl-dev 

&lt;span class="c"&gt;# Generate locales&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;su
&lt;span class="nb"&gt;echo &lt;/span&gt;en_US.ISO-8859-1 ISO-8859-1 &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/locale.gen
&lt;span class="nb"&gt;echo &lt;/span&gt;en_US.ISO-8859-15 ISO-8859-15 &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/locale.gen
locale-gen
&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing TensorFlow and VASmalltalk wrapper
&lt;/h3&gt;

&lt;p&gt;The first thing you must do is either build TensorFlow from scratch for the Nvidia Jetson Nano with CUDA support, or get a pre-built binary from somewhere. I did the latter, using the following bash script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;tensorflow
&lt;span class="nb"&gt;cd &lt;/span&gt;tensorflow
wget https://dl.photoprism.org/tensorflow/nvidia-jetson/libtensorflow-nvidia-jetson-nano-1.14.0.tar.gz
&lt;span class="nb"&gt;tar &lt;/span&gt;xvzf libtensorflow-nvidia-jetson-nano-1.14.0.tar.gz
&lt;span class="nb"&gt;cd &lt;/span&gt;lib
&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; libtensorflow_framework.so libtensorflow_framework.so.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The symbolic link is a workaround: in the shared libraries I downloaded, &lt;code&gt;libtensorflow.so&lt;/code&gt; depends on &lt;code&gt;libtensorflow_framework.so.1&lt;/code&gt;, but the library that shipped is named &lt;code&gt;libtensorflow_framework.so&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To install VASmalltalk and the TensorFlow wrapper, I followed &lt;a href="https://github.com/vasmalltalk/tensorflow-vast/blob/master/README.md#installation" rel="noopener noreferrer"&gt;the instructions from the GitHub repository&lt;/a&gt;. The only detail is that the ARM64 VM will ship with the upcoming 9.2 ECAP 3, so until that release is public, send me a private message and I will send it to you.&lt;/p&gt;

&lt;p&gt;For the &lt;code&gt;.ini&lt;/code&gt; file I added:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="py"&gt;TENSORFLOW_LIB&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/home/mpeck/Instantiations/tensorflow/lib/libtensorflow.so&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last bit is that TensorFlow needs help so that &lt;code&gt;libtensorflow&lt;/code&gt; can find &lt;code&gt;libtensorflow_framework&lt;/code&gt;. What I did was export &lt;code&gt;LD_LIBRARY_PATH&lt;/code&gt; before starting the VASmalltalk image. Another possibility is to move the shared libraries to &lt;code&gt;/usr/lib&lt;/code&gt; or &lt;code&gt;/usr/local/lib&lt;/code&gt;. It’s up to you.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/Instantiations/VastEcap3_b437_7b7fc914f16f_linux/raspberryPi64/
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LD_LIBRARY_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/mariano/Instantiations/libtensorflow-nvidia-jetson-nano-1.14.0.2/lib:&lt;span class="nv"&gt;$LD_LIBRARY_PATH&lt;/span&gt;
./abt64.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And all tests were green:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2019%2F09%2FScreen-Shot-2019-09-04-at-11.45.48-AM.png%3Ffit%3D748%252C459%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2019%2F09%2FScreen-Shot-2019-09-04-at-11.45.48-AM.png%3Ffit%3D748%252C459%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Confirming we are using GPU
&lt;/h3&gt;

&lt;p&gt;By default, if a GPU is present (and the shared library was compiled with GPU support), TensorFlow will prefer the GPU over the CPU. From Smalltalk we can confirm this by inspecting the available &lt;code&gt;TFDevice&lt;/code&gt; instances in the result of &lt;code&gt;(TFSession on: TFGraph create) devices&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2019%2F09%2FScreen-Shot-2019-09-04-at-11.54.56-AM.png%3Fw%3D748%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2019%2F09%2FScreen-Shot-2019-09-04-at-11.54.56-AM.png%3Fw%3D748%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then run a simple test like &lt;code&gt;TensorFlowCAPITest &amp;gt;&amp;gt; testAddControlInput&lt;/code&gt; and see the log printed into the &lt;code&gt;xterm&lt;/code&gt;. You should see that a GPU device is being used:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2019%2F09%2FScreen-Shot-2019-09-04-at-12.07.47-PM.png%3Fw%3D748%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2019%2F09%2FScreen-Shot-2019-09-04-at-12.07.47-PM.png%3Fw%3D748%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Using 128 GPU cores, TensorFlow and VASmalltalk to detect Kölsch beers with #esug19 pictures&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OK. So we have TensorFlow running, all our tests passing, and we are sure we are using the GPU. The obvious next step is to run a real-world demo.&lt;/p&gt;

&lt;p&gt;In a &lt;a href="https://dev.to/martinezpeck/object-detection-with-tensorflow-and-smalltalk-15p7"&gt;previous post&lt;/a&gt; you saw some examples of object detection. During the ESUG 2019 Conference I wanted to show this demo, but instead of recognizing random objects in random images, I showed how to detect “beers” (Kölsch! We were in Cologne, Germany!) in the real pictures people uploaded to Twitter with the #esug19 hashtag.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1167438196038348801-607" src="https://platform.twitter.com/embed/Tweet.html?id=1167438196038348801"&gt;
&lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;The code for that was fairly easy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="nc"&gt;ObjectDetectionZoo&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;
    &lt;span class="nf"&gt;imageFiles:&lt;/span&gt; &lt;span class="nc"&gt;OrderedCollection&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'/home/mariano/Instantiations/tensorflow/esug2019/beer1.png'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;graphFile:&lt;/span&gt; &lt;span class="s"&gt;'/home/mariano/Instantiations/tensorflow/frozen_inference_graph-faster_resnet50.pb'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;labelsFile:&lt;/span&gt; &lt;span class="s"&gt;'examples/objectDetectionZoo/mscoco_label_map.pbtxt'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareImageInput&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareSession&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;openPictureWithBoundingBoxesAndLabel&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here are the results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2019%2F09%2FScreen-Shot-2019-09-04-at-2.52.37-PM.png%3Ffit%3D748%252C690%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fmarianopeck.blog%2Fwp-content%2Fuploads%2F2019%2F09%2FScreen-Shot-2019-09-04-at-2.52.37-PM.png%3Ffit%3D748%252C690%26ssl%3D1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;The Nvidia Jetson Nano does offer good hardware at a reasonable price. However, it’s much harder to set up than most boards out there; it’s the first board that has taken me so much time to get fully working.&lt;/p&gt;

&lt;p&gt;But the worst part, in my opinion, is the state of Linux Tegra: crashes everywhere, and it was almost impossible to set up something as simple as VNC. I would really like to see a better/newer OS for the Nano.&lt;/p&gt;

&lt;p&gt;Once the painful setup is done, the board works well and provides nice GPU capabilities. We now have everything in place to start experimenting with it.&lt;/p&gt;

&lt;p&gt;PS: Thanks to Maxi Tabacman and Gera Richarte for reviewing this post!&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>tensorflow</category>
      <category>tutorial</category>
      <category>smalltalk</category>
    </item>
    <item>
      <title>Object Detection with TensorFlow and Smalltalk</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Mon, 19 Aug 2019 10:46:56 +0000</pubDate>
      <link>https://dev.to/martinezpeck/object-detection-with-tensorflow-and-smalltalk-15p7</link>
      <guid>https://dev.to/martinezpeck/object-detection-with-tensorflow-and-smalltalk-15p7</guid>
      <description>&lt;p&gt;In a &lt;a href="https://dev.to/martinezpeck/recognizing-objects-in-images-with-tensorflow-and-smalltalk-1nep"&gt;previous post&lt;/a&gt; we saw basic object recognition in images using Google’s &lt;a href="https://www.tensorflow.org"&gt;TensorFlow library&lt;/a&gt; from Smalltalk. This post will walk you step by step through the process of using a pre-trained model to detect objects in an image.&lt;/p&gt;

&lt;p&gt;It may also catch your attention that we are doing this from &lt;a href="https://twitter.com/instantiations"&gt;VASmalltalk&lt;/a&gt; rather than Python. Check out &lt;a href="https://marianopeck.blog/2019/08/07/recognizing-objects-in-images-with-tensorflow-and-smalltalk/"&gt;the previous post&lt;/a&gt; to see why I believe Smalltalk could be a great choice for doing Machine Learning.&lt;/p&gt;


&lt;h3&gt;
  
  
  TensorFlow detection model Zoo
&lt;/h3&gt;

&lt;p&gt;In this post, we will be again using a &lt;a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md"&gt;pre-trained model&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We provide a collection of detection models pre-trained on the &lt;a href="http://mscoco.org/"&gt;COCO dataset&lt;/a&gt;, the &lt;a href="http://www.cvlibs.net/datasets/kitti/"&gt;Kitti dataset&lt;/a&gt;, the &lt;a href="https://github.com/openimages/dataset"&gt;Open Images dataset&lt;/a&gt;, the &lt;a href="https://research.google.com/ava/"&gt;AVA v2.1 dataset&lt;/a&gt; and the &lt;a href="https://github.com/visipedia/inat_comp/blob/master/2017/README.md#bounding-boxes"&gt;iNaturalist Species Detection Dataset&lt;/a&gt;. These models can be useful for out-of-the-box inference if you are interested in categories already in those datasets. They are also useful for initializing your models when training on novel datasets.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The original idea of using these models was written up in this &lt;a href="https://www.kdnuggets.com/2018/03/google-tensorflow-object-detection-api-the-easiest-way-implement-image-recognition.html"&gt;great post&lt;/a&gt;. Our work was also inspired by &lt;a href="https://github.com/tensorflow/models/blob/477ed41e7e4e8a8443bc633846eb01e2182dc68a/object_detection/object_detection_tutorial.ipynb"&gt;this&lt;/a&gt; and &lt;a href="https://github.com/priya-dwivedi/Deep-Learning/blob/master/Object_Detection_Tensorflow_API.ipynb"&gt;this&lt;/a&gt; Jupyter notebook for the demo.&lt;/p&gt;

&lt;h3&gt;
  
  
  Designing the demo with a Smalltalk object-oriented approach
&lt;/h3&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/martinezpeck/recognizing-objects-in-images-with-tensorflow-and-smalltalk-1nep"&gt;previous post&lt;/a&gt; you can see that the whole demo was developed in the class &lt;code&gt;LabelImage&lt;/code&gt;. While starting to implement this new demo, we noticed a lot of behavior common to running pre-trained frozen prediction models. So… we first created a superclass called &lt;code&gt;FrozenImagePredictor&lt;/code&gt; and changed &lt;code&gt;LabelImage&lt;/code&gt; to be a subclass of it, overriding only a small part of the protocol.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WSkDoJMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-11.32.54-AM.png%3Ffit%3D748%252C581%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WSkDoJMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-11.32.54-AM.png%3Ffit%3D748%252C581%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cdOEFZaT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-11.33.06-AM.png%3Ffit%3D748%252C688%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cdOEFZaT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-11.33.06-AM.png%3Ffit%3D748%252C688%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After we finished the refactoring, it was quite easy to add a new subclass &lt;code&gt;ObjectDetectionZoo&lt;/code&gt;. All we needed to implement in that class was 7 methods (and only 5 methods in &lt;code&gt;LabelImage&lt;/code&gt;). So… as you can see, it’s quite easy now to add more and more frozen image predictors.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/martinezpeck/recognizing-objects-in-images-with-tensorflow-and-smalltalk-1nep"&gt;previous example&lt;/a&gt; (with &lt;code&gt;LabelImage&lt;/code&gt;) we processed the “raw” results just as TensorFlow answers them. However, with &lt;code&gt;ObjectDetectionZoo&lt;/code&gt; the results were a bit more complex, and in addition we needed to make the information easier to consume, for example, to render the “bounding boxes”. So we reified the TensorFlow results in &lt;code&gt;ObjectDetectionImageResults&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qMVjfGcC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-11.42.24-AM.png%3Ffit%3D748%252C455%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qMVjfGcC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-11.42.24-AM.png%3Ffit%3D748%252C455%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These pre-trained models can answer the data for the “bounding boxes”. We were interested in seeing if we could render the image directly from Smalltalk and draw the boxes and labels. Again, time to reify that in &lt;code&gt;ObjectDetectionImageRenderer&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xmtuVWqV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-11.43.37-AM.png%3Ffit%3D748%252C472%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xmtuVWqV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-11.43.37-AM.png%3Ffit%3D748%252C472%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude, we have &lt;code&gt;ObjectDetectionZoo&lt;/code&gt; which will run the model and answer &lt;code&gt;ObjectDetectionImageResults&lt;/code&gt; and then delegate to &lt;code&gt;ObjectDetectionImageRenderer&lt;/code&gt; to display and draw the results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running the examples!
&lt;/h3&gt;

&lt;p&gt;To run the examples, you must first check &lt;a href="https://dev.to/martinezpeck/recognizing-objects-in-images-with-tensorflow-and-smalltalk-1nep"&gt;the previous post&lt;/a&gt; to see how to install VASmalltalk and TensorFlow. After that, you can check the example yourself in the class comment of &lt;code&gt;ObjectDetectionZoo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This example runs the basic &lt;code&gt;mobilenet_v1&lt;/code&gt; net which is fast but not very accurate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="nc"&gt;ObjectDetectionZoo&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;
    &lt;span class="nf"&gt;imageFiles:&lt;/span&gt; &lt;span class="nc"&gt;OrderedCollection&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\objectDetectionZoo\images\000000562059.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;graphFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\objectDetectionZoo\ssd_mobilenet_v1_coco_2018_01_28\frozen_inference_graph.pb'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareImageInput&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareSession&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;openPictureWithBoundingBoxesAndLabel&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;In the &lt;a href="https://github.com/vasmalltalk/tensorflow-vast"&gt;tensorflow-vast repository&lt;/a&gt; we only provide a few frozen pre-trained graphs because they are really big. However, you can very easily &lt;a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models"&gt;download additional ones&lt;/a&gt; and use them. The only thing you must do is to uncompress the &lt;code&gt;.tar.gz&lt;/code&gt; and simply change the one line where you specify the graph (&lt;code&gt;graphFile:&lt;/code&gt;) to use &lt;code&gt;rcnn_inception_resnet_v2&lt;/code&gt; and you will see the results are much better: &lt;/p&gt;
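&lt;p&gt;As a sketch of that download step (the URL pattern below matches the model zoo’s published archives at the time of writing; treat the exact model name as an example):&lt;/p&gt;

```shell
# Example model from the detection model zoo; other models follow the same pattern.
MODEL=faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28
wget "http://download.tensorflow.org/models/object_detection/${MODEL}.tar.gz"
tar -xzf "${MODEL}.tar.gz"
# The frozen graph to pass to graphFile: is now at:
ls "${MODEL}/frozen_inference_graph.pb"
```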

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M61XIhMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-12.14.33-PM.png%3Ffit%3D748%252C587%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M61XIhMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-12.14.33-PM.png%3Ffit%3D748%252C587%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that with &lt;code&gt;mobilenet_v1&lt;/code&gt; the spoon was detected as a person, the apple on the left and the bowl were not detected, and the cake was interpreted as a sandwich. With &lt;code&gt;rcnn_inception_resnet_v2&lt;/code&gt; everything looks correct.&lt;/p&gt;

&lt;p&gt;Something very cool about TensorFlow is that you can run multiple images in parallel in a single invocation. So here is another example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="nc"&gt;ObjectDetectionZoo&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;
    &lt;span class="nf"&gt;imageFiles:&lt;/span&gt; &lt;span class="nc"&gt;OrderedCollection&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;graphFile:&lt;/span&gt; &lt;span class="s"&gt;'z:\Instantiations\TensorFlow\faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28\frozen_inference_graph.pb'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\objectDetectionZoo\images\000000463849.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\objectDetectionZoo\images\000000102331.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\objectDetectionZoo\images\000000079651.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\objectDetectionZoo\images\000000045472.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareImageInput&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareSession&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;openPictureWithBoundingBoxesAndLabel&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which brings these results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h-77G42M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-12.34.23-PM.png%3Ffit%3D748%252C505%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h-77G42M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/marianopeck.blog/wp-content/uploads/2019/08/Screen-Shot-2019-08-18-at-12.34.23-PM.png%3Ffit%3D748%252C505%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see &lt;a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models"&gt;here&lt;/a&gt;, there are many different pre-trained models, so you can use and experiment with any of them. In this post we took just 2 of them (&lt;code&gt;mobilenet_v1&lt;/code&gt; and &lt;code&gt;rcnn_inception_resnet_v2&lt;/code&gt;), but you can try any other. All you need to do is download the &lt;code&gt;.tar.gz&lt;/code&gt; of that model, uncompress it, and specify the graph file with &lt;code&gt;graphFile:&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally, you can also try with different pictures. You can try with any image of your own or try with the ones provided in the databases used to train these models (COCO, Kitti, etc.).&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions and Future Work
&lt;/h3&gt;

&lt;p&gt;We keep pushing to show TensorFlow examples from Smalltalk. In the future, we would really like to experiment with training models in Smalltalk itself. Particularly, we want to experiment with IoT boards with GPU (like Nvidia Jetson or similar).&lt;/p&gt;

&lt;p&gt;It would also be interesting to try detecting objects on videos aside from pictures.&lt;/p&gt;

&lt;p&gt;Finally, thanks to Gera Richarte for the help on this work and to Maxi Tabacman for reviewing the post.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>tensorflow</category>
      <category>tutorial</category>
      <category>smalltalk</category>
    </item>
    <item>
      <title>Recognizing objects in images with TensorFlow and Smalltalk</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Wed, 07 Aug 2019 11:50:02 +0000</pubDate>
      <link>https://dev.to/martinezpeck/recognizing-objects-in-images-with-tensorflow-and-smalltalk-1nep</link>
      <guid>https://dev.to/martinezpeck/recognizing-objects-in-images-with-tensorflow-and-smalltalk-1nep</guid>
      <description>&lt;p&gt;In this post, I will be showing a simple example of object recognition in images using the TensorFlow library from Smalltalk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Doing Machine Learning without Python? With Smalltalk? Seriously?
&lt;/h3&gt;

&lt;p&gt;Whenever you start entering the world of AI and Machine Learning you will notice immediately that Python has been widely accepted as the “default” programming language for these topics.&lt;/p&gt;

&lt;p&gt;I am not against Python and I believe that people are using it for a reason. However, I do believe that providing alternatives is a good thing, too. And Smalltalk could be that alternative you are looking for.&lt;/p&gt;

&lt;p&gt;Last week, my smalltalker friend Maxi Tabacman told me:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Smalltalk is for complex models and Machine Learning is complex.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Apart from Maxi’s point, there is another reason to have it in Smalltalk. If you are working in a Smalltalk project, you are likely managing complex data. This means you would probably benefit from applying Machine Learning. So, why would you work outside of your Smalltalk environment and model to apply it?&lt;/p&gt;

&lt;p&gt;Then, even &lt;a href="https://bracha.org/Site/Home.html"&gt;Gilad Bracha&lt;/a&gt; told me:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A civilized front end for TensorFlow would be great.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As for what that means – well that’s a long discussion :)&lt;/p&gt;

&lt;p&gt;A long time ago, &lt;a href="https://github.com/gerasdf"&gt;Gera “Richie” Richarte&lt;/a&gt; said:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Smalltalk is an excellent environment for learning and experimentation. It’ll be a great place to start playing with Machine Learning.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And he did eventually start playing with ML and Smalltalk a few years ago. He started developing &lt;a href="https://github.com/Cuis-Smalltalk/Machine-Learning"&gt;a TensorFlow wrapper for Cuis Smalltalk&lt;/a&gt;. Later on, this binding &lt;a href="https://github.com/PolyMathOrg/libtensorflow-pharo-bindings/"&gt;was ported to Pharo Smalltalk&lt;/a&gt; too, and &lt;a href="https://github.com/vasmalltalk/tensorflow-vast"&gt;now finally to VASmalltalk&lt;/a&gt;. So… as you can see, as &lt;a href="https://twitter.com/SergeStinckwich"&gt;Serge Stinckwich&lt;/a&gt; said:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All Smalltalk unite for Machine Learning and Deep Learning!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This project is not just a binding/wrapper. It tries (as it is still evolving and improving) to provide a much richer API and take advantage of the unique features of Smalltalk.&lt;/p&gt;


&lt;h3&gt;
  
  
  Preparing the example
&lt;/h3&gt;

&lt;p&gt;In this post I will be using the &lt;a href="https://github.com/vasmalltalk/tensorflow-vast"&gt;VASmalltalk TensorFlow wrapper&lt;/a&gt; but a similar approach could be done with the other Smalltalk dialects. The first thing you must do is to download TensorFlow shared libraries, install the Smalltalk wrapper and configure it. All the details &lt;a href="https://github.com/vasmalltalk/tensorflow-vast#installation"&gt;are explained in the official docs&lt;/a&gt;. To confirm everything is working with the wrapper, you can run the 250+ unit tests we have (not all of them are currently passing in this early preview of the VASmalltalk version).&lt;/p&gt;


&lt;p&gt;You will see that TensorFlow examples are loaded within the standard load process. This means you should have the &lt;code&gt;LabelImage&lt;/code&gt; class loaded. You may want to check its class comment for explanations and details.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;LabelImage&lt;/code&gt; example shows how you can use a pre-trained TensorFlow network to recognize objects in images. Since this is a pre-trained model, you will need its protocol buffer file (&lt;code&gt;.pb&lt;/code&gt;). This example also needs a “labels” file. But don’t worry, we have already included everything necessary in &lt;code&gt;examples/labelImage&lt;/code&gt;. So… all you need to do is copy the &lt;code&gt;examples&lt;/code&gt; directory from the git root directory to your Smalltalk image working directory.&lt;/p&gt;
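&lt;p&gt;That copy step is a one-liner (the paths below are illustrative placeholders for your clone and image directories):&lt;/p&gt;

```shell
# Copy the bundled examples next to the Smalltalk image.
# Both paths are illustrative; substitute your own locations.
cd /path/to/tensorflow-vast
cp -r examples /path/to/smalltalk-image-directory/
```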

&lt;h3&gt;
  
  
  Running the examples!
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Single image – Inception V3
&lt;/h4&gt;

&lt;p&gt;The first example runs with this image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VwWKNgkB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/github.com/vasmalltalk/tensorflow-vast/raw/master/examples/labelImage/grace_hopper.jpg%3Fresize%3D183%252C215%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VwWKNgkB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/github.com/vasmalltalk/tensorflow-vast/raw/master/examples/labelImage/grace_hopper.jpg%3Fresize%3D183%252C215%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And runs an Inception V3 model. You can read more info about it &lt;a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/label_image/README.md"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nv"&gt;results&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;
&lt;span class="nv"&gt;results&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;LabelImage&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;
    &lt;span class="nf"&gt;imageSize:&lt;/span&gt; &lt;span class="m"&gt;299&lt;/span&gt;&lt;span class="nf"&gt;@&lt;/span&gt;&lt;span class="m"&gt;299&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;imageFiles:&lt;/span&gt; &lt;span class="nc"&gt;OrderedCollection&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;graphFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\inception_v3_2016_08_28_frozen.pb'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;labelsFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\labels.txt'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\grace_hopper.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareImageInput&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareSession&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;results&lt;/span&gt; &lt;span class="nf"&gt;do:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;eachImage&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;eachImage&lt;/span&gt; &lt;span class="nf"&gt;first:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;do:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt;
        &lt;span class="nc"&gt;Transcript&lt;/span&gt; 
            &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="nf"&gt;key&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;' %'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="nf"&gt;value&lt;/span&gt; &lt;span class="nf"&gt;*&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;roundTo:&lt;/span&gt; &lt;span class="m"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;asString&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;].&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that should print the following on the Transcript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;military uniform %92.5
mortarboard %1.61
academic gown, academic robe, judge's robe %0.44
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Single image – Mobilenet V1
&lt;/h4&gt;

&lt;p&gt;The MobileNet model doesn’t give results as good as Inception V3, but it runs much faster.&lt;/p&gt;

&lt;p&gt;You can run this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;LabelImage&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;
        &lt;span class="nf"&gt;imageSize:&lt;/span&gt; &lt;span class="m"&gt;224&lt;/span&gt;&lt;span class="nf"&gt;@&lt;/span&gt;&lt;span class="m"&gt;224&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nf"&gt;graphFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\mobilenet_v1_1.0_224_quant_frozen.pb'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nf"&gt;labelsFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\labels.txt'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\grace_hopper.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nf"&gt;prepareImageInput&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nf"&gt;prepareSession&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;first&lt;/span&gt;
                &lt;span class="nf"&gt;first:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="nf"&gt;collect:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="nf"&gt;key&lt;/span&gt; &lt;span class="nf"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="nf"&gt;value&lt;/span&gt; &lt;span class="nf"&gt;*&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;roundTo:&lt;/span&gt; &lt;span class="m"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;asString&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it will print:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ('military uniform' -&amp;gt; '39.73'
 'Windsor tie' -&amp;gt; '24.14'
 'bow tie, bow-tie, bowtie' -&amp;gt; '10.52'
 'bulletproof vest' -&amp;gt; '4.59'
 'mortarboard' -&amp;gt; '3.88' 
 'bolo tie, bolo, bola tie, bola' -&amp;gt; '1.69'
 'suit, suit of clothes' -&amp;gt; '1.21' 
 'bearskin, busby, shako' -&amp;gt; '1.21'
 'microphone, mike' -&amp;gt; '0.87' 
 'wig' -&amp;gt; '0.74')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; note that in both models I am specifying a different image size: &lt;code&gt;299&lt;/code&gt; for Inception V3 and &lt;code&gt;224&lt;/code&gt; for Mobilenet V1. This is because the image size is dependent on the pre-trained model we are using. You must read the documentation in order to know which value to use.&lt;/p&gt;

&lt;h4&gt;
  
  
  Multi image – Inception V3
&lt;/h4&gt;

&lt;p&gt;In the previous examples we processed only one image at a time. However, TensorFlow allows us to run multiple images in parallel. This is what the example below does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nv"&gt;results&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;
&lt;span class="nv"&gt;results&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nc"&gt;LabelImage&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;
    &lt;span class="nf"&gt;imageSize:&lt;/span&gt; &lt;span class="m"&gt;299&lt;/span&gt;&lt;span class="nf"&gt;@&lt;/span&gt;&lt;span class="m"&gt;299&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;imageFiles:&lt;/span&gt; &lt;span class="nc"&gt;OrderedCollection&lt;/span&gt; &lt;span class="nb"&gt;new&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;graphFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\inception_v3_2016_08_28_frozen.pb'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;labelsFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\labels.txt'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\grace_hopper.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\ski.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\Carrillon-Escobar-1.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;addImageFile:&lt;/span&gt; &lt;span class="s"&gt;'examples\labelImage\byteSmalltalk.jpg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareImageInput&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;prepareSession&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;results&lt;/span&gt; &lt;span class="nf"&gt;do:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;eachImage&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;eachImage&lt;/span&gt; &lt;span class="nf"&gt;first:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;do:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt;
        &lt;span class="nc"&gt;Transcript&lt;/span&gt; 
            &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="nf"&gt;key&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="s"&gt;' %'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nf"&gt;show:&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nv"&gt;each&lt;/span&gt; &lt;span class="nf"&gt;value&lt;/span&gt; &lt;span class="nf"&gt;*&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;roundTo:&lt;/span&gt; &lt;span class="m"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;asString&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="p"&gt;].&lt;/span&gt;
    &lt;span class="nc"&gt;Transcript&lt;/span&gt; &lt;span class="nf"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; 
&lt;span class="p"&gt;].&lt;/span&gt;   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are the results. Can you match the picture to the Transcript results? :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4DExVt6x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.wpcomstaging.com/wp-content/uploads/2019/08/Screen-Shot-2019-08-06-at-3.48.58-PM.png%3Ffit%3D748%252C454%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4DExVt6x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/marianopeck.wpcomstaging.com/wp-content/uploads/2019/08/Screen-Shot-2019-08-06-at-3.48.58-PM.png%3Ffit%3D748%252C454%26ssl%3D1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; note that in this case there is still only ONE Smalltalk process making a single FFI call (well, a few) to TensorFlow… exactly the same FFI calls it would make for a single image. The magic is in TF itself, which parallelizes the processing of those images. Depending on the hardware, how TF was built, your configuration, etc., it may use only CPUs or also GPUs.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to train your dragon
&lt;/h3&gt;

&lt;p&gt;This post was about a pre-trained net that we simply downloaded. But obviously, the more interesting and complex scenario is to develop and train your own model. That will be a topic for a future post. In the meantime, you can get a taste of it by checking the snippets below.&lt;/p&gt;

&lt;p&gt;The first one is nice and simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="nf"&gt;initializeForwardGraph&lt;/span&gt;
   &lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; input &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;
   &lt;span class="nv"&gt;input&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nv"&gt;graph&lt;/span&gt; &lt;span class="nf"&gt;placeholder:&lt;/span&gt; &lt;span class="s"&gt;'Input'&lt;/span&gt; &lt;span class="nf"&gt;type:&lt;/span&gt; &lt;span class="nc"&gt;TF_Tensor&lt;/span&gt; &lt;span class="nf"&gt;typeFloat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
   &lt;span class="nv"&gt;activation&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;weights&lt;/span&gt; &lt;span class="nf"&gt;*&lt;/span&gt; &lt;span class="nv"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;sigmoid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following one is a bit more complicated but you can see how you can nest things to build the net:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight smalltalk"&gt;&lt;code&gt;&lt;span class="nf"&gt;initializeInferenceGraph&lt;/span&gt;
        &lt;span class="nv"&gt;hidden1&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nv"&gt;graph&lt;/span&gt;
                &lt;span class="nf"&gt;fromBlock:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;image&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt;
                        &lt;span class="nv"&gt;input&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nv"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
                        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;image&lt;/span&gt; &lt;span class="nf"&gt;*&lt;/span&gt; &lt;span class="nv"&gt;weights1&lt;/span&gt; &lt;span class="nf"&gt;+&lt;/span&gt; &lt;span class="nv"&gt;biases1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;rectified&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
                &lt;span class="nf"&gt;named:&lt;/span&gt; &lt;span class="s"&gt;'layer1'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
        &lt;span class="nv"&gt;hidden2&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nv"&gt;graph&lt;/span&gt; &lt;span class="nf"&gt;fromBlock:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;hidden1&lt;/span&gt; &lt;span class="nf"&gt;*&lt;/span&gt; &lt;span class="nv"&gt;weights2&lt;/span&gt; &lt;span class="nf"&gt;+&lt;/span&gt; &lt;span class="nv"&gt;biases2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;rectified&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nf"&gt;named:&lt;/span&gt; &lt;span class="s"&gt;'layer2'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
        &lt;span class="nv"&gt;prediction&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nv"&gt;graph&lt;/span&gt;
                &lt;span class="nf"&gt;fromBlock:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                        &lt;span class="nv"&gt;netInput&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nv"&gt;hidden2&lt;/span&gt; &lt;span class="nf"&gt;*&lt;/span&gt; &lt;span class="nv"&gt;weights3&lt;/span&gt; &lt;span class="nf"&gt;+&lt;/span&gt; &lt;span class="nv"&gt;biases3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
                        &lt;span class="nv"&gt;netInput&lt;/span&gt; &lt;span class="nf"&gt;softmax&lt;/span&gt; &lt;span class="nf"&gt;findMaxOn:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="nf"&gt;asInt32Tensor&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
                &lt;span class="nf"&gt;named:&lt;/span&gt; &lt;span class="s"&gt;'layer3'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ready to train your own?&lt;/p&gt;

&lt;h3&gt;
  
  
  Now imagine… TensorFlow + Smalltalk + ARM + GPU
&lt;/h3&gt;

&lt;p&gt;As you can see in previous posts like &lt;a href="https://dev.to/martinezpeck/challenge-accepted-build-tensorflow-c-binding-for-raspberry-pi-in-2019-4f89"&gt;this one&lt;/a&gt; and &lt;a href="https://dev.to/martinezpeck/the-battle-continues-build-tensorflow-c-binding-for-raspberry-pi-in-2019-553j"&gt;this one&lt;/a&gt;, I have been building TensorFlow on different ARM-based SBCs (Raspberry Pi, Rock64, etc.). I succeeded on both ARM and ARM64, and VASmalltalk supports both too.&lt;/p&gt;


&lt;p&gt;So… why did I do this? One obvious reason is that soon I want to test an ARM-based board with a good GPU… like the Nvidia Jetson Nano (128 GPU cores) or the Jetson TX2 (256 cores). That is… I want TensorFlow running and taking advantage of all those cores!&lt;/p&gt;


&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;Only time will tell if doing Machine Learning and Deep Learning with Smalltalk is a good idea. From my side, even if I am far from being an expert in this area, I will do my best to make it happen.&lt;/p&gt;

&lt;p&gt;Thanks to Gera Richarte and Koan-Sin Tan for pushing these examples, and to Maxi Tabacman for reviewing the post.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>tensorflow</category>
      <category>showdev</category>
      <category>smalltalk</category>
    </item>
    <item>
      <title>The battle continues: Build TensorFlow C Binding for Raspberry Pi in 2019</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Mon, 29 Jul 2019 13:39:23 +0000</pubDate>
      <link>https://dev.to/martinezpeck/the-battle-continues-build-tensorflow-c-binding-for-raspberry-pi-in-2019-553j</link>
      <guid>https://dev.to/martinezpeck/the-battle-continues-build-tensorflow-c-binding-for-raspberry-pi-in-2019-553j</guid>
<description>&lt;p&gt;In a &lt;a href="https://dev.to/martinezpeck/challenge-accepted-build-tensorflow-c-binding-for-raspberry-pi-in-2019-4f89"&gt;recent post&lt;/a&gt; I showed how challenging it still is to build TensorFlow C bindings for Raspberry Pi and other SBCs (Single Board Computers), and the lack of pre-built binaries.&lt;/p&gt;

&lt;p&gt;As you could read, I was successful with one approach (cross-compiling with a Raspberry-Pi-only script), but I wasn’t yet able to compile on the target itself (a Raspberry Pi 3 in this example), and I still had quite a few open questions.&lt;/p&gt;

&lt;p&gt;In this post I will show some more successful attempts as well as answer some of my previous questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Successful attempt: build from scratch on Raspberry Pi 3B+ and Raspbian Buster
&lt;/h3&gt;

&lt;p&gt;I suspected that one of the reasons I failed in the first place was the amount of swap space, as well as the free hard disk space.&lt;/p&gt;

&lt;p&gt;It was the perfect excuse to buy another 64GB fast micro SD, try Raspbian Buster and start from a fresh install. So, that’s what I did.&lt;/p&gt;

&lt;p&gt;If you remember from the previous post, the first thing you need to do is install Bazel. This time, the first thing I did was give it a lot of swap space: instead of the 2GB I used before, I now assigned 8GB.&lt;/p&gt;
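&lt;p&gt;For reference, this is roughly how the swap bump looks on Raspbian. A sketch, assuming the stock &lt;code&gt;dphys-swapfile&lt;/code&gt; setup; it is shown against a scratch copy of the config so it runs unprivileged, while on the real Pi you edit &lt;code&gt;/etc/dphys-swapfile&lt;/code&gt; itself and restart the service:&lt;/p&gt;

```shell
# Sketch of the Raspbian swap change (scratch copy of the config; on the Pi,
# edit /etc/dphys-swapfile and then run:
#   sudo /etc/init.d/dphys-swapfile restart
conf=$(mktemp)
printf 'CONF_SWAPSIZE=100\n' > "$conf"    # Raspbian ships a small default
sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=8192/' "$conf"   # 8 GB, in MB
cat "$conf"
```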

&lt;p&gt;Next, remember that you can’t use just any version of Bazel: you need the version that TensorFlow requires. One open question I had from the previous post was “how do I know that?” And &lt;a href="https://github.com/PINTO0309/Tensorflow-bin/issues/10" rel="noopener noreferrer"&gt;here&lt;/a&gt; is the answer: you must check the file &lt;code&gt;tensorflow/tensorflow/tools/ci_build/install/install_bazel.sh&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For example, for 1.13.1 you can see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Select bazel version.&lt;/span&gt;
&lt;span class="nv"&gt;BAZEL_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"0.20.0"&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; +e
&lt;span class="nv"&gt;local_bazel_ver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;bazel version 2&amp;gt;&amp;amp;1 | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; label | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $3}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$local_bazel_ver&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BAZEL_VERSION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;exit &lt;/span&gt;0
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
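&lt;p&gt;That check can also be done mechanically. A small sketch (the relevant line of &lt;code&gt;install_bazel.sh&lt;/code&gt; is simulated with a local file so the snippet is self-contained; on a real checkout, point the &lt;code&gt;grep&lt;/code&gt; at the path mentioned above):&lt;/p&gt;

```shell
# Sketch: extract the required Bazel version from install_bazel.sh.
# The relevant line is simulated with a local file here.
f=$(mktemp)
printf '# Select bazel version.\nBAZEL_VERSION="0.20.0"\n' > "$f"
ver=$(grep '^BAZEL_VERSION=' "$f" | cut -d'"' -f2)
echo "TensorFlow 1.13.1 wants Bazel $ver"
```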



&lt;p&gt;OK, so I started compiling Bazel following the instructions from the previous post, and I ran into problems with the Java VM. I am sorry, but I didn’t write down the exact issue. It was clearly an error related to some Java compilation that I did NOT have on my previous attempt.&lt;/p&gt;

&lt;p&gt;My first step was to check the JVM versions (&lt;code&gt;java -version&lt;/code&gt;) of this Raspbian Buster vs. the Raspbian Stretch I used before. On the latter it showed &lt;code&gt;java version “1.8.0_65″&lt;/code&gt; while on the former it showed &lt;code&gt;openjdk 11.0.3 2019-04-16&lt;/code&gt;. OK… so Stretch came with Java 8 while Buster came with 11.&lt;/p&gt;

&lt;p&gt;This is when I imagined that maybe Bazel can only be built with one particular Java version. Which one? No clue (tell me if you know). So what I did on my fresh Raspbian Buster was install Java 8 too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;openjdk-8-jdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you can check which alternatives are available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;update-java-alternatives &lt;span class="nt"&gt;-l&lt;/span&gt;
java-1.11.0-openjdk-armhf      1111       /usr/lib/jvm/java-1.11.0-openjdk-armhf
java-1.8.0-openjdk-armhf       1081       /usr/lib/jvm/java-1.8.0-openjdk-armhf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Obviously, I didn’t want to change the default Java version for my whole OS, so I just made the following workaround:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;BAZEL_JAVAC_OPTS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"-J-Xms384m -J-Xmx1024m"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;JAVA_TOOL_OPTS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"-Xmx1024m"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;JAVA_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/usr/lib/jvm/java-1.8.0-openjdk-armhf"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;EXTRA_BAZEL_ARGS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"--host_javabase=@local_jdk//:jdk"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
bash ./compile.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Basically, I am just telling Bazel to use Java 8. After that, it finished correctly. The next step was to build TensorFlow itself.&lt;/p&gt;

&lt;p&gt;I started my build process but I hit &lt;a href="https://github.com/aws/aws-sdk-cpp/issues/836" rel="noopener noreferrer"&gt;another compilation issue related to AWS&lt;/a&gt;, which I could work around by simply telling Bazel NOT to compile the AWS support…&lt;/p&gt;

&lt;p&gt;Finally, after 13 hours (yes, that’s the expected time), I was able to build it successfully:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1152631270377218048-680" src="https://platform.twitter.com/embed/Tweet.html?id=1152631270377218048"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1152631270377218048-680');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1152631270377218048&amp;amp;theme=dark"
  }



 &lt;/p&gt;

&lt;p&gt;The final Bazel compilation script for TensorFlow was like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;JAVA_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/usr/lib/jvm/java-1.8.0-openjdk-armhf"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
bazel &lt;span class="nt"&gt;--host_jvm_args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-Xmx1024m&lt;/span&gt; &lt;span class="nt"&gt;--host_jvm_args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-Xms384m&lt;/span&gt; build &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noaws &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--config&lt;/span&gt; opt &lt;span class="nt"&gt;--verbose_failures&lt;/span&gt; &lt;span class="nt"&gt;--local_resources&lt;/span&gt; 1024,1.0,1.0 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-mfpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;neon-vfpv4 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-ftree-vectorize&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-funsafe-math-optimizations&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-ftree-loop-vectorize&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-fomit-frame-pointer&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-DRASPBERRY_PI&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-mfpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;neon-vfpv4 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-ftree-vectorize&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-funsafe-math-optimizations&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-ftree-loop-vectorize&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-fomit-frame-pointer&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-DRASPBERRY_PI&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
//tensorflow/tools/lib_package:libtensorflow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Interesting points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I added the &lt;code&gt;JAVA_HOME&lt;/code&gt; workaround for Java 8.&lt;/li&gt;
&lt;li&gt;I removed the original &lt;code&gt;--jobs=3&lt;/code&gt; that I used before because it didn’t seem to change much. &lt;/li&gt;
&lt;li&gt;I added the &lt;code&gt;--config=noaws&lt;/code&gt; workaround for AWS. &lt;/li&gt;
&lt;li&gt;As I knew it would take a long time to compile, I wanted to keep an eye on the CPU temperature, so every once in a while I ran &lt;code&gt;/opt/vc/bin/vcgencmd measure_temp&lt;/code&gt; and checked the result. &lt;/li&gt;
&lt;/ul&gt;
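&lt;p&gt;If you want to log that temperature reading instead of eyeballing it, the output is easy to parse. A sketch (the sample string is hard-coded here; on the Pi you would capture the real &lt;code&gt;vcgencmd&lt;/code&gt; output):&lt;/p&gt;

```shell
# Sketch: parse vcgencmd's "temp=51.5'C" output so the value can be logged.
# On the Pi, substitute:
#   out=$(/opt/vc/bin/vcgencmd measure_temp)
out="temp=51.5'C"
t=${out#temp=}     # strip the "temp=" prefix
t=${t%\'C}         # strip the "'C" suffix
echo "SoC temperature: $t C"
```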

&lt;p&gt;Once it finishes, you may want to copy the resulting files somewhere. In my case I did:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo cp &lt;/span&gt;bazel-bin/tensorflow/libtensorflow.so /usr/local/lib/
&lt;span class="nb"&gt;sudo cp &lt;/span&gt;bazel-bin/tensorflow/libtensorflow_framework.so /usr/local/lib/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Failed attempt (but with learnings!): build from scratch on Raspberry Pi 3B+ and Ubuntu Server 18.04 and aarch64 (ARM 64 bits)
&lt;/h3&gt;

&lt;p&gt;Once I had ARM 32-bit working, my next step was to try aarch64 (ARM 64-bit). At the time of this writing, there is no official 64-bit Raspbian version. It’s not the first time I have wanted to run something on the Pi 3B+ with aarch64, so I already had a micro SD with Ubuntu Server 18.04 up and running.&lt;/p&gt;

&lt;p&gt;I started eating my own dog food, and here comes the first issue: the way to change the swap partition is not the same on Raspbian and Ubuntu. So I followed &lt;a href="https://linuxize.com/post/how-to-add-swap-space-on-ubuntu-18-04/" rel="noopener noreferrer"&gt;this guide&lt;/a&gt; for Ubuntu and also assigned 8GB.&lt;/p&gt;

&lt;p&gt;Second, it seems you need &lt;code&gt;python&lt;/code&gt; installed. No, this Ubuntu did not come with any Python installed. I then understood why some blog posts start with “first, install dependencies…” and provide the line below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;gcc g++ swig build-essential openjdk-8-jdk python zip unzip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’ll let you decide whether you want Python 2.x or 3.x. As we found out before, the JDK version is also important.&lt;/p&gt;

&lt;p&gt;Anyway, after those 2 issues, I was able to compile and run Bazel. However, as soon as I tried to run the previous Bazel script for TensorFlow, I got a compilation error saying that &lt;code&gt;--copt=-mfpu=neon-vfpv4&lt;/code&gt; was not a recognized option:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarianopeck.files.wordpress.com%2F2019%2F07%2Fscreen-shot-2019-07-23-at-9.47.58-am.png%3Fw%3D748" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarianopeck.files.wordpress.com%2F2019%2F07%2Fscreen-shot-2019-07-23-at-9.47.58-am.png%3Fw%3D748"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks to &lt;a href="https://github.com/freedomtan" rel="noopener noreferrer"&gt;freedomtan&lt;/a&gt;, who told me that I don’t need all those extra &lt;code&gt;--copt&lt;/code&gt;s and &lt;code&gt;--host_copt&lt;/code&gt;s (they are for the more complicated Raspbian environment). The Bazel script can then be much simpler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bazel &lt;span class="nt"&gt;--host_jvm_args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-Xmx1024m&lt;/span&gt; &lt;span class="nt"&gt;--host_jvm_args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-Xms384m&lt;/span&gt; build &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noaws &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--config&lt;/span&gt; opt &lt;span class="nt"&gt;--verbose_failures&lt;/span&gt; &lt;span class="nt"&gt;--local_resources&lt;/span&gt; 1024,1.0,1.0 &lt;span class="se"&gt;\&lt;/span&gt;
//tensorflow/tools/lib_package:libtensorflow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I thought this was going to work, but after many hours it just didn’t finish; it somehow hung. I didn’t fight it any longer because this Ubuntu installation was never really stable for me. I will try this again soon with a fresh install of Debian Buster or something similar for aarch64.&lt;/p&gt;

&lt;h3&gt;
  
  
  Successful attempt: convinced someone else to build it!
&lt;/h3&gt;

&lt;p&gt;In the previous post I commented about &lt;a href="https://github.com/PINTO0309/Tensorflow-bin" rel="noopener noreferrer"&gt;this GitHub repo providing binaries for TensorFlow and Raspberry Pi&lt;/a&gt;. The drawback was that all the binaries it shipped were Python-only, not the C shared library.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/PINTO0309/Tensorflow-bin/issues/10" rel="noopener noreferrer"&gt;After a nice discussion with the author&lt;/a&gt;, he now build it for ARM and ARM64!! And seem to have included it as part of his release process. But that’s not all. It seems that even if Google itself provides official Python wheels for Raspberry Pi, there are many people that still uses his builds. Why? He explains himself:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The difference between my Wheel file and the official Wheel is the following two points.&lt;/p&gt;

&lt;p&gt;* Official Wheel is disabled by default on Tensorflow Lite. However, my wheel is enabled by default.&lt;/p&gt;

&lt;p&gt;* Tuned to 2.5 times the performance of the official Tensorflow Lite.&lt;/p&gt;

&lt;p&gt;It is 1. that is considered a problem by engineers around the world.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can read more at the link above and see how to get those binaries! BTW, the same author also &lt;a href="https://github.com/PINTO0309/Bazel_bin" rel="noopener noreferrer"&gt;provides binaries of Bazel for ARM&lt;/a&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  Future attempts (yes, the journey is not over!)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cross-compile myself (any combination, for this purpose) using &lt;a href="https://github.com/PINTO0309/Tensorflow-bin" rel="noopener noreferrer"&gt;these instructions&lt;/a&gt;. The main issue with this kind of compilation is that for some reason &lt;a href="https://github.com/PINTO0309/Tensorflow-bin/issues/10#issuecomment-514292503" rel="noopener noreferrer"&gt;the resulting shared library seems to be linked against one specific glibc version&lt;/a&gt;. Therefore, its usage is limited: it will only work on operating systems with a matching glibc (you can try multiple glibc versions on the same host, but it seems a pain). If you know how we can fix this, please let us know. &lt;/li&gt;
&lt;li&gt;Cross-compile myself using these &lt;a href="https://github.com/lhelontra/tensorflow-on-arm" rel="noopener noreferrer"&gt;other instructions&lt;/a&gt;, which seem interesting too. Unfortunately, &lt;a href="https://github.com/lhelontra/tensorflow-on-arm/issues/69" rel="noopener noreferrer"&gt;the author does not show much interest in providing C bindings&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;Try again aarch64 compilation on the Pi but with a different OS (not Ubuntu).&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>tensorflow</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>raspberrypi</category>
    </item>
    <item>
      <title>Challenge Accepted: Build TensorFlow C Binding for Raspberry Pi in 2019</title>
      <dc:creator>Mariano Martinez Peck</dc:creator>
      <pubDate>Wed, 17 Jul 2019 14:26:27 +0000</pubDate>
      <link>https://dev.to/martinezpeck/challenge-accepted-build-tensorflow-c-binding-for-raspberry-pi-in-2019-4f89</link>
      <guid>https://dev.to/martinezpeck/challenge-accepted-build-tensorflow-c-binding-for-raspberry-pi-in-2019-4f89</guid>
<description>&lt;p&gt;Believe me: setting up the environment and building the TensorFlow C binding for the Raspberry Pi is more complicated than training a neural network that makes me rich by robo-trading assets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Motivation
&lt;/h3&gt;

&lt;p&gt;As SBCs (Single Board Computers) get more powerful and cheaper, it becomes more likely that we will want to run heavier computation on them. People use terms like “Edge Computing”, “Embedded HPC/ML” and the like for this.&lt;/p&gt;

&lt;p&gt;Something common across all these different SBC alternatives is the use of an ARM processor plus some type of GPU.&lt;/p&gt;

&lt;p&gt;A classic example of such heavy computation is AI (Artificial Intelligence) and ML (Machine Learning). In this area, one of the most widely used and accepted libraries is Google’s &lt;a href="https://www.tensorflow.org/" rel="noopener noreferrer"&gt;TensorFlow&lt;/a&gt;. The library is mostly consumed from Python, but &lt;a href="https://www.tensorflow.org/install" rel="noopener noreferrer"&gt;there are also official pre-built binaries for C, Java and Go&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.tensorflow.org/guide/extend/bindings" rel="noopener noreferrer"&gt;The C API is commonly used for binding to other languages via FFI&lt;/a&gt; (Foreign Function Interface). From my point of view, that’s a critical binary.&lt;/p&gt;

&lt;p&gt;Currently, I am developing/testing a &lt;a href="https://github.com/vasmalltalk/tensorflow-vast" rel="noopener noreferrer"&gt;VASmalltalk binding that uses the C library via FFI&lt;/a&gt;. I tested it on Linux x64 and on Windows, and then I wanted to try it on a Raspberry Pi 3B+, Pine64, Nvidia Jetson Nano, etc. Why? Because I truly believe this “embedded ML” (or whatever you call it) has value. Running machine learning algorithms on a 35 USD machine seems interesting to me.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1149286273661710336-29" src="https://platform.twitter.com/embed/Tweet.html?id=1149286273661710336"&gt;
&lt;/iframe&gt;




 &lt;/p&gt;

&lt;p&gt;So…what happened? I simply &lt;a href="https://www.tensorflow.org/install/lang_c#download" rel="noopener noreferrer"&gt;went to TensorFlow’s official website&lt;/a&gt; and looked for the shared library. Guess what? There was none. Zero. Null. Nil. No binaries for any kind of ARM board. I was so surprised that &lt;a href="https://stackoverflow.com/questions/56837317/how-can-i-get-a-tensorflow-c-binding-for-raspberry-pi" rel="noopener noreferrer"&gt;I asked on StackOverflow&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I understand that there are plenty of boards out there, each with different hardware, software, drivers, operating systems, etc. But I expected binaries at least for some very common ones like the Raspberry Pi and the Nvidia Jetson Nano.&lt;/p&gt;

&lt;p&gt;Anyway…this is how my journey started. I am not sure whether my notes will be useful to others, but I am sure they will be useful to my future self.&lt;/p&gt;

&lt;p&gt;The next sections are sorted in the order in which I tried the solutions. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;DISCLAIMER: I am NOT a TensorFlow expert. So if you have any feedback, please share!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Failed attempt 1: install the Python version and extract the shared library from it
&lt;/h3&gt;

&lt;p&gt;As of some recent TensorFlow version (I think &amp;gt;= 1.9), the Raspberry Pi / Raspbian is officially supported. However, the only “binaries” available are Python wheels. I suspected Python would be using C underneath, so I installed the Python version directly on my Pi following &lt;a href="https://www.tensorflow.org/install/pip" rel="noopener noreferrer"&gt;the official instructions using &lt;code&gt;pip&lt;/code&gt;&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip3 &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--user&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; tensorflow &lt;span class="c"&gt;# install in $HOME&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I then looked into the installed directory and found some shared libraries!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /usr/local/lib/python3.5/dist-packages/tensorflow/python
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-lah&lt;/span&gt; _pywrap_tensorflow_internal.so
&lt;span class="nt"&gt;-rwxr-xr-x&lt;/span&gt; 1 root staff 154M Jul  1 09:32 _pywrap_tensorflow_internal.so
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But guess what? &lt;code&gt;_pywrap_tensorflow_internal.so&lt;/code&gt; is not the same as the shared library we need for the C binding (&lt;code&gt;libtensorflow.so.1.14.0&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;I kept looking and then &lt;a href="https://www.tensorflow.org/install/docker" rel="noopener noreferrer"&gt;I found an installation with Docker&lt;/a&gt;. But again, it was only possible to build binaries for Python, not for C.&lt;/p&gt;

&lt;p&gt;After all my failed attempts, &lt;a href="https://github.com/tensorflow/tensorflow/issues/30359" rel="noopener noreferrer"&gt;I opened a case on Github&lt;/a&gt; as a “feature request”.&lt;/p&gt;

&lt;h3&gt;
  
  
  Failed attempt 2: looking for non-official pre-built binaries
&lt;/h3&gt;

&lt;p&gt;The obvious next step was… “OK, if Google doesn’t do it, then someone else must”. I mean…. Smalltalk is not the only one wanting to bind against the C library, right?&lt;/p&gt;

&lt;p&gt;Long story short, I found none. I found &lt;a href="https://github.com/PINTO0309/Tensorflow-bin/issues/10" rel="noopener noreferrer"&gt;this one&lt;/a&gt;, but it had only Python builds (though the author said he may try providing shared libraries…so stay tuned!). I then found &lt;a href="https://dl.photoprism.org/tensorflow/" rel="noopener noreferrer"&gt;this one&lt;/a&gt;, which would work at least for the Nvidia Jetson Nano (but I don’t have the Nano with me yet). I found &lt;a href="https://github.com/lhelontra/tensorflow-on-arm/issues/69" rel="noopener noreferrer"&gt;another one&lt;/a&gt;, but again, Python only.&lt;/p&gt;

&lt;p&gt;So…in conclusion, I didn’t find the shared library for the Raspberry Pi anywhere. If you are aware of one, please let me know. What was worse was that most answers were “you’d better compile it yourself”. That didn’t sound too bad…I mean…sure, why not? Until I checked the size of the official Linux x64 shared library: &lt;code&gt;libtensorflow.so&lt;/code&gt; was 216MB. WHATTTTTTTTTT? At that moment I thought “OK, this is not gonna be easy”.&lt;/p&gt;

&lt;h3&gt;
  
  
  Abandoned attempt: build from scratch on the Pi
&lt;/h3&gt;

&lt;p&gt;My next obvious step was to try to build from scratch on the Pi. For that, I based my work on &lt;a href="https://gist.github.com/EKami/9869ae6347f68c592c5b5cd181a3b205" rel="noopener noreferrer"&gt;this very helpful step-by-step guide&lt;/a&gt;. However, time has passed since that guide was written; TensorFlow has become “a bit easier” to build on the Pi, so some of its instructions are no longer necessary. In addition, I ran into problems of my own that were not addressed there.&lt;/p&gt;

&lt;p&gt;I recommend you read that guide first and then continue here. Below is what I ended up doing, which is similar to that guide.&lt;/p&gt;

&lt;p&gt;Before getting started, some important tips I recommend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have several GB free on your Pi’s disk. &lt;/li&gt;
&lt;li&gt;Be sure NOT to run anything heavy on the Pi (shut down X, VNC, Docker, or anything else that can use CPU or memory).&lt;/li&gt;
&lt;li&gt;Run the build over &lt;code&gt;ssh&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;tmux&lt;/code&gt; or a similar tool, because the process takes many hours and you will likely want to power off your development machine and check the Pi’s status the next morning.&lt;/li&gt;
&lt;li&gt;Use heat sinks on your Pi if you don’t want to burn it. &lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Building the builder: bazel
&lt;/h4&gt;

&lt;p&gt;The first thing to know is that to build TensorFlow you need the bazel tool. Of course: &lt;code&gt;sudo apt-get install bazel&lt;/code&gt;, right? hahahahahahah. LOL. I wish it were that simple. Once again, it looks like there is no bazel package ready to install on the Pi, so you must first compile bazel itself. OK…this thing is becoming meta. I need to build the builder…what’s next? Compiling the Linux kernel on which I will build the builder?&lt;/p&gt;

&lt;p&gt;Now, to compile either bazel or TensorFlow, the 1GB of RAM in your Pi won’t be enough, so you must increase the swap space. The guide mentioned above mounts an external USB stick / hard disk / etc. In my case, I just increased the swap file on the SD card to 2GB. People recommend more, like 8GB, but I didn’t have that much free:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vim /etc/dphys-swapfile &lt;span class="c"&gt;# change CONF_SWAPFILE to 2000&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; /etc/init.d/dphys-swapfile stop
&lt;span class="nb"&gt;sudo&lt;/span&gt; /etc/init.d/dphys-swapfile start
free &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="c"&gt;# confirm we have now 2000mb&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;IMPORTANT: Whether or not you succeed with the bazel and TensorFlow compilation, it’s VERY important that you put back the original swap size (&lt;code&gt;CONF_SWAPSIZE&lt;/code&gt;) when you are done. Otherwise, you will shorten the SD card’s lifespan.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The compilation of bazel can take a few hours. Once it finished and I started to compile TensorFlow, I got this wonderful message:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please downgrade your bazel installation to version 0.21.0 or lower to build TensorFlow!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Are you kidding me?????? I spent hours compiling the wrong bazel version? FUC… Is there a way to know in advance which bazel version each TensorFlow version needs? I have no clue. If you know, please tell me. Anyway, I started over with the version required by the TensorFlow release I wanted (1.13.1):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;bazel
&lt;span class="nb"&gt;cd &lt;/span&gt;bazel 
wget https://github.com/bazelbuild/bazel/releases/download/0.21.0/bazel-0.21.0-dist.zip
unzip bazel-0.21.0-dist.zip
&lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;BAZEL_JAVAC_OPTS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"-J-Xms384m -J-Xmx1024m"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;JAVA_TOOL_OPTS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"-Xmx1024m"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;EXTRA_BAZEL_ARGS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"--host_javabase=@local_jdk//:jdk"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
bash ./compile.sh
&lt;span class="nb"&gt;sudo cp &lt;/span&gt;output/bazel /usr/local/bin/bazel
&lt;span class="nb"&gt;cd&lt;/span&gt; ..
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; bazel&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Java memory options (the &lt;code&gt;1024&lt;/code&gt; is because the Pi 3B+ has 1GB of RAM) are necessary because otherwise the compilation just fails (thanks &lt;a href="https://github.com/freedomtan" rel="noopener noreferrer"&gt;freedomtan&lt;/a&gt; for the help). And no, it doesn’t fail with a nice “Out of Memory” but with some kind of random error. &lt;a href="https://github.com/bazelbuild/bazel/issues/8882" rel="noopener noreferrer"&gt;I reported that in a Github issue.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other necessary part is &lt;code&gt;--host_javabase=@local_jdk//:jdk&lt;/code&gt;. I don’t even remember why…it simply wouldn’t work without it.&lt;/p&gt;

&lt;p&gt;If you succeed in doing this, save that &lt;code&gt;bazel&lt;/code&gt; binary somewhere safe! Don’t lose it, hahahaha. Again, if you know where I can find pre-built bazel binaries for the Pi, please let me know.&lt;/p&gt;
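&lt;p&gt;On the question above about knowing the required bazel version in advance: the TensorFlow sources do record it, although where it lives has moved around. Recent releases pin it in a &lt;code&gt;.bazelversion&lt;/code&gt; file at the repository root, while 1.x trees encode the bound in &lt;code&gt;configure.py&lt;/code&gt; via a &lt;code&gt;check_bazel_version(...)&lt;/code&gt; call. A sketch using a stand-in file, since the exact call varies by release:&lt;/p&gt;

```shell
# Stand-in for a TensorFlow 1.x configure.py; in a real checkout, grep
# the actual file (or just `cat .bazelversion` on newer releases).
printf 'check_bazel_version("0.19.0", "0.21.0")\n' > configure_sample.py
grep -o 'check_bazel_version([^)]*)' configure_sample.py
```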

&lt;h4&gt;
  
  
  Building TensorFlow
&lt;/h4&gt;

&lt;p&gt;The first steps are trivial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone &lt;span class="nt"&gt;--recurse-submodules&lt;/span&gt; https://github.com/tensorflow/tensorflow.git
&lt;span class="nb"&gt;cd &lt;/span&gt;tensorflow
git checkout v1.13.1
./configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;./configure&lt;/code&gt; step will ask you a few questions about what support you want to include in the TensorFlow build you are about to do. The answers depend on the hardware you are targeting. For the Raspberry Pi, I think it’s OK to simply answer no to all of them:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0tfg7lsa551wjgnjf7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0tfg7lsa551wjgnjf7g.png" width="748" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking at the questions, you can get an idea of what you would eventually answer for the Nvidia Jetson, Parallella Board, etc. And yes, I would like to see if it works on the Parallella Board:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1147132349450203136-374" src="https://platform.twitter.com/embed/Tweet.html?id=1147132349450203136"&gt;
&lt;/iframe&gt;




&lt;/p&gt;

&lt;p&gt;Finally, time to run the compilation. No, don’t grab a beer; you will end up drunk. And don’t make coffee either; you would drink so much caffeine that you wouldn’t be able to sleep for a whole week.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bazel &lt;span class="nt"&gt;--host_jvm_args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-Xmx1024m&lt;/span&gt; &lt;span class="nt"&gt;--host_jvm_args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-Xms384m&lt;/span&gt; build &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--config&lt;/span&gt; opt &lt;span class="nt"&gt;--verbose_failures&lt;/span&gt; &lt;span class="nt"&gt;--jobs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="nt"&gt;--local_resources&lt;/span&gt; 1024,1.0,1.0 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-mfpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;neon-vfpv4 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-ftree-vectorize&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-funsafe-math-optimizations&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-ftree-loop-vectorize&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-fomit-frame-pointer&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-DRASPBERRY_PI&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-mfpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;neon-vfpv4 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-ftree-vectorize&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-funsafe-math-optimizations&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-ftree-loop-vectorize&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-fomit-frame-pointer&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--host_copt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;-DRASPBERRY_PI&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
//tensorflow/tools/lib_package:libtensorflow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some interesting points about that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most of the &lt;code&gt;--copt&lt;/code&gt; and &lt;code&gt;--host_copt&lt;/code&gt; flags &lt;a href="https://github.com/tensorflow/tensorflow/issues/30359" rel="noopener noreferrer"&gt;were not identified by me&lt;/a&gt;. Again, thanks &lt;a href="https://github.com/freedomtan" rel="noopener noreferrer"&gt;freedomtan&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;I already explained the Java memory arguments above.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--verbose_failures&lt;/code&gt; prints a description of what went wrong if the build fails.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--local_resources&lt;/code&gt; specifies “how much hardware” (RAM, CPU cores, I/O) the build may use.&lt;/li&gt;
&lt;li&gt;For me, the build was still failing because of low resources, so I ended up adding &lt;code&gt;--jobs=3&lt;/code&gt;, which minimizes resource usage (but takes longer, obviously). &lt;a href="https://stackoverflow.com/questions/34382360/decrease-bazel-memory-usage" rel="noopener noreferrer"&gt;I got this from StackOverflow&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;It’s interesting to note that building a Python wheel and building the shared library are almost the same process: to get a wheel, you just use &lt;code&gt;//tensorflow/tools/pip_package:build_pip_package&lt;/code&gt; as the target instead of &lt;code&gt;//tensorflow/tools/lib_package:libtensorflow&lt;/code&gt; (for the .so). That’s why I kindly asked those already providing wheels to also provide shared libraries. &lt;/li&gt;
&lt;li&gt;This process will take many, many hours (in my case, more than 20). So go to sleep and check the next morning. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This &lt;strong&gt;should&lt;/strong&gt; work. However, as it was taking so long, I kept looking for other alternatives and never really let the process finish, so I can’t confirm that it works. And by now my SD card has no free space and I already have a working .so (next section). If you try it and it works, let me know! Otherwise, I guess I will try again in the near future.&lt;/p&gt;
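&lt;p&gt;For anyone who does let it finish: with the &lt;code&gt;lib_package:libtensorflow&lt;/code&gt; target, the output should be a tarball under &lt;code&gt;bazel-bin&lt;/code&gt; containing the header and the &lt;code&gt;.so&lt;/code&gt;. Roughly as follows (path as produced by that target; adjust if your tree differs):&lt;/p&gt;

```shell
# Unpack the freshly built C library into /usr/local and refresh the
# dynamic linker cache so programs can find libtensorflow.so.
sudo tar -C /usr/local -xzf \
  bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz
sudo ldconfig
```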

&lt;h3&gt;
  
  
  Final attempt: cross-compiling
&lt;/h3&gt;

&lt;p&gt;While I was waiting for the compilation on the Pi to finish, suffering as I watched its green LED stay constantly lit for hours and hours, I kept looking for more alternatives. By chance, I arrived at &lt;a href="https://www.tensorflow.org/install/source_rpi#build_from_source" rel="noopener noreferrer"&gt;an official page that showed how to cross-compile TensorFlow for the Pi.&lt;/a&gt; (I should have seen this before! hahahahaha)&lt;/p&gt;

&lt;p&gt;Just to appreciate how difficult it is to get the whole environment set up, consider that the cross-compile procedure is to use Docker and start from an existing image they provide…&lt;/p&gt;

&lt;p&gt;The procedure looked very simple: install Docker and then run one shell command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tensorflow/tools/ci_build/ci_build.sh PI &lt;span class="se"&gt;\&lt;/span&gt;
    tensorflow/tools/ci_build/pi/build_raspberry_pi.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cool. That sounded magical. Too good to be true. I was ready to take advantage of all 8 CPU cores and 16GB of RAM of my MBP. Unfortunately, the process never finished successfully for me. Each run would fail at a different place, and the error was never clear. Again, &lt;a href="https://github.com/tensorflow/tensorflow/issues/30764" rel="noopener noreferrer"&gt;I opened a case on Github&lt;/a&gt;, but there has been no response so far.&lt;/p&gt;

&lt;p&gt;I was about to abandon all my attempts at TensorFlow on ARM / SBCs. But I had one last idea: try the Docker cross-compilation again, but in a virtual machine running Linux Mint 18.3. Of course, the VM was never going to be as fast as doing it directly on my host (OSX), but it should still be much faster than doing it on the Pi.&lt;/p&gt;

&lt;p&gt;Call it a miracle or not, but after a few hours, that DID work. I successfully got the &lt;code&gt;.so&lt;/code&gt;, moved it onto the Pi, and then ran my tests. Everything was working:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1151100675104940032-44" src="https://platform.twitter.com/embed/Tweet.html?id=1151100675104940032"&gt;
&lt;/iframe&gt;




&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;I hope Google will officially ship C binaries, at least for the most common SBCs like the Raspberry Pi or the Jetson Nano. Failing that, I hope some of the people already compiling wheels for the Raspberry Pi will compile shared libraries too.&lt;/p&gt;

&lt;p&gt;Nevertheless, I think it was worth learning the low-level details of building from scratch. Why? Because there are many boards I would like to experiment with: the RPi3 with a 64-bit ARM OS (Armbian, Ubuntu Server 18.04, etc.), the RPi4, the Pine64, the Jetson Nano, etc. We can even test on an Nvidia Jetson TX2!!! For all those cases, I won’t be able to use the cross-compile alternative out of the box, because it was intended for the Pi only.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1141775466384306182-133" src="https://platform.twitter.com/embed/Tweet.html?id=1141775466384306182"&gt;
&lt;/iframe&gt;




&lt;/p&gt;

&lt;p&gt;I hope I have helped someone besides my future self. If you have any kind of feedback, please share!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://dev.to/martinezpeck/the-battle-continues-build-tensorflow-c-binding-for-raspberry-pi-in-2019-553j"&gt;UPDATE: See next post about this topic!&lt;/a&gt;
&lt;/h2&gt;

</description>
      <category>tensorflow</category>
      <category>machinelearning</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
  </channel>
</rss>
