<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Xavier Rey-Robert</title>
    <description>The latest articles on DEV Community by Xavier Rey-Robert (@xreyrobertibm).</description>
    <link>https://dev.to/xreyrobertibm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F142674%2F6c4b01d5-2aca-432a-95d5-422955420967.jpeg</url>
      <title>DEV Community: Xavier Rey-Robert</title>
      <link>https://dev.to/xreyrobertibm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/xreyrobertibm"/>
    <language>en</language>
    <item>
      <title>Monitoring HP HBA H240 with telegraf and grafana</title>
      <dc:creator>Xavier Rey-Robert</dc:creator>
      <pubDate>Wed, 28 Feb 2024 21:58:17 +0000</pubDate>
      <link>https://dev.to/xreyrobertibm/monitoring-hp-hba-h240-with-telegraf-and-grafana-4f0c</link>
      <guid>https://dev.to/xreyrobertibm/monitoring-hp-hba-h240-with-telegraf-and-grafana-4f0c</guid>
      <description>&lt;p&gt;I've recently got an HP SAS HBA H240 for my home lab to manage eight SAS SSD Pm1633a drives for better IOPs - who doesn't need that to run OpenShift at home right ? Given the HBA240's tendency to heat up, especially in a workstation setup, it's important to keep an eye on temperatures (Controller and SSDs).&lt;/p&gt;

&lt;p&gt;To tackle this, I wrote a simple Python script that parses SSA CLI output into JSON format. This makes it easy to feed the data into Telegraf, enabling straightforward monitoring with Grafana.&lt;/p&gt;
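&lt;p&gt;To give a feel for the idea, here is a minimal sketch (not the actual gist - the sample lines and JSON field names are mine) of folding ssacli-style temperature lines into a JSON object that Telegraf's &lt;code&gt;exec&lt;/code&gt; input plugin could consume:&lt;/p&gt;

```shell
# Minimal sketch: turn ssacli-style temperature lines into JSON.
# The input is hard-coded for the demo; in practice, pipe in real ssacli output.
printf 'Controller Temperature (C): 55\nCurrent Temperature (C): 41\n' |
awk -F': ' '
  /Controller Temperature/ { ctrl = $2 }
  /Current Temperature/    { drive = $2 }
  END { printf "{\"controller_temp_c\": %s, \"drive_temp_c\": %s}\n", ctrl, drive }'
```

&lt;p&gt;The real script handles multiple drives; this only shows the parsing pattern.&lt;/p&gt;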

&lt;p&gt;Just a quick post to share this, and for posterity...&lt;br&gt;
I won't get into the Telegraf and Grafana side here - just comment if you want the Telegraf config / Grafana panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/XReyRobert/f3d6177d2b50b4198ea9f8896437c5b8"&gt;https://gist.github.com/XReyRobert/f3d6177d2b50b4198ea9f8896437c5b8&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Effortlessly Exporting and Importing Podman Volumes Across Hosts</title>
      <dc:creator>Xavier Rey-Robert</dc:creator>
      <pubDate>Sun, 18 Feb 2024 12:18:35 +0000</pubDate>
      <link>https://dev.to/xreyrobertibm/effortlessly-exporting-and-importing-podman-volumes-across-hosts-2ph4</link>
      <guid>https://dev.to/xreyrobertibm/effortlessly-exporting-and-importing-podman-volumes-across-hosts-2ph4</guid>
      <description>&lt;h2&gt;
  
  
  Effortlessly Exporting and Importing Podman Volumes Across Hosts
&lt;/h2&gt;

&lt;p&gt;Hey folks, let's tackle a common hiccup in managing Podman volumes remotely. If you've tried using &lt;code&gt;podman volume export&lt;/code&gt; with a remote Podman client, you've likely noticed it's not directly supported. But I've crafted a workaround that simplifies the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2yygppv2xavzo9e7hld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2yygppv2xavzo9e7hld.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;Working remotely and need to move a Podman volume from one server to another? You'll quickly find that &lt;code&gt;podman volume export&lt;/code&gt; isn't designed for remote client operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Workaround
&lt;/h3&gt;

&lt;p&gt;The solution lies in two Bash scripts that utilize SSH, SCP, and Podman's capabilities to facilitate volume export and import across remote hosts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exporting Volumes Made Simple
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;podman_remote_volume_export.sh&lt;/code&gt; script connects to your remote host via SSH, exports the specified Podman volume to a tarball, and then SCPs this tarball back to your local machine. It's a straightforward way to get your volume data where you need it.&lt;/p&gt;
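&lt;p&gt;In essence, the export flow looks like this - a hedged sketch, assuming passwordless SSH; the function name and the &lt;code&gt;DRY_RUN&lt;/code&gt; switch are mine, not the gist's:&lt;/p&gt;

```shell
# Sketch of the remote export flow. DRY_RUN=1 prints the commands instead of
# executing them, so the logic can be inspected without a remote host.
remote_volume_export() {
  host=$1; vol=$2; tarball="/tmp/${vol}.tar"
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
  run ssh "$host" "podman volume export $vol --output $tarball"  # export on the remote
  run scp "$host:$tarball" "./${vol}.tar"                        # pull the tarball back
  run ssh "$host" "rm -f $tarball"                               # tidy up the remote
}
```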

&lt;h3&gt;
  
  
  Importing Just as Easily
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;podman_remote_volume_import.sh&lt;/code&gt; script then takes over, uploading the exported tarball to a different remote host. It checks for existing volumes (offering an option to overwrite) and imports the volume data efficiently.&lt;/p&gt;
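&lt;p&gt;The import side can be sketched the same way (again, the names are mine, and the real script's overwrite-confirmation prompt is omitted here; &lt;code&gt;podman volume exists&lt;/code&gt; returns 0 when the volume is already present):&lt;/p&gt;

```shell
# Sketch of the remote import flow: upload the tarball, create the volume
# if needed, import the data, then remove the temporary tarball.
remote_volume_import() {
  host=$1; vol=$2; tarball=$3; tmp="/tmp/$(basename "$tarball")"
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
  run scp "$tarball" "$host:$tmp"
  run ssh "$host" "podman volume exists $vol || podman volume create $vol"
  run ssh "$host" "podman volume import $vol $tmp; rm -f $tmp"
}
```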

&lt;h3&gt;
  
  
  A Few Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Safety Checks&lt;/strong&gt;: To prevent accidental data loss, there's a prompt for confirmation before overwriting existing volumes during the import process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean as We Go&lt;/strong&gt;: Both scripts clean up after themselves, removing temporary tarballs to keep your hosts tidy.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Usage
&lt;/h3&gt;

&lt;p&gt;To run these scripts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./podman_remote_volume_export.sh user@remotehost volume_name
./podman_remote_volume_import.sh user@remotehost new_volume_name /path/to/archive.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Bottom Line
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;podman volume export&lt;/code&gt; limitation for remote operations can be circumvented with these scripts, streamlining the process of migrating volumes. Designed for developers familiar with container management, they offer a practical solution to a common problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  CODE
&lt;/h3&gt;

&lt;p&gt;For a closer look and potential customization, the scripts are available on Gist: &lt;a href="https://gist.github.com/XReyRobert/aaec1a69eb38f54d869c6b5447babb20"&gt;Podman Volume Management Scripts Gist&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy containerizing!&lt;/p&gt;

&lt;p&gt;Edit 02/19/24:&lt;br&gt;
Just added &lt;code&gt;podman_remote_volume_migrate.sh&lt;/code&gt; to migrate one or more volumes in one shot...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;podman_remote_volume_migrate.sh
Usage: podman_remote_volume_migrate.sh &amp;lt;SOURCE_HOST&amp;gt; &amp;lt;DESTINATION_HOST&amp;gt; &amp;lt;VOLUME_NAME&amp;gt;...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Quick hack to use multiple instances of Newtek NDI Scan Converter on MacOS</title>
      <dc:creator>Xavier Rey-Robert</dc:creator>
      <pubDate>Sat, 29 Aug 2020 18:32:54 +0000</pubDate>
      <link>https://dev.to/xreyrobertibm/quick-hack-to-use-multiple-instances-of-newtek-ndi-scan-converter-on-macos-10eb</link>
      <guid>https://dev.to/xreyrobertibm/quick-hack-to-use-multiple-instances-of-newtek-ndi-scan-converter-on-macos-10eb</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1lQ4aH8N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/p1fq62c7dj18kkrj9dkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1lQ4aH8N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/p1fq62c7dj18kkrj9dkg.png" alt="Alt Text" width="800" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm prepping for some upcoming education sessions and ran into an issue: I needed multiple NDI video streams out of my Mac applications, so I had to find a way to &lt;strong&gt;overcome the NewTek NDI Scan Converter app's limitation of a single feed&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you stream live from your PC - for videoconferencing, teaching or gaming - you might already know NewTek's free NDI tools and what a great addition they can be to OBS.&lt;/p&gt;

&lt;p&gt;Check out the NewTek NDI Tools - &lt;a href="https://ndi.tv/tools/"&gt;download here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This quick post showcases how to allow multiple instances of the NewTek Scan Converter to run on a Mac. You can then have multiple apps broadcast as multiple NDI streams. In the screenshot above you can see 3 NDI feeds - one from an iPhone cam, one from a Terminal and one from the Heaven 3D benchmark - all displayed at once in OBS.&lt;/p&gt;

&lt;p&gt;While it's easy to spawn more than one &lt;em&gt;Scan Converter&lt;/em&gt; app (via the &lt;code&gt;open -n&lt;/code&gt; command line), the NDI stream name is hard-coded to "Scan Converter", so the two instances' outputs conflict (and only one shows up).&lt;/p&gt;

&lt;p&gt;So I came up with the following procedure to make things work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Duplicate the &lt;em&gt;NewTek NDI Scan Converter&lt;/em&gt; app and rename it to whatever you like (below I use &lt;em&gt;Hacked Scan Converter&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will need a hex editor; you can get &lt;a href="https://ridiculousfish.com/hexfiend/"&gt;Hex Fiend&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the app package and look for the app binary: &lt;em&gt;-&amp;gt;Contents-&amp;gt;MacOS-&amp;gt;NewTek NDI Scan Converter&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Open it with Hex Fiend and search for the hex sequence "53 63 61 6E 20 43 6F 6E 76 65 72 74 65 72 00 61 70 70 6C 69 63 61 74 69 6F 6E 4E 61 6D 65" (which decodes to &lt;code&gt;Scan Converter\0applicationName&lt;/code&gt;). This is the string used for the NDI stream name.&lt;/li&gt;
&lt;li&gt;Change &lt;em&gt;Scan Converter&lt;/em&gt; to something like &lt;em&gt;Hacked Scan 01&lt;/em&gt; (it has to be the &lt;strong&gt;exact&lt;/strong&gt; same length)&lt;/li&gt;
&lt;/ul&gt;
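&lt;p&gt;If you'd rather locate the string before opening the hex editor, &lt;code&gt;grep&lt;/code&gt; can report its byte offset (demonstrated here on a scratch file, not the real binary):&lt;/p&gt;

```shell
# Find the byte offset of a marker string inside a binary file.
# -a treats binary as text, -b prints the byte offset, -o prints only the match.
printf 'xxScan Converter\000applicationNameyy' > /tmp/demo.bin
grep -oab "Scan Converter" /tmp/demo.bin   # prints "2:Scan Converter" for this demo file
```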

&lt;p&gt;As we've modified the binary, the app signature is now invalid, so we'll just get rid of it with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;codesign --remove-signature '/Applications/Hacked Scan Converter.app'&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we have to change the bundle info so that the two apps won't interfere with each other in the security settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edit &lt;em&gt;/Applications/Hacked Scan Converter.app/Contents/Info.plist&lt;/em&gt; and change the &lt;em&gt;CFBundleName&lt;/em&gt; and &lt;em&gt;CFBundleIdentifier&lt;/em&gt; values to reflect the new name:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;key&amp;gt;CFBundleName&amp;lt;/key&amp;gt;
&amp;lt;string&amp;gt;Hacked NDI Scan Converter&amp;lt;/string&amp;gt;
&amp;lt;key&amp;gt;CFBundleIdentifier&amp;lt;/key&amp;gt;
&amp;lt;string&amp;gt;com.hacked.Application-Mac-NDI-ScanConverter&amp;lt;/string&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Start both apps and make sure they have the right permissions in &lt;em&gt;Security-&amp;gt;Privacy-&amp;gt;Screen Recording&lt;/em&gt; settings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should now see &lt;em&gt;Scan Converter&lt;/em&gt; and &lt;em&gt;Hacked Scan 01&lt;/em&gt; NDI sources available in NDI Monitor or other NDI apps. &lt;/p&gt;

&lt;p&gt;Enjoy!&lt;/p&gt;

&lt;p&gt;You can repeat the steps above with different names if you need more than two simultaneous NDI Scan Converter streams...&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Gigabyte GA-X79-UP4 rev 1.1 with Xeon E5 2697v2 - 12 cores 24 threads</title>
      <dc:creator>Xavier Rey-Robert</dc:creator>
      <pubDate>Tue, 04 Aug 2020 22:02:39 +0000</pubDate>
      <link>https://dev.to/xreyrobertibm/gigabyte-ga-x79-up4-with-xeon-e5-2697-v2-12-cores-24-threads-297i</link>
      <guid>https://dev.to/xreyrobertibm/gigabyte-ga-x79-up4-with-xeon-e5-2697-v2-12-cores-24-threads-297i</guid>
      <description>&lt;p&gt;This is a short post for people with &lt;a href="https://www.gigabyte.com/Motherboard/GA-X79-UP4-rev-11#ov"&gt;Gigabyte X79-UP4&lt;/a&gt; wondering if they can upgrade their CPU to a 12 cores &lt;a href="https://ark.intel.com/content/www/us/en/ark/products/75283/intel-xeon-processor-e5-2697-v2-30m-cache-2-70-ghz.html"&gt;Xeon E5 2697v2&lt;/a&gt;. It's probably not interesting for the rest of the world! As I found absolutely no success story online with this motherboard/cpu combination, I drop it here for the archives :)&lt;/p&gt;

&lt;p&gt;In 2014, I built myself a decent setup with an X79-UP4 and a 6-core &lt;strong&gt;i7-4930K&lt;/strong&gt;. Six years later, in 2020, it is still a very nice workhorse and doesn't pale in comparison to more modern setups. As an example, the 6-core i7 2018 MacBook Pro I'm using for work is far below it in performance under load (mainly due to thermal throttling). The 4930K is still a well-regarded CPU, and overclocked under (simple) water cooling I can easily get all 6 cores running together at 4.3 GHz.&lt;/p&gt;

&lt;p&gt;Just recently, after upgrading my GPU to a &lt;a href="https://www.amd.com/en/products/graphics/amd-radeon-rx-5700-xt"&gt;Radeon RX 5700 XT&lt;/a&gt; and adding 32 GB of memory, I started to wonder if I could get a better CPU for my setup, so I started looking at the Xeon E5 line. &lt;/p&gt;

&lt;p&gt;When I picked the &lt;a href="https://ark.intel.com/content/www/us/en/ark/products/77780/intel-core-i7-4930k-processor-12m-cache-up-to-3-90-ghz.html"&gt;i7 4930K&lt;/a&gt; in 2014 it was priced at $600, but at the top of the Ivy Bridge line stood the &lt;a href="https://ark.intel.com/content/www/us/en/ark/products/75283/intel-xeon-processor-e5-2697-v2-30m-cache-2-70-ghz.html"&gt;Xeon E5 2697-v2&lt;/a&gt; - 12 cores, 24 threads - for a bit less than $3000! It was the best CPU you could fit in the $6000+ &lt;a href="https://support.apple.com/kb/SP697?locale=fr_FR"&gt;2013 Mac Pro&lt;/a&gt; I was drooling over at the time.&lt;/p&gt;

&lt;p&gt;The E5 2697v2 is still sold new by Intel at $2000, but you can find used ones for much less. I picked mine up for $170, &lt;a href="https://fr.aliexpress.com/wholesale?catId=0&amp;amp;initiative_id=SB_20200804134853&amp;amp;SearchText=xeon+2697+v2"&gt;directly from China!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I checked Gigabyte's specifications, I realised that, unfortunately, &lt;strong&gt;the Xeon E5-2697v2 was not on the list of &lt;a href="https://www.gigabyte.com/Motherboard/GA-X79-UP4-rev-11/support#support-cpu"&gt;supported CPUs&lt;/a&gt;&lt;/strong&gt;. Strangely, all the other CPUs of the Ivy Bridge-E family are there, but not this one (and a few others). I contacted Gigabyte support and got an answer along the lines of &lt;em&gt;"If it's not on the list, it's not supported. We recommend using CPUs from the list"&lt;/em&gt;. Fine, but not supported because untested, or tested and not working? Three weeks later, the request is still open and not properly answered... Congrats, support.&lt;/p&gt;

&lt;p&gt;Well, three weeks was actually the time the CPU needed to arrive from China; at this price I didn't wait and decided to take the risk and try for myself! I could see no reason why the whole family would work but not this one...&lt;/p&gt;

&lt;p&gt;And I was right! &lt;strong&gt;It works perfectly&lt;/strong&gt;! And as I'm on summer vacation, I'm taking some time to tell the world about it, or at least drop the information here in case someone else is googling down the same path. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kMQ43xfe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://valid.x86.fr/cache/screenshot/dqm095.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kMQ43xfe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://valid.x86.fr/cache/screenshot/dqm095.png" alt="CPU-Z validation" width="403" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, is it really an upgrade from an overclocked 4930K? Hmm, not an easy answer.&lt;/p&gt;

&lt;p&gt;My Geekbench score for the overclocked &lt;strong&gt;4930K was 975 single-core and 5884 multi-core&lt;/strong&gt; (all 6 cores running at 4.3 GHz). The &lt;strong&gt;non-overclocked E5 2697v2 is a bit disappointing, with scores of 678 single-core and 6439 multi-core&lt;/strong&gt;. That's -30.5% and +9.4% respectively.&lt;/p&gt;
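&lt;p&gt;For the record, the exact deltas behind those rounded percentages (awk as a quick calculator):&lt;/p&gt;

```shell
# Relative change of the stock E5 2697v2 versus the overclocked 4930K.
awk 'BEGIN {
  printf "single-core: %+.1f%%\n", (678 / 975 - 1) * 100
  printf "multi-core:  %+.1f%%\n", (6439 / 5884 - 1) * 100
}'
```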

&lt;p&gt;The &lt;strong&gt;E5 2697v2 - like most Xeons - is locked&lt;/strong&gt; and therefore not &lt;strong&gt;easily&lt;/strong&gt; overclockable. I started playing with the bus clock, which is the only way to squeeze a little more juice out of it, and got honourable scores of &lt;strong&gt;759 single-core and 8480 multi-core&lt;/strong&gt; - respectively -22% and +44% versus the overclocked 4930K. But to make things worse, the 113 MHz bus clock boost led to some instability with my GPU...&lt;/p&gt;

&lt;p&gt;Of course, against a non-overclocked 4930K it would be a different story. To be fair, I had been running my 4930K at stock speed for 6 years, totally satisfied, and only started overclocking it a few weeks ago because I was about to receive the new CPU. It's been running rock steady overclocked since then, though.&lt;/p&gt;

&lt;p&gt;On the temperature side, under standard/idle use (browsing/video) I reach 40°C with the Xeon, where it was 60°C with the overclocked 4930K. Under heavy load (Cinebench) I would reach 80°C with the 4930K, and I top out at 60°C with the Xeon...&lt;/p&gt;

&lt;p&gt;Using my system, I definitely cannot feel the -30% penalty on single-core performance, and for some workloads it might still be a nice improvement. TensorFlow used to take about an hour to compile; I will try and see how long it takes now.&lt;/p&gt;

&lt;p&gt;Oh, but wait - I'm just reading that there is the &lt;strong&gt;Xeon E5 1680v2&lt;/strong&gt; - 8 cores, 16 threads - with one interesting particularity in the Xeon line: &lt;strong&gt;it's unlocked&lt;/strong&gt;. I have the feeling that this one could beat the single-core performance of the 4930K and, overclocked, probably get close to the multi-core performance of the 12-core E5 2697v2! &lt;a href="http://cpuboss.com/cpus/Intel-Xeon-E5-2697-v2-vs-Intel-Xeon-E5-1680-v2"&gt;See how close they are non-overclocked&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I guess I could sell my 4930K and order an E5 1680v2, just to try... or just stop here and wait...&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Machine learning on macOs using Keras -&gt; Tensorflow (1.15.0) -&gt; nGraph -&gt; PlaidML -&gt; AMD GPU</title>
      <dc:creator>Xavier Rey-Robert</dc:creator>
      <pubDate>Thu, 23 Jul 2020 08:46:06 +0000</pubDate>
      <link>https://dev.to/xreyrobertibm/machine-learning-on-macos-using-keras-tensorflow-1-15-0-ngraph-plaidml-amd-gpu-l4j</link>
      <guid>https://dev.to/xreyrobertibm/machine-learning-on-macos-using-keras-tensorflow-1-15-0-ngraph-plaidml-amd-gpu-l4j</guid>
      <description>&lt;p&gt;Since the unavailability of Cuda on macOS, choices to use GPUs for Machine learning on Macs are sparse.&lt;/p&gt;

&lt;p&gt;After failing to find a practical way to do it, I resorted to using a second, Linux computer with an Nvidia GPU to train my networks.&lt;/p&gt;

&lt;p&gt;The availability of macOS Catalina with &lt;a href="https://support.apple.com/en-us/HT208544" rel="noopener noreferrer"&gt;Apple support for Navi AMD GPUs&lt;/a&gt; prompted me to give it another try. It was quite tough, so I decided to write it down and share the experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  The easy way: Keras with &lt;a href="https://github.com/plaidml/plaidml" rel="noopener noreferrer"&gt;PlaidML&lt;/a&gt; - &lt;em&gt;No tensorflow involved&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;This is quite straightforward and I'm not going to cover it again here; you can check this article: &lt;a href="https://medium.com/@bamouh42/gpu-acceleration-on-amd-with-plaidml-for-training-and-using-keras-models-57a9fce883b9" rel="noopener noreferrer"&gt;https://medium.com/@bamouh42/gpu-acceleration-on-amd-with-plaidml-for-training-and-using-keras-models-57a9fce883b9&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my case that was not satisfying: here Keras uses PlaidML as its backend, and I want to be able to use &lt;a href="https://github.com/keunwoochoi/kapre" rel="noopener noreferrer"&gt;Kapre&lt;/a&gt;, which &lt;strong&gt;requires a TensorFlow backend&lt;/strong&gt;. Kapre is a neat library providing Keras layers that compute mel-spectrograms on the fly. &lt;/p&gt;

&lt;p&gt;Be aware that the &lt;a href="https://devclass.com/2019/09/18/another-one-bites-the-dust-keras-team-steps-away-from-multi-backends-refocuses-on-tf-keras/" rel="noopener noreferrer"&gt;Keras team is stepping away from multi-backends&lt;/a&gt;, so &lt;strong&gt;the Keras -&amp;gt; PlaidML&lt;/strong&gt; approach might be a dead end anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  The journey to Tensorflow execution on mac GPUs / eGPUs
&lt;/h2&gt;

&lt;p&gt;The key element here is &lt;a href="https://github.com/NervanaSystems/ngraph" rel="noopener noreferrer"&gt;nGraph&lt;/a&gt;. Without going into details, nGraph pursues a neutral approach, supporting multiple frameworks &lt;em&gt;(TensorFlow, ONNX, etc.)&lt;/em&gt; and multiple hardware targets &lt;em&gt;(Intel CPUs, NNPs, etc.)&lt;/em&gt;, and luckily for us (not so lucky - just wait) nGraph was also integrated with PlaidML to offer support for GPUs &lt;em&gt;(Intel, Nvidia and... AMD)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2FNervanaSystems%2Fngraph%2Fblob%2Fmaster%2Fdoc%2Fsphinx%2Fsource%2Fgraphics%2FnGraph_main.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2FNervanaSystems%2Fngraph%2Fblob%2Fmaster%2Fdoc%2Fsphinx%2Fsource%2Fgraphics%2FnGraph_main.png%3Fraw%3Dtrue" alt="ngraph-ecosystem"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So on paper all is great - we have a route to follow: &lt;br&gt;
&lt;strong&gt;Keras -&amp;gt; Tensorflow -&amp;gt; nGraph -&amp;gt; nGraph-bridge -&amp;gt; PlaidML -&amp;gt; Metal -&amp;gt; AMD GPU&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this domain like others, things are moving fast - so fast that it's not always easy to keep pace, and it's the same for the teams behind these projects. There is a lot of software involved, and things change so quickly that developers don't have time - or don't take the time - to let things settle down. &lt;/p&gt;

&lt;p&gt;The nGraph-bridge team hasn't made a proper release since August 2019 (v0.18.1), and while they are still actively working on the project, they seem to be focused on a big refactoring. &lt;/p&gt;

&lt;p&gt;To make things worse, &lt;strong&gt;PlaidML support was (silently) dropped from nGraph&lt;/strong&gt; in April, without much explanation or warning, so forget about using the latest GitHub master to sort things out! I spent hours wondering why it wasn't working when it simply wasn't there anymore.&lt;/p&gt;
&lt;h4&gt;
  
  
  Why was the PlaidML bridge dropped?
&lt;/h4&gt;

&lt;p&gt;It seems that the future &lt;em&gt;path to happiness&lt;/em&gt; will be &lt;strong&gt;Keras -&amp;gt; TensorFlow -&amp;gt; MLIR -&amp;gt; PlaidML -&amp;gt; ...&lt;/strong&gt;, and everyone is preparing for the jump when MLIR ships as a TensorFlow backend... &lt;strong&gt;in 2021!&lt;/strong&gt; But as of today, users are just left hanging in midair.&lt;/p&gt;
&lt;h4&gt;
  
  
  What are your options?
&lt;/h4&gt;

&lt;p&gt;At the time of writing, the latest release is ngraph-bridge v0.18.1 (dated 20 Aug 2019!). It uses TensorFlow v1.14.0 - argh! Kapre requires TensorFlow v1.15 - dead end again.&lt;/p&gt;

&lt;p&gt;I should mention that &lt;strong&gt;you'd better not use prebuilt wheels&lt;/strong&gt;: I realized not all of them are compiled with PlaidML backend support. &lt;strong&gt;So your best bet is to build nGraph and nGraph-bridge from source&lt;/strong&gt;, and you'd better have all the stars aligned for that to happen flawlessly. A lot can go wrong: Python versions, Bazel versions, library incompatibilities, bugs to fix in the code, etc. - all the joys of Python.&lt;/p&gt;
&lt;h4&gt;
  
  
  Picking a release candidate to build
&lt;/h4&gt;

&lt;p&gt;v0.19.0-rc9 brings TensorFlow v1.15.0; nGraph 0.28.0-rc1 - the recommended last stable baseline - is on TensorFlow v1.14.0.&lt;/p&gt;

&lt;p&gt;I need TF 1.15, so let's try &lt;strong&gt;v0.19.0-rc10&lt;/strong&gt; then... Of course, the standard build crashes miserably, which leads me to think this RC was probably never compiled/tested with PlaidML support on Mac: clang fails because of an incomplete switch statement in plaidml_translate.cpp.&lt;/p&gt;

&lt;p&gt;We will fix it by adding this line to the switch(dt) in the tile_converter function:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;case PLAIDML_DATA_BFLOAT16: return "as_bfloat16(" + tensor_name + ", 16)";&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;See the complete build instructions below.&lt;/p&gt;

&lt;p&gt;If everything goes right you should end up with something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TensorFlow version:  1.15.0
C Compiler version used in building TensorFlow:  4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.5)
nGraph bridge version: b'0.19.0-rc10'
nGraph version used for this build: b'0.25.1-rc.10+90c70dd'
TensorFlow version used for this build: v1.15.0-rc3-22-g590d6eef7e
CXX11_ABI flag used for this build: 0
nGraph bridge built with Grappler: False
nGraph bridge built with Variables and Optimizers Enablement: False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Final thoughts - use at your own risk
&lt;/h4&gt;

&lt;p&gt;OK, we have a working environment, but there are so many interlocking (and freshly built) software bricks that we have no guarantee all of this will run properly in every circumstance. Using Kapre, for example, I can use the &lt;em&gt;mel_spectrogram&lt;/em&gt; layer just fine, but ngraph-bridge will crash with &lt;em&gt;Caught exception while executing nGraph computation: syntax error&lt;/em&gt; when trying to use the STFT layer... &lt;/p&gt;

&lt;p&gt;I will not quite abandon my Linux deep learning workhorse yet, but at least I have an environment to try out that will use my MacBook Pro's GPU on the go and my Catalina / AMD RX 5700 XT setup at home.&lt;/p&gt;
&lt;h4&gt;
  
  
  The complete build instructions
&lt;/h4&gt;

&lt;p&gt;Below is what worked for me - retested on a fresh Mac after days of messing things up.&lt;/p&gt;

&lt;p&gt;Make sure you have a proper Python 3 installation (I won't cover that here). I'm using 3.7, managed with &lt;code&gt;brew install python@3.7&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/tensorflow/ngraph-bridge.git
cd ngraph-bridge

git checkout v0.19.0-rc10

# Install bazel (bazelisk was a mess)
export BAZEL_VERSION=0.25.2 

curl -LO "https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-installer-darwin-x86_64.sh"

chmod +x "bazel-${BAZEL_VERSION}-installer-darwin-x86_64.sh"
./bazel-${BAZEL_VERSION}-installer-darwin-x86_64.sh --user

source ~/.bazel/bin/bazel-complete.bash

# Add $HOME/bin to your PATH in .zshrc (or .bashrc) and source it

echo "\nexport PATH=$PATH:$HOME/bin" &amp;gt;&amp;gt; ~/.zshrc
source ~/.zshrc

# check bazel 
bazel version

# I like to start with a fresh venv dedicated to the build

python3 -m venv build-venv
source build-venv/bin/activate

# The recommended virtualenv v16.0.0 didn't work; I ended up using the latest version

python3 -m pip install virtualenv

#Install tensorflow from wheel (find the right one here: https://pypi.org/project/tensorflow/1.15.0/#files)

python3 -m pip install https://files.pythonhosted.org/packages/dc/65/a94519cd8b4fd61a7b002cb752bfc0c0e5faa25d1f43ec4f0a4705020126/tensorflow-1.15.0-cp37-cp37m-macosx_10_11_x86_64.whl

#start the build

python3 build_ngtf.py --use_prebuilt_tensorflow --build_plaidml_backend

# When the build fails edit plaidml_translate.cpp from ngraph to add the missing case 

vi /build_cmake/ngraph/src/ngraph/runtime/plaidml/plaidml_translate.cpp 

#re-start the build

python3 build_ngtf.py --use_prebuilt_tensorflow --build_plaidml_backend

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Some hints, for the record:
&lt;/h4&gt;

&lt;p&gt;When installing Kapre you might run into&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;AttributeError: module 'enum' has no attribute 'IntFlag'&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;This is solved by removing enum34:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip uninstall enum34&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;When importing Librosa, you might run into:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ModuleNotFoundError: No module named 'numba.decorators'&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;This is solved by using an older version of numba:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install numba==0.48&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

</description>
      <category>keras</category>
      <category>plaidml</category>
      <category>ngraph</category>
      <category>amd</category>
    </item>
  </channel>
</rss>
