<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: PythicCoder</title>
    <description>The latest articles on DEV Community by PythicCoder (@aribornstein).</description>
    <link>https://dev.to/aribornstein</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F93304%2F260c24b9-517a-4d9f-8c0d-e3f9016d0747.jpeg</url>
      <title>DEV Community: PythicCoder</title>
      <link>https://dev.to/aribornstein</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aribornstein"/>
    <language>en</language>
    <item>
      <title>5 Steps to Training your first Video Classifier in a Flash</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Thu, 13 May 2021 11:51:23 +0000</pubDate>
      <link>https://dev.to/aribornstein/5-steps-to-training-your-first-video-classifier-in-a-flash-1am8</link>
      <guid>https://dev.to/aribornstein/5-steps-to-training-your-first-video-classifier-in-a-flash-1am8</guid>
      <description>&lt;h4&gt;
  
  
  Learn step by step how to build, train and make predictions on a video classification task with 3 simple tools: Lightning Flash, PyTorch Video, and Kornia.
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QW6723So--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AU7QYo989MKG86EIZ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QW6723So--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AU7QYo989MKG86EIZ" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Video Understanding enables computers to recognize actions, objects, and events in videos. From Retail, Health Care to Agriculture, Video Understanding enables automation of &lt;a href="https://www.cio.com/article/3431138/ai-gets-the-picture-streamlining-business-processes-with-image-and-video-classification.html"&gt;countless industry use cases&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SEeH6BtP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A10HEDLmt5V6GppWI.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SEeH6BtP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A10HEDLmt5V6GppWI.jpg" alt=""&gt;&lt;/a&gt;“A case in point: Walmart has developed an AI system that inspects fresh food for signs of defects and spoilage. This system is helping Walmart monitor the temperature and freshness of produce and perishable foods, improve visual inspection at distribution centers, and route perishable food to the nearest store.“ [&lt;a href="https://www.cio.com/article/3431138/ai-gets-the-picture-streamlining-business-processes-with-image-and-video-classification.html"&gt;Source&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ai.facebook.com"&gt;Facebook AI Research&lt;/a&gt;recently released a new library called &lt;a href="https://pytorchvideo.org/"&gt;&lt;strong&gt;PyTorchVideo&lt;/strong&gt;&lt;/a&gt; powered by &lt;a href="https://github.com/PyTorchLightning/pytorch-lightning"&gt;PyTorch Lightning&lt;/a&gt; that simplifies video understanding by providing SOTA pre-trained video models, datasets, and video-specific transformers. All the library’s models are highly optimized for inference and support different datasets and transforms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/PyTorchLightning/lightning-flash"&gt;&lt;strong&gt;Lightning Flash&lt;/strong&gt;&lt;/a&gt; is a collection of tasks for fast prototyping, baselining, and fine-tuning scalable Deep Learning models, built on &lt;a href="http://pytorchlightning.ai/"&gt;PyTorch Lightning&lt;/a&gt;. It allows you to train and finetune models without being overwhelmed by all the details, and then seamlessly override and experiment with Lightning for full flexibility. Learn more about Flash &lt;a href="https://lightning-flash.readthedocs.io/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/PyTorchLightning/lightning-flash"&gt;PyTorchLightning/lightning-flash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In its upcoming release, &lt;a href="https://github.com/PyTorchLightning/lightning-flash"&gt;&lt;strong&gt;PyTorch Lightning Flash&lt;/strong&gt;&lt;/a&gt; will provide a deep integration with &lt;a href="https://pytorchvideo.org/"&gt;&lt;strong&gt;PyTorchVideo&lt;/strong&gt;&lt;/a&gt;; its backbones and transforms can be used alongside &lt;a href="https://kornia.github.io/"&gt;&lt;strong&gt;Kornia&lt;/strong&gt;&lt;/a&gt; ones to provide a seamless preprocessing, training, and fine-tuning experience with SOTA pre-trained video classification models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kornia.github.io/"&gt;&lt;strong&gt;Kornia&lt;/strong&gt;&lt;/a&gt; is a differentiable computer vision library for &lt;a href="https://pytorch.org/"&gt;PyTorch&lt;/a&gt; that consists of a set of routines and differentiable modules to solve generic computer vision problems.&lt;/p&gt;

&lt;p&gt;In this article, you will learn how to train a custom video classification model in 5 simple steps on the Kinetics dataset, using PyTorchVideo, Lightning Flash, and Kornia.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wlbzs8Rw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/700/0%2AHDKt6W6q2QGgAMWe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wlbzs8Rw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/700/0%2AHDKt6W6q2QGgAMWe.jpg" alt=""&gt;&lt;/a&gt;&lt;a href="https://deepmind.com/research/open-source/kinetics"&gt;The Kinetics human action video dataset&lt;/a&gt; released by DeepMind is comprised of annotated~10s video clips sourced from YouTube.&lt;/p&gt;

&lt;p&gt;Keep reading to train your own model in a &lt;strong&gt;flash&lt;/strong&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  5 Simple Steps for Video Classification
&lt;/h3&gt;

&lt;p&gt;Before we get started, note that all the steps in this tutorial are reproducible and can be run with the free trial of Grid by clicking the Run button below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://platform.grid.ai/#/runs?script=https://github.com/aribornstein/KineticsDemo/blob/4fcf30e1c2fd46247ec0fc1a6cb0886e9838586f/train.py&amp;amp;cloud=grid&amp;amp;instance=g4dn.xlarge&amp;amp;accelerators=1&amp;amp;disk_size=200&amp;amp;framework=lightning&amp;amp;script_args=grid%20train%20--g_name%20kinetics-demo-5%20--g_disk_size%20200%20--g_max_nodes%2010%20--g_instance_type%20g4dn.xlarge%20--g_gpus%201%20train.py%20--gpus%201%20--max_epochs%2010"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5Pnp6HHl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/262/0%2AfWRfmrWier8GPzrb.png" alt=""&gt;&lt;/a&gt;All the steps in this tutorial are reproducible and can be run with the free trial of Grid by clicking the Run button above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xzf3Pscx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEV6YVi7658ZChGD2nowG1A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xzf3Pscx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEV6YVi7658ZChGD2nowG1A.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;a href="https://docs.grid.ai/"&gt;Grid Platform&lt;/a&gt; enables you to seamlessly train hundreds machine learning models on the cloud from your laptop without modifying your code in a fully reproducible manner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find the full end-to-end code for this tutorial&lt;/strong&gt; &lt;a href="https://github.com/aribornstein/KineticsDemo"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;and the full reproducible run&lt;/strong&gt; &lt;a href="https://platform.grid.ai/#/runs?script=https://github.com/aribornstein/KineticsDemo/blob/188f1948725506914b67d3814073a7bec152ac0a/train.py&amp;amp;cloud=grid&amp;amp;instance=g4dn.xlarge&amp;amp;accelerators=1&amp;amp;disk_size=200&amp;amp;framework=lightning&amp;amp;script_args=grid%20train%20--g_name%20kinetics-demo-7%20--g_disk_size%20200%20--g_max_nodes%2010%20--g_instance_type%20g4dn.xlarge%20--g_gpus%201%20train.py%20--gpus%201%20--max_epochs%203"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;on&lt;/strong&gt; &lt;a href="https://www.grid.ai/"&gt;&lt;strong&gt;Grid&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Prerequisite Imports
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/tchaton/f3ceafeb246b46f8c8f7d9e4498760b0"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mDVYxVeW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AjDKYE93hbSdqp3qIPNUzaw.png" alt=""&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/f3ceafeb246b46f8c8f7d9e4498760b0"&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/f3ceafeb246b46f8c8f7d9e4498760b0"&gt;https://gist.github.com/tchaton/f3ceafeb246b46f8c8f7d9e4498760b0&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1- Download The Kinetics Dataset
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/tchaton/a4744780de0f534a613fc738afbac8c2"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dS2prbOJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AfWwARV6mhVCN2jSOHm8tJw.png" alt=""&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/a4744780de0f534a613fc738afbac8c2"&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/a4744780de0f534a613fc738afbac8c2"&gt;https://gist.github.com/tchaton/a4744780de0f534a613fc738afbac8c2&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2- Specify Kornia Transforms to Apply to Video
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZPei4BoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/957/0%2AqofNAX_qSEYa5-gm.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZPei4BoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/957/0%2AqofNAX_qSEYa5-gm.gif" alt=""&gt;&lt;/a&gt;Flash helps you to place your transform exactly where you want and makes Video Pre and Post Processing Simple as easy as a few lines of initialization code&lt;/p&gt;

&lt;p&gt;As videos are memory-heavy objects, data sampling and data augmentation play a key role in the video model training procedure and need to be carefully crafted to reach SOTA results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/PyTorchLightning/lightning-flash"&gt;&lt;strong&gt;Flash&lt;/strong&gt;&lt;/a&gt; optimizes this process as transforms can be provided with simple dictionary mapping between hooks and the transforms. &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/data.html?highlight=hooks#terminology"&gt;Flash hooks&lt;/a&gt; are simple functions that can be overridden to extend Flash’s functionality.&lt;/p&gt;

&lt;p&gt;In the code below, we use the &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/data.html?highlight=post_tensor_transform#flash.data.process.Preprocess.post_tensor_transform"&gt;&lt;strong&gt;post_tensor_transform&lt;/strong&gt;&lt;/a&gt; hook to uniformly select 8 frames from a sampled video clip and resize them to 224 pixels. This is done in &lt;strong&gt;parallel&lt;/strong&gt; within each of the PyTorch DataLoader workers.&lt;/p&gt;
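&lt;p&gt;To make the sampling concrete, here is a minimal pure-Python sketch of uniformly picking 8 frame indices from a clip. This illustrates the idea only; it is not Flash’s or PyTorchVideo’s actual implementation.&lt;/p&gt;

```python
def uniform_frame_indices(num_frames: int, num_samples: int = 8) -> list:
    """Pick num_samples indices spread uniformly across a clip of num_frames."""
    if num_samples >= num_frames:
        return list(range(num_frames))
    step = num_frames / num_samples
    # Take the centre of each of num_samples equal-width bins.
    return [int(step * i + step / 2) for i in range(num_samples)]

print(uniform_frame_indices(64))  # [4, 12, 20, 28, 36, 44, 52, 60]
```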

&lt;p&gt;We also use the &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/data.html?highlight=per_batch_transform_on_device#flash.data.process.Preprocess.per_batch_transform_on_device"&gt;&lt;strong&gt;per_batch_transform_on_device&lt;/strong&gt;&lt;/a&gt; hook to enable Kornia normalization to be applied directly to a batch that has already been moved to the GPU or TPU.&lt;/p&gt;
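&lt;p&gt;The normalization itself is just (x - mean) / std applied per channel. Here is a stdlib sketch of the arithmetic that Kornia performs on the whole GPU batch at once; the mean and std values below are illustrative, not the Kinetics statistics.&lt;/p&gt;

```python
def normalize_pixels(pixels, mean, std):
    """Apply channel-wise (x - mean) / std to a list of [R, G, B] values."""
    return [
        [(channel - m) / s for channel, m, s in zip(pixel, mean, std)]
        for pixel in pixels
    ]

# One white pixel and one mid-grey pixel, normalized with illustrative stats.
out = normalize_pixels([[1.0, 1.0, 1.0], [0.5, 0.5, 0.5]],
                       mean=[0.5, 0.5, 0.5], std=[0.25, 0.25, 0.25])
print(out)  # [[2.0, 2.0, 2.0], [0.0, 0.0, 0.0]]
```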

&lt;p&gt;&lt;a href="https://gist.github.com/tchaton/b627d56e6d3da89eb456aede48c3c171"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SXSOsHe2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A7TOHKpMtv8gLIgYrp80Yiw.png" alt=""&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/b627d56e6d3da89eb456aede48c3c171"&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/b627d56e6d3da89eb456aede48c3c171"&gt;https://gist.github.com/tchaton/b627d56e6d3da89eb456aede48c3c171&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3- Load the Data
&lt;/h4&gt;

&lt;p&gt;Lightning &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/extensions/datamodules.html"&gt;DataModule&lt;/a&gt;s are shareable, reusable objects that encapsulate all data-related code. Flash enables you to quickly load videos and labels from various formats such as config files or folders into &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/extensions/datamodules.html"&gt;DataModule&lt;/a&gt;s.&lt;/p&gt;

&lt;p&gt;In this example, we use the Flash from_folders helper function to generate a &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/extensions/datamodules.html"&gt;DataModule&lt;/a&gt;. To use the from_folders function, videos should be organized into one folder per class as follows:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;dir_path/&amp;lt;class_name&amp;gt;/&amp;lt;video_name&amp;gt;.{mp4, avi}&lt;/p&gt;
&lt;/blockquote&gt;
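&lt;p&gt;As an illustration, the stdlib snippet below builds that layout in a temporary directory (the class and file names are made up) and shows how the class labels fall straight out of the folder names.&lt;/p&gt;

```python
import tempfile
from pathlib import Path

def class_names(dir_path):
    """Infer class labels from a one-folder-per-class video layout."""
    return sorted(p.name for p in Path(dir_path).iterdir() if p.is_dir())

# Build a throwaway tree shaped like dir_path/class_name/video_name.mp4
root = Path(tempfile.mkdtemp())
for label in ("archery", "bowling"):
    (root / label).mkdir()
    (root / label / "clip_001.mp4").touch()

print(class_names(root))  # ['archery', 'bowling']
```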

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iB8XOqPm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AhtT4iyi5MM229K4gjXE1yw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iB8XOqPm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AhtT4iyi5MM229K4gjXE1yw.png" alt=""&gt;&lt;/a&gt;Code for Initializing a D&lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/extensions/datamodules.html"&gt;ataModule&lt;/a&gt; on using Flash on the Kinetics Dataset&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 4- Instantiate the VideoClassifier with a pre-trained Model Backbone
&lt;/h4&gt;

&lt;p&gt;Training a SOTA video classifier from scratch on Kinetics can take around 2 days on 64 GPUs. Thankfully, the Flash &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/registry.html"&gt;Backbones Registry API&lt;/a&gt; makes it easy to integrate individual models as well as entire model hubs with Flash.&lt;/p&gt;

&lt;p&gt;By default, the &lt;a href="https://pytorchvideo.org/"&gt;PyTorchVideo&lt;/a&gt; &lt;a href="https://github.com/facebookresearch/pytorchvideo/tree/master/pytorchvideo/models/hub"&gt;Model Hub&lt;/a&gt; is pre-registered within the Video Classifier Task &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/registry.html"&gt;Backbone Registry&lt;/a&gt;.&lt;/p&gt;
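&lt;p&gt;The registry idea itself is simple: builder functions are registered under a string name and looked up later by that name. Below is a toy pure-Python sketch of the pattern; it is not Flash’s actual Registry class, and "slow_r50" is just an example backbone name from the PyTorchVideo hub.&lt;/p&gt;

```python
class Registry:
    """Toy sketch of the backbone-registry pattern: builder functions are
    registered under a string name and retrieved later by that name."""

    def __init__(self):
        self._builders = {}

    def register(self, name):
        def wrapper(fn):
            self._builders[name] = fn
            return fn
        return wrapper

    def available(self):
        return sorted(self._builders)

    def get(self, name):
        return self._builders[name]

BACKBONES = Registry()

@BACKBONES.register("slow_r50")  # example name from the PyTorchVideo hub
def build_slow_r50():
    return "slow_r50 backbone (placeholder)"

print(BACKBONES.available())  # ['slow_r50']
```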

&lt;p&gt;You can list any of the available backbones and get more details as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/tchaton/d2f2b52f77d7ed5a537d686e124c2b60"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HPecMOLH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AH031LIOrG3Ool7ecesPP3A.png" alt=""&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/d2f2b52f77d7ed5a537d686e124c2b60"&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/d2f2b52f77d7ed5a537d686e124c2b60"&gt;https://gist.github.com/tchaton/d2f2b52f77d7ed5a537d686e124c2b60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you pick a backbone, you can initialize it with just one line of code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/tchaton/0e83bff7e27e125cc675bcdd31d3bf23"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--04HQ72kY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2GtEGLWf7AwWIz1K-eZ3tA.png" alt=""&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/0e83bff7e27e125cc675bcdd31d3bf23"&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/0e83bff7e27e125cc675bcdd31d3bf23"&gt;https://gist.github.com/tchaton/0e83bff7e27e125cc675bcdd31d3bf23&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/data.html?highlight=serializer#postprocess-and-serializer"&gt;Serializers&lt;/a&gt; are optional components that can be passed to Flash Classifiers and enable you to configure how to format the output of the task predictions as Logits, Probabilities, or Labels. Read more &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/data.html?highlight=serializer#postprocess-and-serializer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Step 5- Fine-tune the Model
&lt;/h4&gt;

&lt;p&gt;Once we have a DataModule and VideoClassifier, we can configure the &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/training.html?highlight=trainer#trainer-api"&gt;&lt;strong&gt;Flash&lt;/strong&gt; Trainer&lt;/a&gt;. Out of the box, the &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/training.html?highlight=trainer#trainer-api"&gt;&lt;strong&gt;Flash&lt;/strong&gt; Trainer&lt;/a&gt; supports all the Lightning Trainer flags and &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags"&gt;tricks you know and love&lt;/a&gt; including &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html"&gt;distributed training&lt;/a&gt;, &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.model_checkpoint.html?highlight=checkpointing"&gt;checkpointing&lt;/a&gt;, &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html"&gt;logging&lt;/a&gt;, mixed &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html?highlight=precision#precision"&gt;precision&lt;/a&gt;, &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/advanced/training_tricks.html?highlight=SWA#stochastic-weight-averaging"&gt;SWA&lt;/a&gt;, &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/advanced/pruning_quantization.html"&gt;quantization&lt;/a&gt;, &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/common/early_stopping.html?highlight=early%20stopping"&gt;early stopping&lt;/a&gt;, and &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags"&gt;more&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In addition to enabling you to train models from scratch, the Flash Trainer provides quick access to a host of &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/finetuning.html"&gt;strategies for fine-tuning pre-trained models on new data&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/tchaton/685d58357241fe74c656a90798c1e0fe"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DMJrh_RB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ao4_yw9gGarEXXbS9CUxZIg.png" alt=""&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/685d58357241fe74c656a90798c1e0fe"&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/685d58357241fe74c656a90798c1e0fe"&gt;https://gist.github.com/tchaton/685d58357241fe74c656a90798c1e0fe&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, we use the &lt;a href="https://lightning-flash.readthedocs.io/en/latest/general/finetuning.html#no-freeze"&gt;NoFreeze Strategy&lt;/a&gt;, in which the backbone is entirely trainable from the start. You can even define your own custom &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/extensions/generated/pytorch_lightning.callbacks.BaseFinetuning.html?highlight=finetuning%20strategies"&gt;FineTuning Strategies&lt;/a&gt;.&lt;/p&gt;
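&lt;p&gt;Conceptually, a fine-tuning strategy just decides which parameters are trainable at which point in training. Here is a hedged pure-Python sketch of the two extremes; in Flash and Lightning this is done by toggling requires_grad on real torch parameters.&lt;/p&gt;

```python
class Param:
    """Stand-in for a model parameter with a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

def no_freeze(backbone, head):
    """NoFreeze strategy: every parameter is trainable from epoch 0."""
    for p in backbone + head:
        p.trainable = True

def freeze_backbone(backbone, head):
    """Contrast: freeze the backbone and train only the new head."""
    for p in backbone:
        p.trainable = False
    for p in head:
        p.trainable = True

backbone, head = [Param("conv1"), Param("conv2")], [Param("fc")]
freeze_backbone(backbone, head)
print([p.trainable for p in backbone + head])  # [False, False, True]
```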

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;With these 5 simple steps you can now train your own Video Classification model on any Video Dataset. Once a model is trained, it can easily be saved and reused later for making predictions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/tchaton/6fc6ba4454a6d7aca0b790f8cbf38fe6"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wf-XvKw1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AKJMtomTWlyWCyzhstW0QQw.png" alt=""&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/6fc6ba4454a6d7aca0b790f8cbf38fe6"&gt;&lt;/a&gt;&lt;a href="https://gist.github.com/tchaton/6fc6ba4454a6d7aca0b790f8cbf38fe6"&gt;https://gist.github.com/tchaton/6fc6ba4454a6d7aca0b790f8cbf38fe6&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One major advantage of using PyTorchVideo backbones is that they are optimized for mobile. With &lt;a href="https://github.com/PyTorchLightning/pytorch-lightning"&gt;Lightning&lt;/a&gt;, models can be saved and exported as classic PyTorch checkpoints or optimized for inference on the &lt;a href="https://pytorchvideo.org/docs/tutorial_accelerator_use_model_transmuter"&gt;edge&lt;/a&gt; with &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/common/production_inference.html"&gt;ONNX and/or TorchScript&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find the end-to-end tutorial code&lt;/strong&gt; &lt;a href="https://github.com/PyTorchLightning/lightning-flash/blob/master/flash_examples/finetuning/video_classification.py"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We thank &lt;a href="https://www.linkedin.com/in/tullie/"&gt;Tullie Murrell&lt;/a&gt; from the PyTorchVideo team for the support and feedback leading to this integration in Flash.&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;p&gt;Now that you have trained your first model, it’s time to experiment. Flash Tasks provide full access to all the Lightning &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.core.hooks.html"&gt;hooks&lt;/a&gt;, which can be overridden as you iterate towards the state of the art.&lt;/p&gt;

&lt;p&gt;Check out the following links to learn more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/PyTorchLightning/pytorch-lightning"&gt;PyTorch Lightning Repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kornia/kornia"&gt;Kornia Repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/facebookresearch/pytorchvideo/"&gt;PyTorch Video Repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.grid.ai/"&gt;Learn more about Grid&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  About the Authors
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As Head of Developer Advocacy at Grid.ai, he collaborates with the Machine Learning Community, to solve real-world problems with game-changing technologies that are then documented, open-sourced, and shared with the rest of the world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/thomas-chaton-1529ba128/"&gt;&lt;strong&gt;Thomas Chaton&lt;/strong&gt;&lt;/a&gt; is PyTorch Lightning Research Engineering Manager. Previously, Senior Research Engineer at Fujitsu AI and Comcast-Sky Labs, Thomas is also the creator of &lt;a href="https://github.com/nicolas-chaulet/torch-points3d"&gt;TorchPoints3D&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>deeplearning</category>
      <category>pytorchlightning</category>
      <category>pytorch</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How to Baseline Deep Learning Tasks in a Flash</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Thu, 25 Feb 2021 13:02:02 +0000</pubDate>
      <link>https://dev.to/aribornstein/how-to-baseline-deep-learning-tasks-in-a-flash-4ceb</link>
      <guid>https://dev.to/aribornstein/how-to-baseline-deep-learning-tasks-in-a-flash-4ceb</guid>
      <description>&lt;h4&gt;
  
  
  This tutorial introduces how to get started building baselines for Deep Learning using &lt;a href="https://github.com/PyTorchLightning/lightning-flash"&gt;PyTorch Lightning Flash&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7z_2L0Se--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AQsFKMjmZNxHDIbJC" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7z_2L0Se--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AQsFKMjmZNxHDIbJC" alt=""&gt;&lt;/a&gt;Photo by Dmitry Zvolskiy from &lt;a href="https://www.pexels.com/photo/purple-lightning-at-night-1576369/"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is PyTorch Lightning Flash?
&lt;/h3&gt;

&lt;p&gt;PyTorch Lightning Flash is a new library from the creators of PyTorch Lightning to enable quick baselining of state-of-the-art Deep Learning tasks on new datasets in a matter of minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/PyTorchLightning/lightning-flash"&gt;PyTorchLightning/lightning-flash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Consistent with PyTorch Lightning’s goal of getting rid of boilerplate, Flash aims to make it easy to train, run inference with, and fine-tune deep learning models.&lt;/p&gt;

&lt;p&gt;Flash is built on top of PyTorch Lightning to abstract away the unnecessary boilerplate for common Deep Learning Tasks, making it ideal for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data science&lt;/li&gt;
&lt;li&gt;Kaggle Competitions&lt;/li&gt;
&lt;li&gt;Industrial AI&lt;/li&gt;
&lt;li&gt;Applied research&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As such, Flash provides seamless support for distributed training and inference of Deep Learning models.&lt;/p&gt;

&lt;p&gt;Since Flash is built on top of PyTorch Lightning, as you learn more, you can override your Task code seamlessly with both Lightning and PyTorch to find the &lt;a href="https://towardsdatascience.com/setting-a-strong-deep-learning-baseline-in-minutes-with-pytorch-c0dfe41f7d7"&gt;right level of abstraction for your scenario&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---p1o_SxV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/838/0%2ALuRUEzOLqmDGdwAp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---p1o_SxV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/838/0%2ALuRUEzOLqmDGdwAp.png" alt=""&gt;&lt;/a&gt;Motorcycle Photo by Nikolai Ulltang from &lt;a href="https://www.pexels.com/photo/person-riding-red-sports-bike-529782/"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the remainder of this post, I will walk you through the 5 steps of building Deep Learning applications with an inline Flash Task code example.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating your first Deep Learning Baseline with Flash
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UiY5dFft--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AILR3O2aPPyHTJu5j" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UiY5dFft--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AILR3O2aPPyHTJu5j" alt=""&gt;&lt;/a&gt;Photo by Lalesh Aldarwish from &lt;a href="https://www.pexels.com/photo/timelapse-photography-of-road-with-white-and-red-lights-169976/"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All code for the following tutorial can be found in the Flash Repo &lt;a href="https://github.com/PyTorchLightning/lightning-flash/tree/master/flash_notebooks"&gt;under Notebooks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I will present five repeatable steps that you will be able to apply to any Flash Task on your own data.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Choose a Deep Learning Task&lt;/li&gt;
&lt;li&gt;Load Data&lt;/li&gt;
&lt;li&gt;Pick a State of the Art Model&lt;/li&gt;
&lt;li&gt;Fine-tune the Task&lt;/li&gt;
&lt;li&gt;Predict&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now let’s Get Started!!!&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Choose A Deep Learning Task
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d5ea_Pnt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Ade2mNzBjRrvBW6Zz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d5ea_Pnt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Ade2mNzBjRrvBW6Zz.png" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://www.pexels.com/@pixabay?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Pixabay&lt;/a&gt; from &lt;a href="https://www.pexels.com/photo/yellow-cube-on-brown-pavement-208147/"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first step of the applied Deep Learning Process is to choose the task we want to solve. Out of the box, Flash provides support for common deep learning tasks such as &lt;a href="https://github.com/PyTorchLightning/lightning-flash/blob/master/flash_notebooks/finetuning/image_classification.ipynb"&gt;Image&lt;/a&gt;, &lt;a href="https://github.com/PyTorchLightning/lightning-flash/blob/master/flash_notebooks/finetuning/text_classification.ipynb"&gt;Text&lt;/a&gt;, Tabular Classification, and more complex scenarios such as Image Embedding, Object Detection, Document Summarization and Text Translation. New tasks are being added all the time.&lt;/p&gt;

&lt;p&gt;In this tutorial, we’ll build a Text Classification model using Flash for sentiment analysis on movie reviews. The model will be able to tell us that a review such as “&lt;em&gt;This is the worst movie in the history of cinema.”&lt;/em&gt; is negative and that a review such as “&lt;em&gt;This director has done a great job with this movie!&lt;/em&gt;” is positive.&lt;/p&gt;

&lt;p&gt;To get started, let’s first install Flash and import the required Python libraries for the Text Classification task.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h3&gt;
  
  
  Step 2: Load Data
&lt;/h3&gt;

&lt;p&gt;Now that we have installed flash and loaded our dependencies let’s talk about data. To build our first model, we will be using the IMDB Movie Review dataset stored in a CSV file format. Check out some sample reviews from the dataset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;review, sentiment"I saw this film for the very first time several years ago - and was hooked up in an instant. It is great and much better than J. F. K. cause you always have to think 'Can it happen to me? Can I become a murderer?' You cannot turn of the TV or your VCR without thinking about the plot and the end, which you should'nt miss under any circumstances.", positive"Winchester 73 gets credit from many critics for bringing back the western after WWII. Director Anthony Mann must get a lot of credit for his excellent direction. Jimmy Stewart does an excellent job, but I think Stephen McNalley and John McIntire steal the movie with their portrayal of two bad guys involved in a high stakes poker game with the treasured Winchester 73 going to the winner. This is a good script with several stories going on at the same time. Look for the first appearance of Rock Hudson as Young Bull. Thank God, with in a few years, we would begin to let Indians play themselves in western films. The film is in black and white and was shot in Tucson Arizona. I would not put Winchester 73 in the category of Stagecoach, High Noon or Shane, but it gets an above average recommendation from me.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;.", positive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The first thing we need to do is download the dataset using the following code.&lt;/p&gt;





&lt;p&gt;Once we have downloaded the IMDB dataset, Flash provides a convenient TextClassificationData module that handles the complexity of loading Text Classification data stored in CSV format and converting it into the representation that Deep Learning models need for training.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 3: Pick a State of the Art Model for Our Task
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UX-HOwP8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/950/0%2AndmfU8riul_f4BqU.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UX-HOwP8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/950/0%2AndmfU8riul_f4BqU.jpg" alt=""&gt;&lt;/a&gt;In the past few years in Natural Language Processing, many of the State of the Art Models are named after Sesame Street Characters. Photo used under CC BY-NC-ND 4.0 license from &lt;a href="https://pixy.org/4508354/"&gt;pixy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we have loaded our dataset, we need to pick a model to train. Each Flash Task comes preloaded with support for State of the Art model backbones for you to experiment with instantly.&lt;/p&gt;

&lt;p&gt;By default, the TextClassifier task uses the &lt;a href="https://huggingface.co/prajjwal1/bert-tiny"&gt;tiny-bert&lt;/a&gt; model, enabling strong performance on most text classification tasks. Still, you can use any model from the Hugging Face &lt;a href="https://huggingface.co/models?filter=text-classification,pytorch"&gt;transformers — Text Classification&lt;/a&gt; model repository, or even bring your own. In Flash, loading a backbone takes just one line of code.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 4: Fine-tune the Task
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6b_fXpc6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/950/0%2AOqIoBwggKwKjOymK.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6b_fXpc6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/950/0%2AOqIoBwggKwKjOymK.jpg" alt=""&gt;&lt;/a&gt;Photo by Shae Calkins from &lt;a href="https://pixy.org/4795948/"&gt;Pixy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have chosen the model and loaded our data, it’s time to train the model on our classification task using the following two lines of code:&lt;/p&gt;




&lt;p&gt;Because the Flash Trainer is built on top of PyTorch Lightning, it is seamless to distribute training to multiple GPUs, Cluster Nodes, and even TPUs.&lt;/p&gt;




&lt;p&gt;Additionally, you get tons of otherwise difficult-to-implement features, such as automated model checkpointing and built-in logging integrations with platforms such as TensorBoard, Neptune.ai, and MLflow, without any hassle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o_BjdDxJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/390/0%2A1Tse2pFzWXdO9ILU.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o_BjdDxJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/390/0%2A1Tse2pFzWXdO9ILU.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ng886pwh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Azmre7gqX2sqkFTdN.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ng886pwh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Azmre7gqX2sqkFTdN.png" alt=""&gt;&lt;/a&gt;Photo generated by the author.&lt;/p&gt;

&lt;p&gt;Tasks come with native support for all standard task metrics. In our case, the Flash Trainer will automatically benchmark your model on all the standard classification metrics, such as Precision, Recall, F1, and &lt;a href="https://pytorch-lightning.readthedocs.io/en/stable/extensions/metrics.html?highlight=metrics"&gt;more&lt;/a&gt;, with just one line of code.&lt;/p&gt;




&lt;p&gt;Task models can be checkpointed and shared seamlessly using the Flash Trainer as follows.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 5: Predict
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yHD2-YpY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AOy5Q4Oi-3ZM8AfiY" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yHD2-YpY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AOy5Q4Oi-3ZM8AfiY" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://www.pexels.com/@blitzboy?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Sindre Strøm&lt;/a&gt; from &lt;a href="https://www.pexels.com/photo/crystal-ball-on-person-s-hand-879718/?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we’ve finished training our model, we can use it to make predictions on our data with just one line of code:&lt;/p&gt;




&lt;p&gt;Additionally, we can use the Flash Trainer to scale and distribute model inference for production.&lt;/p&gt;

&lt;p&gt;Scaling inference to 32 GPUs is as simple as one line of code.&lt;/p&gt;




&lt;p&gt;You can even export models to ONNX or TorchScript for edge device inference.&lt;/p&gt;




&lt;h3&gt;
  
  
  Putting it All Together
&lt;/h3&gt;

&lt;p&gt;How fun was that! The 5 steps above are condensed into the simple code snippet below and can be applied to any Flash deep learning task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/media/b5f1d203391db937738ea9e4d8e0ef6b/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/b5f1d203391db937738ea9e4d8e0ef6b/href"&gt;https://medium.com/media/b5f1d203391db937738ea9e4d8e0ef6b/href&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;p&gt;Now that you have the tools to get started building quick Deep Learning baselines, I can’t wait for you to show us what you can build. If you liked this tutorial, feel free to clap below and give us a star on GitHub.&lt;/p&gt;

&lt;p&gt;We are working tirelessly on adding more Flash tasks, so if you have any must-have tasks, comment below or reach out to us on Twitter &lt;a href="https://twitter.com/PyTorchLightnin"&gt;@pytorchlightnin&lt;/a&gt; or in our &lt;a href="https://pytorch-lightning.slack.com/"&gt;Slack channel&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As Head of Developer Advocacy at Grid.ai, he collaborates with the Machine Learning Community to solve real-world problems with game-changing technologies that are then documented, open-sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>deeplearning</category>
      <category>ai</category>
      <category>pytorch</category>
      <category>pytorchlightning</category>
    </item>
    <item>
      <title>Lessons Upon Leaving Microsoft After 7 Transformative Years</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Thu, 14 Jan 2021 09:19:12 +0000</pubDate>
      <link>https://dev.to/aribornstein/lessons-upon-leaving-microsoft-after-7-transformative-years-13n2</link>
      <guid>https://dev.to/aribornstein/lessons-upon-leaving-microsoft-after-7-transformative-years-13n2</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rwEVejdm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ApSlEl-UdOKBQ-HC38LNE7Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rwEVejdm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ApSlEl-UdOKBQ-HC38LNE7Q.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today is my last day at Microsoft. After almost 7 incredible years at Microsoft it’s time for me to move onto the next step in my professional career (more about that next week). It’s a bittersweet week but before the day is over I want to take this time to reflect and share my experiences.&lt;/p&gt;

&lt;p&gt;I joined Microsoft in 2014 after finishing my first degree in Computer Science and History at a small liberal arts college in Baltimore, Maryland.&lt;/p&gt;

&lt;p&gt;At the time, Microsoft was at a crossroads, trailing a distant third in a mobile-first world and recovering from a &lt;a href="https://www.cnet.com/reviews/microsoft-windows-8-1-review/#:~:text=The%20good%20Windows%208.1%20adds,for%20current%20Windows%208%20users"&gt;lukewarm reception of Windows 8&lt;/a&gt;. No one seemed to know which way the winds would blow. Yet, coming from a small liberal arts school, I was excited for the opportunity to learn and prove myself.&lt;/p&gt;

&lt;p&gt;The past 7 years have been transformative for both me and Microsoft.&lt;/p&gt;

&lt;p&gt;As the company reinvented itself, I had the opportunity to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learn cutting-edge first- and third-party technologies. From Xbox, Office 365, Windows Phone, and HoloLens to the intricacies of Azure across countless services (VR, Modern Web, Containers, Machine Learning, Blockchain, IoT, Databases, Automation, and more).&lt;/li&gt;
&lt;li&gt;Engage with nonprofits, mentoring students and young professionals from underrepresented high schools to the Ivy Leagues.&lt;/li&gt;
&lt;li&gt;Host and be hosted by top technical communities in Israel and abroad.&lt;/li&gt;
&lt;li&gt;Build keynote demos and speak at first- and third-party conferences across the world.&lt;/li&gt;
&lt;li&gt;Lead and contribute to business-critical engagements with many of the top brands in the world.&lt;/li&gt;
&lt;li&gt;Collaborate with cutting-edge startups, from the smallest seed companies to larger Series C and D organizations.&lt;/li&gt;
&lt;li&gt;Complete a Master’s Degree at Israel’s top Natural Language Processing Lab.&lt;/li&gt;
&lt;li&gt;Publish AI Research at top Academic Conferences.&lt;/li&gt;
&lt;li&gt;Beta test and shape core Microsoft products, from Windows 10 to GitHub and Azure ML.&lt;/li&gt;
&lt;li&gt;Develop open source solutions and write content consumed by hundreds of thousands, if not millions, of developers around the world.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am honored by those I have had the opportunity to work with including some of the most talented engineers and leaders in the world.&lt;/p&gt;

&lt;p&gt;As I transition to the next chapter of my life, I want to share my learnings from the past few years as a way of both formalizing them for my future self and paying it forward to any former colleagues. Please keep in mind that these opinions reflect my own experiences and do not in any way represent Microsoft.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lessons Learned after 7 years at Microsoft
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Start your Career with an Uphill Battle.
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tx2sVGaH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/510/0%2AQXK26tA7U-O9VqGX.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tx2sVGaH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/510/0%2AQXK26tA7U-O9VqGX.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the best things that happened to me at Microsoft was starting on the Technical Evangelism team for Windows Phone in 2014. At this point Windows Phone was a distant third in the phone OS wars and morale was not at its highest.&lt;/p&gt;

&lt;p&gt;This provided me with amazing opportunities to learn from my peers and see how some of the smartest engineers I’ve ever met coped with work not going their way. The need to innovate and try new things in an attempt to gain a foothold gave me the chance to work on important engagements that otherwise would not have been entrusted to a college hire. It was here that I first got to experiment with then-cutting-edge technologies like progressive applications. Peers in more comfortable positions didn’t have such experiences, and many became complacent, assuming that the winning position is the default position.&lt;/p&gt;

&lt;p&gt;While Windows Phone development was not my passion, being in an uphill battle enabled me to experiment with technologies that I was interested in. I was actively encouraged to work on passion projects, which changed the course of my career, culminating in my achieving one of my biggest goals: moving to Israel.&lt;/p&gt;

&lt;h4&gt;
  
  
  Always have a Northern Star and Make your Career Goals Clear
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VQGEmr-2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AZ5IVpu0NB50UTTh-.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VQGEmr-2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AZ5IVpu0NB50UTTh-.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I started as a college hire, I came in at a relatively low starting level. There are a couple of techniques I learned that helped me to accelerate my career. One of the most useful was the concept of a northern star. A northern star is a document that you should refresh once every few months that outlines your goals, aspirations and how you plan to achieve them.&lt;/p&gt;

&lt;p&gt;At Microsoft, I saw many hard-working and incredibly talented peers struggle to level up. Many assumed that their hard work and accomplishments would translate directly into a promotion. While this does happen over time, it is often a slow process, as there are many factors that determine career progression.&lt;/p&gt;

&lt;p&gt;To rapidly grow a career, goals shouldn’t be defined by your manager, organization, or even company; rather, they should be aligned to them. You should make your goals and career expectations clear to your manager so that they can advocate on your behalf. At the end of the day, you are your own best advocate.&lt;/p&gt;

&lt;p&gt;If you succeed in properly aligning personal goals with both your manager and your organization, not only does everyone win, but your impact will be much greater than it would have been if you had simply checked a bunch of boxes or overfit to other metrics.&lt;/p&gt;

&lt;p&gt;If you feel that there is no way to align your team or organizational goals to your northern star document, then it is time to make a change. The fact that you have been defining and measuring your own goals will help you stand out to teams and organizations that are aligned with your northern star. In such cases, lateral movements are often the key to career growth.&lt;/p&gt;

&lt;p&gt;That being said ….&lt;/p&gt;

&lt;h4&gt;
  
  
  Your Career is much more than a Level
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bpCRkf_7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2A2ymyes6pfa4IwSAc" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bpCRkf_7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2A2ymyes6pfa4IwSAc" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of my favorite quotes from Bob Berg’s The Go Giver is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Your true worth is determined by how much more you give in value than you take in payment.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One of the worst things at a company like Microsoft is to be in a position where you are over-promoted. In my 7 years at Microsoft, I saw that while it is possible to claw your way up a few levels by learning the latest tech buzzwords or playing politics, it’s the biggest possible career trap. Over time, the gap between what you are expected to know and what you can reasonably learn grows to the point where it is impossible to stay successful and respected long term.&lt;/p&gt;

&lt;p&gt;Those who are respected at Microsoft are respected not because they have a fancy level or title but rather because people know that they are competent, fun to work with, and care about those around them.&lt;/p&gt;

&lt;p&gt;Which leads me to my next point …&lt;/p&gt;

&lt;h4&gt;
  
  
  Acknowledge Imposter Syndrome but do not Surrender to It
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---8qfwRG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/953/0%2A_WzAr9FzUrN2hIGo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---8qfwRG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/953/0%2A_WzAr9FzUrN2hIGo.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the most surprising things I learned over the past years is that, at some level, almost everyone I respect has some form of imposter syndrome.&lt;/p&gt;

&lt;p&gt;Imposter syndrome is a double-edged sword. If you simply dismiss it as natural, it is easy to become complacent and get stuck in the career trap outlined in the last point. However, if you surrender to it, it’s easy to settle and never grow or develop.&lt;/p&gt;

&lt;p&gt;While I do not have a cure for imposter syndrome, in my experience the best counter is developing a strong mentorship network. Mentors can help you navigate its challenges, and with their support, acknowledging imposter syndrome becomes an opportunity to grow your skills and confidence over time. At Microsoft I was lucky to have some amazing mentors who helped me at every single career stage.&lt;/p&gt;

&lt;p&gt;One piece of advice I have regarding mentors is that anyone can be a mentor; it doesn’t need to be a formal arrangement. I define a mentor as “anyone whose experience can help unblock you, guide you, and boost your skills.” One of my favorite approaches to finding mentors is to scope a pet or assigned project around relevant technologies or skills I want to learn and then engage the people I want to learn from to help me out.&lt;/p&gt;

&lt;p&gt;Remember, however, to pay it forward. We were all juniors once, and as you grow in your career, make sure you take time to help out others when you can. The more authentic you are in your mentor/mentee relationships, the more value you will gain from them and provide in return.&lt;/p&gt;

&lt;p&gt;Which leads me to my next point …&lt;/p&gt;

&lt;h4&gt;
  
  
  The best leaders lead through others.
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RW__czS6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A6fB56Hsi4FQg8di7.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RW__czS6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A6fB56Hsi4FQg8di7.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Managers who optimize for their own success end up creating toxic work cultures, whereas those who learn to bring success to others grow, succeed, and are respected. I have been truly lucky to have some amazing managers, from whom I’ve learned that true leadership comes from unblocking those who work for you.&lt;/p&gt;

&lt;p&gt;My first manager took a big risk hiring from a small liberal arts college; he gave me some of the most difficult accounts in his portfolio and opportunities to demonstrate leadership as a tech lead on an important keynote. When I finally found the opportunity to move to Israel, he helped me fast-track a promotion so that I wouldn’t have to take as dramatic a pay cut when I moved. My second manager helped me transition to Israel, pushed me to truly become a great software engineer, and encouraged me to pursue a Master’s Degree. My third and fourth managers gave me the opportunity to help hire and build a global developer relations team, and my fifth and final manager empowered me to develop and take my AI and leadership skills to the next level.&lt;/p&gt;

&lt;p&gt;In addition to my mentors, I am indebted to all these amazing people who helped me grow and achieve what I did at Microsoft.&lt;/p&gt;

&lt;p&gt;I hope these lessons are valuable to others, and I want to thank all those I had the pleasure of working with for an amazing 7 years.&lt;/p&gt;

&lt;p&gt;Looking forward to the future and being able to share with you my next step soon!&lt;/p&gt;

&lt;h3&gt;
  
  
  Some Memory Highlights of the Past 7 Years
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3897mvYx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AoVl9F8S3VvUxvh2s" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3897mvYx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AoVl9F8S3VvUxvh2s" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4A8i4ZLN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2A6HwijMk-jxXDVHJD" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4A8i4ZLN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2A6HwijMk-jxXDVHJD" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nELPVXzB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AI6koIfBl_8_HU_2d" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nELPVXzB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AI6koIfBl_8_HU_2d" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CeKWuFHU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AQa1A-UwC1Ln7Qrhz" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CeKWuFHU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AQa1A-UwC1Ln7Qrhz" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i65f25t7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2Am64sGIbjk9ODT5Ef" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i65f25t7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2Am64sGIbjk9ODT5Ef" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W3q0QM1g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/732/0%2AYqLIu5z-w2rg3OLP" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W3q0QM1g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/732/0%2AYqLIu5z-w2rg3OLP" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SEpUdNi4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AP0tvH20cS0LD1gQS" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SEpUdNi4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AP0tvH20cS0LD1gQS" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aP2Tq66d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2A0S8OLFsucR3Tkcl2" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aP2Tq66d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2A0S8OLFsucR3Tkcl2" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O-WUupzl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2ABihSZV0ALHKd9G9r" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O-WUupzl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2ABihSZV0ALHKd9G9r" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rNwJQMTj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/777/0%2AOY3hptIosaUuqC9d" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rNwJQMTj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/777/0%2AOY3hptIosaUuqC9d" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KgK2De_Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AZ5rUbk7I-xtEuWc8" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KgK2De_Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AZ5rUbk7I-xtEuWc8" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9oIiCQRt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2A-dXYBe7oupu6iUt1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9oIiCQRt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2A-dXYBe7oupu6iUt1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DxREMo2L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AdBUfmKgKJZu3iqVY" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DxREMo2L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AdBUfmKgKJZu3iqVY" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6-99EcS4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2AMzZsp16c_CqUFUnq" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6-99EcS4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2AMzZsp16c_CqUFUnq" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PegCGJg4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2AHZ24AvnUn4ZLqs3g" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PegCGJg4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2AHZ24AvnUn4ZLqs3g" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0kQn4FCZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AyKUfo_R-7MlQTLkS" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0kQn4FCZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AyKUfo_R-7MlQTLkS" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nokKskh_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AuovDU0OJ27Ay-H-i" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nokKskh_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AuovDU0OJ27Ay-H-i" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bBv2YMSP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AekjuxCuE2qEXCk9p" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bBv2YMSP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AekjuxCuE2qEXCk9p" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m0SrAwst--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AKVIo1jwWtKMUEO-G" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m0SrAwst--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AKVIo1jwWtKMUEO-G" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YI870DY5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2ARwtYziSZPWhcwhEW" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YI870DY5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2ARwtYziSZPWhcwhEW" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D7cZ_DJG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AicbCklqsQjxZgLtV" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D7cZ_DJG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AicbCklqsQjxZgLtV" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6JzhT1Zr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AX76DGzS0QydpBHzS" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6JzhT1Zr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AX76DGzS0QydpBHzS" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7zL9pSU4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2Ae1CFoD18y_3ORCPq" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7zL9pSU4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2Ae1CFoD18y_3ORCPq" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VZ7J4UzG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2Auz_XFIfSqu7jh13h" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VZ7J4UzG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2Auz_XFIfSqu7jh13h" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r0Mu5hby--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AvV-N4-cf64QOUMF1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r0Mu5hby--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AvV-N4-cf64QOUMF1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lcIxarw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AVGkqCqshKlXCK7pY" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lcIxarw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AVGkqCqshKlXCK7pY" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aDb795v4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2At1iXsHHWiNG5sFjc" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aDb795v4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2At1iXsHHWiNG5sFjc" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1EIOYdm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AA1l72vvDri9WpMvH" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1EIOYdm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AA1l72vvDri9WpMvH" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jqs3t0sN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2AE039Mx_OkxAseBFZ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jqs3t0sN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/720/0%2AE039Mx_OkxAseBFZ" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PVLt_Bd8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AOpwnAvX0PUGidOG1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PVLt_Bd8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AOpwnAvX0PUGidOG1" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TaC7ezGp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2Ai3Vw_gVxXXDOtLKV" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TaC7ezGp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2Ai3Vw_gVxXXDOtLKV" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7v9O8ecF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AVwy9X0EKJxUfuYMi" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7v9O8ecF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AVwy9X0EKJxUfuYMi" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FrrTq2_I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AvkQiaHYfqbRKKuhx" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FrrTq2_I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AvkQiaHYfqbRKKuhx" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9XEEbW2k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2Af7BPj4_iDSufc4au" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9XEEbW2k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2Af7BPj4_iDSufc4au" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fu-khiM7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AhEpBe-7Ba9TxgpMw" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fu-khiM7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2AhEpBe-7Ba9TxgpMw" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MStq4BKO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2A0dvlV5y5tql-gepP" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MStq4BKO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/960/0%2A0dvlV5y5tql-gepP" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  About the Author
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborated with the Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>No Code Data Enhancement with Azure Synapse Analytics and Azure Auto ML</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Thu, 10 Dec 2020 13:36:56 +0000</pubDate>
      <link>https://dev.to/azure/no-code-data-enhancement-with-azure-synapse-analytics-and-azure-auto-ml-2jpc</link>
      <guid>https://dev.to/azure/no-code-data-enhancement-with-azure-synapse-analytics-and-azure-auto-ml-2jpc</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NAEtR9Bp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A8MpN2vjW3wFiR9g3RsvwZQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NAEtR9Bp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A8MpN2vjW3wFiR9g3RsvwZQ.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TL;DR: This post will walk through how to train and evaluate an Azure ML AutoML regression model on your data using Azure Synapse Analytics Spark and SQL pools.&lt;/p&gt;

&lt;p&gt;Before we get started, let’s make sure we are all on the same page with the core Azure concepts needed to take your data to the next level.&lt;/p&gt;

&lt;p&gt;If you are new to &lt;a href="https://azure.microsoft.com/en-us/overview/what-is-azure/?WT.mc_id=aiml-0000-abornst"&gt;Azure&lt;/a&gt;, you can get started with a free subscription using the link below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/free/?WT.mc_id=aiml-0000-abornst"&gt;Create your Azure free account today | Microsoft Azure&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Azure Synapse Analytics?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/services/synapse-analytics/?WT.mc_id=aiml-0000-abornst"&gt;Azure Synapse Analytics&lt;/a&gt; is an integrated service that accelerates extracting insightful across data warehouses and big data systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7KPoesA0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A9H_HZyN5vor_EUH8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7KPoesA0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A9H_HZyN5vor_EUH8.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Azure Synapse ties together traditional relational SQL enterprise data warehousing, unstructured data stores and serverless &lt;a href="https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-overview?WT.mc_id=aiml-0000-abornst"&gt;Apache Spark&lt;/a&gt;, to enable limitless pipelines for &lt;a href="https://en.wikipedia.org/wiki/Extract,_transform,_load"&gt;ETL&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/Extract,_load,_transform"&gt;ELT&lt;/a&gt; operations. Furthermore, &lt;a href="https://docs.microsoft.com/en-us/azure/synapse-analytics/get-started-create-workspace?WT.mc_id=aiml-0000-abornst"&gt;Synapse Studio&lt;/a&gt; provides a unified interface for data monitoring, coding, and security.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O8ZY3hmo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/910/0%2AMJjt-e_T8mqNkpep.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O8ZY3hmo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/910/0%2AMJjt-e_T8mqNkpep.jpg" alt=""&gt;&lt;/a&gt;Synapse has deep integration with other Azure services such as &lt;a href="https://docs.microsoft.com/en-us/power-bi/?WT.mc_id=aiml-0000-abornst"&gt;Power BI&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/introduction?WT.mc_id=aiml-0000-abornst"&gt;CosmosDB&lt;/a&gt;, and &lt;a href="https://azure.microsoft.com/en-us/services/machine-learning/?WT.mc_id=aiml-0000-abornst"&gt;AzureML&lt;/a&gt; which makes it perfect for wrangling insight out of your data.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Azure Machine Learning Auto ML?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eqnG9ZfD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/707/0%2AUXLolBZuO7_76m3T.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eqnG9ZfD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/707/0%2AUXLolBZuO7_76m3T.png" alt=""&gt;&lt;/a&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml?WT.mc_id=aiml-0000-abornst"&gt;Auto ML&lt;/a&gt;, is the process of automating the time consuming, iterative tasks of machine learning model development.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-ml?WT.mc_id=aiml-0000-abornst"&gt;Azure Machine Learning&lt;/a&gt; (Azure ML) is a cloud-based service for creating and managing machine learning solutions. It’s designed to help data scientists and machine learning engineers to leverage their existing data processing and model development skills &amp;amp; frameworks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fCHWaSDd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/810/0%2AAUObBNfjiEKQcRl_.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fCHWaSDd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/810/0%2AAUObBNfjiEKQcRl_.png" alt=""&gt;&lt;/a&gt;What is &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-ml?WT.mc_id=aiml-0000-abornst"&gt;Azure Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-ml?WT.mc_id=aiml-0000-abornst"&gt;Azure Machine Learning&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml?WT.mc_id=aiml-0000-abornst"&gt;Auto ML&lt;/a&gt; allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml?WT.mc_id=aiml-0000-abornst"&gt;Auto ML&lt;/a&gt; we can transform Synapse Analytics Data into actionable baseline models to enrich datasets at scale without writing a single line of machine learning code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-auto-train-models?WT.mc_id=aiml-0000-abornst"&gt;Regression&lt;/a&gt; is used to build models to forecast numeric values such as taxi fares based on learned input features.&lt;/p&gt;

&lt;p&gt;In the next section, I will walk you through an end-to-end example of how to enrich your Synapse data by training and evaluating a model with the &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-auto-train-models?WT.mc_id=aiml-0000-abornst"&gt;NYC Taxi Dataset&lt;/a&gt;. Once you perform these steps, you’ll be able to train and run your own Auto ML models on any tabular dataset of your choosing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gn6GKkqC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/728/0%2AIYBdJqaULYeuL__S.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gn6GKkqC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/728/0%2AIYBdJqaULYeuL__S.jpg" alt=""&gt;&lt;/a&gt;Let’s ingest, train a model and enhance some data to predict taxi fares with the&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Set Up Azure Machine Learning and Synapse Workspaces
&lt;/h4&gt;

&lt;p&gt;First, if you do not have them already, we need to create our Azure ML and Synapse workspaces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ma85U_xh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AJH1qxL6rTRHMVfUg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ma85U_xh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AJH1qxL6rTRHMVfUg.gif" alt=""&gt;&lt;/a&gt;Create Azure ML Workspace from the &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace?WT.mc_id=aiml-0000-abornst"&gt;Portal&lt;/a&gt; or use the &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace-cli?WT.mc_id=aiml-0000-abornst"&gt;Azure CLI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/IQ9tkAywBlA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Link Azure Machine Learning Workspace to Synapse Service
&lt;/h4&gt;

&lt;p&gt;Once we have deployed our two workspaces, we need to link them. Full steps for linking Azure ML and Synapse workspaces can be found &lt;a href="https://github.com/NelGson/azure-docs-pr-ml/blob/nellie-ga%C2%A0upstream/release-synapse-ga/articles/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning.md"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RBEtqQa---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AzWyQZTbTxjXkFskkSC5Ydg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RBEtqQa---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AzWyQZTbTxjXkFskkSC5Ydg.png" alt=""&gt;&lt;/a&gt;Follow &lt;a href="https://github.com/NelGson/azure-docs-pr-ml/blob/nellie-ga%C2%A0upstream/release-synapse-ga/articles/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning.md"&gt;these steps&lt;/a&gt; to create a service principle and link the azure ml and machine learning workspaces.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Create Serverless Apache Spark and Dedicated SQL Pools
&lt;/h4&gt;

&lt;p&gt;To actually ingest and process our data we need to use pools. Azure Synapse Analytics offers various analytics engines to help you ingest, transform, model, analyze, and serve your data.&lt;/p&gt;

&lt;p&gt;For this tutorial, since we are using a toy dataset, we can use the cheapest pools available. For your own data, you may want to configure your pools accordingly.&lt;/p&gt;

&lt;p&gt;The serverless Apache Spark pool offers open-source big data compute capabilities. This is where the majority of our data processing and Auto ML code will run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PMm723gT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/903/0%2AoWuuz3RoapPLPCeY.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PMm723gT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/903/0%2AoWuuz3RoapPLPCeY.png" alt=""&gt;&lt;/a&gt;Steps for creating a Dedicated Spark Pool can be found &lt;a href="https://docs.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-apache-spark-pool-studio?WT.mc_id=aiml-0000-abornst"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A dedicated SQL pool offers T-SQL based compute and storage capabilities. We will use this pool to store the data we want to enhance with our AutoML model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qj_tApg0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AjQr3SIi90ZMZtgNyHxZa9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qj_tApg0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AjQr3SIi90ZMZtgNyHxZa9w.png" alt=""&gt;&lt;/a&gt;Steps for creating a dedicated sql pool can be found &lt;a href="https://docs.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-sql-pool-studio?WT.mc_id=aiml-0000-abornst"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One key advantage of Azure Synapse Analytics is that, if you configure a timeout, you only pay for the compute when it’s in use.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 4: Upload and run the Spark Taxi Data Notebook to create Spark Database and SQL Test Database
&lt;/h4&gt;

&lt;p&gt;Once we have our serverless Spark and SQL pools up and running, we can ingest our data and set up our Spark and SQL tables for training and testing, respectively.&lt;/p&gt;

&lt;p&gt;Download this Spark &lt;a href="https://go.microsoft.com/fwlink/?linkid=2149229"&gt;Create-Spark-Table-NYCTaxi-Data.ipynb&lt;/a&gt; notebook and import it into your workspace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rITo-sxm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ASt8zzgN5ffvaLFVpdsA0AQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rITo-sxm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ASt8zzgN5ffvaLFVpdsA0AQ.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J4q4_FL---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Art_2VuOj2-GhrQ-aP7KxmA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J4q4_FL---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Art_2VuOj2-GhrQ-aP7KxmA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the notebook is uploaded, change the sql_pool_name value to match the name of your SQL pool, select the desired Spark pool, and click Run all.&lt;/p&gt;
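&lt;p&gt;For orientation, the heart of that notebook boils down to something like the following PySpark sketch (my approximation, not the notebook’s exact code): pull a slice of the NYC Taxi open dataset and register it as a Spark table named nyc_taxi. The date range here is illustrative.&lt;/p&gt;

```python
# Hedged sketch of the notebook's main step, meant to run on a Synapse Spark
# pool (which ships with the azureml-opendatasets package).
from datetime import datetime
from azureml.opendatasets import NycTlcYellow

raw = NycTlcYellow(start_date=datetime(2018, 5, 1),
                   end_date=datetime(2018, 5, 7)).to_spark_dataframe()
raw.write.mode("overwrite").saveAsTable("nyc_taxi")  # table the AutoML wizard will read
```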

&lt;h4&gt;
  
  
  Step 5: Launch the Auto ML Wizard and Train a Regression Model Using the NYC Taxi Spark Table
&lt;/h4&gt;

&lt;p&gt;Once the data is ingested, we can use our nyc_taxi Spark table and Spark pool to train an AutoML regression model for forecasting taxi fares.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ujJgzLZa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Adm5s27gLk17vr9VuxpdChA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ujJgzLZa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Adm5s27gLk17vr9VuxpdChA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the three steps in the wizard below to train your model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lHprgyuw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AaFmaEyWjWeQFJ8OvnK68TQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lHprgyuw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AaFmaEyWjWeQFJ8OvnK68TQ.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sdTXl2gc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Alul9c7QPqMJ0vdP8m-zXlA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sdTXl2gc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Alul9c7QPqMJ0vdP8m-zXlA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vkfC4C_5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AKXsYgN1QOTUzSzeCYq9qtQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vkfC4C_5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AKXsYgN1QOTUzSzeCYq9qtQ.png" alt=""&gt;&lt;/a&gt;Note: Be sure to set the target column to fareAmount and use the onnx model model compatibility option. For a more in depth explanation of the training steps check out the documentation &lt;a href="https://github.com/NelGson/azure-docs-pr-ml/blob/nellie-ga%C2%A0upstream/release-synapse-ga/articles/synapse-analytics/machine-learning/tutorial-automl.md"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will kick off your Auto ML regression training job. It should take about two hours to run; when it is complete, we can evaluate it on our SQL test table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gk08cG6Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A4wrC9dmsiEJ_aO5pt1QOSA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gk08cG6Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A4wrC9dmsiEJ_aO5pt1QOSA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UoY4_A3p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AycnBXAwasWNTEMHGv_vHIw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UoY4_A3p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AycnBXAwasWNTEMHGv_vHIw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 6: Enhance SQL Table with Trained Auto ML Model
&lt;/h4&gt;

&lt;p&gt;Once we have the best model we can now evaluate it on our test SQL table using our SQL Pool.&lt;/p&gt;

&lt;p&gt;First we need to select the table we want to enhance with the model we just trained.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dh_GmbWf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/862/1%2AwCP3RTZowmPdTsh2cqzq9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dh_GmbWf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/862/1%2AwCP3RTZowmPdTsh2cqzq9w.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we select our new Auto ML model, map our input table columns to what the model is expecting, and choose or create a table for storing our model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pOXt9tyY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/588/1%2A3sLKu8lJAuXRgfuvkTyAlg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pOXt9tyY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/588/1%2A3sLKu8lJAuXRgfuvkTyAlg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ncrXMXQd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/765/1%2AwyjK6hBcvIJD6_kGw3CZ6A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ncrXMXQd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/765/1%2AwyjK6hBcvIJD6_kGw3CZ6A.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_zi9HZNw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/767/1%2AZYdxr-Pa-0DMCaBaVy1w9Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_zi9HZNw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/767/1%2AZYdxr-Pa-0DMCaBaVy1w9Q.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The wizard will generate a T-SQL script that evaluates our model against the test data and outputs the fare predictions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AlZX_oCZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ASdWu8AiUir7qBPAbhBUzUw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AlZX_oCZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ASdWu8AiUir7qBPAbhBUzUw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There you have it: all you need to know to train and test your own AutoML models and make them actionable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b_1Aqjsm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A55u5Xp5aHVVm87nX.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b_1Aqjsm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A55u5Xp5aHVVm87nX.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additional Synapse Documentation and Walkthroughs worth checking out can be found below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/synapse-analytics/machine-learning/what-is-machine-learning?WT.mc_id=aiml-0000-abornst"&gt;Machine Learning in Azure Synapse Analytics - Azure Synapse Analytics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-automl?WT.mc_id=aiml-0000-abornst"&gt;Tutorial: Machine learning model training using AutoML - Azure Synapse Analytics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-sql-pool-model-scoring-wizard?WT.mc_id=aiml-0000-abornst"&gt;Tutorial: Machine learning model scoring wizard for dedicated SQL pools - Azure Synapse Analytics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment?WT.mc_id=aiml-0000-abornst"&gt;Tutorial: Sentiment analysis with Cognitive Services - Azure Synapse Analytics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://techcommunity.microsoft.com/t5/azure-synapse-analytics/unleash-the-power-of-predictive-analytics-in-azure-synapse-with/ba-p/1961252?WT.mc_id=aiml-0000-abornst"&gt;Unleash the power of predictive analytics in Azure Synapse with machine learning and AI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you’ve finished the steps above, it is time to try them out on your own Synapse data. Feel free to post in the comments if you have any questions and to share the cool models you make!&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/xf3Lej-MWCk"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;I look forward to seeing what AutoML and Azure Synapse can do for you!&lt;/p&gt;

&lt;h3&gt;
  
  
  Acknowledgments
&lt;/h3&gt;

&lt;p&gt;Thanks to Nellie Gustafsson, Yifan Song and Chang Xu from the Azure Synapse product team for their great documentation and support during the writing of this post.&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with the Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>azuresynapseanalytic</category>
      <category>azure</category>
      <category>automl</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Multi Node Distributed Training with PyTorch Lightning &amp; Azure ML</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Mon, 26 Oct 2020 12:33:42 +0000</pubDate>
      <link>https://dev.to/azure/multi-node-distributed-training-with-pytorch-lightning-azure-ml-ilo</link>
      <guid>https://dev.to/azure/multi-node-distributed-training-with-pytorch-lightning-azure-ml-ilo</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uIBSc5KC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/857/1%2AZhesF2ZhMh7XpWRZAVeuBw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uIBSc5KC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/857/1%2AZhesF2ZhMh7XpWRZAVeuBw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TL;DR: This post outlines how to distribute PyTorch Lightning training across multi-node clusters with Azure ML.&lt;/p&gt;

&lt;p&gt;If you are new to &lt;a href="https://azure.microsoft.com/en-us/overview/what-is-azure/?WT.mc_id=aiml-0000-abornst"&gt;Azure&lt;/a&gt;, you can get started with a free subscription using the link below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/free/?WT.mc_id=aiml-0000-abornst"&gt;Create your Azure free account today | Microsoft Azure&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure ML and PyTorch Lightning
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BcG0VjCf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/683/0%2A7vJkT86Jf0PIj-OE.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BcG0VjCf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/683/0%2A7vJkT86Jf0PIj-OE.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my last few posts on the subject, I outlined the benefits of both &lt;a href="https://github.com/PyTorchLightning/pytorch-lightning"&gt;PyTorch Lightning&lt;/a&gt; and &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/?WT.mc_id=aiml-0000-abornst"&gt;Azure ML&lt;/a&gt; to simplify training and logging deep learning models. If you haven’t read them yet, check them out!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/aribornstein/training-your-first-distributed-pytorch-lightning-model-with-azure-ml-4kga-temp-slug-5861491"&gt;Training Your First Distributed PyTorch Lightning Model with Azure ML&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aribornstein/configuring-native-azure-ml-logging-with-pytorch-lighting-d5e-temp-slug-6899682"&gt;Configuring Native Azure ML Logging with PyTorch Lighting&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you are familiar with the benefits of both Azure ML and PyTorch Lightning, let’s talk about how to take PyTorch Lightning to the next level with multi-node distributed model training.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi Node Distributed Training
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O5aV05eK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AksFXuQWDVwR5wnN28RgJ0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O5aV05eK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AksFXuQWDVwR5wnN28RgJ0g.png" alt=""&gt;&lt;/a&gt;Sample traditional distributed training consideration from Azure Docs.&lt;/p&gt;

&lt;p&gt;Multi Node Distributed Training is typically the most advanced use case of the Azure Machine Learning service. If you want a sense of why it is traditionally so difficult, take a look at the Azure Docs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-distributed-training?WT.mc_id=aiml-0000-abornst"&gt;What is distributed training? - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PyTorch Lightning makes distributed training significantly easier by managing all the distributed data batching, hooks, gradient updates and process ranks for us. Watch the video by &lt;a href="https://medium.com/u/8536ebfbc90b"&gt;William Falcon&lt;/a&gt; below to see how this works.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/a6_pY9WwqdQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;We only need to make one minor modification to our training script for Azure ML to let PyTorch Lightning do all the heavy lifting. In the following section I will walk through the steps needed to run a distributed training job on a low-priority compute cluster, enabling faster training at an order-of-magnitude cost savings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-plan-manage-cost?WT.mc_id=aiml-0000-abornst#low-pri-vm"&gt;Plan and manage costs - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Step 1 — Set up Azure ML Workspace
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AEz3Y_WI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/60/0%2Ax56MKN6qao-BI3i5" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AEz3Y_WI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/60/0%2Ax56MKN6qao-BI3i5" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sMVLAZQU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Am-hwzME9dfJhjVu3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sMVLAZQU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Am-hwzME9dfJhjVu3.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create Azure ML Workspace from the &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace?WT.mc_id=aiml-0000-abornst"&gt;Portal&lt;/a&gt; or use the &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace-cli?WT.mc_id=aiml-0000-abornst"&gt;Azure CLI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect to the workspace with the Azure ML SDK as follows&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from azureml.core import Workspace
ws = Workspace.get(name="myworkspace", subscription_id='&amp;lt;azure-subscription-id&amp;gt;', resource_group='myresourcegroup')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2 — Set up Multi GPU Cluster
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-attach-compute-cluster?tabs=python&amp;amp;WT.mc_id=aiml-0000-abornst"&gt;Create compute clusters - Azure Machine Learning&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

# Choose a name for your GPU cluster (letters, digits and dashes only)
gpu_cluster_name = "gpu-cluster"

# Verify that the cluster does not already exist
try:
    gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)
    print('Found existing cluster, using it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC12s_v3',
                                                           max_nodes=2)
    gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)

gpu_cluster.wait_for_completion(show_output=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3 — Configure Environment
&lt;/h3&gt;

&lt;p&gt;To run PyTorch Lightning code on our cluster, we need to configure its dependencies. We can do that with a simple yml file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;channels:
  - conda-forge
dependencies:
  - python=3.6
  - pip:
    - azureml-defaults
    - mlflow
    - azureml-mlflow
    - torch
    - torchvision
    - pytorch-lightning
    - cmake
    - horovod # optional if you want to use a horovod backend 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then use the AzureML SDK to create an environment from our dependencies file and configure it to run on any Docker base image we want.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**from**  **azureml.core**  **import** Environment

env = Environment.from_conda_specification(environment_name, environment_file)

_# specify a GPU base image_
env.docker.enabled = **True**
env.docker.base_image = (
    "mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4 — Training Script
&lt;/h3&gt;

&lt;p&gt;Create a ScriptRunConfig to specify the training script &amp;amp; arguments, environment, and cluster to run on.&lt;/p&gt;

&lt;p&gt;We can use any example training script from the &lt;a href="https://github.com/Azure/azureml-examples/blob/minxia/lightning/code/models/pytorch-lightning/mnist-autoencoder/train.py"&gt;PyTorch Lightning examples&lt;/a&gt; or our own experiments.&lt;/p&gt;

&lt;p&gt;Once we have our training script, we need to make one minor modification: add the following function, which sets all the environment variables required for distributed communication between the Azure nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/media/290fec1591cf1a84f074aa91af8a010b/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/290fec1591cf1a84f074aa91af8a010b/href"&gt;https://medium.com/media/290fec1591cf1a84f074aa91af8a010b/href&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then after parsing the input arguments call the above function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;args = parser.parse_args()

# -----------
# configure distributed environment
# -----------

set_environment_variables_for_nccl_backend(single_node=int(args.num_nodes) &amp;lt;= 1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hopefully in the future this step will be abstracted out for us.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5 — Run Experiment
&lt;/h3&gt;

&lt;p&gt;For multi-node GPU training, specify the number of GPUs to train on per node (typically the number of GPUs in your cluster’s SKU), the number of nodes (typically the number of nodes in your cluster) and the distributed mode, in this case DistributedDataParallel ("ddp"), which PyTorch Lightning expects as the arguments --gpus, --num_nodes and --accelerator, respectively. See the &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html"&gt;Multi-GPU training&lt;/a&gt; documentation for more information.&lt;/p&gt;

&lt;p&gt;Then set the distributed_job_config to a new MpiConfiguration with a process_count_per_node equal to one (since PyTorch Lightning manages spawning the per-GPU processes) and a node_count equal to the --num_nodes you provided as input to the training script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/media/9f896be17f677c2dd2a3681360c2db83/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/9f896be17f677c2dd2a3681360c2db83/href"&gt;https://medium.com/media/9f896be17f677c2dd2a3681360c2db83/href&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can view the run logs and details in real time with the following SDK commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**from**  **azureml.widgets**  **import** RunDetails

RunDetails(run).show()
run.wait_for_completion(show_output= **True** )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And there you have it: without needing to manage the complexity of distributed batching, CUDA, MPI, logging callbacks, or process ranks, PyTorch Lightning scales your training job to as many nodes as you’d like.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OahgMGig--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ABqV_7yyDlighhGlGFpB5bQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OahgMGig--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ABqV_7yyDlighhGlGFpB5bQ.png" alt=""&gt;&lt;/a&gt;Pictured a complete two node 4 gpu run.&lt;/p&gt;

&lt;p&gt;You shouldn’t have any issues, but if you do, let me know in the comments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Acknowledgements
&lt;/h3&gt;

&lt;p&gt;I want to give a major shout out to &lt;a href="https://github.com/mx-iao"&gt;Minna Xiao&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/alex-shaojie-deng-b572347/"&gt;Alex Deng&lt;/a&gt; from the Azure ML team for their support and commitment to a better developer experience with open source frameworks such as PyTorch Lightning on Azure.&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with the Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>pytorchlightning</category>
      <category>azure</category>
      <category>pytorch</category>
      <category>ai</category>
    </item>
    <item>
      <title>Configuring Native Azure ML Logging with PyTorch Lighting</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Tue, 20 Oct 2020 12:17:23 +0000</pubDate>
      <link>https://dev.to/azure/configuring-native-azure-ml-logging-with-pytorch-lighting-2g4p</link>
      <guid>https://dev.to/azure/configuring-native-azure-ml-logging-with-pytorch-lighting-2g4p</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--scN-6M8t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AW1Lc8pYVI0Ddsr7K.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--scN-6M8t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AW1Lc8pYVI0Ddsr7K.jpg" alt=""&gt;&lt;/a&gt;Combining Azure and Lightning leads to more powerful logging&lt;/p&gt;

&lt;p&gt;TL;DR: This post demonstrates how to connect &lt;a href="https://github.com/PyTorchLightning/pytorch-lightning"&gt;PyTorch Lightning&lt;/a&gt; logging to &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/?WT.mc_id=aiml-0000-abornst"&gt;Azure ML&lt;/a&gt; natively with &lt;a href="https://mlflow.org/"&gt;MLflow&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are new to &lt;a href="https://azure.microsoft.com/en-us/overview/what-is-azure/?WT.mc_id=aiml-0000-abornst"&gt;Azure&lt;/a&gt;, you can get started with a free subscription using the link below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/free/?WT.mc_id=aiml-0000-abornst"&gt;Create your Azure free account today | Microsoft Azure&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure ML and PyTorch Lightning
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CuSSGc0O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/683/0%2A-L_6wlW_nFmR_8E1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CuSSGc0O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/683/0%2A-L_6wlW_nFmR_8E1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my last post on the subject, I outlined the benefits of both &lt;a href="https://github.com/PyTorchLightning/pytorch-lightning"&gt;PyTorch Lightning&lt;/a&gt; and &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/?WT.mc_id=aiml-0000-abornst"&gt;Azure ML&lt;/a&gt; to simplify training deep learning models. If you haven’t yet, check it out!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/aribornstein/training-your-first-distributed-pytorch-lightning-model-with-azure-ml-4kga-temp-slug-5861491"&gt;Training Your First Distributed PyTorch Lightning Model with Azure ML&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you’ve trained your first distributed PyTorch Lightning model with Azure ML, it is time to add logging.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why do we care about logging?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5jUMy8bk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/683/1%2AM-OoNIwuXbr01lTQIEfoKQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5jUMy8bk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/683/1%2AM-OoNIwuXbr01lTQIEfoKQ.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Logs are critical for troubleshooting and tracking the performance of machine learning models. Since we often train on remote clusters, logs provide a simple mechanism for having a clear understanding of what’s going on at each phase of developing our model.&lt;/p&gt;

&lt;p&gt;As opposed to simple print statements, logs are time-stamped, can be filtered by severity, and are used by Azure ML to visualize critical metrics during training, validation, and testing. Logging metrics with Azure ML is also a prerequisite for using the Azure ML HyperDrive service to help us find optimal model configurations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=aiml-0000-abornst"&gt;Tune hyperparameters for your model - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Logging is a perfect demonstration of how PyTorch Lightning and Azure ML combine to simplify model training: just by using Lightning, we can save ourselves dozens of lines of PyTorch code in our application, gaining readability in the process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Logging with PyTorch Lightning
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5S02ssw4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/880/1%2ApgXlqXOxKvh2Swq-rQ1Oag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5S02ssw4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/880/1%2ApgXlqXOxKvh2Swq-rQ1Oag.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In vanilla PyTorch, keeping track and maintaining logging code can get complicated very quickly.&lt;/p&gt;

&lt;p&gt;ML frameworks and services such as Azure ML, TensorBoard, TestTube, Neptune.ai and Comet ML each have their own unique logging APIs. This means that ML engineers often need to maintain multiple log statements at each phase of training, validation and testing.&lt;/p&gt;

&lt;p&gt;PyTorch Lightning simplifies this process by providing a unified logging interface that comes with out-of-the-box support for the most popular machine learning logging APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/loggers.html"&gt;Loggers - PyTorch Lightning 1.0.2 documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Multiple loggers can even be chained together, which greatly simplifies your code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**from** pytorch\_lightning.loggers **import** TensorBoardLogger, TestTubeLogger
logger1 **=** TensorBoardLogger('tb\_logs', name **=**'my\_model')
logger2 **=** TestTubeLogger('tb\_logs', name **=**'my\_model')
trainer **=** Trainer(logger **=** [logger1, logger2])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once loggers are provided to a PyTorch Lightning trainer, they can be accessed in any &lt;strong&gt;lightning_module_function_or_hook&lt;/strong&gt; outside of __init__.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**class**  **MyModule** (LightningModule):
**def**  **some\_lightning\_module\_function\_or\_hook** (self):
 some\_img **=** fake\_image()
_# Option 1_
 self **.** logger **.** experiment[0] **.** add\_image('generated\_images', some\_img, 0)
_# Option 2_
 self **.** logger[0] **.** experiment **.** add\_image('generated\_images', some\_img, 0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Azure ML Logging with PyTorch Lightning and MLflow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UoPfBF-_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/proxy/1%2AY3sfHDsucIVXbRvvFxd7Mw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UoPfBF-_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/proxy/1%2AY3sfHDsucIVXbRvvFxd7Mw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since Azure ML has native integration with &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow?WT.mc_id=aiml-0000-abornst"&gt;MLflow&lt;/a&gt;, we can take advantage of PyTorch Lightning’s MLflow logger module to get native metric visualizations across multiple experiment runs and utilize HyperDrive with very minor changes to our training code.&lt;/p&gt;

&lt;p&gt;Below I’ll outline the code needed to take advantage of Azure ML logging with PyTorch Lightning.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step #1 Environment
&lt;/h4&gt;

&lt;p&gt;Add the PyTorch Lightning, Azure ML and MLflow packages to the run environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip 
 - azureml-defaults
 - mlflow
 - azureml-mlflow
 - pytorch-lightning
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
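&lt;p&gt;For context, this pip section sits inside a full conda specification like the one from the multi-node post. Embedded in a complete file (the environment name and Python version below are illustrative), it might look like this:&lt;/p&gt;

```yaml
name: pl-azureml-logging
channels:
  - conda-forge
dependencies:
  - python=3.6
  - pip
  - pip:
    - azureml-defaults
    - mlflow
    - azureml-mlflow
    - pytorch-lightning
```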



&lt;h4&gt;
  
  
  Step #2 Get Azure ML Run Context and ML Flow Tracking URL
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from azureml.core.run import Run

run = Run.get\_context()
mlflow\_url = run.experiment.workspace.get\_mlflow\_tracking\_uri()mlf\_logger = 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;Step #3 Initialize PyTorch Lighting MLFlow Logger and Link Run.id&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MLFlowLogger(experiment\_name=amlexp.name, tracking\_uri=mlflow\_url)
mlf\_logger.\_run\_id = run.id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step #4 Add logging statements to the PyTorch Lightning training_step, validation_step, and test_step Hooks
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def training\_step(self, batch, batch\_idx):

 # Calculate train loss here 
 self.log("train\_loss", loss)
 # return test loss

def validation\_step(self, batch, batch\_idx):

 # Calculate validation loss here 
 self.log("val\_loss", loss)
 # return test loss

def test\_step(self, batch, batch\_idx):
 # Calculate test loss here 
 self.log("test\_loss", loss)
 # return test loss 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step #5 Add the ML Flow Logger to the PyTorch Lightning Trainer
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trainer = pl.Trainer.from\_argparse\_args(args)

trainer.logger = mlf\_logger # enjoy default logging implemented by pl!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And there you have it! Now when you submit your PyTorch Lightning training script you will get real-time visualizations and HyperDrive inputs at train, validation, and test time with a fraction of the normally required code.&lt;/p&gt;

&lt;p&gt;You shouldn’t have any issues, but if you do, let me know in the comments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;p&gt;In the next post, I will show you how to configure Multi Node Distributed Training with PyTorch and Azure ML using Low Priority compute instances to minimize training cost by an order of magnitude.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-plan-manage-cost?WT.mc_id=aiml-0000-abornst"&gt;Plan and manage costs - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Acknowledgements
&lt;/h3&gt;

&lt;p&gt;I want to give a major shout out to &lt;a href="https://github.com/mx-iao"&gt;Minna Xiao&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/alex-shaojie-deng-b572347/"&gt;Alex Deng&lt;/a&gt; from the Azure ML team for their support and commitment to a better developer experience with open source frameworks such as PyTorch Lightning on Azure.&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with the Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>pytorchlightning</category>
      <category>logging</category>
      <category>deeplearning</category>
      <category>azure</category>
    </item>
    <item>
      <title>Training Your First Distributed PyTorch Lightning Model with Azure ML</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Tue, 13 Oct 2020 12:11:52 +0000</pubDate>
      <link>https://dev.to/azure/training-your-first-distributed-pytorch-lightning-model-with-azure-ml-3o1i</link>
      <guid>https://dev.to/azure/training-your-first-distributed-pytorch-lightning-model-with-azure-ml-3o1i</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UfPotJuh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/683/1%2AyPmoGkF5mtIG539tZc7a5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UfPotJuh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/683/1%2AyPmoGkF5mtIG539tZc7a5w.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TL;DR: This post outlines how to get started training multi-GPU models with &lt;a href="https://github.com/PyTorchLightning/pytorch-lightning"&gt;PyTorch Lightning&lt;/a&gt; using &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/?WT.mc_id=aiml-0000-abornst"&gt;Azure Machine Learning&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are new to &lt;a href="https://azure.microsoft.com/en-us/overview/what-is-azure/?WT.mc_id=aiml-0000-abornst"&gt;Azure&lt;/a&gt;, you can get started with a free subscription using the link below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/free/?WT.mc_id=aiml-0000-abornst"&gt;Create your Azure free account today | Microsoft Azure&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is PyTorch Lightning?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ptohypa1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AWbGRlrVmmtnU5L9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ptohypa1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AWbGRlrVmmtnU5L9o.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PyTorch Lightning is a lightweight PyTorch wrapper for high-performance AI research. Lightning is designed around four principles that simplify the development and scalability of production PyTorch models:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable maximum flexibility&lt;/li&gt;
&lt;li&gt;Abstract away unnecessary boilerplate, but make it accessible when needed.&lt;/li&gt;
&lt;li&gt;Systems should be self-contained (ie: optimizers, computation code, etc).&lt;/li&gt;
&lt;li&gt;Deep learning code should be organized into 4 distinct categories, Research code (the LightningModule), Engineering code (you delete, and is handled by the Trainer), Non-essential research code (logging, etc… this goes in Callbacks), Data (use PyTorch Dataloaders or organize them into a LightningDataModule).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you do this, you can train on multiple GPUs, TPUs or CPUs, and even in 16-bit precision, without changing your code, which makes it perfect for taking advantage of distributed cloud computing services such as Azure Machine Learning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/PyTorchLightning/pytorch-lightning"&gt;PyTorchLightning/pytorch-lightning&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, PyTorch Lightning Bolts provides pre-trained models that can be wrapped and combined to prototype research ideas more rapidly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/PyTorchLightning/pytorch-lightning-bolts"&gt;PyTorchLightning/pytorch-lightning-bolts&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Azure Machine Learning?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_4mKCV3g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/810/1%2A9h0_bjt0sJKCzPY7GyXHYw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_4mKCV3g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/810/1%2A9h0_bjt0sJKCzPY7GyXHYw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-ml?WT.mc_id=aiml-0000-abornst"&gt;&lt;strong&gt;Azure Machine Learning&lt;/strong&gt; ( &lt;strong&gt;Azure ML&lt;/strong&gt; )&lt;/a&gt; is a cloud-based service for creating and managing &lt;strong&gt;machine learning&lt;/strong&gt; solutions. It’s designed to help data scientists and &lt;strong&gt;machine learning&lt;/strong&gt; engineers to leverage their existing data processing and model development skills &amp;amp; frameworks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-ml?WT.mc_id=aiml-0000-abornst"&gt;Azure Machine Learning&lt;/a&gt; provides the tools developers and data scientists need for their machine learning workflows, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Azure Compute Instances that can be accessed online or linked to remotely with Visual Studio Code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance?WT.mc_id=aiml-0000-abornst"&gt;What is an Azure Machine Learning compute instance? - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-vs-code-remote?WT.mc_id=aiml-0000-abornst"&gt;Connect to compute instance in Visual Studio Code (preview) - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Out-of-the-box support for machine learning libraries such as PyTorch, TensorFlow, scikit-learn, and Keras.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py&amp;amp;WT.mc_id=aiml-0000-abornst"&gt;azureml.train.estimator.Estimator class - Azure Machine Learning Python&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code, Data, Model Management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local?WT.mc_id=aiml-0000-abornst"&gt;Tutorial: Get started with machine learning - Python - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scalable Distributed Training and Cheap Low Priority GPU Compute&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-distributed-training?WT.mc_id=aiml-0000-abornst"&gt;What is distributed training? - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Auto ML and Hyper Parameter Optimization&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=aiml-0000-abornst"&gt;Tune hyperparameters for your model - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml?WT.mc_id=aiml-0000-abornst"&gt;What is automated ML / AutoML - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Container Registry, Kubernetes Deployment and MLOps Pipelines&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-model-management-and-deployment?WT.mc_id=aiml-0000-abornst"&gt;MLOps: ML model management - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service?tabs=python&amp;amp;WT.mc_id=aiml-0000-abornst"&gt;Deploy ML models to Kubernetes Service - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Interpretability Tools and Data Drift Monitoring&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-aml?WT.mc_id=aiml-0000-abornst"&gt;Interpret &amp;amp; explain ML models in Python (preview) - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-datasets?WT.mc_id=aiml-0000-abornst"&gt;Analyze and monitor for data drift on datasets (preview) - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can even use external open source services like &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow"&gt;MLflow to track metrics and deploy models&lt;/a&gt; or Kubeflow to &lt;a href="https://www.kubeflow.org/docs/azure/"&gt;build end-to-end workflow pipelines&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Check out some Azure ML best-practice examples at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Azure/azureml-examples"&gt;Azure/azureml-examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/microsoft/bert-stack-overflow"&gt;microsoft/bert-stack-overflow&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the advantages of PyTorch Lightning and Azure ML, it makes sense to provide an example of how to leverage the best of both worlds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Step 1 — Set up Azure ML Workspace
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4XZ0CtJD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Ac95uUGx4H8j8SLq5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4XZ0CtJD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Ac95uUGx4H8j8SLq5.gif" alt=""&gt;&lt;/a&gt;Create Azure ML Workspace from the &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace?WT.mc_id=aiml-0000-abornst"&gt;Portal&lt;/a&gt; or use the &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace-cli?WT.mc_id=aiml-0000-abornst"&gt;Azure CLI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect to the workspace with the Azure ML SDK as follows&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from azureml.core import Workspace
ws = Workspace.get(name="myworkspace", subscription\_id='&amp;lt;azure-subscription-id&amp;gt;', resource\_group='myresourcegroup')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
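&lt;p&gt;Alternatively, if a &lt;code&gt;config.json&lt;/code&gt; file is present (it can be downloaded from the workspace page in the portal), &lt;code&gt;Workspace.from_config()&lt;/code&gt; loads the same details. A minimal sketch of that file, with placeholder values:&lt;/p&gt;

```
{
  "subscription_id": "&amp;lt;azure-subscription-id&amp;gt;",
  "resource_group": "myresourcegroup",
  "workspace_name": "myworkspace"
}
```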



&lt;h4&gt;
  
  
  Step 2 — Set up Multi GPU Cluster
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-attach-compute-cluster?tabs=python&amp;amp;WT.mc_id=aiml-0000-abornst"&gt;Create compute clusters - Azure Machine Learning&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute\_target import ComputeTargetException

# Choose a name for your GPU cluster
gpu\_cluster\_name = "gpu cluster"

# Verify that cluster does not exist already
try:
 gpu\_cluster = ComputeTarget(workspace=ws, name=gpu\_cluster\_name)
 print('Found existing cluster, use it.')
except ComputeTargetException:
 compute\_config = AmlCompute.provisioning\_configuration(vm\_size='Standard\_NC12s\_v3',
max\_nodes=2)
 gpu\_cluster = ComputeTarget.create(ws, gpu\_cluster\_name, compute\_config)

gpu\_cluster.wait\_for\_completion(show\_output=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3 — Configure Environment
&lt;/h4&gt;

&lt;p&gt;To run PyTorch Lightning code on our cluster, we need to configure our dependencies. We can do that with a simple YAML file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;channels:
 - conda-forge
dependencies:
 - python=3.6
 - pip
 - pip:
 - azureml-defaults
 - torch
 - torchvision
 - pytorch-lightning
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then use the AzureML SDK to create an environment from our dependencies file and configure it to run on any Docker base image we want.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**from**  **azureml.core**  **import** Environment

env = Environment.from\_conda\_specification(environment\_name, environment\_file)

_# specify a GPU base image_
env.docker.enabled = **True**
env.docker.base\_image = (
 "mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 4 — Training Script
&lt;/h4&gt;

&lt;p&gt;Create a ScriptRunConfig to specify the training script &amp;amp; arguments, environment, and cluster to run on.&lt;/p&gt;

&lt;p&gt;We can use any example train script from the &lt;a href="https://github.com/Azure/azureml-examples/blob/minxia/lightning/code/models/pytorch-lightning/mnist-autoencoder/train.py"&gt;PyTorch Lightning examples or our own experiments&lt;/a&gt;.&lt;/p&gt;
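&lt;p&gt;Whatever script you use, it needs to accept the arguments that Step 5 passes in. Below is a minimal sketch of that command-line surface (in a real Lightning script these values would typically be forwarded to the &lt;code&gt;Trainer&lt;/code&gt;):&lt;/p&gt;

```python
import argparse

def parse_args(argv=None):
    # Flags matching the arguments passed by ScriptRunConfig in Step 5
    parser = argparse.ArgumentParser()
    parser.add_argument("--max_epochs", type=int, default=10)
    parser.add_argument("--gpus", type=int, default=1)
    parser.add_argument("--distributed_backend", type=str, default=None)
    return parser.parse_args(argv)

# Simulate the arguments the submitted run receives
args = parse_args(["--max_epochs", "25", "--gpus", "2",
                   "--distributed_backend", "ddp"])
print(args.max_epochs, args.gpus, args.distributed_backend)  # 25 2 ddp
```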

&lt;h4&gt;
  
  
  Step 5 — Run Experiment
&lt;/h4&gt;

&lt;p&gt;For GPU training on a single node, specify the number of GPUs to train on (typically this will correspond to the number of GPUs in your cluster’s SKU) and the distributed mode, in this case DistributedDataParallel ("ddp"), which PyTorch Lightning expects as arguments --gpus and --distributed_backend, respectively. See their &lt;a href="https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html"&gt;Multi-GPU training&lt;/a&gt; documentation for more information.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**import**  **os**
 **from**  **azureml.core**  **import** ScriptRunConfig, Experiment

cluster = ws.compute\_targets[cluster\_name]

src = ScriptRunConfig(
 source\_directory=source\_dir,
 script=script\_name,
 arguments=["--max\_epochs", 25, "--gpus", 2, "--distributed\_backend", "ddp"],
 compute\_target=cluster,
 environment=env,
)

run = Experiment(ws, experiment\_name).submit(src)
run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can view the run logs and details in real time with the following SDK commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**from**  **azureml.widgets**  **import** RunDetails

RunDetails(run).show()
run.wait\_for\_completion(show\_output= **True** )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Next Steps and Future Post
&lt;/h4&gt;

&lt;p&gt;Now that we’ve set up our first Azure ML PyTorch Lightning experiment, here are some advanced steps to try out. We will cover them in more depth in a later post.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Link a Custom Dataset from Azure Datastore&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;This example used the MNIST dataset from PyTorch datasets. If we want to train on our own data, we need to integrate with the Azure ML Datastore, which is relatively trivial; we will show how to do this in a follow-up post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-register-datasets?WT.mc_id=aiml-0000-abornst"&gt;Create Azure Machine Learning datasets to access data - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Create a Custom PyTorch Lightning Logger for AML and Optimize with Hyperdrive&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In this example all our model logging was stored in the Azure ML driver.log, but Azure ML experiments have much more robust logging tools that can integrate directly into PyTorch Lightning with very little work. In the next post we will show how to do this and what we gain with HyperDrive.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/nateraw/pytorch-lightning-azureml"&gt;nateraw/pytorch-lightning-azureml&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/PyTorchLightning/pytorch-lightning-bolts/pull/223"&gt;[DRAFT] Add logger for Azure Machine Learning by dkmiller · Pull Request #223 · PyTorchLightning/pytorch-lightning-bolts&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
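&lt;p&gt;To give a flavor of what such a logger involves, here is a hedged, minimal sketch: a duck-typed logger that forwards Lightning-style metric dictionaries to a run's &lt;code&gt;log&lt;/code&gt; method. A real implementation would subclass Lightning's logger base class and handle hyperparameters and steps; the &lt;code&gt;FakeRun&lt;/code&gt; stand-in below exists only to exercise the sketch without an Azure ML workspace.&lt;/p&gt;

```python
class AzureMLLogger:
    """Sketch of a logger bridging Lightning metrics to an Azure ML Run."""

    def __init__(self, run):
        self.run = run  # azureml.core.Run (here: any object with .log)

    def log_metrics(self, metrics, step=None):
        # Forward each scalar metric to the Azure ML run history
        for name, value in metrics.items():
            self.run.log(name, value)

# Quick check with a stand-in run object that records calls
class FakeRun:
    def __init__(self):
        self.logged = []
    def log(self, name, value):
        self.logged.append((name, value))

run = FakeRun()
AzureMLLogger(run).log_metrics({"train_loss": 0.42})
print(run.logged)  # [('train_loss', 0.42)]
```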

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Multi Node Distributed Compute with the PyTorch Lightning Horovod Backend&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In this example we showed how to leverage all the GPUs on a single-node cluster. In the next post we will show how to distribute training across nodes with PyTorch Lightning’s Horovod backend.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Deploy our Model to Production
&lt;/h4&gt;

&lt;p&gt;In this example we showed how to train a distributed PyTorch Lightning model. In the next post we will show how to deploy the model as an AKS service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?WT.mc_id=aiml-0000-abornst&amp;amp;tabs=azcli"&gt;How and where to deploy models - Azure Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, check out my post on 9 tips for Production Machine Learning, and feel free to share it with your friends!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/azure/9-advanced-tips-for-production-machine-learning-4ccg"&gt;9 Advanced Tips for Production Machine Learning&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Acknowledgements
&lt;/h3&gt;

&lt;p&gt;I want to give a major shout out to &lt;a href="https://github.com/mx-iao"&gt;Minna Xiao&lt;/a&gt; from the Azure ML team for her support and commitment working towards a better developer experience with Open Source frameworks such as PyTorch Lightning on Azure.&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with the Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>pytorch</category>
      <category>azure</category>
      <category>deeplearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Accelerating Model Training with the ONNX Runtime</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Wed, 20 May 2020 10:28:12 +0000</pubDate>
      <link>https://dev.to/azure/accelerating-model-training-with-the-onnx-runtime-3o0e</link>
      <guid>https://dev.to/azure/accelerating-model-training-with-the-onnx-runtime-3o0e</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4G0aB4oM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A9w6EHxtXjGpsZRj-" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4G0aB4oM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A9w6EHxtXjGpsZRj-" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TL;DR: This article introduces the new improvements to the ONNX Runtime for accelerated training and outlines the 4 key steps for speeding up training of an existing PyTorch model with the ONNX Runtime (ORT).&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the ONNX Runtime (ORT)?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qy7xYZ8k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Ab9VPseLvwcaMlnnp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qy7xYZ8k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Ab9VPseLvwcaMlnnp.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ONNX Runtime&lt;/strong&gt; is a performance-focused inference engine for &lt;a href="https://onnx.ai/"&gt;ONNX (Open Neural Network Exchange) models&lt;/a&gt;. ONNX Runtime was designed with a focus on performance and scalability in order to support heavy workloads in high-scale production scenarios. It also has extensibility options for compatibility with emerging hardware developments.&lt;/p&gt;

&lt;p&gt;Recently at &lt;a href="https://mybuild.microsoft.com/home?WT.mc_id=build2020_ca-medium-abornst&amp;amp;t=%257B%2522from%2522%253A%25222020-05-19T08%253A30%253A00%252B03%253A00%2522%252C%2522to%2522%253A%25222020-05-21T19%253A00%253A00%252B03%253A00%2522%257D"&gt;//Build 2020&lt;/a&gt;, Microsoft announced new capabilities to perform optimized training with the ONNX Runtime in addition to inferencing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloudblogs.microsoft.com/opensource/2020/05/19/announcing-support-for-accelerated-training-with-onnx-runtime/?WT.mc_id=build2020_ca-medium-abornst"&gt;Announcing accelerated training with ONNX Runtime-train models up to 45% faster - Open Source Blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These optimizations led to a 45% training speedup on Microsoft’s own internal Transformer NLP models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YNtoGkSb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AMcdpYzQYwtRmC4iDfW8zpQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YNtoGkSb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AMcdpYzQYwtRmC4iDfW8zpQ.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As part of our work to give back to the community, Microsoft developed an example repo that demonstrates how to integrate ORT training into the official NVIDIA &lt;a href="https://devblogs.nvidia.com/training-bert-with-gpus/"&gt;implementation of the large BERT model&lt;/a&gt; with over &lt;em&gt;8.3Bn parameters&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/onnxruntime-training-examples"&gt;microsoft/onnxruntime-training-examples&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This implementation can be found at the link above and can even be trained on your own data using &lt;a href="https://github.com/microsoft/onnxruntime-training-examples/blob/sukha/nvbert/nvidia-bert/azureml-notebooks/run-pretraining.ipynb"&gt;Azure ML with the example notebook&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, while this is amazing for getting started, it would be nice to understand what is going on under the hood, which can be a little overwhelming at first glance.&lt;/p&gt;

&lt;p&gt;In the remainder of this article, I will walk through the 4 main modifications that need to be made to PyTorch models to take advantage of ORT, and point you to where in the example code repo you can deep dive to learn more.&lt;/p&gt;

&lt;p&gt;I won’t go through every detail of the modifications, but I will explain the core concepts so that you can get started on your own ORT journey.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Set Up ORT Distributed Training Environment
&lt;/h3&gt;

&lt;p&gt;Training large neural networks often requires distributed compute clusters. In this way we can run a copy of our script for each GPU on each VM in our cluster. To ensure that our data and our model gradients are updated properly, we need to assign each copy of the train script a:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;World Rank — A &lt;a href="https://mpitutorial.com/tutorials/performing-parallel-rank-with-mpi/"&gt;rank&lt;/a&gt; for the process across all the VM Instances&lt;/li&gt;
&lt;li&gt;Local Rank — A &lt;a href="https://mpitutorial.com/tutorials/performing-parallel-rank-with-mpi/"&gt;rank&lt;/a&gt; for the script process on a given VM Instance&lt;/li&gt;
&lt;/ul&gt;
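&lt;p&gt;The rank bookkeeping above can be sketched in a few lines. The environment variable names here are the standard Open MPI ones; the actual setup helper may read different variables, so treat this as an illustration of the concept rather than the library's implementation:&lt;/p&gt;

```python
def mpi_ranks(env):
    """Derive world/local rank and world size from Open MPI env variables."""
    world_rank = int(env.get("OMPI_COMM_WORLD_RANK", 0))        # rank across all VMs
    local_rank = int(env.get("OMPI_COMM_WORLD_LOCAL_RANK", 0))  # rank on this VM
    world_size = int(env.get("OMPI_COMM_WORLD_SIZE", 1))        # total process count
    return world_rank, local_rank, world_size

# Example: process 5 of 8 overall, i.e. GPU 1 on the second 4-GPU VM
world_rank, local_rank, world_size = mpi_ranks({
    "OMPI_COMM_WORLD_RANK": "5",
    "OMPI_COMM_WORLD_LOCAL_RANK": "1",
    "OMPI_COMM_WORLD_SIZE": "8",
})
print(world_rank, local_rank, world_size)  # 5 1 8
```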

&lt;p&gt;The ort_supplement module provides a setup function that configures the ONNX Runtime for distributed training with Open MPI and the Azure Machine Learning service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;device = ort\_supplement.setup\_onnxruntime\_with\_mpi(args)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8fhzP1pa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/621/1%2AaCG8K2rB9EdJ0TJic8y2Kw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8fhzP1pa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/621/1%2AaCG8K2rB9EdJ0TJic8y2Kw.png" alt=""&gt;&lt;/a&gt;The Create ORTTrainer function can be found in the&lt;a href="https://github.com/microsoft/onnxruntime-training-examples/blob/sukha/nvbert/nvidia-bert/ort_patch/ort_supplement/ort_supplement.py"&gt; ort_supplement module&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create an ORT Trainer Model
&lt;/h3&gt;

&lt;p&gt;Once we have a distributed training environment, the next step is to load the PyTorch model into ORT for training. To do this we use the &lt;em&gt;create_ort_trainer&lt;/em&gt; method from the ort_supplement script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = ort\_supplement.create\_ort\_trainer(args, device, model)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4aq6tTG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/596/1%2AR67z3n3kI_6gSI-bi8ARRA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4aq6tTG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/596/1%2AR67z3n3kI_6gSI-bi8ARRA.png" alt=""&gt;&lt;/a&gt;The Create ORTTrainer function can be found in the&lt;a href="https://github.com/microsoft/onnxruntime-training-examples/blob/sukha/nvbert/nvidia-bert/ort_patch/ort_supplement/ort_supplement.py"&gt; ort_supplement module&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ORT Trainer Model requires a couple of important arguments to implement:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A &lt;strong&gt;PyTorch model&lt;/strong&gt; bundled with a &lt;strong&gt;loss function&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An optimizer function&lt;/strong&gt;; by default we use the &lt;a href="https://arxiv.org/abs/1904.00962"&gt;Lamb Optimizer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;model description&lt;/strong&gt; using the &lt;strong&gt;IODescription&lt;/strong&gt; object to specify the model input and output tensor dimensions. An example of the NVIDIA BERT description looks as follows&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mBdrzxJR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/732/1%2A7xQL9IxIeBWYuwA-ed_taA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mBdrzxJR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/732/1%2A7xQL9IxIeBWYuwA-ed_taA.png" alt=""&gt;&lt;/a&gt;Note the tensor dimensions should be passed as numeric values to get full optimization benefits&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gradient Accumulation Steps &lt;/strong&gt; — Number of steps to run on a script instance before syncing the gradient&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Opset Version — &lt;/strong&gt; The operation set version for the ONNX runtime. The latest version is 12&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt; &lt;strong&gt;Post Processing Function (Optional)&lt;/strong&gt; that runs after the ONNX Runtime converts the PyTorch model to ONNX, which can be used to handle unsupported operations and further optimize the ONNX model graph&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Map Optimizer Attributes&lt;/strong&gt;, which maps weight names to a set of optimization parameters.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For more information check out the following resource from the ORT training repo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/onnxruntime/blob/50f798dad6681b5f84ece1a97b4a90504aa330f0/orttraining/orttraining/python/ort_trainer.py"&gt;microsoft/onnxruntime&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Call ORT Training Steps to Train Model
&lt;/h3&gt;

&lt;p&gt;Once we’ve initialized our model, we call &lt;strong&gt;run_ort_training_step&lt;/strong&gt; to actually step forward with our model, calculate its local loss, and propagate its aggregated gradient.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;loss, global\_step = ort\_supplement.run\_ort\_training\_step(args, global\_step, training\_steps, model, batch) # Runs the actual training steps
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kdwBhPSq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/725/1%2Aisw6xU4ljh7hAG34V0yuPQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kdwBhPSq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/725/1%2Aisw6xU4ljh7hAG34V0yuPQ.png" alt=""&gt;&lt;/a&gt;The run_ort_training steps function can be found in the&lt;a href="https://github.com/microsoft/onnxruntime-training-examples/blob/sukha/nvbert/nvidia-bert/ort_patch/ort_supplement/ort_supplement.py"&gt; ort_supplement module&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Export Trained ONNX Model
&lt;/h3&gt;

&lt;p&gt;Lastly, once we have completed all the distributed training iterations, we can export our model. While exporting to ONNX is not mandatory for evaluation, doing so lets us take advantage of ORT’s accelerated inferencing. We export the ORT model to the ONNX format by calling the model.save_as_onnx function and providing our output destination. This function can also be used to checkpoint the model at each epoch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; model.save\_as\_onnx(out\_path)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;With the four functions above you have the key tools you need to make sense of the full BERT LARGE ONNX training example. To see them in action check out the &lt;a href="https://github.com/microsoft/onnxruntime-training-examples/blob/master/nvidia-bert/ort_addon/run_pretraining_ort.py"&gt;run_pretraining_ort&lt;/a&gt; script below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/onnxruntime-training-examples/blob/master/nvidia-bert/ort_addon/run_pretraining_ort.py"&gt;microsoft/onnxruntime-training-examples&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;p&gt;Now that you are more familiar with how to leverage the ORT SDK take a look at some other really cool ONNX blog posts and examples.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/microsoftazure/accelerate-your-nlp-pipelines-using-hugging-face-transformers-and-onnx-runtime-2443578f4333"&gt;Accelerate your NLP pipelines using Hugging Face Transformers and ONNX Runtime&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-onnx?WT.mc_id=build2020_ca-medium-abornst"&gt;ONNX: high-perf, cross platform inference - Azure Machine Learning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/azure/evaluating-deep-learning-models-in-10-different-languages-with-examples-3b4b"&gt;Evaluating Deep Learning Models in 10 Different Languages (With Examples)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As the Runtime matures we are always looking for more contributors; check out our contribution guidelines here. Hope this helps you on your journey to more efficient deep learning.&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>azure</category>
      <category>onnx</category>
    </item>
    <item>
      <title>Visual Brand Detection with Azure Video Indexer</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Fri, 15 May 2020 15:00:03 +0000</pubDate>
      <link>https://dev.to/azure/visual-brand-detection-with-azure-video-indexer-nc0</link>
      <guid>https://dev.to/azure/visual-brand-detection-with-azure-video-indexer-nc0</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M5i5YkcQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ATZnVit_dzETymBl-KWW5yw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M5i5YkcQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ATZnVit_dzETymBl-KWW5yw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TL;DR: This post will show how to use the Azure Video Indexer, Computer Vision API and Custom Vision Services to extract key frames and detect custom image tags in indexed videos.&lt;/p&gt;

&lt;p&gt;All code for the tutorial can be found in the notebook below. This code can be extended to support almost any image classification or object detection task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aribornstein/AzureVideoIndexerVisualBrandDetection/blob/master/Video%20Indexer%20Keyframe%20Brand%20Detection.ipynb"&gt;aribornstein/AzureVideoIndexerVisualBrandDetection&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The tutorial requires an Azure subscription, however everything can be achieved using the free tier. If you are new to Azure you can get a free subscription here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/free/?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Create your Azure free account today | Microsoft Azure&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Azure Video Indexer?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Iu_jCjb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AjYdZ1vbeTnj6nH2k3XK5-Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Iu_jCjb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AjYdZ1vbeTnj6nH2k3XK5-Q.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-use-apis?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Azure Video Indexer&lt;/a&gt; automatically extracts metadata — such as spoken words, written text, faces, speakers, celebrities, emotions, topics, brands, and scenes from video and audio files. Developers can then access the data within their application or infrastructure, make it more discover-able, and use it to create new over-the-top (OTT) experiences and monetization opportunities&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-use-apis?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Use the Video Indexer API - Azure Media Services&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Often, we wish to extract useful tags from video content. These tags are often the differentiating factor for successful engagement on social media services such as Instagram, Facebook, and YouTube.&lt;/p&gt;

&lt;p&gt;This tutorial will show how to use Azure Video Indexer, Computer Vision API, and Custom Vision service to extract key frames and custom tags. We will use these Azure services to detect custom brand logos in indexed videos.&lt;/p&gt;

&lt;p&gt;This code can be extended to support almost any image classification or object detection task.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step #1 Download A Sample Video with the pyTube API
&lt;/h3&gt;

&lt;p&gt;The first step is to download a sample video to be indexed. We will be downloading an episode of &lt;a href="https://www.youtube.com/watch?v=I1_kqOIKQTQ"&gt;Azure Mythbusters&lt;/a&gt; on &lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Azure Machine Learning&lt;/a&gt; by my incredible Co-Worker &lt;a href="https://twitter.com/AmyKateNicho"&gt;Amy Boyd&lt;/a&gt; using the Open Source &lt;a href="https://python-pytube.readthedocs.io/en/latest/"&gt;pyTube API&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ijtKxXiS4hE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;pyTube can be installed with pip&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install pytube3 --upgrade
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Code:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from pytube import YouTube
from pathlib import Path

video2Index = YouTube('https://www.youtube.com/watch?v=ijtKxXiS4hE').streams[0].download()

video_name = Path(video2Index).stem
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Step #2 Create An Azure Video Indexer Instance
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jy7Mdtqa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/819/1%2AnGV770rBR9WMhHeih37MMg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jy7Mdtqa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/819/1%2AnGV770rBR9WMhHeih37MMg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to &lt;a href="https://www.videoindexer.ai/"&gt;https://www.videoindexer.ai/&lt;/a&gt; and follow the instructions to create an account.&lt;/p&gt;

&lt;p&gt;For the next steps, you will need your Video Indexer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subscription Key&lt;/li&gt;
&lt;li&gt;Location&lt;/li&gt;
&lt;li&gt;Account Id&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These can be found in the account settings page in the Video Indexer Website pictured above. For more information see the documentation below. Feel free to comment below if you get stuck.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-use-apis?WT.mc_id=vikeyframedetection-notebook-abornst"&gt;Use the Video Indexer API - Azure Media Services&lt;/a&gt;&lt;/p&gt;
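&lt;p&gt;Rather than hard-coding these three values into your notebook, you can keep them out of source control by reading them from environment variables. A minimal sketch — the variable names VI_SUBSCRIPTION_KEY, VI_LOCATION, and VI_ACCOUNT_ID are my own convention, not anything the API requires:&lt;/p&gt;

```python
import os

def load_vi_credentials():
    """Read Video Indexer credentials from environment variables.

    Raises KeyError with a helpful message if any variable is missing.
    """
    keys = ["VI_SUBSCRIPTION_KEY", "VI_LOCATION", "VI_ACCOUNT_ID"]
    missing = [k for k in keys if k not in os.environ]
    if missing:
        raise KeyError("Missing environment variables: {}".format(", ".join(missing)))
    # Lower-cased keys line up with the VideoIndexer constructor's keyword arguments
    return {k.lower(): os.environ[k] for k in keys}
```

&lt;p&gt;With the variables exported in your shell, the returned dict can be unpacked straight into the client constructor, e.g. VideoIndexer(**load_vi_credentials()).&lt;/p&gt;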

&lt;h3&gt;
  
  
  Step #3 Use the Unofficial Video Indexer Python Client to Process our Video and Extract Key Frames
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9KgIl2OC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AhDm9h-iNcXoPGHaX" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9KgIl2OC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AhDm9h-iNcXoPGHaX" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To interact with the Video Indexer API, we will use the &lt;a href="https://github.com/bklim5/python_video_indexer_lib"&gt;unofficial Python client&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install video-indexer
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Code:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initialize Client:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vi = VideoIndexer(vi\_subscription\_key='SUBSCRIPTION\_KEY',
                  vi\_location='LOCATION',
                  vi\_account\_id='ACCOUNT\_ID')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Upload Video:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;video\_id = vi.upload\_to\_video\_indexer(
              input\_filename = video2Index,
              video\_name=video\_name, #must be unique
              video\_language='English')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Get Video Info
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;info = vi.get\_video\_info(video\_id, video\_language='English')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Extract Key Frame Ids
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;keyframes = []
for shot in info["videos"][0]["insights"]["shots"]:
    for keyframe in shot["keyFrames"]:
        keyframes.append(keyframe["instances"][0]['thumbnailId'])
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Get Keyframe Thumbnails
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for keyframe in keyframes:
    img\_str = vi.get\_thumbnail\_from\_video\_indexer(video\_id,    
                                                  keyframe)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
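&lt;p&gt;Since get_thumbnail_from_video_indexer returns the raw image bytes, persisting each keyframe for later inspection is just a file write. A small helper sketch — the output directory name is arbitrary:&lt;/p&gt;

```python
from pathlib import Path

def save_thumbnail(img_bytes, keyframe_id, out_dir="keyframes"):
    """Write one keyframe thumbnail to disk and return its path."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)  # create the folder on first use
    path = out / "{}.jpg".format(keyframe_id)
    path.write_bytes(img_bytes)
    return path
```

&lt;p&gt;Calling save_thumbnail(img_str, keyframe) inside the loop above leaves you with a folder of JPEG keyframes you can eyeball before sending them to any vision service.&lt;/p&gt;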



&lt;h3&gt;
  
  
  Step #4 Use the Azure Computer Vision API to Extract Popular Brands from Key Frames
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gBjjYdCc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AuBF1f9xlnIdKD5Fo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gBjjYdCc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AuBF1f9xlnIdKD5Fo.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Out of the box, Azure Video Indexer uses optical character recognition and the audio transcript generated by speech-to-text to detect references to popular brands.&lt;/p&gt;
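&lt;p&gt;These built-in detections are already present in the insights JSON returned by get_video_info, so you can list them before deciding whether a custom model is needed. A sketch of pulling them out — this assumes the insights object carries a top-level "brands" list with "name" fields, which is absent when nothing was detected:&lt;/p&gt;

```python
def builtin_brands(info):
    """Return the brand names Video Indexer detected via OCR/transcript."""
    insights = info["videos"][0]["insights"]
    # .get() keeps this safe for videos where no brands were found
    return [b["name"] for b in insights.get("brands", [])]
```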

&lt;p&gt;Now that we have extracted the key frames, we are going to leverage the &lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/client-library?pivots=programming-language-python&amp;amp;?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Computer Vision API&lt;/a&gt; to extend this functionality and see whether there are any known brands in the key frames.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-brand-detection?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Brand detection - Computer Vision - Azure Cognitive Services&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First we will have to create a &lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/client-library?pivots=programming-language-python&amp;amp;?WT.mc_id=vikeyframedetection-notebook-abornst"&gt;Computer Vision API&lt;/a&gt; key. A free-tier key, which is sufficient for this demo, can be generated by following the instructions in the documentation link below. Once done, you should have a Computer Vision &lt;strong&gt;subscription key&lt;/strong&gt; and &lt;strong&gt;endpoint&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account?WT.mc_id=vikeyframedetection-notebook-abornst"&gt;Create a Cognitive Services resource in the Azure portal - Azure Cognitive Services&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After we have our Azure Computer Vision &lt;strong&gt;subscription key&lt;/strong&gt; and &lt;strong&gt;endpoint&lt;/strong&gt;, we can then use the Client SDK to evaluate our video’s keyframes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install --upgrade azure-cognitiveservices-vision-computervision
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Code:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initialize Computer Vision Client
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

computervision\_client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(subscription\_key))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Send Keyframe To Azure Computer Vision Service to Detect Brands
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time

timeout\_interval, timeout\_time = 5, 10.0
image\_features = ["brands"]

for index, keyframe in enumerate(keyframes):

if index % timeout\_interval == 0:
     print("Trying to prevent exceeding request limit waiting {} seconds".format(timeout\_time))
     time.sleep(timeout\_time)

# Get KeyFrame Image Byte String From Video Indexer
img\_str = vi.get\_thumbnail\_from\_video\_indexer(video\_id, keyframe)

# Convert Byte Stream to Image Stream
img\_stream = io.BytesIO(img\_str)

# Analyze with Azure Computer Vision
cv\_results = computervision\_client.analyze\_image\_in\_stream(img\_stream, image\_features)

print("Detecting brands in keyframe {}: ".format(keyframe))

if len(cv\_results.brands) == 0:
    print("No brands detected.")

else:
    for brand in cv\_results.brands:

        print("'{}' brand detected with confidence {:.1f}% at location {}, {}, {}, {}".format( brand.name, brand.confidence \* 100, brand.rectangle.x, brand.rectangle.x + brand.rectangle.w, brand.rectangle.y, brand.rectangle.y + brand.rectangle.h))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
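&lt;p&gt;The sleep-every-N-requests pattern above can be factored into a small generator so that this loop and the Custom Vision loop later on share the same throttling logic. A sketch — the interval and pause defaults mirror the values used above, and the injectable sleep function is just there to make the helper easy to test:&lt;/p&gt;

```python
import time

def throttled(items, interval=5, pause=10.0, sleep=time.sleep):
    """Yield items, pausing every `interval` items to stay under rate limits."""
    for index, item in enumerate(items):
        # Skip the pause at index 0, since no requests have been made yet
        if index and index % interval == 0:
            sleep(pause)
        yield item
```

&lt;p&gt;The keyframe loop then becomes simply: for keyframe in throttled(keyframes): ...&lt;/p&gt;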



&lt;h4&gt;
  
  
  Azure Computer Vision API — General Brand Detection
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/client-library?pivots=programming-language-python#analyze-an-image"&gt;Quickstart: Computer Vision client library - Azure Cognitive Services&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step #5 Use the Azure Custom Vision Service to Extract Custom Logos from Keyframes
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LIpys-U1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Atpr1hMApdxIpZV4xLSwQ7A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LIpys-U1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Atpr1hMApdxIpZV4xLSwQ7A.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-brand-detection?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Azure Computer Vision&lt;/a&gt; API can capture many of the world’s most popular brands, but sometimes a brand may be more obscure. In the last section, we will use the Custom Vision Service to train a custom logo detector that finds the Azure Developer Relations mascot Bit in the keyframes extracted by Video Indexer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QpDZriMc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AV9uAlX5vdf7Jcwkjx79BcQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QpDZriMc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AV9uAlX5vdf7Jcwkjx79BcQ.png" alt=""&gt;&lt;/a&gt;My training set for Custom Bit Detector&lt;/p&gt;

&lt;p&gt;This tutorial assumes you know how to train a &lt;a href="http://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/quickstarts/object-detection?pivots=programming-language-python?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Custom Vision Service object detection model for brand detection&lt;/a&gt;. If not, check out the documentation below for a tutorial.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/logo-detector-mobile?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Tutorial: Use custom logo detector to recognize Azure services - Custom Vision - Azure Cognitive Services&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instead of deploying to mobile, however, we will use the Python client API for the &lt;a href="http://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/quickstarts/object-detection?pivots=programming-language-python?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Azure Custom Vision Service&lt;/a&gt;. All the information you’ll need can be found in the settings menu of your Custom Vision project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--czf7lJyR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AntJkUfIVNhPuWPc32YN0hw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--czf7lJyR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AntJkUfIVNhPuWPc32YN0hw.png" alt=""&gt;&lt;/a&gt;Settings menu for Custom Vision Service&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install azure-cognitiveservices-vision-customvision
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Code:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initialize Custom Vision Service Client
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

prediction\_threshold = .8
prediction\_key = "Custom Vision Service Key"
custom\_endpoint = "Custom Vision Service Endpoint"
project\_id = "Custom Vision Service Model ProjectId"
published\_name = "Custom Vision Service Model Iteration Name"

predictor = CustomVisionPredictionClient(prediction\_key, endpoint=published\_name)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Use Custom Vision Service Model to Predict Key Frames
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time
timeout\_interval, timeout\_time = 5, 10.0

for index, keyframe in enumerate(keyframes):
    if index % timeout\_interval == 0:
       print("Trying to prevent exceeding request limit waiting {} seconds".format(timeout\_time))
       time.sleep(timeout\_time)

    # Get KeyFrame Image Byte String From Video Indexer
    img\_str = vi.get\_thumbnail\_from\_video\_indexer(video\_id, keyframe)

    # Convert Byte Stream to Image Stream
    img\_stream = io.BytesIO(img\_str)

    # Analyze with Azure Computer Vision
    cv\_results = predictor.detect\_image(project\_id, published\_name, img\_stream)
    predictions = [pred for pred in cv\_results.predictions if pred.probability &amp;gt; prediction\_threshold]
    print("Detecting brands in keyframe {}: ".format(keyframe))

    if len(predictions) == 0:
       print("No custom brands detected.")
    else:
       for brand in predictions:
           print("'{}' brand detected with confidence {:.1f}% at location {}, {}, {}, {}".format( brand.tag\_name, brand.probability \* 100, brand.bounding\_box.left, brand.bounding\_box.top, brand.bounding\_box.width, brand.bounding\_box.height))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
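&lt;p&gt;Note that Custom Vision reports bounding_box coordinates as fractions of the image (0.0 to 1.0), unlike the pixel coordinates returned by the Computer Vision brand detector. To draw or crop the detections you need to scale them by the frame size; a minimal sketch:&lt;/p&gt;

```python
def to_pixels(box, img_width, img_height):
    """Convert a normalized Custom Vision bounding box to pixel coordinates.

    `box` is any object with left/top/width/height attributes in [0, 1].
    Returns (left, top, width, height) as integer pixels.
    """
    return (round(box.left * img_width),
            round(box.top * img_height),
            round(box.width * img_width),
            round(box.height * img_height))
```

&lt;p&gt;For example, to_pixels(brand.bounding_box, 1920, 1080) gives coordinates you can pass straight to an image-drawing library.&lt;/p&gt;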



&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;And there we have it! I am able to find all the frames that contain either the Microsoft logo or the Cloud Advocacy Bit logo in my video.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GxCLxzDQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/716/1%2Ax9w-y9iaZYdmy0TwFEzqig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GxCLxzDQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/716/1%2Ax9w-y9iaZYdmy0TwFEzqig.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dq7enXRz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/709/1%2A_ZGg2CPHsvCB7_GjfMK95Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dq7enXRz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/709/1%2A_ZGg2CPHsvCB7_GjfMK95Q.png" alt=""&gt;&lt;/a&gt;Sample Key Frames with Bit&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MAWP0vk---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1014/1%2AbXyNWLU6fHK5ZKrF3hicng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MAWP0vk---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1014/1%2AbXyNWLU6fHK5ZKrF3hicng.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You now have all you need to extend the Azure Video Indexer Service with your own custom computer vision models. Below is a list of additional resources that will help you take your integration with Video Indexer to the next level.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Offline Computer Vision&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In a production system, a huge number of requests might trigger request throttling. In that case, the Azure Computer Vision service can be run in an offline container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers?WT.mc_id=vikeyframedetection-medium-abornst"&gt;How to install and run containers - Computer Vision - Azure Cognitive Services&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, the Custom Vision model can be run locally as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/tutorial-deploy-custom-vision?WT.mc_id=vikeyframedetection-medium-abornst"&gt;Tutorial - Deploy Custom Vision classifier to a device using Azure IoT Edge&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Video Indexer + Zoom Media
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Partners/IntegrationWithZoommedia"&gt;Azure-Samples/media-services-video-indexer&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Creating an Automated Video Processing Flow in Azure
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://fabriciosanchez-en.azurewebsites.net/creating-an-automated-video-processing-flow-in-azure/"&gt;Creating an automated video processing flow in Azure&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>machinelearning</category>
      <category>azure</category>
      <category>ai</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Evaluating Deep Learning Models in 10 Different Languages (With Examples)</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Tue, 05 May 2020 13:42:18 +0000</pubDate>
      <link>https://dev.to/azure/evaluating-deep-learning-models-in-10-different-languages-with-examples-3b4b</link>
      <guid>https://dev.to/azure/evaluating-deep-learning-models-in-10-different-languages-with-examples-3b4b</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--93vjUHNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ALJem382v2F1hUFZL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--93vjUHNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ALJem382v2F1hUFZL.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ONNX is an open format built to represent machine learning models.&lt;/strong&gt; ONNX defines a common set of operators — the building blocks of machine learning and deep learning models — and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers. The following post is a compilation of code samples showing how to evaluate ONNX models in 10 different programming languages.&lt;/p&gt;

&lt;h4&gt;
  
  
  #10 R
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rJfKWe3l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/724/0%2ABqjJTjVfSfQjCNUb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rJfKWe3l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/724/0%2ABqjJTjVfSfQjCNUb.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/onnx/onnx-r"&gt;onnx/onnx-r&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  #9 C++
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ajs0MuBf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/505/1%2AABuEflLQ3cVOtfT5y1tKIA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ajs0MuBf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/505/1%2AABuEflLQ3cVOtfT5y1tKIA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/onnxruntime/blob/master/samples#CC"&gt;microsoft/onnxruntime&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  #8 Java
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--spxQ7d5B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/167/0%2A7Nwv1EgLTtx46Iba" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--spxQ7d5B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/167/0%2A7Nwv1EgLTtx46Iba" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/onnxruntime/blob/master/docs/Java_API.md"&gt;microsoft/onnxruntime&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  #7 .NET Core
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nmmby_pY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/400/0%2AAQOfL4rMx9rpnl3l" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nmmby_pY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/400/0%2AAQOfL4rMx9rpnl3l" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/object-detection-onnx"&gt;Tutorial: Detect objects using an ONNX deep learning model - ML.NET&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  #6 Ruby
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uLcQw6zE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/241/0%2AKAyQ5z8U0txvGESJ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uLcQw6zE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/241/0%2AKAyQ5z8U0txvGESJ" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ankane/onnxruntime"&gt;ankane/onnxruntime&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  #5 Rust
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ooVpe4Gi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Act5qgjAi1z2S0imH.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ooVpe4Gi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Act5qgjAi1z2S0imH.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/onnxruntime-tvm/tree/master/rust"&gt;microsoft/onnxruntime-tvm&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  #4 JavaScript
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--arw-FUvZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/264/0%2AokSXgcamskyXq4F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--arw-FUvZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/264/0%2AokSXgcamskyXq4F2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/onnxjs"&gt;microsoft/onnxjs&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  #3 Python
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LJTX7Z-i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/512/0%2A9CkgIRmwjxDfgbAG.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LJTX7Z-i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/512/0%2A9CkgIRmwjxDfgbAG.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md"&gt;onnx/onnx&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  #2 Swift
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zKzoEEa9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ABpePt0HX2DUWAXkT.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zKzoEEa9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ABpePt0HX2DUWAXkT.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://link.medium.com/uJVunyb4d4"&gt;Convert fast.ai trained image classification model to iOS app via ONNX and Apple Core ML&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  #1 C
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S6J7tEMD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/349/1%2AsIVMLcNeyfxZixZOJ17NWg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S6J7tEMD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/349/1%2AsIVMLcNeyfxZixZOJ17NWg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/onnxruntime/blob/master/docs/C_API.md"&gt;microsoft/onnxruntime&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/object-detection-onnx?WT.mc.id=aiapril-medium-abornst"&gt;Tutorial: Detect objects using an ONNX deep learning model - ML.NET&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/microsoft/OLive"&gt;microsoft/OLive&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>machinelearning</category>
      <category>azure</category>
      <category>onnx</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>AI April NLP Math Teacher Challenge</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Thu, 30 Apr 2020 10:52:38 +0000</pubDate>
      <link>https://dev.to/azure/ai-april-nlp-math-teacher-challenge-ghk</link>
      <guid>https://dev.to/azure/ai-april-nlp-math-teacher-challenge-ghk</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AfnqhtHoHfKi4tREk" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AfnqhtHoHfKi4tREk"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TLDR; Build a model that can automatically solve math problems written in natural language, using the provided dataset.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenge
&lt;/h3&gt;

&lt;p&gt;Build a model that can automatically solve math problems written in natural language, using the provided dataset.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A4v1oNJ_xHRbJT5A1Yi7IoA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A4v1oNJ_xHRbJT5A1Yi7IoA.png"&gt;&lt;/a&gt;Example Challenge Problem&lt;/p&gt;

&lt;h3&gt;
  
  
  Questions
&lt;/h3&gt;

&lt;p&gt;I will be monitoring the GitHub issues of the challenge repo and &lt;a href="https://twitter.com/pythiccoder" rel="noopener noreferrer"&gt;twitter&lt;/a&gt;; feel free to reach out with any questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Download Dataset
&lt;/h3&gt;

&lt;p&gt;The math teacher dataset can be downloaded &lt;a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02//dolphin-number_word_std.zip" rel="noopener noreferrer"&gt;here&lt;/a&gt;. For more information on the subsets, see &lt;a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02//dolphin-sigmadolphin.datasets.pdf" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission
&lt;/h3&gt;

&lt;p&gt;Make a submission by creating a pull request to the submissions folder in the repo below!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aribornstein/AIApril2020NLPMAthTeacherChallenge" rel="noopener noreferrer"&gt;aribornstein/AIApril2020NLPMAthTeacherChallenge&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are new to Azure, you can get started with a free subscription using the link below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/free/?WT.mc.id=aiapril-medium-abornst" rel="noopener noreferrer"&gt;Create your Azure free account today | Microsoft Azure&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Useful Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://spacy.io/" rel="noopener noreferrer"&gt;Spacy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://allennlp.org/" rel="noopener noreferrer"&gt;Allen NLP&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aribornstein/pyNeurboParser" rel="noopener noreferrer"&gt;PyNeurboParser&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics/?WT.mc_id=aiapril-medium-abornst" rel="noopener noreferrer"&gt;Azure Text Analytics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/huggingface/transformers" rel="noopener noreferrer"&gt;Hugging Face Transformers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://towardsdatascience.com/beyond-word-embeddings-part-2-word-vectors-nlp-modeling-from-bow-to-bert-4ebd4711d0ec" rel="noopener noreferrer"&gt;Word Vectors and NLP Modeling from BoW to BERT&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/azure/9-advanced-tips-for-production-machine-learning-4ccg"&gt;9 Advanced Tips for Production Machine Learning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2015/08/dolphin18k-v1.1.zip" rel="noopener noreferrer"&gt;Dolphin 18k Data&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kevinzakka/NALU-pytorch" rel="noopener noreferrer"&gt;Neural Arithmetic Logic Units&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sympy.org/en/index.html" rel="noopener noreferrer"&gt;SymPy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://allenai.org/euclid/" rel="noopener noreferrer"&gt;Project Euclid&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/" rel="noopener noreferrer"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>azure</category>
      <category>nlp</category>
    </item>
    <item>
      <title>Protecting Personal Identifiable Information with Azure AI</title>
      <dc:creator>PythicCoder</dc:creator>
      <pubDate>Wed, 22 Apr 2020 18:47:34 +0000</pubDate>
      <link>https://dev.to/azure/protecting-personal-identifiable-information-with-azure-ai-13ao</link>
      <guid>https://dev.to/azure/protecting-personal-identifiable-information-with-azure-ai-13ao</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F600%2F0%2AZZO23S4dXW1hrnvh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F600%2F0%2AZZO23S4dXW1hrnvh.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TL;DR: The following post outlines both first-party and open-source techniques for detecting PII with Azure.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is PII?
&lt;/h3&gt;

&lt;p&gt;Personally Identifiable Information (PII) is any data that can be used to identify an individual, such as names, driver’s license numbers, SSNs, bank account numbers, passport numbers, email addresses, and more. Many regulations, from GDPR to HIPAA, require strict protection of user privacy.&lt;/p&gt;

&lt;p&gt;If you are new to Azure, you can get started with a free subscription using the link below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/free/?WT.mc.id=aiapril-medium-abornst" rel="noopener noreferrer"&gt;Create your Azure free account today | Microsoft Azure&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Detecting PII With Azure Cognitive Search (Preview)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/services/search/?WT.mc.id=aiapril-medium-abornst" rel="noopener noreferrer"&gt;Azure Cognitive Search&lt;/a&gt; is a cloud solution that provides developers APIs and tools for adding a rich search experience to their data, content and applications. With cognitive search you can add cognitive skills to apply AI processes during indexing. Doing so can add new information and structures useful for search and other scenarios.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AjQugUslgSGP0O9UfgFfs1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AjQugUslgSGP0O9UfgFfs1g.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Azure &lt;strong&gt;PII Detection&lt;/strong&gt; skill (currently in preview) extracts personally identifiable information from input text and gives you the option to mask it in various ways. This skill uses the machine learning models provided by &lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/overview/?WT.mc.id=aiapril-medium-abornst" rel="noopener noreferrer"&gt;Text Analytics&lt;/a&gt; in Cognitive Services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-pii-detection/?WT.mc.id=aiapril-medium-abornst" rel="noopener noreferrer"&gt;PII Detection cognitive skill (preview) - Azure Cognitive Search&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Detecting PII With Microsoft Presidio
&lt;/h3&gt;

&lt;p&gt;In addition to the first-party Cognitive Search skill, Microsoft provides an open-source PII detection tool for Azure called Presidio, developed by the Microsoft Commercial Software Engineering team in Israel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/presidio" rel="noopener noreferrer"&gt;microsoft/presidio&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why use Presidio?
&lt;/h3&gt;

&lt;p&gt;Presidio is open source, transparent, and scalable. It allows developers and data scientists to customize or add new PII recognizers via API or code to best fit their anonymization needs, and it leverages Docker and Kubernetes for workloads at scale.&lt;/p&gt;
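&lt;p&gt;To illustrate the idea behind a custom recognizer, here is a minimal, standard-library-only Python sketch of pattern-based PII detection and masking. This is not the Presidio API; the entity names and regular expressions are illustrative assumptions only.&lt;/p&gt;

```python
import re

# Illustrative patterns only -- real recognizers (e.g. Presidio's)
# combine patterns with context words and confidence scoring.
PATTERNS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def detect_pii(text):
    """Return a (entity_type, match, start, end) tuple per detected entity."""
    findings = []
    for entity, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            findings.append((entity, m.group(), m.start(), m.end()))
    return findings

def anonymize(text):
    """Replace every detected entity with an [ENTITY_TYPE] placeholder."""
    for entity, pattern in PATTERNS.items():
        text = pattern.sub("[" + entity + "]", text)
    return text
```

&lt;p&gt;Rules like these cover only well-formatted identifiers; a production recognizer also needs context awareness and per-match confidence, which is exactly what Presidio layers on top.&lt;/p&gt;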

&lt;p&gt;Presidio automatically detects personally identifiable information (PII) in unstructured text, anonymizes it based on one or more anonymization mechanisms, and returns a string with no personally identifiable data. For example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A7SsFA7s10OOlFesK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A7SsFA7s10OOlFesK.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For each PII entity, Presidio returns a confidence score:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AOE5j2ifYa7lcgIaK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AOE5j2ifYa7lcgIaK.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Text anonymization in images&lt;/em&gt; (beta)&lt;/p&gt;

&lt;p&gt;Presidio uses OCR to detect text in images and further allows redacting that text from the original image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AMKEACNa3uTdSKl-_.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AMKEACNa3uTdSKl-_.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo
&lt;/h3&gt;

&lt;p&gt;Check out the public demo at the link below to try Presidio on your own data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://presidio-demo.azurewebsites.net/" rel="noopener noreferrer"&gt;Presidio&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying
&lt;/h3&gt;

&lt;p&gt;Installation Steps&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate into the \deployment folder from the command line.&lt;/li&gt;
&lt;li&gt;If you have Helm installed but haven’t run helm init, execute &lt;a href="https://github.com/microsoft/presidio/blob/master/deploy-helm.sh" rel="noopener noreferrer"&gt;deploy-helm.sh&lt;/a&gt; in the command line. It will install Tiller (the Helm server side) on your cluster and grant it sufficient permissions.&lt;/li&gt;
&lt;li&gt;Grant the Kubernetes cluster access to the container registry by following these instructions to &lt;a href="https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration?WT.mc.id=aiapril-medium-abornst" rel="noopener noreferrer"&gt;grant the AKS cluster access to the ACR&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;If you already have Helm and Tiller configured, or if you installed them in the previous step, execute &lt;a href="https://github.com/microsoft/presidio/blob/master/deploy-presidio.sh" rel="noopener noreferrer"&gt;deploy-presidio.sh&lt;/a&gt; in the command line as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy-presidio.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More information can be found in the GitHub repo, and near-one-click deployment options for Azure are coming soon!&lt;/p&gt;

&lt;p&gt;Additional deployment options can be found &lt;a href="https://github.com/microsoft/presidio#give-it-a-try" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;p&gt;In this post, you learned about two of my favorite options for detecting PII in your data with Azure. If you are interested in Azure and AI, be sure to check out &lt;a href="https://medium.com/@aribornstein" rel="noopener noreferrer"&gt;my other posts&lt;/a&gt; and the &lt;a href="https://medium.com/microsoftazure" rel="noopener noreferrer"&gt;Azure medium blog&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/aaron-ari-bornstein-22aa7a77/" rel="noopener noreferrer"&gt;&lt;strong&gt;Aaron (Ari) Bornstein&lt;/strong&gt;&lt;/a&gt; is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.&lt;/p&gt;




</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>azure</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
