<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dmitry Soshnikov</title>
    <description>The latest articles on DEV Community by Dmitry Soshnikov (@shwars).</description>
    <link>https://dev.to/shwars</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F232492%2Ffeae51aa-c4ab-4fc7-8d37-3ec887f0d167.JPG</url>
      <title>DEV Community: Dmitry Soshnikov</title>
      <link>https://dev.to/shwars</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shwars"/>
    <language>en</language>
    <item>
      <title>Making an Interactive Cognitive Portrait Exhibit using some Creativity, .NET, Azure Functions and Cognitive Services Magic</title>
      <dc:creator>Dmitry Soshnikov</dc:creator>
      <pubDate>Wed, 15 Apr 2020 20:55:56 +0000</pubDate>
      <link>https://dev.to/azure/making-an-interactive-cognitive-portrait-exhibit-using-some-creativity-net-azure-functions-and-cognitive-services-magic-2ob1</link>
      <guid>https://dev.to/azure/making-an-interactive-cognitive-portrait-exhibit-using-some-creativity-net-azure-functions-and-cognitive-services-magic-2ob1</guid>
      <description>&lt;p&gt;As you may know, I am a big fan of Science Art. This January, Moscow &lt;a href="http://electromuseum.ru/en" rel="noopener noreferrer"&gt;ElectroMuseum&lt;/a&gt; made an open call for Open Museum 2020 Exhibition. In this post, I will describe the exhibit that I made, and how it was transformed due to quarantine and museum closing.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This post is a part of the &lt;a href="http://aka.ms/AIApril" rel="noopener noreferrer"&gt;AI April&lt;/a&gt; initiative, where each day of April my colleagues publish a new original article related to AI, Machine Learning, and Microsoft. Have a look at the &lt;a href="http://aka.ms/AIApril" rel="noopener noreferrer"&gt;Calendar&lt;/a&gt; to find other interesting articles that have already been published, and keep checking that page during the month.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In my &lt;a href="http://soshnikov.com/scienceart/peopleblending/" rel="noopener noreferrer"&gt;earlier post&lt;/a&gt;, I described the &lt;strong&gt;Cognitive Portrait&lt;/strong&gt; technique to produce blended portraits of people from a series of photographs:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fart%2Folgaza.jpg" alt="Cognitive Portrait"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img alt="Cogntive Protrait" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fart%2FCirc3.jpg"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;em&gt;Olga&lt;/em&gt;, 2019, &lt;a href="http://aka.ms/peopleblending" rel="noopener noreferrer"&gt;People Blending&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;em&gt;People Circle&lt;/em&gt;, 2020, &lt;a href="http://aka.ms/cognitiveportrait" rel="noopener noreferrer"&gt;Cognitive Portrait&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This January, &lt;a href="http://electromuseum.ru/en" rel="noopener noreferrer"&gt;Moscow ElectroMuseum&lt;/a&gt; made a call for artists to submit their ideas for the &lt;a href="http://electromuseum.ru/event/otkrytyj-muzej-2020/" rel="noopener noreferrer"&gt;OpenMuseum&lt;/a&gt; exhibition, so I immediately thought of turning the Cognitive Portrait idea into something more interactive. What I wanted to create was an interactive stand that would capture photographs of people passing nearby and transform them into an "average" photograph of exhibition visitors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Exhibit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Exhibit.png" alt="Exhibit Overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because Cognitive Portrait relies on using &lt;a href="https://docs.microsoft.com/azure/cognitive-services/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;Cognitive Services&lt;/a&gt; to extract &lt;a href="https://docs.microsoft.com/azure/cognitive-services/face/concepts/face-detection#face-landmarks?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;face landmarks&lt;/a&gt;, the exhibit needs to be Internet-connected. This provides some additional advantages, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storing the produced images and making them available for later exhibitions&lt;/li&gt;
&lt;li&gt;Being able to collect a demographic portrait of the people at the exhibition, including age distribution, gender, and the amount of time people spend in front of the exhibit. I will not explore this functionality in this post, though.&lt;/li&gt;
&lt;li&gt;If you want to choose a different &lt;a href="http://github.com/CloudAdvocacy/CognitivePortrait" rel="noopener noreferrer"&gt;cognitive portrait technique&lt;/a&gt;, you only need to change the functionality in the cloud, so you can in fact swap exhibits without physically visiting the museum.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;From an architectural point of view, the exhibit consists of two parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client UWP application&lt;/strong&gt;, which runs on a computer with a monitor and webcam installed at the museum. A &lt;a href="https://docs.microsoft.com/windows/uwp/get-started/universal-application-platform-guide/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;UWP application&lt;/a&gt; can also run on a Raspberry Pi under &lt;a href="https://docs.microsoft.com/windows/iot-core/tutorials/rpi/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;Windows IoT Core&lt;/a&gt;. The client application does the following:

&lt;ul&gt;
&lt;li&gt;Detects a person standing in front of the camera&lt;/li&gt;
&lt;li&gt;When a person stays relatively still for a few seconds, it takes a picture and sends it to the cloud&lt;/li&gt;
&lt;li&gt;When a result is received from the cloud, it shows it on the screen
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cloud backend&lt;/strong&gt;, which does the following:

&lt;ul&gt;
&lt;li&gt;Receives the picture from the client&lt;/li&gt;
&lt;li&gt;Applies an affine transformation to align the picture with predefined eye coordinates, and stores the result&lt;/li&gt;
&lt;li&gt;Creates the result image from a few previous (already aligned) pictures, and returns it to the client&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
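The alignment step described above can be sketched in plain Python. This is an illustration only (the actual backend uses OpenCV to warp the image); the idea is that, given the two detected eye positions and two fixed target positions, a similarity transform (rotation, uniform scale, translation) can be computed via a complex-number trick:

```python
# Sketch of the eye-alignment step. Assumption: the real backend applies
# the resulting 2x3 matrix with OpenCV's warpAffine; here we only show
# how such a matrix can be derived from two point correspondences.

def eye_alignment_matrix(src_eyes, dst_eyes):
    """Return a 2x3 affine matrix (rotation + uniform scale + translation)
    mapping the two detected eye points onto the two target points."""
    # Treat 2D points as complex numbers: the map z -> a*z + b is exactly
    # a rotation + uniform scaling (a) followed by a translation (b).
    p1, p2 = (complex(*p) for p in src_eyes)
    q1, q2 = (complex(*q) for q in dst_eyes)
    a = (q2 - q1) / (p2 - p1)
    b = q1 - a * p1
    # For z = x + i*y: Re(a*z + b) = a.real*x - a.imag*y + b.real, etc.
    return [[a.real, -a.imag, b.real],
            [a.imag,  a.real, b.imag]]

def apply_affine(m, pt):
    """Apply a 2x3 affine matrix to a 2D point."""
    x, y = pt
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

Once every photo is warped with such a matrix, the eyes of all visitors land on the same pixel coordinates, and blending a few previous pictures reduces to simple per-pixel averaging.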

&lt;p&gt;Below, we will discuss different options for implementing the client and the cloud parts, and select the best ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Exhibit-Arch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Exhibit-Arch.png" alt="Exhibit Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual Exhibit Bot
&lt;/h2&gt;

&lt;p&gt;After the exhibit had been developed and shown in the museum for a few weeks, news came that the museum would be closed for quarantine for an indefinite period of time. This sparked the idea of turning the exhibit into a virtual one, so that people could interact with it without leaving their homes.&lt;/p&gt;

&lt;p&gt;The interaction is done via the Telegram chat-bot &lt;a href="http://t.me/peopleblenderbot" rel="noopener noreferrer"&gt;@PeopleBlenderBot&lt;/a&gt;. The bot calls the cloud backend via the same REST API as the UWP client application, and the image it receives back is returned as the bot's response. Thus, a user can send their picture and get back a people-blended version of it, incorporating other users of the bot together with real people in the gallery.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;With &lt;a href="http://t.me/peopleblenderbot" rel="noopener noreferrer"&gt;&lt;strong&gt;@PeopleBlenderBot&lt;/strong&gt;&lt;/a&gt;, the exhibit spans both real and virtual worlds, blending people's faces from their homes and from the exhibition together into one collective image. This new kind of &lt;strong&gt;virtual art&lt;/strong&gt; is a true way to break boundaries and bring people together regardless of their physical location, city or country. You can test the bot &lt;a href="http://soshnikov.com/museum/peopleblenderbot" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
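Conceptually, the REST contract that both the bot and the UWP client rely on is minimal: POST the raw image bytes, and read the resulting image URL back as plain text. A hypothetical Python sketch of the client side (the endpoint URL is a placeholder, and the use of `urllib` is an assumption for illustration, not the bot's actual HTTP stack):

```python
from urllib import request

def call_cognitive_function(function_url, image_bytes):
    """POST raw image bytes to the backend; the response body is expected
    to be the URL of the generated cognitive-portrait image."""
    req = request.Request(function_url, data=image_bytes, method="POST")
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Hypothetical usage:
# url = call_cognitive_function("https://example.azurewebsites.net/api/pdraw",
#                               open("selfie.jpg", "rb").read())
```

Because the contract is just bytes-in / URL-out, any new front end (a bot, a kiosk, a web page) can be added without touching the backend.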

&lt;h2&gt;
  
  
  UWP Client Application
&lt;/h2&gt;

&lt;p&gt;The main reason I chose the &lt;a href="https://docs.microsoft.com/windows/uwp/get-started/universal-application-platform-guide/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;Universal Windows Platform&lt;/a&gt; for the client application is that it has face detection functionality out of the box. It is also available on a Raspberry Pi controller through &lt;a href="https://docs.microsoft.com/windows/iot-core/tutorials/rpi/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;Windows 10 IoT Core&lt;/a&gt;; however, that turned out to be quite slow, so for my exhibit I used an &lt;a href="https://www.intel.ru/content/www/ru/ru/products/boards-kits/nuc.html" rel="noopener noreferrer"&gt;Intel NUC&lt;/a&gt; compact computer.&lt;/p&gt;

&lt;p&gt;The user interface of the application looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-UWP-UI.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-UWP-UI.png" alt="Cognitive Portrait UWP UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The corresponding XAML layout (slightly simplified) looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}"&amp;gt;
    &amp;lt;Grid.ColumnDefinitions&amp;gt;
        &amp;lt;ColumnDefinition Width="*"/&amp;gt;
        &amp;lt;ColumnDefinition Width="*"/&amp;gt;
    &amp;lt;/Grid.ColumnDefinitions&amp;gt;
    &amp;lt;Grid.RowDefinitions&amp;gt;
        &amp;lt;RowDefinition Height="*"/&amp;gt;
        &amp;lt;RowDefinition Height="170"/&amp;gt;
    &amp;lt;/Grid.RowDefinitions&amp;gt;
    &amp;lt;Grid x:Name="FacesCanvas" Grid.Row="0" Grid.Column="0"&amp;gt;
        &amp;lt;CaptureElement x:Name="ViewFinder" /&amp;gt;
        &amp;lt;Rectangle x:Name="FaceRect"/&amp;gt;
        &amp;lt;TextBlock x:Name="Counter" FontSize="60"/&amp;gt;
    &amp;lt;/Grid&amp;gt;
    &amp;lt;Grid x:Name="ResultCanvas" Grid.Row="0" Grid.Column="1"&amp;gt;
        &amp;lt;Image x:Name="ResultImage" Source="Assets/bgates.jpg"/&amp;gt;
    &amp;lt;/Grid&amp;gt;
    &amp;lt;ItemsControl x:Name="FacesLine" Grid.Row="1" Grid.ColumnSpan="2" 
              ItemsSource="{x:Bind Faces,Mode=OneWay}"/&amp;gt;
&amp;lt;/Grid&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here the most important elements are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ViewFinder&lt;/code&gt; to display the live feed from the camera&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;FaceRect&lt;/code&gt; and &lt;code&gt;Counter&lt;/code&gt; are elements overlaid on top of &lt;code&gt;ViewFinder&lt;/code&gt; to display a rectangle around the recognized face, and to show a 3-2-1 countdown before the picture is taken&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ResultImage&lt;/code&gt; is the area to display the result received from the cloud&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;FacesLine&lt;/code&gt; is the horizontal strip of previous visitors' captured faces displayed at the bottom of the screen. It is declaratively bound to the &lt;code&gt;Faces&lt;/code&gt; observable collection in our C# code, so to display a new face we just need to add an element to that collection&lt;/li&gt;
&lt;li&gt;All the rest of XAML code is used to lay out the elements properly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code to implement face detection is available &lt;a href="https://docs.microsoft.com/en-us/samples/microsoft/windows-universal-samples/camerafacedetection/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;in this sample&lt;/a&gt;. It is a little overcomplicated, so I took the liberty of simplifying it, and will simplify it even further here for the sake of clarity.&lt;/p&gt;

&lt;p&gt;First, we need to start the camera and render its output into the &lt;code&gt;ViewFinder&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;MC&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;MediaCapture&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;cameras&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;DeviceInformation&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FindAllAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                              &lt;span class="n"&gt;DeviceClass&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;VideoCapture&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;camera&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cameras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;First&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;settings&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;MediaCaptureInitializationSettings&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 
                          &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;VideoDeviceId&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;camera&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;MC&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;InitializeAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;ViewFinder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MC&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, we create &lt;code&gt;FaceDetectionEffect&lt;/code&gt;, which will be responsible for detecting faces. Once the face is detected, it will fire &lt;code&gt;FaceDetectedEvent&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;def&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;FaceDetectionEffectDefinition&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;def&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SynchronousDetectionEnabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;def&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DetectionMode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;FaceDetectionMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HighPerformance&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;FaceDetector&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;FaceDetectionEffect&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;MC&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddVideoEffectAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;def&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MediaStreamType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;VideoPreview&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="n"&gt;FaceDetector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FaceDetected&lt;/span&gt; &lt;span class="p"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;FaceDetectedEvent&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;FaceDetector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DesiredDetectionInterval&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;TimeSpan&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromMilliseconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;100&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;FaceDetector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;MC&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;StartPreviewAsync&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once a face is detected and &lt;code&gt;FaceDetectedEvent&lt;/code&gt; is called, we start a countdown timer that fires every second and updates the &lt;code&gt;Counter&lt;/code&gt; text block to display the 3-2-1 message. Once the counter reaches 0, we capture the image from the camera to a &lt;code&gt;MemoryStream&lt;/code&gt; and call the web service in the cloud using &lt;code&gt;CallCognitiveFunction&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;MemoryStream&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;MC&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CapturePhotoToStreamAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="n"&gt;ImageEncodingProperties&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateJpeg&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AsRandomAccessStream&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;cb&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;GetCroppedBitmapAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;DFace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FaceBox&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;Faces&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cb&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;CallCognitiveFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;ResultImage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;BitmapImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Uri&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We assume that the REST service takes the input image, does all the magic to create the resulting image, stores it in a public blob in the cloud, and returns the URL of the image. We then assign this URL to the &lt;code&gt;Source&lt;/code&gt; property of the &lt;code&gt;ResultImage&lt;/code&gt; control, which renders the image on the screen (the UWP runtime is responsible for downloading the image from the cloud).&lt;/p&gt;

&lt;p&gt;Note also that the face is cropped out of the picture using &lt;code&gt;GetCroppedBitmapAsync&lt;/code&gt; and added to the &lt;code&gt;Faces&lt;/code&gt; collection, which makes it automatically appear in the UI.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CallCognitiveFunction&lt;/code&gt; makes a fairly standard call to the REST endpoint using the &lt;code&gt;HttpClient&lt;/code&gt; class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;CallCognitiveFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MemoryStream&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Position&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;PostAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;function_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;StreamContent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ReadAsStringAsync&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Azure Function to Create Cognitive Portrait
&lt;/h2&gt;

&lt;p&gt;To do the main job, we will create an Azure Function in Python. Using Azure Functions to manage executable code in the cloud has many benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You do not have to think about dedicated compute, i.e. how and where the code is executed. That is why Azure Functions are also called &lt;strong&gt;serverless&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In the &lt;a href="https://docs.microsoft.com/azure/azure-functions/functions-scale#consumption-plan/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;&lt;em&gt;consumption plan&lt;/em&gt;&lt;/a&gt;, you only pay for actual function calls, not for the uptime of the function. The only downside of this plan is that execution time is limited, and if a call takes too long to execute, it will time out. In our case this should not be a problem, as our algorithm is pretty fast.&lt;/li&gt;
&lt;li&gt;The function is auto-scaled based on demand, so we do not have to manage scalability explicitly.&lt;/li&gt;
&lt;li&gt;Azure Functions can be triggered by many different cloud events. In our case, we fire the function with a REST call, but it could also fire when a blob is added to a storage account, or when a new queue message arrives.&lt;/li&gt;
&lt;li&gt;Azure Functions integrate with storage in a simple, declarative manner.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just an example: suppose we want to imprint the current date on a photograph. We can write a function, specify that it is triggered by a new blob item, and that the output of the function should go to another blob. To implement the imprinting, we only need to provide the code for the image manipulation; the input image is read from blob storage and the result is stored back automatically, and we just use them as function parameters.&lt;/p&gt;
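For that hypothetical date-imprinting function, the bindings file might look like the sketch below. The container names `photos-in` / `photos-out` and the function name are made up for illustration; the binding types themselves are the standard Azure Functions blob trigger and blob output bindings:

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "blobTrigger",
      "direction": "in",
      "name": "inputblob",
      "path": "photos-in/{name}",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "outputblob",
      "path": "photos-out/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

The Python `main` then simply receives `inputblob` and assigns the processed bytes to `outputblob`; the runtime handles reading and writing the blobs.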

&lt;blockquote&gt;
&lt;p&gt;Azure Functions are so useful that my rule of thumb is to always use Azure Functions whenever you need to do some relatively simple processing in the cloud.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In our case, because the Cognitive Portrait algorithm is developed in Python and requires OpenCV, we will create a Python Azure Function. The new version (V2) of Python Functions has been significantly improved, so if you have heard anything bad about the Python implementation of Azure Functions before, forget it. &lt;/p&gt;

&lt;p&gt;The easiest way to start developing a function is to code locally. The process is &lt;a href="https://docs.microsoft.com/azure/azure-functions/functions-create-first-azure-function-azure-cli?pivots=programming-language-python&amp;amp;tabs=bash%2Cbrowser&amp;amp;WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;well-described in the documentation&lt;/a&gt;, but let me outline it here. You may also read &lt;a href="https://docs.microsoft.com/azure/developer/python/tutorial-vs-code-serverless-python-01/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;this tutorial&lt;/a&gt; to get familiar with performing these operations from VS Code.&lt;/p&gt;

&lt;p&gt;First, you create a function using CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;func init coportrait –python
&lt;span class="nb"&gt;cd &lt;/span&gt;coportrait
func new &lt;span class="nt"&gt;--name&lt;/span&gt; pdraw &lt;span class="nt"&gt;--template&lt;/span&gt; &lt;span class="s2"&gt;"HTTP trigger"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use these commands, you need to have &lt;a href="https://docs.microsoft.com/azure/azure-functions/functions-run-local?tabs=windows%2Ccsharp%2Cbash&amp;amp;WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;Azure Functions Core Tools&lt;/a&gt; installed.&lt;/p&gt;

&lt;p&gt;The function is mainly described by two files: one contains the Python code (in our case, &lt;code&gt;__init__.py&lt;/code&gt;), and the other, &lt;code&gt;function.json&lt;/code&gt;, describes the integrations: how the function is triggered, and which Azure objects are passed to and from the function as input/output parameters.&lt;/p&gt;

&lt;p&gt;For our simple function triggered by an HTTP request, &lt;code&gt;function.json&lt;/code&gt; would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"scriptFile"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"__init__.py"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"bindings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;      &lt;/span&gt;&lt;span class="nl"&gt;"authLevel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"function"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"httpTrigger"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;      &lt;/span&gt;&lt;span class="nl"&gt;"direction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"in"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"req"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;      &lt;/span&gt;&lt;span class="nl"&gt;"methods"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"post"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;      &lt;/span&gt;&lt;span class="nl"&gt;"direction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"out"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$return"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}]}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here the name of the script file is specified, the input parameter &lt;code&gt;req&lt;/code&gt; is bound to the incoming HTTP trigger, and the output parameter is the HTTP response, which is returned as the function value (&lt;code&gt;$return&lt;/code&gt;). We also specify that the function supports only the &lt;strong&gt;POST&lt;/strong&gt; method; I have removed &lt;code&gt;"get"&lt;/code&gt;, which was initially present there from the template.&lt;/p&gt;
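&lt;p&gt;Putting it together, a minimal &lt;code&gt;function.json&lt;/code&gt; for this setup could look like the sketch below (the &lt;code&gt;scriptFile&lt;/code&gt; value assumes the default entry-point file name):&lt;/p&gt;

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    { "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "post" ] },
    { "type": "http", "direction": "out", "name": "$return" }
  ]
}
```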

&lt;p&gt;If we look into &lt;code&gt;__init__.py&lt;/code&gt;, we find some template code to start with, which looks like this (in practice it is a little more complicated, but never mind):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;def&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HttpRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Execution begins…&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;req&lt;/code&gt; is the original request. To get the input image, which is encoded as a binary JPEG stream, we need the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_body&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;nparr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromstring&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uint8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imdecode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nparr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IMREAD_COLOR&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
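&lt;p&gt;Note that &lt;code&gt;np.fromstring&lt;/code&gt; is deprecated in recent NumPy versions; for parsing raw bytes like this, &lt;code&gt;np.frombuffer&lt;/code&gt; is the drop-in replacement. A minimal sketch (the three bytes are just a stand-in for a real request body):&lt;/p&gt;

```python
import numpy as np

# A stand-in for the raw request body (a real JPEG stream starts with FF D8 FF)
body = bytes([0xFF, 0xD8, 0xFF])

# np.frombuffer is the non-deprecated equivalent of np.fromstring for raw bytes
nparr = np.frombuffer(body, np.uint8)
print(nparr.tolist())  # [255, 216, 255]
```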



&lt;p&gt;To work with storage, in our case we will use the &lt;code&gt;azure.storage.blob.BlockBlobService&lt;/code&gt; object directly, rather than function bindings. The reason for this is that we need quite a lot of storage operations, and passing many parameters in and out of the function may be confusing. You may want to see &lt;a href="https://docs.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-python?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;more documentation&lt;/a&gt; on working with Azure Blob storage from Python.&lt;/p&gt;

&lt;p&gt;So we will begin by storing the incoming image into &lt;code&gt;cin&lt;/code&gt; blob container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BlockBlobService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;...,&lt;/span&gt; &lt;span class="n"&gt;account_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;...)&lt;/span&gt; 
&lt;span class="n"&gt;sec_p&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;end_date&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;total_seconds&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;  &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;sec_p&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;09&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;%Y%m%d-%H%M%S&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_blob_from_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cin&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need one trick here with file naming. Because later on we will blend together the 10 most recent photographs, we need a way to retrieve the 10 latest files without browsing through all blobs in a container. Since blob names are returned in alphabetical order, we need filenames that sort alphabetically with the most recent file first. The way I solve this problem is to calculate the number of seconds from the current time until January 1, 2021, and prepend this number, padded with leading zeroes, to the filename (which is a normal &lt;em&gt;YYYYMMDD-HHMMSS&lt;/em&gt; timestamp). The later the photo, the smaller the countdown, so newer files sort first.&lt;/p&gt;
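&lt;p&gt;The effect is easy to check in isolation: with a zero-padded countdown prefix, a later photo gets a smaller prefix and therefore sorts first. The names below are made up purely for illustration:&lt;/p&gt;

```python
# Hypothetical blob names: zero-padded countdown-to-2021 prefix + capture timestamp.
# The second photo was taken later, so its countdown prefix is smaller.
earlier = "022222222-20200415-120000"
later = "022222120-20200415-120142"

# Alphabetical order (the order a blob listing returns) puts the newest photo first
assert sorted([earlier, later]) == [later, earlier]
```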

&lt;p&gt;The next thing we will do is call &lt;a href="https://docs.microsoft.com/azure/cognitive-services/face/overview/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;Face API&lt;/a&gt; to extract face landmarks and position the image so that the two eyes and the middle of the mouth occupy predefined coordinates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;cogface&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;FaceClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cognitive_endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="nc"&gt;CognitiveServicesCredentials&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cognitive_key&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cogface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;face&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detect_with_stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BytesIO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                        &lt;span class="n"&gt;return_face_landmarks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;tr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;affine_transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;face_landmarks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_dict&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.jpg&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;tr&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_blob_from_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cmapped&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tobytes&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Function &lt;code&gt;affine_transform&lt;/code&gt; is the same as in my &lt;a href="http://soshnikov.com/scienceart/peopleblending/" rel="noopener noreferrer"&gt;other post&lt;/a&gt;. Once the image has been transformed, it is stored as JPEG picture into &lt;code&gt;cmapped&lt;/code&gt; blob container.&lt;/p&gt;

&lt;p&gt;As the last step, we need to prepare the blended image from the last 10 pictures, store it into a blob, and return its URL. To get the last 10 images from blob storage, we obtain an iterator over all blobs using the &lt;code&gt;list_blobs&lt;/code&gt; function, take the first 10 elements using &lt;code&gt;islice&lt;/code&gt;, and then decode the images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;imgs&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="nf"&gt;imdecode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_blob_to_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cmapped&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;
&lt;span class="err"&gt;         &lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="ow"&gt;in&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;itertools&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;islice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list_blobs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cmapped&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;imgs&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;imgs&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;float32&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
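&lt;p&gt;Since &lt;code&gt;list_blobs&lt;/code&gt; returns a lazy iterator, &lt;code&gt;islice&lt;/code&gt; lets us stop after 10 items without enumerating the whole container. The same pattern, with a plain generator standing in for the blob listing:&lt;/p&gt;

```python
import itertools

# A generator standing in for blob.list_blobs("cmapped"): names come back in sorted order
blob_names = (f"{i:09d}-20200415-120000" for i in range(1000))

# Take just the first 10 names; the remaining 990 are never pulled from the iterator
first10 = list(itertools.islice(blob_names, 10))
print(len(first10))  # 10
```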



&lt;p&gt;To get the blended picture, we need to average all images along the first axis, which is done with just one numpy call. To make this averaging possible, we converted all values in the code above to &lt;code&gt;np.float32&lt;/code&gt;, and after averaging we convert back to &lt;code&gt;np.uint8&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;average&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;imgs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uint8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
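&lt;p&gt;The averaging step can be checked on two tiny made-up 2×2 "images": averaging along &lt;code&gt;axis=0&lt;/code&gt; blends the stacked pictures pixel by pixel:&lt;/p&gt;

```python
import numpy as np

# Two hypothetical 2x2 grayscale "images", stacked along a new first axis
imgs = np.array([[[0, 100], [200, 50]],
                 [[100, 100], [0, 250]]]).astype(np.float32)

# Average along axis 0 (i.e. across images), then convert back to pixel values
res = np.average(imgs, axis=0).astype(np.uint8)
print(res.tolist())  # [[50, 100], [100, 150]]
```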



&lt;p&gt;Finally, storing the image in blob storage is done in a very similar manner to the code above: we encode the image to JPEG with &lt;code&gt;cv2.imencode&lt;/code&gt;, and then call &lt;code&gt;create_blob_from_bytes&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.jpg&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_blob_from_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;out&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.jpg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tobytes&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;result_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;act&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.blob.core.windows.net/out/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.jpg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result_url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After putting the function code into &lt;code&gt;__init__.py&lt;/code&gt;, we should also not forget to specify all dependencies in &lt;code&gt;requirements.txt&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;azure-functions
opencv-python
azure-cognitiveservices-vision-face
azure-storage-blob==1.5.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, we can run the function locally by issuing the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;func start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will start a local web server and print the URL of the function, which we can call using Postman to verify that it works. Make sure to use the &lt;strong&gt;POST&lt;/strong&gt; method, and to pass the original image as the binary body of the request.&lt;br&gt;
We can also assign this URL to the &lt;code&gt;function_url&lt;/code&gt; variable in our UWP application, and run it on the local machine.&lt;/p&gt;

&lt;p&gt;To publish Azure Function to the cloud, we need first to create the Python Azure Function through the &lt;a href="http://portal.azure.com/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;Azure Portal&lt;/a&gt;, or through Azure CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az functionapp create &lt;span class="nt"&gt;--resource-group&lt;/span&gt; PeopleBlenderBot
    &lt;span class="nt"&gt;--os-type&lt;/span&gt; Linux   &lt;span class="nt"&gt;--consumption-plan-location&lt;/span&gt; westeurope
    &lt;span class="nt"&gt;--runtime&lt;/span&gt; python  &lt;span class="nt"&gt;--runtime-version&lt;/span&gt; 3.7
    &lt;span class="nt"&gt;--functions-version&lt;/span&gt; 2
    &lt;span class="nt"&gt;--name&lt;/span&gt; coportrait &lt;span class="nt"&gt;--storage-account&lt;/span&gt; coportraitstore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I recommend doing it through the Azure Portal, because it will automatically create the required blob storage, and the whole process is easier for beginners. Once you start looking for ways to automate it, go with the Azure CLI.&lt;/p&gt;

&lt;p&gt;After the function has been created, deploying it is really easy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;func azure functionapp publish coportrait
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the function has been published, go to the &lt;a href="http://portal.azure.com/?WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;Azure Portal&lt;/a&gt;, look for the function and copy the function URL.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Exhibit-AzFunc-Portal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Exhibit-AzFunc-Portal.png" alt="Azure Functions Portal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The URL should look similar to this: &lt;code&gt;https://coportrait.azurewebsites.net/api/pdraw?code=geE..e3P==&lt;/code&gt;. Assign this link (together with the key) to the &lt;code&gt;function_url&lt;/code&gt; variable in your UWP app, start it, and you should be good to go!&lt;/p&gt;
&lt;h2&gt;
  
  
  Creating the Chat Bot
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;During social isolation, we need to create a way for people to interact with the exhibit from their homes. The best way to do it is by creating a chat bot using Microsoft Bot Framework.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the described architecture all the processing happens in the cloud, which makes it possible to create additional user interfaces to the same virtual exhibit. Let's go ahead and create a chatbot interface using Microsoft Bot Framework.&lt;/p&gt;

&lt;p&gt;The process of creating a bot in C# is &lt;a href="https://docs.microsoft.com/azure/bot-service/dotnet/bot-builder-dotnet-sdk-quickstart?view=azure-bot-service-4.0&amp;amp;WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;described in Microsoft Docs&lt;/a&gt;, or &lt;a href="https://docs.microsoft.com/ru-ru/azure/bot-service/bot-builder-tutorial-basic-deploy?view=azure-bot-service-4.0&amp;amp;tabs=csharp&amp;amp;WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;in this short tutorial&lt;/a&gt;. You can also &lt;a href="https://docs.microsoft.com/azure/bot-service/python/bot-builder-python-quickstart?view=azure-bot-service-4.0&amp;amp;WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;create the bot in Python&lt;/a&gt;, but we will go with .NET as the better documented option.&lt;/p&gt;

&lt;p&gt;First of all, we need to install the &lt;a href="https://marketplace.visualstudio.com/items?itemName=BotBuilder.botbuilderv4" rel="noopener noreferrer"&gt;VS Bot Template&lt;/a&gt; for Visual Studio, and then create a Bot project with the &lt;strong&gt;Echo&lt;/strong&gt; template:&lt;br&gt;
&lt;a href="/images/blog/CoPort-Bot-Create.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/CoPort-Bot-Create.png" alt="Creating Bot Project"&gt;&lt;/a&gt;&lt;br&gt;
I will call the project &lt;code&gt;PeopleBlenderBot&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The main logic of the bot is located in the &lt;code&gt;Bots\EchoBot.cs&lt;/code&gt; file, inside the &lt;code&gt;OnMessageActivityAsync&lt;/code&gt; function. This function receives the incoming message as an &lt;code&gt;Activity&lt;/code&gt; object, which contains the message &lt;code&gt;Text&lt;/code&gt;, as well as its &lt;code&gt;Attachments&lt;/code&gt;. We first need to check whether the user has attached an image to the message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;turnContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Activity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Attachments&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="n"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="c1"&gt;// do the magic&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;turnContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SendActivityAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Please send picture"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the user has attached an image, it will be available to us via the &lt;code&gt;ContentUrl&lt;/code&gt; field of the &lt;code&gt;Attachment&lt;/code&gt; object. Inside the &lt;code&gt;if&lt;/code&gt; block, we first fetch this image as an HTTP stream:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;  &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;HttpClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Attachments&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;ContentUrl&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;str&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ReadAsStreamAsync&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, we pass this stream to our Azure function, in the same way as we did in the UWP application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;  &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;PostAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;function_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;StreamContent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;str&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ReadAsStringAsync&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have obtained the URL of the image, we need to pass it back to the user as a Hero card attachment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;  &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MessageFactory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Attachment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;HeroCard&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;Images&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;CardImage&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;CardImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;ToAttachment&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;turnContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SendActivityAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can run the bot locally and test it using &lt;a href="https://github.com/microsoft/BotFramework-Emulator" rel="noopener noreferrer"&gt;Bot Framework Emulator&lt;/a&gt;:&lt;br&gt;
&lt;a href="/images/blog/CoPort-Exhibit-BotEmulator.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/CoPort-Exhibit-BotEmulator.png" alt="Bot Framework Emulator"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you make sure the bot works, you can deploy it to Azure as &lt;a href="https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-deploy-az-cli?view=azure-bot-service-4.0&amp;amp;tabs=csharp&amp;amp;WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;described in Docs&lt;/a&gt; using the Azure CLI and an ARM template. However, you can also do it manually through the Azure Portal:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new &lt;strong&gt;Web App Bot&lt;/strong&gt; in the portal, selecting &lt;strong&gt;Echo Bot&lt;/strong&gt; as the starting template. This will create the following two objects in your subscription:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bot Connector App&lt;/strong&gt;, which determines the connection between the bot and different channels, such as Telegram&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bot Web App&lt;/strong&gt;, where the bot web application itself will run&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Download the code of your newly created bot, and copy the &lt;code&gt;appsettings.json&lt;/code&gt; file from it into the &lt;code&gt;PeopleBlenderBot&lt;/code&gt; project you are developing. This file contains the &lt;strong&gt;App Id&lt;/strong&gt; and &lt;strong&gt;App Password&lt;/strong&gt; that are required to securely connect to the bot connector.&lt;/li&gt;
&lt;li&gt;From Visual Studio, right-click on your &lt;code&gt;PeopleBlenderBot&lt;/code&gt; project and select &lt;strong&gt;Publish&lt;/strong&gt;. Then select the existing Web App created during step 1, and deploy your code there.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once the bot has been deployed to the cloud, you can make sure that it works by testing it in the web chat:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Exhibit-BotPortal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Exhibit-BotPortal.png" alt="Web Chat"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Making the bot work in Telegram is now a matter of configuring the channel, which is &lt;a href="https://docs.microsoft.com/azure/bot-service/bot-service-channel-connect-telegram?view=azure-bot-service-4.0&amp;amp;WT.mc_id=aiapril-blog-dmitryso" rel="noopener noreferrer"&gt;described in detail in the Docs&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Bot-Animated.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fblog%2FCoPort-Bot-Animated.gif" alt="Animated Bot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I have shared my own experience of creating an interactive exhibit with its logic in the cloud. I encourage you to use this approach if you are doing some work with museums and exhibitions, and if not, simply to explore the direction of science art, because it is a lot of fun! You can start by playing with the &lt;a href="http://github.com/CloudAdvocacy/CognitivePortrait" rel="noopener noreferrer"&gt;Cognitive Portrait Techniques Repository&lt;/a&gt;, and then move on! In my &lt;a href="http://aka.ms/creative_ai" rel="noopener noreferrer"&gt;other blog post&lt;/a&gt; I discuss AI and Art, and show examples where AI becomes even more creative. Feel free &lt;a href="http://soshnikov.com/contact" rel="noopener noreferrer"&gt;to get in touch&lt;/a&gt; if you want to talk more about the topic, or to collaborate!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The bot I have created is currently available in Telegram as &lt;a href="http://t.me/peopleblenderbot" rel="noopener noreferrer"&gt;@peopleblenderbot&lt;/a&gt; and &lt;a href="http://soshnikov.com/museum/peopleblenderbot" rel="noopener noreferrer"&gt;in my local virtual museum&lt;/a&gt;, and you can try it out yourself. Remember that you are not only chatting with the bot; you are enjoying a &lt;strong&gt;virtual art exhibit&lt;/strong&gt; that brings people together, and &lt;strong&gt;your photos will be stored in the cloud and sent to other people in blended form&lt;/strong&gt;. Watch for more news on when this exhibit will be available in museums!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  Credits
&lt;/h4&gt;

&lt;p&gt;In this post, especially for creating drawings, I used some Creative Commons content &lt;a href="http://abluescarab.deviantart.com/art/Widescreen-Monitor-Rounded-181098608" rel="noopener noreferrer"&gt;here&lt;/a&gt;, &lt;a href="http://photo.stackexchange.com/questions/40869/why-are-some-webcam-lenses-recessed" rel="noopener noreferrer"&gt;here&lt;/a&gt;, &lt;a href="http://commons.wikimedia.org/wiki/File:-_Brickwall_01_-.jpg" rel="noopener noreferrer"&gt;here&lt;/a&gt;, &lt;a href="https://hardwarerecs.stackexchange.com/questions/510/high-dpi-21-or-23-monitor-for-13-macbook-pro" rel="noopener noreferrer"&gt;here&lt;/a&gt;. All the work of original authors is greatly appreciated!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Can AI be Creative? Let's Find Out!</title>
      <dc:creator>Dmitry Soshnikov</dc:creator>
      <pubDate>Mon, 13 Apr 2020 18:51:29 +0000</pubDate>
      <link>https://dev.to/itnext/can-ai-be-creative-let-s-find-out-4ac2</link>
      <guid>https://dev.to/itnext/can-ai-be-creative-let-s-find-out-4ac2</guid>
      <description>&lt;p&gt;Generative Adversarial Network can produce a lot of original paintings much much faster than human painter. But does it make AI creative? Let's discuss the nature of creativity, and try to challenge Artificial Intelligence on this front.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jump directly to the challenge&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This post is a part of the &lt;a href="http://aka.ms/AIApril" rel="noopener noreferrer"&gt;AI April&lt;/a&gt; initiative, where each day of April my colleagues publish a new original article related to AI, Machine Learning and Microsoft. Have a look at the &lt;a href="http://aka.ms/AIApril" rel="noopener noreferrer"&gt;Calendar&lt;/a&gt; to find other interesting articles that have already been published, and keep checking that page during the month.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you have read some of my earlier blog posts, you know that I like using different AI techniques to produce some &lt;strong&gt;Science Art&lt;/strong&gt;, such as &lt;a href="http://soshnikov.com/scienceart/peopleblending/" rel="noopener noreferrer"&gt;Cognitive Portraits&lt;/a&gt; or &lt;a href="http://soshnikov.com/scienceart/creating-generative-art-using-gan-on-azureml/" rel="noopener noreferrer"&gt;GAN Paintings&lt;/a&gt; like these:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqt52pffjtfgb4j4n212t.jpg" alt="Cognitive Portrait"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img alt="GAN Generated Art" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyz6a3l14de3a2sh7ejwx.jpg"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;em&gt;Irina&lt;/em&gt;, 2019, &lt;a href="http://aka.ms/peopleblending" rel="noopener noreferrer"&gt;People Blending&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;em&gt;Summer Landscape&lt;/em&gt;, 2020, &lt;a href="https://github.com/shwars/keragan" rel="noopener noreferrer"&gt;keragan&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A lot of other artists are also using Artificial Intelligence to create their pieces. For example, virtual composer &lt;a href="https://en.wikipedia.org/wiki/AIVA" rel="noopener noreferrer"&gt;AIVA&lt;/a&gt; has been recognized by SACEM, the French music society. In fact, attempts to use computers to produce music started much earlier, &lt;a href="https://www.theguardian.com/science/2016/sep/26/first-recording-computer-generated-music-created-alan-turing-restored-enigma-code" rel="noopener noreferrer"&gt;with Alan Turing&lt;/a&gt;. Recently, at the last Microsoft Digital Transformation Summit, our partners at &lt;a href="https://awara-it.com/" rel="noopener noreferrer"&gt;Awara IT&lt;/a&gt; &lt;a href="https://news.microsoft.com/ru-ru/neural-kandinsky/" rel="noopener noreferrer"&gt;combined music and visual arts in the "NeuroKandinsky" performance&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhr4l0iqcmsqr42jsxbhk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhr4l0iqcmsqr42jsxbhk.jpg" title="NeuroKandinsky Performance by Awara IT" alt="NeuroKandinsky"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Can AI Create a Piece of Art?
&lt;/h2&gt;

&lt;p&gt;To figure this out, we first need to come up with a definition of &lt;strong&gt;art&lt;/strong&gt;. The &lt;a href="https://en.wikipedia.org/wiki/Art" rel="noopener noreferrer"&gt;definition in Wikipedia&lt;/a&gt; is the following:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Art&lt;/strong&gt; is a diverse range of &lt;strong&gt;human activities&lt;/strong&gt; in creating visual, auditory or performing artifacts (artworks), expressing the author's imaginative, conceptual ideas, or technical skill, intended to be appreciated for their beauty or emotional power&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For some reason, it explicitly limits artistic activities to humans. However, if we look at a piece of art by itself, without referring to its history, it would sometimes be hard to tell whether the original creator was a human being or not. Thus, it would probably make sense to adopt another definition of art, which would allow us to distinguish art from garbage without looking at the author:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A piece of art is an artifact that is somehow &lt;strong&gt;valued by art connoisseurs&lt;/strong&gt;, for example, by being paid for at auctions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Which is to say that if an artistic piece is bought for a high price at an auction -- it is definitely art. Remember the case with &lt;em&gt;Edmond de Belamy&lt;/em&gt; piece &lt;a href="https://www.nytimes.com/2018/10/25/arts/design/ai-art-sold-christies.html" rel="noopener noreferrer"&gt;sold for more than $400K&lt;/a&gt; at Christie's?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F584jhxxb3uyujasnvhbj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F584jhxxb3uyujasnvhbj.PNG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, this definition does not reflect the inner value or beauty of an artifact, because in the case mentioned above the buyer paid such a high price not for the artwork as such, but rather for the fact of buying the first AI-created art piece. It is for the same reason that most of the price of a Coca-Cola or Pepsi bottle is not the price of the drink, but of the brand. So the actual value lies not in the artwork, but in the &lt;strong&gt;story&lt;/strong&gt; behind it, and this story is created by a human being.&lt;/p&gt;

&lt;h2&gt;
  Who Actually Creates Art?
&lt;/h2&gt;

&lt;p&gt;So, we have come to realize that &lt;strong&gt;it is always a human being who creates a story&lt;/strong&gt; behind a piece of art, because only human beings have a motivation to be creative. Now let's talk about the process of creating an artwork.&lt;/p&gt;

&lt;p&gt;In the case of &lt;a href="http://soshnikov.com/scienceart/peopleblending/" rel="noopener noreferrer"&gt;Cognitive Portrait&lt;/a&gt; we used AI to extract the coordinates of facial landmarks from people's faces, which helps us to align pictures together in a certain way. How the pictures are aligned is defined by an algorithm written by a human being, and this algorithm helps her/him to achieve a certain artistic effect and carry a specific message.&lt;/p&gt;
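As a minimal illustration of that alignment step, here is how one might compute the affine transform that moves detected eye landmarks onto fixed target positions (a hypothetical numpy-only sketch; in the actual project the landmark coordinates come from the Face API, and `affine_from_eyes` and the target positions are my own inventions):

```python
import numpy as np

def affine_from_eyes(left_eye, right_eye,
                     target_left=(30, 40), target_right=(70, 40)):
    """Solve for the 2x3 affine matrix that maps detected eye
    coordinates onto fixed target positions. A third point is
    synthesized perpendicular to the eye line, so the transform
    is a pure similarity (rotation + uniform scale + shift)."""
    def with_third(p):
        d = p[1] - p[0]
        # rotate the eye-to-eye vector by 90 degrees for the third point
        return np.vstack([p, p[0] + np.array([-d[1], d[0]])])

    src = with_third(np.array([left_eye, right_eye], dtype=float))
    dst = with_third(np.array([target_left, target_right], dtype=float))
    # each row of M solves [x, y, 1] @ X = [x', y']
    M = np.hstack([src, np.ones((3, 1))])
    return np.linalg.solve(M, dst).T  # shape (2, 3)

A = affine_from_eyes((100, 120), (180, 118))
# applying A to the detected left eye lands on its target (30, 40),
# up to floating-point error
aligned = A @ np.array([100, 120, 1.0])
```

The resulting matrix can then be fed to any image library's affine-warp routine to bring every portrait into the same canonical frame.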

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;a href="http://soshnikov.com/images/art/Ages2.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fart%2FAges2.jpg"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;em&gt;Growing up&lt;/em&gt;, 2020, &lt;a href="http://bit.do/cognitiveportrait" rel="noopener noreferrer"&gt;Cognitive Portrait&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For example, this picture is constructed from about 50 photos of my daughter, which are grouped together by age intervals, to reflect the process of her growing up. And the following piece, produced from the same set of photos, represents the circular structure of time, when you are caught up in some whirlpool of places and events:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;a href="http://soshnikov.com/images/art/Circ0.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fsoshnikov.com%2Fimages%2Fart%2FCirc0.jpg"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;em&gt;Circle of Faces&lt;/em&gt;, 2020, &lt;a href="http://bit.do/cognitiveportrait" rel="noopener noreferrer"&gt;Cognitive Portrait&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So, quite clearly, when creating cognitive portraits we used AI as a very powerful tool that helped us with image editing. We could have actually performed the same process in Photoshop manually, aligning all photos, but it would be much more time-consuming and limiting compared to using an algorithm.&lt;/p&gt;
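The age-interval grouping used for the portrait above can be sketched as a small helper (hypothetical code; ages would come from photo metadata or an age-estimation model, and `group_by_age` is not part of the repository):

```python
def group_by_age(photos, interval=3):
    """Bucket (filename, age) pairs into bands of `interval` years,
    returned in chronological order; each band is then blended into
    one panel of the portrait."""
    groups = {}
    for name, age in photos:
        groups.setdefault(age // interval, []).append(name)
    return [groups[k] for k in sorted(groups)]

# e.g. group_by_age([("a.jpg", 1), ("b.jpg", 2), ("c.jpg", 5)])
# groups the first two photos together and the third one separately
```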

&lt;p&gt;The case of Generative Adversarial Networks is more complex to tackle, because it looks like the neural network, after having been trained on a dataset of human artworks, has learnt to produce new original art pieces all by itself. In fact, the network learns similarly to a human being learning to see or draw: by looking at many images, it learns which low-level pixel combinations represent brush strokes, then how those strokes are typically combined into larger objects, finally leading to the whole picture composition. In the same way, a child learns to see by making sense of the light patterns around him, and a painter looks at many earlier artworks to gain inspiration.  &lt;/p&gt;

&lt;p&gt;Unlike a human being, the neural network then &lt;strong&gt;randomly combines&lt;/strong&gt; those pieces of knowledge into a painting, while a human artist would most probably have some goal or idea in mind, which he wants to express using techniques he has previously seen or tried.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A human being has a goal, an idea, a motivation that directs their work, while artificial intelligence acts randomly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As an example, look at the two pieces produced by the same network:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img alt="Bad Piece" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fu4sbm2rh0sechaxmg2ea.png"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img alt="Countryside" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5j0tjyo4dpslhi22nwrs.jpg"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;
&lt;em&gt;Countryside&lt;/em&gt;, 2020&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The two images look quite different to us: somehow we know that the image on the left is garbage, and the one on the right is not, because it evokes some feelings in us. However, for a computer those two images are more or less the same, and in fact they were scored similarly by the GAN discriminator. So the human being is the one who &lt;em&gt;holds the truth&lt;/em&gt;, who knows how to tell the good from the bad. &lt;/p&gt;

&lt;p&gt;In order to create an artwork like the one above, the human has to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the dataset of the images to be used for training. Keep in mind that all those pieces were also created by human beings, reflecting our view on what is beautiful, and what is not.&lt;/li&gt;
&lt;li&gt;Perform hyperparameter optimization, selecting the parameters for the network which lead to the best results. &lt;/li&gt;
&lt;li&gt;Select the best works from hundreds of images generated by the network. The last two steps require human inspection and a human understanding of what to consider art.&lt;/li&gt;
&lt;li&gt;Create some story behind the artwork, which at least requires naming it, and then possibly putting it up for sale at an auction, creating &lt;em&gt;artistic value&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Creating &lt;strong&gt;artificial art&lt;/strong&gt; is a joint process, where a human artist works together with an artificial intelligence tool to produce the artistic impact he wants.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Actually, the problem of "who is the author of an artwork" was considered long before AI came about, and it is not specific to AI. For example, the Japanese photographer &lt;a href="https://www.artsy.net/artist/tetsuya-kusu" rel="noopener noreferrer"&gt;Tetsuya Kusu&lt;/a&gt; used a camera that &lt;a href="https://serindiagallery.com/blogs/news/auto-graph-by-tetsuya-kusu-4-4-5-5-2019" rel="noopener noreferrer"&gt;periodically takes pictures&lt;/a&gt;, and collected a gallery of artworks that were &lt;em&gt;taken by the camera itself&lt;/em&gt; during his travel through the US. This process is in fact very similar to our GAN case: to come up with something &lt;em&gt;beautiful&lt;/em&gt;, a human being needs to filter out a lot of crap from automatically-generated images. The human being is the one who understands the criteria of &lt;em&gt;beauty&lt;/em&gt;, not the camera, and not the artificial neural network.&lt;/p&gt;

&lt;h2&gt;
  A Challenge
&lt;/h2&gt;

&lt;p&gt;At the end of this post, I want to challenge you to express your creativity and to prove that AI is in fact only a tool -- but a very useful one, which empowers software developers to be creative in the field of visual arts. I challenge you to create your own &lt;strong&gt;Cognitive Portrait&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;To do so:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;Cognitive Portrait Repository&lt;/strong&gt;: &lt;a href="http://github.com/CloudAdvocacy/CognitivePortrait" rel="noopener noreferrer"&gt;http://github.com/CloudAdvocacy/CognitivePortrait&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Have a look at the sample code there, which is presented in the form of Jupyter Notebooks.&lt;/li&gt;
&lt;li&gt;Fork the repo, and create your own portrait! I recommend following some rules:

&lt;ul&gt;
&lt;li&gt;If you are going to change the code in a notebook, please copy the notebook to a new file, so that the original code is preserved. This will allow people to see both the original code and your new code.&lt;/li&gt;
&lt;li&gt;If you are uploading your own pictures -- create a subdirectory under &lt;code&gt;images/&lt;/code&gt;, so that your pictures do not interfere with others.&lt;/li&gt;
&lt;li&gt;Place the sample image from your script into &lt;code&gt;results&lt;/code&gt; folder, so that it is easy to see.&lt;/li&gt;
&lt;li&gt;The front page &lt;code&gt;readme.md&lt;/code&gt; contains the gallery of different cognitive portrait techniques, please add your name/resulting image there.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do a pull request&lt;/strong&gt;, and I will include your code into the original repository!&lt;/li&gt;
&lt;li&gt;Finally, post your results in comments here, for everyone to enjoy!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some ideas for you to explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start simple - just create your own cognitive portrait, by uploading your images into &lt;code&gt;images/your_name&lt;/code&gt;, running one of the notebooks which are already there, and then storing the result into &lt;code&gt;results/your_name.jpg&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Once you do that, copy one of the notebooks into a new file and start experimenting! You may try to position eyes along some geometrical curve, add some random movement to faces within the cognitive portrait, or do some other crazy things related to the coordinates of facial landmarks!&lt;/li&gt;
&lt;li&gt;Do not forget to add your creation to the &lt;code&gt;readme.md&lt;/code&gt; and do a pull request!&lt;/li&gt;
&lt;/ul&gt;
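For the eyes-along-a-curve idea above, here is one hypothetical way to compute per-photo eye target positions on a circle (echoing the &lt;em&gt;Circle of Faces&lt;/em&gt; piece; the helper name and parameters are my own, not from the repository):

```python
import math

def eye_targets_on_circle(n, center=(500, 500), radius=300, eye_gap=40):
    """For each of n photos, place the midpoint between the eyes on a
    circle, with the eye-to-eye axis tangent to it, so the faces
    'flow' around the circle."""
    targets = []
    for i in range(n):
        a = 2 * math.pi * i / n
        mx = center[0] + radius * math.cos(a)   # midpoint on the circle
        my = center[1] + radius * math.sin(a)
        tx, ty = -math.sin(a), math.cos(a)      # tangent direction
        targets.append(((mx - tx * eye_gap / 2, my - ty * eye_gap / 2),
                        (mx + tx * eye_gap / 2, my + ty * eye_gap / 2)))
    return targets
```

Each (left, right) pair can then be used as the target eye positions for the face-alignment step of a notebook.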

&lt;p&gt;At the end of the month, I will feature the best creations in my blog and social networks, and we will all celebrate together!&lt;/p&gt;

</description>
      <category>challenge</category>
      <category>ai</category>
      <category>generativeart</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Creating Generative Art using GANs on Azure ML</title>
      <dc:creator>Dmitry Soshnikov</dc:creator>
      <pubDate>Wed, 08 Apr 2020 13:29:34 +0000</pubDate>
      <link>https://dev.to/azure/creating-generative-art-using-gans-on-azure-ml-5c9k</link>
      <guid>https://dev.to/azure/creating-generative-art-using-gans-on-azure-ml-5c9k</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post is a part of the &lt;a href="http://aka.ms/AIApril"&gt;AI April&lt;/a&gt; initiative, where each day of April my colleagues publish a new original article related to AI, Machine Learning and Microsoft. Have a look at the &lt;a href="http://aka.ms/AIApril"&gt;Calendar&lt;/a&gt; to find other interesting articles that have already been published, and keep checking that page during the month.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Deep Learning can look like magic! I get the most magical feeling when watching a neural network do something creative, for example learning to produce paintings like an artist. The technology behind this is called Generative Adversarial Networks, and in this post we will look at how to train such a network on the Azure Machine Learning Service.&lt;/p&gt;

&lt;p&gt;If you have seen my previous posts on Azure ML (about &lt;a href="http://soshnikov.com/azure/best-way-to-start-with-azureml/"&gt;using it from VS Code&lt;/a&gt; and &lt;a href="http://soshnikov.com/azure/using-azureml-for-hyperparameter-optimization/"&gt;submitting experiments and hyperparameter optimization&lt;/a&gt;), you know that it is quite convenient to use Azure ML for almost any training task. However, all examples up to now have used the toy MNIST dataset. Today we will focus on a real problem: creating artificial paintings like these:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mPFl66Fs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ew2x5e9fjfj7op9mmfms.jpg" alt="Flowers"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2QFxSawO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8r4ws4lb344mhsttvi6c.jpg" alt="Portrait"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Flowers, 2019, &lt;em&gt;Art of the Artificial&lt;/em&gt;&lt;br&gt;&lt;a href="https://github.com/shwars/keragan"&gt;keragan&lt;/a&gt; trained on &lt;a href="https://www.wikiart.org/"&gt;WikiArt&lt;/a&gt; Flowers&lt;/td&gt;
&lt;td&gt;Queen of Chaos, 2019,&lt;br&gt;&lt;a href="https://github.com/shwars/keragan"&gt;keragan&lt;/a&gt; trained on &lt;a href="https://www.wikiart.org/"&gt;WikiArt&lt;/a&gt; Portraits&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Those paintings were produced by training the network on paintings from &lt;a href="https://www.wikiart.org/"&gt;WikiArt&lt;/a&gt;. If you want to reproduce the same results, you may need to collect the dataset yourself, for example by using &lt;a href="https://github.com/lucasdavid/wikiart"&gt;WikiArt Retriever&lt;/a&gt;, or by borrowing existing collections from the &lt;a href="https://github.com/cs-chan/ArtGAN/blob/master/WikiArt%20Dataset/README.md"&gt;WikiArt Dataset&lt;/a&gt; or the &lt;a href="https://github.com/rkjones4/GANGogh"&gt;GANGogh Project&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Place the images you want to train on somewhere in the &lt;code&gt;dataset&lt;/code&gt; directory. For training on flowers, here is how some of those images might look:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XpUPJWs_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jbdn7pglzak9bhzkb2mp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XpUPJWs_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jbdn7pglzak9bhzkb2mp.png" alt="Flowers Dataset"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need our neural network model to learn both the high-level composition of a flower bouquet and a vase, and the low-level style of painting, with smears of paint and canvas texture. &lt;/p&gt;
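Gathering the training files from the &lt;code&gt;dataset&lt;/code&gt; directory can be sketched with a few lines of standard-library Python (a hypothetical helper, not part of keragan):

```python
from pathlib import Path

def collect_images(root="dataset", exts=(".jpg", ".jpeg", ".png")):
    """Recursively gather raster image files under the dataset
    directory, skipping any non-image files that ended up there."""
    return sorted(p for p in Path(root).rglob("*")
                  if p.suffix.lower() in exts)

# e.g. files = collect_images("dataset")
```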

&lt;h2&gt;
  Generative Adversarial Networks
&lt;/h2&gt;

&lt;p&gt;Those paintings were generated using a &lt;a href="https://en.wikipedia.org/wiki/Generative_adversarial_network"&gt;&lt;strong&gt;Generative Adversarial Network&lt;/strong&gt;&lt;/a&gt;, or GAN for short. In this example, we will use my simple GAN implementation in Keras called &lt;a href="https://github.com/shwars/keragan"&gt;keragan&lt;/a&gt;, and I will show some simplified code parts from it.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/shwars"&gt;
        shwars
      &lt;/a&gt; / &lt;a href="https://github.com/shwars/keragan"&gt;
        keragan
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Keras implementation of GANs
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;A GAN consists of two networks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generator&lt;/strong&gt;, which generates images given some input &lt;strong&gt;noise vector&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discriminator&lt;/strong&gt;, whose role is to differentiate between real and "fake" (generated) paintings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fAjccB6g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/layup0xwjl0pmuczv8zh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fAjccB6g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/layup0xwjl0pmuczv8zh.png" alt="GAN Architecture"&gt;&lt;/a&gt;&lt;/p&gt;
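The two roles can be sketched with trivial linear stand-ins (purely illustrative numpy code; the real keragan networks are convolutional, and the weight shapes here are invented):

```python
import numpy as np

rng = np.random.default_rng(42)
latent_dim, img_pixels = 100, 64 * 64

# hypothetical linear stand-ins for the two networks
G = rng.normal(0, 0.02, (latent_dim, img_pixels))   # "generator" weights
D = rng.normal(0, 0.02, (img_pixels, 1))            # "discriminator" weights

def generator(noise):
    """Map a batch of noise vectors to flat 'images' in [-1, 1]."""
    return np.tanh(noise @ G)

def discriminator(images):
    """Return the probability that each image is real."""
    return 1 / (1 + np.exp(-(images @ D)))

noise = rng.normal(0, 1, (8, latent_dim))
fake = generator(noise)        # shape (8, 4096)
scores = discriminator(fake)   # shape (8, 1), values strictly in (0, 1)
```

The data flow is the same as in the real networks: a noise vector goes in, an image comes out, and the discriminator reduces each image to a single real-vs-fake probability.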

&lt;p&gt;Training the GAN involves the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Getting a bunch of generated and real images:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;noise&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;normal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;latent_dim&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;gen_imgs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;noise&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   
&lt;span class="n"&gt;imgs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_batch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Training the discriminator to better differentiate between the two. Note how we provide vectors of &lt;code&gt;ones&lt;/code&gt; and &lt;code&gt;zeros&lt;/code&gt; as the expected answers:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;d_loss_r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;train_on_batch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;imgs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ones&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;d_loss_f&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;train_on_batch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gen_imgs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;zeros&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;d_loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;d_loss_r&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;d_loss_f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt;Training the combined model, in order to improve the generator:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;g_loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;combined&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;train_on_batch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;noise&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ones&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;During this step, the discriminator is not trained, because its weights are explicitly frozen during the creation of the combined model:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;discriminator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;create_discriminator&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;generator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;create_generator&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'binary_crossentropy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                      &lt;span class="n"&gt;metrics&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'accuracy'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;trainable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;

&lt;span class="n"&gt;z&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;latent_dim&lt;/span&gt;&lt;span class="p"&gt;,))&lt;/span&gt;
&lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;valid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;combined&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;valid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="n"&gt;combined&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'binary_crossentropy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Discriminator Model
&lt;/h2&gt;

&lt;p&gt;To differentiate between real and fake images, we use a traditional &lt;a href="https://en.wikipedia.org/wiki/Convolutional_neural_network"&gt;&lt;strong&gt;Convolutional Neural Network&lt;/strong&gt;&lt;/a&gt; (CNN) architecture. For an image of size 64x64, we will have something like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;discriminator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="c1"&gt;# number of filters on next layer
&lt;/span&gt;    &lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;strides&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;padding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"same"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;AveragePooling2D&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;addBatchNormalization&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;momentum&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LeakyReLU&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;alpha&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Dropout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Flatten&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;discriminator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'sigmoid'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;We have 3 convolution layers, which do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The original image of shape 64x64x3 is passed through 16 filters, resulting in a tensor of shape 32x32x16. To decrease the spatial size, we use &lt;code&gt;AveragePooling2D&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The next step converts the 32x32x16 tensor into 16x16x32.&lt;/li&gt;
&lt;li&gt;Finally, after the third convolution layer, we end up with a tensor of shape 8x8x64.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On top of this convolutional base, we put a simple logistic regression classifier (AKA a 1-neuron dense layer).&lt;/p&gt;
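&lt;p&gt;We can verify this shape arithmetic with a few lines of plain Python: &lt;code&gt;Conv2D&lt;/code&gt; with &lt;code&gt;strides=1&lt;/code&gt; and &lt;code&gt;padding="same"&lt;/code&gt; preserves the spatial size, while each &lt;code&gt;AveragePooling2D&lt;/code&gt; halves it.&lt;/p&gt;

```python
# Trace tensor shapes through the discriminator's convolutional base
h, w, c = 64, 64, 3             # input image: 64x64 RGB
shapes = []
for filters in [16, 32, 64]:
    c = filters                 # Conv2D(strides=1, padding="same") keeps h x w
    h, w = h // 2, w // 2       # AveragePooling2D halves each spatial dimension
    shapes.append((h, w, c))

print(shapes)     # [(32, 32, 16), (16, 16, 32), (8, 8, 64)]
print(h * w * c)  # 4096 values flattened into the final 1-neuron dense layer
```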
&lt;h2&gt;
  
  
  Generator Model
&lt;/h2&gt;

&lt;p&gt;The generator model is slightly more complicated. First, imagine that we wanted to convert an image into a feature vector of length &lt;code&gt;latent_dim=100&lt;/code&gt;. We would use a convolutional network model similar to the discriminator above, but the final layer would be a dense layer of size 100.&lt;/p&gt;

&lt;p&gt;The generator does the opposite: it converts a vector of size 100 into an image. This involves a process called &lt;strong&gt;deconvolution&lt;/strong&gt;, which is essentially a &lt;em&gt;reversed convolution&lt;/em&gt;. Together with &lt;code&gt;UpSampling2D&lt;/code&gt;, it causes the size of the tensor to increase at each layer:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;generator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"relu"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                                      &lt;span class="n"&gt;input_dim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;latent_dim&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Reshape&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;UpSampling2D&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kernel_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;&lt;span class="n"&gt;strides&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;padding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"same"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BatchNormalization&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;momentum&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Activation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"relu"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kernel_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;padding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"same"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Activation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"tanh"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;At the last step, we end up with a tensor of size 64x64x3, which is exactly the size of the image we need.&lt;br&gt;
Note that the final activation function is &lt;code&gt;tanh&lt;/code&gt;, which gives an output in the range [-1,1] - this means that we need to scale the original training images to this interval. All those steps for preparing images are handled by the &lt;code&gt;ImageDataset&lt;/code&gt; class, and I will not go into detail here.&lt;/p&gt;
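&lt;p&gt;For illustration, the scaling to and from the tanh range can be sketched like this (the helper names are hypothetical; &lt;code&gt;ImageDataset&lt;/code&gt; implements the equivalent transforms internally):&lt;/p&gt;

```python
import numpy as np

def to_tanh_range(img):
    # uint8 pixels [0, 255] -> float [-1, 1], matching the generator's tanh output
    return img.astype(np.float32) / 127.5 - 1.0

def from_tanh_range(img):
    # float [-1, 1] -> uint8 pixels [0, 255], for displaying generated samples
    return ((img + 1.0) * 127.5).round().clip(0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
scaled = to_tanh_range(img)        # fed to the discriminator during training
restored = from_tanh_range(scaled)  # round-trips back to the original pixels
```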
&lt;h2&gt;
  
  
  Training script for Azure ML
&lt;/h2&gt;

&lt;p&gt;Now that we have all pieces for training the GAN together, we are ready to run this code on Azure ML as an experiment! The code I will be showing here is available at GitHub:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/CloudAdvocacy"&gt;
        CloudAdvocacy
      &lt;/a&gt; / &lt;a href="https://github.com/CloudAdvocacy/AzureMLStarter"&gt;
        AzureMLStarter
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      This is some tutorial to get you started with Azure ML Service
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;There is one important thing to note, however: normally, when running an experiment in Azure ML, we want to track metrics such as accuracy or loss. We can log those values during training using &lt;code&gt;run.log&lt;/code&gt;, as described in my &lt;a href="http://soshnikov.com/azure/best-way-to-start-with-azureml/"&gt;previous post&lt;/a&gt;, and see how the metric changes during training on the &lt;a href="http://ml.azure.com/?WT.mc_id=aiapril-blog-dmitryso"&gt;Azure ML Portal&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In our case, instead of a numeric metric, we are interested in the images that our network generates at each step. Inspecting those images while the experiment is running can help us decide whether to end the experiment, alter parameters, or continue.&lt;/p&gt;

&lt;p&gt;Azure ML supports logging images in addition to numbers, as described &lt;a href="https://docs.microsoft.com/azure/machine-learning/how-to-track-experiments/?WT.mc_id=aiapril-blog-dmitryso"&gt;here&lt;/a&gt;. We can log either images represented as np-arrays or any plots produced by &lt;code&gt;matplotlib&lt;/code&gt;, so we will plot three sample images on one plot. This plotting is handled by the &lt;code&gt;callbk&lt;/code&gt; callback function, which &lt;code&gt;keragan&lt;/code&gt; calls after each training epoch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;callbk&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tr&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;gan&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;epoch&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;gan&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sample_images&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;fig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;ax&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;subplots&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;ax&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;imshow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;log_image&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Sample"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;plot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, the actual training code will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;gan&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;keragan&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DCGAN&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;imsrc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;keragan&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ImageDataset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;imsrc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;train&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;keragan&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GANTrainer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_dataset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;imsrc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;gan&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;gan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;train&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;train&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;callbk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that &lt;code&gt;keragan&lt;/code&gt; supports automatic parsing of many command-line parameters that we can pass to it through &lt;code&gt;args&lt;/code&gt; structure, and that is what makes this code so simple.&lt;/p&gt;
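&lt;p&gt;A rough sketch of what such a parser could look like for the parameters used in this post (this mirrors the &lt;code&gt;script_params&lt;/code&gt; we submit to the experiment; &lt;code&gt;keragan&lt;/code&gt;'s real argument list and defaults may differ):&lt;/p&gt;

```python
import argparse

# Hypothetical parser mirroring the command-line parameters used in this post;
# keragan's actual parser may define more options or different defaults.
parser = argparse.ArgumentParser(description="GAN training options")
parser.add_argument('--path', help='folder (or datastore mount) with training images')
parser.add_argument('--dataset', default='faces')
parser.add_argument('--model_path', default='./outputs/models')
parser.add_argument('--samples_path', default='./outputs/samples')
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--size', type=int, default=512)
parser.add_argument('--learning_rate', type=float, default=0.0001)
parser.add_argument('--epochs', type=int, default=10000)

# Parse an explicit argument list instead of sys.argv, for demonstration
args = parser.parse_args(['--size', '64', '--learning_rate', '0.001'])
print(args.size, args.learning_rate)  # 64 0.001
```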

&lt;h2&gt;
  
  
  Starting the Experiment
&lt;/h2&gt;

&lt;p&gt;To submit the experiment to Azure ML, we will use code similar to the one discussed in the &lt;a href="http://soshnikov.com/azure/using-azureml-for-hyperparameter-optimization/"&gt;previous post on Azure ML&lt;/a&gt;. The code is located inside &lt;a href="https://github.com/CloudAdvocacy/AzureMLStarter/blob/master/submit_gan.ipynb"&gt;submit_gan.ipynb&lt;/a&gt;, and it starts with familiar steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connecting to the Workspace using &lt;code&gt;ws = Workspace.from_config()&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Connecting to the Compute cluster: &lt;code&gt;cluster = ComputeTarget(workspace=ws, name='My Cluster')&lt;/code&gt;. Here we need a cluster of GPU-enabled VMs, such as &lt;a href="https://docs.microsoft.com/azure/virtual-machines/sizes-gpu/?WT.mc_id=aiapril-blog-dmitryso"&gt;NC6&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Uploading our dataset to the default datastore in the ML Workspace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After that has been done, we can submit the experiment using the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;exp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Experiment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;workspace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'KeraGAN'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;script_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;'--path'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_default_datastore&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="s"&gt;'--dataset'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'faces'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;'--model_path'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'./outputs/models'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;'--samples_path'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'./outputs/samples'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;'--batch_size'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;'--size'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;'--learning_rate'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.0001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;'--epochs'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;est&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;TensorFlow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source_directory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'.'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;script_params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;script_params&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;compute_target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;entry_script&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'train_gan.py'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;use_gpu&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;conda_packages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'keras'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;'tensorflow'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;'opencv'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;'tqdm'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;'matplotlib'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;pip_packages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'git+https://github.com/shwars/keragan@v0.0.1'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;est&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In our case, we pass &lt;code&gt;model_path=./outputs/models&lt;/code&gt; and &lt;code&gt;samples_path=./outputs/samples&lt;/code&gt; as parameters, which will cause models and samples generated during training to be written to corresponding directories inside Azure ML experiment. Those files will be available through Azure ML Portal, and can also be downloaded programmatically afterwards (or even during training).&lt;/p&gt;

&lt;p&gt;To create an estimator that can run on GPU without problems, we use the built-in &lt;a href="https://docs.microsoft.com/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py&amp;amp;WT.mc_id=aiapril-blog-dmitryso"&gt;&lt;code&gt;TensorFlow&lt;/code&gt;&lt;/a&gt; estimator. It is very similar to the generic &lt;a href="https://docs.microsoft.com/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py&amp;amp;WT.mc_id=aiapril-blog-dmitryso"&gt;&lt;code&gt;Estimator&lt;/code&gt;&lt;/a&gt;, but also provides some out-of-the-box options for distributed training. You can read more about using different estimators &lt;a href="https://docs.microsoft.com/azure/machine-learning/how-to-train-ml-models?WT.mc_id=aiapril-blog-dmitryso"&gt;in the official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another interesting point here is how we install the &lt;code&gt;keragan&lt;/code&gt; library: directly from GitHub. While we could also install it from the PyPI repository, I wanted to demonstrate that direct installation from GitHub is supported as well, and that you can even indicate a specific version of the library, a tag, or a commit ID.&lt;/p&gt;

&lt;p&gt;After the experiment has been running for some time, we should be able to observe the sample images being generated in the &lt;a href="http://ml.azure.com/?WT.mc_id=aiapril-blog-dmitryso"&gt;Azure ML Portal&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5h7iDeIH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rolvc09tlwnaci4cl0i5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5h7iDeIH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rolvc09tlwnaci4cl0i5.PNG" alt="GAN Training Experiment Results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running Many Experiments
&lt;/h2&gt;

&lt;p&gt;The first time we run GAN training, we might not get excellent results, for several reasons. First of all, the learning rate seems to be an important parameter, and a learning rate that is too high might lead to poor results. Thus, for best results we might need to perform a number of experiments.&lt;/p&gt;

&lt;p&gt;Parameters that we might want to vary are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--size&lt;/code&gt; determines the size of the picture, which should be a power of 2. Small sizes like 64 or 128 allow for fast experimentation, while large sizes (up to 1024) are good for creating higher-quality images. Anything above 1024 will likely not produce good results, because training high-resolution GANs requires special techniques, such as &lt;a href="https://arxiv.org/abs/1710.10196"&gt;progressive growing&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--learning_rate&lt;/code&gt; is, surprisingly, quite an important parameter, especially at higher resolutions. A smaller learning rate typically gives better results, but makes training very slow.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--dataset&lt;/code&gt;. We might want to upload pictures of different styles into different folders in the Azure ML datastore and start training multiple experiments simultaneously.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since we already know how to submit the experiment programmatically, it should be easy to wrap that code into a couple of &lt;code&gt;for&lt;/code&gt;-loops to perform a parametric sweep. You can then check manually through the Azure ML Portal which experiments are on their way to good results, and terminate all other experiments to save costs. Having a cluster of a few VMs gives you the freedom to start several experiments at the same time without waiting.&lt;/p&gt;
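&lt;p&gt;Such a sweep could be sketched as follows. Only the construction of the parameter sets is shown; the submission lines reuse the &lt;code&gt;Experiment&lt;/code&gt; and &lt;code&gt;TensorFlow&lt;/code&gt; estimator from above and are left as comments, and the candidate values are illustrative choices, not recommendations:&lt;/p&gt;

```python
from itertools import product

# Candidate values to sweep over (illustrative, not recommendations)
sizes = [64, 128, 256]
learning_rates = [0.001, 0.0001]

runs = []
for size, lr in product(sizes, learning_rates):
    script_params = {
        '--dataset': 'faces',
        '--size': size,
        '--learning_rate': lr,
        '--epochs': 10000,
    }
    runs.append(script_params)
    # est = TensorFlow(..., script_params=script_params, ...)
    # exp.submit(est)

print(len(runs))  # 6 - one run per (size, learning rate) combination
```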

&lt;h2&gt;
  
  
  Getting Experiment Results
&lt;/h2&gt;

&lt;p&gt;Once you are happy with the results, it makes sense to get the results of the training in the form of model files and sample images. I have mentioned that during training our training script stored models in the &lt;code&gt;outputs/models&lt;/code&gt; directory, and sample images in &lt;code&gt;outputs/samples&lt;/code&gt;. You can browse those directories in the Azure ML Portal and download the files you like manually:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p5TJ1Yn5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fmvi14r8nncd764v4d5u.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p5TJ1Yn5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fmvi14r8nncd764v4d5u.PNG" alt="Azure Portal with Experiment Results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also do that programmatically, especially if you want to download &lt;em&gt;all&lt;/em&gt; images produced during different epochs. The &lt;code&gt;run&lt;/code&gt; object that you obtained during experiment submission gives you access to all files stored as part of that run, and you can download them like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;download_files&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prefix&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'outputs/samples'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will create the directory &lt;code&gt;outputs/samples&lt;/code&gt; inside the current directory, and download all files from the remote directory with the same name.&lt;/p&gt;

&lt;p&gt;If you have lost the reference to the specific run inside your notebook (it can happen, because most experiments are quite long-running), you can always re-create it if you know the &lt;em&gt;run id&lt;/em&gt;, which you can look up in the portal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;experiment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;exp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;run_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'KeraGAN_1584082108_356cf603'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can also get the models that were trained. For example, let's download the final generator model, and use it for generating a bunch of random images. We can get all filenames that are associated with the experiment, and filter out only those that represent generator models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;fnames&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_file_names&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;fnames&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;startswith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'outputs/models/gen_'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;&lt;span class="n"&gt;fnames&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;They will all look like &lt;code&gt;outputs/models/gen_0.h5&lt;/code&gt;, &lt;code&gt;outputs/models/gen_100.h5&lt;/code&gt;, and so on. We need to find the maximum epoch number:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;no&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'.'&lt;/span&gt;&lt;span class="p"&gt;)]),&lt;/span&gt; &lt;span class="n"&gt;fnames&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;fname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'outputs/models/gen_{}.h5'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;no&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;fname_wout_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fname&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;fname&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rfind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:]&lt;/span&gt;
&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;download_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fname&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will download the file with the highest epoch number to the local directory, and also store the name of this file (without the directory path) in &lt;code&gt;fname_wout_path&lt;/code&gt;.&lt;/p&gt;
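&lt;p&gt;As a side note, the slice &lt;code&gt;x[19:x.find('.')]&lt;/code&gt; relies on the hard-coded length of the &lt;code&gt;outputs/models/gen_&lt;/code&gt; prefix, which silently breaks if the path ever changes. A regular expression is a more robust way to extract the epoch number; here is a minimal, self-contained sketch (plain Python, no Azure ML calls):&lt;/p&gt;

```python
import re

def max_epoch(fnames):
    """Return the highest epoch number among generator model filenames."""
    epochs = [int(m.group(1))
              for f in fnames
              if (m := re.search(r'gen_(\d+)\.h5$', f))]
    return max(epochs)

print(max_epoch(['outputs/models/gen_0.h5',
                 'outputs/models/gen_100.h5',
                 'outputs/models/gen_2000.h5']))  # → 2000
```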

&lt;h2&gt;
  
  
  Generating new Images
&lt;/h2&gt;

&lt;p&gt;Once we have obtained the model, we just need to load it in Keras, find out the input size, and feed a correctly sized random vector as input to produce a new random painting generated by the network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fname_wout_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;latent_dim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;
&lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;normal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;latent_dim&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The output of the generator network is in the range [-1,1], so we need to scale it linearly to the range [0,1] in order for it to be correctly displayed by &lt;code&gt;matplotlib&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="n"&gt;fig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;ax&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;subplots&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;figsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;ax&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;imshow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here is the result we will get:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aZT6IoK2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6rnheuldjbz5vr0sozz9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aZT6IoK2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6rnheuldjbz5vr0sozz9.PNG" alt="GAN Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have a look at some of the best pictures produced during this experiment:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GqB5r0b8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/odvi7f8qaplwnnysqa92.jpg" alt="Colourful Spring"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yx5yQMJO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h4ym8zhpyy6yof6me07e.jpg" alt="Countryside"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;em&gt;Colourful Spring&lt;/em&gt;, 2020&lt;/td&gt;
&lt;td&gt;
&lt;em&gt;Countryside&lt;/em&gt;, 2020&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ohLbzaav--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qcgdr76l4l7ynpdd7jp5.jpg" alt="Summer Landscape"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fyChwIId--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8n55sh3yakm4ldwdt7mg.jpg" alt="Summer Landscape"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;em&gt;Through the Icy Glass&lt;/em&gt;, 2020&lt;/td&gt;
&lt;td&gt;
&lt;em&gt;Summer Landscape&lt;/em&gt;, 2020&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Subscribe to &lt;a href="http://instagram.com/art_of_artificial"&gt;@art_of_artificial&lt;/a&gt; on Instagram, where I periodically publish new pictures produced by the GAN.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Observing The Process of Learning
&lt;/h2&gt;

&lt;p&gt;It is also interesting to watch the process by which the GAN gradually learns. I have explored this notion of learning in my exhibition &lt;a href="http://soshnikov.com/art/artofartificial"&gt;Art of the Artificial&lt;/a&gt;, which includes a couple of videos showing this process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Food for Thought
&lt;/h2&gt;

&lt;p&gt;In this post, I have described how a GAN works, and how to train one using Azure ML. This definitely opens up a lot of room for experimentation, but also a lot of room for thought. During this experiment we have created original artworks generated by Artificial Intelligence. But can they be considered &lt;strong&gt;ART&lt;/strong&gt;? I will discuss this in one of my next posts...&lt;/p&gt;

&lt;h2&gt;
  
  
  Acknowledgements
&lt;/h2&gt;

&lt;p&gt;When producing the &lt;a href="https://github.com/shwars/keragan"&gt;keragan&lt;/a&gt; library, I was largely inspired by &lt;a href="https://towardsdatascience.com/generating-modern-arts-using-generative-adversarial-network-gan-on-spell-39f67f83c7b4"&gt;this article&lt;/a&gt;, by the &lt;a href="https://github.com/Maximellerbach/Car-DCGAN-Keras"&gt;DCGAN implementation&lt;/a&gt; by Maxime Ellerbach, and partly by the &lt;a href="https://github.com/rkjones4/GANGogh"&gt;GANGogh&lt;/a&gt; project. Many different GAN architectures implemented in Keras are presented &lt;a href="https://github.com/eriklindernoren/Keras-GAN"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>machinelearning</category>
      <category>scienceart</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Using Azure Machine Learning for Hyperparameter Optimization</title>
      <dc:creator>Dmitry Soshnikov</dc:creator>
      <pubDate>Tue, 10 Mar 2020 19:54:20 +0000</pubDate>
      <link>https://dev.to/azure/using-azure-machine-learning-for-hyperparameter-optimization-3kgj</link>
      <guid>https://dev.to/azure/using-azure-machine-learning-for-hyperparameter-optimization-3kgj</guid>
      <description>&lt;p&gt;Most machine learning models are quite complex, containing a number of so-called hyperparameters, such as layers in a neural network, number of neurons in the hidden layers, or dropout rate. To build the best model, we need to chose the combination of those hyperparameters that works best. This process is typically quite tedious and resource-consuming, but Azure Machine Learning can make it much simpler.&lt;/p&gt;

&lt;p&gt;In my &lt;a href="http://dev.to/azure/the-best-way-to-start-with-azure-machine-learning-17jl"&gt;previous post about Azure Machine Learning&lt;/a&gt; I described how to start using &lt;a href="https://azure.microsoft.com/services/machine-learning/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML&lt;/a&gt; from Visual Studio Code. We will continue to explore the example described there, training a simple model to do digit classification on the MNIST dataset.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automation with Azure ML Python SDK
&lt;/h2&gt;

&lt;p&gt;Hyperparameter optimization means that we need to perform a large number of experiments with different parameters. We know that Azure ML allows us to accumulate all experiment results (including achieved metrics) in one place, the Azure ML Workspace. So basically all we need to do is submit a lot of experiments with different hyperparameters.&lt;/p&gt;
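&lt;p&gt;To make this concrete: a manual parameter sweep is just the Cartesian product of the hyperparameter values. Here is a small stand-alone sketch of enumerating the combinations (the values are illustrative, and the actual submission of each experiment is left as a comment, since it requires a workspace):&lt;/p&gt;

```python
from itertools import product

# Hypothetical sweep values for two of the hyperparameters discussed below
grid = {
    '--hidden':  [50, 100, 200],
    '--dropout': [0.5, 0.8],
}

# One dict of script parameters per combination
combos = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(combos))  # 3 * 2 = 6 combinations
# for params in combos:
#     ... build an Estimator with script_params=params and submit it ...
```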

&lt;p&gt;Instead of doing it manually from VS Code, we can do it programmatically through the &lt;a href="https://docs.microsoft.com/python/api/overview/azure/ml/?WT.mc_id=personal-blog-dmitryso"&gt;Azure ML Python SDK&lt;/a&gt;. All operations, including creating the cluster, configuring the experiment, and getting the results, can be done with a few lines of Python code. This code can look a bit complex at first, but once you have written (or understood) it, you will see how convenient it is to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the Code
&lt;/h2&gt;

&lt;p&gt;The code that I refer to in this post is available in the &lt;a href="http://github.com/CloudAdvocacy/AzureMLStarter"&gt;Azure ML Starter&lt;/a&gt; repository. Most of the code that I will describe here is contained inside the &lt;a href="https://github.com/CloudAdvocacy/AzureMLStarter/blob/master/submit.ipynb"&gt;&lt;code&gt;submit.ipynb&lt;/code&gt;&lt;/a&gt; notebook. You can execute it in a few ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you have a local Python environment installed, you can simply start a local instance of Jupyter by running &lt;code&gt;jupyter notebook&lt;/code&gt; in the directory with &lt;code&gt;submit.ipynb&lt;/code&gt;. In this case, you need to &lt;a href="https://docs.microsoft.com/python/api/overview/azure/ml/install/?WT.mc_id=devto-blog-dmitryso"&gt;install the Azure ML SDK&lt;/a&gt; by running &lt;code&gt;pip install azureml-sdk&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;By uploading it to &lt;strong&gt;Notebook&lt;/strong&gt; section in your &lt;a href="http://ml.azure.com/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML Portal&lt;/a&gt; and running it from there. You will also probably need to create a VM for executing notebooks from Azure ML Workspace, but that can be done from the same web interface quite seamlessly.&lt;/li&gt;
&lt;li&gt;By uploading it to &lt;a href="http://aka.ms/whyaznb"&gt;Azure Notebooks&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you prefer working with plain Python files, the same code is available in &lt;a href="https://github.com/CloudAdvocacy/AzureMLStarter/blob/master/submit.py"&gt;submit.py&lt;/a&gt; as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to Workspace and Cluster
&lt;/h2&gt;

&lt;p&gt;The first thing you need to do when using the Azure ML Python SDK is to connect to the Azure ML Workspace. To do so, you need to provide all required parameters, such as the subscription id, workspace and resource group names (&lt;a href="https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py&amp;amp;WT.mc_id=devto-blog-dmitryso"&gt;more info in the docs&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;ws&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Workspace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;subscription_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;resource_group&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;workspace_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The easiest way to connect is to store all required data inside a &lt;code&gt;config.json&lt;/code&gt; file, and then instantiate the workspace reference like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;ws&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Workspace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;from_config&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can download the &lt;code&gt;config.json&lt;/code&gt; file from the Azure Portal by navigating to the Azure ML Workspace page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tq2sWexk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/16z6ec6oufp8lykmimy1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tq2sWexk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/16z6ec6oufp8lykmimy1.PNG" alt="Azure ML Portal Config"&gt;&lt;/a&gt;&lt;/p&gt;
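&lt;p&gt;For reference, &lt;code&gt;config.json&lt;/code&gt; is a small file of the following shape (placeholder values shown -- substitute the identifiers of your own workspace):&lt;/p&gt;

```json
{
    "subscription_id": "<your-subscription-id>",
    "resource_group": "<your-resource-group>",
    "workspace_name": "<your-workspace-name>"
}
```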

&lt;p&gt;Once we have obtained the workspace reference, we can get the reference to the compute cluster that we want to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;cluster_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"AzMLCompute"&lt;/span&gt;
&lt;span class="n"&gt;cluster&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ComputeTarget&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;workspace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cluster_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This code assumes that you have already created the cluster manually (as described in the &lt;a href="http://dev.to/azure/the-best-way-to-start-with-azure-machine-learning-17jl"&gt;previous post&lt;/a&gt;). You can also create the cluster with required parameters programmatically, and the corresponding code is provided in &lt;a href="https://github.com/CloudAdvocacy/AzureMLStarter/blob/master/submit.ipynb"&gt;submit.ipynb&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing and Uploading Dataset
&lt;/h2&gt;

&lt;p&gt;In our MNIST training example, we downloaded the MNIST dataset from the OpenML repository on the Internet inside the training script. If we want to repeat the experiment many times, it would make sense to store the data somewhere close to the compute --- inside the Azure ML Workspace.&lt;/p&gt;

&lt;p&gt;First of all, let's create the MNIST dataset as files on disk in the &lt;code&gt;dataset&lt;/code&gt; folder. To do that, run the &lt;a href="https://github.com/CloudAdvocacy/AzureMLStarter/blob/master/create_dataset.py"&gt;&lt;code&gt;create_dataset.py&lt;/code&gt;&lt;/a&gt; file, and observe how the &lt;code&gt;dataset&lt;/code&gt; folder is created and all data files are stored there.&lt;/p&gt;

&lt;p&gt;Each Azure ML Workspace has a default datastore associated with it. To upload our dataset to the default datastore, we need just a couple of lines of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_default_datastore&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;upload&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'./dataset'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'mnist_data'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Submitting Experiments Automatically
&lt;/h2&gt;

&lt;p&gt;In this example, we will train a two-layer neural network model in Keras, using the &lt;a href="https://github.com/CloudAdvocacy/AzureMLStarter/blob/master/train_keras.py"&gt;train_keras.py&lt;/a&gt; training script. This script can take a number of command-line parameters, which allow us to set different values for the hyperparameters of our model during training:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--data_folder&lt;/code&gt;, which specifies the path to the dataset&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--batch_size&lt;/code&gt; to use (default is 128)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--hidden&lt;/code&gt;, size of the hidden layer (default is 100) &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--dropout&lt;/code&gt;, the dropout rate to use after the hidden layer&lt;/li&gt;
&lt;/ul&gt;
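&lt;p&gt;The actual argument handling lives in &lt;code&gt;train_keras.py&lt;/code&gt;; declaring such parameters typically looks like the sketch below (the defaults match those mentioned above where stated; the &lt;code&gt;--dropout&lt;/code&gt; default here is an assumption, and the real script may differ in details):&lt;/p&gt;

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--data_folder', type=str, help='path to the dataset')
parser.add_argument('--batch_size', type=int, default=128)
parser.add_argument('--hidden', type=int, default=100,
                    help='size of the hidden layer')
parser.add_argument('--dropout', type=float, default=0.5,
                    help='dropout rate after the hidden layer')

# Parse an explicit list here for demonstration; in the script itself,
# parse_args() would read sys.argv
args = parser.parse_args(['--hidden', '200', '--dropout', '0.3'])
print(args.hidden, args.batch_size)  # 200 128
```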

&lt;p&gt;To submit the experiment with given parameters, we first need to create &lt;a href="https://docs.microsoft.com/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py&amp;amp;WT.mc_id=devto-blog-dmitryso"&gt;&lt;code&gt;Estimator&lt;/code&gt;&lt;/a&gt; object to represent our script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;script_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;'--data_folder'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_default_datastore&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="s"&gt;'--hidden'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;est&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Estimator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source_directory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'.'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;script_params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;script_params&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;compute_target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;entry_script&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'train_keras.py'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;pip_packages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'keras'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;'tensorflow'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In this case, we specified just one hyperparameter explicitly, but of course we can pass any parameters to the script to train the model with different hyperparameters. Also, note that the estimator defines the pip (or conda) packages that need to be installed in order to run our script.&lt;/p&gt;

&lt;p&gt;Now, to actually execute the experiment, we need to run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;exp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Experiment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;workspace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'Keras-Train'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;est&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can then monitor the experiment right inside the notebook by printing the &lt;code&gt;run&lt;/code&gt; variable (it is recommended to have the &lt;code&gt;azureml.widgets&lt;/code&gt; extension installed in Jupyter, if you are running it locally), or by going to the &lt;a href="http://ml.azure.com/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML Portal&lt;/a&gt;:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ia3-8qi---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7wqajmgvy78wnafzczkn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ia3-8qi---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7wqajmgvy78wnafzczkn.PNG" alt="Azure ML Portal Experiment"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Hyperparameter Optimization using HyperDrive
&lt;/h2&gt;

&lt;p&gt;Optimization of hyperparameters involves some sort of &lt;strong&gt;parametric sweep search&lt;/strong&gt;, which means that we need to run many experiments with different combinations of hyperparameters and compare the results. This can be done manually using the approach we have just discussed, or it can be automated using a technology called &lt;strong&gt;&lt;a href="https://docs.microsoft.com/azure/machine-learning/how-to-tune-hyperparameters/?WT.mc_id=devto-blog-dmitryso"&gt;Hyperdrive&lt;/a&gt;&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;In Hyperdrive, we need to define &lt;strong&gt;search space&lt;/strong&gt; for hyperparameters, and the &lt;strong&gt;sampling algorithm&lt;/strong&gt;, which controls the way hyperparameters are selected from that search space:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;param_sampling&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;RandomParameterSampling&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
         &lt;span class="s"&gt;'--hidden'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
         &lt;span class="s"&gt;'--batch_size'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt; 
         &lt;span class="s"&gt;'--epochs'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
         &lt;span class="s"&gt;'--dropout'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In our case, the search space is defined by a set of alternatives (&lt;code&gt;choice&lt;/code&gt;), but it is also possible to use continuous intervals with different probability distributions (&lt;code&gt;uniform&lt;/code&gt;, &lt;code&gt;normal&lt;/code&gt;, etc. -- more details &lt;a href="https://docs.microsoft.com/azure/machine-learning/how-to-tune-hyperparameters/?WT.mc_id=devto-blog-dmitryso"&gt;here&lt;/a&gt;). In addition to &lt;strong&gt;Random Sampling&lt;/strong&gt;, it is also possible to use &lt;strong&gt;Grid Sampling&lt;/strong&gt; (for all possible combinations of parameters) and &lt;strong&gt;Bayesian Sampling&lt;/strong&gt;.&lt;/p&gt;
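&lt;p&gt;To build intuition for the difference: grid sampling enumerates every combination, while random sampling draws a fixed number of independent combinations from the space. A tiny stand-in using only the standard library (this is an illustration, not the Azure ML API):&lt;/p&gt;

```python
import random

# The same search space as in the RandomParameterSampling example above
space = {
    '--hidden':     [50, 100, 200, 300],
    '--batch_size': [64, 128],
    '--dropout':    [0.5, 0.8, 1],
}

random.seed(0)  # for reproducibility of this sketch

def random_sample(space, n):
    """Draw n independent hyperparameter combinations."""
    return [{k: random.choice(v) for k, v in space.items()}
            for _ in range(n)]

samples = random_sample(space, 16)
print(len(samples))  # 16 runs, vs 4 * 2 * 3 = 24 for a full grid
```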

&lt;p&gt;In addition, we can also specify an &lt;strong&gt;Early Termination Policy&lt;/strong&gt;. This makes sense if our script reports metrics periodically during execution -- in that case, we can detect that the accuracy achieved by a particular combination of hyperparameters is lower than the median accuracy, and terminate the training early:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;early_termination_policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MedianStoppingPolicy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;hd_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;HyperDriveConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;estimator&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;est&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;hyperparameter_sampling&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;param_sampling&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;early_termination_policy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;primary_metric_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'Accuracy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;primary_metric_goal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PrimaryMetricGoal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MAXIMIZE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;max_total_runs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;max_concurrent_runs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
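The intuition behind `MedianStoppingPolicy` can be sketched in a few lines of plain Python (a simplification, not the SDK's actual implementation): a run is a candidate for cancellation when its best metric so far is worse than the median of the metrics reported by the other runs at the same point:

```python
from statistics import median

def should_terminate(run_best_metric, other_run_metrics):
    """Return True if this run's best accuracy so far is below the
    median of the accuracies reported by other runs."""
    if not other_run_metrics:
        return False  # nothing to compare against yet
    return run_best_metric < median(other_run_metrics)

# Example: other runs' accuracies after the same number of epochs
others = [0.91, 0.94, 0.89, 0.95]  # median is 0.925
print(should_terminate(0.85, others))  # → True  (weak run gets cut early)
print(should_terminate(0.93, others))  # → False (still competitive)
```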



&lt;p&gt;Having defined all the parameters for a hyperdrive experiment, we can submit it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;experiment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Experiment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;workspace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'keras-hyperdrive'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;hyperdrive_run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;experiment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hd_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the &lt;a href="http://ml.azure.com/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML Portal&lt;/a&gt;, hyperparameter optimization is represented by one experiment. To view all results on one graph, select the &lt;strong&gt;include child runs&lt;/strong&gt; checkbox:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sPGsFRJ0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h9xylwmwg4ju0sayry50.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sPGsFRJ0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h9xylwmwg4ju0sayry50.PNG" alt="Hyperdrive Experiment Results"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Choosing the Best Model
&lt;/h2&gt;

&lt;p&gt;We can compare the results and select the best model manually in the portal. In our training script &lt;a href="https://github.com/CloudAdvocacy/AzureMLStarter/blob/master/train_keras.py"&gt;train_keras.py&lt;/a&gt;, after training the model, we stored the result into &lt;code&gt;outputs&lt;/code&gt; folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;hist&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;makedirs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'outputs'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;exist_ok&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'outputs/mnist_model.hdf5'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Having done this, we can now locate the best experiment in the &lt;a href="http://ml.azure.com/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML Portal&lt;/a&gt; and get the corresponding &lt;code&gt;.hdf5&lt;/code&gt; file to be used in inference:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_OYi1Xlg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qnr5ii59u4h98vqur1jq.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_OYi1Xlg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qnr5ii59u4h98vqur1jq.PNG" alt="Azure ML Experiment Output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, we can use &lt;a href="https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML Model Management&lt;/a&gt; to &lt;strong&gt;register&lt;/strong&gt; the model, which would allow us to keep better track of it, and use it during &lt;a href="https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML Deployment&lt;/a&gt;. We can programmatically find the best model and register it using the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;best_run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;hyperdrive_run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_best_run_by_primary_metric&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;best_run_metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;best_run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_metrics&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Best accuracy: {}'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;best_run_metrics&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Accuracy'&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
&lt;span class="n"&gt;best_run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;register_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'mnist_keras'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                        &lt;span class="n"&gt;model_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'outputs/mnist_model.hdf5'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
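Conceptually, "get best run by primary metric" is just an argmax over the child runs' reported metrics -- the sketch below (plain Python with made-up run IDs, not the SDK) shows what `PrimaryMetricGoal.MAXIMIZE` means here:

```python
# Each child run of the hyperdrive experiment reports its primary metric;
# the goal MAXIMIZE selects the run with the largest value.
runs = [
    {"id": "run_1", "Accuracy": 0.972},
    {"id": "run_2", "Accuracy": 0.981},
    {"id": "run_3", "Accuracy": 0.968},
]

best = max(runs, key=lambda r: r["Accuracy"])  # PrimaryMetricGoal.MAXIMIZE
print(best["id"], best["Accuracy"])  # → run_2 0.981
```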



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have learnt how to submit Azure ML Experiments programmatically via the Python SDK, and how to perform hyperparameter optimization using Hyperdrive. While it takes a while to get used to this process, you will soon realize that Azure ML simplifies model tuning compared to doing it "by hand" on a &lt;a href="https://docs.microsoft.com/azure/machine-learning/data-science-virtual-machine/overview/?WT.mc_id=devto-blog-dmitryso"&gt;Data Science Virtual Machine&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/azure/machine-learning/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://dev.to/azure/the-best-way-to-start-with-azure-machine-learning-17jl"&gt;Blog: The Best Way to Start with Azure ML&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://github.com/CloudAdvocacy/AzureMLStarter"&gt;GitHub Repository with Azure ML Starter&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>machinelearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The Best Way to Start With Azure Machine Learning</title>
      <dc:creator>Dmitry Soshnikov</dc:creator>
      <pubDate>Mon, 27 Jan 2020 19:24:11 +0000</pubDate>
      <link>https://dev.to/azure/the-best-way-to-start-with-azure-machine-learning-17jl</link>
      <guid>https://dev.to/azure/the-best-way-to-start-with-azure-machine-learning-17jl</guid>
<description>&lt;p&gt;I know many data scientists, including myself, who do most of their work on a GPU-enabled machine, either locally or in the cloud, through Jupyter Notebooks or some Python IDE. During my two years as an AI/ML software engineer, that is exactly what I was doing: preparing data on one machine without a GPU, and then using a GPU VM in the cloud for training.&lt;/p&gt;

&lt;p&gt;On the other hand, you have probably heard of &lt;a href="https://docs.microsoft.com/azure/machine-learning/?WT.mc_id=devto-blog-dmitryso"&gt;Azure Machine Learning&lt;/a&gt; - a special platform service for doing ML. However, if you start looking at some &lt;a href="https://docs.microsoft.com/azure/machine-learning/tutorial-train-models-with-aml/?WT.mc_id=devto-blog-dmitryso"&gt;getting started tutorials&lt;/a&gt;, you will have the impression that using Azure ML creates a lot of unnecessary overhead, and the process is not ideal. For example, the training script in the example above is created as a text file in one Jupyter cell, without code completion, or any convenient ways of executing it locally or debugging. This extra overhead was the reason we did not use it as much in our projects.&lt;/p&gt;

&lt;p&gt;However, I recently found out that there is a &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-toolsai.vscode-ai#overview"&gt;Visual Studio Code Extension for Azure ML&lt;/a&gt;. With this extension, you can develop your training code right in the VS Code, run it locally, and then submit the same code to be trained on a cluster with just a few clicks of a button. By doing so, you achieve several important benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can spend most of the time locally on your machine, and &lt;strong&gt;use powerful GPU resources only for training&lt;/strong&gt;. The training cluster can be resized automatically on demand, and by setting the minimum number of nodes to 0 you only pay for VMs while they are actually training.&lt;/li&gt;
&lt;li&gt;You &lt;strong&gt;keep all results of your training&lt;/strong&gt;, including metrics and created models, in one central location - no need to keep the record of your accuracy for each experiment manually. &lt;/li&gt;
&lt;li&gt;If &lt;strong&gt;several people work on the same project&lt;/strong&gt; - they can use the same cluster (all experiments will be queued), and they can view each other's experiment results. For example, you can use &lt;strong&gt;Azure ML in a classroom environment&lt;/strong&gt;: instead of giving each student an individual GPU machine, you can create one cluster that serves everyone, and foster competition between students on model accuracy. &lt;/li&gt;
&lt;li&gt;If you need to perform many trainings, for example for &lt;strong&gt;hyperparameter optimization&lt;/strong&gt; - all that can be done with just a few commands, with no need to run a series of experiments manually.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope you are convinced to try Azure ML yourself! Here is the best way to start:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install &lt;a href="http://code.visualstudio.com/?WT.mc_id=devto-blog-dmitryso"&gt;Visual Studio Code&lt;/a&gt;, &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account"&gt;Azure Sign In&lt;/a&gt; and &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-toolsai.vscode-ai#overview"&gt;Azure ML&lt;/a&gt; Extensions&lt;/li&gt;
&lt;li&gt;Clone the repository &lt;a href="https://github.com/CloudAdvocacy/AzureMLStarter"&gt;https://github.com/CloudAdvocacy/AzureMLStarter&lt;/a&gt; - it contains some sample code to train the model to recognize MNIST digits. You can then open the cloned repository in VS Code.&lt;/li&gt;
&lt;li&gt;Read on!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Azure ML Workspace and Portal
&lt;/h2&gt;

&lt;p&gt;Everything in Azure ML is organized around a &lt;strong&gt;Workspace&lt;/strong&gt;. It is the central place where you submit your experiments and store your data and resulting models. There is also a special &lt;a href="http://ml.azure.com/?WT.mc_id=devto-blog-dmitryso"&gt;&lt;strong&gt;Azure ML Portal&lt;/strong&gt;&lt;/a&gt; that provides a web interface to your workspace, from which you can perform many operations, monitor your experiments and metrics, and so on.&lt;/p&gt;

&lt;p&gt;You can either create a workspace through &lt;a href="https://portal.azure.com/?WT.mc_id=devto-blog-dmitryso"&gt;Azure Portal&lt;/a&gt; web interface (see &lt;a href="https://docs.microsoft.com/azure/machine-learning/how-to-manage-workspace/?WT.mc_id=devto-blog-dmitryso"&gt;step-by-step instructions&lt;/a&gt;), or using Azure CLI (&lt;a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace-cli/?WT.mc_id=devto-blog-dmitryso"&gt;instructions&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;az extension add &lt;span class="nt"&gt;-n&lt;/span&gt; azure-cli-ml
az group create &lt;span class="nt"&gt;-n&lt;/span&gt; myazml &lt;span class="nt"&gt;-l&lt;/span&gt; northeurope
az ml workspace create &lt;span class="nt"&gt;-w&lt;/span&gt; myworkspace &lt;span class="nt"&gt;-g&lt;/span&gt; myazml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;A workspace contains &lt;strong&gt;Compute&lt;/strong&gt; resources. Once you have a training script, you can &lt;strong&gt;submit an experiment&lt;/strong&gt; to the workspace and specify a &lt;strong&gt;compute target&lt;/strong&gt; - the experiment will run there, and all its results will be stored in the workspace for future reference.&lt;/p&gt;

&lt;h2&gt;
  
  
  MNIST Training Script
&lt;/h2&gt;

&lt;p&gt;In our example, we will show how to solve the very &lt;a href="https://www.kaggle.com/c/digit-recognizer"&gt;traditional problem of handwritten digit recognition&lt;/a&gt; using the MNIST dataset. In the same manner, you will be able to run any other training script.&lt;/p&gt;

&lt;p&gt;Our sample repository contains a simple MNIST training script, &lt;code&gt;train_local.py&lt;/code&gt;. This script downloads the MNIST dataset from OpenML, and then uses the Scikit-Learn &lt;code&gt;LogisticRegression&lt;/code&gt; class to train a model and print the resulting accuracy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;mnist&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fetch_openml&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'mnist_784'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;mnist&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'target'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;mnist&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'target'&lt;/span&gt;&lt;span class="p"&gt;]])&lt;/span&gt;

&lt;span class="n"&gt;shuffle_index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;permutation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mist&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'data'&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mnist&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'data'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="n"&gt;shuffle_index&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;mnist&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'target'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="n"&gt;shuffle_index&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; 
  &lt;span class="n"&gt;train_test_split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;random_state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;lr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;LogisticRegression&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;y_hat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;acc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;average&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;int32&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_hat&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Overall accuracy:'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Of course, we are using Logistic Regression just as an illustration, not implying that it is a good way to solve the problem...&lt;/p&gt;
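If you want to try the same pipeline quickly without downloading the full 70,000-image MNIST set, a similar run on Scikit-Learn's bundled 8×8 digits dataset finishes in seconds (a local illustration only; the hyperparameters here are arbitrary, not taken from the original script):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small 8x8 digits dataset that ships with scikit-learn -- no download needed
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=42)

lr = LogisticRegression(max_iter=1000)  # raised max_iter so lbfgs converges
lr.fit(X_train, y_train)
y_hat = lr.predict(X_test)
acc = np.average(np.int32(y_hat == y_test))  # same accuracy formula as above

print('Overall accuracy:', acc)
```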

&lt;h2&gt;
  
  
  Running the Script in Azure ML
&lt;/h2&gt;

&lt;p&gt;You can just run this script locally and see the result. If we choose to use Azure ML, however, it gives us two major benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scheduling and running the training on a centralized compute resource, which is typically more powerful than a local computer. Azure ML takes care of packaging our script into a Docker container with the appropriate configuration.&lt;/li&gt;
&lt;li&gt;Logging the results of training into a centralized location inside Azure ML Workspace. To do so, we need to add the following lines of code to our script, to record the metrics:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;azureml.core.run&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Run&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;    
    &lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_submitted_run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'accuracy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;pass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The modified version of our script is called &lt;code&gt;train_universal.py&lt;/code&gt; (it is just a bit more complicated than the code presented above), and it can be run both locally (without Azure ML) and on a remote compute resource.&lt;/p&gt;

&lt;p&gt;To run it on Azure ML from VS Code, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Make sure your Azure Extension is connected to your cloud account. Select the Azure icon in the left menu. If you are not connected, you will see a notification in the bottom right offering to connect (&lt;a href="https://habrastorage.org/webt/7b/ii/u6/7biiu6ktpygayub0ff17-u36om4.png"&gt;see picture&lt;/a&gt;). Click on it and sign in through the browser. You can also press &lt;strong&gt;Ctrl-Shift-P&lt;/strong&gt; to bring up the command palette, and type &lt;strong&gt;Azure Sign In&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After that, you should be able to see your workspace in the &lt;strong&gt;MACHINE LEARNING&lt;/strong&gt; section of Azure bar:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--db7R9m4s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/uf/yu/da/ufyudahlxeed3roay5yppqu_cwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--db7R9m4s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/uf/yu/da/ufyudahlxeed3roay5yppqu_cwq.png" alt="Azure ML Workspace in VS Code"&gt;&lt;/a&gt;&lt;br&gt;
Here you should see different objects inside your workspace: compute resources, experiments, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go back to the list of files, and right-click on &lt;code&gt;train_universal.py&lt;/code&gt; and select &lt;strong&gt;Azure ML: Run as experiment in Azure&lt;/strong&gt;. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GfBPAe07--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/x7/i7/ex/x7i7exvh6uatgqqmhvtte9u89ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GfBPAe07--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/x7/i7/ex/x7i7exvh6uatgqqmhvtte9u89ae.png" alt="Azure ML Workspace in VS Code"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm your Azure subscription and your workspace, and then select &lt;strong&gt;Create new experiment&lt;/strong&gt;:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yUX026iW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/uq/p1/l1/uqp1l1mazrais_juw3zcfegnyds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yUX026iW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/uq/p1/l1/uqp1l1mazrais_juw3zcfegnyds.png" alt="Azure ML Workspace in VS Code"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uYXYdSal--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/hk/of/ff/hkofffhrmy-mapz-zybagzi5pj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uYXYdSal--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/hk/of/ff/hkofffhrmy-mapz-zybagzi5pj4.png" alt="Azure ML Workspace in VS Code"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uppA0zaX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/hd/nb/0c/hdnb0clmrgnq534iaktd20q8w2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uppA0zaX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/hd/nb/0c/hdnb0clmrgnq534iaktd20q8w2u.png" alt="Azure ML Workspace in VS Code"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create new &lt;strong&gt;Compute&lt;/strong&gt; and &lt;strong&gt;compute configuration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compute&lt;/strong&gt; defines a computing resource which is used for training/inference. You can use your local machine, or any cloud resources. In our case, we will use AmlCompute cluster. Please create a scalable cluster of STANDARD_DS3_v2 machines, with min=0 and max=4 nodes. You can do that either from VS Code interface, or from &lt;a href="http://ml.azure.com/?WT.mc_id=devto-blog-dmitryso"&gt;ML Portal&lt;/a&gt;.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---rv4Ptrh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/az/qq/tt/azqqttrje6jx8nsepdycwtosh04.png" alt="Azure ML Workspace in VS Code"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compute Configuration&lt;/strong&gt; defines the options for containers which are created to perform training on a remote resource. In particular, it specifies all libraries that should be installed. In our case, select &lt;em&gt;SkLearn&lt;/em&gt;, and confirm the list of libraries.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jmmNby__--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/0x/wv/u_/0xwvu_iu7tovivowbhmrbjkml2m.png" alt="Azure ML Workspace in VS Code"&gt;
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FaXAgED7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/fx/t-/hv/fxt-hvhaeanmz6_ztcoh1q5tc8u.png" alt="Azure ML Workspace in VS Code"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You will then see a window with JSON description of the next experiment. You can edit the information there, for example change the experiment or cluster name, and tweak some parameters. When you are ready, click on &lt;strong&gt;Submit Experiment&lt;/strong&gt;:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--joFeivEz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/vj/r0/6_/vjr06_o6idgburn_bs84xtau7qe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--joFeivEz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/vj/r0/6_/vjr06_o6idgburn_bs84xtau7qe.png" alt="Azure ML Workspace in VS Code"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After the experiment has been successfully submitted in VS Code, you will get the link to the &lt;a href="http://ml.azure.com/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML Portal&lt;/a&gt; page with experiment progress and results. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2oWg9AGT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/_2/dc/mg/_2dcmguwlzuegyt8feqtmy2fyfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2oWg9AGT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/_2/dc/mg/_2dcmguwlzuegyt8feqtmy2fyfg.png" alt="Azure ML Experiment Result in Azure ML Portal"&gt;&lt;/a&gt;&lt;br&gt;
You can also find your experiment from &lt;strong&gt;Experiments&lt;/strong&gt; tab in &lt;a href="http://ml.azure.com/?WT.mc_id=devto-blog-dmitryso"&gt;Azure ML Portal&lt;/a&gt;, or from &lt;strong&gt;Azure Machine Learning&lt;/strong&gt; bar in VS Code:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZOptBzGn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/sf/aj/zi/sfajzixi7onq59cbfgnjzq2ay7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZOptBzGn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/sf/aj/zi/sfajzixi7onq59cbfgnjzq2ay7u.png" alt="Azure ML Workspace in VS Code"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you want to run the experiment again after adjusting some parameters in your code, this process would be much faster and easier. When you right-click your training file, you will see a new menu option &lt;strong&gt;Repeat last run&lt;/strong&gt; - just select it, and the experiment will be submitted right away. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o_ITr5kJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/uh/u0/vg/uhu0vgjdtifxczq6saeerxhsdys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o_ITr5kJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://habrastorage.org/webt/uh/u0/vg/uhu0vgjdtifxczq6saeerxhsdys.png" alt="Azure ML Workspace in VS Code"&gt;&lt;/a&gt;&lt;br&gt;
You will then see the metric results from all runs on Azure ML Portal, as in the screenshot above.   &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now you know that submitting runs to Azure ML is not complicated, and you get some goodies (like storing all statistics from your runs, models, etc.) for free.&lt;/p&gt;

&lt;p&gt;You may have noticed that running the script on the cluster takes longer than running it locally - it may even take several minutes. Of course, there is some overhead in packaging the script and its environment into a container and sending it to the cloud. If the cluster is set to scale down to 0 nodes automatically, there is additional overhead from VM startup, and all of that is noticeable with a small sample script that otherwise takes a few seconds to execute. However, in real-life scenarios, when training takes tens of minutes and sometimes much more, this overhead becomes almost negligible, especially given the speed improvement you can expect from the cluster. &lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;Now that you know how to submit any script for execution on a remote cluster, you can start taking advantage of Azure ML in your daily work. It allows you to develop scripts on a normal PC, and then schedule them for execution on a GPU VM or cluster automatically, keeping all the results in one place. &lt;/p&gt;

&lt;p&gt;However, there are more advantages to using Azure ML than just those two. Azure ML can also be used for data storage and dataset handling, making it super-easy for different training scripts to access the same data. You can also submit experiments automatically through the API, varying the parameters, and thus perform hyperparameter optimization. Moreover, there is a specific technology built into Azure ML called &lt;a href="https://docs.microsoft.com/azure/machine-learning/how-to-tune-hyperparameters/?WT.mc_id=devto-blog-dmitryso"&gt;&lt;strong&gt;Hyperdrive&lt;/strong&gt;&lt;/a&gt;, which does more clever hyperparameter search. I will talk more about those features and technologies in my next posts.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Useful Resources
&lt;/h2&gt;

&lt;p&gt;You may find the following Microsoft Learn courses useful if you want to know more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/learn/modules/intro-to-azure-machine-learning-service/?WT.mc_id=devto-blog-dmitryso"&gt;Introduction to Azure Machine Learning Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/ru-ru/learn/paths/build-ai-solutions-with-azure-ml-service/?WT.mc_id=devto-blog-dmitryso"&gt;Building AI Solutions with Azure ML Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/ru-ru/learn/modules/train-local-model-with-azure-mls/?WT.mc_id=devto-blog-dmitryso"&gt;Training Local Models with Azure ML Service&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>machinelearning</category>
      <category>vscode</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>8 Reasons You Absolutely Need to Use Azure Notebooks</title>
      <dc:creator>Dmitry Soshnikov</dc:creator>
      <pubDate>Tue, 19 Nov 2019 16:46:36 +0000</pubDate>
      <link>https://dev.to/shwars/8-reasons-you-absolutely-need-to-use-azure-notebooks-3512</link>
      <guid>https://dev.to/shwars/8-reasons-you-absolutely-need-to-use-azure-notebooks-3512</guid>
      <description>&lt;p&gt;If you are a data scientist or machine learning software engineer as I am, you probably write most of your code in Jupyter Notebooks. For those of you who are not -- Jupyter is a great system that allows you to combine markdown-based text and executable code in one web-based and web-editable document called &lt;strong&gt;notebook&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Fl5%2Fz5%2Fqd%2Fl5z5qds6pguhhgrcxjv4zsnlcp4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Fl5%2Fz5%2Fqd%2Fl5z5qds6pguhhgrcxjv4zsnlcp4.gif" alt="Azure Notebooks Intro"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The way most data scientists work is to install their own copy of a Python dev environment (such as &lt;a href="https://anaconda.org/" rel="noopener noreferrer"&gt;Anaconda&lt;/a&gt; or, even better, &lt;a href="https://docs.conda.io/en/latest/miniconda.html" rel="noopener noreferrer"&gt;Miniconda&lt;/a&gt;) on their computer, start a Jupyter server, and then edit/run the code on their own PC/Mac. In more demanding situations, the dev environment can be hosted on some high-performance compute server and accessed through the web. However, &lt;strong&gt;using cloud resources as a notebook dev environment sounds like an even better alternative&lt;/strong&gt;. This is exactly what &lt;strong&gt;&lt;a href="http://notebooks.azure.com/?WT.mc_id=devto-blog-dmitryso" rel="noopener noreferrer"&gt;Azure Notebooks&lt;/a&gt;&lt;/strong&gt; are --- a public Jupyter server hosted in the Azure cloud, which you can use from anywhere through the browser to write your code.&lt;/p&gt;

&lt;p&gt;There are many advantages to using &lt;a href="http://notebooks.azure.com/?WT.mc_id=devto-blog-dmitryso" rel="noopener noreferrer"&gt;Azure Notebooks&lt;/a&gt; instead of a local Jupyter installation, and I will try to cover them here.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Start Coding Immediately
&lt;/h2&gt;

&lt;p&gt;Whether you want to learn Python or do some experimentation with F# --- you typically need to install your development environment first, be that &lt;a href="https://docs.microsoft.com/visualstudio/?WT.mc_id=devto-blog-dmitryso" rel="noopener noreferrer"&gt;Visual Studio&lt;/a&gt; or Anaconda. This requires some time and disk space, and while it will probably pay off if you are into serious development --- spending time on environment setup is not something you want to do just to try a piece of code, or when you drop by a party at your friend's house and want to show them your latest data analysis result. In those cases, you can just log into the online notebook environment and start coding right away in any of the supported languages: &lt;strong&gt;Python&lt;/strong&gt; 2 and 3, &lt;strong&gt;R&lt;/strong&gt; or &lt;strong&gt;F#&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Access to your Code from Anywhere
&lt;/h2&gt;

&lt;p&gt;Coming back to the point of visiting your friend's house --- you probably want to have access to your code immediately, without the need to carry a USB stick or download it from OneDrive. Azure Notebooks allow you to keep all your projects online. Notebooks are organized into &lt;strong&gt;Projects&lt;/strong&gt;, which are similar to GitHub repositories, but without version control, and any project can be made either &lt;strong&gt;private&lt;/strong&gt; or &lt;strong&gt;public&lt;/strong&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  3. Easily Share Code
&lt;/h2&gt;

&lt;p&gt;Azure Notebooks are a great way to &lt;a href="https://docs.microsoft.com/azure/notebooks/quickstart-create-share-jupyter-notebook/?WT.mc_id=devto-blog-dmitryso" rel="noopener noreferrer"&gt;share code&lt;/a&gt; with other people. Each project has a unique link; to share it, just use this link (also making sure the project is marked as public). With the link, other people will be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;view&lt;/strong&gt; your code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/notebooks/quickstart-clone-jupyter-notebook/?WT.mc_id=devto-blog-dmitryso" rel="noopener noreferrer"&gt;clone&lt;/a&gt;&lt;/strong&gt; it into their own copy of your project, and start playing and editing notebooks online&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Fk1%2F2k%2Fom%2Fk12koma63b9p7cz4mefmnrwhcha.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Fk1%2F2k%2Fom%2Fk12koma63b9p7cz4mefmnrwhcha.gif" alt="Azure Notebooks Share"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because, unlike in Google Colab, you share on a per-project basis, you can share several notebooks through one link, and you can also include data, a README, and other useful information in the project.&lt;/p&gt;

&lt;p&gt;If you need to configure your Python environment in some way or &lt;a href="https://docs.microsoft.com/azure/notebooks/install-packages-jupyter-notebook/?WT.mc_id=devto-blog-dmitryso" rel="noopener noreferrer"&gt;install specific packages&lt;/a&gt; - you can do it through &lt;a href="https://docs.microsoft.com/azure/notebooks/quickstart-create-jupyter-notebook-project-environment/?WT.mc_id=devto-blog-dmitryso" rel="noopener noreferrer"&gt;config files&lt;/a&gt;, or by including &lt;code&gt;pip install&lt;/code&gt; commands in the notebook. Installing packages in F# notebooks is done through the &lt;code&gt;Paket&lt;/code&gt; manager, and is documented &lt;a href="https://docs.microsoft.com/azure/notebooks/install-packages-jupyter-notebook/?WT.mc_id=devto-blog-dmitryso" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Write Documented Code / Data-Driven Journalism
&lt;/h2&gt;

&lt;p&gt;A notebook is a great way to add thorough instructions to your code, or to add executable code to your text. There are plenty of scenarios where this can be useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing instructions or explaining concepts related to algorithms. For example, if you want to explain what an affine transformation is, you can provide the explanation first (which can also include formulae, because Azure Notebooks support &lt;strong&gt;LaTeX&lt;/strong&gt;), and then include some executable examples of applying an affine transformation to a sample picture. Readers will not only be able to see how the code works, but will also be able to modify it in place and play with it further&lt;/li&gt;
&lt;li&gt;Writing a text with arguments supported by data, for example in &lt;strong&gt;&lt;a href="https://en.wikipedia.org/wiki/Data-driven_journalism" rel="noopener noreferrer"&gt;data-driven journalism&lt;/a&gt;&lt;/strong&gt; or &lt;strong&gt;&lt;a href="https://en.wikipedia.org/wiki/Computational_journalism" rel="noopener noreferrer"&gt;computational journalism&lt;/a&gt;&lt;/strong&gt;. You can write an article in the form of an Azure Notebook, which will automatically gather data from public sources, produce live graphs, and compute the resulting figures that drive the reader to the conclusion.&lt;/li&gt;
&lt;/ul&gt;
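&lt;p&gt;To illustrate the affine-transformation scenario above, a minimal sketch (using NumPy on 2-D points rather than a full picture) could look like this:&lt;/p&gt;

```python
import numpy as np

def affine_transform(points, A, t):
    """Apply the affine map x -> A @ x + t to an (N, 2) array of 2-D points."""
    return points @ A.T + t

# Rotate the unit square by 90 degrees and shift it one unit to the right
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
A = np.array([[0, -1], [1, 0]], dtype=float)  # 90-degree rotation matrix
t = np.array([1, 0], dtype=float)             # translation vector
moved = affine_transform(square, A, t)
```

&lt;p&gt;In a notebook, the prose explanation, the LaTeX formula &lt;em&gt;x' = Ax + t&lt;/em&gt;, and this code cell sit side by side, and readers can edit &lt;code&gt;A&lt;/code&gt; and &lt;code&gt;t&lt;/code&gt; in place and re-run the cell.&lt;/p&gt;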

&lt;h2&gt;
  
  
  5. Make Presentations
&lt;/h2&gt;

&lt;p&gt;One great feature that differentiates Azure Notebooks from other similar services is the pre-installed &lt;a href="https://rise.readthedocs.io/" rel="noopener noreferrer"&gt;RISE extension&lt;/a&gt;, which allows you to make presentations. You can mark cells as separate slides, or as continuations of the previous slide - and then, in presentation mode, they will appear like an animation. &lt;/p&gt;
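&lt;p&gt;Under the hood (as a sketch of what the Slideshow cell toolbar writes for you), each cell's slide type is stored in the cell metadata, roughly like this:&lt;/p&gt;

```json
{
  "slideshow": {
    "slide_type": "slide"
  }
}
```

&lt;p&gt;Possible values include &lt;code&gt;slide&lt;/code&gt;, &lt;code&gt;subslide&lt;/code&gt;, &lt;code&gt;fragment&lt;/code&gt; (the "continuation" behavior mentioned above), &lt;code&gt;skip&lt;/code&gt; and &lt;code&gt;notes&lt;/code&gt;.&lt;/p&gt;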

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Ft0%2Fnr%2F87%2Ft0nr87u14vezytpqxswdbnz9cnw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Ft0%2Fnr%2F87%2Ft0nr87u14vezytpqxswdbnz9cnw.gif" alt="Azure Notebooks Present"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While you will not be able to make fancy presentations in terms of design, in many cases content and simplicity matter more. Azure Notebooks are especially great for academic-style presentations, particularly because you can use &lt;strong&gt;LaTeX&lt;/strong&gt; formulae inside any text.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Clone-and-Run any GitHub Repo
&lt;/h2&gt;

&lt;p&gt;If you are the owner of a GitHub-hosted Python project, you probably want to give people a simple way to try it out. One of the easiest ways to do so is to provide a set of notebooks, which they can then clone immediately in Azure Notebooks. Azure Notebooks support &lt;a href="https://docs.microsoft.com/ru-ru/azure/notebooks/quickstart-clone-jupyter-notebook/?WT.mc_id=devto-blog-dmitryso" rel="noopener noreferrer"&gt;direct cloning from any GitHub repository&lt;/a&gt; - so all you need to do is place a piece of code like this into the &lt;code&gt;Readme.md&lt;/code&gt; file of your repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;a href="https://notebooks.azure.com/import/gh/&amp;lt;git_user&amp;gt;/&amp;lt;repo&amp;gt;"&amp;gt;
&amp;lt;img src="https://notebooks.azure.com/launch.png" /&amp;gt;&amp;lt;/a&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Fyq%2Flg%2Fvu%2Fyqlgvuw7et09w6xlcsgh3-tf4ww.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Fyq%2Flg%2Fvu%2Fyqlgvuw7et09w6xlcsgh3-tf4ww.gif" alt="Azure Notebooks Clone"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Run the Code on Different Compute
&lt;/h2&gt;

&lt;p&gt;In most data science tasks, you first spend some time writing your code and making it work on small-scale data, and then run it using more powerful compute options. For example, if you need a GPU for neural network training, it may be wise to start development on a non-GPU machine, and then switch to a GPU once your code is ready.&lt;/p&gt;

&lt;p&gt;Azure Notebooks allow you to do this seamlessly. When starting/opening a library, you can choose to run it on &lt;strong&gt;Free Compute&lt;/strong&gt;, or you can select any compatible virtual machine from an Azure subscription associated with your account (by compatible I mean a Data Science Virtual Machine running Ubuntu). So, in most of my tasks, I start with the free compute option, develop most of my code, and then switch to the VM. Azure Notebooks make sure that the same project environment (including notebooks and other project files) is transferred (or, to be more precise, &lt;em&gt;mounted&lt;/em&gt;) to the target VM and used there seamlessly.&lt;/p&gt;

&lt;p&gt;I have to mention that the free compute option you are getting is also quite good, with 4 GB of memory and 1 GB of disk space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Fmk%2Fzl%2Fmn%2Fmkzlmntf4shy1lullku2luwfnke.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhabrastorage.org%2Fwebt%2Fmk%2Fzl%2Fmn%2Fmkzlmntf4shy1lullku2luwfnke.gif" alt="Azure Notebooks Compute"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Teach
&lt;/h2&gt;

&lt;p&gt;I personally teach a couple of courses at a university, and I find Azure Notebooks to be &lt;strong&gt;extremely useful in teaching&lt;/strong&gt;. I also have to give a lot of presentations, labs, and workshops as a Cloud Advocate. Here's how you can use Notebooks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Give lectures&lt;/strong&gt; using the presentation feature. I find Azure Notebooks slides far easier to write and maintain, because you do not need to focus on design, only on content, which stays in markdown format. It is often much easier to manipulate plain text, and inserting mathematical formulae in &lt;strong&gt;TeX&lt;/strong&gt; is far easier than using Word Equations. On the downside, inserting diagrams and pictures is more painful, so do not use Notebooks for marketing presentations. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Write a Textbook with Samples&lt;/strong&gt;. If you write slides in Notebooks, you can also include additional text which is not shown on the slides, but which elaborates more on the topic. Thus &lt;em&gt;the same notebook can be used both as slides and as a textbook&lt;/em&gt;. Moreover, such a textbook will include executable examples, which act as demos or as starting points for students' own work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Give Labs/Exams&lt;/strong&gt;. Prepare all the materials and initial code for your lab/exam in one Azure Notebooks project, and then share it with all students through one link. Your students will then clone the code and start working on it. To collect results, you can get individual project links from the students (if the work is not strictly time-bound), or ask them to send you the &lt;code&gt;.ipynb&lt;/code&gt; file or upload it somewhere (if you want to make sure that students finish working on their code on time).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also use more advanced things like &lt;a href="http://aka.ms/azfn" rel="noopener noreferrer"&gt;Azure Functions&lt;/a&gt; to collect the results from labs, but that is another story, which I will probably also share one day...&lt;/p&gt;

&lt;h2&gt;
  
  
  Some Drawbacks
&lt;/h2&gt;

&lt;p&gt;While Azure Notebooks is a great tool, there are some things you need to keep in mind when using it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Network access&lt;/strong&gt; is somewhat limited. Because with Azure Notebooks you are getting free compute, it would not be very wise to let you do whatever you want on the internet, like sending spam. For this reason, network access inside Azure Notebooks is limited to a certain set of resources, which includes all Azure resources, GitHub, Kaggle, OneDrive, and some more. So, if you want your notebook to get data from outside, you may want to place the data on GitHub/OneDrive, or upload it manually into the project directory of the notebook through the web interface. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inserting images&lt;/strong&gt;/diagrams into the text. Because you enter all the text as plain text, you lack the simple PowerPoint/Word features of copy-pasting images and drawing diagrams. So, if you want to insert a picture or a diagram, you need to export it to JPEG/PNG, upload it somewhere on the internet (I use GitHub most of the time), and insert it into the text using Markdown syntax. In short: if you are creating a marketing/motivational presentation -- use PowerPoint; for a scientific/academic presentation or a university course -- use notebooks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU access&lt;/strong&gt; is not included in the Free Compute option for Azure Notebooks at this time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Examples
&lt;/h2&gt;

&lt;p&gt;A good way to learn about specific features of Azure Notebooks, like installing packages, getting external data, and drawing graphs, is to look at the &lt;a href="https://notebooks.azure.com/#sample-redirect" rel="noopener noreferrer"&gt;sample notebooks&lt;/a&gt;. A great collection of Jupyter samples can be found &lt;a href="https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and remember -- you can run any Jupyter notebook in Azure by uploading the &lt;code&gt;.ipynb&lt;/code&gt; file into a project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Azure Notebooks are a great tool which can be useful in the many scenarios I have tried to cover here. If you have more use cases - feel free to share them in the comments, I would be very interested to hear about them!&lt;/p&gt;

&lt;p&gt;While there are &lt;a href="https://www.dataschool.io/cloud-services-for-jupyter-notebook/" rel="noopener noreferrer"&gt;some other options&lt;/a&gt; for running notebooks in the cloud, including &lt;a href="https://colab.research.google.com/" rel="noopener noreferrer"&gt;Google Colab&lt;/a&gt; and &lt;a href="https://mybinder.org/" rel="noopener noreferrer"&gt;Binder&lt;/a&gt;, the comparison shows that Azure Notebooks has the most useful feature set.&lt;/p&gt;

&lt;p&gt;I hope you will enjoy using Azure Notebooks in your everyday life as much as I do!&lt;/p&gt;

&lt;p&gt;P.S. Some more useful documentation may be found here: &lt;a href="http://aka.ms/aznb" rel="noopener noreferrer"&gt;http://aka.ms/aznb&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>machinelearning</category>
      <category>python</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
