<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shashank Banerjea</title>
    <description>The latest articles on DEV Community by Shashank Banerjea (@codepossible).</description>
    <link>https://dev.to/codepossible</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F152769%2F34782bb9-0a52-453e-b4fb-f4f011526b72.jpeg</url>
      <title>DEV Community: Shashank Banerjea</title>
      <link>https://dev.to/codepossible</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/codepossible"/>
    <language>en</language>
    <item>
      <title>A MongoDB Learning Journey</title>
      <dc:creator>Shashank Banerjea</dc:creator>
      <pubDate>Fri, 02 Jul 2021 15:27:18 +0000</pubDate>
      <link>https://dev.to/codepossible/a-mongodb-learning-journey-3hc</link>
      <guid>https://dev.to/codepossible/a-mongodb-learning-journey-3hc</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This is my tale of fumbling and tumbling into love with MongoDB, the data store for the product we are building at &lt;a href="https://www.introvoke.com"&gt;Introvoke&lt;/a&gt; to power branded virtual and hybrid event experiences, at the right scale, for organizers, OEMs and enterprises.&lt;/p&gt;

&lt;p&gt;For most of my professional life developing applications across different technology stacks, I have primarily used an RDBMS as my data store. Often it was Microsoft SQL Server, since I was primarily a .NET/C# developer working at &lt;a href="https://www.microsoft.com"&gt;Microsoft&lt;/a&gt;. I have also had the pleasure of working with MySQL, PostgreSQL, IBM DB2 and, yes, Oracle, while working for startups, software vendors, educational institutions and large enterprises. They are good, robust products for the solutions they power and the price points they offer.&lt;/p&gt;

&lt;p&gt;Until I joined my new role at Introvoke in late April 2021, my primary exposure to NoSQL technology was via &lt;a href="https://azure.microsoft.com/en-us/services/cosmos-db/"&gt;Azure Cosmos DB&lt;/a&gt;. To appeal to enterprise developers like myself, Cosmos DB offers a SQL-like query language, and that is what I defaulted to whenever I used it.&lt;/p&gt;

&lt;p&gt;The choice to use MongoDB, hosted on &lt;a href="https://www.mongodb.com/cloud/atlas"&gt;MongoDB Atlas&lt;/a&gt;, was made prior to my arrival at Introvoke.&lt;/p&gt;

&lt;p&gt;My primary responsibility at Introvoke is to build and manage analytics, integrations and APIs for the platform.&lt;/p&gt;

&lt;p&gt;The first task I was pulled into was making our platform's consumption calculations faster and more precise, so I had to dive right into learning MongoDB. &lt;em&gt;(Ahh... I had only heard of &lt;strong&gt;tech intensity&lt;/strong&gt; as an industry buzzword from Satya Nadella when I was at Microsoft, and now I was feeling it.)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning Path
&lt;/h2&gt;

&lt;p&gt;This is the path I took. As with anything else in life, there is more than one way, and the best one varies by learning style.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pluralsight Courses
&lt;/h3&gt;

&lt;p&gt;The reason I jumped in here was primarily - HABIT. As an engineer at Microsoft, this was one of many learning resources available to me, and over the years it became my go-to starting point.&lt;/p&gt;

&lt;p&gt;The courses which helped me on my path were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://app.pluralsight.com/library/courses/foundations-document-databases-mongodb/table-of-contents"&gt;Foundations of Document Databases with MongoDB by Pinal Dave&lt;/a&gt;: Pinal Dave has been a well known expert (See: &lt;a href="https://blog.sqlauthority.com/"&gt;SQL Authority&lt;/a&gt;) in the SQL world as well, so I trusted the content authored by Pinal would be high quality. It also seemed to me that Pinal would have insights into both SQL and NoSQL worlds, so easy choice there.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://app.pluralsight.com/library/courses/aggregating-data-across-documents-mongodb/table-of-contents"&gt;Aggregating Data across Documents in MongoDB by Axel Sirota&lt;/a&gt;: Aggregation is a powerful feature in MongoDB. I rave about it in the later section. This course is a good premier to what is possible with it. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://app.pluralsight.com/library/courses/mongodb-nodejs/table-of-contents"&gt;Using MongoDB with Node.js by Jonathan Mills&lt;/a&gt;: Since I was building applications at Introvoke using Node.js, this course was good fit to combine them.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are tons of other courses on Node.js and MongoDB on Pluralsight to explore, but I found these immediately relevant to what I was aiming for, and the course content was up to date.&lt;/p&gt;

&lt;h3&gt;
  
  
  MongoDB University
&lt;/h3&gt;

&lt;p&gt;What better place to learn MongoDB than from the people who made it? That is where &lt;a href="https://university.mongodb.com/"&gt;MongoDB University&lt;/a&gt; comes in.&lt;/p&gt;

&lt;p&gt;Easy-to-follow courses, a well-defined learning path and, best of all, no cost! I believe the end goal of the courses is to obtain MongoDB certification. I started getting links to them via e-mail after signing up for a free MongoDB Atlas account.&lt;/p&gt;

&lt;h3&gt;
  
  
  MongoDB documentation
&lt;/h3&gt;

&lt;p&gt;Believe it or not, the &lt;a href="https://docs.mongodb.com/"&gt;MongoDB documentation&lt;/a&gt; is very good and very rich in examples. In most cases I was able to follow it easily and apply the examples to my own queries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Start - MongoDB and Node.js article and YouTube series
&lt;/h3&gt;

&lt;p&gt;I came across this four-part &lt;a href="https://www.mongodb.com/blog/post/quick-start-nodejs-mongodb--how-to-get-connected-to-your-database"&gt;article&lt;/a&gt; and &lt;a href="https://www.youtube.com/watch?v=fbYExfeFsI0"&gt;YouTube&lt;/a&gt; series on using Node.js and MongoDB by MongoDB's Developer Advocate Lauren Schaefer while looking for a way to react to changes in the data in a MongoDB collection. That capability is &lt;a href="https://docs.mongodb.com/manual/changeStreams/"&gt;change streams&lt;/a&gt; in MongoDB, similar to &lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed"&gt;Change Feed&lt;/a&gt; in Azure Cosmos DB or &lt;a href="https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-data-capture-sql-server?view=sql-server-ver15"&gt;Change Data Capture&lt;/a&gt; in SQL Server.&lt;/p&gt;

&lt;p&gt;I loved Lauren's style of delivery, her deep technical expertise and how beautifully she explains advanced topics with ease and lots of examples. The video I started out with was the last in the series, but I liked it so much that I went back, read the previous three articles and watched the full video series.&lt;/p&gt;
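
&lt;p&gt;As a rough sketch of how change streams are consumed from Node.js (the collection and field names here are my own illustrative assumptions, not from the series): you can pass a small aggregation pipeline to &lt;code&gt;watch()&lt;/code&gt; so that only the changes you care about are surfaced.&lt;/p&gt;

```javascript
// Sketch: build a change-stream pipeline that surfaces only updates
// touching one field. All names here are hypothetical examples.
function buildChangeStreamPipeline(field) {
  return [
    {
      $match: {
        operationType: "update",
        // only changes where this field was among the updated fields
        ["updateDescription.updatedFields." + field]: { $exists: true },
      },
    },
  ];
}

// With the official Node.js driver this would be used roughly as
// (not executed here; change streams need a replica set or Atlas):
//   const stream = db.collection("events").watch(buildChangeStreamPipeline("status"));
//   stream.on("change", function (change) { console.log(change.documentKey); });
```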

&lt;h3&gt;
  
  
  Of course, Stack Overflow
&lt;/h3&gt;

&lt;p&gt;This is probably a no-brainer but worth mentioning anyhow. MongoDB has a rich developer community backing it. If there is anything you are puzzling over, chances are someone has already asked about it, or solved it for you, on &lt;a href="https://www.stackoverflow.com/"&gt;Stack Overflow&lt;/a&gt;. The only caveat is that some solutions relate to older versions of MongoDB, but I have seen the community be very good about making that distinction when responding, especially between MongoDB 3.x and 4.x.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Primary Mental Blockers
&lt;/h2&gt;

&lt;p&gt;As with any new technology, I had to unlearn some old habits. Chief among them was learning to think about storing data differently from the world of &lt;code&gt;SQL JOINS&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;At Introvoke, I inherited a heavily denormalized data store with various collections. But where there were gaps, I kept trying to create JOINs, which was not always easy to pull off syntax-wise for a newbie. Over the past two months, I have gradually become better at it.&lt;/p&gt;

&lt;p&gt;The same thought process was also influencing how I stored data - in a normalized manner, focusing on storing ids instead of embedding documents.&lt;/p&gt;

&lt;p&gt;One article that has helped me design better, and think more in terms of how the data will be accessed rather than how to store it most concisely, is &lt;a href="https://developer.mongodb.com/article/mongodb-schema-design-best-practices/"&gt;MongoDB Schema Design Best Practices by Joe Karlsson&lt;/a&gt;.&lt;/p&gt;
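
&lt;p&gt;To make the embedding-versus-referencing trade-off concrete, here is a minimal sketch (the entity and field names are made up for illustration):&lt;/p&gt;

```javascript
// Referencing: the event stores only the company's id, much like a
// foreign key, and the company is fetched separately (or via $lookup).
const referencedEvent = {
  _id: "evt1",
  name: "Launch Day",
  companyId: "co42",
};

// Embedding: the data the event is usually read together with travels
// inside the same document, so a single read needs no join at all.
const embeddedEvent = {
  _id: "evt1",
  name: "Launch Day",
  company: { _id: "co42", name: "Acme Corp" },
};
```

The article's rule of thumb is to model around how the data is accessed, not around avoiding duplication.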

&lt;p&gt;I think it is a learning curve most developers coming from the RDBMS world will face. As I discovered, arriving at an optimal solution does take some experimentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  My favorite features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  MongoDB Query Language
&lt;/h3&gt;

&lt;p&gt;The MongoDB Query Language (MQL) is very powerful and, in most instances, very intuitive, though it can be verbose at times.&lt;/p&gt;
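
&lt;p&gt;For example, a filter that would be a &lt;code&gt;WHERE&lt;/code&gt; clause in SQL is itself just a document in MQL (the collection and field names below are hypothetical):&lt;/p&gt;

```javascript
// MQL equivalent of a SQL query selecting events whose status is
// "live" and whose attendee count is at least 100: the filter is a
// plain document. (Names are illustrative assumptions.)
const filter = { status: "live", attendees: { $gte: 100 } };

// Passed to the driver as, e.g. (not executed here):
//   db.collection("events").find(filter).toArray();
```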

&lt;h3&gt;
  
  
  Aggregates
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.mongodb.com/manual/aggregation/"&gt;aggregation framework&lt;/a&gt;, in my opinion is the most powerful feature in MongoDB and I use it on almost every functionality, I am building to return data. It can help you shape the data down to how your APIs need to return it as their data contract. &lt;strong&gt;Almost zero ORM.&lt;/strong&gt; Imagine that doing it in SQL world.&lt;/p&gt;

&lt;p&gt;The only place where I have not been able to use the aggregation pipeline is where data is stored across different MongoDB clusters. There, the Node.js application has to stitch together results from different queries, which, to my amazement, is really fast. That is probably a discussion for a different article.&lt;/p&gt;
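
&lt;p&gt;As a rough illustration of shaping data to an API contract inside the pipeline itself (the collection and field names are my own assumptions, not our actual schema):&lt;/p&gt;

```javascript
// Hypothetical pipeline: total watch minutes per event for one company,
// projected into exactly the shape the API response needs.
function buildUsagePipeline(companyId) {
  return [
    { $match: { companyId: companyId } },  // filter as early as possible
    { $group: { _id: "$eventId", minutes: { $sum: "$durationMinutes" } } },
    { $project: { _id: 0, eventId: "$_id", minutes: 1 } }, // rename for the API
    { $sort: { minutes: -1 } },            // busiest events first
  ];
}

// Used as, e.g. (not executed here):
//   db.collection("sessions").aggregate(buildUsagePipeline("co42")).toArray();
```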

&lt;h3&gt;
  
  
  Facets
&lt;/h3&gt;

&lt;p&gt;I discovered this feature of the aggregation framework - &lt;a href="https://docs.mongodb.com/manual/reference/operator/aggregation/facet/"&gt;facet&lt;/a&gt; - two weeks into writing queries and tying them into the API. I found it especially useful in places where I started with the same filtered data but had to run multiple different grouping calculations on it (dimensions, for the data warehouse folks out there). Prior to that, I was writing multiple queries and calling each from the application.&lt;/p&gt;
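
&lt;p&gt;A sketch of that pattern (the groupings and field names are invented for illustration): one &lt;code&gt;$match&lt;/code&gt; feeds several sub-pipelines inside a single &lt;code&gt;$facet&lt;/code&gt; stage, replacing what used to be one query per grouping.&lt;/p&gt;

```javascript
// Hypothetical: filter once, then compute three groupings in one round trip.
function buildFacetPipeline(companyId) {
  return [
    { $match: { companyId: companyId } },
    {
      $facet: {
        byEvent: [{ $group: { _id: "$eventId", total: { $sum: "$durationMinutes" } } }],
        byDay: [{ $group: { _id: "$day", total: { $sum: "$durationMinutes" } } }],
        overall: [{ $count: "sessions" }],
      },
    },
  ];
}
```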

&lt;h2&gt;
  
  
  My Pet Peeves
&lt;/h2&gt;

&lt;h3&gt;
  
  
  BLANK results
&lt;/h3&gt;

&lt;p&gt;In the SQL world, schema enforcement gives immediate feedback on queries with incorrect table or column names. Not so in MongoDB: it returns an empty result, which can become a head-scratcher at times.&lt;/p&gt;

&lt;p&gt;There have been instances where I have misspelled a collection or field name. For example, in some collections the unique identifier for a Company entity is called &lt;code&gt;company&lt;/code&gt;, and in others &lt;code&gt;companyId&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Another is misspelling a collection named &lt;code&gt;EventsAggregates&lt;/code&gt; as &lt;code&gt;EventAggregates&lt;/code&gt; or &lt;code&gt;EventsAggregate&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I do wish there was at least some validation of collection names that could alert the developer to the error. As for document structure, the responsibility lies with the developer (or the development team) to establish consistent naming patterns for fields across collections.&lt;/p&gt;
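
&lt;p&gt;In the meantime, one possible workaround is a small guard in the application itself - a sketch under the assumption that the known names are loaded once at startup, e.g. from &lt;code&gt;db.listCollections()&lt;/code&gt;:&lt;/p&gt;

```javascript
// Fail fast on a misspelled collection name instead of silently
// getting empty results. `knownNames` is assumed to be loaded once
// at startup, e.g. from db.listCollections().toArray().
function assertCollectionExists(knownNames, requested) {
  if (!knownNames.includes(requested)) {
    throw new Error(
      'Unknown collection "' + requested + '". Known: ' + knownNames.join(", ")
    );
  }
  return requested;
}
```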

&lt;h2&gt;
  
  
  My Developer Toolkit
&lt;/h2&gt;

&lt;p&gt;I found the following tools &lt;strong&gt;must-haves&lt;/strong&gt; in my journey of learning MongoDB and building queries for my day-to-day applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  MongoDB Compass
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.mongodb.com/compass/current/"&gt;MongoDB Compass&lt;/a&gt; is a free cross-platform tool from MongoDB. It is great for exploring data and running queries in built in Mongo Shell. I found error reporting when developing queries so much more useful than on MongoDB extension is VS code. However for writing the MQL code, I would advice to go with an editor such as Visual Studio Code or Sublime.&lt;/p&gt;

&lt;h3&gt;
  
  
  MongoDB Extension for Visual Studio Code
&lt;/h3&gt;

&lt;p&gt;If you are developing with Node.js or TypeScript, chances are you are using Visual Studio Code. There is an excellent &lt;a href="https://docs.mongodb.com/mongodb-vscode/"&gt;VS Code Extension for MongoDB&lt;/a&gt; which provides connection management, browsing of MongoDB collections within VS Code, and syntax highlighting for MQL code. The only limitation I found with the extension is that I have not been able to get it to return more than 20 documents for a query. There is a setting when viewing document collections, but it does not seem to apply to custom queries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Journey continues
&lt;/h2&gt;

&lt;p&gt;At the time of writing this article in June 2021, I am only two months into learning this fantastic technology. I am sure I will have more to say in the coming days.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.mongodb.com/"&gt;MongoDB Official Links&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.mongodb.com/"&gt;MongoDB documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://university.mongodb.com/"&gt;MongoDB University&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/blog/post/quick-start-nodejs-mongodb--how-to-get-connected-to-your-database"&gt;Four part article series for MongoDB with Node.js by Lauren Schaefer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.mongodb.com/article/mongodb-schema-design-best-practices/"&gt;MongoDB Schema Design Best Practices by Joe Karlssson&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;a href="https://www.pluralsight.com"&gt;Pluralsight&lt;/a&gt; courses&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://app.pluralsight.com/library/courses/foundations-document-databases-mongodb/table-of-contents"&gt;Foundations of Document Databases with MongoDB by Pinal Dave&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://app.pluralsight.com/library/courses/aggregating-data-across-documents-mongodb/table-of-contents"&gt;Aggregating Data across Documents in MongoDB by Axel Sirota&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://app.pluralsight.com/library/courses/mongodb-nodejs/table-of-contents"&gt;Using MongoDB with Node.js by Jonathan Mills&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Developer Toolkit&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://code.visualstudio.com/"&gt;Visual Studio Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.mongodb.com/mongodb-vscode/"&gt;MongoDB for VS Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.mongodb.com/compass/current/"&gt;MongoDB Compass&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DISCLAIMER&lt;/strong&gt;: &lt;em&gt;Opinions expressed in this article are solely mine and do NOT represent those of my employer.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>nosql</category>
      <category>beginners</category>
      <category>node</category>
    </item>
    <item>
      <title>Azure Data Factory Automation — Adding Log Analytics for Monitoring</title>
      <dc:creator>Shashank Banerjea</dc:creator>
      <pubDate>Sun, 03 Jan 2021 19:01:48 +0000</pubDate>
      <link>https://dev.to/codepossible/azure-data-factory-automation-adding-log-analytics-for-monitoring-35ph</link>
      <guid>https://dev.to/codepossible/azure-data-factory-automation-adding-log-analytics-for-monitoring-35ph</guid>
      <description>&lt;p&gt;One of the challenges I have often faced when automating the deployment of Azure Data Factory has been adding monitoring to the data factory in a scriptable way.&lt;/p&gt;

&lt;p&gt;Monitoring is essential to broad system health - for example, to generate alerts on failures or when a pipeline runs longer than expected. The preferred and documented way of generating these alerts has been to push the logs to Azure Log Analytics and analyze them there, either to raise alerts or to provide broad monitoring integrated with other parts of the infrastructure on Azure.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fdata-factory%2Fmonitor-using-azure-monitor%23enable-diagnostic-logs-via-the-azure-monitor-rest-api"&gt;Azure Data Factory documentation&lt;/a&gt; does provide a method using the Azure Monitor REST API. However it is little unwieldy to use, , especially in CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;There is another method that is actually well documented but not often referred to. It is available in the documentation for &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fazure-monitor%2Fplatform%2Fdiagnostic-settings%3FWT.mc_id%3DPortal-Microsoft_Azure_Monitoring%23create-using-azure-cli"&gt;Azure Monitor&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, associating a Log Analytics workspace as a diagnostics log and metrics sink for Azure Data Factory using the Azure CLI becomes as simple as running the script below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az monitor diagnostic-settings create &lt;span class="se"&gt;\&lt;/span&gt;
 --name LogAnalytics02-Diagnostics &lt;span class="se"&gt;\&lt;/span&gt;
 --resource /subscriptions/&lt;span class="o"&gt;(&lt;/span&gt;your-subscription&lt;span class="o"&gt;)&lt;/span&gt;/resourceGroups/&lt;span class="o"&gt;(&lt;/span&gt;your-resource-group&lt;span class="o"&gt;)&lt;/span&gt;/providers/Microsoft.DataFactory/factories/&lt;span class="o"&gt;(&lt;/span&gt;data-factory-name&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
 --logs &lt;span class="s1"&gt;'[{"category": "PipelineRuns","enabled": true}]'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
 --metrics &lt;span class="s1"&gt;'[{"category": "AllMetrics","enabled": true}]'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
 --workspace /subscriptions/&lt;span class="o"&gt;(&lt;/span&gt;your-subscription&lt;span class="o"&gt;)&lt;/span&gt;/resourcegroups/&lt;span class="o"&gt;(&lt;/span&gt;your-resource-group&lt;span class="o"&gt;)&lt;/span&gt;/providers/microsoft.operationalinsights/workspaces/&lt;span class="o"&gt;(&lt;/span&gt;your-log-analytics-workspace-name&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The prerequisites for running this script are that the Azure Data Factory instance and the Log Analytics workspace are already provisioned.&lt;/p&gt;

&lt;p&gt;Just substitute the subscription and resource values with those that apply to you. The example above adds only the &lt;em&gt;PipelineRuns&lt;/em&gt; logs, but you can add more categories to the JSON string passed to the &lt;em&gt;logs&lt;/em&gt; parameter.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azuredatafactory</category>
      <category>azureloganalytics</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Using Azure CLI with local storage emulator</title>
      <dc:creator>Shashank Banerjea</dc:creator>
      <pubDate>Mon, 02 Dec 2019 04:31:04 +0000</pubDate>
      <link>https://dev.to/codepossible/using-azure-cli-with-local-storage-emulator-40i8</link>
      <guid>https://dev.to/codepossible/using-azure-cli-with-local-storage-emulator-40i8</guid>
      <description>&lt;h1&gt;
  
  
  The Beginnings - Unusual Circumstances
&lt;/h1&gt;

&lt;p&gt;While working with a fellow engineer who uses a virtualized Linux machine - hosted on VirtualBox on a Windows 10 PC - as their primary development environment, we ran into challenges accessing the local Azure storage emulator, &lt;a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azurite"&gt;Azurite&lt;/a&gt;, with &lt;a href="https://azure.microsoft.com/en-us/features/storage-explorer/"&gt;Azure Storage Explorer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The root cause was the corporate proxy settings associated with the laptop, which we &lt;strong&gt;just&lt;/strong&gt; could not resolve.&lt;/p&gt;

&lt;p&gt;A mighty unusual start in a way, but that is the hand we were dealt.&lt;/p&gt;

&lt;h1&gt;
  
  
  Finding the path forward
&lt;/h1&gt;

&lt;p&gt;Our project involved developing Azure Functions using Python, which depended on Azure Storage Queues, and local access was critical for developer isolation and productivity. With Storage Explorer out of the picture, we were left with few options.&lt;/p&gt;

&lt;p&gt;The first: drop Linux as our development environment and move to Windows or macOS, which did not seem to have this issue. Ugly, huh? It also flies right in the face of cross-platform development.&lt;/p&gt;

&lt;p&gt;The second: living the hashtag - #WeAreDevelopers - write some test harness/setup code using the Azure Python SDK and keep it up to date. It was an option, just not the most efficient one.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Middle Ground - Hacked It! Or Really, Standing on the Shoulders of Giants
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest"&gt;Azure CLI&lt;/a&gt; is a cross platform development for developers and administrators to interact with the Azure Cloud. And until now that is how I had been using Azure CLI, to work with Azure Cloud and not for local development.&lt;/p&gt;

&lt;p&gt;So &lt;strong&gt;what if&lt;/strong&gt; there were a way to use the Azure CLI storage commands to work with local storage instead of cloud storage???&lt;/p&gt;

&lt;p&gt;IF Only, &lt;strong&gt;What IF??&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As it turns out, &lt;strong&gt;THERE IS&lt;/strong&gt;. Though not as well documented (as in right-here-in-your-face &lt;em&gt;DOCUMENTED&lt;/em&gt;), it is possible to stitch together a solution using the documentation for the &lt;a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator"&gt;Azure Storage Emulator&lt;/a&gt; and the &lt;a href="https://docs.microsoft.com/en-us/cli/azure/azure-cli-configuration?view=azure-cli-latest"&gt;configuration for the Azure CLI&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  The solution
&lt;/h1&gt;

&lt;p&gt;It turns out to be as simple as setting the value of an environment variable, &lt;em&gt;AZURE_STORAGE_CONNECTION_STRING&lt;/em&gt;, which becomes the default connection string that the Azure CLI uses if a storage account is not specified.&lt;/p&gt;

&lt;p&gt;Here is the command to run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ AZURE_STORAGE_CONNECTION_STRING&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"UseDevelopmentStorage=true"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;AZURE_STORAGE_CONNECTION_STRING
&lt;span class="nv"&gt;$ &lt;/span&gt;az storage queue create &lt;span class="nt"&gt;-n&lt;/span&gt; myqueue
&lt;span class="o"&gt;{&lt;/span&gt;
   &lt;span class="s2"&gt;"created"&lt;/span&gt; : &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;az storage queue create &lt;span class="nt"&gt;-n&lt;/span&gt; anotherqueue
&lt;span class="o"&gt;{&lt;/span&gt;
   &lt;span class="s2"&gt;"created"&lt;/span&gt; : &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;span class="nv"&gt;$ &lt;/span&gt;az storage queue create &lt;span class="nt"&gt;-n&lt;/span&gt; newqueue
&lt;span class="o"&gt;{&lt;/span&gt;
   &lt;span class="s2"&gt;"created"&lt;/span&gt; : &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;span class="nv"&gt;$ &lt;/span&gt;az storage queue list &lt;span class="nt"&gt;--verbose&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  The Conclusion
&lt;/h1&gt;

&lt;p&gt;Unfortunately, the firewall issues with the corporate laptop persisted even with this, so the approach &lt;strong&gt;did not solve&lt;/strong&gt; the original problem.&lt;/p&gt;

&lt;p&gt;But this unusual set of circumstances led me to discover a new way to use the Azure CLI. For example, I can use it to test setup scripts for Azure Storage offline.&lt;/p&gt;

&lt;p&gt;So, today as I write this, I am living the motto of my group within Microsoft - &lt;em&gt;"We reserve the right to become smarter"&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;So my fellow developers - keep hacking and be UNSTOPPABLE!&lt;/p&gt;

</description>
      <category>azcli</category>
      <category>azure</category>
      <category>azurestorage</category>
      <category>emulator</category>
    </item>
    <item>
      <title>Useful Azure Data Explorer queries for Azure Data Factory pipelines</title>
      <dc:creator>Shashank Banerjea</dc:creator>
      <pubDate>Fri, 04 Oct 2019 06:38:22 +0000</pubDate>
      <link>https://dev.to/codepossible/useful-azure-data-explorer-queries-for-azure-data-factory-pipelines-30dl</link>
      <guid>https://dev.to/codepossible/useful-azure-data-explorer-queries-for-azure-data-factory-pipelines-30dl</guid>
      <description>&lt;h2&gt;
  
  
  Monitoring the Azure Data Factory Pipelines
&lt;/h2&gt;

&lt;p&gt;Monitoring Azure Data Factory is enabled by collecting diagnostic logs and posting them to Log Analytics (part of Azure Monitor). Other options include a Storage Account or Event Hub for custom processing. Additional settings provide options to decide what information is captured. It is also possible to have multiple diagnostic settings, sending different data to different collection stores.&lt;/p&gt;

&lt;p&gt;The most readily available monitoring is through a Log Analytics Workspace and the Azure Data Factory Workbook available in the Azure Marketplace. Detailed documentation on enabling monitoring is available &lt;a href="https://docs.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor#monitor-data-factory-metrics-with-azure-monitor" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful Azure Data Explorer Queries
&lt;/h2&gt;

&lt;p&gt;While the workbook is helpful in surfacing some of the key metrics across multiple data factory instances, it is also helpful to write a couple of custom queries in Azure Data Explorer to review the performance statistics of a pipeline run.&lt;/p&gt;

&lt;p&gt;Following the instructions in the link provided in the section above, you will have created a Log Analytics Workspace to store the Azure Data Factory diagnostic data. The data for pipeline runs is available in a table called &lt;em&gt;ADFPipelineRun&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Viewing Duration of Pipeline Runs
&lt;/h3&gt;

&lt;p&gt;One useful query is viewing how long my succeeding pipeline runs are taking. For this example, assume we are monitoring a pipeline called &lt;em&gt;gcs2adlsgen2copy&lt;/em&gt;. The query for that pipeline would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;    &lt;span class="n"&gt;ADFPipelineRun&lt;/span&gt; 
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="nv"&gt;"Succeeded"&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;PipelineName&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="nv"&gt;"gcs2adlsgen2copy"&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;project&lt;/span&gt; &lt;span class="n"&gt;PipelineName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;RunId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;End&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;End&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="k"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The query above returns all the executions of the pipeline (within the time frame set by the query explorer) and how long each took.&lt;/p&gt;

&lt;h3&gt;
  
  
  Viewing number of Pipeline Runs exceeding a threshold
&lt;/h3&gt;

&lt;p&gt;If we have established an SLA for the pipeline run, we can add a threshold value to the query to view runs that exceed it. With an arbitrary threshold of 15 seconds, the query would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;    &lt;span class="n"&gt;ADFPipelineRun&lt;/span&gt; 
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="nv"&gt;"Succeeded"&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;PipelineName&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="nv"&gt;"gcs2adlsgen2copy"&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;End&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="k"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;project&lt;/span&gt; &lt;span class="n"&gt;PipelineName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;RunId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;End&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;End&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="k"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Generating Alerts on missed SLA
&lt;/h4&gt;

&lt;p&gt;To generate an alert from the query above, click the &#8220;+ New Alert&#8221; button at the top of the query window and choose to build a new alert on the custom query.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Frnpt2y920d4stcsm42ra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Frnpt2y920d4stcsm42ra.png" alt="New Alert Button"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Shown below is the alert created for the query above, based on the number of rows returned, evaluated every 24 hours over the past 24 hours.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fdqovu9r41z0ko0xpco1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fdqovu9r41z0ko0xpco1g.png" alt="New Alert Action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Azure Pipelines can be monitored using Azure Monitor, and alerts can be generated from custom Azure Data Explorer queries when a pipeline takes too long or does not run within an expected window. More reasons to rejoice and rest easy, my fellow cloud dwellers.&lt;/p&gt;

</description>
      <category>azuredatafactory</category>
      <category>azuremonitor</category>
      <category>azuredataexplorer</category>
      <category>pipelines</category>
    </item>
    <item>
      <title>Bringing R Workloads to Azure Machine Learning Service</title>
      <dc:creator>Shashank Banerjea</dc:creator>
      <pubDate>Thu, 26 Sep 2019 02:51:48 +0000</pubDate>
      <link>https://dev.to/codepossible/bringing-r-workloads-to-azure-machine-learning-service-2ic</link>
      <guid>https://dev.to/codepossible/bringing-r-workloads-to-azure-machine-learning-service-2ic</guid>
      <description>&lt;h1&gt;
  
  
  Making the case
&lt;/h1&gt;

&lt;p&gt;Two of the most popular platforms for data science workloads are R and Python. Both have large ecosystems of libraries and support, which enable developers and innovators to build solutions quickly and efficiently. Looking more closely, however, it is quite evident that in academic and research communities, R draws at least as many supporters as Python.&lt;/p&gt;

&lt;p&gt;Azure Machine Learning (AML) Service is a hosted service which empowers organizations to streamline the building, training and deployment of machine learning models at scale.&lt;/p&gt;

&lt;p&gt;Azure Machine Learning currently provides direct support for Python and popular ML frameworks like ONNX, PyTorch, scikit-learn, and TensorFlow. Beyond its Studio interface and Jupyter notebooks, additional capabilities are available through a Python SDK.&lt;/p&gt;

&lt;p&gt;This article demonstrates the use of the AML Python SDK to run R Workloads in Azure Machine Learning pipelines.&lt;/p&gt;

&lt;p&gt;This article assumes that the reader has basic knowledge of the R and Python languages, familiarity with Azure Machine Learning Service, and experience using the Azure Portal. It also requires a basic understanding of Docker and container registries.&lt;/p&gt;

&lt;h1&gt;
  
  
  Broadly Speaking
&lt;/h1&gt;

&lt;p&gt;The approach to run R workloads on Azure is enabled by AML’s ability to run workloads in purpose-built Docker containers across different choices of compute.&lt;/p&gt;

&lt;p&gt;The first step is to build a Docker container that can support R workloads. This container packages the R runtime and essential libraries required to execute the R workload.&lt;/p&gt;

&lt;p&gt;The container is built and published to a container repository; in this example we’ll use Azure Container Registry.&lt;/p&gt;

&lt;p&gt;The next step adds execution of the Docker container to an Estimator step, which becomes part of an AML pipeline. Prior to execution, the R script is uploaded to the AML workspace and configured as a parameter to the AML step bootstrap routine, which is written in Python. Finally, this new AML pipeline is published to the AML Service for execution.&lt;/p&gt;

&lt;p&gt;When the pipeline is invoked, it downloads the Docker container, obtains the R script from workspace storage, and executes it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Configuring the Azure Machine Learning Service Workspace
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Azure ML Workspace
&lt;/h2&gt;

&lt;p&gt;To execute the R script described in this article, the Azure Machine Learning Services (AMLS) Workspace should be configured as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workspace provisioned and available.&lt;/li&gt;
&lt;li&gt;A choice of compute successfully provisioned. This article uses the Machine Learning Compute option.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating Service Principal
&lt;/h2&gt;

&lt;p&gt;This article uses the Python SDK to execute code against the AML workspace. To run the code successfully, it also requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Azure AD Service Principal created with a secret.&lt;/li&gt;
&lt;li&gt;The service principal given access to the Azure Machine Learning Workspace as a “Contributor”.&lt;/li&gt;
&lt;/ul&gt;
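&lt;p&gt;With the service principal in place, the workspace handle used by the code later in this article (the &lt;em&gt;aml_workspace&lt;/em&gt; variable) can be obtained through the SDK. The sketch below is an assumption-laden illustration, not part of the original solution; every ID shown is a placeholder to be replaced with your own tenant, application, subscription and workspace details.&lt;/p&gt;

```python
# Sketch only: connect to the AML workspace using the service principal.
# All string values below are placeholders, not real identifiers.
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

sp_auth = ServicePrincipalAuthentication(
    tenant_id="my-tenant-id",
    service_principal_id="my-application-id",
    service_principal_password="my-client-secret")

# Retrieves an existing workspace; the service principal must have
# "Contributor" access, as described above.
aml_workspace = Workspace.get(
    name="my-workspace-name",
    subscription_id="my-subscription-id",
    resource_group="my-resource-group",
    auth=sp_auth)
```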

&lt;h1&gt;
  
  
  The R Docker Container
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Creating the container locally
&lt;/h2&gt;

&lt;p&gt;Since we are using a Python script to bootstrap the execution of the R workload, the Docker container needs to support both R and Python. One popular base image for this purpose is continuumio/miniconda3:4.6.14.&lt;/p&gt;

&lt;p&gt;The Dockerfile we are using for this article is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; continuumio/miniconda3:4.6.14&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;yes &lt;/span&gt;Y | apt-get &lt;span class="nb"&gt;install &lt;/span&gt;build-essential

&lt;span class="k"&gt;RUN &lt;/span&gt;conda &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; r r-essentials
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To create the Docker image, run the following command from the same path as the Docker definition, stored as Dockerfile.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; aml-r &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Pulling in Additional R libraries
&lt;/h2&gt;

&lt;p&gt;There may be scenarios where additional R libraries are required to execute the intended workload. To enable that, the Docker image used for execution of the workload can be extended by adding lines such as these to the Dockerfile.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;R &lt;span class="nt"&gt;--vanilla&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'install.packages(c( "RPostgreSQL", &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "DBI",         &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "lubridate",   &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "RSQLite",     &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "magrittr",    &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "zoo",         &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "functional",  &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "moments",     &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "fpc",         &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "RcppRoll",    &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "cowplot",     &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "tsne",        &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "config",      &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "factoextra",  &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "NMF",         &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "ggcorrplot",  &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s1"&gt;                                        "umap"), repos="http://cran.r-project.org")'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The above shows the Docker image being built with additional R libraries to support PostgreSQL, SQLite and more advanced analytical capabilities.&lt;/p&gt;
&lt;h2&gt;
  
  
  Publish the Docker Container to ACR
&lt;/h2&gt;

&lt;p&gt;The instructions to publish a Docker container image to Azure Container Registry (ACR) are provided at this &lt;a href="https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After authenticating with the Azure Container Registry per those instructions, run the following commands (assuming myregistry is the name of the ACR created):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker tag aml-r myregistry.azurecr.io/aml-r
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This creates an alias for the image in the local repository. Next, push the image to the registry with the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push myregistry.azurecr.io/aml-r
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  Configuring the AML Pipeline Bootstrapper
&lt;/h1&gt;

&lt;p&gt;To execute the R script, we use an initializer script written in Python. This script (called bootstrapper.py) spawns a shell and launches the Rscript executable with the provided R script name.&lt;/p&gt;

&lt;p&gt;Source code for the bootstrapper.py is as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;boot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;ret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;ret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;returncode&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Pipeline step execution was terminated by signal&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
             &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;returncode&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
              &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Pipeline step execution returned&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;ret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;returncode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;OSError&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Execution failed:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;returncode&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;entry_script&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;boot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Rscript&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--no-site-file&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--no-environ&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--no-restore&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;entry_script&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
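&lt;p&gt;The bootstrapper pattern can be exercised locally before involving AML at all. The sketch below reuses the shape of bootstrapper.py but substitutes the Python interpreter for the Rscript executable (the only assumption swapped in), so the return-code plumbing can be verified on any machine with Python installed.&lt;/p&gt;

```python
import subprocess
import sys


def boot(*args):
    # Same shape as bootstrapper.py: run the command, surface its exit status.
    try:
        ret = subprocess.run(args)
    except OSError as e:
        # Raised when the executable itself cannot be launched.
        print("Execution failed:", e, file=sys.stderr)
        return 1
    return ret.returncode


# Stand-in for: boot('Rscript', '--no-site-file', '--no-environ',
#                    '--no-restore', 'hello.r')
exit_code = boot(sys.executable, "-c", "print('hello, world')")
print(exit_code)  # 0 on success
```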

&lt;h1&gt;
  
  
  Running the R code in the AML Pipeline
&lt;/h1&gt;

&lt;p&gt;To enable running arbitrary workloads in an Azure Machine Learning Pipeline, the Python SDK provides us with two constructs – the Estimator and EstimatorStep classes.&lt;/p&gt;

&lt;p&gt;The Estimator class is designed for use with machine learning frameworks that do not already have an Azure Machine Learning pre-configured estimator. It wraps run configuration information to help simplify the tasks of specifying how a script is executed.&lt;/p&gt;

&lt;p&gt;The EstimatorStep class executes the estimator as part of an Azure ML Pipeline.&lt;/p&gt;

&lt;p&gt;The code to create an Estimator and EstimatorStep with the bootstrapper.py initializer file and &lt;em&gt;hello.r&lt;/em&gt; R script in the custom Docker image is shown as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;r_script&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hello.r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

&lt;span class="n"&gt;aml_experiment_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;experimenthellor&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;aml_compute_target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;defaultcompute&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;acr_details&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ContainerRegistry&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;acr_details&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mydockerimageregistry.azurecr.io&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;acr_details&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mydockerimageregistry&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;acr_details&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mysupersecretacrpassword!&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;acr_image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;aml-r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

&lt;span class="n"&gt;estimator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Estimator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source_directory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;src&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;entry_script&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;bootstrapper.py&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;compute_target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;aml_compute_target&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;custom_docker_image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;acr_image&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;image_registry_details&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;acr_details&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;user_managed&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;bootstrap_args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;r_script&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;inputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="n"&gt;step&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;EstimatorStep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;execute-r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;estimator&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;estimator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;estimator_entry_script_arguments&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bootstrap_args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;outputs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;compute_target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;aml_compute_target&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;allow_reuse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For this article, the initializer and R scripts are placed in a sub-folder called src.&lt;/p&gt;
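&lt;p&gt;For reference, the source layout assumed by the Estimator above is:&lt;/p&gt;

```
src/
├── bootstrapper.py   (entry_script for the Estimator)
└── hello.r           (R workload, passed as the first script argument)
```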
&lt;h2&gt;
  
  
  Putting it all together
&lt;/h2&gt;

&lt;p&gt;Once the estimator step is created, create an AML Pipeline and run it as part of the experiment. That is accomplished by the following code:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;aml_pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AmlPipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                 &lt;span class="n"&gt;workspace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;aml_workspace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                 &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;AmlStepSequence&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;step&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
                 &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Run R Workloads&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;aml_run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;published_pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;workspace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;aml_workspace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                       &lt;span class="n"&gt;experiment_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;aml_experiment_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;aml_run&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Tracking Execution of the R Workloads
&lt;/h2&gt;

&lt;p&gt;The execution of the pipeline can be viewed from the Azure Portal as part of an AML Experiment. To do this, log on to the Azure Portal and navigate to the AML Workspace. Click on the “Experiments” option in the left panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F68cnuwntapxnt1b73q82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F68cnuwntapxnt1b73q82.png" alt="AML Workspace"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the experiment created for this workload. In this article, we are using “experimenthellor”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fn3pyjfowirfmzsvkq9ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fn3pyjfowirfmzsvkq9ty.png" alt="Experiment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the experiment name to get the list of runs for the experiment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fdgc0llsleahmh6xkpqft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fdgc0llsleahmh6xkpqft.png" alt="Pipeline Runs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the latest successful run to get the list of steps in the pipeline run. During the first execution of the pipeline, the compute may need time to spin up, so the pipeline run status may show as “NotStarted”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F36bmdxg3kztf78tq6vn6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F36bmdxg3kztf78tq6vn6.png" alt="Pipeline Step"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the pipeline step “execute-r” to obtain the details of the step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Ftcoss12hsj3gm6u3daim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Ftcoss12hsj3gm6u3daim.png" alt="Pipeline Step Details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the “Logs” tab to view the details of the execution of the R script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fds6pvn1u64wlpmrk7ao5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fds6pvn1u64wlpmrk7ao5.png" alt="Pipeline Step Details - Logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the logs, you can see the output of the hello.r script as “[1] hello, world”. If there were errors, they would show up in the logs for troubleshooting.&lt;/p&gt;
&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In addition to supporting Python workloads, Azure Machine Learning Service is flexible enough to execute R workloads as well. That is one more reason to try Azure Machine Learning Service for your next data science project. While this repository offers a simple implementation intended for a quick rollout, more advanced implementations are possible with mounted volumes.&lt;/p&gt;
&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://azure.microsoft.com/en-us/services/machine-learning-service/" rel="noopener noreferrer"&gt;Azure Machine Learning Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py" rel="noopener noreferrer"&gt;AML Python SDK&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Credits
&lt;/h1&gt;

&lt;p&gt;This solution was designed by Josh Lane (@jplane). A special shout-out to Josh for reviewing the contents of the repository and for his technical guidance.&lt;/p&gt;
&lt;h1&gt;
  
  
  Code Repository
&lt;/h1&gt;

&lt;p&gt;The code is available on GitHub. &lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/codepossible" rel="noopener noreferrer"&gt;
        codepossible
      &lt;/a&gt; / &lt;a href="https://github.com/codepossible/AzureMachineLearningServiceWithR" rel="noopener noreferrer"&gt;
        AzureMachineLearningServiceWithR
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      This repository contains guidance to execute R workloads in Azure Machine Learning Service.
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;



</description>
      <category>azuremachinelearningservice</category>
      <category>pipelines</category>
      <category>r</category>
      <category>mlops</category>
    </item>
  </channel>
</rss>
