<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anirudh Mehta</title>
    <description>The latest articles on DEV Community by Anirudh Mehta (@animeh).</description>
    <link>https://dev.to/animeh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1140538%2Fc81301ed-1abd-4940-8582-c6f1fa9a1149.png</url>
      <title>DEV Community: Anirudh Mehta</title>
      <link>https://dev.to/animeh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/animeh"/>
    <language>en</language>
    <item>
      <title>Data Acquisition &amp; Exploration: Exploring 5 Key MLOps Questions using AWS SageMaker</title>
      <dc:creator>Anirudh Mehta</dc:creator>
      <pubDate>Mon, 21 Aug 2023 05:32:18 +0000</pubDate>
      <link>https://dev.to/animeh/data-acquisition-exploration-exploring-5-key-mlops-questions-using-aws-sagemaker-38g9</link>
      <guid>https://dev.to/animeh/data-acquisition-exploration-exploring-5-key-mlops-questions-using-aws-sagemaker-38g9</guid>
      <description>&lt;p&gt;The ’&lt;a href="https://medium.com/towards-artificial-intelligence/31-questions-that-shape-fortune-500-ml-strategy-32af42bd7794"&gt;31 Questions that Shape Fortune 500 ML Strategy&lt;/a&gt;’ highlighted key questions to assess the maturity of an ML system.&lt;/p&gt;

&lt;p&gt;A robust ML platform offers managed solutions to easily address these aspects. In this blog, I will walk through &lt;strong&gt;AWS&lt;/strong&gt; &lt;strong&gt;SageMaker's&lt;/strong&gt; capabilities in addressing these questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;An MLOps workflow consists of a series of steps from data acquisition and feature engineering to training and deployment. As such, instead of covering all aspects in a single blog, we will focus on key questions surrounding &lt;strong&gt;Data Acquisition &amp;amp; Exploration (EDA).&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;▢ [Automation] Does the existing platform help the data scientist quickly analyze and visualize the data and automatically detect common issues?&lt;br&gt;
▢ [Automation] Does the existing platform allow integrating and visualizing the relationships between datasets from multiple sources?&lt;br&gt;
▢ [Collaboration] How can multiple data scientists collaborate in real time on the same dataset?&lt;br&gt;
▢ [Reproducibility] How do you track and manage different versions of acquired datasets?&lt;br&gt;
▢ [Governance &amp;amp; Compliance] How do you ensure that data privacy and security considerations have been addressed during acquisition?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Use Case &amp;amp; Dataset
&lt;/h2&gt;

&lt;p&gt;These questions are best answered in the context of a use case. For this series, we will consider &lt;strong&gt;“Fraud Detection”&lt;/strong&gt; as a use case with very simple rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Any transaction with an amount above 500 is considered fraud&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any transaction outside the user’s billing address is considered fraud&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any transaction outside the normal hours is considered fraud&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
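&lt;p&gt;The rules above can be sketched as a simple predicate. (This is an illustrative sketch: the field names and the 10:00–22:00 “normal hours” window are assumptions that mirror the generation script in this section.)&lt;/p&gt;

```python
from datetime import datetime

def is_fraud(amount, transaction_state, billing_state, transaction_time):
    """Apply the three rules: amount above 500, transaction outside the
    billing state, or transaction outside normal hours (10:00-22:00)."""
    return (
        amount > 500
        or transaction_state != billing_state
        or transaction_time.hour not in range(10, 23)
    )

print(is_fraud(600, "CA", "CA", datetime(2023, 4, 25, 12, 0)))  # amount rule
print(is_fraud(100, "NY", "CA", datetime(2023, 4, 25, 12, 0)))  # address rule
print(is_fraud(100, "CA", "CA", datetime(2023, 4, 25, 23, 0)))  # hours rule
```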

&lt;p&gt;The following script generates customer &amp;amp; transaction datasets with occasional fraudulent events.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Script to generate the transactions dataset
import numpy as np
import pandas as pd
from datetime import datetime, timedelta
import random

np.random.seed(123)

# Define count for the sample dataset
n_customers = 10
n_transactions = 100000

# Define dictionary for sample dataset
states = ['CA', 'NY', 'TX']
cities = ['Los Angeles', 'New York', 'Dallas']
streets = ["Main St", "Oak St", "Pine St", "Maple Ave", "Elm St", "Cedar St"]
zips = [10001, 10002, 90001, 90002, 33101, 33102, 75201, 75202, 60601, 60602]

# Generate customers
customer_df = pd.DataFrame({
    'customer_id': range(n_customers),
    'state': np.random.choice(states, n_customers),
    'city': np.random.choice(cities, n_customers),
    'street': np.random.choice(streets, n_customers),
    'zip': np.random.choice(zips, n_customers)
})
customer_states = dict(zip(customer_df['customer_id'], customer_df["state"]))

# Generate transactions
transaction_df = pd.DataFrame({
    'transaction_id': np.random.choice([random.randint(100000000, 999999999) for i in range(1000)], n_transactions),
    'customer_id': np.random.choice(range(n_customers), n_transactions),
    'amount': [random.uniform(0, 500) if random.random() &amp;lt; 0.9 else random.randint(500, 1000) for i in range(n_transactions)],
    'transaction_time': np.random.choice([datetime(2023, 4, 25, 22, 15, 16) - timedelta(days=random.randint(0, 30), hours=random.randint(0, 12), minutes=random.randint(0, 60)) for i in range(n_transactions)], n_transactions)
})

# Set transaction state to customer state 
transaction_df['transaction_state'] = [customer_states[x] if random.random() &amp;lt; 0.9 else np.random.choice(states) for x in transaction_df['customer_id']]

# Mark transaction as fraud if an outlier
transaction_df['fraud'] = transaction_df.apply(lambda x: random.random() &amp;lt; 0.1 or x['amount'] &amp;gt; 500 or x['transaction_time'].hour &amp;lt; 10 or x['transaction_time'].hour &amp;gt; 22 or x['transaction_state'] != customer_states[x['customer_id']], axis=1)

print(f"Not fraud: {transaction_df['fraud'].value_counts()[False]} \nFraud: {transaction_df['fraud'].value_counts()[True]}")

customer_df.to_csv("customers.csv", index=False)
transaction_df.to_csv("transactions.csv", index=False)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For real-world data, you can refer to the &lt;a href="https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud"&gt;Kaggle Credit Card Fraud Dataset&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;💡 AWS offers a fully managed service for customized fraud detection — &lt;a href="https://aws.amazon.com/fraud-detector/"&gt;Amazon Fraud Detector&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How?
&lt;/h2&gt;

&lt;p&gt;I have tried to structure the article to be easily readable. However, to truly understand SageMaker’s capabilities, I highly recommend taking a hands-on approach.&lt;/p&gt;

&lt;p&gt;For this series, I will be using &lt;a href="https://console.aws.amazon.com/sagemaker"&gt;&lt;strong&gt;SageMaker Studio&lt;/strong&gt;&lt;/a&gt;, a fully managed ML &amp;amp; MLOps IDE. AWS also offers &lt;a href="https://studiolab.sagemaker.aws/"&gt;SageMaker Studio Lab&lt;/a&gt;, a free Jupyter-based IDE environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HNcqx5ot--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4292/1%2AVOV-BWgFXkGBYlsyiHFHNw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HNcqx5ot--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4292/1%2AVOV-BWgFXkGBYlsyiHFHNw.png" alt="" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;💡 AWS SageMaker is offered as part of the &lt;a href="https://aws.amazon.com/sagemaker/pricing/#Amazon_SageMaker_Free_Tier"&gt;free tier&lt;/a&gt; for the first 2 months with various sub-limits. I will include the sub-limits where applicable.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Organize with SageMaker Domain
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;[✓] [Governance &amp;amp; Compliance] How do you ensure that data privacy and security considerations have been addressed during acquisition?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In an enterprise, multiple models are often developed simultaneously. These models are based on different datasets and algorithms and are often managed by different teams. Effective organization and controlled access are critical for efficient management, as well as ensuring overall data privacy and security.&lt;/p&gt;

&lt;p&gt;SageMaker provides the concept of domains to organize ML resources such as notebooks, experiments, and models, and to manage access to them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a domain
&lt;/h3&gt;

&lt;p&gt;Creating a domain in Amazon SageMaker is a quick and straightforward process. The console offers two workflows: Quick Setup (1 min) and Standard Setup (10 min). The latter allows for additional security configurations, such as authentication, encryption, and VPC configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--esB0rq_f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/5756/1%2Apbyn9HlE2_GZ_JugAaktww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--esB0rq_f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/5756/1%2Apbyn9HlE2_GZ_JugAaktww.png" alt="Source: Image by the author." width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Manage access - User profiles&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Amazon SageMaker allows the creation of different profiles based on either custom or predefined personas, such as data scientist, MLOps engineer, or compute role.&lt;br&gt;
These profiles enable an organization to effectively manage permissions and govern access across the platform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Control access to resources such as the SageMaker canvas or a particular bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Control user activities like creating ML jobs or publishing models.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
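&lt;p&gt;For illustration, a user profile’s execution role might carry an IAM policy along these lines. (The action list and bucket name are placeholders for this sketch, not a recommended baseline.)&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowExperimentation",
      "Effect": "Allow",
      "Action": [
        "sagemaker:CreateTrainingJob",
        "sagemaker:CreateProcessingJob"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowDatasetBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-ml-datasets",
        "arn:aws:s3:::my-ml-datasets/*"
      ]
    }
  ]
}
```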

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vytVYJRo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3328/1%2AQxdKRBpkndtbqKuXGpF6nw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vytVYJRo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3328/1%2AQxdKRBpkndtbqKuXGpF6nw.png" alt="Source: Image by the author." width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Explore with SageMaker Data Wrangler
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;[✓] [Automation]&lt;/em&gt; Does the existing platform help the data scientist quickly analyze and visualize the data and automatically detect common issues?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In any data science exercise, the first step is to make sense of the data and identify correlations, patterns, and outliers. The success of a model depends on the quality of the dataset, making this a crucial step.&lt;/p&gt;

&lt;p&gt;SageMaker Data Wrangler simplifies and accelerates this process.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;⚠️ The Data Wrangler’s free tier provides only 25 hours of ml.m5.4xlarge instances per month for 2 months. Additionally, there are associated costs for reading and writing to S3.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Importing the data
&lt;/h3&gt;

&lt;p&gt;Data Wrangler supports importing data from various sources, such as Amazon S3, Redshift, Snowflake, and more. For this article, I have already uploaded the customer and transaction datasets generated previously to Amazon S3. I have also granted SageMaker’s user profile access to this bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--roPPugLB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/5324/1%2AZWxqJ1RX6yLhxTfaiqWu8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--roPPugLB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/5324/1%2AZWxqJ1RX6yLhxTfaiqWu8g.png" alt="Source: Image by the author." width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Automated Analysis
&lt;/h3&gt;

&lt;p&gt;Out of the box, Data Wrangler automatically identifies the data types of various columns within the uploaded data. Additionally, Data Wrangler offers built-in capabilities such as data quality and insights reports.&lt;/p&gt;

&lt;p&gt;Let’s run it against our target column - “fraud”, and review the insights it automatically generates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JOXDK5n9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/5752/1%2AdzCcW7sEArFIl71DgXu3EA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JOXDK5n9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/5752/1%2AdzCcW7sEArFIl71DgXu3EA.png" alt="Source: Image by the author." width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Dataset Statistics: Statistical summaries of the dataset — feature count, count of valid and invalid records, and feature type distribution. It found 6 features and no duplicate or invalid records.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Target Column Distribution: Understand any imbalances in the dataset.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Feature Summary: Predictive power of individual features. As expected, the features — amount, time, and state play the most important role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Feature Distribution: Distribution of individual features w.r.t the target label.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Quick Model: Estimates how well a model trained on this dataset might perform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confusion Matrix: Performance of the quick model at classifying transactions as fraud or not.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
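&lt;p&gt;Outside Data Wrangler, a rough manual equivalent of the first few checks can be sketched with pandas. (The column names follow the generated &lt;code&gt;transactions.csv&lt;/code&gt;; a tiny inline sample stands in for the real file.)&lt;/p&gt;

```python
import pandas as pd

# Tiny inline sample with the same columns as the generated transactions.csv
transaction_df = pd.DataFrame({
    "transaction_id": [1, 2, 3, 4],
    "customer_id": [0, 1, 0, 2],
    "amount": [120.0, 750.0, 45.5, 980.0],
    "transaction_state": ["CA", "NY", "CA", "TX"],
    "fraud": [False, True, False, True],
})

# 1. Dataset statistics: feature count, invalid (null) records, feature types
print("features:", transaction_df.shape[1])
print("null records:", transaction_df.isna().any(axis=1).sum())
print(transaction_df.dtypes)

# 2. Target column distribution: spot class imbalance
print(transaction_df["fraud"].value_counts(normalize=True))

# 3. A crude proxy for a feature's predictive power: correlation with the target
print(transaction_df["amount"].corr(transaction_df["fraud"].astype(int)))
```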

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;[✓] [Automation]&lt;/em&gt; Does the existing platform allow integrating and visualizing the relationships between datasets from multiple sources?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;SageMaker Data Wrangler enables data scientists to quickly join two datasets and visualize them together. In this case, we join the customer data with the transaction data. Once joined, data scientists can run the same automated analysis on the combined data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AfCxuU6m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4176/1%2AabguTTmtd422O2kuB2tajw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AfCxuU6m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4176/1%2AabguTTmtd422O2kuB2tajw.png" alt="Source: Image by the author." width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will explore this further in the next blog on “Data Transformation and Feature Engineering”.&lt;/p&gt;




&lt;h2&gt;
  
  
  Collaborate with SageMaker Spaces
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;[✓] [Collaboration]&lt;/em&gt; How can multiple data scientists collaborate in real time on the same dataset?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;SageMaker Spaces enables users within a domain to collaborate and access the same resources, including notebooks, files, experiments, and models. It allows multiple users to access, edit and review the same notebooks in real time within a shared studio application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VWRoVxft--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4144/1%2AhadTUVezUFtAkG8I8TXolA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VWRoVxft--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4144/1%2AhadTUVezUFtAkG8I8TXolA.png" alt="Source: Image by the author." width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ct5G53L3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/5564/1%2AWhVy1RZzmm8ambhqJ1zo8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ct5G53L3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/5564/1%2AWhVy1RZzmm8ambhqJ1zo8w.png" alt="Source: [Organize machine learning development using shared spaces in SageMaker Studio for real-time collaboration](https://aws.amazon.com/blogs/machine-learning/organize-machine-learning-development-using-shared-spaces-in-sagemaker-studio-for-real-time-collaboration/) (AWS Blog)" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Track with SageMaker Lineage &amp;amp; DVC
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;[✓] [Reproducibility]&lt;/em&gt; How do you track and manage different versions of acquired datasets?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Version control is a well-known concept in the coding and development world, but it is also essential for data science activities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dvc.org/"&gt;DVC (Data Version Control)&lt;/a&gt; is a popular open-source tool designed for the same purpose. It allows you to track and manage versions of your datasets, features, and models.&lt;/p&gt;

&lt;p&gt;DVC integrates with Git and allows data scientists to store and track references to the data stored in various locations such as Amazon S3, HTTP, or on disk.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Register S3 remote
dvc remote add -d myremote s3://&amp;lt;bucket&amp;gt;/&amp;lt;optional_key&amp;gt;

# Track file
# This creates ".dvc" file with information necessary for tracking
dvc add data/raw.csv
git add data/raw.csv.dvc # Version control the ".dvc" file like any other Git file

# Push data file to S3
dvc push

# Pull data files from S3
dvc pull

# Switch between versions
dvc checkout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;[✓] [Governance &amp;amp; Compliance]&lt;/em&gt; How do you ensure that data privacy and security considerations have been addressed during acquisition? (cont.)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;DVC enables you to track dataset versions. However, we may need to track additional information for governance and for understanding usage:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Where did the raw data originate from?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Who owns or manages the raw data?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What transformations and preprocessing are being applied to the raw data?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What models use a data set?&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;SageMaker Lineage is capable of providing answers to several of these questions. We will explore this further in the next blog on “Data Transformation and Feature Engineering”.&lt;/p&gt;

&lt;p&gt;Here’s a quick example of creating a raw data artifact entity capturing source, origin, and owner information.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a artifact
aws sagemaker create-artifact 
--artifact-name raw-data 
--source SourceUri=s3://my_bucket/training.csv 
--artifact-type raw-data 
--properties owner=anirudh,topic=mlops,orgin=script # Additional details
--tags Key=cost_center,Value=research 

{
    "ArtifactArn": "arn:aws:sagemaker:us-east-1:removed:artifact/24c7ff167309de3b466aab30f95a8810"
}

# Describe a artifact
aws sagemaker decribe-artifact --artifact-arn arn:aws:sagemaker:us-east-1:removed:artifact/24c7ff167309de3b466aab30f95a8810
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  &lt;em&gt;⚠️ Clean-up&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;If you have been following along with the hands-on exercises, make sure to clean up to avoid charges.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Hxuq-U8Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3460/1%2AvKz7JK1Y4fglPxW2vcKutg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hxuq-U8Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3460/1%2AvKz7JK1Y4fglPxW2vcKutg.png" alt="Source: Image by the author." width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In summary, AWS SageMaker Data Wrangler greatly accelerates the complex task of data exploration for data scientists.&lt;/p&gt;

&lt;p&gt;In the next article, I will explore how SageMaker can assist with Data Transformation and Feature Engineering.&lt;/p&gt;

</description>
      <category>mlops</category>
      <category>machinelearning</category>
      <category>aws</category>
      <category>sagemaker</category>
    </item>
    <item>
      <title>Solving the Image Promotion Challenge Across Multi-Environment with ArgoCD</title>
      <dc:creator>Anirudh Mehta</dc:creator>
      <pubDate>Fri, 18 Aug 2023 05:27:48 +0000</pubDate>
      <link>https://dev.to/animeh/solving-the-image-promotion-challenge-across-multi-environment-with-argocd-4lf8</link>
      <guid>https://dev.to/animeh/solving-the-image-promotion-challenge-across-multi-environment-with-argocd-4lf8</guid>
      <description>&lt;p&gt;When designing cloud environments, it is often recommended to set up multiple accounts. While this approach offers resource independence, isolation, better security, access, and billing boundaries, it also comes with its own set of issues. One such challenge is efficiently promoting and tracking applications between different environments.&lt;/p&gt;

&lt;p&gt;The GitOps approach, along with tools like ArgoCD and Kustomize, simplifies tracking and promotion. However, &lt;strong&gt;image promotion is often overlooked.&lt;/strong&gt; Many enterprises adopt a shared image registry, but it soon becomes bloated with many unused versions.&lt;/p&gt;

&lt;p&gt;This article explores a recent journey during which we examined the problem of promoting images and the innovative solution that was adopted, all while adhering to the principles of GitOps.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Challenge&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Recently, a scenario was presented where a company utilizing the &lt;strong&gt;shared ECR registry&lt;/strong&gt; was considering &lt;strong&gt;migrating to separate ECR registries&lt;/strong&gt; for cost-effectiveness, better governance, and streamlined lifecycle management.&lt;/p&gt;

&lt;p&gt;Here is a look at the existing state of infrastructure and pipelines:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yYv6cRTl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2906/0%2ATWK1b0ETwbBQkLm5" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yYv6cRTl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2906/0%2ATWK1b0ETwbBQkLm5" alt="Source: Image by the author." width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each environment has a dedicated AWS account with its own cluster and ArgoCD installation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kustomize is used for managing configuration differences across environments.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── infra
  │   ├── charts/
  └── overlays
      ├── dev
      │   ├── patch-image.yaml
      └── production
          ├── patch-image.yaml
          └── patch-replicas.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Jenkins is used to continuously build new images in the development environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, none of the tools provided out-of-the-box support for promoting images between ECR registries, leading to the exploration of innovative solutions with some considerations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Considerations:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Selective Promotion&lt;/strong&gt;: The company’s application landscape is composed of multiple modules and teams with different timelines. Therefore, it is necessary to support the promotion of images for only selected modules in each release.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimized Storage:&lt;/strong&gt; Environments such as production only need to store promoted image versions, reducing clutter and optimizing resource usage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image Tag and Digest Replication:&lt;/strong&gt; Replicating image tags and digests between ECR registries is critical for security and traceability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Potential Solutions
&lt;/h2&gt;

&lt;p&gt;At the outset, two potential solutions were proposed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ECR Cross Account Replication:&lt;/strong&gt; AWS’s ECR natively supports replicating images between two accounts. However, as of now, there is &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/replicate-filtered-amazon-ecr-container-images-across-accounts-or-regions.html#replicate-filtered-amazon-ecr-container-images-across-accounts-or-regions-architecture:~:text=However%2C%20there%20is%20no%20way%20to%20filter%20the%20images%20that%20are%20copied%20across%20AWS%20Regions%20or%20accounts%20based%20on%20any%20criteria.%C2%A0"&gt;no way to filter the images&lt;/a&gt; being replicated based on any criteria. Alternatively, AWS recommends &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/replicate-filtered-amazon-ecr-container-images-across-accounts-or-regions.html"&gt;event-based design&lt;/a&gt; to selectively replicate images based on tag naming conventions. However, since we are not aware of which versions will be promoted, it requires an additional step of retagging before promotion.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Jenkins Promotion Pipeline&lt;/strong&gt;: A Jenkins pipeline that parses the Kustomize overlays for image tags and programmatically replicates them.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
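&lt;p&gt;As a rough sketch of what such a pipeline step might do, the snippet below pulls image references out of a Kustomize &lt;code&gt;images&lt;/code&gt; override. (The overlay content and registry names are hypothetical; a real pipeline would use a proper YAML parser.)&lt;/p&gt;

```python
import re

# Hypothetical overlays/production/kustomization.yaml content
kustomization = """
images:
  - name: registry.example.com/app-frontend
    newTag: "1.4.2"
  - name: registry.example.com/app-backend
    newTag: "2.0.1"
"""

# Extract (image, tag) pairs so the pipeline knows which versions to replicate
pairs = re.findall(r'name:\s*(\S+)\s+newTag:\s*"?([\w.\-]+)', kustomization)
for image, tag in pairs:
    print(f"promote {image}:{tag}")
```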

&lt;p&gt;Both options are viable, but they introduce an additional layer of complexity to the promotion process. Additionally, you need to ensure that images are promoted before the Kustomize overlays are updated.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Winning Strategy: ArgoCD PreSync Job
&lt;/h2&gt;

&lt;p&gt;In this scenario, the client was already using ArgoCD for continuous deployment of the application changes. Therefore, we decided to also assign &lt;strong&gt;ArgoCD the responsibility of delivering images&lt;/strong&gt; to the target environment cluster.&lt;/p&gt;

&lt;p&gt;ArgoCD supports &lt;strong&gt;hooks&lt;/strong&gt; that allow you to run custom scripts before or after a deployment or synchronization process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--05vp3Ib7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2906/0%2A25hRhTrAz9YPYGFT" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--05vp3Ib7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2906/0%2A25hRhTrAz9YPYGFT" alt="Source: Image by the author." width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. ECR Repository Permission: Authorize cross-account pull access for Docker images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To enable ArgoCD to pull images from the source ECR, we need to add a resource-based policy (&lt;code&gt;cross-account-ecr-read-policy.json&lt;/code&gt;) to our repository. Replace &lt;code&gt;{DESTINATION_ACCOUNT}&lt;/code&gt; with your destination account ID.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// cross-account-ecr-read-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPull",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::{DESTINATION_ACCOUNT}:root"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the policy to ECR repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr set-repository-policy --repository-name example 
--policy-text "file://cross-account-ecr-read-policy.json"

// For multiple repositories:
aws ecr describe-repositories --query "repositories[].[repositoryName]" 
| xargs -I {} aws ecr set-repository-policy --repository-name {} --policy-text "file://cross-account-ecr-read-policy.json"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. PreSync Hook Job: Copy image between accounts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We use &lt;a href="https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md"&gt;Crane&lt;/a&gt; to copy images without changing their tag and digest.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The PreSync Hook job is stored in Git along with the other application manifests and monitored by ArgoCD. &lt;strong&gt;ArgoCD runs the job before synchronizing the changes.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The source account is the Development or DevOps account from which the images will be pulled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The destination account is the Production or target environment where the image needs to be copied.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Helm template example
apiVersion: batch/v1
kind: Job
metadata:
  generateName: argo-presync-promote-image-
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      volumes:
        - name: creds
          emptyDir: {}
      initContainers:
        - name: aws-creds
          image: public.ecr.aws/aws-cli/aws-cli
          command:
            - sh
            - -c
            - |
              aws ecr get-login-password &amp;gt; /creds/ecr
          volumeMounts:
            - name: creds
              mountPath: /creds
      containers:
        # For brevity, I have assumed that all Helm values are available on the root.
        - name: promote-image
          image: gcr.io/go-containerregistry/crane:debug
          command:
            - sh
            - -c
            - |
              # Log in to both ECR registries
              cat /creds/ecr | crane auth login {{.Values.sourceAccount}}.dkr.ecr.us-east-1.amazonaws.com -u AWS --password-stdin
              cat /creds/ecr | crane auth login {{.Values.destinationAccount}}.dkr.ecr.us-east-1.amazonaws.com -u AWS --password-stdin
              # Copy the image from the source account to the destination account
              crane copy {{.Values.image | replace .Values.destinationAccount .Values.sourceAccount}} {{.Values.image}}
          volumeMounts:
            - name: creds
              mountPath: /creds
      restartPolicy: Never
  backoffLimit: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using the PreSync hook, the team was able to promote images on demand, which reduced production promotion to a single step: updating the Kustomize overlays.&lt;/p&gt;
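&lt;p&gt;As an illustration, a production overlay along these lines (all names and account IDs are hypothetical) becomes the only file that changes during a promotion:&lt;/p&gt;

```yaml
# Hypothetical production overlay (kustomization.yaml): promotion is a
# one-line image-tag bump; the PreSync hook copies the image across
# accounts before ArgoCD syncs this change.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: my-app
    newName: 222222222222.dkr.ecr.us-east-1.amazonaws.com/my-app
    newTag: v1.4.2   # updating this tag is the single promotion step
```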

&lt;p&gt;I would love to hear about other options that you have adopted. For instance, an alternative approach could be to use &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers"&gt;Kubernetes Dynamic Admission Control&lt;/a&gt; to intercept and pull missing images on demand.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>31 Questions that Shape Fortune 500 ML Strategy</title>
      <dc:creator>Anirudh Mehta</dc:creator>
      <pubDate>Thu, 17 Aug 2023 07:00:22 +0000</pubDate>
      <link>https://dev.to/animeh/31-questions-that-shape-fortune-500-ml-strategy-m7f</link>
      <guid>https://dev.to/animeh/31-questions-that-shape-fortune-500-ml-strategy-m7f</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U_mS9o96--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/9176/1%2AOZjCXATUz8UKNwjiTS0olg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U_mS9o96--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/9176/1%2AOZjCXATUz8UKNwjiTS0olg.png" alt="Source: Image by the author." width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In May 2021, &lt;a href="https://www.linkedin.com/in/ACoAAAlCOxQB-ZF85jicy5HcFqTzwWx0dLAqBLA"&gt;Khalid Salama&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/ACoAAAEhCpoBLfy6_6jmTyFGqPGGL7rXv2ylgPA"&gt;Jarek Kazmierczak&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/ACoAABQd3j8BwwaOn7mbhKY7bOthfBWSeWznM4U"&gt;Donna Schut&lt;/a&gt; published a white paper titled “&lt;a href="https://cloud.google.com/resources/mlops-whitepaper"&gt;Practitioners Guide to MLOps&lt;/a&gt;”. The white paper goes into great depth on the concept of MLOps, its lifecycle, capabilities, and practices. There are hundreds of blogs written on the same topic. As such, my intention with this blog is not to duplicate those definitions but rather to encourage you to question and evaluate your current ML strategy.&lt;/p&gt;

&lt;p&gt;I have listed a few critical questions that I often pose to myself and the concerned stakeholders on a modernization journey. While ML algorithms &amp;amp; code play a crucial role in success, they are just a small piece of a larger puzzle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NfJN5rjB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AvCH1uKlAJw0C5ZiMH-UILg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NfJN5rjB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AvCH1uKlAJw0C5ZiMH-UILg.png" alt="Source: Image by the author." width="568" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To consistently achieve the same success, a vast array of cross-cutting concerns needs to be addressed. I have therefore grouped the questions under the different stages of an ML delivery pipeline. The questions are in no way targeted at the particular role owning a stage; they apply to everyone involved in the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key objectives:
&lt;/h2&gt;

&lt;p&gt;Before diving into the questions, it’s important to understand the evaluation lens through which they are written. If you have additional objectives, you may want to add more questions to the list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automation&lt;/strong&gt;&lt;br&gt;
✓ The system must emphasize automation.&lt;br&gt;
✓ The goal should be to automate all aspects, from data acquisition and processing to training, deployment, and monitoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collaboration&lt;/strong&gt;&lt;br&gt;
✓ The system should promote collaboration between data scientists, engineers, and the operations team.&lt;br&gt;
✓ It should allow data scientists to effectively share the artifacts and lineage created during the model-building process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reproducibility&lt;/strong&gt;&lt;br&gt;
✓ The system should allow for easy replication of the current state and progress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Governance &amp;amp; Compliance&lt;/strong&gt;&lt;br&gt;
✓ The system must ensure data privacy, security, and compliance with relevant regulations and policies.&lt;/p&gt;




&lt;h2&gt;
  
  
  Critical Questions:
&lt;/h2&gt;

&lt;p&gt;Now that we have defined the objectives, it's time to look at the key questions for evaluating the effectiveness of your current AI strategy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Acquisition &amp;amp; Exploration (EDA)&lt;/strong&gt;&lt;br&gt;
Data is the fundamental building block of any ML system. A data scientist must understand it and identify and address common issues like duplication, missing data, imbalance, and outliers. A significant amount of a data scientist's time goes into this data-exploration activity. Our strategy should therefore focus on supporting and accelerating these activities and answer the following questions:&lt;/p&gt;

&lt;p&gt;▢ &lt;em&gt;[Automation]&lt;/em&gt; Does the existing platform help the data scientist quickly analyze and visualize the data and automatically detect common issues?&lt;br&gt;
▢ &lt;em&gt;[Automation]&lt;/em&gt; Does the existing platform allow integrating and visualizing the relationships between datasets from multiple sources?&lt;br&gt;
▢ &lt;em&gt;[Collaboration]&lt;/em&gt; How can multiple data scientists collaborate in real time on the same dataset?&lt;br&gt;
▢ &lt;em&gt;[Reproducibility]&lt;/em&gt; How do you track and manage different versions of acquired datasets?&lt;br&gt;
▢ &lt;em&gt;[Governance &amp;amp; Compliance]&lt;/em&gt; How do you ensure that data privacy and security considerations have been addressed during acquisition?&lt;/p&gt;
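&lt;p&gt;As an illustration of the automated checks behind the first question, here is a minimal, dependency-free Python sketch (with hypothetical column names and data) that flags duplicates, missing values, and class imbalance. A managed platform would surface the same signals without hand-written code.&lt;/p&gt;

```python
# A minimal sketch (pure Python, hypothetical data) of the kind of
# automated checks a data scientist runs during EDA: duplicates,
# missing values, and class imbalance.
from collections import Counter

def eda_report(rows, label_key):
    """Summarize duplicates, missing values, and label balance."""
    seen, duplicates = set(), 0
    missing = Counter()
    labels = Counter()
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            duplicates += 1
        seen.add(key)
        for col, val in row.items():
            if val is None:
                missing[col] += 1
        labels[row[label_key]] += 1
    majority = max(labels.values()) / sum(labels.values())
    return {"duplicates": duplicates,
            "missing": dict(missing),
            "majority_class_share": round(majority, 2)}

rows = [
    {"age": 34, "income": 72000, "churned": 0},
    {"age": 34, "income": 72000, "churned": 0},   # exact duplicate
    {"age": 51, "income": None,  "churned": 1},   # missing income
    {"age": 29, "income": 58000, "churned": 0},
]
print(eda_report(rows, "churned"))
```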




&lt;p&gt;&lt;strong&gt;Data Transformation &amp;amp; Feature Engineering&lt;/strong&gt;&lt;br&gt;
After gaining an understanding of the data, the next step is to build and scale the transformations across the dataset. Here are some key questions to consider during this phase:&lt;/p&gt;

&lt;p&gt;▢ &lt;em&gt;[Automation]&lt;/em&gt; How can the transformation steps be effectively scaled to the entire dataset?&lt;br&gt;
▢ &lt;em&gt;[Automation]&lt;/em&gt; How can the transformation steps be applied in real time to the live data before inference?&lt;br&gt;
▢ &lt;em&gt;[Collaboration]&lt;/em&gt; How can a data scientist share and discover engineered features to avoid duplicated effort?&lt;br&gt;
▢ &lt;em&gt;[Reproducibility]&lt;/em&gt; How do you track and manage different versions of transformed datasets?&lt;br&gt;
▢ &lt;em&gt;[Reproducibility]&lt;/em&gt; Where are the transformation steps and associated code stored?&lt;br&gt;
▢ &lt;em&gt;[Governance &amp;amp; Compliance]&lt;/em&gt; How do you track the lineage of data as it moves through transformation stages to ensure reproducibility and auditability?&lt;/p&gt;
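&lt;p&gt;The train/serve consistency concern in the second question can be sketched in a few lines of plain Python: fit the transformation parameters once on training data, persist them, and re-apply the identical transformation to each live record. The feature name and values here are hypothetical; in practice the parameters would live in a feature store or model artifact.&lt;/p&gt;

```python
# A minimal sketch (hypothetical feature) of keeping training-time and
# inference-time transformations identical: fit the parameters once,
# store them, and re-apply them to each live record.
from statistics import mean, pstdev

def fit_scaler(values):
    # Parameters learned from the training data; persisting these is
    # what makes the live-data transformation reproducible.
    return {"mean": mean(values), "std": pstdev(values) or 1.0}

def transform(value, params):
    return (value - params["mean"]) / params["std"]

train_incomes = [40000, 60000, 80000]
scaler = fit_scaler(train_incomes)

# The same stored transformation applied to a live record before inference:
live_income = 70000
print(transform(live_income, scaler))
```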




&lt;p&gt;&lt;strong&gt;Experiments, Model Training &amp;amp; Evaluation&lt;/strong&gt;&lt;br&gt;
Model training is an iterative process in which data scientists explore and experiment with different combinations of settings and algorithms to find the best possible model. Here are some key questions to consider during this phase:&lt;/p&gt;

&lt;p&gt;▢ &lt;em&gt;[Automation]&lt;/em&gt; How can data scientists automatically partition the data for training, validation, and testing purposes?&lt;br&gt;
▢ &lt;em&gt;[Automation]&lt;/em&gt; Does the existing platform help accelerate the evaluation of multiple standard algorithms and the tuning of hyperparameters?&lt;br&gt;
▢ &lt;em&gt;[Collaboration]&lt;/em&gt; How can a data scientist share experiments, configurations &amp;amp; trained models?&lt;br&gt;
▢ &lt;em&gt;[Reproducibility]&lt;/em&gt; How can you ensure the reproducibility of the experiment outputs?&lt;br&gt;
▢ &lt;em&gt;[Reproducibility]&lt;/em&gt; How do you track and manage different versions of trained models?&lt;br&gt;
▢ &lt;em&gt;[Governance &amp;amp; Compliance]&lt;/em&gt; How do you track the model boundaries, allowing you to explain model decisions and detect bias?&lt;/p&gt;
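&lt;p&gt;The first and fourth questions (automatic partitioning, reproducibility) can be illustrated with a minimal sketch: a seeded shuffle makes the train/validation/test split repeatable across experiment runs. The split ratios are arbitrary.&lt;/p&gt;

```python
# A minimal sketch of a reproducible train/validation/test partition:
# the fixed seed makes the split identical on every run.
import random

def split_dataset(records, seed=42, train=0.7, val=0.15):
    rng = random.Random(seed)   # seeded RNG: same shuffle every run
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

records = list(range(100))
train_a, val_a, test_a = split_dataset(records)
train_b, val_b, test_b = split_dataset(records)
print(train_a == train_b, len(train_a), len(val_a), len(test_a))
```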




&lt;p&gt;&lt;strong&gt;Deployment &amp;amp; Serving&lt;/strong&gt;&lt;br&gt;
In order to realize the business value of a model, it needs to be deployed. Depending on the nature of your business, it may be distributed, deployed in-house, on the cloud, or at the edge. Effective management of the deployment is crucial to ensure uptime and optimal performance. Here are some key questions to consider during this phase:&lt;/p&gt;

&lt;p&gt;▢ &lt;em&gt;[Automation]&lt;/em&gt; How do you ensure that the deployed models can scale with increasing workloads?&lt;br&gt;
▢ &lt;em&gt;[Automation]&lt;/em&gt; How are new versions rolled out, and what is the process for comparing them against the running version? (A/B testing, canary, shadow, etc.)&lt;br&gt;
▢ &lt;em&gt;[Automation]&lt;/em&gt; Are there mechanisms to roll back or revert deployments if issues arise?&lt;br&gt;
▢ &lt;em&gt;[Collaboration]&lt;/em&gt; How can multiple data scientists understand the impact of their version before releasing it? (A/B testing, canary, shadow, etc.)&lt;br&gt;
▢ &lt;em&gt;[Reproducibility]&lt;/em&gt; How do you package your ML models for serving in the cloud or at the edge?&lt;br&gt;
▢ &lt;em&gt;[Governance &amp;amp; Compliance]&lt;/em&gt; How do you track the predicted decisions for auditability and accountability?&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Model Pipeline, Monitoring &amp;amp; Continuous Improvement:&lt;/strong&gt;&lt;br&gt;
As we have seen, going from raw data to actionable insights involves a complex series of steps. However, by orchestrating, monitoring, and reacting throughout the workflow, we can scale and adapt the process and make it more efficient. Here are some key questions to consider during this phase:&lt;/p&gt;

&lt;p&gt;▢ &lt;em&gt;[Automation]&lt;/em&gt; How is the end-to-end process of training and deploying the models currently managed?&lt;br&gt;
▢ &lt;em&gt;[Automation]&lt;/em&gt; How can you detect data or concept drift with respect to the historical baseline?&lt;br&gt;
▢ &lt;em&gt;[Automation]&lt;/em&gt; How do you determine when a model needs to be retrained or updated?&lt;br&gt;
▢ &lt;em&gt;[Collaboration]&lt;/em&gt; What are the agreed metrics to measure the effectiveness of each stage and of new deployments?&lt;br&gt;
▢ &lt;em&gt;[Reproducibility]&lt;/em&gt; Are there automated pipelines to handle the end-to-end process of retraining and updating models to incorporate feedback and make enhancements?&lt;br&gt;
▢ &lt;em&gt;[Governance &amp;amp; Compliance]&lt;/em&gt; How do you ensure data quality and integrity and detect model deviation throughout the process?&lt;br&gt;
▢ &lt;em&gt;[Governance &amp;amp; Compliance]&lt;/em&gt; How do you budget and plan for the infrastructure required to build your models?&lt;/p&gt;
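&lt;p&gt;The drift question can be made concrete with a small, self-contained sketch using the Population Stability Index (PSI), one common way to compare live data against a historical baseline. The bin counts and the 0.2 alert threshold below are illustrative conventions, not part of any standard.&lt;/p&gt;

```python
# A minimal sketch of detecting data drift against a historical baseline
# using the Population Stability Index (PSI) on binned feature counts.
from math import log

def psi(expected_counts, actual_counts):
    total_e = sum(expected_counts)
    total_a = sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        pe = max(e / total_e, 1e-6)   # clamp to avoid log(0) on empty bins
        pa = max(a / total_a, 1e-6)
        score += (pa - pe) * log(pa / pe)
    return score

baseline = [50, 30, 20]   # historical bin counts for one feature
current = [20, 30, 50]    # live traffic has shifted toward the last bin
drifted = psi(baseline, current) > 0.2   # 0.2 is a common alert threshold
print(round(psi(baseline, current), 3), drifted)
```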




&lt;p&gt;An MLOps system streamlines and brings structure to your strategy, allowing you to answer these questions. It provides the capability to version-control and track various artifacts through the &lt;strong&gt;dataset, feature, metadata, and model repositories&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the upcoming blogs, I will demonstrate how to implement these MLOps best practices using a simple case study on &lt;a href="https://pub.towardsai.net/data-acquisition-exploration-exploring-5-key-mlops-questions-using-aws-sagemaker-a5b7518eba3e"&gt;AWS&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>sagemaker</category>
      <category>mlops</category>
      <category>checklist</category>
    </item>
  </channel>
</rss>
