<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: GauravKankaria</title>
    <description>The latest articles on DEV Community by GauravKankaria (@gauravkankaria).</description>
    <link>https://dev.to/gauravkankaria</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1016642%2F9441d995-3497-41fa-ace2-fa71f0ba13ea.png</url>
      <title>DEV Community: GauravKankaria</title>
      <link>https://dev.to/gauravkankaria</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gauravkankaria"/>
    <language>en</language>
    <item>
      <title>Creating Feature Store for streamlined Model Building using Amazon SageMaker</title>
      <dc:creator>GauravKankaria</dc:creator>
      <pubDate>Mon, 06 Feb 2023 13:21:05 +0000</pubDate>
      <link>https://dev.to/gauravkankaria/creating-feature-store-for-streamlined-model-building-using-amazon-sagemaker-3k1f</link>
      <guid>https://dev.to/gauravkankaria/creating-feature-store-for-streamlined-model-building-using-amazon-sagemaker-3k1f</guid>
      <description>&lt;p&gt;Referring to my earlier blog on “&lt;a href="https://medium.com/@gaurav.kankaria/performing-advance-analytics-in-nbfcs-using-data-lake-and-customer-360-using-feature-store-on-aws-2bc451b5c35b" rel="noopener noreferrer"&gt;Performing Advance Analytics in NBFCs using Data Lake and Customer 360 using Feature Store on AWS cloud environment&lt;/a&gt;”, we learnt how utilising the Feature Store feature on Amazon SageMaker helped businesses save 1,000+ person-hours and increased their ML model building speed and capabilities. &lt;/p&gt;

&lt;p&gt;This blog is a hands-on guide for technical folks on how to create a Feature Store on Amazon SageMaker for their organization. If you are an aspiring data scientist or machine learning engineer wanting to work with the AWS cloud ML tech stack, this blog is the perfect starting point for learning and exploring Amazon SageMaker’s Feature Store. &lt;/p&gt;

&lt;p&gt;You can follow the steps below to learn how to create and use a feature store. You can also download the data and code used in the tutorial from the links below. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Code - &lt;a href="https://drive.google.com/file/d/1gj79SZ97crTGoxtmiJ9_Jpc095CXsFJz/view?usp=share_link" rel="noopener noreferrer"&gt;Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Data - &lt;a href="https://drive.google.com/file/d/1k65yQTgX4vMAYz90j6hQVN2DUSCFLHFq/view?usp=share_link" rel="noopener noreferrer"&gt;Link&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create or reuse an S3 bucket to hold the data. &lt;/p&gt;

&lt;p&gt;Example: I created a bucket named “quickstart-feature-store-demo” and, once it was created, uploaded the transaction data CSV file to S3. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfu2ys6db17sqbn9mckk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfu2ys6db17sqbn9mckk.png" alt="Image description" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open Amazon SageMaker Studio and create a new notebook named Feature Store&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6lvkeoys3z1pnhhln6h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6lvkeoys3z1pnhhln6h.png" alt="Image description" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Start by importing the required modules and doing the basic setup&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Imports
from sagemaker.feature_store.feature_group import FeatureGroup
from time import gmtime, strftime, sleep
from random import randint
import pandas as pd
import numpy as np
import subprocess
import sagemaker
import importlib
import logging
import time
import sys
import boto3
from datetime import datetime, timezone, date

from packaging import version
if version.parse(sagemaker.__version__) &amp;lt; version.parse('2.48.1'):
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'sagemaker==2.48.1'])
    importlib.reload(sagemaker)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())
logger.info(f'Using SageMaker version: {sagemaker.__version__}')
logger.info(f'Using Pandas version: {pd.__version__}')

#Default Settings
sagemaker_session = sagemaker.Session()
default_bucket = sagemaker_session.default_bucket()
logger.info(f'Default S3 bucket = {default_bucket}')
prefix = 'sagemaker-feature-store'
region = sagemaker_session.boto_region_name

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Read data from input S3 bucket&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Update to variables below to point to your input data
bucket='quickstart-feature-store-demo'
data_key = 'Transaction Dataset Sample Features.csv'
data_location = 's3://{}/{}'.format(bucket, data_key)

orders_df = pd.read_csv(data_location)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: if you get a Forbidden (403) error, follow the steps below&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Print the role in use with the command below&lt;/li&gt;
&lt;li&gt;  Ensure the role printed has read access to the s3 bucket containing the input data
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Get IAM role used by sagemaker
role = sagemaker.get_execution_role()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data Processing&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Generate a unique event timestamp
def generate_event_timestamp(x):
    # note: the argument is unused; every row gets the current time
    # naive datetime representing local time
    naive_dt = datetime.now()
    # take timezone into account
    aware_dt = naive_dt.astimezone()
    # time in UTC
    utc_dt = aware_dt.astimezone(timezone.utc)
    # transform to ISO-8601 format
    event_time = utc_dt.isoformat(timespec='milliseconds')
    event_time = event_time.replace('+00:00', 'Z')
    return event_time

orders_df['event_time'] = orders_df['Date'].apply(lambda x : generate_event_timestamp(x))

for cols in orders_df.columns:
    orders_df[cols] = orders_df[cols].astype('string')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
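&lt;p&gt;As a quick standalone sanity check (a sketch independent of the notebook state), the timestamp helper should emit millisecond-precision ISO-8601 strings in UTC ending in “Z”, which is the string event-time format Feature Store expects:&lt;/p&gt;

```python
from datetime import datetime, timezone

def generate_event_timestamp():
    # current local time, converted to UTC
    utc_dt = datetime.now().astimezone(timezone.utc)
    # ISO-8601 with millisecond precision and a 'Z' suffix
    return utc_dt.isoformat(timespec='milliseconds').replace('+00:00', 'Z')

ts = generate_event_timestamp()
assert ts.endswith('Z')
# round-trip parse (swap 'Z' back so fromisoformat accepts it)
parsed = datetime.fromisoformat(ts.replace('Z', '+00:00'))
assert parsed.tzinfo is not None
```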



&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a feature group &amp;amp; its definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Create feature group
current_timestamp = strftime('%m-%d-%H-%M', gmtime())
orders_feature_group_name = f'orders-{current_timestamp}'
%store orders_feature_group_name

#Create feature group definition
logger.info(f'Orders feature group name = {orders_feature_group_name}')
orders_feature_group = FeatureGroup(name=orders_feature_group_name, sagemaker_session=sagemaker_session)
orders_feature_group.load_feature_definitions(data_frame=orders_df)

def wait_for_feature_group_creation_complete(feature_group):
    status = feature_group.describe().get('FeatureGroupStatus')
    print(f'Initial status: {status}')
    while status == 'Creating':
        logger.info(f'Waiting for feature group: {feature_group.name} to be created ...')
        time.sleep(5)
        status = feature_group.describe().get('FeatureGroupStatus')
    if status != 'Created':
        raise SystemExit(f'Failed to create feature group {feature_group.name}: {status}')
    logger.info(f'FeatureGroup {feature_group.name} was successfully created.')

orders_feature_group.create(s3_uri=f's3://{default_bucket}/{prefix}', 
                            record_identifier_name='OrderId', 
                            event_time_feature_name='event_time', 
                            role_arn=role, 
                            enable_online_store=True)
wait_for_feature_group_creation_complete(orders_feature_group)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ingest data into feature store&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%%time

logger.info(f'Ingesting data into feature group: {orders_feature_group.name} ...')
orders_feature_group.ingest(data_frame=orders_df, max_processes=16, wait=True)
logger.info(f'{len(orders_df)} order records ingested into feature group: {orders_feature_group.name}')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: it can take a couple of minutes for the data to appear in the S3 bucket&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;You can use a boto3 session to list the feature groups and query the data as well&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Get a list of all feature stores
boto_session = boto3.Session(region_name=region)
sagemaker_client = boto_session.client(service_name='sagemaker', region_name=region)
sagemaker_client.list_feature_groups()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
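&lt;p&gt;The online store can also serve single records. A get_record response carries the record as a list of FeatureName/ValueAsString pairs; the small helper below (a hypothetical name, not part of the SDK) flattens it into a plain dict. The AWS-bound call is shown commented out:&lt;/p&gt;

```python
def record_to_dict(record):
    # flatten [{'FeatureName': ..., 'ValueAsString': ...}, ...] into a dict
    return {f['FeatureName']: f['ValueAsString'] for f in record}

# Shape of the live call (requires AWS credentials and the feature group above;
# the record identifier value is hypothetical):
# runtime = boto_session.client('sagemaker-featurestore-runtime', region_name=region)
# response = runtime.get_record(FeatureGroupName=orders_feature_group_name,
#                               RecordIdentifierValueAsString='some-order-id')
# print(record_to_dict(response['Record']))

sample = [{'FeatureName': 'OrderId', 'ValueAsString': '12345'},
          {'FeatureName': 'event_time', 'ValueAsString': '2023-02-06T13:21:05.000Z'}]
assert record_to_dict(sample)['OrderId'] == '12345'
```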



&lt;p&gt;You can use the following &lt;a href="https://sagemaker.readthedocs.io/en/stable/amazon_sagemaker_featurestore.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; link to try out other features, such as fetching bulk batch data.&lt;/p&gt;
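&lt;p&gt;For bulk access, the offline store can be queried through the SDK’s Athena helper. The sketch below builds a simple query string; the run/wait calls need AWS access, so they are shown commented out:&lt;/p&gt;

```python
def build_offline_store_query(table_name, limit=100):
    # simple SELECT against the offline store's Glue/Athena table
    return f'SELECT * FROM "{table_name}" LIMIT {limit}'

# query = orders_feature_group.athena_query()
# query.run(query_string=build_offline_store_query(query.table_name),
#           output_location=f's3://{default_bucket}/{prefix}/query_results')
# query.wait()
# batch_df = query.as_dataframe()

assert build_offline_store_query('orders_table', limit=5) == \
    'SELECT * FROM "orders_table" LIMIT 5'
```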

&lt;p&gt;&lt;strong&gt;Step 9:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Amazon SageMaker Feature Store keeps metadata in the AWS Glue Data Catalog. We can use the Glue console to view the table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpvpbtawbwyasrz8xkrr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpvpbtawbwyasrz8xkrr.png" alt="Image description" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17rfg4i0hs0bb1gq2kzf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17rfg4i0hs0bb1gq2kzf.png" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;In this article, we covered the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The importance of a feature store&lt;/li&gt;
&lt;li&gt;How to create and implement a feature store on Amazon SageMaker&lt;/li&gt;
&lt;li&gt;Ways to access feature stores on Amazon SageMaker&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>web3</category>
      <category>announcement</category>
      <category>devto</category>
      <category>crypto</category>
    </item>
    <item>
      <title>Automated Demand Planning for faster decision-making using Amazon QuickSight</title>
      <dc:creator>GauravKankaria</dc:creator>
      <pubDate>Sun, 29 Jan 2023 15:59:29 +0000</pubDate>
      <link>https://dev.to/gauravkankaria/automated-demand-planning-for-faster-decision-making-using-amazon-quicksight-351n</link>
      <guid>https://dev.to/gauravkankaria/automated-demand-planning-for-faster-decision-making-using-amazon-quicksight-351n</guid>
      <description>&lt;p&gt;The demand planning team is responsible for ensuring that demand for each of their SKU is measured as accurately as possible. They typically must liaise with the Sales and marketing, finance, and warehouse management teams to submit their reports for production planning. Their work depends on Demand Forecasting models (Link on how businesses can build intelligent demand forecasting models). Demand planners need to look at other aspects like Model accuracy across category/Pareto SKUs, model biases, and Trend signals to take decisions. Collating this information is a tedious job and often consumes much time. &lt;/p&gt;

&lt;p&gt;Building an automated dashboard that computes this information in advance would enable faster decision-making and less time spent on data collection. This blog explains how demand planners can use Amazon QuickSight (AWS’s native business intelligence visualization tool) to build a dashboard capturing these KPIs. &lt;/p&gt;

&lt;p&gt;Let’s begin with understanding the KPIs and decisions the demand planner would need to take before creating the dashboard. Please note that the dashboard design would depend on the decision and KPIs the business user would want to see, and not the other way around. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Forecasted Demand &amp;amp; Accuracy:&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Overall model accuracy of the previous month – to get perspective on how the model performed last month&lt;/li&gt;
&lt;li&gt;Last three months’ average forecast accuracy – to see whether the model has been performing consistently and to decide whether its output is reliable&lt;/li&gt;
&lt;li&gt;Forecast accuracy across categories – to decide which categories need manual adjustment and where the model output can be used directly&lt;/li&gt;
&lt;li&gt;SKU-wise demand and accuracy – a similar decision as above&lt;/li&gt;
&lt;li&gt;SKU sales in volume and value – to help decide which SKUs to focus on to avoid stock-outs and maintain required service levels&lt;/li&gt;
&lt;li&gt;Trend and biases of the SKUs – to gain a perspective on where the market is headed&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that the decisions above are clear, let’s look at how the data is captured by the system (note: the snapshot presented is mock data, with only the information required for this blog displayed).  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo21p4locr0h30nhb98sr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo21p4locr0h30nhb98sr.png" alt="Image description" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These datasets are moved to an Amazon Simple Storage Service (S3) bucket (via manual GUI upload) following the reference architecture below, and an ETL job is written using Amazon Athena. The data is then moved into Amazon QuickSight by a job triggered on Athena. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sq7fwrwg7wpl052sxor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sq7fwrwg7wpl052sxor.png" alt="Image description" width="574" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sample query to join forecast and actual sales data with other master data:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;select a.location, c.location_type, c.region, a.item, b.item_description, b.category, b.subcategory, b.brand, b.manufacturer, b.per_unit_price, a.date, a.forecast, d.sales&lt;br&gt;
from forecast_data as a&lt;br&gt;
left join category_master as b&lt;br&gt;
on a.item = b.item_number&lt;br&gt;
left join location_master as c&lt;br&gt;
on a.location = c.location&lt;br&gt;
left join sales_data as d&lt;br&gt;
on a.location = d.location&lt;br&gt;
and a.item = d.item&lt;br&gt;
and a.date = d.date&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Design of the QuickSight Dashboard:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wydeyu0d0ax5hvkuqje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wydeyu0d0ax5hvkuqje.png" alt="Image description" width="348" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The idea behind the dashboard design is to give an overall view of the trend in demands observed at various levels to the demand planners and help them understand the gaps in their system which needs attention to achieve higher forecast accuracies and optimize their planning.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Key KPIs such as sales in volume and value, forecast accuracies and BIAS at different intervals of time are highlighted to give users an overall health check of their forecast models in the background.&lt;/li&gt;
&lt;li&gt;A trend chart denoting the actual sales Vs forecasted sales helps the user gauge the model performance over a period and understand if the category sales meet the target or not. &lt;/li&gt;
&lt;li&gt;A tree map consolidating the forecast value (denoted by size) and accuracy (denoted by color, with green and red implying high and low accuracy, respectively).&lt;/li&gt;
&lt;li&gt;ABC-XYZ analysis – The SKUs have been distributed between different buckets based on the model accuracy and the sales value.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;XYZ – The SKUs are classified based on their respective accuracies, with X being the high-accuracy SKUs (&amp;gt;=90%), Y the moderate-accuracy SKUs (between 60% &amp;amp; 90%), and the remainder in the Z bucket (less than 60% accuracy). &lt;/li&gt;
&lt;li&gt;ABC – The SKUs are classified based on the last three months’ sales value, with A being the high-selling SKUs, B the moderate-selling SKUs, and C the least-selling SKUs.&lt;/li&gt;
&lt;/ul&gt;
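&lt;p&gt;The XYZ and ABC bucketing above can be sketched in pandas (accuracy thresholds from the text; the tercile cut used for ABC is illustrative, not the exact business rule):&lt;/p&gt;

```python
import pandas as pd

def xyz_bucket(accuracy):
    # X: accuracy of 90%+, Y: 60-90%, Z: below 60%
    if accuracy >= 0.9:
        return 'X'
    if accuracy >= 0.6:
        return 'Y'
    return 'Z'

skus = pd.DataFrame({'sku': ['S1', 'S2', 'S3'],
                     'accuracy': [0.95, 0.72, 0.40],
                     'sales_value': [900, 400, 50]})
skus['xyz'] = skus['accuracy'].apply(xyz_bucket)
# ABC by sales-value terciles (illustrative cut; highest tercile labelled A)
skus['abc'] = pd.qcut(skus['sales_value'], 3, labels=['C', 'B', 'A'])
assert list(skus['xyz']) == ['X', 'Y', 'Z']
```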

&lt;ol start="5"&gt;
&lt;li&gt;Users can also deep dive into specific SKU-related information if and when required.&lt;/li&gt;
&lt;li&gt;The actions feature in QuickSight makes the dashboard interactive, letting users filter visuals or attach a URL with a single click on an existing visual.&lt;/li&gt;
&lt;li&gt;Relevant controls to filter the data at the required levels are provided to the users as well.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key decisions driven by the dashboard:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Users can infer whether the forecast model running in the background requires tuning based on the accuracy and BIAS numbers reflected in the dashboard. If accuracy has been consistently low for a certain category/SKU, it’s quite possible that the underlying model needs to be adjusted.&lt;/li&gt;
&lt;li&gt;The dashboard helps the demand planning team to focus on high value SKUs to improve their forecast accuracy thereby increasing the sales and margins.&lt;/li&gt;
&lt;li&gt;ABC-XYZ analysis of SKUs enables better planning for SKUs based on the bucket they’re in. For example,&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;CZ – Low sales, low accuracy – since both sales and accuracy are low, maintaining a low service level and minor model tweaks would suffice.&lt;/li&gt;
&lt;li&gt;AX – High sales, high accuracy – the best-performing SKUs, needing no intervention. Service levels can, however, be increased since they are major drivers of sales.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The dashboard is designed so that decision-making is maximized and users can take quick actions based on the insights it provides. Demand planning is a crucial part of the supply chain process, and the dashboard helps the demand planning team reduce their time to truth and take immediate corrective actions. &lt;/p&gt;

&lt;p&gt;For the readers of the article, I am enclosing the JSON template of the dashboard and a link to the dataset, which you can use to re-create these dashboards in your Amazon QuickSight account for practice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{'ResponseMetadata': {'RequestId': '8f13c52c-8499-4490-a7c9-eb319097dd20',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'date': 'Sun, 29 Jan 2023 14:26:12 GMT',
   'content-type': 'application/json',
   'content-length': '1780',
   'connection': 'keep-alive',
   'x-amzn-requestid': '8f13c52c-8499-4490-a7c9-eb319097dd20'},
  'RetryAttempts': 0},
 'Status': 200,
 'Template': {'Arn': 'arn:aws:quicksight:ap-south-1:176660025607:template/UniqueIDTemplate',
  'Name': 'QuickSight Template Test',
  'Version': {'CreatedTime': datetime.datetime(2023, 1, 29, 19, 42, 53, 689000, tzinfo=tzlocal()),
   'VersionNumber': 1,
   'Status': 'CREATION_SUCCESSFUL',
   'DataSetConfigurations': [{'Placeholder': 'Demand Forecast - Data',
     'DataSetSchema': {'ColumnSchemaList': [{'Name': 'Location Type',
        'DataType': 'STRING'},
       {'Name': 'Item', 'DataType': 'INTEGER'},
       {'Name': 'Manufacturer', 'DataType': 'STRING'},
       {'Name': 'Forecast', 'DataType': 'INTEGER'},
       {'Name': 'Item Description', 'DataType': 'STRING'},
       {'Name': 'Category', 'DataType': 'STRING'},
       {'Name': 'Region', 'DataType': 'STRING'},
       {'Name': 'Date', 'DataType': 'DATETIME'},
       {'Name': 'Brand', 'DataType': 'STRING'},
       {'Name': 'Subcategory', 'DataType': 'STRING'},
       {'Name': 'Sales', 'DataType': 'INTEGER'},
       {'Name': 'Per unit Price', 'DataType': 'INTEGER'}]},
     'ColumnGroupSchemaList': []}],
   'Description': '1.0',
   'SourceEntityArn': 'arn:aws:quicksight:ap-south-1:176660025607:analysis/3cea7fe4-8b47-48ec-b8de-0dec71ffe2eb',
   'Sheets': [{'SheetId': 'c312051d-4052-446f-9083-3972c5f22456',
     'Name': 'Demand Forecast View'}]},
  'TemplateId': 'UniqueIDTemplate',
  'LastUpdatedTime': datetime.datetime(2023, 1, 29, 19, 42, 53, 675000, tzinfo=tzlocal()),
  'CreatedTime': datetime.datetime(2023, 1, 29, 19, 42, 53, 675000, tzinfo=tzlocal())},
 'RequestId': '8f13c52c-8499-4490-a7c9-eb319097dd20'}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
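&lt;p&gt;The response above is what boto3’s QuickSight describe_template call returns. A small helper (a hypothetical name, not part of the SDK) can pull the dataset column schema out of such a response; the live call is shown commented out:&lt;/p&gt;

```python
def template_columns(response):
    # walk DataSetConfigurations and collect (name, type) pairs
    cols = []
    for cfg in response['Template']['Version']['DataSetConfigurations']:
        for col in cfg['DataSetSchema']['ColumnSchemaList']:
            cols.append((col['Name'], col['DataType']))
    return cols

# Live call (AWS-bound; account and template IDs are placeholders):
# import boto3
# quicksight = boto3.client('quicksight', region_name='ap-south-1')
# response = quicksight.describe_template(AwsAccountId='123456789012',
#                                         TemplateId='UniqueIDTemplate')
# print(template_columns(response))

sample = {'Template': {'Version': {'DataSetConfigurations': [
    {'DataSetSchema': {'ColumnSchemaList': [
        {'Name': 'Item', 'DataType': 'INTEGER'},
        {'Name': 'Region', 'DataType': 'STRING'}]}}]}}}
assert template_columns(sample) == [('Item', 'INTEGER'), ('Region', 'STRING')]
```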



&lt;p&gt;Author: &lt;br&gt;
&lt;a href="https://www.linkedin.com/in/gaurav-kankaria/" rel="noopener noreferrer"&gt;Gaurav H Kankaria&lt;/a&gt; is the Head of Strategic Partnerships and Engagement Manager (ex- Senior Data Scientist) at Ganit Business Solutions Private Limited. He has over 8+ years of experience designing and implementing solutions to help organizations in the Retail Domain. He is an AWS certified Solution Architect — Professional and Data Analytics — Specialty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/vaishnavi-b-b48054208/" rel="noopener noreferrer"&gt;Vaishnavi B&lt;/a&gt; is an Data Scientist at Ganit Business Solutions Private Limited. She has over 2+ years of experience in building strategic decision boards and demand forecasting solutions to help the supply chain team of Retail organisations. She is an AWS certified Data Analytics — Specialty.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
