<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Machine Learning Tech Stories</title>
    <description>The latest articles on DEV Community by Machine Learning Tech Stories (@gansai9).</description>
    <link>https://dev.to/gansai9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F93286%2F8b75a202-9016-4f9a-ab5c-9c59723f35bd.jpg</url>
      <title>DEV Community: Machine Learning Tech Stories</title>
      <link>https://dev.to/gansai9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gansai9"/>
    <language>en</language>
    <item>
      <title>What are the various machine learning use cases for companies/enterprises?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Thu, 24 Jun 2021 04:44:05 +0000</pubDate>
      <link>https://dev.to/gansai9/what-are-the-various-machine-learning-usecases-for-companies-enterprises-1heh</link>
      <guid>https://dev.to/gansai9/what-are-the-various-machine-learning-usecases-for-companies-enterprises-1heh</guid>
      <description>&lt;p&gt;What are the machine learning use cases that companies and enterprises focus on?&lt;/p&gt;

&lt;p&gt;(This is a placeholder for my exploration of the machine learning problems solved by various companies and enterprises, for research purposes.)&lt;/p&gt;

&lt;p&gt;Aside:&lt;br&gt;
Any user-facing website, web app, mobile app, or edge app can collect data about its users; this is the data-collection side.&lt;br&gt;
Since the amount of real data is limited for certain use cases, people have also come up with data synthesis, where you create data artificially, e.g. image synthesis/generation or text synthesis.&lt;/p&gt;

&lt;p&gt;Since this is a collection, a standard template is helpful:&lt;/p&gt;

&lt;p&gt;Company&lt;br&gt;
Datasets the company might possess&lt;br&gt;
Machine learning problems/use cases&lt;/p&gt;

&lt;p&gt;Data comes primarily in the form of text, images, videos (sequences of images), and audio (speech).&lt;/p&gt;

&lt;h3&gt;Google&lt;/h3&gt;

&lt;h4&gt;Datasets:&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;User data&lt;/li&gt;
&lt;li&gt;Web pages accessible across the internet&lt;/li&gt;
&lt;li&gt;Images in web pages&lt;/li&gt;
&lt;li&gt;Text content in web pages&lt;/li&gt;
&lt;li&gt;Any video uploaded to YouTube&lt;/li&gt;
&lt;li&gt;Practically any video accessible on the internet (that allows crawling)&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;Machine learning problems/use cases:&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Google Meet - how to blur the background in order to focus on the subject of the video call?&lt;/li&gt;
&lt;li&gt;Google Meet - how to reduce background noise in order to focus on the subject's speech?&lt;/li&gt;
&lt;li&gt;YouTube - how to recommend, from a collection of videos, the particular video that a user is likely to watch for a long period of time?&lt;/li&gt;
&lt;li&gt;Google Search - which web page or set of web pages matches the user's query most closely?&lt;/li&gt;
&lt;li&gt;Google Search - what is the user actually trying to search for?&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;CRED&lt;/h3&gt;

&lt;h4&gt;Datasets:&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;user data&lt;/li&gt;
&lt;li&gt;user credit history&lt;/li&gt;
&lt;li&gt;user purchase patterns&lt;/li&gt;
&lt;li&gt;startups to be showcased in CRED catalogue&lt;/li&gt;
&lt;li&gt;user investment history&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;Machine learning problems/use cases:&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Product recommendations - what product or set of products could be recommended to a given user with a good chance of converting into a purchase?&lt;/li&gt;
&lt;li&gt;Credit worthiness - what is the creditworthiness score for a user, based on various parameters?&lt;/li&gt;
&lt;li&gt;Purchase predictions - what is the likelihood of a product being purchased by a user in the future, based on purchase history?&lt;/li&gt;
&lt;/ol&gt;
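
&lt;p&gt;As a toy sketch of the purchase-prediction use case (the feature names and weights below are made up for illustration, not any company's actual model), a logistic-style likelihood score could look like this:&lt;/p&gt;

```python
import math

def purchase_likelihood(features, weights, bias):
    """Logistic-regression-style score: squashes a weighted sum into (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [past purchases, days since last purchase, avg order value]
user = [5.0, 2.0, 120.0]
weights = [0.4, -0.1, 0.01]  # a real model would learn these from data
score = purchase_likelihood(user, weights, bias=-1.0)  # value in (0, 1)
```

&lt;p&gt;A real system would learn the weights from historical purchase data rather than hard-coding them.&lt;/p&gt;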

&lt;h3&gt;PayPal&lt;/h3&gt;

&lt;h3&gt;Tesla&lt;/h3&gt;

&lt;h3&gt;Microsoft&lt;/h3&gt;

&lt;h3&gt;Amazon&lt;/h3&gt;

&lt;h3&gt;Facebook&lt;/h3&gt;

&lt;h3&gt;Netflix&lt;/h3&gt;

&lt;p&gt;Learning continues...&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>usecases</category>
    </item>
    <item>
      <title>What does a machine learning toolbox consist of?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Thu, 24 Jun 2021 04:11:18 +0000</pubDate>
      <link>https://dev.to/gansai9/what-does-a-machine-learning-toolbox-consist-of-7b6</link>
      <guid>https://dev.to/gansai9/what-does-a-machine-learning-toolbox-consist-of-7b6</guid>
      <description>&lt;p&gt;(This is a placeholder for my exploration of various tools used in the machine learning ecosystem)&lt;/p&gt;

&lt;p&gt;Considering machine learning projects end to end, the following are the typical phases; each phase has its own toolset that enables the engineer to do the data processing for that phase.&lt;/p&gt;

&lt;h5&gt;Phases in a Machine Learning project&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Data ingestion&lt;/li&gt;
&lt;li&gt;Data exploration&lt;/li&gt;
&lt;li&gt;Data analysis&lt;/li&gt;
&lt;li&gt;Data visualization&lt;/li&gt;
&lt;li&gt;Model training&lt;/li&gt;
&lt;li&gt;Model testing&lt;/li&gt;
&lt;li&gt;Model deployment&lt;/li&gt;
&lt;li&gt;Model serving&lt;/li&gt;
&lt;li&gt;Model logging/healthchecks etc&lt;/li&gt;
&lt;/ol&gt;

&lt;h5&gt;Tools used in each phase&lt;/h5&gt;

&lt;p&gt;(in no particular order)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;NumPy&lt;/li&gt;
&lt;li&gt;TensorFlow&lt;/li&gt;
&lt;li&gt;Apache Spark&lt;/li&gt;
&lt;li&gt;pandas&lt;/li&gt;
&lt;li&gt;R&lt;/li&gt;
&lt;li&gt;SAS&lt;/li&gt;
&lt;li&gt;Jupyter notebooks&lt;/li&gt;
&lt;li&gt;Kubeflow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Learning continues...&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>tools</category>
    </item>
    <item>
      <title>What does it take to gain expertise in the field of Machine Learning?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Tue, 15 Jun 2021 19:09:43 +0000</pubDate>
      <link>https://dev.to/gansai9/what-does-it-take-to-gain-expertise-in-the-field-of-machine-learning-5cp8</link>
      <guid>https://dev.to/gansai9/what-does-it-take-to-gain-expertise-in-the-field-of-machine-learning-5cp8</guid>
      <description>&lt;p&gt;(This is a placeholder for my learnings in the context of MLExpert, updated almost daily, until I complete the course)&lt;/p&gt;

&lt;h3&gt;Foundational Knowledge in Machine Learning&lt;/h3&gt;

&lt;h4&gt;Supervised Learning&lt;/h4&gt;

&lt;p&gt;When the machine learns from the provided data, it is supervised in the sense that some labeling information is provided along with the data, so that the machine can map the labels to the data.&lt;br&gt;
For example: label all images of cats 'Cat' and all images of dogs 'Dog'.&lt;/p&gt;
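
&lt;p&gt;A minimal sketch of the idea in Python (a 1-nearest-neighbour classifier on a single made-up feature; real image labeling would use far richer features):&lt;/p&gt;

```python
def nearest_neighbor_label(point, labeled_data):
    """Predict a label by copying it from the closest labeled example (1-NN)."""
    return min(labeled_data, key=lambda ex: abs(ex[0] - point))[1]

# Toy supervised data: (feature value, human-provided label)
labeled = [(0.9, "Cat"), (0.8, "Cat"), (0.2, "Dog"), (0.1, "Dog")]
prediction = nearest_neighbor_label(0.7, labeled)  # "Cat"
```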

&lt;h4&gt;Unsupervised Learning&lt;/h4&gt;

&lt;p&gt;Here too the machine is provided with data, but it is not supervised: there is no hand-holding. The machine has to figure out a grouping of the data on its own, creating clusters of data points that share some kind of similarity.&lt;br&gt;
For example: give it a stream of tweets from Twitter and let it try to find patterns in the textual data. It would cluster the tweets according to the topic associated with each tweet, such as sports, tech, or politics.&lt;/p&gt;
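
&lt;p&gt;A tiny sketch of unsupervised clustering (1-D k-means on plain numbers; clustering real tweets would first require turning text into numeric features):&lt;/p&gt;

```python
def kmeans_1d(points, k=2, iters=10):
    """Group 1-D points around k centroids with no labels provided."""
    centroids = sorted(points)[:k]  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

centers = kmeans_1d([1.0, 1.1, 0.9, 10.0, 10.2, 9.8])  # roughly [1.0, 10.0]
```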

&lt;h4&gt;Deep Learning&lt;/h4&gt;

&lt;p&gt;Here too data is provided to the machine, but the data is not clearly structured, for example image data. This is where neural networks come in: they are sets of functions that try to determine patterns in data, with many hidden layers that progressively decode the pattern associated with the data.&lt;/p&gt;

&lt;h4&gt;Recommendation Systems&lt;/h4&gt;

&lt;p&gt;Here, something needs to be recommended to the end user. In an ecommerce platform, we need to recommend products the end user could potentially buy. In the case of YouTube or Netflix, we need to recommend videos the end user could potentially watch. (The more time you spend on YouTube, the stronger the signal to advertisers that the audience is engaged with the platform and worth spending on. The more time you spend on Netflix, the stronger the signal that Netflix is providing content suited to your taste, which keeps the user on a perpetual subscription.) Ultimately, there is a bunch of videos or items that could be recommended to the user, and the recommendation algorithm picks the best one to show.&lt;/p&gt;

&lt;h4&gt;Ranking&lt;/h4&gt;

&lt;p&gt;Here, some kind of ranking has to be done on the content being recommended to the end user: what is the first video, what is the next best video to keep the user engaged, and so on. There is already a set of candidate videos stored or created; the ranking algorithm scores them and the videos are recommended in order of rank.&lt;/p&gt;
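
&lt;p&gt;The ranking step itself can be sketched very simply (the engagement scores here are hypothetical; producing good scores is the hard part):&lt;/p&gt;

```python
def rank_videos(candidates):
    """Order candidate videos by predicted engagement score, best first."""
    return sorted(candidates, key=lambda v: v["score"], reverse=True)

candidates = [
    {"id": "v1", "score": 0.42},
    {"id": "v2", "score": 0.91},
    {"id": "v3", "score": 0.67},
]
ranked = rank_videos(candidates)  # v2, then v3, then v1
```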

</description>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How does TensorFlow play a key role in solving machine learning problems?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Tue, 15 Jun 2021 17:33:45 +0000</pubDate>
      <link>https://dev.to/gansai9/how-does-tensorflow-play-a-key-role-in-solving-machine-learning-problems-5e74</link>
      <guid>https://dev.to/gansai9/how-does-tensorflow-play-a-key-role-in-solving-machine-learning-problems-5e74</guid>
      <description>&lt;p&gt;(This is a placeholder for my learnings in the context of TensorFlow, updated almost daily, until I complete the course)&lt;/p&gt;

&lt;p&gt;TensorFlow is a tool created by Google and open-sourced in order to give researchers and developers the capabilities to not just toy with machine learning problems but also produce real solutions.&lt;/p&gt;

&lt;h4&gt;Machine Learning vs Traditional Programming&lt;/h4&gt;

&lt;p&gt;Traditional programming -- we have data coming from different data sources, and we have rules for how to manipulate or query that data to arrive at answers.&lt;/p&gt;

&lt;p&gt;Machine learning -- it's a new paradigm: given the data and the answers, the machine figures out the rules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cf-TterV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xy9yz3n2lb3bx9495gy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cf-TterV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xy9yz3n2lb3bx9495gy1.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: &lt;a href="https://www.linkedin.com/in/laurence-moroney/"&gt;Laurence Moroney&lt;/a&gt; &lt;/p&gt;
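
&lt;p&gt;To make the "data and answers in, rules out" idea concrete, here is a minimal sketch that recovers a hidden rule (here y = 2x - 1) from examples using plain least squares; the course does this with a tiny neural network in TensorFlow, but the principle is the same:&lt;/p&gt;

```python
def fit_line(xs, ys):
    """Learn slope and intercept (the 'rules') from data (xs) and answers (ys)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]  # answers produced by the rule y = 2x - 1
slope, intercept = fit_line(xs, ys)    # recovers 2.0 and -1.0
```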

&lt;h4&gt;Neural Networks&lt;/h4&gt;

&lt;p&gt;A neural network is basically a set of functions that can learn patterns from data.&lt;/p&gt;

&lt;p&gt;Learning continues....&lt;/p&gt;

</description>
      <category>tensorflow</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>What are the practical aspects of Data Science?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Tue, 15 Jun 2021 17:29:10 +0000</pubDate>
      <link>https://dev.to/gansai9/what-are-the-practical-aspects-to-data-science-245</link>
      <guid>https://dev.to/gansai9/what-are-the-practical-aspects-to-data-science-245</guid>
      <description>&lt;p&gt;(This is a placeholder for my learnings in the context of 'Practical Data Science Specialization', updated almost daily, until I complete the course)&lt;/p&gt;

&lt;h4&gt;What is Data Science?&lt;/h4&gt;

&lt;p&gt;It is an intersection of various domains and toolsets used to solve problems dealing with data, namely:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Artificial Intelligence&lt;/li&gt;
&lt;li&gt;Machine Learning&lt;/li&gt;
&lt;li&gt;Deep Learning&lt;/li&gt;
&lt;li&gt;Domain knowledge of the business&lt;/li&gt;
&lt;li&gt;Knowledge of the mathematics behind the techniques used to deal with data&lt;/li&gt;
&lt;li&gt;Statistics&lt;/li&gt;
&lt;li&gt;Visualization - for data visualization&lt;/li&gt;
&lt;li&gt;Programming - e.g. with Python, NumPy, etc.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cSZPEzJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9a8cl3mq9sldrcuo57z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cSZPEzJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9a8cl3mq9sldrcuo57z.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;courtesy: Andrew Ng&lt;/p&gt;

&lt;h5&gt;Doing Machine Learning or Data Science projects on a Laptop vs in the Cloud&lt;/h5&gt;

&lt;p&gt;When we do our projects on a local laptop, we are limited by the resources that laptop provides: the maximum amount of memory, the processing power, and whether it is a CPU or a GPU. All of this affects how efficiently you can run your machine learning project through all of its phases, for example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data ingestion&lt;/li&gt;
&lt;li&gt;Data exploration&lt;/li&gt;
&lt;li&gt;Data analysis&lt;/li&gt;
&lt;li&gt;Data visualization&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of the above are key steps that come before building the machine learning model that will be used for inference.&lt;br&gt;
Each of these steps needs to be done efficiently, and the cloud provides resources at scale and on demand: you can increase the amount of storage, RAM, or processing power as needed.&lt;/p&gt;

&lt;p&gt;(Which brings me to a question: suppose a project is undergoing training in the cloud and I, as an ML engineer/researcher, realize that it is running very slowly - yes, it depends on the mathematical equation and the parameters being fit to the data, but it is slow. How do I move this running project to another machine with greater capacity, without downtime for my training? For example, some computations would already have been performed and stored in the old machine's RAM; how would those be pushed to the new machine? Basically, how does scaling work while training - or, for that matter, inference - is happening? I don't have answers, but the parallel I am looking at is from software projects, where application servers are mapped to Kubernetes pods that can be scaled instantaneously. Effectively it becomes a distributed machine learning problem: training or inference happens over a distributed system, the required data gets distributed across machines, and when a new machine is added to provide scalability, some portion of the existing data is handed to it to process, whether in the training phase or the inference phase. I am still curious how this actually works - I have heard of tools like Kubeflow and need to explore them.)&lt;/p&gt;

&lt;p&gt;Companies like Google, Microsoft, and OpenAI certainly use distributed machine learning. For example, when they say a model has 1.75 trillion parameters, the architecture must involve either a single giant machine with huge processing power and RAM, or scores of distributed systems that participate in the overall processing.&lt;/p&gt;

&lt;h6&gt;Data slicing or data transformations in parallel&lt;/h6&gt;

&lt;p&gt;While doing our machine learning projects, we may come across scenarios where we need to slice data (slicing &amp;amp; dicing), meaning we want to reduce a complex dataset to a small, meaningful, focused set of columns. For example, the initial dataset might have hundreds of columns, but we might be interested in only a few, so we can limit the data. We can also rename columns to suit our needs.&lt;/p&gt;
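
&lt;p&gt;A small sketch of column slicing and renaming in plain Python (rows as dicts, with made-up column names; in pandas the equivalent would be selecting a list of columns and calling rename):&lt;/p&gt;

```python
def slice_columns(rows, keep, rename=None):
    """Keep only the listed columns from each row, optionally renaming them."""
    rename = rename or {}
    return [{rename.get(col, col): row[col] for col in keep} for row in rows]

# Hypothetical dataset with more columns than we care about
rows = [
    {"user_id": 1, "amount": 250.0, "region": "south", "device": "mobile"},
    {"user_id": 2, "amount": 90.5, "region": "north", "device": "desktop"},
]
small = slice_columns(rows, ["user_id", "amount"],
                      rename={"amount": "purchase_amount"})
```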

&lt;h5&gt;General steps in any data science process&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;We deal with massive datasets&lt;/li&gt;
&lt;li&gt;We need to extract the relevant features from a dataset&lt;/li&gt;
&lt;li&gt;We then gain knowledge/insight from this set of relevant features&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0OM0l8De--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31txl3hbxupgkxsm67dy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0OM0l8De--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31txl3hbxupgkxsm67dy.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: aws/deep-learning.ai&lt;/p&gt;

&lt;h5&gt;Advantages of doing data science projects in the cloud&lt;/h5&gt;

&lt;p&gt;With a laptop, we are limited by the hardware; sometimes training the model might consume all of our RAM, and even the CPU might get hogged. In the cloud, we could switch from a CPU to a GPU compute instance and choose a sizeable amount of RAM to continue the task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CZImyb-H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4nk7yv8ss4luv8mwm18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CZImyb-H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4nk7yv8ss4luv8mwm18.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: aws/deep-learning.ai&lt;/p&gt;

&lt;h5&gt;What does the machine learning workflow look like?&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qgE4N3F8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/57b4pzabngxqgvuxbobc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qgE4N3F8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/57b4pzabngxqgvuxbobc.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;courtesy: aws/deep-learning.ai&lt;/p&gt;

&lt;p&gt;As we see above, in the first phase - Ingest &amp;amp; Analyze:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initially, we need to ingest the data; this will be done using Amazon S3&lt;/li&gt;
&lt;li&gt;Data exploration will be done with SQL queries, using Amazon Athena&lt;/li&gt;
&lt;li&gt;We need to perform statistical bias detection on the input data, which will be done using Amazon SageMaker Clarify (at this point, I am clueless about what bias detection is and why we need it)&lt;/li&gt;
&lt;li&gt;AWS Glue will be used to catalogue the data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the next phase - Prepare &amp;amp; Transform&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We need to extract the relevant features from the input dataset; this involves feature engineering, which will be done using SageMaker Data Wrangler and Processing Jobs&lt;/li&gt;
&lt;li&gt;The extracted features need to be stored, for which we will use SageMaker Feature Store&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the next phase - Train &amp;amp; Tune&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We now go into the model development phase. Using Autopilot, we get a set of candidate models trained on the data, from which the candidate with the best score/accuracy is chosen.&lt;/li&gt;
&lt;li&gt;Training jobs and Debugger can be used to detect issues, to improve model accuracy.&lt;/li&gt;
&lt;li&gt;Hyperparameter tuning - why is this required?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the next phase - Deploy &amp;amp; Manage&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;At this point the model has been built and is ready for use, so we need to deploy it.&lt;/li&gt;
&lt;li&gt;We will create an automated pipeline so that once the model is built, it is automatically deployed as well.&lt;/li&gt;
&lt;li&gt;SageMaker endpoints will serve the model, which can then be used for inference.&lt;/li&gt;
&lt;li&gt;Batch Transform and Pipelines will also be used in this stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;h5&gt;Popular machine learning tasks&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FXr7nnVO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1jqld0xwcnh0a6pc6ld8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FXr7nnVO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1jqld0xwcnh0a6pc6ld8.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;courtesy: aws/deeplearning.ai&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Supervised - the machine learns from examples. Classification could involve categorizing the sentiment of text as positive, neutral, or negative. Regression could involve predicting a continuous value from a set of parameters, for example predicting a house price.&lt;/li&gt;
&lt;li&gt;Unsupervised - this involves determining patterns and clustering/grouping the data points&lt;/li&gt;
&lt;li&gt;Image processing / CV - here we need to determine whether an image contains a dog or a cat, or, for self-driving cars, to differentiate between speed signs and trees.&lt;/li&gt;
&lt;li&gt;NLP/NLU - here we do sentiment analysis, machine translation, transfer learning, and question answering.&lt;/li&gt;
&lt;/ol&gt;

&lt;h5&gt;Multi-class classification for sentiment analysis of product reviews&lt;/h5&gt;

&lt;p&gt;We have a set of product reviews, for example from amazon.com.&lt;br&gt;
For each product review, we have to classify the sentiment into classes such as positive, negative, and so on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WDKYU3Zb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pycjakg8c5umscr6b49t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WDKYU3Zb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pycjakg8c5umscr6b49t.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: aws/deep-learning.ai &lt;/p&gt;
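
&lt;p&gt;For intuition only, here is a toy keyword-counting classifier over positive/negative/neutral classes (hand-picked word lists, not a trained model); a real solution would train a model on labeled reviews:&lt;/p&gt;

```python
POSITIVE_WORDS = {"great", "love", "excellent", "good"}
NEGATIVE_WORDS = {"bad", "broken", "terrible", "poor"}

def classify_sentiment(review):
    """Toy multi-class sentiment: count keyword hits per class."""
    words = review.lower().split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

label = classify_sentiment("I love this great product")  # "positive"
```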

&lt;p&gt;We need to do training; this is a supervised machine learning problem, so we need to provide labels as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mzs8uavL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqg5q7918uzm81ezor20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mzs8uavL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqg5q7918uzm81ezor20.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;courtesy: aws/deep-learning.ai &lt;/p&gt;

&lt;p&gt;Learning continues....&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>machinelearning</category>
      <category>python</category>
    </item>
    <item>
      <title>What all goes into the landscape of AWS Machine Learning?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Tue, 15 Jun 2021 17:22:50 +0000</pubDate>
      <link>https://dev.to/gansai9/what-all-goes-into-the-landscape-of-aws-machine-learning-473p</link>
      <guid>https://dev.to/gansai9/what-all-goes-into-the-landscape-of-aws-machine-learning-473p</guid>
      <description>&lt;p&gt;(This is a placeholder for my learnings in the context of 'AWS Machine Learning', updated almost daily, until I complete the course)&lt;/p&gt;

&lt;p&gt;What are the different toolsets available in Amazon Web Services that cater to machine-learning-based applications?&lt;/p&gt;

&lt;p&gt;Currently, what I know of is Amazon SageMaker, which you can use to create a model.&lt;/p&gt;

&lt;p&gt;Learning continues....&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>aws</category>
    </item>
    <item>
      <title>How essential is Mathematics to Machine Learning?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Tue, 15 Jun 2021 17:18:52 +0000</pubDate>
      <link>https://dev.to/gansai9/how-essential-is-math-to-machine-learning-nk</link>
      <guid>https://dev.to/gansai9/how-essential-is-math-to-machine-learning-nk</guid>
      <description>&lt;p&gt;(This is a placeholder for my learnings in the context of Mathematics for Machine Learning, updated almost daily, until I complete the course)&lt;/p&gt;

&lt;p&gt;How does mathematics play such an important role in machine learning?&lt;/p&gt;

&lt;p&gt;Mathematical topics such as probability, statistics, linear algebra, and calculus are the basis of machine learning.&lt;/p&gt;

&lt;h4&gt;Linear Algebra Motivation&lt;/h4&gt;

&lt;h5&gt;Solving simultaneous linear equations&lt;/h5&gt;

&lt;p&gt;Suppose we have two variables: the price of an apple (a) and the price of a banana (b).&lt;br&gt;
We want to determine these prices, given a set of shopping trips.&lt;/p&gt;

&lt;p&gt;First trip: 2a + 3b = 13&lt;br&gt;
Second trip: 3a + 4b = 27&lt;/p&gt;

&lt;p&gt;We go to the market, buy apples and bananas, and then want to solve the above equations to determine the price a and the price b.&lt;/p&gt;

&lt;p&gt;The above problem can be formulated in terms of matrices and vectors.&lt;/p&gt;

&lt;p&gt;| 2 3 | |a| = |13|&lt;br&gt;
| 3 4 | |b|   |27|&lt;/p&gt;

&lt;p&gt;We have a 2x2 matrix, multiplied with a column vector, producing another column vector.&lt;/p&gt;
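
&lt;p&gt;The small system above can be solved directly, for example by Cramer's rule (for these particular numbers the algebra gives a = 29 and b = -15):&lt;/p&gt;

```python
def solve_2x2(a11, a12, a21, a22, c1, c2):
    """Solve the 2x2 system  a11*x + a12*y = c1,  a21*x + a22*y = c2."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular matrix: no unique solution")
    return (c1 * a22 - a12 * c2) / det, (a11 * c2 - c1 * a21) / det

a, b = solve_2x2(2, 3, 3, 4, 13, 27)  # the apples/bananas system above
```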

&lt;h5&gt;Fitting data with an equation that has tunable parameters (an optimization problem)&lt;/h5&gt;

&lt;p&gt;Suppose we have a histogram of the heights of people in a given region: ranges of heights on the x-axis and, on the y-axis, the count of people falling in each range. With a good amount of data, we have plotted the histogram. Now we want to fit an equation, or a line, to this data; that is, we want to find the optimal values of the parameters governing that equation/line so that it fits the data.&lt;br&gt;
The reason for fitting the line and deriving the equation is to determine how heights are distributed across people. We also want to answer questions like: what is the average height in this region? If we pick a random person from this region, what is their most likely height? We want to understand the distribution of the data. We have a lot of data points, but we want to understand the pattern, something like: very few people are extremely tall, and most of the population in this region tends to have a certain height.&lt;/p&gt;

&lt;p&gt;Another example to understand histogram is, distribution of night prices for airbnb houses , courtesy: (Yan Holtz) &lt;a href="https://www.data-to-viz.com/graph/histogram.html"&gt;https://www.data-to-viz.com/graph/histogram.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c0g_0NpL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzk1ss2mkz7l7kajneuu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c0g_0NpL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzk1ss2mkz7l7kajneuu.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I learnt from Yan Holtz's blog, the beauty of a histogram is primarily in understanding the distribution of the data, which is visually evident from the following image, courtesy: Yan Holtz&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hQ_Z5eZy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxqzfp490xk1ykbtj90f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hQ_Z5eZy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxqzfp490xk1ykbtj90f.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see multiple types of distribution:&lt;br&gt;
a. Skewed&lt;br&gt;
b. Normal&lt;br&gt;
c. Uniform - almost all ranges have an equal number of values&lt;br&gt;
d. Comb&lt;br&gt;
e. Edge peak&lt;br&gt;
f. Bimodal&lt;/p&gt;

&lt;p&gt;With neural networks and machine learning, there are two things to consider about fitting an equation:&lt;br&gt;
a. What mathematical equation would fit this data?&lt;br&gt;
b. How do we best fit this equation - that is, what are the optimal values of the equation's parameters such that the line representing the equation fits the data as well as possible?&lt;/p&gt;

&lt;h5&gt;Vectors in Linear Algebra&lt;/h5&gt;

&lt;p&gt;As we saw in the previous histogram example, we want to fit an equation to the data representing people's heights.&lt;br&gt;
Consider a Gaussian, or Normal, distribution (normalized so that the area under the curve is 1). In this equation, mu represents the center of the curve and sigma its width. It is given by the equation shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YbJuhVny--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/osnqambt9tb8t3t0uhnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YbJuhVny--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/osnqambt9tb8t3t0uhnc.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: Imperial College London - Coursera&lt;/p&gt;
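
&lt;p&gt;The Gaussian density can be written directly in Python (mu is the center, sigma the width; the height values below are just illustrative):&lt;/p&gt;

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of the Normal distribution centered at mu with width sigma."""
    coefficient = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coefficient * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

peak = gaussian_pdf(170.0, 170.0, 10.0)  # the density is highest at x = mu
```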

&lt;p&gt;Learning continues....&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>mathematics</category>
    </item>
    <item>
      <title>Why is MLOps crucial in the domain of Machine Learning Engineering for Production?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Mon, 14 Jun 2021 19:07:34 +0000</pubDate>
      <link>https://dev.to/gansai9/machine-learning-engineering-for-production-mlops-k9g</link>
      <guid>https://dev.to/gansai9/machine-learning-engineering-for-production-mlops-k9g</guid>
      <description>&lt;p&gt;( This is a placeholder for my learning in the space of MLOps, updated almost daily, until I complete the course )&lt;/p&gt;

&lt;h4&gt;
  
  
  Intro
&lt;/h4&gt;

&lt;p&gt;Generally, a machine learning engineer / modeler starts a project in a Jupyter notebook environment to create a model.&lt;br&gt;
But machine learning does not end with creating a model that achieves good accuracy.&lt;br&gt;
The real focus is putting the model into production.&lt;br&gt;
How do you deploy a machine learning model into production, so that it can make predictions given some input data ?&lt;/p&gt;

&lt;h4&gt;
  
  
  What is MLOps ?
&lt;/h4&gt;

&lt;p&gt;MLOps is the discipline of building and maintaining production machine learning applications, together with the processes and tools that support them. &lt;/p&gt;

&lt;h4&gt;
  
  
  Ponderings
&lt;/h4&gt;

&lt;p&gt;Some of the ponderings/ questions we need to answer are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What goes into the lifecycle of a production machine learning application ?&lt;/li&gt;
&lt;li&gt;How will a model be deployed ?&lt;/li&gt;
&lt;li&gt;How will a model be updated in production, as it is tuned by the modelers to improve accuracy ?&lt;/li&gt;
&lt;li&gt;What goes into testing the model ? Do we need integration tests that run against machine learning models, just as we have them for normal software projects ?&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Project Lifecycle of a production machine learning application
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B2ahi5em--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fwbjuoxjlnfaz872peyy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B2ahi5em--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fwbjuoxjlnfaz872peyy.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: &lt;a href="https://www.linkedin.com/in/andrewyng/"&gt;Andrew Ng&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  What goes into the process of MLOps ?
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--89ijARha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbxnq8d8qf6uo30d3py0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--89ijARha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbxnq8d8qf6uo30d3py0.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: &lt;a href="https://www.linkedin.com/in/andrewyng/"&gt;Andrew Ng&lt;/a&gt; &lt;/p&gt;

&lt;h4&gt;
  
  
  Data drift
&lt;/h4&gt;

&lt;p&gt;The machine learning model which you developed in your lab is trained on some data. Once you deploy this model into production, you might encounter several issues, and one of them is data drift. This means that the distribution of the data seen at inference time becomes very different from the distribution of the data used during training. Accordingly, the model needs to be retrained, taking the data drift into account.&lt;/p&gt;
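
&lt;p&gt;As an illustration, a crude drift check can compare summary statistics of the training data against recent production data ( the threshold and the synthetic data below are arbitrary ):&lt;/p&gt;

```python
import numpy as np

def drift_score(train, prod):
    # Compare means in units of the training standard deviation.
    # A large score suggests the production distribution has drifted.
    return abs(prod.mean() - train.mean()) / train.std()

rng = np.random.default_rng(1)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
prod = rng.normal(loc=0.8, scale=1.0, size=5_000)   # shifted distribution

score = drift_score(train, prod)
print(score > 0.5)   # True - flag the model for retraining
```

&lt;p&gt;Real monitoring systems use richer statistical tests, but the idea is the same: continuously compare production inputs against the training distribution.&lt;/p&gt;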

&lt;h4&gt;
  
  
  Importance of 'Production machine learning' knowledge
&lt;/h4&gt;

&lt;p&gt;You might be working on a product which performs some kind of inference in production, and this product operates continuously. So, knowledge of modern software practices is essential to maintain your product.&lt;/p&gt;

&lt;h4&gt;
  
  
  Model creation vs Model deployment &amp;amp; maintenance
&lt;/h4&gt;

&lt;p&gt;The scenario of model creation is very different from the scenario of model deployment. For example, you might deploy on smartphones, and end users may require that, for privacy, the data on their device never leaves that device. Model deployment/maintenance should then be set up so that the model on the smartphone/device is continuously updated/refreshed whenever the model is tuned.&lt;/p&gt;

&lt;p&gt;The objective is to have a very good sense of entire lifecycle of a machine learning project.&lt;/p&gt;

&lt;h5&gt;
  
  
  Reality of machine learning projects
&lt;/h5&gt;

&lt;p&gt;Let's assume our usecase is this: we have a manufacturing factory which makes smartphones. The job of the machine learning system is to take pictures of smartphones on the manufacturing assembly line and figure out whether each smartphone is defective or not. If a smartphone is defective, the inspection module removes it from the assembly line.&lt;/p&gt;

&lt;p&gt;It starts with training the model. The final step is to put the model into a prediction server, by setting up API interfaces.&lt;br&gt;
These API interfaces of the prediction server could be hosted in the cloud. Or we could use an edge deployment, where the prediction server's API interfaces are available within the factory itself, without relying on an internet connection. &lt;/p&gt;

&lt;p&gt;Within the lab/jupyter notebook, we might train the model and find that it is very accurate on the test dataset. But in production deployment, on real-world data, the model may not perform well, for various reasons. One of the reasons is data drift. In the lab, we might work with a training dataset where the lighting is good. But in a manufacturing factory, which could be an edge deployment, the lighting conditions might be poor, so some darker images get sent to the prediction server, and the model might not do well on these kinds of images. &lt;/p&gt;
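
&lt;p&gt;A simple input check can catch the darker-images problem before inference. A hypothetical sketch ( the 0-255 intensity scale and the threshold value are illustrative ):&lt;/p&gt;

```python
import numpy as np

DARK_THRESHOLD = 40  # mean pixel intensity (0-255 scale); illustrative value

def is_too_dark(image):
    # Flag images the model was never trained on, instead of silently
    # returning a low-quality prediction for them.
    return bool(DARK_THRESHOLD > image.mean())

bright = np.full((64, 64), 128)   # well-lit image
dark = np.full((64, 64), 10)      # underexposed image

print(is_too_dark(bright), is_too_dark(dark))   # False True
```

&lt;p&gt;Flagged images could be logged and routed for review, giving early warning of this kind of drift.&lt;/p&gt;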

&lt;p&gt;What about the amount of code involved in machine learning, or the number of aspects to deal with in machine learning projects ? We might assume we are responsible just for the model code or the algorithm. But in reality, the ecosystem consists of several moving parts which enable prediction. The entire infrastructure required for machine learning includes components for :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configuration&lt;/li&gt;
&lt;li&gt;Data collection&lt;/li&gt;
&lt;li&gt;Data verification&lt;/li&gt;
&lt;li&gt;Feature extraction&lt;/li&gt;
&lt;li&gt;Serving infrastructure&lt;/li&gt;
&lt;li&gt;ML model code&lt;/li&gt;
&lt;li&gt;Monitoring infrastructure&lt;/li&gt;
&lt;li&gt;Analysis tools&lt;/li&gt;
&lt;li&gt;Process management tools&lt;/li&gt;
&lt;li&gt;Machine resource management tools.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, ML code is just 5-10% of the overall ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fo9eys-p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bw7h8329bh1jgapiez5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fo9eys-p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bw7h8329bh1jgapiez5w.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;courtesy: Andrew Ng. &lt;/p&gt;

&lt;p&gt;Learning continues....&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>mlops</category>
    </item>
    <item>
      <title>Why do we need to think of Explainable AI ?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Sun, 13 Jun 2021 11:23:30 +0000</pubDate>
      <link>https://dev.to/gansai9/explainable-ai-f9b</link>
      <guid>https://dev.to/gansai9/explainable-ai-f9b</guid>
      <description>&lt;p&gt;(This is a placeholder for my learnings in the space of Explainable AI, updated almost daily, until I complete the course)&lt;/p&gt;

&lt;h4&gt;
  
  
  Introduction
&lt;/h4&gt;

&lt;p&gt;Most of the time, AI algorithms might give us good, probably accurate results - for example, which products to show customers while they browse a catalogue.&lt;br&gt;
Or whether a particular transaction is fraudulent, in fraud detection scenarios.&lt;br&gt;
Or they could predict house prices accurately, given certain factors.&lt;/p&gt;

&lt;p&gt;But can AI explain why it reached a particular conclusion ?&lt;br&gt;
AI remains a black box for humans. Humans cannot reason about how an AI algorithm arrived at a particular output, given a particular input.&lt;/p&gt;

&lt;p&gt;Explainable AI is the key to solving such problems.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is XAI ?
&lt;/h4&gt;

&lt;p&gt;XAI refers to AI techniques that give insight into the reasons behind an AI system's outcomes.&lt;/p&gt;

&lt;h4&gt;
  
  
  My ponderings
&lt;/h4&gt;

&lt;p&gt;Some of the questions I have are as follows:&lt;br&gt;
1) Do we really need models to be explainable ?&lt;br&gt;
2) If a model cannot explain why it reached a particular conclusion, what are the consequences ?&lt;br&gt;
3) In domains where machine learning plays a key role, what happens if models cannot be explained ?&lt;/p&gt;

&lt;p&gt;For example, in the domain of finance, if a customer transaction is classified as fraud, how did the model arrive at that conclusion ? Or, in the case of a medical diagnosis, why did a model conclude that a patient has a brain tumour ? Day by day, machines/systems are increasingly being trusted with their decisions. Sometimes blindly trusting machines might work, but in certain domains, blind trust will not.&lt;br&gt;
4) What should a machine learning engineer be aware of, if they need to make their models explainable ?&lt;/p&gt;

&lt;h4&gt;
  
  
  With / Without Explainable AI (famous picture related to this topic):
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nc7KBa5R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qann5978rjotwaozi8hl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nc7KBa5R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qann5978rjotwaozi8hl.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: Aki Ohashi&lt;/p&gt;

&lt;h4&gt;
  
  
  What are some companies who provide their solutions related to XAI ?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Know the how &amp;amp; why behind your AI solutions by fiddler.ai &lt;a href="https://www.fiddler.ai/explainable-ai"&gt;fiddler.ai&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Related resources:
&lt;/h4&gt;

&lt;p&gt;( Throughout this course of learning about explainable AI, I collect interesting online resources, books, and youtube videos, so that we get a fuller and deeper picture of this field )&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/wangyongjie-ntu/Awesome-explainable-AI"&gt;Awesome Explainable AI collection on github&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/learning/learning-xai-explainable-artificial-intelligence"&gt;Linkedin course on 'Explainable AI'&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Learning continues....&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How can we build Modern Java apps on AWS ?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Thu, 10 Jun 2021 12:19:48 +0000</pubDate>
      <link>https://dev.to/gansai9/build-modern-java-apps-on-aws-eae</link>
      <guid>https://dev.to/gansai9/build-modern-java-apps-on-aws-eae</guid>
      <description>&lt;p&gt;( This is going to be placeholder, as I learn more about how to build modern apps on AWS using various amazon webservices, such as : Amazon Cognito, Lambda, Stepfunctions, S3 for storage, Cloudwatch for monitoring, Raytracing for distributed tracing etc , updated almost daily, until I complete the course)&lt;/p&gt;

&lt;p&gt;When we want to build modern Java apps on AWS, what do we mean by that ?&lt;/p&gt;

&lt;h2&gt;
  
  
  Modern Java app on AWS - workflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Firstly, we are going to have an end-to-end project running on AWS.&lt;/li&gt;
&lt;li&gt;By end-to-end, we mean there is going to be user authentication/ authorization using Amazon Cognito.&lt;/li&gt;
&lt;li&gt;Amazon S3 will be used to store some data.&lt;/li&gt;
&lt;li&gt;Amazon API Gateway will be used to design APIs, perform traffic monitoring, and provide security for the APIs.&lt;/li&gt;
&lt;li&gt;AWS Lambda will be used to retrieve data from S3 when a GET API is invoked. For adding data to S3, a POST API will be used; the data will first be validated and then posted to S3. Since adding data is a stepwise process, we will leverage AWS Step Functions to do that.&lt;/li&gt;
&lt;li&gt;Amazon CloudWatch will be used for monitoring, and AWS X-Ray for distributed tracing.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Overall, that's the end-to-end project for building modern Java apps on AWS.&lt;/p&gt;

&lt;p&gt;So, we will be leveraging the serverless capabilities of AWS Lambda, along with the authentication services of Amazon Cognito.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Cognito
&lt;/h3&gt;

&lt;h4&gt;
  
  
  User Authentication &amp;amp; Authorization workflow with Amazon Cognito &amp;amp; API Gateway
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Amazon Cognito maintains a user pool. For example, you could maintain data like: user email and password.&lt;/li&gt;
&lt;li&gt;This capability is then used to authenticate the users.&lt;/li&gt;
&lt;li&gt;Amazon Cognito can also be integrated with 3rd party identity providers for authentication - for example, OpenID Connect providers and social sign-in providers. This is based on OAuth2.&lt;/li&gt;
&lt;li&gt;For example, once a user signs in with Facebook, a token is generated and sent to the user. The user exchanges this token with Amazon Cognito, which validates it with the identity provider. Once validated, Amazon Cognito provides a JWT token to the user/client.&lt;/li&gt;
&lt;li&gt;The client can then provide this JWT token to API Gateway, in order to access an API.&lt;/li&gt;
&lt;li&gt;API Gateway then validates this JWT token with Cognito for this user/client. Once the token is validated, API Gateway allows the user/client to access the backend API it proxies.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The above flow is called Federation with Cognito user pools, wherein the identity is authenticated via identity providers.&lt;/p&gt;
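
&lt;p&gt;From the client's side, once it holds the JWT from Cognito, calling an API behind API Gateway is an ordinary HTTPS request with the token in the Authorization header. A sketch using Python's standard library for brevity ( the URL and token are placeholders, not real values ):&lt;/p&gt;

```python
import urllib.request

# Placeholders - a real client would use the JWT returned by Cognito
# and the invoke URL of the deployed API Gateway stage.
API_URL = "https://example.com/prod/items"
JWT_TOKEN = "eyJhbGciOi...placeholder"

# Build the request; API Gateway's authorizer inspects this header
# and validates the token before forwarding to the backend.
request = urllib.request.Request(
    API_URL,
    headers={"Authorization": JWT_TOKEN},
    method="GET",
)

print(request.get_header("Authorization") == JWT_TOKEN)   # True
```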

&lt;h4&gt;
  
  
  Federated Identities
&lt;/h4&gt;

&lt;p&gt;Apart from this, Cognito also has a feature called federated identities. With this, once an identity is authenticated with an identity provider, it is assigned an IAM role, with which AWS services can be accessed, including API Gateway.&lt;/p&gt;

&lt;h4&gt;
  
  
  Unauthenticated Identities
&lt;/h4&gt;

&lt;p&gt;Amazon Cognito also supports unauthenticated identities, which are used where users don't want to share their identities but are still allowed to access AWS services, with a limited scope.&lt;/p&gt;

&lt;h4&gt;
  
  
  HostedUI
&lt;/h4&gt;

&lt;p&gt;Amazon Cognito also has a nice feature called the Hosted UI, using which you can sign up and sign in users if you are not going with 3rd party identity providers.&lt;br&gt;
You can define the redirect URI to use once the user is signed in.&lt;br&gt;
For example, you can have a webpage within your website which is treated as your callback html file.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Lambda
&lt;/h3&gt;

&lt;p&gt;AWS Lambda is powered by a virtualization technology called Firecracker, which creates microVMs that execute the code within Lambda.&lt;/p&gt;

&lt;p&gt;Learning continues....&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>java</category>
    </item>
    <item>
      <title>Principles of economics</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Wed, 09 Jun 2021 13:10:03 +0000</pubDate>
      <link>https://dev.to/gansai9/what-it-takes-to-think-like-an-economist-528b</link>
      <guid>https://dev.to/gansai9/what-it-takes-to-think-like-an-economist-528b</guid>
      <description>&lt;p&gt;( This is going to be a placeholder for things I learn in the context of 'Principles of economics' , updated almost daily, until I complete the course)&lt;/p&gt;

&lt;p&gt;What does it take to think like an economist ?&lt;/p&gt;

&lt;p&gt;The fundamental idea in economics is scarcity.&lt;br&gt;
Any entity - individual, corporation, government - has access to scarce resources and they need to decide how and where to allocate them.&lt;/p&gt;

&lt;p&gt;People make choices with scarce resources, and they interact with other people ( in market ) when they make these choices.&lt;/p&gt;

&lt;p&gt;Choices, Scarcity, Interaction.&lt;/p&gt;

&lt;p&gt;What choices do people make with scarce resources, and how do they interact with the world ? - This is the mantra of economics.&lt;/p&gt;

&lt;p&gt;Suppose you are a developer - you have a limited resource - time / money. How do you allocate that and where do you allocate that ?&lt;/p&gt;

&lt;p&gt;Suppose you are a corporation - you have a limited resource - time / money / people working for you. How do you allocate these and where do you allocate these and when do you allocate these ?&lt;/p&gt;

&lt;p&gt;Suppose you are a government - you have limited resources - time, money, natural resources, people. How do you allocate these and where do you allocate these and when do you allocate these ?&lt;/p&gt;

&lt;p&gt;When we examine the behavior of governments, what we are dealing with is: macroeconomics.&lt;br&gt;
When we deal with individual firms / individuals - what we are dealing with is: microeconomics.&lt;/p&gt;

&lt;p&gt;And there are opportunity costs - the cost of what you give up when you pick one option among various choices. For example, suppose you can do some coding which will take 1 hour, and you can also watch a Netflix webseries which will take 1 hour. If you choose to watch Netflix, then you have paid the opportunity cost of not coding for 1 hour - you could have completed some task in that time instead. &lt;/p&gt;

&lt;p&gt;What's happening in the world economy ?&lt;br&gt;
What's happening in economy in your country ?&lt;/p&gt;

&lt;p&gt;Did you know that downtime of Fastly affects the economy ? A Fastly outage can affect sales on Shopify or any other ecommerce business built on Fastly.&lt;/p&gt;

&lt;p&gt;In a way, all things are related.&lt;/p&gt;

&lt;h5&gt;
  
  
  Production Possibilities Curve
&lt;/h5&gt;

&lt;p&gt;The Production Possibilities Curve depicts increasing opportunity costs: as workers move from one sector to another, producing more goods/products in the second sector means giving up more and more output in the first.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FDYAHHNp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e5dyv3mfontu4cz2mnfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FDYAHHNp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e5dyv3mfontu4cz2mnfc.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: Stanford University&lt;/p&gt;

&lt;p&gt;For example, consider 2 sectors - movies and computers. Initially, we have skilled workers in the computer sector. As we move these workers gradually to the movies sector, the number of computers produced declines. So, the involvement of workers in the movies sector is an opportunity cost in the computers sector. More of something leads to less of something else.&lt;/p&gt;

&lt;p&gt;And the opportunity costs are increasing, in the sense that computer output falls at an increasing rate as movie output rises: each additional movie costs more computers than the previous one. The curve represents the production possibilities of the goods/services in these sectors. &lt;/p&gt;
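
&lt;p&gt;The increasing opportunity cost along a PPC can be seen numerically from a concave production schedule ( the numbers below are made up for illustration ):&lt;/p&gt;

```python
# Hypothetical production schedule: each pair is (movies, computers)
# the economy can produce with all workers employed.
ppc = [(0, 30000), (100, 28000), (200, 24000), (300, 18000), (400, 10000)]

# Opportunity cost of each extra 100 movies, in computers given up.
costs = []
for (m0, c0), (m1, c1) in zip(ppc, ppc[1:]):
    costs.append(c0 - c1)

print(costs)   # [2000, 4000, 6000, 8000] - each step costs more
```

&lt;p&gt;The rising numbers are exactly the increasing opportunity cost the curve's concave shape encodes.&lt;/p&gt;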

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4BAT9N4r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o52ztg3pmlnxti3rtao1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4BAT9N4r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o52ztg3pmlnxti3rtao1.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: Stanford University&lt;/p&gt;

&lt;h6&gt;
  
  
  Possibilities and Impossibilities -
&lt;/h6&gt;

&lt;p&gt;The curve also represents possibilities and impossibilities: for example, we cannot produce 30000 computers and simultaneously produce 1000 movies. The curve and the area under it represent the possibilities of production.&lt;/p&gt;

&lt;h6&gt;
  
  
  Efficient/ Inefficient -
&lt;/h6&gt;

&lt;p&gt;The curve also lets us distinguish efficient from inefficient production. If production of goods/services in both sectors happens on the boundary of the curve, production is efficient. But if production lies inside the curve, production is inefficient. For example, producing 5000 computers and 200 movies is efficient; if only 150 movies are produced instead of 200, movie production is inefficient. &lt;/p&gt;

&lt;h6&gt;
  
  
  Evolution of PPC -
&lt;/h6&gt;

&lt;p&gt;As the economy evolves, the old PPC is replaced with a new PPC when the possible number of goods/services produced increases, which leads to economic growth. When we produce more, we earn more, the standard of living improves, and so on. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MhKj6Iaf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b02p63oyk3paoch5np7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MhKj6Iaf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b02p63oyk3paoch5np7z.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
courtesy: Stanford University&lt;/p&gt;

&lt;h6&gt;
  
  
  Investment vs Consumption
&lt;/h6&gt;

&lt;p&gt;Products such as computers are investment products: you can use them to produce something in the future.&lt;br&gt;
Products such as movies are consumption products, because you simply consume them.&lt;/p&gt;

&lt;h6&gt;
  
  
  Real Gross Domestic Product
&lt;/h6&gt;

&lt;p&gt;A country produces various goods/services across sectors, and there is a quantifiable number which represents the total value of the goods/services produced by the country - GDP - which reflects the economic state of the country. Real GDP is GDP adjusted for inflation. &lt;br&gt;
So, over the years, GDP can be plotted against the year, and we can see the GDP curve rising toward the top right. &lt;br&gt;
Sometimes there are dips in the GDP curve, indicating a drop in productivity, which could be caused by various factors - for example, a recession.&lt;/p&gt;

&lt;h6&gt;
  
  
  No. of employed workers curve
&lt;/h6&gt;

&lt;p&gt;There is also another curve which represents the total number of employed workers in an economy, plotted against the year. More workers generally means more output.&lt;br&gt;
Sometimes, again, there could be dips in the number of employed workers due to recession ( covid pandemic - a black swan ).&lt;/p&gt;

&lt;h6&gt;
  
  
  Real GDP per worker curve
&lt;/h6&gt;

&lt;p&gt;Since we have GDP and the number of employed workers per year, we can plot another curve representing real GDP per worker.&lt;/p&gt;

&lt;p&gt;( In terms of opportunity costs, when workers embrace the opportunity in one sector (for eg: movies), they simultaneously incur the opportunity cost in another sector (for eg: computers). The same number of workers and the same set of opportunities exist, but when workers move across sectors, this incurs an opportunity cost of lost productivity in the sector they leave. )&lt;/p&gt;

&lt;h6&gt;
  
  
  GDP studies across countries
&lt;/h6&gt;

&lt;p&gt;GDP growth could also be evaluated across countries. There could be dips, peaks, and rises - so what factors led to these characteristics of the curve ? That could be studied in the context of economics.&lt;/p&gt;

&lt;h6&gt;
  
  
  Observing and Explaining the Economy
&lt;/h6&gt;

&lt;p&gt;There are lots of datasets related to economy. One of them is income distribution.&lt;br&gt;
The question is:&lt;/p&gt;

&lt;p&gt;In a country like United States, How much do the top 10% earn ( on an average ) and bottom 10% earn ( on an average ) and how has this distribution evolved over the years ?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8m4GgxPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a2khpqyt7tfy8h8rtqln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8m4GgxPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a2khpqyt7tfy8h8rtqln.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sometimes, a change in data representation helps. For example, absolute values might not help, but a slightly different way of looking at things - for example, percentage change in income - might.&lt;/p&gt;

&lt;p&gt;What about percentage change ?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jCyNXNrx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dic9t15z0oiizwi2bhyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jCyNXNrx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dic9t15z0oiizwi2bhyl.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In different time periods, the two groups experience different growth rates. So, specifically, what happened in the 80s and 90s - what led to the steep growth of the top 10% ? What economic policies led to that, or what opportunities opened up during that time ?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QKa5cFm4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31e5b2bv2epgywc71mq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QKa5cFm4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31e5b2bv2epgywc71mq5.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's an economic theory that, during the same time period, relative returns from higher education increased - the higher the education/skills, the greater the returns. That's one possible explanation for the widening of the income distribution around the 80s and 90s. &lt;/p&gt;

&lt;h6&gt;
  
  
  Positive economics and Normative economics
&lt;/h6&gt;

&lt;p&gt;Positive economics deals with explaining why something occurs, using economic theory.&lt;/p&gt;

&lt;p&gt;Normative economics deals with suggestions/ recommendations of economic policies.&lt;/p&gt;

&lt;h6&gt;
  
  
  Supply and demand model
&lt;/h6&gt;

&lt;p&gt;This is one of the most important and basic models in economics.&lt;/p&gt;

&lt;p&gt;It can explain the chaotic nature of markets like auction markets.&lt;/p&gt;

&lt;p&gt;There can be any kind of market - a market of goods or services. For example, the bike market, the labour market, the stock market. ( We can also treat the internet as a marketplace, where the goods are data centers provided by major cloud providers, services provided by different software companies, and many other offerings. )&lt;/p&gt;

&lt;p&gt;Demand is the relationship between the price of a good and the quantity demanded. Economists like to understand how the price of a good affects its demand: typically, the higher the price, the lower the quantity demanded. Demand describes the behavior of consumers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tpV0ICqk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wyx5bi5s30xjiuhriouz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tpV0ICqk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wyx5bi5s30xjiuhriouz.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Movement along the Demand Curve
&lt;/h6&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OqMFwKgX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ds3noppxyuxpwfj8fhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OqMFwKgX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ds3noppxyuxpwfj8fhx.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where we see the impact of price on quantity demanded: as price increases or decreases, what happens to demand ? You move up the curve when the price is high and the quantity demanded is low.&lt;/p&gt;

&lt;h6&gt;
  
  
  Shift in demand curve
&lt;/h6&gt;

&lt;p&gt;This can happen when prices stay the same but demand increases or decreases due to other factors. For example, the price of roller blades might decrease drastically, and people might demand fewer bicycles because of that. Or, depending on the weather or environmental concerns, the demand for bicycles might go up - so there is a shift in the demand curve.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c4LB5wVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vgrgr3ghz1v4em9a5dv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c4LB5wVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vgrgr3ghz1v4em9a5dv.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Supply
&lt;/h6&gt;

&lt;p&gt;Supply is the relationship between the price of a good and the quantity supplied.&lt;br&gt;
We also have a supply curve, which is upward sloping.&lt;br&gt;
It describes the behavior of firms.&lt;/p&gt;
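
&lt;p&gt;By analogy with the demand sketch, an upward-sloping supply curve can be written as Q = c + d*P, again with made-up illustration values:&lt;/p&gt;

```python
# A minimal sketch: a hypothetical linear supply function Q = c + d*P.
# c=10 and d=1 are made-up illustration values.
def quantity_supplied(price, c=10.0, d=1.0):
    """Quantity supplied rises as price rises (upward-sloping curve)."""
    return max(c + d * price, 0.0)

print(quantity_supplied(10))  # 20.0
print(quantity_supplied(30))  # 40.0
```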

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RdOCKbf0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9yfoln5aackjvn22zi7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RdOCKbf0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9yfoln5aackjvn22zi7o.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Market equilibrium
&lt;/h6&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4RS7DJVC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hrm71u3rviipx6mh5uxt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4RS7DJVC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hrm71u3rviipx6mh5uxt.png" alt="image"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This is where the quantity demanded equals the quantity supplied. The price at which this occurs is the equilibrium price, and the quantity is the equilibrium quantity.&lt;/p&gt;

&lt;p&gt;What if there's an increase in demand?&lt;/p&gt;

&lt;p&gt;That means the demand curve shifts outward from the old equilibrium and intersects the supply curve at a new point: the new equilibrium, with a higher price and a higher quantity. In short, prices rise as a result of the increase in demand, and the market settles at the new equilibrium.&lt;/p&gt;
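
&lt;p&gt;Putting the two sketches together (again with made-up linear curves, not from the original lecture): equilibrium is where a - b*P = c + d*P, so P* = (a - c) / (b + d). Shifting demand outward (raising a) raises both the equilibrium price and quantity:&lt;/p&gt;

```python
# A minimal sketch with made-up linear curves:
#   demand: Q = a - b*P,  supply: Q = c + d*P.
# At equilibrium the two quantities are equal:
#   a - b*P = c + d*P  =>  P* = (a - c) / (b + d).
def equilibrium(a, b, c, d):
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

# Initial equilibrium.
p0, q0 = equilibrium(a=100, b=2, c=10, d=1)  # P* = 30.0, Q* = 40.0

# Demand shifts out (a rises from 100 to 130): both price and quantity rise.
p1, q1 = equilibrium(a=130, b=2, c=10, d=1)  # P* = 40.0, Q* = 50.0
print(p0, q0, p1, q1)
```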

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BZDrXxYX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ekc8p0ummvww604ggl1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BZDrXxYX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ekc8p0ummvww604ggl1.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Elasticity
&lt;/h6&gt;

&lt;h4&gt;
  
  
  Resources
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/antontarasenko/awesome-economics"&gt;Awesome Economics collection on Github&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, what do economists do?&lt;br&gt;
They look at data and try to interpret it.&lt;br&gt;
Once the data is there, they plot it to explain what is going on.&lt;/p&gt;

&lt;p&gt;Causation: one event brings about another event.&lt;br&gt;
Correlation: one event closely occurs alongside another event, but this does not necessarily imply causation.&lt;/p&gt;

&lt;p&gt;Learning continues....&lt;/p&gt;

</description>
      <category>economics</category>
      <category>behavior</category>
      <category>humans</category>
    </item>
    <item>
      <title>What's the key mantra for machine learning ?</title>
      <dc:creator>Machine Learning Tech Stories</dc:creator>
      <pubDate>Mon, 07 Jun 2021 11:56:12 +0000</pubDate>
      <link>https://dev.to/gansai9/what-s-the-key-mantra-for-machine-learning-h4f</link>
      <guid>https://dev.to/gansai9/what-s-the-key-mantra-for-machine-learning-h4f</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YBYStHv3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iknjyn30w6cym5ll6bvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YBYStHv3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iknjyn30w6cym5ll6bvg.png" alt="image" width="848" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Have lots of data&lt;/li&gt;
&lt;li&gt;Have the data labelled&lt;/li&gt;
&lt;li&gt;Let the machine figure out the relationships between the data and the labels&lt;/li&gt;
&lt;li&gt;The output of the above exercise is a machine learning model, which can be used to predict or classify.&lt;/li&gt;
&lt;/ol&gt;
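
&lt;p&gt;The four steps above can be sketched end-to-end with a toy least-squares fit (a minimal illustration using NumPy, not from the original post; the data and the rule y = 2x + 1 are made up):&lt;/p&gt;

```python
import numpy as np

# 1. Have lots of data (here: a tiny toy set of inputs).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# 2. Have the data labelled (here: y = 2x + 1, the rule the machine must find).
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])

# 3. Let the machine figure out the relationship (a least-squares fit).
A = np.stack([x, np.ones_like(x)], axis=1)
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# 4. The output is a model, which can be used to predict.
def predict(x_new):
    return slope * x_new + intercept

print(round(float(predict(10.0)), 2))  # close to 21.0
```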

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L79LUdsb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07nqknyy9bufh6bsphpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L79LUdsb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07nqknyy9bufh6bsphpt.png" alt="image" width="617" height="462"&gt;&lt;/a&gt; Courtesy: &lt;a class="mentioned-user" href="https://dev.to/lmoroney"&gt;@lmoroney&lt;/a&gt; &lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
