hridyesh bisht for AWS Community Builders


Things to know about Data-Driven Architecture on cloud

As data becomes more diverse and valuable, we will see more emphasis on data-driven architecture. Developers need to understand the importance of accuracy, consistency, and quality of data so that they can build quality data pipelines and products that put the data first.

This blog explains what data is, how we can enrich it, how we can analyse it, and how to best use it. We will cover AWS Glue, Amazon QuickSight, and Amazon SageMaker.

The inspiration for this blog came from reading the Forbes article "The Age Of Analytics And The Importance Of Data Quality".

  1. https://www.forbes.com/sites/forbesagencycouncil/2019/10/01/the-age-of-analytics-and-the-importance-of-data-quality/?sh=76cca4fa5c3c
  2. https://www.freecodecamp.org/news/is-data-important-to-your-business/

Q. What is Data?

Data is raw information. For example, your daily coffee consumption is raw information about the amount of coffee you have consumed, but once you analyse it you can gain insights from it, such as:

  1. Which types of coffee beans or flavours you prefer
  2. How much sugar you put into the coffee

Image Credits: https://ciscocanada.files.wordpress.com/2013/09/cisco_blog_canada_coffee.png

Now that we can differentiate between data and information, let's look at formats. There are many formats to store and transfer data, and the right format depends on the type of data. For example,

  1. Writing coffee ingredients on a piece of paper i.e. unstructured data
  2. Writing them in a .csv file i.e. structured data
  3. A combination of both i.e. semi-structured data

Image Credits: https://programmerprodigycode.files.wordpress.com/2022/03/ecd9e-1sbcb7tf8jjwzchdtt_sodw.png

Q. How to enrich our data?

As a data engineer, you want to maximise the insights you can gather from your data. Some data formats are developer-friendly and some are not, so we need to convert data into developer-friendly formats. There are many ways of doing this.

An example of a no/low-code solution is AWS Glue.

AWS Glue is a fully managed ETL (extract, transform, and load) service that makes it simple and cost-effective to categorise your data, clean it, enrich it, and move it reliably between various data stores and data streams.

You can store your data using various AWS services and still maintain a unified view of your data using the AWS Glue Data Catalogue. Use Data Catalogue to search and discover the datasets that you own, and maintain the relevant metadata in one central repository. 

Image Credits: https://d1.awsstatic.com/aws-glue-graphics/Product-page-diagram_AWS-Glue_Elixir%402x.6511bc93abc20bb7bc8d03ebe2be1cbb7f2623fe.png

Q. How does AWS Glue work?

You define jobs in AWS Glue to do the work that's required to extract, transform, and load (ETL) data from a data source to a data target. You perform the following actions:

  1. For datastore sources, you define a crawler to populate your AWS Glue Data Catalogue with metadata table definitions.
    1. Point your crawler at a data store, and the crawler creates table definitions in the Data Catalogue. 
  2. AWS Glue can generate a script to transform your data, or you can provide your own script through the AWS Glue console or API (currently Python and Scala scripts are supported).
  3. You can run your job on demand, or you can set it up to start when a specified trigger occurs. The trigger can be a time-based schedule or an event (see the boto3 sketch further below).

You use the AWS Glue console to define and orchestrate your ETL workflow. The console calls several API operations in the AWS Glue Data Catalogue and AWS Glue Jobs system to perform the following tasks:

  1. Define AWS Glue objects such as jobs, tables, crawlers, and connections.
  2. Schedule when crawlers run.
  3. Define events or schedules for job triggers.
  4. Search and filter lists of AWS Glue objects.
  5. Edit transformation scripts.

Image Credits: https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/04/17/PartitionedData2.jpg
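
If you prefer to drive this workflow from code rather than the console, the same steps can be scripted with boto3. The sketch below is only a minimal illustration; the crawler name, IAM role ARN, S3 path, database, and job name are all hypothetical placeholders.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# 1. Create and start a crawler that populates the Data Catalogue from an S3 path
#    (all names below are hypothetical)
glue.create_crawler(
    Name="coffee-sales-crawler",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",
    DatabaseName="coffee_db",
    Targets={"S3Targets": [{"Path": "s3://my-example-bucket/raw/"}]},
)
glue.start_crawler(Name="coffee-sales-crawler")

# 2. Once a job and its ETL script have been defined, run it on demand
glue.start_job_run(JobName="coffee-etl-job")
```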

If you prefer not to use a no/low-code solution, try the Pandas library. Pandas is great for data wrangling, and most data engineers will have experience with it.
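
As a quick illustration, a minimal Pandas sketch of this kind of clean-up and enrichment might look like the following; the file name and columns are assumptions, not a real dataset.

```python
import pandas as pd

# Load a hypothetical coffee-consumption export
df = pd.read_csv("coffee_consumption.csv")

df = df.drop_duplicates()                         # remove duplicate records
df["sugar_grams"] = df["sugar_grams"].fillna(0)   # fill missing values
df["date"] = pd.to_datetime(df["date"])           # normalise the date column

# Enrich the data: total cups per flavour per month
summary = (
    df.groupby([df["date"].dt.to_period("M"), "flavour"])["cups"]
    .sum()
    .reset_index()
)
print(summary.head())
```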

For more information, feel free to watch my session introducing AWS Glue, where I compare no/low-code solutions with the Pandas library:

  1. https://www.youtube.com/watch?v=njxWiaqlErQ&t=963s

Q. What to do after enriching your data?

Data visualisation lets you present your data as maps or graphs and interact with them. This makes it much easier for the human mind to digest the data and spot patterns and trends. You can do this with standard business analysis tools like Tableau, or with R or Python. A few key benefits are:

  1. Identifying important trends: depending on the type of visualisation, you can determine trends over time within a data set.
  2. Being able to spot and identify relationships within your data is key; it can help you drive future business decisions in the right direction and take corrective actions elsewhere.
  3. Having a quick visual reference allows many recipients to collaborate around the same data.

Image Credits: https://i.pinimg.com/originals/7a/42/8e/7a428e9a180bb7e4911d5eaab8297982.jpg

There are a variety of ways to present your data, depending on what type of data you are trying to show. For each use case, there will be a specific type of chart, for example:

  1. To show relationships between data points, use a scatter or bubble chart.
  2. To compare data between two or more data sets, use a bar, column, or line chart.
  3. To look at the distribution of data across an entire data set, use a histogram.
  4. To represent the part-to-whole relationship of a data set, use a pie chart, stacked column chart, 100% stacked column chart, or a treemap.

Image Credits: https://az801952.vo.msecnd.net/uploads/b9335f90-bb61-4773-899e-3927c923b9be.png

An example of a no/low-code solution is Amazon QuickSight.

Amazon QuickSight allows everyone to understand your data by asking questions in natural language, exploring through interactive dashboards, or looking for patterns and outliers powered by machine learning.

QuickSight allows you to share dashboards, email reports, and embedded analytics. By taking your data and visually displaying the questions you want to answer, you can gain relevant insights into your company's data.

It allows you to draw various graphs and charts using options in the user interface. There are a lot of different options to work with, so let's cover a few key terms:

  1. Fields: These reflect the columns of the table in the database.
  2. Visual Types: This is how your data will be represented. This can be from a simple sum to a chart/graph or even a heat map.
  3. Sheets: These allow for many visuals to be stored together on a single page. To keep things simple, we'll be working with only one sheet.

Try changing the Visual Type of this data and see how it's represented. You might need to add extra fields to the Field wells to make them populate correctly.

Image Credits: https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2017/09/21/quicksight-sept-4.gif

QuickSight has an automatic save feature enabled by default for each analysis. Personally, the case study of QuickSight at the NFL is one of the most interesting use-case reads.

If you prefer not to use a no/low-code solution, look into the Matplotlib, Seaborn, and Bokeh libraries. They are great for data visualisation, and most data engineers will have experience with them.
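
As a rough sketch of the coding route, the snippet below uses Matplotlib to draw two of the chart types discussed above; the coffee figures are made up purely for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly coffee consumption figures
months = ["Jan", "Feb", "Mar", "Apr"]
cups = [58, 49, 65, 72]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Line chart: trend over time
ax1.plot(months, cups, marker="o")
ax1.set_title("Cups of coffee per month")

# Pie chart: part-to-whole relationship
ax2.pie(cups, labels=months, autopct="%1.0f%%")
ax2.set_title("Share of consumption by month")

plt.tight_layout()
plt.show()
```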

Q. How can we predict an outcome using our data?

Data visualisation helps us understand patterns in our data. The next step is to predict or classify an outcome based on historical data.

Q. What is Machine learning?

Machine learning is a branch of artificial intelligence (AI) and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy.

Image Credits: https://analyticsinsight.b-cdn.net/wp-content/uploads/2021/08/ML-System.jpg

There are many low-code machine learning solutions, each built around a particular use case. Let's consider a few:

  1. Extract text and data from documents: rather than building a model from scratch, you could use Amazon Textract (see the boto3 sketch after this list).
    1. Amazon Textract extracts text, handwriting, and data from scanned documents.
  2. If you want to build chatbots, Amazon Lex can help.
    1. It lets you design, build, test, and deploy conversational interfaces in applications using advanced natural language models.
  3. If you want to automate speech recognition, use Amazon Transcribe.
    1. It is an automatic speech recognition service that makes it easy to add speech-to-text capabilities to any application. Consider the use case of Alexa.
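
As a quick illustration of the Textract option mentioned in the list above, here is a minimal boto3 sketch; the bucket and document names are hypothetical.

```python
import boto3

textract = boto3.client("textract", region_name="us-east-1")

# Extract the text from a scanned document stored in S3 (hypothetical bucket/key)
response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-example-bucket", "Name": "invoice.png"}}
)

for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```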

For more information on the various machine learning use-case solutions, refer to:

  1. https://aws.amazon.com/machine-learning/

If you prefer not to use a low-code solution, look into SageMaker. It is well suited to training and deploying your own ML models, as you get access to AWS compute instances.

Q. What is SageMaker?

At its core, SageMaker is a fully managed service that provides the tools to build, train, and deploy machine learning models. Its components include managed notebooks, data labelling, model training, and model deployment with a variety of ways to use endpoints.

SageMaker algorithms are available via container images. Each region that supports SageMaker has its own copy of the images. You begin by retrieving the URI of the container image for the current session's region. You can also use your own container images for specific ML algorithms.

Image Credits: http://programmerprodigycode.files.wordpress.com/2022/03/0bc53-1mfyty2swftpsulqybcgy-w.png
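
For example, with the SageMaker Python SDK you can retrieve the image URI of a built-in algorithm for the current region. A minimal sketch, assuming the built-in XGBoost algorithm:

```python
import sagemaker

session = sagemaker.Session()

# Retrieve the region-specific container image URI for a built-in algorithm
image_uri = sagemaker.image_uris.retrieve(
    framework="xgboost",
    region=session.boto_region_name,
    version="1.5-1",
)
print(image_uri)
```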

Q. How can we host SageMaker models?

SageMaker can host models through its hosting services. The model is made accessible to clients through a SageMaker endpoint, which can be reached over HTTPS and through the SageMaker Python SDK.
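
A minimal sketch of this hosting flow with the SageMaker Python SDK is shown below; the IAM role and the S3 path to the trained model artefacts are hypothetical, and the built-in XGBoost container is assumed only as an example.

```python
import sagemaker
from sagemaker.model import Model
from sagemaker.serializers import CSVSerializer

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

# Built-in XGBoost container image for this region
image_uri = sagemaker.image_uris.retrieve(
    framework="xgboost", region=session.boto_region_name, version="1.5-1"
)

# Wrap previously trained model artefacts (hypothetical S3 path) and deploy them
model = Model(
    image_uri=image_uri,
    model_data="s3://awsexamplebucket/model/model.tar.gz",
    role=role,
    sagemaker_session=session,
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    serializer=CSVSerializer(),
)

print(predictor.predict("0.5,1.2,3.4"))  # payload format depends on the model
predictor.delete_endpoint()
```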

Another way is to use SageMaker batch transform. A batch transform job manages the processing of large datasets within the limits of specified parameters. When a batch transform job starts, SageMaker initialises compute instances and distributes the inference or pre-processing workload between them.

In batch transform, you provide your inference data as an S3 URI and SageMaker takes care of downloading it, running the predictions, and uploading the results back to S3 afterwards.

Batch transform partitions the Amazon S3 objects in the input by key and maps each Amazon S3 object to an instance. To split input files into mini-batches when you create a batch transform job, set the SplitType parameter value to Line. You can control the size of the mini-batches by using the BatchStrategy and MaxPayloadInMB parameters.

After processing, it creates an output file with the same name and the .out file extension. The batch transform job stores the output files in the specified location in Amazon S3, such as s3://awsexamplebucket/output/.

Image credits: https://aws.amazon.com/blogs/machine-learning/performing-batch-inference-with-tensorflow-serving-in-amazon-sagemaker/

The predictions in an output file are in the same order as the corresponding records in the input file. To combine the results of many output files into a single output file, set the AssembleWith parameter to Line.
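
Putting those parameters together, a batch transform job can be started with the SageMaker Python SDK roughly as follows; the model name and S3 prefixes are hypothetical placeholders.

```python
from sagemaker.transformer import Transformer

transformer = Transformer(
    model_name="coffee-demand-model",      # an existing SageMaker model (hypothetical)
    instance_count=1,
    instance_type="ml.m5.large",
    strategy="MultiRecord",                # BatchStrategy
    max_payload=6,                         # MaxPayloadInMB
    assemble_with="Line",                  # AssembleWith
    output_path="s3://awsexamplebucket/output/",
)

transformer.transform(
    data="s3://awsexamplebucket/input/",
    content_type="text/csv",
    split_type="Line",                     # SplitType
)
transformer.wait()
```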

For more information refer,

  1. https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html
  2. https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html
  3. https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-model-deployment.html#ex1-batch-transform

Q. How to select the right compute instance?

Choosing a compute instance purely on price or purely on compute power might not be the best option. For example, a cheaper compute instance might take about 30 minutes to finish a job, while a more powerful instance might take only 10 minutes; the second alternative can be better both economically and in terms of time.

Some points to remember while choosing between CPU and GPU:

  1. The CPU time grows proportional to the size of the matrix squared or cubed.
  2. The GPU time grows almost linearly with the size of the matrix for the sizes used in the experiment. It can add more compute cores to complete the computation in much shorter times than a CPU.
  3. Sometimes the CPU performs better than the GPU for these small sizes. In general, GPUs excel at large-scale problems.
  4. For larger problems, GPUs can offer speedups in the hundreds. For example, an application used for facial or object detection in an image or a video will need more compute, so GPUs might be a better solution (a rough timing sketch follows below).
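
As a rough illustration of the scaling argument above, the snippet below times CPU matrix multiplication with NumPy at a few sizes; the GPU comparison is only described in a comment, since it needs a GPU-enabled library and instance.

```python
import time
import numpy as np

# Rough illustration: on a CPU, matrix-multiplication time climbs steeply with size
for n in (256, 512, 1024, 2048):
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    start = time.perf_counter()
    np.dot(a, b)
    print(f"{n}x{n} matmul: {time.perf_counter() - start:.3f} s on CPU")

# On a GPU instance the same multiplications could be run with a library such as
# CuPy or PyTorch (torch.matmul on a CUDA tensor); for large matrices the runtime
# grows far more slowly, which is why GPUs win on large-scale problems.
```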

For more information, feel free to watch my session introducing algorithms and AWS SageMaker:

  1. https://vimeo.com/586886985/7faddfb340

For more information on SageMaker:

  1. https://aws.amazon.com/blogs/aws/sagemaker/

After considering the no/low-code solutions and the coding solutions, let's consider a use case.

If you have a relatively small business without much need for customisation, then no/low-code solutions may be enough. But if you want to customise your application, you will have to use coding solutions. A point to remember: depending on your dataset's size, diversity, and quality, you could go for either a CPU (less compute) or a GPU (more compute).
