Unlike the other AWS whitepapers for which I wrote summaries, this content builds on the AWS DevOps Monitoring Dashboard architecture diagram published on August 09, 2021. We will go through all the details and shed light on several facts, so that you, dear reader, get a complete grasp of the architecture under discussion.
The suggested approach is built to set up a DevOps reporting tool on the AWS cloud infrastructure by using AWS native tools. It automates the process of ingesting, analyzing, and visualizing continuous integration/continuous delivery (CI/CD) metrics.
If you are building sophisticated applications on AWS infrastructure using AWS native DevOps tools, you are probably going through lots of deployments. Thus, you need a solution that helps you visualize, track, and analyze your deployments throughout their DevOps lifecycle.
On the left side of the architecture, you can see the box labeled Customer AWS CI/CD Pipeline. It shows that the customer has implemented a full DevOps CI/CD pipeline with AWS dedicated DevOps tools.
AWS DevOps tools comprise a collection of services designed to work together so we can safely store and manage our application's source code, as well as automatically build, test, and deploy applications to AWS or to on-premises environments. In other words, AWS DevOps tools work in harmony to cover the entire software development lifecycle, from code reviews to deployment and monitoring.
1. AWS CodePipeline:
Is commonly viewed as the AWS equivalent of Jenkins (a CI/CD tool that can integrate with many cloud providers), but CodePipeline is specific to AWS. It is a fully managed, pay-as-you-go (PAYGO) continuous delivery service that automates the build, test, and deploy phases of your release pipeline for fast and dependable application delivery.
2. AWS CodeCommit:
As its name implies (from the Cambridge Dictionary, commit (verb): to actively put information in your memory or write it down), CodeCommit is a managed source control service that hosts private Git repositories where contributors commit their code. It is designed to be secure and highly scalable, and to enable teams to collaborate on code safely. Contributions are encrypted in transit and at rest, and there is no need to worry about the scalability or management of the source control system.
3. AWS CodeBuild:
Is an AWS service designed to build software packages that are ready to deploy. It is a fully managed, scalable continuous integration service that compiles source code, runs tests, and produces software packages. CodeBuild processes multiple builds concurrently, so your builds are not left waiting in a queue.
4. AWS CodeDeploy:
Is a fully managed, scalable deployment service used to automate software deployments while eliminating the need for error-prone manual operations. Deployments can target a variety of compute services (Amazon EC2, AWS Fargate, AWS Lambda, and on-premises servers).
In this paragraph, we will use the same architecture provided by AWS along with the same reference numbers. But since it looks crowded, we will "divide and conquer". In other words, we will break the architecture down into smaller sections for a clearer understanding and better visualization.
To build a monitoring dashboard in general, we need to go through a process that contains four main steps:
1. Tracking Event Sources
2. Gathering Data
3. Analyzing Gathered Information
4. Visualizing Analysis Results
Our architecture follows the same four-step philosophy. This is why we will divide it into two main sections:
Section 1: Tracking and Gathering Data (Figure a and Figure b)
Section 2: Analyzing Data and Visualizing Analysis Results (Figure c)
The figure below explains the process and shows which AWS services contribute to the success of the required task at each step.
PS: You can deploy this solution using the available AWS CloudFormation template (Infrastructure as Code).
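As a rough illustration of what "Infrastructure as Code" deployment looks like in practice, here is a minimal boto3 sketch. The stack name, template URL, and capability list are assumptions for illustration, not the solution's actual values.

```python
def build_stack_request(stack_name: str, template_url: str) -> dict:
    """Build the parameters for CloudFormation's create_stack call.

    Kept as a pure function so it can be inspected and tested
    without touching AWS.
    """
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,  # placeholder, not the real template URL
        "Capabilities": ["CAPABILITY_IAM"],  # the solution creates IAM roles
    }

def deploy_dashboard_stack(stack_name: str, template_url: str) -> str:
    """Create the CloudFormation stack and return its stack ID."""
    import boto3  # AWS SDK for Python; requires configured credentials
    cfn = boto3.client("cloudformation")
    response = cfn.create_stack(**build_stack_request(stack_name, template_url))
    return response["StackId"]
```

In a real deployment you would download the solution's template from AWS and pass its S3 URL to `deploy_dashboard_stack`.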
In this paragraph, we mainly focus on the sections responsible for information gathering.
While tracking an application deployed with AWS DevOps tools, there are two main parts to focus on:
1. CODE BUILD :
which announces whether the build SUCCEEDED or FAILED. (This is explained in SECTION 1.0.)
2. DEPLOYMENT :
After a successful code build, we have to know whether the deployment is SUCCESSFUL or NOT. In case of failure, we need to see the logs to figure out the cause of the issue. (This is detailed in SECTION 1.1.)
For those two major parts, we need an appropriate AWS service that alerts us about success or failure events and the data related to them. In fact, once a contributor from the development team initiates an activity in the AWS CI/CD pipeline, their actions need to be detected so we can visualize them on our DevOps Monitoring Dashboard. The convenient service for this task is Amazon CloudWatch.
What is Amazon CloudWatch?
It is a monitoring and observability service from AWS. It is also a metric repository that provides data and actionable insights in the form of events. For a better understanding, here is an Amazon CloudWatch use case:
You want to be notified with an SMS (perform an action) once the CPU utilization of an instance exceeds 60% (condition on metric is fulfilled). In this scenario, you use Amazon CloudWatch to watch the CPU utilization metric until it exceeds the 60% threshold. Once this happens, it is called an event, and the event initiates an action: it triggers Amazon SNS, which alerts you immediately with an SMS.
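The use case above can be sketched with boto3. This is a minimal illustration, assuming an existing EC2 instance and an SNS topic with an SMS subscription; the alarm name and period are arbitrary choices, not prescribed values.

```python
def build_cpu_alarm(instance_id: str, sns_topic_arn: str) -> dict:
    """Build the parameters for CloudWatch's put_metric_alarm call.

    The alarm watches average CPUUtilization and fires the SNS
    action once it crosses the 60% threshold from the text.
    """
    return {
        "AlarmName": f"high-cpu-{instance_id}",  # assumed naming scheme
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,              # evaluate over 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": 60.0,          # the 60% condition on the metric
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],  # SNS then delivers the SMS
    }

def create_cpu_alarm(instance_id: str, sns_topic_arn: str) -> None:
    """Register the alarm with CloudWatch."""
    import boto3  # AWS SDK for Python; requires configured credentials
    boto3.client("cloudwatch").put_metric_alarm(
        **build_cpu_alarm(instance_id, sns_topic_arn))
```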
Amazon CloudWatch is used to:
- Continuously stream important data, in real time, through Amazon Kinesis Data Firehose to Amazon S3 buckets. (Discussed in Section 1.0)
- Send data, in near real time, to Amazon EventBridge, then through Amazon Kinesis Data Firehose, and finally to Amazon S3 buckets. (Discussed in Section 1.1)
As we have already mentioned, Amazon CloudWatch continuously streams data events related to source code compilations, tests, and software packaging produced by AWS CodeBuild to Amazon Kinesis Data Firehose (view Step 1 and Step 2 in Figure (a)). The delivery is in near real time and with low latency.
At this point, Amazon Kinesis Data Firehose performs an ETL operation (Extract, Transform, Load), where a Lambda function handles the extraction and transformation.
In fact, each time Amazon Kinesis Data Firehose receives data in real time, the Lambda function is invoked for a few seconds to extract the relevant data for each metric and transform it into the convenient format.
Finally, Amazon Kinesis Data Firehose loads the data in real time into the Amazon S3 data lake for downstream processing. View Step 4 in Figure (a).
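To make the Firehose-plus-Lambda ETL step concrete, here is a sketch of a Firehose transformation Lambda. The handler contract (base64-encoded records in, `recordId`/`result`/`data` out) is the standard Firehose data-transformation model; the specific fields extracted (`project-name`, `build-status`, `time`) are assumptions for illustration, not the solution's actual schema.

```python
import base64
import json

def handler(event, context):
    """Firehose transformation Lambda: extract and reshape each record.

    Firehose delivers a batch of base64-encoded records; for each one
    we decode it, keep only the fields the dashboard needs, and return
    the slimmed record re-encoded, tagged "Ok" so Firehose loads it to S3.
    """
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Illustrative field names; the real event schema may differ.
        slim = {
            "project": payload.get("project-name"),
            "status": payload.get("build-status"),
            "time": payload.get("time"),
        }
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # tells Firehose the transform succeeded
            "data": base64.b64encode(
                (json.dumps(slim) + "\n").encode()).decode(),
        })
    return {"records": output}
```

The trailing newline keeps one JSON object per line in S3, which plays well with downstream SQL queries.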
In this second portion of our architecture, we can see how to detect any action performed by the development team on AWS CodeCommit, AWS CodeDeploy, and AWS CodePipeline. View Figure (b).
Once a developer commits code to AWS CodeCommit and deploys their application with AWS CodeDeploy, the actions on the predefined AWS event sources are detected by Amazon CloudWatch as events and then transferred to Amazon EventBridge (Step 1).
Amazon CloudWatch alarms also monitor the status of Amazon CloudWatch Synthetics canaries (Step 3). Canaries are customizable scripts that monitor endpoints and APIs on a regular basis. Even if there is no user activity on your applications, they perform the same actions as a customer to help you check your customer experience. By using canaries, you can detect problems before your consumers do.
PS: Step 2 and 4 are the same as the previous section.
What is Amazon EventBridge ?
It is a serverless event bus service that is used here to capture events from an Amazon CloudWatch alarm. We can consider Amazon EventBridge the successor of Amazon CloudWatch Events. In fact, it was formerly called Amazon CloudWatch Events and uses the same CloudWatch Events API. It is similar to CloudWatch Events but with additional features.
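To illustrate how EventBridge captures the deployment events described above, here is a minimal sketch of a rule that matches CodeDeploy deployment state changes. The `source` and `detail-type` values follow the documented CodeDeploy event schema; the rule name and the decision to match only SUCCESS and FAILURE are illustrative choices.

```python
import json

def codedeploy_event_pattern() -> str:
    """Return an EventBridge event pattern (JSON string) matching
    CodeDeploy deployment successes and failures."""
    return json.dumps({
        "source": ["aws.codedeploy"],
        "detail-type": ["CodeDeploy Deployment State-change Notification"],
        "detail": {"state": ["SUCCESS", "FAILURE"]},
    })

def create_deployment_rule(rule_name: str) -> None:
    """Register the rule on the default event bus."""
    import boto3  # AWS SDK for Python; requires configured credentials
    boto3.client("events").put_rule(
        Name=rule_name,
        EventPattern=codedeploy_event_pattern(),
    )
```

A target (for example, the Firehose delivery stream from Section 1.0) would then be attached to the rule with `put_targets` so matched events flow on to S3.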
In this paragraph, we will go through the final steps in the architecture. At this point, we have all the information gathered in an Amazon S3 bucket, in the appropriate format for analysis (thanks to the AWS Lambda function). Now, we have two more steps to go: analysis and visualization.
A detailed examination of the gathered information is performed by Amazon Athena. It is a serverless AWS tool that enables interactive queries and data analysis on large data sets stored in Amazon S3, using standard SQL. Analysis results are mostly delivered within a few seconds.
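As an example of the kind of standard SQL Athena runs over the S3 data lake, here is a sketch computing a per-project build success rate. The table name, column names, and output location are assumptions for illustration, not the solution's actual schema.

```python
def build_success_rate_query(table: str) -> str:
    """Return standard SQL computing the share of successful builds
    per project (column names are illustrative)."""
    return (
        f"SELECT project, "
        f"CAST(SUM(CASE WHEN status = 'SUCCEEDED' THEN 1 ELSE 0 END) AS DOUBLE) "
        f"/ COUNT(*) AS success_rate "
        f"FROM {table} "
        f"GROUP BY project"
    )

def run_query(table: str, output_s3: str) -> str:
    """Submit the query to Athena and return its execution ID."""
    import boto3  # AWS SDK for Python; requires configured credentials
    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString=build_success_rate_query(table),
        ResultConfiguration={"OutputLocation": output_s3},  # e.g. an s3:// URI
    )
    return response["QueryExecutionId"]
```

Athena queries are asynchronous: the returned execution ID is polled with `get_query_execution` until the query completes, and QuickSight can then read the results directly.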
To extract easy-to-understand insights, nothing is more suitable than Amazon QuickSight. It provides an interactive, visual dashboard while ensuring scalability. In this perspective, Amazon QuickSight is our window into deeper DevOps insights. Furthermore, thanks to Amazon QuickSight Q, management team members can ask questions about DevOps data in natural language (plain English) and receive accurate responses with relevant visualizations that make insights clearer.
Amazon QuickSight is an important asset for management team members. In fact, it describes DevOps insights in a very simple manner. Everyone on the management team should be able to understand the insights, even without data science or DevOps experience.