<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dorra ELBoukari</title>
    <description>The latest articles on DEV Community by Dorra ELBoukari (@dorraelboukari).</description>
    <link>https://dev.to/dorraelboukari</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F661986%2Febccbc37-5892-497d-9f9f-9fe11eafb305.jpg</url>
      <title>DEV Community: Dorra ELBoukari</title>
      <link>https://dev.to/dorraelboukari</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dorraelboukari"/>
    <language>en</language>
    <item>
      <title>Container Engine Vs Container Runtime</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Sat, 16 Jul 2022 17:23:41 +0000</pubDate>
      <link>https://dev.to/dorraelboukari/container-engine-vs-container-runtime-560f</link>
      <guid>https://dev.to/dorraelboukari/container-engine-vs-container-runtime-560f</guid>
      <description>&lt;p&gt;Over the last few days, I have been comparing container engines. I wanted to study popular container engines separately in order to highlight the vulnerabilities related to each product. To make an unbiased judgment and get a clear perspective, I went through a myriad of articles published online, and I noticed something strange: even well-experienced technical writers can be confused about the difference between a "Container Engine" and a "Container Runtime". Many use these two terms as synonyms, which is not the case.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container Runtime
&lt;/h3&gt;

&lt;p&gt;Let's put it this way: &lt;br&gt;
The container runtime can be considered the core component of a container engine. It is the beating heart that enables and initiates containerization. In other terms, without the container runtime, the container engine cannot communicate with the operating system, the containerization process is never launched, and the container is never brought to life. The container runtime is a low-level component that handles the tasks related to actually running a container: it mounts the container's filesystem and issues system calls (such as clone(), which creates new processes in a way similar to fork()) so that the kernel of the host operating system starts the container's processes.&lt;br&gt;
We can distinguish two types of container runtimes:&lt;br&gt;
&lt;strong&gt;CRI-Compliant Container Runtimes:&lt;/strong&gt; &lt;br&gt;
These are runtimes that support CRI (the Container Runtime Interface). CRI is the API that Kubernetes uses to manage container runtimes: it outlines how Kubernetes should communicate with a container runtime. Consequently, CRI is an interface that can be used with any supported runtime; containerd and CRI-O are examples of CRI-compliant container runtimes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OCI-Compliant Container Runtimes:&lt;/strong&gt;&lt;br&gt;
These are runtimes, such as runC, that obey the OCI (Open Container Initiative) standard. OCI is a framework that specifies how container images are organized. Because OCI images have a standard format, they can be run on any container runtime that supports OCI.&lt;/p&gt;
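
&lt;p&gt;To make the OCI idea concrete, here is a minimal sketch (in Python, not taken from any real image) of the kind of &lt;code&gt;config.json&lt;/code&gt; document an OCI-compliant runtime such as runC reads from a container bundle. The top-level field names follow the OCI runtime specification; the concrete values are placeholders:&lt;/p&gt;

```python
import json

def minimal_oci_config(args):
    """Build a bare-bones OCI runtime config.json as a Python dict.

    The top-level keys (ociVersion, process, root) come from the
    OCI runtime spec; the concrete values here are only placeholders.
    """
    return {
        "ociVersion": "1.0.2",
        "process": {
            "terminal": False,
            "user": {"uid": 0, "gid": 0},
            "args": args,          # command run inside the container
            "cwd": "/",
        },
        "root": {"path": "rootfs", "readonly": True},
        "hostname": "demo",
    }

config = minimal_oci_config(["sh"])
print(json.dumps(config, indent=2))
```

&lt;p&gt;Any runtime that understands this standard layout can start the container, which is exactly what makes OCI images portable across runtimes.&lt;/p&gt;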

&lt;h3&gt;
  
  
  Container Engine
&lt;/h3&gt;

&lt;p&gt;On the other hand, container engines are software programs that handle user input (for example, from a command-line interface), fetch images, and run containers. To fulfil some of its functionalities, a container engine uses a container runtime. In other words, the architecture of a container engine contains a container runtime along with other elements for networking, orchestration capabilities, etc.&lt;br&gt;
Some container runtimes, such as containerd, can be viewed as no more than low-level container engines with the most basic functionalities.&lt;/p&gt;
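
&lt;p&gt;A simple way to picture the engine/runtime split is that, after pulling the image and preparing the bundle, the engine ultimately delegates to the runtime. Here is a small illustrative sketch: &lt;code&gt;runc run&lt;/code&gt; is a real runC subcommand, but the container id and bundle path are hypothetical, and a real engine would also configure networking, cgroups, and so on before delegating:&lt;/p&gt;

```python
def runtime_invocation(container_id, bundle_dir):
    """Compose the argv a container engine could hand to an OCI runtime.

    'runc run --bundle DIR ID' is the real runC invocation shape; the
    id and path below are made up for illustration.
    """
    return ["runc", "run", "--bundle", bundle_dir, container_id]

argv = runtime_invocation("web-1", "/var/lib/demo/web-1")
print(" ".join(argv))
```

&lt;p&gt;The engine owns everything before this point (image fetching, user interaction); the runtime owns everything after it (the actual system calls that bring the container to life).&lt;/p&gt;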

&lt;h3&gt;
  
  
  Illustration:
&lt;/h3&gt;

&lt;p&gt;Here is a figure that illustrates how container engines work through a simplified example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gz06Tr2_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lcqbgzjnp6aw3zcivz9p.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gz06Tr2_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lcqbgzjnp6aw3zcivz9p.PNG" alt="Image description" width="800" height="592"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>containers</category>
      <category>containerd</category>
      <category>runc</category>
      <category>crio</category>
    </item>
    <item>
      <title>AWS DevOps Monitoring Dashboard | AWS Whitepaper Summary</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Mon, 20 Dec 2021 14:47:51 +0000</pubDate>
      <link>https://dev.to/awsmenacommunity/aws-devops-monitoring-dashboard-aws-whitepaper-summary-5d3c</link>
      <guid>https://dev.to/awsmenacommunity/aws-devops-monitoring-dashboard-aws-whitepaper-summary-5d3c</guid>
      <description>&lt;p&gt;Unlike the other AWS Whitepapers for which I wrote summaries, this content builds on an AWS DevOps Monitoring Dashboard architecture diagram published on August 9, 2021. We will go through all the details and shed light on several facts, so that you, dear reader, get a complete grasp of the architecture under discussion.&lt;/p&gt;

&lt;h2&gt;
  
  
  I. Architecture Description:
&lt;/h2&gt;

&lt;p&gt;The suggested approach sets up a DevOps reporting tool on AWS infrastructure using AWS-native tools. It automates the process of ingesting, analyzing, and visualizing continuous integration/continuous delivery (CI/CD) metrics. &lt;/p&gt;

&lt;h3&gt;
  
  
  I.1 Use Case of the discussed architecture:
&lt;/h3&gt;

&lt;p&gt;If you are building sophisticated applications on AWS infrastructure using AWS-native DevOps tools, you must be going through lots of deployments. Thus, you need a solution that helps you visualize, track, and analyze your deployments throughout their DevOps lifecycle.&lt;/p&gt;

&lt;h3&gt;
  
  
  I.2 AWS CI/CD pipeline
&lt;/h3&gt;

&lt;p&gt;On the left side of the architecture, you can see the box that represents the &lt;strong&gt;Customer AWS CI/CD Pipeline&lt;/strong&gt;. Here, the customer has implemented a full DevOps CI/CD pipeline with AWS's dedicated DevOps tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhjrpso0v89rqac8e12p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhjrpso0v89rqac8e12p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  I.3 AWS DevOps Tools
&lt;/h3&gt;

&lt;p&gt;AWS DevOps Tools comprise a collection of services designed to work together so we can safely store and manage our application's source code, as well as automatically build, test, and deploy applications to AWS or to on-premises environments. In other words, AWS DevOps tools work in harmony to cover the entire software development lifecycle, from code reviews to deployment and &lt;strong&gt;monitoring&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. AWS CodePipeline:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CodePipeline is commonly viewed as AWS's equivalent of Jenkins (a CI/CD tool that can integrate with many cloud providers), but CodePipeline is specific to AWS. It is a fully managed, pay-as-you-go (PAYGO) continuous delivery service that automates the build, test, and deploy phases of your release pipeline for fast and dependable application delivery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. AWS CodeCommit:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;As its name implies (from the Cambridge Dictionary, &lt;strong&gt;commit&lt;/strong&gt; (verb)&lt;strong&gt;: to actively put information in your memory or write it down&lt;/strong&gt;), CodeCommit is a managed source control service that hosts private Git repositories where contributors write down their code. It is designed to be secure and highly scalable, and to enable teams to collaborate on code safely: contributions are encrypted in transit and at rest. There is no need to worry about the scalability or the management of the source control system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. AWS CodeBuild:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;CodeBuild is an AWS service designed to &lt;strong&gt;build&lt;/strong&gt; software packages that are ready to deploy. It is a fully managed, scalable continuous integration service that compiles source code, runs tests, and produces software packages. CodeBuild processes multiple builds concurrently, so your builds are not left waiting in a queue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. AWS CodeDeploy:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CodeDeploy is a fully managed, scalable deployment service used to automate software deployments while eliminating the need for error-prone manual operations. Deployments can target a variety of AWS compute services (Amazon EC2, AWS Fargate, AWS Lambda) as well as on-premises servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nf00fn1oyolpz09nutq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nf00fn1oyolpz09nutq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  II. Walking Through The DevOps Monitoring Dashboard Architecture
&lt;/h2&gt;

&lt;p&gt;In this section, we will use the same architecture provided by AWS, along with the same reference numbers. But since it looks crowded, we will "divide and conquer": we will break the architecture down into smaller sections for a clearer understanding and better visualization.&lt;br&gt;
To build a monitoring dashboard in general, we need to go through a process that contains four main steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Tracking Event Sources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Gathering Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Analyzing Gathered Information&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Visualizing Analysis Results&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our architecture obeys the same philosophy of &lt;strong&gt;the four-step process&lt;/strong&gt;. This is why we will divide it into two main sections:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 1:&lt;/strong&gt; Tracking and Gathering Data (Figure &lt;strong&gt;a&lt;/strong&gt; and Figure &lt;strong&gt;b&lt;/strong&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 2:&lt;/strong&gt; Analyzing Data and Visualizing Analysis Results (Figure &lt;strong&gt;c&lt;/strong&gt;)&lt;/p&gt;

&lt;p&gt;The figure below explains the process and mentions which AWS services contribute to the success of the required task at each step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvzuogswispo8vc8aq8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvzuogswispo8vc8aq8c.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PS:&lt;/strong&gt; You can deploy this solution using the available AWS CloudFormation template (Infrastructure as Code).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh1x1l56ikl97zppavfu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh1x1l56ikl97zppavfu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  II.1 Section 1 : Tracking and Gathering Data
&lt;/h2&gt;

&lt;p&gt;In this section, we mainly focus on the parts responsible for information gathering. &lt;br&gt;
While tracking an application deployed with AWS DevOps tools, we have two main parts to shed light on:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. CODE BUILD:&lt;/strong&gt; &lt;br&gt;
which announces whether the build SUCCEEDED or FAILED. (This will be explained in &lt;strong&gt;SECTION 1.0&lt;/strong&gt;.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. DEPLOYMENT:&lt;/strong&gt;&lt;br&gt;
After a successful code build, we have to know whether the deployment is SUCCESSFUL or NOT. In case of failure, we need to see the logs to figure out the cause of the issue. (This is detailed in &lt;strong&gt;SECTION 1.1&lt;/strong&gt;.)&lt;/p&gt;

&lt;p&gt;Throughout those two major parts, we will need an appropriate AWS service to alert us about success or failure events and the data related to them. In fact, once a contributor (from the development team) initiates an activity in the AWS CI/CD pipeline, their actions need to be detected so we can visualize them on our DevOps monitoring dashboard. The convenient service for this task is &lt;strong&gt;Amazon CloudWatch&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Amazon CloudWatch?&lt;/strong&gt;&lt;br&gt;
It is AWS's monitoring and observability service. It is also a metric repository that provides data and actionable insights in the form of events. For a better understanding, here is an &lt;em&gt;Amazon CloudWatch use case:&lt;/em&gt;&lt;br&gt;
You want to be notified with an SMS (&lt;code&gt;perform an action&lt;/code&gt;) once the CPU utilization of an instance exceeds 60% (&lt;code&gt;condition on metric is fulfilled&lt;/code&gt;). In this scenario, you use Amazon CloudWatch to &lt;strong&gt;watch&lt;/strong&gt; the metric &lt;strong&gt;CPUUtilization&lt;/strong&gt; until it exceeds the &lt;strong&gt;threshold&lt;/strong&gt; of 60%. Once this happens, an &lt;strong&gt;event&lt;/strong&gt; is produced that initiates an &lt;strong&gt;action&lt;/strong&gt;: the event triggers Amazon SNS, which alerts you immediately with an SMS.&lt;/p&gt;
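
&lt;p&gt;This alarm scenario maps almost one-to-one onto the CloudWatch API. Here is a sketch (using boto3's real &lt;code&gt;put_metric_alarm&lt;/code&gt; parameter names, but with a made-up instance id and SNS topic ARN) of the request that implements the "CPU over 60%, then SMS" use case:&lt;/p&gt;

```python
def cpu_alarm_kwargs(instance_id, sns_topic_arn):
    """Keyword arguments for boto3's cloudwatch.put_metric_alarm().

    The field names are the real put_metric_alarm parameters; the
    instance id and SNS topic ARN passed in are placeholders.
    """
    return {
        "AlarmName": f"cpu-over-60-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                 # evaluate the metric every 5 minutes
        "EvaluationPeriods": 1,
        "Threshold": 60.0,             # the 60% threshold from the use case
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],   # SNS topic that sends the SMS
    }

kwargs = cpu_alarm_kwargs("i-0123456789abcdef0",
                          "arn:aws:sns:us-east-1:123456789012:alerts")
print(kwargs["AlarmName"])
# A real call would then be:
# boto3.client("cloudwatch").put_metric_alarm(**kwargs)
```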

&lt;p&gt;Amazon CloudWatch is used to:&lt;br&gt;
-Continuously stream important data, &lt;strong&gt;in real time&lt;/strong&gt;, through Amazon Kinesis Data Firehose to Amazon S3 buckets (discussed in &lt;strong&gt;Section 1.0&lt;/strong&gt;)&lt;br&gt;
-Send data, &lt;strong&gt;in near real time&lt;/strong&gt;, to Amazon EventBridge, then through Amazon Kinesis Data Firehose, and finally to Amazon S3 buckets (discussed in &lt;strong&gt;Section 1.1&lt;/strong&gt;)&lt;/p&gt;

&lt;h3&gt;
  
  
  II.1.0 Section 1.0
&lt;/h3&gt;

&lt;p&gt;As we have already mentioned, Amazon CloudWatch continuously streams data events related to source code compilations, tests, and software packaging produced by AWS CodeBuild to Amazon Kinesis Data Firehose -View Step &lt;strong&gt;1&lt;/strong&gt; and Step &lt;strong&gt;2&lt;/strong&gt; in &lt;strong&gt;Figure (a)&lt;/strong&gt;-. The delivery is in near real time and with low latency.&lt;br&gt;
At this point, Amazon Kinesis Data Firehose performs an ETL operation (&lt;strong&gt;E&lt;/strong&gt;xtract, &lt;strong&gt;T&lt;/strong&gt;ransform, &lt;strong&gt;L&lt;/strong&gt;oad) in which a Lambda function carries out the &lt;strong&gt;Extraction&lt;/strong&gt; and the &lt;strong&gt;Transformation&lt;/strong&gt;.&lt;br&gt;
In fact, each time Amazon CloudWatch detects data in real time, the Lambda function is activated for a few seconds to extract the relevant data for each metric and transform it into the convenient format.&lt;br&gt;
Finally, Amazon Kinesis Data Firehose &lt;strong&gt;loads&lt;/strong&gt; the data in real time into the &lt;strong&gt;Amazon S3 data lake&lt;/strong&gt; for downstream processing. View Step &lt;strong&gt;4&lt;/strong&gt; in &lt;strong&gt;Figure (a)&lt;/strong&gt;.&lt;/p&gt;
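
&lt;p&gt;To see what that transformation Lambda looks like in practice, here is a minimal sketch. The event and return shapes (&lt;code&gt;records&lt;/code&gt;, &lt;code&gt;recordId&lt;/code&gt;, &lt;code&gt;result&lt;/code&gt;, base64-encoded &lt;code&gt;data&lt;/code&gt;) are the real contract Kinesis Data Firehose uses for transformation Lambdas, but the specific fields extracted here are illustrative, not the solution's actual mapping:&lt;/p&gt;

```python
import base64
import json

def handler(event, context):
    """Firehose transformation Lambda: extract and reshape each record.

    Firehose batches records, base64-encodes each payload, and expects
    back a record list with the same recordIds and a result of 'Ok'.
    The fields kept below (CodeBuild-style project-name/build-status)
    are only an example of 'extracting the relevant data'.
    """
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        transformed = {
            "project": payload.get("detail", {}).get("project-name"),
            "status": payload.get("detail", {}).get("build-status"),
        }
        data = base64.b64encode(json.dumps(transformed).encode()).decode()
        output.append({"recordId": record["recordId"],
                       "result": "Ok",
                       "data": data})
    return {"records": output}
```

&lt;p&gt;Firehose then takes the returned records and loads them into the S3 data lake, completing the Extract-Transform-Load cycle described above.&lt;/p&gt;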

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluha6l3gg4msewa01pt9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluha6l3gg4msewa01pt9.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  II.1.1 Section 1.1
&lt;/h3&gt;

&lt;p&gt;In this second portion of our architecture, we can see how to detect any action performed by the development team on AWS CodeCommit, AWS CodeDeploy, and AWS CodePipeline. View &lt;strong&gt;Figure (b)&lt;/strong&gt;.&lt;br&gt;
Once a developer &lt;strong&gt;commits&lt;/strong&gt; code to &lt;strong&gt;AWS CodeCommit&lt;/strong&gt; and &lt;strong&gt;deploys&lt;/strong&gt; the application with &lt;strong&gt;AWS CodeDeploy&lt;/strong&gt;, the actions on the predefined AWS event sources are detected by Amazon CloudWatch as events and then transferred to &lt;strong&gt;Amazon EventBridge&lt;/strong&gt; (Step &lt;strong&gt;1&lt;/strong&gt;).&lt;br&gt;
Amazon CloudWatch alarms also monitor the status of Amazon CloudWatch Synthetics canaries (Step &lt;strong&gt;3&lt;/strong&gt;). &lt;strong&gt;Canaries&lt;/strong&gt; are configurable scripts that monitor endpoints and APIs on a regular basis. Even if you don't have any user activity on your applications, they perform the same actions as a customer, helping you check your customers' experience. By using canaries, you can detect problems before your customers do.&lt;br&gt;
&lt;strong&gt;PS:&lt;/strong&gt; Steps &lt;strong&gt;2&lt;/strong&gt; and &lt;strong&gt;4&lt;/strong&gt; are the same as in the previous section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Amazon EventBridge?&lt;/strong&gt;&lt;br&gt;
It is a serverless event bus service that can capture events from Amazon CloudWatch alarms. We can consider Amazon EventBridge the successor of CloudWatch Events: it was formerly called Amazon CloudWatch Events and uses the same CloudWatch Events API, but with additional features.&lt;/p&gt;
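
&lt;p&gt;EventBridge decides which pipeline activities to route using event patterns. As a sketch, the &lt;code&gt;source&lt;/code&gt; and &lt;code&gt;detail-type&lt;/code&gt; strings below are the actual values CodeCommit and CodeDeploy emit; attaching these patterns to rules (and the rules to targets such as Firehose) is omitted here:&lt;/p&gt;

```python
import json

# Event patterns matching the developer actions described above:
# a commit pushed to CodeCommit, and a deployment state change in CodeDeploy.
patterns = {
    "codecommit": {
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Repository State Change"],
    },
    "codedeploy": {
        "source": ["aws.codedeploy"],
        "detail-type": ["CodeDeploy Deployment State-change Notification"],
    },
}
print(json.dumps(patterns["codedeploy"]))
```

&lt;p&gt;Each pattern acts as a filter on the event bus: only events whose fields match are forwarded to the dashboard's ingestion pipeline.&lt;/p&gt;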

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqippoyqa62muuv20t38.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqippoyqa62muuv20t38.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  II.2 Section 2: Analyzing Data and Visualizing Analysis Results
&lt;/h2&gt;

&lt;p&gt;In this second part, we will go through the final steps of the architecture. At this point, we have all the information gathered in an Amazon S3 bucket, in the appropriate format for analysis (thanks to the AWS Lambda function). Now we have two more steps to go: &lt;strong&gt;analysis&lt;/strong&gt; and &lt;strong&gt;visualization&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Analysis:
&lt;/h3&gt;

&lt;p&gt;A detailed examination of the gathered information is performed by Amazon Athena, a serverless AWS tool for running interactive queries and analysis on large data sets stored in Amazon S3 using standard SQL. Analysis results are mostly delivered within a few seconds.&lt;/p&gt;
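
&lt;p&gt;Programmatically, an Athena query over the S3 data lake is a single API call. This sketch uses boto3's real &lt;code&gt;start_query_execution&lt;/code&gt; parameter names; the SQL, database name, and S3 output location are hypothetical stand-ins for the solution's actual tables:&lt;/p&gt;

```python
def athena_query_kwargs(query, database, output_s3):
    """Keyword arguments for boto3's athena.start_query_execution().

    QueryString / QueryExecutionContext / ResultConfiguration are the
    real API fields; the values passed in below are placeholders.
    """
    return {
        "QueryString": query,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

kwargs = athena_query_kwargs(
    "SELECT build_status, count(*) AS total "
    "FROM codebuild_metrics GROUP BY build_status",
    "devops_metrics_db",
    "s3://my-athena-results/devops/",
)
print(kwargs["QueryExecutionContext"]["Database"])
# A real call would be:
# boto3.client("athena").start_query_execution(**kwargs)
```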

&lt;h3&gt;
  
  
  2. Visualization:
&lt;/h3&gt;

&lt;p&gt;To extract easy-to-understand insights, nothing is more adequate than Amazon QuickSight. It provides an interactive, visual dashboard while ensuring scalability. In this perspective, Amazon QuickSight is our window to deeper DevOps insights. Furthermore, thanks to Amazon QuickSight Q, management team members can ask questions about DevOps data in &lt;strong&gt;natural language&lt;/strong&gt; (plain English) and receive accurate responses with relevant visualizations that make the insights clearer.&lt;br&gt;
Amazon QuickSight is an important asset for management team members: it describes DevOps insights in a very simple manner. Everyone on the management team should be able to understand the insights, even without data science or DevOps experience. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i84t8l46rnd3wxwjcwd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i84t8l46rnd3wxwjcwd.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloudwatch</category>
      <category>athena</category>
    </item>
    <item>
      <title>Serverless Architectures with AWS Lambda Summary | AWS Whitepaper Summary</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Mon, 15 Nov 2021 20:17:33 +0000</pubDate>
      <link>https://dev.to/awsmenacommunity/serverless-architectures-with-aws-lambda-summary-aws-whitepaper-summary-5c08</link>
      <guid>https://dev.to/awsmenacommunity/serverless-architectures-with-aws-lambda-summary-aws-whitepaper-summary-5c08</guid>
      <description>&lt;p&gt;AWS Lambda is a serverless computing service launched in 2014. It brought to existence a new architecture paradigm that doesn't rely on servers, and it has enabled faster development and experimentation compared to server-based architectures.&lt;br&gt;
This post summarizes the AWS Whitepaper entitled 'Serverless Architectures with AWS Lambda', released in 2017, to shed light on Lambda serverless compute concepts. We focus mainly on the compute layer of serverless applications, where the code is executed, as well as the AWS developer tools and services used for best practices. &lt;/p&gt;

&lt;h1&gt;
  
  
  What Is Serverless?
&lt;/h1&gt;

&lt;p&gt;Serverless literally means: "&lt;strong&gt;without the need to provision or manage any servers&lt;/strong&gt;". A serverless platform is responsible for:&lt;br&gt;
1. Server management:&lt;br&gt;
Provisioning, installing and patching software, OS patching, etc.&lt;br&gt;
2. Flexible scaling:&lt;br&gt;
Applications are scaled automatically or by adjusting capacity through units of consumption (throughput, memory, etc.)&lt;br&gt;
3. High availability:&lt;br&gt;
Serverless applications have built-in availability and fault tolerance.&lt;br&gt;
4. No idle capacity:&lt;br&gt;
There is no charge when your code isn’t running.&lt;/p&gt;

&lt;p&gt;Here is a list of the different AWS services that can be used in a serverless application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda:&lt;/strong&gt; Compute &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon API Gateway:&lt;/strong&gt; APIs &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 (Amazon Simple Storage Service):&lt;/strong&gt; Storage &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon DynamoDB :&lt;/strong&gt; Databases &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon SNS (Simple Notification Service) and Amazon SQS (Simple Queue Service):&lt;/strong&gt; Interprocess messaging &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Step Functions and Amazon CloudWatch Events:&lt;/strong&gt; Orchestration &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Kinesis:&lt;/strong&gt; Analytics
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;PS&lt;/strong&gt;: Here is an additional list of AWS Serverless services that were not mentioned in the original document.&lt;/p&gt;

&lt;p&gt;-&lt;strong&gt;AWS Redshift Spectrum:&lt;/strong&gt; Analytics and interactive querying on S3&lt;br&gt;
-&lt;strong&gt;Amazon Athena:&lt;/strong&gt; Interactive querying on S3 with schema-on-read technology&lt;br&gt;
-&lt;strong&gt;Amazon Aurora Serverless:&lt;/strong&gt; Databases&lt;br&gt;
-&lt;strong&gt;AWS Fargate:&lt;/strong&gt; Containerization&lt;br&gt;
-&lt;strong&gt;Amazon QuickSight:&lt;/strong&gt; Analytics and visualization&lt;br&gt;
-&lt;strong&gt;Amazon Cognito:&lt;/strong&gt; Authentication, authorization and user management&lt;br&gt;
-&lt;strong&gt;AWS KMS:&lt;/strong&gt; Key management&lt;br&gt;
-&lt;strong&gt;AWS Glue:&lt;/strong&gt; ETL tool (Extract, Transform and Load)&lt;br&gt;
-&lt;strong&gt;Amazon EventBridge:&lt;/strong&gt; Builds event-driven architectures&lt;br&gt;
-&lt;strong&gt;Amazon AppSync:&lt;/strong&gt; Create, publish and monitor secure GraphQL APIs and subscriptions&lt;/p&gt;

&lt;h1&gt;
  
  
  AWS Lambda - The Basics
&lt;/h1&gt;

&lt;p&gt;Lambda is a high-scale, provision-free serverless compute service. It is a FaaS (Function-as-a-Service) offering that scales precisely with the size of the workload: it runs more copies of the function in parallel in case of multiple simultaneous events, to scale your code with high availability. AWS Lambda enables building reactive, event-driven applications, since a function only runs when triggered by an event. This reduces idle time and wasted capacity.&lt;br&gt;
Each Lambda function contains:&lt;br&gt;
-The &lt;strong&gt;Function Code&lt;/strong&gt; that you want to execute&lt;br&gt;
-The &lt;strong&gt;Function Configuration&lt;/strong&gt; that defines &lt;strong&gt;how&lt;/strong&gt; your code is executed&lt;br&gt;
-(Optional) The &lt;strong&gt;Event Sources&lt;/strong&gt; that &lt;strong&gt;detect events&lt;/strong&gt; and &lt;strong&gt;invoke your function&lt;/strong&gt; as they occur. For example, API Gateway (an event source) invokes a Lambda function when an API method created with API Gateway receives an HTTPS request.&lt;br&gt;
You don't need to write any code to integrate an event source with your Lambda function, or to manage infrastructure or scaling. Once you configure an event source for your function, your code is invoked when the event occurs.&lt;br&gt;
Also, it's a natural fit to build microservices using Lambda functions, thanks to the inherent decoupling that is enforced in serverless applications by integrating Lambda functions and event sources.&lt;/p&gt;
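
&lt;p&gt;To ground these pieces, here is the smallest possible function code: a Python handler that receives the event object and context object from the runtime. The event shape (&lt;code&gt;{"name": ...}&lt;/code&gt;) is invented for the example:&lt;/p&gt;

```python
import json

def handler(event, context):
    """A minimal Lambda handler: the runtime calls it with the event
    object (a source-specific payload) and the context object."""
    name = event.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"})}

# Invoked locally for illustration (the context is unused here):
print(handler({"name": "Dorra"}, None))
```

&lt;p&gt;Everything else (how much memory it gets, which event source invokes it) lives in the function configuration and event source mappings, not in the code.&lt;/p&gt;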

&lt;h1&gt;
  
  
  AWS Lambda - Diving Deeper
&lt;/h1&gt;

&lt;p&gt;This section provides a further explanation of AWS Lambda's components mentioned above.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lambda Function Code
&lt;/h3&gt;

&lt;p&gt;This is the code you will run on AWS Lambda. AWS Lambda natively supports languages including Java, Go, PowerShell, Node.js, C#, Python, and Ruby. It also supports libraries, artifacts and compiled native binaries.&lt;br&gt;
&lt;strong&gt;AWS SAM Local&lt;/strong&gt;: A set of tools used to compile and test the components you plan to run inside of Lambda, within a matching environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Function Code Package:
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;Function Code Package&lt;/strong&gt; contains all of the assets you want to have available locally upon execution of your code (additional files, classes and libraries to be imported, binaries to be executed, or configuration files that your code might reference upon invocation).&lt;br&gt;
While creating a Lambda function (through the AWS Management Console or the CreateFunction API), and even while publishing updated code to an existing Lambda function (through the UpdateFunctionCode API), you can upload the code package directly or refer to the S3 bucket and object key where the package is uploaded.&lt;br&gt;
At minimum, the &lt;strong&gt;Function Code Package&lt;/strong&gt; includes the &lt;strong&gt;function code&lt;/strong&gt; to be executed when your function is invoked.&lt;/p&gt;
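
&lt;p&gt;As a sketch of the S3-reference variant, here is the shape of a CreateFunction request (boto3's &lt;code&gt;create_function&lt;/code&gt;) pointing at a code package already uploaded to S3. The &lt;code&gt;Code.S3Bucket&lt;/code&gt;/&lt;code&gt;Code.S3Key&lt;/code&gt; fields are the real API shape; the bucket, key, role ARN, and function name are placeholders:&lt;/p&gt;

```python
def create_function_kwargs(name, bucket, key):
    """Keyword arguments for boto3's lambda client create_function(),
    referencing a code package stored in S3 instead of uploading it
    directly. All concrete values below are hypothetical.
    """
    return {
        "FunctionName": name,
        "Runtime": "python3.9",
        "Role": "arn:aws:iam::123456789012:role/lambda-demo-role",
        "Handler": "app.handler",   # module.function inside the package
        "Code": {"S3Bucket": bucket, "S3Key": key},
    }

kwargs = create_function_kwargs("demo-fn", "my-code-bucket", "pkg/demo.zip")
print(kwargs["Code"]["S3Key"])
# A real call would be:
# boto3.client("lambda").create_function(**kwargs)
```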

&lt;h4&gt;
  
  
  The Handler:
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;handler&lt;/strong&gt; is the method (Java, C#) or function (Node.js, Python) in your &lt;strong&gt;function code&lt;/strong&gt; that processes events. It is the first thing that executes when a Lambda function is invoked.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Event Object:
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;event object&lt;/strong&gt; is one of the parameters provided to the &lt;strong&gt;handler function&lt;/strong&gt;. It includes all of the data and metadata that the Lambda function needs. The event object differs in structure and contents depending on which event source created it.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Context Object:
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;context object&lt;/strong&gt; allows your function code to interact with the Lambda execution environment. Its contents and structure vary depending on the language runtime used by the Lambda function, but at minimum it will contain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS RequestId&lt;/strong&gt;: Used to track specific invocations of a Lambda function (important for error reporting)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remaining time&lt;/strong&gt;: The amount of time in milliseconds that remains before your function timeout occurs (maximum 300 seconds)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging&lt;/strong&gt;: Information about which CloudWatch Logs stream your log statements will be sent to&lt;/li&gt;
&lt;/ul&gt;
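
&lt;p&gt;In the Python runtime these appear as attributes and methods on the context argument. The sketch below uses the runtime's real names (&lt;code&gt;aws_request_id&lt;/code&gt;, &lt;code&gt;get_remaining_time_in_millis()&lt;/code&gt;) but substitutes a hand-rolled stand-in object so it can run locally:&lt;/p&gt;

```python
class FakeContext:
    """Local stand-in for the real Lambda context object. The attribute
    and method names mirror the Python runtime's context; the values
    are dummies for illustration."""
    aws_request_id = "00000000-0000-0000-0000-000000000000"
    log_stream_name = "2021/11/15/[$LATEST]demo"

    def get_remaining_time_in_millis(self):
        return 300000  # 300 seconds, the whitepaper-era maximum timeout

def handler(event, context):
    # Use the context to correlate output with one specific invocation
    # and to check how close the function is to its timeout.
    return {"request_id": context.aws_request_id,
            "ms_left": context.get_remaining_time_in_millis()}

print(handler({}, FakeContext()))
```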

&lt;h4&gt;
  
  
  Writing Code for AWS Lambda—Statelessness and Reuse
&lt;/h4&gt;

&lt;p&gt;While writing your code for Lambda, it's important to understand that your code cannot assume that state will be preserved from one invocation to the next.&lt;br&gt;
However, each time a function container is created and invoked, it remains active and available for subsequent invocations for at least a few minutes before it is terminated. We can define:&lt;br&gt;
-&lt;strong&gt;Warm container&lt;/strong&gt;: when subsequent invocations occur on a container that has already been active and invoked at least once before&lt;br&gt;
-&lt;strong&gt;Cold start&lt;/strong&gt;: when an invocation requires your function code package to be created and a container to be invoked for the first time&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4frxcwi7rcnmw2cc2px.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4frxcwi7rcnmw2cc2px.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fig(1):Invocations of warm function containers and cold function containers&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Lambda Function Event Sources
&lt;/h3&gt;

&lt;p&gt;Event sources are the triggers that invoke your AWS Lambda function's code through the Invoke API. You don't have to write, scale, or maintain any of the software that integrates the triggers with your Lambda function.&lt;/p&gt;

&lt;h4&gt;
  
  
  Invocation Patterns:
&lt;/h4&gt;

&lt;p&gt;There are two models for invoking a Lambda function: &lt;br&gt;
-&lt;strong&gt;Push Model&lt;/strong&gt;: The Lambda function is invoked every time a particular event occurs within another AWS service.&lt;br&gt;
-&lt;strong&gt;Pull Model&lt;/strong&gt;: Lambda polls a data source and invokes your function with the data it retrieves.&lt;br&gt;
A Lambda function can be executed synchronously or asynchronously through the &lt;strong&gt;InvocationType&lt;/strong&gt; parameter. This parameter has three possible values: &lt;strong&gt;RequestResponse&lt;/strong&gt; to execute &lt;strong&gt;synchronously&lt;/strong&gt;, &lt;strong&gt;Event&lt;/strong&gt; to execute &lt;strong&gt;asynchronously&lt;/strong&gt;, and &lt;strong&gt;DryRun&lt;/strong&gt; to verify that the invocation is permitted for the caller without executing the function.&lt;/p&gt;
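A small Python sketch of the three InvocationType values, assuming the parameter names used by boto3's Lambda client `invoke()` call; the helper function and the function name "my-function" are invented for illustration.

```python
# Hedged sketch: building Invoke API parameters for each invocation model.
# The keys match boto3's lambda client.invoke(); "my-function" is made up.
def invoke_params(function_name, mode, payload=b"{}"):
    modes = {
        "sync": "RequestResponse",   # synchronous: wait for the result
        "async": "Event",            # asynchronous: queue the event and return
        "dryrun": "DryRun",          # permission check only, no execution
    }
    return {
        "FunctionName": function_name,
        "InvocationType": modes[mode],
        "Payload": payload,
    }

# With boto3 this would be used roughly as:
#   import boto3
#   boto3.client("lambda").invoke(**invoke_params("my-function", "async"))
```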

&lt;h3&gt;
  
  
  Lambda Function Configuration
&lt;/h3&gt;

&lt;p&gt;This section is about the various configuration options that define how your code is executed within Lambda. &lt;/p&gt;

&lt;h4&gt;
  
  
  Function Memory
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Function memory&lt;/strong&gt; defines the resources allocated to your executing Lambda function; you increase or decrease them by adjusting the memory (RAM) setting. Selecting the appropriate memory allocation lets you optimize both the price and the performance of a Lambda function.&lt;/p&gt;

&lt;h4&gt;
  
  
  Versions and Aliases:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Versioning&lt;/strong&gt; is possible for AWS Lambda functions. Every Lambda function has a built-in default version, $LATEST, which addresses the most recently uploaded code. Each version has its own Amazon Resource Name (ARN).&lt;br&gt;
PS: When calling the Invoke API or creating an event source for your Lambda function, you can specify a particular version of the function; otherwise, $LATEST is invoked by default.&lt;br&gt;
Each Lambda function container is specific to a particular version of your function; a different set of containers is installed and managed for each function version.&lt;/p&gt;
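A toy Python sketch of how a version number or alias qualifies a function ARN; the account ID, region, alias names, and version numbers are all made up for the example.

```python
# Illustrative sketch of qualified function ARNs; everything below
# (account, region, aliases, versions) is invented for the example.
BASE_ARN = "arn:aws:lambda:us-east-1:123456789012:function:my-function"

def qualified_arn(base_arn, qualifier="$LATEST"):
    # Invoking the bare ARN targets $LATEST; appending a version number
    # or alias name after a colon targets that specific version.
    return f"{base_arn}:{qualifier}"

# An alias is a named pointer to one published version.
aliases = {"prod": "3", "staging": "4"}

def resolve_alias(base_arn, alias):
    return qualified_arn(base_arn, aliases[alias])
```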

&lt;p&gt;Invoking your Lambda functions by their version numbers is useful during testing activities, but it is not recommended for real application traffic, because it requires updating all of the triggers and clients invoking your Lambda function to point at a new function version each time you update your code. Lambda aliases can be used instead to represent your Lambda function version (live/prod/active), to enable the blue/green deployment pattern, and for debugging, for example when an alias is integrated with a testing stack.&lt;/p&gt;


&lt;h4&gt;
  
  
  IAM Role
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Lambda Function Permissions
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Network Configuration
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Environment Variables
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Dead Letter Queues
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Timeout
&lt;/h4&gt;

&lt;h1&gt;
  
  
  Serverless Best Practices
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Serverless Architecture Best Practices
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Security Best Practices
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Reliability Best Practices
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Performance Efficiency Best Practices
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Operational Excellence Best Practices
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Cost Optimization Best Practices
&lt;/h4&gt;

&lt;h3&gt;
  
  
  Serverless Development Best Practices
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Infrastructure as Code – the AWS Serverless Application Model (AWS SAM)
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Local Testing – AWS SAM Local
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Coding and Code Management Best Practices
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Testing
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Continuous Delivery
&lt;/h4&gt;

&lt;h1&gt;
  
  
  Sample Serverless Architectures
&lt;/h1&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>lambda</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>AWS Redshift (Part 1)</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Fri, 12 Nov 2021 15:03:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-redshift-part-1-4nnk</link>
      <guid>https://dev.to/aws-builders/aws-redshift-part-1-4nnk</guid>
      <description>&lt;p&gt;As an AWS solutions architect, you must set up a solution that helps the data analysts in your company to process large historical data for some released products. The data scientists and the developers suggest collecting all the results of the queries for additional analytics with Amazon EMR, Athena and SageMaker. What AWS solution can you use in this context?&lt;/p&gt;

&lt;p&gt;To answer this question, you need first to know what type of database you are dealing with.&lt;/p&gt;

&lt;p&gt;Generally, we can classify databases into two groups according to the processing approach they use, which determines the type of data we can eventually extract:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. On-Line Transactional Processing databases (OLTP):
&lt;/h4&gt;

&lt;p&gt;Like RDS, these databases handle a high transaction volume of simple and short queries. OLTP databases rely on four main operations: Create, Read, Update and Delete.&lt;/p&gt;

&lt;p&gt;For example, with RDS you can CREATE a table containing products and their corresponding prices, READ the content of the table, UPDATE the names or the prices of the products, and DELETE a product that you will no longer sell to customers.&lt;/p&gt;
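The four CRUD operations on such a products table can be sketched with Python's built-in sqlite3 (a local stand-in here, not RDS); the table and column names are invented for the example.

```python
# The four OLTP operations (CRUD) sketched with Python's built-in sqlite3;
# the products table and its columns are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO products VALUES ('laptop', 999.0)")              # CREATE
conn.execute("UPDATE products SET price = 899.0 WHERE name = 'laptop'")    # UPDATE
row = conn.execute("SELECT price FROM products WHERE name = 'laptop'")     # READ
price = row.fetchone()[0]
conn.execute("DELETE FROM products WHERE name = 'laptop'")                 # DELETE
remaining = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
```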

&lt;h4&gt;
  
  
  2. On-Line Analytical Processing Databases (OLAP):
&lt;/h4&gt;

&lt;p&gt;These databases handle a relatively low transaction volume of sophisticated, long-running queries that require aggregations. OLAP DBs are used mainly for analytics.  &lt;/p&gt;

&lt;p&gt;Given the previous definitions, it is obvious that an OLAP database is required in our context. An example of an OLAP database on AWS is Redshift.&lt;/p&gt;

&lt;p&gt;Redshift is fully managed by AWS. It is a petabyte-scale data warehouse service.&lt;/p&gt;

&lt;p&gt;Unlike RDS and many other OLTP databases, which store data in rows, Redshift stores data in columns. It also uses advanced compression and massively parallel processing of data. This makes it up to ten times faster than row-oriented SQL databases.&lt;/p&gt;
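A toy Python illustration (not Redshift's actual internals) of why column-oriented storage helps analytics: an aggregate over one column reads a single contiguous list instead of visiting every row.

```python
# Toy illustration of row vs column orientation; the data is made up.
row_store = [
    {"product": "a", "price": 10.0, "qty": 2},
    {"product": "b", "price": 20.0, "qty": 1},
]
# The same table, column-oriented: each column is one contiguous list.
col_store = {
    "product": ["a", "b"],
    "price": [10.0, 20.0],
    "qty": [2, 1],
}
# Row store: every row record is visited even though only 'price' is needed.
avg_row = sum(r["price"] for r in row_store) / len(row_store)
# Column store: only the 'price' column is read.
avg_col = sum(col_store["price"]) / len(col_store["price"])
```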

&lt;p&gt;Redshift helps you report on, visualize, and analyze collected data. You can save the results of your queries to an S3 data lake so you can do additional analytics with services provided by AWS like Athena and SageMaker.&lt;/p&gt;

&lt;p&gt;Although Redshift is fully managed by AWS, it is set up in ONLY ONE Availability Zone and cannot handle large data ingestion in real time.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>redshift</category>
      <category>database</category>
    </item>
    <item>
      <title>Amazon EKS Distro (EKS-D)</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Fri, 12 Nov 2021 14:58:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-eks-distro-eks-d-1jkk</link>
      <guid>https://dev.to/aws-builders/amazon-eks-distro-eks-d-1jkk</guid>
      <description>&lt;p&gt;Since June 2018, AWS has provided Amazon Elastic Kubernetes Service (EKS) to its customers. It is an upstream and certified conformant version of Kubernetes. This service helped to manage containerized workloads and services in the AWS Cloud and in on-premises. Amazon EKS have always guaranteed scalability, reliability, performance and high availability.&lt;/p&gt;

&lt;p&gt;This service has satisfied many users, who have enjoyed applying it efficiently to their projects.&lt;/p&gt;

&lt;p&gt;On the 1st of December 2020, AWS announced its new service, Amazon EKS Distro (EKS-D), to the audience interested in Kubernetes, the portable, extensible and open-source orchestration platform. As everyone was curious about this concept, a myriad of questions emerged: What is EKS-D? Why did Amazon create this product? What is the advantage of EKS-D?&lt;/p&gt;

&lt;p&gt;To answer those questions, we first have to explain the meaning of "Kubernetes distribution" to avoid any confusion.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a Kubernetes Distribution?
&lt;/h3&gt;

&lt;p&gt;The Cloud Native Computing Foundation (CNCF) defined this term a long time ago as the pieces that an end user needs to install and run Kubernetes on the public cloud or on premises. Here is a spreadsheet that details Kubernetes Distributions and Platforms: &lt;a href="https://docs.google.com/spreadsheets/d/1LxSqBzjOxfGx3cmtZ4EbB_BGCxT_wlxW_xgHVVa23es/edit#gid=0"&gt;https://docs.google.com/spreadsheets/d/1LxSqBzjOxfGx3cmtZ4EbB_BGCxT_wlxW_xgHVVa23es/edit#gid=0&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is EKS-D?
&lt;/h3&gt;

&lt;p&gt;EKS-D is a Kubernetes distribution that is based on Amazon EKS and holds the same benefits as its 'ancestor'. The word 'ancestor' fits here because EKS-D is clearly an evolution of the Amazon EKS service. But it is more sophisticated, since it creates reliable and secure clusters to host Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why EKS Distro?
&lt;/h3&gt;

&lt;p&gt;Amazon EKS is convenient for many users, but not all users can take advantage of it. To explain that, you have to consider the Amazon EKS responsibility model in the figure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c2t-F9mq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2i6slf8d0or89lpio9n0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c2t-F9mq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2i6slf8d0or89lpio9n0.png" alt="Image description" width="763" height="663"&gt;&lt;/a&gt;&lt;br&gt;
AWS wants to simplify Kubernetes management for customers who may not find the right approach to leverage their applications. Customers should spend minimal time operating Kubernetes; instead, they need to focus on their business. This is the reason why Amazon EKS takes responsibility for Tactical Operations. This sounds great, but in fact it deprives a considerable number of customers of using Amazon EKS.&lt;/p&gt;

&lt;p&gt;Some users, for example, need to apply their custom tools to the control plane because their applications require customization of the control plane flags. Another category of customers may have specific security patches to apply according to their compliance requirements. Others have a wide variety of computing requirements (hardware, CPU, environment, etc.).&lt;/p&gt;

&lt;p&gt;Those considerable requirements led to the appearance of EKS Distro. It aims to help users get consistent Kubernetes builds and a more reliable and secure distribution for an extended number of versions. Customers can now run Kubernetes on their own self-provisioned hardware infrastructure, on bare metal, or in a cloud environment.&lt;/p&gt;

&lt;p&gt;For more details about the subject, visit:&lt;br&gt;
 &lt;a href="https://aws.amazon.com/eks/eks-distro/"&gt;https://aws.amazon.com/eks/eks-distro/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Exploring OpenStack through MicroStack : Installing MicroStack</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Tue, 19 Oct 2021 21:28:40 +0000</pubDate>
      <link>https://dev.to/dorraelboukari/exploring-openstack-through-microstack-installing-microstack-16i6</link>
      <guid>https://dev.to/dorraelboukari/exploring-openstack-through-microstack-installing-microstack-16i6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zimNka-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rcaffl116emtl8ib29l.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zimNka-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rcaffl116emtl8ib29l.jpeg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;1. What is OpenStack?&lt;br&gt;
    1.1 About OpenStack versions&lt;br&gt;
    1.2 What is an NFV?&lt;/p&gt;

&lt;p&gt;2. What is MicroStack? Why MicroStack?&lt;br&gt;
2.1 Why MicroStack for our LAB?&lt;br&gt;
2.2 What MicroStack requires&lt;br&gt;
2.3 Why MicroStack is agile&lt;/p&gt;

&lt;p&gt;3. Installation in Action&lt;br&gt;
3.1 Virtual Machine Characteristics&lt;br&gt;
3.1.1 RAM requirements&lt;br&gt;
3.1.2 Hard disk requirements&lt;br&gt;
3.1.3 Processing requirements&lt;br&gt;
3.2 Installing OS&lt;br&gt;
3.3 Installing MicroStack&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;What is OpenStack?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OpenStack is a cloud framework that provides an agile scale-out infrastructure. It can power standard cloud services like compute, network, and storage resource provisioning, as well as self-service automation.&lt;br&gt;
   It is an open source project that helps to build a private or public cloud. It has been deployed by thousands in the open source community as a set of software components that provide services for the cloud infrastructure. It is built entirely on open industry standards and APIs, which makes it highly adaptable. No more worries about vendor lock-in.&lt;/p&gt;

&lt;p&gt;1.1. &lt;strong&gt;About OpenStack versions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the time of writing this briefing, the vast majority of customers are still using or migrating to version 13. This is because the newest versions of OpenStack do not support all the Network Functions Virtualization features that are already supported by the older versions (10 and 13).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P_5Yaalk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmm4d5m044rdcyelu2fd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P_5Yaalk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmm4d5m044rdcyelu2fd.png" alt="Image description" width="710" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1.2 &lt;strong&gt;What is an NFV?&lt;/strong&gt;&lt;br&gt;
Network functions virtualization (NFV) is the virtualization of network devices.&lt;br&gt;
Networking devices (such as switches, routers, and firewalls) ordinarily run on dedicated hardware. Through NFV, the services provided by network devices can be packaged as virtual machines, so the network can run on standard servers instead of physical hardware devices. This improves scalability and agility by allowing service providers to shape their network easily with no need for additional resources.&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;What is MicroStack? Why MicroStack?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As its name implies, MicroStack is OpenStack for micro clouds. It is a single- or multi-node OpenStack deployment that provides small-scale, cost-efficient private cloud infrastructure.&lt;br&gt;
“MicroStack enables enterprises to quickly deploy cost-efficient private cloud infrastructure from single-node installations to micro cloud clusters.” [1]&lt;br&gt;
A single-node deployment means that it can run on one workstation; the core services are included and located on a single node. &lt;/p&gt;

&lt;p&gt;2.1 &lt;strong&gt;Why MicroStack for our LAB?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;As a one-node OpenStack deployment, MicroStack is a good tool for deploying OpenStack at a small scale.&lt;/li&gt;
&lt;li&gt;MicroStack has a &lt;strong&gt;straightforward installation&lt;/strong&gt;
Link:
&lt;a href="https://microstack.run/?_ga=2.26389432.404340272.1634561755-1799874885.1632764872&amp;amp;_gac=1.250131700.1634069838.Cj0KCQjw5JSLBhCxARIsAHgO2SepSJ03AZTrqTGHC28cEQYEOFOUCa1e7VM_83hB-0T4rUZqfmZgFOoaArN-EALw_wcB"&gt;https://microstack.run/?_ga=2.26389432.404340272.1634561755-1799874885.1632764872&amp;amp;_gac=1.250131700.1634069838.Cj0KCQjw5JSLBhCxARIsAHgO2SepSJ03AZTrqTGHC28cEQYEOFOUCa1e7VM_83hB-0T4rUZqfmZgFOoaArN-EALw_wcB&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;2.2 &lt;strong&gt;What MicroStack requires:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A multi-core CPU&lt;/li&gt;
&lt;li&gt;8 GB RAM&lt;/li&gt;
&lt;li&gt;100 GB storage
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5FBVbbNy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0lt9xe2ic0ty0snbgnh.png" alt="Image description" width="568" height="179"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2.3 &lt;strong&gt;Why MicroStack is agile:&lt;/strong&gt;&lt;br&gt;
It does NOT require many resources, and it can be installed on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workstations&lt;/li&gt;
&lt;li&gt;Edge devices&lt;/li&gt;
&lt;li&gt;Build clusters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. &lt;strong&gt;Installation in Action&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As mentioned in paragraph 2.2, MicroStack has requirements in terms of memory, processing, and storage; we apply those to our virtual machine.&lt;br&gt;
3.1 &lt;strong&gt;Virtual Machine Characteristics&lt;/strong&gt;&lt;br&gt;
3.1.1 &lt;strong&gt;RAM requirements&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dKuEOlkO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6l44b13pg9plly5ziz9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dKuEOlkO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6l44b13pg9plly5ziz9d.png" alt="Image description" width="676" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.1.2 &lt;strong&gt;Hard disk requirements&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_EmqCNM7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aytob2nu1wfeehmwbfq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_EmqCNM7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aytob2nu1wfeehmwbfq1.png" alt="Image description" width="690" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.1.3 &lt;strong&gt;Processing requirements&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zGpjwq5X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3lfkmxqusg5b1nquf4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zGpjwq5X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3lfkmxqusg5b1nquf4b.png" alt="Image description" width="720" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.2 &lt;strong&gt;Installing OS&lt;/strong&gt;&lt;br&gt;
The OS used is the Ubuntu 64-bit release of 04.2021&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YH792j1v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yfwr2k2qrj6aybsmvt7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YH792j1v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yfwr2k2qrj6aybsmvt7d.png" alt="Image description" width="800" height="674"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.3 &lt;strong&gt;Installing MicroStack&lt;/strong&gt;&lt;br&gt;
The following command&lt;br&gt;
 &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QQt0xV0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ihs7ol798x7f6x97wqej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QQt0xV0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ihs7ol798x7f6x97wqej.png" alt="Image description" width="547" height="35"&gt;&lt;/a&gt;&lt;br&gt;
Gives:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Hh8_KQx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lom0zkrcu981spbz5dz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Hh8_KQx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lom0zkrcu981spbz5dz.png" alt="Image description" width="781" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To see the information about the installed snap write:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UOQMkjn7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cu4w4308lvio2z0lcvxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UOQMkjn7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cu4w4308lvio2z0lcvxf.png" alt="Image description" width="528" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This gives:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bOQmeHGS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqt6ubl665mxytvuj8he.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bOQmeHGS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqt6ubl665mxytvuj8he.png" alt="Image description" width="572" height="83"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Microstack Ussuri is installed successfully&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;3.4 &lt;strong&gt;Initializing MicroStack&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5kP9Th-M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l7dnb042n954omx6spi3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5kP9Th-M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l7dnb042n954omx6spi3.png" alt="Image description" width="721" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hrdvb3WO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9u6ezgo9zw61e1gt3vv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hrdvb3WO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9u6ezgo9zw61e1gt3vv.png" alt="Image description" width="716" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JLgpMlCJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mfi2zcnnkvk50nkhtdws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JLgpMlCJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mfi2zcnnkvk50nkhtdws.png" alt="Image description" width="714" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.5 &lt;strong&gt;Interacting with MicroStack&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In this step, we generate the password of the admin user, through which we will interact with our single-node OpenStack. 
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WoYBLSZv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nc41l1613f5x27gz1yvm.png" alt="Image description" width="717" height="58"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To interact with your cloud via the web UI, visit &lt;a href="http://10.20.20.1/"&gt;http://10.20.20.1/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8ic842K8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a2pcb4ziwylsgk5lokmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8ic842K8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a2pcb4ziwylsgk5lokmq.png" alt="Image description" width="728" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your private cloud is now ready to manage in terms of Compute, Volume, and Network (IaaS)&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eQ1tArf2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avyh0aawzo7mhmsl6ldr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eQ1tArf2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avyh0aawzo7mhmsl6ldr.png" alt="Image description" width="729" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Bibliography:
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;[1] Last seen at 05:06  18,October 2021&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://microstack.run/?_ga=2.26389432.404340272.1634561755-1799874885.1632764872&amp;amp;_gac=1.250131700.1634069838.Cj0KCQjw5JSLBhCxARIsAHgO2SepSJ03AZTrqTGHC28cEQYEOFOUCa1e7VM_83hB-0T4rUZqfmZgFOoaArN-EALw_wcB"&gt;https://microstack.run/?_ga=2.26389432.404340272.1634561755-1799874885.1632764872&amp;amp;_gac=1.250131700.1634069838.Cj0KCQjw5JSLBhCxARIsAHgO2SepSJ03AZTrqTGHC28cEQYEOFOUCa1e7VM_83hB-0T4rUZqfmZgFOoaArN-EALw_wcB&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;[2] Last seen at 19:40  18,October 2021&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://ubuntu.com/openstack/install/"&gt;https://ubuntu.com/openstack/install/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>openstack</category>
      <category>opensource</category>
      <category>privatecloud</category>
    </item>
    <item>
      <title>Amazon EKS Distro (EKS-D)</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Sat, 04 Sep 2021 14:24:00 +0000</pubDate>
      <link>https://dev.to/dorraelboukari/amazon-eks-distro-eks-d-3cim</link>
      <guid>https://dev.to/dorraelboukari/amazon-eks-distro-eks-d-3cim</guid>
      <description>&lt;p&gt;Since June 2018, AWS has provided Amazon Elastic Kubernetes Service (EKS) to its customers. It is an upstream and certified conformant version of Kubernetes. This service helped to manage containerized workloads and services in the AWS Cloud and in on-premises. Amazon EKS have always guaranteed scalability, reliability, performance and high availability.&lt;/p&gt;

&lt;p&gt;This service has satisfied many users, who have enjoyed applying it efficiently to their projects.&lt;/p&gt;

&lt;p&gt;On the 1st of December 2020, AWS announced its new service, Amazon EKS Distro (EKS-D), to the audience interested in Kubernetes, the portable, extensible and open-source orchestration platform. As everyone was curious about this concept, a myriad of questions emerged: What is EKS-D? Why did Amazon create this product? What is the advantage of EKS-D?&lt;/p&gt;

&lt;p&gt;To answer those questions, we first have to explain the meaning of "Kubernetes distribution" to avoid any confusion.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is a Kubernetes Distribution?
&lt;/h1&gt;

&lt;p&gt;The Cloud Native Computing Foundation (CNCF) defined this term a long time ago as the pieces that an end user needs to install and run Kubernetes on the public cloud or on premises. Here is a spreadsheet that details Kubernetes Distributions and Platforms: &lt;a href="https://docs.google.com/spreadsheets/d/1LxSqBzjOxfGx3cmtZ4EbB_BGCxT_wlxW_xgHVVa23es/edit#gid=0" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What is EKS-D?
&lt;/h1&gt;

&lt;p&gt;EKS-D is a Kubernetes distribution that is based on Amazon EKS and holds the same benefits as its 'ancestor'. The word 'ancestor' fits here because EKS-D is clearly an evolution of the Amazon EKS service. But it is more sophisticated, since it creates reliable and secure clusters to host Kubernetes.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why EKS Distro?
&lt;/h1&gt;

&lt;p&gt;Amazon EKS is convenient for many users, but not all users can take advantage of it. To explain that, you have to consider the Amazon EKS responsibility model in the figure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19phvfzo4ap2t75a92fh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19phvfzo4ap2t75a92fh.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS wants to simplify Kubernetes management for customers who may not find the right approach to leverage their applications. Customers should spend minimal time operating Kubernetes; instead, they need to focus on their business. This is the reason why Amazon EKS takes responsibility for &lt;em&gt;Tactical Operations&lt;/em&gt;. This sounds great, but in fact it deprives a considerable number of customers of using Amazon EKS.&lt;/p&gt;

&lt;p&gt;Some users, for example, need to apply their custom tools to the control plane because their applications require customization of the control plane flags. Another category of customers may have specific security patches to apply according to their compliance requirements. Others have a wide variety of computing requirements (hardware, CPU, environment, etc.).&lt;/p&gt;

&lt;p&gt;Those considerable requirements led to the appearance of EKS Distro. It aims to help users get consistent Kubernetes builds and a more reliable and secure distribution for an extended number of versions. Customers can now run Kubernetes on their own self-provisioned hardware infrastructure, on bare metal, or in a cloud environment.&lt;/p&gt;

&lt;p&gt;For more details about the subject, visit: &lt;a href="https://aws.amazon.com/eks/eks-distro/" rel="noopener noreferrer"&gt;https://aws.amazon.com/eks/eks-distro/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Automated Archival for Amazon Redshift | AWS White Paper Summary</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Sun, 08 Aug 2021 16:22:24 +0000</pubDate>
      <link>https://dev.to/awsmenacommunity/automated-archival-for-amazon-redshift-4fc0</link>
      <guid>https://dev.to/awsmenacommunity/automated-archival-for-amazon-redshift-4fc0</guid>
      <description>&lt;p&gt;Since its appearance, AWS provided a variety of database services to help users manage their data according to their needs. In AWS, you can run OLAP DBs as well as OLTP DBs .&lt;br&gt;
The paper provides a further explanation of the whitepaper entitled “Automated Archival for Amazon Redshift” published In July 2021.It will shed the light on AWS Redshift service and its specifications especially the automated archival.&lt;/p&gt;

&lt;h1&gt;
  
  
  OLAP databases VS OLTP databases
&lt;/h1&gt;

&lt;p&gt;These two types of databases rely on different processing systems. Depending on the kind of information you want to extract from your database, you generally select one of the two categories. In fact:&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;OLAP (OnLine Analytical Processing)&lt;/strong&gt;: Used for business intelligence and any other operation that requires complex analytical calculations. The data mostly comes from a data warehouse. This process is ideal for reporting and analyzing historical data. Many businesses rely on this type of database to get a clear view of their budgeting and sales forecasting and to track the success rate of released products.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;OLTP (OnLine Transactional Processing)&lt;/strong&gt;: Used for a high volume of simple transactions and short queries. It relies mainly on the four operations that can be performed on a database (CRUD: CREATE, READ, UPDATE, DELETE). Businesses rely on this category to get detailed, current data from organized tables.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is a Data Warehouse?
&lt;/h1&gt;

&lt;p&gt;A data warehouse is any aggregation of considerable amounts of data from different sources for the sake of analytics. Those sources can be internal (within your own organization, like marketing) or external (like customer categories, system partners, etc.). A data warehouse centralizes historical data into one consistent repository of information and is meant to support analytics on hundreds of gigabytes to petabytes of data.&lt;/p&gt;

&lt;h1&gt;
  
  
  OLAP usage in Data Warehouse
&lt;/h1&gt;

&lt;p&gt;OLAP is a system dedicated to performing multi-dimensional analysis on considerable volumes of data. It is ideal for querying and analyzing data warehouses, running complex queries that examine data from different perspectives.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Amazon Redshift?
&lt;/h1&gt;

&lt;p&gt;Amazon Redshift is a data warehouse service provided by AWS. It is fully managed and can analyze a petabyte of data or more. It enables AWS users to run complex queries that involve aggregation over historical data rather than real-time data. These analytics are crucial for business reporting and visualization purposes, helping managers gain clear insights into the evolution of the business.&lt;/p&gt;

&lt;h1&gt;
  
  
  Automated Archival for Amazon Redshift
&lt;/h1&gt;

&lt;p&gt;In this section, we discuss the architecture illustrated in Figure (a), which automates the periodic data archival process for an Amazon Redshift database. We will go through each step and explain the ambiguous ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dAFZXXf7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/al8rk73cdogm442dw627.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dAFZXXf7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/al8rk73cdogm442dw627.PNG" alt="Alt Text" width="608" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1)    Data is ingested into the Amazon Redshift cluster at various frequencies:&lt;/strong&gt;&lt;br&gt;
 Data ingestion is the transportation of data from assorted sources (Amazon S3 via the COPY command, Amazon EMR, DynamoDB, AWS Database Migration Service, or AWS Data Pipeline) into the data warehouse, here the Amazon Redshift cluster. Each dataset has its own ‘frequency of ingestion’, which defines how often it is loaded.&lt;/p&gt;
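&lt;p&gt;As a rough, hypothetical sketch of one such ingestion load (the table, bucket, and IAM role names below are made up), the COPY statement that pulls a dataset from Amazon S3 into Redshift can be assembled like this:&lt;/p&gt;

```python
# Minimal sketch of building a Redshift COPY statement for one ingestion
# load. All identifiers (table, bucket, IAM role) are hypothetical.

def build_copy_statement(table: str, s3_prefix: str, iam_role: str) -> str:
    """Return a COPY statement that loads CSV files from S3 into a table."""
    return (
        f"COPY {table} "
        f"FROM '{s3_prefix}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV;"
    )

sql = build_copy_statement(
    table="sales_2021",
    s3_prefix="s3://example-ingest-bucket/sales/2021/",
    iam_role="arn:aws:iam::123456789012:role/example-redshift-load",
)
```

&lt;p&gt;The resulting statement would then be submitted to the cluster, for example through the Redshift Data API; that call is intentionally left out here.&lt;/p&gt;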

&lt;p&gt;&lt;strong&gt;2.a)  After every ingestion load, the process creates a queue of metadata about tables populated into Amazon Redshift tables in Amazon RDS:&lt;/strong&gt;&lt;br&gt;
In order to keep clear visibility of the data stored in the data warehouse, the process creates a queue of metadata for every ingestion load and stores it in Amazon RDS (Relational Database Service). This archived metadata contains various information about the tables populated into Amazon Redshift, such as Table Name, Cluster, Region, Processed Date, Archival Date, etc.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;2.b) Data Engineers may also create the archival queue manually, if needed.&lt;/strong&gt;&lt;/p&gt;
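&lt;p&gt;To make the queue concrete, here is a minimal sketch of what one metadata row could look like. The field names follow the attributes listed above; the values and table names are hypothetical:&lt;/p&gt;

```python
# Hypothetical sketch of one metadata-queue row written after an ingestion
# load; in the architecture above it would be inserted into an Amazon RDS
# table that the archival process later reads.
from datetime import date

metadata_row = {
    "table_name": "sales_2021",            # table populated in Redshift
    "cluster": "example-redshift-cluster",
    "region": "us-east-1",
    "processed_date": date(2021, 7, 1).isoformat(),
    "archival_date": date(2021, 10, 1).isoformat(),  # when archival is due
}
```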

&lt;p&gt;&lt;strong&gt;3)    Using Amazon EventBridge, an AWS Lambda function is triggered periodically to read the queue from the RDS table and create an Amazon SQS message for every period due for archival. The user may choose a single SQS queue or an SQS queue per schema based on the volume of tables.&lt;/strong&gt;&lt;br&gt;
Amazon EventBridge is a serverless event bus used to build event-driven applications. Here, EventBridge initiates an AWS Lambda function daily, weekly, or monthly to get the tables due for archival from the Amazon RDS table mentioned in step (2.a).&lt;br&gt;
&lt;strong&gt;4)    A proxy Lambda function de-queues the Amazon SQS messages and for every message invokes AWS Step Functions.&lt;/strong&gt;&lt;br&gt;
The proxy Lambda function maps each Amazon SQS message to the corresponding AWS Step Functions execution. &lt;br&gt;
AWS Step Functions is a low-code visual workflow service used to orchestrate AWS services. It manages failures, retries, parallelization, service integrations, and observability.&lt;br&gt;
&lt;strong&gt;5)    AWS Step Functions unloads the data from the Amazon Redshift cluster into an Amazon S3 bucket for the given table and period.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;6)    Amazon S3 Lifecycle configuration moves data in buckets from S3 Standard storage class to S3 Glacier storage class after 90 days.&lt;/strong&gt;&lt;br&gt;
The S3 lifecycle is defined according to the S3 retention policy. The transition to Amazon S3 Glacier happens after 90 days (which is the minimum storage duration charge of S3 Glacier). S3 Glacier is deep, long-term, durable archive storage that preserves the data until it is deleted, whereas S3 Standard is not a long-term storage class. &lt;br&gt;
&lt;strong&gt;7)    Amazon S3 inventory tool generates manifest files from the Amazon S3 bucket dedicated for cold data on a daily basis and stores them in an S3 bucket for manifest files.&lt;/strong&gt;&lt;br&gt;
S3 Inventory is a tool provided by AWS to help manage the Simple Storage Service. In this context, S3 Inventory sits between standard S3 storage and S3 Glacier: it keeps track of the cold data being archived and generates daily manifest files listing the objects stored in the cold-data bucket.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;8)    Every time an inventory manifest file is created in a manifest S3 bucket, an AWS Lambda function is triggered through an Amazon S3 Event Notification.&lt;/strong&gt;&lt;br&gt;
The manifest files created by the S3 Inventory tool are stored in Amazon S3. The arrival of a new manifest file in the S3 bucket produces an S3 Event Notification, which initiates an AWS Lambda function. &lt;br&gt;
&lt;strong&gt;9)    A Lambda function normalizes the manifest file for easy consumption in the event of restore.&lt;/strong&gt;&lt;br&gt;
The triggered Lambda function normalizes the inventory data.&lt;br&gt;
&lt;strong&gt;10)   The data stored in the S3 bucket used for cold data can be queried using Amazon Redshift Spectrum.&lt;/strong&gt;&lt;br&gt;
 Amazon Redshift Spectrum can query the data directly from the files in Amazon S3, using the normalized manifests.&lt;/p&gt;
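&lt;p&gt;The 90-day transition in step 6 is expressed as an S3 Lifecycle rule. Below is a minimal sketch of such a rule as a boto3-style configuration dictionary; the rule ID, prefix, and bucket name are hypothetical, and the API call itself is only shown in a comment:&lt;/p&gt;

```python
# Sketch of an S3 Lifecycle rule moving objects from S3 Standard to
# S3 Glacier after 90 days. Rule ID and prefix are hypothetical.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-cold-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "cold-data/"},
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# Applying it would look roughly like this (not executed here):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-archive-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```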

&lt;h1&gt;
  
  
  Reference:
&lt;/h1&gt;

&lt;p&gt;Automated Archival for Amazon Redshift whitepaper, published in July 2021: &lt;a href="https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/automated-archival-for-amazon-redshift-ra.pdf?did=wp_card&amp;amp;trk=wp_card"&gt;https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/automated-archival-for-amazon-redshift-ra.pdf?did=wp_card&amp;amp;trk=wp_card&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>amazonredshift</category>
      <category>lamdba</category>
    </item>
    <item>
      <title>Architecting Amazon EKS for PCI DSS Compliance Summary | AWS Whitepaper Summary</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Mon, 19 Jul 2021 20:55:42 +0000</pubDate>
      <link>https://dev.to/awsmenacommunity/architecting-amazon-eks-for-pci-dss-compliance-summary-20ko</link>
      <guid>https://dev.to/awsmenacommunity/architecting-amazon-eks-for-pci-dss-compliance-summary-20ko</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes8rt2itq3g7shv07hp0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes8rt2itq3g7shv07hp0.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
It was in 2013 that I first heard about PCI DSS compliance, after the consecutive, massive credit-card data breaches in the US. I was 16 years old, and I was excited to learn how the breaches happened and what ‘PCI DSS compliance’ even means. Today, with a clearer view of the AWS technologies on this subject, I selected this whitepaper to enlighten every curious person about the data security standard for payment cards. This paper, written by two senior solutions architects, Arindam Chatterji and Tim Sills, outlines the best practices for configuring Amazon Elastic Kubernetes Service with the AWS Fargate or Amazon Elastic Compute Cloud (Amazon EC2) launch types for the Payment Card Industry Data Security Standard (PCI DSS). It also provides various solutions to mitigate security risks while using the covered AWS services.&lt;br&gt;
    This document is dedicated to anyone involved in projects where AWS is used for PCI DSS compliance.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Payment Card Industry Data Security Standard (PCI DSS)?
&lt;/h1&gt;

&lt;p&gt;• Provides technical and operational guidance on securing payment card processing environments&lt;br&gt;
• Entities that store, process, or transmit cardholder data (CHD) must be PCI DSS certified, proving that they followed the policies, procedures, guidelines, and best practices required to build a cardholder data environment (CDE) &lt;/p&gt;

&lt;h1&gt;
  
  
  AWS for PCI DSS Compliance
&lt;/h1&gt;

&lt;p&gt;• AWS provides many services that meet PCI DSS compliance&lt;br&gt;
• AWS Artifact: a central resource for compliance-related information. It can be accessed by companies, on demand, to reduce compliance efforts. The services provided are containerized by AWS, so companies take advantage of platform independence, deployment speed, and resource efficiency.&lt;br&gt;
&lt;strong&gt;PS:&lt;/strong&gt; A service being listed as PCI DSS compliant doesn’t make a customer’s environment compliant by default. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8b7pvi5za5ng0hsj8j3u.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8b7pvi5za5ng0hsj8j3u.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PCI DSS compliance status of AWS Services&lt;/strong&gt;&lt;br&gt;
• AWS is a Level 1 PCI DSS Service Provider: AWS customers can more easily meet compliance requirements.&lt;/p&gt;

&lt;p&gt;• Cardholder data provided by the customer includes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Primary Account Numbers (PAN)&lt;/li&gt;
&lt;li&gt; Sensitive Authentication Data (SAD)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;• The annually updated PCI DSS assessment includes physical security requirements for AWS data centers.&lt;br&gt;
&lt;strong&gt;AWS Shared Responsibility model&lt;/strong&gt;&lt;br&gt;
Security and compliance responsibilities are shared between AWS and the customer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz92zqq5x7lwzn3nbvpzx.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz92zqq5x7lwzn3nbvpzx.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;• AWS: Security, management, and control of the AWS Cloud infrastructure (hardware, software, networking, and facilities)&lt;/p&gt;

&lt;p&gt;• Customer: Security of all the system components and services provisioned on AWS (included in or connected to the customer’s CDE), like access control, log settings, encryption, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PS:&lt;/strong&gt; The division of responsibilities depends on the AWS service selected by the customer. For example:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkacq488jviwv2b9pxoj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkacq488jviwv2b9pxoj.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PCI DSS scope determination and validation&lt;/strong&gt;&lt;br&gt;
The cardholder data flow determines:&lt;br&gt;
• Applicability of PCI DSS&lt;br&gt;
• Scope of PCI DSS; the boundaries and components of the CDE&lt;br&gt;
-The customer must have a procedure for PCI DSS scope determination to ensure its completeness and to detect changes and violations of the scope.&lt;br&gt;
-The steps that comprise PCI DSS scope identification are:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6dx96u7kj2ddp7pfxfr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6dx96u7kj2ddp7pfxfr.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PS:&lt;/strong&gt;   Customers need to be aware of container configuration parameters through all the phases of a container lifecycle to ensure the satisfaction of the compliance requirements.&lt;/p&gt;

&lt;h1&gt;
  
  
  Securing an Amazon EKS Deployment
&lt;/h1&gt;

&lt;p&gt;While architecting a container-based environment for PCI DSS compliance, you have to follow the best-practice recommendations for these key topics:&lt;br&gt;
• Network segmentation&lt;br&gt;
• Host and container image hardening&lt;br&gt;
• Data protection&lt;br&gt;
• Restricting user access&lt;br&gt;
• Event logging&lt;br&gt;
• Vulnerability scanning and penetration testing&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Segmentation (Requirement N°1):&lt;/strong&gt;&lt;br&gt;
 PCI DSS doesn’t require network segmentation, but it helps to reduce the scope of the customer’s environment.&lt;br&gt;
• &lt;strong&gt;VPC, subnets and security groups&lt;/strong&gt; provide logical isolation of CDE-related resources.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fias48jgt8gg5lbuxa3df.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fias48jgt8gg5lbuxa3df.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To enforce your VPC’s network policy, you can use Calico, an open-source policy engine from Tigera. It works well with Amazon EKS, supports extended network policies, and can be integrated with a service mesh.&lt;br&gt;
• &lt;strong&gt;Security groups&lt;/strong&gt; act as a virtual firewall and provide stateful inspection; they restrict communications by IP address, port, and protocol. They are used by Amazon EKS to control the traffic between the Kubernetes control plane and the cluster's worker nodes.&lt;br&gt;
&lt;strong&gt;PS:&lt;/strong&gt; It is strongly recommended that you use a dedicated security group for each control plane (one per cluster).&lt;br&gt;
• Individual AWS accounts for PCI DSS provide the highest level of segmentation boundaries on the AWS platform. Their resources are logically isolated from other accounts.&lt;/p&gt;

&lt;p&gt;• To isolate containerized application communications, you need to:&lt;/p&gt;

&lt;p&gt;1)  Isolate pods on separate nodes based on the sensitivity of services and isolate CDE workloads in a separate cluster with a dedicated Security group. &lt;/p&gt;

&lt;p&gt;2)  Use AWS security groups to limit communication between nodes and control plane and external communications. &lt;/p&gt;

&lt;p&gt;3)  Implement micro-segmentation with Kubernetes network policies and consider the usage of the service mesh, Networking and Cryptography library (NaCI) encryption and Container Network Interfaces (CNIs) to limit and secure communications. &lt;/p&gt;

&lt;p&gt;4)  Implement a network segmentation and tenant isolation network policy. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Host and image hardening (Requirement N°2)&lt;/strong&gt;&lt;br&gt;
Host and image hardening helps minimize attack vectors, for example by not relying on vendor-supplied defaults for security parameters. In particular:&lt;br&gt;
• Customers should create trusted base container images that have been assessed and confirmed to use patched libraries and applications. Use a trusted registry to secure container images, such as Amazon Elastic Container Registry (Amazon ECR). Amazon ECR provides image scanning based on the Common Vulnerabilities and Exposures (CVE) database and can identify common software vulnerabilities.&lt;br&gt;
• A container-optimized Amazon Machine Image (AMI) contains only the libraries essential for deployments. Non-essential services and libraries should be disabled or removed.&lt;br&gt;
• Container builds should be kept small and should adopt a microservices model in which each container provides one primary function. &lt;br&gt;
• It is recommended to use a special-purpose operating system (OS) like Bottlerocket, which offers a reduced attack surface, a disk image that is verified on boot, and permission boundaries enforced using SELinux.&lt;br&gt;
• Establish configuration standards aligned with industry-accepted system hardening guidelines.&lt;br&gt;
&lt;strong&gt;Data protection (Requirements N°3 and 4)&lt;/strong&gt;&lt;br&gt;
This concerns the PCI DSS requirements to protect sensitive data at rest and in transit. AWS offers PCI DSS compliant services and features to assist with these compliance efforts.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Protect data at rest:&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Store all the sensitive data of PCI DSS workloads in secure stores or databases, NOT on the container host.&lt;br&gt;
• Consider using AWS Key Management Service (AWS KMS) for secure encryption key storage, access controls, and annual key rotation.&lt;br&gt;
• Use AWS Secrets Manager and AWS Systems Manager Parameter Store to keep sensitive data out of container build files.&lt;/p&gt;
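&lt;p&gt;As an illustration of the last point, a container can fetch its credentials at runtime instead of baking them into the image. The sketch below is hypothetical (the secret name is made up, and the AWS call is only shown in a comment); it demonstrates the pattern of parsing the JSON string that Secrets Manager returns for key/value secrets:&lt;/p&gt;

```python
# Pattern sketch: read secrets at runtime so they never appear in the
# container build files. Secret name and payload are hypothetical.
import json

def parse_secret(secret_string: str) -> dict:
    # Secrets Manager returns key/value secrets as a JSON string.
    return json.loads(secret_string)

# Inside the container at runtime, roughly (not executed here):
# import boto3
# resp = boto3.client("secretsmanager").get_secret_value(
#     SecretId="example/cde/db-credentials"
# )
# creds = parse_secret(resp["SecretString"])

# Offline demonstration with a fake payload:
creds = parse_secret('{"username": "app", "password": "not-in-the-image"}')
```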

&lt;p&gt;&lt;em&gt;Protect data in transit:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;PCI DSS urges the encryption of sensitive data during transmission over open, public networks. Customers are responsible for configuring strong cryptography and security controls.&lt;br&gt;
• Consider a variety of AWS services like Amazon API Gateway and Application Load Balancer&lt;br&gt;
• Encryption in transit for inter-pod communication can also be implemented with a service mesh like AWS App Mesh with support for mTLS.&lt;br&gt;
• Use envelope encryption of Kubernetes secrets in EKS to add a customer-managed layer of encryption for application secrets or user data that is stored within a Kubernetes cluster. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Other protection measures:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;• Restrict access to authorized personnel (Requirements N°7 and 8), grant least privilege, and authenticate with strong authentication requirements that align with the PCI DSS. &lt;br&gt;
• Run containers with non-privileged user accounts and restrict all access to container images.&lt;br&gt;
• Consider disabling the use of secure shell (SSH) and instead leverage AWS Systems Manager Run Command.&lt;br&gt;
• Require that users sign in to the Amazon EKS cluster with an IAM identity (either an IAM user or an IAM role). &lt;br&gt;
• Create the cluster with a dedicated IAM role, which should be regularly audited. &lt;br&gt;
• Make the Amazon EKS cluster endpoint private. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tracking and monitoring access (Requirement N°10)&lt;/strong&gt;&lt;br&gt;
This is about the use of event logs to track suspicious activities and even anticipate possible threats.  So:&lt;br&gt;
• EKS Cluster audit logs need to be enabled as well as VPC Flow Logs, Amazon CloudWatch and Amazon Kinesis&lt;br&gt;
• CloudWatch dashboard should be configured to monitor and alert on all captured event log activity&lt;br&gt;
• Captured event data has to be stored securely within encrypted Amazon S3 buckets, where it can be analyzed with Amazon Athena and Amazon CloudWatch Logs Insights.&lt;br&gt;
• Amazon GuardDuty provides threat detection.&lt;br&gt;
&lt;strong&gt;Network Intrusion detection (Requirement N°11)&lt;/strong&gt;&lt;br&gt;
• Monitor all traffic at the perimeter and at critical points of the CDE&lt;br&gt;
• Use network inspection options outside of the container host on AWS, such as:&lt;br&gt;
&lt;strong&gt;Amazon GuardDuty:&lt;/strong&gt; a managed service that provides threat detection across multiple AWS data sources to identify threats.&lt;br&gt;
&lt;strong&gt;Amazon VPC Traffic Mirroring:&lt;/strong&gt; supports a traditional IDS/IPS solution.&lt;br&gt;
&lt;strong&gt;Virtual IDS/IPS device from the AWS Marketplace:&lt;/strong&gt; helps inspect in-transit traffic. You can use a VPC gateway to route all traffic to on-premises IDS/IPS infrastructure.&lt;br&gt;
&lt;strong&gt;Vulnerability scanning and penetration testing (Requirement N°11.2)&lt;/strong&gt;&lt;br&gt;
The aim is to test systems and processes regularly to identify and fix vulnerabilities.&lt;br&gt;
• Penetration testing is to be performed on an annual basis and after any significant environmental change.&lt;br&gt;
• Penetration testing of AWS resources is allowed at any time for certain permitted services, within the perimeter of the AWS penetration testing policy.&lt;br&gt;
• PCI DSS provides guidance and methodologies for performing penetration testing; the approach depends on the customer’s environment.&lt;br&gt;
• When deploying Amazon EKS on Amazon EC2 instances, customers must perform vulnerability scanning of the underlying hosts.&lt;br&gt;
• Amazon Inspector is a security assessment tool that helps identify vulnerabilities and prioritizes findings by severity.&lt;br&gt;
• The Center for Internet Security (CIS) Kubernetes Benchmark provides guidance for Amazon EKS node security configurations. &lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;AWS provides a convenient infrastructure for customers to address PCI DSS requirements for their containerized workloads. Various security measures are ready to use in order to reduce management complexities for the users.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudnative</category>
      <category>kubernetes</category>
      <category>pcidss</category>
    </item>
    <item>
      <title>Docker On AWS | AWS Whitepaper Summary</title>
      <dc:creator>Dorra ELBoukari</dc:creator>
      <pubDate>Sun, 11 Jul 2021 15:54:16 +0000</pubDate>
      <link>https://dev.to/awsmenacommunity/docker-on-aws-nji</link>
      <guid>https://dev.to/awsmenacommunity/docker-on-aws-nji</guid>
      <description>&lt;p&gt;This content is the summary of the AWS whitepaper entitled “ Docker on AWS “ written by Brandon Chavis and Thomas Jones. It discusses the exploitation of the container’s benefits in AWS. I tried to simplify and to gather the most interesting points from each paragraph, in order to give the readers very brief and effective content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PS&lt;/strong&gt;: Although the introduction is often skipped by readers, I found that the authors provide an excellent set of information as an opening to our subject. This is why I found it fruitful to summarize the introduction as well, with an explanatory figure. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpq8fsb2cvafokwnav2b5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpq8fsb2cvafokwnav2b5.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  I.    Container Benefits:
&lt;/h1&gt;

&lt;p&gt;The benefits of containers reach all parts of an organization:&lt;br&gt;
    &lt;strong&gt;Speed&lt;/strong&gt;: Helps all contributors to software development activities act quickly.&lt;br&gt;
 &lt;strong&gt;Because:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The architecture of containers allows for full process isolation by using Linux kernel namespaces and cgroups. Containers are independent and share the kernel of the host OS (no need for full virtualization or a hypervisor)&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Containers can be created quickly thanks to their modular and lightweight nature. This is most noticeable in the development lifecycle. This granularity allows easy versioning of released applications, and it reduces resource sharing between application components, which minimizes compatibility issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistency&lt;/strong&gt;: Highlighted by the ability to relocate entire development environments by moving a container between systems. &lt;br&gt;
Containers provide predictable, consistent, and stable applications at all stages of their lifecycle (development, test, and production), as a container encapsulates its exact dependencies, thus minimizing the risk of bugs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Density and Resource Efficiency&lt;/strong&gt;: The enormous community support for the Docker project has increased the density and modularity of computing resources.&lt;br&gt;
• Containers increase the efficiency and agility of applications thanks to their abstraction from the OS and hardware. Multiple containers run on a single system.&lt;br&gt;
• You can balance the resources containers need against the hardware limits of the host to reach a maximum number of containers: higher density increases the efficiency of computing resources, saves the money spent on excess capacity, and lets you change the number of containers assigned to a host, instead of scaling horizontally, to reach optimal utilization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility:&lt;/strong&gt; Based on Docker’s portability, ease of deployment, and small size.&lt;br&gt;&lt;br&gt;
• Unlike other applications that require extensive installation instructions, Docker provides (just like Git) a simple mechanism to download and install containers and their enclosed applications using this command:&lt;br&gt;
$  docker pull &lt;br&gt;
• Docker provides a standard interface: it is easy to deploy wherever you like, and it’s portable between different versions of Linux.&lt;br&gt;
• Containers make microservice architectures possible, where services are isolated from an adjacent service’s failures and from errant patches or upgrades.&lt;br&gt;
• Docker provides a clean, reproducible, and modular environment &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  II.   Containers in AWS:
&lt;/h1&gt;

&lt;p&gt;There are two ways to deploy containers on AWS:&lt;br&gt;
    &lt;strong&gt;AWS Elastic Beanstalk:&lt;/strong&gt; A management layer for AWS services like Amazon EC2, Amazon RDS, and ELB.&lt;/p&gt;

&lt;p&gt;• It is used to deploy, manage, and scale containerized applications &lt;br&gt;
• It can deploy containerized applications to Amazon ECS &lt;br&gt;
• After you specify your requirements (memory, CPU, ports, etc.), it places your containers across your cluster and monitors their health. &lt;br&gt;
• The command-line utility eb can be used to manage AWS Elastic Beanstalk and Docker containers.&lt;br&gt;
• It is suited for deploying a limited number of containers&lt;br&gt;
    &lt;strong&gt;Amazon EC2 Container Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Amazon ECS is a high-performance management system for Docker containers on AWS.&lt;br&gt;
• It helps to launch, manage, and run distributed applications and to orchestrate thousands of Linux containers on a managed cluster of EC2 instances, without having to build your own cluster management backend.&lt;br&gt;
• It offers multiple ways to manage container scheduling, supporting various kinds of applications.&lt;br&gt;
• The Amazon ECS container agent is open source and free; it can be built into any AMI to be used with Amazon ECS&lt;br&gt;
• On a cluster, a task definition is required to define each Docker image (name, location, allocated resources, etc.).&lt;br&gt;
• The minimum unit of work in Amazon ECS is a ‘task’, which is a running instance of a task definition.&lt;/p&gt;
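&lt;p&gt;To make the ‘task definition’ idea concrete, here is a hypothetical minimal task definition for a single-container task (the family name, image, and resource values are made up):&lt;/p&gt;

```python
# Sketch of a minimal ECS task definition for a single-container task.
# Family name, image, and resource values are hypothetical.
task_definition = {
    "family": "example-web",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "cpu": 128,        # CPU units reserved for the container
            "memory": 256,     # hard memory limit in MiB
            "essential": True, # task stops if this container stops
            "portMappings": [
                {"containerPort": 80, "hostPort": 80},
            ],
        }
    ],
}

# Registering it would look roughly like this (not executed here):
# import boto3
# boto3.client("ecs").register_task_definition(**task_definition)
```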

&lt;p&gt;&lt;strong&gt;About the clusters in this context:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Clusters are groups of EC2 instances running the ECS container agent, which communicates instance and container state information to the cluster manager and dockerd.&lt;br&gt;
• Instances register with the default or specified cluster.&lt;br&gt;
• A cluster has an Auto Scaling group to satisfy the needs of the container workloads.&lt;br&gt;
• Amazon ECS allows managing a large cluster of instances and containers programmatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container-Enabled AMIs:&lt;/strong&gt; The Amazon ECS-Optimized Amazon Linux AMI includes the Amazon ECS container agent (running inside a Docker container) and dockerd (the Docker daemon), and omits packages that are not required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container Management:&lt;/strong&gt;&lt;br&gt;
Amazon ECS provides optimal control and visibility over containers, clusters, and applications with a simple, detailed API. You just need to call the relevant actions to carry out your management tasks.&lt;br&gt;
Here is a list containing examples of available API Operations for Amazon ECS. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vevq3xajvyxf7qu1osf.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vevq3xajvyxf7qu1osf.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
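&lt;p&gt;For instance, a few of these API operations map directly onto AWS CLI commands (the cluster name &lt;code&gt;Walkthrough&lt;/code&gt; below is a placeholder):&lt;/p&gt;

```shell
# List all clusters in the current account and region
aws ecs list-clusters

# Describe a specific cluster (the name "Walkthrough" is an example)
aws ecs describe-clusters --clusters Walkthrough

# List the tasks currently running on that cluster
aws ecs list-tasks --cluster Walkthrough
```

These are read-only calls, so they are safe to try against any existing cluster.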

&lt;p&gt;&lt;strong&gt;Scheduling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Scheduling ensures that an appropriate number of tasks are constantly running, that tasks are registered against one or more load balancers, and that tasks are rescheduled when they fail.&lt;br&gt;
• Amazon ECS API actions like StartTask let you make appropriate placement decisions based on specific parameters (business and application requirements). &lt;br&gt;
• Amazon ECS allows the integration with custom or third-party schedulers.&lt;br&gt;
• Amazon ECS includes two built-in schedulers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;RunTask:&lt;/strong&gt; randomly distributes tasks across your cluster.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;CreateService:&lt;/strong&gt; ideally suited to long-running stateless services.&lt;/li&gt;
&lt;/ol&gt;
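&lt;p&gt;A hedged sketch of how the two built-in schedulers are invoked from the CLI (the cluster, family, and service names are placeholders):&lt;/p&gt;

```shell
# RunTask: the default scheduler spreads tasks randomly across the cluster
aws ecs run-task --cluster Walkthrough --task-definition nginx:1 --count 3

# CreateService: keeps a desired count of long-running stateless tasks alive,
# rescheduling replacements when a task stops
aws ecs create-service \
  --cluster Walkthrough \
  --service-name web \
  --task-definition nginx:1 \
  --desired-count 2
```

The service variant is the one to prefer for anything that must stay up, since ECS restores the desired count automatically.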

&lt;p&gt;&lt;strong&gt;Container Repositories&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Amazon ECS is repository-agnostic so customers can use the repositories of their choice.&lt;br&gt;
• Amazon ECS can integrate with private Docker repositories running in AWS or an on-premises data center.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logging and Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon ECS supports monitoring of cluster contents with Amazon CloudWatch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Amazon ECS allows you to store and share information between multiple containers using data volumes. They can be shared on a host as:&lt;br&gt;
••  Empty, non-persistent scratch space for containers&lt;/p&gt;

&lt;p&gt;OR&lt;/p&gt;

&lt;p&gt;••  Exported volume from one container to be mounted by other containers on mountpoints called containerPaths.&lt;/p&gt;

&lt;p&gt;• ECS task definitions can refer to storage locations (instance storage or EBS volumes) on the host as data volumes. The optional parameter referencing a directory on the underlying host is called sourcePath. If it is not provided, the data volume is treated as scratch space.&lt;br&gt;
• The volumesFrom parameter defines the storage relationship between two containers. It requires the sourceContainer argument to specify which container's data volume should be mounted.&lt;/p&gt;
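&lt;p&gt;The storage parameters described above can be sketched in an illustrative task definition fragment (the family name, container names, and paths such as &lt;code&gt;/var/data&lt;/code&gt; are assumptions, not from the whitepaper). Piping it through &lt;code&gt;json.tool&lt;/code&gt; validates the document locally, without touching AWS:&lt;/p&gt;

```shell
# Illustrative task definition showing sourcePath (host directory),
# containerPath (mount point), and volumesFrom (container-to-container
# storage relationship via sourceContainer).
TASK_DEF='{
  "family": "shared-volume-demo",
  "volumes": [
    {"name": "data", "host": {"sourcePath": "/var/data"}}
  ],
  "containerDefinitions": [
    {"name": "producer", "image": "busybox", "memory": 128, "essential": true,
     "mountPoints": [{"sourceVolume": "data", "containerPath": "/shared"}]},
    {"name": "consumer", "image": "busybox", "memory": 128, "essential": false,
     "volumesFrom": [{"sourceContainer": "producer"}]}
  ]
}'

# Validate the JSON before registering it with aws ecs register-task-definition
echo "$TASK_DEF" | python3 -m json.tool
```

Omitting the `host` block (and thus `sourcePath`) would make the `data` volume non-persistent scratch space instead.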

&lt;p&gt;&lt;strong&gt;Networking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Amazon ECS supports networking features (port mapping, container linking, security groups, IP addresses and resources, network interfaces, etc.).&lt;/p&gt;

&lt;h1&gt;
  
  
  III.  Container Security
&lt;/h1&gt;

&lt;p&gt;• AWS customers combine software capabilities (Docker, SELinux, iptables, etc.) with the AWS security measures (IAM, security groups, NACLs, VPC) provided in the AWS architecture for EC2 and scaled by clusters.&lt;br&gt;
• AWS customers maintain, control, and configure the EC2 instances, OS, and Docker daemon through AWS deployment &amp;amp; management services.&lt;br&gt;
• Security measures are scaled through clusters.&lt;/p&gt;

&lt;h1&gt;
  
  
  IV.   Container Use Cases
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;1.Batch jobs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers can package batch, extract, transform, and load (ETL) jobs and deploy them into clusters. Jobs then start quickly, and better performance is observed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.Distributed Applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers build:&lt;/p&gt;

&lt;p&gt;• Distributed applications, which provide a loosely coupled, elastic, and scalable design. They are quick to deploy across heterogeneous servers, as they are characterized by density, consistency, and flexibility.&lt;/p&gt;

&lt;p&gt;• Microservices, packaged into adequate encapsulation units. &lt;/p&gt;

&lt;p&gt;• Batch job processes, which can run on a large number of containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.Continuous Integration and Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers are a keystone component of continuous integration (CI) and continuous deployment (CD) workflows. They support streamlined build, test, and deployment from the same container images, leveraging CI features in tools like GitHub, Jenkins, and Docker Hub. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.Platform As a Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PaaS is a service model that provides a set of software, tools, and an underlying infrastructure, where the cloud provider manages networking, storage, OS, and middleware, and the customer performs resource configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The issue:&lt;/strong&gt; Users and their resources need to be isolated. This is a challenging task for PaaS providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The solution:&lt;/strong&gt; Containers provide the needed isolation concept. They also allow creating and deploying template resources to simplify the isolation process.&lt;br&gt;
In addition, each product offered by the PaaS provider can be built into its own container and deployed on demand quickly.&lt;/p&gt;

&lt;h1&gt;
  
  
  V.    Architectural Considerations
&lt;/h1&gt;

&lt;p&gt;All the containers defined in a task are placed onto a single instance in the cluster, so a task can represent a multi-tier application that requires inter-container communication.&lt;br&gt;
Tasks give users the ability to allocate resources to containers, so containers can be evaluated on resource requirements and collocated.&lt;br&gt;
Amazon ECS provides three API actions for placing containers onto hosts:&lt;br&gt;
    &lt;strong&gt;RunTask:&lt;/strong&gt; uses Amazon ECS scheduler logic to place a task on an open host&lt;br&gt;
       &lt;strong&gt;StartTask:&lt;/strong&gt; allows a specific cluster instance to be passed as a value in the API call&lt;br&gt;
       &lt;strong&gt;CreateService:&lt;/strong&gt; allows for the creation of a Service object, which is a combination of a TaskDefinition object and an existing Elastic Load Balancing load balancer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service discovery:&lt;/strong&gt; Solves challenges with advertising internal container state, such as current IP address and application status, to other containers running on separate hosts within the cluster. Amazon ECS describe API actions like describe-services can serve as primitives for service discovery functionality.&lt;/p&gt;
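&lt;p&gt;For example, a custom scheduler or sidecar process could poll service and task state with the describe actions (the cluster and service names below, and the &lt;code&gt;TASK_ARN&lt;/code&gt; placeholder, are hypothetical):&lt;/p&gt;

```shell
# Query the deployment and running-count state of a service
aws ecs describe-services --cluster Walkthrough --services web

# Look up which container instance a given task landed on;
# TASK_ARN is a placeholder for a real task ARN from list-tasks
aws ecs describe-tasks --cluster Walkthrough --tasks TASK_ARN
```

The returned container instance can then be resolved to an EC2 instance and its IP address, which is the raw material for a service-discovery layer.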

&lt;h1&gt;
  
  
  VI.   Walkthrough
&lt;/h1&gt;

&lt;p&gt;Since the commands used in this Walkthrough can be reused in other, more complex projects, I suggest a bash file that can help to solve repetitive and difficult real-world problems:&lt;br&gt;
                               View: &lt;a href="https://github.com/DorraBoukari/Walkthrough/blob/main/walkthrough.sh" rel="noopener noreferrer"&gt;link&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;1.Create your first cluster named ‘Walkthrough’ with the &lt;code&gt;create-cluster&lt;/code&gt; command&lt;br&gt;
    PS: each AWS account is limited to two clusters.&lt;br&gt;
2.Add instances&lt;br&gt;
If you would like to control which cluster the instances register to (instead of the default cluster), you need to input UserData to populate the cluster name into the &lt;code&gt;/etc/ecs/ecs.config&lt;/code&gt; file.&lt;br&gt;
In this lab, we will launch a web server, so we configure the correct security group permissions and allow inbound access from anywhere on port 80.&lt;/p&gt;
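&lt;p&gt;Steps 1 and 2 roughly correspond to the following commands (a minimal sketch; the second command is meant to run on the instance at boot, for example via UserData):&lt;/p&gt;

```shell
# Step 1: create the cluster
aws ecs create-cluster --cluster-name Walkthrough

# Step 2 (on each instance, via UserData): register with the named cluster
# instead of the default one by populating /etc/ecs/ecs.config
echo "ECS_CLUSTER=Walkthrough" | sudo tee -a /etc/ecs/ecs.config
```

Using `tee -a` appends the line so any existing agent configuration in the file is preserved.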

&lt;p&gt;3.Run a quick check with the &lt;code&gt;list-container-instances&lt;/code&gt; command:&lt;br&gt;
PS: To dig deeper into the instances, use the &lt;code&gt;describe-container-instances&lt;/code&gt; command &lt;/p&gt;

&lt;p&gt;4.Register a task definition before running it on the ECS cluster:&lt;/p&gt;

&lt;p&gt;a)Create the task definition:&lt;br&gt;
It is defined in a JSON file called ‘nginx_task.json’. This specific task launches a pre-configured NGINX container from the Docker Hub repository.&lt;/p&gt;

&lt;p&gt;View:   &lt;a href="https://github.com/DorraBoukari/Walkthrough/blob/main/nginx_task.json" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;b)Register the task definition with Amazon ECS:&lt;/p&gt;

&lt;p&gt;5.Run the Task with the &lt;code&gt;run-task&lt;/code&gt; command:&lt;br&gt;
                  PS: &lt;br&gt;
• Take note of the taskDefinition value (Walkthrough:1) returned after task registration in the previous step.&lt;br&gt;
• To obtain the ARN, use the &lt;code&gt;aws ecs list-task-definitions&lt;/code&gt; command.&lt;/p&gt;
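&lt;p&gt;Step 5 can be sketched with the CLI as follows (a hedged example; the cluster name and revision must match what the earlier steps actually created):&lt;/p&gt;

```shell
# Confirm the registered task definition and its revision number
aws ecs list-task-definitions

# Run one instance of revision 1 of the Walkthrough task definition
aws ecs run-task --cluster Walkthrough --task-definition Walkthrough:1 --count 1
```

The `family:revision` form (`Walkthrough:1`) pins the exact revision; omitting `:1` would use the latest active revision instead.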

&lt;p&gt;6.Test the container: The container port is mapped to the instance port 80, so you can use the curl utility to test the public IP address.&lt;/p&gt;
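&lt;p&gt;For example (the &lt;code&gt;EC2_PUBLIC_IP&lt;/code&gt; placeholder must be replaced with the public IP of the instance the task landed on):&lt;/p&gt;

```shell
# NGINX should answer on port 80 of the instance's public IP;
# EC2_PUBLIC_IP is a placeholder, not a real address
curl -s http://EC2_PUBLIC_IP/ | head -n 15
```

Seeing the default NGINX welcome page confirms the port mapping and security group rules are correct.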

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This whitepaper summary can be a useful resource for those interested in cloud-native technologies. It sheds light on containers generally, and Docker on AWS specifically. It details the benefits of these technologies, especially when using an EC2 cluster, gives a step-by-step guide for beginners to deploy their first container on a cluster, and provides a bash script that helps to automate these tasks in more complex projects.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>aws</category>
      <category>containers</category>
      <category>cloudskills</category>
    </item>
  </channel>
</rss>
