<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kelvin Onuchukwu</title>
    <description>The latest articles on DEV Community by Kelvin Onuchukwu (@kelvinskell).</description>
    <link>https://dev.to/kelvinskell</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F711228%2F4fca5178-5cc4-47cc-8be3-ffb8cd9500a2.jpeg</url>
      <title>DEV Community: Kelvin Onuchukwu</title>
      <link>https://dev.to/kelvinskell</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kelvinskell"/>
    <language>en</language>
    <item>
      <title>Getting Started with AWS CDK in Python: A Comprehensive and Easy-to-Follow Guide</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Wed, 31 Jul 2024 12:37:39 +0000</pubDate>
      <link>https://dev.to/kelvinskell/getting-started-with-aws-cdk-in-python-a-comprehensive-and-easy-to-follow-guide-2k44</link>
      <guid>https://dev.to/kelvinskell/getting-started-with-aws-cdk-in-python-a-comprehensive-and-easy-to-follow-guide-2k44</guid>
      <description>&lt;h1&gt;
  
  
  A Practical Guide to AWS CDK with Python
&lt;/h1&gt;

&lt;p&gt;The AWS Cloud Development Kit (CDK) is a powerful tool that lets developers define cloud infrastructure in familiar programming languages. In this guide, we'll use AWS CDK with Python to provision and manage AWS resources, covering everything from setting up your environment to advanced use cases and best practices for CI/CD with CDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS CDK?
&lt;/h2&gt;

&lt;p&gt;AWS CDK is an open-source software development framework that allows you to define your cloud infrastructure using code. Instead of writing lengthy JSON or YAML CloudFormation templates, you can use familiar programming languages like Python, JavaScript, TypeScript, Java, and C#. This approach enables you to leverage the full power of programming languages, such as loops, conditions, and functions, to create reusable and maintainable infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use AWS CDK?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Advantages over Terraform
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Familiarity&lt;/strong&gt;: Use your preferred programming language to define infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reusability&lt;/strong&gt;: Create reusable constructs that can be shared across projects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration&lt;/strong&gt;: Seamlessly integrate with other AWS services and SDKs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modularity&lt;/strong&gt;: Break down your infrastructure into logical components for better organization and management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich Library&lt;/strong&gt;: Leverage a rich library of AWS constructs (L1, L2, L3) to simplify complex infrastructure definitions.&lt;/li&gt;
&lt;/ol&gt;
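
&lt;p&gt;To make the first two points concrete, here is a minimal sketch (using the same CDK v1-style imports as the rest of this guide; the bucket names are purely illustrative) of how an ordinary Python loop replaces several near-identical template blocks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: one loop instead of three copy-pasted CloudFormation resources
from aws_cdk import core
from aws_cdk.aws_s3 import Bucket

class BucketsStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)
        # Plain Python logic decides which buckets get versioning
        for name in ["logs", "assets", "backups"]:
            Bucket(self, f"{name.title()}Bucket", versioned=(name == "backups"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;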

&lt;h2&gt;
  
  
  Setting Up Your Environment
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Node.js and npm&lt;/strong&gt;: The CDK CLI runs on Node.js, so install Node.js and npm first.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; aws-cdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt;: Install Python and set up a virtual environment.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv .env
   &lt;span class="nb"&gt;source&lt;/span&gt; .env/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI&lt;/strong&gt;: Configure AWS CLI with your credentials.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   pip &lt;span class="nb"&gt;install &lt;/span&gt;awscli
   aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Initialize a New CDK Project
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create and Initialize CDK App&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;mkdir &lt;/span&gt;my-cdk-app
   &lt;span class="nb"&gt;cd &lt;/span&gt;my-cdk-app
   cdk init app &lt;span class="nt"&gt;--language&lt;/span&gt; python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Install Dependencies&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
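
&lt;p&gt;Note that with the CDK v1-style module layout used throughout this guide, each AWS service ships as its own pip package. The generated &lt;code&gt;requirements.txt&lt;/code&gt; covers the core framework, so add service modules as you use them; for the S3 examples in this guide, that would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip install aws-cdk.aws-s3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;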



&lt;h2&gt;
  
  
  Defining Your Infrastructure
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Basic Stack Example
&lt;/h3&gt;

&lt;p&gt;Create a new file &lt;code&gt;my_stack.py&lt;/code&gt; under the &lt;code&gt;my_cdk_app&lt;/code&gt; directory and define your stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# my_cdk_app/my_stack.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk.aws_s3&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Bucket&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MyBucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;versioned&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update &lt;code&gt;app.py&lt;/code&gt; to include your stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;my_cdk_app.my_stack&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MyStack&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nc"&gt;MyStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MyStack&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synth&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Synthesizing and Deploying
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Synthesize CloudFormation Template&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   cdk synth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Deploy Stack&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   cdk deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cross-Stack References
&lt;/h3&gt;

&lt;p&gt;Share resources between stacks using exports and imports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack A&lt;/strong&gt;: Define and export the S3 bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# my_cdk_app/stack_a.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk.aws_s3&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Bucket&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;StackA&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MyBucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;CfnOutput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BucketArnOutput&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bucket_arn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Stack B&lt;/strong&gt;: Import the S3 bucket from Stack A.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# my_cdk_app/stack_b.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk.aws_s3&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Bucket&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;StackB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;bucket_arn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;import_value&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BucketArnOutput&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;imported_bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_bucket_arn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ImportedBucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bucket_arn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update &lt;code&gt;app.py&lt;/code&gt; to include both stacks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;my_cdk_app.stack_a&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StackA&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;my_cdk_app.stack_b&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StackB&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;stack_a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StackA&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;StackA&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nc"&gt;StackB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;StackB&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synth&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
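
&lt;p&gt;Because &lt;code&gt;StackB&lt;/code&gt; resolves the exported value at deploy time, deploy &lt;code&gt;StackA&lt;/code&gt; first so the export exists:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk deploy StackA
cdk deploy StackB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;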



&lt;h2&gt;
  
  
  Advanced Use Cases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Multi-Account and Multi-Region Deployment
&lt;/h3&gt;

&lt;p&gt;Deploy infrastructure across multiple AWS accounts and regions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;my_cdk_app.my_stack&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MyStack&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;prod_env&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Environment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;123456789012&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us-west-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;dev_env&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Environment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;987654321098&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nc"&gt;MyStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ProdStack&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;prod_env&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nc"&gt;MyStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DevStack&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dev_env&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synth&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
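
&lt;p&gt;Each explicitly targeted environment must be bootstrapped before its first deployment, and you can point the CLI at the matching credentials with a named profile (the profile name here is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk bootstrap aws://123456789012/us-west-2 --profile prod
cdk deploy ProdStack --profile prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;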



&lt;h3&gt;
  
  
  CI/CD with CDK
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Step 1: Create the CDK Project
&lt;/h4&gt;

&lt;p&gt;Initialize your CDK project if you haven't already.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;my-cdk-app
&lt;span class="nb"&gt;cd &lt;/span&gt;my-cdk-app
cdk init app &lt;span class="nt"&gt;--language&lt;/span&gt; python
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2: Define Your Infrastructure
&lt;/h4&gt;

&lt;p&gt;Define the resources you need in your CDK stack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: S3 Bucket Stack&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# my_cdk_app/my_stack.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk.aws_s3&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Bucket&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MyBucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;versioned&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3: Add CDK Pipeline Construct
&lt;/h4&gt;

&lt;p&gt;AWS CDK provides a higher-level construct for setting up CI/CD pipelines called &lt;code&gt;CodePipeline&lt;/code&gt;. This construct simplifies creating a pipeline with stages for source, build, and deploy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Pipeline Stack&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# my_cdk_app/pipeline_stack.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk.pipelines&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CodePipeline&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CodePipelineSource&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ShellStep&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;my_cdk_app.my_stack&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MyStack&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PipelineStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CodePipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Pipeline&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                &lt;span class="n"&gt;synth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ShellStep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Synth&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                                &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;CodePipelineSource&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;git_hub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-org/my-repo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                                                &lt;span class="n"&gt;commands&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
                                                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;npm install -g aws-cdk&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;python -m venv .env&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;source .env/bin/activate&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pip install -r requirements.txt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cdk synth&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                                                &lt;span class="p"&gt;]))&lt;/span&gt;

        &lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_stage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;MyApplicationStage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Deploy&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyApplicationStage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stage&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nc"&gt;MyStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MyStack&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nc"&gt;PipelineStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PipelineStack&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synth&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Best Practices
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Modularize Your Code&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Break down your infrastructure into reusable constructs. This promotes code reuse and maintainability.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use Environment Variables and Context&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use context values (&lt;code&gt;cdk.json&lt;/code&gt;) and environment variables to manage configuration across different environments.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Leverage CDK Patterns&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use higher-level constructs (L3) and patterns to standardize your infrastructure setup.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Testing&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement unit tests for your constructs to ensure correctness.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Documentation and Comments&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document your code and provide comments to explain complex logic or configurations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use CDK Metadata&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add metadata to your constructs to provide additional context and information.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Version Control&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use version control for your CDK projects, and version your constructs to manage changes over time.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Follow AWS Best Practices&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure your infrastructure follows AWS best practices for security, performance, and cost management.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
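&lt;p&gt;As a sketch of the second practice above, context values can live in &lt;code&gt;cdk.json&lt;/code&gt; and be looked up per environment. The keys below (&lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;prod&lt;/code&gt;, and their settings) are hypothetical, not a CDK convention:&lt;/p&gt;

```json
{
  "app": "python3 app.py",
  "context": {
    "dev":  { "instance_type": "t3.micro", "desired_count": 1 },
    "prod": { "instance_type": "m5.large", "desired_count": 3 }
  }
}
```

&lt;p&gt;Inside a stack, &lt;code&gt;self.node.try_get_context("dev")&lt;/code&gt; would then return the matching dictionary, letting one code base drive differently sized environments.&lt;/p&gt;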

&lt;h3&gt;
  
  
  CDK Commands
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;cdk init&lt;/strong&gt;: Initializes a new CDK project.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   cdk init app &lt;span class="nt"&gt;--language&lt;/span&gt; python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;cdk synth&lt;/strong&gt;: Synthesizes the app and outputs the generated CloudFormation template.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   cdk synth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;cdk deploy&lt;/strong&gt;: Deploys the CloudFormation template to AWS.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   cdk deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;cdk destroy&lt;/strong&gt;: Destroys the deployed stack.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   cdk destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;
&lt;strong&gt;cdk diff&lt;/strong&gt;: Compares the deployed stack with your local changes and shows what would change.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   cdk diff
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="6"&gt;
&lt;li&gt;
&lt;strong&gt;cdk bootstrap&lt;/strong&gt;: Provisions the resources CDK needs to deploy (such as a staging S3 bucket and IAM roles) into the target environment.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   cdk bootstrap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="7"&gt;
&lt;li&gt;
&lt;strong&gt;cdk context&lt;/strong&gt;: Manages cached context values.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   cdk context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Structuring Your CDK Directory
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Example Directory Structure
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-cdk-app/
├── app.py
├── cdk.json
├── requirements.txt
├── setup.py
├── README.md
└── my_cdk_app/
    ├── __init__.py
    ├── stack_a.py
    ├── stack_b.py
    ├── constructs/
    │   ├── __init__.py
    │   └── my_construct.py
    └── tests/
        ├── __init__.py
        └── test_stack.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Explanation
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;app.py&lt;/strong&gt;: Entry point of the CDK app where stacks are instantiated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cdk.json&lt;/strong&gt;: CDK configuration file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;requirements.txt&lt;/strong&gt;: Lists Python dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;setup.py&lt;/strong&gt;: Setup script for the Python package.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;README.md&lt;/strong&gt;: Project description and instructions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;my_cdk_app/&lt;/strong&gt;: Directory for the CDK application code.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;stack_a.py&lt;/strong&gt;: Defines Stack A.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;stack_b.py&lt;/strong&gt;: Defines Stack B.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;constructs/&lt;/strong&gt;: Directory for reusable constructs.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;my_construct.py&lt;/strong&gt;: Example construct.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tests/&lt;/strong&gt;: Directory for tests.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;test_stack.py&lt;/strong&gt;: Example test file.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS CDK with Python provides a powerful and flexible way to define and manage your AWS infrastructure using code. By following best practices, leveraging modular design, and integrating with CI/CD pipelines, you can create scalable, maintainable, and automated infrastructure. This guide has covered the basics and advanced use cases, offering a comprehensive overview of how to get started and succeed with AWS CDK and Python.&lt;/p&gt;




&lt;p&gt;By incorporating these elements into your AWS CDK projects, you'll be well-equipped to harness the full power of AWS infrastructure as code, ensuring efficient and reliable deployments in your cloud environments. Happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cdk</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Comparing AWS RDS to NoSQL Databases like DynamoDB: When to Use Which</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Wed, 19 Jun 2024 11:33:49 +0000</pubDate>
      <link>https://dev.to/kelvinskell/comparing-aws-rds-to-nosql-databases-like-dynamodb-when-to-use-which-1k0n</link>
      <guid>https://dev.to/kelvinskell/comparing-aws-rds-to-nosql-databases-like-dynamodb-when-to-use-which-1k0n</guid>
      <description>&lt;p&gt;This post was originally published on &lt;a href="https://practicalcloud.net"&gt;Practical Cloud&lt;/a&gt;.&lt;br&gt;
Read the unabridged version &lt;a href="https://practicalcloud.net/comparing-aws-rds-to-nosql-databases-like-dynamodb-when-to-use-which/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Amazon Web Services (AWS) provides a range of database solutions to cater to various application needs. Two popular options are Amazon Relational Database Service (RDS) and Amazon DynamoDB, a managed NoSQL database. This comparison highlights the key differences between RDS and DynamoDB, offering guidance on when to use each based on specific use cases and requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of AWS RDS
&lt;/h2&gt;

&lt;p&gt;Amazon RDS is a managed relational database service that supports several database engines, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Aurora&lt;/li&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;MySQL&lt;/li&gt;
&lt;li&gt;MariaDB&lt;/li&gt;
&lt;li&gt;Oracle Database&lt;/li&gt;
&lt;li&gt;Microsoft SQL Server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;RDS automates administrative tasks such as hardware provisioning, database setup, patching, and backups, allowing developers to focus on their applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of Amazon DynamoDB
&lt;/h2&gt;

&lt;p&gt;Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB is designed for applications that require consistent, single-digit millisecond latency at any scale. It is highly available, with built-in support for multi-Region, multi-active replication through global tables.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Differences Between RDS and DynamoDB
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Data Model
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RDS:&lt;/strong&gt; Relational database with a structured schema. Data is organized into tables with predefined columns, and relationships between tables are defined using foreign keys.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB:&lt;/strong&gt; NoSQL database with a flexible schema. Data is stored in tables, but each item (row) can have a different set of attributes (columns). This allows for more flexible and dynamic data structures.&lt;/li&gt;
&lt;/ul&gt;
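&lt;p&gt;To make the schema difference concrete, here is a small, purely illustrative Python sketch (the table and attribute names are made up): two items in the same DynamoDB-style table need only share the key attribute.&lt;/p&gt;

```python
# Two items in a hypothetical "Users" table. DynamoDB enforces only the key
# schema (user_id here); every other attribute is free to vary per item.
user_a = {"user_id": "u1", "name": "Ada", "email": "ada@example.com"}
user_b = {"user_id": "u2", "name": "Linus", "repos": 1, "languages": ["C"]}

# An RDS table, by contrast, would force both rows into one fixed column list.
shared = set(user_a) & set(user_b)
print(sorted(shared))  # ['name', 'user_id']
```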

&lt;h3&gt;
  
  
  Query Language
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RDS:&lt;/strong&gt; Uses SQL (Structured Query Language) for querying and managing data. SQL is a powerful and expressive language that supports complex queries, joins, and transactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB:&lt;/strong&gt; Offers its native API operations (GetItem, Query, Scan) alongside PartiQL, a SQL-compatible query language for querying DynamoDB tables. While PartiQL supports basic SQL-like queries, it does not support joins, and its transaction support is far more limited than relational SQL.&lt;/li&gt;
&lt;/ul&gt;
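&lt;p&gt;For instance, a basic PartiQL statement reads like familiar SQL (the table and attribute names below are illustrative), but a join across tables would be rejected:&lt;/p&gt;

```sql
-- Supported: a simple filtered read on one table
SELECT order_id, total
FROM "Orders"
WHERE customer_id = 'c-123';

-- Not supported: relational-style joins such as
-- SELECT o.order_id, c.name FROM "Orders" o JOIN "Customers" c ON ...
```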

&lt;h3&gt;
  
  
  Consistency Model
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RDS:&lt;/strong&gt; Supports strong consistency by default. Transactions and ACID (Atomicity, Consistency, Isolation, Durability) properties are fully supported, making RDS suitable for applications that require reliable and consistent data integrity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB:&lt;/strong&gt; Offers two consistency models:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Eventual Consistency:&lt;/strong&gt; Read operations might not immediately reflect the results of a recently completed write operation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strong Consistency:&lt;/strong&gt; Ensures that read operations always reflect the latest write operations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
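&lt;p&gt;The difference can be sketched with a toy in-memory model. This is purely illustrative and not DynamoDB's actual replication protocol:&lt;/p&gt;

```python
# Toy model: a primary copy plus one lagging replica -- NOT real DynamoDB.
class TinyTable:
    def __init__(self):
        self.primary = {}   # always holds the latest writes
        self.replica = {}   # lags until "replication" catches up

    def put_item(self, key, value):
        self.primary[key] = value

    def replicate(self):
        self.replica = dict(self.primary)  # background copy catches up

    def get_item(self, key, consistent_read=False):
        # Strongly consistent reads hit the primary; eventually
        # consistent reads may see a stale replica.
        store = self.primary if consistent_read else self.replica
        return store.get(key)

t = TinyTable()
t.put_item("user#1", {"name": "Ada"})
print(t.get_item("user#1"))                        # None -- replica is stale
print(t.get_item("user#1", consistent_read=True))  # {'name': 'Ada'}
t.replicate()
print(t.get_item("user#1"))                        # {'name': 'Ada'}
```

&lt;p&gt;In real DynamoDB the same choice is made per read, for example by setting &lt;code&gt;ConsistentRead=True&lt;/code&gt; on a &lt;code&gt;GetItem&lt;/code&gt; call.&lt;/p&gt;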

&lt;h3&gt;
  
  
  Scalability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RDS:&lt;/strong&gt; Vertical and horizontal scaling:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vertical Scaling:&lt;/strong&gt; Increase the instance size (CPU, memory) or storage capacity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Horizontal Scaling:&lt;/strong&gt; Use read replicas for read-heavy workloads.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB:&lt;/strong&gt; Horizontal scaling by design. DynamoDB automatically partitions data and spreads the load across multiple servers. It can handle virtually unlimited throughput and storage capacity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Performance
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RDS:&lt;/strong&gt; Performance depends on the instance type, storage configuration, and database engine tuning. It can handle complex queries and transactions efficiently but might require careful management and optimization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB:&lt;/strong&gt; Designed for high performance with low latency at any scale. It is optimized for key-value and document data models, providing consistent performance even under high load.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;h4&gt;
  
  
  When to Use RDS
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transactional Applications:&lt;/strong&gt; Applications that require ACID transactions, such as financial systems, order management, and inventory systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex Queries and Joins:&lt;/strong&gt; Use cases where complex SQL queries, joins, and data relationships are essential, such as business intelligence and reporting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relational Data Models:&lt;/strong&gt; Applications that benefit from a structured schema with well-defined relationships between entities, such as CRM systems and content management systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  When to Use DynamoDB
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High-Throughput and Low-Latency Applications:&lt;/strong&gt; Applications that require consistent, single-digit millisecond latency and can scale horizontally, such as gaming leaderboards, real-time bidding, and IoT applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Schema Requirements:&lt;/strong&gt; Use cases where the data model evolves frequently, and a fixed schema is restrictive, such as user profile stores and content recommendation systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability and Multi-Region Replication:&lt;/strong&gt; Applications that need to be highly available with seamless multi-region replication, such as global e-commerce platforms and distributed applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cost Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RDS:&lt;/strong&gt; Costs are based on instance type, storage, and I/O operations. It can become expensive as the instance size and storage requirements increase, especially for high-availability setups with Multi-AZ deployments and read replicas.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB:&lt;/strong&gt; Pricing is based on the provisioned throughput (read and write capacity units) and storage used. DynamoDB offers on-demand pricing, which can be cost-effective for workloads with variable traffic patterns.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Management and Maintenance
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RDS:&lt;/strong&gt; AWS manages the underlying infrastructure, including hardware provisioning, patching, backups, and automated failover. However, database tuning and optimization are the user's responsibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB:&lt;/strong&gt; Fully managed service with minimal maintenance requirements. AWS handles all aspects of infrastructure management, including scaling, replication, and backups.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Detailed Use Cases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  RDS Use Cases
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;E-commerce Platforms:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RDS is ideal for handling transactional operations, product catalogs, and user orders. The relational model supports complex queries for inventory management and customer relationship management.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enterprise Resource Planning (ERP):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ERPs require robust data integrity and complex relationships between entities like suppliers, customers, and inventory. RDS supports the transactional requirements and complex queries needed in ERP systems.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Financial Applications:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Banking and financial applications need strong consistency and ACID transactions to ensure data accuracy and integrity. RDS's support for transactions and relational models makes it suitable for these applications.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  DynamoDB Use Cases
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Real-Time Data Processing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applications like real-time bidding, gaming leaderboards, and live event tracking benefit from DynamoDB's low latency and high throughput. The flexible schema allows for rapid iteration and updates to data models.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Internet of Things (IoT):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IoT applications generate massive amounts of data that need to be ingested and queried quickly. DynamoDB's ability to scale horizontally and handle high write and read rates makes it ideal for IoT data storage and processing.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Content and User Personalization:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DynamoDB is well-suited for storing user profiles, session data, and personalized content recommendations. Its flexible schema allows for storing diverse attributes and evolving data models without downtime.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Choosing between AWS RDS and Amazon DynamoDB depends on your application's specific requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;RDS&lt;/strong&gt; when you need strong consistency, ACID transactions, complex queries, and a relational data model.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;DynamoDB&lt;/strong&gt; for applications requiring high throughput, low latency, flexible schemas, and horizontal scalability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding the strengths and limitations of each service will help you make an informed decision and design a robust and efficient database architecture for your application.&lt;/p&gt;

</description>
      <category>database</category>
      <category>cloud</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Comprehensive Guide to AWS API Gateway: Everything You Need to Know - Part I</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Wed, 12 Jun 2024 10:25:47 +0000</pubDate>
      <link>https://dev.to/kelvinskell/comprehensive-guide-to-aws-api-gateway-everything-you-need-to-know-part-i-1hia</link>
      <guid>https://dev.to/kelvinskell/comprehensive-guide-to-aws-api-gateway-everything-you-need-to-know-part-i-1hia</guid>
      <description>&lt;p&gt;AWS API Gateway is a powerful service that allows you to create, publish, manage, and secure APIs with ease. Whether you're building RESTful APIs, WebSocket APIs, or HTTP APIs, API Gateway provides a range of features and functionalities to streamline API development, authentication, authorization, documentation, and monitoring. In this comprehensive guide, we'll dive deep into different aspects of AWS API Gateway, covering key concepts, best practices, use cases, and implementation strategies. &lt;/p&gt;

&lt;p&gt;This is the first part of a two-part series. Read the second part &lt;a href="https://practicalcloud.net/comprehensive-guide-to-aws-api-gateway-everything-you-need-to-know-part-ii/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Introduction to AWS API Gateway
&lt;/h2&gt;

&lt;p&gt;AWS API Gateway is a fully managed service that allows you to create, deploy, and manage APIs at scale. It acts as a front door for your backend services, enabling clients to access your APIs securely and efficiently. With API Gateway, you can build RESTful APIs, WebSocket APIs, and HTTP APIs, leverage serverless computing with AWS Lambda integrations, implement authentication and authorization mechanisms, monitor API usage and performance, and generate interactive API documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. API Gateway Features and Components
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Integration Types
&lt;/h3&gt;

&lt;p&gt;AWS API Gateway supports multiple integration types to connect your API methods with backend services, AWS resources, Lambda functions, HTTP endpoints, and external APIs. The key integration types include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Integration:&lt;/strong&gt; Connects API Gateway to HTTP/HTTPS endpoints such as web services, microservices, and external APIs. It supports the standard HTTP methods (GET, POST, PUT, DELETE) along with custom headers, query strings, and payloads, enabling seamless data exchange between API Gateway and backend HTTP services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda Integration:&lt;/strong&gt; Executes Lambda functions in response to API requests, passing event data, headers, and payloads to the function. This enables serverless API endpoints, event-driven processing, and dynamic content generation on scalable compute resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Service Integration:&lt;/strong&gt; Connects API Gateway directly to AWS services such as DynamoDB, S3, Step Functions, SQS, and SNS, giving your API direct access to data storage, messaging, and workflow orchestration. IAM roles and policies provide secure access control and resource permissions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mock Integration:&lt;/strong&gt; Returns mock responses for API methods without any backend integration. Developers define the mock status codes, headers, and payloads per endpoint, which is useful for testing, development, and prototyping.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Proxy Integration:&lt;/strong&gt; Acts as a reverse proxy, forwarding requests to backend services and routing responses back to clients. It supports path-based routing, URI rewriting, header forwarding, and response transformation, letting API Gateway front backend HTTP services, legacy systems, or external APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda Proxy Integration:&lt;/strong&gt; Passes the entire API request (path parameters, query strings, headers, and payload) to a Lambda function as event data and returns the function's response directly. This simplifies API Gateway configuration, reduces overhead, and improves performance for Lambda-based APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Integration Requests and Responses
&lt;/h3&gt;

&lt;p&gt;Integration requests and responses define the format, structure, transformation, and mapping of data between API Gateway and backend integrations. Key aspects include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Request Mapping Templates:&lt;/strong&gt; Transform incoming API requests into backend integration requests. Templates are written in the Velocity Template Language (VTL) and can manipulate headers, parameters, payloads, and data formats, including conditional logic and error handling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Mapping Templates:&lt;/strong&gt; Transform backend integration responses into API Gateway responses, using VTL to format, filter, and map response data and to handle status codes, headers, and error messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Mapping and Transformation:&lt;/strong&gt; Map request parameters, path variables, query strings, headers, and payloads to backend integration data, converting between formats (e.g., JSON, XML, form data) and handling validation, serialization, deserialization, and content negotiation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Payload Encoding and Decoding:&lt;/strong&gt; Encode and decode payloads using standard schemes (e.g., base64, URL encoding), handle binary data and multipart/form-data, and configure size limits and content-type conversions for data exchange.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Handling and Error Management:&lt;/strong&gt; Map backend errors to API Gateway error responses and error codes, and implement response transformations, caching directives, and content negotiation for API clients.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Method Requests and Responses
&lt;/h3&gt;

&lt;p&gt;Method requests and responses define the input parameters, request models, response models, validation rules, and error handling for API methods. Key aspects include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Request Parameters and Models:&lt;/strong&gt; Define request parameters (path parameters, query strings, headers, body) and specify request models, schemas, and validation rules using JSON Schema or the OpenAPI Specification (Swagger), so API Gateway can enforce data types and reject missing or invalid parameters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Models and Content Negotiation:&lt;/strong&gt; Define response models, schemas, and content types for method responses, specifying data formats (e.g., JSON, XML) and implementing content negotiation and media type selection based on client preferences.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request Validation and Error Handling:&lt;/strong&gt; Validate requests against the defined models and rules, handle validation errors and parameter constraints, and define custom error responses, codes, and messages for API methods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Formatting and Mapping:&lt;/strong&gt; Format method responses according to the defined models and content types, and map backend integration responses to method responses using response mapping templates.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical Scenario: Transforming Responses from a Step Functions Orchestration
&lt;/h3&gt;

&lt;p&gt;To transform responses from Step Functions, you configure the integration response in API Gateway to reshape the output of a state machine connected to your API.&lt;/p&gt;

&lt;p&gt;Here's a breakdown of the functionalities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Integration Response:&lt;/strong&gt; Controls how API Gateway handles the response it receives from the backend service (here, the Step Functions state machine). A mapping template can modify the response body, headers, and status code before the response is passed on to the client.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Method Response:&lt;/strong&gt; Defines the format of the response the API Gateway method itself returns to the client, specifying the HTTP status code, response parameters, and response models. Method responses describe the expected API output, but they are not the primary way to transform a backend response.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, the integration response acts as an intermediary, allowing you to manipulate the state machine's output before it reaches the client via the method response.&lt;/p&gt;

&lt;p&gt;Here's how integration and method responses work together in API Gateway:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Client Request:&lt;/strong&gt; A client sends a request to your API Gateway endpoint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration:&lt;/strong&gt; API Gateway forwards the request (possibly with transformations) to your backend service (the Step Functions state machine in this case).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend Response:&lt;/strong&gt; The state machine processes the request and returns a response.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Response:&lt;/strong&gt; This is where the transformation happens. A mapping template in the integration response modifies the backend's response data (body, headers, status code) before passing it on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Method Response (Optional):&lt;/strong&gt; API Gateway uses the configured method responses to define the final format of the response sent back to the client: the HTTP status code (e.g., 200 for success, 404 for not found), any additional response headers, and response models that describe the expected body structure, which is helpful for generating client-side SDKs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Since the transformation happens in the integration response in the Step Functions scenario, you don't strictly need a method response. However, method responses are useful in several situations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default Responses:&lt;/strong&gt; Define common response formats for success (200) and error conditions (400, 500, etc.) across your API, giving developers a consistent interface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customizing Headers:&lt;/strong&gt; Add headers not present in the backend response (e.g., API Gateway throttling information).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Model Validation:&lt;/strong&gt; Define a model for the expected response body structure. This helps validate the backend's response data and is particularly beneficial when generating client-side SDKs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To summarize, integration responses offer the flexibility to manipulate data coming directly from the backend service, while method responses define the public interface of your API, specifying what developers can expect in the final response. You can leverage both for a more controlled and well-defined API experience.&lt;/p&gt;
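&lt;p&gt;As a sketch of the Step Functions scenario above, an integration response mapping template might unwrap the state machine's serialized output. The field names (&lt;code&gt;status&lt;/code&gt;, &lt;code&gt;result&lt;/code&gt;) are hypothetical and depend on what your state machine returns:&lt;/p&gt;

```
#set($body = $util.parseJson($input.path('$.output')))
{
  "status": "$body.status",
  "result": "$body.result"
}
```

&lt;p&gt;Here &lt;code&gt;$input.path('$.output')&lt;/code&gt; pulls the state machine's output (a JSON string) from the backend response, and &lt;code&gt;$util.parseJson&lt;/code&gt; turns it into an object the template can address.&lt;/p&gt;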

&lt;h2&gt;
  
  
  3. Best Practices for API Gateway
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Security Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement HTTPS for secure API communication and data encryption.&lt;/li&gt;
&lt;li&gt;Use API keys, IAM roles, and custom authorizers (Lambda or Cognito authorizers) for authentication and access control.&lt;/li&gt;
&lt;li&gt;Secure sensitive data, headers, and payloads using encryption and access policies.&lt;/li&gt;
&lt;li&gt;Implement rate limiting, throttling, and usage plans to prevent abuse and control API usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Performance Optimization&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use caching, content delivery networks (CDNs), and response compression to optimize performance.&lt;/li&gt;
&lt;li&gt;Minimize round-trip times, reduce latency, and optimize integration execution for faster responses.&lt;/li&gt;
&lt;li&gt;Implement asynchronous processing, batch operations, and parallel execution for scalable API performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scalability Strategies&lt;/strong&gt;&lt;br&gt;
Design scalable APIs with distributed architectures, stateless services, and horizontal scaling capabilities.&lt;br&gt;
Use AWS Auto Scaling, Lambda concurrency controls, and caching to handle varying loads and traffic spikes.&lt;br&gt;
Implement load balancing, fault tolerance, and distributed caching for high availability and resilience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API Versioning and Lifecycle Management&lt;/strong&gt;&lt;br&gt;
Implement API versioning strategies, backward compatibility policies, and version control for API evolution.&lt;br&gt;
Use API Gateway stages, deployment slots, and version aliases for seamless API deployment and testing.&lt;br&gt;
Manage API lifecycle stages, deprecations, retirements, and sunset policies for legacy APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD Integration and Deployment&lt;/strong&gt;&lt;br&gt;
Integrate API Gateway with CI/CD pipelines, version control systems, and deployment automation tools.&lt;br&gt;
Use AWS CodePipeline, AWS CodeBuild, and AWS SAM (Serverless Application Model) for automated API deployment.&lt;br&gt;
Implement continuous integration, continuous deployment, and infrastructure as code (IaC) for API lifecycle management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Optimization and Resource Management&lt;/strong&gt;&lt;br&gt;
Monitor API usage metrics, performance metrics, and cost metrics for cost optimization.&lt;br&gt;
Optimize resource utilization, capacity planning, and resource scaling based on usage patterns.&lt;br&gt;
Use AWS Budgets, AWS Cost Explorer, and resource tagging for cost allocation and budget management.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Final Thoughts and Future Trends
&lt;/h2&gt;

&lt;p&gt;In conclusion, AWS API Gateway is a comprehensive platform for building, deploying, and managing APIs with agility, scalability, security, and performance. By leveraging integration types, integration requests and responses, method requests, and method responses, you can design robust APIs, connect them to backend services, and deliver seamless experiences for API consumers. As organizations continue to adopt cloud-native architectures, serverless computing, and digital transformation initiatives, AWS API Gateway will play a crucial role in enabling API-driven innovation, real-time communication, and scalable solutions. Looking ahead, likely trends include enhanced integration capabilities, AI-driven API management, event-driven architectures, API monetization, and broader ecosystem collaboration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Happy Clouding!!!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Optimising Your Cloud Spend: Top Strategies for AWS Cost Management</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Thu, 06 Jun 2024 00:22:55 +0000</pubDate>
      <link>https://dev.to/kelvinskell/optimising-your-cloud-spend-top-strategies-for-aws-cost-management-4l13</link>
      <guid>https://dev.to/kelvinskell/optimising-your-cloud-spend-top-strategies-for-aws-cost-management-4l13</guid>
<description>&lt;p&gt;Visit my blog &lt;a href="https://practicalcloud.net"&gt;PracticalCloud&lt;/a&gt; for more in-depth cloud computing articles.&lt;/p&gt;

&lt;p&gt;In today's cloud-driven world, businesses are increasingly migrating to AWS to leverage its scalability, agility, and wide range of services. However, managing cloud costs effectively is essential to ensure you're getting the most out of your AWS investment. In this article, we will explore several key strategies for optimizing your AWS costs and maximizing your return on investment (ROI).&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding AWS Pricing
&lt;/h2&gt;

&lt;p&gt;Before diving into cost optimization strategies, it’s crucial to understand how AWS pricing works. AWS uses a pay-as-you-go model, which means you only pay for the services you use. However, the complexity of AWS pricing can make cost management challenging. Here are the main components to consider:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Compute Costs: Charges for virtual servers (EC2 instances), Lambda executions, etc.&lt;/li&gt;
&lt;li&gt;Storage Costs: Charges for data storage (S3, EBS, etc.) and data transfer.&lt;/li&gt;
&lt;li&gt;Data Transfer Costs: Charges for data moving in and out of AWS.&lt;/li&gt;
&lt;li&gt;Miscellaneous Costs: Charges for additional services like RDS, CloudFront, etc.&lt;/li&gt;
&lt;/ol&gt;
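&lt;p&gt;To make these components concrete, here is a toy monthly estimate that combines the four categories (every rate is an illustrative placeholder, not a current AWS price):&lt;/p&gt;

```python
# Toy monthly cost estimate across the four pricing components above.
# Every rate here is an illustrative placeholder, NOT a real AWS price.
HOURS_PER_MONTH = 730

compute = 2 * 0.0416 * HOURS_PER_MONTH   # 2 instances at a sample hourly rate
storage = 500 * 0.023                    # 500 GB at a sample per-GB storage rate
data_transfer = 100 * 0.09               # 100 GB egress at a sample per-GB rate
misc = 25.00                             # flat placeholder for RDS, CloudFront, etc.

total = compute + storage + data_transfer + misc
print(f"Estimated monthly bill: ${total:.2f}")
```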

&lt;h2&gt;
  
  
  Top Strategies for AWS Cost Management
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Right-Sizing Your Instances
Right-sizing involves matching instance types and sizes to your workload’s needs. Over-provisioning resources leads to unnecessary costs, while under-provisioning can impact performance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;How to Right-Size:&lt;/p&gt;

&lt;p&gt;Analyze Usage Patterns: Use AWS CloudWatch and Cost Explorer to analyze your instance usage and performance.&lt;br&gt;
Choose the Right Instance Types: AWS offers various instance types optimized for different workloads (compute-optimized, memory-optimized, etc.).&lt;br&gt;
Utilize Auto Scaling: Set up Auto Scaling to automatically adjust capacity to maintain steady, predictable performance at the lowest possible cost.&lt;/p&gt;
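&lt;p&gt;As a sketch of the analysis step, average CPU utilization can be computed from CloudWatch datapoints like this (the fetch itself is left as a comment because it needs AWS credentials; the function simply averages whatever datapoints you retrieve):&lt;/p&gt;

```python
# Sketch: averaging CloudWatch CPUUtilization datapoints to spot over-provisioning.
# Fetching real datapoints requires credentials, e.g. (not executed here):
#   cw = boto3.client("cloudwatch")
#   resp = cw.get_metric_statistics(
#       Namespace="AWS/EC2", MetricName="CPUUtilization",
#       Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
#       StartTime=start, EndTime=end, Period=3600, Statistics=["Average"])
#   datapoints = resp["Datapoints"]

def average_cpu(datapoints):
    """Return the mean of the 'Average' statistic across datapoints."""
    if not datapoints:
        return 0.0
    return sum(dp["Average"] for dp in datapoints) / len(datapoints)

# Sample datapoints in the shape CloudWatch returns.
sample = [{"Average": 12.0}, {"Average": 8.0}, {"Average": 10.0}]
print(average_cpu(sample))
```

&lt;p&gt;A consistently low average over a representative period is a signal that the instance may be a candidate for downsizing.&lt;/p&gt;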

&lt;ol start="2"&gt;
&lt;li&gt;Leverage AWS Savings Plans and Reserved Instances
AWS offers Savings Plans and Reserved Instances (RIs) for long-term commitments, which can significantly reduce costs.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Savings Plans vs. Reserved Instances:
&lt;/h2&gt;

&lt;p&gt;Savings Plans: Flexible pricing plans offering savings over On-Demand pricing; Compute Savings Plans apply across regions and instance families.&lt;br&gt;
Reserved Instances: Offer significant discounts (up to 75%) compared to On-Demand pricing in exchange for a commitment to use AWS for a 1- or 3-year term.&lt;/p&gt;

&lt;p&gt;How to Use Them:&lt;/p&gt;

&lt;p&gt;Analyze Historical Usage: Use Cost Explorer to identify stable workloads that can benefit from long-term commitments.&lt;br&gt;
Choose the Right Plan: Based on your usage pattern, select either a Compute Savings Plan or an EC2 Instance Savings Plan.&lt;br&gt;
Regular Review: Periodically review and adjust your reservations to match your evolving workload.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Utilize Spot Instances
Spot Instances offer unused EC2 capacity at up to a 90% discount compared to On-Demand prices. They are ideal for flexible, stateless, and fault-tolerant workloads.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Best Practices for Spot Instances:
&lt;/h2&gt;

&lt;p&gt;Spot Fleet: Use Spot Fleet to automate the allocation and management of Spot Instances.&lt;br&gt;
Instance Interruption Handling: Implement fault-tolerant design patterns to handle potential Spot Instance interruptions.&lt;br&gt;
Combine with On-Demand and RIs: Use a mix of Spot, On-Demand, and Reserved Instances for optimal cost savings and reliability.&lt;/p&gt;
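&lt;p&gt;For interruption handling, a Spot Instance can poll the instance metadata endpoint &lt;code&gt;http://169.254.169.254/latest/meta-data/spot/instance-action&lt;/code&gt;, which returns a small JSON notice once an interruption is scheduled. A minimal sketch that parses such a notice (a canned sample is used here, since the real endpoint is only reachable from EC2):&lt;/p&gt;

```python
import json

# Sketch: reacting to a Spot interruption notice. On a Spot Instance, the
# metadata path /latest/meta-data/spot/instance-action returns a JSON body
# like the sample below once an interruption is scheduled.

def parse_interruption_notice(body):
    """Return (action, time) from a spot/instance-action JSON body."""
    notice = json.loads(body)
    return notice["action"], notice["time"]

sample_notice = '{"action": "terminate", "time": "2030-01-01T12:00:00Z"}'
action, when = parse_interruption_notice(sample_notice)
if action == "terminate":
    # Checkpoint state, drain connections, deregister from the load balancer, etc.
    print(f"Interruption at {when}: draining workload")
```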

&lt;ol start="4"&gt;
&lt;li&gt;Monitor and Optimize Storage Costs
Storage costs can quickly escalate if not properly managed. AWS provides tools and best practices to optimize storage expenses.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Strategies for Storage Optimization:
&lt;/h2&gt;

&lt;p&gt;S3 Lifecycle Policies: Automate transitioning of objects to lower-cost storage classes (e.g., from S3 Standard to S3 Glacier) based on their lifecycle.&lt;br&gt;
Delete Unused Data: Regularly audit and delete unused or old data.&lt;br&gt;
Optimize EBS Volumes: Identify and delete unused EBS volumes, and choose the EBS volume types that best match your performance and cost needs.&lt;/p&gt;
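&lt;p&gt;The transition policy described above can be written as a lifecycle configuration document. A minimal sketch (the prefix and day thresholds are illustrative), which could then be applied with boto3's &lt;code&gt;put_bucket_lifecycle_configuration&lt;/code&gt;:&lt;/p&gt;

```python
# Sketch: an S3 lifecycle configuration that moves objects to cheaper storage
# classes over time and eventually expires them. Prefix and days are illustrative.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},   # hypothetical prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# Applied with boto3 (requires AWS credentials, shown for reference only):
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-example-bucket", LifecycleConfiguration=lifecycle_config)
```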

&lt;ol start="5"&gt;
&lt;li&gt;Implement Cost Allocation and Tagging
Cost allocation and tagging allow you to track and manage AWS costs by associating them with different departments, projects, or teams.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How to Implement Tagging:
&lt;/h2&gt;

&lt;p&gt;Define a Tagging Strategy: Establish a consistent tagging strategy across your organization.&lt;br&gt;
Use AWS Cost Allocation Tags: Tag resources with cost allocation tags to categorize and track costs.&lt;br&gt;
Leverage Cost Explorer and AWS Budgets: Use these tools to analyze tagged resources and monitor spending.&lt;/p&gt;
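&lt;p&gt;A tagging strategy is easiest to keep consistent when it is checked automatically. A minimal sketch that validates a resource's tags against a required set (the required keys are an example policy, not an AWS requirement):&lt;/p&gt;

```python
# Sketch: checking that a resource's tags satisfy an example tagging policy.
REQUIRED_TAG_KEYS = {"Project", "Environment", "Owner", "CostCenter"}  # example policy

def missing_tags(tags):
    """Return required tag keys absent from a list of {'Key':..., 'Value':...} tags."""
    present = {t["Key"] for t in tags}
    return sorted(REQUIRED_TAG_KEYS - present)

resource_tags = [
    {"Key": "Project", "Value": "analytics"},     # hypothetical values
    {"Key": "Environment", "Value": "prod"},
]
print(missing_tags(resource_tags))
```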

&lt;ol start="6"&gt;
&lt;li&gt;Use AWS Cost Management Tools
AWS provides several tools to help you manage and optimize your cloud spend:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS Cost Explorer: Visualize, understand, and manage your AWS costs and usage over time.&lt;br&gt;
AWS Budgets: Set custom cost and usage budgets and receive alerts when you exceed them.&lt;br&gt;
AWS Trusted Advisor: Provides real-time guidance to help you optimize your AWS infrastructure, improve security, and reduce costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario-Based Cost Optimization Examples
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Scenario 1: Seasonal Traffic Spikes
&lt;/h2&gt;

&lt;p&gt;Problem: An e-commerce website experiences high traffic during holiday seasons.&lt;br&gt;
Solution: Use Auto Scaling to automatically adjust the number of instances based on traffic. Leverage Spot Instances during peak hours to save costs and Reserved Instances for baseline capacity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario 2: Development and Testing Environments
&lt;/h2&gt;

&lt;p&gt;Problem: Development and testing environments are running 24/7, leading to high costs.&lt;br&gt;
Solution: Implement start/stop schedules for non-production instances using AWS Instance Scheduler. Use Spot Instances for testing workloads that can tolerate interruptions.&lt;/p&gt;
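&lt;p&gt;The impact of such schedules is easy to quantify: running non-production instances only during working hours cuts billed hours dramatically (the rate and counts below are illustrative):&lt;/p&gt;

```python
# Toy calculation: savings from running dev/test instances only on weekdays,
# 10 hours a day, instead of 24/7. The hourly rate is a placeholder.
rate = 0.05                       # placeholder $/hour per instance
instances = 4

always_on_hours = 24 * 30         # roughly 720 hours/month
scheduled_hours = 10 * 22         # 10 h/day for roughly 22 working days

always_on_cost = always_on_hours * rate * instances
scheduled_cost = scheduled_hours * rate * instances
print(f"24/7: ${always_on_cost:.2f}, scheduled: ${scheduled_cost:.2f}")
print(f"Savings: {(1 - scheduled_cost / always_on_cost):.0%}")
```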

&lt;h2&gt;
  
  
  Scenario 3: Data Analytics and Big Data Processing
&lt;/h2&gt;

&lt;p&gt;Problem: High costs associated with big data processing.&lt;br&gt;
Solution: Use S3 Lifecycle policies to move old data to cheaper storage classes. Leverage Spot Instances for data processing jobs. Use Amazon Athena for cost-effective querying of data stored in S3.&lt;/p&gt;

&lt;p&gt;Optimizing AWS costs is crucial for maximizing the value of your cloud investment. By understanding AWS pricing and employing strategies such as right-sizing, leveraging Savings Plans and Spot Instances, monitoring storage costs, implementing cost allocation and tagging, and using AWS cost management tools, you can significantly reduce your cloud spend. Regularly review and adjust your strategies to align with your evolving workloads and business needs. With these best practices, you'll be well on your way to mastering AWS cost management.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Advanced End-to-End DevOps Project: Deploying A Microservices APP To AWS EKS</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Sat, 10 Feb 2024 13:11:00 +0000</pubDate>
      <link>https://dev.to/kelvinskell/advanced-end-to-end-devops-project-deploying-a-microservices-app-to-aws-eks-using-terraform-helm-jenkins-and-argocd-part-i-3a53</link>
      <guid>https://dev.to/kelvinskell/advanced-end-to-end-devops-project-deploying-a-microservices-app-to-aws-eks-using-terraform-helm-jenkins-and-argocd-part-i-3a53</guid>
<description>&lt;p&gt;This post was originally published at &lt;a href="https://practicalcloud.net/advanced-end-to-end-devops-project-deploying-a-microservices-app-to-aws-eks-using-terraform-helm-jenkins-and-argocd-part-i/" rel="noopener noreferrer"&gt;Practical Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DevOps is a rapidly evolving space in the IT industry. As a DevOps engineer, it is paramount to keep up with the pace of developments to avoid being left behind.&lt;/p&gt;

&lt;p&gt;One popular paradigm that has matured in this field is GitOps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitOps&lt;/strong&gt; is a DevOps framework or practice whereby we make our Git repository a single source of truth whilst applying CI/CD and version control to infrastructure automation.&lt;br&gt;
&lt;a href="https://www.redhat.com/en/topics/devops/what-is-gitops" rel="noopener noreferrer"&gt;Red Hat&lt;/a&gt; defines it as "using Git repositories as a single source of truth to deliver infrastructure as code."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/what-is/devsecops/#:~:text=DevSecOps%20is%20the%20practice%20of,is%20both%20efficient%20and%20secure." rel="noopener noreferrer"&gt;DevSecOps&lt;/a&gt;&lt;/strong&gt;, on the other hand, extends DevOps by incorporating security tools and controls into the SDLC (software development life cycle). The main goal of the DevSecOps methodology is “shifting security left”, i.e. security should be part of the development lifecycle from the very beginning rather than an afterthought.&lt;/p&gt;

&lt;p&gt;In this project guide, we will apply GitOps practices while implementing an advanced end-to-end DevSecOps pipeline that incorporates many tools.&lt;/p&gt;

&lt;h1&gt;
  
  
  Project Overview
&lt;/h1&gt;

&lt;p&gt;This is a two-part project. In the first part we will be setting up our EC2 instances on which the CI pipeline will run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To learn how to build a standard continuous integration pipeline with Jenkins, click &lt;a href="https://dev.to/kelvinskell/a-practical-guide-to-building-a-standard-continuous-integration-pipeline-with-jenkins-2kp9"&gt;here&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the second part, we will set up the EKS Cluster, ArgoCD for Continuous Delivery and configure Prometheus and Grafana for application monitoring. &lt;/p&gt;

&lt;p&gt;In this project, we will be covering the following:&lt;br&gt;
&lt;strong&gt;- Infrastructure-as-Code:&lt;/strong&gt; We will use terraform to provision our EC2 instances as well as the EKS Cluster. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Jenkins Server Configuration:&lt;/strong&gt; Install and configure essential tools on the Jenkins server, including Jenkins itself, Docker, OWASP Dependency Check, Sonarqube, and Trivy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- EKS Cluster Deployment:&lt;/strong&gt; Utilize Terraform to create an Amazon EKS cluster, a managed Kubernetes service on AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Load Balancer Configuration:&lt;/strong&gt; Configure AWS Application Load Balancer (ALB) for the EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- ArgoCD Installation:&lt;/strong&gt; Install and set up ArgoCD for continuous delivery and GitOps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Sonarqube Integration:&lt;/strong&gt; Integrate Sonarqube for code quality analysis in the DevSecOps pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Monitoring Setup:&lt;/strong&gt; Implement monitoring for the EKS cluster using Helm, Prometheus, and Grafana.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- ArgoCD Application Deployment:&lt;/strong&gt; Use ArgoCD to deploy the microservices application, including database and ingress components.&lt;/p&gt;

&lt;h1&gt;
  
  
  PART I: Setting Up The CI Pipeline
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;- Step 1: Provision The EC2 Instance&lt;/strong&gt;&lt;br&gt;
Clone the &lt;a href="https://github.com/Kelvinskell/microservices-devops-1" rel="noopener noreferrer"&gt;Git repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd&lt;/code&gt; into the terraform directory.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;terraform init&lt;/code&gt;, then run &lt;code&gt;terraform plan&lt;/code&gt; to review the proposed infrastructure changes.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;terraform apply&lt;/code&gt; to effect these changes and provision the instance.&lt;/p&gt;

&lt;p&gt;The instance is bootstrapped with user data that will automatically install Jenkins, SonarQube, Trivy, and Docker once it is provisioned.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bpfkke1mp7k5xnvdhjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bpfkke1mp7k5xnvdhjw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Step 2: Modify The Application Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is a simple yet crucial step. In the &lt;strong&gt;Jenkinsfile&lt;/strong&gt; contained in the repository you just cloned, you must change all occurrences of "&lt;strong&gt;kelvinskell&lt;/strong&gt;" to your DockerHub username. This is necessary if you want to run this project yourself.&lt;/p&gt;
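&lt;p&gt;One way to make the substitution is with a short script. A minimal sketch (the image name in the sample line is hypothetical; substitute your own DockerHub username):&lt;/p&gt;

```python
def replace_username(text, user):
    """Swap the author's DockerHub username for your own in Jenkinsfile content."""
    return text.replace("kelvinskell", user)

# Hypothetical Jenkinsfile line, for illustration only.
sample = 'image = "kelvinskell/sample-service:latest"'
print(replace_username(sample, "your-dockerhub-username"))

# To apply it in place, run from the repository root:
#   from pathlib import Path
#   path = Path("Jenkinsfile")
#   path.write_text(replace_username(path.read_text(), "your-dockerhub-username"))
```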

&lt;p&gt;&lt;strong&gt;- Step 3: Configure The Jenkins Server&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the browser, login to the jenkins server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcryly0tf1nzxwbdlha9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcryly0tf1nzxwbdlha9o.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install suggested plugins and complete the registration.&lt;/li&gt;
&lt;li&gt;Go to Manage Jenkins &amp;gt; Plugins, and install the following plugins: Docker, Docker Commons, Docker Pipeline, SonarQube Scanner, Sonar Quality Gates, SSH2 Easy, OWASP Dependency-Check, OWASP Markup Formatter Plugin, GitHub API plugin and GitHub pipeline plugin.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr12a2xjz5p9svyii685e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr12a2xjz5p9svyii685e.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure tools: Go to Dashboard &amp;gt; Manage Jenkins &amp;gt; Tools
&lt;strong&gt;Git Installation&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvhas5tzgkmdbokqkz2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvhas5tzgkmdbokqkz2a.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sonar Scanner Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foil2k0fymyutyttuqljn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foil2k0fymyutyttuqljn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency Check&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xvuiz5hxndem72wce9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xvuiz5hxndem72wce9j.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dgt502xm5qxpmgpeof5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dgt502xm5qxpmgpeof5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Step 4: Configure SonarQube&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the browser, connect to your Jenkins server IP address on port 9000 and then log in to the SonarQube server.
The default username and password is "admin".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6f84jf2phosumnfnt4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6f84jf2phosumnfnt4v.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you log in, click on "Manually".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz4r3d83eitw5hbyb0x8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz4r3d83eitw5hbyb0x8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the directions in the image above, then click on "set up".&lt;br&gt;
&lt;strong&gt;NB:&lt;/strong&gt; Your project key must be exactly &lt;strong&gt;newsread-microservices-application&lt;/strong&gt;. That way, you won't have to edit the Jenkinsfile.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;"With Jenkins"&lt;/strong&gt; and choose GitHub as the DevOps platform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bnnbea6j43tvj0h3ljw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bnnbea6j43tvj0h3ljw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on &lt;strong&gt;Configure analysis&lt;/strong&gt;; on step 3, copy the "sonar.projectKey", as you will need it later.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdcus7h1uzotllmnnuq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdcus7h1uzotllmnnuq6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on "Account" &amp;gt; "My Account" &amp;gt; "Generate token".
Give it a name and click on "generate".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tz5qe7ooadml77g0cu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tz5qe7ooadml77g0cu4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to "Manage Jenkins" &amp;gt; "Credentials"&lt;/li&gt;
&lt;li&gt;Select "Secret text" and paste in the token you just copied.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz59t3tbuhckci2fxgt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz59t3tbuhckci2fxgt0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now go to Jenkins Dashboard &amp;gt; Manage Jenkins &amp;gt; System &amp;gt; SonarQube servers &amp;gt; Add SonarQube&lt;/li&gt;
&lt;li&gt;Give it the name "SonarQube Server", then enter the server URL and the credential ID for the secret token.
Notice here that our server URL is localhost, since SonarQube is hosted on the same server as Jenkins.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2a770dmbbjbbbg66pbft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2a770dmbbjbbbg66pbft.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;click on "Save".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;- Step 5: Integrate your DockerHub Credentials&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This stage is essential for Jenkins to have access to your DockerHub account.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to "Manage Jenkins" &amp;gt; "Credentials" &amp;gt; "Add Credentials"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0oi45zq6c0wmtl08u53v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0oi45zq6c0wmtl08u53v.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Step 6: Configure The Jenkins Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From Jenkins' dashboard, click New Item and create a Pipeline Job. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxwu61m452794k2crxqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxwu61m452794k2crxqp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under Build Triggers, choose Trigger builds remotely. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqu6svazhn7zjyotm9yk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqu6svazhn7zjyotm9yk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set a secret token under the "Authentication Token" box. We will use it when creating a GitHub Webhook.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under Pipeline, make sure the parameters are set as follows:

&lt;ul&gt;
&lt;li&gt;Definition: Pipeline script from SCM&lt;/li&gt;
&lt;li&gt;SCM: Configure your SCM. Make sure to only build your main branch. For example, if your main branch is called "main", put "*/main" under Branches to build.&lt;/li&gt;
&lt;li&gt;Script Path: Jenkinsfile&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqy8e5uicfpwm523701p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqy8e5uicfpwm523701p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt; You have to fork my &lt;a href="https://github.com/Kelvinskell/microservices-devops-1" rel="noopener noreferrer"&gt;repository&lt;/a&gt; into your own GitHub account. This is necessary for you to have access to the repository and be able to configure it.&lt;br&gt;
Once this has been done, create a GitHub Personal Access Token. &lt;/p&gt;

&lt;p&gt;We will use the GitHub PAT to authenticate to our repository from Jenkins.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to the EC2 instance, switch to the jenkins user, and create an SSH key pair. The public key will be uploaded to GitHub, while the private key will be added to our Jenkins configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dy0fk403lqv50eeh1nd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dy0fk403lqv50eeh1nd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Back on the Jenkins server, click on "Add credential"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9t7mfdbdun64k7n80m6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9t7mfdbdun64k7n80m6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The error message has now disappeared.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cwoepv5sgkzaq16u5f4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cwoepv5sgkzaq16u5f4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on "Save".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;- Step 7: Create A GitHub WebHook&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is necessary for triggering our Jenkins builds remotely.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the GitHub webhook creation page for your repository and enter the following information:
URL: Enter the following URL, replacing each value between *** with your own:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

***JENKINS_SERVER_URL***/job/***JENKINS_JOB_NAME***/build?token=***JENKINS_BUILD_TRIGGER_TOKEN***


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
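&lt;p&gt;As a sketch, the same trigger URL can be assembled in a shell script and fired manually with curl. The server URL, job name and token below are placeholders, not values from this project:&lt;/p&gt;

```shell
# Placeholder values -- substitute your own Jenkins server, job name and token
JENKINS_SERVER_URL="http://jenkins.example.com:8080"
JENKINS_JOB_NAME="gitops-ci"
JENKINS_BUILD_TRIGGER_TOKEN="my-secret-token"

# Assemble the same URL the webhook will call
TRIGGER_URL="${JENKINS_SERVER_URL}/job/${JENKINS_JOB_NAME}/build?token=${JENKINS_BUILD_TRIGGER_TOKEN}"
echo "$TRIGGER_URL"

# Fire the trigger manually (commented out: needs a reachable Jenkins server)
# curl -X POST "$TRIGGER_URL"
```

&lt;p&gt;This is the same endpoint GitHub will call, so it is a quick way to verify the token before wiring up the webhook.&lt;/p&gt;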

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qb14f6bp78i03f2jbs9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qb14f6bp78i03f2jbs9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Step 8: Execute The Pipeline&lt;/strong&gt;&lt;br&gt;
Now that we are done configuring the pipeline, it is time to test our work.&lt;br&gt;
You can trigger the pipeline by making a change and pushing it to your GitHub repository. If the webhook trigger is properly configured, this will automatically start the pipeline.&lt;/p&gt;

&lt;p&gt;Alternatively, you can simply click on "Build now" to execute the pipeline.&lt;/p&gt;

&lt;p&gt;If everything is configured as it should be, you will get this output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzesc1d3m2wtebg5z9al.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzesc1d3m2wtebg5z9al.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have now come to the end of the first part of this project, in which we configured and set up the continuous integration pipeline. The second part will involve implementing GitOps using ArgoCD.&lt;/p&gt;

&lt;p&gt;We will provision an EKS cluster using Terraform, then use ArgoCD for continuous deployment to the EKS cluster.&lt;/p&gt;

&lt;p&gt;The idea here is that separate teams can manage the two parts of the process - continuous integration and continuous deployment - further decoupling and streamlining the whole process while using Git as our single source of truth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PS:&lt;/strong&gt; I am open to remote DevOps, Cloud and technical writing offers.&lt;br&gt;
Connect with me on &lt;a href="https://linkedin.com/in/kelvin-onuchukwu-3460871a1" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>iac</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Advanced Terraform: Getting Started With Terragrunt</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Fri, 19 Jan 2024 23:59:16 +0000</pubDate>
      <link>https://dev.to/kelvinskell/advanced-terraform-getting-started-with-terragrunt-b9b</link>
      <guid>https://dev.to/kelvinskell/advanced-terraform-getting-started-with-terragrunt-b9b</guid>
      <description>&lt;p&gt;This post was originally published at &lt;a href="https://practicalcloud.net" rel="noopener noreferrer"&gt;Practical Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View the longer, original version &lt;a href="https://practicalcloud.net/advanced-terraform-getting-started-with-terragrunt/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Terragrunt?
&lt;/h2&gt;

&lt;p&gt;Terragrunt is an open-source tool developed by Gruntwork that helps reduce code duplication across your Terraform projects - effectively keeping your Terraform code DRY.&lt;/p&gt;

&lt;p&gt;DRY is a popular acronym which means &lt;strong&gt;Don't Repeat Yourself&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Terragrunt can be used to manage Terraform code across multiple environments and multiple AWS accounts, and to handle dependency management, custom actions and versioning.&lt;/p&gt;

&lt;p&gt;Terragrunt simplifies the management of multiple environments by providing a clear separation between them.&lt;br&gt;
Terragrunt hooks can be used to perform actions before or after Terraform commands.&lt;/p&gt;

&lt;p&gt;Terragrunt can be integrated with your CI/CD pipelines for automated infrastructure deployment.&lt;/p&gt;

&lt;p&gt;Terragrunt can integrate with external secret management tools like HashiCorp Vault or AWS Secrets Manager for managing sensitive data.&lt;/p&gt;

&lt;p&gt;In fact, you can use Terragrunt commands in place of the basic Terraform commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;terragrunt init&lt;/li&gt;
&lt;li&gt;terragrunt plan&lt;/li&gt;
&lt;li&gt;terragrunt apply&lt;/li&gt;
&lt;li&gt;terragrunt output&lt;/li&gt;
&lt;li&gt;terragrunt destroy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terragrunt lets you specify the IAM role to use. You can do this by using the &lt;code&gt;--terragrunt-iam-role&lt;/code&gt; CLI argument or the &lt;strong&gt;TERRAGRUNT_IAM_ROLE&lt;/strong&gt; environment variable. Terragrunt will call the STS AssumeRole API and then expose the credentials it receives as environment variables when invoking Terraform. This makes it possible to seamlessly deploy infrastructure across different environments without having to store AWS credentials in plaintext.&lt;/p&gt;
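&lt;p&gt;For illustration, here is how the environment-variable form might look in practice. The role ARN is made up, and the actual Terragrunt invocation is commented out since it needs real AWS credentials:&lt;/p&gt;

```shell
# Hypothetical role ARN -- Terragrunt will call AssumeRole with it
# before invoking terraform
export TERRAGRUNT_IAM_ROLE="arn:aws:iam::123456789012:role/terragrunt-deploy"
echo "$TERRAGRUNT_IAM_ROLE"

# Equivalent CLI form (commented out: needs real AWS credentials)
# terragrunt plan --terragrunt-iam-role "$TERRAGRUNT_IAM_ROLE"
```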

&lt;h2&gt;
  
  
  Installing Terragrunt
&lt;/h2&gt;

&lt;p&gt;To install Terragrunt, make sure you have already installed Terraform beforehand. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow this &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;link&lt;/a&gt; to install terraform.&lt;/li&gt;
&lt;li&gt;Download the Terragrunt binary from the &lt;a href="https://github.com/gruntwork-io/terragrunt/releases" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select the correct binary for your OS.&lt;/li&gt;
&lt;li&gt;Copy the link and download on your terminal using the wget command. Example: &lt;code&gt;wget https://github.com/gruntwork-io/terragrunt/releases/download/v0.54.19/terragrunt_linux_amd64&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;Rename the binary to terragrunt: &lt;code&gt;mv terragrunt_linux_amd64 terragrunt&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Make the file executable: &lt;code&gt;chmod u+x terragrunt&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Move the file to the &lt;code&gt;/usr/local/bin&lt;/code&gt; directory: &lt;code&gt;sudo mv terragrunt /usr/local/bin/terragrunt&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;To verify your installation, run: &lt;code&gt;terragrunt --version&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A Case Study
&lt;/h2&gt;

&lt;p&gt;Let us now present a scenario for managing infrastructure with Terragrunt.&lt;/p&gt;

&lt;p&gt;The following is sample Terraform code for creating an EC2 instance:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.16"
    }
  }

  required_version = "&amp;gt;= 1.2.0"
}

provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The problem is, how do you create this instance across several environments? &lt;/p&gt;

&lt;p&gt;Ordinarily, you would duplicate the same code for each environment and then modify the parameters. &lt;br&gt;
You might wish to specify different instance types for each environment. For example, you might want to use a t2.micro instance for the development environment, a t3.medium for the test environment and a t2.large for production. &lt;/p&gt;

&lt;p&gt;The problem is that this would be a very manual process - inefficient and leaving plenty of room for error. Using Terragrunt, we can improve this code without having to copy and paste it across several environments.&lt;/p&gt;

&lt;p&gt;Also, these environments might be spread out across different AWS accounts, and you'd have to worry about managing the different AWS credentials and IAM roles required.&lt;br&gt;
The Terraform backend does not support variables or any type of expression whatsoever, so you would have to manually copy and edit the backend configuration code for each of the environments.&lt;/p&gt;
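&lt;p&gt;This backend limitation is exactly the gap Terragrunt's &lt;code&gt;remote_state&lt;/code&gt; block fills: one shared configuration can generate a per-environment backend file. The sketch below writes such a block to disk; the bucket name is hypothetical:&lt;/p&gt;

```shell
# A shared remote_state block Terragrunt can inject for every environment.
# The bucket name is hypothetical; replace it with your own state bucket.
printf '%s\n' \
  'remote_state {' \
  '  backend = "s3"' \
  '  generate = {' \
  '    path      = "backend.tf"' \
  '    if_exists = "overwrite_terragrunt"' \
  '  }' \
  '  config = {' \
  '    bucket = "my-tf-state-bucket"' \
  '    key    = "${path_relative_to_include()}/terraform.tfstate"' \
  '    region = "us-east-1"' \
  '  }' \
  '}' > terragrunt.hcl

grep -c remote_state terragrunt.hcl
```

&lt;p&gt;The &lt;code&gt;path_relative_to_include()&lt;/code&gt; function gives each environment its own state key, which is what plain Terraform backends cannot express.&lt;/p&gt;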

&lt;h2&gt;
  
  
  Working With Terragrunt
&lt;/h2&gt;

&lt;p&gt;First, we need to create separate directories for each of the environments. In each of the directories, a &lt;code&gt;terragrunt.hcl&lt;/code&gt; file will be created. This is the configuration file for each environment.&lt;/p&gt;

&lt;p&gt;Your directory structure should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq31k0jy4rcyecdho21q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq31k0jy4rcyecdho21q7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
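&lt;p&gt;A layout like the one above can be scaffolded from the shell; the directory names here are illustrative:&lt;/p&gt;

```shell
# Scaffold one directory per environment, each with its own terragrunt.hcl
# (the directory names are illustrative)
mkdir -p environments/dev environments/test environments/prod
for env in dev test prod; do
  touch "environments/${env}/terragrunt.hcl"
done
find environments -name terragrunt.hcl | sort
```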

&lt;p&gt;First, we have to define a terraform block and declare the source of our &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/ec2-instance/aws/latest" rel="noopener noreferrer"&gt;EC2 module&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
    source = "tfr:///terraform-aws-modules/ec2-instance/aws?version=5.6.0"
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Now let's define a provider.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;generate "provider" {
    path = "provider.tf"
    if_exists = "overwrite_terragrunt"
    contents = &amp;lt;&amp;lt;EOF
provider "aws" {
    profile = "default"
    region = "us-east-1"
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Now we need to declare an &lt;code&gt;inputs&lt;/code&gt; block. This is where most of the magic of Terragrunt happens. &lt;/p&gt;

&lt;p&gt;We can simply define one Terraform configuration for all our environments and then use the &lt;code&gt;inputs&lt;/code&gt; block to change values as required across all the environments.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;inputs = {
    ami = "ami-0005e0cfe09cc9050"
    instance_type = "t2.micro"
    tags = {
        Name = "grunt-ec2"
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;What is happening here is that the instance type, AMI and tags are values that are intended to differ between environments. &lt;/p&gt;

&lt;p&gt;My &lt;code&gt;terragrunt.hcl&lt;/code&gt; file now looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyzn9kt5de5dpw6yepwi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyzn9kt5de5dpw6yepwi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have replicated this in my &lt;code&gt;test&lt;/code&gt; directory with a few changes: the instance type is t2.medium, and the tags have been modified to reflect the environment.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;terragrunt.hcl&lt;/code&gt; file in my test environment looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg34qmg8fke5deu881w7i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg34qmg8fke5deu881w7i.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next stage is to execute terragrunt commands to read our configuration and invoke terraform with it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switch to the &lt;code&gt;dev&lt;/code&gt; directory.&lt;/li&gt;
&lt;li&gt;First, run &lt;code&gt;terragrunt init&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Now run &lt;code&gt;terragrunt plan&lt;/code&gt; to see proposed infrastructure changes.
This is how mine looks:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n9bthpj7sd7nac0bgtb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n9bthpj7sd7nac0bgtb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;terragrunt apply&lt;/code&gt; to actually create the instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repeat the steps above for the test environment.&lt;/p&gt;
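&lt;p&gt;Those per-environment steps can be sketched as a loop; the real Terragrunt commands are commented out because they would provision actual infrastructure:&lt;/p&gt;

```shell
# Dry-run sketch of applying each environment in turn; the real Terragrunt
# commands are commented out because they would create actual resources
DEPLOYED=""
for env in dev test; do
  echo "deploying ${env}"
  DEPLOYED="${DEPLOYED}${env} "
  # cd "${env}"; terragrunt init; terragrunt apply -auto-approve; cd ..
done
```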

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;If you followed this tutorial until now, I'm sure you have begun to see the importance of Terragrunt and how it helps manage Terraform code across different environments while keeping your Terraform code DRY.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>What Is AWS AppConfig?</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Tue, 18 Jul 2023 11:23:24 +0000</pubDate>
      <link>https://dev.to/kelvinskell/what-is-aws-appconfig-3194</link>
      <guid>https://dev.to/kelvinskell/what-is-aws-appconfig-3194</guid>
      <description>&lt;p&gt;This post was originally published at &lt;a href="https://practicalcloud.net"&gt;Practical Cloud&lt;/a&gt;&lt;br&gt;
This is an abridged version.&lt;/p&gt;

&lt;p&gt;View the longer, original version &lt;a href="https://practicalcloud.net/aws-appconfig-efficient-configuration-management-and-deployment/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;AppConfig is a service offering from AWS that helps you to efficiently create, manage and deploy your application configurations without altering your application code.&lt;/p&gt;

&lt;p&gt;In a nutshell, AppConfig helps you to dynamically change values without redeploying your application, thereby eliminating downtime and service disruptions.&lt;br&gt;
AppConfig seamlessly integrates with applications running on EC2 instances, Lambda, containers, mobile apps and IoT devices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases For AppConfig
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Flags:&lt;/strong&gt; &lt;a href="https://www.abtasty.com/feature-flags/"&gt;Feature flags&lt;/a&gt; are a concept in software development that allows you to enable or disable features in your application without modifying source code or redeploying your application. A feature of an application can be delivered to production in an inactive form by hiding it behind a feature flag. With AWS AppConfig, we can turn these features on at any specified time, making them available to either a selected group of users or to all users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Application tuning:&lt;/strong&gt; With AWS AppConfig, you can change the behavior of your application on the fly. You can for instance, save timeout settings in your configuration data and alter them when needed, without needing to change your source code or restarting the application. Similarly, a DevOps engineer can increase or reduce the verbosity of the application logs without needing to redeploy the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Whitelisting or Blacklisting:&lt;/strong&gt; AppConfig enables you to expose certain features of your application to a select group of users, using a dynamic allow-list. You can conversely block a group of users from viewing your application by creating a deny-list. This is useful if you want some features of your application to only be available to paying customers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
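&lt;p&gt;As a rough sketch of the feature-flag idea, the application only has to read the flag document and branch on it; the payload and flag name below are made up:&lt;/p&gt;

```shell
# Made-up flag payload in the shape a Feature Flags profile returns
FLAGS='{"new-checkout":{"enabled":true}}'

# Branch on the flag without redeploying any application code
if echo "$FLAGS" | grep -q '"new-checkout":{"enabled":true}'; then
  echo "new-checkout is ON"
else
  echo "new-checkout is OFF"
fi
```

&lt;p&gt;Flipping the flag in AppConfig changes the branch taken on the next configuration poll, with no redeployment.&lt;/p&gt;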

&lt;h2&gt;
  
  
  Why Use AppConfig?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AppConfig enables controlled deployment of configurations to applications, allowing you to swiftly and safely manage configurations. You can simply turn features on or off without having to redeploy or restart your application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AppConfig can protect you against costly configuration mistakes that can cause application downtime. With built-in validation checks, you can ensure your configurations are always syntactically valid and devoid of errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AppConfig gives you the ability to safely deploy changes using progressive rollouts and rollback alarms. By integrating with CloudWatch alarms, AppConfig will monitor the application to verify that the deployment is successful. If errors are encountered and alarms go off, it will automatically roll back any configuration changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AppConfig enables auditability by providing a &lt;a href="https://en.wikipedia.org/wiki/Changelog#:~:text=A%20changelog%20is%20a%20log,fixes%2C%20new%20features%2C%20etc."&gt;change log&lt;/a&gt; where you can view your update history.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How To Use AppConfig
&lt;/h2&gt;

&lt;p&gt;To effectively use AppConfig, there are three steps you need to go through.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setup AppConfig:&lt;/strong&gt; This step involves properly configuring the service to suit your needs. This can either be done via the AWS console, CLI or CDKs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integrate Application:&lt;/strong&gt; This is where you integrate your application with AppConfig. This can be done either through the APIs or the AWS Lambda AppConfig extension.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;: This is where you deploy your configuration changes. It involves updating your values, choosing a deployment strategy and applying your deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
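&lt;p&gt;The retrieval side of step 2 can be sketched with the AWS CLI's AppConfig Data commands. The identifiers below are hypothetical, and the calls themselves are commented out since they require live AWS access:&lt;/p&gt;

```shell
# Hypothetical identifiers for an AppConfig deployment
APP_ID="my-app"
ENV_ID="prod"
PROFILE_ID="web-flags"
echo "${APP_ID}/${ENV_ID}/${PROFILE_ID}"

# Start a session, then fetch the latest configuration (commented out:
# both calls require live AWS access)
# TOKEN=$(aws appconfigdata start-configuration-session \
#   --application-identifier "$APP_ID" \
#   --environment-identifier "$ENV_ID" \
#   --configuration-profile-identifier "$PROFILE_ID" \
#   --query InitialConfigurationToken --output text)
# aws appconfigdata get-latest-configuration \
#   --configuration-token "$TOKEN" config.json
```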

&lt;h2&gt;
  
  
  AppConfig Terminology
&lt;/h2&gt;

&lt;p&gt;In order to be comfortable working with AppConfig, there are some basic terms that you must understand.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Environment:&lt;/strong&gt; These are logical deployment groups that you can use to deploy your configuration. It is basically a collection of AWS resources - EC2 instances, ECS tasks, Lambda functions, etc. For example, you can have a dev environment and a prod environment. You can change the configurations for dev and prod separately.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Application:&lt;/strong&gt; This is a logical unit of deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration Profile:&lt;/strong&gt; This is where your configuration resides. A configuration profile is a collection of configurations for an application, associated with a specific environment. Configuration profiles can either be Freeform or Feature flags. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Strategies:&lt;/strong&gt; These are a variety of methods or sets of rules specifying how to roll out configuration updates to an environment. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, AWS AppConfig is a great service for building feature flags, performing operations tuning on your application and changing your application behavior without ever needing to alter source code, redeploy or restart the application.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>development</category>
      <category>cloud</category>
    </item>
    <item>
      <title>DevOps On AWS: CodePipeline In Action</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Fri, 02 Jun 2023 17:26:43 +0000</pubDate>
      <link>https://dev.to/kelvinskell/devops-on-aws-codepipeline-in-action-60k</link>
      <guid>https://dev.to/kelvinskell/devops-on-aws-codepipeline-in-action-60k</guid>
      <description>&lt;p&gt;This post was originally published at &lt;a href="https://practicalcloud.net"&gt;Practical Cloud&lt;/a&gt;.&lt;br&gt;
View the original post &lt;a href="https://practicalcloud.net/implementing-devops-on-aws-codebuild-codedeploy-and-codepipeline-in-action/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is the second part of a three series project on designing and deploying a three-tier architecture on AWS. &lt;a href="https://dev.to/kelvinskell/a-practical-guide-to-deploying-a-complex-production-level-three-tier-architecture-on-aws-2hf0"&gt;Here&lt;/a&gt; is the link to the first part of this project.&lt;/p&gt;

&lt;p&gt;DevOps on AWS is the implementation of the DevOps philosophy on the AWS Cloud platform. This includes implementing an effective CI/CD solution that provides components such as Load Balancing, Content Delivery, Storage, Security, Database, Caching and Scaling.&lt;br&gt;
AWS provides a set of tools for implementing DevOps which includes, but may not be limited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://aws.amazon.com/codecommit/"&gt;AWS CodeCommit&lt;/a&gt;&lt;/strong&gt;: This is a cloud version control service that is quite secure and highly scalable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://aws.amazon.com/codebuild/"&gt;AWS CodeBuild&lt;/a&gt;&lt;/strong&gt;: This is a fully managed build and continuous integration service in the cloud. It provides similar capability to Jenkins. It is also highly scalable, offering continuous scaling and high availability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html"&gt;AWS CodeDeploy&lt;/a&gt;&lt;/strong&gt;: This is a deployment service that automatically deploys applications to EC2 instances, On-premises servers, Lambda functions and ECS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html"&gt;AWS CodePipeline&lt;/a&gt;&lt;/strong&gt;: This is a managed continuous delivery service that can be used to automate a release pipeline. 
AWS also offers other supplementary services such as &lt;strong&gt;&lt;a href="https://aws.amazon.com/codestar/"&gt;CodeStar&lt;/a&gt;&lt;/strong&gt;, &lt;strong&gt;Device Farm&lt;/strong&gt;, etc which can help improve the quality of your DevOps experience on AWS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this project, we'll be making use of &lt;strong&gt;CodePipeline&lt;/strong&gt; as the continuous delivery service for automating the deployment of our three-tier application to AWS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We'll be using GitHub, instead of CodeCommit for source control.&lt;/li&gt;
&lt;li&gt;We'll use CodeBuild  for building and testing our application code. CodeBuild will also be used to execute our terraform scripts in order to provision environments for the application. At this stage we'll also add a manual step, which involves sending SNS notifications to the user after a "terraform plan". If we are happy with the plan, then we go ahead to approve, which takes it to the next step.&lt;/li&gt;
&lt;li&gt;CodeDeploy will then be used to deploy the Flask application to our EC2 instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lets Go!&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Fork The Application repository
&lt;/h2&gt;

&lt;p&gt;This is a very important first step. CodePipeline will listen for changes to our application. To follow through with the rest of the project, you'll need to have the repository in your GitHub account. &lt;strong&gt;&lt;a href="https://github.com/Kelvinskell/terra-tier/tree/master"&gt;Here&lt;/a&gt;&lt;/strong&gt; is the link to the project on GitHub. Fork the repository.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Provision The Environment
&lt;/h2&gt;

&lt;p&gt;Clone the repository to your local computer. &lt;br&gt;
Navigate into the &lt;em&gt;terraform&lt;/em&gt; directory. Edit the &lt;em&gt;backend.tf&lt;/em&gt; file and add your own backend.&lt;br&gt;
Run &lt;code&gt;terraform plan&lt;/code&gt;. If you like the plan, then run &lt;code&gt;terraform apply&lt;/code&gt;. After the apply is successfully executed, you should see your bastion host's IP address and your application's DNS address on your screen.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Create A Build Project
&lt;/h2&gt;

&lt;p&gt;This is where we integrate our source control.&lt;br&gt;
CodeBuild enables us to perform testing and other necessary builds and integrations. It is our build server of choice for this project.&lt;br&gt;
On the CodeBuild Console, create a project. Choose a suitable name and description. Enable build badge.&lt;br&gt;
Build badges are images provided through a public URL which displays the status of our latest build.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foiihbcnu7oelowwzkotg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foiihbcnu7oelowwzkotg.png" alt="AWS CodeBuild" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your source provider should be GitHub. Connect to your GitHub account. Source version is master (The master branch).&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7mv67kre4wqh6wnmdw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7mv67kre4wqh6wnmdw9.png" alt="Codebuild connect to GitHub" width="800" height="644"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose Ubuntu as your operating system. Choose the standard runtime and select "&lt;strong&gt;create a new service role&lt;/strong&gt;".&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfiese8iza6ogcnc4kdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfiese8iza6ogcnc4kdq.png" alt="CodeBuild" width="800" height="870"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because I want CodeBuild to have private access to my servers, I will click on "&lt;strong&gt;Additional configuration&lt;/strong&gt;".&lt;br&gt;
Under VPC, I'll select the project-x-vpc that I provisioned using Terraform.&lt;br&gt;
Leave every other setting as is. Click on &lt;strong&gt;create build project&lt;/strong&gt;.&lt;br&gt;
&lt;strong&gt;N.B:&lt;/strong&gt; The secret sauce for CodeBuild is the &lt;strong&gt;&lt;code&gt;buildspec&lt;/code&gt;&lt;/strong&gt; file at the root of the application's repository. This is where I define what happens during the build phase. CodeBuild cannot run without a buildspec file. By default, this file must be named &lt;code&gt;buildspec.yml&lt;/code&gt; though you can change this behaviour.&lt;/p&gt;
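&lt;p&gt;For illustration only, a minimal &lt;code&gt;buildspec.yml&lt;/code&gt; might look like this; the phases and commands are placeholders, not this project's actual file:&lt;/p&gt;

```yaml
# Illustrative buildspec.yml -- phases and commands are placeholders
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.11
  build:
    commands:
      - pip install -r requirements.txt
      - pytest
artifacts:
  files:
    - '**/*'
```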
&lt;h2&gt;
  
  
  Step 4: Create An IAM Role For CodeDeploy
&lt;/h2&gt;

&lt;p&gt;CodeDeploy will require IAM permissions in order to access the application servers.&lt;br&gt;
Go to the IAM Console, click on "&lt;strong&gt;Roles&lt;/strong&gt;", click on "&lt;strong&gt;Create role&lt;/strong&gt;".&lt;br&gt;
We will create a service role which CodeDeploy will assume to gain access to our EC2 servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4zmzafe8fgvayeiv715.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4zmzafe8fgvayeiv715.png" alt="IAM role for CodeDeploy" width="800" height="394"&gt;&lt;/a&gt;&lt;br&gt;
Click on &lt;strong&gt;Next&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimfjf8zpiatknl5u775q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimfjf8zpiatknl5u775q.png" alt="Iam for codedeploy" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since our AutoScaling group will be originally launched from a launch template, we will need to add extra permissions.&lt;br&gt;
Click on &lt;strong&gt;Add permissions&lt;/strong&gt;, &lt;strong&gt;Attach policy&lt;/strong&gt;, &lt;strong&gt;Create policy&lt;/strong&gt;&lt;br&gt;
Our JSON policy will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags",
                "ec2:RunInstances",
                "iam:PassRole"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Click on &lt;strong&gt;Next&lt;/strong&gt;. Review and create your policy. It should look like this:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9mfs417xwnmt87j5gql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9mfs417xwnmt87j5gql.png" alt="Iam Policy" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go back and attach this permission, it should look like this:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfwp556cnzwx60sf7h8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfwp556cnzwx60sf7h8b.png" alt="CodeDeploy Role" width="800" height="371"&gt;&lt;/a&gt;&lt;br&gt;
Click &lt;strong&gt;Next&lt;/strong&gt;, then review and create your role.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Configure CodeDeploy
&lt;/h2&gt;

&lt;p&gt;AWS CodeDeploy is a deployment service that automates the continuous deployment of application updates to your servers.&lt;br&gt;
Click on the CodeDeploy drop-down on the left-hand side of your screen. Click on "&lt;strong&gt;Applications&lt;/strong&gt;", then on "Create application".&lt;br&gt;
An application in CodeDeploy is the mechanism by which it manages revisions, deployment configurations, and deployment groups.&lt;br&gt;
Choose a name for your application and select a platform. Click on "&lt;strong&gt;Create application&lt;/strong&gt;".&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsx80e7ylz5wi6vuyhlw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsx80e7ylz5wi6vuyhlw.png" alt="CodeDeploy" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on "&lt;strong&gt;Create deployment group&lt;/strong&gt;".&lt;br&gt;
Choose a name for your group. Select the CodeDeploy service role we created in step 4.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favkay5laz9ywtsvqzz2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favkay5laz9ywtsvqzz2n.png" alt="Create A CodeDeployment Group" width="800" height="696"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because I want minimal downtime, I'll use a blue/green deployment type. Visit &lt;a href="https://www.redhat.com/en/topics/devops/what-is-blue-green-deployment"&gt;here&lt;/a&gt; for more information about blue/green deployments.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3d9izk65ryzm9xrynxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3d9izk65ryzm9xrynxg.png" alt="code deploy" width="800" height="653"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the &lt;strong&gt;&lt;em&gt;project-x-asg&lt;/em&gt;&lt;/strong&gt; auto scaling group and choose to terminate the original instances after 5 minutes. Under Application Load Balancer, select your provisioned &lt;strong&gt;&lt;em&gt;project-x-lb-tg&lt;/em&gt;&lt;/strong&gt; target group.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0zcdka8hu4xnkn33b6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0zcdka8hu4xnkn33b6j.png" alt="code deploy" width="800" height="870"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on "&lt;strong&gt;advanced options&lt;/strong&gt;". Enable deployment rollbacks on failures. Create your deployment group.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nfvrh3ner9nwi0th3g2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nfvrh3ner9nwi0th3g2.png" alt="Code deploy" width="800" height="822"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We don't need any triggers for this deployment group. You typically use triggers when you are only automating deployments; since this is a full CI/CD pipeline, we'll leave the orchestration to CodePipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;N.B:&lt;/strong&gt; The secret sauce for CodeDeploy is the &lt;strong&gt;&lt;code&gt;appspec&lt;/code&gt;&lt;/strong&gt; file at the root of the application's repository. This is where I define what happens during the deploy phase. CodeDeploy cannot run without an appspec file. By default, this file must be named &lt;code&gt;appspec.yml&lt;/code&gt;, though you can change this behaviour.&lt;br&gt;
Also note that the only reason this works is that I already included instructions for installing the CodeDeploy agent as part of the user data on the logic-tier servers, so it is automatically provisioned by Terraform. If you are using a different environment, remember to install the CodeDeploy agent on your servers. You can find instructions on how to do that &lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install-ubuntu.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
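&lt;p&gt;For reference, here is a minimal appspec.yml sketch for an EC2/on-premises deployment. The destination path and hook script names are assumptions for illustration, not the project's actual files; adapt them to your application.&lt;/p&gt;

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /terra-tier          # assumed app directory on the instance
hooks:
  BeforeInstall:
    - location: scripts/stop_app.sh   # hypothetical hook script
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_app.sh  # hypothetical hook script
      timeout: 300
      runas: root
```

&lt;p&gt;The &lt;code&gt;files&lt;/code&gt; section tells CodeDeploy where to copy the revision, and each lifecycle hook points to a script in the repository that runs at that phase of the deployment.&lt;/p&gt;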

&lt;h2&gt;
  
  
  Step 6: Create A Pipeline
&lt;/h2&gt;

&lt;p&gt;AWS CodePipeline acts as our continuous delivery service. It is the glue that holds our entire pipeline together and orchestrates it. CodePipeline seamlessly handles the integration of source control, CodeBuild and CodeDeploy.&lt;/p&gt;

&lt;p&gt;Go to the Pipeline Console, click on "&lt;strong&gt;Create Pipeline&lt;/strong&gt;".&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qcjzb20d5ogl2y3yn1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qcjzb20d5ogl2y3yn1o.png" alt="AWS pipeline console" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose a name for your pipeline. Click on &lt;strong&gt;Next&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wo2pyeul6cutsh65c6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wo2pyeul6cutsh65c6k.png" alt="Creating an AWS CodePipeline" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;source stage&lt;/strong&gt;, select GitHub from the drop down. Authorize access to GitHub.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzc7upcfet94c16wt3jn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzc7upcfet94c16wt3jn.png" alt="Creating a pipeline" width="800" height="654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CodePipeline allows us to choose either Jenkins or CodeBuild as our build provider. We are going to choose CodeBuild.&lt;br&gt;
Under &lt;strong&gt;project name&lt;/strong&gt;, select the build project we created in step 3.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yk1f9hbs3unzuzvm0wy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yk1f9hbs3unzuzvm0wy.png" alt="Creating a pipleine" width="800" height="701"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the &lt;strong&gt;Add Deploy stage&lt;/strong&gt;, we will use CodeDeploy as our deploy provider. Then we will select the application name and deployment group we created earlier on in step 5.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hsg7mow9nfwzwxyv3y3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hsg7mow9nfwzwxyv3y3.png" alt="CodePipeline" width="800" height="605"&gt;&lt;/a&gt;&lt;br&gt;
Review and create your pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And... done!&lt;/strong&gt;&lt;br&gt;
We have successfully created a CI/CD pipeline on AWS using AWS CodePipeline.&lt;/p&gt;
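&lt;p&gt;Once the pipeline exists, you can also check it from code. The following is a small sketch using boto3; the pipeline name is an assumption, and running it requires configured AWS credentials.&lt;/p&gt;

```python
# Inspect a CodePipeline pipeline's stages and their latest status with boto3.
# "project-x-pipeline" is an assumed name; substitute your own.

def pipeline_stage_status(name):
    """Return (stage, latest status) pairs for a CodePipeline pipeline."""
    import boto3  # imported here so the sketch reads without boto3 installed

    state = boto3.client("codepipeline").get_pipeline_state(name=name)
    return [
        (s["stageName"], s.get("latestExecution", {}).get("status", "unknown"))
        for s in state["stageStates"]
    ]

# With credentials configured:
# for stage, status in pipeline_stage_status("project-x-pipeline"):
#     print(f"{stage}: {status}")
```

&lt;p&gt;This is handy for quickly confirming that the Source, Build and Deploy stages all succeeded without opening the console.&lt;/p&gt;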

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Before You Go...&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Let's take this a notch further by adding a manual approval stage, so that a reviewer must sign off before the application is deployed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7: Create An SNS Topic
&lt;/h2&gt;

&lt;p&gt;For the manual approval stage, we want to send an email notification to the team lead that the build succeeded and is about to deploy. The team lead must then review the artifacts and/or log outputs from the build and decide whether or not to continue with the deployment.&lt;/p&gt;

&lt;p&gt;Go to the AWS SNS console and create a topic.&lt;br&gt;
Choose the standard type, give your topic a name and display name, and create it.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz4ioz80x2yisr7p46nt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz4ioz80x2yisr7p46nt.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;
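&lt;p&gt;The same topic and email subscription can be scripted. The following is a sketch with boto3; the topic name and email address are illustrative assumptions, and running it requires AWS credentials.&lt;/p&gt;

```python
# Create an SNS topic and subscribe an email endpoint to it with boto3.
# The topic name and email address are illustrative assumptions.

def create_approval_topic(name, email, region="us-east-1"):
    """Create (or fetch) a standard SNS topic and add an email subscriber."""
    import boto3  # imported here so the sketch reads without boto3 installed

    sns = boto3.client("sns", region_name=region)
    topic_arn = sns.create_topic(Name=name)["TopicArn"]  # idempotent call
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint=email)
    return topic_arn

# create_approval_topic("project-x-approvals", "team-lead@example.com")
```

&lt;p&gt;Note that an email endpoint must confirm the subscription (via the confirmation email SNS sends) before it receives any notifications.&lt;/p&gt;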

&lt;h2&gt;
  
  
  Step 8: Modify The Pipeline
&lt;/h2&gt;

&lt;p&gt;Now we have to include an action group after the build stage to enable a manual approval feature.&lt;/p&gt;

&lt;p&gt;Go to the pipeline, click on &lt;strong&gt;Edit&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj85tzyy57efmhml21df7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj85tzyy57efmhml21df7.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the Build Stage, click on &lt;strong&gt;Edit&lt;/strong&gt;.&lt;br&gt;
Here, we will add an action group just after the build.&lt;br&gt;
An Action Group is a set of one or more actions through which you can further refine your pipeline workflow. There are six types of actions: source, build, test, deploy, approval and invoke. Read more about them &lt;a href="https://docs.aws.amazon.com/codecatalyst/latest/userguide/workflows-group-actions.html"&gt;here&lt;/a&gt;.&lt;br&gt;
Your pipeline should now look like this:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8elgkwbp9nf3wl8pba4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8elgkwbp9nf3wl8pba4s.png" alt="Pipeline" width="718" height="930"&gt;&lt;/a&gt;&lt;br&gt;
Without the red lines of course!&lt;/p&gt;

&lt;p&gt;That's it. I hope you enjoyed it. Happy Clouding!!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>cloud</category>
    </item>
    <item>
      <title>A Practical Guide To Deploying A Three-tier Application On AWS</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Thu, 01 Jun 2023 09:26:55 +0000</pubDate>
      <link>https://dev.to/kelvinskell/a-practical-guide-to-deploying-a-complex-production-level-three-tier-architecture-on-aws-2hf0</link>
      <guid>https://dev.to/kelvinskell/a-practical-guide-to-deploying-a-complex-production-level-three-tier-architecture-on-aws-2hf0</guid>
      <description>&lt;p&gt;This post was originally published at &lt;a href="https://practicalcloud.net" rel="noopener noreferrer"&gt;Practical Cloud&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;View the longer, original version &lt;a href="https://practicalcloud.net/a-practical-guide-to-deploying-a-complex-production-level-three-tier-architecture-on-aws/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-tiered architectures&lt;/strong&gt; have become the most popular way of designing and building applications in the cloud, primarily because they offer high availability, replication, flexibility, security and many other benefits. This is as opposed to single-tier architectures, which typically package all the requisite components of a software application into a single server. &lt;/p&gt;

&lt;p&gt;The most popular multi-tier design pattern is a three-tier architecture. The three-tier architecture consists of the following layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Presentation Layer:&lt;/strong&gt;  This is the outermost layer of the application and provides an interface for interacting with the user. It also provides a secure communication channel with the other tiers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logic Layer:&lt;/strong&gt; This is where information is processed and application logic is executed. It receives input from the presentation layer, processes it, and communicates with the data layer when necessary. It is also known as the &lt;em&gt;application layer&lt;/em&gt; or &lt;em&gt;middleware&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Layer:&lt;/strong&gt; This is where the database management system sits, thus providing it with a secure, isolated environment for storing and managing application information.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this guide, we are going to design a very fault-tolerant, highly scalable Flask application on AWS using the three-tier architecture design pattern.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architectural Design
&lt;/h2&gt;

&lt;p&gt;The Presentation Layer will comprise CloudFront, an Elastic Load Balancer, an Internet Gateway, a NAT Gateway and two Bastion Hosts.&lt;br&gt;
The Application (Logic) Layer will consist of EBS-backed EC2 instances provisioned through an Auto Scaling group.&lt;br&gt;
The Data Layer will be made up of a PostgreSQL database with a read replica. The EC2 instances provisioned in the application layer will be connected to an Elastic File System for shared storage, so technically the EFS also resides in this layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bonus:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/route53/" rel="noopener noreferrer"&gt;AWS Route 53&lt;/a&gt; will be used to provide a domain name for the application.&lt;/li&gt;
&lt;li&gt;We will integrate &lt;a href="https://aws.amazon.com/cloudfront/" rel="noopener noreferrer"&gt;CloudFront&lt;/a&gt; with the Application Load Balancer to provide worldwide accelerated content delivery and reduce latency.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/Kelvinskell/terra-tier/tree/master/terraform" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; will be used as the Infrastructure-as-code (IAC) tool to automate this whole process from end to end.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;To automatically create this architecture from end to end, click on this &lt;a href="https://github.com/Kelvinskell/terra-tier" rel="noopener noreferrer"&gt;Link&lt;/a&gt; to get the Terraform code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the first part of a three-part series. &lt;br&gt;
In the &lt;a href="https://dev.to/kelvinskell/devops-on-aws-codepipeline-in-action-60k"&gt;second project&lt;/a&gt;, we will implement &lt;strong&gt;DevOps on AWS&lt;/strong&gt; via &lt;strong&gt;CodePipeline&lt;/strong&gt;, &lt;strong&gt;CodeBuild&lt;/strong&gt; and &lt;strong&gt;CodeDeploy&lt;/strong&gt;. &lt;br&gt;
The third project introduces a &lt;strong&gt;Reporting Layer&lt;/strong&gt; into the architecture.&lt;br&gt;
This part focuses solely on designing and deploying a three-tier architecture for a Flask application on AWS.&lt;/p&gt;

&lt;p&gt;Let's begin. Shall we?&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbncci0sp1k4j8fthe4sz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbncci0sp1k4j8fthe4sz.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create The Environment.
&lt;/h2&gt;

&lt;p&gt;Firstly we will create a &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html" rel="noopener noreferrer"&gt;VPC&lt;/a&gt;. A virtual private cloud is a secure, isolated, networking environment hosted in a public Cloud. A VPC is more or less a virtual data center in the cloud. This is where most of our application resources will be hosted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4mllajrcqc45cnxto9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4mllajrcqc45cnxto9d.png" alt="AWS VPC environment"&gt;&lt;/a&gt;&lt;br&gt;
My VPC has a name tag of project-x. This means that all other resources created within this VPC will have the prefix "project-x".&lt;br&gt;
Notice also that I am assigning a CIDR block of 10.0.0.0/16 to my VPC. This is a block of private IP addresses from which my VPC resources will get their local addresses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eruqo01r3p879e8alq7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eruqo01r3p879e8alq7.png" alt="AWS VPC Console"&gt;&lt;/a&gt;&lt;br&gt;
Notice that I am in the us-east-1 region (North Virginia).&lt;br&gt;
Our VPC will be provisioned across three availability zones. Six subnets will be created in total: three public and three private. The public subnets are public solely because they route traffic through an internet gateway.&lt;br&gt;
Also notice that I am creating a NAT gateway. This NAT gateway will be used by the EC2 instances hosting our application to connect to the internet (e.g. for updates).&lt;/p&gt;
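&lt;p&gt;The subnet layout above can be made concrete with a little address arithmetic: a /16 VPC leaves plenty of room for six equally sized subnets. Here is a quick sketch using Python's standard library; the /20 subnet size and the resulting CIDRs are illustrative, not taken from the screenshots.&lt;/p&gt;

```python
# Carve the 10.0.0.0/16 VPC range into /20 subnets and take six of them:
# three public and three private, one pair per availability zone.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))[:6]  # first six /20 blocks

azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
for i, subnet in enumerate(subnets):
    tier = "public" if i < 3 else "private"
    print(f"{tier:7} {azs[i % 3]}: {subnet}")
```

&lt;p&gt;Each /20 holds 4,096 addresses, so six of them use less than a tenth of the /16, leaving room to grow.&lt;/p&gt;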

&lt;h2&gt;
  
  
  Step 2: Create A Security Group
&lt;/h2&gt;

&lt;p&gt;A security group acts like a firewall and determines how traffic will enter or exit instances.&lt;br&gt;
Since our web servers will be hosted in the logic-tier, it is important that we restrict the type of traffic entering them.&lt;/p&gt;

&lt;p&gt;The name of our security group will be "project-x-logic-tier-sg".&lt;br&gt;
The security group must allow inbound traffic on &lt;strong&gt;port 5000&lt;/strong&gt;, since that is the port on which Flask listens.&lt;/p&gt;
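&lt;p&gt;To see why port 5000 matters, here is the smallest possible Flask app of the kind the logic tier runs. The route and response are placeholders, not the project's real application; the point is that Flask's development server listens on port 5000 by default, and binding to 0.0.0.0 makes it reachable from the load balancer.&lt;/p&gt;

```python
# Minimal Flask app illustrating why the security group opens port 5000.
# The route and response body are placeholders.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the logic tier"

if __name__ == "__main__":
    # Bind to all interfaces on Flask's default port, 5000, so the load
    # balancer's health checks and forwarded traffic can reach the instance.
    app.run(host="0.0.0.0", port=5000)
```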

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqr2lx9xj0djpf6njnp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqr2lx9xj0djpf6njnp1.png" alt="Creating a security group"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create A Launch Template
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-templates.html" rel="noopener noreferrer"&gt;Launch Template&lt;/a&gt; captures the configuration details needed to launch instances, so that new instances can be created directly from the template. This template will be used by an auto scaling group to create instances in our application tier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fucdf24p3vo4s204l015j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fucdf24p3vo4s204l015j.png" alt="AWS EC2 Management Console"&gt;&lt;/a&gt;&lt;br&gt;
I assigned a name and description to my template. Notice also that I duly tagged the environment as "prod".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp18jo7e5gkliyeefk8dj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp18jo7e5gkliyeefk8dj.png" alt="Ec2 Console"&gt;&lt;/a&gt;&lt;br&gt;
I am using a "t3.xlarge" instance type. &lt;br&gt;
Be sure to select the "project-x-logic-tier-sg" security group.&lt;br&gt;
Leave every other setting as is.&lt;br&gt;
Under &lt;strong&gt;"Advanced details"&lt;/strong&gt;, I am going to paste in the following under user data:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Mount EFS
fsname=fs-093de1afae7166759.efs.us-east-1.amazonaws.com # You must change this value to your EFS DNS name.
mkdir /efs
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport $fsname:/ /efs

# Install and set up Flask
apt update
apt upgrade -y
apt install python3-flask mysql-client mysql-server python3-pip python3-venv -y
apt install sox ffmpeg libcairo2 libcairo2-dev -y
apt install python3-dev default-libmysqlclient-dev build-essential -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Step 4: Create an Autoscaling Group, Elastic Load balancer And Target Group
&lt;/h2&gt;

&lt;p&gt;The Load balancer is the entry point to the application.&lt;br&gt;
The &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html" rel="noopener noreferrer"&gt;Application Load Balancer&lt;/a&gt;, residing in the &lt;strong&gt;presentation layer&lt;/strong&gt;, will route traffic through the &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-launch-template.html" rel="noopener noreferrer"&gt;AutoScaling Group&lt;/a&gt; to logic-tier instances residing in the &lt;strong&gt;logic layer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35d7als2rz8zq167gx1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35d7als2rz8zq167gx1u.png" alt="EC2 Management Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29aswslro3px62b07nrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29aswslro3px62b07nrq.png" alt="Creating an autoscaling group"&gt;&lt;/a&gt;&lt;br&gt;
I am naming this Auto Scaling group "project-x-asg".&lt;br&gt;
Click on &lt;strong&gt;next&lt;/strong&gt;. Select the "&lt;strong&gt;project-x-vpc&lt;/strong&gt;" we created earlier. Also make sure that you select only the private subnets from the three availability zones. This is crucial, since our instances will be launched in the logic tier: the subnets must not be public.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcokqlzna6ppfx8lg759.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcokqlzna6ppfx8lg759.png" alt="Creating an ASG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;next&lt;/strong&gt;.&lt;br&gt;
Under "&lt;strong&gt;Configure advanced options&lt;/strong&gt;", select "&lt;strong&gt;Attach to a new load balancer&lt;/strong&gt;". Also select "Create a target group".&lt;br&gt;
On the next page, we'll configure our ASG to have a minimum capacity of 2, a desired capacity of 4 and a maximum capacity of 6. We'll also set up target tracking to scale based on average CPU utilization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsqypntd2n2gw4d860w6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsqypntd2n2gw4d860w6.png" alt="Create an ASG"&gt;&lt;/a&gt;&lt;br&gt;
You can decide to add tags. I'll add a tag named "Environment" with a value of "Prod". This will be needed during the CodeDeploy stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72ppnknmsh4ewz3b8klg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72ppnknmsh4ewz3b8klg.png" alt="creating an ASG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create your ASG.&lt;/strong&gt;&lt;/p&gt;
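&lt;p&gt;The capacity settings and CPU-based target tracking above can also be applied from code. The following is a sketch with boto3; the 50% CPU target is an illustrative assumption, and running it requires AWS credentials.&lt;/p&gt;

```python
# Apply the ASG capacity limits and a CPU target-tracking policy via boto3.
# The 50.0 target value is an illustrative assumption.

def configure_scaling(asg_name="project-x-asg", cpu_target=50.0):
    import boto3  # imported here so the sketch reads without boto3 installed

    autoscaling = boto3.client("autoscaling")
    # Min 2, desired 4, max 6 - as configured in the console above.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        MinSize=2, DesiredCapacity=4, MaxSize=6,
    )
    # Scale in or out to keep average CPU near the target value.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName=asg_name,
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": cpu_target,
        },
    )

# configure_scaling()  # requires AWS credentials
```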
&lt;h2&gt;
  
  
  Step 5: Attach An Elastic Filesystem
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/efs/" rel="noopener noreferrer"&gt;AWS EFS&lt;/a&gt; is a fully managed, highly scalable shared storage solution in the cloud. It is NFS compatible.&lt;br&gt;
This Elastic filesystem will provide shared storage for all our application tier servers. Since it provides storage, EFS sits in the &lt;strong&gt;data layer&lt;/strong&gt; of the three tier architecture.&lt;/p&gt;

&lt;p&gt;First, &lt;strong&gt;create a new security group&lt;/strong&gt;. This security group should allow only inbound NFS traffic from the security group of our logic-tier instances. You can find a detailed guide on how to create it &lt;a href="https://catalog.workshops.aws/general-immersionday/en-US/basic-modules/60-s3/efs/3-efs#:~:text=Creating%20a%20Security%20Group%20for%20EFS&amp;amp;text=Click%20on%20Create%20Security%20Groups%20in%20the%20top%20right%20of%20the%20screen.&amp;amp;text=EC2%20to%20EFS-,VPC%20%2D%20This%20is%20important.,created%20in%20an%20earlier%20step." rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
Go to the EFS console. Click on "&lt;strong&gt;Create filesystem&lt;/strong&gt;", then click on "&lt;strong&gt;Customize&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg99g4m4sgy6uqhmd6iu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg99g4m4sgy6uqhmd6iu.png" alt="EFS Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assign a name to your EFS. Leave every other setting as default. Click on &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zvianh79qt6hm83huey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zvianh79qt6hm83huey.png" alt="EFS Console"&gt;&lt;/a&gt;&lt;br&gt;
On the &lt;strong&gt;Network access&lt;/strong&gt; page, select your &lt;em&gt;project-x&lt;/em&gt; VPC. Select the EFS security group you have created. Click on next.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Filesystem Policy&lt;/strong&gt;, leave everything as default. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje6e5qmq6jt2c2aeydqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje6e5qmq6jt2c2aeydqu.png" alt="EFS Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Skip to &lt;strong&gt;Review&lt;/strong&gt; and then Create your filesystem.&lt;br&gt;
Now we must update our user data to look like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Use Google's DNS
echo "nameserver 8.8.8.8" &amp;gt;&amp;gt; /etc/resolv.conf

# Force apt to use IPV4
apt-get -o Acquire::ForceIPv4=true update

# Change hostname
echo "project-x-app-server" &amp;gt; /etc/hostname

# Install efs-utils
apt-get install awscli -y
mkdir /efs
sudo apt-get -y install git binutils
git clone https://github.com/aws/efs-utils /efs-utils
cd /efs-utils
./build-deb.sh
apt-get -y install ./build/amazon-efs-utils*deb

# Mount EFS
fsname=$(aws efs describe-file-systems --region us-east-1 --creation-token project-x --output table |grep FileSystemId |awk '{print $(NF-1)}')
mount -t efs $fsname /efs

# Get DB credentials
DB=$(aws rds describe-db-instances --db-instance-identifier database-1 --region us-east-1 --output table |grep DBName |awk '{print $(NF-1)}')
HOST=$(aws rds describe-db-instances --db-instance-identifier database-1 --region us-east-1 --output table |grep Address |awk '{print $(NF-1)}')
ARN=$(aws secretsmanager list-secrets --region us-east-1 --filters "Key=tag-value,Values=project-x-rds-mysqldb-instance" --output table |grep ARN |awk '{print $(NF-1)}')
USER=$(aws secretsmanager get-secret-value --region us-east-1 --secret-id $ARN --output table |grep -w SecretString |awk '{print $3}' |cut -d: -f2 |sed 's/password//' |tr -d '",')
PRE_PASSWORD=$(aws secretsmanager get-secret-value --region us-east-1 --secret-id $ARN --output table |grep -w SecretString |awk '{print $3}' |cut -d: -f3 |tr -d '"')
PASSWORD=${PRE_PASSWORD%?}

# install and set up Flask
apt-get update -y &amp;amp;&amp;amp; apt-get upgrade -y 
apt-get install python3-flask mysql-client mysql-server python3-pip python3-venv -y 
apt-get install sox ffmpeg libcairo2 libcairo2-dev -y 
apt-get install python3-dev default-libmysqlclient-dev build-essential -y 

# Clone the app
cd /
git clone https://github.com/Kelvinskell/terra-tier.git
cd /terra-tier

# Populate App with environmental variables
echo "MYSQL_ROOT_PASSWORD=$PASSWORD" &amp;gt; .env
cd /terra-tier/application
echo "MYSQL_DB=$DB" &amp;gt; .env
echo "MYSQL_HOST=$HOST" &amp;gt;&amp;gt; .env
echo "MYSQL_USER=$USER" &amp;gt;&amp;gt; .env
echo "DATABASE_PASSWORD=$PASSWORD" &amp;gt;&amp;gt; .env
echo "MYSQL_ROOT_PASSWORD=$PASSWORD" &amp;gt;&amp;gt; .env
echo "SECRET_KEY=08dae760c2488d8a0dca1bfb" &amp;gt;&amp;gt; .env # FLASK EXTENSION KEY. NOT NECESSARILY A "SECRET".
echo "API_KEY=f39307bb61fb31ea2c458479762b9acc" &amp;gt;&amp;gt; .env 
# YOU TYPICALLY DON'T ADD SECRETS SUCH AS API KEYS TO SOURCE CONTROL IN PLAIN TEXT.
# THIS IS BEING ADDED HERE SO THAT YOU CAN EASILY REPLICATE THIS INFRASTRUCTURE WITHOUT ANY HASSLE.
# YOU CAN REPLACE IT WITH YOUR OWN MEDIASTACK API KEY.

# Setup virtual environment
cd /terra-tier
python3 -m venv venv
source venv/bin/activate

# Run Flask Application
pip install -r requirements.txt
export FLASK_APP=run.py
export FLASK_ENV=production
flask run -h 0.0.0.0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
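&lt;p&gt;As a side note, the grep/awk parsing in the script above is fragile and breaks if the CLI's table layout changes. Here is a rough alternative sketch using the CLI's built-in &lt;code&gt;--query&lt;/code&gt; (JMESPath) option instead; it assumes &lt;code&gt;jq&lt;/code&gt; is installed (e.g. via apt-get) and that the instance role can call RDS and Secrets Manager.&lt;/p&gt;

```shell
# Alternative sketch: extract DB details with --query instead of grep/awk.
# Assumes the instance role allows rds:DescribeDBInstances and
# secretsmanager:GetSecretValue, and that jq is installed (apt-get install -y jq).
REGION=us-east-1

DB=$(aws rds describe-db-instances --db-instance-identifier database-1 \
      --region "$REGION" --query 'DBInstances[0].DBName' --output text)
HOST=$(aws rds describe-db-instances --db-instance-identifier database-1 \
      --region "$REGION" --query 'DBInstances[0].Endpoint.Address' --output text)

# The RDS-managed secret stores JSON like {"username":"...","password":"..."}
ARN=$(aws secretsmanager list-secrets --region "$REGION" \
      --query 'SecretList[0].ARN' --output text)
SECRET=$(aws secretsmanager get-secret-value --region "$REGION" \
      --secret-id "$ARN" --query 'SecretString' --output text)
USER=$(echo "$SECRET" | jq -r '.username')
PASSWORD=$(echo "$SECRET" | jq -r '.password')
```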

&lt;p&gt;Here is where you find your filesystem's DNS name.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqcuscdmjsacmli29j1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqcuscdmjsacmli29j1f.png" alt="AWS EFs Console"&gt;&lt;/a&gt;&lt;br&gt;
Make sure to change the value of the "&lt;strong&gt;fsname&lt;/strong&gt;" variable to match your own filesystem's ID or DNS name. &lt;/p&gt;
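&lt;p&gt;If the creation-token lookup in the script does not match your setup, the mount can also be sketched against the filesystem ID or DNS name directly. The IDs below are placeholders; substitute your own.&lt;/p&gt;

```shell
# Sketch: mount EFS by ID or DNS name (placeholder values below).
# These are two alternatives; use one or the other, not both.
FS_ID=fs-0123456789abcdef0
FS_DNS=fs-0123456789abcdef0.efs.us-east-1.amazonaws.com

mkdir -p /efs

# Option 1: with amazon-efs-utils (handles TLS and IAM auth):
mount -t efs "$FS_ID" /efs

# Option 2: with a plain NFSv4.1 client:
mount -t nfs4 -o nfsvers=4.1 "$FS_DNS":/ /efs
```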

&lt;h2&gt;
  
  
  Step 6: Create A Bastion Host
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://dev.to/aws-builders/bastion-host-in-aws-vpc-2i63"&gt;Bastion Host&lt;/a&gt; is a special server used to manage access to servers sitting in an internal network or other private AWS resources from an external network. The bastion host sits in the public network and provides limited access to administrators to log in to servers sitting in an isolated network. It is also commonly referred to as a Jump Box or Jump Server.&lt;/p&gt;

&lt;p&gt;From the bastion host, we'll be able to gain SSH access into our application layer servers, for administrative purposes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.strongdm.com/blog/bastion-hosts-with-audit-logging-part-one" rel="noopener noreferrer"&gt;Here&lt;/a&gt;&lt;/strong&gt; is a good resource on how to create your bastion host and connect to your logic layer servers through it.&lt;br&gt;
&lt;strong&gt;Security Caution:&lt;/strong&gt; Connecting to bastion hosts using SSH key pairs is no longer the recommended practice. Instead, use &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html" rel="noopener noreferrer"&gt;AWS Systems Manager&lt;/a&gt; for a more secure connection, tunneling through the bastion host to your private instances.&lt;/p&gt;
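&lt;p&gt;For illustration, a Systems Manager session looks roughly like this. It assumes the SSM agent is running and the instance profile includes the AmazonSSMManagedInstanceCore managed policy; the instance IDs and private IP are placeholders.&lt;/p&gt;

```shell
# Sketch: connect via AWS Systems Manager instead of SSH key pairs.
# Assumes the SSM agent is running and the instance profile includes
# the AmazonSSMManagedInstanceCore policy; IDs below are placeholders.

# Interactive shell on the bastion (no inbound SSH rule needed):
aws ssm start-session --target i-0123456789abcdef0

# Port-forward through the bastion to a private app server:
aws ssm start-session --target i-0123456789abcdef0 \
    --document-name AWS-StartPortForwardingSessionToRemoteHost \
    --parameters '{"host":["10.0.2.15"],"portNumber":["22"],"localPortNumber":["2222"]}'
```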

&lt;h2&gt;
  
  
  Step 7: Create A Database
&lt;/h2&gt;

&lt;p&gt;The database sits squarely in the data layer, together with the Elastic File System.&lt;br&gt;
Our Flask application will need to connect to a relational database. We will be using MySQL for this architecture.&lt;br&gt;
&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/what-is-mysql.html" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt; is a widely used relational database management system (RDBMS) that is free and open source. MySQL is renowned for its ability to support large production workloads, hence its suitability for our project. &lt;/p&gt;

&lt;p&gt;We will, however, be utilizing &lt;a href="https://aws.amazon.com/rds/mysql/" rel="noopener noreferrer"&gt;Amazon RDS for MySQL&lt;/a&gt;, a fully managed database solution in the cloud.&lt;/p&gt;

&lt;p&gt;On the RDS Console, click on &lt;strong&gt;Databases&lt;/strong&gt; and then click on "&lt;strong&gt;Create database&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ki17yavayflw9c3bl08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ki17yavayflw9c3bl08.png" alt="RDS Console"&gt;&lt;/a&gt;&lt;br&gt;
I am selecting a &lt;strong&gt;Production&lt;/strong&gt; template and choosing &lt;strong&gt;Multi-AZ DB-instance&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcal54wz82c3lj89puln7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcal54wz82c3lj89puln7.png" alt="AWS RDS Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your master username and DB instance class. We'll be using AWS Secrets Manager to manage our DB credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33ollcketa928eyz991l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33ollcketa928eyz991l.png" alt="Creating RDS database"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under "&lt;strong&gt;Connectivity&lt;/strong&gt;", select the &lt;strong&gt;project-x-vpc&lt;/strong&gt;. Click on "&lt;strong&gt;Create a new security group&lt;/strong&gt;".&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jylinnm1l23xldmzz4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jylinnm1l23xldmzz4c.png" alt="Create MYSQL RDS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a security group that &lt;strong&gt;only&lt;/strong&gt; allows incoming traffic on &lt;strong&gt;port 3306&lt;/strong&gt; from the security group of the application-layer servers, then select it.&lt;/p&gt;
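&lt;p&gt;For reference, the same rule can be sketched with the CLI; the group and VPC IDs here are placeholders.&lt;/p&gt;

```shell
# Sketch: create the DB security group and allow MySQL (3306) only
# from the application-layer security group (IDs are placeholders).
aws ec2 create-security-group --group-name project-x-db-sg \
    --description "RDS MySQL access from app layer" \
    --vpc-id vpc-0123456789abcdef0
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaa1111bbbb2222c \
    --protocol tcp --port 3306 \
    --source-group sg-0ddd3333eeee4444f
```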

&lt;p&gt;Under "&lt;strong&gt;Database Authentication&lt;/strong&gt;", choose "&lt;strong&gt;password&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;Scroll down to "&lt;strong&gt;Advanced Options&lt;/strong&gt;". Under "&lt;strong&gt;database name&lt;/strong&gt;", enter "&lt;strong&gt;newsreadb&lt;/strong&gt;". (You must use this exact name for the Flask app to be able to connect to the database.) &lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Create database&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now our architecture is almost ready!&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8zzrfv4uui3qmzd8591.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8zzrfv4uui3qmzd8591.png" alt="AWS 3-tier Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To connect to the database, our application will make API calls to AWS Secrets Manager to retrieve the DB credentials. Therefore, we must create an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html" rel="noopener noreferrer"&gt;IAM&lt;/a&gt; role that allows it to do so. We will also modify the launch template to attach the role as an instance profile for our application-layer servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8: Create An IAM Role And Modify Launch Template
&lt;/h2&gt;

&lt;p&gt;Go to the IAM Dashboard and click on &lt;strong&gt;Roles&lt;/strong&gt;.&lt;br&gt;
Click on &lt;strong&gt;Create Role&lt;/strong&gt;. Under &lt;strong&gt;Use case&lt;/strong&gt;, select EC2 and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;br&gt;
Search for "secrets" and select the permission that pops up.&lt;/p&gt;
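&lt;p&gt;For the CLI-inclined, the same role can be sketched as follows. The console search typically surfaces the AWS-managed &lt;em&gt;SecretsManagerReadWrite&lt;/em&gt; policy; a custom policy scoped to just your secret's ARN would be tighter. Role and profile names here are illustrative.&lt;/p&gt;

```shell
# Sketch: an EC2-assumable role with Secrets Manager access.
# SecretsManagerReadWrite is an AWS managed policy; names are illustrative.
aws iam create-role --role-name project-x-secrets-role \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name project-x-secrets-role \
    --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite

# EC2 consumes roles through an instance profile:
aws iam create-instance-profile --instance-profile-name project-x-secrets-profile
aws iam add-role-to-instance-profile \
    --instance-profile-name project-x-secrets-profile \
    --role-name project-x-secrets-role
```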

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5ws6a8f0pt35x2uszk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5ws6a8f0pt35x2uszk2.png" alt="AWS IAM Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Next&lt;/strong&gt;, then click on &lt;strong&gt;Create Role&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Next, we'll need to modify our launch template to include this role so that our logic tier servers can assume it to communicate with AWS Secrets Manager.&lt;/p&gt;

&lt;p&gt;Go to the EC2 Console, select &lt;strong&gt;Launch Templates&lt;/strong&gt;, open &lt;strong&gt;Actions&lt;/strong&gt;, and click on "&lt;strong&gt;Modify launch template&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcuvfof08vgbgxcgl0lb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcuvfof08vgbgxcgl0lb.png" alt="Modify launch template"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to "&lt;strong&gt;Advanced details&lt;/strong&gt;", Click on the box under &lt;strong&gt;IAM instance profile&lt;/strong&gt; and then select your newly created role. Click on &lt;strong&gt;Create&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz05yli7xxewr1vcqnbd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz05yli7xxewr1vcqnbd.png" alt="Launch template"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, be sure to set the new version as the default version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb81zisnhxlsvhbavuq4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb81zisnhxlsvhbavuq4x.png" alt="Launch template"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AND YOU'RE DONE!&lt;/strong&gt;&lt;br&gt;
Our three-tier application should now be up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 9: Accessing Your Application
&lt;/h2&gt;

&lt;p&gt;If you have followed through with all these steps, congratulations.&lt;br&gt;
To access your application, visit the EC2 console to get the DNS name of your load balancer.&lt;/p&gt;

&lt;p&gt;Go to the &lt;strong&gt;EC2 Console&lt;/strong&gt; and click on &lt;strong&gt;Load Balancers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte703triqyvnirl09fsi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte703triqyvnirl09fsi.png" alt="EC@ Load balancer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy the DNS Name&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0uuqkpqm5wo34egjf6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0uuqkpqm5wo34egjf6u.png" alt="AWS EC2 Console: Load balancer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now paste this into a web browser.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxbewljcfhie0yv48jw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxbewljcfhie0yv48jw2.png" alt="Flask app running on AWS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe4dbp7q55xorssua9tg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe4dbp7q55xorssua9tg.png" alt="Flask app running on AWS using 3-tier architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlg1wrg19ydso2oiiyw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlg1wrg19ydso2oiiyw9.png" alt="Flask Appplication running on AWs using 3-tier architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it. If you made it this far, congratulations!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Kubernetes Security: Exploring Role Based Access Control (RBAC)</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Thu, 15 Sep 2022 14:03:08 +0000</pubDate>
      <link>https://dev.to/kelvinskell/kubernetes-security-exploring-role-based-access-control-rbac-4b2g</link>
      <guid>https://dev.to/kelvinskell/kubernetes-security-exploring-role-based-access-control-rbac-4b2g</guid>
      <description>&lt;p&gt;Security in kubernetes can be quite complex, as there are a lot of disparate parts which must be pieced together in order to get a full picture of the overall architecture. One of the best ways to understand security in kubernetes is to study the individual concepts one at a time. Today we are going to explore the concept of Role Based Access Controls (RBAC) and how they fit in with the general architecture of kubernetes.&lt;/p&gt;

&lt;p&gt;Imagine that you are a kubernetes administrator working on one or more large clusters comprising many nodes and complex applications. How do you define a systematic approach to authorization for your team? Surely, not every team member needs access to all parts of the cluster. How do you define what rights and privileges are granted to each member of your team to create, view, modify or delete resources? Luckily, kubernetes provides Role Based Access Control (RBAC). RBAC helps us achieve fine-grained control over who can do what within our cluster(s).&lt;/p&gt;

&lt;p&gt;To understand RBAC, it is necessary to understand how kubernetes processes requests. The API server is responsible for processing requests in kubernetes. When a request comes into the API server, it has to go through a series of events before it is processed by the API server. These series of steps are broadly categorized into &lt;strong&gt;Authentication, Authorization and Admission Control.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Authentication validates the identity of the requester that sent the API request. This could be users communicating via kubectl, service accounts within pods or the control-plane and worker nodes. Authentication is enabled through authentication plugins. With plugins, we can define several authentication methods to the API server. There are several authentication plugins that enable authentication, some of which include: client certificates, authentication tokens or Basic HTTP.&lt;/p&gt;

&lt;p&gt;Authorization is the process that determines if the requester is allowed to perform actions on resources as stated in the submitted API request. Authorization is also enabled through authorization plugins. This could be Role Based Access Control (RBAC), Node or Attribute Based Access Control (ABAC). &lt;strong&gt;Role Based Access Control in kubernetes is implemented at the authorization stage of the API request.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Admission control basically enables administrative control over API requests. This is achieved by admission controllers, which intercept requests to the API server before the object is persisted in etcd.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Are Role Based Access Controls?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Role based access control is an authorization plugin implemented on the API server. RBAC is a method of regulating access to cluster resources based on defined roles assigned to subjects. RBAC allows us to write rules which determine what actions can be performed on resources in our cluster. Since RBAC denies by default, only allow rules can be written, each of which permits actions to be performed on a resource. These rules then have to be bound to subjects. Subjects in RBAC refer to users, groups or service accounts. &lt;strong&gt;RBAC implements several API objects: Role, ClusterRole, RoleBinding and ClusterRoleBinding.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Roles&lt;/strong&gt; define what actions can be performed on resources within a namespace. A Role can be made up of one or several rules, each of which specifies the verbs allowed on a set of resources. Verbs follow RESTful HTTP semantics, similar to HTTP-style verbs such as GET and DELETE. Since Roles only exist within a namespace, the rules they define are bound to that namespace; it is not possible for a Role to grant an action outside the namespace in which it exists.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ClusterRoles&lt;/strong&gt; are quite similar to roles. They both define permission for actions to be performed on resources. The fundamental difference is that ClusterRole is not a namespaced API object. What this means is that ClusterRole is cluster-scoped. They provide access across all namespaces and to cluster-scoped resources such as Nodes and Persistent volumes. Using a ClusterRole enables us to define a single set of rules and apply them across all resources within our cluster. This greatly reduces administrative overhead since we do not need to define the same rule set over and over across all namespaces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RoleBinding&lt;/strong&gt; grants a subject access to a Role or ClusterRole. Essentially, a RoleBinding defines who can do what has been defined in a Role or ClusterRole. When a Role is attached to a RoleBinding, the intention is to grant permissions within a namespace. When a ClusterRole is attached to a RoleBinding, the intention is to grant permissions across several or all namespaces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ClusterRoleBinding&lt;/strong&gt; is used to grant cluster-wide permissions across all namespaces. When a ClusterRole is attached to a ClusterRoleBinding, permissions are granted across all namespaces and to cluster-scoped (non-namespaced) resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes provides default roles that can immediately be put to use if we do not wish to define custom Roles/ClusterRoles.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cluster-admin: This ClusterRole is the super user in kubernetes. It can potentially perform all actions on all resources across all namespaces. When it is attached to a RoleBinding, it has unrestricted access and rights across the defined namespace. When used with a ClusterRoleBinding, it assumes unrestricted rights and privileges across all namespaces and cluster-scoped resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Admin: This role permits unlimited read/write access to resources within a namespace. However, it does not grant access to the namespace itself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edit: This role grants read/write access within a given Kubernetes namespace. Access to view and modify Roles and RoleBindings is however denied.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;View: This role allows read-only access within a given namespace. A subject in a RoleBinding attached to this role will only be able to view resources but unable to modify, create or delete them.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
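&lt;p&gt;The objects described above can be sketched with kubectl's imperative commands; the namespace, user and role names are illustrative.&lt;/p&gt;

```shell
# Sketch: a namespaced Role, a RoleBinding for user "jane", and a
# binding to the built-in "view" ClusterRole (names are illustrative).
kubectl create namespace dev

# Role: allow read-only verbs on pods within the dev namespace.
kubectl create role pod-reader --verb=get --verb=list --verb=watch \
    --resource=pods --namespace=dev

# RoleBinding: grant the Role to a subject.
kubectl create rolebinding pod-reader-binding --role=pod-reader \
    --user=jane --namespace=dev

# Bind the default "view" ClusterRole within one namespace only:
kubectl create rolebinding jane-view --clusterrole=view \
    --user=jane --namespace=dev

# Verify: RBAC denies by default, so anything not granted returns "no".
kubectl auth can-i list pods --as=jane --namespace=dev      # yes
kubectl auth can-i delete pods --as=jane --namespace=dev    # no
```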

&lt;p&gt;Properly implementing Role Based Access Control within a cluster is a very important aspect of security in kubernetes. RBAC gives an administrator the ability to implement least-privileged access in Kubernetes.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Linux For Cloud Engineers: Understanding SELinux</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Thu, 15 Sep 2022 13:54:49 +0000</pubDate>
      <link>https://dev.to/kelvinskell/linux-for-cloud-engineers-understanding-selinux-3epk</link>
      <guid>https://dev.to/kelvinskell/linux-for-cloud-engineers-understanding-selinux-3epk</guid>
      <description>&lt;p&gt;Visit my website: &lt;a href="https://practicalcloud.net"&gt;Practical Cloud&lt;/a&gt; for more posts and projects on Devops and Cloud engineering.&lt;/p&gt;

&lt;p&gt;I find that most Cloud Administrators, DevOps engineers and even some sysadmins shy away from the subject of SELinux.&lt;br&gt;
This is not encouraging, given that a properly implemented SELinux policy can greatly reduce the attack surface across our Linux-based EC2 instances and on-premise systems.&lt;/p&gt;

&lt;p&gt;The best way to think of SELinux is as an access control mechanism that implements Mandatory Access Control (MAC). This MAC is implemented on top of the Discretionary Access Control (DAC) built into all Linux systems. What this effectively means is that SELinux will never grant a permission that has been denied by DAC; it can only deny permissions that your Discretionary Access Control might have granted.&lt;br&gt;
In addition to implementing MAC, you can argue that SELinux implements Role-Based Access Control (more on that in the later paragraphs).&lt;/p&gt;

&lt;p&gt;Discretionary Access Control can simply be thought of as the read, write and execute (rwx) permissions for users, groups and 'others' that you define for your files across your filesystem.&lt;/p&gt;

&lt;p&gt;The purpose of SELinux is to regulate the permissions granted to a process and how it accesses files within a Linux environment, such that a process always runs under the permissions granted to it, regardless of whether it was started by a privileged or non-privileged user. In essence, a process will only run with the rights necessary for it to perform its function.&lt;/p&gt;

&lt;p&gt;Suppose an application running in our EC2 instance with root privileges is compromised by a hacker: the hacker will not be able to cause major damage to our environment. This is because even though the application is running under the context of the root user, our SELinux policy has restricted the permissions granted to this application, and the running process cannot access resources beyond what an appropriately configured SELinux policy has granted it.&lt;/p&gt;

&lt;p&gt;There are quite a few terminologies associated with SELinux.&lt;br&gt;
A few of them are; users, roles, subjects, objects, domains, types and type enforcement.&lt;/p&gt;

&lt;p&gt;An SELinux user executes a process; that process is known as a subject.&lt;br&gt;
This could be a user creating a directory or modifying a file.&lt;br&gt;
So basically, a subject is a running process (such as a daemon).&lt;/p&gt;

&lt;p&gt;But in order for a user to execute a process, it must have access to a role and that role itself must have access to the subject.&lt;br&gt;
This is where Role Based Access Control (RBAC) comes in. A role acts like a bridge between a user and a subject. The role defines which users can have access to itself. Also, the role has a defined list of subjects that it can access.&lt;/p&gt;

&lt;p&gt;An object in SELinux is anything that a subject acts upon. This could be a file, a directory, or even a server.&lt;br&gt;
Domains are contexts for subjects while Types are contexts for objects.&lt;/p&gt;
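&lt;p&gt;On an SELinux-enabled system (RHEL, Amazon Linux, etc.), these concepts are easy to see first-hand; a few read-only commands sketch them (the httpd examples assume a typical RHEL-family targeted policy):&lt;/p&gt;

```shell
# Sketch: inspecting SELinux contexts (requires an SELinux-enabled system).
getenforce                 # Enforcing, Permissive, or Disabled
id -Z                      # context of the current user
ls -Z /var/www/html        # type contexts (objects) on files
ps -eZ | grep httpd        # domain contexts (subjects) of processes

# Contexts read user:role:type:level, e.g. system_u:object_r:httpd_sys_content_t:s0
# Relabel a file's type, then restore the policy default:
chcon -t httpd_sys_content_t /var/www/html/index.html
restorecon -v /var/www/html/index.html
```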

&lt;p&gt;Bye for now and thank you for reading.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes Architecture: Demystifying etcd Data Store.</title>
      <dc:creator>Kelvin Onuchukwu</dc:creator>
      <pubDate>Thu, 15 Sep 2022 13:53:18 +0000</pubDate>
      <link>https://dev.to/kelvinskell/kubernetes-architecture-demystifying-etcd-data-store-2jjk</link>
      <guid>https://dev.to/kelvinskell/kubernetes-architecture-demystifying-etcd-data-store-2jjk</guid>
      <description>&lt;p&gt;The kubernetes API server is the gateway to the kubernetes cluster. This is the primary medium through which we interact with our cluster. The front end of the kubernetes control plane. It is also the medium of communication between resources on the control node and those on the worker node. Nothing escapes the Kube API server. All communication passes through it.&lt;/p&gt;

&lt;p&gt;However, anybody who has worked with Kubernetes long enough knows that the API server is stateless. This is where etcd comes in. The etcd data store is where cluster configuration is stored. All configurations and secrets are persisted in the etcd data store in order to maintain state. While this is obvious, what is not so obvious is that extra steps must be taken, especially in a production environment, to protect this data and ensure its continuous availability. etcd is typically deployed as a pod on the control node and has its default data directory at /var/lib/etcd.&lt;/p&gt;

&lt;p&gt;This is why knowing how to back up and restore the etcd data store becomes important. For instance, you may need to recover from an administrative error, such as accidentally deleting the data store, or perhaps you lost the server on which etcd was hosted. Knowing how to back up and restore your etcd server is a very important skill to add to your skillset.&lt;/p&gt;

&lt;p&gt;For starters, etcdctl is the command line client for interacting with etcd. It also provides backup and restore capabilities for etcd data stores. When you back up etcd, it creates a snapshot file containing the complete state of the data stored in etcd. etcdctl is not installed by default, so you may need to download it first: you can either download the binary from GitHub or run the image as a container.&lt;/p&gt;
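&lt;p&gt;As a sketch, backing up and restoring with etcdctl looks roughly like this; the certificate paths are kubeadm defaults and should be adjusted for your cluster.&lt;/p&gt;

```shell
# Sketch: back up and restore etcd with etcdctl v3.
# Certificate paths below are kubeadm defaults; adjust for your cluster.
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot:
ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db --write-out=table

# Restore into a fresh data directory, then point etcd at it:
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
    --data-dir /var/lib/etcd-from-backup
```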

&lt;p&gt;Before making a backup of your etcd, you need a backup plan, and there are a few factors to consider. The first is encrypting your backup file: since snapshots are unencrypted by default, you need to ensure they are encrypted after backup. The second is an offsite location for storing your backup file(s); you do not want to lose your backup file alongside your etcd data store if the server goes down or you lose access to your cluster.&lt;/p&gt;
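&lt;p&gt;A minimal sketch of both considerations, assuming gpg is available and an S3 bucket exists (the bucket name is illustrative):&lt;/p&gt;

```shell
# Sketch: encrypt the snapshot symmetrically, then copy it off-site.
# The bucket name is illustrative; gpg will prompt for a passphrase.
gpg --symmetric --cipher-algo AES256 /backup/etcd-snapshot.db
aws s3 cp /backup/etcd-snapshot.db.gpg "s3://my-etcd-backups/$(date +%F)/"
```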

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
