<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matt</title>
    <description>The latest articles on DEV Community by Matt (@mlevenson88).</description>
    <link>https://dev.to/mlevenson88</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1146446%2Ff0749759-e4ba-43c9-b8cc-35059a96ea37.jpg</url>
      <title>DEV Community: Matt</title>
      <link>https://dev.to/mlevenson88</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mlevenson88"/>
    <language>en</language>
    <item>
      <title>Static to Elastic Web App</title>
      <dc:creator>Matt</dc:creator>
      <pubDate>Sat, 26 Aug 2023 15:12:39 +0000</pubDate>
      <link>https://dev.to/mlevenson88/static-to-elastic-web-app-1njd</link>
      <guid>https://dev.to/mlevenson88/static-to-elastic-web-app-1njd</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;The Web App - Single Server to Elastic Evolution is a project created by Adrian Cantrill of &lt;a href="https://learn.cantrill.io/"&gt;https://learn.cantrill.io/&lt;/a&gt; which is a popular resource for people looking to earn their AWS certifications. In this project we evolve the architecture of a WordPress web app deployed on a single EC2 instance into a scalable and resilient architecture.&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 1 - Setup the environment and manually build WordPress
&lt;/h4&gt;

&lt;p&gt;I used a CloudFormation template to set up the base 3-tier architecture: one VPC spanning three AZs in us-east.&lt;/p&gt;

&lt;p&gt;Next I manually launched the EC2 instance that would host WordPress.&lt;/p&gt;

&lt;p&gt;Storing configuration information in the SSM Parameter Store scales much better than attempting to script it in some way, so in this sub-section I created parameters to store the important configuration items for the platform I was building.&lt;/p&gt;

&lt;p&gt;I mapped environment variables on the EC2 instance to the parameters I created above. I did this step manually so that later, when I use IaC, I can appreciate the benefit it brings.&lt;/p&gt;
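&lt;p&gt;As a sketch of what this looked like (parameter names and values here are hypothetical stand-ins for my own naming scheme), creating and reading parameters from the CLI goes something like:&lt;/p&gt;

```shell
# Store configuration items centrally (names/values are placeholders)
aws ssm put-parameter --name /Wordpress/DBUser --type String --value "wordpressuser"
aws ssm put-parameter --name /Wordpress/DBPassword --type SecureString --value "example-password"

# On the EC2 instance, map a parameter into an environment variable
DBUSER=$(aws ssm get-parameter --name /Wordpress/DBUser \
  --query 'Parameter.Value' --output text)
DBPASSWORD=$(aws ssm get-parameter --name /Wordpress/DBPassword --with-decryption \
  --query 'Parameter.Value' --output text)
```

&lt;p&gt;SecureString parameters need --with-decryption when read back, which is why the password line differs.&lt;/p&gt;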

&lt;p&gt;I ran commands to update the OS on the instance manually and installed MariaDB, the Apache web server, wget, some libraries, and a stress test utility.&lt;/p&gt;

&lt;p&gt;I set the DB and HTTP server to start automatically when the EC2 instance boots and set the MariaDB root password to the value from the parameters initialized earlier.&lt;/p&gt;

&lt;p&gt;I manually downloaded and installed WordPress, then created the database and initialized its settings manually. I copied the instance's IPv4 address from the EC2 console, logged into WordPress, and made a post to confirm the solution was working.&lt;/p&gt;
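&lt;p&gt;The manual build boiled down to commands along these lines — a sketch assuming Amazon Linux 2; package names vary by distro:&lt;/p&gt;

```shell
# Update the OS and install the web/database stack plus a stress-test utility
sudo yum -y update
sudo yum -y install httpd mariadb-server wget stress
sudo amazon-linux-extras install -y php7.4   # PHP packages WordPress needs

# Start both services now and on every future boot
sudo systemctl enable --now httpd mariadb

# Set the MariaDB root password (in practice, the value from Parameter Store)
sudo mysqladmin -u root password "$DBROOTPASSWORD"

# Download WordPress into the web root and unpack it
sudo wget -q https://wordpress.org/latest.tar.gz -P /var/www/html
cd /var/www/html &amp;&amp; sudo tar -xzf latest.tar.gz --strip-components=1
```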

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g3n3tjsp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vdmaeexunfqfxt82gqvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g3n3tjsp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vdmaeexunfqfxt82gqvj.png" alt="Image description" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Limitations of this implementation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;App and DB configured manually&lt;/li&gt;
&lt;li&gt;App and DB on the same instance so they can't scale separately&lt;/li&gt;
&lt;li&gt;Content is also stored locally in the instance so it can't scale&lt;/li&gt;
&lt;li&gt;User connections go directly to the EC2 instance&lt;/li&gt;
&lt;li&gt;The instance's IP address is stored statically in the DB by WordPress, so if I stop the instance and it is assigned a new public IP, the site breaks because it still references the old IP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The architecture at the end of this stage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TUoKFonF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/68i9bvf360tqh5rjz2ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TUoKFonF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/68i9bvf360tqh5rjz2ti.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 2 - Automate the build using a Launch Template
&lt;/h4&gt;

&lt;p&gt;I created an EC2 launch template so I don't have to repeat the manual steps for setting up EC2 with WordPress and the DB from Stage 1.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This template can have versions which we will iterate on as time goes on&lt;/li&gt;
&lt;li&gt;In the User Data section I can paste all the commands I used in the manual step to have it automatically initialized on startup&lt;/li&gt;
&lt;/ul&gt;
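&lt;p&gt;Creating the launch template can also be sketched from the CLI (AMI ID, instance type, and names below are placeholders):&lt;/p&gt;

```shell
# The same bootstrap commands from Stage 1, saved as user data
cat &gt; userdata.sh &lt;&lt;'EOF'
#!/bin/bash
yum -y update
yum -y install httpd mariadb-server wget stress
systemctl enable --now httpd mariadb
# ...remaining WordPress install steps from Stage 1...
EOF

# Create the launch template (placeholder AMI and instance type)
aws ec2 create-launch-template \
  --launch-template-name Wordpress \
  --launch-template-data "{\"ImageId\":\"ami-12345678\",\"InstanceType\":\"t2.micro\",\"UserData\":\"$(base64 -w0 userdata.sh)\"}"
```

&lt;p&gt;Later iterations become new versions via aws ec2 create-launch-template-version rather than new templates.&lt;/p&gt;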

&lt;p&gt;The same limitations as the previous stage apply, except the build of the WordPress instance is now automated via the launch template.&lt;/p&gt;

&lt;p&gt;The architecture at the end of this stage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p2lEuk9j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nm3k648o47onjnep4exp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p2lEuk9j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nm3k648o47onjnep4exp.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 3 - Split out the DB into RDS and Update the LT
&lt;/h4&gt;

&lt;p&gt;I created a DB subnet group so RDS can choose from a range of subnets to place its databases in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Three subnets across three AZs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next I created the RDS instance and migrated the data from EC2 to RDS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To do this I connected to the EC2 instance with Session Manager and exported the data from the local MariaDB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KPc_ohfv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y75xcr69027tk049ppff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KPc_ohfv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y75xcr69027tk049ppff.png" alt="Image description" width="576" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I restored the exported SQL file into my RDS instance and changed the endpoint to point to RDS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Then I ran the SQL command to import the exported SQL file into RDS&lt;/li&gt;
&lt;li&gt;Next I pointed the WordPress site to RDS instead of the local MariaDB and disabled MariaDB&lt;/li&gt;
&lt;li&gt;I went back to EC2, opened the IPv4 address, and could see the site still hosted, except now it's using RDS

&lt;ul&gt;
&lt;li&gt;To be clear, the media and the WordPress post data are now stored separately: the images are still in the local EC2 folder called wp-content, while the post data has been migrated off EC2 to RDS&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Finally I updated the launch template User Data section to remove references to MariaDB and set the new template as the default&lt;/li&gt;
&lt;/ul&gt;
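&lt;p&gt;The export, import, and repoint steps can be sketched like this — the DB name, user, and endpoint are placeholders for the values held in Parameter Store:&lt;/p&gt;

```shell
DB_NAME="wordpressdb"
RDS_ENDPOINT="wordpressdb.example.us-east-1.rds.amazonaws.com"

# Export the local MariaDB database from the EC2 instance
mysqldump -u root -p "$DB_NAME" &gt; wordpress-export.sql

# Import the dump into the RDS instance
mysql -h "$RDS_ENDPOINT" -u wordpressuser -p "$DB_NAME" &lt; wordpress-export.sql

# Point WordPress at RDS instead of localhost, then retire the local DB
sudo sed -i "s/'localhost'/'$RDS_ENDPOINT'/" /var/www/html/wp-config.php
sudo systemctl disable --now mariadb
```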

&lt;p&gt;The architecture at the end of this stage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--coh3DZI_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xm2t18ox82a5ltggyik4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--coh3DZI_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xm2t18ox82a5ltggyik4.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 4 - Split out the WP filesystem into EFS and Update the LT
&lt;/h4&gt;

&lt;p&gt;I still have the media (uploaded images) for WordPress stored locally in a folder called wp-content on the EC2 instance. Next I will migrate that data to EFS.&lt;/p&gt;

&lt;p&gt;I created an EFS manually and added the EFS ID to the parameter store.&lt;/p&gt;

&lt;p&gt;I went into the running EC2 instance and installed a package needed to connect to EFS.&lt;/p&gt;

&lt;p&gt;I added a line to the fstab file to mount the EFS volume onto the wp-content folder each time the instance starts.&lt;/p&gt;

&lt;p&gt;I rebooted the EC2 instance to verify the EFS file system mounts automatically, which it did.&lt;/p&gt;
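&lt;p&gt;For reference, the package install and fstab entry look roughly like this (fs-12345678 stands in for the real EFS ID from Parameter Store):&lt;/p&gt;

```shell
# Mount helper needed to connect to EFS
sudo yum -y install amazon-efs-utils

# Persistent mount of the EFS file system over wp-content
echo "fs-12345678:/ /var/www/html/wp-content efs _netdev,tls 0 0" | sudo tee -a /etc/fstab

# Mount everything listed in fstab now, then verify
sudo mount -a
df -h /var/www/html/wp-content
```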

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2xEcDsaL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6wn5jr81w023z2cxu5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2xEcDsaL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6wn5jr81w023z2cxu5x.png" alt="Image description" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I migrated the images to the folder where the EFS file system is mounted; now the EC2 instance is decoupled and can scale separately from the media and the posts.&lt;/p&gt;

&lt;p&gt;Finally I updated the launch template's User Data section to configure EFS automatically, performing the steps above, and set version 3 of the LT as the default.&lt;/p&gt;

&lt;p&gt;The architecture at the end of this stage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N6efDYgv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/68kzbms4bpoq8s2yax0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N6efDYgv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/68kzbms4bpoq8s2yax0w.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 5 - Enable elasticity via an ASG &amp;amp; ALB and fix WordPress (hardcoded WPHOME)
&lt;/h4&gt;

&lt;p&gt;Users can still connect directly to the EC2 instance, there are no health checks or auto-healing capabilities, and the instance's IP address is hardcoded into the DB.&lt;/p&gt;

&lt;p&gt;I created a load balancer to solve this. The LB's DNS name will be used in place of the EC2 IP address, so to map the DNS name to each instance I created a parameter for the LB DNS.&lt;/p&gt;

&lt;p&gt;I created an ASG and attached it to an existing LB with a desired capacity of 1.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I enabled ELB health checks and metric collection with CloudWatch&lt;/li&gt;
&lt;/ul&gt;
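&lt;p&gt;A CLI sketch of the ASG creation — names, subnet IDs, and the target group ARN are placeholders:&lt;/p&gt;

```shell
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name WordpressASG \
  --launch-template LaunchTemplateName=Wordpress,Version='$Default' \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-aaa111,subnet-bbb222,subnet-ccc333" \
  --target-group-arns "$TG_ARN" \
  --health-check-type ELB --health-check-grace-period 300
```

&lt;p&gt;Setting --health-check-type to ELB is what lets the ASG replace instances the load balancer reports as unhealthy, not just ones that fail EC2 status checks.&lt;/p&gt;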

&lt;p&gt;I terminated the old EC2 instance I had created manually and watched the ASG automatically create a replacement to match the desired capacity I defined earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mu2A5cUM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/undfgyxxidjsjajxtjop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mu2A5cUM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/undfgyxxidjsjajxtjop.png" alt="Image description" width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wbtkBHfd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/961g71yscys5mev919pp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wbtkBHfd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/961g71yscys5mev919pp.png" alt="Image description" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I want a more dynamic scaling policy based on CPU:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I created a CloudWatch alarm that fires when average CPU utilization is above 40%&lt;/li&gt;
&lt;li&gt;I created a CloudWatch alarm that fires when average CPU utilization falls below a lower threshold&lt;/li&gt;
&lt;li&gt;Then I updated the ASG to have a max capacity of 3 instances&lt;/li&gt;
&lt;/ul&gt;
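&lt;p&gt;The high-CPU side of that can be sketched as follows (the low-CPU alarm mirrors it with a scale-in policy; names and thresholds are placeholders):&lt;/p&gt;

```shell
# Scale-out policy: add one instance when triggered
POLICY_ARN=$(aws autoscaling put-scaling-policy \
  --auto-scaling-group-name WordpressASG \
  --policy-name WordpressHighCPU \
  --adjustment-type ChangeInCapacity --scaling-adjustment 1 \
  --query PolicyARN --output text)

# Alarm on average CPU above 40% across the ASG, wired to the policy
aws cloudwatch put-metric-alarm --alarm-name WordpressHighCPU \
  --namespace AWS/EC2 --metric-name CPUUtilization --statistic Average \
  --dimensions Name=AutoScalingGroupName,Value=WordpressASG \
  --period 300 --evaluation-periods 1 \
  --threshold 40 --comparison-operator GreaterThanThreshold \
  --alarm-actions "$POLICY_ARN"
```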

&lt;p&gt;I connected to the ASG's EC2 instance and ran the stress command to test scaling. The ASG detected the high CPU and provisioned a new EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c5ma6zkk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffrx6l05g9bosxgbowt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c5ma6zkk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffrx6l05g9bosxgbowt5.png" alt="Image description" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then terminated this instance and watched the ASG detect the failure via health checks and automatically add another instance, demonstrating self-healing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ORx0kgt5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ydoqdvq6hvm68zw6zxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ORx0kgt5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ydoqdvq6hvm68zw6zxa.png" alt="Image description" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final architecture for this project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DFT-ncU0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1nykehwfp7vdaw29fxw0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DFT-ncU0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1nykehwfp7vdaw29fxw0.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Overall this was another great hands-on experience. It helped reinforce VPC and subnet concepts since I split the web app into a 3-tier architecture. It demonstrated the value of a launch template in avoiding manual EC2 configuration. It taught me how to migrate data from one database to another in a different subnet, and how EFS is useful for serving static files to multiple EC2 instances. Finally, it demonstrated the power of ASGs and ALBs by making the solution elastic and enabling health checks and self-healing.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>CodeCommit, Build, Deploy &amp; Pipeline using containers and ECS</title>
      <dc:creator>Matt</dc:creator>
      <pubDate>Sat, 26 Aug 2023 14:49:59 +0000</pubDate>
      <link>https://dev.to/mlevenson88/codecommit-build-deploy-pipeline-using-containers-and-ecs-4ai3</link>
      <guid>https://dev.to/mlevenson88/codecommit-build-deploy-pipeline-using-containers-and-ecs-4ai3</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;The CodePipeline project is a project created by Adrian Cantrill of &lt;a href="https://learn.cantrill.io/"&gt;https://learn.cantrill.io/&lt;/a&gt;, a popular resource for people looking to earn their AWS certifications. In this project I created a CI/CD pipeline: CodeCommit stores the static web app and Dockerfile, CodeBuild builds the Docker images and stores them in ECR, and an ECS cluster, target groups, and an ALB are configured to deploy to ECS Fargate.&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 1 : Configure Security &amp;amp; Create a CodeCommit Repo
&lt;/h4&gt;

&lt;p&gt;First I connected to CodeCommit through SSH, created a repo in CodeCommit, and cloned it to my local drive.&lt;/p&gt;
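&lt;p&gt;The SSH setup amounts to uploading a public key to the IAM user and telling SSH to present it; the Key ID, key file, and repo name below are placeholders:&lt;/p&gt;

```shell
# Tell SSH which key and CodeCommit SSH Key ID to use for CodeCommit hosts
cat &gt;&gt; ~/.ssh/config &lt;&lt;'EOF'
Host git-codecommit.*.amazonaws.com
  User APKAEXAMPLEKEYID
  IdentityFile ~/.ssh/codecommit_rsa
EOF

# Clone the new repo over SSH
git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-pipeline-repo
```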

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Tj0bQ5s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ubzfodfuplm422vqh8yx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Tj0bQ5s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ubzfodfuplm422vqh8yx.png" alt="Image description" width="562" height="95"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture at the end of this stage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iXhwVXgD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dm33gzwxnmwl02ddfizt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iXhwVXgD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dm33gzwxnmwl02ddfizt.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 2 : Configure CodeBuild to clone the repo, create a container image and store on ECR
&lt;/h4&gt;

&lt;p&gt;I set up ECR to store the output of CodeBuild, then created a CodeBuild project and updated its role to give it permission to interact with ECR.&lt;/p&gt;

&lt;p&gt;I created a buildspec.yml and pushed it to the repo. The buildspec.yml file tells CodeBuild how to build your code: the steps involved, what the build needs, any testing, and what to do with the output (artifacts).&lt;/p&gt;
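&lt;p&gt;The core of the buildspec's phases amounts to an ECR login, a docker build, and a push, something like the following — account ID, region, and repo name are placeholders:&lt;/p&gt;

```shell
# Authenticate Docker to the private ECR registry
aws ecr get-login-password --region us-east-1 | docker login \
  --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build the image from the Dockerfile in the repo and push it to ECR
docker build -t webapp .
docker tag webapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest
```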

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wl0AqkK8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kz79kzsti53iun2d7cf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wl0AqkK8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kz79kzsti53iun2d7cf7.png" alt="Image description" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I ran the CodeBuild project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UR_pSXl8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfe6bsu60phr2h5duol4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UR_pSXl8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfe6bsu60phr2h5duol4.png" alt="Image description" width="761" height="925"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I could see the Docker image CodeBuild built in the ECR repository I created.&lt;/p&gt;

&lt;p&gt;I created an EC2 instance and installed Docker. Next I pulled the Docker image that CodeBuild had pushed to ECR down to the EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--abQHrsb3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9i8fu9ijum1msfbzge8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--abQHrsb3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9i8fu9ijum1msfbzge8r.png" alt="Image description" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I ran the Docker container on the EC2 instance and then browsed to the instance's IPv4 address to confirm the webpage was running successfully.&lt;/p&gt;
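&lt;p&gt;On the EC2 instance that looked roughly like this (same placeholder registry and repo as before):&lt;/p&gt;

```shell
# Log in to ECR, pull the image CodeBuild pushed, and run it on port 80
aws ecr get-login-password --region us-east-1 | docker login \
  --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest
docker run -d -p 80:80 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest
```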

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DxjxiAWC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/28ov3sjdgv6pgmlwz3u3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DxjxiAWC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/28ov3sjdgv6pgmlwz3u3.png" alt="Image description" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture at the end of this stage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yOGF7PGG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwzk822kkxxwwnyidlu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yOGF7PGG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwzk822kkxxwwnyidlu7.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 3 : Configure a CodePipeline with commit and build steps to automate build on commit
&lt;/h4&gt;

&lt;p&gt;In this section I automated the pipeline so that when code is committed to CodeCommit it is automatically built by CodeBuild and pushed to ECR. I still haven't automated the deployment part (running the image), though.&lt;/p&gt;

&lt;p&gt;I created a pipeline in CodePipeline and ran it. Success.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PhdGXbus--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2jv06npzf644aask7j1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PhdGXbus--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2jv06npzf644aask7j1.png" alt="Image description" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I updated the buildspec.yml to get the commit hash from CodeCommit and put it in a variable along with the image tag.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ran docker push with the latest tag and docker push with the commit-hash tag&lt;/li&gt;
&lt;li&gt;Finally, created an imagedefinitions.json, which will be used by the deploy stage later to deploy to ECS&lt;/li&gt;
&lt;/ul&gt;
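&lt;p&gt;The imagedefinitions.json step is just string assembly inside the buildspec; a self-contained sketch, where the container name, repo URI, and commit hash are hypothetical stand-ins for the values CodeBuild provides:&lt;/p&gt;

```shell
# In the real buildspec the hash comes from CODEBUILD_RESOLVED_SOURCE_VERSION
COMMIT_HASH="abc1234"
IMAGE_REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp"
CONTAINER_NAME="webapp"

# Emit the JSON the ECS deploy stage consumes as its artifact
printf '[{"name":"%s","imageUri":"%s"}]' \
  "$CONTAINER_NAME" "$IMAGE_REPO:$COMMIT_HASH" | tee imagedefinitions.json
```

&lt;p&gt;The name field must match the container name in the ECS task definition, or the deploy stage won't know which container to update.&lt;/p&gt;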

&lt;p&gt;I committed this new YAML file, and in ECR I can now see the newest image tagged both 'latest' and with the commit ID. If I pushed another build, the 'latest' tag would move to the new image, leaving the old one with just its commit ID tag.&lt;/p&gt;

&lt;p&gt;In S3 I can see the imagedefinitions.json created during the automatic build, which I will use to automate the deploy to ECS next. It contains the container name and imageUri.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O3_e8VCq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cd7w5lub3s3csvijlr2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O3_e8VCq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cd7w5lub3s3csvijlr2u.png" alt="Image description" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture at the end of this stage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--shd2cHPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/05q1z0nopn9ypt9mb6ak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--shd2cHPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/05q1z0nopn9ypt9mb6ak.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 4 : Create an ECS Cluster, TG's , ALB and configure the code pipeline for deployment to ECS Fargate
&lt;/h4&gt;

&lt;p&gt;I added a deploy stage targeting ECS. First I created an ALB to attach to the ECS cluster to perform scaling, then I created the ECS cluster itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nsWGr7pQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71af0dry776kejnve0ta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nsWGr7pQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71af0dry776kejnve0ta.png" alt="Image description" width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I tested that the webpage was being hosted. Success. Next I added a stage to CodePipeline to deploy the image, then modified the HTML to check that the change propagated to the web app.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When configuring the stage, select BuildArtifact, which is the S3 artifact holding the JSON file that contains the container name and imageUri needed to deploy the built image.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZfNoSffI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ldlxwnhn24i6mwmb8zlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZfNoSffI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ldlxwnhn24i6mwmb8zlk.png" alt="Image description" width="800" height="848"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final pipeline: whenever I commit something to CodeCommit, it runs a build, generates a Docker image, stores that image in ECR, and deploys it to the ECS service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aXJ-QEWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ygivqt9cm9uq7l9885f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aXJ-QEWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ygivqt9cm9uq7l9885f.png" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final architecture for this project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kbW5OGtv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3das5sdql8nbn80ft50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kbW5OGtv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3das5sdql8nbn80ft50.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Overall this was some great hands-on experience to go along with the theory I already had. It helped reinforce CI/CD concepts by breaking them down into steps. It also gave some insight into what makes ECS Fargate powerful, since I didn't have to do all the EC2 management I initially did to get the container deployed on EC2.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Cloud Resume Challenge</title>
      <dc:creator>Matt</dc:creator>
      <pubDate>Sat, 26 Aug 2023 04:24:44 +0000</pubDate>
      <link>https://dev.to/mlevenson88/the-cloud-resume-challenge-22a8</link>
      <guid>https://dev.to/mlevenson88/the-cloud-resume-challenge-22a8</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;The Cloud Resume Challenge by Forrest Brazeal is a project framework that guides participants as they build a resume website fully hosted in the cloud, utilizing many different AWS services. It isn't a step-by-step tutorial that shows you how to do every part, but more of an assignment that tells you what to implement next without saying exactly how. It requires participants to be resourceful, make mistakes, and, in the end, helps reinforce the theory they have learned in the classroom.&lt;/p&gt;

&lt;h1&gt;
  
  
  Part 1: Cloud Website
&lt;/h1&gt;

&lt;p&gt;In this section I set up my HTML/CSS, host it in S3, point a CDN at it, assign the CDN a custom DNS name, and secure it with an SSL certificate.&lt;/p&gt;

&lt;h4&gt;
  
  
  HTML, CSS
&lt;/h4&gt;

&lt;p&gt;My HTML, CSS, and JavaScript skills are enough to be dangerous, but I'm no front-end dev. I found a free template and spent admittedly way too much time customizing it to my liking before starting the main purpose of this challenge: deploying and building on this solution in the cloud.&lt;/p&gt;

&lt;h4&gt;
  
  
  Set up MFA/IAM
&lt;/h4&gt;

&lt;p&gt;I set up MFA using my phone on my root account and then created an IAM user to use day to day, since it's best practice not to use the root account.&lt;/p&gt;

&lt;p&gt;I installed and set up AWS Vault on my local machine. AWS Vault is a tool to securely store and access AWS credentials in a development environment. AWS Vault stores IAM credentials in your operating system's secure keystore and then generates temporary credentials from those to expose to your shell and applications. It's designed to be complementary to the AWS CLI tools.&lt;/p&gt;
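&lt;p&gt;Day-to-day usage comes down to two commands (the profile name here is whatever you choose):&lt;/p&gt;

```shell
# Store the IAM user's access keys in the OS keystore (prompts for them)
aws-vault add my-user

# Run any AWS CLI command with short-lived credentials from STS
aws-vault exec my-user -- aws s3 ls
```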

&lt;h4&gt;
  
  
  Static Website using S3
&lt;/h4&gt;

&lt;p&gt;I wanted to set up IaC using AWS SAM now rather than later, to get practice using it to set up my services.&lt;/p&gt;

&lt;p&gt;First I needed to install the SAM CLI.&lt;/p&gt;

&lt;p&gt;Next I initialized a SAM template and then set the IAM policy.&lt;/p&gt;

&lt;p&gt;Once that was created I changed into the new directory and ran sam build, but it had problems finding my python.exe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build Failed Error: PythonPipBuilder:Validation - Binary validation failed for python, searched for python in following locations  : ['C:\Users\mleve\AppData\Local\Microsoft\WindowsApps\python.EXE', 'C:\Users\mleve\AppData\Local\Microsoft\WindowsApps\python3.EXE'] which did not satisfy constraints for runtime: python3.8. Do you have python for runtime: python3.8 on your PATH?&lt;/li&gt;
&lt;li&gt;I had to install Python 3.8.10 and checkbox the setting to Add Python 3.8 to PATH which fixed the issue&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I deployed SAM with 'aws-vault exec my-user --no-session -- sam deploy --guided'.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I got another error: Failed to create managed resources: An error occurred (InvalidClientTokenId) when calling the CreateChangeSet operation: The security token included in the request is invalid.&lt;/li&gt;
&lt;li&gt;To resolve this I deleted my old access key and created a new one and re-initialized my-user in the aws-vault&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now if I go to CloudFormation (and switch to the correct region) I will see two new stacks created.&lt;/p&gt;

&lt;p&gt;I wanted to create an S3 bucket using SAM, so I added the bucket resource to the Resources section of my template.yml, re-ran sam build to rebuild the template, then ran 'aws-vault exec my-user --no-session -- sam deploy'. The --guided part of the command is no longer needed because those settings were saved to a config file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Going forward, both steps can be done in one command: sam build &amp;amp;&amp;amp; aws-vault exec my-user --no-session -- sam deploy&lt;/li&gt;
&lt;/ul&gt;
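&lt;p&gt;For reference, a minimal sketch of the kind of bucket resource this adds to the Resources section (the logical and bucket names here are illustrative, not the ones from my template):&lt;/p&gt;

```yaml
Resources:
  # Minimal S3 bucket resource; BucketName is optional but must be globally unique
  ResumeBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: cloud-resume-example
```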

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z4AeRdWv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucyhtr5sdup4ejde3h5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z4AeRdWv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucyhtr5sdup4ejde3h5k.png" alt="Image description" width="355" height="67"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step was to build on this config, but I kept getting ACL access errors.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bucket cannot have ACLs set with ObjectOwnership's BucketOwnerEnforced setting (Service: Amazon S3; Status Code: 400; Error Code: InvalidBucketAclWithObjectOwnership)

&lt;ul&gt;
&lt;li&gt;After some research it turns out "This is a legacy property, and it is not recommended for most use cases. A majority of modern use cases in Amazon S3 no longer require the use of ACLs, and we recommend that you keep ACLs disabled. For more information, see Controlling object ownership in the Amazon S3 User Guide."&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AIcDJBWH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/glp9mzlhpehd24hrf2zq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AIcDJBWH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/glp9mzlhpehd24hrf2zq.png" alt="Image description" width="653" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now on the Properties tab I can see static website hosting is enabled with this URL &lt;a href="http://cloud-resume-evenson.s3-website.us-east-2.amazonaws.com"&gt;http://cloud-resume-evenson.s3-website.us-east-2.amazonaws.com&lt;/a&gt;, but on the next deploy I got a new error: "API: s3:PutBucketPolicy Access Denied".&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;According to this post (&lt;a href="https://github.com/aws/aws-cdk/issues/25358"&gt;https://github.com/aws/aws-cdk/issues/25358&lt;/a&gt;) around April 2023 S3 has changed the default to ObjectOwnership: BucketOwnerEnforced, which means it is no longer possible to configure ACLs on the buckets and on objects by default.&lt;/li&gt;
&lt;li&gt;The Amazon docs show, in YAML, the settings needed to grant public access to an S3 bucket (&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html#aws-properties-s3-bucket--examples"&gt;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html#aws-properties-s3-bucket--examples&lt;/a&gt;). I added them to my template.yml file, which resolved the issue, made the bucket public, and attached my read policy correctly&lt;/li&gt;
&lt;/ul&gt;
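&lt;p&gt;Roughly, the settings from those docs look like this in a SAM/CloudFormation template (resource names are illustrative): the bucket gains a public-access configuration and website config, and a separate policy resource grants read access.&lt;/p&gt;

```yaml
  # Loosen the defaults so a public bucket policy is allowed
  ResumeBucket:
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicPolicy: false
        RestrictPublicBuckets: false
      WebsiteConfiguration:
        IndexDocument: index.html

  # Public read policy for the static site objects
  ResumeBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref ResumeBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: "*"
            Action: "s3:GetObject"
            Resource: !Sub "${ResumeBucket.Arn}/*"
```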

&lt;h4&gt;
  
  
  CDN
&lt;/h4&gt;

&lt;p&gt;Since I wanted to use IaC to build as much of this as possible instead of using the web console, I updated my template.yml file with a CloudFront distribution section and ran sam build &amp;amp;&amp;amp; sam deploy to create the CDN. This gave me a CDN in front of my S3 bucket at an auto-generated domain name that wasn't very pretty (&lt;a href="https://d2yhamyo6y5t0x.cloudfront.net"&gt;https://d2yhamyo6y5t0x.cloudfront.net&lt;/a&gt;).&lt;/p&gt;
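&lt;p&gt;For context, a pared-down sketch of what a CloudFront distribution section over an S3 origin can look like (logical names are illustrative, and this omits the custom domain and certificate added later):&lt;/p&gt;

```yaml
  # CloudFront distribution in front of the S3 bucket
  ResumeDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultRootObject: index.html
        Origins:
          - Id: s3-origin
            DomainName: !GetAtt ResumeBucket.RegionalDomainName
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: s3-origin
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
```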

&lt;p&gt;Now that the site is reachable through both the CDN and the S3 link, I removed the public access I granted earlier so the only way to reach the web app is through the CDN.&lt;/p&gt;

&lt;h4&gt;
  
  
  DNS/HTTPS
&lt;/h4&gt;

&lt;p&gt;To get a nicer-looking domain name I went to Route 53 and registered a domain for my website.&lt;/p&gt;

&lt;p&gt;I also wanted to secure the site, so I went to AWS Certificate Manager and created an SSL certificate for the domain I bought.&lt;/p&gt;

&lt;p&gt;I added a Route53 RecordSet to my YAML file with my new domain name and hosted zone ID and mapped it back to the CDN distribution defined earlier.&lt;/p&gt;

&lt;p&gt;To serve the CloudFront distribution from my custom domain, I also needed to add a CertificateManager section to the YAML file referencing the domain I purchased and the SSL certificate created for it.&lt;/p&gt;
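&lt;p&gt;A rough sketch of those two sections together (the domain name and hosted zone ID are placeholders; Z2FDTNDATAQYW2 is CloudFront's fixed hosted zone ID for alias targets):&lt;/p&gt;

```yaml
  # DNS-validated certificate for the custom domain
  SiteCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: example.com
      ValidationMethod: DNS

  # Alias record pointing the domain at the CloudFront distribution
  SiteRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z0000000EXAMPLE
      Name: example.com
      Type: A
      AliasTarget:
        DNSName: !GetAtt ResumeDistribution.DomainName
        # CloudFront's well-known hosted zone ID for alias records
        HostedZoneId: Z2FDTNDATAQYW2
```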

&lt;p&gt;Originally I created all my infra in us-east-2 and kept getting an error when trying to use SAM to attach a certificate to my CloudFront CDN. I tried creating the certificate in both us-east-1 and us-east-2; nothing seemed to work. The next day I deleted my whole stack and redeployed everything in us-east-1, and lo and behold it worked: CloudFront only accepts ACM certificates issued in us-east-1. IaC was already coming in handy.&lt;/p&gt;

&lt;h1&gt;
  
  
  Part 2: Serverless API
&lt;/h1&gt;

&lt;p&gt;In this section the challenge is to set up the infrastructure so I can later add JavaScript to the front end to track the number of visitors. To do this I set up a DynamoDB table to store the visit count, created a POST route on my API Gateway, and wrote some Python in a Lambda function to update the database.&lt;/p&gt;

&lt;h4&gt;
  
  
  Database
&lt;/h4&gt;

&lt;p&gt;Next I created a DynamoDB table. Since DynamoDB tables have no fixed schema, I only needed to define the key schema and its ID attribute.&lt;/p&gt;
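&lt;p&gt;A sketch of such a table resource (the table and attribute names are illustrative; SAM also offers AWS::Serverless::SimpleTable as a shortcut):&lt;/p&gt;

```yaml
  # Key-value table for the visit counter; only the key attribute needs a declared type
  VisitTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: visit-counter
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
```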

&lt;p&gt;I'll test the table once I set up the API.&lt;/p&gt;

&lt;h4&gt;
  
  
  API
&lt;/h4&gt;

&lt;p&gt;SAM already creates an API Gateway by default, so I just needed to define a POST method on the /visit route.&lt;/p&gt;
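&lt;p&gt;In SAM this is just an Api event on the function, which gets wired into the default (implicit) API Gateway; a sketch with illustrative names:&lt;/p&gt;

```yaml
  VisitFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: visit/
      Handler: app.lambda_handler
      Runtime: python3.8
      Events:
        VisitApi:
          Type: Api          # attaches to the implicit API Gateway
          Properties:
            Path: /visit
            Method: post
```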

&lt;p&gt;I wrote the Lambda in Python since I'm familiar with it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I used the get_item method to return the number of visits from the database&lt;/li&gt;
&lt;li&gt;I then incremented the counter by 1 and used the put_item method to update the value in the database&lt;/li&gt;
&lt;li&gt;Finally I return statusCode 200 with the visit_count in the body. I also set the CORS headers to allow all origins so the browser doesn't reject the response, since I didn't implement stricter CORS for this project&lt;/li&gt;
&lt;/ul&gt;
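&lt;p&gt;A sketch of a handler along those lines. The table and key names here are illustrative, and the table argument is something I allow injecting so the logic can be exercised locally; in Lambda it would come from boto3:&lt;/p&gt;

```python
import json

# Hypothetical table/key names for illustration; the real names live in template.yml.
TABLE_NAME = "visit-counter"
COUNTER_KEY = {"id": "visits"}

def lambda_handler(event, context, table=None):
    # In Lambda the table comes from boto3; injection keeps this testable locally.
    if table is None:
        import boto3  # provided by the Lambda runtime
        table = boto3.resource("dynamodb").Table(TABLE_NAME)

    # get_item returns no "Item" key until the counter exists, so default to 0
    resp = table.get_item(Key=COUNTER_KEY)
    count = int(resp.get("Item", {}).get("visit_count", 0))

    # Increment and write the new value back with put_item
    count += 1
    table.put_item(Item={**COUNTER_KEY, "visit_count": count})

    return {
        "statusCode": 200,
        # Wide-open CORS header, matching the shortcut taken in this project
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"visit_count": count}),
    }
```

Note that get_item followed by put_item is a read-modify-write, so concurrent visits can race; DynamoDB's update_item with an ADD update expression would increment atomically.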

&lt;h4&gt;
  
  
  Python
&lt;/h4&gt;

&lt;p&gt;I followed an example written in Golang and translated it to Python.&lt;/p&gt;

&lt;p&gt;I went to run a test event in Lambda and got the error below. After some research, it turns out urllib3 v2 requires OpenSSL 1.1.1+, which affects requests on Python runtimes &amp;gt;=3.7 and &amp;lt;3.10. I could either pin "requests &amp;gt;= 2.28.2, &amp;lt; 2.29.0" in my requirements.txt, which pulls in a version that works with the older OpenSSL, or bump my Python runtime from 3.8 to 3.10 in my YAML file and on my local machine. I chose to update requirements.txt, and that fixed the issue.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"errorMessage": "Unable to import module 'app': urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'OpenSSL 1.0.2k-fips 26 Jan 2017'. See: &lt;a href="https://github.com/urllib3/urllib3/issues/2168"&gt;https://github.com/urllib3/urllib3/issues/2168&lt;/a&gt;"&lt;/li&gt;
&lt;/ul&gt;
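&lt;p&gt;The corresponding pin in requirements.txt is just:&lt;/p&gt;

```
requests &gt;= 2.28.2, &lt; 2.29.0
```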

&lt;h1&gt;
  
  
  Part 3: Front End / Back End Integration
&lt;/h1&gt;

&lt;p&gt;In this section I added some JavaScript to invoke the API Gateway which will trigger the Lambda function and update the count in my database. I also added some unit tests to my Python code.&lt;/p&gt;

&lt;h4&gt;
  
  
  JavaScript
&lt;/h4&gt;

&lt;p&gt;This next section looks simple but took more time than I expected. I created an AJAX function to call my API endpoint and update the count in the HTML. At first I didn't realize I needed the jQuery script tag for the AJAX code to run. Then I accidentally gave the div and the span the same id, so my "Visitors:" label was also being replaced when the AJAX ran.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tests
&lt;/h4&gt;

&lt;p&gt;I created a simple unit test that checks for a 200 response code from the Lambda function and then verifies it returns the visit count in the body as a number, as expected.&lt;/p&gt;

&lt;p&gt;I had to update my JavaScript to parse this object: before, the Lambda returned the raw number, but now it returns {visit_count: 60}, which displayed as [object Object] in my HTML until the fix.&lt;/p&gt;

&lt;h1&gt;
  
  
  Part 4: Infrastructure as Code and CI/CD
&lt;/h1&gt;

&lt;p&gt;In this section you were supposed to set up the IaC, but I had been building it from the beginning, which I feel helped me iterate on the template incrementally since IaC is a new topic for me. I also set up a CI/CD pipeline to test, build, and deploy my infra on each commit. I used GitHub Actions because I was more familiar with it from previous projects and hadn't yet had a chance to try the CodePipeline service AWS provides.&lt;/p&gt;

&lt;h4&gt;
  
  
  Infrastructure as Code
&lt;/h4&gt;

&lt;p&gt;I've been using AWS SAM, which builds on CloudFormation under the hood, since the beginning. This has come in extremely handy but has also caused some headaches. There were times when I needed a fresh restart of my stack from a point where I knew the code was working; IaC made that easy, where doing it all manually would have been a pain. On the other hand, there were times when I didn't fully understand why I was getting errors, mainly in CloudFront, when trying to update my infra. That meant chasing down some pretty nondescript bugs: I would delete the stack and start over, adding one piece at a time until I found the culprit. This was especially true when adding my Lambda function with SAM, where the error message was so generic that I had to re-launch repeatedly before narrowing it down to an incorrect CodeUri path.&lt;/p&gt;

&lt;h4&gt;
  
  
  CI/CD
&lt;/h4&gt;

&lt;p&gt;First I set up GitHub Actions by creating .github/workflows/main.yml in the root of my project. I added an access key and secret access key from my IAM user to my GitHub Actions secrets so I can reference them in my main.yml file.&lt;/p&gt;

&lt;p&gt;The first job I automated was testing the Python Lambda code.&lt;/p&gt;

&lt;p&gt;Next I added a job that depends on the first one passing and runs sam build and sam deploy to deploy the infra.&lt;/p&gt;

&lt;p&gt;Finally, in the last block, if the previous jobs pass, I automate the front-end upload by having it sync the static files to the S3 bucket.&lt;/p&gt;
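&lt;p&gt;Putting the three jobs together, a trimmed-down sketch of the workflow (action versions, paths, and the bucket name are illustrative, not my exact file):&lt;/p&gt;

```yaml
# .github/workflows/main.yml (sketch)
name: deploy
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.8"
      - run: pip install -r requirements.txt pytest
      - run: pytest

  deploy-infra:
    needs: test          # only runs if the tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/setup-sam@v2
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - run: sam build
      - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset

  deploy-site:
    needs: deploy-infra
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - run: aws s3 sync ./site s3://cloud-resume-example --delete
```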

&lt;p&gt;At first it failed in the build-and-deploy infra block.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Wczb6Dc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pl9g0n85pehnhjxqi0ef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Wczb6Dc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pl9g0n85pehnhjxqi0ef.png" alt="Image description" width="617" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Adding --use-container to sam build solved this, and now it works.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;As someone new to the cloud this project definitely lived up to the "Challenge" part of the name. I want to give credit to the YouTube channels "Open Up The Cloud" and "Cumulus Cycles" because without them this journey would have been much longer and more frustrating. Overall I really enjoyed this project. I feel like I grew a lot as a developer and gained confidence in my cloud skills. I plan to keep this project updated to display for years to come.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
