<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: #developer #awsdeveloper</title>
    <description>The latest articles on DEV Community by #developer #awsdeveloper (@tikoosuraj).</description>
    <link>https://dev.to/tikoosuraj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1037571%2F79a6d616-b777-4bda-a942-eb4d726a9c7b.jpg</url>
      <title>DEV Community: #developer #awsdeveloper</title>
      <link>https://dev.to/tikoosuraj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tikoosuraj"/>
    <language>en</language>
    <item>
      <title>Effortlessly migrate your on-premise machines &amp; application to any Cloud platform</title>
      <dc:creator>#developer #awsdeveloper</dc:creator>
      <pubDate>Wed, 12 Apr 2023 08:57:55 +0000</pubDate>
      <link>https://dev.to/tikoosuraj/effortlessly-migrate-your-on-premise-machines-application-to-any-cloud-platform-4c9g</link>
      <guid>https://dev.to/tikoosuraj/effortlessly-migrate-your-on-premise-machines-application-to-any-cloud-platform-4c9g</guid>
      <description>&lt;h2&gt;
  
  
  Effortlessly migrate your on-premise machines &amp;amp; application to any Cloud platform
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rkgmXUhy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4200/0%2Auyhfvc_ZJxSo839d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rkgmXUhy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4200/0%2Auyhfvc_ZJxSo839d.png" alt="Image taken from AWS" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This blog is about how we can effortlessly migrate our on-prem machines and applications to the AWS cloud platform. To do this, we will use the &lt;strong&gt;CloudEndure&lt;/strong&gt; service, which handles the end-to-end migration for us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CloudEndure&lt;/strong&gt; is a SaaS offering that automates lift-and-shift (rehost) migrations, simplifying, expediting, and reducing the cost of moving applications to AWS.&lt;/p&gt;

&lt;p&gt;In this article, we will migrate virtual machines from one AWS region to another. In your case the source can vary; it could just as well be your on-premises environment. For this demonstration, we are doing a region-to-region migration.&lt;/p&gt;

&lt;p&gt;To get started, sign up for a &lt;strong&gt;CloudEndure&lt;/strong&gt; account. CloudEndure is a separate portal and is not available directly in the AWS console. Below is the URL for the portal.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://console.cloudendure.com/#/register/register"&gt;https://console.cloudendure.com/#/register/register&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o_UGt-EW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3810/1%2Abr0RHaXHiOvZYkwh-kUwrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o_UGt-EW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3810/1%2Abr0RHaXHiOvZYkwh-kUwrg.png" alt="" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once registration is done, you can log in to the portal directly. In the portal, we have created a project called &lt;strong&gt;DemoMigration&lt;/strong&gt;. For our use case, we have an Amazon Linux machine running in EU (&lt;strong&gt;Ireland&lt;/strong&gt;), and we will migrate it to another region, EU (&lt;strong&gt;Stockholm&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZfDmwIa1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3832/1%2AkuEzr2s_pMRPpcpqSQmJ5A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZfDmwIa1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3832/1%2AkuEzr2s_pMRPpcpqSQmJ5A.png" alt="Image by Surajtikoo" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudEndure consists of three parts: the source, the replication, and the target. Once you have set up the project, configure the settings below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Li1Qicif--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2194/1%2AlsIdXH5l750LEpZpKfL_Hg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Li1Qicif--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2194/1%2AlsIdXH5l750LEpZpKfL_Hg.png" alt="" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;REPLICATION SETTINGS&lt;/strong&gt; tab lets you define your Source and Target environments, as well as the default Replication Servers in the Staging Area of the Target infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5iVjqSCH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3390/1%2Aj8h-h6UJNhI_CnbhswA_BA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5iVjqSCH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3390/1%2Aj8h-h6UJNhI_CnbhswA_BA.png" alt="Image by Surajtikoo" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once all the required configuration is set up, install the CloudEndure agent on the source machine to initiate replication. In our case, the source machine is an Amazon Linux VM running in AWS with the Docker service installed. See more on installing the CloudEndure agent &lt;a href="https://docs.cloudendure.com/Content/Installing_the_CloudEndure_Agents/Installing_the_Agents/Installing_the_Agents.htm"&gt;here&lt;/a&gt;.&lt;/p&gt;
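
&lt;p&gt;As a rough sketch only (the installation token comes from your CloudEndure project, and the exact installer URL and flags should be confirmed against the linked documentation), installing the agent on a Linux source machine looks roughly like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Download the CloudEndure agent installer on the source machine
wget -O ./installer_linux.py https://console.cloudendure.com/installer_linux.py

# Run it with the installation token shown in your CloudEndure project
sudo python ./installer_linux.py -t YOUR_INSTALLATION_TOKEN --no-prompt
&lt;/code&gt;&lt;/pre&gt;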

&lt;p&gt;A replication instance will be launched on the target side, and the replicated data is stored as EBS snapshots.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c1WfGZNz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3008/1%2A-0hzxyatEsLiR6YbGCsNlQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c1WfGZNz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3008/1%2A-0hzxyatEsLiR6YbGCsNlQ.png" alt="" width="800" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before proceeding, make sure you have defined the blueprint for the target machine; the target machine will be created based on it. Once all the required configuration is in place, it is time to start the replication process and migrate the server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_jvCrX2L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3836/1%2AC6Fd3LUszpzcVl5Nd1gKhw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_jvCrX2L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3836/1%2AC6Fd3LUszpzcVl5Nd1gKhw.png" alt="Image by Surajtikoo" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudEndure gives us the option to launch the target machine in test mode or cutover mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test Mode-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each Source machine for which a Target machine is launched will be marked as having a test Target machine launched on this date.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CutOver Mode-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each Source machine for which a Target machine is launched will be marked as having a cutover Target machine launched on this date.&lt;/p&gt;

&lt;p&gt;Once the migration is completed, we can see the machine being created in the target region with the same set of configurations.&lt;/p&gt;

&lt;p&gt;This approach is very useful when an organization needs to migrate a large number of servers and applications to the cloud.&lt;/p&gt;

&lt;p&gt;Below you can find helpful documentation on best practices to become more familiar with the CloudEndure Migration service: &lt;a href="https://docs.cloudendure.com/Content/Configuring_and_Running_Migration/Migration_Best_Practices/Migration_Best_Practices.htm"&gt;https://docs.cloudendure.com/Content/Configuring_and_Running_Migration/Migration_Best_Practices/Migration_Best_Practices.htm&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Restructuring NAT Gateway Usage and replacing it with Squid Proxy</title>
      <dc:creator>#developer #awsdeveloper</dc:creator>
      <pubDate>Thu, 06 Apr 2023 06:06:29 +0000</pubDate>
      <link>https://dev.to/tikoosuraj/restructuring-nat-gateway-usage-and-replacing-it-with-squid-proxy-551i</link>
      <guid>https://dev.to/tikoosuraj/restructuring-nat-gateway-usage-and-replacing-it-with-squid-proxy-551i</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Restructuring NAT Gateway Usage and replacing it with Squid Proxy&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We generally deploy our instances in a private network within our VPC so that they can be accessed securely. However, these instances still require an outbound connection to fetch the latest OS security patches, and some applications may need to reach third-party URLs for various reasons. For this internet access we most often use cloud-provided services like the NAT Gateway.&lt;/p&gt;

&lt;p&gt;The diagram below depicts the typical deployment and usage of a NAT Gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IILoIWdF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2AB5d-w4Apdkg_Wz-09yoCig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IILoIWdF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2AB5d-w4Apdkg_Wz-09yoCig.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The NAT Gateway is highly available and takes most of the management responsibility off your hands. But things become complex when an organization has several AWS accounts and a NAT Gateway has to be deployed and managed in each of them.&lt;/p&gt;

&lt;p&gt;Below are a few disadvantages of using the NAT Gateway:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Cost increases with the number of NAT Gateways deployed in each account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is no ruleset to allow or deny outbound traffic, which means even access to malicious websites cannot be blocked.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Detailed logging is missing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logs are not collected at a centralized location, and there is no central control over them.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Solution Used&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A centralized proxy solution is used to overcome the above-mentioned challenges&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Squid Proxy(Open Source) is used for the outbound connection which gives more control at a centralized place&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The solution is highly available, scalable, and supports multiple tenants.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This will save costs and operational effort while reducing the number of NAT gateways and the need for an Outbound proxy for each and every account.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---zpGBypv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AMnqog1KIDhrX-e_MdsZm6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---zpGBypv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AMnqog1KIDhrX-e_MdsZm6g.png" alt="" width="880" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This proxy is deployed in a centralized account with a dedicated VPC. To make the proxy VPC reachable from each tenant, VPC interface endpoints powered by AWS PrivateLink are used. Routing inside each tenant VPC may be required to reach the VPC endpoint, which is in fact an ENI placed in specific subnets of that VPC.&lt;/p&gt;

&lt;p&gt;To build this infrastructure solution, I have used Terraform. In my opinion, Terraform gives you more flexibility with less code compared to CloudFormation.&lt;/p&gt;

&lt;p&gt;I followed the concept of modules, which makes the code more readable and easier to understand. These modules can easily be reused, which also reduces duplication, enables isolation, and enhances testability.&lt;/p&gt;
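
&lt;p&gt;As an illustrative sketch of the PrivateLink wiring described above (the names are placeholders, and the referenced NLB, data sources, and security group are assumed to be defined in other modules), one module might expose the proxy's Network Load Balancer as an endpoint service and create an interface endpoint in each tenant VPC:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Provider side: expose the proxy NLB as a PrivateLink endpoint service
resource "aws_vpc_endpoint_service" "proxy" {
  acceptance_required        = false
  network_load_balancer_arns = [aws_lb.proxy_nlb.arn]   # NLB defined elsewhere
}

# Tenant side: an interface endpoint (an ENI in the tenant subnets) to reach the proxy
resource "aws_vpc_endpoint" "proxy" {
  vpc_id             = data.aws_vpc.tenant.id
  service_name       = aws_vpc_endpoint_service.proxy.service_name
  vpc_endpoint_type  = "Interface"
  subnet_ids         = data.aws_subnets.tenant_private.ids
  security_group_ids = [aws_security_group.proxy_endpoint.id]
}
&lt;/code&gt;&lt;/pre&gt;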

&lt;p&gt;Squid Proxy is one of the possible ways to overcome this challenge:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Squid proxy is open source and easy to use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The proxy provides a filtering mechanism through which we can easily restrict IPs and websites, as sketched just after this list.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
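
&lt;p&gt;A minimal, purely illustrative allow-list in squid.conf (the domains here are placeholders) shows the kind of rules the NAT Gateway cannot express:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative squid.conf allow-list (placeholder domains)
acl allowed_sites dstdomain .amazonaws.com .ubuntu.com
http_access allow allowed_sites

# Everything else, including unknown or malicious sites, is denied
http_access deny all
&lt;/code&gt;&lt;/pre&gt;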

&lt;p&gt;This solution is designed to achieve high availability using an Auto Scaling group behind a Network Load Balancer.&lt;/p&gt;

&lt;p&gt;Below is the structure of the Terraform modules for this use case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XM5inbHX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Arq0k9zX9c113MzRH6li1aA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XM5inbHX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Arq0k9zX9c113MzRH6li1aA.png" alt="" width="379" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can create a separate configuration for each environment, which gives more control over managing the entire infrastructure. With this approach, we can easily define a lighter configuration (instance type, high availability) for the dev and acceptance environments compared to production.&lt;/p&gt;

&lt;p&gt;One of the major challenges we faced while building this solution was how to consume the existing VPC and subnets that are already part of the account. We wanted to avoid hardcoding the existing VPC and subnet IDs in variables files. For this, we used Terraform&amp;#39;s data-source filtering: based on a tag name, we can easily look up the desired VPC and its respective subnets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h8ut6xVW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2158/1%2AZmslQeGiiP1uRVNP4A7l9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h8ut6xVW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2158/1%2AZmslQeGiiP1uRVNP4A7l9w.png" alt="" width="880" height="228"&gt;&lt;/a&gt;&lt;/p&gt;
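
&lt;p&gt;The screenshot above shows the actual code; a minimal sketch of this kind of tag-based lookup (the tag values are illustrative) looks like the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Look up the existing VPC by its Name tag instead of hardcoding its ID
data "aws_vpc" "existing" {
  filter {
    name   = "tag:Name"
    values = ["tenant-vpc"]   # illustrative tag value
  }
}

# Look up the subnets that belong to that VPC
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.existing.id]
  }
}

# Other resources can then consume data.aws_subnets.private.ids
&lt;/code&gt;&lt;/pre&gt;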

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Personally, I find this overall solution very useful, and with it we can easily avoid the use of multiple NAT Gateways.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>GitLab's Integration with AWS CodePipeline for ECS</title>
      <dc:creator>#developer #awsdeveloper</dc:creator>
      <pubDate>Fri, 03 Mar 2023 09:01:11 +0000</pubDate>
      <link>https://dev.to/tikoosuraj/gitlabs-integration-with-aws-codepipeline-for-ecs-12d8</link>
      <guid>https://dev.to/tikoosuraj/gitlabs-integration-with-aws-codepipeline-for-ecs-12d8</guid>
      <description>&lt;h2&gt;
  
  
  GitLab's Integration with AWS CodePipeline for ECS
&lt;/h2&gt;

&lt;p&gt;AWS CodePipeline provides built-in integration with most third-party repositories, but it does not support GitLab, so it becomes challenging for developers to build a complete CI/CD pipeline around a GitLab repository.&lt;/p&gt;

&lt;p&gt;This blog is about how we can overcome this challenge and build a complete end-to-end pipeline. There are different ways to achieve this; here we are using one of the easiest techniques.&lt;/p&gt;

&lt;p&gt;The following diagram depicts how the CI/CD pipeline is set up for GitLab using S3 as the source, along with its different components.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zDzP-5ru--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2476/1%2AqBnbcVlaPNSKT3sIhQfKfQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zDzP-5ru--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2476/1%2AqBnbcVlaPNSKT3sIhQfKfQ.png" alt="" width="880" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To integrate GitLab with CodePipeline, GitLab gives us a way to do this through the &lt;strong&gt;.gitlab-ci.yml&lt;/strong&gt; file. This file should include a script that pushes the repository contents to the S3 bucket, using credentials for an IAM user that has the required permissions on that bucket. The file should live in the project repository.&lt;/p&gt;

&lt;p&gt;Therefore, whenever a developer commits code to the GitLab repository, the .gitlab-ci.yml script pushes the repository code to the desired S3 bucket. Once the object is uploaded to the S3 bucket, CodePipeline, which uses S3 as its source, is triggered and then executes the remaining phases of the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create the S3 Bucket. This bucket will be used to push the object.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the .gitlab-ci.yml file, which includes the script below.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OEqdUQhq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2452/1%2Ah5e3omx9IV5Sbb-LxdDF-A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OEqdUQhq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2452/1%2Ah5e3omx9IV5Sbb-LxdDF-A.png" alt="" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above sample contains the variables and the script. The script basically includes the logic to upload the code to the S3 bucket. GitLab provides predefined variables like &lt;strong&gt;CI_COMMIT_SHORT_SHA&lt;/strong&gt;, which retrieves the short hash of the last commit made to the repository. We can use this commit hash as a tag on the uploaded object.&lt;/p&gt;
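
&lt;p&gt;The screenshot above shows the author&amp;#39;s actual file; as a rough, illustrative sketch (the bucket name, job name, and artifact name are placeholders, and the runner image is assumed to have the AWS CLI and zip installed), such a .gitlab-ci.yml could look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative .gitlab-ci.yml; bucket and artifact names are placeholders
variables:
  S3_BUCKET: my-codepipeline-source-bucket

upload-to-s3:
  stage: deploy
  script:
    # Package the repository and upload it as the CodePipeline source artifact
    - zip -r source.zip . -x ".git/*"
    - aws s3 cp source.zip s3://$S3_BUCKET/source.zip
    # Tag the uploaded object with the short commit hash for traceability
    - aws s3api put-object-tagging --bucket $S3_BUCKET --key source.zip --tagging "TagSet=[{Key=commit,Value=$CI_COMMIT_SHORT_SHA}]"
&lt;/code&gt;&lt;/pre&gt;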

&lt;ol&gt;
&lt;li&gt;Create an IAM user that has permission on the S3 bucket to put objects as well as put object tagging; a sample policy is sketched below. We tag the uploaded object with the commit hash, which helps us trace back any request in case of issues.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ImULI71m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ALuJRgbLygrp6PY1xFMXKEw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ImULI71m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ALuJRgbLygrp6PY1xFMXKEw.png" alt="" width="781" height="455"&gt;&lt;/a&gt;&lt;/p&gt;
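
&lt;p&gt;The screenshot above shows the actual policy; a minimal equivalent sketch (the bucket name is a placeholder) would grant just the two actions the script needs:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectTagging"
      ],
      "Resource": "arn:aws:s3:::my-codepipeline-source-bucket/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;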

&lt;ol&gt;
&lt;li&gt;Configure the access key ID and secret access key of this user in GitLab under secret CI/CD variables (&lt;strong&gt;AWS_ACCESS_KEY_ID&lt;/strong&gt; and &lt;strong&gt;AWS_SECRET_ACCESS_KEY&lt;/strong&gt;). The AWS CLI automatically picks these variables up whenever the script executes, in order to upload the files as a zip to the S3 bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y6aua1_L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2524/1%2AnFCD8C52TT7f0tLncI7V7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y6aua1_L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2524/1%2AnFCD8C52TT7f0tLncI7V7w.png" alt="" width="880" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Finally, create the CodePipeline with S3 as the source, followed by the build and deploy stages.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;This is one of the easiest techniques to overcome the challenge and quickly set up an end-to-end CI/CD pipeline.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
