<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Klaus Bild</title>
    <description>The latest articles on DEV Community by Klaus Bild (@kbild).</description>
    <link>https://dev.to/kbild</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F421299%2F68cdadcf-9253-4f00-a404-4f66f55726c3.jpeg</url>
      <title>DEV Community: Klaus Bild</title>
      <link>https://dev.to/kbild</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kbild"/>
    <language>en</language>
    <item>
      <title>Create a Custom Source for AWS CodePipeline - How to Use Azure DevOps Repos with AWS Pipelines - Part 1</title>
      <dc:creator>Klaus Bild</dc:creator>
      <pubDate>Thu, 12 Nov 2020 12:00:00 +0000</pubDate>
      <link>https://dev.to/kbild/create-a-custom-source-for-aws-codepipeline-how-to-use-azure-devops-repos-with-aws-pipelines-part-1-57p2</link>
      <guid>https://dev.to/kbild/create-a-custom-source-for-aws-codepipeline-how-to-use-azure-devops-repos-with-aws-pipelines-part-1-57p2</guid>
      <description>&lt;p&gt;Recently a very interesting blog post on the AWS DevOps blog was published which goes into much detail&lt;br&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/devops/event-driven-architecture-for-using-third-party-git-repositories-as-source-for-aws-codepipeline/"&gt;how to use third-party Git repositories as source for your AWS CodePipelines&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Unfortunately, this article came too late for our own integration of Azure DevOps Repos into our AWS CI/CD pipelines; we had to find our own solution when we started moving our code repos to Azure DevOps earlier this year.&lt;/p&gt;

&lt;p&gt;I’m happy to share some more details about how we succeeded with this integration using a custom source for AWS CodePipeline. In Part 1 I will show you all the details of the solution, and in Part 2 I plan to describe every step needed to deploy it.&lt;/p&gt;


&lt;h3&gt;
  
  
  Solution Overview
&lt;/h3&gt;

&lt;p&gt;Large parts of the solution match the architecture described in the &lt;a href="https://aws.amazon.com/blogs/devops/event-driven-architecture-for-using-third-party-git-repositories-as-source-for-aws-codepipeline/"&gt;blog post by Kirankumar&lt;/a&gt;, but there are some small yet important differences.&lt;/p&gt;

&lt;p&gt;Let’s look at the architecture of our solution:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202011/CustomSourcePipelineArch.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---l1g4Z5d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202011/CustomSourcePipelineArch.png" alt="SolutionOverview" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s go through all the steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A developer commits a code change to the Azure DevOps Repo&lt;/li&gt;
&lt;li&gt;The commit triggers an Azure DevOps webhook&lt;/li&gt;
&lt;li&gt;The Azure DevOps webhook calls a CodePipeline webhook&lt;/li&gt;
&lt;li&gt;The webhook starts the CodePipeline&lt;/li&gt;
&lt;li&gt;The CodePipeline puts the first stage into 'Progress' and starts the source stage&lt;/li&gt;
&lt;li&gt;A CloudWatch Event Rule is triggered by the stage change to 'STARTED'&lt;/li&gt;
&lt;li&gt;The event rule triggers AWS CodeBuild and submits the pipeline name&lt;/li&gt;
&lt;li&gt;AWS CodeBuild polls the source stage job details and acknowledges the job&lt;/li&gt;
&lt;li&gt;The SSH key is received by CodeBuild from AWS Secrets Manager&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a) Successful builds&lt;/p&gt;

&lt;ol start="10"&gt;
&lt;li&gt;CodeBuild uploads the zipped artifact to the S3 artifact bucket&lt;/li&gt;
&lt;li&gt;CodeBuild puts the source stage into 'Succeeded'&lt;/li&gt;
&lt;li&gt;CodePipeline executes the next stage&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;b) Failed builds&lt;/p&gt;

&lt;ol start="10"&gt;
&lt;li&gt;A CloudWatch Event Rule is triggered by the state change to 'FAILED'&lt;/li&gt;
&lt;li&gt;The event rule triggers a Lambda function and provides pipeline execution/job details&lt;/li&gt;
&lt;li&gt;Depending on where the CodeBuild process failed, the source stage is put into 'Failed' or the pipeline execution is stopped/abandoned&lt;/li&gt;
&lt;/ol&gt;



&lt;p&gt;As you can see, the solution is very similar, but we omit the long-running Lambda function and put all the logic into CodeBuild. We only need a short-running Lambda function for error handling; whenever CodeBuild fails, a CloudWatch Event Rule connects CodeBuild to this Lambda function.&lt;/p&gt;

&lt;p&gt;But let’s do a deep dive into the different parts of the solution. Again, you will find the complete example in my &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/tree/master/Azure_CodePipeline_Source"&gt;AWS_Cloudformation_Examples Github&lt;/a&gt; Repo.&lt;/p&gt;


&lt;h3&gt;
  
  
  Webhooks
&lt;/h3&gt;

&lt;p&gt;This part of the solution is pretty straightforward, and the configuration is almost the same for all third-party Git repository providers.&lt;br&gt;&lt;br&gt;
You will find the CloudFormation code for the CodePipeline webhook in &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/Azure_CodePipeline_Source/AzureDevopsPipeline.yaml"&gt;AzureDevopsPipeline.yaml&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
Let’s look at the Azure DevOps specific parts of the webhook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;120 Webhook:
121 Type: 'AWS::CodePipeline::Webhook'
122 Properties:
123 AuthenticationConfiguration: {}
124 Filters:
125 - JsonPath: "$.resource.refUpdates..name"
126 MatchEquals: !Sub 'refs/heads/${Branch}'
127 Authentication: UNAUTHENTICATED
128 TargetPipeline: !Ref AppPipeline
129 TargetAction: Source
130 Name: !Sub AzureDevopsHook-${AWS::StackName}
131 TargetPipelineVersion: !Sub ${AppPipeline.Version}
132 RegisterWithThirdParty: False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we look at &lt;code&gt;line 125&lt;/code&gt;, we see the JSON path which is used to find the branch of the repo which triggered the Azure DevOps webhook. For most third-party Git repositories this path is '$.ref', but the request generated by the Azure DevOps webhook is structured differently, so we find the branch using '$.resource.refUpdates..name' as the JSON path.&lt;br&gt;&lt;br&gt;
Almost every third-party Git repository provider gives you access to the history of webhook executions, where you will find the complete requests. So whenever you try to integrate a third-party provider, look at the webhook requests first and define the correct JSON path for your branch filter.&lt;br&gt;&lt;br&gt;
This filter decides whether the branch which triggered the Azure DevOps webhook is the one used in the Source stage definition of our pipeline and should therefore trigger the CodePipeline execution (step 4).&lt;/p&gt;
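&lt;p&gt;To see what the filter matches, here is a Python sketch that emulates the '$.resource.refUpdates..name' path against a stripped-down push payload (the payload is reduced to the relevant fields; all values are illustrative):&lt;/p&gt;

```python
# Minimal sketch of an Azure DevOps "code pushed" webhook payload,
# reduced to the fields the branch filter touches (real payloads carry
# many more fields; values here are illustrative).
sample_push_event = {
    "eventType": "git.push",
    "resource": {
        "refUpdates": [
            {"name": "refs/heads/main"}
        ]
    },
}

def matched_branches(event, branch):
    """Emulate '$.resource.refUpdates..name' with MatchEquals refs/heads/<branch>."""
    refs = [ref["name"] for ref in event["resource"]["refUpdates"]]
    return [r for r in refs if r == f"refs/heads/{branch}"]

print(matched_branches(sample_push_event, "main"))  # ['refs/heads/main']
```

&lt;p&gt;Only a push to the configured branch produces a match, so the webhook filter lets commits to other branches pass without starting the pipeline.&lt;/p&gt;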


&lt;h3&gt;
  
  
  CodePipeline CustomActionType
&lt;/h3&gt;

&lt;p&gt;The CustomActionType for the CodePipeline Source Stage will be created by the &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/Azure_CodePipeline_Source/AzureDevopsPreReqs.yaml"&gt;AzureDevopsPreReqs.yaml&lt;/a&gt; CloudFormation template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;11 AzureDevopsActionType:
12 Type: AWS::CodePipeline::CustomActionType
13 Properties:
14 Category: Source
15 Provider: "AzureDevOpsRepo"
16 Version: "1"
17 ConfigurationProperties:
18 -
19 Description: "The name of the MS Azure DevOps Organization"
20 Key: false
21 Name: Organization
22 Queryable: false
23 Required: true
24 Secret: false
25 Type: String
26 -
27 Description: "The name of the repository"
28 Key: true
29 Name: Repo
30 Queryable: false
31 Required: true
32 Secret: false
33 Type: String
34 -
35 Description: "The name of the project"
36 Key: false
37 Name: Project
38 Queryable: false
39 Required: true
40 Secret: false
41 Type: String
42 -
43 Description: "The tracked branch"
44 Key: false
45 Name: Branch
46 Queryable: false
47 Required: true
48 Secret: false
49 Type: String
50 -
51 Description: "The name of the CodePipeline"
52 Key: false
53 Name: PipelineName
54 Queryable: true
55 Required: true
56 Secret: false
57 Type: String
58 InputArtifactDetails:
59 MaximumCount: 0
60 MinimumCount: 0
61 OutputArtifactDetails:
62 MaximumCount: 1
63 MinimumCount: 1
64 Settings:
65 EntityUrlTemplate: "https://dev.azure.com/{Config:Organization}/{Config:Project}/_git/{Config:Repo}?version=GB{Config:Branch}"
66 ExecutionUrlTemplate: "https://dev.azure.com/{Config:Organization}/{Config:Project}/_git/{Config:Repo}?version=GB{Config:Branch}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Large parts of the code are self-explanatory.&lt;/p&gt;

&lt;p&gt;We need the Azure DevOps Organization, Project, repo name and branch to git clone the required repo branch.&lt;br&gt;&lt;br&gt;
All these properties are required fields and, as you can see, are sufficient to create a back link to the project in Azure DevOps, as seen on &lt;code&gt;line 65&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The property PipelineName isn’t needed to get the Git repo, but is used to identify the correct CodePipeline job to process. Therefore, this property has to be queryable, otherwise you will get an error later when using the query-param parameter on &lt;code&gt;line 148&lt;/code&gt; (I had to find this out the hard way).&lt;/p&gt;
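&lt;p&gt;As a sketch, the poll-for-jobs request issued later by CodeBuild looks like this when expressed as boto3 call parameters (the pipeline name is an assumed example):&lt;/p&gt;

```python
# Sketch: the parameters for codepipeline.poll_for_jobs(), mirroring
# `aws codepipeline poll-for-jobs --action-type-id ... --query-param ...`.
def poll_for_jobs_request(pipeline_name):
    return {
        "actionTypeId": {
            "category": "Source",
            "owner": "Custom",       # custom action types are owned by "Custom"
            "provider": "AzureDevOpsRepo",
            "version": "1",
        },
        "maxBatchSize": 1,
        # Only allowed because PipelineName is declared Queryable in the CustomActionType
        "queryParam": {"PipelineName": pipeline_name},
    }

request = poll_for_jobs_request("example-pipeline")
# boto3.client("codepipeline").poll_for_jobs(**request)
```

&lt;p&gt;Without &lt;code&gt;Queryable: true&lt;/code&gt; on PipelineName, the queryParam filter above is rejected by the API.&lt;/p&gt;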


&lt;h3&gt;
  
  
  CloudWatch Events Rules
&lt;/h3&gt;

&lt;p&gt;This part is found in the &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/Azure_CodePipeline_Source/AzureDevopsPreReqs.yaml"&gt;AzureDevopsPreReqs.yaml&lt;/a&gt; CloudFormation template as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;210 CloudWatchEventRule:
211 Type: AWS::Events::Rule
212 Properties:
213 EventPattern:
214 source:
215 - aws.codepipeline
216 detail-type:
217 - 'CodePipeline Action Execution State Change'
218 detail:
219 state:
220 - STARTED
221 type:
222 provider:
223 - AzureDevOpsRepo
224 Targets:
225 -
226 Arn: !Sub ${BuildProject.Arn}
227 Id: triggerjobworker
228 RoleArn: !Sub ${CloudWatchEventRole.Arn}
229 InputTransformer:
230 InputPathsMap: {"executionid":"$.detail.execution-id", "pipelinename":"$.detail.pipeline"}
231 InputTemplate: "{\"environmentVariablesOverride\": [{\"name\": \"executionid\", \"type\": \"PLAINTEXT\", \"value\": &amp;lt;executionid&amp;gt;},{\"name\": \"pipelinename\", \"type\": \"PLAINTEXT\", \"value\": &amp;lt;executionid&amp;gt;}]}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I only want to draw your attention to &lt;code&gt;lines 230-231&lt;/code&gt;, where the event input is transformed into the output which is later used by CodeBuild (step 7).&lt;/p&gt;

&lt;p&gt;We hand over two CodeBuild environment variables, executionid and pipelinename. Creating the &lt;code&gt;InputTemplate&lt;/code&gt; was challenging: as you can see, you have to carefully escape all double quotes, and you have to override the CodeBuild environment variables.&lt;/p&gt;

&lt;p&gt;Fortunately the &lt;a href="https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html"&gt;API Reference Guide for AWS CodeBuild&lt;/a&gt; is very well documented and you find the needed request syntax there → use &lt;code&gt;environmentVariablesOverride&lt;/code&gt; and provide an array of &lt;a href="https://docs.aws.amazon.com/codebuild/latest/APIReference/API_EnvironmentVariable.html"&gt;EnvironmentVariable objects&lt;/a&gt;, in this case executionid and pipelinename.  &lt;/p&gt;
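&lt;p&gt;The resulting StartBuild request body can be sketched in Python; the helper below builds the same &lt;code&gt;environmentVariablesOverride&lt;/code&gt; array (the variable values are illustrative placeholders substituted for the &amp;lt;executionid&amp;gt;/&amp;lt;pipelinename&amp;gt; transformer placeholders):&lt;/p&gt;

```python
import json

# Sketch of the JSON body the InputTransformer produces for CodeBuild's
# StartBuild call: an array of EnvironmentVariable objects.
def environment_variables_override(executionid, pipelinename):
    return {
        "environmentVariablesOverride": [
            {"name": "executionid", "type": "PLAINTEXT", "value": executionid},
            {"name": "pipelinename", "type": "PLAINTEXT", "value": pipelinename},
        ]
    }

body = json.dumps(environment_variables_override("example-execution-id", "example-pipeline"))
```

&lt;p&gt;Serializing the dict with &lt;code&gt;json.dumps&lt;/code&gt; shows exactly which double quotes have to be escaped when the same body is written inline in the CloudFormation template.&lt;/p&gt;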

&lt;p&gt;Now let’s look at the second CloudWatch Event Rule which will be triggered if CodeBuild fails (step 10b):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;232 CloudWatchEventRuleBuildFailed:
233 Type: AWS::Events::Rule
234 Properties:
235 EventPattern:
236 source:
237 - aws.codebuild
238 detail-type:
239 - 'CodeBuild Build State Change'
240 detail:
241 build-status:
242 - FAILED
243 project-name:
244 - !Sub ${AWS::StackName}-GetAzureDevOps-Repo
245 Targets:
246 -
247 Arn: !Sub ${LambdaCodeBuildFails.Arn}
248 Id: failtrigger
249 InputTransformer:
250 InputPathsMap: {"loglink":"$.detail.additional-information.logs.deep-link", "environment-variables":"$.detail.additional-information.environment.environment-variables", "exported-environment-variables":"$.detail.additional-information.exported-environment-variables"}
251 InputTemplate: "{\"loglink\": &amp;lt;loglink&amp;gt;, \"environment-variables\": &amp;lt;environment-variables&amp;gt;, \"exported-environment-variables\": &amp;lt;exported-environment-variables&amp;gt;}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, I want to draw your attention to the &lt;code&gt;InputPathsMap&lt;/code&gt; and &lt;code&gt;InputTemplate&lt;/code&gt; part.&lt;br&gt;&lt;br&gt;
Here we extract 3 variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;loglink (a single string value) → deep link to the CloudWatch logs of the CodeBuild execution&lt;/li&gt;
&lt;li&gt;environment-variables (an array of objects) → the executionid and pipelinename objects&lt;/li&gt;
&lt;li&gt;exported-environment-variables (again an array of objects) → the jobId object&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;InputTemplate&lt;/code&gt; creates a simple JSON event which is later consumed by the Lambda function (step 11b).&lt;/p&gt;
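&lt;p&gt;For reference, here is a sketch (with illustrative placeholder values) of the event shape this template produces, assuming the variable ordering the error-handling Lambda later relies on:&lt;/p&gt;

```python
# Hedged sketch of the JSON event the failed-build rule hands to the Lambda
# function. All values are illustrative placeholders; the ordering of the
# environment variables matches what the error-handling Lambda expects.
failed_build_event = {
    "loglink": "https://console.aws.amazon.com/cloudwatch/home#logEvent:group=/aws/codebuild/example",
    "environment-variables": [
        {"name": "executionid", "type": "PLAINTEXT", "value": "example-execution-id"},
        {"name": "pipelinename", "type": "PLAINTEXT", "value": "example-pipeline"},
    ],
    "exported-environment-variables": [
        {"name": "jobid", "type": "PLAINTEXT", "value": "example-job-id"},
    ],
}

# The Lambda picks the job id from the exported variables:
job_id = failed_build_event["exported-environment-variables"][0]["value"]
```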


&lt;h3&gt;
  
  
  CodeBuild
&lt;/h3&gt;

&lt;p&gt;Most of the logic of this solution can be found in the CodeBuild project. The project has two environment variables, pipelinename and executionid (&lt;code&gt;lines 127-131&lt;/code&gt;), which, as seen before, are pre-filled by the webhook event (step 7).&lt;br&gt;&lt;br&gt;
Now let’s get to the meat of the project, the BuildSpec part:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;134 BuildSpec: !Sub |
135 version: 0.2
136 env:
137 exported-variables:
138 - jobid
139 phases:
140 pre_build:
141 commands:
142 - echo $pipelinename
143 - echo $executionid
144 - wait_period=0
145 - |
146 while true
147 do
148 jobdetail=$(aws codepipeline poll-for-jobs --action-type-id category="Source",owner="Custom",provider="AzureDevOpsRepo",version="1" --query-param PipelineName=$pipelinename --max-batch-size 1)
149 provider=$(echo $jobdetail | jq '.jobs[0].data.actionTypeId.provider' -r)
150 wait_period=$(($wait_period+10))
151 if [$provider = "AzureDevOpsRepo"];then
152 echo $jobdetail
153 break
154 fi
155 if [$wait_period -gt 300];then
156 echo "Haven't found a pipeline job for 5 minutes, will stop pipeline."
157 exit 1
158 else
159 echo "No pipeline job found, will try again in 10 seconds"
160 sleep 10
161 fi
162 done
163 - jobid=$(echo $jobdetail | jq '.jobs[0].id' -r)
164 - echo $jobid
165 - ack=$(aws codepipeline acknowledge-job --job-id $(echo $jobdetail | jq '.jobs[0].id' -r) --nonce $(echo $jobdetail | jq '.jobs[0].nonce' -r))
166 - Branch=$(echo $jobdetail | jq '.jobs[0].data.actionConfiguration.configuration.Branch' -r)
167 - Organization=$(echo $jobdetail | jq '.jobs[0].data.actionConfiguration.configuration.Organization' -r)
168 - Repo=$(echo $jobdetail | jq '.jobs[0].data.actionConfiguration.configuration.Repo' -r)
169 - Project=$(echo $jobdetail | jq '.jobs[0].data.actionConfiguration.configuration.Project' -r)
170 - ObjectKey=$(echo $jobdetail | jq '.jobs[0].data.outputArtifacts[0].location.s3Location.objectKey' -r)
171 - BucketName=$(echo $jobdetail | jq '.jobs[0].data.outputArtifacts[0].location.s3Location.bucketName' -r)
172 - aws secretsmanager get-secret-value --secret-id ${SSHKey} --query 'SecretString' --output text | base64 --decode &amp;gt; ~/.ssh/id_rsa
173 - chmod 600 ~/.ssh/id_rsa
174 - ssh-keygen -F ssh.dev.azure.com || ssh-keyscan ssh.dev.azure.com &amp;gt;&amp;gt;~/.ssh/known_hosts
175 build:
176 commands:
177 - git clone "git@ssh.dev.azure.com:v3/$Organization/$Project/$Repo"
178 - cd $Repo
179 - git checkout $Branch
180 - zip -r output_file.zip *
181 - aws s3 cp output_file.zip s3://$BucketName/$ObjectKey
182 - aws codepipeline put-job-success-result --job-id $(echo $jobdetail | jq '.jobs[0].id' -r)
183 artifacts:
184 files:
185 - '**/*'
186 base-directory: '$Repo'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First of all, we define a custom environment variable which will be filled with the jobid later on (&lt;code&gt;lines 136-138&lt;/code&gt;). Defining a custom exported environment variable for the jobid ensures that we have a value for the jobid in the CodeBuild response (which will later be received by the CloudWatch Event Rule in case of errors).&lt;/p&gt;

&lt;p&gt;Polling CodePipeline for jobs usually needs more than one try to get a result, therefore we use a while loop and poll every 10 seconds (step 8).&lt;br&gt;&lt;br&gt;
As you can see on &lt;code&gt;line 148&lt;/code&gt;, we only poll for jobs with the correct PipelineName (remember that we defined this property as queryable).&lt;br&gt;&lt;br&gt;
If we don’t get a result within 5 minutes, we exit the CodeBuild execution with a non-zero exit code, which leads to the 'FAILED' state and triggers the CloudWatch Event Rule for errors (&lt;code&gt;lines 147-162&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Now we acknowledge the job and ask CodePipeline to provide more details on it (&lt;code&gt;lines 163-171&lt;/code&gt;, step 8):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Branch, Organization, Repo, Project → Azure DevOps properties&lt;/li&gt;
&lt;li&gt;ObjectKey, BucketName → these two parameters are essential for the next CodePipeline step&lt;/li&gt;
&lt;/ul&gt;
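&lt;p&gt;The jq extractions above can also be sketched in Python against a stripped-down PollForJobs response (the response shape follows the CodePipeline API; all values here are illustrative placeholders):&lt;/p&gt;

```python
# Stripped-down PollForJobs response (shape per the CodePipeline API;
# values are illustrative placeholders).
jobdetail = {
    "jobs": [{
        "id": "example-job-id",
        "nonce": "3",
        "data": {
            "actionConfiguration": {"configuration": {
                "Branch": "main",
                "Organization": "example-org",
                "Repo": "example-repo",
                "Project": "example-project",
            }},
            "outputArtifacts": [{"location": {"s3Location": {
                "objectKey": "example-pipeline/Source/example.zip",
                "bucketName": "example-artifact-bucket",
            }}}],
        },
    }]
}

# Same fields the buildspec pulls out with jq:
job = jobdetail["jobs"][0]
config = job["data"]["actionConfiguration"]["configuration"]
s3loc = job["data"]["outputArtifacts"][0]["location"]["s3Location"]
print(config["Branch"], s3loc["objectKey"])  # main example-pipeline/Source/example.zip
```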

&lt;p&gt;Before we can clone the repo, we have to put the base64-decoded SSH key received from Secrets Manager into the correct file in the CodeBuild container (step 9).&lt;br&gt;&lt;br&gt;
We change the access permissions of the created key file to 600 and add the Azure DevOps public keys to the known_hosts file (&lt;code&gt;lines 172-174&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Now the actual build process starts, and the repo is cloned using the copied SSH key for authentication. Before zipping all the repo content, the appropriate branch is checked out and the zipped artifact is then uploaded to the artifact store (step 10a).&lt;/p&gt;

&lt;p&gt;Here we see again the two parameters ObjectKey and BucketName received earlier from the job details. The artifact has to use the value of ObjectKey as its file path/name and BucketName as the S3 bucket name for the upload. It is crucial to use the correct file path/name, because the next CodePipeline step/stage will try to download the artifact from the artifact bucket using these two parameters and will fail if you used wrong values during upload.&lt;/p&gt;

&lt;p&gt;The last action of the CodeBuild project is to inform CodePipeline of a successful execution of the job (&lt;code&gt;line 182&lt;/code&gt;, step 11a).&lt;/p&gt;


&lt;h3&gt;
  
  
  Lambda Function
&lt;/h3&gt;

&lt;p&gt;The Lambda function will only be used for error handling. The logic is pretty simple as you can see here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;338 def lambda_handler(event, context):
339 LOGGER.info(event)
340 try:
341 job_id = event['exported-environment-variables'][0]['value']
342 print(job_id)
343 execution_id = event['environment-variables'][0]['value']
344 print(execution_id)
345 pipelinename = event['environment-variables'][1]['value']
346 print(pipelinename)
347 loglink = event['loglink']
348 print(loglink)
349 if ( job_id != "" ) :
350 print("Found an job id")
351 codepipeline_failure(job_id, "CodeBuild process failed", loglink)
352 else :
353 print("Found NO job id")
354 codepipeline_stop(execution_id, "CodeBuild process failed", pipelinename)
355 except KeyError as err:
356 LOGGER.error("Could not retrieve CodePipeline Job ID!\n%s", err, pipelinename)
357 return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First we get all the variable values provided by the CloudWatch Event Rule, and then we simply check whether there is a value for job_id.&lt;/p&gt;

&lt;p&gt;If there is a value, we trigger the &lt;code&gt;codepipeline_failure&lt;/code&gt; function, which informs CodePipeline of a failure result for this job (&lt;code&gt;lines 312-323&lt;/code&gt;).&lt;br&gt;&lt;br&gt;
Whenever CodeBuild fails before a job_id has been obtained, the Lambda function calls the &lt;code&gt;codepipeline_stop&lt;/code&gt; part instead. The execution_id and pipelinename are then used to stop and abandon the correct CodePipeline execution (&lt;code&gt;lines 324-337&lt;/code&gt;).&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;I hope this post showed you how you can create your own CodePipeline sources and how the different parts of such a solution play together. This was my first time creating a custom CodePipeline source, and I’m fascinated by how powerful it is. You can include completely different sources in your CodePipelines, not limited to repos at all. Wherever you have a system which can trigger a webhook and provide some input, you can use it as your own CodePipeline source.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Step Functions as CloudFormation Custom Resources - Automatic Certificate Creation Across AWS Accounts</title>
      <dc:creator>Klaus Bild</dc:creator>
      <pubDate>Fri, 24 Jul 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/kbild/aws-step-functions-as-cloudformation-custom-resources-automatic-certificate-creation-across-aws-accounts-afn</link>
      <guid>https://dev.to/kbild/aws-step-functions-as-cloudformation-custom-resources-automatic-certificate-creation-across-aws-accounts-afn</guid>
      <description>&lt;p&gt;Last year I wrote a CloudFormation example which deployed a &lt;a href="https://kbild.ch/blog/2019-02-25-pipeline_cloudformation/"&gt;CodePipeline for the Hugo CMS&lt;/a&gt;.This was an almost fully automated solution for a Hugo deployment, the only manual step was to create the needed certificate with the Amazon Certificate Manager.&lt;/p&gt;

&lt;p&gt;Some weeks ago AWS added the possibility of fully automated &lt;a href="https://aws.amazon.com/blogs/security/how-to-use-aws-certificate-manager-with-aws-cloudformation/"&gt;certificate creation via CloudFormation&lt;/a&gt; if you add the HostedZoneId to your CloudFormation certificate resource.&lt;/p&gt;

&lt;p&gt;This solution is neat but does not work for our company accounts, because we keep all Route 53 DNS zones in a different AWS account. Therefore I needed a solution which works fully automated across different AWS accounts.&lt;/p&gt;

&lt;p&gt;Searching for examples gave me a good starting point to create my own solution, a custom CloudFormation resource. Here are some examples which will help you to understand custom CloudFormation resources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cloudar.be/awsblog/validate-acm-certificates-in-cloudformation"&gt;Validate ACM certificates in Cloudformation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/binxio/cfn-certificate-provider"&gt;Custom Certificate Provider with DNS validation support&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.dwolla.com/updates/certificate-validator/"&gt;Automatic Certificate Validation with Certificate Validator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All these examples work perfectly and can easily be modified to work across different AWS accounts, but the fact that they use long-running Lambda Functions didn’t satisfy me.&lt;br&gt;&lt;br&gt;
Occasionally executing long-running Lambda Functions doesn’t cost much, but I nevertheless always prefer short-running ones, using AWS Step Functions for the workflow logic combined with single-purpose Lambda Functions.&lt;/p&gt;



&lt;p&gt;Challenge accepted, let’s create a CloudFormation Custom Resource which will work with Step Functions.&lt;/p&gt;



&lt;p&gt;Unfortunately, only &lt;strong&gt;Lambda Functions&lt;/strong&gt; or &lt;strong&gt;SNS topics&lt;/strong&gt; may be used as a Custom Resource in CloudFormation, so we first have to create a Lambda Function which can be used as a Custom Resource and which interconnects CloudFormation with our Step Functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202006/CF_Custom_Resource.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BGQHwtjW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202006/CF_Custom_Resource.png" alt="SolutionOverview" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have 3 components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CustomResourceCertificate&lt;/strong&gt; → The custom resource in the CloudFormation template&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;LambdaCallStateMachine&lt;/strong&gt; → The Lambda Function which is triggered by the CloudFormation custom resource and which calls the Step Functions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CertificateStateMachine&lt;/strong&gt; → The actual Step Functions, consisting of some logic and Lambda Functions&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;You will find all the examples explained below in this &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/tree/master/stepFunctionCertCreation"&gt;AWS_Cloudformation_Examples Github&lt;/a&gt; Repo.&lt;/p&gt;

&lt;p&gt;The Custom Resource, including the &lt;strong&gt;CertificateStateMachine&lt;/strong&gt; Step Functions with all Lambda Functions and Roles/Policies, can be found in the &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/stepFunctionCertCreation/certificate_xaccount_customresource.yaml"&gt;certificate_xaccount_customresource.yaml&lt;/a&gt; CloudFormation template.&lt;/p&gt;


&lt;h3&gt;
  
  
  CertificateStateMachine
&lt;/h3&gt;

&lt;p&gt;Most of the work is done by the Step Functions called &lt;strong&gt;CertificateStateMachine&lt;/strong&gt; :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202006/stepfunctions_graph.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1CEBbAwE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202006/stepfunctions_graph.png" alt="CertificateStateMachine" width="800" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CertificateStateMachine&lt;/strong&gt; starts with a Choice action, and if you look at the CloudFormation template which creates these Step Functions, you see that the variable &lt;code&gt;$.RequestType&lt;/code&gt; is used as the switch. This variable is sent by AWS CloudFormation and tells us whether this is a &lt;code&gt;Create&lt;/code&gt;, &lt;code&gt;Update&lt;/code&gt; or &lt;code&gt;Delete&lt;/code&gt; request.&lt;/p&gt;
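&lt;p&gt;Such a Choice state in Amazon States Language can be sketched as follows (shown here as a Python dict; the target state names follow the graph above, but the actual template may name its states differently):&lt;/p&gt;

```python
# Hedged sketch of the Choice state switching on $.RequestType.
# "DescribeCertDeletion" and "Create" follow the state names visible in the
# graph; they are assumptions about the template's exact wiring.
request_type_choice = {
    "Type": "Choice",
    "Choices": [
        {"Variable": "$.RequestType", "StringEquals": "Delete", "Next": "DescribeCertDeletion"},
    ],
    # Create and Update requests both follow the creation path
    "Default": "Create",
}
```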
&lt;h4&gt;
  
  
  Create or Update Path
&lt;/h4&gt;

&lt;p&gt;Following the &lt;code&gt;Create&lt;/code&gt; or &lt;code&gt;Update&lt;/code&gt; path first calls the Create step, which triggers a Lambda Function called &lt;strong&gt;LambdaCreateCertificateRequest&lt;/strong&gt;. As the name suggests, this simple Function calls ACM and requests the creation of a certificate. We use the parameters &lt;code&gt;HostedZoneId&lt;/code&gt;, &lt;code&gt;WebSiteURL&lt;/code&gt; and &lt;code&gt;Region&lt;/code&gt;, which we get from &lt;strong&gt;CustomResourceCertificate&lt;/strong&gt; whenever this Custom Resource is used in a CloudFormation template&lt;br&gt;&lt;br&gt;
→ find more details later in this post&lt;br&gt;&lt;br&gt;
As the response we get the &lt;code&gt;CertificateArn&lt;/code&gt;, which we will need in the next steps.&lt;/p&gt;

&lt;p&gt;After Wait_10_seconds another Lambda Function &lt;strong&gt;LambdaDescribeCertificateRequest&lt;/strong&gt; is called in step DescribeCert. This function takes the &lt;code&gt;CertificateArn&lt;/code&gt; as input and calls ACM again to get the needed DNS &lt;code&gt;CNAME entries&lt;/code&gt; for the validation and the &lt;code&gt;ValidationStatus&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;CreateDNS triggers &lt;strong&gt;LambdaCreateDNSEntry&lt;/strong&gt; Lambda Function and takes the CNAME entries as input. Here the magic for the cross account creation happens. The Lambda Function will use the ARN of the Role which is created in our Route 53 Domain AWS Account and will call Route 53 to create the DNS Record Set.&lt;/p&gt;

&lt;p&gt;CheckCert will again use &lt;strong&gt;LambdaDescribeCertificateRequest&lt;/strong&gt; to get the ValidationStatus of the certificate creation. Cert Ready? will loop using Wait_100_seconds_for_certificate until the &lt;code&gt;ValidationStatus&lt;/code&gt; equals &lt;code&gt;SUCCESS&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Last Step SendResultCreation calls the Lambda Function &lt;strong&gt;LambdaSendResult&lt;/strong&gt;. This Function returns a success response to AWS CloudFormation via the &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-lambda-function-code-cfnresponsemodule.html"&gt;cfn-response&lt;/a&gt; module. This module knows the AWS CloudFormation Endpoint for the response through the variable &lt;code&gt;responseUrl&lt;/code&gt;. This variable is provided when AWS CloudFormation calls the ServiceToken of the Custom Resource.&lt;br&gt;&lt;br&gt;
The &lt;code&gt;CertificateArn&lt;/code&gt; is used as &lt;code&gt;physicalResourceId&lt;/code&gt; for the Custom Resource, so this will be the Return value of the Custom Resource.&lt;/p&gt;
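&lt;p&gt;The response the cfn-response module PUTs to the pre-signed responseUrl can be sketched like this; the body fields follow the CloudFormation custom resource response format, and all values below are illustrative placeholders:&lt;/p&gt;

```python
# Hedged sketch of the success response body sent back to CloudFormation.
# The CertificateArn doubles as PhysicalResourceId, so it becomes the
# return value of the Custom Resource.
def cfn_success_body(event, certificate_arn):
    return {
        "Status": "SUCCESS",
        "Reason": "Certificate validated",
        "PhysicalResourceId": certificate_arn,
        # Echoed back from the original CloudFormation request:
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": {"CertificateArn": certificate_arn},
    }

sample_event = {
    "StackId": "example-stack-id",
    "RequestId": "example-request-id",
    "LogicalResourceId": "CustomResourceCertificate",
}
response_body = cfn_success_body(
    sample_event, "arn:aws:acm:eu-west-1:111111111111:certificate/example"
)
```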
&lt;h4&gt;
  
  
  Delete Path
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;Delete&lt;/code&gt; Path starts with the Lambda Function &lt;strong&gt;LambdaDescribeCertificateRequest&lt;/strong&gt; in step DescribeCertDeletion. In contrast to the use of the same Function in the Create Path the CertificateArn is provided via the variable &lt;code&gt;PhysicalResourceId&lt;/code&gt;. The response includes the DNS &lt;code&gt;CNAME entries&lt;/code&gt; which were used for the validation.&lt;/p&gt;

&lt;p&gt;The Delete step calls the Lambda Function &lt;strong&gt;LambdaDeleteResource&lt;/strong&gt;. This Function first deletes the DNS entries in our Route 53 Domain AWS Account, again done by assuming a Role in this Account. Second, the Function deletes the corresponding certificate in ACM.&lt;/p&gt;

&lt;p&gt;The last step, SendResultDeletion, calls the Lambda Function &lt;strong&gt;LambdaSendResult&lt;/strong&gt; and returns a success response to AWS CloudFormation, just as in the Create path.&lt;/p&gt;


&lt;h3&gt;
  
  
  LambdaCallStateMachine
&lt;/h3&gt;

&lt;p&gt;Next let’s look at the &lt;strong&gt;LambdaCallStateMachine&lt;/strong&gt; Python Function. The Function can as well be found in the CloudFormation template &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/stepFunctionCertCreation/certificate_xaccount_customresource.yaml"&gt;certificate_xaccount_customresource.yaml&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;246 from botocore.exceptions import ClientError
247 import boto3
248 import cfnresponse
249 import os
250 import json
251
252 statemachineARN = os.getenv('statemachineARN')
253
254 def lambda_handler(event, context):
255 sfn_client = boto3.client('stepfunctions')
256 try:
257 response = sfn_client.start_execution(stateMachineArn=statemachineARN,input=(json.dumps(event)))
258 sfn_arn = response.get('executionArn')
259 print(sfn_arn)
260 except Exception:
261 print('Could not run the Step Functions')
262 responseData = {}
263 responseData['Error'] = "CouldNotCallStateMachine"
264 response=cfnresponse.send(event, context, FAILED, responseData)
265 return(response)
266 return(sfn_arn)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see on line 252 we get the ARN of the &lt;strong&gt;CertificateStateMachine&lt;/strong&gt; Step Functions (aka statemachineARN) as an environment variable.&lt;br&gt;&lt;br&gt;
This ARN is filled in automatically with the correct ARN of the &lt;strong&gt;CertificateStateMachine&lt;/strong&gt; Step Functions during CloudFormation deployment (Line 242 → statemachineARN: !Ref CertificateStateMachine).&lt;/p&gt;
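&lt;p&gt;In the template this wiring is just the Lambda's &lt;code&gt;Environment&lt;/code&gt; block; a sketch of the relevant fragment (the real function resource has more properties than shown here):&lt;/p&gt;

```yaml
LambdaCallStateMachine:
  Type: AWS::Lambda::Function
  Properties:
    # ... Handler, Runtime, Code, Role omitted ...
    Environment:
      Variables:
        # resolved to the state machine's ARN at deploy time
        statemachineARN: !Ref CertificateStateMachine
```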

&lt;p&gt;In line 257 we call the Step Functions and pass the Lambda Function input event unchanged, as a JSON string, to the Step Functions. This input event is provided by AWS CloudFormation during Custom Resource Creation/Update/Deletion.&lt;/p&gt;

&lt;p&gt;This is an example of what you can expect in such an input event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "StackId": "arn:aws:cloudformation:eu-central-1:700000000000:stack/cloudeecms/6g300000-cc00-00ea-aaba-0a0f000aced0",
  "ResponseURL": "https://cloudformation-custom-resource-response-eucentral1.s3.eu-central-1.amazonaws.com/arn%3Aaws%3Acloudformation%3Aeu-central-1%3A711632663682%3Astack/cloudeecms/6f371890-cc16-11ea-bbab-0a3f741aced4%7CCustomResourceCertificate%7C4dfd25c6-43c4-4a38-97f5-c14845f454ee?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;amp;X-Amz-Date=20200722T122539Z&amp;amp;X-Amz-SignedHeaders=host&amp;amp;X-Amz-Expires=7200&amp;amp;X-Amz-Credential=BLGBZZHSTLS2MMALHGQI%3G30400000%2Feu-central-1%2Fs3%2Faws4_request&amp;amp;X-Amz-Signature=3c2424f204c3e935024046g3fd28ld42hs04hd2b1ad2jegw25ls1f924hsf2lsr",
  "ResourceProperties": {
    "HostedZoneId": "Z00000000AZD0FWVZH0RA",
    "WebSiteURL": "www.cloudee-cms.biz",
    "Region": "us-east-1",
    "ServiceToken": "arn:aws:lambda:eu-central-1:700000000000:function:CallStateMachine-700000000000"
  },
  "RequestType": "Create",
  "ServiceToken": "arn:aws:lambda:eu-central-1:700000000000:function:CallStateMachine-700000000000",
  "ResourceType": "Custom::CreateCertificate",
  "RequestId": "5ehq93f9-28d2-9d20-53g5-d63926g294dw",
  "LogicalResourceId": "CustomResourceCertificate"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
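&lt;p&gt;A handler downstream typically branches on this payload; a minimal sketch using the field names from the example event above:&lt;/p&gt;

```python
def parse_custom_resource_event(event):
    """Extract the fields a custom resource handler branches on."""
    request_type = event["RequestType"]         # "Create", "Update" or "Delete"
    props = event.get("ResourceProperties", {})
    return request_type, props

sample = {
    "RequestType": "Create",
    "ResourceProperties": {
        "HostedZoneId": "Z00000000AZD0FWVZH0RA",
        "WebSiteURL": "www.cloudee-cms.biz",
        "Region": "us-east-1",
    },
}
request_type, props = parse_custom_resource_event(sample)
```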



&lt;p&gt;If everything works as expected the Lambda terminates and the Step Functions take care of returning a response via the &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-lambda-function-code-cfnresponsemodule.html"&gt;cfn-response&lt;/a&gt; module. If the Step Functions can’t be triggered we return an error (line 264) through the same module.&lt;/p&gt;




&lt;h3&gt;
  
  
  CustomResourceCertificate
&lt;/h3&gt;

&lt;p&gt;A custom resource in CloudFormation is defined by a &lt;code&gt;Type&lt;/code&gt; starting with &lt;code&gt;'Custom::'&lt;/code&gt; and the custom resource name, here &lt;code&gt;'CreateCertificate'&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
The resource must have a &lt;code&gt;ServiceToken&lt;/code&gt;. This token represents the ARN of the Lambda Function or SNS Topic which should be called. In this case we import the ARN of the Lambda Function &lt;code&gt;LambdaCallStateMachine&lt;/code&gt; (which was already created by the certificate_xaccount_customresource.yaml CloudFormation template).&lt;br&gt;&lt;br&gt;
This custom resource needs 3 additional properties, the &lt;code&gt;WebSiteURL&lt;/code&gt; for which the certificate should be created, the &lt;code&gt;HostedZoneId&lt;/code&gt; of the Route 53 domain in which the needed DNS entry for validation will be created and the &lt;code&gt;Region&lt;/code&gt; where the certificate should be created.&lt;br&gt;&lt;br&gt;
We already saw these 3 properties inside the Step Functions where &lt;strong&gt;LambdaCreateCertificateRequest&lt;/strong&gt; is called.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;37 CustomResourceCertificate:
38 Type: 'Custom::CreateCertificate'
39 Properties:
40 ServiceToken: !ImportValue LambdaCallStateMachineCertArn
41 WebSiteURL: !Ref WebSiteURL
42 HostedZoneId: !Ref HostedZoneId
43 Region: !Ref Region
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
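&lt;p&gt;Since the CertificateArn is used as the &lt;code&gt;physicalResourceId&lt;/code&gt;, a &lt;code&gt;!Ref&lt;/code&gt; on the custom resource resolves to the certificate ARN. A hypothetical output that passes it on could look like this:&lt;/p&gt;

```yaml
Outputs:
  CertificateArn:
    # !Ref on a custom resource returns its PhysicalResourceId,
    # which here is the ACM certificate ARN
    Value: !Ref CustomResourceCertificate
```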



&lt;p&gt;You can find the full version of the CloudFormation template which creates the certificate &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/stepFunctionCertCreation/deploycert.yaml"&gt;here&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Outlook
&lt;/h2&gt;

&lt;p&gt;I tried to write the Lambda Functions generically so that they can be reused in other Step Functions. This gives us the freedom to use them in a second Step Functions example called &lt;strong&gt;DNSStateMachine&lt;/strong&gt;. These Step Functions are used for a Custom CloudFormation Resource which creates DNS entries in our Route 53 Domain AWS Account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202006/stepfunctions_graph2.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rj3HFgi2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202006/stepfunctions_graph2.png" alt="DNSStateMachine" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see we reuse the same Lambda Functions &lt;strong&gt;LambdaCreateDNSEntry&lt;/strong&gt;, &lt;strong&gt;LambdaDeleteResource&lt;/strong&gt; and &lt;strong&gt;LambdaSendResult&lt;/strong&gt;. You can find the &lt;strong&gt;DNSStateMachine&lt;/strong&gt; example in the CloudFormation template &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/stepFunctionCertCreation/certificate_xaccount_customresource.yaml"&gt;certificate_xaccount_customresource.yaml&lt;/a&gt; as well.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This example showed how you can combine a Custom CloudFormation Resource with Step Functions and automatically create an ACM certificate even if the Route 53 Domain for validation is in another AWS account. This gives you an idea how you can start using Step Functions for your own CloudFormation resources.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS CodePipeline Example which deploys to multiple AWS Accounts - Part2</title>
      <dc:creator>Klaus Bild</dc:creator>
      <pubDate>Tue, 12 May 2020 13:30:10 +0000</pubDate>
      <link>https://dev.to/kbild/aws-codepipeline-example-which-deploys-to-multiple-aws-accounts-part2-17ip</link>
      <guid>https://dev.to/kbild/aws-codepipeline-example-which-deploys-to-multiple-aws-accounts-part2-17ip</guid>
      <description>&lt;p&gt;In &lt;a href="https://kbild.ch/blog/2020-5-4-CF_multiple_accounts_regions/"&gt;Part 1&lt;/a&gt; you find CloudFormation templates which help you to create an AWS CodePipeline that deploys to multiple AWS Accounts. In this Part 2 we will go into some more details how these CF templates work.&lt;/p&gt;




&lt;h3&gt;
  
  
  cloudformation/01central-prereqs.yaml
&lt;/h3&gt;

&lt;p&gt;Let’s first look at the &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/01central-prereqs.yaml"&gt;01central-prereqs.yaml&lt;/a&gt; template which creates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/arch1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pHxTNrxm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/arch1.png" alt="01central-prereqs.yaml" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 Artifact Bucket&lt;/li&gt;
&lt;li&gt;KMS Key for encryption&lt;/li&gt;
&lt;li&gt;IAM Roles/Policies&lt;/li&gt;
&lt;li&gt;CodeCommit Repo for the App&lt;/li&gt;
&lt;li&gt;CodeBuild Project for App&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Looking at the template reveals that the &lt;strong&gt;Dev/Test/Prod accounts&lt;/strong&gt; get access to the &lt;strong&gt;KMS Key&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
In the &lt;strong&gt;Central Account&lt;/strong&gt; CodeBuild/CodePipeline roles will create the output artifacts, which will be &lt;strong&gt;encrypted&lt;/strong&gt; with the help of the &lt;strong&gt;KMS Key&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
Later on these artifacts will be used to deploy the application in your &lt;strong&gt;Dev/Test/Prod accounts&lt;/strong&gt; , therefore the root in these accounts needs access to the &lt;strong&gt;KMS Key&lt;/strong&gt; to &lt;strong&gt;decrypt&lt;/strong&gt; the artifacts.&lt;/p&gt;
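&lt;p&gt;The cross-account grant boils down to key policy statements of roughly this shape. This is a sketch only; the Sid and exact action list are assumptions, see the actual template for the details:&lt;/p&gt;

```yaml
KMSKey:
  Type: AWS::KMS::Key
  Properties:
    KeyPolicy:
      Version: 2012-10-17
      Statement:
        - Sid: AllowTargetAccountsToUseKey
          Effect: Allow
          Principal:
            AWS:
              # root of each deployment account, as described above
              - !Sub arn:aws:iam::${DevAccount}:root
              - !Sub arn:aws:iam::${TestAccount}:root
              - !Sub arn:aws:iam::${ProductionAccount}:root
          Action:
            - kms:Decrypt
            - kms:DescribeKey
          Resource: "*"
```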

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hPNNsIOl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/code01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hPNNsIOl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/code01.png" alt="AWS CodePipeline Example" width="800" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same is needed for the artifact S3 bucket, but here the roles that will be created by the other template &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/02prereqs-accounts.yaml"&gt;02prereqs-accounts.yaml&lt;/a&gt; will need access to the bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;108 S3BucketPolicy:
109 Condition: AddPolicies
110 Type: AWS::S3::BucketPolicy
111 Properties:
112 Bucket: !Ref ArtifactBucket
113 PolicyDocument:
114 Statement:
115 -
116 Action:
117 - s3:GetObject
118 - s3:PutObject
119 Effect: Allow
120 Resource:
121 - !Sub arn:aws:s3:::${ArtifactBucket}
122 - !Sub arn:aws:s3:::${ArtifactBucket}/*
123 Principal:
124 AWS:
125 - !Sub arn:aws:iam::${TestAccount}:role/${Project}-CentralAcctCodePipelineCFRole
126 - !Sub arn:aws:iam::${TestAccount}:role/${Project}-cloudformationdeployer-role
127 - !Sub arn:aws:iam::${ProductionAccount}:role/${Project}-CentralAcctCodePipelineCFRole
128 - !Sub arn:aws:iam::${ProductionAccount}:role/${Project}-cloudformationdeployer-role
129 - !Sub arn:aws:iam::${DevAccount}:role/${Project}-CentralAcctCodePipelineCFRole
130 - !Sub arn:aws:iam::${DevAccount}:role/${Project}-cloudformationdeployer-role
131 - !Sub arn:aws:iam::${AWS::AccountId}:role/${Project}-codepipeline-Role
132 - !Sub arn:aws:iam::${AWS::AccountId}:role/${Project}-codebuild-Role
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see this policy is only added if the Condition "AddPolicies" is true, so on the first run of &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/01central-prereqs.yaml"&gt;01central-prereqs.yaml&lt;/a&gt; this S3 bucket policy will NOT be created.&lt;br&gt;&lt;br&gt;
The reason for this is that &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/02prereqs-accounts.yaml"&gt;02prereqs-accounts.yaml&lt;/a&gt; has to be deployed on the &lt;strong&gt;Dev/Test/Prod accounts&lt;/strong&gt; first to create all these needed roles.&lt;br&gt;&lt;br&gt;
Right afterwards &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/01central-prereqs.yaml"&gt;01central-prereqs.yaml&lt;/a&gt; has to be run a second time with the "AddPolicies" parameter set to true.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The template would fail to run if these roles are not present in the other accounts.&lt;/p&gt;
&lt;/blockquote&gt;
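&lt;p&gt;Such a two-pass switch is typically wired up as a parameter plus a &lt;code&gt;Conditions&lt;/code&gt; entry; a sketch (the actual template may differ in detail):&lt;/p&gt;

```yaml
Parameters:
  AddPolicies:
    Type: String
    Default: "false"
    AllowedValues: ["false", "true"]
Conditions:
  # resources carrying "Condition: AddPolicies" are only
  # created when the parameter is set to "true"
  AddPolicies: !Equals [!Ref AddPolicies, "true"]
```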

&lt;p&gt;There are two roles per Account which will get access to the bucket, the &lt;strong&gt;CentralAcctCodePipelineCFRole&lt;/strong&gt; and the &lt;strong&gt;cloudformationdeployer-role&lt;/strong&gt;. Let’s switch to the next template &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/02prereqs-accounts.yaml"&gt;02prereqs-accounts.yaml&lt;/a&gt; and look at these roles.&lt;/p&gt;


&lt;h3&gt;
  
  
  cloudformation/02prereqs-accounts.yaml
&lt;/h3&gt;

&lt;p&gt;Here you find the &lt;strong&gt;CentralAcctCodePipelineCFRole&lt;/strong&gt; and as you can see, this will be the role which will be assumed by the &lt;strong&gt;Central Account&lt;/strong&gt; CodePipeline to execute the CloudFormation commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;17Resources:
18 CFRole:
19 Type: AWS::IAM::Role
20 Properties:
21 RoleName: !Sub ${Project}-CentralAcctCodePipelineCFRole
22 AssumeRolePolicyDocument:
23 Version: 2012-10-17
24 Statement:
25 -
26 Effect: Allow
27 Principal:
28 AWS:
29 - !Ref CentralAccount
30 Action:
31 - sts:AssumeRole
32 Path: /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looking at the policy used for this role we can see that CloudFormation, S3, IAM and KMS actions are added.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;33 CFPolicy:
34 Type: AWS::IAM::Policy
35 Properties:
36 PolicyName: !Sub ${Project}-CentralAcctCodePipelineCloudFormationPolicy
37 PolicyDocument:
38 Version: 2012-10-17
39 Statement:
40 -
41 Effect: Allow
42 Action:
43 - cloudformation:CreateStack
44 - cloudformation:DeleteStack
45 - cloudformation:UpdateStack
46 - cloudformation:DescribeStacks
47 - cloudformation:CreateChangeSet
48 - cloudformation:ExecuteChangeSet
49 - cloudformation:ListChangeSets
50 - cloudformation:DescribeChangeSet
51 - cloudformation:DeleteChangeSet
52 Resource: !Sub "arn:aws:cloudformation:${AWS::Region}:*"
53 -
54 Effect: Allow
55 Action:
56 - s3:PutObject
57 - s3:GetObject
58 Resource:
59 - !Sub arn:aws:s3:::${S3Bucket}/*
60 - !Sub arn:aws:s3:::${S3Bucket}
61 -
62 Effect: Allow
63 Action:
64 - iam:PassRole
65 Resource: !Sub "arn:aws:iam::${AWS::AccountId}:*"
66 -
67 Effect: Allow
68 Action:
69 - kms:Decrypt
70 - kms:Encrypt
71 Resource:
72 - !Ref CMKARN
73 Roles:
74 -
75 !Ref CFRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CF actions are used to deploy the CF templates into this account, S3 actions are needed to get the artifacts from the &lt;strong&gt;Central Account&lt;/strong&gt; and KMS actions are needed to decrypt the artifact.&lt;/p&gt;

&lt;p&gt;Looking at the &lt;strong&gt;cloudformationdeployer-role&lt;/strong&gt;, or rather its policy, we see that this role gets similar actions to &lt;strong&gt;CentralAcctCodePipelineCFRole&lt;/strong&gt;. In addition, some IAM, Lambda and API Gateway actions are added.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; 91 CFDeployerPolicy:
 92 Type: AWS::IAM::Policy
 93 Properties:
 94 PolicyName: !Sub ${Project}-cloudformationdeployer-policy
 95 PolicyDocument:
 96 Version: 2012-10-17
 97 Statement:
 98 - Sid: cf
 99 Effect: Allow
100 Action:
101 - cloudformation:CreateStack
102 - cloudformation:DeleteStack
103 - cloudformation:UpdateStack
104 - cloudformation:DescribeStacks
105 - cloudformation:CreateChangeSet
106 - cloudformation:ExecuteChangeSet
107 - cloudformation:ListChangeSets
108 - cloudformation:DescribeChangeSet
109 - cloudformation:DeleteChangeSet
110 Resource: !Sub "arn:aws:cloudformation:${AWS::Region}:*"
111 - Sid: s3
112 Effect: Allow
113 Action:
114 - s3:PutObject
115 - s3:GetBucketPolicy
116 - s3:GetObject
117 - s3:ListBucket
118 Resource:
119 - !Sub "arn:aws:s3:::${S3Bucket}/*"
120 - !Sub "arn:aws:s3:::${S3Bucket}"
121 - Sid: iam
122 Effect: Allow
123 Action:
124 - iam:CreateRole
125 - iam:DeleteRole
126 - iam:AttachRolePolicy
127 - iam:DetachRolePolicy
128 - iam:getRolePolicy
129 - iam:PutRolePolicy
130 - iam:DeleteRolePolicy
131 - iam:GetRole
132 - iam:PassRole
133 - iam:CreateServiceLinkedRole
134 Resource:
135 - !Sub "arn:aws:iam::${AWS::AccountId}:role/*"
136 - Sid: ssm
137 Effect: Allow
138 Action:
139 - ssm:GetParameters
140 Resource:
141 - !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/*"
142 - Sid: lambda
143 Effect: Allow
144 Action:
145 - lambda:CreateFunction
146 - lambda:DeleteFunction
147 - lambda:GetFunctionConfiguration
148 - lambda:AddPermission
149 - lambda:RemovePermission
150 - lambda:UpdateFunctionConfiguration
151 - lambda:UpdateFunctionCode
152 Resource:
153 - !Sub "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:*"
154 - Sid: apigw
155 Effect: Allow
156 Action:
157 - apigateway:POST
158 - apigateway:DELETE
159 - apigateway:PATCH
160 - apigateway:GET
161 Resource:
162 - !Sub "arn:aws:apigateway:${AWS::Region}::/*"
163 Roles:
164 -
165 !Ref CFDeployerRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All these actions are needed to create our application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM actions to create all needed roles/policies&lt;/li&gt;
&lt;li&gt;Lambda actions to create our serverless functions&lt;/li&gt;
&lt;li&gt;API Gateway actions to create endpoints of the application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you use this example to deploy your own applications to multiple AWS accounts, this role/policy has to be customized to fit your needs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All actions which will be needed to deploy your application have to be added here.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s look at the last template:&lt;/p&gt;




&lt;h3&gt;
  
  
  cloudformation/03central-pipeline.yaml
&lt;/h3&gt;

&lt;p&gt;This template, which is used in the &lt;strong&gt;Central account&lt;/strong&gt;, creates only one resource, the CodePipeline. As you can see, this CodePipeline uses the KMS Key as encryption key and the S3 bucket as artifact store.&lt;/p&gt;
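&lt;p&gt;In CodePipeline terms that means the pipeline's artifact store references both; a sketch of the fragment (the parameter names here are assumptions):&lt;/p&gt;

```yaml
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    ArtifactStore:
      Type: S3
      Location: !Ref ArtifactBucket   # central artifact bucket
      EncryptionKey:
        Id: !Ref CMKARN               # shared KMS key
        Type: KMS
    # ... RoleArn and Stages omitted ...
```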

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QBI7krND--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/code1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QBI7krND--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/code1.png" alt="AWS CodePipeline Example" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We see 5 stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Getting the source from CodeCommit&lt;/li&gt;
&lt;li&gt;Build the templates with CodeBuild&lt;/li&gt;
&lt;li&gt;Create/deploy change sets to &lt;strong&gt;Dev account&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Create/deploy change sets to &lt;strong&gt;Test account&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Create/deploy change sets to &lt;strong&gt;Prod account&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The CodeCommit and CodeBuild stages are easy to understand, no magic there, but let’s look at the stages that create and deploy change sets to the different accounts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; 57 - Name: Create_Change_Sets_and_Deploy_to_Dev
 58 Actions:
 59 - Name: CreateChangeSet_Dev
 60 ActionTypeId:
 61 Category: Deploy
 62 Owner: AWS
 63 Version: '1'
 64 Provider: CloudFormation
 65 Configuration:
 66 ChangeSetName: cicd-codepipeline-ChangeSet-Dev
 67 ActionMode: CHANGE_SET_REPLACE
 68 StackName: cicd-codepipeline-Dev
 69 Capabilities: CAPABILITY_IAM
 70 ParameterOverrides: |
 71 {
 72 "Environment" : "dev"
 73 }
 74 TemplatePath: BuildArtifact::packaged.yml
 75 RoleArn:
 76 Fn::ImportValue:
 77 !Sub "${Project}-Dev-cloudformationdeployer-role"
 78 InputArtifacts:
 79 - Name: BuildArtifact
 80 RunOrder: 1
 81 RoleArn:
 82 Fn::ImportValue:
 83 !Sub "${Project}-Dev-centralacctcodepipelineCFRole"
 84 - Name: ExecuteChangeSet_Dev
 85 ActionTypeId:
 86 Category: Deploy
 87 Owner: AWS
 88 Provider: CloudFormation
 89 Version: "1"
 90 Configuration:
 91 ActionMode: CHANGE_SET_EXECUTE
 92 RoleArn:
 93 Fn::ImportValue:
 94 !Sub "${Project}-Dev-cloudformationdeployer-role"
 95 StackName: cicd-codepipeline-Dev
 96 ChangeSetName: cicd-codepipeline-ChangeSet-Dev
 97 OutputArtifacts:
 98 - Name: cicd-codepipeline-ChangeSet-Dev
 99 RunOrder: 2
100 RoleArn:
101 Fn::ImportValue:
102 !Sub "${Project}-Dev-centralacctcodepipelineCFRole"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;The key point here is the values used for the &lt;strong&gt;RoleArn&lt;/strong&gt;s, and the fact that you can define separate roles for the creation and execution of CloudFormation change sets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;81 RoleArn:
82 Fn::ImportValue:
83 !Sub "${Project}-Dev-centralacctcodepipelineCFRole"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the role ( &lt;strong&gt;CentralAcctCodePipelineCFRole&lt;/strong&gt; ) used by the CodePipeline in the &lt;strong&gt;Central account&lt;/strong&gt; to execute the CloudFormation template in the &lt;strong&gt;Dev account&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;The role was created on the &lt;strong&gt;Dev account&lt;/strong&gt; with the &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/02prereqs-accounts.yaml"&gt;02prereqs-accounts.yaml&lt;/a&gt; template but the ARN value can be calculated.  &lt;/p&gt;

&lt;p&gt;This is done in the &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/01central-prereqs.yaml"&gt;01central-prereqs.yaml&lt;/a&gt; template and imported here in this template &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/03central-pipeline.yaml"&gt;03central-pipeline.yaml&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;Snippet from &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/01central-prereqs.yaml"&gt;01central-prereqs.yaml&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;326 DevCodePipelineCloudFormationRole:
327 Value: !Sub arn:aws:iam::${DevAccount}:role/${Project}-CentralAcctCodePipelineCFRole
328 Export:
329 Name: !Sub ${Project}-Dev-centralacctcodepipelineCFRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
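&lt;p&gt;The &lt;code&gt;!Sub&lt;/code&gt; above is plain string substitution, so the "calculated" ARN can be reproduced outside CloudFormation as well, here in Python with placeholder account ID and project name:&lt;/p&gt;

```python
def central_acct_cf_role_arn(account_id, project):
    # Mirrors: !Sub arn:aws:iam::${DevAccount}:role/${Project}-CentralAcctCodePipelineCFRole
    return f"arn:aws:iam::{account_id}:role/{project}-CentralAcctCodePipelineCFRole"

arn = central_acct_cf_role_arn("111111111111", "serverless")
```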






&lt;p&gt;The second role (&lt;strong&gt;cloudformationdeployer-role&lt;/strong&gt;) is used to deploy the resources inside the &lt;strong&gt;Dev account&lt;/strong&gt; which are defined in the CloudFormation template. That’s the same IAM Role setting you will see if you manually deploy a CloudFormation stack under "Configure stack options" → "Permissions".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;65 Configuration:
66 ChangeSetName: cicd-codepipeline-ChangeSet-Dev
67 ActionMode: CHANGE_SET_REPLACE
68 StackName: cicd-codepipeline-Dev
69 Capabilities: CAPABILITY_IAM
70 ParameterOverrides: |
71 {
72 "Environment" : "dev"
73 }
74 TemplatePath: BuildArtifact::packaged.yml
75 RoleArn:
76 Fn::ImportValue:
77 !Sub "${Project}-Dev-cloudformationdeployer-role"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see the RoleArn is defined inside the &lt;strong&gt;Configuration&lt;/strong&gt; part of the Create/Execute change set action. This is the role/policy we saw before, which has to be customized to fit your needs if you use this example to deploy your own applications to multiple AWS accounts.&lt;/p&gt;




&lt;p&gt;Hope this Part 2 gave you more details on how this example works &amp;amp; how you can customize it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The magic happens at the RoleArn that are used at the different CodePipeline stages!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The beauty of this is that if you have a working solution like this you can reuse it almost everywhere.&lt;/p&gt;

&lt;p&gt;Comments and questions are welcome or contact me on &lt;a href="https://www.twitter.com/kbild"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>codepipeline</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>AWS CodePipeline Example which deploys to multiple AWS Accounts - Part1</title>
      <dc:creator>Klaus Bild</dc:creator>
      <pubDate>Mon, 04 May 2020 07:30:10 +0000</pubDate>
      <link>https://dev.to/kbild/aws-codepipeline-example-which-deploys-to-multiple-aws-accounts-part1-16cc</link>
      <guid>https://dev.to/kbild/aws-codepipeline-example-which-deploys-to-multiple-aws-accounts-part1-16cc</guid>
      <description>&lt;p&gt;At &lt;a href="https://webgate.biz"&gt;WebGate&lt;/a&gt;, we’re using AWS CodePipeline heavily for CI/CD of our serverless apps and we usually do 3-tier deployments (Dev, Test, Prod).&lt;/p&gt;

&lt;p&gt;Therefore we were looking for an example which describes how to build such a solution. Unfortunately, we didn’t find a source with a full-blown solution matching our needs. Luckily, we found some examples which gave us clues on how to build such a Pipeline.&lt;/p&gt;

&lt;p&gt;Especially the following two sites helped us get started:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/devops/aws-building-a-secure-cross-account-continuous-delivery-pipeline/"&gt;- Building a Secure Cross-Account Continuous Delivery Pipeline&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/codepipeline-deploy-cloudformation/"&gt;- How do I use CodePipeline to deploy an AWS CloudFormation stack in a different account?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post I will show you an example which you can use for your own cross-account AWS CodePipelines.&lt;/p&gt;

&lt;p&gt;We will have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Central Account" → App Repos, Pipelines…&lt;/li&gt;
&lt;li&gt;"Dev Account" → Development Account for App&lt;/li&gt;
&lt;li&gt;"Test Account" → Testing Account for App&lt;/li&gt;
&lt;li&gt;"Prod Account" → Production Account for App&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  How to Deploy
&lt;/h2&gt;

&lt;p&gt;You will find the source code of this example in &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/tree/master/multipleAccountPipeline/"&gt;this Github repo&lt;/a&gt;, let’s first deploy the prerequisites and later the sample repo.&lt;/p&gt;


&lt;h3&gt;
  
  
  1. Prerequisites
&lt;/h3&gt;

&lt;p&gt;There are 3 AWS CloudFormation templates which you will need to deploy this solution, let’s first have a look at them:&lt;/p&gt;
&lt;h4&gt;
  
  
  cloudformation/01central-prereqs.yaml
&lt;/h4&gt;

&lt;p&gt;This &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/01central-prereqs.yaml"&gt;template&lt;/a&gt; will deploy all needed resources in the "Central Account":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/arch1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pHxTNrxm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/arch1.png" alt="01central-prereqs.yaml" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;S3 Artifact Bucket&lt;/li&gt;
&lt;li&gt;KMS Key for encryption&lt;/li&gt;
&lt;li&gt;IAM Roles/Policies&lt;/li&gt;
&lt;li&gt;CodeCommit Repo for the App&lt;/li&gt;
&lt;li&gt;CodeBuild Project for App&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  cloudformation/02prereqs-accounts.yaml
&lt;/h4&gt;

&lt;p&gt;This &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/02prereqs-accounts.yaml"&gt;template&lt;/a&gt; will deploy all needed resources in the "Dev Account", "Test Account" and "Prod Account":&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z9_j6qu0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/arch2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z9_j6qu0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/arch2.png" alt="01central-prereqs.yaml" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;| &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;IAM Roles/Policies&lt;/p&gt;

&lt;p&gt;|&lt;/p&gt;
&lt;h4&gt;
  
  
  cloudformation/03central-pipeline.yaml
&lt;/h4&gt;

&lt;p&gt;This &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/multipleAccountPipeline/cloudformation/03central-pipeline.yaml"&gt;template&lt;/a&gt; will deploy the actual Code Pipeline in the "Central Account".&lt;br&gt;&lt;br&gt;
For simplicity I’m only deploying a simple "Hello World" Lambda function and an API Gateway and I’m only using the Build and Deploy Stages in the Pipeline:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/arch3.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lkT8Pxby--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/arch3.png" alt="AWS CodePipeline Example" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  2. How to Deploy the Prerequisites
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Central Account
&lt;/h4&gt;

&lt;p&gt;First logon to your &lt;strong&gt;Central Account&lt;/strong&gt; and open up CloudFormation in the Region of choice.&lt;br&gt;&lt;br&gt;
Now create a new stack with the template &lt;strong&gt;01central-prereqs.yaml&lt;/strong&gt; and define:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/cf1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4-M2QduZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/cf1.png" alt="01central-prereqs.yaml" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Stack Name, e.g. &lt;em&gt;kbild-serverless-prereqs&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;AWS Account Numbers for the Dev, Test and Prod Accounts&lt;/li&gt;
&lt;li&gt;PreReqsOnAccounts → should stay "false"&lt;/li&gt;
&lt;li&gt;Project name, e.g. &lt;em&gt;serverless&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finish the stack deployment; it will take a few minutes. When it is done, open the Outputs tab of the stack and take note of the "ArtifactBucket" and "CMK" key values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/cf5.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xoEVN8vM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/cf5.png" alt="01central-prereqs.yaml" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;
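&lt;p&gt;If you prefer scripting over the console, the same output values can be read programmatically. Below is a minimal sketch in Python; the stack name and output keys follow the example above, while the sample values themselves are made up:&lt;/p&gt;

```python
# Extract output values from a CloudFormation describe-stacks response.
# The response shape follows the standard CloudFormation API; the sample
# values below are illustrative, not real resources.

def get_stack_outputs(response, stack_name):
    """Return a dict of OutputKey -> OutputValue for the given stack."""
    for stack in response["Stacks"]:
        if stack["StackName"] == stack_name:
            return {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}
    raise KeyError(f"stack {stack_name!r} not found")

# Sample response as returned by `aws cloudformation describe-stacks`
sample = {
    "Stacks": [{
        "StackName": "kbild-serverless-prereqs",
        "Outputs": [
            {"OutputKey": "ArtifactBucket", "OutputValue": "serverless-artifacts-bucket"},
            {"OutputKey": "CMK", "OutputValue": "arn:aws:kms:eu-central-1:111111111111:key/abcd-1234"},
        ],
    }]
}

outputs = get_stack_outputs(sample, "kbild-serverless-prereqs")
print(outputs["ArtifactBucket"])
print(outputs["CMK"])
```

&lt;p&gt;With boto3 you would pass the real &lt;code&gt;cloudformation.describe_stacks()&lt;/code&gt; response instead of the sample dict.&lt;/p&gt;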


&lt;h4&gt;
  
  
  Dev Account
&lt;/h4&gt;

&lt;p&gt;Now log on to your &lt;strong&gt;Dev Account&lt;/strong&gt; and open up CloudFormation in the same Region as used for the "Central Account".&lt;br&gt;&lt;br&gt;
Here we create a new stack, this time with the template &lt;strong&gt;02prereqs-accounts.yaml&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/cf2.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BcaxiVEf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/cf2.png" alt="01central-prereqs.yaml" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Stack Name, e.g. &lt;em&gt;kbild-serverless-prereqs&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;CMKARN → fill in the value of the "CMK" key noted before&lt;/li&gt;
&lt;li&gt;CentralAccount → fill in the account number of the "Central Account"&lt;/li&gt;
&lt;li&gt;Project name, e.g. &lt;em&gt;serverless&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;S3Bucket → fill in the value of the "ArtifactBucket" key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Wait for the stack deployment to finish.&lt;/p&gt;
&lt;h4&gt;
  
  
  Test Account / Prod Account
&lt;/h4&gt;

&lt;p&gt;Now log on to your &lt;strong&gt;Test Account&lt;/strong&gt; / &lt;strong&gt;Prod Account&lt;/strong&gt; and repeat the steps for &lt;strong&gt;02prereqs-accounts.yaml&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Central Account
&lt;/h4&gt;

&lt;p&gt;Now you have to go back to your &lt;strong&gt;Central Account&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please ensure that the prerequisites are already deployed to the Dev, Test and Prod Accounts, otherwise the following update will fail!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now update the prerequisites stack you created a few minutes ago: choose "Use current template", change the value of the parameter &lt;strong&gt;PreReqsOnAccounts&lt;/strong&gt; from &lt;strong&gt;false&lt;/strong&gt; to &lt;strong&gt;true&lt;/strong&gt; and update the stack:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PBf6sfta--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/cf4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PBf6sfta--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/cf4.png" alt="01central-prereqs.yaml" width="416" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will update the S3 artifact bucket and KMS policies and grant access to the Dev, Test and Prod Accounts.&lt;/p&gt;
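&lt;p&gt;Conceptually, the update adds cross-account statements to the CMK key policy (and similarly to the bucket policy). The following sketch only illustrates the idea; the account IDs and the exact action list are assumptions, not copied from the template:&lt;/p&gt;

```python
# Build a KMS key policy statement that lets the workload accounts use the
# CMK for pipeline artifacts. The account IDs and the action list are
# illustrative assumptions, not taken from the actual template.

def cross_account_kms_statement(account_ids):
    return {
        "Sid": "AllowUseOfTheKeyFromWorkloadAccounts",
        "Effect": "Allow",
        "Principal": {"AWS": [f"arn:aws:iam::{acct}:root" for acct in account_ids]},
        "Action": ["kms:Decrypt", "kms:DescribeKey"],
        "Resource": "*",
    }

stmt = cross_account_kms_statement(["111111111111", "222222222222", "333333333333"])
print(stmt["Principal"]["AWS"])
```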


&lt;h3&gt;
  
  
  3. Deploy Pipeline and App
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Pipeline
&lt;/h4&gt;

&lt;p&gt;Back in the &lt;strong&gt;Central Account&lt;/strong&gt;, create a CloudFormation stack with the &lt;strong&gt;03central-pipeline.yaml&lt;/strong&gt; template:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/cf3.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KB7T59uz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/cf3.png" alt="03central-pipeline.yaml" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Stack Name, e.g. &lt;em&gt;kbild-serverless-pipeline&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Project name, e.g. &lt;em&gt;serverless&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;RepoBranch → the Repo branch the Pipeline webhook will listen to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, wait for the stack deployment to finish.&lt;/p&gt;

&lt;p&gt;Before we run our Code Pipeline for the first time, we have to add our "Hello World" app to the freshly created CodeCommit Repo.&lt;/p&gt;
&lt;h4&gt;
  
  
  App Deployment
&lt;/h4&gt;

&lt;p&gt;First clone the newly created CodeCommit Repo locally to your machine.&lt;br&gt;&lt;br&gt;
(If you have never used git with CodeCommit, go to the repo and click on "Clone URL" at the top → "Connection steps").&lt;br&gt;&lt;br&gt;
I will use SSH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone ssh://git-codecommit.eu-central-1.amazonaws.com/v1/repos/serverless-ProjectRepo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now add the buildspec.yml and sam-app folder from the &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/"&gt;GitHub repo&lt;/a&gt; to your local clone:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/screen1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_t-iQuNl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/screen1.png" alt="03central-pipeline.yaml" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Now commit and push the new files to the CodeCommit Repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
git commit -a
git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This push should trigger the Code Pipeline, so go back to your AWS console of the "Central Account" and open up CodePipeline and the new serverless-Pipeline:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/pipe1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TfZYWKFq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/pipe1.png" alt="Pipeline" width="800" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see the pipeline was just triggered and you can follow how the Pipeline goes through all the stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build_Templates&lt;/li&gt;
&lt;li&gt;Create_Change_Sets_and_Deploy_to_Dev&lt;/li&gt;
&lt;li&gt;Create_Change_Sets_and_Deploy_to_Test&lt;/li&gt;
&lt;li&gt;Create_Change_Sets_and_Deploy_to_Prod&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If all stages have finished, you can log on to your "Dev Account" and go to:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;CloudFormation → cicd-codepipeline-Dev Stack → Outputs&lt;/em&gt;  &lt;/p&gt;

&lt;p&gt;Click on the value for the "HelloWorldApi" key; this will open the API Gateway endpoint URL and show you the "Hello World" example app.&lt;/p&gt;

&lt;p&gt;If everything worked as expected you should see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/screen3.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V6a-Cmw4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/screen3.png" alt="HelloWorldApp" width="600" height="356"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Now go to your Test/Prod Account and open the corresponding API Gateway endpoint URL as well; you should see environment-specific "Hello World" pages:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/202005/screen4.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ycit0Biy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/202005/screen4.png" alt="HelloWorldApp" width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;I hope that this example helps you on your future CodePipeline journey!  &lt;/p&gt;

&lt;p&gt;In &lt;a href="https://kbild.ch/blog/2020-5-8-cf_multiple_accounts_regions_part2/"&gt;Part 2&lt;/a&gt; of this post I will go into more detail on how the CloudFormation templates work and how you can customize them.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>codepipeline</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>CloudFormation example for AWS CodePipeline - Hugo Deployment</title>
      <dc:creator>Klaus Bild</dc:creator>
      <pubDate>Tue, 26 Feb 2019 09:30:10 +0000</pubDate>
      <link>https://dev.to/kbild/cloudformation-example-for-aws-codepipeline-hugo-deployment-51o9</link>
      <guid>https://dev.to/kbild/cloudformation-example-for-aws-codepipeline-hugo-deployment-51o9</guid>
      <description>&lt;p&gt;I recently blogged on how you can use &lt;a href="https://kbild.ch/blog/2019-01-31-codepipeline/"&gt;AWS CodePipeline to automatically deploy your Hugo website to AWS S3&lt;/a&gt; and promised a CloudFormation template, so here we go.You can find the full template &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/HugoStaticWebpages/Deploy-Pipeline.yaml"&gt;in this GitHub repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you create a new stack with the template you will be asked for the following parameters; let’s look at them in detail:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/201902/cloudformation.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dsgnvgFS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/cloudformation.png" alt="AWS CloudFormation" width="800" height="650"&gt;&lt;/a&gt;&lt;/p&gt;





&lt;h3&gt;
  
  
  Needed parameters
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;GitHub OAuth Token → the token which will be used to create the webhook in the Repo&lt;/li&gt;
&lt;li&gt;GitHub Owner → the owner of the GitHub Repo&lt;/li&gt;
&lt;li&gt;GitHub Repo → the name of the GitHub Repo&lt;/li&gt;
&lt;li&gt;GitHub Branch → the name of the Branch&lt;/li&gt;
&lt;li&gt;Artifacts S3 BucketName → the name of the S3 bucket where CodePipeline artifacts will be saved; this bucket will be created!&lt;/li&gt;
&lt;li&gt;Target S3 Bucket → the name of the S3 bucket where your Hugo website will be deployed; this bucket will be created!&lt;/li&gt;
&lt;li&gt;S3 Bucket with Lambda Code ZIP → the existing S3 bucket which contains the ZIP file of the Python script for the CloudFront invalidation. The file has to be named invalidateCloudFront.zip and can be found &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/blob/master/HugoStaticWebpages/invalidateCloudFront.zip"&gt;here&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CertificateArn → the ARN of the certificate which should be used on the CloudFront Distribution (has to be created in US East!)&lt;/li&gt;
&lt;/ul&gt;





&lt;ul&gt;
&lt;li&gt;HostedZoneId → the Id of the hosted zone on Route53; will be used to create the following two subdomains/WebsiteNames&lt;/li&gt;
&lt;li&gt;WebsiteName01 → subdomain 1 of the HostedZone&lt;/li&gt;
&lt;li&gt;WebsiteName02 → subdomain 2 of the HostedZone&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Created AWS Resources
&lt;/h3&gt;

&lt;p&gt;If you create a stack from this template, the following resources will be created automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PipelineArtifactsBucket → AWS::S3::Bucket for the Artifacts S3 BucketName&lt;/li&gt;
&lt;li&gt;PipelineWebpageBucket → AWS::S3::Bucket for the Target S3 Bucket&lt;/li&gt;
&lt;li&gt;BucketPolicy → AWS::S3::BucketPolicy which will be used for the S3 bucket with the Hugo source files and allows PublicRead access&lt;/li&gt;
&lt;li&gt;myCloudfrontDist → AWS::CloudFront::Distribution for the two subdomain names&lt;/li&gt;
&lt;li&gt;domainDNSRecord1 → AWS::Route53::RecordSet for WebsiteName01&lt;/li&gt;
&lt;li&gt;domainDNSRecord2 → AWS::Route53::RecordSet for WebsiteName02&lt;/li&gt;
&lt;li&gt;CodeBuildProject → AWS::CodeBuild::Project, the actual build project which will be used in the CodePipeline&lt;/li&gt;
&lt;li&gt;CodePipeline → AWS::CodePipeline::Pipeline&lt;/li&gt;
&lt;li&gt;GithubWebhook → AWS::CodePipeline::Webhook&lt;/li&gt;
&lt;li&gt;CreateCodePipelinePolicy → AWS::IAM::ManagedPolicy, the managed policy which will be used for the according role/pipeline&lt;/li&gt;
&lt;li&gt;CodePipelineRole → AWS::IAM::Role with the managed policy for CodePipeline&lt;/li&gt;
&lt;li&gt;CreateCodeBuildPolicy → AWS::IAM::ManagedPolicy, the managed policy which will be used for the according CodeBuild role&lt;/li&gt;
&lt;li&gt;CodeBuildRole → AWS::IAM::Role with the managed policy for CodeBuild&lt;/li&gt;
&lt;li&gt;CreateLambdaExecutionPolicy → AWS::IAM::ManagedPolicy&lt;/li&gt;
&lt;li&gt;LambdaExecutedRole → AWS::IAM::Role with the managed policy to give Lambda enough rights&lt;/li&gt;
&lt;li&gt;LambdaCloudfrontInvalidation → AWS::Lambda::Function, the Python function&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Code examples
&lt;/h3&gt;

&lt;p&gt;Throughout the template I tried to follow the principle of least privilege. For example, if you look at the &lt;strong&gt;CodeBuild Policy&lt;/strong&gt; you can see that CodeBuild is only allowed to work with the created S3 buckets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CreateCodeBuildPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    ManagedPolicyName: CodeBuildAccess_Hugo
    Description: "Policy for access to logs and Hugo S3 Buckets"
    Path: "/"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Sid: VisualEditor0
          Effect: Allow
          Action: s3:*
          Resource: [
            !Join ['', ['arn:aws:s3:::', !Ref TargetS3Bucket]],
            !Join ['', ['arn:aws:s3:::', !Ref TargetS3Bucket, '/*']],
            !Join ['', ['arn:aws:s3:::', !Ref ArtifactsBucketName]],
            !Join ['', ['arn:aws:s3:::', !Ref ArtifactsBucketName, '/*']]
          ]
        - Sid: VisualEditor1
          Effect: Allow
          Action: logs:*
          Resource: '*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following part creates the &lt;strong&gt;CodePipeline&lt;/strong&gt; with all stages&lt;br&gt;&lt;br&gt;
(Source from GitHub, Build on CodeBuild, Deploy to S3 and a final Lambda function call)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CodePipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: PipelineForStaticWebpageWithHugo
    ArtifactStore:
      Type: S3
      Location: !Ref PipelineArtifactsBucket
    RestartExecutionOnUpdate: true
    RoleArn: !GetAtt CodePipelineRole.Arn
    Stages:
      - Name: Source
        Actions:
          - Name: Source
            InputArtifacts: []
            ActionTypeId:
              Category: Source
              Owner: ThirdParty
              Version: 1
              Provider: GitHub
            OutputArtifacts:
              - Name: SourceCode
            Configuration:
              Owner: !Ref GitHubOwner
              Repo: !Ref GitHubRepo
              Branch: !Ref GitHubBranch
              PollForSourceChanges: false
              OAuthToken: !Ref GitHubOAuthToken
            RunOrder: 1
      - Name: Build
        Actions:
          - Name: CodeBuild
            ActionTypeId:
              Category: Build
              Owner: AWS
              Provider: CodeBuild
              Version: '1'
            InputArtifacts:
              - Name: SourceCode
            OutputArtifacts:
              - Name: PublicFiles
            Configuration:
              ProjectName: !Ref CodeBuildProject
            RunOrder: 1
      - Name: Deploy
        Actions:
          - Name: S3Deploy
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Provider: S3
              Version: '1'
            InputArtifacts:
              - Name: PublicFiles
            Configuration:
              BucketName: !Ref TargetS3Bucket
              Extract: 'true'
            RunOrder: 1
          - Name: LambdaDeploy
            ActionTypeId:
              Category: Invoke
              Owner: AWS
              Provider: Lambda
              Version: '1'
            Configuration:
              FunctionName: invalidateCloudfront
              UserParameters: !Ref myCloudfrontDist
            RunOrder: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the Lambda function, written in Python, which creates the CloudFront invalidation. I needed quite some time to get the CodePipeline jobId and to extract the Id of the CloudFront Distribution from the UserParameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time
import logging

import boto3
from botocore.exceptions import ClientError

LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)

def codepipeline_success(job_id):
    """Puts CodePipeline Success Result"""
    try:
        codepipeline = boto3.client('codepipeline')
        codepipeline.put_job_success_result(jobId=job_id)
        LOGGER.info('===SUCCESS===')
        return True
    except ClientError as err:
        LOGGER.error("Failed to PutJobSuccessResult for CodePipeline!\n%s", err)
        return False

def codepipeline_failure(job_id, message):
    """Puts CodePipeline Failure Result"""
    try:
        codepipeline = boto3.client('codepipeline')
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': message}
        )
        LOGGER.info('===FAILURE===')
        return True
    except ClientError as err:
        LOGGER.error("Failed to PutJobFailureResult for CodePipeline!\n%s", err)
        return False

def lambda_handler(event, context):
    LOGGER.info(event)
    try:
        job_id = event['CodePipeline.job']['id']
    except KeyError as err:
        LOGGER.error("Could not retrieve CodePipeline Job ID!\n%s", err)
        return False
    try:
        # The CloudFront Distribution Id is passed in via UserParameters
        dist_id = event['CodePipeline.job']['data']['actionConfiguration']['configuration']['UserParameters']
        client = boto3.client('cloudfront')
        client.create_invalidation(DistributionId=dist_id,
                                   InvalidationBatch={
                                       'Paths': {
                                           'Quantity': 1,
                                           'Items': ['/*']
                                       },
                                       'CallerReference': str(time.time())
                                   })
        codepipeline_success(job_id)
        return True
    except (KeyError, ClientError) as err:
        LOGGER.error("CloudFront invalidation failed!\n%s", err)
        codepipeline_failure(job_id, str(err))
        return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
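&lt;p&gt;If you want to test the function locally, you can feed it a minimal CodePipeline invoke event. The sketch below only exercises the two key lookups; the event shape follows the documented CodePipeline Lambda invocation format, and the IDs are made-up sample values:&lt;/p&gt;

```python
# Extract the job ID and UserParameters from a CodePipeline Lambda invoke
# event, the same lookups the handler above performs. The event shape
# follows the CodePipeline invocation format; the IDs are sample values.

def parse_codepipeline_event(event):
    job_id = event["CodePipeline.job"]["id"]
    user_params = (event["CodePipeline.job"]["data"]
                   ["actionConfiguration"]["configuration"]["UserParameters"])
    return job_id, user_params

sample_event = {
    "CodePipeline.job": {
        "id": "11111111-abcd-1111-abcd-111111abcdef",
        "data": {
            "actionConfiguration": {
                "configuration": {
                    "FunctionName": "invalidateCloudfront",
                    "UserParameters": "E2EXAMPLE123",
                }
            }
        },
    }
}

job_id, dist_id = parse_codepipeline_event(sample_event)
print(job_id, dist_id)
```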



&lt;p&gt;I hope this template helps you build your own CodePipelines via CloudFormation.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>codepipeline</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>Proper Monitoring - How to use Prometheus with your AWS EC2 instances</title>
      <dc:creator>Klaus Bild</dc:creator>
      <pubDate>Mon, 18 Feb 2019 09:00:10 +0000</pubDate>
      <link>https://dev.to/kbild/proper-monitoring-how-to-use-prometheus-with-your-aws-ec2-instances-4npm</link>
      <guid>https://dev.to/kbild/proper-monitoring-how-to-use-prometheus-with-your-aws-ec2-instances-4npm</guid>
      <description>&lt;p&gt;As we are operating a lot of servers we need a proper monitoring solution.AWS offers CloudWatch which is an almost perfect solution for monitoring your AWS cloud infrastructure.&lt;/p&gt;

&lt;p&gt;But we also operate servers on other cloud providers (Softlayer, Azure,…​) and we need one monitoring solution to track all of these servers.&lt;/p&gt;

&lt;p&gt;As you might know, I’m a huge fan of &lt;a href="https://prometheus.io"&gt;Prometheus&lt;/a&gt;, the only graduated Monitoring project of &lt;a href="https://www.cncf.io"&gt;CNCF&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to know more about what Prometheus is and how you can use it, I recommend watching the YouTube video &lt;a href="https://www.youtube.com/watch?v=PDxcEzu62jk"&gt;"Monitoring, the Prometheus Way"&lt;/a&gt;, where Julius Volz, co-founder of Prometheus, gives a very good introduction to the topic.&lt;/p&gt;

&lt;p&gt;I also gave a talk on Prometheus and how to use it with IBM Connections last year at the DNUG event in Darmstadt. It’s only available in German, but have a look at the presentation if you are interested in how you can monitor IBM Connections with Prometheus.&lt;/p&gt;

&lt;p&gt;So how can we use Prometheus together with our AWS Cloud Infrastructure?&lt;/p&gt;

&lt;p&gt;We will need the following parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agents on the EC2 instances (called &lt;a href="https://github.com/prometheus/node_exporter"&gt;node_exporter&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Prometheus with configured AWS service discovery (in this case only for EC2 instances)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  node_exporter on EC2 instances
&lt;/h2&gt;

&lt;p&gt;Installing the node_exporter on EC2 instances is straightforward; just use the following User data script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;useradd -m -s /bin/bash prometheus
# (or adduser --disabled-password --gecos "" prometheus)

# Download node_exporter release from original repo
curl -L -O https://github.com/prometheus/node_exporter/releases/download/v0.17.0/node_exporter-0.17.0.linux-amd64.tar.gz

tar -xzvf node_exporter-0.17.0.linux-amd64.tar.gz
mv node_exporter-0.17.0.linux-amd64 /home/prometheus/node_exporter
rm node_exporter-0.17.0.linux-amd64.tar.gz
chown -R prometheus:prometheus /home/prometheus/node_exporter

# Add node_exporter as systemd service
tee -a /etc/systemd/system/node_exporter.service &amp;lt;&amp;lt; END
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
ExecStart=/home/prometheus/node_exporter/node_exporter
[Install]
WantedBy=default.target
END

systemctl daemon-reload
systemctl start node_exporter
systemctl enable node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Prometheus server will scrape the node_exporter on the standard port 9100&lt;br&gt;&lt;br&gt;
→ don’t forget to add this port to your instance Security Group and grant access to the Prometheus Server&lt;/p&gt;

&lt;p&gt;You may test whether the node_exporter is running as expected by running the following command locally on the EC2 instance: &lt;code&gt;curl http://127.0.0.1:9100/metrics&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If everything works, you should get back the metrics of your server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 2.8809e-05
go_gc_duration_seconds{quantile="0.25"} 3.7675e-05
go_gc_duration_seconds{quantile="0.5"} 4.8971e-05
go_gc_duration_seconds{quantile="0.75"} 6.1912e-05
go_gc_duration_seconds{quantile="1"} 0.000266006
go_gc_duration_seconds_sum 0.667055045
go_gc_duration_seconds_count 11450
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 9
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
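&lt;p&gt;The exposition format is simple to work with programmatically: every non-comment line is a metric name (optionally with labels) followed by a value. A simplified parser, for illustration only (real scrapers handle escaping, timestamps and more):&lt;/p&gt;

```python
# Parse a few lines of the Prometheus text exposition format, skipping
# HELP/TYPE comment lines. Simplified for illustration; a real scraper
# additionally handles escaping, timestamps and histogram/summary types.

def parse_metrics(text):
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The value is the last whitespace-separated token on the line
        name, value = line.rsplit(None, 1)
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 9
go_gc_duration_seconds{quantile="0.5"} 4.8971e-05
"""

m = parse_metrics(sample)
print(m["go_goroutines"])
```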



&lt;p&gt;The same metrics will be scraped and recorded by the Prometheus server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prometheus AWS Service Discovery
&lt;/h2&gt;

&lt;p&gt;The Prometheus server will talk directly to the AWS API, so you need to create a user with programmatic access and add the following permission:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;→ The Prometheus server can get all metadata of the EC2 instances like IP addresses or tags&lt;/p&gt;

&lt;p&gt;On the Prometheus server a scrape target has to be added to the &lt;strong&gt;prometheus.yml&lt;/strong&gt; file with the access and secret key of the added user. You can do some &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"&gt;relabeling magic&lt;/a&gt; which lets you reuse your EC2 tags and metadata in Prometheus, which is very nice.&lt;br&gt;&lt;br&gt;
For example, here we take the &lt;strong&gt;ec2_tag_Name&lt;/strong&gt; as the &lt;strong&gt;instance&lt;/strong&gt; value, and we add two additional labels (&lt;strong&gt;customer, role&lt;/strong&gt;) which we get from &lt;strong&gt;ec2_tag_customer&lt;/strong&gt; and &lt;strong&gt;ec2_tag_role&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  - job_name: 'node'
    ec2_sd_configs:
      - region: YOURREGION
        access_key: YOURACCESSKEY
        secret_key: YOURSECRETKEY
        port: 9100
        refresh_interval: 1m
    relabel_configs:
      - source_labels:
        - '__meta_ec2_tag_Name'
        target_label: 'instance'
      - source_labels:
        - '__meta_ec2_tag_customer'
        target_label: 'customer'
      - source_labels:
        - '__meta_ec2_tag_role'
        target_label: 'role'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
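&lt;p&gt;The effect of the relabeling above can be modeled in a few lines: selected &lt;code&gt;__meta_ec2_tag_*&lt;/code&gt; labels are copied onto target labels. This is only a simplified model of what Prometheus does, not its actual relabeling implementation:&lt;/p&gt;

```python
# A simplified model of the relabel_configs above: copy selected EC2
# metadata labels onto target labels. This mimics the effect only; it is
# not Prometheus' actual relabeling implementation.

RELABEL_CONFIGS = [
    {"source": "__meta_ec2_tag_Name", "target": "instance"},
    {"source": "__meta_ec2_tag_customer", "target": "customer"},
    {"source": "__meta_ec2_tag_role", "target": "role"},
]

def relabel(discovered_labels):
    labels = dict(discovered_labels)
    for cfg in RELABEL_CONFIGS:
        if cfg["source"] in labels:
            labels[cfg["target"]] = labels[cfg["source"]]
    return labels

labels = relabel({
    "__meta_ec2_tag_Name": "web01",
    "__meta_ec2_tag_customer": "acme",
    "__meta_ec2_tag_role": "frontend",
})
print(labels["instance"], labels["customer"], labels["role"])
```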



&lt;p&gt;The Prometheus server will now get the private IP addresses of all of your EC2 instances&lt;br&gt;&lt;br&gt;
(by default the private IPs, but you can use the public ones as well, see &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config"&gt;ec2_sd_config documentation&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;If you want to see which targets Prometheus discovers through the service discovery, browse to the following URL of your Prometheus server:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;&lt;a href="https://prometheus.server.com/service-discovery"&gt;https://prometheus.server.com/service-discovery&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here you will see all your EC2 instances with their metadata and which data is reused in Prometheus:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/201902/Prometheus_SD.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sp9F9vE---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/Prometheus_SD.png" alt="Prometheus Service Discovery" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Graphs and Dashboards
&lt;/h2&gt;

&lt;p&gt;We defined that the metrics are scraped every minute and after some minutes we can see the results in the Prometheus UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/201902/Prometheus_Graph.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b9mGVV8g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/Prometheus_Graph.png" alt="Prometheus Graph" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, here we even get the data for the instance memory, which we don’t get if we use CloudWatch for monitoring. If you want real dashboards for your monitoring, just add Grafana, which natively supports Prometheus as a data source, and you can create nice dashboards like these:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kbild.ch/201902/Grafana_Dashboard.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rbi8Vvs7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/Grafana_Dashboard.png" alt="Grafana Dashboard" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Maybe you know now why I’m a big fan of Prometheus, and as someone who also uses Prometheus to monitor his Kubernetes environment I can tell you we have just scratched the surface of what is possible with Prometheus.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>prometheus</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>CloudFormation Template: Tag AWS Volumes for Lifecycle Manager Backups</title>
      <dc:creator>Klaus Bild</dc:creator>
      <pubDate>Tue, 12 Feb 2019 15:41:10 +0000</pubDate>
      <link>https://dev.to/kbild/cloudformation-template-tag-aws-volumes-for-lifecycle-manager-backups-3nl3</link>
      <guid>https://dev.to/kbild/cloudformation-template-tag-aws-volumes-for-lifecycle-manager-backups-3nl3</guid>
      <description>&lt;p&gt;If you wan’t a simple AWS Backup solution you can use AWS Lifecycle Manager to create snapshots from your AWS EC2 volumes.&lt;/p&gt;

&lt;p&gt;Lifecycle Manager is easy to use and even gives you some retention rules, no scripting needed for your Backups at all.&lt;/p&gt;

&lt;p&gt;You can easily define which target volumes Lifecycle Manager should snapshot through tags on your volumes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lifecycle Manager - Snapshot Lifecycle Policy
&lt;/h2&gt;

&lt;p&gt;In the following example we will take snapshots every 24 hours, starting between 09:00 and 10:00 UTC, of all volumes which are tagged &lt;code&gt;backupid: AUT01&lt;/code&gt;, and we will retain 7 snapshots.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m8v1QZ0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/Lifecycle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m8v1QZ0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/Lifecycle.png" alt="AWS Lifecycle Manager" width="800" height="1141"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Usually we use CloudFormation to create our AWS environments and our EC2 instances. Unfortunately the tags you use for your EC2 instances are not automatically added to the according volumes of your instance. Bummer!&lt;/p&gt;

&lt;p&gt;This means we have to find a way to tag the instance volumes right after creation, and of course the easiest way to do this is using some magic in a User data script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Needed User data script
&lt;/h2&gt;

&lt;p&gt;The following script may be used as a User data script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-tags --resources $(aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) --query 'Volumes[*].[VolumeId]' --region=eu-central-1 --out text | cut -f 1) --tags Key=$Key,Value=$Value --region eu-central-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
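&lt;p&gt;Note that &lt;code&gt;$Key&lt;/code&gt; and &lt;code&gt;$Value&lt;/code&gt; are not defined by the one-liner itself; they have to be set earlier in your User data. A minimal sketch, simply using the example tag from above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Tag key/value used by the create-tags call below (the example tag from above)
Key=backupid
Value=AUT01
aws ec2 create-tags --resources $(aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) --query 'Volumes[*].[VolumeId]' --region=eu-central-1 --out text | cut -f 1) --tags Key=$Key,Value=$Value --region eu-central-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;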



&lt;p&gt;There are two parts in this script:&lt;/p&gt;


&lt;p&gt;Getting the volume IDs of the attached volumes with the help of the instance metadata service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) --query 'Volumes[*].[VolumeId]' --region=eu-central-1 --out text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name=attachment.device,Values=/dev/xvdb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;Tagging these volumes with the provided key and value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-tags --resources VOLUMEIDS --tags Key=$Key,Value=$Value --region eu-central-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see we are using an EC2 instance in the eu-central-1 region; you have to change this to the region you are using.&lt;/p&gt;
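&lt;p&gt;If you don’t want to hardcode the region, you can derive it from the instance metadata as well; a small sketch (the availability zone is simply the region name plus a trailing letter):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
# Strip the trailing AZ letter, e.g. eu-central-1a becomes eu-central-1
REGION=${AZ%?}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;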

&lt;p&gt;The EC2 instance needs an IAM role with sufficient rights to get the volume IDs and to tag the volumes. We will add the following policy to this role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "ec2:Describe*",
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": "ec2:CreateTags",
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Lifecycle Policy
&lt;/h2&gt;

&lt;p&gt;The final step is to add the Snapshot Lifecycle Policy with the needed parameters (TargetTags, …):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  BasicLifecyclePolicy:
    Type: "AWS::DLM::LifecyclePolicy"
    Properties:
      Description: "Lifecycle Policy using CloudFormation"
      State: "ENABLED"
      ExecutionRoleArn: !GetAtt
        - lifecycleRole
        - Arn
      PolicyDetails:
        ResourceTypes:
          - "VOLUME"
        TargetTags:
          -
            Key: "backupid"
            Value: "AUT01"
        Schedules:
          -
            Name: "Daily Snapshots"
            TagsToAdd:
              -
                Key: "type"
                Value: "DailySnapshot"
            CreateRule:
              Interval: 24
              IntervalUnit: "HOURS"
              Times:
                - "09:00"
            RetainRule:
              Count: 7
            CopyTags: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
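&lt;p&gt;If you prefer the CLI over CloudFormation, the same policy can be created with &lt;code&gt;aws dlm create-lifecycle-policy&lt;/code&gt;; a rough sketch (the account id, role name and file name are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws dlm create-lifecycle-policy \
    --description "Lifecycle Policy" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/lifecycleRole \
    --policy-details file://policyDetails.json \
    --region eu-central-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;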



&lt;p&gt;As you can see, an execution role is needed as well (with the proper policy attached). You will find this role and all additional needed resources in the full CloudFormation template on &lt;a href="https://github.com/kbild/AWS_Cloudformation_Examples/tree/master/Volume_Tagging"&gt;Github&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Feedback is always welcome!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>Use AWS CodePipeline to automatically deploy your Hugo website to AWS S3</title>
      <dc:creator>Klaus Bild</dc:creator>
      <pubDate>Tue, 05 Feb 2019 10:17:10 +0000</pubDate>
      <link>https://dev.to/kbild/use-aws-codepipeline-to-automatically-deploy-your-hugo-website-to-aws-s3-7li</link>
      <guid>https://dev.to/kbild/use-aws-codepipeline-to-automatically-deploy-your-hugo-website-to-aws-s3-7li</guid>
      <description>&lt;p&gt;So I have a &lt;a href="https://gohugo.io/"&gt;Hugo&lt;/a&gt; website now but deploying the generated HTML files to my AWS S3 bucket and invalidating my AWS Cloudfront deployment is very time consuming.&lt;/p&gt;

&lt;p&gt;Therefore I planned to build an AWS CodePipeline which helps me with this process. The following steps are needed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8SZkSWZb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/AWSCodePipeline01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8SZkSWZb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/AWSCodePipeline01.png" alt="AWS CodePipeline" width="800" height="827"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reading guides from other bloggers (&lt;a href="https://alimac.io/static-websites-with-s3-and-hugo-part-1/"&gt;alimac.io&lt;/a&gt;, &lt;a href="https://medium.com/@yagonobre/automatically-invalidate-cloudfront-cache-for-site-hosted-on-s3-3c7818099868"&gt;YagoYns&lt;/a&gt;, &lt;a href="https://github.com/symphoniacloud/github-codepipeline"&gt;Symphonia&lt;/a&gt;) gave me a good starting point to build my own solution based on AWS CodePipeline.&lt;/p&gt;

&lt;p&gt;I will publish a complete CloudFormation script in one of the following blog posts but will only talk about the &lt;a href="https://aws.amazon.com/codebuild/"&gt;AWS CodeBuild&lt;/a&gt; part today.&lt;/p&gt;

&lt;p&gt;During the build process I need functionality which generates the static HTML files from my Hugo and asciidoc files in my GitHub repo. The easiest way to get this functionality is to use a Docker container which has Hugo and asciidoctor installed.&lt;/p&gt;

&lt;p&gt;So let’s start this by creating a build project in AWS CodeBuild by clicking on &lt;code&gt;Create project&lt;/code&gt; and defining a ProjectName &lt;code&gt;BuildContainerForHTML&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;During the creation of your build project you can choose whether you want to use an AWS managed Docker image or a custom image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zzaihlD---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/DockerImage.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zzaihlD---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/DockerImage.png" alt="Which Docker Image" width="800" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we will choose &lt;code&gt;Managed Image&lt;/code&gt; and will use following parameters:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---NRdvqWC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/DockerImage02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---NRdvqWC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/DockerImage02.png" alt="Image Settings" width="680" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This means we will use the standard AWS Ubuntu 14.04 base Docker container, so nothing is installed by default.&lt;/p&gt;

&lt;p&gt;Now we need to install the needed software, which is defined through a file called buildspec.yml (which should be placed in the source code root directory → GitHub repo).&lt;/p&gt;

&lt;p&gt;The hardest part for me was to find out what the &lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html"&gt;buildspec.yml&lt;/a&gt; has to look like. This buildspec file defines the build commands which will be executed in the Docker container.&lt;br&gt;&lt;br&gt;
As you can see, I’m only using two phases in this example and define the output artifact:&lt;/p&gt;

&lt;p&gt;buildspec.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

phases:
  install:
    commands:
      - echo Entered the install phase...
      - apt-get -qq update &amp;amp;&amp;amp; apt-get -qq install curl
      - apt-get -qq install asciidoctor
      - curl -s -L https://github.com/gohugoio/hugo/releases/download/v0.53/hugo_0.53_Linux-64bit.deb -o hugo.deb
      - dpkg -i hugo.deb
    finally:
      - echo Installation done
  build:
    commands:
      - echo Entered the build phase ...
      - echo Build started on `date`
      - cd $CODEBUILD_SRC_DIR
      - rm -f buildspec.yml &amp;amp;&amp;amp; rm -f .git &amp;amp;&amp;amp; rm -f README.md
      - hugo --quiet
    finally:
      - echo Building the HTML files finished
artifacts:
  files:
    - '**/*'
  base-directory: $CODEBUILD_SRC_DIR/public/
  discard-paths: no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;dl&gt;
&lt;dt&gt;&lt;strong&gt;install:&lt;/strong&gt;&lt;/dt&gt;
&lt;dd&gt;
&lt;p&gt;Here we have all commands to install Hugo and asciidoctor&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;strong&gt;build:&lt;/strong&gt;&lt;/dt&gt;
&lt;dd&gt;
&lt;p&gt;Here we switch to the directory with the source files from GitHub ($CODEBUILD_SRC_DIR) and execute the Hugo build command, which creates all static pages in the directory &lt;strong&gt;public&lt;/strong&gt;&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;strong&gt;artifacts:&lt;/strong&gt;&lt;/dt&gt;
&lt;dd&gt;
&lt;p&gt;Here we define $CODEBUILD_SRC_DIR/public/ as the base directory and add all files in this directory to the output artifact&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
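&lt;p&gt;If you want to test these build commands locally before pushing, you can run them in the same Ubuntu 14.04 image; a rough sketch, assuming Docker is installed and your repo is the current directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -v "$PWD":/src -w /src ubuntu:14.04 bash -c '
  apt-get -qq update &amp;amp;&amp;amp; apt-get -qq install -y curl asciidoctor
  curl -s -L https://github.com/gohugoio/hugo/releases/download/v0.53/hugo_0.53_Linux-64bit.deb -o hugo.deb
  dpkg -i hugo.deb
  hugo --quiet'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;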

&lt;p&gt;We also have to define a policy to be used by the CodeBuild project. In this example we only need access to the input (kbil-artifacts) and output (kbild-yourwebsite) S3 buckets and the CloudWatch logs. The following policy is used:&lt;/p&gt;

&lt;p&gt;CodeBuild Policy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::kbild-yourwebsite",
                "arn:aws:s3:::kbild-yourwebsite/*",
                "arn:aws:s3:::kbil-artifacts",
                "arn:aws:s3:::kbil-artifacts/*"
            ],
            "Effect": "Allow",
            "Sid": "VisualEditor0"
        },
        {
            "Action": "logs:*",
            "Resource": "*",
            "Effect": "Allow",
            "Sid": "VisualEditor1"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This build project can now be added to the CodePipeline as a build stage.&lt;br&gt;&lt;br&gt;
The &lt;code&gt;SourceCode&lt;/code&gt; artifact from GitHub will be used as input and the &lt;strong&gt;public html files&lt;/strong&gt; will be exported as the &lt;code&gt;PublicFiles&lt;/code&gt; artifact, which will be sent to S3 later on.&lt;/p&gt;
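&lt;p&gt;For the deploy part, the generated files can later be synced to the website bucket and the CloudFront cache invalidated; roughly like this (the distribution id is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 sync ./public s3://kbild-yourwebsite --delete
aws cloudfront create-invalidation --distribution-id E123EXAMPLE --paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;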

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--33aLLaTi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/BuildAction.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--33aLLaTi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/BuildAction.png" alt="Build Action" width="600" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After pushing a new blog post to the GitHub Repo we will see our CodePipeline in full action:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DlKwJ8yy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/Buildpipeline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DlKwJ8yy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kbild.ch/201902/Buildpipeline.png" alt="Build Pipeline" width="500" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nice!&lt;/p&gt;

&lt;p&gt;Adding additional stages and steps is pretty easy and I guess I will use CodePipeline a lot in the future.&lt;/p&gt;

&lt;p&gt;Stay tuned for a CloudFormation script which will create the whole pipeline.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
