When using AWS, we often need multiple AWS accounts for various reasons. If you work with AWS DevOps tools, you have probably come across AWS CodePipeline, a pipeline automation service from AWS. While working with CodePipeline in a multi-account scenario, you have surely faced issues like Access Denied errors, which IMHO are the most puzzling and frustrating because they don't always say where we're lacking permissions. For the multi-account scenario where you deploy your application into another AWS account, AWS has excellent documentation which I'd highly recommend you go through. But what if you have a unique requirement where the source of your pipeline itself is in another AWS account and the deployment is in the same account as CodePipeline?
Recently, I came across this use-case and tried to find a starting guide for creating a CodePipeline whose source is in another AWS account, but couldn't find any useful resources. Hence, I decided to write this blog to help others who might run into such a use-case.
This blog will walk you through an example of deploying from Amazon ECR to Amazon ECS with the help of CodePipeline. Here is the use-case:
- Account-A has an already-built Docker image hosted in ECR (Source stage).
- Account-B has an S3 bucket (Source stage) containing imagedefinitions.json, and ECS (Deploy stage).
NOTE: All AWS resources in this example exist in the same AWS region.
Our requirement is this: a team of developers in Account-A builds Docker images locally and pushes them to an ECR repository in Account-A, and whenever the developers push a changed image to that repository, it should be deployed automatically to the ECS resources in Account-B. And we want our CodePipeline to be set up in Account-B itself, where the deployment is done. Sounds challenging, right? No worries. Let's get started...
Prerequisites:
- Two active AWS accounts.
- A working Amazon ECS cluster up and running with all other related resources, e.g. a Task Definition, an ECR repository, an ECS Service, and a Docker image, in one of the AWS accounts.
Target Architecture:
The above diagram shows the following workflow:
- Initially, the developers will build the Docker image and push it to their central ECR repository in Account A.
- Account A will have an EventBridge Rule created listening to our ECR repository in Account A. As soon as a Docker image is successfully pushed to the ECR repository, the EventBridge Rule is triggered in Account A.
- Default EventBridge Bus in Account B will be configured as a target to our EventBridge rule in Account A.
- We will have another EventBridge rule created in Account B which will listen to the push events from the ECR repository in Account A.
- The EventBridge rule in Account B will trigger our CodePipeline.
- CodePipeline will assume the cross-account IAM role created in Account A to list and describe the latest Docker image pushed to the ECR repository.
- CodePipeline will read the Docker image and pass it to the next stage of CodePipeline.
- The zipped imagedefinitions.json file from the S3 bucket will contain the container name and the image URI of our Docker image stored in Account A’s ECR repository, which is used in the ECS Standard deployment.
Now, let's see how to build such a pipeline from the AWS Management Console, step by step.
Step 1: IN ACCOUNT-B -> Set up a normal CodePipeline:
- Firstly, create a standard pipeline with all the components (ECR, S3, ECS) residing in the same Account-B, the way you normally would. We create this pipeline in a single account first just to get a backbone structure for our cross-account pipeline, which we will modify to our requirements in later steps.
- In this pipeline, the Source stage will have two actions, ECR and S3: ECR will be your repository with an image tag, and the S3 bucket will hold a zipped copy of a JSON file named imagedefinitions.json. This JSON file is required for the ECS deployment because it contains the container name and the ECR image URI with its image tag. Refer to the official documentation to learn more about this file. Do remember to zip the file when S3 is used in the source stage.
NOTE: This S3 bucket should be versioned.
When you include ECS (Standard deployment) as a deployment provider in your pipeline, make sure to point the S3 action's output as the input to ECS, because the ECR action's output contains information only about the Docker image, not the container name expected by the CodePipeline job worker for ECS.
Once you have a working pipeline in Account-B, we will modify it for cross-account deployments with a source from Account-A.
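Since the S3 source action insists on a zipped imagedefinitions.json, a small helper like the one below can generate and zip it for you. This is a sketch with hypothetical container and image values; substitute your own before uploading to the source bucket.

```python
import json
import zipfile


def write_image_definitions(container_name, image_uri, zip_path="imagedefinitions.zip"):
    """Write imagedefinitions.json and zip it for CodePipeline's S3 source action.

    container_name and image_uri are placeholders; use your real values.
    """
    body = json.dumps([{"name": container_name, "imageUri": image_uri}])
    with open("imagedefinitions.json", "w") as f:
        f.write(body)
    # The S3 source action expects a zip archive, not the bare JSON file.
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.write("imagedefinitions.json")
    return zip_path


# Example with hypothetical values:
# write_image_definitions(
#     "my-container",
#     "111111111111.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
# )
```

You can then upload the resulting zip to the source bucket (e.g. with `aws s3 cp imagedefinitions.zip s3://<BUCKET>/`).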
Step 2: IN ACCOUNT-B -> Create custom AWS KMS key:
- For any cross-account deployment, CodePipeline requires a customer managed KMS key so that the other AWS account can access the encrypted CodePipeline artifacts stored in CodePipeline’s S3 bucket (the artifact store).
- Create a custom AWS KMS key that allows Account-A to access the CodePipeline artifacts residing in Account-B’s S3 bucket.
- While creating the key policy via the console, when you reach the 5th step, Review and edit key policy, enter the policy below, replacing the appropriate ARNs at the commented places.
{
    "Id": "key-consolepolicy",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<ACCOUNT-B_ID>:root" //MAKE CHANGES HERE
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow access for Key Administrators",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<ACCOUNT-B_ID>:user/<YOUR_IAM_USER>" //MAKE CHANGES HERE
            },
            "Action": [
                "kms:Create*",
                "kms:Describe*",
                "kms:Enable*",
                "kms:List*",
                "kms:Put*",
                "kms:Update*",
                "kms:Revoke*",
                "kms:Disable*",
                "kms:Get*",
                "kms:Delete*",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<ACCOUNT-B_ID>:user/<YOUR_IAM_USER>", //MAKE CHANGES HERE
                    "arn:aws:iam::<ACCOUNT-B_ID>:role/service-role/AWSCodePipelineServiceRole-<AWS_REGION>-<CODEPIPELINE_NAME>", //REPLACE WITH CODEPIPELINE SERVICE ROLE ARN
                    "arn:aws:iam::<ACCOUNT-A_ID>:root" //MAKE CHANGES HERE
                ]
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<ACCOUNT-B_ID>:user/<YOUR_IAM_USER>", //MAKE CHANGES HERE
                    "arn:aws:iam::<ACCOUNT-B_ID>:role/service-role/AWSCodePipelineServiceRole-<AWS_REGION>-<CODEPIPELINE_NAME>", //REPLACE WITH CODEPIPELINE SERVICE ROLE ARN
                    "arn:aws:iam::<ACCOUNT-A_ID>:root" //MAKE CHANGES HERE
                ]
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
    ]
}
The above KMS key policy will allow your IAM user, the CodePipeline service role, and Account-A to use the key with which our artifacts are encrypted.
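If you prefer to assemble this key policy programmatically rather than pasting and editing it by hand, a helper like the following can fill in the ARNs. This is a sketch abbreviated to the cross-account statements that matter here (the key-administrators statement from the full policy above is omitted for brevity), and all ARNs passed in are your own values.

```python
import json


def build_key_policy(account_b_id, account_a_id, admin_user_arn, pipeline_role_arn):
    """Return the KMS key policy (as a JSON string) with the ARNs filled in.

    Mirrors the cross-account statements of the policy shown above.
    """
    key_users = [admin_user_arn, pipeline_role_arn,
                 f"arn:aws:iam::{account_a_id}:root"]
    return json.dumps({
        "Id": "key-consolepolicy",
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "Enable IAM User Permissions",
             "Effect": "Allow",
             "Principal": {"AWS": f"arn:aws:iam::{account_b_id}:root"},
             "Action": "kms:*",
             "Resource": "*"},
            {"Sid": "Allow use of the key",
             "Effect": "Allow",
             "Principal": {"AWS": key_users},
             "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
                        "kms:GenerateDataKey*", "kms:DescribeKey"],
             "Resource": "*"},
            {"Sid": "Allow attachment of persistent resources",
             "Effect": "Allow",
             "Principal": {"AWS": key_users},
             "Action": ["kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant"],
             "Resource": "*",
             "Condition": {"Bool": {"kms:GrantIsForAWSResource": "true"}}},
        ],
    }, indent=2)
```

The generated JSON can be pasted into the console's Review and edit key policy step, or passed to `aws kms create-key --policy`.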
Step 3: IN ACCOUNT-A -> Create a custom IAM role for cross-account access:
- Create a new IAM role named CrossAccount-A_Role. You can name it anything that follows IAM naming rules; ideally pick a name that clearly states its purpose.
- On the Create role page, choose Another AWS account option from Select type of trusted entity section. Enter the AWS Account ID of Account-B in the Specify accounts that can use this role section. Click Next: Permissions.
- Attach two AWS managed policies named AmazonEC2ContainerRegistryReadOnly and AmazonS3FullAccess to this IAM Role.
- Add a custom inline IAM policy named cross-account-KMS to this same IAM role so it can access Account-B’s KMS key created in Step 2. Use the policy below, replacing the ARN of the KMS key in Account-B at the commented place.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:DescribeKey",
"kms:GenerateDataKey*",
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:Decrypt"
],
"Resource": [
"<ARN_OF_KMS_KEY_IN_ACCOUNT-B>" //MAKE CHANGE HERE
]
}
]
}
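For reference, choosing the Another AWS account option in this step sets the role's trust policy so that principals in Account-B can assume it. The resulting trust policy should look like this:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<ACCOUNT-B_ID>:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```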
Step 4: IN ACCOUNT-B -> Create a policy for the S3 bucket that grants access to Account A:
- Choose your CodePipeline’s S3 bucket, i.e. the artifact store (not the source S3 bucket), and edit its bucket policy to match the following, replacing the appropriate ARNs at the commented places.
- The below S3 bucket policy will allow Account-A to access our pipeline's artifacts.
{
"Version": "2012-10-17",
"Id": "SSEAndSSLPolicy",
"Statement": [
{
"Sid": "DenyUnEncryptedObjectUploads",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::<CODEPIPELINE'S_S3_BUCKET_NAME>/*", //MAKE CHANGE HERE
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "aws:kms"
}
}
},
{
"Sid": "DenyInsecureConnections",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::<CODEPIPELINE'S_S3_BUCKET_NAME>/*", //MAKE CHANGE HERE
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<ACCOUNT-A_ID>:root" //MAKE CHANGE HERE
},
"Action": [
"s3:Get*",
"s3:Put*"
],
"Resource": "arn:aws:s3:::<CODEPIPELINE'S_S3_BUCKET_NAME>/*" //MAKE CHANGE HERE
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<ACCOUNT-A_ID>:root" //MAKE CHANGE HERE
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<CODEPIPELINE'S_S3_BUCKET_NAME>" //MAKE CHANGE HERE
}
]
}
Step 5: IN ACCOUNT-B -> Create an inline policy for service role of CodePipeline:
- As we want the CodePipeline service role to assume the cross-account IAM role we created in Step 3, we need to add an extra inline IAM policy to the CodePipeline service role.
- Enter the below IAM policy by replacing Account-A ID at the commented place for this inline policy.
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": [
"arn:aws:iam::<ACCOUNT-A_ID>:role/*" //MAKE CHANGE HERE
]
}
}
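Note that the `role/*` wildcard above lets the pipeline role assume any role in Account-A. If you want to follow least privilege, you can instead scope the Resource to the exact role from Step 3 (a tighter variant, assuming you kept the name CrossAccount-A_Role):

```json
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::<ACCOUNT-A_ID>:role/CrossAccount-A_Role"
    }
}
```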
Step 6: IN ACCOUNT-A -> Apply ECR Repository Policy:
- To allow CodePipeline in Account-B to pull ECR images residing in Account-A, the ECR repository must allow Account-B to pull images from it. For this, we will apply a resource-based policy to our ECR repository.
- You can use the below ECR repository policy by replacing appropriate ARN at the commented places.
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "Cross-account-policy",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<ACCOUNT-B_ID>:root", //MAKE CHANGE HERE
"arn:aws:iam::<ACCOUNT-A_ID>:role/CrossAccount-A_Role" //ARN OF CROSS-ACCOUNT IAM ROLE CREATED IN ACCOUNT-A AT STEP 3
]
},
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:CompleteLayerUpload",
"ecr:DescribeImages",
"ecr:DescribeRepositories",
"ecr:GetDownloadUrlForLayer",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:UploadLayerPart"
]
}
]
}
Step 7: IN ACCOUNT-B -> Modify your pipeline’s configuration file:
- Now that most of the required components of our cross-account pipeline are ready, we will modify the configuration file of the pipeline created in Step 1: change the source to the ECR repository in Account-A, and add the KMS key details.
- To do that, run the get-pipeline command from the AWS CLI:
aws codepipeline get-pipeline --name <CODEPIPELINE_NAME> --region <AWS_REGION> > pipeline.json
- Open this pipeline.json file in your favourite text editor (vim?) and add an ‘encryptionKey’ entry in the ‘artifactStore’ section:
"artifactStore": {
    "type": "S3",
    "location": "codepipeline-<AWS_REGION>-XXXXXXXXXX", //MAKE CHANGE HERE
    "encryptionKey": {
        "id": "arn:aws:kms:<AWS_REGION>:<ACCOUNT-B_ID>:key/XXXXXXXXXX-XXXX-XXX-XXXX-XXXXXXXXXX", //MAKE CHANGE HERE
        "type": "KMS"
    }
}
- Then, modify the ECR action of the Source stage, changing the ECR repository name and image tag to the values from Account-A.
- In the same ECR action, include
"roleArn": "arn:aws:iam::<ACCOUNT-A_ID>:role/CrossAccount-A_Role"
which we created in Step 3, so that this cross-account IAM role from Account-A is used for the ECR action. Below is the snippet for our cross-account source action in the pipeline's JSON file.
NOTE: Replace the appropriate values at the commented places below.
{
    "name": "Source",
    "actionTypeId": {
        "category": "Source",
        "owner": "AWS",
        "provider": "ECR",
        "version": "1"
    },
    "runOrder": 1,
    "configuration": {
        "ImageTag": "latest", //MAKE CHANGE HERE IN CASE OF CUSTOM TAG
        "RepositoryName": "XXXXX" //MAKE CHANGE HERE
    },
    "outputArtifacts": [
        {
            "name": "SourceArtifact"
        }
    ],
    "inputArtifacts": [],
    "region": "<AWS_REGION>", //MAKE CHANGE HERE
    "namespace": "SourceVariables",
    "roleArn": "arn:aws:iam::<ACCOUNT-A_ID>:role/CrossAccount-A_Role" //MENTION CROSS-ACCOUNT IAM ROLE
}
NOTE: Above is just a snippet of our ECR action of Source stage and not the entire JSON configuration of pipeline.
Make sure to perform the 4th step of this documentation to remove the unwanted metadata section from configuration file.
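If you'd rather strip the metadata section with a script than by hand, a few lines of Python will do it. This is a sketch that edits pipeline.json in place; get-pipeline includes a read-only metadata block that update-pipeline does not accept.

```python
import json


def strip_metadata(path="pipeline.json"):
    """Remove the read-only 'metadata' block that get-pipeline emits,
    so the file can be fed back to update-pipeline."""
    with open(path) as f:
        doc = json.load(f)
    doc.pop("metadata", None)  # no-op if the block is already absent
    with open(path, "w") as f:
        json.dump(doc, f, indent=4)


# Usage: strip_metadata("pipeline.json")
```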
Once you make the above changes, update the pipeline with the update-pipeline CLI command:
aws codepipeline update-pipeline --region <AWS_REGION> --cli-input-json file://pipeline.json
- After you update your pipeline, don't forget to update your imagedefinitions.json file with the correct imageUri from Account-A if you haven't done so. It should look like below:
[
    {
        "name": "<CONTAINER_NAME>", //MAKE CHANGES HERE
        "imageUri": "<ACCOUNT-A_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<REPOSITORY_NAME>:<IMAGE_TAG>" //MAKE CHANGES HERE
    }
]
Once you have updated the imagedefinitions.json file, zip it and upload it again to your source S3 bucket.
As you might be aware, CloudWatch Events (CWE) rules are the background resources that trigger your pipeline whenever your source in AWS changes. Since one of our source actions, ECR, exists in another AWS account, CodePipeline cannot create this CWE rule for ECR automatically, as it did for the S3 action. Hence, we will now create a CWE rule for the ECR repository with the help of the CloudWatch Event Bus.
Step 8: IN ACCOUNT-B -> Edit default Event Bus to allow account A:
- Go to CloudWatch console and select your default Event Bus.
- Click on Add permission and grant access to your Account-A ID (the Everybody (*) option also works, but it is broader than needed).
Step 9: IN ACCOUNT-A -> Create CWE Rule for your ECR repository:
- Follow this official documentation till Step 5 to create CWE rule for your ECR repository as a source for your CWE rule.
- As a target for this CWE rule, select Event bus in another AWS account. Enter your Account-B ID and select Create a new role for this specific resource option for IAM role which will automatically create IAM role for this CWE rule.
Step 10: IN ACCOUNT-B -> Create CWE Rule with CodePipeline as a target:
- Create a new CWE rule with the Event Source exactly the same as the one you created in Account-A. Edit this event pattern by adding an account section with Account-A’s ID as the value. This will listen, via the Event Bus, to any changes occurring in your ECR repository in Account-A. Refer to the official documentation to see exactly how to do this. Once you create the event pattern, it should look like below:
{
    "detail-type": [
        "ECR Image Action"
    ],
    "account": [
        "ACCOUNT-A_ID" //MAKE CHANGES HERE
    ],
    "source": [
        "aws.ecr"
    ],
    "detail": {
        "action-type": [
            "PUSH"
        ],
        "image-tag": [
            "latest" //MAKE CHANGE HERE IN CASE OF CUSTOM TAG
        ],
        "repository-name": [
            "<REPOSITORY_NAME>" //MAKE CHANGES HERE
        ],
        "result": [
            "SUCCESS"
        ]
    }
}
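Before saving the rule, you can sanity-check the pattern against a sample ECR push event. The function below emulates only the basic EventBridge matching semantics for flat value lists (each pattern list must contain the event's value), not the full pattern language, and the account ID and repository name are hypothetical:

```python
def matches(pattern, event):
    """Return True if the event satisfies the pattern: for every key,
    the event's value must appear in the pattern's list (recursing into dicts)."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not matches(expected, event[key]):
                return False
        elif event[key] not in expected:
            return False
    return True


# A sample event shaped like what ECR emits on a successful push.
sample_event = {
    "detail-type": "ECR Image Action",
    "account": "111111111111",
    "source": "aws.ecr",
    "detail": {
        "action-type": "PUSH",
        "image-tag": "latest",
        "repository-name": "my-repo",
        "result": "SUCCESS",
    },
}

pattern = {
    "detail-type": ["ECR Image Action"],
    "account": ["111111111111"],
    "source": ["aws.ecr"],
    "detail": {
        "action-type": ["PUSH"],
        "image-tag": ["latest"],
        "repository-name": ["my-repo"],
        "result": ["SUCCESS"],
    },
}
```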
- Put your CodePipeline ARN (arn:aws:codepipeline:<AWS_REGION>:<ACCOUNT-B_ID>:<PIPELINE_NAME>) as a target for this CWE rule.
Hooray!! We have now successfully set up CodePipeline in Account-B, with ECR in Account-A, an S3 bucket containing only the zipped imagedefinitions.json file for the ECS deployment, and the ECS service in Account-B. Whenever you make changes to either source, your pipeline will be triggered in a CI/CD manner.
Automation & Scale:
Following the step-by-step instructions above is useful for understanding what we are doing to implement the architecture. Once that is clear, the whole target architecture can be deployed automatically with AWS CloudFormation, AWS's Infrastructure as Code (IaC) service. To automate the deployment across multiple AWS accounts, use the CloudFormation templates from the GitHub repository. The repository contains three CloudFormation templates, which should be deployed sequentially as noted below:
- In Account B, create a new CloudFormation stack using AccB-pipeline-1.yaml template.
- In Account A, create a new CloudFormation stack using AccA-pipeline.yaml template.
- In Account B, run the following AWS CLI command to allow Account A to send ECR push events to the Event Bus in Account B.
aws events put-permission --action events:PutEvents --statement-id MySid --principal <ACCOUNT_A_ID>
- In Account B, update the CloudFormation stack created in Step 1 using the AccB-pipeline-2.yaml template. This template takes one additional parameter, CrossAccountRoleName, which is the name of the cross-account role created in Step 2, available from the Outputs section of the CloudFormation console. Along with that, supply the name of the ECR repository created in Account A in Step 2, also available from the Outputs section.
- In Account B, make sure to update the imagedefinitions.json file with the correct imageUri from Account A. It may look like below:
[
    {
        "name": "<CONTAINER_NAME>", //MAKE CHANGES HERE
        "imageUri": "<ACCOUNT-A_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<REPOSITORY_NAME>:<IMAGE_TAG>" //MAKE CHANGES HERE
    }
]
- In Account A, push the already-built Docker image to the ECR repository in Account A.
- In Account B, notice how CodePipeline gets triggered automatically as soon as a Docker image is pushed to the ECR repository in Account A. This Docker image will be deployed to the ECS cluster in Account B in a CI/CD manner.
Thanks for reading this blog! I hope you find it useful. Feel free to like, share and comment on this article with your constructive feedback. If you find this blog helpful, or if you're stuck at any of the above steps, do let me know via the comments section and I'd be more than happy to help!
Have an AWSome Day! 😎