Working in a multi-account AWS environment, you quickly realize that while business domains are neatly isolated into separate accounts, real-world workloads often need to interact across those boundaries.
Recently, one of our app teams needed to trigger a platform pipeline whenever their EKS app uploaded a file to an S3 bucket. A simple ask - right? But in our world, that meant messaging the platform team and waiting for them to push the magic “Run Pipeline” button, because there was no seamless way to trigger that workflow across accounts from S3. Hence, we are here today.
In this post, I’ll show how we eliminated that friction and built a clean, fully automated, cross-account S3 → CodePipeline trigger using AWS-native services.
🗺️ Architectural Overview
This is the sample account scenario:
- Account-A (Source) - we’ll call it ACA; it has the S3 bucket
- Account-B (Target) - we’ll call it ACB; it runs the CodePipeline
Here, we’ll build a cross-account trigger where:
- ACA detects the S3 update event
- An EventBridge rule in ACA forwards the event to ACB’s event bus
- An EventBridge rule in ACB matches the forwarded event
- Its target uses a role with permission to start the CodePipeline execution
- ACB runs the pipeline
Here is a simple flow diagram:
                ┌──────────────────────────────────────────────────────────┐
                │                    ACCOUNT A (Source)                    │
                │                                                          │
                │  ┌────────────────────────────────────────────────────┐  │
 CodeCommit /   │  │                  EventBridge Rule                  │  │
 ECR / S3 ─────▶│  │   (Repo state change / image push / object put)    │  │
 etc.           │  └────────────────────────────────────────────────────┘  │
                │                            │                             │
                │                            ▼                             │
                │         ┌──────────────────────────────────────┐         │
                │         │          EventBridge Target          │         │
                │         │ - Uses IAM role in Account A         │         │
                │         │   with sts:AssumeRole permission     │         │
                │         └──────────────────────────────────────┘         │
                │                            │ (sts:AssumeRole)            │
                └────────────────────────────┼─────────────────────────────┘
                                             │
                                             ▼
                ┌──────────────────────────────────────────────────────────┐
                │                   ACCOUNT B (Pipeline)                   │
                │                                                          │
                │  ┌────────────────────────────────────────────────────┐  │
                │  │       IAM Role: cross-account-pipeline-start       │  │
                │  │  - Trusts Account A                                │  │
                │  │  - Allows: codepipeline:StartPipelineExecution     │  │
                │  └────────────────────────────────────────────────────┘  │
                │                            │                             │
                │                            ▼                             │
                │  ┌────────────────────────────────────────────────────┐  │
                │  │                  AWS CodePipeline                  │  │
                │  │            (Build/Test/Deploy Pipeline)            │  │
                │  └────────────────────────────────────────────────────┘  │
                │                                                          │
                └──────────────────────────────────────────────────────────┘
There are resources to be created on both sides. This can be done by going into each account and deploying locally, or, if a cross-account role is configured, everything can be deployed from a single account.
In my case, as the S3 bucket (used by the EKS app to upload/update the file) was pre-created from ACB as part of the account-vending process, I did everything from there. That way, it stays managed end-to-end from the platform-management side.
📤 In Source Account – ACA
This is where we will create:
- the S3 bucket (with the object/file inside)
- an assume role/policy that lets EventBridge do PutEvents
- an EventBridge rule and target to forward the event to the ACB bus
1️⃣ NOTE: provider = aws.eksapp is optional.
As I’m running all ACA-side resources from the ACB account, aws.eksapp is simply a provider alias pointing Terraform at the ACA account (where the EKS app actually runs).
It’s not needed if deploying from within ACA directly.
2️⃣ NOTE: for_each = local.cdp_svc_enabled ? toset([var.service_name]) : [] is just an on/off switch for this entire set of resources.
When local.cdp_svc_enabled is false, none of the cross-account trigger infrastructure gets created.
Handy for conditional deployments or environment-specific setups.
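For reference, here’s a minimal sketch of how such a provider alias and on/off flag could be declared. The role ARN, region, and account ID below are purely illustrative assumptions; use whatever your own cross-account deployment role looks like.
# ----------------------------------------------------------
# (Sketch) Provider alias + enable flag used by the resources below
# ----------------------------------------------------------
provider "aws" {
  alias  = "eksapp"
  region = "eu-west-1" # hypothetical region

  # Run from ACB, but create resources in ACA by assuming a role there
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform-deploy" # hypothetical ACA role
  }
}

locals {
  # Flip to false to skip creating the whole cross-account trigger
  cdp_svc_enabled = true
}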
Let’s break it down step-by-step.
1️⃣ Create the S3 bucket
This step is optional for this process; assuming you already have a bucket, it is mentioned here just for completeness. Feel free to use your usual way of creating one.
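If you do need one, a minimal sketch could look like the following. In our setup the bucket actually comes from a shared module (module.app_bucket), so the resource and naming below are illustrative assumptions only.
# ----------------------------------------------------------
# (Sketch) S3 bucket the EKS app uploads the file into
# ----------------------------------------------------------
resource "aws_s3_bucket" "app_bucket" {
  for_each = local.cdp_svc_enabled ? toset([var.service_name]) : []

  bucket   = "${local.template_name}-${each.value}-uploads" # hypothetical naming
  provider = aws.eksapp
}

# Versioning keeps re-uploads of the same key distinguishable
resource "aws_s3_bucket_versioning" "app_bucket" {
  for_each = local.cdp_svc_enabled ? toset([var.service_name]) : []

  bucket   = aws_s3_bucket.app_bucket[each.value].id
  provider = aws.eksapp

  versioning_configuration {
    status = "Enabled"
  }
}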
2️⃣ Enable EventBridge notifications from the S3 bucket
# ----------------------------------------------------------
# Ensure bucket sends events to EventBridge
# ----------------------------------------------------------
resource "aws_s3_bucket_notification" "eb_xacc" {
for_each = local.cdp_svc_enabled ? toset([var.service_name]) : []
bucket = module.app_bucket[each.value].name
eventbridge = true
provider = aws.eksapp
}
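With eventbridge = true, S3 delivers all object-level events for this bucket to the default event bus in ACA, which is exactly where the rule in step 4 listens before forwarding to ACB.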
3️⃣ Assume Role for EventBridge
# ----------------------------------------------------------
# Role for EB to assume to PutEvents to ACB bus
# ----------------------------------------------------------
resource "aws_iam_role" "eb_forward" {
for_each = local.cdp_svc_enabled ? toset([var.service_name]) : []
name = "${local.template_name}-xacc-s3eb-Role"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Effect = "Allow",
Principal = { "Service" : "events.amazonaws.com" },
Action = ["sts:AssumeRole"]
}]
})
provider = aws.eksapp
}
resource "aws_iam_role_policy" "eb_forward" {
for_each = local.cdp_svc_enabled ? toset([var.service_name]) : []
role = aws_iam_role.eb_forward[each.value].id
policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Effect = "Allow",
Action = "events:PutEvents",
Resource = module.pipeline[each.value].event_bus.arn
}]
})
provider = aws.eksapp
}
4️⃣ Capture the S3 bucket event when the file updates
# ----------------------------------------------------------
# Events Capture and forward to ACB bus
# ----------------------------------------------------------
resource "aws_cloudwatch_event_rule" "eb_xacc" {
for_each = local.cdp_svc_enabled ? toset([var.service_name]) : []
name = "${local.template_name}-xacc"
description = "Forward S3 Object Created events to ACB event bus"
provider = aws.eksapp
event_pattern = jsonencode({
source = ["aws.s3"],
detail-type = ["Object Created"],
detail = {
bucket = {
name = [module.app_bucket[each.value].name]
},
object = {
key = [
{ prefix = data.aws_s3_object.aso_file[each.value].key }
]
}
}
})
}
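One reference above that isn’t defined anywhere else in this post is data.aws_s3_object.aso_file, which supplies the object key used as the prefix filter. A minimal sketch, assuming the key is passed in as a (hypothetical) variable:
# ----------------------------------------------------------
# (Sketch) Look up the object whose key drives the prefix match
# ----------------------------------------------------------
data "aws_s3_object" "aso_file" {
  for_each = local.cdp_svc_enabled ? toset([var.service_name]) : []

  bucket   = module.app_bucket[each.value].name
  key      = var.trigger_object_key # hypothetical, e.g. "uploads/app-config.yaml"
  provider = aws.eksapp
}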
# ----------------------------------------------------------
# Target = ACB account event bus
# ----------------------------------------------------------
resource "aws_cloudwatch_event_target" "eb_xacc" {
for_each = local.cdp_svc_enabled ? toset([var.service_name]) : []
rule = aws_cloudwatch_event_rule.eb_xacc[each.value].name
arn = module.pipeline[each.value].event_bus.arn
role_arn = aws_iam_role.eb_forward[each.value].arn
provider = aws.eksapp
}
🎯 In Target Account – ACB
This is where all the action actually happens — the CodePipeline lives here, the event bus lives here, and this is the account that ACA ultimately needs to poke to start the pipeline.
Here, we will:
- Accept the forwarded EventBridge event
- Create the IAM Role that EventBridge will assume
- Give it permission to start the pipeline
- Set up the CodePipeline start trigger
- And wire everything to the event bus that ACA sends into
Let’s walk it through step-by-step.
1️⃣ Create an EventBridge Event Bus
If you already have a centralized event bus — great. You probably have something like this:
event_bus = {
arn = aws_cloudwatch_event_bus.cdp_bus.arn
name = aws_cloudwatch_event_bus.cdp_bus.name
}
If not, here’s the minimal setup to get one. I used a dedicated one.
# ----------------------------------------------------------
# Custom bus to receive events from INC account
# ----------------------------------------------------------
resource "aws_cloudwatch_event_bus" "s3_trigger" {
name = "${var.name_prefix}-cdp-s3-trigger"
}
This is the bus ACA will forward events into.
2️⃣ Create the IAM Role that EventBridge will Assume
Unlike some other cross-account setups, where the other AWS account ID is the principal that assumes a role, here the EventBridge service principal assumes the role.
# ----------------------------------------------------------
# EventBridge Assume-Role
# ----------------------------------------------------------
// Role-policy
data "aws_iam_policy_document" "s3_trigger" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
}
}
// Role
resource "aws_iam_role" "s3_trigger" {
name = "${var.name_prefix}-s3eb-Role"
assume_role_policy = data.aws_iam_policy_document.s3_trigger.json
}
3️⃣ Give that role permission to start the CodePipeline
This is the second half of the trust chain — once EventBridge assumes the role above, it must be allowed to start the CodePipeline(s):
# ----------------------------------------------------------
# EventBridge start-pipeline policy
# ----------------------------------------------------------
resource "aws_iam_role_policy" "s3_trigger" {
name = "${var.name_prefix}-s3eb-policy"
role = aws_iam_role.s3_trigger.id
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Action = ["codepipeline:StartPipelineExecution"],
Resource = [
for br in local.repo_branches :
module.auto_build[br].cdp_resource_arn
]
}
]
})
}
A few things to call out (for clarity):
- It supports triggering multiple pipelines via local.repo_branches (see the sketch below)
- The pipeline resource ARNs come directly from module.auto_build, which actually builds the pipelines, so no wildcards are used
- Only the bare minimum IAM permission is allowed (codepipeline:StartPipelineExecution)
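For context, local.repo_branches is simply the list of branches that each get their own pipeline, so the for expression expands to one pipeline ARN per branch. A sketch, with purely illustrative branch names:
# ----------------------------------------------------------
# (Sketch) Branches that each have their own pipeline
# ----------------------------------------------------------
locals {
  repo_branches = ["main", "develop"]
}

# The policy's Resource list then resolves to one ARN per branch, e.g.
#   arn:aws:codepipeline:<region>:<ACB-account-id>:<pipeline-for-main>
#   arn:aws:codepipeline:<region>:<ACB-account-id>:<pipeline-for-develop>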
4️⃣ Allow the Trigger Account (ACA) to PutEvents
# ----------------------------------------------------------
# Allow ACA account to PutEvents on this bus
# ----------------------------------------------------------
resource "aws_cloudwatch_event_permission" "s3_trigger" {
event_bus_name = aws_cloudwatch_event_bus.s3_trigger.name
principal = var.trigger_acc_id
action = "events:PutEvents"
statement_id = "AllowIncToPutEvents"
}
Here, var.trigger_acc_id is the ACA account ID that is allowed to send events into this bus.
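If you package the ACB side as a reusable module, the inputs used throughout this section might be declared roughly like this (a sketch; only the names are taken from how they are used above):
# ----------------------------------------------------------
# (Sketch) Module inputs used on the ACB side
# ----------------------------------------------------------
variable "name_prefix" {
  type        = string
  description = "Prefix applied to all trigger resources"
}

variable "trigger_acc_id" {
  type        = string
  description = "ACA account ID allowed to PutEvents on this bus"
}

variable "trigger_bucket" {
  type        = string
  description = "Name of the ACA S3 bucket that emits Object Created events"
}

variable "trigger_prefix" {
  type        = string
  description = "Object key prefix to match in the event pattern"
}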
5️⃣ EventBridge Rule + Target
# ----------------------------------------------------------
# EB rule: trigger CodePipeline on S3 Object Created
# ----------------------------------------------------------
resource "aws_cloudwatch_event_rule" "s3_trigger" {
name = "${var.name_prefix}-s3-trigger"
event_bus_name = aws_cloudwatch_event_bus.s3_trigger.name
description = "Trigger CodePipeline when object is created in ${var.trigger_bucket}"
event_pattern = jsonencode({
source = ["aws.s3"],
detail-type = ["Object Created"],
detail = {
bucket = { name = [var.trigger_bucket] },
object = {
key = [{ prefix = var.trigger_prefix }]
}
}
})
}
# ----------------------------------------------------------
# EB target: link S3-trigger to all pipelines
# ----------------------------------------------------------
resource "aws_cloudwatch_event_target" "s3_trigger" {
for_each = toset(local.repo_branches)
rule = aws_cloudwatch_event_rule.s3_trigger.name
event_bus_name = aws_cloudwatch_event_bus.s3_trigger.name
target_id = "trigger-${each.key}"
arn = module.auto_build[each.key].cdp_resource_arn
role_arn = aws_iam_role.s3_trigger.arn
}
output "auto_build" {
value = module.auto_build
}
output "event_bus" {
value = {
name = aws_cloudwatch_event_bus.s3_trigger.name
arn = aws_cloudwatch_event_bus.s3_trigger.arn
role_arn = aws_iam_role.s3_trigger.arn
}
}
The rule matches the forwarded S3 events on the custom bus, and the target uses the s3_trigger role to start the appropriate pipeline for each branch.
🎉 The Result
And that’s it — the full cross-account flow stitched together end-to-end.
Here’s what the final architecture looks like, visually:
[Image: final cross-account S3 → EventBridge → CodePipeline architecture]
🏁 Conclusion
Cross-account automation is one of the trickiest parts of AWS CI/CD — especially in a multi-account setup where teams are isolated by design. But with this pattern in place, the entire S3 → EventBridge → CodePipeline workflow now flows cleanly across accounts:
- ACA captures the S3 object update
- ACA forwards the event to ACB
- ACB’s custom event bus receives it
- ACB’s EventBridge rule assumes a dedicated “start-pipeline” role
- That role starts exactly the pipeline(s) it should — nothing more
And just like that, the whole process runs automatically, without Slack/Teams pings, without waiting for the “pipeline person,” and — most importantly — without humans in the loop at all 😛
Event-driven. Cross-account. Zero friction.
🪜 Next: A Real-Life Example
In the next episode, we’ll revisit the real reason this whole setup was needed in the first place — and walk through how we used this exact cross-account trigger to solve a very real, very everyday operational problem.