If you've ever worked in a multi-account AWS environment, you've probably hit the dreaded AccessDenied error when trying to access resources across accounts. Whether it's sharing data between dev and prod accounts, aggregating logs to a central security account, or enabling cross-team collaboration, cross-account access is essential—but it's also where many engineers struggle.
In this post, I'll show you exactly how to implement secure cross-account resource access using AWS Identity and Access Management (IAM) together with the AWS Security Token Service (STS), with a real-world example: a Lambda function that tracks the International Space Station's location and stores the data in an S3 bucket in a different AWS account.
Prerequisites
- Terraform installed
- Access to two AWS accounts
🏗️ The Architecture
Here's what we're building:
Account A (Source):
- Lambda function that fetches ISS position data
- IAM execution role with permission to assume a role in Account B
- EventBridge trigger (runs every 5 minutes)
Account B (Target):
- S3 bucket for storing ISS position logs
- IAM role that trusts Account A's Lambda role
- Policies granting S3 access to the trusted role
The Flow:
- EventBridge triggers the Lambda in Account A
- The Lambda calls `sts:AssumeRole` to get temporary credentials for Account B
- The Lambda uses the temporary credentials to read/write the S3 bucket in Account B
- The Lambda fetches the ISS position from a public API and appends it to a file pulled from S3
- The Lambda pushes the updated file back to S3
🛠️ Let's Implement the Trust Chain
Step 1: Configure the Terraform Providers
Configure your CLI with credentials for Account A, then run `aws sts get-caller-identity` to confirm your identity. For example:
```json
{
  "UserId": "<redacted>",
  "Account": "<redacted>",
  "Arn": "arn:aws:iam::<redacted>:user/nobleman"
}
```
Next, create a role (called `terraform`, for example) in Account B. In its trust relationship, allow the previous identity to assume it. For example:
```json
{
  ...
  "Statement": [
    {
      ...
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<redacted>:user/nobleman"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
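Before applying anything, it can help to sanity-check a trust policy programmatically. Here's a small Python sketch (the `is_trusted_principal` helper and the account ID are hypothetical, for illustration only) that checks whether a given principal is allowed to call `sts:AssumeRole`:

```python
import json

def is_trusted_principal(trust_policy: dict, principal_arn: str) -> bool:
    """Return True if principal_arn may call sts:AssumeRole per this trust policy."""
    for stmt in trust_policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if "sts:AssumeRole" not in actions:
            continue
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        if principal_arn in principals:
            return True
    return False

trust_policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam::111122223333:user/nobleman"]},
      "Action": "sts:AssumeRole"
    }
  ]
}
""")

print(is_trusted_principal(trust_policy, "arn:aws:iam::111122223333:user/nobleman"))  # True
```

Note that IAM allows both string and list forms for `Action` and `Principal.AWS`, which is why the helper normalizes both.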
Next, configure Terraform providers for both accounts:
```hcl
terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0"
    }
  }
}

# Default provider: Account A
provider "aws" {
  region = "us-east-1"
  # role_arn is set in ~/.aws/config
}

# Aliased provider: Account B, reached by assuming the terraform role
provider "aws" {
  region = "us-east-1"
  alias  = "account_b"

  assume_role {
    role_arn = "arn:aws:iam::${var.account_b_id}:role/terraform"
  }
}
```
Step 2: Lambda's Execution Role (Account A)
Next, create a role that Lambda can assume:
```hcl
# Account A: Role that Lambda assumes
resource "aws_iam_role" "cross_account_role" {
  name               = "connect-to-bridge"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}

# Trust policy: Allow the Lambda service to assume this role
data "aws_iam_policy_document" "lambda_assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}
```
What this means: The Lambda service can wear this role like a badge.
Step 3: Grant Role Permission to Assume Cross-Account Role
Now we give this role permission to assume another role in Account B:
```hcl
# Account A: Policy that allows assuming the role in Account B
data "aws_iam_policy_document" "assume_cross_account_role" {
  statement {
    effect    = "Allow"
    actions   = ["sts:AssumeRole"]
    resources = [aws_iam_role.access_s3_bucket.arn] # Role in Account B
  }
}

# Managed policy wrapping the document, so it can be attached to the role
resource "aws_iam_policy" "assume_cross_account_role" {
  name   = "assume-cross-account-role"
  policy = data.aws_iam_policy_document.assume_cross_account_role.json
}

resource "aws_iam_role_policy_attachment" "lambda_assume_cross_account" {
  role       = aws_iam_role.cross_account_role.name
  policy_arn = aws_iam_policy.assume_cross_account_role.arn
}
```
What this means: "Hey Lambda role, you're allowed to assume the access_s3_bucket role in Account B."
Step 4: The Target Role in Account B
Here's the role in Account B with the crucial trust policy:
```hcl
# Account B: Role that grants S3 access
resource "aws_iam_role" "access_s3_bucket" {
  provider           = aws.account_b # Important: this is created in Account B
  name               = "access-s3-bucket"
  assume_role_policy = data.aws_iam_policy_document.allow_role_assumption_a.json
}

# Trust policy: Allow Account A's role to assume this role
data "aws_iam_policy_document" "allow_role_assumption_a" {
  statement {
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = [aws_iam_role.cross_account_role.arn] # Account A role ARN
    }

    actions = ["sts:AssumeRole"]
  }
}
```
This is the key: Account B explicitly trusts Account A's role. Without this trust relationship, the assume role call will fail.
Step 5: Grant S3 Permissions to the Account B Role
```hcl
data "aws_iam_policy_document" "s3_bucket_access" {
  statement {
    effect = "Allow"
    actions = [
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject",
      "s3:ListBucket"
    ]
    resources = [
      module.reporting-bucket.s3_bucket_arn,
      "${module.reporting-bucket.s3_bucket_arn}/*"
    ]
  }
}

# Managed policy in Account B wrapping the document above
resource "aws_iam_policy" "s3_bucket_access" {
  provider = aws.account_b
  name     = "s3-bucket-access"
  policy   = data.aws_iam_policy_document.s3_bucket_access.json
}

resource "aws_iam_role_policy_attachment" "s3_bucket_access_attachment" {
  provider   = aws.account_b
  role       = aws_iam_role.access_s3_bucket.name
  policy_arn = aws_iam_policy.s3_bucket_access.arn
}
```
The complete trust chain:
→ Lambda Service assumes Lambda Execution Role (Account A)
→ Lambda Execution Role (Account A) assumes S3 Access Role (Account B)
→ S3 Access Role (Account B) has permission to S3 Bucket (Account B)
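The chain above can be checked offline with a few lines of Python. This is a conceptual sketch, not an AWS API: each role is represented as the set of principals its trust policy allows, and the ARNs are placeholders.

```python
# Each role maps to the set of principals its trust policy allows to assume it.
trust_policies = {
    "arn:aws:iam::ACCOUNT_A:role/connect-to-bridge": {"lambda.amazonaws.com"},
    "arn:aws:iam::ACCOUNT_B:role/access-s3-bucket": {
        "arn:aws:iam::ACCOUNT_A:role/connect-to-bridge"
    },
}

def chain_is_valid(chain):
    """Check that each role in the chain trusts the principal before it."""
    for caller, target in zip(chain, chain[1:]):
        if caller not in trust_policies.get(target, set()):
            return False
    return True

chain = [
    "lambda.amazonaws.com",                           # Lambda service
    "arn:aws:iam::ACCOUNT_A:role/connect-to-bridge",  # execution role (Account A)
    "arn:aws:iam::ACCOUNT_B:role/access-s3-bucket",   # target role (Account B)
]
print(chain_is_valid(chain))  # True
```

If any link is missing from a trust policy, the whole chain breaks, which is exactly what happens with a misconfigured `sts:AssumeRole` call.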
💻 The Lambda Implementation
Now let's look at how the Lambda function uses STS to assume the cross-account role:
```python
import boto3
import os
import urllib3
import json
from datetime import datetime


def lambda_handler(event, context):
    # Environment variables from Terraform
    account2_role_arn = os.environ['ACCOUNT2_ROLE_ARN']
    bucket_name = os.environ['BUCKET_NAME']
    bucket_region = os.environ['BUCKET_REGION']
    object_key = 'iss_position.txt'

    # Step 1: Assume the role in Account B using STS
    sts_client = boto3.client('sts')
    assumed_role = sts_client.assume_role(
        RoleArn=account2_role_arn,
        RoleSessionName='LambdaCrossAccountS3Access'
    )

    # Step 2: Extract temporary credentials
    credentials = assumed_role['Credentials']

    # Step 3: Create an S3 client with the temporary credentials
    s3_client = boto3.client(
        's3',
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken'],
        region_name=bucket_region
    )

    # Step 4: Read the existing file from S3
    response = s3_client.get_object(Bucket=bucket_name, Key=object_key)
    content = response['Body'].read().decode('utf-8')

    # Step 5: Fetch the ISS position from the public API
    http = urllib3.PoolManager()
    response = http.request("GET", "http://api.open-notify.org/iss-now.json")
    data = json.loads(response.data.decode("utf-8"))
    timestamp = datetime.fromtimestamp(data['timestamp'])

    # Step 6: Append the new data
    new_content = content + (
        f"Time: {timestamp}\n"
        f"Latitude: {data['iss_position']['latitude']}\n"
        f"Longitude: {data['iss_position']['longitude']}\n"
    )

    # Step 7: Write back to S3 using the temporary credentials
    s3_client.put_object(
        Bucket=bucket_name,
        Key=object_key,
        Body=new_content
    )

    return {
        'statusCode': 200,
        'body': 'Cross-account S3 access successful'
    }
```
Key Points:
- `sts_client.assume_role()`: This is where the magic happens. Lambda uses its execution role to request temporary credentials for the Account B role.
- Temporary credentials: STS returns short-lived credentials (valid for 1 hour by default). These credentials carry the permissions of the assumed role.
- Session name: The `RoleSessionName` appears in CloudTrail logs, making it easier to audit who assumed the role and when.
- Explicit credential usage: We explicitly pass the temporary credentials to the S3 client. This differs from the default boto3 behavior, which uses the Lambda's execution role.
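Because the credentials expire, long-running or cached callers should check the `Expiration` field and re-assume the role before it passes. Here's a minimal sketch of that pattern; the `FakeSts` stub stands in for a real `boto3` STS client so the example runs without AWS access, and the helper name is my own:

```python
from datetime import datetime, timedelta, timezone

def fresh_credentials(sts_client, role_arn, cached=None, skew=timedelta(minutes=5)):
    """Return cached credentials if still valid, otherwise assume the role again."""
    now = datetime.now(timezone.utc)
    if cached and cached["Expiration"] - skew > now:
        return cached
    resp = sts_client.assume_role(
        RoleArn=role_arn,
        RoleSessionName="LambdaCrossAccountS3Access",
    )
    return resp["Credentials"]

class FakeSts:
    """Stand-in for boto3.client('sts'), mimicking assume_role's response shape."""
    def assume_role(self, RoleArn, RoleSessionName):
        return {"Credentials": {
            "AccessKeyId": "AKIA...",
            "SecretAccessKey": "...",
            "SessionToken": "...",
            "Expiration": datetime.now(timezone.utc) + timedelta(hours=1),
        }}

creds = fresh_credentials(FakeSts(), "arn:aws:iam::222222222222:role/access-s3-bucket")
print(creds["Expiration"] > datetime.now(timezone.utc))  # True
```

The `skew` margin avoids handing out credentials that are about to expire mid-request. In a single short Lambda invocation this isn't needed, but it matters for containers or daemons that reuse assumed-role sessions.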
📦 Deployment
Prerequisites:
- Two AWS accounts (A and B)
- AWS CLI configured with access to Account A
- Terraform >= 1.0
- A pre-existing `terraform` role in Account B that Account A can assume
Steps:
- Clone the repository:
```shell
git clone https://github.com/nobleman97/cross-account-iam.git
cd cross-account-iam
```
- Create a `dev.tfvars` file:
```hcl
account_b_id = "123456789012" # Replace with your Account B ID
```
- Deploy the infrastructure:
```shell
cd infra
terraform init
terraform plan -var-file=dev.tfvars
terraform apply -var-file=dev.tfvars
```
- Watch it work:
```shell
# Check Lambda logs
aws logs tail /aws/lambda/write-report-to-crossaccount-s3 --follow

# After 5-10 minutes, check the S3 file in Account B
aws s3 cp s3://iss-reporting-24534576df/iss_position.txt - \
  --profile account-b
```
If set up properly, the file should look something like this:
```text
ISS Location Logs
===================
Time: 2025-12-06 17:19:45
Latitude: -50.5587
Longitude: 122.1887

Time: 2025-12-06 17:29:07
Latitude: -32.9030
Longitude: 163.3656
...
```
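Once entries accumulate, pulling out the most recent position is straightforward. A quick parsing sketch for the format above (the log text is inlined for illustration, and `latest_position` is my own helper):

```python
sample = """ISS Location Logs
===================
Time: 2025-12-06 17:19:45
Latitude: -50.5587
Longitude: 122.1887

Time: 2025-12-06 17:29:07
Latitude: -32.9030
Longitude: 163.3656
"""

def latest_position(text: str):
    """Return (time, latitude, longitude) from the last entry in the log."""
    entries = {}
    for line in text.splitlines():
        # Skip the header and separator; keep only "Key: value" lines.
        if ":" in line and not line.startswith("="):
            key, _, value = line.partition(":")
            entries[key.strip()] = value.strip()
    return entries.get("Time"), entries.get("Latitude"), entries.get("Longitude")

print(latest_position(sample))
# ('2025-12-06 17:29:07', '-32.9030', '163.3656')
```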
Common Pitfalls and Solutions
Problem 1: "User is not authorized to perform: sts:AssumeRole"
Cause: The role in Account A doesn't have permission to assume the role in Account B.
Solution: Verify the policy attachment:
```shell
aws iam list-attached-role-policies --role-name connect-to-bridge
```
Problem 2: "AccessDenied" when accessing S3
Causes:
- The trust policy in Account B doesn't trust Account A's role
- The role in Account B doesn't have S3 permissions
- The S3 bucket has a restrictive bucket policy
Debug:
```shell
# Check the trust policy
aws iam get-role --role-name access-s3-bucket \
  --profile account-b --query 'Role.AssumeRolePolicyDocument'

# Check the attached policies
aws iam list-attached-role-policies --role-name access-s3-bucket \
  --profile account-b
```
Problem 3: Terraform Can't Create Resources in Account B
Cause: Terraform doesn't have permission to assume the role in Account B.
Solution: Ensure you have a `terraform` role in Account B with this trust policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "AWS": "<arn_of_cli_identity>"
    },
    "Action": "sts:AssumeRole"
  }]
}
```
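When debugging these trust policies, a frequent mistake is pasting an ARN from the wrong account. A tiny helper (hypothetical, relying on the standard ARN layout) to extract and compare account IDs:

```python
def arn_account_id(arn: str) -> str:
    """ARNs have the form arn:partition:service:region:account-id:resource."""
    return arn.split(":")[4]

# Placeholder ARNs for illustration
cli_identity = "arn:aws:iam::111122223333:user/nobleman"
target_role = "arn:aws:iam::222222222222:role/access-s3-bucket"

print(arn_account_id(cli_identity))  # 111122223333
print(arn_account_id(cli_identity) == arn_account_id(target_role))  # False
```

Comparing the account ID of your CLI identity against the one in the trust policy quickly rules out this class of error.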
🎓 Key Takeaways
- Two-way trust: Cross-account access requires both accounts to cooperate:
  - Account A must be allowed to assume roles in Account B
  - Account B must trust Account A's role
- STS is your friend: Temporary credentials are more secure than permanent access keys. They expire automatically and can be audited.
Conclusion
Cross-account IAM access doesn't have to be scary. By understanding the trust relationships, using STS for temporary credentials, and following least privilege principles, you can securely share resources across AWS accounts.
The pattern I've shown here—Lambda in Account A assuming a role in Account B to access S3—applies to many other scenarios:
- EC2 instances accessing DynamoDB in another account
- Step Functions orchestrating resources across accounts
- EventBridge forwarding events between accounts
- Centralized logging and monitoring
Have you implemented cross-account access in your AWS environment? What challenges did you face? Drop a comment below!
Resources
- GitHub repo: https://github.com/nobleman97/cross-account-iam

If you found this helpful, consider starring the GitHub repo and sharing it with your team!
