Ismail Kovvuru
AWS S3 Cross-Account Uploads Failing with 403 AccessDenied

Learn how a simple missing permission in an AWS S3 Access Point policy caused 403 AccessDenied errors in cross-account uploads, even when IAM roles and bucket policies were correct. Step-by-step fix and prevention guide inside.

A user saw 500 Internal Server Error on uploads. Tracing showed an S3 AccessDenied (403) coming from an S3 Access Point owned by a different AWS account.

The bucket policy allowed cross-account writes, but the Access Point policy did not include the Lambda role ARN — S3 blocked the request.

Fix: add the source Lambda’s role ARN to the Access Point policy (or use an alternative cross-account pattern). After the Access Point policy was updated, uploads succeeded.

Below is a clear, step-by-step explanation of what happened, why it happened, and how to fix and prevent it, written so that both engineers and managers can follow.

1. What, where, when, and how the problem showed up

What happened (observable symptom):
The user reported that file uploads to S3 were failing. The user-facing API returned 500 Internal Server Error.

Where in the system:
Uploads flow: Frontend → API Gateway → Lambda (in Account A) → S3 Access Point → S3 bucket (owned by Account B). The Access Point resource was in another AWS account (Account B).

When it happened:
At runtime when Lambda attempted to PutObject through the Access Point to the destination account.

How it presented in traces and logs:

  1. Frontend initiated an upload request via API Gateway.
  2. API Gateway invoked a Lambda function.
  3. Lambda attempted an S3 PutObject operation.
  4. S3 returned 403 AccessDenied.
  5. The upstream API returned 500 Internal Server Error to the user, masking the actual permission failure from S3.

This translation of the 403 into a 500 response made troubleshooting initially misleading.

Why it mattered / got missed initially:

  • The team checked bucket health, the IAM role (which had broad permissions), and network connectivity; none showed a problem.
  • The subtlety: S3 Access Points have their own resource policies (separate from bucket policy). Even though the bucket policy allowed cross-account writes, the Access Point policy did not include the Lambda role ARN as a principal — S3 denied the operation at the Access Point layer.

2. Root cause

An S3 Access Point is a resource that can carry its own policy. When a request goes through an Access Point, S3 enforces both the bucket policy and the Access Point policy. In this case:

  1. The destination bucket’s policy allowed cross-account writes.
  2. The Access Point policy did not allow the Lambda’s role (principal) from the source account.
  3. S3 rejected the PutObject with AccessDenied (403) at the Access Point layer.
  4. The Lambda (or API Gateway) didn’t translate that permission error into a meaningful client response, so the client only saw 500 Internal Server Error.

Lesson: When cross-account operations use S3 Access Points, check the Access Point policy, not just the bucket policy or IAM role.
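As an illustrative sketch (not the real S3 policy evaluator; Deny statements, Conditions, and wildcards are deliberately ignored), the layered check behaves like a logical AND across the two policies:

```javascript
// Sketch: a request through an Access Point needs an Allow from BOTH the
// bucket policy AND the Access Point policy. A permissive bucket policy
// alone is not enough, which is exactly the failure described above.
function policyAllows(policy, principalArn, action) {
  return policy.Statement.some((stmt) =>
    stmt.Effect === 'Allow' &&
    [].concat(stmt.Principal.AWS).includes(principalArn) &&
    [].concat(stmt.Action).includes(action)
  );
}

function crossAccountPutAllowed(bucketPolicy, accessPointPolicy, principalArn) {
  return (
    policyAllows(bucketPolicy, principalArn, 's3:PutObject') &&
    policyAllows(accessPointPolicy, principalArn, 's3:PutObject')
  );
}
```

With an empty Access Point policy, `crossAccountPutAllowed` returns false even when the bucket policy allows the role, mirroring the 403 in this incident.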

3. The exact fix that worked

Fix performed: Update the S3 Access Point policy in the destination account (Account B) to include the source Lambda execution role ARN from Account A as an allowed principal for s3:PutObject (and other relevant S3 actions).

After the policy update: uploads succeeded and the API returned 200 OK.

4. Concrete commands & policy examples

Important: replace account IDs, ARNs, access point names and regions with your own.

4.1 Inspect current Access Point policy (destination account)

```bash
aws s3control get-access-point-policy \
  --account-id 222233334444 \
  --name app-uploads \
  --region us-east-1
```

If no policy is returned, that is itself relevant: with no Allow at the Access Point layer, a cross-account principal will be denied there.

4.2 Minimal Access Point policy that allows a Lambda role in another account to PutObject

access-point-policy.json

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLambdaPutFromAccountA",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/lambda-exec-role"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:us-east-1:222233334444:accesspoint/app-uploads/object/*"
    }
  ]
}
```

Apply it:

```bash
aws s3control put-access-point-policy \
  --account-id 222233334444 \
  --name app-uploads \
  --policy file://access-point-policy.json \
  --region us-east-1
```

4.3 Example bucket policy (destination account) that is compatible

A bucket policy can additionally allow writes, but the Access Point policy must explicitly allow the principal too.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/lambda-exec-role"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "111122223333"
        }
      }
    }
  ]
}
```
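As a side note, AWS also supports delegating access control from the bucket to its Access Points via the s3:DataAccessPointAccount condition key, so the bucket policy does not have to enumerate every cross-account principal. A sketch using the placeholder IDs from above (verify against current AWS documentation before use):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringEquals": { "s3:DataAccessPointAccount": "222233334444" }
      }
    }
  ]
}
```

With this pattern, the fine-grained per-principal decisions live entirely in the Access Point policies.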

4.4 Test a PutObject using the Access Point ARN (from source account)

```bash
aws s3api put-object \
  --bucket arn:aws:s3:us-east-1:222233334444:accesspoint/app-uploads \
  --key test.txt \
  --body ./test.txt \
  --region us-east-1
```
  • Expected success: 200 OK.
  • If AccessDenied, confirm both Access Point policy and bucket policy include appropriate principals and conditions.

5. Why the API showed 500 instead of 403 (and how to avoid masking)

What happened: Lambda got a 403 from S3, but either:

  • Lambda code didn't catch/translate the exception and defaulted to an internal error, or
  • API Gateway integration mapping converted the Lambda error into a generic 500.

How to avoid masking in future: catch S3 errors and return meaningful HTTP status codes to clients.

Example (Node.js Lambda) — catch and propagate S3 errors:

```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  try {
    await s3.putObject({
      Bucket: 'arn:aws:s3:us-east-1:222233334444:accesspoint/app-uploads',
      Key: 'file.txt',
      Body: Buffer.from('hello')
    }).promise();

    return { statusCode: 200, body: 'OK' };

  } catch (err) {
    if (err.code === 'AccessDenied' || err.statusCode === 403) {
      return { statusCode: 403, body: 'Upload blocked: Access denied' };
    }
    console.error('Unexpected S3 error', err);
    return { statusCode: 500, body: 'Internal Server Error' };
  }
};
```

Recommendation: surface the correct HTTP code for client-facing errors, and log the full S3 error payload for troubleshooting.
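The translation step can also be kept as a small pure function so it is easy to unit-test in isolation. A sketch (error shapes follow AWS SDK for JavaScript v2 conventions, err.code / err.statusCode; the extra error codes are illustrative):

```javascript
// Map an S3 SDK error to the HTTP status the client should see,
// instead of a blanket 500 for every failure.
function mapS3ErrorToHttpStatus(err) {
  if (err.code === 'AccessDenied' || err.statusCode === 403) return 403;
  if (err.code === 'NoSuchBucket' || err.statusCode === 404) return 404;
  if (err.code === 'SlowDown' || err.statusCode === 503) return 503;
  return 500; // anything unexpected stays an internal error
}
```

Keeping this mapping out of the handler makes it trivial to cover with tests, so a new S3 error code is a one-line change rather than another masked 500.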

6. When to use this solution, and when not to

When to use: include the source role ARN in the Access Point policy

  • Use when you need direct cross-account writes through an Access Point (multi-tenant access patterns, VPC-restricted access, or when Access Points are an architecture requirement).
  • Use when Access Points are used to manage fine-grained access to a large bucket by many consumers across accounts.

Pros:

  • Fine-grained control at Access Point level.
  • Scopes access specifically to that Access Point and object path (less blast radius than bucket policy alone).
  • Works well with Access Point features (VPC restrictions, policy scoping).

Cons / Risks:

  • Requires managing principals across accounts — can be error-prone if roles rotate or change names.
  • Policies can get complex; need automation for correctness.

When not to use — alternatives

If you don't require Access Points, or cross-account Access Point policy management is heavy for your org, consider alternatives:

Alternative A — Cross-account AssumeRole (recommended for programmatic cross-account access)

  • Create a role in the destination account (Account B) that allows s3:PutObject on the bucket. Grant Account A permission to assume that role.
  • The Lambda in Account A calls sts:AssumeRole to assume the role in Account B, then calls S3 with the temporary credentials. This avoids managing resource policies that reference Account A principals.

When to choose: if you prefer IAM role trust relationships and more centralized control; good for service-to-service cross-account interactions.

Sketch:

  1. In Account B, create S3WriteRole with bucket PutObject permissions. Trust policy allows arn:aws:iam::111122223333:role/lambda-exec-role (or the Account A principal) to assume it.
  2. In Lambda (Account A), call sts.assumeRole to get temporary credentials, then call S3.

Pros: centralized role in destination; simpler to audit.
Cons: adds STS usage and role assumption step.
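For reference, the trust policy on S3WriteRole in step 1 might look like the following (reusing the placeholder role ARN from earlier; adjust to your environment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/lambda-exec-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Trusting the specific Lambda execution role, rather than the whole source account, keeps the blast radius small.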

Alternative B — Pre-signed URLs

  • Generate a pre-signed PUT URL in Account B (or in Account A via a role in Account B). The frontend uses that URL to upload directly to S3. No cross-account policy is needed on the Access Point if the URL is signed correctly.

When to choose: when user uploads from browser/mobile and you want to avoid long-lived credentials or cross-account writes from Lambda.

Pros: simple client flow, least privilege on server.
Cons: signature management; cannot perform server-side transformations before upload.

Alternative C — Use bucket policies (no Access Point)

  • If Access Points are not required, direct bucket policies that allow cross-account principals may be simpler.

When to choose: single-tenant use, or small number of trusted accounts.

7. Diagnostics checklist / playbook (step-by-step)

If an S3 upload fails and you see a 500 or 403, run through this checklist:

  1. Trace the request path: which resource did the client actually hit (a direct bucket ARN, or an Access Point ARN)?
  • Check the Lambda code: what Bucket value does it pass to S3? If it uses an Access Point ARN, note the account ID in that ARN.
  2. Check CloudTrail for S3 data events (PutObject) to see the exact errorCode/errorMessage. CloudTrail shows which principal was used and whether the error was AccessDenied.
  3. Check the Access Point policy (destination account):

   aws s3control get-access-point-policy --account-id DEST --name APNAME

  4. Check the bucket policy (destination account) for PutObject allow/deny statements and aws:SourceAccount or aws:SourceArn conditions.
  5. Confirm the principal: ensure the policy includes the Lambda's execution role ARN or the appropriate account principal.
  6. Check the Lambda's IAM role (source account): verify it has s3:PutObject on the target resource (if using the assumed-role pattern, ensure sts:AssumeRole is allowed).
  7. Check VPC / endpoint restrictions: Access Points can be restricted to VPCs; confirm the call originates from an allowed network path.
  8. Reproduce with the AWS CLI using the same ARN to see the exact error:

   aws s3api put-object --bucket arn:aws:s3:us-east-1:DEST:accesspoint/APNAME --key t.txt --body t.txt

  9. Fix the policy, then re-test. If it still fails, enable S3 Server Access Logging or review the CloudTrail events.
  10. Avoid masking: update the Lambda error handling to surface S3 error codes to the client and log the full stack trace.

8. Prevention: automation, monitoring & best practices

Automated checks

  • Add CI/CD checks that verify Access Point policies and bucket policies include the required principals for cross-account flows, e.g. run aws s3control get-access-point-policy in a test job and compare against the expected principals.
  • Use infrastructure as code (Terraform/CloudFormation) for Access Points and policies so cross-account principals are reviewed and versioned.
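One way to sketch such a CI guard (function and variable names are illustrative): parse the policy JSON returned by aws s3control get-access-point-policy and fail the job if any required principal is missing.

```javascript
// Given the raw policy JSON string from the AWS CLI, return the list of
// required principal ARNs that are NOT granted an Allow in the policy.
// (Deny statements and Conditions are out of scope for this simple check.)
function missingPrincipals(policyJson, requiredPrincipals) {
  const policy = JSON.parse(policyJson);
  const present = new Set();
  for (const stmt of policy.Statement) {
    if (stmt.Effect !== 'Allow') continue;
    for (const arn of [].concat(stmt.Principal.AWS)) present.add(arn);
  }
  return requiredPrincipals.filter((arn) => !present.has(arn));
}
```

A CI job would call this with the expected principal list and exit non-zero when the returned array is non-empty, catching exactly the gap that caused this incident before deploy.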

Observability

  • Enable S3 Data Events in CloudTrail for sensitive buckets — this records PutObject and will show AccessDenied.
  • Enable S3 Server Access Logging on the bucket for additional forensic data.
  • Log Lambda exceptions with their error codes, and make them searchable in CloudWatch Logs.

Policy hygiene

  • Prefer least privilege: grant only the actions required (s3:PutObject, possibly s3:PutObjectAcl).
  • Where possible use Condition elements (aws:SourceAccount, aws:SourceArn) to reduce risk.

Testing

  • Add integration tests that perform a real PutObject via the Access Point as part of deployment pipelines (run under a sandbox account).

9. Example: Add assume-role alternative (step by step)

1. In destination account (Account B) create role CrossAccountS3Writer:

  • Trust policy grants sts:AssumeRole to the source account (Account A) or the Lambda role.
  • Permissions: s3:PutObject on arn:aws:s3:::my-bucket/*.

2. In the source account, the Lambda uses STS to assume CrossAccountS3Writer and calls S3 with those temporary credentials.

Why: avoids scattering destination resource policies referencing many source principals; centralizes control in the destination account.
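The permissions policy attached to CrossAccountS3Writer could be as small as the following (bucket name is the placeholder used earlier):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```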

10. Short checklist

  1. Identify if using Access Point ARN or direct bucket ARN.
  2. Check Access Point policy in destination account.
  3. Check bucket policy for PutObject allow/deny.
  4. Confirm principal ARN (Lambda role) is included in Access Point/bucket policy or that role assumption is configured.
  5. Reproduce with AWS CLI.
  6. Fix policy or implement assume-role & retest.
  7. Add automated tests & CI checks.
  8. Improve error handling so S3 403 becomes client 403, not 500.
  9. Document in team KB with example policies.

11. Recommendations (practical & actionable)

  1. Short term: Fix the Access Point policy to include the Lambda role ARN and re-test. Update Lambda to surface 403s clearly. Add a short KB entry referencing this incident.

  2. Medium term: Decide a cross-account access pattern for your org (Access Point policy vs assume-role vs presigned URLs). Standardize it and codify with IaC.

  3. Long term: Automate policy verification in CI, enable CloudTrail S3 data events for critical buckets, and add integration test coverage that performs a test upload through the exact path used in production.

12. Conclusion

This was a classic example of policy layering catching teams off guard: even with a permissive bucket policy and a qualified IAM role in the caller account, the Access Point, being a separate resource, enforces its own policy. The symptom (500) masked the true permission error (403) until the team traced all hops and checked the Access Point policy.
