<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: CK</title>
    <description>The latest articles on DEV Community by CK (@khaldoun488).</description>
    <link>https://dev.to/khaldoun488</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1005252%2F99b9a423-d66c-44bd-9c74-02881f2f491c.png</url>
      <title>DEV Community: CK</title>
      <link>https://dev.to/khaldoun488</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/khaldoun488"/>
    <language>en</language>
    <item>
      <title>The Missing Link: Triggering Serverless Events from Legacy Databases with AWS DMS</title>
      <dc:creator>CK</dc:creator>
      <pubDate>Wed, 07 Jan 2026 19:09:05 +0000</pubDate>
      <link>https://dev.to/khaldoun488/the-missing-link-triggering-serverless-events-from-legacy-databases-with-aws-dms-10l4</link>
      <guid>https://dev.to/khaldoun488/the-missing-link-triggering-serverless-events-from-legacy-databases-with-aws-dms-10l4</guid>
      <description>&lt;p&gt;We live in a world where we want everything to be &lt;strong&gt;Event-Driven&lt;/strong&gt;. We want a new user registration in our SQL database to immediately trigger a welcome email via SES, update a CRM via API, and start a Step Function workflow.&lt;/p&gt;

&lt;p&gt;If you are building greenfield on DynamoDB, this is easy (DynamoDB Streams). But what if your data lives in a legacy MySQL monolith, an on-premise Oracle DB, or a standard PostgreSQL instance?&lt;/p&gt;

&lt;p&gt;You need &lt;strong&gt;Change Data Capture (CDC)&lt;/strong&gt;. You need to stream these changes to the cloud.&lt;/p&gt;

&lt;p&gt;Naturally, you look at &lt;strong&gt;AWS DMS (Database Migration Service)&lt;/strong&gt;. It’s perfect for moving data. But then you hit the wall:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The Problem:&lt;/strong&gt; AWS DMS cannot target an AWS Lambda function directly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You cannot simply configure a task to say "When a row is inserted in Table X, invoke Function Y".&lt;/p&gt;

&lt;p&gt;So, how do we bridge the gap between the "Old World" (SQL) and the "New World" (Serverless)? We need a glue service. While many suggest Kinesis, often the most robust and cost-effective answer is &lt;strong&gt;Amazon S3&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here is the architecture pattern I use to modernize legacy backends without rewriting them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture: The "S3 Drop" Pattern
&lt;/h2&gt;

&lt;p&gt;The flow works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Source:&lt;/strong&gt; DMS connects to your Legacy Database and captures changes (INSERT/UPDATE/DELETE) via the transaction logs.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Target:&lt;/strong&gt; DMS writes these changes as JSON files into an &lt;strong&gt;S3 Bucket&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Trigger:&lt;/strong&gt; S3 detects the new file and fires an &lt;strong&gt;Event Notification&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Compute:&lt;/strong&gt; Your &lt;strong&gt;Lambda Function&lt;/strong&gt; receives the event, reads the file, and processes the business logic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6me6anxbssgc5g8hx7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6me6anxbssgc5g8hx7e.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;
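&lt;p&gt;Step 3 is a one-time piece of wiring: the bucket needs an event notification pointing at your function. A minimal boto3 sketch of that configuration (the bucket name, prefix, and Lambda ARN below are illustrative placeholders):&lt;/p&gt;

```python
def build_notification_config(lambda_arn, prefix="cdc/", suffix=".json"):
    """Build the S3 NotificationConfiguration that invokes a Lambda
    for every new DMS output file under the given prefix."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": lambda_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": prefix},
                            {"Name": "suffix", "Value": suffix},
                        ]
                    }
                },
            }
        ]
    }

# Applied once (bucket name is a placeholder):
# import boto3
# boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="my-dms-landing-bucket",
#     NotificationConfiguration=build_notification_config(
#         "arn:aws:lambda:eu-west-1:111122223333:function:process-cdc"))
```

&lt;p&gt;Remember that the Lambda also needs a resource-based permission allowing S3 to invoke it, or the notification setup will fail.&lt;/p&gt;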

&lt;h2&gt;
  
  
  Why S3 instead of Kinesis or Airbyte?
&lt;/h2&gt;

&lt;p&gt;You might face two common alternatives when designing this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Why not Kinesis Data Streams?&lt;/strong&gt;&lt;br&gt;
Kinesis is faster (sub-second latency). However, S3 is often superior for this specific use case because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; S3 is incredibly cheap compared to a provisioned Kinesis stream (especially if the legacy DB is quiet).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability:&lt;/strong&gt; You can literally &lt;em&gt;see&lt;/em&gt; the changes as files in your bucket. It makes debugging legacy data migration 10x easier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batching:&lt;/strong&gt; DMS writes to S3 in batches. This naturally prevents your Lambda from being overwhelmed if the database takes a massive write hit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Why not Airbyte or Fivetran?&lt;/strong&gt;&lt;br&gt;
Tools like Airbyte are fantastic for &lt;strong&gt;ELT&lt;/strong&gt; (Extract, Load, Transform) pipelines, typically moving data to a warehouse like Snowflake every 15 or 60 minutes.&lt;br&gt;
However, our goal here is &lt;strong&gt;event-driven capability&lt;/strong&gt;: we want to trigger a Lambda function as close to real-time as possible. AWS DMS offers continuous replication (CDC), giving us a granular stream of change events that batch-based ELT tools miss. Furthermore, staying 100% AWS-native simplifies IAM governance in strict enterprise environments.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Implementation Guide
&lt;/h2&gt;

&lt;p&gt;Here are the specific settings you need to make this work smoothly.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. The DMS Endpoint Settings
&lt;/h3&gt;

&lt;p&gt;When creating your Target Endpoint in DMS (pointing to S3), don't just use the defaults. You want the data to be readable by your Lambda.&lt;/p&gt;

&lt;p&gt;Use these &lt;strong&gt;Extra Connection Attributes&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dataFormat=json;
datePartitionEnabled=true;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;dataFormat=json&lt;/code&gt;: By default, DMS writes CSV to S3. JSON is much easier for your Lambda to parse using standard libraries.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;datePartitionEnabled=true&lt;/code&gt;: This organizes your files in S3 by date (&lt;code&gt;/2023/11/02/...&lt;/code&gt;), preventing a single folder from accumulating millions of files.&lt;/p&gt;
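&lt;p&gt;With date partitioning enabled, keys land under a &lt;code&gt;/YYYY/MM/DD/&lt;/code&gt; path. If your Lambda needs that partition date back (for example, to tag downstream events), a small helper, assuming that layout (the schema/table/file names here are illustrative; exact DMS file naming varies):&lt;/p&gt;

```python
import re
from datetime import date

def partition_date(key):
    """Extract the date partition from a DMS S3 key such as
    'mydb/users/2023/11/02/20231102-100000123.json'.
    Assumes the /YYYY/MM/DD/ layout produced by datePartitionEnabled=true."""
    m = re.search(r"/(\d{4})/(\d{2})/(\d{2})/", key)
    if m is None:
        raise ValueError(f"no date partition in key: {key}")
    return date(*map(int, m.groups()))
```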

&lt;h3&gt;
  
  
  2. Understanding the Event Structure
&lt;/h3&gt;

&lt;p&gt;When DMS writes a file, it looks like this inside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;101&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"username"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jdoe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"active"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"operation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"insert"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2023-11-02T10:00:00Z"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;102&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"username"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"asmith"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"pending"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"operation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"update"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2023-11-02T10:05:00Z"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You get the Operation (was it an insert or an update?) and the Data in one clean package.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The Lambda Logic
&lt;/h3&gt;

&lt;p&gt;AWS DMS does not write a valid JSON array (e.g., &lt;code&gt;[{...}, {...}]&lt;/code&gt;). It writes line-delimited JSON (NDJSON): one complete JSON object per line.&lt;/p&gt;

&lt;p&gt;If you try to &lt;code&gt;json.loads()&lt;/code&gt; the entire file content at once, your code will crash. You must iterate line by line.&lt;/p&gt;
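&lt;p&gt;A quick local check makes the difference obvious:&lt;/p&gt;

```python
import json

# Two concatenated objects, as DMS writes them (NDJSON)
ndjson = '{"id": 1}\n{"id": 2}\n'

# Parsing the whole file fails: there is extra data after the first object
try:
    json.loads(ndjson)
except json.JSONDecodeError:
    print("json.loads on the full file raises JSONDecodeError")

# Parsing line by line works
rows = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
print(rows)  # [{'id': 1}, {'id': 2}]
```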

&lt;p&gt;Here is the Python boilerplate to handle this correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json

s3 = boto3.client('s3')

def handler(event, context):
    # 1. Get the S3 Key from the event trigger
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        print(f"Processing file: {key}")

        # 2. Fetch the actual file generated by DMS
        obj = s3.get_object(Bucket=bucket, Key=key)
        content = obj['Body'].read().decode('utf-8')

        # 3. Parse DMS JSON (NDJSON / Line-Delimited)
        # CRITICAL: Do not use json.loads(content) directly!
        for line in content.splitlines():
            if not line.strip(): continue # Skip empty lines

            row = json.loads(line)

            # 4. Filter: Only process what you need
            operation = row.get('metadata', {}).get('operation')

            if operation == 'insert':
                user_data = row.get('data')
                print(f"New User Detected: {user_data['username']}")
                # trigger_welcome_email(user_data)

            elif operation == 'update':
                print(f"User Updated: {row['data']['id']}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You don't need to refactor your entire legacy database to get the benefits of Serverless.&lt;/p&gt;

&lt;p&gt;By using AWS DMS to unlock the data and S3 as a reliable buffer, you can trigger modern Lambda workflows from 20-year-old databases with minimal friction. It is a pattern that prioritizes stability and observability over raw speed—a trade-off that is usually worth it in enterprise migrations.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>architecture</category>
      <category>dms</category>
    </item>
    <item>
      <title>Mastering AWS Cross-Account Secrets with Terraform &amp; KMS</title>
      <dc:creator>CK</dc:creator>
      <pubDate>Wed, 03 Dec 2025 14:05:31 +0000</pubDate>
      <link>https://dev.to/khaldoun488/mastering-aws-cross-account-secrets-with-terraform-kms-c70</link>
      <guid>https://dev.to/khaldoun488/mastering-aws-cross-account-secrets-with-terraform-kms-c70</guid>
      <description>&lt;h2&gt;
  
  
  The Multi-Account Reality Check
&lt;/h2&gt;

&lt;p&gt;In modern cloud architecture, multi-account strategies are the norm. We separate development from production, and often centralize shared services into their own hub accounts.&lt;/p&gt;

&lt;p&gt;A very common scenario is having a central "Security" or "Shared Services" AWS account holding sensitive database credentials in AWS Secrets Manager, which need to be accessed by a Lambda function running in a completely different "Workload" account.&lt;/p&gt;

&lt;p&gt;It sounds simple on paper:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the Secret in Account A.&lt;/li&gt;
&lt;li&gt;Attach a resource policy to the secret allowing Account B to read it.&lt;/li&gt;
&lt;li&gt;Give the Lambda in Account B IAM permission to read the secret.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You deploy your Terraform, invoke your Lambda, and... fail. You get an &lt;code&gt;AccessDeniedException&lt;/code&gt; or a vague KMS error.&lt;/p&gt;

&lt;p&gt;I recently ran into this exact wall. Here is why it happens, and the Terraform pattern to fix it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Gotcha": It’s Rarely IAM, It’s Usually KMS
&lt;/h2&gt;

&lt;p&gt;When debugging cross-account access failures with Secrets Manager (or S3, or SQS), 90% of the time developers focus on IAM policies. But when secrets are involved, you are fighting a two-front war: authentication (IAM) and cryptography (KMS).&lt;/p&gt;

&lt;p&gt;By default, when you create a secret in Secrets Manager without specifying encryption settings, AWS encrypts it using the AWS-managed key for the service (&lt;code&gt;alias/aws/secretsmanager&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Here is the trap: You cannot modify the Key Policy of an AWS-managed key. It is designed to only trust principals within the same account.&lt;/p&gt;

&lt;p&gt;No matter how wide open you make your IAM policies, the external account will never be allowed to decrypt the payload. The door is unlocked, but the box is welded shut.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Customer Managed Keys (CMK)
&lt;/h2&gt;

&lt;p&gt;To enable cross-account access, you must take control of encryption. You need to create a Customer Managed Key (CMK) in KMS and explicitly tell it to trust the external account.&lt;/p&gt;

&lt;p&gt;Here is the complete Terraform implementation to securely share secrets across accounts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;We need the AWS Account ID of the "consumer" account (the one running the Lambda function that needs the secret). Let's define it as a variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"external_consumer_account_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The AWS Account ID that needs read access to the secrets."&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="c1"&gt;# Example: "123456789012"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;# Helper to get current account ID for policy definitions&lt;/span&gt;
&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_caller_identity"&lt;/span&gt; &lt;span class="s2"&gt;"current"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1: The KMS Key (The Gatekeeper)
&lt;/h3&gt;

&lt;p&gt;This is the most critical step. We create a symmetric KMS key. The Key Policy must allow two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The current account's root user to manage the key (otherwise you might lock yourself out).&lt;/li&gt;
&lt;li&gt;The external account's root user to perform cryptographic operations (&lt;code&gt;kms:Decrypt&lt;/code&gt;, &lt;code&gt;kms:DescribeKey&lt;/code&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_kms_key"&lt;/span&gt; &lt;span class="s2"&gt;"cross_account_secrets_key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"KMS Key for cross-account RDS credentials"&lt;/span&gt;
  &lt;span class="nx"&gt;deletion_window_in_days&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;
  &lt;span class="nx"&gt;enable_key_rotation&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;Version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;
    &lt;span class="nx"&gt;Statement&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;Sid&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Enable IAM User Permissions for current account"&lt;/span&gt;
        &lt;span class="nx"&gt;Effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
        &lt;span class="nx"&gt;Principal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;AWS&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nx"&gt;Action&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kms:*"&lt;/span&gt;
        &lt;span class="nx"&gt;Resource&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;Sid&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow External Account to Decrypt"&lt;/span&gt;
        &lt;span class="nx"&gt;Effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
        &lt;span class="nx"&gt;Principal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;AWS&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::${var.external_consumer_account_id}:root"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="c1"&gt;# The external account needs to decrypt the secret payload&lt;/span&gt;
        &lt;span class="nx"&gt;Action&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="s2"&gt;"kms:Decrypt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="s2"&gt;"kms:DescribeKey"&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="nx"&gt;Resource&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_kms_alias"&lt;/span&gt; &lt;span class="s2"&gt;"secrets_key_alias"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"alias/cross-account-secrets-key"&lt;/span&gt;
  &lt;span class="nx"&gt;target_key_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_kms_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cross_account_secrets_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: The Secret (Using the Key)
&lt;/h3&gt;

&lt;p&gt;Now we create the secret. The crucial part here is the &lt;code&gt;kms_key_id&lt;/code&gt; argument. If you omit it, Terraform will happily use the default key, and cross-account access will break.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_secretsmanager_secret"&lt;/span&gt; &lt;span class="s2"&gt;"database_credentials"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"prod/rds/read-replica-creds"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Database credentials accessible by workload accounts"&lt;/span&gt;

  &lt;span class="c1"&gt;# CRITICAL: Force the use of our custom KMS key&lt;/span&gt;
  &lt;span class="nx"&gt;kms_key_id&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_kms_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cross_account_secrets_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Note: I'm only showing the secret container creation here; the actual value goes in a separate &lt;code&gt;aws_secretsmanager_secret_version&lt;/code&gt; resource.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: The Secret Policy (Granting Access)
&lt;/h3&gt;

&lt;p&gt;Finally, we need to tell Secrets Manager itself that the external account is allowed to ask for the value. We attach a resource policy to the secret.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_secretsmanager_secret_policy"&lt;/span&gt; &lt;span class="s2"&gt;"database_credentials_policy"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;secret_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_secretsmanager_secret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;database_credentials&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;

  &lt;span class="nx"&gt;policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;Version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;
    &lt;span class="nx"&gt;Statement&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;Sid&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AllowExternalRead"&lt;/span&gt;
        &lt;span class="nx"&gt;Effect&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
        &lt;span class="nx"&gt;Principal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
          &lt;span class="nx"&gt;AWS&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::${var.external_consumer_account_id}:root"&lt;/span&gt; 
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nx"&gt;Action&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"secretsmanager:GetSecretValue"&lt;/span&gt;
        &lt;span class="nx"&gt;Resource&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
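&lt;p&gt;One consumer-side gotcha worth knowing: cross-account callers must pass the secret's full ARN to &lt;code&gt;GetSecretValue&lt;/code&gt;, because a bare secret name only resolves within the caller's own account. A tiny helper (the region, account ID, and secret name below are illustrative):&lt;/p&gt;

```python
def secret_arn(region, account_id, name):
    """Build the (partial) ARN of a Secrets Manager secret in another
    account. Secrets Manager appends a random suffix to the real ARN;
    passing the exact ARN exported from the source account's Terraform
    output is the safest option."""
    return f"arn:aws:secretsmanager:{region}:{account_id}:secret:{name}"

# The consumer Lambda would then call (not run here):
# import boto3
# boto3.client("secretsmanager").get_secret_value(
#     SecretId=secret_arn("eu-west-1", "111122223333",
#                         "prod/rds/read-replica-creds"))
```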



&lt;h2&gt;
  
  
  Summary: The Full Picture
&lt;/h2&gt;

&lt;p&gt;For a successful cross-account secret retrieval, three gates must open simultaneously. If any one of them fails, the request fails:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The KMS Key Policy (in the source account) must trust the destination account to &lt;code&gt;kms:Decrypt&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The Secret Resource Policy (in the source account) must trust the destination account to &lt;code&gt;secretsmanager:GetSecretValue&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The IAM Role attached to the Lambda/EC2 (in the destination account) must have permission to perform both actions on the respective ARNs.&lt;/li&gt;
&lt;/ol&gt;
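&lt;p&gt;The Terraform above covers the first two gates in the source account; the third gate lives in the consumer account. A minimal sketch of what that role policy could look like (the role reference, region, account ID, and key ID are placeholders):&lt;/p&gt;

```hcl
# In the consumer ("Workload") account: allow the Lambda's execution
# role to read the shared secret and decrypt it with the producer's CMK.
resource "aws_iam_role_policy" "read_shared_secret" {
  name = "read-shared-secret"
  role = aws_iam_role.lambda_exec.id # existing Lambda execution role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "secretsmanager:GetSecretValue"
        Resource = "arn:aws:secretsmanager:eu-west-1:SOURCE_ACCOUNT_ID:secret:prod/rds/read-replica-creds-*"
      },
      {
        Effect   = "Allow"
        Action   = ["kms:Decrypt", "kms:DescribeKey"]
        Resource = "arn:aws:kms:eu-west-1:SOURCE_ACCOUNT_ID:key/KEY_ID"
      }
    ]
  })
}
```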

&lt;p&gt;Using default AWS keys is the most common trap in multi-account architectures. By shifting to Customer Managed Keys and handling policy definitions explicitly in Terraform, you ensure secure, repeatable cross-account access.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>devops</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
