<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chanaka Supun</title>
    <description>The latest articles on DEV Community by Chanaka Supun (@chanaka_supun_4aa57dbcc25).</description>
    <link>https://dev.to/chanaka_supun_4aa57dbcc25</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chanaka_supun_4aa57dbcc25"/>
    <language>en</language>
    <item>
      <title>How I Upgraded RDS PostgreSQL Version From 13.20 to 17.6</title>
      <dc:creator>Chanaka Supun</dc:creator>
      <pubDate>Tue, 04 Nov 2025 04:52:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-i-upgraded-rds-postgresql-version-from-1320-to-176-cgb</link>
      <guid>https://dev.to/aws-builders/how-i-upgraded-rds-postgresql-version-from-1320-to-176-cgb</guid>
      <description>&lt;p&gt;i had to upgrade PostgreSQL version last month because it will be end of life in next month.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nzg4258h49unthb1dvn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nzg4258h49unthb1dvn.png" alt=" " width="780" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our organization uses postrgeSQL as the main database engine. We have staging and production environments both running version 13.20. According to announcements from AWS -&amp;gt; Amazon RDS PostgreSQL 13.x end of standard support is February 28, 2026. With community end of life being November 2025. So we had to prepare for the upgrade. Below describe the process i used and steps taken during the upgrade process.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Steps Followed:&lt;/strong&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Check Changelog&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;First i take a look at the release notes from 13.2 version to 17.6 version and note down the breaking/major changes that are there for my applications. Identify If any i need to pay attention to. Here are few critical changes that are there.&lt;br&gt;
&lt;a href="https://www.postgresql.org/docs/release/?source=post_page-----42fd5cb28ae5---------------------------------------" rel="noopener noreferrer"&gt;Release Notes&lt;/a&gt;&lt;br&gt;
Summary from RN&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Critical Breaking Changes (PostgreSQL 17)

### 1. Search Path Changes in Maintenance Operations ⚠️ CRITICAL

**Issue:** Functions used by expression indexes and materialized views must now specify search_path explicitly.

**Impact:** Maintenance operations (ANALYZE, CLUSTER, CREATE INDEX, CREATE MATERIALIZED VIEW, REFRESH MATERIALIZED VIEW, REINDEX, VACUUM) now use a safe search_path.

**Action Required:**
sql
-- Check for functions used in indexes/materialized views without explicit search_path
SELECT n.nspname, p.proname, pg_get_functiondef(p.oid)
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE p.oid IN (
  SELECT indexrelid::regclass::oid 
  FROM pg_index 
  WHERE indexprs IS NOT NULL
)
AND pg_get_functiondef(p.oid) NOT LIKE '%SET search_path%';

-- Fix: Add search_path to function definition
ALTER FUNCTION schema_name.function_name() SET search_path = schema_name, pg_catalog;

### 2. Removed: old_snapshot_threshold Parameter

**Issue:** Server variable `old_snapshot_threshold` has been removed.

**Action Required:**
sql
-- Check if parameter is set
SHOW old_snapshot_threshold;

-- If set, remove from postgresql.conf before upgrade
-- This feature may be re-added in future versions


### 3. Removed: db_user_namespace Feature

**Issue:** Per-database user simulation feature removed.

**Action Required:**
sql
-- Check if enabled
SHOW db_user_namespace;

-- If 'on', must be disabled before upgrade
-- Migrate to standard user management


### 4. Removed: adminpack Extension

**Issue:** adminpack contrib extension removed (was used by pgAdmin III).

**Action Required:**
sql
-- Check if installed
SELECT * FROM pg_extension WHERE extname = 'adminpack';

-- Drop before upgrade
DROP EXTENSION IF EXISTS adminpack;

### 5. System Catalog Column Renames

**Issue:** Several system catalog columns renamed in PostgreSQL 17.

**Action Required:**
sql
-- Check for queries/views using old column names:

-- pg_collation.colliculocale → colllocale
-- pg_database.daticulocale → datlocale
-- pg_attribute.attstattarget (now NULL for default instead of -1)
-- pg_stat_progress_vacuum columns renamed
-- pg_stat_slru columns renamed
-- pg_stat_statements: blk_read_time → shared_blk_read_time
-- pg_stat_statements: blk_write_time → shared_blk_write_time

-- Search for usage in views/functions
SELECT schemaname, viewname, definition
FROM pg_views
WHERE definition LIKE '%colliculocale%'
   OR definition LIKE '%daticulocale%'
   OR definition LIKE '%blk_read_time%'
   OR definition LIKE '%blk_write_time%';

### 6. Removed: pg_stat_bgwriter Columns

**Issue:** `buffers_backend` and `buffers_backend_fsync` removed from pg_stat_bgwriter.

**Action Required:**
sql
-- Check for queries using these columns
SELECT schemaname, viewname, definition
FROM pg_views
WHERE definition LIKE '%buffers_backend%'
   OR definition LIKE '%buffers_backend_fsync%';

-- Migrate to pg_stat_io view instead
SELECT * FROM pg_stat_io;

### 6. Postgres Extention version upgrade - check for breaking changes
pg_partman extention version needed to be upgraded from version "4.5.1" to version "5.2.4" . you can see the breaking changes here 
[changelog](https://github.com/pgpartman/pg_partman/blob/development/CHANGELOG.md)

## Breaking Changes (PostgreSQL 16)

### 1. PL/pgSQL Bound Cursor Variable Changes

**Issue:** String value of bound cursor variables no longer matches variable name during assignment.

**Action Required:**
sql
-- Review PL/pgSQL functions with bound cursors
SELECT n.nspname, p.proname, pg_get_functiondef(p.oid)
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE p.prolang = (SELECT oid FROM pg_language WHERE lanname = 'plpgsql')
  AND pg_get_functiondef(p.oid) LIKE '%CURSOR%FOR%';

-- If cursor variable name is used, assign explicitly before OPEN
-- Old behavior: cursor_var := 'cursor_name' (automatic)
-- New behavior: Must explicitly assign before OPEN

### 2. Removed: Postfix Operators

**Issue:** Support for postfix operators removed.

**Action Required:**
sql
-- Check for postfix operators
SELECT oprname, oprleft, oprright
FROM pg_operator
WHERE oprright = 0;

-- pg_dump and pg_upgrade will warn about these
-- Convert to prefix or infix operators before upgrade

## Breaking Changes (PostgreSQL 15)

### 1. PUBLIC Schema Permission Changes ⚠️ CRITICAL

**Issue:** PUBLIC creation permission removed from public schema by default.

**Impact:** Users can no longer create objects in public schema without explicit GRANT.

**Action Required:**
sql
-- Check current permissions
SELECT nspname, nspacl FROM pg_namespace WHERE nspname = 'public';

-- If needed, restore old behavior (not recommended for security)
GRANT CREATE ON SCHEMA public TO PUBLIC;

-- Better: Grant to specific roles
GRANT CREATE ON SCHEMA public TO app_role;

### 2. Removed: Exclusive Backup Mode

**Issue:** pg_start_backup()/pg_stop_backup() exclusive mode removed.

**Action Required:**sql
-- Check for usage of exclusive backup mode
-- Search application code for:
-- pg_start_backup(label, true)  -- Second parameter 'true' = exclusive mode

-- Migrate to non-exclusive mode:
-- pg_backup_start(label, false)  -- Note: function renamed
-- pg_backup_stop(false)

### 3. Removed: Python 2.x Support

**Issue:** plpython2u and plpythonu (Python 2) removed.

**Action Required:**
sql
-- Check for Python 2 functions
SELECT n.nspname, p.proname, l.lanname
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
JOIN pg_language l ON p.prolang = l.oid
WHERE l.lanname IN ('plpython2u', 'plpythonu');

-- Migrate to plpython3u
-- Rewrite functions for Python 3 compatibility

### 4. array_to_tsvector() Empty String Error

**Issue:** Now generates error for empty-string array elements.

**Action Required:**sql
-- Check for empty lexemes in tsvector columns
SELECT tablename, attname
FROM pg_stats
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
  AND atttypid = 'tsvector'::regtype;

-- Verify no empty lexemes exist
SELECT * FROM your_table WHERE to_tsvector('') = ANY(your_tsvector_column);

-- Clean up empty lexemes before upgrade

## Breaking Changes (PostgreSQL 14)

### 1. Array Function Signature Changes ⚠️ CRITICAL

**Issue:** Array functions changed from `anyarray` to `anycompatiblearray`.

**Affected Functions:**
- array_append()
- array_prepend()
- array_cat()
- array_position()
- array_positions()
- array_remove()
- array_replace()
- width_bucket()

**Action Required:**
sql
-- Find user-defined objects referencing these functions
SELECT n.nspname, p.proname, pg_get_functiondef(p.oid)
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE pg_get_functiondef(p.oid) ~ 'array_(append|prepend|cat|position|positions|remove|replace)|width_bucket'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');

-- Find aggregates using these functions
SELECT n.nspname, a.aggfnoid::regproc
FROM pg_aggregate a
JOIN pg_proc p ON a.aggfnoid = p.oid
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');

-- Drop and recreate after upgrade

### 2. Removed: Containment Operators @ and ~

**Issue:** Deprecated operators @ and ~ removed for geometric types and contrib modules.

**Affected:** geometric data types, cube, hstore, intarray, seg

**Action Required:**
sql
-- Search for usage of @ and ~ operators
SELECT schemaname, viewname, definition
FROM pg_views
WHERE definition ~ '[@~]'
  AND schemaname NOT IN ('pg_catalog', 'information_schema');

-- Replace with &amp;lt;@ and @&amp;gt; operators
-- Old: geometry1 @ geometry2
-- New: geometry1 &amp;lt;@ geometry2

### 3. to_tsquery() and websearch_to_tsquery() Parsing Changes

**Issue:** Discarded tokens now properly parsed.

**Action Required:**
sql
-- Review queries using these functions with underscores or other discarded tokens
-- Test query results after upgrade
-- Example: websearch_to_tsquery('"pg_class pg"')
-- Old output: ( 'pg' &amp;amp; 'class' ) &amp;lt;-&amp;gt; 'pg'
-- New output: 'pg' &amp;lt;-&amp;gt; 'class' &amp;lt;-&amp;gt; 'pg'

### 4. Regular Expression \D and \W Changes

**Issue:** \D and \W now match newlines in newline-sensitive mode.

**Action Required:**
sql
-- Review regex patterns using \D or \W
SELECT schemaname, viewname, definition
FROM pg_views
WHERE definition ~ '\\[DW]'
  AND schemaname NOT IN ('pg_catalog', 'information_schema');

-- Use [^[:digit:]] or [^[:word:]] for old behavior

### 5. Removed: pg_standby Utility

**Issue:** contrib program pg_standby removed.

**Action Required:**
- Check if pg_standby is used in recovery.conf or restore_command
- Migrate to restore_command with alternative tools
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my case non of the above changes were affected my applications so no changes were required from my end. Typically that would be the case here unless you are using anything special to postgreSQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check Official AWS upgrade Guide&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Then using the AWS documentation i generated a set of steps need to do in order to perform major version upgrade. You can check them from below link.&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.MajorVersion.Process.html?source=post_page-----42fd5cb28ae5---------------------------------------" rel="noopener noreferrer"&gt;RDS Major Upgrade Guide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;check target version&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After opening the doc i needed to decide my upgrade target 13.2-&amp;gt; 17.6 is supported. It was supported.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05y6f4ikxso7tv9v2yde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05y6f4ikxso7tv9v2yde.png" alt=" " width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check database instance compatibility&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Afterward, I needed to checked if my database instance type was compatible with the target version. In my case db.m5.large. It was supported.&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.Support.html?source=post_page-----42fd5cb28ae5---------------------------------------#gen-purpose-inst-classes" rel="noopener noreferrer"&gt;InstanceClass.Support&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw47gepmhndks8upepx3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw47gepmhndks8upepx3x.png" alt=" " width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decide on Upgrade Path&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I considered below upgrade paths&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Blue/Green Deployment (Recommended) : we had some pre-requisites that don’t match our database to support blue/green.&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments.html?source=post_page-----42fd5cb28ae5---------------------------------------" rel="noopener noreferrer"&gt;blue-green-deployments&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Snapshot &amp;amp; In-Place Upgrade&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pg_dump/restore : pg_dump/restore would incur longer downtime (rebuilding indexes is slow for restore) .&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After doing some feasibility study identified the most feasible path for us is in-place upgrade with thorough testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create parameter group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I created custom parameter group for postgres17. In my currently database i am using pg_cron extention. So needed to update shared_libraries parameter to include pg_cron in new param group as well. Just like that you can compare the parameter groups and do required changes if you changed any defaults values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upgrading extentions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A PostgreSQL upgrade doesn’t upgrade any PostgreSQL extensions. To upgrade extensions, see&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.ExtensionUpgrades.html?source=post_page-----42fd5cb28ae5---------------------------------------" rel="noopener noreferrer"&gt;PostgreSQL.ExtensionUpgrades&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am using pg_partman and pg_cron extentions. In order to check supported version for postgreSQL 17 see below doc.&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLReleaseNotes/postgresql-extensions.html?source=post_page-----42fd5cb28ae5---------------------------------------" rel="noopener noreferrer"&gt;postgresql-extensions-support&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my case upgrade requirement&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_partman : 4.5.1 to 5.2.4
pg_cron 1.3 to 1.6.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I checked the changes in partman and pg_cron release notes identified the changes that need to happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other checks needed..&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are some other checks in this doc that need to be fulfilled in order to do a successful upgrade. you can check below by following the official aws doc mentioned previously.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1.Check for unsupported usage
2.Check for invalid databases
3.Handle logical replication slots 
4.Handle read replicas 
5.Handle zero-ETL integrations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my case 1–3 all are zero and i don’t have replicas or zero-etl configured. So now i am all set for the upgrade.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Perform an upgrade dry run&lt;/strong&gt;&lt;br&gt;
Good thing about using AWS is you can quickly spin up new servers with click of a button. So in order to test my upgrade procedure working i created a new db instance using a snapshot of my existing db. Then perform pre-checks above mentioned on it and after verification i performed an in place upgrade to the new instance. You can do this my modifying the instance and update postgreSQL version to 17.6 and update parameter group to the new one created. Then applied the changes immediately. You can use the process to time your process and identify the downtime needed during the real upgrade.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjp7ssb9rllnt7ehujvmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjp7ssb9rllnt7ehujvmr.png" alt=" " width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my case database upgrade took around 12mins so during this time database was unavailable. AWS takes a backup of instance before and after the upgrade. So after the upgrade it is recomended to run the ANALYZE on all databases to update pg_statistic table. This took around 30mins. Also i am using pg_partman, pg_cron which then i upgraded to supported versions. Then connected to database locally and performed validations. Then Documented all steps performed during dry-run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upgrade Staging/Production Database&lt;/strong&gt;&lt;br&gt;
After successful dry-run then its was time to perform staging upgrade. Can follow same steps for prod as well. Inform the stake holders beforehand and acquired a required downtime. Better to perform during an off-peak time interval. Then perform same steps as the dry-run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check logs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After upgrading, I checked the logs. There should be two new logs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;pg_upgrade_internal.log: Log of RDS upgrade (pg_upgrade)&lt;br&gt;
pg_upgrade_server.log: Log of stopping/starting RDS&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Checked any Error or Fatal log in those two logs and confirmed that “Upgrade Complete” is printed at the bottom of internal.log.&lt;/p&gt;

&lt;p&gt;After the upgrade connected to the database and perform some validations on content.&lt;/p&gt;

&lt;p&gt;I had an issue where some of the pods in eks were crashloopbackoff state after the upgrade. After checking the logs identified it was failing to connect database due to SSL error. Identified this was due to in postgreSQL17 param group rds.force_ssl is 1 by default and in postgreSQL13 it was 0. So after updating that pods were running without errors. Noted this down for production upgrade.&lt;/p&gt;

&lt;p&gt;Also note after the upgrade some SQL queries were taking longer time to execute hence increasing CPU load for rds and causing performance degradations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvyybelxuqh9334l1usu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvyybelxuqh9334l1usu.png" alt=" " width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By using performance insights in AWS identified this was mainly due to a single slow query that was happening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vceudrhexg70qsa1c74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vceudrhexg70qsa1c74.png" alt=" " width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After analysing query identified it was due to a VIEW with anti join. Had to rewrite the query for it and after that performance was back to normal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4uci8p1ibw0npjdry3jz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4uci8p1ibw0npjdry3jz.png" alt=" " width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s all! Database was upgraded successfully.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>postgres</category>
      <category>rds</category>
    </item>
    <item>
      <title>How I Orchestrated 400K+ Nested API Calls with AWS Step Functions and Lambda</title>
      <dc:creator>Chanaka Supun</dc:creator>
      <pubDate>Tue, 19 Aug 2025 18:26:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-i-orchestrated-400k-nested-api-calls-with-aws-step-functions-and-lambda-3him</link>
      <guid>https://dev.to/aws-builders/how-i-orchestrated-400k-nested-api-calls-with-aws-step-functions-and-lambda-3him</guid>
      <description>&lt;p&gt;When working with large-scale data pipelines, a common challenge is efficiently orchestrating high-volume API calls that have multiple layers of dependency. Recently, I encountered such a challenge where I needed to generate 409,600 JSON files from deeply nested API responses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge&lt;/strong&gt;&lt;br&gt;
The requirements looked straightforward at first but quickly revealed significant complexity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetch 100 latest content items via an API call.&lt;/li&gt;
&lt;li&gt;For each content item, fetch the 16 latest comments.&lt;/li&gt;
&lt;li&gt;For each comment, fetch the 16 latest replies.&lt;/li&gt;
&lt;li&gt;For each reply, fetch the 16 latest nested replies.&lt;/li&gt;
&lt;li&gt;This results in a large combinatorial expansion:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;100 (contents) × 16 (comments) × 16 (replies) × 16 (nested replies)  &lt;br&gt;
= 409,600 JSON files&lt;/code&gt;&lt;br&gt;
Each generated JSON file represents the fully expanded content tree for one leaf node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why AWS Step Functions&lt;/strong&gt;&lt;br&gt;
To manage this orchestration at scale, I chose AWS Step Functions. Specifically, I leveraged the Map state, which allows parallel execution over dynamic lists without needing to manually code distributed concurrency handling.&lt;/p&gt;

&lt;p&gt;Key benefits of using Step Functions here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Native parallelism → Handles thousands of executions concurrently.&lt;/li&gt;
&lt;li&gt;Error handling &amp;amp; retries → Simplifies recovery from transient API failures.&lt;/li&gt;
&lt;li&gt;Visibility → Provides execution history, making it easier to debug complex workflows.&lt;/li&gt;
&lt;li&gt;Integration → Works seamlessly with AWS Lambda for API calls and file generation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;High-Level Workflow&lt;/strong&gt;&lt;br&gt;
|-Initial API Call (Lambda): Get the 100 latest content items.&lt;/p&gt;

&lt;p&gt;— —|-Step Functions Map State (Level 1): Iterate over the 100 content items.&lt;/p&gt;

&lt;p&gt;— — — — -Inside this map:&lt;/p&gt;

&lt;p&gt;— — — — — — — — — — Call API to fetch 16 comments.&lt;/p&gt;

&lt;p&gt;— — — — — — — — — — |-Nested Map State (Level 2): Iterate over comments.&lt;/p&gt;

&lt;p&gt;— — — — — — — — — — — — —Fetch 16 replies per comment.&lt;/p&gt;

&lt;p&gt;— — — — — — — — — — — — —|-Nested Map State (Level 3): Iterate over replies.&lt;/p&gt;

&lt;p&gt;— — — — — — — — — — — — — — — — -Fetch 16 nested replies per reply.&lt;/p&gt;

&lt;p&gt;This recursive orchestration fans out into 409,600 parallel tasks at the deepest level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability Considerations&lt;/strong&gt;&lt;br&gt;
Handling such scale required careful planning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Map concurrency limits: Step Functions supports up to 40,000 concurrent executions per account (with quotas adjustable via AWS Support).&lt;/li&gt;
&lt;li&gt;API throttling: Implemented exponential backoff and batching logic in Lambda functions to avoid rate-limit errors.&lt;/li&gt;
&lt;li&gt;File storage: Each JSON file is stored in Amazon S3 with a structured key hierarchy for easy retrieval.&lt;/li&gt;
&lt;li&gt;Cost control: Since Step Functions are billed per state transition, I optimized workflows to reduce unnecessary states.&lt;/li&gt;
&lt;li&gt;Payload Size: AWS Step Functions has a payload size limit of 256 KB for both input and output data passed between states in a workflow execution. Instead we can save response to s3 bucket after combining results from each map iteration and pass object name as input to next stage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdr9xismt1wjadimqwo87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdr9xismt1wjadimqwo87.png" alt=" " width="800" height="913"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outcomes&lt;/strong&gt;&lt;br&gt;
Using Step Functions significantly reduced the operational complexity of managing this pipeline. Instead of building custom queueing, concurrency, and retry logic, I relied on AWS-native orchestration. The final system was scalable, fault-tolerant, and easy to monitor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here is the stepfunction ASL&lt;/strong&gt;&lt;br&gt;
(you can use this to create your own workflow)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Comment": "Content, Comments, Replies, and Reply Comments Processing Workflow",
    "StartAt": "GenerateFeedJSON",
    "States": {
        "GenerateAnonymousFeedJSON": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {
                "FunctionName": "${GenerateFeedLambdaArn}"
            },
            "Retry": [
                {
                    "ErrorEquals": [
                        "Lambda.ServiceException",
                        "Lambda.AWSLambdaException",
                        "Lambda.SdkClientException",
                        "Lambda.TooManyRequestsException"
                    ],
                    "IntervalSeconds": 1,
                    "MaxAttempts": 3,
                    "BackoffRate": 2,
                    "JitterStrategy": "FULL"
                }
            ],
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "Next": "WorkflowFailed",
                    "ResultPath": "$.error"
                }
            ],
            "TimeoutSeconds": 300,
            "Next": "Pass"
        },
        "WorkflowFailed": {
            "Type": "Fail"
        },
        "Pass": {
            "Type": "Pass",
            "Next": "ProcessIndividualContent",
            "Parameters": {
                "input.$": "States.StringToJson($.Payload.body)"
            },
            "OutputPath": "$.input",
            "Assign": {
                "key.$": "$.input.key"
            }
        },
        "ProcessIndividualContent": {
            "Type": "Map",
            "ItemProcessor": {
                "ProcessorConfig": {
                    "Mode": "DISTRIBUTED",
                    "ExecutionType": "STANDARD"
                },
                "StartAt": "GenerateIndividualContentJSON",
                "States": {
                    "GenerateIndividualContentJSON": {
                        "Type": "Task",
                        "Resource": "arn:aws:states:::lambda:invoke",
                        "Parameters": {
                            "FunctionName": "${GenerateIndivcontentJsonsLambdaArn}",
                            "Payload.$": "$"
                        },
                        "Retry": [
                            {
                                "ErrorEquals": [
                                    "Lambda.ServiceException",
                                    "Lambda.AWSLambdaException",
                                    "Lambda.SdkClientException",
                                    "Lambda.TooManyRequestsException"
                                ],
                                "IntervalSeconds": 1,
                                "MaxAttempts": 3,
                                "BackoffRate": 2,
                                "JitterStrategy": "FULL"
                            }
                        ],
                        "TimeoutSeconds": 300,
                        "End": true
                    }
                }
            },
            "ItemsPath": "$.detail",
            "MaxConcurrency": 1000,
            "Next": "ProcessComments",
            "ResultPath": "$.processedContent",
            "ItemBatcher": {
                "MaxItemsPerBatch": 10
            },
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "Next": "WorkflowFailed",
                    "ResultPath": "$.error"
                }
            ]
        },
        "ProcessComments": {
            "Type": "Map",
            "ItemProcessor": {
                "ProcessorConfig": {
                    "Mode": "DISTRIBUTED",
                    "ExecutionType": "STANDARD"
                },
                "StartAt": "GenerateComments",
                "States": {
                    "GenerateComments": {
                        "Type": "Task",
                        "Resource": "arn:aws:states:::lambda:invoke",
                        "Parameters": {
                            "FunctionName": "${GenerateContentCommentJsonsLambdaArn}",
                            "Payload.$": "$"
                        },
                        "Retry": [
                            {
                                "ErrorEquals": [
                                    "Lambda.ServiceException",
                                    "Lambda.AWSLambdaException",
                                    "Lambda.SdkClientException",
                                    "Lambda.TooManyRequestsException"
                                ],
                                "IntervalSeconds": 1,
                                "MaxAttempts": 3,
                                "BackoffRate": 2,
                                "JitterStrategy": "FULL"
                            }
                        ],
                        "TimeoutSeconds": 300,
                        "End": true,
                        "OutputPath": "$.Payload.body"
                    }
                }
            },
            "ItemsPath": "$.detail",
            "MaxConcurrency": 1000,
            "ResultPath": "$",
            "Next": "IsReplyLevelCountReached",
            "ItemBatcher": {
                "MaxItemsPerBatch": 10
            },
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "Next": "WorkflowFailed",
                    "ResultPath": "$.error"
                }
            ],
            "ResultWriter": {
                "Resource": "arn:aws:states:::s3:putObject",
                "Parameters": {
                    "Bucket": "${content_sfn_artifact_bucket}",
                    "Prefix": "ProcessComments/"
                },
                "WriterConfig": {
                    "OutputType": "JSON",
                    "Transformation": "FLATTEN"
                }
            },
            "ResultSelector": {
                "output": {
                    "detail": {
                        "fileKey.$": "States.Format('ProcessComments/{}/SUCCEEDED_0.json', States.ArrayGetItem(States.StringSplit($.ResultWriterDetails.Key,'/'), 1))"
                    }
                },
                "counter": 0
            }
        },
        "IsReplyLevelCountReached": {
            "Type": "Choice",
            "Choices": [
                {
                    "Next": "consolidateoutputs",
                    "Variable": "$.counter",
                    "NumericLessThan": 2
                }
            ],
            "Default": "CopyToLatestJson"
        },
        "consolidateoutputs": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {
                "FunctionName": "${ConsolidateOutputsLambdaArn}",
                "Payload.$": "$"
            },
            "Retry": [
                {
                    "ErrorEquals": [
                        "Lambda.ServiceException",
                        "Lambda.AWSLambdaException",
                        "Lambda.SdkClientException",
                        "Lambda.TooManyRequestsException"
                    ],
                    "IntervalSeconds": 1,
                    "MaxAttempts": 3,
                    "BackoffRate": 2,
                    "JitterStrategy": "FULL"
                }
            ],
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "Next": "WorkflowFailed",
                    "ResultPath": "$.error"
                }
            ],
            "TimeoutSeconds": 300,
            "Next": "ProcessReplies",
            "ResultSelector": {
                "input.$": "States.StringToJson($.Payload.body)"
            },
            "OutputPath": "$.input"
        },
        "ProcessReplies": {
            "Type": "Map",
            "ItemProcessor": {
                "ProcessorConfig": {
                    "Mode": "DISTRIBUTED",
                    "ExecutionType": "EXPRESS"
                },
                "StartAt": "GenerateReplies",
                "States": {
                    "GenerateReplies": {
                        "Type": "Task",
                        "Resource": "arn:aws:states:::lambda:invoke",
                        "Parameters": {
                            "FunctionName": "${GeneratCommentRepliesJsonsLambdaArn}",
                            "Payload.$": "$"
                        },
                        "Retry": [
                            {
                                "ErrorEquals": [
                                    "Lambda.ServiceException",
                                    "Lambda.AWSLambdaException",
                                    "Lambda.SdkClientException",
                                    "Lambda.TooManyRequestsException"
                                ],
                                "IntervalSeconds": 1,
                                "MaxAttempts": 3,
                                "BackoffRate": 2,
                                "JitterStrategy": "FULL"
                            }
                        ],
                        "TimeoutSeconds": 300,
                        "End": true,
                        "OutputPath": "$.Payload.body"
                    }
                }
            },
            "MaxConcurrency": 500,
            "ResultPath": "$.output",
            "Next": "IsReplyLevelCountReached",
            "ItemReader": {
                "Resource": "arn:aws:states:::s3:getObject",
                "ReaderConfig": {
                    "InputType": "JSON"
                },
                "Parameters": {
                    "Bucket.$": "$.bucketName",
                    "Key.$": "$.fileName"
                }
            },
            "ItemBatcher": {
                "MaxItemsPerBatch": 10
            },
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "Next": "WorkflowFailed",
                    "ResultPath": "$.error"
                }
            ],
            "ResultWriter": {
                "Resource": "arn:aws:states:::s3:putObject",
                "Parameters": {
                    "Bucket": "${content_sfn_artifact_bucket}",
                    "Prefix": "ProcessReplies/"
                },
                "WriterConfig": {
                    "OutputType": "JSON",
                    "Transformation": "FLATTEN"
                }
            },
            "ResultSelector": {
                "detail": {
                    "fileKey.$": "States.Format('ProcessReplies/{}/SUCCEEDED_0.json', States.ArrayGetItem(States.StringSplit($.ResultWriterDetails.Key,'/'), 1))"
                }
            }
        },
        "CopyToLatestJson": {
            "Type": "Task",
            "Parameters": {
                "Bucket": "${content_s3_bucket}",
                "CopySource.$": "States.Format('${content_s3_bucket}/{}', $key)",
                "Key": "feed/anonymous/latest.json"
            },
            "Resource": "arn:aws:states:::aws-sdk:s3:copyObject",
            "Retry": [
                {
                    "ErrorEquals": [
                        "S3.ServiceException",
                        "S3.AWSServiceException",
                        "S3.SdkClientException"
                    ],
                    "IntervalSeconds": 1,
                    "MaxAttempts": 3,
                    "BackoffRate": 2,
                    "JitterStrategy": "FULL"
                }
            ],
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "Next": "WorkflowFailed",
                    "ResultPath": "$.error"
                }
            ],
            "Next": "ListObjectVersions"
        },
        "ListObjectVersions": {
            "Type": "Task",
            "Parameters": {
                "Bucket": "${content_s3_bucket}",
                "Prefix": "feed/anonymous/latest.json",
                "MaxKeys": 2
            },
            "Resource": "arn:aws:states:::aws-sdk:s3:listObjectVersions",
            "ResultSelector": {
                "OldVersionId.$": "$.Versions[1].VersionId",
                "LatestVersionId.$": "$.Versions[0].VersionId"
            },
            "Next": "CheckIfPreviousVersionExists",
            "Retry": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "BackoffRate": 2,
                    "IntervalSeconds": 1,
                    "MaxAttempts": 3,
                    "JitterStrategy": "FULL"
                }
            ],
            "TimeoutSeconds": 30,
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "ResultPath": "$.error",
                    "Next": "Fail"
                }
            ]
        },
        "Fail": {
            "Type": "Fail"
        },
        "CheckIfPreviousVersionExists": {
            "Type": "Choice",
            "Choices": [
                {
                    "Next": "CopyOldVersion",
                    "Variable": "$.OldVersionId",
                    "IsPresent": true
                }
            ],
            "Default": "Success"
        },
        "CopyOldVersion": {
            "Type": "Task",
            "Parameters": {
                "Bucket": "${content_s3_bucket}",
                "CopySource.$": "States.Format('${content_s3_bucket}/feed/anonymous/latest.json?versionId={}', $.OldVersionId)",
                "Key": "feed/anonymous/latest-previous.json",
                "MetadataDirective": "COPY",
                "TaggingDirective": "COPY"
            },
            "Resource": "arn:aws:states:::aws-sdk:s3:copyObject",
            "Next": "HeadObject",
            "ResultSelector": {
                "ArchivedVersionId.$": "$.VersionId"
            },
            "Retry": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "BackoffRate": 2,
                    "IntervalSeconds": 1,
                    "MaxAttempts": 3,
                    "JitterStrategy": "FULL"
                }
            ],
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "Next": "Fail",
                    "ResultPath": "$.error"
                }
            ],
            "TimeoutSeconds": 30,
            "ResultPath": "$.ArchivedVersion"
        },
        "HeadObject": {
            "Type": "Task",
            "Parameters": {
                "Bucket": "${content_s3_bucket}",
                "Key": "feed/anonymous/latest-previous.json",
                "VersionId.$": "$.ArchivedVersion.ArchivedVersionId"
            },
            "Resource": "arn:aws:states:::aws-sdk:s3:headObject",
            "Next": "DeleteObject",
            "Retry": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "BackoffRate": 2,
                    "IntervalSeconds": 1,
                    "MaxAttempts": 3,
                    "JitterStrategy": "FULL"
                }
            ],
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "Next": "Fail",
                    "ResultPath": "$.error"
                }
            ],
            "TimeoutSeconds": 30,
            "ResultPath": "$.verificationresults"
        },
        "DeleteObject": {
            "Type": "Task",
            "Parameters": {
                "Bucket": "${content_s3_bucket}",
                "Key": "feed/anonymous/latest.json",
                "VersionId.$": "$.OldVersionId"
            },
            "Resource": "arn:aws:states:::aws-sdk:s3:deleteObject",
            "Retry": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "BackoffRate": 2,
                    "IntervalSeconds": 1,
                    "MaxAttempts": 3,
                    "JitterStrategy": "FULL"
                }
            ],
            "Catch": [
                {
                    "ErrorEquals": [
                        "States.ALL"
                    ],
                    "Next": "Fail",
                    "ResultPath": "$.error"
                }
            ],
            "TimeoutSeconds": 30,
            "Next": "ValidateAnonFeed"
        },
        "ValidateAnonFeed": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {
                "FunctionName": "${validate_anonymous_feed}"
            },
            "Retry": [
                {
                    "ErrorEquals": [
                        "Lambda.ServiceException",
                        "Lambda.AWSLambdaException",
                        "Lambda.SdkClientException",
                        "Lambda.TooManyRequestsException"
                    ],
                    "IntervalSeconds": 1,
                    "MaxAttempts": 3,
                    "BackoffRate": 2,
                    "JitterStrategy": "FULL"
                }
            ],
            "End": true
        },
        "Success": {
            "Type": "Succeed"
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Lambda Functions in the Workflow&lt;/strong&gt;&lt;br&gt;
The workflow relies on multiple AWS Lambda functions, each designed with a single responsibility to keep the architecture modular and maintainable.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;GenerateFeedLambda&lt;br&gt;
Purpose: Calls the API to retrieve the latest 100 content items.&lt;br&gt;
Input: Triggered at the start of the workflow.&lt;br&gt;
Output: A list of content IDs passed into the first Map state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GenerateIndivcontentJsonsLambda&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Purpose: Generate json for each of 100 items and save in s3 bucket. Output latest 100 content items to next stage.&lt;br&gt;
Input: Single content ID.&lt;br&gt;
Output: A list of content IDs passed into the first Map state.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GenerateContentCommentJsonsLambda&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Purpose: Given a content ID, calls the API to fetch the 16 latest comments for that content and save each in s3.&lt;br&gt;
Input: Single content ID.&lt;br&gt;
Output: List of comment IDs saved in s3. pass object name for the next processing stage.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Replies Fetcher Lambda (Level 1/Level 2)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Purpose: Given a comment ID, fetches the 16 latest replies and save each in s3. In our case same API can be used to generate replies and nested replies. So same lambda is used in step function.&lt;br&gt;
Input: Single comment ID.&lt;br&gt;
Output: List of reply IDs saved in s3. pass object name for the next nested Map state.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Consolidateoutput Lambda&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Purpose: output from previous stage saved in s3 is not a list. we need a list type object to pass to map stage. this lambda does that conversion.&lt;br&gt;
Input: s3 object name (output from previous stage saved in s3)&lt;br&gt;
Output: JSON object written to S3 with a structured&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance metrics&lt;/strong&gt;&lt;br&gt;
⚡ By distributing work across nested Map states, the pipeline scaled horizontally without requiring manual queue or concurrency management. (duration ~30s to generate all)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2m4vqqnz5x7rqztfpi6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2m4vqqnz5x7rqztfpi6.png" alt=" " width="701" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step Functions vs Manual Orchestration&lt;/strong&gt;&lt;br&gt;
Manual Orchestration (Queues + Custom Logic)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Must design and manage worker pools manually&lt;/li&gt;
&lt;li&gt;Requires custom retry/backoff logic&lt;/li&gt;
&lt;li&gt;Logs spread across services, harder to trace&lt;/li&gt;
&lt;li&gt;Pay for compute + queue infrastructure&lt;/li&gt;
&lt;li&gt;Higher engineering effort, more boilerplate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS Step Functions (Map State)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically scales with Map state parallelism&lt;/li&gt;
&lt;li&gt;Built-in retries, catchers, error paths&lt;/li&gt;
&lt;li&gt;Centralized execution history and visualization&lt;/li&gt;
&lt;li&gt;Pay per state transition + Lambda usage&lt;/li&gt;
&lt;li&gt;Faster implementation with declarative workflow&lt;/li&gt;
&lt;li&gt;Simplified workflow as state machine JSON&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This demonstrated the power of combining AWS Step Functions with Lambda to handle large-scale, highly nested workloads. By leveraging the Map state effectively, I was able to orchestrate large amount of dependent API calls and generate 409,600 JSON files — all while keeping the system resilient, observable, and cost-efficient.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>stepfunctions</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Transferring Data Between Amazon S3 Buckets Across AWS Accounts with AWS DataSync</title>
      <dc:creator>Chanaka Supun</dc:creator>
      <pubDate>Thu, 27 Mar 2025 17:33:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/transferring-data-between-amazon-s3-buckets-across-aws-accounts-with-aws-datasync-9i8</link>
      <guid>https://dev.to/aws-builders/transferring-data-between-amazon-s3-buckets-across-aws-accounts-with-aws-datasync-9i8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In multi-account AWS environments, teams often need to transfer data across Amazon S3 buckets residing in different AWS accounts. While traditional methods like S3 cross-account replication or AWS CLI-based transfers exist, AWS DataSync provides a more robust, managed solution that enables automated, secure, and high-performance data transfers with monitoring and scheduling capabilities.&lt;/p&gt;

&lt;p&gt;This guide walks you through setting up AWS DataSync to transfer data from a source S3 bucket in one AWS account to a destination S3 bucket in another AWS account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create IAM Roles for AWS DataSync&lt;/strong&gt;&lt;br&gt;
AWS DataSync requires IAM roles to access the source and destination S3 buckets securely.&lt;/p&gt;

&lt;p&gt;1.1 Create IAM Role in the Source Account&lt;br&gt;
Role: &lt;code&gt;datasync-source-role&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;IAM → Roles → Create Role&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select AWS Service → Choose &lt;strong&gt;DataSync&lt;/strong&gt; as the trusted entity&lt;/li&gt;
&lt;li&gt;Attach the following trust policy to allow DataSync to assume this role:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "datasync.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Attach the following inline policy to allow access to the source S3 bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::source-bucket"
            ]
        },
        {
            "Sid": "Statement2",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::source-bucket/*"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Role: datasync-destination-role&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;IAM → Roles → Create Role&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select AWS Service → Choose &lt;strong&gt;DataSync&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Attach the following trust policy to allow DataSync to assume this role:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "datasync.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Attach the following inline policy to allow DataSync to write to the destination S3 bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::destination-bucket"
            ]
        },
        {
            "Sid": "Statement2",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectTagging",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::destination-bucket/*"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Update the S3 Bucket Policy in the Destination Account&lt;/strong&gt;&lt;br&gt;
To allow AWS DataSync to write data to the destination S3 bucket, add the following bucket policy to destination-bucket:&lt;/p&gt;

&lt;p&gt;Navigate to &lt;strong&gt;S3 → destination-bucket → Permissions → Bucket Policy&lt;/strong&gt;&lt;br&gt;
Add the following policy, replacing &lt;strong&gt;SOURCE_ACCOUNT_ID&lt;/strong&gt; with the actual AWS Account ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allowdatasync",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SOURCE_ACCOUNT_ID:role/datasync-destination-role"
            },
            "Action": [
                "s3:List*",
                "s3:Get*",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:PutObjectTagging"
            ],
            "Resource": [
                "arn:aws:s3:::destination-bucket/*",
                "arn:aws:s3:::destination-bucket"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Configure AWS DataSync in the Source Account&lt;/strong&gt;&lt;br&gt;
3.1 Create the Source Location in AWS DataSync&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the AWS DataSync console in the source account.&lt;/li&gt;
&lt;li&gt;Click Create *&lt;em&gt;Location *&lt;/em&gt;→ Select Amazon S3.&lt;/li&gt;
&lt;li&gt;Choose the source-bucket as the location.&lt;/li&gt;
&lt;li&gt;Select datasync-source-role as the IAM role.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create Location.&lt;br&gt;
3.2 Create the Destination Location (via AWS CloudShell)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Since the destination S3 bucket belongs to a different AWS account, use AWS CLI (or AWS CloudShell) to create the destination location.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the following AWS CLI command in the source account:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws datasync create-location-s3 \
    --s3-bucket-arn arn:aws:s3:::destination-bucket \
    --s3-config '{ "BucketAccessRoleArn": "arn:aws:iam::SOURCE_ACCOUNT_ID:role/datasync-destination-role" }' \
    --region ap-southeast-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Replace ap-southeast-2 with the region where your bucket exists.&lt;/p&gt;

&lt;p&gt;you will get a response back as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "LocationArn": "arn:aws:datasync:ap-southeast-2:SOURCE_ACCOUNT_ID:location/loc-xxxxxxxx"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then refresh the page will be able to see two locations now created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Create and Start the DataSync Task&lt;/strong&gt;&lt;br&gt;
4.1 Create the DataSync Task&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open AWS DataSync in the source account.&lt;/li&gt;
&lt;li&gt;Click Create Task.&lt;/li&gt;
&lt;li&gt;Select Source Location (previously created).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fli8yyze8ryqnxomun00z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fli8yyze8ryqnxomun00z.png" alt="Image description" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next Select Destination Location (created via CLI).&lt;/li&gt;
&lt;li&gt;Configure the following settings:&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task Mode&lt;/strong&gt;: Ensure Enhanced.&lt;/li&gt;
&lt;li&gt;keep rest of the settings as default.&lt;/li&gt;
&lt;li&gt;Click Create Task.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4.2 Start the DataSync Task&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once the task is created, start the transfer using the AWS CLI or console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmbw193b6kmkxudn7mas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmbw193b6kmkxudn7mas.png" alt="Image description" width="800" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start via Console:&lt;/li&gt;
&lt;li&gt;Navigate to AWS DataSync → Tasks.&lt;/li&gt;
&lt;li&gt;Select your newly created task.&lt;/li&gt;
&lt;li&gt;Click Start Task with defaults.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Monitor the Data Transfer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to &lt;strong&gt;AWS DataSync&lt;/strong&gt; → &lt;strong&gt;Task Executions&lt;/strong&gt; to view progress.&lt;/li&gt;
&lt;li&gt;Check CloudWatch logs for errors if the task fails.&lt;/li&gt;
&lt;li&gt;Validate that files appear in the destination S3 bucket after completion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;By following this approach, you can securely and efficiently transfer data across AWS accounts using AWS DataSync. This solution offers:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated Data Movement: No need for manual copying.&lt;/li&gt;
&lt;li&gt;Incremental Transfers: Only modified files are transferred.&lt;/li&gt;
&lt;li&gt;Monitoring &amp;amp; Logs: AWS CloudWatch integration for tracking.&lt;/li&gt;
&lt;li&gt;Scalability: Can handle large-scale transfers.&lt;/li&gt;
&lt;li&gt;This method is ideal for one-time migrations as well as ongoing cross-account synchronization.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>s3</category>
      <category>datamigration</category>
      <category>daysync</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Public http API Gateway to Private API Gateway : Cross-Account Integration</title>
      <dc:creator>Chanaka Supun</dc:creator>
      <pubDate>Tue, 25 Mar 2025 18:33:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/public-http-api-gateway-to-private-api-gateway-cross-account-integration-4ee4</link>
      <guid>https://dev.to/aws-builders/public-http-api-gateway-to-private-api-gateway-cross-account-integration-4ee4</guid>
      <description>&lt;p&gt;When architecting services across multiple AWS accounts—especially in a bulkhead architecture—enabling secure service-to-service communication is crucial. A Private API Gateway is often used to keep internal APIs accessible only within a VPC, but what if you need external access?&lt;/p&gt;

&lt;p&gt;A Regional API Gateway with an authorizer (e.g., Lambda Authorizer) can act as a single-entry point for multiple private endpoints. This article demonstrates how to securely connect a Public Regional API Gateway in one AWS account (Account A) to a Private API Gateway in another AWS account (Account B), using:&lt;/p&gt;

&lt;p&gt;✅ VPC Link to securely route traffic to internal services&lt;br&gt;
✅ Network Load Balancer (NLB) to distribute traffic efficiently&lt;br&gt;
✅ VPC Endpoint to expose the Private API Gateway securely&lt;/p&gt;

&lt;p&gt;🏗 Prerequisites&lt;br&gt;
Before starting, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two AWS accounts (Account A &amp;amp; Account B)&lt;/li&gt;
&lt;li&gt;VPC networking in both accounts with Transit Gateway (TGW) connectivity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📌 Architecture Overview&lt;br&gt;
The diagram below illustrates the integration:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufi65gdjhlysmzipg5l7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufi65gdjhlysmzipg5l7.png" alt="Project design" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key Components:&lt;br&gt;
1️⃣ Private API Gateway (Account B) – Hosts internal APIs, accessible via a VPC Endpoint&lt;br&gt;
2️⃣ Network Load Balancer (NLB) (Account A)  – Routes traffic from the Public API Gateway to the Private API Gateway&lt;br&gt;
3️⃣ VPC Link (Account A) – Connects the Public API Gateway to the NLB&lt;br&gt;
4️⃣ Regional API Gateway (Account A) – Exposes the Private API securely to external users&lt;/p&gt;

&lt;p&gt;Lets Get started !&lt;/p&gt;

&lt;p&gt;🏗 &lt;strong&gt;Step 1: Create a Private API Gateway &amp;amp; VPC Endpoint (Account B)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ Log into Account B and navigate to VPC Endpoints&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco4t4wyrl2brlatgcy9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco4t4wyrl2brlatgcy9w.png" alt="Image description" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2️⃣ Create a new VPC Endpoint:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljh0p8w2q3zece68ubqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljh0p8w2q3zece68ubqi.png" alt="Image description" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;VPC: Select the VPC where the Private API Gateway resides&lt;br&gt;
Security Group: Allow inbound traffic from Account A&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kfvk7q2fbu053adyxlo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kfvk7q2fbu053adyxlo.png" alt="Image description" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d0zfv2rosg320urduyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d0zfv2rosg320urduyg.png" alt="Image description" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhjcabnql346q4x6c8zn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhjcabnql346q4x6c8zn.png" alt="Image description" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Policy: Set to Full Access for now&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9mlqixa2e6j7zrzf9vu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9mlqixa2e6j7zrzf9vu.png" alt="Image description" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click create. Will take few minutes to status to become available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx40daha6iw1o1duf97nt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx40daha6iw1o1duf97nt.png" alt="Image description" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3️⃣ Once created, note down the IP addresses of the VPC Endpoint&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbozy4qqqkx2y555xp8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbozy4qqqkx2y555xp8b.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a Private API Gateway (Account B)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ Log into Account B and navigate to API Gateway&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuj1wjr7chee736vxnqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuj1wjr7chee736vxnqw.png" alt="Image description" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select rest api private.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4a5truwvarfsofjvtmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4a5truwvarfsofjvtmv.png" alt="Image description" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhnkvfwptlmi4ooj8usd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhnkvfwptlmi4ooj8usd.png" alt="Image description" width="800" height="343"&gt;&lt;/a&gt;&lt;br&gt;
3️⃣ Create a resource and a method (e.g., GET /test)&lt;br&gt;
4️⃣ Configure a mock integration for testing:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8miypka6yzqlm33tihhw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8miypka6yzqlm33tihhw.png" alt="Image description" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fklgr6kupoiqta13nlh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fklgr6kupoiqta13nlh.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm using mock endpoint here. It will just return.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
 "response" : "integration-successful"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select method and go to test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo59j5mgffvz6ada11bi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo59j5mgffvz6ada11bi9.png" alt="Image description" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jp2zyg40w5cxbzvctnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jp2zyg40w5cxbzvctnz.png" alt="Image description" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5️⃣ Set up a Resource Policy to allow traffic only from VPC Endpoint in Account B:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52o4v5zck7ncr15lio9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52o4v5zck7ncr15lio9x.png" alt="Image description" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For resource include arn of private endpoints and sourcevpce should be id of vpc endpoint created above. This will make sure traffic will accepted only from vpc endpoint.&lt;/p&gt;

&lt;p&gt;6️⃣ Deploy the API&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno4u1a1ln6b7irstjtx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno4u1a1ln6b7irstjtx5.png" alt="Image description" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 : Create a Regional API Gateway (Account A)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Type: REST API (Regional)&lt;/p&gt;

&lt;p&gt;Name: e.g., http-gateway&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d2skxrverakenmadr64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d2skxrverakenmadr64.png" alt="Image description" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cbmba33i0ex6ytd3k2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cbmba33i0ex6ytd3k2j.png" alt="Image description" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg0w6g48n3o5ljnmmv5b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg0w6g48n3o5ljnmmv5b.png" alt="Image description" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create the http gateway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 : Create a Network Load Balancer (Account A)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ Navigate to EC2 &amp;gt; Load Balancers&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hqg20b7vqnx9yk9n1bh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hqg20b7vqnx9yk9n1bh.png" alt="Image description" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2️⃣ Create a new Network Load Balancer&lt;/p&gt;

&lt;p&gt;Scheme: Internal&lt;br&gt;
VPC: Must have connectivity to the Private API Gateway's VPC&lt;br&gt;
Subnets: Private subnets in Account A&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct22yn8ztrkser3dv8lh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct22yn8ztrkser3dv8lh.png" alt="LB SG" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3️⃣ Create a Target Group:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finfhxnmlnoq1olv6zlon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finfhxnmlnoq1olv6zlon.png" alt="Image description" width="800" height="234"&gt;&lt;/a&gt;&lt;br&gt;
Target Type: IP&lt;br&gt;
Port: 443 (for HTTPS traffic to Private API Gateway)&lt;br&gt;
VPC: that has connectivity with Private API Gateway&lt;/p&gt;

&lt;p&gt;Target IPs: Enter VPC Endpoint IPs noted earlier&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfnof7gc2yueds4rkdg3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfnof7gc2yueds4rkdg3.png" alt="Image description" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7q1yyg0isei7glcr4imi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7q1yyg0isei7glcr4imi.png" alt="Image description" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm11rjbme8bg0xibpm25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm11rjbme8bg0xibpm25.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For IP addresses enter the Ip's of VPV endpoint previously note down in step 01 and keep port as 443.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jjnkmpft8or2xzmwfw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jjnkmpft8or2xzmwfw5.png" alt="Image description" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff95narsfxqqdymnxtvfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff95narsfxqqdymnxtvfd.png" alt="Image description" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm6nu0arzb8v7wsqkt90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm6nu0arzb8v7wsqkt90.png" alt="Image description" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4️⃣ Attach the Target Group to the Load Balancer&lt;br&gt;
5️⃣ Create and activate the NLB&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4 : Create a VPC Link (Account A)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ Navigate to API Gateway &amp;gt; VPC Links&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2zmkjn1ictymkuhcxh0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2zmkjn1ictymkuhcxh0.png" alt="Image description" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2️⃣ Create a new VPC Link&lt;/p&gt;

&lt;p&gt;Type: Network Load Balancer&lt;br&gt;
NLB ARN: Select the NLB created in Account A&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx1e7x80xefvrprer4sy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx1e7x80xefvrprer4sy.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t7lhs8il1w5r2jfvvpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t7lhs8il1w5r2jfvvpp.png" alt="Image description" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3️⃣ Wait for the VPC Link to become available&lt;/p&gt;

&lt;p&gt;step 5 : Integrate http API Gateway with VPC Link (Account A)&lt;/p&gt;

&lt;p&gt;1️⃣ Go back to the http API Gateway&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi63bnw9zpqhmx1ibqma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi63bnw9zpqhmx1ibqma.png" alt="Image description" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2️⃣ Create a new resource &amp;amp; method&lt;/p&gt;

&lt;p&gt;Method Type: ANY&lt;br&gt;
PATH (use greedy path to pass all requests to private API: {proxy+} &lt;br&gt;
Integration Type: VPC Link&lt;br&gt;
VPC Link: Select the one created earlier&lt;br&gt;
Endpoint URL: Use the Private API Gateway Invoke URL from Step 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bfd6k9uu285xv8wbeou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bfd6k9uu285xv8wbeou.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdru22ehdc0gyjbziolq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdru22ehdc0gyjbziolq.png" alt="Image description" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the load balancer created previously&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ohmh13qxk7chltd3936.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ohmh13qxk7chltd3936.png" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the vpc link created above. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79ya0r1a1z29b3dm2hfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79ya0r1a1z29b3dm2hfd.png" alt="Image description" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;in the advance section for server name include invoke domain of private api gateway. Otherwise will get a certificate validation error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8mktm4nh48huya5w51i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8mktm4nh48huya5w51i.png" alt="Image description" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6 : Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ test the integration ✅&lt;br&gt;
When sending the request to the regional api gateway invoke url need to include the header x-apigw-api-id with value as private api gateway ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yzrnb44krv23arnvded.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yzrnb44krv23arnvded.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>apigateway</category>
      <category>vpclink</category>
      <category>privateintegration</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
