<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: omarrosadio</title>
    <description>The latest articles on DEV Community by omarrosadio (@omarrosadio).</description>
    <link>https://dev.to/omarrosadio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F767835%2F6d2d4839-23fe-49c9-bc20-d705063bfb74.png</url>
      <title>DEV Community: omarrosadio</title>
      <link>https://dev.to/omarrosadio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/omarrosadio"/>
    <language>en</language>
    <item>
      <title>Streaming data from Amazon QLDB to OpenSearch</title>
      <dc:creator>omarrosadio</dc:creator>
      <pubDate>Sun, 25 Sep 2022 06:09:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/streaming-data-from-amazon-qldb-to-opensearch-62f</link>
      <guid>https://dev.to/aws-builders/streaming-data-from-amazon-qldb-to-opensearch-62f</guid>
      <description>&lt;p&gt;The goal of this tutorial is to understand and implement a solution that streams data changes made on a ledger on Amazon QLDB to an OpenSearch Domain.&lt;br&gt;
A common use case for this architecture would be a scenario where is required an event-driven architecture to process data and perform analytics in near real-time for data stored on a ledger on QLDB.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Architecture&lt;/li&gt;
&lt;li&gt;Considerations&lt;/li&gt;
&lt;li&gt;Deploying the infrastructure using CloudFormation&lt;/li&gt;
&lt;li&gt;Configuring Access Control in OpenSearch&lt;/li&gt;
&lt;li&gt;Inserting data in the Ledger&lt;/li&gt;
&lt;li&gt;Querying the data in OpenSearch&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Farchitecture-diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Farchitecture-diagram.png" title="Architecture-Diagram" alt="Architecture-Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The way it works is as follows:&lt;br&gt;
    1) A stream is associated with the QLDB ledger. Every change made on the ledger is captured by the stream, which sends that data to a Kinesis Data Stream (note that QLDB streaming is near real time).&lt;br&gt;
    2) Once the data is in the Kinesis Data Stream, consumers can process it. In this case, a Kinesis Data Firehose delivery stream is configured to move the data to the destination (the OpenSearch domain). The delivery stream has a data transformation Lambda function associated with it to filter only the relevant events we want to insert into OpenSearch.&lt;/p&gt;
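&lt;p&gt;To make the transformation step concrete, below is a minimal sketch of what such a Firehose transformation Lambda could look like. It is illustrative only: real QLDB stream records are Ion-encoded (parsing them would need the &lt;em&gt;amazon.ion&lt;/em&gt; library), so this sketch assumes JSON payloads with a &lt;em&gt;recordType&lt;/em&gt; field; the record type names mirror those emitted by QLDB streams.&lt;/p&gt;

```python
import base64
import json

# Record types emitted by a QLDB stream; only revision details carry
# document data, so control and block-summary records are dropped.
KEEP_TYPES = {"REVISION_DETAILS"}

def handler(event, context):
    """Kinesis Data Firehose transformation Lambda (illustrative sketch)."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        if payload.get("recordType") in KEEP_TYPES:
            result = "Ok"
            data = base64.b64encode(json.dumps(payload).encode()).decode()
        else:
            # "Dropped" tells Firehose the record was filtered, not failed.
            result = "Dropped"
            data = record["data"]
        output.append({"recordId": record["recordId"],
                       "result": result,
                       "data": data})
    return {"records": output}
```

&lt;p&gt;Firehose expects every input record back in the response with a &lt;em&gt;recordId&lt;/em&gt;, a &lt;em&gt;result&lt;/em&gt; and the (possibly transformed) base64 &lt;em&gt;data&lt;/em&gt;, even for dropped records.&lt;/p&gt;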

&lt;h2&gt;
  
  
  Considerations
&lt;/h2&gt;

&lt;p&gt;Most of the configuration settings use basic or default values. For a production workload these should be tuned to optimal values. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The OpenSearch domain is created in public networking mode, but for a production scenario the VPC mode offers more control and security.&lt;/li&gt;
&lt;li&gt;The Kinesis Data Firehose delivery stream buffers are set to the lowest possible values so the data reaches its destination as soon as possible, but for a production scenario these values should be chosen carefully.&lt;/li&gt;
&lt;li&gt;The OpenSearch domain master user and password are configured as parameters in the CFN template. However, there are more secure alternatives, such as using Parameter Store parameters.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deploying the infrastructure using CloudFormation
&lt;/h2&gt;

&lt;p&gt;We will create all the components shown in the architecture diagram using a CloudFormation template. After that, some manual configuration will be required.&lt;br&gt;
The template can be found in &lt;a href="https://github.com/omarrosadio/streaming-from-qldb-to-opensearch" rel="noopener noreferrer"&gt;this GitHub repository&lt;/a&gt; with the name &lt;em&gt;streaming-from-qldb-to-opensearch.yaml&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The parameter values you would need to change are:&lt;br&gt;
    - OpenSearchMasterUsername: The master username you will use to connect to the OpenSearch service&lt;br&gt;
    - OpenSearchMasterPassword: The master password you will use to connect to the OpenSearch service. Passwords must contain at least one uppercase letter, one lowercase letter, one number, and one special character.&lt;/p&gt;
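&lt;p&gt;If you want to validate a candidate password before launching the stack, the rules above are easy to check locally. A quick sketch (the 8-character minimum is an assumption based on OpenSearch fine-grained access control defaults; the other rules come from the list above):&lt;/p&gt;

```python
import re

def valid_master_password(pw):
    """Check: at least one uppercase letter, one lowercase letter,
    one number, one special character, and a minimum length of 8
    (the length minimum is an assumed default)."""
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)
```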

&lt;p&gt;Create the stack and wait for all components to be created before continuing. This can take up to 30 minutes (most of the time is spent on the OpenSearch domain creation).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fcloudformation-stack.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fcloudformation-stack.jpg" title="Cloudformation-Stack" alt="Cloudformation-Stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Access Control in OpenSearch
&lt;/h2&gt;

&lt;p&gt;The IAM role used by Kinesis Data Firehose to move the data to the OpenSearch domain already has the required IAM permissions (actions that include &lt;em&gt;es&lt;/em&gt;). However, the permissions must also be explicitly allowed and assigned on the OpenSearch domain side. To do this, we configure them from the dashboard.&lt;/p&gt;

&lt;p&gt;The dashboard URL can be obtained from the OpenSearch console or from the CloudFormation Stack - Outputs section with the name "OpenSearchDashboard":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fcloudformation-outputs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fcloudformation-outputs.jpg" title="Cloudformation-Outputs" alt="Cloudformation-Outputs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the username and password entered as input parameters during Stack creation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-login.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-login.jpg" title="Opensearch-Login" alt="Opensearch-Login"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then go to the hamburger menu in the top left corner -&amp;gt; "Security" -&amp;gt; "Roles":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-roles.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-roles.jpg" title="Opensearch-Roles" alt="Opensearch-Roles"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this demo, select the "all_access" role and, in the "Mapped users" section, click on "Manage mapping":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-manage-mapping.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-manage-mapping.jpg" title="Opensearch-Manage-Mapping" alt="Opensearch-Manage-Mapping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then add the Kinesis Data Firehose delivery stream role ARN in the "Backend roles" section. This value can be found in the CloudFormation Stack - Outputs section under the name "KinesisFirehoseRole".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-backend-roles.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-backend-roles.jpg" title="Opensearch-Backend-Roles" alt="Opensearch-Backend-Roles"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, click on the "Map" button.&lt;/p&gt;
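&lt;p&gt;If you prefer to avoid the dashboard, the same mapping can be applied through the OpenSearch Security REST API (a PUT to &lt;em&gt;_plugins/_security/api/rolesmapping/all_access&lt;/em&gt; on the domain endpoint, authenticated as the master user). A sketch of how the request body would be built; the ARN below is a placeholder for the "KinesisFirehoseRole" output value:&lt;/p&gt;

```python
import json

def build_role_mapping(backend_role_arns):
    """Body for PUT _plugins/_security/api/rolesmapping/{role}.
    Mapping IAM role ARNs as backend roles grants them that role."""
    return {"backend_roles": list(backend_role_arns)}

# Placeholder ARN; use the CloudFormation "KinesisFirehoseRole" output.
payload = build_role_mapping(
    ["arn:aws:iam::123456789012:role/KinesisFirehoseRole"])
body = json.dumps(payload)
```

&lt;p&gt;The request could then be sent with an HTTP client of your choice, using the dashboard URL as the base endpoint and the master user credentials for basic auth.&lt;/p&gt;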

&lt;h2&gt;
  
  
  Inserting data in the Ledger
&lt;/h2&gt;

&lt;p&gt;Now that everything is configured, we need to insert some sample data to see the solution in action.&lt;/p&gt;

&lt;p&gt;Go to "Amazon QLDB" -&amp;gt; "PartiQL editor" on the web console. Select the recently created ledger (qldb-stream) and run the following script to create a table, an index and insert some data into the table.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE Person;
CREATE INDEX ON Person (GovId);
INSERT INTO Person VALUE {'GovId':'AAA-BB-0001', 'GovIdType': 'Passport', 'FirstName':'Raul', 'LastName':'Lewis', 'Country': 'USA', 'Age': 23}; 
INSERT INTO Person VALUE {'GovId':'CCC-DD-0002', 'GovIdType': 'Passport', 'FirstName':'Brent', 'LastName':'Logan', 'Country': 'USA', 'Age': 40};
INSERT INTO Person VALUE {'GovId':'EEE-FF-0003', 'GovIdType': 'Passport', 'FirstName':'Alexis', 'LastName':'Pena', 'Country': 'USA', 'Age': 29};
INSERT INTO Person VALUE {'GovId':'GGG-HH-0004', 'GovIdType': 'Passport', 'FirstName':'Melvin', 'LastName':'Parker', 'Country': 'Mexico', 'Age': 18};
INSERT INTO Person VALUE {'GovId':'III-JJ-0005', 'GovIdType': 'DNI', 'FirstName':'Salvatore', 'LastName':'Spencer', 'Country': 'Mexico', 'Age': 21};
INSERT INTO Person VALUE {'GovId':'LLL-MM-0006', 'GovIdType': 'DNI', 'FirstName':'Carlos', 'LastName':'Trump', 'Country': 'Peru', 'Age': 42};
INSERT INTO Person VALUE {'GovId':'NNN-OO-0007', 'GovIdType': 'DNI', 'FirstName':'John', 'LastName':'Connor', 'Country': 'Chile', 'Age': 36};
INSERT INTO Person VALUE {'GovId':'PPP-QQ-0008', 'GovIdType': 'Other', 'FirstName':'Diana', 'LastName':'Brown', 'Country': 'Brazil', 'Age': 30};
INSERT INTO Person VALUE {'GovId':'RRR-SS-0009', 'GovIdType': 'Other', 'FirstName':'Albert', 'LastName':'Johnson', 'Country': 'Peru', 'Age': 29};
INSERT INTO Person VALUE {'GovId':'TTT-UU-0010', 'GovIdType': 'Other', 'FirstName':'Freddy', 'LastName':'Rose', 'Country': 'Ecuador', 'Age': 33};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
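&lt;p&gt;The same statements can also be run programmatically with the QLDB driver for Python (&lt;em&gt;pyqldb&lt;/em&gt;) instead of the PartiQL editor. A sketch, assuming AWS credentials for the account and the ledger name used in this tutorial (only the first statements are listed for brevity):&lt;/p&gt;

```python
# Sketch: running the setup statements with the pyqldb driver
# (pip install pyqldb) instead of the web console's PartiQL editor.
STATEMENTS = [
    "CREATE TABLE Person",
    "CREATE INDEX ON Person (GovId)",
    "INSERT INTO Person VALUE {'GovId':'AAA-BB-0001', 'GovIdType': 'Passport',"
    " 'FirstName':'Raul', 'LastName':'Lewis', 'Country': 'USA', 'Age': 23}",
]

def load_sample_data(ledger_name="qldb-stream"):
    from pyqldb.driver.qldb_driver import QldbDriver
    driver = QldbDriver(ledger_name=ledger_name)
    # Each execute_lambda call runs in its own QLDB transaction.
    for stmt in STATEMENTS:
        driver.execute_lambda(lambda txn, s=stmt: txn.execute_statement(s))
```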

&lt;p&gt;The statements should be executed successfully:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fqldb-insert-data.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fqldb-insert-data.jpg" title="QLDB-insert-data" alt="QLDB-insert-data"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Querying the data in OpenSearch
&lt;/h2&gt;

&lt;p&gt;After the data is inserted in the ledger, it can take up to one minute for it to be sent from the QLDB stream to the Kinesis Data Stream, and then up to another minute from Kinesis Data Firehose to OpenSearch. So, about two minutes after insertion, the data should be visible in OpenSearch.&lt;/p&gt;

&lt;p&gt;To validate the index was successfully created go to the hamburger menu in the top left corner -&amp;gt; "Query Workbench" and execute the query:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SHOW tables LIKE %;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
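&lt;p&gt;The same query can also be sent outside the Query Workbench, through the SQL plugin's REST endpoint (&lt;em&gt;POST _plugins/_sql&lt;/em&gt; on the domain endpoint, authenticated as the master user). A sketch of how the request would be built:&lt;/p&gt;

```python
import json

def build_sql_request(query):
    """Request pieces for POST _plugins/_sql on the domain endpoint."""
    return {"path": "/_plugins/_sql",
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"query": query})}

req = build_sql_request("SHOW TABLES LIKE %")
```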

&lt;p&gt;An index with the name we used as an input parameter in the CFN template should appear:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-validation.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-validation.jpg" title="Opensearch-Index-Validation" alt="Opensearch-Index-Validation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the index is not listed, revisit step 4 and make sure the role ARN was granted the corresponding permissions.&lt;/p&gt;

&lt;p&gt;To see the data on OpenSearch, go to the hamburger menu in the top left corner -&amp;gt; "Visualize" and click on "Create index pattern":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-pattern-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-pattern-1.jpg" title="Opensearch-Index-Pattern-1" alt="Opensearch-Index-Pattern-1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On "Index pattern name" field enter the index name configured during Stack creation (&lt;em&gt;OpenSearchIndexName&lt;/em&gt; parameter) and click on "Next step":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-pattern-2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-pattern-2.jpg" title="Opensearch-Index-Pattern-2" alt="Opensearch-Index-Pattern-2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally click on "Create index pattern":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-pattern-3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-pattern-3.jpg" title="Opensearch-Index-Pattern-3" alt="Opensearch-Index-Pattern-3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The data inserted into the ledger is now visible in OpenSearch, and new changes will also be streamed there. Go to the hamburger menu -&amp;gt; "Discover" section to see it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-data.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fomarrosadio%2Fstreaming-from-qldb-to-opensearch%2Fmaster%2Fimages%2Fopensearch-index-data.jpg" title="Opensearch-Index-Data" alt="Opensearch-Index-Data"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Having the ability to analyze data in real or near real time is crucial for analytics workloads. QLDB supports this through streams, which send the data to a Kinesis Data Stream. Once the data is in the Kinesis stream, it can be processed in multiple ways; in this example it is delivered to an OpenSearch destination.&lt;br&gt;
I hope this example helps and can serve as a starting point for custom implementations. On top of the base infrastructure, more components can be added: additional delivery streams consuming from the Kinesis Data Stream to deliver in parallel to other targets, e.g. S3, Redshift, New Relic, etc.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>qldb</category>
      <category>opensearch</category>
      <category>streaming</category>
    </item>
    <item>
      <title>Implementing Change Data Capture (CDC) with Aurora Serverless v2</title>
      <dc:creator>omarrosadio</dc:creator>
      <pubDate>Fri, 22 Apr 2022 20:46:57 +0000</pubDate>
      <link>https://dev.to/omarrosadio/implementing-change-data-capute-cdc-with-aurora-serverless-v2-4i1l</link>
      <guid>https://dev.to/omarrosadio/implementing-change-data-capute-cdc-with-aurora-serverless-v2-4i1l</guid>
      <description>&lt;p&gt;Aurora Serverless v2 is now available after a long wait and it claim to solve many issues and limitations from its predecessor.&lt;br&gt;
One of the limitations of Aurora Serverless v1 is the impossibility of using CDC (Change Data Capture) with AWS DMS (Database Migration Service) but now it is possible using the v2.&lt;br&gt;
In this guide we will review step by step the configuration and implementation of CDC with Aurora Serverless v2 and AWS DMS. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7or2gfh0bmj2a71yq9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7or2gfh0bmj2a71yq9g.png" alt="Solution diagram" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In fact, the configuration is pretty much the same as configuring CDC on a provisioned Aurora DB cluster. Also note that all the configurations are based on the us-east-1 (N. Virginia) region.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the Database&lt;/strong&gt;&lt;br&gt;
As a first step, we need to create the Parameter Group, Subnet Group and the Database cluster. I am using almost all the default parameters, changing only the ones required to enable CDC (those parameters are in &lt;em&gt;italics&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Parameter Group&lt;/u&gt;&lt;br&gt;
Click on the 'Create Parameter Group' button:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fay54l2ek8b2cxfplmn8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fay54l2ek8b2cxfplmn8g.png" alt="Create Parameter Group" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select:&lt;br&gt;
Parameter Group family: aurora-mysql8.0&lt;br&gt;
(Currently it is the only one compatible with Aurora Serverless v2)&lt;/p&gt;

&lt;p&gt;Type:&lt;br&gt;
&lt;em&gt;DB Cluster Parameter Group&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Group Name:&lt;br&gt;
pg-servelessv2-cdc&lt;/p&gt;

&lt;p&gt;Description:&lt;br&gt;
Enable change data capture&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvj31t6j30dypnk4skr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvj31t6j30dypnk4skr5.png" alt="Configuring Parameter Group" width="746" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the Parameter group is created, select it and click on 'Edit parameters':&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7rp7zcg3h2mmw67yx2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7rp7zcg3h2mmw67yx2u.png" alt="Configuring Parameter Group" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to change these parameters:&lt;br&gt;
&lt;em&gt;binlog_format: ROW&lt;/em&gt;&lt;br&gt;
&lt;em&gt;binlog_checksum: NONE&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnj33qa7ku226kvxuzr3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnj33qa7ku226kvxuzr3.png" alt="Configuring Parameter Group" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33xhybjs6mhymlvry1xm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33xhybjs6mhymlvry1xm.png" alt="Configuring Parameter Group" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And save changes:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzsnetodqf4mrc23a3me.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzsnetodqf4mrc23a3me.png" alt="Parameter Group created" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Subnet group&lt;/u&gt;&lt;br&gt;
A subnet group specifies in which subnets the database will be deployed. In this case I will select public subnets so I can connect to the DB easily.&lt;/p&gt;

&lt;p&gt;Click on the 'Create DB subnet group' button:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firj3youwyujw2lcw2e32.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firj3youwyujw2lcw2e32.png" alt="Creating subnet group" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name:&lt;br&gt;
sg-db-publicaccess&lt;br&gt;
Description:&lt;br&gt;
Using public subnets for demo only&lt;br&gt;
VPC:&lt;br&gt;
your_VPC_id&lt;br&gt;
Availability Zones:&lt;br&gt;
az_ids (it is mandatory to select at least 2)&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz0bi0oo5s15vd1s06t1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz0bi0oo5s15vd1s06t1.png" alt="Configuring subnet group" width="800" height="542"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcd7273egenh7u9njnps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcd7273egenh7u9njnps.png" alt="Configuring subnet group" width="800" height="676"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now the subnet group is created:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnrp6hyihsejtri0f13u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnrp6hyihsejtri0f13u.png" alt="Subnet group created" width="800" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Database&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Click on the 'Create Database' button:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0eojz3i74zog0i2najcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0eojz3i74zog0i2najcz.png" alt="Creating database" width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Database creation method: Standard create&lt;br&gt;
&lt;em&gt;Engine type: Amazon Aurora&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Edition: Amazon Aurora MySQL-Compatible Edition&lt;/em&gt;&lt;br&gt;
Replication features: Single-master (default)&lt;br&gt;
&lt;em&gt;Engine version: Aurora MySQL 3.02.0 (compatible with MySQL 8.0.23)&lt;/em&gt;&lt;br&gt;
Templates: Dev/Test&lt;br&gt;
DB cluster identifier: db-test-cdc&lt;br&gt;
Master username: admin&lt;br&gt;
Master password: supersecretpassword&lt;br&gt;
&lt;em&gt;DB instance class: Serverless v2 - new&lt;/em&gt;&lt;br&gt;
Capacity range - Minimum ACUs: 0.5&lt;br&gt;
Capacity range - Maximum ACUs: 1&lt;br&gt;
Multi-AZ deployment: Don't create an Aurora Replica&lt;br&gt;
Virtual private cloud (VPC): your_VPC_id&lt;br&gt;
&lt;em&gt;Subnet group: sg-db-publicaccess&lt;/em&gt; (the subnet group previously created)&lt;br&gt;
Public access: Yes&lt;br&gt;
VPC security group: Create new&lt;br&gt;
New VPC security group name: rds-publicaccess&lt;br&gt;
Availability Zone: No preference&lt;br&gt;
Database port: 3306&lt;br&gt;
Database authentication options: Password authentication&lt;br&gt;
Initial database name: sampledb&lt;br&gt;
&lt;em&gt;DB cluster parameter group: pg-servelessv2-cdc&lt;/em&gt; (the parameter group previously created)&lt;br&gt;
DB parameter group: default.aurora-mysql8.0&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvj0rjxbviskteixyjko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvj0rjxbviskteixyjko.png" alt="Creating database" width="800" height="708"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdl0wa20jmci6g5pescv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdl0wa20jmci6g5pescv.png" alt="Creating database" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6j1p1u2ge2jlzxy734r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6j1p1u2ge2jlzxy734r.png" alt="Creating database" width="777" height="828"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ydqw376iwyvv6mrme51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ydqw376iwyvv6mrme51.png" alt="Creating database" width="775" height="703"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fc7l7x6922q481z4gwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fc7l7x6922q481z4gwg.png" alt="Creating database" width="766" height="842"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1im4oexk4vlkg8cwz8da.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1im4oexk4vlkg8cwz8da.png" alt="Creating database" width="779" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0gqgxwtmn9a2njbgef7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0gqgxwtmn9a2njbgef7.png" alt="Creating database" width="779" height="654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7xdcyolczlstxobfo7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7xdcyolczlstxobfo7y.png" alt="Creating database" width="770" height="845"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Create database (it takes approximately 15 minutes to be fully created):&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy2setm0mdep7g5iqp82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy2setm0mdep7g5iqp82.png" alt="Creating database" width="786" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also modify the created security group (rds-publicaccess) to allow inbound traffic on port 3306 from 0.0.0.0/0.&lt;/p&gt;
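&lt;p&gt;The same inbound rule can be expressed against the EC2 API. This is only a sketch: the security group ID below is a placeholder, the actual boto3 call is commented out because it needs AWS credentials, and opening 3306 to 0.0.0.0/0 is acceptable only for a short-lived demo:&lt;/p&gt;

```python
# Sketch of the console change above as an EC2 API ingress rule.
SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # placeholder, use your own

ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 3306,  # MySQL/Aurora MySQL port
    "ToPort": 3306,
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "demo access only"}],
}

# The real call (requires boto3 and credentials):
# import boto3
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId=SECURITY_GROUP_ID, IpPermissions=[ingress_rule]
# )
```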

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdra7b1rm3nx9u6fr1bae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdra7b1rm3nx9u6fr1bae.png" alt="Modifying Security Group" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the S3 bucket&lt;/strong&gt;&lt;br&gt;
The next step is to create an S3 bucket which will be used as the target destination for the full load and change data capture task.&lt;br&gt;
It does not require any special configuration; we can use a bucket with the default creation parameters:&lt;br&gt;
Name: test-dms-s3-target&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3c4ffbe9m3gth53nlvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3c4ffbe9m3gth53nlvx.png" alt="Creating S3 bucket" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgo1rpvx0xkvhae9rqtx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgo1rpvx0xkvhae9rqtx.png" alt="Creating S3 bucket" width="800" height="813"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx523zbqvyugn0g5l67v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx523zbqvyugn0g5l67v.png" alt="Creating S3 bucket" width="800" height="839"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the DMS Replication Instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need to create the DMS replication subnet group. Similar to a database subnet group, it specifies the subnets that will be used. In this case I am selecting private subnets, but any subnets with proper connectivity to the database instance and the S3 bucket will work.&lt;br&gt;
Name: sg-dms&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndeermgo3lmkexqic2kq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndeermgo3lmkexqic2kq.png" alt="Creating subnet group" width="800" height="817"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, create the security group that will be attached to the DMS replication instance. It must allow outbound traffic.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy4sk1eouhzj4w948czw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy4sk1eouhzj4w948czw.png" alt="Creating DMS security group" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And create the DMS Replication Instance:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju22vyvw7pe6qsppz69b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju22vyvw7pe6qsppz69b.png" alt="Creating DMS Replication Instance" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name: dms-instance&lt;br&gt;
Instance class: dms.t3.small&lt;br&gt;
Engine version: 3.4.6&lt;br&gt;
Allocated storage: 20GB&lt;br&gt;
VPC: The same VPC the RDS instance belongs to&lt;br&gt;
Multi AZ: Dev or test workload&lt;br&gt;
Publicly accessible: No&lt;br&gt;
Replication subnet group: sg-dms (created previously)&lt;br&gt;
Availability zone: No preference&lt;br&gt;
VPC security group(s): dms-replication-instance (created previously)&lt;br&gt;
KMS key: Default&lt;/p&gt;
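&lt;p&gt;For reference, the console settings above map onto the parameters of the DMS CreateReplicationInstance API. A minimal sketch follows; the actual boto3 call is commented out because it needs credentials, and the security group ID is a placeholder for the ID of the dms-replication-instance group (the API takes IDs, not names):&lt;/p&gt;

```python
# Console settings from above expressed as CreateReplicationInstance parameters.
replication_instance_params = {
    "ReplicationInstanceIdentifier": "dms-instance",
    "ReplicationInstanceClass": "dms.t3.small",
    "EngineVersion": "3.4.6",
    "AllocatedStorage": 20,   # GB
    "MultiAZ": False,         # "Dev or test workload" in the console
    "PubliclyAccessible": False,
    "ReplicationSubnetGroupIdentifier": "sg-dms",
    "VpcSecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder ID
    # AvailabilityZone omitted: "No preference"
    # KmsKeyId omitted: default key
}

# The real call (requires boto3 and credentials):
# import boto3
# boto3.client("dms").create_replication_instance(**replication_instance_params)
```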

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3j4vrfdtdgxsqpieg98o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3j4vrfdtdgxsqpieg98o.png" alt="Creating DMS Replication Instance" width="724" height="816"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8ycz3xxumlur6gxfli1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8ycz3xxumlur6gxfli1.png" alt="Creating DMS Replication Instance" width="733" height="854"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inserting sample data into the database&lt;/strong&gt;&lt;br&gt;
Before creating the DMS task, I am going to insert sample data into the database to validate that the task works as expected. The data comes from an official MySQL sample, specifically &lt;a href="https://downloads.mysql.com/docs/sakila-db.zip" rel="noopener noreferrer"&gt;the Sakila sample database&lt;/a&gt;. There are only 2 scripts to execute, so it is very quick to replicate.&lt;br&gt;
Connect to the database using the configured user and password, setting the database endpoint accordingly (in case of connection errors, check that the security group allows traffic on port 3306 for your IP range):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl412nl9a6a8wv1rlszla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl412nl9a6a8wv1rlszla.png" alt="Connecting to DB" width="793" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5slekhpqfy4arllgxlb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5slekhpqfy4arllgxlb.png" alt="Connecting to DB" width="782" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then execute the downloaded scripts: sakila-schema.sql and sakila-data.sql&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lmxqc2jndanksa83f79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lmxqc2jndanksa83f79.png" alt="Inserting sample data" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now we have populated the database:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3w16g6r9okmlxqan5i9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3w16g6r9okmlxqan5i9q.png" alt="Inserting sample data" width="376" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the DMS Tasks&lt;/strong&gt;&lt;br&gt;
Create the Source Endpoint to connect to the database and the Target Endpoint to connect to the S3 bucket:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjyw93rfwx3y4qabbzjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjyw93rfwx3y4qabbzjb.png" alt="Creating endpoint" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgkujhm9gfjcj8m6bkox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgkujhm9gfjcj8m6bkox.png" alt="Creating source endpoint" width="760" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qkeywerjprtq834v95r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qkeywerjprtq834v95r.png" alt="Creating source endpoint" width="745" height="694"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sjkgbq7d6k5odzro50r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sjkgbq7d6k5odzro50r.png" alt="Creating source endpoint" width="745" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And test the connection:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx480y3lbu0lxwii4z326.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx480y3lbu0lxwii4z326.png" alt="Testing connection" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38gk21wv6zaxxqcrbn2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38gk21wv6zaxxqcrbn2b.png" alt="Testing connection" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before creating the Target Endpoint for the S3 bucket, we need to create an IAM role granting access to put and delete objects on the bucket:&lt;/p&gt;

&lt;p&gt;Select DMS as the AWS service:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u6fvq78tv15f1ssthx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u6fvq78tv15f1ssthx6.png" alt="Creating IAM Role" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don't select any policies for now:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gl4iou8gjtifq09966y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gl4iou8gjtifq09966y.png" alt="Creating IAM Role" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose a proper name and create the role:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdv7ev25knraz62as1hzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdv7ev25knraz62as1hzy.png" alt="Creating IAM Role" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nb2nq4yowly822qz6rk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nb2nq4yowly822qz6rk.png" alt="Creating IAM Role" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then choose Add permissions -&amp;gt; Create inline policy:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3puwmzfw1vp0fzgshaw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3puwmzfw1vp0fzgshaw.png" alt="Creating IAM Role" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The policy is as follows (replace the bucket name with the corresponding value):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version" : "2012-10-17",
    "Statement" : [ 
      {
        "Effect" : "Allow",
        "Action" : [
          "s3:PutObject",
          "s3:DeleteObject"
        ],
        "Resource" : "arn:aws:s3:::test-dms-s3-target/*"
      },
      {
        "Effect" : "Allow",
        "Action" : "s3:ListBucket",
        "Resource" : "arn:aws:s3:::test-dms-s3-target"
      } 
    ]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
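&lt;p&gt;As a quick sanity check before pasting the policy, you can verify that it parses and grants exactly the intended actions; the bucket name here is the example value from above:&lt;/p&gt;

```python
import json

# The inline policy from above, with the example bucket name.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::test-dms-s3-target/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::test-dms-s3-target"
    }
  ]
}
""")

# Object-level actions must target objects in the bucket (the /* suffix);
# ListBucket must target the bucket itself.
object_stmt, bucket_stmt = policy["Statement"]
assert object_stmt["Resource"].endswith("/*")
assert bucket_stmt["Action"] == "s3:ListBucket"
```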



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnxqa0bjohxtauy8jjay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnxqa0bjohxtauy8jjay.png" alt="Creating IAM Role" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlk5v9jq2av3avcfvdmx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlk5v9jq2av3avcfvdmx.png" alt="Creating IAM Role" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we can create the Target endpoint:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykv46drluk9kmact35n1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykv46drluk9kmact35n1.png" alt="Creating Target Endpoint" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyf8fiyubxogc6f3aphe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyf8fiyubxogc6f3aphe.png" alt="Creating Target Endpoint" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27vhxj6ke6opeir80p3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27vhxj6ke6opeir80p3n.png" alt="Creating Target Endpoint" width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And test the connection:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faasfk65sy9a40fi8h49m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faasfk65sy9a40fi8h49m.png" alt="Testing Target Endpoint" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create the Database Migration Task we use the previously created resources:&lt;/p&gt;

&lt;p&gt;Task identifier: full-load-and-cdc-auroraserverless&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyehyg2b500srb904vmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyehyg2b500srb904vmp.png" alt="Creating dms task" width="688" height="820"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the task settings in this demo, we can leave the default parameters, except for Enable CloudWatch logs, which will be helpful to validate that everything is OK:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtl0ifjeku0afoqaycwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtl0ifjeku0afoqaycwp.png" alt="Creating dms task" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvdxb5yybh44h8th7lpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvdxb5yybh44h8th7lpw.png" alt="Creating dms task" width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For Table mappings we only need to specify the schema (sakila in this case) and leave the other parameters at their default values:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8l0x2amirz3nvvejmdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8l0x2amirz3nvvejmdd.png" alt="Creating dms task" width="800" height="603"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nmhce171bhfgcu1o6g2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nmhce171bhfgcu1o6g2.png" alt="Creating dms task" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally click on Create task:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnet8xscccksxpi795ey1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnet8xscccksxpi795ey1.png" alt="Creating dms task" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Validating the CDC feature&lt;/strong&gt;&lt;br&gt;
Let's keep the task running for a few minutes until the state is "Load complete, replication ongoing":&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yyk9ntyr70qvbsmzrb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yyk9ntyr70qvbsmzrb7.png" alt="Validating task" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check from the table statistics that the data was loaded successfully:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3tzs8v05h4ljlho4215.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3tzs8v05h4ljlho4215.png" alt="Validating task" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The relevant fields are Full load rows and Total rows:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywjxxkn4i7yodbkwrhia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywjxxkn4i7yodbkwrhia.png" alt="Validating task" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you navigate through the bucket objects, you will notice there is a folder for each table:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr8enmma7fkqb6szkj6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr8enmma7fkqb6szkj6x.png" alt="Validating task" width="800" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output of the first load is structured as follows:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7oqfrny4493utcnsm62p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7oqfrny4493utcnsm62p.png" alt="Validating task" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To see CDC in action, let's insert some rows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSERT INTO `sakila`.`customer` (`customer_id`, `store_id`, `first_name`, `last_name`, `email`, `address_id`, `active`, `create_date`, `last_update`) VALUES ('600', '2', 'JOHN', 'SMITH', 'JOHN.SMITH@sakilacustomer.org', '605', '1', '2006-02-14 22:04:37', '2006-02-15 04:57:20');
INSERT INTO `sakila`.`customer` (`customer_id`, `store_id`, `first_name`, `last_name`, `email`, `address_id`, `active`, `create_date`, `last_update`) VALUES ('601', '2', 'WILL', 'CARTER', 'WILL.CARTER@sakilacustomer.org', '605', '1', '2006-02-14 22:04:37', '2006-02-15 04:57:20');
INSERT INTO `sakila`.`customer` (`customer_id`, `store_id`, `first_name`, `last_name`, `email`, `address_id`, `active`, `create_date`, `last_update`) VALUES ('602', '2', 'DONALD', 'JACKSON', 'DONALD.JACKSON@sakilacustomer.org', '605', '1', '2006-02-14 22:04:37', '2006-02-15 04:57:20');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And also delete some rows from the customer table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DELETE FROM `sakila`.`customer` WHERE (`customer_id` = '9');
DELETE FROM `sakila`.`customer` WHERE (`customer_id` = '15');
DELETE FROM `sakila`.`customer` WHERE (`customer_id` = '4');
DELETE FROM `sakila`.`customer` WHERE (`customer_id` = '433');
DELETE FROM `sakila`.`customer` WHERE (`customer_id` = '599');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
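&lt;p&gt;Each CDC file DMS writes to the bucket is a CSV whose extra first column is the operation flag: I for insert, U for update, D for delete (the full-load files do not carry this column by default). A small sketch of reading such a file; the rows here are illustrative stand-ins for real DMS output:&lt;/p&gt;

```python
import csv
import io

# Illustrative CDC rows: the first column is the DMS operation flag
# (I = insert, D = delete); the remaining columns are the table columns.
sample_cdc_file = io.StringIO(
    "I,600,2,JOHN,SMITH,JOHN.SMITH@sakilacustomer.org\n"
    "I,601,2,WILL,CARTER,WILL.CARTER@sakilacustomer.org\n"
    "D,9,1,FIRST,LAST,EMAIL\n"
)

inserts, deletes = [], []
for row in csv.reader(sample_cdc_file):
    operation, customer_id = row[0], row[1]
    if operation == "I":
        inserts.append(customer_id)
    elif operation == "D":
        deletes.append(customer_id)

print(inserts)  # ['600', '601']
print(deletes)  # ['9']
```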



&lt;p&gt;After a couple of minutes (typically 2 or 3), the changes should be reflected in both the task statistics and the S3 bucket:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpencvicbzwdhd160w6c0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpencvicbzwdhd160w6c0.png" alt="Validating task" width="800" height="241"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf3wgcs52yj4vkfp67gm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf3wgcs52yj4vkfp67gm.png" alt="Validating task" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctt3p3rfgwo4y9gchbmk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctt3p3rfgwo4y9gchbmk.png" alt="Validating task" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
With the new version of Aurora Serverless, there is no limitation on setting up DMS migration tasks with Change Data Capture. This lets you take advantage of the serverless model, and the configuration is very similar to that of the provisioned version.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>dms</category>
      <category>rds</category>
    </item>
    <item>
      <title>Using AWS SDK V2 for Java to read DynamoDB cross-account </title>
      <dc:creator>omarrosadio</dc:creator>
      <pubDate>Fri, 24 Dec 2021 03:03:15 +0000</pubDate>
      <link>https://dev.to/omarrosadio/using-aws-sdk-v2-for-java-to-read-dynamodb-cross-account-n3p</link>
      <guid>https://dev.to/omarrosadio/using-aws-sdk-v2-for-java-to-read-dynamodb-cross-account-n3p</guid>
<description>&lt;p&gt;For medium and large companies, it is usual to have multiple AWS accounts grouped by functionality, environment, business goals, etc.&lt;br&gt;
Sometimes, granting cross-account access to resources can be a problem because of the number of steps involved. In the case of DynamoDB, additional steps appear: unlike services such as KMS, where configuring cross-account access is enough, with DynamoDB you first need to assume a role and then proceed with the API calls.&lt;/p&gt;

&lt;p&gt;In this sample, I will describe step by step how to configure permissions to allow DynamoDB cross-account access and test it using the AWS SDK V2 for Java deployed on EC2 (the same concept could also be applied to AWS Lambda).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario&lt;/strong&gt;&lt;br&gt;
Access from a Java application on Account "B" to a DynamoDB table on Account "A".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Diagram of resources to be created&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vx8ogqvc26lvhe8ysbq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vx8ogqvc26lvhe8ysbq.jpg" alt="Solution Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Considerations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The account that owns the DynamoDB tables will be referenced as "Account A".&lt;/li&gt;
&lt;li&gt;The account in which the Java application will be deployed, and which needs access to the tables from "Account A", will be referenced as "Account B".&lt;/li&gt;
&lt;li&gt;The complete Java code can be found in this &lt;a href="https://github.com/omarrosadio/aws-java-sdk-v2-dynamo-cross-account.git" rel="noopener noreferrer"&gt;repository&lt;/a&gt;. It uses Maven as the dependency manager and the AWS SDK V2 for Java.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Create Role on Account A (DynamoDB resources owner)
&lt;/h2&gt;

&lt;p&gt;On Account A, create a role and choose "Another AWS Account" as the type of trusted entity. Enter the Account ID of Account B:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgqm265mlqc2bp7q1iaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgqm265mlqc2bp7q1iaj.png" alt="Role Creation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the permissions policies, I am attaching the "AmazonDynamoDBReadOnlyAccess" managed policy for demo purposes only; in real usage you should follow the principle of least privilege to limit access.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89lgyqjo9888nfq7ve73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89lgyqjo9888nfq7ve73.png" alt="Policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create the role and choose a suitable name:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgex0tfs91t43wrjv99mu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgex0tfs91t43wrjv99mu.png" alt="Role name"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the Role ARN because it will be used in the next step:&lt;br&gt;
&lt;code&gt;arn:aws:iam::&amp;lt;ID_Account_A&amp;gt;:role/CrossAccountReadDynamo&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Create policy on Account B with permissions to assume role created on step 1
&lt;/h2&gt;

&lt;p&gt;On Account B, create a policy that grants permission to assume the role from Account A. For the policy JSON, use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::&amp;lt;ID_Account_A&amp;gt;:role/CrossAccountReadDynamo"
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;And choose a name for the policy:&lt;br&gt;
&lt;code&gt;assume_role_account_A&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6vqcygmp88082ry0d3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6vqcygmp88082ry0d3i.png" alt="Policy name"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Create role on Account B and attach the policy created on step 2
&lt;/h2&gt;

&lt;p&gt;On Account B, create a role and select EC2 as the type of trusted entity:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtfetbzxbc52emjkk5y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtfetbzxbc52emjkk5y0.png" alt="EC2 role"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Attach the recently created policy:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq01illdvoaqfi5dmjuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq01illdvoaqfi5dmjuw.png" alt="Policy attachment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name: EC2_AssumeRole_Account_A&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2dkfqwgv1itfhhowbq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2dkfqwgv1itfhhowbq4.png" alt="Role name"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Test cross account access through Java Application
&lt;/h2&gt;

&lt;p&gt;The Java application will be deployed on an EC2 instance in Account B. This EC2 instance needs the role created previously attached to it.&lt;br&gt;
The first step in the code is to make an API call to the Security Token Service (AWS STS) to generate the temporary credentials needed to assume the role from Account A.&lt;br&gt;
Modify the variables accordingly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;roleARN: replace with the Role ARN created in step 1&lt;/li&gt;
&lt;li&gt;roleSessionName: an identifier for the temporary session&lt;/li&gt;
&lt;li&gt;region: the region where the DynamoDB resources are deployed&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Change these variables according to the role created */
String roleARN = "arn:aws:iam::&amp;lt;ACCOUNT_A&amp;gt;:role/&amp;lt;ROLE_NAME&amp;gt;";
String roleSessionName = "DynamoCrossAccount";
Region region = Region.US_EAST_1;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
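&lt;p&gt;The &lt;code&gt;myCreds&lt;/code&gt; object used in the next snippet is the result of that STS call. A minimal sketch of it, based on the AWS SDK V2 STS client (the &lt;code&gt;stsClient&lt;/code&gt; variable name is illustrative; see the full program in the repository):&lt;/p&gt;

```java
import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;
import software.amazon.awssdk.services.sts.model.Credentials;

// Build an STS client in the target region (uses the instance role of the EC2 host)
StsClient stsClient = StsClient.builder().region(region).build();

// Request temporary credentials for the role in Account A
AssumeRoleRequest roleRequest = AssumeRoleRequest.builder()
        .roleArn(roleARN)
        .roleSessionName(roleSessionName)
        .build();

// The returned credentials (access key, secret key, session token) are temporary
Credentials myCreds = stsClient.assumeRole(roleRequest).credentials();
```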

&lt;p&gt;Once the temporary credentials are generated, you need to set the AccessKey, SecretAccessKey, and SessionToken:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AwsSessionCredentials awsCreds = AwsSessionCredentials.create(myCreds.accessKeyId(), myCreds.secretAccessKey(),
        myCreds.sessionToken());
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
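&lt;p&gt;These session credentials then have to be wired into the DynamoDB client itself. A minimal sketch, assuming the client variable is named &lt;code&gt;ddb&lt;/code&gt; as in the listing below, using the SDK's &lt;code&gt;StaticCredentialsProvider&lt;/code&gt;:&lt;/p&gt;

```java
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

// Use the temporary session credentials instead of the instance profile,
// so calls made through this client run as the role from Account A
DynamoDbClient ddb = DynamoDbClient.builder()
        .region(region)
        .credentialsProvider(StaticCredentialsProvider.create(awsCreds))
        .build();
```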

&lt;p&gt;With the new credentials configured, subsequent API calls will use the role from Account A, so we can now perform operations on the DynamoDB tables of Account A. To test it, I will list all the existing tables:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

// ddb is the DynamoDbClient configured with the temporary session credentials
boolean moreTables = true;
String lastName = null;

while (moreTables) {
    ListTablesResponse response;
    if (lastName == null) {
        // First page: request up to 10 table names
        ListTablesRequest request = ListTablesRequest.builder().limit(10).build();
        response = ddb.listTables(request);
    } else {
        // Subsequent pages: continue from the last table name seen
        ListTablesRequest request = ListTablesRequest.builder().exclusiveStartTableName(lastName).build();
        response = ddb.listTables(request);
    }

    List&amp;lt;String&amp;gt; tableNames = response.tableNames();

    if (tableNames.size() &amp;gt; 0) {
        for (String curName : tableNames) {
            System.out.format("* %s\n", curName);
        }
    } else {
        System.out.println("No tables found!");
        System.exit(0);
    }

    // A null lastEvaluatedTableName means there are no more pages
    lastName = response.lastEvaluatedTableName();
    if (lastName == null) {
        moreTables = false;
    }
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And the expected output should list the resources from the other account, achieving cross-account access:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt5ze6a020hd346kghal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt5ze6a020hd346kghal.png" alt="Output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again, the complete code is on GitHub and is based on the official AWS documentation, specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/code-samples/latest/catalog/javav2-sts-src-main-java-com-example-sts-AssumeRole.java.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/code-samples/latest/catalog/javav2-sts-src-main-java-com-example-sts-AssumeRole.java.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-dynamodb-tables.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-dynamodb-tables.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>java</category>
      <category>dynamodb</category>
      <category>cloudcomputing</category>
    </item>
  </channel>
</rss>
