Testing Performance and Bulk Data Population: Console-based DynamoDB Read and Write with S3 Import

Oloruntobi Olurombi

Introduction

DynamoDB, Amazon's highly scalable NoSQL database, offers remarkable performance and flexibility for handling massive amounts of data. In this article, we'll explore how to leverage DynamoDB's power through the console to read, write, and populate a large dataset using an S3 import. We'll walk through the process step by step, from generating a sample CSV file to verifying the successful population of our DynamoDB table.

Generating and Preparing Data

Start by creating a CSV file containing mock data. You can utilise online tools like TableConvert to build the file structure, or script the file (see the sketch after the sample below). Our CSV includes the fields id, first_name, last_name, home_airport, subscriber, ip_address, and last_flight.

Here's an example of the data within the CSV:

id,first_name,last_name,home_airport,subscriber,ip_address,last_flight
1,John,Doe,JFK,Yes,192.168.1.1,2023-08-14
2,Jane,Smith,LAX,No,10.0.0.1,2023-08-10
3,Michael,Johnson,SFO,Yes,172.16.0.1,2023-08-12
4,Emily,Williams,ORD,No,192.168.0.1,2023-08-09
5,David,Brown,DEN,Yes,192.168.2.1,2023-08-11
6,Susan,Miller,ATL,Yes,192.168.3.1,2023-08-13
7,Robert,Jones,DFW,No,10.0.0.2,2023-08-08
8,Linda,Davis,MIA,Yes,172.16.1.1,2023-08-06
9,William,Anderson,LAS,No,192.168.0.2,2023-08-07
10,Amy,Martin,SEA,Yes,192.168.4.1,2023-08-05
11,Mark,Thompson,BOS,Yes,192.168.5.1,2023-08-04
12,Karen,White,PDX,No,10.0.0.3,2023-08-03
13,James,Clark,PHX,Yes,172.16.2.1,2023-08-02
14,Nancy,Lee,IAH,No,192.168.0.3,2023-08-01
15,Charles,Hall,SAN,Yes,192.168.6.1,2023-07-31

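If you'd rather generate the file programmatically, here's a minimal Python sketch; the filename flight_data.csv is just an example, and you can extend ROWS with as much mock data as your test needs:

import csv

# Hypothetical output filename; the upload step just needs to match it.
FILENAME = "flight_data.csv"

HEADER = ["id", "first_name", "last_name", "home_airport",
          "subscriber", "ip_address", "last_flight"]

ROWS = [
    ["1", "John", "Doe", "JFK", "Yes", "192.168.1.1", "2023-08-14"],
    ["2", "Jane", "Smith", "LAX", "No", "10.0.0.1", "2023-08-10"],
    ["3", "Michael", "Johnson", "SFO", "Yes", "172.16.0.1", "2023-08-12"],
]

with open(FILENAME, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(HEADER)   # the first line becomes the CSV header DynamoDB reads
    writer.writerows(ROWS)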

Uploading CSV Data to S3

Next, we need to create an S3 bucket to house our CSV file.

Follow these steps:

a. Log in to the AWS Management Console and navigate to the S3 service.

b. Click on "Create bucket."

c. Provide a unique name for the bucket and choose the "US East (N. Virginia)" region (us-east-1).

d. Accept the default settings and create the bucket.

e. Upload the previously generated CSV file to the newly created S3 bucket (a scripted alternative is sketched below).

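If you prefer to script these two steps, here's a minimal boto3 sketch; the bucket name my-dynamodb-import-bucket is a hypothetical placeholder (bucket names must be globally unique):

import boto3

# Hypothetical names; replace with your own bucket and file.
BUCKET = "my-dynamodb-import-bucket"
FILENAME = "flight_data.csv"

s3 = boto3.client("s3", region_name="us-east-1")

# In us-east-1, create_bucket takes no CreateBucketConfiguration argument.
s3.create_bucket(Bucket=BUCKET)

# Upload the CSV; the object key becomes the S3 source for the import.
s3.upload_file(FILENAME, BUCKET, FILENAME)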

Importing Data to DynamoDB

Now, let's import the data from the CSV file in the S3 bucket to a DynamoDB table.

Here's how:

a. Access the DynamoDB console.

b. From the DynamoDB home page, click on the "Import from S3" option.

c. Configure import settings, such as source S3 URL, S3 bucket owner (choose "This AWS account"), import file compression (No Compression), import file format (CSV), CSV header (use the first line of the source file), and CSV delimiter character (comma).

d. Proceed to specify table details, including the table name, partition key (id), and optionally, a sort key (last_flight).

e. Configure table settings using default values.

f. Review your settings and click "Import" to initiate the data import process (an equivalent API call is sketched below).

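The console is driving DynamoDB's import-from-S3 feature, which is also exposed through the API. Here's a minimal boto3 sketch that mirrors the settings above, assuming the hypothetical bucket and file names from earlier and an on-demand table named flights:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical bucket, key, and table names; mirror your console choices.
response = dynamodb.import_table(
    S3BucketSource={
        "S3Bucket": "my-dynamodb-import-bucket",
        "S3KeyPrefix": "flight_data.csv",
    },
    InputFormat="CSV",
    InputCompressionType="NONE",
    # Header list is taken from the first line of the file by default.
    InputFormatOptions={"Csv": {"Delimiter": ","}},
    TableCreationParameters={
        "TableName": "flights",
        # CSV imports store values as strings, so the keys are typed "S".
        "AttributeDefinitions": [
            {"AttributeName": "id", "AttributeType": "S"},
            {"AttributeName": "last_flight", "AttributeType": "S"},
        ],
        "KeySchema": [
            {"AttributeName": "id", "KeyType": "HASH"},
            {"AttributeName": "last_flight", "KeyType": "RANGE"},
        ],
        "BillingMode": "PAY_PER_REQUEST",
    },
)
print(response["ImportTableDescription"]["ImportStatus"])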

Verifying Data Population

To confirm that the data population was successful, follow these steps:

a. Return to the DynamoDB console and navigate to the "Tables" section.

b. Select the table you imported data into.

c. Choose "Explore table items" to view the populated data.

d. You can now select individual items to explore the data in JSON format (a programmatic spot-check is sketched below).

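You can also spot-check the table from code. A small boto3 sketch, again assuming the hypothetical flights table with id as partition key and last_flight as sort key:

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("flights")  # hypothetical table name from the import step

# Fetch one item by its full primary key; CSV imports store values as strings.
response = table.get_item(Key={"id": "1", "last_flight": "2023-08-14"})
print(response.get("Item"))

# Scan a handful of items to confirm the bulk load landed.
page = table.scan(Limit=5)
print(f"Retrieved {page['Count']} items:", page["Items"])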

Conclusion

In this article, we explored the process of testing performance and populating a significant number of records in Amazon DynamoDB using the AWS Management Console. By creating a CSV file, uploading it to an S3 bucket, and then importing the data into a DynamoDB table, we demonstrated a streamlined approach to handling data storage and retrieval. DynamoDB's seamless integration with other AWS services makes it a powerful choice for managing large-scale datasets efficiently. As you continue to explore DynamoDB's capabilities, you'll discover its potential to enhance the scalability and performance of your applications.
