<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sachith Fernando</title>
    <description>The latest articles on DEV Community by Sachith Fernando (@sachithmayantha).</description>
    <link>https://dev.to/sachithmayantha</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F383589%2F941f9199-735f-4a10-ab03-68355fece65f.png</url>
      <title>DEV Community: Sachith Fernando</title>
      <link>https://dev.to/sachithmayantha</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sachithmayantha"/>
    <language>en</language>
    <item>
      <title>My 12-Month Sprint to AWS Community Builder 2025</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Wed, 30 Apr 2025 03:46:58 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/badges-blogs-belonging-my-12-month-sprint-to-aws-community-builder-2025-524l</link>
      <guid>https://dev.to/sachithmayantha/badges-blogs-belonging-my-12-month-sprint-to-aws-community-builder-2025-524l</guid>
      <description>&lt;p&gt;Twelve months ago I hit &lt;strong&gt;Publish&lt;/strong&gt; on my very first AWS blog post. Today I’m writing this as a &lt;strong&gt;AWS Community Builder&lt;/strong&gt;, a member of the &lt;strong&gt;AWS Emerging Talent Community&lt;/strong&gt;. Here’s everything that happened in between and what I learned along the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  April 2024 – “Let’s try this blogging thing”
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Date&lt;/th&gt;
&lt;th&gt;Article&lt;/th&gt;
&lt;th&gt;Views&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;14 Apr 2024&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://dev.to/sachithmayantha/serverless-computing-and-containers-in-aws-43fg"&gt;&lt;em&gt;Serverless Computing and Containers in AWS&lt;/em&gt;&lt;/a&gt; – a side-by-side comparison of Lambda &amp;amp; Fargate&lt;/td&gt;
&lt;td&gt;47&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I’d spent weeks tinkering with Lambda functions for a university project, and friends kept asking, &lt;em&gt;“When would you ever choose containers instead?”&lt;/em&gt; Turning that answer into an article felt terrifying… right up until the moment I pressed &lt;strong&gt;Publish&lt;/strong&gt;. The post didn’t break the internet, but it did break my fear of writing about AWS.&lt;/p&gt;




&lt;h3&gt;
  
  
  Highlight posts
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Date&lt;/th&gt;
&lt;th&gt;Article&lt;/th&gt;
&lt;th&gt;Views&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;16 Jun 2024&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://dev.to/sachithmayantha/amazon-rds-multi-az-deployments-vs-read-replica-1ki3"&gt;&lt;em&gt;Amazon RDS Multi-AZ Deployments vs Read Replica&lt;/em&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;376&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;8 Dec 2024&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://dev.to/sachithmayantha/connecting-aws-rds-to-spring-boot-387o"&gt;&lt;em&gt;Connecting AWS RDS to Spring Boot&lt;/em&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1,695&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Metric snapshot:&lt;/strong&gt; 13 AWS-focused posts · 4,076 total views · 145 reactions&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The December post took off after someone shared it in an enterprise Slack channel—proof that &lt;em&gt;depth&lt;/em&gt; plus &lt;em&gt;timing&lt;/em&gt; can out-perform follower counts.&lt;/p&gt;




&lt;h2&gt;
  
  
  Badges that opened doors (Jun → Nov 2024)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Badge&lt;/th&gt;
&lt;th&gt;Date earned&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Getting Started with &lt;strong&gt;AWS Database&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Jun 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Getting Started with &lt;strong&gt;AWS Storage&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Jul 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Getting Started with &lt;strong&gt;AWS Compute&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Nov 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In &lt;strong&gt;November 2024&lt;/strong&gt; an email arrived:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“As an AWS Educate badge earner you qualify for the &lt;strong&gt;AWS Emerging Talent Community&lt;/strong&gt;…”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Free certification preparation, career guidance, and points for completing content that I can redeem for rewards in the ETC!&lt;/p&gt;




&lt;h2&gt;
  
  
  The email that changed everything (5 Mar 2025)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3k6rcj6cg5r2ytc7dqua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3k6rcj6cg5r2ytc7dqua.png" alt="AWS Community Builder acceptance email" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At 12:39 PM the subject line lit up:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“🚀 Welcome to the AWS Community Builders Program”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I re-read it three times before noticing my category: &lt;strong&gt;Data&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
A few days later the welcome pack landed on my doorstep:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsklit8urz4krbg7e61r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsklit8urz4krbg7e61r.jpg" alt="Photo of Swag" width="800" height="999"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why &lt;em&gt;diversity&lt;/em&gt; is more than a checkbox
&lt;/h2&gt;

&lt;p&gt;I’m originally from Sri Lanka, and moving to Melbourne, Australia, for my master’s forced me to adapt to new customs, time zones, and accents. That experience taught me how to explain tech ideas so people from any background can follow along. AWS’s selection criteria explicitly call out diversity, and it’s more than lip service: the Slack channels hum with accents from Lagos to Lima to Lahore. &lt;br&gt;
&lt;strong&gt;Unique backgrounds aren’t just tolerated, they are accelerators&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Balancing code, class &amp;amp; content
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🎓 &lt;strong&gt;Studies:&lt;/strong&gt; full-time Master of Information Technology (Coursework)
&lt;/li&gt;
&lt;li&gt;💼 &lt;strong&gt;Part-time job:&lt;/strong&gt; IT Support Technician (kept the bills paid and my troubleshooting sharp)
&lt;/li&gt;
&lt;li&gt;⏳ &lt;strong&gt;Hardest part:&lt;/strong&gt; Time management, learning to schedule writing sprints between lectures and night shifts. My trick: 25-minute Pomodoros + shutting every tab except the markdown file.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Takeaways for &lt;em&gt;you&lt;/em&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Ship something small.&lt;/strong&gt; My first post had 47 views—and that was enough.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stack learning pathways.&lt;/strong&gt; Badge → blog → community invitation → bigger badge.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Find a feedback loop.&lt;/strong&gt; AWS Experts gave me real eyes on early drafts.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tell your unique story.&lt;/strong&gt; Backgrounds that feel “non-traditional” in tech are actually superpowers.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Ready to start your own journey?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Earn a foundational badge on &lt;a href="https://aws.amazon.com/education/awseducate/" rel="noopener noreferrer"&gt;AWS Educate&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;Publish a &lt;em&gt;working-notes&lt;/em&gt; style article—perfection later, URL today.
&lt;/li&gt;
&lt;li&gt;Join a local &lt;strong&gt;AWS User Group&lt;/strong&gt; (or start one!).
&lt;/li&gt;
&lt;li&gt;Apply for the next round of &lt;a href="https://aws.amazon.com/developer/community/community-builders/" rel="noopener noreferrer"&gt;AWS Community Builders 2026&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Questions, feedback, or just want to say hi? Connect with me on &lt;a href="https://www.linkedin.com/in/sachith-mayantha-fernando/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thanks for reading, and see you in the cloud!&lt;/strong&gt; ☁️👋&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>news</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Seamless File Storage: Integrating AWS S3 with Spring Boot</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Sat, 08 Feb 2025 04:08:40 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/seamless-file-storage-integrating-aws-s3-with-spring-boot-3045</link>
      <guid>https://dev.to/sachithmayantha/seamless-file-storage-integrating-aws-s3-with-spring-boot-3045</guid>
      <description>&lt;p&gt;Hello Developers! 🚀&lt;/p&gt;

&lt;p&gt;Today, I’m going to talk about how to integrate Amazon S3 with Spring Boot for efficient and secure file storage. Whether you're building a cloud-based application, handling user-generated content, or just looking for a scalable way to store files, AWS S3 is a reliable and cost-effective solution. In this article, I’ll guide you through setting up an S3 bucket, configuring access policies, integrating with Spring Boot, and handling file uploads/downloads using the AWS SDK. By the end, you’ll have a fully functional backend that interacts seamlessly with S3. Let’s get started!&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS S3 Bucket Creation
&lt;/h3&gt;

&lt;h4&gt;
  
  
  General Configuration
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6mr0w2zxy5iwa3c0cbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6mr0w2zxy5iwa3c0cbf.png" alt="General Configuration" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ AWS Region&lt;br&gt;
Region: US East (N. Virginia) (us-east-1)&lt;br&gt;
This means the bucket will be created in the US East (N. Virginia) data center.&lt;br&gt;
Choose a region closest to your users to reduce latency and improve performance. Additionally, regions with multiple Availability Zones provide better redundancy and reliability. For most general-purpose applications, us-east-1 (N. Virginia) is a popular choice due to its low cost, high availability, and broad service support. &lt;/p&gt;

&lt;p&gt;2️⃣ Bucket Type&lt;br&gt;
General Purpose (Selected)&lt;br&gt;
Recommended for most workloads.&lt;br&gt;
Stores data across multiple Availability Zones for redundancy.&lt;/p&gt;

&lt;p&gt;Directory (Not Selected)&lt;br&gt;
Used for low-latency storage.&lt;br&gt;
Supports S3 Express One Zone for faster access but no redundancy across multiple zones.&lt;/p&gt;

&lt;p&gt;✅ Best Choice: General Purpose (default option, more reliable).&lt;/p&gt;

&lt;p&gt;3️⃣ Bucket Name&lt;br&gt;
Entered Name: spring-aws-s3&lt;br&gt;
The bucket name must be globally unique across all AWS accounts.&lt;br&gt;
AWS naming rules apply (lowercase, no spaces, no special characters except - and .).&lt;/p&gt;

&lt;p&gt;4️⃣ Copy Settings from Existing Bucket (Optional)&lt;br&gt;
Allows users to duplicate settings from an existing bucket.&lt;br&gt;
"Choose Bucket" button enables selecting an existing bucket.&lt;br&gt;
Not used in this case (default settings will be applied).&lt;/p&gt;

&lt;h4&gt;
  
  
  Configure Block Public Access Settings
&lt;/h4&gt;

&lt;p&gt;Configure the Block Public Access settings to secure your bucket. To allow public access for the Spring Boot application's REST API, customize these settings as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dfz4aajws2jx764nyk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dfz4aajws2jx764nyk1.png" alt="Allow Public Access" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5️⃣ Allowing Public Access for REST API&lt;/p&gt;

&lt;p&gt;Uncheck Block all public access and the related sub-options.&lt;/p&gt;

&lt;p&gt;This setting will enable public access to your bucket, which is required for accessing files via your Spring Boot REST API.&lt;/p&gt;

&lt;p&gt;Acknowledgement: Ensure you check the acknowledgment box to confirm the settings.&lt;/p&gt;

&lt;p&gt;Warning: Allowing public access might expose your bucket contents. Use this option cautiously and consider adding restrictive bucket policies to limit access to specific API calls.&lt;/p&gt;
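&lt;p&gt;As an illustration only (the bucket name comes from this walkthrough; adjust it to your own), a bucket policy that restricts public access to read-only &lt;code&gt;s3:GetObject&lt;/code&gt; looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::spring-aws-s3/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This lets anyone download objects, while listing, uploads, and deletes stay restricted to your IAM credentials.&lt;/p&gt;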

&lt;p&gt;6️⃣ After configuring the settings, click on Create Bucket to finalize the process. Your bucket will now be ready to use!&lt;/p&gt;

&lt;h3&gt;
  
  
  Spring Boot Development
&lt;/h3&gt;

&lt;h4&gt;
  
  
  POM File Dependencies
&lt;/h4&gt;

&lt;p&gt;To enable AWS S3 integration in your Spring Boot application, you need to add the required dependencies to your pom.xml file. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85dtg14w5x9d4sk8bxp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85dtg14w5x9d4sk8bxp2.png" alt="POM File" width="709" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: Ensure that you use the correct versions of dependencies to avoid compatibility issues.&lt;/p&gt;
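&lt;p&gt;For reference, the key &lt;code&gt;pom.xml&lt;/code&gt; entries typically look something like the sketch below; the version number is only an example, so check the screenshot above and Maven Central for the one you actually need:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;!-- AWS SDK for Java v2: S3 module --&amp;gt;
&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;software.amazon.awssdk&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;s3&amp;lt;/artifactId&amp;gt;
  &amp;lt;version&amp;gt;2.25.0&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;

&amp;lt;!-- Spring Boot starter for the REST endpoints --&amp;gt;
&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;spring-boot-starter-web&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;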

&lt;h4&gt;
  
  
  Configuration for AWS Credentials
&lt;/h4&gt;

&lt;p&gt;Add the AWS credentials and region configuration in the application.properties file&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexl5mkh750hwn0hzfk96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexl5mkh750hwn0hzfk96.png" alt="Config" width="609" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS S3 Client Configuration
&lt;/h4&gt;

&lt;p&gt;Create a configuration class to set up the S3Client for interacting with the S3 bucket. Here’s the code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69h0xkki014ul1oaeb8u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69h0xkki014ul1oaeb8u.png" alt="Bucket Config" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more code details, you can refer to the public repository on GitHub: &lt;a href="https://github.com/SachithMayantha/aws-s3-spring" rel="noopener noreferrer"&gt;https://github.com/SachithMayantha/aws-s3-spring&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing REST API with Postman
&lt;/h3&gt;

&lt;p&gt;Once your Spring Boot application is running, you can test the file upload functionality using Postman. Configure your request as follows:&lt;/p&gt;

&lt;p&gt;Method: POST&lt;/p&gt;

&lt;p&gt;URL: &lt;a href="http://localhost:8080/api/files/upload" rel="noopener noreferrer"&gt;http://localhost:8080/api/files/upload&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Body: Select form-data and add a key file with the value set to the file you want to upload (e.g., Profile.pdf).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t5xo58vij7i0m76rmhh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t5xo58vij7i0m76rmhh.png" alt="postman" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the same way, you can test the other endpoints as well. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgz60h5t3jlbbevgn1f5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgz60h5t3jlbbevgn1f5.gif" alt="enjoy" width="362" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>springboot</category>
      <category>project</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Deploying a React.js App on AWS Amplify in 3 Minutes</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Wed, 08 Jan 2025 14:16:20 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/deploying-a-reactjs-app-on-aws-amplify-in-3-minutes-18el</link>
      <guid>https://dev.to/sachithmayantha/deploying-a-reactjs-app-on-aws-amplify-in-3-minutes-18el</guid>
      <description>&lt;p&gt;Deploying web applications quickly and efficiently is essential in today's fast-paced development environment. AWS Amplify, a cloud-based platform by Amazon, makes deploying and managing full-stack applications, including straightforward.&lt;/p&gt;

&lt;p&gt;In this guide, I’ll show you how to deploy a React.js app from a GitHub repository on AWS Amplify in just 3 minutes. Let’s dive in and explore AWS Amplify with React.js!&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create a React App Locally and Push it to GitHub
&lt;/h3&gt;

&lt;p&gt;Before deploying your React.js application on AWS Amplify, the first step is to set up your app locally and upload it to a GitHub repository. This ensures that your application’s code is version-controlled and accessible for deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Log In to AWS Console and Navigate to AWS Amplify
&lt;/h3&gt;

&lt;p&gt;After setting up your React app on GitHub, the next step is to deploy it using AWS Amplify. First, log in to the &lt;strong&gt;AWS Management Console&lt;/strong&gt; with your credentials, go to &lt;strong&gt;AWS Amplify&lt;/strong&gt;, and click &lt;strong&gt;Create new app&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Connect Your GitHub Repository to AWS Amplify
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4scgkobd9c1fhc0o9ua0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4scgkobd9c1fhc0o9ua0.png" alt="step 3" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it's time to connect your GitHub repository and configure the deployment.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. &lt;strong&gt;Choose Your Source Code Provider&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;On the AWS Amplify dashboard, you'll see the "Start building with Amplify" page, as shown in the screenshot.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;GitHub&lt;/strong&gt; as your source code provider and select Next (because I already uploaded my source code into GitHub).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpwnf4vvvmya2dp9fye3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpwnf4vvvmya2dp9fye3.png" alt="github authentication" width="710" height="914"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2. &lt;strong&gt;Authenticate Your GitHub Account&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS Amplify will prompt you to authenticate your GitHub account.&lt;/li&gt;
&lt;li&gt;You will see the Install &amp;amp; Authorize page, where you can manage which repositories AWS Amplify can access.&lt;/li&gt;
&lt;li&gt;You can choose between two options:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;All Repositories&lt;/strong&gt;: Grants AWS Amplify access to all current and future repositories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Only One Repository&lt;/strong&gt;: Provides access to specific repositories only.&lt;/p&gt;

&lt;p&gt;If you select Only One Repository, choose your React app repository (e.g., recipe-app) as shown.&lt;/p&gt;

&lt;p&gt;After reviewing the permissions, click &lt;strong&gt;Install &amp;amp; Authorize&lt;/strong&gt; to grant access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Add Repository and Branch
&lt;/h3&gt;

&lt;p&gt;Once GitHub is authorized, the next step in AWS Amplify is to select the repository and branch that you want to deploy.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Select Your Repository&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;strong&gt;Add repository and branch&lt;/strong&gt; section, choose your GitHub repository from the dropdown menu. In this case, the repository &lt;code&gt;SachithMayantha/recipe-app&lt;/code&gt; is selected.&lt;/li&gt;
&lt;li&gt;If your repository doesn't appear:

&lt;ul&gt;
&lt;li&gt;Ensure the AWS Amplify GitHub App has permissions to access your repository.&lt;/li&gt;
&lt;li&gt;Push a new commit to your repository and click the &lt;strong&gt;Refresh&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Select Your Branch&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;After selecting your repository, choose the branch you want to deploy (e.g., &lt;code&gt;aws-amplify&lt;/code&gt; ).&lt;/li&gt;
&lt;li&gt;Ensure that the branch contains the latest version of your app's code.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Monorepo Configuration (Optional)&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;If your repository is a monorepo (a single repository containing multiple projects), check the &lt;strong&gt;My app is a monorepo&lt;/strong&gt; option.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you’ve selected the repository and branch, click &lt;strong&gt;Next&lt;/strong&gt; to configure your app’s build settings.&lt;/p&gt;

&lt;p&gt;With the repository and branch set, you’re now ready to configure build settings for your React app!&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Configure App Settings and Build Settings
&lt;/h3&gt;

&lt;p&gt;After selecting the repository and branch, the next step is to configure your app settings and verify the build settings. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxladial018kgaep19hai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxladial018kgaep19hai.png" alt="App Settings" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. App Name&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;App name&lt;/strong&gt; field will be pre-filled based on your repository name (e.g., &lt;code&gt;recipe-app&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;You can leave it as it is or change it to a custom name for better identification.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Auto-Detected Framework&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS Amplify will automatically detect the framework used in your project. In this case, it recognizes that the app is built using &lt;strong&gt;React&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This ensures the correct build settings are applied for your framework.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Build Settings&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Verify the &lt;strong&gt;Frontend build command&lt;/strong&gt; and &lt;strong&gt;Build output directory&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend build command&lt;/strong&gt;: This is the command used to build your app. For React apps, it’s &lt;code&gt;npm run build&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build output directory&lt;/strong&gt;: This is the folder where the build output is generated. For React apps, it’s usually &lt;code&gt;build&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
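&lt;p&gt;These settings map onto Amplify’s &lt;code&gt;amplify.yml&lt;/code&gt; build specification. A minimal sketch for a Create React App project (adjust commands and paths to your own setup) could be:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
&lt;/code&gt;&lt;/pre&gt;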

&lt;h4&gt;
  
  
  &lt;strong&gt;4. Advanced Build Settings (Optional)&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;If your project requires environment variables or custom build scripts, you can add them under the &lt;strong&gt;Advanced settings&lt;/strong&gt; section.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;5. Save and Continue&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Once everything is configured and verified, click &lt;strong&gt;Next&lt;/strong&gt; to proceed to the review and deployment step.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the app settings and build configuration complete, you’re ready to review your deployment settings and launch your React app! Let’s move on to the final step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Review and Deploy Your React App
&lt;/h3&gt;

&lt;p&gt;Now that you’ve configured your app settings and build steps, it’s time to review your deployment details and launch your app. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkore14kjprsi2esrdnk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkore14kjprsi2esrdnk.png" alt="review" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Review Deployment Details&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;On the &lt;strong&gt;Review&lt;/strong&gt; page, check the following:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repository Details&lt;/strong&gt;: Verify the GitHub repository name, branch (e.g., &lt;code&gt;aws-amplify&lt;/code&gt;), and monorepo root path (if applicable).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App Settings&lt;/strong&gt;: Confirm the app name, framework (React), and build configuration (e.g., &lt;code&gt;npm run build&lt;/code&gt; with the &lt;code&gt;build&lt;/code&gt; directory).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;If you need to make changes, click the &lt;strong&gt;Edit&lt;/strong&gt; button next to the respective section.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vi8xzxhs32zrstyuogf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vi8xzxhs32zrstyuogf.png" alt="start deployment" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Start Deployment&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Once you’ve reviewed the settings, click the &lt;strong&gt;Deploy&lt;/strong&gt; button to start the deployment process.&lt;/li&gt;
&lt;li&gt;AWS Amplify will begin building and deploying your app, as shown on the "Deploying app" screen.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Monitor Deployment Progress&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Amplify provides real-time updates during the build and deployment phases. You can see:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build Duration&lt;/strong&gt;: The time it takes to build your app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy Duration&lt;/strong&gt;: The time it takes to deploy your app to the hosting environment.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv80lkur8k0u9japrbl8l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv80lkur8k0u9japrbl8l.png" alt="Deployed" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4. Access Your Deployed App&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;After the deployment is complete, you will see a &lt;strong&gt;Deployed&lt;/strong&gt; status.&lt;/li&gt;
&lt;li&gt;A unique domain URL will be provided. Click on this link to view your live React app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Congratulations! Your React app is now live on AWS Amplify, and you’ve successfully deployed it in just a few simple steps. 🎉&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmicf0mvqpzdbmqfh42f0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmicf0mvqpzdbmqfh42f0.gif" alt="congratulations" width="455" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>react</category>
      <category>beginners</category>
      <category>project</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Wed, 08 Jan 2025 05:56:51 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/-2bp0</link>
      <guid>https://dev.to/sachithmayantha/-2bp0</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/sachithmayantha" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F383589%2F941f9199-735f-4a10-ab03-68355fece65f.png" alt="sachithmayantha"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/sachithmayantha/connecting-aws-rds-to-spring-boot-387o" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Connecting AWS RDS to Spring Boot&lt;/h2&gt;
      &lt;h3&gt;Sachith Fernando ・ Dec 8 '24&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#mysql&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#springboot&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#project&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>aws</category>
      <category>springboot</category>
      <category>database</category>
      <category>backend</category>
    </item>
    <item>
      <title>Connecting AWS RDS to Spring Boot</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Sun, 08 Dec 2024 13:21:53 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/connecting-aws-rds-to-spring-boot-387o</link>
      <guid>https://dev.to/sachithmayantha/connecting-aws-rds-to-spring-boot-387o</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In this article, I will walk through the process of configuring a security group, setting up an AWS RDS MySQL instance, connecting it to your Spring Boot application, and testing the connection.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Create a New Security Group
&lt;/h4&gt;

&lt;p&gt;Before setting up the RDS instance, you need to ensure that the instance is accessible. You can do this by configuring an AWS &lt;strong&gt;Security Group&lt;/strong&gt;. The security group acts as a virtual firewall to control inbound and outbound traffic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Access AWS Console&lt;/strong&gt;: Go to the &lt;strong&gt;EC2 Dashboard&lt;/strong&gt; &amp;gt; &lt;strong&gt;Security Groups&lt;/strong&gt; &amp;gt; &lt;strong&gt;Create Security Group&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inbound Rules&lt;/strong&gt;: 

&lt;ul&gt;
&lt;li&gt;Choose the type as &lt;code&gt;MYSQL/Aurora&lt;/code&gt; (because I'm planning to use MySQL), which opens port 3306 (the default MySQL port).&lt;/li&gt;
&lt;li&gt;Set the &lt;strong&gt;Source&lt;/strong&gt; to &lt;code&gt;My IP&lt;/code&gt;; it will automatically use your current IP address to connect to the RDS instance.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Outbound Rules&lt;/strong&gt;: 

&lt;ul&gt;
&lt;li&gt;Set to allow all traffic to ensure that the instance can communicate freely with other resources.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhwetdchmrydvtgu13le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhwetdchmrydvtgu13le.png" alt="Security Group Rules" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgt31xbcdhruqwgfvdg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgt31xbcdhruqwgfvdg7.png" alt="Security Group Setup Success" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the security group is set up, move on to configuring the &lt;strong&gt;RDS instance&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Step 2: Configure RDS Instance&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choose the Database Engine&lt;/strong&gt;: In this case, select &lt;strong&gt;MySQL&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi8rdk8fq9k9ec0cpmlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi8rdk8fq9k9ec0cpmlc.png" alt="Instance Engine" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choose a Template&lt;/strong&gt;: For simplicity, I used the &lt;strong&gt;Free tier&lt;/strong&gt;, since I'm setting up a demo application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwu68gxninsvnrhput5vm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwu68gxninsvnrhput5vm.png" alt="Instance Template" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Set Up DB Instance&lt;/strong&gt;: 

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Single DB instance&lt;/strong&gt; (if you don’t require high availability).&lt;/li&gt;
&lt;li&gt;Define &lt;strong&gt;DB Instance Identifier&lt;/strong&gt;, &lt;strong&gt;Master Username&lt;/strong&gt; (I used default name "admin"), and &lt;strong&gt;Password&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1hdfcnm12qrf8apiokk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1hdfcnm12qrf8apiokk.png" alt="Username Password" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choose a DB Instance class&lt;/strong&gt;: 

&lt;ul&gt;
&lt;li&gt;I selected db.t3.micro (the minimum-resource option) because this demo doesn't need more CPU or RAM.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4mrwli9j9lmrx0naunk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4mrwli9j9lmrx0naunk.png" alt="Instance class" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choose a storage type&lt;/strong&gt;: 

&lt;ul&gt;
&lt;li&gt;General Purpose SSD with 20 GB of storage is enough for my demo application.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Note: I did not provision a separate EC2 instance for this database, because RDS manages the underlying compute resources for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzjh73gttwdlajk9x80u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzjh73gttwdlajk9x80u.png" alt="Storage type" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After configuring these settings, click &lt;strong&gt;Create Database&lt;/strong&gt; to start provisioning the RDS instance. Provisioning takes a couple of minutes.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Configure Spring Boot Application&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that your RDS MySQL instance is up and running, you can proceed to configure your Spring Boot application to connect to it.&lt;/p&gt;

&lt;p&gt;I'm not going to deep-dive into Spring Boot here; I'll just show a few Java files and configurations to give you the idea. If you're new to Spring Boot, please get familiar with the basics of Spring Boot applications before attempting this implementation.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3.1. Update application.properties&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In your Spring Boot project, add the necessary database connection details to the &lt;code&gt;application.properties&lt;/code&gt; file. The connection uses the endpoint (shown under &lt;strong&gt;Connectivity &amp;amp; security&lt;/strong&gt; on the RDS instance page) along with the credentials set during the RDS setup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;spring.application.name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;DevOps&lt;/span&gt;
&lt;span class="py"&gt;spring.datasource.url&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;jdbc:mysql://&amp;lt;End Point&amp;gt;/devops&lt;/span&gt;
&lt;span class="py"&gt;spring.datasource.username&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;admin&lt;/span&gt;
&lt;span class="py"&gt;spring.datasource.password&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;Password&amp;gt;&lt;/span&gt;
&lt;span class="py"&gt;spring.jpa.hibernate.ddl-auto&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;update&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;spring.datasource.url&lt;/strong&gt;: This is the URL of your RDS instance (replace the host with the actual RDS endpoint you received).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring.datasource.username&lt;/strong&gt;: The &lt;strong&gt;admin&lt;/strong&gt; user or the master username you configured during the setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring.datasource.password&lt;/strong&gt;: The password that you configured for your RDS instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring.jpa.hibernate.ddl-auto&lt;/strong&gt;: Set to &lt;strong&gt;update&lt;/strong&gt; to automatically update your schema (ideal for development).&lt;/li&gt;
&lt;/ul&gt;
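&lt;p&gt;To make the datasource URL concrete, here is a minimal sketch in plain Java (no Spring required) showing how the JDBC URL is assembled from the RDS endpoint. The endpoint below is a made-up placeholder, not a real instance; use the one shown on your own RDS page.&lt;/p&gt;

```java
// Sketch: how the value of spring.datasource.url is assembled.
// The endpoint is a hypothetical placeholder; copy yours from the
// "Connectivity & security" tab of the RDS instance.
public class JdbcUrlBuilder {
    public static String build(String endpoint, String database) {
        // MySQL listens on 3306 by default, matching the security group rule.
        return "jdbc:mysql://" + endpoint + ":3306/" + database;
    }

    public static void main(String[] args) {
        String endpoint = "mydb.abc123xyz.us-east-1.rds.amazonaws.com"; // placeholder
        System.out.println(build(endpoint, "devops"));
    }
}
```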

&lt;h4&gt;
  
  
  &lt;strong&gt;3.2. Add MySQL Dependency&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Ensure that you have the MySQL driver dependency in your &lt;code&gt;pom.xml&lt;/code&gt; for Maven or &lt;code&gt;build.gradle&lt;/code&gt; for Gradle.&lt;/p&gt;

&lt;p&gt;For Maven:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;mysql&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;mysql-connector-java&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Gradle:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gradle"&gt;&lt;code&gt;&lt;span class="n"&gt;implementation&lt;/span&gt; &lt;span class="s1"&gt;'mysql:mysql-connector-java'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: Ensure that you also add the &lt;code&gt;spring-boot-starter-data-jpa&lt;/code&gt; dependency.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3.3. Define JPA Entity and Repository&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;You can now define your JPA entity and the corresponding repository. For example, to create a &lt;strong&gt;User&lt;/strong&gt; entity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Entity&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;@Id&lt;/span&gt;
    &lt;span class="nd"&gt;@GeneratedValue&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;strategy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GenerationType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;IDENTITY&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;Long&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;getters&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;setters&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;constructions&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And a repository interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Repository&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;UserRepository&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;JpaRepository&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;User&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Long&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: Create a Simple REST Controller&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create a controller to handle requests related to the &lt;strong&gt;User&lt;/strong&gt; entity. The following code shows how to create a simple POST method for saving user data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@RestController&lt;/span&gt;
&lt;span class="nd"&gt;@RequestMapping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/user"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;UserController&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Autowired&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;UserService&lt;/span&gt; &lt;span class="n"&gt;userService&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@PostMapping&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;saveUser&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;@RequestBody&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;userService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;saveUser&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"Success!"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Exception&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getMessage&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;UserService&lt;/code&gt; class handles saving the data to the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Service&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;UserService&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Autowired&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;UserRepository&lt;/span&gt; &lt;span class="n"&gt;userRepository&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;saveUser&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;User&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;userRepository&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;save&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5: Verifying the Connection in MySQL Workbench&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You can verify the connection by using &lt;strong&gt;MySQL Workbench&lt;/strong&gt; to connect to the AWS RDS instance. Enter the connection details as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Host&lt;/strong&gt;: The endpoint of your RDS instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Username&lt;/strong&gt;: The &lt;code&gt;admin&lt;/code&gt; username.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password&lt;/strong&gt;: The password you set for your database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port&lt;/strong&gt;: 3306.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once connected, you can browse the databases and tables to confirm that your Spring Boot application is interacting with the MySQL database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7tlmqxjhc2h15fvthfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7tlmqxjhc2h15fvthfm.png" alt="MySQL Workbench" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;







&lt;h3&gt;
  
  
  &lt;strong&gt;Step 6: Testing with Postman&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You can test the POST endpoint using &lt;strong&gt;Postman&lt;/strong&gt;. Send a &lt;strong&gt;POST&lt;/strong&gt; request to &lt;code&gt;http://localhost:8080/user&lt;/code&gt; with a JSON body:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"test"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"test@gmail.com"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a response &lt;strong&gt;"Success!"&lt;/strong&gt; if everything is set up correctly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp6svt6ywen3q8ba3cc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp6svt6ywen3q8ba3cc2.png" alt="Postman Test" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;
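&lt;p&gt;If you prefer code over Postman, the same request can be built with the JDK's built-in HTTP client (Java 11+). This is a sketch assuming the app is running on &lt;code&gt;localhost:8080&lt;/code&gt; as above; actually sending it requires the Spring Boot application to be up.&lt;/p&gt;

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: the same POST request Postman sends, built with java.net.http.
public class PostUserRequest {
    public static HttpRequest build() {
        String json = "{\"name\": \"test\", \"email\": \"test@gmail.com\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/user"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build();
        // Send with HttpClient.newHttpClient().send(req, ...) while the app is running.
        System.out.println(req.method() + " " + req.uri());
    }
}
```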







&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You have now successfully connected your Spring Boot application to an AWS RDS MySQL instance. By following the above steps, you were able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up an AWS RDS instance for MySQL.&lt;/li&gt;
&lt;li&gt;Configure the necessary security groups for access control.&lt;/li&gt;
&lt;li&gt;Connect your Spring Boot application to the RDS instance via JDBC.&lt;/li&gt;
&lt;li&gt;Test the setup by sending POST requests through Postman and verifying the database entries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup ensures a seamless and scalable database backend for your Spring Boot application hosted on AWS. &lt;/p&gt;

&lt;p&gt;Let me know if you need any further assistance or if you have any ideas to improve the setup!&lt;/p&gt;

&lt;p&gt;Thank you! &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hmqhw733kkpm208wsms.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hmqhw733kkpm208wsms.gif" alt="enjoy" width="362" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mysql</category>
      <category>springboot</category>
      <category>project</category>
    </item>
    <item>
      <title>AWS Elastic Load Balancer: Guide to High Availability and Scalability</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Sat, 30 Nov 2024 05:53:26 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/aws-elastic-load-balancer-guide-to-high-availability-and-scalability-3d5l</link>
      <guid>https://dev.to/sachithmayantha/aws-elastic-load-balancer-guide-to-high-availability-and-scalability-3d5l</guid>
      <description>&lt;p&gt;In today’s cloud-driven world, where businesses demand resilience, speed, and adaptability, building robust applications is no longer optional, it’s essential. Imagine a bustling airport terminal with passengers arriving from all directions. To keep things running smoothly, traffic needs to be directed, whether to check-in counters, security lines, or gates. AWS Elastic Load Balancer is a powerful service that provides load balancing incoming traffic between multiple targets, which could be instances of EC2, containers, IP addresses, and Lambda. Herein, in this article, I'll give an overview of ELB and its important options to make your choice of load balancer right for your use case.&lt;/p&gt;

&lt;p&gt;AWS Elastic Load Balancer automatically distributes incoming application or network traffic across multiple targets in one or more availability zones (AZs). By acting as a traffic distribution layer, ELB enhances fault tolerance, scalability, and availability, ensuring your applications can handle varying levels of demand seamlessly.&lt;/p&gt;

&lt;h4&gt;
  
  
  So, what are the use cases?? 🧐
&lt;/h4&gt;

&lt;p&gt;It really depends on what type of load balancer you're using. Let's go through the types one by one; I'll give a brief explanation of each with specific use cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Application Load Balancer (ALB)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsr7ywzwlm5kg3g97n0xd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsr7ywzwlm5kg3g97n0xd.png" alt="Application Load Balancer" width="654" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Think of the Application Load Balancer as the ultimate traffic controller for web apps. It specializes in handling HTTP and HTTPS traffic, operating at the application layer to route incoming requests based on details such as the URL path or host header, like a finely tuned GPS for your traffic. This makes ALB the go-to choice for modern architectures such as microservices, web applications, and RESTful APIs.&lt;/p&gt;

&lt;p&gt;Imagine this: You’re running an e-commerce platform. With ALB, you can easily direct customers to the right microservices. For example, requests to /cart are routed to the shopping cart service, while /checkout requests go straight to the payment system. &lt;/p&gt;

&lt;h4&gt;
  
  
  Smooth, right?
&lt;/h4&gt;
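&lt;p&gt;To make the routing idea concrete, here is a toy sketch of path-based routing in plain Java. This only illustrates the concept, it is not the AWS API, and the target-group names are hypothetical; in a real ALB you would define these rules on an HTTPS listener in the console or via infrastructure-as-code.&lt;/p&gt;

```java
// Toy sketch of ALB-style path-based routing (concept only, not the AWS API):
// the request path is matched against rule prefixes in order, and the request
// is forwarded to the first matching "target group".
public class PathRouter {
    private static final String[][] RULES = {
        {"/cart", "cart-service-target-group"},         // hypothetical names
        {"/checkout", "payment-service-target-group"},
    };

    public static String route(String path) {
        for (String[] rule : RULES) {
            if (path.startsWith(rule[0])) {
                return rule[1];
            }
        }
        // ALB listeners also end with a default action for unmatched requests.
        return "default-target-group";
    }

    public static void main(String[] args) {
        System.out.println(route("/cart/items"));
        System.out.println(route("/checkout"));
        System.out.println(route("/home"));
    }
}
```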

&lt;h3&gt;
  
  
  2. Network Load Balancer (NLB)
&lt;/h3&gt;

&lt;p&gt;Let's talk about the powerhouse of traffic management. Network Load Balancer is purpose-built to handle huge volumes of traffic with incredible speed. Operating at the transport layer, it’s designed for TCP and UDP traffic, and it absolutely shines under pressure, managing low-latency, high-throughput workloads like a pro. In fact, the NLB is so efficient it can scale to handle millions of requests per second without breaking a sweat.&lt;/p&gt;

&lt;p&gt;Imagine this: You’re running a real-time gaming platform or managing a financial app where even the tiniest delay is unacceptable. The NLB steps in to ensure every request is processed quickly and reliably, even when traffic surges like a tidal wave. It’s the definition of “always-on” performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Gateway Load Balancer (GLB)
&lt;/h3&gt;

&lt;p&gt;Think of the Gateway Load Balancer as the Swiss Army knife for your traffic. It combines the capabilities of a load balancer and a gateway into one neat solution, making it ideal for scenarios where your traffic needs to pass through virtual appliances like firewalls, intrusion detection systems, or network monitoring tools. It’s all about streamlining complex tasks and doing the heavy lifting for you.&lt;/p&gt;

&lt;p&gt;Here’s a scenario: Let’s imagine you’re developing a high-security application where every bit of traffic needs to pass through a virtual firewall for inspection. GLB takes care of routing all that traffic seamlessly, which means no manual intervention is required. It’s built to handle these tasks effortlessly, letting you focus on the bigger picture while ensuring your app stays secure.&lt;/p&gt;

&lt;p&gt;So, that’s it about AWS Elastic Load Balancer.&lt;/p&gt;

&lt;p&gt;I hope you got something new!&lt;/p&gt;

&lt;p&gt;Have a great day!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiinu45n3x4q4k5zcbitv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiinu45n3x4q4k5zcbitv.gif" alt="Cheers" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>learning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Securing Your Infrastructure on Amazon EC2</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Thu, 31 Oct 2024 12:53:53 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/securing-your-infrastructure-on-amazon-ec2-3obk</link>
      <guid>https://dev.to/sachithmayantha/securing-your-infrastructure-on-amazon-ec2-3obk</guid>
      <description>&lt;p&gt;In today's digital age, the security of your infrastructure is more critical than ever. As businesses increasingly rely on cloud services such as Amazon EC2 (Elastic Compute Cloud). Furthermore, understanding how to protect your data and workloads in the cloud is essential. AWS (Amazon Web Services) provides robust security features and best practices to ensure your infrastructure is safe, reliable, and scalable.&lt;br&gt;
This article will walk you through key infrastructure security practices for Amazon EC2. Whether you are a small business owner or a seasoned IT professional, these concepts will help you design a secure and efficient cloud environment.&lt;br&gt;
The security measures for Amazon EC2 infrastructure encompass various aspects to ensure secure and controlled access to your instances, protect data, and maintain isolation. Here’s a breakdown of the key security concepts based on the detailed content provided:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Access Control&lt;/strong&gt;&lt;br&gt;
TLS and Cipher Suites: Access to Amazon EC2 requires clients to support TLS 1.2 or 1.3 and use cipher suites with Perfect Forward Secrecy (PFS), such as DHE or ECDHE. This ensures encrypted and secure communication.&lt;br&gt;
API Access: API requests must be signed using an AWS access key ID and secret access key or via temporary credentials from AWS Security Token Service (STS).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Network Isolation&lt;/strong&gt;&lt;br&gt;
Virtual Private Cloud (VPC): Each VPC is a logically isolated network within AWS. You can create separate VPCs to isolate workloads or different parts of an organization.&lt;br&gt;
Subnets: Use subnets within a VPC to separate application tiers (web, application, database). Instances can be placed in private subnets if they don’t require direct internet access.&lt;br&gt;
PrivateLink: To call the Amazon EC2 API via private IPs within a VPC, use AWS PrivateLink to keep traffic within the AWS network.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Isolation on Physical Hosts&lt;/strong&gt;&lt;br&gt;
Hypervisor Isolation: EC2 instances on the same physical host are isolated through hypervisor technology, which keeps each instance’s CPU and memory separate.&lt;br&gt;
Data Security: When instances are terminated, their memory is scrubbed and storage blocks are reset, preventing data leakage.&lt;br&gt;
Network Isolation: Instances can only send traffic from their assigned MAC and IP addresses; non-compliant traffic is dropped.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Controlling Network Traffic&lt;/strong&gt;&lt;br&gt;
Security Groups: The primary mechanism for controlling access. Define rules that allow only minimal, specific traffic, such as from a corporate network or for specific protocols (e.g., HTTPS).&lt;br&gt;
Network ACLs: Provide stateless, coarse-grained network control. Can be used as an additional layer of defense to restrict traffic at the subnet level.&lt;br&gt;
Private Subnets &amp;amp; Bastion Hosts: For instances in private subnets, use bastion hosts or NAT gateways to manage external connectivity without exposing the instance directly to the internet.&lt;br&gt;
VPC Subnet Route Tables: Configure only the necessary routes to control network access, such as limiting internet access to specific subnets.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Windows-Specific Security Recommendations&lt;/strong&gt;&lt;br&gt;
Windows Firewall &amp;amp; Group Policies: Use Group Policy Objects (GPOs) to centrally manage Windows Firewall settings, providing additional control over network traffic.&lt;br&gt;
Secure Administration: Secure RDP via SSL/TLS and manage user permissions through Active Directory or AWS Directory Service. Avoid using Domain Admin accounts for daily activities.&lt;br&gt;
Configuration Management: Use tools like EC2 Run Command, Amazon EC2 Systems Manager (SSM), and PowerShell DSC to manage configurations without direct instance access.&lt;br&gt;
Application Layer Restrictions: Use built-in functionality in Microsoft applications to set network restrictions (e.g., IP range filters in IIS and SQL Server).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Monitoring &amp;amp; Automation&lt;/strong&gt;&lt;br&gt;
VPC Flow Logs: Monitor the network traffic reaching your instances for security insights.&lt;br&gt;
Amazon GuardDuty: Detects suspicious behavior and malware, helping to identify compromised instances or malicious activity.&lt;br&gt;
AWS Security Hub &amp;amp; Analyzers: Services like Reachability Analyzer and Network Access Analyzer detect unintended network exposure, aiding continuous security assessment.&lt;br&gt;
Secure Remote Access: Use AWS Systems Manager Session Manager and EC2 Instance Connect for secure, keyless access to instances. This reduces the need to open SSH or RDP ports.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Additional Security Measures&lt;/strong&gt;&lt;br&gt;
Multiple Network Interfaces: Deploy additional interfaces to separate and audit management traffic from application traffic, enhancing security management.&lt;br&gt;
AWS VPN &amp;amp; Direct Connect: Establish private, dedicated connections between your on-premises network and your VPC for secure, low-latency communication.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These security practices provide protection across multiple layers, from access and network control to monitoring and configuration management. Following these guidelines helps keep your Amazon EC2 infrastructure secure, scalable, and resilient.&lt;/p&gt;
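On the client side, the TLS requirement from the first item can be enforced explicitly. Here is a minimal sketch using Python's standard `ssl` module; this is an illustrative client-side check of my own, not something from AWS documentation:

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.2,
# matching the minimum protocol version Amazon EC2 endpoints accept.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate verification stays on by default (CERT_REQUIRED), which is
# what you want when talking to AWS API endpoints.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)
```

Any socket wrapped with this context will fail the handshake against a server that only offers TLS 1.0 or 1.1, rather than silently downgrading.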

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>ec2</category>
      <category>security</category>
    </item>
    <item>
      <title>Container Image Management Workflow with Amazon ECR</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Tue, 17 Sep 2024 04:19:22 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/container-image-management-workflow-with-amazon-ecr-2i3b</link>
      <guid>https://dev.to/sachithmayantha/container-image-management-workflow-with-amazon-ecr-2i3b</guid>
      <description>&lt;h4&gt;
  
  
  Registry Creation
&lt;/h4&gt;

&lt;p&gt;Each AWS account is automatically provided with a private Amazon ECR registry, where you can store container images such as Docker images. This registry serves as a central hub for managing repositories and images.&lt;/p&gt;

&lt;h4&gt;
  
  
  Repository Management
&lt;/h4&gt;

&lt;p&gt;Within the registry, users can create multiple repositories. These repositories are designed to store versions of container images and can be configured with repository policies to manage access control. Policies are resource-based and use AWS IAM to control who can push, pull, or manage the images in the repository.&lt;/p&gt;

&lt;h4&gt;
  
  
  Authentication via Authorization Tokens
&lt;/h4&gt;

&lt;p&gt;To interact with an Amazon ECR repository, a user or application (e.g., Amazon EC2 instances) must first authenticate. This authentication uses an authorization token provided by ECR, which is linked to AWS Identity and Access Management (IAM). The token is then passed to the Docker CLI (or another compatible client) to authenticate API requests for pushing or pulling images.&lt;/p&gt;

&lt;h4&gt;
  
  
  Image Pushing and Pulling
&lt;/h4&gt;

&lt;p&gt;Once authenticated:&lt;br&gt;
&lt;strong&gt;Push&lt;/strong&gt;: Users can upload container images to the repository using Docker commands or other container management tools. These images are stored in the repository with tags indicating their version.&lt;br&gt;
&lt;strong&gt;Pull&lt;/strong&gt;: When an application or service (such as Amazon ECS or Amazon EKS) needs a container image, it can request and retrieve the image from the repository using the image name and tag.&lt;/p&gt;

&lt;h4&gt;
  
  
  Lifecycle Policies
&lt;/h4&gt;

&lt;p&gt;Repositories often accumulate outdated or unused images, so Amazon ECR provides lifecycle policies to help manage storage. Users can define rules to automate the removal of old or unused images, thus saving storage costs and keeping the repository organized. These policies can be tested before applying, ensuring no valuable images are deleted by accident.&lt;/p&gt;
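To make this concrete, a lifecycle policy is expressed as a JSON document. Here is a sketch in Python of the structure ECR accepts; the rule description and the 14-day threshold are illustrative choices, not values from the article:

```python
import json

# Illustrative ECR lifecycle policy: expire untagged images 14 days
# after they were pushed. Priority and threshold are assumptions.
lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire untagged images after 14 days",
            "selection": {
                "tagStatus": "untagged",
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 14,
            },
            "action": {"type": "expire"},
        }
    ]
}

# This JSON string is what you would hand to ECR's put-lifecycle-policy call.
policy_text = json.dumps(lifecycle_policy, indent=2)
print(policy_text)
```

Because ECR lets you preview a policy before applying it, a document like this can be tested against the repository's current images first.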

&lt;h4&gt;
  
  
  Image Scanning
&lt;/h4&gt;

&lt;p&gt;Security is a major concern with container images. Amazon ECR offers image scanning to identify vulnerabilities in images that are pushed to the repository. This scanning can be automatic (triggered upon image push) or manual (triggered by the user). The scan results provide details about any vulnerabilities found, allowing users to update and patch their images accordingly.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cross-Region and Cross-Account Replication
&lt;/h4&gt;

&lt;p&gt;ECR enables cross-region replication to allow the same image to be available in multiple AWS regions. This is useful for applications deployed across different geographies. Similarly, cross-account replication allows sharing of images between different AWS accounts while maintaining control over access through repository policies.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pull-Through Cache
&lt;/h4&gt;

&lt;p&gt;Amazon ECR provides a pull-through cache that allows caching of images from an upstream public registry (e.g., Docker Hub). This cache ensures faster retrieval of images and reduced dependency on the availability or performance of the upstream registry. ECR periodically synchronizes cached images with the upstream registry, ensuring that the images are up to date.&lt;/p&gt;

&lt;h4&gt;
  
  
  Integration with Other AWS Services
&lt;/h4&gt;

&lt;p&gt;Amazon ECS (Elastic Container Service) and Amazon EKS (Elastic Kubernetes Service) can both pull images from ECR repositories as part of their deployment process.&lt;br&gt;
AWS Lambda can also use container images stored in ECR to run containerized workloads.&lt;br&gt;
Amazon EC2 instances and AWS Fargate tasks often pull images from ECR for deployment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Summary of the Workflow:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;User or Service Authentication&lt;/strong&gt;: A user or service authenticates to ECR using an authorization token.&lt;br&gt;
&lt;strong&gt;Push/Pull Images&lt;/strong&gt;: Authorized users push container images to the repository or pull images for their services.&lt;br&gt;
&lt;strong&gt;Security Measures&lt;/strong&gt;: Image scanning checks for vulnerabilities.&lt;br&gt;
&lt;strong&gt;Maintenance&lt;/strong&gt;: Lifecycle policies clean up old images to optimize storage.&lt;br&gt;
&lt;strong&gt;Scaling&lt;/strong&gt;: Cross-region and cross-account replication provide scalability, and pull-through caching helps ensure performance.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>cloud</category>
      <category>ecr</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS Security Group Rules</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Sun, 25 Aug 2024 12:32:52 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/aws-secutiy-group-rules-1fia</link>
      <guid>https://dev.to/sachithmayantha/aws-secutiy-group-rules-1fia</guid>
      <description>&lt;p&gt;AWS Security Group rules are critical for controlling and securing network traffic to and from your AWS resources. These rules define who can access your instances by specifying allowed IP addresses, protocols, and ports, ensuring that only authorized users and services can connect. This access control is crucial for preventing unauthorized access and potential security breaches, protecting your data and applications.&lt;/p&gt;

&lt;p&gt;In addition to access control, Security Groups act as virtual firewalls that manage inbound and outbound traffic, shielding your AWS environment from threats and blocking unwanted or harmful traffic. This layer of security is essential for reducing the risk of breaches and ensuring your resources remain secure.&lt;/p&gt;

&lt;p&gt;Security Group rules also play a key role in meeting compliance and security best practices. By enforcing strict access restrictions, they help ensure your infrastructure adheres to industry standards, reducing the risk of compliance issues. Furthermore, Security Groups are easy to configure and manage, allowing you to apply rules to multiple instances simultaneously, which simplifies security management and minimizes the chances of configuration errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Components
&lt;/h3&gt;

&lt;p&gt;Here’s a breakdown of each component and its importance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03wj2l04hboimp17814v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03wj2l04hboimp17814v.png" alt="Security Group Rule components" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Type: The type defines the specific protocol to open to network traffic, such as SSH, RDP, HTTP, or HTTPS. This is crucial for controlling what kind of traffic can access your instance, ensuring that only the necessary protocols are exposed, thereby minimizing security risks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Protocol: The protocol defines the method by which data is transmitted over the network, such as TCP, UDP, or ICMP. Understanding the protocol is important because different types of traffic require different protocols. Configuring the right protocol ensures that your application can communicate correctly while preventing unwanted traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Port Range: The port range specifies which ports are open for the defined protocol. Ports act as gateways for different types of network services, and correctly setting the port range allows the necessary traffic while blocking potentially harmful connections.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Source: The source determines where the traffic originates (for inbound rules) or where it’s sent (for outbound rules). This component is vital for defining who or what can access your resources, allowing you to restrict access to trusted IP addresses or networks, enhancing overall security.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By carefully configuring each of these components, you can create precise rules that ensure only authorized traffic can reach your AWS instances, providing a robust defense against potential security threats.&lt;/p&gt;
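The four components above map directly onto the fields of an ingress rule as the EC2 API expresses it. A minimal sketch in Python; the CIDR block is a placeholder for your trusted network, not a real address:

```python
# One inbound rule in the shape the EC2 API expects:
# Type HTTPS -> protocol tcp, port range 443-443, source = trusted CIDR.
trusted_cidr = "203.0.113.0/24"  # placeholder: replace with your own network

https_ingress_rule = {
    "IpProtocol": "tcp",   # Protocol component
    "FromPort": 443,       # Port range component (start)
    "ToPort": 443,         # Port range component (end)
    "IpRanges": [{         # Source component
        "CidrIp": trusted_cidr,
        "Description": "HTTPS from corporate network",
    }],
}

print(https_ingress_rule["FromPort"], https_ingress_rule["IpRanges"][0]["CidrIp"])
```

The "Type" dropdown in the console is really just a preset that fills in the protocol and port range for you; HTTPS, for example, expands to tcp/443 as shown here.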

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>security</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Exploring Amazon EC2 Instance Purchasing Options</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Mon, 15 Jul 2024 14:37:50 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/exploring-amazon-ec2-instance-purchasing-options-4am2</link>
      <guid>https://dev.to/sachithmayantha/exploring-amazon-ec2-instance-purchasing-options-4am2</guid>
      <description>&lt;p&gt;Amazon Elastic Compute Cloud (EC2) offers a variety of instance purchasing options for different needs and budgets. Whether you're looking for flexibility, cost savings, or scalability, there's a suitable option for you. This article will explore the four main EC2 instance purchasing options: On-demand Instances, Reserved Instances (RI), Savings Plans, and Spot Instances.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. On-demand Instances
&lt;/h4&gt;

&lt;p&gt;On-demand Instances allow you to pay for compute capacity by the second, with no long-term commitments. This option is ideal for users who need compute capacity on a short-term basis, for spiky or unpredictable workloads. Key benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No Upfront Costs&lt;/strong&gt;: Pay only for the time your instances are running.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: Easily scale up or down based on your application's needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Long-term Commitment&lt;/strong&gt;: Suitable for short-term, unpredictable workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On-demand Instances are perfect for development and testing environments, or for applications with unpredictable traffic patterns.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Reserved Instances (RI)
&lt;/h4&gt;

&lt;p&gt;Reserved Instances offer significant cost savings compared to On-demand Instances in exchange for a one- or three-year commitment. There are three types of RIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standard RIs&lt;/strong&gt;: Provide the highest discount, up to 75%, but are less flexible in terms of changing instance attributes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Convertible RIs&lt;/strong&gt;: Offer flexibility to change instance types, operating systems, and tenancies, with a slightly lower discount compared to Standard RIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheduled RIs&lt;/strong&gt;: Allow you to reserve capacity for specific time periods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reserved Instances are ideal for predictable workloads that do not require changes in compute power. They are suitable for applications with steady-state usage, such as databases or business applications.&lt;/p&gt;
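To see what a discount like this means in practice, here is a rough back-of-the-envelope comparison in Python; the hourly rate and discount are made-up numbers for illustration, not AWS pricing:

```python
# Hypothetical numbers for illustration only.
on_demand_hourly = 0.10   # assumed On-Demand rate in USD/hour
ri_discount = 0.60        # assumed 60% Reserved Instance discount
hours_per_year = 24 * 365

# A steady-state workload runs around the clock, so the annual
# On-Demand bill is simply rate * hours.
on_demand_annual = on_demand_hourly * hours_per_year
ri_annual = on_demand_annual * (1 - ri_discount)

print(f"On-Demand: ${on_demand_annual:.2f}/yr, "
      f"RI: ${ri_annual:.2f}/yr, "
      f"saving ${on_demand_annual - ri_annual:.2f}/yr")
```

The arithmetic makes the trade-off obvious: the RI only pays off if the instance actually runs most of the year, which is exactly the steady-state usage described above.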

&lt;h4&gt;
  
  
  3. Savings Plans
&lt;/h4&gt;

&lt;p&gt;Savings Plans provide a flexible pricing model, offering significant cost savings. There are two types of Savings Plans:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compute Savings Plans&lt;/strong&gt;: These offer the most flexibility, allowing you to change instance families, operating systems, and tenancies, and even shift workloads between regions. They can reduce costs by up to 66%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EC2 Instance Savings Plans&lt;/strong&gt;: These apply to specific instance families within a region, offering savings of up to 72%, similar to Standard RIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Savings Plans are excellent for long-term workloads and users who need flexibility in their computing needs.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Spot Instances
&lt;/h4&gt;

&lt;p&gt;Spot Instances allow you to get spare EC2 capacity, often at significantly reduced prices. Key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost Savings&lt;/strong&gt;: Pay the Spot price, which is often much lower than the On-demand price.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Start and End Times&lt;/strong&gt;: Suitable for applications that can handle interruptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Termination Notice&lt;/strong&gt;: Receive a two-minute notice before termination.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Spot Instances are ideal for applications with flexible start and end times, such as batch processing, data analysis, and high-performance computing tasks.&lt;/p&gt;

&lt;p&gt;In conclusion, understanding these EC2 instance purchasing options allows you to optimize your AWS costs and tailor your compute capacity to your specific requirements. By selecting the right mix of On-demand, Reserved Instances, Savings Plans, and Spot Instances, you can achieve both cost efficiency and operational flexibility.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learning</category>
      <category>cloud</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Manage Amazon S3 Storage Cost with Lifecycle Rules</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Sun, 07 Jul 2024 14:12:54 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/manage-storage-costs-with-amazon-s3-lifecycle-rules-1fnj</link>
      <guid>https://dev.to/sachithmayantha/manage-storage-costs-with-amazon-s3-lifecycle-rules-1fnj</guid>
      <description>&lt;p&gt;Hello World! ✌️&lt;/p&gt;

&lt;p&gt;In the last article, we introduced &lt;a href="https://dev.to/sachithmayantha/amazon-s3-storage-classes-1kjn"&gt;Amazon S3 Storage Classes&lt;/a&gt; with use cases. As an extension of that, today we are going to discuss Amazon S3 storage costs with lifecycle rules.&lt;/p&gt;

&lt;p&gt;As we discussed earlier, Amazon S3 is a durable, scalable, and flexible storage service. It provides a range of storage classes optimized for everything from frequently accessed data to archival storage.&lt;/p&gt;

&lt;p&gt;Amazon S3 also lets you define lifecycle rules that automatically transition your objects between storage classes, and even set expiration dates for object deletion. This helps you optimize storage costs by automatically moving each object to the most cost-effective storage class as its access pattern changes over time. A lifecycle rule, in general, comprises two kinds of actions: &lt;strong&gt;Transition Actions&lt;/strong&gt; and &lt;strong&gt;Expiration Actions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let’s dive into these actions and understand how they work using the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm1yhh2qva0u3heuiht6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm1yhh2qva0u3heuiht6.jpg" alt="Example" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Transition Actions
&lt;/h4&gt;

&lt;p&gt;Transition actions are rules that automatically move objects to more cost-efficient storage classes. &lt;/p&gt;

&lt;p&gt;In the example image lifecycle policy:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 Standard&lt;/strong&gt;: The object &lt;code&gt;mydoc.pdf&lt;/code&gt; is initially stored in the S3 Standard class for the first 30 days. This class is optimized for frequent access with low latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 Standard-IA&lt;/strong&gt;: After 30 days, the object automatically transitions to the S3 Standard-IA (Infrequent Access) storage class. This move is more cost-effective for data that is accessed less frequently, as it reduces storage costs while imposing a fee for data retrieval.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 Glacier Flexible Retrieval&lt;/strong&gt;: After another 60 days in S3 Standard-IA, the object transitions to the S3 Glacier Flexible Retrieval storage class. This class is designed for long-term archival where the data is rarely accessed but still needs to be available. The storage cost here is even lower, though retrieval is slower and can take from minutes to hours, with associated costs.&lt;/p&gt;

&lt;p&gt;These transitions are automated based on predefined rules, allowing you to optimize storage costs without manual intervention.&lt;/p&gt;

&lt;h4&gt;
  
  
  Expiration Actions
&lt;/h4&gt;

&lt;p&gt;Expiration actions automatically delete objects after a specified period. This is useful for managing the data lifecycle and compliance requirements, ensuring that objects do not incur unnecessary storage costs.&lt;/p&gt;

&lt;p&gt;In the example image lifecycle policy:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deletion&lt;/strong&gt;: Finally, after 365 days in S3 Glacier Flexible Retrieval, the object is deleted. This expiration action ensures that the object is not kept in the S3 bucket, thus avoiding unnecessary storage costs.&lt;/p&gt;
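The example policy above can be expressed as a lifecycle configuration. Here is a sketch in Python of the structure the S3 API accepts (the rule ID is illustrative); note that transition days count from object creation, so the Glacier transition lands at day 90 (30 + 60) and expiration at day 455 (90 + 365):

```python
# Lifecycle configuration mirroring the example timeline:
# Standard for 30 days -> Standard-IA -> Glacier Flexible Retrieval
# at day 90 -> deleted at day 455.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "mydoc-archival",    # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix: apply to all objects
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},  # Flexible Retrieval
            ],
            "Expiration": {"Days": 455},
        }
    ]
}

rule = lifecycle_configuration["Rules"][0]
print([t["Days"] for t in rule["Transitions"]], rule["Expiration"]["Days"])
```

Adding up the per-class dwell times from the article gives the absolute day counts the configuration needs, which is the one conversion that is easy to get wrong when writing these rules by hand.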

&lt;h3&gt;
  
  
  S3 Lifecycle Rules Benefits
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Cost Efficiency&lt;/strong&gt;: Transitioning data between storage classes can significantly reduce storage costs, especially for data that becomes less frequently accessed over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated Management&lt;/strong&gt;: Lifecycle rules automate data management, reducing the need for manual intervention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance and Governance&lt;/strong&gt;: Expiration Actions support data retention policies and compliance requirements by deleting data automatically when it is no longer needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Lifecycle rules let you manage the movement of data at scale.&lt;/p&gt;

&lt;p&gt;In short, Amazon S3 Lifecycle Rules are a powerful tool for keeping storage costs optimized across storage classes, ensuring your data is stored cost-effectively throughout its lifecycle.&lt;/p&gt;

&lt;p&gt;Have a great day, and see you soon with another incredible topic!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bgqihag6t2nddjlcllj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bgqihag6t2nddjlcllj.gif" alt="Enjoy" width="362" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>storage</category>
      <category>cost</category>
    </item>
    <item>
      <title>Amazon S3 Storage Classes</title>
      <dc:creator>Sachith Fernando</dc:creator>
      <pubDate>Wed, 26 Jun 2024 01:46:55 +0000</pubDate>
      <link>https://dev.to/sachithmayantha/amazon-s3-storage-classes-1kjn</link>
      <guid>https://dev.to/sachithmayantha/amazon-s3-storage-classes-1kjn</guid>
      <description>&lt;p&gt;Today, let’s talk about Amazon S3 Storage Classes. They’re a super flexible set of options from Amazon that cater to different storage needs, data access patterns, and cost considerations. Whether you’re storing frequently accessed data, archival data, or something in between, S3 has got you covered!&lt;/p&gt;

&lt;h4&gt;
  
  
  What are Amazon S3 Storage Classes? 🧐
&lt;/h4&gt;

&lt;p&gt;Simply put, Amazon S3 offers multiple storage classes that are designed to optimize for different requirements, such as how often you access your data, how fast you need it, and how much you want to spend. Let’s break it down:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. S3 Standard
&lt;/h3&gt;

&lt;p&gt;This is your go-to storage class for frequently accessed data. It offers:&lt;/p&gt;

&lt;p&gt;99.99% availability and 99.999999999% (11 nines) durability&lt;br&gt;
High throughput and low latency&lt;br&gt;
Perfect for dynamic websites, mobile apps, gaming apps, and big data analytics&lt;br&gt;
It comes with a slightly higher storage cost than other classes, but the bonus is no retrieval fees.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. S3 Standard-Infrequent Access (S3 Standard-IA)
&lt;/h3&gt;

&lt;p&gt;If your data is accessed less often but still needs to be quickly retrieved, S3 Standard-IA is a smart pick. Key features include:&lt;/p&gt;

&lt;p&gt;Lower storage costs compared to S3 Standard&lt;br&gt;
Retrieval fees apply, so plan wisely!&lt;br&gt;
99.9% availability and 99.999999999% (11 nines) durability&lt;br&gt;
Use cases? Think backups, long-term storage, or data you don’t need every day but still want handy when you do.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. S3 One Zone-Infrequent Access (S3 One Zone-IA)
&lt;/h3&gt;

&lt;p&gt;This class stores your data in a single Availability Zone, making it super cost-effective. But heads up:&lt;/p&gt;

&lt;p&gt;It has lower availability (99.5%) and no cross-AZ redundancy.&lt;br&gt;
Perfect for non-critical data or region-specific applications where cost-saving matters more than durability.&lt;br&gt;
If you can afford to lose it or easily reproduce it, S3 One Zone-IA is the way to go!&lt;/p&gt;

&lt;h3&gt;
  
  
  4. S3 Glacier Flexible Retrieval
&lt;/h3&gt;

&lt;p&gt;Need to stash your data long-term but still want flexible access? Meet S3 Glacier Flexible Retrieval:&lt;/p&gt;

&lt;p&gt;Super low storage costs for archival data&lt;br&gt;
Retrieval speeds range from minutes (Expedited) to hours (Standard and Bulk)&lt;br&gt;
Ideal for archive data, regulatory compliance, and disaster recovery&lt;br&gt;
It’s great for those "just-in-case" moments when you need old data but not immediately.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. S3 Glacier Deep Archive
&lt;/h3&gt;

&lt;p&gt;Now, this is Amazon S3’s lowest-cost storage class. It's perfect for data that rarely gets touched.&lt;/p&gt;

&lt;p&gt;Retrieval times can go up to 12 hours, so patience is key! 🤓&lt;br&gt;
Excellent for compliance records or past data you’ll likely never need but can’t delete.&lt;br&gt;
If your data is expected to sit dormant for years, this is your best bet.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. S3 Intelligent-Tiering
&lt;/h3&gt;

&lt;p&gt;Feeling indecisive? Let S3 Intelligent-Tiering handle it! This class:&lt;/p&gt;

&lt;p&gt;Automatically moves your data between frequent and infrequent access tiers based on usage patterns.&lt;br&gt;
Provides 99.9% availability and 99.999999999% (11 nines) durability.&lt;br&gt;
Saves you money by adjusting storage costs as your access patterns change, with no manual work required!&lt;br&gt;
It’s the ideal solution for unpredictable data access patterns.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Choose Amazon S3 Storage Classes?
&lt;/h4&gt;

&lt;p&gt;Flexibility: A class for every use case, from high-performance applications to deep archives.&lt;br&gt;
Cost-effectiveness: Pay for what you use, tailored to your data’s access patterns.&lt;br&gt;
Scalability: Whether it’s a few files or millions, S3 handles it like a champ.&lt;/p&gt;

&lt;p&gt;So that’s the scoop on Amazon S3 Storage Classes! I hope this helps you pick the perfect one for your needs.&lt;/p&gt;

&lt;p&gt;Have a great day, and see you soon with another incredible topic!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauklyhzesuhvku2zbrvj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauklyhzesuhvku2zbrvj.gif" alt="cheers" width="1024" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>beginners</category>
      <category>learn</category>
    </item>
  </channel>
</rss>
