<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ismail G.</title>
    <description>The latest articles on DEV Community by Ismail G. (@ismailg).</description>
    <link>https://dev.to/ismailg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1131014%2F3961d002-9e1f-450e-9988-f387fc6b8a78.png</url>
      <title>DEV Community: Ismail G.</title>
      <link>https://dev.to/ismailg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ismailg"/>
    <language>en</language>
    <item>
      <title>How I Became an AWS Community Builder (Data Track)</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sat, 07 Mar 2026 07:17:26 +0000</pubDate>
      <link>https://dev.to/ismailg/how-i-became-an-aws-community-builder-data-track-4mmp</link>
      <guid>https://dev.to/ismailg/how-i-became-an-aws-community-builder-data-track-4mmp</guid>
      <description>&lt;p&gt;I got an email a few days ago that made my day. I had been accepted into the Data track of the AWS Community Builders.&lt;/p&gt;

&lt;p&gt;This initiative may just look like another badge for a lot of folks. But for me, it means a lot more: months of studying, trying new things, and sharing what I learn with other people.&lt;/p&gt;

&lt;p&gt;After I told folks the news, a lot of them asked me the same thing: "What helped you get in?"&lt;/p&gt;

&lt;p&gt;To tell the truth, there is no single secret. But one thing really mattered: consistently sharing technical knowledge. One of the best things I did along the way was to write about my experiences with databases, cloud architectures, and AWS services on Dev.to.&lt;/p&gt;

&lt;p&gt;In this article, I want to talk about what I accomplished, what made my application stand out, and what I would tell anyone who wishes to apply to the program in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the AWS Community Builders Program?
&lt;/h2&gt;

&lt;p&gt;Before diving into my journey, let's clarify what this program is. The AWS Community Builders program is designed to recognize and support technical community leaders who are passionate about sharing knowledge and connecting with others about AWS technologies.&lt;/p&gt;

&lt;p&gt;It provides builders with technical resources, mentorship, $500 in AWS credits, exam vouchers, and a direct line to AWS product teams. It’s not just about what you know; it’s about how you help others learn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where I Started: My Background
&lt;/h2&gt;

&lt;p&gt;At first, I focused mainly on getting AWS certifications. But after a while, I realized that passing an exam is really just the starting point. To truly understand the cloud, I needed hands-on experience—working through real database problems, experimenting, sometimes breaking things, and figuring out how they work.&lt;/p&gt;

&lt;p&gt;Along the way, I also started documenting what I learned so others could benefit from the same experiences.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Turning Point: Writing on Dev.to
&lt;/h2&gt;

&lt;p&gt;At some point, I realized that simply learning new things wasn’t enough. I was spending hours troubleshooting systems, experimenting with databases, and figuring out how different AWS services worked together—but most of those lessons stayed in my own notes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;That’s when I started sharing what I learned on Dev.to.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead of writing simple notes for myself, I began turning real troubleshooting sessions into structured tutorials. Whenever I solved a problem, I tried to explain the process step by step—what went wrong, what I tried, and what finally worked.&lt;/p&gt;

&lt;p&gt;Another place where I learned a lot was AWS re:Post. I started helping people who were facing real problems with AWS services. Sometimes the questions were about databases, sometimes about architecture or infrastructure.&lt;/p&gt;

&lt;p&gt;When I encountered an interesting problem there, I didn’t just answer it and move on. I often recreated the scenario in AWS, tested different solutions, and then wrote a detailed article on Dev.to so that the solution could help more people facing the same issue.&lt;/p&gt;

&lt;p&gt;Because I applied to the Data track of the AWS Community Builders, many of my articles naturally focused on data-related topics, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS DocumentDB — exploring how managed NoSQL databases work in AWS&lt;/li&gt;
&lt;li&gt;MongoDB migrations — the challenges of moving on-premise data to the cloud&lt;/li&gt;
&lt;li&gt;Database architecture — designing systems for high availability and scalability&lt;/li&gt;
&lt;li&gt;Cloud infrastructure — automating data workloads and deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Over time, something interesting happened. Writing about these topics didn’t just help others—it also helped me understand them much more deeply. Explaining a solution forces you to truly understand it.&lt;/p&gt;

&lt;p&gt;One thing I learned along the way is this:&lt;/p&gt;

&lt;p&gt;Don’t just write about what a service is. Write about the problem you solved with it.&lt;/p&gt;

&lt;p&gt;That’s the kind of knowledge the cloud community values most.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Community Contribution &amp;amp; Engagement&lt;br&gt;
A significant part of my journey consisted of writing articles; however, I quickly realized that contributing wasn't just about producing content. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Being an active member of the community matters just as much. Answering questions in online forums, joining discussions, and helping people work through challenging database configurations gave me valuable new experience as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Matters When Applying?
&lt;/h2&gt;

&lt;p&gt;If you are planning to apply for the next cohort, here are the four pillars that I believe made my application stand out. I can't speak for the reviewers, but in my experience, these are the elements that matter most:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Content:&lt;/strong&gt;&lt;br&gt;
High-quality blog posts, GitHub repositories, or videos that demonstrate the depth of your technical knowledge and practical experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistency:&lt;/strong&gt;&lt;br&gt;
One article published just before the deadline won't carry much weight. Posting content regularly over several months shows that you're actively participating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Experience:&lt;/strong&gt;&lt;br&gt;
Your content is far more valuable when you explain how you used AWS services to solve real problems, especially infrastructure or data challenges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community Impact:&lt;/strong&gt;&lt;br&gt;
Last but not least, your content should genuinely help people. Comments, discussions, and others applying your ideas are evidence that your work benefits the community.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Advice for Aspiring Builders
&lt;/h2&gt;

&lt;p&gt;If you’re thinking about applying to the AWS Community Builders, my biggest advice is simple: start sharing what you learn.&lt;/p&gt;

&lt;p&gt;You don’t need to be an expert in everything. In fact, many of the articles I wrote started with something I had just learned while working on a real problem. Instead of keeping that knowledge to myself, I turned those experiences into tutorials and shared them with the community.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One thing that helped me a lot was writing about real challenges. Explaining how you solved a problem—whether it’s a database migration, an architecture decision, or a troubleshooting process—creates content that is actually useful for others.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Another important thing is consistency. You don’t need to publish something every week, but sharing your learning journey over time shows that you’re actively contributing to the ecosystem.&lt;/p&gt;

&lt;p&gt;Finally, try to engage with the community whenever you can. Platforms like Dev.to or AWS re:Post are great places to both learn from others and help people solve real problems.&lt;/p&gt;

&lt;p&gt;At the end of the day, the goal isn’t just to get accepted into the program. The real value comes from learning in public and helping others along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  A New Beginning
&lt;/h2&gt;

&lt;p&gt;Becoming an AWS Community Builder is a milestone, but more importantly, it’s a beginning. It’s an invitation to learn more, share more, and connect with some of the brightest minds in the industry.&lt;/p&gt;

&lt;p&gt;Are you planning to apply for the next round? Let me know in the comments if you have any questions about the process!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>community</category>
      <category>database</category>
      <category>devto</category>
    </item>
    <item>
      <title>Secure Terraform CI/CD on AWS with GitHub Actions (OIDC + Remote State)</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sun, 08 Feb 2026 21:40:52 +0000</pubDate>
      <link>https://dev.to/ismailg/secure-terraform-cicd-on-aws-with-github-actions-oidc-remote-state-2eg6</link>
      <guid>https://dev.to/ismailg/secure-terraform-cicd-on-aws-with-github-actions-oidc-remote-state-2eg6</guid>
      <description>&lt;p&gt;For CI/CD processes to work well, they need to be secure and repeatable. Without a strong authentication system and a consistent state management strategy, infrastructure automation quickly becomes vulnerable to security threats.&lt;br&gt;
This blog post explains how to set up a remote Terraform backend with state locking using Amazon S3 and DynamoDB. We will also use OIDC to set up keyless authentication from GitHub Actions to Amazon Web Services (AWS).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon S3 – remote state storage (versioned &amp;amp; encrypted)&lt;/li&gt;
&lt;li&gt;DynamoDB – state locking&lt;/li&gt;
&lt;li&gt;AWS KMS – encryption&lt;/li&gt;
&lt;li&gt;GitHub Actions – CI/CD automation&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Why This Setup Matters
&lt;/h2&gt;

&lt;p&gt;In traditional CI pipelines, AWS access keys are stored as long-lived secrets. This approach is risky: the credentials can leak, key rotation is tedious but necessary, and if the CI system is compromised, your AWS account is compromised with it.&lt;/p&gt;

&lt;p&gt;OpenID Connect (OIDC) solves this problem by letting GitHub Actions get an IAM role dynamically using short-lived credentials from AWS STS.&lt;/p&gt;

&lt;p&gt;Terraform also needs to use a remote backend to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prevent state corruption from concurrent runs.&lt;/li&gt;
&lt;li&gt;Protect sensitive values stored in the state.&lt;/li&gt;
&lt;li&gt;Enable collaboration across the team.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture addresses both concerns, credential security and state management, in a way that is simple to operate and scales with your needs.&lt;/p&gt;

&lt;p&gt;High-Level Architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions requests an OIDC identity token&lt;/li&gt;
&lt;li&gt;AWS validates the token using IAM OIDC Provider&lt;/li&gt;
&lt;li&gt;An IAM Role is assumed via sts:AssumeRoleWithWebIdentity&lt;/li&gt;
&lt;li&gt;Terraform runs with temporary credentials&lt;/li&gt;
&lt;li&gt;State is stored in encrypted S3, locked via DynamoDB&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 1: Create AWS OIDC Provider
&lt;/h2&gt;

&lt;p&gt;To allow GitHub Actions to authenticate with AWS, an OIDC provider must be configured in AWS IAM. Before that, if you do not have the AWS CLI configured on your local machine, set it up first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS CLI Setup (macOS)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Homebrew
brew install awscli

aws --version

aws configure

# Enter the following when prompted:
AWS Access Key ID [None]: &amp;lt;your-accesskey&amp;gt;
AWS Secret Access Key [None]: &amp;lt;your-secret-accesskey&amp;gt;
Default region name [None]: &amp;lt;region-name&amp;gt;
Default output format [None]: json

# Account test
aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create the OIDC Provider&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the following command using AWS CLI or create the provider via the AWS Console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 6938fd4d98bab03faadb97b34396831e3780aea1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enables AWS to validate GitHub-issued identity tokens.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create IAM Role for GitHub Actions
&lt;/h2&gt;

&lt;p&gt;Next, an IAM role must be created so that GitHub Actions workflows can assume it using sts:AssumeRoleWithWebIdentity.&lt;/p&gt;

&lt;p&gt;This role explicitly defines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who can assume it (GitHub Actions)&lt;/li&gt;
&lt;li&gt;From which repository it can be assumed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Create GitHubActionsRole with trust policy sts:AssumeRoleWithWebIdentity&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::YOUR_ACCOUNT_ID:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:YOUR_GITHUB_USERNAME/YOUR_REPO_NAME:*"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Attach IAM Policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For bootstrap simplicity, we attach AdministratorAccess.&lt;br&gt;
 Important:&lt;br&gt;
 In real production environments, replace this with least-privilege policies.&lt;/p&gt;
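
&lt;p&gt;If you prefer the CLI, a minimal sketch of creating this role and attaching the policy looks like the following, assuming the trust policy above is saved locally as trust-policy.json:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the role using the trust policy document shown above
aws iam create-role \
  --role-name GitHubActionsRole \
  --assume-role-policy-document file://trust-policy.json

# Attach AdministratorAccess for the bootstrap only; use least-privilege policies in production
aws iam attach-role-policy \
  --role-name GitHubActionsRole \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;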

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sj5ipis8ny7kzkyw0mh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sj5ipis8ny7kzkyw0mh.png" alt=" " width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Configure GitHub Repository Secret
&lt;/h2&gt;

&lt;p&gt;GitHub Actions must now be informed which IAM role to assume.&lt;/p&gt;

&lt;p&gt;In the GitHub repository: Settings → Secrets and variables → Actions → New repository secret&lt;/p&gt;

&lt;p&gt;Create the following secret:&lt;/p&gt;

&lt;p&gt;AWS_ROLE_ARN = arn:aws:iam::YOUR_ACCOUNT_ID:role/GitHubActionsRole&lt;/p&gt;
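
&lt;p&gt;Alternatively, the same secret can be created from the terminal with the GitHub CLI, assuming gh is installed and authenticated for the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gh secret set AWS_ROLE_ARN --body "arn:aws:iam::YOUR_ACCOUNT_ID:role/GitHubActionsRole"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
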
&lt;h2&gt;
  
  
  Step 5: Terraform Remote State
&lt;/h2&gt;

&lt;p&gt;We use a one-time bootstrap workflow to provision:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; S3 bucket (versioning + encryption)&lt;/li&gt;
&lt;li&gt; DynamoDB table (state locking)&lt;/li&gt;
&lt;li&gt; KMS key (state encryption)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repository Structure&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform-remote-state/
├── main.tf
├── providers.tf
├── variables.tf
├── terraform.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From my GitHub repository you can check the terraform files:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/skysea-devops/aws-private-infrastructure-terraform-githubactions" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Bootstrap GitHub Actions Workflow
&lt;/h2&gt;

&lt;p&gt;Below is the final bootstrap workflow. It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses OIDC for AWS auth&lt;/li&gt;
&lt;li&gt;Accepts the S3 bucket name as an input&lt;/li&gt;
&lt;li&gt;Pins the Terraform version&lt;/li&gt;
&lt;li&gt;Verifies the AWS identity before provisioning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;bootstrap.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This workflow creates the foundational infrastructure for Terraform:
# - S3 bucket for state storage with encryption and versioning
# - DynamoDB table for state locking (prevents concurrent modifications)
# - KMS key for encrypting state files and secrets
#
# Run this ONCE before deploying main infrastructure

name: Bootstrap 

on:  
  workflow_dispatch:

permissions:
  contents: read
  id-token: write

env:
  AWS_REGION: us-east-1
  TF_VERSION: 1.5.0  

jobs: 
  bootstrap:  
    runs-on: ubuntu-latest 

    defaults:
      run:
        working-directory: terraform-remote-state

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}
          role-session-name: GitHubActions-Bootstrap

      - name: Verify AWS identity
        run: |
          echo "Authenticated as:"
          aws sts get-caller-identity

          ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
          echo "AWS Account ID: $ACCOUNT_ID"
          echo "AWS Region: ${{ env.AWS_REGION }}"

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: |
          terraform plan -out=plan.tfplan

      - name: Terraform Apply
        run: terraform apply -auto-approve plan.tfplan

      - name: Terraform Output
        run: terraform output


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Run the Bootstrap Workflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to GitHub Actions&lt;/li&gt;
&lt;li&gt;Select Bootstrap&lt;/li&gt;
&lt;li&gt;Click Run workflow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwmyvxbjuqodz1mj5ekb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwmyvxbjuqodz1mj5ekb.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8: Store Terraform Outputs as GitHub Secrets
&lt;/h2&gt;

&lt;p&gt;After completion, Terraform outputs values required by all future environments. Store these as GitHub Secrets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;TF_STATE_BUCKET&lt;/code&gt; – S3 bucket name&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TF_LOCK_TABLE&lt;/code&gt; – DynamoDB table name&lt;/li&gt;
&lt;li&gt;&lt;code&gt;KMS_KEY_ARN&lt;/code&gt; – KMS key ARN&lt;/li&gt;
&lt;/ul&gt;
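
&lt;p&gt;In later workflows, these secrets can be passed to Terraform when initializing the S3 backend. A minimal sketch, assuming the environment's Terraform code declares an empty backend "s3" {} block; the state key path below is just a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init \
  -backend-config="bucket=${{ secrets.TF_STATE_BUCKET }}" \
  -backend-config="dynamodb_table=${{ secrets.TF_LOCK_TABLE }}" \
  -backend-config="kms_key_id=${{ secrets.KMS_KEY_ARN }}" \
  -backend-config="key=envs/dev/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="encrypt=true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;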

</description>
      <category>aws</category>
      <category>github</category>
      <category>cicd</category>
      <category>devops</category>
    </item>
    <item>
      <title>Solving Frontend-Lambda Timeout Issues with AppSync Asynchronous Execution</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sat, 29 Nov 2025 16:33:36 +0000</pubDate>
      <link>https://dev.to/ismailg/solving-frontend-lambda-timeout-issues-with-appsync-asynchronous-execution-2p93</link>
      <guid>https://dev.to/ismailg/solving-frontend-lambda-timeout-issues-with-appsync-asynchronous-execution-2p93</guid>
      <description>&lt;p&gt;A common issue in serverless applications: the frontend receives a timeout error while CloudWatch logs show the Lambda function completed successfully. Users see failed requests, but backend operations succeed.&lt;/p&gt;

&lt;p&gt;When a Lambda function is called synchronously, the API waits for it to complete and return a response.  For long-running tasks, this might cause considerable delays.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical timeout constraints:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Maximum Timeout&lt;/th&gt;
&lt;th&gt;Configurable&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Lambda Function&lt;/td&gt;
&lt;td&gt;15 minutes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API Gateway (REST)&lt;/td&gt;
&lt;td&gt;29 seconds&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AppSync (GraphQL)&lt;/td&gt;
&lt;td&gt;30 seconds&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  The Solution: AppSync Asynchronous Lambda Execution
&lt;/h2&gt;

&lt;p&gt;AWS AppSync provides asynchronous Lambda resolver support. Asynchronous execution lets a GraphQL mutation trigger a Lambda function without waiting for it to finish. The resolver returns immediately, bypassing the 30-second timeout limit.&lt;/p&gt;

&lt;p&gt;With this pattern, the frontend is no longer tied to the duration of the Lambda execution. This enables long-running workflows to complete in the background.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Before (Synchronous):
Frontend → "Start job" → Wait 30s → Timeout ❌
                           Lambda still running...

After (Asynchronous):
Frontend → "Start job" → Get job ID immediately ✅
Lambda runs independently → Updates result → Frontend gets notified ✅
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;When a GraphQL mutation is invoked with an async handler, AppSync invokes the Lambda function using Event invocation type (asynchronous mode). It returns a response—typically containing a job identifier—without waiting for Lambda completion.&lt;/p&gt;
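
&lt;p&gt;The Event invocation type is the same asynchronous mode you get when invoking a Lambda function directly. As a rough illustration with the AWS CLI (function name and payload are placeholders), an Event invocation returns HTTP 202 immediately while the function keeps running in the background:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke \
  --function-name my-long-running-job \
  --invocation-type Event \
  --payload '{"jobId": "123"}' \
  --cli-binary-format raw-in-base64-out \
  response.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;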

&lt;p&gt;The Lambda function then executes independently in the background. The frontend retrieves results through two methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time updates: GraphQL subscriptions notify the client when data changes&lt;/li&gt;
&lt;li&gt;Polling: Periodic GraphQL queries check job status at defined intervals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture eliminates the 30-second AppSync resolver timeout limitation while maintaining a responsive user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation with AWS Amplify Gen 2
&lt;/h2&gt;

&lt;p&gt;For Amplify applications using AppSync, AWS provides native support for asynchronous Lambda resolvers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The frontend triggers a GraphQL mutation.&lt;/li&gt;
&lt;li&gt;AppSync invokes the Lambda function in asynchronous mode and immediately returns a task reference.&lt;/li&gt;
&lt;li&gt;The Lambda executes independently.&lt;/li&gt;
&lt;li&gt;Results are written to a datastore.&lt;/li&gt;
&lt;li&gt;The frontend retrieves results via follow-up GraphQL queries, or AppSync subscriptions (real-time updates).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS Documentation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.amplify.aws/react/build-a-backend/data/custom-business-logic/#async-function-handlers" rel="noopener noreferrer"&gt;https://docs.amplify.aws/react/build-a-backend/data/custom-business-logic/#async-function-handlers&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Get Hands-On with Amazon RDS Using AWS’s Getting Started Resource Center</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sat, 02 Aug 2025 10:15:35 +0000</pubDate>
      <link>https://dev.to/ismailg/get-hands-on-with-amazon-rds-using-awss-getting-started-resource-center-4gpa</link>
      <guid>https://dev.to/ismailg/get-hands-on-with-amazon-rds-using-awss-getting-started-resource-center-4gpa</guid>
      <description>&lt;p&gt;Understanding Amazon RDS (Relational Database Service) is essential for anyone seeking to gain expertise in cloud technology.  You can't beat getting your hands on some real-world experience with managed databases, cloud-native application deployment, or even just learning the ropes of Amazon Web Services (AWS) certification.&lt;/p&gt;

&lt;p&gt;Fortunately, the 'Getting Started Resource Center' on AWS provides a curated set of practical lessons tailored to RDS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1z1eh0h6n9lec7vizpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1z1eh0h6n9lec7vizpq.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS RDS (Relational Database Service)
&lt;/h2&gt;

&lt;p&gt;Amazon RDS is a managed relational database service provided by AWS (Amazon Web Services). It lets you quickly create, operate, and scale databases in the cloud without worrying about the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;Launching and managing cloud-based relational databases is easy with Amazon RDS. It supports multiple engines, including MariaDB, SQL Server, PostgreSQL, and MySQL, and it automates tedious operational tasks such as backups, scaling, replication, and patching.&lt;/p&gt;

&lt;p&gt;Building RDS knowledge gives you the ability to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision databases that are both secure and scalable&lt;/li&gt;
&lt;li&gt;Configure redundancy and high availability&lt;/li&gt;
&lt;li&gt;Monitor performance and automate routine operations&lt;/li&gt;
&lt;li&gt;Manage database instances and integrate third-party applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These concepts can feel abstract until you have hands-on experience launching, connecting to, and managing a database. That is where AWS's practical guides prove their value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hands-On RDS Tutorials Currently Available
&lt;/h2&gt;

&lt;p&gt;As of now, AWS offers three dedicated hands-on labs for Amazon RDS, each addressing a key learning scenario:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/getting-started/hands-on/create-mysql-db/?ref=gsrchandson" rel="noopener noreferrer"&gt;Create and Connect to a MySQL Database&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Ideal for beginners&lt;/li&gt;
&lt;li&gt;Learn how to launch a MySQL RDS instance, configure access, and connect with a client&lt;/li&gt;
&lt;li&gt;Free Tier–eligible&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/getting-started/hands-on/create-microsoft-sql-db/?ref=gsrchandson&amp;amp;id=updated" rel="noopener noreferrer"&gt;Create and Connect to a Microsoft SQL Server Database&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Similar structure, but uses SQL Server as the database engine&lt;/li&gt;
&lt;li&gt;Great for Windows-centric or enterprise developers&lt;/li&gt;
&lt;li&gt;Learn connectivity, security, and basic DB management&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/getting-started/hands-on/amazon-rds-backup-restore-using-aws-backup/?ref=gsrchandson&amp;amp;id=itprohandson" rel="noopener noreferrer"&gt;Amazon RDS Backup &amp;amp; Restore Using AWS Backup&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Learn how to create an on-demand backup job for an Amazon RDS database&lt;/li&gt;
&lt;li&gt;Practice backup planning and restore workflows&lt;/li&gt;
&lt;li&gt;Valuable for DevOps and system reliability engineers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why AWS Hands-on Tutorials Are Valuable
&lt;/h2&gt;

&lt;p&gt;While the number of RDS-related hands-on tutorials is currently limited, they cover core operational skills that are widely applicable:&lt;/p&gt;

&lt;p&gt;Almost every cloud project requires creating and connecting to a database, and data availability in production depends on solid backup and restore processes.&lt;/p&gt;

&lt;p&gt;Working with Microsoft SQL Server configurations also builds skills that transfer to managing other database engines.&lt;/p&gt;
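
&lt;p&gt;As a quick taste of the most basic of these skills, connecting to a MySQL RDS instance from a terminal looks roughly like this; the endpoint, user, and database name are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Connect with the MySQL client (you will be prompted for the password)
mysql -h mydatabase.xxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u admin -p mydatabase
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;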

&lt;p&gt;Don’t just read about RDS—build with it. Let's start with this:&lt;br&gt;
&lt;a href="https://aws.amazon.com/getting-started/hands-on/amazon-rds-backup-restore-using-aws-backup/?ref=gsrchandson&amp;amp;id=itprohandson" rel="noopener noreferrer"&gt;https://aws.amazon.com/getting-started/hands-on/amazon-rds-backup-restore-using-aws-backup/?ref=gsrchandson&amp;amp;id=itprohandson&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>aws</category>
      <category>database</category>
    </item>
    <item>
      <title>Using SSL with a PostgreSQL DB Instance</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sun, 15 Jun 2025 13:22:17 +0000</pubDate>
      <link>https://dev.to/ismailg/using-ssl-with-a-postgresql-db-instance-10e9</link>
      <guid>https://dev.to/ismailg/using-ssl-with-a-postgresql-db-instance-10e9</guid>
      <description>&lt;p&gt;Protecting any app that deals with sensitive information means making sure that it is safe while it is being sent. When you host PostgreSQL on Amazon RDS, it enables Secure Sockets Layer (SSL) connections. &lt;/p&gt;

&lt;p&gt;This means that data transfers between your app and the database can be protected. This makes sure that private information is safe from being intercepted or changed while it is being sent.&lt;/p&gt;

&lt;p&gt;This post will show you how to use SSL with a PostgreSQL DB instance on Amazon RDS, including what you need to do first, how to set it up, and the best ways to do so.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling SSL on Your RDS PostgreSQL Instance
&lt;/h2&gt;

&lt;p&gt;Amazon RDS for PostgreSQL supports SSL by default, but to enforce it and configure your client correctly, a few additional steps are needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Check the SSL Configuration
&lt;/h3&gt;

&lt;p&gt;Go to your RDS instance in the AWS Console and review the associated parameter group. If you are using PostgreSQL version 15 or newer, rds.force_ssl may be enforced by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx63knzge7ya16t5l8zyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx63knzge7ya16t5l8zyd.png" alt=" " width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to RDS &amp;gt; Databases &amp;gt; [your database] &amp;gt; Configuration&lt;/p&gt;

&lt;p&gt;Open the linked Parameter group&lt;/p&gt;

&lt;p&gt;Find the rds.force_ssl parameter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If set to 1, SSL is required.&lt;/li&gt;
&lt;li&gt;If set to 0, SSL is optional.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqufyqgtspwk5jyzhkeir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqufyqgtspwk5jyzhkeir.png" alt=" " width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. How to Enforce SSL in RDS (If SSL is not Enforced in RDS)
&lt;/h3&gt;

&lt;p&gt;If the rds.force_ssl parameter is 0, you must set it to 1. Default parameter groups in AWS RDS are read-only and cannot be modified, so to enable rds.force_ssl = 1 you must create a custom parameter group.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create a Custom Parameter Group:
&lt;/h4&gt;


&lt;p&gt;Go to RDS → Parameter groups → Click “Create parameter group”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fto77az61mhavn87ao13n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fto77az61mhavn87ao13n.png" alt=" " width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fill in the fields as follows:

&lt;ul&gt;
&lt;li&gt;Parameter group family: postgres14&lt;/li&gt;
&lt;li&gt;Group name: custom-postgres14-ssl&lt;/li&gt;
&lt;li&gt;Description: Enable SSL for PostgreSQL 14&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Click Create&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Set rds.force_ssl = 1 in your new parameter group:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Select your newly created parameter group&lt;/li&gt;
&lt;li&gt;Click “Edit parameters”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xlbyob8rgouf82awxl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xlbyob8rgouf82awxl3.png" alt=" " width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search for rds.force_ssl and change its value from 0 ➝ 1&lt;/li&gt;
&lt;li&gt;Click Save
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu66g9g03nn2mcksbqheo.png" alt=" " width="800" height="396"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Attach the custom parameter group to your RDS instance:
&lt;/h4&gt;

&lt;p&gt;Go to RDS → Databases → Click your instance (database-1)&lt;br&gt;
Click “Modify”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb3crpljtv42epdesip9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb3crpljtv42epdesip9.png" alt=" " width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the DB parameter group dropdown, select the custom group: &lt;br&gt;
custom-postgres14-ssl&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ycr2hkguku3u0ciz9h7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ycr2hkguku3u0ciz9h7.png" alt=" " width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll to the bottom and choose 'Apply immediately'.&lt;/p&gt;

&lt;p&gt;Click “Continue” and then “Apply changes”&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Download the AWS RDS Root Certificate
&lt;/h3&gt;

&lt;p&gt;To establish a secure SSL connection, you must download the root certificate authority (CA) file from AWS.&lt;/p&gt;

&lt;p&gt;You can find the latest region-specific certificates in the AWS documentation, Using SSL with Amazon RDS PostgreSQL, linked at the end of this post.&lt;/p&gt;
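
&lt;p&gt;For example, the combined global certificate bundle that AWS documents can be downloaded directly; verify the URL against the documentation for your region:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -O https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;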

&lt;h2&gt;
  
  
  Connecting with SSL
&lt;/h2&gt;

&lt;p&gt;Once you’ve downloaded the certificate, you can connect to your RDS PostgreSQL instance using several methods.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using a GUI Tool (e.g., DBeaver)
&lt;/h3&gt;

&lt;p&gt;In your tool of choice, create a new PostgreSQL connection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the RDS endpoint, port (5432), database name, and credentials.&lt;/li&gt;
&lt;li&gt;Under SSL settings:

&lt;ul&gt;
&lt;li&gt;Enable SSL (usually a checkbox).&lt;/li&gt;
&lt;li&gt;Set SSL Mode to require or verify-ca.&lt;/li&gt;
&lt;li&gt;Upload the RDS Root CA certificate you previously downloaded.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using Terminal (psql CLI)
&lt;/h3&gt;

&lt;p&gt;You can also connect securely via the terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;psql "host=mydb.xxxxxx.rds.amazonaws.com port=5432 dbname=mydb user=myuser password=mypass sslmode=verify-full sslrootcert=rds-ca-2019-root.pem"&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sslmode=verify-full&lt;/code&gt;: ensures both certificate and hostname validation.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sslrootcert&lt;/code&gt;: path to the downloaded certificate file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This configuration helps ensure both confidentiality and integrity of the data transmitted.&lt;/p&gt;
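
&lt;p&gt;Once connected, you can confirm that your session is actually encrypted by querying pg_stat_ssl for your own backend; this reuses the placeholder connection string from above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql "host=mydb.xxxxxx.rds.amazonaws.com port=5432 dbname=mydb user=myuser sslmode=verify-full sslrootcert=rds-ca-2019-root.pem" \
  -c "SELECT ssl, version, cipher FROM pg_stat_ssl WHERE pid = pg_backend_pid();"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;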

&lt;p&gt;For more information and certificate downloads, refer to the official documentation:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Concepts.General.SSL.html" rel="noopener noreferrer"&gt;Using SSL with Amazon RDS PostgreSQL&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rds</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Migrating from MongoDB to Amazon DocumentDB</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sat, 17 May 2025 12:22:46 +0000</pubDate>
      <link>https://dev.to/ismailg/migrating-from-mongodb-to-amazon-documentdb-4eo3</link>
      <guid>https://dev.to/ismailg/migrating-from-mongodb-to-amazon-documentdb-4eo3</guid>
      <description>&lt;p&gt;Modern applications today often use document databases. For years, MongoDB has been the preferred choice for developers to build applications using JSON-like document data structures. However, a move to a fully managed service like Amazon DocumentDB is attractive when workloads increase.&lt;/p&gt;

&lt;p&gt;Built from the ground up, Amazon DocumentDB (with MongoDB compatibility) is highly available, robust, and scalable. It supports common MongoDB drivers and tools, making it easy to move teams without changing application code. This article will guide you step-by-step through migrating data from MongoDB to Amazon DocumentDB using the AWS Database Migration Service (DMS).&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin the migration, make sure you have the following:&lt;/p&gt;

&lt;p&gt;An Amazon DocumentDB cluster already created and available in your AWS account.&lt;/p&gt;

&lt;p&gt;We will use AWS Database Migration Service (DMS) to migrate data from a MongoDB database to Amazon DocumentDB. This will work with minimal downtime and will not require the export and import of collections manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an AWS DMS Replication Instance for MongoDB Migration
&lt;/h2&gt;

&lt;p&gt;The replication instance is responsible for the actual migration process.   It establishes a connection to the source and target endpoints, extracts the data, transforms it (if necessary), and then inserts it into the destination.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Access the AWS DMS Console and navigate to the Replication instances section.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhntl66wungdg50wjzmcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhntl66wungdg50wjzmcr.png" alt=" " width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Select "Create replication instance."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3r411hdu1jli8375hdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3r411hdu1jli8375hdu.png" alt=" " width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pick an instance identifier and select an instance class, such as dms.t3.medium.&lt;/li&gt;
&lt;li&gt;Select the appropriate virtual private cloud (VPC). This must be the same VPC that hosts your DocumentDB cluster.&lt;/li&gt;
&lt;li&gt;Enable Multi-AZ if high availability is required.&lt;/li&gt;
&lt;li&gt;Click Create and wait for the instance to reach the Available status.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create Source and Target Endpoints for MongoDB Migration
&lt;/h2&gt;

&lt;p&gt;Once the replication instance is ready, create source and target endpoints to define where the data will be moved from and to.&lt;/p&gt;

&lt;p&gt;Source Endpoint (MongoDB):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8wn7we4ycdd4icgu1bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8wn7we4ycdd4icgu1bc.png" alt=" " width="635" height="774"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Endpoint type: Source&lt;/li&gt;
&lt;li&gt;Engine: MongoDB&lt;/li&gt;
&lt;li&gt;Server name: MongoDB hostname or IP address&lt;/li&gt;
&lt;li&gt;Port: 27017&lt;/li&gt;
&lt;li&gt;Database name: your MongoDB database (e.g., zips-db)&lt;/li&gt;
&lt;li&gt;Authentication mode: default&lt;/li&gt;
&lt;li&gt;Username and password: credentials for your MongoDB instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Target Endpoint (Amazon DocumentDB):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwnd3dshfrokf479xf5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwnd3dshfrokf479xf5g.png" alt=" " width="620" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Endpoint type: Target&lt;/li&gt;
&lt;li&gt;Engine: MongoDB&lt;/li&gt;
&lt;li&gt;Server name: your DocumentDB cluster endpoint (e.g., mydocdbcluster.cluster-xxxxxx.docdb.amazonaws.com)&lt;/li&gt;
&lt;li&gt;Port: 27017&lt;/li&gt;
&lt;li&gt;Database name: target database name&lt;/li&gt;
&lt;li&gt;Authentication: enter your DocumentDB admin username and password&lt;/li&gt;
&lt;li&gt;TLS mode: verify-full&lt;/li&gt;
&lt;li&gt;TLS CA file: upload global-bundle.pem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To add the TLS CA file, first download the global CA certificate. You can find the TLS certificate download command directly in the Amazon DocumentDB console under your cluster's Connectivity &amp;amp; security tab.&lt;/p&gt;
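
&lt;p&gt;The command shown in the console is typically a direct download of the shared certificate bundle, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;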

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj7111fcohizpztwq7eo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj7111fcohizpztwq7eo.png" alt=" " width="800" height="524"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;After both endpoints are created, test the connections to verify that DMS can reach each database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo750r6xizoquvfggl9q4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo750r6xizoquvfggl9q4.png" alt=" " width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create and Run a MongoDB Migration Task
&lt;/h2&gt;

&lt;p&gt;Now, you can define and launch the migration task.&lt;/p&gt;

&lt;p&gt;In the DMS Console, go to Database migration tasks and click Create task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq4nfv97t5i2teef4io5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq4nfv97t5i2teef4io5.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose a task identifier, and select your previously created replication instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2pwsrplaa5d52s0gavm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2pwsrplaa5d52s0gavm.png" alt=" " width="661" height="489"&gt;&lt;/a&gt;&lt;br&gt;
For migration type, choose one of the following:&lt;/p&gt;

&lt;p&gt;Migrate existing data only&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3yysjdrkywhtdsp2g5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3yysjdrkywhtdsp2g5y.png" alt=" " width="667" height="699"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Important: Turn off data validation. This feature is not supported for MongoDB endpoints.&lt;/p&gt;

&lt;p&gt;Under Table mappings, click Add new selection rule (you must select at least one) :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few5cvi99n7jrkh5ujyqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few5cvi99n7jrkh5ujyqh.png" alt=" " width="608" height="624"&gt;&lt;/a&gt;&lt;br&gt;
Schema: your MongoDB database name (e.g., zips-db)&lt;/p&gt;

&lt;p&gt;Source table name: % &lt;/p&gt;

&lt;p&gt;Action: Include&lt;/p&gt;

&lt;p&gt;Choose to start the task automatically on create.&lt;/p&gt;

&lt;p&gt;Once created, the task will begin migrating your data to Amazon DocumentDB. You can monitor progress via the DMS console and view logs for detailed insights.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzre4377l875fsu784lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzre4377l875fsu784lw.png" alt=" " width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;
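
&lt;p&gt;Once the task reports that the full load is complete, you can spot-check the migrated data by connecting to the DocumentDB cluster with mongosh and counting documents; the hostname, user, database, and collection below are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongosh --host mydocdbcluster.cluster-xxxxxx.docdb.amazonaws.com:27017 \
  --tls --tlsCAFile global-bundle.pem --username docdbadmin \
  --eval 'db.getSiblingDB("zips-db").zips.countDocuments()'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;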

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;AWS DMS makes migrating from MongoDB to Amazon DocumentDB a straightforward task. Following the procedure outlined above helps you reduce downtime and move your document-based workloads to a fully managed environment that grows with your requirements. DocumentDB gives you AWS security, scalability, and reliability without giving up MongoDB compatibility.&lt;/p&gt;

&lt;p&gt;If integration with other AWS services, operational simplicity, and long-term maintainability matter to you, this migration can be a natural next step in evolving your architecture.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>mongodb</category>
    </item>
    <item>
      <title>How to Set Up AWS EFS Static Provisioning Across Multiple Kubernetes Namespaces</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Fri, 11 Apr 2025 15:57:48 +0000</pubDate>
      <link>https://dev.to/ismailg/how-to-set-up-aws-efs-static-provisioning-across-multiple-kubernetes-namespaces-58i2</link>
      <guid>https://dev.to/ismailg/how-to-set-up-aws-efs-static-provisioning-across-multiple-kubernetes-namespaces-58i2</guid>
      <description>&lt;p&gt;Bitnami PostgreSQL is a widely-used container image with the default being to run safely as a non-root user. But persistent storage—especially shared storage between environments such as dev and test—becomes a problem. Here in this blog post, I'll walk you through how I used AWS EFS static provisioning to share storage between two namespaces with Bitnami PostgreSQL running on Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Static Provisioning?
&lt;/h2&gt;

&lt;p&gt;While dynamic provisioning is convenient, static provisioning offers full control. You manually define a PersistentVolume (PV) that points to an AWS EFS file system or access point, which is ideal when the same storage must serve multiple environments (e.g., dev and test). Concretely, static provisioning gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full control over PersistentVolume (PV) setup.&lt;/li&gt;
&lt;li&gt;A way to reuse the same EFS volume across different namespaces.&lt;/li&gt;
&lt;li&gt;Simpler debugging for permission or access issues.&lt;/li&gt;
&lt;li&gt;No need to define a StorageClass&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What We’re Building
&lt;/h2&gt;

&lt;p&gt;A PostgreSQL setup running in two separate namespaces: dev and test&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both environments mount the same EFS volume&lt;/li&gt;
&lt;li&gt;PostgreSQL data is shared &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabd75wnshl0ms93ru5il.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabd75wnshl0ms93ru5il.png" alt=" " width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A running Kubernetes cluster (K3s, EKS, etc.)&lt;/li&gt;
&lt;li&gt;An AWS EFS file system already created&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;

&lt;p&gt;Your repo should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deployment-files/
├── deployment-dev/
│   └── pv-dev.yml, pvc-dev.yml, postgres.yml
└── deployment-test/
    └── pv-test.yml, pvc-test.yml, postgres*.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;My GitLab repo: &lt;a href="https://gitlab.com/samueldeniz80/aws-efs-static-provisioning-bitnami-postgresqls/-/tree/main/deployment-files?ref_type=heads" rel="noopener noreferrer"&gt;https://gitlab.com/samueldeniz80/aws-efs-static-provisioning-bitnami-postgresqls/-/tree/main/deployment-files?ref_type=heads&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Create EFS Access Point
&lt;/h2&gt;

&lt;p&gt;To prevent permission issues when mounting EFS across namespaces, create an Access Point from the AWS Console with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User ID: 1001&lt;/li&gt;
&lt;li&gt;Group ID: 1001&lt;/li&gt;
&lt;li&gt;Permissions: 0775&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvnmwktdycw89v59zupm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvnmwktdycw89v59zupm.png" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install EFS CSI Driver in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.7"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use Helm for EFS CSI Driver installation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Define the PV and PVC
&lt;/h2&gt;

&lt;p&gt;Set your PV's volumeHandle with both the EFS file system ID and the access point ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumeHandle: fs-&amp;lt;file-system-id&amp;gt;::fsap-&amp;lt;access-point-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Leave storageClassName empty.&lt;/p&gt;

&lt;p&gt;pv-dev.yml:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8tyjyxur4l7zs8ioxnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8tyjyxur4l7zs8ioxnu.png" alt=" " width="800" height="392"&gt;&lt;/a&gt;&lt;br&gt;
For the PVC, also leave storageClassName empty.&lt;/p&gt;

&lt;p&gt;pvc-dev.yml:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86gwr6xk4p62i9v068uv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86gwr6xk4p62i9v068uv.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;
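
&lt;p&gt;As a reference, here is a minimal sketch of what pv-dev.yml and pvc-dev.yml typically look like for EFS static provisioning; names, sizes, and IDs are placeholders, and the actual manifests are in the linked repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# pv-dev.yml (EFS file system and access point IDs are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-dev
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-&amp;lt;file-system-id&amp;gt;::fsap-&amp;lt;access-point-id&amp;gt;
---
# pvc-dev.yml (binds to the PV above)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc-dev
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: efs-pv-dev
  resources:
    requests:
      storage: 5Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;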

&lt;p&gt;Create the PV and PVC the same way for the test namespace. The test namespace PV will also point to the same EFS access point.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Configure PostgreSQL Deployment
&lt;/h2&gt;

&lt;p&gt;Make sure the deployment sets fsGroup: 1001 in its pod-level securityContext so that the group on the mounted volume matches the EFS Access Point permissions.&lt;/p&gt;
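&lt;p&gt;A minimal sketch of where this sits in the deployment manifest (only the relevant fields are shown):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  template:
    spec:
      securityContext:
        fsGroup: 1001   # matches the POSIX user/group configured on the EFS Access Point
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;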
&lt;h2&gt;
  
  
  Step 4: Deploy Namespaces
&lt;/h2&gt;

&lt;p&gt;Deploy to dev:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace dev
kubectl apply -f deployment-files/deployment-dev/ -n dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the PostgreSQL PV and PVC are bound and the postgres pod is running, then check the pod logs.&lt;/p&gt;
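&lt;p&gt;For example (the deployment name is illustrative; adjust it to whatever your manifests use):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pv
kubectl get pvc -n dev
kubectl get pods -n dev
kubectl logs deployment/postgres -n dev   # deployment name is illustrative
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;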

&lt;p&gt;Deploy to test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace test
kubectl apply -f deployment-files/deployment-test/ -n test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, verify that the test PV and PVC are bound and the postgres pod is running, then check the pod logs.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hmiv1q1t9qw0yufeybe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hmiv1q1t9qw0yufeybe.png" alt=" " width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Outcome
&lt;/h2&gt;

&lt;p&gt;You now have a shared EFS volume accessed by PostgreSQL pods running in different namespaces.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automate Your Python API with AWS Lambda and EventBridge</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sun, 23 Mar 2025 07:02:54 +0000</pubDate>
      <link>https://dev.to/ismailg/automate-your-python-api-with-aws-lambda-and-eventbridge-1g8i</link>
      <guid>https://dev.to/ismailg/automate-your-python-api-with-aws-lambda-and-eventbridge-1g8i</guid>
      <description>&lt;p&gt;In situations when you want to conduct scheduled tasks without having to worry about the infrastructure, serverless architecture is an excellent option to consider.  Over the course of this piece, I will demonstrate how I utilized Amazon Web Services Lambda and Amazon EventBridge to automate a Python-based API update script that runs on a regular schedule.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Case
&lt;/h2&gt;

&lt;p&gt;Let's assume you have a website or app that needs to pull information from a third-party API at regular intervals. The information could be prices, exchange rates, weather, or analytics results, and you already have a Python script that performs this update.&lt;/p&gt;

&lt;p&gt;The goal is to run this script on a regular schedule without managing any servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create Your Lambda Function
&lt;/h2&gt;

&lt;p&gt;The first step is to prepare and package your Python script so that AWS Lambda can run it.&lt;/p&gt;

&lt;p&gt;Your working directory should contain the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── lambda_function.py
├── requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;lambda_function.py:&lt;/strong&gt;&lt;br&gt;
Name your main script lambda_function.py. Lambda looks for this conventional file name to locate and run your function.&lt;/p&gt;

&lt;p&gt;You must also define lambda_handler(event, context). This function is the entry point for AWS Lambda and is called directly whenever your function is invoked.&lt;/p&gt;
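&lt;p&gt;As a minimal sketch, a handler that pulls data from a third-party API could look like this; the URL and the update logic are placeholders for your own script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import urllib.request

API_URL = "https://example.com/api/data"  # placeholder for the third-party API


def lambda_handler(event, context):
    # Fetch the latest data from the third-party API
    with urllib.request.urlopen(API_URL, timeout=10) as response:
        data = json.loads(response.read())

    # TODO: apply your own update logic here (write to your store, call another API, etc.)
    print(f"Fetched {len(data)} records")

    return {"statusCode": 200, "body": json.dumps({"records": len(data)})}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;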

&lt;p&gt;&lt;strong&gt;requirements.txt:&lt;/strong&gt;&lt;br&gt;
requirements.txt lists all the dependencies that your code needs.&lt;/p&gt;

&lt;p&gt;After you've structured your folder and written your Lambda function code, the next step is to package your Python script along with its dependencies. AWS Lambda does not automatically install Python packages from requirements.txt — you must include all dependencies in your deployment package:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install dependencies locally into your project folder:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt -t .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a Deployment Package (ZIP):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zip -r lambda_shopify_update.zip .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Double check that you are zipping up the contents of the folder, not the folder itself. That is important — AWS needs to see the handler file (i.e., the lambda_function.py) in the root of the ZIP file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Upload to AWS Lambda
&lt;/h2&gt;

&lt;p&gt;Now go to the AWS Console:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open AWS Lambda&lt;/li&gt;
&lt;li&gt;Click “Create function” &lt;/li&gt;
&lt;li&gt;Choose “Upload from → .zip file”&lt;/li&gt;
&lt;li&gt;Upload your lambda_shopify_update.zip&lt;/li&gt;
&lt;li&gt;Test your lambda function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9wj6xrqu9kzea58mm31.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9wj6xrqu9kzea58mm31.png" alt=" " width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Automating with EventBridge
&lt;/h2&gt;

&lt;p&gt;Once your Lambda function is working correctly, the next step is to automate its execution with Amazon EventBridge. EventBridge provides serverless scheduling, so you can trigger your Lambda function on a regular schedule without running anything yourself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd8o8pu9fq2gpmnts5qh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd8o8pu9fq2gpmnts5qh.png" alt=" " width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the Amazon EventBridge section in the AWS Console.&lt;/li&gt;
&lt;li&gt;Click “Create rule” to set up an automatic trigger for your Lambda function.&lt;/li&gt;
&lt;li&gt;Under Rule type, choose “Schedule” to trigger your function based on time (rather than an event).&lt;/li&gt;
&lt;li&gt;Select “Rate-based schedule” to run your Lambda at a regular interval.&lt;/li&gt;
&lt;li&gt;Alternatively, you can use “Cron-based schedule” for more specific timing (e.g., every day at 08:00); see the expression examples after this list.&lt;/li&gt;
&lt;/ul&gt;
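&lt;p&gt;For reference, schedule expressions take one of two forms; the cron form uses EventBridge's six-field syntax and is evaluated in UTC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rate(1 hour)
cron(0 8 * * ? *)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first expression runs the function every hour; the second runs it every day at 08:00 UTC.&lt;/p&gt;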

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhb41l0c3qpyc0qo4qyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhb41l0c3qpyc0qo4qyd.png" alt=" " width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In the Target section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose AWS Lambda.&lt;/li&gt;
&lt;li&gt;Select your deployed Lambda function.&lt;/li&gt;
&lt;li&gt;Click Next.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Review your configuration, and click “Create rule”.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
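&lt;p&gt;If you prefer the CLI, the same setup can be sketched with the commands below; the rule and function names, region, and account ID are placeholders, and the console flow above is what I actually used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a scheduled rule
aws events put-rule --name api-update-schedule --schedule-expression "rate(1 hour)"

# Allow EventBridge to invoke the Lambda function
aws lambda add-permission --function-name my-api-update --statement-id eventbridge-invoke \
  --action lambda:InvokeFunction --principal events.amazonaws.com \
  --source-arn arn:aws:events:REGION:ACCOUNT_ID:rule/api-update-schedule

# Attach the Lambda function as the rule's target
aws events put-targets --rule api-update-schedule \
  --targets "Id"="1","Arn"="arn:aws:lambda:REGION:ACCOUNT_ID:function:my-api-update"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;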

&lt;p&gt;Your Lambda function is now scheduled to run automatically at the interval you defined — no manual execution needed.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>eventbridge</category>
      <category>automation</category>
    </item>
    <item>
      <title>Creating an AWS DMS Migration Task</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sun, 09 Mar 2025 16:23:54 +0000</pubDate>
      <link>https://dev.to/ismailg/creating-an-aws-dms-migration-task-4ka9</link>
      <guid>https://dev.to/ismailg/creating-an-aws-dms-migration-task-4ka9</guid>
      <description>&lt;p&gt;&lt;strong&gt;Migrating Data from Local SQL Server to AWS RDS PostgreSQL Using AWS DMS - II&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my last post, I guided you through building AWS Database Migration Service (DMS) to move data from an on-premises SQL Server to an AWS RDS PostgreSQL database instance. We went over establishing the AWS environment, configuring the source database, and building the required AWS DMS resources.&lt;/p&gt;

&lt;p&gt;In this article, we will examine the migration process itself. With both source and target endpoints configured, the next step is to create a migration task in AWS DMS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating and Running the AWS DMS Migration Task
&lt;/h2&gt;

&lt;p&gt;With both the source (SQL Server) and target (PostgreSQL) endpoints configured (if not, check &lt;a href="https://dev.to/ismailg/migrating-data-from-local-sql-server-to-aws-rds-postgresql-using-aws-dms-58k4"&gt;Migrating Data from Local SQL Server to AWS RDS PostgreSQL Using AWS DMS - I&lt;/a&gt;), it's time to create and execute a migration task.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the Migration Task
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Navigate to Database Migration Tasks:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to AWS DMS Console &amp;gt; Database migration tasks &amp;gt; Create task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Configure Task Settings:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F409akffllqelrpr3mosy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F409akffllqelrpr3mosy.png" alt=" " width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task Identifier:&lt;/strong&gt; Just give a name, for example, sqlserver-to-postgres-migration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replication instance:&lt;/strong&gt; Select the replication instance configured earlier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source and target database:&lt;/strong&gt; Choose the previously configured SQL Server source and PostgreSQL target endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Migration type:&lt;/strong&gt; Select "Migrate existing data" to perform a full load of the existing data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Task Settings:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9sw7705eycqzzzutlea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9sw7705eycqzzzutlea.png" alt=" " width="800" height="784"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Editing mode:&lt;/strong&gt; Wizard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Target table preparation mode:&lt;/strong&gt; If you want DMS to drop and recreate the tables on the target, choose "Drop tables on target". If you want to keep the existing structure and data unchanged, choose "Do nothing"; DMS then only adds new rows without altering or deleting existing data. This option is useful when you have pre-existing data that must be preserved.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Include LOB columns in replication:&lt;/strong&gt; Enable this option if your tables contain large object (LOB) data types.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable validation:&lt;/strong&gt; When you enable this option, AWS DMS verifies the row counts and checksums of the source and target databases and confirms that they are equal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Table Mapping and Transformations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddilcvzi4iiwmdohfsvz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddilcvzi4iiwmdohfsvz.png" alt=" " width="800" height="867"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Editing mode:&lt;/strong&gt; Wizard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Selection rules:&lt;/strong&gt; Define at least one selection rule to specify which schemas and tables to include in or exclude from the migration. Use % in the Source table name to include all tables (the JSON equivalent is sketched after this list).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transformation rules (optional):&lt;/strong&gt; Set transformation rules, if you need to rename schemas, tables, or columns.&lt;/li&gt;
&lt;/ul&gt;
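&lt;p&gt;If you switch the editing mode to JSON, an include-everything selection rule looks roughly like this (the rule name and ID are arbitrary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-all-tables",
      "object-locator": {
        "schema-name": "%",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;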

&lt;p&gt;&lt;strong&gt;5. Premigration assessment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4mgmrjifngmjvao9och.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4mgmrjifngmjvao9och.png" alt=" " width="800" height="324"&gt;&lt;/a&gt;&lt;br&gt;
A premigration assessment warns you about potential migration issues before the task starts. For this tutorial, I skipped the premigration assessment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Running the Migration Task&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Click Create task, then Start the migration.&lt;/p&gt;

&lt;p&gt;AWS DMS will begin extracting data from SQL Server, transforming it, and loading it into PostgreSQL.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Migrating Data from Local SQL Server to AWS RDS PostgreSQL Using AWS DMS - I</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Thu, 27 Feb 2025 09:09:24 +0000</pubDate>
      <link>https://dev.to/ismailg/migrating-data-from-local-sql-server-to-aws-rds-postgresql-using-aws-dms-58k4</link>
      <guid>https://dev.to/ismailg/migrating-data-from-local-sql-server-to-aws-rds-postgresql-using-aws-dms-58k4</guid>
      <description>&lt;p&gt;Data migration is an essential activity for those organizations that are moving from on-premises database technology to cloud offerings such as AWS RDS. An extremely helpful service that simplifies this task is the AWS Database Migration Service (DMS). With AWS DMS, for example, you can migrate data from an on-premises Microsoft SQL Server database to an AWS RDS PostgreSQL. I am going to explain the process of creating SQL Server parameters and configuring an AWS DMS replication instance in this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing the SQL Server for AWS DMS
&lt;/h2&gt;

&lt;p&gt;Before setting up AWS DMS, it is essential to configure your local SQL Server to allow external connections and ensure proper networking settings. &lt;/p&gt;

&lt;h3&gt;
  
  
  1. Enabling TCP/IP Connections in SQL Server
&lt;/h3&gt;

&lt;p&gt;By default, SQL Server does not allow remote connections unless TCP/IP is explicitly enabled. Follow these steps to enable TCP/IP connections:&lt;/p&gt;

&lt;p&gt;Open SQL Server Configuration Manager.&lt;/p&gt;

&lt;p&gt;Navigate to SQL Server Network Configuration → Protocols for MSSQLSERVER.&lt;/p&gt;

&lt;p&gt;Locate the TCP/IP protocol and right-click to select Enable.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq2wdubyqrbpujb66o5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq2wdubyqrbpujb66o5c.png" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Right-click on TCP/IP, select Properties, and navigate to the IP Addresses tab.&lt;/p&gt;

&lt;p&gt;Under the IPAll section, set TCP Port to 1433 (leave TCP Dynamic Ports blank).&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Configuring Windows Firewall Rules
&lt;/h3&gt;

&lt;p&gt;After enabling TCP/IP, you need to allow inbound connections on port 1433 through Windows Firewall:&lt;/p&gt;

&lt;p&gt;Open Windows Defender Firewall with Advanced Security.&lt;/p&gt;

&lt;p&gt;Navigate to Inbound Rules → Add New Rule.&lt;/p&gt;

&lt;p&gt;Select Port and click Next.&lt;/p&gt;

&lt;p&gt;Choose TCP and enter 1433 in the Specific local ports field.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dj5wjbmku17m23a0wrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dj5wjbmku17m23a0wrq.png" alt=" " width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39fmut6g06ipzo3d2l3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39fmut6g06ipzo3d2l3a.png" alt=" " width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose 'Allow the connection'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fm1lq682oorky6ui628.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fm1lq682oorky6ui628.png" alt=" " width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Apply the rule to Domain, Private, and Public profiles.&lt;/p&gt;

&lt;p&gt;Name the rule and complete the setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Restarting SQL Server
&lt;/h3&gt;

&lt;p&gt;For the changes to take effect, restart the SQL Server service:&lt;/p&gt;

&lt;p&gt;Open SQL Server Configuration Manager.&lt;/p&gt;

&lt;p&gt;Select SQL Server Services.&lt;/p&gt;

&lt;p&gt;Right-click SQL Server (MSSQLSERVER) and select Restart.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rr2n97801l1wa9vdlw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rr2n97801l1wa9vdlw9.png" alt=" " width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the SQL Server settings are configured, test the connection from another computer again using:&lt;/p&gt;

&lt;p&gt;sqlcmd -S IP-Address -U Username -P Password&lt;/p&gt;
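&lt;p&gt;For example (the IP address, credentials, and query are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sqlcmd -S 192.168.1.25,1433 -U dms_user -P 'YourStrongPassword' -Q "SELECT @@VERSION"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;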

&lt;p&gt;If you can connect successfully, your SQL Server is ready for migration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up AWS DMS for Migration
&lt;/h2&gt;

&lt;p&gt;Follow these steps to configure AWS DMS:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Creating a Replication Instance
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r4yhhplbekffanuxbua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r4yhhplbekffanuxbua.png" alt=" " width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam0g66d99yd28y0nyy89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam0g66d99yd28y0nyy89.png" alt=" " width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the AWS DMS Console.&lt;/p&gt;

&lt;p&gt;Select Replication Instances and click Create Replication Instance.&lt;/p&gt;

&lt;p&gt;Provide a name and choose an appropriate instance class.&lt;/p&gt;

&lt;p&gt;Make the replication instance publicly accessible, since the source SQL Server is hosted on a local (on-premises) machine.&lt;/p&gt;

&lt;h4&gt;
  
  
  Security Group Configuration
&lt;/h4&gt;

&lt;p&gt;DMS relies on security groups to control inbound and outbound traffic between the replication instance and databases. To properly configure the security group:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to EC2 Security Groups in the AWS Console.&lt;/li&gt;
&lt;li&gt;Locate the security group assigned to the replication instance.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the following Inbound Rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MSSQL (TCP/1433): Allow traffic from your local SQL Server’s IP or security group.&lt;/li&gt;
&lt;li&gt;PostgreSQL (TCP/5432): Allow traffic to the target AWS RDS PostgreSQL instance.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Add an Outbound Rule that allows all traffic to leave (egress) the VPC. This ensures communication from the replication instance to the source and target database endpoints.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Configuring Source Endpoint (SQL Server)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F339q35zv475zkkye2hcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F339q35zv475zkkye2hcr.png" alt=" " width="627" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose Source Endpoint and enter SQL Server details.&lt;/p&gt;

&lt;p&gt;Set the Endpoint Type to Source.&lt;/p&gt;

&lt;p&gt;Enter the Server Name (IP Address), Port (1433), Username, and Password.&lt;/p&gt;

&lt;p&gt;Click Test Connection and ensure it succeeds.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Configuring Target Endpoint (AWS RDS PostgreSQL)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4kqzjghyl1v331vtu28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4kqzjghyl1v331vtu28.png" alt=" " width="622" height="697"&gt;&lt;/a&gt;&lt;br&gt;
Navigate to Endpoints and select Create Endpoint.&lt;/p&gt;

&lt;p&gt;Choose Target Endpoint and enter AWS RDS PostgreSQL details.&lt;/p&gt;

&lt;p&gt;Set the Endpoint Type to Target.&lt;/p&gt;

&lt;p&gt;Enter the RDS Endpoint, Port (5432), Username, and Password.&lt;/p&gt;

&lt;p&gt;Click Test Connection and verify success.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database migration tasks
&lt;/h2&gt;

&lt;p&gt;After making those settings you are ready to create a database migration task. I will cover this in my next blog post.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>postgres</category>
      <category>sql</category>
    </item>
    <item>
      <title>AWS Global Infrastructure: Components And Benefits</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Thu, 14 Dec 2023 11:04:41 +0000</pubDate>
      <link>https://dev.to/ismailg/aws-global-infrastructure-components-and-benefits-184l</link>
      <guid>https://dev.to/ismailg/aws-global-infrastructure-components-and-benefits-184l</guid>
      <description>&lt;p&gt;The AWS Global Cloud Infrastructure stands as one of the most expansive cloud services, providing a comprehensive array of more than 200 fully-featured services accessible from data centers distributed across the world. Learning about the AWS Global Infrastructure opens the door to a modern world of cloud technology. Understanding this digital world becomes possible as you delve deeper into the many powerful tools and services that Amazon Web Services (AWS) offers worldwide.&lt;/p&gt;

&lt;p&gt;The key components of the AWS Global Cloud Infrastructure are Regions, Availability Zones, Edge Locations, Wavelength Zones, and Regional Edge Caches. Exploring these key components provides a comprehensive understanding of how the network is structured and optimized for seamless performance and reliability. &lt;/p&gt;

&lt;p&gt;In this blog post, we will explore AWS’s Global Infrastructure. If you’re a cloud enthusiast looking to earn AWS certifications like AWS Certified Cloud Practitioner or AWS Solution Architect, this is where you should begin your adventure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkl09pxicfyamq5bi5lxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkl09pxicfyamq5bi5lxr.png" alt="AWS Global Infrastructure" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Global Infrastructure?
&lt;/h2&gt;

&lt;p&gt;The AWS Global Infrastructure is a vast network of data centers and resources strategically located throughout the world to provide cloud services. The infrastructure comprises multiple interconnected components, such as Regions, Availability Zones, Local Zones, and Wavelength Zones.&lt;/p&gt;

&lt;p&gt;AWS Global Infrastructure’s main component is data centers called Availability Zones (AZ). AZ’s are isolated locations within AWS Regions. The AWS Global Infrastructure Map currently includes 102 Availability Zones spread across 32 geographic regions globally. Moreover, plans are underway to expand with an additional 12 Availability Zones and 4 more AWS Regions, covering countries like Canada, Malaysia, New Zealand, and Thailand.&lt;/p&gt;

&lt;p&gt;Additionally, AWS offers 35 Local Zones and 29 Wavelength Zones, catering to the needs of ultralow latency applications. Local Zones extend AWS services to select metropolitan areas, allowing for proximity to end-users and reduced latency. On the other hand, telecommunication providers collaborate in designing Wavelength Zones to bring AWS services even closer to 5G networks, thereby enhancing the performance of latency-sensitive applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the Key Components of AWS Global Infrastructure?
&lt;/h2&gt;

&lt;p&gt;The AWS Global Infrastructure represents a highly distributed and robust ecosystem of data centers, ensuring global coverage, reliability, and performance for a wide array of cloud-based services and applications. Here are the 6 components of the AWS Global Infrastructure:&lt;/p&gt;

&lt;p&gt;Regions&lt;br&gt;
Availability Zones (AZs)&lt;br&gt;
Local Zones&lt;br&gt;
Edge Locations&lt;br&gt;
AWS Regional Edge Caches&lt;br&gt;
Wavelength Zones&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Regions
&lt;/h3&gt;

&lt;p&gt;An AWS Region is a distinct geographical area where a collection of data centers is clustered. Each Region incorporates multiple Availability Zones: physically and logically separate groups of data centers. Each AWS Region comprises at least three such Availability Zones, ensuring geographic redundancy and high availability.&lt;/p&gt;

&lt;p&gt;AWS Regions and Availability Zones are not the same. AWS Regions are the larger geographical areas where data centers are located, while Availability Zones are subsets within those regions. We will discuss AZs deeply in the next part.&lt;/p&gt;

&lt;p&gt;AWS operates in several geographic regions, including North America, South America, Europe, China, Asia Pacific, South Africa, and the Middle East.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Availability Zones
&lt;/h3&gt;

&lt;p&gt;People often refer to Availability Zones (AZs) in AWS as data center facilities. Each Availability Zone is a physically separated data center with its own power and infrastructure. AWS has strategically designed these Availability Zones to provide redundancy and resiliency for applications and services.&lt;/p&gt;

&lt;p&gt;AWS Availability Zones have their own power, cooling, and networking infrastructure. Deploying resources across multiple Availability Zones ensures that if one AZ experiences issues or downtime, the others can continue operating, enhancing the overall resilience of applications hosted on AWS.&lt;/p&gt;

&lt;p&gt;Physical separation between Availability Zones matters: each AZ sits far enough from the others to minimize the impact of natural disasters or localized incidents. At the same time, all AZs within a given AWS Region remain within 100 kilometers (approximately 60 miles) of each other, keeping inter-AZ latency low.&lt;/p&gt;

&lt;p&gt;Here is the full list of AWS data center locations in all regions:&lt;/p&gt;

&lt;p&gt;Europe / Middle East / Africa&lt;br&gt;
11 Geographic Regions&lt;br&gt;
33 Availability Zones &lt;br&gt;
39 Edge Network locations &lt;br&gt;
2 Regional Edge Cache locations&lt;br&gt;
Availability Zones:&lt;br&gt;
Bahrain (3), Cape Town (3), Frankfurt (3), Ireland (3), London (3), Milan (3), Paris (3), Spain (3), Stockholm (3), Zurich (3), and UAE (3)&lt;/p&gt;

&lt;p&gt;North America&lt;br&gt;
7 Geographic Regions&lt;br&gt;
25 Availability Zones &lt;br&gt;
44 Edge Network locations &lt;br&gt;
2 Regional Edge Cache locations&lt;br&gt;
Availability Zones:&lt;br&gt;
N. Virginia (6), Ohio (3), N. California (3), Oregon (4), US-East (3), US-West (3), Central (3)&lt;/p&gt;

&lt;p&gt;South America&lt;br&gt;
1 Geographic Region&lt;br&gt;
3 Availability Zones (in São Paulo)&lt;br&gt;
4 Edge Network locations&lt;br&gt;
1 Regional Edge Cache location&lt;/p&gt;

&lt;p&gt;Asia Pacific and China&lt;br&gt;
12 Geographic Regions&lt;br&gt;
38 Availability Zones&lt;br&gt;
34 Edge Network locations&lt;br&gt;
5 Regional Edge Cache locations&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Local Zones
&lt;/h3&gt;

&lt;p&gt;AWS Local Zones are a part of the AWS Global Infrastructure that brings cloud services close to areas with a high concentration of users and applications. Essentially, a Local Zone is an extension of an AWS Region, placed in close proximity to a specific metropolitan area.&lt;/p&gt;

&lt;p&gt;Local Zones allow customers to deploy applications that require low latency to end-users or specific resources in those geographic areas. They are particularly useful for applications that require real-time processing, such as gaming, interactive multimedia, and financial services, where reducing latency is critical for a seamless user experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Local Zones Locations:
&lt;/h3&gt;

&lt;p&gt;Local Zones are available in 33 metropolitan areas around the world: 16 of them in the US and 17 outside the US. Here are the 17 Local Zones located outside the US:&lt;/p&gt;

&lt;p&gt;Lagos&lt;br&gt;
Auckland&lt;br&gt;
Santiago&lt;br&gt;
Bangkok&lt;br&gt;
Kolkata&lt;br&gt;
Copenhagen&lt;br&gt;
Hamburg&lt;br&gt;
Helsinki&lt;br&gt;
Lima&lt;br&gt;
Muscat&lt;br&gt;
Perth&lt;br&gt;
Querétaro&lt;br&gt;
Taipei&lt;br&gt;
Warsaw&lt;br&gt;
Delhi&lt;br&gt;
Buenos Aires&lt;br&gt;
Manila&lt;/p&gt;

&lt;p&gt;Additionally, AWS plans to broaden Local Zones to encompass 19 additional locations spanning 16 countries:&lt;/p&gt;

&lt;p&gt;Australia&lt;br&gt;
Austria&lt;br&gt;
Belgium&lt;br&gt;
Brazil&lt;br&gt;
Canada&lt;br&gt;
Colombia&lt;br&gt;
Czech Republic&lt;br&gt;
Germany&lt;br&gt;
Greece&lt;br&gt;
India&lt;br&gt;
Kenya&lt;br&gt;
Netherlands&lt;br&gt;
Norway&lt;br&gt;
Portugal&lt;br&gt;
South Africa&lt;br&gt;
Vietnam&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Edge Locations
&lt;/h3&gt;

&lt;p&gt;AWS Edge Locations are endpoints for Amazon Web Services’ CloudFront service. They are strategically distributed data centers situated in various areas around the world. These Edge Locations act as caching servers that store copies of frequently accessed content, such as images, videos, web pages, and other static files, closer to the end users.&lt;/p&gt;

&lt;p&gt;Requests for content distributed via CloudFront are routed to the Edge Location closest to the client. If the content is already cached at that Edge Location, this reduces latency and improves the overall performance of the application or website. If the content is not yet cached, the Edge Location retrieves it from the origin server (such as an Amazon S3 bucket or a custom origin) and caches it for subsequent requests.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Regional Edge Caches
&lt;/h3&gt;

&lt;p&gt;Like Edge Locations, Regional Edge Caches are CloudFront sites strategically placed worldwide, close to your viewers. They sit between your origin server and the global Edge Locations, caching content that is not requested often enough to stay at an Edge Location.&lt;/p&gt;

&lt;p&gt;Regional Edge Caches help with many kinds of content, especially content that becomes less popular over time. Examples include user-generated content, e-commerce assets, and news or event-related content that can suddenly surge in popularity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wavelength Zones
&lt;/h3&gt;

&lt;p&gt;AWS Wavelength is an Amazon Web Services (AWS) service that brings cloud services to the edge of 5G networks. It supports real-time gaming, augmented reality (AR), virtual reality (VR), video streaming, and other applications that require ultra-low latency and high bandwidth connectivity.&lt;/p&gt;

&lt;p&gt;A Wavelength Zone is essentially a specialized infrastructure deployment within a data center run by a telecommunications provider. &lt;/p&gt;

&lt;p&gt;These zones are strategically placed in close proximity to 5G base stations, allowing for a direct connection between the AWS resources and the 5G network. The proximity minimizes latency for applications requiring instant responses.&lt;/p&gt;

&lt;p&gt;With Wavelength, developers can run their applications and workloads on AWS infrastructure located at the edge of the 5G network, combining the benefits of both AWS services and high-speed, low-latency 5G connectivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the 6 benefits of AWS Global Infrastructure?
&lt;/h2&gt;

&lt;p&gt;The AWS Global Infrastructure offers 6 key benefits to businesses and organizations that leverage its services, as mentioned on the AWS website: security, availability, performance, scalability, flexibility, and global footprint.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;AWS offers robust security measures, including encryption and constant monitoring. You maintain control over your data with encryption, movement, and retention options.&lt;/p&gt;

&lt;h3&gt;
  
  
  Availability
&lt;/h3&gt;

&lt;p&gt;According to AWS, it delivers the highest network availability of any cloud provider. Each Region is isolated and divided into multiple Availability Zones (AZs); if one AZ experiences an issue, the other AZs can continue to operate without interruption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance
&lt;/h3&gt;

&lt;p&gt;Low latency, minimal packet loss, and high network quality are all characteristics of AWS Global Infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability
&lt;/h3&gt;

&lt;p&gt;AWS enables flexible scaling of resources, eliminating over-provisioning. You can instantly adjust resources based on business needs, rapidly deploying hundreds or thousands of servers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flexibility
&lt;/h3&gt;

&lt;p&gt;You have the freedom to choose how and where to run workloads, utilizing the same network, control plane, APIs, and services. Options include global AWS Regions, AWS Local Zones, AWS Wavelength for low latency, and AWS Outposts for on-premises deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Global Footprint
&lt;/h3&gt;

&lt;p&gt;AWS has an extensive global infrastructure presence. You can place technology infrastructure close to your intended users, supporting a wide range of applications, from those requiring high throughput to those requiring low-latency performance. &lt;/p&gt;

&lt;p&gt;These benefits empower businesses to build, scale, and operate applications efficiently while leveraging a secure and reliable global cloud infrastructure provided by AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Essential AWS Certifications and Courses to Learn AWS Global Infrastructure?
&lt;/h2&gt;

&lt;p&gt;To become proficient in understanding and working with AWS Global Infrastructure, you can pursue specific AWS certifications and courses that cover relevant concepts, technologies, and best practices. Here are the top 3 essential AWS certifications and courses that can help you learn about AWS Global Infrastructure: AWS Cloud Practitioner Certification, AWS Certified Solutions Architect – Associate, and AWS Certified Solutions Architect – Professional.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Cloud Practitioner Certification
&lt;/h3&gt;

&lt;p&gt;The AWS Certified Cloud Practitioner certification provides a solid overview of AWS services, pricing models, architecture, security, and the overall benefits of cloud computing. This certification is ideal for beginners who are new to cloud technology and want to grasp the fundamental concepts, services, and benefits of cloud computing.&lt;/p&gt;

&lt;p&gt;The AWS Certified Cloud Practitioner certification is a great starting point for anyone looking to build a foundational understanding of Amazon Web Services and cloud computing concepts. It does not dive into the technical details of specific AWS services.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Certified Solutions Architect – Associate
&lt;/h3&gt;

&lt;p&gt;AWS Certified Solutions Architect – Associate certification covers a broad range of AWS services and architectural best practices, including designing and deploying applications on a global scale using multiple regions and Availability Zones.&lt;/p&gt;

&lt;p&gt;AWS Certified Solutions Architect – Associate certification is ideal for individuals who want to showcase their expertise in architecting solutions using AWS services, including global infrastructure-related ones.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Certified Solutions Architect – Professional
&lt;/h3&gt;

&lt;p&gt;The AWS Certified Solutions Architect – Professional certification is an advanced-level certification offered by Amazon Web Services (AWS). AWS Cloud professionals with extensive experience designing and deploying complex applications are ideal candidates for this certification.&lt;/p&gt;

&lt;p&gt;Building on the associate-level certification, AWS Certified Solutions Architect – Professional certification goes deeper into advanced architectural concepts, including designing highly available and fault-tolerant systems across a global infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to learn AWS?
&lt;/h2&gt;

&lt;p&gt;AWS provides comprehensive documentation on its global infrastructure, including networking, regions, Availability Zones, and global services. Reading AWS whitepapers related to networking and global infrastructure is also highly recommended. &lt;/p&gt;

&lt;p&gt;Remember that hands-on experience and practical projects are crucial to truly understanding and mastering AWS Global Infrastructure. Consider working on real-world projects, setting up multi-region architectures, and experimenting with AWS services to gain practical expertise. Moreover, remain on top of any new AWS courses that may emerge to enhance your knowledge of AWS Global Infrastructure further.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>A Comprehensive Guide to AWS Databases</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sat, 04 Nov 2023 15:50:20 +0000</pubDate>
      <link>https://dev.to/ismailg/a-comprehensive-guide-to-aws-databases-197o</link>
      <guid>https://dev.to/ismailg/a-comprehensive-guide-to-aws-databases-197o</guid>
      <description>&lt;p&gt;In the digital age, data is the lifeblood of businesses and organizations. Storing, managing, and accessing this invaluable resource efficiently is essential for making informed decisions, improving operations, and delivering a seamless user experience. &lt;/p&gt;

&lt;p&gt;Amazon Web Services (AWS) recognized this need and developed a robust solution – Amazon Relational Database Service (Amazon RDS) – to simplify database management. In this blog post, we'll explore AWS databases, with a focus on Amazon RDS, and discover how it can help you unlock the full potential of your data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding AWS Databases
&lt;/h2&gt;

&lt;p&gt;AWS offers a variety of database services to cater to the diverse needs of businesses. These services are designed to address different use cases, from simple data storage to complex data analytics. Here are some key AWS database offerings:&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon RDS:
&lt;/h3&gt;

&lt;p&gt;Amazon Relational Database Service (RDS) is a managed database service that supports various relational database engines like MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB. It automates routine tasks like patching, backups, and scaling, allowing you to focus on your applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Aurora:
&lt;/h3&gt;

&lt;p&gt;Amazon Aurora is a fully-managed, high-performance relational database service compatible with MySQL and PostgreSQL. It offers enhanced performance and availability, making it an excellent choice for critical applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon DynamoDB:
&lt;/h3&gt;

&lt;p&gt;DynamoDB is a fully-managed NoSQL database service designed for high availability and scalability. It's an ideal choice for applications that require seamless scaling with minimal administrative overhead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Redshift:
&lt;/h3&gt;

&lt;p&gt;Amazon Redshift is a fully-managed data warehousing service optimized for analytical workloads. It enables you to run complex queries and generate insights from your data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Neptune:
&lt;/h3&gt;

&lt;p&gt;Amazon Neptune is a fully-managed graph database service designed for applications that need to work with highly connected data, such as social networks and recommendation engines.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Power of Amazon RDS
&lt;/h3&gt;

&lt;p&gt;Amazon RDS is a standout among the AWS database services due to its versatility and ease of use. Here are some key benefits of Amazon RDS:&lt;/p&gt;

&lt;h3&gt;
  
  
  Managed Service:
&lt;/h3&gt;

&lt;p&gt;With Amazon RDS, AWS takes care of the heavy lifting, such as database setup, maintenance, and backups. This frees you from time-consuming administrative tasks and lets you focus on developing your applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-Engine Support:
&lt;/h3&gt;

&lt;p&gt;Amazon RDS supports a variety of database engines, providing you with the flexibility to choose the one that best suits your application requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  High Availability:
&lt;/h3&gt;

&lt;p&gt;Amazon RDS offers options for high availability, such as Multi-AZ deployments, which automatically replicate data to a standby instance in a different Availability Zone. This ensures minimal downtime and data durability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability:
&lt;/h3&gt;

&lt;p&gt;Amazon RDS allows you to easily scale your database resources up or down as your application's needs change. You can adjust CPU, memory, and storage without disrupting your operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security:
&lt;/h3&gt;

&lt;p&gt;AWS provides robust security features, such as encryption at rest and in transit, automated software patching, and user authentication and authorization controls, to keep your data safe.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring and Metrics:
&lt;/h3&gt;

&lt;p&gt;Amazon RDS offers built-in monitoring and alerting tools through Amazon CloudWatch. You can track performance, set alarms, and gain insights into your database's behavior.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Efficiency:
&lt;/h3&gt;

&lt;p&gt;Amazon RDS's pay-as-you-go pricing model allows you to pay only for the resources you use, optimizing costs while ensuring optimal performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Amazon RDS is just one piece of the AWS database puzzle, but it plays a crucial role in simplifying database management and maximizing the potential of your data. Whether you are running a small web application or managing a complex enterprise system, AWS databases provide a scalable and reliable foundation for your data needs. &lt;/p&gt;

&lt;p&gt;Embracing these services can empower your organization with the ability to make data-driven decisions, enhance customer experiences, and stay competitive in an ever-evolving digital landscape. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>cloudcomputing</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
