<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kevin Kiruri</title>
    <description>The latest articles on DEV Community by Kevin Kiruri (@kevin_k).</description>
    <link>https://dev.to/kevin_k</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2126119%2F0abedb52-802a-492d-ae97-143b08225619.png</url>
      <title>DEV Community: Kevin Kiruri</title>
      <link>https://dev.to/kevin_k</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kevin_k"/>
    <language>en</language>
    <item>
      <title>Building Serverless Microservices on AWS with ECS Fargate, ECR, and Terraform</title>
      <dc:creator>Kevin Kiruri</dc:creator>
      <pubDate>Fri, 02 Jan 2026 17:36:09 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-serverless-microservices-on-aws-with-ecs-fargate-ecr-and-terraform-4ocm</link>
      <guid>https://dev.to/aws-builders/building-serverless-microservices-on-aws-with-ecs-fargate-ecr-and-terraform-4ocm</guid>
      <description>&lt;p&gt;In the evolving world of cloud-native architectures, serverless doesn’t always mean Lambda. With Amazon ECS Fargate, you can run containers without managing servers, combining the scalability of containers with the simplicity of serverless operations.&lt;/p&gt;

&lt;p&gt;In this article, we’ll build a serverless microservices architecture using AWS ECS Fargate, ECR, and Terraform.&lt;br&gt;
Our setup includes two Django-based microservices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reader Service&lt;/strong&gt; – handles read operations from the shared database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Writer Service&lt;/strong&gt; – handles write operations to the same database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both services connect to a shared Amazon RDS PostgreSQL instance. The entire infrastructure is defined and provisioned through Infrastructure as Code (IaC) using Terraform.&lt;/p&gt;
&lt;h2&gt;
  
  
  Key Components
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon ECS Fargate&lt;/strong&gt; – Runs containerized Django microservices without provisioning EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon ECR (Elastic Container Registry)&lt;/strong&gt; – Hosts Docker images for both services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon RDS (PostgreSQL)&lt;/strong&gt; – Central database for the microservices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application Load Balancer (ALB)&lt;/strong&gt; – Distributes traffic between services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; – Automates infrastructure provisioning.&lt;/p&gt;
&lt;h2&gt;
  
  
  1. Containerizing the Microservices
&lt;/h2&gt;

&lt;p&gt;Each Django service (Reader and Writer) is containerized with Docker. The Compose file below builds both services plus a local database container, so you can test the containers locally before pushing anything to the cloud.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  db:
    image: postgres:15
    container_name: postgres-db
    environment:
      - POSTGRES_DB=${DB_NAME:-library_db}
      - POSTGRES_USER=${DB_USER:-postgres}
      - POSTGRES_PASSWORD=${DB_PASSWORD:-password}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-postgres}"]
      interval: 10s
      timeout: 5s
      retries: 5

  reader-service:
    container_name: reader-svc
    build: ./app/reader-service/reader
    ports:
      - "8000:8000"
    environment:
      - DB_USER=${DB_USER:-postgres}
      - DB_PASSWORD=${DB_PASSWORD:-password}
      - DB_NAME=${DB_NAME:-library_db}
      - DB_HOST=db
      - DB_PORT=5432
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
    restart: unless-stopped

  writer-service:
    container_name: writer-svc
    build: ./app/writer-service/writer
    ports:
      - "8001:8000"
    environment:
      - DB_USER=${DB_USER:-postgres}
      - DB_PASSWORD=${DB_PASSWORD:-password}
      - DB_NAME=${DB_NAME:-library_db}
      - DB_HOST=db
      - DB_PORT=5432
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
    restart: unless-stopped

volumes:
  postgres_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
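&lt;p&gt;The Compose file references a &lt;code&gt;build&lt;/code&gt; context for each service, but the Dockerfile itself isn’t shown. A minimal sketch for the Reader service might look like the following (the project module name &lt;code&gt;reader&lt;/code&gt; and the use of Gunicorn are assumptions, not taken from the repo):&lt;/p&gt;

```dockerfile
# Minimal Dockerfile sketch for a Django service (illustrative, not the
# article's actual Dockerfile). Assumes a Django project module named "reader".
FROM python:3.11-slim
WORKDIR /app

# curl is required by the Compose healthcheck defined above
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "reader.wsgi:application"]
```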



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bptzyhv71i1o50udtbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bptzyhv71i1o50udtbg.png" alt=" " width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Building and Pushing Images to ECR
&lt;/h2&gt;

&lt;p&gt;After testing locally, create an ECR repository for each service, then build, tag, and push the images:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository --repository-name reader-service
aws ecr create-repository --repository-name writer-service

docker build -t reader-service .
docker tag reader-service:latest &amp;lt;account-id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/reader-service:latest
docker push &amp;lt;account-id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/reader-service:latest

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65avqefx3tx4rpbgbvh0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65avqefx3tx4rpbgbvh0.png" alt=" " width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Defining Infrastructure with Terraform
&lt;/h2&gt;

&lt;p&gt;Your Terraform project will include these key modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC and Networking&lt;/li&gt;
&lt;li&gt;RDS Instance&lt;/li&gt;
&lt;li&gt;ECR Repositories&lt;/li&gt;
&lt;li&gt;ECS Cluster and Task Definitions&lt;/li&gt;
&lt;li&gt;Application Load Balancer&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_cluster" "main" {
  name = "serverless-cluster"
}

resource "aws_ecs_task_definition" "reader_task" {
  family                   = "reader-service"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  task_role_arn            = aws_iam_role.ecs_task_execution_role.arn

  container_definitions = jsonencode([
    {
      name      = "reader"
      image     = "${aws_ecr_repository.reader.repository_url}:latest"
      essential = true
      portMappings = [
        {
          containerPort = 8000
          hostPort      = 8000
        }
      ]
      environment = [
        { name = "DB_HOST", value = aws_db_instance.app_db.address },
        { name = "DB_NAME", value = "app_db" },
        { name = "DB_USER", value = "admin" },
        { name = "DB_PASS", value = var.db_password }
      ]
    }
  ])
}

resource "aws_ecs_service" "reader_service" {
  name            = "reader-service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.reader_task.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.public[*].id
    assign_public_ip = true
    security_groups  = [aws_security_group.ecs_service.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.reader.arn
    container_name   = "reader"
    container_port   = 8000
  }

  depends_on = [aws_lb_listener.frontend]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
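&lt;p&gt;The service above references &lt;code&gt;aws_lb_target_group.reader&lt;/code&gt; and &lt;code&gt;aws_lb_listener.frontend&lt;/code&gt;, which aren’t shown here. A minimal sketch of what they might look like (attribute values are illustrative; the full definitions are in the linked repo):&lt;/p&gt;

```hcl
# Sketch only: the target group and listener referenced by the ECS service.
# target_type must be "ip" for tasks using the awsvpc network mode (Fargate).
resource "aws_lb_target_group" "reader" {
  name        = "reader-tg"
  port        = 8000
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id # assumed VPC resource name
  target_type = "ip"

  health_check {
    path    = "/health/" # matches the containers' health endpoint
    matcher = "200"
  }
}

resource "aws_lb_listener" "frontend" {
  load_balancer_arn = aws_lb.application_load_balancer.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.reader.arn
  }
}
```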



&lt;p&gt;Repeat a similar configuration for the Writer Service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrrlr5djiw7f4tb8k59q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrrlr5djiw7f4tb8k59q.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. RDS Database and Networking
&lt;/h2&gt;

&lt;p&gt;Both services share the same RDS instance, accessed via private networking for security.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create database subnet group
resource "aws_db_subnet_group" "database_subnet_group" {
  name        = "${var.project_name}-${var.environment}-database-subnets"
  subnet_ids  = [aws_subnet.private_data_subnet_az1.id, aws_subnet.private_data_subnet_az2.id]
  description = "subnets for database instance"

  tags = {
    Name = "${var.project_name}-${var.environment}-database-subnets"
  }
}

# create the rds instance
resource "aws_db_instance" "database_instance" {
  engine                 = "postgres"
  engine_version         = "14"
  multi_az               = var.multi_az_deployment
  identifier             = var.database_instance_identifier
  username               = var.db_user
  password               = var.db_password
  db_name                = var.db_name
  instance_class         = var.database_instance_class
  allocated_storage      = 200
  db_subnet_group_name   = aws_db_subnet_group.database_subnet_group.name
  vpc_security_group_ids = [aws_security_group.database_security_group.id]
  availability_zone      = data.aws_availability_zones.available_zones.names[0]
  skip_final_snapshot    = true
  publicly_accessible    = var.publicly_accessible
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
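&lt;p&gt;The instance references &lt;code&gt;aws_security_group.database_security_group&lt;/code&gt;. A sketch of a rule set that keeps the database private (the exact rules are an assumption; only the resource names come from the code above):&lt;/p&gt;

```hcl
# Sketch: allow PostgreSQL traffic only from the ECS service security group
resource "aws_security_group" "database_security_group" {
  name   = "${var.project_name}-${var.environment}-database-sg"
  vpc_id = aws_vpc.main.id # assumed VPC resource name

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.ecs_service.id]
  }
}
```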



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6wd9f6mfnjxlegtgves.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6wd9f6mfnjxlegtgves.png" alt=" " width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Find the terraform template at: &lt;a href="https://github.com/Kevin-byt/AWS-Projects/tree/64ad6a19deb4473689efbf78cee032e153c00170/ecs-django-microservices/terraform/env" rel="noopener noreferrer"&gt;ECS Django Microservices&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Terraform init

Terraform plan

Terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Why ECS Fargate is Serverless
&lt;/h2&gt;

&lt;p&gt;When most developers hear serverless, they immediately think of AWS Lambda. But serverless is more than event-driven functions; it’s a paradigm: abstracting infrastructure management, automating scaling, and paying only for what you use.&lt;br&gt;
That’s exactly what Amazon ECS Fargate delivers for containerized workloads.&lt;/p&gt;
&lt;h3&gt;
  
  
  Serverless Principles and How Fargate Fits In
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Let’s break it down:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No Server Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With ECS Fargate, you don’t provision, scale, or patch EC2 instances.&lt;br&gt;
You define your task requirements (vCPU, memory, networking), and AWS automatically runs your containers in a managed compute environment. This removes the operational overhead of managing ECS clusters on EC2, configuring capacity providers, or dealing with auto scaling groups.&lt;/p&gt;

&lt;p&gt;Your focus shifts entirely to application logic, not infrastructure maintenance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automatic Scaling and Orchestration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fargate integrates natively with ECS Service Auto Scaling and Application Auto Scaling.&lt;br&gt;
When traffic spikes, ECS launches more Fargate tasks; when traffic drops, it scales back down automatically, with no manual intervention needed.&lt;/p&gt;

&lt;p&gt;This elasticity ensures cost-efficiency and performance stability without managing the underlying scaling policies or EC2 lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pay-per-Use Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You pay only for the vCPU and memory provisioned for your tasks, billed per second while they run.&lt;br&gt;
There’s no idle server cost, no over-provisioned clusters, and no unused capacity: true usage-based billing, in line with serverless economics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seamless AWS Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fargate runs as part of the broader AWS ecosystem. It integrates smoothly with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudWatch Logs for centralized logging&lt;/li&gt;
&lt;li&gt;CloudWatch Alarms for health and performance metrics&lt;/li&gt;
&lt;li&gt;IAM Roles for Tasks for fine-grained permissions&lt;/li&gt;
&lt;li&gt;AWS X-Ray for distributed tracing&lt;/li&gt;
&lt;li&gt;ECR for secure container image storage and retrieval&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ecosystem synergy allows teams to build secure, observable, and automated serverless container platforms end-to-end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security and Isolation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each Fargate task runs in its own isolated compute environment, with a dedicated kernel and network interface.&lt;br&gt;
Unlike EC2-based ECS, where containers share the host kernel, Fargate tasks achieve stronger isolation, closer to the Lambda security boundary, which makes Fargate well suited to multi-tenant or microservice architectures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as Code Ready&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By pairing Fargate with Terraform, you extend serverless principles to your entire infrastructure.&lt;br&gt;
Your ECS services, networking, IAM roles, and monitoring configurations all live in code that is reproducible, version-controlled, and automated. This enables serverless operations not just in runtime, but also in provisioning and deployment.&lt;/p&gt;
&lt;h2&gt;
  
  
  6. Observability and Monitoring
&lt;/h2&gt;

&lt;p&gt;Add logging and metrics with Terraform using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS CloudWatch Logs&lt;/strong&gt; – for container logs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CloudWatch Alarms&lt;/strong&gt; – for task health and CPU/memory usage&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ECS CloudWatch Monitoring
resource "aws_cloudwatch_log_group" "ecs_logs" {
  name              = "/ecs/${var.project_name}"
  retention_in_days = 7
}

# ECS Service Alarms
resource "aws_cloudwatch_metric_alarm" "ecs_cpu_high" {
  alarm_name          = "${var.project_name}-ecs-cpu-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = "300"
  statistic           = "Average"
  threshold           = "80"
  alarm_description   = "ECS CPU utilization is too high"
  alarm_actions       = [aws_sns_topic.alerts.arn]

  dimensions = {
    ServiceName = aws_ecs_service.ecs_service1.name
    ClusterName = aws_ecs_cluster.ecs_cluster.name
  }
}

resource "aws_cloudwatch_metric_alarm" "ecs_memory_high" {
  alarm_name          = "${var.project_name}-ecs-memory-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "2"
  metric_name         = "MemoryUtilization"
  namespace           = "AWS/ECS"
  period              = "300"
  statistic           = "Average"
  threshold           = "80"
  alarm_description   = "ECS Memory utilization is too high"
  alarm_actions       = [aws_sns_topic.alerts.arn]

  dimensions = {
    ServiceName = aws_ecs_service.ecs_service1.name
    ClusterName = aws_ecs_cluster.ecs_cluster.name
  }
}

# ALB Target Health
resource "aws_cloudwatch_metric_alarm" "alb_unhealthy_targets" {
  alarm_name          = "${var.project_name}-alb-unhealthy-targets"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "2"
  metric_name         = "UnHealthyHostCount"
  namespace           = "AWS/ApplicationELB"
  period              = "300"
  statistic           = "Average"
  threshold           = "0"
  alarm_description   = "ALB has unhealthy targets"
  alarm_actions       = [aws_sns_topic.alerts.arn]

  dimensions = {
    LoadBalancer = aws_lb.application_load_balancer.arn_suffix
  }
}

# SNS Topic for Alerts
resource "aws_sns_topic" "alerts" {
  name = "${var.project_name}-alerts"
}

resource "aws_sns_topic_subscription" "email" {
  topic_arn = aws_sns_topic.alerts.arn
  protocol  = "email"
  endpoint  = var.alert_email
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
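&lt;p&gt;Note that creating the log group alone doesn’t route container output to it; each container definition also needs an &lt;code&gt;awslogs&lt;/code&gt; log configuration. A sketch of the fragment to add to the task definition from section 3 (&lt;code&gt;var.aws_region&lt;/code&gt; is an assumed variable):&lt;/p&gt;

```hcl
# Sketch: add inside the container definition object in jsonencode([...])
logConfiguration = {
  logDriver = "awslogs"
  options = {
    "awslogs-group"         = aws_cloudwatch_log_group.ecs_logs.name
    "awslogs-region"        = var.aws_region
    "awslogs-stream-prefix" = "reader"
  }
}
```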



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp0paa38fi29jdqa0nq3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp0paa38fi29jdqa0nq3.png" alt=" " width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This setup demonstrates that serverless microservices don’t have to rely on AWS Lambda.&lt;br&gt;
With ECS Fargate, ECR, and Terraform, you can build production-grade, scalable, and cost-efficient systems while maintaining full control over your architecture.&lt;/p&gt;

&lt;p&gt;It’s the perfect middle ground between full container control and the simplicity of serverless.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Follow me for more demos and to connect: &lt;a href="https://www.linkedin.com/in/kevin-kiruri/" rel="noopener noreferrer"&gt;Kevin Kiruri LinkedIn&lt;/a&gt;&lt;br&gt;
Find the source code here: &lt;a href="https://github.com/Kevin-byt/AWS-Projects/tree/64ad6a19deb4473689efbf78cee032e153c00170/ecs-django-microservices" rel="noopener noreferrer"&gt;ECS Django Microservices&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>microservices</category>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Secure Serverless with HashiCorp Vault and Lambda: Dynamic Database Credentials</title>
      <dc:creator>Kevin Kiruri</dc:creator>
      <pubDate>Fri, 02 Jan 2026 17:35:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/secure-serverless-with-hashicorp-vault-and-lambda-dynamic-database-credentials-1dhj</link>
      <guid>https://dev.to/aws-builders/secure-serverless-with-hashicorp-vault-and-lambda-dynamic-database-credentials-1dhj</guid>
      <description>&lt;p&gt;In the era of cloud-native applications, managing secrets and database credentials remains one of the most critical security challenges. Traditional approaches of hardcoding credentials or storing them in environment variables create significant security risks. This article explores a revolutionary approach: &lt;strong&gt;dynamic database credentials&lt;/strong&gt; using HashiCorp Vault in serverless architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Static Credentials
&lt;/h2&gt;

&lt;p&gt;Most serverless applications today rely on static database credentials that suffer from several critical issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Long-lived secrets&lt;/strong&gt; that increase exposure risk&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared credentials&lt;/strong&gt; across multiple services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual rotation&lt;/strong&gt; processes prone to human error&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited audit trails&lt;/strong&gt; for credential usage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Credential sprawl&lt;/strong&gt; across configuration files and environment variables&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Enter Dynamic Credentials
&lt;/h2&gt;

&lt;p&gt;Dynamic credentials represent a paradigm shift in secrets management. Instead of storing permanent passwords, credentials are generated on-demand with automatic expiration and cleanup. This approach provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zero hardcoded credentials&lt;/strong&gt; in application code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Short-lived credentials&lt;/strong&gt; (1-hour lifespan by default)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic rotation&lt;/strong&gt; and cleanup&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unique credentials&lt;/strong&gt; per request&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complete audit trail&lt;/strong&gt; of all credential operations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM-based authentication&lt;/strong&gt; eliminating API key management&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;The solution consists of four main components working together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────┐    ┌──────────────┐    ┌─────────────┐
│   Lambda    │───▶│    Vault     │───▶│ RDS MySQL   │
│  Function   │    │   (EC2)      │    │  Database   │
└─────────────┘    └──────────────┘    └─────────────┘
      │                     │                  │
   IAM Auth          Dynamic Creds      Temp User
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Component Breakdown
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;HashiCorp Vault Server&lt;/strong&gt;: Deployed on EC2, manages the database secrets engine and handles credential generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Lambda Function&lt;/strong&gt;: Authenticates with Vault using IAM roles and retrieves dynamic credentials for database access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RDS MySQL Database&lt;/strong&gt;: Target database where temporary users are created and automatically cleaned up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Systems Manager&lt;/strong&gt;: Securely stores the Vault root token for infrastructure management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpynu8nrjtnszorokqce1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpynu8nrjtnszorokqce1.png" alt=" " width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Vault Database Secrets Engine Configuration
&lt;/h3&gt;

&lt;p&gt;The heart of the solution is Vault's database secrets engine, configured to manage MySQL credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"vault_database_secret_backend_connection"&lt;/span&gt; &lt;span class="s2"&gt;"mysql"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;vault_mount&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;database&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mysql"&lt;/span&gt;
  &lt;span class="nx"&gt;allowed_roles&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"lambda-role"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;connection_url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"{{username}}:{{password}}@tcp(${aws_db_instance.main.address}:3306)/"&lt;/span&gt;
    &lt;span class="nx"&gt;username&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;database_master_username&lt;/span&gt;
    &lt;span class="nx"&gt;password&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;random_password&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rds_master_password&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;
    &lt;span class="nx"&gt;max_open_connections&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
    &lt;span class="nx"&gt;max_idle_connections&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbi6xeoi8n7qr3octur8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbi6xeoi8n7qr3octur8.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Dynamic Role Definition
&lt;/h3&gt;

&lt;p&gt;The database role defines how temporary users are created and what permissions they receive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"vault_database_secret_backend_role"&lt;/span&gt; &lt;span class="s2"&gt;"lambda"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;vault_mount&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;database&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"lambda-role"&lt;/span&gt;
  &lt;span class="nx"&gt;db_name&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;vault_database_secret_backend_connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;creation_statements&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"GRANT SELECT, INSERT, UPDATE ON ${var.database_name}.* TO '{{name}}'@'%';"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;revocation_statements&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"DROP USER '{{name}}'@'%';"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;default_ttl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;  &lt;span class="c1"&gt;# 1 hour&lt;/span&gt;
  &lt;span class="nx"&gt;max_ttl&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt; &lt;span class="c1"&gt;# 24 hours&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
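&lt;p&gt;With this role in place, every read of the credentials endpoint mints a brand-new MySQL user with its own lease. For example, with the Vault CLI (assuming the &lt;code&gt;database&lt;/code&gt; mount path used above):&lt;/p&gt;

```shell
# Each call creates a fresh temporary user; Vault drops it when the lease expires
vault read database/creds/lambda-role
```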



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk4w5uge5xhgpb8pb0y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk4w5uge5xhgpb8pb0y1.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. IAM Authentication Setup
&lt;/h3&gt;

&lt;p&gt;Lambda functions authenticate with Vault using AWS IAM, eliminating the need for API keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"vault_aws_auth_backend_role"&lt;/span&gt; &lt;span class="s2"&gt;"lambda"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;vault_auth_backend&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt;                      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"lambda-role"&lt;/span&gt;
  &lt;span class="nx"&gt;auth_type&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"iam"&lt;/span&gt;
  &lt;span class="nx"&gt;bound_iam_principal_arns&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lambda_exec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;token_policies&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;vault_policy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;token_ttl&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;
  &lt;span class="nx"&gt;token_max_ttl&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
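&lt;p&gt;The role attaches &lt;code&gt;vault_policy.lambda&lt;/code&gt;, which isn’t shown above. A minimal sketch of such a policy, granting read access to the credentials path only (the policy body is an assumption, not taken from the repo):&lt;/p&gt;

```hcl
# Sketch: least-privilege policy for the Lambda role (illustrative body)
resource "vault_policy" "lambda" {
  name   = "lambda-policy"
  policy = &amp;lt;&amp;lt;EOT
path "database/creds/lambda-role" {
  capabilities = ["read"]
}
EOT
}
```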



&lt;h3&gt;
  
  
  4. Lambda Implementation
&lt;/h3&gt;

&lt;p&gt;The Lambda function demonstrates the complete workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_vault_token&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Authenticate with Vault using AWS IAM method&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Session&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;credentials&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_credentials&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;frozen_creds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;credentials&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_frozen_credentials&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Create signed STS request for Vault authentication
&lt;/span&gt;    &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AWSRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://sts.amazonaws.com/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/x-www-form-urlencoded&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Action=GetCallerIdentity&amp;amp;Version=2011-06-15&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;sigv4&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SigV4Auth&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frozen_creds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;sigv4&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_auth&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Authenticate with Vault
&lt;/span&gt;    &lt;span class="n"&gt;iam_request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lambda-role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;iam_http_request_method&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;iam_request_url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;iam_request_body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;iam_request_headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;auth_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;vault_addr&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/v1/auth/aws/login&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;iam_request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;auth_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;auth&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;client_token&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_database_credentials&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vault_token&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Retrieve dynamic database credentials from Vault&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;X-Vault-Token&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;vault_token&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;creds_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;vault_addr&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/v1/database/creds/lambda-role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;creds_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxi35o5oke4nre73dzlac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxi35o5oke4nre73dzlac.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Benefits
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Credential Lifecycle Management
&lt;/h3&gt;

&lt;p&gt;Every database credential has a defined lifecycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generation&lt;/strong&gt;: Created on-demand when requested&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Usage&lt;/strong&gt;: Valid for exactly 1 hour by default&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expiration&lt;/strong&gt;: Automatically expires without manual intervention&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cleanup&lt;/strong&gt;: Database user is automatically deleted upon expiration&lt;/li&gt;
&lt;/ul&gt;
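&lt;p&gt;Because every lease is time-bounded, credentials may only be cached while the lease is still valid. The helper below is an illustrative sketch (not part of the project code) that decides whether cached credentials can be reused, based on the &lt;code&gt;lease_duration&lt;/code&gt; field Vault returns with every dynamic credential:&lt;/p&gt;

```python
import time

# Vault returns lease_duration (seconds) alongside each dynamic credential.
# Reuse cached credentials only while the lease has comfortably more time
# left than a chosen safety margin.
def lease_is_usable(issued_at, lease_duration, margin_seconds=60, now=None):
    """Return True if a credential issued at `issued_at` (epoch seconds)
    with the given lease_duration still has more than margin_seconds left."""
    if now is None:
        now = time.time()
    remaining = issued_at + lease_duration - now
    return remaining > margin_seconds
```

A caller would record the issue time when fetching credentials and re-fetch once this check returns False.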

&lt;h3&gt;
  
  
  2. Principle of Least Privilege
&lt;/h3&gt;

&lt;p&gt;Each credential is granted only the minimum permissions required:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scoped permissions&lt;/strong&gt;: Only SELECT, INSERT, UPDATE on specific database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time-bounded access&lt;/strong&gt;: Maximum 24-hour lifetime&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Role-based access&lt;/strong&gt;: Tied to specific Lambda execution role&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Comprehensive Audit Trail
&lt;/h3&gt;

&lt;p&gt;Vault provides complete visibility into credential operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authentication events&lt;/strong&gt;: Who requested access and when&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Credential generation&lt;/strong&gt;: Which credentials were created&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Usage patterns&lt;/strong&gt;: How credentials are being utilized&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expiration tracking&lt;/strong&gt;: When credentials expire and are cleaned up&lt;/li&gt;
&lt;/ul&gt;
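&lt;p&gt;Vault's file audit device emits one JSON document per line. The sketch below, which assumes the standard entry shape (a top-level &lt;code&gt;type&lt;/code&gt; field and a &lt;code&gt;request.path&lt;/code&gt;), pulls out authentication events for review:&lt;/p&gt;

```python
import json

def extract_auth_events(log_lines):
    """Yield (time, path) pairs for login requests found in Vault
    file-audit-device output (one JSON document per line)."""
    for line in log_lines:
        entry = json.loads(line)
        path = entry.get("request", {}).get("path", "")
        # Requests against the auth/ mount are authentication attempts.
        if entry.get("type") == "request" and path.startswith("auth/"):
            yield entry.get("time"), path
```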

&lt;h2&gt;
  
  
  Operational Advantages
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Zero-Touch Credential Management
&lt;/h3&gt;

&lt;p&gt;Once deployed, the system requires no manual intervention:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic rotation&lt;/strong&gt;: New credentials for every request&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-healing&lt;/strong&gt;: Failed credentials don't affect subsequent requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable&lt;/strong&gt;: Handles thousands of concurrent credential requests&lt;/li&gt;
&lt;/ul&gt;
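&lt;p&gt;The self-healing behaviour can be sketched as a small retry wrapper: a failed connection attempt simply discards its credentials and fetches a fresh set. The callables here are placeholders for the &lt;code&gt;get_vault_token&lt;/code&gt; and &lt;code&gt;get_database_credentials&lt;/code&gt; helpers shown earlier:&lt;/p&gt;

```python
def connect_with_fresh_creds(fetch_creds, connect, retries=2):
    """Try to connect with freshly issued credentials, fetching a new
    set on each failed attempt. fetch_creds() returns a credential dict;
    connect(creds) returns a connection or raises on failure."""
    last_error = None
    for _ in range(retries):
        creds = fetch_creds()  # every attempt gets brand-new credentials
        try:
            return connect(creds)
        except Exception as exc:  # a failed credential never poisons the next try
            last_error = exc
    raise last_error
```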

&lt;h3&gt;
  
  
  2. Developer Experience
&lt;/h3&gt;

&lt;p&gt;Developers work with a simple, consistent API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Get credentials
&lt;/span&gt;&lt;span class="n"&gt;vault_token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_vault_token&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;db_creds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_database_credentials&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vault_token&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Use credentials
&lt;/span&gt;&lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pymysql&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;rds_endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;db_creds&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;username&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;db_creds&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;database_name&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Infrastructure as Code
&lt;/h3&gt;

&lt;p&gt;The entire solution is defined in Terraform, enabling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reproducible deployments&lt;/strong&gt; across environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version-controlled infrastructure&lt;/strong&gt; changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated testing&lt;/strong&gt; and validation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disaster recovery&lt;/strong&gt; capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Find the Terraform template at: &lt;a href="https://github.com/Kevin-byt/AWS-Projects/tree/64ad6a19deb4473689efbf78cee032e153c00170/Hashicorp-Vault/terraform" rel="noopener noreferrer"&gt;Hashicorp Vault Dynamic Credentials&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

terraform plan

terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3fuva173vy3pz946o7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3fuva173vy3pz946o7a.png" alt=" " width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Production Deployment Considerations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. High Availability
&lt;/h3&gt;

&lt;p&gt;For production environments, implement Vault clustering:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"vault"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;  &lt;span class="c1"&gt;# Multi-node cluster&lt;/span&gt;
  &lt;span class="c1"&gt;# ... configuration&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb"&lt;/span&gt; &lt;span class="s2"&gt;"vault"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;# Load balancer for Vault cluster&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Network Security
&lt;/h3&gt;

&lt;p&gt;Deploy Vault in private subnets with proper network controls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"vault"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8200&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8200&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;security_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Only Lambda access&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ttaom4i1eplslcav8so.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ttaom4i1eplslcav8so.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring and Alerting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Key Metrics to Monitor
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Credential Generation Rate&lt;/strong&gt;: Track requests per second&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication Failures&lt;/strong&gt;: Monitor failed Vault logins&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Connection Errors&lt;/strong&gt;: Alert on connection failures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Credential Expiration&lt;/strong&gt;: Track credential lifecycle&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  CloudWatch Integration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;

&lt;span class="n"&gt;cloudwatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cloudwatch&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;publish_metric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;metric_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;unit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Count&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;cloudwatch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put_metric_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;Namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;VaultDynamicCredentials&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;MetricData&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MetricName&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;metric_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Value&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Unit&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;unit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;utcnow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
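&lt;p&gt;A typical use of &lt;code&gt;publish_metric&lt;/code&gt; is to instrument the credential fetch itself. The sketch below injects the publisher so it can be exercised without AWS access; the metric names are illustrative, not taken from the project:&lt;/p&gt;

```python
def fetch_with_metrics(fetch_creds, publish):
    """Fetch credentials and publish a success/failure count.
    publish(metric_name, value) mirrors the publish_metric helper above."""
    try:
        creds = fetch_creds()
    except Exception:
        publish("CredentialGenerationFailure", 1)
        raise
    publish("CredentialGenerationSuccess", 1)
    return creds
```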



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxolm5t04uk3gbwji5u4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxolm5t04uk3gbwji5u4.png" alt=" " width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Analysis
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Infrastructure Costs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vault EC2 Instance&lt;/strong&gt;: $8-15/month (t3.micro)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDS Database&lt;/strong&gt;: $15-25/month (db.t3.micro)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda Execution&lt;/strong&gt;: Pay-per-use, typically &amp;lt;$1/month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Total&lt;/strong&gt;: $25-40/month for development environment&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cost vs. Security Trade-off
&lt;/h3&gt;

&lt;p&gt;While dynamic credentials add infrastructure costs, they provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduced security incidents&lt;/strong&gt;: Potential savings of thousands in breach costs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance benefits&lt;/strong&gt;: Easier audit and regulatory compliance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational efficiency&lt;/strong&gt;: Reduced manual credential management overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Comparison with Alternatives
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Security&lt;/th&gt;
&lt;th&gt;Complexity&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;th&gt;Scalability&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Static Credentials&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AWS Secrets Manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Dynamic Credentials&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;IAM Database Auth&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Dynamic database credentials with HashiCorp Vault represent a significant advancement in serverless security. By eliminating static credentials and implementing just-in-time access, organizations can dramatically reduce their security risk while maintaining operational efficiency.&lt;/p&gt;

&lt;p&gt;The implementation demonstrated here provides a production-ready foundation that can be extended and customized for specific organizational needs. As serverless architectures continue to evolve, dynamic credential management will become increasingly critical for maintaining security at scale.&lt;/p&gt;

&lt;p&gt;Run the command below to confirm your setup works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s -H "X-Vault-Token: &amp;lt;vault-token&amp;gt;" \
  "http://&amp;lt;vault-server-ip&amp;gt;:8200/v1/database/creds/lambda-role" | jq .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiob383wxtzre2cj9836n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiob383wxtzre2cj9836n.png" alt=" " width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic credentials eliminate&lt;/strong&gt; the risks associated with static database passwords&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM-based authentication&lt;/strong&gt; removes the need for API key management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic credential lifecycle&lt;/strong&gt; management reduces operational overhead&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complete audit trails&lt;/strong&gt; provide visibility into all credential operations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt; enables reproducible, version-controlled deployments&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The future of secrets management is dynamic, ephemeral, and automated. Organizations that adopt these practices today will be better positioned to handle the security challenges of tomorrow's cloud-native landscape.&lt;/p&gt;

&lt;p&gt;Follow me for more demos and networking. &lt;a href="https://www.linkedin.com/in/kevin-kiruri/" rel="noopener noreferrer"&gt;Kevin Kiruri LinkedIn&lt;/a&gt;&lt;br&gt;
Find the source code here: &lt;a href="https://github.com/Kevin-byt/AWS-Projects/tree/64ad6a19deb4473689efbf78cee032e153c00170/Hashicorp-Vault/" rel="noopener noreferrer"&gt;Hashicorp Vault Dynamic Credentials&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>security</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Building a Serverless CRUD API with AWS SAM, Lambda, API Gateway, and DynamoDB</title>
      <dc:creator>Kevin Kiruri</dc:creator>
      <pubDate>Wed, 23 Jul 2025 20:05:58 +0000</pubDate>
      <link>https://dev.to/kevin_k/building-a-serverless-crud-api-with-aws-sam-lambda-api-gateway-and-dynamodb-45ji</link>
      <guid>https://dev.to/kevin_k/building-a-serverless-crud-api-with-aws-sam-lambda-api-gateway-and-dynamodb-45ji</guid>
      <description>&lt;p&gt;Serverless development has become a go-to strategy for modern application architectures, and with good reason: it allows developers to focus more on building features and less on managing infrastructure. In this article, I walk you through building a fully serverless CRUD API using AWS SAM, Lambda, API Gateway, and DynamoDB all neatly wrapped in Python.&lt;/p&gt;

&lt;p&gt;Whether you're new to AWS SAM or just looking for a practical CRUD example to learn from, this guide has you covered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;The goal of this project was simple: create a clean, scalable, and serverless API capable of Create, Read, Update, and Delete (CRUD) operations on a DynamoDB table. The architecture leverages the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Gateway as the HTTP interface&lt;/li&gt;
&lt;li&gt;AWS Lambda as the compute layer (Python-powered)&lt;/li&gt;
&lt;li&gt;DynamoDB as the persistent data store&lt;/li&gt;
&lt;li&gt;AWS SAM (Serverless Application Model) for easy infrastructure deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One Lambda function handles all CRUD operations, reducing overhead and keeping the solution neat and maintainable.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Project Initialisation
&lt;/h2&gt;

&lt;p&gt;Scaffold a new Python-based SAM project using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam init --runtime python3.12 --dependency-manager pip --app-template hello-world --name .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This generates the standard SAM directory structure, giving you a great starting point. Ensure that you initialise the SAM project in a new, empty subdirectory as &lt;strong&gt;sam init&lt;/strong&gt; does not allow initialising a project in a non-empty directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4pm5up7rl5db8265mkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4pm5up7rl5db8265mkl.png" alt=" " width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. One Lambda to Rule Them All
&lt;/h2&gt;

&lt;p&gt;Rather than creating separate functions for each CRUD operation, I designed one Lambda function (core/app.py) to handle all actions. It inspects the HTTP path to determine which operation (/create, /read, /update, /delete) to execute.&lt;br&gt;
Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Less boilerplate&lt;/li&gt;
&lt;li&gt;Centralized logic&lt;/li&gt;
&lt;li&gt;Easier maintenance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a snippet showing how I routed the four API endpoints through one Lambda function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if http_method == 'POST':
        try:
            data = event.get('body', {})
            while isinstance(data, str):
                data = json.loads(data)
            logger.info(f"Request Data (body): {data}")
            match path:
                case '/create':
                    return create(data)
                case '/read':
                    return read(data)
                case '/update':
                    return update(data)
                case '/delete':
                    return delete(data)
                case _:
                    return make_response(404, {'message': 'Path Not Found'})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
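&lt;p&gt;The snippet above calls a &lt;strong&gt;make_response&lt;/strong&gt; helper that isn't shown in the article. A minimal sketch of what it could look like (the exact shape is an assumption inferred from how it's used, not the article's code):&lt;/p&gt;

```python
import json

def make_response(status_code, body):
    # Shape a dict into the proxy-integration response that API Gateway
    # expects from Lambda: a statusCode plus a JSON-serialized body.
    return {
        'statusCode': status_code,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps(body)
    }
```

For example, make_response(404, {'message': 'Path Not Found'}) produces a 404 response whose body is the JSON-encoded message.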



&lt;h2&gt;
  
  
  3. API Gateway Configuration
&lt;/h2&gt;

&lt;p&gt;Inside template.yaml, map the four CRUD routes to the single Lambda function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Events:
        CreateApi:
          Type: Api
          Properties:
            Path: /create
            Method: post
        ReadApi:
          Type: Api
          Properties:
            Path: /read
            Method: post
        UpdateApi:
          Type: Api
          Properties:
            Path: /update
            Method: post
        DeleteApi:
          Type: Api
          Properties:
            Path: /delete
            Method: post
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All these endpoints funnel into the same function, which processes the request and routes it accordingly.&lt;/p&gt;

&lt;p&gt;Find the rest of the SAM template code at: ""&lt;/p&gt;

&lt;h2&gt;
  
  
  4. DynamoDB Integration
&lt;/h2&gt;

&lt;p&gt;Provision a DynamoDB table named crud, using a simple primary key (id). Through boto3, AWS’s Python SDK, the Lambda function performs the necessary CRUD operations on the table. As shown in the example below, the function performs a write to the DynamoDB table called 'crud' and returns a response accordingly. The same pattern applies to the read, update, and delete functions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create(data):
    """
    Create a new item in the DynamoDB table.
    """
    try:
        dynamodb = boto3.resource('dynamodb')
        table = dynamodb.Table('crud')
        response = table.put_item(Item=data)
        return make_response(200, {'message': 'Item created successfully'})
    except Exception as e:
        logger.error(f"Error creating item: {e}")
        return make_response(500, {'message': 'Internal Server Error', 'error': str(e)})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
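&lt;p&gt;For illustration, a companion &lt;strong&gt;read&lt;/strong&gt; operation might look like the sketch below. To keep it easy to test, the table handle is passed in as an argument; inside the Lambda you would obtain it with boto3.resource('dynamodb').Table('crud') exactly as in the create example. This is a sketch under those assumptions, not the article's code.&lt;/p&gt;

```python
import json

def make_response(status_code, body):
    # Minimal response helper, mirroring the shape used elsewhere in the post.
    return {'statusCode': status_code, 'body': json.dumps(body)}

def read(data, table):
    """Fetch a single item from the DynamoDB table by its primary key."""
    try:
        # get_item returns a dict that only contains 'Item' when a match exists.
        response = table.get_item(Key={'id': data['id']})
        item = response.get('Item')
        if item is None:
            return make_response(404, {'message': 'Item not found'})
        return make_response(200, item)
    except Exception as e:
        return make_response(500, {'message': 'Internal Server Error', 'error': str(e)})
```

With a real table handle, read({'id': '123'}, table) returns the stored item, or a 404 response when no item matches.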



&lt;h2&gt;
  
  
  5. Application-level observability with aws-lambda-powertools &amp;amp; CloudWatch
&lt;/h2&gt;

&lt;p&gt;Structured logging and tracing with aws-lambda-powertools make debugging easier and ensure operational visibility in CloudWatch. It was a small addition that made a big difference. Simply import and use the tools in your Lambda code as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_lambda_powertools import Logger, Tracer
logger = Logger()
tracer = Tracer()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
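&lt;p&gt;For the Tracer to actually ship traces to X-Ray, tracing also has to be enabled on the function itself. A hedged template.yaml fragment showing one way to do this (the service name value is illustrative):&lt;/p&gt;

```yaml
Globals:
  Function:
    Tracing: Active          # enable X-Ray tracing for all Lambda functions
Resources:
  CrudFunction:
    Type: AWS::Serverless::Function
    Properties:
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: crud-api   # service name used by Logger/Tracer
```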



&lt;h2&gt;
  
  
  How they complement cloud monitoring
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Monitoring Type&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Logger&lt;/td&gt;
&lt;td&gt;Application Logging (CloudWatch Logs)&lt;/td&gt;
&lt;td&gt;Understand what happened and why.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tracer&lt;/td&gt;
&lt;td&gt;Distributed Tracing (AWS X-Ray)&lt;/td&gt;
&lt;td&gt;Understand how requests flow and where bottlenecks occur.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Logger output in CloudWatch log groups:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw6k7u8t503obh0c53i9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw6k7u8t503obh0c53i9.png" alt=" " width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Deployment with SAM
&lt;/h2&gt;

&lt;p&gt;This is done in three steps: validate, build, and deploy.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. sam validate
&lt;/h3&gt;

&lt;p&gt;This command checks your template.yaml (SAM template) for syntax errors and verifies that it’s a valid AWS SAM template.&lt;br&gt;
Think of it as linting for your infrastructure: it won’t deploy anything, it just confirms your YAML and structure are correct.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe602ch8oqhdtw08r2duj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe602ch8oqhdtw08r2duj.png" alt=" " width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  2. sam build
&lt;/h3&gt;

&lt;p&gt;This command packages your application code and dependencies, preparing them for deployment.&lt;br&gt;
It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a .aws-sam directory with the build artifacts.&lt;/li&gt;
&lt;li&gt;Resolves dependencies (like your Python packages from requirements.txt).&lt;/li&gt;
&lt;li&gt;Zips the code for Lambda functions as needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You run this before deploying to ensure everything is packaged correctly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2skeaa8n295kw48q258h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2skeaa8n295kw48q258h.png" alt=" " width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3. sam deploy --guided
&lt;/h3&gt;

&lt;p&gt;This command deploys your serverless application to AWS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It uploads the artifacts created by sam build to S3.&lt;/li&gt;
&lt;li&gt;It creates or updates AWS resources defined in your template.yaml (API Gateway, Lambda, DynamoDB, etc.).&lt;/li&gt;
&lt;li&gt;--guided prompts you interactively for settings like stack name, region, and capabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx36k1v2xk8kt20gskso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx36k1v2xk8kt20gskso.png" alt=" " width="800" height="360"&gt;&lt;/a&gt;&lt;br&gt;
During your first deploy, pay close attention to the prompts, as your answers determine how the project runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdked2ha4fle9bl0i0azj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdked2ha4fle9bl0i0azj.png" alt=" " width="800" height="561"&gt;&lt;/a&gt;&lt;br&gt;
A CloudFormation stack is then created, and you are prompted to approve its deployment. AWS SAM uses CloudFormation under the hood to manage resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jypwa1pdkyxnvmfz5xm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jypwa1pdkyxnvmfz5xm.png" alt=" " width="800" height="365"&gt;&lt;/a&gt;&lt;br&gt;
After only a few minutes, your CRUD backend is up and running in AWS.&lt;/p&gt;
&lt;h2&gt;
  
  
  7. Testing the API
&lt;/h2&gt;

&lt;p&gt;To validate functionality, I created test events mimicking API Gateway POST requests. These can be tested via the Lambda console or SAM CLI.&lt;/p&gt;

&lt;p&gt;Here are payloads you can use to test the endpoints through the Lambda console:&lt;/p&gt;
&lt;h3&gt;
  
  
  Create (POST /create)
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "httpMethod": "POST",
  "path": "/create",
  "body": {
    "id": "123",
    "attribute": "name",
    "value": "John Doe"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
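&lt;p&gt;Note that a console test event carries the body as a plain dict, while a request through API Gateway delivers it as a JSON string. The handler's while isinstance(data, str) loop normalizes both cases, and even double-encoded bodies. A quick standalone illustration:&lt;/p&gt;

```python
import json

def normalize_body(data):
    # Keep decoding until we are left with a dict: handles dict input,
    # a JSON string, and a double-encoded JSON string.
    while isinstance(data, str):
        data = json.loads(data)
    return data

# All three input forms end up as the same dict:
payload = {'id': '123', 'attribute': 'name', 'value': 'John Doe'}
assert normalize_body(payload) == payload
assert normalize_body(json.dumps(payload)) == payload
assert normalize_body(json.dumps(json.dumps(payload))) == payload
```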


&lt;p&gt;And the successful lambda response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9e0c84eeaawu3fqmsy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9e0c84eeaawu3fqmsy1.png" alt=" " width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Read (POST /read)
&lt;/h3&gt;

&lt;p&gt;Now let's retrieve what we created in the create test, using a read event test payload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "httpMethod": "POST",
  "path": "/read",
  "body": {
    "id": "123"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;lambda response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmygycdjzxadbbaro36h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmygycdjzxadbbaro36h.png" alt=" " width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Update (POST /update)
&lt;/h3&gt;

&lt;p&gt;Now let's update John Doe to Jane Doe using an update event payload.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "httpMethod": "POST",
  "path": "/update",
  "body": {
    "id": "123",
    "attribute": "name",
    "value": "Jane Doe"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;lambda response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ueaa8f980wz4xydwwg4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ueaa8f980wz4xydwwg4.png" alt=" " width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The name was updated successfully in DynamoDB.&lt;/p&gt;
&lt;h3&gt;
  
  
  Delete (POST /delete)
&lt;/h3&gt;

&lt;p&gt;Finally, let's test the delete endpoint using a delete event payload.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "httpMethod": "POST",
  "path": "/delete",
  "body": {
    "id": "123"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cgquxafrsttrq213mai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cgquxafrsttrq213mai.png" alt=" " width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our record with ID 123 was successfully deleted from DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned &amp;amp; Pro Tips
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;SAM is a Productivity Booster&lt;/strong&gt;&lt;br&gt;
It simplifies the complexity of packaging and deploying serverless applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single Lambda for Multiple Endpoints = Simplicity&lt;/strong&gt;&lt;br&gt;
Keeping all logic in one handler makes routing clear and concise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SAM Deployment Prompts Are Repetitive but Important&lt;/strong&gt;&lt;br&gt;
Sometimes SAM asks the same question twice; answer carefully, especially around authentication and capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mind Your Dependencies&lt;/strong&gt;&lt;br&gt;
Ensure libraries like aws-lambda-powertools are in requirements.txt to avoid runtime issues.&lt;br&gt;
I ran into such errors in the Lambda response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fking54ggfby9up4xbiiz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fking54ggfby9up4xbiiz.png" alt=" " width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To fix this, add the dependency to requirements.txt, then run sam build and sam deploy again; the rebuild should be shorter and faster.&lt;/p&gt;
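&lt;p&gt;For this project, the relevant lines in requirements.txt would be along these lines (package names as published on PyPI; versions left unpinned here for brevity):&lt;/p&gt;

```
boto3
aws-lambda-powertools
```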

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn8ltyhda8uzelnhhtdc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn8ltyhda8uzelnhhtdc.png" alt=" " width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Follow me for more demos and networking. &lt;a href="https://www.linkedin.com/in/kevin-kiruri/" rel="noopener noreferrer"&gt;Kevin Kiruri LinkedIn&lt;/a&gt;&lt;br&gt;
Find the source code here: &lt;a href="https://github.com/Kevin-byt/AWS-Projects/tree/main/aws-sam-crud" rel="noopener noreferrer"&gt;AWS SAM&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Serverless CI/CD: Automating Deployments with AWS SAM, CDK and GitHub actions</title>
      <dc:creator>Kevin Kiruri</dc:creator>
      <pubDate>Mon, 10 Feb 2025 17:47:19 +0000</pubDate>
      <link>https://dev.to/aws-builders/serverless-cicd-automating-deployments-with-aws-sam-cdk-and-github-actions-5dd7</link>
      <guid>https://dev.to/aws-builders/serverless-cicd-automating-deployments-with-aws-sam-cdk-and-github-actions-5dd7</guid>
      <description>&lt;p&gt;Serverless CI/CD is a modern approach to software development that leverages serverless computing to automate the building, testing, and deployment of applications. By using AWS services like AWS SAM (Serverless Application Model) or AWS CDK (Cloud Development Kit) alongside GitHub Actions, you can create a fully automated CI/CD pipeline that requires minimal infrastructure management.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS SAM, CDK, and GitHub Actions for deployment automation
&lt;/h2&gt;

&lt;p&gt;Below is a guide to creating a simple AWS Lambda function, defining it using AWS SAM or AWS CDK, and setting up a GitHub Actions CI/CD pipeline to automate deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create your working repo
&lt;/h2&gt;

&lt;p&gt;The structure may look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-serverless-app/
├── .github/
│   └── workflows/
│       └── deploy.yml           # GitHub Actions CI/CD pipeline
├── lambda_function.py           # Lambda function code
├── template.yml                 # AWS SAM template (if using SAM)
├── cdk_app.py                   # AWS CDK app (if using CDK)
├── requirements.txt             # Python dependencies (if any)
├── README.md                    # Documentation
└── (other files as needed)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a Simple AWS Lambda Function
&lt;/h2&gt;

&lt;p&gt;Let's start by creating a basic Lambda function in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# lambda_function.py
import json

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
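&lt;p&gt;Before wiring up the pipeline, you can sanity-check the handler locally; since it ignores its arguments, a plain function call is enough. A quick sketch (the handler is copied inline here for the smoke test, it is not part of the deployed code):&lt;/p&gt;

```python
import json

# Inlined copy of lambda_function.py's handler for a local smoke test.
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

result = lambda_handler({}, None)
assert result['statusCode'] == 200
assert json.loads(result['body']) == 'Hello from Lambda!'
```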



&lt;h2&gt;
  
  
  Define in AWS SAM (template.yml) or AWS CDK (cdk_app.py)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Define the Lambda Function in AWS SAM (template.yml)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# template.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Simple Lambda Function

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python3.9
      CodeUri: .
      Events:
        HelloWorldApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This template defines a Lambda function (HelloWorldFunction) with the Python 3.9 runtime, and an API Gateway trigger that exposes the function at the /hello endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define the Lambda Function in AWS CDK (cdk_app.py)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, you can use AWS CDK to define your Lambda function in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cdk_app.py
from aws_cdk import (
    core,
    aws_lambda as _lambda,
    aws_apigateway as apigateway,
)

class CdkAppStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -&amp;gt; None:
        super().__init__(scope, id, **kwargs)

        # Define the Lambda function
        hello_lambda = _lambda.Function(
            self, 'HelloWorldFunction',
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler='lambda_function.lambda_handler',
            code=_lambda.Code.from_asset('.')
        )

        # Expose the Lambda function via API Gateway
        apigateway.LambdaRestApi(
            self, 'HelloWorldApi',
            handler=hello_lambda
        )

# Initialize the CDK app
app = core.App()
CdkAppStack(app, "CdkAppStack")
app.synth()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting up a GitHub Actions CI/CD Pipeline
&lt;/h2&gt;

&lt;p&gt;GitHub Actions automates the deployment of your Lambda function whenever you push changes to your repository. Here's how to set it up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a GitHub Actions Workflow File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a .github/workflows/deploy.yml file in your repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy Lambda Function

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install aws-sam-cli

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Build and Deploy with SAM
        run: |
          sam build
          sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Add AWS Credentials to GitHub Secrets&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to your GitHub repository.&lt;/li&gt;
&lt;li&gt;Navigate to Settings &amp;gt; Secrets &amp;gt; Actions.&lt;/li&gt;
&lt;li&gt;Add the following secrets:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS_ACCESS_KEY_ID: Your AWS access key.
AWS_SECRET_ACCESS_KEY: Your AWS secret key.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Commit and push to trigger CI/CD pipeline
&lt;/h2&gt;

&lt;p&gt;Push your code to the main branch to trigger the GitHub Actions workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
git commit -m "Initial commit with Lambda function and CI/CD pipeline"
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verify the Deployment
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Once the pipeline runs successfully:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5lapuj0i61zpcxsv3ms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5lapuj0i61zpcxsv3ms.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go to the AWS Management Console, navigate to the API Gateway service.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjr5lt5orvegtzae021j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjr5lt5orvegtzae021j.png" alt="Image description" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find the deployed API and test the /hello endpoint.&lt;/strong&gt;&lt;br&gt;
Click on the API to test it. Navigate to Stages and copy the invoke URL; it should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://8tx1anr2sa.execute-api.us-east-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Open the link in a new tab and add your route, /hello.&lt;/strong&gt;&lt;br&gt;
You should see the response: "Hello from Lambda!".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjssme9x9pwym2ed9ykc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjssme9x9pwym2ed9ykc.png" alt="Image description" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Start with SAM for simpler deployments&lt;/li&gt;
&lt;li&gt;Gradually adopt CDK as complexity grows&lt;/li&gt;
&lt;li&gt;Implement GitHub Actions early for consistent delivery&lt;/li&gt;
&lt;li&gt;Maintain separate configurations for different environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this guide, we walked through creating a simple Lambda function, defining it using AWS SAM or CDK, and setting up a GitHub Actions pipeline to automate deployments. We also covered troubleshooting steps to ensure your API Gateway and Lambda resources are deployed correctly.&lt;/p&gt;

&lt;p&gt;With this approach, you can focus on writing code while AWS and GitHub Actions handle the heavy lifting of infrastructure management and deployment. Whether you're building a small project or a large-scale application, serverless CI/CD empowers you to deliver software faster and with greater reliability. &lt;/p&gt;

&lt;p&gt;Happy coding!&lt;br&gt;
Follow me for more demos and networking. &lt;a href="https://www.linkedin.com/in/kevin-kiruri/" rel="noopener noreferrer"&gt;Kevin Kiruri LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>cicd</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Lambda</title>
      <dc:creator>Kevin Kiruri</dc:creator>
      <pubDate>Sat, 08 Feb 2025 13:50:51 +0000</pubDate>
      <link>https://dev.to/aws-builders/serverless-cicd-automating-deployments-with-aws-sam-cdk-and-github-actions-563</link>
      <guid>https://dev.to/aws-builders/serverless-cicd-automating-deployments-with-aws-sam-cdk-and-github-actions-563</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;






&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
    </item>
    <item>
      <title>AWS Aurora DSQL for Django Developers: A Step-by-Step Guide</title>
      <dc:creator>Kevin Kiruri</dc:creator>
      <pubDate>Fri, 07 Feb 2025 06:15:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-aurora-dsql-for-django-developers-a-step-by-step-guide-4pah</link>
      <guid>https://dev.to/aws-builders/aws-aurora-dsql-for-django-developers-a-step-by-step-guide-4pah</guid>
      <description>&lt;p&gt;Amazon Aurora is a cloud-native relational database service that provides high performance and scalability. With the introduction of Aurora DSQL (Distributed SQL), developers can now leverage distributed database capabilities to enhance reliability and performance. This guide will walk you through setting up Aurora DSQL for a Django project in four steps:&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before proceeding, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with necessary IAM permissions.&lt;/li&gt;
&lt;li&gt;A Django project.&lt;/li&gt;
&lt;li&gt;AWS CLI installed and configured.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;boto3 installed in your Django environment (pip install boto3).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure you have the right dependencies in requirements.txt&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Django==4.0
django-admin-cli==0.1.1
djangorestframework==3.15.1
psycopg2==2.9.9
sqlparse==0.5.0
python-dotenv==0.19.0

aurora_dsql_django
boto3&amp;gt;=1.35.74
botocore&amp;gt;=1.35.74
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Provision an Aurora DSQL Cluster
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the Aurora DSQL console.&lt;/li&gt;
&lt;li&gt;Create a cluster.&lt;/li&gt;
&lt;li&gt;Configure a security group for network access.
Inbound - Best practice: restrict inbound access to your VPC CIDR (if the application runs inside the same VPC) or your server’s specific IP; avoid 0.0.0.0/0 except for testing.
Outbound - Allows all traffic to 0.0.0.0/0 by default.&lt;/li&gt;
&lt;/ul&gt;
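&lt;p&gt;As an example, restricting inbound PostgreSQL traffic to a VPC CIDR can be done with the AWS CLI (the group ID and CIDR below are placeholders, not values from this setup):&lt;/p&gt;

```
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5432 \
    --cidr 10.0.0.0/16
```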

&lt;h2&gt;
  
  
  Step 2: Connect Django to Aurora DSQL
&lt;/h2&gt;

&lt;p&gt;Update the DATABASES setting in your Django app's settings.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASES = {
    'default': {
        'HOST': 'uiabtxahshv6at5pcfidcxfnbq.dsql.us-east-1.on.aws',
        'USER': 'postgres',
        'NAME': 'postgres',
        'ENGINE': 'aurora_dsql_django',
        'OPTIONS': {
            'sslmode': 'require',
            'region': 'us-east-1',
            'expires_in': 60
        }

    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Migrate the Database
&lt;/h2&gt;

&lt;p&gt;Run the following commands to apply migrations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py makemigrations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y30fio1cp4pbox836ew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y30fio1cp4pbox836ew.png" alt="Image description" width="800" height="117"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F721oyg38ezcd83e1azqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F721oyg38ezcd83e1azqi.png" alt="Image description" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you need to start from a clean slate, remember to first clear any previous migration files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;find . -path "*/migrations/*.py" -not -name "__init__.py" -delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
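&lt;p&gt;The &lt;code&gt;find&lt;/code&gt; one-liner above is Unix-specific. The same cleanup can be sketched portably in Python with &lt;code&gt;pathlib&lt;/code&gt; (useful on Windows, where &lt;code&gt;find&lt;/code&gt; behaves differently):&lt;/p&gt;

```python
from pathlib import Path

def clear_migrations(project_root="."):
    """Delete generated migration modules under any app's migrations/
    directory, keeping each package's __init__.py intact."""
    removed = []
    for path in Path(project_root).glob("**/migrations/*.py"):
        if path.name != "__init__.py":
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

&lt;p&gt;Run it from the project root, then re-run &lt;code&gt;makemigrations&lt;/code&gt; and &lt;code&gt;migrate&lt;/code&gt;.&lt;/p&gt;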





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;find . -path "*/migrations/*.py" -not -name "__init__.py" -delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmq7xg245721e0lfymtr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmq7xg245721e0lfymtr.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Monitor and Scale
&lt;/h2&gt;

&lt;p&gt;Use Amazon CloudWatch and Performance Insights to monitor query performance and optimize configurations.&lt;/p&gt;

&lt;p&gt;Aurora DSQL enhances Django applications by providing high availability, scalability, and better performance. By following this guide, you can integrate Aurora DSQL into your Django project efficiently. Explore additional optimizations and best practices to get the most out of your Aurora database setup.&lt;/p&gt;

&lt;p&gt;Follow me for more demos and networking. &lt;a href="https://www.linkedin.com/in/kevin-kiruri/" rel="noopener noreferrer"&gt;Kevin Kiruri LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>auroradsql</category>
      <category>developer</category>
      <category>django</category>
    </item>
    <item>
      <title>Amazon Aurora DSQL: The New Era of Distributed SQL</title>
      <dc:creator>Kevin Kiruri</dc:creator>
      <pubDate>Mon, 30 Dec 2024 18:35:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-aurora-dsql-the-new-era-of-distributed-sql-4775</link>
      <guid>https://dev.to/aws-builders/amazon-aurora-dsql-the-new-era-of-distributed-sql-4775</guid>
      <description>&lt;p&gt;Amazon Aurora DSQL (dee-sequel) is a groundbreaking serverless, distributed SQL database that delivers active-active high availability. Announced at AWS re:Invent 2024, it is now available in preview across US East (N. Virginia), US East (Ohio), and US West (Oregon). With its distributed architecture, Aurora DSQL enables simultaneous read and write operations across multiple regions while maintaining strong consistency. Transactions are processed locally, with cross-region concurrency checks performed only at commit, ensuring performance and reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  What sets Aurora DSQL apart?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Effortless Scalability:&lt;/strong&gt; Seamlessly scale reads, writes, and storage independently, providing virtually unlimited horizontal scaling for workloads of any size.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Availability:&lt;/strong&gt; Ensures 99.99% uptime in single-region deployments and an impressive 99.999% for multi-region clusters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt; Up to 4x faster read and write operations compared to popular distributed SQL databases, making it a top choice for performance-critical applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compatibility:&lt;/strong&gt; PostgreSQL-compatible, allowing developers to use well-known relational database concepts, accelerating adoption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Management:&lt;/strong&gt; Completely serverless, Aurora DSQL eliminates the complexities of patching, upgrades, and maintenance, freeing developers to focus on innovation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event-Driven and Microservice Ready:&lt;/strong&gt; Optimized for serverless and microservices architectures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Core Architecture Components
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Distributed Design:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Amazon Aurora DSQL is built on a robust distributed architecture. At its core, it is composed of four key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Relay/Connectivity: Manages client connections, routing requests to appropriate resources while ensuring seamless communication across the system.&lt;/li&gt;
&lt;li&gt;Compute/Databases: Handles query execution and processing, leveraging PostgreSQL compatibility for familiar operations.&lt;/li&gt;
&lt;li&gt;Transaction Log and Concurrency Control: Provides atomicity and isolation for transactions while ensuring consistency across nodes and regions.&lt;/li&gt;
&lt;li&gt;User Storage: Ensures data durability and redundancy by replicating user data across multiple Availability Zones.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These components are orchestrated through a control plane that maintains coordination and redundancy across three Availability Zones. This design enables self-healing capabilities, automatic scaling, and high availability, ensuring that infrastructure failures do not impact database operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Multi-Region Clusters:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Aurora DSQL takes availability and resilience to the next level with multi-region linked clusters, enabling applications to achieve high performance and reliability on a global scale.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Linked Clusters: Multi-region clusters operate in active-active mode, allowing read and write operations in multiple regions simultaneously. Data is synchronously replicated across regions, ensuring strong consistency and eliminating replication lag.&lt;/li&gt;
&lt;li&gt;Resilience: Each linked cluster provides independent endpoints for concurrent operations. In the event of a failure in one region, the other region continues operating seamlessly, ensuring uninterrupted service.&lt;/li&gt;
&lt;li&gt;Cross-Region Consistency: Transactions are processed locally in their originating region, with cross-region concurrency checks performed during the commit phase. This approach minimizes latency while maintaining strong data consistency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Exploring Aurora DSQL Multi-Region linked clusters
&lt;/h2&gt;

&lt;p&gt;Curious about Amazon’s latest innovation in distributed SQL databases, I decided to put Multi-Region clusters (one of Aurora DSQL's Core Architecture Components) to the test by creating clusters in different regions to demonstrate cross-region replication and consistent reads from both endpoints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a cluster in Aurora DSQL:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumrxrihl44985dlb86hb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumrxrihl44985dlb86hb.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to &lt;a href="https://console.aws.amazon.com/dsql" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/dsql&lt;/a&gt;, create a cluster, enable linked regions, and choose the region for your linked cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connecting to the cluster using an authentication token and running SQL commands in Aurora DSQL:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3jhc542izyy2wpuj3p1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3jhc542izyy2wpuj3p1.png" alt="Image description" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the cluster you want to connect to and copy its endpoint, then use the command below to start a psql connection to your cluster. When prompted for a password, generate an authentication token and use it as the password.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PGSSLMODE=require \
psql --dbname postgres \
    --username admin \
    --host uiabtxahshv6at5pcfidcxfnbq.dsql.us-east-1.on.aws \
    --password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
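&lt;p&gt;If you script this step, the same invocation can be assembled programmatically. A sketch (the token is still generated out of band and typed at the password prompt):&lt;/p&gt;

```python
import shlex

def build_psql_command(host, user="admin", dbname="postgres"):
    """Assemble the environment and argv for a psql connection to an
    Aurora DSQL endpoint. DSQL requires TLS, hence PGSSLMODE=require;
    --password forces the interactive prompt where the token is pasted."""
    env = {"PGSSLMODE": "require"}
    argv = [
        "psql",
        "--dbname", dbname,
        "--username", user,
        "--host", host,
        "--password",
    ]
    return env, argv

env, argv = build_psql_command(
    "uiabtxahshv6at5pcfidcxfnbq.dsql.us-east-1.on.aws")
print(shlex.join(argv))
```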

&lt;p&gt;&lt;strong&gt;Writing in one region and reading from the second region:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww6aejeu22juwbonnpcw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww6aejeu22juwbonnpcw.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can perform write operations in one region and immediately read the data from another region.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some queries to try&lt;/strong&gt; (after creating a simple &lt;code&gt;example&lt;/code&gt; table with &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;name&lt;/code&gt; columns):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSERT INTO example (id, name) VALUES (1, 'Aurora DSQL Test');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM example WHERE id = 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
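&lt;p&gt;Since Aurora DSQL is PostgreSQL-compatible, the write-then-read pattern above can be sketched locally, here using Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; purely as a stand-in for the two regional endpoints:&lt;/p&gt;

```python
import sqlite3

# Local stand-in: in the demo, the INSERT runs against one regional
# endpoint and the SELECT against the other linked endpoint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO example (id, name) VALUES (1, 'Aurora DSQL Test')")
conn.commit()
row = conn.execute("SELECT * FROM example WHERE id = 1").fetchone()
print(row)  # (1, 'Aurora DSQL Test')
```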



&lt;h2&gt;
  
  
  Why this stands out:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Both regions (us-east-1 and us-east-2) operate as active endpoints, enabling concurrent read and write operations across regions.&lt;/li&gt;
&lt;li&gt;Aurora DSQL guarantees that all reads and writes return the latest committed writes no matter the region.&lt;/li&gt;
&lt;li&gt;If us-east-1 experiences downtime, all operations can seamlessly continue in us-east-2, ensuring high availability.&lt;/li&gt;
&lt;li&gt;Applications in different regions access the same database, reducing complexity.&lt;/li&gt;
&lt;li&gt;Fully managed operation, reducing administrative overhead.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup is ideal for globally distributed applications such as e-commerce platforms, financial systems, or multi-region SaaS services, where uptime, consistency, and performance are critical.&lt;/p&gt;

&lt;p&gt;Follow me for more demos and networking. &lt;a href="https://www.linkedin.com/in/kevin-kiruri/" rel="noopener noreferrer"&gt;Kevin Kiruri LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aurora</category>
      <category>distributedsystems</category>
      <category>sqlserver</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Monitoring AWS Lambda Functions with AWS X-Ray and CloudWatch: Advanced Technique</title>
      <dc:creator>Kevin Kiruri</dc:creator>
      <pubDate>Mon, 16 Dec 2024 23:33:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/monitoring-aws-lambda-functions-with-aws-x-ray-and-cloudwatch-advanced-technique-1c88</link>
      <guid>https://dev.to/aws-builders/monitoring-aws-lambda-functions-with-aws-x-ray-and-cloudwatch-advanced-technique-1c88</guid>
      <description>&lt;p&gt;It is essential to understand the state of your Lambda-based application to ensure its reliability and health. Monitoring provides information that helps you detect and resolve performance problems, outages and errors in your workloads. Lambda-based applications often integrate with multiple services, which makes it just as important that you monitor each service endpoint. In this article, we will explore advanced techniques that allow you to leverage AWS X-Ray and CloudWatch to monitor and observe your applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS X-Ray Advanced features with Lambda:
&lt;/h2&gt;

&lt;p&gt;Since most serverless applications consist of multiple service integrations, troubleshooting performance issues or errors usually involves tracking a request from the source caller through all involved services. AWS X-Ray is a fast and convenient distributed-tracing tool for this, as it allows you to visualise and analyse the flow of requests across your application.&lt;/p&gt;

&lt;p&gt;To use X-Ray with Lambda, activate it in the Lambda console for a specific function, or enable it in the AWS SAM template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  GetLocations:
    Type: AWS::Serverless::Function
    Properties:
      Tracing: Active
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you also grant permission using the &lt;em&gt;AWSXRayDaemonWriteAccess&lt;/em&gt; managed policy, and activate X-Ray for each service in your workload. Once enabled, X-Ray starts collecting tracing data for events. You can instrument your code with the AWS X-Ray SDK to &lt;strong&gt;annotate&lt;/strong&gt; traces and add custom &lt;strong&gt;metadata&lt;/strong&gt;. In Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_xray_sdk.core import xray_recorder

@xray_recorder.capture('process_event')
def lambda_handler(event, context):
    # Your function logic here
    pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
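&lt;p&gt;Conceptually, &lt;code&gt;@xray_recorder.capture&lt;/code&gt; wraps the handler in a named, timed subsegment. A dependency-free sketch of the same idea (illustrative only, not the X-Ray SDK):&lt;/p&gt;

```python
import functools
import time

def capture(segment_name):
    """Minimal analogue of @xray_recorder.capture: time the wrapped
    call and report it under a segment name."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                print(f"segment={segment_name} duration={elapsed:.6f}s")
        return wrapper
    return decorator

@capture("process_event")
def lambda_handler(event, context):
    return {"statusCode": 200}
```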



&lt;p&gt;You can isolate which part of the system is causing the most latency using the X-Ray &lt;strong&gt;service map&lt;/strong&gt;, which visually represents the communication between your lambda function and other services on the X-Ray console.&lt;/p&gt;

&lt;p&gt;Finally, use &lt;strong&gt;subsegments&lt;/strong&gt; for granular tracing within a single Lambda invocation. In Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;with xray_recorder.in_segment('custom_subsegment'):
    # Specific operations to trace
    pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  AWS CloudWatch
&lt;/h3&gt;

&lt;p&gt;AWS CloudWatch provides a platform for centralised monitoring and logging. All Lambda functions are automatically integrated with CloudWatch, which records standard metrics for every invocation. Key CloudWatch metrics for Lambda include Invocations, Errors, Duration, ConcurrentExecutions and memory usage. You can create CloudWatch alarms on these metrics to alert when thresholds are breached. Build custom dashboards to display key metrics such as errors, execution duration, and invocations in one view, and analyze logs with CloudWatch Logs Insights to query your application’s logs for debugging and optimization.&lt;/p&gt;
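&lt;p&gt;For example, a Logs Insights query like the following, run against a function's log group, surfaces the slowest recent invocations from the &lt;code&gt;REPORT&lt;/code&gt; lines Lambda emits (a sketch; adjust the fields and limit to your needs):&lt;/p&gt;

```
filter @type = "REPORT"
| fields @requestId, @duration, @billedDuration, @maxMemoryUsed
| sort @duration desc
| limit 10
```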

&lt;h2&gt;
  
  
  Advanced Monitoring with X-Ray and CloudWatch
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Correlate Traces and Logs: Use trace IDs to link X-Ray traces with CloudWatch logs for detailed debugging. This enables you to trace a request’s journey and investigate logs for specific trace segments, offering precise context during troubleshooting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Real-time Anomaly Detection: Enable anomaly detection in CloudWatch to establish dynamic thresholds based on historical patterns. This helps automatically identify unusual behaviours in your metrics, such as unexpected spikes in errors or latency, reducing the manual effort in monitoring setups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automating Alerting and Remediation: Set up CloudWatch alarms based on Lambda performance or error trends, and automate remediation steps using SNS or Step Functions to handle issues as soon as they arise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optimise Performance: Regularly analyze X-Ray traces and CloudWatch metrics to identify inefficiencies, such as redundant function calls or underutilized resources. These insights allow you to fine-tune function configurations, optimize code, and reduce operational costs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
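&lt;p&gt;To correlate traces with logs, the active trace ID is exposed inside the function through the &lt;code&gt;_X_AMZN_TRACE_ID&lt;/code&gt; environment variable; parsing out its &lt;code&gt;Root&lt;/code&gt; field gives the ID to search for in CloudWatch Logs. A minimal sketch:&lt;/p&gt;

```python
import os

def current_trace_id(header=None):
    """Extract the Root trace ID from an X-Amzn-Trace-Id header value,
    e.g. 'Root=1-5759e988-bd862e3fe1be46a994272793;Parent=...;Sampled=1'.
    Falls back to the _X_AMZN_TRACE_ID environment variable Lambda sets."""
    header = header or os.environ.get("_X_AMZN_TRACE_ID", "")
    for part in header.split(";"):
        key, _, value = part.partition("=")
        if key.strip() == "Root":
            return value
    return None

print(current_trace_id(
    "Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1"))
# 1-5759e988-bd862e3fe1be46a994272793
```

&lt;p&gt;Logging this value on every invocation lets you jump from an X-Ray trace straight to the matching log entries in Logs Insights.&lt;/p&gt;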

&lt;p&gt;Monitoring Lambda-based applications with AWS X-Ray and CloudWatch enables you to maintain high availability, optimize performance, and quickly resolve issues. By focusing on critical metrics like errors, execution time, and throttling, and leveraging the advanced capabilities of these tools, you can build robust and resilient serverless applications.&lt;/p&gt;

&lt;p&gt;Happy Monitoring! &lt;br&gt;
&lt;a href="https://www.linkedin.com/in/kevin-kiruri/" rel="noopener noreferrer"&gt;Let's connect on LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>aws</category>
      <category>serverless</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
