
Aurora DSQL - Build A Serverless Multi-Region E-Commerce Platform

Introduction

I’ve always been a big fan of managed and truly serverless services offered by public cloud providers like AWS. I want to be able to prototype and build applications with as little infrastructure handling and management as possible. My time should be spent focusing on the business logic of the problem at hand.

I really like using AWS services like Lambda, API Gateway, the Simple Queue Service (SQS), Simple Notification Service (SNS), and many others. For a database platform I have almost always used DynamoDB. DynamoDB tables can be provisioned and ready to use in seconds, the service is highly performant at any scale, and you just pay for what you use. I don’t have to pay hundreds of dollars per month for something I may only use once a week.

In recent times we’ve had a resurgence of interest in SQL-based databases. Of course, most of us learned about databases through SQL, but I’ve typically avoided relational databases whenever possible due to all the setup and management required and how long it takes to start using them after you create them. I know the interface to and API for DynamoDB can be rather cryptic and difficult to get used to, but it’s a truly serverless offering - so right up my alley.

When AWS announced Aurora DSQL at re:Invent 2024, it really struck a chord with me and seemed to give me another big option to take advantage of. I was honestly quite disappointed when I started reading the details about how much of the functionality I was used to in SQL databases isn’t supported in DSQL. As time has passed I have come to understand why the DSQL team made many of the choices they did to get the performance and consistency they were after. I have read a lot of articles and watched videos from Marc Brooker (Marc’s Blog) and others (AWS DSQL Blogs) and appreciate the work that went into DSQL and its innovative design.

Aurora DSQL is a multi-region distributed SQL database that provisions in under 60 seconds and bills only for actual usage. No instances to size, no standby replicas to pay for when you're not using them, and multi-region replication is built in when you need it. Currently multi-region support only allows pairs of AWS regions in the same general part of the world, but the team is working on supporting more cases, like pairs of regions much farther apart (think one in the US and one in Europe) and possibly features like CDC (Change Data Capture), where changes are streamed via an interface. Aurora DSQL ensures that all reads and writes to any Regional endpoint are strongly consistent and durable. This is very tough to accomplish while staying as fast and scalable as they have.
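
To give a sense of how lightweight provisioning is, here's a minimal sketch of creating a single-region cluster with boto3. The parameter and response field names follow the DSQL API reference, but treat the exact shapes as assumptions and check the current SDK docs before relying on them:

# Hedged sketch: provision a DSQL cluster with boto3 (field names are
# assumptions based on the DSQL API reference)
import time
import boto3

dsql = boto3.client("dsql", region_name="us-east-1")

cluster = dsql.create_cluster(
    deletionProtectionEnabled=False,
    tags={"Project": "dsql-experiments"},
)

# Clusters typically report ACTIVE well inside a minute
while dsql.get_cluster(identifier=cluster["identifier"])["status"] != "ACTIVE":
    time.sleep(5)
print("Cluster ready:", cluster["arn"])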

The Kabob Store

I wanted to build a demo project (Github repo here) that I could expand on in later blogs and code repos. I have chosen to build “The Kabob Store” to start working with Aurora DSQL along with other AWS services. Who doesn’t like kabobs and tasty baklava anyway? This e-commerce platform is the start of my future kabob empire, but for now it’s a practical test: a fully functional e-commerce platform with menu browsing, cart management, order placement, and order history. It uses Aurora DSQL for data storage, Elastic Container Service (ECS) with Fargate for compute, and demonstrates whether DSQL can replace DynamoDB as the default choice for serverless applications that need relational data.

In the past I typically focused on serverless compute via AWS Lambda for most projects. I think most people have come to the realization that there are many ways to solve problems, and sticking to the same one for everything is not the best approach. I have spent a lot of time working with containers over the years - be it local Kubernetes installs set up via kubeadm, cloud provider Kubernetes clusters like the Elastic Kubernetes Service (EKS) on AWS, or the Elastic Container Service (ECS) on AWS. I have seen that these work really well for many use cases.

I see the job of a solution architect as taking the requirements given for any problem and the boundaries set to go and sort through the vast set of available tools and platforms and build a solution that best meets the goals and budget. This doesn’t always mean using your favourite approaches and tools. For me this was almost always to use AWS serverless tools and event-driven architectures in the past. In the last few years I have been spending a lot more time mixing in things like container-based solutions, simple VM setups, and almost any approach that gets the job done.

I think one of the keys to allowing this flexibility is structuring business logic code so that it doesn’t know (or care) much about where it’s running, and typically should not directly interact with most of the surrounding infrastructure. If you can set up your projects in this way, it should be quite easy to move from running in AWS Lambda to running in Fargate on ECS to running directly on some VM.
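
As a rough illustration of what I mean, here's a minimal sketch (not the actual Kabob Store code, and with made-up names) of business logic wrapped by thin per-runtime adapters:

import json
from fastapi import FastAPI

# Business logic: no knowledge of HTTP frameworks, Lambda events, or containers
def place_order(customer_name: str, items: list[dict]) -> dict:
    total = sum(item["price"] * item["quantity"] for item in items)
    return {"customer": customer_name, "items": items, "total": total}

# Adapter 1: FastAPI route for a container runtime (Fargate, ECS on EC2, EKS)
app = FastAPI()

@app.post("/orders")
def create_order(payload: dict):
    return place_order(payload["customer_name"], payload["items"])

# Adapter 2: AWS Lambda handler (e.g. behind API Gateway)
def lambda_handler(event, context):
    body = json.loads(event["body"])
    order = place_order(body["customer_name"], body["items"])
    return {"statusCode": 200, "body": json.dumps(order)}

The order logic never imports anything runtime-specific, so moving between platforms means swapping the adapter and the deployment packaging, nothing more.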

The Kabob Store I present here is a full stack solution that includes a ReactJS front end. It’s not using all the latest front end tech - more a plain React Single Page App (SPA). I am more of a backend developer but did teach myself ReactJS a number of years ago and have built a few front end apps when needed.

Kabob Store Order Page

Kabob Store Architecture

The Kabob Store uses containers on ECS Fargate rather than Lambda functions. This deserves explanation since I typically default to Lambda for serverless compute.

┌──────────────┐      ┌──────────────┐      ┌──────────────┐
│    React     │─────▶│     ALB      │─────▶│   FastAPI    │
│   Frontend   │      │   (Route)    │      │   Backend    │
└──────────────┘      └──────────────┘      └──────────────┘
       └──────────────  ECS Fargate  ───────────────┘
                                                    │
                                            ┌───────▼───────┐
                                            │  Aurora DSQL  │
                                            │ (Multi-Region)│
                                            └───────────────┘

Why Containers Instead of Lambda?

For this project, containers provide flexibility. The FastAPI application runs in a container that could deploy to multiple runtimes:

Runtime choice

The business logic doesn't care about the runtime. With minimal adapter code, the same application can deploy across all these platforms. This matters because project requirements change:

Development/staging: Fargate's simplicity wins (no servers to manage)

Production at scale: ECS on EC2 becomes more cost-effective (Fargate pricing is roughly 20-30% higher than equivalent EC2)

Lambda: Works well for this workload, but has 15-minute timeout limits and specific deployment constraints

For the Kabob Store, I chose Fargate for operational simplicity during development. If traffic scales significantly, migrating to ECS on EC2 workers requires no code changes - just Terraform adjustments to swap the Fargate launch type for the EC2 launch type and add an Auto Scaling Group.

The principle: write business logic that's portable across runtimes. Choose the runtime based on current requirements, not because the code is locked into it.

Backend: FastAPI Without the ORM

The backend uses FastAPI with direct psycopg2 queries instead of an ORM. This keeps the business logic focused and portable. In the future I will move to using an ORM but for now I just wanted to keep it simple.

# Direct psycopg2 with parameterized queries
cursor.execute("""
    INSERT INTO orders (id, customer_name, customer_email, items, total_amount)
    VALUES (%s::UUID, %s, %s, %s::JSONB, %s)
    RETURNING *
""", (order_id, name, email, items_json, total))
result = cursor.fetchone()
conn.commit()
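
The snippet above assumes an open connection and cursor. For context, here's a minimal sketch of how a DSQL connection can be established with psycopg2: DSQL authenticates with short-lived IAM tokens used as the password and requires TLS. The token helper name and signature follow AWS's DSQL Python examples, so treat them as assumptions and check the current SDK docs:

import boto3
import psycopg2

REGION = "us-east-1"
CLUSTER_ENDPOINT = "your-cluster-id.dsql.us-east-1.on.aws"  # placeholder endpoint

# Generate a short-lived IAM auth token for the admin role
dsql = boto3.client("dsql", region_name=REGION)
token = dsql.generate_db_connect_admin_auth_token(CLUSTER_ENDPOINT, REGION)

conn = psycopg2.connect(
    host=CLUSTER_ENDPOINT,
    user="admin",
    password=token,      # the IAM token acts as the password
    dbname="postgres",
    sslmode="require",   # DSQL requires TLS connections
)
cursor = conn.cursor()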

This code is runtime-agnostic. It works in:

A Fargate container (current deployment)

A Lambda function using container images

An EC2-based ECS service

A Kubernetes pod in EKS

The application doesn't use Fargate-specific features or Lambda-specific event handlers. The container listens on a port and handles HTTP requests. Where it runs is an infrastructure decision, not a code decision.
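
To make that concrete, the container entrypoint is just a standard ASGI server bound to a port. A minimal sketch (illustrative names, not the repo's exact layout) looks like this:

from fastapi import FastAPI
import uvicorn

app = FastAPI(title="Kabob Store API")

@app.get("/health")
def health():
    # The ALB target group health check can point at this route
    return {"status": "ok"}

if __name__ == "__main__":
    # The same image runs anywhere a container can listen on a port:
    # Fargate, ECS on EC2, EKS, or even Lambda behind a web adapter layer
    uvicorn.run(app, host="0.0.0.0", port=8000)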

The Security Layer Cake

With great power comes great responsibility. An e-commerce platform needs good security, so I implemented some basic best practices, but it’s just a start with much more to be done. As I evolve this project I will add more. For now the layers center on input validation, starting with Pydantic.

Layer 1: Pydantic Validation with Custom Validators

import re
from typing import List

from pydantic import BaseModel, EmailStr, Field, validator

class OrderCreate(BaseModel):
    customer_name: str = Field(..., min_length=2, max_length=100)
    customer_email: EmailStr  # Pydantic's built-in email validation
    items: List[OrderItemCreate]  # OrderItemCreate is defined elsewhere in the app

    @validator('customer_email')
    def validate_email_not_disposable(cls, v):
        disposable_domains = ['tempmail.com', 'throwaway.email', '10minutemail.com']
        domain = v.split('@')[1].lower()
        if domain in disposable_domains:
            raise ValueError('Disposable email addresses are not allowed')
        return v

    @validator('customer_name')
    def validate_name(cls, v):
        if not re.match(r"^[a-zA-Z\s\-']+$", v):
            raise ValueError('Name contains invalid characters')
        return v

Layer 2: Client-Side Validation

The frontend validates inputs before submission, providing immediate user feedback:

// Name validation - letters, spaces, hyphens, apostrophes only
if (!/^[a-zA-Z\s\-']+$/.test(customerData.name)) {
  errors.name = 'Name can only contain letters, spaces, hyphens, and apostrophes';
}

// Email validation with TLD requirement
const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
if (!emailRegex.test(customerData.email)) {
  errors.email = 'Please enter a valid email address';
}

Layer 3: Request Middleware

from fastapi import Request
from fastapi.responses import JSONResponse

@app.middleware("http")
async def validate_request(request: Request, call_next):
    suspicious_patterns = [
        '../',           # Path traversal
        '<script',       # XSS attempts
        'DROP TABLE',    # SQL injection
        '\x00',          # Null byte injection
    ]

    path = str(request.url)
    for pattern in suspicious_patterns:
        if pattern.lower() in path.lower():
            return JSONResponse(status_code=400,
                                content={"detail": "Invalid request"})

    # Pass legitimate requests through to the route handlers
    return await call_next(request)

Layer 4: Parameterized Queries

All SQL queries use parameterization to prevent SQL injection:

# Never do this (even with validation)
query = f"INSERT INTO orders VALUES ('{order_id}', '{name}'...)"

# Always do this
cursor.execute(
    "INSERT INTO orders VALUES (%s::UUID, %s, %s, %s, %s)",
    (order_id, name, email, items_json, total)
)

Infrastructure as Code (With Terraform)

I am a very big proponent of using Infrastructure as Code (IaC), and my go-to tool for this is Terraform. Setting up all your resources in Terraform stacks makes it super easy to set up (and tear down) everything wherever you need. AWS services have very good Terraform support, and even things like setting up multi-region DSQL clusters can be done via Terraform. Here I’m using the official terraform-aws-modules/rds-aurora DSQL module. The entire infrastructure is defined in Terraform, making it reproducible and versionable.

module "dsql_primary" {
  source  = "terraform-aws-modules/rds-aurora/aws//modules/dsql"
  version = "~> 9.0"

  deletion_protection_enabled = false
  witness_region              = "us-west-2"
  create_cluster_peering      = true
  clusters                    = [module.dsql_secondary.arn]

  tags = {
    Name        = "${var.project_name}-dsql-primary"
    Environment = var.environment
  }
}

module "dsql_secondary" {
  source  = "terraform-aws-modules/rds-aurora/aws//modules/dsql"
  version = "~> 9.0"

  providers = {
    aws = aws.secondary  # us-east-2
  }

  deletion_protection_enabled = false
  witness_region              = "us-west-2"
  create_cluster_peering      = true
  clusters                    = [module.dsql_primary.arn]

  tags = {
    Name        = "${var.project_name}-dsql-secondary"
    Environment = var.environment
  }
}

The dsql module handles cluster peering automatically, creating a multi-region DSQL setup with strong consistency across regions. One terraform apply creates multi-region DSQL clusters (primary in us-east-1, secondary in us-east-2, witness in us-west-2).
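
After the apply finishes, one hedged way to confirm the peering is to read a cluster back with boto3 and inspect its multi-region properties. The field names here are assumed from the DSQL API reference, so verify them against the current docs:

import boto3

# Placeholder identifier - use the value output by Terraform
primary = boto3.client("dsql", region_name="us-east-1")
cluster = primary.get_cluster(identifier="your-primary-cluster-id")

print(cluster["status"])                           # e.g. ACTIVE
props = cluster.get("multiRegionProperties", {})
print(props.get("witnessRegion"))                  # us-west-2
print(props.get("clusters"))                       # peered cluster ARNs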

My current store implementation involves setting up a VPC, subnets, and all the other infrastructure needed to run the Elastic Container Service. The app stack is not running in multiple AWS regions though, so it doesn’t really take advantage of the DSQL database being multi-region. In future versions I will implement true multi-region support for everything, with duplicate application stacks in each region and Route53 failover routing.

Terraform Apply Output

Multi-Region DSQL Configuration (Not Really Utilized Yet)

The infrastructure creates DSQL clusters in multiple US regions (us-east-1 primary, us-east-2 secondary) with us-west-2 configured as the witness region using the official Terraform module. This provides data replication and disaster recovery capabilities within the US. Note that the witness region is just a configuration setting for maintaining quorum - there's no actual DSQL cluster in us-west-2, only in us-east-1 and us-east-2. However, the current application always connects to the primary cluster in us-east-1, regardless of where the user is located.

Aurora DSQL's multi-region setup is conceptually similar to DynamoDB Global Tables - both replicate data across multiple AWS regions and provide automatic failover. The key differences: DSQL gives you SQL with PostgreSQL compatibility and strong consistency at every regional endpoint, while Global Tables use DynamoDB's NoSQL model and are eventually consistent across regions by default.

Important limitation: DSQL multi-region clusters are currently restricted to geographic groupings. You can link clusters within the US (us-east-1, us-east-2, us-west-2), within Europe (eu-west-1, eu-west-2, eu-west-3), or within Asia Pacific (ap-northeast-1, ap-northeast-2, ap-northeast-3), but not across continents. For true global data synchronization across continents, DynamoDB Global Tables remains the better choice.

Aurora DSQL's multi-region feature shines when you have a multi-region application within the same geographic area that can route users to their nearest cluster. In that scenario (say, with clusters placed in us-east-1 and us-west-2), East Coast US users could connect to us-east-1 while West Coast users connect to us-west-2, both accessing the same strongly consistent data with lower latency. The witness region maintains quorum for strong consistency.

For this initial demo application with a single-region deployment (all ECS tasks in us-east-1), the multi-region clusters provide excellent data protection and fast disaster recovery within the US, but we're not leveraging the performance benefits of local reads. A future version could deploy the application stack in multiple US regions with Route53 routing users to their nearest endpoint, fully utilizing DSQL's regional multi-region capabilities.
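
A sketch of what that future routing could look like on the application side: each regional stack connects to its local cluster endpoint based on a region setting injected by the deployment. The endpoint values and environment variable here are illustrative assumptions, not the repo's configuration:

import os

# Hypothetical regional DSQL endpoints for the peered clusters
DSQL_ENDPOINTS = {
    "us-east-1": "primary-cluster-id.dsql.us-east-1.on.aws",
    "us-east-2": "secondary-cluster-id.dsql.us-east-2.on.aws",
}

def local_dsql_endpoint() -> str:
    # Assumes the task definition (via Terraform) sets AWS_REGION per region
    region = os.environ.get("AWS_REGION", "us-east-1")
    # Fall back to the primary region if no local cluster exists
    return DSQL_ENDPOINTS.get(region, DSQL_ENDPOINTS["us-east-1"])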

DSQL Regional cluster

What's Next?

The Kabob Store is just the beginning. Here's what's on the roadmap:

Authentication: Adding AWS Cognito for user accounts and login

Observability: Full OpenTelemetry observability for the platform

Store Dashboard: Real-time order management interface for store staff with Server-Sent Events or Websockets for instant order notifications

Payments: Integrating Stripe for actual transactions

AI Ordering Agent: Conversational ordering interface using Amazon Bedrock AgentCore and Strands framework

Analytics: Building a QuickSight dashboard for business metrics

Prerequisites

If you’re going to set up the Kabob Store demo code for yourself you will need the following:

AWS Account with admin permissions

Terraform >= 1.5.0

Docker for container builds

AWS CLI configured

~$2-3/day budget for testing

Try It Yourself

The entire project is open source. You can deploy your own Kabob Store:

# Clone the repo
git clone https://github.com/RDarrylR/kabob-store

# Deploy infrastructure
cd infrastructure
terraform init
terraform apply

# Build and push container images to ECR
# Then update ECS services to deploy
# See README.md for detailed deployment steps

# Visit your ALB URL and start ordering kabobs!

Conclusion

Aurora DSQL offers most of what I've been looking for: SQL with DynamoDB-like operational characteristics. It provisions in seconds, scales automatically, and bills only for usage. For the first time, I can choose SQL for a new project without accepting always-on infrastructure costs or extended provisioning times.

The container-based approach provides similar flexibility. The same application code can run on Lambda, Fargate, ECS on EC2, or EKS. I can choose whichever runtime fits the current requirements and cost profile. During development, Fargate eliminates server management. At scale, ECS on EC2 reduces costs. If requirements change, the code doesn't need to.

The Kabob Store demonstrates a straightforward architecture: runtime-portable business logic, parameterized SQL queries, explicit transaction boundaries, multi-layer validation, and scoped IAM permissions. The entire stack deploys with terraform apply and produces an e-commerce platform with data redundancy across US regions. When requirements change (more traffic, different cost targets, specific compliance needs), the code can move to different infrastructure without rewriting the business logic.

For my projects, the decision tree has expanded. As a solution architect I always want to have as many tools to choose from. DynamoDB remains the right choice when its data model fits naturally. Lambda remains the default for event-driven workloads. But when I need SQL with serverless economics, or containers that can move between runtimes, these are now viable options. The Kabob Store proves they work in practice.

CLEANUP (IMPORTANT!!)

If you do end up deploying the Kabob Store yourself, please understand that some of the included resources will cost you real money. For a short period of time it won’t be much, but running the VPC and NAT Gateway will incur daily charges. Please don’t forget about it.

Please MAKE SURE TO DELETE the stack if you are no longer using it. Running terraform destroy can take care of this, or you can delete the resources in the AWS console.

Try the setup in your AWS account

You can clone the Github Repo and try this out in your own AWS account. The README.md file mentions any changes you need to make for it to work in your AWS account.

Please let me know if you have any suggestions or problems trying out this example project.

For more articles from me please visit my blog at Darryl's World of Cloud or find me on Bluesky, X, LinkedIn, Medium, Dev.to, or the AWS Community.

For tons of great serverless content and discussions please join the Believe In Serverless community we have put together at this link: Believe In Serverless Community
