Gunnar Grosch for AWS

DEV Track Spotlight: Build modern applications with Amazon Aurora DSQL (DEV308)

As organizations modernize their applications for scalability and agility, database architecture becomes increasingly crucial. The challenge? Finding a database that combines the serverless benefits of Amazon DynamoDB with the familiar relational model and ACID guarantees that developers love.

At AWS re:Invent 2025, Oleksii Ivanchenko (Solutions Architect at AWS) and Vadym Kazulkin (AWS Serverless Hero and Head of Development at ip.labs) delivered DEV308, exploring Amazon Aurora DSQL and how it addresses this exact challenge. Their session provided both architectural insights and practical implementation guidance for building modern serverless applications.

Watch the Full Session:

Why Amazon Aurora DSQL?

Vadym opened with a critical question: "Why we need another serverless database?" He walked through the existing AWS database offerings and their fit for serverless workloads.

Amazon DynamoDB is ideal for serverless - no infrastructure management, automatic scaling without cold starts, and single-digit millisecond performance. But it's NoSQL, requiring a different design mindset, and has limitations around eventual consistency and transaction sizes.

Amazon RDS provides the relational model developers love but requires significant infrastructure work - VPC configuration, instance sizing, and connection management. When Lambda functions scale, they can exhaust database connections, requiring solutions like Amazon RDS Proxy.

Amazon Aurora Serverless v2 moves closer to serverless ideals with automatic compute and storage scaling, optional scaling to zero, and the Data API for HTTP-based access. However, resuming from zero takes up to 15 seconds, and you lose the buffer cache during scale-down events.

Vadym's key insight: "Can we have AWS database offering which is as serverless as DynamoDB, in terms of not having to deal with the infrastructure, or scale up and down very quickly without cold starts of the database, but provides the benefits of the relational databases like these ACID things?"

Enter Amazon Aurora DSQL

Amazon Aurora DSQL addresses these challenges head-on. As Oleksii explained, it's a serverless database with no infrastructure to manage, no downtime for patching, and five nines of availability. You get separate scaling for compute and storage, separate scaling for reads and writes, and strong consistency throughout.

When to consider Amazon Aurora DSQL:

  • You need ACID transactions across multi-region deployments
  • You're building serverless architectures or microservices
  • Your application follows event-driven patterns
  • You have unpredictable or spiky traffic patterns
  • You want to continue using SQL and existing tooling

Distributed Disaggregated Architecture

What makes Amazon Aurora DSQL truly different is its architecture. Rather than hiding traditional PostgreSQL instances behind an endpoint, Amazon Aurora DSQL uses a distributed, disaggregated design.

Oleksii explained how critical components of a monolithic OLTP database are separated into independent services:

Connection Management - Handles client connections and authentication

Query Processor - Performs SQL processing, acting as a dedicated PostgreSQL engine for each transaction

Adjudicator - Determines whether transactions can commit while enforcing isolation rules

Journal - Makes transactions durable and replicates data across availability zones and regions

Crossbar - Merges data streams and directs them to storage nodes

Storage - Provides access to data with multiple replicas distributed by database key ranges

Each component works independently with fleets of compute resources that scale dynamically based on workload.

Single-Region Architecture

A single-region cluster is an active-active, multi-writer cluster distributed across three availability zones. You get one endpoint for both reads and writes, with no instances to provision. This provides four nines of availability.

Behind that single endpoint, reads and writes are always local - only transaction commits travel across availability zones.

Multi-Region Architecture

Multi-region clusters provide five nines of availability and multi-regional consistent writes. You get two regional endpoints, both supporting concurrent reads and writes, represented as a single logical database. A third witness region participates in the write quorum and acts as a tiebreaker during network partitioning.

How Transactions Work

Oleksii walked through transaction flow using a pizza ordering example. When you execute a read-only query like "select * from restaurants where rating >= 4.0", here's what happens:

  1. Your application connects to the Amazon Aurora DSQL frontend
  2. Frontend allocates a Query Processor
  3. Query Processor reads the local clock and sets transaction start time (tau start)
  4. Query Processor consults the shard map to locate data
  5. Query Processor goes directly to storage nodes (no need for Adjudicator or Journal on reads)
  6. Storage nodes return rows (not pages) with predicate pushdowns, filtering, and aggregation
  7. Query Processor merges and returns results

Reads are always local - Query Processors access storage nodes in the same availability zone.

Write Transactions and Interactive Sessions

For write transactions, Query Processors act as holding tanks. They read data from local storage nodes, save it in local memory, accumulate all changes, and wait for commit. Only at commit time does the transaction follow the write path to the Adjudicator.

This approach enables read-your-writes capabilities - subsequent reads within the same transaction see pending changes.

Conflict Resolution with the Adjudicator

When you commit, the Query Processor creates a payload containing:

  • Write set - All items modified by the transaction
  • Post image set - Table rows after applying changes
  • Tau start - Transaction start time

The Adjudicator examines payloads from concurrent transactions, looking for overlapping row changes after tau start. If two transactions modify the same row, the first to commit succeeds, and the second is aborted.

If transactions have non-overlapping write sets, both can commit and receive commit timestamps (tau commit).
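
When the Adjudicator aborts the losing transaction, the client sees an error rather than a silent failure - with PostgreSQL-compatible drivers this typically arrives as a serialization failure. Here's a minimal JDBC retry sketch, assuming the conflict surfaces as SQLSTATE 40001 (check your driver and the Amazon Aurora DSQL error documentation for the exact code):

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RetryingWriter {
    // Assumed SQLSTATE for an optimistic concurrency conflict.
    private static final String SERIALIZATION_FAILURE = "40001";
    private static final int MAX_ATTEMPTS = 3;

    interface TransactionalWork {
        void run(Connection conn) throws SQLException;
    }

    static void executeWithRetry(DataSource ds, TransactionalWork work) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (Connection conn = ds.getConnection()) {
                conn.setAutoCommit(false);
                try {
                    work.run(conn);
                    conn.commit(); // commit follows the write path: Adjudicator -> Journal
                    return;
                } catch (SQLException e) {
                    conn.rollback();
                    if (!SERIALIZATION_FAILURE.equals(e.getSQLState()) || attempt >= MAX_ATTEMPTS) {
                        throw e;
                    }
                    // Another transaction committed an overlapping change
                    // first; roll back and retry with fresh reads.
                }
            }
        }
    }
}
```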

Durability Through the Journal

Unlike traditional databases where durability happens at the storage level, Amazon Aurora DSQL uses the Journal - an independent component optimized for ordered replication across availability zones and regions.

Once the Adjudicator approves a transaction, it sends the payload and commit timestamp to the Journal. When the Journal acknowledges the write, your transaction is durable and atomically committed. The Crossbar then pulls data from the Journal and writes it to storage.

From your perspective, once the Query Processor sends success, the transaction is committed.

Amazon Time Sync Service

Strong consistency requires synchronized time. Oleksii emphasized: "You cannot get strong consistency without synchronizing on the time."

Amazon Aurora DSQL uses the Amazon Time Sync Service, with dedicated timing hardware across AWS and the AWS Nitro System providing GPS-disciplined reference clocks on EC2 instances. This enables globally synchronized clocks with microsecond accuracy.

Developer Experience

Authentication Without Passwords

Vadym highlighted a key security feature: "DSQL doesn't use any password, but what does it use instead? It uses tokens to authenticate."

Tokens are generated by the AWS SDK using fast, local cryptography - similar to how you sign requests to Amazon S3 or Amazon DynamoDB. They're very short-lived, so an intercepted token has likely already expired.

You can generate tokens using AWS CLI and use them as passwords with psql or tools like DBeaver. Amazon Aurora DSQL recently released an integrated query editor in the browser, handling token generation automatically.
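
For illustration, here's how token generation looks with the AWS SDK for Java v2, using the DsqlUtilities helper from the SDK's dsql module - a sketch, as the builder's exact shape may differ slightly between SDK versions:

```java
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dsql.DsqlUtilities;

public class DsqlTokens {
    // Returns a short-lived IAM auth token to use in place of a password.
    public static String adminToken(String clusterEndpoint) {
        DsqlUtilities utilities = DsqlUtilities.builder()
                .region(Region.US_EAST_1)
                .credentialsProvider(DefaultCredentialsProvider.create())
                .build();
        // The token is computed locally by signing with your IAM
        // credentials - no round trip to the cluster is required.
        return utilities.generateDbConnectAdminAuthToken(builder -> builder
                .hostname(clusterEndpoint)
                .region(Region.US_EAST_1));
    }
}
```

The resulting string goes wherever a password would - psql's PGPASSWORD, DBeaver's password field, or a JDBC connection property.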

Simple Cluster Creation

Creating an Amazon Aurora DSQL cluster is remarkably simple. Vadym noted: "You see we don't have any scrollbar here, which is surprising. If you created RDS, then you probably know how many choices do you need to make."

For single-region clusters, you essentially just name the cluster. Everything else has sensible defaults. Multi-region clusters require selecting a second region and witness region, but settings remain minimal.
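
Programmatic creation mirrors that simplicity. Here's a rough sketch with the AWS SDK for Java v2's DSQL client - the request and response fields shown are my assumptions, so treat the SDK documentation as authoritative:

```java
import java.util.Map;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dsql.DsqlClient;
import software.amazon.awssdk.services.dsql.model.CreateClusterResponse;

public class CreateDsqlCluster {
    public static void main(String[] args) {
        try (DsqlClient dsql = DsqlClient.builder().region(Region.US_EAST_1).build()) {
            // A single-region cluster needs essentially no settings:
            // the "name" is just a tag, and everything else defaults.
            CreateClusterResponse cluster = dsql.createCluster(r -> r
                    .deletionProtectionEnabled(true)
                    .tags(Map.of("Name", "orders-cluster")));
            System.out.println("Created cluster: " + cluster.arn());
        }
    }
}
```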

Building with AWS Lambda and Amazon API Gateway

Vadym demonstrated building an ordering application with AWS Lambda, Amazon API Gateway, and Amazon Aurora DSQL. The architecture includes Lambda functions for creating orders, getting orders by ID, and querying orders by date range.

The implementation uses standard JDBC connections with the cluster endpoint passed as an environment variable. IAM permissions allow Lambda functions to communicate with Amazon Aurora DSQL using the cluster resource ARN.
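
A minimal connection factory along those lines might look like the following. The environment variable name and endpoint value are illustrative, and the token from the previous section stands in for the password:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class DsqlConnectionFactory {
    // Cluster endpoint injected by the deployment, e.g. a value like
    // "abc123.dsql.us-east-1.on.aws" (hypothetical).
    private static final String ENDPOINT = System.getenv("DSQL_ENDPOINT");

    public static Connection connect(String authToken) throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "admin");
        props.setProperty("password", authToken); // IAM token instead of a password
        props.setProperty("sslmode", "require");  // connections are TLS-only
        return DriverManager.getConnection(
                "jdbc:postgresql://" + ENDPOINT + ":5432/postgres", props);
    }
}
```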

Connection Pooling

Use client-side connection pooling (like HikariCP for Java) even though Amazon Aurora DSQL scales connections beautifully. Creating connections still has a cost, and pooling helps manage the connection lifecycle.
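
A hedged HikariCP sketch, sized for a single Lambda container and with the pool's maximum connection lifetime kept under the 60-minute cap discussed in the quotas section below:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.concurrent.TimeUnit;

public class DsqlPool {
    public static HikariDataSource create(String endpoint, String authToken) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://" + endpoint + ":5432/postgres?sslmode=require");
        config.setUsername("admin");
        config.setPassword(authToken);
        // Recycle connections before the service's 60-minute
        // connection duration limit closes them for us.
        config.setMaxLifetime(TimeUnit.MINUTES.toMillis(50));
        // A single Lambda container rarely needs more than a few connections.
        config.setMaximumPoolSize(5);
        return new HikariDataSource(config);
    }
}
```

One caveat: the pool's password is the token captured at creation time, which eventually expires. In practice you either refresh the token and rebuild connections yourself, or let the connectors described next handle it.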

Amazon Aurora DSQL Connectors

AWS recently released connectors that handle token generation automatically. They're available for Java, Python (psycopg and psycopg2), and Node.js (node-postgres and postgres.js).

With connectors, your code has no Amazon Aurora DSQL dependencies - it's pure JDBC. The connector intercepts calls and handles token generation transparently by adding a simple prefix to your JDBC URL.

ORM Framework Support

Amazon Aurora DSQL works seamlessly with familiar object-relational mapping frameworks:

  • Java - Hibernate
  • Python - SQLAlchemy, Django
  • Other languages - Standard ORMs

You can annotate entities and use standard patterns without code changes.
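
As an example, a minimal Hibernate/JPA entity for the demo's orders table might look like this. The mapping is illustrative; because Amazon Aurora DSQL has no sequences, the id is assigned in application code (a time-ordered UUID works well) rather than by a sequence-backed generation strategy:

```java
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;
import java.time.Instant;
import java.util.UUID;

@Entity
@Table(name = "orders")
public class Order {
    // No sequences in DSQL, so no @GeneratedValue(strategy = SEQUENCE);
    // assign the id yourself, e.g. a UUID version 7.
    @Id
    @Column(name = "order_id")
    private UUID orderId;

    @Column(name = "created")
    private Instant created;

    // Other fields, getters, and setters omitted.
}
```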

Performance Characteristics

Vadym measured performance from AWS Lambda functions written in Java, focusing on warm invocations (to exclude runtime cold start effects) and reporting high-percentile latencies. He tested both single-region and multi-region clusters.

Important note: These tests weren't designed to compare databases or test performance at massive scale. The goal was understanding basic operation performance.

Test Setup

Simple database structure with two tables (a DDL sketch follows the list):

  • orders - order_id (primary key), created timestamp, and other fields
  • order_items - product_id and other fields, with indexes on order_id and timestamp
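
Here's a sketch of what that DDL could look like - only the columns named in the talk appear, the real demo schema has more fields, and secondary indexes in Amazon Aurora DSQL are created asynchronously:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class CreateSchema {
    static void createTables(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("""
                CREATE TABLE orders (
                    order_id UUID PRIMARY KEY,
                    created  TIMESTAMP NOT NULL
                )""");
            // No foreign key to orders: DSQL doesn't support them,
            // so the relationship exists by convention only.
            stmt.execute("""
                CREATE TABLE order_items (
                    order_id   UUID NOT NULL,
                    product_id UUID NOT NULL
                )""");
            // DSQL builds secondary indexes asynchronously.
            stmt.execute("CREATE INDEX ASYNC order_items_order_idx ON order_items (order_id)");
            stmt.execute("CREATE INDEX ASYNC orders_created_idx ON orders (created)");
        }
    }
}
```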

Create Order (3 Inserts)

Creating one order with two items (three inserts in one transaction):

  • Single-region p90: ~20 milliseconds (~7ms per insert)
  • Multi-region p90: ~40 milliseconds

Multi-region writes take roughly twice as long because commits must travel across regions - an expected trade-off.

Get Order by ID (2 Selects)

Getting an order by ID and its two items (two selects using primary key and index):

  • Single-region p90: ~12 milliseconds
  • Multi-region p90: ~12 milliseconds

Read performance is identical for single and multi-region clusters because reads are always local.

Get Orders by Date Range (101 Selects)

Getting 100 orders by date range, then fetching items for each order (101 total selects):

  • Single-region p90: ~200 milliseconds
  • Multi-region p90: ~250 milliseconds

Vadym's assessment: "I think it's a very comparable result."

Cold Start Testing

After leaving clusters untouched for three days, Vadym executed the first select:

  • Single-region median: 165 milliseconds
  • Multi-region median: 169 milliseconds

His conclusion: "No cold starts of the database, it's simply not there." Unlike Amazon Aurora Serverless v2's 15-second scale-up, Amazon Aurora DSQL shows minimal first-query latency.

Query Processor Scaling

The performance results stem from Amazon Aurora DSQL's architecture. Query Processors run in Firecracker lightweight virtual machines (microVMs) on bare metal hosts. Thousands of pre-provisioned microVMs can run on a single host.

When you connect and use Amazon Aurora DSQL, the service ensures enough microVMs are running to handle your workload, scaling automatically as needed. Each Query Processor is fully independent and isolated.

Service Quotas and Constraints

Vadym emphasized understanding constraints: "Each AWS service has its quotas... And DSQL is no exception."

Important Quotas

Maximum connections: 10,000 (adjustable) - "It scales beautifully. There is no problem just to have 10,000 Lambda functions talking to it in parallel."

Maximum transaction time: 5 minutes - Designed for microservice applications, not long-running analytics jobs

Maximum connection duration: 60 minutes - Configure connection pool TTL accordingly

Maximum rows changed per transaction: 3,000 rows (deletes and updates) - For bulk operations, divide the work into batches (a sketch follows these quotas)

Maximum modified data per transaction: 10 MiB
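
For example, a bulk cleanup can loop over bounded deletes so each statement stays comfortably under the row quota. This sketch assumes the demo's orders table and auto-commit mode, so every batch commits as its own transaction (order item cleanup is elided):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class BatchedDelete {
    // Stay well under the 3,000-rows-per-transaction quota.
    private static final int BATCH_SIZE = 2_000;

    static void deleteOldOrders(Connection conn, Timestamp cutoff) throws SQLException {
        String sql = """
            DELETE FROM orders
            WHERE order_id IN (
                SELECT order_id FROM orders WHERE created < ? LIMIT ?)""";
        int deleted;
        do {
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, cutoff);
                ps.setInt(2, BATCH_SIZE);
                // With auto-commit on, each batch is its own transaction.
                deleted = ps.executeUpdate();
            }
        } while (deleted == BATCH_SIZE);
    }
}
```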

PostgreSQL Compatibility and Unsupported Features

Amazon Aurora DSQL is PostgreSQL-compatible, enabling developers to use familiar tools, drivers, and frameworks. However, as a distributed serverless database optimized for specific workloads, it supports a subset of PostgreSQL features.

Vadym stressed that many of these limitations are temporary - features may be added over time as the service evolves.

Currently unsupported features include:

Data Types and Column Defaults:

  • JSON and JSONB
  • Columns with default values

Database Objects:

  • Foreign keys
  • Views
  • Triggers
  • Sequences (use time-based UUID version 7 as an alternative)
  • Temporary tables
  • Partitions
  • PostgreSQL extensions

Commands:

  • ON DELETE CASCADE
  • TRUNCATE
  • VACUUM (not needed due to Amazon Aurora DSQL's architecture)

Other Limitations:

  • Functions written in languages other than SQL
  • pgvector support

The lack of foreign keys is particularly notable. Vadym explained: "I like foreign keys, and I sometimes, or nearly always, I would say, define something on delete cascade there. For example, if I delete the order, I would like that all order items will be deleted automatically."

Without foreign keys or triggers, you must handle referential integrity in application code. Similarly, without ON DELETE CASCADE, cascading deletes must be implemented in your business logic.
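
A minimal sketch of such an application-level cascade, using the demo's orders and order_items tables and one transaction so the parent and children disappear atomically:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

public class OrderRepository {
    // Without ON DELETE CASCADE, delete the children before the
    // parent inside a single transaction.
    static void deleteOrder(Connection conn, UUID orderId) throws SQLException {
        conn.setAutoCommit(false);
        try {
            try (PreparedStatement items = conn.prepareStatement(
                    "DELETE FROM order_items WHERE order_id = ?")) {
                items.setObject(1, orderId);
                items.executeUpdate();
            }
            try (PreparedStatement order = conn.prepareStatement(
                    "DELETE FROM orders WHERE order_id = ?")) {
                order.setObject(1, orderId);
                order.executeUpdate();
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```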

Best Practices

Oleksii and Vadym shared recommendations for optimal Amazon Aurora DSQL usage:

Create many parallel connections - Horizontal scaling through multiple Query Processors

Use small rows and small transactions - Reduces Adjudicator load for conflict resolution

Consider separate clusters per microservice - Minimizes conflicts and aborted transactions

Use EXPLAIN ANALYZE - Identify bottlenecks and understand costs (EXPLAIN ANALYZE VERBOSE shows DPUs; a sketch follows this list)

Implement client-side connection pooling - Improves performance and manages connection lifecycle

Design for constraints - Plan around 3,000-row update limits and 5-minute transaction times
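
For the EXPLAIN ANALYZE tip, a small diagnostic helper is enough - per the session, the VERBOSE output is where the DPU cost shows up:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryCostCheck {
    // Prints the execution plan; diagnostic use only - never
    // concatenate untrusted input into the query string.
    static void explain(Connection conn, String query) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("EXPLAIN ANALYZE VERBOSE " + query)) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```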

When to Choose Amazon Aurora DSQL

The session concluded with practical guidance on database selection for serverless workloads.

Amazon Aurora DSQL Strengths

  • Setup experience: Minimal configuration, like Amazon DynamoDB
  • Auto-scaling: Automatic up and down scaling with no cold starts
  • ACID support: Complete ACID guarantees like all relational databases
  • Connection management: No need for Amazon RDS Proxy or Data API
  • Familiar tools: Works with standard PostgreSQL-compatible drivers and ORMs

Current Considerations

Feature subset: Amazon Aurora DSQL supports a subset of PostgreSQL features. If you're migrating existing applications that rely on sequences, foreign keys, triggers, or bulk updates beyond 3,000 rows, you'll need to adapt your application code.

Service quotas: The 5-minute transaction time and 3,000-row update limit are designed for microservice workloads rather than long-running analytics or bulk operations. These constraints enable the performance characteristics that make Amazon Aurora DSQL serverless.

Vadym's advice: "If you start from scratch, and you know that you can start, I'm pretty confident that the features will come with time."

Key Takeaways

Distributed architecture enables serverless scale - Query Processors in Firecracker microVMs provide virtually endless scale with minimal cold start impact

Constraints enable performance - Five-minute transactions and 3,000-row limits allow better write consistency, conflict resolution, and automatic garbage collection (eliminating the need for VACUUM)

Strong consistency at any scale - Amazon Time Sync Service with microsecond accuracy enables globally-synchronized clocks for ACID guarantees

Simplified authentication - Token-based authentication with short-lived credentials eliminates password management risks

No infrastructure management - No instances to provision, no patching downtime, automatic scaling for compute and storage

Consider your use case carefully - Ideal for new serverless applications and microservices, but evaluate feature compatibility for migration scenarios


About This Series

This post is part of DEV Track Spotlight, a series highlighting the incredible sessions from the AWS re:Invent 2025 Developer Community (DEV) track.

The DEV track featured 60 unique sessions delivered by 93 speakers from the AWS Community - including AWS Heroes, AWS Community Builders, and AWS User Group Leaders - alongside speakers from AWS and Amazon. These sessions covered cutting-edge topics including:

  • πŸ€– GenAI & Agentic AI - Multi-agent systems, Strands Agents SDK, Amazon Bedrock
  • πŸ› οΈ Developer Tools - Kiro, Kiro CLI, Amazon Q Developer, AI-driven development
  • πŸ”’ Security - AI agent security, container security, automated remediation
  • πŸ—οΈ Infrastructure - Serverless, containers, edge computing, observability
  • ⚑ Modernization - Legacy app transformation, CI/CD, feature flags
  • πŸ“Š Data - Amazon Aurora DSQL, real-time processing, vector databases

Each post in this series dives deep into one session, sharing key insights, practical takeaways, and links to the full recordings. Whether you attended re:Invent or are catching up remotely, these sessions represent the best of our developer community sharing real code, real demos, and real learnings.

Follow along as we spotlight these amazing sessions and celebrate the speakers who made the DEV track what it was!
