Gunnar Grosch for AWS

DEV Track Spotlight: Unleash Rust's Potential on AWS (DEV307)

"Welcome to Rust ASMR", joked Darko Mesaroš as he kicked off one of the most technically impressive sessions of AWS re:Invent 2025. But the humor quickly gave way to serious performance numbers that made developers sit up and take notice.

DEV307 brought together two Rust enthusiasts with serious credentials: AJ Stuyvenberg, AWS Serverless Hero and Staff Engineer at Datadog, and Darko Mesaroš, Principal Developer Advocate at AWS. Their mission? To show developers exactly why Rust has become AWS' default choice for data plane projects and how you can harness its power for everything from Lambda functions to distributed systems.

The session wasn't just theory. It was packed with real production examples, performance benchmarks, and code demonstrations that showed Rust running 10x faster than other languages while using a fraction of the memory.

Watch the full session:

Why Rust at AWS: The Amazon Aurora DSQL Story

The most compelling case study came from Amazon Aurora DSQL, AWS' new serverless distributed SQL database. The project started as a 100% JVM implementation in Kotlin, but the team hit a wall.

Despite years of tuning, they could only achieve 2,000-3,000 transactions per second. The culprit? Garbage collection. The expected latency of 1 second ballooned to 10 seconds due to GC pauses, which was completely unacceptable for a distributed database where writes can happen across multiple global points.

The team decided to experiment. They rewrote just one component (the adjudicator) in Rust. The result? 10x faster performance. And these weren't Rust experts. They were Kotlin developers writing their first Rust code.

Encouraged by the results, they rewrote the entire data plane in Rust. Then they realized they couldn't effectively share logic and testing between Kotlin and Rust, so they rewrote the control plane too. The entire project became 100% Rust.

The kicker? The 10x performance improvement came with zero attempts at optimization. Just idiomatic Rust code was enough to achieve those gains.

As Darko explained: "The team behind it stopped asking 'Should we use Rust?' and started asking themselves, 'Where else could Rust help?'"

Today, Amazon Aurora DSQL isn't just built with Rust. It's built for Rust, with excellent support for crates like SQLx for developers building Rust applications.

Rust's Reach: From Infrastructure to Serverless

Rust isn't just for massive distributed databases at AWS. Darko demonstrated its versatility across the entire stack:

Infrastructure Layer:

  • Firecracker - The microVM technology powering Lambda and Fargate
  • Bottlerocket - The container-optimized operating system
  • Amazon Aurora DSQL - The distributed SQL database

Serverless Layer:

  • AWS Lambda functions with cargo-lambda
  • Cold start times of just 1.2 seconds
  • Warm start times of 4 milliseconds
  • Memory usage of only 29 megabytes

Application Layer:

  • Full-stack applications with Rust + Lambda + HTMX
  • Command-line tools and MCP servers
  • Real-time data processing with Amazon Kinesis

Darko even demonstrated a link shortener built entirely with Rust, Lambda, and HTMX, proving that Rust can handle front-end interactions just as well as backend processing.
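A toy version of the server side of such a demo might look like the following (the function names, short-code scheme, and fragment shape are all illustrative, not the actual demo code; a real service would persist the mapping in a database such as DynamoDB or Aurora DSQL):

```rust
// Toy short-code: a simple hash of the URL. Illustrative only; a real
// shortener would store a code-to-URL mapping durably.
fn shorten(url: &str) -> String {
    let code: u32 = url
        .bytes()
        .fold(5381u32, |h, b| h.wrapping_mul(33) ^ b as u32);
    format!("{code:08x}")
}

// HTMX swaps this fragment into the target element of an hx-post form,
// so the browser never needs a separate front-end framework.
fn htmx_fragment(url: &str) -> String {
    let code = shorten(url);
    format!("<div id=\"result\"><a href=\"/r/{code}\">short.example/{code}</a></div>")
}

fn main() {
    println!("{}", htmx_fragment("https://aws.amazon.com/"));
}
```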

Rust at Datadog: Production Performance at Scale

AJ Stuyvenberg brought real-world production experience from Datadog, where Rust powers critical observability infrastructure processing trillions of data points daily.

Serverless Extensions: From 700ms to 80ms

Datadog's journey with Rust started with Lambda extensions. Their initial Go-based extension added 700-800 milliseconds of cold start overhead, a significant burden for serverless applications.

The challenge was unique: Lambda extensions act as parent processes to function handlers. If the extension crashes, it kills the customer's function. This made memory safety critical.

Over six months, AJ and a small team who had never written Rust before built "Bottle Cap," a new Rust-based extension. The results were dramatic:

  • Cold start overhead dropped from 400-500ms to just 80ms
  • Post-runtime duration reduced from 60-140ms max to 500 microseconds
  • 53,000 lines of Go code were deleted after the migration

But they didn't stop there. By carefully managing TCP keep-alive connections and predicting Lambda invocation patterns, they achieved near-zero overhead for hot Lambda functions.

The Datadog Agent: 25% CPU Reduction

Datadog's agent, deployed across customer infrastructure, processes unbounded telemetry data. Metrics systems are "allocation factories," constantly creating small allocations for timestamps, data points, and tags.

By rewriting the data plane components in Rust, Datadog achieved:

  • 25% reduction in CPU usage per pod
  • 33% reduction in memory usage
  • Significantly better performance under load

Backend Systems: 60x Performance Improvement

The most impressive numbers came from Datadog's time-series backend system called Monacle, built on an LSM tree architecture similar to RocksDB.

During a production spike, the Go-based system handling 60,000 points per second started dropping queries. The Rust implementation? It absorbed 3.5 million points per second (a 60x improvement) while continuing to serve query traffic without issues.

AJ demonstrated a simple code comparison: a tag grouping function that looked nearly identical in Go and Rust. The Rust version was 3x faster on a c7i.xlarge instance, even without using specialized allocators like jemalloc.
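To give a sense of what such a hot path looks like, here is a small, hypothetical tag-grouping sketch in Rust (the Point type and grouping key are illustrative, not Datadog's actual code):

```rust
use std::collections::HashMap;

// Illustrative metric point: a value carrying a set of tags.
#[derive(Debug, Clone)]
struct Point {
    tags: Vec<String>,
    value: f64,
}

// Bucket points by their tag set, order-insensitively: "env:prod,svc:api"
// and "svc:api,env:prod" land in the same group.
fn group_by_tags(points: &[Point]) -> HashMap<String, Vec<f64>> {
    let mut groups: HashMap<String, Vec<f64>> = HashMap::new();
    for p in points {
        let mut tags = p.tags.clone();
        tags.sort();
        let key = tags.join(",");
        groups.entry(key).or_default().push(p.value);
    }
    groups
}

fn main() {
    let points = vec![
        Point { tags: vec!["env:prod".into(), "svc:api".into()], value: 1.0 },
        Point { tags: vec!["svc:api".into(), "env:prod".into()], value: 2.0 },
        Point { tags: vec!["env:dev".into()], value: 3.0 },
    ];
    let groups = group_by_tags(&points);
    println!("{} groups", groups.len()); // 2 groups
}
```

The entry(...).or_default() pattern keeps the loop allocation-light: each distinct tag set gets one bucket, and points are appended in place.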

Practical Rust: Concurrency Patterns That Work

Darko demonstrated practical concurrency patterns using idiomatic Rust with the Tokio runtime:

Semaphores and Task Buffers

Processing 10,000 CSV files from Amazon S3 in under 10 seconds using:

  • Semaphores with 500 permits to control API call rate
  • Task buffers of 1,000 concurrent tasks
  • Automatic permit management to prevent throttling

Adaptive Back Pressure

Writing 1 million records to Amazon DynamoDB with intelligent throttling:

  • Start with 500 concurrent write permits
  • Remove 10% of permits when throttled
  • Add 10 permits after 200 consecutive successes
  • Allow backend systems to scale while maintaining optimal throughput
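Decoupled from DynamoDB itself, the throttling logic above is just bookkeeping. A minimal sketch (the struct name and API are illustrative, not the session's code):

```rust
// Adaptive permit controller: start at 500 permits, shed 10% on a
// throttle, add 10 back after 200 consecutive successes.
struct AdaptiveLimiter {
    permits: usize,
    consecutive_successes: usize,
}

impl AdaptiveLimiter {
    fn new() -> Self {
        Self { permits: 500, consecutive_successes: 0 }
    }

    fn on_throttle(&mut self) {
        // Shed 10% of current permits, keeping at least one.
        self.permits = (self.permits - self.permits / 10).max(1);
        self.consecutive_successes = 0;
    }

    fn on_success(&mut self) {
        self.consecutive_successes += 1;
        if self.consecutive_successes >= 200 {
            self.permits += 10; // probe for more headroom
            self.consecutive_successes = 0;
        }
    }
}

fn main() {
    let mut limiter = AdaptiveLimiter::new();
    limiter.on_throttle(); // 500 -> 450
    for _ in 0..200 {
        limiter.on_success(); // 450 -> 460 after 200 successes
    }
    println!("permits: {}", limiter.permits); // permits: 460
}
```

Pairing this controller with a Tokio semaphore (adding and forgetting permits as the numbers move) gives the write path room to back off while the backend scales up.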

The code was remarkably clean: no scary lifetime annotations, minimal borrow checker complaints, just straightforward async/await patterns that any developer could understand.

Production Tips from the Trenches

Both speakers shared hard-won lessons from running Rust in production:

Before You Deploy:

  • Use Clippy to lint away all unwrap() calls (treat warnings as failures)
  • Run profilers to identify unexpected bottlenecks
  • Parse SDK errors properly for better error handling
  • Monitor Tokio metrics for blocking tasks
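The unwrap() rule can be enforced mechanically. A minimal sketch, assuming you run cargo clippy in CI with warnings treated as errors:

```rust
// Deny the Clippy lints crate-wide so `cargo clippy` fails the build on
// any unwrap()/expect() call. (The same lints can also be configured in
// Cargo.toml under a [lints.clippy] table.)
#![deny(clippy::unwrap_used, clippy::expect_used)]

fn main() {
    let input: Option<u32> = "42".parse().ok();
    // input.unwrap() would now fail `cargo clippy`; handle both arms instead.
    match input {
        Some(n) => println!("parsed {n}"),
        None => eprintln!("not a number"),
    }
}
```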

Performance Optimization:

  • Idiomatic Rust code yields performance equivalent to carefully optimized Go
  • Use flame graphs to identify hot paths
  • Consider lock-free data structures for high-contention scenarios
  • Leverage LLMs to generate and benchmark multiple implementation approaches
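One lightweight way to act on the last two tips: keep candidate implementations side by side, check they agree, then time them. This is a quick std-only sketch; for rigorous numbers you would reach for a benchmarking crate such as criterion:

```rust
use std::time::Instant;

// Two candidate implementations of the same operation.
fn sum_iter(v: &[u64]) -> u64 {
    v.iter().sum()
}

fn sum_loop(v: &[u64]) -> u64 {
    let mut total = 0;
    for &x in v {
        total += x;
    }
    total
}

// Crude timer: runs the closure once and prints the elapsed wall time.
fn time_it(name: &str, f: impl Fn() -> u64) -> u64 {
    let start = Instant::now();
    let result = f();
    println!("{name}: {:?}", start.elapsed());
    result
}

fn main() {
    let data: Vec<u64> = (0..1_000_000).collect();
    let a = time_it("iterator sum", || sum_iter(&data));
    let b = time_it("for-loop sum", || sum_loop(&data));
    assert_eq!(a, b); // candidates must agree before comparing speed
}
```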

Lambda-Specific:

  • Use cargo-lambda for seamless deployment
  • Monitor cold start and warm start metrics
  • Consider Rust on ARM for best performance
  • Use the Lambda Runtime Emulator with profilers for debugging

The Controversial Take:

Darko offered a "lukewarm take" that sparked discussion: "We should all stop writing Python." His reasoning? In the age of LLMs and coding assistants, compiled, type-safe languages like Rust provide immediate feedback on generated code, dramatically accelerating development cycles compared to runtime-error-prone languages.

Key Takeaways

🚀 Performance is Built-In - Idiomatic Rust code is 10x faster than equivalent implementations in other languages, with no optimization required

⚡ Serverless-Ready - Lambda functions in Rust have 1.2s cold starts, 4ms warm starts, and use only 29MB of memory

🏗️ Production-Proven - AWS uses Rust as the default for data plane projects; Datadog processes trillions of data points daily with Rust

🔧 Developer-Friendly - Modern tools like cargo-lambda and Tokio make Rust accessible to developers without systems programming backgrounds

🎯 Cost-Effective - 25-60% reductions in CPU and memory usage translate directly to lower cloud costs

💪 Fearless Concurrency - Semaphores, task buffers, and adaptive back pressure patterns are straightforward to implement

As AJ summarized: "Idiomatic Rust code tends to yield the performance of what very carefully optimized Go code does. If performance is a feature and cost really matters in terms of fully utilizing the compute you're paying AWS for, it is very simple to write Rust code that right out of the box is very, very good."

Ready to start? Install Rust with rustup, scaffold your next project with cargo new, and discover why AWS has made Rust the foundation of its most performance-critical systems.


About This Series

This post is part of DEV Track Spotlight, a series highlighting the incredible sessions from the AWS re:Invent 2025 Developer Community (DEV) track.

The DEV track featured 60 unique sessions delivered by 93 speakers from the AWS Community - including AWS Heroes, AWS Community Builders, and AWS User Group Leaders - alongside speakers from AWS and Amazon. These sessions covered cutting-edge topics including:

  • 🤖 GenAI & Agentic AI - Multi-agent systems, Strands Agents SDK, Amazon Bedrock
  • 🛠️ Developer Tools - Kiro, Kiro CLI, Amazon Q Developer, AI-driven development
  • 🔒 Security - AI agent security, container security, automated remediation
  • 🏗️ Infrastructure - Serverless, containers, edge computing, observability
  • 🔄 Modernization - Legacy app transformation, CI/CD, feature flags
  • 📊 Data - Amazon Aurora DSQL, real-time processing, vector databases

Each post in this series dives deep into one session, sharing key insights, practical takeaways, and links to the full recordings. Whether you attended re:Invent or are catching up remotely, these sessions represent the best of our developer community sharing real code, real demos, and real learnings.

Follow along as we spotlight these amazing sessions and celebrate the speakers who made the DEV track what it was!
