Alex

.NET NativeAOT on AWS Lambda: 7× Faster Cold Starts, 73% Lower Costs

.NET NativeAOT on AWS Lambda: The Performance Revolution You've Been Waiting For

If you're running .NET on AWS Lambda and haven't explored NativeAOT yet, you're leaving serious performance (and money) on the table. Let me show you some numbers that might change your mind:

  • 🚀 7× faster cold starts (6680ms → 940ms)
  • ⚡ 6× faster warm runs (91ms → 14ms)
  • 💾 50% less memory (93MB → 42MB)
  • 💰 73% lower costs for high-volume workloads

And here's the kicker: these aren't theoretical numbers. They're from real Lambda functions running in production.

Everything here comes from the whitewAw/dotnet-lambda-aot-performance-comparison demo repository: a multi-mode Lambda project with AOT, ReadyToRun, and Regular .NET functions, a containerized build pipeline, and performance-testing tooling.

Why Should You Care?

Every second your Lambda function takes to cold start is a second your users wait. And in the world of web applications, every millisecond counts:

  • Amazon found that 100ms delay = 1% revenue loss
  • Google reports 53% of mobile users abandon sites taking >3 seconds to load
  • A 6.7-second cold start (Regular .NET) is simply unacceptable for modern APIs

But it's not just about user experience - it's about your bottom line:

Scale              Regular .NET   NativeAOT   Savings
1M requests/mo     $0.98          $0.26       $0.72/mo
10M requests/mo    $9.80          $2.60       $7.20/mo
100M requests/mo   $98.00         $26.00      $72.00/mo

For an enterprise with 50 Lambda functions handling 100M requests each, that's 50 × $72/month × 12 months = $43,200/year saved.
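If you want to sanity-check figures like these against your own traffic, the arithmetic is simple. Here's a rough, illustrative cost model; the pricing constants (us-east-1 x86: $0.20 per million requests, about $0.0000166667 per GB-second) and the memory settings (Regular at 512 MB, AOT at 256 MB) are my assumptions, not values published by the repo:

// Back-of-the-envelope Lambda cost model (assumed us-east-1 x86 pricing)
const double pricePerRequest  = 0.20 / 1_000_000;   // $0.20 per 1M requests
const double pricePerGbSecond = 0.0000166667;       // compute charge per GB-second

double MonthlyCost(long requests, double memoryGb, double avgBilledMs) =>
    requests * pricePerRequest +
    requests * memoryGb * (avgBilledMs / 1000.0) * pricePerGbSecond;

// Assuming Regular at 512 MB / ~94 ms billed and AOT at 256 MB / ~14 ms billed:
Console.WriteLine($"Regular: {MonthlyCost(1_000_000, 0.50, 94):F2}"); // ≈ 0.98
Console.WriteLine($"AOT:     {MonthlyCost(1_000_000, 0.25, 14):F2}"); // ≈ 0.26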

The Three Compilation Modes: A Quick Primer

Before we dive into the numbers, let's understand what we're comparing:

1. Regular .NET (JIT)

Traditional approach: ships IL bytecode, compiles to native at runtime.

App Start → Load IL → Init Runtime → JIT Compile → Execute
                      ⏱️ SLOW        ⏱️ SLOW

Pros: Maximum flexibility, smallest package

Cons: Slowest startup, unpredictable performance

2. ReadyToRun (R2R)

Hybrid: ships both IL and precompiled native images.

App Start → Load R2R+IL → Init Runtime → Execute (mostly native)
                          ⏱️ SLOW

Pros: Faster startup than Regular, minimal code changes

Cons: Still requires full runtime, larger packages

3. NativeAOT

Pure native: everything compiled ahead-of-time.

App Start → Execute Native Binary
            ✅ FAST

Pros: Fastest startup, lowest memory, no runtime overhead

Cons: Reflection limitations (requires source generators)

The Real Performance Data

I built identical Lambda functions in all three modes and ran them through hundreds of test cycles. Here's what I found:

Cold Start Performance

Regular .NET 8:      ████████████████████████████████████████ 6680ms
ReadyToRun .NET 8:   ████████████████████████ 4389ms
AOT .NET 8:          ██████ 1082ms ⚡
AOT .NET 9:          █████ 971ms ⚡
AOT .NET 10:         ████ 940ms ⚡ (FASTEST)

Improvement: 7.1× faster (Regular → AOT .NET 10)

Warm Run Performance

ReadyToRun .NET 8:   ████████████ 99ms
Regular .NET 8:      ███████████ 91ms
AOT .NET 8:          ██ 18ms ⚡
AOT .NET 9:          █ 14ms ⚡ (FASTEST)
AOT .NET 10:         ██ 17ms ⚡

Improvement: 6.5× faster (Regular → AOT .NET 9)

Memory Usage

ReadyToRun .NET 8:   ████████████ 89-96 MB
Regular .NET 8:      ███████████ 88-93 MB
AOT .NET 8:          █████ 46-52 MB ⚡
AOT .NET 9:          ████ 43-49 MB ⚡
AOT .NET 10:         ████ 42-48 MB ⚡ (LOWEST)

Improvement: 52% less memory (Regular → AOT .NET 10)

The Code: What Changes?

The good news? Your business logic doesn't change. The differences are in how you bootstrap and serialize.

Regular .NET Lambda

using Amazon.Lambda.Annotations;
using Amazon.Lambda.Core;
using Amazon.Lambda.Serialization.SystemTextJson;

[assembly: LambdaSerializer(typeof(DefaultLambdaJsonSerializer))]

namespace LambdaRegularDemo;

public class Function
{
    private readonly IDynamoDBRepository _repository;

    public Function(IDynamoDBRepository repository)
    {
        _repository = repository;
    }

    [LambdaFunction]
    public async Task<Guid> FunctionHandler(
        Dictionary<string, string> input,
        ILambdaContext context)
    {
        // Cancel the repository call if the Lambda timeout is about to hit
        using var cts = new CancellationTokenSource(context.RemainingTime);
        return await _repository.CreateAsync(cts.Token);
    }
}
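One detail the snippet above glosses over: with Amazon.Lambda.Annotations, the constructor-injected IDynamoDBRepository has to be registered somewhere. That happens in a [LambdaStartup] class, roughly like this (a minimal sketch; DynamoDBRepository is an illustrative concrete type, not necessarily what the demo repo names it):

using Amazon.DynamoDBv2;
using Amazon.Lambda.Annotations;
using Microsoft.Extensions.DependencyInjection;

namespace LambdaRegularDemo;

[LambdaStartup]
public class Startup
{
    // Called by the Annotations framework to build the DI container
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<IAmazonDynamoDB>(_ => new AmazonDynamoDBClient());
        services.AddSingleton<IDynamoDBRepository, DynamoDBRepository>(); // illustrative implementation
    }
}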

NativeAOT Lambda

Main difference: Source-generated JSON serialization

// AOTJsonContext.cs
using System.Text.Json.Serialization;

[JsonSerializable(typeof(Guid))]
[JsonSerializable(typeof(Dictionary<string, string>))]
public partial class AOTJsonContext : JsonSerializerContext { }

// Function.cs
using Amazon.Lambda.Core;

public class Function
{
    private readonly IDynamoDBRepository _repository;

    public Function(IDynamoDBRepository repository)
    {
        _repository = repository;
    }

    public async Task<Guid> FunctionHandler(
        Dictionary<string, string> input,
        ILambdaContext context)
    {
        // Same business logic!
        using var cts = new CancellationTokenSource(context.RemainingTime);
        return await _repository.CreateAsync(cts.Token);
    }
}

// Program.cs - Custom runtime bootstrap
using Amazon.Lambda.Core;
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.SystemTextJson;
using Microsoft.Extensions.DependencyInjection;

var handler = async (Dictionary<string, string> input, ILambdaContext context) =>
{
    await using var serviceProvider = CreateServiceProvider();
    var function = serviceProvider.GetRequiredService<Function>();
    return await function.FunctionHandler(input, context);
};

await LambdaBootstrapBuilder
    .Create(handler, new SourceGeneratorLambdaJsonSerializer<AOTJsonContext>())
    .Build()
    .RunAsync();
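CreateServiceProvider() comes from the demo repo and isn't shown in the post. A minimal sketch of what such a helper might look like, with every registration explicit so the AOT compiler sees the concrete types (DynamoDBRepository is an illustrative name, not necessarily the repo's actual type):

// Program.cs (continued) - hypothetical sketch of the DI helper used above
using Amazon.DynamoDBv2;
using Microsoft.Extensions.DependencyInjection;

static ServiceProvider CreateServiceProvider()
{
    var services = new ServiceCollection();

    // No assembly scanning, no Activator.CreateInstance: everything is known at compile time
    services.AddSingleton<IAmazonDynamoDB>(_ => new AmazonDynamoDBClient());
    services.AddSingleton<IDynamoDBRepository, DynamoDBRepository>(); // illustrative implementation
    services.AddSingleton<Function>();

    return services.BuildServiceProvider();
}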

Project Configuration

The key difference is in your .csproj:

<PropertyGroup>
  <OutputType>Exe</OutputType> <!-- Must be Exe for AOT -->
  <TargetFramework>net9.0</TargetFramework>
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
  <PublishAot>true</PublishAot> <!-- Enable AOT -->
  <SelfContained>true</SelfContained>
  <StripSymbols>true</StripSymbols>
  <InvariantGlobalization>true</InvariantGlobalization>
</PropertyGroup>
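For comparison, ReadyToRun doesn't need a custom bootstrap or AOT-specific code at all; it's a publish-time switch. A minimal sketch of the equivalent R2R project settings (standard MSBuild properties; the demo repo's actual project file may differ):

<PropertyGroup>
  <TargetFramework>net8.0</TargetFramework>
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
  <PublishReadyToRun>true</PublishReadyToRun> <!-- Precompile IL into R2R native images -->
  <!-- No PublishAot here: the managed dotnet8 runtime still hosts the function -->
</PropertyGroup>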

The AOT Challenge: Reflection

NativeAOT's biggest limitation? No dynamic reflection. But this is actually a blessing in disguise - it forces you to write better code.

❌ What Doesn't Work

// Dynamic type loading
Type t = Type.GetType("MyNamespace.MyClass");
var instance = Activator.CreateInstance(t);

// Runtime assembly loading
Assembly asm = Assembly.Load("PluginAssembly");

// Reflection-based serialization
JsonSerializer.Serialize(obj); // Uses reflection!

✅ AOT-Friendly Alternatives

1. JSON Source Generation

[JsonSerializable(typeof(MyType))]
public partial class MyJsonContext : JsonSerializerContext { }

// Usage
var json = JsonSerializer.Serialize(
    data, 
    MyJsonContext.Default.MyType
);
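Deserialization goes through the same generated metadata, so round-tripping stays reflection-free. A small illustrative usage with the MyJsonContext defined above:

// Serialize and deserialize using only compile-time generated type info
string json = JsonSerializer.Serialize(data, MyJsonContext.Default.MyType);
MyType? roundTripped = JsonSerializer.Deserialize(json, MyJsonContext.Default.MyType);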

2. Constructor Injection (Always!)

// ✅ DO: Types known at compile time
public class MyService
{
    private readonly IRepository _repo;

    public MyService(IRepository repo)
    {
        _repo = repo;
    }
}

// ❌ DON'T: Service locator pattern
var repo = serviceProvider.GetService<IRepository>();

3. Explicit Type Registration

// ✅ DO: Compile-time registration
services.AddSingleton<IPlugin, ConcretePlugin1>();
services.AddSingleton<IPlugin, ConcretePlugin2>();

// ❌ DON'T: Runtime discovery
var plugins = Directory.GetFiles("plugins", "*.dll")
    .Select(Assembly.LoadFrom);
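With explicit registrations like these, a consumer just takes the whole collection through constructor injection; no directory scanning needed. A brief sketch (IPlugin and its Execute method are illustrative names, not types from the repo):

public class PluginRunner
{
    private readonly IEnumerable<IPlugin> _plugins;

    // Every registered IPlugin implementation is injected here
    public PluginRunner(IEnumerable<IPlugin> plugins) => _plugins = plugins;

    public void RunAll()
    {
        foreach (var plugin in _plugins)
            plugin.Execute(); // illustrative member on IPlugin
    }
}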

Building for Production

Use Docker to ensure consistent Linux binaries:

FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build

# Install native toolchain for AOT
RUN apt-get update && \
    apt-get install -y clang zlib1g-dev zip

WORKDIR /src
COPY . .

# Build AOT Lambda
RUN dotnet restore ./LambdaAOTDemo9/LambdaAOTDemo9.csproj && \
    dotnet publish ./LambdaAOTDemo9/LambdaAOTDemo9.csproj \
    -c Release -o /artifacts/publish && \
    mv /artifacts/publish/LambdaAOTDemo9 /artifacts/publish/bootstrap && \
    cd /artifacts/publish && \
    zip -r /artifacts/LambdaAOTDemo9-lambda.zip .

Why Docker?

  • Ensures Linux build on any dev OS (Windows, Mac, Linux)
  • Includes required native dependencies (clang, zlib1g-dev)
  • Reproducible builds across team

Deploying to AWS Lambda

Option 1: Custom Runtime (Recommended)

LambdaAOTFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: provided.al2023
    Handler: bootstrap
    Code:
      S3Bucket: !Ref DeploymentBucket
      S3Key: LambdaAOTDemo9-lambda.zip
    MemorySize: 256  # Can use less memory with AOT!
    Timeout: 30
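If you're not using CloudFormation, the same function can be created straight from the CLI (bucket, key, and role ARN are placeholders):

aws lambda create-function \
  --function-name LambdaAOTDemo9 \
  --runtime provided.al2023 \
  --handler bootstrap \
  --memory-size 256 \
  --timeout 30 \
  --role arn:aws:iam::123456789012:role/lambda-exec-role \
  --code S3Bucket=your-bucket,S3Key=LambdaAOTDemo9-lambda.zip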

Option 2: Container Image

FROM public.ecr.aws/lambda/provided:al2023

COPY --from=build /artifacts/publish/bootstrap ${LAMBDA_RUNTIME_DIR}/bootstrap

CMD ["bootstrap"]

Benefits:

  • Deploy .NET 9, .NET 10, or future versions today
  • No waiting for AWS managed runtime updates
  • Smaller image (no 200MB+ .NET runtime layer)
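A typical push-and-deploy flow for the container option looks like this (account ID, region, repository name, and role are placeholders, and the ECR repository is assumed to exist already):

# Push the image to ECR
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker tag aot-demo:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/aot-demo:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/aot-demo:latest

# Create the Lambda function from the image
aws lambda create-function \
  --function-name LambdaAOTDemo9-image \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.eu-west-1.amazonaws.com/aot-demo:latest \
  --role arn:aws:iam::123456789012:role/lambda-exec-role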

When Should You Use AOT?

✅ Choose NativeAOT When:

  • Cold start time is critical (APIs, webhooks, user-facing functions)
  • Memory costs matter (high-volume serverless)
  • Running .NET 9/10 on Lambda (managed runtime only supports .NET 8)
  • Predictable performance required (SLA-driven workloads)

Perfect for:

  • REST API backends
  • Event processors (S3, SQS, EventBridge)
  • GraphQL resolvers
  • Scheduled tasks

⚠️ Choose Regular .NET When:

  • Maximum flexibility required (plugins, dynamic loading)
  • Heavy reflection/dynamic code (complex ORMs, frameworks)
  • Long-running processes (cold start amortized over lifetime)
  • Fastest dev iteration

🔄 Choose ReadyToRun When:

  • You want 34% faster cold starts with minimal code changes
  • Testing AOT compatibility as a migration step
  • Some dynamic features required

But remember: R2R still costs 300% more than AOT!

The Business Case: Dual Value Proposition

Quick responses aren't just a technical metric - they drive business value:

1. Lower Operating Costs

At 100M requests/month:

  • Regular: $98/month per function
  • AOT: $26/month per function
  • Savings: $72/month

With 50 functions: $43,200/year saved 💰

2. Better User Experience

At 100M requests/month, switching to AOT eliminates:

  • 2,138 hours of cumulative user waiting time
  • Calculation: (91 ms - 14 ms) × 100M requests = 7.7 million seconds ≈ 2,138 hours

Translation:

  • Higher conversion rates
  • Lower bounce rates
  • Better SLA compliance
  • Competitive advantage

Real-World Results

Here's the complete performance table from my tests:

Function       .NET   Runtime   Cold Start   Warm Avg   Memory
Regular        8      dotnet8   6680 ms      91 ms      88-93 MB
ReadyToRun     8      dotnet8   4389 ms      99 ms      89-96 MB
AOT            8      dotnet8   1082 ms      18 ms      49-52 MB
AOT            9      dotnet8   971 ms       14 ms ⚡    47-49 MB
AOT            10     dotnet8   940 ms ⚡     17 ms      42-45 MB ⚡

.NET 10 AOT wins on cold start and memory

.NET 9 AOT wins on warm performance

Getting Started

Ready to try it yourself? Here's the quickest path:

# Clone the demo repo
git clone https://github.com/whitewAw/dotnet-lambda-aot-performance-comparison.git
cd dotnet-lambda-aot-performance-comparison

# Build all Lambda packages via Docker
docker build -f src/Dockerfile -t aot-demo .

# Extract artifacts
docker create --name temp aot-demo
docker cp temp:/artifacts ./build-output
docker rm temp

# Deploy to AWS
aws s3 cp ./build-output/LambdaAOTDemo9-lambda.zip s3://your-bucket/
aws lambda update-function-code \
  --function-name my-function \
  --s3-bucket your-bucket \
  --s3-key LambdaAOTDemo9-lambda.zip
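To reproduce the cold-start numbers yourself, force a fresh execution environment and read the REPORT line from the logs. This is a rough manual approach; the repo also ships a dedicated performance-testing invoker:

# Any configuration change forces new (cold) execution environments
aws lambda update-function-configuration \
  --function-name my-function \
  --environment "Variables={COLD_START_NONCE=$(date +%s)}"

# Invoke once, then check the REPORT line: "Init Duration" is the cold start
aws lambda invoke \
  --function-name my-function \
  --payload '{}' \
  --cli-binary-format raw-in-base64-out \
  response.json

aws logs tail /aws/lambda/my-function --since 5m | grep REPORT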

Key Takeaways

  1. NativeAOT is production-ready for Lambda workloads
  2. 7× faster cold starts and 73% cost savings are real
  3. Code changes are minimal (mostly serialization)
  4. .NET 9/10 can run on Lambda today via custom runtime
  5. User experience AND cost both improve dramatically
  6. Start preparing now even if you're not migrating yet (see the csproj snippet right after this list):
    • Adopt JSON source generation
    • Use constructor injection
    • Avoid reflection where possible
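
A cheap way to start that preparation is to turn on the built-in AOT and trim analyzers, which flag reflection-heavy code at build time (a standard MSBuild property, shown here for a library project):

<PropertyGroup>
  <!-- Enables the AOT, trimming, and single-file analyzers for this project -->
  <IsAotCompatible>true</IsAotCompatible>
</PropertyGroup>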

The Future is AOT

.NET's investment in NativeAOT is clear:

  • .NET 8: Stable for console apps, minimal APIs
  • .NET 9: Improved trimming, best warm performance (14ms avg)
  • .NET 10: Smallest binaries (5.56 MB), fastest cold start (940ms)

Each version shows measurable improvements in size and performance.

Performance Variance Note

⚠️ Important: Your results may vary based on:

  • AWS region and AZ
  • Time of day
  • Lambda execution environment reuse
  • Your specific workload

The results shown represent averaged measurements from multiple test runs. Individual runs may vary by ±5-15%.

Always benchmark your specific workload!

Try It Yourself

The complete source code, benchmarks, and detailed documentation are available:

GitHub: whitewAw/dotnet-lambda-aot-performance-comparison (".NET NativeAOT: Performance Revolution. A Deep Dive into AOT vs ReadyToRun vs Regular .NET")

What's included:

  • ✅ Working Lambda functions (.NET 8, 9, 10 AOT + R2R + Regular)
  • ✅ Containerized build pipeline
  • ✅ Performance testing invoker
  • ✅ CloudFormation templates
  • ✅ Detailed migration guide

Discussion

Have you tried NativeAOT on Lambda? What were your results? Drop a comment below!

Questions I'd love to hear:

  • What's holding you back from trying AOT?
  • What performance improvements have you seen?
  • What challenges did you face during migration?

If this helped you, consider ⭐ starring the repo and following me for more .NET performance deep-dives!


#dotnet #aws #lambda #serverless #performance #nativeaot #csharp #cloudcomputing

Top comments (1)

Art light

Wow, this is incredibly well done - 🎆 great analysis and super valuable data. I love how practical and detailed your comparison is. Definitely inspired to give NativeAOT a real shot after reading this.