AWS re:Invent 2025 Montreal Recap: 6 Lightning Demos That Actually Change How You Build

I went to a local re:Invent recap meetup in Montreal on January 15, expecting a high-level overview of AWS announcements.

What I got instead was something much better.

Six speakers each had ten minutes to demo one concrete feature they were genuinely excited about: not slides, not marketing talk, but “here’s what it does and why it changes things.”

I’m deeply curious about cloud computing and how modern systems are actually built, so this format really worked for me. It wasn’t a deep dive into internals, but it also wasn’t vague or fluffy. It sat in a sweet spot: specific enough to understand what’s new and why it matters, without needing to already be an AWS specialist.

Here’s a recap of the six features that stood out most - and how they fit into a much bigger shift AWS is making.


1) AWS DevOps Agent: AI That Investigates Incidents With You

The first demo showed AWS DevOps Agent, a new AI-powered operational assistant (currently in preview) designed to help teams investigate incidents and find root causes faster.

Instead of just alerting you that “something is broken,” the agent actually tries to understand why.

In the demo, the speaker intentionally broke a Lambda function by misconfiguring its handler. The DevOps Agent:

  • Detected errors from logs and metrics
  • Pulled configuration history
  • Built a timeline of what changed
  • Mapped dependencies between services
  • Suggested the most likely root cause
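
The demo’s exact code wasn’t shared, but the setup is easy to picture: the failure was a configuration mismatch rather than a code bug. A minimal sketch of what that looks like:

```python
# lambda_function.py - a perfectly healthy handler.
# The demo's break was in configuration, not code: point the function's
# Handler setting at something like "app.handler" while the code lives
# in lambda_function.lambda_handler, and every invocation fails with
# "Runtime.ImportModuleError: Unable to import module 'app'" - exactly
# the log/config mismatch the agent correlates into a root cause.

def lambda_handler(event, context):
    return {"statusCode": 200, "body": "ok"}
```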

It also builds an application topology: essentially a live map of how your Lambdas, databases, pipelines, and services connect, so it can reason about blast radius and downstream impact.

What made this feel different from normal observability tooling is that you can interact with the investigation:

  • Ask it follow-up questions
  • Tell it where else to look
  • Have it post findings to Slack or ServiceNow
  • Auto-generate AWS Support cases with context attached

It feels like AWS is trying to turn operations from “alert + panic + dashboards” into “alert + guided diagnosis + suggested fix.”

Image of the architecture of a DevOps Agent


2) AWS Transform: AI-Guided Codebase Migration That Isn’t Reckless

The second demo focused on AWS Transform, an AI-powered tool for modernizing large codebases.

This isn’t just “throw your repo into ChatGPT and pray.”

You run it from a CLI, tell it what kind of migration you want (for example: Node.js 16 → Node.js 20, or AWS SDK v1 → v2), and it:

  • Scans your repository
  • Applies a guided refactor across files
  • Lets you attach context like:
    • “Don’t break this logging framework”
    • “Preserve backward compatibility for this API”
  • Requires a verification command (like npm test or mvn verify) to pass; if the tests fail, the migration is considered unsuccessful

What stood out to me was how seriously correctness is treated. This is closer to a controlled migration pipeline than a one-shot AI rewrite.

The speaker referenced two real AWS case studies:

  • Air Canada: migrated ~1,000 Lambda functions to a new Node.js runtime
  • Twitch: migrated ~913 Go repositories from AWS SDK v1 → v2, saving ~2,800 developer-days

The bigger idea here isn’t just faster refactors. It’s compressing years of technical debt cleanup into weeks.

Image comparing AWS Transform to alternative tools


3) SageMaker Studio: Becoming the Front Door for All Data + AI

The third demo showed the new version of Amazon SageMaker Studio and how AWS is trying to turn it into a single workspace for everything data and AI-related.

Three concrete things stood out:

Built-in Data Catalog + Discovery

Inside Studio, teams can now browse:

  • Datasets
  • Tables
  • Models
  • Notebooks
  • Pipelines

Each asset can include:

  • Documentation
  • Auto-generated descriptions (via Amazon Q)
  • Metadata
  • Data quality indicators
  • Lineage info

This makes it possible to build a real internal “marketplace” for data and models instead of everything living in random S3 buckets.

Querying + Notebooks Without Leaving Studio

You can:

  • Browse tables
  • Run SQL queries (powered by Athena)
  • Preview datasets
  • Open Jupyter notebooks

All from one UI.

Amazon Q is embedded directly into notebooks. In the demo, the speaker:

  • Asked Q to generate SQL
  • Asked Q to generate Python
  • Asked Q to generate a Matplotlib chart

This turns notebooks into an AI-assisted analysis environment instead of a blank coding surface.
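
The generated code itself wasn’t anything exotic, which is the point. Here’s a sketch of the kind of thing Q produced, with invented table and column names, and assuming the AWS SDK for pandas (awswrangler) as the Athena client:

```python
# Sketch of the kind of notebook code Amazon Q generated in the demo.
# Table, database, and column names are invented; awswrangler is an
# assumption - any Athena client would work the same way.
import awswrangler as wr
import matplotlib.pyplot as plt

# "Generate SQL": monthly order counts from a hypothetical sales table
df = wr.athena.read_sql_query(
    """
    SELECT date_trunc('month', order_date) AS month,
           count(*) AS orders
    FROM orders
    GROUP BY 1
    ORDER BY 1
    """,
    database="sales",
)

# "Generate a Matplotlib chart": plot the result
plt.plot(df["month"], df["orders"])
plt.xlabel("Month")
plt.ylabel("Orders")
plt.title("Orders per month")
plt.show()
```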

Serverless Airflow Built Into Studio

Studio now integrates Amazon Managed Workflows for Apache Airflow in a serverless form.

That means:

  • No control plane to manage
  • No always-on cluster cost
  • Native UI integration

You can build:

  • Training pipelines
  • Evaluation pipelines
  • ML workflows

Directly inside Studio.

It collapses notebooks, orchestration, and ML tooling into one place.
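
And the orchestration layer is still plain Airflow: a DAG is just Python, minus the cluster to babysit. A minimal sketch (the task bodies are placeholders):

```python
# A minimal Airflow DAG of the sort you could now run serverlessly
# from Studio. Standard Airflow code; task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def preprocess():
    print("build the training dataset")

def train():
    print("kick off a SageMaker training job")

with DAG(
    dag_id="training_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
):
    prep = PythonOperator(task_id="preprocess", python_callable=preprocess)
    fit = PythonOperator(task_id="train", python_callable=train)
    prep >> fit  # train only runs after preprocess succeeds
```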

Image of SageMaker Studio catalog and metadata


4) Durable Lambda: Serverless That Can Finally Wait

Traditional Lambda breaks down for:

  • Long workflows
  • Human approvals
  • External callbacks
  • Multi-step orchestration

So people end up wiring together Step Functions + DynamoDB + retry logic.

AWS now added Durable Lambda primitives:

  • Wait: pause execution without paying for compute (up to one year)
  • Checkpoint: persist state so retries resume from the same point
  • Wait for Callback: send a token to an external system and resume when it returns

How it works in practice:

  1. Create a Durable Lambda function in the AWS console.
  2. AWS automatically manages the underlying state storage — no DynamoDB or S3 setup needed.
  3. Function runtime can pause and resume at checkpoints or callback points.
  4. Retry logic is built-in and safe: the function won’t duplicate payments or actions.

In the demo, the workflow looked like this:

  1. Reserve inventory
  2. Checkpoint
  3. Process payment
  4. Checkpoint
  5. Wait 15 minutes for user payment
  6. Resume
  7. Ship product

No Step Functions. No external state store.
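
The handler reads like ordinary code. I didn’t capture the SDK’s exact surface, so treat ctx.step and ctx.wait below as hypothetical stand-ins for the Checkpoint and Wait primitives, not the real API:

```python
# Hypothetical sketch of the demo's order flow. `ctx.step` and
# `ctx.wait` are invented stand-ins for the Checkpoint / Wait
# primitives described above; the actual SDK names differ.

def reserve_inventory(order_id): ...   # placeholder business logic
def process_payment(order_id): ...
def ship_product(order_id): ...

def handler(event, ctx):
    order_id = event["order_id"]

    # Each step checkpoints its result, so a retry resumes after it
    # instead of re-running it.
    ctx.step("reserve", lambda: reserve_inventory(order_id))
    ctx.step("charge", lambda: process_payment(order_id))

    # Suspend without paying for compute; execution resumes right here.
    ctx.wait(minutes=15)

    return ctx.step("ship", lambda: ship_product(order_id))
```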

Retries also become safe:

  • No duplicate payments
  • No double reservations

This is also perfect for AI workflows:

  • Waiting for long LLM calls
  • Waiting for human-in-the-loop approvals
  • Waiting for batch embedding jobs

All without paying for idle compute.

Image of Durable Lambda with AWS console


5) Lambda on EC2 Capacity Providers: Serverless Without Cold Starts

Lambda can now run on AWS-managed EC2 instances, giving you more control and eliminating cold starts.

How it works (a rough sketch follows the list):

  1. Create a capacity provider in Lambda — AWS provisions and manages EC2 instances for you.
  2. Configure instance type, CPU, memory, and architecture (GPU support coming).
  3. Lambda functions run on these pre-warmed instances for predictable performance.
  4. AWS handles patching, scaling, and lifecycle management — no SSH or instance management needed.
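
There’s no public code sample I can vouch for here, so the sketch below uses an invented helper purely to record the knobs the console exposed:

```python
# Hypothetical sketch only: `create_capacity_provider` is an invented
# stand-in for the console/API step in the demo, recording the knobs
# it exposed (instance type, CPU/memory, architecture).

def create_capacity_provider(name: str, instance_type: str, architecture: str) -> None:
    """Invented placeholder for the real provisioning call."""
    print(f"provisioning {name}: {instance_type} ({architecture})")

create_capacity_provider(
    name="warm-api-pool",
    instance_type="c7i.xlarge",   # hypothetical choice of instance
    architecture="x86_64",        # GPU instance support was said to be coming
)
```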

Benefits:

  • Always-warm environments
  • No cold starts
  • Control over instance types, CPU, memory
  • Multi-concurrency per vCPU (GPU support planned)

AWS still manages the instances - you can’t SSH or patch anything - but you get predictable performance and much better economics at scale.

Pricing example from the demo:

  • 100M requests / month
  • 20ms runtime
  • Default Lambda: ~$3,000/month
  • Lambda on EC2: ~$431/month

That’s a massive difference for high-throughput APIs or inference endpoints.

Image of capacity provider creation demo


6) S3 Vectors: Vector Storage at Object-Store Scale

The last demo started by explaining what vectors are and why they matter for modern AI workflows.

  • Vectors are numeric representations of data (like text, images, or embeddings) that let models compute similarity, find nearest neighbors, or perform semantic search.
  • Modern AI applications - RAG pipelines, recommendation systems, search engines - rely heavily on vectors.
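
Before the product pitch, it’s worth seeing how small the core math is: similarity search ultimately reduces to comparing vectors numerically. A toy example with made-up 3-dimensional embeddings (real ones have hundreds or thousands of dimensions):

```python
# Toy illustration of vector similarity: cosine similarity between
# made-up embeddings. Real embeddings are much higher-dimensional,
# but the arithmetic is identical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cat = np.array([0.9, 0.1, 0.3])
kitten = np.array([0.85, 0.15, 0.35])
invoice = np.array([0.1, 0.9, 0.2])

print(cosine_similarity(cat, kitten))   # ~1.0: semantically close
print(cosine_similarity(cat, invoice))  # ~0.27: unrelated
```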

The problem today: most vector databases are expensive, always-on, and operationally heavy.

AWS’s solution: S3 Vector Buckets.

Vector Buckets are a new type of S3 bucket optimized for storing embeddings. As the short sketch after this list shows, they let you:

  • Store embeddings directly in S3
  • Create vector indexes
  • Run approximate nearest-neighbor (ANN) search
  • Use them in RAG pipelines, Bedrock, and SageMaker
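
In code, that flow looks roughly like this, assuming the preview boto3 s3vectors client from AWS’s launch examples (preview APIs can change, and the bucket, index, and tiny embeddings here are made up):

```python
# Sketch assuming the preview boto3 "s3vectors" client; names are
# provisional, and the bucket, index, and embeddings are invented.
import boto3

s3v = boto3.client("s3vectors")

# Store an embedding (the index would be created with dimension=3 here)
s3v.put_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs",
    vectors=[
        {
            "key": "doc-1",
            "data": {"float32": [0.12, 0.98, 0.33]},
            "metadata": {"source": "kb/page1"},
        }
    ],
)

# Approximate nearest-neighbor query, e.g. for a RAG retrieval step
resp = s3v.query_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs",
    queryVector={"float32": [0.10, 0.95, 0.30]},
    topK=3,
    returnMetadata=True,
)
print(resp["vectors"])
```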

Why S3 Vector Buckets make sense:

  • Scalability: billions of vectors at object-store scale
  • Cost: much cheaper than always-on vector DBs
  • Durability: inherits S3 reliability
  • Integration: works natively with other AWS services

Trade-off: higher latency than specialized vector databases like Pinecone or OpenSearch.

Ideal use cases:

  • Knowledge bases
  • Large-scale RAG corpora
  • Offline or batch semantic search

Image of vector bucket creation demo


The Bigger Pattern I Took Away

Across all six demos, a clear pattern emerged.

AWS is collapsing entire categories of glue infrastructure.

What used to require:

  • Step Functions
  • DynamoDB state tables
  • Vector databases
  • Orchestration clusters
  • Custom internal catalogs

Now lives inside:

  • SageMaker Studio
  • Durable Lambda
  • S3 Vectors
  • Lambda on EC2
  • Serverless Airflow

It’s not flashy, but it quietly changes what “simple architecture” even means in 2025.


Final Note

The meetup ended with a swag giveaway.

Pure luck: I won - and so did the two people next to me.

So maybe that luck carries over to you reading this. I hope one of these features ends up being exactly what unlocks your next project.
