
AWS re:Invent 2025 Announcements

Analytics

AWS Clean Rooms: Privacy-Enhancing Dataset Generation for ML Training

AWS Clean Rooms introduces a capability that lets organizations train machine learning (ML) models on sensitive, collaborative datasets without compromising individual privacy. It works by generating synthetic datasets that replicate the statistical characteristics of the original data while adding configurable noise to reduce the risk of re-identification. This privacy-preserving approach enables secure data collaboration across organizations, supporting compliance with privacy regulations and fostering innovation in sensitive data environments.


Artificial Intelligence

Amazon Nova 2 Sonic: Advanced Speech-to-Speech Model for Conversational AI

Amazon launches Nova 2 Sonic, a next-generation speech-to-speech AI model designed to enhance natural voice interactions. The model supports multilingual conversations, dynamic speech control, cross-modal inputs (speech, text, images, and more), and improved telephony integration. It maintains conversational context across multiple tasks, enabling more fluid, human-like dialogue in applications such as virtual assistants, customer support, and real-time translation.

Amazon Nova 2 Lite: Fast, Cost-Effective Reasoning AI Model

Nova 2 Lite is a streamlined AI model optimized for everyday applications that require quick, efficient reasoning. It offers an extended context window of up to one million tokens and built-in tools that support complex reasoning, making it suitable for cost-sensitive deployments without sacrificing performance.
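
As a rough illustration of how such a model could be called, here is a minimal sketch using the Amazon Bedrock Converse API from Python. The model ID is a placeholder, so check the Bedrock console for the actual Nova 2 Lite identifier available in your Region.

```python
# Minimal sketch: calling a Nova model through the Amazon Bedrock Converse API.
# The model ID below is a placeholder -- substitute the Nova 2 Lite identifier
# listed for your account and Region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.nova-lite-v1:0",  # placeholder model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the trade-offs of serverless databases in three bullets."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```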

Amazon Nova Forge: Custom Frontier Model Development Program

Nova Forge empowers organizations to create bespoke frontier AI models by providing access to Nova’s training infrastructure. This program removes traditional barriers like high costs, extensive compute requirements, and lengthy development cycles, enabling companies to infuse domain-specific expertise into foundational models tailored to their unique needs.

Amazon Nova 2 Omni (Preview): Multimodal Reasoning and Image Generation

Nova 2 Omni is an all-in-one AI model preview supporting multiple input types—text, images, videos, and speech—and capable of generating both text and image outputs. This multimodal architecture facilitates sophisticated reasoning across diverse data forms, opening new possibilities for integrated AI applications.

Amazon Nova Act: Reliable AI Agents for UI Workflow Automation

Now generally available, Amazon Nova Act enables developers to build AI agents that automate complex browser-based tasks such as form filling, searching and extracting information, shopping and booking, and quality assurance testing. These agents achieve over 90% reliability, making them viable for enterprise-grade automation.
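
To give a flavor of what building on it looks like, here is a small sketch assuming the Nova Act Python SDK keeps the interface from its research preview (a NovaAct context manager with an act() method that takes a natural-language instruction). The site and instructions below are made up for illustration.

```python
# Sketch of a browser-automation agent, assuming the Nova Act Python SDK's
# preview interface (NovaAct used as a context manager, with an act() method).
# The target site and the instructions are illustrative only.
from nova_act import NovaAct

with NovaAct(starting_page="https://example.com/store") as agent:
    # Each act() call is one natural-language step the agent carries out in the browser.
    agent.act("search for a 27-inch monitor under $300")
    agent.act("open the first result and add it to the cart")
```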

Amazon Bedrock AgentCore: Enhanced AI Agent Deployment Controls

AgentCore enhances AI agent deployment with advanced policy controls, quality evaluations, improved memory management, and natural conversational capabilities. This facilitates scalable and trustworthy AI agent implementations across organizations, ensuring compliance and operational integrity.

Amazon S3 Vectors: Scalable, High-Performance Vector Storage

Amazon S3 Vectors reaches general availability, scaling vector storage and querying to unprecedented levels—up to 2 billion vectors per index with query latencies around 100 milliseconds. It supports expanded regional availability and reduces costs by up to 90% compared to specialized vector databases, making large-scale AI workloads more accessible and economical.
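
As a rough sketch of how an application might query such an index, the snippet below assumes the boto3 s3vectors client and the request shape from the preview documentation; the bucket, index, and field names are illustrative, and the exact parameters may differ.

```python
# Sketch of querying an S3 Vectors index with boto3, assuming the "s3vectors"
# client and request shape from the preview documentation. The bucket, index,
# and vector values are illustrative.
import boto3

s3v = boto3.client("s3vectors", region_name="us-east-1")

response = s3v.query_vectors(
    vectorBucketName="my-vector-bucket",            # illustrative bucket name
    indexName="product-embeddings",                 # illustrative index name
    queryVector={"float32": [0.12, -0.03, 0.88]},   # embedding from your own model
    topK=5,
    returnMetadata=True,
    returnDistance=True,
)

for match in response["vectors"]:
    print(match["key"], match.get("distance"), match.get("metadata"))
```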

Amazon Bedrock: Expanded Foundation Model Access

Amazon Bedrock now offers 18 fully managed open-weight foundation models from industry leaders including Google, NVIDIA, OpenAI, Mistral AI, Kimi AI, MiniMax AI, and Qwen. The lineup includes the latest Mistral Large 3 and Ministral 3 models in various sizes (3B, 8B, 14B parameters), providing developers with a rich selection of pre-trained models optimized for diverse AI tasks.
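
A quick way to see which of these models are available to your account is the Bedrock control-plane API; the short sketch below simply lists model IDs and providers in one Region.

```python
# Sketch: listing the foundation models available to your account in a Region
# via the Bedrock control-plane API.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(f'{model["providerName"]:<15} {model["modelId"]}')
```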

Amazon SageMaker AI with Serverless MLflow: Simplified AI Experimentation

SageMaker AI integrates serverless MLflow to streamline AI experimentation. This zero-infrastructure service deploys within minutes, auto-scales based on demand, and integrates seamlessly with SageMaker’s model customization and pipeline tools, accelerating development cycles and reducing operational overhead.
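
For a sense of how little setup this requires, the sketch below logs a run to a SageMaker-managed MLflow tracking server using the standard MLflow API; the tracking server ARN is a placeholder, and the sagemaker-mlflow auth plugin is assumed to be installed.

```python
# Sketch of logging a run to a SageMaker-managed MLflow tracking server. The
# tracking server ARN is a placeholder, and connecting to it requires the
# sagemaker-mlflow auth plugin; the logging calls are the standard MLflow API.
import mlflow

mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/my-server"
)
mlflow.set_experiment("model-customization")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 2e-5)
    mlflow.log_metric("eval_loss", 0.42)
```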

Amazon Bedrock Reinforcement Fine-Tuning: Smarter AI Models with Less Effort

Bedrock introduces reinforcement fine-tuning capabilities that improve model accuracy by 66% over base models using feedback-driven training. This approach eliminates the need for large labeled datasets or deep ML expertise, democratizing advanced model customization.

Checkpointless and Elastic Training on Amazon SageMaker HyperPod

SageMaker HyperPod enhances AI training with checkpointless recovery, allowing instant continuation after failures, and elastic scaling that adjusts resources dynamically. These improvements accelerate model development by reducing downtime and optimizing compute utilization.

Serverless Customization in Amazon SageMaker AI

Serverless customization further extends SageMaker, enabling rapid fine-tuning with automatic failure recovery and resource scaling that boost productivity and simplify AI model refinement.


Compute

AWS Graviton5: Most Powerful and Efficient CPU Yet

AWS introduces Graviton5, its fifth-generation CPU chip delivering superior price-performance across a broad spectrum of workloads on Amazon EC2. The chip combines efficiency with high computational power, catering to diverse applications from web hosting to complex data processing.

Trainium3 UltraServers: Advanced AI Training and Deployment

Trainium3 UltraServers, powered by AWS’s first 3nm AI chip, provide enhanced speed and cost-efficiency for AI training and inference. These servers enable organizations to tackle ambitious AI workloads more effectively, supporting growth in AI adoption across industries.

Amazon EC2 X8aedz Instances: High-Performance Memory-Optimized Compute

The new EC2 X8aedz instances feature 5th Gen AMD EPYC processors with up to 5 GHz speeds and 3 TiB of memory. Designed for memory-intensive tasks like electronic design automation and large databases, these instances deliver exceptional single-threaded performance and scalability.

AWS Lambda Managed Instances: Serverless Benefits with EC2 Flexibility

Lambda Managed Instances allow running Lambda functions on EC2 infrastructure, combining serverless simplicity with the flexibility to use specialized hardware and cost-optimized EC2 pricing. AWS manages the underlying infrastructure, simplifying deployment of workloads requiring unique compute resources.

AWS Lambda Durable Functions: Multi-Step AI Workflows

Durable Functions extend Lambda’s capabilities by enabling orchestration of multi-step applications that can run reliably over long periods (up to one year). This feature eliminates the need to pay for idle compute time during waits for external events or human input, optimizing cost and resource use.


Containers

Amazon EKS Enhancements: Workload Orchestration and Cloud Resource Management

Amazon EKS introduces new fully managed features that streamline Kubernetes workload orchestration and cloud resource management. These enhancements reduce infrastructure maintenance burdens while offering enterprise-level reliability, security, and operational efficiency.


Database

Database Savings Plans for AWS Databases

A new pricing model, Database Savings Plans, helps organizations optimize costs while maintaining flexibility across database services and deployment options, encouraging more cost-effective database management.

Amazon RDS for SQL Server and Oracle: Cost and Scalability Improvements

Amazon RDS introduces new capabilities including SQL Server Developer Edition support, optimized CPU performance with M7i/R7i instances, and expanded storage options up to 256 TiB. These features enhance cost efficiency and scalability for development, testing, and production environments.

Amazon OpenSearch Service: GPU-Accelerated Vector Database Performance

OpenSearch Service now supports GPU acceleration and auto-optimization for vector databases, enabling workloads to run up to 10 times faster at 25% of previous costs. This advancement balances search quality, speed, and resource usage for large-scale AI search applications.
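
The query side is unchanged by these server-side optimizations; as a reference point, here is a standard k-NN vector query against an OpenSearch index using opensearch-py, with the endpoint, index, and field names being illustrative.

```python
# Standard k-NN vector query using the opensearch-py client and query DSL.
# The domain endpoint, index name, and vector field are illustrative;
# authentication (SigV4 or basic auth) is omitted for brevity.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

query = {
    "size": 5,
    "query": {
        "knn": {
            "embedding": {                       # vector field defined in the index mapping
                "vector": [0.12, -0.03, 0.88],   # query embedding from your own model
                "k": 5,
            }
        }
    },
}

results = client.search(index="documents", body=query)
for hit in results["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```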


Global Infrastructure

AWS AI Factories: On-Premises AI Infrastructure Deployment

AWS AI Factories provide fully managed AI infrastructure that can be deployed within enterprise and government data centers. This solution integrates foundation models, specialized hardware, and AWS services, accelerating AI initiatives while ensuring data residency and compliance requirements are met.


Management & Governance

AWS DevOps Agent (Preview): Autonomous Incident Response

The DevOps Agent acts like an autonomous on-call engineer, analyzing data from CloudWatch, GitHub, ServiceNow, and more to identify root causes and coordinate incident response. This tool accelerates issue resolution and improves system reliability.

Enhanced AWS Support Plans: AI-Powered Expert Guidance

New AWS Support plans combine AI-driven insights with expert human guidance to proactively monitor and prevent cloud infrastructure issues. These plans offer faster response times and comprehensive coverage across performance, security, and cost.

Amazon CloudWatch: Unified Data Management and Analytics

CloudWatch introduces automatic normalization of data from multiple sources, native analytics integration, and support for standards like OCSF and Apache Iceberg. These capabilities reduce complexity, lower costs, and improve operational, security, and compliance analytics.


Migration & Modernization

AWS Transform Custom: AI-Powered Code Modernization

AWS Transform Custom leverages AI to automate code modernization at scale, learning organizational patterns to transform repositories and reduce execution time by up to 80%. This accelerates tech debt reduction and application modernization.

AWS Transform for Windows: Full-Stack Modernization

This service modernizes Windows applications up to five times faster by coordinating AI-powered transformations across code, UI frameworks, databases, and deployment configurations, enabling comprehensive modernization efforts.

AWS Transform for Mainframe: Reimagine and Automated Testing

New capabilities support mainframe modernization by transforming legacy applications into cloud-native architectures while automating complex testing. This reduces modernization timelines from years to months through intelligent analysis and automated test generation.


Networking & Content Delivery

Amazon Route 53 Global Resolver (Preview): Secure Anycast DNS Resolution

Global Resolver simplifies hybrid DNS management by resolving both public and private domains globally via secure anycast-based DNS. This unified service reduces operational complexity and maintains consistent security controls across hybrid environments.


Partner Network

AWS Partner Central: Console Integration

Partner Central is now accessible directly within the AWS Management Console, streamlining the partner journey from customer onboarding to managing solutions, opportunities, and marketplace listings with enterprise-grade security in a unified interface.


Security, Identity, & Compliance

AWS Security Agent (Preview): Proactive Application Security

The Security Agent scales AppSec expertise through AI-powered design reviews, code analysis, and contextual penetration testing tailored to unique application architectures, enhancing security from design through deployment.

Amazon GuardDuty: Extended Threat Detection

GuardDuty now offers extended threat detection across Amazon EC2 and ECS, providing unified visibility into virtual machines and containers. This helps identify complex multi-stage attacks affecting interconnected AWS workloads.

AWS Security Hub: Near Real-Time Analytics and Risk Prioritization

Security Hub's near real-time analytics and risk prioritization capabilities are now generally available, correlating security signals across AWS environments to enable faster risk response and a stronger security posture.

IAM Policy Autopilot: Open Source MCP Server for Policy Generation

IAM Policy Autopilot accelerates policy creation by analyzing application code to generate valid IAM policies. It provides AI coding assistants with current AWS service knowledge and permission recommendations, simplifying secure development.
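
For context on what a least-privilege result looks like, the snippet below shows the kind of scoped policy a code-analysis tool could derive for an application that reads one DynamoDB table and writes to one S3 prefix. This is illustrative only, not IAM Policy Autopilot's actual output format, and the ARNs are placeholders.

```python
# Illustrative only: the sort of scoped, least-privilege policy a code-analysis
# tool could derive for an app that reads one DynamoDB table and writes to one
# S3 prefix. Resource ARNs are placeholders.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOrdersTable",
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
        },
        {
            "Sid": "WriteReports",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::my-reports-bucket/reports/*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```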


Storage

Amazon FSx for NetApp ONTAP Integration with Amazon S3

FSx for NetApp ONTAP now integrates seamlessly with Amazon S3, enabling direct file system data access via S3. This facilitates unified workflows with AWS analytics, ML, and generative AI services without moving or duplicating data.

Replication Support and Intelligent-Tiering for Amazon S3 Tables

New features add automated cost optimization through intelligent tiering and simplified replication of S3 Tables across Regions and accounts, enhancing data availability and cost efficiency.

Amazon S3 Storage Lens: Enhanced Performance Metrics and Scalability

Storage Lens adds advanced performance metrics, supports analysis of billions of prefixes, and enables metric exports to S3 Tables. These enhancements help optimize application performance and simplify large-scale data analytics.


This summary covers the latest AWS launches and updates across analytics, AI, compute, containers, databases, global infrastructure, management, migration and modernization, networking, partner programs, security, and storage: innovations designed to accelerate cloud adoption, optimize cost and performance, and strengthen security and manageability.
