Kazuya

AWS re:Invent 2025 - What's New with AWS Cost Management (COP203)

🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.

Note: A comprehensive list of re:Invent 2025 transcribed articles is available in this Spreadsheet!

Overview

📖 AWS re:Invent 2025 - What's New with AWS Cost Management (COP203)

In this video, AWS announces major enhancements to cloud financial management services. Key launches include AWS Billing View for customized cost data sharing across accounts and payers, Billing and Cost Management Dashboard with up to 20 customizable widgets, and AWS Billing Transfer enabling a "payer of payers" model. Cost Explorer now offers 18-month forecasting with AI-powered explanations using 36 months of historical data. AWS Cost Anomaly Detection introduces intelligent AWS managed monitors that scale automatically. FOCUS 1.2 support enables multi-cloud cost normalization, while split cost allocation now includes Kubernetes labels. The Cost Optimization Hub debuts an Efficiency Score metric, Reserved Instance and Savings Plan sharing preferences allow prioritized or restricted group sharing, and Database Savings Plans finally launch covering RDS, Aurora, ElastiCache, Neptune, DocumentDB, DynamoDB, and Timestream—a feature requested since 2019.


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Introduction: Two Years of Cloud Financial Management Evolution

Good morning and welcome to our breakout session, which might be the most announcement-packed breakout session ever. Before we get started, let's take a quick look back at the past two years of our cloud financial management journey.

Two years ago, we made our cloud financial management services more accessible and easy to use. We integrated the Billing and Cost Management console. We launched AWS Data Exports, which is a flexible way for you to download reports from AWS such as Cost and Usage Report. And we introduced a very important initiative called Split Cost Allocation Data, which helps you calculate and distribute costs associated with shared resources.

Last year, we tackled standardization and accuracy. We launched Data Exports for FOCUS 1.0 for those of you who operate in multi-cloud or multi-source environments. We also launched the Authenticated Pricing Calculator, because we know when cost estimates are wrong, they're not just useless, they can be dangerous.

So what about this year? Well, this year, we're not just improving what exists. We're also rethinking how cloud financial management services can help you achieve your use cases, from how you access data to how you forecast to how you optimize. The theme is simple. We want to meet you where you are.

Thumbnail 110

Now, you may be wondering, who are we? Why are we bold enough to tell you the story about how AWS is renovating the cloud financial management solutions? Let's do a quick introduction. My name is Bowen Wang. I'm part of the AWS Billing and Cost Management service team. As the Principal Product Marketing Manager, I get to witness the product innovation, and the best part is to announce them. Today I'm joined by my long-term co-presenter, Matt Berk, and two legends in the industry, Corey Quinn and Matt Cowsert.

Hey, everyone, I'm Matt Berk. I'm a Principal Technical Account Manager and I lead our technical field community for Cloud Financial Management.

I'm Corey Quinn. I'm the Chief Cloud Economist at Duckbill, where I help folks at very large enterprises manage their cloud spend from a variety of different perspectives. I've also been working a very long time with AWS billing constructs, which is why I'm completely dead inside.

This is going to be tough following Corey most of the time. Hey everybody, my name is Matt Cowsert. I'm the Principal Product Manager for FOCUS. I work at the FinOps Foundation. I also happen to be an ex-AWS employee. Very cool.

Thumbnail 170

Understanding the Audience: From FinOps Practitioners to End Customers

So let's see who you are. If you are in the audience today, you may be a FinOps manager or FinOps practitioner. You try to standardize the FinOps practices you set centrally while decentralizing the cost management responsibilities across functions, while at the same time trying to win trust from leaders from business, finance, and engineering.

If you're an application leader or a business unit leader, then you're looking for native cost reporting that meets your unique set of requirements, or you're looking to optimize your set of AWS accounts. If you're a reseller, then you're looking to do all of this at scale, and you're looking to do this in the most simplified centralized way possible to allow you to better serve your end customers.

And if you're an end customer, you just want the damn thing to work. You have a job to do. You're trying to get something out the door and having transparency into what the thing is actually costing, and not hear the Charlie Brown teacher yelling at you from somewhere over the rainbow about costs and allocations. That's not what you're there to do. You're there to get something else done, presumably.

Thumbnail 240

So, what do all these personas have in common? Well, you're operating in a mature organization and a complex AWS environment. Think multiple payers, multiple business units, non-integrated entities, and complex billing. When we look to build our roadmap, we work backwards from customer feedback, whether that's the feedback you share with your account teams, whether that's at industry events like FinOps X with the FinOps Foundation, or even here at re:Invent.

Thumbnail 270

A lot of the launches and features that we are going to talk about today are aiming to address those challenges or gaps when operating a mid to large enterprise on AWS. So in the next hour or so, we're going to introduce a few solutions that can help you tackle the three main challenges. First, how you can break down the organization boundaries to meet your specific reporting and billing requirements. Next, how you can improve cost reporting, governance, allocation, and standardization with AWS managed services and intelligence. And finally, we will let you know how you can optimize your ROI with AWS with the latest technology and purchase options.

So now I'm going to turn to Matt to help us get started with the first section. Thanks, Corey and Matt. See you later.

Thumbnail 320

Thumbnail 330

Breaking Down Data Access Barriers with AWS Billing View

We're going to start off by talking about breaking down organizational boundaries and rethinking how we think about that payer account structure. We're going to tackle this from three different features and three different challenges, the first being the data access challenge. Next, we're going to move on to cost reporting, and then finally we're going to talk about the billing complexity that comes with operating multiple AWS payer accounts at scale.

Thumbnail 350

So let's talk about data access. You either have access to the payer account of your organization to look at that consolidated view across all of your linked accounts in Cost Explorer or AWS Budgets, where you're able to see and filter out your organization's cost and usage data, or you don't. And if you don't, then you're generally logging into the linked accounts across maybe tens or hundreds to get the information natively that you need to look at the latest cost and usage data.

Thumbnail 370

You might be thinking that the FinOps admins are lucky because they might have access to the payer account to do their jobs, but even then they might be running into some of the same issues, which is that they are logging into multiple payer accounts, managing and looking at spend across multiple payers. And so the question becomes, how can we make this simpler? How can we somehow give you access to the data you need where you need it while also gaining access to a payer account, which is generally a good security best practice?

Thumbnail 420

Yeah, so Matt, what I'm hearing is that customers like you may need to access cost and usage data at exactly the level you want, whether it's across accounts within an organization, which is between a payer and a member, or across payer accounts. Now Billing View will be able to help. We launched AWS Billing View at last re:Invent, and we've since added lots of features to this service.

Thumbnail 430

So with AWS Billing View, you will be able to scope the exact level of cost and usage data for your stakeholders and securely share it with them. So let's see how it works. As a payer account owner, you access the Billing and Cost Management console. Under the preference settings, you'll be able to create a filtered view using dimensions such as cost allocation tags or member accounts, and save that view for your business owners or application owners. Or you can choose to create an unfiltered view, which essentially includes all of the cost and usage data for your organization.

Thumbnail 480

Once the billing view is created, you can decide to share it with an account. That account could be within or outside of your organization, and it will be able to access that view within AWS Cost Explorer, Budgets, and the Billing and Cost Management dashboard, which we'll share more details on. And very recently, we've made the multi-source billing view available, and that is a game changer.

Because remember that account that can receive the shared billing view from any organization? That same account will be able to combine the billing views from across up to twenty different payers. So that is essentially a unified billing view of your entire AWS estate. Behind the scenes, the magic happens through AWS Resource Access Manager, because each billing view has a unique Amazon Resource Name (ARN). But in the day-to-day experience, you do not need to worry about Resource Access Manager. You just come to the console, create your filtered view, share it with any account, and that account just needs to accept and start using it.
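For readers who prefer to script this, the same flow can be driven from boto3 with the Billing View API and Resource Access Manager. A minimal sketch: the account IDs and names below are placeholder assumptions, and the actual API calls are left as comments since they require AWS credentials.

```python
# Sketch: create a billing view filtered to specific member accounts,
# then share it via AWS Resource Access Manager (RAM).
# All account IDs and names here are illustrative placeholders.

def build_billing_view_request(name, member_accounts, payer_account_id):
    """Build a CreateBillingView request scoped to a set of member accounts."""
    return {
        "name": name,
        "description": f"Filtered view for accounts {', '.join(member_accounts)}",
        # The payer's primary (unfiltered) view is the source being filtered.
        "sourceViews": [f"arn:aws:billing::{payer_account_id}:billingview/primary"],
        "dataFilterExpression": {
            "dimensions": {"key": "LINKED_ACCOUNT", "values": member_accounts}
        },
    }

request = build_billing_view_request(
    "bu-analytics", ["111111111111", "222222222222"], "999999999999"
)

# With credentials in place, this would be (not executed here):
# import boto3
# view = boto3.client("billing").create_billing_view(**request)
# boto3.client("ram").create_resource_share(
#     name="share-bu-analytics-view",
#     resourceArns=[view["arn"]],
#     principals=["333333333333"],   # recipient account, inside or outside the org
#     allowExternalPrincipals=True,  # needed for accounts outside your organization
# )

print(request["dataFilterExpression"]["dimensions"]["key"])  # LINKED_ACCOUNT
```

The recipient account then accepts the RAM invitation once, after which the view appears in its Cost Explorer drop-down.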

Thumbnail 540

Simplifying Cost Reporting with Billing and Cost Management Dashboard

So now we move on to our next challenge, which is cost reporting. And generally you probably thought about this when I was talking about the data access issue because that's how most of you are approaching this today. You're trying to share your cost and usage with your decision makers, so that generally results in some kind of dashboard, whether it's using our Cloud Intelligence Dashboard solution, which we have on the screen, or you can use a CloudFormation or Terraform template to create a billing pipeline with all your information, push that to Athena and QuickSight, and we have a bunch of different templates for you to leverage with all the data that you might need.

Thumbnail 570

So you could be leveraging something like that, or a third-party BI tool, or even a third-party FinOps tool to help get that information into the hands of the right folks. But you've asked us to make this easier: something in between deploying an entire dashboarding solution (and potentially managing subscriptions) and something like the saved Cost Explorer view on the left side, the easiest option, where if you're just going into Cost Explorer and putting in filters and your group-bys, you can save that view so that the next time you log in, you don't have to do it all over again.

So what would be in between those? You asked for something that was flexible, customizable, and easy to share, not just within your organization, but with any AWS account that might need access to your cost and usage data.

Thumbnail 610

Thumbnail 640

So Matt, that's exactly the reason why we built the Billing and Cost Management Dashboard right in the Billing and Cost Management Console. Think about this dashboard as a landing page, a blank canvas where you can drag in up to 20 different widgets of KPIs your organization cares to track and monitor. Behind the scenes, we use Amazon Resource Access Manager, and once you create a dashboard, you'll be able to share that as a view-only version, maybe with your VPs and CEOs if you do not want them to explore too much data, or you can share an editable version with a business unit leader so you can collaborate on dashboards.

When you create a dashboard, you have two paths. You can either use a predefined widget, which are the pre-populated KPIs we believe a lot of customers care to use, such as monthly cost per service per member account, EC2 hourly cost, marketplace costs, and so on. Or you can choose to use custom widgets, which you can customize from data sets such as cost, usage, Reserved Instance or Savings Plans utilization and coverage rate. The great thing is you can change the graphs or even create one template you believe every stakeholder will need and be able to drill down to specific dimensions for specific stakeholders.

Thumbnail 700

Thumbnail 710

Thumbnail 720

Thumbnail 730

Now we're going to launch into a demo of these first two features, and we're going to start with the Billing View. From the Billing and Cost Management Console, you have the drop-down to look at your views, and we're going to create a new Billing View now. You can choose to filter your cost management data either by account or by cost allocation tags that you have already configured, or you can do no filter at all, which means that you're just sharing that raw organizational consolidated view with any account that you choose. If you're picking to just share a grouping of accounts, let's say for a business unit or for a project, you can select those accounts that you want and then title the view, so I'm just giving it a test name. Then give it a description of what that view looks like so that when they're looking in the Billing and Cost Management Console from that drop-down, they can know what they're looking for.

Thumbnail 740

Thumbnail 760

Once that's created, then you're going to want to share that view. You can either share it within your organization and we'll pre-populate all the linked accounts, or you can share it with any AWS account as long as you have that AWS account ID. We're doing this through Resource Access Manager. We're using the Resource Access Manager role, so if it's an account outside your organization, they'll just have to accept that view on their end. Once they accept it, it'll be in that drop-down.

Thumbnail 770

Thumbnail 780

Next we're going to talk about the Dashboard. It's very similar to Billing View where we'll create a dashboard and we'll start by naming the dashboard, giving it a description, and you can put in up to 20 different widgets. Think of them as 20 different Cost Explorer views. You can set your date range from either an absolute date range or relative date range like the last three months, and then you can start to add your widgets.

Thumbnail 790

Thumbnail 800

Thumbnail 810

We can have some predefined widgets, so these are the most common Cost Explorer views like monthly cost by service. You can add your group-bys, your filters there to customize it. You can also add things like EC2 running hours cost. You can drag and drop that into the page, and you can also then resize and move around the widgets to wherever you want them on there. Then you can add things like custom widgets.

Thumbnail 820

This is now getting into your straight-up cost, your usage, your Savings Plans, Reserved Instance coverage or utilization. You can pick from each of these and then drill down with those Cost Explorer filters and group-bys. Here I've got Savings Plans utilization at 100%, which is great, and I'm also able to edit the titles of the individual widget and the description of each of them as well.

Thumbnail 840

Thumbnail 850

Thumbnail 860

Once you're done building out your dashboard and you can make a bunch of these for specific use cases, then it's time to share it. You can share it with any account within your organization. Again, we'll pre-populate that, or just with any account that you want, and you can choose between an edit or a view-only role so that either the person can just see it or collaborate with you on it. Once you add the recipient, you send it over to them and they'll be able to see that dashboard when they go to Dashboards in their console.

Thumbnail 880

Streamlining Multi-Payer Operations with AWS Billing Transfer

That's it for the demo. Now we're going to move on to our third and final challenge of this section, which is billing complexity. We started to address this challenge last year with the launch of something called Invoice Configuration. If you needed to send an invoice that was scoped down to a specific subset of accounts, you can create invoice units and then use that for chargeback or showback. But that wasn't all the way that we wanted to go. If you're managing this across multiple payers, then you're configuring this across multiple places, and your feedback said to us that you really would like to decouple billing.

So we wanted to make sure that as you scale, whether that's through mergers and acquisitions, spinning up new landing zones for specific purposes, or, if you're a channel partner, growing your business and signing up new customers, we aren't creating billing complexity for you. So how can we create some sort of payer of payers that allows us to manage all our billing and cost management preferences in one place?

Thumbnail 940

The challenges related to billing can be difficult to manage if you have a lot of payers. And that is why we're very excited to announce AWS Billing Transfer. So AWS Billing Transfer essentially is a new way for you to streamline billing operations, payments, and cost management. Think about this as a payer of payers. Now you can designate one payer account to centralize the billing, payment, invoices, and settings for all of the payers. It's just one place for everything.

Thumbnail 990

Billing Transfer is very powerful, but at the same time you will be able to give autonomy to individual payer organizations. They will be able to keep their root access and manage security access, and so on. Also, if you are a reseller or a larger enterprise customer, you may want to protect your unique pricing information. You can do that too, because AWS Billing Transfer is integrated with Billing Conductor, so you'll be able to configure your own pricing information and cost visibility for your customers.

Thumbnail 1000

Thumbnail 1020

Thumbnail 1040

So let's take a look at Billing Transfer in action. If you access the Billing and Cost Management console, under the preference settings there is a Billing Transfer landing page. You have two perspectives: inbound billing is where you assume the billing responsibilities from other organizations, while outbound billing is where you delegate them. So let's say you want to establish a relationship, meaning you want to assume the billing responsibilities from another organization. You give it a name; let's be as descriptive as possible, maybe the billing start period or the organization name. You also need the email address and the payer account ID from the other organization to make sure you address the right payer account. That billing relationship can start at the very next billing cycle or in two months.

Thumbnail 1050

Thumbnail 1060

And in terms of pricing configuration, you can either keep the straightforward basic pricing, or you can use Billing Conductor to customize the pricing. AWS will also give you a little warning. Let's say, for this demo's purposes, we pick basic pricing. You'll get a warning reminding that organization to download their most recent Cost and Usage Report, because once this relationship is set up, they will no longer receive updated Cost and Usage Reports. This is to protect your pricing information in case you want to configure a pro forma rate for them.

Thumbnail 1080

Thumbnail 1090

Thumbnail 1100

So now you send an invitation and they accept it. Once the relationship is activated, AWS will automatically create two billing views for you. One view is called My View, which is the true cost you owe to AWS. The other is a Chargeback and Showback View, if you have used Billing Conductor to configure any unique pricing items. So let's say you are the payer of payers. You go to Cost Explorer, turn on billing views, and toggle between My View, which is your true cost, and My Chargeback and Showback View.

Thumbnail 1110

Thumbnail 1120

So let's say you want to see the My View, which is the true cost you owe to AWS. In this case, my average monthly cost is about seventeen dollars. And if you switch to the Chargeback and Showback View, you can see that, from your end customers' visibility, they owe you thirty-seven dollars. The delta in between is essentially the markup you have set up using Billing Conductor. So the dual view in Cost Explorer really gives you the transparency you need and also meets your cost allocation needs.
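The arithmetic behind the two views is worth making explicit. A tiny sketch using the demo's numbers:

```python
# The delta between the two Cost Explorer views is the Billing Conductor markup.
true_cost = 17.0   # "My View": average monthly cost actually owed to AWS
chargeback = 37.0  # "Chargeback and Showback View": what end customers owe you

markup = chargeback - true_cost
markup_pct = markup / true_cost * 100

print(f"Markup: ${markup:.2f} ({markup_pct:.1f}% over true cost)")
```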

Thumbnail 1160

Enhanced Forecasting: 18-Month Projections with AI-Powered Explanations

Okay, I'm going to step off the stage and welcome Corey for the next section. Thank you. Let's go in a little bit into how we can enhance some of the visibility into these things through automation and intelligence, artificial or real intelligence as the case may be. There are a few aspects to this. You've got the whole forecasting bit, the monitoring bits, the data normalization pieces, and of course allocation. But let's break down into those things individually one by one, because trying to take it all at once is like the last ten minutes of yesterday's keynote.

Thumbnail 1180

So we care a lot about forecasting and planning, and what we have today is of course Cost Explorer and the Pricing Calculator. Specifically in Cost Explorer, customers can project twelve months of data based upon their past six months of historical data. And now we have some things that folks have been asking for, like extended planning horizons that align with fiscal cycles starting in Q3 or Q4, because some of you people absolutely suck at reading calendars.

We need deeper usage pattern recognition for seasonal trends, holiday peaks, and those fiscal year-end variations. You really shouldn't be blindsided by the fact that this week is re:Invent, for example, though somehow I always am. Anomaly detection helps identify and highlight spending outliers. That giant spike there has a fancy root cause that I'm just going to shorthand down to Greg, and don't worry, we fired him, so don't take that spike into account when we're doing our future projections. He's not welcomed back.

Thumbnail 1260

Having transparency into how you arrive at the forecast has been a big ask. We need to understand the why behind predictions, because "do not question me" from a computer in this age of AI is more than a little ominous. We want to have it explainable. When it comes to the trend approach, there are a couple of different ways you can look at it. When it comes to forecasting itself, you can go either with trend or with driver-based.

Trend is always forward-looking extrapolation of historical time series data. It's good for forecasting organic growth of existing usage and cost. Longer forecasts, the longer you stretch it out, they have less certainty and a lot more fuzziness. It doesn't account for future changes in your business environment. And of course it's super crappy because you have this spiky history and then suddenly it's a straight line. Nothing is more suspicious than that except possibly round numbers. So did the business suddenly bring order to chaos, or are we just hand-waving over a whole bunch of uncertainty and claiming otherwise? We all know the answer, and it's the bad one.

With driver-based, it accounts for business and other demand drivers. I mean, we're in December now and my Christmas tree business is booming, and we're picturing an even bigger January. Maybe there are some challenges there. That's ideal for planning costs in highly dynamic and flexible environments. The world is evolving. Cloud is no longer just a crappy data center analog. It's becoming more dynamic, and that decreases your ability to project it accurately versus traditional methods. I mean, you don't wind up showing up one day and finding five more servers in your server room unless someone's doing something weird.

It's also useful for improving the accuracy of the long-term forecast because you can start to bake some of that seasonality and less of a flat line into it. And the downside of course is that it does require additional time and effort upfront to implement the process. And that is sad because you have to invest time into it to get value out of it in the form of better outputs. Such is life, but it does seem like that's something a robot might be able to help with. Bowen, do we have one of those?
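The contrast between trend and driver-based forecasting can be sketched in a few lines. This is a toy model with made-up numbers, not how Cost Explorer actually forecasts:

```python
# Toy contrast between trend and driver-based forecasting of monthly spend.
monthly_spend = [100, 104, 108, 112, 116, 120]  # six months of history

def trend_forecast(history, months_ahead):
    """Naive trend: extend the average month-over-month change as a straight line."""
    avg_delta = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + avg_delta * (i + 1) for i in range(months_ahead)]

def driver_forecast(history, months_ahead, seasonal_factors):
    """Driver-based: scale the trend by known business drivers (e.g. holiday peaks)."""
    base = trend_forecast(history, months_ahead)
    return [round(b * f, 1) for b, f in zip(base, seasonal_factors)]

print(trend_forecast(monthly_spend, 3))                     # [124.0, 128.0, 132.0]
print(driver_forecast(monthly_spend, 3, [1.0, 1.4, 0.8]))   # [124.0, 179.2, 105.6]
```

The trend output is the suspiciously straight line described above; the driver-based output bakes in the seasonality, at the cost of having to supply the factors up front.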

Thumbnail 1400

Yes. So Corey, just like you mentioned, traditional cost forecasting that accounts for the right factors and your seasonal business patterns can be inherently challenging. Customers like you deserve confidence and transparency when it comes to cloud cost forecasting. That is why we are excited to announce three major enhancements to the forecasting capabilities within Cost Explorer. First, we are extending the forecast time horizon from 12 months to 18 months, to align with your normal annual planning cycle.

And second, we are extending the historical cost analysis window from 6 months to 36 months. This is not just additional data: you are giving Amazon's machine learning technology more business context to understand who you are and what your unique spending pattern is. The third enhancement is an AI-powered explanation. No more black box: you'll be able to understand how we came to the conclusion of this projected number.

Thumbnail 1450

Thumbnail 1470

So now imagine after re:Invent, you'll be able to use these three features and walk into your Q4 planning meeting and very confidently understand your total 2026 IT spend. So once again, you do not have to learn a new tool. It's right in AWS Cost Explorer. If you access Cost Explorer, you will see a little flash bar introducing this new feature. You'll be able to use the very familiar date range to choose the next 18 months into the future.

And once you do that, there's a little button to generate a forecast explanation. If you use it, you will see a little write-up. AWS will let you know how we arrived at this projected cost number for you, what historical date range we used as the baseline, and what your spend pattern is. Is it consistent, trending up, or trending down? It also shows the probability you can be confident about this number, that's the 80%, and the top three cost drivers we have identified.

So this AI-enabled explanation is currently in public preview. So if you use that, please give us feedback. There's a thumbs up or down button here and you can tell us specifically how we can make this more useful for you.
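For those who script against Cost Explorer, the extended horizon maps onto the existing `GetCostForecast` API. A sketch that builds an 18-month request; the boto3 call itself is left as a comment since it needs credentials, and the start date is just an example:

```python
from datetime import date

def forecast_request(start: date, months: int = 18, confidence: int = 80) -> dict:
    """Build a Cost Explorer GetCostForecast request spanning `months` months."""
    end_year = start.year + (start.month - 1 + months) // 12
    end_month = (start.month - 1 + months) % 12 + 1
    return {
        "TimePeriod": {
            "Start": start.isoformat(),
            "End": date(end_year, end_month, 1).isoformat(),  # exclusive end
        },
        "Metric": "UNBLENDED_COST",
        "Granularity": "MONTHLY",
        "PredictionIntervalLevel": confidence,  # the 80% confidence band
    }

params = forecast_request(date(2026, 1, 1))
# With credentials: boto3.client("ce").get_cost_forecast(**params)
print(params["TimePeriod"])  # {'Start': '2026-01-01', 'End': '2027-07-01'}
```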

We really want to make sure you understand the why behind the forecasting.

Thumbnail 1550

Scalable Cost Monitoring with Intelligent AWS Managed Monitors

So the current state of cost monitoring is grim. AWS has a few services that help customers set up the right guardrails, such as AWS Budgets, AWS Cost Anomaly Detection, and the close your account button. But at the payer level, one can generate up to 501 monitors. But you really only need one that says holy crap on it in all caps. Now, fortunately that 501st is a global service monitor to do exactly that. It evaluates all the AWS services that are used across your individual AWS accounts for anomalies. So when you add new AWS services, it automatically begins to evaluate what those are, so you're not constantly chasing it. You don't have to manually configure it every time someone uses a new service, which often they're surprised to discover that they're doing.

Thumbnail 1610

And then there are 500 additional monitors you can configure for a lot more granularity around accounts, cost categories, and tags. But now that giant global holy crap button, that alert, is going to get even better. Could you please explain what it does in less apocalyptic terms? Exactly, Corey. We definitely want cost governance to be more proactive and scalable, even though the 501 cost monitors you mentioned are very forgiving. We actually do not want our customers to spend time creating and maintaining hundreds of cost monitors, and that's why we're very happy to announce this intelligent cost monitor within AWS Cost Anomaly Detection.

So how does that work? Now you can create one AWS managed cost monitor to track costs and usage under one specified dimension, and that monitor will grow with your organization. So let's say you use an AWS managed cost monitor to track all the costs and usage for your member accounts. That means all the costs and usage incurred by the member accounts within your organization, whether they exist today or are newly added, will be automatically tracked by this cost monitor.

Similarly, you can create one cost monitor to track all the costs and usage associated with one cost allocation tag or one cost category name. Let's say you use cost categories, which is our rule builder to group your accounts and the resources, and you have a cost category named department. So anytime you add new resources or accounts in this department, whether it's marketing, sales, or engineering, they will be automatically tracked. So just one initial setup, this intelligent cost monitor will be able to scale as your organization grows.
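As a rough sketch, here is how the two monitor shapes look in today's Cost Explorer `CreateAnomalyMonitor` API. The request shape for the new AWS managed monitors may differ, and the names and tag values below are illustrative; the actual call is left as a comment since it needs credentials.

```python
# Sketch of Cost Anomaly Detection monitor definitions (illustrative names).
# The new AWS managed monitors extend the same idea: one definition that
# automatically covers accounts, tags, or category values added later.

# The "501st" global monitor: evaluates every AWS service in use per account.
global_monitor = {
    "MonitorName": "all-services",
    "MonitorType": "DIMENSIONAL",
    "MonitorDimension": "SERVICE",
}

# A custom monitor scoped by a cost allocation tag (a Cost Explorer expression).
tag_monitor = {
    "MonitorName": "department-spend",
    "MonitorType": "CUSTOM",
    "MonitorSpecification": {
        "Tags": {"Key": "user:Department", "Values": ["marketing", "sales"]}
    },
}

# With credentials:
# boto3.client("ce").create_anomaly_monitor(AnomalyMonitor=global_monitor)

print(global_monitor["MonitorDimension"])  # SERVICE
```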

Thumbnail 1690

Thumbnail 1710

So every talk in 2025 must be smothered with AI like their Waffle House hash browns, and AI for FinOps sounds like something that a LinkedIn influencer is going to post between their various hustle quotes. But this is actually tentatively useful, and Bowen, thump thump goes the bus as it drives over her that I just threw her under, is going to explain why it's not just vibes. Bowen. Yes, AI is definitely very trendy. But in reality, AI is also transforming how we do FinOps right here at AWS.

AI-Driven FinOps: Amazon Q Developer and MCP Server Integration

We spent a lot of effort in the past year trying to make sure AI solutions are actually making a difference to your day-to-day FinOps practices. We have added more use cases to our AI solutions, made them more intelligent, and given you more ways to interact with our services. So right now we can perform cost analysis, do cost forecasting, and investigate cost anomalies within very granular cost and usage data. And with the launch of the Billing and Cost Management MCP server last August, for example, you can now integrate cost analysis right into your development environment. Or you can even build your own AI FinOps agent, if you want, by connecting with our services.

Thumbnail 1770

So to large enterprises, what does that mean? You will now be able to do lots of custom calculations within minutes, involve more stakeholders, and give them the ability to self-serve. I'll give you two quick examples, Corey, to see how that works. We just launched enhanced cost management capabilities right in Amazon Q Developer. So let's say many of you are FinOps admins or FinOps analysts, in charge of the spend within your organization. You do not want to use all the different tools. So you can ask Amazon Q; maybe you're interested in EC2 spend in the past three months. You can just ask Amazon Q: can you tell me my EC2 cost, maybe virtual CPU hour cost, in the past three months, and then give me an analysis of my trends and maybe some suggestions on what I should do next to make it more cost effective?

Amazon Q Developer will start to retrieve information, including granular cost and usage data from Cost Explorer, and maybe also Savings Plans coverage and utilization rates, because that information can also help you cost-optimize. Once it has all the granular information, the next thing it will do is perform some very custom calculations, maybe dividing my EC2 cost by the number of hours, or working out the mix of EC2 instance types and giving me a quick breakdown. Previously, I had to download those data sets from multiple sources and do lots of manipulation in my Excel sheets. But now, within two minutes, I'm able to get a breakdown of my EC2 cost across the various EC2 instance types, understand whether I'm using more cost-effective instances or less so, and see whether I have opportunities to purchase more Savings Plans to cover more eligible usage.
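The custom calculation described here is simple enough to sketch by hand. The snippet below mimics it with made-up usage rows shaped like a Cost Explorer grouping by instance type; the costs, hours, and vCPU counts are illustrative assumptions, not real pricing data.

```python
# A minimal sketch of the per-vCPU-hour breakdown Amazon Q performs for you.
# (instance_type, monthly_cost_usd, instance_hours) — hypothetical rows.
usage = [
    ("m5.xlarge", 1382.40, 7200),
    ("c6g.2xlarge", 979.78, 3600),
    ("t3.medium", 89.86, 2160),
]

VCPUS = {"m5.xlarge": 4, "c6g.2xlarge": 8, "t3.medium": 2}  # vCPUs per instance

def cost_per_vcpu_hour(rows):
    """Return {instance_type: cost per vCPU-hour} for each usage row."""
    out = {}
    for itype, cost, hours in rows:
        vcpu_hours = hours * VCPUS[itype]
        out[itype] = round(cost / vcpu_hours, 4)
    return out

breakdown = cost_per_vcpu_hour(usage)
for itype, rate in sorted(breakdown.items(), key=lambda kv: kv[1]):
    print(f"{itype}: ${rate}/vCPU-hour")
```

Sorting by the resulting rate is what surfaces the "am I on the cost-effective instances?" question the speaker describes.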

Thumbnail 1870

Thumbnail 1880

And again, you can use the thumbs up or thumbs down for feedback. So the next example is the Billing and Cost Management MCP server we launched last August. A lot of the developers have already let us know that they have attached it to their development environment. So let's say you are a software engineer in Amazon Q Developer environment. You are trying to build up a video processing pipeline, but you do not know which architecture design option is more cost-effective. So you'll write up this rough design document in Amazon Q Developer and ask this AI technology to maybe evaluate the different cost implications of different options.

Thumbnail 1910

Thumbnail 1920

Thumbnail 1930

Thumbnail 1940

Thumbnail 1950

Thumbnail 1960

So can you please review and update my design documents with different cost estimates so I'll be able to make some trade-offs? We can see that because I have mentioned different services in my design documents such as EC2, Elemental MediaConvert, S3, ECS on Fargate, the AI technology will start to retrieve pricing information from the different services and the resources I intend to use in my design documents. Once they have the information, they will update my technical design document with those cost estimations and assumptions they have made in the process. So again, with that information, I do not have to wait and deploy this architecture and have to cost optimize after the fact. I'll be able to make trade-offs between cost versus availability and maintainability and be able to do my cost optimization right before the deployment.

So I'm going to step off. Thank you. I'm going to invite one of the Matts back, preferably Cowsert, but we're not picky here. Every cloud provider speaks a different dialect of cost. AWS calls it X, GCP calls it Y, Azure calls it Steve for reasons that are not at all clear, and Microsoft refuses to expound upon. Comparing cloud costs across these different providers is like comparing a giraffe to a teddy bear. Technically they're both assets, but that's about as closely aligned as you can get, unless you want a very upset zookeeper. You can't optimize what you can't compare, and you can't compare that which is not normalized. Your CFO wants one number. You have 47 numbers that might mean the number that they're after, but probably doesn't, and all the screaming doesn't help.

Normalizing Multi-Cloud Costs with FOCUS 1.2 Support

Finance teams should not need a Rosetta Stone to wind up equating costs between different providers and different platforms, because without normalization, you're doing cost analysis with a blindfold and a dartboard in a bird sanctuary. We force everything into the same shape, like teddy bears, but less cute and slightly more compliant. And this is where standards come in. Wouldn't it be nice to have such a thing? That's why we're going to talk about FOCUS. It turns out that the FinOps Foundation had thoughts about this chaos, as embodied by my friend Matt.

Thumbnail 2060

Thumbnail 2080

Yeah, so today we're pleased to announce that Data Exports for the FinOps Open Cost and Usage Specification, or as we colloquially say, FOCUS, now supports FOCUS version 1.2. So it's now easier than ever to normalize your cost and usage data across the providers that you're working with today. When you think about FOCUS, yes, it is a format, but it's also a language that you can use whether you're generating these reports yourself or you're pulling them down from your providers. What this allows you to do is have consistency across the reports that you're generating, and the standardization of cost columns like billed cost and effective cost, along with service and account categorizations, makes it easier to understand how your data is represented across the providers that you're using.

This is crucial. This also means that when you add a new provider, which you inevitably will, you don't have to learn a brand new language. It just extends into what your team already knows. It also makes it easier for you to be able to do invoice reconciliation across your providers, as we're using the same language across all of them.

Thumbnail 2130

So I just want to do a quick rundown on the actual standardized cost columns that we have in place today. For billed cost, that's what you're going to expect to see on your actual invoices. Effective cost, or if you're familiar with the view in Cost Explorer, your net amortized cost, is going to be inclusive of your discounts, including your Reserved Instances and Savings Plans. List cost is exactly what it sounds like. It's the published rates. This allows you to effectively evaluate what your optimization efforts are giving you, and then if you have any sort of contractual agreements, the contracted cost allows you to represent that difference as well.
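As a rough illustration of how these columns relate, the snippet below compares effective cost against list cost to quantify what your optimization efforts are giving you. The rows are made-up FOCUS-shaped records, not real billing data.

```python
# Hypothetical FOCUS 1.2-shaped rows with the four standardized cost columns.
rows = [
    {"ServiceCategory": "Compute", "BilledCost": 900.0, "EffectiveCost": 850.0,
     "ListCost": 1200.0, "ContractedCost": 1000.0},
    {"ServiceCategory": "Databases", "BilledCost": 400.0, "EffectiveCost": 380.0,
     "ListCost": 500.0, "ContractedCost": 450.0},
]

def savings_vs_list(rows):
    """Overall savings rate: how far effective cost sits below published rates."""
    list_total = sum(r["ListCost"] for r in rows)
    effective_total = sum(r["EffectiveCost"] for r in rows)
    return {
        "list": list_total,
        "effective": effective_total,
        "savings_rate": round(1 - effective_total / list_total, 4),
    }

print(savings_vs_list(rows))
```

The same arithmetic against `ContractedCost` instead of `ListCost` would show how much of the gap comes from your negotiated agreement versus your RI and Savings Plan coverage.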

Thumbnail 2160

So new to what you're getting as part of AWS supporting FOCUS 1.2 is actually a capability that FOCUS introduced in 1.1 for capacity reservations. Capacity reservations are super helpful when you're trying to ensure that you're actually leveraging the capacity that you're reserving, primarily for GPU-intensive workloads. So now, all within the same reporting structure, you can understand whether your reservations are used or unused, and for AWS specifically we're talking about your on-demand capacity reservations as well as your capacity blocks.

Thumbnail 2200

We also introduced a virtual currency. Virtual currency represents the difference between the pricing unit that you're observing within your individual provider. So as we see more and more SaaS-based consumption, specifically around platform services like Databricks and Snowflake, you're going to observe that you're consuming credits or you're consuming tokens, and it may be more difficult to understand what you're actually spending within those individual services. So we introduced this concept of a virtual currency which allows you to track your credit and token purchases burned down, and it also allows you to do this if your billing currency and your pricing currency are in different values. So anyone that's operating outside of the US, where AWS primarily will bill you or they're priced in USD but they're going to bill you in whatever currency you're asking for, this makes it super easy for you to be able to validate that that conversion is accurate and consistent with your negotiated agreements.
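The conversion check described here — validating that what you're billed in your local currency matches the USD-priced amount at your negotiated rate — is a one-liner worth automating. The rate, amounts, and tolerance below are illustrative assumptions.

```python
# Sketch of validating a billing-currency vs pricing-currency conversion,
# in the spirit of the FOCUS virtual-currency / dual-currency columns.
def validate_conversion(pricing_cost_usd, billed_cost_local, agreed_rate,
                        tolerance=0.005):
    """True if billed amount = priced amount * rate, within a relative tolerance."""
    expected = pricing_cost_usd * agreed_rate
    return abs(billed_cost_local - expected) / expected <= tolerance

# Priced at $1,000 USD, billed in JPY under an agreed 148.0 JPY/USD rate.
print(validate_conversion(1000.0, 148200.0, 148.0))  # within 0.5% -> True
print(validate_conversion(1000.0, 155000.0, 148.0))  # off by ~4.7% -> False
```

The same shape works for credit and token burndown: substitute the virtual-currency price per unit for the FX rate.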

Thumbnail 2270

All right, so if you're looking to get started with FOCUS, it's a super straightforward piece. The first step is to create the report itself. You can do this within the Billing and Cost Management console; there's a section there where you can create the FOCUS export for 1.2. Once you actually have the report in hand, there are a number of different ways to use it. If you're looking for dashboarding capabilities, there's a FOCUS dashboard from the Cloud Intelligence Dashboards team. You can engage with this either via the command line or via the CloudFormation templates that are easily available for you to use. We have use case libraries available on the foundation site, Cloud Intelligence Dashboards also has use case libraries, and if you're looking for additional training, the foundation has Intro to FOCUS as well as FOCUS analyst training, and we continue to add to those all the time. Remember, the best way to do SQL queries is of course copy and paste. Oh God, I hate those. My favorite.
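For those who prefer code over the console, here's roughly what a Data Exports request for a FOCUS export looks like. Treat this as a sketch: the table name `FOCUS_1_2`, the bucket, and the prefix are assumptions — check the Data Exports documentation for the exact table identifier your account exposes.

```python
# Sketch of a FOCUS export definition for the bcm-data-exports CreateExport API.
export_config = {
    "Name": "focus-1-2-export",
    "DataQuery": {
        # SELECT * keeps every standardized FOCUS column in the export.
        "QueryStatement": "SELECT * FROM FOCUS_1_2",  # table name is an assumption
    },
    "DestinationConfigurations": {
        "S3Destination": {
            "S3Bucket": "my-billing-exports",  # hypothetical bucket
            "S3Prefix": "focus/",
            "S3Region": "us-east-1",
            "S3OutputConfigurations": {
                "OutputType": "CUSTOM",
                "Format": "PARQUET",
                "Compression": "PARQUET",
                "Overwrite": "OVERWRITE_REPORT",
            },
        }
    },
    "RefreshCadence": {"Frequency": "SYNCHRONOUS"},
}

# With credentials configured, you would submit it like so:
# import boto3
# boto3.client("bcm-data-exports").create_export(Export=export_config)
print(export_config["DataQuery"]["QueryStatement"])
```

The resulting Parquet files in S3 are what the FOCUS dashboard and any downstream Athena queries consume.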

Thumbnail 2320

So this is just an example of what you're going to get on the FOCUS dashboard itself. I was actually sat in a session two days ago where we were able to see this in practice and so it's cool to see that you get this unified view of what your effective cost is, what your billed cost is, like how does your service categorization usage span across the various providers that you have in place. And so this is the simplest, easiest way to integrate with FOCUS if you're already running AWS and you have other providers in place.

Solving Container Cost Attribution with Kubernetes Labels

So, containers are great at running your applications, but they're terrible collectively at telling you what they cost to do that. Kubernetes sees pods and nodes, your CFO sees dollar signs, they talk to you, you see stars, and these things are not the same language. Shared nodes mean shared costs. Shared cost means someone's getting blamed for someone else's bitcoin miner, to put it directly. Speaking of virtual currencies, is it your fault that the node is expensive, or is it your neighbor who requested 47 cores and then uses 2? When 10 teams share infrastructure, 9 of them think that they're subsidizing the 10th. They're all right. You can measure CPU requests. You can measure actual usage, but good luck explaining why those things are different to finance.

Thumbnail 2410

Without attribution, you're doing showbacks with a Ouija board. Chargebacks become political negotiations instead of math. We've all seen that play out in our organizations. Teams optimize for looking cheap, not for being efficient, which is why we need better primitives here for cost allocation.

Thumbnail 2440

So the next feature we want to announce is that Kubernetes labels are now part of split cost allocation. Previously you were operating off of your system labels, but now you can also pull in custom Kubernetes labels, so you're not just working with the system tags at the cluster-name and namespace level. This is really important, back to Corey's point, around how you think through allocation: if you can't get the allocation right at the right level of granularity, you're either going to misattribute costs when you're allocating them, or you're requiring some follow-on activity from your finance and engineering teams to get to the right values. There are a bunch of ways to do it, and they're all equally valid, but you have to be consistent with it. Good luck with that bottom up.
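To make the granularity point concrete, here's a sketch of rolling split container costs up by a custom Kubernetes label. The rows loosely mimic split-cost-allocation line items; the label keys, values, and amounts are made up.

```python
# Sketch: allocating split container costs by a custom Kubernetes label,
# which is the grouping the new split-cost-allocation support enables.
from collections import defaultdict

pod_costs = [
    {"cluster": "prod", "namespace": "web",
     "labels": {"team": "checkout"}, "split_cost": 41.20},
    {"cluster": "prod", "namespace": "web",
     "labels": {"team": "search"}, "split_cost": 18.75},
    {"cluster": "prod", "namespace": "ml",
     "labels": {"team": "search"}, "split_cost": 63.05},
]

def allocate_by_label(rows, key):
    """Sum split costs per value of a Kubernetes label, bucketing the unlabeled."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["labels"].get(key, "unlabeled")] += r["split_cost"]
    return dict(totals)

print(allocate_by_label(pod_costs, "team"))
```

Note how the `search` team's spend spans two namespaces — exactly the attribution that cluster-and-namespace granularity alone would miss.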

Thumbnail 2480

So again, if you're looking for the easy button here, we have a split cost allocation dashboard that incorporates that same level of Kubernetes label granularity that you're looking for. The most common dimensions are going to be represented within the workload explorer, and again, if you're using those more granular labels and pods, those are also dimensions you're going to be able to create more customized reports for. Thank you very much. I'm going to invite an upgrade from a Matt C to a Matt B at this point because I couldn't get a Matt A, and he's going to take us through some of these things, and it's his turn to suffer my slings, arrows, et cetera.

Thumbnail 2530

Measuring What Matters: The New Cost Optimization Efficiency Score

Thanks, Corey. So in our third and final section, we're going to be covering three new launches around optimization through the lens of cost management. For the next slide, we're going to talk through cost efficiency, how do we measure it, commitment purchase sharing preferences, which is a mouthful, how do we share our Reserved Instances and Savings Plans effectively in the groupings that you want versus just across the entire organization, and then finally you might have caught at the end of Matt Garman's keynote he mentioned a new Savings Plan option. Corey has a lot of thoughts about that one.

Thumbnail 2570

So let's talk about the measurement problem. Here's the thing, we all know that we should measure efficiency, not just cost. You want to cut costs, turn everything off. That's not really viable for some of us. The problem is that nobody really agrees on what efficiency actually means in the context of an organization. Finance wants cost per dollar of revenue efficiency. Engineering wants cost per transaction for efficiency. Product wants cost per user, and they're all different numbers that tell completely different stories.

What's even worse is let's say you pick one, it matters not which, cost per transaction let's say. Great, you optimize it, you bring it down thirty percent, and now you're a hero. Then product ships a new feature and suddenly a transaction means something different today than it did yesterday. Your metric just became meaningless and you didn't change anything.

And here's where it gets really painful. You have no idea if you're actually good at any of this. We all feel like we're sort of impostors on some level, like how do I demonstrate this? You think you're efficient because costs are flat year over year, but your competitor might have just cut their infrastructure spend in half while scaling way faster than you. You are flying blind here. There are no benchmarks internally or externally. There is no context. It is just your own costs versus your own history.

The cloud providers give you cost data. They don't give you efficiency data, and those are very different things. So every company at some point on the maturity curve builds their own framework for this. Every team interprets that framework differently within the organization. Nobody trusts any of them, but we keep building more because, hey, now we can look busy. At least maybe the eighteenth one is finally going to be the one that gets it right, just like reorgs.

Thumbnail 2690

We have been asking for a standard way to measure this forever, something comparable, something that doesn't require a PhD to interpret and a board of elders to make decisions on, something that tells you if you're doing well or just doing okay. So that's why we've launched the new Efficiency Score. You'll find this now in the Cost Optimization Hub. So if you've got that turned on, and you should, it's free, you'll see this new efficiency metric when you log in. The score is calculated from your potential savings over your total optimizable spend, which means the more optimization recommendations you take from the hub, the higher the score goes.

So if you're purchasing a new Savings Plan, if you're cleaning up idle resources, if you're right-sizing that particularly large EC2 instance, the score will go up. Not only can you track this historically, but now you have a common KPI to game. Whether that's working with your teams to see how quickly and how high the score can go, or maybe some friendly competition between peers, you can set those targets and benchmarks. You also can programmatically query this via API with the List Efficiency Metrics API.
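As a back-of-the-envelope sketch of that arithmetic: the session describes the score in terms of potential savings over optimizable spend, rising as you act on recommendations, so a plausible reading is that the score reflects how little optimization remains. The exact formula below is an assumption — use the ListEfficiencyMetrics API for authoritative values.

```python
# Assumed efficiency-score arithmetic: fewer remaining potential savings
# relative to optimizable spend -> higher score. Not the official formula.
def efficiency_score(potential_savings, optimizable_spend):
    """Score from 0-100; drops as unaddressed savings pile up."""
    if optimizable_spend == 0:
        return 100.0
    return round(100 * (1 - potential_savings / optimizable_spend), 1)

# Before acting on recommendations: $12k of savings left on $96k of spend.
print(efficiency_score(12_000, 96_000))  # 87.5
# After right-sizing and cleanup shrink remaining savings to $3k:
print(efficiency_score(3_000, 96_000))   # 96.9
```

Tracking this number over time, or across teams, is the "common KPI" use the speaker describes.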

Granular Control Over Commitments: Reserved Instance and Savings Plan Sharing Preferences

All right, let's talk a little bit about sharing Reserved Instances and Savings Plans, speaking of friendly, across accounts within your organization. Right now it is all or it is nothing. You either share your commitments across every single account in the org, or you don't share it at all and every account is responsible for its own. There is no middle ground. This creates a beautiful problem. You want to share because otherwise you're leaving money on the table and Mr. Rogers would be proud of the adult you have grown into, but you're terrified to share because the moment you flip that switch, some other account is going to effectively consume all of your commitments.

Including that one account, you know, the one from that acquisition that technically hasn't been integrated yet, or the sandbox that is still running because no one realizes it's there, the account that someone created for a demo for a talk three years ago that nobody remembers but is mysteriously still spending money like clockwork every month. So you're stuck. You bought a million dollars in Savings Plans for your production workloads because that's the right number. They could be shared with your dev accounts. That would be efficient. It means your company is now getting the best discount possible, but your dev account is in the same org as, I don't know, Karen's machine learning experiment that's going to train a model any day now and currently has 40 p3.16xlarge instances sitting there, ready to go, but idle.

If you enable sharing, suddenly Karen is consuming all of your discounts. Your production teams are subsidizing Karen's GPUs that are now three generations old. What does this turn into? That's right, politics. Your production team bought the commitments, so should Dev get to use them? If Dev uses them and then Prod gets charged back, who's tracking this? How does it reflect for internal costs? Finance wants utilization reports by team, but the commitments are floating around the org like freaking ghosts, and nobody knows who's actually benefiting. We know the customer is, but that's as far as we can go.

And the worst part is the fear, the psychology behind that fear of the one bad actor. It prevents you from helping the 20 good actors. We see that everywhere in society. So you leave sharing off, your commitment utilization dies, it's at 73% month after month, and everyone is just slightly worse off because you couldn't trust everyone. We have been asking for years to give us more granular controls. Let us share with these accounts, but not those accounts. Greg, you know the ones. Let us set guardrails around sharing.

Thumbnail 2890

Thumbnail 2920

So that's why we have new Reserved Instance and Savings Plan sharing preferences available. By default, the sharing has not changed. You'll start with open sharing, which means that within your organization, if there's any unused commitments in a particular account, that will float to the place in the organization that has the highest discount. We're still optimizing the savings for you overall. However, if you want to change that, we have two new options. We've got prioritized sharing and restricted group sharing.

For prioritized sharing, you're feeding us the usage that you want to prioritize, that grouping, so that if a commitment is unused, it'll float between that group first and then move to the rest of the organization. But if you want to restrict that group sharing, if you don't want Karen to get your Savings Plan commitments, then you can just restrict it to your specific cost and usage, and then if there is waste, well, then you'll pay for that too.

So how do we set this up? We have a scalable grouping mechanism for this in the form of Cost Categories, because you don't want to be manually maintaining a list of accounts that says this is one group and this is the other, and oh wait, we just added two accounts last week, so we have to add that in now. So with Cost Categories you can set your rule-based allocation methodology. If you wanted to set a rule that was based on the nomenclature of your accounts or by tags, you can. And once you create that Cost Category, you can set it to be the sharing boundary.
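Here's a sketch of what such a rule-based Cost Category might look like when defined through the Cost Explorer API. The category name, group values, account-name prefixes, and default bucket are all hypothetical; the rule structure follows the CreateCostCategoryDefinition request shape.

```python
# Sketch: a Cost Category that groups accounts into commitment-sharing
# boundaries by account-name prefix, so new accounts join automatically.
sharing_rules = [
    {
        "Value": "prod-sharing-group",
        "Rule": {"Dimensions": {
            "Key": "LINKED_ACCOUNT_NAME",
            "Values": ["prod-"],
            "MatchOptions": ["STARTS_WITH"],
        }},
    },
    {
        "Value": "sandbox-no-sharing",
        "Rule": {"Dimensions": {
            "Key": "LINKED_ACCOUNT_NAME",
            "Values": ["sandbox-", "demo-"],
            "MatchOptions": ["STARTS_WITH"],
        }},
    },
]

# With credentials configured, create it via Cost Explorer, then select the
# resulting Cost Category as the sharing boundary in the console:
# import boto3
# boto3.client("ce").create_cost_category_definition(
#     Name="commitment-sharing-boundary",
#     RuleVersion="CostCategoryExpression.v1",
#     Rules=sharing_rules,
#     DefaultValue="rest-of-org",
# )
print([r["Value"] for r in sharing_rules])
```

Because the rules are prefix-based rather than a hand-maintained account list, the two-accounts-added-last-week problem the speaker mentions takes care of itself.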

Once you set that up, then the Savings Plans will operate as usual. You can now use your Cost Explorer in the same way to check your utilization and your coverage rates, but you can also monitor and model the Savings Plan purchases. So if you want to do a what-if scenario with Pricing Calculator, and if you haven't used the new authenticated Pricing Calculator in the console, please check it out.

You can check out the calculator in the console by doing what's called a bill estimate. You can take those savings plans, add new ones, and then model that what-if scenario. For example, if last month I bought another $100 of savings plans, what would that look like under these new sharing preferences? You can get that output and the estimated savings that go along with it.

Thumbnail 3040

The Long-Awaited Arrival: Database Savings Plans and Session Wrap-Up

I've been talking a lot about the savings plans, and finally Corey gets to talk about our newest one. Database savings plans are here. They cover RDS, Aurora, ElastiCache, Neptune, DocumentDB, DynamoDB, Timestream, the whole family. I would like to thank the Academy, my parents, and the heat death of the universe, which we almost reached first. I have been asking for this since 2019, when savings plans originally launched. In 2019, my hair was a different color. I had different enemies. Some of you in this room weren't even in cloud yet. When this happened, you were innocent. You were unscarred by the reserved instance management experience.

For six years, AWS said, oh, just use reserved instances for each service separately and manage them like a medieval warlord moving things on the table. You're telling someone to use different currencies for each type of purchase at the grocery store. Oh, groceries and gas, you better maintain separate wallets to wind up doing that with separate exchange rates and separate commitment terms. Totally reasonable, highly normal. Compute got savings plans six years ago, and that same year I started asking for database coverage, but databases were stuck on an island for this long. You could have raised a child to kindergarten in that interim.

Thumbnail 3130

I'm not saying AWS moves slowly. I would never say that. But I will say that geological epochs have been popping into my DMs asking if AWS needs motivational coaching for this. My newsletter subscribers have heard me complain about this so many times that they have reflexive wincing and PTSD from it. I'm like the Muppets Statler and Waldorf here heckling from the balcony. What about savings plans for databases? For context, this is the pricing commitment flexibility we have had for EC2 all this time. But for the databases that power literally everything of substance that we do that is stateful, because let's be honest, your compute is replaceable, but without your databases you really don't have a business. That's where the business lives.

Reserved Instances were commitment without flexibility, relatively speaking, a prison of your own cost optimization making. Now we have commitments with flexibility across services. You can move between Aurora and other RDS, which is kind of Aurora, but not, because that's weird, without sacrificing any of the commitments you have made. You can shift from ElastiCache to DynamoDB without starting over. I asked for this as a young man with dreams, but I am now old and bitter. But I have database savings plans, so I guess that everything worked out.

They're here. They're great. They only took six years and the collective screaming of the entire FinOps community to achieve. Thank you AWS. This is huge. It will save real companies real money and operational headaches, but what took so long, you absolute maniacs? Thank you for doing this. Next up is me asking AWS why it costs more to move data between regions than it does for me to literally move my actual furniture between states. So I'll see you all in 2031 to follow up on this. Back to you, Matt.

Thumbnail 3230

So after all that, you might be wondering how to get started with Database Savings Plans. There's the option for no-upfront one-year commitments. And who here has purchased a Compute Savings Plan, an EC2 Instance Savings Plan, or a SageMaker Savings Plan? Raise your hand. You already know how to do this, so you can go check your recommendations in the console and just go with that if you want. But if you want to model a specific amount, let's say $20 an hour, you can use the new Savings Plan Purchase Analyzer that we launched last year. It's awesome. It'll show you the savings you'd get, along with a graph of what that commitment would look like.

Thumbnail 3280

And just like Compute Savings Plans, you have seven days to return it if it turns out it doesn't fit just right. It's like buying pants on Amazon. And so you can just add that to your cart and purchase it, and then you'll be able to monitor your coverage and your utilization of database savings plans in the same way that you do for all the other savings plans. You can use Cost Explorer to check those reports. You can use your third party, your dashboards, your FinOps tools, or you can use the new Billing and Cost Management dashboards that we showed a bit earlier.

And then you might be asking, what do I buy more? What do I do? And for some of you, you're probably going to take that recommendation, hit purchase, see the amount of savings you're going to get, and your bosses will be very happy.

Other folks, I've seen the best practice to be laddering, so buy smaller commitments over time, giving yourself the flexibility and the checkpoint within either your FinOps teams or with your business unit or other stakeholders to say, hey, maybe we can purchase more next month. Have a monthly or quarterly checkpoint that way you're getting a little bit more flexibility and you're able to see that ladder of the coverage over time. So that's Database Savings Plans.
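The laddering practice above can be sketched as a simple schedule generator: split a target hourly commitment into equal tranches purchased at each checkpoint. The target, tranche count, and cadence below are illustrative choices, not a recommendation.

```python
# Sketch of a Savings Plan laddering schedule: equal tranches bought at a
# regular checkpoint instead of one large up-front commitment.
from datetime import date, timedelta

def ladder_plan(target_hourly, tranches, start, interval_days=30):
    """Split target_hourly ($/hr) into `tranches` equal purchases over time."""
    per_tranche = round(target_hourly / tranches, 2)
    return [
        {"purchase_date": start + timedelta(days=i * interval_days),
         "commitment_per_hour": per_tranche}
        for i in range(tranches)
    ]

# Reach a $20/hr commitment via four monthly purchases of $5/hr each.
plan = ladder_plan(20.0, 4, date(2026, 1, 1))
for step in plan:
    print(step["purchase_date"], f"${step['commitment_per_hour']}/hr")
```

Each checkpoint is a natural place to re-run the Purchase Analyzer against actual coverage before committing the next tranche.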

Thumbnail 3350

I'd like to welcome my other co-speakers back on the stage to do a wrap up. Great, we covered a lot today, so let's recap all the announcements in this session. We talked about how you can customize the level of cost and usage data sharing, whether it's across accounts or payers with AWS Billing View, and you can consolidate all the KPIs in one dashboard experience using Billing and Cost Management Dashboard. Both Billing View and Dashboards can be shared with any accounts within or outside the organization.

Billing Transfer is a new way to streamline payments: you can designate a payer of payers to centralize all the payments, billing, and cost management across payers. You get 18-month forecasting with AI-powered explanations behind it that start to give a lot more clarity to how you're predicting what your business is going to do. You get scalable anomaly detection with AWS managed monitors, without having to play constant whack-a-mole just to track what your service teams are doing.

You get enhanced cost management capabilities in Amazon Q Developer, which is just an absolute delight, because you can start asking questions like a human being instead of a frustrated DBA. You can create your Data Exports in FOCUS 1.2, which allows you to normalize your billing data across the providers you're operating in. And split cost allocation now supports Kubernetes labels, so you can get to the right level of granularity during your allocation.

And finally, we've got the new Cost Optimization Efficiency Score. You've got a standard way to measure your efficiency over time. We've got the new Reserved Instance and Savings Plan group sharing, so now if you want to prioritize or restrict sharing to a specific set of usage, you can. And then of course, Database Savings Plans, they're here and available for no upfront one-year purchases.

Thumbnail 3450

So to learn more, these QR codes should take you to the relevant places, provided that the website monkey didn't fall into the box of juice boxes again last night, and we'll see how it plays out. You can bookmark these. AWS CFM teams are going to continuously share new releases, best practices, etc. And here's what we want you to do on these things: don't overthink it, don't wait. Don't say, oh yeah, in March, we should really go back and look at that. Make an appointment on your calendar to go and look at these things in the somewhat near future. It's worth the time, it really is.

Thumbnail 3510

Your future self will thank you. Your CFO will probably thank you if they're aware of what you're doing, and your sanity will definitely thank you, otherwise you turn into something vaguely resembling this. Thank you all for coming. We've enjoyed having you. Please make sure that you fill out the session survey and give us high scores, otherwise we have to go back in the box and then they'll break out the duct tape and no one wants that. Remember to give us useful feedback so you don't have to suffer through these lame jokes next year, and thank you very much for coming and thank you to my co-presenters for tolerating me. Thank you. Thanks everyone, thanks everyone.


; This article is entirely auto-generated using Amazon Bedrock.
