🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - The next frontier: Building the agentic future of Financial Services (INV209)
In this video, Scott Mullins from AWS Financial Services introduces how the industry is embracing agentic AI. Axel Schell from Allianz Technology SE demonstrates their multi-agent framework built on Amazon Bedrock, showcasing agentic browsing capabilities for insurance purchasing and a claims processing system that reduced processing time by 80 percent using seven coordinated agents. He emphasizes the need for a "Gen AI mesh" architecture with agent discovery and orchestration to avoid complexity at scale. Erik Reppel from Coinbase presents x402, an open standard for internet-native payments using stablecoins, enabling agents to autonomously pay for services with micropayments at 0.1 cent transaction costs. The protocol has processed over 70 million transactions and $38 million in volume across 250,000 users in 30 days, demonstrating how open standards can unlock agentic commerce without requiring pre-negotiated integrations.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
AWS re:Invent 2025: The Evolution of Financial Services and the Rise of Agentic AI
Please welcome to the stage Managing Director of Financial Services at AWS, Scott Mullins. Good afternoon and welcome to the 2025 Innovation Talk for the Financial Services Industry here at AWS re:Invent 2025. For those of you who don't know me, I am Scott Mullins, and it's my privilege to lead the financial services business here at AWS. I've been very fortunate to be a part of many of those watershed moments that you saw highlighted in the video that we just shared with you.
That was like a one-minute roller coaster ride through the last 13 years of what we've been doing. The very first thing that you saw was one of the first sessions for financial services at the very first re:Invent in 2012. While some of the things about re:Invent have changed over the years—from the number of attendees to the advancements in both AWS services and our customers' use cases—what hasn't changed is our shared willingness to keep pushing past what is currently thought to be possible.
For more than a decade now, we've been moving the industry forward together. At times with deliberate and important incremental steps, and at others with leaps and bounds, but always with the conviction that we can make it better. As a result, today AWS is the home to the mission-critical systems that power the global financial services industry. Together, we're actively evolving how financial consumers are served.
Evolution happens quickly in our industry because financial institutions have chosen to change instead of just waiting for change to happen to them. One reason that we now stand on the threshold of a new frontier today—where agents won't only chat with us but act on our behalf—is that financial institutions have been preparing for this moment for years. They've moved their data to AWS and set up processes and guardrails that would enable them to securely and confidently take advantage of machine learning and generative AI.
In doing so, financial institutions were ready to embrace the opportunity to tackle longstanding challenges and transform the way they operate with agentic capabilities. This week at re:Invent, we've heard from many financial services customers how agents are helping them rethink their core businesses and become more productive, secure, and efficient. Moody's shared how the company deployed multi-agent systems to process large data sets and conduct specialized research and analysis. Ripple shared how it transformed security operations by building a multi-agent system that analyzes massive log volumes and enables guided investigations. Commonwealth Bank of Australia shared how it's using agents to automatically analyze legacy code, perform security assessments, and handle network flow implementation.
That's just a sample of the industry-focused breakout sessions our customers are delivering this week, and a fraction of the agentic AI use cases we're tracking across the industry today. The common thread is that financial institutions are entrusting agents with increasingly critical responsibilities, allowing them to interact directly with their customers, handle sensitive data, and execute complex tasks. But even more powerful change is on the horizon. Autonomous AI systems, not just goal-based agents, are poised to transform the way organizations build financial services and the way we all consume them.
The Future of Personal Finance: How Autonomous Agents Will Transform Consumer Experiences
So what will this look like? Well, across the next frontier, agentic systems will more and more become a part of our daily lives. Let's see how this looks. So let's say you've been offered a job that would require you to move and maybe you need to buy a new house. Because my personal financial agent knows me, it can look for properties that align to my tastes, needs, and financial resources.
You can see I have a particular taste in the style of home. Imagine that my agent finds a property that looks really great for me. But before I move forward, my agent takes a deeper dive into the disclosure statement for the house I'm interested in and finds out that the basement flooded a few years ago. That actually happened to the home I bought recently. My agent then accesses a weather database on its own to find out that changing weather patterns have put this home in a risky flood zone going forward. This is the type of action that agents can proactively take on our behalf today to save us time and money, but also to minimize risk.
Agents will also do things like analyze your financial situation in real time, evaluate your current holdings to see whether you're properly balanced, and shift funds accordingly to make sure you can actually afford that home. Your agent will make it easier for you to make informed decisions when it comes to evaluating which equities you might want to transfer or sell to minimize capital gains taxes, or how much money you should set aside for, say, your kids' college fund based on your current earnings and savings. Agents can independently perform research to make a recommendation and then present enrollment forms.
Agentic systems will make life easier for consumers like you and me, but they'll also make life easier for providers as well. Let's say your agent thinks you may want to purchase some life insurance based on your recent interactions. Well, insurance underwriting is a difficult and time-consuming process because it requires manual data collection and extensive internal and external research and analysis. Agents are already streamlining this process by automatically extracting data from submission documents.
But in the near future, autonomous AI systems will go a step further by performing risk assessments and then enabling agents to negotiate prices with other agents. New capabilities have put this vision of the future within reach. The length of tasks that agents can accomplish has increased from seconds to hours, while costs have decreased from dollars to cents. We heard from Matt Garman on this on Tuesday. But for this autonomous agent-to-agent future to work at scale, we'll need more than just technical capabilities. We'll need trust.
Building Trust Through Amazon Bedrock: New Features for Secure Agentic AI Deployment
Organizations will need to trust that autonomous systems can operate securely, comply with regulations, and protect sensitive customer data. As an industry, we're going to need to align on standards and protocols that facilitate and incentivize trust between companies. The path forward to this trust-based future starts with Amazon Bedrock. As you heard from Matt Garman in his keynote yesterday, we continue to add new foundation models—18 new foundation models to the broad selection we already provide through Bedrock—so that you have the freedom to choose the right model for the right job.
Matt also announced two new features within Amazon Bedrock AgentCore, called policy and evaluations. These are two new capabilities to help you deploy and scale agentic AI systems securely without operational headaches, and I'm really excited about what those two features mean for the financial services industry. Today we're here to learn how financial institutions are using Amazon Bedrock and other AWS services to build the future of the industry. We're fortunate to have two customers joining us to share their experience and guidance.
First, we're going to hear from Axel Schell, the Chief Technology Officer of Allianz Technology SE, who will share how one of the world's largest insurance companies has built a framework for bringing AI-powered solutions to production securely, consistently, and quickly. Then we're going to hear from Erik Reppel, who is Head of Engineering of Coinbase's Developer Platform. He'll share how Coinbase is making the vision of agentic commerce a reality through its x402 protocol. So now, please join me in welcoming Axel from Allianz to the stage.
Allianz's Journey: Navigating the Polycrisis World with AI as an Instrument of Resilience
It's a pleasure to be here, and what we want to talk about is moving the frontier to the next level. One of the questions I regularly get asked is how business and our technological environment will look in the next three, five, or eight years. That's a very difficult question—the famous million-dollar question. The short answer is that almost everything will change in the future. We will for sure shift boundaries and frontiers.
Now let's try to understand why moving frontiers and boundaries is, first of all, very important, and secondly, why it is relevant these days. If you look at the current world we're living in, you see what we call a polycrisis world, where lots of things are happening: the number of IT outages increases, there are more data breaches, more natural disasters, far more AI-related deepfakes, and global terrorism and IT attacks also increase over time.
When we look into this global permanent crisis, we see several critical factors at play. First, we have a very fragile geopolitical situation. Climate change is advancing, and that's relevant for humanity and our economy. We also see demographic change that influences all of our lives. And last but not least, there's AI and digitalization, which adds on top of all those elements. One thing we can say is that this results in an environment of change and lots of uncertainty.
Now, the key question is: if we live in this permanent polycrisis world, is IT a catalyst of change—something at the core of disruption that makes the disruption even faster—or is it an instrument of resilience? If you're ahead of the curve, more or less leading the pack, and you're early enough, then it can be an instrument for creating resilience. So does it disrupt, or does it create resilience? What is absolutely clear is that if we want to look at the future, advance the boundaries, and move the frontiers to the next level, we have to manage our future—the future of IT and the future of these companies. Therefore, whatever world you live in, we have to actively play a role in managing this future.
Now the question is: how do we do this? Let me give a very quick introduction to Allianz so you understand a bit of the background. We have roughly 156,000 employees, we operate in 74 countries, and we are the leading financial services insurance brand, with roughly 2.25 trillion dollars in assets under management—that's the responsibility customers have given us. We are a 135-year-old company. The key question now is: if we look at IT, disruption, and resilience, what's coming? One of the questions we ask ourselves daily is how people shop around for insurance today and how they will do it in the future.
If we look at how they do it today, there are lots of aggregators. You go to an aggregator page, you get a couple of comparisons for insurance products, and then you decide which one you want to go with. But the question now is: is AI maybe the new aggregator?

I have an example, and let's start from the end customer, which is here, my cat. We are going to buy pet insurance for my cat in a modern way. All I do is upload the animal passport with a few key pieces of information—nothing more than the name of the cat, when it was born, and the gender. That's everything I use. I then use a single prompt to buy an insurance product: order pet insurance for my cat with the best price-to-performance ratio, proceed to checkout, and do as few interactions as possible.

What you see now is an agent—you're probably well aware of the agentic browsers that are out there, the Comets of the world, the Atlases of the world; we just use one example here—going through the process, looking at various websites, and running the comparison. The whole thing takes roughly 4 to 5 minutes. I will not go through all of the details now, and it ends with buying health and pet insurance for my cat. So I already have 3 pet insurances: one from the first time I tried it, one from recording the video, and one from a live demo. So I will not do a live demo today—I have 3 insurances already, which is enough. By the way, they're from different brands, which is also an interesting experience.
Agentic Browsing and Multi-Agent Frameworks: Allianz's Model-Agnostic Approach to Automation
Now what you saw here is agentic browsing, specifically for financial services insurance products. What you see here is that we move away from the old web days of grammar-constrained decoding to now very reliable, or at least much more reliable, tool calls. We move away from browser APIs that can just do read and modify operations to something much more powerful and flexible for agent-based automation.
Agents can read, modify, click, or navigate in real web UIs. They're completely flexible: the agent doesn't care what the website looks like, and if the design changed the day before and the navigation is now different, it adapts. These agents have lots of capabilities. They plan, act, observe, and correct, and they do this multiple times with reasoning behind it, so they're quite good. That gives a lot more capability: you now have full web workflows, not only for our web pages but also internally for some of your own web applications, and we can pass and combine data from various sources. Of course, the agent can also just open a new tab and navigate across multiple systems.
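The plan–act–observe–correct loop described above can be sketched in miniature. This is a toy simulation, not a real browsing agent: the "page" here is a plain dictionary and the actions are invented for illustration, whereas a production agent would feed screenshots to a model and issue real click or navigate commands.

```python
# Toy plan-act-observe-correct loop. The page states and actions are
# made up for illustration; no real browser or LLM is involved.

def run_agent(goal, page, max_steps=10):
    history = []
    for step in range(max_steps):
        # observe: check whether the goal state has been reached
        if page["state"] == goal:
            history.append("done")
            return history
        # plan: pick the next action based on the observed state
        action = "confirm" if page["state"] == "checkout" else "click_next"
        # act: apply the action; the loop then observes and corrects again
        page["state"] = {"landing": "quote", "quote": "checkout",
                         "checkout": "purchased"}[page["state"]]
        history.append(f"step {step}: {action} -> {page['state']}")
    return history

trace = run_agent("purchased", {"state": "landing"})
print(trace)  # three steps, then "done"
```

The point of the loop structure is that the agent never assumes the page layout in advance: it re-observes after every action, which is why a redesign of the website the day before doesn't break it.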
What this means is that there is now a universal automation layer, built on agentic browsing capabilities, for all kinds of web applications—including legacy applications, even ones that don't have proper interfaces or APIs. It replaces manual integration with one agentic interface. How great your web design is, is no longer the most important thing for the agent. My productivity increased a lot: writing the prompt took me 20 seconds, and that's it. I just had to check whether I was happy with the result, but productivity is on a completely different level now.
What does this mean, and why is this relevant now? We are all very well aware of the world where a user interacts with a website. Now we live in a world where, in the future, the user may interact with an agent and ask the agent to do certain activities on the web, such as procuring and buying products. But is this the world everything will end up in? I don't think so, because as Allianz Insurance we don't want thousands of agents crawling our website and clicking on the buttons.
What you just saw is essentially a large language model processing pictures: it tells the mouse to move 50 pixels up or down, clicks the next button to see what happens, takes a screenshot, and brings that back to the model to understand what's on the screen and give the agent its next instruction. That's not what you want—it needs a lot of compute and a lot of capacity. So what will happen is that we will also have our own agent, the Allianz agent, that shows up and says, "Cool, you're an agent. I'm an agent as well. Let's interact with each other in a completely different way." There will be lots of protocols, and payment will be interesting too—Erik will talk later about exactly this kind of agent-to-agent payment, which will be a very interesting journey as well. Agents communicate with each other in a totally different way.
That was for us the starting point for understanding what our AI journey looks like over the next couple of months—nobody knows what happens in two years; we live in the famous dog-years world of AI, where time moves so quickly. We started working with colleagues from AWS on some co-development to better understand the basics of agent capabilities. We selected an area to start in with the team of our risk colleagues, who spent a lot of time gathering and aggregating information, which took a long time. What we actually wanted to do is shift their time. There's yay time and nay time: the yay time is the work they like; the nay time is just data gathering, which they don't like. We want to shift the balance between yay and nay time and move that 20 percent of their actual work to a much higher productivity level.
So we collaborated on a model-agnostic multi-agent framework. You can see here what the features look like: we use Amazon Bedrock with Claude as the large language model, the framework is built so that open model selection is possible, we have working agents, and we rely heavily on MCP. This is how it looks from an architectural perspective. This is our multi-agent framework architecture—nothing super fancy; I think it looks about how you would expect. There's an S3 bucket where we store the data, our VPC endpoints, and our Gen AI Lab Agent, which is the home where the agent lives. We use LangGraph and some Fargate capabilities, do network address translation out to the internet, and then process and integrate everything.
From Siloed POCs to Gen AI Mesh: Scaling Agent Architecture with Reusability and Governance
I like this slide a lot because it shows the journey we've come from. You remember all the times we heavily discussed that monoliths are not what we want and that we have to move to microservices: more developer efficiency, simplified operations and tech stack. This is what we thought, but there were downsides—duplication of effort, inconsistent standards, increased complexity, complex governance. I think you are very well aware of the challenges of microservices. The second example is cloud. Everybody said we have to move away from on-premises to the cloud, and there are many advantages—I just put one here, cost reduction, plus improved resilience and many other benefits—but it also comes with downsides: technical lock-in, perhaps increased parallel-run costs through lift and shift, complex transitions, and so on. Just a couple of downsides.
And what I want to compare the agentic world of the future with is exactly those problems. We will love agents, they will do unbelievable cool things, but they will come with a lot of downsides. And we should be aware of those already now. Because if we do not think about those, if we do not have proper governance, a proper structure, how to work with agents, we will run into many, many problems in the future. Now it is still manageable because it is early stage, but we have to think now about this.
What I foresee is something like this. If you look at what many companies do, they start with siloed POCs: a meeting-preparation assistant, a co-pilot support assistant, legacy modernization, an HR bot, source code management, with public cloud as the underlying platform—so many of those agents. There is very low reusability of those agents across these verticals. They are very tightly linked to their use cases, and what you get is incremental value realization. This is the world of the many POCs. It's nice, the use cases are great, but how do you actually leverage the power of bringing things together? Is this something that is scalable?
And we should never forget one key sentence: you can scale everything, but you cannot scale complexity. There is a high risk on the left-hand side of running into very complex environments. What you need is something more like the right-hand side, which we call a Gen AI mesh: agent discovery, registry, and orchestration capabilities, where you try to reuse agents across any of those use cases. You need a flexible plug-and-play system with no lock-ins, and that leads not just to incremental value realization but to value realization at scale. So keep that in mind: when you see your agents growing, you need to find a way to move into this Gen AI mesh and away from vertical pilots.
Now how do we do that? This is our current thinking—I'm not saying it's the best you can do, but it's how we try to solve the problem, and this is why I think it's relevant. We started to define certain levels for agents. First of all, we have our systems of record: databases, policy assurance, the case database, the solution database, and so on. On top of that we stack agentic layers. First, we have utility agents at level 3 that can be highly reused—for documents, for video and imaging, for processing, for voice, for email.
On top of that we have so-called business agents at level 2 that are more business-specific: they deal with notifications, solutions, case management, and policy assurance. Then we have level 1 planner agents, which orchestrate the workflow. All of that is then linked into our customer journeys. With this kind of architecture and reference structure, we are able to utilize agents in a way that makes reusability much higher, creates less complexity, leads to more harmonization, and is a lot easier to scale.
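The three-level idea can be sketched as a minimal in-process registry. All names and behaviors below are invented for illustration—this is not Allianz's actual framework—but it shows the structural point: a level-3 utility agent is registered once, reused by a level-2 business agent, and orchestrated by a level-1 planner.

```python
# Illustrative three-level agent hierarchy with a tiny registry
# ("marketplace"). Agent names and behaviors are made up.

registry = {}  # maps (level, name) -> agent callable

def register(level, name):
    """Decorator that publishes an agent into the registry."""
    def wrap(fn):
        registry[(level, name)] = fn
        return fn
    return wrap

@register(3, "document")
def document_agent(doc):
    # level-3 utility agent: generic, highly reusable
    return {"extracted": doc.upper()}  # stand-in for document processing

@register(2, "case_management")
def case_agent(doc):
    # level-2 business agent: composes utility agents, not raw tools
    data = registry[(3, "document")](doc)
    return {"case": data["extracted"], "status": "open"}

@register(1, "planner")
def planner(doc):
    # level-1 planner: orchestrates the workflow for a customer journey
    case = registry[(2, "case_management")](doc)
    return f"journey complete: {case['case']} ({case['status']})"

print(planner("claim form"))  # journey complete: CLAIM FORM (open)
```

The design payoff is that a new business agent never re-implements document handling: it discovers and reuses the level-3 agent through the registry, which is exactly the reusability argument made above.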
The way this works is what we call an agentic architecture framework, with more or less four components. First there is the core stack, with front-end orchestration and AI workflow tooling; you see here some of the technologies we leverage. We have our DevOps tools, mainly around automation and the continuous deployment processes. Then of course come the AI models—flexibility needs to be built into the architecture—and you need some foundational service layers as well, where you do, for example, all of the container orchestration.
You also have API management and full-stack observability. It looks something like this: what you see here is an agent control plane we developed that functions as an internal marketplace. As I said, observability and reusability only work when you can also discover those capabilities. What you essentially need is an organized marketplace—a space where you can collect, showcase, and engage agents. You need to administer them properly, which includes configuration. You need to monitor them and their consumption, which will certainly become something like agentic ops, where you need to control costs, because LLMs will at a certain point become quite expensive. And of course you also need some kind of agentic prompt playground where you can experiment with new agentic applications.
If you go one level deeper, I have here an example: a claims handler agent. These are the kinds of agents you would find on the marketplace at the various L1, L2, and L3 levels, and you have here a couple of capabilities. You can connect either via MCP or via REST endpoints, and you can see what tools come with the claims handler agent. You also have the ability to run a test case: you enter a test case number, execute the tool, and see the results, so you can play around with it and understand what the agent can do.
On the right-hand side you see the very important part: all of the traces and the traceability of what the agent is actually doing. In a regulated industry, you have to make sure you know what the agent did. You see, for example, the reasoning behind the steps, all properly documented. You see when the execution is complete, the timestamps, and the audit logs, all on the right-hand side. This is the kind of setup we use to join this new agentic world.
Real-World Impact: Reducing Claims Processing Time by 80% with Multi-Agent Workflows
The new agentic world from my point of view will grow extremely fast. Two years ago we didn't discuss agents. Last year, agents were a big thing. This year I have the feeling that people now understand how powerful those are and what is going to happen in the future. We have to prepare ourselves to be ready for many agents that we need to manage, orchestrate, and also govern properly as an IT organization.
Now let's leverage the agentic capabilities for a second. Imagine there's a blackout—no power. Your fridge also has no power, and after a certain amount of time everything in the fridge is no longer edible. This is one use case we have implemented, where we use agents to do claims processing and settlement. We have a couple of agents. First, a planner agent that initiates and coordinates the workflow—this is, as I said, the level 1 agent. We have a cyber agent that handles the security part, a coverage agent that verifies whether the customer is actually covered by their policy, a weather agent that checks whether there really was a power outage or a thunderstorm or something like that, a fraud agent that runs all of the fraud checks, a payout agent that in the end pays out to the customer, and an audit agent.
With this setup we could reduce our processing time by 80 percent: something that took more or less 100 days now runs through a workflow of 7 agents, and it's a very powerful tool that many of our customers like. This is how it looks. You start your mobile app and get welcomed by the agent. You say, "Hi, my fridge lost power during the storm and all my food has gone bad." The agent responds, "I'm really sorry to hear that." First, the agent wants to understand who you are, so you provide your email address; next, it asks for your full name, date of birth, and home address. These are understood, the address is verified, and then it continues: when did this happen? Please send me a picture of the fridge.
The agents then do all of the work in the background, which is what you see on the right-hand side. What the customer cannot see is all of the various agents kicking off: the coverage agent and the fraud agent looking into the claim, the weather agent checking whether there was a weather incident, the payment agent doing its work, and so on. In the end you see all of the audit trails and logs—why we took a certain decision, how much the payout is—so that we are covered from a regulatory perspective.
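The coordinated workflow described above can be sketched roughly as follows. Agent names follow the talk, but the checks and the payout amount are invented placeholders; real agents would call models and external systems (weather data, fraud scoring, payment rails) rather than return canned results.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a planner coordinating specialist claims agents.
# All logic and values here are made up for demonstration.

@dataclass
class Claim:
    policy_id: str
    incident: str
    trail: list = field(default_factory=list)  # audit trail of every step

def coverage_agent(claim):
    claim.trail.append("coverage: policy active, spoilage covered")
    return True

def weather_agent(claim):
    claim.trail.append("weather: outage confirmed at address")
    return True

def fraud_agent(claim):
    claim.trail.append("fraud: no anomalies detected")
    return True

def payout_agent(claim):
    claim.trail.append("payout: 150 EUR transferred")
    return 150  # placeholder settlement amount

def audit_agent(claim):
    claim.trail.append(f"audit: {len(claim.trail)} steps logged")

def planner_agent(claim):
    """Level-1 planner: run the checks in order, stop on any failure."""
    checks = (coverage_agent, weather_agent, fraud_agent)
    if not all(check(claim) for check in checks):
        return None  # claim rejected; trail explains why
    amount = payout_agent(claim)
    audit_agent(claim)  # regulatory record of the whole decision
    return amount

claim = Claim("POL-123", "fridge lost power during storm")
print(planner_agent(claim))  # 150
```

Note how every agent appends to `claim.trail`: the audit log the talk emphasizes for regulated industries falls out of the workflow for free when each step records its own reasoning.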
I think that's a good example of one use case for how you can work with agents. But now imagine we run thousands of those use cases and have to keep all of this under control. Therefore, better to plan now for what's coming in the next one to two years. Thank you. So Axel, thank you for sharing how you've built a platform for systematically and securely deploying agents to reinvent insurance. Now, a theme we've heard a lot this week is trust. We need to be able to trust agents to do what we need them to do and to act responsibly on our behalf.
Coinbase's Vision: Building Internet-Native Payments with the x402 Protocol
And as we move further into our future, we need to trust agents to work responsibly with one another. But that trust isn't going to magically manifest itself. We're going to need to create it and to incentivize it, which brings me to our next speaker, Erik Reppel from Coinbase. Erik is going to share with us how Coinbase is taking pragmatic, tangible actions to build the trust that will form the foundation of the agentic future of commerce. So please join me in welcoming Erik to the stage.
Thanks so much, Scott. I'm super excited to be here. I'm Erik, and I lead engineering for Coinbase Developer Platform. CDP is to Coinbase what AWS is to Amazon: we've taken the tools and capabilities Coinbase has built over the years around blockchain infrastructure, custody, and payments, and we're now externalizing those services and making them available to other customers who want to build on top of our infrastructure—which we, of course, build largely on top of AWS.
I'm going to talk a bit about a vision for the future of payments. But where I want to start is with the internet. The internet was really designed for humans. We as humans use the internet all day every day, and we kind of use it through these devices that we have. A lot of the standards that we have to use the internet are really built with this concept of human in the loop, where a human is driving a computer that is then interacting with other computers.
In Web 1.0, we got really robust standards for sharing information. The internet really started in academia, with researchers wanting to share papers, content, and their work with other people around the globe. That was great: we got HTML, we got HTTP, we got the World Wide Web—which we now just call the browser, a much better name in my opinion. Then, in the nineties, we moved on to Web 2.0 and got the explosion of utility we see today. That was powered by standards like JavaScript, AJAX, and JSON, which let you write to the web. The web was no longer read-only; it became read and write.
My point here is that the internet evolves through standards. Standards are conventions we all agree on; there's no controversy about writing your website in HTML. The power of that is that any browser you use works with any website you visit, without any bespoke agreement in between. Every company on the planet writes HTML and uses HTTP, and that openness creates a positive-sum feedback loop where things that weren't possible when every pair of parties had to negotiate one-on-one are now possible. All these formats do very specific things, and they've largely answered questions like: how do I give you information, and what should that information look like?
But we've never actually managed to solve the problem of how do I give you value. There are no open standards tied to the internet for payments. There's a Marc Andreessen quote on this: "I think the original sin was that we couldn't actually build economics, which is to say money, at the core of the internet." And we kind of never resolved this.
The original quest was to make payments native to the internet: not something handled outside of requests, but payments that could accompany the requests as part of the core backbone of the internet. Eventually, we got HTTPS and encryption. We became comfortable sharing credit card or bank details in internet requests, and consumer behavior changed. People became comfortable typing in a credit card and hitting send, sharing their information with a company to process offline. We mostly forgot about this idea of internet-native payments. But you see remnants of it. You see artifacts in the history of engineering that indicate this forgotten past, and one of them is the 402 Payment Required status code.
For those of you in the audience who aren't familiar, HTTP status codes exist to convey a programmatic message of what went wrong, what happened, or what went right when your client made a request to my server. If I send you a 200, everything went perfectly. 200 just means OK. Probably everyone in this room has seen a 404, Not Found. The 402 status code was reserved at the same time 404 and 200 were created. The entire definition of the 402 Payment Required status code is: "reserved for future use." We never actually got around to using the status code. It's used occasionally in payment workflows, but there's no standard that works alongside 402 to give a consistent, open payment experience the way you can visit any website, download its HTML, and view the content.
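You can see this reservation sitting in plain sight: 402 has shipped in every HTTP implementation alongside 200 and 404 for decades. A quick sketch using Python's standard library status-code table:

```python
from http import HTTPStatus

# 402 has been defined alongside 200 and 404 all along,
# carrying its "Payment Required" phrase but no agreed-upon semantics.
for code in (HTTPStatus.OK, HTTPStatus.PAYMENT_REQUIRED, HTTPStatus.NOT_FOUND):
    print(code.value, code.phrase)
# 200 OK
# 402 Payment Required
# 404 Not Found
```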
Each API you need to integrate for payments is slightly different and has different conventions. It's not a single integration that gets you payments everywhere. Payments are the hardest part of the internet, and they're the most important unlock of the last 25 years. Checkout is a very human-centric experience. Everyone in this room has probably filled out a form that looks similar to this. We put in a credit card number, expiration date, security code, and address. The repeat UX is quite good; typically, companies will store the credit card, but the first-time UX usually requires you to go through this step. While these forms may look very similar between different services or websites, technically they're very different. They may have nothing in common between different websites you visit.
Visually, this looks similar, but what the machine sees could be entirely different. This form has 14 fields. We've gotten used to it as humans. It's an okay experience, though being a little spicy on the slide, I'd say it's a bad experience for humans. It's worse for agents. Agents are really probabilistic systems, which means they may not do the same thing every single time. I'm sure all of us have experienced this using chat interfaces or LLMs where you ask the same question, the first time you get a great answer, and the second time the answer is a little different, maybe it doesn't quite hit the same points.
If you have to probabilistically accomplish 14 tasks in a row, the probability of completing the entire task is now dependent on all tasks being complete. If you have an 85 percent probability of completing each field, you're multiplying 0.85 times 0.85 times 0.85, and you can get flaky really quickly. Agents also have to be really good for you to trust them with your credit card. I think all of us in this room would be a little hesitant to give unfettered access to their credit to an AI agent at this point in the cycle.
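The compounding effect on that 14-field form is easy to work out. Assuming (hypothetically) an 85 percent success rate per field, the chance of an agent completing the whole form is:

```python
# Probability that an agent fills all 14 checkout fields correctly,
# assuming a hypothetical 85% success rate per field.
per_field = 0.85
fields = 14
p_success = per_field ** fields
print(f"{p_success:.1%}")  # roughly 10%
```

A 15 percent per-step error rate compounds into roughly a 90 percent failure rate for the end-to-end task, which is why multi-step form filling gets flaky so quickly.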
AI really forces us to reevaluate internet-native payments—payments that adhere to a standard and work everywhere, similarly to how HTML works anywhere the internet works. At Coinbase, we have this broad thesis that stablecoins are a really good use case for AI agents and work extremely well for agentic commerce scenarios. What I'll talk you through today is how we've been developing an open standard for internet-native payments and how it can leverage stablecoins and crypto. It will work with fiat as well over time, but stablecoins have really uniquely enabled some things in agentic commerce that I think are very difficult to replicate.
Stablecoins and Open Standards: Enabling Dynamic Agents and the Future of Agentic Commerce
So what is a stablecoin? A stablecoin, for those of you who may not be familiar, is a blockchain token that is backed 1 to 1 typically by reserve dollars. So the stablecoin Coinbase is known for is USDC. USDC has audited reserves in treasuries and liquid dollars in bank accounts, backing each digital dollar that exists. What that unlocks is because blockchains have gotten really good over the last 5 years, you have 24/7 availability, you can settle transactions in seconds, not T+2 days, but T+2 seconds. And the fees are incredibly low. The blockchain level cost to send any amount of money, whether it's 1 cent or a million dollars, typically on a modern blockchain like Base is about 0.1 of a cent.
This is true of many blockchains. Base, of course, is a great blockchain, but Solana, Aptos, and many others share this incredibly quick, incredibly cheap transaction property. Let me get to the meat of this: an open standard for internet-native payments. This is where I introduce x402. x402 is an extension of the 402 status code; we want to leverage the existing infrastructure of the internet. It's an open standard for internet-native payments founded at Coinbase, and it's soon to be moved into an independent foundation, so the governance of the standard is not done just by Coinbase but by anyone who wants to participate and invest in this idea of an open standard.
But really, x402 is a bridge. Payments exist on one side of the water, the internet on the other. x402 aims to build a bridge so that payments on the internet feel native and we can move value in a similar way to how we move data. x402 exists in what I'd call the agentic standards stack, where you're starting to see things like MCP that standardize how tools are defined and used. x402 handles payment. A2A standardizes how agents communicate with each other, and you're starting to see more experimental standards like EIP-8004 for agentic reputation and discovery. And there'll be more to come. Since I made this slide, ACP has started to emerge, and you're starting to see AP2.
All of these standards, I think, are going to compose the way HTML, CSS, JavaScript, and HTTP composed to make the web really sing, and that composition is what will unlock the agentic age. So how's it going? Well, in the last 30 days, x402 has done over 70 million transactions using real stablecoins and real dollars. The GitHub repo is incredibly popular, with over 4,000 stars and almost 100 contributors. And there's been over $30 million of volume; I believe it's actually about $38 million, so this slide is slightly outdated. Over 250,000 people have bought a good or a service using x402, and commonly that's data, access to information, or API use.
Anytime you have a digital-native good, x402 works incredibly well because it integrates natively into your infrastructure and your web servers, and allows you to include payment directly in the exchange of data that is already happening. So what does this enable? I've given my LLM access to two x402 tools, and I've asked it to do some research for me: how has x402 usage and adoption been going? You'll notice it does a web search to try to understand the x402 protocol, gathering information that's available on the public internet. Then it uses the first tool to search for paid resources that may have information about trends on social media, and it will actually just go and pay 5 cents to access that data. I don't need to go and create an API key or integrate an MCP server; that's an API that exists, and the agent can pay directly for access to the content.
And what you see here is that I can create an experience where this agent has dedicated, siloed funds of about 56 cents, and I can create rules about what that agent can do and how it can spend. Because the funds are isolated, there's no way for the agent to spend more than what I've given it. I can say, hey, you're only allowed to spend 25 cents per session or $1 per day, and it becomes trivially easy to codify these kinds of rules into the experiences that you build, because these dollars are digitally native. It really is money that is truly digital, versus systems you need to interact with via APIs where it's not money moving from account to account. The great thing is that because this is an open standard, anyone in this room with a good engineering team can build exactly this product that we have, and any service that accepts x402 payments can be paid for using this client with this MCP.
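The siloed-funds idea can be sketched in a few lines. This is a hypothetical illustration, not Coinbase's actual wallet API; the `AgentWallet` class and its caps are invented names for the rules described above:

```python
# Hypothetical sketch of siloed agent funds: the agent can only spend
# from a dedicated balance, under per-session and per-day caps.
class BudgetExceeded(Exception):
    pass

class AgentWallet:
    def __init__(self, balance_usd, session_cap, daily_cap):
        self.balance = balance_usd
        self.session_cap = session_cap
        self.daily_cap = daily_cap
        self.session_spent = 0.0
        self.day_spent = 0.0

    def spend(self, amount):
        # The agent cannot exceed its isolated funds or either cap.
        if amount > self.balance:
            raise BudgetExceeded("insufficient siloed funds")
        if self.session_spent + amount > self.session_cap:
            raise BudgetExceeded("session cap reached")
        if self.day_spent + amount > self.daily_cap:
            raise BudgetExceeded("daily cap reached")
        self.balance -= amount
        self.session_spent += amount
        self.day_spent += amount

# 56 cents of siloed funds, 25 cents per session, $1 per day.
wallet = AgentWallet(balance_usd=0.56, session_cap=0.25, daily_cap=1.00)
wallet.spend(0.05)  # pay 5 cents for a resource
```

The point is that the constraint lives in the money itself rather than in trust in the model: even a misbehaving agent cannot spend past the funds it holds.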
This creates what I call dynamic agents. Right now if you have an agent, you kind of need to integrate one tool at a time, and your agent operates in a box that you define for it, and the capabilities that it has are really set by you. If it doesn't have the capability to accomplish something or to use a tool that may be useful for the task it's trying to perform, it's just not going to be able to do that. But if you open the aperture and you have an open standard where anything that accepts these payments can happen without a preexisting negotiation, your agent can find anything available that accepts these kinds of payments and just pay for the services that it uses.
Because stablecoins are 0.1 cent to transfer, paying 1 cent or 5 cents or $1 becomes truly possible. You might have noticed on the slide a couple of slides back, there have been about 70 million transactions and about $38 million in volume. That's less than $1 on average per transaction, which is actually quite the feat. This works for payments from a cent to a million dollars. The fees are the same. The costs are the same. There are no percentage-based fees. In fact, x402 doesn't have any fees in it at all as a standard. There might be costs associated with the underlying payment network's transfer, but there are zero fees at the standard level of x402.
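The flat-versus-percentage distinction is what makes micropayments viable. A quick comparison, using 2.9 percent plus 30 cents purely as an illustrative card-rail fee (the exact card figures vary by provider and are my assumption, not from the talk):

```python
# Flat ~0.1-cent network cost vs an illustrative percentage card fee.
def flat_fee(amount_usd):
    return 0.001  # ~0.1 cent regardless of payment size

def card_fee(amount_usd):
    return amount_usd * 0.029 + 0.30  # illustrative card-rail pricing

for amount in (0.01, 1.00, 100.00):
    print(f"${amount:>7.2f}: flat ${flat_fee(amount):.3f}, "
          f"card ${card_fee(amount):.3f}")
```

At a 1-cent payment, the illustrative card fee is roughly 30 times the payment itself, while the flat network cost stays at a tenth of a cent whether you send a cent or a million dollars.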
So why now? AI is the obvious answer, but I think there are some more subtle ones. I think stablecoins have product-market fit and are here to stay. We're getting more and more clarity on the regulatory and compliance side of stablecoins, and more and more countries are investing and looking into what stablecoins will look like and how their benefits can serve their constituents. The UX for blockchains has gotten really, really good, and it gets better every single day. I think you can now create experiences that leverage blockchain technologies where, as a user, you don't even realize crypto is involved. But you get all the benefits: 24/7 availability, low cost, instant settlement.
Now, as I alluded to at the beginning, AI agents are really going to force non-human-in-the-loop payments to exist, where the human is not always driving the computer; the agent is. AI breaks the fundamental economic arrangement of the internet in many ways. If you think about the internet, roughly, the consumer web monetizes in this pattern: as a user, you'll see content and ads served to you by a website. In the background, a merchant has done an ad placement and paid that website to show you that ad. And then at some probabilistic rate, you will convert: you'll see the ad and you'll buy a product from the merchant.
Except the issue with AI is that you end up not seeing the ad. When an LLM scrapes a website, it largely focuses on the content, not the advertisements. If you're not seeing the ad, how does the payment occur? I think this is going to be a big problem, and we've seen a lot of it with big labs scraping data and publishers not being able to capture any of the value they're creating. What x402 does is let you directly pay for the content within the same request. It's a much more internet-native way of moving value. Your AI can go to a website, and the website can notice it's an AI and say, "I want 25 cents for you to access this content. This is a really valuable blog about the best restaurants in Vegas to eat at. If you're going to consume this, I want 25 cents." That may be a fair exchange, right?
So how does this work? x402 exists in a flow between an agent, a server, and what we call a facilitator. When you're using stablecoins or blockchains, the facilitator exists to abstract away the complexities of building in crypto and make it feel like just building on the web. You can integrate x402 into most languages and frameworks in as little as one line of code.
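The shape of that round trip can be sketched without any network code. This is a simplified illustration of the pattern only: the field names loosely follow the published x402 spec, the merchant address is a placeholder, and real facilitator verification and settlement are stubbed out.

```python
# Simplified sketch of the 402 round trip: the server advertises what
# it accepts; the client retries the same request with payment attached.
def server(request):
    if "payment" not in request:
        # No payment attached: respond 402 with payment requirements.
        return {"status": 402,
                "accepts": [{"scheme": "exact",
                             "network": "base",
                             "maxAmountRequired": "50000",  # 0.05 USDC (6 decimals)
                             "payTo": "0xMerchantPlaceholder"}]}
    # A real server would ask the facilitator to verify and settle here.
    return {"status": 200, "body": "the paid content"}

# 1. First request comes back 402 with the server's requirements.
first = server({"path": "/report"})
offer = first["accepts"][0]

# 2. Client retries with a payment matching those requirements.
retry = server({"path": "/report",
                "payment": {"scheme": offer["scheme"],
                            "network": offer["network"],
                            "amount": offer["maxAmountRequired"]}})
print(retry["status"], retry["body"])
```

The design choice worth noting is that payment rides along with the same HTTP exchange that delivers the data, so no out-of-band account setup or API key negotiation is needed.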
Given that flow, you can leverage Amazon Bedrock or other tools that speak these open standards to build agentic discovery and agentic payment. AWS provides great primitives via Bedrock, Lambda, and EC2, and the full stack both to host servers that monetize via x402 and to build agents that can pay with x402. If you're using stablecoins, you likely need a wallet. CDP offers best-in-class wallets that are very well optimized to work with agents and to perform operations with stablecoins. The demo video that I showed you is built on top of our wallet stack. We like to eat our own dog food, and our wallet stack is built on top of AWS Nitro and the scaling and KMS systems that Amazon offers.
So what does it look like to actually bring these things together? If you're using Bedrock, you can use Coinbase's complementary agent construction kit, AgentKit, which gives your agent a wallet. You can connect the wallet and tell Bedrock about the tools for discovery and payment that I mentioned in that demo. Then, in maybe ten lines of code, you have an agent that can buy anything on the internet that supports x402. One-time integration, forever the benefit of open access.
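Conceptually, the wiring is just two tools handed to the agent. The sketch below is hypothetical: the function names (`discover_services`, `pay_and_fetch`), the endpoint URL, and the stubbed returns are illustrative stand-ins, not the real AgentKit or Bedrock API.

```python
# Hypothetical wiring: the agent needs only two generic tools --
# discover x402-enabled services, and pay for one within a cap --
# to buy anything that speaks the standard.
def discover_services(query):
    """Search an index of x402-enabled endpoints (stubbed)."""
    return [{"url": "https://example.com/trends", "price_usd": 0.05}]

def pay_and_fetch(url, max_usd):
    """Retry the request with payment attached, within a cap (stubbed)."""
    price = 0.05
    assert price <= max_usd, "offer exceeds spending cap"
    return {"url": url, "paid_usd": price, "body": "..."}

# The tool registry handed to the agent runtime.
tools = {"discover_services": discover_services,
         "pay_and_fetch": pay_and_fetch}

results = tools["discover_services"]("social media trends")
data = tools["pay_and_fetch"](results[0]["url"], max_usd=0.25)
```

Because the tools are generic over the standard rather than tied to one vendor's API, adding a new paid service requires no new integration on the agent side.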
So what's the upside here? Well, we unlock the monetization of content, we unlock pay-per-use APIs, we unlock micropayments, and we have money now that is natively digital and exists as a first-class citizen of the internet. Really, I think the value is we get more useful AI. If your AI has more capabilities, it can do more things for you. If we don't have these open standards for transferring value, we're going to end up with a walled garden where each platform, each vendor has a competing standard that isn't interoperable. If you're using an agent from one company, it can't speak to an agent from another company. If you're using an agent that can only pay with one company's standard, it can't transfer value to another company's standard, and you end up with fragmentation of the open internet.
So my belief is that in order to keep the internet alive and to keep much of the web free, we need to create these open standards that make that actually possible in the agentic age. So what's next? Well, for Coinbase, our mission is to increase economic freedom in the world. We think if money can move globally 24/7 with no access barriers and no limit to participation, you'll get a better global financial system. If you want to learn more, you can go to x402.org. You can check out the open source repo and all the code for x402. The standards are fully open source and licensed under Apache 2.0. If you want to learn more about my day job and Coinbase Developer Platform, go to cdp.coinbase.com. Thank you so much.
Thank you, Eric, for sharing with us what you're doing at Coinbase. There's a critical point that both Eric and Axel made in their presentations today that I want to reiterate as we close. To realize their potential, new technologies require new ways of thinking. We can't build tomorrow's systems and applications on yesterday's technology, nor can we build them on yesterday's dogma. We have the opportunity to open our minds to the fact that we are going to build and consume financial services in a dramatically different way. If we're willing to lean into change, our financial lives will feature more choice, greater access to capital and opportunity, and better protection from risks to our businesses, our families, and our prosperity as a society. Thank you to Axel and Eric for sharing their perspectives today, and thank you all for joining us on this passage to the next frontier in financial services. Have a great re:Invent.
; This article is entirely auto-generated using Amazon Bedrock.