🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Kiro and Amazon Bedrock: Unlock AI Agents for Your Legacy Apps (MAM403)
In this video, Ryan Peterson, AWS's worldwide tech leader for modernization, explores the evolution from traditional application modernization to agentic AI systems. He traces AI development from 2017's Transformers through tool use and the Model Context Protocol, predicting billions of agents by 2028. The session demonstrates Amazon Bedrock AgentCore and AgentCore Gateway, which bridges AI agents to legacy applications using the MCP standard. Peterson showcases how the open-source Strands Agents SDK and Kiro IDE enable rapid agent development through spec-based approaches. A live demo illustrates integrating an unmodified Swagger Petstore application with an AI agent, showing how agents can autonomously query APIs, reason about results, and combine general knowledge with real-time data—all without changing existing code.
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
From Lambda to Agentic AI: A Decade of Application Modernization
Just before this session, I was actually out in the hallway chatting with some folks and found out that this was their very first re:Invent. It's so exciting to hear, and I love seeing people come here for the first time—just the grandeur of it, the excitement of it all. It reminded me of the first time that I attended re:Invent, which was actually back in 2014. I was sitting in the keynote when Andy Jassy took the stage and announced AWS Lambda, which was quickly followed by Amazon ECS. It was an incredible year of new releases and ushered in a new dawn of application modernization.
For the next decade, I spent my time working with customers, talking about cloud-native architecture and application modernization. My name is Ryan Peterson, and I'm the worldwide tech leader for modernization here at AWS. If the last decade of application modernization was defined by concepts like serverless, containers, cloud native, microservices, and distributed architecture, we believe the next ten years are going to be about agentic AI.
I'm going to walk through a bit of a primer on AI agents for the first half of this session. The second half is going to be spent doing a demo—a code walkthrough, actually seeing all of this in action. I wanted to start by giving a little bit of history about how we got here, because AI is a very broad topic and you're going to hear about it a lot this week. I wanted to define what makes now—what makes 2025—the turning point where agentic AI systems are going to launch within just about every organization we encounter.
The Evolution of AI: From Transformers to Autonomous Agents
We're all familiar with the idea of generative AI and conversational agents answering questions, but how did this come to be? It starts back in 2017 with Transformers, which enabled text to be processed in a fundamentally different way. Traditionally, using things like bag-of-words and n-grams, you would have to process text sequentially, and this placed hard limits on how big we could build our models. Transformers changed this by allowing parallel execution, giving us far greater scale and the ability to process large corpora of text much more efficiently.
This was followed by the discovery of scaling laws. As these models were built and we increased compute, parameters, and data, we began to see predictable results—an increase in efficacy and accuracy of these models as we scaled up these three aspects. What made this really fundamental in the path to agentic AI is that it gave license to organizations to invest heavily in these models and really increase their scale based on this predictable path of increased performance.
Then we got into few-shot, reasoning-like behavior, and what's important to realize here is that we're traveling from simple next-token prediction toward actual reasoning. Reasoning came about not because it was specifically programmed or designed into these models; it was emergent from bigger models, more parameters, more training data, and more compute capacity. Next, we got into chain-of-thought prompting. If you remember early models, they were very bad at things like math, but if you gave the LLM an example that dissected a word problem into its component parts and walked through how to solve it, it got very good at it. This wasn't additional training or fine-tuning; it was just added as part of the prompt.
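To make that concrete, here is a small illustrative sketch (mine, not from the talk) of what a chain-of-thought prompt looks like: a single worked example is embedded in the prompt text, and the model imitates the step-by-step pattern on the new problem. The problems and numbers below are made up.

```python
# Illustrative chain-of-thought prompt: one worked example shows the reasoning
# pattern, and the model is asked to follow it for a new problem.
COT_PROMPT = """\
Q: A store sells pens in packs of 12. Maria buys 3 packs and gives away 7 pens.
How many pens does she have left?
A: Let's think step by step. 3 packs x 12 pens = 36 pens. 36 - 7 = 29. The answer is 29.

Q: A train travels 60 km per hour for 2.5 hours. How far does it travel?
A: Let's think step by step."""

# Send COT_PROMPT as the user message to any LLM; no fine-tuning is involved.
print(COT_PROMPT)
```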
Then came tool use, and this is where agents began. The ability to actually interface with the outside world fundamentally changed the role of agents and how they can interact with disparate systems, get access to real-time data, and then act out in the real world. That gave rise to the ReAct framework: how can an agent reason, act, interpret, and then react? Finally, almost a year ago to the day—I think it was the end of November last year—Anthropic released the Model Context Protocol standard, and this enabled a major step forward in agent development because we now had a standard that we could all build towards.
Going from left to right across this timeline, we really increased the autonomy and business impact that AI agents can actually deliver. At AWS we see billions of agents existing in the next few years, fundamentally changing how we build, deploy, and interact with systems across the board.
The analysts agree and see that by 2028, 33% of enterprise software will include agentic AI, and that's up from less than 1% that was observed last year. So we really are at the beginning of the hockey stick of agentic AI adoption. This next statistic I think is even more telling: 15% of our day-to-day work decisions will be made autonomously through AI agents.
Understanding AI Agents and the Prototype-to-Production Chasm
So what are AI agents exactly? We throw the term around a lot, so here's our definition: they're autonomous or semi-autonomous, and they can reason, plan, and act within both digital and physical environments. Physical becomes interesting. A colleague of mine took an off-the-shelf LLM along with off-the-shelf humanoid robotics, and we were actually able to make it work and interact with its environment. That traditionally took very specialized models and training, but LLMs are able to reason, adapt, and respond in a physical environment and actually navigate and operate robotics.
We're going to go a little deeper now and talk about the fundamental components that work in harmony together to actually deliver the functionality of an AI agent. So at the center of this all is really the brain of the agent: the LLM. This is what you hear a lot about. This is where the reasoning actually takes place. The goals and objectives and instructions—you can think of this as the mission statement of the agent. This is what defines what its overarching goals are and guardrails and instructions.
Next is tools. These, as I mentioned, provide access to immediate, real-time data and the ability to interface with other applications, including existing legacy applications. That's really why we're here today, and we're going to spend more time on tools in just a bit. Next is context. This is both short-term and long-term in a conversational interface, which is fundamentally different from the user experience that we're all used to. You need to have context of what transpired before, not just in that conversation, but perhaps ten conversations back when you discovered insights into this particular user.
This is where it really changes and where agentic becomes so exciting: it's the ability of agents to take actions on their environments, observe the responses and results, make adjustments, develop new hypotheses, and then create and deploy new actions thereafter—an actual loop. If you've done any kind of playing around with agentic systems—the one that I love to use is Q CLI, for example—you can see this in action: it develops a hypothesis, might even create and deploy an on-the-fly application that executes against data, figures out that wasn't quite right, makes an adjustment, and keeps going until it gets it right.
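To make the loop concrete, here is a framework-agnostic sketch (my illustration, not any particular SDK): the model decides whether to call a tool or answer, the tool result is fed back as an observation, and the cycle repeats. `call_llm` and the tool registry are stand-ins for a real model and real tools.

```python
# A framework-agnostic sketch of the reason-act-observe loop described above.
# `call_llm` and TOOLS are placeholders, not any real SDK's API.
from typing import Callable

def call_llm(prompt: str) -> dict:
    """Placeholder for a model call that returns either a tool request or a final answer."""
    return {"action": "final", "answer": "stub response for: " + prompt}

TOOLS: dict[str, Callable[[str], str]] = {
    "search_inventory": lambda query: f"(pretend results for '{query}')",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    context = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = call_llm("\n".join(context))        # reason
        if decision["action"] == "final":
            return decision["answer"]                  # done
        tool = TOOLS[decision["action"]]
        observation = tool(decision.get("input", ""))  # act
        context.append(f"Observation: {observation}")  # observe, then loop again
    return "Stopped after max_steps without a final answer."

print(run_agent("Find pets that are good with kids"))
```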
This is what makes agentic AI the future for many of our organizations: the ability to work autonomously and to make decisions in a non-deterministic way. Now I need to have a bit of a reality check here, because over the past year we've worked with a lot of customers who have attempted implementations of not just agentic AI but AI in general, and the statistic here is one that I've seen proven out in the field: 40% of agentic AI projects will be canceled by the end of 2025. The reality is that this is hard stuff. It's hard because the fundamental infrastructure and architecture is vastly different from what we are used to, and a lot of it just hasn't existed to date.
These are new standards. These are new capabilities, and they've been incredibly difficult to produce. So we call this the prototype-to-production chasm. You start off with this idea of a proof of concept—and I'll be honest, 90% of the time it's some kind of chatbot, right? I urge you all to think beyond the chatbot; there are a lot of great implementations for agentic AI. But you have this idea, and there's so much excitement and potential, and you get out there and start to develop your agent.
Then you try to deploy to production and you realize that there are huge performance issues with your implementation. There are problems with scalability because it behaves differently from regular applications. Security is a concern, and suddenly you're opening up new attack vectors that you weren't even aware of. Governance becomes an issue when you're violating particular regulations or policies within your organization. And we fail to get any meaningful business value, because needing to build everything from scratch is incredibly challenging.
AWS's Vision for AI Agents: Introducing Amazon Bedrock AgentCore
That's why at AWS we really want to be the best place to build the world's most useful AI agents. We want to empower you to deploy these agents reliably and at scale. We're going to do this with four fundamental pillars. The first is building state-of-the-art science, both in our own first-party models such as Nova and frameworks such as Nova Act, and with partners, giving you model choice from companies such as Anthropic. Next, we want to provide best-in-class infrastructure for running these agents. This isn't just repurposed hardware; it's purpose-built to run agents for you in the best, most cost-optimal way.
We want to deliver the best specialized agents, because not every agent needs to be built from scratch. There are a lot of agents that can be reused from organization to organization, and we want to make sure we're providing those to you in the easiest way to implement as possible. Finally, we want every experience to be as intuitive and easy to use as possible. AI should reduce the friction of adopting new technology, not increase it.
One of the key service features that got me as excited as that day in 2014 when Lambda and ECS were announced was this past summer when I was sitting at the New York summit and the keynote was being delivered and Amazon Bedrock AgentCore was announced. If you've been in application development for many years, such as myself, many decades in fact, you start to see certain patterns emerge. Just like Lambda and ECS provided compute platforms for modern cloud-native architectures, you can see similarities and patterns in what Amazon Bedrock AgentCore is delivering. It immediately became clear to me that this will be the future of application modernization.
Bedrock AgentCore, however, exists within a much fuller and more complete AWS AI stack. I talked about those purpose-built agents at the top, and these are things like AWS Transform, Amazon Connect, Amazon Q, our first-party models such as Nova, and our partner models. AgentCore sits there in the middle with Amazon Bedrock, where we give you model choice, allowing you to pick the right model for the specific application you're designing. It goes all the way down to the compute and ML science layers we've had for many years, if you want to go build your own models or run self-managed inference on our custom silicon such as Trainium or Inferentia, for example.
So AgentCore exists as part of this broader AWS AI stack. We want to give you choice and we want to give you the primitives so that you can build in a way that best fits your organization and your business needs. Go fully managed with something like AgentCore, or go and control inference yourself and deploy on EKS. Let's walk through AgentCore just a bit because we're going to spend a demo going over some of the detail. First up is the AgentCore runtime, and this provides the secure serverless environment for complete session isolation, supporting multi-modal workloads and long-running agents.
Next up is AgentCore Gateway, and this will be the service that we're going to demo here in just a bit. This is going to provide a connection from the agent into the tools that are made available through existing legacy applications. Next is AgentCore Browser, which provides you that browser runtime instance so that you can perform those browser-based events at scale. And then AgentCore Code Interpreter.
AgentCore Code Interpreter allows your agents to run code in an isolated, protected environment. This is all supported from a security perspective by AgentCore Identity, and AgentCore Memory helps you manage the context I spoke of earlier, both short-term and long-term. AgentCore Observability gives you insights, logging, and performance metrics across your entire AI agent framework.
AgentCore Gateway: Bridging Legacy Applications with AI Agents Through MCP
Let's now dive deep into AgentCore Gateway, which is the tool that bridges agents to your legacy applications. When we work with customers who are starting to deploy agents, they first see a lot of value in working with the general knowledge of the LLM and doing things like text summarization or categorization. There is value there, but what one quickly realizes is that agents, to be effective, need access to existing enterprise APIs, existing databases, and existing knowledge bases—and agents are developed with different agent runtimes and different frameworks.
We were doing this internally at Amazon, developing tens of thousands of agents, and it took months to set up each individual agent to interface with all of the other tools and resources we needed. We needed to do something different to connect all of these. AgentCore Gateway is the answer. It simplifies tool development and integration by giving you the ability to transform your existing APIs or Lambda functions into accessible tools based on the MCP standard. We take security as the highest priority here at AWS, and that is why AgentCore Gateway provides not just security and authorization for inbound transactions and access to the gateway itself, but also outbound security to your existing APIs and agents, supporting your existing authentication and authorization frameworks.
Finally, there is a problem that is fairly new as we actually roll out and deploy agents and tools. The number of tools that you want to give your agents access to begins to grow exponentially, and this causes extreme latency and unpredictable results in that nondeterministic environment. AgentCore Gateway integrates semantic search across your tools, supported through the SDK, and we'll go into an example of exactly how that works.
Here's what the overall architecture looks like in its simplest form. You have agents that are running and implementing the MCP protocol; they are the MCP clients. AgentCore Gateway presents an MCP endpoint that your agent points to. Again, this is all based on the MCP standard. AgentCore Gateway is then configured to interface with all of your API tools and resources, and those APIs and tools can be REST endpoints, which has been a very common architectural pattern for deploying APIs for years, or they can be AWS Lambda functions. We have a lot of customers who have invested heavily in serverless architectures and deployed function-as-a-service with AWS Lambda. All that investment can be reused and made accessible to AI agents.
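As a hedged illustration of the Lambda side, a plain handler like the sketch below is the kind of existing function that could be registered as a gateway target. The exact event and context shape that AgentCore Gateway passes to Lambda targets is an assumption here, so treat the payload handling as a placeholder and check the service documentation.

```python
# Hypothetical existing Lambda function reused as an AgentCore Gateway target.
# Assumption: the gateway passes the tool's input arguments in the event payload;
# the real event/context shape is defined by the service, not by this sketch.
import json

PETS = [
    {"id": 1, "name": "Buddy", "status": "available", "tags": ["friendly"]},
    {"id": 2, "name": "Max", "status": "pending", "tags": ["calm"]},
]

def lambda_handler(event, context):
    status = event.get("status", "available")
    matches = [p for p in PETS if p["status"] == status]
    return {"statusCode": 200, "body": json.dumps(matches)}
```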
Let's take a look at how it works in terms of setting up this new AgentCore Gateway. We're exposing MCP tools for an existing REST service. We have a REST service out there, running and described by the OpenAPI standard. We have an existing identity provider, and that could be Okta, Microsoft Entra, or Amazon Cognito. We'll go into the console and set up the gateway: give it a name and a description, and set up the inbound configuration using the application's existing identity provider. We'll then set up a specific target for that REST API, again using the existing identity provider and the existing authentication and authorization mechanisms of that API. We're not creating an end run around security; we're integrating with it so that security is built in. Finally, we'll use this from the agent, where we can list, invoke, or search tools with the search functionality provided by AgentCore Gateway.
Let's go deep into the security. When the client invokes a request to the MCP endpoint for AgentCore Gateway, it needs to generate an OAuth token that it sends along. AgentCore Gateway then consults the identity provider and determines whether the caller has inbound access to that gateway. It then checks which resources the caller has access to on the outbound side—the existing APIs and tools, whether Lambda functions or REST API endpoints. It integrates with the existing authentication and authorization using API keys, IAM, or OAuth tokens to access those existing API endpoints. All of this is supported from an observability standpoint using CloudTrail and AgentCore Observability, and of course there is integration with AgentCore Identity.
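A minimal sketch of that inbound step, assuming a standard OAuth 2.0 client-credentials flow against Amazon Cognito (or any compatible identity provider). The environment variable names mirror the ones used later in the demo but are otherwise placeholders; the returned bearer token is what the MCP client attaches to every request to the gateway endpoint.

```python
# Minimal client-credentials token fetch (standard OAuth 2.0), assuming the token
# URL, client ID, and client secret come from your identity provider.
import os
import requests

def fetch_access_token() -> str:
    resp = requests.post(
        os.environ["MCP_TOKEN_URL"],                 # e.g. the Cognito /oauth2/token endpoint
        data={"grant_type": "client_credentials"},
        auth=(os.environ["MCP_CLIENT_ID"], os.environ["MCP_CLIENT_SECRET"]),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]               # sent as "Authorization: Bearer <token>"
```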
Let's take a look at semantic search. I mentioned that the number of tools can explode. It's not uncommon to see customers that begin developing these agentic systems end up providing access to tens, hundreds, if not thousands of different tools. The time an LLM spends evaluating these tools to determine which ones to use grows with each tool you provide. This is a big problem for conversational AI, because users expect responses and answers to come quickly.
We had a period where the newness of conversational AI was so amazing that if it took a few seconds to think through a response, we were okay with that. That's not going to last much longer. People are going to want immediate responses. It won't be long before it parallels what we've seen in web environments, where users lose interest within about 500 milliseconds. Semantic search helps with this problem.
What semantic search does is take the context of the request and the context the LLM is operating in, and use that to execute a semantic search over the tools that are available within the gateway. Instead of getting hundreds of tools, you get a small subset of tools that are actually applicable in the context you're working in. This improves accuracy, speed, and cost, letting the agent focus only on tools that are relevant for the given task.
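AgentCore Gateway performs this ranking for you on the server side. Purely to illustrate the idea of narrowing a large tool catalog down to a relevant few, here is a toy local sketch that scores tool descriptions against the request with simple word overlap; the real service uses semantic, embedding-based search, and the tool names below are just examples.

```python
# Illustration only: the gateway does this ranking server-side with semantic search.
# A crude word-overlap score stands in for embeddings to show the filtering idea.
def score(query: str, description: str) -> float:
    q, d = set(query.lower().split()), set(description.lower().split())
    return len(q & d) / (len(q) or 1)

TOOL_DESCRIPTIONS = {
    "findPetsByStatus": "Finds pets in the store filtered by adoption status",
    "findPetsByTags": "Finds pets matching behavioral or descriptive tags",
    "placeOrder": "Places an order for a pet in the store",
    "getInventory": "Returns pet inventory counts grouped by status",
}

def top_tools(query: str, k: int = 2) -> list[str]:
    ranked = sorted(TOOL_DESCRIPTIONS,
                    key=lambda name: score(query, TOOL_DESCRIPTIONS[name]),
                    reverse=True)
    return ranked[:k]

print(top_tools("find pets with friendly tags that are good with kids"))
```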
All of the indexing, search, and tokenization of the details of each and every tool is handled for you automatically by AgentCore Gateway. All of the search infrastructure is provided serverlessly, so there are no additional servers or infrastructure to manage to provide that semantic search for your tools. This was great for us, because we now have AgentCore Gateway providing a purpose-built framework and infrastructure on which we can deploy all of our agents.
Developer Tools for Rapid Agent Development: Strands SDK and Kiro
We still struggled though. We struggled from the developer experience perspective. We were constantly repeating the same undifferentiated heavy lifting within code to get our agents working across our organization. We started to build some tools to abstract that complexity and remove that friction for our developers who were building these agents because we needed to move quickly. Last year we released all of this to open source as Strands Agents SDK.
This creates a framework that makes it incredibly easy to build agents. What used to take weeks or months can now be done in a few lines of code in minutes. This is open source and open use, so we wanted to make sure that we're supporting all of the different AI agent runtimes, giving you choice of which models, as well as tight integration with Agent Core and Amazon Bedrock. This provided us with a lot of freedom to abstract away the complexity of agent communications from the developer so they can build these agents quickly.
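For reference, the quick start really is only a few lines. This sketch follows the public Strands Agents README (package `strands-agents`); check the current docs for exact defaults, such as which Bedrock model and credentials are used out of the box.

```python
# A few lines, adapted from the Strands Agents quick start (pip install strands-agents).
# By default the SDK picks up Amazon Bedrock credentials from your environment.
from strands import Agent

agent = Agent()  # model selection, streaming, and auth handled by the SDK defaults
result = agent("Suggest three dog breeds that are good with kids.")
print(result)
```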
As we're retrofitting these applications and making them use agents, we still wanted to really leverage AI for the actual coding aspect of it. We launched Q Developer followed by Q CLI. But last year we released Kiro.
Kiro is the AWS IDE for bringing prototypes to production. Introduced in Kiro was the concept of spec-based development. You start with specifications that can be generated in a conversational format. The agent helps develop those specifications and asks questions about them, building them into something that's human-readable so you can review and validate that the specifications match what you want to produce in your application. You then go through a design phase and ultimately develop tasks to automate and execute.
Kiro is now in general availability. If you haven't already used it, I strongly urge you to download and give it a try. I'm introducing it here and we're going to use it in the demo, but we have many great sessions that do a deep dive into all the features and functionality of Kiro, so I strongly encourage you to check those out.
That concludes the presentation portion. Now I'm going to launch right into the demo. I want to demonstrate how you would integrate agents into legacy applications. When I started thinking about the demo, I wanted to take something fairly well known—something that is ready to go, open source, and available for you to download. If you've been in application development and have developed REST API endpoints, you're probably very familiar with OpenAPI, formerly known as Swagger.
The OpenAPI project maintains an application for demonstrating REST APIs and showing how they all work in a pristine way, and that is the Swagger Petstore. In this demo, I downloaded the Swagger Petstore container and deployed it to ECS on Fargate. I made no changes to this application—I just downloaded it from the GitHub repo and launched it in a container, and it's up and running, responding to API calls, passing all the tests, with absolutely no code changes whatsoever.
This is showing an OpenAPI specification. If you're not familiar with OpenAPI, we're basically defining each of the endpoints: we give each a description, the operation, and the parameters. One thing I want to really emphasize here is that the way AgentCore Gateway operates with the OpenAPI spec and makes itself available to agents is highly dependent on your specification being very well documented. It's going to use that specification to define the functionality and tasks of the actual agent.
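As an illustration of what "well documented" means here, this is a trimmed, Petstore-style operation entry (shown as a Python dict rather than raw JSON). Fields like operationId, summary, description, and the parameter descriptions are what the gateway turns into the tool names and metadata the agent reasons over; sparse docs produce sparse tools.

```python
# Trimmed, illustrative slice of a Petstore-style OpenAPI operation, as a Python dict.
# The operationId, summary, description, and parameter docs become tool metadata.
petstore_path_fragment = {
    "/pet/findByStatus": {
        "get": {
            "operationId": "findPetsByStatus",
            "summary": "Finds pets by status",
            "description": "Returns all pets matching the given adoption status, "
                           "e.g. available, pending, or sold.",
            "parameters": [{
                "name": "status",
                "in": "query",
                "description": "Adoption status to filter by",
                "schema": {"type": "string", "enum": ["available", "pending", "sold"]},
            }],
        }
    }
}
```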
Live Demo: Transforming the Swagger Petstore into an AI-Powered Application
All right, let's start the demo. First up, we're going to take a look at the settings for Kiro. The agent settings have this concept of steering files, and steering files define how Kiro operates. I'm going to set this up with the Strands Agents SDK README as the steering file. I'm going to make a .kiro/steering directory—that's where the steering files for Kiro live. Now I'm going to go to the Strands Agents website and click through to the GitHub repo. The repo has a README file for the Strands SDK; this is where it defines the quick start, installation, and an example implementation. Let's grab the raw URL here, and we're going to drop this into the Kiro steering directory.
Now we have a steering file for Kiro that's going to explain the Strands SDK. This is right from the repo—I didn't change anything. Let's go ahead and open it up.
Under the Kiro directory, there we go—the README file is all there, in kind of an HTML format. You can see it now in the Kiro settings. There's the agent steering, and we have this Refine button. The Refine button is what re-outputs the README in a format that the AI agent actually understands. You can see it analyzing the README file and converting it into a steering document that's a better guide for the AI assistant. You can see all of it changed: it dropped all the HTML and produced a markdown README, and it gives a description of exactly which changes were made. It's structured around core concepts and made easier to read. You can see the differences from what I had, and all I needed to do was click that Refine button. Now if I open it up, I can go into a preview so we can see it in a prettier way. This is a format the AI agent can now fully understand. This is a Kiro steering file, and it's going to help in all of my subsequent interactions with Kiro.
So now let's go ahead and start with the vibe approach. I talked a lot about spec, but for this purpose we're going to vibe-code it. I'm just going to say, "Let's create a minimal AI agent in Python that is terminal based," and I'm going to specify that we want to use the Strands Agents SDK. That's it. You can see that we're including the steering file, which is the README document that I downloaded and refined so that the agent within Kiro can understand it. It references that, creates the requirements for the application, creates the actual Python file, and creates a README for this application. This is already better than many projects I've worked on—documentation actually exists while the code is being written. Once it's done, it gives a full description of everything that has been created: project structure, features, and all of that.

Let's take a look at the agent that was developed and the actual code, which is all in agent.py. What you can see here is that with a single prompt, and following the steering file, it created a fully functioning AI agent. Look at the code that got created—it's incredibly minimal. You have a bunch of comments, you have two lines of import statements, and by line 23 the agent is already well defined. With just a few lines of code, the Strands SDK abstracts a lot of the complexity of handling the streaming, doing the IAM authentication, and connecting up to the Bedrock SDK. All I had to do was say, "Hey, I want a Bedrock model," and Nova Pro is specified here—actually, Kiro picked that, and I'll go with it for now. Looking beyond that, the rest of the code is really just terminal management—handling the inputs, outputs, and the prompt—because I said I wanted this to be terminal based. You could do something like a web UI if you want, but for this purpose I'm just going to do this in the console. Opening up the README, it not only says what the application does but also how to set it up, and the requirements file lists all the libraries that need to be installed—standard Python.

So let's go ahead and give this a try and actually run this agent. I could follow the instructions it gave me, but I'm actually going to use Kiro here and say, "Hey, run this agent." This is a new session, so Kiro looks at the README of the application: how do I run this thing? What's the code? Let me validate it. Let me check the requirements. And already it ran into its first problem, right?
The libraries aren't actually installed yet, so it saw the error, interpreted the error, and said, "I need to install these libraries." So it goes in and installs the libraries, and then it finds another error. Everything here runs in a virtual environment, so you have to set up the virtual environment first. Then it wants to execute the installation in the virtual environment, but since it hasn't run this command before, it asks for permission. I give it permission, and Kiro executes the library installation for our Python application. There we go—all the requirements are installed now. It looks at everything and says, "It looks like I'm ready to go; I want to run this thing," but again it asks me to validate because this is new code to run, and I do. Here we see the Strands agent initializing and everything running, but Kiro is still waiting because I'm sitting in a terminal wait, so let me exit. Now Kiro sees everything is running successfully and there are no errors—congratulations, it looks like everything's good to go.
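The generated file isn't reproduced in this article, so here is a plausible reconstruction of what an agent.py like the one described above might contain—not the exact code from the demo. Class and parameter names follow the public Strands Agents documentation, and the Nova Pro model ID may differ by region or inference profile.

```python
# Plausible reconstruction of the generated terminal agent, not the demo's exact file.
# Model ID and default streaming behavior should be checked against the Strands docs.
from strands import Agent
from strands.models import BedrockModel

model = BedrockModel(model_id="amazon.nova-pro-v1:0")  # Kiro chose Nova Pro in the demo
agent = Agent(model=model)

def main() -> None:
    print("Pet store assistant (type 'exit' to quit)")
    while True:
        user_input = input("> ").strip()
        if user_input.lower() in {"exit", "quit"}:
            break
        agent(user_input)  # the SDK's default handler streams the reply to the terminal

if __name__ == "__main__":
    main()
```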
So let me actually run the agent now. Remember that this is a pet store application, but what I've done so far is create a new agent using the Strands SDK; I'm not connecting to any legacy pet store at all. I am connecting to Bedrock, though, so this is an AI agent using Bedrock and the Nova Pro model that I can interact with. I'm just going to simulate what I would do in a pet store and say, "I want to adopt a pet," but I want to make sure the pets are good with kids. That's going to send the request to the LLM through Bedrock, and it's going to answer back using the LLM's general knowledge. So I just said, "I want to adopt a pet," and the LLM is more than happy to comply. It says, "Sounds good to me. If you want to adopt pets, dogs are great—who doesn't love a golden retriever, right? Labradors, beagles." These are all answers from general knowledge about what makes a good pet for kids. And down at the bottom, you can even see that it has some tips. Again, this is the agent taking some freedom here to add additional information.
OK, let's now connect the legacy application. I'm going to go to the AWS console, search for AgentCore, and go into Build and Deploy, where we have Gateways. Under Gateways, I'm going to create a brand-new gateway and give it a name; we'll call this pet store gateway. I can add a description and instructions if I want. Here's where I would enable the semantic search that I talked about, but for this demo we're going to leave that off for now. For the Inbound Auth configuration, I could connect an existing Cognito user pool, but for now I'm just going to create a new one. Same with the service role—you can use an existing service role, but I'm going to let the console create it for me. And then here's where I define the actual target, so we have a Gateway and a Target.
I'm going to define the pet store API here, and we're going to define this as a REST API—that's the one you had to imagine earlier, right? We can upload the REST API spec or paste it in. I'm going to paste it in; this is just the JSON OpenAPI document that I showed you earlier, copied straight into the console. Earlier I created an API key to access the API, so I'm just selecting it here. Then I click Create gateway, and that's it—all of the definition was in the OpenAPI spec: the endpoint and all of the REST operations were defined there. The gateway was created successfully; you can see some ARNs and endpoints there. It says the status of the gateway is ready, so we can start using it. The target is there and defined as well, also listed as ready, and that target is the actual REST API that I defined in that OpenAPI spec.
The console also shows some invocation code—it even provides Strands MCP client example code that you can scroll through and copy for your implementation if you want to. I'm not going to copy all that code and try to modify and implement it. Instead, I'm going to go back to Kiro. I'll copy the MCP URL endpoint straight out of the console, go back to Kiro, and tell it that I now want to update the existing agent to use the MCP tools at the endpoint I paste in—and that's it. It pulls in the steering document, which instructs it how to implement MCP with the Strands Agents SDK, and it updates the code, working through the agent Python file and making those changes.

We can see the two different edits it made to agent.py: it added additional libraries and requirements to work with MCP, and it even updated the documentation. Once the code is done, it goes through a compilation pass to make sure there are no errors, validates it, and then reports all of the changes that were made. Let's take a look at those changes. You can see the new code additions with the extra import libraries that have been added. This is the access token—the OAuth token that now needs to get created. Then there's the actual code to create the agent using the MCP URL. Again, these are all code changes that I didn't need to make; Kiro made them just based on the instructions I gave it.

Now let's set up the final piece, which is the OAuth token generation—the authentication piece. Here's the token URL that I'm going to need, but I also need the client ID and secret, and I grabbed those from the Cognito console. Remember I said I wanted to create a new Cognito user pool, and that got created. I grabbed the client ID by going to Applications on the left-hand side and then App clients. There's the app client that was created by the gateway console, and my client ID and client secret are available there. That's what I need to update—remember, Kiro told me I needed to set these as environment variables, and in the generated code there are three environment variables that need to be set. I'll start with the MCP client ID, copy that from Cognito, and paste it in. Next is the MCP client secret, which we also get from Cognito. And finally the MCP token URL, which is the one we saw earlier in the code sample in AgentCore Gateway, so we'll go back there to paste that final credential.
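Kiro wrote this wiring in the demo, but for readers following along, here is a hedged sketch of what the MCP integration plausibly looks like with the Strands SDK and the MCP Python client. The gateway URL is a placeholder you copy from the AgentCore console, the bearer token comes from the client-credentials call sketched earlier, and exact class names should be checked against the current Strands documentation.

```python
# Hedged sketch of the MCP-enabled agent. Placeholders: the gateway MCP URL (copied
# from the AgentCore console) and the access token (obtained via the OAuth
# client-credentials call shown earlier and exported here as MCP_ACCESS_TOKEN).
import os

from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.models import BedrockModel
from strands.tools.mcp import MCPClient

GATEWAY_MCP_URL = "<paste the MCP URL from the AgentCore console>"

def main() -> None:
    token = os.environ["MCP_ACCESS_TOKEN"]
    mcp_client = MCPClient(lambda: streamablehttp_client(
        GATEWAY_MCP_URL,
        headers={"Authorization": f"Bearer {token}"},
    ))
    with mcp_client:
        tools = mcp_client.list_tools_sync()  # the gateway advertises the Petstore operations as MCP tools
        agent = Agent(model=BedrockModel(model_id="amazon.nova-pro-v1:0"), tools=tools)
        agent("I'd like to adopt a pet that is good with kids.")

if __name__ == "__main__":
    main()
```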
Now we have all of the credentials that we need for the environment variables, so we're ready to run this. We have a new agent that now has access to our entire pet store. Let me clean this up a little bit and make it bigger. Now we're going to run the agent. You can see here that we see setting up our MCP connection, and it says connected to MCP endpoint with 18 tools available. This is all done with the gateway, and I'm going to pose the exact same question: I'd like to adopt a pet that is good with kids.
Here's where you'll notice something different. The agent is actually executing tools. It's using the pet store's findPetsByStatus API endpoint and finding a bunch of pets that are currently available for adoption. It then determines that it wants something that's friendly with kids, figures out what the relevant tags might be, and executes findPetsByTags. It goes through and finds all of these pets available in the running pet store—the legacy application that I did not need to change. It can find pets by status and by tags, based on "friendly" and other tags that match the context of wanting to adopt a pet that's good with kids. It finds specific pets here: a dog named Buddy, a dog named Happy Test Pet, another dog named Max, and it even presents some additional options.
Let's look at how this works in context. I'm going to say give me more details about Buddy. Buddy was defined in a return value before, but the agent already knows what that specific pet ID was for Buddy. It goes back to the tool to get details about Buddy and gives different photo URLs and specific information about Buddy, including how long it's been available for adoption. It then integrates that with the agent's general knowledge to give more information about learning about the adoption process, what to expect from a dog, and all of these things.
What we're demonstrating here is an agent using an existing, unchanged application in a nondeterministic way. Imagine this pet store with a UI that you have to design, where you have to think about how your user is going to interact with your system. You might have a list of pets that are available, but you might not have anticipated that people would want to look for pets with certain behavioral traits. Because the API can function this way, the agent designs access using the tools to satisfy the requirement, combines that with general knowledge, and gives very meaningful results. This allows you to bridge legacy systems to AI agents and use them in meaningful ways.
Let's take this a step further. Let's say I want any pets that would not be good for kids. I've been talking about good with kids and friendly, so let's throw the agent a curveball and look for pets that would not be good for kids. What it's doing is remembering that it found these traits using the findPetsByTags endpoint, so it iterates on different behavioral traits and personalities, executing tool calls against that endpoint on the existing API of the existing application running in ECS.
It found a few pets, and because it remembers that I wanted details before, it executes those get-pet-by-ID operations and finds a few: here's a dog and here's an exotic animal, and it even explains why those specific pets would not be good for kids. For this one that's shy, for example, it says the shy temperament may be overwhelmed by children's energy—combining general knowledge with the specific pet information.
Okay, finally I'm going to say, "Are there any cats?" because you said a better option was cats. So now I'm going to actually see if there are any pets by a specific breed here. It goes through and executes find pet by status again. Now it has a different objective. It's going to go through and find specific cats, and it looks like it found one. It found a cat named Max. It remembers that I was looking for a pet that's good with kids, and so it's going to integrate that and use general knowledge to explain why cats are good pets with kids.
That's the end of the demo, but I wanted to take an existing off-the-shelf application and show how you can integrate it with the agent. What I strongly urge everybody to do is continue this journey and dive deep into AgentCore. You can use the URL here. We have a lot of sessions based around AgentCore, AgentCore Gateway, as well as Strands, but more than anything, not just diving deep, I really encourage you to start actually using it.
Don't wait until tomorrow or next week or even next month. As soon as you get back, think about where agents would make a difference in your organization. Develop that use case, develop the business value, prove it out, and then repeat. I wanted to thank everybody. I apologize for a little bit of the AV issues there, but hopefully you were able to imagine the OpenAPI spec. I'll be around for a few minutes afterwards if you have any questions at all. Please enjoy the rest of re:Invent. Thank you so much.
; This article is entirely auto-generated using Amazon Bedrock.