DEV Community

Kazuya
AWS re:Invent 2025 - From Code to Market: Build and Launch AI Agents on AWS Marketplace (ISV314)

🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.

Overview

📖 AWS re:Invent 2025 - From Code to Market: Build and Launch AI Agents on AWS Marketplace (ISV314)

In this video, Kevin Kennedy and Doug Mbaya demonstrate building an AI agent locally using Strands Agents SDK and deploying it to Amazon Bedrock AgentCore with just four lines of code. They show how AgentCore provides serverless infrastructure with micro VM isolation and auto-scaling capabilities. The session then covers listing the agent on AWS Marketplace through two delivery models: container-based deployment in buyer accounts and SaaS API-based offerings. Live demos include implementing AWS License Manager for contract pricing and Marketplace Metering Service for pay-as-you-go models, showing how buyers can subscribe, deploy agents via AgentCore runtime, and consume licenses based on query complexity (standard vs premium dimensions).


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Introduction: Building and Launching AI Agents from Code to AWS Marketplace

Hi everyone. Thank you for joining us today. I hope you're having a great re:Invent. My name is Kevin Kennedy. I'm a Senior Marketplace Solutions Architect covering APJ. I'm joined by my colleague, Doug. Hi, I'm Doug Mbaya. I'm a Senior Partner Solutions Architect at AWS. I support ISVs in integrating with AWS services. Great, thanks Doug. So as I say, thanks everyone for joining us today. What we're going to talk about today, as the title suggests, is "From Code to Market." We're going to demonstrate how to build and then launch an AI agent on AWS Marketplace.

Thumbnail 70

We're going to start quite simply by building an agent locally, and then we're going to demonstrate how to take that to AgentCore at runtime. From there, we're going to look at how you can take that to market globally and reach an audience by listing that on AWS Marketplace. I'm going to cover the different options for AWS Marketplace and show you some ways of integration. Here's a quick overview of our agenda. First, we're going to talk about what is Amazon Bedrock AgentCore. If you haven't heard of Amazon Bedrock AgentCore, Doug is going to cover that quickly so everyone is on the same level. Then we're going to do a demo where we'll demo an agent locally and then take that into AgentCore. After that, I want to cover what is AWS Marketplace. If anyone is not familiar with AWS Marketplace, we're going to cover what that is. Then I'm going to cover the delivery models that are available. We're going to jump into a demo where I'll deploy some listings on Marketplace and then deploy an integration into various different ways of metering and license manager. Finally, we'll cover the next steps. So first, I'm going to hand over to Doug. Doug is going to start off talking about Amazon Bedrock AgentCore. Thank you.

Thumbnail 110

Understanding Amazon Bedrock AgentCore: Infrastructure for Agentic Applications

So let's talk about what is AgentCore. But before we do that, I just want to do a survey. How many of you have heard of AgentCore? Quite a lot of hands. How many of you have deployed it in a production capacity? A few hands. So good. What is AgentCore? It's a set of services from AWS that provides the infrastructure for you to deploy, run, and scale your agentic architecture. It was built to be modular, such that it's a bunch of services that we combine together to allow you to operate your agentic application following best practices in terms of security, workload isolation, auto scaling, and so on.

Today, we are going to discuss deploying an agent from our local computer to AgentCore runtime. AgentCore runtime serves as the compute portion of the infrastructure where your agent runs. It's operated in a way that workloads are isolated with micro VMs. Whenever a user starts a session with your agent in AgentCore, it's isolated in a micro VM to provide that layer of security. While the runtime itself is at the center, as I mentioned before, it's modular. We have other services that we put together because we know that's what you need to operate your agentic system at scale.

We have services such as AgentCore Gateway. A gateway is really a way for you to extend your agentic system to other existing tools within or outside of your organization. The gateway supports MCP, for example, which is a protocol that allows your agentic system to gain more context by calling tools. It also supports Lambda functions. If you already have a Lambda function that you think would integrate well with your agentic system, you can expose it through the gateway. We have other services like AgentCore Identity, which provides the security layer for authentication and authorization. We have memory to provide a consistent conversation experience when your users interact with your application.

Thumbnail 270

This is the architecture. I just talked about AgentCore runtime, which is where your application runs. You have multiple other tools around it to operate your agentic system. You have an AgentCore Browser, for example. If your agentic system needs to go to the web and acquire some data from a web page, you use that tool. Then you use the gateway for tooling and extending it.

Thumbnail 330

You use memory to provide a consistent user experience. Then you connect it to the model of your choice. That's what the architecture looks like in general. The red part in the middle is what we are going to deploy, and we are going to do it live with a demo.

Before we move forward, I wanted to talk about the portfolio that AWS has in terms of agentic capabilities. At the top of the stack are applications that are ready to use: agentic applications where you just come and plug in your data or augment it with your context, and then you can use them right away. One layer down is where AgentCore sits, and you start getting infrastructure and services capabilities that allow you to deploy and build your own agentic system.

Here, we assume that you know your business and you know your customers better than we do. So we provide you with the infrastructure and the services to allow you to build your application and then make it available to the marketplace. You can see that there are two squares in red here. What we are going to show you today is how to deploy your agentic platform and then bring it up one stack layer in the marketplace so that your users can now go and start using that application without having to code anything.

Thumbnail 420

Further down the stack to the bottom, that's where you really need low-level access to build your own agentic system. Either you want to fine-tune, train, access GPUs and accelerations, that's when you go further down the stack.

Perfect. So we talked about AgentCore. This is the core of our demo. This is what we're going to show you how to deploy: an agent that's running on your local machine, deployed to the AgentCore runtime. What is AgentCore runtime really? It's a platform that gives you the flexibility to deploy with your own code and SDK. You are not locked into any specific language or model. You can call any model via API. You can develop with any framework. It's a runtime environment that provides session-level isolation using micro VMs, and it supports any model and any SDK.

This is what we are going to deploy today. You can see that we are packaging the application, converting it into a Docker container, pushing it to a repository, in this case Amazon ECR, and from there, the application and the container will be deployed on AgentCore. Before we move forward, let's briefly talk about the value. The first value is really time to value. Now you no longer have to worry about operating the infrastructure that powers your agentic system. You are laser-focused on developing your agentic system, your code, and the business logic solving your customer's problem or your organization's problem. We remove that management and operational layer for you.

Thumbnail 500

Because you're laser-focused on developing your agentic application, you become more productive, and you also become more cost-efficient. The flexibility, as I mentioned earlier, is that you can use any model. You can also use any language or framework to develop your application. Trust, of course: we've baked in all the security best practices that we've learned over the years. We've baked them into the modularity of the platform in AgentCore. Therefore, when you start deploying your application, you have all that security tooling and those services that allow you to operate it faster, at scale and at production level.

Thumbnail 580

Live Demo: Deploying a Local Agent to AgentCore Runtime with Strands Agents

For this demo, we'll be using Strands Agents. Strands Agent is an open-source SDK that takes a model-driven approach in developing agents. What it really does is allow you to build an agent with just a few lines of code. Remember, when you are building an agent, especially if you are going to expose that agent to tooling, you need to build logic into it. If, then do this, use this tool, and so on.

Thumbnail 640

When you're using a dedicated agent framework like Strands Agents, all of that logic is done for you. Your agent is intelligent enough to know that a particular question requires access to a specific tool, and it will only call that tool without you explicitly telling the agent to use it. This makes it a very powerful library. You can use this or any other agentic library that you are familiar with.

Thumbnail 660

Thumbnail 690

I'll switch over to my laptop and start the demo. Hopefully this is not too small. This is an agent that I have developed that runs on my local machine. You can see that I'm importing the Strands agent that I just talked about to build my agentic application, and then also some tooling, because my agentic system will access some tools. You have to define your tools. Any other import is really just to help with the application, but the most important are the agent that we're going to initialize and the tools. We're also importing some built-in tools. Strands Agents has some built-in tools that are mostly just for reference, and if you have your own tool, you can just bring it in.

Thumbnail 710

Thumbnail 740

Thumbnail 750

For example, here we are taking what is a regular function with a dummy response. It just returns sunny. What we're doing is decorating it with a tool from Strands. When you decorate it with a tool, it automatically converts your function—any Python function—into a tool. So we have a set of tools here. We have one tool that gives the weather, which is a dummy answer, another tool that gives financial advice, and another one that gives you information about your AWS spending. In addition to those three, we also have two others: a calculator and a file reader.
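As a sketch of that decorator pattern: the `strands` import follows the Strands Agents SDK docs, but treat it as an assumption, and the fallback decorator is only there so the snippet runs even where the SDK isn't installed. The tool bodies are dummy responses, as in the demo.

```python
# Sketch of turning plain Python functions into agent tools with a decorator.
try:
    from strands import tool  # assumed import path from the Strands Agents SDK
except ImportError:
    def tool(fn):  # no-op stand-in so the sketch runs without the SDK
        return fn

@tool
def get_weather(city: str) -> str:
    """Return the weather for a city (dummy response, as in the demo)."""
    return f"The weather in {city} is sunny."

@tool
def financial_advice(topic: str) -> str:
    """Return canned financial advice (dummy response)."""
    return f"For {topic}, diversify and review your spending regularly."
```

The docstrings matter: agent frameworks typically surface them to the model so it can decide which tool fits a given question.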

Thumbnail 760

Thumbnail 780

Thumbnail 790

Thumbnail 800

What we're doing here is initializing our agent by passing a model to it. If you don't pass a model, it will pick a model for you from Bedrock by default because we are using the Bedrock initialization from Strands. If you wanted to use OpenAI instead, you would import OpenAI. It's just to help and accelerate the development. Because we are specifying a Bedrock model, if you don't specify the model that you want, it's going to use a selected default for you. In this case, we're specifying the Anthropic Claude 3.7 Sonnet.

Thumbnail 810

Thumbnail 830

Thumbnail 840

Then we are initializing our agent, passing in the model and the tools. This is where we are passing multiple tools. Why are we passing multiple tools? Just to show you that your agent will be intelligent enough to select the right tool based on the user's prompt and question. This is the function that takes the user prompt, passes it to the agent, and then processes the response. If you want to run this application, you can. I have a Streamlit app running here, so you can ask it a question, such as what is my AWS spending, or just any question, to see if it's going to recognize which tool to invoke and give us the answer.
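A minimal sketch of that initialization, assuming the Strands SDK: the import paths and the Claude 3.7 Sonnet model ID follow the Strands and Bedrock docs but should be treated as assumptions, and the stub classes only keep the sketch runnable without the SDK or AWS credentials.

```python
# Initialize a Bedrock-backed agent and hand it the tools to route between.
try:
    from strands import Agent
    from strands.models import BedrockModel
except ImportError:  # stand-ins so the sketch runs without the SDK
    class BedrockModel:
        def __init__(self, model_id):
            self.model_id = model_id
    class Agent:
        def __init__(self, model=None, tools=None):
            self.model, self.tools = model, list(tools or [])
        def __call__(self, prompt):
            return f"[stub reply to: {prompt}]"

def financial_advice(topic: str) -> str:
    """Dummy tool, standing in for the ones defined earlier."""
    return "Diversify your portfolio."

# If no model is passed, Strands picks a Bedrock default; here we pin one.
model = BedrockModel(model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0")
agent = Agent(model=model, tools=[financial_advice])
```

Calling `agent("Can you give me some financial advice?")` then lets the model pick the matching tool from the list, as shown in the demo.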

Thumbnail 860

So Doug, quick question: you've called this a local agent, so this is running locally on your machine at the moment, is that right? But it's making a call to the model that's obviously elsewhere, is that correct? So it's not all local, just the agent itself and the building is local, but it's making calls to a model externally. Correct. The agent is running on my local computer and then it's calling Bedrock through the SDK through the API. I could have done it with a local model as well, but because we're going to deploy it to AWS, it makes sense to have a model that's accessible over APIs. So yes, it runs on my computer and then it answers my question. I just ask it a question about billing, it selected the right tool.

Thumbnail 920

Thumbnail 930

Thumbnail 940

Thumbnail 960

When you ask the agent a question, such as "give me some financial advice," the agent detects the intention and looks through the available tools. It identifies that there is a tool that can provide financial advice and uses that tool to return the financial advice. Now that we have an agent running locally, the question becomes how to convert that agent so we can deploy it into AgentCore. I'll come back to my application. My application is called local agent. What I would do is copy it and call it hosted agent. Now I have a hosted agent application that I'm going to pull up. It's exactly the same application, but we're going to convert it into an AgentCore-ready application. The first thing you do is import the runtime: from the bedrock_agentcore package, I import the runtime app. This is a Bedrock AgentCore application, so that's really the first thing you do to convert code that runs locally into an AgentCore-capable agent.

Thumbnail 1010

Thumbnail 1020

Thumbnail 1030

Thumbnail 1050

Thumbnail 1060

The second thing I would do is initialize my application. I'll do app = BedrockAgentCoreApp(). That's the second line of code I'm adding, to initialize the application. All my tools will remain the same. The only thing I would change is the entry point. Remember, this is the function that we were calling when we were using the Streamlit app. In order to operate it on AgentCore, you need to tell AgentCore what your entry point is. You do this by using a decorator: you take the app that we just initialized and use its entrypoint decorator. By decorating my function, I'm telling AgentCore that when the agent is deployed and the endpoint is called, I want it to use this function. By default, the entry point is your default API; it's going to execute this function when it's running on AgentCore.

Thumbnail 1090

Thumbnail 1100

Thumbnail 1110

Thumbnail 1130

Thumbnail 1140

This is the third line that we've added. Because it's going to become an API, AgentCore is going to convert this application to an API. We no longer need all of this. All we need is to initialize our application and then call its run method. Here, you can specify a port; by default, it's going to use 8080. However, you don't need to know that port from the standpoint of a user trying to interact with it, because all you need is to call the AgentCore endpoint. We no longer need the rest, because that was just for our local application. You can see that with essentially four lines of code, we've converted what was a local agent into an AgentCore-capable agent. Before we move forward, let's review our requirements file. We're importing Strands Agents, Bedrock, and AgentCore. Of course, we're using both of those to initialize the app. But also take a look at this bedrock-agentcore-starter-toolkit. This is very important because what it does is provide you either a CLI or an SDK to deploy your agentic system quickly. In this case, we're going to use it as a CLI from the command line. I strongly encourage you to use the starter toolkit because it speeds up your development, deployment, and testing. I'll show you what I mean in a moment. All the other imports are mostly just supporting tools.
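Put together, the four additions look roughly like this. The import path and the entrypoint decorator follow the bedrock-agentcore Python SDK, but treat the exact names as assumptions; the stub class only keeps the sketch runnable where the SDK isn't installed, and the handler body is a dummy stand-in for the real Strands agent call.

```python
# Minimal sketch of converting a local agent into an AgentCore-capable one.
try:
    from bedrock_agentcore.runtime import BedrockAgentCoreApp   # addition 1: the import
except ImportError:
    class BedrockAgentCoreApp:          # stand-in with the same surface
        def entrypoint(self, fn):
            return fn
        def run(self, port=8080):
            pass

app = BedrockAgentCoreApp()             # addition 2: initialize the app

@app.entrypoint                         # addition 3: mark the handler AgentCore should call
def invoke(payload):
    prompt = payload.get("prompt", "")
    # The real version would pass the prompt to the Strands agent here.
    return {"result": f"answered: {prompt}"}

if __name__ == "__main__":
    app.run()                           # addition 4: serve the API (port 8080 by default)
```

Everything else from the local version, such as the Streamlit loop, can be dropped: AgentCore now fronts the agent as an API.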

Thumbnail 1200

I'm going to go back to my hosted application. Now that I have updated it to be capable of running on AgentCore, it's still a local agent. I can still operate it locally. If I run it, it's just going to deploy a web API locally, and then I can just interact with that web API.

Thumbnail 1220

The goal here is to push it to AgentCore. The first thing I would do, in a terminal, is run the agentcore CLI with the configure command. Then I pass it my application, the one that I just prepared to deploy. In this case, we call the application "hosted agent." That's the first step to start preparing your application to deploy it on AgentCore. Remember that my CLI is configured with my AWS credentials. Whether you use temporary or permanent credentials, we usually recommend using STS for this.

Thumbnail 1270

Thumbnail 1280

Thumbnail 1290

Thumbnail 1310

I'll start configuring my application, and for the purpose of this demo, I'll go with the default settings. It's asking me to provide a role or create one by default. I'll go ahead and auto-create one. Then it's asking me to specify a repository where the Docker image will be pushed to, and I'll ask it to auto-create that for me. It has detected that I have a requirements file locally, which I showed you earlier. So it's asking me if this will be part of the deployment, and I'll say yes, use that application. I just press enter.

Thumbnail 1350

Thumbnail 1360

Next is authorization configuration. How do you want your users to access the application? Do you want them to use OAuth, or do you want them to use other credentials? For this purpose, I'll go with the default IAM. However, in production, especially if multiple users will be using it, using an OAuth setup is a good way to do it because then each user has a different level of authorization, and that authorization will follow them as they execute your agentic system. I'm not going to add or restrict any access to this for the purpose of this demo. But essentially, you would do that if you want to restrict access based on the network.

Thumbnail 1370

Thumbnail 1380

Thumbnail 1400

So it has configured my agent locally, and it's created a few files. It created a Dockerfile. This is the file that will be used to actually package my Docker image. It has also created a Bedrock AgentCore configuration file. This is the configuration: a snapshot of how my agent will look. You always refer to this one to see what the configurations are, and you can edit those configurations and push them. So briefly, this is where we are. We are packaging it, and we're going to push it to the repository on ECR and then deploy it on AgentCore.

Now that we have configured it, we need to push it. All you need to do is type "agentcore launch." That's all you need. Press enter. What it's doing is converting this application into a Docker image, pushing it to a repository on ECR, and then once it's done, it will pull that application into AgentCore, and that's it. It becomes available for you to operate and start running inference on it.
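The full CLI flow from this section, as a rough command transcript. It assumes the starter toolkit is installed and AWS credentials are configured; the flag names are assumptions from the talk, so check `agentcore --help` before relying on them.

```shell
pip install bedrock-agentcore-starter-toolkit

agentcore configure --entrypoint hosted_agent.py   # prompts for role, ECR repo, auth
agentcore launch                                   # build image, push to ECR, deploy
agentcore invoke '{"prompt": "Can you give me some financial advice?"}'
agentcore destroy                                  # clean up local and AWS resources
```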

So Doug, quick question. While you're implementing there, what we've shown here is AgentCore, and it looks pretty simple to migrate from local to AgentCore. With the runtime, what's the alternative? If someone wanted to take that agent they built locally and run that on Bedrock or anywhere else, what would they need to do instead? You have the option to use AgentCore, of course. AgentCore has all the set of tools and services ready for you, so it's helpful and it's a managed service. But if you think you have the expertise and want to deploy and self-manage it, you could first use the SDK. In addition to the CLI, you can use CloudFormation or the Cloud Development Kit to push it to AgentCore. But if you wanted to self-operate it, you can also run the same agent on an EC2 instance, or you can run it on a containerized environment such as EKS or ECS. And AgentCore, is that a serverless infrastructure?

Yes, it is a serverless infrastructure. It scales as you use it. I mentioned that it's a micro VM with session-level isolation. Each user that starts interacting with the application has an isolated micro VM, and then that scales based on the number of users you have. It auto-scales up and down for you. To be clear, there's no infrastructure to manage for the customer. They're not managing virtual machines. No, you're not managing any infrastructure. You are given a monitoring platform that integrates with AWS services, and that's what you do. You plug into your monitoring system, but essentially this is operated for you. You only pay for what you use.

We can see that our application is deployed on AgentCore. Now we need to invoke it and test it. Let me try to test it with the same agentcore command, using invoke. You have to pass it the JSON payload. So what I would do is pass our prompt, and that would be my question. Let me see: can you give me some financial advice? This is the same question we asked it before. Now that the agent is deployed on AgentCore, we are invoking it directly, and it becomes an API essentially. All right, just making sure my prompt is correct, and then I'm going to run it. The code is now running in the container that's hosted in AgentCore. Yes, so we are at the layer where I'm now the user, and I have an application. Your users will not directly interact with the AgentCore API. You build an application on top of it, like my Streamlit application, for example, and that's the API that will be called.

You can see that the agent has replied with the financial advice that I asked for, the same one I got when I ran it locally. I can even go to the console and see my agent called hosted agent. You can see that I now have it running here. There are multiple agents, and remember we named it hosted agent, and it's right there. If you don't specify a name, it takes the name of the Python file, and that's going to be the name of your agent. We are in the Bedrock AgentCore runtime console, and you can see all the hosted agents here. This is the agent; you can browse it, look at it, and you can even interact with the agent from the terminal for the purpose of testing.

Just to recap, we packaged a local agent with just a few lines of code, and then we converted it to Docker. Once it's on the ECR repository, it's there and can be reused and redeployed multiple times. If you are an ISV and you want to sell a product, you can either offer it as a SaaS, which means it's hosted on your account and they can access it via API, or you can also have it deployed in their own accounts. Kevin is going to go over those options with you. In general, we took what was a locally running agent, we converted it to a hosted agent with essentially four lines of code, and then we were able to execute it from Agent Core.

AWS Marketplace Overview: Delivery Models for AI Agents and Tools

If I want to finish my testing, I'll just do agentcore destroy. When I enter that, it's going to clean up everything from my local machine, but it's also going to remove everything from the AgentCore runtime on AWS. I'm not going to destroy it because Kevin may want to reuse it. Thank you very much. This was just how to push it to the AgentCore runtime. Now I'll pass it to Kevin. He's going to talk about the marketplace aspect of it. Great, thanks Doug, that's really good. So just to recap, Doug built an agent locally, then he migrated it to AgentCore with just a few commands. That's great, but it's running in your environment, so if you want to take that to market and productize it, you need to think about how you can either deploy it into a customer environment or, as Doug mentioned, have a SaaS component where you charge the customer there.

We're going to cover a bit of that at the moment and take it a step further by looking at how you can productize it and list the product on AWS Marketplace, which will enable you to take that product globally. But before I go into that, I just want to level set. Can I get a show of hands from those who are aware of AWS Marketplace? Yes, quite a few of you. Who here has either listed a product on Marketplace or purchased a product via Marketplace? Okay, fewer of you. That's great, so we're going to cover that today, and if there are any questions, I can address those at the end as well.

Thumbnail 1850

Let me tell you about AWS Marketplace. Our vision for Marketplace is to be the best place for customers to find, buy, and deploy third-party software, professional services, and data. We look at it as the everything store for software. Here are some statistics that give you an idea of the scale. We have over 30,000 transactable listings on Marketplace today, over 70 categories, and recently, in July of this year, we introduced categories for AI agents and tools. We have over 3 million subscriptions and millions of visitors per year to AWS Marketplace. If you have an AI agent today and you're looking to take it to market, it's a really good place to take your product, and it reaches a global audience.

Thumbnail 1900

Thumbnail 1910

What I'm going to talk about today is the AI agent delivery models. Let's discuss a few of those now. In AWS Marketplace, each product has a different product type. You can categorize those based on whether you would like that product to be deployed into the buyer's account. We have a few options here: AMIs, containers, and machine learning images. These are purchased by a buyer and deployed into that buyer's account, where they then manage the resources for that product in their account. You can charge them, for example, a licensing fee.

On the other hand, you have the seller account with the SaaS product. If you have a product that runs in your account and you want to provide access to that product via a license, they can sign up to your product and have certain entitlements. That can typically be a SaaS product. The definition is whether it will run in your account as a seller versus whether it will run in the buyer's account. The options we're going to focus on here are compatible containers that can deploy an agent runtime, and API-based SaaS AI agents and tools. One would be agent runtime deployed as a container in a buyer environment, and API-based SaaS AI agents and tools would be a SaaS product that deploys in your account, and you provide access to the buyer as an API endpoint.

Thumbnail 2020

Both of those options support upfront contracts and pay-as-you-go pricing. The decision you need to make is really about understanding what your buyers are looking to purchase and how you'd like to take your product to market. That's really a business decision on which one you prefer to do. Let me look into this a bit more. We have two options: container-based agents and tools on the left there. Kevin, I have a quick question. What are some of the factors that customers need to take into account when deciding the listing model or the pricing model?

Sure. The pricing model is something we're going to cover in a moment about the difference between contracts and pay-as-you-go. But ultimately, the first decision you need to make is how you wish to deploy that product. You need to make that decision first. Is the infrastructure going to run in your environment and you give entitlements to a buyer, or are you happy to, if you go back to what Doug created and package that up as a container, then the buyer will go and access that? Those are the two different models, but we're going to cover a couple more in a moment, which will be contracts versus pay-as-you-go. Again, that will come down to the buyer preference and whether they're comfortable purchasing it as a pay-as-you-go product versus purchasing it with a contract where they purchase entitlements in advance.

Thumbnail 2090

Container-Based Listings: Implementing License Manager and Metering Integration

We're going to move on and look at how it gets deployed, and I'm going to do a bit of a demo into the code as well. What Doug showed you earlier is how to deploy that agent. We have a Docker file that's gone onto the ECR repository, and that's all in your environment today. So if we bring it into Marketplace, what we're going to do is create a Marketplace listing.

Thumbnail 2110

Thumbnail 2130

Thumbnail 2150

Thumbnail 2170

If it's a container, we create a container listing, then take the container that's in your private ECR repository, bring it across into a Marketplace ECR repository tied to your listing, and publish it. It's very simple. Let's go and have a look at that now. I'm going to show you how a deployment looks. If we're looking to do a deployment, you can either deploy it via the Marketplace portal with a simple click-through process, or you can templatize it by creating a JSON file with the parameters and content you need and then calling the Marketplace Catalog API.

Thumbnail 2190

Thumbnail 2200

I've created some examples in advance that you can use yourself. If you're looking to deploy multiple products, I'd recommend doing this as a JSON file and deploying it via the Marketplace API. Let's look through what we have here. We've got a logo that's required for your product, and we've added a few categories and given the product a title. You'll want to give it a title that's appropriate for your product. There are other things you'll need to include such as a long description, which really helps with search engine optimization and the buyer experience to understand your product, along with highlights and other details.
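As a rough illustration of what such a file can look like, here is a minimal sketch. The overall Catalog/ChangeSet shape follows the Marketplace Catalog API, but the change type, entity type, identifier, and detail field names below are illustrative assumptions, not a copy of the file shown in the demo:

```json
{
  "Catalog": "AWSMarketplace",
  "ChangeSet": [
    {
      "ChangeType": "UpdateInformation",
      "Entity": {
        "Type": "ContainerProduct@1.0",
        "Identifier": "prod-EXAMPLE1234"
      },
      "DetailsDocument": {
        "ProductTitle": "Financial Advice Agent",
        "LogoUrl": "https://example.com/logo.png",
        "ShortDescription": "An AI agent that answers finance questions.",
        "LongDescription": "A longer description that helps with search and the buyer experience.",
        "Highlights": ["Runs on Amazon Bedrock AgentCore"],
        "Categories": ["AI Agents"]
      }
    }
  ]
}
```

A file like this would then be submitted with something like `aws marketplace-catalog start-change-set --catalog AWSMarketplace --cli-input-json file://listing.json`, and the returned change set ID and ARN confirm the syntax was accepted.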

Thumbnail 2210

What I want to highlight here is the dimensions. When you publish a product or list of products on the marketplace, you create what are called dimensions, which represent the pricing for your product. In this case, we have a standard and a premium product. This one is a pay-as-you-go subscription product that will be externally metered. When a user signs up to use that product, they can be charged either a standard price for a standard query or a premium price. What you set for the dimensions is entirely up to you and depends on your product, but you'd usually align this to how you're charged for your product today. If it's an AI agent and you're looking to charge customers, I've set up a standard and premium option depending on the query they send in.
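For the externally metered pay-as-you-go model described above, the seller's application reports consumption against these dimensions through the Marketplace Metering Service. A minimal sketch with boto3, where the product code is a placeholder and the dimension names match this example listing, not any real product:

```python
import datetime

VALID_DIMENSIONS = ("standard", "premium")  # must match the listing's dimension keys

def record_query(dimension: str, quantity: int = 1,
                 product_code: str = "EXAMPLEPRODUCTCODE"):
    """Report pay-as-you-go usage for one query. MeterUsage is the real
    Marketplace Metering Service API; the product code is a placeholder."""
    if dimension not in VALID_DIMENSIONS:
        raise ValueError(f"unknown dimension: {dimension}")
    import boto3  # deferred so the validation above works without AWS installed
    client = boto3.client("meteringmarketplace")
    return client.meter_usage(
        ProductCode=product_code,
        Timestamp=datetime.datetime.now(datetime.timezone.utc),
        UsageDimension=dimension,
        UsageQuantity=quantity,
    )
```

The agent would call something like `record_query("premium")` after serving a premium query, and Marketplace bills the buyer at the price set for that dimension.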

Thumbnail 2260

Thumbnail 2270

Thumbnail 2280

Once you're happy with that, you create the repository that goes into Marketplace by giving it a repository name that provides a unique identifier. Once we're happy with that and set the price, we put a price for standard and then you can set a price for premium, and then you're ready to go. I'm going to generate a list here, which gives me the command I need to run. When I run that, it calls over to AWS Marketplace. If the syntax is correct, we get back a change set ID and a change set ARN that shows me the syntax from my JSON file is correct and has the content required.

Thumbnail 2310

Thumbnail 2330

Before we move forward, you should see two or three requests come through to create new listings. As we can see here, we have three new listings that have been created and they're under review at the moment. That takes about ten to fifteen minutes. Once they get approved, they get published onto AWS Marketplace and that becomes your listing ready to go. Let me show you one I created earlier in the interest of time rather than waiting fifteen minutes.

Thumbnail 2340

Thumbnail 2350

Thumbnail 2360

Thumbnail 2370

I created a container contract product earlier. Here are the details we saw in that JSON file and the pricing information. This is a contract-based listing with simple and advanced options, and I also created one for pay-as-you-go. We would then create the repository, which is pretty straightforward. Now that we have the listings in Marketplace, the next step is deciding on the pricing model. As Doug mentioned earlier, we're focusing on container products today; SaaS would be a separate approach. There are two models we're going to look at. The first uses AWS License Manager: every request triggers a license checkout, because we call License Manager. A buyer purchases the product, which entitles them to a quantity of each dimension, and each query then checks out a license against that entitlement. If no license is available, they get an invalid-license error. With this model, customers essentially pre-purchase licenses, and usage draws down against them through License Manager.

Thumbnail 2420

Thumbnail 2430

Thumbnail 2450

Thumbnail 2470

Thumbnail 2480

Thumbnail 2490

Thumbnail 2500

Thumbnail 2510

Thumbnail 2520

Think of it as an analogy: if you buy ten coffees in advance, you can draw down against those ten as you go. Let me show you some code that demonstrates the parameters that get called, with examples of the syntax needed to check out a license. (The other model is pay-as-you-go metering, which we'll demo afterwards.)

First, here's how contract pricing looks. We have a contract for an agent here. The main change from the code Doug showed earlier is that, in addition to the Bedrock agent, we've now added License Manager. Quite simply, the code uses the License Manager client, so we instantiate that, and I've added the product ID, which relates to the listing we created. Then I've added some configuration for license checkout: each time a query runs, we check out one license based on the dimensions you've set. If the checkout succeeds, one license is consumed. The dimension is determined by the query: if it matches any of the keywords I've specified, such as "deep analysis" or "detailed analysis", it's treated as advanced; otherwise it's simple. It's quite basic, but you can put whatever logic you need into your application.

The agent endpoint is the same as Doug explained earlier; the main difference is the added call to License Manager. I've deployed this into AgentCore much the same way Doug did, but it's running in my account at the moment as a test. I'll do a quick test, invoking a simple finance-advice request. If everything is okay, it returns as you see here: the dimension is simple, it's consumed one license, and you get a unique ID. Each time you run that, it draws down a license from License Manager.
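The checkout flow described above can be sketched with boto3. This is a minimal sketch, not the demo's exact code: the product SKU and keyword list are placeholders, and the key fingerprint shown is the one documented for AWS Marketplace-issued licenses (verify it against the License Manager documentation for your use case).

```python
# Hypothetical sketch: check out one License Manager entitlement per query.
# Dimension names ("simple"/"advanced") and keywords mirror the demo's setup.
ADVANCED_KEYWORDS = ("deep analysis", "detailed analysis")

def classify_query(prompt: str) -> str:
    """Map a prompt to a pricing dimension based on keyword matching."""
    p = prompt.lower()
    return "advanced" if any(k in p for k in ADVANCED_KEYWORDS) else "simple"

def checkout_for_query(prompt: str, product_sku: str) -> dict:
    """Check out one license for the dimension the prompt maps to."""
    import boto3  # imported lazily so classify_query stays testable offline
    import uuid

    lm = boto3.client("license-manager")
    return lm.checkout_license(
        ProductSKU=product_sku,
        CheckoutType="PROVISIONAL",
        # Documented fingerprint for Marketplace-issued licenses:
        KeyFingerprint="aws:294406891311:AWS/Marketplace:issuer-fingerprint",
        Entitlements=[
            {"Name": classify_query(prompt), "Unit": "Count", "Value": "1"}
        ],
        ClientToken=str(uuid.uuid4()),  # unique per checkout
    )
```

If no entitlement remains, `checkout_license` raises an exception, which is where the "invalid license" error the demo shows would surface.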

Thumbnail 2560

The alternative is metering. If we go down the pay-as-you-go path, the main difference is that the agent uses the AWS Marketplace Metering Service. To extend the coffee analogy: instead of buying ten coffees up front, you pay for each coffee as you buy it. Each time a query runs, a metering record is emitted, and based on your dimensions and charge type, the user is charged. It records actual consumption in real time, and customers pay only for what they use, per request.

Thumbnail 2610

Thumbnail 2620

Thumbnail 2630

Thumbnail 2640

Thumbnail 2650

Similarly, the code detects whether a query is standard or premium. It works much the same way, but instead of License Manager we use the Metering API. I set this one up earlier as well. The code is very similar: the client is the Marketplace Metering client rather than License Manager, and we've set the product code, which is unique to your product listing. We set the dimensions as standard and premium, with the same keyword logic as before: if the prompt matches any of the keywords, it's metered as premium; otherwise as standard. Then we get the output. It's very simple and straightforward. When I run this with the same simple finance-advice prompt as before, instead of drawing down from License Manager, it records a metering record with a unit of standard. Each run produces a unique metering record. The main difference is that you're not drawing down against pre-purchased licenses; you can run queries continuously and just pay as you go, depending on the price per unit.
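A minimal sketch of that metering call, using boto3's Marketplace Metering client and its `meter_usage` operation. The product code and keyword list are placeholders standing in for the demo's values:

```python
# Hypothetical sketch: emit one Marketplace metering record per query.
from datetime import datetime, timezone

PREMIUM_KEYWORDS = ("deep analysis", "detailed analysis")

def pick_dimension(prompt: str) -> str:
    """Map a prompt to the 'standard' or 'premium' pricing dimension."""
    p = prompt.lower()
    return "premium" if any(k in p for k in PREMIUM_KEYWORDS) else "standard"

def meter_query(prompt: str, product_code: str) -> str:
    """Record one unit of usage and return the metering record ID."""
    import boto3  # imported lazily so pick_dimension stays testable offline

    mm = boto3.client("meteringmarketplace")
    resp = mm.meter_usage(
        ProductCode=product_code,          # from your Marketplace listing
        Timestamp=datetime.now(timezone.utc),
        UsageDimension=pick_dimension(prompt),
        UsageQuantity=1,
    )
    return resp["MeteringRecordId"]
```

Each call returns a unique metering record ID, matching what the demo prints after each query.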

Thumbnail 2690

Buyer Experience and Next Steps: Workshops and Labs for AI Agent Deployment

The next bit I'm going to show you afterwards would be the customer experience. Let's have a quick look here from the customer view. Once we go over to here, you can see I've gone into a different account now, so I've gone into a customer account. This is what a customer would see.

Thumbnail 2720

Thumbnail 2730

Thumbnail 2740

Thumbnail 2750

Thumbnail 2760

Thumbnail 2770

Thumbnail 2780

Thumbnail 2790

Thumbnail 2810

Thumbnail 2820

Thumbnail 2830

Thumbnail 2860

Thumbnail 2870

Thumbnail 2890

Thumbnail 2920

Thumbnail 2930

Thumbnail 2970

Thumbnail 2980

The customer sees the product I created earlier, with the option to purchase it. In the interest of time I completed the purchase earlier, but let's walk through the process. You subscribe to the product; this is a public offer with a one-month subscription and two dimensions, advanced and simple, each with its own price. I purchased 40 units of advanced and 40 units of simple. Having subscribed, I can now launch the software. Because this is Bedrock AgentCore, I can simply deploy it into my environment by setting up hosting on AgentCore. Clicking on that takes me through to the AgentCore runtime. This is the buyer experience, so we have a buyer agent, and we decide to host this agent. The example I'm showing uses contract pricing, so once the agent is deployed we need to give it permission to call License Manager. The deployment starts here; I'll go to the version it created, and as Doug showed you earlier, it has a service role. Clicking on that role, I can see it has the ability to execute the AgentCore runtime, but it has nothing yet for License Manager, so I add a License Manager consumption policy. That allows the agent to connect to License Manager each time it runs. This is a newly deployed agent, so it creates a unique runtime ARN, which I copy. I've written some Python code for the buyer experience; as Doug mentioned earlier, you'd most likely build this as a web application, but the buyer can also call the agent on AgentCore directly from the console or the command line.
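From the buyer's side, invoking the purchased agent might be sketched like this with boto3's `bedrock-agentcore` client. The payload key (`"prompt"`) and the response handling are assumptions that depend on how the agent's entrypoint is written; the runtime ARN is the one copied from the console:

```python
# Hypothetical sketch: a buyer invoking a subscribed agent on AgentCore.
import json

def build_payload(prompt: str) -> bytes:
    """Serialize the request; the 'prompt' key is an assumed entrypoint field."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def invoke_agent(runtime_arn: str, prompt: str) -> str:
    """Invoke the AgentCore runtime and return the raw response body."""
    import boto3  # lazy import so build_payload stays testable offline
    import uuid

    client = boto3.client("bedrock-agentcore")
    resp = client.invoke_agent_runtime(
        agentRuntimeArn=runtime_arn,
        runtimeSessionId=str(uuid.uuid4()),  # session IDs must be long enough;
                                             # a UUID string (36 chars) works
        payload=build_payload(prompt),
    )
    # The response body is a stream; exact shape may vary by SDK version.
    return resp["response"].read().decode("utf-8")
```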
What we're going to do is update this code with the runtime ARN of the agent that has been subscribed, run a command against it, and invoke the agent that's just been purchased, in the buyer account. The invocation succeeds, and we can see the dimension used is simple: it's consumed one license, and there's the token. We can also check, from the buyer's perspective, how many licenses are still available, in a couple of ways. Running that via the command line, I can see that 39 of the 40 simple licenses have been consumed, because I was testing earlier. Running one more command hits the 40 mark, and beyond 40 no more commands will succeed. So we run it again, hit 40, and an error comes back: checkout failed, an error occurred, no entitlements available. Checking again confirms all 40 have been consumed. As a buyer, you can then decide to purchase more licenses. It's a great way to take your product to market with contract pricing: buyers consume a fixed number of licenses, and once they've used them all, they come back and buy more. The alternative, which we showed before, is the pay-as-you-go model; the difference is that buyers simply pay continuously per query. If you're taking a product to market, you can even list it both ways, one contract listing and one pay-as-you-go listing, and give users the choice. That's how deployment looks from the buyer experience, and you can see how simple it is to deliver this as a container product.
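Checking remaining entitlements can be sketched with License Manager's `GetLicenseUsage` API; the license ARN is a placeholder, and the response parsing assumes the documented `EntitlementUsages` shape (string-valued `MaxCount` and `ConsumedValue`):

```python
# Hypothetical sketch: a buyer checking how many licenses remain per dimension.

def summarize_remaining(entitlement_usages: list) -> dict:
    """Pure helper: map each dimension name to its remaining license count."""
    return {
        u["Name"]: int(u["MaxCount"]) - int(u["ConsumedValue"])
        for u in entitlement_usages
    }

def remaining_licenses(license_arn: str) -> dict:
    """Fetch usage for a received license and summarize what's left."""
    import boto3  # lazy import so summarize_remaining stays testable offline

    lm = boto3.client("license-manager")
    usage = lm.get_license_usage(LicenseArn=license_arn)["LicenseUsage"]
    return summarize_remaining(usage["EntitlementUsages"])
```

With the numbers from the demo (39 of 40 simple licenses consumed), this would report one simple license remaining.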
The alternative is to provide the agent as software as a service. Rather than deploying the agent Doug created into the customer's environment, you run it in your own environment, sell it as SaaS, and provide access via an API endpoint. Users access those APIs, and you charge them based on their usage of the product. Again, you can use the contract model or pay-as-you-go. The difference is that with metering records, you're responsible for tracking what each customer consumes and sending those records to AWS Marketplace once per hour; Marketplace then bills the customer accordingly. Those are really the two differences.
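For the SaaS model, the hourly reporting described above might be sketched as follows with the Metering Service's `BatchMeterUsage` operation. The product code and customer identifier are placeholders; in practice the customer identifier comes from the Marketplace `ResolveCustomer` flow:

```python
# Hypothetical sketch: a SaaS seller reporting aggregated usage once per hour.
from datetime import datetime, timezone

def build_usage_records(counts: dict, customer_id: str) -> list:
    """Pure helper: turn {dimension: count} into BatchMeterUsage records."""
    ts = datetime.now(timezone.utc)
    return [
        {
            "Timestamp": ts,
            "CustomerIdentifier": customer_id,
            "Dimension": dim,
            "Quantity": qty,
        }
        for dim, qty in counts.items()
        if qty > 0  # skip dimensions with nothing to report
    ]

def report_hourly_usage(product_code: str, counts: dict, customer_id: str):
    """Send the hour's usage to Marketplace (max 25 records per call)."""
    import boto3  # lazy import so build_usage_records stays testable offline

    mm = boto3.client("meteringmarketplace")
    return mm.batch_meter_usage(
        UsageRecords=build_usage_records(counts, customer_id),
        ProductCode=product_code,
    )
```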

Thumbnail 3040

That brings us toward the end of the talk. In terms of next steps, there are a couple of workshops if you'd like to get involved, build an agent yourself, and then sell it on AWS Marketplace. We've got a session tomorrow from 1:00 p.m. to 2:00 p.m. in Caesars Forum. There are also some published labs: one covering AgentCore container-based AI agents and tools, and one for the SaaS model covering SaaS API-based AI agents and tools. If you have something you want to take to market, I encourage you to work through the labs and publish your product. I think that's about it in terms of what we have, so thank you for your time. I'm happy to answer any questions anyone has.


; This article is entirely auto-generated using Amazon Bedrock.
