🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Build useful, reliable agents with Amazon Nova (AIM372)
In this video, Amazon AGI Principal Product Manager Lori Knapp introduces Amazon Nova, a family of foundation models designed for agentic workflows. The session covers the evolution from generative AI chatbots to task-oriented agents and multi-agent systems. Key capabilities discussed include native tool use, multi-step reasoning with configurable levels, and extended context up to 1 million tokens. Rob demonstrates Nova Act's browser automation achieving over 90% reliability through supervised fine-tuning and reinforcement learning. Michael explains multi-agent systems using Strands framework, highlighting how specialization, modularity, and parallel execution improve outcomes by up to 70%. Real-world examples include Trellix's security alert triage, PGA Tour's automated QA testing, and Sumo Logic's 75% reduction in threat resolution time. The Nova 2 family includes Lite, Pro, Omni, and Sonic models, each optimized for specific use cases.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
The Evolution from Generative AI to Agentic Systems
Hello everyone, and welcome. Thank you for attending today's breakout session. My name is Lori Knapp. I'm a Principal Product Manager at Amazon AGI, where we are building Amazon's family of foundation models and services called Amazon Nova. 2025 has been called the year of agents. We're making a transition from models that can generate insights to models that can take action. But to make that transition, we need models and systems that have a broader set of capabilities.
In today's session, we're going to talk about the evolution we're seeing with our customers, going from model outputs to agentic systems. We're going to talk about the various foundation layers that make that transition possible. Then I'll go into a deeper view on what types of capabilities you should look for when you're picking a model for an agentic workflow. Then my colleague Rob will talk about Amazon Nova Act and specifically how it's designed to make browser-forward agent workflows extremely reliable.
Next, Michael will cover multi-agent systems and how you can use things like strands and various different specialized models to enable more broad and complex workflows. And then we'll wrap with some Q&A. When we think about customers adopting generative AI into their businesses, we typically see a clear evolution. They start with generative AI systems. These typically take the form of chatbots that might pull information from various internal systems. They're able to generate insights, maybe be brainstorming partners, and create reports. But they have a specific limitation: they aren't able to actually act within your systems.
And that's where agents come in. These are task-oriented, purpose-built AIs that don't just provide insights; they actually complete work. They're really bridging that gap between intelligence and execution. As we see this continue to evolve, we see it going towards multi-agent systems. They'll be able to take on more complex tasks, coordinate, delegate tasks, and work towards a common goal together. But to get there, we need models and systems that have capabilities allowing them to complete end-to-end workflows reliably without human intervention.
To make this a little more concrete, here's what it means to go from answering questions to taking actions. In coding, it's like going from explaining what's going on with this error to actually generating a whole code artifact, to then an agent that is able to write the code, run tests, and actually deploy fixes in your systems. In enterprise, we can take the example of a customer service use case. You might start with having a model that analyzes customer complaints and summarizes the top issues, passing that to a human for analysis. Then you move towards a chatbot that is able to interact with the customer and provide that first-line interface. Finally, you have an agent that not only takes in the customer issue but is able to look into your systems, understand your policies and specific frameworks, and then actually triage that issue and provide the customer a resolution.
Three Foundational Layers for Building Agents
In consumer applications, it's going from "what should I cook for dinner tonight" to actually creating the shopping list, to an agent finally that is able to order those groceries and have them arrive at your door without your intervention. Now that we've talked about what agents can do, let's talk about what makes them work. We think about three primitive layers when we think about building agents. The first are the agentic primitives. These are the building blocks that actually allow your agent to interact with the world. You can think about things like tool orchestration capabilities, memory so that they can effectively manage context, and observability capabilities so that you can monitor your agent performance and understand what the agent is doing.
At Amazon, we offer Bedrock AgentCore, which provides tools and systems that let you operate and deploy agents at scale in a secure fashion. The second key layer is the models themselves. Not all models are equally capable at agentic workflows. You need a model that can reason through complex tasks, break down a plan of action, and call tools reliably to complete work.
Our latest Amazon Nova family of models, which we announced yesterday, is designed specifically for agentic workflows, and I'll talk a little more about that in a moment. And finally there are the multi-agent frameworks. As we get to more complex real-world use cases, it's often not enough to have a single agent performing the end-to-end workflow. You need multiple specialized agents working together. Agentic frameworks like Strands give you a structured approach to this type of multi-agent interaction, with things like built-in orchestration patterns. Many customers choose to mix and match across these layers, and all of these options are interchangeable, but we also offer more end-to-end solutions. Amazon Nova Act combines the primitives, the models, and the tools to provide not only a simple developer experience but also higher reliability. Rob will get into that in more detail later on.
Amazon Nova 2 Models and Native Tool Use Capabilities
For now, let's talk a little about the model piece. As a quick intro, this is what we announced yesterday: four new Nova 2 foundation models. The first two, Nova 2 Lite and Nova 2 Pro, are our multimodal understanding models. Lite is a workhorse model that's great for everyday production workloads where latency matters, cost matters, and you want to scale use cases. Pro is a more intelligent version of the model, so when you have more complex tasks, you would upgrade to something like Pro.
We also announced Omni, which is our first unified multimodal reasoning model that can take in any modality and output image and text. And finally we have Nova 2 Sonic, which is a speech-to-speech model. As you're thinking about various agentic applications, each of these models is designed for specific use cases, and you might want to check them out. If you have a speech-forward customer service agent, Nova 2 Sonic is great, versus everyday workloads where Nova 2 Lite is a great model.
So now let's talk about the capabilities that we've thought about to make these models perform well for agentic tasks. I think there are really three key things that an agent needs to be able to do. The first is calling tools. These are the capabilities that allow your agents to interact with your systems, which could be querying a database, interacting with an API, navigating the browser, or executing code. Without tools, agents really cannot complete work in your systems.
The second key thing is multi-step reasoning. This is what allows a model to effectively break down a task, understand it, plan an approach, and adapt when things go wrong. And then finally is extended context. When we think about real world workflows, you often have a lot of data that you want the agent to be able to understand to actually complete the task. Extended context allows the model to take in that data and really parse through and understand what's relevant in the moment.
So to dive a little bit deeper into each of these capabilities: when we think about native tool use, the model needs to be able to reliably select a tool at the right time for the right job. It needs to pass through the right parameters to that tool so that it calls it effectively, and it needs to be able to understand the output it gets back from the tool and plan the next step. With Nova 2 we've designed it for fast and reliable tool calling. It's able to properly generate the formatted parameters and call these things effectively, and those are common areas where other models might trip up and break agentic workflows.
We also have enabled it to intelligently chain together tool calls, which could either be operating in sequence, and we'll see an example of that in a moment, or calling a tool in parallel to get multiple responses back to formulate the best answer. Our models also now come with built-in tools. Two in particular are code interpreter and web grounding. Code interpreter allows the model to write some code and then actually execute it to get a response back, and web grounding allows it to collect real-time information from the web to better inform answers.
So I want to show an example of how easy it is to use a built-in tool with Nova. You'll see on the top left is a tool config. All we need to do here is set a parameter that lets the model know it has access to this tool. In this case we're letting it know it has access to Nova code interpreter. We don't need to write anything to create the tool. We don't need to have a dedicated sandbox. We just let the model know it has access. So in this example we'll give it a question about what is the square root of this very large number and you'll see what it does is it understands that in this case it would be helpful to be able to execute a piece of code, so it calls that tool.
It gets the response back and then uses that response to inform its final output, letting us know the square root is a large and complex number that I will not attempt to read. This is as simple as just giving it that parameter, and then it knows when to use that at the appropriate time.
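As a rough sketch of what that configuration looks like in practice, the fragment below builds a request snippet that simply names the built-in tool. The field names here ("toolConfig", "systemTool", "nova_code_interpreter") are illustrative assumptions rather than the exact Bedrock API shape:

```python
# Hypothetical sketch of enabling a built-in tool. The field names
# ("toolConfig", "systemTool", "nova_code_interpreter") are illustrative
# assumptions, not a confirmed API schema.
def build_tool_config(tool_name):
    """Return a request fragment that tells the model it may call a built-in tool."""
    return {
        "toolConfig": {
            "tools": [
                # No schema, handler, or sandbox to define: we only name the
                # built-in tool, and the model decides when to invoke it.
                {"systemTool": {"name": tool_name}}
            ]
        }
    }

config = build_tool_config("nova_code_interpreter")
```

The point the speaker is making is that this is the whole integration surface: one parameter, no tool implementation on your side.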
Multi-Step Reasoning, Extended Context, and Real-World Performance
Not every question we have is as simple as a square root. When we think about complex workflows, what really matters is the model's ability to break that down and understand all the different steps it needs to take to actually complete a task. This involves choosing the right tools in the right order and adapting when necessary. Sometimes the model will choose a tool that is not the right tool, and it needs to understand it did not get the output it needs and go back and try again. This is where reasoning becomes really important.
Our latest family of Amazon Nova models are reasoning models, and what we have done is given you control over how much reasoning you let the model do for your use case. This allows you to choose the right performance and efficiency that you want for your use case. You can choose between no reasoning, low, medium, or high reasoning depending on how complex that use case is. I will show two examples of when you might want to think about using low reasoning or no reasoning versus medium reasoning.
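A minimal sketch of what per-request reasoning control could look like follows. The parameter name "reasoningEffort" and the validation are assumptions for illustration, not the documented Nova API:

```python
# Hypothetical sketch of choosing a reasoning level per request.
# The parameter name "reasoningEffort" is an assumption for illustration.
VALID_LEVELS = ("none", "low", "medium", "high")

def inference_config(reasoning_effort="low"):
    """Build an inference-config fragment with a chosen reasoning level."""
    if reasoning_effort not in VALID_LEVELS:
        raise ValueError(f"reasoning effort must be one of {VALID_LEVELS}")
    return {"reasoningEffort": reasoning_effort}

# A latency-sensitive chat lookup might use "none" or "low"; a multi-step
# code-change workflow might warrant "medium" or "high".
chat_cfg = inference_config("low")
coding_cfg = inference_config("high")
```

The design choice this mirrors: reasoning is a dial you set per use case, trading tokens and latency for planning depth.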
In this case, we have a chat application where latency matters a lot, so we want the answers to be fast, and the questions are typically more straightforward. We are going to ask what Andy Jassy said about AI developments in his shareholder letter. We have given it access to the two built-in tools: web search and code interpreter. It knows it wants to search the web for this, so it uses the web grounding tool. It pulls from various sources and then provides a formulated answer, and you will see little citations telling you where it got specific information. This is very fast, a simple tool call that it is able to do without reasoning.
In this next example, we are going to show a more complex change. In this case, we wanted to update the GitHub repo for our Amazon Nova 2 Lite model announcement, so we gave it a workflow to understand the issues, understand where it needs to make changes, and actually make those code changes for us. You will see here that it is going to make about 15 to 20 tool calls. We will not show all of them, but we will walk through it: reading the issue and understanding what it needs to do, then calling a tool to add a comment describing its plan to make the changes. It searches for the files it needs to change, makes the code updates in those files appropriately, then creates the pull request and ends with another comment saying it has completed the task and is ready for merge. This is a much more complex problem, but it is able to reason through it step by step, going through the file logs, finding the right file it needs to change, and really adapting to the environment as it goes.
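The step-by-step flow described above follows a generic tool-calling loop: the model either requests a tool or returns a final answer, and tool results are fed back into its context. The sketch below illustrates that loop with a scripted stand-in for the model and stubbed tools; none of the names reflect Nova's actual API:

```python
# A minimal agent loop, purely illustrative: the "model" is a script that
# requests three tool calls (read issue, edit file, open PR) and then answers.
def scripted_model(history):
    """Stand-in for a model: pick the next step based on tool results so far."""
    steps = [
        {"tool": "read_issue", "args": {"id": 42}},
        {"tool": "edit_file", "args": {"path": "README.md"}},
        {"tool": "open_pull_request", "args": {}},
        {"answer": "Task complete, ready for merge."},
    ]
    done = len([h for h in history if h[0] == "tool_result"])
    return steps[done]

TOOLS = {
    "read_issue": lambda args: f"issue {args['id']}: update model name",
    "edit_file": lambda args: f"edited {args['path']}",
    "open_pull_request": lambda args: "PR opened",
}

def run_agent():
    history = []
    while True:
        step = scripted_model(history)
        if "answer" in step:                        # model decides it is done
            return step["answer"], history
        result = TOOLS[step["tool"]](step["args"])  # execute the requested tool
        history.append(("tool_result", step["tool"], result))

answer, trace = run_agent()
```

In a real workflow the model chooses tools dynamically and adapts when a call fails, but the loop structure of request, execute, and feed back is the same.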
The last key thing that is important when we think about agentic workflows is extended context. In your enterprise real world workflows, we often need to understand a lot of information. This could be anything from processing long documents, understanding large code bases, or just understanding all those tool calls back to back and not losing track of where you are. What we are really excited about with our new Amazon Nova family of models is the longer context length, so up to 1 million tokens. This enables the agents to process things like up to 1000 lines of code, documents that are 400 pages, or even videos that are up to 90 minutes long. I think that is really exciting in terms of enabling enterprises to actually pull their information and allow the model to really act on it appropriately.
Let us quickly share some relevant benchmarks that relate to these capabilities for our latest models. We will talk about tool use, reasoning, and information extraction from documents, correlating to the three capabilities we discussed. What we are excited about is that both our Lite and Pro models perform really well across these areas compared to key competitors. They are definitely worth trying as you think about your own agentic workflows. But I think what is even more important than benchmarks is what we hear from customers. Models can be great at benchmarks, but if they do not perform on real-world use cases, it really does not matter.
Here's an example from a customer, Trellix, who was one of our early adopters of Nova 2 Lite. Trellix provides security solutions for organizations, including endpoint protection, network detection, and threat response. They had an issue where they receive a lot of security alerts, and each one requires a human analyst to dig into their systems and triage it to understand whether it's a genuine threat. For low-level alerts, their analysts don't have the time to do this work for every single one.
What they did was give Nova a set of pre-vetted expert rules and access to their systems so it could pull from their databases, endpoints, and network logs to create a full-fledged report. Ultimately, it provides a report and a final assessment of whether something is a genuine threat or not. What's really exciting is the very high reliability they've seen on tool calling. They've seen no failures on tool calls, which I think is a key core component of what makes an agent great. They've also seen higher accuracy from Nova 2 Lite on threat classification and much more in-depth analysis.
This is super exciting for these early results, and we're really excited for you all to try these out with your own agentic workflows and see what works best for you. So with that, I'm going to pass it off to Rob, who is now going to talk about the more integrated end-to-end solution with Nova Act.
Amazon Nova Act: Building Reliable Browser-Forward Agents
Thanks, Lori. As Lori pointed out, there are a lot of components that go into a powerful and flexible system: the model, the orchestration, the memory, the compute. We found when working with customers that in addition to that flexibility and power, a number of use cases really need high reliability. So we built Nova Act, and I'll talk through what drove that decision as well as where we found the biggest return.
Across the wide range of workflow automation, you can imagine a lot of different use cases. For things where you're calling tools or APIs that are fairly well structured, or producing code that is verifiable, the path to reliability is a little faster. You can quantifiably identify whether a tool call failed or not. When you're doing something deterministic, it's a little easier to get to the high reliability that's needed for production workflows.
In the UI, it becomes a little bit more complex because you're doing a multitude of different steps to accomplish a single workflow, and each of those may have a much wider range of possibilities. You can click anywhere on a screen. You can enter almost any value in a field. Now that doesn't seem like it's that much more complexity, but when you're doing 20, 30, 50, or 100 steps to complete a workflow in the UI, as we all do day to day in our SaaS applications, you end up compounding the risk. So reliability becomes a much more critical focus in getting one of these workflows to be production scale.
When thinking about what the key components are for workflow automation in the enterprise, we think of three things. The first is reliability, which I touched on and we'll go into a deeper dive on. The other two are scale and control. Scale is important because in reality, you can give a co-pilot to any one of the employees at your company, but you're just improving their productivity. That's completely different than a full system automation that can run at scale in parallel without anybody needing to oversee it.
That's the power of true reliable automation—you can run, say, 1000 QA tests all at once. You can start running automation that people don't need to oversee directly. But as soon as you start taking away that observability and the co-pilot concept, now you need to figure out how to bring back in human reasoning. You need to be able to ensure that a person can exert judgment at the right time and place, so not only are they adding value, but you make sure that an agent is accomplishing the right task in the right way and you're not breaking it into smaller and smaller chunks that are just being handed back to the human.
So ensuring that you have the right observability, as well as what we call human in the loop, where you can pull a person in to exert judgment at the right time and then hand back to the agent to complete the rest of the task: this is how you start building a very powerful system that can be really useful and drive ROI. Talking first about reliability, and going back to this concept of the UI versus more deterministic API calls and MCPs: most LLMs today, out of the box, have no understanding of UI elements. The first date dropdown they encounter in an image is the first date dropdown they have ever seen. That becomes very complex when you think about having to navigate UIs that we all take for granted.
I say "take for granted" knowing that there are very unintuitive UIs out there, despite the fact that we've had decades of experience seeing every permutation of the date dropdown. So imagine doing this blind as an LLM that's never seen any of this before.
Step one is looking for a model that has been introduced to these concepts. Nova Act does this. We do supervised fine-tuning, where we provide human-annotated examples, basically adding training data so that the model can natively understand where to look for a date dropdown, a search box, or a tab for navigation: the building blocks of understanding the UI.
On top of that, the complexity becomes helping the agent learn cause and effect. Most LLMs powering these agents today are built on imitation learning. Imitation learning basically gives the model an example from a human or from another system to replicate, and LLMs are very good at that. The problem is that UI flows are not a narrow slice of the distribution. There is a wide range of possibilities, and any single step that goes the wrong way can lead you to a dead end that is very hard to get out of.
Understanding cause and effect of each action is critical for an agent to be both flexible and reliable. What we do is train using reinforcement learning, and you can think of this as giving it a task to escape a maze. This could be booking a flight, which is the example that everybody loves. Along the way, we create synthetic environments like these mazes that have rewards and penalties, and we let the agent run against these synthetic environments at scale—tens of thousands of times—to learn what was a good decision and what was a bad decision.
You might think the agent learns how to get out of that maze, but when you start applying that to the UI, it starts understanding patterns. Why would I go to a search field as opposed to looking for tabs? We all know that there are certain sites where you go to the search because it's an amazing search experience. You also know what it looks like when you hit that search experience and your results do not work, so you immediately back out and start trying to navigate a set of files or tabs. The agent learns something similar, and so it becomes much more generalizable and understandable when it tries to do the same thing in a new environment.
We've done that at scale. We've built hundreds of gyms that simulate both public and enterprise environments. We then give them thousands of workflows—those are tasks to accomplish across each of those environments—and let them run. That builds up this generalized knowledge of how to navigate and what the cause and effect is of making one choice versus another. On top of the understanding of each of these UI elements, this makes them very powerful and flexible to get work done in the same way that a person might when there is no MCP or API to call.
What you get at the end of this, whether it be our system or any other, is an agent that can navigate complex workflows in non-deterministic UIs and do so reliably. For us, reliability doesn't mean that the first time out of the box the demo works flawlessly. What we really mean is once you've taught it through prompt tuning, context engineering, and instructions, it does it every time because that's the same as what you'd expect from hiring an employee. You teach it how to navigate your systems, you teach it the outcome that you're expecting, and then you expect it to do that work over and over. That's how we've architected Nova Act.
In terms of use cases, we see four categories of capabilities that start to compose all of the common workflows in the enterprise. The first is form fill. This sounds simplistic—I've got a web form, let me go fill out a survey—but in reality, most SaaS tools today are a combination of data being entered into fields and buttons being clicked to forward a workflow. The same can be applied to an agent. If you think of CRM, ERP systems, actual forms to file with external parties, these are all core components of a strong agent.
Search and extract is another category. You think of it as search, but in reality, a lot of enterprise information is locked behind credentials. If you can provide an agent that can act like an individual, it can start accessing information that would not be available otherwise. It's not on the public web, it's not behind an API, and nobody is going to take the time and effort to wrap an MCP around a one- or two-time use. This is the power of an agent that can navigate and understand what it's looking at. If you think about deal diligence for an acquisition, you don't need to teach an agent how to navigate a deal room. You just tell it to go pull the relevant information from the PDFs, and it logs in, pulls that information, and retrieves it for you.
A more complex version of that is a booking and checkout flow. I used the flight search example earlier, and everybody loves that one. That's essentially a very complex version of form fill, navigation, sometimes search and extract, and executing on the customer's behalf. In flows like that, we really encourage the use of the human in the loop to verify payments, verify before checkout, or answer questions about variations. This allows you to have a generic agent that can still do 80 or 90 percent of the work and bring only the value judgment to the human who's involved.
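That human-in-the-loop gate can be sketched as a simple pattern: the agent handles routine steps and pauses at value-judgment steps for a person. The sketch below is a generic illustration of the pattern, not the Nova Act API; the step structure and names are assumptions:

```python
# Generic human-in-the-loop gate, purely illustrative: the agent does the
# bulk of the flow and pauses at judgment steps (seat choice, payment).
def run_booking_flow(agent_step, ask_human, steps):
    """Run each step with the agent unless it is flagged for a human."""
    results = []
    for step in steps:
        if step.get("needs_human"):
            results.append(ask_human(step["prompt"]))   # hand to a person
        else:
            results.append(agent_step(step["prompt"]))  # agent handles it
    return results

steps = [
    {"prompt": "search flights SEA to JFK"},
    {"prompt": "fill passenger details"},
    {"prompt": "choose a seat", "needs_human": True},   # preference call
    {"prompt": "confirm payment", "needs_human": True}, # verification step
]

out = run_booking_flow(
    agent_step=lambda p: f"agent did: {p}",
    ask_human=lambda p: f"human decided: {p}",
    steps=steps,
)
```

The design point: the agent still does 80 or 90 percent of the steps, and the person is pulled in only where judgment or verification genuinely matters.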
So in that case, you don't need to have an employee who's doing all the work, but maybe they just choose their seat because they have preferences that you don't want to try to program. The final one is QA. QA is very powerful because the way that we teach this model, it uses visual perception. So we're not looking at HTML in a typical unit test; we're actually looking to see what the customer would observe. And so you can start walking through the same concepts of checkout flows and booking, and you can identify things like revenue leakage.
We have customers who have identified that the search function or the booking function in their customer-facing UI has breakages. You can run these QA tests at scale all the time and be proactive about this because the agent is flexible. So at the end of this, what we've built with Nova Act in particular, we focused on both academic and real-world stats. I keep coming back to this reliability because this is what we really think matters in the enterprise. This is what we hear from customers all the time: it doesn't matter if it worked the first time, it matters if it works every time.
And so out of the box is not the goal; making it very easy to debug, deploy, and configure is the goal. And then once you're confident that it's working, you should be confident that it works every time. So we're seeing with customers right now, our early design partners, over 90 percent reliability, meaning every time it runs, it is highly likely to succeed in its task. This is not availability of the system; this is accomplishing the goal. And on top of that, we've scored state of the art in common benchmarks for agent use, and we're very proud of that.
But really, reliability is the one we think is most important. I'll give you an example: the PGA Tour was an early design partner who worked with us to manage a QA process. Their challenge is that sponsors must be displayed in certain contractual ways on each of their scoring sites for every event, roughly 40 events a year, four days each, all year long. They have to ensure that each of these elements is displayed on the page correctly and meets all of their SLAs. Seems simple, and it was.
You had a human who unfortunately had to get up at 2 in the morning local time to go check this before the event started and then check multiple times throughout the day. This seems like a really simple task, but at the same time, it had real revenue implications. This is the primary source of revenue. If they violated one of these covenants, they don't get paid. And so they were struggling to maintain their quality while scaling up their team and allowing them to focus on the fixes and be more proactive.
They worked with Nova Act to automate this, and now they can run it with increased coverage, the same reliability, and they can focus all their time on fixing bugs and being more proactive towards other areas of the site. It's a real success case, and again, the real value here is that reliability. They don't need to have a person getting up at 2 in the morning to check if the agent ran. They're just getting pinged if there's a problem. And that's the big step change. That's what we're really looking for.
Multi-Agent Systems with Strands: Specialization, Scalability, and Efficiency
So these are small examples of workflows that are in the UI by themselves. I'm going to hand over now to Michael, who's going to talk about how you can combine these systems into a multi-agent workflow to get even more power out of this. Thank you. So we've heard from Lori about how to build agents using Nova with built-in tools, using an MCP server. We've heard from Rob about how to build reliable browser agents with Nova Act.
I'm going to talk about how we bring that together with multi-agent systems. Now, there are lots of different ways that you can build with multiple agents. One offering that AWS has is the open source SDK called Strands. This is built around a model-first approach, and it's also model agnostic, so you don't have to use a Nova model, although I will say that we have spent a lot of time optimizing Nova to work well here. It doesn't even have to be a model from Bedrock. So it's really flexible and allows you to bring in different agents and different models to get the task done that you need. Before I dive in too much, I want to set the context of why multi-agent matters, why this is important, and why I recommend you think about this framework for your use cases. By a show of hands, how many people here have had to make a call on hiring someone? How many people have had to build a team, whether it's on the playground or in fantasy football? Just about everyone here, I think, has had to think about building a team, and when you do that, you're not looking to build a team of just point guards, or just marketers or engineers. You need a mix of talents so that the team works together toward a common goal.
That's the same sort of approach that multi-agent brings. Rather than using a large monolithic model that might be very capable, you break it down into multiple agents that can be multiple models tailored to each task. And the research backs this up as well. Using multi-agent frameworks improves outcomes in complex tasks by up to 70 percent. That's a big difference. So let's dive into really what those benefits look like, and there are three main things that I'm going to focus on.
So first, specialization. Now this gets back to building the team, right? There are so many different tasks that make up a workflow. You might need to generate code, you might need to generate an image, you might need to summarize a document. All of those things could be done by a larger model, but maybe not at the cost that you want or at the latency that you want, or even at the accuracy that you want. And so being able to apply a particular task to a model or an agent can help make major improvements.
The other thing we announced just today, in addition to the Nova 2 models, is Nova Forge. Nova Forge is a way to deeply customize models with your own data, which can be really effective for more niche use cases. And there are other ways to customize as well: you can fine-tune a model, or you can distill a larger model down to a Micro- or Lite-sized model, and this is really effective for getting the mix of accuracy, cost, and latency you need for each task.
So number two, scalability and modularity. Think of this like a microservices design. I'm sure many of you have worked with that in a past life, but it really applies here as well. When you need to add another component to the workflow, you can just add another agent. Rather than rearchitecting everything or rewriting a prompt that you spent weeks, maybe even months, trying to optimize, you add this other agent into the workflow with its own dedicated prompt and its own dedicated tools.
It's built to work as part of the broader system. And the same thing here applies if you want to upgrade a model. We just launched Nova 2, which is much better than our Nova 1 models, and it's also backward compatible, so it's very easy as new models are launched to swap them in where it makes sense. Again, much easier, less worry about having to rewrite prompts and deal with that pain of trying to optimize something to work just right.
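To make the modularity point concrete, here is a hedged sketch of a workflow registry in which each agent carries its own prompt and tools, so adding a new agent, or swapping in a newer model, never touches any other agent's prompt. The `AgentSpec` and `Workflow` classes are hypothetical, not part of any Nova SDK.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Each agent bundles its own model, prompt, and tools in isolation."""
    name: str
    model: str
    system_prompt: str
    tools: list = field(default_factory=list)

class Workflow:
    def __init__(self) -> None:
        self.agents: dict = {}

    def add_agent(self, spec: AgentSpec) -> None:
        # Registering or replacing one agent never changes another
        # agent's prompt or tools.
        self.agents[spec.name] = spec

wf = Workflow()
wf.add_agent(AgentSpec("summarizer", "nova-lite", "Summarize documents."))
# Upgrading to a newer model is a one-line swap; the prompt stays untouched:
wf.add_agent(AgentSpec("summarizer", "nova-2-lite", "Summarize documents."))
```

This mirrors the microservices analogy: each agent is an independently replaceable unit behind a stable interface.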
The third point is latency and efficiency. There are two major points here. When you have a larger model, tasks generally have to be done sequentially. You would tackle task A, B, C, D, and so on, rather than looking at those tasks and executing them in parallel where you can. This approach by itself will save you lots of time. It also ties really well to the point about using models and agents that are tailored to that task. You don't need to use a large model to summarize something. You're going to spend more money than necessary, and it's going to take longer than needed. You can use a smaller model like Nova Micro, and it's going to do quite well at simple tasks while doing it faster and using fewer tokens at a lower cost than a larger model would. Both parallel execution and being able to use the right model for the right task will vastly improve your latency and efficiency.
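The parallel-execution point can be sketched with plain `asyncio`: independent subtasks are dispatched concurrently, so total latency is roughly the slowest subtask rather than the sum of all of them. The agent calls here are stubs standing in for real model invocations.

```python
import asyncio

async def run_agent(name: str, seconds: float) -> str:
    # Stand-in for a model call; a real agent would invoke an LLM here.
    await asyncio.sleep(seconds)
    return f"{name} done"

async def run_parallel(tasks):
    # Independent subtasks run concurrently; asyncio.gather preserves
    # input order in its results.
    return await asyncio.gather(*(run_agent(n, s) for n, s in tasks))

results = asyncio.run(run_parallel([("triage", 0.01), ("summarize", 0.01)]))
```

A sequential loop over the same tasks would take the sum of the durations; gathering them takes roughly the maximum.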
Let's take a real-world example here. Sumo Logic is a company really focused on trying to solve cyber threats for their customers. This is only getting harder in today's world. There are more complex threats, and they seem to be happening more often. Nova comes into play here by helping to power their agents. They have a series of agents as part of their Dojo AI that is focused on security operations. Nova does things like coordinate triage between issues, translate natural language requests into Sumo Logic's query language without requiring you to memorize the syntax, and analyze data by bringing disparate threads together. All of these are separate dedicated agents that work together to accomplish the task of identifying threats as fast as possible.
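As a toy illustration of that kind of pipeline of dedicated agents (triage, query translation, and analysis working together), here is a minimal sketch. The function bodies are stubs; the query syntax, severity threshold, and field names are invented for illustration and are not Sumo Logic's or Nova's actual logic.

```python
# Hypothetical three-agent pipeline: each function stands in for a
# dedicated agent with its own prompt and tools.

def triage_agent(alert: dict) -> str:
    # Coordinate triage: classify alert priority (threshold is made up).
    return "high" if alert.get("severity", 0) >= 7 else "low"

def query_agent(request: str) -> str:
    # Translate a natural language request into a query string
    # (illustrative syntax only).
    return f"_sourceCategory=security | where message matches '{request}'"

def analysis_agent(priority: str, query: str) -> dict:
    # Bring the threads together into a recommended action.
    action = "investigate" if priority == "high" else "log"
    return {"priority": priority, "query": query, "action": action}

alert = {"severity": 9}
result = analysis_agent(triage_agent(alert), query_agent("failed logins"))
```

Each stage could be a separate agent on a different model, which is what lets the overall system stay fast and cheap while remaining accurate where it counts.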
The result of this is that with Nova, Sumo Logic has been able to reduce their resolution time by 75 percent. That's a significant difference when it really matters and there's the chaos of an unknown threat impacting your enterprise. I want to leave you with some takeaways here. There's no doubt that the future is agentic. You hear this buzzword all the time now. Agents are mostly about getting things done in the real world and doing so effectively. We talked about several different ways of building agents with Nova, from the built-in tools that come out of the box to bringing your own tools or MCP servers. We talked about building highly reliable and useful browser agents with Nova Act, and then we talked about bringing that together with multi-agent systems to improve latency, reduce cost, and lead to better outcomes.
Call to Action: Building with Nova Today
What has really made me excited is how many customers are already building with Nova today, and many of these customers are building agents with Nova. It's great to hear the positive feedback and see the ways that Nova is helping customers solve these really challenging problems. It's one of the things that I enjoy most about my job. But I want to leave you with a call to action to try this out yourself. We just launched these Nova 2 models, and they're highly capable of using tools and building agents. One of my favorite things that we just launched is a builder playground. Amazon.com/Dev will give you the opportunity to build an agent using a very easy UI, so it's a little less intimidating than having to go to the API and build it out that way. You can actually deploy these agents, see how they do, and make changes to the prompts all using a very easy user interface. Then when you're ready, we'd love to have you actually building on Bedrock and seeing how Nova models can help you accomplish your goals.
Thank you very much. We'd love to take questions, probably right down here. We'll be around for a little bit longer if anyone wants to chat with us. Thank you.
; This article is entirely auto-generated using Amazon Bedrock.