🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - AdTech Innovation with AI-Driven Development for Brand Agents (IND3334)
In this video, the speaker presents the AI Development Lifecycle (AI-DLC), a process that enables organizations to move ideas from backlog to production-ready code in just five days. The session focuses on a real-world case study with Nativo, an ad tech company that successfully built an "agents building agents" system after it had sat in their backlog for two years. The AI-DLC process involves teaching the AI full context through discovery, requirements analysis, domain modeling, and code generation. The speaker details Nativo's implementation using AWS services including Bedrock, Lambda, AgentCore Memory, and LangGraph to create scalable customer-facing chat agents for advertisers. Key challenges addressed include brand voice consistency, safety guardrails, asset management, and cost efficiency. The presentation emphasizes treating AI as a development partner rather than just a tool, with the process repository available on GitHub for customization.
Note: This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Accelerating Development with AI-DLC: A Five-Day Path from Backlog to Production
Hello everybody. How's everybody doing today? Welcome to re:Invent. I'm on fire. Can't you tell? All right. So what are we talking about here? You've got ideas, lots of ideas. Your organization has ideas: big ones, small ones, great ones, not-so-great ones. But how many of them are just sitting in your backlog because you can't seem to get them approved, or to get enough resources, time, or people to test them out? So how do we get there? What's the vision? How do we build all these things at speed and at scale?
I'm going to talk to you today about a couple of different processes. First, we're going to talk about AI-DLC, which is using AI as a partner in your development instead of as a tool. Second, how do you build agents that build agents? Say you want to build agents at scale to talk to your customers, to make it faster to build chat, video, whatever you need to do with customers. Great. How do you do that? And how do you make this model your own? Your backlog is waiting. It's big. There's lots of stuff in there. It's great, but how do you get it done quickly? You've got great ideas, but what happens to great ideas when they sit on the shelf for six months? Somebody else does it, or somebody else figures out how to do it for you. So what are you going to do?
So first we're going to start with the process, and the process is the AI Development Lifecycle. Take all the best bits of agile. When you're talking about agile, you talk about all of the pieces that go with it: user stories, visioning documents, code, domain models, all of those artifacts that go into the process. And what's the biggest problem today with AI coding? It's that your AI doesn't know everything that's going on. So how do you inform the AI of your full context without running out of tokens every two minutes, and without circling back through the same problems over and over again, as you've all seen with AI coding?
This process goes through from the beginning, and you teach the AI through every step of it how to understand your problems. What are your goals? What is your understanding? And those day counts aren't hyperbole; they're real. We do it with customers: five days to production-ready code. What does that take? During discovery, you have to teach the AI about your environment. What does your environment consist of? CI/CD tools, your development environment, the libraries you use, your preferred languages, which services are whitelisted or blacklisted; all of those pieces go together. And you start with a vision. Your executive, or a product owner, or whoever it is, has a vision for a product. In this case, the vision was agents building agents: going faster to produce customer-facing agents. I'm going to say the word agents a lot. I apologize.
Once you have your environment and everything established, you move into requirements analysis. You take all that vision and turn it into user stories, but user stories with enough technical content that you can now build a domain model. The domain models are separated so that each is solvable as a coding step, and then you use those domains and those stories to actually build your code, your test harness, and your CI/CD deployments. And then finally you deliver it. Look, it's not without its hiccups; it's AI. We all know these models aren't perfect. They're trained on humans, and they're as imperfect as humans. So you're going to have to do some work, especially on days three, four, and five. But on day five, you should be looking at your backlog and asking: what is the next set of features? What new things have we discovered during this process?
It's heavy duty. It's hard. It takes a lot of thinking and time. It takes a lot of tokens and context. We're not going to get too much deeper into it, but you might ask: how does this align with my ideals or my existing agile software development lifecycle? There's a clear mapping, and you get outputs into your current tools. Tools like Jira will hold all of the stories, and as you do development, those stories advance through the process: backlog to in progress to testing to validated to deployed.
As you work through your agile lifecycle and define how you like to do it, you inform the tool of your preferences, and the tool helps you manage the whole stack. It's not magic. These are text files that contain very rich prompts. If you want to dig into the process, I'll put up a link at the end for the process repo, or you can get it from GitHub and make it your own. They're just text files. You can adapt them to work in your environment, with your restrictions, your regulators, and whatever else is necessary.
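To make that concrete, here is a minimal sketch of what one of those prompt files might contain. The file name, tools, and rules below are hypothetical, not taken from the actual repo:

```text
# discovery.md -- hypothetical AI-DLC discovery prompt (illustrative only)
You are a development partner, not a code generator. Before proposing any
design, learn this environment and restate it back for confirmation:
- CI/CD: GitHub Actions, deploying via CDK to one AWS account per stage
- Languages: TypeScript for services, SQL for reporting
- Approved services: Lambda, API Gateway, S3, RDS PostgreSQL, Bedrock
- Blocked: any service or library not on the approved list without review
Ask clarifying questions until you can restate the product vision in one
paragraph, then record the answers in docs/context.md for later phases.
```

Because it's just text, swapping GitHub Actions for Jenkins or adding a regulator's checklist is an edit, not a code change.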
Building Agents That Build Agents: Nativo's Solution for Scalable Brand-Safe Advertising
In advertising and ad tech, the space we're working in here today, what are some of the key challenges? Brand voice is critical. Your advertising has to speak on behalf of a brand. Let me give you an example of a tire company. If you're selling tires for minivans, what are you going to emphasize? You're going to say they're safe, durable, and they'll take your family home in comfort and style. But if you also sell big lug tires for pickup trucks that go in the mud and spray mud everywhere when people are driving through the dirt, then you're going to say these are tough, they've got big lugs, and they're American made. You have a very different voice for each of those types of products.
You need to be able to standardize and simplify the creation of that voice. It also has to be safe. If somebody tries to get your AI to give them a recipe for something nefarious, you don't want the AI to comply. But you also don't want it to indulge in comparisons: if somebody asks about another brand of tire, it should say it doesn't have information to speak about that brand, rather than pulling whatever its training data says. So you've got both brand safety and agent safety to consider.
Asset management is another key concern. What are the taglines? What is the copy you would like it to use? Are there videos? Are there diagrams? You have all this content that goes into an advertising context. Finally, it has to be cheap. You can't afford to spend a billion dollars every time somebody comes and talks to your chatbot about your product. You have to be able to build all of these capabilities at scale when you're doing this in a customer-facing context.
This is what Nativo did. They had something in their backlog for two years, and their vision was to enable their advertisers to create agents that could chat with customers about the advertisers' products quickly, efficiently, and at scale. They succeeded. That five-day process is real; you'll probably have to spend a couple of weeks after it on production polish and release. And it's a real software stack. We're not building toy software, not just some example used to show that the stuff works. This is real production-grade code.
So how do we build a brand agent? We start with an agent that understands the basics of how to build an agent, beginning with brand voice analysis. You've got all the content from your advertiser: their copy, their goals, their statements of intent, and everything else that goes into building an advertising campaign. Why not use another AI to analyze all of that content and produce the first version of a voice? A voice is just a prompt. It's not magic, just text. You have the AI analyze the content and produce a voice with keywords, tenor, and all the other aspects you want in your voice, and you do it quickly from your existing assets and resources.
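As a rough sketch of that analysis step in TypeScript (the stack the team used), here is one way to distill a voice with the Bedrock Converse API. The model ID, prompt wording, and function shape are assumptions, not Nativo's implementation:

```typescript
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Distill a first-draft brand voice prompt from existing advertiser assets.
// The instructions and model ID are placeholders, not Nativo's actual values.
export async function distillBrandVoice(assets: string[]): Promise<string> {
  const response = await client.send(new ConverseCommand({
    modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0",
    system: [{ text: "You extract brand voice specifications from marketing assets." }],
    messages: [{
      role: "user",
      content: [{
        text:
          "From the assets below, produce a voice spec: tone, keywords, " +
          "phrases to avoid, and reading level. Output plain text.\n\n" +
          assets.join("\n---\n"),
      }],
    }],
  }));
  // The generated voice is just text; store it as the agent's system prompt.
  return response.output?.message?.content?.[0]?.text ?? "";
}
```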
Knowledge base content may involve advertising for 150 different advertisers, so you need to be able to select from your content repository which assets go with which agent. You also have to build the guardrails, and there are two levels of guardrails. Bedrock has guardrails built in. Those provide the basics so you don't get unhappy content coming out of your bot and posted all over the internet; you get that level of safety, including protection against prompt injection and similar issues. But you also have to build the guidelines about whether you're allowed to talk about competitors and, if so, what the comparison chart is and which facts you're allowed to use.
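A hedged sketch of how those two levels can fit together: Bedrock's managed guardrails attach to each model call via its guardrail configuration, while brand-specific rules travel with the agent's own configuration. The helper name and the simple keyword check below are illustrative only:

```typescript
// Level 1: Bedrock's managed guardrails, attached to each Converse request.
// The identifier and version are placeholders for a guardrail you create
// in the Bedrock console.
const guardrailConfig = {
  guardrailIdentifier: "your-guardrail-id",
  guardrailVersion: "1",
}; // pass as `guardrailConfig` in the ConverseCommand input

// Level 2: brand-specific rules that live in the agent's configuration.
// A deliberately simple, hypothetical check for the "don't discuss
// competitors" guideline; real rules would be richer than keyword matching.
export function applyBrandRules(reply: string, blockedBrands: string[]): string {
  const lower = reply.toLowerCase();
  const mentionsCompetitor = blockedBrands.some((b) => lower.includes(b.toLowerCase()));
  return mentionsCompetitor
    ? "I don't have information to speak about other brands."
    : reply;
}
```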
What data are you pulling from? Finally, consider available resources. If you have relational data stores with a lot of metadata that you want to make available, make it available. If you have OpenSearch databases with semantic search capabilities, what filters do you need to apply to limit results to just this advertiser? You use an agent to build all of this.
Well, that's great, but then what do you do? Now you have all the makings of an agent, but you don't have an agent. So you consolidate all of this information into configuration. Rather than trying to code a solution at this point, we need to manage the data for the solution. What is your voice specification? What are the content guardrails? You can build all of that dynamically and store it in a data store: a relational database, a JSON data store, a document database. There are any number of places you can store this information.
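For illustration, a configuration record along these lines might be stored per advertiser. Every field name here is an assumption rather than Nativo's actual schema:

```typescript
// Hypothetical shape of a per-advertiser agent configuration record, stored
// as a JSON document (DynamoDB item, JSONB column, etc.). Field names are
// illustrative, not Nativo's schema.
interface BrandAgentConfig {
  brandId: string;
  voicePrompt: string;                     // the distilled voice specification
  blockedBrands: string[];                 // competitors the agent must not discuss
  allowedComparisons?: string;             // approved comparison facts, if any
  knowledgeBaseId: string;                 // which content repository to search
  contentFilters: Record<string, string>;  // e.g. { advertiserId: "..." }
  maxToolIterations: number;               // loop safety cap (see the run loop later)
}
```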
So that's great. How do I deploy it? There's nothing magic about AI and agents. We would recommend you use AgentCore as a way of managing this layer of complexity. In this case, the customer chose not to: they wanted to absolutely guarantee minimum-cost operations, so they used Lambdas for everything. You can deploy an agentic framework within a Lambda just as easily as you can in a container or anything else, as long as you give it the right tools.
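A minimal sketch of what that looks like inside a Lambda handler, assuming hypothetical `loadConfig` and `runAgent` helpers:

```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Hypothetical helpers: load the stored BrandAgentConfig (sketched earlier)
// and run one turn of the agent loop (sketched later).
declare function loadConfig(brandId: string): Promise<BrandAgentConfig>;
declare function runAgent(
  config: BrandAgentConfig, sessionId: string, message: string,
): Promise<string>;

// One chat turn per invocation: configuration, not code, defines the agent.
export const handler = async (
  event: APIGatewayProxyEvent,
): Promise<APIGatewayProxyResult> => {
  const { brandId, sessionId, message } = JSON.parse(event.body ?? "{}");
  const config = await loadConfig(brandId);
  const reply = await runAgent(config, sessionId, message);
  return { statusCode: 200, body: JSON.stringify({ reply }) };
};
```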
The standard three-level architecture includes an API Gateway for safety, a React web application at the front end, and Lambda functions that provide all the capabilities: dedicated functions for brand analysis, the knowledge base, the actual agent you're going to show to your customers, safety services, S3 for static resources, and natural-language-to-SQL, because that's how agents think. They want to be able to say they want this type of data from the database and have it supplied.
Then what do you do with all this stuff? In this case, data has gravity; we talk about that a lot at Amazon. They already had a bunch of their data in a PostgreSQL data source in RDS. So don't reinvent the wheel. Just make sure that your natural-language-to-SQL service is read-only and has the appropriate safety guidelines coded into it. You can talk to existing data sources; you don't need to split them off and treat them as special.
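One way to enforce that, sketched below: validate that generated SQL is a single SELECT before executing it, and pair the check with a database role that only has SELECT grants. The validation shown is illustrative and deliberately blunt, not exhaustive on its own:

```typescript
// Defense in depth for natural-language-to-SQL: reject anything that is not
// a single SELECT statement, then run it under a PostgreSQL role that only
// has SELECT grants. This check is intentionally conservative; the read-only
// role is what actually guarantees safety.
const FORBIDDEN = /\b(insert|update|delete|drop|alter|create|grant|truncate)\b/i;

export function assertReadOnly(sql: string): string {
  const stmt = sql.trim().replace(/;+\s*$/, "");
  if (!/^select\b/i.test(stmt) || stmt.includes(";") || FORBIDDEN.test(stmt)) {
    throw new Error("Generated SQL rejected: single read-only SELECT statements only.");
  }
  return stmt;
}
```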
They had an S3 bucket that already held all their content. Well, you can build knowledge bases on S3 content. You can store video, images, diagrams, and anything else, even playback files, whatever you need, in S3. Then you need the basic services. What model are you going to use? Bedrock Knowledge Bases, with OpenSearch as the vector store, is a built-in Bedrock feature that makes this easier. AgentCore can encapsulate much of this, though not all of it, and it's an accelerator we'd recommend to make the heavy lifting even easier.
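For reference, querying a Bedrock knowledge base from code looks roughly like this; the knowledge base ID would come from the agent's stored configuration:

```typescript
import {
  BedrockAgentRuntimeClient,
  RetrieveCommand,
} from "@aws-sdk/client-bedrock-agent-runtime";

const kbClient = new BedrockAgentRuntimeClient({ region: "us-east-1" });

// Retrieve passages from a Bedrock knowledge base backed by S3 content.
export async function searchKnowledgeBase(knowledgeBaseId: string, query: string) {
  const result = await kbClient.send(new RetrieveCommand({
    knowledgeBaseId,
    retrievalQuery: { text: query },
    retrievalConfiguration: { vectorSearchConfiguration: { numberOfResults: 5 } },
  }));
  return result.retrievalResults ?? [];
}
```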
Implementation Architecture and Real-World Results: From Technical Design to Customer Success
Let's go down another level. When a customer comes in, what do you need to do at the start of every conversation? Tell the agent about the prompt: that's your voice and everything else. Initialize Bedrock. Create a set of tools that call out to the Lambdas we had already created for the agent-building version of the service. Finally, it's initialized. In this case, they used LangGraph. You could use Strands; that's one of our favorites, but at the time Strands didn't have TypeScript support, so they used LangGraph, which does. The framework is an automation tool that helps you build all of this out.
At startup, we use AgentCore Memory because that was the fastest way to build reliable memory at scale. So what does it mean to actually run it? It's a very simple loop. If any of you have done LangGraph before, this is about as simple as the loops get. We come in with the brand ID and the agent config; that's all you need to initialize everything, so you know exactly what context you're working in. The message ID gives you enough for the session, so you can pull back from memory any conversation you've already had with the customer.
You get the message, and you use the AI for what it's really good at: reading all that prompting and other context. It's going to decide what more information it needs to help answer the question and how to get it. That's where it calls out to those tools, the AWS Lambdas we initialized in the last step. Finally, it composes a final response.
But as we talked about with safety: how many times can it loop? That's a good question, and it depends on how responsive you need the service to be. You can improve things with live responses: as it's looping, have it send out messages like "I'm looking in the database" or "I'm searching the knowledge base." Tell the customer what's going on under the covers. You don't have to give them the technical details, but showing somebody something on screen while it's running is huge, because they know it's still alive and still processing. This prevents what we call woodpeckers: people who just can't help clicking that button over and over again until they get something.
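Nativo built this loop with LangGraph; stripped of any framework, the shape the speaker describes reduces to something like the sketch below. The helper names, the iteration cap, and the progress callback are assumptions about one way to implement it:

```typescript
// Framework-agnostic sketch of the run loop described above; Nativo built it
// with LangGraph. callModel, executeTool, and sendProgress are hypothetical.
type ModelTurn =
  | { kind: "toolCall"; tool: string; args: unknown; progressNote: string }
  | { kind: "final"; text: string };

declare function callModel(config: BrandAgentConfig, history: string[]): Promise<ModelTurn>;
declare function executeTool(tool: string, args: unknown): Promise<string>;
declare function sendProgress(sessionId: string, note: string): Promise<void>;

export async function runAgent(
  config: BrandAgentConfig,
  sessionId: string,
  message: string,
): Promise<string> {
  const history = [message];
  // The cap keeps a misbehaving loop from running forever (or running up cost).
  for (let i = 0; i < config.maxToolIterations; i++) {
    const turn = await callModel(config, history);
    if (turn.kind === "final") return turn.text;
    // Keep the customer informed while tools run ("I'm searching the
    // knowledge base...") so they know the agent is still alive.
    await sendProgress(sessionId, turn.progressNote);
    history.push(await executeTool(turn.tool, turn.args));
  }
  return "I wasn't able to finish that request. Could you rephrase it?";
}
```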
Finally, you get the response. Since they wanted to audit all the responses, they created another repository that held all the messages. They didn't need the full context of the conversation, just the final response, so they logged that in parallel with sending it back out to the client. In context, this would be used as a chat agent. You're familiar with the advertising bar on the right-hand side of a lot of websites. Well, what if the website was running a story about tires, and now you can place a chat alongside it from a tire retailer that says, "Would you like to learn more about Bob's tires?" If they say yes, the agent can go down a script and ask questions like: what type of tires are you looking for, what's your vehicle type, what are you in the market for, and what's your timeline? These are the questions you ask to generate sales leads, which is the part of the sales process we're really after: getting back to the customer so they engage with the advertiser.
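That parallel logging step might look something like this, with `auditLog` and `sendToClient` as hypothetical helpers for whatever repository and transport you choose:

```typescript
// Hypothetical helpers: persist only the final response for auditing, and
// send the reply to the client. Names and transport are placeholders.
declare function auditLog(sessionId: string, response: string): Promise<void>;
declare function sendToClient(sessionId: string, response: string): Promise<void>;

// Log and send in parallel; a failed audit write should not block the reply.
export async function deliver(sessionId: string, response: string): Promise<void> {
  const [sent, logged] = await Promise.allSettled([
    sendToClient(sessionId, response),
    auditLog(sessionId, response),
  ]);
  if (logged.status === "rejected") console.error("audit write failed:", logged.reason);
  if (sent.status === "rejected") throw sent.reason;
}
```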
But all of this is simple, straightforward, and generic. You can plug in any configuration, as long as it has those basic values like voice, and it will deliver this to your customer. It doesn't have to be bespoke every time. A lot of the time, everybody thinks about an agent and assumes it has to be unique. It doesn't. As long as the problem space is solvable, you can do it with standard tools and make it a repeatable function.
So why did it work? We started with a clear direction. In five days, you don't have time to waffle about what you want it to do. We built the domains from the requirements, from the original vision, and from the discovery you did in your environment. It's a partnership, so the AI is doing a lot of the work you don't want to do. Who wants to sit around writing user stories for hours and hours? What if the AI writes them and all the human has to do is validate them? That way you can work at speed and let the AI do the parts that humans find boring.
Who wants to write unit tests? I don't know a developer who likes to write documentation or unit tests. Make the AI do it, because that's what it's good at. I'm not great at writing documentation; I've been writing code for forty-five years, and how much documentation have I produced? The bare minimum. That's just the truth. Everything is aimed at that minimum lovable product, or, as most people know it, minimum viable product. And it's production grade because you told the AI up front what it needed to be for production.
So did it work? Yes. Have I done it more than once? Yes; this is customer number three running down the pipeline, and all of them are headed to production. This one is currently in alpha with their advertisers, who are testing it and making sure it meets all the business needs of the customers who will use it in production. It's repeatable and scalable, with all the controls you need and the cost, time, and energy guardrails built into the system. And you did it in five days, plus the couple of downstream weeks you have to spend with anything you're rolling out to production. It has to pass all the same gates; there's no magic that solves the production pipeline.
Finally, what's next? There's the repo. Have fun with it. Go out there and remember: it's just text. You can download these files and change them to make them work for your organization. Work at speed and scale. I would say don't start with a little toy app that you just want to use to kick the tires. Make the investment, with the time and teams necessary, to really give it a real shot.
So Nativo is polishing it for production, fixing anything customer feedback surfaces, and expanding; they're already working on phase two scope. What can you do? Try it. What do you have to lose? It's a little bit of time. It takes a lot of work the first time through, because you're learning the process as you run it.
Note: This article is entirely auto-generated using Amazon Bedrock.