Kazuya

AWS re:Invent 2025 - A leader's guide to AI strategy and implementation (SNR305)

🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.

Overview

📖 AWS re:Invent 2025 - A leader's guide to AI strategy and implementation (SNR305)

In this video, Helena Yin Koeppl, AWS Executive in Residence, presents a comprehensive AI strategy playbook addressing why 95% of agentic AI projects fail to deliver ROI. She introduces the RIPPLE framework (Rational Pause, Incentive Mapping, Perspective Divergency, Contrarian Truth, Moat Building, and Velocity) for translating business strategy into AI strategy, and the BREAK framework (Blindspot scan, Reframe constraints, Economic dissection, Assumption audit, Kaizen) for reimagining business processes. Real examples include Moody's reducing risk assessment from one week to one hour, and Genentech automating 43,000 hours of manual work. She emphasizes shifting from task delegation to objective setting when managing AI, establishing hub-and-spoke AI organizations with the CAIO role, and building agile data foundations. The session covers six chapters: defining the AI North Star, reimagining business processes, personalizing experiences, human-AI collaboration, AI organization structure, and data foundation.


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

The Agentic AI Era: Unlocking Unprecedented Productivity Gains

Good morning. Welcome to your first session of 2025 re:Invent and the first senior leaders track session at re:Invent ever. So it is 9:00 AM, Monday morning in Las Vegas. You must really, really love AI strategy. Let's get started. My name is Helena Yin Koeppl. I am Executive in Residence for AWS. I'm a member of a small group of ex-senior leaders who have run and led transformation in our past lives as AWS customers. Now we join AWS to share experiences with people like you.

Thumbnail 90

Thumbnail 110

Before joining AWS last year, I led 26 years of data and AI transformation at four Fortune 500 companies in global roles. So today's session draws a lot from my own experiences, but also from my past two years of talking to hundreds of customers on their AI journey. So how much has happened in the past three years? We have gone from generative AI assistants to generative AI agents and now agentic AI systems. And as with any technology revolution, we are foreseeing a multiplication of productivity gains. As you can see, from the steam engine to electricity to moving to the cloud, the multiplier is getting bigger and bigger and the time it takes to get there is getting shorter and shorter.

Thumbnail 150

Thumbnail 170

What about AI? How much productivity gain should we see, and how long does it take? We do hope it can be a hundredfold and that it can happen in the shortest amount of time, because generative AI and agentic AI should unleash many productivity gains by removing huge bottlenecks. These are the five productivity bottlenecks the agentic era should finally remove. Number one is unstructured knowledge. We have gone through many eras of sorting through data and labeling it, much of it structured data. But we also have unstructured data in PDF files and emails, and critical know-how locked in documents and people. Answers vary by who you ask, and all of this can now have a single front door via agentic AI to get through that institutional knowledge, reducing time to expertise and narrowing compliance risk.

Thumbnail 210

Thumbnail 250

As an example, one AWS customer, Moody's, is a risk rating agency, and it takes a lot of time to run through the analysis and the huge volume of documents behind a risk rating. What we helped with was building a POC with them, using a multi-agent workflow to reduce that risk assessment analysis from one week to one hour. Let's calculate the productivity gain here. Number two is that one of the key things agentic AI can unleash, and what we have been talking about for years and years, is the segment-of-one customer journey. We've been talking about personalization and about tracking true customer preferences across platforms, and finally, with agentic AI's ability to act, we are able to do so. As an example, anybody here who is not a Rufus user? Very few, almost none. Rufus is one of the key examples of an agentic AI and generative AI powered shopping assistant.

Because it can aggregate all of those reviews, product information, and personal habits, all of this information together makes shopping 4.5 times quicker during Prime Day while maintaining low latency. We are foreseeing huge value coming from the creation of Rufus for amazon.com. We have seen this not only in consumer shopping but also in highly regulated environments like banking. NatWest boosted their click-through rate by 900% and achieved 2 million more higher-interest savings account applications by using personalized experiences and recommendations.

Thumbnail 340

Thumbnail 360

Thumbnail 370

Thumbnail 380

Thumbnail 390

Thumbnail 400

Decision latency in complex operations represents another significant opportunity. Huge amounts of unstructured sensor data can be aggregated and leveraged with agentic AI to make sense of it. Amazon has created an agentic and generative AI assistant called DeepFleet that manages over one million robots in our warehouses, pushing automation to the extreme. It handles everything from troubleshooting to route direction, dynamically deciding the best way to move products into packaging and shipping.

Thumbnail 410

Thumbnail 430

We have seen significant agentic AI usage in service management. In the past, more than 20 years ago, if you wanted great customer support, you had to follow the same model: three teams based on three different continents covering 24 by 7. Now you don't have to do that anymore. DoorDash uses an agentic AI system to support all inquiries from their 7 million dashers, answering questions about tariffs, troubleshooting issues, and providing question-and-answer support. This alone represents 3 million dollars in savings just from supporting the dashers.

Thumbnail 460

Faster product cycles represent the last major opportunity. Built-in learning and sorting through huge knowledge bases is transforming industries. In healthcare and life science, where I worked for many years, there is normally a huge amount of unstructured data. One example is Genentech, a Roche Group company, where we helped their research organization automate 43,000 hours of manual work, augmenting their researchers' capabilities. This equals five years of time saved.

Thumbnail 500

Why 95% of AI Projects Fail: Moving Beyond Yesterday's Playbook

Yet when you read some headlines, 95% of agentic AI projects seem to fail to deliver measurable ROI. When you read the details, what it's really saying is that the 5% of companies who manage to gain value—millions of incremental value in productivity and new business models—did something different. They transformed their strategy and business process.

Thumbnail 540

Thumbnail 550

Thumbnail 590

What I'm talking about today is to stop steering today's AI challenges with yesterday's playbook. Yesterday's playbook includes several problematic approaches. First, when we talk about such fascinating technology, we often ask ourselves what we can do with it instead of starting with the problem to solve. We end up with a solution searching for a problem. Second, we don't rethink business processes. We look at the exact same process as before and just find steps to automate, only achieving incremental gains and sometimes even slowing things down.

Thumbnail 620

Thumbnail 670

Thumbnail 680

Thumbnail 700

Thumbnail 720

Thumbnail 730

Third, we're still breaking down experiences into silos. Marketing handles communication, sales focuses on lead generation, and none of them are actually working together toward the same customer. There's another challenge in terms of how we approach AI. We've been talking about how AI can unleash productivity gains, but what we often do is retrain and redesign roles, or worse, replace people entirely. This is not the right approach, and it certainly doesn't deliver true productivity gains. Organizationally, we've been centralizing AI through committees, and of course, there's data—the new oil that fuels the AI engine—but we're still not setting it up correctly.

Many organizations who come to us have reached an inflection point. We've done extensive experimentation over the past three years with many pilots, but we've seen uneven ROI and scattered AI spending by function. The metrics we've been using measure how many AI projects we have rather than the outcomes we're achieving. Now leaders are coming to us asking the right questions. Are we backing the right top three value pools? Will this give us defensible competitive advantage? How do we turn incremental wins into compounding gains? This is what I'm going to talk about today—how do we actually build an AI strategy and implement it to deliver results?

Thumbnail 740

Thumbnail 790

Let's think about the true flywheel of AI innovation. Yes, it's great to have experimentation and learn about new technology, but we should step back and ask where we're going to truly invest in AI and how strategy aligns with business strategy to make a real difference. Start with the opportunity, not the solution. Then deliver it quickly, show value and ROI, and all the while build the right foundation. This is the new AI strategy playbook. There are six chapters I'm going to discuss, starting with how to define an AI North Star. This begins with AI strategy that mirrors your business strategy, then reimagining business processes and personalized experiences, and finally building the right foundation—not just about technology or platforms, but about people, both those leveraging AI in your company and your AI organization itself.

Thumbnail 820

Thumbnail 840

Thumbnail 870

Thumbnail 900

The RIPPLE Framework: Defining Your AI North Star Through Strategic Questions

Let's start with the AI North Star. The biggest mistake organizations often make when starting their AI transformation is jumping straight to asking how to leverage AI. The right question we should be asking is: what is your business strategy, and how do you translate that business strategy into an AI strategy? I've talked to organizations that try to translate their seven-year business strategy into an AI strategy by listing everything and wondering how to do it. They struggle to translate that very high-level vision into a detailed AI roadmap. The number one question you should ask yourself is: where do you have the opportunity to automate, where do you have the opportunity to innovate by creating new products and services, and where do you have a disruptive advantage? Keep these three questions in mind as you think about how to translate your business strategy into AI strategy. The answer is that we should ask different questions.

Thumbnail 920

Thumbnail 930

Not the same questions we've been asking for the past 50 years. I'm going to introduce a new framework called RIPPLE. This is a new mental model that helps you ask new questions, separate the must-haves from the nice-to-haves, and identify where AI can play a role to create that competitive advantage.

Thumbnail 950

Thumbnail 980

Thumbnail 990

The first component is R, which means Rational Pause. Let's look into where you're truly making a difference in your business. And here's the uncomfortable truth: where are you bleeding customers and where are you losing revenue to your competitors? This helps you defend the existing market and protect your competitive advantage. So how do we ask different questions? Here are the different questions you can ask. Think about yourself compared to competitors. Where are nimbler competitors serving customers faster than you? What are the high-volume decisions that are costing you a lot of revenue through delays or errors? That's your opportunity. What routine work prevents your best people from winning new businesses? And where do you lack real-time visibility into customer or competitor moves? That's number one. It helps you define the problem space where AI could play a major role.

Thumbnail 1030

Thumbnail 1060

Thumbnail 1090

Thumbnail 1100

Number two is Incentive Mapping. Incentive mapping is all about realizing how organizational silos are created. It's not just because we're in different departments, functions, or even separate teams. It's because our incentive systems are not aligned. So we should ask different questions to discover this. Where do you have conflicting incentives across different departments? Where are departments measured on metrics that work against each other? Where do department silos prevent you from serving customers better? And what handoff bottlenecks and data processes are preventable? If you discover those opportunity spaces, that's where we go into the second chapter, which is reimagining business processes. We have another framework to introduce.

Thumbnail 1120

Number three is Perspective Divergency. This is quite interesting. We know we have problems we try to avoid because they're too complex, because they've always been there and we've tried to solve them for years without success. But right now, ask yourself differently: what if you had an AI-native competitor trying to enter your space? I worked in a place before where I led AI product innovation, and it was in the legal space. The legal space has many years of labeled data, and we truly thought we had the competitive advantage. However, there have been quite a few successful new entrants with an AI-first approach. At first they were small, but because they generated valuable insights very quickly with AI and generative AI, at one point they became truly threatening competitors. So what makes them nimble? What makes them competitive? And how can you define and leverage your own proprietary data to become more successful than them by leveraging AI?

Thumbnail 1240

Here are some of the questions you can ask yourself to identify the area where you want to apply AI and try with experimentation and move into production. Once you've discovered some opportunities, the next natural and important question you want to ask yourself is not just "could we use AI here?" but "should we actually use AI here?" That distinction is very important.

That is the contrarian truth. You can ask yourself: Is the current process already the best fit for the context? If we introduce AI into this process, could it introduce new risks? What are the mitigations? And especially when we have perceived bottlenecks, is it real or is it actually misunderstood? Think about your business and your area, and ask not just "could we" but "should we" as well.

Thumbnail 1290

Thumbnail 1310

Another very important question you want to ask yourself is: if we build AI, if we truly invest resources into this problem area with the goal of transforming the business process, can we build sustainable competitive moats through AI? Maybe we can truly create data that is unique, data that competitors cannot and will never have access to. We can embed AI into workflows and continuously learn and record not only institutional knowledge, which we've been talking about on a yearly or monthly basis, but actually daily and second-by-second operational knowledge. We continuously train AI on your business, customizing it according to your business and your operations. That's where your knowledge and proprietary data become so valuable that nobody else can copy them, and that becomes your moat, your continuous competitive advantage.

Thumbnail 1370

Thumbnail 1380

Last but not least, how quickly can you implement it? That velocity of implementation is so important. If you can implement it within two months or within six months, you might be able to create eighteen months of advantage that nobody else can follow. That is where you should invest. If you identify that opportunity, put sponsorship behind it and develop the skills to execute, not only for a prototype but to truly move it into production, continuously keeping that gap between you and your competitors.

Thumbnail 1420

With the RIPPLE framework, the opportunity filter workshop is your next step. Use it to identify these opportunities and score each of them by four factors: impact on business outcomes; technical feasibility, which will also impact how quickly you can leverage and implement it; strategic alignment; and innovation velocity, which we talked about. Then select the must-haves to move forward. That is your map.
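As a rough illustration of how such an opportunity filter could be run, here is a minimal scoring sketch. The four factors come from the talk, but the weights, the 1-5 scale, and the example opportunities are illustrative assumptions, not part of the session:

```python
# Hypothetical opportunity filter: score each candidate on the four factors
# from the talk (impact, feasibility, strategic alignment, velocity), 1-5 scale.
WEIGHTS = {"impact": 0.4, "feasibility": 0.2, "alignment": 0.2, "velocity": 0.2}

opportunities = {
    "Risk assessment automation": {"impact": 5, "feasibility": 4, "alignment": 5, "velocity": 4},
    "Internal report summarization": {"impact": 2, "feasibility": 5, "alignment": 2, "velocity": 5},
}

def score(factors: dict) -> float:
    """Weighted score across the four opportunity-filter factors."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

# Rank candidates so the must-haves surface at the top of the map.
for name, factors in sorted(opportunities.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(factors):.1f}")
```

The point is the ranking discipline, not these particular weights; each organization would calibrate its own.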

Thumbnail 1460

Measuring What Matters: AI ROI Discovery and Attribution Methodologies

Let's talk quickly about ROI. I've been talking about the need to identify ROI for AI, and one key learning I've gained over many years is that you need to set up observation in operation. Not post-facto, because that will be too late. You won't be able to separate successfully what is AI and what is not. There are five steps you can take. You need to select the right value stream, one that will result in productivity differentiation or new revenue streams. You need to embed AI into that flow and wire the data.

You need to have the entire data flow identified and to separate what runs with AI from what runs without it, and you need to actually capture those usage signals—not only AI versus no AI, but how long it took, which part was touched, and what percentage—and then run, compare, and scale.

Let me put my analyst hat on a little bit and talk about some AI ROI discovery methodologies. When you are running a pilot and you are able to isolate it in a place that has a similar environment and similar activities but without AI, you can do test and control. The test group is where you have AI, like in A/B testing, and the control group is where you don't. You can very quickly identify how much incremental value you can create. Post-launch, because you have embedded the data flow, you are able to do AI attribution analysis. On an ongoing basis, you can identify every single touchpoint AI touched and how much, to continuously optimize it.
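A minimal sketch of the test-and-control comparison described above. The case counts, hours, and cost figures are illustrative assumptions; only the method (compare a pilot group running with AI against a comparable group without it) comes from the talk:

```python
# Hypothetical test-and-control comparison for an AI pilot.
# Test group runs with the AI workflow embedded; control group does not.
test_cases, test_hours = 200, 210        # e.g., 200 assessments, 210 analyst-hours with AI
control_cases, control_hours = 200, 820  # same volume handled without AI

hours_per_case_test = test_hours / test_cases
hours_per_case_control = control_hours / control_cases

# Incremental productivity attributable to AI (everything else held comparable).
hours_saved_per_case = hours_per_case_control - hours_per_case_test
lift = hours_saved_per_case / hours_per_case_control

hourly_cost = 120    # assumed fully loaded cost per analyst-hour
annual_volume = 5000
annual_value = hours_saved_per_case * hourly_cost * annual_volume

print(f"Lift: {lift:.0%}, estimated annual value: ${annual_value:,.0f}")
```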

The BREAK Framework: Reimagining Business Processes from Sequential to Parallel

That's number one: business strategy to AI strategy. That's your AI North Star. Now let's look at a couple of deep-dive examples. The first is business processes. If you keep a process completely intact and only automate separate parts of it, what we have seen from many interviews with customers is that you don't actually see a lot of AI value creation. What we need to do is shift from keeping things intact with a completely sequential implementation of AI to reimagining the complete business process, focusing on the outcome you want and rethinking it.

Several organizations I've talked to have come back to say, "We tried that. We did a workshop, we talked about the outcome, and everybody went away to think about the opportunities and redesign it, and six weeks later they came back with Excel sheets." So how do we actually do that? What I will suggest is again ask different questions starting with the sequential process that you have right now. I have an example of due diligence. Due diligence, just like any complex business process today, is pretty sequential. You collect the documents, you have the initial review, and then you flag the issues. Then the issues go through legal review, financial impact review, and eventually you have the risk assessment where you can say, okay, can I deal with it or not and is it worth it? And do we have to go back to the previous stage and handle the exceptions there? It is time consuming, and due diligence takes six to nine months on average when you should actually be moving forward with business value creation.

Another thing to think about is that we should not start with technology. Very often when we are transforming business processes, it's not just one piece of technology: it's not just agentic AI, not just generative AI, not just using an AI assistant to make the process a little bit better. If you want to reimagine the process, as is often said, don't start with the technology. This is a key piece of it, because we need to identify exactly what we want to do. The most important and most time-consuming thing is actually to discover issues in due diligence and then think about ways of mitigating them. When you think about that and reimagine this process as a non-sequential process, step one is that I need to inject all the information I have, all the documents.

With all the information I have and all the documents, I need to do semantic analysis to identify the key clause that might be the issue. That's number one. You are already aggregating several of the components together, not sequentially, and you can use generative AI for it. Once the anomaly is identified, you can use analytical AI to compare it with your ERP data, conduct financial impact analysis, and do the risk scoring. That's cross-referencing and valuation.

Additionally, let's say the issue you discover is a vendor contract that has a clause terminating the contract if a particular event happens. What do I do here? You can actually process in parallel, going to several agents coordinated by an agentic AI coordinator. Using legal processing agents and financial analysis agents along with human oversight all together, that's your agentic routine, which saves time by not working sequentially, identifies the most optimal outcome and solution, and then sends it to remediation.
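As a rough sketch of what this non-sequential coordination could look like, here is a minimal example using plain Python asyncio rather than any specific AWS service; the agent functions, their outputs, and the clause text are hypothetical placeholders:

```python
import asyncio

# Hypothetical agents: in a real system these would call an LLM or a
# specialized service; here they just simulate work and return findings.
async def legal_processing_agent(clause: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for document analysis latency
    return {"agent": "legal", "risk": "termination-on-change-of-control", "clause": clause}

async def financial_analysis_agent(clause: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for ERP cross-referencing latency
    return {"agent": "financial", "exposure_usd": 1_200_000, "clause": clause}

async def coordinator(clause: str) -> dict:
    # Run the specialist agents in parallel instead of sequentially,
    # then hand the aggregated findings to a human for remediation.
    findings = await asyncio.gather(
        legal_processing_agent(clause),
        financial_analysis_agent(clause),
    )
    return {"clause": clause, "findings": findings, "next_step": "human review + draft waiver"}

result = asyncio.run(coordinator("Vendor may terminate on acquisition of the buyer"))
print(result["next_step"], "-", len(result["findings"]), "parallel findings")
```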

Thumbnail 1920

Thumbnail 1940

Thumbnail 1960

By the way, the remediation itself can use agentic AI to draft a waiver proposal, for example, and have a human in the loop decide the action. All of this is to reimagine and not to think about things sequentially. So we've talked about why this is important, but how do we do this? How do we take a sequential business process and ask different questions to completely reimagine it? I have a framework called BREAK, and these are the components: Blindspot scan, Reframe constraints, Economic dissection, Assumption audit, and Kaizen, the happy path.

What is blindspot scan? When you have a business process which you have been using for years, it's very hard to just tell yourself to reimagine it. One method is to ask why and ask why five times, and sometimes some of the whys might be why not. As an example, this is an actual example. The step currently is manual approval for orders over one thousand dollars. Why? Because we need to prevent fraud—it's a big amount. But why? Because big amounts are risky. But why? Because we cannot verify customer intent with that amount. Why? Because we don't have real-time data. Hey, the question has already been flipped and it's no longer a compliance question—it's actually a data question.

Thumbnail 2050

And the last why is because our systems are not interoperable. So the five whys help you identify the business process problem space you might want to dig deeper into. That's your blind spot, which, without asking the whys, you would just keep working around as before. Second, we very often have constraints. The idea of reframing constraints is to challenge your process limitations. Ask yourself: what if I had zero latency? What could I achieve with that? What if there were zero touch? What could I achieve with that? At AWS, for example, we asked ourselves: what if we had zero ETL? What could we achieve with that? Very often a new way of thinking, a new invention, and a new reimagining of a business process starts with asking yourself a question about a limit.

Thumbnail 2120

Ask yourself: here is the limit I want to reach, and what could I achieve with that? Then ask: how do I actually get there? That's how breakthroughs happen.

Thumbnail 2200

Economic dissection. Today you might think a process is simply the best we can do, but what we don't think about is what the actual costs are: the real costs and value-destroying elements if I don't make a change. Very often we don't think about the hidden costs when we are focused on delivering ideas or work output. It's normal to stand in a queue at a supermarket, but the hidden cost of time consumed and of people getting frustrated actually causes them to put things down and never buy them. Those are the hidden costs and the value you lose with your customers. Think about where they are in your process and how to solve them.

Thumbnail 2260

The assumption audit is about flipping the sacred cows. Sacred cows are very often justified as risks or compliance issues, but is that truly the case? Every single approval process might be framed as a compliance requirement when it might just be an organizational habit. Ask yourself: are these approval processes truly required for compliance, or are we just used to them? Do we really have to go through the organizational hierarchy, and through the manual quality checks where we only trust humans to do a step? Could those be avoided? Which sequential dependencies are actual requirements, and which exist just because historically we have always done it that way?

Thumbnail 2350

The last component is K, Kaizen, the happy path. In process engineering, we very often talk about the happy path, the path which is the easiest and which you go through eighty percent of the time, and that's where we put our focus. But the new way of discovering the happy path, especially when you have agentic AI and when you can actually parallel process many different scenarios and find the right one, is that this happens naturally. The picture you're seeing right here happened at Ohio State University. They wanted to build paths for students to get to their lecture halls, and instead of building it, they actually let students just walk across the grass and identified the most useful paths, and then eventually built the routes according to the footpaths. That is observing actual behavior, and because agentic AI has the ability of logging everything, reflecting on it, and identifying patterns with memories, you can identify the happy paths and formalize them.

Thumbnail 2380

Kaizen, in case you're not familiar with it, is a Japanese philosophy. Some enterprises and organizations actually use it as well in their business strategy. It's about continuous, step-by-step small improvements to get to bigger value. But these frequent changes are informed by what has actually happened in actual work and experiences.

With agentic AI, Kaizen at scale means continuous observation, continuous recording, and a continuous reflection and learning loop, with policy refinement according to the agent's observations. For example, the agent might observe that this path, with these analysis steps, consistently takes twenty percent less time.

Policy refinement is continuously done and is traceable. Instead of A/B testing, parallel experimentation involves A, B, C, D, E, F, G, and many thousands of variants running in parallel to find the right path and discover what experience people truly prefer. Feedback is always integrated, including from the human managers of the agentic AI system.
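A minimal sketch of how such happy-path discovery from agent logs could work; the log records, step names, and ranking rule are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical agent run logs: (path of steps taken, duration in minutes, success flag).
runs = [
    (("collect", "semantic_scan", "risk_score"), 42, True),
    (("collect", "semantic_scan", "risk_score"), 40, True),
    (("collect", "manual_review", "risk_score"), 55, True),
    (("collect", "semantic_scan", "risk_score", "rework"), 70, False),
]

stats = defaultdict(lambda: {"count": 0, "total_minutes": 0, "successes": 0})
for path, minutes, ok in runs:
    s = stats[path]
    s["count"] += 1
    s["total_minutes"] += minutes
    s["successes"] += int(ok)

# The "happy path": the variant with the best success rate and lowest average
# duration (most observations as tie-breaker) becomes the candidate to formalize.
def ranking_key(item):
    path, s = item
    return (s["successes"] / s["count"], -s["total_minutes"] / s["count"], s["count"])

happy_path, s = max(stats.items(), key=ranking_key)
print(" -> ".join(happy_path), f"avg {s['total_minutes'] / s['count']:.0f} min")
```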

When you want to implement using the BREAK framework, start by process mapping to identify your current process. It might be sequential, or you might already have some digitized or automated parts. Then use the BREAK analysis to truly challenge your limits and the sacred cows. Of all the bottlenecks you identify, prioritize which ones make a real difference and what dependencies you have on them.

Thumbnail 2450

Map your findings to AI solutions as I mentioned before, based on what you've seen in the due diligence of reimagining processes and what AI solutions could be used. Eventually, develop an implementation roadmap. This way, instead of just saying let's reimagine the business process, you have a structured approach to help you get to the opportunities.

Human-Centered Design: Transforming Customer and Employee Experiences with Agentic AI

I've already worked with a few customers on the clinical trial process. Clinical trials are how we bring a drug to market, and it normally takes roughly ten years and billions of dollars because you have to go through phase one for safety, phase two for efficacy, and phase three in a larger population. Many people believe all of this needs to happen sequentially because, of course, safety comes first, then small population tests for efficacy, and then larger populations.

Thumbnail 2580

Thumbnail 2590

Thumbnail 2600

What we don't challenge ourselves on is whether some of these processes can happen in parallel. When you're preparing documents, referring to what is already in the market, and reviewing literature, much of this can actually happen at the same time through trial and error. Using a similar mindset, we can rethink how to redesign the experience with AI. It's often a very similar problem, which is the silo problem.

Thumbnail 2620

When you're rethinking this, the way to do it is to put a human at the center instead of saying marketing needs to do communication and sales needs to do lead generation. Focus on the person you need to actually serve. What is her need and what does she want during the entire process? Agentic AI can help with that because it has reasoning, planning, action, and orchestration all done in the background simultaneously.

Thumbnail 2640

Let me use an actual example. Returning products can be quite frustrating for customers. When a customer says I want to return this, behind the scenes AI can work in parallel, assessing the possibility by leveraging the purchasing history to identify whether this customer always returns the products she bought and whether it's valid. The AI checks what has been done before, what the warranty is, and whether she can return it. It reasons about whether it's within thirty days after purchase and gathers all the related information needed while taking action simultaneously.

If the return is valid and still within her rights, the AI processes it, updates the inventory immediately, arranges the shipping, and notifies the customer. Compare that with our original process, where handling the return goes through several departments.
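A minimal sketch of the eligibility reasoning described above, reduced to plain Python; the policy values (thirty-day window, return-rate threshold) and field names are illustrative assumptions:

```python
from datetime import date

# Hypothetical return request with the signals the agent gathers in parallel:
# purchase history, warranty status, and the date of purchase.
request = {
    "purchase_date": date(2025, 11, 10),
    "today": date(2025, 12, 1),
    "customer_return_rate": 0.12,  # share of past orders this customer returned
    "under_warranty": True,
}

RETURN_WINDOW_DAYS = 30        # assumed policy: returns allowed within 30 days
RETURN_RATE_ESCALATION = 0.50  # assumed threshold for routing to a human

def decide_return(req: dict) -> str:
    """Combine the parallel checks into a single decision."""
    within_window = (req["today"] - req["purchase_date"]).days <= RETURN_WINDOW_DAYS
    if not within_window and not req["under_warranty"]:
        return "reject: outside return window and warranty"
    if req["customer_return_rate"] > RETURN_RATE_ESCALATION:
        return "escalate: unusual return pattern, human review"
    # Approved: downstream steps (refund, inventory update, shipping label,
    # customer notification) would be triggered here.
    return "approve: process refund and arrange return shipping"

print(decide_return(request))  # -> approve: process refund and arrange return shipping
```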

Thumbnail 2720

Thumbnail 2730

All of this is actually done without going through those silos, with the person and their particular needs at the center. You can also think about how to act more proactively. We solved a problem with returning a product, but if you put the human at the center, possibilities open up for proactively designing their experiences. If it's a customer, can I propose a new product? Can I flag a project deadline before the person notices it, let them know it is approaching in three days, and show what can be accelerated? If you are dealing with partners, can you identify the challenges from all the bottlenecks and repeated problems that you or the partner have around this issue?

Thumbnail 2780

To implement this, an idea is to conduct a workshop, map the event you want to solve, and identify the silos as well as the entire orchestration. Think about how you can use it to completely transform the experience without thinking about sequential steps that a human team needs to handle. An example is Amazon, where the entire organization moved to a new agentic AI solution called A to Z. Instead of going through HR policies, asking my manager, and providing all this information separately when I need a vacation, I just type into the interface that I need five days of vacation, and all the needed information is sent to me. The system, according to policy, tells me that I am in Switzerland, this is what applies to me, and I have fifteen days remaining. Yes, I can take five days of vacation right now. If we put the employee at the center in this case, the HR manager, payroll, IT, and all these departmental silos are put aside. We truly think about what needs to be achieved with this human at the center, and every process and tool can be leveraged.

Thumbnail 2890

Thumbnail 2900

Thumbnail 2950

The AI Manager Paradigm: From Task Definer to Objective Setter

Now let's talk about how all of your employees actually work together with AI. First of all, not everything should be given to AI, whether it's an agent or generative AI. There are problem spaces that truly deserve agentic AI: for example, you have a clear objective, because that's what AI needs to achieve; you have sufficient data truly available; and you have many decisions, but they are possible to reverse. At Amazon we talk about one-way doors and two-way doors, and two-way-door decisions can be reversed, and those you can give to AI.

Thumbnail 2970

You can retain human control and you should retain human control in more strategic decisions, relationship-sensitive decisions, and novel situations. Let's not forget high-stake decisions and one-way door and irreversible decisions. We need to think about the human-AI manager role. It is no longer a task definer. What is a task definer? Run this report. You define what needs to be done, but truly now when we're working with AI, especially agentic AI, we need to give AI the objective. I want to achieve this, please do the payment on time, please help this customer and return this product in the right way within the time limit according to our policies.

Thumbnail 3030

Thumbnail 3050

This paradigm shift from task delegation to objective setting represents the true change of mindset. The human's role as an AI manager requires three core competencies: objective setting, performance monitoring, and strategic intervention. Let me detail what objective setting entails. First, you need to define the primary goal you want AI or agentic AI to achieve. You set the goal, but you also need to establish success criteria. Examples include CSAT scores, resolution time, or first-contact resolution percentage. All of these constitute your potential success metrics.

Thumbnail 3090

You must also define what constraints exist and what escalation triggers should activate human involvement. When do you actually need to keep humans in the loop? Additionally, you need to determine how often you review performance—weekly analysis, daily, or monthly? And you should consider leveraging AI's memory. Very importantly, AI manager skillsets differ significantly from human manager skillsets. One truly important skill is learning agility. When you're working with AI agents, we always say that whoever knows how to best leverage AI will become the high performer in the future, the best person you have.

Thumbnail 3190

What you need to do is be very good at pattern recognition—understanding exactly how best to leverage AI and combine it with your team members to achieve your objectives in the best way. You need route identification skills as we discussed. You also need to know intuitively, after working with AI for a while, what contextual interventions humans need to perform frequently and what you should not let AI handle. Human-AI augmentation happens when humans intuitively know when and how best to leverage AI. These are the four key skillsets you need to develop, and you truly develop them on the job by working together with AI systems.
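Pulling the objective-setting components above together (primary goal, success criteria, constraints, escalation triggers, and review cadence), here is a minimal sketch of how they could be captured as configuration; all field names and values are illustrative assumptions, not from the session:

```python
# Hypothetical objective specification for an AI manager: goal, success
# criteria, constraints, escalation triggers, and a review cadence.
return_handling_objective = {
    "primary_goal": "Resolve customer product returns within policy and on time",
    "success_criteria": {
        "csat_score_min": 4.5,           # out of 5
        "resolution_time_hours_max": 24,
        "first_contact_resolution_pct_min": 80,
    },
    "constraints": [
        "Refunds above $1,000 require human approval",  # one-way-door guardrail
        "Never override the published returns policy",
    ],
    "escalation_triggers": [
        "Customer disputes the policy decision",
        "Fraud signals detected in purchase history",
    ],
    "review_cadence": "weekly",  # how often the human manager audits outcomes
}

def needs_human(event: str) -> bool:
    """Route an event to a human when it matches an escalation trigger."""
    return any(trigger.lower() in event.lower()
               for trigger in return_handling_objective["escalation_triggers"])

print(needs_human("Fraud signals detected in purchase history for order 123"))  # True
```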

Thumbnail 3210

Thumbnail 3240

Thumbnail 3270

Building the AI Organization: Hub-and-Spoke Models, Talent Systems, and Living Foundations

Now let's talk about the AI organization—the organization that will enable you to launch AI. We have four key components: the leader, the organizational structure, the innovation ecosystem, and the talent system. Let me start with the leader. Does anyone here have the title of CAIO? It is truly becoming a trend that organizations are hiring CAIOs—people who they believe should be leading the AI strategy and telling the organization how to best leverage AI. That's great, but sometimes we encounter a situation where one leader cannot scale because many organizations, especially at the top with the board and C-suite, are very often also new to this. They don't know what AI can or cannot do, and they don't know the right way to make their organization truly know how to leverage AI.

One key responsibility of a CAIO is to educate, orchestrate, and distribute AI fluency and ownership across the entire C-suite. We see that this is the most effective way for a CAIO to lead the organization. One person cannot scale AI strategy and cannot make it successful for the entire company. Then within the AI organization itself, I have done and been tasked with various AI initiatives.

Thumbnail 3310

Thumbnail 3350

I have led AI and data transformation initiatives for four global organizations. The model I often use is hub and spoke. We do need many things in the hub in order to scale quickly across the entire organization, but we also need spokes into the other business functions and departments to help people learn and to help those facing business problems and opportunities daily identify the right opportunities.

Thumbnail 3400

The key differences lie in how big the hub is, how big the spokes are, and how flexible or permanent the spokes are. I have led AI research-heavy models where the organization is truly creating something and doing cutting-edge research on the industry's most challenging problems. If you establish an AI organization that is research-heavy, the AI center of expertise needs to be large, and the spokes into other functions are often temporary. Teams go there to solve one particular problem and then move back into the center; these teams are called squads.

Thumbnail 3430

Another model of hub and spoke involves a small hub with bigger and more permanent spokes. This approach is very common with AI-heavy models. When I worked for global life science companies, the commercial organization's hub was at the center, but many markets had spokes that were permanently there. The most important thing is how you discover and scale AI innovation and how you multiply that. This is what I call the AI incubator model.

Thumbnail 3510

Whether you are hub-heavy or spoke-heavy, having that ring is really important. Very often, successful ideation and pilots of AI use cases happen in the spoke. But when people discover that it is successful, they have no obligation to scale it. In order to mobilize and motivate people to move it forward and find opportunities to reapply it in other organizations and feed it back to the hub to create permanent products that enable fast scale, you need that ring. That is your innovation ecosystem and that is the incubator. Whether it belongs to the hub or the spokes does not matter.

Thumbnail 3530

Thumbnail 3540

We very often say this is a true two-pizza team with multifunctional members, business subject matter experts, and importantly, AI experts. Having the right talent is crucial—talent that has business acumen and problem-solving skills on the right-hand side, but also AI functional skills on the left side. Balancing the left and right sides is very important, whichever organization, hub or spoke, they are in. What you can adjust is the balance. Some need to be left-brain heavy, requiring deep AI engineering or scientist skills, while others need to be balanced.

Thumbnail 3570

Thumbnail 3600

For example, a very important role is the AI translator, which focuses on problem space discovery and AI opportunity identification. For this role, left brain and right brain need to be balanced. From strategy to implementation takes 90 days. Think about whether hub and spoke is right for you, which type of hub and spoke fits where your people are today, and do the assessment to identify the fluency program for your team. Determine whether you need to increase their left-brain part or right-brain part. This can be done following the 30, 60, 90 days roadmap. Last but not least, data.

Thumbnail 3620

Thumbnail 3640

Thumbnail 3650

Thumbnail 3670

We have been fixing the foundation for years. Who hasn't encountered some of these: data warehouses, data lakes, data lakehouses, data mesh, and data fabric. What we need to think about and identify now is how can I actually build a living foundation? Because your AI foundation should not be a cathedral and should not involve keeping your head down for two years building it. Rather, it should be like a living city where you lay the pipes and continuously develop because agentic AI, unlike all of the others from the past, does not need perfect data. It can take the data, improve on the data, and actually continuously use data to both enrich the use case and further develop your models.

Thumbnail 3680

Thumbnail 3700

Thumbnail 3710

Thumbnail 3730

Thumbnail 3750

Thumbnail 3760

AI technology evolution is accelerating, and as we know, these are just some of the examples from the past three years. So what we need to do in terms of architecture is keep it simple and modularized. That's where solutions and platforms like AWS AgentCore can help you keep your choices open while giving you production-grade observability. We need to not only keep responsible AI and risk management in principle but also operationalize them. For implementation, again over 90 days, if you want to build an agile foundation, these are some of the implementation roadmaps you can follow. Start with one use case, create that template, and expand. These are my six chapters of the AI strategy playbook, and this is the summary for you to take back.

Our time is up, but if you have any questions, feel free to approach me after the session or throughout the entire re:Invent and enjoy the rest of your re:Invent. You've been a great audience. Thank you very much.


This article is entirely auto-generated using Amazon Bedrock.
