🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.
Overview
📖 AWS re:Invent 2025 - A leader's guide to AI strategy and implementation (SNR305)
In this video, Helena Yin Koeppl, AWS Executive in Residence, presents a comprehensive AI strategy playbook for senior leaders, introducing the RIPPLE framework (Rational Pause, Incentive Mapping, Perspective Divergency, Contrarian Truth, Moat Building, and Velocity) to translate business strategy into AI strategy. She demonstrates how organizations can achieve 100x productivity gains through agentic AI by reimagining business processes using the BREAK framework (Blindspot scan, Reframe constraints, Economic dissection, Assumption audit, Kaizen the happy path). Real-world examples include Moody's reducing risk assessment from one week to one hour, and Genentech automating 43,000 hours of manual work. The session covers six chapters, including defining an AI North Star, reimagining business processes, building hub-and-spoke AI organizations with the AI incubator model, developing human-AI manager competencies, and creating living data foundations. Koeppl emphasizes that 95% of AI projects fail because organizations use yesterday's playbook, advocating for starting with business problems rather than AI solutions, and implementing AI ROI discovery methodologies including test-control analysis and AI attribution tracking.
Note: This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Welcome to the First Senior Leaders Track: The AI Revolution's Accelerating Timeline
Good morning. Welcome to your first session of 2025 re:Invent and the first senior leaders track session at re:Invent ever. So it is 9:00 AM Monday morning in Las Vegas. You must really, really love AI strategy. So let's get started.
My name is Helena Yin Koeppl. I am an Executive in Residence at AWS, part of a small group of former senior leaders who ran and led transformations in our past lives as AWS customers. We have now joined AWS to share those experiences with people like you. As for myself, before joining AWS last year, I spent 26 years leading data and AI transformation at four Fortune 500 companies in global roles. A lot of today's session comes from my own experience, but also from my past two years of talking to hundreds of customers about their AI journeys.
So how much has happened in the past three years? We have gone from generative AI assistants to generative AI agents, and now to agentic AI systems. As with any technology revolution, we foresee a multiplication of productivity gains. From the steam engine to electricity to moving to the cloud, the multiplier keeps getting bigger and the time it takes to get there keeps getting shorter. What about AI? How large should the productivity gains be, and how long should it take to realize them? We hope it can be a hundredfold, in the shortest of time.
Five Productivity Bottlenecks That Agentic AI Can Finally Unleash
Generative AI and agentic AI should finally remove many long-standing productivity bottlenecks. Here are the five productivity bottlenecks the agentic era should finally unleash. Number one, unstructured knowledge. We have gone through many eras of sorting through data and labeling it, most of it structured. Meanwhile, critical know-how sits in unstructured data: in PDF files, in emails, and in people's heads, where answers vary depending on who you ask. All of this can now have a single front door via agentic AI, getting you through that institutional knowledge, reducing the time it takes to get answers, and narrowing compliance risk.
As an example, one AWS customer, Moody's, is a risk rating agency, and analyzing a risk rating means working through a huge volume of documents, which takes a lot of time. We helped them build a proof of concept using a multi-agent workflow that reduced that risk assessment analysis from one week to one hour. Let's calculate the productivity gain here: even counting only working hours, one week to one hour is roughly a 40x speedup.
Number two: one of the key things agentic AI can unleash, something we have been talking about for years and years, is the segment-of-one customer journey. We've been talking about personalization and about tracking true customer preferences across platforms, and finally, with agentic AI's ability to act, we are able to do so. As an example, anybody here who is not a Rufus user? Very few, almost none. Rufus is one of the key examples of an agentic and generative AI powered shopping assistant. Because it can aggregate all of those reviews and product information, personal habits, and what other people who bought this also bought, it makes shopping much quicker, actually 4.5 times quicker during Prime Day, while maintaining low latency. We foresee a huge amount of value coming out of Rufus for amazon.com. And we have seen this not only in shopping and consumer businesses but also in highly regulated environments like banking: NatWest boosted their click-through rate by 900% and achieved two million more applications for higher-interest savings accounts by using personalized experiences and recommendations.
Number three: decision latency in complex operations. A huge amount of data, again including unstructured sensor data, can be aggregated, with agentic AI making sense of it. An example, again from Amazon: we have an agentic and generative AI system called DeepFleet managing the more than one million robots in our warehouses, pushing automation much further, from handling troubleshooting to directing routes and dynamically deciding the best way to pass the right products into packaging and shipping.
Number four: a lot of the agentic AI usage we have seen is in service. In service management in the past, and I experienced this more than 20 years ago, if you wanted great customer support, you would follow the sun: three teams on three different continents covering 24/7. You don't have to do that anymore. DoorDash has an agentic AI system supporting all inquiries from its 7 million Dashers, from how much a product costs today to troubleshooting to question answering. That's $3 million in savings just in supporting the Dashers.
Last but not least, faster product cycles: building in fast learning and sorting through huge knowledge bases. We see this in healthcare and life sciences, one of the industries I worked in for many years, where it is very difficult because there is normally a huge amount of unstructured data. One example is Genentech, a Roche Group company, where we helped the research organization automate 43,000 hours of manual work, augmenting their researchers' work, equal to five years of time saved.
All of these are great examples, yet some headlines claim that 95% of agentic AI projects fail to deliver measurable ROI. If you read the details, what they are really saying is this: the 5% of companies who managed to gain real value, millions in incremental productivity, business impact, and new business models, did something differently. They transformed their strategy and their business processes. So what I'm talking about today is: stop steering today's AI challenge with yesterday's playbook.
Stop Steering Today's AI Challenge with Yesterday's Playbook: The True Flywheel of AI Innovation
Here are some examples of yesterday's playbook. I'm sorry if the mic is a bit louder and softer. Number one: when we are faced with such a fascinating piece of technology, we very often ask ourselves what we can do with it, instead of starting with the problem to solve. We go in with a solution searching for a problem. Number two: we don't really rethink the business process. We look at the exact same process as before, find steps to automate, and get only incremental gains; sometimes we even slow it down.
Talking about experiences, we are still breaking them down into silos. Marketing does communication, sales does lead generation, and none of them are actually working together toward the same customer. There are more challenges in terms of the how. We've been talking about how AI can unleash productivity gains, but what we often do is not retrain people and redesign roles but replace people. That is not the right approach, and especially not the way to achieve true productivity gains.
In terms of organization, we've been centralizing AI communities, and we will talk specifically about that. And of course there's data, the new oil, and especially the oil that powers the engine of AI; we are still not setting the data up right. Many organizations who come to us, and I have experienced this before myself, have reached an inflection point: we have done a lot of experimentation in the past three years, many POCs, but we have seen uneven ROI and scattered AI spending by function. The metric has been how many AI projects we are running rather than the outcomes.
Now leaders come to us asking the right questions, which are: Are we really backing the right top three value pools? Will this give us defensible competitive advantage? How do we turn the incremental wins truly into compounding gains? This is what I'm going to talk about today. How do we do that? How do we actually have an AI strategy and implement it to deliver that?
Let's think about the true flywheel of AI innovation. Yes, it's great that we have a lot of experimentation, and it's great to learn about the new technology. But we should go one step up and ask: where am I going to truly invest in AI? Where is the strategic point in the business strategy I can support that really makes a difference? Start with the opportunity, not the solution. Then deliver it quickly and truly show value and ROI, all while building the right foundation at the same time.
That is the new AI strategy playbook. There are six chapters I'm going to talk about. How do you define an AI North Star, starting from an AI strategy that mirrors your business strategy? How do you reimagine business processes and personalized experiences? And how do you build that motorway, not just in terms of technology or platforms, but truly in terms of people: everyone leveraging AI in your company, and your AI organization itself?
Defining Your AI North Star: The RIPPLE Framework for Translating Business Strategy into AI Strategy
Number one, let's start with the AI North Star. In AI transformations, the biggest mistake organizations make is starting with "Let's have an AI strategy and look into how we leverage AI." The right question is: what is your business strategy, and where and how do I translate that business strategy into an AI strategy? I've talked to organizations that want to do this translation. What they do is list their entire seven-year business strategy and stare at it, wondering: how am I going to translate this very high-level vision into a very detailed AI roadmap?
The first questions to ask yourself are: Where do you have the opportunity to automate? Where do you have the opportunity to innovate, creating new products and new services? And where do you have a truly disruptive advantage? Keep those three questions in mind as you think about the translation.
The answer is that we should ask different questions, not the same questions we've been asking for the past 50 years. I'm going to introduce a new framework called RIPPLE. It is a new mental model that helps you ask new questions, translate your strategy, and define the must-haves, the places where AI can create competitive advantage, as opposed to the nice-to-haves.
Number one is R, Rational Pause. Look at where AI would truly make a difference in your business, starting with the uncomfortable truth: where are you bleeding customers, and where are you losing revenue to competitors? This is about defending your existing market and protecting your competitive advantage. How do we ask different questions? Here are some: Where are nimbler competitors serving customers faster than us? Which high-volume decisions are costing us significant revenue through delays or errors? That's your opportunity. What routine work prevents our best people from winning new business, and where do we lack real-time visibility into customer or competitor moves? That's number one, and it helps you define the problem space where AI could play a major role.
Number two is Incentive Mapping. Incentive Mapping is about realizing that organizational silos are created not just because we sit in different departments, functions, or even separate teams, but because our incentive systems are not aligned. So ask different questions to discover that. Do you have conflicting incentives across departments? Where are departments measured on metrics that work against each other? Where do departmental silos prevent us from serving customers better? Which handoff bottlenecks in data and processes are preventable? Discovering those opportunity spaces takes us into the second chapter, reimagining business processes, where we have another framework to introduce.
Number three is Perspective Divergency. This one is quite interesting. We all have problems we avoid because they're too complex, because they've always been there, and because we've tried for years and couldn't solve them. Now ask yourself a different question: what if an AI-native competitor tried to enter your space? One place I worked before, where I led AI product innovation, was the legal space. Legal has many years of labeled data, and we truly thought we had the competitive advantage. Yet there have been quite a few successful new entrants with an AI-first approach. At first they were small, but because they generated valuable insights very quickly with AI and generative AI, at some point they became genuinely threatening competitors. So what makes them nimble? What makes them competitive? And how can we leverage our own proprietary data, for example, to become more successful than them with AI?
Here are some questions to help you identify the areas where you want to apply AI, experiment, and move into production. Once you've discovered some opportunities, the next natural and important question is not "could we use AI here?" but "should we?" That is very important. That is the Contrarian Truth. Question the process itself: what if it is already the best fit for its context? If we introduce AI into this process, could it introduce new risks, and what are the mitigations? And when we have a perceived bottleneck, is it real, or is it misunderstood? So for your business and your area, ask not only "could we?" but also the opposite: "should we?"
Another very important question: if we truly invest resources into this problem area and transform the business process, can we build a sustainable competitive moat through AI? Maybe we can create data that is unique, that competitors cannot and will never have access to. We can embed AI into workflows that continuously learn and record not only the institutional knowledge we've been talking about, which accumulates yearly or monthly, but daily, second-by-second operational knowledge, continuously training AI on your business, customizing it to your operations and your proprietary data, until eventually nobody else can copy it. That is your moat. That is your continuous competitive advantage.
Last but not least: how quickly can you implement it? The velocity of implementation is crucial. If you can implement within two months, or within six, you might create 18 months of advantage that nobody else can match. That is where you should invest. Once you identify that opportunity, put sponsorship behind it and develop the skills to execute, not only for a prototype but to truly move into production, continuously keeping that gap between you and your competitors.
With this RIPPLE framework, the opportunity filter workshop is your next step. Use it to identify these opportunities, but score each opportunity on four factors: impact on business outcomes; technical feasibility, which also determines how quickly you can implement and create velocity; strategic alignment; and innovation velocity, which we just discussed. Then select the must-haves to move forward. That is your map.
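The scoring step above can be sketched as a simple weighted model. The four factor names come from the talk; the weights, the 1-to-5 scale, the cutoff, and the example opportunities are all hypothetical.

```python
# Hypothetical sketch of the RIPPLE opportunity filter scoring step.
# Factor names come from the talk; weights and data are illustrative.

WEIGHTS = {
    "business_impact": 0.35,
    "technical_feasibility": 0.25,
    "strategic_alignment": 0.25,
    "innovation_velocity": 0.15,
}

def score(opportunity: dict) -> float:
    """Weighted sum of the four factors, each rated 1-5."""
    return sum(WEIGHTS[f] * opportunity[f] for f in WEIGHTS)

opportunities = [
    {"name": "agentic due diligence", "business_impact": 5,
     "technical_feasibility": 3, "strategic_alignment": 5,
     "innovation_velocity": 4},
    {"name": "chatbot FAQ refresh", "business_impact": 2,
     "technical_feasibility": 5, "strategic_alignment": 2,
     "innovation_velocity": 3},
]

# Rank and keep the must-haves (example cutoff: score >= 3.5).
ranked = sorted(opportunities, key=score, reverse=True)
must_haves = [o["name"] for o in ranked if score(o) >= 3.5]
print(must_haves)
```

The weights themselves are a workshop output: agreeing on them forces the leadership team to state which factor matters most.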
Let's talk quickly about ROI. We have been saying we need to identify ROI for AI, not just for the past three years but ever since I started working in AI. One key learning from those years: you need to set up observation in operation, not after the fact. Post-fact is too late; you won't be able to successfully separate what is AI and what is not. There are five steps. Select the right value stream, one that truly results in productivity, differentiation, or new revenue streams. Embed AI into that flow, and wire the data: identify the entire data flow, separate AI from non-AI, and capture usage in detail, not just AI versus no AI but how long, which part, what percentage. Then run, compare, and scale.
Let me put on my analyst's hat for some AI ROI discovery methodologies. When you run a pilot and can isolate it alongside a similar environment with similar activities but without AI, you can do test and control: the test is where you have AI, like A/B testing, and the control is where you don't, and you can quickly identify how much incremental value you create. Post-launch, because you have wired the data flow, you can do AI attribution analysis. And on an ongoing basis, you can identify every single touchpoint, how much AI touched it, and continuously optimize. So that's chapter one: business strategy to AI strategy. That's your AI North Star.
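As a minimal sketch of the test-and-control step, assuming you can log the same outcome metric for an AI-assisted pilot group and a comparable control group. The numbers below are illustrative, loosely inspired by the week-to-hour assessment example; a real analysis would use far more samples and a significance test.

```python
# Hypothetical test-vs-control ROI calculation: compare an AI-assisted
# pilot group against a similar group without AI on the same metric.

def uplift(test_outcomes, control_outcomes):
    """Naive difference of means between test and control groups."""
    test_avg = sum(test_outcomes) / len(test_outcomes)
    control_avg = sum(control_outcomes) / len(control_outcomes)
    return test_avg - control_avg

# Illustrative data: minutes to complete one risk assessment.
with_ai = [55, 62, 58, 60]          # pilot (AI-assisted)
without_ai = [300, 280, 310, 290]   # control (manual)

# Metric is time, so a negative uplift means time saved.
saved_per_case = -uplift(with_ai, without_ai)
print(f"~{saved_per_case:.0f} minutes saved per assessment")
```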
Reimagining Business Processes: The BREAK Framework for Non-Sequential Transformation
Now let's look at a couple of deep-dive examples. Number one: business processes. If you keep a process completely intact and only automate separate parts of it, what we have seen from many customer interviews is that you don't actually see much AI value creation. We need to shift from keeping things intact and implementing AI sequentially to reimagining the complete business process, focusing on the outcome you want and rethinking from there. Several organizations I've talked to have come back to say, "We tried that. We ran a workshop, talked about the outcome, everybody went away to think about the opportunities and redesign it, and six weeks later they came back with Excel sheets." So how do we actually do it?
What I suggest is, again, to ask different questions, starting from the sequential process you have right now. Take due diligence as an example. Due diligence, like most complex business processes today, is pretty sequential: you collect the documents, do the initial review, flag the issues, then the issues go through legal review and financial impact review, and eventually you have the risk assessment. Then you decide: can I deal with it or not, is it worth it, and do we have to go back to the previous step and handle the exceptions there? It is time consuming; due diligence takes six to nine months on average, time during which you should already be moving forward with business value creation.
Another thing to think about, and this is why we should not start with technology: when transforming business processes, it's rarely one piece of technology. It's not just agentic AI, not just generative AI, and not just "let's use AI assistants and make the process a little bit better." If you want to reimagine the process, as often said, don't start with the technology. Start by identifying exactly what you want to do. The most important and most time-consuming work in due diligence is discovering issues and then thinking about ways to mitigate them. With that in mind, you can reimagine this into a non-sequential process. Step one: ingest all the information you have, all the documents.
You do semantic analysis to identify the key clauses that might be issues. You are already aggregating several components together, not sequentially, and you can use generative AI for it. Once an anomaly is identified, you can use analytical AI to, for example, compare it with your ERP data, perform financial impact analysis, and do risk scoring. That's cross-referencing and valuation.
Additionally, say the issue you discover is a vendor clause allowing contract termination if any breach happens. What do you do? You can process in parallel, with several agents coordinated by an agentic AI coordinator: legal processing agents, financial analysis agents, and human oversight all together. That's your agentic routine, which saves time by not working sequentially and identifies the most optimal outcome and solution, then sends it to remediation. The remediation itself can use agentic AI to draft waiver proposals, for example, with a human in the loop deciding the action. All of this is reimagining rather than thinking sequentially.
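A minimal sketch of that parallel agentic routine, assuming a coordinator that fans a flagged clause out to specialist agents concurrently. The agent names and canned outputs are hypothetical stand-ins for LLM-backed agents.

```python
import asyncio

# Hypothetical sketch: a coordinator fans a flagged clause out to
# specialist agents in parallel instead of running them sequentially.

async def legal_agent(clause: str) -> str:
    await asyncio.sleep(0.01)  # stands in for an LLM-backed analysis call
    return f"legal: termination risk in '{clause}'"

async def financial_agent(clause: str) -> str:
    await asyncio.sleep(0.01)  # stands in for ERP lookup + impact scoring
    return f"financial: exposure estimated for '{clause}'"

async def coordinator(clause: str) -> list:
    # Run both analyses concurrently; results then go to remediation,
    # with a human in the loop deciding the final action.
    results = await asyncio.gather(legal_agent(clause),
                                   financial_agent(clause))
    return list(results)

findings = asyncio.run(coordinator("vendor may terminate on breach"))
print(findings)
```

The point of the sketch is structural: the two agent calls take the time of the slowest one, not the sum, which is where the non-sequential speedup comes from.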
So the how is important. How do we take a sequential business process and ask different questions to completely reimagine it? I have a framework called BREAK, with these components: Blindspot scan, Reframe constraints, Economic dissection, Assumption audit, and Kaizen the happy path.
What is the Blindspot scan? When you have a business process you have used for years, it's very hard to just tell yourself: reimagine it. One method is to ask why, five times, and sometimes some of the whys become why nots. Here is an actual example. The current step: manual approval for orders over one thousand dollars. Why? Because we need to prevent fraud; it's a big amount. Why? Because big amounts are risky. Why? Because we cannot verify customer intent at that amount. Why? Because we don't have real-time data.
The question has already been flipped: it's no longer a compliance question, it's a data question. And the last why: because our systems are not interoperable. The five whys help you identify the part of the business process you should dig deeper into. That's your blind spot; without asking the whys, you would just continue doing what you did before.
Second, we very often have constraints, and reframing a constraint means challenging your process limitations. Ask yourself: what if I had zero latency? What could I achieve? What if there were zero touch? What could I achieve then? At AWS, for example, we asked ourselves: what if we had zero ETL? What could we achieve? Very often, new thinking, new invention, and new reimagining of business processes start by naming the limit you have and then asking: how do I get past it? That's how breakthroughs happen.
Economic dissection. There are processes you accept as the best you can do, without thinking about the actual costs and the value-destroying elements of not making a change. We often overlook hidden costs when measuring output. Think about it: it feels normal to stand in the checkout queue at the supermarket, but the hidden cost of that time, and of frustrated customers putting things down and never buying them, is real value lost. Think about where those hidden costs sit in your process and how to solve them. And that brings us to auditing our assumptions.
The Assumption audit is about flipping the sacred cows. Sacred cows are often used as excuses: there are risks, there are compliance issues. But is that truly so? Some approval processes are genuinely compliance requirements, but others are just organizational habit. Ask yourself: is this approval truly needed for compliance, or are we just used to it? Do we have to go through the organizational hierarchy? The same goes for manual quality checks: "I only trust humans to do this step." Can that assumption be challenged? And which sequential dependencies are actual requirements, and which exist just because we have historically always done it that way?
The last part of BREAK, the K, is Kaizen the happy path. In process engineering, we talk about the happy path: the easiest path, the one you take 80% of the time, where you put your focus. But the new way of discovering the happy path, especially with agentic AI, which can process many different scenarios in parallel and find the right one, is to let things just happen for a while. The picture you're seeing here is from Ohio State University. They wanted to build paths for students to reach their lecture halls, and instead of building them immediately, they let students walk across the grass, identified the most used routes, and eventually paved the paths where the footpaths had formed. That is observing actual behavior. Because agentic AI can log everything, reflect on it, and identify patterns with memory, you can identify the happy paths and formalize them.
So Kaizen, by the way, in case you're not familiar with it, is a Japanese philosophy. Some enterprise organizations actually use it as well in their business strategy. It's about continuous step-by-step small improvements to get to bigger value. But those frequent changes are informed by actually what has happened in actual work and experiences.
Agentic AI enables Kaizen at scale. As I mentioned, there is continuous observation, recording, reflection, and a learning loop, with policy refinement driven by the agents' observations: "this path with these analysis steps is 20% faster every single time."
This policy refinement is continuous and traceable. Parallel experimentation lets you run not just A/B testing but A, B, C, D, E, F, G, thousands of experiments in parallel, finding the right path and discovering the experience people truly prefer. Feedback is always integrated, including input from the human managers of the agentic AI system.
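The "let paths emerge, then formalize" idea can be sketched by mining logged agent trajectories for the most frequent successful path, like counting footpaths worn into the grass. The log format and step names here are hypothetical.

```python
from collections import Counter

# Hypothetical agent trajectory logs: each entry is the ordered tuple of
# steps an agent took to resolve a task, plus whether it succeeded.
logs = [
    (("ingest", "classify", "resolve"), True),
    (("ingest", "classify", "escalate", "resolve"), True),
    (("ingest", "classify", "resolve"), True),
    (("ingest", "resolve"), False),
    (("ingest", "classify", "resolve"), True),
]

# Count only successful trajectories; the most worn path wins.
paths = Counter(path for path, ok in logs if ok)
happy_path, freq = paths.most_common(1)[0]
print(happy_path, freq)
```

The discovered path can then be formalized as the default policy, while the rarer successful variants remain as fallbacks.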
With the BREAK framework, the next step toward implementation is process mapping: lay out today's process, which may be sequential and may already have digitized or automated parts. Then use the BREAK analysis to truly challenge your limits and your sacred cows. Of all the bottlenecks you identify, prioritize the ones that make a true difference and note their dependencies. Map them to AI solutions, as in the due diligence reimagining you just saw. Finally, develop the implementation roadmap. That way, instead of just saying "let's reimagine the business process," you have a structured approach that gets you to the opportunities.
Rethinking Customer Experiences: Putting Humans at the Center with Agentic AI Orchestration
An example I've already been working on with a few customers is the clinical trial process. Bringing a drug to market takes roughly ten years and billions of dollars, because you have to go through Phase 1 safety, Phase 2 efficacy, and Phase 3 in a larger population. Many people believe all of this must happen sequentially: safety first, then small-population efficacy tests, then larger populations. What we don't challenge ourselves on is whether some of these processes can happen in parallel. Preparing documents, reviewing what is already on the market, reviewing the literature: many of these activities can happen at the same time, and through trial and error as well.
Using a similar mindset, we can rethink how we redesign experiences with AI, because it is often a very similar problem: the silo problem. The way to rethink it is to put the human at the center, instead of saying marketing does communication and sales does lead generation. This is the person you need to serve: what are her needs, and what does she want throughout the entire process? Agentic AI can help, because reasoning, planning, action, and orchestration all happen in the background.
Let's use a concrete example. Returning products can be quite frustrating for customers. When a customer says she wants to return a product, behind the scenes AI can work in parallel: checking her purchasing history (does she always return what she buys? is the return valid? what has been done before?), the warranty terms, and whether the return falls within thirty days of purchase. The AI reasons through all of this at the same time, gathers all of the related information it needs, and then acts. If the return is valid and within her rights, it processes it, updates the inventory immediately, arranges the shipping, and notifies the customer. Imagine if, instead, the original return process had to go through several departments.
All of this is done without going through those silos, with this person and her particular need at the center. You should also think about handling things more proactively. We solved the problem of returning a product, but if you put the human at the center, you can think proactively about designing her experience. For a customer, can I propose a new product? Can I flag a project deadline before the person notices it, let her know it is approaching in three days, and ask what can be accelerated? If you are dealing with partners, can you identify the underlying challenge from all the error logs and the repeated problems you or the partner have around an issue?
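The parallel eligibility checks described above can be sketched in a few lines of Python. Everything here is hypothetical: the check functions are stubs standing in for real purchase-history, warranty, and policy lookups, not any actual retail API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the data sources the talk mentions.
def check_purchase_history(customer_id):
    # e.g. flag serial returners; here a stub that always passes
    return {"frequent_returner": False}

def check_warranty(order):
    return {"under_warranty": order["days_since_purchase"] <= 365}

def check_return_window(order, window_days=30):
    return {"within_window": order["days_since_purchase"] <= window_days}

def process_return(order):
    """Gather every eligibility signal in parallel, then act once."""
    with ThreadPoolExecutor() as pool:
        history = pool.submit(check_purchase_history, order["customer_id"])
        warranty = pool.submit(check_warranty, order)
        window = pool.submit(check_return_window, order)
        facts = {**history.result(), **warranty.result(), **window.result()}

    if facts["within_window"] and not facts["frequent_returner"]:
        # In a real system these would be downstream agent actions.
        return {"approved": True,
                "actions": ["refund", "update_inventory",
                            "arrange_shipping", "notify_customer"]}
    return {"approved": False, "actions": ["escalate_to_human"]}

result = process_return({"customer_id": "C-1", "days_since_purchase": 12})
```

The point of the sketch is the shape of the flow: all checks run concurrently, and the action step fires only once every fact has arrived, rather than the request crawling through one department at a time.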
To implement this, one idea is to run a workshop: map the event you want to solve, identify the silos, and map the entire orchestration. Think about how you can completely transform the experience rather than following the sequential steps a human team would need. An example comes from Amazon itself, where the entire organization moved to a new agentic AI solution called A to Z. Instead of going through HR policies, asking my manager, and providing all this information when I need a vacation, I simply type into the interface that I need five days of vacation, and all of the needed information comes back to me. The system responds according to policy: I am in Switzerland, this is what I can do, I have fifteen days remaining, and yes, I can take five days of vacation right now. By putting the employee at the center, the departmental silos of HR, payroll, and IT are set aside, and we truly think about what this human needs to achieve and how every process and tool can be leveraged.
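The pattern behind an employee-centered assistant like this can be sketched as a policy lookup plus a balance check. This is a toy illustration under my own assumptions (a simple per-country policy table and invented field names), not the actual A to Z implementation.

```python
# Hypothetical per-country policy store; not a real HR system.
POLICIES = {"CH": {"annual_days": 25}}

def request_vacation(employee, days):
    """Answer a plain request ("I need five days off") from policy + balance,
    instead of routing the employee through HR, payroll, and IT separately."""
    policy = POLICIES[employee["country"]]
    remaining = policy["annual_days"] - employee["days_taken"]
    approved = days <= remaining
    return {
        "approved": approved,
        "remaining_after": remaining - days if approved else remaining,
    }

# An employee in Switzerland who has taken 10 of 25 days asks for 5.
answer = request_vacation({"country": "CH", "days_taken": 10}, 5)
```

The silos have not disappeared; they are simply consulted behind one interface, so the employee sees a single answer rather than three departments.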
Building the AI Organization: From Human-AI Managers to Hub-and-Spoke Models
Now let's talk about how all of your employees actually work together with AI. First of all, not everything should be handed to AI, whether it's an agent or generative AI. There are big problem spaces that truly deserve agentic AI: you have a clear objective, because that is what the AI needs to achieve; you have sufficient data actually available; and the decisions involved, however many there are, are reversible. In Amazon we talk about one-way doors and two-way doors. Two-way door decisions can be reversed, and those you can give to AI.
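As a minimal sketch, those three delegation criteria can be expressed as a single gate. The field names are my own, invented for illustration, not terminology from the talk.

```python
def safe_to_delegate(task: dict) -> bool:
    """A task goes to an AI agent only if all three conditions hold."""
    return (task["clear_objective"]      # the agent knows what "done" means
            and task["sufficient_data"]  # the data it needs truly exists
            and task["reversible"])      # a two-way door: we can undo it

# A one-way-door decision fails the gate and stays with a human.
print(safe_to_delegate({"clear_objective": True,
                        "sufficient_data": True,
                        "reversible": False}))  # prints False
```

Encoding the rule this explicitly is less about the code and more about forcing the conversation: someone has to decide, per task, whether the door swings both ways.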
You can and should retain human control over more strategic decisions, relationship-sensitive decisions, and novel situations, and let's not forget high-stakes, one-way door, irreversible decisions. We also need to rethink the human as an AI manager. The role is no longer that of a task definer. What is a task definer? "Run this report." You define what needs to be done. But when working with AI, especially agentic AI, we need to give the AI an objective: I want to achieve this, please make the payment on time.
Please help this customer and, according to our policies, return this product the right way within the time limit. That paradigm shift from task delegation to objective setting is a truly transformative change of mindset. The human's role as AI manager requires three competencies: objective setting, performance monitoring, and strategic intervention.
Objective setting, for example, starts with the primary goal you want the agentic AI to achieve. You set the goal, but you also need to set the success criteria: CSAT scores, resolution time, or first-contact resolution percentage are all potential success metrics. What are the constraints? What are the escalation triggers, the points where you need to keep a human in the loop? How often do you review performance: daily, weekly, monthly? And how do you leverage the AI's memory?
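Those objective-setting elements map naturally onto a small configuration object. A hedged sketch: the class name, fields, and example values below are invented for illustration and are not part of any framework.

```python
from dataclasses import dataclass

@dataclass
class AgentObjective:
    goal: str                       # primary goal, not a list of tasks
    success_metrics: dict           # e.g. {"csat": 4.5, "resolution_hours": 1}
    constraints: list               # policies the agent must respect
    escalation_triggers: list       # conditions that pull a human into the loop
    review_cadence: str = "weekly"  # how often performance is reviewed

# A hypothetical objective for a returns-handling agent.
returns_agent = AgentObjective(
    goal="Resolve customer return requests within policy",
    success_metrics={"csat": 4.5, "first_contact_resolution_pct": 80},
    constraints=["30-day return window", "refund to original payment method"],
    escalation_triggers=["disputed warranty claim", "refund above $500"],
)
```

Writing the objective down in this form makes the shift from task delegation concrete: the agent gets a goal and boundaries, not a step-by-step script.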
Very importantly, AI manager skill sets differ from human manager skill sets. One truly important skill is learning agility. When working with AI agents, we always say that whoever knows how best to leverage AI will become the high performer of the future, the best people you have. They need to be very good at pattern recognition: finding the best way to combine AI with their team members to achieve the objective, the best-route identification we talked about. After working with AI for a while, they also need to know intuitively which contextual interventions a human should make rather than letting the AI handle them. Human-AI augmentation happens when humans intuitively know when and how best to leverage AI. Those are the key skill sets to develop, and ultimately they are built on the job, working alongside AI systems.
Now let's talk about the AI organization, the organization that will enable you to launch AI. We have the leader, the organizational structure, the innovation ecosystem, and the talent system. Number one, the leader. Anyone here hold the title of CAIO? It is truly becoming a trend for organizations to hire CAIOs, the people they believe should lead the AI strategy and tell the organization how best to leverage AI.
That's great, but we do sometimes find that one leader cannot scale, because much of the organization, especially at the top, the board and the C-suite, is often also new to this. They don't know what AI can and cannot do, and they don't know the right way to make their organization truly capable of leveraging AI. So a key job of the CAIO is to educate, to orchestrate, and to distribute AI fluency and ownership across the entire C-suite. We see that as the most effective way for a CAIO to lead: one person alone cannot scale an AI strategy or make it successful for the entire company.
And then the AI organization itself, I have led and been tasked with data and AI transformation for four global organizations.
The model I often use is hub and spoke. Yes, we need many things in the hub in order to scale quickly across the entire organization, but we also need spokes into the other business functions and departments, to help people learn and to help those facing business problems and opportunities daily to identify them. The differences between models lie in how big the hub is, how big the spokes are, and how flexible or permanent the spokes are.
I have led research-heavy AI models, where the organization is truly creating something, doing cutting-edge research on an industry holy-grail problem. If you establish a research-heavy AI organization, by all means the AI center of expertise needs to be big, and the spokes into the other functions are often temporary: teams go out, solve one particular problem, and move back into the center. These temporary teams are called squads. The other hub-and-spoke model is a small hub with bigger, more permanent spokes, which very often goes with the AI-heavy model. When I was working for global life science companies, the commercial organization's hub sat at the center, but many of the markets had spokes that were permanently there.
But the most important thing is how you discover and scale AI innovation, and how you multiply it. That is what I call the AI incubator model. Whether you are hub-heavy or spoke-heavy, having that ring is really important. Successful ideation and pilots of AI use cases very often happen in the spokes, but even when a pilot succeeds, the people who discovered it have no obligation to scale it. The ring is what mobilizes and motivates people to move an idea along, find opportunities to reapply it in other parts of the organization, and feed it back to the hub to create permanent products that enable fast scaling. That ring is your innovation ecosystem, and that is the incubator. Whether it belongs to the hub or to the spokes doesn't matter.
We very often say this is the true two-pizza team, with multifunctional team members: business subject matter experts, but importantly also AI experts. Having the right talent, people with business acumen and problem-solving skills on the right-hand side but also AI functional skills on the left side, and balancing the two, is very important in whichever part of the organization, hub or spoke, they sit. What you can adjust is the balance: some roles need to be left-brain heavy, with deep AI engineering or science skills, while others need to be balanced. For example, the very important role of AI translator is about problem-space discovery and AI opportunity identification, so there the left brain and right brain need to be balanced.
Going from strategy to implementation takes 90 days. Deciding whether to use hub and spoke and which type, assessing where your people are today, and designing the fluency program for your team, whether it strengthens their left-brain or right-brain side, can all be done following this 30-60-90 day roadmap. Last but not least, data.
Creating a Living Data Foundation: From Cathedral to City with Modular Architecture
We have been fixing the foundation for years. Who hasn't encountered some of these: data warehouses, data lakes, data lakehouses, data mesh, and data fabric. What we need to figure out now is how to build a living foundation. Your AI foundation should not be a cathedral, built with your head down for two years; it should be a living city, where you lay the pipes and keep developing continuously. Agentic AI, unlike everything that came before, does not need perfect data. It can take the data, improve on it, and continuously use it both to enrich the use case and to further develop your models.
AI technology evolution is accelerating, and these are just some of the examples from the past three years. So in terms of architecture, we also need to keep things simple and modular. That is where solutions and platforms like AWS AgentCore can help: they preserve your choices, avoid lock-in, and give you production-grade observability. And we need not only to hold responsible AI and risk management as principles, but to operationalize them.
Okay, so for the implementation, again over 90 days: if you want to build an agentic foundation, these are some implementation roadmaps you can follow. Start with one use case, create the template, and expand. Those are my six chapters of the AI strategy playbook, and this is the summary for you to take back. Our time is up, but if you have any questions, feel free to approach me after the session or throughout re:Invent, and enjoy the rest of your re:Invent. You've been a great audience. Thank you very much.
; This article is entirely auto-generated using Amazon Bedrock.