
Kazuya

AWS re:Invent 2025-Digital transformation excellence using agentic AI: From vision to victory-MAM204

🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.

Overview

📖 AWS re:Invent 2025-Digital transformation excellence using agentic AI: From vision to victory-MAM204

In this video, AWS Expert Services leaders Brad Sevenko and Oscar Rodriguez demonstrate how agentic AI enables 10x faster cloud migrations with 80-90% cost reduction. They showcase AWS Transform for VMware, Windows, and mainframe modernization, Amazon QuickSuite for no-code automation, and Amazon Q Developer (Kiro) for application development. Ashish Shekhar from Danske Bank shares their remarkable achievement of migrating 850 applications in 15 months—six times faster than industry benchmarks—using automated processes. He details their mainframe modernization approach, converting COBOL to .NET and PL/1 to Java with 65% lower error rates, 80% reduced reverse engineering effort, and 4x faster development speed. The session emphasizes practical implementation strategies including education, early value proof, and production deployment of AI agents, demonstrating how organizations can compress traditional 10-15 year mainframe migrations into significantly shorter timeframes.


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Introduction: The 10x Acceleration Promise of Agentic AI in Cloud Migration

Good morning everyone. I hope you are enjoying Las Vegas, the keynote, and welcome to our session here this morning on digital transformation excellence using AWS agentic AI, taking you from vision to victory. I'm Brad Sevenko. I'm in Professional Services with AWS in a team called Expert Services, where I run the migration and modernization team. We do some of the largest migrations in the world with some of the largest household names that many of you may have heard of, and perhaps some of you are in the room today. I'm joined by Oscar Rodriguez, who leads a team with domain practices across migration and modernization, data, enterprise transformation, and supply chain. Together we're changing the world with some of these customers that I speak about.

Speaking of which, we've invited Ashish Shekhar back this year from Danske Bank, and he's going to speak about some of the progress that they've achieved last year and that they are experiencing this year from a migration and modernization perspective. So let's get started. Here's what's happening today with this new technology. I've been doing this for quite some time at AWS, and with all things being agentic AI, what we are experiencing is migrations occurring on average about 10x faster than traditional approaches. Today we're going to speak about that and what that means. 10x faster is an 80 to 90 percent cost reduction in terms of how we staff and deliver projects for our customers from a migration and modernization experience.

Thumbnail 100

The challenge and the opportunity is significant. As a matter of fact, 70 percent of the workloads that we are migrating were built, developed, tested, and deployed anywhere from 20 to 30 years ago. Think about the complexities involved with that legacy software, legacy hardware, legacy systems and processes, and people in some cases who may not even be available or who have documented those previous platforms. That is a challenge that we experience and that you experience every single day. Think about it: 70 percent of the IT landscape that we operate, that we manage, that we deploy is still on premises with a lot of contingencies and a lot of associated legacy to that.

Thumbnail 160

These days we actually need AI and automation tools to help our customers accelerate their journey to the cloud. There is no question that we are at an inflection point with some of the things that were announced this morning at the keynote, and those are some of the things that we are going to speak about today. Things are clearly different, and this is not a future trend. These are things that are happening now. As a matter of fact, we're seeing that 15 percent of day-to-day technical and business decisions by the year 2028 will be made autonomously with AI. That 15 percent means machine learning, machine reasoning through complex situations, building a plan for that complex situation, often a migration and modernization, and then executing against that plan and delivering the goods, doing the actual migration, and learning from that experience.

15 percent of those decisions will be made by 2028, and that is clearly changing and accelerating everything I spoke about. As a matter of fact, 82 percent of organizations today are just beginning to leverage AI for the things that you do. Those early adopters will be the ones who make a difference, to take share, to have competitive advantage, and achieve some of the visions to victory that you would like to achieve within your career and within your organization. So we're going to speak about a lot of these things today about gaining a significant advantage for how we do this.

Thumbnail 250

The Dual Challenge: Moving to Cloud While Adopting AI Capabilities

One of the things I've learned working with a large cloud provider is that customers want both. They do want to move their systems to the cloud, whether it's dev, test, staging, or even production systems to the cloud, but they also want to take advantage of AI, not only to help them achieve that migration, but once the estate is on AWS, to actually realize and expand their portfolio with AI capabilities on AWS. We're going to speak about how we do both of that, introduce purpose-built workloads, AWS Transform, Oscar is going to speak about that, and also some no-code solutions, Quick Suite, and how customers are actually realizing this experience again with some of the things that Ashish is doing within his organization.

Now a lot of people say that this AI thing is the beginning of the end. I actually believe, to quote Winston Churchill, that we're at the end of the beginning. There is no question things are going to change with some of the technologies that were introduced over the last six months from an AI perspective, but this industry is changing very quickly and so is the way I approach my customers from a migration and modernization perspective.

Thumbnail 330

Digital transformation is more than just technology. It involves people, process, technology, and governance. I'm not trying to make this more complex, but how many of you have actually done coding with AI? Just a show of hands so I can get an understanding of how many of you are accelerating your coding. It looks like about 40 to 50% of the room is doing that. Do you notice that when you use something like Kiro or other AI tools that it does optimizations for the code? You start with prototyping and then you might get these one-line procedure calls that are calling the same procedures that you wrote years ago, and it's called optimization. That happens when you start to prototype, and that happens when you start to build and then turn that into a production process.

I know this because I have 16,000 lines of my own code in an application that I'm working on, but imagine if you had hundreds of millions of lines of code. What I'm discovering and what I'm seeing with my customers as developers is that you are becoming two types of skill set people. One, writing perfect prompts to build the applications and source code that you want. And two, when you have to go back and realize and debug some of those situations, you've got to look at the source code and do that. So your skills are actually expanding. That's what I mean about the end of the beginning because things are changing the way AI is introducing some of this.

Thumbnail 430

But if you do this properly, we believe you can experience a 10x faster migration to AWS with significant cost savings. So let me be very clear on the expectations that we're going to share with you today. This isn't theoretical. We're going to show you real-world practical examples using the tools, customer experiences, and most importantly, the end results that you and our customers are experiencing with these tools. There's no question it is definitely changing.

Transforming Migration Delivery: From Manual Processes to Automated Workflows

So we want you to understand how agentic AI can reduce your timelines, how it can be introduced as a technology into your daily processes, and how you can use this to leverage migration and modernization all the way to adopting new systems and new capabilities within your organization. In my role and my team's role in our organizations, I speak to at least one customer per day, at least. Sometimes it's two, sometimes it's three. This week it'll probably be 15. The trends you notice are kind of similar. What about my staffing? How do I introduce this? What about my legacy? What about the complexity? Where do I start? What do I do? These are common questions, common situations, and common experiences that customers are on the threshold of trying to understand.

How do I train my people? How do I drive scale? Ashish will talk about a hackathon that he's introduced where thousands of agents are being created within the organization, and a fluency in how to introduce agentic AI is being built by example. I know personally within our team we did quite a few manual processes for running a program to help customers off of legacy systems. I had some of my best migration subject matter experts in the world, my SMEs, working on these projects, doing things like the level of effort, the LOE, writing the customer proposal, doing the as-built state analysis. I don't staff for that anymore. Now what we do is we automate the assessment, we automate the inventory, we come up with a wave plan. We even build out the VPC, the virtual private cloud, on AWS, and we automate that experience.
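
To make the wave-planning idea above concrete, here is a minimal, hypothetical sketch of how a discovered server inventory could be grouped into migration waves by dependency. The inventory format and the "dependencies move first" rule are illustrative assumptions, not the actual Expert Services tooling.

```python
# Minimal sketch: group a discovered server inventory into migration waves.
# The inventory fields and the wave rule are illustrative assumptions.
from collections import defaultdict

inventory = [
    {"server": "web-01", "depends_on": ["app-01"]},
    {"server": "app-01", "depends_on": ["db-01"]},
    {"server": "db-01", "depends_on": []},
    {"server": "batch-01", "depends_on": ["db-01"]},
]

def plan_waves(servers):
    """Assign each server a wave number so its dependencies move in earlier waves."""
    deps = {s["server"]: set(s["depends_on"]) for s in servers}
    wave_of = {}
    while len(wave_of) < len(deps):
        progressed = False
        for name, needed in deps.items():
            if name not in wave_of and all(d in wave_of for d in needed):
                wave_of[name] = 1 + max((wave_of[d] for d in needed), default=0)
                progressed = True
        if not progressed:  # circular dependencies: move whatever is left together
            last = max(wave_of.values(), default=0) + 1
            for name in deps:
                wave_of.setdefault(name, last)
    waves = defaultdict(list)
    for name, wave in sorted(wave_of.items()):
        waves[wave].append(name)
    return dict(waves)

print(plan_waves(inventory))  # {1: ['db-01'], 2: ['app-01', 'batch-01'], 3: ['web-01']}
```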

Thumbnail 470

What we're finding is that our staffing models, the way we change our delivery, our proposals, and our implementation timelines to customers, are coming in at a 10x faster experience. Ashish will talk about this a little bit later on in terms of their environment, representative perhaps of many of you. A mainframe migration takes anywhere from 10 to 15 years, with hundreds of millions of lines of code, up to 500 million in some cases. Or consider an enterprise migration of your infrastructure: Windows, SQL Server (Oscar's going to demonstrate some of that), Linux servers, how do you move that? Well, often we're seeing 24 to 36 months to do these migrations. Again, if we can reduce that timeline and eliminate some of those costs and manual experiences, that's what customers are asking for and that's what we're doing our best to deliver.

Thumbnail 610

We talked about some of this last year in terms of the human process and human experience to build consistency in how we deliver these migration and modernization projects. Basically, in the old days, we would come up with a level of effort, build one pod that was demonstrable and representative of the task ahead to migrate a specific wave, come up with a wave plan for those servers, and deliver the migration. Then we would staff up more pods to drive scale. That worked, and we got some of the largest migrations in the world completed in a couple of years.

What's changing now is that we introduced hyper-automation last year, and now on top of that, we're doing generative AI automation with many of the tools Matt spoke about this morning, ranging from purpose-built tools like AWS Transform, which is meant for migration and modernization. It's a tool that does that transformation for you, assesses your infrastructure, builds out the network, builds out the wave plan, and you literally just click and move servers.

Then there are no-code experiences like Amazon Quick Suite, where you can take all of that data and ask it difficult questions. How am I doing? How's my budget? How am I tracking? How many more people do I need? It synthesizes that from a no-code perspective. Or you can write code and use something like Kiro to completely re-platform your legacy applications that maybe cannot be migrated. It's that entire suite of tools that we employ as professional services to move our customers to AWS.

Thumbnail 710

Framework for Success: Assess, Execute, and Scale with Purpose-Built Tools

I'm not trying to make this more complex than it is. We showed some fantastic demonstrations this morning of showing how easy it is. This is the approach pattern that we take, and it's not for everyone. Many customers already have the business analysis of why they're doing this. They're going to save money, lower costs, eliminate data centers. They don't need the business plan for that. What about the overall framework and the governance of roles and responsibilities and maybe extending some of that work to partner teams or staff teams? What about training people?

Remember some of those issues I talked about when I'm debugging 16,000 lines of code or 150 million lines of code for a customer. Measure twice and cut once is what I say. Think carefully about how you're going to introduce AI and unleash it onto your code base because your developers, your business analysts, your business users often will have to look through two lenses. One is writing the perfect prompts to do the work of the migration and modernization, and two is perhaps stepping back into the code to do some debugging and write the code. That's a very common experience.

The point is if you have a framework to address these challenges and approach it not too aggressively and not too slowly, this is the experience we're seeing: a three times faster delivery speed, cutting our implementation costs by fifty percent, and at the same time introducing AI into the organization to enable you so we can get out of the way from a delivery perspective and enable you to leverage your platform for what it's meant to do: drive new market share, serve customers better, and lower your costs on an AWS cloud platform leveraging the capabilities of AI.

Thumbnail 820

Let's look at some of the practical approaches and mechanisms and processes that we use. Everyone's asking us to help them accelerate their migration by at least fifty percent. That's a conversation I have every day. Improve workplace productivity. What I'm doing right now is mapping products and services to this. I think AWS Transform does the work on the left-hand side with purpose-built AI capabilities, Amazon Quick Suite does some of these automated business workflows literally with no code. You can drag and drop all the documents you wish and ask difficult questions and come up with business and technical decisions all the way to doing rapid prototyping with something like Kiro on the right-hand side to rebuild or reimagine and recreate applications in new and more productive ways that are easier to debug because there's a lot less code going on.

This is the process: assess, execute, and scale. What we like to do with our customers is spend some time on the assessment. How difficult is this going to be? What's the challenge? What's the forecast cost model, both from an implementation and an operational perspective? Then we staff up a few people to do the work. If you do not have the skills, we're here to help you achieve and realize some of those goals.

Thumbnail 890

For the remainder of our time together, we're going to demonstrate some of these and talk about some real-world customer examples. I'll invite my colleague Oscar Rodriguez on stage to go through many of these examples with you.

Thumbnail 920

AWS Transform: Purpose-Built Agents for Large-Scale Migration and Modernization

Thank you, Brad. Everyone, great to see you. Welcome to re:Invent and thank you for joining us today. I know there are multiple options available, so we're super happy to have you here. Let me start by introducing myself. My name is Oscar Rodriguez. I'm a director at AWS and I lead the global practice for Expert Services. Brad is part of my team. The role of my team is to help some of the largest customers across the globe. If you have a big problem, we have the people to support those problems. If you are going through a digital transformation, we have people that can support you, so feel free to reach out to us anytime and we are here to support you.

Thumbnail 960

The first part I want to focus on is how AWS is helping customers to do real transformations. There are many tools available, but let me start with AWS Transform. AWS Transform is the first set of agents that we have developed over the last few years to help customers do large migrations and modernizations. We have been doing migrations for almost twenty years. This is nothing new. What we have done is bring together all the different practices and lessons learned regarding how we can do code analysis and large implementations. We learned from customers that there are four main areas where customers need help. These are the main capabilities of AWS Transform.

Number one is VMware migration. Many of the customers that are reaching out to us want to exit from VMware and move elsewhere. So we have you covered. The second one is full stack Windows modernization. We have many customers who are trapped in the fact that they may have licenses for Microsoft .NET, SQL Server, and Windows. So we now have those capabilities available today for Windows modernization. The third one is mainframe, which Ashish is going to cover later on. We have a massive number of customers looking for how they can modernize the mainframe and move fast without going through a traditional approach. AWS Transform provides those capabilities.

The last one, which you probably saw from the keynote, is that we now have the ability to do custom transformations. Think about a scenario where you have a COBOL program that was written twenty-five years ago and now you want to move to Java or .NET. Or maybe you have an application in Java and now you might want to move to .NET. We're giving customers the flexibility and options to do that. But what is really the most important part of AWS Transform is that it can help you compress those timelines. Migration projects that some companies are thinking about as five to seven years can be brought down significantly with AWS Transform.

Thumbnail 1090

From SQL Server to Mainframe: Real-World Transformation with AWS Transform

Let's walk through one scenario regarding Windows. This is a typical customer that may have a Microsoft stack with Microsoft .NET, SQL Server, and Windows running on a virtual machine. With AWS Transform, we can help you in number one, migrate from Microsoft .NET to .NET Core open source, which is a new capability available this week. The second part, which is also one of the most exciting parts, is migrating from SQL Server to what we call Amazon Aurora PostgreSQL. How many of you have done a large data migration and are really worried about how you can manage this? So we have you covered today with Transform.

Thumbnail 1130

Let's go deeper on the data. If you can raise your hand, how many of you have done a data migration in the room? Probably about fifty percent of the room. And as you know, data migration is the hardest part of any transformation, whether in the cloud or outside of the cloud. What we have done with SQL Server to Aurora PostgreSQL is really simplify and make life easier for any company. There are four steps. Number one, we can help you with the analysis. You may have a massive number of databases you want to analyze. You want to understand what the dependencies are and who is using those applications. This is part of the discovery and analysis that we have available.

Second, we can help you analyze the ER schemas. We can also help you move the data itself, which is the hardest part, moving data from a massive number of systems all the way up to Aurora. The third part, which is also super important, is the procedures. Many of the customers we're working with have massive logic running in stored procedures. So how do you ensure that that logic is now available in the new system? And last but not least is the validation and the deployment.
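
As a rough illustration of the data-movement step only, the sketch below uses AWS Database Migration Service through boto3 to run a full-load-plus-CDC task from SQL Server into Aurora PostgreSQL. The ARNs, region, and table mapping are placeholders; AWS Transform orchestrates the surrounding analysis, schema conversion, stored-procedure conversion, and validation that this single call does not cover.

```python
# Minimal sketch of the data-movement step (SQL Server -> Aurora PostgreSQL)
# using AWS DMS via boto3. All ARNs and identifiers are placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sales-schema",
        "object-locator": {"schema-name": "sales", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-aurora-pg",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SQLSERVER",  # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:AURORAPG",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",   # placeholder
    MigrationType="full-load-and-cdc",  # full load plus ongoing change capture
    TableMappings=json.dumps(table_mappings),
)
task_arn = task["ReplicationTask"]["ReplicationTaskArn"]

# Wait for the task to finish creating, then start the replication.
dms.get_waiter("replication_task_ready").wait(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)
dms.start_replication_task(
    ReplicationTaskArn=task_arn,
    StartReplicationTaskType="start-replication",
)
```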

Thumbnail 1230

In my experience, sometimes we focus only on deployment and forget about ensuring that the data has the right integrity and security. We can help you with all of this end to end. But it's not just the Windows and SQL Server part. Over the last six months, we have seen very high demand in terms of mainframe modernizations from customers, and the reason why is because customers are looking for a systematic approach. They don't want just a piece of the puzzle.

What we offer is that we can help you with code analysis. The second part, which is one of my favorite capabilities, is that we can do reverse engineering. We look at the code and, based on that, create automated documentation for you. Think about some of the rules that maybe were written ten or twenty years ago. We have you covered. Number three is code decomposition, of which we're going to see an example in a second. Think about if you have a large monolithic application. How can you break it down into business domains or even microservices?

Migration planning is something many customers reach out to us about. Oscar and Brad, what is the best way to migrate? How many waves do we want to have as part of the planning? Then the last part, which is also important, is making a decision. Should I refactor, meaning I'm going to take the legacy logic and redesign my application, or reimagine? Reimagine is a new concept that is happening right now. It's a major factor where companies have the option to really redesign the model and redesign the logic moving forward. The last part is test and deploy.
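
To show what code decomposition means in practice, here is a small, hypothetical sketch: treat legacy programs as nodes in a call graph and cluster them into candidate business domains. The program names and edges are invented, and AWS Transform's actual analysis goes far beyond this, but the idea of cutting a monolith along low-coupling seams is the same.

```python
# Minimal sketch of code decomposition: cluster a call graph of legacy programs
# into candidate business domains for SME review. Names and edges are made up.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

calls = [
    ("ACCT-OPEN", "CUST-LOOKUP"), ("ACCT-CLOSE", "CUST-LOOKUP"),
    ("PAY-BATCH", "PAY-POST"), ("PAY-POST", "LEDGER-WRITE"),
    ("ACCT-OPEN", "LEDGER-WRITE"),
]

graph = nx.Graph()
graph.add_edges_from(calls)

# Each detected community is a candidate domain boundary to confirm with SMEs.
for i, domain in enumerate(greedy_modularity_communities(graph), start=1):
    print(f"Candidate domain {i}: {sorted(domain)}")
```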

Thumbnail 1320

We have multiple case studies. This is a real case study from Toyota Motors. When they reached out to us, what was the challenge? They were running a large mainframe system. They didn't have enough mainframe skills. Some of the people that wrote the application are no longer with the company. The massive number of lines of code was very impressive, more than forty million lines of code. They looked at different options and came up with the conclusion that AWS Transform was the only option that could do the code analysis and help them understand the complexity. It also helped them do the decomposition.

What was the main result? They were able to compress the timelines. Looking at the numbers, we're seeing more than fifty percent faster times than a traditional approach. The second part, which is a major benefit, is that they were able to identify what the logic was, what the technical debt is, what the things are that we need to modernize, and what the things are that we need to throw away. You can look at the quote from Brian Corsa, who is the CTO. He literally said AWS Transform has done what many said was impossible. They looked at different options. Something that I highly recommend for you is to give it a shot. If you're going through a transformation, you will be very surprised with the results.

Thumbnail 1400

Thumbnail 1410

Amazon QuickSuite: No-Code Business Workflow Automation and Decision Intelligence

Digital transformation is not just migrations and modernizations. Something that we also see is that customers are looking for ways to improve productivity. To that end, we recently released Amazon QuickSuite, or you can also call it Amazon Quick, as you heard this morning from Matt Garman. A major feature we have available is the ability to bring data into one single place and make decisions in real time. Probably some of you have heard about the customer 360 concept in CRMs, bringing all data into one place. So what is the difference? The difference is you can bring data that maybe is running in Excel, data that is in Salesforce, data that is available in other systems. Once you bring this data through QuickSuite, you can start asking questions and getting responses in a matter of seconds versus hours.

The second part is something that I use personally. With QuickSuite, you also think about it like your coach and mentor. How many times are you trying to make a decision and you want to see what are the options, what are the pros and cons? QuickSuite can be also your mentor. That's something that I use on a day to day basis to help me make decisions. The last part is it's not just to get insights, but also to trigger some actions. How many times have you been in a situation where you're making a decision and you have to trigger a workflow? Maybe the workflow needs to fix a problem. Maybe it needs to send an email to someone. Right now, there's a way to start doing some custom automations.

The second scenario I want to cover is the automation of business workflows. I want to show you a real example we are building for a customer. Think about this customer—they are a large bank looking for ways to automate document validation as part of a long process. First, let's see what data we need. We need the driver's license, so we're going to drag and drop the data. We're also going to do the same for a formal application. This is the process for this demo—we simply drag and drop. As soon as we click start, the system is going to kick off a massive workflow.

Thumbnail 1520

Thumbnail 1550

Thumbnail 1560

Thumbnail 1570

Thumbnail 1580

The system will start parsing the data, identify what data is missing, and apply some of the rules. There are multiple agents that happen in parallel. Once the data is ready, one of the last steps that will happen is the actual validation. The validation is not just a simple comparison of data. The system behind this applies business rules, but it also identifies whether there is any potential fraud—whether this driver's license is real or if it is a fake license. All of this will happen, and we're going to see in just a few seconds. The validation is done and the system tells us here are the discrepancies, here are the matches.
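
The demo itself is no-code, but the validation logic it describes can be pictured with a small sketch like the one below: compare fields extracted upstream from the driver's license against the application form, flag discrepancies, and route anything suspicious to a human. The field names, rules, and the expiry check are assumptions for illustration, not the Quick Suite implementation.

```python
# Minimal sketch of the validation step only: compare extracted license fields
# against the application and flag discrepancies. Fields and rules are invented.
from datetime import date

license_fields = {"name": "JANE DOE", "dob": "1990-04-02", "license_expiry": "2024-11-30"}
application_fields = {"name": "Jane Doe", "dob": "1990-04-02"}

def validate(license_doc, application):
    findings = {"matches": [], "discrepancies": [], "fraud_flags": []}
    for field in ("name", "dob"):
        if license_doc.get(field, "").strip().lower() == application.get(field, "").strip().lower():
            findings["matches"].append(field)
        else:
            findings["discrepancies"].append(field)
    # Simple business rule: an expired license is a flag for human review.
    if date.fromisoformat(license_doc["license_expiry"]) < date.today():
        findings["fraud_flags"].append("license_expired")
    findings["route_to_human"] = bool(findings["discrepancies"] or findings["fraud_flags"])
    return findings

print(validate(license_fields, application_fields))
```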

Thumbnail 1600

Something that surprised me when I saw this working is that right now we're talking about only one person. The same process can be applicable for hundreds, thousands, or millions of records of personnel running in parallel. The second part is that the system has the intelligence to keep learning. So it's not just a typical workflow. As the data continues, the system will continue learning as we go. The last part, which is also super important, is where the human is. There's a human in the loop at this point. The person probably has options—they can say I need to request further information, or they may decide to approve the application. Regardless, the human continues in the loop, and that's where your people can bring the highest value.

Thumbnail 1650

Thumbnail 1660

Kiro: The Agentic AI IDE Revolutionizing Software Development Life Cycle

We cover different use cases, but one of the areas that is really moving fast is how companies are transforming their environments using agentic AI for the software development life cycle. Kiro is something I want to talk about. How many of you have used Kiro so far? About thirty percent. So my suggestion is I would love for everyone in this room to use Kiro by the end of this week and give us your feedback. As of today, and probably the numbers have changed after the keynote, we have almost three hundred thousand subscribers in just a few months. Three hundred thousand subscribers.

So what is Kiro? You may say, Oscar, there are many tools in the market. Absolutely, probably like hundreds of thousands of tools available. Kiro is the first agentic AI IDE that is fully connected with AWS. That's number one. Number two, when you look at the software development life cycle, you have to go through requirements, design, and implementation. Kiro is fully embedded as part of this. Let me give you an example. With one of our customers, they were trying to build an application but didn't have the requirements or the design. So what we did was get in a room, gather the requirements, and after the session, in less than a few hours, we were able to create the requirements document and a technical design for the customer using Kiro. The story didn't end there. The customer said, well, anyone can create a document. Can you show me the app? One day later, we showed the prototype to the customer, and they were impressed.

Kiro is the second capability that provides fully embedded functionality, and you'll see more coming up. But what are the main three areas where Kiro can help you? Number one is building a new application. It could be a prototype, but you can take that prototype all the way to production. Most importantly, you may say, well, I'm not a technical person. Kiro has the ability to connect with you using natural language. I have seen many people who are not technical also ramping up with Kiro and building applications. The second one is if you already have an application; you may say, well, I already spent six months or a year building an app.

But I don't know if the app is ready. What is the technical debt? So you can bring Kiro in to really refine and reassess your applications. One of the other customers that I'm working with shared their code with us, and it was really hard for us to understand the logic. So what we used Kiro for was to create auto documentation and also simplify the code by reducing the number of functions. But the last part, which is something I'm going to hand over to Ashish soon, is combining AWS Transform and Kiro.

Thumbnail 1880

As I described, AWS Transform helps you with a massive VMware or mainframe migration, for example. It can help you automate a lot of things, but there are things that may also need a human in the loop. You need someone who can help you finalize the process. So combining AWS Transform and Kiro will be the best way to go. But let's walk through a demo. We have hundreds of demos. This is a demo that we built for another bank. In this case, what I'm asking is: I want to create a banking app, I want the application to be in Spanish, and I want this look and feel. I'm just going to give a link. This is the title sheet. So let's walk through this together.

Thumbnail 1890

Thumbnail 1900

Thumbnail 1910

Thumbnail 1920

At this point, I'm just saying, "Kiro, can you create a banking app with some validation?" That's all I'm telling it. So Kiro is kicking off multiple agents. Those agents are going to look at what the rules are and what I need to have. It's going to look at the application that I referred to, and then in a few seconds, you're going to see the app, the prototype, up and running in Kiro. Obviously, this is just a prototype. We have also implemented something very similar for a customer in just 3 weeks. The customer's original plan was to get the application in 3 months. So we went from 3 months to 3 weeks using Kiro. Again, there's high potential, and this is just the beginning.

Thumbnail 1950

Probably also, as you saw in the keynote today, there are other capabilities that are available right now or coming up with Kiro, such as the vault, security, and so on. So you can bring those in as well. But before I hand over to Ashish, I want to close with something very important. AWS has an end-to-end portfolio for generative AI. Today we talked about Kiro, Quick Suite, and AWS Transform. Those are some of the options for you. But we also have different customers coming to me and saying, "Oscar, we want to build our own. We don't want to use Kiro. We want to use AgentCore. We want to use some of the open-source tools using EKS." Absolutely. That's the second tier that we have available.

But we also have customers that want to build their own models. So at that level, from an infrastructure point of view, we have you covered through SageMaker and some of the major announcements that came up this week. My guidance to you as you start planning this week is to look at this map and see where you want to focus. But also, once you're back at your company, take this map and see where you are, what the right fit is, and what the tools are that you want to use moving forward. So with that, I'm going to hand over now to Ashish Shekhar, Senior Vice President at Danske Bank. He heads and manages all the technology platforms across the bank, and he's going to show us how Danske Bank has been disrupting the industry and innovating using generative AI and agentic AI.

Thumbnail 2050

Danske Bank's Journey: Achieving 6x Faster Cloud Migration at Scale

With that, Ashish, thank you for joining us. Thank you, Oscar. It's an absolute pleasure to be here today with all of you. So Oscar and Brad talked quite a bit about the technologies that are available here and now. But we're in the privileged position that we've been co-innovating on some of these technologies for some time together with AWS. So we can share a little bit about what this actually looks like in action when you put all of this together.

Thumbnail 2070

A little bit about us. I'm here representing Danske Bank. We're a 150-year-old institution with lots of amazing legacy behind us, but that comes with its own challenges. We've also been building things for close to that amount of time. We have a large presence with over 3 million customers and 21,000 staff, so you can imagine that the challenges we work with are going to be very similar to many of your organizations. We're not terribly big, but we're also not terribly small. We're highly regulated. So our starting point in this journey was not dramatically different from what many of you will experience.

Thumbnail 2130

In terms of infrastructure, we're managing roughly 18,000 VMs, about 80,000 containers, and 25 petabytes of data. That's where we started about a year and a half ago.

Why should you listen to our story? Well, we've been able to achieve some remarkable things. We're not perfect, but whatever we're doing seems to be working, so there are several things we could potentially take away from here. We've been orchestrating a cloud migration to get our estate over into AWS so we can leverage all the capabilities available on AWS. We've been able to do it six times faster than the next fastest organization at this scale.

We're one of the leaders in agentic AI usage across the bank. We've been climbing quite steadily, up a number of places in the use of agentic AI across some global indexes, so that seems to be working as well. We're now moving from experimentation to strategic implementation. We started our journey with agentic AI a few years ago, like many people did with the advent of the original GPTs. But now we're moving from what used to be experiments, with 200 to 300 different experiments going on across the organization, to concentrating our effort and making some very large investments into very strategic places.

Why are we doing that? Because we now believe there is no more proof required for this technology to be adopted. We believe the technology is there, it's driving lots of value, so we have to take bigger steps, more meaningful steps, and put meaningful operations behind some critical elements. We're doing that now. Recently, we ran a few hackathons, and with the technological pace we've been able to achieve and where we are now in our journey, we were able to run a hackathon in which about 50 to 60 teams across the organization participated. We built over 600 agents in the space of two days. A bunch of those we actually promised we would take to production, and some of them are now running in production from the last few hackathons.

Thumbnail 2250

How do you get to this point? As I mentioned, we started with a very complex technology estate that we were running. But we've been able to do some really cool things. Over the last 15 months, we have moved 850 of our applications out of just over 1,000 into the cloud. That is a staggering pace. When we say we move something, we really mean it. Our definition of done includes moving all non-production environments, all production environments, all firewall rules, all CNAME record updates, all DNS repointing, and cleaning up all the on-premises assets. When we move, there's nothing more to do.

At pace, during the April-May timeframe, in one single month, we ended up moving 150 applications from zero all the way through to full production in the cloud. Typically, this is done with hundreds of people formed into 10, 20, or 30 squads. We were able to do it with two squads in operation. We've talked a lot about this, and we were able to do this only because we have automated nearly 100 percent of the migration.

We've automated everything from notifications going out to our application owners to change records being created to DNS rules getting updated to DNS TTLs being lowered to replication starting. We've automated working out when things should start, taking a full snapshot of VMs before we move them, all of it. It's reduced our manual effort by a dramatic amount and helped us move at pace. It's very doable, and the time is now to be able to do it.
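
Two of the automated steps mentioned above, lowering a DNS TTL ahead of repointing and snapshotting before a move, could look roughly like the boto3 sketch below. The zone ID, record name, and volume ID are placeholders, and this is only a fragment of a pipeline that automates thousands of such steps; on-premises VM snapshots would go through the hypervisor's own API rather than EC2.

```python
# Minimal sketch of two pre-cutover steps: lower a DNS TTL and take a snapshot.
# Zone ID, record name, and volume ID are placeholders.
import boto3

route53 = boto3.client("route53")
ec2 = boto3.client("ec2", region_name="eu-west-1")

# 1) Lower the TTL so the later repoint propagates quickly.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder hosted zone
    ChangeBatch={
        "Comment": "Pre-cutover: lower TTL for app endpoint",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.internal.",
                "Type": "CNAME",
                "TTL": 60,  # short TTL during the migration window
                "ResourceRecords": [{"Value": "app.onprem.example.internal."}],
            },
        }],
    },
)

# 2) Take a point-in-time snapshot of a volume before cutover.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume
    Description="Pre-cutover snapshot for app-01",
)
```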

Thumbnail 2370

We're not stopping there. We're progressing even further. We're now applying lots of agentic AI to accelerate this even further. I can only wish that the capabilities available to us now were available to us about 12 months ago. One small example of this: we recently, probably about two to three months ago, put a new agent into production assisting our migrations. At scale, we're moving hundreds of applications in a month with only two squads. You can imagine that those two squads are not moving the applications themselves. They're only responsible for handling the ones that fail. There will always be some failures.

It's only by exception that humans and the squads actually get involved. Now the reason they get involved when something fails is that it's obviously out of the norm, and these things can be very complex when we've ended up automating 2,000 to 4,000 steps in the migration process. When something fails, it's a complex problem. It's not usually a straightforward thing that we could have picked up, because otherwise we would have automated it anyway.

So what happens now is when something does fail, rather than the case getting handed over to a human team, it actually gets handed over to an agent. The agent first does all the deciphering and working out of what the problem may have been, does a whole bunch of troubleshooting itself, and then comes up with the most plausible solution for how to remediate the problem. What we're seeing is that our already low error rates are cut down by a further 5%. Our triage times for a problem are cut down by about 70 to 80%. It's absolutely incredible what this can do, and this is the base of where we are. This can only get better from here.
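
The triage agent itself is Danske Bank's, but the core idea can be sketched as follows: hand the failed step's context to a model and ask for the most plausible root cause and remediation, with a human still approving the fix. The model ID, prompt, and failure payload below are illustrative assumptions, not the bank's implementation.

```python
# Minimal sketch of the triage idea: send a failed step's context to a model
# and ask for a plausible remediation. Model ID and payload are placeholders.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

failure = {
    "application": "app-01",
    "failed_step": "start-replication",
    "error": "Disk component shared with app-02 is locked by another job",
    "recent_log_lines": ["replication agent exited with code 17"],
}

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model choice
    messages=[{
        "role": "user",
        "content": [{
            "text": "A migration step failed. Analyse the context below and "
                    "propose the most plausible root cause and remediation, "
                    "noting any dependency on other in-flight migrations.\n"
                    + json.dumps(failure, indent=2)
        }],
    }],
)

# The proposal is surfaced to the squad; a human still approves the remediation.
print(response["output"]["message"]["content"][0]["text"])
```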

There was a very interesting specific incident that happened that I can talk a little bit about. There were two applications that failed. We had two migration teams, each looking at one application independently, and they were trying to figure out what was going on. When they started looking at the output of the agent, they realized that the agent was talking about something that they didn't know. The agent had identified that between these two applications that had failed, there was a common disk component that for some reason had not been picked up earlier in the process.

The agent recommended that squad one stop their work, squad two continue and fix the problem, and the other one would just be fixed automatically. And that's exactly what happened. Now, what would have otherwise happened is we would have rolled back both applications, done the analysis over a couple of days, figured out that this was the problem, then sorted it out, and come back. This prevented that whole process from happening. This is now operating in production and being used every day with our teams.

We're expanding this to all incidents being covered in the bank. Any time there's an incident that happens, it will get transferred over to an agent with all of the information that we've captured, being able to do meaningful analysis, come back with results, and cut all of this time that we're spending analyzing things and recovering things by a large amount. We also need to do a tremendous amount of modernization. It's great that we've been able to do the move, but what about the modernization? We'll talk about that in a second and also underpin everything we do with AI. The sheer magnitude of what we can do with these things is incredible.

Thumbnail 2590

All of the things that I just talked about are available in a technical blog. There are multiple things we've done. I'm going to stop here for a second. If you want to pull your phone out and get the QR code, you will be able to reference these things. They're published in blogs on AWS already. The amazing part about all of this is that these are systems that we built and that existed about 12 months ago, but now this has been made even more powerful with the AWS Transform pieces that Oscar was talking about.

Thumbnail 2630

Mainframe Modernization Breakthrough: From COBOL to Modern Languages in Production

So I can only wish that had been available to us then; it would have been amazing. But it is available to you now, so you should be able to do this even faster than we have, and I can't wait to see where this goes. But here's the thing. Now we've talked a lot about the distributed estate. But most organizations like ours have this big behemoth mainframe, right? And it's quite a lot more prevalent in large organizations than we care to admit. So what about that? It's amazing that we can move all of these applications, but the core data of the bank and some of the core applications of the bank sit in this thing that has been serving us really well for a good 30 to 40 years.

But now, is it really where we should be running everything that we have running now? We've increased latency on things that are running on AWS with different technology sets, so we're having to maintain not only a core set of skill sets on the mainframe but also on the cloud. So there is that thing, and yes, being able to move things faster is really good, but what about modernization or even from the mainframe? If we start to move things, I mean the problem has been solved before, but what does that actually look like now and what does that mean?

Thumbnail 2700

So it's important to see why we want to optimize our mainframe estate. Why don't we just keep running everything the way it is? Well, it will not be a surprise to anyone. There's a massive shrinking pool of skill sets and capability in this space now. We have amazing amounts of Java developers, .NET developers in the bank, Go developers, you name it. But mainframe development is a shrinking pool and shrinking skill set, very hard to work with. We're having to maintain multiple skill sets across the bank, and we have growing knowledge gaps.

Legacy system constraints are everywhere. We have business logic buried in PL/1 and COBOL code, and the people who understand it are leaving the bank now. Things have been running amazingly. One of the best things about the mainframe is backwards compatibility, but that is also the thing that holds us back. We have a number of components and pieces of software running on our mainframe that are a good 30 to 40 years old. They run amazingly, but nobody understands them. That's a hard problem to solve and it's getting worse by the day.

We have lots of data in the mainframe setup that is further away from all of our distributed application estate, which is not a place we want to be. Innovation is happening everywhere. We're building all of these generative AI applications and leveraging AWS technology to do all of that. But not on the mainframe. We have different legacy tool sets and legacy code there. There are things that we could do, but all of that requires modernization and the ability to move. So the imperative to do something about it is very much there.

We started with a few core principles in mind some time ago. Number one, the problem has been solved. People have been moving applications out of the mainframe to elsewhere. So what's new about this? The core issue with the way this was done previously, for a very long period of time, is a rules-based setup with very rigidly defined targets. And the amount of human in the loop required at the end of a rules-based conversion is actually very high. This is a high-effort problem, and that translates to the 10 to 15 years that Brad and Oscar were talking about. It's simply not time we have.

15 years later, the world would be a completely different place. We would have gone through three corporate cycles. Being able to stick through that is an incredibly hard problem and also one that is very hard to invest into if you only see results 10 to 15 years later. That's a big problem. So we wanted to do it differently. We wanted to do it not the way it has been done before. Second, innovation with agentic AI was the obvious route. A lot of what Oscar talked about is capability that is available to all of us, so it made sense for us to use that.

How do we use that to do it differently? Third, we're trying to do something that has not been done in the way we're trying to do it before, so proving value early was extremely important. We can't be in a situation where in our organizations we're trying to build something for 12 months and only then you can see a piece of value come out. That's too hard. Finally, it had to be a solution that scales well across the entire organization. It's not enough to be able to hand crank a pilot or a POC out and then hope that it will scale.

Thumbnail 2800

So we took two relatively complex applications. Now, obviously these are not core banking systems, but these are also not POCs. These are not small. These are applications that underpin some core parts of our business. It was not enough for us to take something relatively small and convert it, because that would not mean anything. One of these applications supports our mortgage lending operations in the Nordics, and the other one supports a direct debit system. Some things are pretty common about these applications. Both of them were written 20 to 30 years ago. They've been evolved a little bit, but very little. One was written in COBOL, one was written in PL/1.

The other common thing about this is very limited skill set and SME knowledge available in the bank to be able to work with them. But some things were quite different. Traditional approaches on converting these applications and moving them over rely on a fixed target. Most converters go into Java, or rather sort of JOBOL. It's not even really Java. And that is not a space that we want to be. We don't want to move from COBOL to JOBOL. We also don't want to move into another skill set.

Thumbnail 2910

The area that owns the fee application operates mostly in a .NET skill set. We don't want to introduce an application that is full blown Java, which means we would have to operate with another skill set in the same team. So we wanted a target from COBOL to .NET. The other one, on the other hand, they already operate in Java. So it's a PL/1 to Java conversion.

It's very targeted. It's not enough to convert to a static target, because it has to match the organizational setup that we have. Otherwise, we simply increase cost and lock ourselves into another legacy. That is what we set out to do.

Thumbnail 3030

Now, how do you do it? There are a few different approaches that can be taken. There are obviously refactor setups and re-platform setups, but the hardest of them all is when you go from complete legacy into a fully modern space. We said if we're going to try and do this, the proof has to be for the hardest ecosystem, and that is what we went for.

We went for going from COBOL on these two applications, working together and using the AWS Transform ecosystem to extract business rules with high accuracy and very low human overhead. We wanted to use a little bit of human in the loop just to confirm things, but this process had to be low overhead because otherwise we can't block our entire organization from doing what they're doing now. Once we've done that, and it actually works extremely well—to our surprise with the technology available to us now—the accuracy rates are really high.

Second, we take that and pump it through all of the work that is going on in the Kiro space. We do spec and development and build a brand new application that is going straight into a fully modernized space. One into a Java target, one into a .NET target. Now this is a great concept, but the great thing is we have been able to prove it and we put both of these applications into production two weeks ago. So it is now a very proven method that can be used to move forward.
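
One way to picture how the extracted business rules survive the rebuild is as executable parity checks: each rule carries SME-confirmed examples that the modernized implementation must reproduce. The rule, examples, and fee logic below are invented for illustration, and the real rebuilt code lives in the Java or .NET targets rather than in Python.

```python
# Minimal sketch: carry an extracted business rule forward as a parity test.
# The rule and its examples are invented; SMEs would confirm the real ones.
from decimal import Decimal

extracted_rule = {
    "id": "FEE-017",
    "description": "Late payment fee is 2% of balance, capped at 50.00",
    "examples": [
        {"balance": "1000.00", "expected_fee": "20.00"},
        {"balance": "4000.00", "expected_fee": "50.00"},
    ],
}

def rebuilt_fee(balance: Decimal) -> Decimal:
    """Stand-in for the re-implementation of FEE-017 in the modernized code base."""
    return min(balance * Decimal("0.02"), Decimal("50.00"))

def check_parity(rule, implementation):
    """Replay the SME-confirmed examples against the rebuilt implementation."""
    for case in rule["examples"]:
        got = implementation(Decimal(case["balance"]))
        assert got == Decimal(case["expected_fee"]), (rule["id"], case, got)
    return f"{rule['id']}: {len(rule['examples'])} example(s) passed"

print(check_parity(extracted_rule, rebuilt_fee))
```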

Thumbnail 3120

What are the advantages of doing it this way? We lower the effort on our SMEs. We improve their productivity. Remember that this is a very low available skill set of SMEs. We don't want to be spending tons of their time just trying to revalidate everything that happens because they've got to run a business as well.

We increase developer efficiency by a very large proportion. Preserving logic with high accuracy is what takes time. Not getting the logic right and having to redo it is what elongates the time and increases effort, which translates into hundreds of millions of dollars being spent on this problem. Finally, we're highly regulated, so we need to make sure that we meet the regulations. You can pump all of these things as steers into the build of the application via Kiro and it just works extremely well, and it will only get better from here.

Thumbnail 3170

So what does all of that get us? Sixty-five percent lower error rates compared to traditional approaches. Not even compared to human directly, but compared to traditional machine approaches, sixty-five percent lower. Imagine the amount of money that translates into.

Mainframe applications are notorious for being monoliths and heavily interconnected. We lower the interdependency through this approach when we rebuild them by seventy-five percent, hopefully increasing this even further. Reverse engineering, that is what takes all the time, the energy, the money, the effort, lowered by eighty percent, translating into further SME input being lowered, right? Human effort being significantly lowered.

For engineering, building out the application from the business rules is four times faster. Four times faster. That translates literally from ten to fifteen years down to potentially four to five or five to seven, depending on your estate. We obviously run a very large estate, but the mass effects of these things are transformative overall. Migration speed is sixty percent faster. And we're just getting started. This can get significantly better from where it is now.

Thumbnail 3250

Key Lessons and Call to Action: Education, Early Value, and Taking Calculated Risks

A few key points and lessons learned are very important in this transformation journey for us. Educate, educate, educate. Number one: without education of the entire skill set, without education of people, the adoption would be really low. Demystify and make this real for people. Put this in the hands of people, get them to try it, do the hackathons, get them to build things. Put the tools in front of people and let them see it for themselves. The evidence gathered directly is the most beneficial for all humans, and I can't stress that enough.

Give them paths to production. When you run a hackathon, being able to give people paths to production changes the game completely. People get a lot more excited being able to actually do that. It's super important. Prove value early. When you try and do things that have not been done before, it can seem like something that is really far away. Avoid that. Prove it early. Go for some of the approaches in being able to pilot, being able to put things into production, but don't stop too early.

Proof of concepts are not enough. Pilots are not enough. Actually going through production and proving value is super important. Finally, if you're going to try and do things that have not been done before, take some risks. Some risks are okay to take, and they change the game. They really help change the shape of how technological evolution is helping power our own future and helping us maintain our lead in digital. Thank you so much.

Thumbnail 3350

Thank you Ashish. Thank you Oscar. So just to summarize what we talked about today, I opened with some of the statistics of what some of the ambitions are, what some of the expectations are, and what some of the trends are. Oscar spoke about some real-world services, products, solutions, and capabilities and the framework that we use to help our customers accelerate their migration to the cloud, and then you saw Ashish with some real-world practical examples of how this is done.

I remember I flew to meet Ashish a couple of years ago when this thing was just getting started, and it is truly impressive to see how far we've come in such a short amount of time. This is hard, but it's not impossible. Getting started is probably the first step to take, and that means training your people and getting fluent with AI. What I really like about this entire story is it's a journey to the cloud and it's leveraging AI and it's introducing AI into your organization in a safe and sensible way, not about staffing up people and finding the right experts. It's using capabilities that are available to you today.

Thumbnail 3420

These stats are real. I think we demonstrated some of that today. Ashish is proving some of that today, and we certainly are just getting started with many of you. So look, we have a lot of migration topics occurring throughout the week. We encourage you to build a really good map for things that are important to you, and if you have any questions, we encourage you to connect with all of us. Take a quick snapshot of this one, and the next slide thanks you for your time and shows how to get hold of us.

Thumbnail 3460

These are our QR codes that will take you to our LinkedIn profiles, and then we can just get started with the conversation. Any questions or any follow-ups you have, we'll be taking questions outside of the room today. It's a pretty quick turn here because there's another session coming in. So we sincerely thank you for your time, thank the speakers, and have a great week here in Las Vegas at re:Invent. Thank you so much.


This article is entirely auto-generated using Amazon Bedrock.
