
Kazuya


AWS re:Invent 2025 - Accelerate Legacy Modernization with Slalom & AWS (MAM216)

🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.

Note: A comprehensive list of re:Invent 2025 transcribed articles is available in this Spreadsheet!

Overview

📖 AWS re:Invent 2025 - Accelerate Legacy Modernization with Slalom & AWS (MAM216)

In this video, Alex Hatcher from Slalom presents their Zero Legacy campaign for modernizing legacy systems, particularly mainframes. He explains how legacy monolithic systems block innovation and discusses rising costs and hiring challenges. The presentation highlights how generative AI tools like Amazon Q, Bedrock, and Kendra enable reverse engineering of COBOL code within hours, achieving 30% productivity gains in SDLC processes. A key case study features La-Z-Boy's successful Z Series mainframe retirement, where they transformed COBOL to Python, migrated DB2 to Aurora PostgreSQL, and replaced JCL with Step Functions. The project uncovered previously unknown bugs and empowered the organization to modernize business processes, demonstrating the transformative power of cloud migration.


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Zero Legacy Campaign: Leveraging Generative AI to Overcome Mainframe Modernization Challenges

Hello everyone, I'm Alex Hatcher with Slalom, and I'm part of our AWS cloud capability. Today, we're going to be talking about our approach to accelerating legacy modernization.

Thumbnail 20

Throughout this talk, we'll cover our Zero Legacy campaign. We're seeing that modernization is of strategic importance for our clients and for us as well. We'll discuss what we're seeing in the landscape with our clients today, the challenges they're facing, and the opportunities that come from those challenges. We're also excited to share a recent customer story with La-Z-Boy, where we were able to move them to Zero Legacy by retiring their Z Series mainframe.

Zero Legacy—what exactly is legacy? To us, a legacy system is anything that you have to change or look at, or that is a barrier to modernization or innovating in the way that you want. I think many of us have been in that situation where the product team wants to innovate, wants to see something new, and we see an opportunity for our business. But when it comes to a technical discussion, we start to see that we have to touch system X, and if we do that, everybody starts to pull back and that idea or creativity dies right there. That's really what our Zero Legacy campaign is about—solving that problem. We want to move you into a position where you're able to think creatively and be imaginative without the drag of legacy holding you back.

Thumbnail 70

A key qualifier we see for many legacy systems is that they're typically monolithic and stateful. In contrast, modern architectures are microservice-based, non-monolithic, and stateless. There's a need for us to make a paradigm shift between the two. Mainframe workloads specifically—z/OS and AS/400—are very stateful, very monolithic, and centralized. That makes them a key candidate for modernization and part of our strategic initiative.

What we're hearing from our clients in this space is that their costs are rising and it's difficult to hire. I'll give you an example of one of our clients in the government space. The original developers of their mainframe hired and trained their children to maintain the system after they retired. It's a very real scenario where Dad is on the beach getting a phone call from his daughter saying, "Hey, I'm in the COBOL code. I see you in the comments. What were you thinking here when we modified this or created that?" That tells me our clients are being extremely creative about how they're solving the skills problem with these legacy technologies.

We see the problem is there, but why now? We see pressure from cost and timing overall. What's moving us forward is AI, which is probably no surprise to anybody. What we're seeing in the AI space is that these mainframes and AS/400 legacy systems have their code and data in what amounts to a black box compared to a cloud environment where we're heavily monitoring at the microservice level. Generative AI has been extremely effective at allowing us to reverse engineer those systems in a way we never have before.

What we used to do is have numerous solution architects descend on a large code base, describe it, then synthesize higher understandings of that code base, build diagrams, and get feedback on that to validate what they're finding. This is extremely labor intensive and costly. What generative AI has opened up for us is the ability to provision a Kendra index and a Bedrock knowledge base. We can zip up our code, load it into an S3 bucket, index it with Kendra, and then access our knowledge base through an MCP client like Q Developer or the Q CLI. We're able to start asking questions about our code to understand it within an hour, which can support a process where we can start documenting it with very little overhead. The ability to use generative AI to assist in the re-engineering process is really driving us forward.
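The ingestion flow described above can be sketched in a few lines of boto3. This is a minimal illustration, not the production pipeline from the talk: the bucket name, S3 key, index ID, and IAM role are all placeholders you would adapt to your environment, and whether Kendra crawls the archive directly or the files are unzipped first depends on your data source setup.

```python
def s3_data_source_config(bucket: str, prefix: str) -> dict:
    """Build the S3 configuration block a Kendra data source expects."""
    return {
        "S3Configuration": {
            "BucketName": bucket,
            "InclusionPrefixes": [prefix],
        }
    }

def ingest_codebase(zip_path: str, bucket: str, index_id: str, role_arn: str) -> str:
    """Upload the zipped code base to S3 and register it as a Kendra data source.

    Requires AWS credentials; all identifiers here are illustrative.
    """
    import boto3  # AWS SDK; imported lazily so the config helper above runs without it

    s3 = boto3.client("s3")
    kendra = boto3.client("kendra")
    key = "legacy-code/codebase.zip"
    s3.upload_file(zip_path, bucket, key)
    resp = kendra.create_data_source(
        Name="legacy-codebase",
        IndexId=index_id,
        Type="S3",
        RoleArn=role_arn,
        Configuration=s3_data_source_config(bucket, "legacy-code/"),
    )
    return resp["Id"]
```

Once the index is populated, an MCP-connected client such as Q Developer can query it conversationally, which is the "ask questions about our code within an hour" step the speaker describes.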

Thumbnail 390

We're using generative AI in our modernization in a few different ways. We've already touched on the reverse engineering process and how we're able to open up these legacy systems and describe them and understand them in a way that we really never have been before. We're also seeing productivity gains from using generative AI to augment our traditional SDLC process. Imagine I'm using my preferred coding agent, which is definitely going to be Q or the Q CLI. Now I'm able to not only use the default model, which we've all seen is a great productivity increase, but we're able to bring the context of that legacy code into that process, where we can augment our traditional SDLC with tremendous productivity gains—usually about 30 percent, and sometimes more depending on the problem space.

Now let's look at our forward engineering process. We're able to describe these systems and extract that knowledge from them. We've also seen really great advancements with spec-driven development toolkits like Kiro and the Kiro CLI, with that announcement that just came out, where there's opportunity now for us to lean into a future of spec-driven development supported by what we're able to extract in the reverse engineering process. Through generative AI, we can get a high-level domain explanation of what the system does, but we're also able to decompose those domains and define the execution paths of the functions or subroutines that need to be executed to achieve a task from end to end.
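The decomposition step above can be illustrated with a toy call graph. Assuming a call map has already been extracted from COBOL PERFORM statements (the paragraph names below are invented), enumerating every end-to-end path through it gives exactly the execution paths the speaker describes feeding into a spec:

```python
def execution_paths(call_map: dict, entry: str, path=None):
    """Yield each root-to-leaf path through the paragraph call graph."""
    path = (path or []) + [entry]
    callees = call_map.get(entry, [])
    if not callees:          # terminal paragraph: a complete end-to-end path
        yield path
    for callee in callees:
        yield from execution_paths(call_map, callee, path)

# Hypothetical call map for a batch job's paragraphs
CALLS = {
    "MAIN-PROCESS": ["READ-ORDERS", "WRITE-REPORT"],
    "READ-ORDERS": ["VALIDATE-ORDER"],
    "VALIDATE-ORDER": [],
    "WRITE-REPORT": [],
}

paths = list(execution_paths(CALLS, "MAIN-PROCESS"))
# Each path is a candidate unit to document and trace back to the original code
```

Each enumerated path is a single chain of work with a defined start and end, which is what makes the mapping to a microservice boundary in the next paragraph feel natural.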

Thumbnail 570

If we think about it, that's very similar to what we think of as a microservice: a single execution path of a single function. With that context fully documented as part of our process, we start to move into territory where it makes sense—and is completely possible—to do more of a spec-driven development process that brings all of that information into context, with full traceability from the specification back to the original code so we can validate as we go. So far we've talked a lot about technology, and technology is extremely important, but the one thing we do really well at Slalom is connect with our clients and meet them where they're at. It's not just about technology. We've seen from our clients in some cases that they may say their mainframe or AS/400 is owned by IT, and so IT owns the trajectory of that service.

Or maybe a product organization says they want to do something totally different than what they're doing, and they peel off and go in a completely different direction. We have not really seen that be successful. What's necessary is that we get everybody aligned because, at the end of the day, that legacy system is at the core and it's been there for more than likely a few decades. Business processes have built up around that, and business processes have built up around those business processes. We can't simply rip that out without causing massive business disruption, and that's usually where we see a lot of that failure.

Thumbnail 660

In our approach, we want to pull together all of our stakeholders, build alignment across our client organization, and then based on that, we're able to define a strategy that works for everybody. That strategy is focused on the key outcomes that collectively we all want to meet.

La-Z-Boy's Mainframe Retirement: A Success Story in Cloud Migration and Business Transformation

This is our La-Z-Boy story, and we're extremely excited about this. When we started with La-Z-Boy and talked to them originally in a half-day workshop, they mentioned that they had a mainframe. We said, "Let's talk more about that," and they were resistant. It was like, "Hey, we've tried to modernize this. We've tried to work with this a few different times. We've gone through a few different passes. We've got a team that's working on it and they're slowly chipping away at it." But we said, "You know, let's at least have the conversation and talk about what's possible."

As we went through, we realized there was real opportunity here for transformation. Bringing AWS expertise along with the advances in generative AI, which they had not incorporated into their process at that time, opened up some doors and enabled us to do an AWS-funded assessment of what they currently had. First, we got the code and used Amazon Q to fully understand it. We were able to map the dependencies to the third-party external systems surrounding the core batch workloads they were working on, so we mapped out the full ecosystem.

From that, we were able to identify key stakeholders for each part of the organization, engage them, and get them involved. We got everyone together and said, "Hey, is this something that we want to do? We can show you a path. Do we want to make this real?" We moved forward into a mobilize phase where we used Q to accelerate our infrastructure deployment. We used Q to start building out Step Functions state machines to replace the JCL for the jobs. We transformed the COBOL into Python and deployed those as ECS tasks. We modernized their DB2 environment to a modern Aurora PostgreSQL environment.
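The JCL-to-Step-Functions mapping can be sketched as a simple transformation: each step in a JCL job becomes a Task state that runs an ECS task, chained in order. This is an illustrative sketch, not La-Z-Boy's actual definitions; the step names and task definition ARN below are invented, and a real migration would add retries, error handling, and per-step parameters.

```python
def jcl_steps_to_state_machine(steps: list, task_arn: str) -> dict:
    """Build an Amazon States Language definition chaining one ECS Task per JCL step."""
    states = {}
    for i, step in enumerate(steps):
        state = {
            "Type": "Task",
            # Step Functions' synchronous ECS integration: wait for the task to finish
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Parameters": {"TaskDefinition": task_arn},
        }
        if i == len(steps) - 1:
            state["End"] = True
        else:
            state["Next"] = steps[i + 1]
        states[step] = state
    return {"StartAt": steps[0], "States": states}

# Hypothetical three-step batch job, in the order the JCL ran its steps
definition = jcl_steps_to_state_machine(
    ["EXTRACT", "TRANSFORM", "LOAD"],
    "arn:aws:ecs:us-east-1:123456789012:task-definition/batch-job:1",
)
```

The resulting dict can be serialized to JSON and passed to `create_state_machine`, giving the same sequential semantics the JCL job had while gaining Step Functions' visibility and retry machinery.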

During the mobilize phase, we used a subset of the workloads just to prove out that process, and then we went to full migrate and modernized the full set of workloads. At that point we were production ready, but we had also built a testing process to ensure functional equivalence before we went into a full go-live state with decommissioning. We essentially set up a process that could compare the data in DB2 and PostgreSQL as those jobs ran in synchronization. We saw some bugs that we fixed, but one of the most interesting things about this process is that we found bugs they never knew existed, which required us to rethink some business processes. That was new for the organization.
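The equivalence check described above boils down to diffing the rows each system produced after the same job ran on both sides. Here is a minimal sketch of that comparison; in practice the rows would come from DB2 and Aurora PostgreSQL queries, but in-memory tuples stand in for them here, keyed by primary key:

```python
def diff_rows(legacy_rows, modern_rows, key_index: int = 0):
    """Compare two row sets keyed by primary key.

    Returns (missing, extra, mismatched): keys absent from the modern side,
    keys only on the modern side, and keys present in both but with
    different column values.
    """
    legacy = {r[key_index]: r for r in legacy_rows}
    modern = {r[key_index]: r for r in modern_rows}
    missing = sorted(k for k in legacy if k not in modern)
    extra = sorted(k for k in modern if k not in legacy)
    mismatched = sorted(k for k in legacy if k in modern and legacy[k] != modern[k])
    return missing, extra, mismatched

# Invented sample: the same job's output from the legacy and modern databases
missing, extra, mismatched = diff_rows(
    [(1, "open"), (2, "closed"), (3, "open")],   # DB2 side
    [(1, "open"), (2, "open")],                  # PostgreSQL side
)
# Any nonempty result is a divergence to investigate before go-live
```

Running this after each synchronized job execution is what surfaces divergences, including the kind of previously unknown bugs the speaker says forced a rethink of some business processes.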

It's the first time they realized, "Hey, we're in this new world, this cloud world, where instead of stepping away from the table and saying this is just something we can't work with, we can not only change this code, we can change this process right now. What else can we change? What other processes can we modernize?" To us, that's the power of Zero Legacy. I really appreciate you all—thank you so much for coming to my talk. I'm with Slalom; we're at booth 625. I'd love to have you join me over there to continue the conversation. Thank you so much.


This article is entirely auto-generated using Amazon Bedrock.
