
Kazuya


AWS re:Invent 2025 - Best practices for performing custom code transformation with agentic AI-MAM344

🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.

Overview

📖 AWS re:Invent 2025 - Best practices for performing custom code transformation with agentic AI-MAM344

In this video, Morgan Lunt and Venu Vasudevan from AWS introduce AWS Transform Custom, a CLI-based agentic AI tool for automated code transformations at scale. They demonstrate how it addresses technical debt through eight out-of-the-box transformations (Java, Python, Node runtime upgrades, AWS SDK migrations) and custom transformation capabilities. The session includes live demos showing Python 3.8 to 3.13 Lambda upgrades, creating custom transformations for proprietary libraries (Fluxo to Tickety migration), comprehensive code documentation generation for legacy codebases like Doom, and batch processing across multiple repositories. Key features highlighted include transformation definition registries, continual learning through knowledge items, interactive and non-interactive execution modes, and integration with MCP servers. Pricing is usage-based at $0.035 per agent minute, with typical transformations costing $1-5 per repository.


; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Introduction: The Challenge of Technical Debt at Scale

Hey everyone, thanks for coming. Welcome to Best Practices for Performing Custom Code Transformations with Agentic AI. My name is Morgan Lunt. I'm a Senior Product Manager on the AWS Transform team, and I'm here with Venu Vasudevan. Hey, I'm Venu Vasudevan. I'm a Senior Specialist Solutions Architect with the NextGen Developer Experience team. Hopefully this will be helpful.

Thumbnail 30

So what we're going to talk about today is the challenge of technical debt, which I'm sure you're all very familiar with. I'm not going to spend too much time on that. We'll cover some scaled remediation options—things that you may be aware of and some things that may be new to you. We'll discuss some best practices that we've discovered internally at AWS when managing tech debt at significant scale, and then the thing you're all here for is the live code demo where we'll actually show you the tools that we use.

Thumbnail 50

The problem with tech debt is that it's big, it costs a lot of money, it takes a lot of time, and it slows down your innovation. It prevents you from being able to deliver new features to customers. It prevents you from being able to hire developers that really understand your stack and can contribute to it properly. It may bring security vulnerabilities and all sorts of bad things. So generally speaking, tech debt is bad.

Thumbnail 70

There are different kinds of technical debt you can have. Security and compliance risks are especially important if you're in a compliance-oriented industry like healthcare or finance, but even for any company, a security vulnerability can cause massive PR issues and monetary loss—all sorts of things that you probably don't want to happen. There's also maintenance burden. Sometimes it can get to a point where you're just doing patching and addressing vulnerabilities and CVEs, and you're not actually spending time developing things that your customers want.

Performance limitations are another concern. Things can get slow. You can have large applications built up over 10, 20, 30, or 40 years that were well designed when initially built, but after adding layers of complexity for many years, they become unwieldy. This can also cost you money in terms of being more expensive to run on bigger VMs. Finally, there's strategic misalignment. A lot of customers I've spoken with work at companies that were acquired by other companies, and acquired companies may use a different tech stack than the rest of the companies in the portfolio. So there's often a desire within corporations to have some sort of standardization and internal alignment for the tech stacks they use, the libraries they use, the APIs they use, and the vendors they use, each of which brings its own APIs.

Thumbnail 150

Real-World Customer Cases: From Air Canada to MongoDB

These are some customers that I've worked with pretty closely over this last year. Air Canada has a big issue with thousands and thousands—I think tens of thousands—of Lambda functions running on deprecated versions of Node and deprecated versions of Python. Due to compliance and internal standards, they desperately need to upgrade these, and it takes a lot of time and a lot of manpower to go through every single Lambda individually and try to upgrade it to the latest version of Node or Python.

Twitch, our friends inside of Amazon, have a lot of Go code. They're kind of unique in Amazon. Not a lot of people use Go, but Twitch does. A lot of it's written using AWS SDK v1, and they want to migrate to AWS SDK v2. They have over 900 applications, and they originally projected 11 developer years of manual effort to do that.

QAD is a major ERP vendor. They make software for manufacturers that make products that we all probably use. They have a piece of software that's evolved over 20-plus years, and their customers have built customizations on top of this ERP software to do things particular to that customer. It's super useful if you're a customer and want to augment this off-the-shelf ERP with your own functionality, but over time this builds up complexity and makes it very challenging for QAD, the vendor themselves, to publish new versions of their ERP because everyone's got kind of a one-off special little thing. So they have this challenge of moving these customer-built customizations onto their common customization framework on their newer platform versions.

Then MongoDB and their customers have a lot of Java code. A lot of this Java code is on Java 8 or Java 11, and they really want to get it up to Java 21 or later versions. That's easier said than done.

Thumbnail 260

Limitations of Existing Code Modernization Approaches

There are a lot of existing approaches for doing code modernization at scale in an automated fashion. There's good old rule-based automation, which is stuff like ast-grep or manipulating abstract syntax trees using something like OpenRewrite. These are good tools if you can get them to do what you want. They'll do it deterministically and they'll do it reliably, but they can be kind of brittle and inflexible. If you introduce a new piece of code that isn't exactly following the same conventions as the ones you wrote your transformation recipe with, it's probably not going to work.

It takes some specialized expertise to write these sorts of recipes. You have to know how to navigate an abstract syntax tree. You're kind of like writing code to manipulate code and dealing in this conceptual higher level, and it's a little tricky if you haven't done it before. Because of that, there's a bit of a higher startup cost in using approaches like that.

General-purpose coding AI has gotten really good. The Copilots and the Claude models of the world are awesome. You should all use them. I love them. I use them a lot. But one tricky part about doing large-scale code modernization efforts with them is that there's no easy way right now to enforce consistency across a bunch of developers trying to do the same task using those same tools.

I might ask Copilot to upgrade a Java runtime and it'll probably do it, but it may follow different conventions and different patterns than another developer does it, and we're going to end up with disparate code that may not follow the standards and use the right libraries that my organization wants to use. These tools tend to be designed to be run with a human in the loop, with a human sitting at the keyboard driving it, watching what the agent does, poking it this direction or that. That's fantastic if you're doing something once, not so fantastic if you're doing it 100 times or 1000 times.

That human cost, even though it's a significant acceleration from someone sitting down and doing this work themselves with no assistance, is still real. There's still significant cost of driving a coding agent on the keyboard. Another issue with doing it this way is that teams' learnings remain siloed. So if over the course of an API migration you discover there's an incompatibility between two particular versions and you had to do a workaround, you learn that and discover that. But some other team on the other side of the country or the other side of the world that's doing the same migration in your organization is going to go out and discover that themselves. They're going to run into the same issue, and unless you're really good about updating wikis and looking at those wikis, that learning is lost. You're going to run into the same problems several times, and that's not as efficient as it can be.

Introducing AWS Transform Custom: A Scalable Solution

So what do we need? We need a scalable code transformation mechanism that has a low barrier to entry. It's easy to get started with and doesn't take a lot of specialized knowledge. It's designed for automation. It can be scripted together and set up to run in a headless fashion without human interaction. It's something that you can teach once and run it everywhere, and it gets better every time so you don't lose these learnings that individual developers are coming across.

Thumbnail 450

Enter AWS Transform Custom. This is my baby. I'm the product manager on this product. We've been working on it for the last year. This was released as generally available on Monday. We're all very excited and proud of it, and that's what we're going to be showing you in a minute. AWS Transform Custom is intended to discover and learn from you any sort of code pattern, API upgrade, runtime upgrade, framework upgrade. I have people doing things as wild as converting VBA script embedded in an Excel spreadsheet to maintainable Python code, and it's working okay for that, or rewriting bash scripts to Rye, which is a Rust-based scripting language.

The more I learn about how customers are using this, the more amazed I become because there is some weird stuff out there. Because you can teach it your own custom code transformation, you can do your weird stuff with it, which is cool. And because you are teaching the agent, if it's doing it wrong, you can tell it to do it differently and to do it better until it does the right thing consistently enough that you feel comfortable running it at scale. It continuously learns and gets better every time. We'll show you how that works in a moment, and it's easy to run at scale.

The main components of AWS Transform Custom are our CLI that you'll see in a moment, which you interact with using natural language. So you tell it, "Hey, I want to do X from Y to Z," and it's going to say, "Okay, cool. What exactly is X? Do you have some documentation on X? How do I go from Y to Z?" We've encoded all the best practices that we've discovered internally at AWS for doing scaled code modernization into the agent, so it's going to know the right kinds of questions to ask you. If you say you want to do an API upgrade, it's going to say, "Cool, do you have the Swagger documentation? Do you have the API documentation?" If you say you want to add a whole bunch of test cases, it's going to say, "Okay, can you give me an example of the kinds of style that you like your test cases to be written in?" Or if you wanted to emit metrics in a different way, it'll ask you for the schema. It shouldn't take a whole lot of domain expertise to build this transformation that you're going to want to apply at scale.

Thumbnail 510

Once you've built something that you're comfortable with and you're pretty happy with the performance, and you've tested it on a few repos, executing it at scale is really straightforward because it is a CLI. It's a CLI that can be driven by a human talking to an agent like you're all used to doing these days, or it can be driven by a machine. Our syntax is totally deterministic and machine-drivable. You put a bunch of commands and arguments in the command telling it to run this transformation on this repo following these conditions, and it'll just do it. So it's really easy to write a shell script to pull down your code from wherever it is—GitHub, GitLab, Bitbucket—transform it, and push it where it needs to go.
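As a rough sketch of the kind of wrapper script Morgan is describing — assuming the `atx custom exec` invocation and flags shown later in the demo, with purely illustrative repository and definition names — it might look something like this:

```bash
#!/usr/bin/env bash
# Illustrative wrapper: pull code, run a transformation, push a branch for review.
# The atx flags mirror the ones described later in the demo (-p path, -n definition,
# -x non-interactive, -t trust all tools); check the CLI help for the exact syntax.
set -euo pipefail

REPOS=(
  "git@github.com:example-org/service-a.git"   # placeholder repos
  "git@github.com:example-org/service-b.git"
)
DEFINITION="python-version-upgrade"             # hypothetical definition name

for repo in "${REPOS[@]}"; do
  dir=$(basename "$repo" .git)
  git clone "$repo" "$dir"
  atx custom exec -p "./$dir" -n "$DEFINITION" -x -t

  # The agent commits its changes to a local branch; push the most recent one
  # for code review (a heuristic -- your script may track the branch name directly).
  (
    cd "$dir"
    branch=$(git branch --sort=-committerdate --format='%(refname:short)' | head -n 1)
    git push origin "$branch"
  )
done
```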

How AWS Transform Custom Works: From Definition to Deployment

Send it for code review, or send it straight to prod. I probably wouldn't recommend that, but you do you. Continual learning happens both during execution of a transformation and when you run it interactively. During execution, the agent records notes about what it encountered, so if it goes down a really deep self-debug path trying to deal with some weird incompatibility it discovered, and then has to backtrack and try something else, it's going to remember that next time and take the shorter path.

Likewise, if you do an interactive transformation sitting there and watching the agent, which is an option you can do, and you see it going off the rails, you can tell it, "Hey, don't do that, do this," and it'll remember that and hopefully not do it next time. So how does the process work? The first thing you do is have some sort of expert. They could be on a centralized team. They can be someone that owns a particular service that a lot of other teams are consuming, someone who knows what they're doing about this transformation. They're going to sit down with the agent and describe what they want to do and go through that transformation definition building phase.

A transformation definition is a term we invented to represent the agent's understanding of a particular transformation. It's a combination of some markdown files, a RAG, and a database of learnings that the agent has encountered during its executions. This is something that we consider to be your IP. We don't look at it, we don't learn from it. If you want to sell it, if you want to keep it secret, if you want to open source it, that's none of my business. The transformation that you build is yours.

Someone is going to create an initial transformation definition. They're going to execute it a few times on some local repos and look at the output and tell the agent, "You screwed this up, you missed this edge case, you should do this differently," and it's going to update that transformation definition. Then you run it again and again, and hopefully after three or four times and not thirty or forty times, it's starting to get to the point where you're like, "OK, this is doing what I want for the most part. Cool."

At that point you're ready to share your transformation. We make that really easy because we have this notion of a transformation definition registry, which is just something within an AWS account. The transformation definition is just an ordinary AWS resource—we're not reinventing the wheel here with permissioning and resources—so it's the concepts you already know and are used to. If you give someone the right IAM permissions, you can make the transformation definition available to other users in your organization. They can pull it down, they can execute it on their code, they can propose changes back to it, and everybody is happy.

Thumbnail 760

Once you have the application-owning teams somewhat happy with the quality, you can start running it at scale. You can do that in the form of a push campaign if you like, where some central team pulls down a whole bunch of code from a whole bunch of teams, executes a transformation, and sends the resultant code out to be reviewed by the application-owning team. Or you can do it pull-campaign style, where you tell the application teams, "Hey, you must blah the blah. There's a top-down mandate that you must blah the blah, but I'm going to give you this thing you can try running. Hopefully it does most of the job for you and makes your life a little easier." Humans verify, and the continual learning goes back into the transformation definition. Super cool, right?

Built-in Transformations and Benchmarking Results

We have been eating our own dog food as we do at Amazon. We've built a whole bunch of transformations that we thought were useful using this, and we have done some pretty comprehensive benchmarking on them using internal AWS code and some open source code. We have this little menu of transformations here. Notably Java, Python, and Node runtime version upgrades, AWS SDK version upgrades for those languages as well. We are confident in these, I'm putting my name on them, we believe they are pretty good, and we give them to you and you can just run them.

You don't have to create a new transformation at all if you're trying to do Java, Python, or Node runtime upgrades or AWS SDK upgrades. Just use these and they should get you most of the way there. If they don't, tell me and I need to fix it. Beyond that we have a couple of early access things that we've also released. We have a tech debt analysis slash documentation transformation. This one's pretty cool. It'll go through all of your code if you've got something really old and poorly documented and generate what Venu and I think is a pretty comprehensive documentation set of it, including entity relationships, sequence diagrams, lots of pretty images, full dependency trees and stuff.

Thumbnail 800

We try to capture every way you could think of to represent code without including the code itself. As a bonus, it gives you a little report that says, "Hey, you're using a super outdated version of this, and that dependency is vulnerable—you might want to check these things out." We also have an upgrade that we're working on with the Graviton team in AWS to convert Java code with dependencies on x86 libraries to run on Graviton. That's in early access and doing pretty well on benchmarks. We'll probably mainline that one pretty soon. These aren't something you have to set up yourself; they're built and validated by us. We have a pretty comprehensive benchmarking infrastructure to get the quality on these really high. Beyond that, you can build custom things.

You can build VBA to Python. You can build something crazy and unconventional if you really want to. I would recommend using it for API upgrades and language version upgrades. You can use it for language-to-language conversions, but I would caveat that it works best in a localized context. If you have a 500-million-line legacy .NET app and you want to reimplement it in Java, you're probably not going to have a great time. The models aren't quite that good yet. But if you have a whole bunch of relatively self-contained scripts that you want to convert from one language to another, it will work pretty well for something like that.

Best Practices: Planning, Piloting, and Integration

Best practices are what we're here for. That's the title of this thing. Start by planning and building a transformation. It's not very hard. Then just run it on a few things at small scale, and that's how you get a feel for the efficacy. That's how you get a feel for what it's going to cost. Pricing for this is entirely usage-based. It's based on agent minutes, which is an abstraction on top of our cost for providing the service. It's $0.035 per agent minute. A typical 1,000-line Lambda costs about 1 dollar. A 5,000-line Java runtime upgrade costs about 4 or 5 dollars. We intentionally priced this very low to make it up on scale. We want to incentivize you to create transformations that do what you want and unleash them at scale.
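For a rough sense of how the agent-minute pricing maps to those per-repo numbers (the ~30 minutes here is an implied figure, not something stated in the session):

```bash
# Back-of-the-envelope: at $0.035 per agent minute, a run of roughly 30 agent
# minutes lands right around the "about 1 dollar" quoted for a typical
# 1,000-line Lambda upgrade.
echo "0.035 * 30" | bc    # -> 1.050
```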

Thumbnail 940

Plan, refine, and pilot. I think you all get the picture there. Another best practice is to use your current workflow. There are a lot of code transformation products out there that ask you to onboard to a totally new way of doing things, and I think that's not ideal. We intentionally built this to be a CLI that has very minimal dependencies: a dependency on Node, a dependency on Git, and a network connection to AWS.

If you have a machine with a Linux environment (macOS and WSL also work) and those dependencies, you can run your transformation. People stick it in pipelines, people stick it in Docker containers, people stick it in AWS Batch jobs. People stick it on laptops in their basement. I don't care how you run it. Whatever works with your current way of doing code modernization and doing deployments—it's designed to be minimal and just slot in.

Thumbnail 1000

I would be remiss if I did not mention our AWS Transform web application. If you heard about AWS Transform last year, that's all there was to it. We didn't have the CLI. The CLI and custom transformations are the brand new thing this year. But there is integration between the CLI and the web app. If you are a program manager or campaign manager responsible for doing a bunch of code transformation at scale and you want a place to get pretty charts and graphs to show your boss, all of the status of the campaign that you're running in the CLI is automatically populated in the AWS Transform web app where you have pretty charts and graphs. You can see what repos have been transformed and which ones have been validated. Of course, you have the agentic chat there so you can ask questions about the data. You can say what proportion of them fit these criteria, how many got done in the last week, which ones had failures, whatever you want to ask.

Thumbnail 1050

Now the fun part. Thank you so much, Morgan. I have a question for you before I get started. A customer asked, can I do Fortran to Python? What's your answer for that? How big is your Fortran? That's the question, right? If it's hello world, any language to any language works, and beyond that the complexity varies. I don't want to over-promise to anybody. We have a customer using a weird, esoteric old language called Progress that I had never heard of before. We didn't know if the model was even going to understand it, and it totally understands it. They're putting 100,000-line repos in there and effectively making changes. Again, the best practice I mentioned is to pilot first. Just try it. It's cheap and easy to start and try. We made the installation as easy as possible. It's one curl command to pull down our CLI and start trying it out.

Thumbnail 1150

Live Demo: Out-of-the-Box Python Lambda Upgrade

Okay, let's now do some live coding. The way I've set this up is that Morgan talked about the built-in transformations as well as the custom transformations and doing it at scale, so let's see if we can do all of them. Let's start with the out-of-the-box transformations and see how they work. The first thing: this is a command-line interface (CLI) you can install on your laptop. I'm going to use a WSL environment on Windows; Mac and Linux are also supported. I've already installed it and I'm just going to start with my first command. Like Morgan mentioned, this can be run in interactive mode as well as non-interactive mode. Non-interactive is basically a bunch of commands where you say, hey, ATX, run my transformation on this repo, and AWS Transform executes them automatically.

In interactive mode, you can provide feedback and guide the transformation process. I'll demonstrate both modes now.

Thumbnail 1230

Thumbnail 1240

Thumbnail 1250

Thumbnail 1260

Thumbnail 1270

First, let me list the available transformation definitions. I'll use the command ATX custom definitions list to see all my definitions. I have built several custom transformations, but let's focus on the AWS managed ones. We have eight AWS managed transformations that are out of the box with zero setup, so you can get started immediately. These include AWS SDK v1 to v2 upgrades for Node.js, Python upgrades from Python 2 to Python 3, and version upgrades for both Lambda and non-Lambda functions. The same capabilities exist for Node.js as well. In early access, we have comprehensive code base analysis and Java version upgrades that support Maven and Gradle build systems, plus x86 to Graviton architecture migrations.

Thumbnail 1280

Thumbnail 1300

These are some of the user transformations I've built for demonstrating to customers and for my personal use. This is the registry where transformations are published. Once you publish a transformation, any user in your AWS account with access to the CLI and proper IAM permissions can view it, download it, and run it with their project. When I demo using the AWS internal registry, it shows approximately 350 transformations, which is great because many people internally are using it.

Thumbnail 1310

Thumbnail 1320

Let's start with the first example. I have a Lambda function here that's basically a to-do application. Let's run this transformation on this Lambda function. This use case demonstrates using an out-of-the-box transformation when you know what you're doing and have a bunch of Lambda functions to upgrade. You can run it with zero setup without creating your own transformation.

I'll use the command ATX custom exec, where exec means execute. The minus P flag specifies where my Lambda function is located—the file path on the local disk. The minus N flag specifies the Python version upgrade from the out-of-the-box list. The C flag indicates the build command that needs to be used. For this Python function, I'll use a no-op, but for Java you'd use something like mvn clean install. The configuration flag provides additional context, such as the validation command I need to use. For example, I can specify running pytest to ensure all unit tests pass, and you can provide additional context in free form for running this execution.

Since this transformation is generic and can upgrade from any Python version to any Python version, I'll specify upgrading to Python 3.13. The minus X flag indicates non-interactive mode, meaning the transformation will run automatically. The minus T flag means trust all tools, so in interactive mode you might need to trust tools to write files and read files. I'll say trust all my tools and execute it.
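Assembled from the flags just described, a non-interactive invocation might look roughly like this (flag spellings and the definition name are illustrative and not verified against the released CLI):

```bash
# Non-interactive run of the managed Python version upgrade, using the flags
# described above: -p repo path, -n definition name, -c build command, a
# free-form configuration string with validation instructions, -x for
# non-interactive mode, and -t to trust all tools. Exact syntax may differ.
atx custom exec \
  -p ./todo-lambda \
  -n python-version-upgrade \
  -c "no-op" \
  --configuration "Upgrade to Python 3.13. Validate by running pytest; all unit tests must pass." \
  -x -t
```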

Thumbnail 1430

Thumbnail 1440

Let's execute this command. The transformation has started. The first step creates a conversation log where you can track all the progress and monitor it. You can also resume these conversations if needed. If your network gets disconnected and you press Control+C, there's a conversation ID you can use to resume the conversation. That's a neat feature because sometimes things happen—internet goes down and you need to reconnect.

Thumbnail 1490

Let me show you the output from a dry run I performed earlier today for the same transformation. I ran it earlier in the day, and it created the plan, completed the changes, and converted my Python function from 3.8 to 3.13. Let me show you some of these changes, since this transformation might take anywhere between 10 and 15 minutes and I don't want to just stare at my screen.

Thumbnail 1530

Thumbnail 1540

Thumbnail 1550

We are working to make it a little quicker, but we figured it's better to be slow and right than fast and wrong, so we optimized for right first; next we'll optimize for fast. This is the same project, so it did a 3.8 to 3.13 migration. Basically, it did a runtime upgrade, and datetime usage was modernized for the new version. It also took the passing test cases from 17 to 32 and documented what it did. It picked up the interpreter performance improvements too. This is all built in—I didn't give it any instructions. And it did not change any business logic; there are no breaking changes. All of that is preserved.

Thumbnail 1560

Thumbnail 1580

Thumbnail 1590

It updated the README. These are the files changed—I just did a git diff. What it does is, once it makes the changes, it creates a local branch for you and commits all those changes. That means at any point in time you can go to a commit ID and roll back, or you can just compare, review, and then check it in or push your code changes. It's especially useful if the agent does the right thing for the first three quarters of the transformation and then goes off the rails. You're like, I don't want all that work to be wasted—you can just revert to a previous commit.
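Because the output is an ordinary local branch, the review-and-rollback workflow is plain git; for example (branch and commit names are placeholders, and this assumes main is the base branch):

```bash
# Review what the agent committed on its local branch.
git log --oneline -n 5            # the commits the agent created
git diff main...HEAD              # full diff against main

# Keep part of the work: roll the branch back to the last commit you're happy with.
git reset --hard <good-commit-id>

# Or discard the attempt entirely and return to your original branch.
git checkout main
```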

Thumbnail 1600

Thumbnail 1610

In some of these files, it changed all the timezone handling and the main Lambda function. It changed a few other functions and also requirements.txt. And it changed the template—this is a CloudFormation template I had, so it changed the deployment from 3.8 to 3.13. It also handled deprecated APIs, documentation updates, all those things. So to summarize: if you have a very similar use case, just go run this out-of-the-box transformation on your project.

Thumbnail 1690

Custom Transformation Demo: Fluxo to Tickety Migration

That's the first thing I wanted to show. Anything you want to add on the out-of-the-box side? No, I think that's good. Let's now come to the fun part, which is how you create your own custom transformation. Here I want to show an internal library that our internal team used. The same product was used for migrating an internal ticketing system. We had something called Fluxo, which is Amazon's internal ticketing system. It's very old and hard to maintain, so we wanted to change to something called Tickety, which is also an internal system but is more API-driven and more performance-oriented. That's why we wanted to move all our projects that use Fluxo over to the Tickety library. This is Java-based, but it's very specific to Amazon. The large language models do not have pre-trained data on these things. That's where the power comes in: you can feed in data or context from your organization to teach the agent how to do this stuff, and then it can do that migration at scale. If you've got some weird proprietary library that no one else in the world uses, that's what we're trying to show here.

Thumbnail 1710

Thumbnail 1720

The other thing I want to mention here is that this connects to MCP servers, which means you can pull in data from pretty much anything. I have this Fluxo-to-Tickety migration guide that we prepared and put into my GitHub repo. It's a private repo, so I'm asking ATX custom to pull that information in from my internal repositories and use it as part of creating the transformation definition. I'm going to do this in interactive mode. I'm just going to say ATX, which opens an interactive shell where you can interact with it. I'm going to create a definition based on the migration guide that I have, then review it, give feedback, then execute the transformation with it. The cool thing about this guide, by the way, is that it was written for human developers at Amazon and published to a wiki so they could go and do this work themselves. Someone already wrote this guide assuming developers were going to have to do it, so we're just giving that to the agent—no extra effort.

Thumbnail 1800

I'm just going to say, hey, create a new transformation. This is the interactive mode. It's going to ask you, hey, what do you want to do, right? Language upgrades, framework migration, library upgrades, all those things.

Thumbnail 1830

Thumbnail 1840

I want to mention internal library migration from Fluxo to Tickety. Again, the large language model does not know what Fluxo or Tickety is, right? That's where I'm going to give more information. First, it searches in the registry to see if somebody already created this definition or not. If not, it's going to create a new one, right? So it's asking me: what kind of application libraries does it have? What are the main functionalities? What programming language does it use? Do you have any migration guide, documentation, or example code that you can give me so I can learn from it, right?

Thumbnail 1860

Thumbnail 1870

Thumbnail 1880

Thumbnail 1890

I'm going to say okay, and it referred to this MD file using my GitHub MCP server. It takes a bit to connect and get the right information. Basically, it started using the MCP tool—again, after I trusted the tool. You can also just give it a local file path to a file. Yeah, you can download it and give it that, but I just wanted to show the power of being able to connect to your existing environments. Okay, so while this is running—I was talking to one of the customers here, and they had a use case for this: I have a prompt already written using Kiro for increasing my unit test coverage; can I do it on thousands of my repositories? Yeah, absolutely. That's where the power comes in. You already solved this automation problem once with your existing AI tools. Now you want to expand it or scale it to all other teams. You can definitely do that by creating your own custom transformation.

Thumbnail 2020

Thumbnail 2050

Thumbnail 2060

Another scenario I've heard from customers that I thought was pretty cool is making your code AI-ready. Say you want to write a CLAUDE.md or a Kiro markdown file for every repo under your ownership, and you want to follow certain conventions but incorporate aspects of the code itself. You can define how you want those agent guide files to look in Transform Custom and have it go write all of those files for all of your repos. One question was: what if you have multiple prompts that depend on each other and have to run sequentially, one after the other—is that just a sequential execution of multiple prompts? Kind of, but it's not exactly a sequential execution of prompts. There are several agents under the hood: a planning agent, an orchestration agent, and an execution agent. The first thing it does when you execute a transformation is look at the transformation definition, which itself has some steps and guidance. It looks at the code you're asking it to operate on and says, okay, given these instructions and this code, let me develop a plan for doing this execution on this particular piece of code. It will show that to you if you like, and you can approve it or tell it to make changes. Then it goes through those steps, and each of those is sort of like a prompt, but there are sub-steps. It's not just a single one-shot, because after each step the agent calls a validation agent. It makes sure: does this code still build? Does the action I performed in this step actually fulfill the original intent of this step based on the plan?

Thumbnail 2080

Thumbnail 2090

(An audience member asks how they would reuse an existing prompt—like the Kiro unit-test one—and how the validation fits in.) What I would do is take that prompt and give it to AWS Transform during the transformation creation process, and just tell it, here is a prompt I wrote for Kiro to do this, and it's going to say, cool, let me look at this prompt.

There should be aspects of it that are directly applicable, and it's just going to incorporate those into the transformation definition. There may be aspects it asks for clarification on, but it's going to take that raw information and put it in a structured form that it knows how to execute transformations with effectively. That's where the refine part also comes in. Once it generates and converts your prompt into its own definition, that's where you can review it, edit it, or give feedback saying, "Hey, I really don't like this validation step. Can you add more to this phase?" It should be able to accommodate that.

Thumbnail 2180

Thumbnail 2190

If I had three prompts and I needed to sequence them, that's something you'd need to experiment with. I would start with one and see how it creates it. If that's not what you wanted, maybe split it into multiple parts. And if it really has to stay as multiple prompts—again, as Morgan told us, it's basically a CLI—you can create three transformations and execute them in a batch. It should be able to do that as well. When in doubt, I like to tell people to break down problems as small as possible, as atomically as possible, because you'll get better results there. But for really simple things, you can mash them together and it'll probably work and might save you a little effort.
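A sketch of that "three transformations in a batch" idea — chaining separate, focused runs on the same repo from a script (the definition names here are hypothetical):

```bash
# Run several small, focused transformations back to back on the same repo.
# Definition names are hypothetical; -p/-n/-x/-t are the flags described earlier.
set -euo pipefail
repo=./my-service

for definition in add-unit-tests upgrade-logging-library emit-new-metrics; do
  atx custom exec -p "$repo" -n "$definition" -x -t
done
```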

Thumbnail 2200

Thumbnail 2210

Thumbnail 2240

Yeah, okay, cool. It actually created a transformation definition, so let's look at it. Okay, wow, it's very comprehensive. This is the transformation definition it created for the Fluxo-to-Tickety migration. Let me look at this. It has a clear objective: migrate code from my deprecated Fluxo ticketing library to the modern Tickety SDK. There's a summary of what exactly it's going to do, and then the entry criteria. Entry criteria is basically which code it should take on. Here it says: code imports or uses the Fluxo Java client. That's the entry criteria.

Thumbnail 2250

Entry criteria is really cool, by the way, because that's how the agent knows whether this transformation applies to this code. So if you point the agent at a whole bunch of random code that does not use Fluxo, it's going to read it a little bit and be like, "This doesn't apply, skipping." In the near future, we'll have an assessment capability where you can point the agent at a whole bunch of code and it'll tell you before even transforming anything which transformations apply to this code, and you can kind of stage it out how you want to do it.

Thumbnail 2290

Thumbnail 2300

Thumbnail 2310

Thumbnail 2320

The implementation steps are basically step-by-step instructions from the migration guide on how to do this transformation. Step one is update the build dependencies, and it broke that down for these projects. Client initialization is basically how to initialize the new client, and it covered Python, TypeScript, and all those things as well. The get-ticket operation and all the APIs are covered too. So these are the implementation steps. This is where you, as a domain expert, can come in and say, "Hey, it looks good," or "It doesn't look good," and decide whether to change it.

Thumbnail 2330

Okay, the last thing is the validation criteria. The validation or exit criteria is the goal the agent wants to achieve and how you're going to verify it's satisfied. This is where, again, we need to be very thoughtful about how we define the validation criteria. It could be as simple as "my build and unit tests pass," or it could be very complex. One of our customers is actually doing an Angular to React migration, and they use a Playwright MCP server to compare visually whether the Angular and React versions look the same. So it could be as complex as that. Or it could be deploying into an AWS environment and checking whether it works. That's where your expertise comes in: how do you want to validate whether this migration is successful?
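In the simple "build and unit tests pass" case, the exit criteria ultimately come down to commands the agent can run and check; a minimal sketch of such a validation script (assuming a Maven project like the Java examples in the talk) might be:

```bash
#!/usr/bin/env bash
# Minimal "is the migration done?" check: build plus unit tests.
# A non-zero exit code signals "not done yet, keep self-debugging";
# the Playwright or deploy-and-compare variants would slot in the same way.
set -euo pipefail

mvn -q clean verify     # build the project and run its test suite
echo "Validation passed."
```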

Thumbnail 2380

All right, so this created a good transformation definition, and as a human in the loop, you can view it, modify it based on feedback, apply it, or publish it to the registry. Okay, so let's apply it first for a sample project. Okay, so now it's asking me, "Where is your code repository?"

Thumbnail 2410

Thumbnail 2420

Thumbnail 2430

Thumbnail 2440

Thumbnail 2450

Thumbnail 2460

I'm going to provide the code repository location. This is one of the sample projects I have in Fluxo that I want to migrate to Tickety. I'll go ahead with this. The system will analyze the entry criteria and all the project files and start the transformation project. As Morgan described, it first does a planning phase where it determines how to apply this definition to the current project and what file changes need to be done. You have the option to modify the plan and provide inputs, such as telling the system to skip a step if it doesn't apply to this project. Once the plan is approved, it moves into execution mode.

Thumbnail 2470

Let me show you the plan creation. The important thing to know is that reviewing the plan is completely optional. You can tell the agent to just go ahead with it. It depends on how hands-on you want to be. As we've mentioned before, this tool can run in a very human-driven mode where you're watching every operation it performs, approving every tool use, and observing every file manipulation. You can always control it, pause it, and tell it not to do something or to do something different instead.

If you have something really complex or a piece of code that you're very sensitive about, you can watch it every step of the way and guide it very carefully. Alternatively, you can run it in a fully autonomous mode and just tell it to do its best at everything. It depends on how confident you are in the quality of your transformation and how risky the transformation is to you. You can be anywhere on that spectrum.

Thumbnail 2580

During execution, the system also performs validation. If there are any errors, it self-debugs and fixes them. We have a self-debug and validation agent that does this automatically for you. If the agent can't fix the errors, we want it to say so. We try very hard to prevent a common issue where large language models really want to claim they completed the job, and we've tried to eliminate that behavior from the agent as much as possible. Early iterations of this would delete tests to claim all tests passed, which is awful. We've built as many guardrails as we could think of to prevent this sort of behavior and try to enforce honesty.

Thumbnail 2590

If the agent encounters issues, we want it to say something like, "I completed five out of seven steps and had trouble with these two, and here's the trouble I had." We really try to make it honest because if you just put this into a cloud environment or something, there's a good chance everything will look fine until you dig into it. That's something we've encountered previously.

Thumbnail 2600

Thumbnail 2610

Thumbnail 2620

Q&A: CloudFormation, Cross-Account Sharing, and Extensibility

You all seem very focused on code itself, like language code. We have a challenge with CloudFormation and modernizing to CDK. Is that the kind of thing this tool could do? There are people doing Terraform to CDK migrations right now with this. The way I think of it is: if you have any structured text and you want to turn it into structured text that looks different, yes, you can do that. You also spoke a little bit about deployables—actually deploying into accounts and validating that the deployments worked, being able to do green deployments or compare A to B between what was there and what could be there. That's all doable if you build the scaffolding around it.

Thumbnail 2630

Thumbnail 2650

Thumbnail 2660

We're giving you the ability to put in text files and have different text files come out, and you can build the scaffolding around it to deploy the result and return information to the agent about validation results, using the tool as the mechanism for interaction. You're scripting around the tool to get it to do the other things you want, and then turning to it to do the transformation. Exactly. There are two extensibility approaches, and it depends on what works better for you. You can wrap our apply tool in your own stuff, or you can tell our tool to call external shell scripts and external MCP tools. Both approaches work. It's just a matter of what's better for your scenario.
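A rough sketch of the first approach — wrapping the transformation call in your own scaffolding that deploys the output and checks it. Every surrounding command here is a placeholder for whatever your pipeline actually uses, and the definition name is hypothetical:

```bash
#!/usr/bin/env bash
# Option 1: wrap the transformation in your own scaffolding.
# The deploy/compare scripts below stand in for your own pipeline steps.
set -euo pipefail

atx custom exec -p ./my-stack -n cloudformation-to-cdk -x -t

./scripts/deploy-to-test-account.sh             # your deployment step
if ./scripts/compare-old-vs-new-stack.sh; then  # your A/B validation step
  echo "Green deployment looks good; send the branch for review."
else
  echo "Validation failed; inspect the transformation output." >&2
  exit 1
fi
```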

Thumbnail 2680

The transformations are importable and exportable. One of my core tenets when we wrote the product requirements for this was that they would be importable and exportable because as far as I'm concerned, this is your intellectual property. I don't want anything to do with it. Keep it secret, open source it, whatever you want. That's the customer's decision. They are fully exportable right now, and they are ninety percent reimportable. The reason I have that asterisk is because the RAG knowledge items that the system discovers during execution, rehydrating the RAG database where we store those, requires some additional work.

Thumbnail 2720

Thumbnail 2730

This is not something we have gotten around to being able to do on the import side yet, so you can absolutely export it, you can mostly reimport it, and we're getting that to 100%. Can you distribute this around an organization? I know you were talking about within an account, but we're running an organization, so could we have a central account where this stuff lives and then distribute it out from there? Cross-account sharing is on our list. It's not there yet, but in the interim you can export it—literally, when you export it, it comes out as a zip file with a bunch of stuff in it—and then reimport it, and you're not going to lose much. I can talk through what you'll lose and how to get around it.

(An audience member asks whether they can see inside the AWS-provided definitions—for example the Java 8 to 17 upgrade—and exclude parts of them.) You can view the plan it generates for doing that particular upgrade on your code. The actual transformation definition that we provide—we had lots of fights with legal about this—we do not share with customers at this time. That may change if I get my way, but we don't share it right now. You can extend it with additional plan context and other things, but you can't really view or modify it right now. It's a black box that you can tack things onto the outside of. I would like it to not be a black box, and for you to be able to modify it, but we'll get there.

Thumbnail 2810

Thumbnail 2820

Thumbnail 2830

Thumbnail 2840

Thumbnail 2850

Advanced Features: Documentation Analysis, Batch Processing, and Knowledge Items

So this is the plan it generated. The plan, again, as I told you, is basically how to apply my definition to this project: first update the build dependencies, then do the client initialization; which files are going to be changed; what the validation criteria are. Again, as a human in the loop you can review the plan and provide input. Here I'm just going to say, hey, looks good, proceed. In the interest of time, I'm going to go to my completed run and show exactly what it did and the summary it created.

Thumbnail 2860

Thumbnail 2870

Thumbnail 2880

Thumbnail 2890

This is one of the runs I did this morning, and this is the migration summary it created: what has been migrated from Fluxo to Tickety. For the dependencies, the Fluxo Java client has been removed. It added the Tickety client authentication and client initialization, and it lists what it did and what the changes are. Again, there's a before-and-after code snippet for the get-ticket operation. Basically, it completed all the changes required—including documentation updates and other things—for migrating my proprietary internal library to a different system.

Thumbnail 2930

Thumbnail 2960

Cool. So, quickly going through, I have two more things I want to show. One is the comprehensive documentation analysis you saw in my list. That's basically going through your code, documenting all of it, and creating a tech-debt analysis for your code base. This is early access. It does a deep static analysis of the code base and then generates the documentation. I actually ran this on one of my projects. I don't know how many of you are familiar with Doom. Doom is a game engine we all love to play, created, I think, in the early 1990s. And this is pretty much the documentation it had—the original documentation basically says, hey, go play Doom, and that's about it, nothing else.

Thumbnail 2980

Thumbnail 2990

Thumbnail 3000

Thumbnail 3010

So I ran my documentation and code base analysis on that, and it created this documentation folder. It created a complete README: what exactly this code does, quick stats, and the documentation structure, broken down into subcomponents. One thing I really like is that it creates a navigable structure where you can click between the markdown files and move through them—kind of a pseudo web page, if you will. It created a project overview and a file inventory: total lines of code and so on. For architecture, it created components and identified which components are available.

Thumbnail 3020

Thumbnail 3030

Thumbnail 3040

Thumbnail 3050

Thumbnail 3070

Thumbnail 3090

Thumbnail 3100

It also created a dependency mapping tree showing the main program, game systems, core subsystems, and utilities. The patterns analysis covers architecture, system overview, and behavior, which includes the business logic like use cases and behavioral documentation—for example, the weapon properties and weapon mechanics used in the game. The analysis also covers decision logic (how decisions are made in the code), error handling, and workflows. It created all the workflows for me. Make sure you look at the diagrams folder too—I like the pictures. This is the component diagram showing the main engine, main program, game renderer, sound, network, and play simulation. It also shows the abstraction and foundation layers and external dependencies. The main thing is the technical debt analysis. It found a few things in this code from the 1990s, including security vulnerabilities and outdated platform dependencies. There are also 1990s C coding practices to consider.

Thumbnail 3110

Thumbnail 3130

Thumbnail 3150

Thumbnail 3160

Thumbnail 3180

Thumbnail 3220

The last thing I want to quickly show, since we have about 10 more minutes, is the batch processing capability: how you wrap this into a batch and run it across multiple repositories. I used Kiro to create a batch script—an ATX batch launcher. It takes as input a CSV file with my GitHub repositories, and I'm going to run this documentation transformation on multiple GitHub repositories. You can also provide additional context like the transformation name. This is the CSV file, and I wrapped the CLI in a batch script. I'm going to launch the batch script with the CSV file in parallel mode with a maximum of 10 jobs—you could increase that—plus the working directory to download the clones from GitHub and the output directory where the results should go. I'm in a different directory right now, so give me one second.
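The launcher here is just Venu's own wrapper script; an equivalent sketch with standard tools might look like this (the CSV layout—one clone URL per line—the definition name, and the flags are all assumptions based on the demo):

```bash
#!/usr/bin/env bash
# Illustrative batch launcher: read repo URLs from a file and run the
# documentation/analysis transformation on up to 10 repos in parallel.
set -euo pipefail

CSV=repos.csv          # one GitHub clone URL per line
WORKDIR=./clones
OUTDIR=./results
mkdir -p "$WORKDIR" "$OUTDIR"

run_one() {
  local repo_url="$1"
  local name
  name=$(basename "$repo_url" .git)
  git clone --depth 1 "$repo_url" "$WORKDIR/$name"
  atx custom exec -p "$WORKDIR/$name" -n codebase-documentation-analysis -x -t \
    > "$OUTDIR/$name.log" 2>&1
}
export -f run_one
export WORKDIR OUTDIR

# GNU xargs: -P 10 runs up to 10 jobs in parallel, one repo URL per job.
xargs -a "$CSV" -P 10 -I {} bash -c 'run_one "$@"' _ {}
```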

Thumbnail 3240

Thumbnail 3250

Thumbnail 3260

So that's basically checking the version and running these commands, starting about 13 parallel jobs. It downloaded all the repositories for me and then started running the analysis. You can monitor the execution logs as it starts creating the documentation, and you should be able to see the results. That's batch mode, basically: you wrap the SEG execution—the ATX execution command—and run it. Our code name was Super Elastic Gumby, so sometimes we slip and say Gumby or SEG. I was really sad marketing gave us a real name.

Thumbnail 3290

Thumbnail 3320

The final thing is the knowledge items, so I want to quickly show how the knowledge items look. I created a transformation called Java 21 modernization for a bunch of Java projects I had, and I've run it multiple times on multiple projects. I'm going to say "list KI"—knowledge items—which is the continual learning aspect, where it learns from the agent's executions. By default, it runs with knowledge items, which means it learns from the execution; you can also tell it not to learn if you don't want that. When knowledge items are created, they're disabled by default, which means a human in the loop can review them and enable the ones they want applied next time. Here you can see some of them: for this Java 21 transformation, it found that sealed classes are incompatible with JPA/Hibernate lazy loading. Some of them are very specific—like one from SolarWins, one of the projects I ran—very project-specific and not very generic. Another one: Spring Boot virtual threads need this. So I can go and enable it, and the next time it runs the same transformation, it applies that knowledge item—the memory is already there. I'll hand it back to you to wrap up. Yeah, thank you for your time. I think that's all we've got.
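As a loose sketch of that review loop — the interactive shell takes natural-language instructions, so the wording below is illustrative rather than a documented command syntax:

```bash
# Start the interactive shell and review learnings in natural language.
atx
#   > list KI for the Java 21 modernization transformation
#     (the agent lists learnings captured during past runs, each disabled by default)
#   > enable the sealed-classes / Hibernate lazy-loading knowledge item
#     (once enabled, it is applied automatically the next time this
#      transformation runs on another repo)
```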


; This article is entirely auto-generated using Amazon Bedrock.
