🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Modernizing Java applications with generative AI (DVT210)
In this video, AWS specialists Ben Coburn and Venu Vasudevan, along with ADP's Vijay Golla, discuss modernizing legacy Java applications using AI-powered tools. They introduce "modernization monsters" like the Documentation Dragon and Testing Troll that plague traditional upgrades. The session showcases Amazon Q Developer's Java transformation agent and the newly announced AWS Transform Custom CLI tool. Vijay shares ADP's success story: upgrading 40+ Java services (600,000 lines of code) with 60% time savings and 85% CVE reduction. AWS Transform Custom enables custom transformation definitions, continual learning from feedback, batch processing for scaling to thousands of applications, and integration with CI/CD pipelines through a command-line interface, supporting multiple languages beyond Java.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: The Challenge of Modernizing Legacy Java Applications
Good afternoon. Can everybody raise their hand if they can hear me? Good, awesome. How many of you are running Java applications that were written before you joined your current company? How many of those applications were written more than 10 years ago? Wow. How many of those applications were written more than 20 years ago? Okay, still about 25%. That was before cloud computing.
Now, you could choose to maintain the status quo, but hopefully you're here today because you want to modernize your application. In fact, for many of our customers, the question is no longer whether to modernize, but how to do so safely, securely, and efficiently. That's exactly what we're going to explore today. My name is Ben Coburn. I'm a Principal Go-to-Market Specialist with AWS in our Next Gen Developer Experience organization.
Hey, I'm Venu Vasudevan. I'm a Senior Specialist Solutions Architect from the same Next Gen Developer Experience team. Hey, I'm Vijay Golla. I'm Director of Application Development at ADP. Thank you, thank you, thank you.
To start off, we're going to talk through some common issues with modernization and how we solve them. Then we'll hear a real-world example from Vijay about his experience at ADP modernizing their Java applications. Finally, we'll outline where past approaches have fallen short and introduce you to the latest offerings from AWS that address these challenges.
The Modernization Monsters: Six Key Challenges in Java Application Upgrades
So first, let's discuss challenges with application maintenance and upgrades. Now I know Halloween was about a month ago, but when you take on a modernization project, you open the door to a few modernization monsters. So you may be familiar with a few of them.
First up, we have the Documentation Dragon. When you're upgrading your application to a newer version of Java, you're changing a large amount of code, and this in turn requires extensive updates to your existing documentation, and that can be a full-time job. The other side of the Documentation Dragon is that your old legacy Java app may not have great documentation, and that alone can be a non-starter. These legacy apps often contain custom caching mechanisms or connection pooling, and without high-quality documentation, it can take days to understand the ins and outs of a module before you can effectively upgrade it.
Next, you may be familiar with the Testing Troll. Newer versions of Java can't always be tested in the same way as older versions of Java, so this pesky little guy is going to break your unit tests. An old-school example that some of you may be familiar with is a test that relies on internal sun.misc classes that were removed after Java 9, or an outdated testing framework like PowerMock that isn't compatible with newer Java versions.
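To make the Testing Troll concrete, here is a small sketch of that kind of breakage: code and tests that leaned on the internal sun.misc.BASE64Encoder class stopped compiling once the class was removed in JDK 9, and the supported replacement is the public java.util.Base64 API. The class below is an invented illustration, not code from the session.

```java
import java.util.Base64;

public class Base64Migration {
    public static void main(String[] args) {
        // Pre-Java 9 code and tests often reached into internal APIs:
        //   String s = new sun.misc.BASE64Encoder().encode(bytes);
        // sun.misc.BASE64Encoder was removed in JDK 9, so that line no
        // longer compiles. java.util.Base64, public since Java 8, is the
        // supported replacement:
        byte[] bytes = "hello".getBytes();
        String encoded = Base64.getEncoder().encodeToString(bytes);
        System.out.println(encoded); // prints aGVsbG8=
    }
}
```

A transformation agent applies exactly this kind of mechanical swap, then re-runs the tests to confirm nothing else broke.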
Let's move on. The Module Monster. As you know, your enterprise application isn't just one thing. It's a complex ecosystem of interconnected modules, each with interconnected services, libraries, and various components. So when one part of your application gets upgraded, it can create a negative domino effect, which can go from your core service module, for example, and then may in turn impact your logging framework, which may in turn impact your monitoring modules, and ultimately your reporting services.
Got a few more of these guys. Let me introduce you to the dreaded Version Vampire. Upgrading from one version to another version of Java introduces breaking changes because things are just done differently in later versions. A classic example is the removal of JAXB from Java 11, which breaks many applications using XML processing.
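As a sketch of how teams typically slay this particular vampire, the JAXB classes removed from the JDK in Java 11 can be added back as explicit Maven dependencies. The coordinates below are the commonly used ones for the javax namespace; verify the versions against your own build before relying on them.

```xml
<!-- JAXB left the JDK in Java 11; declare it explicitly instead. -->
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.1</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>2.3.1</version>
</dependency>
```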
All of these illustrations were created by Kiro CLI, by the way. The Dependency Dinosaur. Your app may have tens or hundreds of third-party dependencies, whether software packages or libraries, and you need to find versions that are compatible with newer Java versions. Most commonly, these types of dependencies are for logging, testing, parsing, web support, or security.
And lastly, the scariest of all is the Time-Sucking Tarantula. Given the previous five monsters, this process just takes time, and it's no wonder that it ends up in the backlog and adding to tech debt in our organizations. It can take months or years to upgrade a production application, so it's no wonder that these projects go unaddressed.
The modernization monsters, as I like to call them, are daunting, no matter how many times you've encountered them, and even the most seasoned developers would rather apply their creativity to a new project or a new piece of innovation. So, how do we solve these challenges with AI?
AI-Powered Solution: Amazon Q Developer's Java Transformation Agent
If you've been using AI over the last couple of years, you may be familiar with some of these techniques. We can use an AI agent to modernize your application, taking care of all the complexities and monsters that we just discussed. At AWS, we originally released this capability inside of Amazon Q Developer, and a bit later we'll talk about how this capability is evolving today.
When you call the Java transformation agent from inside of your IDE, it's going to do a few things. It will verify your project with a local build. It'll analyze that project for things like JDK runtime and dependencies such as Spring Boot, and find replacements for deprecated internal libraries like the sun.* classes. Third, it will create a custom transformation plan for you to review before it does anything.
Now, during transformation, code generation happens on the server side in the cloud, while the agent performs verification builds with unit tests in your local environment. After transforming your code, you'll see a file diff to review all changes before accepting them. For larger projects, there are strategies you can use to chunk your work into multiple patches, for example, starting with one patch just for minimal upgrades of the most popular libraries. In other words, the aspects that are least likely to break your application: the low-hanging fruit.
The key concept that I want to highlight here is that we're not just generating new code and stopping there. We're giving this agent the ability to implement, test, and only move on once the builds are successful. And this happens in many cycles through the transformation. Now I want to hand it over to Vijay from ADP to talk a little bit about how they went through this process in his organization and for their Java applications.
ADP's Context: Global Payroll Services and Java Modernization Imperatives
Thank you, Ben. Hello again. Good afternoon. Again, myself, Vijay Golla, director at ADP for Application Development. How many of you are aware of ADP? You can raise your hands. And how many of you are getting your paychecks, W-2s through ADP? That's a good number. Thank you.
So, I have 17 years of experience developing Java applications for enterprise solutions. I've seen the versions evolve from Java 4 all the way to what is now 23. Whenever we go from one version to another, all the monsters Ben spoke about dance around our minds, right? So how do we overcome that with generative AI?
ADP worldwide. So we all know, right? We are a global leader in payroll and HR services with 75 years of expertise, and we are serving 1.1 million customers. And 42 million workers are getting their paychecks through ADP. It's a big number. We take care of workers from hiring to retirement, all the services end to end. Again, I'm part of global payroll.
So, my team is responsible for delivering global payroll, where we deal with multinational companies that are part of the Fortune 500. Together, we are serving 2,800 clients, delivering paychecks, tax documents, everything, and we have 6 million users accessing our global payroll portal. Across global payroll, we are serving 140 countries. It's huge.
Again, these are not just numbers. They're the real people we are serving. As part of global payroll, the two main things we consider are compliance and security. These are non-negotiable for us, so we always ensure our software and our compliance are up to date.
So, compliance and security: how do we address these in software development? As part of global payroll, we have Java services in our portal, and we want to address their challenges. Let's see what those challenges are.
Our Java applications to modernize are on Java 8 and 11. We all know end of life is coming and support is ending. We want to ensure support is there; we don't want to end up with no support from the vendors, right? And compliance requirements, as I said, are non-negotiable for us.
Security vulnerabilities are a major concern for us. If there's no support from the vendors, we end up dealing with the security vulnerabilities ourselves. We're also missing out on resiliency improvements: whenever you don't upgrade your Java, you lose a lot of patches, upgrades, and improvements. Performance is another critical factor. The latest Java versions bring a lot of performance improvements, and if we don't upgrade, we lose them, so we have to upgrade.
ADP's Pilot Program Results: 60% Time Savings and 85% CVE Reduction
When we want to address these challenges, if we do it manually, it will take a huge amount of time. How do we accelerate this in this era of Generative AI? We partnered with the AWS team to address this problem. We took a small application and did a proof of concept using the transformation agent with Java. We were able to transform the POC seamlessly and understand what it was doing. So we were confident that we could go with the pilot program.
Our global payroll portal was the pilot program, where we run over 40 Java services. As part of the transformation, we learned a lot. How it works is: you're in your IDE, you just give a prompt for the transformation, and it's amazing, it does everything for you. It identifies what Java version you're running and identifies the dependency stack, including your third-party dependencies, and resolves all of them. It runs an iterative process to update those dependencies, fix the builds, and remove duplicated code. All of that is handled, and it documents everything it's doing. Those were exactly the challenges we were looking at, and it solves all of them.
A second key outcome we have seen is developer productivity. If you are doing these things manually, for example, if you're taking a complex application, it might take days, let's say three to four days. Using the transformation agent, we were able to transform this application in three to four hours. Now we are speaking about days turning into hours. This is one application we have exercised as a pilot among 40 services. Our experience is that it is saving 60% of the time. When you save 60% of the time from your developers, they can use that time elsewhere for developing new features for your customers or for innovation.
It also improved the application's resiliency and reliability because the latest Java versions are providing these benefits out of the box, and we can leverage those. It improved the performance as well. We know that the latest Java has a lot of performance improvements, so as part of the migration, we were able to see that performance has improved significantly.
Now, we've transformed the code, which is good, but how do we test it? What happens to my existing unit test cases? The Java transformation takes care of your unit test cases as well. There is an option to run your unit test cases post-migration; it runs them, fixes failures automatically, and gives us a detailed report. Unit testing is good; however, functional testing is also important. We want to ensure, before and after, that our application is working fine. We were able to use dev agents to develop automation scripts and update our existing automation scripts. One generative AI suite gives you the ability to transform, test, and deliver.
Looking at simple metrics from our experience, we were able to save 60% of the time: migrations are 60% faster. As I said, over 40 services were migrated in far less time than the weeks or months they used to take. The total number of lines of code we processed for these 40 services is around 600,000. And whenever you're upgrading your dependencies, remember that third-party libraries also reach end of life and become vulnerable; if you check, a lot of CVEs show up in your report. When we compared before and after, CVEs were down 85%, particularly the high and critical ones. That is a win-win situation.
Scaling Success: From Pilot to Organization-Wide Adoption at ADP
Whenever you do this transformation, does it do everything for you? We have a variety of applications on Tomcat and Spring Boot. In Spring Boot scenarios, most of the time it's seamless. With complex Tomcat applications or monolithic applications, we have seen it is 80 to 85% accurate. It is doing the job, but there is a human in the loop: we need to handle the remaining 15 to 20% ourselves to make it successful. That's what I mean by around 85% accuracy.
Now, we did a pilot, which is good, but how do we scale it to the organizational level? At ADP, we have a Center of Excellence team. We demonstrated these experiences within the business units and shared our knowledge, and we've now rolled out to many business units how to advance our Java applications to the latest versions. We have migrated hundreds of applications since the POC and pilot program. The second thing is developers. We are doing well, but how do we empower our organization with all these tools? We need to give developers access. If we don't give developers access to all these amazing tools, we will not see the outcome.
We gave our developers access to all these generative AI tools, so we were able to achieve our goals. For quality gates, as I said, unit test cases and automation test scripts are taken care of. We ensured that all the code transformed by generative AI was reviewed, tested, and then delivered. For change management, again, we have a strategic approach for bringing it from one business unit to the others, so that was good. I think that's pretty much it from our side. I'll hand it back over.
Beyond Java Upgrades: Customer Demands and the Need for Greater Flexibility
Thank you so much, Vijay. Those are really impressive metrics, 60% faster upgrades, and thank you for sharing your customer success story. Now, what's the future? Where do we go from here? This is great, but what are we evolving this Java transformation agent into? I want to quickly recap the current Java transformation agent and how it works. Like Ben mentioned, it's an IDE-based experience: it takes your Java code as a project, builds and tests it in the source version to make sure it all works, then applies static recipes like Spring Boot or JUnit upgrades, and then builds and tests it in the target version, which could be 17 or 21. If there are any issues, it goes to generative AI large language models to fix them and gives you the completed, upgraded source code. That's exactly the workflow of the current Java transformation agent.
Now, why do we need something better? When we talk to customers, they often told us: this is great, it's doing the job we wanted, but our situation is a little unique. One of the main things customers asked was, beyond runtime upgrades, can it also do deployment upgrades? I have Terraform, I have embedded SQL, can it upgrade that too? I have JSP UI layers; can it upgrade the UI layer to a modern one? And from a flexibility standpoint, the IDE experience is great, but my developer has to load the project into the IDE to do the transformation. Can I integrate this with my CI/CD pipelines? Can I give my own prompts, because I have an internal library that I want to upgrade along with the Java application? Or I have very specific needs, like upgrading only Spring Boot in my application; can very targeted upgrades be done?
The other thing we always hear from customers is: can it support other build systems, like Gradle and Bazel? Another is: upgrades are great, it was able to upgrade the version, but can it also modernize? Can it use the latest Java 21 features like streams and records? And another is at-scale upgrades: I have 1,000 applications on Java 1.8, can I upgrade them all together? Those are common questions customers ask. The current Java transformation agent can address some of them, but not all. That's why we wanted to do something better.
Okay, so introducing AWS Transform Custom. This was released this week; our CEO Matt Garman announced it yesterday in the keynote as a way to crush tech debt. So what does it do? We took a step back from the current Java modernization agent and thought: instead of creating agents for specific use cases, an agent for JSP, an agent for Terraform, why don't we give customers an agent they can use to create their own transformations, with their own requirements and their own organization-specific context? What organizations need is an intelligent AI agent that can learn from your organization's specific context and your own requirements, and create a scalable transformation that you can apply not only to one repository but also integrate with your existing systems and apply at scale.
Introducing AWS Transform Custom: A CLI-Based Autonomous Agent with Continual Learning
AWS Transform Custom is a command line interface-based autonomous agent that does exactly what I described. It supports any code patterns, which means not only Java but also .NET to Java and other language-to-language conversions. It supports various scenarios. If you want to target specific frameworks, such as upgrading just Spring Boot or just Hibernate, it can do that. It also handles language-to-language conversions, whether you want to go from Python to Java, Java to something else, or perform a Java version upgrade. Additionally, it supports architectural changes. For example, if you have something on x86 and want to migrate your Java application from x86 to Graviton, that is also supported. Essentially, it's breaking the barriers of all the languages and frameworks. We're giving you an agent where you can customize it and do it on your own.
Another main part is the continual learning aspect. The agent can learn from your feedback. Once you apply this transformation to a couple of projects, it learns from the feedback you give and the agent execution, and it becomes smarter the next time you run the transformation. You want something where it can take your feedback with the human in the loop and apply this to the next set of transformations. The most important thing is that since it is command line interface-based, it can run from anywhere. It can run from your laptop, from your EC2 instance, in a container, in a batch mode, or even from your pipeline. This means you can wrap this as a batch script and say, here are my 1,000 applications, go run my Java upgrade job on all 1,000 applications, so you don't have to load this in the IDE and do it one by one. That's the main advantage of having this as a command line-based interface agent.
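The fan-out idea can be sketched as a tiny shell wrapper. Note that the atx subcommand and flags below are assumptions for illustration (the session does not document the exact batch syntax), so this sketch only prints the commands it would run rather than invoking anything.

```shell
#!/bin/sh
# Hypothetical batch fan-out over many repositories. The
# "atx custom dev --config" invocation is assumed syntax for
# illustration; check the real CLI before using it.
REPOS="payroll-core tax-engine benefits-api"   # invented repo names
for repo in $REPOS; do
    # Dry run: show the per-repo command a real wrapper would execute.
    echo "atx custom dev --config ${repo}/transform-config.json"
done
```

The same loop could just as easily be driven by a CI/CD job or a container fleet, which is the point of having the agent behind a CLI.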
Let's look at how it works and what the workflow is. What has changed? The first thing, as always, is you should have source code that you want to transform. It could be one repository or multiple repositories, and it could be living on your GitHub repositories, GitLab, or wherever your source code management system is, or even your local laptop. The next thing is you can provide additional context. One thing I wanted to say here is that for AWS Transform Custom, you can do a custom transformation with your own requirements, and AWS will also provide AWS managed transformations like for Java upgrades, Python upgrades, x86 upgrades, and Lambda upgrades. We actually provide the definition for you, so it's zero setup and you can get started with those conversions right now.
Going back to the architecture here, once the source code is there, you provide additional context. This is where the human in the loop comes in and you give your organization-specific context. It could be validation criteria, saying you want a specific command to run for your unit tests to pass, or that you want your smoke test suite called after the migration is done to make sure everything works. It could be other agent instructions specific to your company. For example, you might have a first-party library that you want to upgrade along with your Java application, or very unique requirements on how to upgrade these Java applications. Those things can be given as additional context for the agents.
Once you provide that, basically the same thing happens. It builds and tests the project in the source version. For example, if you're upgrading from Java 1.8 to 21 or 25, the first thing it does is make sure it works in your local environment in the source version. It could be Maven, it could be Gradle, or any version of Java; you could even do a Java 1.4 to 25 upgrade. Once that is done, AWS Transform Custom takes over, which is a combination of multiple agents. There is an Orchestrator Agent that orchestrates all these jobs together. First, it does planning: determining how it is going to apply the transformation to your current project, essentially creating step-by-step instructions on what exactly needs to be done. So it creates a comprehensive plan. After it generates that plan, here too, as a human in the loop, you can provide feedback: this step makes sense, this step does not. It's an iterative process where at every stage you give instructions to the agent. Once that is done,
the execution agent takes over where it executes that plan step by step. The main advantage here is once the execution agent takes over, it basically creates a local branch where the command line interface is running and commits these changes incrementally, so you have an option to roll back to any point in time and you can monitor what is exactly happening with these changes. Again, here also you can provide feedback. If something doesn't really make sense and you need to change something, you should be able to do that as well.
And then finally, the validation and self-debugger agent takes over, validating against the goal you've given. The goal could be: I want to transform my application from Java 1.8 to 21. The validation could be: my build and unit tests are successful. It executes toward that goal and makes sure the validation criteria are met. If it finds any issues with the validation criteria, there is a built-in self-debugger agent that debugs and fixes the issues for you. This loop runs to make sure your complete project is upgraded to your goal.
This relies on two things. It refers to the knowledge base, which is the additional context you give: documentation and other specific context provided during the additional-context stage is stored as a knowledge base, and during execution the agent refers to it. And of course it goes to large language models for the code changes. Once you get the upgraded code, continual learning kicks in: the agent learns from the execution based on explicit user feedback.
When you give feedback in the execution, planning, or validation stage, it records all this explicit user feedback, as well as the agent execution, and creates what we call a knowledge item. A knowledge item is like a memory created during the execution. As a human in the loop, once the complete transformation is done, you can review the knowledge items and decide whether each one is applicable to your next run. If it is generic enough, you can say: enable this knowledge item for my next run. To put it shortly, the agent becomes smarter as you run it on multiple projects. That's the workflow we want to bring in to address the improvements customers asked for: providing their own context, supporting multiple repos, integrating with existing systems, and supporting all the changes we discussed.
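The plan, execute, validate, and self-debug cycle described above can be modeled in a few lines. This is only a conceptual sketch of the control flow as narrated, not AWS Transform Custom's actual implementation, and every name in it is invented.

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Conceptual model of the narrated flow: run each plan step, validate it,
// and let a "self-debugger" retry failing steps within a budget. Purely
// illustrative; not AWS's implementation.
public class TransformLoop {

    // Each step returns true when its validation criteria pass.
    static boolean transform(List<BooleanSupplier> planSteps, int debugBudget) {
        for (BooleanSupplier step : planSteps) {
            boolean ok = step.getAsBoolean();     // execute + validate
            int retries = 0;
            while (!ok && retries++ < debugBudget) {
                ok = step.getAsBoolean();         // self-debug pass, re-validate
            }
            if (!ok) return false;                // goal not met; stop and report
        }
        return true;                              // every step validated
    }

    public static void main(String[] args) {
        // A step that "fails" twice before the self-debugger fixes it.
        int[] failures = {2};
        BooleanSupplier flaky = () -> failures[0]-- <= 0;
        System.out.println(transform(List.of(flaky), 3)); // true
    }
}
```

The real agent adds what this sketch leaves out: incremental commits for rollback, the knowledge base, and LLM-generated fixes inside the retry loop.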
The other thing is that at all these stages you can integrate with MCPs, which means you can bring in your own specific tools. For example, during the planning stage, if you want to refer to documentation in Jira or Confluence, you can do that. During the execution stage, if you want to refer to another GitHub repo, you can do that. In the validation stage, you could call a smoke test suite from another tool through the MCP integration. So at every stage you have the option to integrate with your existing tools and systems.
Live Demonstrations: Out-of-the-Box Transformations, Custom Definitions, and Batch Processing at Scale
All right, let's go to the demo. I want to show a few demos here. First, the experience of using the out-of-the-box transformation that AWS provides for Java upgrades. Again, zero setup: you just see the definition and execute it. From a command line interface standpoint, I'm going to list all the AWS managed transformations so you can review what the Java upgrade transformation looks like, apply it to a current project, and see how it generates a plan, how you give input, and the validation and verification. This is a recorded demo, but I'll explain exactly what is going on.
So as you see here, I have my Visual Studio Code open. I have a few Java projects that I want to upgrade, and it's a Gradle-based project in 1.8 that I want to upgrade to 21. And this is my terminal. I'm going to
say ATX. ATX is basically the AWS Transform CLI. I'm saying: list all the definitions. These definitions are stored in a registry that is applicable only to your account. These are the AWS managed definitions; Java upgrade is one of them, and it can upgrade from any Java version to any other. Then I'm going to say: ATX custom dev, execute this with this configuration file.
Let's review the configuration file. It contains my code repository path, my transformation name, what build command to use, and what the validation command is. I'm also specifying my Java home: use this on my machine to build it. And there's additional plan context, a natural-language way of giving extra instructions. I'm saying: this is an 8 to 21 transformation, and specifically upgrade these libraries to specific versions; for example, Spring Boot goes to 3.4.5.
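Pieced together from that narration, the configuration file presumably looks something like the sketch below. The field names and layout are guesses for illustration only, not the documented schema.

```json
{
  "codeRepositoryPath": "/workspace/my-gradle-service",
  "transformationName": "java-upgrade",
  "buildCommand": "./gradlew build",
  "validationCommand": "./gradlew test",
  "javaHome": "/usr/lib/jvm/java-21",
  "additionalPlanContext": "This is a Java 8 to 21 transformation. Upgrade Spring Boot to 3.4.5."
}
```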
Once I execute it, it goes and reviews the project, analyzes all the project files, and creates a plan: how it's going to apply this transformation to your project. First do the Gradle wrapper upgrade, then the Spring Boot upgrade, then the deprecated API upgrades. It upgrades not only the dependencies but also deprecated APIs, breaking code changes, all those things.
Once the plan is ready, you can review it and give additional instructions. Here I'm going to say: proceed, it looks good. Now it starts making the changes. Step one: make the 1.8 to 21 changes. There is also a work log maintained, so you can see exactly what is going on as it makes all these changes. And here is the self-debug: it found an error, something that's not compatible, so it's making additional changes to self-debug and self-heal.
Now step 2 is complete, and it moves on to step 3. Once all the transformation steps are complete and everything has been upgraded, it lists everything that has been done: what exactly was upgraded (the Gradle wrapper, Spring Boot, all the libraries), what code changes were made, and the final status. The validation criteria are successful for this application. So this is how you use the out-of-the-box transformation provided by AWS to transform your application.
What you saw is an interactive mode, meaning I was able to interact with it as it ran, but you can also run it in batch mode, which I will show in a moment. Okay, let's go to the next demo. First we saw how you can upgrade; second, how you can create a custom transformation for your specific scenario and execute it. This is where it gets interesting: you give your requirements to create a definition for your use case, and then you can execute it.
Since this is a Java upgrade session, I'm showing Java, but this actually supports all the languages. You could do Python, .NET, Rust, all those as well. So the second scenario I'm going to show is how you create a new custom transformation definition, review it, and publish it to the registry. The registry means that once you create a definition, you can publish it to your own account, so other users can apply the same transformation definition to their projects with similar use cases.
Then we apply it: plan generation and validation. The specific scenario I took is this: I've already upgraded my application from Java 1.8 to 21. Now I want to modernize it, refactoring it to use the newer performance improvements of Java 21. That's what I'm going to show. I also have a Docker container, so I'll show how to upgrade the Dockerfiles to the newer images as well.
All right. Again, I'm going to do this in interactive mode, which means I converse with the CLI to make this happen. I open interactive mode and say, hey, create a new transformation for me. Now it asks what exactly I want to do, and I say, modernize my Java 21 project with the Java 21 features and also update the Dockerfile. Cool.
Now it searches for any existing transformations, and I say, go ahead and create a new one for me. Then it asks whether there is any specific documentation or context I want to provide. I say, use the best practices for Java 21 modernization and refactoring, and go create this. This happens before the planning stage, where it actually creates the definition itself.
Now we have a definition. The definition has a clear objective, a summary, entry criteria (it takes only a Java 21 project as input for modernization), and step-by-step instructions of exactly what needs to be done: for example, convert my POJOs into records, implement pattern matching, use virtual threads, and update the Dockerfile. These are newer features available in Java 21 that it wants to use. Based on your requirements, it creates the definition, a recipe you can apply to multiple projects. There are also validation criteria: how do you want to validate? Here too you can provide input, for example, my build and unit tests are successful, which means the goal is achieved. Now that the definition is created, you can modify it, publish it so other users can use it, or apply it to your current project. I feel comfortable with my recipe, so I'm going to publish it. It gets published to your registry, which is specific to your account.
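The definition steps mentioned here (text blocks, pattern matching, virtual threads) correspond to Java language features available in Java 21. A minimal illustrative sketch of what a transformation targeting those features would produce; the class, query, and method names are hypothetical, not taken from the demo project:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch of the Java 21 features the definition targets;
// names here are assumptions, not from the demo project.
class Java21Features {
    // Text block: replaces concatenated multi-line strings.
    static final String QUERY = """
            SELECT id, name
            FROM users
            WHERE active = true
            """;

    // Pattern matching for switch: replaces instanceof/cast chains.
    static String describe(Object obj) {
        return switch (obj) {
            case Integer i when i > 0 -> "positive int " + i;
            case Integer i -> "non-positive int " + i;
            case String s -> "string of length " + s.length();
            default -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(42));      // positive int 42
        System.out.println(describe("hello")); // string of length 5

        // Virtual threads: one lightweight thread per task,
        // closed automatically by try-with-resources (Java 21).
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            pool.submit(() -> System.out.println("ran on " + Thread.currentThread()));
        }
    }
}
```

Each of these rewrites is mechanical enough to be expressed as a definition step, which is what makes the recipe reusable across projects.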
All right, it's published now, and I can apply it to my projects. I choose option one, applying it to my project. Again, it asks where the project is and what build command to use, and the same flow happens: it creates a plan, you review and validate it, and go from there. So let's see what it's doing. It created a plan: it says this is a multi-module Gradle project, the entry criteria are met since the project is already on Java 21, and the plan includes ten steps. First the record conversion, then text block modernization, then pattern matching, basically whatever requirements I've given, plus Docker modernization and legacy modernization. And if some steps from the definition are not applicable to this project, it notes that they are not applicable.
Here again you can give feedback. I say, looks good, proceed. Now it makes the changes: the first one converts ConfigurationKey into a Java 21 record. You can see the old string and the new string; it replaces the older version. Now it's complete and everything succeeded. The transformation is done, and it gives you a summary of exactly what was changed. As I mentioned, it creates a local branch where you can compare the changes, in addition to the summary. Once the changes are complete, it runs validation again to make sure the exit criteria are met. Here the build is successful and the modernization succeeded with no issues found, so you can review the changes with a git diff and push them.
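The ConfigurationKey conversion mentioned here is a good example of what such a diff looks like. A hypothetical before/after sketch; the field names are assumptions, since the demo does not show the full class (visibility modifiers omitted for brevity):

```java
// Before (Java 1.8-style POJO), shown as a comment:
// final class ConfigurationKey {
//     private final String namespace;
//     private final String name;
//     ConfigurationKey(String namespace, String name) {
//         this.namespace = namespace;
//         this.name = name;
//     }
//     String getNamespace() { return namespace; }
//     String getName() { return name; }
//     // plus hand-written equals(), hashCode(), toString()
// }

// After (Java 21 record): the constructor, accessors, equals(),
// hashCode(), and toString() are all generated by the compiler.
record ConfigurationKey(String namespace, String name) {}
```

Note that the generated accessors are `namespace()` and `name()` rather than JavaBean-style getters, so call sites have to be migrated along with the class itself.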
So what we saw is creating a definition from scratch for your requirements, applying it to your project, reviewing it, and making sure it works. All right, the third thing, and the most important: how do you scale? This is great, you created a transformation and applied it to one project. Now how am I going to apply this to a thousand projects? Since this is a CLI, a command-line interface, you can wrap it in a batch script. All it needs is a shell. It works on Linux, on Mac, and on Windows through WSL.
As long as you have a script that can run it across your repositories, you can scale it. It could be a batch script, or you could put it in a container, or use AWS Batch, or any other scaling mechanism.
Okay, so let's see how that works. I have a batch script here called ATX Batch Launcher. It's a wrapper around the ATX CLI that executes it in parallel or serial mode, taking as input a CSV file, the mode (parallel or serial), whether to trust all tools so it can execute without a human in the loop, the build command to use, and any additional parameters.
Now let's look at the repos. I gave it a bunch of GitHub repos that are on Java 1.8 and said, this is my Gradle project, go run the out-of-the-box transformation provided by AWS, this is my Java home, and do a 1.8 to 21 migration. Those are the inputs I gave. I launch the script with the CSV file as input, in parallel mode with 10 jobs maximum, and the clone directory set to a local directory on my machine. I execute it, and it spawns 10 threads for the ATX CLI and does the job. And you could do this not only from your machine: as I said, put it on EC2 or in containers and run it there.
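The demo's launcher is a shell script, but the fan-out pattern it implements can be sketched in a few lines of Java: read one repository per CSV line and run a transformation command per repository through a bounded thread pool. The CSV layout and the command are placeholders, not the real ATX CLI syntax:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of a batch launcher's fan-out logic: one CLI invocation per
// CSV row, capped at maxJobs concurrent processes. The CSV columns
// (first column = repo) and the command are illustrative assumptions.
class BatchLauncher {
    static Map<String, Integer> run(List<String> csvLines,
                                    List<String> commandPrefix,
                                    int maxJobs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(maxJobs);
        Map<String, Future<Integer>> futures = new LinkedHashMap<>();
        for (String line : csvLines) {
            String repo = line.split(",")[0].trim(); // first column: repo
            futures.put(repo, pool.submit(() -> {
                List<String> cmd = new ArrayList<>(commandPrefix);
                cmd.add(repo);
                Process p = new ProcessBuilder(cmd).inheritIO().start();
                return p.waitFor();                  // exit code per repo
            }));
        }
        Map<String, Integer> results = new LinkedHashMap<>();
        for (Map.Entry<String, Future<Integer>> e : futures.entrySet()) {
            results.put(e.getKey(), e.getValue().get());
        }
        pool.shutdown();
        return results;
    }
}
```

In the real script, the command prefix would be the ATX CLI invocation with trust-all-tools enabled so each job runs without human input; here you could substitute any command to try the pattern.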
You can monitor the progress as well. Opening one of these, you see the same ATX CLI session I ran interactively before, but now in non-interactive mode: I trusted all tools, so it doesn't ask any questions, it just executes. It also created a config.yaml for each transformation that you can review. Okay, so now it's complete. One repository failed, but all the others are successful. The failure was in repository preparation, due to my credentials, but I just wanted to show how easy it is to scale this across your projects.
Again, for Java, as I showed in the first demo, if it's a plain vanilla Java upgrade, just use the out-of-the-box transformation we already provide, unless you have unique requirements of your own. Cool. So the status is passed for most of the transformations.
Continual Learning in Action and Defeating the Modernization Monsters: Key Takeaways and Next Steps
Okay, the next thing I want to show is the continual learning aspect. We started with the out-of-the-box transformation, then graduated to creating our own transformation, then saw how to scale it. Now, how does continual learning work? How does the agent become smarter the next time? I'm going to use the same demo scenario, the Java 21 modernization custom transformation I created and ran across the projects, and capture its knowledge items, list them, review them, and enable them.
As an example, this is what I ran: ATX custom dev exec for multiple projects. Now assume all those transformations are done. I can go to another terminal and list the knowledge items that were created. Let's see how that happens. I say, ATX custom dev list knowledge items for this transformation, and it lists all the knowledge items it found from explicit user feedback as well as from agent execution.
Some of the items it found: the Gradle Shadow plugin outputs into this path, SolarWinds and Morpheus; this is very specific to that project, so maybe not applicable to other projects. Groovy text blocks require two strings; it captured specific things from the agent execution that can apply to other projects as well. Sealed classes are fundamentally incompatible with JPA Hibernate.
Record conversion requires systematic getter method migration. So these are some of the things it learned.
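That last knowledge item reflects a real Java behavior: records generate accessors named after the component (`name()`), not JavaBean getters (`getName()`), so every call site has to be migrated too. A small illustration with hypothetical names:

```java
// Why record conversion forces a call-site migration: the record below
// exposes name() and age(), not getName()/getAge(). Names are illustrative.
record User(String name, int age) {}

class CallSite {
    static String greet(User u) {
        // Before the POJO-to-record conversion this line would have
        // called u.getName(); after it, every call site must use u.name().
        return "Hello, " + u.name();
    }
}
```

This is exactly the kind of project-independent lesson worth promoting to a shared knowledge item, since it applies to any repository that undergoes record conversion.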
Now I'm going to pick some of the generic items and enable them, which means the next time I run the transformation, they are already enabled and handled automatically. I say, enable this knowledge item, and the same for a couple of others. I'm enabling a few of the knowledge items so that when you use the same account and the same transformation name, these are already taken care of; it doesn't have to learn them again and again.
This is especially important when you have multiple teams working and capturing knowledge items, so you centrally hold this memory: organization-specific knowledge items you can enable and track. How we envision this working is that a central team of domain experts creates these transformations and distributes them to the application teams, who run them, make sure they work, and then apply them at scale. That's the process we envision, but each company is unique. You might not have a centralized team; where developers are decentralized, you can also run this within your teams within the account, so those knowledge items are still captured.
All right, those are the things I wanted to cover with the demo. As we start to wrap up, a few key principles to highlight from AWS Transform Custom. First is the continual learning capability Venu just highlighted, where the transformation definition can be improved over time through either agent-generated or developer-generated critique. Second is guided transformation definition creation: when you first create a definition, Venu highlighted the intelligent feedback loops that identify missing context and prompt the user for extra instructions in cases of ambiguity.
We have the concept of learn once, transform everywhere: your developers should not spend duplicative time creating transformation definitions that are common across your organization. You create it once, vet it once, and publish it to your internal registry. And lastly, we have organization-wide transformation campaigns, so a central PMO or developer office can manage these transformations at scale.
Now, as we wrap up, let's go back to our quest against the modernization monsters. The documentation dragon: you saw how an AWS Transform Custom transformation definition can automatically create updated documentation for your project, or net new documentation, based on the instructions you give it. The testing troll: a transformation definition will update existing tests or create net new tests for your project. The module monster: we support multi-module projects out of the box. And for the version vampire, the CLI tool has all the functionality to address breaking changes.
Lastly, the dependency dinosaur: we support all common third-party dependencies and frameworks like Spring Boot, Hibernate, and the Apache libraries. And these culminate in the time-sucking tarantula: Venu highlighted 60% faster transformations at ADP, and we've seen customers achieve up to 5X acceleration in their transformations.
As for next steps, this tool is available to you today. An easy way to get started is to try updating documentation, or creating new documentation, for an existing project. You could then move on to one of our out-of-the-box transformation definitions, such as the Java transformation definition, and then expand to your larger projects or create your own definitions. You can find more resources on Skill Builder for AWS Transform, as well as for all our other AWS services and the announcements from this week.
So I'd like to thank you all for coming today, invite you to please complete the session feedback in the mobile app. We won't take Q&A right now, but we'll be happy to take it afterwards beside the stage. And with that, thank you all for coming. Thank you. Thank you.
; This article is entirely auto-generated using Amazon Bedrock.