🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Grupo Tress Internacional's .NET modernization with AWS Transform (MAM320)
In this video, Grupo Tress Internacional shares their transformation journey from .NET Framework 4.6 to .NET 8 using AWS Transform for .NET. Armando Valenzuela, Head of Engineering, explains how they modernized a critical payroll stamping service processing 11.3 million documents daily for 4.4 million employees. The migration from Elastic Beanstalk to Lambda achieved 40% cost reduction, 70% reduction in development hours, and eliminated Windows licensing costs. The team used AWS Transform to automatically migrate 135,000 lines of code, cleaned 23 unnecessary NuGet packages, and leveraged Amazon Q Developer for manual fixes and validation. They deployed on Graviton-based Lambdas, achieving zero downtime during peak payroll periods. The presentation includes a live demonstration of AWS Transform in Visual Studio, showing the IDE integration, transformation process, and how Amazon Q assists with code modernization tasks like replacing Entity Framework with Entity Framework Core.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: Grupo Tress Internacional's AWS Transformation Journey
Great, thanks for taking the time to see our presentation. We are going to have a conversation with Grupo Tress Internacional, particularly Armando Valenzuela, who is going to share with us their transformation process using AWS Transform for .NET. The service was launched a few months ago, so they started using it fairly early on. Now at the event there are a lot of launches that you have already seen, so we are going to go through those as well. But first, let's hear what Grupo Tress's challenges were regarding their modernization efforts, right?
So this is the agenda we're going to go through. A quick introduction: we're going to present Grupo Tress Internacional and what they do. Armando is going to dive deep into their challenges. He's going to share with us their transformation path, the transformation journey, and the key lessons they learned along the way while transforming their solution. And this is very interesting: he's going to dive deep into the architecture. As you know, this session is a level 200, so expect technical content, and also a demonstration of the service and its benefits, which is going to be led by Thiago. And what's next, in terms of lessons learned and the next adventure for Grupo Tress.
Now who is Grupo Tress Internacional? Grupo Tress is a leading Mexican company focused on human resources management, payroll processing, and attendance control solutions. They have a long heritage; they have been building software since 1991, so you can imagine that they have a lot of legacy code out there, and their solution has been evolving over these years. They are headquartered in Tijuana, not so far from here, and by 2024, GTI, as we call them, reached 68% coverage of all of Mexico's manufacturing employees. So imagine that you are receiving your payroll and you work in the manufacturing sector. You may want to receive your payments week by week or every 15 days; this is how it works for the manufacturing sector in Mexico in particular. They reach more than 1,200 customers and up to 4.4 million employees, so they have a pretty large impact out there.
So let me introduce Armando Valenzuela, who is the head of engineering. Buenos días, bom dia in Portuguese, right, Thiago, and good morning everyone. Let me introduce myself really quickly. I have been working at Grupo Tress for more than 20 years, and I started working on legacy Windows on-premises applications. Yeah, I worked with Delphi back then, no shame in that. Then I moved to working on cloud-native serverless applications, and now I work as part of the architecture enablement team, and we serve multiple product development teams as well.
The Friday Crisis: Challenges of Legacy Infrastructure and Modernization Imperatives
So picture this. You have more than 4 million users trying to reach your app, right? They're trying to do something meaningful with your application on a Friday evening. Customers are calling technical support for help. The product teams are really stressed, and this application is running on more than 10 years of legacy code, on Windows instances that don't scale as you think or as you wish. Well, surely that is not a happy Friday for anyone, and that happened to us earlier this year, week by week. A lot of people were involved in that, including the infrastructure team and support.
And I mention this quote, "the leading sources of technical debt are architectural choices," because it's important to note that migrating the code to a newer version of .NET is not the important issue here. You need to think more broadly so you don't end up with technical debt that you're going to hand down to your company's new employees in the following years. Right.
This modernization approach is something that we think about in our organization because our mission is to experience the joy of improving lives. The lives that we are improving are not just our customers or the employees, but also our developers. With this modernization, we could do both things. One, deliver a better service to our end users, and also improve the developer experience for our developers.
What challenges did we have? Well, more than challenges, these are the pillars that we followed in this migration. One is to stay customer-centric: we needed to meet scalability and performance needs. The second is to stay cost-effective: we needed to reduce operational costs and simplify the modernization and maintenance efforts, and that's related to the developer experience. Also, zero-downtime requirements. This was a crucial need: when you migrate a critical production workload, you need to do it without disrupting payroll operations or customer SLAs.
Understanding the Payroll Stamping Service: A Critical Mexican Business Application
Okay, let me explain the application context. This is a very Mexican thing, because the Mexican government requires every payroll slip to be reported to the Mexican Tax Administration Service; it's like the IRS, but in Mexico. Basically, the process begins when the payroll manager or administrator, using our HR management suites (in this case we have Sistemares on-premises and Revolution as software as a service running on AWS), sends all the payroll data through the payroll stamping services that we provide. Because we have security guidelines, I'm not allowed to explain all the VPCs and all the networking integration, but I'll try to explain the features that the payroll stamping services provide.
Then, after the payroll managers send all the data, the employee through our self-service HR application is able to download and get the payroll slip. That's a huge deal for the employees in Mexico, and I'm sure here in the US as well. But for legal reasons, the companies are obligated to report those payrolls on time. If that doesn't happen, the company could have some legal issues with their unions or with their employees directly.
What we have here is a legacy XML-to-PDF engine that actually generates the PDF of the invoice. We replaced this service with a Lambda service, practically, and this Lambda handles around 2 million requests daily. On peak days, we have up to 4 million requests, and for our services, this is the most used endpoint in our entire infrastructure.
Okay, this is just an overall view of the monolith architecture. This is basically an Elastic Beanstalk containerized application that runs on two availability zones and in multiple regions as well. I don't want to say that Beanstalk is not running well. Actually, it's running phenomenally. But the issue that we have with Beanstalk is that it doesn't scale the instances as we expect.
The Transformation Approach: From .NET 4.6 to Serverless Lambda Architecture
It doesn't have the same velocity as Lambda. So, using AWS Transform, what was our transformation approach? Well, I recommend being prepared. And I'm not talking about the product developers or engineers; I'm talking about your code. The first thing I suggest is to analyze which projects are the most suitable, the best candidates to migrate. Then, after selecting the project that best matches your needs for this migration, you need to organize your legacy projects.
Talking about our experience, we had a lot of dependencies related to NuGet packages in our solutions, and I would like to ask you something. Please raise your hand if you are dealing with NuGet or package dependency issues lately. Yeah, just a few. Perhaps it was just us, but in our projects we had like 73 NuGet packages involved, some private and some public, and we removed around 23 unneeded packages. That's normal because developers actually add NuGet packages that they need, and with the inherited dependencies they add multiple and unneeded NuGet packages. That's common. Also, just to report that we had more than 135,000 lines of code. This is lines of code of the entire suite. You also need to have your unit tests in order, and if you don't have them, that's okay, you can create them later, but you need to have it in mind to compare the before and after.
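Auditing that kind of package sprawl can start with a simple inventory. Here is a rough sketch in Python (not GTI's actual tooling) that walks a solution folder and maps each NuGet `PackageReference` to the projects that pull it in, which makes unneeded or duplicated packages easier to spot:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def list_nuget_packages(solution_dir: str) -> dict[str, set[str]]:
    """Map each NuGet package id to the set of .csproj files that reference it."""
    packages: dict[str, set[str]] = {}
    for csproj in Path(solution_dir).rglob("*.csproj"):
        tree = ET.parse(csproj)
        for element in tree.iter():
            # SDK-style projects declare <PackageReference Include="..." />;
            # endswith() also tolerates the XML namespace of old-style projects.
            if element.tag.endswith("PackageReference"):
                name = element.get("Include")
                if name:
                    packages.setdefault(name, set()).add(csproj.name)
    return packages
```

Sorting the result by how many projects reference each package gives a quick first list of candidates to consolidate or remove before running the transformation.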
Okay, with AWS Transform, we were able to automatically migrate from .NET 4.6 to .NET 8, and AWS Transform helped us validate most of the code. The packages were updated as well. But a portion of the code still needed manual attention; it was not just a few lines, but with the help of Amazon Q, we were able to make the manual fixes. And we could run local and functional tests. With Amazon Q, now Amazon Q Developer, we were able to multiply our efforts and move rapidly toward production. After the adjustments and running the application locally, we of course needed to build a standalone application to test the transformation. We entered multiple cycles of refactoring. We moved the packages to the correct artifacts, and I'm not talking about the NuGet packages: we decoupled our service into multiple artifacts so we wouldn't create a microservice with all the code involved.
I was in a session this Monday, MAM402; it was a cool talk, and they mentioned that the point is not to create a shared monolith or a distributed monolith. So you need to think it through and decouple your code before migrating to Lambda. Also, we were able to automate our builds via CDK and CodePipeline. And I'm going to go through our timeline in more detail. We started this effort in April, and we were able to transform the code and do the manual fixes in less than two weeks. It took a little longer overall because we needed to decouple the service correctly.
During this migration, we replaced the legacy NuGet server with CodeArtifact, using CDK and CodePipeline to integrate them. Then, on the runtime side, we didn't jump directly from Elastic Beanstalk to Lambda. We took an intermediate step with AWS Fargate for the microservices. First, we tried to run it on ECS Fargate and ran some tests. The behavior was okay with Fargate, but we didn't want to have more things to administer around ECS Fargate; that's not our traditional model. We tend to use Lambdas for our applications, so we practically removed the containerization, changed the project to not use a Docker image, and used the zip deployment model instead.
The most important thing here is to add X-Ray and CloudWatch metrics, and of course to get metrics from inside your running application and service, because when you migrate your code, the operational environment is quite different. You need to keep track of what is not behaving as you expected. This is the high-level solution, and it's as simple as that. It's a Lambda named Document Generator. It manages almost all the same buckets. It has its own application database, so we were able to decouple the database as well, and everything behaves and integrates well with the payroll stamping service. Our self-service application, named Mazorden, didn't have to change its endpoints or anything; everything is managed inside the context of this application.
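Getting metrics out of a freshly migrated Lambda can be as light as writing structured log lines. Below is a minimal sketch of a CloudWatch Embedded Metric Format (EMF) record in Python; the `PayrollStamping` namespace and metric names are made-up placeholders, not GTI's actual telemetry:

```python
import json
import time

def emit_metric(name: str, value: float, service: str = "document-generator") -> str:
    """Build a CloudWatch Embedded Metric Format (EMF) log line. Lambda ships
    stdout to CloudWatch Logs, which turns EMF records into custom metrics
    without any extra API calls from the function."""
    record = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "PayrollStamping",       # hypothetical namespace
                "Dimensions": [["Service"]],
                "Metrics": [{"Name": name, "Unit": "Milliseconds"}],
            }],
        },
        "Service": service,   # dimension value lives at the root of the record
        name: value,          # metric value lives at the root of the record
    }
    line = json.dumps(record)
    print(line)  # in Lambda, anything printed lands in CloudWatch Logs
    return line
```

The appeal of this pattern during a migration is that it adds observability without new dependencies, so the before/after comparison of the service is not skewed by extra network calls.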
Measurable Benefits and Key Lessons: 70% Reduction in Development Time and 40% Cost Savings
So what were the benefits of this migration? Firstly, we could do this rapidly. When I started the conversation with you, I said that the product teams were really busy, so we had to help with that as an architecture and enablement team. We don't usually invest time in developing new things; we help the teams to design, to organize, and we train them, but we were able to help with this migration. We calculated almost a 70% reduction in human hours. We saved about two months of the team's time on manual code testing and validation. This was not just migrating the code; we also needed to generate new tests, new integration tests, and so on.
Another thing was the strategic refactoring. The team was focused on value-driving refactoring rather than tedious manual work. And I'm not saying that because we had the time, we invested it in architecture. No, it's the other way around. Because with this product, we didn't just react to the problem we had, rapidly do the migration manually, and then go to production. We were able to think more broadly and decouple the whole monolith for further migrations, not just the document generation that I explained.
Also, cost savings. We were able to reduce the infrastructure cost by over 40% using Graviton-based Lambdas. What I'm trying to say is that we didn't just go with the basic Lambda configuration. In this case, the AWS Mexico architects team helped us go further. They said, hey, you can go Graviton directly, and the packages we use to generate PDFs are well tested on Windows environments, so we were worried about not getting the same results. With Amazon Q, we generated more than a couple of Python scripts that automatically help us validate A/B tests. Along the way, we also eliminated the Windows licensing costs. That's an indirect cost that we had on Beanstalk, so that was good for us.
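The validation scripts themselves are not shown in the talk, so here is a simplified stand-in for the text-comparison step only, assuming the text of each PDF pair has already been extracted upstream by some PDF library:

```python
import difflib

def compare_renderings(legacy_text: str, lambda_text: str,
                       threshold: float = 0.999) -> tuple[bool, float]:
    """Compare the text extracted from the legacy (Beanstalk) PDF against the
    text from the new (Lambda) PDF, line by line, and flag the pair when the
    similarity ratio drops below the threshold."""
    ratio = difflib.SequenceMatcher(
        None, legacy_text.splitlines(), lambda_text.splitlines()
    ).ratio()
    return ratio >= threshold, ratio
```

A pixel-by-pixel check, as described in the talk, would add an image-rendering step on top of this, but the shape of the harness is the same: run both engines on the same input, then assert the outputs agree within a tolerance.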
Regarding elasticity, on average we have 11.3 million payroll documents processed daily. We don't have an issue with that with Lambda right now. Zero manual scaling: of course we were not doing manual scaling in Beanstalk either, but we had to watch what was happening on peak days, and then the infrastructure team made some changes. That's something we don't need to worry about now with our Lambda. Of course, you need to follow the metrics, and perhaps make some changes to your Lambda if your demand increases, but for now we calculate that with Lambda we're okay.
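As a back-of-the-envelope check on those volumes (assuming, unrealistically for payroll traffic, that requests were spread evenly across the day), Little's law gives a feel for the Lambda concurrency involved:

```python
DOCS_PER_DAY = 11_300_000    # average daily volume quoted in the talk
AVG_DURATION_S = 0.5         # roughly half a second per invocation (from the A/B charts)

avg_rps = DOCS_PER_DAY / 86_400             # about 131 requests per second on average
# Little's law: concurrent executions = arrival rate x average duration
avg_concurrency = avg_rps * AVG_DURATION_S  # about 65 concurrent executions
```

Real payroll traffic is bursty, so the peaks sit far above this average; the point of the estimate is only that even the steady-state load is well within Lambda's default concurrency limits.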
Then we had zero downtime during payroll spikes. This is just an example of what we saw running the A/B tests earlier this year. As you can see, the duration stays around half a second. During peak invocations, Lambda actually has better results than when invocations are low, so that was good for us.
Lessons learned. Well, I'm going to try to summarize this really quickly, as I said before. If you have a running legacy application, you have the privilege of comparing it with the new version. If you can run a task more than once, like we do with the payroll slips, you can compare the results of your legacy version versus the new version. Regarding the Python scripts that compare the PDFs: we basically did it in production. We deployed the new version at the same time we were delivering the PDF built in Beanstalk, and in the back end we were generating the Lambda version. So we were able to compare the PDFs pixel by pixel and text by text.
That way, we were confident that we could switch gradually to the new version with a canary release. So if you are in that situation, I recommend doing this. If your process cannot be executed more than once, you can of course still go canary and select your initial users. Either way, there is a win in this.
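A gradual canary switch like this can be sketched as a deterministic routing decision. The helper below is hypothetical, not GTI's actual router; hashing a stable request id keeps each document on the same side of the split across retries, which also keeps the legacy-versus-new comparison reproducible:

```python
import hashlib

def route_to_new_version(request_id: str, canary_percent: int) -> bool:
    """Return True when this request should go to the new (Lambda) version.
    The SHA-256 hash maps the id to a stable bucket in [0, 100), so raising
    canary_percent gradually shifts traffic without flapping per-request."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return bucket < canary_percent
```

In practice the same effect can be had from managed features such as weighted Lambda aliases, but an explicit router like this makes it easy to log which engine produced each document for later comparison.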
The other thing that we learned, in a sense, is to adapt. Modernization is not just code migration, so don't expect to just transform and be on the other side the next day. If you're going to migrate your application to Lambda with the help of Amazon Q, you need to be aware of cold starts, provisioned concurrency, SnapStart, and package size optimizations, because you're going to migrate something that was running on a monolith and works perfectly because all your singletons and connections are well oiled in that production environment. Not so with Lambda: with Lambda, you need to be aware of your connection pools, your memory setups, and your connections to other services, right?
So if your internal developers say that something installed on-premises was faster than Lambda, don't hesitate to investigate. There are probably issues with cold starts, and I just want to ask: I don't know if you have many issues with cold starts on Lambda. Yeah, just be aware of that if you are migrating. On automation: we achieved the fully automated validation I mentioned before, with the Python scripts and other kinds of scripts, and with JMeter as well, to do critical benchmarking tests, right? So now let's get technical. Back to you, David.
AWS Transform for .NET: Technical Deep Dive into Automated Modernization
Thank you. Thank you very much, Armando. Thank you for sharing with us your case. It's pretty amazing what you have achieved in this time. Also, if you have any questions, you can approach the Grupo Tress team that is here by the end of this session so you can discuss it further. Now let's get into the AWS Transform and how it works.
Why modernize your applications? That's a pretty simple question. You may already know the answer, but there are a lot of factors here. For example, Armando mentioned that they improved response times when they moved services from Elastic Beanstalk to Lambda. From there we can deduce that we are going to have some cost optimizations. We will, as you can see, have performance improvements. And also scalability: having Lambda, for example, instead of a monolith will give the teams more manageability and more scalability in terms of what they are doing and what they are projecting for application growth.
Something interesting that Armando mentioned is that the AWS Transform service moves the monolith to actual microservices in containers. You can deploy transformed code, for example, to EC2 with Linux or ECS Fargate. But what Armando and the team did was migrate directly to a serverless architecture, and that's possible. That's something you may not see in AWS Transform right now, but with the integration of Kiro and other tools it is actually possible, and it took Grupo Tress two weeks to reach this stage of maturity.
So porting to cross-platform .NET is hard and it's slow, and you already know that. That's why you are here, right? And the AWS Transform service will take your code from there. As you can imagine, everything is agentic now; AWS Transform is agentic now. This week the next generation of AWS Transform was launched, which gives you multi-agent capability. What does this mean? You're going to be able to actually drive the modernization process the way you like it.
So through this agent that was released this week, you are able to actually drive the process of modernization. If you are looking to change the architecture of your services and only port the code to this new architecture, you're going to be able to do so, instead of depending entirely on the agent's capacities to design the plan. This is a huge improvement compared to what we had in the past. Also, detecting incompatibilities: as Armando mentioned before, they were able to clean their code, and they found that 23 dependencies were just hanging around. They removed them, and the code became cleaner.
Then we port the code. The Transform agent, after analyzing and detecting incompatibilities, is going to design a modernization plan. And what's going on here? So the agent is going to design a plan based on your source code and projects that you want to migrate. As humans, we need to validate that process, so we need to be involved in this validation stage of this modernization plan.
After having the modernization plan and porting the code, we can deploy directly to ECS Fargate or EC2 with Linux. Before, this was very manual labor, as Armando mentioned; it would have taken months to do what they did in a couple of weeks, with a lot of overhead in terms of how teams would manage the migrated services.
So what we are looking at here is how long projects can be made shorter, how licensing costs can be cut, and how to avoid suboptimal .NET porting quality. The transformation process is not only porting the code. It's going to find fixes that can be made within your code, and it's going to recommend improvements to the ported code, in terms of both code quality and security. So this is very important to keep in mind.
And well, this is the introduction to AWS Transform for .NET, which has two experiences: the IDE, which is the one we are going to show you today, and the web experience. Both behave similarly in terms of functionality and the agents behind the service. But for .NET porting inside the IDE (that is the second picture here), you need the integration with the AWS Toolkit. From there, you can grab the AWS Transform component and start your modernization. For the web interface, you just need to connect your source code, which could be hosted, for example, in Azure DevOps, GitHub, GitLab, or Bitbucket, as in the case of Grupo Tress.
And once we have connected our code and integrated Visual Studio, we can start analyzing the code base. So what's going to happen here? These agents are going to index your code and process it. The code is moved to AWS processing to understand what your code is, how it behaves, and the dependencies it has. And once this part is complete, here comes the transformation process.
The transformation process, again, depends on a transformation plan. Once we have this transformation plan, we can iterate here, and this is something new: you can interact with the agent to actually drive the transformation process. And finally, the validation, the human-in-the-loop part of every agentic solution. The purpose here, and this is the whole point this slide presents, is how we can take these .NET Framework applications to new Linux .NET 8 applications, and now .NET 10.
This is a deeper dive. A component that I want to highlight here is the dependencies part. The analysis process is a little more complex within the service. It requires analyzing code for incompatibilities, then identifying and generating replacement code to substitute for your current code. Once the transformation is completed, AWS Transform pushes the transformed code into a version control system. It will use the version control system you connected, and you need to create a branch so the transformation process can deposit code into it; that way we have full control over how the ported code is placed.
And a note on dependencies: if you develop your own NuGet packages, you can share them with the AWS Transform service to get this level of dependency management control. If no NuGet packages are provided, or if the service does not find those NuGet packages in its knowledge base, it will do its best to understand what those packages are doing.
So once we have done this, we apply code modifications. We can verify the code. If something didn't go as expected, we can get back and try again. So this is a process that is usually led by the developer or the architect.
Now, this slide covers the transformation for .NET Framework. It will also transform MVC Razor interfaces. We have Web Forms to Blazor migration, and support for cross-platform .NET 8 and .NET 10. So there are a couple of targets for this transformation that are going to help us build modern .NET architectures. This also includes older project types like Windows Forms and Windows Presentation Foundation for desktop projects, for example, and everything else here.
For large-scale modernization, you can take several projects in one single shot, or choose among all those .NET projects and transform them one by one, or grab your entire code base and start transforming from there. Now, the MVC Razor part is pretty interesting. It has been requested a lot, and it will take your code and port it to ASP.NET Core. Web Forms to Blazor transformation is supported as well, as shown here: it will take your Web Forms and transform them into Blazor. So this one's pretty new as well.
And what does it look like? Here, Thiago is going to share with us a demonstration of how this works and how to integrate the AWS Transform components into Visual Studio. Before diving deep, this is just a reminder of the connections for code: you can connect GitHub, GitLab, Bitbucket, and Azure Repos, and also use Amazon S3 for code analysis. If you don't want to connect your source code repositories, you can share the code using Amazon S3.
Also, the view assessment is a summary report that you can see when the transformation process is complete, and you can also provide NuGet packages, whether developed internally or from third parties. Now, the console experience is pretty much the same, but in this case we have full support for an agent and a chat console that you can interact with. What's the advantage here? Having the capacity to drive the modernization effort toward a certain architecture, toward the way of coding you have in your company, is crucial for steering the modernization efforts you care about, and this is very important. We are going to review this as well.
This is the integration. Amazon Q is going to help us, as Armando already mentioned; they used Amazon Q in the last stages. So what's the recommendation here? Start with AWS Transform. Once you have ported your code to .NET 8 or .NET 10, use Amazon Q to improve the code, make fixes, or port to Lambda, for example, which is what Grupo Tress did. And the SQL Server transformation: maybe you are wondering what happens to the databases. Well, this is pretty straightforward. We also support databases, including SQL Server, and manage the dependencies within your code.
How do we do that? Here is the before and after of the transformation. This is a full-stack Windows transformation, and you have everything here, starting from your application, your database layer, and your virtual machine running on Windows Server. The intention of this part of AWS Transform is to modernize the full stack, not stop at the application transformation. You can focus on that alone if you want to keep your databases, but if you are planning to migrate everything to Linux, this is the way to go. So the target is to have a cross-platform .NET application, run your databases on Aurora PostgreSQL, and have the infrastructure components on Amazon ECS and Amazon EC2 with Linux.
So let's get to the demo.
Live Demonstration: Modernizing .NET Framework Applications with AWS Transform and Amazon Q
Thank you, Thiago. Thank you, David. Good morning. What a great achievement by Grupo Tress with the modernization of their application. And today I will demonstrate how you can achieve the same. My name is Thiago Goncalves. I'm a Solutions Architect at AWS, and for the past 20 years, I have been developing and modernizing applications.
So when we talk about AWS Transform, as David mentioned, we have two options. One is the web experience. If you are in a DevOps team and you want to modernize applications in batch, you will use the web experience. But if you are a developer and you want to modernize one application at a time, you will use the IDE version of AWS Transform. So what I have here is Visual Studio, the IDE for .NET application development. And if you want to modernize your application using AWS Transform, the first step is to install the AWS Toolkit. So you go to Extensions and search for AWS Toolkit with Amazon Q, and this is the first step. You need to have the extension installed in the Visual Studio IDE.
With the extension installed, you will have the option to enable AWS Transform in the IDE, and we have a few options here. If you want to use only AWS Transform, you can choose the first option, the one on the right side. And we have Amazon Q Developer, which is now the Q Pro subscription, where you have AWS Transform plus generative AI options to help you modernize your application. In this case, for this demonstration, I will use the Amazon Q Developer subscription.
Okay, with the extension enabled in Visual Studio, on the right side I have one solution here. It's a .NET application, and in it I have a project on .NET Framework; we can see that it's .NET Framework 4.7. And as we know, this version of .NET Framework is no longer supported by Microsoft, and what I want to achieve is to modernize this application to .NET 8. So how can I accomplish this? In the Solution Explorer, I right-click on the solution, and I have this option: Port solution with AWS Transform.
And it will ask me for the target. Right now we only support .NET 8, but soon we will support .NET 10 as well. And I will start the transformation job. As we can see here, the first requirement is that your application builds successfully; otherwise, the transformation job will not start.
Okay, now my application is building, and what happens next is that AWS Transform will package all your source code and the packages required for your application to execute, and this source code will be sent to an AWS account, a sandbox. It creates a secure connection with an AWS account, sends your source code there, and the transformation happens in that AWS account. The transformation job doesn't happen on your local computer, so right now nothing changes in your solution. The source code is analyzed and transformed in an AWS account.
This process takes about 15 to 20 minutes for this small solution, so I will not wait until the transformation job is done. I have another solution here where I already completed this transformation. When your source code is transformed and the job is done, you get the response in the IDE, and this is how it looks.
Here I have a summary of what has changed and the projects that I have. At the bottom of the screen, I can see that new files were added to my solution. Before, I had a Web.config in .NET Framework, but I no longer need a Web.config; now I have appsettings.json, so AWS Transform automatically added the files that were missing for a .NET Core application. All the new files required to run this application on .NET Core are now available, and files that are no longer needed, like Global.asax and others, were removed or renamed.
I can see there are some changes in my source code, and I can review them. Here, some using statements were replaced and some small edits were made. I can also download a summary of everything that was changed in my application, including the packages that were changed or that need attention. On the left side, I have the Linux readiness assessment, which shows that the job was not able to replace Entity Framework with Entity Framework Core, so this is something I need to change manually if I want this application to run on .NET Core. With this report, I have an overview of everything that was done in my solution.
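A typical example of the kind of using-statement swap such a summary lists (this exact pair is an assumption for illustration, not taken from the demo solution): System.Web namespaces have no equivalent on .NET 8, so MVC code moves to ASP.NET Core.

```csharp
// .NET Framework (before):
// using System.Web.Mvc;

// .NET 8 (after): ASP.NET Core replaces System.Web entirely
using Microsoft.AspNetCore.Mvc;

public class HealthController : Controller
{
    // Ok(...) returns an HTTP 200 with the given payload
    public IActionResult Index() => Ok("healthy");
}
```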
As I said, nothing has changed in my solution yet. If I want to apply the changes the job made to my source code, I select all the changes and apply them. Since the project file was changed, Visual Studio asks me to reload the solution. Now all the changes are applied: on the right side, I can see the appsettings.json, Startup.cs, Program.cs, and all the new files required to run this application on .NET Core.
If I look in here, the target framework is now .NET 8, so all the changes required for this application to execute on .NET 8 are done. That's the first step. What about the changes I mentioned that I need to make manually? My application is now ready to work on .NET 8, but AWS Transform doesn't let me steer the transformation: it knows what needs to be replaced and changed, and beyond that there are no options to interact with my source code. Its single goal is to make your application run on .NET 8.
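The target-framework change shows up in the project file, which after transformation is in the compact SDK style. A minimal sketch of what such a ported project file can look like (property values here are illustrative assumptions):

```xml
<!-- Hypothetical SDK-style project file after porting to .NET 8 -->
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>
```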
If you need to interact more with your source code, we have another option here: Amazon Q Developer. This is the next step once your application is transformed. As I mentioned, I have Entity Framework that needs to be replaced with Entity Framework Core.
So what I'm going to do now is ask Amazon Q to replace Entity Framework with Entity Framework Core. This is very powerful because it can access my solution, understand what's happening in my application, read the files, and modify them as needed. It asks for access to my solution, and I say yes.
It knows which commands need to be executed in my solution, and it starts replacing the packages and references. It also changes the source files that need to change for Entity Framework Core to work in my solution, and I can see every change it's making in my source code.
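At the package level, this swap usually amounts to removing the classic Entity Framework reference and adding the EF Core provider in the project file. A hedged sketch of that change (package versions here are assumptions):

```xml
<!-- Hypothetical ItemGroup: classic EF removed, EF Core provider added -->
<ItemGroup>
  <!-- removed: <PackageReference Include="EntityFramework" Version="6.4.4" /> -->
  <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="8.0.0" />
</ItemGroup>
```

The source code then changes too: the DbContext moves from `System.Data.Entity` to `Microsoft.EntityFrameworkCore`, and the context is typically registered with dependency injection rather than constructed from a connection-string name.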
So with these two tools, AWS Transform and Amazon Q, I can interact with my source code: add new classes, new pages, new routes in my application, all by typing what I want in plain English. That's very powerful when we are modernizing, or if I'm a new developer who wants to understand what's going on in an application, for example, how the database connection works. I can debug my application, fix errors, or just describe the error message or the goal I want to accomplish in the chat, all without leaving the IDE.
And this is only the first step when we talk about modernizing our application; it's not just moving from .NET Framework to .NET Core. After this, if you want more performance or lower costs, the next step is to move the application to Graviton. Now that the application can run on Linux, we can take advantage of that. We did a performance test with one application: we compared the cost of running it on a Windows machine, then converted it to run on AMD, and then compared that with Graviton. After modernizing your application, the cost difference when executing the same application on Graviton is huge.
And it's not only the costs; the performance of the application is much better too. So modernizing is not only about moving from .NET Framework to .NET Core: it also gives you better cost and better performance when you execute on a modern architecture.
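For the Lambda deployment the talk describes, targeting Graviton is a one-line architecture choice once the code runs on .NET 8/Linux. A minimal sketch using an AWS SAM template (the function name and handler are hypothetical, not from Grupo Tress's actual stack):

```yaml
# Hypothetical SAM snippet: modernized .NET 8 function on Graviton (arm64)
Resources:
  StampingFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: dotnet8
      Architectures:
        - arm64          # Graviton processors
      Handler: Stamping::Stamping.Function::Handler
      MemorySize: 512
```

The `arm64` architecture is priced lower than `x86_64` for Lambda, which is one source of the cost difference discussed above.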
So thank you, and now I'll hand over to Armando for a few more words.
Looking Ahead: Continuing the Modernization Roadmap with Strangler Fig Pattern
Oh, thank you. Thank you, Thiago, and thanks everyone for listening to us. What's next for us? Well, this was just the tip of the iceberg of our migration roadmap. We're going to continue modernizing our .NET Framework projects using the Strangler Fig pattern, but with what we have learned at re:Invent, we need to rethink this model. We need to go further with Amazon Q to accelerate code development and modernization. We need to adopt a composable serverless architecture to speed delivery and boost efficiency. And we need to enhance the other features we have in our payroll stamping services, including the SQL database integrations, so perhaps we will migrate from SQL Server to another database model.
I just want to end with this quote: it is not only about migrating your code. You need to think broadly and involve key members of your team. This time we were able to work with Mauricio, Roberto, and Pedro's team, the architecture team, and they got really involved in the solution, so we could go further and deliver on time with the expected results. So don't leave your developers alone on this. And well, that's all from my side. Thank you.
; This article is entirely auto-generated using Amazon Bedrock.