DEV Community

Mario Araya Romero

How I use AI agents to ship legacy code faster (15y experience)

I have been writing software for 15 years. I started in a world where every deploy was manual, every codebase had tribal knowledge nobody wrote down, and every new project was a fresh fight with someone else's unfinished ideas.

The last couple of years changed something important. AI coding tools stopped being autocomplete and started being real collaborators. Not magic, not a replacement for thinking, but something new. And I noticed that most developers around me were using them wrong.

This is how I use them today.

AI is not autocomplete

Most people I know open Copilot, accept the suggestion, move on. That is fine for boilerplate. It does not work for the kind of work I do, which is mostly brownfield: unfamiliar codebases, legacy systems, messy constraints.

When I work with Claude Code on real code, I do not start by typing. I start by preparing context in Markdown files. What does this part of the system do? What are the dangerous spots? What patterns does the team already follow? I write a short note, sometimes a single paragraph, sometimes a full document, and I give it to the agent before asking anything.

This sounds slow. It is actually the opposite. An agent with good context produces code I can merge. An agent without context produces code I have to rewrite. And that is what saves time when new requirements or modifications come from management.
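To give a sense of scale, one of those context notes looks roughly like this. Every name in it (the service, the flag, the routes) is made up for illustration, not copied from a real codebase:

```markdown
# orders-service — context for agents

## What this part does
Receives purchase orders from the storefront, validates them,
and writes them to the orders table.

## Dangerous spots
- The validator silently skips some rules when `legacyMode` is on.
  Do not remove that flag.
- Retries are handled by the queue, not the handler.
  Never add retry loops here.

## Team patterns
- Services return result objects; they do not throw for business errors.
- New endpoints go under `routes/v2`; `v1` is frozen.
```

That is the whole trick: the note answers the questions an agent would otherwise guess at.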

The most useful question I ask now

I ask one question constantly: what does the agent need to know to avoid a mistake here?

That question changed how I write specs. It changed how I name things. It changed how I structure folders. A codebase that is easy for a new developer to navigate is also easy for an agent. A codebase that depends on implicit knowledge is dangerous for both.

I used to think of documentation as something I wrote for future humans. Now I write it for future agents too. It forces me to be more explicit about assumptions, which makes the code better for everyone.

A real example from work

At my current job I inherited a legacy purchase order system. It worked, but it was old, and we needed to migrate it to .NET 8 and React without any downtime.

The approach we took was boring and worked: run the new system next to the old one, route orders to the new system by default, fall back to the legacy system if the new one could not handle a specific case (once or twice a day). No big bang. No business interruption.
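The routing itself can be as small as a try-the-new-path-first wrapper. Here is a minimal sketch in TypeScript; the names (`processWithNew`, `processWithLegacy`, `UnsupportedOrderError`) and the "exotic" order type are illustrative, not our actual code:

```typescript
// Route each order to the new system, falling back to legacy
// only when the new system explicitly declines the case.

class UnsupportedOrderError extends Error {}

interface Order { id: string; type: string; }
interface OrderResult { orderId: string; handledBy: "new" | "legacy"; }

async function processWithNew(order: Order): Promise<OrderResult> {
  // Illustrative: the new system rejects order types it cannot handle yet.
  if (order.type === "exotic-legacy-case") {
    throw new UnsupportedOrderError(`unsupported type: ${order.type}`);
  }
  return { orderId: order.id, handledBy: "new" };
}

async function processWithLegacy(order: Order): Promise<OrderResult> {
  return { orderId: order.id, handledBy: "legacy" };
}

async function routeOrder(order: Order): Promise<OrderResult> {
  try {
    return await processWithNew(order);
  } catch (err) {
    if (err instanceof UnsupportedOrderError) {
      // Log the gap so it eventually gets closed, then fall back quietly.
      console.warn(`falling back to legacy for ${order.id}: ${err.message}`);
      return processWithLegacy(order);
    }
    throw err; // real failures should still surface
  }
}
```

The important design choice is that only an *explicit* "I cannot handle this" triggers the fallback. Genuine bugs in the new system still blow up loudly, which is what you want during a migration.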

What made that work with AI agents in the mix: I wrote a very clear document describing every order type, the edge cases the old system handled, and the validation rules we did not want to miss. That document became the spec. The agent used it as working memory. When it generated code, I could review it against the document, not against my own memory of the legacy logic.

The legacy system will be fully decommissioned a couple of months from now, quietly.

A small skill I built that saved us hours

One of the bottlenecks in our team was the gap between "product describes a feature in plain words" and "developer has a spec they can actually implement." That gap was usually filled with meetings, Slack threads, and half-written tickets.

I built a small Claude-powered skill. You give it a product requirement in plain words. It produces a first-pass spec: acceptance criteria, edge cases, a suggested component breakdown, explicit out of scope items, and open questions. The team reviews it, adjusts whatever is wrong or missing, and the final version becomes the source of truth. The agent then uses that spec to write the implementation.

The "out of scope" section ended up being the most important one. It prevents the agent from solving problems we did not ask it to solve, which is the failure mode I see most often when teams adopt these tools.
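The shape of the spec is the stable part. If I were to pin it down as a type, it would look something like this; this is a sketch of the structure, not the skill's actual output format:

```typescript
// Shape of the first-pass spec the skill produces from a plain-words requirement.
interface FeatureSpec {
  title: string;
  acceptanceCriteria: string[];   // testable given/when/then style statements
  edgeCases: string[];            // cases the implementation must handle
  componentBreakdown: string[];   // suggested units of work
  outOfScope: string[];           // what NOT to build; the most important section
  openQuestions: string[];        // blockers to resolve before coding starts
}

// A spec is not ready for implementation until the risky sections
// are filled in and nothing is left unanswered.
function isReadyToImplement(spec: FeatureSpec): boolean {
  return (
    spec.acceptanceCriteria.length > 0 &&
    spec.outOfScope.length > 0 &&
    spec.openQuestions.length === 0
  );
}
```

Treating "open questions are empty" as a hard gate is what turns the review step from a formality into the actual decision point.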

The speed gain is real but not the main thing. The main thing is that code review stopped being about syntax and started being about logic and business correctness. That is a better use of everyone's time.

What CI/CD looks like when you trust the tools

I also rebuilt our deploy pipeline. Before: files copied over SCP, services restarted by hand, no tests, no rollback. Every deploy was a small prayer.

Now: GitHub Actions runs on every pull request. Lint, build, test suite. Main and production branches are protected. On merge, the pipeline builds a Docker image, pushes it to a private registry, and the VM pulls it and restarts the container. Rollback is a single command.
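A stripped-down version of that workflow looks like this. The registry host, image name, and npm scripts are placeholders; a real pipeline also needs registry login via repository secrets and a deploy step appropriate to your host:

```yaml
name: ci
on:
  pull_request:
  push:
    branches: [main]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint
      - run: npm run build
      - run: npm test

  deploy:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    needs: checks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build and push the image; credentials come from repo secrets.
      - run: docker build -t registry.example.com/orders:${{ github.sha }} .
      - run: docker push registry.example.com/orders:${{ github.sha }}
      # Final step (omitted): tell the VM to pull the new tag and
      # restart the container, e.g. over SSH.
```

Tagging images by commit SHA is what makes rollback a single command: pull the previous tag, restart the container.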

Deploy time dropped around 60 percent, but the bigger change is that the team ships more often. They trust the process, so they take smaller risks more frequently, which means fewer big risks.

What I have stopped believing

A few things I used to believe and do not anymore:

That AI tools are a shortcut. They are not. They are a multiplier. If your process is bad, they multiply the badness. If your context is unclear, they amplify the confusion. The work shifts, it does not disappear.

That legacy code is something to escape. Legacy code is where most real work happens. The interesting problem is not "how do I avoid this" but "how do I make this navigable for me, my teammates, and my agents."

That speed is the point. It is not. The point is shipping things that work, with fewer surprises. AI tools help me do that. Faster is a side effect.

Where I am going

I am studying to certify as an Azure Developer Associate this year. I am building a couple of small side projects in public, simple tools I would actually use myself, and I hope a few of them help other people and pay for my coffee. I am also trying to write more, because writing is how I figure out what I actually think.

If any of this was useful, or if you want to argue with me about it, the comments are open.


A quick note before you go

If you want the full Claude prompt I use to generate specs from a plain-text requirement, drop a comment with "PROMPT" or DM me, and I'll send it over.

Also building this in public:

Live demo: http://timer-app-livid-psi.vercel.app
Code: https://github.com/marioAraya/timer-app

Still rough around the edges, but useful already.
If it helps you, a GitHub star goes a long way.
