joietej
I Used 22 Prompts to Plan an Entire MuleSoft-to-.NET Migration. Here's the Playbook.

Last week I sat down to migrate a MuleSoft integration project to .NET 10 Minimal APIs. Instead of spending days writing migration specs, architecture docs, and agent team definitions manually, I paired with Claude to do it in a single conversation.

22 prompts. That's all it took to go from "where do I start?" to a fully structured, audited, ready-to-execute migration toolkit -- complete with scanner agents, phased prompts, integration patterns, and project scaffolding.

Here's exactly how I did it, broken into a repeatable playbook any dev team can follow.


Phase 1 -- Explore & Ground the AI in Your Codebase

Most developers start an AI conversation with a wall of text explaining everything upfront. Don't.

Start broad. Let the AI propose an approach first, then steer it.

My opening prompt was simple:

"What is the best approach to convert MuleSoft to C# Minimal API?"

Claude came back with a solid general strategy -- Strangler Fig pattern, connector mapping tables, a phased approach. Good foundation, but generic. It didn't know my stack.
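To make "connector mapping tables" concrete: this is the shape of artifact Claude proposed. A minimal sketch with illustrative entries -- the MuleSoft connector names are real, but the .NET targets on the right are my assumptions, not a canonical mapping:

```csharp
using System;
using System.Collections.Generic;

// Illustrative MuleSoft-connector -> .NET mapping (example entries, not exhaustive)
var connectorMap = new Dictionary<string, string>
{
    ["http:listener"]       = "ASP.NET Core Minimal API endpoint (app.MapGet / app.MapPost)",
    ["http:request"]        = "HttpClient registered via IHttpClientFactory",
    ["db:stored-procedure"] = "Microsoft.Data.SqlClient command or EF Core raw SQL",
    ["scheduler"]           = "BackgroundService with a PeriodicTimer",
    ["vm:publish"]          = "System.Threading.Channels in-process queue",
};

foreach (var (muleConnector, dotnetTarget) in connectorMap)
    Console.WriteLine($"{muleConnector,-22} -> {dotnetTarget}");
```

The table doubles as a checklist during migration: every connector type the scanner finds either has an entry here or needs a decision.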

So I pointed it at my actual code:

"Check my project on GitHub for the target architecture"

It pulled the wrong repo at first. I corrected it:

"That's not the right repo -- try this URL instead"

This is the key insight: the AI produces dramatically better output when it's grounded in your real project, not a hypothetical one. Once it read my solution structure, EF Core setup, Polly config, and Azure AD auth -- everything it generated afterward was contextually accurate.

Takeaway: Don't explain your architecture in prose. Point the AI at your code and let it read.


Phase 2 -- Request Deliverables That Match Your Tooling

Once the AI understands your codebase, ask for output in the format you'll actually use. Not a blog post. Not a summary. The actual artifact.

"Can you create detailed agent team definitions I can use with Claude Code CLI?"

Claude generated a full set of agent team roles, prompts, and coordination rules -- tailored specifically for Claude Code's agent teams feature. But when I reviewed the output, I spotted a gap:

"We're missing a step -- we need to scan and inventory the source project before migrating anything"

This is where it gets interesting. The AI didn't think of a Phase 0 scanner because I hadn't mentioned one. But the moment I flagged the gap, it built a comprehensive 5-agent scanner team that parses MuleSoft XML flows, catalogs DataWeave transforms, maps connectors to NuGet packages, and generates a phased migration plan.
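Under the hood, "parses MuleSoft XML flows" is mostly namespace-aware XML traversal. A minimal sketch of that Phase 0 inventory step -- the sample XML here is invented, but the Mule core namespace URI is the real one Mule configs declare:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Mule configs declare flows in the core Mule namespace
XNamespace mule = "http://www.mulesoft.org/schema/mule/core";

// Invented sample -- a real scanner would load each *.xml under src/main/mule
var doc = XDocument.Parse("""
    <mule xmlns="http://www.mulesoft.org/schema/mule/core">
      <flow name="get-users-flow"/>
      <flow name="sync-documents-flow"/>
    </mule>
    """);

// Catalog every flow by name -- the starting point for the migration inventory
var flowNames = doc.Descendants(mule + "flow")
                   .Select(f => (string)f.Attribute("name"))
                   .ToList();

Console.WriteLine($"Found {flowNames.Count} flows: {string.Join(", ", flowNames)}");
```

The same traversal pattern extends to cataloging DataWeave transforms and connector elements -- different element names, same namespace-qualified queries.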

Then I fed it domain knowledge it couldn't possibly discover on its own:

"I already know the integrations we use -- Azure AD auth, Key Vault for secrets, Graph API for user management, Box for documents, and SQL Server stored procedures"

With this, it pre-seeded the scanner with ready-to-use C# implementation patterns for each integration -- complete with typed clients, DI registration, and config examples. No hallucinated APIs. No outdated SDK calls.
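For a sense of what "typed clients, DI registration" looks like, here's a sketch for the Box integration. The class name and base URL handling are my assumptions, and the commented registration assumes the Microsoft.Extensions.Http.Resilience package (which provides AddStandardResilienceHandler on .NET 8+):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical typed client for the Box documents integration
public sealed class BoxDocumentClient
{
    private readonly HttpClient _http;

    public BoxDocumentClient(HttpClient http)
    {
        _http = http;
        // Only set a default if DI configuration didn't already supply one
        _http.BaseAddress ??= new Uri("https://api.box.com/2.0/");
    }

    public Task<HttpResponseMessage> GetFileInfoAsync(string fileId) =>
        _http.GetAsync($"files/{Uri.EscapeDataString(fileId)}");
}

// DI registration in Program.cs (assumes Microsoft.Extensions.Http.Resilience):
//   builder.Services.AddHttpClient<BoxDocumentClient>()
//          .AddStandardResilienceHandler(); // retry + circuit breaker + timeout defaults
```

The point of pre-seeding the scanner with patterns like this is exactly what the post describes: the agents emit code in your house style instead of inventing SDK calls.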

Takeaway: The AI can't read your production systems. Feed it what you know, and it'll build on top of that knowledge rather than guessing.


Phase 3 -- Challenge Architecture Decisions

Here's where most people go wrong with AI: they accept the first answer. Don't. Debate it.

I initially suggested:

"I think we should copy the source code into a docs folder inside the project"

Then immediately challenged my own idea:

"Actually no -- the source should stay external and never be modified. Ask the user for the path at runtime instead"

This back-and-forth -- which took maybe 30 seconds -- resulted in a fundamentally better design. The MuleSoft project stays read-only in its original location. The scanner asks for the path. No file duplication. No accidental modifications. Clean separation.

The AI adapted instantly. It rewrote the Phase 0 init prompt to ask for the path, updated all agent team definitions to reference MULE_SOURCE_PATH, and added validation for the directory structure.

Takeaway: The best architecture emerges from debate, not from prompting. Push back on the AI. Push back on yourself. The AI is fast enough to restructure everything in seconds.


Phase 4 -- Evolve the Design as Context Changes

Real projects don't stand still while you plan. Midway through my session, the project structure changed.

I was building the template in a parallel Claude session and realized:

"This is a reusable template, not the actual project -- we need a scaffolding step that renames everything"

Claude immediately generated init scripts for both Bash and PowerShell. But then my other session handled it differently, using the dotnet new template engine's sourceName mechanism:

"Another session is handling the template config already, so we don't need the init script anymore -- drop it"

Most developers would forget to tell the AI about this. They'd end up with duplicate work, conflicting approaches, and docs that reference deleted files. Instead, one prompt -- "drop it" -- and Claude removed the scripts, updated all cross-references, and simplified the workflow.
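For context, the sourceName mechanism that made the init scripts redundant is a template.json entry like this (the identity and shortName values here are invented). At scaffold time, dotnet new replaces every occurrence of the sourceName string -- in file contents, file names, and folder names -- with whatever you pass to -n:

```json
{
  "$schema": "http://json.schemastore.org/template",
  "identity": "MigrationToolkit.Template",
  "name": "MuleSoft-to-.NET migration toolkit",
  "shortName": "mule-migrate",
  "sourceName": "TemplateProject",
  "preferNameDirectory": true
}
```

So a one-time dotnet new install . followed by dotnet new mule-migrate -n MyMigration does the renaming declaratively -- no scripts to maintain.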

Takeaway: When something else handles a concern, tell the AI to remove work -- not just add more. AI-generated docs with stale references are worse than no docs.


Phase 5 -- Quality Gate Before Shipping

AI makes consistency errors. Across 6 interconnected files, there will be naming leaks, broken cross-references, and stale paths. Always audit before shipping.

I caught the first issue:

"The generated config still has hardcoded project name references -- it needs to use generic placeholders"

Then the repo structure changed:

"These files belong in a separate toolkit repo -- here's the folder structure, reorganize everything to fit"

And finally, the most important prompt of the entire session:

"Do a deep review of every file -- check cross-references, path consistency, typos, and logical errors"

Claude ran a 12-point audit across all files. It found and fixed: inconsistent placeholder names ({Name} vs {ProjectName}), wrong MuleSoft directory paths in quick-reference prompts, a typo in a JSON config (EnterprisId), stale "copied to" language that should have said "accessible at", and old file names in cross-references.

Without that final audit, I would have shipped docs that pointed to non-existent files and used incorrect MuleSoft paths. The audit took 2 minutes. It would have cost hours of debugging later.

Takeaway: Never ship AI-generated output without a final audit pass. Ask the AI to check its own work -- it's surprisingly good at catching its own mistakes when you explicitly ask.


The Playbook

Here's the pattern, distilled:

  1. Start broad -- let the AI propose, don't over-specify upfront
  2. Ground it in real code -- point at your actual repo, not a hypothetical one
  3. Request usable artifacts -- ask for the format your tooling actually consumes
  4. Feed domain knowledge -- tell it what you know about your integrations, constraints, and systems
  5. Identify gaps -- review output and flag what's missing
  6. Debate architecture -- push back on assumptions, including your own
  7. Evolve the plan -- when context changes, update the AI and remove stale work
  8. Audit everything -- demand a thorough cross-file review before shipping

22 prompts. One conversation. A complete migration toolkit with scanner agents, phased migration prompts, integration patterns, setup guides, and project scaffolding -- all verified, cross-referenced, and ready to use.

The AI didn't replace my judgment. It amplified it. Every architectural decision was mine. The AI just made it possible to execute on those decisions in hours instead of days.
