<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Brady Stroud</title>
    <description>The latest articles on DEV Community by Brady Stroud (@brady_stroud_402d6c121a83).</description>
    <link>https://dev.to/brady_stroud_402d6c121a83</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3805188%2F68ec3180-87a9-47ff-b851-df497d6fe38e.jpg</url>
      <title>DEV Community: Brady Stroud</title>
      <link>https://dev.to/brady_stroud_402d6c121a83</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/brady_stroud_402d6c121a83"/>
    <language>en</language>
    <item>
      <title>Tracking Copilot Usage Without an API (Raycast + Month Progress)</title>
      <dc:creator>Brady Stroud</dc:creator>
      <pubDate>Wed, 04 Mar 2026 11:09:21 +0000</pubDate>
      <link>https://dev.to/brady_stroud_402d6c121a83/tracking-copilot-usage-without-an-api-raycast-month-progress-199a</link>
      <guid>https://dev.to/brady_stroud_402d6c121a83/tracking-copilot-usage-without-an-api-raycast-month-progress-199a</guid>
      <description>&lt;p&gt;GitHub Copilot shows usage only in the UI. No API, no programmatic access, just a bar chart buried in settings. Easy to burn through your monthly quota by day 13 when you can't quickly check where you stand.&lt;/p&gt;

&lt;p&gt;I needed a fast, low-friction way to sanity-check my usage without opening browsers or scraping brittle HTML. Here's how I solved it with Raycast Script Commands and a mental model based on calendar progress.&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;GitHub Copilot's limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Usage data lives only in the UI at &lt;code&gt;github.com/settings/copilot/features&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;No public API for remaining credits or usage percentage&lt;/li&gt;
&lt;li&gt;Easy to burn quota early in the month without realizing it&lt;/li&gt;
&lt;li&gt;Checking requires: open browser → navigate to settings → find Copilot → read chart&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What I needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast feedback (1-2 keystrokes)&lt;/li&gt;
&lt;li&gt;Zero maintenance (no scraping, no auth tokens)&lt;/li&gt;
&lt;li&gt;Mental model: "I'm 42% through the month → usage bar should roughly match"&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Solution: Raycast Script Commands&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.raycast.com/" rel="noopener noreferrer"&gt;Raycast&lt;/a&gt; is a macOS launcher (like Spotlight or Alfred). Script Commands let you run bash scripts with hotkeys.&lt;/p&gt;

&lt;p&gt;My approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Calculate % through the current month (calendar days)&lt;/li&gt;
&lt;li&gt;Show a macOS notification with the percentage&lt;/li&gt;
&lt;li&gt;Auto-open the Copilot usage page in the browser&lt;/li&gt;
&lt;li&gt;Quick visual sanity check: "42% through month → usage should be ~40-45%"&lt;/li&gt;
&lt;/ol&gt;
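
&lt;p&gt;The calendar math in step 1 is simple enough to sketch here. This is an illustrative stand-in for the real script (which lives in the repo linked below); the variable names and output format are mine, not Raycast's:&lt;/p&gt;

```shell
# Hypothetical sketch of the month-progress calculation; the real
# Script Command in the linked repo may differ in detail.
day=$((10#$(date +%d)))    # strip leading zero so bash doesn't read it as octal
month=$((10#$(date +%m)))
year=$(date +%Y)

# Days in the current month, with the Gregorian leap-year rule.
leap=0
if [ $((year % 4)) -eq 0 ]; then leap=1; fi
if [ $((year % 100)) -eq 0 ]; then leap=0; fi
if [ $((year % 400)) -eq 0 ]; then leap=1; fi
case $month in
  4|6|9|11) days_in_month=30 ;;
  2)        days_in_month=$((28 + leap)) ;;
  *)        days_in_month=31 ;;
esac

pct=$(awk -v d="$day" -v t="$days_in_month" 'BEGIN { printf "%.1f", d / t * 100 }')
echo "Month Progress: ${pct}% (day ${day}/${days_in_month})"

# On macOS, surface it as a notification and open the usage page:
# osascript -e "display notification \"Month Progress: ${pct}%\" with title \"Copilot\""
# open "https://github.com/settings/copilot/features"
```

&lt;p&gt;Note that a real Raycast Script Command also needs Raycast's metadata comment header at the top of the file, which this sketch omits.&lt;/p&gt;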

&lt;p&gt;&lt;strong&gt;Why Raycast instead of a CLI alias?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always available (global hotkey)&lt;/li&gt;
&lt;li&gt;Notifications &amp;gt; terminal output&lt;/li&gt;
&lt;li&gt;Easy to trigger once configured correctly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Implementation&lt;/h2&gt;

&lt;p&gt;The script is available on GitHub: &lt;a href="https://github.com/bradystroud/raycast-scripts/blob/main/copilot-month-%25-check.sh" rel="noopener noreferrer"&gt;copilot-month-%-check.sh&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important Raycast setup details&lt;/strong&gt; (this is where things can feel broken):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In Raycast, set your Script Directory in &lt;strong&gt;Raycast Settings → Extensions → Script Commands → Script Directory&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Use Raycast's &lt;strong&gt;Create Script Command&lt;/strong&gt; action&lt;/li&gt;
&lt;li&gt;Open the file Raycast creates and paste in the script content&lt;/li&gt;
&lt;li&gt;Save and run &lt;strong&gt;Reload Script Directory&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Raycast is supposed to discover scripts from your Script Directory, and Reload Script Directory should refresh them. In practice, manually copying a script file into the folder often does not get picked up reliably. Creating the command via Raycast first, then pasting content into that generated file, consistently works.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;UX Flow&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Hit hotkey (I use &lt;code&gt;Cmd+Space&lt;/code&gt; → type "copilot")&lt;/li&gt;
&lt;li&gt;Notification appears: &lt;strong&gt;"Month Progress: 42.0% (day 13/31)"&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Browser opens to &lt;code&gt;github.com/settings/copilot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Quick sanity check: Usage bar at ~45%? I need to pace myself...&lt;/li&gt;
&lt;li&gt;Done. Close the tab, back to work.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>githubcopilot</category>
      <category>productivity</category>
      <category>github</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Stop Explaining Bugs to AI - Show It the Bug</title>
      <dc:creator>Brady Stroud</dc:creator>
      <pubDate>Wed, 04 Mar 2026 06:11:05 +0000</pubDate>
      <link>https://dev.to/brady_stroud_402d6c121a83/stop-explaining-bugs-to-ai-show-it-the-bug-a45</link>
      <guid>https://dev.to/brady_stroud_402d6c121a83/stop-explaining-bugs-to-ai-show-it-the-bug-a45</guid>
      <description>&lt;p&gt;I found a better way to debug bugs with AI.&lt;/p&gt;

&lt;p&gt;AI is brilliant at code-level reasoning, but it can be painful to provide enough context about a bug to get useful output.&lt;br&gt;
So if you're trying to debug by describing the bug in chat, you can waste a lot of time.&lt;/p&gt;

&lt;p&gt;A better approach is to stop explaining and start showing.&lt;/p&gt;
&lt;h2&gt;The Pattern That Works&lt;/h2&gt;

&lt;p&gt;When the AI can't reliably drive your UI, do this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Tell the AI to add targeted logs around the action path.
It knows what to log and where to put it.&lt;/li&gt;
&lt;li&gt;Reproduce the bug yourself in the app.&lt;/li&gt;
&lt;li&gt;Write the logs to a file.&lt;/li&gt;
&lt;li&gt;Let the AI read the file and debug from facts, not guesses.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This works well; lately I've fixed heaps of bugs this way.&lt;/p&gt;

&lt;p&gt;My prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;There is a bug with xxx feature where yyy happens.
1. Add logs in that area and log to a local file.
2. Ask me to reproduce the bug a few times.
3. Read the logs and find the bug.
4. Fix the bug, then I'll test again.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;UI issues are usually timing + state + sequence problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Clicked save twice in 400ms"&lt;/li&gt;
&lt;li&gt;"Request B completed before Request A"&lt;/li&gt;
&lt;li&gt;"State changed after unmount"&lt;/li&gt;
&lt;li&gt;"Token refreshed mid-flight"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are hard to communicate in plain English, but obvious in logs.&lt;/p&gt;
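
&lt;p&gt;As a fabricated illustration (the file path, timestamps, and log format are all invented, not from a real app), here is what the double-click case looks like once it's in a file:&lt;/p&gt;

```shell
# Fabricated event trail showing the "Clicked save twice in 400ms" race:
# obvious when read top to bottom, hard to describe accurately in chat.
log=/tmp/ui-debug.log
printf '%s\n' \
  '12:00:01.100 click save-button   -- request A dispatched' \
  '12:00:01.480 click save-button   -- request B dispatched, +380ms' \
  '12:00:01.650 http  response B ok -- state updated from B' \
  '12:00:01.900 http  response A ok -- state overwritten by stale A' \
  > "$log"

# Two clicks inside one debounce window; the stale response lands last.
grep -c 'click save-button' "$log"   # prints 2
```

&lt;p&gt;Pointing the AI at this file turns "it sometimes double-saves" into four timestamped facts it can reason over.&lt;/p&gt;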

&lt;p&gt;When you give AI a real event trail, it can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reconstruct the execution path,&lt;/li&gt;
&lt;li&gt;identify ordering/race issues,&lt;/li&gt;
&lt;li&gt;map symptoms to likely code paths,&lt;/li&gt;
&lt;li&gt;and suggest fixes grounded in actual evidence.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Can AI Just Use the UI?&lt;/h2&gt;

&lt;p&gt;Sometimes, yes.&lt;/p&gt;

&lt;p&gt;With MCP-based tooling, you can give agents browser automation abilities.&lt;br&gt;
For example, Playwright MCP exposes browser actions to AI agents.&lt;/p&gt;

&lt;p&gt;So UI-driving is possible, but even with good tools, it's not always the fastest path for tricky bugs.&lt;/p&gt;

&lt;p&gt;And if you're using Aspire, there is a built-in MCP server that can give agents rich app context:&lt;br&gt;
&lt;a href="https://aspire.dev/get-started/configure-mcp/" rel="noopener noreferrer"&gt;https://aspire.dev/get-started/configure-mcp/&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;aspire mcp init&lt;/code&gt; can configure MCP integration for supported assistants.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Make Large-Scale Refactors Easy with Parallel AI Agents</title>
      <dc:creator>Brady Stroud</dc:creator>
      <pubDate>Wed, 04 Mar 2026 06:09:19 +0000</pubDate>
      <link>https://dev.to/brady_stroud_402d6c121a83/make-large-scale-refactors-easy-with-parallel-ai-agents-1nmp</link>
      <guid>https://dev.to/brady_stroud_402d6c121a83/make-large-scale-refactors-easy-with-parallel-ai-agents-1nmp</guid>
      <description>&lt;p&gt;Large-scale refactors used to be a nightmare. The kind of work that makes you question your career choices. But I just completed a cross-cutting refactor across a modular monolith using 15 AI agents working in parallel, and it changed how I thought about code maintenance.&lt;/p&gt;

&lt;p&gt;Here's how I turned a week-long slog into a coordinated, parallel AI workflow.&lt;/p&gt;




&lt;h2&gt;The Pain&lt;/h2&gt;

&lt;p&gt;Here's a real example from a recent project. I had a modular monolith with vertical slice architecture - Timesheets, Projects, Billing, Notifications - all nicely separated. Clean boundaries. Good separation of concerns.&lt;/p&gt;

&lt;p&gt;Then I noticed an inconsistency: &lt;code&gt;TimeEntry.Stop()&lt;/code&gt; was throwing exceptions for error cases, while &lt;code&gt;FlowTask&lt;/code&gt; methods were using the &lt;code&gt;Result&amp;lt;T&amp;gt;&lt;/code&gt; pattern. Same codebase, two different error handling approaches.&lt;/p&gt;

&lt;p&gt;The fix was straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Convert &lt;code&gt;TimeEntry.Stop()&lt;/code&gt; from exception-based to &lt;code&gt;Result&amp;lt;Success&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Update &lt;code&gt;StopTimerEndpoint.cs&lt;/code&gt; to handle &lt;code&gt;Result&lt;/code&gt; instead of try-catch&lt;/li&gt;
&lt;li&gt;Do the same for Projects, Billing, and Notifications modules&lt;/li&gt;
&lt;li&gt;Update all the tests&lt;/li&gt;
&lt;li&gt;Add architecture tests to prevent future inconsistencies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;56 files across 12 modules. Mechanical, repetitive work.&lt;/p&gt;

&lt;p&gt;Let's be honest - this refactor isn't shipping features. It's not fixing a critical bug. It's not making customers happy. When a PO is choosing between "standardize error handling across all domain entities" and "ship the feature the sales team is waiting for", the feature wins every time.&lt;/p&gt;

&lt;p&gt;So the ticket sits in the backlog. Months pass. The inconsistency spreads. New code follows whatever pattern is already in that module. The cognitive load increases.&lt;/p&gt;

&lt;p&gt;Eventually, the tech debt compounds to the point where it actually slows down feature development. But by then, the refactor is even bigger. Even more daunting. Even easier to deprioritize.&lt;/p&gt;

&lt;p&gt;This is how codebases rot. Not because developers are lazy, but because the economics of manual refactoring can make it impossible to justify against feature work.&lt;/p&gt;

&lt;p&gt;Sure, for simple find-and-replace operations, regex in your IDE works great. Renaming a variable across files? Easy. But the moment you need context-aware changes, you're back to manual work.&lt;/p&gt;




&lt;h2&gt;AI Makes It Possible (But Not Yet Fast)&lt;/h2&gt;

&lt;p&gt;AI coding assistants changed the game here. With tools like &lt;a href="https://opencode.ai" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt;, you can actually explain the refactor once and let the AI handle the mechanical work. Unlike regex, AI understands context, follows complex rules, and adapts to variations in your code.&lt;/p&gt;

&lt;p&gt;But there's a catch: it's still linear.&lt;/p&gt;

&lt;p&gt;One AI session means one agent working through your codebase file by file. It's faster than doing it manually, but for a large refactor, you're still watching the AI work for hours.&lt;/p&gt;

&lt;p&gt;I tried this approach first. Opened OpenCode, explained my refactor plan, and watched it work through the checklist one by one.&lt;/p&gt;

&lt;p&gt;After 45 minutes, it had completed Phase 1 and Phase 2 - the error definitions and domain entity updates.&lt;/p&gt;

&lt;p&gt;Better than manual, sure. But I had 12 phases to go. I was looking at a 4-5 hour session. And that's assuming nothing went wrong.&lt;/p&gt;

&lt;p&gt;There had to be a better way...&lt;/p&gt;




&lt;h2&gt;Parallel Sessions: Multiple Windows, Same Brain&lt;/h2&gt;

&lt;p&gt;The first breakthrough was obvious in retrospect: why use one AI session when I could use multiple?&lt;/p&gt;

&lt;p&gt;I got AI to help me create a detailed refactor plan - a single source of truth with clear transformation rules and a phased checklist.&lt;/p&gt;

&lt;p&gt;Instead of generic "work on the User module", I had specific phases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Phase 3:&lt;/strong&gt; Timesheets Endpoints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 4:&lt;/strong&gt; Projects Endpoints
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 5:&lt;/strong&gt; Billing Endpoints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 6:&lt;/strong&gt; Notifications Endpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each phase was independent. An agent working on Timesheets endpoints wouldn't conflict with one working on Billing endpoints because they touched completely different files.&lt;/p&gt;

&lt;p&gt;Here's a snippet from my &lt;code&gt;docs/ai-tasks/refactor-plan.md&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Implementation Checklist&lt;/span&gt;

&lt;span class="gu"&gt;### Phase 3: Endpoint Layer - Timesheets Feature&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; 🔴 src/WebApi/Features/Timesheets/Endpoints/StopTimerEndpoint.cs (Remove try-catch)
&lt;span class="p"&gt;-&lt;/span&gt; 🔴 src/WebApi/Features/Timesheets/Endpoints/StartTimerEndpoint.cs (Remove try-catch)
&lt;span class="p"&gt;-&lt;/span&gt; 🔴 src/WebApi/Features/Timesheets/Endpoints/GetActiveTimerEndpoint.cs (Update error handling)

&lt;span class="gu"&gt;### Phase 4: Endpoint Layer - Projects Feature&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; 🔴 src/WebApi/Features/Projects/Endpoints/ArchiveProjectEndpoint.cs (Remove try-catch)
&lt;span class="p"&gt;-&lt;/span&gt; 🔴 src/WebApi/Features/Projects/Endpoints/AddProjectMemberEndpoint.cs (Remove try-catch)

&lt;span class="gu"&gt;### Phase 5: Endpoint Layer - Billing Feature&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; 🔴 src/WebApi/Features/Billing/Endpoints/MarkInvoiceAsPaidEndpoint.cs (Remove try-catch)
&lt;span class="p"&gt;-&lt;/span&gt; 🔴 src/WebApi/Features/Billing/Endpoints/VoidInvoiceEndpoint.cs (Remove try-catch)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I opened three separate OpenCode sessions, each in a different terminal window:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Session 1:&lt;/strong&gt; "Complete Phase 3 and Phase 7 (Timesheets) following &lt;code&gt;/docs/ai-tasks/refactor-plan.md&lt;/code&gt;"
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session 2:&lt;/strong&gt; "Complete Phase 4 and Phase 9 (Projects) following &lt;code&gt;/docs/ai-tasks/refactor-plan.md&lt;/code&gt;"
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session 3:&lt;/strong&gt; "Complete Phase 5 and Phase 10 (Billing) following &lt;code&gt;/docs/ai-tasks/refactor-plan.md&lt;/code&gt;"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All three agents read the same plan. All three understood the same transformation rules (exception-based → Result). All three worked independently on their assigned phases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The speed improvement was immediate.&lt;/strong&gt; What took 45 minutes with one agent now took 15 minutes with three.&lt;/p&gt;

&lt;p&gt;And the checklist prevented coordination issues. Because each phase had explicit file paths, agents couldn't accidentally work on the same files. The markdown file acted as a distributed lock - when Session 1 marked Phase 3 items as 🟢 Done, Session 2 knew to stay in its lane.&lt;/p&gt;
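
&lt;p&gt;The status flip the agents perform is just a line edit on the shared markdown file; a hypothetical equivalent done by hand (the plan path here is invented for the sketch):&lt;/p&gt;

```shell
# The "distributed lock" in miniature: an agent claims its files by
# flipping the status marker in the shared plan (path is illustrative).
plan=/tmp/refactor-plan.md
printf '%s\n' '- 🔴 src/WebApi/Features/Timesheets/Endpoints/StopTimerEndpoint.cs (Remove try-catch)' > "$plan"

# Claim the item: not started (🔴) becomes in progress (🟠).
sed -i.bak 's/🔴/🟠/' "$plan"
cat "$plan"
```

&lt;p&gt;In practice the agents make this edit themselves; the point is that the shared file, not any runtime mechanism, is what keeps sessions out of each other's lanes.&lt;/p&gt;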

&lt;p&gt;&lt;a href="/uploads/easy-ai-refactors/vscode-refactor-plan-edits.png" class="article-body-image-wrapper"&gt;&lt;img src="/uploads/easy-ai-refactors/vscode-refactor-plan-edits.png" alt="Refactor plan showing traffic light status indicators"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Figure: Refactor plan in progress with checklist - the agents are keeping track of what's done, in progress, and not started&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;Sub-Agents Changed the Game&lt;/h2&gt;

&lt;p&gt;This is where &lt;a href="https://opencode.ai/docs/agents/" rel="noopener noreferrer"&gt;OpenCode's sub-agent system&lt;/a&gt; improved the process.&lt;/p&gt;

&lt;p&gt;Instead of manually telling each session what to do, I encoded the entire refactor process into a custom sub-agent definition. Think of it like writing a repeatable script that an AI can execute autonomously.&lt;/p&gt;

&lt;p&gt;I created &lt;code&gt;.opencode/agents/result-refactor.md&lt;/code&gt; with a specialized refactor agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Converts exception-based error handling to Result pattern&lt;/span&gt;
&lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;subagent&lt;/span&gt;
&lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;anthropic/claude-sonnet-4.5&lt;/span&gt;
&lt;span class="na"&gt;temperature&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.1&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

You are an expert at refactoring error handling patterns in C# codebases.

&lt;span class="gs"&gt;**Your task:**&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Read &lt;span class="sb"&gt;`/docs/refactor-plan.md`&lt;/span&gt; for transformation rules and checklist
&lt;span class="p"&gt;-&lt;/span&gt; Identify your assigned phase from the checklist
&lt;span class="p"&gt;-&lt;/span&gt; Mark those files as "in progress" (🔴 → 🟠)
&lt;span class="p"&gt;-&lt;/span&gt; Apply transformations systematically:
&lt;span class="p"&gt;  1.&lt;/span&gt; Update domain entity methods to return Result&lt;span class="nt"&gt;&amp;lt;Success&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;  2.&lt;/span&gt; Replace throw statements with Error returns
&lt;span class="p"&gt;  3.&lt;/span&gt; Update endpoint try-catch blocks to if (result.IsError) checks
&lt;span class="p"&gt;  4.&lt;/span&gt; Update tests to assert on Result results
&lt;span class="p"&gt;-&lt;/span&gt; Don't try run tests  &lt;span class="c"&gt;&amp;lt;!-- This creates problems when multiple agents run simultaneously --&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Mark checklist items as complete (🟠 → 🟢)
&lt;span class="p"&gt;-&lt;/span&gt; Report any conflicts or edge cases

Focus on maintaining consistency across all features.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And included the actual code transformations in the refactor plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// BEFORE: Exception-based ❌&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;EndTime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HasValue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;InvalidOperationException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Time entry already stopped"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;EndTime&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DateTime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UtcNow&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// AFTER: Result pattern ✅&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Success&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;Stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;EndTime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HasValue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;TimeEntryErrors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AlreadyStopped&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="n"&gt;EndTime&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DateTime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UtcNow&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Success&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now instead of explaining the refactor to each session, I could just say:&lt;/p&gt;

&lt;p&gt;"@result-refactor Phase 3"&lt;/p&gt;

&lt;p&gt;The agent would:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read the refactor plan&lt;/li&gt;
&lt;li&gt;Understand the rules&lt;/li&gt;
&lt;li&gt;Apply the changes systematically&lt;/li&gt;
&lt;li&gt;Mark the checklist&lt;/li&gt;
&lt;li&gt;Report back&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But here's where it gets wild: sub-agents can spawn more sub-agents.&lt;/p&gt;




&lt;h2&gt;The Final Setup: Distributed AI Orchestration&lt;/h2&gt;

&lt;p&gt;I added a line to my sub-agent definition instructing it to spawn additional sub-agents for each file, or group of files, in its assigned phase.&lt;br&gt;
With some modules having 15-20 files to update, working in parallel saved heaps of time.&lt;/p&gt;

&lt;p&gt;My final workflow looked like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Get AI to write a detailed refactor plan in &lt;code&gt;/docs/ai-tasks/refactor-plan.md&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; Create the sub-agent definition in &lt;code&gt;.opencode/agents/result-refactor.md&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Step 3:&lt;/strong&gt; Open a few OpenCode sessions in parallel&lt;br&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt; In each session, invoke the sub-agent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Session 1: "/result-refactor Phase 3 and Phase 8 (Timesheets)"&lt;/li&gt;
&lt;li&gt;Session 2: "/result-refactor Phase 4 and Phase 9 (Projects)"
&lt;/li&gt;
&lt;li&gt;Session 3: "/result-refactor Phase 5 and Phase 10 (Billing)"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Watch the magic happen&lt;/p&gt;

&lt;p&gt;Each of my 3 main sessions spawned 3-5 sub-agents to handle their assigned modules. Those sub-agents worked in parallel, all following the same plan, all updating the same checklist.&lt;/p&gt;

&lt;p&gt;I effectively had ~15 AI agents working on the same refactor simultaneously.&lt;/p&gt;

&lt;p&gt;The entire refactor completed in under 20 minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="/uploads/easy-ai-refactors/open-code-subagents.png" class="article-body-image-wrapper"&gt;&lt;img src="/uploads/easy-ai-refactors/open-code-subagents.png" alt="Three OpenCode sessions with multiple sub-agents working in parallel"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Figure: Three OpenCode sessions each spawning multiple sub-agents to work in parallel on different phases of the refactor.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;What This Unlocks&lt;/h2&gt;

&lt;p&gt;This isn't just about speed. This fundamentally changes what kinds of work are economically viable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture changes become cheap.&lt;/strong&gt; That refactor you've been putting off for months because "it would touch too many files"? Now it's a 20-minute task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Massive refactors become normal.&lt;/strong&gt; Want to rename core abstractions? Update your error handling pattern everywhere? Switch from one state management approach to another? Go for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developers become orchestrators.&lt;/strong&gt; Your job shifts from manually updating files to designing the plan, encoding the rules, and coordinating the agents.&lt;/p&gt;

&lt;p&gt;You're no longer writing code. You're conducting an orchestra of AI agents who write code for you.&lt;/p&gt;




&lt;h2&gt;Warnings&lt;/h2&gt;

&lt;p&gt;Before you rush off to parallelize everything, some important caveats:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This burns credits fast.&lt;/strong&gt; Running 15 AI agents simultaneously is not cheap. Budget accordingly. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You need strong plans.&lt;/strong&gt; The quality of your refactor plan directly determines the quality of the output. Vague instructions will produce inconsistent results across agents. Be specific and include examples.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review still matters.&lt;/strong&gt; AI agents make mistakes. When you have 15 of them working simultaneously, mistakes happen really fast. Check the changes carefully; if you see repeated mistakes, adjust your plan and rerun the affected phases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coordination takes work.&lt;/strong&gt; You need to think carefully about module boundaries, shared interfaces, and potential conflicts. The better your architecture already is, the better this approach works.&lt;/p&gt;




&lt;h2&gt;The Takeaway&lt;/h2&gt;

&lt;p&gt;The future of software development is about learning how to orchestrate AI agents effectively. This includes context engineering and parallel execution.&lt;/p&gt;

&lt;p&gt;When you shift your thinking from "how do I get an AI to help me code" to "how can I more efficiently execute a plan with multiple AIs", everything changes.&lt;/p&gt;

&lt;p&gt;The refactor that used to take a week now takes 20 minutes. Not because the AI is smarter, but because you stopped thinking in serial and started thinking in parallel.&lt;/p&gt;




&lt;p&gt;Have you tried parallel AI refactors? Hit me up on Twitter/X or LinkedIn - I'd love to hear how it went.&lt;/p&gt;

</description>
      <category>aidev</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
