<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jurre Brandsen</title>
    <description>The latest articles on DEV Community by Jurre Brandsen (@jurre_brandsen).</description>
    <link>https://dev.to/jurre_brandsen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3895703%2F0a845cbb-5f1f-4d83-8d03-5222000ee7cd.jpg</url>
      <title>DEV Community: Jurre Brandsen</title>
      <link>https://dev.to/jurre_brandsen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jurre_brandsen"/>
    <language>en</language>
    <item>
      <title>Every Upgrade Made Sense: How I Over-Engineered My AI Coding Setup</title>
      <dc:creator>Jurre Brandsen</dc:creator>
      <pubDate>Fri, 24 Apr 2026 09:24:24 +0000</pubDate>
      <link>https://dev.to/jurre_brandsen/every-upgrade-made-sense-how-i-over-engineered-my-ai-coding-setup-km8</link>
      <guid>https://dev.to/jurre_brandsen/every-upgrade-made-sense-how-i-over-engineered-my-ai-coding-setup-km8</guid>
      <description>&lt;h2&gt;The Question&lt;/h2&gt;

&lt;p&gt;Last week a colleague and I were presenting the AI coding setup we'd built. The full show: fine-grained instruction files, custom agents with scoped responsibilities, reusable skills, and to top it off, an orchestrator agent tying it all together. It looked impressive.&lt;/p&gt;

&lt;p&gt;Then someone in the audience asked: "That's cool, but do we always need this? If we skip all that and keep it simple, what do we actually lose?"&lt;/p&gt;

&lt;p&gt;Neither of us had a good answer, and the question stuck with me. Consulting means complexity has to be justifiable to someone other than yourself. Impressive isn't an argument. The argument is: what does each piece solve, what does it cost, and when is the complexity worth it?&lt;/p&gt;

&lt;p&gt;I've been using AI coding tools since GitHub Copilot arrived during my studies in 2022, from predictive completions to agent mode. Looking back, I notice a pattern. Every time I hit a limitation, I'd hack around it. Then the tooling would catch up with an official feature. It felt like a natural upgrade, a clear improvement. So I never questioned the complexity I was adding. Each decision made sense on its own. It took someone else asking "why?" for me to realize I never had.&lt;/p&gt;

&lt;h2&gt;Instructions&lt;/h2&gt;

&lt;p&gt;For a long time, every AI session started with explaining the codebase from scratch: the domain language, the architecture conventions, the things the AI shouldn't touch. Everyone on the team had this frustration. We started saving these explanations as Markdown files locally, then as reusable prompts in the &lt;code&gt;.github&lt;/code&gt; folder. In early 2025, when custom instruction files landed and the tooling started loading them automatically, that friction disappeared overnight.&lt;/p&gt;

&lt;p&gt;And then we stuffed everything in. Architecture rationale, domain explanations, testing guidelines. The file grew into an onboarding document. For small tasks, the AI was loading thousands of words of context it didn't need, diluting the signal and nudging the model toward things that didn't apply.&lt;/p&gt;

&lt;p&gt;Looking back, the lesson was obvious: instructions should be lean. Constraints, conventions, the things that are always relevant. If a piece of context only matters sometimes, it doesn't belong in your instruction file.&lt;/p&gt;
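
&lt;p&gt;To make "lean" concrete, here's a sketch of what such a file might look like (for GitHub Copilot this lives at &lt;code&gt;.github/copilot-instructions.md&lt;/code&gt;; the project rules below are invented for illustration, not our actual setup):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Project instructions (illustrative example)

- Domain terms: an "order" is immutable once placed; changes create a new "revision".
- Architecture: application code goes through the domain layer, never the database directly.
- Never modify files under src/generated/ — they are build output.
- Tests use the shared builders in tests/builders/; do not hand-roll fixtures.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A handful of always-relevant constraints. Anything that only applies to some tasks is a candidate for a different mechanism.&lt;/p&gt;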

&lt;p&gt;Instructions are &lt;strong&gt;passive&lt;/strong&gt;. They shape behavior, they don't direct it. That's their strength and their limit. I covered them in depth in my earlier article about &lt;a href="https://machinethoughts.substack.com/p/how-i-levelled-up-my-github-copilot?r=3fopd7" rel="noopener noreferrer"&gt;context-engineering&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At the time, it felt like a solved problem. We'd gone from copy-pasting context to a clean, versionable solution. The improvement was obvious enough that the added project configuration didn't register as a cost.&lt;/p&gt;

&lt;h2&gt;Agents&lt;/h2&gt;

&lt;p&gt;By mid-2025, custom agent definitions arrived across the major tools. Before they existed, we were using prompt files to steer complex workflows, and we leaned on them hard. Our prompt files produced detailed plan documents that pre-coded an entire feature in Markdown before a line of code was written. For a well-scoped feature, that review-before-you-build step was genuinely useful: we could steer the AI before it committed to anything. The problem wasn't the output, or sharing these files with the team. The problem was the mechanism. A prompt file behaves like a script: a fixed sequence of steps, executed the same way every time. It doesn't decide, it runs. The procedure is baked in, so the output scales with the procedure, not with the task.&lt;/p&gt;

&lt;p&gt;Custom agent definitions changed the mechanism. Where a prompt file runs a fixed script, an agent runs a decision loop. You hand it a goal and a set of boundaries (what tools it can use, what it should focus on, what it shouldn't touch), and the agent picks the path within them. The value of the plan-before-you-build step didn't disappear; it stopped being mandatory. An agent can decide whether a plan is warranted and scale its approach to the task.&lt;/p&gt;

&lt;p&gt;Same cycle as before, though. At first we treated agents the way we'd treat human developers (one generalist who could handle anything), which meant agents scoped too broadly and system prompts packed with too much context. Over time, we learned to cut back. The principle is the same as single responsibility in code: separate agents for separate concerns. One agent that plans, one that implements, one that reviews. Each with its own scope, its own allowed tools, its own constraints. I wrote about this architectural view in my earlier &lt;a href="https://machinethoughts.substack.com/p/steering-ai-agents-in-professional?r=3fopd7" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;
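
&lt;p&gt;As an illustration of that single-responsibility split, a reviewer agent might look like this. I'm assuming Claude Code's subagent format here (a Markdown file under &lt;code&gt;.claude/agents/&lt;/code&gt; with YAML frontmatter for name, description, and allowed tools); the scope and tool list are examples, not our actual definitions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;---
# illustrative example
name: reviewer
description: Reviews proposed changes against project conventions. Use after implementation, before committing.
tools: Read, Grep, Glob
---

You review diffs against the project's conventions and report findings.
You do not edit files; flag issues and let the implementer resolve them.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The tool list is the constraint doing the work: a reviewer that can only read and search can't drift into implementing.&lt;/p&gt;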

&lt;p&gt;Again, it felt like a natural step. We had a real problem, the tooling provided a proper solution, and we adopted it. The before-and-after was compelling enough that nobody stopped to ask whether we'd traded one kind of complexity for another.&lt;/p&gt;

&lt;h2&gt;Skills&lt;/h2&gt;

&lt;p&gt;Instructions and agents still left a gap. The knowledge that eventually became skills was scattered across both. Some of it lived in instruction files: testing conventions, mocking rules, our assertion patterns. It did its job when relevant, but it loaded with everything else, every session, even when the task had nothing to do with it. Other pieces sat inside agent definitions, baked into a planner or a reviewer. That worked decently, but the knowledge wasn't really the agent's job. It was there because I had nowhere better to put it.&lt;/p&gt;

&lt;p&gt;So I had a category of knowledge that was valuable, reusable, and specific, but it wasn't quite an instruction and wasn't quite an agent. It needed to load sometimes, on its own terms.&lt;/p&gt;

&lt;p&gt;Skills solved that. A skill is a compact, focused bundle of knowledge or process that loads only when needed, and the AI itself decides when a skill is relevant and pulls it into the session. The testing conventions stopped inflating the instruction file. The review checklist stopped riding along inside the reviewer agent. Each became its own skill, triggered when the AI noticed the work called for it.&lt;/p&gt;
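
&lt;p&gt;In Claude Code terms, a skill is a folder containing a &lt;code&gt;SKILL.md&lt;/code&gt; whose frontmatter description tells the model when to pull it in. A hypothetical testing-conventions skill might look like this (the rules are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;---
# illustrative example
name: testing-conventions
description: Project testing conventions. Use when writing or changing tests.
---

- Mock external services at the HTTP boundary, never internal classes.
- One behavioural assertion per test; name the test after the behaviour.
- Reuse the shared test builders instead of hand-rolled fixtures.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The description is what the AI matches against, so it names the trigger ("writing or changing tests") rather than summarizing the content.&lt;/p&gt;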

&lt;p&gt;That completes the set. Instructions are always loaded. Agents run on a goal. Skills are &lt;strong&gt;just-in-time&lt;/strong&gt;: they load when they're relevant, bringing the right knowledge at the right moment.&lt;/p&gt;

&lt;p&gt;Same cycle as before. Each skill was worth having on its own, but every new one meant changing the setup again.&lt;/p&gt;

&lt;p&gt;The pattern should be familiar by now. Another layer. Another set of files to configure. Another thing my colleagues needed to understand before they could work with the setup.&lt;/p&gt;

&lt;h2&gt;The Choreography&lt;/h2&gt;

&lt;p&gt;When everything composes, the result is genuinely impressive. You invoke a workflow, the right agent picks it up, following the right conventions, with the right context loaded at the right moment. Instructions provide the always-on foundation. Agents handle the execution within their boundaries. Skills inject specialized knowledge when it's relevant. It feels natural. You're not thinking about layers or configuration. It works.&lt;/p&gt;

&lt;p&gt;That's what we presented. And it is cool.&lt;/p&gt;

&lt;p&gt;But it took months of iteration to get there. Most of that iteration was replacing workarounds, because we'd been hacking around limitations for so long that every new feature felt like a replacement for something we already had. Each replacement felt like progress. And it was. But at the presentation, when someone in the audience asked "Do we always need this?" I realized: the choreography was optimized for &lt;em&gt;me&lt;/em&gt;. I understood every piece because I'd built every piece. The question was whether anyone else could.&lt;/p&gt;

&lt;h2&gt;The Uncomfortable Truth&lt;/h2&gt;

&lt;p&gt;Back to the question: "Do we always need this?"&lt;/p&gt;

&lt;p&gt;No.&lt;/p&gt;

&lt;p&gt;For most projects, a good instruction file and a couple of well-scoped agents will get you excellent results. Skills, orchestrator agents, the full choreography: they're powerful, but they're not a prerequisite for getting real value from AI coding tools. The question isn't whether they're good. It's whether your situation needs them yet.&lt;/p&gt;

&lt;p&gt;I fall into this trap myself. Every time a new feature ships, my programmer heart starts racing. I can replace that hacky workaround with something proper. I can make the setup more elegant. And I do, because I want to stay on top of this as a practitioner and as someone who writes and speaks about this topic.&lt;/p&gt;

&lt;p&gt;But as a consultant, I work with two audiences. My fellow consultants are technically strong, and most work with AI tools daily; even they can't always follow the pace of changes I make. When I bring this to client teams, the gap is wider. Not because they lack skill, but because tracking every evolution in AI tooling isn't their job. Their priority is shipping their product.&lt;/p&gt;

&lt;p&gt;That's the uncomfortable truth about customizing AI coding tools. Each upgrade makes sense individually. The improvement is real. But complexity accumulates, and it accumulates invisibly, because every step feels justified. You don't notice you've built something only you can maintain until someone from outside asks why.&lt;/p&gt;

&lt;p&gt;So here's what to do differently: &lt;strong&gt;build a business case before you adopt the next feature.&lt;/strong&gt; Not a formal document. Three honest answers. What problem does this solve that your team actually has today? What do you lose if you skip it, or wait six months? Who else needs to understand it for the capability to survive if you move off the project tomorrow?&lt;/p&gt;

&lt;p&gt;Those three questions force the tradeoff into the open. Wait too long and the gap to the state of the art grows so wide that catching up feels impossible. Adopt every improvement the moment it ships and you need the whole team to carry the knowledge, and in reality, they won't. You end up with a single point of knowledge, and that's its own kind of technical debt.&lt;/p&gt;

&lt;p&gt;The answer isn't to chase every new feature, and it isn't to ignore the evolution either. Find the pace your team can sustain. Adopt what gives you clear value. Skip what doesn't solve a problem you actually have. And whatever you add, make sure more than one person can maintain it. That's how you bring the rest of your organization along instead of leaving them behind.&lt;/p&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;And here's where I have to practice what I preach.&lt;/p&gt;

&lt;p&gt;The next frontier is agent teams: multiple agents working together, coordinating tasks, splitting work, running in parallel. It's already showing up as an experimental feature in tools like Claude Code and GitHub Copilot. I have an orchestrator agent in the setup we presented, which is a step in that direction. And I can feel the pull to go deeper, to have agents delegate to other agents and handle entire workflows autonomously.&lt;/p&gt;

&lt;p&gt;The impact would be real. But so would the complexity. If my current choreography is already difficult for colleagues to follow, what happens when the agents are orchestrating each other?&lt;/p&gt;

&lt;p&gt;So for the first time, I'm asking the question before introducing it to my teams. What problem would agent teams solve that my current setup doesn't? What's the tradeoff in maintainability? And what does it take to bring not only my fellow consultants but also our clients along?&lt;/p&gt;

&lt;p&gt;That doesn't mean I stop experimenting. I'm still exploring agent teams on my own projects, building the knowledge so that when the time is right, I can introduce it with confidence. But introducing it means more than having the technical setup. It means building the business case, creating the training, and making sure the knowledge doesn't live in one person's head. That effort is part of the cost, and it's the part I used to ignore.&lt;/p&gt;

&lt;p&gt;The tools will keep evolving. New capabilities will keep appearing. And each one will feel like an obvious improvement. The discipline I'm still developing isn't knowing how to adopt them. It's knowing when to bring the rest of the team along.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>contextengineering</category>
      <category>githubcopilot</category>
      <category>softwareengineering</category>
    </item>
  </channel>
</rss>
