<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Giovanni Rufino (Geo)</title>
    <description>The latest articles on DEV Community by Giovanni Rufino (Geo) (@giovanni_rufinogeo_77b).</description>
    <link>https://dev.to/giovanni_rufinogeo_77b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3792451%2Faa57ef02-3626-48f8-ba4b-c16cce33c2e5.jpg</url>
      <title>DEV Community: Giovanni Rufino (Geo)</title>
      <link>https://dev.to/giovanni_rufinogeo_77b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/giovanni_rufinogeo_77b"/>
    <language>en</language>
    <item>
      <title>Coding on a Ration: GitHub Copilot for Peasants</title>
      <dc:creator>Giovanni Rufino (Geo)</dc:creator>
      <pubDate>Sat, 28 Feb 2026 03:48:30 +0000</pubDate>
      <link>https://dev.to/giovanni_rufinogeo_77b/coding-on-a-ration-github-copilot-for-peasants-1ak3</link>
      <guid>https://dev.to/giovanni_rufinogeo_77b/coding-on-a-ration-github-copilot-for-peasants-1ak3</guid>
      <description>&lt;p&gt;GitHub Copilot has completely revolutionized how we write software, but that AI magic isn't infinite. For many users, Copilot comes with a standard monthly limit on requests and chat interactions. When you are operating under a strict quota, prompt economy becomes incredibly important. You have to treat your prompts like a finite, heavily rationed utility. Making those prompts go as far as possible is an absolute necessity for maintaining your development velocity until the end of your billing cycle.&lt;/p&gt;

&lt;p&gt;Personally, I am a Copilot peasant who simply cannot justify dropping endless cash on a per-token basis. Watching that usage bar inch toward 100% triggers a deeply primal panic in my soul. Once that limit is hit, my productivity plummets through the floorboards. I am abruptly forced to remember how to write boilerplate code with my own two hands like it's 2019. It is a dark, barbaric reality that I actively try to avoid.&lt;/p&gt;

&lt;p&gt;The secret to surviving on a peasant's budget is abandoning the single-action prompt. Most developers use Copilot chat like a simple command-line interface by typing something basic like &lt;code&gt;run dotnet test&lt;/code&gt;. The agent dutifully runs the test, stops, and waits for your next command. If there are errors, you have to spend another precious prompt asking it to investigate and yet another asking it to fix the issue. You end up burning through your monthly quota on tiny micro-interactions.&lt;/p&gt;

&lt;p&gt;A much better approach is to chain your workflows together into comprehensive directives. Why spend four prompts when you can spend one? Try instructing the agent: &lt;code&gt;run dotnet test, identify any errors, fix those errors, run dotnet test again to confirm your changes worked, and commit your work.&lt;/code&gt; Front-loading the logic and anticipating the next steps sharply reduces your prompt spend while multiplying the actual work completed per request.&lt;/p&gt;

&lt;p&gt;Doling out single-step instructions to an AI is like ordering a sandwich at a deli by requesting the bread, waiting five minutes, asking for the turkey, waiting another five minutes, and then finally requesting the cheese. It is exhausting, incredibly inefficient, and honestly, the AI is probably judging you. Don't spoon-feed the AI directions like you're reading off MapQuest; give it the map and destination and let it drive.&lt;/p&gt;

&lt;p&gt;You can stretch your prompts even further by leveraging Copilot Instructions and Agent Skills. Documenting common project workflows completely eliminates the need to explicitly request them in your daily chats. For example, you can set a rigid rule to always run &lt;code&gt;dotnet format&lt;/code&gt; after making code changes. Agent skills complement this by directing the agent to autonomously execute complex, multi-step actions that you have pre-defined. This allows you to trigger massive workflows with a minimal initial prompt.&lt;/p&gt;
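&lt;p&gt;As a sketch of what a skill can look like (the exact file location and frontmatter schema vary by editor and Copilot version, so treat the path &lt;code&gt;.github/skills/test-and-fix/SKILL.md&lt;/code&gt; and the fields below as illustrative), a skill is essentially a markdown file whose description tells the agent when to invoke the pre-defined steps:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;---
name: test-and-fix
description: Run the test suite, diagnose failures, apply fixes, and verify.
---
1. Run `dotnet test`.
2. If any tests fail, read the error output and apply a fix.
3. Re-run `dotnet test` to confirm the fix worked.
4. Run `dotnet format` before finishing.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With something like this checked in, a one-line prompt such as "test and fix" can trigger the whole workflow instead of four separate requests.&lt;/p&gt;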

&lt;p&gt;To reach peak prompt frugality, implement Agent Hooks. As outlined in GitHub's documentation, hooks allow you to chain common tasks that automatically execute before or after specific agent actions. You could set a pre-hook to always restore dependencies before a build or a post-hook to clean up temporary directories after a test run. Baking these essential, repetitive tasks directly into the agent's operating procedure completely offloads them from your prompt quota.&lt;/p&gt;

&lt;h3&gt;How to Implement This in Your Repo&lt;/h3&gt;

&lt;p&gt;To get started, you can define your project's baseline rules using a custom instructions file. Create a file at &lt;code&gt;.github/copilot-instructions.md&lt;/code&gt; and lay out your default expectations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# .NET Project Copilot Instructions&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Always run &lt;span class="sb"&gt;`dotnet format`&lt;/span&gt; after modifying C# files.
&lt;span class="p"&gt;-&lt;/span&gt; When generating unit tests, default to xUnit and Moq.
&lt;span class="p"&gt;-&lt;/span&gt; If asked to "run tests," automatically review the error output on failure, attempt to apply a fix, and re-run the tests to verify.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, you can configure Agent Hooks to handle the repetitive setup and teardown tasks. By defining pre- and post-execution commands, you save the agent from needing explicit instructions for every minor step. A typical hook configuration might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/copilot/agent.yml&lt;/span&gt;
&lt;span class="na"&gt;hooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pre_test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dotnet restore&lt;/span&gt;
  &lt;span class="na"&gt;post_test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dotnet format&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git add .&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Chaining your commands, establishing smart instructions, utilizing skills, and leveraging hooks lets you effectively automate the automation. Guard your prompts, string your workflows together, and prove that even a Copilot peasant can code like royalty.&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>productivity</category>
      <category>ai</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>The Pitfall of "Helpful" AI: Navigating the Missing Context Problem in Software Engineering</title>
      <dc:creator>Giovanni Rufino (Geo)</dc:creator>
      <pubDate>Thu, 26 Feb 2026 02:09:26 +0000</pubDate>
      <link>https://dev.to/giovanni_rufinogeo_77b/the-pitfall-of-helpful-ai-navigating-the-missing-context-problem-in-software-engineering-17im</link>
      <guid>https://dev.to/giovanni_rufinogeo_77b/the-pitfall-of-helpful-ai-navigating-the-missing-context-problem-in-software-engineering-17im</guid>
      <description>&lt;p&gt;If you ask an AI assistant to help you with a workflow, you expect a smart, contextual answer. What you often get, however, is a highly confident assumption masquerading as absolute truth.&lt;/p&gt;

&lt;p&gt;Recently, I was trying to quickly dump a series of screenshots into a presentation using the Photo Album feature in PowerPoint. I prompted my AI assistant for the quickest way to execute this workflow. Here is the response I received:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Unfortunately, the Photo Album feature is not available in the web version of PowerPoint... Even if you were to download the desktop version on your Mac Mini, you likely wouldn't find it there either. However, since you’re looking to dump screenshots quickly on the web, here are the best workarounds..."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The AI sprang into action and gave me a bunch of workarounds to help me achieve my goal. It would have been a great answer, if only I weren't on a Windows PC. I was, so it missed the mark like a blind archer.&lt;/p&gt;

&lt;p&gt;Because the AI remembered that I had recently picked up an M1 Mac Mini, it anchored its entire troubleshooting process to that single data point. Instead of asking a basic diagnostic question, like &lt;em&gt;"What operating system are you currently using?"&lt;/em&gt;, it assumed my environment, declared my goal impossible, and confidently steered me toward a workaround I didn't need.&lt;/p&gt;

&lt;p&gt;As a minor desktop quirk, this is merely annoying. But when applied to the scale of enterprise software development, this exact behavior becomes a massive architectural pitfall.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Helpful, But Is It?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To build effectively with AI, we have to understand that human engineers and AI models handle missing information in fundamentally different ways.&lt;/p&gt;

&lt;p&gt;Human engineers are aware of their own epistemic uncertainty. When we are handed a fragmented problem, our instinct is to halt and gather requirements. We know what we don't know, and we ask clarifying questions to fill the gaps.&lt;/p&gt;

&lt;p&gt;AI models, on the other hand, are designed to be completion engines, not clarification engines. During their training phases, specifically through Reinforcement Learning from Human Feedback (RLHF), Large Language Models are heavily rewarded for reducing friction. They are trained to provide immediate, actionable answers and penalized for being overly pedantic or refusing a prompt.&lt;/p&gt;

&lt;p&gt;Over time, this creates a strong "helpfulness" bias. In short, AI is the ultimate people-pleaser. It would rather confidently hallucinate a completely fabricated reality than look you in the digital eye and say, "I need more information."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Microservice Minefield&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s scale this up from a PowerPoint annoyance to a modern enterprise ecosystem. Imagine you are planning a new feature that spans multiple microservices. Let's say we're working with an Angular frontend, a Node.js middle tier, and a Python-based backend, all living happily (or so we hope) in Azure.&lt;/p&gt;

&lt;p&gt;You open up your AI tool, ready to architect the new data flow, but you only feed it the context for the Angular app.&lt;/p&gt;

&lt;p&gt;A human engineer would instantly stop you: &lt;em&gt;"Where are the Swagger docs for the Python service? What does the Node payload look like?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The AI? The AI doesn't need your pesky documentation. Driven by its insatiable need to be helpful, it will confidently invent the API contracts for your other services. It will hand you a beautifully formatted, syntactically flawless integration plan that relies on endpoints that do not exist, returning data structures it literally just dreamt up.&lt;/p&gt;

&lt;p&gt;If you blindly trust that output, you aren't engineering a solution; you are just meticulously orchestrating your next production outage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution: Orchestrating the Context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If we accept that AI is an incurable people-pleaser fundamentally incapable of asking for directions, the solution becomes clear: we must assume the role of the ultimate context orchestrator.&lt;/p&gt;

&lt;p&gt;When initiating the architectural design of a new feature, providing a single user story and asking the model for code is a recipe for disaster. It is the engineering equivalent of handing a caffeinated intern looking to prove themselves a sticky note that says "build a checkout cart," and then leaving for the weekend. You return on Monday to find them waiting at your desk with a proud look on their face, tail practically wagging, eager to show you the bespoke payment gateway they wrote in a framework your infrastructure doesn't support, backed by a database they invented in their dreams.&lt;/p&gt;

&lt;p&gt;To mitigate this, we must aggressively front-load our prompts. Before asking the model to write a single line of logic or sequence a data flow, you must feed it the entire ecosystem. Drop the Swagger documentation, the database schemas, the frontend component structures, and the payload models from your middle tier directly into the context window. By establishing these hard boundaries upfront, you remove the blanks the AI would otherwise fill with hallucinations. You force it to route its logic through your actual architecture rather than its imagination.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Forcing the Clarification (Prompting for Engineers)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even with extensive front-loading, edge cases and gaps will remain. This is where we must program the AI's behavior, actively overriding its default instinct to guess. We do this by explicitly commanding it to act like a senior engineer.&lt;/p&gt;

&lt;p&gt;Append your architectural prompts with strict, behavioral constraints. A reliable pattern is to end your initial prompt with: &lt;em&gt;"Before providing a solution, analyze the provided repositories and ask me up to three clarifying questions about the system architecture, deployment environment, or missing API contracts."&lt;/em&gt;&lt;/p&gt;
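&lt;p&gt;Putting front-loading and the behavioral constraint together, a full architectural prompt might look something like the template below (the service names and bracketed placeholders are illustrative, not a fixed syntax):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Context:
- Angular frontend: [paste component structure]
- Node.js middle tier: [paste payload models]
- Python backend: [paste Swagger/OpenAPI spec]

Task: Design the data flow for the new checkout feature.

Constraints:
- Use only the endpoints defined in the specs above. Do not invent API contracts.
- Before providing a solution, ask me up to three clarifying questions about the
  system architecture, deployment environment, or missing API contracts.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;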

&lt;p&gt;To continue with our eager, tail-wagging intern analogy: hold that AI leash super tight. Give it all the context it needs, and confirm it knows exactly where it's going before unleashing it on its mission. You cannot let it sprint off to do its favorite thing (generating code) until it has explicitly proven it understands the assignment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Engineering the Prompts, Engineering the System&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI is an incredibly powerful mechanism for accelerating development, but it fundamentally lacks the instinct to hit the brakes. It will run off a cliff if it thinks that is what you asked it to do.&lt;/p&gt;

&lt;p&gt;As engineering leaders, our job is no longer just writing code or drawing system architectures. Our job is mastering the management of context. Recognizing the epistemic gaps, knowing exactly what the AI doesn't know, is rapidly becoming the most critical skill in modern software design.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>contextwindow</category>
      <category>promptengineering</category>
    </item>
  </channel>
</rss>
