<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: B McGhee</title>
    <description>The latest articles on DEV Community by B McGhee (@bmcghee).</description>
    <link>https://dev.to/bmcghee</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3872085%2F7dae6755-e98d-4a71-a3fd-ae53ea182dec.jpeg</url>
      <title>DEV Community: B McGhee</title>
      <link>https://dev.to/bmcghee</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bmcghee"/>
    <language>en</language>
    <item>
      <title>My AI pipeline had a 1M token context window. The output still got worse.</title>
      <dc:creator>B McGhee</dc:creator>
      <pubDate>Fri, 10 Apr 2026 16:07:53 +0000</pubDate>
      <link>https://dev.to/bmcghee/my-ai-pipeline-had-a-1m-token-context-window-the-output-still-got-worse-333l</link>
      <guid>https://dev.to/bmcghee/my-ai-pipeline-had-a-1m-token-context-window-the-output-still-got-worse-333l</guid>
      <description>&lt;h2&gt;Fixing a context window problem in an AIOps investigation pipeline&lt;/h2&gt;

&lt;p&gt;The pipeline stitches context from three repos, calls Gemini with a chain-of-thought prompt, and posts a root-cause analysis to Slack and Jira. At some point, output quality dropped.&lt;/p&gt;

&lt;h2&gt;Diagnosis&lt;/h2&gt;

&lt;p&gt;A character-count diagnostic showed the actual repo sizes, in estimated tokens:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;frontend    ~527k tokens
backend     ~311k tokens
legacy      ~7.9M tokens
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
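
&lt;p&gt;For reference, a minimal sketch of that kind of diagnostic, assuming local checkouts and the rough four-characters-per-token heuristic (the paths and the heuristic are illustrative, not the exact script):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os

# Rough heuristic: ~4 characters per token for code-heavy text.
CHARS_PER_TOKEN = 4

# Hypothetical local checkout paths -- adjust to your layout.
REPOS = {"frontend": "./frontend", "backend": "./backend", "legacy": "./legacy"}

def estimate_tokens(root):
    """Sum character counts across files and convert to a token estimate."""
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                with open(os.path.join(dirpath, name), encoding="utf-8", errors="ignore") as f:
                    total_chars += len(f.read())
            except OSError:
                continue  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

for repo, path in REPOS.items():
    print(f"{repo:10s} ~{estimate_tokens(path):,} tokens")
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;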



&lt;p&gt;The fixed 50/35/15 budget split was loading the same proportion of irrelevant code regardless of ticket type. A scheduling bug got the same legacy allocation as an auth bug.&lt;/p&gt;

&lt;p&gt;Models don't attend uniformly across long contexts. Irrelevant content doesn't just take up space; it degrades output quality. The ceiling wasn't the constraint. Context selection was.&lt;/p&gt;

&lt;h2&gt;Constraints to consider&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Model rate limits and context window.&lt;/strong&gt; The pipeline already hits the API directly, so context caching is available, but the 1M-token ceiling is hard. The fix had to work within it, not around it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context quality vs. quantity.&lt;/strong&gt; A smaller, focused window consistently outperforms a larger, noisy one on reasoning tasks. This ruled out "just get a bigger window" as a solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latency.&lt;/strong&gt; A secondary concern alongside quality: the time from bug filed to Slack/Jira report. Runner queue time and repo checkout time compound. This was addressed separately via infrastructure, not in the script.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-language repos.&lt;/strong&gt; Three different primary languages (TypeScript, Go, legacy Node.js) with different directory conventions. The routing table had to account for each independently.&lt;/p&gt;

&lt;h2&gt;Fix&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Label-based routing.&lt;/strong&gt; Extended the ticket fetch to include labels and components, then mapped those to repo-specific dirs and budget splits:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scheduling → frontend: 45% / backend: 45% (scheduling handlers only) / legacy: 10%
auth       → frontend: 55% (providers, hooks) / backend: 35% / legacy: 10%
default    → 50/35/15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
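
&lt;p&gt;A minimal sketch of what that routing table can look like in code. The label keys and split percentages come from the table above; the directory filters are illustrative stand-ins for the real paths:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Maps a ticket label to (budget share, optional directory filter) per repo.
# Shares are fractions of the total context budget; None means the whole repo.
ROUTING = {
    "scheduling": {
        "frontend": (0.45, None),
        "backend":  (0.45, ["handlers/scheduling/"]),  # scheduling handlers only
        "legacy":   (0.10, None),
    },
    "auth": {
        "frontend": (0.55, ["src/providers/", "src/hooks/"]),
        "backend":  (0.35, None),
        "legacy":   (0.10, None),
    },
    "default": {
        "frontend": (0.50, None),
        "backend":  (0.35, None),
        "legacy":   (0.15, None),
    },
}

def route(labels):
    """Pick the first ticket label with a routing entry, else fall back to default."""
    for label in labels:
        if label in ROUTING:
            return ROUTING[label]
    return ROUTING["default"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The default entry keeps unlabeled tickets on the old 50/35/15 behavior instead of failing the run.&lt;/p&gt;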



&lt;p&gt;&lt;strong&gt;Prompt restructure.&lt;/strong&gt; Stable content (architecture context, codebase) goes at the top, the ticket at the bottom. That keeps attention on the ticket and enables implicit caching across back-to-back runs.&lt;/p&gt;
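
&lt;p&gt;The assembly order is the whole change. A sketch, with illustrative section names:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def build_prompt(architecture_notes, code_context, ticket):
    """Stable sections first, so a shared prefix can be cached across runs;
    the ticket goes last, closest to where generation begins."""
    return "\n\n".join([
        "## Architecture context",  # stable across tickets
        architecture_notes,
        "## Relevant code",         # stable across back-to-back runs on one repo set
        code_context,
        "## Ticket",                # varies per run, so it goes last
        ticket,
    ])
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;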

&lt;h2&gt;The point&lt;/h2&gt;

&lt;p&gt;Use deterministic pre-filtering before the LLM sees any code. The model sees less. The output is better. Reach for context selection before a bigger window.&lt;/p&gt;
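
&lt;p&gt;Putting the pieces together, a sketch of that deterministic pre-filter, reusing the hypothetical &lt;code&gt;route&lt;/code&gt; table and &lt;code&gt;CHARS_PER_TOKEN&lt;/code&gt; heuristic from the earlier sketches:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os

def collect_context(repo_roots, labels, total_budget_tokens):
    """Deterministically select files per repo, per ticket label,
    stopping at each repo's share of the token budget."""
    plan = route(labels)  # routing table sketch above
    sections = []
    for repo, (share, dir_filters) in plan.items():
        budget = int(total_budget_tokens * share)
        used = 0
        root = repo_roots[repo]
        for dirpath, _dirnames, filenames in os.walk(root):
            rel = os.path.relpath(dirpath, root)
            if dir_filters and not any(rel.startswith(d.strip("/")) for d in dir_filters):
                continue  # outside this label's directories for the repo
            for name in sorted(filenames):  # sorted for determinism
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        text = f.read()
                except OSError:
                    continue
                cost = len(text) // CHARS_PER_TOKEN  # heuristic from the diagnostic sketch
                if used + cost &gt; budget:
                    continue  # would blow this repo's share; skip
                sections.append(f"// {repo}/{os.path.relpath(path, root)}\n{text}")
                used += cost
    return "\n\n".join(sections)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;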

</description>
      <category>ai</category>
      <category>gemini</category>
      <category>llm</category>
      <category>promptengineering</category>
    </item>
  </channel>
</rss>
