<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: takawasi</title>
    <description>The latest articles on DEV Community by takawasi (@takawasi_a3daaa65d00ffee8).</description>
    <link>https://dev.to/takawasi_a3daaa65d00ffee8</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3842500%2F237d9fe6-c5b1-492b-b51e-6451a0227fa7.png</url>
      <title>DEV Community: takawasi</title>
      <link>https://dev.to/takawasi_a3daaa65d00ffee8</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/takawasi_a3daaa65d00ffee8"/>
    <language>en</language>
    <item>
      <title>Why Your LLM Ignores Detailed Instructions (It's Not a Bug)</title>
      <dc:creator>takawasi</dc:creator>
      <pubDate>Wed, 25 Mar 2026 07:47:02 +0000</pubDate>
      <link>https://dev.to/takawasi_a3daaa65d00ffee8/why-your-llm-ignores-detailed-instructions-its-not-a-bug-35n1</link>
      <guid>https://dev.to/takawasi_a3daaa65d00ffee8/why-your-llm-ignores-detailed-instructions-its-not-a-bug-35n1</guid>
      <description>&lt;p&gt;You've been there. You write a meticulous 100-step prompt. You stuff it into a 1M-token context. The model ignores half of it.&lt;/p&gt;

&lt;p&gt;This isn't a bug. It's the structural ceiling of LLMs — and understanding it will change how you design AI systems.&lt;/p&gt;

&lt;h2&gt;The "Human Chunk" Problem&lt;/h2&gt;

&lt;p&gt;LLMs are trained on human-written text. Humans write in natural units: blog posts, emails, functions, conversation turns. I call these &lt;strong&gt;human chunks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The model's probability space is structured around these chunks. When you input fine-grained instructions, the model doesn't process them at your granularity — it elevates them to human-chunk level. A 100-step procedure becomes "do the task."&lt;/p&gt;

&lt;h2&gt;What This Means for Your System Design&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# This won't work as expected:
&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Step 1: Check if X
Step 2: If X, do Y
Step 3: Verify Y was done
&lt;/span&gt;&lt;span class="gp"&gt;...&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;97&lt;/span&gt; &lt;span class="n"&gt;more&lt;/span&gt; &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Model processes this as one big chunk, not 100 steps
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# This works better:
&lt;/span&gt;&lt;span class="n"&gt;result_1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Check if X&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;result_2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Given &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result_1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;, do Y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;result_3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Verify: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result_2&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Each call is at human-chunk granularity
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;The Design Principle&lt;/h2&gt;

&lt;p&gt;Systems that &lt;strong&gt;accept&lt;/strong&gt; this ceiling — stateless chains, tasks split at human-chunk granularity — naturally improve as models get better. Systems that &lt;strong&gt;fight&lt;/strong&gt; this ceiling need re-engineering every model update.&lt;/p&gt;
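&lt;p&gt;As a rough sketch of what "stateless chain" means here (hypothetical: &lt;code&gt;call_llm&lt;/code&gt; stands in for whatever LLM client you actually use, stubbed out so the chaining logic runs on its own):&lt;/p&gt;

```python
# Minimal stateless-chain sketch. `call_llm` is a hypothetical
# stand-in for any LLM client; stubbed here so the chain itself
# is runnable without an API key.
def call_llm(prompt: str) -> str:
    return f"[response to: {prompt}]"

def run_chain(steps, initial_input):
    """Run each step as its own human-chunk-sized call.

    Each call sees only one instruction plus the previous result;
    the chain, not a single giant prompt, carries the state.
    """
    result = initial_input
    for step in steps:
        result = call_llm(f"{step}\n\nInput:\n{result}")
    return result

output = run_chain(
    ["Check if X", "If X, do Y", "Verify Y was done"],
    "raw task input",
)
```

&lt;p&gt;The state lives in the chain, not in one 100-step prompt, so every call stays at human-chunk granularity and the whole pipeline benefits as models improve.&lt;/p&gt;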

&lt;p&gt;Prompt engineering is optimization &lt;em&gt;within&lt;/em&gt; the ceiling. It's valuable, but it doesn't change the ceiling itself.&lt;/p&gt;

&lt;h2&gt;Takeaway&lt;/h2&gt;

&lt;p&gt;Stop trying to overcome the human-chunk ceiling with more detailed prompts. Design around it instead. Your system will be simpler, more robust, and will scale with model improvements automatically.&lt;/p&gt;

&lt;p&gt;What patterns have you found for working &lt;em&gt;with&lt;/em&gt; this ceiling instead of against it?&lt;/p&gt;

</description>
      <category>rag</category>
    </item>
  </channel>
</rss>
