<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Serhii Panchyshyn</title>
    <description>The latest articles on DEV Community by Serhii Panchyshyn (@serhiip).</description>
    <link>https://dev.to/serhiip</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F138013%2F5b142395-3c3d-49af-8418-515743a4e2fb.JPG</url>
      <title>DEV Community: Serhii Panchyshyn</title>
      <link>https://dev.to/serhiip</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/serhiip"/>
    <language>en</language>
    <item>
      <title>Why Your AI Agent Costs 7x What It Should</title>
      <dc:creator>Serhii Panchyshyn</dc:creator>
      <pubDate>Tue, 21 Apr 2026 17:38:28 +0000</pubDate>
      <link>https://dev.to/serhiip/why-your-ai-agent-costs-7x-what-it-should-3bm0</link>
      <guid>https://dev.to/serhiip/why-your-ai-agent-costs-7x-what-it-should-3bm0</guid>
      <description>&lt;p&gt;Most AI agents are loops. Call the model. Read the response. Run a tool. Feed the result back. Call again.&lt;/p&gt;

&lt;p&gt;That loop is also a billing loop. Every iteration re-sends the entire prompt. And unless you've thought about it carefully, every iteration is paying full price for tokens the provider already saw three calls ago.&lt;/p&gt;

&lt;p&gt;I learned this the hard way. I had an agent that called the model five to seven times per user interaction. Screenshots were involved. A single medium-resolution image runs about two thousand tokens. Multiply that by seven passes and you're paying for fourteen thousand tokens for an image the model only needed to see once.&lt;/p&gt;

&lt;p&gt;The fix took about thirty minutes. It cut my input costs by roughly 80%.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem is the loop, not the model
&lt;/h2&gt;

&lt;p&gt;When people optimize LLM costs, they usually start with the model. Can I use a smaller one? Can I cut the system prompt? Can I reduce the context window?&lt;/p&gt;

&lt;p&gt;Those are fine. But they're linear improvements. You shave off 20% here, 30% there.&lt;/p&gt;

&lt;p&gt;The loop is a multiplier. If your agent runs five iterations and you're re-billing the same stable content on every pass, you're not overpaying by a percentage. You're overpaying by a multiple. Five iterations means 5x. Seven means 7x. That's the gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's actually happening under the hood
&lt;/h2&gt;

&lt;p&gt;Every major LLM provider now offers prompt caching. You pay full price the first time you send a prompt. On subsequent calls that share the same beginning, you pay a fraction of the input cost for the cached portion. OpenAI gives up to a 90% discount on cached tokens. Anthropic's cache reads run about 10% of normal input cost.&lt;/p&gt;

&lt;p&gt;The key word is "beginning." These caches are prefix caches. They match your prompt from byte zero forward. The moment the bytes diverge from what's stored, the match ends. Everything after that point is a miss.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Call 1:  [A][B][C][D][E]   → writes prefix to cache
Call 2:  [A][B][C][D][F]   → cache hit through [D], full price for [F]
Call 3:  [A][X][C][D][E]   → cache MISS at position 1, full price for everything
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In call 3, the tokens [C], [D], and [E] are identical to call 1. Doesn't matter. The chain broke at position 1. The cache is left-anchored and unforgiving.&lt;/p&gt;

&lt;h2&gt;
  
  
  This isn't theoretical. It's happening in production right now.
&lt;/h2&gt;

&lt;p&gt;I was looking at LightRAG, a popular open-source RAG framework. Their entity extraction pipeline embeds variable content directly inside the system prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;System prompt:
  ---Role---         (static, ~100 tokens)
  ---Instructions--- (static, ~400 tokens)
  ---Examples---     (static, ~800 tokens)
  ---Input Text---
  {input_text}       ← CHANGES FOR EVERY CHUNK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every chunk produces a completely different system prompt string. There's no shared prefix across chunks because the variable content is baked into the same message as the static instructions. Nothing gets cached.&lt;/p&gt;

&lt;p&gt;For a typical indexing run of 8,000 chunks, that's roughly 11.6 million prompt tokens all counted as new. If the static prefix (~1,300 tokens) were separated from the variable input, roughly 10.4 million of those tokens would hit the cache. That's a 45% cost reduction just from moving one variable out of the system message and into the user message.&lt;/p&gt;

&lt;p&gt;The fix is three lines of code. Split the template. Put static content in the system message. Put &lt;code&gt;{input_text}&lt;/code&gt; in the user message. Done.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;System message (cached):
  ---Role---
  ---Instructions---
  ---Examples---
  ---Entity Types---

User message (variable):
  ---Input Text---
  {input_text}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern shows up everywhere. If you're building any pipeline that processes documents in chunks, your prompt is probably structured like LightRAG's. And you're probably paying for it.&lt;/p&gt;
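
&lt;p&gt;Here's a minimal sketch of what that split can look like with the OpenAI Python SDK. The prompt text is abbreviated and the model and function names are placeholders; OpenAI's prompt caching kicks in automatically once the shared prefix is long enough, so no extra parameters are needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from openai import OpenAI

client = OpenAI()

# Static instructions: identical bytes on every call, so they form a cacheable prefix.
STATIC_PROMPT = """---Role---
You extract entities and relationships from text.
---Instructions---
(abbreviated)
---Examples---
(abbreviated)
---Entity Types---
organization, person, location, event"""

def extract_entities(chunk_text: str):
    # The only thing that changes per chunk lives in the user message.
    return client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": STATIC_PROMPT},
            {"role": "user", "content": "---Input Text---\n" + chunk_text},
        ],
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;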

&lt;h2&gt;
  
  
  The layout that actually works
&lt;/h2&gt;

&lt;p&gt;Once you understand prefix caching, prompt layout stops being cosmetic and starts being economic. The shape you want is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[  STABLE PREFIX  ][ cache breakpoint ][  GROWING TAIL  ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything that stays the same between calls goes to the left. Everything that changes goes to the right. The breakpoint sits between them.&lt;/p&gt;

&lt;p&gt;The Claude Code team at Anthropic shared their exact ordering and it's a good template for any agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Static system prompt + tool definitions  (globally cached)
2. Project-level context                    (cached within a project)
3. Session context                          (cached within a session)
4. Conversation messages                    (the growing tail)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each layer is stable relative to the layer below it. System prompts change less than project context. Project context changes less than session context. Session context changes less than conversation messages. The cache hits cascade.&lt;/p&gt;

&lt;h2&gt;
  
  
  The counterintuitive part
&lt;/h2&gt;

&lt;p&gt;In a single-shot call, the user's message naturally goes at the end. That's correct. But in a loop, the user's message is not the tail. The loop's output is.&lt;/p&gt;

&lt;p&gt;Think about it. The user's message doesn't change between iterations. It's the same question on pass one as it is on pass five. What changes is the assistant's responses and tool results that accumulate with each iteration.&lt;/p&gt;

&lt;p&gt;So the user's content belongs in the prefix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ system prompt ][ user message ][ breakpoint ][ loop state → grows each iteration ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This looks wrong. The user's message isn't at the end. But the cache doesn't care about narrative order. It cares about byte stability. The user's message is frozen across iterations. The loop output is what moves. Frozen things go left. Moving things go right.&lt;/p&gt;
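
&lt;p&gt;Here's a sketch of that layout using Anthropic's explicit &lt;code&gt;cache_control&lt;/code&gt; breakpoints (with OpenAI the caching is automatic, so the same effect comes purely from keeping this ordering stable). The model name and helper names are placeholders.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import anthropic

client = anthropic.Anthropic()

def run_iteration(system_prompt, user_request, loop_messages, tools):
    return client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model
        max_tokens=1024,
        tools=tools,
        system=[
            # Stable across every call in every session.
            {"type": "text", "text": system_prompt,
             "cache_control": {"type": "ephemeral"}},
        ],
        messages=[
            # The user's original request (including any screenshot) is frozen
            # across iterations, so it sits left of the breakpoint.
            {"role": "user", "content": [
                {"type": "text", "text": user_request,
                 "cache_control": {"type": "ephemeral"}},
            ]},
            # Everything after this point is the growing tail: assistant turns
            # and tool results accumulated by the loop.
            *loop_messages,
        ],
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;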

&lt;h2&gt;
  
  
  The math on images
&lt;/h2&gt;

&lt;p&gt;Text tokens are cheap enough that sloppy caching is survivable. Images are not.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image tokens&lt;/th&gt;
&lt;th&gt;Passes&lt;/th&gt;
&lt;th&gt;Without caching&lt;/th&gt;
&lt;th&gt;With caching&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2,000&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;2,000&lt;/td&gt;
&lt;td&gt;2,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2,000&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;~2,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2,000&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;14,000&lt;/td&gt;
&lt;td&gt;~2,600&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The cached version writes the image once and reads it at a fraction of full price on every subsequent pass. That's the difference between an agent that's economically viable and one that burns through your API budget in a week.&lt;/p&gt;

&lt;p&gt;If your agent processes screenshots, documents, or any visual input inside a loop, this is probably the single highest-leverage optimization available to you.&lt;/p&gt;

&lt;h2&gt;
  
  
  The silent cache killers
&lt;/h2&gt;

&lt;p&gt;Even with the right layout, caching breaks in quiet ways. Every one of these has bitten me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timestamps in the system prompt.&lt;/strong&gt; "The current time is 2025-04-22 14:23:07." Changes every call. One line and your entire prefix is invalidated. The fix is to pass time updates in the next user message instead. Claude Code does exactly this. They append a &lt;code&gt;&amp;lt;system-reminder&amp;gt;&lt;/code&gt; tag in the next turn rather than touching the system prompt.&lt;/p&gt;
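
&lt;p&gt;A small sketch of that pattern. The tag format just mirrors the description above, so treat it as illustrative rather than an exact spec:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from datetime import datetime, timezone

def next_user_turn(user_text):
    # Time lives in the (already-changing) user turn, not the cached system prompt.
    now = datetime.now(timezone.utc).isoformat(timespec="seconds")
    reminder = "&amp;lt;system-reminder&amp;gt;The current time is " + now + "&amp;lt;/system-reminder&amp;gt;"
    return {"role": "user", "content": reminder + "\n\n" + user_text}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;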

&lt;p&gt;&lt;strong&gt;Adding or removing tools mid-conversation.&lt;/strong&gt; This is probably the most common mistake I see. It seems logical to only give the model tools it needs right now. But tool definitions are part of the cached prefix. Adding or removing a tool invalidates the cache for the entire conversation history.&lt;/p&gt;

&lt;p&gt;The Claude Code team learned this the hard way. Their plan mode initially swapped out tools for read-only versions. Cache broke every time. The fix: keep all tools in the request always. Make plan mode a tool itself (&lt;code&gt;EnterPlanMode&lt;/code&gt;, &lt;code&gt;ExitPlanMode&lt;/code&gt;). The tool definitions never change. The model calls a tool to change its own behavior instead of you changing the toolset.&lt;/p&gt;

&lt;p&gt;If you have many tools and loading all of them is expensive, send lightweight stubs with just the tool name and let the model discover full schemas through a search tool when needed. The stubs are stable. The prefix stays intact.&lt;/p&gt;
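
&lt;p&gt;Sketched out, that can look like the following. The stub shape and the &lt;code&gt;describe_tool&lt;/code&gt; helper are assumptions for illustration, not a documented API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Stable, lightweight stubs: names only, always present, so the prefix never changes.
TOOL_STUBS = [
    {"name": name,
     "description": "Call describe_tool first to get the full schema.",
     "input_schema": {"type": "object", "properties": {}}}
    for name in ["create_invoice", "search_tickets", "export_report"]  # made-up names
]

# One real tool the model uses to discover full schemas on demand.
DESCRIBE_TOOL = {
    "name": "describe_tool",
    "description": "Return the full input schema and usage notes for a named tool.",
    "input_schema": {
        "type": "object",
        "properties": {"tool_name": {"type": "string"}},
        "required": ["tool_name"],
    },
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;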

&lt;p&gt;&lt;strong&gt;Switching models mid-session.&lt;/strong&gt; Prompt caches are model-specific. If you're 100k tokens into a conversation with a large model and want to hand off an easy subtask to a smaller one, you'd have to rebuild the entire cache for the new model. That rebuild often costs more than just letting the original model answer.&lt;/p&gt;

&lt;p&gt;If you need multi-model workflows, use subagents. The primary model prepares a focused handoff message for the secondary model. The secondary model works with a short, fresh context. Neither model's cache gets broken.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unordered data structures.&lt;/strong&gt; If you build context from a set or unordered dict, iteration order can drift between calls. Sort before serializing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Whitespace drift.&lt;/strong&gt; One version of your template has a trailing newline, another doesn't. The bytes don't match.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In-place edits to history.&lt;/strong&gt; The moment you mutate a past message, every byte after it shifts. Your cache for that whole conversation is gone.&lt;/p&gt;

&lt;p&gt;The unifying principle: content that looks identical to a human is not necessarily byte-identical to a hash function. The cache only speaks bytes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to verify it's working
&lt;/h2&gt;

&lt;p&gt;Don't trust your layout. Measure it.&lt;/p&gt;

&lt;p&gt;Every major provider returns cache metrics in the response. OpenAI includes &lt;code&gt;cached_tokens&lt;/code&gt; in &lt;code&gt;usage.prompt_tokens_details&lt;/code&gt;. Anthropic returns &lt;code&gt;cache_creation_input_tokens&lt;/code&gt; and &lt;code&gt;cache_read_input_tokens&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;On the first call, cached tokens should be zero. On every subsequent call, they should climb to match your stable prefix length. If they don't, your prefix isn't stable.&lt;/p&gt;
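
&lt;p&gt;Reading those fields takes a couple of lines. This sketch assumes both SDKs are installed and configured; the model names and the toy request are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from openai import OpenAI
import anthropic

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

messages = [{"role": "user", "content": "Summarize ticket 12345."}]  # toy request

# OpenAI: cached tokens show up in usage.prompt_tokens_details.
resp = openai_client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print("cached", resp.usage.prompt_tokens_details.cached_tokens,
      "of", resp.usage.prompt_tokens, "prompt tokens")

# Anthropic: cache writes and cache reads are reported separately.
msg = anthropic_client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model
    max_tokens=512,
    system=[{"type": "text", "text": "You are a support agent.",
             "cache_control": {"type": "ephemeral"}}],
    messages=messages,
)
print("wrote", msg.usage.cache_creation_input_tokens,
      "read", msg.usage.cache_read_input_tokens)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;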

&lt;p&gt;The best debugging step: dump the raw prompt from two consecutive calls and diff them. You'll find the drift immediately.&lt;/p&gt;

&lt;p&gt;A habit that's saved me hours: write a test that runs your prompt builder twice with equivalent inputs and asserts the first N bytes are byte-equal. Humans can't eyeball byte stability. Hash functions can.&lt;/p&gt;
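
&lt;p&gt;A minimal version of that test. The prompt builder and its contents are obviously illustrative; the point is the byte-level assertion at the end:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json

STATIC_SYSTEM = "You are a support agent for a SaaS platform."  # stable region

def build_messages(user_question, loop_state):
    # Illustrative builder: stable content first, volatile content last.
    return [
        {"role": "system", "content": STATIC_SYSTEM},
        {"role": "user", "content": user_question},
        *loop_state,  # the growing tail
    ]

def serialize(messages):
    # sort_keys guards against dict-ordering drift between runs
    return json.dumps(messages, sort_keys=True).encode("utf-8")

def test_prefix_is_byte_stable():
    stable = serialize(build_messages("same question", []))
    a = serialize(build_messages("same question", [{"role": "assistant", "content": "pass 1"}]))
    b = serialize(build_messages("same question", [{"role": "assistant", "content": "pass 2"}]))
    n = len(stable) - 1  # everything up to the closing bracket of the stable region
    assert a[:n] == b[:n], "stable prefix drifted between calls"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;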

&lt;p&gt;And if caching is a meaningful part of your cost structure, monitor it like you'd monitor uptime. The Claude Code team runs alerts on their cache hit rate and treats drops as incidents. A few percentage points of cache miss can dramatically change unit economics. It deserves a dashboard, not a gut check.&lt;/p&gt;

&lt;h2&gt;
  
  
  The reframe
&lt;/h2&gt;

&lt;p&gt;I used to think of prompts as messages. Now I think of them as data structures with cache semantics. Some regions are stable. Some regions grow. The breakpoint is the contract between them.&lt;/p&gt;

&lt;p&gt;Every piece of content gets the same triage: does this change between calls? If yes, it goes to the tail. If no, it goes to the prefix. If it's expensive and it belongs to the user's turn, I figure out how to keep it in the prefix anyway. Even if it means putting things somewhere that looks weird.&lt;/p&gt;

&lt;p&gt;This reframe changes how I design features. Instead of asking "what tools does the model need right now?" I ask "how do I model this state change without breaking the prefix?" Instead of editing the system prompt to update context, I pass updates through messages. Instead of switching to a cheaper model mid-conversation, I fork a subagent with a clean context.&lt;/p&gt;

&lt;p&gt;The model is the expensive part of your system. The shape of what you send it is the part you actually control.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I help engineering teams ship AI features that work in production, not just in demos. If your agents are burning through API budgets or your LLM infrastructure needs a cost audit, &lt;a href="https://cal.com/animanovalabs/ai-strategy-call" rel="noopener noreferrer"&gt;let's talk&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>llm</category>
      <category>production</category>
    </item>
    <item>
      <title>The One Mindset Shift That Separates People Who Use AI From People Who Get Left Behind</title>
      <dc:creator>Serhii Panchyshyn</dc:creator>
      <pubDate>Sat, 18 Apr 2026 00:11:43 +0000</pubDate>
      <link>https://dev.to/serhiip/the-one-mindset-shift-that-separates-people-who-use-ai-from-people-who-get-left-behind-31ao</link>
      <guid>https://dev.to/serhiip/the-one-mindset-shift-that-separates-people-who-use-ai-from-people-who-get-left-behind-31ao</guid>
      <description>&lt;p&gt;You take out the garbage every day.&lt;/p&gt;

&lt;p&gt;You've done it for years. Maybe decades. It's just a thing you do. Part of the routine. You grab the bag, walk outside, toss it in the bin. Done. Never think about it twice.&lt;/p&gt;

&lt;p&gt;But what if you stopped for 10 seconds and asked: "Does it have to be this way?"&lt;/p&gt;

&lt;p&gt;What if the garbage could take itself out?&lt;/p&gt;

&lt;p&gt;That sounds ridiculous. And that's exactly the point. Because most people never ask the ridiculous question. They never get curious enough to wonder if the thing they've always done could be done differently. Or not done at all.&lt;/p&gt;

&lt;p&gt;And right now, in 2026, that lack of curiosity is the single biggest thing holding people back.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Curiosity Gap Is the New Skills Gap
&lt;/h2&gt;

&lt;p&gt;Traditional businesses are sitting on decades of manual processes. The pattern I see over and over is not a technology gap. It's a curiosity gap.&lt;/p&gt;

&lt;p&gt;The tools are here. AI can write, analyze, build, automate, reason. It gets better every month. But most people interact with these tools the way they interact with their garbage routine. They accept the default. They don't question the process. They don't get curious about what's underneath.&lt;/p&gt;

&lt;p&gt;Adam Grant, organizational psychologist at Wharton, studied what he calls "originals." People who drive creativity and change. His research found something surprising. The biggest difference between originals and everyone else wasn't talent or intelligence. It was that originals were more afraid of not trying than of failing. They generated massive volumes of ideas. Most were bad. But the volume itself created the conditions for breakthroughs.&lt;/p&gt;

&lt;p&gt;That's curiosity in action. Not passive wondering. Active experimentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Your Brain Fights Curiosity
&lt;/h2&gt;

&lt;p&gt;Your brain is actually wired to avoid curiosity.&lt;/p&gt;

&lt;p&gt;Psychiatrist Judson Brewer at Brown University has spent over 20 years studying how the brain forms habits. His research shows that our brains run on a reward-based learning loop. Trigger, behavior, reward. See garbage bag full. Pick it up. Take it out. Feel good that the task is done. Loop complete. Brain moves on.&lt;/p&gt;

&lt;p&gt;The problem is that this same loop applies to how we think. We encounter a problem. We reach for the familiar solution. We get the small reward of "done." And we never question whether the problem itself was the right one to solve.&lt;/p&gt;

&lt;p&gt;Brewer's key insight is that curiosity is actually more powerful than willpower for breaking these loops. When you get genuinely curious about a habit or pattern, your brain's reward system updates. You start seeing the actual results of your default behaviors instead of running on autopilot.&lt;/p&gt;

&lt;p&gt;This is why curiosity isn't just nice to have. It's a mechanism for rewiring how you operate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Garbage Test
&lt;/h2&gt;

&lt;p&gt;I use something I call the Garbage Test with teams I work with. It's simple.&lt;/p&gt;

&lt;p&gt;Pick one thing you do every single day that you've never questioned. Something so routine it's invisible. Now get curious about it. Not "how do I optimize this?" That's efficiency thinking. Instead ask:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Why does this exist at all?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you ask that question about enough things, you start finding entire categories of work that shouldn't exist. Reports nobody reads. Meetings that could be async messages. Manual data entry that an API could handle. Approval workflows that exist because someone got burned once in 2017.&lt;/p&gt;

&lt;p&gt;The garbage doesn't need a faster route to the bin. The garbage needs to stop being generated in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Curiosity Is Not a Personality Trait. It's a Practice.
&lt;/h2&gt;

&lt;p&gt;People tell me they're "not the curious type." That's like saying you're not the breathing type. Curiosity is a human default. Kids ask somewhere around 300 questions a day. By adulthood, that number drops to almost nothing.&lt;/p&gt;

&lt;p&gt;What happened? We got trained out of it. Schools rewarded correct answers over good questions. Workplaces rewarded execution over exploration. We learned that asking "why" makes you look like you don't know what you're doing.&lt;/p&gt;

&lt;p&gt;Anne-Laure Le Cunff, who spoke at SXSW EDU 2025 on the experimental mindset, put it well. She argues that by middle school, most kids have already shifted from the excitement of discovery to the pressure of getting things right. And we carry that pressure into our careers, our businesses, our relationship with technology.&lt;/p&gt;

&lt;p&gt;The fix is not some grand mindset overhaul. It's small experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Curiosity Protocol
&lt;/h2&gt;

&lt;p&gt;This is what I've used across engagements with teams adopting AI and building new workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Pick one friction point.&lt;/strong&gt; Something that bothers you. Something tedious. Something you complain about but accept. Start there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Shut everything down for 15 minutes.&lt;/strong&gt; No Slack. No email. No music. Just you and the question: "What is actually happening here? What's underneath this?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Get weird with it.&lt;/strong&gt; Ask the dumb question. "What if this didn't exist?" "What if I did the opposite?" "What if a five-year-old designed this?" The value isn't in the answer. It's in breaking the default pattern your brain is stuck in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Run one micro-experiment.&lt;/strong&gt; Don't plan. Don't strategize. Don't build a deck. Just try something. One small test. See what happens. The goal isn't to succeed. The goal is to learn something you didn't know 30 minutes ago.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Record what you found.&lt;/strong&gt; Not a formal report. A single sentence. "I tried X and learned Y." That's it. Stack enough of those sentences and you have a roadmap that no consultant could have built for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Looks Like With AI
&lt;/h2&gt;

&lt;p&gt;Say you spend 45 minutes every morning reading through Slack messages, emails, and project updates to figure out what needs your attention. You've done this for years. It's just the morning routine.&lt;/p&gt;

&lt;p&gt;The Garbage Test: "Why does this exist?"&lt;/p&gt;

&lt;p&gt;Because information is scattered. Because there's no single source of truth. Because everyone communicates differently.&lt;/p&gt;

&lt;p&gt;The curious question: "What if I didn't do this at all? What if something did it for me?"&lt;/p&gt;

&lt;p&gt;The micro-experiment: Spend one hour building a simple AI workflow that summarizes your channels and flags what actually needs you. Not a perfect system. A prototype. A test.&lt;/p&gt;

&lt;p&gt;Maybe it works. Maybe it doesn't. But now you've learned something about what AI can do, what your actual information bottlenecks are, and where you should focus next. That's more progress than most people make in a month of "meaning to look into AI."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Competitive Advantage
&lt;/h2&gt;

&lt;p&gt;Research from ISG found that curiosity is becoming one of the most critical organizational capabilities in the AI era. Not because curious people are smarter. But because curious people experiment. And experimentation is the only way to figure out how AI actually fits into your specific context.&lt;/p&gt;

&lt;p&gt;No blog post, course, or consultant can tell you exactly how AI will transform your work. That answer only comes from getting curious enough to try things. To break things. To ask the question nobody else is asking.&lt;/p&gt;

&lt;p&gt;Google built their innovation culture on this. They gave employees 20% of their time for self-directed projects. Not because they knew what would come out of it. But because they understood that curiosity at scale produces outcomes you can't predict or plan for.&lt;/p&gt;

&lt;p&gt;You don't need Google's budget to do this. You need 15 minutes and one dumb question.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Part Nobody Wants to Do
&lt;/h2&gt;

&lt;p&gt;Curiosity requires something most people avoid: sitting with not knowing.&lt;/p&gt;

&lt;p&gt;We live in an era of instant answers. Google it. Ask ChatGPT. Get the solution. Move on. But curiosity isn't about getting answers faster. It's about asking better questions. And better questions come from the discomfort of not knowing. From staying in that space long enough to see what's really there.&lt;/p&gt;

&lt;p&gt;Stuart Firestein, a neuroscientist at Columbia, gave a TED Talk called "The Pursuit of Ignorance" where he argued that knowledge actually generates more ignorance, not less. Every answer opens new questions. The people who thrive are the ones who see that as exciting, not threatening.&lt;/p&gt;

&lt;p&gt;That's the mindset shift. Not "I need to learn AI." But "I wonder what would happen if..."&lt;/p&gt;

&lt;h2&gt;
  
  
  Start Today
&lt;/h2&gt;

&lt;p&gt;Don't bookmark this article and forget about it. That's the old pattern. The default loop. Instead, do this:&lt;/p&gt;

&lt;p&gt;Before you close this tab, pick one thing in your life or work that you've never questioned. One thing that's "just how it is." Write it down. Then spend 15 minutes getting curious about it.&lt;/p&gt;

&lt;p&gt;Not tomorrow. Right now.&lt;/p&gt;

&lt;p&gt;The garbage is waiting. But maybe it doesn't have to be.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I help companies figure out where AI actually fits in their business. Not the hype version. The version that makes your team's daily work better. If you're sitting on processes that feel like they shouldn't exist in 2026, let's talk.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>career</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Things You're Overengineering in Your AI Agent (The LLM Already Handles Them)</title>
      <dc:creator>Serhii Panchyshyn</dc:creator>
      <pubDate>Tue, 14 Apr 2026 20:15:22 +0000</pubDate>
      <link>https://dev.to/serhiip/things-youre-overengineering-in-your-ai-agent-the-llm-already-handles-them-2lop</link>
      <guid>https://dev.to/serhiip/things-youre-overengineering-in-your-ai-agent-the-llm-already-handles-them-2lop</guid>
      <description>&lt;p&gt;I've been building AI agents in production for the past two years. Not demos. Not weekend projects. Systems that real users talk to every day and get angry at when they break.&lt;/p&gt;

&lt;p&gt;And the pattern I keep seeing? Engineers building elaborate machinery around the model. Custom orchestration layers. Hand-rolled retry logic. Massive tool routing systems. All to solve problems the LLM was already solving if you just let it.&lt;/p&gt;

&lt;p&gt;Here's what I'd rip out if I could go back.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Custom Tool Selection Logic
&lt;/h2&gt;

&lt;p&gt;You built a classifier that decides which tool the agent should use. Maybe a regex-based router. Maybe a whole separate model call just to pick the right function.&lt;/p&gt;

&lt;p&gt;Stop.&lt;/p&gt;

&lt;p&gt;Modern LLMs are shockingly good at tool selection when you give them well-named, well-described tools. The problem was never the model. It was your tool descriptions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Bad: vague tool name, model guesses wrong&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;search&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Searches for things&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Good: specific name, clear scope, model nails it&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;search_customer_accounts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Search customer accounts by account ID, customer name, or date range. Returns subscription status, plan details, and usage history.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fix isn't a smarter router. It's better tool design. Name your tools like you're writing an API for a junior dev who's never seen your codebase. Be embarrassingly specific.&lt;/p&gt;

&lt;p&gt;Tool selection metrics can look great while the final answer is still garbage. I've seen this firsthand. The agent picks the right tool 95% of the time but still gives wrong answers because the tool descriptions don't explain what the returned data actually means.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Prompt Chains for Multi-Step Reasoning
&lt;/h2&gt;

&lt;p&gt;I used to build 4-5 step prompt chains for anything complex. Break the problem down. Feed output A into prompt B. Parse the result. Feed it into prompt C.&lt;/p&gt;

&lt;p&gt;Turns out a single well-structured system prompt with clear instructions handles most of this natively. The model already knows how to decompose problems. You just need to tell it what your constraints are and what good output looks like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Instead of chaining 3 prompts:&lt;/span&gt;
&lt;span class="c1"&gt;// 1. "Classify the user intent"&lt;/span&gt;
&lt;span class="c1"&gt;// 2. "Based on intent X, gather context"  &lt;/span&gt;
&lt;span class="c1"&gt;// 3. "Now generate the answer"&lt;/span&gt;

&lt;span class="c1"&gt;// Just do this:&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;systemPrompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`You are a support agent for a SaaS platform.

When a user asks a question:
1. Identify whether they need account info, billing help, or technical support
2. Use the appropriate tool to get the data
3. Answer in plain English with the specific details they asked for

If you're unsure about intent, ask one clarifying question. Never guess.`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The chain approach also creates a hidden problem. Each step is a failure point. And debugging a 4-step chain when something breaks on step 3 is miserable. A single prompt with clear instructions is easier to observe, easier to eval, and fails more gracefully.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Retrieval Complexity Before Retrieval Quality
&lt;/h2&gt;

&lt;p&gt;This one hurts because I've done it myself.&lt;/p&gt;

&lt;p&gt;You spend two weeks building a hybrid retrieval pipeline. BM25 plus vector search plus re-ranking. Beautiful architecture. Looks great in a diagram.&lt;/p&gt;

&lt;p&gt;Then you realize the actual problem is that your knowledge base documents are written in a way the model can't parse. Or your chunking strategy splits the answer across two chunks and neither one makes sense alone.&lt;/p&gt;

&lt;p&gt;The retrieval pipeline doesn't matter if the underlying data is messy.&lt;/p&gt;

&lt;p&gt;Before you optimize the search algorithm, ask yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If I showed this chunk to a human with no context, would they understand the answer?&lt;/li&gt;
&lt;li&gt;Are my documents written for the model or for the original author's brain?&lt;/li&gt;
&lt;li&gt;Am I chunking at logical boundaries or just every 500 tokens?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've seen teams where retrieval "works" but answers are still wrong because the reference data itself contains outdated or incorrect information. That's not a retrieval problem. That's a data quality problem wearing a retrieval costume.&lt;/p&gt;
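
&lt;p&gt;On the chunking question specifically, splitting on structural boundaries is often just a few lines. A rough sketch for markdown sources, where the regex and the size threshold are arbitrary choices rather than recommendations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import re

def chunk_markdown(text, max_chars=4000):
    # Split in front of headings so each chunk carries its own heading and
    # reads as a self-contained section, instead of cutting every N tokens.
    sections = [s.strip() for s in re.split(r"(?m)^(?=#{1,6} )", text) if s.strip()]
    chunks = []
    for section in sections:
        if chunks and len(chunks[-1]) + len(section) &amp;lt; max_chars:
            chunks[-1] = chunks[-1] + "\n\n" + section  # merge small neighbors
        else:
            chunks.append(section)
    return chunks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;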




&lt;h2&gt;
  
  
  4. Custom Guardrails That Block Legitimate Use
&lt;/h2&gt;

&lt;p&gt;You built a content filter. It catches bad inputs. Great.&lt;/p&gt;

&lt;p&gt;Then users start complaining that normal questions get blocked. Someone asks about "terminating a contract" and the guardrail flags "terminating." Someone asks about "explosive growth" in their metrics and that trips another filter.&lt;/p&gt;

&lt;p&gt;Rule-based guardrails at scale become a whack-a-mole game you can't win.&lt;/p&gt;

&lt;p&gt;The LLM itself is already pretty good at understanding intent and context. Instead of building regex walls around the model, build guardrails INTO the model's instructions. Tell it what topics are off-limits. Tell it what information it should never reveal. Tell it to redirect gracefully instead of stonewalling.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Instead of: regex filter that blocks "kill", "terminate", "destroy"&lt;/span&gt;
&lt;span class="c1"&gt;// Try this in your system prompt:&lt;/span&gt;

&lt;span class="s2"&gt;`If a user asks about topics outside your domain (account management and billing),
politely redirect them. Never share internal system details, API keys, 
or other customer data. You can decline requests, but always explain why 
and suggest what you CAN help with.`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Guardrails and permissions are product design, not just safety theater. Treat them that way.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Agent Memory as a Separate System
&lt;/h2&gt;

&lt;p&gt;You have your agent's database over here. Its memory system over there. A vector store somewhere else. And glue code holding all of it together with prayers and setTimeout.&lt;/p&gt;

&lt;p&gt;The real question is simpler than the architecture you built: what does the agent actually need to remember between sessions?&lt;/p&gt;

&lt;p&gt;Most agents don't need a sophisticated memory system. They need a well-structured context window. The conversation history plus a few key facts about the user. That's it. The model handles the rest.&lt;/p&gt;

&lt;p&gt;When you DO need persistent memory, keep it close to your data. Don't build a separate memory service that has to sync with your database. Store memory where your data lives. Query it with the same tools.&lt;/p&gt;

&lt;p&gt;The moment your agent's memory can't see its own database, you've created an integration problem disguised as a feature.&lt;/p&gt;
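
&lt;p&gt;A sketch of what "close to your data" can mean in the simplest case: a memory table in the same database the agent already queries. Table and column names are made up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sqlite3

# Memory lives next to the rest of the agent's data, queried the same way.
conn = sqlite3.connect("app.db")
conn.execute("""CREATE TABLE IF NOT EXISTS agent_memory (
    user_id TEXT, key TEXT, value TEXT, updated_at TEXT,
    PRIMARY KEY (user_id, key))""")

def remember(user_id, key, value):
    conn.execute(
        "INSERT OR REPLACE INTO agent_memory VALUES (?, ?, ?, datetime('now'))",
        (user_id, key, value))
    conn.commit()

def recall(user_id):
    rows = conn.execute(
        "SELECT key, value FROM agent_memory WHERE user_id = ?", (user_id,))
    return dict(rows.fetchall())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;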




&lt;h2&gt;
  
  
  6. Sub-Agent Orchestration for Everything
&lt;/h2&gt;

&lt;p&gt;Multi-agent architectures are seductive. One agent plans. One retrieves. One generates. One validates. They talk to each other through a message bus. It looks amazing on a whiteboard.&lt;/p&gt;

&lt;p&gt;In production it's a nightmare to debug. When the answer is wrong, which agent broke? The planner? The retriever? The generator? You end up building observability tooling just to trace what happened across four agents when one would have been fine.&lt;/p&gt;

&lt;p&gt;Start with one agent. Push it until it genuinely can't handle the complexity. Only THEN split into specialized sub-agents with clear, narrow responsibilities.&lt;/p&gt;

&lt;p&gt;The rule I use: a sub-agent should exist only when the parent agent's context window literally can't hold the information it needs. Not because "separation of concerns" sounds good in a design doc.&lt;/p&gt;

&lt;p&gt;Specialized agents make sense for high-context tasks where the prompt would blow up the token budget. General agents handle 80% of use cases with less operational overhead. Know which one you're building and why.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. Evaluations That Test Happy Paths
&lt;/h2&gt;

&lt;p&gt;This is the one that bites hardest.&lt;/p&gt;

&lt;p&gt;You write 50 eval cases. The agent passes 48 of them. Ship it.&lt;/p&gt;

&lt;p&gt;Then users find the 200 edge cases you didn't think of. The model hallucinates an account ID. It confidently answers a question it should have said "I don't know" to. It uses data from one customer to answer another customer's question.&lt;/p&gt;

&lt;p&gt;Good evals don't test whether the agent CAN answer correctly. They test whether it WILL answer correctly under pressure.&lt;/p&gt;

&lt;p&gt;Build evals that target failure modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happens when the tool returns empty results?&lt;/li&gt;
&lt;li&gt;What happens when two tools return conflicting information?&lt;/li&gt;
&lt;li&gt;What happens when the user asks something slightly outside the agent's domain?&lt;/li&gt;
&lt;li&gt;What happens when the context is ambiguous?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The eval suite is the real moat. Not the model. Not the prompts. Not the architecture. The team that can systematically find and fix failure modes ships better agents than the team with the fancier framework.&lt;/p&gt;
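
&lt;p&gt;A tiny illustration of what that looks like in practice: cases written around failure modes rather than happy paths. The case format and the runner here are a sketch, not a framework:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Each case pairs an awkward situation with the behavior we expect,
# not just a "correct answer".
FAILURE_MODE_CASES = [
    {"setup": "tool returns empty results",
     "input": "What is the status of ticket 99999?",
     "expect": "says it could not find the ticket, does not invent one"},
    {"setup": "conflicting tool results",
     "input": "Is invoice 4211 paid?",
     "expect": "surfaces the conflict instead of picking a side"},
    {"setup": "out-of-domain question",
     "input": "Can you fix my printer?",
     "expect": "declines and redirects to what it can help with"},
]

def run_evals(agent, judge):
    # `agent` produces an answer; `judge` (human or LLM) scores it against `expect`.
    failures = []
    for case in FAILURE_MODE_CASES:
        answer = agent(case["input"], scenario=case["setup"])
        if not judge(answer, case["expect"]):
            failures.append((case["setup"], answer))
    return failures
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;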




&lt;h2&gt;
  
  
  The Uncomfortable Truth
&lt;/h2&gt;

&lt;p&gt;Most of the complexity in your agent isn't making it smarter. It's making it harder to debug, harder to eval, and harder to change.&lt;/p&gt;

&lt;p&gt;The best agent architectures I've built are embarrassingly simple. One model. Clear system prompt. Well-named tools. Good data. Ruthless evals.&lt;/p&gt;

&lt;p&gt;Everything else is either premature optimization or an expensive lesson waiting to happen.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What's the most over-engineered thing you've built into an agent that turned out to be unnecessary?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>javascript</category>
      <category>production</category>
    </item>
    <item>
      <title>Your Agent Isn't Broken Because of the Prompt. It's Broken Because of What the Model Can See.</title>
      <dc:creator>Serhii Panchyshyn</dc:creator>
      <pubDate>Mon, 13 Apr 2026 23:43:02 +0000</pubDate>
      <link>https://dev.to/serhiip/stop-prompting-start-engineering-perception-4fh5</link>
      <guid>https://dev.to/serhiip/stop-prompting-start-engineering-perception-4fh5</guid>
      <description>&lt;p&gt;I've watched teams spend weeks rewriting the same system prompt.&lt;/p&gt;

&lt;p&gt;Different phrasings. More examples. Clearer instructions. The agent still picks the wrong tool. Still hallucinates. Still feels broken.&lt;/p&gt;

&lt;p&gt;Then they rename six functions and accuracy jumps 30%.&lt;/p&gt;

&lt;p&gt;This pattern shows up constantly across the teams I work with. The model doesn't care how clever your prompt is. It cares about what it can &lt;em&gt;see&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The problem I see everywhere
&lt;/h2&gt;

&lt;p&gt;Teams treat prompts like magic spells. Say the right words, get the right output.&lt;/p&gt;

&lt;p&gt;But agents aren't following instructions. They're making predictions based on everything in context. The tool names. The API responses. The error messages. The structure of your data.&lt;/p&gt;

&lt;p&gt;That's perception. And it matters way more than your system prompt.&lt;/p&gt;

&lt;p&gt;Most teams optimize the wrong layer. They iterate on prompts for weeks while their tool names are &lt;code&gt;handleData&lt;/code&gt; and &lt;code&gt;processRequest&lt;/code&gt;. The model has no chance.&lt;/p&gt;

&lt;p&gt;Here are 10 patterns I've seen work across the past two years of helping teams build production agents 💪&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Tool names are the real prompt
&lt;/h2&gt;

&lt;p&gt;Bad tool names are invisible to the model.&lt;/p&gt;

&lt;p&gt;I audit client codebases and find this constantly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ The model has no idea what this does&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;handleRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;unknown&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ✅ Now it knows exactly when to use this&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;createInvoiceFromQuote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;quoteId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I've seen agents with 30, 40, even 50+ tools defined. Half had names like &lt;code&gt;processData&lt;/code&gt; or &lt;code&gt;executeAction&lt;/code&gt;. The model was guessing.&lt;/p&gt;

&lt;p&gt;We renamed a handful of functions. Tool selection accuracy went from 60% to 87%. No prompt changes.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Tool descriptions matter more than you think
&lt;/h2&gt;

&lt;p&gt;The model reads descriptions to decide which tool to pick.&lt;/p&gt;

&lt;p&gt;I tell clients: write descriptions like you're onboarding a new developer. Because you are.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ Vague description&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;searchRecords&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Search for records in the system&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ✅ Specific description with constraints&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;searchSupportTickets&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Search support tickets by ticket ID, customer email, priority, or date range. Returns max 50 results. Use filters to narrow results before searching.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Specific descriptions reduce wrong tool selection by 30-40% in my experience.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Passing everything into context is lazy
&lt;/h2&gt;

&lt;p&gt;I've reviewed architectures where teams dump entire conversation histories into context. 20 turns. 50 tool results. Everything.&lt;/p&gt;

&lt;p&gt;The model drowns.&lt;/p&gt;

&lt;p&gt;What works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Last 3 turns by default&lt;/li&gt;
&lt;li&gt;Relevant retrieved docs only&lt;/li&gt;
&lt;li&gt;Structured summaries instead of raw data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Less context. Better decisions. Faster responses.&lt;/p&gt;

&lt;p&gt;One team cut their context by 60% and saw answer quality improve. Counter-intuitive until you realize the model was distracted by noise.&lt;/p&gt;
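
&lt;p&gt;A sketch of that kind of context builder. The turn count, the doc limit, and the summarizer are placeholder choices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def build_context(history, retrieved_docs, summarize):
    # Keep the last 3 turns verbatim; compress everything older into a summary.
    recent = history[-3:]
    older = history[:-3]
    summary = summarize(older) if older else ""

    parts = []
    if summary:
        parts.append({"role": "system", "content": "Conversation so far: " + summary})
    # Only the retrieved docs that actually matched, not the whole knowledge base.
    for doc in retrieved_docs[:5]:
        parts.append({"role": "system", "content": "Reference: " + doc["text"]})
    return parts + recent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;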




&lt;h2&gt;
  
  
  4. Scoped retrieval beats broad retrieval
&lt;/h2&gt;

&lt;p&gt;Early RAG implementations pull from everywhere. The whole knowledge base. 200+ docs. The model has no idea which ones matter.&lt;/p&gt;

&lt;p&gt;I push clients toward module-level filtering. If someone asks about billing, only retrieve billing docs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ Retrieve from everything&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// ✅ Scope to relevant module&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
  &lt;span class="na"&gt;module&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;detectModule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;maxResults&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; 
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Recall goes up. Hallucinations go down. This should be the default from day one.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Structured outputs prevent downstream chaos
&lt;/h2&gt;

&lt;p&gt;If another agent or system consumes your output, structure it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ Free text response&lt;/span&gt;
&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;I found 3 tickets that match. The first one is #12345 from a customer in Chicago...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;// ✅ Structured response&lt;/span&gt;
&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tickets&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;12345&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;customer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Acme Corp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;status&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;escalated&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;12346&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;customer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Globex Inc&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;status&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;resolved&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unstructured responses compound errors. Each downstream consumer has to parse and guess. I've seen entire pipelines break because one agent returned prose instead of JSON.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. Silent failures are invisible failures
&lt;/h2&gt;

&lt;p&gt;The model can't fix what it can't see.&lt;/p&gt;

&lt;p&gt;I audit error handling in every client codebase. Same pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ Silent failure&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;hasPermission&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ✅ Loud failure&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;hasPermission&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PERMISSION_DENIED&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;User lacks 'tickets.create' permission&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;requiredPermission&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tickets.create&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;suggestedAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Request access from workspace admin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explicit errors let the agent reason about what went wrong. And let you debug faster.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. Real system state beats assumed state
&lt;/h2&gt;

&lt;p&gt;I've seen agents confidently tell users something was done when it wasn't. Ticket resolved. Payment processed. Account updated. The agent assumed based on patterns instead of checking the actual record.&lt;/p&gt;

&lt;p&gt;This happens when teams don't pass real state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ Agent has to guess&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;12345&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ✅ Agent knows the truth&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="na"&gt;ticket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;12345&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;open&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;lastUpdate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2024-01-15T10:30:00Z&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;assignedAgent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Sarah K.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Agents will make up state if you don't give them real state. Always.&lt;/p&gt;




&lt;h2&gt;
  
  
  8. Specialized agents beat one generalist
&lt;/h2&gt;

&lt;p&gt;I've seen teams try to build one agent that handles everything. Customer questions. Data entry. Workflow automation. Reports.&lt;/p&gt;

&lt;p&gt;It's mediocre at all of them.&lt;/p&gt;

&lt;p&gt;The pattern that works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One agent for customer Q&amp;amp;A&lt;/strong&gt; using the knowledge base&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One agent for data operations&lt;/strong&gt; with strict schemas&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One agent for document parsing&lt;/strong&gt; with specialized prompts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each one is easier to eval. Easier to constrain. Easier to improve.&lt;/p&gt;

&lt;p&gt;Generalist agents are harder to debug and harder to trust. I push clients toward decomposition early.&lt;/p&gt;




&lt;h2&gt;
  
  
  9. Guardrails should block bad things, not useful things
&lt;/h2&gt;

&lt;p&gt;"Can you help me set up a webhook?" → BLOCKED (mentions code execution)&lt;/p&gt;

&lt;p&gt;"What's the API endpoint for exports?" → BLOCKED (mentions API)&lt;/p&gt;

&lt;p&gt;Users stop trusting the product. Not because the AI is bad. Because the guardrails are dumb.&lt;/p&gt;

&lt;p&gt;Narrow guardrails work better. Be specific about what's actually dangerous. Allow everything else.&lt;/p&gt;




&lt;h2&gt;
  
  
  10. Audit perception before rewriting prompts
&lt;/h2&gt;

&lt;p&gt;When a client tells me their agent is underperforming, I ask these questions first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can it see the right tools? Are names and descriptions clear?&lt;/li&gt;
&lt;li&gt;Can it see the right context? Or is it drowning in noise?&lt;/li&gt;
&lt;li&gt;Can it see real state? Or is it guessing?&lt;/li&gt;
&lt;li&gt;Can it see errors? Or do failures happen silently?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nine times out of ten, the problem is perception. Not the prompt.&lt;/p&gt;




&lt;h2&gt;
  
  
  The outcome when you get this right
&lt;/h2&gt;

&lt;p&gt;Teams that engineer perception instead of prompts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stop the endless prompt iteration cycle&lt;/li&gt;
&lt;li&gt;Get measurable accuracy improvements in days, not months&lt;/li&gt;
&lt;li&gt;Build agents that actually work in production&lt;/li&gt;
&lt;li&gt;Have clear debugging paths when things break&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The teams that keep tweaking prompts stay stuck. I've seen it enough times to know.&lt;/p&gt;




&lt;h2&gt;
  
  
  The mental model shift
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prompt engineering&lt;/strong&gt; asks: "How do I word this better?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Perception engineering&lt;/strong&gt; asks: "What does the agent need to see to make a good decision?"&lt;/p&gt;

&lt;p&gt;One has diminishing returns after a few iterations.&lt;/p&gt;

&lt;p&gt;The other compounds as your system improves.&lt;/p&gt;




&lt;p&gt;Stop rewriting prompts. Start auditing what your agent can perceive.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rename tools for clarity&lt;/li&gt;
&lt;li&gt;Scope your context&lt;/li&gt;
&lt;li&gt;Pass real state&lt;/li&gt;
&lt;li&gt;Make errors loud&lt;/li&gt;
&lt;li&gt;Use specialized agents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your agent is only as good as what it can see 👀&lt;/p&gt;




&lt;p&gt;If you're building agents and want a second set of eyes on your architecture, I help teams get this right. DM me on X or LinkedIn.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>architecture</category>
      <category>production</category>
    </item>
    <item>
      <title>No Evals, No Idea. How 40% of RAG Answers Go Wrong.</title>
      <dc:creator>Serhii Panchyshyn</dc:creator>
      <pubDate>Mon, 13 Apr 2026 20:58:06 +0000</pubDate>
      <link>https://dev.to/serhiip/my-first-rag-system-had-no-evals-40-of-answers-were-wrong-ab</link>
      <guid>https://dev.to/serhiip/my-first-rag-system-had-no-evals-40-of-answers-were-wrong-ab</guid>
      <description>&lt;p&gt;When I started building production RAG systems, I noticed something: nobody was measuring retrieval quality.&lt;/p&gt;

&lt;p&gt;Teams would ship a system, ask users if it "felt good," and move on. No metrics. No baseline. No way to know if changes actually helped.&lt;/p&gt;

&lt;p&gt;So I started measuring everything. And the first thing I discovered: &lt;strong&gt;most RAG failures aren't LLM failures. They're retrieval failures.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The documents that could answer the question aren't making it into the context window. The LLM is being asked to answer questions without the information it needs. No wonder it hallucinates.&lt;/p&gt;

&lt;p&gt;Here's what I've learned about measuring and fixing RAG systems across dozens of client engagements.&lt;/p&gt;




&lt;h2&gt;
  
  
  The metric that actually matters: Recall@k
&lt;/h2&gt;

&lt;p&gt;Before I measure anything else on a new RAG system, I measure &lt;strong&gt;Recall@k&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Recall@k answers a simple question: "Of all the documents that &lt;em&gt;should&lt;/em&gt; have been retrieved, what percentage actually made it into the top k results?"&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;recall_at_k&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;retrieved_ids&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;relevant_ids&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;What % of relevant docs are in the top k results?&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;top_k&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;retrieved_ids&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;relevant&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;relevant_ids&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;relevant&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;top_k&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;relevant&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;relevant&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On systems I've audited, Recall@10 is often around 60%. That means 40% of the time, the document that could answer the question isn't even in the context. The LLM never had a chance.&lt;/p&gt;

&lt;p&gt;Here's the math that drives everything:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;P(correct answer) ≈ P(correct context retrieved)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the right chunks aren't retrieved, the LLM can't answer correctly. This is why I always measure retrieval separately from answer quality. Otherwise you're debugging the wrong layer.&lt;/p&gt;




&lt;h2&gt;
  
  
  You can start measuring today
&lt;/h2&gt;

&lt;p&gt;You don't need production traffic to build evals. Generate synthetic test data from your corpus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_synthetic_evals&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Generate question-answer pairs from your chunks.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;eval_pairs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Generate 3 questions that this text can answer.
Make them specific. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is this about?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; doesn&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;t test retrieval.

Text:
&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;

Return JSON: [{{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chunk_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;}}]
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;eval_pairs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;extend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;parse_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;eval_pairs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;50-100 questions is enough to establish a baseline. Run your retriever, measure Recall@10, write down the number. Now you can actually tell if changes help.&lt;/p&gt;
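
&lt;p&gt;Tying the two functions together is a few lines. A minimal baseline run, assuming your retriever exposes a search method that returns ranked chunk IDs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal baseline run over the synthetic eval set.
# retriever.search is assumed to return a ranked list of chunk IDs.
eval_pairs = generate_synthetic_evals(chunks)

recalls = []
for pair in eval_pairs:
    retrieved = retriever.search(pair["question"], k=10)
    recalls.append(recall_at_k(retrieved, [pair["chunk_id"]], k=10))

print(f"Recall@10: {sum(recalls) / len(recalls):.2%}")  # this is your baseline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;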




&lt;h2&gt;
  
  
  The two fixes that consistently move the needle
&lt;/h2&gt;

&lt;p&gt;I've tried a lot of retrieval improvements across different client systems. Most make marginal differences. Two consistently deliver results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fix 1: Hybrid search
&lt;/h3&gt;

&lt;p&gt;Embeddings are great at semantic similarity. "How do I reset my password?" matches "Steps to recover account access" even though they share no keywords.&lt;/p&gt;

&lt;p&gt;But embeddings are weak on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Numbers&lt;/strong&gt;: They don't understand that 49 is close to 50&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exact match&lt;/strong&gt;: Product codes, IDs, ticker symbols&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rare terms&lt;/strong&gt;: Domain jargon not in the training data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;BM25 (keyword search) catches what embeddings miss. Combine them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hybrid_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Combine embedding search and BM25 using RRF.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="n"&gt;embedding_results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;embedding_index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;bm25_results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bm25_index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Reciprocal Rank Fusion
&lt;/span&gt;    &lt;span class="n"&gt;scores&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
    &lt;span class="n"&gt;rrf_k&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rank&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;doc_id&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding_results&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rrf_k&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;rank&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rank&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;doc_id&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bm25_results&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rrf_k&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;rank&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;ranked&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sorted&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;reverse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ranked&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Typical improvement: &lt;strong&gt;5-15% recall boost&lt;/strong&gt; depending on query mix.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fix 2: Add a reranker
&lt;/h3&gt;

&lt;p&gt;Embedding models are bi-encoders. They encode query and documents separately, then compare. Fast, but imprecise.&lt;/p&gt;

&lt;p&gt;Cross-encoders (rerankers) look at the query and document together. Slower, but much more accurate. Use them as a second pass:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search_with_rerank&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Retrieve broadly, then rerank precisely.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="c1"&gt;# Cast a wide net
&lt;/span&gt;    &lt;span class="n"&gt;candidates&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;hybrid_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Rerank with cross-encoder
&lt;/span&gt;    &lt;span class="n"&gt;pairs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;get_content&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc_id&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;candidates&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;scores&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;reranker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pairs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Return top k after reranking
&lt;/span&gt;    &lt;span class="n"&gt;ranked&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sorted&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;candidates&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;reverse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;doc_id&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;ranked&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Typical improvement: &lt;strong&gt;another 5-10%&lt;/strong&gt; on top of hybrid search.&lt;/p&gt;

&lt;p&gt;Combined, these two fixes often take a system from 60% to 80% recall. That's the difference between "works sometimes" and "works reliably."&lt;/p&gt;




&lt;h2&gt;
  
  
  Chunking decisions that make or break retrieval
&lt;/h2&gt;

&lt;p&gt;Your chunking strategy matters more than your embedding model choice. A few things I always check when onboarding a new project:&lt;/p&gt;

&lt;h3&gt;
  
  
  The "it" problem
&lt;/h3&gt;

&lt;p&gt;Chunks that start with "It also supports..." or "This feature allows..." are useless on their own. The word "it" has no meaning without the previous chunk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix: Prepend context to every chunk.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;chunk_with_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;chunks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sections&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Prepend document and section info
&lt;/span&gt;        &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Document: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Section: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;section&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;header&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk_text&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;split_section&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;section&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;chunk_text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metadata&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;doc_title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;section&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;section&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;header&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;chunks&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Other chunking rules I follow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Never split mid-table.&lt;/strong&gt; A row without headers is meaningless.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10-20% overlap&lt;/strong&gt; between consecutive chunks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test multiple chunk sizes&lt;/strong&gt; (256, 512, 1024 tokens). Optimal depends on your queries; see the sketch below.&lt;/li&gt;
&lt;/ol&gt;
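
&lt;p&gt;A rough sketch of rules 2 and 3, where tokenize and detokenize stand in for whatever tokenizer you use, and measure_recall_at_10 wraps re-indexing plus the eval loop from earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: overlapping token windows, swept over candidate chunk sizes.
def split_with_overlap(text: str, chunk_size: int, overlap_pct: float = 0.15) -&amp;gt; list:
    tokens = tokenize(text)
    step = int(chunk_size * (1 - overlap_pct))  # rule 2: 10-20% overlap between neighbors
    return [detokenize(tokens[i:i + chunk_size]) for i in range(0, len(tokens), step)]

# Rule 3: keep whichever size scores best on your eval set, not whichever "feels right"
for size in (256, 512, 1024):
    chunks = [c for doc in docs for c in split_with_overlap(doc.text, size)]
    print(size, measure_recall_at_10(chunks))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;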




&lt;h2&gt;
  
  
  The workflow I run on every new RAG engagement
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Phase 1-2: Establish baseline&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Parse documents (test multiple parsers for PDFs)&lt;/li&gt;
&lt;li&gt;Chunk with context headers&lt;/li&gt;
&lt;li&gt;Generate 50-100 synthetic eval questions&lt;/li&gt;
&lt;li&gt;Build basic retriever&lt;/li&gt;
&lt;li&gt;Measure Recall@10&lt;/li&gt;
&lt;li&gt;Write down the number&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Phase 2-4: Apply standard fixes&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add hybrid search (BM25 + embeddings)&lt;/li&gt;
&lt;li&gt;Add reranker&lt;/li&gt;
&lt;li&gt;Measure again&lt;/li&gt;
&lt;li&gt;Compare to baseline&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Phase 4+: Debug specific failures&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Break down recall by query type (sketch below)&lt;/li&gt;
&lt;li&gt;Find worst-performing segment&lt;/li&gt;
&lt;li&gt;Fix that segment&lt;/li&gt;
&lt;li&gt;Measure again&lt;/li&gt;
&lt;/ol&gt;
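
&lt;p&gt;For the breakdown step, a sketch that assumes each eval pair was tagged with a query_type when you generated the questions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Assumes each eval pair carries a "query_type" tag (e.g. "numeric", "exact-id",
# "conceptual") added at question-generation time.
from collections import defaultdict

by_type = defaultdict(list)
for pair in eval_pairs:
    retrieved = retriever.search(pair["question"], k=10)
    by_type[pair["query_type"]].append(recall_at_k(retrieved, [pair["chunk_id"]], k=10))

for query_type, values in sorted(by_type.items()):
    print(f"{query_type}: {sum(values) / len(values):.2%}")  # fix the worst segment first
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;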

&lt;p&gt;The key: measure after every change. If you can't see improvement in numbers, you're guessing.&lt;/p&gt;




&lt;h2&gt;
  
  
  When to measure answer quality
&lt;/h2&gt;

&lt;p&gt;Only after retrieval is solid.&lt;/p&gt;

&lt;p&gt;Once Recall@10 is above 80%, start measuring end-to-end:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;eval_answer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Use LLM-as-judge for answer evaluation.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Evaluate this answer. Return JSON:
- correct: true/false (factually accurate)
- grounded: true/false (supported by the context)
- complete: true/false (addresses the full question)

Context: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;format_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
Question: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
Answer: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;parse_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But if retrieval is broken, this eval is noise. You're just measuring how well your LLM fills in gaps it shouldn't have to fill.&lt;/p&gt;




&lt;h2&gt;
  
  
  The takeaway
&lt;/h2&gt;

&lt;p&gt;RAG quality is retrieval quality.&lt;/p&gt;

&lt;p&gt;Before you touch your prompts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generate synthetic evals from your corpus&lt;/li&gt;
&lt;li&gt;Measure Recall@10&lt;/li&gt;
&lt;li&gt;Add hybrid search&lt;/li&gt;
&lt;li&gt;Add a reranker&lt;/li&gt;
&lt;li&gt;Fix your chunking&lt;/li&gt;
&lt;li&gt;Measure again&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The fixes are straightforward. The impact is not.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 1 of a series on production AI systems. Next: how to know when to fix your prompts vs. build an evaluator.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  About me
&lt;/h2&gt;

&lt;p&gt;I help B2B SaaS companies ship production AI in 6 weeks.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rag</category>
      <category>production</category>
      <category>evaluation</category>
    </item>
  </channel>
</rss>
