<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: clomia</title>
    <description>The latest articles on DEV Community by clomia (@clomia_d9be33fb6dacd82fa1).</description>
    <link>https://dev.to/clomia_d9be33fb6dacd82fa1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3871247%2F5850125f-96c9-459d-a5e3-b5dea29a5d37.gif</url>
      <title>DEV Community: clomia</title>
      <link>https://dev.to/clomia_d9be33fb6dacd82fa1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/clomia_d9be33fb6dacd82fa1"/>
    <language>en</language>
    <item>
      <title>I built a Claude Code plugin that finds what your agent can't see</title>
      <dc:creator>clomia</dc:creator>
      <pubDate>Fri, 10 Apr 2026 08:04:55 +0000</pubDate>
      <link>https://dev.to/clomia_d9be33fb6dacd82fa1/i-built-a-claude-code-plugin-that-finds-what-your-agent-cant-see-4fn8</link>
      <guid>https://dev.to/clomia_d9be33fb6dacd82fa1/i-built-a-claude-code-plugin-that-finds-what-your-agent-cant-see-4fn8</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Not "think deeper" or "try again" — "look where you haven't looked." Parallax spins up a separate agent with no shared conversation to find regions your main agent never considered.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude plugin marketplace add clomia/claude-automata
claude plugin install parallax@claude-automata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then append &lt;code&gt;parallaxthink&lt;/code&gt; to any prompt.&lt;/p&gt;
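&lt;p&gt;For example (the prompt below is illustrative, not from the plugin docs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude "Audit error handling across this repo and fix what you find. parallaxthink"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;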




&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;You know the pattern: you give Claude a complex task, it finishes, and something's clearly missing. You re-prompt, re-explain context, wait again. Maybe 3-5 times.&lt;/p&gt;

&lt;p&gt;LLMs narrow their exploration as they generate. Each token constrains what comes next. The bottleneck is thinking &lt;em&gt;breadth&lt;/em&gt;, not depth. (For the curious: GPT-4 goes from 4% to 74% on Game of 24 just by exploring multiple paths instead of one.)&lt;/p&gt;

&lt;p&gt;Self-correction within the same context doesn't fix this either — it tends to reinforce what the agent already believes rather than surface something new.&lt;/p&gt;

&lt;h2&gt;What Parallax does&lt;/h2&gt;

&lt;p&gt;Every time your agent tries to stop, Parallax intervenes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A separate analysis agent spins up — fresh context, no shared conversation history.&lt;/li&gt;
&lt;li&gt;It reads through your codebase (read-only, can't modify anything) and identifies regions the main agent never considered.&lt;/li&gt;
&lt;li&gt;If it finds something, it injects the feedback and the agent keeps working. If nothing's left, it stops.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There's a hard cap at 30 rounds; it typically finishes within 10. Each round, the analyzer deliberately picks the region &lt;em&gt;farthest&lt;/em&gt; from what's already been covered, so it doesn't repeat itself.&lt;/p&gt;

&lt;p&gt;Your original prompt is captured at the start, so even after auto-compaction the mission doesn't drift.&lt;/p&gt;
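&lt;p&gt;The loop above can be sketched in shell. This is a minimal sketch of the control flow only; &lt;code&gt;analysis_agent&lt;/code&gt; and &lt;code&gt;inject&lt;/code&gt; are illustrative stand-ins for the plugin's internals, stubbed here so the loop runs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Stand-in for the fresh-context, read-only analyzer: the real plugin
# spins up a separate agent with no shared conversation history.
# This stub "finds" something for the first three rounds, then nothing.
analysis_agent() {
  if [ "$1" -lt 3 ]; then echo "uncovered region $1"; fi
}

# Stand-in for feeding the analyzer's findings back to the main agent.
inject() { echo "round $1: injecting feedback"; }

round=0
while [ "$round" -lt 30 ]; do           # hard cap at 30 rounds
  feedback=$(analysis_agent "$round")   # fresh context every round
  if [ -z "$feedback" ]; then
    break                               # nothing left to find: allow the stop
  fi
  inject "$round"                       # main agent keeps working
  round=$((round + 1))
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The gate fires whenever the agent tries to stop, so the agent only actually stops once a fresh-context pass comes back empty.&lt;/p&gt;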

&lt;h2&gt;How it differs from existing approaches&lt;/h2&gt;

&lt;p&gt;Different layers, different problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;"Think deeper"&lt;/strong&gt; (e.g. extended thinking) = more reasoning budget in the same context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Try again"&lt;/strong&gt; (e.g. retry loops) = fresh instances, but prior direction often carries over&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;parallaxthink&lt;/strong&gt; = find what you missed from a completely separate context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They're not competing. They stack.&lt;/p&gt;

&lt;h2&gt;What this actually looks like&lt;/h2&gt;

&lt;p&gt;My setup in Claude Code: &lt;code&gt;opus[1m] + /effort max + thinking mode + parallaxthink + remote-control&lt;/code&gt; — that's max-context Opus, maximum reasoning effort, extended thinking enabled, Parallax active, and remote-control mode so I don't need to approve each step.&lt;/p&gt;

&lt;p&gt;Craft a solid prompt, dial up the reasoning, and let it go. Walk away, come back hours later, done. Parallax keeps pushing the agent into regions it would never have explored on its own — round after round, autonomously. No re-prompting, no re-explaining context. Just one prompt.&lt;/p&gt;

&lt;p&gt;The stronger the reasoning, the better Parallax works — the analysis agent needs to actually &lt;em&gt;think&lt;/em&gt; to find meaningful blind spots. That's why I run it at max effort with opus.&lt;/p&gt;

&lt;h2&gt;Limitations&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No formal benchmarks.&lt;/strong&gt; Run &lt;code&gt;/parallax-log&lt;/code&gt; to see exactly what the analyzer found each round and judge for yourself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Each round adds analysis overhead.&lt;/strong&gt; But compare that to re-prompting and re-explaining context 3-5 times.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Source&lt;/h2&gt;

&lt;p&gt;Fully open source: &lt;a href="https://github.com/clomia/claude-automata" rel="noopener noreferrer"&gt;github.com/clomia/claude-automata&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Architecture diagram, docs, and all prompt templates are in the repo. Feedback and issues welcome.&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>claude</category>
    </item>
  </channel>
</rss>
