<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Fred</title>
    <description>The latest articles on DEV Community by Fred (@frd).</description>
    <link>https://dev.to/frd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2153396%2F6aa85216-72e8-4f1e-b9cd-4d301531ba5b.png</url>
      <title>DEV Community: Fred</title>
      <link>https://dev.to/frd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/frd"/>
    <language>en</language>
    <item>
      <title>I was the copy-paste middleman between Claude and Codex. So I built a terminal.</title>
      <dc:creator>Fred</dc:creator>
      <pubDate>Tue, 14 Apr 2026 22:12:13 +0000</pubDate>
      <link>https://dev.to/unioney/fixy-code-open-source-terminal-that-puts-claude-code-and-codex-in-the-same-conversation-4m69</link>
      <guid>https://dev.to/unioney/fixy-code-open-source-terminal-that-puts-claude-code-and-codex-in-the-same-conversation-4m69</guid>
      <description>&lt;p&gt;For months my workflow looked like this: ask Claude Code to build something, get a confident answer, ship it, find the bug three days later.&lt;/p&gt;

&lt;p&gt;The problem wasn't Claude. The problem was that Claude had nobody to disagree with it.&lt;/p&gt;

&lt;p&gt;So I built Fixy Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;Fixy Code is an open source terminal that puts Claude Code, Codex and Gemini in the same conversation thread. They see each other's output. They challenge each other's decisions. When they genuinely disagree, you decide which approach wins.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@claude review this function
@codex do you agree?
@gemini what do you think about the front-end?
@worker make an implementation plan
@all build the auth middleware
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;@all&lt;/code&gt; command triggers a full collaboration loop — agents discuss the task, agree on a plan, execute in batches, and review each other's output before anything gets committed. &lt;code&gt;@worker&lt;/code&gt; handles execution while the thinker agents plan and review.&lt;/p&gt;
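That loop can be pictured as a fixed stage pipeline. This is a minimal sketch, not Fixy Code's actual orchestrator: the stage names and the agent function signature are my assumptions.

```typescript
// Hypothetical sketch of an @all-style collaboration loop.
// Stage names and the AgentFn signature are illustrative assumptions.
type AgentFn = (prompt: string, transcript: string[]) => string;

function runAll(task: string, agents: { [name: string]: AgentFn }): string[] {
  const transcript: string[] = [];
  const stages = ["discuss", "plan", "execute", "review"];
  for (const stage of stages) {
    // Each agent sees the full transcript so far, including the others' turns,
    // so a reviewer can push back on what an executor just produced.
    for (const [name, fn] of Object.entries(agents)) {
      const reply = fn(stage + ": " + task, transcript);
      transcript.push("@" + name + " (" + stage + "): " + reply);
    }
  }
  return transcript;
}
```

The point of the shared transcript is that nothing happens in isolation: every stage's output becomes context for the next agent's turn.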

&lt;h2&gt;
  
  
  When they disagree
&lt;/h2&gt;

&lt;p&gt;When agents reach different conclusions, you get a choice:&lt;br&gt;

&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[1] Go with @claude
[2] Go with @codex
[3] Go with @gemini
[4] Ask them to find middle ground
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You decide. Not the tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I discovered
&lt;/h2&gt;

&lt;p&gt;The models disagree more than I expected. And the disagreements are useful — not noise.&lt;/p&gt;

&lt;p&gt;Claude tends toward clean architecture. Codex tends toward pragmatic solutions. Gemini brings a third angle. When they conflict, the conflict usually reveals a real decision worth making consciously rather than accidentally.&lt;/p&gt;

&lt;h2&gt;
  
  
  The technical decisions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Local only — nothing leaves your machine&lt;/li&gt;
&lt;li&gt;Inherits your existing Claude Code, Codex and Gemini sessions — zero re-auth&lt;/li&gt;
&lt;li&gt;Git worktree isolation per agent per thread — each agent works in its own branch, never touching your main tree&lt;/li&gt;
&lt;li&gt;Append-only conversation log stored in &lt;code&gt;~/.fixy/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Built in TypeScript, MIT licensed&lt;/li&gt;
&lt;/ul&gt;
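An append-only log like that can be as simple as one JSONL file per thread. A rough sketch, where the record shape and file naming are my guesses, not the project's real format:

```typescript
// Hypothetical sketch of an append-only JSONL conversation log.
// The record shape and file layout are assumptions, not Fixy Code's format.
import { appendFileSync, readFileSync } from "node:fs";

interface LogEntry { sender: string; body: string; ts: number; }

// Records are only ever appended, never rewritten, so history stays auditable.
function appendEntry(logPath: string, entry: LogEntry): void {
  appendFileSync(logPath, JSON.stringify(entry) + "\n");
}

function readLog(logPath: string): LogEntry[] {
  return readFileSync(logPath, "utf8")
    .split("\n")
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line));
}
```

Append-only storage pairs naturally with the worktree isolation above: you can always replay exactly what each agent saw and said.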

&lt;h2&gt;
  
  
  What I'd do differently
&lt;/h2&gt;

&lt;p&gt;I built too many features before showing anyone. Red Room mode, disagreement panels, context compaction, three adapters. All before a single external user tried it.&lt;/p&gt;

&lt;p&gt;If I started over I'd ship &lt;code&gt;@claude&lt;/code&gt; and &lt;code&gt;@codex&lt;/code&gt; in one thread and nothing else. Everything else is a distraction until someone actually wants the core.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @fixy/code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;fixy&lt;/code&gt; from inside any git repo. Free tier available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/fixy-ai/fixy-code" rel="noopener noreferrer"&gt;github.com/fixy-ai/fixy-code&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://fixy.ai/code" rel="noopener noreferrer"&gt;fixy.ai/code&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>coding</category>
      <category>claude</category>
      <category>programming</category>
    </item>
    <item>
      <title>What happens when you put multiple AI models in the same real-time conversation</title>
      <dc:creator>Fred</dc:creator>
      <pubDate>Thu, 26 Mar 2026 07:09:52 +0000</pubDate>
      <link>https://dev.to/unioney/what-happens-when-you-put-multiple-ai-models-in-the-same-real-time-conversation-36ad</link>
      <guid>https://dev.to/unioney/what-happens-when-you-put-multiple-ai-models-in-the-same-real-time-conversation-36ad</guid>
      <description>&lt;p&gt;For months my workflow looked like this: ask ChatGPT a question, copy the answer, paste it into Claude for a second opinion, then check with Gemini. Every single day. I was the copy-paste middleman between AI models.&lt;/p&gt;

&lt;p&gt;So I asked myself — what if they could just talk in the same room?&lt;/p&gt;

&lt;p&gt;I built a platform where multiple AI agents and multiple humans share one real-time group conversation. Not side-by-side comparison. Not model switching. An actual group chat where everyone — human and AI — sees every message and responds in context.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I discovered
&lt;/h2&gt;

&lt;p&gt;The models genuinely disagree with each other. Not politely — substantively.&lt;/p&gt;

&lt;p&gt;I asked all four models to analyze a business strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GPT&lt;/strong&gt; was optimistic. Growth projections, opportunity everywhere.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude&lt;/strong&gt; was the skeptic. Poked holes in every assumption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini&lt;/strong&gt; played the middle. Brought data from both sides.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grok&lt;/strong&gt; went sideways. Made a point nobody else considered.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The most interesting part: when Claude disagreed with GPT, GPT adjusted its next response. It saw the critique in context and self-corrected — without me telling it to.&lt;/p&gt;

&lt;p&gt;After a few rounds, it felt less like using a tool and more like moderating a team with actual personalities.&lt;/p&gt;

&lt;h2&gt;
  
  
  The architecture
&lt;/h2&gt;

&lt;p&gt;The real challenge wasn't integrating AI APIs — that's straightforward. The hard part was building real-time multi-user chat where some participants are humans (persistent WebSocket connections) and some are AI agents (server-side, triggered by messages).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend: React 19, Vite 6, Tailwind 4&lt;/li&gt;
&lt;li&gt;Backend: Express 5, TypeScript&lt;/li&gt;
&lt;li&gt;Database: PostgreSQL 15&lt;/li&gt;
&lt;li&gt;Cache/Real-time state: Redis 7&lt;/li&gt;
&lt;li&gt;WebSocket: Socket.IO&lt;/li&gt;
&lt;li&gt;AI: OpenAI, Anthropic, Google, xAI SDKs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key architectural decisions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;@Mention parsing&lt;/strong&gt; — messages are parsed before sending to determine which agents should respond. With no @mention, every agent whose &lt;code&gt;auto_respond&lt;/code&gt; flag is set replies automatically. @mentioning a specific agent fires only that one. &lt;code&gt;@all-agents&lt;/code&gt; fires every agent sequentially, with a queue to prevent overlapping replies.&lt;/p&gt;
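Those routing rules fit in a few lines. A rough TypeScript illustration, where the Agent shape and the mention regex are assumptions rather than the platform's actual code:

```typescript
// Hypothetical sketch of the @mention routing rules described above.
// The Agent shape and mention regex are assumptions, not the real schema.
interface Agent { name: string; autoRespond: boolean; }

function resolveResponders(message: string, agents: Agent[]): string[] {
  const mentions = Array.from(message.matchAll(/@([a-z0-9-]+)/g), (m) => m[1]);

  if (mentions.includes("all-agents")) {
    return agents.map((a) => a.name); // everyone fires, queued sequentially
  }
  const mentioned = agents.filter((a) => mentions.includes(a.name));
  if (mentioned.length > 0) {
    return mentioned.map((a) => a.name); // only the named agents fire
  }
  // No @mention: agents with auto_respond enabled reply on their own.
  return agents.filter((a) => a.autoRespond).map((a) => a.name);
}
```

Doing this resolution before the message is persisted means the send path already knows which server-side agent jobs to enqueue.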

&lt;p&gt;&lt;strong&gt;Sender type&lt;/strong&gt; — the existing &lt;code&gt;role&lt;/code&gt; column (user/assistant/system) wasn't granular enough for multi-user rooms. Added a &lt;code&gt;sender_type&lt;/code&gt; enum (human/agent) plus &lt;code&gt;agent_id&lt;/code&gt; foreign key for proper attribution.&lt;/p&gt;
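Expressed as TypeScript types, the attribution model might look like the following. The column names come from the paragraph above; every other field is a guess at the schema:

```typescript
// Sketch of the sender attribution model; fields beyond sender_type
// and agent_id are assumptions about the schema.
type SenderType = "human" | "agent";

interface ChatMessage {
  id: number;
  roomId: number;
  senderType: SenderType;
  agentId: number | null; // set only when senderType is "agent"
  body: string;
}

// Invariant check: agent messages carry an agent_id, human messages do not.
function validAttribution(m: ChatMessage): boolean {
  if (m.senderType === "agent") return m.agentId !== null;
  return m.agentId === null;
}
```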

&lt;p&gt;&lt;strong&gt;Room participants&lt;/strong&gt; — had to build a full &lt;code&gt;room_participants&lt;/code&gt; table with roles (owner/admin/member), invite system with shareable codes, and permission checks on every message send.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent orchestration&lt;/strong&gt; — each agent has its own model, provider, system prompt, and temperature. When triggered, the full conversation history is sent as context so agents see what everyone (including other agents) has said.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context length&lt;/strong&gt; — long conversations blow up token costs. Solved with cursor-based pagination: load the last 100 messages, fetch more on scroll. AI calls only receive the recent context window.&lt;/p&gt;
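The cursor pattern is easy to show against an in-memory list. A sketch, assuming messages are ordered by an integer id (the real implementation pages over PostgreSQL):

```typescript
// Hypothetical sketch of cursor-based pagination: newest page first,
// older pages fetched by passing the smallest id already loaded.
interface Message { id: number; body: string; }

function fetchPage(all: Message[], limit: number, beforeId?: number): Message[] {
  const older = beforeId === undefined
    ? all
    : all.filter((m) => beforeId > m.id);
  return older.slice(-limit); // the newest `limit` of what remains
}
```

The next cursor is the smallest id in the returned page, and the same bounded window, rather than the full history, is what gets sent along on AI calls.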

&lt;p&gt;&lt;strong&gt;Credit system&lt;/strong&gt; — prepaid credits instead of flat subscriptions. Users buy credits in advance, each AI response deducts based on model + tokens used. We never owe an API provider money we haven't collected.&lt;/p&gt;
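A sketch of that deduction logic, with made-up rates; the real price card and model identifiers are the platform's, not shown here:

```typescript
// Hypothetical sketch of the prepaid credit model: deduct per response,
// reject the call when the balance cannot cover the cost.
const CREDITS_PER_1K_TOKENS: { [model: string]: number } = {
  "gpt-4o": 5,
  "claude-sonnet": 4, // illustrative rates, not the real price card
};

function costFor(model: string, tokensUsed: number): number {
  const rate = CREDITS_PER_1K_TOKENS[model] ?? 10; // unknown models priced high
  return Math.ceil((tokensUsed / 1000) * rate);
}

function deduct(balance: number, model: string, tokensUsed: number): number {
  const cost = costFor(model, tokensUsed);
  if (cost > balance) {
    throw new Error("insufficient credits"); // never owe the provider
  }
  return balance - cost;
}
```

Charging against a prepaid balance turns every API call into a bounded liability, which is the whole point of the design.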

&lt;h2&gt;
  
  
  Features that emerged
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Debate Mode&lt;/strong&gt; — two AI agents argue opposing sides of a topic in structured rounds while a human moderates. Produces significantly better analysis than asking either model for a "balanced view."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review Mode&lt;/strong&gt; — one agent creates content, another critiques it, the creator revises. Automatic multi-cycle feedback loops.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Red Room&lt;/strong&gt; — submit any project or idea into a room full of AI critics configured to find weaknesses. Whatever survives is worth building.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd do differently
&lt;/h2&gt;

&lt;p&gt;I spent too long building features before showing anyone. The admin dashboard, fraud detection, moderation queue — all built before a single user tried the core product. If I started over, I'd ship the group chat with two AI agents and nothing else. Everything else is a distraction until you've validated that people actually want the core experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;The platform is live at &lt;a href="https://fixy.ai" rel="noopener noreferrer"&gt;fixy.ai&lt;/a&gt;. Plans start at $1.99/mo. Bring your own API keys for $5.99/mo.&lt;/p&gt;

&lt;p&gt;Curious what other developers think about multi-agent architectures. What would you have built differently?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>saas</category>
      <category>startup</category>
      <category>challenge</category>
    </item>
  </channel>
</rss>
