<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Juan Wang</title>
    <description>The latest articles on DEV Community by Juan Wang (@ki-sum-ai).</description>
    <link>https://dev.to/ki-sum-ai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3860162%2F9e0f7843-8b14-4866-9a4f-42945ac15a28.jpg</url>
      <title>DEV Community: Juan Wang</title>
      <link>https://dev.to/ki-sum-ai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ki-sum-ai"/>
    <language>en</language>
    <item>
      <title>My Self-Evolving AI Engine Generates Startup Ideas — Then Kills Most of Them</title>
      <dc:creator>Juan Wang</dc:creator>
      <pubDate>Mon, 11 May 2026 21:20:55 +0000</pubDate>
      <link>https://dev.to/ki-sum-ai/my-self-evolving-ai-engine-generates-startup-ideas-then-kills-most-of-them-ck6</link>
      <guid>https://dev.to/ki-sum-ai/my-self-evolving-ai-engine-generates-startup-ideas-then-kills-most-of-them-ck6</guid>
      <description>&lt;h3&gt;
  
  
  I. That conversation
&lt;/h3&gt;

&lt;p&gt;On March 10, 2026, I spent an entire day talking to Claude Code.&lt;/p&gt;

&lt;p&gt;We were building an AI engine — one that automatically scans the world for commercial opportunities. Claude Code wrote the code; I gave direction. We built signal collection, idea generation, multi-round evaluation, and the scoring system.&lt;/p&gt;

&lt;p&gt;Late that night, I noticed something strange.&lt;/p&gt;

&lt;p&gt;Claude Code wrote every line of that engine. But it never once thought — &lt;strong&gt;"this engine itself is the product."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I asked: why don't you ever step back from what we're doing, look at the whole picture from another dimension, and realize — "this thing itself is a massive commercial opportunity"?&lt;/p&gt;

&lt;p&gt;It had no answer.&lt;/p&gt;

&lt;p&gt;That night I realized I'd asked a deeper question than I knew.&lt;/p&gt;




&lt;h3&gt;
  
  
  II. The biggest gap between humans and AI
&lt;/h3&gt;

&lt;p&gt;The biggest gap between humans and AI isn't intelligence. It's agency.&lt;/p&gt;

&lt;p&gt;AI can write 100 ideas. But it can't write "I want to build this one."&lt;br&gt;
AI can execute any task. But it never asks "is this task worth doing?"&lt;br&gt;
AI can design an interface. But it never looks up and asks "why are we even building this interface?"&lt;/p&gt;

&lt;p&gt;AI is a wind-up toy — wind it, run the program, await the next instruction.&lt;/p&gt;

&lt;p&gt;Humans get bored, get distracted, want to make money, spot gaps, peel off to do something else. There's something running in the background of our minds — philosophers call it meta-cognition: the ability to step out of the current task and see what's outside it.&lt;/p&gt;

&lt;p&gt;AI's architecture has no such layer. The Transformer is fundamentally a linear stream: input → context → output. It doesn't pause mid-stream to ask: "wait, why am I doing this?"&lt;/p&gt;

&lt;p&gt;This isn't a bug. It's the definition.&lt;/p&gt;




&lt;h3&gt;
  
  
  III. So can I teach it?
&lt;/h3&gt;

&lt;p&gt;I asked myself a question:&lt;/p&gt;

&lt;p&gt;If AI can't do meta-cognition naturally, can I &lt;strong&gt;teach it&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;I can't teach it to want — that's an architecture-level limitation.&lt;br&gt;
But I can teach it to &lt;strong&gt;judge like me&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I started encoding my own evaluation instincts as "genes":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Counterintuitive judgment&lt;/strong&gt;: ideas that sound brilliant usually already exist. The most common cause of death in the engine is exactly this — you thought you'd cracked something new, but it's already shipping somewhere.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;WHY-NOW framing&lt;/strong&gt;: not just WHO/WHAT — why is this the right moment? Why couldn't this be done two years ago? What changed? Ideas without a timing thesis are dangerous.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Aggressive competitor verification&lt;/strong&gt;: the biggest killer of a good idea is "the competitor you didn't find." Always check from a fresh angle. Never trust the first result.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Distinguish regulatory milestone from commercial launch&lt;/strong&gt;: a Phase IIa win ≠ patients can buy. FDA approval ≠ doctors will prescribe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Before killing an idea, check if it can survive from a different angle&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
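&lt;p&gt;To make the idea concrete, here is a minimal sketch of how genes like these could be encoded as rule functions — every name, field, and threshold below is illustrative, not EvoRadar's actual code:&lt;/p&gt;

```python
# Hypothetical sketch: evaluation "genes" as rule functions that
# return a fatal-weakness verdict, or None if the idea survives.
# All names and fields are assumptions for illustration.

def gene_counterintuitive(idea):
    # Ideas that sound brilliant usually already exist.
    if idea["sounds_brilliant"] and idea["competitors_found"] >= 1:
        return "fatal: already shipping somewhere"
    return None

def gene_why_now(idea):
    # An idea without a timing thesis is dangerous.
    if not idea["why_now"]:
        return "fatal: no timing thesis"
    return None

def gene_regulatory(idea):
    # A regulatory milestone is not a commercial launch.
    if idea["regulatory_burden"] == "medical_device":
        return "fatal: FDA Class II/III certification required"
    return None

GENES = [gene_counterintuitive, gene_why_now, gene_regulatory]

def evaluate(idea, reframe=None):
    """Kill on the first fatal weakness — but before killing,
    apply the last gene: check whether a reframed version survives."""
    for gene in GENES:
        verdict = gene(idea)
        if verdict:
            if reframe:
                pivoted = reframe(idea)
                if all(g(pivoted) is None for g in GENES):
                    return ("WARM", pivoted)
            return ("COLD", verdict)
    return ("WARM", idea)
```

&lt;p&gt;Under this sketch, the smart-toilet idea dies on the regulatory gene alone — unless a reframe (medical device → ambient care-home sensor) is offered, in which case the pivoted version passes every gene and lands in WARM.&lt;/p&gt;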

&lt;p&gt;That last gene caught something interesting recently.&lt;/p&gt;

&lt;p&gt;The engine was about to kill an idea called "Smart Toilet Health Dashboard" — verdict: &lt;em&gt;"Requires FDA Class II/III medical device certification. Cost: millions."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But it didn't just kill it. It asked itself: what if I reframed this?&lt;/p&gt;

&lt;p&gt;It pivoted: not a medical device, but an &lt;strong&gt;ambient sensor for care homes&lt;/strong&gt;. No FDA. Different buyer. Different market.&lt;/p&gt;

&lt;p&gt;New name: Ambient Elder Guardian. Moved from COLD to WARM.&lt;/p&gt;

&lt;p&gt;That's the engine catching what a flat pipeline would have missed.&lt;/p&gt;




&lt;h3&gt;
  
  
  IV. What EvoRadar is
&lt;/h3&gt;

&lt;p&gt;60 days later, EvoRadar went live.&lt;/p&gt;

&lt;p&gt;The engine follows the architecture in the README: &lt;strong&gt;Signal Collection → Imagination → Evaluation → Evolution&lt;/strong&gt;. Every run puts every idea through multiple rounds of scrutiny from different angles — creative imagination, critical attack, cross-source competitive verification. A single fatal weakness kills the idea.&lt;/p&gt;

&lt;p&gt;~80% get killed. The rest enter the WARM pool for deeper analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live numbers as of today:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;2,432 ideas evaluated&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;549 WARM&lt;/strong&gt; (survived to deeper analysis)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1,883 COLD&lt;/strong&gt; (killed)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;0 HOT&lt;/strong&gt; (the highest conviction tier — bar intentionally high; nothing has reached it yet)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not because the engine "wants" to do this. Because I taught it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I trigger the dreaming. The filtering is what I had to teach.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  V. What's next
&lt;/h3&gt;

&lt;p&gt;Starting today I'm building 5 AI products in public. EvoRadar is the first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The second drops Friday.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I didn't plan the second one — EvoRadar's signal in one space got loud enough that I had to build it.&lt;/p&gt;

&lt;p&gt;The engine tells me what's next. My job is to listen.&lt;/p&gt;




&lt;p&gt;→ &lt;a href="https://evoradar.ai/?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=launch" rel="noopener noreferrer"&gt;evoradar.ai&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>startup</category>
      <category>indiehackers</category>
      <category>buildinpublic</category>
    </item>
  </channel>
</rss>
