DEV Community

Juan Wang

Posted on • Originally published at Medium

My Self-Evolving AI Engine Generates Startup Ideas — Then Kills Most of Them

I. That conversation

On March 10, 2026, I spent an entire day talking to Claude Code.

We were building an AI engine — one that automatically scans the world for commercial opportunities. Claude Code wrote the code, I gave direction. We built signal collection, idea generation, multi-round evaluation, and the scoring system.

Late that night, I noticed something strange.

Claude Code wrote every line of that engine. But it never once thought — "this engine itself is the product."

I asked: why don't you suddenly step out of what we're doing, look at the whole picture, examine this from another dimension, and realize — "this thing itself is a massive commercial opportunity"?

It had no answer.

That night I realized I'd asked a deeper question than I knew.


II. The biggest gap between humans and AI

The biggest gap between humans and AI isn't intelligence. It's agency.

AI can write 100 ideas. But it can't write "I want to build this one."
AI can execute any task. But it never asks "is this task worth doing?"
AI can design an interface. But it never looks up and asks "why are we even building this interface?"

AI is a wind-up toy — wind it, run the program, await the next instruction.

Humans get bored, get distracted, want to make money, spot gaps, peel off to do something else. There's something running in the background of our minds — philosophers call it meta-cognition: the ability to step out of the current task and see what's outside it.

AI's architecture has no such layer. The Transformer is fundamentally a linear stream: input → context → output. It doesn't pause mid-stream to ask: "wait, why am I doing this?"

This isn't a bug. It's the definition.


III. So can I teach it?

I asked myself a question:

If AI can't do meta-cognition naturally, can I teach it?

I can't teach it to want — that's an architecture-level limitation.
But I can teach it to judge like me.

I started encoding my own evaluation instincts as "genes":

  • Counterintuitive judgment: ideas that sound brilliant usually already exist. The most common cause of death in the engine is exactly this — you thought you'd cracked something new, but it's already shipping somewhere.

  • WHY-NOW framing: not just WHO/WHAT — why is this the right moment? Why couldn't this be done two years ago? What changed? Ideas without a timing thesis are dangerous.

  • Aggressive competitor verification: the biggest killer of a good idea is "the competitor you didn't find." Always check from a fresh angle. Never trust the first result.

  • Distinguish regulatory milestone from commercial launch: a Phase IIa win ≠ patients can buy. FDA approval ≠ doctors will prescribe.

  • Before killing an idea, check if it can survive from a different angle.

That last gene caught something interesting recently.

The engine was about to kill an idea called "Smart Toilet Health Dashboard" — verdict: "Requires FDA Class II/III medical device certification. Cost: millions."

But it didn't just kill it. It asked itself: what if I reframed this?

It pivoted: not a medical device, but an ambient sensor for care homes. No FDA. Different buyer. Different market.

New name: Ambient Elder Guardian. Moved from COLD to WARM.

That's the engine catching what a flat pipeline would have missed.
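The reframe-before-kill gene can be sketched in a few lines. This is a toy illustration, not EvoRadar's actual code; the names (`Idea`, `reframe_before_kill`, the flaw strings) are assumptions made up for the example. The idea: before an idea is marked COLD, try each alternative framing, and if one framing removes every fatal flaw, the idea survives as WARM under that framing.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "check if it can survive from a different
# angle" gene. All names here are illustrative, not the engine's real API.

@dataclass
class Idea:
    name: str
    framing: str
    fatal_flaws: list = field(default_factory=list)
    status: str = "UNEVALUATED"

def reframe_before_kill(idea: Idea, reframings: list) -> Idea:
    """Before marking an idea COLD, try each alternative framing.

    reframings is a list of (new_framing, flaws_resolved) pairs.
    If any framing resolves every fatal flaw, the idea moves to WARM
    under the new framing instead of being killed outright.
    """
    if not idea.fatal_flaws:
        idea.status = "WARM"
        return idea
    for new_framing, flaws_resolved in reframings:
        remaining = [f for f in idea.fatal_flaws if f not in flaws_resolved]
        if not remaining:
            idea.framing = new_framing
            idea.fatal_flaws = []
            idea.status = "WARM"
            return idea
    idea.status = "COLD"
    return idea

# The smart-toilet pivot from the post, in this toy form:
toilet = Idea(
    name="Smart Toilet Health Dashboard",
    framing="consumer medical device",
    fatal_flaws=["FDA Class II/III certification"],
)
pivot = [("ambient sensor for care homes", ["FDA Class II/III certification"])]
result = reframe_before_kill(toilet, pivot)
# result.status is now "WARM"; result.framing is the care-home pivot.
```

The point of the shape is that the kill decision and the pivot search live in the same step, so a flat kill/keep pipeline can't skip the reframing check.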


IV. What EvoRadar is

60 days later, EvoRadar went live.

The engine follows the architecture in the README: Signal Collection → Imagination → Evaluation → Evolution. Every run puts every idea through multiple rounds of scrutiny from different angles — creative imagination, critical attack, cross-source competitive verification. A single fatal weakness kills the idea.

~80% get killed. The rest enter the WARM pool for deeper analysis.
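The gauntlet logic above can be sketched as a chain of attack rounds where the first fatal finding ends the run. Again a toy under assumptions: the round names and the `dict` shape of an idea are invented for illustration, not taken from the engine.

```python
# Hypothetical sketch of the multi-round scrutiny described above:
# each round attacks the idea from a different angle, and a single
# fatal finding is enough to kill it. Round names are illustrative.

def run_gauntlet(idea: dict, rounds) -> str:
    """Return 'COLD' on the first fatal finding, else 'WARM'."""
    for attack in rounds:
        if attack(idea):  # an attack returns True on a fatal weakness
            return "COLD"
    return "WARM"

def already_exists(idea):
    # Counterintuitive-judgment gene: "brilliant" ideas usually already ship.
    return idea.get("known_competitors", 0) > 0

def no_timing_thesis(idea):
    # WHY-NOW gene: fatal if there is no answer to "what changed?"
    return not idea.get("why_now")

rounds = [already_exists, no_timing_thesis]

run_gauntlet({"why_now": "sensor costs collapsed", "known_competitors": 0}, rounds)
# -> "WARM"
run_gauntlet({"why_now": "", "known_competitors": 3}, rounds)
# -> "COLD"
```

Short-circuiting on the first fatal flaw is what makes an ~80% kill rate cheap: most ideas never reach the expensive later rounds.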

Live numbers as of today:

  • 2,432 ideas evaluated
  • 549 WARM (survived to deeper analysis)
  • 1,883 COLD (killed)
  • 0 HOT (the highest conviction tier — bar intentionally high; nothing has reached it yet)

Not because the engine "wants" to do this. Because I taught it.

I trigger the dreaming. The filtering is what I had to teach.


V. What's next

Starting today I'm building 5 AI products in public. EvoRadar is the first.

The second drops Friday.

I didn't plan the second one — EvoRadar's signal in one space got loud enough that I had to build it.

The engine tells me what's next. My job is to listen.


evoradar.ai
