43% of startups fail not because the code was bad, but because the wrong thing got built.
Not ugly code.
Not missing features.
Not “we should have used a different framework.”
The product was simply solving a problem that was not painful enough, urgent enough, or owned by someone specific enough.
That is the part AI makes more dangerous.
Because now we can build the wrong thing faster than ever.
So I built product-init.
A hard-gated product discovery system for AI coding tools.
It runs before your agent writes a single line of code.
## The problem
AI coding tools are getting insanely good.
Codex can ship.
Claude Code can build.
OpenClaw can orchestrate.
Agents can plan, write, test, and deploy.
But there is still one uncomfortable question:
What if the goal is wrong?
Most AI workflows start too late.
They begin with:
“Build this app.”
But product failure usually starts much earlier.
Before the first component.
Before the first database table.
Before the first API route.
It starts when nobody asks:
- Who is this really for?
- What painful job are they hiring this product to do?
- Who owns the failure if this does not work?
- What would make us kill this idea before we waste time building it?
That is the gap product-init tries to close.
## What is product-init?
product-init is a Claude Code skill that also works with Codex CLI and OpenClaw-style workflows.
You type:
`/product-init "build an HR assessment tool"`
And before any code is written, it forces the product through 9 gates.
Gate 1 does not ask for a tech stack.
It asks things like:
- Who gets fired if this fails?
- What job is the user hiring this product for?
- What does failure look like in production — in numbers?
- What signal proves this is worth building?
If the answer is weak, the pipeline stops.
Not “warns.”
Stops.
CRITICAL findings block everything.
There is no --skip flag.
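To make the blocking behavior concrete, here is a minimal sketch of how a hard gate can work. This is my own illustration, not the actual product-init source; the `Finding`, `Severity`, and `run_gate` names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    INFO = "info"
    WARNING = "warning"
    CRITICAL = "critical"


@dataclass
class Finding:
    message: str
    severity: Severity


class GateBlocked(Exception):
    """Raised when a gate has CRITICAL findings. There is no skip path."""


def run_gate(name: str, findings: list[Finding]) -> None:
    # INFO and WARNING findings are reported but do not stop the pipeline.
    critical = [f.message for f in findings if f.severity is Severity.CRITICAL]
    if critical:
        # A single CRITICAL finding blocks everything downstream.
        raise GateBlocked(f"{name}: {critical}")


# Example: Gate 1 with an undefined Job To Be Done.
try:
    run_gate("Discovery Constitution", [Finding("JTBD undefined", Severity.CRITICAL)])
except GateBlocked as blocked:
    print(f"BLOCKED -> {blocked}")
```

The point of the design is that there is no severity you can argue with: a weak answer is a CRITICAL finding, and a CRITICAL finding is an exception, not a log line.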
## The 9 gates
| Gate | Name | What blocks it |
|---|---|---|
| 1 | Discovery Constitution | JTBD undefined, kill criteria missing |
| 2 | Statement of Work | Appetite not set, PR-FAQ not signed |
| 3 | Design | Screen not mapped to a Gate 1 job |
| 4 | Build | Orphan TODOs, commit message not AC-linked |
| 5 | QA | Unit, integration, or E2E tests failing |
| 6 | UAT | No real human sign-off on a real URL |
| 7 | Deploy | No production HTTP 200, no rollback drill |
| 8 | Handoff | No runbook, no DEBT.md |
| 9 | Warranty | 72-hour monitoring window not passed |
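Conceptually, the table above is just an ordered pipeline: each gate pairs a name with blocking conditions, and the run halts at the first gate whose conditions fire. A minimal sketch, again my own illustration (the real skill works through prompts and evidence, not predicates; the state keys below are invented):

```python
# Each gate pairs a name with a predicate over hypothetical project state.
# Gates 4-9 follow the same shape and are elided for brevity.
GATES = [
    ("Discovery Constitution", lambda s: s.get("jtbd") and s.get("kill_criteria")),
    ("Statement of Work",      lambda s: s.get("appetite") and s.get("prfaq_signed")),
    ("Design",                 lambda s: s.get("screens_mapped_to_jobs")),
]


def run_pipeline(state: dict) -> str:
    for name, passes in GATES:
        if not passes(state):
            return f"BLOCKED at {name}"  # hard stop, no skip flag
    return "ALL GATES PASSED"


print(run_pipeline({"jtbd": "screen candidates fast", "kill_criteria": None}))
# -> BLOCKED at Discovery Constitution
```

Because the gates are ordered, a missing kill criterion stops the run before anyone debates screens or tech stacks.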
The idea is simple:
AI should not only generate code.
It should be forced to respect product judgment, delivery discipline, and operational evidence.
## I dogfooded it
I tested product-init by building an AI-powered HR interview product in one session.
The result included:
- Editorial landing page
- Dark interview room with live AI sessions
- Candidate dashboard with scored results
- PDF reports across 4 evaluation dimensions
- Production deployment
- Handoff package
Live demo: demorpoject.vercel.app
The important part was not that the product shipped.
The important part was that the system kept asking whether it deserved to ship.
All 9 gates passed.
No hidden “trust me bro” layer.
No fake done.
No agent saying “completed” without evidence.
## The research behind it
This is not a custom framework I invented from vibes.
product-init is assembled from proven product and delivery thinking:
- CB Insights 2024 — market-need failure as a hard Gate 1 blocker
- Clayton Christensen’s Jobs To Be Done — user/job framing
- Marty Cagan’s four-risk model — value, usability, feasibility, business viability
- Basecamp Shape Up — appetite, scope, and fixed-time product bets
- Amazon PR-FAQ — narrative-first product definition
- Eric Ries’ Lean Startup — build-measure-learn loops and kill criteria
The goal is not to slow AI down.
The goal is to stop AI from confidently building the wrong thing.
## Install
Works with Claude Code, Codex CLI, and OpenClaw-style workflows.
`curl -sSL https://raw.githubusercontent.com/mturac/product-init/main/install.sh | bash`
Then run:
`/product-init "your product idea"`
## Why I built it
We are entering a strange phase of software.
The limiting factor is no longer whether we can build.
The limiting factor is whether we can decide what is worth building.
AI agents are becoming execution engines.
But execution without product judgment is just faster waste.
That is why product-init exists.
A product gatekeeper for agentic development.
Before the code.
Before the sprint.
Before the demo.
Before the illusion of done.
GitHub: https://github.com/mturac/product-init
Open source.
Free.
No --skip flag.