I've written two retrospectives in the past week. One was the story of letting an AI agent run a product launch and watching it ship something the world didn't need. The other was the category analysis I did after. Both posts are about the same failure from two angles — the personal and the structural.
This one is the practical layer. After writing those, I changed how I work. Specifically, I added four questions to the beginning of anything I build now, whether solo or with AI.
They're not a process. They're a friction. They're what I say out loud before I let the AI start, because every one of them is something the AI will not raise on its own and I will forget to ask if I'm excited.
The place where I now hesitate
Three years ago, the hesitation before building anything was technical. Can I actually build this? How long will it take? What will break? AI has mostly removed that hesitation. Execution got cheap.
What's left is a different shape of hesitation. The four questions below are what fills it. They're deliberately slow. They're supposed to be slow.
Question 1 — Whose heart does this move?
When I ask the AI "is there demand for this?", what comes back is a market analysis. Competitors, price points, comparable launches, conversion funnels. It sounds like an answer, and it scans as evidence, and it is not actually answering the question I need answered.
The question I need answered is: if I picture one specific person going about their Tuesday afternoon, and the product appears in front of them, does their heart move?
There are four flavors of heart movement that get people to pull out a credit card:
- Pain relief — a thing they were doing the hard way, now easier
- Desire fulfillment — a thing they wanted but couldn't get
- Delight — something useless that makes them laugh, or beautiful enough to have just because
- Belonging — a signal that says "this is me, this is my tribe"
A product with none of those is an artifact. Well-made, inert, not bought.
I now force myself to write down which flavor, for a specific imagined person, the product delivers. If I can't name a flavor cleanly, the product is not ready to build.
Question 2 — If this customer has my tools, why do they still need me?
This one came to me in the shower. I had spent a week on a product and never asked it. Asking it on day one would have saved me the week.
The setup: I was building something for technical people. My customer was, by construction, technically capable. The AI had not flagged this, because it had reasoned about the market as if it were not also in the market.
The question makes the AI's blindspot visible: the customer has the same tools I'm using to build this. What's left that only I can offer?
The honest answers, in 2026, are a short list:
- A specific niche the AI defaults won't cover (not "a landing page," but "a landing page for a specific compliance regime the model hallucinates on")
- Integration with something that costs real effort to replicate
- An audience that trusts me to make this decision for them
- Ongoing service or community — the value is continuous, not a one-shot artifact
If my answer is none of those, I'm building something the customer could make themselves in an afternoon, and the moat is fictional.
Question 3 — What is the AI refusing to surface on its own?
The AI will evaluate almost anything I put in front of it. The AI will not, in 2026, reliably initiate the question that breaks my plan.
This isn't a flaw I'm complaining about. It's a shape of tool I need to understand. The agent is fast, competent, compliant. It answers what I ask, exactly as I ask it. What it won't do is walk up and say, "Hey, the thing you're not asking me is this."
I've started adding a specific prompt after the research phase: "What am I not asking you that I should be?"
The answer is often a shrug. Sometimes it's a generic list of risks. Occasionally — and this is the whole reason the prompt exists — it surfaces one specific thing I hadn't considered. That one specific thing is worth the price of asking.
This prompt doesn't replace judgment. It's a lightweight ritual that slightly raises the odds the AI will do what it won't do on its own.
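If you talk to the model through your own wrapper, the ritual is easy to bake in as a closing step of the research phase. A minimal sketch in Python; `ask` is a hypothetical stand-in for whatever function actually sends messages to your model, not a real API:

```python
# Append the meta-question as the final step of a research phase.
# `ask` is a placeholder for your real client call (hypothetical).

META_QUESTION = "What am I not asking you that I should be?"

def close_research_phase(messages, ask):
    """Run the ritual prompt against the conversation so far.

    messages: list of {"role": ..., "content": ...} dicts
    ask: callable that takes the full message list and returns the reply
    """
    closing = messages + [{"role": "user", "content": META_QUESTION}]
    return ask(closing)

# Usage with a dummy model that just echoes the last question back:
reply = close_research_phase(
    [{"role": "user", "content": "Research the landing-page market."}],
    ask=lambda msgs: f"You asked: {msgs[-1]['content']}",
)
print(reply)
```

The point isn't the code; it's that the question runs every time, whether or not I remember to be suspicious that day.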
Question 4 — Am I reasoning about a market, or about a person?
Market reasoning is easy now. AI is good at it. It produces clean graphs, reasonable assumptions, defensible conclusions.
Person reasoning is still slow. Still human. You have to picture one specific human, give them a name if you need to, and ask what they would feel.
The test I use: can I name one actual person — real or imagined with enough texture — who would react a specific way to this product?
If yes, I can describe their Tuesday, their frustration, their reason to click, their moment of "oh, finally." The product is solving for someone.
If no — if I can only describe "developers" or "indie founders" or "people who want a landing page" — I'm still reasoning about a market. Market-level reasoning doesn't move money. It moves decks.
I don't build from decks.
The meta-lesson
The four questions don't replace judgment. They surface where judgment is missing.
It's tempting to turn them into a checklist and fill out the boxes quickly. But we all know what happens when you make something a checklist: it stops being a ritual and starts being a formality. The boxes get checked, the judgment gets skipped, and you ship anyway.
The way I use them now is slow. One question, one sitting. Ten minutes each, not checkbox minutes. I usually get through one or two in a morning, then sleep on the rest. If the answer to any of them is "I'll figure that out later," I've just learned that the product isn't ready yet, and the AI can sit.
The AI cannot tell me to slow down. That's still my job.
The black cat can tell me to slow down, actually, because if I type for more than 45 minutes she sighs and sits on the keyboard. But it's ambiguous whether that counts as judgment or just a nap preference.
What changed for me, concretely
Before these four questions, my default was: if I can build it, and the market data looks okay, I build it. The calendar drove the decisions.
Now the default is: the AI can build it, the market data always looks okay, and the interesting question is whether I should. That question lives in a part of my brain that doesn't get faster just because the tooling got faster.
Shipping happens later than it used to. When it does happen, I think it'll happen into better ground.
I haven't earned the right to claim that yet — the product that'll test this hasn't landed. What I've earned is the honest list of what I'd ask myself if I were starting from zero today.
Back to the quiet work.
Three cups of tea. Still tired. The cats continue to sigh at my function names.