Y Combinator validates ideas. Nobody validates adoption. Here's what I built instead.

By Almin Zolotic — Founder of Zologic, City Shower, and Market Physics Engine


The validation that didn't save me

In 2020 I was living in a vacant school in Rotterdam after a divorce wiped out everything I had built. During that period I noticed something obvious that nobody was solving: tens of thousands of homeless people in the Netherlands couldn't access a shower. Not because showers didn't exist — because access was fragmented, gatekept, and invisible.

I built City Shower. A network of accessible hygiene facilities for people living on the streets.

I validated it the way every startup course tells you to. I talked to people. Social workers, shelter managers, municipal health officers, community leaders, entrepreneurs. Every single one said it was needed. Many said it was overdue.

I had positive signals everywhere.

And then I ran into a wall that no interview had prepared me for.

Regulatory friction. Switching costs embedded in institutional processes. Trust infrastructure that didn't exist yet between gyms, municipalities, and social services. The idea was sound. The adoption pathway was blocked.

Nobody's survey had told me that. Because surveys don't measure adoption. They measure intent. And behavioral science established decades ago that intent and behavior are not the same thing.

That gap — between "people say yes" and "people actually adopt" — is where most startups die. And almost nobody is measuring it.


What traditional validation actually measures

The lean startup canon — customer interviews, problem validation, demand testing — is built on a flawed assumption: that what people tell you they will do predicts what they will actually do.

It doesn't.

Daniel Kahneman and Amos Tversky's Prospect Theory (1979) showed that human decision-making is not rational utility calculation. Losses loom approximately twice as large as equivalent gains. The friction of changing behavior is systematically underweighted by the people experiencing it — and almost completely invisible in an interview context, where social pressure to be supportive overwhelms honest friction assessment.
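That asymmetry can be made concrete. Here is a minimal sketch of the Kahneman-Tversky value function, using the parameter estimates from their later (1992) cumulative prospect theory work (α ≈ 0.88, λ ≈ 2.25):

```python
def pt_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory value function: gains are concave,
    losses are convex and weighted roughly 2.25x heavier."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A gain of 100 and a loss of 100 are far from symmetric:
gain = pt_value(100)    # ≈ 57.5
loss = pt_value(-100)   # ≈ -129.5
```

This is why a single friction point can outweigh several points of enthusiasm in an adoption decision: the loss side of the ledger is simply weighted more.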

Y Combinator's application asks: "What do you understand about your business that other companies in it just don't see?"

It doesn't ask: "What structural friction stands between your idea and the people who will need to change their behavior to adopt it?"

Lean startup asks: "Have you talked to customers?"

It doesn't ask: "Have you modeled the regulatory environment your regulator is operating in, the switching cost your operator faces, or the trust infrastructure your early adopter needs before they convert?"

These are different questions. The first set measures whether your idea sounds good in a room. The second set measures whether it will actually spread in a market.

There's another problem nobody talks about: the people who will kill your idea don't show up to customer discovery sessions.

Regulators don't attend your user interviews. Risk officers don't fill in your survey. Institutional gatekeepers don't book a 30-minute call on Calendly. They show up after you've scaled — and by then you've already built the wrong thing in the wrong way.


So I built a simulator

After City Shower, after founding Zologic, after shipping UCPReady (an AI agent commerce plugin for WooCommerce, currently in WooCommerce Marketplace business review) — I kept running into the same pattern. Good ideas. Real problems. Structural friction that surveys didn't surface until after significant time and money had been spent.

So I built Market Physics Engine.

Not another survey tool. Not another AI that asks you questions and gives you generic feedback. A behavioral simulation engine that treats adoption as a physics problem — where perceived value competes against structural friction across a heterogeneous population of decision-makers.

Here's the conceptual architecture:

Step 1 — Generate the full stakeholder ecosystem

You submit your pitch. The engine generates 12 stakeholder personas covering every role that influences adoption: consumer, buyer, operator, supplier, regulator, investor, ecosystem partner, technical evaluator, early adopter, skeptic analyst, risk officer, and public stakeholder.

Each persona is scored across two competing forces:

Value drivers — functional utility, emotional motivation, social signaling, market timing, ecosystem readiness

Friction drivers — cognitive cost (switching effort, learning curve), perceived risk (financial, regulatory, technical), and regulatory burden

The scoring is role-specific and grounded in documented behavioral patterns for each archetype. A regulator looks nothing like an early adopter. A risk officer's friction profile is structurally different from a consumer's. The engine enforces this — it doesn't let every persona converge on the same generic scores.
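To make the two-force scoring concrete, here is a minimal sketch of how a role-specific persona could be represented. The field names, weights, and example scores are mine, not the engine's:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    role: str
    # Value drivers, each scored 0..1
    functional_utility: float
    emotional_motivation: float
    social_signaling: float
    market_timing: float
    ecosystem_readiness: float
    # Friction drivers, each scored 0..1
    cognitive_cost: float
    perceived_risk: float
    regulatory_burden: float

    def net_pull(self, loss_weight: float = 2.0) -> float:
        """Mean value minus asymmetrically weighted mean friction,
        reflecting Prospect Theory's loss aversion."""
        value = (self.functional_utility + self.emotional_motivation +
                 self.social_signaling + self.market_timing +
                 self.ecosystem_readiness) / 5
        friction = (self.cognitive_cost + self.perceived_risk +
                    self.regulatory_burden) / 3
        return value - loss_weight * friction

# Structurally different profiles: high-friction regulator vs. eager early adopter
regulator = Persona("regulator", 0.3, 0.1, 0.1, 0.4, 0.3, 0.6, 0.8, 0.9)
early_adopter = Persona("early adopter", 0.9, 0.8, 0.7, 0.8, 0.6, 0.3, 0.3, 0.1)
```

With these illustrative scores, the regulator's net pull is negative and the early adopter's is positive, which is exactly the divergence the engine enforces.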

Step 2 — Simulate a heterogeneous population

Each persona seeds a population of synthetic agents with behavioral variation drawn from the persona's scores. The population is designed to be heterogeneous — capturing the full distribution of real market behavior, including the resistant tail that single-point estimates miss entirely.

This is grounded in Monte Carlo simulation methodology. No single persona represents a market. The variance matters as much as the mean.
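A rough sketch of the Monte Carlo seeding step, under my own assumptions (Gaussian spread around the persona score, clipped to [0, 1]):

```python
import random

def seed_population(persona_score: float, n: int = 1000,
                    spread: float = 0.15, seed: int = 42) -> list[float]:
    """Monte Carlo seeding: draw n synthetic agents around a persona's
    score, clipped to [0, 1], so the distribution's tails are kept."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, rng.gauss(persona_score, spread)))
            for _ in range(n)]

agents = seed_population(0.6)
# The resistant tail a single-point estimate of 0.6 would miss entirely:
resistant_tail = sum(1 for a in agents if a < 0.3)
```

The point of the exercise: even with a healthy mean score of 0.6, a realistic spread leaves a tail of resistant agents, and those agents shape the diffusion dynamics in the next step.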

Step 3 — Run network diffusion across 12 market periods

Agents don't respond to a global market signal. They respond to their immediate network — consistent with Watts-Strogatz small-world topology research and how platform adoption actually propagates in the real world.

Adoption cascades locally before reaching the broader population. This produces tipping points rather than smooth growth curves, the pattern repeatedly documented in major platform markets, successful and failed alike.

The simulation runs across 12 sequential periods, separating innovator adoption from imitator adoption — consistent with Bass Diffusion Model dynamics documented across decades of empirical diffusion research.

Friction variables are weighted asymmetrically throughout — consistent with Prospect Theory. A blocker's resistance costs more adoption probability than an equivalent level of enthusiasm generates. This is not an assumption. It is empirically documented.
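The three mechanics described above — local network influence, innovator vs. imitator adoption, and loss-weighted friction — can be sketched together in a few lines. Every parameter value here is illustrative, not the engine's:

```python
import random

def simulate_diffusion(n: int = 500, periods: int = 12, p: float = 0.03,
                       q: float = 0.4, friction: float = 0.01,
                       loss_weight: float = 2.0, k: int = 6,
                       seed: int = 7) -> list[float]:
    """Bass-style diffusion on a ring lattice: each agent responds only
    to its k nearest neighbors (local, small-world-like influence), and
    friction is weighted loss_weight times heavier, per Prospect Theory."""
    rng = random.Random(seed)
    adopted = [False] * n
    curve = []
    for _ in range(periods):
        newly_adopting = []
        for i in range(n):
            if adopted[i]:
                continue
            # Imitation signal comes from immediate neighbors, not the
            # global adoption level.
            neighbors = [(i + d) % n for d in range(-k // 2, k // 2 + 1) if d]
            local = sum(adopted[j] for j in neighbors) / len(neighbors)
            # Innovation (p) plus imitation (q * local), minus friction
            # weighted asymmetrically on the loss side.
            prob = p + q * local - loss_weight * friction
            if rng.random() < max(0.0, prob):
                newly_adopting.append(i)
        for i in newly_adopting:       # synchronous update per period
            adopted[i] = True
        curve.append(sum(adopted) / n)
    return curve

curve = simulate_diffusion()  # 12 cumulative adoption shares
```

Even this toy version produces the characteristic behavior: a slow innovator trickle, then local cascades that bend the curve upward once neighborhoods tip.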

Step 4 — Output you can actually act on

  • Idea Fitness Index (IFI) — a composite behavioral fitness score
  • Adoption curve — 12-period S-curve with period-by-period growth rate overlay
  • Friction profile — Cognitive, Risk, and Regulatory breakdown per persona
  • Benchmark comparison — your curve overlaid against empirical anchors for Airbnb, Uber, Stripe, and Slack
  • Stakeholder sentiment — 12 personas classified as supporters, skeptics, or blockers with behavioral reasoning

What this looks like on a real pitch

I ran the Airbnb pitch through the engine:

"A two-sided marketplace where homeowners rent out spare rooms or entire properties to travelers seeking affordable, local accommodation alternatives to hotels."

Output:

  • IFI: 0.53 — Scalable opportunity
  • Adoption Probability: 78.5%
  • Majority adoption at: Period 6
  • Closest benchmark match: Airbnb
  • Dominant friction: Risk and Regulatory

The friction profile surfaces exactly what actually happened to Airbnb in their early years — not consumer demand failure (demand was high from day one), but regulatory and trust infrastructure friction that consumed enormous operational energy before the product could scale.

The personas that dragged the score were the regulator and the risk officer — both high on regulatory burden and perceived risk, both structurally resistant to a trust-dependent two-sided marketplace entering an unregulated category.

What traditional validation told founders about Airbnb in 2008:

  • Travelers want cheaper accommodation ✓
  • Homeowners would consider renting their space ✓
  • Waitlist filled up quickly ✓

What it missed:

The regulatory cascade that would define Airbnb's next decade. The trust infrastructure they'd have to build from scratch. The institutional friction that doesn't appear in any customer interview because the people carrying that friction don't attend customer interviews.

Market Physics surfaces that friction in 60 seconds, before you spend a cent building toward it.


The same friction model applied to my own work

I ran the UCPReady pitch through Market Physics before submitting to the WooCommerce Marketplace.

UCPReady makes WooCommerce stores transactable by AI agents via the Universal Commerce Protocol — REST, MCP, and embedded checkout in one plugin. It's the infrastructure layer that lets AI agents shop your store the way a human would, without scraping or hallucinating product data.

The simulation told me exactly where the friction would concentrate: the institutional regulator persona (WooCommerce's own review process), the technical evaluator (merchant-developers who need spec compliance proof before adoption), and the skeptic analyst (merchants who've heard "AI will change everything" too many times).

It concentrated there. The WooCommerce Marketplace business review has been ongoing since February. The technical evaluator friction resolved through independent third-party testing (UCPReady was cited as the most spec-complete non-Shopify UCP implementation tested across 180 real agent shopping sessions). The skeptic analyst friction is resolving through publishing — this article being part of that.

That's not a coincidence. That's the model working.


The honest comparison

| | YC Application | Lean Startup | Market Physics |
|---|---|---|---|
| Measures stated intent | ✓ | ✓ | ✗ |
| Measures behavioral adoption probability | ✗ | ✗ | ✓ |
| Surfaces regulatory friction pre-build | ✗ | Rarely | ✓ |
| Models network diffusion dynamics | ✗ | ✗ | ✓ |
| Captures the resistant tail | ✗ | ✗ | ✓ |
| Requires access to real humans | ✓ | ✓ | ✗ |
| Time to signal | Weeks | Weeks | 60 seconds |
| Grounded in behavioral economics | Partially | Partially | ✓ |
| Benchmarks against empirical historical data | ✗ | ✗ | ✓ |

Market Physics doesn't replace customer discovery. Talking to real humans is irreplaceable — you learn things about context, language, and emotion that no simulation captures.

What it replaces is the false confidence that intent data gives you. The "everyone I talked to loved it" that precedes so many failed launches.

Run the simulation first. Then go talk to the persona the simulation identified as your biggest blocker.

That conversation will be completely different from a generic customer interview — because you'll know exactly what friction to probe for before you walk in the room.


Try it

marketphysics.eu — free tier, 3 simulations. No credit card.

Run your own pitch. See where the friction concentrates before you build.

If the IFI comes back low, that's not a failure signal. It's a navigation signal. The simulation tells you which variables are dragging the score and which personas are the blockers. Fix those structural barriers, re-run, and watch the curve change.

That's the loop lean startup was always supposed to create. Market Physics just runs it in behavioral space instead of interview space — and it runs it before you've committed to building anything.


One last thing

I built Market Physics because I needed it. City Shower taught me what late-stage friction discovery costs. Every venture since has been shaped by the question I now run through the engine first:

Not "can I build this." Not "do people want this." But "will people actually change their behavior to adopt this — and what stands in the way?"

That's a different question. It deserves a different tool.


Almin Zolotic is the founder of Zologic (AI commerce infrastructure, Den Haag NL), City Shower (hygiene access for the homeless in the Netherlands), and Market Physics Engine (behavioral adoption simulation). UCPReady — the WooCommerce AI agent commerce plugin — is currently in WooCommerce Marketplace business review.


🚀 Launching on Product Hunt tomorrow — would mean a lot if you checked it out.

Tags: startup entrepreneurship ai productivity
