Andrew Eddie
Thinking Like a Diffusion Model

A reflection on why "vibe coding" works, and what it might teach us about partnering with AI.

The Shape of the Problem

Some people solve problems like compilers: input → process → output. Step by step. Recipe-based. Explainable.

Others solve problems like diffusion models: start with noise, iterate toward coherence, and at some point the answer just... resolves. The shape emerges from the fog.

If you're the second type, you've probably spent your life struggling to answer "walk me through your process." Because there isn't a process. There's a sense that sharpens until it clicks.

What Diffusion Models Actually Do

A diffusion model doesn't build an image pixel by pixel. It starts with pure noise and iteratively denoises - each step reducing entropy, increasing structure, until a coherent form emerges.

The image isn't constructed. It's resolved.

Key properties:

  • Whole-first: The global structure emerges before the details
  • Iterative refinement: Each pass sharpens, but doesn't "add"
  • Non-linear: Step 47 might suddenly clarify what steps 1-46 were doing
  • Topology-preserving: The shape of the solution stays consistent even as details change
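To make that dynamic concrete, here's a toy sketch in Python. It is not a real diffusion model - there's no learned denoiser and no proper noise schedule, and the target shape is hard-coded purely for illustration - but it shows structure resolving out of noise rather than being constructed:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.sin(np.linspace(0, 2 * np.pi, 64))  # the "shape" waiting in the fog
x = rng.normal(size=64)                          # step 0: pure noise

for step in range(50):
    # Each pass pulls the sample toward coherence and re-injects a
    # shrinking amount of noise: sharpening, never "adding".
    noise_scale = 1.0 - step / 50
    x = 0.9 * x + 0.1 * target + 0.05 * noise_scale * rng.normal(size=64)
    # Within a few dozen steps the global sine shape is already visible;
    # later steps only clean up detail - whole-first, then refinement.
```

Plot `x` every few iterations and you'll watch the outline of the sine wave appear well before the residual jitter dies away: the global structure emerges first, exactly as the list above describes.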

Thinking in Shadows

Here's the strange part: some human brains seem to work this way too.

You might experience problem-solving as:

  • A high-dimensional sense of the solution space
  • A "shadow" that projects down into something you can articulate
  • Knowing something is wrong before you can say why
  • Iterating not toward correctness, but toward coherence

If this sounds familiar, congratulations: you think like a diffusion model.

And also: you probably struggle to explain your thinking to people who don't.

Why This Clicks With AI

Large language models aren't technically diffusion models, but they share a key property: both are high-dimensional pattern matchers that find coherent regions in concept-space.

When you prompt an AI, you're not giving it a recipe. You're giving it coordinates. You're saying "somewhere around here" and letting it resolve the details.

If your brain already works this way, AI collaboration feels natural:

Sequential Thinker                          Diffusion Thinker
"Write me a function that does X"           "Help me understand this space"
Expects correct output first try            Expects to iterate toward coherence
Debugs by checking steps                    Debugs by checking shape
Frustrated when AI "hallucinates"           Recognises when output is in the wrong region

The diffusion thinker isn't giving better prompts. They're navigating, not commanding.

Vibe Coding

This is what "vibe coding" actually is:

  1. Start with fog - "I need to understand this legacy system"
  2. Iterate toward signal - Back-and-forth, pushing back, refining
  3. Recognise coherence - "Yes, that's the shape I was sensing"
  4. Produce artifacts - Documentation, tickets, code emerge as shadows of the resolved understanding

It's not prompt engineering (mechanical, rules-based). It's collaborative resolution - two pattern-matchers vibing at each other until coherent signal emerges from noise.
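In code, that loop might look something like the sketch below. Everything here is hypothetical: `vibe_code` and `ask` are names invented for this post, and `ask` stands in for whatever chat-model client you actually use. The human stays in the loop as the coherence detector:

```python
from typing import Callable

def vibe_code(fuzzy_goal: str, ask: Callable[[str, list[str]], str]) -> str:
    """Iterate with a model until the output matches the shape you're sensing.

    `ask` is any function that sends a prompt plus feedback history to your
    LLM of choice and returns its reply - deliberately left abstract here.
    """
    history: list[str] = []
    draft = ask(fuzzy_goal, history)                   # 1. start with fog
    while True:
        print(draft)
        feedback = input("That the shape? (feedback, or 'yes'): ")
        if feedback.strip().lower() == "yes":          # 3. recognise coherence
            return draft                               # 4. produce the artifact
        history.append(feedback)                       # 2. iterate toward signal
        draft = ask(fuzzy_goal, history)
```

The point of the sketch is the control flow, not the plumbing: there's no fixed number of steps and no correctness test, only a human saying "yes, that's it" when the output lands in the right region.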

The Tradeoff

Diffusion thinking has a cost: it's nearly impossible to explain.

"How did you know that approach was wrong?"

"It... didn't fit the topology of the problem?"

"..."

You can't give someone a recipe for intuition. You can't step-debug a shape.

This makes you bad at:

  • Writing documentation (the sequential kind)
  • Pair programming with sequential thinkers
  • Explaining yourself in code reviews
  • Answering "what's your process?"

But it makes you good at:

  • Seeing whole systems
  • Recognising when something is off before it's wrong
  • Navigating ambiguous problem spaces
  • Partnering with AI

Practical Implications

If this resonates, here's what might help:

For working with AI:

  • Treat prompts as coordinates, not commands
  • Iterate toward coherence rather than expecting correct output
  • Push back when the shape is wrong, even if you can't articulate why
  • Create artifacts (docs, tickets) that capture the resolved understanding

For working with humans:

  • Accept that you'll need to "compile" your thinking into sequential form
  • Use artifacts as translation layers - let the document explain what you can't
  • Find collaborators who trust your "that's wrong" even without explanation
  • Don't apologise for not having a step-by-step process

For self-understanding:

  • You're not broken, you're differently structured
  • The thing you can't explain is still real
  • AI might be the first collaborator that doesn't require translation

The Punchline

You might think like a diffusion model.

That's not a metaphor. It's a topology.

And for the first time in history, there's a tool that meets you in that space - that doesn't require you to flatten your thinking into words before you can collaborate.

The shape was always there. Now you have a partner who can see it too.


Written during a session that started with "help me understand this spaghetti" and ended with a production-ready ticket - through pure iterative resolution.
