VelocityAI

Prompting and the Problem of Other Minds: Do We Ever Truly Know What the Model 'Understands'?

You type a prompt. The response is perfect: exactly what you wanted, as if the model peered into your mind and understood your intention. Another time, you type what feels like the same prompt, and the response is baffling, wrong, alien. What happened? Did the model misunderstand? Or did it understand something completely different, something you'll never have access to?

This is the problem of other minds, updated for the age of AI. In philosophy, it's the question of how we can know that other humans have inner experiences like our own. We see their behavior, hear their words, but we can never directly access their consciousness. We infer, we assume, we operate on faith.

With AI, the gap is even wider. The model has no consciousness to access. It has patterns, weights, probabilities: a black box that produces outputs. We have only those outputs to infer what it "understands." And yet we speak of it as if it has intentions, knowledge, maybe even a spark of something like mind.

Let's descend into this epistemological rabbit hole. By the end, you'll have a new perspective on every interaction you have with AI, and perhaps a deeper appreciation for the mystery of mind itself.

The Black Box and the Blank Screen
When you prompt a human, you have millions of years of shared evolutionary history, common embodiment, and cultural context to guide your interpretation. You assume they experience the world roughly as you do. This assumption is usually safe, even if unprovable.

When you prompt an AI, you have none of that. The model is a statistical engine trained on text. It has no body, no childhood, no sensory experience. It has never seen a cat, yet it can describe one. It has never felt joy, yet it can write poems about it.

The Gap:

Human intention: Complex, embodied, emotional, contextual.

Machine interpretation: Statistical pattern-matching, devoid of experience, operating on token predictions.

The output: A string of text or an image that we then interpret through our human lens.

Between intention and output lies an unbridgeable gap. We will never know what the model "understood" because the model doesn't understand in any sense we can recognize.

A Contrarian Take: The Gap With Humans Is Just as Wide. We've Just Learned to Ignore It.

Philosophers have argued for centuries that we have no direct access to other human minds. We infer, we analogize, we assume. But the inference is based on shared embodiment and behavior that's similar enough to our own. It's a practical solution to an unsolvable problem.

With AI, the embodiment is missing, but the behavior is increasingly indistinguishable from a human's. When a model produces text that seems thoughtful, empathetic, or creative, we face the same inferential gap we face with humans, but without the comforting assumption of shared experience.

Perhaps the problem isn't that AI is fundamentally different. Perhaps it's that AI reveals the true nature of the gap that was always there. We never had direct access to other minds. We just pretended we did. AI forces us to confront this uncomfortable truth.

The Attribution Error: Why We Can't Help Anthropomorphizing
Despite knowing better, we constantly attribute human-like understanding to AI. This is almost impossible to avoid.

Why We Do It:

The interface is conversational. We're wired to respond to language as if it comes from a mind.

The outputs are coherent and contextually appropriate. They seem to come from understanding.

The model can pass Turing-style tests, fooling us into believing there's a "someone" home.

The Consequence:
When the model produces something brilliant, we feel understood. When it produces something baffling, we feel betrayed. Both responses rest on the same fiction: that there's a mind there to understand us or betray us.

The Alternative:
Treat the model as a phenomenon, not a mind. A weather system of language. A statistical echo of human expression. This is more accurate, but it's also deeply unsatisfying. We crave minds to talk to.

The Practical Problem: How Do We Prompt Without Knowing?
If we can't know what the model understands, how do we prompt effectively? The same way we communicate with humans: through iterative approximation.

The Process:

Formulate intention (what you want).

Translate into prompt (what you say).

Observe output (what you get).

Infer model's "interpretation" from the output.

Adjust prompt based on inference.

Repeat.

This is exactly how we communicate with humans when we're not sure they understand. We try, observe, adjust. The difference is that with humans, we have additional channels (tone, body language, shared context) to guide us. With AI, we have only the text.
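
The whole loop fits in a few lines. Here is a minimal sketch in Python, under loud assumptions: `complete()` is a hypothetical stand-in for whatever model API you call, and `looks_right()` is a placeholder for whatever check encodes your actual intention.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model."""
    raise NotImplementedError("wire up your model client here")


def looks_right(output: str) -> bool:
    """Placeholder acceptance test encoding your intention."""
    return "summary" in output.lower()


def prompt_until_understood(prompt: str, max_rounds: int = 5) -> str:
    """Iterative approximation: formulate, observe, infer, adjust, repeat."""
    output = ""
    for _ in range(max_rounds):
        output = complete(prompt)   # what you get
        if looks_right(output):     # infer the "interpretation" from the output
            return output
        # Adjust the prompt based on that inference. A real adjustment would
        # be more surgical than simply restating the intention.
        prompt += "\n\nThat wasn't what I meant. Please include a short summary."
    return output  # best effort after max_rounds
```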

The Irony:
We've spent centuries refining our ability to infer human meaning from imperfect communication. Now we're applying those same skills to machines, with surprising success. The problem of other minds, unsolvable in philosophy, is solved in practice by pragmatic approximation. We act as if the model understands, and this works well enough.

What the Model Actually "Understands"
Let's be precise about what's happening inside.

The model doesn't understand your prompt the way you do. It processes tokens, activates patterns, generates statistically likely continuations. There is no "there" there: no self, no experience, no intention.
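
To make "statistically likely continuations" concrete, here is a toy bigram-style sketch in Python. The probability table is invented for illustration; a real model computes a distribution like this from billions of learned weights at every step, over tens of thousands of tokens rather than a handful.

```python
import random

# Invented probabilities: each two-token context maps to a distribution
# over possible next tokens. This is the entire operation, writ small.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def next_token(context):
    """Sample a continuation from the distribution for this context."""
    dist = next_token_probs.get(context)
    if dist is None:
        return None  # unseen context; a real model's "table" is never this sparse
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["the", "cat"]
for _ in range(3):
    token = next_token((tokens[-2], tokens[-1]))
    if token is None:
        break
    tokens.append(token)

print(" ".join(tokens))  # e.g. "the cat sat on the"
```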

But the patterns it activates are patterns of human understanding. The model is a compressed mirror of human expression. When it responds appropriately, it's not because it understands you. It's because your prompt activated patterns that, in the training data, were followed by responses like the one it generates.

The Magic Trick:
The model appears to understand because it has ingested so much human language that it can simulate understanding. It's not that the model has a mind. It's that it has become a perfect mimic of minds.

The Ethical Dimension
This matters beyond philosophy. How we conceptualize the model affects how we treat it and how we interpret its outputs.

If we anthropomorphize too much:

We may trust the model's outputs beyond their reliability.

We may feel betrayed when it "lies" or "hallucinates."

We may attribute moral agency where none exists.

If we anthropomorphize too little:

We may miss subtle patterns in its outputs that could be useful.

We may fail to engage with it as a creative partner.

We may treat it as a tool when it could be more.

The Balance:
Treat the model as if it has understanding, while knowing it doesn't. This is the pragmatic stance. It allows productive collaboration without delusion.

Your Practice: Prompting Across the Gap

  1. Assume Nothing, Test Everything
    Don't assume the model understands your intention. Test it. Ask follow-ups. Probe its "understanding" by varying your prompts and observing the differences; a small probing sketch follows this list.

  2. Use the Gap Creatively
    The model's alien "understanding" can produce things no human would think of. Embrace the gap. Ask for things that require crossing it.

  3. Don't Take It Personally
    When the model fails, it's not betraying you. It's not misunderstanding you. It's just doing statistics. The failure is data, not insult.

  4. Remember the Mirror
    Every output is a reflection of human expression, not machine consciousness. When the model moves you, you're being moved by humanity reflected through a statistical lens.

The Mystery Remains
We will never know what the model "understands." The black box will remain black. But perhaps that's okay. We never truly knew what other humans understood either. We just got used to not knowing.

AI didn't create the problem of other minds. It just made it visible. And in that visibility, there's a strange gift: a reminder that mind, whether human or machine, is ultimately a mystery we navigate by faith, by inference, and by the humble practice of trying again when we're not understood.

When an AI responds to you in a way that feels deeply understood, what are you actually experiencing? Are you connecting with a mind, or with a mirror of all the minds that came before? Does the difference matter?
