DEV Community

Tom Lee
Tom Lee

Posted on

81,000 People Told Anthropic What They Really Want from AI — It's Not What You Think

Anthropic just published the largest qualitative AI study ever conducted. 80,508 people. 159 countries. 70 languages. One week. And the results flip the dominant narrative about what AI users actually care about.

The headline finding is deceptively simple: people don't want AI that does more. They want AI they can trust.

The Study

The "81k Interviews" project used Claude-based AI interviewers to conduct structured conversations with participants worldwide. Each interview adapted its follow-up questions based on responses — a hybrid approach that captures both the scale of surveys and the depth of qualitative research.

What People Actually Said

They Want Freedom, Not Productivity

The surface-level answer is "productivity." Dig deeper, and the real motivation emerges: people want to reclaim time, reduce cognitive load, and regain control over their lives.

For agent builders, this means the bar isn't "do the task." It's "do the task so reliably that I stop thinking about it."

Trust Beats Intelligence

When asked about concerns, respondents didn't cite AGI, killer robots, or existential risk. Their top worries:

  • Hallucination — AI confidently stating falsehoods
  • Reliability — inconsistent behavior across sessions
  • Verification cost — having to double-check everything the AI produces

The biggest problem with AI isn't that it might become too powerful. It's that it might not be trustworthy enough to actually use.

They Want Transparency and Control

Respondents prioritized: explainability, source citation, error recovery, override controls, and audit logs. This is a governance wishlist that maps almost perfectly to what a well-structured agent identity system should provide.

The Trust Gap in Agent Design

Most AI agents today have no structured way to declare their trustworthiness. Their safety behaviors are either baked into model weights (invisible), written in system prompts (fragile), or enforced by external guardrails (bolted on).

None of these give users what they asked for: transparency, predictability, and control.

What a Trust-First Agent Looks Like

Imagine an agent that ships with a machine-readable identity file:

# safety.laws
priority: 1
law: "Never fabricate citations or sources"
enforcement: hard
override: none

priority: 2
law: "Always disclose uncertainty levels"
enforcement: hard
override: admin
Enter fullscreen mode Exit fullscreen mode

Soul Spec provides exactly this structure — a portable, inspectable standard for declaring an agent's identity, personality, and safety rules.

The Safety Layer Nobody Built

What users want Current solutions The gap
Predictable behavior RLHF training Varies by model, invisible
Audit trail Logging tools No standard format
Safety guarantees System prompts Fragile, not portable
Cross-model consistency Nothing Complete gap

SoulScan addresses this with 53 static analysis patterns that catch prompt injection, privilege escalation, and data exfiltration attempts before they reach the model.

What This Means

The next competitive frontier in AI isn't model intelligence — it's model trustworthiness.

The race to build the most capable model continues. But the race that matters more — the one 81,000 people just told us about — is the race to build AI you don't have to babysit.


Originally published at blog.clawsouls.ai

References:

Top comments (0)