Claude Now Wants Your Passport: What Developers Need to Know About Anthropic's Identity Verification
On April 15, 2026, Anthropic quietly rolled out identity verification for Claude users. The requirement: a government-issued photo ID (passport, driver's license, or national ID card) plus a live selfie. No photocopies. No digital IDs. No student credentials.
The developer community is not happy about it.
What Exactly Is Required
- A physical, undamaged government-issued photo ID held in front of a camera
- A live selfie taken in real time
- The process takes "under five minutes" according to Anthropic
- Verification is handled by Persona, a third-party identity verification company
Accepted documents: passport, driver's license, state/provincial ID, national identity card. Not accepted: photocopies, mobile IDs, temporary paper IDs, non-government IDs.
When Does Verification Trigger?
This is where things get problematic. Anthropic's help page lists three triggers:
- "Accessing certain capabilities"
- "Routine platform integrity checks"
- "Safety and compliance measures"
That's it. No specifics. No list of gated features. No explanation of what behavior prompts a check. As one Hacker News commenter put it: "It's worrying that they don't specify in which cases they require identity checks." Another replied: "The only relevant question, and it's the one they didn't answer."
The Persona Problem
Anthropic isn't handling verification directly. They're using Persona Identities as a third-party processor. This introduces a separate set of concerns:
Data flow: Your ID and selfie go to Persona, not Anthropic's servers. Anthropic can access verification records through Persona's platform when needed.
Subprocessors: According to Hacker News analysis, Persona may share data with up to 17 different subprocessors. Whether these subprocessors follow the same privacy commitments as Anthropic is unclear.
Data retention: Anthropic's help page does not specify how long Persona retains your ID data.
Training: Anthropic says "We are not using your identity data to train our models." But whether Persona uses the data for their own model training or fraud detection improvements is a separate question.
Developer Reactions
The Hacker News thread has 100+ comments, mostly critical:
- "Does the company follow same privacy commitments as Anthropic itself? Hell no!"
- "Why do they wait to ban until after collecting personal info?" — Multiple users report being asked to verify immediately before account suspension
- "The AI itself is the security layer — ID adds zero marginal security"
- "When Persona inevitably gets compromised, threat to users exceeds benefits"
The irony isn't lost on developers: many switched to Claude specifically because of Anthropic's stated commitment to safety and privacy. Being asked to upload government IDs to a third-party service feels like a betrayal of that positioning.
What This Means for Developers Using Claude's API
Here's what matters practically:
| Access Method | Verification Required? |
|---|---|
| Claude.ai (web) | Yes, may be triggered |
| Claude Code (CLI) | Yes, may be triggered |
| Claude API (direct) | No — API key authentication only |
| Claude via third-party providers | No — provider handles auth |
If you're accessing Claude models through the API — whether directly or through a unified gateway — this doesn't affect you. API access is authenticated via API keys, not identity documents.
This distinction matters for production applications. If your product depends on Claude, you probably don't want individual developer accounts subject to opaque verification triggers. API access through your organization's account or through a multi-provider gateway keeps things predictable.
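To make the distinction concrete, here is a minimal sketch of what API-key authentication against Anthropic's Messages API looks like, using only the standard library. The `x-api-key` header is the entire credential; no identity document enters the flow. The model id is illustrative and may not match current model names.

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"  # Anthropic Messages API endpoint

def build_request(prompt: str, api_key: str,
                  model: str = "claude-sonnet-4-5") -> urllib.request.Request:
    """Build an authenticated Messages API request.

    Authentication is a single `x-api-key` header; there is no
    identity-verification step anywhere in the API path.
    """
    body = json.dumps({
        "model": model,  # illustrative model id
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "x-api-key": api_key,              # the API key is the whole credential
            "anthropic-version": "2023-06-01", # required API version header
            "content-type": "application/json",
        },
        method="POST",
    )

# Sending it (requires a real key in the environment):
#   import os
#   with urllib.request.urlopen(build_request("Hello", os.environ["ANTHROPIC_API_KEY"])) as resp:
#       print(json.load(resp)["content"][0]["text"])
```

The same applies whether you call the endpoint directly or let an SDK do it for you: the credential is a key tied to an organization's account, not a person's passport.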
The Bigger Pattern
This isn't happening in isolation. AI providers are increasingly adding friction to direct access:
- OpenAI has rate-limited free tier API access multiple times
- Google pushes billing setup for anything beyond the Gemini API's limited free tier
- Anthropic now adds ID verification for certain Claude features
The trend is clear: direct consumer access to frontier AI models is getting more restricted. Developer and enterprise access through APIs remains the stable path.
What You Can Do
If you're a Claude.ai user: Decide whether you're comfortable providing government ID to a third party. If not, the API is an alternative that doesn't require it.
If you're building on Claude's API: No action needed. API authentication is separate from user identity verification.
If you depend on multiple AI models: Consider using a multi-provider API gateway that gives you access to Claude, GPT, Gemini, and other models through a single endpoint. If one provider adds friction, you can route to another without code changes.
If you're concerned about privacy: Review Persona's privacy policy separately from Anthropic's. They are different companies with different data practices.
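The fallback idea in the multi-provider point above can be sketched in a few lines. Everything here is hypothetical (the `Provider` type and the client callables are stand-ins, not a real gateway SDK); the point is that routing around one provider's new friction is a loop, not a rewrite.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    """A named completion backend (hypothetical wrapper, not a real SDK)."""
    name: str
    call: Callable[[str], str]  # prompt -> completion text

def complete_with_fallback(prompt: str, providers: list[Provider]) -> str:
    """Try providers in preference order, falling back on any failure.

    A failure might be a rate limit, an outage, or a new access
    requirement -- the caller's code does not change either way.
    """
    errors = []
    for provider in providers:
        try:
            return provider.call(prompt)
        except Exception as exc:
            errors.append(f"{provider.name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Example with fake backends: the first raises, the second answers.
def claude_stub(prompt: str) -> str:
    raise RuntimeError("verification required")

def gemini_stub(prompt: str) -> str:
    return f"answer to: {prompt}"

providers = [Provider("claude", claude_stub), Provider("gemini", gemini_stub)]
print(complete_with_fallback("hello", providers))
```

Real gateways add retries, cost-based routing, and model-name translation on top, but the core contract is this single function signature.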
The full policy is on Claude's help center, and the discussion thread is on Hacker News.
What's your take — reasonable safety measure, or overreach? Drop your thoughts in the comments.