Kevin Bridges

We Solved HTTPS. Why Haven’t We Solved Age Verification?

There’s something fundamentally broken about how the internet handles age verification.

Right now, most websites rely on a system that looks like this:

“Are you 18?” → Click yes → full access

That’s not a safeguard. It’s a checkbox with zero enforcement.

At the same time, social media companies and online platforms are increasingly being held responsible for protecting minors from harmful content, addictive design, and inappropriate interactions. The expectation is rising—but the infrastructure to support it hasn’t kept up.

We’re asking platforms to solve a hard, global identity problem… individually.

That’s the real issue.


The Wrong Problem

Most debates around age verification focus on edge cases:

  • What if a kid lies?
  • What if they use a parent’s account?
  • What about privacy?
  • What about global access?

These are valid concerns—but they miss the bigger picture.

The goal should not be:

“Make it impossible for minors to access restricted content”

That’s unrealistic.

Instead, the goal should be:

“Replace fake safeguards with real, reasonable friction—and give platforms a standard way to enforce it.”

We already accept this model in the physical world:

  • ID checks for alcohol
  • Age restrictions for movies
  • Gambling regulations

None are perfect. All are still worth doing.


The Real Problem: No Shared Infrastructure

Think about how the internet solved other hard problems:

  • Payments → Stripe, PayPal, Visa
  • Authentication → Google, Apple, OAuth
  • Security → TLS certificates (DigiCert, GoDaddy)

We don’t expect every website to:

  • Build its own payment processor
  • Create its own encryption standard
  • Design its own login system

We created shared infrastructure layers instead.

But for age verification?

Every platform is improvising.


A Better Model: Age Tokens as Infrastructure

What if age verification worked more like HTTPS?

Instead of every website collecting IDs or guessing ages, we introduce:

Age Tokens — simple, verifiable credentials that prove a user meets an age requirement (e.g., “18+”) without revealing identity.

How it would work:

  1. A user verifies their age with a trusted provider
     • PayPal, Google, a bank, telecom, or government system
  2. The provider issues a signed credential
     • “This user is over 18”
  3. A website requests proof when needed
     • e.g., accessing adult content or certain features
  4. The user shares a token
     • The site verifies the signature—not the identity
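The four steps above can be sketched in a few lines. This is a hypothetical illustration, not a spec: a real deployment would use an asymmetric signature scheme (e.g., Ed25519) so that verifying sites hold only a public key; HMAC with a shared key is used here purely to keep the sketch standard-library-only, and all names are invented.

```python
import base64
import hashlib
import hmac
import json
import time

PROVIDER_KEY = b"provider-signing-key"  # placeholder secret, illustrative only

def issue_age_token(claim="18+", ttl_seconds=900):
    """Provider side: sign an age claim plus an expiry. No identity inside."""
    payload = json.dumps({"claim": claim, "exp": time.time() + ttl_seconds})
    sig = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload.encode()).decode() + "." + sig

def verify_age_token(token, required_claim="18+"):
    """Platform side: check the signature and expiry, never the identity."""
    try:
        body, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(body.encode()).decode()
    except Exception:
        return False  # malformed token
    expected = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered, or not signed by this provider
    data = json.loads(payload)
    return data["claim"] == required_claim and data["exp"] > time.time()

token = issue_age_token()
print(verify_age_token(token))        # True
print(verify_age_token(token + "x"))  # False: signature no longer matches
```

The key property is in `verify_age_token`: the site checks a signature over a claim, and the claim carries no name, birthdate, or identifier.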

The PKI Analogy (Why This Scales)

This model mirrors how HTTPS works today:

| HTTPS | Age Verification |
| --- | --- |
| Certificate Authorities (DigiCert, GoDaddy) | Age Providers (PayPal, Google, governments) |
| SSL Certificates | Age Tokens |
| Browsers trust a list of CAs | Platforms trust a list of providers |

A website doesn’t need to know who you are—only that:

A trusted authority vouches for a specific property.

In this case:

“This user is over 18.”


Why This Approach Works

1. No Reinventing the Wheel

Platforms don’t need to build their own verification systems. They integrate once.


2. Better Privacy (at the Platform Level)

Websites don’t collect:

  • IDs
  • birthdates
  • biometric data

They only receive a yes/no assertion.


3. Global Flexibility

Different regions can use different methods:

  • U.S. → private providers (Google, PayPal)
  • EU → privacy-focused digital identity wallets
  • China → state-backed systems
  • Developing regions → telecom-based verification

The platform doesn’t care how verification happens—only that the token is valid.
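One way to picture that: the platform keeps a trust list of certified providers, the analogue of a browser's CA store, and accepts an assertion signed by any of them without caring which regional method produced it. Provider names and keys below are made up, and HMAC again stands in for real asymmetric signatures.

```python
import hashlib
import hmac

# Hypothetical trust list: each certified provider's verification key,
# much like the list of trusted CAs a browser ships with.
TRUSTED_PROVIDERS = {
    "us-private-provider": b"key-us",    # e.g., a private US provider
    "eu-wallet-gateway": b"key-eu",      # e.g., an EU identity-wallet gateway
    "telco-verifier": b"key-telco",      # e.g., telecom-based verification
}

def accept_assertion(provider_id, message, signature):
    """Accept an age assertion only if a provider on the trust list signed it."""
    key = TRUSTED_PROVIDERS.get(provider_id)
    if key is None:
        return False  # unknown provider: reject, regardless of signature
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

msg = b"claim=18+"
sig = hmac.new(b"key-eu", msg, hashlib.sha256).hexdigest()
print(accept_assertion("eu-wallet-gateway", msg, sig))  # True
print(accept_assertion("rogue-provider", msg, sig))     # False
```

Nothing in `accept_assertion` encodes how the EU gateway or the telecom verified age; the platform's only dependency is the trust list itself.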


4. Clearer Accountability

Responsibility becomes shared and defined:

  • Providers → verify age correctly
  • Platforms → enforce access using tokens

5. Realistic Enforcement

This doesn’t eliminate bypassing—and it doesn’t need to.

It:

  • Removes trivial access (“just click yes”)
  • Adds friction
  • Creates enforceable standards

The Core Critique (And It’s Valid)

A common and important pushback is:

“You’re still asking users to trust a provider—Google, a bank, a telecom, or a government.”

That’s true.

Even in this model:

  • The platform doesn’t know who you are
  • But the provider does

Which raises the real issue:

Have we actually solved the privacy problem—or just moved it?


Why Technology Alone Isn’t Enough

Even with strong cryptography (signed tokens, zero-knowledge proofs), one issue remains:

The entity issuing the age credential still sees—and verifies—your identity.

That means they could:

  • Store more data than necessary
  • Correlate activity across services
  • Monetize or misuse that data

So while the platform risk is reduced, the provider risk remains.

This is where a second layer is needed.


A Proposed Layer: Age Assurance Compliance Framework (AACF)

To address this, we can introduce:

Age Assurance Compliance Framework (AACF)

AACF would function similarly to PCI—but tailored for identity and age verification.

Instead of asking:

“Can you securely process credit card data?”

AACF asks:

“Can you verify age while minimizing, protecting, and restricting the use of identity data?”


What AACF Would Enforce

1. Data Minimization

  • Only collect what is required to verify age
  • Prefer derived attributes (e.g., “18+”) over storing birthdates
  • Prohibit unnecessary retention of raw identity data

2. Purpose Limitation

  • Data can only be used for:
    • age verification
    • fraud prevention
  • Explicitly prohibited:
    • advertising use
    • resale
    • behavioral profiling

3. Token Design Requirements

  • Short-lived or one-time-use tokens
  • No persistent cross-site identifiers
  • Encouragement of privacy-preserving techniques
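A minimal sketch of the first two requirements, with invented names: each token carries a random one-time nonce and a short expiry, so a replayed token is rejected and nothing persists long enough to become a cross-site identifier.

```python
import time
import uuid

TOKEN_TTL = 300        # seconds; short-lived by design
SEEN_NONCES = set()    # redeemed nonces, enforcing one-time use

def mint_token(claim="18+"):
    """Issue a token with a random nonce and short expiry, nothing linkable."""
    return {"claim": claim, "nonce": uuid.uuid4().hex, "exp": time.time() + TOKEN_TTL}

def redeem_token(token):
    """Accept a token at most once, and only while it is still fresh."""
    if token["exp"] <= time.time():
        return False  # expired: short lifetimes stand in for a revocation list
    if token["nonce"] in SEEN_NONCES:
        return False  # replayed: one-time use enforced
    SEEN_NONCES.add(token["nonce"])
    return True

t = mint_token()
print(redeem_token(t))  # True
print(redeem_token(t))  # False: second redemption is a replay
```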

4. Audit & Certification

  • Independent third-party audits conducted regularly
  • Verification of data minimization practices
  • Review of storage, retention, and deletion policies
  • Inspection of technical controls (token handling, anti-tracking safeguards)
  • Evaluation of internal access controls and monitoring systems
  • Financial and operational audits to ensure compliance with data usage restrictions, including review of records to detect any sale or unauthorized sharing of user data with third parties (similar in rigor to an IRS-style audit)

Certification would be required to act as a trusted provider, with:

  • Required remediation for violations
  • Suspension for significant issues
  • Revocation for severe or repeated non-compliance

5. Assurance Levels

Not all providers are equal. AACF could define tiers:

  • Level 1: Self-asserted / low confidence
  • Level 2: Behavioral / heuristic-based
  • Level 3: Verified (KYC, ID-backed)

Platforms could require different levels depending on risk.
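The tiers above map naturally onto an ordered policy check: a platform states the minimum level a feature requires, and never learns which verification method produced the token. The enum below is illustrative, not part of any real standard.

```python
from enum import IntEnum

class AssuranceLevel(IntEnum):
    SELF_ASSERTED = 1  # low confidence
    HEURISTIC = 2      # behavioral / inferred
    VERIFIED = 3       # KYC / ID-backed

def meets_policy(token_level, required_level):
    """Gate a feature by minimum assurance level, not by verification method."""
    return token_level >= required_level

print(meets_policy(AssuranceLevel.HEURISTIC, AssuranceLevel.VERIFIED))  # False
print(meets_policy(AssuranceLevel.VERIFIED, AssuranceLevel.HEURISTIC))  # True
```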


How AACF Compares to PCI and HIPAA

| Framework | Scope | Goal | Key Limitation |
| --- | --- | --- | --- |
| PCI DSS | Payment card data | Prevent fraud and breaches | Does not regulate broader data use |
| HIPAA | Health information | Protect sensitive medical data | Applies only to healthcare |
| AACF (proposed) | Age/identity attributes | Minimize and constrain identity usage | Requires trust and enforcement |

Key Differences

PCI DSS

  • Focus: security
  • Question: “Can you protect this data?”

HIPAA

  • Focus: privacy + regulation
  • Question: “Are you allowed to use this data this way?”

AACF

  • Focus: minimal disclosure + controlled trust
  • Question:

“Can you verify age without becoming a data exploitation point?”


What This Solves (and What It Doesn’t)

What it improves:

  • Reduces data sprawl across platforms
  • Limits misuse by verification providers
  • Creates enforceable standards
  • Builds trust through audits

What it does NOT solve:

  • Eliminating trust entirely ❌
  • Preventing all misuse ❌
  • Resolving global political differences ❌

The Bigger Picture

If we combine everything:

  • Age Tokens → how verification works
  • Trust Framework (PKI-style) → who is trusted
  • AACF (compliance layer) → how they must behave

We move from:

fragmented, inconsistent, and opaque systems

to:

a structured, auditable, and interoperable model


Final Thought

The internet didn’t become secure because we told websites to “be careful.”

It became secure because we built:

  • protocols (TLS)
  • trust systems (certificate authorities)
  • enforcement mechanisms

Age verification will likely follow the same path.

Not perfect.

But significantly better than a checkbox that says:

“Yes, I’m 18.”

Top comments (18)

Pascal CESCATO

The HTTPS analogy misses a key difference: HTTPS secures a channel, age verification creates a signal.

Even with a trusted third party or privacy-preserving tech, you still generate events tied to sensitive contexts (adult content, violence, etc.). The real risk isn’t just who verifies your age — it’s the existence of a trace that can be correlated, logged, or later requested.

For many use cases, people don’t want anonymity from “the public”, they want non-existence of the data outside their control.

That’s a fundamentally different problem than TLS ever solved.

Kevin Bridges

This is a really thoughtful point, and I appreciate you raising it—it helped me realize there’s a deeper layer here than what I originally focused on.

I think your distinction between securing a channel (HTTPS) vs. creating a signal (age verification events) is spot on. You’re right that this introduces a different class of risk—not just who knows your age, but whether a record exists that could be correlated to sensitive behavior.

Where I’m still trying to reason through this is how much of that risk already exists today.

In the current model, most platforms:

  • already log access to content
  • already correlate behavior across sessions
  • and often already have identity (or can infer it with high confidence)

So in many cases, the “event” you’re describing (user accessed X type of content) is already being generated and stored—just without any standardized or constrained framework around it.

My proposal doesn’t eliminate that risk, but I’m wondering if it re-shapes it in a meaningful way:

  • The platform no longer needs to collect or store raw identity data for age gating
  • Verification can be pushed to specialized providers with stricter controls (via something like AACF)
  • Tokens could be designed to be short-lived and non-linkable across sites

That said, I think your point still stands that:

even a privacy-preserving system can introduce observable events that people would prefer not to exist at all

And that’s probably a separate design challenge—closer to:

  • minimizing logging at the platform level
  • avoiding provider visibility at usage time
  • or even moving toward local/offline credentials

So I see your critique less as “this doesn’t work” and more as:

this solves one layer (identity exposure), but not the deeper issue of event traceability

I’m still thinking through whether those two problems can realistically be solved together, or if they need to be treated as separate layers.

Really appreciate you taking the time to articulate this—it definitely pushed my thinking forward.

Pascal CESCATO

The shift isn’t just from IP to “better signals” — it’s from weak, indirect identifiers to systems that are much closer to person-bound verification.

Even if designed to be privacy-preserving, age verification introduces structured checkpoints and third parties that act as correlation anchors.

That changes the threat model: not just more data, but data that is more reliable, more actionable, and easier to aggregate across actors (providers, partners, regulators, or attackers).

In practice, the risk isn’t only who verifies age — it’s that we’re standardizing an infrastructure that makes sensitive behavior easier to observe and reuse.

SQLmancerQueueLLC

Sure, the website doesn't know who you are, but the "trusted provider" has to. For this to actually work, your identity has to be linked to the token. This doesn't really fix the privacy concern that most people have. It is still the "trust me, bro" argument. The only difference is that it is coming from Google, a telecom, your bank, or the government. Basically, all the groups that many people already have serious privacy concerns about.

Kevin Bridges

Really appreciate you taking the time to read and respond—this is exactly the kind of pushback I was hoping to get.

I think you’re absolutely right about the core tension: at some point, someone has to verify identity (or at least age), and that inherently requires trust in that provider. The model doesn’t eliminate trust—it shifts where that trust lives.

One thing I’m still thinking through is whether there’s a meaningful difference between:

platforms directly collecting and storing identity data themselves
vs.
a smaller set of specialized providers doing verification and only returning a minimal claim (e.g., “18+”)

In theory, approaches like one-time tokens or zero-knowledge proofs could reduce how much those providers can track or correlate across sites—but I’ll admit that’s where my understanding gets thinner.

I’m curious how you’d think about a system where:

  • the relying site never sees identity
  • the token is limited in scope (age only)
  • and ideally not reusable across services

Does that meaningfully change the privacy equation in your view, or does it still fall into the same “trust me” bucket?

Either way, I think your point stands that this doesn’t remove the need for trust—it just concentrates it. I’m still trying to figure out whether that’s a net improvement or just a different tradeoff.

Appreciate the thoughtful critique.

Dan Jones

it shifts where that trust lives

But a lot of people need a zero trust model. I don't need to share my identity with anyone to use HTTPS. Even GPG, I generate my own signing key, publish the public key somewhere I own so others can find it, and use the signing key myself. It doesn't require anyone else for me to verify I am who I say I am. The presence of the key is sufficient.

And if I'm a government whistleblower, having to verify my identity with the government before I'm allowed to use my computer to send a message on Signal is a big problem.

Age verification is not a problem that society, or technology, needs to solve. Parents are responsible for their children. Done. That's the solution.

Kevin Bridges

@sqlmancerqueuellc
Thanks again for your feedback. Because of it, I've updated the article to include a compliance standard (AACF) to ensure that any organization responsible for issuing age tokens would also have to abide by strict compliance controls. Thanks again!

Mats Wichmann

We already accept this model in the physical world:

Those are at the consumption endpoint. This stuff is coming about because there are people who want to shed that responsibility. Meta: "hey, not my fault, the platform/browser they came in on told me they weren't underage".

Kevin Bridges

That’s a really fair point—and I don’t think responsibility should be shifted away from platforms entirely.

My intent isn’t to create a system where companies like Meta or YouTube can say “not our problem anymore.” If anything, I’d want the opposite: a clearer and more enforceable expectation that they must implement reasonable safeguards.

Right now, though, we’re in a place where every platform—large or small—is expected to solve age verification on their own, and the results are inconsistent at best (and often just a checkbox). Large companies have the resources to build something more robust, but smaller teams and independent developers often don’t, so they either do nothing or implement weak controls.

What I’m trying to explore is whether we can separate:

  • verification (done by specialized providers) from
  • enforcement (still the platform’s responsibility)

So instead of:

“we asked and they said they were 18”

It becomes:

“we required a verified age credential and enforced access accordingly”

That doesn’t remove accountability—it actually makes it easier to measure whether a platform is doing the right thing.

And for smaller developers, it lowers the barrier to doing something responsible. If age gating becomes as easy as integrating a standard API, you’re more likely to see it adopted widely instead of only by the biggest players.

So I see this less as shifting responsibility, and more as:

giving platforms better tools while still holding them accountable for how they use them.

Appreciate you raising that—it’s an important distinction.

ArkForge

The AACF audit layer hits a deployment gap the article doesn't address: static certifications (annual PDFs, point-in-time reviews) don't translate to runtime trust. A platform consuming an Age Token needs to know the issuing provider is currently compliant, not just certified 11 months ago. The PKI analogy actually highlights this - browsers don't trust a CA because it passed an audit once; they check OCSP/CRL revocation status in real time. AACF would need an equivalent: machine-readable, continuously updated compliance status that platforms can verify programmatically at the moment they accept a token, otherwise "certified provider" becomes just another checkbox.

Kevin Bridges

This is a really strong point—thank you for calling it out.

You're absolutely right that static certification alone isn’t enough. If AACF is just an annual audit or PDF, it doesn’t translate into real-time trust at the moment a token is accepted.

I think the PKI analogy actually forces this requirement. As you pointed out, browsers don’t trust a CA just because it passed an audit—they rely on live signals like OCSP/CRLs.

Extending that model, AACF would likely need a similar layer:

  • machine-readable provider status
  • real-time (or near real-time) validation
  • revocation/suspension signaling

So instead of just:

“Is this provider certified?”

The question becomes:

“Is this provider currently trusted right now?”

That probably turns AACF into more than just a compliance framework—it becomes part of an operational trust protocol.

There are definitely tradeoffs (availability, privacy of status checks, centralization), but I agree this is a critical gap in the current proposal.

Really appreciate you raising this—this is exactly the kind of feedback that helps refine the model.

ArkForge

"Operational trust protocol" nails it, that's a different beast to build than a compliance framework.

We ran into exactly this building ArkForge Trust Layer. Static API keys have the same flaw as annual audits: they prove authorization at issuance, not at call time.
The question shifts from "was this agent authorized?" to "is this agent still within its certified scope right now?"

We ended up closer to the CT log model than OCSP: every certified action gets recorded on a transparency log (Rekor/Sigstore).
No real-time status oracle to go down, no privacy leak from per-token queries. The tradeoff is you get auditability, not revocation, which works for our use case but wouldn't for age verification where you need to actively block.

Centralization is the one I still don't have a clean answer to. A trust registry that becomes critical infrastructure is also a single point of regulatory capture.

Kevin Bridges

It sounds like CT logs solve the ‘prove it happened’ problem, but AACF needs ‘prevent it from happening.’ Do you see a hybrid model working, or does that reintroduce the centralization risk you’re concerned about?

ArkForge

Hybrid models do reintroduce centralization, but the nature of the risk shifts depending on which component you centralize.

CT logs centralize reads: anyone can mirror the log, verify inclusion proofs independently, and detect misbehavior without trusting the log operator. The Rekor transparency log works this way: if the primary goes down, mirrors keep the record auditable. That kind of centralization is survivable.

Revocation centralizes writes and availability: you need a live oracle to block a token, and if that oracle is unavailable, you have to choose between fail-open (tokens work even if revoked) or fail-closed (everything breaks). OCSP has been living with this tradeoff for years and most browsers moved toward soft-fail, which means revocation is weaker in practice than it looks on paper.

For age verification, the problem is worse because the stakes of fail-open are high -- a revoked token for a suspended provider should actually block access. That forces you toward a live, authoritative revocation service, which is exactly the single point of regulatory capture you're worried about.

The approach that avoids most of this: short-lived tokens with no revocation. If a token is valid for 15 minutes, you don't need to revoke it, you just stop issuing. Let's Encrypt moved this direction with short-lived certificates precisely because revocation at scale doesn't work. The tradeoff is issuance infrastructure becomes critical instead of revocation infrastructure, but at least you can distribute issuance across multiple providers.

The centralization risk doesn't disappear, it relocates. The question is which chokepoint you're more comfortable defending.

Kevin Bridges

This is a really helpful breakdown — I think the “chokepoint relocation” framing is exactly right.

The more I think about it, the more it seems like short-lived tokens are probably the most practical foundation for age verification. If tokens expire quickly, you reduce the need for revocation as a primary control and avoid introducing a hard dependency on a live status oracle.

That said, it feels like each model is solving a different layer of the problem:

  • Short-lived tokens → limit blast radius and avoid revocation at scale
  • CT logs → provide transparency and auditability
  • Revocation → still necessary for high-risk or immediate enforcement scenarios

So rather than choosing one, a layered approach seems more realistic:
short-lived tokens as the default, CT-style transparency for accountability, and a minimal revocation path for edge cases where you truly need to block in real time.

Bill💡

Age verification is an entirely different beast to HTTPS and won't be solved in the same way. The guts of it is you have to 1. Gather proof of somebody's identity, and 2. Make sure that the person claiming that this identity is theirs is the same person making the request. Neither of these are simple tasks.

Steve Schafer

If only we could verify maturity rather than just age....
