We Solved HTTPS. Why Haven’t We Solved Age Verification?
There’s something fundamentally broken about how the internet handles age verification...
The HTTPS analogy misses a key difference: HTTPS secures a channel, age verification creates a signal.
Even with a trusted third party or privacy-preserving tech, you still generate events tied to sensitive contexts (adult content, violence, etc.). The real risk isn’t just who verifies your age — it’s the existence of a trace that can be correlated, logged, or later requested.
For many use cases, people don’t want anonymity from “the public”, they want non-existence of the data outside their control.
That’s a fundamentally different problem than TLS ever solved.
This is a really thoughtful point, and I appreciate you raising it—it helped me realize there’s a deeper layer here than what I originally focused on.
I think your distinction between securing a channel (HTTPS) vs. creating a signal (age verification events) is spot on. You’re right that this introduces a different class of risk—not just who knows your age, but whether a record exists that could be correlated to sensitive behavior.
Where I’m still trying to reason through this is how much of that risk already exists today.
In the current model, most platforms already collect and store this kind of signal themselves: account activity, IP addresses, and what content a user accessed. So in many cases, the “event” you’re describing (user accessed X type of content) is already being generated and stored, just without any standardized or constrained framework around it.
My proposal doesn’t eliminate that risk, but I’m wondering if it re-shapes it in a meaningful way: the trace becomes a minimal, scoped claim (“18+”) held by a constrained provider, rather than full behavioral data spread across every platform.
That said, I think your point still stands: even a minimal claim still leaves a trace, and what many people actually want is for that data not to exist outside their control at all. That’s probably a separate design challenge, closer to data minimization by construction than to deciding where trust lives. So I see your critique less as “this doesn’t work” and more as “this doesn’t address the deeper problem.”
I’m still thinking through whether those two problems can realistically be solved together, or if they need to be treated as separate layers.
Really appreciate you taking the time to articulate this—it definitely pushed my thinking forward.
The shift isn’t just from IP to “better signals” — it’s from weak, indirect identifiers to systems that are much closer to person-bound verification.
Even if designed to be privacy-preserving, age verification introduces structured checkpoints and third parties that act as correlation anchors.
That changes the threat model: not just more data, but data that is more reliable, more actionable, and easier to aggregate across actors (providers, partners, regulators, or attackers).
In practice, the risk isn’t only who verifies age — it’s that we’re standardizing an infrastructure that makes sensitive behavior easier to observe and reuse.
Sure, the website doesn't know who you are, but the "trusted provider" has to. For this to actually work, your identity has to be linked to the token. This doesn't really fix the privacy concern that most people have. It is still the "trust me, bro" argument. The only difference is that it is coming from Google, a telecom, your bank, or the government: basically, all the groups that many people already have serious privacy concerns about.
Really appreciate you taking the time to read and respond—this is exactly the kind of pushback I was hoping to get.
I think you’re absolutely right about the core tension: at some point, someone has to verify identity (or at least age), and that inherently requires trust in that provider. The model doesn’t eliminate trust—it shifts where that trust lives.
One thing I’m still thinking through is whether there’s a meaningful difference between:
platforms directly collecting and storing identity data themselves
vs.
a smaller set of specialized providers doing verification and only returning a minimal claim (e.g., “18+”)
In theory, approaches like one-time tokens or zero-knowledge proofs could reduce how much those providers can track or correlate across sites—but I’ll admit that’s where my understanding gets thinner.
I’m curious how you’d think about a system where:
the relying site never sees identity
the token is limited in scope (age only)
and ideally not reusable across services
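A minimal sketch of what such a token could look like, assuming an HMAC-signed claim purely for illustration (a real system would use asymmetric signatures so relying sites never hold the issuer's secret; all field names here are hypothetical):

```python
import base64
import hmac
import json
import secrets
import time

# Hypothetical sketch: the provider issues a minimal, signed claim
# ("age_over_18") with a short lifetime and a one-time nonce, so the
# relying site learns nothing about identity and cannot reuse it.
ISSUER_KEY = secrets.token_bytes(32)  # held by the issuing provider
SEEN_NONCES = set()                   # relying side: reject replays

def issue_age_token(ttl_seconds=300):
    claim = {
        "claim": "age_over_18",           # the only assertion made
        "exp": int(time.time()) + ttl_seconds,
        "nonce": secrets.token_hex(16),   # single-use marker
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, "sha256").hexdigest()
    return base64.b64encode(payload).decode(), sig

def verify_age_token(token_b64, sig):
    payload = base64.b64decode(token_b64)
    expected = hmac.new(ISSUER_KEY, payload, "sha256").hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claim = json.loads(payload)
    if claim["exp"] < time.time():
        return False                      # expired: ages out on its own
    if claim["nonce"] in SEEN_NONCES:
        return False                      # already spent: not reusable
    SEEN_NONCES.add(claim["nonce"])
    return claim["claim"] == "age_over_18"

token, sig = issue_age_token()
print(verify_age_token(token, sig))  # True on first use
print(verify_age_token(token, sig))  # False: token is single-use
```

The point of the sketch is the shape of the data, not the crypto: the relying site only ever sees “18+, fresh, unspent.”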
Does that meaningfully change the privacy equation in your view, or does it still fall into the same “trust me” bucket?
Either way, I think your point stands that this doesn’t remove the need for trust—it just concentrates it. I’m still trying to figure out whether that’s a net improvement or just a different tradeoff.
Appreciate the thoughtful critique.
But a lot of people need a zero-trust model. I don't need to share my identity with anyone to use HTTPS. Even with GPG, I generate my own signing key, publish the corresponding public key somewhere I own so others can find it, and keep the private signing key myself. It doesn't require anyone else for me to verify I am who I say I am. The presence of the key is sufficient.
And if I'm a government whistleblower, having to verify my identity with the government before I'm allowed to use my computer to send a message on Signal is a big problem.
Age verification is not a problem that society, or technology, needs to solve. Parents are responsible for their children. Done. That's the solution.
@sqlmancerqueuellc
Thanks again for your feedback. Because of it, I've updated the article to include a compliance standard (AACF) to ensure that any organization responsible for issuing age tokens would also have to abide by strict compliance controls.
Those are at the consumption endpoint. This stuff is coming about because there are people who want to shed that responsibility. Meta: "hey, not my fault, the platform/browser they came in on told me they weren't underage".
That’s a really fair point—and I don’t think responsibility should be shifted away from platforms entirely.
My intent isn’t to create a system where companies like Meta or YouTube can say “not our problem anymore.” If anything, I’d want the opposite: a clearer and more enforceable expectation that they must implement reasonable safeguards.
Right now, though, we’re in a place where every platform—large or small—is expected to solve age verification on their own, and the results are inconsistent at best (and often just a checkbox). Large companies have the resources to build something more robust, but smaller teams and independent developers often don’t, so they either do nothing or implement weak controls.
What I’m trying to explore is whether we can separate the obligation to protect users (which stays with the platform) from the mechanism of verification (which could be standardized). So instead of every platform inventing its own ad-hoc age check, verification becomes a shared, auditable standard that platforms are measured against.
That doesn’t remove accountability—it actually makes it easier to measure whether a platform is doing the right thing.
And for smaller developers, it lowers the barrier to doing something responsible. If age gating becomes as easy as integrating a standard API, you’re more likely to see it adopted widely instead of only by the biggest players.
So I see this less as shifting responsibility, and more as standardizing how that responsibility gets met.
Appreciate you raising that; it’s an important distinction.
The AACF audit layer hits a deployment gap the article doesn't address: static certifications (annual PDFs, point-in-time reviews) don't translate to runtime trust. A platform consuming an Age Token needs to know the issuing provider is currently compliant, not just certified 11 months ago. The PKI analogy actually highlights this - browsers don't trust a CA because it passed an audit once; they check OCSP/CRL revocation status in real time. AACF would need an equivalent: machine-readable, continuously updated compliance status that platforms can verify programmatically at the moment they accept a token, otherwise "certified provider" becomes just another checkbox.
This is a really strong point—thank you for calling it out.
You're absolutely right that static certification alone isn’t enough. If AACF is just an annual audit or PDF, it doesn’t translate into real-time trust at the moment a token is accepted.
I think the PKI analogy actually forces this requirement. As you pointed out, browsers don’t trust a CA just because it passed an audit—they rely on live signals like OCSP/CRLs.
Extending that model, AACF would likely need a similar layer: a machine-readable, continuously updated compliance status that platforms can check programmatically at the moment they accept a token. So instead of just asking “was this provider certified?”, the question becomes “is this provider compliant right now?”
That probably turns AACF into more than just a compliance framework—it becomes part of an operational trust protocol.
There are definitely tradeoffs (availability, privacy of status checks, centralization), but I agree this is a critical gap in the current proposal.
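As a rough sketch of that status layer, a relying platform could refuse tokens whenever the issuer's machine-readable status document is stale or non-compliant. The fetch, the field names (`status`, `signed_at`), and the freshness window below are all assumptions for illustration, not part of any real AACF spec:

```python
import time

# OCSP-style idea: before accepting an age token, consult a
# recently-signed compliance status for the issuing provider.
STATUS_MAX_AGE = 3600  # accept status documents up to 1 hour old

def fetch_status(provider_id):
    # Stand-in for an HTTPS fetch of the provider's signed status
    # document; a real check would also verify its signature.
    return {"provider": provider_id,
            "status": "compliant",
            "signed_at": time.time() - 120}

def provider_currently_trusted(provider_id):
    doc = fetch_status(provider_id)
    if time.time() - doc["signed_at"] > STATUS_MAX_AGE:
        return False  # stale status: fail closed rather than assume
    return doc["status"] == "compliant"

print(provider_currently_trusted("age-provider-01"))
```

Note the fail-closed choice on a stale document: that is exactly the availability tradeoff mentioned above.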
Really appreciate you raising this—this is exactly the kind of feedback that helps refine the model.
"Operational trust protocol" nails it, that's a different beast to build than a compliance framework.
We ran into exactly this building ArkForge Trust Layer. Static API keys have the same flaw as annual audits: they prove authorization at issuance, not at call time.
The question shifts from "was this agent authorized?" to "is this agent still within its certified scope right now?"
We ended up closer to the CT log model than OCSP: every certified action gets recorded on a transparency log (Rekor/Sigstore).
No real-time status oracle to go down, no privacy leak from per-token queries. The tradeoff is you get auditability, not revocation, which works for our use case but wouldn't for age verification where you need to actively block.
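For readers unfamiliar with the CT-log mechanics being described, here is a toy Merkle inclusion proof in the same spirit (a simplified stand-in for illustration, not Rekor's actual tree or proof format):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Toy transparency log: leaves are hashed records, the root commits
# to all of them, and an inclusion proof lets anyone check a record
# is in the log without trusting the log operator.
def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    proof, level, i = [], [h(x) for x in leaves], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i ^ 1  # sibling index at this level
        proof.append((level[sib], sib < i))  # (hash, sibling_is_left)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

records = [b"action-1", b"action-2", b"action-3", b"action-4"]
root = merkle_root(records)
proof = inclusion_proof(records, 2)
print(verify_inclusion(b"action-3", proof, root))  # True
```

Anyone holding the root can verify a record with just the proof path, which is why mirrors can audit the log without trusting its operator.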
Centralization is the one I still don't have a clean answer to. A trust registry that becomes critical infrastructure is also a single point of regulatory capture.
It sounds like CT logs solve the ‘prove it happened’ problem, but AACF needs ‘prevent it from happening.’ Do you see a hybrid model working, or does that reintroduce the centralization risk you’re concerned about?
Hybrid models do reintroduce centralization, but the nature of the risk shifts depending on which component you centralize.
CT logs centralize reads: anyone can mirror the log, verify inclusion proofs independently, and detect misbehavior without trusting the log operator. The Rekor transparency log works this way: if the primary goes down, mirrors keep the record auditable. That kind of centralization is survivable.
Revocation centralizes writes and availability: you need a live oracle to block a token, and if that oracle is unavailable, you have to choose between fail-open (tokens work even if revoked) or fail-closed (everything breaks). OCSP has been living with this tradeoff for years and most browsers moved toward soft-fail, which means revocation is weaker in practice than it looks on paper.
For age verification, the problem is worse because the stakes of fail-open are high: a revoked token for a suspended provider should actually block access. That forces you toward a live, authoritative revocation service, which is exactly the single point of regulatory capture you're worried about.
The approach that avoids most of this: short-lived tokens with no revocation. If a token is valid for 15 minutes, you don't need to revoke it, you just stop issuing. Let's Encrypt moved this direction with short-lived certificates precisely because revocation at scale doesn't work. The tradeoff is issuance infrastructure becomes critical instead of revocation infrastructure, but at least you can distribute issuance across multiple providers.
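A sketch of the "stop issuing instead of revoking" pattern, with multiple trusted issuers and a 15-minute TTL (keys and issuer names are made up; HMAC stands in for real asymmetric signatures):

```python
import hmac
import json
import time

# Short-lived tokens, no revocation list: a suspended issuer is simply
# dropped from the trusted set, and its outstanding tokens age out on
# their own within the TTL.
TOKEN_TTL = 15 * 60
ISSUER_KEYS = {"issuer-a": b"key-a", "issuer-b": b"key-b"}  # hypothetical

def issue(issuer_id, now=None):
    now = now if now is not None else time.time()
    body = json.dumps({"iss": issuer_id, "exp": now + TOKEN_TTL})
    sig = hmac.new(ISSUER_KEYS[issuer_id], body.encode(), "sha256").hexdigest()
    return body, sig

def accept(body, sig, now=None):
    now = now if now is not None else time.time()
    claims = json.loads(body)
    key = ISSUER_KEYS.get(claims["iss"])
    if key is None:
        return False  # issuer suspended: removed from trusted set
    expected = hmac.new(key, body.encode(), "sha256").hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return claims["exp"] > now  # expiry replaces revocation

body, sig = issue("issuer-a")
print(accept(body, sig))                              # True while fresh
print(accept(body, sig, now=time.time() + 16 * 60))   # False: aged out
```

Blocking a bad provider here means one write (removing its key), and the maximum exposure window is the TTL, not the latency of a revocation oracle.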
The centralization risk doesn't disappear, it relocates. The question is which chokepoint you're more comfortable defending.
This is a really helpful breakdown — I think the “chokepoint relocation” framing is exactly right.
The more I think about it, the more it seems like short-lived tokens are probably the most practical foundation for age verification. If tokens expire quickly, you reduce the need for revocation as a primary control and avoid introducing a hard dependency on a live status oracle.
That said, it feels like each model is solving a different layer of the problem:
Short-lived tokens → limit blast radius and avoid revocation at scale
CT logs → provide transparency and auditability
Revocation → still necessary for high-risk or immediate enforcement scenarios
So rather than choosing one, a layered approach seems more realistic:
short-lived tokens as the default, CT-style transparency for accountability, and a minimal revocation path for edge cases where you truly need to block in real time.
Age verification is an entirely different beast from HTTPS and won't be solved in the same way. The guts of it is that you have to (1) gather proof of somebody's identity, and (2) make sure that the person claiming that identity is the same person making the request. Neither of these is a simple task.
If only we could verify maturity rather than just age....