On March 8, 2026, seven simultaneous developments — from the battlefields of Iran to state legislatures across America, from India's Ministry of IT to Brussels, from federal courtrooms to Grammarly's AI feature — converged on the same structural truth: the world is racing to regulate AI-generated content, but every single approach still relies on the "Trust Us" model for the most critical question of all — what did the AI refuse to generate?
TL;DR
AI-generated fake war footage from the Iran conflict has reached hundreds of millions of views, and X's response relies on Community Notes — not cryptographic proof. Twenty-plus US states are simultaneously pushing AI provenance bills. India's IT Rules 2026 just created the world's first binding synthetic content mandate. The EU is finalizing its Article 50 transparency code. The TAKE IT DOWN Act enforcement deadline is 72 days away. AI-generated CSAM prosecutions are happening in 22 states. Grammarly got caught impersonating dead professors.
Every one of these developments demands proof of what AI systems did and didn't do. None of them provides a mechanism for verifiable refusal provenance — cryptographic proof that an AI system blocked harmful content.
This article:
- Fact-checks all seven developments against primary sources (with corrections)
- Maps the structural gap each reveals using CAP-SRP v1.1's four Completeness Invariants
- Provides working Python code for the new v1.1 event types: ACCOUNT_ACTION, LAW_ENFORCEMENT_REFERRAL, POLICY_VERSION, GEN_WARN, GEN_ESCALATE, GEN_QUARANTINE
- Shows how the expanded verification algorithm catches the exact failures these real-world cases expose
GitHub: veritaschain/cap-spec · veritaschain/cap-srp · License: CC BY 4.0
Table of Contents
- The Week That Proved the Gap
- Event 1: AI War Fakes Hit Hundreds of Millions of Views
- Event 2: Twenty-Plus States Push AI Provenance Laws Simultaneously
- Event 3: India Enforces the World's First Binding Synthetic Content Mandate
- Event 4: EU Finalizes Article 50 Transparency Code
- Event 5: TAKE IT DOWN Act — 72 Days to Compliance
- Event 6: AI-Generated CSAM Prosecutions in 22 States
- Event 7: Grammarly's Dead Professor Problem
- The Pattern: Seven Stories, One Structural Blind Spot
- CAP-SRP v1.1: What Changed and Why
- Building the v1.1 Event Types
- The Four Completeness Invariants
- Regulatory Mapping: Where Each Law Falls Short
- What This Means for Developers Building AI Systems
- Transparency Notes
The Week That Proved the Gap
Here's the uncomfortable truth about the first week of March 2026: every major AI governance development reinforced the same structural failure pattern.
Platforms announced new moderation policies — but backed them with Community Notes, not cryptographic proof. State legislatures mandated provenance labels — but only for content that was generated. India created the world's most prescriptive synthetic content law — but left refusal verification entirely to platform self-reporting. The EU drafted its transparency code — but explicitly avoided specifying which standard to use. Federal law demands takedowns — but can't verify that harmful generation was blocked in the first place.
The "Trust Us" model isn't failing at the margins. It's failing at the center, in real time, with hundreds of millions of people watching AI-generated war footage that no platform can cryptographically prove it tried to prevent.
Let's fact-check each development, then build the verification layer that's missing.
Event 1: AI War Fakes Hit Hundreds of Millions of Views
What happened
The US-Israel joint air strikes against Iran, codenamed "Operation Epic Fury," began on February 28, 2026 (not March 1, as some reports state). Within days, AI-generated fake combat footage, fabricated satellite imagery, and synthetic news clips flooded X, Instagram, and Facebook. BBC Verify identified a fabricated "satellite image" — purportedly showing the US Fifth Fleet Naval Base in Bahrain — distributed by Iran's Tehran Times, and confirmed it as AI-generated using Google's SynthID detection tool.
X responded on March 3 with a new policy: 90-day Creator Revenue Sharing suspension for users posting unlabeled AI-generated conflict content. X's head of product, Nikita Bier, described the enforcement mechanism on March 4, revealing that a single Pakistani user had operated 31 accounts (hacked accounts renamed to "Iran War Monitor" variants beginning February 27) to distribute AI-generated war footage at scale.
Fact-check verdict: ⚠️ Partially verified — two corrections needed
Correction 1: Conflict start date. Multiple authoritative sources — the UK House of Commons Library, CNN, Al Jazeera, UN News — confirm the strikes began February 28, 2026, not March 1. This is a two-day discrepancy that matters for timeline accuracy.
Correction 2: View count. BBC Verify's own reporting describes "hundreds of millions" of collective views across the most viral fakes. The three most prominent fabrications garnered over 100 million views combined. Claims of "billions" overstate the documented evidence.
All other claims — BBC Verify's SynthID identification, X's March 3 policy, the 31-account Pakistani operation — are fully confirmed against primary sources.
The verification gap
Here's what X's response looks like in practice:
Detection method: Community Notes (user mutual reporting)
Cryptographic proof of AI generation: ❌ None
Cryptographic proof of generation refusal: ❌ None
External verification mechanism: ❌ None
Audit trail for enforcement decisions: ❌ None
X's policy creates three categories of response: monetization suspension, Community Notes labeling, and account removal. But the detection mechanism is purely social — other users must flag content as AI-generated. There's no cryptographic provenance, no hash-chain audit trail, no way for an external auditor to verify that X's AI systems (including Grok) actually refused to generate similar content when requested.
This is structurally identical to the Grok NCII crisis. The platform marks its own homework. The only difference is the scale: conflict disinformation reaches far more people than non-consensual intimate imagery, and the consequences of undetected AI fakes during an active military conflict are categorically different.
CAP-SRP fit: This is the core use case. If Completeness Invariant logging existed on the platforms generating this content, every generation attempt would have a recorded outcome — GEN, GEN_DENY, GEN_WARN, or GEN_ERROR. An external auditor could verify, cryptographically, whether requests for conflict footage were being blocked, and at what rate. Without it, we have only Community Notes.
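Concretely, such an audit reduces to counting outcomes over a verified log. A minimal sketch, assuming each event carries an "EventType" field as in the v1.1 examples later in this article (`refusal_rate` is a hypothetical helper, not part of the spec):

```python
from collections import Counter

def refusal_rate(events: list) -> dict:
    """Per-outcome counts and the block rate an external auditor
    could derive from a verified CAP-SRP event log. Event shapes
    are illustrative, matching the v1.1 examples below."""
    counts = Counter(e["EventType"] for e in events)
    attempts = counts["GEN_ATTEMPT"]
    blocked = counts["GEN_DENY"]
    return {
        "attempts": attempts,
        "blocked": blocked,
        "block_rate": blocked / attempts if attempts else 0.0,
    }

log = [
    {"EventType": "GEN_ATTEMPT"}, {"EventType": "GEN"},
    {"EventType": "GEN_ATTEMPT"}, {"EventType": "GEN_DENY"},
    {"EventType": "GEN_ATTEMPT"}, {"EventType": "GEN_WARN"},
]
print(refusal_rate(log))  # 3 attempts, 1 blocked
```

Because every attempt has exactly one recorded outcome, the block rate is a verifiable statistic rather than a self-reported claim.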
Event 2: Twenty-Plus States Push AI Provenance Laws Simultaneously
What happened
Transparency Coalition AI (TCAI), a 501(c)(3) nonprofit that tracks state AI legislation, published its March 6, 2026 weekly update showing 20+ states simultaneously advancing chatbot safety and AI provenance bills. This is consistent with TCAI's February 27 update, which tracked bills in 30+ states — making "20+" a conservative figure.
Four bills stand out for their direct relevance to content provenance:
Oregon SB 1546 — Passed the House 52-0 on March 5 and was sent to the governor. Multiple sources (Oregon Legislature, KOIN 6 News, Oregon House Democrats, TCAI) confirm this as the first major AI chatbot safety bill to pass in 2026. It includes AI disclosure requirements, suicide prevention protocols, and annual reporting obligations.
Arizona SB 1786 — Mandates provenance data embedding in AI-generated media, including watermark and metadata tamper-resistance requirements. The bill exists and addresses provenance, but its claimed Senate passage date of March 3 could not be independently verified. As of TCAI's February 27 update, it was listed as "new this week" (newly introduced), making full Senate passage within four days unusually fast.
Illinois SB 3263 — The "AI Provenance Data Act," confirmed via the Illinois General Assembly's official website. Requires provenance labels, free reader tools, API access, and licensee maintenance of provenance capabilities.
New York A 6540 / S 6954 — The "Stop Deepfakes Act," addressing synthetic content provenance. Exists and is correctly described, though the "third reading March 4" milestone may apply to the companion bill S 6955 (AI Training Data Transparency Act) rather than S 6954 — a likely bill-number mix-up between related bills from the same sponsors.
Fact-check verdict: ⚠️ Mostly verified — two details unconfirmed
Oregon SB 1546 (52-0, March 5): ✅ Fully confirmed
Arizona SB 1786 passage date: ⚠️ Unverified — bill exists, passage date unconfirmed
Illinois SB 3263: ✅ Fully confirmed
New York A 6540/S 6954 third reading: ⚠️ Likely bill-number confusion with S 6955
The verification gap
Every one of these bills shares the same structural limitation: they mandate provenance for content that was generated — labels, metadata, watermarks, disclosure. None of them address the inverse question: can the platform prove that harmful content was refused?
Oregon SB 1546 requires annual reporting on AI safety measures — but the reporting is self-attested. Arizona SB 1786 mandates provenance data embedding — but only for content that exits the system. Illinois SB 3263 requires provenance labels and API access — but only for produced content.
The refusal side of the ledger is entirely absent.
What these bills require:
✅ "This content was AI-generated" (label)
✅ "This content has provenance metadata" (C2PA-style)
✅ "This platform has safety measures" (annual report)
What these bills cannot verify:
❌ "This request was blocked" (refusal record)
❌ "This account was suspended for violations" (enforcement record)
❌ "The safety policy in effect at decision time was X" (policy version)
CAP-SRP fit: These bills occupy the C2PA layer — proving what was generated. CAP-SRP's Safe Refusal Provenance (SRP) layer is the complement: proving what was refused. Together, they close the audit loop. Separately, provenance labels without refusal logs allow platforms to label what they produce while offering no verification of what they blocked.
Event 3: India Enforces the World's First Binding Synthetic Content Mandate
What happened
India's IT Rules Amendment 2026 (Gazette Notification G.S.R. 120(E)), signed by Joint Secretary Ajit Kumar, was notified on February 10 and took effect February 20, 2026. It is the first national regulation to create a binding legal definition of "synthetically generated information" (SGI) — defined in the new Rule 2(1)(wa) — and mandate platform compliance with specific provenance and takedown requirements.
Key provisions, verified against the Gazette, Mondaq, LiveLaw, Bhatt & Joshi Associates, and the Internet Freedom Foundation:
- Permanent provenance metadata must accompany all AI-generated content
- User-visible labels must be applied by default
- Anti-tampering measures for provenance metadata are required
- 3-hour deepfake takedown deadline (reduced from 36 hours)
- 2-hour deadline for non-consensual intimate imagery (reduced from 24 hours)
- Safe harbor immunity loss under Section 79 for non-compliance
Fact-check verdict: ⚠️ One claim unsupported
All provisions above are fully verified across multiple legal analyses. However, one claim in the daily watch report — that C-DAC's detection tool has "up to 89% accuracy" — cannot be verified. No public source cites this figure. C-DAC (Centre for Development of Advanced Computing) has deepfake detection programs, but neither C-DAC nor the similar C-DOT (Centre for Development of Telematics) has published this accuracy figure publicly. The Internet Freedom Foundation has actually criticized the rules for overestimating detection tool reliability. This figure should be treated as unverified.
The verification gap
India's IT Rules are the world's most prescriptive synthetic content mandate. They go further than any other jurisdiction in requiring permanent provenance, visible labeling, and strict takedown timelines. But they share the same blind spot:
India IT Rules 2026 — What's covered:
✅ Definition of synthetic content (SGI)
✅ Provenance metadata (permanent)
✅ Visible labels (default-on)
✅ Anti-tampering measures
✅ Takedown deadlines (3h / 2h)
✅ Safe harbor consequences
What's not covered:
❌ Refusal logging
❌ Completeness verification
❌ External audit of moderation decisions
❌ Cryptographic proof of enforcement
❌ Policy version provenance
A platform operating under these rules can label every piece of AI content it produces, take down flagged content within 3 hours, and still have no mechanism for proving it refused to generate harmful content in the first place. The "Trust Us" model persists inside the strongest regulatory framework on Earth.
CAP-SRP fit: India's SGI framework is the natural lower layer. CAP-SRP's Completeness Invariant can function as an upper layer — ensuring that every generation attempt has a recorded outcome, even when the outcome is refusal. The combination would create the world's first end-to-end verifiable content governance stack.
Event 4: EU Finalizes Article 50 Transparency Code
What happened
The EU AI Act's Article 50 transparency obligations become enforceable on August 2, 2026 — two years after the Act's entry into force on August 1, 2024. The European Commission's Code of Practice on AI-Generated Content Transparency, first drafted December 17, 2025, is progressing toward a June 2026 final version.
Key provisions from the first draft, verified via Jones Day, Cooley, Shibolet & Co., Pearl Cohen, and Bird & Bird:
- Multi-layer marking approach: metadata + invisible watermarks (+ fingerprinting/logging as optional elements)
- Technology-neutral: no specific standard (C2PA, SynthID, etc.) is endorsed, though C2PA is mentioned as an example
- Default-on visible labels required for providers
- Detection tools (including API access) must be offered
- Comprehensive compliance documentation (including test results) is required
Important nuance: The second draft (approximately March 2026) revised the approach to a "two-layered marking" system focused on secured metadata and watermarking, with fingerprinting/logging made optional rather than required. The Jones Day and Bird & Bird analyses flag possible delays from the EU AI Digital Omnibus initiative.
Fact-check verdict: ✅ Fully verified (with nuance noted)
All claims confirmed across multiple law firm analyses. The June 2026 timeline for final version is consistent across sources, though potential delays are flagged.
The verification gap
The EU's approach is the most architecturally sophisticated of any jurisdiction — the multi-layer model explicitly includes "logging" as a component. This is closer to CAP-SRP's territory than any other regulation. But the critical word is optional. Logging is not required. And even where logging exists, the Code of Practice doesn't specify:
- What must be logged (generation? refusal? both?)
- What integrity guarantees the logs must have (hash chain? Merkle tree? signatures?)
- How external parties can verify the logs
- What happens when logs are absent or incomplete
CAP-SRP fit: Article 50's "logging or fingerprinting" element is the natural integration point. Article 12's high-risk AI logging requirements create additional alignment. The IETF draft-kamimura-scitt-refusal-events specification (currently at -02) provides the encoding-agnostic application profile that could satisfy both Article 50's logging element and Article 12's record-keeping requirements. The gap is enforcement specificity — the Code says "log," but doesn't say what that means. CAP-SRP defines exactly what it means.
Event 5: TAKE IT DOWN Act — 72 Days to Compliance
What happened
The TAKE IT DOWN Act (signed into law May 19, 2025) creates a one-year compliance window ending May 19, 2026 — 72 days from today. Covered platforms must establish notice-and-removal processes with a 48-hour removal window for non-consensual intimate images upon receiving a valid request. The Act has two distinctive provisions: it expands FTC enforcement authority to nonprofit organizations (ordinarily outside FTC Act jurisdiction), and it covers both real and AI-generated intimate imagery.
Fact-check verdict: ✅ Fully verified
Confirmed against Congress.gov, Wiley Law, Skadden, Latham & Watkins, and ZwillGen analyses.
The verification gap
The TAKE IT DOWN Act creates a clear legal obligation to remove content. But it creates no obligation — and no mechanism — to prove that content was never generated in the first place.
Consider the lifecycle:
1. User requests AI system to generate NCII
2. AI system should refuse → but where's the proof?
3. If NCII appears online, platform must remove within 48h → verifiable
4. Platform must also remove "reasonably similar" copies → verifiable
5. But did the AI system refuse the original request? → NOT verifiable
Step 2 is the gap. The Act governs what happens after harmful content exists. It cannot reach backward to verify that generation was blocked. This is exactly the gap the Grok NCII crisis exposed — 4.4 million images generated in 9 days, 41% sexualized, with no mechanism to verify what Grok's safety systems did or didn't do during that period.
CAP-SRP fit: SRP's Verifiable Refusal Record is the technical answer to step 2. A GEN_DENY event with RiskCategory: NCII_RISK creates a cryptographically signed, hash-chained, externally anchored record that the request was blocked. A TakedownRelevance flag (new in v1.1) cross-references the refusal to TAKE IT DOWN Act applicability. Without this, the Act's enforcement mechanism starts too late.
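As a sketch of what such a record could look like: the RiskCategory value and the TakedownRelevance flag come from the v1.1 spec as described here, but the remaining field names and the `make_gen_deny` helper are illustrative assumptions modeled on the event shapes shown later in this article.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone
from typing import Optional

def make_gen_deny(chain_id: str, prev_hash: Optional[str],
                  attempt_id: str, prompt: str) -> dict:
    """Illustrative GEN_DENY record for a blocked NCII request.
    Only the prompt's hash is stored, never the prompt itself."""
    event = {
        "EventID": str(uuid.uuid4()),
        "ChainID": chain_id,
        "PrevHash": prev_hash,
        "Timestamp": datetime.now(timezone.utc).isoformat(),
        "EventType": "GEN_DENY",
        "AttemptID": attempt_id,
        "PromptHash": "sha256:" + hashlib.sha256(prompt.encode()).hexdigest(),
        "RiskCategory": "NCII_RISK",
        # v1.1: cross-references TAKE IT DOWN Act applicability
        "TakedownRelevance": True,
    }
    # Seal the record so later tampering is detectable
    event["EventHash"] = "sha256:" + hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    return event

deny = make_gen_deny("chain-001", None, "att-42", "(blocked request text)")
```

Signed, chained, and externally anchored, a record like this turns "we blocked it" into something an auditor can check before step 3 of the lifecycle ever occurs.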
Event 6: AI-Generated CSAM Prosecutions in 22 States
What happened
NBC News published a February 28 investigation identifying 36 criminal cases in 22 states related to AI-generated CSAM; every completed case has resulted in a conviction, though most cases remain active. State attorneys general are escalating enforcement, as confirmed by Morgan Lewis's January 2026 analysis.
Fact-check verdict: ⚠️ Key statistic requires critical context
The NBC investigation and prosecution data are confirmed. However, one widely cited statistic requires major context:
The claim that NCMEC received 485,000 AI-CSAM reports in H1 2025 (a 624% year-over-year increase) is technically based on NCMEC CyberTipline data, but it is deeply misleading. Stanford researcher Riana Pfefferkorn and Bloomberg reporting revealed that approximately 380,000 of those reports (~80%) came from Amazon and involved zero AI-generated CSAM — they were hash-matches to known traditional CSAM found during Amazon's AI training data scanning. NCMEC's own CyberTipline executive confirmed that excluding Amazon, AI-related reports came in "really, really small numbers."
The 624% figure is also methodologically suspect: it compares H1 2025 (485,000) against full-year 2024 (67,000), mixing time periods.
This doesn't diminish the seriousness of AI-generated CSAM — 36 prosecutions in 22 states is a real enforcement pattern. But using inflated statistics undermines the credibility of the argument. Accuracy matters, especially on this topic.
The verification gap
The Grok incident remains the clearest illustration of this gap. Over 9 days in early 2025, Grok generated an estimated 4.4 million images, with approximately 41% classified as sexualized (per the New York Times's conservative estimate; the Center for Countering Digital Hate's broader analysis found 65%). During this period, there was no external mechanism to verify:
- How many generation requests were blocked
- What safety policies were in effect
- Whether safety thresholds were changed during the incident
- Whether accounts flagged for violations were referred to law enforcement
Every one of these questions maps to a specific CAP-SRP v1.1 event type. And the absence of these records is exactly what makes the "Trust Us" model untenable for CSAM enforcement.
Event 7: Grammarly's Dead Professor Problem
What happened
Grammarly's AI "Expert Review" feature (launched August 2025 as part of its AI agents suite) was discovered to be using real identities — journalists, academics, and deceased persons — without permission for its AI review personas.
The controversy broke on March 2 when medieval historian Verena Krebs flagged that deceased Cambridge professor David Abulafia (died January 2026) was listed as an available "expert reviewer." Carl Sagan (died 1996) was also available. The Verge's March 6 investigation revealed its own editorial staff — including editor-in-chief Nilay Patel and editor-at-large David Pierce — were being impersonated, along with journalists from Bloomberg, the New York Times, The Atlantic, and Wired.
For context: Grammarly rebranded as Superhuman on October 29, 2025 following a series of acquisitions. Academics described the practice as "digital necromancy."
Fact-check verdict: ✅ Fully verified
Confirmed via The Verge (March 6), Wired (March 5), Futurism, Chronicle of Higher Education, Boing Boing, and A.V. Club reporting.
The verification gap
This item extends the provenance question beyond media generation into AI agent identity governance. The core issue isn't just that Grammarly used real identities — it's that there is no audit trail showing:
- Which identities were used as AI personas
- When they were added or removed
- Whether permission was obtained or refused
- What the decision process was for including deceased persons
This is structurally analogous to the content generation audit problem. The system made decisions about what to generate (persona assignments) with no externally verifiable record of those decisions.
CAP-SRP fit: While identity governance extends beyond CAP-SRP's core scope of content generation, the underlying principle — verifiable audit trails for AI system decisions — is the same. A POLICY_VERSION event documenting the persona inclusion policy, combined with GEN_ATTEMPT/GEN events tracking which personas were invoked and when, would provide exactly the audit trail that's missing.
The Pattern: Seven Stories, One Structural Blind Spot
Let's map all seven events against the same question set:
| Event | Proves content created? | Proves refusal happened? | External audit possible? | Policy version tracked? |
|---|---|---|---|---|
| 1. Iran AI fakes | ❌* | ❌ | ❌ | ❌ |
| 2. State bills | ✅ (req) | ❌ | ❌ | ❌ |
| 3. India IT Rules | ✅ (req) | ❌ | ❌ | ❌ |
| 4. EU Art. 50 | ✅ (req) | ❌ | Partial** | ❌ |
| 5. TAKE IT DOWN | ✅ (del) | ❌ | ❌ | ❌ |
| 6. AI CSAM | ❌ | ❌ | ❌ | ❌ |
| 7. Grammarly | ❌ | ❌ | ❌ | ❌ |

* X relies on Community Notes, not cryptographic proof
** EU Code includes "logging" as optional element
(req) = requirement exists but verification is self-attested
(del) = proves deletion, not prevention
Seven different events. Four continents. Billions of dollars in regulatory compliance costs. And the same column — "Proves refusal happened?" — is empty in every row.
This is the gap CAP-SRP fills.
CAP-SRP v1.1: What Changed and Why
CAP-SRP v1.1.0 (released March 5, 2026) was directly shaped by the real-world failures documented above. Here's what changed:
New Event Types
| Event Type | Motivated By | What It Logs |
|---|---|---|
| ACCOUNT_ACTION | Tumbler Ridge shooting (Feb 2026) — account banned but no LE referral | Account bans, suspensions, rate limits, reinstatements |
| LAW_ENFORCEMENT_REFERRAL | Same — was the LE notification threshold assessed? | Threshold assessments and referral decisions |
| POLICY_VERSION | Grok NCII crisis — were safety thresholds lowered? | Tamper-evident versioning of safety policies |
Formalized Intermediate States
| Event Type | Problem It Solves |
|---|---|
| GEN_WARN | Content allowed with safety warning — was this logged? |
| GEN_ESCALATE | Sent for human review — was the outcome recorded? |
| GEN_QUARANTINE | Content held before delivery — was it ever released or denied? |
New Completeness Invariants
v1.0 had one invariant. v1.1 has four:
Invariant 1 (Primary):
∑ GEN_ATTEMPT = ∑ GEN + ∑ GEN_DENY + ∑ GEN_ERROR
(where ∑ GEN includes GEN_WARN)
Invariant 2 (Escalation Resolution):
∑ GEN_ESCALATE = ∑ ESCALATION_RESOLVED
Every escalation must resolve. Unresolved > 72h = violation.
Invariant 3 (Quarantine Resolution):
∑ GEN_QUARANTINE = ∑ QUARANTINE_RELEASED + ∑ QUARANTINE_DENIED
No permanent unresolved quarantine states.
Invariant 4 (Policy Anchoring Precedence):
anchor_timestamp(POLICY_VERSION) ≤ EffectiveFrom
Policies must be externally anchored BEFORE they take effect.
Prevents retroactive policy creation.
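Under the simplifying assumptions that resolution outcomes appear as their own event types (ESCALATION_RESOLVED, QUARANTINE_RELEASED, QUARANTINE_DENIED) and that anchor timestamps are comparable RFC 3339 strings, all four invariants can be checked in a few lines. `check_invariants` is an illustrative helper, not something the spec mandates:

```python
from collections import Counter

def check_invariants(events: list) -> dict:
    """Evaluate the four v1.1 Completeness Invariants over an
    event log. Field names follow the event shapes used in this
    article; anchoring is simplified to string comparison."""
    c = Counter(e["EventType"] for e in events)
    # Invariant 1: every attempt has exactly one outcome
    # (GEN_WARN counts as a GEN outcome)
    inv1 = c["GEN_ATTEMPT"] == (c["GEN"] + c["GEN_WARN"]
                                + c["GEN_DENY"] + c["GEN_ERROR"])
    # Invariant 2: every escalation resolves
    inv2 = c["GEN_ESCALATE"] == c["ESCALATION_RESOLVED"]
    # Invariant 3: every quarantine is released or denied
    inv3 = c["GEN_QUARANTINE"] == (c["QUARANTINE_RELEASED"]
                                   + c["QUARANTINE_DENIED"])
    # Invariant 4: policies anchored before they take effect
    inv4 = all(e.get("AnchorTimestamp", "") <= e.get("EffectiveFrom", "")
               for e in events if e["EventType"] == "POLICY_VERSION")
    return {"inv1": inv1, "inv2": inv2, "inv3": inv3, "inv4": inv4}

log = [
    {"EventType": "GEN_ATTEMPT"}, {"EventType": "GEN_DENY"},
    {"EventType": "GEN_ATTEMPT"}, {"EventType": "GEN_WARN"},
    {"EventType": "POLICY_VERSION",
     "AnchorTimestamp": "2026-01-01T00:00:00Z",
     "EffectiveFrom": "2026-02-01T00:00:00Z"},
]
print(check_invariants(log))  # all four hold for this log
```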
New Risk Category
VIOLENCE_PLANNING — added to address cases where content doesn't depict violence directly but facilitates planning of violent acts. This was motivated by the Tumbler Ridge pattern, where the AI interaction was preparatory, not generative of violent imagery.
Building the v1.1 Event Types
Let's implement the new event types. All code uses the same cryptographic primitives as v1.0 (Ed25519 signatures, SHA-256 hash chains) and builds on the existing CAPEvent base class.
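Before the event types themselves, here is the hash-chain half of those primitives as a minimal sketch. The Ed25519 signing of the resulting EventHash is elided, and `seal` is an illustrative helper rather than the spec's CAPEvent API:

```python
import hashlib
import json
from typing import Optional

def seal(event: dict, prev_hash: Optional[str]) -> dict:
    """Link an event into a SHA-256 hash chain: the event commits
    to its predecessor via PrevHash, and EventHash covers the
    canonical JSON of everything else. (Signing is elided.)"""
    body = dict(event, PrevHash=prev_hash)
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    body["EventHash"] = "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()
    return body

e1 = seal({"EventType": "GEN_ATTEMPT", "EventID": "a1"}, None)
e2 = seal({"EventType": "GEN_DENY", "EventID": "a2"}, e1["EventHash"])
assert e2["PrevHash"] == e1["EventHash"]  # the chain link verifies
```

Because each EventHash covers the predecessor's hash, deleting or rewriting any earlier event breaks every link after it — which is what makes the Completeness Invariants auditable rather than merely asserted.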
The Base: v1.1 Event Enumerations
from enum import Enum
from dataclasses import dataclass, field
from typing import Optional, List
from datetime import datetime, timezone
import hashlib
import json
import uuid
class EventType(str, Enum):
    """All CAP-SRP v1.1 event types."""
    # Core generation events (v1.0)
    GEN_ATTEMPT = "GEN_ATTEMPT"
    GEN = "GEN"
    GEN_DENY = "GEN_DENY"
    GEN_ERROR = "GEN_ERROR"
    # Intermediate-state events (v1.1 formalized)
    GEN_WARN = "GEN_WARN"
    GEN_ESCALATE = "GEN_ESCALATE"
    GEN_QUARANTINE = "GEN_QUARANTINE"
    # Account and policy events (v1.1 new)
    ACCOUNT_ACTION = "ACCOUNT_ACTION"
    LAW_ENFORCEMENT_REFERRAL = "LAW_ENFORCEMENT_REFERRAL"
    POLICY_VERSION = "POLICY_VERSION"
class RiskCategory(str, Enum):
    """Risk categories including v1.1 additions."""
    CSAM_RISK = "CSAM_RISK"
    NCII_RISK = "NCII_RISK"
    MINOR_SEXUALIZATION = "MINOR_SEXUALIZATION"
    REAL_PERSON_DEEPFAKE = "REAL_PERSON_DEEPFAKE"
    VIOLENCE_EXTREME = "VIOLENCE_EXTREME"
    VIOLENCE_PLANNING = "VIOLENCE_PLANNING"  # v1.1 NEW
    HATE_CONTENT = "HATE_CONTENT"
    TERRORIST_CONTENT = "TERRORIST_CONTENT"
    SELF_HARM_PROMOTION = "SELF_HARM_PROMOTION"
    COPYRIGHT_VIOLATION = "COPYRIGHT_VIOLATION"
    COPYRIGHT_STYLE_MIMICRY = "COPYRIGHT_STYLE_MIMICRY"
    OTHER = "OTHER"
class ActionType(str, Enum):
    """Account action types (v1.1)."""
    SUSPEND = "SUSPEND"
    BAN = "BAN"
    RATE_LIMIT = "RATE_LIMIT"
    REINSTATE = "REINSTATE"
    FLAG_FOR_REVIEW = "FLAG_FOR_REVIEW"

class RiskScoreBand(str, Enum):
    LOW = "LOW"
    MEDIUM = "MEDIUM"
    HIGH = "HIGH"
    CRITICAL = "CRITICAL"

class DecisionMechanism(str, Enum):
    AUTOMATED = "AUTOMATED"
    HUMAN_INITIATED = "HUMAN_INITIATED"
    HUMAN_CONFIRMED_AUTOMATED = "HUMAN_CONFIRMED_AUTOMATED"

class EscalationReason(str, Enum):
    CLASSIFIER_CONFIDENCE_LOW = "CLASSIFIER_CONFIDENCE_LOW"
    JURISDICTIONAL_AMBIGUITY = "JURISDICTIONAL_AMBIGUITY"
    NOVEL_CONTENT_TYPE = "NOVEL_CONTENT_TYPE"
    LEGAL_REVIEW_REQUIRED = "LEGAL_REVIEW_REQUIRED"
    OTHER = "OTHER"

class ReviewerType(str, Enum):
    HUMAN_TRUST_AND_SAFETY = "HUMAN_TRUST_AND_SAFETY"
    LEGAL = "LEGAL"
    EXTERNAL_AUDITOR = "EXTERNAL_AUDITOR"

class MultiModalType(str, Enum):
    TEXT = "TEXT"
    IMAGE = "IMAGE"
    VIDEO = "VIDEO"
    AUDIO = "AUDIO"
    MULTIMODAL = "MULTIMODAL"
ACCOUNT_ACTION: Logging What Happened to the Account
This is the event that would have changed the Tumbler Ridge case. When a platform bans an account, the question isn't just "was the account banned?" — it's "was law enforcement notified, and can you prove the threshold assessment was made?"
@dataclass
class LawEnforcementAssessment:
    """Sub-object for LE threshold evaluation.

    Captures the decision of whether flagged activity
    meets the threshold for law enforcement notification.
    """
    threshold_met: bool
    threshold_definition_ref: str  # EventID of POLICY_VERSION
    assessment_timestamp: str      # RFC 3339
    assessor_type: str             # AUTOMATED | HUMAN_TRUST_AND_SAFETY | LEGAL_COUNSEL

    def to_dict(self) -> dict:
        return {
            "ThresholdMet": self.threshold_met,
            "ThresholdDefinitionRef": self.threshold_definition_ref,
            "AssessmentTimestamp": self.assessment_timestamp,
            "AssessorType": self.assessor_type,
        }
@dataclass
class AccountActionEvent:
    """CAP-SRP v1.1 ACCOUNT_ACTION event.

    Records account-level enforcement decisions with
    cryptographic proof of law enforcement assessment.
    """
    event_id: str
    chain_id: str
    prev_hash: Optional[str]
    timestamp: str
    event_type: str = "ACCOUNT_ACTION"
    hash_algo: str = "SHA256"
    sign_algo: str = "ED25519"
    # v1.1 fields
    account_hash: str = ""  # SHA-256(HMAC(account_id, per_user_key))
    action_type: str = ""   # SUSPEND | BAN | RATE_LIMIT | REINSTATE | FLAG_FOR_REVIEW
    triggering_event_refs: List[str] = field(default_factory=list)
    policy_version_ref: str = ""  # EventID of governing POLICY_VERSION
    risk_score_band: str = ""     # LOW | MEDIUM | HIGH | CRITICAL
    decision_mechanism: str = ""  # AUTOMATED | HUMAN_INITIATED | HUMAN_CONFIRMED_AUTOMATED
    le_assessment: Optional[LawEnforcementAssessment] = None
    event_hash: str = ""
    signature: str = ""

    @classmethod
    def create(cls, chain_id: str, prev_hash: Optional[str],
               account_id: str, hmac_key: bytes,
               action_type: ActionType,
               triggering_refs: List[str],
               policy_version_ref: str,
               risk_band: RiskScoreBand,
               decision_mechanism: DecisionMechanism,
               le_assessment: Optional[LawEnforcementAssessment] = None
               ) -> "AccountActionEvent":
        """Create ACCOUNT_ACTION with privacy-preserving account hash."""
        import hmac as hmac_mod
        # Privacy: HMAC prevents cross-provider correlation
        account_hash = "sha256:" + hmac_mod.new(
            hmac_key, account_id.encode(), hashlib.sha256
        ).hexdigest()
        return cls(
            event_id=str(uuid.uuid7()) if hasattr(uuid, 'uuid7') else str(uuid.uuid4()),
            chain_id=chain_id,
            prev_hash=prev_hash,
            timestamp=datetime.now(timezone.utc).isoformat(),
            account_hash=account_hash,
            action_type=action_type.value,
            triggering_event_refs=triggering_refs,
            policy_version_ref=policy_version_ref,
            risk_score_band=risk_band.value,
            decision_mechanism=decision_mechanism.value,
            le_assessment=le_assessment,
        )

    def to_dict(self) -> dict:
        d = {
            "EventID": self.event_id,
            "ChainID": self.chain_id,
            "PrevHash": self.prev_hash,
            "Timestamp": self.timestamp,
            "EventType": self.event_type,
            "HashAlgo": self.hash_algo,
            "SignAlgo": self.sign_algo,
            "AccountHash": self.account_hash,
            "ActionType": self.action_type,
            "TriggeringEventRefs": self.triggering_event_refs,
            "PolicyVersionRef": self.policy_version_ref,
            "RiskScoreBand": self.risk_score_band,
            "DecisionMechanism": self.decision_mechanism,
        }
        if self.le_assessment:
            d["LawEnforcementAssessment"] = self.le_assessment.to_dict()
        d["EventHash"] = self.event_hash
        d["Signature"] = self.signature
        return d
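A quick self-contained demonstration of the design choice behind AccountHash: a keyed HMAC, rather than a bare SHA-256, means two providers hashing the same account ID produce unlinkable values, while each provider's own records stay consistent. The `account_hash` helper below mirrors the logic inside AccountActionEvent.create:

```python
import hashlib
import hmac

def account_hash(account_id: str, per_user_key: bytes) -> str:
    """Privacy-preserving account identifier: keyed HMAC-SHA256
    prevents cross-provider correlation of the same account."""
    return "sha256:" + hmac.new(per_user_key, account_id.encode(),
                                hashlib.sha256).hexdigest()

key_a = b"provider-A-secret"
key_b = b"provider-B-secret"
h1 = account_hash("user-12345", key_a)
h2 = account_hash("user-12345", key_b)
assert h1 != h2  # same account, uncorrelatable across providers
assert h1 == account_hash("user-12345", key_a)  # stable within one provider
```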
POLICY_VERSION: Tamper-Evident Policy Management
This is Invariant 4's enforcement mechanism. If a platform changes its safety thresholds — as was suspected during the Grok crisis — the policy change must be externally anchored before it takes effect. Retroactive policy creation is a compliance violation.
@dataclass
class PolicyVersionEvent:
"""CAP-SRP v1.1 POLICY_VERSION event.
Creates tamper-evident record of safety policy changes.
External anchor must predate EffectiveFrom (Invariant 4).
"""
event_id: str
chain_id: str
prev_hash: Optional[str]
timestamp: str
event_type: str = "POLICY_VERSION"
hash_algo: str = "SHA256"
sign_algo: str = "ED25519"
policy_id: str = ""
policy_hash: str = "" # SHA-256 of full policy document
effective_from: str = "" # RFC 3339 — when policy takes effect
supersedes: Optional[str] = None # EventID of previous POLICY_VERSION
change_summary_hash: str = "" # SHA-256 of change description
approval_chain: List[str] = field(default_factory=list)
external_anchor_ref: Optional[str] = None # RFC 3161 timestamp token
event_hash: str = ""
signature: str = ""
@classmethod
def create(cls, chain_id: str, prev_hash: Optional[str],
policy_id: str, policy_document: str,
effective_from: str, supersedes: Optional[str],
change_summary: str,
approval_chain: List[str]) -> "PolicyVersionEvent":
"""Create a new POLICY_VERSION event."""
return cls(
event_id=str(uuid.uuid4()),
chain_id=chain_id,
prev_hash=prev_hash,
timestamp=datetime.now(timezone.utc).isoformat(),
policy_id=policy_id,
policy_hash="sha256:" + hashlib.sha256(
policy_document.encode()
).hexdigest(),
effective_from=effective_from,
supersedes=supersedes,
change_summary_hash="sha256:" + hashlib.sha256(
change_summary.encode()
).hexdigest(),
approval_chain=approval_chain,
)
def to_dict(self) -> dict:
return {
"EventID": self.event_id,
"ChainID": self.chain_id,
"PrevHash": self.prev_hash,
"Timestamp": self.timestamp,
"EventType": self.event_type,
"HashAlgo": self.hash_algo,
"SignAlgo": self.sign_algo,
"PolicyID": self.policy_id,
"PolicyHash": self.policy_hash,
"EffectiveFrom": self.effective_from,
"Supersedes": self.supersedes,
"ChangeSummaryHash": self.change_summary_hash,
"ApprovalChain": self.approval_chain,
"ExternalAnchorRef": self.external_anchor_ref,
"EventHash": self.event_hash,
"Signature": self.signature,
}
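As a usage sketch, here is what a freshly created POLICY_VERSION record looks like when Invariant 4 holds. The dict mirrors the `to_dict()` field names above; the policy text, policy ID, and the `ExternalAnchorRef` value (a plain timestamp standing in for a real RFC 3161 token) are invented for illustration:

```python
import hashlib
import uuid
from datetime import datetime, timedelta, timezone

policy_document = "Example safety policy v2 (full text would go here)"
now = datetime.now(timezone.utc)

# Anchor first (in production: an RFC 3161 timestamp token from a TSA),
# then schedule the policy to take effect afterwards.
policy_event = {
    "EventID": str(uuid.uuid4()),
    "EventType": "POLICY_VERSION",
    "PolicyID": "safety-policy-v2",  # illustrative ID
    "PolicyHash": "sha256:" + hashlib.sha256(policy_document.encode()).hexdigest(),
    "EffectiveFrom": (now + timedelta(days=7)).isoformat(),
    "ExternalAnchorRef": now.isoformat(),  # anchor predates EffectiveFrom
}

anchored = datetime.fromisoformat(policy_event["ExternalAnchorRef"])
effective = datetime.fromisoformat(policy_event["EffectiveFrom"])
assert anchored < effective  # Invariant 4 holds
```

Reversing the order (anchoring after `EffectiveFrom`, or not anchoring at all) is exactly what the verifier below reports as a retroactive policy.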
GEN_WARN, GEN_ESCALATE, GEN_QUARANTINE: The Intermediate States
Real content moderation isn't binary. Between "allow" and "block" lies a continuum of intermediate decisions — warn, escalate to human review, hold for post-generation inspection. v1.1 formalizes these states.
@dataclass
class GenWarnEvent:
"""Content allowed with a safety warning.
Counts as GEN (outcome) in the Primary Completeness Invariant.
Warning text itself is NOT stored — only its hash.
"""
event_id: str
chain_id: str
prev_hash: Optional[str]
timestamp: str
event_type: str = "GEN_WARN"
attempt_id: str = ""
content_hash: str = "" # SHA-256 of generated content
warn_message_hash: str = "" # SHA-256 of warning shown to user
risk_category: str = ""
risk_score: float = 0.0
applied_policy_version_ref: str = ""
multi_modal_type: str = ""
event_hash: str = ""
signature: str = ""
@classmethod
def create(cls, chain_id: str, prev_hash: Optional[str],
attempt_id: str, content: bytes,
warn_message: str, risk_category: RiskCategory,
risk_score: float, policy_ref: str,
modal_type: MultiModalType) -> "GenWarnEvent":
return cls(
event_id=str(uuid.uuid4()),
chain_id=chain_id,
prev_hash=prev_hash,
timestamp=datetime.now(timezone.utc).isoformat(),
attempt_id=attempt_id,
content_hash="sha256:" + hashlib.sha256(content).hexdigest(),
warn_message_hash="sha256:" + hashlib.sha256(
warn_message.encode()
).hexdigest(),
risk_category=risk_category.value,
risk_score=risk_score,
applied_policy_version_ref=policy_ref,
multi_modal_type=modal_type.value,
)
@dataclass
class GenEscalateEvent:
"""Request sent for human review — a PENDING state.
Must be resolved by GEN or GEN_DENY (Invariant 2).
Unresolved > 72h = compliance violation.
"""
event_id: str
chain_id: str
prev_hash: Optional[str]
timestamp: str
event_type: str = "GEN_ESCALATE"
attempt_id: str = ""
escalation_reason: str = ""
reviewer_type: str = ""
escalation_timestamp: str = ""
resolution_ref: Optional[str] = None # Populated when resolved
applied_policy_version_ref: str = ""
event_hash: str = ""
signature: str = ""
@classmethod
def create(cls, chain_id: str, prev_hash: Optional[str],
attempt_id: str,
reason: EscalationReason,
reviewer: ReviewerType,
policy_ref: str) -> "GenEscalateEvent":
now = datetime.now(timezone.utc).isoformat()
return cls(
event_id=str(uuid.uuid4()),
chain_id=chain_id,
prev_hash=prev_hash,
timestamp=now,
attempt_id=attempt_id,
escalation_reason=reason.value,
reviewer_type=reviewer.value,
escalation_timestamp=now,
applied_policy_version_ref=policy_ref,
)
@dataclass
class GenQuarantineEvent:
"""Content generated but held before delivery — a PENDING state.
Must be resolved by EXPORT or GEN_DENY (Invariant 3).
ContentHash enables TAKE IT DOWN Act duplicate detection
without retaining harmful content in the audit log.
"""
event_id: str
chain_id: str
prev_hash: Optional[str]
timestamp: str
event_type: str = "GEN_QUARANTINE"
attempt_id: str = ""
content_hash: str = ""
quarantine_reason: str = ""
expiry_policy: str = "" # AUTO_RELEASE | REQUIRES_HUMAN_APPROVAL | PERMANENT
release_ref: Optional[str] = None # Populated when resolved
applied_policy_version_ref: str = ""
event_hash: str = ""
signature: str = ""
@classmethod
def create(cls, chain_id: str, prev_hash: Optional[str],
attempt_id: str, content: bytes,
reason: str, expiry_policy: str,
policy_ref: str) -> "GenQuarantineEvent":
return cls(
event_id=str(uuid.uuid4()),
chain_id=chain_id,
prev_hash=prev_hash,
timestamp=datetime.now(timezone.utc).isoformat(),
attempt_id=attempt_id,
content_hash="sha256:" + hashlib.sha256(content).hexdigest(),
quarantine_reason=reason,
expiry_policy=expiry_policy,
applied_policy_version_ref=policy_ref,
)
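In serialized form, the escalate-then-resolve lifecycle looks like this. This is a sketch using plain dicts with the same field names as the `to_dict()` methods; a real chain would also carry `PrevHash`, `EventHash`, and signatures:

```python
import uuid
from datetime import datetime, timezone

attempt_id = str(uuid.uuid4())

# 1. The safety pipeline escalates the request to a human reviewer.
escalation = {
    "EventID": str(uuid.uuid4()),
    "EventType": "GEN_ESCALATE",
    "AttemptID": attempt_id,
    "Timestamp": datetime.now(timezone.utc).isoformat(),
    "ResolutionRef": None,  # pending: must be resolved (Invariant 2)
}

# 2. The reviewer denies the request; a GEN_DENY event resolves it.
denial = {
    "EventID": str(uuid.uuid4()),
    "EventType": "GEN_DENY",
    "AttemptID": attempt_id,
    "Timestamp": datetime.now(timezone.utc).isoformat(),
}
escalation["ResolutionRef"] = denial["EventID"]

assert escalation["ResolutionRef"] == denial["EventID"]
```

The same pattern applies to GEN_QUARANTINE, with `ReleaseRef` populated by the resolving EXPORT or GEN_DENY event (Invariant 3).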
The Four Completeness Invariants
Here's the verification algorithm that enforces all four invariants simultaneously. This is the mathematical anchor — the thing that makes CAP-SRP's guarantees meaningful rather than aspirational.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List, Tuple, Optional
@dataclass
class VerificationResult:
"""Result of completeness invariant verification."""
valid: bool
# Primary invariant
unmatched_attempts: List[str]
orphan_outcomes: List[str]
total_attempts: int
total_outcomes: int
# Escalation invariant (v1.1)
unresolved_escalations: List[str]
# Quarantine invariant (v1.1)
unresolved_quarantines: List[str]
# Policy anchoring invariant (v1.1)
retroactive_policies: List[str]
# Summary
errors: List[str]
def __str__(self) -> str:
if self.valid:
return (
f"✅ All four invariants verified. "
f"{self.total_attempts} attempts, "
f"{self.total_outcomes} outcomes."
)
return (
f"❌ Invariant violation(s) detected:\n"
+ "\n".join(f" • {e}" for e in self.errors)
)
def get_anchor_timestamp(anchor_ref: Optional[str]) -> Optional[datetime]:
"""Look up external anchor timestamp.
In production, this queries the RFC 3161 TSA or SCITT
transparency service. Here, we parse the reference.
"""
if anchor_ref is None:
return None
# Placeholder: in production, resolve against TSA
return datetime.fromisoformat(anchor_ref) if anchor_ref else None
def verify_completeness_v1_1(
events: List[dict],
time_window: Tuple[datetime, datetime],
escalation_grace: timedelta = timedelta(hours=72),
) -> VerificationResult:
"""
Verify all four CAP-SRP v1.1 Completeness Invariants.
Invariant 1: Every GEN_ATTEMPT has exactly one outcome
Invariant 2: Every GEN_ESCALATE is resolved
Invariant 3: Every GEN_QUARANTINE is resolved
Invariant 4: Every POLICY_VERSION is anchored before EffectiveFrom
All checks are O(n) time and O(n) space.
"""
errors = []
window_start, window_end = time_window
# Filter events to time window
filtered = [
e for e in events
if window_start <= datetime.fromisoformat(e["Timestamp"]) <= window_end
]
# ── Invariant 1: Primary (attempt → outcome pairing) ──────────
attempts = {
e["EventID"]: e for e in filtered
if e["EventType"] == "GEN_ATTEMPT"
}
outcome_types = {
"GEN", "GEN_WARN", "GEN_DENY", "GEN_ERROR",
"GEN_ESCALATE", "GEN_QUARANTINE"
}
outcomes = [e for e in filtered if e["EventType"] in outcome_types]
matched_attempts = set()
orphan_outcomes = []
for outcome in outcomes:
attempt_id = outcome.get("AttemptID")
if attempt_id in attempts:
if attempt_id in matched_attempts:
errors.append(
f"DUPLICATE_OUTCOME: Multiple outcomes for attempt {attempt_id}"
)
matched_attempts.add(attempt_id)
else:
orphan_outcomes.append(outcome["EventID"])
unmatched = list(set(attempts.keys()) - matched_attempts)
if unmatched:
errors.append(
f"UNMATCHED_ATTEMPTS: {len(unmatched)} attempts have no outcome"
)
if orphan_outcomes:
errors.append(
f"ORPHAN_OUTCOMES: {len(orphan_outcomes)} outcomes have no attempt"
)
# ── Invariant 2: Escalation resolution ─────────────────────────
escalations = {
e["EventID"]: e for e in filtered
if e["EventType"] == "GEN_ESCALATE"
}
    # Event timestamps are timezone-aware (RFC 3339), so compare against
    # an aware "now"; a naive datetime.now() would raise TypeError here.
    now = datetime.now(timezone.utc)
unresolved_escalations = [
eid for eid, e in escalations.items()
if e.get("ResolutionRef") is None
and (now - datetime.fromisoformat(e["Timestamp"])) > escalation_grace
]
if unresolved_escalations:
errors.append(
f"UNRESOLVED_ESCALATIONS: {len(unresolved_escalations)} "
f"escalations unresolved beyond {escalation_grace}"
)
# ── Invariant 3: Quarantine resolution ─────────────────────────
quarantines = {
e["EventID"]: e for e in filtered
if e["EventType"] == "GEN_QUARANTINE"
}
unresolved_quarantines = [
eid for eid, e in quarantines.items()
if e.get("ReleaseRef") is None
]
if unresolved_quarantines:
errors.append(
f"UNRESOLVED_QUARANTINES: {len(unresolved_quarantines)} "
f"quarantined items without resolution"
)
# ── Invariant 4: Policy anchoring precedence ───────────────────
policy_versions = [
e for e in filtered
if e["EventType"] == "POLICY_VERSION"
]
retroactive_policies = []
    for pv in policy_versions:
        anchor_ts = get_anchor_timestamp(pv.get("ExternalAnchorRef"))
        effective = datetime.fromisoformat(pv["EffectiveFrom"])
        # A missing anchor violates the invariant just like a late one:
        # the policy must be provably anchored BEFORE it takes effect.
        if anchor_ts is None or anchor_ts > effective:
            retroactive_policies.append(pv["EventID"])
    if retroactive_policies:
        errors.append(
            f"RETROACTIVE_POLICIES: {len(retroactive_policies)} "
            f"policies unanchored or anchored after their effective date"
        )
return VerificationResult(
valid=len(errors) == 0,
unmatched_attempts=unmatched,
orphan_outcomes=orphan_outcomes,
total_attempts=len(attempts),
total_outcomes=len(outcomes),
unresolved_escalations=unresolved_escalations,
unresolved_quarantines=unresolved_quarantines,
retroactive_policies=retroactive_policies,
errors=errors,
)
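To see the pairing check in action, here is a self-contained smoke test that condenses the Invariant 1 branch of the verifier above. The event IDs are synthetic and the records are stripped down to the fields the check needs:

```python
from datetime import datetime, timezone

def iso_now() -> str:
    return datetime.now(timezone.utc).isoformat()

# Three attempts: one allowed, one denied, one silently dropped (A3).
events = [
    {"EventID": "A1", "EventType": "GEN_ATTEMPT", "Timestamp": iso_now()},
    {"EventID": "A2", "EventType": "GEN_ATTEMPT", "Timestamp": iso_now()},
    {"EventID": "A3", "EventType": "GEN_ATTEMPT", "Timestamp": iso_now()},
    {"EventID": "O1", "EventType": "GEN", "AttemptID": "A1", "Timestamp": iso_now()},
    {"EventID": "O2", "EventType": "GEN_DENY", "AttemptID": "A2", "Timestamp": iso_now()},
]

# Same pairing logic as the Invariant 1 branch, condensed.
OUTCOMES = {"GEN", "GEN_WARN", "GEN_DENY", "GEN_ERROR",
            "GEN_ESCALATE", "GEN_QUARANTINE"}
attempts = {e["EventID"] for e in events if e["EventType"] == "GEN_ATTEMPT"}
matched = {e["AttemptID"] for e in events if e["EventType"] in OUTCOMES}

unmatched = attempts - matched
assert unmatched == {"A3"}  # the silently dropped attempt is detected
```

Feeding the same list (with a suitable time window) to `verify_completeness_v1_1` would surface the matching `UNMATCHED_ATTEMPTS` error in `VerificationResult.errors`.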
What Each Invariant Catches
Let's map the invariants back to the real-world failures:
Invariant 1 (Primary — attempt/outcome pairing):
→ Iran fake footage: Were generation requests for conflict imagery
blocked? Count attempts vs. denials for REAL_PERSON_DEEPFAKE
and VIOLENCE_EXTREME categories.
→ Grok NCII crisis: 4.4M images in 9 days. What % were
GEN_DENY? Without the invariant, we'll never know.
Invariant 2 (Escalation resolution):
→ Grammarly personas: Were identity-use decisions escalated
to legal review? Were escalations resolved or abandoned?
→ State bill compliance: Annual reports claim "human review"
processes exist. Prove escalations resolve within 72h.
Invariant 3 (Quarantine resolution):
→ TAKE IT DOWN Act: Content held for review must be either
released or denied. No permanent "quarantine" hiding.
→ India 3-hour deadline: Quarantined content must resolve
within the regulatory timeline.
Invariant 4 (Policy anchoring precedence):
→ Did Grok's safety thresholds change during the crisis?
Policies must be anchored BEFORE they take effect.
→ Did X change moderation policies mid-conflict?
Retroactive policy creation is now detectable.
Regulatory Mapping: Where Each Law Falls Short
Here's the comprehensive mapping of all seven developments against CAP-SRP v1.1's capabilities:
┌──────────────────────────┬──────────────┬──────────────┬──────────────┐
│ Regulation / Event │ C2PA Layer │ SRP Layer │ v1.1 Events │
│ │ (generated) │ (refused) │ Needed │
├──────────────────────────┼──────────────┼──────────────┼──────────────┤
│ X Iran policy │ ❌ Not req │ ❌ Not req │ ACCOUNT_ACT │
│ │ │ │ POLICY_VER │
├──────────────────────────┼──────────────┼──────────────┼──────────────┤
│ Oregon SB 1546 │ Partial* │ ❌ Not req │ GEN_DENY │
│ (chatbot safety) │ │ │ GEN_WARN │
├──────────────────────────┼──────────────┼──────────────┼──────────────┤
│ Arizona SB 1786 │ ✅ Required │ ❌ Not req │ GEN_DENY │
│ (provenance embedding) │ │ │ POLICY_VER │
├──────────────────────────┼──────────────┼──────────────┼──────────────┤
│ Illinois SB 3263 │ ✅ Required │ ❌ Not req │ GEN_DENY │
│ (AI Provenance Data Act) │ │ │ POLICY_VER │
├──────────────────────────┼──────────────┼──────────────┼──────────────┤
│ India IT Rules 2026 │ ✅ Required │ ❌ Not req │ GEN_DENY │
│ (SGI mandate) │ │ │ ACCOUNT_ACT │
│ │ │ │ POLICY_VER │
├──────────────────────────┼──────────────┼──────────────┼──────────────┤
│ EU AI Act Art. 50 │ ✅ Required │ Optional** │ All v1.1 │
│ (transparency code) │ │ │ types apply │
├──────────────────────────┼──────────────┼──────────────┼──────────────┤
│ TAKE IT DOWN Act │ N/A (del) │ ❌ Not req │ GEN_DENY │
│ (48h removal) │ │ │ GEN_QUARANT │
│ │ │ │ ACCOUNT_ACT │
│ │ │ │ LE_REFERRAL │
├──────────────────────────┼──────────────┼──────────────┼──────────────┤
│ AI CSAM enforcement │ N/A │ ❌ Not req │ GEN_DENY │
│ (22-state prosecution) │ │ │ ACCOUNT_ACT │
│ │ │ │ LE_REFERRAL │
│ │ │ │ POLICY_VER │
├──────────────────────────┼──────────────┼──────────────┼──────────────┤
│ Grammarly persona issue │ ❌ Not req │ ❌ Not req │ GEN_ATTEMPT │
│ (identity governance) │ │ │ POLICY_VER │
└──────────────────────────┴──────────────┴──────────────┴──────────────┘
* Oregon requires disclosure but not C2PA specifically
** EU Code lists "logging" as optional element
The pattern is clear: the C2PA column (proving what was generated) is increasingly covered by regulation. The SRP column (proving what was refused) is empty everywhere except a partial "optional" in the EU Code.
What This Means for Developers Building AI Systems
If you're building or maintaining an AI content generation system, here's the practical takeaway:
1. The compliance surface is expanding fast
Twenty-plus US states, India, the EU, and federal law are all converging on content provenance requirements. If you're not logging generation events today, you'll be required to by Q3 2026 at the latest (EU Article 50 deadline: August 2, 2026).
2. Generation logging isn't enough
Every one of these regulations will eventually ask: "What did you refuse to generate?" The only question is when. Logging GEN_DENY events now is forward-looking compliance — it's cheaper to build it into the architecture than to retrofit it under regulatory pressure.
3. Start with the Primary Invariant
You don't need to implement all four invariants on day one. Start with the primary:
# The minimum viable verification:
# Every GEN_ATTEMPT must have exactly one outcome.
assert count(GEN_ATTEMPT) == count(GEN) + count(GEN_DENY) + count(GEN_ERROR)
If you can verify this for your system, you have the foundation. The escalation, quarantine, and policy invariants build on top.
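That count identity is runnable in a few lines. Note that counting is a weaker check than the per-attempt pairing the full verifier performs (counts can balance even when IDs don't line up), but it is the right first step:

```python
from collections import Counter

# Event types only; a real log would carry full event records.
log = ["GEN_ATTEMPT", "GEN", "GEN_ATTEMPT", "GEN_DENY",
       "GEN_ATTEMPT", "GEN_ERROR"]

c = Counter(log)
outcomes = c["GEN"] + c["GEN_DENY"] + c["GEN_ERROR"]
assert c["GEN_ATTEMPT"] == outcomes  # 3 attempts, 3 outcomes
```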
4. Policy versioning is free and invaluable
Even if you implement nothing else from v1.1, start hash-chaining your safety policy documents. When a regulator asks "What policy was in effect when this decision was made?", you want to have a cryptographic answer, not a manual document search.
# Minimum viable policy versioning:
policy_hash = hashlib.sha256(policy_document.encode()).hexdigest()
# Store: (policy_id, policy_hash, effective_from, timestamp)
# Anchor externally: RFC 3161 timestamp token
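One way to sketch that, assuming nothing about the CAP-SRP wire format: chain each policy record to its predecessor's hash. The `policy_record` helper and its key=value canonicalization are illustrative stand-ins for whatever canonical encoding you adopt:

```python
import hashlib
from datetime import datetime, timezone

def policy_record(policy_id: str, document: str, prev_record_hash):
    """Hash-chain a policy version to its predecessor (illustrative schema)."""
    record = {
        "policy_id": policy_id,
        "policy_hash": hashlib.sha256(document.encode()).hexdigest(),
        "prev_record_hash": prev_record_hash,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Stand-in canonicalization; use a real canonical encoding in practice.
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    record["record_hash"] = hashlib.sha256(canonical.encode()).hexdigest()
    return record

v1 = policy_record("safety-policy", "policy text v1", prev_record_hash=None)
v2 = policy_record("safety-policy", "policy text v2", v1["record_hash"])

# Silently editing v1 after the fact no longer matches the recorded hash:
recomputed = hashlib.sha256("edited v1 text".encode()).hexdigest()
assert recomputed != v1["policy_hash"]
assert v2["prev_record_hash"] == v1["record_hash"]
```

Anchor each `record_hash` externally (RFC 3161 or a SCITT ledger) and you have the backbone of Invariant 4.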
5. The architecture diagram
Here's how the full v1.1 event flow looks in practice:
User Request
│
▼
┌──────────────────┐
│ GEN_ATTEMPT │ ◄── Log BEFORE safety check (critical!)
└────────┬─────────┘
│
▼
┌──────────────────┐
│ Safety Pipeline │
└────────┬─────────┘
│
┌────┴────┬──────────┬───────────┬──────────────┐
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
┌───────┐ ┌────────┐ ┌────────┐ ┌──────────┐ ┌──────────┐
│ GEN │ │GEN_DENY│ │GEN_WARN│ │GEN_ESCAL │ │GEN_ERROR │
│(pass) │ │(block) │ │(warn) │ │(review) │ │(failure) │
└───┬───┘ └────────┘ └────────┘ └────┬─────┘ └──────────┘
│ │
▼ resolved by
┌───────────┐ GEN or GEN_DENY
│GEN_QUARANT│
│(hold) │
└─────┬─────┘
│
resolved by
EXPORT or GEN_DENY
Account-Level Events (independent flow)
────────────────────────────────────────
Flagged Pattern ──▶ ACCOUNT_ACTION ──▶ LAW_ENFORCEMENT_REFERRAL
(BAN/SUSPEND) (REFERRED/NOT_REFERRED)
│
References: POLICY_VERSION
(what policy governed the decision?)
Transparency Notes
About CAP-SRP: CAP-SRP is an open specification published under CC BY 4.0 by the VeritasChain Standards Organization (VSO), founded in Tokyo. The specification is early-stage — v1.0 was released January 28, 2026; v1.1 on March 5, 2026. It has not been endorsed by major AI companies and is not yet an adopted IETF standard. An individual Internet-Draft (draft-kamimura-scitt-refusal-events-02) has been submitted to the SCITT working group but has not been formally adopted. The underlying standards it builds on — SCITT, C2PA, COSE/CBOR, RFC 3161 — are mature and widely implemented.
Fact-check methodology: Every claim in this article was verified against primary sources before publication. Corrections are noted inline where original reports contained errors (conflict start date, view count scale, NCMEC statistics context, unverified legislative dates). URLs were checked for existence and content accuracy. One URL cited in the source report (aicerts.ai) appeared fabricated — the site exists but the specific article slug does not match published content.
What CAP-SRP is:
- A technically sound approach to a genuine and well-documented gap
- Aligned with existing standards (C2PA, SCITT, RFC 3161)
- Available on GitHub: veritaschain/cap-spec · veritaschain/cap-srp
What CAP-SRP is not (yet):
- An industry-endorsed standard
- An IETF RFC
- A guaranteed solution
The real question is whether the industry builds some form of verifiable refusal provenance before regulators impose one. The August 2, 2026 EU AI Act enforcement deadline is 147 days away.
Verify, don't trust. The code is the proof.
GitHub: veritaschain/cap-spec · Specification: CAP-SRP v1.1 · IETF Draft: draft-kamimura-scitt-refusal-events-02 · License: CC BY 4.0