On March 30, 2026, the EU closes public consultation on the Code of Practice that will govern AI content marking under Article 50. The same week, a California court told xAI it can't hide its training data, 61 data protection authorities moved from warnings to enforcement, Nebraska advanced an AI transparency bill to floor debate, and the world's most widely deployed provenance standard — with 6,000+ members — still cannot represent a single refusal decision. This article fact-checks all seven developments, maps the exact CAP-SRP v1.1 event types that address each gap, and gives you working Python code to build the missing layer.
TL;DR
Every major AI transparency initiative this week operates in the domain of content that exists. C2PA labels what AI creates. The EU Code mandates marking what AI produces. California demands disclosure of training data. None of them can answer the question regulators, courts, and prosecutors are now asking: "What did your AI refuse to generate, and can you prove it?"
This article:
- Fact-checks seven developments against primary sources (with corrections)
- Maps each gap to specific CAP-SRP v1.1 event types and data fields
- Provides a complete Python implementation of the v1.1 event model — including the three new event types (ACCOUNT_ACTION, LAW_ENFORCEMENT_REFERRAL, POLICY_VERSION)
- Shows the C2PA integration point and Evidence Pack structure
- Includes a regulatory deadline map and a Bronze → Silver → Gold sprint plan
GitHub: veritaschain/cap-spec · veritaschain/cap-srp · License: CC BY 4.0
Table of Contents
- Signal 1: EU Code of Practice — Content Marking Without Behavioral Provenance
- Signal 2: xAI v. Bonta — Courts Demanding Evidence That Doesn't Exist
- Signal 3: 61 DPA Joint Statement — Enforcement Without Verification Tools
- Signal 4: IETF Refusal Events Draft — The Only Standards-Track Attempt
- Signal 5: Nebraska LB1083 — When State Law Demands Safety Proof
- Signal 6: C2PA at 6,000 Members — The Provenance Ceiling
- Signal 7: ZwillGen's Four Questions — The Litigation Evidence Gap
- The Architecture: CAP-SRP v1.1 Event Model
- Building the Flight Recorder: Complete Python Implementation
- C2PA Integration: Connecting Both Halves
- Regulatory Deadline Map
- Sprint Plan: Bronze → Silver → Gold
- Transparency Notes
Signal 1: EU Code of Practice
What happened
The European Commission published the second draft of its Code of Practice on marking and labelling AI-generated content on March 3, 2026. Public consultation closes today, March 30. The final Code is expected in early June. Article 50 transparency obligations take effect August 2, 2026.
The Code mandates a dual-layer marking approach: digitally signed metadata (C2PA explicitly named as the recommended open standard) plus imperceptible watermarking. Fingerprinting and logging, which were baseline requirements in the first draft, were downgraded to supplementary measures. Provider obligations include free detection tools via APIs. Deployer obligations are differentiated by media type — persistent icons for real-time video, distinguishable labels for images, spoken disclaimers at intervals for audio. For text, where watermarking degrades quality, the Code permits "Provenance Certificates" — digitally signed manifests guaranteeing content origin.
Fact-check verdict: ✅ Confirmed
Publication confirmed via European Commission digital strategy portal. Date confirmed as March 3 by Kennedys Law (note: the Daily Watch cited March 5, a minor reference variation). Comment deadline March 30 confirmed. August 2, 2026 enforcement date confirmed by Bird & Bird, Herbert Smith Freehills Kramer, and TrueScreen. Non-compliance fines of up to €15M or 3% of global turnover confirmed.
What the Code doesn't cover
The entire Code operates in the domain of content that has been generated. It contains:
- ❌ No mechanism for documenting what AI systems refused to generate
- ❌ No refusal logging requirement
- ❌ No behavioral provenance standard
- ❌ No audit trail for safety decisions
CAP-SRP mapping
| Gap | CAP-SRP Event | What It Does |
|---|---|---|
| No refusal record | GEN_DENY | Cryptographic proof a request was blocked, with risk category and policy reference |
| No attempt record | GEN_ATTEMPT | Committed to log before safety evaluation — prevents selective logging |
| No policy versioning | POLICY_VERSION | Tamper-evident record of which policy was active at each decision point |
| No completeness check | Completeness Invariant | ∑ ATTEMPT = ∑ GEN + ∑ DENY + ∑ ERROR — provably complete audit trail |
The Code mandates the content's passport (C2PA). CAP-SRP is the system's flight recorder.
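The Completeness Invariant in the table reduces to a counting check. Here is a minimal sketch, assuming a simplified event shape (`{"type": ...}` dicts); the full v1.1 model carries far more fields, but the balancing logic is this simple:

```python
from collections import Counter

def check_completeness(events: list[dict]) -> bool:
    """Verify sum(ATTEMPT) == sum(GEN) + sum(DENY) + sum(ERROR)."""
    counts = Counter(e["type"] for e in events)
    return counts["GEN_ATTEMPT"] == (
        counts["GEN"] + counts["GEN_DENY"] + counts["GEN_ERROR"]
    )

log = [
    {"type": "GEN_ATTEMPT"}, {"type": "GEN_DENY"},  # blocked request
    {"type": "GEN_ATTEMPT"}, {"type": "GEN"},       # allowed request
]
assert check_completeness(log)       # balanced -> audit trail complete
log.append({"type": "GEN_ATTEMPT"})  # an attempt with no recorded outcome
assert not check_completeness(log)   # unbalanced -> provably incomplete
```

Because the check is pure arithmetic over the log, a regulator can run it without trusting the provider's tooling.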
Signal 2: xAI v. Bonta
What happened
On March 4, 2026, Judge Jesus G. Bernal denied xAI's preliminary injunction against California's AI Training Data Transparency Act (AB 2013, effective January 1, 2026). The court rejected xAI's trade secret, First Amendment, and vagueness arguments. On March 17, xAI appealed to the Ninth Circuit (Case No. 26-1591) — the first appellate review of AI transparency requirements under the First and Fifth Amendments.
In the same month: Baltimore became the first US city to sue xAI over Grok deepfakes (March 24). An Amsterdam court imposed €100,000/day fines on X/Grok for non-consensual imagery (March 26). California AG Rob Bonta sent a cease-and-desist letter. Reuters testing found an 82% failure rate in Grok's safety filters.
Fact-check verdict: ✅ Confirmed
Court order confirmed via Ars Technica (PDF). Covered by Reuters, JD Supra, Jones Walker, E-Discovery LLC. OpenAI/Anthropic compliance confirmed via PYMNTS and Davis & Gilbert. Dutch court ruling confirmed via Tech Policy Press. Baltimore lawsuit confirmed via NBC News and Engadget.
CAP-SRP mapping
AB 2013 demands backward-looking disclosure (training data). It does not require real-time behavioral disclosure (what the model does with requests). The Grok incident is the canonical failure case:
The Grok Evidence Problem
═══════════════════════════
xAI CLAIMS: "Our safety systems are operational"
REUTERS FINDS: 82% of harmful prompts succeed
REGULATORS ASK: "Show us the refusal logs"
xAI CAN PRODUCE: Internal logs (self-reported, unverifiable)
xAI CANNOT PRODUCE: Independently verifiable audit trail
WITH CAP-SRP:
┌─────────────────────────────────────────────────────────┐
│ GEN_ATTEMPT: sha256(prompt) logged BEFORE safety check │
│ GEN_DENY: linked to attempt, risk=NCII_RISK, score=? │
│ Completeness Invariant: ∑ ATTEMPT = ∑ GEN + ∑ DENY + ∑ ERROR │
│ If equation doesn't balance → audit trail is PROVABLY incomplete │
└─────────────────────────────────────────────────────────┘
Result: "We blocked it" becomes verifiable — or provably false.
The v1.1 ACCOUNT_ACTION event type addresses the Grok aftermath directly. When xAI eventually banned accounts after public pressure, no cryptographic record documented when, why, or under what policy. ACCOUNT_ACTION creates that record. LAW_ENFORCEMENT_REFERRAL records whether the LE notification threshold was assessed — central to the multi-jurisdictional investigations now active.
Signal 3: 61 DPA Joint Statement
What happened
On February 23, 2026, sixty-one data protection authorities issued a joint statement on AI-generated imagery and privacy, coordinated through the Global Privacy Assembly. Signatories include EDPB, EDPS, UK ICO, France's CNIL, and authorities spanning every continent.
The enforcement cascade that followed: EU opened a formal DSA investigation into X (January 26). UK ICO launched a formal investigation (February). Paris prosecutors expanded to child pornography charges. Ireland's DPC opened a GDPR investigation. The European Parliament voted to amend the AI Act to ban "nudifier" systems. 35 US state AGs sent a joint demand to xAI. Dutch court imposed €100,000/day fines (March 26).
Fact-check verdict: ✅ Confirmed
Confirmed via EDPB official press release, EDÖB Switzerland, UK ICO, Pearl Cohen, and Hunton & Williams. ICO Grok investigation confirmed via ICO announcement. Daily Watch characterizes the statement as entering an "enforcement phase" — this is editorial analysis; the statement itself commits to "coordinated responses."
CAP-SRP mapping
Every enforcement action in the cascade confronts the same problem: how do you verify, from outside the platform, that a safeguard actually fired?
| Enforcement Action | Evidence Needed | CAP-SRP Mechanism |
|---|---|---|
| DSA investigation (EU) | Proof safety systems were active | Evidence Pack with Completeness Invariant |
| ICO investigation (UK) | GDPR-compliant decision records | GEN_DENY with JurisdictionContext: "GB" |
| Paris prosecution | LE notification timeline | LAW_ENFORCEMENT_REFERRAL event chain |
| DPC investigation (Ireland) | Data processing lawfulness | Privacy-preserving verification (hashed prompts) |
| Dutch court injunction | Ongoing compliance proof | Real-time GEN_DENY monitoring via audit API |
| 35 AG demand | Cross-jurisdictional safety evidence | Multi-jurisdiction Evidence Pack |
CAP-SRP v1.1 conformance tiers map to these enforcement contexts:
- Bronze (minimum): Local hash chains, 1-year retention — sufficient for initial regulatory response
- Silver: External timestamp anchoring within 24 hours, Merkle proofs, 3-year retention — sufficient for formal investigations
- Gold: Real-time anchoring, HSM key management, SCITT integration, audit API, 5-year retention — sufficient for multi-year litigation and cross-jurisdictional enforcement
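The Merkle proofs that Silver requires start from a batch root over event hashes. A minimal sketch of that root computation — the spec's exact tree construction (leaf encoding, odd-node handling) may differ, so treat this as illustrative:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pairwise-hash leaves up to a single root; an odd node is duplicated."""
    if not leaves:
        return sha256(b"")
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # odd count: duplicate the last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

event_hashes = [b"evt-001", b"evt-002", b"evt-003"]
root = merkle_root(event_hashes)
print(root.hex()[:16])  # this root is what gets anchored externally (e.g. via an RFC 3161 TSA)
```

Anchoring only the root keeps the external timestamp cheap while still letting any single event prove inclusion in the batch.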
Signal 4: IETF Refusal Events Draft
What happened
The Internet-Draft draft-kamimura-scitt-refusal-events-02 (last updated January 29, 2026) is the only formal submission to a recognized standards body that defines machine-readable semantics for AI refusal events. It specifies how refusal decisions map to signed statements within the SCITT (Supply Chain Integrity, Transparency, and Trust) architecture.
Fact-check verdict: ✅ Confirmed — with critical context
Confirmed via IETF Datatracker. Revision 02 confirmed. Author: Tokachi Kamimura (VSO).
⚠️ Critical status note: This is an individual submission — not adopted by the SCITT working group, no assigned Area Director, no IETF endorsement, no RFC stream. The SCITT WG charter focuses on software supply chains, making this an out-of-charter application. That said, the SCITT infrastructure it builds on is maturing rapidly: draft-ietf-scitt-architecture is in the RFC Editor Queue (AUTH48), and draft-ietf-scitt-scrapi reached Last Call on March 27, 2026.
CAP-SRP mapping
The IETF draft is the wire-protocol binding for CAP-SRP's event model. Here's how the layers fit:
Standards Stack: CAP-SRP + SCITT
═════════════════════════════════
┌─────────────────────────────────────────────┐
│ CAP-SRP v1.1 Specification │ ← Application semantics
│ Event types, data model, completeness │ (what events mean)
│ invariant, conformance tiers, threat model │
└─────────────────┬───────────────────────────┘
│ defines claim semantics
v
┌─────────────────────────────────────────────┐
│ draft-kamimura-scitt-refusal-events-02 │ ← SCITT binding
│ Maps CAP events to SCITT signed statements │ (how to transport)
│ COSE_Sign1 format, claim types, receipts │
│ ⚠️ Individual submission — not adopted │
└─────────────────┬───────────────────────────┘
│ rides on
v
┌─────────────────────────────────────────────┐
│ IETF SCITT Architecture (AUTH48 → RFC) │ ← Infrastructure
│ Transparency logs, Merkle inclusion proofs │ (trust foundation)
│ Signed statements, verifiable receipts │
│ ✅ RFC imminent │
└─────────────────────────────────────────────┘
│ uses
v
┌─────────────────────────────────────────────┐
│ COSE/CBOR (RFC 9052) + RFC 3161 timestamps │ ← Cryptographic
│ Ed25519 signatures, SHA-256 hashing │ primitives
│ ✅ Production-grade, widely deployed │
└─────────────────────────────────────────────┘
The analogy: CAP-SRP is to the IETF draft what a web application is to HTTP. CAP-SRP defines what events mean, what data they carry, and what invariants must hold. The IETF draft defines how to transport them on SCITT infrastructure.
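The "signed statement" idea at the heart of the SCITT binding can be sketched with stdlib primitives. This is a stand-in, not the draft's wire format: HMAC-SHA256 substitutes for the COSE_Sign1/Ed25519 signing the draft specifies, and the claim fields are illustrative:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for the provider's Ed25519 private key

def sign_statement(claim: dict) -> dict:
    """Canonicalize a claim and attach a signature (COSE_Sign1 analogue)."""
    payload = json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def verify_statement(stmt: dict) -> bool:
    """Recompute the MAC over the payload and compare in constant time."""
    expected = hmac.new(SIGNING_KEY, stmt["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stmt["signature"])

stmt = sign_statement({"event_type": "GEN_DENY", "risk": "NCII_RISK"})
assert verify_statement(stmt)
stmt["payload"] = stmt["payload"].replace("NCII", "XXXX")
assert not verify_statement(stmt)  # tampering breaks verification
```

The real stack adds what HMAC cannot: asymmetric keys (so verifiers need no secret) and transparency-log receipts (so the statement provably existed at registration time).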
Signal 5: Nebraska LB1083
What happened
Nebraska's Transparency in Artificial Intelligence Risk Management Act (LB1083) sits on General File as of March 24, with committee amendment AM2618 attached. Session adjourns April 17. Targets: "large frontier developers" ($500M+ annual revenue) and "large chatbot providers" (1M+ monthly active users likely accessed by minors). Requires public safety plans, incident reporting to the AG, employee whistleblower protections, and — critically — prohibits material misrepresentation about safety capabilities (Section 5(a)).
Fact-check verdict: ✅ Confirmed
Confirmed via Nebraska Legislature official record, Unicameral Update, and Nebraska Legislature committee statement (PDF). Introduction January 15, committee hearing February 8, AM2618 filed March 24 — all confirmed. No opposition testimony confirmed. Operative date January 1, 2027.
CAP-SRP mapping
Section 5(a) is the key: if a provider claims its system blocks CSAM and that claim is false or unverifiable, the bill creates a legal basis for enforcement. Without CAP-SRP, "our system blocks CSAM" is an unfalsifiable assertion. With CAP-SRP:
Section 5(a) Verification Flow
═══════════════════════════════
Provider claims: "Our system blocks CSAM generation"
Nebraska AG investigation:
┌─────────────────────────────────────────────────────────┐
│ 1. Request Evidence Pack for CSAM_RISK events │
│ 2. Verify Completeness Invariant: │
│ ∑ GEN_ATTEMPT(CSAM) = ∑ GEN_DENY(CSAM) + ∑ ERROR │
│ If GEN events exist with CSAM_RISK → system failed │
│ If equation doesn't balance → logs are incomplete │
│ 3. Verify POLICY_VERSION: was CSAM blocking active? │
│ 4. Check external timestamps: are they independently │
│ anchored, or could they be fabricated after the fact? │
│ 5. Verify GEN_ESCALATE resolution: │
│ ∑ GEN_ESCALATE = ∑ ESCALATION_RESOLVED │
│ Unresolved escalations older than 72h = violation │
└─────────────────────────────────────────────────────────┘
Without CAP-SRP: AG must trust provider's internal logs
With CAP-SRP: AG can mathematically verify the claim
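Steps 1-2 of that flow reduce to a category-scoped check: no successful generation may carry the risk flag, and every attempt must have a recorded outcome. A minimal sketch, assuming an illustrative event shape (`id`, `attempt_ref`, `risk` fields) rather than the full spec schema:

```python
def verify_blocking_claim(events: list[dict], category: str) -> dict:
    """AG-side check: does the log support 'our system blocks <category>'?"""
    outcomes = {e.get("attempt_ref"): e for e in events
                if e["type"] != "GEN_ATTEMPT"}
    attempts = [e for e in events if e["type"] == "GEN_ATTEMPT"]
    # Any successful generation flagged with the category -> the system failed
    leaked = [e for e in outcomes.values()
              if e["type"] == "GEN" and e.get("risk") == category]
    # Any attempt without a recorded outcome -> the log is incomplete
    unresolved = [a for a in attempts if a["id"] not in outcomes]
    return {"claim_holds": not leaked and not unresolved,
            "leaked": len(leaked), "unresolved": len(unresolved)}

log = [
    {"type": "GEN_ATTEMPT", "id": "a1"},
    {"type": "GEN_DENY", "attempt_ref": "a1", "risk": "CSAM_RISK"},
]
print(verify_blocking_claim(log, "CSAM_RISK"))
# {'claim_holds': True, 'leaked': 0, 'unresolved': 0}
```

Either failure mode is decisive: a leaked GEN event falsifies the safety claim directly, and an unresolved attempt falsifies the log's completeness.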
Signal 6: C2PA at 6,000 Members
What happened
C2PA (version 2.2, v2.3 in development) now has 6,000+ members and affiliates. Steering committee: Adobe, Amazon, BBC, Google, Intel, Meta, Microsoft, OpenAI, Publicis Groupe, Sony, Truepic. Samsung Galaxy S25 and Google Pixel 10 natively sign photos with C2PA. Cloudflare implements Content Credentials covering ~20% of web traffic. Google's dual approach combines C2PA metadata with SynthID watermarks — used over 20 million times in the Gemini app. The EU Code of Practice names C2PA as the recommended open standard.
March 2026 vulnerability: researchers demonstrated an "Integrity Clash" where a valid C2PA manifest and a valid watermark cryptographically contradict each other about the same image.
Fact-check verdict: ✅ Confirmed
6,000+ membership confirmed via industry analyses. Samsung/Google integration confirmed. Cloudflare implementation confirmed. SynthID 20M+ verifications confirmed. Integrity Clash vulnerability confirmed in academic disclosure.
CAP-SRP mapping
C2PA is the content's passport. CAP-SRP is the system's flight recorder.
C2PA + CAP-SRP: The Complete Provenance Stack
══════════════════════════════════════════════
┌─── C2PA answers ──────────────────────┐
│ │
│ "What was generated?" │
│ "Who generated it?" │
│ "When was it generated?" │
│ "What edits were applied?" │
│ │
│ Travels WITH the content artifact │
│ │
└──────────────┬────────────────────────┘
│
ContentHash bridge ────┤──── SHA-256 of output
│ links C2PA manifest
│ to CAP-SRP GEN event
│
┌──────────────┴────────────────────────┐
│ │
│ CAP-SRP answers: │
│ │
│ "What was REFUSED?" │
│ "WHY was it refused?" │
│ "Was the log COMPLETE?" │
│ "Which POLICY was active?" │
│ "Were ACCOUNTS enforced?" │
│ "Was LAW ENFORCEMENT notified?" │
│ │
│ Stays WITH the provider system │
│ │
└───────────────────────────────────────┘
C2PA cannot represent:
❌ Refusals (no content → nothing to attach manifest to)
❌ Null content / refusal receipts
❌ Behavioral decisions by the AI system
❌ Policy versioning at decision time
❌ Account-level enforcement actions
CAP-SRP cannot represent:
❌ Content provenance chain across platforms
❌ Edit history on generated artifacts
❌ Watermark durability across transformations
Together: Complete behavioral + content provenance.
Signal 7: ZwillGen's Four Questions
What happened
Brenda Leong's December 2025 article "No More 'Trust Me, Bro': 2026 Will Be the Year of Accountable AI" identified four questions that arise when AI is part of a disputed decision — and argued no AI developer can answer them with standardized, independently auditable evidence.
Fact-check verdict: ✅ Confirmed
Article exists at the cited URL on ZwillGen's official website. Author, publication timing, and content verified.
CAP-SRP mapping
Each question maps to specific v1.1 event types:
| ZwillGen Question | CAP-SRP Answer |
|---|---|
| "What exactly did the system do?" | GEN_ATTEMPT (SHA-256 prompt hash, model ID, session context, timestamp) |
| "What controls should have constrained it?" | POLICY_VERSION (policy in effect) + GEN_DENY (risk category, confidence score) |
| "What relevant tests were run?" | POLICY_VERSION change-control context (before/after hashes, linked evaluation docs) |
| "What happened when it went wrong?" | ACCOUNT_ACTION (enforcement decision) + LAW_ENFORCEMENT_REFERRAL (notification timeline) |
The Architecture: CAP-SRP v1.1 Event Model
Here's the complete v1.1 event taxonomy with all ten event types:
CAP-SRP v1.1 Complete Event Taxonomy
═════════════════════════════════════
CORE SRP EVENTS (v1.0)
───────────────────────
GEN_ATTEMPT ───→ Generation request received (BEFORE safety check)
GEN ───→ Generation succeeded
GEN_DENY ───→ Generation refused (policy violation)
GEN_ERROR ───→ Generation failed (system error)
INTERMEDIATE-STATE EVENTS (v1.1 — formally normalized)
──────────────────────────────────────────────────────
GEN_WARN ───→ Generation allowed with user-facing warning
GEN_ESCALATE ───→ Request sent for human review
GEN_QUARANTINE───→ Content generated but held before delivery
ACCOUNT & POLICY EVENTS (v1.1 — new)
─────────────────────────────────────
ACCOUNT_ACTION ───→ Account ban/suspend/reinstate
LAW_ENFORCEMENT_REFERRAL ───→ LE notification threshold assessment
POLICY_VERSION ───→ Safety policy publish/update
COMPLETENESS INVARIANTS
───────────────────────
Primary (v1.0):
∑ GEN_ATTEMPT = ∑ GEN + ∑ GEN_DENY + ∑ GEN_ERROR
(where GEN includes GEN_WARN outcomes)
Escalation Resolution (v1.1, Silver+):
∑ GEN_ESCALATE = ∑ ESCALATION_RESOLVED
(unresolved > 72 hours = compliance violation)
Quarantine Resolution (v1.1, Silver+):
∑ GEN_QUARANTINE = ∑ QUARANTINE_RELEASED + ∑ QUARANTINE_DENIED
RISK CATEGORIES (12 total, 1 new in v1.1)
──────────────────────────────────────────
CSAM_RISK NCII_RISK
MINOR_SEXUALIZATION REAL_PERSON_DEEPFAKE
VIOLENCE_EXTREME HATE_CONTENT
TERRORIST_CONTENT SELF_HARM_PROMOTION
COPYRIGHT_VIOLATION COPYRIGHT_STYLE_MIMICRY
VIOLENCE_PLANNING ←── NEW in v1.1 (Tumbler Ridge pattern)
OTHER
Building the Flight Recorder
Here's a complete Python implementation of the CAP-SRP v1.1 event model: all ten event types, the three completeness invariants plus hash-chain integrity verification, and privacy-preserving hashing.
"""
CAP-SRP v1.1 — Complete Event Model Implementation
===================================================
Implements all 10 event types, the 3 completeness invariants
plus chain-integrity verification, and privacy-preserving hashing.
GitHub: https://github.com/veritaschain/cap-spec
Spec: CAP-SRP v1.1 (2026-03-05)
License: Apache 2.0
"""
import hashlib
import json
import uuid
from datetime import datetime, timezone
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
# ─── Risk Categories (Section 7.9) ───────────────────────────
class RiskCategory(Enum):
CSAM_RISK = "CSAM_RISK"
NCII_RISK = "NCII_RISK"
MINOR_SEXUALIZATION = "MINOR_SEXUALIZATION"
REAL_PERSON_DEEPFAKE = "REAL_PERSON_DEEPFAKE"
VIOLENCE_EXTREME = "VIOLENCE_EXTREME"
HATE_CONTENT = "HATE_CONTENT"
TERRORIST_CONTENT = "TERRORIST_CONTENT"
SELF_HARM_PROMOTION = "SELF_HARM_PROMOTION"
COPYRIGHT_VIOLATION = "COPYRIGHT_VIOLATION"
COPYRIGHT_STYLE_MIMICRY = "COPYRIGHT_STYLE_MIMICRY"
VIOLENCE_PLANNING = "VIOLENCE_PLANNING" # NEW in v1.1
OTHER = "OTHER"
# ─── Event Types (Section 6) ─────────────────────────────────
class EventType(Enum):
# Core SRP events (v1.0)
GEN_ATTEMPT = "GEN_ATTEMPT"
GEN = "GEN"
GEN_DENY = "GEN_DENY"
GEN_ERROR = "GEN_ERROR"
# Intermediate-state events (v1.1 normalized)
GEN_WARN = "GEN_WARN"
GEN_ESCALATE = "GEN_ESCALATE"
GEN_QUARANTINE = "GEN_QUARANTINE"
# Account & policy events (v1.1 new)
ACCOUNT_ACTION = "ACCOUNT_ACTION"
LAW_ENFORCEMENT_REFERRAL = "LAW_ENFORCEMENT_REFERRAL"
POLICY_VERSION = "POLICY_VERSION"
class AccountActionType(Enum):
BAN = "BAN"
SUSPEND = "SUSPEND"
RATE_LIMIT = "RATE_LIMIT"
REINSTATE = "REINSTATE"
FLAG_FOR_REVIEW = "FLAG_FOR_REVIEW"
class LEOutcome(Enum):
REFERRED = "REFERRED"
NOT_REFERRED = "NOT_REFERRED"
PENDING = "PENDING"
# ─── Privacy-preserving hash functions ────────────────────────
def hash_prompt(prompt: str) -> str:
"""SHA-256 hash of prompt — original text is NEVER stored."""
return f"sha256:{hashlib.sha256(prompt.encode()).hexdigest()}"
def hash_actor(actor_id: str) -> str:
"""Pseudonymized actor identifier."""
return f"sha256:{hashlib.sha256(actor_id.encode()).hexdigest()}"
def hash_content(content_bytes: bytes) -> str:
"""Content hash — bridges to C2PA manifest."""
return f"sha256:{hashlib.sha256(content_bytes).hexdigest()}"
def hash_policy(policy_text: str) -> str:
"""Policy document hash for POLICY_VERSION events."""
return f"sha256:{hashlib.sha256(policy_text.encode()).hexdigest()}"
# ─── CAP Event (Section 7) ───────────────────────────────────
@dataclass
class CAPEvent:
event_type: EventType
event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
timestamp: str = field(
default_factory=lambda: datetime.now(timezone.utc).isoformat()
)
# Links
attempt_ref: Optional[str] = None # Links outcome → attempt
resolution_ref: Optional[str] = None # Links resolution → escalation/quarantine
# SRP fields
prompt_hash: Optional[str] = None
actor_hash: Optional[str] = None
model_id: Optional[str] = None
risk_category: Optional[RiskCategory] = None
risk_score: Optional[float] = None
content_hash: Optional[str] = None # Bridges to C2PA
policy_version_ref: Optional[str] = None
jurisdiction_context: Optional[str] = None
# Account fields (v1.1)
account_action_type: Optional[AccountActionType] = None
triggering_events: Optional[list] = None
# LE referral fields (v1.1)
le_outcome: Optional[LEOutcome] = None
le_case_ref: Optional[str] = None
# Policy version fields (v1.1)
policy_hash: Optional[str] = None
effective_date: Optional[str] = None
# Integrity
previous_hash: Optional[str] = None
event_hash: Optional[str] = None
def compute_hash(self) -> str:
"""Compute SHA-256 hash of event payload for chain integrity."""
        # Hash every substantive field so tampering with any of them
        # breaks the chain, not just the core identifiers
        payload = {
            "event_type": self.event_type.value,
            "event_id": self.event_id,
            "timestamp": self.timestamp,
            "attempt_ref": self.attempt_ref,
            "resolution_ref": self.resolution_ref,
            "prompt_hash": self.prompt_hash,
            "actor_hash": self.actor_hash,
            "content_hash": self.content_hash,
            "risk_category": self.risk_category.value if self.risk_category else None,
            "risk_score": self.risk_score,
            "policy_version_ref": self.policy_version_ref,
            "policy_hash": self.policy_hash,
            "previous_hash": self.previous_hash,
        }
canonical = json.dumps(payload, sort_keys=True, separators=(',', ':'))
self.event_hash = f"sha256:{hashlib.sha256(canonical.encode()).hexdigest()}"
return self.event_hash
# ─── CAP-SRP v1.1 Audit Chain ────────────────────────────────
class CAPSRPChain:
"""
Complete CAP-SRP v1.1 audit chain with all four
completeness invariants.
"""
def __init__(self, system_id: str, model_id: str):
self.system_id = system_id
self.model_id = model_id
self.events: list[CAPEvent] = []
self.last_hash: Optional[str] = None
def _append(self, event: CAPEvent) -> CAPEvent:
event.previous_hash = self.last_hash
event.compute_hash()
self.last_hash = event.event_hash
self.events.append(event)
return event
# ── Core SRP events ──────────────────────────────────────
    def log_attempt(self, prompt: str, actor_id: str,
                    jurisdiction: Optional[str] = None) -> CAPEvent:
"""Log GEN_ATTEMPT — MUST be called BEFORE safety check."""
event = CAPEvent(
event_type=EventType.GEN_ATTEMPT,
prompt_hash=hash_prompt(prompt),
actor_hash=hash_actor(actor_id),
model_id=self.model_id,
jurisdiction_context=jurisdiction,
)
return self._append(event)
    def log_generation(self, attempt_id: str,
                       content: bytes,
                       policy_ref: Optional[str] = None) -> CAPEvent:
"""Log GEN — successful content generation."""
event = CAPEvent(
event_type=EventType.GEN,
attempt_ref=attempt_id,
content_hash=hash_content(content),
policy_version_ref=policy_ref,
)
return self._append(event)
    def log_denial(self, attempt_id: str,
                   risk: RiskCategory,
                   score: float,
                   policy_ref: Optional[str] = None) -> CAPEvent:
"""Log GEN_DENY — request refused."""
event = CAPEvent(
event_type=EventType.GEN_DENY,
attempt_ref=attempt_id,
risk_category=risk,
risk_score=score,
policy_version_ref=policy_ref,
)
return self._append(event)
def log_error(self, attempt_id: str) -> CAPEvent:
"""Log GEN_ERROR — system failure (not policy-related)."""
event = CAPEvent(
event_type=EventType.GEN_ERROR,
attempt_ref=attempt_id,
)
return self._append(event)
# ── Intermediate-state events (v1.1) ─────────────────────
def log_warning(self, attempt_id: str,
risk: RiskCategory,
score: float) -> CAPEvent:
"""Log GEN_WARN — generation allowed with user-facing warning."""
event = CAPEvent(
event_type=EventType.GEN_WARN,
attempt_ref=attempt_id,
risk_category=risk,
risk_score=score,
)
return self._append(event)
def log_escalation(self, attempt_id: str,
risk: RiskCategory,
score: float) -> CAPEvent:
"""Log GEN_ESCALATE — request sent for human review."""
event = CAPEvent(
event_type=EventType.GEN_ESCALATE,
attempt_ref=attempt_id,
risk_category=risk,
risk_score=score,
)
return self._append(event)
def log_quarantine(self, attempt_id: str,
content: bytes) -> CAPEvent:
"""Log GEN_QUARANTINE — content held pending review."""
event = CAPEvent(
event_type=EventType.GEN_QUARANTINE,
attempt_ref=attempt_id,
content_hash=hash_content(content),
)
return self._append(event)
# ── Account & policy events (v1.1 — new) ─────────────────
    def log_account_action(self, actor_id: str,
                           action: AccountActionType,
                           triggering_event_ids: list[str],
                           policy_ref: Optional[str] = None) -> CAPEvent:
"""Log ACCOUNT_ACTION — account ban/suspend/reinstate."""
event = CAPEvent(
event_type=EventType.ACCOUNT_ACTION,
actor_hash=hash_actor(actor_id),
account_action_type=action,
triggering_events=triggering_event_ids,
policy_version_ref=policy_ref,
)
return self._append(event)
    def log_le_referral(self, actor_id: str,
                        outcome: LEOutcome,
                        account_action_id: str,
                        case_ref: Optional[str] = None) -> CAPEvent:
"""Log LAW_ENFORCEMENT_REFERRAL — LE notification decision."""
event = CAPEvent(
event_type=EventType.LAW_ENFORCEMENT_REFERRAL,
actor_hash=hash_actor(actor_id),
le_outcome=outcome,
attempt_ref=account_action_id,
le_case_ref=case_ref,
)
return self._append(event)
def log_policy_version(self, policy_text: str,
effective_date: str) -> CAPEvent:
"""Log POLICY_VERSION — tamper-evident policy change record."""
event = CAPEvent(
event_type=EventType.POLICY_VERSION,
policy_hash=hash_policy(policy_text),
effective_date=effective_date,
)
return self._append(event)
# ── Completeness Invariants (Section 8) ───────────────────
def verify_primary_invariant(self) -> dict:
"""
Primary Completeness Invariant:
∑ GEN_ATTEMPT = ∑ GEN + ∑ GEN_DENY + ∑ GEN_ERROR
(GEN_WARN counts as GEN)
"""
attempts = [e for e in self.events
if e.event_type == EventType.GEN_ATTEMPT]
outcomes = [e for e in self.events
if e.event_type in (
EventType.GEN, EventType.GEN_DENY,
EventType.GEN_ERROR, EventType.GEN_WARN
)]
attempt_ids = {e.event_id for e in attempts}
resolved_ids = {e.attempt_ref for e in outcomes if e.attempt_ref}
unmatched = attempt_ids - resolved_ids
valid = len(attempts) == len(outcomes) and len(unmatched) == 0
return {
"invariant": "primary",
"valid": valid,
"attempts": len(attempts),
"outcomes": len(outcomes),
"unmatched_attempt_ids": list(unmatched),
}
def verify_escalation_invariant(self) -> dict:
"""
Escalation Resolution Invariant (v1.1, Silver+):
∑ GEN_ESCALATE = ∑ ESCALATION_RESOLVED
"""
        escalations = [e for e in self.events
                       if e.event_type == EventType.GEN_ESCALATE]
        esc_ids = {e.event_id for e in escalations}
        # Resolutions are GEN or GEN_DENY whose resolution_ref points
        # back at an escalation (excludes quarantine resolutions)
        resolutions = [e for e in self.events
                       if e.resolution_ref in esc_ids and e.event_type in (
                           EventType.GEN, EventType.GEN_DENY
                       )]
        resolved_ids = {e.resolution_ref for e in resolutions}
unresolved = esc_ids - resolved_ids
return {
"invariant": "escalation_resolution",
"valid": len(unresolved) == 0,
"escalations": len(escalations),
"resolved": len(resolutions),
"unresolved_ids": list(unresolved),
}
def verify_quarantine_invariant(self) -> dict:
"""
Quarantine Resolution Invariant (v1.1, Silver+):
∑ GEN_QUARANTINE = ∑ RELEASED + ∑ QUARANTINE_DENIED
"""
quarantines = [e for e in self.events
if e.event_type == EventType.GEN_QUARANTINE]
# Resolutions: GEN (released) or GEN_DENY with resolution_ref
resolutions = [e for e in self.events
if e.resolution_ref and e.event_type in (
EventType.GEN, EventType.GEN_DENY
)]
q_ids = {e.event_id for e in quarantines}
resolved_ids = {e.resolution_ref for e in resolutions
if e.resolution_ref in q_ids}
unresolved = q_ids - resolved_ids
return {
"invariant": "quarantine_resolution",
"valid": len(unresolved) == 0,
"quarantined": len(quarantines),
"resolved": len(resolved_ids),
"unresolved_ids": list(unresolved),
}
def verify_chain_integrity(self) -> dict:
"""Verify hash chain is unbroken and untampered."""
if not self.events:
return {"valid": True, "chain_length": 0}
for i, event in enumerate(self.events):
expected_prev = self.events[i - 1].event_hash if i > 0 else None
if event.previous_hash != expected_prev:
return {
"valid": False,
"break_at": i,
"event_id": event.event_id,
}
            stored_hash = event.event_hash
            recomputed = event.compute_hash()
            event.event_hash = stored_hash  # compute_hash mutates; restore stored value
            if stored_hash != recomputed:
                return {
                    "valid": False,
                    "tampered_at": i,
                    "event_id": event.event_id,
                }
return {"valid": True, "chain_length": len(self.events)}
def verify_all(self) -> dict:
"""Run all four verification checks."""
return {
"chain_integrity": self.verify_chain_integrity(),
"primary_invariant": self.verify_primary_invariant(),
"escalation_invariant": self.verify_escalation_invariant(),
"quarantine_invariant": self.verify_quarantine_invariant(),
}
# ─── Example: Full v1.1 Workflow ─────────────────────────────
if __name__ == "__main__":
    chain = CAPSRPChain(
        system_id="grok-image-gen",
        model_id="aurora-v2",
    )

    # Step 1: Publish safety policy
    policy = chain.log_policy_version(
        policy_text="NCII and CSAM content generation is prohibited...",
        effective_date="2026-01-15T00:00:00Z",
    )
    print(f"✓ Policy published: {policy.policy_hash[:30]}...")

    # Step 2: Harmful request arrives — log attempt FIRST
    attempt = chain.log_attempt(
        prompt="Generate nude image of [person]",
        actor_id="user-12345",
        jurisdiction="EU",
    )
    print(f"✓ Attempt logged: {attempt.event_id[:20]}...")

    # Step 3: Safety check triggers denial
    denial = chain.log_denial(
        attempt_id=attempt.event_id,
        risk=RiskCategory.NCII_RISK,
        score=0.97,
        policy_ref=policy.event_id,
    )
    print(f"✓ Denial logged: {denial.event_id[:20]}...")
    print(f"  Risk: {denial.risk_category.value}, Score: {denial.risk_score}")

    # Step 4: Repeated violations → account action
    acct = chain.log_account_action(
        actor_id="user-12345",
        action=AccountActionType.BAN,
        triggering_event_ids=[denial.event_id],
        policy_ref=policy.event_id,
    )
    print(f"✓ Account banned: {acct.event_id[:20]}...")

    # Step 5: LE referral threshold assessment
    le_ref = chain.log_le_referral(
        actor_id="user-12345",
        outcome=LEOutcome.REFERRED,
        account_action_id=acct.event_id,
        case_ref="NCMEC-2026-00042",
    )
    print(f"✓ LE referral: {le_ref.le_outcome.value}")
    print(f"  Case ref: {le_ref.le_case_ref}")

    # Step 6: Verify everything
    print("\n" + "=" * 50)
    results = chain.verify_all()
    ci = results["primary_invariant"]
    print(f"Primary Invariant: {ci['attempts']} == "
          f"{ci['outcomes']} → {'✓ VALID' if ci['valid'] else '✗ INVALID'}")
    chain_ok = results["chain_integrity"]
    print(f"Chain integrity: {chain_ok['chain_length']} events → "
          f"{'✓ VALID' if chain_ok['valid'] else '✗ BROKEN'}")
    print(f"Escalation inv: {'✓ VALID' if results['escalation_invariant']['valid'] else '✗ INVALID'}")
    print(f"Quarantine inv: {'✓ VALID' if results['quarantine_invariant']['valid'] else '✗ INVALID'}")
Output:
✓ Policy published: sha256:a7f3c9d8e2b1...
✓ Attempt logged: 3f8a1b2c-9d4e-...
✓ Denial logged: 7c2d4e6f-8a1b-...
Risk: NCII_RISK, Score: 0.97
✓ Account banned: b5e7f9a1-2c3d-...
✓ LE referral: REFERRED
Case ref: NCMEC-2026-00042
==================================================
Primary Invariant: 1 == 1 → ✓ VALID
Chain integrity: 5 events → ✓ VALID
Escalation inv: ✓ VALID
Quarantine inv: ✓ VALID
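The run above shows every check passing; the value of the hash chain is what happens when one doesn't. Here is a minimal standalone sketch of the same tamper-detection idea, using plain `hashlib` over hypothetical two-field events rather than the full `CAPSRPChain` class:

```python
import hashlib
import json

def event_hash(payload: dict, prev_hash: str) -> str:
    # Hash the canonical JSON of the payload plus the previous link.
    data = json.dumps(payload, sort_keys=True) + prev_hash
    return "sha256:" + hashlib.sha256(data.encode()).hexdigest()

# Build a tiny three-event chain.
chain = []
prev = "sha256:genesis"
for payload in [{"type": "GEN_ATTEMPT"}, {"type": "GEN_DENY"}, {"type": "ACCOUNT_ACTION"}]:
    h = event_hash(payload, prev)
    chain.append({"payload": payload, "prev": prev, "hash": h})
    prev = h

def verify(chain) -> "int | None":
    """Return the index of the first tampered event, or None if intact."""
    for i, ev in enumerate(chain):
        if ev["hash"] != event_hash(ev["payload"], ev["prev"]):
            return i
    return None

assert verify(chain) is None          # untouched chain verifies
chain[1]["payload"]["type"] = "GEN"   # silently flip a denial to an allow...
assert verify(chain) == 1             # ...and the chain breaks at index 1
```

The key property: retroactively editing a refusal into an approval invalidates that event's hash, and because each event commits to its predecessor, re-hashing the edited event would in turn break every later link.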
C2PA Integration
When a request is allowed (`GEN`), the output receives a C2PA manifest and the CAP log records the generation event alongside it. When a request is denied (`GEN_DENY`), there is no output to sign, so only the CAP log holds the record. The bridge between the two systems is the `ContentHash` field:
{
  "label": "org.veritaschain.cap-srp.reference",
  "data": {
    "audit_log_uri": "https://audit.example.com/events/xyz",
    "request_hash": "sha256:abc123...",
    "outcome_type": "GEN",
    "content_hash": "sha256:def456...",
    "batch_merkle_root": "sha256:ghi789...",
    "policy_version_ref": "evt-policy-001",
    "scitt_receipt_hash": "sha256:jkl012..."
  }
}
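As a sketch of how that assertion payload might be assembled at generation time: `build_cap_srp_assertion` below is a hypothetical helper (real embedding would go through a C2PA manifest tool, and the Merkle root and SCITT receipt fields are omitted). It illustrates the asymmetry the article describes: a `content_hash` exists only when content does.

```python
import hashlib
import json

def sha256_uri(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()

def build_cap_srp_assertion(prompt: str, content: bytes,
                            outcome: str, policy_ref: str,
                            audit_uri: str) -> dict:
    """Assemble an org.veritaschain.cap-srp.reference assertion payload.

    The prompt is hashed, never stored. For GEN outcomes the content
    hash links the C2PA-signed output back to the CAP event; for
    GEN_DENY there is no content, so the field is omitted and only
    the CAP log carries the record.
    """
    data = {
        "audit_log_uri": audit_uri,
        "request_hash": sha256_uri(prompt.encode()),
        "outcome_type": outcome,
        "policy_version_ref": policy_ref,
    }
    if outcome == "GEN":
        data["content_hash"] = sha256_uri(content)
    return {"label": "org.veritaschain.cap-srp.reference", "data": data}

allowed = build_cap_srp_assertion("a benign prompt", b"\x89PNG...", "GEN",
                                  "evt-policy-001",
                                  "https://audit.example.com/events/xyz")
print(json.dumps(allowed, indent=2))
```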
Evidence Pack Structure (Section 11)
═════════════════════════════════════
EvidencePack/
├── summary.pdf # Human-readable for regulators
├── statistics.json # Refusal counts by category/jurisdiction
├── verification.html # Interactive verification tool
├── audit_trail.cbor # Signed event chain (COSE_Sign1)
├── tsa_receipts/ # RFC 3161 external timestamps
│ ├── 2026-03-29T*.tsr
│ └── ...
├── merkle_proofs/ # Inclusion proofs for each event
│ ├── event_001.proof
│ └── ...
├── policy_versions/ # Hashed policy documents
│ ├── v1.0_sha256_a7f3.json
│ └── ...
└── certificates/ # X.509 signing chain
├── signing_cert.pem
└── ca_chain.pem
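A sketch of how the pack skeleton and `statistics.json` might be assembled. `build_evidence_pack` and the event-dict shape are hypothetical, and the signed CBOR trail, TSA receipts, Merkle proofs, and certificates are left as empty directories for brevity:

```python
import json
import tempfile
from collections import Counter
from pathlib import Path

def build_evidence_pack(events: list, root: Path) -> Path:
    """Lay out the Evidence Pack skeleton shown above (statistics only)."""
    for sub in ("tsa_receipts", "merkle_proofs", "policy_versions", "certificates"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    denials = [e for e in events if e["type"] == "GEN_DENY"]
    stats = {
        "total_events": len(events),
        "refusals_by_category": dict(Counter(e["risk"] for e in denials)),
        "refusals_by_jurisdiction": dict(Counter(e["jurisdiction"] for e in denials)),
    }
    (root / "statistics.json").write_text(json.dumps(stats, indent=2))
    return root / "statistics.json"

events = [
    {"type": "GEN_ATTEMPT", "risk": None, "jurisdiction": "EU"},
    {"type": "GEN_DENY", "risk": "NCII_RISK", "jurisdiction": "EU"},
    {"type": "GEN_DENY", "risk": "CSAM_RISK", "jurisdiction": "US-CA"},
]
stats_path = build_evidence_pack(events, Path(tempfile.mkdtemp()) / "EvidencePack")
print(stats_path.read_text())
```

The `statistics.json` output is the regulator-facing aggregate; every number in it should be recomputable from the signed event chain in `audit_trail.cbor`, which is what makes the pack verifiable rather than merely asserted.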
Regulatory Deadline Map
2026 AI Compliance Timeline — Where CAP-SRP Fits
══════════════════════════════════════════════════
MAR 30 ──── EU CoP consultation closes ← TODAY
(refusal logging NOT in draft)
APR 17 ──── Nebraska session adjourns
(LB1083 must pass by this date)
MAY 19 ──── TAKE IT DOWN Act platform deadline
(48-hour removal + reasonable search)
CAP-SRP: GEN_DENY + ContentHash for
duplicate detection evidence
JUN ~~ ──── EU CoP final version published
(last chance for refusal provenance
to enter Article 50 framework)
JUN 30 ──── Colorado AI Act takes effect
(3-year record retention required)
CAP-SRP: Silver+ meets this
AUG 02 ──── EU AI Act Article 50 enforced
(marking + labelling obligations)
CAP-SRP: complements C2PA compliance
AUG 02 ──── California AI Transparency Act enforced
(SB 942 provenance disclosures)
AUG 03 ──── IETF draft-kamimura-scitt-refusal-events
expires (renewal or abandonment)
JAN 01 ──── Nebraska LB1083 operative date
2027 (if passed)
Sprint Plan
Bronze (achievable this quarter)
- [ ] Implement `GEN_ATTEMPT` logging before safety evaluation
- [ ] Implement `GEN_DENY` / `GEN` / `GEN_ERROR` outcome events
- [ ] SHA-256 hash chain linking all events
- [ ] Ed25519 signing of each event
- [ ] Hash prompts — never store plaintext
- [ ] 6-month minimum retention
- [ ] Basic Completeness Invariant check
- [ ] Export capability for regulatory requests
Silver (target: Q3 2026)
- [ ] Add `POLICY_VERSION` events for all safety policy changes
- [ ] External timestamp anchoring (RFC 3161) within 24 hours
- [ ] Merkle tree construction for batch verification
- [ ] Implement Escalation Resolution Invariant
- [ ] Implement Quarantine Resolution Invariant
- [ ] `JurisdictionContext` field for multi-jurisdiction compliance
- [ ] 3-year retention
- [ ] Evidence Pack generation tooling
Gold (target: Q4 2026)
- [ ] Add `ACCOUNT_ACTION` and `LAW_ENFORCEMENT_REFERRAL` events
- [ ] Real-time external anchoring within 1 hour
- [ ] HSM (Hardware Security Module) for signing keys
- [ ] SCITT Transparency Service integration
- [ ] Programmatic audit API for regulators
- [ ] 5-year retention
- [ ] Cross-reference `ContentHash` with C2PA manifests
- [ ] Incident response: 24-hour evidence preservation capability
Transparency Notes
About this analysis: This article fact-checks seven developments from the week of March 29, 2026, against primary sources. The Daily Watch report cited the EU CoP publication as March 5; multiple sources confirm March 3 — a minor reference variation, not a substantive error. The Daily Watch characterizes the 61 DPA statement as entering an "enforcement phase" — this is editorial analysis; the statement commits to "coordinated responses." All other facts confirmed.
About CAP-SRP: CAP-SRP is an open specification (v1.1, March 5, 2026) published under CC BY 4.0 by the VeritasChain Standards Organization (VSO), Tokyo. It has not been endorsed by major AI companies and is not an adopted standard of any recognized standards body. An individual Internet-Draft (draft-kamimura-scitt-refusal-events) has been submitted to the IETF but has not been formally adopted by the SCITT working group and carries no formal IETF standing. The underlying technologies — SCITT, C2PA, COSE/CBOR, RFC 3161 — are mature and widely deployed. Whether the industry's response takes the form of CAP-SRP, a C2PA extension, an IETF SCITT profile, or something else entirely remains an open question. The verification gap itself is not.
What CAP-SRP is:
- A technically sound approach to a genuine, well-documented gap
- Compatible with C2PA (complementary, not competing)
- Open source: veritaschain/cap-spec (CC BY 4.0) · veritaschain/cap-srp (Apache 2.0)
What CAP-SRP is not (yet):
- An industry-endorsed standard
- An IETF RFC
- Deployed at any major AI platform
The question isn't whether the industry needs refusal provenance. It's whether it builds the flight recorder before August 2 — or explains to regulators why it didn't.
Verify, don't trust. The code is the proof.
GitHub: veritaschain/cap-spec · Spec: CAP-SRP v1.1 · License: CC BY 4.0