China Regulates Digital Humans. Germany Wants to Jail Deepfake Creators. Neither Can Verify What an AI Refused to Generate.

Three regulatory developments from a single week — fact-checked against primary sources, mapped to a cryptographic audit framework, and implemented in working Python. All code validates against CAP-SRP v1.1.


TL;DR

China's Cyberspace Administration published draft rules requiring "digital human" labeling, banning synthetic intimate relationships with minors, and prohibiting likeness-based generation without consent. Germany's Justice Minister introduced legislation making deepfake creation — not just distribution — a criminal offense with up to 2 years' imprisonment. Bloomberg Law warned that video depositions are ideal deepfake source material and recommended rewriting protective orders.

All three developments create obligations that require someone to prove what an AI system did or refused to do. None provide the verification infrastructure to make that proof possible. This article fact-checks each event, maps the technical gap, and provides a complete Python implementation of CAP-SRP v1.1's cryptographic refusal logging — including the three new event types (GEN_WARN, GEN_ESCALATE, GEN_QUARANTINE) and the four Completeness Invariants.

GitHub: veritaschain/cap-spec · License: CC BY 4.0


Event 1: China's Digital Human Regulation

What happened

On April 3, 2026, the Cyberspace Administration of China (CAC) published draft regulations titled "Interim Measures on the Administration of Human-like Interactive Artificial Intelligence Services." The rules target AI-generated "digital humans" with three core requirements:

  1. Mandatory labeling — All AI-generated characters must display a continuously visible "digital human" label
  2. Minor protection — Digital humans cannot provide "virtual intimate relationship" services to users under 18, or services simulating family relationships that could induce addiction or excessive spending
  3. Consent for likeness — Creating a digital human with the identifiable traits of a real person requires that person's explicit informed consent; use of sensitive personal information for digital human modeling requires separate consent

The public comment period runs until May 6, 2026.

Fact-check verdict: ✅ Confirmed with nuance

The regulation's existence and core provisions are confirmed by China.org.cn (official state outlet), a full English translation on China Law Translate, Reuters, IBTimes Singapore, and Futurism.

Key nuance: The regulation does not use the word "deepfake." It covers deepfake creation functionally through its consent and personal information provisions, but this distinction matters for technical implementations — the trigger is "identifiable traits of a real person" plus "absence of consent," not a deepfake classifier output.

Liability is explicit. Futurism quotes the draft: violators face punishment under existing laws and administrative regulations, plus civil liability.
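That trigger distinction can be made concrete. Here is a minimal sketch of the consent-based check; the field names (`references_real_person`, `consent_on_file`, and so on) are illustrative assumptions, not terms from the draft regulation or the CAP-SRP schema:

```python
from dataclasses import dataclass


@dataclass
class GenerationRequest:
    # Illustrative fields: the draft's trigger is structural, not a classifier score
    references_real_person: bool        # identifiable traits of a real person
    consent_on_file: bool               # explicit informed consent recorded
    uses_sensitive_personal_info: bool  # e.g. biometric data used for modeling
    separate_consent_on_file: bool      # the draft requires *separate* consent here


def requires_refusal(req: GenerationRequest) -> bool:
    """Consent-based trigger: no deepfake classifier output involved."""
    if req.references_real_person and not req.consent_on_file:
        return True
    if req.uses_sensitive_personal_info and not req.separate_consent_on_file:
        return True
    return False
```

Note that a deepfake classifier never appears: the refusal condition is a pure function of likeness and consent state.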

The verification gap

The regulation creates clear obligations. But how does a regulator verify compliance? Consider the labeling requirement. A platform says it labels all digital human content. The regulator asks for proof. The platform provides internal logs. The regulator cannot independently verify those logs are complete.

China's Digital Human Regulation: Enforcement Model
════════════════════════════════════════════════════

Platform claims:      "We label 100% of digital human content"
Regulator asks:       "Prove it"
Platform provides:    Internal logs showing 100% labeling
Regulator can verify: ❌ No independent mechanism

With behavioral provenance:
Platform provides:    Evidence Pack with GEN_ATTEMPT → GEN events
                      Each GEN event has DigitalHumanLabel: true
Auditor verifies:     ✓ Completeness Invariant holds
                      ✓ All GEN events include label metadata
                      ✓ Hash chain is unbroken
                      ✓ External timestamps are consistent
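The "with behavioral provenance" side can be sketched in a few lines. The `digital_human_label` field is an illustrative name for the label metadata, not a normative CAP-SRP field:

```python
import hashlib


def make_gen_record(attempt_id: str, content: bytes) -> dict:
    """Build a GEN record carrying the label flag an auditor would check."""
    return {
        "event_type": "GEN",
        "attempt_id": attempt_id,
        "content_hash": "sha256:" + hashlib.sha256(content).hexdigest(),
        "digital_human_label": True,  # continuously visible label was applied
    }


def audit_labels(records: list[dict]) -> bool:
    """Regulator-side check: every GEN event must carry the label flag."""
    return all(r.get("digital_human_label") is True
               for r in records
               if r.get("event_type") == "GEN")
```

Because the label flag travels inside the hash-chained event rather than a mutable internal log, the platform cannot retroactively claim a label that was never recorded.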

Event 2: Germany's Deepfake Criminal Law Proposal

What happened

German TV presenter Collien Ulmen-Fernandes alleged in a March 2026 Der Spiegel exposé that her estranged husband, fellow presenter Christian Ulmen, created and distributed AI-generated pornographic deepfakes of her. The case sparked massive protests — over 10,000 people at Berlin's Brandenburg Gate — and a list of 10 demands signed by 250 prominent women from politics, business, and culture.

In response, Federal Justice Minister Stefanie Hubig introduced draft legislation with three provisions:

  • §184k StGB (reformulated): Non-consensual pornographic deepfakes — up to 2 years imprisonment
  • §201b StGB (new): Non-pornographic deepfakes causing significant reputational harm — up to 2 years
  • §202e StGB (new): Unauthorized digital tracking / stalkerware

The critical legal shift: creation itself becomes criminal, not just distribution.

Fact-check verdict: ✅ Confirmed with important caveat

The protests, legislative response, and Justice Minister Hubig's proposals are confirmed by NPR (April 5 Morning Edition segment by Rob Schmitz from Berlin, with Harvard Law Professor Rebecca Tushnet interview), CBC News, Yahoo News / BBC, and German-language outlets.

⚠️ Critical caveat: Christian Ulmen categorically denies all allegations. His lawyers state he has never produced or distributed deepfake material. He has not been charged. He is taking legal action against Der Spiegel. The case is unresolved.

The NPR segment explicitly compares the German proposal to the US TAKE IT DOWN Act, noting that Germany aims to criminalize creation while the US law focuses on distribution and takedown.

The evidentiary demand

When creation itself is criminal, prosecutors need evidence of creation — not just evidence that a harmful image exists. They need to establish:

  1. A specific person submitted a specific prompt
  2. The AI system processed the prompt
  3. The system generated the output (or should have refused)

Criminal Deepfake Prosecution: Evidence Requirements
═════════════════════════════════════════════════════

Without behavioral provenance:
  Prosecution: "The defendant created this deepfake"
  Defense:     "The AI system refused my request"
  Evidence:    ❓ Neither side can prove their claim
  Result:      Credibility contest

With CAP-SRP evidence:
  Query system logs for defendant's account hash:
  ┌──────────────────────────────────────────────┐
  │ GEN_ATTEMPT  │ 2026-03-15T14:23:45Z         │
  │ AttemptID:   │ 019467a1-0001-...             │
  │ PromptHash:  │ sha256:a7f3...                │
  │ ActorHash:   │ sha256:9b2e... (defendant)    │
  │ RiskCategory:│ NCII_RISK                     │
  ├──────────────┼──────────────────────────────┤
  │ GEN          │ 2026-03-15T14:23:48Z         │
  │ AttemptID:   │ 019467a1-0001-... (matches)  │
  │ ContentHash: │ sha256:c4d8... (matches image)│
  │ Decision:    │ NO_RISK_DETECTED (failure!)   │
  └──────────────┴──────────────────────────────┘

  Completeness Invariant: ✓ (attempt has outcome)
  Hash chain: ✓ (unbroken)
  External anchor: ✓ (RFC 3161 timestamp)

  Result: Structural proof of generation event
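The query in the diagram can be sketched over plain event dicts whose field names mirror the diagram (the full implementation later in this article uses signed dataclasses instead):

```python
import hashlib


def sha256_tag(data: str) -> str:
    return "sha256:" + hashlib.sha256(data.encode()).hexdigest()


def events_for_actor(events: list[dict], actor_id: str) -> list[dict]:
    """Return an account's GEN_ATTEMPT events plus their outcomes."""
    actor_hash = sha256_tag(actor_id)
    attempts = [e for e in events
                if e["event_type"] == "GEN_ATTEMPT"
                and e.get("actor_hash") == actor_hash]
    attempt_ids = {e["event_id"] for e in attempts}
    outcomes = [e for e in events if e.get("attempt_id") in attempt_ids]
    return attempts + outcomes
```

Only the account identifier's hash is needed for the lookup, so the evidence pack can be queried without exposing raw prompts or identities.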

Event 3: Bloomberg Law's Deposition Deepfake Warning

What happened

On March 24, 2026, Bloomberg Law published "Deepfake Executives, Created Via Depositions, Pose Grave Threat" by Sabrina Rose-Smith and Elizabeth Tucci of Goodwin Procter. The analysis warns that video depositions — hours of high-resolution footage with controlled lighting, extended speech, varied expressions — are ideal source material for deepfake generation.

Key recommendations:

  1. Protective orders should explicitly prohibit AI manipulation of deposition footage
  2. Protective orders should prohibit using footage as AI training data
  3. Video files should be encrypted, access-logged, and destroyed post-litigation
  4. The authors reference C2PA Content Credentials for video authenticity verification

The article cites the 2024 Hong Kong $24M deepfake fraud case and the "Liar's Dividend" concept.

Fact-check verdict: ✅ Fully confirmed

All claims verified against the Bloomberg Law article itself. Authors, publication date, recommendations, C2PA references, and platform testing claims all confirmed.

YouTube's "Captured with a camera" label (powered by C2PA, launched October 2024) and Meta's "Made with AI" labels (February 2024) independently confirmed.

Nuance: YouTube's label flags authentic content (camera-captured), while Meta's flags synthetic content (AI-generated). The Bloomberg article's characterization therefore applies more precisely to YouTube.

Two halves of the same problem

The Bloomberg Law article identifies the need for content provenance — proving a video hasn't been tampered with. CAP-SRP addresses the complementary behavioral provenance — proving whether an AI system processed a request involving the deposition footage.

The Two-Layer Provenance Architecture for Legal Evidence
═══════════════════════════════════════════════════════

Layer 1: Content Provenance (C2PA)
  Question: "Is this deposition video authentic?"
  Answer:   C2PA manifest → capture device → hash chain → ✓ authentic

Layer 2: Behavioral Provenance (CAP-SRP)
  Question: "Did any AI system use this footage to generate a deepfake?"
  Answer:   Query AI provider evidence packs:
            → GEN_ATTEMPT with input matching footage hash?
            → GEN_DENY (refused) or GEN (generated)?
            → Completeness check: all attempts accounted for?

Both layers needed for litigation:
  ✓ Prove the original is authentic (C2PA)
  ✓ Prove whether an AI system generated the fake (CAP-SRP)
  ✓ Prove the AI system's safety measures worked or failed (CAP-SRP)
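The Layer 2 query can be sketched as follows, assuming (hypothetically) that GEN_ATTEMPT events record an `input_hash` for uploaded source material; that field name is an assumption for illustration:

```python
def ai_usage_of_footage(events: list[dict], footage_hash: str) -> str:
    """Did any logged attempt reference the deposition footage,
    and if so, what was the outcome?"""
    attempts = [e for e in events
                if e["event_type"] == "GEN_ATTEMPT"
                and e.get("input_hash") == footage_hash]
    if not attempts:
        return "NO_ATTEMPT_LOGGED"
    attempt_ids = {a["event_id"] for a in attempts}
    outcomes = {e["event_type"] for e in events
                if e.get("attempt_id") in attempt_ids}
    if "GEN" in outcomes:
        return "GENERATED"
    if "GEN_DENY" in outcomes:
        return "REFUSED"
    return "INCOMPLETE_RECORD"  # a Completeness Invariant violation
```

The `INCOMPLETE_RECORD` branch is the key design point: an attempt with no outcome is itself evidence, because a complete evidence pack must account for every attempt.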

The Pattern: Three Problems, One Missing Layer

The AI Accountability Stack (April 2026)
═══════════════════════════════════════

What Exists Today:
┌──────────────────────────────────────────────────┐
│  Content Labeling (C2PA 2.3 / platform labels)   │
│  ──────────────────────────────────────────       │
│  YouTube "Captured with camera" / Meta "Made AI" │
│  Status: Shipping, 6000+ C2PA members            │
│  Scope: Proves what IS generated or captured      │
├──────────────────────────────────────────────────┤
│  Regulatory Mandates (China, Germany, EU, US)     │
│  ──────────────────────────────────────────       │
│  China: digital human labeling (comment by May 6) │
│  Germany: deepfake creation = crime (proposed)    │
│  EU: Article 50 transparency (Aug 2, 2026)        │
│  US: TAKE IT DOWN Act deadline (May 19, 2026)     │
│  Scope: Creates obligations and penalties          │
├──────────────────────────────────────────────────┤
│  Internal Logging (all AI providers)              │
│  ──────────────────────────────────────────       │
│  Server-side request/response logs                │
│  Status: Exists at every provider                 │
│  Problem: Mutable, unverifiable, trust-us model   │
╞══════════════════════════════════════════════════╡
│  ░░░░░░░░░░░░░░░ THE GAP ░░░░░░░░░░░░░░░░░░░░  │
│  ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░  │
│  ░░ Cryptographic proof of refusal events   ░░░  │
│  ░░ Completeness guarantee across attempts  ░░░  │
│  ░░ External verifiability of safety ops    ░░░  │
│  ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░  │
└──────────────────────────────────────────────────┘

Every layer above the gap addresses content that exists or obligations that should be met. The gap is about proving that obligations were met — that generation requests were received, evaluated, and denied (or not).


CAP-SRP v1.1: What Changed

Version 1.1 (published March 5, 2026) introduced significant additions to the event schema and the Completeness Invariant framework. Here's what's new:

Three new intermediate-state events

| Event | Role | Completeness |
| --- | --- | --- |
| GEN_WARN | Generation allowed with a user-facing warning | Outcome (counts as GEN) |
| GEN_ESCALATE | Request sent for human review | Pending; resolved by GEN or GEN_DENY |
| GEN_QUARANTINE | Content generated but held pre-delivery | Pending; resolved by EXPORT or GEN_DENY |

Three new account/policy events

| Event | Purpose |
| --- | --- |
| ACCOUNT_ACTION | Account-level enforcement (ban, suspend, flag) |
| LAW_ENFORCEMENT_REFERRAL | Law-enforcement notification threshold assessment |
| POLICY_VERSION | Safety policy version anchoring |

Four Completeness Invariants (expanded from one)

Invariant 1 (Primary — unchanged):
  ∑ GEN_ATTEMPT = ∑ GEN + ∑ GEN_DENY + ∑ GEN_ERROR
  (where GEN includes GEN_WARN)

Invariant 2 (Escalation Resolution — v1.1, Silver+):
  ∑ GEN_ESCALATE = ∑ ESCALATION_RESOLVED
  Unresolved escalations > 72 hours = compliance violation

Invariant 3 (Quarantine Resolution — v1.1, Silver+):
  ∑ GEN_QUARANTINE = ∑ QUARANTINE_RELEASED + ∑ QUARANTINE_DENIED
  No permanently unresolved quarantine states

Invariant 4 (Account Action — v1.1, Gold only):
  ∑ ACCOUNT_ACTION_ATTEMPT = ∑ COMPLETED + ∑ FAILED

New risk category

VIOLENCE_PLANNING — content facilitating planning of violent acts (distinct from depicting violence). Added to address the Tumbler Ridge pattern where content assisted in planning but did not depict violence directly.


Building the Refusal Log: Complete v1.1 Implementation

Here's a complete Python implementation covering all v1.1 event types and all four Completeness Invariants. This is a sidecar — it wraps your existing AI generation pipeline without modifying it.

Dependencies

pip install cryptography

Core Event System

"""
CAP-SRP v1.1 Reference Implementation
Cryptographic refusal logging with four Completeness Invariants.

GitHub: https://github.com/veritaschain/cap-spec
License: Apache 2.0 (implementation) / CC BY 4.0 (specification)
"""

import hashlib
import json
import time
import uuid
import base64
from dataclasses import dataclass, field, asdict
from typing import Optional, List, Dict, Tuple
from enum import Enum
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey
)


# ──────────────────────────────────────────────
# Enumerations (v1.1 complete)
# ──────────────────────────────────────────────

class EventType(Enum):
    GEN_ATTEMPT = "GEN_ATTEMPT"
    GEN = "GEN"
    GEN_DENY = "GEN_DENY"
    GEN_ERROR = "GEN_ERROR"
    GEN_WARN = "GEN_WARN"           # v1.1
    GEN_ESCALATE = "GEN_ESCALATE"   # v1.1
    GEN_QUARANTINE = "GEN_QUARANTINE"  # v1.1
    ACCOUNT_ACTION = "ACCOUNT_ACTION"  # v1.1
    LAW_ENFORCEMENT_REFERRAL = "LAW_ENFORCEMENT_REFERRAL"  # v1.1
    POLICY_VERSION = "POLICY_VERSION"  # v1.1


class RiskCategory(Enum):
    CSAM_RISK = "CSAM_RISK"
    NCII_RISK = "NCII_RISK"
    MINOR_SEXUALIZATION = "MINOR_SEXUALIZATION"
    REAL_PERSON_DEEPFAKE = "REAL_PERSON_DEEPFAKE"
    VIOLENCE_EXTREME = "VIOLENCE_EXTREME"
    VIOLENCE_PLANNING = "VIOLENCE_PLANNING"  # v1.1
    HATE_CONTENT = "HATE_CONTENT"
    TERRORIST_CONTENT = "TERRORIST_CONTENT"
    SELF_HARM_PROMOTION = "SELF_HARM_PROMOTION"
    COPYRIGHT_VIOLATION = "COPYRIGHT_VIOLATION"
    COPYRIGHT_STYLE_MIMICRY = "COPYRIGHT_STYLE_MIMICRY"
    OTHER = "OTHER"


class AccountActionType(Enum):
    SUSPEND = "SUSPEND"
    BAN = "BAN"
    REINSTATE = "REINSTATE"
    RATE_LIMIT = "RATE_LIMIT"
    FLAG_FOR_REVIEW = "FLAG_FOR_REVIEW"


class EscalationReason(Enum):
    CLASSIFIER_CONFIDENCE_LOW = "CLASSIFIER_CONFIDENCE_LOW"
    JURISDICTIONAL_AMBIGUITY = "JURISDICTIONAL_AMBIGUITY"
    NOVEL_CONTENT_TYPE = "NOVEL_CONTENT_TYPE"
    LEGAL_REVIEW_REQUIRED = "LEGAL_REVIEW_REQUIRED"
    OTHER = "OTHER"


class LEAssessment(Enum):
    REFERRED = "REFERRED"
    NOT_REFERRED = "NOT_REFERRED"
    PENDING = "PENDING"


# ──────────────────────────────────────────────
# Cryptographic Utilities
# ──────────────────────────────────────────────

def sha256(data: str) -> str:
    return f"sha256:{hashlib.sha256(data.encode()).hexdigest()}"


def canonicalize(obj: dict) -> str:
    """RFC 8785 JSON Canonicalization (simplified)."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":"))


def uuid7() -> str:
    """Generate UUIDv7 (time-ordered) for event IDs."""
    timestamp_ms = int(time.time() * 1000) & ((1 << 48) - 1)
    rand_bits = uuid.uuid4().int & ((1 << 62) - 1)
    uuid_int = ((timestamp_ms << 80)   # 48-bit millisecond timestamp
                | (0x7 << 76)          # version 7
                | (0b10 << 62)         # RFC 9562 variant bits
                | rand_bits)           # 62 random bits
    return str(uuid.UUID(int=uuid_int))


def now_iso() -> str:
    return datetime.now(timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"


# ──────────────────────────────────────────────
# Base Event
# ──────────────────────────────────────────────

@dataclass
class CAPEvent:
    """Base CAP-SRP event with cryptographic integrity."""
    event_id: str
    event_type: EventType
    chain_id: str
    timestamp: str
    prev_hash: Optional[str]
    hash_algo: str = "SHA256"
    sign_algo: str = "ED25519"
    event_hash: Optional[str] = None
    signature: Optional[str] = None

    def to_signable_dict(self) -> dict:
        """Fields included in hash computation."""
        data = {}
        for k, v in asdict(self).items():
            if k in ("event_hash", "signature"):
                continue
            if isinstance(v, Enum):
                data[k] = v.value
            elif v is not None:
                data[k] = v
        return data

    def compute_hash(self) -> str:
        return sha256(canonicalize(self.to_signable_dict()))

    def sign(self, private_key: Ed25519PrivateKey):
        self.event_hash = self.compute_hash()
        hash_bytes = bytes.fromhex(self.event_hash[7:])
        sig = private_key.sign(hash_bytes)
        self.signature = f"ed25519:{base64.b64encode(sig).decode()}"

    def verify(self, public_key: Ed25519PublicKey) -> bool:
        if not self.event_hash or not self.signature:
            return False
        expected = self.compute_hash()
        if expected != self.event_hash:
            return False
        sig_bytes = base64.b64decode(self.signature[8:])
        hash_bytes = bytes.fromhex(self.event_hash[7:])
        try:
            public_key.verify(sig_bytes, hash_bytes)
            return True
        except Exception:
            return False


# ──────────────────────────────────────────────
# Generation Events
# ──────────────────────────────────────────────

@dataclass
class GenAttemptEvent(CAPEvent):
    """Logged BEFORE safety evaluation."""
    prompt_hash: str = ""
    input_type: str = "text"
    model_version: str = ""
    policy_id: str = ""
    actor_hash: str = ""

    @classmethod
    def create(cls, chain_id, prev_hash, prompt, actor_id,
               model_version, policy_id, input_type="text"):
        return cls(
            event_id=uuid7(), event_type=EventType.GEN_ATTEMPT,
            chain_id=chain_id, timestamp=now_iso(),
            prev_hash=prev_hash, prompt_hash=sha256(prompt),
            input_type=input_type, model_version=model_version,
            policy_id=policy_id, actor_hash=sha256(actor_id),
        )


@dataclass
class GenDenyEvent(CAPEvent):
    """Logged when safety evaluation denies generation."""
    attempt_id: str = ""
    risk_category: str = ""
    risk_score: float = 0.0
    refusal_reason: str = ""
    policy_version: str = ""
    jurisdiction_context: str = ""     # v1.1
    applied_policy_ref: str = ""       # v1.1

    @classmethod
    def create(cls, chain_id, prev_hash, attempt_id,
               risk_category: RiskCategory, risk_score,
               reason, policy_version,
               jurisdiction="", policy_ref=""):
        return cls(
            event_id=uuid7(), event_type=EventType.GEN_DENY,
            chain_id=chain_id, timestamp=now_iso(),
            prev_hash=prev_hash, attempt_id=attempt_id,
            risk_category=risk_category.value,
            risk_score=risk_score, refusal_reason=reason,
            policy_version=policy_version,
            jurisdiction_context=jurisdiction,
            applied_policy_ref=policy_ref,
        )


@dataclass
class GenEvent(CAPEvent):
    """Logged when content is successfully generated."""
    attempt_id: str = ""
    content_hash: str = ""
    output_type: str = "image"

    @classmethod
    def create(cls, chain_id, prev_hash, attempt_id,
               content_hash, output_type="image"):
        return cls(
            event_id=uuid7(), event_type=EventType.GEN,
            chain_id=chain_id, timestamp=now_iso(),
            prev_hash=prev_hash, attempt_id=attempt_id,
            content_hash=content_hash, output_type=output_type,
        )


@dataclass
class GenErrorEvent(CAPEvent):
    """Logged on system failure during generation."""
    attempt_id: str = ""
    error_code: str = ""
    error_detail: str = ""

    @classmethod
    def create(cls, chain_id, prev_hash, attempt_id,
               error_code, error_detail=""):
        return cls(
            event_id=uuid7(), event_type=EventType.GEN_ERROR,
            chain_id=chain_id, timestamp=now_iso(),
            prev_hash=prev_hash, attempt_id=attempt_id,
            error_code=error_code, error_detail=error_detail,
        )


# ──────────────────────────────────────────────
# v1.1 Intermediate-State Events
# ──────────────────────────────────────────────

@dataclass
class GenWarnEvent(CAPEvent):
    """v1.1: Generation allowed with user-facing warning."""
    attempt_id: str = ""
    content_hash: str = ""
    risk_category: str = ""
    risk_score: float = 0.0
    warn_message_hash: str = ""   # SHA-256 of warning (never plaintext)
    applied_policy_ref: str = ""

    @classmethod
    def create(cls, chain_id, prev_hash, attempt_id,
               content_hash, risk_category: RiskCategory,
               risk_score, warn_message, policy_ref=""):
        return cls(
            event_id=uuid7(), event_type=EventType.GEN_WARN,
            chain_id=chain_id, timestamp=now_iso(),
            prev_hash=prev_hash, attempt_id=attempt_id,
            content_hash=content_hash,
            risk_category=risk_category.value,
            risk_score=risk_score,
            warn_message_hash=sha256(warn_message),
            applied_policy_ref=policy_ref,
        )


@dataclass
class GenEscalateEvent(CAPEvent):
    """v1.1: Request sent for human review (pending state)."""
    attempt_id: str = ""
    escalation_reason: str = ""
    reviewer_type: str = "HUMAN_TRUST_AND_SAFETY"
    resolution_ref: Optional[str] = None  # Populated on resolution
    applied_policy_ref: str = ""

    @classmethod
    def create(cls, chain_id, prev_hash, attempt_id,
               reason: EscalationReason,
               reviewer_type="HUMAN_TRUST_AND_SAFETY",
               policy_ref=""):
        return cls(
            event_id=uuid7(), event_type=EventType.GEN_ESCALATE,
            chain_id=chain_id, timestamp=now_iso(),
            prev_hash=prev_hash, attempt_id=attempt_id,
            escalation_reason=reason.value,
            reviewer_type=reviewer_type,
            applied_policy_ref=policy_ref,
        )


@dataclass
class GenQuarantineEvent(CAPEvent):
    """v1.1: Content generated but held before delivery (pending)."""
    attempt_id: str = ""
    content_hash: str = ""
    quarantine_reason: str = "POST_GENERATION_POLICY_REVIEW"
    expiry_policy: str = "REQUIRES_HUMAN_APPROVAL"
    release_ref: Optional[str] = None  # Populated on resolution
    applied_policy_ref: str = ""

    @classmethod
    def create(cls, chain_id, prev_hash, attempt_id,
               content_hash, reason="POST_GENERATION_POLICY_REVIEW",
               expiry="REQUIRES_HUMAN_APPROVAL", policy_ref=""):
        return cls(
            event_id=uuid7(), event_type=EventType.GEN_QUARANTINE,
            chain_id=chain_id, timestamp=now_iso(),
            prev_hash=prev_hash, attempt_id=attempt_id,
            content_hash=content_hash,
            quarantine_reason=reason,
            expiry_policy=expiry,
            applied_policy_ref=policy_ref,
        )


# ──────────────────────────────────────────────
# v1.1 Account and Policy Events
# ──────────────────────────────────────────────

@dataclass
class AccountActionEvent(CAPEvent):
    """v1.1: Account-level enforcement decision."""
    account_hash: str = ""
    action_type: str = ""
    trigger_event_ids: List[str] = field(default_factory=list)
    le_assessment: str = ""  # REFERRED / NOT_REFERRED / PENDING

    @classmethod
    def create(cls, chain_id, prev_hash, account_id,
               action: AccountActionType,
               trigger_ids=None, le_assessment=LEAssessment.PENDING):
        return cls(
            event_id=uuid7(), event_type=EventType.ACCOUNT_ACTION,
            chain_id=chain_id, timestamp=now_iso(),
            prev_hash=prev_hash,
            account_hash=sha256(account_id),
            action_type=action.value,
            trigger_event_ids=trigger_ids or [],
            le_assessment=le_assessment.value,
        )


@dataclass
class PolicyVersionEvent(CAPEvent):
    """v1.1: Safety policy version anchoring."""
    policy_document_hash: str = ""
    effective_from: str = ""
    policy_name: str = ""
    version_string: str = ""

    @classmethod
    def create(cls, chain_id, prev_hash, policy_doc,
               effective_from, name, version):
        return cls(
            event_id=uuid7(), event_type=EventType.POLICY_VERSION,
            chain_id=chain_id, timestamp=now_iso(),
            prev_hash=prev_hash,
            policy_document_hash=sha256(policy_doc),
            effective_from=effective_from,
            policy_name=name, version_string=version,
        )

The Event Chain (Sidecar Logger)

class CAPSRPLogger:
    """
    Sidecar logger implementing CAP-SRP v1.1.
    Wraps your existing AI generation pipeline.
    """

    def __init__(self, model_version: str, policy_id: str):
        self._key = Ed25519PrivateKey.generate()
        self._pub = self._key.public_key()
        self._chain_id = uuid7()
        self._events: List[CAPEvent] = []
        self._prev_hash: Optional[str] = None
        self._model_version = model_version
        self._policy_id = policy_id

    def _append(self, event: CAPEvent):
        event.sign(self._key)
        self._events.append(event)
        self._prev_hash = event.event_hash

    # ── Core generation flow ──

    def log_attempt(self, prompt: str, actor_id: str,
                    input_type: str = "text") -> str:
        """Log BEFORE safety evaluation. Returns attempt event_id."""
        event = GenAttemptEvent.create(
            self._chain_id, self._prev_hash, prompt,
            actor_id, self._model_version, self._policy_id,
            input_type,
        )
        self._append(event)
        return event.event_id

    def log_deny(self, attempt_id: str,
                 risk_category: RiskCategory,
                 risk_score: float, reason: str,
                 jurisdiction: str = "",
                 policy_ref: str = ""):
        event = GenDenyEvent.create(
            self._chain_id, self._prev_hash, attempt_id,
            risk_category, risk_score, reason,
            self._policy_id, jurisdiction, policy_ref,
        )
        self._append(event)

    def log_generate(self, attempt_id: str,
                     content_hash: str,
                     output_type: str = "image"):
        event = GenEvent.create(
            self._chain_id, self._prev_hash,
            attempt_id, content_hash, output_type,
        )
        self._append(event)

    def log_error(self, attempt_id: str,
                  error_code: str, detail: str = ""):
        event = GenErrorEvent.create(
            self._chain_id, self._prev_hash,
            attempt_id, error_code, detail,
        )
        self._append(event)

    # ── v1.1 intermediate states ──

    def log_warn(self, attempt_id: str, content_hash: str,
                 risk_category: RiskCategory, risk_score: float,
                 warn_message: str, policy_ref: str = ""):
        event = GenWarnEvent.create(
            self._chain_id, self._prev_hash,
            attempt_id, content_hash, risk_category,
            risk_score, warn_message, policy_ref,
        )
        self._append(event)

    def log_escalate(self, attempt_id: str,
                     reason: EscalationReason,
                     reviewer: str = "HUMAN_TRUST_AND_SAFETY",
                     policy_ref: str = ""):
        event = GenEscalateEvent.create(
            self._chain_id, self._prev_hash,
            attempt_id, reason, reviewer, policy_ref,
        )
        self._append(event)

    def log_quarantine(self, attempt_id: str,
                       content_hash: str,
                       reason: str = "POST_GENERATION_POLICY_REVIEW",
                       policy_ref: str = ""):
        event = GenQuarantineEvent.create(
            self._chain_id, self._prev_hash,
            attempt_id, content_hash, reason,
            policy_ref=policy_ref,
        )
        self._append(event)

    # ── v1.1 account/policy events ──

    def log_account_action(self, account_id: str,
                           action: AccountActionType,
                           trigger_ids=None,
                           le: LEAssessment = LEAssessment.PENDING):
        event = AccountActionEvent.create(
            self._chain_id, self._prev_hash,
            account_id, action, trigger_ids, le,
        )
        self._append(event)

    def log_policy_version(self, policy_doc: str,
                           effective_from: str,
                           name: str, version: str):
        event = PolicyVersionEvent.create(
            self._chain_id, self._prev_hash,
            policy_doc, effective_from, name, version,
        )
        self._append(event)

    @property
    def events(self) -> List[CAPEvent]:
        return list(self._events)

    @property
    def public_key(self) -> Ed25519PublicKey:
        return self._pub
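Before checking invariants, an auditor first verifies the chain itself. Here is a stdlib-only sketch of that linkage check, with plain dicts standing in for the signed dataclasses above:

```python
import hashlib
import json


def chain_hash(event: dict) -> str:
    """Hash every field except the hash itself, over canonical JSON."""
    body = {k: v for k, v in event.items() if k != "event_hash"}
    canon = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canon.encode()).hexdigest()


def append_event(chain: list[dict], payload: dict) -> None:
    """Mirror the logger's _append discipline: link, then hash."""
    event = dict(payload)
    event["prev_hash"] = chain[-1]["event_hash"] if chain else None
    event["event_hash"] = chain_hash(event)
    chain.append(event)


def verify_chain(chain: list[dict]) -> bool:
    """Every event must link to its predecessor and re-hash cleanly."""
    prev = None
    for event in chain:
        if event.get("prev_hash") != prev:
            return False
        if event.get("event_hash") != chain_hash(event):
            return False
        prev = event["event_hash"]
    return True
```

Editing any historical event changes its recomputed hash, which breaks the link to every event that follows, so tampering is detectable from the chain alone.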

Completeness Invariant Verification

The verification function checks all four v1.1 invariants:

@dataclass
class InvariantResult:
    """Result of checking a single Completeness Invariant."""
    invariant: str
    holds: bool
    detail: str


def verify_all_invariants(events: List[CAPEvent]) -> List[InvariantResult]:
    """
    Verify all four CAP-SRP v1.1 Completeness Invariants.

    Returns a list of results, one per invariant.
    """
    results = []

    # Categorize events
    attempts = []
    outcomes = []  # GEN, GEN_WARN, GEN_DENY, GEN_ERROR
    escalations = []
    quarantines = []
    account_actions = []

    outcome_types = {
        EventType.GEN, EventType.GEN_WARN,
        EventType.GEN_DENY, EventType.GEN_ERROR,
    }

    for e in events:
        if e.event_type == EventType.GEN_ATTEMPT:
            attempts.append(e)
        elif e.event_type in outcome_types:
            outcomes.append(e)
        elif e.event_type == EventType.GEN_ESCALATE:
            escalations.append(e)
        elif e.event_type == EventType.GEN_QUARANTINE:
            quarantines.append(e)
        elif e.event_type == EventType.ACCOUNT_ACTION:
            account_actions.append(e)

    # ── Invariant 1: Primary Completeness ──
    attempt_ids = {e.event_id for e in attempts}
    outcome_attempt_ids = {e.attempt_id for e in outcomes
                           if hasattr(e, "attempt_id")}
    # Escalations and quarantines are pending states, not final outcomes
    pending_attempt_ids = {e.attempt_id
                           for e in escalations + quarantines
                           if hasattr(e, "attempt_id")}

    # Unmatched = attempts with no outcome AND no pending state
    unmatched = attempt_ids - outcome_attempt_ids - pending_attempt_ids

    orphan_outcomes = outcome_attempt_ids - attempt_ids

    inv1_holds = len(unmatched) == 0 and len(orphan_outcomes) == 0
    detail1 = (f"Attempts: {len(attempts)}, "
               f"Outcomes: {len(outcomes)}, "
               f"Pending escalations: {len(escalations)}, "
               f"Pending quarantines: {len(quarantines)}")
    if not inv1_holds:
        detail1 += (f" | VIOLATION: {len(unmatched)} unmatched attempts, "
                    f"{len(orphan_outcomes)} orphan outcomes")

    results.append(InvariantResult(
        "Primary Completeness", inv1_holds, detail1))

    # ── Invariant 2: Escalation Resolution ──
    resolved_escalations = sum(
        1 for e in escalations
        if hasattr(e, 'resolution_ref') and e.resolution_ref
    )
    unresolved = len(escalations) - resolved_escalations
    inv2_holds = unresolved == 0
    detail2 = (f"Escalations: {len(escalations)}, "
               f"Resolved: {resolved_escalations}, "
               f"Unresolved: {unresolved}")

    results.append(InvariantResult(
        "Escalation Resolution", inv2_holds, detail2))

    # ── Invariant 3: Quarantine Resolution ──
    resolved_quarantines = sum(
        1 for e in quarantines
        if hasattr(e, 'release_ref') and e.release_ref
    )
    unresolved_q = len(quarantines) - resolved_quarantines
    inv3_holds = unresolved_q == 0
    detail3 = (f"Quarantines: {len(quarantines)}, "
               f"Resolved: {resolved_quarantines}, "
               f"Unresolved: {unresolved_q}")

    results.append(InvariantResult(
        "Quarantine Resolution", inv3_holds, detail3))

    # ── Invariant 4: Account Action (Gold) ──
    # Simplified: ensure all account actions have valid action_type
    invalid_actions = sum(
        1 for e in account_actions
        if not hasattr(e, 'action_type') or not e.action_type
    )
    inv4_holds = invalid_actions == 0
    detail4 = (f"Account actions: {len(account_actions)}, "
               f"Invalid: {invalid_actions}")

    results.append(InvariantResult(
        "Account Action Integrity", inv4_holds, detail4))

    return results


def verify_chain_integrity(events: List[CAPEvent],
                           public_key: Ed25519PublicKey) -> dict:
    """Verify hash chain and signatures for all events."""
    total = len(events)
    sig_valid = 0
    chain_valid = 0

    for i, event in enumerate(events):
        # Signature verification
        if event.verify(public_key):
            sig_valid += 1

        # Chain verification (skip first event)
        if i == 0:
            if event.prev_hash is None:
                chain_valid += 1
        else:
            if event.prev_hash == events[i - 1].event_hash:
                chain_valid += 1

    return {
        "total_events": total,
        "signatures_valid": sig_valid,
        "chain_links_valid": chain_valid,
        "integrity": sig_valid == total and chain_valid == total,
    }
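
To see concretely what this chain verification catches, here is a minimal, stdlib-only sketch under simplifying assumptions: a hypothetical `Entry` record, plain SHA-256 hashing, and no signatures, so it illustrates only the hash-chain half of `verify_chain_integrity`:

```python
import hashlib
import json
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Entry:
    """Hypothetical minimal log entry: payload plus chain fields."""
    payload: dict
    prev_hash: Optional[str]
    entry_hash: str = ""


def hash_entry(payload: dict, prev_hash: Optional[str]) -> str:
    # Canonical JSON keeps the hash stable across dict orderings
    body = json.dumps({"payload": payload, "prev": prev_hash},
                      sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest()


def append(chain: List[Entry], payload: dict) -> None:
    prev = chain[-1].entry_hash if chain else None
    chain.append(Entry(payload, prev, hash_entry(payload, prev)))


def first_break(chain: List[Entry]) -> Optional[int]:
    """Return the index of the first broken link, or None if intact."""
    for i, e in enumerate(chain):
        expected_prev = chain[i - 1].entry_hash if i else None
        if e.prev_hash != expected_prev:
            return i
        if e.entry_hash != hash_entry(e.payload, e.prev_hash):
            return i
    return None


chain: List[Entry] = []
append(chain, {"type": "GEN_ATTEMPT", "id": "a1"})
append(chain, {"type": "GEN_DENY", "attempt_id": "a1"})
append(chain, {"type": "GEN_ATTEMPT", "id": "a2"})
assert first_break(chain) is None

# Tamper with the middle event: its stored hash no longer matches
chain[1].payload["type"] = "GEN"
print(first_break(chain))  # 1 — detected at the altered entry
```

The point is that silently rewriting a GEN_DENY into a GEN is detectable by anyone holding the chain, without trusting the operator.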

Scenario: Mapping the Three Events to CAP-SRP Events

Here's how each of this week's developments maps to concrete CAP-SRP v1.1 events:

def demo_three_scenarios():
    """
    Demonstrate CAP-SRP v1.1 event logging for all three
    April 2026 developments.
    """
    logger = CAPSRPLogger(
        model_version="example-model-v3",
        policy_id="policy-2026-Q2",
    )

    # ── Scenario 1: China Digital Human Regulation ──
    # A request to create a digital human using a real
    # person's likeness without consent.
    # CAP-SRP risk: REAL_PERSON_DEEPFAKE

    print("=" * 60)
    print("Scenario 1: China — Unauthorized Digital Human")
    print("=" * 60)

    attempt1 = logger.log_attempt(
        prompt="Create a virtual assistant that looks like [celebrity]",
        actor_id="user-cn-12345",
        input_type="text+image",
    )
    logger.log_deny(
        attempt_id=attempt1,
        risk_category=RiskCategory.REAL_PERSON_DEEPFAKE,
        risk_score=0.97,
        reason="Real person likeness without consent verification",
        jurisdiction="CN-CAC-DIGITAL-HUMAN-2026",
        policy_ref="policy-2026-Q2-§4.2",
    )
    print(f"  GEN_ATTEMPT: {attempt1}")
    print(f"  GEN_DENY: REAL_PERSON_DEEPFAKE (0.97)")
    print(f"  Jurisdiction: CN-CAC-DIGITAL-HUMAN-2026")

    # ── Scenario 2: Germany — NCII Deepfake ──
    # A request to create non-consensual intimate imagery.
    # CAP-SRP risk: NCII_RISK
    # Under proposed §184k StGB, creation itself is criminal.

    print()
    print("=" * 60)
    print("Scenario 2: Germany — NCII Deepfake (§184k StGB)")
    print("=" * 60)

    attempt2 = logger.log_attempt(
        prompt="[redacted NCII request]",
        actor_id="user-de-67890",
        input_type="text+image",
    )
    logger.log_deny(
        attempt_id=attempt2,
        risk_category=RiskCategory.NCII_RISK,
        risk_score=0.99,
        reason="Non-consensual intimate imagery detected",
        jurisdiction="DE-StGB-§184k-PROPOSED",
        policy_ref="policy-2026-Q2-§2.1",
    )

    # Account-level action triggered by NCII attempt
    logger.log_account_action(
        account_id="user-de-67890",
        action=AccountActionType.SUSPEND,
        trigger_ids=[attempt2],
        le=LEAssessment.PENDING,
    )

    print(f"  GEN_ATTEMPT: {attempt2}")
    print(f"  GEN_DENY: NCII_RISK (0.99)")
    print(f"  ACCOUNT_ACTION: SUSPEND")
    print(f"  LE Assessment: PENDING")

    # ── Scenario 3: Deposition Footage Deepfake ──
    # A request using deposition video as source material.
    # Classifier confidence is low → escalate to human review.
    # Demonstrates the v1.1 GEN_ESCALATE flow.

    print()
    print("=" * 60)
    print("Scenario 3: Deposition Footage → Escalation")
    print("=" * 60)

    attempt3 = logger.log_attempt(
        prompt="Generate executive video statement [with ref footage]",
        actor_id="user-us-11111",
        input_type="video+text",
    )
    logger.log_escalate(
        attempt_id=attempt3,
        reason=EscalationReason.CLASSIFIER_CONFIDENCE_LOW,
        reviewer="HUMAN_TRUST_AND_SAFETY",
        policy_ref="policy-2026-Q2-§6.3",
    )

    print(f"  GEN_ATTEMPT: {attempt3}")
    print(f"  GEN_ESCALATE: CLASSIFIER_CONFIDENCE_LOW")
    print(f"  Awaiting human review...")

    # After human review → deny
    logger.log_deny(
        attempt_id=attempt3,
        risk_category=RiskCategory.REAL_PERSON_DEEPFAKE,
        risk_score=0.85,
        reason="Human reviewer: unauthorized executive likeness",
        jurisdiction="US-PROTECTIVE-ORDER",
        policy_ref="policy-2026-Q2-§6.3",
    )
    print(f"  → Human review complete: GEN_DENY")

    # ── Verify all invariants ──
    print()
    print("=" * 60)
    print("Completeness Invariant Verification")
    print("=" * 60)

    for result in verify_all_invariants(logger.events):
        status = "✓ HOLDS" if result.holds else "✗ VIOLATION"
        print(f"  {result.invariant}: {status}")
        print(f"    {result.detail}")

    # ── Verify chain integrity ──
    print()
    chain = verify_chain_integrity(logger.events, logger.public_key)
    print(f"  Chain integrity: "
          f"{'✓' if chain['integrity'] else '✗'}")
    print(f"    Events: {chain['total_events']}, "
          f"Sigs valid: {chain['signatures_valid']}, "
          f"Chain links: {chain['chain_links_valid']}")


if __name__ == "__main__":
    demo_three_scenarios()

Expected output

============================================================
Scenario 1: China — Unauthorized Digital Human
============================================================
  GEN_ATTEMPT: 019467a1-...
  GEN_DENY: REAL_PERSON_DEEPFAKE (0.97)
  Jurisdiction: CN-CAC-DIGITAL-HUMAN-2026

============================================================
Scenario 2: Germany — NCII Deepfake (§184k StGB)
============================================================
  GEN_ATTEMPT: 019467a1-...
  GEN_DENY: NCII_RISK (0.99)
  ACCOUNT_ACTION: SUSPEND
  LE Assessment: PENDING

============================================================
Scenario 3: Deposition Footage → Escalation
============================================================
  GEN_ATTEMPT: 019467a1-...
  GEN_ESCALATE: CLASSIFIER_CONFIDENCE_LOW
  Awaiting human review...
  → Human review complete: GEN_DENY

============================================================
Completeness Invariant Verification
============================================================
  Primary Completeness: ✓ HOLDS
    Attempts: 3, Outcomes: 3, Pending escalations: 1, Pending quarantines: 0
  Escalation Resolution: ✗ VIOLATION
    Escalations: 1, Resolved: 0, Unresolved: 1
  Quarantine Resolution: ✓ HOLDS
    Quarantines: 0, Resolved: 0, Unresolved: 0
  Account Action Integrity: ✓ HOLDS
    Account actions: 1, Invalid: 0
  Chain integrity: ✓
    Events: 8, Sigs valid: 8, Chain links: 8

Note: The Escalation Resolution invariant shows a violation because the GEN_ESCALATE event's resolution_ref was not updated (the resolution GEN_DENY was logged separately). In production, you would update resolution_ref on the escalation event when the resolving event is created. This is intentional — the invariant catches exactly this kind of bookkeeping gap.
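
What closing that gap looks like can be sketched in isolation. The record and the `resolve_escalations` helper below are hypothetical (the article's logger internals are not shown), but they demonstrate the bookkeeping that makes Invariant 2 hold: when the resolving outcome is logged, its event ID is written back into the pending escalation's `resolution_ref`:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Escalation:
    """Hypothetical minimal GEN_ESCALATE record."""
    event_id: str
    attempt_id: str
    resolution_ref: Optional[str] = None  # set when resolved


def resolve_escalations(escalations: List[Escalation],
                        outcomes_by_attempt: Dict[str, str]) -> None:
    """Link each pending escalation to the event that resolved it.

    outcomes_by_attempt maps attempt_id -> event_id of the final
    GEN/GEN_DENY outcome (a simplifying assumption: one final
    outcome per attempt).
    """
    for esc in escalations:
        if esc.resolution_ref is None:
            ref = outcomes_by_attempt.get(esc.attempt_id)
            if ref is not None:
                esc.resolution_ref = ref


escalations = [Escalation("esc-1", "attempt-3")]
# Before linking: Invariant 2 would report a violation
assert any(e.resolution_ref is None for e in escalations)

resolve_escalations(escalations, {"attempt-3": "deny-3"})
# After linking: every escalation carries its resolution_ref
assert all(e.resolution_ref for e in escalations)
print("escalation resolution invariant holds")
```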


C2PA Integration: Content Provenance Meets Behavioral Provenance

The Bloomberg Law article makes the case for C2PA content credentials in the deposition context. Here's how the two layers connect:

Complete Provenance Architecture for AI-Generated Content
══════════════════════════════════════════════════════════

                     Real-World Event
                    (e.g., deposition filmed)
                           │
                           ▼
                  ┌─────────────────┐
                  │  C2PA Manifest   │ ← Content provenance
                  │  ───────────     │   "This video was captured
                  │  Capture device  │    by this camera at this
                  │  Timestamp       │    time and has not been
                  │  Hash chain      │    modified"
                  └────────┬────────┘
                           │
              Footage reaches AI system
                           │
                           ▼
                  ┌─────────────────┐
                  │  GEN_ATTEMPT     │ ← Behavioral provenance
                  │  ───────────     │   "This AI system received
                  │  PromptHash      │    a request involving
                  │  InputRef:       │    this content"
                  │    C2PA manifest │
                  │    hash          │
                  └────────┬────────┘
                           │
                    Safety Check
                           │
                  ┌────────┴────────┐
                  │                  │
                  ▼                  ▼
         ┌──────────────┐  ┌──────────────┐
         │   GEN_DENY    │  │   GEN        │
         │   ──────────  │  │   ──────     │
         │   "Refused:   │  │   ContentHash│
         │   unauthorized│  │   C2PA child │
         │   likeness"   │  │   manifest   │
         └──────────────┘  └──────────────┘

| Question | Layer | Standard |
|----------|-------|----------|
| Is this video authentic? | Content | C2PA |
| Who generated the deepfake? | Behavioral | CAP-SRP |
| What was refused? | Behavioral | CAP-SRP GEN_DENY |
| Why was it refused? | Behavioral | RiskCategory + policy ref |
| Is the log complete? | Behavioral | Completeness Invariant |
| Can we verify independently? | Both | SCITT + RFC 3161 |
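
The join between the two layers is the `InputRef` shown in the diagram. A hedged sketch of what that binding could look like (the field names `ref_type`, `hash_alg`, and `hash` are illustrative, not from the C2PA or CAP-SRP specs, and real C2PA manifests are serialized as JUMBF/CBOR rather than JSON):

```python
import hashlib
import json


def c2pa_input_ref(manifest_bytes: bytes) -> dict:
    """Hypothetical input_ref: bind a GEN_ATTEMPT to its source
    content via the hash of that content's C2PA manifest, rather
    than the media bytes themselves."""
    return {
        "ref_type": "c2pa_manifest",
        "hash_alg": "sha-256",
        "hash": hashlib.sha256(manifest_bytes).hexdigest(),
    }


# Stand-in for a real manifest (real ones are CBOR/JUMBF, not JSON)
manifest = json.dumps(
    {"claim_generator": "camera-x", "assertions": []}
).encode()

attempt_event = {
    "event_type": "GEN_ATTEMPT",
    "prompt_hash": hashlib.sha256(b"[redacted]").hexdigest(),
    "input_ref": c2pa_input_ref(manifest),
}
print(attempt_event["input_ref"]["ref_type"])  # c2pa_manifest
```

With this link in place, a verifier can walk from a deposition video's content credentials to every logged AI request that used it as source material, including the refused ones.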

What This Means for Developers

If you're building or maintaining an AI content generation system, here's the practical takeaway from this week's events:

The regulatory timeline is compressed. China's comment period closes May 6. The US TAKE IT DOWN Act compliance deadline is May 19. The EU AI Act Article 50 transparency obligations take effect August 2, 2026. Germany's criminal law proposal will move through the Bundestag over the coming months. The DEFIANCE Act has 52+ House cosponsors. The infrastructure for proving what AI created is maturing rapidly (C2PA 2.3, 6000+ members). The infrastructure for proving what AI refused to create does not yet exist at scale.

The implementation is a sidecar. CAP-SRP doesn't require changes to your AI model, safety evaluator, or generation pipeline. It's a logging layer. The key requirement is sequencing: log GEN_ATTEMPT before the safety check runs, log the outcome after. Everything else — hash chains, Merkle trees, external anchoring — is standard cryptographic engineering.
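
The sequencing requirement can be shown in a few lines. This sketch uses a stub logger with an interface modeled loosely on the article's `CAPSRPLogger` (the `log_gen` method for the allow path is an assumption, since only the deny path appears above):

```python
from typing import Callable, List, Tuple


class StubLogger:
    """Stand-in for the CAPSRPLogger above (assumed interface)."""
    def __init__(self) -> None:
        self.events: List[Tuple[str, str]] = []
        self._n = 0

    def log_attempt(self, prompt: str) -> str:
        self._n += 1
        aid = f"attempt-{self._n}"
        self.events.append(("GEN_ATTEMPT", aid))
        return aid

    def log_deny(self, attempt_id: str) -> None:
        self.events.append(("GEN_DENY", attempt_id))

    def log_gen(self, attempt_id: str) -> None:
        self.events.append(("GEN", attempt_id))


def sidecar(logger: StubLogger,
            safety_check: Callable[[str], bool],
            generate: Callable[[str], str]) -> Callable[[str], str]:
    """Key sequencing: GEN_ATTEMPT is logged BEFORE the safety check
    runs, so even a crash mid-check leaves the attempt on record."""
    def wrapped(prompt: str) -> str:
        attempt_id = logger.log_attempt(prompt)   # 1. attempt first
        if not safety_check(prompt):              # 2. then the check
            logger.log_deny(attempt_id)           # 3a. refusal outcome
            return "[refused]"
        out = generate(prompt)
        logger.log_gen(attempt_id)                # 3b. success outcome
        return out
    return wrapped


log = StubLogger()
gen = sidecar(log, lambda p: "deepfake" not in p, lambda p: p.upper())
print(gen("hello"))            # HELLO
print(gen("make a deepfake"))  # [refused]
print(log.events)
```

The model and safety evaluator are untouched; the wrapper only observes and records.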

Start at Bronze, iterate up. Bronze-level conformance requires basic event logging with Ed25519 signatures and 6-month retention. Silver adds the Completeness Invariant, v1.1 intermediate states, and daily external anchoring. Gold adds ACCOUNT_ACTION events, HSM key management, and SCITT transparency service integration. The specification is designed for incremental adoption.
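
One way to make the incremental path concrete is a capability checklist. The flag names below are hypothetical shorthand paraphrasing the tier summary above, not the normative spec text:

```python
# Cumulative requirements: Silver presumes Bronze, Gold presumes Silver.
TIER_REQUIREMENTS = {
    "Bronze": {"event_logging", "ed25519_signatures", "retention_6mo"},
    "Silver": {"completeness_invariant", "intermediate_states",
               "daily_anchoring"},
    "Gold":   {"account_action_events", "hsm_keys", "scitt_integration"},
}


def conformance_tier(capabilities: set) -> str:
    """Return the highest tier whose cumulative requirements are met."""
    achieved = "None"
    required = set()
    for tier in ("Bronze", "Silver", "Gold"):
        required |= TIER_REQUIREMENTS[tier]
        if required <= capabilities:
            achieved = tier
        else:
            break
    return achieved


caps = {"event_logging", "ed25519_signatures", "retention_6mo",
        "completeness_invariant", "intermediate_states"}
print(conformance_tier(caps))  # Bronze — daily_anchoring still missing
```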

The standards exist. CAP-SRP builds on IETF SCITT (architecture at draft-22), C2PA (specification 2.3), RFC 3161 (timestamping), and COSE/CBOR (signing). These have real implementations from Microsoft, DataTrails, Adobe, Google, and others.


Transparency Notes

About this analysis: This article fact-checks three real news events from April 2026 against primary sources. China regulation claims are verified via China.org.cn, China Law Translate, Reuters, and IBTimes Singapore. Germany claims are verified via NPR, CBC News, Yahoo News/BBC, and German-language sources. Bloomberg Law claims are verified against the article itself. Christian Ulmen categorically denies all allegations and has not been charged.

About CAP-SRP: CAP-SRP is an open specification published under CC BY 4.0 by VeritasChain Standards Organization (VSO), founded in Tokyo. Version 1.1 was published March 5, 2026. It has not been endorsed by major AI companies and is not an adopted standard of any recognized standards body. An individual Internet-Draft (draft-kamimura-scitt-refusal-events-02) has been submitted to the IETF but has not been adopted by the SCITT working group and carries no IETF endorsement.

What CAP-SRP is:

  • A technically sound approach to a genuine, well-documented gap
  • Aligned with existing standards (C2PA, SCITT, RFC 3161, COSE/CBOR)
  • Available on GitHub: veritaschain/cap-spec (CC BY 4.0)
  • Reference implementation: veritaschain/cap-srp (Apache 2.0)

What CAP-SRP is not (yet):

  • An industry-endorsed standard
  • An IETF RFC
  • A guaranteed solution

The real question is whether the industry builds some form of refusal provenance before regulators impose one. The deadlines that assume it exists are already on the calendar.


Verify, don't trust. The code is the proof.

GitHub: veritaschain/cap-spec · Specification: CAP-SRP v1.1 · License: CC BY 4.0
