DEV Community

CrisisCore-Systems

Architecting for Vulnerability: Introducing Protective Computing Core v1.0

Most software is built on a dangerous premise: the Stability Assumption.

We assume the user has a stable network, stable cognitive capacity, a secure physical environment, and institutional trust. When those conditions hold, modern cloud-native architecture works beautifully.

But when users enter a vulnerability state—whether due to natural disaster, cognitive overload, physical displacement, or coercive threats—the Stability Assumption collapses. Cloud-dependent apps lock users out of their own data. "Helpful" auto-sync features broadcast location data from compromised networks. Irreversible state changes occur when the user lacks the cognitive bandwidth to understand the confirmation modals.

We need a systems-engineering discipline for designing software under conditions of human vulnerability.

Today, I am open-sourcing Protective Computing Core v1.0.

What is Protective Computing?

Protective Computing is not a privacy manifesto. It is a strict, testable engineering discipline. It provides a formal vocabulary and a pattern library for building systems that degrade safely, contain failures locally, and defend user agency under asymmetric power conditions.

The v1.0 Core Specification introduces numbered, testable requirements (PC-REQ) and an explicit conformance model. It is built around four architectural pillars:

1. Local Authority Pattern

The system MUST preserve user authority over locally stored critical data in the absence of network connectivity. Network transport is treated as an optional enhancement, not a dependency for Essential Utility.
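In code, this pattern boils down to "local commit first, network as best-effort." A minimal sketch, where the store, record shape, and `tryRemoteSync` helper are illustrative stand-ins rather than spec-mandated names:

```javascript
// Sketch: the local write alone satisfies Essential Utility;
// network transport is an optional enhancement, never a dependency.
const localStore = new Map();

async function saveEntry(id, entry) {
  // 1. Commit to local authority first.
  localStore.set(id, { ...entry, savedAt: Date.now() });

  // 2. Attempt sync opportunistically; failure is non-fatal.
  try {
    await tryRemoteSync(id, entry);
  } catch {
    // Offline or sync error: the local commit above already succeeded.
  }
  return localStore.get(id);
}

async function tryRemoteSync(id, entry) {
  throw new Error("network unavailable"); // simulate a dead network
}
```

The inversion matters: most SaaS code treats the remote write as the commit and the local copy as a cache. Here the roles are swapped.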

2. Exposure Surface Minimization

The system MUST NOT increase its exposure surface during crisis-state escalation. Analytics, third-party telemetry, and remote logging are default-off and hard-gated.
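A sketch of the hard gate, assuming a simple settings object (the names here are illustrative): the key property is that crisis-state escalation can only close the gate, never open it.

```javascript
// Sketch: telemetry is default-off, and entering a crisis state
// can only narrow the exposure surface, never widen it.
const exposure = {
  telemetryEnabled: false, // default-off
  crisisState: false,
};

function enterCrisisState() {
  exposure.crisisState = true;
  exposure.telemetryEnabled = false; // escalation forces the gate shut
}

function emitTelemetry(event) {
  if (!exposure.telemetryEnabled || exposure.crisisState) {
    return false; // dropped locally; nothing leaves the device
  }
  // ...transport would happen here, only in the non-crisis, opted-in case...
  return true;
}
```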

3. Reversible State Pattern

The system MUST NOT introduce irreversible state transitions during declared vulnerability states unless explicitly confirmed. High-impact destructive actions require bounded restoration windows (where security invariants allow).

4. Explicit Degradation Modes

The system cannot just "go offline." It MUST define explicit degradation modes (e.g., Connectivity Degradation, Cognitive Degradation, Institutional Latency) and map how Essential Utility is preserved in each state.
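One way to make degradation modes explicit is a static capability map. The mode names below come from this post; the capability lists are assumptions for illustration, not the spec's:

```javascript
// Sketch: each declared degradation mode maps to the capabilities
// that remain available (preserving Essential Utility) and those
// that are suspended.
const DEGRADATION_MODES = Object.freeze({
  CONNECTIVITY: {
    preserved: ["local_read", "local_write", "local_unlock"],
    suspended: ["sync", "remote_backup"],
  },
  COGNITIVE: {
    preserved: ["local_read", "local_write"],
    suspended: ["destructive_actions"], // irreversible ops gated off
  },
  INSTITUTIONAL_LATENCY: {
    preserved: ["local_read", "local_write", "export"],
    suspended: ["third_party_integrations"],
  },
});

function isPreserved(mode, capability) {
  return DEGRADATION_MODES[mode].preserved.includes(capability);
}
```

Making the map a frozen, reviewable artifact (rather than scattered `if (offline)` checks) is what turns "degraded behavior" into a testable requirement.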

The Reference Implementation: PainTracker

To prove these patterns are implementable in standard web technologies, I built a reference implementation: PainTracker.ca.

It is an offline-first Progressive Web App (PWA) designed for users tracking chronic health data—a highly sensitive payload often logged during states of high cognitive or physical distress.

Instead of a traditional SaaS architecture, PainTracker implements Protective Computing Core v1.0 through:

  • Encrypted IndexedDB Persistence: The primary database lives entirely on the device.
  • Zero-Knowledge Vault Gating: A local security boundary that does not rely on a remote auth server.
  • Unlock-Only Bounded Reversibility: A pending-wipe window for destructive actions that can only be aborted by a successful cryptographic unlock, preserving brute-force resistance while protecting against accidental deletion.
  • Hard Telemetry Gating: A verifiable kill-switch for all outbound network requests not explicitly initiated by the user.

To see what this looks like in practice, consider the Reversible State Pattern applied to our vault kill-switch.

Standard security dictates that after N failed unlock attempts, a local vault should wipe. But under cognitive overload (a defined vulnerability state), users mistype passwords, and an immediate wipe turns a typo into irreversible data loss. Yet a generic "Cancel" button would destroy the very brute-force resistance the wipe exists to provide.

Protective Computing requires a bounded restoration window that does not weaken the security invariant. Here is how PainTracker implements it:

// Bounded Reversibility under Asymmetric Power Defense
async function handleFailedUnlock() {
  failedAttempts++;

  if (failedAttempts >= MAX_FAILED_UNLOCK_ATTEMPTS && privacySettings.vaultKillSwitchEnabled) {
    // 1. Enter a bounded degradation state
    // 2. Disclose the pending irreversible action
    // 3. ONLY a successful cryptographic unlock can abort the timer

    await enterPendingWipeState({
      windowMs: 10_000,
      reason: "failed_unlock_threshold",
      onExpire: () => executeEmergencyWipe()
    });

    UI.showWarning("Vault will wipe in 10s. Enter correct passphrase to abort.");
  }
}

Notice the architectural constraint: There is no cancelWipe() function exposed to the UI. The only path to reversibility is proving local authority.

stateDiagram-v2
    [*] --> Normal
    Normal --> PendingWipe: N failed unlocks & kill-switch enabled
    PendingWipe --> Normal: successful unlock within window
    PendingWipe --> Wiped: window expired
    Wiped --> [*]
    note right of PendingWipe: user sees warning UI
    note right of Normal: regular operation
    note right of Wiped: data erased

Measuring Posture: The Protective Legitimacy Score (PLS)

In this space, marketing claims like "military-grade encryption" or "secure by design" are useless. Engineers and regulators need auditable transparency.

Alongside the Core spec, we are publishing the Protective Legitimacy Score (PLS) Disclosure Template. PLS is a structured transparency instrument—not a certification. It forces maintainers to explicitly declare their assumed vulnerability conditions, their architectural tiers across five dimensions, and their deviation register.
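A PLS disclosure could be published as structured data along these lines. Note the heavy caveat: the five dimension names and field names below are hypothetical placeholders invented for illustration; the authoritative schema is the published Disclosure Template.

```javascript
// Hypothetical sketch of a PLS disclosure as machine-readable data.
// Dimension names and fields are placeholders, not the real template.
const plsDisclosure = Object.freeze({
  version: "1.0.0",
  assumedVulnerabilityConditions: ["connectivity_loss", "cognitive_overload"],
  // Architectural tier (0-3) declared per dimension -- five in total.
  tiers: {
    localAuthority: 3,
    exposureMinimization: 3,
    reversibility: 2,
    degradationModes: 3,
    transparency: 2,
  },
  // Every deviation from a PC-REQ is declared, not hidden.
  deviationRegister: [
    { requirement: "PC-REQ (placeholder)", deviation: "(declared gap)", rationale: "(why)" },
  ],
});
```

The point of the structure is auditability: a reviewer can diff two releases' disclosures instead of parsing marketing copy.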

PainTracker's complete PLS v1.0.0 disclosure, cross-referenced against the Core spec requirements, is public in the repository.

The Call for Reference Implementation B

PainTracker proves the discipline works for locally held, highly sensitive health data. But Protective Computing is domain-agnostic.

These patterns are exactly what is needed for:

  • Disaster-response cache applications
  • Coercion-resistant messaging interfaces
  • Offline-first journalistic tooling

I am inviting engineers and systems architects to review the Protective Computing Core v1.0 Spec, challenge the normative requirements, and collaborate on building Reference Implementation B.

If we change the architectural defaults, we can stop building software that breaks exactly when people need it most.
