From Stochastic Trust to Deterministic Human Intent in Hostile Build Environments
Introduction: The Assumption That Keeps Failing
Modern CI/CD pipelines are built on a deceptively simple assumption:
If an action originates from a valid session token, it must originate from valid human intent.
This assumption feels intuitive. Engineers authenticate using SSO, receive session tokens, and those tokens authorize deployments to production. If the token is valid and the user has the correct role, the system proceeds.
SolarWinds and Codecov demonstrated that this assumption is false in practice; Log4Shell showed the same blindness one layer down, where trusted components execute exactly as designed while doing harm.
In the SolarWinds and Codecov incidents, systems behaved “correctly” from an authorization perspective:
- Credentials were valid
- Tokens were legitimate
- Pipelines executed as designed
And yet, catastrophic outcomes occurred.
This article introduces what I call the Intent-Verification Gap: the structural failure of modern CI/CD security models to distinguish possession of credentials from conscious human intent. This gap is not theoretical — it is the attack surface exploited by real-world Advanced Persistent Threats (APTs).
The Stochastic Trust Model of CI/CD
Most CI/CD pipelines operate under what can be described as a stochastic trust model:
- A user authenticates at some point in time
- A session token persists for hours
- Actions taken during that window are assumed to reflect ongoing user intent
This model is probabilistic. It assumes that during the lifetime of the token, the user remains in control of their device, network, and execution environment.
This assumption fails under modern threat models.
Once malware compromises the endpoint, the system cannot distinguish between:
- A human intentionally deploying code
- Malware using the same token to deploy malicious artifacts
From the perspective of the pipeline, both are indistinguishable. The signature is valid. The role is correct. The authorization check passes.
This is not a bug in implementation.
It is a flaw in the trust model itself.
Authentication ≠ Intent
In security terminology, we separate:
- Authentication (AuthN): Who are you?
- Authorization (AuthZ): Are you allowed to do this?
But neither AuthN nor AuthZ answers a third, more important question:
Did the human consciously intend to perform this specific action at this specific moment?
In most pipelines, the sequence is:
- Identity Assertion (SSO / token)
- Privilege Check (DEPLOY_PROD role)
- Execution (Production changes)
This sequence proves authority.
It does not prove intent.
If malware executes a deployment using a cached token, the system functions “correctly” while failing catastrophically from a security perspective.
This is the Intent-Verification Gap.
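The gap can be made concrete with a small sketch (the function names, token shape, and TTL are invented for illustration, not a real API): a session token minted once at login satisfies both the AuthN and AuthZ checks for hours afterward, regardless of who, or what, presents it.

```python
import secrets
import time

# Hypothetical sketch of session-based deploy authorization.
SESSION_TTL = 8 * 3600  # token stays valid for a full workday

def mint_token(user, roles):
    """Issued once at SSO login; reused for every later action."""
    return {"user": user, "roles": set(roles),
            "iat": time.time(), "id": secrets.token_hex(8)}

def authorize_deploy(token, artifact):
    # AuthN: is the token still inside its validity window?
    if time.time() - token["iat"] > SESSION_TTL:
        return False
    # AuthZ: does the bearer hold the right role?
    # Nothing here inspects `artifact` or asks whether a human
    # approved *this* deployment -- that is the Intent-Verification Gap.
    return "DEPLOY_PROD" in token["roles"]

token = mint_token("alice", ["DEPLOY_PROD"])

# A deliberate human deployment and a malware replay of the cached
# token are indistinguishable: both pass the identical checks.
assert authorize_deploy(token, "release-1.4.2.tar.gz")
assert authorize_deploy(token, "backdoored-build.tar.gz")
```

Note that `artifact` is accepted but never examined: the decision depends entirely on state inherited from the login event.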
SolarWinds: When the Build System Lies
The SolarWinds Sunburst attack is often framed as a “build system compromise.”
But the deeper failure was one of intent verification.
The build system:
- Compiled malicious code
- Signed the artifact
- Distributed it to customers
From the CI/CD pipeline’s perspective, nothing was wrong.
The system behaved exactly as designed.
The missing question was never asked:
Did a human consciously intend to deploy this specific artifact?
Once the build server was compromised, cryptographic signatures became meaningless.
The server signed malware just as happily as it signed legitimate code.
This reveals a deeper truth:
Cryptography can authenticate machines.
It cannot authenticate human intent.
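A minimal sketch illustrates the point (HMAC stands in for the real asymmetric code-signing scheme, and the key and artifacts are invented): the signing step authenticates the machine holding the key, not the intent behind the bytes.

```python
import hashlib
import hmac

# Stand-in for the build server's code-signing key. In reality this
# would be an asymmetric key; HMAC keeps the sketch self-contained.
SIGNING_KEY = b"key-resident-on-the-build-host"

def sign_artifact(artifact_bytes):
    """Sign whatever the build produced -- no questions asked."""
    digest = hashlib.sha256(artifact_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

clean = b"binary compiled from reviewed source"
trojan = b"same binary with an implanted backdoor"

# Both signatures are equally valid downstream. Each proves which
# machine signed the bytes, never that a human meant to ship them.
assert sign_artifact(clean) != sign_artifact(trojan)  # different artifacts
assert len(sign_artifact(trojan)) == 64               # but both sign fine
```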
Codecov: Silent Drift and the Absence of Forensics
The Codecov breach persisted for months because there was no immutable forensic trail of what code actually ran in pipelines over time.
From the pipeline’s perspective:
- Scripts were downloaded
- Environment variables were exported
- Everything executed normally
The breach went undetected because the system had no memory that could not be rewritten.
Even if human intent is later questioned, there is no cryptographically reliable record of:
- What actions were authorized
- When they occurred
- What code actually executed
Security systems that cannot preserve forensic truth cannot reconstruct reality after compromise.
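One way to give a pipeline a memory that cannot be silently rewritten is an append-only hash chain, sketched below. The entry fields are illustrative; a production system would use a transparency log or Merkle tree rather than this toy structure.

```python
import hashlib
import json

def append(log, entry):
    """Add an entry whose hash commits to the entire prior history."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    h = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": h})

def verify(log):
    """Recompute the chain; any rewrite breaks every later link."""
    prev = "0" * 64
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append(log, {"action": "run_script", "sha256": "aa11"})
append(log, {"action": "export_env", "vars": ["CI_TOKEN"]})
assert verify(log)

# Rewriting history is detectable, even by the system's operator.
log[0]["entry"]["sha256"] = "deadbeef"
assert not verify(log)
```

A Codecov-style silent modification would still execute, but it could no longer execute without leaving a verifiable trace.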
The Dirty Laptop Hypothesis
The modern developer workstation is hostile territory.
A typical laptop runs:
- Browser extensions
- Background daemons
- Package managers
- Chat clients
- Build tools
- Remote access agents
Any of these can be compromised.
Yet most security systems assume that the same machine can:
- Display the approval UI
- Generate cryptographic signatures
- Safely convey intent
This is the Dirty Laptop Hypothesis:
Any general-purpose computing device used for development must be treated as compromised by default.
If approval and signing occur on the same device as development, malware can manipulate what the human sees while signing something else under the hood.
This collapses the trust boundary.
Process vs Physics
The industry response to supply-chain attacks is typically procedural:
- More approvals
- More policies
- More compliance checklists
- More training
These are process-based controls.
Process-based security fails when the underlying execution environment is compromised.
A compromised compiler does not respect peer review.
A compromised build server does not honor managerial sign-offs.
This leads to the physics-based security counter-thesis:
Security must be rooted in constraints attackers cannot bypass with software alone.
Examples:
- Physical presence
- Hardware-isolated signing
- Air-gapped approval channels
- Immutable storage
When security depends on physical properties, attackers must cross domains: digital → physical.
This drastically increases attack cost.
Why Tokens Are Structural Liabilities
Session tokens behave like blank checks:
- They remain valid for hours
- They can be replayed
- They can be exfiltrated
- They can be proxied by malware
Tokens collapse temporal context.
They reduce every high-risk action to the same low-entropy signal: a bearer credential that proves only that someone logged in earlier.
This is why token-based deployment authorization is structurally unsafe under hostile endpoint assumptions.
A deployment should require fresh proof of intent, not inherited authority from an earlier login event.
From Identity to Intent
Modern DevSecOps obsessively answers:
Who is this?
But in compromised environments, identity is irrelevant.
What matters is:
Did this human consciously authorize this specific action right now?
Intent is an action, not a state.
Identity is a state, not an action.
Security systems that authenticate identity without verifying intent are blind to the most critical failure mode in modern CI/CD pipelines.
The Architectural Implication
Once you accept the Intent-Verification Gap, several architectural requirements follow:
- Approval must be per-action, not per-session
- Signing must be physically isolated from the development environment
- Authorization must be cryptographically bound to specific artifacts and environments
- Logs must be immutable by design
- Friction must be proportional to risk, not uniform
These principles form the foundation of intent-verification architectures.
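The last principle, friction proportional to risk, might be expressed as a policy table like the following sketch. The tiers, actions, and unknown-action default are invented for illustration.

```python
# Hypothetical policy: approval friction scales with blast radius.
POLICY = {
    "read_logs":          {"risk": "low",      "approvals": 0, "hardware_key": False},
    "deploy_staging":     {"risk": "medium",   "approvals": 1, "hardware_key": False},
    "deploy_prod":        {"risk": "high",     "approvals": 1, "hardware_key": True},
    "rotate_signing_key": {"risk": "critical", "approvals": 2, "hardware_key": True},
}

def required_friction(action):
    # Fail closed: unrecognized actions get the strictest requirements.
    return POLICY.get(action, POLICY["rotate_signing_key"])

assert required_friction("read_logs")["approvals"] == 0
assert required_friction("deploy_prod")["hardware_key"]
assert required_friction("unknown_action")["approvals"] == 2
```

Routine reads stay frictionless, while anything touching production or key material demands a fresh, hardware-backed approval.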
Conclusion: Intent as a Security Primitive
Identity is a convenience layer.
Intent is the security boundary.
Until CI/CD systems treat human intent as a first-class cryptographic primitive, supply-chain attacks will continue to bypass controls while passing every compliance check.
The future of CI/CD security is not more dashboards.
It is fewer trust assumptions.