James Spears
The Gap Between Encrypting Secrets and Proving You Handled Them Right

There's a moment in every secrets pipeline that nobody talks about.

You encrypt your secrets at rest. You store them in git as ciphertext. You manage KMS keys with IAM policies. You rotate credentials. You might even use SOPS or sealed-secrets or Vault. Your secrets management story sounds solid in an architecture review.

But at some point, something has to decrypt those secrets and do something with them. And that something runs on a CI runner.

The Plaintext Moment

Think about what happens during a typical deployment. Your CI pipeline checks out the repo, decrypts SOPS-encrypted files, merges the right values for the target service and environment, re-encrypts them into a deployment artifact, and pushes it somewhere your runtime can consume it.
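The merge step in that flow — combining base values with per-service, per-environment overrides after decryption — can be sketched as a deep merge. This is an illustrative sketch, not SOPS's actual merge logic; the `merge_values` name and the precedence rule (environment overrides win) are assumptions for the example:

```python
# Hypothetical sketch of the merge step in a secrets pipeline:
# decrypted base values are overridden by per-environment values.
# Names and structure are illustrative, not from any real tool.
from copy import deepcopy

def merge_values(base: dict, overrides: dict) -> dict:
    """Deep-merge two decrypted value trees; overrides win on conflict."""
    result = deepcopy(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_values(result[key], value)
        else:
            result[key] = value
    return result

base = {"db": {"host": "db.internal", "password": "base-secret"}}
prod = {"db": {"password": "prod-secret"}, "api_key": "prod-key"}

merged = merge_values(base, prod)
print(merged["db"]["host"])      # kept from the base layer
print(merged["db"]["password"])  # overridden by the prod layer
```

Note that `merged` here is exactly the plaintext the article is talking about: for the lifetime of that dict, the fully assembled secrets exist in the runner's memory.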

For a brief window, plaintext secrets exist in memory on a general-purpose compute environment. The same environment that runs your test suite, your linters, your build tools, and whatever transitive dependencies those tools pulled in this week.

The ciphertext in your git repo is inert. Your KMS keys in isolation are useless. But the moment ciphertext meets decryption capability in the same execution context — that's when plaintext is born. And that moment happens on your CI runner.

Why This Matters Less Than You Think (And More Than You Think)

If you self-host your CI runners, you probably trust them. You control the machine, the network, the software. Telling you "your CI runner might be compromised" amounts to arguing against your own operational confidence, and that's a hard sell.

But here's the thing: trusting your CI runner and being able to prove what your CI runner did are two different things.

When an auditor asks "what code processed these secrets during the last deployment," the honest answer from most pipelines is "we trust that our CI ran the right code." That's an assertion, not evidence. The CI logs say what happened — but the CI wrote those logs. If the runner were compromised, the logs would say whatever the attacker wanted them to say.

This isn't a security gap in the traditional sense. It's a provenance gap. You might be doing everything right, but you can't prove it cryptographically.

What Cryptographic Provenance Looks Like

Imagine the "pack" operation — the moment where ciphertext meets KMS decryption and plaintext is born — running inside hardware-isolated memory. The host operating system can't read it. The hypervisor can't read it. Even root on the machine can't read it.

Now imagine the hardware produces a signed attestation document that says "this exact binary, with this exact image hash, ran this operation." And your KMS key policy says "only decrypt when the request comes with a valid attestation document bearing this specific image hash."
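The key-policy half of this is concrete today on AWS Nitro Enclaves, where KMS evaluates the enclave's attestation document via condition keys like `kms:RecipientAttestation:ImageSha384`. A policy statement along these lines (the account ID, role name, and hash are placeholders) refuses any Decrypt call that doesn't arrive with an attestation bearing the expected image measurement:

```json
{
  "Sid": "AllowDecryptOnlyFromAttestedEnclave",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/ci-runner-role" },
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "StringEqualsIgnoreCase": {
      "kms:RecipientAttestation:ImageSha384": "<expected-image-hash>"
    }
  }
}
```

Even a caller holding valid IAM credentials for `ci-runner-role` gets denied unless the request originates from an enclave whose measured image hash matches.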

You now have a chain:

  1. The binary is source-available — anyone can read the code
  2. The build is reproducible — anyone can verify the binary matches the source
  3. The image hash is deterministic — same source produces same hash
  4. The KMS policy only allows decryption by that exact image
  5. The attestation document is signed by the hardware, not by software you control
  6. The result is a signed artifact with a receipt that includes the attestation, the source commit, and the KMS keys used

This isn't "we trust the CI runner." This is "the hardware proved what code ran, and the KMS proved only that code could decrypt." No one in the chain needs to be trusted — the proof is anchored in hardware attestation and KMS policy evaluation.
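An auditor-side check of that chain might look like the following sketch. The receipt shape, field names, and the `rebuild_image_hash` stand-in for a reproducible build are all assumptions for illustration; a real verifier would also validate the hardware's signature chain on the attestation document:

```python
# Illustrative verifier for a hypothetical attestation receipt.
# Field names and the receipt shape are assumptions for this sketch;
# a real verifier would also check the hardware signature chain.
import hashlib

def rebuild_image_hash(source: bytes) -> str:
    """Stand-in for a reproducible build: same source -> same hash."""
    return hashlib.sha384(source).hexdigest()

def verify_receipt(receipt: dict, source: bytes) -> bool:
    # 1. Reproduce the build and compare the image measurement.
    if receipt["image_sha384"] != rebuild_image_hash(source):
        return False
    # 2. The receipt must tie the run to a commit and the keys used.
    return bool(receipt.get("source_commit")) and bool(receipt.get("kms_key_ids"))

source = b"enclave pack binary, built reproducibly"
receipt = {
    "image_sha384": hashlib.sha384(source).hexdigest(),
    "source_commit": "0123abcd",
    "kms_key_ids": ["arn:aws:kms:us-east-1:111122223333:key/example"],
}
print(verify_receipt(receipt, source))  # True only if the rebuild matches
```

The point of the sketch is step 1: because the build is reproducible, the auditor doesn't have to take the image hash on faith — they can rebuild from the published source and compare.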

The Audit Answer Changes

Without this: "We followed our procedures and our logs show the pipeline ran correctly."

With this: "Here is the attestation receipt. It proves this specific binary, running inside hardware-isolated memory, processed secrets from this git commit, using these KMS keys, and produced this artifact. The binary is source-available and reproducibly built — here's the build spec if you want to verify the image hash yourself."

The first answer requires trusting the organization. The second requires trusting math.

Who Actually Needs This

Not everyone. If you're a startup with five engineers and you trust your CI, you probably don't need hardware-attested pack operations. Your threat model doesn't justify the operational complexity.

But if you're in a regulated industry — finance, healthcare, government, or any organization where auditors ask pointed questions about secret handling — the provenance gap is real. And it gets wider as your secrets pipeline gets more complex: more services, more environments, more KMS keys, more people with access to CI infrastructure.

The interesting thing is that this isn't about replacing your secrets manager. You keep Vault, or SOPS, or whatever you use. The attestation layer sits on top of your existing pipeline, specifically around the moment where secrets are decrypted and repackaged. It's a narrow, focused intervention at the one point in the pipeline where plaintext exists.

The Punchline

Encryption protects secrets at rest. KMS policies protect secrets at the API boundary. Network isolation protects secrets in transit. But none of these protect the moment of use — the instant when ciphertext is decrypted and plaintext lives in memory.

Hardware enclaves protect that moment. Attestation proves what happened during that moment. Reproducible builds let anyone verify that the attested code matches published source.

The result isn't a security product. It's a provenance product. The answer to "how do you know your pipeline didn't leak these credentials" stops being an assertion and becomes a proof.
