# The Uncomfortable Truth Most Teams Avoid
You can have a fully automated pipeline, Terraform-managed infrastructure, Kubernetes running workloads, and GitHub Actions firing on every push, and still have no idea whether your environment is actually compliant with anything.
That sentence should be uncomfortable. For a lot of teams, it is. And yet the prevailing assumption in most engineering organizations is that if it's automated, it must be under control.
It is not.
## What the Lab Exposed
I have been building a Cloud and Platform Engineering Lab designed to simulate enterprise-scale systems. Not a sandbox for tutorials. An actual attempt to reproduce the architectural complexity, operational drift, and governance pressure of a real production platform environment.
What I expected to find: tooling gaps, performance edge cases, configuration quirks.
What I did not expect to find: a consistent, systemic disconnect between automation maturity and compliance posture.
Repo after repo in the lab had CI/CD pipelines. Most had some form of Infrastructure as Code. A few had Kubernetes manifests checked in and drift-detected. By surface-level metrics, these looked like healthy, modern engineering environments.
But when I started asking harder questions, the answers were unsettling:
- Are secrets being scanned before merge, or just after the damage is done?
- Does any of this IaC align with a defined security baseline, or did someone just run `terraform init` and figure it out as they went?
- Is there a README that could survive an audit, or is it a three-line placeholder from two years ago?
- If this pipeline failed a compliance gate, would anyone know which gate, why, or what to do next?
The answer, in most cases, was: not really.
## Why Automation Creates a False Sense of Security
Automation solves repeatability. That is its core value proposition. Run the same steps, in the same order, every time. It eliminates human error from execution.
But compliance is not a problem of execution. It is a problem of posture, context, and intent.
Consider the difference:
Automation answers: Did the deployment succeed? Did the tests pass? Was the container built?
Compliance asks: Does this deployment introduce risk? Does the infrastructure reflect organizational policy? Is this system auditable?
These are fundamentally different questions, and automation tooling is not designed to answer the second set. Yet the presence of CI/CD pipelines is often treated, implicitly, as evidence of maturity and control.
That conflation is where the gap lives.
## What DevOps Practices Miss About Compliance Visibility
Modern DevOps tooling is excellent at signaling operational health. Dashboards, alerts, pipeline statuses, SLOs. All of that is genuinely useful.
What it rarely surfaces is compliance health. Not because the data does not exist, but because nobody has built the layer that connects engineering artifacts to compliance signals.
Think about how compliance is typically handled today:
- A quarterly audit arrives
- Someone manually reviews pipelines, access controls, documentation
- Findings are captured in a spreadsheet
- Engineers scramble to address gaps
- Repeat in three months
This is not a process failure. This is an architectural failure. The system was never designed to surface compliance posture continuously. It was designed to execute workloads.
## Introducing the Concept of Compliance Signals
A compliance signal is not a binary pass/fail check. It is an observable characteristic of an engineering environment that carries meaningful information about risk, maturity, or alignment with policy.
The key word is "observable." Compliance signals are already present in the artifacts engineers produce every day. The problem is that nobody is reading them with compliance intent.
Here is what that looks like in practice across common signal categories:
### CI/CD Presence and Configuration
- Is there a pipeline at all?
- Does it include test stages, or just build and deploy?
- Are there branch protection rules that require review before merge?
- Is there evidence of security scanning integrated into the pipeline, or bolted on as an afterthought?
### Infrastructure as Code Usage
- Is infrastructure defined in code, or provisioned manually?
- Is the IaC versioned and peer-reviewed like application code?
- Are there policy-as-code tools like Checkov, tfsec, or OPA evaluating the templates before apply?
- Is there drift detection in place?
### Secrets Exposure Risk
- Does the repository have a secrets scanning integration enabled?
- Are there historical commits that contain credentials, tokens, or API keys, even revoked ones?
- Is there evidence of `.env` files or hardcoded configuration values being checked in?
- Are secret references externalized to a vault or parameter store?
### Documentation Maturity
- Does the README explain what this system does, who owns it, and how to run it?
- Is there an architecture decision record (ADR) trail?
- Is there runbook documentation that would survive a team rotation?
- Does documentation reference security controls, data classification, or dependency risk?
None of these signals require a new tool. They are present in existing repositories, configuration files, and pipeline definitions. What is missing is the read layer.
## Efficient Use of LLMs: High-Signal Input, Low Noise
When I started exploring how to analyze these signals at scale, the instinct was to throw everything at a model and ask it to reason over the full context. That is expensive, slow, and often produces verbose output that is hard to act on.
The more useful approach is to be surgical about what you send.
LLMs are genuinely good at a specific subset of compliance analysis tasks:
- Interpreting ambiguous configuration patterns (is this a deliberate design choice or a gap?)
- Synthesizing partial evidence into a risk narrative
- Assessing documentation quality and identifying what is absent, not just what is present
- Detecting intent drift, where the code and the documentation no longer describe the same system
But those tasks benefit from receiving focused, pre-filtered input rather than raw, unprocessed repository content.
The practical approach looks like this:
1. **Extract structured signals first.** Parse pipeline configuration files, scan for known patterns (e.g., `aws_access_key`, `password =`, absence of `.gitignore` entries for `.env`), and check for file existence (`README.md`, `CODEOWNERS`, `docs/`).
2. **Build a structured signal summary.** Not the raw files. A normalized representation of what was found and what was not found.
3. **Send the summary, not the source.** The model does not need to read 400 lines of Terraform. It needs to know that Terraform is present, there is no tfsec integration, and the state backend is local rather than remote.
4. **Ask specific, bounded questions.** "Based on these signals, what compliance risks are observable?" performs better than "analyze this repository for security issues."
This approach keeps token usage low and response quality high. More importantly, it keeps the human in the loop as an interpreter of findings, not a processor of noise.
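The steps above can be sketched in a few lines. The signal names, the summary schema, and the prompt wording below are all assumptions chosen for illustration; the shape of the idea — normalized summary in, bounded question out — is what matters:

```python
import json

def build_signal_summary(signals: dict) -> dict:
    """Normalize raw extraction results into a compact summary.

    The model never sees source files, only this representation of
    what was found and what was absent.
    """
    return {
        "iac": "terraform" if signals.get("has_iac") else "none",
        "policy_as_code": "tfsec" if signals.get("has_tfsec") else "absent",
        "state_backend": signals.get("state_backend", "local"),
        "secrets_findings": len(signals.get("suspect_files", [])),
    }

def build_prompt(summary: dict) -> str:
    # A specific, bounded question outperforms an open-ended
    # "analyze this repository for security issues".
    return (
        "Based on these observed signals, what compliance risks are "
        "observable, and which should be addressed first?\n"
        f"Signals: {json.dumps(summary, sort_keys=True)}"
    )
```

A summary like this is typically a few hundred tokens, regardless of how large the repository is, which is where the cost and quality gains come from.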
## The Platform Engineering Angle: Governance Needs a Control Plane
Platform engineering exists, in part, to abstract complexity away from product teams while maintaining organizational control over how systems are built and operated. The internal developer platform (IDP) is the mechanism for encoding those standards.
But most IDPs today are delivery platforms. They make it easier to build and deploy. They do not make it easier to govern.
This is not a criticism of the teams building those platforms. It reflects where investment has been directed. Delivery velocity has clear, measurable ROI. Compliance visibility is harder to quantify until something goes wrong.
The gap I am describing here is the same gap between an IDP and a governance control plane. A governance control plane would:
- Continuously evaluate repositories and environments against defined compliance criteria
- Aggregate findings across an organization into a single, queryable posture view
- Surface risk-ranked findings to the teams responsible for addressing them
- Close the loop between audit findings and engineering remediation
That is not a new category of tool. It is a missing integration layer between existing tooling and organizational policy. The signals are there. The policy exists, in most organizations, in some form. The bridge between them is what is absent.
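To make the "single, queryable posture view" concrete, here is a deliberately small sketch of that aggregation step. The severity weights and finding shape are invented for illustration; any real control plane would map these to organizational policy:

```python
# Illustrative severity weights; a real system would derive these
# from organizational policy, not hardcode them.
SEVERITY = {"high": 3, "medium": 2, "low": 1}

def posture_view(findings_by_repo: dict) -> list:
    """Aggregate per-repo findings into a risk-ranked posture view.

    The riskiest repositories surface first, so the teams responsible
    know where to act without reading every finding.
    """
    ranked = []
    for repo, findings in findings_by_repo.items():
        score = sum(SEVERITY[f["severity"]] for f in findings)
        ranked.append({
            "repo": repo,
            "risk_score": score,
            "open_findings": len(findings),
        })
    return sorted(ranked, key=lambda r: r["risk_score"], reverse=True)
```

The integration work — feeding this view from real scanners and closing the loop back to remediation — is the hard part; the aggregation itself is trivial, which is the argument that this is a missing layer, not a missing tool.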
## Toward Self-Evaluating Systems
The longer-term vision here is not a compliance dashboard. Dashboards require someone to look at them.
What would be genuinely useful is an engineering environment that continuously evaluates its own compliance posture, surfaces observations in context, and makes it easier for engineers to close gaps before they become audit findings or incidents.
This is not surveillance. It is structural self-awareness. The same way a well-instrumented application surfaces performance anomalies without requiring a human to check dashboards constantly, a well-instrumented platform should surface compliance anomalies without requiring a quarterly audit cycle.
The signals already exist. The analysis capability exists. The missing piece is the integration architecture that connects them into a coherent posture view.
That is the space I am building toward in the lab. The system I have started calling Komplora is an early exploration of exactly this problem: analyzing engineering environments at the repository level, detecting compliance signals, and producing structured, actionable posture assessments without requiring a manual audit process.
It is early. But the signal detection layer is already producing useful observations.
## What This Means for Platform Engineers
If you are building or operating a platform, compliance visibility should be a first-class concern, not a feature added in response to a failed audit.
The practical starting point is not tooling. It is clarity on what compliance means for your organization, expressed as observable signals in engineering artifacts. Once you have that, the path to continuous visibility becomes an engineering problem rather than a process problem.
And engineering problems, in this space, are ones we are genuinely equipped to solve.
## A Question Worth Sitting With
Most organizations can tell you their deployment frequency and mean time to recovery. Far fewer can tell you their compliance posture across their engineering estate on any given day.
What would it take for your platform to answer that question continuously, not quarterly?
I am curious how others are thinking about this, especially those who have tried to close this gap at scale. What approaches have worked? What assumptions did you have to abandon?
If this resonated, follow along. I will be sharing more observations from the platform lab as the work progresses.
**#devops #platformengineering #devsecops #cloud**
