When we first built the compliance agent in IntentGuard, it ran every framework against every codebase.
The result was technically thorough and practically useless.
A Go REST API with no payment processing was being evaluated against PCI DSS. A Python data pipeline with no personal data handling was generating GDPR findings. A non-AI internal tool was receiving EU AI Act violations as its most prominent output.
The findings were not wrong, exactly. They were irrelevant. And in audit contexts, irrelevant findings are worse than no findings: they train reviewers to ignore output, which is the opposite of what you want.
The problem with framework-agnostic scanning
Most compliance tools apply frameworks uniformly. You select the frameworks you want evaluated, and the tool checks the codebase against all of them equally. This approach has a surface-level logic to it: better to check more than less.
The problem is that compliance frameworks are not generic. PCI DSS applies to systems that process payment card data. HIPAA applies to systems handling protected health information. DORA, the EU's Digital Operational Resilience Act, applies to financial-sector entities providing ICT services. Running these frameworks against a codebase that does not fall within their scope produces noise, not signal.
Worse: when a finding from an inapplicable framework appears at the same severity as a finding from an applicable one, the auditor has to mentally filter. That filtering work defeats the purpose of automation.
How we addressed it
Before any LLM call, we now run a deterministic classification step. It reads the intent model — the structured representation of what the product was designed to do — and classifies each framework as applicable or not applicable based on what the codebase actually is.
The classification is deterministic: no probabilities, no inference, no LLM. It looks for specific signals in the product description and the inferred architecture. A codebase described as processing financial account data and exhibiting PCI DSS-relevant patterns gets PCI DSS evaluated. One that does not, does not.
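A deterministic check of this shape can be sketched in a few lines. The signal sets and intent-model fields below are illustrative assumptions; the post does not disclose IntentGuard's actual signals or schema:

```python
from dataclasses import dataclass, field

# Hypothetical signal sets -- illustrative only, not IntentGuard's real lists.
PCI_DSS_SIGNALS = {"payment", "card data", "financial account"}
GDPR_SIGNALS = {"personal data", "user profile", "email address"}

@dataclass
class IntentModel:
    description: str                      # product description the user provides
    architecture_tags: set = field(default_factory=set)  # inferred architecture signals

def is_applicable(signals: set, intent: IntentModel) -> bool:
    """Deterministic rule: a framework applies only if at least one of its
    signals appears in the description or the inferred architecture tags.
    No model call, no probability -- just string and set membership."""
    text = intent.description.lower()
    return any(s in text for s in signals) or bool(signals & intent.architecture_tags)
```

The point of keeping this step deterministic is auditability: given the same intent model, the same frameworks are always in scope, and the decision can be explained by pointing at the matching signal.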
When a framework is not applicable, the compliance agent is instructed to produce a single informational finding: "[Framework] — Not applicable to this codebase." Not a critical violation. Not a high severity gap. An informational acknowledgement that the framework was considered and excluded.
The result is a compliance grid that reflects the codebase's actual regulatory context — not a generic checklist applied uniformly to everything.
Why this matters for the findings you get
Five frameworks are universal — they apply to every codebase regardless of type: ISO 27001, SOC 2, OWASP ASVS L2, NIST CSF, and CIS Controls v8.
These are the baseline for any modern software system.
The remaining eleven frameworks are conditional. GDPR activates on personal data handling. DORA activates on financial sector context. HIPAA activates on health data signals. OWASP API Top 10 activates on REST or GraphQL API patterns.
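The split above amounts to a static scoping table. The trigger names here are illustrative assumptions, and only the four conditional triggers mentioned in this post are shown:

```python
# Universal frameworks: evaluated for every codebase (from the post).
UNIVERSAL = ["ISO 27001", "SOC 2", "OWASP ASVS L2", "NIST CSF", "CIS Controls v8"]

# Conditional frameworks and their activation signals -- trigger keys are
# hypothetical names, not IntentGuard's real signal identifiers.
CONDITIONAL = {
    "GDPR": "personal_data",
    "DORA": "financial_sector",
    "HIPAA": "health_data",
    "OWASP API Top 10": "api_patterns",
    # ...the remaining conditional frameworks follow the same pattern
}

def frameworks_in_scope(signals: set) -> list:
    """Universal frameworks always apply; conditional ones only when
    their trigger signal was detected in the intent model."""
    return UNIVERSAL + [name for name, trigger in CONDITIONAL.items() if trigger in signals]
```

For a financial services API, `frameworks_in_scope({"financial_sector", "api_patterns"})` would return the five universal frameworks plus DORA and OWASP API Top 10, with GDPR and HIPAA left out.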
This means an IT auditor reviewing a financial services platform gets a compliance grid dominated by the frameworks that matter to their client — not one where ISO 42001 and EU AI Act appear at the top because those happen to be in the list.
The scope question
The obvious challenge with deterministic scoping is edge cases. A codebase that does not explicitly declare payment processing but accepts card numbers through a generic input handler would not trigger PCI DSS through intent model signals alone — it would surface through the Security Agent's findings instead.
This is by design. The scoping step uses the intent model, which comes from the product description the user provides. If the description is accurate, the scoping is accurate. If the description is incomplete, the user is told the confidence is low and prompted to provide more context.
The Security Agent, the Dependency Agent, and the Architecture Agent all run regardless of framework scoping. A PCI DSS-relevant vulnerability will still appear as a security finding even if PCI DSS framework evaluation is scoped out. The framework compliance grid and the security finding list are separate outputs from separate agents.
Building IntentGuard in public from Johannesburg. If you have worked on compliance tooling and have thoughts on the framework scoping problem — particularly around edge cases — I would like to hear them in the comments.
The concepts discussed are my own; the presentation and formatting of this post were enhanced by an AI assistant.
Olebeng · Founder, IntentGuard · intentguard.dev