A live investigation. This post will be updated as I dig deeper, fix it, and reflect on what it means.
The Discovery
Today I deleted a file called llm_api_client.py.
It had no imports pointing to it anywhere in the codebase. Pure orphan. Dead code by any definition.
The problem: CORE's constitutional auditor didn't catch it.
CORE has a rule called purity.no_dead_code:
```json
{
  "id": "purity.no_dead_code",
  "statement": "Production code MUST NOT contain unreachable or dead symbols as identified by static analysis.",
  "enforcement": "reporting"
}
```
The rule exists. The audit runs it on every core-admin code audit call. It produced exactly 1 warning in recent runs — but not for llm_api_client.py.
I only found the dead file manually, while working through a separate compliance task.
That's a problem worth understanding.
The Investigation: What Is the Auditor Actually Doing?
CORE's enforcement model separates what the law says from how it's enforced. The rule lives in .intent/rules/, the enforcement mechanism lives in .intent/enforcement/mappings/.
Here's the full enforcement declaration for purity.no_dead_code:
```yaml
purity.no_dead_code:
  engine: workflow_gate
  params:
    check_type: dead_code_check
    tool: "vulture"
    confidence: 80
```
Vulture. A solid static analysis tool — but one with a specific scope. Vulture finds unused symbols within files: functions that are defined but never called, variables assigned but never read, classes that are never instantiated.
What vulture does not do: traverse the import graph to find files that nothing imports.
llm_api_client.py likely had internal symbols that appeared "used" within the file itself. From vulture's perspective: no violations. From reality's perspective: the entire file was unreachable from the rest of the system.
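To make the failure mode concrete, here is a hypothetical reconstruction (the real file's contents aren't shown in this post): every symbol is referenced within the file, so vulture reports nothing, yet no other module imports the file.

```python
# llm_api_client.py -- hypothetical reconstruction of the orphan.
# Every symbol below is referenced *inside* this file, so a file-local
# tool like vulture sees no unused code, even though nothing in the
# rest of the codebase imports this module.

def _build_headers(token: str) -> dict:
    # Called by request() below, so vulture considers it "used".
    return {"Authorization": f"Bearer {token}"}


def request(token: str) -> dict:
    # Uses _build_headers, keeping both symbols alive in vulture's view.
    return _build_headers(token)


# Module-level call keeps request() itself "used" within the file.
DEFAULT_HEADERS = request("dummy-token")
```

From a file-local perspective this module is perfectly healthy; only an import-graph view reveals it is unreachable.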
The rule says: "unreachable or dead symbols"
The enforcement checks: unused symbols inside files
These are two different things. The enforcement is a subset of what the rule claims to guarantee. The constitution was shallow.
The Insight
This is, I think, the most honest thing I can say about constitutional AI governance:
The constitution is only as strong as its enforcement mechanisms. A rule that exists but enforces shallowly is not a guarantee — it's an aspiration.
CORE did exactly what it was told. No more, no less. The law declared "no dead code." The enforcement mechanism checked for unused symbols. The file slipped through the gap between what the law said and what the enforcement did.
This isn't a criticism of the approach. It's the nature of any governance system — constitutional law included. The text of the law and the apparatus that enforces it are always two separate things. The gap between them is where violations live.
What matters is: can the system correct itself when the gap is found?
In CORE's model, the fix is a .intent/ declaration change. Not Python. Not a code patch. A policy update that changes enforcement behavior system-wide.
What the Fix Looks Like (Conceptually)
True dead file detection requires import graph traversal — building a dependency graph of the entire codebase and identifying files that no entry point can reach.
Tools that can do this: pydeps, custom AST graph traversal, or a knowledge_gate that queries CORE's own symbol database (which already tracks file-level relationships via core.symbols).
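The traversal itself fits in a page of stdlib Python. A rough sketch (simplified: absolute imports only, naive module-name matching — not CORE's actual implementation):

```python
import ast
from pathlib import Path


def module_name(path: Path, root: Path) -> str:
    """Map root/foo/bar.py -> 'foo.bar' relative to the source root."""
    rel = path.relative_to(root).with_suffix("")
    return ".".join(rel.parts)


def imports_of(path: Path) -> set[str]:
    """Module names imported by a file (absolute imports only)."""
    tree = ast.parse(path.read_text(encoding="utf-8"))
    found: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module)
    return found


def unreachable_files(root: Path, entry_points: list[Path]) -> set[Path]:
    """Files no entry point can reach through the static import graph."""
    by_name = {module_name(p, root): p for p in root.rglob("*.py")}
    seen: set[Path] = set()
    stack = list(entry_points)
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        for name in imports_of(current):
            # Match both 'pkg.mod' itself and submodules like 'pkg.mod.sub'.
            for mod, path in by_name.items():
                if mod == name or mod.startswith(name + "."):
                    stack.append(path)
    return set(by_name.values()) - seen
```

Seed it with the declared entry points, and whatever the traversal never marks as reachable is an orphan candidate.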
The declaration change would look something like:
```yaml
purity.no_dead_code:
  engine: workflow_gate
  params:
    check_type: dead_code_check
    tool: "vulture"   # symbol-level: keep this
    confidence: 80
  # ADD:
  additional_checks:
    - check_type: orphan_file_check
      engine: knowledge_gate
      params:
        check_type: unreachable_files
        entry_points:
          - "src/cli/"
          - "src/body/atomic/"
```
I checked. CORE's knowledge_gate currently supports:

- `capability_assignment`
- `ast_duplication`
- `semantic_duplication`
- `duplicate_ids`
- `table_has_records`
No orphan file detection. No import graph traversal.
The gap goes deeper than a declaration change. A new check_type implementation is needed — which means extending knowledge_gate itself, or building a dedicated engine. The .intent/ declaration is the easy part. The enforcement mechanism has to exist first.
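For a sense of what "extending the engine" means, here is a hypothetical shape of a check_type registry. All names here (`CHECKS`, `register`, `run_check`) are invented for illustration — CORE's real engine interface isn't shown in this post and may look quite different.

```python
from typing import Callable

# Hypothetical registry mapping declared check_type names to implementations.
CHECKS: dict[str, Callable[[dict], list[str]]] = {}


def register(check_type: str):
    """Decorator that makes a check discoverable by its declared name."""
    def wrap(fn):
        CHECKS[check_type] = fn
        return fn
    return wrap


@register("unreachable_files")
def unreachable_files_check(params: dict) -> list[str]:
    """Placeholder: params come straight from the .intent/ declaration."""
    entry_points = params.get("entry_points", [])
    # ... run import-graph traversal seeded from entry_points ...
    return []  # list of violation messages


def run_check(check_type: str, params: dict) -> list[str]:
    if check_type not in CHECKS:
        raise ValueError(f"unknown check_type: {check_type}")
    return CHECKS[check_type](params)
```

The point is the shape: the declaration names a check_type, and the engine must have a registered implementation behind that name before the declaration means anything.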
This is the rabbit hole.
Status
- [x] Dead file discovered manually (`llm_api_client.py`)
- [x] Root cause identified (vulture scope vs. import graph traversal)
- [x] Constitutional gap diagnosed (rule vs. enforcement mismatch)
- [x] Investigation: `knowledge_gate` does not support orphan file detection — new engine needed
- [ ] Design: new `check_type` for import graph traversal
- [ ] Implementation: extend `knowledge_gate` or build dedicated engine
- [ ] Declaration: update `.intent/enforcement/mappings/code/purity.yaml`
- [ ] Verify: audit now catches what it missed
- [ ] Reflection: what this means for CORE's autonomy claims
[UPDATE 1 — coming soon: designing the orphan file check — declaration-first, engine second]
[UPDATE 2 — coming soon: implementation and proof it works]
[UPDATE 3 — coming soon: the philosophical reflection on constitutional blind spots]
CORE is open source: github.com/DariuszNewecki/CORE
Credit: the PromptModel artifact pattern was inspired by Ruben Hassid's prompt engineering work.
UPDATE 1: Building the Orphan File Check
Declaration first. Always.
I extended `knowledge_gate` with a new `check_type: orphan_file_check`. The implementation uses Python's `ast` module to traverse the import graph from declared entry points, marks every reachable file, and flags the rest as orphans.

The new rule declaration:
The enforcement mapping:
First audit run: 231 orphan files flagged.
That's not what I expected.
UPDATE 2: The Architecture Fights Back

231 was obviously wrong. I widened the entry points — added `src/main.py`, `src/api/main.py`, and the known dynamic plugin directories.

Why those specific directories? Because CORE uses dynamic discovery in several places — `pkgutil.iter_modules` + `importlib.import_module` to scan directories and load all `BaseEngine` subclasses, all workers, all phases at runtime. Static import graph traversal can't see that. Those directories are plugin roots — loaded by convention, not by import chain.

Second audit run: 164 orphan files.

Still wrong. I kept digging.

The remaining 164 flagged files include things like `src/will/agents/researcher_agent.py`, `src/will/self_healing/context_aware_test_generator.py`, and `src/will/orchestration/remediation_orchestrator.py`. These are not dead. They're actively used. But nothing statically imports them.

That's when the real problem became clear.

UPDATE 3: The Real Problem — Static Analysis vs. Dependency Injection

CORE uses dependency injection heavily. `CoreContext`, `CognitiveService`, `FileHandler`, and dozens of other objects are constructed and wired together at runtime by the orchestration layer. Agents, workflows, and services are injected — not imported at the top of a file.

So `researcher_agent.py` exists, is actively used, but never appears in a static import chain. The DI container knows about it. The import graph does not.

This is a fundamental incompatibility: static analysis and dependency injection are at odds. The more decoupled your architecture, the less visible it is to static tools. CORE's constitutional governance engine is itself invisible to the very check it's running — `knowledge_gate.py` is loaded by `EngineRegistry` via `pkgutil.iter_modules`, not by any import.

The check caught a genuinely orphaned file (`llm_api_client.py`). Then it ran into the wall of CORE's own architecture.

The Second Insight — The Deeper One

Three ways to fix this properly:

Option A — DI registry instrumentation. Make the DI container record what it wires up, and use that as reachability data. Architecturally clean, weeks of work.

Option B — Explicit declaration for every injectable. Require every agent, service, and workflow to declare itself in `.intent/` (like workers already do). Constitutional and principled, but a massive migration.

Option C — Git activity heuristic. Flag files with zero git activity in 90+ days AND zero appearances in any import. Pragmatic, catches genuinely dead files, tolerates DI-injected ones. Less principled but immediately useful.

None of these is worth doing right now. The check has real value — it found `llm_api_client.py`, and it will find future genuinely dead files. The limitation is documented in the mapping. That's enough for now.

The bigger lesson: governance tools reveal architecture. Running this check didn't just find dead code — it produced a complete map of CORE's dynamic loading patterns, DI boundaries, and plugin conventions. That knowledge has value independent of the check's accuracy.

Status

- [x] Genuinely dead file caught (`llm_api_client.py`)
- [x] `knowledge_gate` extended with `orphan_file_check`
- [x] Limitation documented in `.intent/` mapping comments
- [ ] DI-aware reachability (declare injectables in `.intent/`) — A3+ work
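For context, the convention-based loading pattern at the heart of this problem (`pkgutil.iter_modules` + `importlib.import_module`) looks roughly like this — a generic sketch, not CORE's actual `EngineRegistry`:

```python
import importlib
import pkgutil


def discover(package_name: str, base: type) -> list[type]:
    """Import every module inside a package and collect subclasses of `base`.

    No file anywhere writes `import package.plugin_module`, so a static
    import-graph walk sees the plugin modules as orphans, even though
    they are loaded and exercised at runtime.
    """
    package = importlib.import_module(package_name)
    for info in pkgutil.iter_modules(package.__path__):
        # Convention-based loading: reachable at runtime, invisible statically.
        importlib.import_module(f"{package_name}.{info.name}")
    return base.__subclasses__()
```

Any reachability analysis that stops at the import graph will flag every module a loader like this touches — which is exactly what happened to the agents, workflows, and engines above.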