I run a personal AI platform with eight active agents, dozens of processors, and a fully self-hosted Langfuse instance. I built the observability layer myself. I shipped it a few weeks ago. Last week I ran the audit query for the first time.
The agents that talk to me the most only had Langfuse-level lineage coverage for about 13% of their decisions.
This is the writeup of what I found, why it happened, and the schema and code that explain it. If you run agents and you've never run this audit, you have a very good chance of finding the same gap.
## The Setup
Quick context. The platform is called Nexus. It's a TypeScript monorepo plus a fleet of Python processors, running on a couple of mini PCs in my apartment. It ingests 26 data sources, runs 8 reasoning agents on schedules, and serves an MCP tool surface I use as my daily driver.
Two layers matter for this post:
The agents are reasoning entities. They read from gold-layer tables, decide things, and write proposals to inbox tables. ARIA is the user-facing coordinator. Chronicler owns the timeline. Insight does anomaly detection. Five others fill in around them. They're scheduled, bounded, and they don't directly execute infrastructure changes — they propose, a human decides.
Every agent decision lands in a row in agent_decisions. Every row has a trace_id like aria-1777559470433-5c0db36c. That trace_id is generated by the agent itself at the start of a cycle and is 100% covered. It tells you the agent ran. It does not tell you what the LLM was asked or what it returned.
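For concreteness, the ID format is easy to sketch. This is a hypothetical generator inferred purely from the shape of the example ID (agent ID, epoch milliseconds, four random bytes as hex); the real implementation isn't public:

```typescript
import { randomBytes } from "node:crypto";

// Hypothetical sketch of an internal trace ID generator, inferred from
// the shape of `aria-1777559470433-5c0db36c`:
// <agent id>-<epoch millis>-<8 hex chars>.
function makeTraceId(agentId: string): string {
  const ts = Date.now(); // 13-digit millisecond timestamp
  const suffix = randomBytes(4).toString("hex"); // 8 hex characters
  return `${agentId}-${ts}-${suffix}`;
}
```

Because the agent mints this itself before any LLM call, it's trivially 100% covered, which is exactly why it says nothing about LLM-level lineage.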
The processors are the deterministic side. They read raw data, enrich it, write to silver and gold. Some call LLMs (Gmail enrichment, ambient capture upgrade, financial event extraction). Each run lands in aurora_processing_runs with a langfuse_trace_id column populated when the run had Langfuse turned on.
Langfuse itself is self-hosted on a host on my private network. It's been running fine for weeks. It has traces in it. The dashboard shows traces. I have used the dashboard.
I just hadn't asked the question "what fraction of my agent and processor activity is actually represented there."
## The Audit Query
The MCP tool that surfaced this is nexus_agent_architecture_status. Under the hood it's running this against the operational Nexus Postgres:
```sql
SELECT agent_id,
       COALESCE(invocation_type, 'cycle') AS invocation_type,
       COUNT(*)::int AS decisions,
       COUNT(*) FILTER (WHERE trace_id IS NOT NULL)::int
         AS with_trace_id,
       COUNT(*) FILTER (
         WHERE state_snapshot ? 'langfuse_enabled'
       )::int AS with_langfuse_flag,
       COUNT(*) FILTER (
         WHERE COALESCE((state_snapshot->>'langfuse_enabled')::boolean, false)
       )::int AS langfuse_enabled_count,
       COUNT(*) FILTER (
         WHERE NULLIF(state_snapshot->>'langfuse_trace_id', '') IS NOT NULL
       )::int AS with_langfuse_trace_id,
       MAX(created_at) AS last_decision_at
FROM agent_decisions
WHERE created_at >= NOW() - (30 * INTERVAL '1 day')
GROUP BY agent_id, COALESCE(invocation_type, 'cycle')
ORDER BY agent_id, invocation_type;
```
The state_snapshot column is JSONB. Every agent cycle writes a small snapshot of the runtime config it ran under, including whether Langfuse was enabled, the active trace ID, and (when disabled) a langfuse_disabled_reason string. This is the schema that lets me tell the difference between "we never tried to trace" and "we tried and failed."
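Given that snapshot schema, the distinction can be expressed as a tiny classifier. This is a sketch, not code from the repo; the field names are the ones described above, and the three-way split is my framing:

```typescript
// The relevant state_snapshot fields, as described in the post.
interface StateSnapshot {
  langfuse_enabled?: boolean;
  langfuse_trace_id?: string;
  langfuse_disabled_reason?: string;
}

type LineageStatus = "traced" | "never-tried" | "tried-and-failed";

// "never-tried": the no-op path ran because the flag was off.
// "tried-and-failed": tracing was on but no trace ID landed in the snapshot.
function classify(s: StateSnapshot): LineageStatus {
  if (!s.langfuse_enabled) return "never-tried";
  return s.langfuse_trace_id ? "traced" : "tried-and-failed";
}
```

Almost all of the gap in the table below falls in the "never-tried" bucket, which is what makes this incident a configuration story rather than an outage story.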
The result over a 30-day window, sorted by decision volume:
| Agent | Decisions | Internal trace | Langfuse trace | Coverage | Executor |
|---|---|---|---|---|---|
| ARIA | 31,451 | 31,451 | 5,452 | 17% | executor-A |
| Insight | 25,913 | 25,913 | 4,402 | 17% | executor-A |
| Chronicler | 23,297 | 23,297 | 2,950 | 13% | executor-A |
| Circle | 21,510 | 21,510 | 2,490 | 12% | executor-A |
| Infra | 19,701 | 19,701 | 2,524 | 13% | executor-A |
| Correlator | 2,594 | 2,594 | 2,592 | 100% | executor-A |
| Planner | 2,592 | 2,592 | 2,591 | 100% | executor-A |
| Keeper | 696 | 696 | 696 | 100% | executor-B |
Read that table in two passes.
First pass: the agents producing the most decisions (ARIA at 31K, Insight at 25K) are the ones with the lowest Langfuse coverage (12–17%). The agents with low volume (Correlator, Planner, Keeper) sit at 100%. Inversely correlated.
Second pass: it's not actually about volume. It's about something the volume happens to correlate with. The five high-volume agents are the ones whose execution is shaped by an older code path; the three high-coverage agents are on the newer one. Keeper runs on a different executor entirely.
## What's in the Untraced Rows
Pulling a sample of the rows where langfuse_enabled is false tells the story directly:
```json
{
  "id": 141946,
  "agent_id": "aria",
  "invocation_type": "cycle",
  "trace_id": "aria-1777559470433-5c0db36c",
  "created_at": "2026-04-30T14:31:17.266Z",
  "langfuse_disabled_reason": "LANGFUSE_ENABLED is false"
}
```
That field is the answer. At the moment of that decision, the agent process saw LANGFUSE_ENABLED=false in its environment and routed every LLM call through the no-op path.
## How the No-Op Path Works
Here's the actual gating code, lightly trimmed, from packages/core/src/services/langfuse-client.ts:
```typescript
export function getLangfuseConfig(env = process.env): LangfuseConfig {
  return {
    enabled: parseBool(env.LANGFUSE_ENABLED, false), // default false
    publicKey: env.LANGFUSE_PUBLIC_KEY?.trim() || undefined,
    secretKey: env.LANGFUSE_SECRET_KEY?.trim() || undefined,
    baseUrl: trimTrailingSlash(env.LANGFUSE_BASE_URL?.trim()),
    // ...
  };
}

export async function runWithLangfuseTrace<T>(
  params: LangfuseTraceParams,
  fn: (context: LangfuseTraceContext) => Promise<T> | T,
): Promise<T> {
  const cfg = getLangfuseConfig();
  const reason = getDisabledReason(cfg);
  if (reason) {
    warnDisabled(reason); // logs once per process
    return fn({ enabled: false }); // run the work, no trace
  }
  // ... normal trace path
}
```
This is a textbook pattern. Default off. Fail open. Log once. Never block the agent.
The pattern is right. It's the same one the Python services use, and the same one the publishing pipeline uses for its drafting code. You don't want a Langfuse outage taking down agents.
What the pattern doesn't do is tell you when it's been firing for weeks.
The warnDisabled call is guarded by a module-level boolean so it only logs once per process lifetime. The next 10,000 calls to runWithLangfuseTrace from that process are silent. No counter, no metric, no row in the disabled-runs table. Just a single line in stdout that scrolled past at startup.
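The three helpers in the gating code (parseBool, getDisabledReason, warnDisabled) aren't shown. Here is a plausible reconstruction based only on the behavior described in this post; the real implementations live in the private repo:

```typescript
interface LangfuseConfig {
  enabled: boolean;
  publicKey?: string;
  secretKey?: string;
  baseUrl?: string;
}

// Hypothetical: treat common truthy strings as true, anything else as
// false; missing or empty input falls back to the supplied default.
function parseBool(raw: string | undefined, fallback: boolean): boolean {
  if (raw === undefined || raw.trim() === "") return fallback;
  return ["1", "true", "yes", "on"].includes(raw.trim().toLowerCase());
}

// Returns a human-readable reason when tracing can't run, else undefined.
function getDisabledReason(cfg: LangfuseConfig): string | undefined {
  if (!cfg.enabled) return "LANGFUSE_ENABLED is false";
  if (!cfg.publicKey || !cfg.secretKey) return "Langfuse keys missing";
  if (!cfg.baseUrl) return "LANGFUSE_BASE_URL missing";
  return undefined;
}

// Warn exactly once per process lifetime: the module-level guard the
// post blames for weeks of silence.
let warned = false;
function warnDisabled(reason: string): void {
  if (warned) return;
  warned = true;
  console.warn(`[langfuse] tracing disabled: ${reason}`);
}
```

Nothing in this reconstruction is wrong on its own terms. The failure is that none of it is counted anywhere a query can reach.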
## The Real Story: It Was Never Turned On
I went looking through every checked-in config file for LANGFUSE_ENABLED=true:
```sh
$ rg "LANGFUSE_ENABLED" --type=yaml --type=service --type=env --type=conf
```
Zero hits. The flag isn't set in any committed config. The agents that have full Langfuse coverage are the ones whose runtime environment happens to have LANGFUSE_ENABLED=true set somewhere out of band — a systemd unit, an inherited shell env, a compose override that lives on the host.
That explains the table.
- Keeper runs under the newer executor process, which inherits an env that has the flag set. 100% coverage.
- Correlator and Planner are recent additions wired into a different runtime path that always emits Langfuse spans regardless of the flag. 100% coverage.
- The five high-volume agents (ARIA, Insight, Chronicler, Circle, Infra) run under the older executor. Most of the time it doesn't see the flag. Occasionally it does — about 12–17% of cycles — probably the ones that happen to fall after a manual restart in a shell where the flag was exported.
It's not drift. It's never having been turned on in the first place for the path that does the most work.
## The Processor Side Has the Same Shape
Pulling the 30 most recent rows from aurora_processing_runs:
| Processor Name | Version | Has Trace |
|---|---|---|
| ambient-moment-sync | 2026-04-29.langfuse-v1 | ✓ |
| gmail-enrich | 2026-04-29.events-v1 | ✓ |
| gmail-appointment-extract | 2026-04-30.v1 | ✓ |
| mem-bronze-drain | v1 | ✗ |
| plans-to-kg | v1 | ✗ |
| voice-to-kg | v1 | ✗ |
| social-bronze-drain | v1 | ✗ |
| ambient-context-upgrade-processor | 2026-04-29.context-v1 | ✗ |
| health-timeline-promote | 2026-04-30.v2 | ✗ |
Same pattern. Processors with a langfuse-v1 or events-v1 tag in the version string emit trace IDs because their code was explicitly migrated to call runWithLangfuseTrace. Processors still on v1 were written before the migration helper existed and never adopted it. They call traceLlmGeneration if they make LLM calls, but the outer trace context is missing, so the spans don't correlate to anything queryable.
The version string is doing the work the env flag isn't. It encodes whether the code knows about the tracing helper.
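That heuristic is mechanically checkable. A sketch of the predictor follows; the tag patterns are the ones visible in the table, and as the table itself shows (gmail-appointment-extract traces on a plain dated v1), this is a strong predictor rather than a rule:

```typescript
// Predict Langfuse coverage from a processor's version string alone.
// Tags like "2026-04-29.langfuse-v1" or "2026-04-29.events-v1" mark code
// migrated to runWithLangfuseTrace; a bare "v1" predates the helper.
function likelyTraced(version: string): boolean {
  return /\.(langfuse|events)-v\d+$/.test(version);
}
```

Running that over the version column reproduces most of the Has Trace column, which is exactly the point: the code's migration state, not the env flag, is what the trace data tracks.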
## What Generalizes
I run this stack as one person. Eight agents, a handful of processors, one Langfuse instance, one set of credentials. The fix is a long afternoon. The same problem at any non-trivial agent deployment is much more expensive to discover and much more expensive to close, because by the time you ask the question you have hundreds of thousands of decisions you can't reconstruct.
Three patterns that generalize from this audit:
**1. Decision counts are not coverage.**
Every dashboard I had was counting decisions and showing them as green. None of them computed coverage ratios. Decision counts tell you the agent ran. They don't tell you whether you can answer what it did. If you're going to instrument observability, instrument the observability itself.
**2. Default-off is correct. Silent default-off is not.**
The parseBool(env.LANGFUSE_ENABLED, false) default is right. You don't want observability code that fails closed and breaks the agent. But there's a difference between "fails open" and "fails open silently for weeks." The fix is a periodic check, on a separate cadence from the agents themselves, that reports langfuse_enabled=false across {n} cycles in the last hour to a channel a human will see. The disabled-reason field already exists. Aggregating it is one cron job.
**3. Code version is the actual observability gate.**
The flag check is a red herring. The real question is whether the agent or processor was written to call into the tracing helper at all. 2026-04-29.langfuse-v1 in a version string is a much better predictor of coverage than the env flag. Treat your tracing migration as a code migration, audit by version, and don't assume an env flag covers the gap.
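Patterns 1 and 2 reduce to one small computation: coverage ratio per agent, compared against a floor, with a message when it drops. A sketch under assumptions (the row shape mirrors the audit query; the 95% floor matches the alert threshold I set up below; none of this is the production alert):

```typescript
// One row per agent from the audit query: total decisions and decisions
// that carried a Langfuse trace ID.
interface CoverageRow {
  agentId: string;
  decisions: number;
  withLangfuseTraceId: number;
}

// Compute coverage ratios and flag any active agent under the floor.
function coverageAlerts(rows: CoverageRow[], floor = 0.95): string[] {
  return rows
    .filter((r) => r.decisions > 0)
    .map((r) => ({ ...r, ratio: r.withLangfuseTraceId / r.decisions }))
    .filter((r) => r.ratio < floor)
    .map(
      (r) =>
        `${r.agentId}: ${(r.ratio * 100).toFixed(0)}% Langfuse coverage ` +
        `(${r.withLangfuseTraceId}/${r.decisions})`,
    );
}
```

Fed the numbers from the table above, this flags ARIA and the other four older-executor agents and stays quiet about Keeper, which is the dashboard I should have had from day one.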
## What I'm Doing About It
Three things, in this order:
1. **Set the flag where it should always have been set.** This is the embarrassing one. Add LANGFUSE_ENABLED=true to the older executor's systemd unit, restart, and verify with one cycle from each of the five low-coverage agents. This closes the going-forward gap immediately.
2. **Materialize coverage as a first-class metric.** A view, agent_observability_coverage, computed from the audit query above on a rolling 24-hour window, plus a small alert that fires if any active agent drops below 95%. The view is gitignored config; the alert lives in the existing notification path.
3. **Backfill triage.** I can't recover the prompts and responses for the 100,000+ untraced decisions. They're gone. What I can do is replay the inputs for the high-importance subset — anything that touched a person record, anything in the financial event flow, anything routed through ARIA's user-facing path — and emit a post-hoc trace with whatever the prompt would have been at the version pin recorded in state_snapshot.prompt_version. The output won't match what actually happened. But it gives a baseline for behavioral drift detection going forward.
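The triage filter is just a predicate over decision rows. A sketch under heavy assumptions: the field names (touchedTables, invocationType) and the "user_request" value are hypothetical stand-ins for whatever the real schema records; the three criteria are the ones listed above:

```typescript
interface DecisionRow {
  agentId: string;
  invocationType: string; // hypothetical field names throughout
  touchedTables: string[];
}

// Select the high-importance subset worth a post-hoc replay: anything
// touching a person record, the financial event flow, or ARIA's
// user-facing path. Criteria from the post; schema details assumed.
function needsBackfill(row: DecisionRow): boolean {
  const touchesPeople = row.touchedTables.some((t) => t.includes("person"));
  const touchesFinance = row.touchedTables.some((t) =>
    t.includes("financial_event"),
  );
  const ariaUserFacing =
    row.agentId === "aria" && row.invocationType === "user_request";
  return touchesPeople || touchesFinance || ariaUserFacing;
}
```

Anything the predicate rejects stays untraced forever, which is the honest cost of the gap.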
## Closing
The Nexus doctrine line is:
> Nexus is best understood as a data and memory platform with bounded reasoning agents on top, not as an unbounded autonomous swarm.
The corollary I hadn't written down until now is that bounded reasoning is only bounded if you can see the reasoning. A trace_id that points to a row with no LLM-level lineage isn't bounded reasoning. It's bounded execution with hidden reasoning behind it.
The agents I was most worried about turned out to be the ones I was least able to inspect. That's the inverse of the order I would have chosen.
The fix is straightforward. The lesson is that I had to write a query to find out.
The public architectural repository for Nexus is available here: github.com/niclydon/nexus-public.
One important clarification: nexus-public intentionally ships without hard dependencies on vendor-specific observability and evaluation tooling such as Langfuse, Promptfoo, and the other operational integrations I use in the live runtime. The public repo is an architectural reference implementation — agents, processors, MCP tooling, schemas, orchestration boundaries, and execution patterns — so someone can wire in whichever tracing and observability stack they prefer rather than inheriting mine by default.
The Langfuse integration, executor runtime paths, and audit tooling discussed in this post come from the private operational implementation that powers the platform day to day.

