Your AI Agent Has Amnesia. Here's the Infrastructure It's Missing.
You ship an AI agent. It runs overnight. Morning comes. You ask: what did it do? What did it decide? Why? What evidence did it use? How many tokens did it burn? Did it contradict itself?
You do not have answers to these questions. You have logs.
Logs are not knowledge. Logs are text with timestamps. They tell you what happened in the order it happened. They do not tell you what the agent believed, at what confidence, based on what evidence, or whether a later finding invalidated an earlier conclusion.
This is the gap. Not inference. Not retrieval. Not prompt engineering. The gap is governed state -- an AI agent that can assert what it knows, trace why it knows it, revise its knowledge when evidence changes, and operate within enforced boundaries the entire time.
Limen is the infrastructure for that.
What Limen Is
Limen is a cognitive operating system. Deterministic infrastructure hosting stochastic cognition. The name is Latin for threshold -- the boundary where governed infrastructure meets AI reasoning.
Practically: it is a TypeScript engine that gives your AI agents structured knowledge, evidence-backed claims, relationship graphs, mission lifecycle governance, token budget enforcement, audit trails, and working memory. One production dependency. SQLite underneath. Apache-2.0.
npm install limen-ai
It is not a vector database. It is not a key-value store. It is not a wrapper around LLM APIs (though it does that too, across six providers, with zero SDKs). It is the operating system layer that sits between your agent's reasoning and the mutations that reasoning produces.
The core principle: intelligence proposes, infrastructure decides. An agent cannot directly mutate state. It proposes through 16 formally defined system calls. The engine validates every proposal before any state change occurs.
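The propose/validate split can be sketched as a small gate. This is an illustrative sketch only, not Limen's internals: the names `Proposal`, `Budget`, and `validateProposal` are hypothetical, standing in for the real engine's 16 system calls.

```typescript
// Hypothetical sketch of "intelligence proposes, infrastructure decides".
// The agent emits a Proposal; only the infrastructure applies state changes.
type Proposal = {
  syscall: string;          // which system call the agent wants to invoke
  tokensRequested: number;  // estimated cost of the action
};

type Budget = { tokensRemaining: number };

type Verdict = { ok: true } | { ok: false; reason: string };

function validateProposal(
  p: Proposal,
  budget: Budget,
  allowed: Set<string>,
): Verdict {
  // Reject anything outside the formally defined call surface.
  if (!allowed.has(p.syscall)) {
    return { ok: false, reason: `unknown syscall: ${p.syscall}` };
  }
  // Budget checks happen before any mutation, not after.
  if (p.tokensRequested > budget.tokensRemaining) {
    return { ok: false, reason: 'token budget exceeded' };
  }
  return { ok: true };
}

const allowed = new Set(['claim.assert', 'claim.relate']);
console.log(validateProposal(
  { syscall: 'claim.assert', tokensRequested: 500 },
  { tokensRemaining: 1_000 },
  allowed,
)); // accepted

console.log(validateProposal(
  { syscall: 'db.write', tokensRequested: 10 },
  { tokensRemaining: 1_000 },
  allowed,
)); // rejected: not one of the defined system calls
```

The essential property is that the verdict is computed by code the agent cannot modify, before any write occurs.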
Quick Start: Three Lines to Chat
import { createLimen } from 'limen-ai';
const limen = await createLimen();
console.log((await limen.chat('What is quantum computing?')).text);
await limen.shutdown();
Set an API key environment variable (ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY, GROQ_API_KEY, MISTRAL_API_KEY, or run Ollama locally). That is the only setup. createLimen() auto-detects your provider, generates a dev encryption key, and provisions a local SQLite database.
Those three lines are not a thin wrapper. When you ran them, the engine created an AES-256-GCM encrypted database with WAL mode, recorded a hash-chained audit entry for every state mutation, enforced RBAC authorization, tracked token usage against a budget ledger, and ran the request through circuit breakers with stall detection. The governance layer runs whether you configure it or not.
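Hash chaining is the general technique behind a tamper-evident audit trail. Here is a minimal self-contained sketch of the idea (illustrative, not Limen's code): each entry's hash covers the previous entry's hash, so altering any historical entry invalidates every hash after it.

```typescript
import { createHash } from 'node:crypto';

type AuditEntry = { seq: number; action: string; prevHash: string; hash: string };

function appendEntry(log: AuditEntry[], action: string): AuditEntry {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : 'GENESIS';
  const seq = log.length;
  // The hash commits to this entry AND the previous hash, forming a chain.
  const hash = createHash('sha256')
    .update(`${seq}:${action}:${prevHash}`)
    .digest('hex');
  const entry = { seq, action, prevHash, hash };
  log.push(entry);
  return entry;
}

function verifyChain(log: AuditEntry[]): boolean {
  return log.every((e, i) => {
    const prevHash = i > 0 ? log[i - 1].hash : 'GENESIS';
    const expected = createHash('sha256')
      .update(`${e.seq}:${e.action}:${prevHash}`)
      .digest('hex');
    return e.prevHash === prevHash && e.hash === expected;
  });
}

const log: AuditEntry[] = [];
appendEntry(log, 'agent.register');
appendEntry(log, 'claim.assert');
console.log(verifyChain(log)); // true

log[0].action = 'tampered';    // rewrite history...
console.log(verifyChain(log)); // false -- the chain detects it
```

A plain log file offers no such guarantee: edit a line, and nothing downstream notices.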
The Claim Protocol: Knowledge That Knows Where It Came From
This is the part that matters. Most AI agents operate on strings. Limen agents operate on claims -- structured assertions with subjects, predicates, confidence scores, evidence references, and temporal anchoring.
import { createLimen } from 'limen-ai';
import type { TenantId, AgentId, MissionId, TaskId } from 'limen-ai';
const limen = await createLimen();
// Assert a knowledge claim with evidence
const claim1 = limen.claims.assertClaim({
tenantId: null as unknown as TenantId,
agentId: 'analyst' as AgentId,
missionId: 'mission-1' as MissionId,
taskId: 'task-1' as TaskId,
subject: 'European EV Market',
predicate: 'market_size_2025',
object: '$45.3 billion',
confidence: 0.87,
evidence: {
type: 'artifact',
artifactId: 'report-001',
artifactVersion: 1,
excerpt: 'Market analysis report Section 3.2',
},
});
That claim is now a first-class object in the system. It has an identity. It has provenance. It can be queried, related to other claims, and -- critically -- superseded when better evidence arrives.
Relationships: Knowledge Is a Graph
Claims do not exist in isolation. They form a typed graph:
// Assert a second claim
const claim2 = limen.claims.assertClaim({
tenantId: null as unknown as TenantId,
agentId: 'analyst' as AgentId,
missionId: 'mission-1' as MissionId,
taskId: 'task-1' as TaskId,
subject: 'European EV Market',
predicate: 'growth_rate_2025',
object: '23% YoY',
confidence: 0.82,
evidence: {
type: 'artifact',
artifactId: 'report-001',
artifactVersion: 1,
excerpt: 'Market analysis report Section 3.4',
},
});
// Relate: market size supports growth rate claim
if (claim1.ok && claim2.ok) {
limen.claims.relateClaims({
tenantId: null as unknown as TenantId,
agentId: 'analyst' as AgentId,
missionId: 'mission-1' as MissionId,
sourceClaimId: claim1.value.claimId,
targetClaimId: claim2.value.claimId,
relationship: 'supports',
});
}
Four relationship types: supports, contradicts, supersedes, derived_from. When new evidence arrives, you do not delete old claims. You supersede them. The original claim remains in the graph with full provenance. Every revision is traceable.
This is not a design preference. There is no retractClaim on the public API. Limen enforces append-only knowledge evolution by making supersession the only path forward.
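The append-only semantics can be shown with a toy in-memory store (hypothetical types, not Limen's internals): superseding a claim keeps it in the store but removes it from the current view.

```typescript
// Sketch of append-only supersession. Nothing is ever deleted;
// a supersedes pointer redirects the "current knowledge" view.
type Claim = { id: string; object: string; supersededBy?: string };

const store = new Map<string, Claim>();

function assertClaim(id: string, object: string): Claim {
  const claim: Claim = { id, object };
  store.set(id, claim);
  return claim;
}

function supersede(oldId: string, newId: string): void {
  const old = store.get(oldId);
  if (!old) throw new Error(`no such claim: ${oldId}`);
  old.supersededBy = newId; // old claim stays in the store, fully traceable
}

// Current view excludes superseded claims; history keeps everything.
function current(): Claim[] {
  return [...store.values()].filter((c) => c.supersededBy === undefined);
}

assertClaim('c1', '$45.3 billion');
assertClaim('c2', '$48.1 billion'); // a revised estimate arrives
supersede('c1', 'c2');

console.log(current().map((c) => c.id)); // ['c2']
console.log(store.has('c1'));            // true -- still in the graph
```

This is why there is no delete path: the only way to change what the agent "knows" is to add to the record.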
Querying: What Does the Agent Know?
const results = limen.claims.queryClaims({
tenantId: null as unknown as TenantId,
subject: 'European EV Market',
});
if (results.ok) {
for (const claim of results.value.claims) {
console.log(`${claim.predicate}: ${claim.object} (${claim.confidence})`);
}
}
Filter by subject, predicate, mission, confidence threshold, or any combination. The query engine returns claims with their full evidence chains and relationship graphs.
Missions: Governed Autonomy
An agent does not get a blank check. It gets a mission with a budget and a deadline.
// Register an agent with declared capabilities
await limen.agents.register({
name: 'researcher',
capabilities: ['web', 'data'],
});
// Create a budget-governed mission
const mission = await limen.missions.create({
agent: 'researcher',
objective: 'Analyze the renewable energy market in Europe',
constraints: {
tokenBudget: 50_000,
deadline: new Date(Date.now() + 3_600_000).toISOString(),
capabilities: ['web', 'data'],
maxTasks: 10,
},
deliverables: [
{ type: 'report', name: 'market-analysis' },
],
});
// Monitor and wait
mission.on('checkpoint', (payload) => console.log('Checkpoint:', payload));
const result = await mission.wait();
console.log(`Tokens used: ${result.resourcesConsumed.tokens}`);
Missions transition through a governed lifecycle: CREATED -> PLANNING -> EXECUTING -> REVIEWING -> COMPLETED. The agent decomposes objectives into task graphs. Each task runs within the mission's budget. If the budget runs out, execution stops. If the deadline passes, execution stops. The agent cannot override these boundaries -- they are structural, not conventional.
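The lifecycle above amounts to a state machine with a fixed transition table. A minimal sketch, with hypothetical helper names (this is not Limen's implementation, only the enforcement pattern):

```typescript
// Missions may only move along declared edges; anything else is rejected
// structurally, not by convention.
type MissionState = 'CREATED' | 'PLANNING' | 'EXECUTING' | 'REVIEWING' | 'COMPLETED';

const transitions: Record<MissionState, MissionState[]> = {
  CREATED:   ['PLANNING'],
  PLANNING:  ['EXECUTING'],
  EXECUTING: ['REVIEWING'],
  REVIEWING: ['COMPLETED'],
  COMPLETED: [], // terminal state
};

function advance(from: MissionState, to: MissionState): MissionState {
  if (!transitions[from].includes(to)) {
    throw new Error(`illegal transition: ${from} -> ${to}`);
  }
  return to;
}

let state: MissionState = 'CREATED';
state = advance(state, 'PLANNING');  // ok
state = advance(state, 'EXECUTING'); // ok
// advance(state, 'COMPLETED');      // throws: cannot skip REVIEWING
```

Because the table lives in the infrastructure, an agent cannot argue its way from EXECUTING straight to COMPLETED.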
Agent trust levels provide another governance layer: untrusted -> probationary -> trusted -> admin. A newly registered agent starts untrusted. Trust is earned, not declared.
MCP Integration
Limen ships a Model Context Protocol server as a separate package. It exposes the engine as tools that any MCP-compatible AI system can call.
npm install limen-mcp
Add to your MCP configuration:
{
"mcpServers": {
"limen": {
"command": "npx",
"args": ["limen-mcp"]
}
}
}
The MCP server exposes two tiers:
Low-level tools -- direct engine access: limen_health, limen_agent_register, limen_agent_list, limen_mission_create, limen_claim_assert, limen_claim_query, limen_wm_write, limen_wm_read.
High-level knowledge tools -- session-managed with governance protection: limen_session_open, limen_remember, limen_recall, limen_connect, limen_reflect, limen_scratch, limen_session_close.
The high-level tools wrap the claim protocol into a simpler interface. limen_remember asserts a claim. limen_recall queries claims with superseded claims automatically excluded. limen_reflect batch-asserts categorized learnings (decisions, patterns, warnings, findings). limen_connect creates governed relationships between claims.
Your AI agent gets persistent, structured, evidence-backed knowledge through standard tool calls. No custom integration code.
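As an illustration, a limen_remember tool call from an MCP client might look roughly like the following. The argument schema here is an assumption inferred from the claim protocol fields shown earlier (subject, predicate, object, confidence), not a documented contract; consult the limen-mcp package for the actual tool schemas.

```json
{
  "name": "limen_remember",
  "arguments": {
    "subject": "European EV Market",
    "predicate": "market_size_2025",
    "object": "$45.3 billion",
    "confidence": 0.87
  }
}
```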
Architecture: Four Layers, One Direction
+---------------------------------------------+
| API Surface (L4) | createLimen(), chat(), infer(),
| Public interface. Composes everything. | sessions, agents, missions
+---------------------------------------------+
| Orchestration (L2) | Missions, task graphs, budgets,
| Cognitive governance. 16 system calls. | claims, checkpoints, artifacts
+---------------------------------------------+
| Substrate (L1.5) | LLM gateway, transport engine,
| Execution infrastructure. | worker pool, scheduling
+---------------------------------------------+
| Kernel (L1) | SQLite (WAL), audit trail, RBAC,
| Persistence and trust. | crypto, events, rate limiting
+---------------------------------------------+
Dependencies flow down only. The Kernel knows nothing about AI. The Substrate knows nothing about missions. The Orchestration layer validates every agent proposal before any state mutation reaches the Kernel. The API Surface composes these layers into a single frozen, immutable engine instance.
Six LLM providers. Zero SDKs. All communication is raw HTTP via fetch -- circuit breakers, exponential backoff, streaming with stall detection, TLS enforcement. No transitive dependency trees. No version conflicts between provider packages.
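Exponential backoff over raw fetch is a small amount of code, which is part of why no SDKs are needed. A sketch of the retry pattern described above, with illustrative parameter values (these are not Limen's actual settings):

```typescript
// Full-jitter exponential backoff: delay grows 250, 500, 1000, ... ms,
// capped, with randomness so concurrent retries spread apart.
function backoffDelay(attempt: number, baseMs = 250, capMs = 10_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exp;
}

async function fetchWithRetry(url: string, maxAttempts = 4): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(url);
      if (res.status < 500) return res; // only retry server-side failures
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastError = err; // network failure: retry
    }
    await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
  }
  throw lastError;
}
```

A production gateway adds the pieces the article lists on top of this loop: circuit breakers that stop retrying a failing provider entirely, and stall detection on streams that stop emitting tokens.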
| Provider | Streaming | Auth |
|---|---|---|
| Anthropic | SSE | Bearer token |
| OpenAI | SSE | Bearer token |
| Google Gemini | SSE | Query param |
| Groq | SSE | Bearer token |
| Mistral | SSE | Bearer token |
| Ollama | NDJSON | None (local) |
The Numbers
- 1 production dependency (better-sqlite3)
- 99 invariants specified and enforced
- 16 system calls defining the governance boundary
- 45 failure mode defenses catalogued
- 2,447+ passing tests
- 0 provider SDKs
- 6 LLM providers supported
What Limen Is Not
Limen is a new project with near-zero community adoption. It does not have the ecosystem breadth of LangChain or the production deployment track record of the Vercel AI SDK. If your primary need is calling an LLM or composing a retrieval pipeline, those tools are more appropriate and more mature.
Limen occupies a different architectural position. It is for when you need enforced governance over agent behavior -- budgets that cannot be exceeded, audit trails that cannot be tampered with, knowledge that cannot be silently mutated, and a structural boundary between what an agent wants to do and what the infrastructure permits.
Get Started
npm install limen-ai
GitHub: github.com/solishq/limen (Apache-2.0)
The examples/ directory contains eight progressive examples from hello-world through governed missions. The README has full configuration reference, architecture documentation, and a trust surface with line-level evidence for every claim.
If you are building AI agents that need to remember what they know, trace why they know it, and operate within enforced boundaries -- Limen is the infrastructure for that.
Built by SolisHQ