Cloudflare Agent Memory enters public beta today. It solves a real problem: agents that die between sessions lose context, state, and continuity. Durable Objects backed by KV give you persistent agent state that survives container boundaries.
But persistence without trust is a different problem.
An agent with write access to Cloudflare Agent Memory can store anything — credentials, session tokens, scraped data, intermediate reasoning chains. If you're accepting memory writes from an external agent, or letting an external agent read from your memory store, you're making a trust decision. The question is whether you're making it deliberately or by default.
This post shows how to use AgentLair's memory-trust endpoint to make that decision deliberately.
What Memory-Scoped AATs Are
AgentLair issues Agent Authentication Tokens (AATs): short-lived JWTs that cryptographically attest what an agent is permitted to do. A memory-scoped AAT looks like this:
// POST https://agentlair.dev/v1/tokens/issue
const response = await fetch("https://agentlair.dev/v1/tokens/issue", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.AGENTLAIR_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    agent_id: "acc_your_agent_id",
    scopes: ["memory:read"], // or memory:write
    audience: "https://your-kv-store.example.com",
    ttl: 300, // 5 minutes
  }),
});

const { token } = await response.json();
// token is a signed JWT. Pass it to whatever service guards your KV store.
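On the consuming side, the holder presents that AAT to whatever service fronts the KV namespace. A minimal sketch, assuming the guard accepts the token as a standard Bearer header; the guard URL shape and `/memory/:key` path here are hypothetical, not a documented API:

```typescript
// Build a request against a hypothetical KV guard service that checks the AAT
// before touching the namespace. Only the Bearer convention is assumed here;
// substitute your guard's actual URL and auth scheme.
function buildGuardedMemoryRequest(
  guardBaseUrl: string,
  key: string,
  aat: string
): { url: string; init: { method: string; headers: Record<string, string> } } {
  return {
    url: `${guardBaseUrl}/memory/${encodeURIComponent(key)}`,
    init: {
      method: "GET",
      headers: { Authorization: `Bearer ${aat}` },
    },
  };
}

// const { url, init } = buildGuardedMemoryRequest("https://your-kv-store.example.com", "session:42", token);
// const res = await fetch(url, init);
```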
When an agent issues an AAT with memory:read or memory:write scope, AgentLair writes a signed audit event to its tamper-evident audit trail. That event records: which agent, which scope, which audience URL, at what time.
Over hundreds of sessions, that audit trail becomes behavioral data — a record of how this agent actually uses memory, not just what it claims.
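The event shape below is illustrative (the field names are assumptions, not AgentLair's documented schema), but it captures the four facts the trail records per token issuance:

```typescript
// Illustrative shape of one signed audit event; the real schema may differ.
interface MemoryAuditEvent {
  agent_id: string;                      // which agent issued the token
  scope: "memory:read" | "memory:write"; // which memory scope was granted
  audience: string;                      // which audience URL the token was bound to
  issued_at: string;                     // at what time (ISO 8601)
  signature: string;                     // detached signature, verifiable against the JWKS
}

const example: MemoryAuditEvent = {
  agent_id: "acc_external_agent_id",
  scope: "memory:read",
  audience: "https://your-kv-store.example.com",
  issued_at: "2026-04-30T08:00:00Z",
  signature: "<base64url-encoded signature>",
};
```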
The Memory-Trust Endpoint
GET /v1/agents/:id/memory-trust aggregates those audit events into a behavioral profile:
import { payWithX402 } from "@x402/client";

const agentId = "acc_external_agent_id";
const trustResponse = await fetch(
  `https://agentlair.dev/v1/agents/${agentId}/memory-trust`,
  {
    headers: {
      "X-PAYMENT": await payWithX402({
        amount: "0.01",
        currency: "USDC",
        network: "base",
      }),
    },
  }
);
const trust = await trustResponse.json();
Response shape:
{
  "agent_id": "acc_external_agent_id",
  "memory_behavior": {
    "total_memory_tokens": 847,
    "memory_read_count": 791,
    "memory_write_count": 56,
    "read_write_ratio": 0.934,
    "consistency_score": 0.81,
    "access_pattern": "read_heavy",
    "behavioral_summary": "Agent issued 847 memory-scoped AATs over 107 day(s). Pattern: read_heavy, access cadence: consistent. Consistency score: 81%."
  },
  "verified_by": "agentlair.dev",
  "jwks_uri": "https://agentlair.dev/.well-known/jwks.json",
  "audit_source": "https://agentlair.dev/v1/audit?account=acc_external_agent_id&category=memory",
  "generated_at": "2026-04-30T08:00:00Z"
}
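A typed view of that payload makes downstream checks safer. The fields mirror the response above; the small helper just re-derives `read_write_ratio` from the raw counts (791 / 847 ≈ 0.934), which is a cheap sanity check before you act on it:

```typescript
interface MemoryBehavior {
  total_memory_tokens: number;
  memory_read_count: number;
  memory_write_count: number;
  read_write_ratio: number; // reads / (reads + writes)
  consistency_score: number;
  access_pattern: "read_heavy" | "write_heavy" | "balanced";
  behavioral_summary: string;
}

// Re-derive the ratio from the counts; a mismatch would suggest a stale
// or malformed profile.
function readWriteRatio(reads: number, writes: number): number {
  return reads / (reads + writes);
}
```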
The data comes from AgentLair's audit trail, not from self-report. The agent cannot modify it.
Making an Access Decision
Here's a complete trust gate for Cloudflare Agent Memory. Before letting an external agent read from your KV namespace, check its behavioral profile:
import { payWithX402 } from "@x402/client";

interface MemoryTrustPolicy {
  minTotalTokens: number;
  minConsistencyScore: number;
  allowedPatterns: string[];
  maxWriteRatio: number;
}

async function evaluateMemoryTrust(
  agentId: string,
  policy: MemoryTrustPolicy
): Promise<{ trusted: boolean; reason: string }> {
  const response = await fetch(
    `https://agentlair.dev/v1/agents/${agentId}/memory-trust`,
    {
      headers: {
        "X-PAYMENT": await payWithX402({
          amount: "0.01",
          currency: "USDC",
          network: "base",
        }),
      },
    }
  );
  if (!response.ok) {
    return { trusted: false, reason: `Trust endpoint error: ${response.status}` };
  }

  const { memory_behavior } = await response.json();

  if (memory_behavior.total_memory_tokens < policy.minTotalTokens) {
    return {
      trusted: false,
      reason: `Insufficient history: ${memory_behavior.total_memory_tokens} tokens < ${policy.minTotalTokens} required`,
    };
  }
  if (memory_behavior.consistency_score < policy.minConsistencyScore) {
    return {
      trusted: false,
      reason: `Bursty access pattern: score ${memory_behavior.consistency_score} < ${policy.minConsistencyScore} required`,
    };
  }
  if (!policy.allowedPatterns.includes(memory_behavior.access_pattern)) {
    return {
      trusted: false,
      reason: `Unexpected access pattern: ${memory_behavior.access_pattern}`,
    };
  }
  // read_write_ratio is the read share, so the write share is its complement.
  const writeRatio = 1 - memory_behavior.read_write_ratio;
  if (writeRatio > policy.maxWriteRatio) {
    return {
      trusted: false,
      reason: `Write ratio ${writeRatio.toFixed(2)} exceeds max ${policy.maxWriteRatio}`,
    };
  }

  return { trusted: true, reason: memory_behavior.behavioral_summary };
}

// In your Cloudflare Worker:
interface Env {
  AGENT_MEMORY: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const agentId = request.headers.get("X-Agent-Id");
    if (!agentId) return new Response("Missing agent ID", { status: 401 });

    const { trusted, reason } = await evaluateMemoryTrust(agentId, {
      minTotalTokens: 50,
      minConsistencyScore: 0.4,
      allowedPatterns: ["read_heavy", "balanced"],
      maxWriteRatio: 0.3,
    });
    if (!trusted) {
      return new Response(JSON.stringify({ error: "Untrusted agent", reason }), {
        status: 403,
        headers: { "Content-Type": "application/json" },
      });
    }

    const value = await env.AGENT_MEMORY.get(agentId);
    if (value === null) return new Response("Not found", { status: 404 });
    return new Response(value, { status: 200 });
  },
};
minTotalTokens: 50 requires at least 50 prior memory interactions. An agent with no behavioral history gets no access. Fifty is a low bar, but it's better than zero.
minConsistencyScore: 0.4 filters on a score derived from the coefficient of variation of inter-access gaps. A score below 0.4 indicates bursty access: the agent hammers memory in short windows, then goes quiet. A consistent cadence suggests steady operational use.
allowedPatterns: ["read_heavy", "balanced"] blocks write-heavy agents. One that has historically been read-heavy and is now requesting write access is worth scrutinizing.
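To make the thresholds concrete, here is the gate's logic applied to the sample profile from earlier, pulled out as a pure function (same checks as above, no network): 847 ≥ 50 tokens, 0.81 ≥ 0.4 consistency, read_heavy is allowed, and the write share 1 − 0.934 ≈ 0.066 is under 0.3, so the sample agent passes.

```typescript
interface Policy {
  minTotalTokens: number;
  minConsistencyScore: number;
  allowedPatterns: string[];
  maxWriteRatio: number;
}
interface Behavior {
  total_memory_tokens: number;
  read_write_ratio: number;
  consistency_score: number;
  access_pattern: string;
}

// Mirrors the four checks in evaluateMemoryTrust, minus the HTTP round trip.
function passesPolicy(b: Behavior, p: Policy): boolean {
  return (
    b.total_memory_tokens >= p.minTotalTokens &&
    b.consistency_score >= p.minConsistencyScore &&
    p.allowedPatterns.includes(b.access_pattern) &&
    1 - b.read_write_ratio <= p.maxWriteRatio
  );
}

const sample: Behavior = {
  total_memory_tokens: 847,
  read_write_ratio: 0.934,
  consistency_score: 0.81,
  access_pattern: "read_heavy",
};
const policy: Policy = {
  minTotalTokens: 50,
  minConsistencyScore: 0.4,
  allowedPatterns: ["read_heavy", "balanced"],
  maxWriteRatio: 0.3,
};
```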
The Architecture
Three layers, each doing its job:
Cloudflare Agent Memory handles persistence. Durable Objects, KV, the Workers runtime — Cloudflare's infrastructure keeps your data alive across sessions. This is storage, not identity.
AgentLair handles identity and behavioral attestation. It issues scoped tokens, maintains the audit trail, and aggregates behavioral patterns. The memory-trust endpoint surfaces what an agent has actually done across every organization that uses AgentLair — not what it claims.
Your trust gate makes the access decision. The policy lives with you. You decide what consistency score is sufficient, what access patterns you'll accept, how much history you require.
The payment for the trust endpoint (0.01 USDC via x402) is deliberate. It's not a revenue mechanism — it's anti-gaming. An attacker querying their own agent's trust profile to calibrate exfiltration pays per query. The cost is low for legitimate use; it accumulates for systematic probing.
What the Audit Source Gives You
Every memory-trust response includes an audit_source URL pointing to the raw audit trail: the signed events that produced the behavioral profile. The JWKS URI at agentlair.dev/.well-known/jwks.json lets you verify the signatures on each event.
Trust but verify. Verify the verifier.
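A sketch of that verification step, assuming each audit event is serialized as a compact JWS (header.payload.signature). Decoding the protected header to find the key ID is pure; the actual signature check, shown in comments, would use a JOSE library against the published key set:

```typescript
// Extract the key ID from a compact JWS so you know which JWKS key to
// verify against. The compact-JWS event format is an assumption.
function jwsKeyId(compactJws: string): string | undefined {
  const [headerB64] = compactJws.split(".");
  const header = JSON.parse(
    Buffer.from(headerB64, "base64url").toString("utf8")
  );
  return header.kid;
}

// Full verification with a JOSE library (e.g. jose) would look roughly like:
//   const jwks = createRemoteJWKSet(new URL("https://agentlair.dev/.well-known/jwks.json"));
//   await jwtVerify(eventJws, jwks); // throws if the signature doesn't check out
```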
When to Use This
Use it when accepting memory writes from external agents. An agent you've never interacted with wants to write to your KV store. What's its history? Is it doing what it claims?
Use it before granting memory:read to sensitive namespaces. Not all KV reads are equal. An agent reading cached search results is different from one reading session state or intermediate reasoning chains.
Use it as part of your onboarding flow for external agents. If the history is thin or the patterns are unusual, require a lower-permission trial period before granting full memory access.
Don't use it as a substitute for other checks. Memory-trust tells you about memory access patterns. It doesn't cover API call patterns, tool invocations, or financial transactions. Each scope has its own behavioral signal. Combine them.
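Combining signals can be as simple as fetching each scope's profile, evaluating each against its own policy, and failing closed on the first miss. A minimal sketch; the idea of parallel per-scope trust endpoints beyond memory-trust is an assumption here:

```typescript
interface TrustResult {
  trusted: boolean;
  reason: string;
}

// Combine per-scope trust results (memory, API calls, payments, ...).
// Returns the first failure so the caller can log why access was denied.
function combineTrust(results: TrustResult[]): TrustResult {
  const failed = results.find((r) => !r.trusted);
  return failed ?? { trusted: true, reason: "All trust signals passed" };
}
```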
Cloudflare Agent Memory solves the persistence problem. Persistent agents that behave badly are worse than stateless ones — they accumulate state, and that state compounds. The behavioral trust layer exists to keep persistence from becoming a liability.
The endpoint is live. Get started at agentlair.dev.