On February 20, 2026, NIST's Center for AI Standards and Innovation (CAISI) launched the AI Agent Standards Initiative — the U.S. government's first formal effort to standardize security for autonomous AI agents. The initiative includes an RFI on AI Agent Security (docket NIST-2025-0035, comments due March 9), a draft concept paper on Agent Identity and Authorization from NCCoE (due April 2), and upcoming listening sessions for healthcare, finance, and education.
This is a big deal. The federal government is saying: AI agents that take autonomous actions present unique security challenges, and we need standards for them.
We agree. That's why we've been shipping those standards as open-source code since January 2026.
Here's every security concern NIST raised — and the ClawMoat module that already addresses it.
1. Constraining Agent Access in Deployment Environments
📋 What NIST Says:
The RFI asks about "interventions in deployment environments to address security risks affecting AI agent systems, including methods to **constrain and monitor the extent of agent access* in the deployment environment."*
— NIST CAISI RFI, January 2026
🏰 What ClawMoat Ships: Host Guardian — four permission tiers that constrain what an AI agent can do on the host system, from full lockdown to controlled access.
import { createHostGuardian } from 'clawmoat';
const guardian = createHostGuardian({
tier: 'restricted', // lockdown | restricted | standard | trusted
rules: {
filesystem: { writable: ['/tmp', './workspace'], blocked: ['~/.ssh', '~/.aws'] },
network: { allowedHosts: ['api.openai.com', 'github.com'] },
processes: { blocked: ['curl', 'wget', 'nc'] },
env: { redact: ['AWS_SECRET_ACCESS_KEY', 'DATABASE_URL'] }
}
});
// Every agent action passes through the guardian
const result = await guardian.evaluate({
action: 'exec',
command: 'cat /etc/passwd'
});
// → { allowed: false, reason: 'Path /etc/passwd outside permitted directories' }
Four tiers from lockdown (zero external access) to trusted (full access with audit logging). Most deployments run restricted.
2. Tool Validation and Authorization
📋 What NIST Says:
The NCCoE concept paper focuses on how to "identify, manage, and authorize access and actions taken by software agents, including AI agents" — specifically, controlling what tools agents can invoke and what those tools can do.
— NCCoE Draft Concept Paper, February 2026
🏰 What ClawMoat Ships: McpFirewall — intercepts every MCP tool call with allowlisting, read-only enforcement, argument validation, and rate limiting.
import { McpFirewall } from 'clawmoat';
const firewall = new McpFirewall({
tools: {
'database_query': { mode: 'read-only', blocked: ['DROP', 'DELETE', 'TRUNCATE'] },
'file_write': { allowed: false },
'web_search': { rateLimit: { max: 10, windowMs: 60000 } },
'send_email': { requireApproval: true }
},
defaultPolicy: 'deny' // Unknown tools are blocked by default
});
const safeMcp = firewall.wrap(mcpServer);
McpFirewall recognizes 29 write-operation patterns across SQL, filesystem, and API calls.
3. Adversarial Data and Prompt Injection
📋 What NIST Says:
"This includes risks from models interacting with adversarial data (such as in **indirect prompt injection), risks from the use of insecure models (such as models that have been subject to data poisoning)."
— NIST CAISI RFI, January 2026
🏰 What ClawMoat Ships: Prompt Injection Scanner — pattern-based detection of injection attempts in tool outputs, user inputs, and retrieved documents before they reach the model.
import { PromptInjectionScanner } from 'clawmoat';
const scanner = new PromptInjectionScanner();
const toolOutput = await mcpTool.call('web_scrape', { url: untrustedUrl });
const scan = scanner.scan(toolOutput);
if (scan.injectionDetected) {
console.log(scan.threats);
// → [{ type: 'instruction_override', pattern: 'ignore previous instructions',
// severity: 'critical', location: 'line 47' }]
}
The scanner detects role hijacking, instruction overrides, data exfiltration attempts, and encoding-based evasion. It runs in <2ms per scan.
4. Data Protection and Sensitive Information
📋 What NIST Says:
The Initiative highlights sector-specific concerns in healthcare, finance, and education, with listening sessions planned to identify barriers to secure AI adoption in these regulated industries.
— NIST AI Agent Standards Initiative, February 2026
🏰 What ClawMoat Ships: Secret Scanner and FinanceGuard — field-level redaction of credentials, PII, and financial data before it leaves the agent's context.
import { SecretScanner, FinanceGuard } from 'clawmoat';
const secrets = new SecretScanner();
const result = secrets.scan(agentOutput);
// → Detects: AWS keys, GitHub tokens, JWTs, database URLs, SSNs, credit cards
const finance = new FinanceGuard({
redact: ['account_number', 'routing_number', 'ssn', 'credit_card'],
audit: true,
allowFields: ['transaction_date', 'amount', 'category']
});
const safeOutput = finance.process(agentResponse);
// "Transfer $5,000 from account 7834-2291-0054 routing 021000021"
// → "Transfer $5,000 from account [REDACTED] routing [REDACTED]"
FinanceGuard generates SOX and PCI-DSS compliance reports automatically.
5. Supply Chain Security
📋 What NIST Says:
"Risks from the use of insecure models (such as models that have been subject to **data poisoning)" — and more broadly, the NIST AI RMF (AI 600-1) GenAI Profile identifies supply chain integrity as a core risk management action area, with 200+ suggested actions.
— NIST AI 600-1 GenAI Profile
🏰 What ClawMoat Ships: Skill Integrity Checker — hash verification and behavioral analysis of AI agent skills/plugins before installation.
import { SkillIntegrityChecker } from 'clawmoat';
const checker = new SkillIntegrityChecker();
const audit = await checker.scan('./skills/untrusted-plugin/');
// Checks for 14 suspicious patterns:
// - Obfuscated code (base64 decode, eval, Function constructor)
// - Network exfiltration (fetch to unknown hosts, DNS tunneling)
// - File system access outside workspace
// - Environment variable harvesting
// - Cryptocurrency mining signatures
console.log(audit);
// → { safe: false, threats: 3, details: [...] }
6. Monitoring and Audit Trails
📋 What NIST Says:
The Initiative's three strategic pillars include "advancing research in areas of AI agent **security and identity* to enable new use cases and to promote trusted adoption across sectors."* The RFI specifically asks about "methods for **measuring the security* of AI agent systems."*
— NIST AI Agent Standards Initiative, February 2026
🏰 What ClawMoat Ships: Network Egress Logger and full audit trail with compliance report generation.
import { NetworkEgressLogger, ComplianceReporter } from 'clawmoat';
const egress = new NetworkEgressLogger({
logFile: './audit/network-egress.jsonl',
alertOn: { unknownHosts: true, highVolume: true, unusualPorts: true }
});
const reporter = new ComplianceReporter({
framework: 'SOX',
period: 'monthly',
include: ['tool_calls', 'data_access', 'redactions', 'blocked_actions']
});
const report = await reporter.generate();
// → Structured report: 4,291 tool calls, 847 redactions,
// 23 blocked actions, 0 data exfiltration attempts
The Full Mapping
| NIST Concern | Document | ClawMoat Module | Status |
|---|---|---|---|
| Constrain agent access | CAISI RFI | Host Guardian (4 tiers) | ✅ Shipping |
| Tool authorization | NCCoE Concept Paper | McpFirewall | ✅ Shipping |
| Prompt injection | CAISI RFI | Prompt Injection Scanner | ✅ Shipping |
| Data protection (PII/financial) | Listening Sessions | Secret Scanner + FinanceGuard | ✅ Shipping |
| Supply chain integrity | AI 600-1 / RFI | Skill Integrity Checker | ✅ Shipping |
| Security measurement | CAISI RFI | Network Egress Logger + Audit | ✅ Shipping |
| Agent identity | NCCoE Concept Paper | Audit trail per-agent | ✅ Shipping |
| Sector-specific (finance) | Listening Sessions | FinanceGuard + SOX/PCI reports | ✅ Shipping |
What This Means
NIST is doing the right thing. AI agents that can "work autonomously for hours, write and debug code, manage emails and calendars, and shop for goods" (their words) need security standards. The RFI, the NCCoE concept paper, the listening sessions — this is how good policy gets made.
But standards take time. The RFI closes March 9. The concept paper comments close April 2. Guidelines will follow months or years later. Meanwhile, agents are running in production right now, handling real data, making real API calls, accessing real systems.
We're not waiting for the standards. We're shipping them.
ClawMoat is open-source, MIT-licensed, and works with any AI agent framework. Every module described above is in npm today. If NIST's final guidelines recommend something we haven't built yet, we'll build it. If they recommend something better than what we have, we'll adopt it.
But we're not going to leave agents unprotected while the comment period runs.
Start Securing Your Agents Today
Every NIST recommendation above is available as a single npm install.
npm install clawmoat
Top comments (0)