19 AI security rules covering prompt injection, agent safety, and the OWASP LLM Top 10.
## Quick Install

```bash
npm install --save-dev eslint-plugin-vercel-ai-security
```
## Flat Config

```js
// eslint.config.js
import vercelAI from 'eslint-plugin-vercel-ai-security';

export default [vercelAI.configs.recommended];
```
## Run ESLint

```bash
npx eslint .
```

You'll see output like:

```text
src/chat.ts
  8:3  error  CWE-77 OWASP:LLM01 | Unvalidated prompt input
       Risk: Prompt injection vulnerability
       Fix:  Use validated prompt: sanitizePrompt(userInput)

src/agent.ts
  24:5  error  OWASP:LLM08 | Tool missing confirmation gate
        Risk: AI agent can execute arbitrary actions
        Fix:  Add await requireUserConfirmation() before execution
```
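Note that `sanitizePrompt()` in the fix message is a name the rule suggests, not a helper the plugin ships; you supply your own. A minimal sketch, assuming a length budget and a small, illustrative deny-list of injection phrases:

```typescript
// Hypothetical sanitizePrompt helper -- the plugin only flags the call site;
// this is one possible implementation, not a plugin export.
const MAX_PROMPT_LENGTH = 4000;

// Patterns commonly associated with injection attempts (illustrative, not exhaustive).
const INJECTION_PATTERNS = [
  /ignore (all |any )?previous instructions/i,
  /you are now/i,
];

function sanitizePrompt(input: string): string {
  // Strip control characters that can smuggle hidden instructions.
  let cleaned = input.replace(/[\u0000-\u001f\u007f]/g, ' ');
  // Collapse whitespace and enforce a length budget.
  cleaned = cleaned.replace(/\s+/g, ' ').trim().slice(0, MAX_PROMPT_LENGTH);
  // Reject obvious injection phrasing rather than trying to "repair" it.
  for (const pattern of INJECTION_PATTERNS) {
    if (pattern.test(cleaned)) {
      throw new Error('Prompt rejected: possible injection attempt');
    }
  }
  return cleaned;
}
```

Rejecting suspicious input (rather than silently rewriting it) keeps the failure visible, which is usually what you want for security-relevant validation.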
## Rule Overview
| Category | Rules | Examples |
|---|---|---|
| Prompt Injection | 4 | Unvalidated input, dynamic system prompts |
| Data Exfiltration | 3 | System prompt leaks, sensitive data in prompts |
| Agent Safety | 3 | Missing tool confirmation, unlimited steps |
| Resource Limits | 4 | Token limits, timeouts, abort signals |
| RAG Security | 2 | Content validation, embedding verification |
| Output Safety | 3 | Output filtering, validation |
## Quick Wins

**Before**

```ts
// ❌ Prompt Injection Risk
const { text } = await generateText({
  model: openai('gpt-4'),
  prompt: userInput, // Unvalidated!
});
```

**After**

```ts
// ✅ Validated Input
const { text } = await generateText({
  model: openai('gpt-4'),
  prompt: sanitizePrompt(userInput),
  maxTokens: 1000,
  abortSignal: AbortSignal.timeout(30000),
});
```
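`AbortSignal.timeout()` in the example above is a standard Web/Node API (Node 17.3+), not something the plugin provides: it returns a signal that aborts on its own once the deadline passes, so the SDK can cancel a request that hangs.

```typescript
// AbortSignal.timeout(ms) returns a signal that aborts automatically
// after ms milliseconds.
const signal = AbortSignal.timeout(50);

console.log(signal.aborted); // false: the timer has not fired yet

signal.addEventListener('abort', () => {
  // Fires ~50 ms later; anything holding this signal (e.g. an SDK request)
  // is expected to cancel its work at this point.
  console.log('request aborted:', signal.reason?.name);
});
```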
**Before**

```ts
// ❌ Unlimited Agent
const { text } = await generateText({
  model: openai('gpt-4'),
  tools: dangerousTools,
});
```

**After**

```ts
// ✅ Limited Agent
const { text } = await generateText({
  model: openai('gpt-4'),
  tools: safeTools,
  maxSteps: 5,
});
```
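Similarly, `requireUserConfirmation()` from the earlier lint output is a placeholder name, not a plugin export. One way to satisfy the `require-tool-confirmation` rule is to gate each tool's `execute` function behind a confirmation callback; a sketch (all names here are my own):

```typescript
// Hypothetical confirmation gate for agent tools.
type Execute<A, R> = (args: A) => Promise<R>;
type Confirm = (action: string, args: unknown) => Promise<boolean>;

// Wrap a tool's execute function so every call must be approved first.
function withConfirmation<A, R>(
  action: string,
  confirm: Confirm,
  execute: Execute<A, R>,
): Execute<A, R> {
  return async (args: A) => {
    if (!(await confirm(action, args))) {
      throw new Error(`Action "${action}" rejected by user`);
    }
    return execute(args);
  };
}

// Usage: deleteFile only runs when confirm resolves to true.
const guardedDelete = withConfirmation(
  'delete-file',
  async (action, args) => {
    // In a real app this would prompt the user (CLI prompt, UI modal, ...).
    console.log(`Confirm ${action}?`, args);
    return false; // deny by default in this sketch
  },
  async ({ path }: { path: string }) => `deleted ${path}`,
);
```

Denying by default means a forgotten confirmation hookup fails closed rather than silently executing the tool.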
## Available Presets

```js
// Security-focused configuration
vercelAI.configs.recommended;

// Full OWASP LLM Top 10 coverage
vercelAI.configs['owasp-llm-top-10'];
```
## OWASP LLM Top 10 Mapping

| OWASP LLM | Rules |
|---|---|
| LLM01: Prompt Injection | `require-validated-prompt`, `no-dynamic-system-prompt` |
| LLM02: Insecure Output | `require-output-filtering`, `no-unsafe-output-handling` |
| LLM04: Model DoS | `require-max-tokens`, `require-abort-signal` |
| LLM06: Sensitive Data | `no-sensitive-in-prompt`, `no-system-prompt-leak` |
| LLM07: Plugin Design | `require-tool-schema`, `require-tool-confirmation` |
| LLM08: Excessive Agency | `require-max-steps`, `require-tool-confirmation` |
## Customizing Rules

```js
// eslint.config.js
import vercelAI from 'eslint-plugin-vercel-ai-security';

export default [
  vercelAI.configs.recommended,
  {
    rules: {
      // Configure max steps
      'vercel-ai/require-max-steps': ['error', { maxSteps: 10 }],
      // Make RAG validation a warning
      'vercel-ai/require-rag-content-validation': 'warn',
    },
  },
];
```
## Quick Reference

```text
# Install
npm install --save-dev eslint-plugin-vercel-ai-security

# Config (eslint.config.js)
import vercelAI from 'eslint-plugin-vercel-ai-security';
export default [vercelAI.configs.recommended];

# Run
npx eslint .
```
## Links

- npm: `eslint-plugin-vercel-ai-security`
- Full Rule List
- OWASP LLM Mapping
Building with the Vercel AI SDK? Let me know your experience in the comments!