Here's a pattern you've probably seen:
```typescript
const results = items.map(async (item) => {
  return await fetchItem(item);
});
```
Looks fine, right? Your AI assistant wrote it. Tests pass. Code review approves it.
Then production hits, and `results` is an array of Promises — not the values you expected. The `await` inside the callback doesn't help: `map` still wraps each callback's return value in a Promise. You needed `Promise.all(items.map(...))` or a `for...of` loop.
This isn't a TypeScript bug. It's a common LLM coding mistake — one of hundreds I found when I started researching AI-generated code quality.
## The Problem: AI Writes Code That Works, Not Code That's Right
LLMs are excellent at writing code that passes tests. They're terrible at handling edge cases, maintaining consistency, and following best practices under the hood.
After reviewing several empirical studies on LLM-generated code bugs — including an analysis of 333 bugs and PromptHub's study of 558 incorrect snippets — I found clear patterns emerging:
| Bug Type | Frequency |
|---|---|
| Missing corner cases | 15.3% |
| Misinterpretations | 20.8% |
| Hallucinated objects/APIs | 9.6% |
| Incorrect conditions | High |
| Missing code blocks | 40%+ |
The most frustrating part? Many of these are preventable at lint time.
## The Solution: ESLint Rules Designed for AI-Generated Code
I built eslint-plugin-llm-core — an ESLint plugin with 20 rules specifically designed to catch the mistakes AI coding assistants make most often.
Not just generic best practices, but patterns I've seen repeatedly in AI-generated codebases:
- Async/await misuse
- Inconsistent error handling
- Missing null checks
- Magic numbers instead of named constants
- Deep nesting instead of early returns
- Empty catch blocks that swallow errors
- Generic variable names that obscure intent
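Several of those patterns tend to show up together. Here's a hypothetical before/after sketch (the function names, the `86400` threshold, and the sanity-check logic are all invented for illustration):

```typescript
// ❌ Generic names and a magic number obscure what this code does
function calc(d: number[]): number {
  let x = 0;
  for (const v of d) {
    if (v > 86400) continue; // why 86400? seconds per day? bytes?
    x += v;
  }
  return x;
}

// ✅ A named constant and descriptive names make the intent auditable
const SECONDS_PER_DAY = 86_400;

function sumDurationsWithinOneDay(durationsSeconds: number[]): number {
  let totalSeconds = 0;
  for (const seconds of durationsSeconds) {
    if (seconds > SECONDS_PER_DAY) continue; // skip implausible readings
    totalSeconds += seconds;
  }
  return totalSeconds;
}
```

Both versions behave identically; only the second one can be reviewed without guessing.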
### Example: The Async Array Callback Trap

```typescript
// ❌ AI often writes this
const fetchedUsers = users.map(async (user) => {
  return await db.getUser(user.id);
});
// fetchedUsers is Promise<User>[] — not User[]

// ✅ What you actually need
const fetchedUsers = await Promise.all(
  users.map((user) => db.getUser(user.id))
);
// fetchedUsers is User[]
```
The plugin catches this with `no-async-array-callbacks`:
```text
57:27  error  Avoid passing async functions to array methods  llm-core/no-async-array-callbacks
       This pattern returns an array of Promises, not the resolved values.
       Consider using Promise.all() or a for...of loop instead.
```
Notice the error message? It's designed to teach, not just complain. The goal is to help developers (and their AI assistants) understand why it's wrong.
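The rule's message also suggests a `for...of` loop. That variant matters when the calls must run one at a time (rate limits, ordering). Here's a sketch; `fetchItem` is a hypothetical stand-in:

```typescript
// Hypothetical stand-in: resolves to the item's uppercase form.
async function fetchItem(item: string): Promise<string> {
  return item.toUpperCase();
}

// Sequential variant: each fetch completes before the next one starts.
async function fetchAllSequentially(items: string[]): Promise<string[]> {
  const results: string[] = [];
  for (const item of items) {
    results.push(await fetchItem(item)); // awaited value, not a Promise
  }
  return results;
}
```

`Promise.all` runs the fetches concurrently; the `for...of` version trades throughput for strict ordering.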
### Example: The Empty Catch Anti-Pattern

```typescript
// ❌ AI often generates this
try {
  await processData(data);
} catch (e) {
  // TODO: handle error
}
```
The `no-empty-catch` rule catches this:
```text
63:11  error  Empty catch block silently swallows errors  llm-core/no-empty-catch
       Unhandled errors make debugging difficult and can hide critical failures.
       Either handle the error, rethrow it, or log it with context.
```
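Here's one compliant shape, following the options the message lists (handle, rethrow, or log with context). The `processData` stand-in and the null-sentinel design are assumptions for the sketch:

```typescript
// Hypothetical stand-in that rejects on empty input.
async function processData(data: string): Promise<number> {
  if (data.length === 0) throw new Error("empty payload");
  return data.length;
}

// Log with context, then return a sentinel the caller can check.
async function safeProcess(data: string): Promise<number | null> {
  try {
    return await processData(data);
  } catch (err) {
    console.error(`processData failed (payload length ${data.length})`, err);
    return null;
    // ...or rethrow with context instead of returning:
    // throw new Error(`processData failed: ${String(err)}`, { cause: err });
  }
}
```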
### Example: Deep Nesting Instead of Early Returns

```typescript
// ❌ AI loves nesting
function processData(data: Data | null) {
  if (data) {
    if (data.items) {
      if (data.items.length > 0) {
        return data.items.map(processItem);
      }
    }
  }
  return [];
}

// ✅ Early returns are cleaner
function processData(data: Data | null) {
  if (!data?.items?.length) return [];
  return data.items.map(processItem);
}
```
The `prefer-early-return` rule encourages the flatter pattern.
## The Research Behind the Rules
Each rule is backed by observed patterns in LLM-generated code:
| Rule | Bug pattern addressed |
|---|---|
| `no-async-array-callbacks` | Missing `Promise.all`, incorrect async flow |
| `no-empty-catch` | Silent error swallowing |
| `no-magic-numbers` | Unmaintainable constants |
| `prefer-early-return` | Deep nesting, unclear control flow |
| `prefer-unknown-in-catch` | `any`-typed catch params |
| `throw-error-objects` | Throwing strings instead of `Error` instances |
| `structured-logging` | Inconsistent log formats |
| `consistent-exports` | Mixed default/named exports |
| `explicit-export-types` | Missing return types on public functions |
| `no-commented-out-code` | Dead code accumulation |
Full rule documentation: github.com/pertrai1/eslint-plugin-llm-core
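To make two of those concrete, here's a sketch of what `throw-error-objects` and `prefer-unknown-in-catch` are guarding against. The `findUser` function is invented for illustration, and the flagged patterns are shown as comments since they'd be lint errors:

```typescript
// ❌ What the rules flag:
//   throw "user not found";      // throw-error-objects: strings carry no stack trace
//   } catch (e: any) { ... }     // prefer-unknown-in-catch: unchecked property access

// ✅ Error instances carry stack traces; `unknown` forces narrowing before use.
function findUser(id: number): string {
  if (id < 0) throw new Error(`invalid user id: ${id}`);
  return `user-${id}`;
}

try {
  console.log(findUser(-1));
} catch (err: unknown) {
  if (err instanceof Error) {
    console.error(err.message); // safe: narrowed to Error before property access
  }
}
```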
## Why Not Just Use typescript-eslint?
Great question. typescript-eslint is excellent — this plugin is designed to complement it, not replace it.
The difference is focus:
| | typescript-eslint | eslint-plugin-llm-core |
|---|---|---|
| Focus | TypeScript language correctness | AI coding pattern prevention |
| Error messages | Technical, spec-focused | Educational, context-rich |
| Rule design | Language spec compliance | Observed LLM bug patterns |
You should use both. typescript-eslint catches TypeScript-specific issues. llm-core catches patterns that LLMs repeatedly get wrong — regardless of whether they're technically valid TypeScript.
## Getting Started

```bash
npm install -D eslint-plugin-llm-core
```

```javascript
// eslint.config.js
import llmCore from 'eslint-plugin-llm-core';

export default [
  {
    plugins: {
      'llm-core': llmCore,
    },
    rules: {
      ...llmCore.configs.recommended.rules,
    },
  },
];
```
That's it. Zero config for the recommended ruleset.
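If you want to tune individual rules, standard ESLint flat-config mechanics apply: spread the recommended set, then override severities by rule name. The rule names below come from the table above; whether a given rule accepts further options is something to check in the plugin's docs.

```javascript
// eslint.config.js — overriding severities for individual rules.
import llmCore from 'eslint-plugin-llm-core';

export default [
  {
    plugins: { 'llm-core': llmCore },
    rules: {
      ...llmCore.configs.recommended.rules,
      'llm-core/no-empty-catch': 'error',  // promote to a hard error
      'llm-core/no-magic-numbers': 'warn', // relax while migrating
    },
  },
];
```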
## The Bigger Picture: Teaching AI Better Habits
Here's the interesting part: these rules don't just catch mistakes. They teach.
When your AI assistant sees the error messages:
```text
Avoid passing async functions to array methods.
This pattern returns an array of Promises, not the resolved values.
Consider using Promise.all() or a for...of loop instead.
```
It learns. Next time, it writes the correct pattern.
In looped agent workflows — where AI iteratively writes, tests, and fixes code — this feedback loop compounds. Each lint error becomes a teaching moment.
## What's Next
The plugin is early but functional. Current focus areas:
- Auto-fixes for fixable rules
- More logging library detection (Pino, Winston, Bunyan)
- Additional rules based on ongoing research
- Evidence gathering on whether rules actually improve AI-generated code quality
If you're working with AI coding assistants — Cursor, Claude Code, Copilot, or others — I'd love your feedback on what patterns you've seen them get wrong.
## Try It

```bash
npm install -D eslint-plugin-llm-core
```
GitHub: pertrai1/eslint-plugin-llm-core
Tried it? Hate it? Have ideas for rules I missed? Open an issue or reach out. I'm actively looking for contributors who've seen AI write weird code.