Ryan DesJardins

14 patterns AI code generators get wrong — and how to catch them

AI coding tools ship code fast. That's the point. But fast means they also
introduce a specific class of bugs that are easy to miss in review — not
because they're hidden, but because they look correct at a glance.

After building an automated reviewer specifically for AI-generated code, I've cataloged 14 recurring patterns. Here's what they look like and how to spot them. An automated review agent helps, but knowing these patterns also sharpens your manual review when you work with coding agents.


The core problem

AI code generators optimize for producing code that looks complete and
compiles. They're not optimizing for runtime correctness, security, or the
subtle behavioral contracts your codebase depends on. The patterns below
all share the same root: plausible-looking code that silently fails, skips
important work, or creates vulnerabilities that generic linters don't catch.


1. Fake error handling

The most common one. The try/catch exists, the error variable is named, and absolutely nothing useful happens with it.

// Looks handled. Isn't.
try {
  await processPayment(order);
} catch (error) {
  console.log('Payment failed:', error);
  return { success: true }; // ← the real problem
}

The tell: the catch block logs but doesn't change the outcome. The function returns success regardless. This pattern is everywhere in AI-generated code because the model is completing a "handle errors" pattern without reasoning about what handling should actually do.
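A fixed version keeps the try/catch but makes the failure change the outcome. Here's a sketch, with `processPayment` stubbed to reject so the behavior is easy to see:

```typescript
// Stand-in for the real payment call; rejects to simulate a gateway failure.
async function processPayment(order: { id: string }): Promise<void> {
  throw new Error(`gateway timeout for order ${order.id}`);
}

type PaymentResult = { success: true } | { success: false; error: string };

async function chargeOrder(order: { id: string }): Promise<PaymentResult> {
  try {
    await processPayment(order);
    return { success: true };
  } catch (error) {
    // Log with context, then surface the failure to the caller.
    console.error('Payment failed:', error);
    return { success: false, error: (error as Error).message };
  }
}
```

The exact return shape doesn't matter; what matters is that the catch block changes what the function reports.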


2. Hallucinated imports

AI models confidently import packages that don't exist, functions that were renamed, or methods from the wrong module.

import { sanitizeHtml } from 'express-validator'; // doesn't exist there
import { parseJWT } from 'jsonwebtoken'; // it's jwt.verify(), not parseJWT
import { createLogger } from '@company/utils'; // your internal package, maybe

These fail at runtime, not compile time (unless you have strict module checking). They're easy to miss in review because the import looks reasonable.


3. Placeholder stubs that look complete

async function sendNotification(userId: string, message: string) {
  // TODO: integrate with notification service
  console.log(`Notification queued for ${userId}: ${message}`);
  return { queued: true };
}

The function signature is real. It returns a typed response. It logs
something. It will pass a basic smoke test. The actual work is never done.
AI generators do this when they're implementing something they don't have
enough context to complete — they produce scaffolding that looks finished.
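If the implementation genuinely can't be finished yet, failing loudly is safer than scaffolding that fakes success. A minimal sketch:

```typescript
// A stub that throws is impossible to mistake for a working implementation,
// and it shows up the first time any code path actually reaches it.
async function sendNotification(userId: string, message: string): Promise<never> {
  throw new Error(
    `sendNotification is not implemented (userId=${userId}, message=${message})`
  );
}
```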


4. Silent async failures

// somewhere in initialization
loadUserPreferences(userId); // no await, no .catch()

// result: preferences silently fail to load, app continues with defaults,
// user wonders why their settings aren't saving

Unhandled promise rejections are a top-tier AI slop pattern. The model adds the function call, forgets the await, and the error disappears into the void.
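The fix is to await the call and decide explicitly what a failure means. A sketch, with `loadUserPreferences` stubbed to reject:

```typescript
// Stand-in loader; rejects to simulate a failed network call.
async function loadUserPreferences(userId: string): Promise<Record<string, unknown>> {
  throw new Error(`could not load preferences for ${userId}`);
}

const DEFAULT_PREFS = { theme: 'light' };

async function initPreferences(userId: string) {
  try {
    return await loadUserPreferences(userId);
  } catch (error) {
    // Falling back to defaults is now a visible, logged decision
    // instead of a silent unhandled rejection.
    console.warn('Preferences failed to load, using defaults:', error);
    return DEFAULT_PREFS;
  }
}
```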


5. Optimistic updates that never roll back

// Update UI immediately
setCartItems(prev => [...prev, newItem]);

// Then try the server
try {
  await api.addToCart(newItem);
} catch {
  // ← nothing here
  // cart shows item that wasn't actually saved
}

AI models learn to write optimistic updates from examples that include
rollback logic — but they frequently omit the rollback when generating new
code.
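The missing piece is a snapshot you can restore. A framework-free sketch of the same logic, with the `api` stubbed to always reject so the rollback path runs:

```typescript
type Item = { id: string };

// Stub API that always rejects, to exercise the rollback path.
const api = {
  addToCart: async (item: Item): Promise<void> => {
    throw new Error(`server rejected item ${item.id}`);
  },
};

let cartItems: Item[] = [];

async function addToCart(newItem: Item): Promise<void> {
  const snapshot = cartItems;           // capture state before the optimistic update
  cartItems = [...cartItems, newItem];  // update immediately
  try {
    await api.addToCart(newItem);
  } catch (error) {
    cartItems = snapshot;               // roll back so the UI matches the server
    console.error('Add to cart failed, rolled back:', error);
  }
}
```

In React the shape is identical: keep the previous state, set the optimistic state, restore the previous state in the catch.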


6. Missing auth checks in route handlers

export async function PUT(request: Request, { params }: { params: { id: string } }) {
  const body = await request.json();

  // ← where is the auth check?
  // ← where is the ownership check?

  await db.update(posts).set(body).where(eq(posts.id, params.id));
  return Response.json({ success: true });
}

This is IDOR (Insecure Direct Object Reference) — any authenticated user can update any post. AI models write the happy path correctly but skip the
authorization layer because it requires knowledge of your specific auth
system that isn't in the immediate context.
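The shape of the fix is an authorization gate before the write. This is a sketch only: `getSession` and `postsById` are hypothetical stand-ins for whatever session layer and data access your app actually uses.

```typescript
type Session = { userId: string } | null;

// Hypothetical session lookup, stubbed to pretend the request is authenticated.
async function getSession(request: Request): Promise<Session> {
  return { userId: 'user-1' };
}

// Hypothetical stand-in for the posts table.
const postsById: Record<string, { authorId: string }> = {
  p1: { authorId: 'user-1' },
  p2: { authorId: 'user-2' },
};

async function authorizePostUpdate(request: Request, postId: string): Promise<boolean> {
  const session = await getSession(request);
  if (!session) return false;              // not authenticated: 401
  const post = postsById[postId];
  if (!post) return false;                 // unknown post: 404
  return post.authorId === session.userId; // not the owner: 403
}
```

The route handler calls this before touching the database and returns the appropriate status when it fails.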


7. Hardcoded ARIA states

// This button controls a dropdown
<button aria-expanded="false" aria-controls="menu">
  Options
</button>

The ARIA attribute is there. The accessibility auditor won't flag it as
missing. But aria-expanded is hardcoded to false and never updates when the menu opens. Screen reader users get incorrect state information on every interaction.

AI models add ARIA attributes to pass static analysis without connecting
them to the component state.


8. Array index as React key

{items.map((item, index) => (
  <ListItem key={index} item={item} />  // ← classic
))}

Everyone knows this one, and AI still generates it constantly. It causes
subtle reconciliation bugs when items are reordered or deleted. The fix
(key={item.id}) requires knowing your data shape, which the model doesn't always have.


9. TypeScript escape hatches

const user = response.data as User; // trusts external data unconditionally
const element = document.querySelector('.target') as HTMLInputElement; // assumes
const config = value as unknown as Config; // the double cast — worse

Type assertions instead of type guards. The code compiles, and TypeScript
stops complaining, but the runtime behavior is undefined if the assumption
is wrong. AI models reach for as when the type system resists them, rather than fixing the underlying issue.


10. useEffect with no cleanup

useEffect(() => {
  const subscription = eventEmitter.subscribe(handleEvent);
  // ← no return () => subscription.unsubscribe()
}, []);

Memory leaks and stale closure bugs. The model writes the setup, skips the
teardown. Shows up in event listeners, subscriptions, timers, and WebSocket connections.
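The fix is returning the teardown from the effect. The underlying contract is easy to see without React: subscribe hands back an unsubscribe handle, and whoever set up the subscription is responsible for calling it.

```typescript
type Handler = (event: string) => void;

// Minimal emitter whose subscribe() returns an unsubscribe handle,
// mirroring the shape of most subscription APIs.
function createEmitter() {
  const handlers = new Set<Handler>();
  return {
    subscribe(handler: Handler) {
      handlers.add(handler);
      return { unsubscribe: () => { handlers.delete(handler); } };
    },
    emit(event: string) {
      handlers.forEach(h => h(event));
    },
    size: () => handlers.size,
  };
}

// The React version of the fix is simply:
// useEffect(() => {
//   const subscription = eventEmitter.subscribe(handleEvent);
//   return () => subscription.unsubscribe();
// }, []);
```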


11. N+1 query patterns

const users = await db.select().from(usersTable);

// for each user, a separate query — scales as O(n)
const usersWithPosts = await Promise.all(
  users.map(user => 
    db.select().from(postsTable).where(eq(postsTable.userId, user.id))
  )
);

This works correctly in development with a handful of records. In production with thousands of users it's a significant performance problem. AI generates the naive version because it's the direct translation of the requirement.
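The fix is two queries total regardless of user count: fetch the users, fetch all matching posts in one round trip (a `WHERE user_id IN (...)`, which Drizzle exposes as `inArray`), then group in memory. The grouping step, with in-memory stand-ins for the tables:

```typescript
type DbUser = { id: string };
type DbPost = { id: string; userId: string };

// In-memory stand-ins for the two tables.
const userRows: DbUser[] = [{ id: 'u1' }, { id: 'u2' }];
const postRows: DbPost[] = [
  { id: 'p1', userId: 'u1' },
  { id: 'p2', userId: 'u1' },
  { id: 'p3', userId: 'u2' },
];

// One pass over the posts builds a userId-to-posts index; attaching is then O(n).
function attachPosts(users: DbUser[], posts: DbPost[]) {
  const byUser = new Map<string, DbPost[]>();
  for (const post of posts) {
    const bucket = byUser.get(post.userId) ?? [];
    bucket.push(post);
    byUser.set(post.userId, bucket);
  }
  return users.map(user => ({ ...user, posts: byUser.get(user.id) ?? [] }));
}
```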


12. Unbounded list rendering

function UserList({ users }: { users: User[] }) {
  return (
    <ul>
      {users.map(user => <UserItem key={user.id} user={user} />)}
    </ul>
  );
}

No pagination, no virtualization, no limit. Renders fine with test data.
Blocks the main thread with 10,000 records. AI models generate the complete render because truncating the list requires product decisions they don't have context for.


13. Missing loading and error states

function Dashboard() {
  const [data, setData] = useState(null);

  useEffect(() => {
    fetchDashboardData().then(setData);
  }, []);

  return <DashboardContent data={data} />;  // null crash on first render
  // no loading state, no error state, no empty state
}

The happy path is complete. The other three states (loading, error, empty)
are absent. This pattern comes up because the model implements what was
asked for without considering the temporal states of the component.


14. Over-broad catch with specific handling

try {
  await stripe.paymentIntents.create(params);
} catch (error) {
  if (error.code === 'card_declined') {
    return { error: 'Your card was declined' };
  }
  // ← every other error silently returns undefined
  // network errors, auth errors, rate limits — all disappear
}

Handles one specific case, drops everything else. The model generates the
case it was asked to handle and stops.


The pattern behind the patterns

All 14 of these share the same root cause: AI models generate code that
satisfies the immediate requirement without reasoning about failure modes,
state transitions, or the implicit contracts of the surrounding system.

The fix isn't to stop using AI code generation — it's to review specifically for these patterns rather than doing a general read-through. They're consistent enough that you can check for them systematically.

I automated this check in a Claude Code plugin
(rad-code-review) that specifically looks for these patterns in diffs before you commit. But even without automation, knowing the list makes manual review significantly faster.


Any patterns I missed? Drop them in the comments — I'm still actively
updating the detection logic.
