Yuvraj Angad Singh
I Scanned a 1K-Star Cursor Project. AI Code Doesn't Look Like AI Code Anymore.

There's a common belief that AI-generated code is easy to spot. Obvious comments, step-by-step numbered instructions, hedging language like "might need to adjust this later."

I built vibecheck, a static analysis tool that detects these patterns. I ran it against ryOS, a 1,100-star web-based macOS clone built entirely with Cursor by Ryo Lu (Head of Design at Cursor). If any project would have AI fingerprints, this one would.

The results surprised me.

Zero comment-level AI tells

None. No "// Initialize the state variable" above a useState. No "// Step 1: Fetch the data." No narrator comments, no hedging, no placeholder stubs. The code reads clean line by line.

AI-generated code has evolved past the obvious tells. The models learned to stop over-explaining. If you're still looking for bad comments as your AI detector, you're looking at last year's problem.

The smell moved to architecture

vibecheck found 4,523 issues across 378 files. Here's where the signal actually is:

God functions. MacDock is a single 2,003-line React component. useIpodLogic is an 1,891-line hook. useKaraokeLogic is 1,798 lines. A human writing incrementally would extract sub-hooks, split components, refactor. AI keeps stuffing logic into the same function because it doesn't have the "this is getting too big" instinct.

Deep nesting. 13 levels deep in ChatMessages.tsx. Callback hell meets JSX spaghetti. When you ask AI to "add a conditional render for loading states" three times in a row, each one nests inside the previous one.
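
This accumulation pattern is easy to reproduce outside JSX. Below is a contrived sketch (not code from ryOS) of what three sequential "wrap this in a check" prompts tend to produce, next to the flat early-return version a human refactor would reach:

```typescript
// Each successive "add a guard" request wraps the previous body
// instead of inverting the condition and returning early.
function renderStatus(
  user: { name?: string } | null,
  loading: boolean,
  error?: string
): string {
  if (!loading) {
    if (!error) {
      if (user) {
        if (user.name) {
          return `Hello, ${user.name}`;
        } else {
          return "Hello, guest";
        }
      } else {
        return "No user";
      }
    } else {
      return `Error: ${error}`;
    }
  } else {
    return "Loading...";
  }
}

// Same behavior, flattened to depth 1 with guard clauses.
function renderStatusFlat(
  user: { name?: string } | null,
  loading: boolean,
  error?: string
): string {
  if (loading) return "Loading...";
  if (error) return `Error: ${error}`;
  if (!user) return "No user";
  return `Hello, ${user.name ?? "guest"}`;
}
```

Both functions return identical output for every input; only the shape differs. That's exactly why line-level review misses it.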

Swallowed errors. 35 empty catch blocks, many in sequence. infiniteMacHandler.ts has 10+ empty catches in a row (lines 790, 797, 804, 827...). This happens when AI wraps every async call in try/catch but has no error handling strategy.
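
Here's a minimal sketch of the difference (hypothetical functions, not from the ryOS codebase): the swallowed-error shape vibecheck flags, and the smallest fix that counts as a strategy, logging once with context and returning a fallback the caller can distinguish:

```typescript
// Anti-pattern: the failure is silently discarded. The caller
// cannot tell "no settings saved" apart from "network is down".
async function loadSettingsSilent(
  fetchJson: () => Promise<object>
): Promise<object> {
  try {
    return await fetchJson();
  } catch {} // swallowed
  return {};
}

// Minimal strategy: log with context, surface a typed fallback.
async function loadSettings(
  fetchJson: () => Promise<object>
): Promise<{ data: object; degraded: boolean }> {
  try {
    return { data: await fetchJson(), degraded: false };
  } catch (err) {
    console.warn("settings fetch failed, using defaults:", err);
    return { data: {}, degraded: true };
  }
}
```

The fix is four lines. The reason AI doesn't write it is that nobody told it what failure should mean for this call site.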

Console.log pollution. 671 console.log statements in production code. AI adds them for debugging and never removes them because no one asks it to clean up.

Error info leaks. 11 API endpoints that send error.message directly in HTTP responses. Internal details (stack traces, DB errors) exposed to clients.
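
The safer shape is to keep `error.message` server-side and hand the client a generic message plus a correlation ID. A hypothetical helper (my sketch, not ryOS's code) illustrating the mapping:

```typescript
// Leaky pattern: res.status(500).json({ error: err.message })
// exposes stack traces and DB internals to any client.
//
// Safer mapping: log the real error server-side, return a
// generic message plus an ID the client can quote in bug reports.
function toClientError(err: unknown): {
  status: number;
  body: { error: string; id: string };
} {
  const id = Math.random().toString(36).slice(2, 10); // correlation ID
  console.error(`[${id}]`, err); // full details stay in server logs
  return { status: 500, body: { error: "Internal server error", id } };
}
```

Whatever the original exception said, the response body never contains it; the ID lets you grep the server logs for the real cause.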

Why this matters

Code review catches line-level problems. A reviewer reads 50 lines of a function and it looks fine. Clean variable names, proper TypeScript, no weird patterns.

But zoom out and the function is 2,000 lines long. The reviewer never sees the full picture because the diff only shows the 30 lines that changed. The AI-generated architecture accumulates invisibly.

This is the new AI code smell: code that passes review but fails at scale.

What to look for

If you're reviewing AI-assisted code, stop looking for bad comments and start looking for:

  1. Function length. Anything over 200 lines is suspicious. Over 500 is almost certainly AI-accumulated.
  2. Nesting depth. 5+ levels means the function is doing too many things.
  3. Empty catch blocks. AI loves try/catch. It hates error handling.
  4. Console pollution. Count the console.logs. AI never cleans up after itself.
  5. Repeated patterns. 10 empty catches in a row? That's a loop, not a developer.
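
Two of these heuristics are cheap enough to sketch in a few lines. This is a toy counter in the spirit of vibecheck, not its actual implementation (real tooling should parse an AST, since regexes miss strings and comments):

```typescript
// Count two of the smells above with naive regexes over a source string.
function quickSmellCount(source: string): {
  consoleLogs: number;
  emptyCatches: number;
} {
  const consoleLogs = (source.match(/console\.log\(/g) ?? []).length;
  // Matches `catch (e) {}` and bare `catch {}` with only whitespace inside.
  const emptyCatches =
    (source.match(/catch\s*(\([^)]*\))?\s*\{\s*\}/g) ?? []).length;
  return { consoleLogs, emptyCatches };
}
```

Run it over a file before review: double-digit counts in either column are a signal to read the file top to bottom instead of diff by diff.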

Try it yourself

```shell
npx @yuvrajangadsingh/vibecheck ./your-project
```

vibecheck catches 32 patterns across JS/TS and Python. Works as a CLI, GitHub Action, VS Code extension, and pre-commit hook. All offline, no API calls.

GitHub | npm | VS Code
