DEV Community

Rohan San
How I built a tool that detects AI slop in codebases (and what patterns I found)

The Problem

I've been using AI coding assistants heavily for the past year — Cursor, Copilot, Claude through various interfaces. They're incredible for velocity, but I kept noticing the same lazy patterns slipping through code review:

  • TODO comments everywhere, often contradicting the actual implementation
  • Placeholder variable names like data2, temp, result_final
  • Empty except blocks with just pass
  • Entire blocks of commented-out code
  • Functions named handle_it, do_stuff, process_data that do three unrelated things
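To make the list concrete, here is a contrived (entirely hypothetical) function that packs several of these patterns into a few lines:

```python
def process_data(data2):
    # TODO: handle edge cases
    temp = []
    for item in data2:
        try:
            temp.append(item * 2)
        except:
            pass  # silently swallow errors
    # result = validate(temp)  # commented-out code left behind
    result_final = temp
    return result_final
```

Every line here parses, runs, and passes a linter — which is exactly the problem.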

The worst part? These patterns are invisible to traditional linters. pylint doesn't care if your function is called processData or handleStuff. flake8 won't flag a TODO comment. But every human reviewer immediately spots them and questions whether you actually understand the code you're submitting.

So I built roast-my-code — a CLI that specifically hunts for these patterns.

The Detection Rules

The analyzer is built around three categories: AI Slop, Code Quality, and Style. Each category has weighted scoring, with AI Slop counting for 50% of the overall score.
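The weighting can be sketched as follows — note the 30/20 split between Code Quality and Style is my illustrative assumption; only the 50% AI Slop weight comes from the text above:

```python
# Assumed weights: AI Slop is 50% per the article; the remaining
# split between Code Quality and Style is hypothetical.
WEIGHTS = {'ai_slop': 0.5, 'code_quality': 0.3, 'style': 0.2}

def overall_score(category_scores):
    """Combine per-category scores (each 0-100) into one weighted score."""
    return sum(WEIGHTS[cat] * score for cat, score in category_scores.items())
```

With this scheme, a file that is clean on quality and style but full of slop can never score above 50.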

AI Slop Patterns (High Severity)


```python
import re

# Detect TODO/FIXME/HACK comments
TODO_PATTERN = re.compile(r'#\s*(TODO|FIXME|HACK|XXX)', re.IGNORECASE)

# Placeholder variable detection
PLACEHOLDER_NAMES = {
    'foo', 'bar', 'baz', 'temp', 'data2', 'result2',
    'test123', 'foo_bar', 'test_var'
}

# Hallucinated imports (common fake packages AI invents)
FAKE_IMPORTS = {
    'magiclib', 'utils2', 'helpers.everything',
    'common_utils', 'my_helpers'
}
```
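Here is a minimal sketch of how such patterns could be applied line by line (the actual roast-my-code internals may differ; `scan_source` is a hypothetical helper):

```python
import re

TODO_PATTERN = re.compile(r'#\s*(TODO|FIXME|HACK|XXX)', re.IGNORECASE)
PLACEHOLDER_NAMES = {
    'foo', 'bar', 'baz', 'temp', 'data2', 'result2',
    'test123', 'foo_bar', 'test_var'
}

def scan_source(source):
    """Return (line_number, issue) pairs for slop patterns in a source string."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if TODO_PATTERN.search(line):
            findings.append((lineno, 'todo-comment'))
        for name in PLACEHOLDER_NAMES:
            # crude word-boundary match; a real tool would walk the AST
            if re.search(rf'\b{re.escape(name)}\b', line):
                findings.append((lineno, f'placeholder-name:{name}'))
    return findings
```

Regex matching is enough for comments, but for variable names an AST walk is more robust — a regex can't tell an identifier from a string literal.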
