Kwansub Yun

AI-SLOP Detector v2.6.1: It now audits itself (and that’s the point)

AI code can look “production-ready” while implementing almost nothing.

It passes lint.
It follows clean architecture.
It even ships.

And yet—when you ask what logic is actually here—the answer is often surprisingly thin.

That gap is what I call AI slop.

AI-SLOP Detector is a deterministic static analyzer that doesn’t ask “is this safe?”

It asks: “Is there real logic here—or just convincing scaffolding?”


What’s new in v2.6.1

v2.6.1 is a trust release.

Less marketing, more auditability.

The theme is simple:

Move assumptions into configuration, and lock behavior with tests.


1) Configuration Sovereignty (YAML-driven dependency intents)

In v2.6.1, the dependency “meaning” layer is no longer hardcoded.

These rules were externalized into:
src/slop_detector/config/known_deps.yaml

Why this matters:

  • Auditability — rules are explicit, reviewable, and versioned
  • Team customization — tune categories/intents without touching analyzer code
  • Lower false positives — encode what your org considers “normal” dependencies

Example (trimmed):

categories:
  ml:
    - torch
    - tensorflow
    - transformers
  http:
    - requests
    - httpx
  database:
    - sqlalchemy
    - redis


The hallucinated-dependency detector now loads this YAML dynamically, instead of relying on implicit assumptions embedded in code.
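
For illustration, here's a minimal sketch of what loading that YAML and classifying a dependency could look like. The function names (load_known_deps, categorize) are mine, not the detector's actual API:

# Hypothetical sketch of YAML-driven dependency intents; not the detector's real code.
# Assumes PyYAML is installed.
from pathlib import Path
import yaml

def load_known_deps(path: str = "src/slop_detector/config/known_deps.yaml") -> dict:
    """Read the category -> [packages] mapping from YAML."""
    return yaml.safe_load(Path(path).read_text())

def categorize(package: str, known_deps: dict) -> str | None:
    """Return the category a package belongs to, or None if it is unknown."""
    for category, packages in known_deps.get("categories", {}).items():
        if package in packages:
            return category
    return None

deps = load_known_deps()
print(categorize("requests", deps))  # -> "http" with the example config above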


2) Quality improvements (test suite + coverage push)

This release is heavily test- and coverage-driven:

  • ✅ 165 tests

  • ✅ 85% overall coverage

  • ✅ CI Gate coverage: 0% → 88% (with 37 new tests)

Translation: the detector is no longer just “smart” — it’s harder to regress.


3) Question Generator is now pinned by tests (stable review UX)

The Question Generator turns findings into actionable code-review prompts.
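
To show the concept (not the module's real data structures or wording), a finding-to-question mapping might look roughly like this:

# Hypothetical illustration of the Question Generator idea; the real module differs.
def to_review_question(finding: dict) -> str:
    """Turn a detector finding into a reviewer-facing question."""
    templates = {
        "empty_function": "What is {symbol} in {file} supposed to do? Its body is only a placeholder.",
        "bare_except": "Which exceptions does {symbol} in {file} actually expect to handle?",
    }
    template = templates.get(finding["kind"], "Can you explain the intent of {symbol} in {file}?")
    return template.format(**finding)

print(to_review_question({"kind": "empty_function", "symbol": "process_batch", "file": "pipeline.py"}))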

In v2.6.1, it gained a dedicated test suite:

  • 8 new test cases
  • significantly improved module coverage

If you ship review UX, it must be deterministic.
This patch locks its behavior release-to-release.


4) VS Code extension sync

The VS Code extension was synchronized to v2.6.1
so editor feedback stays aligned with the core analyzer.


Real-world proof: I ran the detector on itself

If a slop detector can’t survive its own criteria, it’s just a mascot.

So I ran AI-SLOP Detector against the ai-slop-detector codebase.

Executive result

  • Status: CLEAN
  • Deficit Score (lower is better): 18.75
  • Inflation (Jargon): 0.07

This is the point:

  • It can pass a real codebase as CLEAN
  • while still surfacing concrete risks and “empty logic” where it exists

What it still flagged (because it should)

Even in a CLEAN repo, it surfaced real issues and anti-patterns, including:

  • bare except: (swallows KeyboardInterrupt/SystemExit)
  • mutable default arguments
  • star imports
  • globals
  • placeholder signals like pass, ..., TODO/FIXME patterns (in test fixtures)
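
For a concrete picture, here's a tiny illustrative snippet (written for this post, not taken from the repo) that packs several of these patterns together:

# Illustrative anti-pattern collection; not code from the ai-slop-detector repo.
from os import *              # star import: pollutes the namespace

CACHE = {}                    # module-level mutable global state

def fetch(url, retries=[]):   # mutable default argument
    try:
        ...                   # placeholder body: looks finished, does nothing
    except:                   # bare except: also swallows KeyboardInterrupt/SystemExit
        pass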

And yes—this includes calling out my own words:
it flagged jargon terms like “production-ready” inside the CLI layer.


Synthetic slop fixtures: the detector should explode

The repo contains intentionally bad “slop fixtures” for validation.

Example:
generated_slop.py scored a Deficit Score of 96.77 and triggered heavy jargon inflation
(“neural”, “state-of-the-art”, “optimized”, “transformer”, etc.).
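
To make that concrete, here's a hypothetical sketch of the kind of code such a fixture contains (not the actual generated_slop.py):

# Hypothetical slop: impressive-sounding surface, no real logic underneath.
class StateOfTheArtTransformerEngine:
    """Neural, state-of-the-art, fully optimized transformer inference engine."""

    def initialize_pipeline(self, config):
        # TODO: wire up the actual pipeline
        pass

    def run_inference(self, batch):
        """Production-ready inference entry point."""
        ...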

A good analyzer should:

  • pass healthy code
  • flag real risks
  • and absolutely light up on synthetic slop

Quick start

pip install -U ai-slop-detector

# single file
slop-detector mycode.py

# project scan
slop-detector --project .

CI Gate modes (soft / hard / quarantine)

# Soft: report only, never fails
slop-detector --project . --ci-mode soft --ci-report

# Hard: fail build on thresholds
slop-detector --project . --ci-mode hard --ci-report

# Quarantine: gradual enforcement (escalates repeat offenders)
slop-detector --project . --ci-mode quarantine --ci-report

If you try it

Run it on a repo where AI-written code “looks complete”.

If the detector:

  • misses convincing emptiness, or
  • creates noisy false positives,

open an issue with a sanitized snippet + your expectation.
I’m actively tuning the rules + fixtures to match real-world review pain.


RFC

What’s the fastest reliable signal on your team that a PR is scaffolding dressed as implementation?
