Kwansub Yun

AI Slop Detector v2.6.2: Integration Test Evidence (because “green CI” can still be hollow)

What is “AI Slop”?

AI Slop is code that looks legitimate but carries little causal weight.

  • It’s not “broken.”
  • It’s not “malicious.”
  • It’s just convincingly empty.

It often shows up as:

  • promises outrunning evidence (“production-ready”, “scalable”)
  • tests that exist but don’t hit real dependencies (sketched right after this list)
  • structure and documentation growing faster than implementation
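The middle pattern is the sneaky one, so here’s a minimal sketch of it. Every name below is invented for illustration; no real project is implied:

# slop_example.py: a test that exists, passes, and proves nothing.
import unittest
from unittest.mock import MagicMock

def charge_customer(client, customer_id, amount_cents):
    """A "production-ready" billing call, according to the README."""
    response = client.post("/charges", json={"customer": customer_id, "amount": amount_cents})
    return response["status"]

class TestCharge(unittest.TestCase):
    def test_charge_succeeds(self):
        client = MagicMock()
        client.post.return_value = {"status": "ok"}  # canned answer
        # Green forever: auth, timeouts, retries, and real payloads are never exercised.
        self.assertEqual(charge_customer(client, "cust_123", 500), "ok")

if __name__ == "__main__":
    unittest.main()

CI shows green and coverage looks healthy, yet nothing about the real billing dependency has ever been verified.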

Community Feedback (and why this release exists)

This release exists because of a thoughtful comment from OnlineProxy (https://onlineproxy.io/).

They described a “complete-looking” repo with green CI that still felt hollow—and pointed out the real red flag:

CI is green, but 0 integration tests hit real dependencies.

That’s not a nitpick. It’s a real production failure mode.

So I treated that feedback like a bug report—and shipped v2.6.2 as the patch.


What’s new in v2.6.2

1) Integration Test Evidence (explicit split)

“Tests exist” isn’t enough.

v2.6.2 distinguishes:

  • tests_unit (fast, isolated)
  • tests_integration (hits real dependencies / realistic boundaries)

Detection uses four layers (the sketch after this list trips all four):

1) Path-based (tests/integration/, e2e/, it/)
2) Filename patterns (test_integration_*.py, *_integration_test.py)
3) Pytest markers (@pytest.mark.integration, @pytest.mark.e2e)
4) Runtime signals (TestClient, testcontainers, docker-compose)
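
To make the layers concrete, here’s a minimal test that should trip all four. It assumes a FastAPI app; the file path, app, and endpoint are invented for illustration:

# tests/integration/test_integration_api.py  (layers 1 + 2: path and filename)
import pytest
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()  # stand-in app; a real project would import its own

@app.get("/health")
def health():
    return {"status": "ok"}

@pytest.mark.integration  # layer 3: explicit marker
def test_health_endpoint():
    # Layer 4: TestClient drives the app through its real HTTP boundary
    # instead of calling the handler function directly.
    client = TestClient(app)
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}

One practical note: register the custom marker (a markers = integration: ... entry under [pytest] in pytest.ini) so pytest doesn’t warn about an unknown mark.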


2) Claims now require integration evidence

Strong claims now require stronger proof:

  • production-ready → requires tests_unit + tests_integration
  • scalable / fault-tolerant → requires tests_integration

This closes the gap: code that looks complete but proves nothing against real dependencies.
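
The gate itself is easy to picture. What follows is an illustrative sketch of the rule, not the detector’s actual code; the names REQUIRED_EVIDENCE and unsupported_claims are mine:

# Illustrative only: the claim -> required-evidence rule above, expressed as data.
REQUIRED_EVIDENCE = {
    "production-ready": {"tests_unit", "tests_integration"},
    "scalable": {"tests_integration"},
    "fault-tolerant": {"tests_integration"},
}

def unsupported_claims(claims, evidence):
    """Return the claims whose required evidence kinds are missing."""
    return [
        claim for claim in claims
        if not REQUIRED_EVIDENCE.get(claim, set()) <= evidence
    ]

# A repo that says "production-ready" but only ships unit tests gets flagged:
print(unsupported_claims(["production-ready"], {"tests_unit"}))
# -> ['production-ready']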


3) Clearer reports & questions

The goal isn’t “more numbers.” It’s more inspectable output.

Reports and questions now surface:

  • unit vs integration test breakdown
  • explicit warnings when integration tests are missing but production claims exist
  • more readable evidence labels (e.g., “integration tests”)

Quick start

# Install / upgrade
pip install -U ai-slop-detector

# Scan a project
slop-detector --project .

CI examples

# Soft: report only (never fails)
slop-detector --project . --ci-mode soft --ci-report

# Hard: fail on thresholds
slop-detector --project . --ci-mode hard --ci-report

# Claims strict: fail when production claims lack integration-test evidence
slop-detector --project . --ci-mode hard --ci-report --ci-claims-strict

Why this matters (in one line)

AI-era failures often aren’t syntax failures.
They’re verification gaps hidden behind clean structure and green CI.

v2.6.2 makes one of the most common gaps measurable:

“0 integration tests” is now something you can detect, report, and gate.


We’re in an era where a “tiny” suggestion can turn into real tooling.
Your feedback doesn’t just help one maintainer; it can become reusable engineering that helps the next AI developer ship with more proof and less guesswork.
