Some communities have decided that how code was written matters more than whether it works.
Not all communities. dev.to isn't one of them, and honestly that's part of why I'm posting here. But if you've spent time on certain forums, subreddits, or open source projects, you've seen the pattern: a review that ignores the actual code and goes straight to "this looks AI-generated." A thread about contribution quality that's really about provenance. A project that rejects a working patch because someone ran it through a detector.
The failure mode isn't "a robot touched it." It's "the human didn't understand or verify what they shipped." Those aren't the same thing, and conflating them has made some corners of the dev community genuinely hostile to LLM-assisted work regardless of quality.
So I built a tool.
Repo: greysquirr3l/papertowel (AI code fingerprint scrubber and git history humanizer).
papertowel is a scrubber for AI stylistic fingerprints. It detects and removes the tells that have become forensic evidence in spaces where AI assistance is treated as original sin.
What it actually does
The scrubber catches patterns that cluster in LLM output:
Slop vocabulary — words like "robust," "comprehensive," "streamlined," "utilize," "leverage." Individually harmless. In concentration, they read like a product spec.
Over-documentation — comments that restate what code obviously does. // Helper function to calculate the sum of two integers above a function named add.
Cookie-cutter README structure — you know the one. Emoji section headers. Checkmark feature lists. Getting Started → Prerequisites → Installation → Usage → Contributing → License. Every project generated in the same 24 hours has the same skeleton.
Metadata artifacts — CONTRIBUTING.md, CODE_OF_CONDUCT.md, SECURITY.md, and perfect GitHub issue templates all in commit one.
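To make the slop-vocabulary check concrete, here's a minimal sketch of a density score. This is an illustrative stand-in, not papertowel's actual implementation: the real scrubber is regex- and recipe-driven, and the word list and threshold here are invented for the example.

```rust
// Hypothetical sketch: flag text whose slop-word density is suspiciously
// high. Individually harmless words become a signal in concentration.
const SLOP_WORDS: &[&str] = &["robust", "comprehensive", "streamlined", "utilize", "leverage"];

fn slop_density(text: &str) -> f64 {
    // Tokenize on non-alphanumeric boundaries and lowercase each token.
    let tokens: Vec<String> = text
        .split(|c: char| !c.is_alphanumeric())
        .filter(|t| !t.is_empty())
        .map(|t| t.to_lowercase())
        .collect();
    if tokens.is_empty() {
        return 0.0;
    }
    let hits = tokens.iter().filter(|t| SLOP_WORDS.contains(&t.as_str())).count();
    hits as f64 / tokens.len() as f64
}

fn main() {
    let doc = "A robust, comprehensive framework to leverage streamlined workflows.";
    // 4 flagged words out of 8 tokens.
    println!("{:.2}", slop_density(doc));
}
```

The point of a density score rather than a blocklist: one "robust" in a 2,000-word README is noise; five in a paragraph reads like a product spec.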
# See what you're working with
papertowel scan .
# Fix it
papertowel scrub .
papertowel scrub . --dry-run # if you want to look first
There's also a wringer for git history humanization, but that's a separate post.
The recipe system
The scrubber is pattern-driven. Detection rules are TOML files — regex patterns with optional replacements, scoped to file types. No Rust required to extend it.
[[rules]]
name = "slop-utilize"
pattern = '\butilize\b'
replacement = "use"
applies_to = ["*.rs", "*.go", "*.ts"]
severity = "medium"
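For a sense of how a rule like that gets applied, here's a hedged sketch. The `Rule` struct and `apply_rule` function are hypothetical stand-ins: the real scrubber compiles the TOML `pattern` as a regex, while this sketch uses plain whole-word matching to stay dependency-free.

```rust
// Hypothetical stand-in for a compiled recipe rule. The real tool
// matches `pattern` as a regex; this sketch only replaces exact
// whitespace-delimited words (so "utilize," with punctuation attached
// would not match here).
struct Rule {
    pattern: &'static str,
    replacement: &'static str,
}

fn apply_rule(rule: &Rule, input: &str) -> String {
    input
        .split(' ')
        .map(|w| {
            if w == rule.pattern {
                rule.replacement.to_string()
            } else {
                w.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let rule = Rule { pattern: "utilize", replacement: "use" };
    println!("{}", apply_rule(&rule, "We utilize a cache here"));
}
```

A rule with a `replacement` is a fix the scrubber can apply; a rule without one can only flag, which is why `scan` and `scrub` are separate commands.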
The built-in recipes are in src/recipes. If you've noticed a fingerprint papertowel misses, a recipe PR is the fastest path from "this annoys me" to "this is fixed." That's the main thing I'm looking for right now — more recipes, more patterns, better coverage.
Yes, I used AI to build it. Quickly.
The irony is not subtle. I'm leaning into it.
A tool that detects AI fingerprints, built with AI assistance, to help people ship AI-assisted code that doesn't get flagged. The recursive self-reference is the point.
Code quality is about exactly that: quality. Ship working code, understand what it does, verify it solves the problem. Who wrote the first draft is incidental.
cargo install papertowel if you want to try it.