Copilot, Claude Code, Cursor — they all read your docstrings to understand your code. When those docstrings are wrong, your AI assistant makes confidently wrong suggestions.
And wrong docs are worse than no docs: studies show that incorrect documentation drops LLM task success rates by 22 percentage points compared to correct documentation.
Your linter checks style. But who checks that the docstring is actually accurate?
## The gap in your toolchain
Existing tools cover the basics:
- `ruff` — docstring style and formatting
- `interrogate` — docstring presence
But neither checks whether your docstring matches the code: a function that raises `ValueError` but doesn't document it, a parameter added last sprint but missing from the docstring, code that changed while the docstring didn't.
That's layers 3–6 of docstring quality — and nothing was checking them.
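Here's what that drift looks like in practice. Both the function and its mistakes below are invented for illustration — `parse_port` is not from any real codebase:

```python
def parse_port(value, default=8080):
    """Parse a TCP port number from a string.

    Args:
        value: The string to parse.

    Returns:
        The port as an int.
    """
    # Drift 1: `default` was added in a later sprint but never
    # documented in the Args section above.
    if value == "":
        return default
    port = int(value)
    # Drift 2: this ValueError is raised but there is no Raises section.
    if not (0 < port < 65536):
        raise ValueError(f"port out of range: {port}")
    return port
```

A style linter passes this docstring without complaint — it is well-formatted. It is also wrong on two counts.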
## docvet fills that gap
docvet is a CLI tool that vets docstrings across six quality layers:
| Layer | Check | What it catches |
|---|---|---|
| Presence | `docvet presence` | Public symbols with no docstring |
| Completeness | `docvet enrichment` | Missing Raises, Yields, Attributes sections |
| Accuracy | `docvet freshness` | Code changed, docstring didn't |
| Rendering | `docvet griffe` | Docstrings that break mkdocs |
| Visibility | `docvet coverage` | Modules invisible to doc generators |
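To give a feel for what the presence layer checks, here's a rough sketch of such a check built on the stdlib `ast` module — an illustration of the idea, not docvet's actual implementation:

```python
import ast


def find_missing_docstrings(source):
    """Return names of public functions/classes that lack a docstring."""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # Skip private symbols; flag public ones with no docstring.
            if not node.name.startswith("_") and ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing


code = '''
def documented():
    """Has a docstring."""

def bare():
    pass
'''
print(find_missing_docstrings(code))  # ['bare']
```

The other layers are harder: checking accuracy means comparing what the docstring claims against what the code actually does, which is exactly the part no style linter attempts.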
## Try it
```shell
pip install docvet
docvet check --all
```
Run it on your codebase. You'll probably find something.
## Why this matters for AI
Docstrings are no longer just for humans reading your code. They're the context window for every AI tool touching your codebase. Accurate docstrings create a feedback loop: better context → better AI suggestions → better code.
docvet keeps that contract honest.