Your AI coding agent can read your code, run your tests, and search your repo. But can it check whether your docstrings actually match what the code does?
Research shows incorrect documentation drops LLM task success by 22.6 percentage points. Missing docs are annoying. Wrong docs are toxic — they create false confidence in generated code.
docvet catches these gaps with 19 rules that flag drift between docstrings and the code they describe. Since v1.8, it ships an MCP server, meaning any MCP-aware editor can give its AI agent direct, programmatic access to those checks.
What Your Agent Gets
Two tools appear in the agent's toolbox:
docvet_check — Run checks on any Python file or directory. Returns structured JSON:
{
  "findings": [
    {
      "file": "src/pipeline/extract.py",
      "line": 42,
      "symbol": "extract_text",
      "rule": "missing-raises",
      "message": "Function 'extract_text' raises ValueError but has no Raises section",
      "category": "required"
    }
  ],
  "summary": {
    "total": 3,
    "by_category": {"required": 2, "recommended": 1},
    "files_checked": 8
  }
}
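The `missing-raises` finding above would fire on code like this. A hypothetical sketch, not taken from docvet's own docs: the function documents its arguments and return value but not the exception it raises.

```python
def extract_text(path: str) -> str:
    """Read a file and return its text content.

    Args:
        path: Path to the file to read.

    Returns:
        The file's contents as a string.
    """
    # Raised here, but the docstring has no Raises section:
    # exactly the gap the missing-raises finding reports.
    if not path.endswith(".txt"):
        raise ValueError(f"unsupported file type: {path}")
    with open(path) as f:
        return f.read()
```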
docvet_rules — List all 19 rules with descriptions and categories.
No CLI output to parse. No regex. Typed fields the agent reasons about directly.
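Because the output is structured, anything downstream of the agent (a pre-commit hook, a CI gate) can consume it with plain JSON handling. A minimal sketch, assuming the response shape shown above; the helper name, the sample values, and the second rule name are illustrative:

```python
import json

def required_findings(report_json: str) -> list[str]:
    """Return 'required'-category findings as readable one-liners."""
    report = json.loads(report_json)
    return [
        f"{f['file']}:{f['line']} [{f['rule']}] {f['message']}"
        for f in report["findings"]
        if f["category"] == "required"
    ]

# Sample response in the shape docvet_check returns (values illustrative).
sample = json.dumps({
    "findings": [
        {"file": "src/pipeline/extract.py", "line": 42, "symbol": "extract_text",
         "rule": "missing-raises",
         "message": "Function 'extract_text' raises ValueError but has no Raises section",
         "category": "required"},
        {"file": "src/pipeline/extract.py", "line": 7, "symbol": "load",
         "rule": "summary-style",  # hypothetical rule name, for illustration
         "message": "Summary line should end with a period",
         "category": "recommended"},
    ],
    "summary": {"total": 2, "by_category": {"required": 1, "recommended": 1},
                "files_checked": 1},
})
```

Filtering on `category` is the point: "required" findings can block a merge while "recommended" ones stay advisory.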
Setup: One Block of JSON
The MCP server runs on stdio via uvx — no pip install in your project, no virtual environment pollution, no global packages. uvx downloads and runs docvet in an isolated environment automatically. You add the config and it just works.
VS Code
Add to .vscode/mcp.json:
{
  "servers": {
    "docvet": {
      "type": "stdio",
      "command": "uvx",
      "args": ["docvet[mcp]", "mcp"]
    }
  }
}
Note: VS Code uses "servers", not "mcpServers".
Cursor
Add to .cursor/mcp.json (project) or ~/.cursor/mcp.json (global):
{
  "mcpServers": {
    "docvet": {
      "command": "uvx",
      "args": ["docvet[mcp]", "mcp"]
    }
  }
}
Claude Code
One command:
claude mcp add --transport stdio --scope project docvet -- uvx "docvet[mcp]" mcp
Others
Windsurf, Claude Desktop, and anything that speaks MCP — same mcpServers pattern. Full configs here.
The Workflow
Once configured, the agent uses docvet as part of its normal flow:
- Agent opens a Python file to modify
- Agent runs docvet_check on the file
- Findings come back: missing Raises sections, stale signatures, undocumented attributes
- Agent fixes the docstrings alongside the code change
The feedback loop becomes automatic — like a line cook who taste-tests every dish before it leaves the pass. Code and documentation stay in sync because the agent checks both.
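For the missing-raises finding shown earlier, the fix is a one-section docstring edit. A sketch of what the agent would produce; the function itself is illustrative:

```python
def extract_text(path: str) -> str:
    """Read a file and return its text content.

    Args:
        path: Path to the file to read.

    Returns:
        The file's contents as a string.

    Raises:
        ValueError: If the file type is not supported.
    """
    if not path.endswith(".txt"):
        raise ValueError(f"unsupported file type: {path}")
    with open(path) as f:
        return f.read()
```

Re-running docvet_check after this edit would return no finding for the symbol, closing the loop in the same turn.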
Try It
- Add the .vscode/mcp.json block above
- Open a Python file with a known gap (function raises an exception, no Raises: section)
- Ask your AI agent to check the file with docvet
- Watch it fix the docstring
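If no file with a known gap is handy, drop in something like this. It is a hypothetical example of the "stale signature" case: the docstring documents a parameter that no longer exists (`encoding`) and omits the one that does (`strict`); docvet's exact rule name for this may differ.

```python
import json

def parse_config(path: str, strict: bool = True) -> dict:
    # Docstring documents 'encoding' (removed) and omits 'strict' (current):
    # a stale-signature gap for the checker to flag.
    """Parse a JSON config file into a dict.

    Args:
        path: Path to the config file.
        encoding: Text encoding to use.
    """
    with open(path) as f:
        data = json.load(f)
    if strict and not isinstance(data, dict):
        raise TypeError("top-level config must be an object")
    return data
```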