
Alberto Nieto

Posted on • Originally published at alberto.codes

Give Your AI Coding Agent a Docstring Quality Tool (MCP Setup for VS Code, Cursor, and Claude Code)

Your AI coding agent can read your code, run your tests, and search your repo. But can it check whether your docstrings actually match what the code does?

Research shows incorrect documentation drops LLM task success by 22.6 percentage points. Missing docs are annoying. Wrong docs are toxic — they create false confidence in generated code.

docvet catches these gaps with 19 rules that verify docstrings against the code they describe. Since v1.8, it ships an MCP server, meaning any MCP-aware editor can give its AI agent direct, programmatic access to those checks.

What Your Agent Gets

Two tools appear in the agent's toolbox:

docvet_check — Run checks on any Python file or directory. Returns structured JSON:

{
  "findings": [
    {
      "file": "src/pipeline/extract.py",
      "line": 42,
      "symbol": "extract_text",
      "rule": "missing-raises",
      "message": "Function 'extract_text' raises ValueError but has no Raises section",
      "category": "required"
    }
  ],
  "summary": {
    "total": 3,
    "by_category": {"required": 2, "recommended": 1},
    "files_checked": 8
  }
}

docvet_rules — List all 19 rules with descriptions and categories.

No CLI output to parse, no regex: just typed fields the agent can reason about directly.
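Because the response is plain JSON, the agent (or any script) can work with it directly. Here's a minimal sketch using the sample response above; the triage rule, treating "required" findings as blocking, is my own illustration, not something docvet prescribes:

```python
import json

# The sample docvet_check response from above (findings list truncated,
# as in the article, so the summary counts more than what is shown).
response = json.loads("""
{
  "findings": [
    {
      "file": "src/pipeline/extract.py",
      "line": 42,
      "symbol": "extract_text",
      "rule": "missing-raises",
      "message": "Function 'extract_text' raises ValueError but has no Raises section",
      "category": "required"
    }
  ],
  "summary": {
    "total": 3,
    "by_category": {"required": 2, "recommended": 1},
    "files_checked": 8
  }
}
""")

# Triage: treat "required" findings as blocking, "recommended" as advisory.
blocking = [f for f in response["findings"] if f["category"] == "required"]
for f in blocking:
    print(f"{f['file']}:{f['line']} [{f['rule']}] {f['message']}")
```

No string munging anywhere: each field arrives typed and named, which is the whole point of exposing the checks as an MCP tool rather than CLI text.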

Setup: One Block of JSON

The MCP server runs on stdio via uvx — no pip install in your project, no virtual environment pollution, no global packages. uvx downloads and runs docvet in an isolated environment automatically. You add the config and it just works.

VS Code

Add to .vscode/mcp.json:

{
  "servers": {
    "docvet": {
      "type": "stdio",
      "command": "uvx",
      "args": ["docvet[mcp]", "mcp"]
    }
  }
}

Note: VS Code uses "servers", not "mcpServers".

Cursor

Add to .cursor/mcp.json (project) or ~/.cursor/mcp.json (global):

{
  "mcpServers": {
    "docvet": {
      "command": "uvx",
      "args": ["docvet[mcp]", "mcp"]
    }
  }
}

Claude Code

One command:

claude mcp add --transport stdio --scope project docvet -- uvx "docvet[mcp]" mcp

Others

Windsurf, Claude Desktop, and anything that speaks MCP — same mcpServers pattern. Full configs here.

The Workflow

Once configured, the agent uses docvet as part of its normal flow:

  1. Agent opens a Python file to modify
  2. Agent runs docvet_check on the file
  3. Findings come back — missing Raises sections, stale signatures, undocumented attributes
  4. Agent fixes the docstrings alongside the code change

The feedback loop becomes automatic — like a line cook who taste-tests every dish before it leaves the pass. Code and documentation stay in sync because the agent checks both.
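In code terms, the loop looks something like this sketch. `run_docvet_check` is a hypothetical stand-in for the agent's real `docvet_check` MCP tool call, and the fix step is left abstract:

```python
def run_docvet_check(path):
    """Stand-in for the docvet_check MCP tool call (returns canned data)."""
    return {
        "findings": [
            {"file": path, "line": 42, "symbol": "extract_text",
             "rule": "missing-raises", "category": "required",
             "message": "Function 'extract_text' raises ValueError "
                        "but has no Raises section"},
        ],
        "summary": {"total": 1, "by_category": {"required": 1},
                    "files_checked": 1},
    }

def edit_with_docs(path):
    # Steps 1-2: open the file, run the check.
    result = run_docvet_check(path)
    # Step 3: collect the findings that must be fixed before shipping.
    todo = [f for f in result["findings"] if f["category"] == "required"]
    # Step 4: fix docstrings alongside the code change (abstract here).
    return [f"{f['file']}:{f['line']}: fix {f['rule']}" for f in todo]

print(edit_with_docs("src/pipeline/extract.py"))
```

The key property is that the check runs inside the edit, not as a separate review pass after the fact.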

Try It

  1. Add the .vscode/mcp.json block above
  2. Open a Python file with a known gap (function raises an exception, no Raises: section)
  3. Ask your AI agent to check the file with docvet
  4. Watch it fix the docstring

