DEV Community

Yeomin Seon
I stopped writing prompts to review PRDs — now I just "grade" my AI agent's homework

The copy-paste hell

If you've ever asked an AI agent to review a 50-page PRD, you know the pain.

You find a problem on page 12. You copy that paragraph. You write a prompt: "In the section about authentication, the retry logic is wrong. Please change it to exponential backoff with max 3 retries."

The agent finds a different section about authentication. Or it changes the right section but also "improves" three other things you didn't ask for. You lose 10 minutes untangling what happened.

Repeat this 30 times per document. That was my workflow for months.

What I actually wanted

I wanted to do what a professor does with a student's paper:

  • Draw a line through the bad part
  • Write "fix this" or "what does this mean?"
  • Hand it back

The student (agent) should know exactly where the mark is and exactly what type of feedback it is. No ambiguity, no lost context.

What I built

MD Feedback — a VS Code extension + MCP server.

Open any markdown file. Select text. Press 2 for fix or 3 for question. That's it.

Each annotation gets stored as an HTML comment inside the markdown itself:

```
<!-- [FIX] id:abc status:open
Use exponential backoff, not linear retry -->
```

Since it's just HTML comments:

  • They survive git commits, branches, and merges
  • They render invisibly in any markdown viewer
  • They don't break the document for anyone not using the tool
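Because the annotations are plain HTML comments, any tool can recover them with a small parser. Here's a minimal Python sketch of the idea, assuming a hypothetical annotation format (the real MD Feedback syntax may differ):

```python
import re

# Hypothetical annotation format; the real MD Feedback syntax may differ.
ANNOTATION_RE = re.compile(
    r"<!--\s*\[(?P<kind>FIX|QUESTION|HIGHLIGHT)\]"
    r"\s+id:(?P<id>\S+)\s+status:(?P<status>\S+)\s*\n"
    r"(?P<body>.*?)-->",
    re.DOTALL,
)

def parse_annotations(markdown: str) -> list[dict]:
    """Return annotations with their exact character offsets in the file."""
    return [
        {
            "kind": m["kind"],
            "id": m["id"],
            "status": m["status"],
            "body": m["body"].strip(),
            "offset": m.start(),  # exact location, so feedback is unambiguous
        }
        for m in ANNOTATION_RE.finditer(markdown)
    ]

doc = """Retry on failure.
<!-- [FIX] id:abc status:open
Use exponential backoff, not linear retry -->
"""
annotations = parse_annotations(doc)
```

Since the annotation carries its own id and status, the agent never has to guess which "section about authentication" you meant.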

The MCP part

The extension ships with an MCP server (27 tools). Your AI agent connects and can:

  • Read all annotations with exact locations
  • Apply fixes and report what changed
  • Check quality gates ("3 fixes remaining → blocked")
  • Generate session handoffs so the next session picks up where you left off

No copy-paste. No export. The agent reads your red marks directly.
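A gate check like "3 fixes remaining → blocked" boils down to counting open annotations. A sketch of that logic (the real tool's statuses and return shape are assumptions):

```python
# Hypothetical gate check mirroring "3 fixes remaining -> blocked".
# The real MCP tool's names, statuses, and return shape are assumptions.
def evaluate_gate(annotations: list[dict]) -> str:
    open_fixes = [a for a in annotations
                  if a["kind"] == "FIX" and a["status"] == "open"]
    if open_fixes:
        return f"blocked: {len(open_fixes)} fixes remaining"
    return "ready"

print(evaluate_gate([{"kind": "FIX", "status": "open"}] * 3))
# prints "blocked: 3 fixes remaining"
```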

The gate system

This changed my workflow the most. The agent cannot proceed to implementation until I approve its fixes. I see inline before/after diffs for every change.

It's like a pull request review, but for the plan itself.
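The inline before/after view amounts to an ordinary unified diff over the changed span. A minimal Python sketch of the underlying idea (not the extension's actual rendering):

```python
import difflib

# Sketch of an inline before/after view as a unified diff.
# Not the extension's actual rendering; just the underlying idea.
def before_after_diff(before: str, after: str) -> str:
    return "\n".join(difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="before", tofile="after", lineterm="",
    ))

diff = before_after_diff(
    "retry: linear, max 10",
    "retry: exponential backoff, max 3",
)
print(diff)
```

Approving a change means accepting the `after` side; rejecting it leaves the annotation open, which in turn keeps the gate blocked.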

Workflow

```
1. YOU     → Write plan in markdown
2. YOU     → Annotate: highlight (1), fix (2), question (3)
3. AGENT   → Reads annotations via MCP
4. AGENT   → Implements fixes
5. YOU     → Approve / reject inline
6. AGENT   → Gates auto-evaluate: "all done → ready"
```

You do steps 1-2 and 5. The agent does the rest.

Setup (2 minutes)

Install from VS Code Marketplace, then add to your MCP config:

```json
{
  "mcpServers": {
    "md-feedback": {
      "command": "npx",
      "args": ["-y", "md-feedback"]
    }
  }
}
```

Works with Claude Code, Cursor, Copilot, Codex, Cline, Windsurf, and more.

Open source

GitHub: yeominux/md-feedback

Free for personal use. I'd love to hear how your review workflow compares — especially if you've found other ways to solve the "agent loses context on long docs" problem.
