DEV Community

Katsu

MCP Server for Accessing Official Language Docs from AI Editors

Overview

DevDocs is an open-source project that aggregates official documentation for many programming languages. I built an MCP (Model Context Protocol) server that makes DevDocs accessible from AI editors such as Claude and Cursor. Both DevDocs and the MCP server run entirely in a local environment.

👉 https://github.com/katsulau/devdocs-mcp

The overall architecture looks like this:

*(Figure: overall architecture)*

Why I Built This

To learn new languages more efficiently

By pointing the AI at the official documentation as its source, I can learn from a reliable reference.

To validate the correctness of implementations

Instead of relying solely on answers from Claude or Cursor, I can make more informed decisions by coding while referencing the official documentation.

To expand implementation options

By checking the primary sources, I can consider a wider range of coding styles and design choices.
While using Claude or Cursor has reduced the effort of searching and implementing things myself, I feel it has also reduced my opportunities to make design and implementation decisions on my own.

For example:
1. Use Claude or Cursor to proceed with an implementation
2. Verify the generated code via MCP
3. Get answers with DevDocs links included
4. If necessary, open the links and read the references

By seamlessly referencing official documentation, I can supplement my knowledge about which use cases a feature applies to, while broadening my own design and implementation perspectives. Building this kind of UX was also one of my motivations.

How to Use

The repository is here.
👉 https://github.com/katsulau/devdocs-mcp

Please refer to the README for setup instructions.

Demo

Here is an example screen when used with Cursor:

*(Demo: querying DevDocs through the MCP server in Cursor)*

Clicking the link will open the corresponding page in your local DevDocs instance.

*(Demo: clicking a returned link opens the page in local DevDocs)*

What I Learned from Building the MCP

Accuracy improves when reducing ambiguity in inputs

Even if you provide the exact same input to AI editors like Cursor or Claude, they won’t always return the same output *1.

At first, I considered automatically detecting the programming language from the user’s input, but encountered these issues:
• It took extra time to extract the target language for search
• Depending on the query, the target might not be detected correctly

So I switched to a method where the slash command directly specifies parameters. This greatly improved both stability and accuracy.

Leave output generation to the AI editor where possible

I also prepared an endpoint where users can check whether the language they want exists in DevDocs.

Initially, I considered implementing filtering logic in the MCP server (exact match, partial match, fuzzy search, etc.), and returning a narrowed-down list.

*(Diagram: filtering implemented inside the MCP server)*

However, since the list of available languages is about 700 entries in a JSON object, I realized I could always return the entire list and let the AI editor process it.

*(Diagram: returning the full list and letting the AI editor filter)*

In practice, this worked very well, and I was able to greatly reduce implementation complexity on the MCP server side.
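The simplified server-side logic can be sketched like this. The data shape and field names are assumptions modeled loosely on DevDocs' public docs list, not the actual devdocs-mcp implementation.

```python
import json

# Illustrative sketch: instead of implementing exact/partial/fuzzy
# matching in the MCP server, the tool returns the full list of
# available docs and lets the AI editor narrow it down.

# Stand-in for the ~700-entry list fetched from the local DevDocs
# instance (field names are assumptions).
SAMPLE_DOCS = [
    {"name": "Python", "slug": "python~3.12"},
    {"name": "Go", "slug": "go"},
    {"name": "Kotlin", "slug": "kotlin"},
]


def list_available_docs(docs: list[dict]) -> str:
    # No filtering logic at all: serialize everything and let the
    # AI client decide what is relevant to the user's query.
    return json.dumps(docs)
```

The design trade-off: a few hundred small JSON entries fit comfortably in the model's context, so pushing the filtering to the client removes exact-match, partial-match, and fuzzy-search code from the server entirely.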

The key takeaway:
• Let the AI client handle tasks where possible
• Keep the MCP server simple
• If performance or accuracy drops due to too much data, consider adding filtering later

The importance of revisiting responsibilities

Initially, I worked with Cursor to create a design.md. At that stage, I separated responsibilities into two layers:
• A layer for MCP protocol implementation and communication with AI editors
• A layer for communication with DevDocs and data processing

However, as I added more instructions (for example, “please handle X”), the original files kept growing, and some classes ended up bloated with too many responsibilities.

Problems encountered:
• Overlapping configurations between config.yaml and .env
• Duplicate logic scattered across multiple files (e.g., “extract 20 items from a list” implemented in several places)

Eventually, I had to refactor, restructuring into controller, application, and repository layers with clearer responsibilities.

Lesson: AI editors tend to keep adding to existing files rather than creating new ones. You need to explicitly direct them to rethink responsibilities and maintain architecture consistency.
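The layered structure after the refactor can be sketched as below. The class and method names are hypothetical, chosen only to illustrate the controller / application / repository split; the real code surely differs.

```python
# Minimal sketch of the three-layer split described above
# (names are illustrative, not the actual devdocs-mcp classes).

class DevDocsRepository:
    """Repository layer: talks to the local DevDocs instance.
    Stubbed here so the sketch runs without a network call."""

    def fetch_docs(self) -> list[dict]:
        return [{"name": "Python"}, {"name": "Go"}]


class DocsApplication:
    """Application layer: business rules live in one place, so a
    helper like 'take the first N items' is no longer duplicated
    across files."""

    def __init__(self, repo: DevDocsRepository, limit: int = 20):
        self.repo = repo
        self.limit = limit

    def list_docs(self) -> list[dict]:
        return self.repo.fetch_docs()[: self.limit]


class DocsController:
    """Controller layer: maps an incoming MCP tool call onto the
    application layer, with no business logic of its own."""

    def __init__(self, app: DocsApplication):
        self.app = app

    def handle_list_docs(self) -> list[dict]:
        return self.app.list_docs()
```

Each layer depends only on the one below it, which is what makes it possible to tell the AI editor "put this logic in the application layer" instead of letting it grow wherever it lands.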

Development caveats

MCP communicates with AI editors over stdin/stdout.
Since stdin/stdout pipes exist only between a parent process and the child it spawned, Cursor cannot directly connect to an already-running MCP server (e.g., one started manually via docker-compose up).

To handle this, I:
• Prepared a shell script that runs docker-compose, and pointed the MCP configuration to that script
• The script stops any already running container before starting a new one

This prevented errors when Cursor connected, even if the user had already started the container manually.

At first, I was confused (“Why is there no log when Cursor sends requests…?”), but once I understood the stdin/stdout limitation, it made sense. Knowing this upfront saves a lot of wasted time.
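The parent-child constraint can be demonstrated in a few lines: the pipe exists only because the parent spawned the child, which is why Cursor must launch the server (or a wrapper script) itself. The child program below is a stand-in for an MCP server, not real MCP framing.

```python
import json
import subprocess
import sys

# A toy "server" that reads one JSON request from stdin and writes
# one JSON response to stdout, mimicking stdio-based MCP transport.
CHILD = (
    "import sys, json\n"
    "req = json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'id': req['id'], 'result': 'pong'}))\n"
)

# The parent spawns the child itself, so it owns both ends of the
# pipe. A container started separately via `docker-compose up` has
# no such pipe to the editor, which is why a wrapper script that
# launches the process is needed.
proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = proc.communicate(json.dumps({"id": 1, "method": "ping"}) + "\n")
response = json.loads(out)
```

Nothing but the spawning parent can write to that child's stdin, so an externally started server simply never sees the editor's requests, and no log appears.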

Final Note

If you’re interested, please give it a try!
And if something doesn’t work or you find bugs, I’d appreciate it if you could open an issue 🙌

👉 https://github.com/katsulau/devdocs-mcp

*1 Cursor Learn also mentions that outputs are not always deterministic.
