
sea-turt1e

Building an MCP Server Using My Custom Python Library

Introduction

To familiarize myself with MCP, I implemented an MCP server around one of my own Python libraries.
The library is kanjiconv, which converts Japanese text containing kanji into hiragana, katakana, and romaji.
The server runs standalone and is exposed to Claude Desktop.
My earlier article introducing kanjiconv is here.

What is MCP?

MCP (Model Context Protocol) is an open standard that lets AI models interact securely with external tools and data sources. Plenty of articles already explain the protocol in depth, so the short version: by exposing the right tools over MCP, developers can give an AI model specialized functionality with relatively little effort.
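To make that concrete, here is a rough sketch (based on my reading of the MCP spec, not an excerpt from it) of the JSON-RPC 2.0 message a client sends when the model invokes a tool; the tool name and arguments match the server described below.

# Sketch of an MCP "tools/call" request, shown as a Python dict.
# Field names follow my understanding of the JSON-RPC framing used by MCP;
# the tool name and arguments are the ones this article's server exposes.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "convert_to_hiragana",
        "arguments": {"text": "幽☆遊☆白書は最高の漫画です"},
    },
}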

What I Created

The GitHub repository for this project is as follows:
https://github.com/sea-turt1e/kanjiconv_mcp

The kanjiconv_mcp implementation provides the following features:

  • Kanji to Hiragana Conversion: "幽☆遊☆白書は最高の漫画です" → "ゆうゆうはくしょはさいこうのまんがです"
  • Kanji to Katakana Conversion: "幽☆遊☆白書は最高の漫画です" → "ユウユウハクショハサイコウノマンガデス"
  • Kanji to Romanized Japanese Conversion: "幽☆遊☆白書は最高の漫画です" → "yuuyuuhakushohasaiikounomangadesu"

Technical Stack

  • Python 3.13: Runtime environment
  • uv: A high-performance Python package manager
  • kanjiconv: The Japanese character conversion library
  • sudachipy: A morphological analysis engine
  • Docker: Used for containerization to ensure consistent environment setup
  • MCP SDK: Implements the protocol, used for integration with Claude Desktop

Project Structure

kanjiconv_mcp/
├── main.py                    # Main implementation of the MCP server
├── pyproject.toml            # Project configuration and dependencies
├── requirements.txt          # Pip dependency list
├── Dockerfile               # uv version
├── Dockerfile.pip           # Pip fallback version
├── docker-compose.yml       # Service management configuration
├── docker.sh               # Helper script for Docker operations
├── claude_desktop_config.json    # Example Claude Desktop configuration
├── claude_desktop_config_docker.json # Docker-specific configuration example
└── test_client.py          # Test client for verification

Key Implementation Points

1. Basic MCP Server Architecture

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import TextContent, Tool

# Create MCP server instance
server = Server("kanjiconv-mcp")

@server.list_tools()
async def list_tools() -> list[Tool]:
    """Returns a list of available tools"""
    return [
        Tool(
            name="convert_to_hiragana",
            description="Convert Japanese text (including kanji) to hiragana",
            inputSchema={
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "Japanese text to convert"},
                    "separator": {"type": "string", "default": "/"},
                    # ... other options
                },
                "required": ["text"],
            },
        ),
        # ... other tools
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    """Handles tool invocation"""
    if name == "convert_to_hiragana":
        request = ConvertRequest(**arguments)
        kanji_conv = get_kanji_conv_instance(
            request.separator, 
            request.use_custom_readings, 
            request.use_unidic, 
            request.sudachi_dict_type
        )
        result = kanji_conv.to_hiragana(request.text)
        return [TextContent(type="text", text=result)]
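The excerpt imports stdio_server but omits the entry point. Following the usual pattern of the MCP Python SDK, a minimal one looks roughly like this (a sketch continuing the excerpt above, not copied from the repository's main.py):

import asyncio

async def main():
    # Serve the MCP protocol over stdin/stdout so Claude Desktop can
    # spawn the process directly and talk to it via the stdio transport.
    async with stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            server.create_initialization_options(),
        )

if __name__ == "__main__":
    asyncio.run(main())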

2. Flexible Configuration Options

Each conversion tool supports the following options (a sketch of the matching request model follows the list):

  • separator: Word separation character (default: "/")
  • use_custom_readings: Use custom reading dictionary
  • use_unidic: Enable UniDic usage (for improved accuracy)
  • sudachi_dict_type: Sudachi dictionary type (options: full/small/core)
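The call_tool handler shown earlier validates its arguments with a ConvertRequest model. The repository defines its own version; as a sketch of what such a model might look like (assuming pydantic; every default except separator is illustrative rather than taken from the code):

from pydantic import BaseModel

class ConvertRequest(BaseModel):
    # Mirrors the tool's inputSchema; only "text" is required.
    text: str
    separator: str = "/"
    # The remaining defaults are placeholders for illustration.
    use_custom_readings: bool = True
    use_unidic: bool = True
    sudachi_dict_type: str = "full"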

3. Environment Standardization via Docker

Main Dockerfile (uv version)

FROM python:3.13-slim-bookworm
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

# Install system dependencies
RUN apt-get update && apt-get install -y \
    curl \
    build-essential \
    pkg-config \
    && rm -rf /var/lib/apt/lists/*

# Install Rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y \
    && . /root/.cargo/env
ENV PATH="/root/.cargo/bin:$PATH"

ADD . /app

# Configure uv
WORKDIR /app
RUN uv sync --locked

# Download dictionaries
RUN uv run python -m unidic download

COPY main.py ./

EXPOSE 8000

# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1

# Run MCP server
CMD ["uv", "run", "python", "main.py"]

Fallback Dockerfile (pip version)

FROM python:3.13-slim-bookworm

RUN apt-get update && apt-get install -y \
    curl \
    build-essential \
    pkg-config \
    git \
    && rm -rf /var/lib/apt/lists/*

RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:$PATH"

WORKDIR /app

COPY requirements.txt ./

RUN pip install --upgrade pip \
    && pip install --no-cache-dir -r requirements.txt

RUN python -m unidic download

COPY main.py ./
COPY pyproject.toml ./

EXPOSE 8000

ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1

CMD ["python", "main.py"]

4. Convenient Helper Script

The docker.sh script simplifies Docker operations:

# Build (tries uv first, then falls back to pip)
./docker.sh build

# Explicitly build with pip version
./docker.sh build-pip

# Run tests
./docker.sh test

# Clean up
./docker.sh clean
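./docker.sh test exercises the server through test_client.py. For readers who want to try the server without Docker, a minimal client built on the MCP Python SDK could look like the following (a sketch under my assumptions, not the repository's actual test_client.py):

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server the same way Claude Desktop would: over stdio.
# Run this from the kanjiconv_mcp directory.
server_params = StdioServerParameters(command="uv", args=["run", "python", "main.py"])

async def main():
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # List the conversion tools the server advertises
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Call one of them and print the converted text
            result = await session.call_tool(
                "convert_to_hiragana",
                {"text": "幽☆遊☆白書は最高の漫画です", "separator": "/"},
            )
            print(result.content[0].text)

asyncio.run(main())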

Configuration for Claude Desktop

Configure claude_desktop_config.json as shown below to use the MCP server from Claude Desktop. (I tested this on macOS.)

Local Execution Version

{
  "mcpServers": {
    "kanjiconv": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/kanjiconv_mcp",
        "run",
        "python",
        "main.py"
      ]
    }
  }
}

Docker Version

{
  "mcpServers": {
    "kanjiconv": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "kanjiconv-mcp:latest"],
      "cwd": "/path/to/kanjiconv_mcp"
    }
  }
}

Development Challenges

1. Rust Dependency Issues

The sudachipy package includes components written in Rust, which made building the Docker image more involved: I had to install the Rust toolchain and set the necessary environment variables (such as adding cargo to PATH).

2. Choosing Between uv and pip

uv is fast, but it is not necessarily available or supported in every environment. I therefore set up the build so that it falls back to pip when the uv path doesn't work.

Practical Usage Examples

You can ask Claude Desktop questions like these:

"It's a beautiful day today" - convert each token separated by "/" to hiragana.

→ "きょう/は/よ/い/てんき/です/ね"
Enter fullscreen mode Exit fullscreen mode
Convert "programming language" to katakana.

→ "プログラミング/ゲンゴ"
Enter fullscreen mode Exit fullscreen mode
Convert "Japanese text processing" to romaji.

→ "nihongo/shori"
Enter fullscreen mode Exit fullscreen mode

Conclusion

In this project, I built an MCP server around my own Python library.
The experience showed that wiring a custom library up to an LLM is more straightforward than one might expect.
With MCP it is easy to add specific functionality to AI models, so I'm looking forward to building more tools like this.
