Taming Financial Data: How Bob Built an MCP Agent for Seamless XBRL Conversion with Docling
TL;DR: What is XBRL?
XBRL (eXtensible Business Reporting Language) is a freely available, global standard for exchanging business information. Often described as “barcoding for financial statements,” it uses XML-based tags to identify every individual piece of financial data — such as net profit, total assets, or depreciation — within a digital document. Unlike a static PDF or Excel sheet where data is just text on a page, XBRL makes information machine-readable. This allows software to automatically recognize, select, and analyze specific data points without manual entry, drastically reducing errors and increasing the speed of financial analysis.
XBRL is primarily used by publicly traded companies, government regulators, and financial analysts. In the United States, the SEC (Securities and Exchange Commission) requires companies to file their quarterly and annual reports in XBRL format to ensure transparency and comparability across the market. Similarly, tax authorities, central banks, and stock exchanges worldwide use it to collect and process massive datasets efficiently. By providing a common “language” for reporting, XBRL enables investors and auditors to compare the performance of different companies across various industries and borders with a high degree of precision.
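The "barcoding" idea is easy to see in miniature. The snippet below parses a hand-written, simplified XBRL fact with Python's standard library; the concept name, context, and values are illustrative, not taken from a real filing.

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written XBRL fragment (not a real filing): each fact is a
# tagged element whose name identifies the financial concept, and whose
# contextRef/unitRef attributes tie it to a reporting period and currency.
SNIPPET = """
<xbrl xmlns:us-gaap="http://fasb.org/us-gaap/2023">
  <us-gaap:NetIncomeLoss contextRef="FY2023" unitRef="USD" decimals="0">557296</us-gaap:NetIncomeLoss>
</xbrl>
"""

root = ET.fromstring(SNIPPET)
ns = {"us-gaap": "http://fasb.org/us-gaap/2023"}
fact = root.find("us-gaap:NetIncomeLoss", ns)
# Software can select the fact by concept name instead of scraping page text:
print(fact.text, fact.attrib["contextRef"])  # -> 557296 FY2023
```

Because every data point carries its own machine-readable identity, no manual re-keying is needed to find "net income" across thousands of filings.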
The Docling XBRL Processing Pipeline

To transform complex financial filings into something an AI can actually understand, the agent follows a strict offline-first protocol. This ensures that sensitive financial data never leaves the local environment during the conversion process.
Configuring the Offline Backend
The secret to Docling’s precision is the Taxonomy. Since XBRL documents rely on external schemas to define financial concepts (like “Net Income” or “Total Liabilities”), the agent must be configured to handle these resources locally:
- Local Resource Fetching: We set enable_local_fetch to True. This allows the converter to "read" the supporting taxonomy files Bob has stored on the disk.
- The Taxonomy Package: To achieve 100% offline operation, Bob provides a Taxonomy Package — a specialized bundle that maps web URLs to local files. Without this, the agent would need to “call home” to regulatory servers to understand the tags.
- Remote Fallback: If a package isn’t available, setting enable_remote_fetch to True allows Docling to download and cache these definitions once, then reuse them for future conversions.
The Conversion Lifecycle
Once the environment is set, the agent triggers the convert() method. This isn't just a file format change; it is a full intellectual "ingestion" of the report:
1. Parsing: The agent deconstructs the XBRL instance file (the .xml).
2. Validation: It cross-references the data against the local taxonomy to ensure every financial fact is technically accurate and properly “labeled.”
3. Extraction: It separates the document into three distinct streams:
- Metadata: Filing dates, entity names, and reporting periods.
- Text Blocks: The narrative “notes” and disclosures often found in 10-Ks.
- Numeric Facts: The raw dollars and cents, structured as high-fidelity key-value pairs.
4. Unification: Finally, all these streams are merged into a single DoclingDocument representation.
How can parsing XBRL documents (for example, with an agent) help?
Because Docling creates this “Unified Representation,” a Model Context Protocol (MCP) can now pass a clean, structured JSON or Markdown version of the financial report to the LLM.
Instead of the LLM guessing what a tag like <us-gaap:CashAndCashEquivalents> means, it receives a clear, human-readable label: "Cash and Cash Equivalents: $557,296." This drastically reduces "hallucinations" and makes financial analysis pinpoint accurate.
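A toy sketch of that label resolution, with a hand-written mapping standing in for the label linkbase that Docling resolves from the taxonomy (the tag and figure here mirror the example above and are illustrative only):

```python
# Hypothetical label map: in a real filing these human-readable labels come
# from the "_lab.xml" label linkbase in the taxonomy, not a hard-coded dict.
LABELS = {
    "us-gaap:CashAndCashEquivalents": "Cash and Cash Equivalents",
}

def humanize(tag: str, value: int) -> str:
    """Render a raw XBRL fact the way an LLM should see it."""
    label = LABELS.get(tag, tag)  # fall back to the raw tag if unmapped
    return f"{label}: ${value:,}"

print(humanize("us-gaap:CashAndCashEquivalents", 557296))
# -> Cash and Cash Equivalents: $557,296
```

The LLM never has to guess what a namespaced tag means; it receives the resolved label and a formatted value.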
Docling provides an example notebook that can be used out of the box:
%pip install -q docling
import urllib.request
from pathlib import Path

# Create directories for XBRL data
data_dir = Path("xbrl_data")
taxonomy_dir = data_dir / "taxonomy"
taxonomy_dir.mkdir(parents=True, exist_ok=True)

# Base URL for test data
base_url = (
    "https://raw.githubusercontent.com/docling-project/docling/main/tests/data/xbrl/"
)

# Download XBRL instance file
instance_file = data_dir / "mlac-20251231.xml"
if not instance_file.exists():
    print("Downloading XBRL instance file...")
    urllib.request.urlretrieve(f"{base_url}mlac-20251231.xml", instance_file)
    print(f"Downloaded: {instance_file}")

# Download taxonomy files
taxonomy_files = [
    "mlac-20251231.xsd",
    "mlac-20251231_cal.xml",
    "mlac-20251231_def.xml",
    "mlac-20251231_lab.xml",
    "mlac-20251231_pre.xml",
]
print("Downloading taxonomy files...")
for filename in taxonomy_files:
    target_file = taxonomy_dir / filename
    if not target_file.exists():
        urllib.request.urlretrieve(f"{base_url}mlac-taxonomy/{filename}", target_file)
        print(f"  Downloaded: {filename}")

# Download taxonomy package (contains URL mappings for offline parsing)
taxonomy_package = taxonomy_dir / "taxonomy_package.zip"
if not taxonomy_package.exists():
    print("Downloading taxonomy package...")
    urllib.request.urlretrieve(
        f"{base_url}mlac-taxonomy/taxonomy_package.zip", taxonomy_package
    )
    print("  Downloaded: taxonomy_package.zip")
print("\nAll files downloaded successfully!")
from docling.datamodel.backend_options import XBRLBackendOptions
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter, XBRLFormatOption

# Configure XBRL backend options
backend_options = XBRLBackendOptions(
    enable_local_fetch=True,  # Allow reading local taxonomy files
    enable_remote_fetch=False,  # Disable remote fetching for offline operation
    taxonomy=taxonomy_dir,  # Path to local taxonomy directory
)

# Create document converter with XBRL support
converter = DocumentConverter(
    allowed_formats=[InputFormat.XML_XBRL],
    format_options={
        InputFormat.XML_XBRL: XBRLFormatOption(backend_options=backend_options)
    },
)
print("XBRL converter configured successfully!")

# Convert the XBRL document
print(f"Converting XBRL document: {instance_file}")
result = converter.convert(instance_file)
doc = result.document
print("\nConversion successful!")
print(f"Document name: {doc.name}")
print(f"Number of items: {len(list(doc.iterate_items()))}")
from docling_core.types.doc import DocItemLabel

# Count items by type
item_counts = {}
for item, _ in doc.iterate_items():
    label = item.label
    item_counts[label] = item_counts.get(label, 0) + 1

print("Document structure:")
for label, count in sorted(item_counts.items(), key=lambda x: x[1], reverse=True):
    print(f"  {label.value}: {count}")

# Display first few text items
print("Sample text content:\n")
text_count = 0
for item, _ in doc.iterate_items():
    if item.label == DocItemLabel.TEXT and text_count < 3:
        print(f"- {item.text[:200]}..." if len(item.text) > 200 else f"- {item.text}")
        print()
        text_count += 1

# Display sample key-value pairs
graph_data = doc.key_value_items[0].graph
print(f"Total key-value pairs extracted: {len(graph_data.links)}\n")
for link in graph_data.links[:10]:
    source = next(
        item for item in graph_data.cells if item.cell_id == link.source_cell_id
    )
    target = next(
        item for item in graph_data.cells if item.cell_id == link.target_cell_id
    )
    print(f"{source.text} -> {target.text}")
# Export to Markdown
markdown_content = doc.export_to_markdown()
# Display first 2000 characters
print("Markdown export (first 2000 characters):\n")
print(markdown_content[:2000])
print("\n...")
# Save to file
output_md = data_dir / "output.md"
output_md.write_text(markdown_content)
print(f"\nFull markdown saved to: {output_md}")
import json
# Export to JSON
output_json = data_dir / "output.json"
doc.save_as_json(output_json)
print(f"Document exported to JSON: {output_json}")
print(f"File size: {output_json.stat().st_size / 1024:.2f} KB")
What tool did Bob provide to automate the task?
The Agentic Workflow: From Raw Data to Decision-Ready Insights
Bob’s XBRL Document Conversion Agent operates as a specialized “knowledge worker” within a larger AI ecosystem. By leveraging the Model Context Protocol (MCP), the agent doesn’t just perform a task; it provides a live, structured “context window” for Large Language Models (LLMs) to interact with financial reality. Here is the four-stage process that defines its agentic behavior:
- Autonomous Resource Orchestration: Upon activation, the agent independently manages its dependencies. It doesn’t just wait for files; it ensures the environment is “audit-ready” by enabling local resource fetching. By prioritizing a Taxonomy Package, the agent intelligently maps abstract web-based URLs to local definitions, allowing it to function in high-security, offline environments where data privacy is paramount.
"""
XBRL Document Conversion Agent
This module provides a comprehensive agent for converting XBRL (eXtensible Business
Reporting Language) documents using Docling, with support for offline processing,
taxonomy validation, and multiple export formats.
XBRL is a standard XML-based format used globally by companies, regulators, and
financial institutions for exchanging business and financial information in a
structured, machine-readable format.
Author: Generated for XBRL Document Conversion
License: MIT
"""
import logging
from pathlib import Path
from typing import Optional, Dict, List, Any, Union
from dataclasses import dataclass, field
import json
try:
    from docling.datamodel.backend_options import XBRLBackendOptions
    from docling.datamodel.base_models import InputFormat
    from docling.datamodel.document import ConversionResult, DoclingDocument
    from docling.document_converter import DocumentConverter, XBRLFormatOption

    DOCLING_AVAILABLE = True
except ImportError as e:
    DOCLING_AVAILABLE = False
    # Define placeholder types when docling is not available
    ConversionResult = Any
    DoclingDocument = Any
    logging.warning(f"Docling library not available. Install with: pip install docling[xbrl]. Error: {e}")
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
@dataclass
class XBRLConversionConfig:
    """
    Configuration for XBRL document conversion.

    Attributes:
        enable_local_fetch: Enable fetching taxonomy files from local directory
        enable_remote_fetch: Enable fetching taxonomy files from remote URLs
        taxonomy_dir: Path to local taxonomy directory containing schema and linkbase files
        taxonomy_package: Path to taxonomy package ZIP file for offline operation
        output_dir: Directory for saving converted documents
        export_formats: List of export formats (markdown, json, html, text)
    """

    enable_local_fetch: bool = True
    enable_remote_fetch: bool = False
    taxonomy_dir: Optional[Path] = None
    taxonomy_package: Optional[Path] = None
    output_dir: Path = Path("./output")
    export_formats: List[str] = field(default_factory=lambda: ["markdown", "json"])

    def __post_init__(self):
        """Validate configuration after initialization."""
        if self.enable_local_fetch and not self.taxonomy_dir:
            logger.warning("Local fetch enabled but no taxonomy_dir specified")
        if not self.enable_local_fetch and not self.enable_remote_fetch:
            raise ValueError("At least one of enable_local_fetch or enable_remote_fetch must be True")
        # Ensure output directory exists
        self.output_dir.mkdir(parents=True, exist_ok=True)


class XBRLConversionAgent:
    """
    Agent for converting XBRL documents to various formats.

    This agent handles:
    - XBRL instance document parsing
    - Taxonomy validation (local or remote)
    - Metadata extraction
    - Text block extraction
    - Numeric fact extraction
    - Export to multiple formats (Markdown, JSON, HTML, etc.)

    Example:
        >>> config = XBRLConversionConfig(
        ...     taxonomy_dir=Path("./taxonomy"),
        ...     taxonomy_package=Path("./taxonomy/package.zip")
        ... )
        >>> agent = XBRLConversionAgent(config)
        >>> result = agent.convert_document("report.xml")
        >>> agent.export_document(result.document, "output")
    """

    def __init__(self, config: XBRLConversionConfig):
        """
        Initialize the XBRL conversion agent.

        Args:
            config: Configuration object for XBRL conversion

        Raises:
            ImportError: If Docling library is not available
            ValueError: If configuration is invalid
        """
        if not DOCLING_AVAILABLE:
            raise ImportError(
                "Docling library is required. Install with: pip install docling[xbrl]"
            )
        self.config = config
        self.converter = self._setup_converter()
        logger.info("XBRL Conversion Agent initialized successfully")

    def _setup_converter(self) -> DocumentConverter:
        """
        Set up the DocumentConverter with XBRL backend configuration.

        Returns:
            Configured DocumentConverter instance
        """
        # Configure XBRL backend options
        # Use taxonomy_package if available, otherwise use taxonomy_dir
        taxonomy_path = None
        if self.config.taxonomy_package and self.config.taxonomy_package.exists():
            taxonomy_path = self.config.taxonomy_package
        elif self.config.taxonomy_dir and self.config.taxonomy_dir.exists():
            taxonomy_path = self.config.taxonomy_dir

        # Create backend options
        backend_options = XBRLBackendOptions(
            enable_local_fetch=self.config.enable_local_fetch,
            enable_remote_fetch=self.config.enable_remote_fetch,
            taxonomy=taxonomy_path
        )

        # Create format options with XBRLFormatOption wrapper
        # This is the correct way according to Docling's test suite
        format_options = {
            InputFormat.XML_XBRL: XBRLFormatOption(backend_options=backend_options)
        }

        # Create and return converter
        converter = DocumentConverter(
            allowed_formats=[InputFormat.XML_XBRL],
            format_options=format_options
        )
        logger.info("XBRL converter configured successfully")
        return converter

    def convert_document(
        self,
        xbrl_path: Union[str, Path]
    ) -> ConversionResult:
        """
        Convert an XBRL instance document to DoclingDocument format.

        This method:
        1. Parses the XBRL instance file
        2. Validates against the taxonomy
        3. Extracts metadata, text blocks, and numeric facts
        4. Returns a unified DoclingDocument representation

        Args:
            xbrl_path: Path to XBRL instance document (.xml file)

        Returns:
            ConversionResult containing the converted document and metadata

        Raises:
            FileNotFoundError: If XBRL file doesn't exist
            ValueError: If conversion fails
        """
        xbrl_path = Path(xbrl_path)
        if not xbrl_path.exists():
            raise FileNotFoundError(f"XBRL file not found: {xbrl_path}")

        logger.info(f"Converting XBRL document: {xbrl_path}")
        try:
            result = self.converter.convert(xbrl_path)
            if result.status.name != "SUCCESS":
                raise ValueError(f"Conversion failed with status: {result.status}")
            logger.info(f"Conversion successful! Document: {result.document.name}")
            logger.info(f"Number of items: {len(list(result.document.iterate_items()))}")
            return result
        except Exception as e:
            logger.error(f"Error converting XBRL document: {e}")
            raise
    def get_document_structure(self, document: DoclingDocument) -> Dict[str, int]:
        """
        Analyze and return the structure of a converted document.

        Args:
            document: Converted DoclingDocument

        Returns:
            Dictionary mapping item types to their counts
        """
        structure = {}
        for item, _ in document.iterate_items():
            item_type = type(item).__name__
            structure[item_type] = structure.get(item_type, 0) + 1
        logger.info(f"Document structure: {structure}")
        return structure

    def extract_key_value_pairs(self, document: DoclingDocument) -> List[Dict[str, Any]]:
        """
        Extract numeric facts as key-value pairs from XBRL document.

        XBRL numeric facts are represented as key-value pairs in the document.
        This method extracts all such pairs for analysis.

        Args:
            document: Converted DoclingDocument

        Returns:
            List of dictionaries containing key-value pairs with metadata
        """
        key_value_pairs = []
        # Extract from key-value regions
        for kv_item in document.key_value_items:
            key_value_pairs.append({
                "key": kv_item.label if hasattr(kv_item, 'label') else str(kv_item),
                "value": kv_item.text if hasattr(kv_item, 'text') else None,
                "type": "key_value_region"
            })
        logger.info(f"Extracted {len(key_value_pairs)} key-value pairs")
        return key_value_pairs

    def extract_text_content(
        self,
        document: DoclingDocument,
        max_items: int = 10
    ) -> List[str]:
        """
        Extract sample text content from the document.

        Args:
            document: Converted DoclingDocument
            max_items: Maximum number of text items to extract

        Returns:
            List of text strings
        """
        text_items = []
        for item in document.texts[:max_items]:
            if hasattr(item, 'text') and item.text:
                # Truncate long text for readability
                text = item.text[:200] + "..." if len(item.text) > 200 else item.text
                text_items.append(text)
        logger.info(f"Extracted {len(text_items)} text items")
        return text_items
    def export_to_markdown(
        self,
        document: DoclingDocument,
        output_path: Optional[Path] = None
    ) -> str:
        """
        Export document to Markdown format.

        Args:
            document: Converted DoclingDocument
            output_path: Optional path to save the markdown file

        Returns:
            Markdown string
        """
        markdown = document.export_to_markdown()
        if output_path:
            output_path = Path(output_path)
            output_path.parent.mkdir(parents=True, exist_ok=True)
            output_path.write_text(markdown, encoding='utf-8')
            logger.info(f"Markdown exported to: {output_path}")
        return markdown

    def export_to_json(
        self,
        document: DoclingDocument,
        output_path: Optional[Path] = None
    ) -> str:
        """
        Export document to JSON format.

        Args:
            document: Converted DoclingDocument
            output_path: Optional path to save the JSON file

        Returns:
            JSON string
        """
        # Use the document's dict representation and convert to JSON
        json_str = json.dumps(document.model_dump(), indent=2, ensure_ascii=False)
        if output_path:
            output_path = Path(output_path)
            output_path.parent.mkdir(parents=True, exist_ok=True)
            output_path.write_text(json_str, encoding='utf-8')
            # Calculate file size
            size_kb = len(json_str.encode('utf-8')) / 1024
            logger.info(f"JSON exported to: {output_path} ({size_kb:.2f} KB)")
        return json_str

    def export_to_html(
        self,
        document: DoclingDocument,
        output_path: Optional[Path] = None
    ) -> str:
        """
        Export document to HTML format.

        Args:
            document: Converted DoclingDocument
            output_path: Optional path to save the HTML file

        Returns:
            HTML string
        """
        html = document.export_to_html()
        if output_path:
            output_path = Path(output_path)
            output_path.parent.mkdir(parents=True, exist_ok=True)
            output_path.write_text(html, encoding='utf-8')
            logger.info(f"HTML exported to: {output_path}")
        return html

    def export_document(
        self,
        document: DoclingDocument,
        base_name: str,
        formats: Optional[List[str]] = None
    ) -> Dict[str, Path]:
        """
        Export document to multiple formats.

        Args:
            document: Converted DoclingDocument
            base_name: Base name for output files (without extension)
            formats: List of formats to export (default: from config)

        Returns:
            Dictionary mapping format names to output file paths
        """
        formats = formats or self.config.export_formats
        output_files = {}
        for fmt in formats:
            output_path = self.config.output_dir / f"{base_name}.{fmt}"
            if fmt in ("markdown", "md"):
                self.export_to_markdown(document, output_path)
                output_files["markdown"] = output_path
            elif fmt == "json":
                self.export_to_json(document, output_path)
                output_files["json"] = output_path
            elif fmt == "html":
                self.export_to_html(document, output_path)
                output_files["html"] = output_path
            else:
                logger.warning(f"Unsupported export format: {fmt}")
        return output_files
    def process_xbrl_file(
        self,
        xbrl_path: Union[str, Path],
        output_base_name: Optional[str] = None,
        analyze: bool = True
    ) -> Dict[str, Any]:
        """
        Complete processing pipeline for an XBRL file.

        This is a convenience method that:
        1. Converts the XBRL document
        2. Analyzes the structure (optional)
        3. Exports to configured formats

        Args:
            xbrl_path: Path to XBRL instance document
            output_base_name: Base name for output files (default: input filename)
            analyze: Whether to perform structure analysis

        Returns:
            Dictionary containing conversion results and analysis
        """
        xbrl_path = Path(xbrl_path)
        if not output_base_name:
            output_base_name = xbrl_path.stem

        # Convert document
        result = self.convert_document(xbrl_path)
        document = result.document

        # Prepare results
        results = {
            "input_file": str(xbrl_path),
            "document_name": document.name,
            "conversion_status": result.status.name,
        }

        # Analyze structure if requested
        if analyze:
            results["structure"] = self.get_document_structure(document)
            results["key_value_pairs"] = self.extract_key_value_pairs(document)
            results["sample_text"] = self.extract_text_content(document, max_items=5)

        # Export to configured formats
        output_files = self.export_document(document, output_base_name)
        results["output_files"] = {k: str(v) for k, v in output_files.items()}
        logger.info(f"Processing complete for {xbrl_path}")
        return results


def create_agent_from_taxonomy(
    taxonomy_dir: Union[str, Path],
    taxonomy_package: Optional[Union[str, Path]] = None,
    output_dir: Union[str, Path] = "./output",
    enable_remote_fetch: bool = False
) -> XBRLConversionAgent:
    """
    Factory function to create an XBRL agent with taxonomy configuration.

    Args:
        taxonomy_dir: Path to directory containing taxonomy files
        taxonomy_package: Optional path to taxonomy package ZIP
        output_dir: Directory for output files
        enable_remote_fetch: Whether to allow remote taxonomy fetching

    Returns:
        Configured XBRLConversionAgent instance

    Example:
        >>> agent = create_agent_from_taxonomy(
        ...     taxonomy_dir="./data/xbrl/mlac-taxonomy",
        ...     taxonomy_package="./data/xbrl/mlac-taxonomy/taxonomy_package.zip"
        ... )
        >>> result = agent.process_xbrl_file("./data/xbrl/mlac-20251231.xml")
    """
    config = XBRLConversionConfig(
        enable_local_fetch=True,
        enable_remote_fetch=enable_remote_fetch,
        taxonomy_dir=Path(taxonomy_dir),
        taxonomy_package=Path(taxonomy_package) if taxonomy_package else None,
        output_dir=Path(output_dir)
    )
    return XBRLConversionAgent(config)


if __name__ == "__main__":
    # Example usage
    print("XBRL Document Conversion Agent")
    print("=" * 50)
    print("\nThis module provides tools for converting XBRL documents.")
    print("Import and use the XBRLConversionAgent class in your code.")
    print("\nExample:")
    print("  from xbrl_agent import create_agent_from_taxonomy")
    print("  agent = create_agent_from_taxonomy('./taxonomy')")
    print("  result = agent.process_xbrl_file('report.xml')")
# Made with Bob
- Intelligent Validation & Parsing: When an XBRL instance document is submitted, the agent acts as a digital auditor. It doesn’t merely “read” the XML; it validates the data against the specific financial taxonomy. Using the Docling backend, it deconstructs the file into three critical streams — metadata, narrative text blocks, and numeric facts — ensuring that every data point is contextually accurate before it ever reaches the user.
- Unified Context Synthesis: The agent’s “brain” (the Docling Converter) takes these disparate financial streams and synthesizes them into a Unified DoclingDocument. This stage is crucial: it transforms raw, nested XML tags into a clean, hierarchical structure. By flattening complex numeric facts into machine-readable key-value pairs, the agent removes the technical “noise” that typically causes AI models to hallucinate during financial analysis.
- MCP-Enabled Interactivity: Finally, the agent exposes this processed intelligence via an MCP Server. This is where the magic happens: instead of the LLM receiving a massive, unreadable wall of XML, the agent serves up a precise “Context Retrieval” feed. Whether through a Web UI or a direct model connection, the agent stands ready to answer natural language queries like “What was the net change in cash?” by pulling exactly the right structured facts from its memory.

The MCP server provides 6 tools for XBRL conversion:
- xbrl_create_agent — Create conversion agent with taxonomy
- xbrl_convert_document — Convert XBRL document
- xbrl_analyze_structure — Analyze document structure
- xbrl_extract_key_values — Extract numeric facts
- xbrl_extract_text — Extract text content
- xbrl_export_document — Export to formats
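As a sketch of how an MCP client might drive these tools in sequence, here are illustrative tool-call payloads shaped to match the server's input schemas; the `agent_id` and file paths are placeholders, not values from a real session:

```python
import json

# Hypothetical tool-call payloads for the XBRL MCP server. A real MCP client
# would send these as arguments of a tools/call request; here we only build
# and inspect the JSON. The agent must be created before any conversion call.
create_call = {
    "name": "xbrl_create_agent",
    "arguments": {
        "agent_id": "mlac",  # placeholder identifier reused by later calls
        "taxonomy_dir": "./xbrl_data/taxonomy",
        "taxonomy_package": "./xbrl_data/taxonomy/taxonomy_package.zip",
        "enable_remote_fetch": False,  # stay fully offline
    },
}
convert_call = {
    "name": "xbrl_convert_document",
    "arguments": {
        "agent_id": "mlac",  # must match the agent created above
        "xbrl_path": "./xbrl_data/mlac-20251231.xml",
    },
}

print(json.dumps([create_call, convert_call], indent=2))
```

The two-step shape mirrors the server's design: `xbrl_create_agent` returns an `agent_id`, and every subsequent tool call carries that id so the server can route it to the right cached agent.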
#!/usr/bin/env python3
"""
XBRL MCP Server

Model Context Protocol (MCP) server for XBRL document conversion.
This server exposes XBRL conversion capabilities as MCP tools that can be
used by AI assistants and other MCP clients.

Usage:
    python xbrl_mcp_server.py

Or as an MCP server in your MCP configuration:
    {
        "mcpServers": {
            "xbrl": {
                "command": "python",
                "args": ["/path/to/xbrl_mcp_server.py"]
            }
        }
    }
"""
import asyncio
import json
import logging
from pathlib import Path
from typing import Any, Dict, List, Optional
try:
    from mcp.server import Server
    from mcp.server.stdio import stdio_server
    from mcp.types import Tool, TextContent, ImageContent, EmbeddedResource

    MCP_AVAILABLE = True
except ImportError:
    MCP_AVAILABLE = False
    logging.warning("MCP library not available. Install with: pip install mcp")

from xbrl_agent import (
    XBRLConversionAgent,
    XBRLConversionConfig,
    create_agent_from_taxonomy,
    DOCLING_AVAILABLE
)
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class XBRLMCPServer:
    """
    MCP Server for XBRL document conversion.

    Provides tools for:
    - Converting XBRL documents
    - Analyzing XBRL structure
    - Extracting data from XBRL
    - Exporting to various formats
    """

    def __init__(self):
        """Initialize the XBRL MCP server."""
        if not MCP_AVAILABLE:
            raise ImportError("MCP library required. Install with: pip install mcp")
        if not DOCLING_AVAILABLE:
            raise ImportError("Docling library required. Install with: pip install docling[xbrl]")
        self.server = Server("xbrl-converter")
        self.agents: Dict[str, XBRLConversionAgent] = {}
        self._setup_tools()
        logger.info("XBRL MCP Server initialized")

    def _setup_tools(self):
        """Register all MCP tools."""

        @self.server.list_tools()
        async def list_tools() -> List[Tool]:
            """List all available XBRL conversion tools."""
            return [
                Tool(
                    name="xbrl_create_agent",
                    description=(
                        "Create an XBRL conversion agent with taxonomy configuration. "
                        "This must be called before using other XBRL tools. "
                        "Returns an agent_id to use in subsequent calls."
                    ),
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "agent_id": {
                                "type": "string",
                                "description": "Unique identifier for this agent"
                            },
                            "taxonomy_dir": {
                                "type": "string",
                                "description": "Path to directory containing taxonomy files"
                            },
                            "taxonomy_package": {
                                "type": "string",
                                "description": "Optional path to taxonomy package ZIP file"
                            },
                            "output_dir": {
                                "type": "string",
                                "description": "Directory for output files (default: ./output)",
                                "default": "./output"
                            },
                            "enable_remote_fetch": {
                                "type": "boolean",
                                "description": "Allow fetching taxonomy from remote URLs",
                                "default": False
                            }
                        },
                        "required": ["agent_id", "taxonomy_dir"]
                    }
                ),
                Tool(
                    name="xbrl_convert_document",
                    description=(
                        "Convert an XBRL instance document to DoclingDocument format. "
                        "Parses the XBRL file, validates against taxonomy, and extracts "
                        "metadata, text blocks, and numeric facts."
                    ),
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "agent_id": {
                                "type": "string",
                                "description": "Agent ID from xbrl_create_agent"
                            },
                            "xbrl_path": {
                                "type": "string",
                                "description": "Path to XBRL instance document (.xml file)"
                            },
                            "output_base_name": {
                                "type": "string",
                                "description": "Base name for output files (optional)"
                            },
                            "analyze": {
                                "type": "boolean",
                                "description": "Perform structure analysis",
                                "default": True
                            }
                        },
                        "required": ["agent_id", "xbrl_path"]
                    }
                ),
                Tool(
                    name="xbrl_analyze_structure",
                    description=(
                        "Analyze the structure of a converted XBRL document. "
                        "Returns counts of different item types (text, tables, etc.)."
                    ),
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "agent_id": {
                                "type": "string",
                                "description": "Agent ID from xbrl_create_agent"
                            },
                            "xbrl_path": {
                                "type": "string",
                                "description": "Path to XBRL instance document"
                            }
                        },
                        "required": ["agent_id", "xbrl_path"]
                    }
                ),
                Tool(
                    name="xbrl_extract_key_values",
                    description=(
                        "Extract numeric facts as key-value pairs from XBRL document. "
                        "Returns all XBRL facts with their values and metadata."
                    ),
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "agent_id": {
                                "type": "string",
                                "description": "Agent ID from xbrl_create_agent"
                            },
                            "xbrl_path": {
                                "type": "string",
                                "description": "Path to XBRL instance document"
                            },
                            "max_items": {
                                "type": "integer",
                                "description": "Maximum number of items to return",
                                "default": 100
                            }
                        },
                        "required": ["agent_id", "xbrl_path"]
                    }
                ),
                Tool(
                    name="xbrl_extract_text",
                    description=(
                        "Extract text content from XBRL document. "
                        "Returns text blocks and narrative content."
                    ),
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "agent_id": {
                                "type": "string",
                                "description": "Agent ID from xbrl_create_agent"
                            },
                            "xbrl_path": {
                                "type": "string",
                                "description": "Path to XBRL instance document"
                            },
                            "max_items": {
                                "type": "integer",
                                "description": "Maximum number of text items to return",
                                "default": 10
                            }
                        },
                        "required": ["agent_id", "xbrl_path"]
                    }
                ),
                Tool(
                    name="xbrl_export_document",
                    description=(
                        "Export XBRL document to specified formats. "
                        "Supports markdown, json, and html formats."
                    ),
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "agent_id": {
                                "type": "string",
                                "description": "Agent ID from xbrl_create_agent"
                            },
                            "xbrl_path": {
                                "type": "string",
                                "description": "Path to XBRL instance document"
                            },
                            "output_base_name": {
                                "type": "string",
                                "description": "Base name for output files"
                            },
                            "formats": {
                                "type": "array",
                                "items": {"type": "string"},
                                "description": "Export formats (markdown, json, html)",
                                "default": ["markdown", "json"]
                            }
                        },
                        "required": ["agent_id", "xbrl_path", "output_base_name"]
                    }
                ),
            ]
        @self.server.call_tool()
        async def call_tool(name: str, arguments: Any) -> List[TextContent]:
            """Handle tool calls."""
            try:
                if name == "xbrl_create_agent":
                    return await self._create_agent(arguments)
                elif name == "xbrl_convert_document":
                    return await self._convert_document(arguments)
                elif name == "xbrl_analyze_structure":
                    return await self._analyze_structure(arguments)
                elif name == "xbrl_extract_key_values":
                    return await self._extract_key_values(arguments)
                elif name == "xbrl_extract_text":
                    return await self._extract_text(arguments)
                elif name == "xbrl_export_document":
                    return await self._export_document(arguments)
                else:
                    return [TextContent(
                        type="text",
                        text=f"Unknown tool: {name}"
                    )]
            except Exception as e:
                logger.error(f"Error in tool {name}: {e}")
                return [TextContent(
                    type="text",
                    text=f"Error: {str(e)}"
                )]
    async def _create_agent(self, args: Dict[str, Any]) -> List[TextContent]:
        """Create an XBRL conversion agent."""
        agent_id = args["agent_id"]
        if agent_id in self.agents:
            return [TextContent(
                type="text",
                text=f"Agent '{agent_id}' already exists. Use a different agent_id."
            )]
        try:
            agent = create_agent_from_taxonomy(
                taxonomy_dir=args["taxonomy_dir"],
                taxonomy_package=args.get("taxonomy_package"),
                output_dir=args.get("output_dir", "./output"),
                enable_remote_fetch=args.get("enable_remote_fetch", False)
            )
            self.agents[agent_id] = agent
            result = {
                "status": "success",
                "agent_id": agent_id,
                "message": f"XBRL agent '{agent_id}' created successfully",
                "config": {
                    "taxonomy_dir": args["taxonomy_dir"],
                    "output_dir": args.get("output_dir", "./output"),
                    "enable_remote_fetch": args.get("enable_remote_fetch", False)
                }
            }
            return [TextContent(
                type="text",
                text=json.dumps(result, indent=2)
            )]
        except Exception as e:
            return [TextContent(
                type="text",
                text=json.dumps({
                    "status": "error",
                    "message": str(e)
                }, indent=2)
            )]

    async def _convert_document(self, args: Dict[str, Any]) -> List[TextContent]:
        """Convert an XBRL document."""
        agent_id = args["agent_id"]
        if agent_id not in self.agents:
            return [TextContent(
                type="text",
                text=f"Agent '{agent_id}' not found. Create it first with xbrl_create_agent."
            )]
        agent = self.agents[agent_id]
        try:
            result = agent.process_xbrl_file(
                xbrl_path=args["xbrl_path"],
                output_base_name=args.get("output_base_name"),
                analyze=args.get("analyze", True)
            )
            return [TextContent(
                type="text",
                text=json.dumps(result, indent=2)
            )]
        except Exception as e:
            return [TextContent(
                type="text",
                text=json.dumps({
                    "status": "error",
                    "message": str(e)
                }, indent=2)
            )]

    async def _analyze_structure(self, args: Dict[str, Any]) -> List[TextContent]:
        """Analyze XBRL document structure."""
        agent_id = args["agent_id"]
        if agent_id not in self.agents:
            return [TextContent(
                type="text",
                text=f"Agent '{agent_id}' not found."
            )]
        agent = self.agents[agent_id]
        try:
            conv_result = agent.convert_document(args["xbrl_path"])
            structure = agent.get_document_structure(conv_result.document)
            result = {
                "status": "success",
                "document_name": conv_result.document.name,
                "structure": structure,
                "total_items": sum(structure.values())
            }
            return [TextContent(
                type="text",
                text=json.dumps(result, indent=2)
            )]
        except Exception as e:
            return [TextContent(
                type="text",
                text=json.dumps({
                    "status": "error",
                    "message": str(e)
                }, indent=2)
            )]

    async def _extract_key_values(self, args: Dict[str, Any]) -> List[TextContent]:
        """Extract key-value pairs from XBRL document."""
        agent_id = args["agent_id"]
        if agent_id not in self.agents:
            return [TextContent(
                type="text",
                text=f"Agent '{agent_id}' not found."
            )]
        agent = self.agents[agent_id]
        try:
            conv_result = agent.convert_document(args["xbrl_path"])
            kv_pairs = agent.extract_key_value_pairs(conv_result.document)
            max_items = args.get("max_items", 100)
            kv_pairs = kv_pairs[:max_items]
            result = {
                "status": "success",
                "document_name": conv_result.document.name,
                "key_value_pairs": kv_pairs,
                "count": len(kv_pairs)
            }
            return [TextContent(
                type="text",
                text=json.dumps(result, indent=2)
            )]
        except Exception as e:
            return [TextContent(
                type="text",
                text=json.dumps({
                    "status": "error",
                    "message": str(e)
                }, indent=2)
            )]

    async def _extract_text(self, args: Dict[str, Any]) -> List[TextContent]:
        """Extract text content from XBRL document."""
        agent_id = args["agent_id"]
        if agent_id not in self.agents:
            return [TextContent(
                type="text",
                text=f"Agent '{agent_id}' not found."
            )]
        agent = self.agents[agent_id]
        try:
            conv_result = agent.convert_document(args["xbrl_path"])
            max_items = args.get("max_items", 10)
            texts = agent.extract_text_content(conv_result.document, max_items)
            result = {
                "status": "success",
                "document_name": conv_result.document.name,
                "text_items": texts,
                "count": len(texts)
            }
            return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
except Exception as e:
return [TextContent(
type="text",
text=json.dumps({
"status": "error",
"message": str(e)
}, indent=2)
)]
async def _export_document(self, args: Dict[str, Any]) -> List[TextContent]:
"""Export XBRL document to specified formats."""
agent_id = args["agent_id"]
if agent_id not in self.agents:
return [TextContent(
type="text",
text=f"Agent '{agent_id}' not found."
)]
agent = self.agents[agent_id]
try:
conv_result = agent.convert_document(args["xbrl_path"])
output_files = agent.export_document(
document=conv_result.document,
base_name=args["output_base_name"],
formats=args.get("formats", ["markdown", "json"])
)
result = {
"status": "success",
"document_name": conv_result.document.name,
"output_files": {k: str(v) for k, v in output_files.items()}
}
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
except Exception as e:
return [TextContent(
type="text",
text=json.dumps({
"status": "error",
"message": str(e)
}, indent=2)
)]
async def run(self):
"""Run the MCP server."""
async with stdio_server() as (read_stream, write_stream):
logger.info("XBRL MCP Server running on stdio")
await self.server.run(
read_stream,
write_stream,
self.server.create_initialization_options()
)
async def main():
"""Main entry point for the MCP server."""
try:
server = XBRLMCPServer()
await server.run()
except ImportError as e:
logger.error(f"Missing dependency: {e}")
logger.error("Install with: pip install mcp docling[xbrl]")
except Exception as e:
logger.error(f"Server error: {e}")
raise
if __name__ == "__main__":
asyncio.run(main())
# Made with Bob
- Simplified User Interface: Last but not least, an intuitive UI makes it easy to interact with the mechanism and test it quickly.
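Every handler in the listing above returns the same JSON envelope: `"status": "success"` plus payload fields on the happy path, `"status": "error"` plus a message otherwise. A minimal standalone sketch of that convention (the helper names are mine, not the server's):

```python
import json


def success(**fields) -> str:
    """Mirror the server's success envelope: status plus payload fields."""
    return json.dumps({"status": "success", **fields}, indent=2)


def error(exc: Exception) -> str:
    """Mirror the server's error envelope: status plus a message string."""
    return json.dumps({"status": "error", "message": str(exc)}, indent=2)


print(success(agent_id="demo", message="XBRL agent 'demo' created successfully"))
print(error(ValueError("taxonomy directory not found")))
```

Keeping the envelope uniform matters for the assistant on the other end of the wire: it can branch on `status` without caring which of the six tools produced the reply.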
Bob’s XBRL Agent: The MCP Toolset Cheat Sheet
By exposing these six specialized tools through the Model Context Protocol, Bob has given the AI assistant a “financial expert” toolkit. Instead of struggling with XML, the assistant simply calls the appropriate tool to get clean, validated data.
| Tool Name | Agentic Responsibility | Business Value |
| ----------------------------- | -------------------------- | ------------------------------------------------------------ |
| **`xbrl_create_agent`** | **Resource Orchestration** | Sets up the secure, offline environment by loading specific legal and accounting taxonomies. |
| **`xbrl_convert_document`** | **Context Synthesis** | The "heavy lifter" that transforms raw `.xml` into a unified, machine-readable `DoclingDocument`. |
| **`xbrl_analyze_structure`** | **Structural Auditing** | Provides a high-level map of the filing (e.g., identifying where the Balance Sheet or Risk Factors are). |
| **`xbrl_extract_key_values`** | **Fact Retrieval** | Pulls precise numeric facts (Assets, Liabilities, Revenue) as ready-to-use key-value pairs. |
| **`xbrl_extract_text`** | **Narrative Extraction** | Isolates the "Management Discussion" and "Footnotes" from the numeric data for qualitative analysis. |
| **`xbrl_export_document`** | **Output Delivery** | Finalizes the process by generating clean Markdown or JSON for reports and downstream dashboards. |
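Reading the table together with the handlers above, a typical assistant session chains the tools in order: create the agent once, then convert, extract, and export against it. A sketch of that call sequence (the paths and agent id are hypothetical; the argument names match the server's handlers):

```python
# Hypothetical workflow an assistant might issue over MCP.
# Argument names mirror the server's handlers; paths and ids are made up.
workflow = [
    ("xbrl_create_agent", {
        "agent_id": "sec-filings",
        "taxonomy_dir": "./taxonomies",      # local schemas, offline-first
        "enable_remote_fetch": False,        # never reach out to the network
    }),
    ("xbrl_convert_document", {
        "agent_id": "sec-filings",
        "xbrl_path": "./filings/annual_report.xml",
        "analyze": True,
    }),
    ("xbrl_extract_key_values", {
        "agent_id": "sec-filings",
        "xbrl_path": "./filings/annual_report.xml",
        "max_items": 50,                     # handler default is 100
    }),
    ("xbrl_export_document", {
        "agent_id": "sec-filings",
        "xbrl_path": "./filings/annual_report.xml",
        "output_base_name": "annual_report",
        "formats": ["markdown", "json"],
    }),
]

# Every handler begins by resolving the agent, so agent_id is mandatory.
for tool, args in workflow:
    assert "agent_id" in args
```

Note that the `agent_id` threads through every call: it is the handle that lets one long-running server juggle several taxonomy configurations at once.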
Conclusion: The Impact of Agentic XBRL Conversion
By moving from a static conversion script to a Model Context Protocol (MCP) enabled agent, Bob has transformed a high-friction financial task into a seamless, conversational experience. This architecture represents a shift in how we handle specialized data: instead of forcing the AI to learn the intricacies of XBRL, we provide it with an “expert tool” that handles the complexity on its behalf.
For Bob’s organization, the results are clear:
- Reduced Hallucination Risk: By providing the LLM with structured facts rather than raw XML, the agent keeps the AI’s answers grounded in validated financial data.
- Enhanced Security: With local taxonomy fetching and offline processing, sensitive internal or pre-release filings never leave the secure environment.
- Plug-and-Play Intelligence: Because it uses the MCP standard, this agent can be plugged into any modern AI assistant (like watsonx Orchestrate, Claude, ChatGPT, or custom internal bots) without rewriting a single line of code.
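Because the server speaks MCP over stdio, registering it with a desktop assistant is typically a one-stanza configuration. A sketch in the `mcpServers` style used by clients such as Claude Desktop (the exact shape varies by client, and the filename here is hypothetical):

```json
{
  "mcpServers": {
    "xbrl": {
      "command": "python",
      "args": ["xbrl_mcp_server.py"]
    }
  }
}
```

Once registered, the six tools appear in the assistant's toolbox automatically, with no prompt engineering required.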
Ultimately, Bob’s project demonstrates that the future of document processing isn’t just about “reading” files — it’s about building agentic bridges that translate the world’s most complex data into a language both humans and machines can finally agree on.
>>> Thanks for reading <<<
Links and resources
- Blog Post’s Code Repository: https://github.com/aairom/docling-xbrl-test
- Docling Project: https://github.com/docling-project
- Docling Documentation: https://docling-project.github.io/docling/
- Docling XBRL Document Conversion: https://docling-project.github.io/docling/examples/xbrl_conversion/#export-to-json
- Docling’s repository “test_backend_xbrl.py”: https://github.com/docling-project/docling/blob/main/tests/test_backend_xbrl.py
- Test data for XBRL Conversion: https://github.com/docling-project/docling/tree/main/tests/data/xbrl
- IBM Project Bob (GA tomorrow March 24th): https://www.ibm.com/products/bob