Today I want to share the journey of building a complete Quran Search MCP (Model Context Protocol) server that bridges AI assistants with powerful Quran search capabilities. This project showcases how to transform an existing search engine into an MCP-compatible server while maintaining clean architecture.
The Challenge
The goal was to create an MCP server that exposes the excellent quran-search-engine by Adel Benyahia to AI assistants through the Model Context Protocol. The original engine is a powerful TypeScript library for searching Quranic text with morphological analysis, fuzzy matching, and advanced features.
Architecture Decisions
Monorepo Structure
I chose a monorepo approach to separate concerns:
```
quran-search-MCP/
├── packages/
│   └── quran-search-engine/    # Original search engine
├── mcp-server/                 # MCP server adaptation
│   ├── src/                    # Server source code
│   │   ├── data/               # Data caching module
│   │   ├── handlers/           # Search handlers
│   │   ├── tools/              # MCP tools
│   │   ├── types/              # Type definitions
│   │   └── index.ts            # Server entry point
│   ├── dist/                   # Built server files
│   ├── package.json            # Server package config
│   └── tsconfig.json           # TypeScript config
├── README.md                   # Project documentation
├── CHANGELOG.md                # Version history
└── package.json                # Workspace configuration
```
Why this structure?
- Clear separation: Engine and server concerns are isolated
- Reusability: Engine can be used independently
- Maintainability: Each package has its own lifecycle
- Testing: Engine tests run separately from server logic
Data Loading Strategy
The MCP server needed to load three key datasets:
```typescript
// Data caching implementation
let dataCache: DataCache;

export const loadDataCache = async (): Promise<DataCache> => {
  const [quranData, morphologyMap, wordMap] = await Promise.all([
    loadQuranData(),
    loadMorphology(),
    loadWordMap(),
  ]);

  dataCache = {
    quranData,     // Complete Quranic verses
    morphologyMap, // Morphological analysis per verse
    wordMap,       // Word-to-verse mappings
  };

  return dataCache;
};
```
Performance benefits:
- Single load: All data loaded once during server bootstrap
- Memory efficiency: Data cached for fast access
- Parallel loading: All three datasets loaded concurrently
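The single-load guarantee comes from a module-level singleton guard around that loader. Here is a minimal, self-contained sketch of the pattern; the loader functions and data are stand-ins for the real `loadQuranData`/`loadMorphology`/`loadWordMap`, and `getDataCache` is an illustrative name, not the package's actual API:

```typescript
// Minimal sketch of the singleton cache, with stand-in loaders.
// Only the guard pattern is the point; types and data are illustrative.
interface DataCache {
  quranData: string[];
  morphologyMap: Map<number, string>;
  wordMap: Map<string, number[]>;
}

let cache: DataCache | null = null;
let loadCount = 0; // instrumentation to show the loaders run only once

async function loadQuranData(): Promise<string[]> {
  loadCount++;
  return ['بسم الله الرحمن الرحيم'];
}
async function loadMorphology(): Promise<Map<number, string>> {
  return new Map([[1, 'ism (noun)']]);
}
async function loadWordMap(): Promise<Map<string, number[]>> {
  return new Map([['الرحمن', [1]]]);
}

export async function getDataCache(): Promise<DataCache> {
  if (cache) return cache; // repeat calls reuse the bootstrap-time load
  const [quranData, morphologyMap, wordMap] = await Promise.all([
    loadQuranData(),
    loadMorphology(),
    loadWordMap(),
  ]);
  cache = { quranData, morphologyMap, wordMap };
  return cache;
}
```

One refinement worth considering: caching the *promise* rather than the resolved value would also protect against two concurrent first calls triggering a double load.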
MCP Implementation
Server Setup
The MCP server uses the official MCP SDK:
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';

async function bootstrap() {
  await loadDataCache();

  const server = new McpServer({
    name: 'quran-search-engine-mcp',
    version: '0.1.0',
  });

  registerSearchTools(server);

  const transport = new StdioServerTransport();
  await server.connect(transport);

  return server;
}
```
Tool Registration
The search tool is registered with Zod schema validation:
```typescript
import { z } from 'zod';

server.registerTool(
  'search',
  {
    title: 'search',
    description: 'Search the Quran with exact, fuzzy, or prefix matching',
    // The MCP SDK expects a raw Zod shape here, not a z.object() wrapper
    inputSchema: {
      query: z.string(),
      options: z
        .object({
          limit: z.number().optional(),
          matchType: z.enum(['exact', 'fuzzy', 'prefix']).optional(),
          includeMorphology: z.boolean().optional(),
        })
        .optional(),
    },
  },
  async ({ query, options }) => {
    // Search implementation
  }
);
```
Dual Export Strategy
The package supports both CLI and library usage:
CLI Usage
```bash
npx quran-search-mcp
```
Library Usage
```typescript
import { createServer } from 'quran-search-mcp';

const server = await createServer();
```
Subpath Exports
```json
{
  "exports": {
    ".": {
      "types": "./mcp-server/dist/index.d.ts",
      "import": "./mcp-server/dist/index.js",
      "require": "./mcp-server/dist/index.js"
    },
    "./engine": {
      "types": "./packages/quran-search-engine/dist/index.d.ts",
      "import": "./packages/quran-search-engine/dist/index.mjs",
      "require": "./packages/quran-search-engine/dist/index.js"
    }
  }
}
```
Build Process
TypeScript Compilation
Both packages use TypeScript but with different build tools:
Engine: Uses tsup for bundling with multiple outputs
```json
{
  "scripts": {
    "build": "tsup"
  }
}
```
MCP Server: Uses tsc for direct compilation
```json
{
  "scripts": {
    "build": "tsc"
  }
}
```
Workspace Scripts
The root package orchestrates both builds:
```json
{
  "scripts": {
    "build": "pnpm -C packages/quran-search-engine build && pnpm -C mcp-server build",
    "build:engine": "pnpm -C packages/quran-search-engine build",
    "build:mcp": "pnpm -C mcp-server build",
    "prepublishOnly": "pnpm run build"
  }
}
```
Key Features Implemented
Search Capabilities
- Exact Search: Find exact Arabic text matches
- Fuzzy Search: Find similar words and phrases
- Prefix Search: Find words starting with specific prefixes
- Morphological Analysis: Search by lemma and root words
- Pagination: Control result sets with limit and offset
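A single tool call can combine these features. The shape below mirrors the Zod schema from the tool registration, with illustrative values; the `offset` field is assumed from the pagination feature above and may not appear in the registered schema as shown:

```typescript
// Illustrative `search` tool arguments; the shape mirrors the registered schema.
type MatchType = 'exact' | 'fuzzy' | 'prefix';

interface SearchArgs {
  query: string;
  options?: {
    limit?: number;
    offset?: number; // pagination offset (assumed from the feature list)
    matchType?: MatchType;
    includeMorphology?: boolean;
  };
}

const args: SearchArgs = {
  query: 'الرحمن',
  options: { matchType: 'fuzzy', limit: 5, offset: 0, includeMorphology: true },
};
```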
Performance Optimizations
- Data Caching: Quran data loaded once at startup
- Memory Efficient: Optimized data structures
- Type Safety: Full TypeScript support with Zod validation
- Error Handling: Comprehensive error handling and logging
npm Package Configuration
Complete Package Setup
```json
{
  "name": "quran-search-mcp",
  "version": "0.1.0",
  "description": "Quran Search Engine MCP Server - Complete package with engine and server",
  "main": "./mcp-server/dist/index.js",
  "bin": {
    "quran-search-mcp": "./mcp-server/dist/index.js"
  },
  "files": [
    "mcp-server/dist",
    "packages/quran-search-engine/dist",
    "CHANGELOG.md"
  ]
}
```

One detail worth noting: for the `bin` entry to work under `npx`, the built `index.js` must start with a `#!/usr/bin/env node` shebang.
Repository Structure
- GitHub: https://github.com/Azizham66/quran-search-MCP
- License: MIT
- Attribution: Proper credit to original quran-search-engine project
Testing & Validation
Manual MCP Testing
```bash
# Test tools list
echo '{"jsonrpc": "2.0", "method": "tools/list", "params": {}, "id": 1}' | npx quran-search-mcp

# Test search functionality
echo '{"jsonrpc": "2.0", "method": "tools/call", "params": {"name": "search", "arguments": {"query": "الرحمن"}}, "id": 2}' | npx quran-search-mcp
```
Build Verification
```bash
# Both packages build successfully
pnpm build

# Engine tests pass
pnpm test
```
Challenges & Solutions
Challenge 1: Workspace Dependencies
Problem: MCP server needed to import engine package
Solution: Used a `workspace:*` dependency and proper pnpm workspace configuration
Challenge 2: Data Loading
Problem: Large Quran datasets could slow startup
Solution: Implemented singleton cache with Promise.all() for parallel loading
Challenge 3: Dual Usage
Problem: Support both CLI and library usage
Solution: Added direct execution check and proper exports
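The "direct execution check" idea can be sketched as follows. This is a hypothetical helper, not the package's actual API; the real entry point presumably inlines the comparison between `process.argv[1]` and `import.meta.url`:

```typescript
import { pathToFileURL } from 'node:url';

// Hypothetical sketch of a direct-execution check: start the server only
// when the file is run as a CLI (node / npx), not when imported as a library.
export function isDirectRun(argv1: string | undefined, moduleUrl: string): boolean {
  if (!argv1) return false;
  return pathToFileURL(argv1).href === moduleUrl;
}

// In the entry point, the guard would wrap bootstrap():
//
//   if (isDirectRun(process.argv[1], import.meta.url)) {
//     bootstrap().catch((err) => {
//       console.error(err);
//       process.exit(1);
//     });
//   }
```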
Challenge 4: Type Safety
Problem: MCP protocol requires strict schema validation
Solution: Used Zod for runtime validation and TypeScript for compile-time safety
What I Learned
- Monorepo Benefits: Clear separation of concerns while maintaining unified development
- MCP Protocol: Standardized way to expose tools to AI assistants
- Performance Matters: Data caching strategies significantly impact user experience
- Export Strategy: Dual exports (main package + subpaths) provide flexibility
- Build Tools: Different tools serve different purposes (tsup vs tsc)
- Documentation: Accurate, up-to-date docs are crucial for adoption
Future Improvements
Potential Enhancements
- Streaming Results: For large search result sets
- Caching Layer: Redis or similar for distributed deployments
- Plugin System: Allow custom search algorithms
- Multiple Transports: Support HTTP, WebSocket beyond stdio
- Advanced Search: Boolean queries, phrase search, proximity search
Conclusion
Building the quran-search-mcp package was a rewarding experience in combining existing powerful libraries with modern protocols. The monorepo structure provides maintainability while the MCP integration makes it accessible to AI assistants.
Special thanks to Adel Benyahia for creating the excellent quran-search-engine library that made this MCP adaptation possible. His work on Arabic text processing, morphological analysis, and search algorithms provided the solid foundation needed for this project.
The key was respecting the original quran-search-engine's excellence while building a clean, standards-compliant adaptation layer that adds value without reinventing the wheel.
Try it out:
```bash
npm install quran-search-mcp
npx quran-search-mcp
```
The project is ready for production use and welcomes contributions from the community!
Built with ❤️ for the Muslim community and AI developers worldwide