<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohammed Safvan</title>
    <description>The latest articles on DEV Community by Mohammed Safvan (@mohammedsafvan).</description>
    <link>https://dev.to/mohammedsafvan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F966952%2F16528275-6c59-451f-8539-8ba2806782b7.jpg</url>
      <title>DEV Community: Mohammed Safvan</title>
      <link>https://dev.to/mohammedsafvan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mohammedsafvan"/>
    <language>en</language>
    <item>
      <title>An Introduction to Model Context Protocol (MCP)</title>
      <dc:creator>Mohammed Safvan</dc:creator>
      <pubDate>Sat, 12 Jul 2025 07:09:41 +0000</pubDate>
      <link>https://dev.to/mohammedsafvan/an-introduction-to-model-context-protocol-mcp-jd6</link>
      <guid>https://dev.to/mohammedsafvan/an-introduction-to-model-context-protocol-mcp-jd6</guid>
      <description>&lt;p&gt;The &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; is an open standard enabling structured interaction between LLMs and external tools or data. It introduces a modular architecture comprising hosts, clients, and server, each with well-defined responsibilities, facilitating secure and extensible AI workflows.&lt;/p&gt;

&lt;p&gt;This blog shows how to build a minimal MCP server for semantic search over local Markdown notes, focusing on core protocol features and running everything locally.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;MCP Architecture Overview&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5a7s6gcgs0r0nzv9uos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5a7s6gcgs0r0nzv9uos.png" alt="MCP Architecture" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Host&lt;/strong&gt;: The primary AI application (e.g., IDEs, assistants) managing LLM execution and client orchestration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client&lt;/strong&gt;: An isolated process that connects 1:1 with a server, handles bidirectional messaging, and negotiates capabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server&lt;/strong&gt;: A lightweight service exposing tools or data through MCP. It remains isolated and cannot access global context or other servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MCP uses &lt;strong&gt;JSON-RPC&lt;/strong&gt; for communication and includes a &lt;strong&gt;capability negotiation&lt;/strong&gt; step during initialization.&lt;/p&gt;
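
&lt;p&gt;As a sketch, the handshake begins with a JSON-RPC &lt;code&gt;initialize&lt;/code&gt; request in which the client advertises what it supports. The envelope below follows JSON-RPC 2.0; the capability contents are illustrative, not an exhaustive listing:&lt;/p&gt;

```python
import json

# Illustrative MCP "initialize" request a client might send at startup.
# JSON-RPC 2.0 envelope; the capability sets shown are examples only.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}},  # what this client supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

print(json.dumps(initialize_request, indent=2))
```

&lt;p&gt;The server replies with its own capabilities, and both sides then restrict themselves to the negotiated feature set.&lt;/p&gt;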




&lt;h2&gt;
  
  
  &lt;strong&gt;Server Implementation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To demonstrate MCP in action, a lightweight server was implemented.&lt;br&gt;
Its tools are plain Python functions, registered with the server by placing the &lt;code&gt;@server_name.tool()&lt;/code&gt; decorator above each function.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. &lt;code&gt;index_documents(directory_path)&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Reads all Markdown (&lt;code&gt;.md&lt;/code&gt;) files within the specified directory.&lt;/li&gt;
&lt;li&gt;Chunks text based on structure (e.g., headings).&lt;/li&gt;
&lt;li&gt;Converts chunks into vector embeddings.&lt;/li&gt;
&lt;li&gt;Stores embeddings in a &lt;strong&gt;Milvus&lt;/strong&gt; vector database.&lt;/li&gt;
&lt;/ul&gt;
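
&lt;p&gt;Heading-based chunking reduces to splitting on lines that start a new section. A simplified sketch, not the repository's exact code:&lt;/p&gt;

```python
def chunk_by_headings(markdown_text):
    """Split Markdown into chunks, starting a new chunk at each heading."""
    chunks, current = [], []
    for line in markdown_text.splitlines():
        # A line beginning with "#" opens a new section; flush the old one.
        if line.lstrip().startswith("#") and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks

doc = "# Intro\nSome text.\n## Details\nMore text."
print(chunk_by_headings(doc))
```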

&lt;h3&gt;
  
  
  &lt;strong&gt;2. &lt;code&gt;search(query)&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Converts the input query into vector form.&lt;/li&gt;
&lt;li&gt;Queries the Milvus DB for semantically similar text chunks.&lt;/li&gt;
&lt;li&gt;Returns top-matching segments for later use.&lt;/li&gt;
&lt;/ul&gt;
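
&lt;p&gt;The retrieval step is a nearest-neighbour search over embeddings. A pure-Python sketch in which toy vectors stand in for Milvus and the real embedding model:&lt;/p&gt;

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy "index": in the real server these vectors come from the embedding
# model and are stored in Milvus, not in a Python dict.
index = {
    "notes on MCP servers": [0.9, 0.1, 0.0],
    "grocery list":         [0.0, 0.2, 0.9],
}

def search_index(query_vec, top_k=1):
    """Rank stored chunks by similarity to the query vector."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

print(search_index([1.0, 0.0, 0.0]))
```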

&lt;p&gt;The &lt;code&gt;paraphrase-albert-small-v2&lt;/code&gt; model was used for embeddings. At ~50MB, it supports local execution with acceptable trade-offs for lightweight tasks.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Query Flow&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsahte8icu4ks84ba99b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsahte8icu4ks84ba99b.png" alt="A Sample search query to the server" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The protocol-driven flow of a semantic search query in an MCP-compatible setup is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User Input&lt;/strong&gt; is submitted through the host application.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;client&lt;/strong&gt; forwards this input along with a list of available tools to the LLM.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;LLM&lt;/strong&gt; selects the appropriate tool and specifies parameters.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;client&lt;/strong&gt; sends a protocol message to the designated server.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;server&lt;/strong&gt; executes the tool function and returns structured output.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;client&lt;/strong&gt; forwards retrieved content to the LLM.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;LLM&lt;/strong&gt; synthesizes a final response using the provided context.&lt;/li&gt;
&lt;/ol&gt;
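
&lt;p&gt;The steps above can be condensed into a single client-side dispatch loop. All names here are hypothetical stubs; the LLM and server are mocked out:&lt;/p&gt;

```python
class StubLLM:
    """Stands in for the model in steps 2-3 and 7."""
    def choose_tool(self, user_input, tools):
        return tools[0], {"query": user_input}
    def respond(self, user_input, context):
        return f"answer using: {context}"

class StubServer:
    """Stands in for the MCP server in step 5."""
    def call_tool(self, name, params):
        return f"{name} -&gt; chunks for {params['query']}"

def run_query(user_input, llm, server, tools):
    tool_name, params = llm.choose_tool(user_input, tools)  # steps 2-3
    result = server.call_tool(tool_name, params)            # steps 4-5
    return llm.respond(user_input, context=result)          # steps 6-7

print(run_query("what is MCP?", StubLLM(), StubServer(), ["search"]))
```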

&lt;p&gt;Each layer performs only its designated function, ensuring high modularity and isolation.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Observations&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chunking&lt;/strong&gt;: Heading-based segmentation produced more meaningful retrieval than token-based methods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Local models require batching to avoid CPU strain during indexing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protocol Design&lt;/strong&gt;: MCP’s modular structure and JSON-RPC communication simplify integration and debugging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interoperability&lt;/strong&gt;: Capability negotiation ensures only supported features are used, enhancing reliability and extensibility.&lt;/li&gt;
&lt;/ul&gt;
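
&lt;p&gt;The batching observation can be illustrated with a small helper (illustrative only; the actual indexing code may differ):&lt;/p&gt;

```python
def batched(items, batch_size):
    """Yield fixed-size batches so the embedding model is invoked
    once per batch instead of once per chunk."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

chunks = [f"chunk-{n}" for n in range(10)]
batches = list(batched(chunks, 4))
print([len(b) for b in batches])  # batches of 4, 4, and the 2 leftovers
```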

&lt;p&gt;You can learn more about MCP in &lt;a href="https://www.zackriya.com/model-context-protocol/" rel="noopener noreferrer"&gt;“Hands on Introduction to MCP”&lt;/a&gt;.&lt;br&gt;
Check out the GitHub repo: &lt;a href="https://github.com/Zackriya-Solutions/MCP-Markdown-RAG" rel="noopener noreferrer"&gt;MCP-Markdown-RAG&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;MCP offers a robust foundation for integrating LLMs with local tools via clean, composable interfaces. This experiment confirms its suitability for lightweight semantic search systems and highlights its potential in privacy-conscious, modular AI workflows.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
