<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dineshsuriya D</title>
    <description>The latest articles on DEV Community by Dineshsuriya D (@dineshsuriya).</description>
    <link>https://dev.to/dineshsuriya</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2891706%2F823e0216-c62d-42d4-89ee-c1d743a3da5b.jpg</url>
      <title>DEV Community: Dineshsuriya D</title>
      <link>https://dev.to/dineshsuriya</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dineshsuriya"/>
    <language>en</language>
    <item>
      <title>PEP 703 Explained: How Python Finally Removes the GIL</title>
      <dc:creator>Dineshsuriya D</dc:creator>
      <pubDate>Mon, 08 Dec 2025 16:21:56 +0000</pubDate>
      <link>https://dev.to/epam_india_python/pep-703-explained-how-python-finally-removes-the-gil-49d</link>
      <guid>https://dev.to/epam_india_python/pep-703-explained-how-python-finally-removes-the-gil-49d</guid>
      <description>&lt;p&gt;PEP 703 is one of the most transformative changes in CPython’s history. This proposal—now accepted—re-architects memory management, garbage collection, and container safety to safely enable &lt;strong&gt;true parallelism&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For years, Python has had a massive "elephant in the room": the &lt;strong&gt;Global Interpreter Lock (GIL)&lt;/strong&gt;. If you have ever written a Python script that tries to do two CPU-heavy tasks at once, you have likely run into it. You spawn two threads, expecting your code to run twice as fast, but instead, it runs at the same speed—or sometimes even slower.&lt;/p&gt;

&lt;p&gt;The GIL allows only &lt;strong&gt;one thread&lt;/strong&gt; to execute Python bytecode at a time, effectively turning your powerful multi-core CPU into a single-core machine. In this deep dive, we will explore exactly how the Python core team solved the "impossible" problem: removing the GIL while keeping Python safe.&lt;/p&gt;
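&lt;p&gt;To see this effect yourself, here is a minimal, illustrative benchmark (timings vary by machine and interpreter build): it runs the same CPU-bound countdown twice serially and then in two threads. On a standard GIL build, the threaded run finishes in roughly the same wall-clock time as the serial one.&lt;/p&gt;

```python
import threading
import time

def count_down(n):
    # Pure-Python, CPU-bound work: under the GIL, only one thread
    # at a time can execute this bytecode.
    while n > 0:
        n -= 1

N = 2_000_000

start = time.perf_counter()
count_down(N)
count_down(N)
serial = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"serial: {serial:.2f}s  two threads: {threaded:.2f}s")
```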

&lt;h2&gt;
  
  
  Why This Matters Now
&lt;/h2&gt;

&lt;p&gt;Before looking at the "how," we must address the "why." For decades, single-threaded Python got faster automatically as CPUs improved (Moore’s Law). That era is over. Today, performance gains come from adding &lt;em&gt;more&lt;/em&gt; cores, not faster ones.&lt;/p&gt;

&lt;p&gt;At the same time, Python has become the standard for &lt;strong&gt;AI and Machine Learning&lt;/strong&gt;—workloads that are inherently parallel. Sticking to a single-threaded runtime in a multi-core, AI-driven world is no longer sustainable. PEP 703 bridges this gap, allowing Python to finally utilize modern hardware to its full potential.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Problem: Concurrency vs. Parallelism
&lt;/h2&gt;

&lt;p&gt;Imagine a ticket counter with 10 open windows (your CPU cores), but there is only &lt;strong&gt;one&lt;/strong&gt; employee (the GIL) who runs back and forth between the windows. Even if you have 10 customers (threads) ready to do business, only one gets served at a time. The others just wait.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Concurrency:&lt;/strong&gt; The employee switches between windows rapidly. Progress happens on all tasks, but not at the exact same instant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallelism:&lt;/strong&gt; You hire 10 employees. All 10 windows serve customers at the exact same second.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Standard Python does &lt;strong&gt;concurrency&lt;/strong&gt; well, but the GIL limits true parallelism.&lt;/p&gt;

&lt;p&gt;PEP 703 removes the "one employee" limit, allowing every core on your machine to run Python code simultaneously. But doing this safely requires redesigning Python internals from the ground up.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Python Fixed Reference Counting Without the GIL
&lt;/h2&gt;

&lt;p&gt;Python manages memory automatically using &lt;strong&gt;Reference Counting&lt;/strong&gt;. Every time you create a variable, Python attaches a counter to that object. In a free-threaded world, two threads updating the same counter simultaneously would cause race conditions.&lt;/p&gt;

&lt;p&gt;Using atomic operations for every reference count update would be safe but unacceptably slow. PEP 703 solves this with three innovations:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Biased Reference Counting (BRC)
&lt;/h3&gt;

&lt;p&gt;The developers realized that &lt;strong&gt;most objects are only ever used by the thread that created them.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;BRC "biases" the reference count accordingly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Owner thread:&lt;/strong&gt; Fast non-atomic increments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Other threads:&lt;/strong&gt; Slower atomic increments for safety&lt;/li&gt;
&lt;/ul&gt;
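&lt;p&gt;A conceptual sketch of the idea in pure Python (CPython implements this in C; the lock below merely stands in for atomic operations):&lt;/p&gt;

```python
import threading

class BiasedRef:
    """Conceptual sketch of biased reference counting (not CPython's C code)."""
    def __init__(self):
        self.owner = threading.get_ident()  # thread that created the object
        self.local = 1      # owner-only count: plain, unsynchronized updates
        self.shared = 0     # other threads' count: guarded updates
        self._lock = threading.Lock()       # stand-in for atomic operations

    def incref(self):
        if threading.get_ident() == self.owner:
            self.local += 1     # fast path: no synchronization needed
        else:
            with self._lock:    # slow path: atomic in real CPython
                self.shared += 1

    def decref(self):
        if threading.get_ident() == self.owner:
            self.local -= 1
        else:
            with self._lock:
                self.shared -= 1

    def total(self):
        return self.local + self.shared
```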

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc6mc2pwz0c2ecm4hcky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc6mc2pwz0c2ecm4hcky.png" alt="Biased Reference Counting" width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Immortal Objects
&lt;/h3&gt;

&lt;p&gt;Objects like &lt;code&gt;None&lt;/code&gt;, &lt;code&gt;True&lt;/code&gt;, &lt;code&gt;False&lt;/code&gt;, and small integers (&lt;code&gt;0&lt;/code&gt;, &lt;code&gt;1&lt;/code&gt;) are accessed constantly. Locking them would create a hotspot of contention.&lt;/p&gt;

&lt;p&gt;PEP 703 marks these objects as &lt;strong&gt;Immortal&lt;/strong&gt;. Their reference counts never change—updates are simply ignored.&lt;/p&gt;
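&lt;p&gt;You can observe immortality directly on CPython 3.12 and newer (where PEP 683 landed): taking new references to &lt;code&gt;None&lt;/code&gt; no longer changes its reported reference count. On older versions the two numbers differ.&lt;/p&gt;

```python
import sys

before = sys.getrefcount(None)
refs = [None] * 1000   # on pre-3.12 versions this bumps None's refcount
after = sys.getrefcount(None)

# On 3.12+, None is immortal: its refcount is pinned to a sentinel
# value, so incref/decref on it are effectively no-ops.
print(before, after)
```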

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi56vk9hcq9wmo6lvyzyx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi56vk9hcq9wmo6lvyzyx.png" alt="Immortal Objects" width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Deferred Reference Counting
&lt;/h3&gt;

&lt;p&gt;Functions and modules are touched millions of times by many threads. Updating their refcounts constantly would drown performance.&lt;/p&gt;

&lt;p&gt;PEP 703 defers these updates and allows the &lt;strong&gt;Garbage Collector to reconcile them later&lt;/strong&gt;, avoiding countless atomic operations.&lt;/p&gt;
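&lt;p&gt;The mechanism can be sketched roughly like this (an illustrative model, not CPython's actual C implementation): threads record cheap deltas instead of touching the shared count, and a later reconciliation pass folds them in all at once:&lt;/p&gt;

```python
import threading
from collections import Counter

class DeferredCounts:
    """Illustrative model of deferred reference counting."""
    def __init__(self):
        self.refcounts = Counter()   # the "real" counts, owned by the GC
        self._pending = Counter()    # cheap per-access deltas
        self._lock = threading.Lock()

    def incref(self, obj):
        with self._lock:
            self._pending[id(obj)] += 1   # note the change instead of
                                          # updating the shared count

    def decref(self, obj):
        with self._lock:
            self._pending[id(obj)] -= 1

    def reconcile(self):
        # Run during a GC pause: merge all pending deltas in one pass,
        # replacing countless per-access atomic operations.
        with self._lock:
            for oid, delta in self._pending.items():
                self.refcounts[oid] += delta
            self._pending.clear()
```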

&lt;h2&gt;
  
  
  How Memory Allocation Works in the Free-Threaded Build
&lt;/h2&gt;

&lt;p&gt;Standard Python uses &lt;code&gt;pymalloc&lt;/code&gt;, which assumes the GIL is present. Without the GIL, it becomes unsafe for multithreaded use.&lt;/p&gt;

&lt;p&gt;The free-threaded build &lt;strong&gt;switches to mimalloc&lt;/strong&gt;, a high-performance allocator designed for parallel workloads.&lt;/p&gt;

&lt;p&gt;mimalloc provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Thread-safe allocation&lt;/strong&gt; without global locks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Paging of similar-sized objects&lt;/strong&gt;, making GC scans much more efficient&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;mimalloc is vendored into CPython itself, so the free-threaded build does not depend on an externally installed allocator.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This significantly reduces contention when multiple threads request memory simultaneously.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the New GC Avoids Race Conditions
&lt;/h2&gt;

&lt;p&gt;Python’s Garbage Collector finds &lt;strong&gt;cyclic references&lt;/strong&gt;. Under the GIL, the GC could run safely because no other thread could mutate structures mid-scan.&lt;/p&gt;

&lt;p&gt;Without the GIL, Python adds new protection:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Stop-the-World Scanning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Pause all threads executing Python code&lt;/li&gt;
&lt;li&gt;Scan safely&lt;/li&gt;
&lt;li&gt;Resume execution&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Removal of Generational GC
&lt;/h3&gt;

&lt;p&gt;Standard Python scans young objects frequently. In a multi-threaded world, frequent pauses would crush performance.&lt;br&gt;
The free-threaded build adopts a &lt;strong&gt;non-generational GC&lt;/strong&gt;, running less frequently and merging deferred reference count updates during each cycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Because the GC now runs less frequently in the free-threaded build, these stop-the-world pauses are shorter and happen significantly less often in practice.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How Lists and Dicts Stay Safe Across Threads
&lt;/h2&gt;

&lt;p&gt;Each list and dictionary now has a very lightweight lock. But locking to &lt;em&gt;read&lt;/em&gt; would destroy performance.&lt;/p&gt;

&lt;p&gt;Python uses &lt;strong&gt;Optimistic Locking with version numbers&lt;/strong&gt; instead.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A writer acquires the lock and increments the version number.&lt;/li&gt;
&lt;li&gt;A reader:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Reads version&lt;/li&gt;
&lt;li&gt;Reads data without locking&lt;/li&gt;
&lt;li&gt;Re-reads version&lt;/li&gt;
&lt;li&gt;If unchanged → success&lt;/li&gt;
&lt;li&gt;If changed → retries with lock&lt;/li&gt;
&lt;/ul&gt;
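&lt;p&gt;This read protocol is essentially a seqlock. A simplified Python sketch (CPython does this in C inside the container implementations):&lt;/p&gt;

```python
import threading

class VersionedBox:
    """Simplified sketch of optimistic, version-checked reads."""
    def __init__(self, value):
        self._value = value
        self._version = 0          # even = stable, odd = write in progress
        self._lock = threading.Lock()

    def write(self, value):
        with self._lock:           # writers always take the lock
            self._version += 1
            self._value = value
            self._version += 1

    def read(self):
        v1 = self._version
        value = self._value        # optimistic read: no lock taken
        v2 = self._version
        if v1 == v2 and v1 % 2 == 0:
            return value           # version unchanged: the read was consistent
        with self._lock:           # a writer interfered: retry under the lock
            return self._value
```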

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0p1zpyrbwpnl4q2btlb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0p1zpyrbwpnl4q2btlb.png" alt="Optimistic Locking" width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lists and dicts no longer depend on the GIL for thread safety — their lock-and-version design ensures safe concurrent access even when many threads operate on the same container.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because reads far outnumber writes, this design keeps containers extremely fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Reality Check: Performance &amp;amp; Trade-offs
&lt;/h2&gt;

&lt;p&gt;Removing the GIL comes with overhead. Free-threaded Python must maintain per-object locks, deferred refcounting logic, and more.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Single-threaded programs:&lt;/strong&gt; ~10–15% slower&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-threaded CPU-bound programs:&lt;/strong&gt; scale almost linearly with core count&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Python Build&lt;/th&gt;
&lt;th&gt;CPU-Heavy Threads on 8 Cores&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Standard CPython&lt;/td&gt;
&lt;td&gt;~1× speed (no scaling)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Free-Threaded CPython&lt;/td&gt;
&lt;td&gt;~6–8× speed depending on workload&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For AI, scientific computing, and high-throughput web servers, this trade-off is overwhelmingly worthwhile.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Scenarios: Why Developers Should Care
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ML Pipelines:&lt;/strong&gt; Run data loading, preprocessing, and model evaluation in parallel without &lt;code&gt;multiprocessing&lt;/code&gt; overhead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web Frameworks:&lt;/strong&gt; Background CPU tasks (e.g., PDF generation, hashing) no longer block the main request thread.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scientific Computing:&lt;/strong&gt; Complex simulations can use multiple cores on a shared in-memory dataset instead of slow inter-process communication.&lt;/li&gt;
&lt;/ul&gt;
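&lt;p&gt;For example, a plain thread pool over CPU-bound work, a pattern that only scales across cores on the free-threaded build (the function and sizes below are placeholders for real preprocessing):&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(chunk):
    # Stand-in for CPU-heavy work (tokenizing, feature extraction, ...)
    return sum(x * x for x in chunk)

chunks = [range(100_000)] * 8

# On a free-threaded build these tasks can occupy all cores;
# the identical code also runs (serially for CPU work) on a GIL build.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(preprocess, chunks))
```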

&lt;h2&gt;
  
  
  Compatibility &amp;amp; Migration
&lt;/h2&gt;

&lt;p&gt;Will this break your code?&lt;br&gt;
Short answer: &lt;strong&gt;No, but you might now see race conditions that were previously hidden by the GIL.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pure Python:&lt;/strong&gt; Runs unchanged, but you must protect shared mutable state yourself when using threads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;C-extensions:&lt;/strong&gt; Must be rebuilt with free-threading support. Many major libraries (NumPy, Pandas, PyTorch) already provide experimental builds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Async:&lt;/strong&gt; Unaffected—async is I/O-bound, not CPU-bound.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ecosystem:&lt;/strong&gt; Support is accelerating now that free-threading ships as a supported option in Python 3.14.&lt;/li&gt;
&lt;/ul&gt;
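&lt;p&gt;The classic hidden race is an unsynchronized read-modify-write on shared state. The sketch below shows the risky pattern and the fix; the lock-based version is correct on both builds:&lt;/p&gt;

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_add(n):
    # counter += 1 is a read-modify-write; without the GIL serializing
    # the bytecodes, two threads can interleave here and lose updates.
    global counter
    for _ in range(n):
        counter += 1

def safe_add(n):
    # Explicit locking works on both the GIL and free-threaded builds.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_add, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000 with the lock, regardless of build
```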

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;PEP 703 represents years of engineering effort. Key takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;True Parallelism:&lt;/strong&gt; Python can now execute multiple threads at the same time on different CPU cores.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redesigned Internals:&lt;/strong&gt; Reference counting, memory allocation, container safety, and GC were upgraded for multi-threading.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Reality:&lt;/strong&gt; ~10% slower for purely single-threaded workloads, but massive multi-core acceleration for CPU-heavy tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Status:&lt;/strong&gt; Free-threaded Python shipped as an experimental build in Python 3.13 and is an officially supported, optional GIL-free build in Python 3.14.&lt;/li&gt;
&lt;/ul&gt;
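&lt;p&gt;To check which build you are running (this probe works on recent CPython; &lt;code&gt;sys._is_gil_enabled()&lt;/code&gt; exists from 3.13 onward):&lt;/p&gt;

```python
import sys
import sysconfig

# Free-threaded builds are compiled with Py_GIL_DISABLED = 1;
# on GIL builds the config variable is 0 (or absent on old versions).
print(sys.version)
print("Py_GIL_DISABLED:", sysconfig.get_config_var("Py_GIL_DISABLED"))

# 3.13+ can also report whether the GIL is active right now,
# since free-threaded builds may re-enable it at runtime.
if hasattr(sys, "_is_gil_enabled"):
    print("GIL enabled now:", sys._is_gil_enabled())
```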

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; The views and opinions expressed in this blog are solely those of the author and do not represent the views of any organization or any individual with whom the author may be associated, professionally or personally.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>python</category>
      <category>core</category>
    </item>
    <item>
      <title>Model Context Protocol : A New Standard for Defining APIs</title>
      <dc:creator>Dineshsuriya D</dc:creator>
      <pubDate>Mon, 14 Apr 2025 11:54:07 +0000</pubDate>
      <link>https://dev.to/epam_india_python/model-context-protocol-a-new-standard-for-defining-apis-19ih</link>
      <guid>https://dev.to/epam_india_python/model-context-protocol-a-new-standard-for-defining-apis-19ih</guid>
      <description>&lt;h2&gt;
  
  
  Why APIs Need a New Standard for LLMs:
&lt;/h2&gt;

&lt;p&gt;Large Language Models (LLMs) are like compressed versions of the internet — great at generating text based on context, but passive by default. They can’t take meaningful actions like sending an email or querying a database on their own.&lt;/p&gt;

&lt;p&gt;To address this, developers began connecting LLMs to external tools via APIs. While powerful, this approach quickly becomes messy: APIs change, glue code piles up, and maintaining M×N integrations (M apps × N tools) becomes a nightmare.&lt;br&gt;
That’s where the Model Context Protocol (MCP) comes in.&lt;br&gt;
MCP provides a standardized interface for connecting LLMs to tools, data sources, and services. Instead of custom integrations for every app-tool pair, MCP introduces a shared protocol that simplifies and scales integrations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy92jbcvcn4qdjcuef90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy92jbcvcn4qdjcuef90.png" alt="After-MCP" width="800" height="506"&gt;&lt;/a&gt;&lt;br&gt;
Think of MCP like USB for AI.&lt;/p&gt;

&lt;p&gt;Before USB, every device needed a custom connector. Similarly, before MCP, each AI app needed its own integration with every tool. MCP reduces M×N complexity to M+N:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tool developers implement N MCP servers&lt;/li&gt;
&lt;li&gt;AI app developers implement M MCP clients&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this client-server model, each side only builds once, and everything just works.&lt;/p&gt;
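&lt;p&gt;The arithmetic, with hypothetical counts of 10 apps and 20 tools:&lt;/p&gt;

```python
M, N = 10, 20            # hypothetical: 10 AI apps, 20 tools
point_to_point = M * N   # one bespoke integration per app/tool pair
with_mcp = M + N         # each app builds one client, each tool one server
print(point_to_point, with_mcp)  # 200 integrations vs. 30 components
```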

&lt;h2&gt;
  
  
  Understanding MCP’s Core Architecture:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hosts&lt;/strong&gt;: These are applications like Claude Desktop, IDEs, or AI-powered tools that want to access external data or capabilities using MCP.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clients&lt;/strong&gt;: Embedded within the host applications, clients maintain a one-to-one connection with MCP servers. They act as the bridge between the host and the server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Servers&lt;/strong&gt;: Servers provide access to various resources, tools, and prompts. They expose these capabilities to clients, enabling seamless interaction through the MCP layer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4coivrbt51izojlom3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4coivrbt51izojlom3j.png" alt="MCP Architecture" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Inside MCP Servers: Resources, Tools &amp;amp; Prompts
&lt;/h2&gt;

&lt;p&gt;MCP servers expose three key components — &lt;strong&gt;Resources&lt;/strong&gt;, &lt;strong&gt;Tools&lt;/strong&gt;, and &lt;strong&gt;Prompts&lt;/strong&gt; — each serving a unique role in enabling LLMs to interact with external systems effectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Resources are data objects made available by the server to the client. They can be &lt;strong&gt;text-based&lt;/strong&gt; (like source code, log files) or &lt;strong&gt;binary&lt;/strong&gt; (like images, PDFs).&lt;/li&gt;
&lt;li&gt;These resources are &lt;strong&gt;application-controlled&lt;/strong&gt;, meaning the client decides when and how to use them.&lt;/li&gt;
&lt;li&gt;Importantly, resources are &lt;strong&gt;read-only&lt;/strong&gt; context providers. They should not perform computations or trigger side effects. If actions or side effects are needed, they should be handled through &lt;strong&gt;tools&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Tools are &lt;strong&gt;functions&lt;/strong&gt; that LLMs can control to perform specific actions — for example, executing a web search or interacting with an external API.&lt;/li&gt;
&lt;li&gt;They enable LLMs to move beyond passive context and take &lt;strong&gt;real-world actions&lt;/strong&gt;, perform &lt;strong&gt;computations&lt;/strong&gt;, or fetch live data.&lt;/li&gt;
&lt;li&gt;Though tools are exposed by the server, they are &lt;strong&gt;model-controlled&lt;/strong&gt;, meaning the LLM decides when to invoke them (usually with user approval in the loop).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prompts
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Prompts are reusable &lt;strong&gt;prompt templates&lt;/strong&gt; that servers can define and offer to clients.&lt;/li&gt;
&lt;li&gt;These help guide LLMs and users to interact with tools and resources more effectively and consistently.&lt;/li&gt;
&lt;li&gt;Prompts are &lt;strong&gt;user-controlled&lt;/strong&gt; — clients can present them to users for selection and usage, making interaction smoother and more intentional.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  MCP Servers: The Bridge to External Systems
&lt;/h2&gt;

&lt;p&gt;MCP servers act as the bridge between the MCP ecosystem and external tools, systems, or data sources. They can be developed in any programming language and communicate with clients using two transport methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stdio Transport&lt;/strong&gt;: Uses standard input/output for communication — ideal for local or lightweight processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP with SSE (Server-Sent Events) Transport&lt;/strong&gt;: Uses HTTP connections for more scalable, web-based communication between clients and servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How MCP Works: The Full Workflow
&lt;/h2&gt;

&lt;p&gt;The Model Context Protocol follows a structured communication flow between hosts, clients, and servers. Here's how the full lifecycle plays out:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Initialization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;host&lt;/strong&gt; creates a &lt;strong&gt;client&lt;/strong&gt;, which then sends an &lt;code&gt;Initialize&lt;/code&gt; request to the server. This request includes the client’s &lt;strong&gt;protocol version&lt;/strong&gt; and its &lt;strong&gt;capabilities&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;server&lt;/strong&gt; responds with its own protocol version and a list of supported &lt;strong&gt;capabilities&lt;/strong&gt;, completing the handshake.&lt;/li&gt;
&lt;/ul&gt;
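&lt;p&gt;On the wire, this handshake is a JSON-RPC message. The example below is illustrative only; field values such as the version string and client name are placeholders, not a complete schema:&lt;/p&gt;

```python
import json

# Sketch of the JSON-RPC "initialize" request an MCP client sends
# to start the handshake (illustrative field values).
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}
print(json.dumps(initialize_request, indent=2))
```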

&lt;h3&gt;
  
  
  2. Message Exchange
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Once initialized, the host gains access to the server’s &lt;strong&gt;resources&lt;/strong&gt;, &lt;strong&gt;prompts&lt;/strong&gt;, and &lt;strong&gt;tools&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Here’s how interaction typically works:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Request&lt;/strong&gt;: If the LLM determines that it needs to perform an action (e.g., for the user query &lt;em&gt;“What’s the weather in Bangalore?”&lt;/em&gt;), the host instructs the client to send a request to the server — in this case, to invoke the &lt;code&gt;fetch_weather&lt;/code&gt; tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response&lt;/strong&gt;: The server processes the request by executing the appropriate tool. It then sends the result back to the client, which forwards it to the host. The host can then pass this enriched context back into the LLM, enabling it to generate a more accurate and complete response.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Termination
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Either the host or the server can gracefully close the connection by issuing a &lt;code&gt;close()&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hands-On: Building an MCP Server &amp;amp; Client in Python:
&lt;/h2&gt;

&lt;h3&gt;
  
  
  MCP Server:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mcp.server.fastmcp&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastMCP&lt;/span&gt;

&lt;span class="c1"&gt;# Create an MCP server
&lt;/span&gt;&lt;span class="n"&gt;mcp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastMCP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LLMs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# The @mcp.resource() decorator is meant to map a URI pattern to a function that provides the resource content
&lt;/span&gt;&lt;span class="nd"&gt;@mcp.resource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;docs://list/llms&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_all_llms_docs&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Reads and returns the content of the LLMs documentation file.
    Returns:
        str: the contents of the LLMs documentation if successful,
             otherwise an error message.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="c1"&gt;# Local path to the LLMs documentation
&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="n"&gt;doc_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;C:\Users\dineshsuriya_d\Documents\MCP Server\llms_full.txt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="c1"&gt;# Open the documentation file in read mode and return its content
&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="c1"&gt;# Return an error message if unable to read the file
&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error reading file : &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="c1"&gt;# Note : Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data
# but shouldn't perform significant computation or have side effects
&lt;/span&gt;
&lt;span class="c1"&gt;# Add a tool, will be converted into JSON spec for function calling
&lt;/span&gt;&lt;span class="nd"&gt;@mcp.tool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add_llm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new_llm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Appends a new language model (LLM) entry to the specified file.
    Parameters:
        path (str): The file path where the LLM is recorded.
        new_llm (str): The new language model to be added.
    Returns:
        str: A success message if the LLM was added, otherwise an error message.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="c1"&gt;# Open the file in append mode and add the new LLM entry with a newline at the end
&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_llm&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Added new LLM to &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="c1"&gt;# If an error occurs, return an error message detailing the exception
&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error writing file : &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="c1"&gt;# Note: Unlike resources, tools are expected to perform computation and have side effects
&lt;/span&gt;

&lt;span class="nd"&gt;@mcp.prompt&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;review_code&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Please review this code:&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="c1"&gt;# Initialize and run the server
&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="n"&gt;mcp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;transport&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;stdio&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  MCP Clients:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;MCP clients are built into host applications such as Claude Desktop and Cursor; each client maintains a dedicated connection to a single MCP server.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mcp&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ClientSession&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;StdioServerParameters&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;types&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mcp.client.stdio&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;stdio_client&lt;/span&gt;

&lt;span class="c1"&gt;# Commands for running/connecting to MCP Server
&lt;/span&gt;&lt;span class="n"&gt;server_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StdioServerParameters&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Executable
&lt;/span&gt;    &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;example_server.py&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;  &lt;span class="c1"&gt;# Optional command line arguments
&lt;/span&gt;    &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Optional environment variables
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;stdio_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;server_params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;as &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;ClientSession&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sampling_callback&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;handle_sampling_message&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Initialize the connection
&lt;/span&gt;        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="c1"&gt;# List available prompts
&lt;/span&gt;        &lt;span class="n"&gt;prompts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list_prompts&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="c1"&gt;# Get a prompt
&lt;/span&gt;        &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;example-prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arguments&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arg1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# List available resources
&lt;/span&gt;        &lt;span class="n"&gt;resources&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list_resources&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="c1"&gt;# List available tools
&lt;/span&gt;        &lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list_tools&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="c1"&gt;# Read a resource
&lt;/span&gt;        &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mime_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_resource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;file://some/path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Call a tool
&lt;/span&gt;        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tool-name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arguments&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arg1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why Use MCP? Key Advantages &amp;amp; Benefits
&lt;/h2&gt;

&lt;p&gt;MCP brings a number of practical benefits that make it easier to build and scale LLM-based applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No Need to Manage API Changes&lt;/strong&gt;: MCP servers are typically maintained by the service providers themselves. So if an API changes or a tool gets updated, developers don’t need to worry — those changes are handled within the MCP server. This makes LLM integrations more reliable and future-proof.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Server Spin-Up&lt;/strong&gt;: MCP servers automatically start when a host creates a client and connects to it. There’s no need for developers to manually launch or manage the server process — just provide the necessary configuration to the MCP-compatible host (like Claude), and it takes care of the rest.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plug-and-Play Model Switching&lt;/strong&gt;: With MCP in place, switching between different models becomes much easier. Since all tool and resource integrations are standardized and maintained by their providers, any compatible model can use them without extra setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich Ecosystem of Integrations&lt;/strong&gt;: A growing number of integrations are already available in the MCP ecosystem — offering access to powerful tools and high-quality resources. These can be effortlessly accessed through clients by any host application.&lt;/li&gt;
&lt;/ul&gt;
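&lt;p&gt;The &lt;strong&gt;Automatic Server Spin-Up&lt;/strong&gt; point above boils down to a small configuration entry that the host reads at startup. As a rough sketch (the &lt;code&gt;mcpServers&lt;/code&gt; key and the server name here follow Claude Desktop's convention — treat the exact names as assumptions), the entry for the stdio server defined earlier could be generated like this:&lt;/p&gt;

```python
import json

# Hypothetical sketch of the config entry an MCP-compatible host reads to
# launch a stdio server on demand. The "mcpServers" key mirrors Claude
# Desktop's documented config shape; the server name is illustrative.
config = {
    "mcpServers": {
        "example-server": {
            "command": "python",            # executable the host will spawn
            "args": ["example_server.py"],  # the server script shown above
        }
    }
}

print(json.dumps(config, indent=2))
```

&lt;p&gt;With this entry in place, the host spawns the server process itself (which in turn runs the script's &lt;code&gt;mcp.run(transport='stdio')&lt;/code&gt; entry point) whenever a client needs it — the developer never launches the server manually.&lt;/p&gt;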

&lt;h2&gt;
  
  
  Explore: Community &amp;amp; Pre-Built MCP Servers
&lt;/h2&gt;

&lt;p&gt;Looking to dive deeper or get started fast? Check out these great community-driven resources and ready-to-use MCP servers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🌟 &lt;a href="https://github.com/punkpeye/awesome-mcp-servers" rel="noopener noreferrer"&gt;Awesome MCP Servers&lt;/a&gt; – A curated list of high-quality MCP server implementations, tools, and resources.&lt;/li&gt;
&lt;li&gt;🛠️ &lt;a href="https://github.com/modelcontextprotocol/servers" rel="noopener noreferrer"&gt;Official MCP Server Repository&lt;/a&gt; – The official collection of MCP server examples and standards maintained by the protocol community.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Happy vibe coding! 😊
&lt;/h3&gt;




&lt;p&gt;Disclaimer:&lt;br&gt;
This is a personal blog. The views and opinions expressed here are only those of the author and do not represent those of any organization or any individual with whom the author may be associated, professionally or personally.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
