<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tony Lewis</title>
    <description>The latest articles on DEV Community by Tony Lewis (@tonylewislondon).</description>
    <link>https://dev.to/tonylewislondon</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3807535%2Fd41f17cd-ebb4-450b-9152-63968bc0d5d7.jpg</url>
      <title>DEV Community: Tony Lewis</title>
      <link>https://dev.to/tonylewislondon</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tonylewislondon"/>
    <language>en</language>
    <item>
      <title>The Part of My AI Stack That Isn't AI: Human Workers via MCP</title>
      <dc:creator>Tony Lewis</dc:creator>
      <pubDate>Fri, 03 Apr 2026 15:14:41 +0000</pubDate>
      <link>https://dev.to/tonylewislondon/the-part-of-my-ai-stack-that-isnt-ai-human-workers-via-mcp-5217</link>
      <guid>https://dev.to/tonylewislondon/the-part-of-my-ai-stack-that-isnt-ai-human-workers-via-mcp-5217</guid>
      <description>&lt;p&gt;Everyone talks about MCP as the protocol for connecting AI to APIs. Stripe, HubSpot, Postgres, Gmail — plug in a server, get tools, let the AI call them.&lt;/p&gt;

&lt;p&gt;But here's what nobody's writing about: MCP works just as well for dispatching tasks to &lt;em&gt;humans&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I've been running a system where my AI orchestrates microtask workers — real people — through the same MCP tool interface it uses to call any other API. The AI creates campaigns, assigns tasks, monitors submissions, validates results, rates workers, and stores outputs. All through standard MCP tool calls. No custom integration. No separate dashboard. The human workforce is just another tool in the AI's toolkit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The setup
&lt;/h2&gt;

&lt;p&gt;Microtask platforms (Microworkers, Amazon Mechanical Turk, Toloka) have REST APIs. You can create tasks, assign them to workers, pull results, and rate quality programmatically. Building one of these as an MCP tool bundle takes the same effort as building tools for any other SaaS API.&lt;/p&gt;

&lt;p&gt;Once you do, your AI gets tools like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create campaign&lt;/strong&gt; — define a task, set price per completion, specify how many workers you need&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;List submissions&lt;/strong&gt; — pull all worker responses for a campaign&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate submission&lt;/strong&gt; — mark work as accepted or rejected, with feedback&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get account balance&lt;/strong&gt; — monitor spend&lt;/li&gt;
&lt;/ul&gt;
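&lt;p&gt;As a sketch, the bundle above boils down to a handful of JSON-schema tool descriptors. Tool names and schema fields here are illustrative, not any specific platform's API:&lt;/p&gt;

```python
# Hypothetical MCP-style tool descriptors for a microtask bundle.
# Names and input fields are illustrative stand-ins, not a real platform's API.

HUMAN_TOOLS = [
    {
        "name": "create_campaign",
        "description": "Define a task, set price per completion, set worker count",
        "inputSchema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "price_usd": {"type": "number"},
                "workers_needed": {"type": "integer"},
                "instructions": {"type": "string"},
            },
            "required": ["title", "price_usd", "workers_needed"],
        },
    },
    {"name": "list_submissions", "description": "Pull worker responses for a campaign"},
    {"name": "rate_submission", "description": "Accept or reject work, with feedback"},
    {"name": "get_account_balance", "description": "Monitor spend"},
]

def tool_names(tools):
    """The AI client only ever sees names and schemas."""
    return [t["name"] for t in tools]
```

&lt;p&gt;There is nothing human-specific about these descriptors: they are the same name-plus-inputSchema shape any MCP server advertises.&lt;/p&gt;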

&lt;p&gt;From the AI's perspective, these are identical to any other MCP tools. It calls them the same way it calls a Stripe API or a database query. The difference is that on the other end, a human does the work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;There's a category of tasks that AI handles badly and humans handle trivially. Signing up for a website. Navigating a UI that has no API. Confirming whether a physical location exists. Reading a CAPTCHA. Verifying that a phone number connects to a real business.&lt;/p&gt;

&lt;p&gt;The conventional approach is to either skip these tasks or build brittle browser automation. Both are wrong. The correct abstraction is: &lt;strong&gt;route each task to whoever does it best.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes that's GPT-4. Sometimes that's a Python script. Sometimes that's a person in Nairobi who can complete the task in 90 seconds for $0.30.&lt;/p&gt;

&lt;p&gt;MCP doesn't care which. The tool returns a result. The AI consumes it and continues.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I learned running 300+ human tasks through MCP
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Speed surprised me
&lt;/h3&gt;

&lt;p&gt;Workers claim tasks within minutes of posting, not hours. The microtask workforce is global and online around the clock. I post a batch of 50 tasks at 3am my time and have results by breakfast.&lt;/p&gt;

&lt;h3&gt;
  
  
  Price per task is absurdly low for the right work
&lt;/h3&gt;

&lt;p&gt;Simple tasks (visit a website, find a specific piece of information, paste it back) run $0.20–$0.40 each. More complex tasks (create an account, navigate multi-step flows, take screenshots as proof) run $0.40–$0.60. At these prices, redundancy is cheap — assign three workers to the same task and cross-validate their answers.&lt;/p&gt;
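&lt;p&gt;The cross-validation is simple enough to sketch in a few lines. This is a minimal majority vote under normalization, not the exact logic I run:&lt;/p&gt;

```python
from collections import Counter

def cross_validate(answers):
    """Majority vote across redundant workers.

    Normalizes whitespace and case before counting; returns None
    when no answer is given by at least two workers.
    """
    counts = Counter(a.strip().lower() for a in answers)
    value, hits = counts.most_common(1)[0]
    if hits == 1:
        return None  # no agreement at all: escalate or re-post the task
    return value
```

&lt;p&gt;At roughly $0.30 a task, three-way redundancy costs about $0.90 per validated answer, still far cheaper than doing the work yourself or maintaining browser automation.&lt;/p&gt;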

&lt;h3&gt;
  
  
  Worker quality varies wildly, and that's fine
&lt;/h3&gt;

&lt;p&gt;Some workers are meticulous. Some paste garbage. The key insight is that &lt;strong&gt;quality control is a data problem, not a people problem.&lt;/strong&gt; Track worker IDs across tasks. Build a quality score. Workers who consistently deliver good results get routed more work. Workers who submit garbage get excluded.&lt;/p&gt;
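&lt;p&gt;A minimal version of that quality score (hypothetical field names, with Laplace smoothing so a brand-new worker isn't condemned by a single bad task) might look like:&lt;/p&gt;

```python
def quality_score(history):
    """history: list of (worker_id, accepted) pairs from past ratings.

    Returns a Laplace-smoothed acceptance rate per worker, so that
    one rejection doesn't zero out a worker with little history.
    """
    stats = {}
    for worker_id, accepted in history:
        seen, good = stats.get(worker_id, (0, 0))
        stats[worker_id] = (seen + 1, good + int(accepted))
    return {w: (good + 1) / (seen + 2) for w, (seen, good) in stats.items()}
```

&lt;p&gt;Route new campaigns preferentially to the top of this ranking and the tier list above falls out on its own.&lt;/p&gt;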

&lt;p&gt;After a few hundred tasks, I had a clear tier list:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tier&lt;/th&gt;
&lt;th&gt;Workers&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hire&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~50&lt;/td&gt;
&lt;td&gt;Consistently accurate, follows instructions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Neutral&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~200&lt;/td&gt;
&lt;td&gt;Variable quality, acceptable for simple tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Exclude&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Submitted fake data, duplicated others' work&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The exclude list is tiny. Most people do honest work when the task is clear and the pay is fair.&lt;/p&gt;

&lt;h3&gt;
  
  
  Per-task instructions are everything
&lt;/h3&gt;

&lt;p&gt;My first campaign used generic instructions. Results were noisy — 30% of workers completed the wrong variant of the task because they picked whichever looked easiest. When I switched to unique, specific instructions per task (each worker gets exactly one assignment with step-by-step directions), accuracy jumped dramatically.&lt;/p&gt;

&lt;p&gt;The AI generates these per-task instructions. It knows what each task requires, formats the instructions with the right URLs and field names, and submits them as template variables in the campaign creation call. The human gets a clear, unambiguous task. The AI gets a structured result back.&lt;/p&gt;

&lt;h3&gt;
  
  
  Escape hatches prevent wasted money
&lt;/h3&gt;

&lt;p&gt;Not every task is completable. Sometimes the website requires a credit card. Sometimes the information doesn't exist. Workers need a clean way to say "I can't do this" without getting penalized.&lt;/p&gt;

&lt;p&gt;I added explicit escape hatch options to every task: "REQUIRES_CC" and "NOT_AVAILABLE." Workers who correctly identify impossible tasks get rated the same as workers who complete possible ones. This sounds small but it changed the economics — I stopped paying workers to waste time on dead ends, and I stopped rejecting honest workers for reporting real blockers.&lt;/p&gt;
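&lt;p&gt;In code, the escape hatch is just a triage step before validation. A hedged sketch (the sentinel strings match the ones above; everything else is illustrative):&lt;/p&gt;

```python
# Sentinel answers a worker can submit instead of completing the task.
ESCAPE_HATCHES = {"REQUIRES_CC", "NOT_AVAILABLE"}

def triage(submission):
    """Route a raw submission before validation.

    Escape hatches count as honest, payable work; empty submissions
    are rejected; everything else goes on to cross-validation.
    """
    answer = submission.strip()
    if answer in ESCAPE_HATCHES:
        return ("accept", answer)
    if answer == "":
        return ("reject", "empty submission")
    return ("validate", answer)
```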

&lt;h3&gt;
  
  
  The AI handles the whole lifecycle
&lt;/h3&gt;

&lt;p&gt;Here's what a typical batch looks like from the AI's perspective:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Select tasks&lt;/strong&gt; — query a database for items that need human work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate instructions&lt;/strong&gt; — create per-task directions from templates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create campaign&lt;/strong&gt; — call the microtask platform's API via MCP, submit all tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wait&lt;/strong&gt; — sleep, check back periodically (also an MCP tool)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pull results&lt;/strong&gt; — list all submissions, parse structured answers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validate&lt;/strong&gt; — test each result against a known-good source (API call, database lookup, HTTP request)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate workers&lt;/strong&gt; — accept valid submissions, reject garbage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store results&lt;/strong&gt; — persist validated data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update state&lt;/strong&gt; — mark items as complete in the database&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every step is an MCP tool call. The AI doesn't need a human operator to run this loop. It dispatches to humans, validates their work, manages quality, and continues autonomously.&lt;/p&gt;
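&lt;p&gt;The nine steps above compress into a short loop. This is a simplified sketch: the tool names are hypothetical stand-ins for whatever your platform bundle exposes, and &lt;code&gt;mcp_call&lt;/code&gt; is the generic tool-call interface:&lt;/p&gt;

```python
def run_batch(mcp_call, pending_items, build_instructions, validate):
    """One pass of the batch lifecycle. Tool names are illustrative."""
    # Steps 1-2: select items and generate per-task instructions.
    tasks = [build_instructions(item) for item in pending_items]
    # Step 3: create the campaign on the microtask platform.
    campaign = mcp_call("create_campaign", {"tasks": tasks})
    # Steps 4-5: (after waiting) pull all submissions.
    subs = mcp_call("list_submissions", {"id": campaign["id"]})
    done = []
    for sub in subs:
        ok = validate(sub)  # step 6: check against a known-good source
        # Step 7: rate the worker based on the validation result.
        mcp_call("rate_submission", {"id": sub["id"], "accept": ok})
        if ok:
            done.append(sub)  # steps 8-9: persist and mark complete
    return done
```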

&lt;h2&gt;
  
  
  The cost math
&lt;/h2&gt;

&lt;p&gt;Over 300+ tasks across multiple campaigns:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total spend&lt;/td&gt;
&lt;td&gt;~$90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average cost per task&lt;/td&gt;
&lt;td&gt;$0.30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tasks completed successfully&lt;/td&gt;
&lt;td&gt;~75%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tasks returned via escape hatch&lt;/td&gt;
&lt;td&gt;~20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Garbage submissions&lt;/td&gt;
&lt;td&gt;~5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unique workers used&lt;/td&gt;
&lt;td&gt;~250&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Workers excluded for quality&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;$90 for 300 tasks that would have taken me days to do manually, or would have required building and maintaining fragile browser automation that breaks every time a website updates its UI.&lt;/p&gt;

&lt;p&gt;The MCP tool definitions are identical to any other integration — same auth, same structured inputs and outputs, same orchestration. The only difference is that the "compute" on the other end is a human brain instead of a server.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What tasks in your pipeline should probably be done by a human instead of an AI? I'd genuinely like to know — drop a comment with your use case.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>productivity</category>
      <category>automation</category>
    </item>
    <item>
      <title>One Company Found 1,600 AI Tools Running Without Approval. Stanford Says This Is Normal.</title>
      <dc:creator>Tony Lewis</dc:creator>
      <pubDate>Fri, 03 Apr 2026 15:14:39 +0000</pubDate>
      <link>https://dev.to/tonylewislondon/one-company-found-1600-ai-tools-running-without-approval-stanford-says-this-is-normal-2gki</link>
      <guid>https://dev.to/tonylewislondon/one-company-found-1600-ai-tools-running-without-approval-stanford-says-this-is-normal-2gki</guid>
      <description>&lt;p&gt;Your company probably has a shadow AI problem right now. You just don't know how big it is.&lt;/p&gt;

&lt;p&gt;Stanford's Digital Economy Lab just published &lt;a href="https://digitaleconomy.stanford.edu/publication/enterprise-ai-playbook/" rel="noopener noreferrer"&gt;The Enterprise AI Playbook&lt;/a&gt; — 116 pages of research covering 51 successful AI deployments across 41 organizations. The team is led by Erik Brynjolfsson, one of the most-cited economists on technology. They interviewed executives and project leads who actually deployed AI at scale.&lt;/p&gt;

&lt;p&gt;One finding hit differently from the rest.&lt;/p&gt;

&lt;h2&gt;
  
  
  1,600 tools. One company.
&lt;/h2&gt;

&lt;p&gt;A semiconductor manufacturer ran a security audit and discovered employees were using &lt;strong&gt;1,500 to 1,600 different AI tools&lt;/strong&gt; across the organization. Not 15. Not 150. Over a thousand.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"When I did the security analysis, we found the company staff are using 1,500 or 1,600 different AI tools. So our objective was building working internal platforms before we go and say you cannot use non-approved tools."&lt;/em&gt;&lt;br&gt;
— Executive, Semiconductor Manufacturer&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And this wasn't a rogue engineering team. Leadership had told people to "use AI" — but provided no approved platform to use. Enthusiasm outpaced governance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers are worse than you think
&lt;/h2&gt;

&lt;p&gt;The Stanford study cites industry data that's hard to ignore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;70-80%&lt;/strong&gt; of employees who use AI at work rely on tools not approved by their employer&lt;/li&gt;
&lt;li&gt;Only &lt;strong&gt;22%&lt;/strong&gt; use exclusively company-provided tools (IBM/Censuswide, 2025)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;57%&lt;/strong&gt; admit to entering sensitive company information into unauthorized AI platforms&lt;/li&gt;
&lt;li&gt;AI-associated data breaches cost organizations an average of &lt;strong&gt;$4.88M per incident&lt;/strong&gt; — the highest of any breach category (IBM Cost of a Data Breach Report)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Shadow AI was explicitly mentioned in 15% of the case studies. The researchers found two distinct patterns:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern A: Enthusiasm outpaces governance.&lt;/strong&gt; The semiconductor company above. Leadership says "use AI," provides no sanctioned tooling, and people find their own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern B: Desperation beats bureaucracy.&lt;/strong&gt; In healthcare, physicians adopted ambient transcription tools without formal approval because hospital procurement processes took too long. Doctors were burned out, the technology existed, and the formal process was measured in quarters while their pain was measured in hours.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"A lot of these doctors have been adopting these technologies without approval or a formal vendor selection process."&lt;/em&gt;&lt;br&gt;
— Executive, Healthcare AI Company&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Shadow AI is a symptom, not the disease
&lt;/h2&gt;

&lt;p&gt;This is the insight that most security teams miss. The Stanford researchers are explicit:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Shadow AI is a symptom that policy moves slower than technology, and it needs to be expected and accounted for.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Blocking access doesn't work. People route around restrictions when the pain is acute enough. The organizations that solved this didn't solve it with stricter policies. They solved it by building internal platforms fast enough that shadow tools became unnecessary.&lt;/p&gt;

&lt;p&gt;The report draws a sharp line:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;When security investment makes sense:&lt;/strong&gt; When it enables use cases that would otherwise be impossible — handling customer financial data, processing healthcare records, managing confidential M&amp;amp;A documents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;When security investment is wasteful:&lt;/strong&gt; When formal processes are too slow and shadow AI fills the gap, creating exactly the security risks the process was designed to prevent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That second point is worth sitting with. The security process designed to prevent data leaks causes data leaks by pushing people to unvetted tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  This maps to what I see building developer tools
&lt;/h2&gt;

&lt;p&gt;I build an MCP platform that connects AI assistants to real services — Stripe, HubSpot, Postgres, Shopify, and about 130 others. The pattern repeats constantly: a developer wants AI to access their company's CRM, IT hasn't approved anything, so they paste customer data into ChatGPT. The data lands in OpenAI's systems with no audit trail. The fix is providing a governed channel — proper auth, scoped permissions, audit logging — so they never need to copy-paste.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually works (from the 51 that succeeded)
&lt;/h2&gt;

&lt;p&gt;Every successful deployment in the Stanford study used an iterative approach. 100%. No waterfall. Start small, learn, expand. Two-thirds had significant failed attempts before their current success.&lt;/p&gt;

&lt;p&gt;The ones that moved fastest shared three accelerators:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Accelerator&lt;/th&gt;
&lt;th&gt;Prevalence&lt;/th&gt;
&lt;th&gt;What it means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Executive sponsorship&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;43%&lt;/td&gt;
&lt;td&gt;Not just approval — active championing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Building on existing infrastructure&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;32%&lt;/td&gt;
&lt;td&gt;Don't start from zero&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;End-user willingness&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;25%&lt;/td&gt;
&lt;td&gt;People who wanted it to work&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For security specifically, the report found that the upfront investment is real but front-loaded. Once the infrastructure exists — data scrubbing pipelines, cloud provider contracts, compliant archival systems — each new AI use case builds on that foundation instead of starting from scratch.&lt;/p&gt;

&lt;p&gt;MIT's NANDA initiative reinforces this: &lt;strong&gt;95% of generative AI pilot programs fail&lt;/strong&gt; to produce measurable financial impact, and the failures come from poor workflow integration — not model quality. Every stalled pilot creates demand pressure that feeds shadow AI adoption.&lt;/p&gt;

&lt;h2&gt;
  
  
  The question for your team
&lt;/h2&gt;

&lt;p&gt;If you ran a security audit of AI tools in your organization right now, what number would you find?&lt;/p&gt;

&lt;p&gt;Not the tools IT approved. The tools people are actually using. The Chrome extensions. The API calls to Claude from personal accounts. The screenshots pasted into ChatGPT. The VS Code extensions that send code to who-knows-where.&lt;/p&gt;

&lt;p&gt;The Stanford researchers' conclusion applies here:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"The window for experimentation is closing. The question is no longer whether AI will deliver value. It is whether organizations can evolve fast enough to capture it."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Shadow AI is what happens when organizations can't evolve fast enough. The tools exist. The demand exists. The only question is whether access happens through governed channels or ungoverned ones.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Data and quotes from &lt;a href="https://digitaleconomy.stanford.edu/publication/enterprise-ai-playbook/" rel="noopener noreferrer"&gt;The Enterprise AI Playbook&lt;/a&gt; by Elisa Pereira, Alvin Wang Graylin, and Erik Brynjolfsson, Stanford Digital Economy Lab, April 2026. 51 case studies, 41 organizations, 7 countries.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>enterprise</category>
      <category>devops</category>
    </item>
    <item>
      <title>Give Your AI Full Access to Your Obsidian Vault — 35 MCP Tools</title>
      <dc:creator>Tony Lewis</dc:creator>
      <pubDate>Fri, 27 Mar 2026 19:48:35 +0000</pubDate>
      <link>https://dev.to/tonylewislondon/give-your-ai-full-access-to-your-obsidian-vault-35-mcp-tools-2dd4</link>
      <guid>https://dev.to/tonylewislondon/give-your-ai-full-access-to-your-obsidian-vault-35-mcp-tools-2dd4</guid>
      <description>&lt;p&gt;Your Obsidian vault is your second brain. Years of notes, project plans, daily journals, meeting records, research — all connected with wikilinks and tags in a carefully organized folder of Markdown files.&lt;/p&gt;

&lt;p&gt;But your AI can't see any of it. You copy-paste snippets into ChatGPT. You describe your note structure to Claude. You manually relay information between your knowledge base and your AI.&lt;/p&gt;

&lt;p&gt;That ends now. &lt;a href="https://mcpbundles.com" rel="noopener noreferrer"&gt;MCPBundles&lt;/a&gt; provides an Obsidian MCP bundle — 35 tools that give your AI full read/write access to your vault through the &lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt;. It's a standard MCP server — the tools show up natively in whatever AI you already use. Claude Desktop, ChatGPT, Cursor, Windsurf, the &lt;code&gt;mcpbundles&lt;/code&gt; CLI, or any MCP-compatible client.&lt;/p&gt;

&lt;h2&gt;
  
  
  35 tools. Your AI becomes an Obsidian power user.
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://mcpbundles.com/bundles/obsidian" rel="noopener noreferrer"&gt;Obsidian MCP bundle&lt;/a&gt; gives your AI the same access to your vault that you have — and then some. It reads notes and gets back structured data — parsed frontmatter, tags, file stats — not just raw text. It browses your folder hierarchy. It searches across every note in your vault with relevance scoring, regex patterns, frontmatter queries, and date ranges.&lt;/p&gt;

&lt;p&gt;It writes. Not just "dump a whole file" writing. Your AI can create notes with full frontmatter, append entries to existing notes, and — this is the part that matters — surgically edit specific sections of a document without touching anything else.&lt;/p&gt;

&lt;p&gt;It sees your images. It analyzes your graph structure. It finds orphaned notes and broken links. It lists every task across your entire vault.&lt;/p&gt;

&lt;h2&gt;
  
  
  Surgical edits change everything
&lt;/h2&gt;

&lt;p&gt;Most integrations that touch files do the same thing: read the whole file, modify it in memory, overwrite the whole file. Fine for code. Terrible for a living document with dozens of sections, tasks, and metadata fields that you don't want an AI to accidentally mangle.&lt;/p&gt;

&lt;p&gt;The Obsidian PATCH operation works differently. Your AI targets a specific heading, block reference, or frontmatter field and inserts, replaces, or appends content just there.&lt;/p&gt;

&lt;p&gt;Say you've got a project plan with milestones, discussion notes, and action items. You tell your AI to add two items to the milestones section. It reads the document map (a lightweight call that returns all headings, block refs, and frontmatter fields), finds the right heading, and appends only to that section. Your discussion notes and action items stay exactly as they were.&lt;/p&gt;

&lt;p&gt;The same precision works for frontmatter. Your AI can flip &lt;code&gt;status: draft&lt;/code&gt; to &lt;code&gt;status: shipped&lt;/code&gt; on a single field without rewriting the YAML block. It can add a new &lt;code&gt;reviewer: Tony&lt;/code&gt; field that didn't exist before. It can target nested heading paths like &lt;code&gt;"Launch Plan::Key Milestones"&lt;/code&gt; to reach the right section in a deeply structured document.&lt;/p&gt;
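&lt;p&gt;Under the hood this maps onto the Local REST API plugin's PATCH interface. A rough sketch of the kind of request the proxy ends up making; check the header names and delimiter against your installed plugin version, and the API key placeholder is obviously yours to fill:&lt;/p&gt;

```python
def patch_request(note_path, heading_path, content):
    """Build a 'surgical append' request against Obsidian's Local REST API.

    Illustrative sketch only: header names follow the community plugin's
    PATCH interface and may differ between plugin versions.
    """
    return {
        "method": "PATCH",
        "url": "https://127.0.0.1:27124/vault/" + note_path,
        "headers": {
            "Authorization": "Bearer YOUR_API_KEY",  # placeholder
            "Operation": "append",
            "Target-Type": "heading",
            "Target": heading_path,  # e.g. "Launch Plan::Key Milestones"
            "Content-Type": "text/markdown",
        },
        "body": content,
    }
```

&lt;p&gt;Only the targeted section is touched; the rest of the file is never rewritten.&lt;/p&gt;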

&lt;p&gt;This is the difference between an AI that can edit text files and an AI that understands Obsidian's document structure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Daily notes are the fast path
&lt;/h2&gt;

&lt;p&gt;The most common Obsidian workflow is appending to today's daily note. A quick thought, a task, a meeting summary — you open today's note and add a line.&lt;/p&gt;

&lt;p&gt;Your AI does the same thing in one call. No need to figure out today's date, construct the filename, check if the file exists. Just "append this to my daily note." It handles the rest, including creating the note if it doesn't exist yet.&lt;/p&gt;

&lt;p&gt;This turns your AI into a persistent journal. Every conversation can leave a trace in your vault. Meeting summaries go into the daily note. Research findings get filed in project notes. Action items land where they belong.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your AI can see your images
&lt;/h2&gt;

&lt;p&gt;When your AI reads an image from your vault, it doesn't get a file path or a base64 blob dumped into a text response. It gets the actual image as an MCP &lt;code&gt;ImageContent&lt;/code&gt; block — the same way a screenshot tool returns visual content.&lt;/p&gt;

&lt;p&gt;That means any AI model with vision — Claude, GPT-4o, Gemini — actually &lt;em&gt;sees&lt;/em&gt; the image. Your AI can describe a diagram, read handwritten notes from a photo, interpret a screenshot, or analyze a chart. All from files already sitting in your vault.&lt;/p&gt;

&lt;p&gt;PNG, JPEG, GIF, WebP, BMP, and SVG are all supported. The image flows through the same proxy tunnel as everything else — nothing gets stored on &lt;a href="https://mcpbundles.com" rel="noopener noreferrer"&gt;MCPBundles&lt;/a&gt; servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Graph analysis and vault maintenance
&lt;/h2&gt;

&lt;p&gt;Obsidian's power comes from connections between notes. Wikilinks turn a folder of Markdown files into a knowledge graph. But maintaining that graph — finding orphans, fixing broken links, understanding relationships — is manual work.&lt;/p&gt;

&lt;p&gt;Your AI traverses your link graph. &lt;strong&gt;Graph neighbors&lt;/strong&gt; does a breadth-first search from any note, finding every connected note within a configurable depth. Direction matters: outgoing links, incoming backlinks, or both.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orphan detection&lt;/strong&gt; scans every note in your vault and identifies the ones with zero incoming wikilinks — notes that nothing else links to. These are the ones you forgot about, the stubs you never connected, the ideas that fell through the cracks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Broken link detection&lt;/strong&gt; scans every wikilink in every note and checks whether the target actually exists. That reference to &lt;code&gt;[[Old Project Name]]&lt;/code&gt; you renamed three months ago? Found.&lt;/p&gt;
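&lt;p&gt;Conceptually, the graph-neighbors tool is a breadth-first search over wikilinks. An illustrative sketch of the outgoing-links direction, not the bundle's actual code:&lt;/p&gt;

```python
from collections import deque
import re

# Capture the target of [[Target]], [[Target|alias]] or [[Target#Section]].
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def graph_neighbors(notes, start, depth):
    """Breadth-first search over outgoing wikilinks.

    notes: {title: markdown_text}. Returns every note reachable from
    `start` within `depth` hops, excluding `start` itself.
    """
    links = {title: set(WIKILINK.findall(body)) for title, body in notes.items()}
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for nxt in links.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen - {start}
```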

&lt;h2&gt;
  
  
  Task management across your entire vault
&lt;/h2&gt;

&lt;p&gt;Obsidian is great for tasks — the checkbox syntax (&lt;code&gt;- [ ] do the thing&lt;/code&gt;) works in any note. But there's no built-in way to see tasks across your entire vault. Your AI can.&lt;/p&gt;

&lt;p&gt;The task listing tool scans every note, extracts every checkbox, and returns them with their source file and line number. Filter by status (open, completed, all), by folder, by tag, or by keyword. Ask your AI "what are my open tasks tagged with Q2?" and get an answer without installing any plugins.&lt;/p&gt;
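&lt;p&gt;Conceptually the scan is a regex pass over every note. Here is an illustrative sketch, not the bundle's implementation:&lt;/p&gt;

```python
import re

# Matches "- [ ] text" and "- [x] text" (also "*" bullets).
TASK = re.compile(r"^\s*[-*] \[( |x|X)\] (.+)$")

def list_tasks(notes, status="all"):
    """notes: {path: markdown_text}.

    Returns (path, line_no, done, text) for each checkbox, optionally
    filtered by status: "all", "open", or "completed".
    """
    out = []
    for path, text in notes.items():
        for line_no, line in enumerate(text.splitlines(), start=1):
            m = TASK.match(line)
            if m:
                done = m.group(1) != " "
                if status == "all" or (status == "open") == (not done):
                    out.append((path, line_no, done, m.group(2)))
    return out
```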

&lt;h2&gt;
  
  
  How the proxy tunnel works
&lt;/h2&gt;

&lt;p&gt;Obsidian runs on your desktop. AI services run in the cloud. The &lt;a href="https://mcpbundles.com/docs/desktop-proxy" rel="noopener noreferrer"&gt;MCPBundles desktop proxy&lt;/a&gt; bridges them with an encrypted tunnel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI → MCPBundles → Proxy Tunnel → Your Desktop → Obsidian (localhost:27124)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your vault data flows through the tunnel in real time. Nothing gets stored on MCPBundles servers. The proxy handles Obsidian's self-signed TLS certificate automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Works with the AI you already use
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://mcpbundles.com" rel="noopener noreferrer"&gt;MCPBundles&lt;/a&gt; is a remote MCP server. You connect it once and the Obsidian tools appear in your AI's tool list — alongside any other bundles you've enabled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude Desktop, ChatGPT, Cursor, Windsurf&lt;/strong&gt; — add the MCPBundles MCP server URL and the tools are there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;mcpbundles&lt;/code&gt; CLI&lt;/strong&gt; — for terminal workflows, scripting, and automation. &lt;code&gt;pip install mcpbundles&lt;/code&gt; and you're set.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Any MCP-compatible client&lt;/strong&gt; — if it speaks MCP, it works. The tools are standard MCP tools with proper schemas, annotations, and content types.&lt;/p&gt;

&lt;h2&gt;
  
  
  Five minutes to set up
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Install the Obsidian plugin&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install the &lt;strong&gt;Local REST API&lt;/strong&gt; community plugin in Obsidian (by Adam Coddington). Enable it and copy the API key from the plugin settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Start the proxy&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;mcpbundles
mcpbundles login
mcpbundles proxy start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Enable the bundle&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="https://mcpbundles.com" rel="noopener noreferrer"&gt;MCPBundles&lt;/a&gt;, enable the &lt;a href="https://mcpbundles.com/bundles/obsidian" rel="noopener noreferrer"&gt;Obsidian bundle&lt;/a&gt;, and paste your API key. The 35 tools are now available to every AI client connected to your MCPBundles server.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this looks like in practice
&lt;/h2&gt;

&lt;p&gt;You're prepping for a team meeting. You tell your AI: "Create meeting notes for the Q2 planning sync with attendees Tony and Sarah, agenda: hiring, roadmap, budget." A fully structured note appears in your vault with frontmatter, sections, and wikilinks to related project notes.&lt;/p&gt;

&lt;p&gt;After the meeting, you tell your AI to append the action items. It doesn't overwrite your agenda — it surgically appends to the action items section.&lt;/p&gt;

&lt;p&gt;Later, you ask your AI to search your vault for everything related to "database migration." It finds four notes across different projects, reads them, and creates a consolidated summary note linking back to the originals.&lt;/p&gt;

&lt;p&gt;You photographed a whiteboard and dropped the image into your vault. Your AI &lt;em&gt;sees&lt;/em&gt; the photo, reads the sticky notes, and creates structured tasks in a new project note.&lt;/p&gt;

&lt;p&gt;Your vault has grown to 500 notes. You ask your AI to run a health check. It finds 23 orphaned notes, 7 broken wikilinks, and 45 open tasks scattered across 12 files. It creates a maintenance summary with links to every issue.&lt;/p&gt;

&lt;p&gt;None of this requires you to leave your AI chat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;mcpbundles
mcpbundles login
mcpbundles proxy start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable the &lt;a href="https://mcpbundles.com/bundles/obsidian" rel="noopener noreferrer"&gt;Obsidian bundle&lt;/a&gt;, add your API key, and start talking to your vault.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://mcpbundles.com" rel="noopener noreferrer"&gt;MCPBundles&lt;/a&gt; is a hosted platform for connecting AI agents to real third-party services via the Model Context Protocol. 60+ bundles, 1900+ tools — Stripe, HubSpot, Postgres, Gmail, Obsidian, and more. &lt;a href="https://mcpbundles.com" rel="noopener noreferrer"&gt;Get started free&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
