<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sam Kennard</title>
    <description>The latest articles on DEV Community by Sam Kennard (@sam_kennard_8d1969bb).</description>
    <link>https://dev.to/sam_kennard_8d1969bb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3593008%2F74d29c11-0423-45f4-886b-4e35d5b520c3.png</url>
      <title>DEV Community: Sam Kennard</title>
      <link>https://dev.to/sam_kennard_8d1969bb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sam_kennard_8d1969bb"/>
    <language>en</language>
    <item>
      <title>Context Engineering Has a Blind Spot</title>
      <dc:creator>Sam Kennard</dc:creator>
      <pubDate>Wed, 18 Mar 2026 10:27:57 +0000</pubDate>
      <link>https://dev.to/sam_kennard_8d1969bb/context-engineering-has-a-blind-spot-1jl6</link>
      <guid>https://dev.to/sam_kennard_8d1969bb/context-engineering-has-a-blind-spot-1jl6</guid>
      <description>&lt;p&gt;The biggest shift in agent design over the past year has been context engineering rather than improved models. Most of the published guidance focuses on codebases, documentation, and structured knowledge bases, and it's good guidance.&lt;/p&gt;

&lt;p&gt;But there's a category of enterprise data that breaks every standard &lt;a href="https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents" rel="noopener noreferrer"&gt;context engineering&lt;/a&gt; pattern, and almost nobody is writing about it: email.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why email is different from everything else
&lt;/h2&gt;

&lt;p&gt;When &lt;a href="https://developers.googleblog.com/architecting-efficient-context-aware-multi-agent-framework-for-production/" rel="noopener noreferrer"&gt;Google's ADK team writes about context engineering&lt;/a&gt;, they describe a pipeline: ingest data, compile a view, serve it to the model. &lt;a href="https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents" rel="noopener noreferrer"&gt;When Anthropic describes it&lt;/a&gt;, they talk about curating tokens for maximum utility. &lt;/p&gt;

&lt;p&gt;Both assume the source data has some structural integrity to work with, because a codebase has files, functions, and imports, a knowledge base has documents with authors and dates, and even Slack has channels and timestamps.&lt;/p&gt;

&lt;p&gt;Email has none of that. A 20-reply business thread contains the same quoted text duplicated up to 20 times, and every email client quotes it differently: Gmail uses &amp;gt; prefixes, Outlook uses indentation, Apple Mail wraps replies in blockquote HTML.&lt;/p&gt;
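&lt;p&gt;To make that concrete, here is a minimal sketch of per-client quote stripping. Every rule and name in it is an illustrative heuristic, not a real parser, which is exactly why these heuristics stay brittle in production:&lt;/p&gt;

```python
import re

def strip_quoted(body):
    """Drop quoted history from one message body.
    Illustrative heuristics only: Gmail-style ">" prefixes, Outlook-style
    header blocks, and "On ... wrote:" attribution lines."""
    kept = []
    for line in body.splitlines():
        if line.lstrip().startswith(">"):                       # Gmail / plain-text quoting
            continue
        if re.match(r"^\s*(From|Sent|To|Subject):\s", line):    # Outlook header block
            break                                               # everything below is the quoted copy
        if re.match(r"^On .{0,80} wrote:\s*$", line):           # attribution line
            continue
        kept.append(line)
    return "\n".join(kept).strip()
```

Apple Mail's blockquote HTML never reaches this function at all; it has to be handled at the HTML-parsing stage, which is a separate set of heuristics again.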

&lt;p&gt;Forwarded chains collapse three separate conversations into a single message body with no structural separator. Inline replies break every deduplication pattern because someone typed new content between quoted blocks. &lt;/p&gt;

&lt;p&gt;And the most critical information, the PDF with the actual contract terms or the invoice that needs reconciling, is sitting in an attachment that most context pipelines never touch.&lt;/p&gt;

&lt;p&gt;This is where a huge amount of enterprise context actually lives, not in the CRM fields or the wiki, but in the messy, unstructured communication data where business actually happens.&lt;/p&gt;

&lt;h2&gt;
  
  
  What breaks at enterprise scale
&lt;/h2&gt;

&lt;p&gt;The reason this matters isn't that one agent can't parse one email thread. It's what happens when you try to run context engineering across an organization's entire communication history. &lt;/p&gt;

&lt;p&gt;A finance team closing the books at month-end needs to reconcile invoices against purchase order approvals across hundreds of vendors. The invoices arrive as PDF attachments and the approvals live in email threads scattered across 15 people's inboxes, often buried in a reply that says "approved, go ahead" with no formal record in any system. &lt;/p&gt;

&lt;p&gt;An agent running multi-hop search over this data makes one retrieval call, gets a fragment, reformulates, searches again, and by hop 5 it's burning 40,000 tokens on a single vendor reconciliation. &lt;/p&gt;

&lt;p&gt;Multiply that by 300 vendors and you've spent more on token costs than the finance team's monthly payroll, with accuracy degrading on every query because each hop compounds the noise from the previous one.&lt;/p&gt;

&lt;p&gt;A compliance team monitoring regulatory commitments has to scan 50,000 threads per month for obligations that were agreed to in email and never entered into a tracking system. The commitments aren't labeled; they're buried in sentences like "we can do that by Q3", dropped mid-conversation in a 30-reply thread whose first 20 messages were about something else entirely. &lt;/p&gt;

&lt;p&gt;A multi-hop agent searching for "regulatory commitments" returns threads that mention regulations, not threads that contain actual commitments. The semantic gap between what the agent searches for and what the data looks like structurally is exactly where context engineering is supposed to help, and where standard approaches fail on email.&lt;/p&gt;

&lt;p&gt;A sales organization running deal risk scoring across 200 active opportunities needs to detect signals that only exist in email patterns: the champion going quiet over two weeks, procurement entering a thread where they weren't before, reply latency increasing, tone shifting from collaborative to transactional. &lt;/p&gt;

&lt;p&gt;None of this shows up in the CRM, which says the deal is "Stage 3, on track" while the email thread says the deal is dying. An agent that can't reason over the full communication history with participant attribution, temporal ordering, and cross-thread awareness will miss every one of these signals, and miss them confidently.&lt;/p&gt;
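&lt;p&gt;One of those signals can at least be made concrete. As a hedged sketch (the function and its thresholds are hypothetical, not a real scoring system), rising reply latency across a thread might be computed like this:&lt;/p&gt;

```python
from datetime import datetime, timedelta

def reply_latency_trend(timestamps):
    """Ratio of the average reply gap in the later half of a thread to the
    earlier half. A ratio well above 1.0 is one hypothetical
    "thread going quiet" signal; the split point is illustrative."""
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) > 1:
        half = len(gaps) // 2
        early = sum(gaps[:half]) / half
        late = sum(gaps[half:]) / (len(gaps) - half)
        return late / early if early else None
    return None
```

The point is not the arithmetic; it's that the input (ordered per-thread timestamps with attribution) only exists if the thread was reconstructed first.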

&lt;h2&gt;
  
  
  The architectural gap
&lt;/h2&gt;

&lt;p&gt;Standard context engineering assumes you can compile a useful view of your data at query time. For email at enterprise scale, this doesn't hold because the preprocessing required to make email useful is too expensive and too complex to do per-query.&lt;/p&gt;

&lt;p&gt;Thread reconstruction, quoted text deduplication, participant attribution, attachment extraction, temporal ordering across threads that reference each other: this work needs to happen once at index time, not repeatedly inside an agent loop. &lt;/p&gt;

&lt;p&gt;When you do it at index time, the agent gets pre-assembled context in a single retrieval call where latency is predictable, cost is fixed, and the same query returns the same result every time, which is the only way downstream automation actually works.&lt;/p&gt;

&lt;p&gt;When you try to do it at query time through multi-hop search, you get variable latency (10-60 seconds depending on thread complexity), variable cost (scales with how messy the data is, which means your hardest queries are your most expensive), and variable accuracy (each hop builds on the previous hop's interpretation, and the error compounds). &lt;/p&gt;

&lt;p&gt;The agent is simultaneously trying to reconstruct the conversation, figure out who said what, determine what's current versus what's quoted history, and answer the actual question. That's four jobs where each one is hard enough on its own.&lt;/p&gt;
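&lt;p&gt;The split between the two approaches can be sketched in a few lines. Everything below is illustrative pseudostructure, not a real pipeline; the point is that the expensive work runs once at ingest and query time collapses to a lookup:&lt;/p&gt;

```python
from dataclasses import dataclass

# All names here (ThreadContext, index_thread, the message fields) are
# illustrative, not a real API.
@dataclass
class ThreadContext:
    thread_id: str
    participants: list   # deduplicated, stable
    messages: list       # new content only, in temporal order

INDEX = {}

def index_thread(thread_id, raw_messages):
    """Runs once at ingest time: dedupe, attribute, and order the thread."""
    cleaned = sorted(
        ({"author": m["from"], "text": m["new_text"], "ts": m["ts"]}
         for m in raw_messages),
        key=lambda m: m["ts"],
    )
    INDEX[thread_id] = ThreadContext(
        thread_id=thread_id,
        participants=sorted({m["author"] for m in cleaned}),
        messages=cleaned,
    )

def retrieve(thread_id):
    """Query time is a single lookup: predictable latency, fixed cost."""
    return INDEX[thread_id]
```

Under this shape, the agent loop never sees raw email; it only ever sees pre-assembled context, so the same query returns the same result every time.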

&lt;h2&gt;
  
  
  What index-time context engineering looks like
&lt;/h2&gt;

&lt;p&gt;The work that makes email usable for agents comes down to a few things that need to happen once, not per-query: reconstruct threads, strip quoted text, attribute who said what, and actually read attachments. &lt;/p&gt;

&lt;p&gt;Then index all of it with semantic and structural metadata, scoped per-user so one person's agent can't surface another person's data.&lt;/p&gt;

&lt;p&gt;Most teams skip this and go straight to multi-hop search, which works in demos and breaks in production at exactly the scale where the business case justifies the investment.&lt;/p&gt;

&lt;p&gt;We build this infrastructure at iGPT: a developer sends one API call and gets back structured, reasoning-ready context with source citations, with no loops, retries, or per-query preprocessing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;igptai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;IGPT&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;IGPT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;recall&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Reconcile Q1 invoices from Apex Logistics, flag PO mismatches&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;quality&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cef-1-normal&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;output_format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Structured JSON: vendor, invoice amounts, PO deltas, source email citations
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The industry is right to focus on context, but most implementations assume the data is already usable, and email isn't. &lt;/p&gt;

&lt;p&gt;If your agent is reasoning over email without fixing that first, it's not failing because the model is weak; it's failing because the context never made sense in the first place.&lt;/p&gt;

&lt;p&gt;Docs: &lt;a href="https://docs.igpt.ai" rel="noopener noreferrer"&gt;docs.igpt.ai&lt;/a&gt;&lt;br&gt;
SDK: &lt;code&gt;pip install igptai&lt;/code&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>rag</category>
    </item>
    <item>
      <title>Why email breaks every RAG pipeline</title>
      <dc:creator>Sam Kennard</dc:creator>
      <pubDate>Thu, 12 Feb 2026 13:45:14 +0000</pubDate>
      <link>https://dev.to/sam_kennard_8d1969bb/why-email-breaks-every-rag-pipeline-4i9g</link>
      <guid>https://dev.to/sam_kennard_8d1969bb/why-email-breaks-every-rag-pipeline-4i9g</guid>
      <description>&lt;p&gt;If you've built RAG over email, you know the feeling: everything works on PDFs and wiki pages, and then you point the same pipeline at someone's inbox and the whole thing quietly falls apart. Not with errors, but with bad retrieval you keep trying to fix with better chunking and bigger context windows until you realize the problem was never the retrieval.&lt;/p&gt;

&lt;p&gt;Email threads aren't documents. Every standard RAG approach treats them like they are.&lt;/p&gt;

&lt;h2&gt;
  
  
  The standard approach
&lt;/h2&gt;

&lt;p&gt;Connect to Gmail API, pull messages, chunk, embed, retrieve top-k.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service = build("gmail", "v1", credentials=creds)
results = service.users().messages().list(userId="me", maxResults=50).execute()

raw_emails = []
for msg in results.get("messages", []):
    full = service.users().messages().get(
        userId="me", id=msg["id"], format="full"
    ).execute()
    raw_emails.append({
        "id": msg["id"],
        "threadId": full.get("threadId"),
        "body": get_body_text(full.get("payload", {})),
        "headers": {
            h["name"]: h["value"]
            for h in full.get("payload", {}).get("headers", [])
        }
    })

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = []
for email in raw_emails:
    for split in splitter.split_text(email["body"]):
        chunks.append({"text": split, "metadata": {"thread_id": email["threadId"]}})

vectorstore = Chroma.from_texts(
    [c["text"] for c in chunks],
    OpenAIEmbeddings(),
    metadatas=[c["metadata"] for c in chunks]
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works on static documents because each chunk is self-contained and relationships between chunks are semantic. Email has neither property.&lt;/p&gt;

&lt;h2&gt;
  
  
  6 ways email breaks this
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1. Quoted text duplication
&lt;/h2&gt;

&lt;p&gt;In a 12-message thread, the Gmail API returns every reply with the full quoted chain below it. The original message appears 12 times. When you embed this, the oldest messages and signature blocks dominate the embedding space because they're repeated in every chunk, and the model reads repetition as reinforcement. Your most recent, most relevant messages get buried.&lt;/p&gt;

&lt;p&gt;The fix isn't regex because people reply inline, edit quotes, and forward with additions mid-quote.&lt;/p&gt;
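&lt;p&gt;A thread-level alternative is to deduplicate against everything already seen earlier in the thread rather than pattern-match quote markers. A deliberately simplified sketch (real inline replies need fuzzier matching than exact line equality):&lt;/p&gt;

```python
def dedupe_thread(messages):
    """Keep only lines not already seen earlier in the thread.
    messages is oldest-first; exact-match keys are a deliberate
    simplification (edited quotes need fuzzy matching)."""
    seen = set()
    out = []
    for body in messages:
        fresh = []
        for line in body.splitlines():
            key = line.strip().lstrip("> ").strip()
            if key and key in seen:
                continue          # quoted repeat of an earlier message
            if key:
                seen.add(key)
            fresh.append(line)
        out.append("\n".join(fresh))
    return out
```

Because it keys on content rather than quote syntax, this survives Gmail, Outlook, and Apple Mail formatting alike, and inline replies come through as the only unseen lines.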

&lt;h2&gt;
  
  
  2. Thread structure vanishes
&lt;/h2&gt;

&lt;p&gt;Email threads are conversation trees, not linear sequences. Message 7 might reply to message 3, not message 6. When you embed, that structure disappears. Ask "who approved this" and retrieval surfaces someone saying "looks good" when they were actually being quoted by someone disagreeing with them.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. CC vs. authorship confusion
&lt;/h2&gt;

&lt;p&gt;Your model sees "David" in the CC line and "David's proposal" in the body and has no structural way to distinguish "David was informed" from "David authored this." Extraction pipelines end up confidently attributing work to people who never wrote a single reply because their names appeared in CC fields.&lt;/p&gt;
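&lt;p&gt;The fix is to make roles structured data before the model ever sees the text. A minimal sketch, assuming messages carry their parsed headers as plain name-to-value strings:&lt;/p&gt;

```python
def participant_roles(message):
    """Emit author vs recipient vs cc as structured data, so "David was
    informed" never gets read as "David wrote this". The header shape
    here is an assumption, not a real schema."""
    headers = message["headers"]
    roles = {headers.get("From", "").strip(): "author"}
    for addr in headers.get("To", "").split(","):
        roles.setdefault(addr.strip(), "recipient")
    for addr in headers.get("Cc", "").split(","):
        roles.setdefault(addr.strip(), "cc")
    roles.pop("", None)   # drop the empty key when a header is missing
    return roles
```

Downstream extraction then filters on `role == "author"` instead of trusting the model to infer authorship from flattened text.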

&lt;h2&gt;
  
  
  4. Forwarded thread forks
&lt;/h2&gt;

&lt;p&gt;Someone forwards a thread to a new group. Now you have two conversations that share history but diverged, and Gmail treats them as separate threads with no link between them. Ask "what did the team decide" and retrieval pulls from either branch without knowing they're contradictory.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Signatures and boilerplate at scale
&lt;/h2&gt;

&lt;p&gt;Across a real organization: 30+ signature formats, compliance disclaimers in multiple languages, confidentiality notices longer than the actual messages. A meaningful portion of your token budget goes to this noise while the model treats it as content worth reasoning over.&lt;/p&gt;
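&lt;p&gt;Signature detection doesn't need per-format rules; frequency does most of the work. A hedged sketch (the tail size and repeat threshold are made-up defaults): lines that recur verbatim at the end of many messages are almost certainly boilerplate:&lt;/p&gt;

```python
from collections import Counter

def detect_signature(bodies, tail=6, min_repeats=3):
    """Count lines appearing in the last `tail` lines of each body;
    anything repeated `min_repeats` or more times is treated as
    signature/disclaimer boilerplate to strip before embedding."""
    counts = Counter()
    for body in bodies:
        for line in body.splitlines()[-tail:]:
            if line.strip():
                counts[line.strip()] += 1
    return {line for line, n in counts.items() if n >= min_repeats}
```

Run it per sender and the 30+ signature formats stop mattering: each sender's own boilerplate surfaces from their own message history.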

&lt;h2&gt;
  
  
  6. Cross-thread temporal reasoning
&lt;/h2&gt;

&lt;p&gt;"Let's revisit this next quarter" in January. "The timeline we discussed" in March. Completely different words for the same thing. The connection is temporal, not semantic, so vector similarity can't find it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the usual fixes don't work
&lt;/h2&gt;

&lt;p&gt;All six failures happen upstream of the model. Better models reason more confidently over the same broken input. Bigger context windows stuff in more duplicated text you're paying for. &lt;/p&gt;

&lt;p&gt;Better prompts ask the model to reconstruct thread structure, deduplicate quotes, resolve attribution, and track temporal references on every single query. You're pushing infrastructure problems into the prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fix: treat email as a graph, not a document
&lt;/h2&gt;

&lt;p&gt;Email threads are conversational graphs. Each message is a node, replies create edges, participants have roles that change over time, and decisions create cross-thread edges. The pipeline needs six layers between raw email and your model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────┐
│                   YOUR APPLICATION                   │
├─────────────────────────────────────────────────────┤
│  Layer 6: Hybrid Retrieval                           │
│  semantic search + metadata filters + graph traversal│
├─────────────────────────────────────────────────────┤
│  Layer 5: Cross-Thread Linking                       │
│  participant overlap, topic refs, temporal proximity │
├─────────────────────────────────────────────────────┤
│  Layer 4: Structured Metadata Extraction             │
│  decisions, tasks, owners, deadlines, sentiment      │
├─────────────────────────────────────────────────────┤
│  Layer 3: Participant &amp;amp; Role Tracking                │
│  From vs To vs CC, role changes across thread        │
├─────────────────────────────────────────────────────┤
│  Layer 2: Content Deduplication                      │
│  quoted text removal, inline edit preservation       │
├─────────────────────────────────────────────────────┤
│  Layer 1: Thread Reconstruction                      │
│  In-Reply-To / References headers → conversation tree│
├─────────────────────────────────────────────────────┤
│                  RAW EMAIL (Gmail API / IMAP)         │
└─────────────────────────────────────────────────────┘

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Layer 1 is where most people start and stop. Map In-Reply-To headers to build the conversation tree:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from collections import defaultdict

def build_thread_tree(messages):
    by_message_id = {}
    children = defaultdict(list)
    roots = []

    # first pass: register every message, so a reply that arrives
    # before its parent still gets linked correctly
    for msg in messages:
        msg_id = msg["headers"].get("Message-ID", "")
        by_message_id[msg_id] = msg

    # second pass: attach each message to its parent, or treat it as a root
    for msg in messages:
        msg_id = msg["headers"].get("Message-ID", "")
        reply_to = msg["headers"].get("In-Reply-To", "")
        if reply_to and reply_to in by_message_id:
            children[reply_to].append(msg_id)
        else:
            roots.append(msg_id)

    return roots, children, by_message_id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Layers 2-3 handle deduplication and participant roles. Both are straightforward in concept but brutal in practice because email clients format quotes differently, people edit them without marking changes, and the distinction between "David authored this" and "David was CC'd" needs to be structured data, not something the model infers from flattened text.&lt;/p&gt;

&lt;p&gt;Layers 4-6 extract structured metadata (decisions, tasks, owners, deadlines), build cross-thread connections, and combine semantic search with metadata filtering and graph traversal so you can say "find messages from Sarah about Q2 budget where a decision was made" and have the retrieval handle filtering before semantic matching.&lt;/p&gt;
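&lt;p&gt;The retrieval pattern in Layer 6 is simple to state even if the metadata extraction behind it isn't. A minimal sketch, with an illustrative chunk schema: apply the structured filters first, then rank only the survivors semantically:&lt;/p&gt;

```python
def hybrid_search(chunks, query_vec, filters, k=5):
    """Structured filters first, semantic ranking second, so "from Sarah,
    decision made" is a cheap metadata cut before any vector math.
    The chunk schema here is an assumption, not a real index format."""
    def cosine(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        norm = (sum(x * x for x in u) * sum(x * x for x in v)) ** 0.5
        return dot / norm if norm else 0.0

    survivors = [
        c for c in chunks
        if all(c["metadata"].get(key) == val for key, val in filters.items())
    ]
    ranked = sorted(survivors, key=lambda c: cosine(c["vector"], query_vec),
                    reverse=True)
    return ranked[:k]
```

The ordering matters: filtering after semantic search means the top-k slots get wasted on chunks the filter would have discarded anyway.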

&lt;p&gt;This is what we built &lt;a href="https://www.igpt.ai/" rel="noopener noreferrer"&gt;iGPT&lt;/a&gt; to handle. All six layers, one API call. &lt;a href="https://docs.igpt.ai/" rel="noopener noreferrer"&gt;Docs&lt;/a&gt; here.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the difference looks like
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Standard RAG:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;results = vectorstore.similarity_search("What are the open action items?", k=5)
# - 2 chunks dominated by signature blocks
# - 1 chunk from a quoted reply (wrong attribution)
# - 1 relevant chunk buried in noise
# - 1 chunk from an unrelated thread (similar keywords)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Through iGPT:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from igptai import IGPT

client = IGPT(api_key="your-api-key", user="user-123")
response = client.recall.ask(
    input="What are the open action items from this week?",
    quality="cef-1-normal"
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Seven source documents referenced, structured data with owners, dates, and attribution. No signatures, no duplicated quotes, no misattributed CC recipients. The infrastructure handled it before the model saw anything.&lt;/p&gt;

&lt;p&gt;Streaming shows the pipeline stages in real time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for event in client.recall.ask(
    input="Who committed to what in the last 7 days?",
    stream=True,
    quality="cef-1-normal"
):
    if "delta" in event:
        print(event["delta"]["output"], end="", flush=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Sources referenced: 22
Here is a summary of commitments made in the last 7 days...

| Date       | Person   | Commitment                                      |
|------------|----------|-------------------------------------------------|
| 2026-02-09 | Jane Doe | Proposed new campaign, requested alignment sync |
| 2026-02-10 | John Doe | Reviewing blog and one-pager, final versions    |

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works the same in &lt;a href="https://www.npmjs.com/package/igptai" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import IGPT from "igptai" and the API is identical.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install igptai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from igptai import IGPT

client = IGPT(api_key="your-key", user="your-user-id")

auth = client.connectors.authorize(
    service="google",
    scope="email",
    redirect_uri="https://your-app.com/callback"
)

datasources = client.datasources.list()

response = client.recall.ask(
    input="What decisions were made this week and who owns next steps?"
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't want to set up OAuth just to see it work? The playground lets you connect your inbox and run queries in about five minutes, no code required.&lt;br&gt;
Links:&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://www.igpt.ai/" rel="noopener noreferrer"&gt;iGPT Website&lt;/a&gt;&lt;br&gt;
📖 &lt;a href="https://docs.igpt.ai/" rel="noopener noreferrer"&gt;API Documentation&lt;/a&gt;&lt;br&gt;
🐍 &lt;a href="https://github.com/igptai/igptai-python" rel="noopener noreferrer"&gt;Python SDK (PyPI)&lt;/a&gt;&lt;br&gt;
📦 &lt;a href="https://github.com/igptai/igptai-node" rel="noopener noreferrer"&gt;Node.js SDK (npm)&lt;/a&gt;&lt;br&gt;
🛝 &lt;a href="https://igpt.ai/hub/playground/" rel="noopener noreferrer"&gt;Playground&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>api</category>
      <category>email</category>
    </item>
  </channel>
</rss>
