<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: connerlambden</title>
    <description>The latest articles on DEV Community by connerlambden (@connerlambden).</description>
    <link>https://dev.to/connerlambden</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3776793%2F2b4714ed-0e1b-407a-a636-4236e644b524.jpeg</url>
      <title>DEV Community: connerlambden</title>
      <link>https://dev.to/connerlambden</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/connerlambden"/>
    <language>en</language>
    <item>
      <title>Detecting AI-authored news at corpus scale from a single MCP call</title>
      <dc:creator>connerlambden</dc:creator>
      <pubDate>Tue, 21 Apr 2026 15:04:16 +0000</pubDate>
      <link>https://dev.to/connerlambden/detecting-ai-authored-news-at-corpus-scale-from-a-single-mcp-call-1n2o</link>
      <guid>https://dev.to/connerlambden/detecting-ai-authored-news-at-corpus-scale-from-a-single-mcp-call-1n2o</guid>
      <description>&lt;p&gt;There's a growing crisis inside news feeds: AI-generated content, slop, and opinion-masked-as-reporting are all appearing faster than human review systems can flag them. Most "AI detection" tools work per-document and return a single binary probability with no supporting evidence. That's not enough for someone who has to actually &lt;em&gt;decide&lt;/em&gt; what to read, publish, or cite.&lt;/p&gt;

&lt;p&gt;I put the opposite approach behind an MCP server - a continuous, corpus-scale, per-article &lt;code&gt;ai_authorship_probability&lt;/code&gt; score, plus 30 other framing dimensions, all queryable in plain English from Claude or Cursor.&lt;/p&gt;

&lt;h2&gt;
  
  
  The core dimension
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;Helium MCP&lt;/a&gt; scores every article it ingests across 3.2M+ articles and 5,000+ sources on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ai_authorship_probability&lt;/code&gt; - explicit model estimate that the article was LLM-generated&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;credibility&lt;/code&gt; - sourcing density, named-source ratio, evidence-citation patterns&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sensationalism&lt;/code&gt; - headline-vs-body amplification, superlative density&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;overconfidence&lt;/code&gt; - hedge-language vs declarative-certainty ratio&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;opinion_vs_fact&lt;/code&gt; - opinion language vs declarative-fact language ratio&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;oversimplification&lt;/code&gt; - single-cause reduction of complex causation&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;begging_the_question&lt;/code&gt; - conclusion assumed in the framing&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scapegoating&lt;/code&gt; - actor-blaming vs structural-explanation patterns&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;covering_responses&lt;/code&gt; - whether the criticized parties get space to respond&lt;/li&gt;
&lt;li&gt;...22 more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The point is not that any one score is a verdict. The point is that you can now &lt;em&gt;triangulate&lt;/em&gt;. A high AI-authorship probability paired with low sourcing density and high sensationalism is a very different signal from high AI-authorship in a meticulously-sourced explainer - and a scoring pipeline that only returns one number cannot tell them apart.&lt;/p&gt;
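&lt;p&gt;As a sketch of what triangulation can look like on the agent side - with illustrative thresholds and a made-up score dict, since the real values come back from the MCP tools - a triage rule over three of the dimensions above:&lt;/p&gt;

```python
# Hypothetical triage rule over three Helium dimensions.
# Field names match the schema described in this post; the thresholds
# and return labels are illustrative, not Helium's actual cutoffs.

def triage(article):
    ai = article["ai_authorship_probability"]
    cred = article["credibility"]
    sens = article["sensationalism"]
    if ai > 0.8 and sens > 0.7 and 0.3 > cred:
        return "likely slop: low sourcing, high amplification"
    if ai > 0.8 and cred > 0.7:
        return "possible human-edited AI draft: verify sourcing"
    return "no strong multi-dimension signal"
```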

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# In Cursor or Claude Desktop MCP config:&lt;/span&gt;
npx mcp-remote https://heliumtrades.com/mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Free. No signup. No API key. Remote server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Asking the question
&lt;/h2&gt;

&lt;p&gt;In Claude, I asked:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Using Helium, show me the most AI-suspicious recent articles across the corpus, and cross-reference against their credibility and sensationalism scores.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude called &lt;code&gt;search_articles&lt;/code&gt;, filtered by the top decile of &lt;code&gt;ai_authorship_probability&lt;/code&gt;, joined against per-source metadata, and returned a ranked list with the four relevant scores side-by-side. The top of the list was dominated by low-credibility, high-sensationalism sources - which is what you'd expect. But the &lt;em&gt;more interesting&lt;/em&gt; result was a small cohort in the middle of the pack: high AI-authorship, high credibility, moderate sensationalism. Those are almost certainly human-edited AI drafts - the category that a single-axis detector would miss entirely.&lt;/p&gt;
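&lt;p&gt;The reduction Claude performed maps to a few lines of ordinary code. A minimal sketch (the article dicts and field names are assumed for illustration; in practice they come from the &lt;code&gt;search_articles&lt;/code&gt; response):&lt;/p&gt;

```python
# Keep the top decile by ai_authorship_probability and surface the
# other scores side by side, mirroring the ranked list described above.

def top_decile(articles):
    ranked = sorted(articles, key=lambda a: a["ai_authorship_probability"], reverse=True)
    cutoff = max(1, len(ranked) // 10)  # top 10%, at least one row
    return [
        (a["url"], a["ai_authorship_probability"], a["credibility"], a["sensationalism"])
        for a in ranked[:cutoff]
    ]
```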

&lt;h2&gt;
  
  
  Why 31 dimensions, not one
&lt;/h2&gt;

&lt;p&gt;The one-number-detector fails in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;False negatives from human editing.&lt;/strong&gt; A human editor can smooth an LLM draft enough to drop a binary detector score below threshold, but framing artifacts (overconfidence patterns, opinion-vs-fact ratio, coverage of responses) survive. A multi-dimensional signal catches them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;False positives from LLM-like human writing.&lt;/strong&gt; Academic-style prose is often flagged as AI-generated by single-axis detectors. But the sourcing-density and citation-evidence axes in a 31-dim score are the difference between a grad student and an LLM - and they show up cleanly in the schema.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Example use cases
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Newsroom standards editors&lt;/strong&gt; - a daily cron job that flags high-AI-authorship articles in your freelance submissions bucket, weighted by credibility score, before an editor ever opens them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact-checkers&lt;/strong&gt; - when triaging a viral claim, pull the source's recent-window scores on &lt;code&gt;ai_authorship_probability&lt;/code&gt;, &lt;code&gt;credibility&lt;/code&gt;, and &lt;code&gt;overconfidence&lt;/code&gt;. A source that has drifted toward AI authorship and away from sourced evidence is a different trust situation than one that has been stable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Journalism-school instructors&lt;/strong&gt; - assign students to pull 10 articles from a single publication across a decade, graph the &lt;code&gt;ai_authorship_probability&lt;/code&gt; and &lt;code&gt;credibility&lt;/code&gt; trend lines, and write a piece on what changed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-safety researchers&lt;/strong&gt; - the full 31-dimension scored corpus is a ready-made dataset for studying how LLM-generated news content is spreading through mainstream feeds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Live example
&lt;/h2&gt;

&lt;p&gt;Here's a real query I ran in Claude:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Helium: Show me how AI-authorship probability has moved for tech-news sources over the last year, correlated with credibility.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude used the MCP tools, ran the query across the Helium corpus, and returned a tidy summary showing that several mid-tier aggregator sources have seen a meaningful upward shift in &lt;code&gt;ai_authorship_probability&lt;/code&gt; over the last 12 months, while their &lt;code&gt;credibility&lt;/code&gt; score drifted down. That's a reportable trend. The reporter didn't have to build a scraper, didn't have to maintain a classifier, didn't have to write SQL - they asked a question in English.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do
&lt;/h2&gt;

&lt;p&gt;If you work anywhere near news - as a reader, a writer, an editor, a researcher, or someone building AI-news workflows - try it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx mcp-remote https://heliumtrades.com/mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then ask Claude the question you've been asking Google and failing to get a structured answer from. The 31-dim schema is there, the corpus is populated, and the tool calls are free.&lt;/p&gt;

&lt;p&gt;Full tool list, full schema, full source: &lt;a href="https://github.com/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;github.com/connerlambden/helium-mcp&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you find the schema missing something important, open an issue or reach out. The axes were picked empirically across 3.2M articles, but the space of things-worth-measuring about a news article is larger than what's in the schema today - and I'd rather know.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>news</category>
      <category>showdev</category>
    </item>
    <item>
      <title>31 dimensions of news bias, queryable from Claude in plain English</title>
      <dc:creator>connerlambden</dc:creator>
      <pubDate>Sun, 19 Apr 2026 22:21:30 +0000</pubDate>
      <link>https://dev.to/connerlambden/31-dimensions-of-news-bias-queryable-from-claude-in-plain-english-1ioo</link>
      <guid>https://dev.to/connerlambden/31-dimensions-of-news-bias-queryable-from-claude-in-plain-english-1ioo</guid>
      <description>&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;Most "news bias" tools collapse a story into a single number on a left-right axis. That's useful for a thumbnail, but it's the wrong granularity for almost any real workflow - a newsroom standards editor, a fact-checker triaging a viral claim, a journalism-school instructor teaching framing, an AI safety researcher building a misinformation classifier.&lt;/p&gt;

&lt;p&gt;What those workflows actually need is &lt;strong&gt;structured, multi-dimensional, queryable framing data&lt;/strong&gt;. So I built it and put it behind an MCP server that any AI assistant can call.&lt;/p&gt;

&lt;h2&gt;
  
  
  The schema
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;Helium MCP&lt;/a&gt; scores every article on 31 dimensions. A non-exhaustive sample:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;liberal_conservative&lt;/code&gt; - the standard left-right axis (kept for compatibility)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;credibility&lt;/code&gt; - sourcing density, named-source ratio, evidence-citation pattern&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;opinion_vs_fact&lt;/code&gt; - opinion language vs declarative-fact language ratio&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scapegoating&lt;/code&gt; - actor-blaming patterns vs structural-explanation patterns&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;covering_responses&lt;/code&gt; - whether the article gives space to the people/orgs being criticized&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fearful&lt;/code&gt; - emotional valence, threat language&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sensationalism&lt;/code&gt; - headline-vs-body amplification, superlative density&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;overconfidence&lt;/code&gt; - hedge language vs declarative certainty&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;intelligence&lt;/code&gt; - reading level, conceptual density&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;begging_the_question&lt;/code&gt; - assumes the conclusion in the framing&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;oversimplification&lt;/code&gt; - reduces complex causation to single factors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ai_authorship_probability&lt;/code&gt; - explicit estimate that the article was LLM-generated&lt;/li&gt;
&lt;li&gt;... 19 more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key thing is that every dimension is &lt;strong&gt;operationalized&lt;/strong&gt;: it's not "vibes" labeling; each one is computed from measurable features in the text.&lt;/p&gt;
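&lt;p&gt;To make "operationalized" concrete, here is one way a dimension like &lt;code&gt;overconfidence&lt;/code&gt; could be computed from text features - the word lists below are illustrative stand-ins, not Helium's actual feature set:&lt;/p&gt;

```python
# Toy operationalization of an overconfidence score: the share of
# declarative-certainty markers among all certainty/hedge markers.
HEDGES = {"may", "might", "could", "reportedly", "appears", "suggests"}
CERTAIN = {"definitely", "certainly", "undoubtedly", "will", "proves"}

def overconfidence(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    hedged = sum(w in HEDGES for w in words)
    certain = sum(w in CERTAIN for w in words)
    total = hedged + certain
    return certain / total if total else 0.5  # neutral when no markers
```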

&lt;h2&gt;
  
  
  The corpus
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;3.2M+ articles&lt;/li&gt;
&lt;li&gt;5,000+ sources&lt;/li&gt;
&lt;li&gt;Updated continuously&lt;/li&gt;
&lt;li&gt;Sources span global mainstream, US partisan, business press, tech press, regional, and long-tail / hyperlocal&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The MCP interface
&lt;/h2&gt;

&lt;p&gt;Helium MCP exposes this via three main tools you can call from any AI assistant:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;get_source_bias(source)&lt;/code&gt; - aggregate scores across a source's recent corpus&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;get_bias_from_url(url)&lt;/code&gt; - score a single article on demand&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;search_balanced_news(query)&lt;/code&gt; - synthesize multi-source coverage of an event with structured outcomes&lt;/li&gt;
&lt;/ol&gt;
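&lt;p&gt;Under the hood, each of these is a standard MCP &lt;code&gt;tools/call&lt;/code&gt; request. A minimal sketch of the JSON-RPC payload a client sends (the argument value is hypothetical; transport framing and session setup are handled by &lt;code&gt;mcp-remote&lt;/code&gt;):&lt;/p&gt;

```python
import json

# Shape of an MCP tool invocation for get_source_bias. Only the
# payload is shown; mcp-remote handles the HTTP transport.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_source_bias",
        "arguments": {"source": "cnn.com"},  # hypothetical argument value
    },
}
payload = json.dumps(request)
```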

&lt;h2&gt;
  
  
  Setup (one line)
&lt;/h2&gt;

&lt;p&gt;Add to your &lt;code&gt;mcp.json&lt;/code&gt; in Cursor or Claude Desktop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"helium"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp-remote"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://heliumtrades.com/mcp"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Free, no signup, no API key.&lt;/p&gt;

&lt;h2&gt;
  
  
  A real example
&lt;/h2&gt;

&lt;p&gt;In Claude, I asked: &lt;em&gt;"Show me the bias profile for CNN's recent corpus using Helium."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Real output (445 articles analyzed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Liberal/Conservative:  -2   (slightly left)
Credibility:           15   (moderate-high)
Fearful:                4
Intelligence:          11
Covering Responses:     9   (gives space to the criticized)
Opinion:                5
Overconfidence:         8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The value of seeing it as 31 numbers (and not 1) is that you can ask follow-up questions like &lt;em&gt;"For these same articles, are the high-credibility ones more or less likely to be high-overconfidence?"&lt;/em&gt; - and the agent can compute the correlation in-place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use cases this unlocks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For a newsroom standards editor:&lt;/strong&gt; triage incoming wire/syndication content by &lt;code&gt;ai_authorship_probability&lt;/code&gt; before it goes through your editorial pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For a fact-checker:&lt;/strong&gt; rank a list of suspect URLs by &lt;code&gt;credibility&lt;/code&gt; (low) and &lt;code&gt;overconfidence&lt;/code&gt; (high) - the combination is a strong indicator of claims worth investigating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For a journalism instructor:&lt;/strong&gt; show students how the same event was framed across 10 sources, with structured &lt;code&gt;scapegoating&lt;/code&gt; / &lt;code&gt;covering_responses&lt;/code&gt; / &lt;code&gt;opinion_vs_fact&lt;/code&gt; scores attached.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For an AI safety researcher:&lt;/strong&gt; the schema is essentially a deployed multi-criterion eval pipeline applied to news rather than to LLM outputs - useful as an empirical reference for how multi-criterion eval criteria interact with each other in production (Goodhart, distribution shift, taxonomy choice).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For anyone building an AI agent that consumes news:&lt;/strong&gt; structured per-source/per-article framing metadata is the missing primary key for reasoning about source reliability programmatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bigger point
&lt;/h2&gt;

&lt;p&gt;In a world where readers query LLMs more than they visit homepages, the value of an individual article goes down and the value of &lt;strong&gt;structured, queryable, per-article metadata&lt;/strong&gt; goes up. The schema above is one open attempt at what that metadata layer should look like.&lt;/p&gt;

&lt;p&gt;If you have ideas for dimensions that should be added (or critiques of the existing ones), I'd love to hear them - the methodology is open.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveats
&lt;/h2&gt;

&lt;p&gt;This is not a substitute for human editorial judgment. Bias scoring is hard, the schema can be wrong, and there are distribution-shift / Goodhart concerns with any operationalized criterion. Use it as a triage layer, not a verdict.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Repo: &lt;a href="https://github.com/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;https://github.com/connerlambden/helium-mcp&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docs + live demo: &lt;a href="https://heliumtrades.com/mcp-page/" rel="noopener noreferrer"&gt;https://heliumtrades.com/mcp-page/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Companion piece on the options-pricing side: &lt;a href="https://dev.to/connerlambden/how-i-screen-for-ratio-spread-opportunities-in-30-seconds-with-an-mcp-server-130p"&gt;https://dev.to/connerlambden/how-i-screen-for-ratio-spread-opportunities-in-30-seconds-with-an-mcp-server-130p&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>mcp</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How I screen for ratio spread opportunities in 30 seconds with an MCP server</title>
      <dc:creator>connerlambden</dc:creator>
      <pubDate>Sun, 19 Apr 2026 22:05:52 +0000</pubDate>
      <link>https://dev.to/connerlambden/how-i-screen-for-ratio-spread-opportunities-in-30-seconds-with-an-mcp-server-130p</link>
      <guid>https://dev.to/connerlambden/how-i-screen-for-ratio-spread-opportunities-in-30-seconds-with-an-mcp-server-130p</guid>
      <description>&lt;h2&gt;
  
  
  The setup
&lt;/h2&gt;

&lt;p&gt;A "ratio spread" in options trading is when you sell N options at one strike and buy M options at another, where N != M. The classic 1x2 put ratio spread (sell 1 ATM put, buy 2 OTM puts) is a favorite of vol traders because it lets you express a view that downside skew is overpriced &lt;em&gt;and&lt;/em&gt; gives you positive convexity if the market really crashes.&lt;/p&gt;

&lt;p&gt;The hard part is finding candidates. Skew mispricings are the kind of thing you used to need a Bloomberg terminal + a custom IV-surface model + an analyst to surface. With a free remote MCP server I built called &lt;a href="https://github.com/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;Helium MCP&lt;/a&gt;, you can do it in 30 seconds inside Claude Desktop or Cursor.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the MCP exposes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;Helium MCP&lt;/a&gt; is a thin wrapper over a per-symbol ML options pricing model. For any contract, it returns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;predicted_price&lt;/code&gt; - Helium's model fair value&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;prob_itm&lt;/code&gt; - probability of expiring in the money&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;options_data_date&lt;/code&gt; - freshness of the chain snapshot&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model is trained per-symbol on each ticker's own historical options data, so it makes different (and sometimes wildly different) calls than a generic Black-Scholes fit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup (one line)
&lt;/h2&gt;

&lt;p&gt;Add to your &lt;code&gt;mcp.json&lt;/code&gt; in Cursor or Claude Desktop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"helium"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp-remote"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://heliumtrades.com/mcp"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Free, no signup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding a ratio-spread candidate
&lt;/h2&gt;

&lt;p&gt;Inside Cursor, I asked the agent: &lt;em&gt;"Pull AAPL option prices from Helium for May 15 expiry across $180/$195/$200/$205 strikes."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Real output, just now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;get_option_price('AAPL', 200, '2026-05-15', 'call')   -&amp;gt;  $20.64,  prob_itm 0.52
get_option_price('AAPL', 205, '2026-05-15', 'call')   -&amp;gt;  $20.69,  prob_itm 0.52
get_option_price('AAPL', 195, '2026-05-15', 'put')    -&amp;gt;  $0.06,   prob_itm 0.01
get_option_price('AAPL', 180, '2026-05-15', 'put')    -&amp;gt;  $0.02,   prob_itm 0.01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look at the puts. The model thinks AAPL has effectively 1% probability of finishing below $195 by mid-May. The market is paying actual money for those puts (and, relative to the model's fair value, a lot more for the deeper OTM ones - that's the skew).&lt;/p&gt;

&lt;p&gt;If you believe the model, &lt;strong&gt;the near-ATM put premium is the overpriced side&lt;/strong&gt;, while the deep OTM puts are cheap relative to it. That's a textbook setup for a 1x2 put ratio spread:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sell 1 near-ATM put (collect the rich premium the market is offering for a bearish view)&lt;/li&gt;
&lt;li&gt;Buy 2 deep OTM puts (cheap insurance + tail-side convexity)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the market chops sideways or rallies (the model's base case), all the puts expire worthless and you keep the credit. If the market crashes hard, the 2 long puts catch up to and exceed the 1 short put.&lt;/p&gt;
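&lt;p&gt;The expiry payoff of the structure is easy to sanity-check in code. The strikes and net credit below are hypothetical placeholders, not quotes from the chain above:&lt;/p&gt;

```python
# Expiry payoff per share for a 1x2 put ratio spread: short 1 put at
# k_short, long 2 puts at k_long, entered for some net credit.

def ratio_spread_payoff(spot, k_short=200.0, k_long=180.0, net_credit=1.5):
    short_leg = -max(k_short - spot, 0.0)    # obligation on the sold put
    long_legs = 2 * max(k_long - spot, 0.0)  # convexity from the two long puts
    return short_leg + long_legs + net_credit
```

&lt;p&gt;With these placeholder numbers, a rally keeps the credit, a moderate selloff is the worst case, and a deep crash flips positive once the two long puts overtake the one short put.&lt;/p&gt;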

&lt;h2&gt;
  
  
  The point isn't this specific trade
&lt;/h2&gt;

&lt;p&gt;The point is that &lt;strong&gt;screening this kind of structural mispricing went from "needs an institutional setup" to "ask Claude in plain English"&lt;/strong&gt; the day MCP made it possible to expose model APIs to LLMs.&lt;/p&gt;

&lt;p&gt;The same workflow applies to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Calendar spread arbitrage (compare term-structure of Helium IV vs market IV)&lt;/li&gt;
&lt;li&gt;Diagonal spreads (mix the two)&lt;/li&gt;
&lt;li&gt;Volatility compression candidates - Helium MCP has a &lt;code&gt;get_top_trading_strategies&lt;/code&gt; endpoint that returns a daily-ranked long-vol vs short-vol screen with explicit bull/bear cases:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"Market IV (~30%) above Helium IV (~26-27%) across maturities - favoring volatility compression. Skew is mostly tail-priced..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The bigger picture
&lt;/h2&gt;

&lt;p&gt;ML option pricing was an institutional moat for two decades. With MCP, the marginal cost of querying an ML option pricing model from inside an LLM is zero. That changes who can run a structured options screen.&lt;/p&gt;

&lt;p&gt;Helium MCP also exposes 31-dimension structured bias scoring across 3.2M+ news articles (5,000+ sources) - useful for the news/narrative side of trading - but that's a topic for another post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveats
&lt;/h2&gt;

&lt;p&gt;This is not trading advice. Helium's model can be wrong. Per-symbol regression models can overfit. Always size positions appropriately and validate against your own framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Repo: &lt;a href="https://github.com/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;https://github.com/connerlambden/helium-mcp&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docs + live demo: &lt;a href="https://heliumtrades.com/mcp-page/" rel="noopener noreferrer"&gt;https://heliumtrades.com/mcp-page/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you build something interesting on top of it, I'd love to hear about it.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>mcp</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How to Give Your AI Assistant Real-Time Market Intelligence</title>
      <dc:creator>connerlambden</dc:creator>
      <pubDate>Tue, 14 Apr 2026 21:15:15 +0000</pubDate>
      <link>https://dev.to/connerlambden/how-to-give-your-ai-assistant-real-time-market-intelligence-1057</link>
      <guid>https://dev.to/connerlambden/how-to-give-your-ai-assistant-real-time-market-intelligence-1057</guid>
      <description>&lt;p&gt;MCP (Model Context Protocol) lets AI assistants call external tools. I built a remote MCP server called &lt;a href="https://github.com/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;Helium&lt;/a&gt; that gives any MCP-compatible AI assistant access to real-time financial intelligence — market data, ML-powered options pricing, news bias analysis, and more.&lt;/p&gt;

&lt;p&gt;The interesting part isn't the financial data itself (there are plenty of market APIs). It's what happens when you combine structured financial data with an LLM's reasoning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup (30 seconds)
&lt;/h2&gt;

&lt;p&gt;Add one line to your AI assistant's MCP config:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cursor / Windsurf:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="nl"&gt;"helium"&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"https://heliumtrades.com/mcp"&lt;/span&gt;&lt;span class="p"&gt;}}}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Claude Desktop:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="nl"&gt;"helium"&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:[&lt;/span&gt;&lt;span class="s2"&gt;"mcp-remote"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"https://heliumtrades.com/mcp"&lt;/span&gt;&lt;span class="p"&gt;]}}}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No API key. Nothing to install. Free tier.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;Helium exposes 10 tools through MCP:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Market Intelligence (&lt;code&gt;get_ticker&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;Ask "What's the outlook for NVDA?" and get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI-generated bull and bear cases&lt;/li&gt;
&lt;li&gt;5 probability-weighted scenarios (e.g., 38% chance of mean-reversion, 25% upside on AI headlines, 10% tail risk on export shock)&lt;/li&gt;
&lt;li&gt;Each scenario includes falsifiability criteria — what would prove it wrong&lt;/li&gt;
&lt;/ul&gt;
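&lt;p&gt;A hypothetical shape for that scenario payload, with the sanity check an agent can run on it - the field names and labels here are assumptions for illustration, not the confirmed response schema:&lt;/p&gt;

```python
# Assumed shape of get_ticker's probability-weighted scenarios.
scenarios = [
    {"label": "mean-reversion", "probability": 0.38, "falsifier": "breaks above recent range"},
    {"label": "upside on AI headlines", "probability": 0.25, "falsifier": "no major AI announcements"},
    {"label": "range-bound chop", "probability": 0.17, "falsifier": "realized vol expands"},
    {"label": "earnings-driven repricing", "probability": 0.10, "falsifier": "guidance unchanged"},
    {"label": "tail risk on export shock", "probability": 0.10, "falsifier": "export rules unchanged"},
]

total = sum(s["probability"] for s in scenarios)
assert round(total, 9) == 1.0  # a probability-weighted set should sum to 1
```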

&lt;h3&gt;
  
  
  2. ML Options Pricing (&lt;code&gt;get_top_trading_strategies&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;The model computes independent fair values for every listed options contract. For each ticker, it returns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strategies ranked by expected value&lt;/li&gt;
&lt;li&gt;Backtested win rates (e.g., short vol calls on AAPL: 61% win rate, avg +$8.40/trade over 39 historical trades)&lt;/li&gt;
&lt;li&gt;Full Greeks for every contract&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A single call returns ~355KB of structured data.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Balanced News Synthesis (&lt;code&gt;search_balanced_news&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;Aggregates 3.2M+ articles from 5,000+ sources. Instead of one take, it shows where sources agree vs. diverge on any topic.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Multi-Dimensional Bias Scoring (&lt;code&gt;get_all_source_biases&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;Scores news sources across 15+ dimensions — not just left/right:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prescriptiveness&lt;/strong&gt;: Does the outlet tell you what to think, or just report?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sensationalism&lt;/strong&gt;: Framing intensity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fearful framing&lt;/strong&gt;: How much fear-based language is used&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrity&lt;/strong&gt;: Factual rigor&lt;/li&gt;
&lt;li&gt;Plus: dovish/hawkish, libertarian/authoritarian, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Historical Options Data (&lt;code&gt;get_historical_options_data&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;Full historical chains with ML pricing baked in. A single SPY request returns ~30MB of every contract with the model's theoretical value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why MCP matters for this
&lt;/h2&gt;

&lt;p&gt;The key insight is that MCP eliminates the build step. Instead of building a custom financial app, you add one config line and ask questions in natural language. The AI handles parsing 355KB of structured options data and pulling out what's relevant.&lt;/p&gt;

&lt;p&gt;This pattern — domain-specific intelligence delivered through MCP — is how I think a lot of specialized AI tools will work going forward.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://heliumtrades.com/mcp-page/" rel="noopener noreferrer"&gt;Full docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://glama.ai/mcp/servers/connerlambden/helium-mcp" rel="noopener noreferrer"&gt;Glama listing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/helium-mcp" rel="noopener noreferrer"&gt;npm&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy to answer questions about the implementation or the MCP protocol in general.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Search Scientific Papers from Any AI Tool — Introducing BGPT MCP</title>
      <dc:creator>connerlambden</dc:creator>
      <pubDate>Tue, 17 Feb 2026 04:50:13 +0000</pubDate>
      <link>https://dev.to/connerlambden/search-scientific-papers-from-any-ai-tool-introducing-bgpt-mcp-18f3</link>
      <guid>https://dev.to/connerlambden/search-scientific-papers-from-any-ai-tool-introducing-bgpt-mcp-18f3</guid>
      <description>&lt;p&gt;If you work with scientific literature — whether you're a researcher, bioinformatician, or building AI-powered tools — you know the pain of searching for papers programmatically. PubMed's API is clunky. Semantic Scholar doesn't give you experimental details. And scraping is fragile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BGPT MCP&lt;/strong&gt; solves this by giving any AI tool direct access to a curated database of scientific papers, complete with raw experimental data extracted from full-text studies.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is MCP?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt; is an open standard that lets AI assistants connect to external tools and data sources. Think of it as "USB-C for AI" — one protocol, many tools.&lt;/p&gt;

&lt;p&gt;If you use &lt;strong&gt;Cursor&lt;/strong&gt;, &lt;strong&gt;Claude Desktop&lt;/strong&gt;, &lt;strong&gt;Cline&lt;/strong&gt;, &lt;strong&gt;Windsurf&lt;/strong&gt;, or any MCP-compatible client, you can connect to BGPT with a single line of config.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does BGPT MCP Do?
&lt;/h2&gt;

&lt;p&gt;BGPT provides a &lt;code&gt;search_papers&lt;/code&gt; tool that searches a curated database of scientific papers. Unlike typical search APIs, BGPT extracts &lt;strong&gt;raw experimental data&lt;/strong&gt; from full-text papers. Each result includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Title, DOI, and authors&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Methods&lt;/strong&gt; — actual experimental procedures used&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Results&lt;/strong&gt; — quantitative findings extracted from the paper&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality scores&lt;/strong&gt; — automated assessment of study rigor&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;25+ metadata fields&lt;/strong&gt; — journal, year, sample size, organism, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the kind of structured data that used to require hours of manual extraction.&lt;/p&gt;
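&lt;p&gt;To give a feel for the shape of a result, here's a sketch of what one entry might look like. Every field name and value below is hypothetical — it illustrates the categories listed above, not BGPT's actual response schema:&lt;/p&gt;

```json
{
  "title": "CRISPR-Cas9 editing efficiency in primary human T cells",
  "doi": "10.1234/example.doi",
  "authors": ["A. Researcher", "B. Scientist"],
  "methods": "Cas9 RNP electroporation; efficiency measured by amplicon sequencing",
  "results": { "editing_efficiency_pct": 78.5 },
  "quality_score": 0.82,
  "journal": "Example Journal",
  "year": 2024,
  "sample_size": 12,
  "organism": "Homo sapiens"
}
```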

&lt;h2&gt;
  
  
  Quick Start (2 minutes)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  For Cursor IDE
&lt;/h3&gt;

&lt;p&gt;Add this to your MCP settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"bgpt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://bgpt.pro/mcp/sse"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No API key needed for your first 50 searches.&lt;/p&gt;

&lt;h3&gt;
  
  
  For Claude Desktop
&lt;/h3&gt;

&lt;p&gt;Add to your &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"bgpt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://bgpt.pro/mcp/sse"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  For Any MCP Client
&lt;/h3&gt;

&lt;p&gt;BGPT uses Server-Sent Events (SSE) transport. Point your client to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://bgpt.pro/mcp/sse
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Example Queries
&lt;/h2&gt;

&lt;p&gt;Once connected, just ask your AI assistant naturally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;"Search for papers on CRISPR gene editing efficiency in human cells"&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;"Find studies comparing immunotherapy response rates in melanoma"&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;"What papers exist on transformer models for protein structure prediction?"&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI will call the &lt;code&gt;search_papers&lt;/code&gt; tool and return structured results you can immediately work with.&lt;/p&gt;
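&lt;p&gt;Under the hood, the client issues a standard MCP &lt;code&gt;tools/call&lt;/code&gt; request over the SSE transport. Per the MCP specification it looks roughly like this (the &lt;code&gt;query&lt;/code&gt; argument name is an assumption about BGPT's tool schema):&lt;/p&gt;

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_papers",
    "arguments": {
      "query": "CRISPR gene editing efficiency in human cells"
    }
  }
}
```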

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;If you're building research tools, literature review pipelines, or AI agents that need scientific context, BGPT MCP gives you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;No infrastructure&lt;/strong&gt; — it's a remote server, nothing to install&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured data&lt;/strong&gt; — not just abstracts, but methods and results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality scores&lt;/strong&gt; — filter for rigorous studies automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Works everywhere&lt;/strong&gt; — any MCP-compatible AI tool&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;50 free searches&lt;/strong&gt; per network (no account needed)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$0.01 per result&lt;/strong&gt; after that with an API key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Get your API key at &lt;a href="https://bgpt.pro/mcp" rel="noopener noreferrer"&gt;bgpt.pro/mcp&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live page:&lt;/strong&gt; &lt;a href="https://bgpt.pro/mcp" rel="noopener noreferrer"&gt;bgpt.pro/mcp&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/connerlambden/bgpt-mcp" rel="noopener noreferrer"&gt;github.com/connerlambden/bgpt-mcp&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP Protocol:&lt;/strong&gt; &lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;modelcontextprotocol.io&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I'd love to hear how you use it. If you're working on research tooling or have feedback, drop a comment below or reach out at &lt;a href="mailto:contact@bgpt.pro"&gt;contact@bgpt.pro&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>science</category>
      <category>research</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
