<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: manja316</title>
    <description>The latest articles on DEV Community by manja316 (@manja316).</description>
    <link>https://dev.to/manja316</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3807032%2F019bc956-974c-46d5-880d-00fcfa5fabf7.jpeg</url>
      <title>DEV Community: manja316</title>
      <link>https://dev.to/manja316</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/manja316"/>
    <language>en</language>
    <item>
      <title>308 Labeled Polymarket Crash Trades — Free Dataset For Mean-Reversion Research</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Tue, 28 Apr 2026 15:58:14 +0000</pubDate>
      <link>https://dev.to/manja316/308-labeled-polymarket-crash-trades-free-dataset-for-mean-reversion-research-3a4g</link>
      <guid>https://dev.to/manja316/308-labeled-polymarket-crash-trades-free-dataset-for-mean-reversion-research-3a4g</guid>
      <description>&lt;p&gt;If you want to study mean-reversion on prediction markets, the data you actually need does not exist publicly. Most "Polymarket datasets" are either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Synthetic&lt;/strong&gt; — generated for academic papers, no real money behind them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggregate&lt;/strong&gt; — hourly volume and last-price across thousands of markets. Useless for tactical signal research.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I built one and open-sourced it: &lt;strong&gt;&lt;a href="https://github.com/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;cross-signal-data&lt;/a&gt;&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;cross-signal-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cross_signal_data&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load&lt;/span&gt;
&lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;                          &lt;span class="c1"&gt;# pandas DataFrame, 308 rows
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;is_profitable&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;    &lt;span class="c1"&gt;# 0.802
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are the actual labeled outcomes of 308 closed trades from a live Polymarket crash-recovery bot: the signal features and the resolved result for each trade.&lt;/p&gt;

&lt;p&gt;Also mirrored on HuggingFace: &lt;a href="https://huggingface.co/datasets/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;huggingface.co/datasets/LuciferForge/cross-signal-data&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's in the dataset
&lt;/h2&gt;

&lt;p&gt;19 columns, one row per closed trade:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Column&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;trade_id&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Sequential trade index (0-based)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;market_id&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Polymarket market ID (queryable via &lt;code&gt;gamma-api.polymarket.com&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;question&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Market question text&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;outcome_label&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;YES/NO outcome the bot bet on&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;entry_time&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;When the crash signal fired (ISO-8601 UTC)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;exit_time&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;When the position closed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;entry_price&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Per-share price at entry (0–1, Polymarket prices are probabilities)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;exit_price&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Per-share price at exit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;pre_crash_high&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Recent local-window high before the crash trigger&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;drop_pct&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;(pre_crash_high − entry_price) / pre_crash_high × 100&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;size_usd&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;USD allocated (typically $5)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;shares&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Share count purchased&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hold_hours&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Wall-clock hours from entry to exit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;pnl_usd&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Realized P&amp;amp;L (theoretical, see below)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;is_profitable&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1 if &lt;code&gt;pnl_usd &amp;gt; 0&lt;/code&gt; else 0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;exit_reason&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;RECOVERY / TIMEOUT_48H / TIMEOUT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;entry_hour_utc&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Hour-of-day at entry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;entry_dow&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Day-of-week at entry (0=Monday)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;recovered_to_pct_of_high&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;exit_price / pre_crash_high × 100&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
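
&lt;p&gt;To make the two derived columns concrete, here is the documented arithmetic on an illustrative row (these numbers are made up for the example, not taken from the dataset):&lt;/p&gt;

```python
# Illustrative values only, not rows from the dataset.
pre_crash_high = 0.40
entry_price = 0.30
exit_price = 0.36

# drop_pct: percent fall from the local high to the entry fill
drop_pct = (pre_crash_high - entry_price) / pre_crash_high * 100
# recovered_to_pct_of_high: how much of the pre-crash high the exit reclaimed
recovered_to_pct_of_high = exit_price / pre_crash_high * 100

print(round(drop_pct, 2))                  # 25.0
print(round(recovered_to_pct_of_high, 2))  # 90.0
```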

&lt;h3&gt;
  
  
  Aggregate stats
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;308 trades, 247 profitable (80.2% WR)&lt;/li&gt;
&lt;li&gt;Date range: March 2026 – April 2026&lt;/li&gt;
&lt;li&gt;Median hold: ~3 hours&lt;/li&gt;
&lt;li&gt;Average drop_pct at entry: ~22%&lt;/li&gt;
&lt;li&gt;Average recovery: ~85% of pre-crash high&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Exit reason distribution
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Reason&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;What it means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;RECOVERY&lt;/td&gt;
&lt;td&gt;235&lt;/td&gt;
&lt;td&gt;Price climbed back to ~90% of pre-crash high. Took profit.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TIMEOUT_48H&lt;/td&gt;
&lt;td&gt;62&lt;/td&gt;
&lt;td&gt;Held 48 hours without recovery. Sold at whatever the bid offered.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TIMEOUT&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;Older shorter-window timeout from earlier in the dataset.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Sports markets where the team had already lost the underlying game often end up in TIMEOUT_48H. So do political markets that crashed because the resolution fundamentals shifted, not just because of momentary panic. The bot's job is to filter those out before entering; the dataset shows where the filter fails.&lt;/p&gt;




&lt;h2&gt;
  
  
  How I used it
&lt;/h2&gt;

&lt;p&gt;I loaded the data with the bundled loader, trained a logistic regression and a random forest, and got 79.9% cross-validated accuracy from 7 features:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;RF importance&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;drop_pct&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;0.254&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;shares&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;0.200&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;entry_price&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;0.174&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;pre_crash_high&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;0.171&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;entry_hour_utc&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;0.110&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;entry_dow&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;0.059&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;size_usd&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;0.031&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Translation: the bot's &lt;strong&gt;trigger filter is doing 100% of the work&lt;/strong&gt;. A simple model that just learns "crashes with bigger drop_pct in the right time-of-day window are more likely to recover" basically reproduces the bot's actual win rate. There's no obvious feature engineering trick that beats the trigger.&lt;/p&gt;
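
&lt;p&gt;One sanity check worth keeping in mind: 247 of 308 trades are profitable, so a model that always predicts &lt;code&gt;is_profitable = 1&lt;/code&gt; already scores the 80.2% base rate. A cross-validated 79.9% is not beating that baseline:&lt;/p&gt;

```python
# Base-rate check: always predicting "profitable" scores the class prior.
wins, total = 247, 308
base_rate = wins / total
print(round(base_rate, 3))  # 0.802
```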

&lt;p&gt;The diurnal pattern is interesting. Hours 16, 21, 22 UTC have ~100% WR (small samples). Hour 8 UTC dips to ~55%. Off-peak hours (when US/EU traders are asleep, books are thin) are punishing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;groupby&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;entry_hour_utc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;is_profitable&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run that yourself and see.&lt;/p&gt;




&lt;h2&gt;
  
  
  Important caveat: theoretical P&amp;amp;L vs on-chain P&amp;amp;L
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;pnl_usd&lt;/code&gt; column is &lt;strong&gt;theoretical&lt;/strong&gt; — computed from the bot's recorded &lt;code&gt;entry_price&lt;/code&gt; and &lt;code&gt;exit_price&lt;/code&gt;. This assumes you got every share filled at those prices. In practice on thin Polymarket books, fills come in slightly worse, especially for TIMEOUT exits.&lt;/p&gt;
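
&lt;p&gt;A toy illustration of how the theoretical and realized numbers diverge (the fill prices below are invented for the example; the real gap comes from on-chain data):&lt;/p&gt;

```python
# Hypothetical trade: bot-recorded prices vs. slightly worse on-chain fills.
shares = 25.0
entry_recorded, exit_recorded = 0.20, 0.22
entry_fill, exit_fill = 0.205, 0.210   # thin-book fills, worse on both legs

theoretical_pnl = shares * (exit_recorded - entry_recorded)
actual_pnl = shares * (exit_fill - entry_fill)

print(round(theoretical_pnl, 3))  # 0.5
print(round(actual_pnl, 3))       # 0.125
```

A few cents per leg per trade is enough to flip the sign over hundreds of trades.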

&lt;p&gt;I built a separate audit tool that reconciles the bot's records against on-chain fills: &lt;a href="https://github.com/LuciferForge/pnl-truthteller" rel="noopener noreferrer"&gt;&lt;code&gt;pnl-truthteller&lt;/code&gt;&lt;/a&gt;. On this same 308-trade dataset, it surfaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Theoretical P&amp;amp;L:  +$33.49
Actual P&amp;amp;L:       -$89.01
Slippage cost:    -$122.50  (-365.8% of theoretical)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So the bot has 80.2% trigger-level WR but is slightly underwater once slippage is included. That gap is worth more than the trigger itself — it tells you that the exit-ladder strategy was walking thin books down. That's an interesting research question, exactly the kind of thing the dataset enables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pnl-truthteller
pnl-truthteller &lt;span class="nt"&gt;--wallet&lt;/span&gt; 0xYourProxyAddress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you build a strategy on top of the dataset, run &lt;code&gt;pnl-truthteller&lt;/code&gt; against your live wallet too. Otherwise you'll think you're profitable when you aren't.&lt;/p&gt;




&lt;h2&gt;
  
  
  What this dataset is good for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mean-reversion alpha studies&lt;/strong&gt; — does crash-recovery actually work? At what drop_pct does it start working? The data has all the inputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time-of-day effects&lt;/strong&gt; — &lt;code&gt;entry_hour_utc&lt;/code&gt; × &lt;code&gt;is_profitable&lt;/code&gt; reveals diurnal patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hold-time tradeoffs&lt;/strong&gt; — the win-rate vs hold-hours curve is in here.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature-engineering exercises&lt;/strong&gt; — if you can predict &lt;code&gt;is_profitable&lt;/code&gt; better than 80% accuracy from these features, you've found something.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backtesting frameworks&lt;/strong&gt; — real labeled data with real prices, suitable for cross-validation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What it's NOT good for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;General Polymarket research.&lt;/strong&gt; Too narrow a slice (one bot, one signal, two months).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-frequency studies.&lt;/strong&gt; Only entry/exit timestamps, not tick-level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Counterfactuals&lt;/strong&gt; ("what would a different bot have done?"). Only triggered trades are recorded.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Known biases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Survivorship in the trigger
&lt;/h3&gt;

&lt;p&gt;Only contains markets where the trigger fired (&amp;gt;20% drop, $0.04–$0.30 entry range). If you'd used a different threshold, you'd see different markets.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Selection in the entry-price band
&lt;/h3&gt;

&lt;p&gt;Most rows are concentrated in $0.04–$0.30. Markets that crashed from $0.80 → $0.50 are absent (above the range). Markets at $0.02 are absent (below the floor).&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Theoretical PnL ≠ realized PnL
&lt;/h3&gt;

&lt;p&gt;See above. Use &lt;code&gt;pnl-truthteller&lt;/code&gt; for slippage-adjusted analysis.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Time period
&lt;/h3&gt;

&lt;p&gt;March–April 2026. Includes one Polymarket V1 → V2 migration window, various political events specific to the period, and Polygon-specific gas conditions.&lt;/p&gt;

&lt;p&gt;Don't assume the patterns extrapolate forward indefinitely. Re-run the dataset extraction quarterly as it grows.&lt;/p&gt;




&lt;h2&gt;
  
  
  Reproducibility
&lt;/h2&gt;

&lt;p&gt;The script that generated the dataset from the bot's &lt;code&gt;positions.json&lt;/code&gt; is checked in: &lt;a href="https://github.com/LuciferForge/cross-signal-data/blob/main/scripts/extract.py" rel="noopener noreferrer"&gt;&lt;code&gt;scripts/extract.py&lt;/code&gt;&lt;/a&gt;. Anyone with the bot's source data can rerun it and get the same output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/LuciferForge/cross-signal-data
&lt;span class="nb"&gt;cd &lt;/span&gt;cross-signal-data
python scripts/extract.py &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--positions&lt;/span&gt; /path/to/positions.json &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--output&lt;/span&gt; data/crashes_v1.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The dataset file is also bundled inside the pip package — &lt;code&gt;cross_signal_data.load()&lt;/code&gt; returns the data without any external download.&lt;/p&gt;




&lt;h2&gt;
  
  
  License &amp;amp; citation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;MIT.&lt;/strong&gt; Use it, fork it, train on it, build a competitor strategy. The chain is public; the data is public; the code is public.&lt;/p&gt;

&lt;p&gt;If you publish research using it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight bibtex"&gt;&lt;code&gt;&lt;span class="nc"&gt;@dataset&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;cross_signal_data_2026&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{cross-signal-data: Polymarket crash-recovery labeled dataset}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;author&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{LuciferForge}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;year&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{2026}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{https://github.com/LuciferForge/cross-signal-data}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;github.com/LuciferForge/cross-signal-data&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyPI:&lt;/strong&gt; &lt;code&gt;pip install cross-signal-data&lt;/code&gt; (&lt;a href="https://pypi.org/project/cross-signal-data/" rel="noopener noreferrer"&gt;pypi.org/project/cross-signal-data&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HuggingFace mirror:&lt;/strong&gt; &lt;a href="https://huggingface.co/datasets/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;huggingface.co/datasets/LuciferForge/cross-signal-data&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slippage audit tool:&lt;/strong&gt; &lt;code&gt;pip install pnl-truthteller&lt;/code&gt; — &lt;a href="https://github.com/LuciferForge/pnl-truthteller" rel="noopener noreferrer"&gt;github&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bot source:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/polymarket-crash-bot" rel="noopener noreferrer"&gt;github.com/LuciferForge/polymarket-crash-bot&lt;/a&gt; — same bot that produced this data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you build a model that beats 80% on this dataset, I want to know what feature you used. The bot's edge is mine until someone finds a better one.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://github.com/LuciferForge" rel="noopener noreferrer"&gt;LuciferForge&lt;/a&gt; runs a public-audited Polymarket trading bot, &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt; (5,800+ MCP servers indexed), and the &lt;a href="https://api.protodex.io" rel="noopener noreferrer"&gt;free Polymarket data API&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>datascience</category>
      <category>machinelearning</category>
      <category>opensource</category>
    </item>
    <item>
      <title>80.2% Win Rate on 308 Polymarket Trades — Open Source, Public Data, On-Chain Verifiable</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Tue, 28 Apr 2026 15:57:38 +0000</pubDate>
      <link>https://dev.to/manja316/802-win-rate-on-308-polymarket-trades-open-source-public-data-on-chain-verifiable-385a</link>
      <guid>https://dev.to/manja316/802-win-rate-on-308-polymarket-trades-open-source-public-data-on-chain-verifiable-385a</guid>
      <description>&lt;p&gt;I run a Polymarket crash-recovery bot. As of today, it has placed &lt;strong&gt;308 closed trades at an 80.2% win rate&lt;/strong&gt;. The bot is open source. The 308-trade dataset is open source. The audit tools that prove the numbers are open source. Every claim in this post is verifiable.&lt;/p&gt;

&lt;p&gt;This isn't "I built a trading bot, here's a screenshot." This is the full data, the open code, and the honest lessons.&lt;/p&gt;




&lt;h2&gt;
  
  
  The numbers, with no spin
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Closed trades&lt;/td&gt;
&lt;td&gt;308&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Profitable&lt;/td&gt;
&lt;td&gt;247 (80.2%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Date range&lt;/td&gt;
&lt;td&gt;March 2026 – April 2026&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Median hold time&lt;/td&gt;
&lt;td&gt;~3 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average drop_pct at entry&lt;/td&gt;
&lt;td&gt;~22% (from recent local high)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average recovery to&lt;/td&gt;
&lt;td&gt;~85% of pre-crash high&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Theoretical lifetime P&amp;amp;L&lt;/td&gt;
&lt;td&gt;+$33.49&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Actual on-chain P&amp;amp;L&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;-$89.01&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hidden slippage cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;-$122.50&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Yes — that last number is real. Despite an 80.2% WR, the bot's actual on-chain P&amp;amp;L is &lt;strong&gt;negative&lt;/strong&gt; because of slippage. The DB said +$33; the chain said -$89. I open-sourced the audit tool that surfaced this gap as part of today's release: &lt;a href="https://github.com/LuciferForge/pnl-truthteller" rel="noopener noreferrer"&gt;&lt;code&gt;pnl-truthteller&lt;/code&gt;&lt;/a&gt;. More on that below.&lt;/p&gt;

&lt;p&gt;The bot's entries are profitable on average, but the exit-ladder strategy walked thin order books down on TIMEOUT exits, and that ate the alpha. This is the kind of finding that only shows up when you compare your bot's records against on-chain reality. Most operators never do.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the bot does
&lt;/h2&gt;

&lt;p&gt;The bot enters Polymarket binary or multi-outcome markets when &lt;strong&gt;three conditions hit simultaneously&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The price has dropped &amp;gt; 20% from a recent local-window high (the "crash" signal).&lt;/li&gt;
&lt;li&gt;The current price is in a sweet-spot range (originally $0.04–$0.30; raised to $0.04–$0.60 today after a backtest showed the wider range had 81.8% WR on a recent slice).&lt;/li&gt;
&lt;li&gt;The orderbook has enough bid-stack depth to absorb the position size within a slippage budget.&lt;/li&gt;
&lt;/ol&gt;
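
&lt;p&gt;The first two conditions reduce to a small predicate. A sketch (function and parameter names are mine, and the depth check from condition 3 is elided):&lt;/p&gt;

```python
def trigger_fires(price, local_high, lo=0.04, hi=0.30, min_drop=0.20):
    """Crash-entry predicate sketch: big drop from the local high, and the
    current price sits inside the sweet-spot band. Illustrative only; the
    real bot also checks bid-stack depth before entering."""
    drop = (local_high - price) / local_high
    return drop > min_drop and hi >= price >= lo

print(trigger_fires(0.20, 0.40))  # True: 50% drop, inside the band
print(trigger_fires(0.50, 0.60))  # False: ~17% drop, above the band
```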

&lt;p&gt;It enters at $5 per trade. No leverage. No exotic positioning. Single Polygon proxy wallet, all USDC.&lt;/p&gt;

&lt;p&gt;The exit logic is where the alpha gets compressed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RECOVERY&lt;/strong&gt; (235 of 308 trades): price climbs back to ~90% of pre-crash high, sells. Profitable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TIMEOUT_48H&lt;/strong&gt; (62 trades): held 48 hours without recovery. Sells at whatever the bid stack offers. Sometimes profitable, often a small loss.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TIMEOUT&lt;/strong&gt; (11 trades): older, shorter-window timeout from earlier in the dataset. Same logic, less generous.&lt;/li&gt;
&lt;/ul&gt;
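
&lt;p&gt;The exit rules above amount to a small decision function. A sketch (names and signature are mine, not the bot's actual code):&lt;/p&gt;

```python
def exit_action(price, pre_crash_high, hold_hours, target_frac=0.90, max_hours=48):
    """Exit-rule sketch: take profit near the pre-crash high, else time out."""
    if price >= target_frac * pre_crash_high:
        return "RECOVERY"
    if hold_hours >= max_hours:
        return "TIMEOUT_48H"
    return "HOLD"

print(exit_action(0.37, 0.40, hold_hours=5))   # RECOVERY (0.37 is 92.5% of the high)
print(exit_action(0.25, 0.40, hold_hours=50))  # TIMEOUT_48H
```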

&lt;p&gt;That's it. The bot is ~500 lines of Python. The "edge" is in the trigger filter, not in clever execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  The edge: why crashes mean-revert on Polymarket
&lt;/h2&gt;

&lt;p&gt;Prediction markets have a structural property that doesn't show up in equity markets: &lt;strong&gt;the resolution probability is the price&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When a Polymarket binary contract trades at $0.20, the market is saying "20% chance YES." When it crashes from $0.40 to $0.20 in a few hours due to a news event, two things are usually true:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The fundamental probability (whatever the actual likelihood is) didn't change by 50%. News events overcorrect.&lt;/li&gt;
&lt;li&gt;Liquidity is asymmetric — there are short-term sellers panicking out, but no equally fast buyers stepping in. Bid stacks thin out.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The bot's job is to step in as the buyer when both happen. Most of the time, the price recovers within a few hours as cooler heads reassess. That's a 2–3% win on a $5 position when it works.&lt;/p&gt;

&lt;p&gt;I trained a logistic regression and a random forest on the dataset to see if I could predict &lt;code&gt;is_profitable&lt;/code&gt; from the available features. Result: &lt;strong&gt;~79.9% cross-validated accuracy&lt;/strong&gt; on the simple model, basically matching the bot's actual WR. Translation: most of the alpha is in the trigger filter itself, not in feature engineering on top.&lt;/p&gt;

&lt;p&gt;The dataset, the loader, and the baseline notebook are at: &lt;strong&gt;&lt;a href="https://github.com/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;github.com/LuciferForge/cross-signal-data&lt;/a&gt;&lt;/strong&gt; (&lt;code&gt;pip install cross-signal-data&lt;/code&gt;).&lt;/p&gt;




&lt;h2&gt;
  
  
  What 308 trades teach you that backtesting doesn't
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Lesson 1: Slippage is bigger than you model
&lt;/h3&gt;

&lt;p&gt;I built backtests assuming midpoint fills. Real exits on thin Polymarket books are 3–8% worse than midpoint. Across 308 closed trades, that compounded into the -$122.50 hidden cost noted above.&lt;/p&gt;

&lt;p&gt;Detection: build the audit tool first. The bot kept a &lt;code&gt;live_trades.db&lt;/code&gt; SQLite database logging the raw response from every CLOB POST. The audit replays each closed trade against the chain's actual fills (deduplicated by &lt;code&gt;orderID&lt;/code&gt;, because sweep retries log the same fill twice). The math is straightforward; the surprise is how big the gap is.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pnl-truthteller
pnl-truthteller &lt;span class="nt"&gt;--wallet&lt;/span&gt; 0xYourProxyAddress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read-only, no API key needed. Outputs a Markdown report with by-exit-reason breakdown, worst-10 trades, and dust shares stranded on-chain.&lt;/p&gt;
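
&lt;p&gt;One reconciliation detail worth copying: deduplicate fills by &lt;code&gt;orderID&lt;/code&gt; before summing, since sweep retries can log the same fill twice. A sketch (the record shape here is assumed, not the tool's actual schema):&lt;/p&gt;

```python
def dedup_fills(fills):
    """Drop duplicate fill records sharing an orderID, keeping the first seen."""
    seen = set()
    unique = []
    for fill in fills:
        oid = fill["orderID"]
        if oid not in seen:
            seen.add(oid)
            unique.append(fill)
    return unique

fills = [
    {"orderID": "a1", "size": 10.0},
    {"orderID": "a1", "size": 10.0},  # sweep retry logged the same fill again
    {"orderID": "b2", "size": 5.0},
]
print(len(dedup_fills(fills)))  # 2
```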

&lt;h3&gt;
  
  
  Lesson 2: Category concentration kills you
&lt;/h3&gt;

&lt;p&gt;Three losing streaks in the dataset all came from the same market category in the same time window. Sports markets where the team had already lost the underlying game continued to trade, propped up by bids from people who hadn't yet updated. Easy to spot in retrospect; harder to filter automatically.&lt;/p&gt;

&lt;p&gt;Today's release adds a tiered category blacklist with persistence — markets in losing categories get a 7-day cooldown after a TIMEOUT exit. Whether this moves the WR is a question for the next 30 trades.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lesson 3: The bid-stack changes faster than your poll interval
&lt;/h3&gt;

&lt;p&gt;A market that looks liquid at signal time can be thin by the time your limit order needs to fill. The fix is a buy-preflight check: before entering, verify the bid stack has at least &lt;code&gt;5 × shares_needed&lt;/code&gt; of capacity within your slippage budget. Skip if not.&lt;/p&gt;

&lt;p&gt;Ship this from day one. I didn't, and learned the hard way.&lt;/p&gt;
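
&lt;p&gt;A sketch of that preflight check (the book representation and default thresholds are assumptions, not the bot's actual code):&lt;/p&gt;

```python
def preflight_ok(bids, shares_needed, slippage_budget=0.05, depth_mult=5.0):
    """Require depth_mult x shares_needed of bid size within the slippage budget.
    bids: list of (price, size) tuples, best bid first. Sketch only."""
    best_bid = bids[0][0]
    floor_price = best_bid * (1.0 - slippage_budget)
    depth = sum(size for price, size in bids if price >= floor_price)
    return depth >= depth_mult * shares_needed

book = [(0.30, 40.0), (0.29, 60.0), (0.25, 500.0)]
print(preflight_ok(book, 10.0))  # True: 100 shares sit within 5% of the best bid
```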

&lt;h3&gt;
  
  
  Lesson 4: Per-token loss cooldown
&lt;/h3&gt;

&lt;p&gt;If a specific token already cost you money via TIMEOUT, don't re-enter it for 7 days. Sounds obvious. Wasn't enforced for the first 200 trades. Some markets just have stuck consensus and will eat every entry attempt for the same reason every time. Moving on is cheaper than persistence.&lt;/p&gt;
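
&lt;p&gt;A minimal in-memory version of that cooldown (sketch only; names are mine, and the real bot would persist this across restarts):&lt;/p&gt;

```python
import time

COOLDOWN_SECONDS = 7 * 24 * 3600  # 7-day per-token lockout after a TIMEOUT loss

blocked_until = {}  # token_id mapped to the timestamp when re-entry is allowed

def record_timeout_loss(token_id, now=None):
    now = time.time() if now is None else now
    blocked_until[token_id] = now + COOLDOWN_SECONDS

def may_enter(token_id, now=None):
    now = time.time() if now is None else now
    return now >= blocked_until.get(token_id, 0.0)

record_timeout_loss("token-123", now=0.0)
print(may_enter("token-123", now=3600.0))            # False: still cooling down
print(may_enter("token-123", now=COOLDOWN_SECONDS))  # True: 7 days elapsed
```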




&lt;h2&gt;
  
  
  The infrastructure underneath
&lt;/h2&gt;

&lt;p&gt;Six tools shipped today, all built around the bot's actual operational lifecycle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/LuciferForge/polymarket-crash-bot" rel="noopener noreferrer"&gt;polymarket-crash-bot&lt;/a&gt;&lt;/strong&gt; — the bot itself. 308 trades. Open source.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://pypi.org/project/polymarket-mcp-pro/" rel="noopener noreferrer"&gt;polymarket-mcp-pro&lt;/a&gt;&lt;/strong&gt; — Polymarket data as MCP tools. Lets Claude/Cursor query live markets natively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://pypi.org/project/pnl-truthteller/" rel="noopener noreferrer"&gt;pnl-truthteller&lt;/a&gt;&lt;/strong&gt; — fill-level P&amp;amp;L audit. Read-only, wallet address only.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://pypi.org/project/quant-rollout/" rel="noopener noreferrer"&gt;quant-rollout&lt;/a&gt;&lt;/strong&gt; — staged rollout toolkit. Gates, kill switch, veto window. Pure stdlib, zero deps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://pypi.org/project/cross-signal-data/" rel="noopener noreferrer"&gt;cross-signal-data&lt;/a&gt;&lt;/strong&gt; — the 308-trade labeled dataset. CSV + Python loader + baseline notebook.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://pypi.org/project/sigil-ta/" rel="noopener noreferrer"&gt;sigil-ta&lt;/a&gt;&lt;/strong&gt; — MCP-native TA runtime. 8 indicators + 2 composite signals + Polymarket Sentiment Divergence.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All MIT. All have tests. All have public CI-ready repos. If they save you a few hours, star them.&lt;/p&gt;

&lt;p&gt;The bot also went through Polymarket's V1 → V2 migration today. That has its own writeup with a working code diff and a 12-error troubleshooting guide: &lt;a href="https://github.com/LuciferForge/polymarket-v2-migration" rel="noopener noreferrer"&gt;polymarket-v2-migration&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The honest accounting
&lt;/h2&gt;

&lt;p&gt;I would be lying if I said the bot is currently a money-making machine. It's break-even-ish in theoretical terms and slightly down once slippage is included. What it IS, with confidence:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A live, public-audited mean-reversion strategy on prediction markets&lt;/li&gt;
&lt;li&gt;A credible 80.2% trigger-level alpha on a real (not backtest) sample&lt;/li&gt;
&lt;li&gt;A complete data + tooling stack that lets anyone replicate and improve on it&lt;/li&gt;
&lt;li&gt;A textbook case of why "I made money in backtest" ≠ "I made money on-chain"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I ship it open source because the real value is in the data and the tooling, not in the entry logic. The entry logic is 500 lines anyone can read. The 308-trade labeled dataset and the slippage audit tool are what took months to produce.&lt;/p&gt;

&lt;p&gt;If you can do better than 80% on this dataset by adding features the bot doesn't currently log (orderbook depth at entry, market category metadata from gamma-api, time-to-resolution), you've found something worth more than my entry trigger. Go do it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bot source:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/polymarket-crash-bot" rel="noopener noreferrer"&gt;github.com/LuciferForge/polymarket-crash-bot&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dataset (free, MIT):&lt;/strong&gt; &lt;code&gt;pip install cross-signal-data&lt;/code&gt; — &lt;a href="https://github.com/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;github&lt;/a&gt; · &lt;a href="https://huggingface.co/datasets/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;HuggingFace&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slippage audit:&lt;/strong&gt; &lt;code&gt;pip install pnl-truthteller&lt;/code&gt; — &lt;a href="https://github.com/LuciferForge/pnl-truthteller" rel="noopener noreferrer"&gt;github&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TA runtime:&lt;/strong&gt; &lt;code&gt;pip install sigil-ta&lt;/code&gt; — &lt;a href="https://github.com/LuciferForge/sigil" rel="noopener noreferrer"&gt;github&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Polymarket data MCP:&lt;/strong&gt; &lt;code&gt;pip install polymarket-mcp-pro&lt;/code&gt; — &lt;a href="https://github.com/LuciferForge/polymarket-mcp" rel="noopener noreferrer"&gt;github&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free Polymarket data API:&lt;/strong&gt; &lt;a href="https://api.protodex.io" rel="noopener noreferrer"&gt;api.protodex.io&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you build something with the dataset, send me the link. If you find a feature that pushes the model past 80%, send me the diff. The bot's edge is mine until someone finds a better one — and at that point, I want to know what you found.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://github.com/LuciferForge" rel="noopener noreferrer"&gt;LuciferForge&lt;/a&gt; is a solo operator running a publicly audited Polymarket trading bot. Also runs &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt; (5,800+ MCP servers indexed) and the &lt;a href="https://api.protodex.io" rel="noopener noreferrer"&gt;free Polymarket data API&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>trading</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Sigil — An MCP-Native Technical Analysis Runtime That Talks to Claude</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Tue, 28 Apr 2026 15:57:01 +0000</pubDate>
      <link>https://dev.to/manja316/sigil-an-mcp-native-technical-analysis-runtime-that-talks-to-claude-3lja</link>
      <guid>https://dev.to/manja316/sigil-an-mcp-native-technical-analysis-runtime-that-talks-to-claude-3lja</guid>
      <description>&lt;p&gt;I needed a TA library that did three things I couldn't find together:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Indicators that aren't a stack of opinionated TA-Lib glue.&lt;/strong&gt; Pure Python, deterministic, zero runtime deps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composite signals my way, not LuxAlgo's.&lt;/strong&gt; No copying. Different naming. Different weighting.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP-native runtime so Claude / Cursor / Cline can query live signals without glue code.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So I built one. It's called &lt;strong&gt;Sigil&lt;/strong&gt; (&lt;code&gt;pip install sigil-ta&lt;/code&gt;), and I'm releasing v0.1.0 today.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's in the box
&lt;/h2&gt;

&lt;h3&gt;
  
  
  8 core indicators — pure Python, no numpy required
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Indicator&lt;/th&gt;
&lt;th&gt;Function&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Simple Moving Average&lt;/td&gt;
&lt;td&gt;&lt;code&gt;sma(closes, period)&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Exponential Moving Average&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ema(closes, period)&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wilder RSI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rsi(closes, period=14)&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MACD&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;macd(closes, 12, 26, 9)&lt;/code&gt; — returns line, signal, histogram&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bollinger Bands&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;bollinger_bands(closes, 20, 2.0)&lt;/code&gt; — upper/middle/lower&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wilder ATR&lt;/td&gt;
&lt;td&gt;&lt;code&gt;atr(series, 14)&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Supertrend&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;supertrend(series, 10, 3.0)&lt;/code&gt; — line + direction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stochastic&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;stochastic(series, 14, 3)&lt;/code&gt; — %K and %D&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;All deterministic, all return &lt;code&gt;list[float | None]&lt;/code&gt; aligned to input length, with &lt;code&gt;None&lt;/code&gt; during the warm-up period. No silent NaN propagation, no magic resampling, no exotic dependencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  2 composite signals — our naming, our weighting
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;reversion_score(series)&lt;/code&gt; → list of values in [-1, +1]&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Combines RSI deviation from 50, Bollinger Band position, and ATR-normalized deviation from the 20-bar SMA. Score of +1 means strongly oversold (likely reversion up); -1 means strongly overbought.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;momentum_composite(series)&lt;/code&gt; → list of values in [-1, +1]&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RSI deviation, MACD histogram (volatility-normalized), and Supertrend direction. +1 means strong sustained uptrend; -1 means strong downtrend.&lt;/p&gt;
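&lt;p&gt;Both composites share the same shape: normalize each component into [-1, +1], weight, average, clip. A sketch of that shape — the weights and component values here are illustrative, not Sigil's actual ones:&lt;/p&gt;

```python
def clip_unit(x: float) -> float:
    """Clamp a raw component score into [-1.0, +1.0]."""
    return max(-1.0, min(1.0, x))

def composite(components: list[float], weights: list[float]) -> float:
    """Weighted average of clipped components, itself clipped to [-1, +1]."""
    total = sum(weights)
    raw = sum(w * clip_unit(c) for c, w in zip(components, weights)) / total
    return clip_unit(raw)

# e.g. scaled RSI deviation, Bollinger position, Supertrend direction
score = composite([0.6, -0.2, 1.0], weights=[1.0, 1.0, 0.5])
print(round(score, 3))  # 0.36
```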

&lt;p&gt;I tested both on 500 hours of BTCUSDT 1h data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ReversionScore (long+short, 10bps fees):
  24 trades, 66.7% WR, total_pnl_pct -2.84%, avg_bars_held 14.0

MomentumComposite (long-only, 10bps fees):
  6 trades, 50.0% WR, total_pnl_pct +2.69%, avg_bars_held 35.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Real numbers from a real test. Not impressive, not unimpressive — exactly what you should expect from generic TA on a 500-bar window. The point isn't that these signals print money; the point is that they're honestly measurable, you can tune them, and they live next to a backtest harness with realistic fees and no look-ahead.&lt;/p&gt;

&lt;h3&gt;
  
  
  1 unique signal — Polymarket Sentiment Divergence (PSD)
&lt;/h3&gt;

&lt;p&gt;This is the one that doesn't exist in any other TA library, because it requires data only Polymarket has.&lt;/p&gt;

&lt;p&gt;PSD measures divergence between &lt;strong&gt;an underlying asset's price action&lt;/strong&gt; and &lt;strong&gt;the resolved sentiment of a related Polymarket prediction market&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Example: BTC trades on Binance. Polymarket has a contract "Will BTC be above $80,000 by month-end?" The Polymarket price is a sentiment time-series — when bettors think yes is more likely, the price rises; when they think no, it falls.&lt;/p&gt;

&lt;p&gt;PSD scores +1 when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BTC price is dropping&lt;/li&gt;
&lt;li&gt;AND the Polymarket "yes" probability is rising&lt;/li&gt;
&lt;li&gt;= bullish divergence — sentiment thinks the dip is a buying opportunity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;PSD scores -1 when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BTC price is rising&lt;/li&gt;
&lt;li&gt;AND the Polymarket "yes" probability is falling&lt;/li&gt;
&lt;li&gt;= bearish divergence — sentiment doesn't believe the rally&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When both move together, PSD is near zero (no signal).&lt;/p&gt;
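&lt;p&gt;A minimal sketch of the divergence logic — not Sigil's actual PSD formula; the lookback window and the hard ±1 scoring are simplifications:&lt;/p&gt;

```python
def divergence_score(closes: list[float], yes_probs: list[float],
                     window: int = 5) -> float:
    """+1 when price falls while the market's yes-probability rises,
    -1 for the opposite, 0 when the two move together."""
    price_chg = closes[-1] - closes[-window]
    sent_chg = yes_probs[-1] - yes_probs[-window]
    if price_chg * sent_chg >= 0:      # same direction: no divergence
        return 0.0
    return 1.0 if sent_chg > 0 else -1.0

# BTC dipping while the "yes" probability climbs: bullish divergence
print(divergence_score([100, 99, 98, 97, 96],
                       [0.52, 0.53, 0.55, 0.56, 0.58]))  # 1.0
```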

&lt;p&gt;Why this matters: pure-TA libraries (LuxAlgo, ta, pandas-ta, ta-lib) &lt;strong&gt;do not have access to prediction-market sentiment&lt;/strong&gt;. PSD captures information that a chart-only signal cannot see. It's an integration play, not a math play — anyone can copy the formula, but they need a Polymarket data layer to use it. Sigil includes the Polymarket fetcher (&lt;code&gt;fetch_polymarket_price_history(token_id)&lt;/code&gt;) so the integration is built in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backtest harness with realistic fees
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sigil.backtest&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;backtest_signal&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;backtest_signal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;series&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ohlcv_series&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;signal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;reversion_scores&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;entry_threshold&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;exit_threshold&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;fee_per_side_pct&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# 10 bps each side
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="c1"&gt;# Backtest: 24 trades, WR 66.7% (16/24), total_pnl_pct -2.84%, ...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Decisions on bar &lt;code&gt;i&lt;/code&gt;, fills at bar &lt;code&gt;i+1&lt;/code&gt; open. No look-ahead. Default 10 bps per side fee (realistic for retail crypto).&lt;/p&gt;
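&lt;p&gt;The no-look-ahead rule is easy to state in code: a signal computed on bar &lt;code&gt;i&lt;/code&gt; can only fill at bar &lt;code&gt;i+1&lt;/code&gt;'s open. An illustrative loop, not the harness's actual implementation:&lt;/p&gt;

```python
def next_bar_fills(opens, signal, threshold=0.5):
    """Yield (fill_bar, fill_price) for each bar whose signal crosses the
    entry threshold, filled at the NEXT bar's open, never the same bar."""
    for i in range(len(signal) - 1):          # the last bar can never fill
        s = signal[i]
        if s is not None and s >= threshold:  # None = indicator warm-up
            yield i + 1, opens[i + 1]         # decision on i, fill at i+1

fills = list(next_bar_fills([10.0, 10.2, 9.8, 10.1], [None, 0.7, 0.1, 0.9]))
print(fills)  # [(2, 9.8)]
```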

&lt;h3&gt;
  
  
  Live data fetchers
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sigil.data&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;fetch_binance_ohlcv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fetch_polymarket_price_history&lt;/span&gt;

&lt;span class="n"&gt;btc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;fetch_binance_ohlcv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BTCUSDT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1h&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;pm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;fetch_polymarket_price_history&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;token_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fidelity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both use only Python's stdlib for HTTP. Zero runtime dependencies in core. Network only when you call the fetcher.&lt;/p&gt;




&lt;h2&gt;
  
  
  The MCP angle: indicators as Claude tools
&lt;/h2&gt;

&lt;p&gt;If you've used Claude Desktop, Cursor, or Cline, you know how MCP tools work — they appear in the LLM's tool catalog and get called natively in the conversation.&lt;/p&gt;

&lt;p&gt;Sigil exposes 14 tools through FastMCP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;sigil-ta[mcp]
sigil-mcp                &lt;span class="c"&gt;# stdio transport (Claude Desktop / Cursor / Cline)&lt;/span&gt;
sigil-mcp &lt;span class="nt"&gt;--transport&lt;/span&gt; http &lt;span class="nt"&gt;--port&lt;/span&gt; 8765   &lt;span class="c"&gt;# HTTP transport&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add to &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sigil"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sigil-mcp"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then in Claude:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Fetch the last 200 hours of BTCUSDT and compute the RSI and Bollinger Bands."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Run a Supertrend backtest on ETHUSDT with default params."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"What's the MomentumComposite for SOLUSDT right now?"&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent calls &lt;code&gt;fetch_binance_ohlcv&lt;/code&gt;, then &lt;code&gt;compute_rsi&lt;/code&gt;, then &lt;code&gt;compute_bollinger_bands&lt;/code&gt;, and shows you the result. No glue code.&lt;/p&gt;

&lt;p&gt;This is how I run my own analysis now. I haven't opened a TradingView tab in two weeks. Claude does the indicator queries directly, and when I want a chart, I ask it to render one with &lt;code&gt;plotly&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Streamlit dashboard
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;sigil-ta[dashboard]
sigil-dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A simple chart-and-indicator UI for when you want to look at price visually rather than asking the agent. Pick a symbol, an interval, an indicator, see the output live. Backtest panel underneath for the composite signals.&lt;/p&gt;

&lt;p&gt;It's not a TradingView competitor — it's a 200-line proof that the same indicators work in a UI when you want one.&lt;/p&gt;




&lt;h2&gt;
  
  
  Design choices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Pure stdlib core.&lt;/strong&gt; No numpy, pandas, scipy required for indicators. Every implementation is plain Python doing arithmetic. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installs in milliseconds. No compilation.&lt;/li&gt;
&lt;li&gt;Runs on cold-start serverless.&lt;/li&gt;
&lt;li&gt;Never breaks because of a third-party version bump.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;pandas&lt;/code&gt;, &lt;code&gt;streamlit&lt;/code&gt;, &lt;code&gt;plotly&lt;/code&gt;, &lt;code&gt;mcp&lt;/code&gt;, &lt;code&gt;fastmcp&lt;/code&gt; are optional extras (&lt;code&gt;pip install sigil-ta[mcp]&lt;/code&gt;, &lt;code&gt;pip install sigil-ta[dashboard]&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Indicators return &lt;code&gt;list[float | None]&lt;/code&gt; aligned to input length.&lt;/strong&gt; This is unfashionable. Most libraries return numpy arrays with NaN warmup or pandas Series with index alignment. I find lists with explicit None more debuggable — when something's wrong, you can see exactly which bars are warmup and which are data.&lt;/p&gt;
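&lt;p&gt;For instance, an SMA under this convention pads the warm-up with explicit &lt;code&gt;None&lt;/code&gt; and stays aligned 1:1 with the input — a sketch of the convention, not Sigil's source:&lt;/p&gt;

```python
def sma(closes: list[float], period: int) -> list:
    """Simple moving average. None for the first period-1 warm-up bars;
    output has exactly the same length as the input."""
    out = []
    for i in range(len(closes)):
        if i + 1 >= period:
            out.append(sum(closes[i + 1 - period : i + 1]) / period)
        else:
            out.append(None)   # warm-up bar, visibly so
    return out

print(sma([1.0, 2.0, 3.0, 4.0], 3))  # [None, None, 2.0, 3.0]
```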

&lt;p&gt;&lt;strong&gt;Test coverage is the contract.&lt;/strong&gt; 47 tests at v0.1, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Constant-input behavior for every indicator&lt;/li&gt;
&lt;li&gt;Monotone-up RSI = 100, monotone-down = 0&lt;/li&gt;
&lt;li&gt;Stochastic at top of range = 100, at bottom = 0&lt;/li&gt;
&lt;li&gt;Bollinger Bands widen with volatility&lt;/li&gt;
&lt;li&gt;ATR converges on constant range&lt;/li&gt;
&lt;li&gt;Supertrend direction flips on trend reversal&lt;/li&gt;
&lt;li&gt;Backtest harness handles long-only, short-allowed, fee accounting&lt;/li&gt;
&lt;li&gt;All MCP tools register correctly&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/LuciferForge/sigil
&lt;span class="nb"&gt;cd &lt;/span&gt;sigil
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;".[dev,mcp]"&lt;/span&gt;
pytest tests/ &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;span class="c"&gt;# 47 passed in 3.18s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;v0.1 ships today with the core indicators + composite signals + PSD + backtest + MCP + dashboard. Roadmap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;v0.2&lt;/strong&gt; — More indicators: ADX, Ichimoku, OBV, VWAP, Heikin-Ashi&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;v0.3&lt;/strong&gt; — Walk-forward backtest with parameter sweeps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;v0.4&lt;/strong&gt; — Multi-timeframe signal fusion (15m + 1h + 4h alignment in one decision)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;v0.5&lt;/strong&gt; — Live signal alerting (Discord / Telegram / Slack)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;v0.6&lt;/strong&gt; — Public ground-truth ledger at signals.protodex.io&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If there's an indicator you'd use and don't see, &lt;a href="https://github.com/LuciferForge/sigil/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt; — happy to prioritize.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why ship this open source
&lt;/h2&gt;

&lt;p&gt;I run a Polymarket trading bot (&lt;a href="https://github.com/LuciferForge/polymarket-crash-bot" rel="noopener noreferrer"&gt;308 trades, 80.2% WR, all data public&lt;/a&gt;). I needed a TA library to ask "should I add an RSI filter to the entry signal?" and didn't want to wrap an existing library that has 50+ dependencies and 5,000 lines of pandas magic.&lt;/p&gt;

&lt;p&gt;So I built the simplest thing that worked. Then I made it MCP-native because that's how I do all my analysis now. Then I added PSD because that's the part that's actually mine.&lt;/p&gt;

&lt;p&gt;Open-sourcing it isn't charity. The TA logic isn't the moat — the moat is the integration with Polymarket data, and that data layer is also what I sell access to via &lt;a href="https://api.protodex.io" rel="noopener noreferrer"&gt;api.protodex.io&lt;/a&gt;. If you want indicators on stocks or crypto, Sigil works fine standalone; if you want PSD on prediction markets, you need the Polymarket data, and that's where the funnel is.&lt;/p&gt;

&lt;p&gt;But honestly, even if there were no funnel — building a TA library this clean was satisfying enough that the time would have been worth it for me alone.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/sigil" rel="noopener noreferrer"&gt;github.com/LuciferForge/sigil&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyPI:&lt;/strong&gt; &lt;a href="https://pypi.org/project/sigil-ta/" rel="noopener noreferrer"&gt;pypi.org/project/sigil-ta&lt;/a&gt; (&lt;code&gt;pip install sigil-ta&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP-only install:&lt;/strong&gt; &lt;code&gt;pip install sigil-ta[mcp]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard install:&lt;/strong&gt; &lt;code&gt;pip install sigil-ta[dashboard]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests:&lt;/strong&gt; &lt;code&gt;git clone &amp;amp;&amp;amp; pip install -e ".[dev,mcp]" &amp;amp;&amp;amp; pytest&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you ship a project on top of Sigil, send me the link.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://github.com/LuciferForge" rel="noopener noreferrer"&gt;LuciferForge&lt;/a&gt; runs a publicly audited Polymarket trading bot, &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt; (5,800+ MCP servers indexed), and the &lt;a href="https://api.protodex.io" rel="noopener noreferrer"&gt;free Polymarket data API&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>showdev</category>
    </item>
    <item>
      <title>I Shipped 6 Polymarket / Trading-Bot Tools Today — Here's What Each Does</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Tue, 28 Apr 2026 15:56:10 +0000</pubDate>
      <link>https://dev.to/manja316/i-shipped-6-polymarket-trading-bot-tools-today-heres-what-each-does-kcm</link>
      <guid>https://dev.to/manja316/i-shipped-6-polymarket-trading-bot-tools-today-heres-what-each-does-kcm</guid>
      <description>&lt;p&gt;I run a Polymarket crash-recovery bot (&lt;a href="https://github.com/LuciferForge/polymarket-crash-bot" rel="noopener noreferrer"&gt;308 closed trades, 80.2% WR, all data public&lt;/a&gt;). Polymarket migrated from V1 to V2 contracts today. While doing the migration on my own bot, I extracted six tools that solve problems any prediction-market or trading-bot operator runs into. All MIT, all on PyPI / GitHub / HuggingFace, all tested, all verified shipped.&lt;/p&gt;

&lt;p&gt;If you're running a Polymarket bot, you probably need 2-3 of these. If you're researching prediction markets, you definitely need the dataset. If you use Claude / Cursor / Cline for analysis, the MCP servers will make your life easier.&lt;/p&gt;

&lt;p&gt;Here's the entire stack in one place:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Install / Use&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://pypi.org/project/polymarket-mcp-pro/" rel="noopener noreferrer"&gt;polymarket-mcp-pro&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Polymarket data as MCP tools for Claude / Cursor / Cline&lt;/td&gt;
&lt;td&gt;&lt;code&gt;pip install polymarket-mcp-pro&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/LuciferForge/polymarket-v2-migration" rel="noopener noreferrer"&gt;polymarket-v2-migration&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cookbook + 12 errors from today's V1→V2 cutover&lt;/td&gt;
&lt;td&gt;&lt;code&gt;git clone …&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://pypi.org/project/pnl-truthteller/" rel="noopener noreferrer"&gt;pnl-truthteller&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Audit your bot's actual on-chain P&amp;amp;L vs DB-recorded&lt;/td&gt;
&lt;td&gt;&lt;code&gt;pip install pnl-truthteller&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://pypi.org/project/cross-signal-data/" rel="noopener noreferrer"&gt;cross-signal-data&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;308-trade labeled crash-recovery dataset + baseline notebook&lt;/td&gt;
&lt;td&gt;&lt;code&gt;pip install cross-signal-data&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://pypi.org/project/quant-rollout/" rel="noopener noreferrer"&gt;quant-rollout&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Staged-deployment toolkit (gates, kill switch, veto window)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;pip install quant-rollout&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://pypi.org/project/sigil-ta/" rel="noopener noreferrer"&gt;sigil-ta&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;MCP-native TA runtime with the unique Polymarket Sentiment Divergence&lt;/td&gt;
&lt;td&gt;&lt;code&gt;pip install sigil-ta&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Total tests across all 6: &lt;strong&gt;95&lt;/strong&gt;. All passing. Every claim in this post verifiable in 30 seconds via &lt;code&gt;pip install&lt;/code&gt; + a one-liner.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. polymarket-mcp-pro — Polymarket data as Claude/Cursor tools
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;polymarket-mcp-pro
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add to &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"polymarket"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"--from"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"polymarket-mcp-pro"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"polymarket-mcp"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then ask Claude:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;"What are the highest-volume Polymarket markets right now?"&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;"Which markets crashed &amp;gt;20% in the last 24 hours?"&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;"Show me the full order book for [token]."&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Seven tools (&lt;code&gt;list_markets&lt;/code&gt;, &lt;code&gt;get_market&lt;/code&gt;, &lt;code&gt;get_prices&lt;/code&gt;, &lt;code&gt;get_crashes&lt;/code&gt;, &lt;code&gt;get_categories&lt;/code&gt;, &lt;code&gt;get_orderbook&lt;/code&gt;, &lt;code&gt;get_stats&lt;/code&gt;). Backed by &lt;a href="https://api.protodex.io" rel="noopener noreferrer"&gt;api.protodex.io&lt;/a&gt; which indexes 9,500+ markets with a price snapshot every 15 minutes.&lt;/p&gt;

&lt;p&gt;This is how I do all my Polymarket research now. The agent calls the tools, I look at the result, ask follow-ups. No manual &lt;code&gt;curl | jq&lt;/code&gt; chains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/polymarket-mcp" rel="noopener noreferrer"&gt;github.com/LuciferForge/polymarket-mcp&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  2. polymarket-v2-migration — the cookbook for today's cutover
&lt;/h2&gt;

&lt;p&gt;If you have a bot still 503-ing after today's cutover:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The fix is two import rewrites and one kwarg rename. The cookbook has the diff.&lt;/li&gt;
&lt;li&gt;The wallet migration HAS to be done in the Polymarket UI with a ≥5-share trade. SDK methods do NOT trigger it. I tested every alternative.&lt;/li&gt;
&lt;li&gt;The "1 hour cutover" announcement was wrong; actual was 6 hours of mixed states.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The cookbook is documentation-only — examples, timeline, allowance details, 12 specific errors with fixes, and a smoke test that verifies whether your wallet is migrated and your SDK is V2-ready.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/polymarket-v2-migration" rel="noopener noreferrer"&gt;github.com/LuciferForge/polymarket-v2-migration&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  3. pnl-truthteller — find your bot's hidden slippage cost
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pnl-truthteller
pnl-truthteller &lt;span class="nt"&gt;--wallet&lt;/span&gt; 0xYourProxyAddress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read-only. Wallet address only. No private key, no API key.&lt;/p&gt;

&lt;p&gt;The story: my bot's DB said +$33 lifetime profit. The chain said -$89. Difference: &lt;strong&gt;-$122 of hidden slippage cost&lt;/strong&gt; across 308 trades that the bot literally couldn't see because it records P&amp;amp;L when orders are &lt;em&gt;placed&lt;/em&gt;, not when they &lt;em&gt;fill&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The tool reconciles each closed trade against on-chain fills (deduplicated by orderID — critical, because sweep retries log the same fill multiple times in your local DB). Outputs a Markdown report with by-exit-reason breakdown, worst-10 trades, and dust shares stranded on-chain.&lt;/p&gt;
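&lt;p&gt;The deduplication step is the part that bites people. A sketch of the idea — the field names here are illustrative, not the tool's actual schema:&lt;/p&gt;

```python
def reconcile(db_fills: list[dict], chain_fills: list[dict]) -> float:
    """Chain PnL minus DB PnL, after deduplicating DB rows by orderID.
    Sweep retries can log the same fill twice locally; keep the first."""
    seen = set()
    db_pnl = 0.0
    for fill in db_fills:
        if fill["orderID"] not in seen:
            seen.add(fill["orderID"])
            db_pnl += fill["pnl"]
    chain_pnl = sum(fill["pnl"] for fill in chain_fills)
    return chain_pnl - db_pnl

db = [{"orderID": "a", "pnl": 5.0},
      {"orderID": "a", "pnl": 5.0},   # duplicate row from a sweep retry
      {"orderID": "b", "pnl": -2.0}]
chain = [{"orderID": "a", "pnl": 4.1}, {"orderID": "b", "pnl": -2.6}]
print(round(reconcile(db, chain), 2))  # -1.5: hidden slippage
```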

&lt;p&gt;Most Polymarket bot operators have never done this audit. If you're one of them, do it once. Worst case you confirm you're profitable. Best case you find the same gap I did.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/pnl-truthteller" rel="noopener noreferrer"&gt;github.com/LuciferForge/pnl-truthteller&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  4. cross-signal-data — the 308-trade labeled dataset
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cross_signal_data&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load&lt;/span&gt;
&lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;                          &lt;span class="c1"&gt;# pandas DataFrame, 308 rows, 19 cols
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;is_profitable&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;    &lt;span class="c1"&gt;# 0.802
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the actual labeled outcomes of every closed trade from my bot — entry features, exit features, timestamps, P&amp;amp;L, exit reason. 19 columns, 308 rows.&lt;/p&gt;

&lt;p&gt;I trained a logistic regression and a random forest on it: &lt;strong&gt;79.9% CV accuracy&lt;/strong&gt; from 7 simple features. Translation: the trigger filter is doing 100% of the work. If you can beat 80% with feature engineering, you've found something the bot doesn't know.&lt;/p&gt;

&lt;p&gt;License: MIT for both code and data. Mirrored on HuggingFace at &lt;a href="https://huggingface.co/datasets/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;huggingface.co/datasets/LuciferForge/cross-signal-data&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;github.com/LuciferForge/cross-signal-data&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  5. quant-rollout — staged deployment for trading bots
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;quant-rollout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You changed a bot parameter. Did it actually help, or are you about to lose money?&lt;/p&gt;

&lt;p&gt;&lt;code&gt;quant-rollout&lt;/code&gt; adds canary → 10% → 50% → 100% rollouts to any bot. Per-stage gates (n trades + win rate + EV/$). Kill switch on losing streaks. Veto window for human override. Persistent state across restarts. Pure stdlib. Zero deps.&lt;/p&gt;

&lt;p&gt;Pure decision logic — the library returns &lt;code&gt;RolloutDecision&lt;/code&gt; objects, your code applies the config swap. This makes it trivially testable (no Telegram, no real bot, no actual config files in the test path).&lt;/p&gt;
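&lt;p&gt;To make the gate idea concrete, here is a minimal sketch of per-stage gate evaluation. The names (&lt;code&gt;StageGate&lt;/code&gt;, &lt;code&gt;evaluate&lt;/code&gt;) and thresholds are illustrative, not quant-rollout's actual API:&lt;/p&gt;

```python
from dataclasses import dataclass

# Illustrative gate logic only: StageGate and evaluate() are hypothetical
# names, not the actual quant-rollout API.
@dataclass
class StageGate:
    min_trades: int           # minimum sample size before judging the stage
    min_win_rate: float       # e.g. 0.70
    min_ev_per_dollar: float  # expected value per dollar staked

def evaluate(gate: StageGate, trades: list) -> str:
    """trades: (pnl_usd, stake_usd) pairs closed during the current stage."""
    if len(trades) < gate.min_trades:
        return "HOLD"         # not enough evidence yet; stay at this stage
    wins = sum(1 for pnl, _ in trades if pnl > 0)
    win_rate = wins / len(trades)
    ev = sum(pnl for pnl, _ in trades) / sum(stake for _, stake in trades)
    if win_rate >= gate.min_win_rate and ev >= gate.min_ev_per_dollar:
        return "PROMOTE"      # advance canary -> 10% -> 50% -> 100%
    return "ROLLBACK"         # gate failed: revert to the previous config

gate = StageGate(min_trades=10, min_win_rate=0.70, min_ev_per_dollar=0.02)
stage_trades = [(1.0, 10.0)] * 8 + [(-0.5, 10.0)] * 2
print(evaluate(gate, stage_trades))      # PROMOTE
print(evaluate(gate, stage_trades[:5]))  # HOLD
```

&lt;p&gt;Because the function only returns a decision string, the caller owns the config swap, which is what makes the real library testable without a live bot.&lt;/p&gt;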

&lt;p&gt;26 tests including end-to-end state machine simulation walking through every transition (NOOP → VETO_OPEN → VETO_EXPIRED → KILL_TRIPPED → recovery).&lt;/p&gt;

&lt;p&gt;I extracted this from my own bot's stage-tracker after running 3 successful parameter rollouts in 14 days. Drop-in for any trading bot you have.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/quant-rollout" rel="noopener noreferrer"&gt;github.com/LuciferForge/quant-rollout&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  6. sigil-ta — MCP-native TA library
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;sigil-ta              &lt;span class="c"&gt;# core, no deps&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;sigil-ta[mcp]         &lt;span class="c"&gt;# add Claude/Cursor tools&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;sigil-ta[dashboard]   &lt;span class="c"&gt;# add Streamlit dashboard&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;8 core indicators (SMA, EMA, RSI, MACD, BB, ATR, Supertrend, Stochastic) + 2 composite signals (ReversionScore, MomentumComposite) + &lt;strong&gt;the unique Polymarket Sentiment Divergence (PSD)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;PSD measures divergence between an asset's price action and the resolved sentiment of a related Polymarket prediction market. Pure-TA libraries (ta-lib, pandas-ta, LuxAlgo) cannot compute this because they have no prediction-market data. Sigil includes the Polymarket fetcher so the integration is built in.&lt;/p&gt;
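&lt;p&gt;A minimal sketch of the concept (illustrative only, not sigil-ta's actual PSD formula): compare the asset's recent return against the move in the related market's implied probability over the same lookback.&lt;/p&gt;

```python
def psd(asset_closes, market_probs, lookback=5):
    """Divergence between price momentum and prediction-market sentiment.
    A sketch of the concept only, not sigil-ta's exact formula."""
    asset_ret = asset_closes[-1] / asset_closes[-lookback - 1] - 1.0
    prob_move = market_probs[-1] - market_probs[-lookback - 1]
    # Positive: price is running ahead of what the related market believes.
    return asset_ret - prob_move

# Price up 5% over 5 bars while the related market's YES probability
# dropped 10 points: a strong positive divergence.
closes = [100, 101, 102, 103, 104, 105]
probs = [0.60, 0.58, 0.55, 0.53, 0.51, 0.50]
print(round(psd(closes, probs), 3))  # 0.15
```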

&lt;p&gt;14 MCP tools. Pure stdlib indicator core (no numpy required). Backtest harness with realistic fees and no look-ahead. 47 tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/sigil" rel="noopener noreferrer"&gt;github.com/LuciferForge/sigil&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why ship all 6 at once
&lt;/h2&gt;

&lt;p&gt;I had two options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option A:&lt;/strong&gt; Ship one project at a time over six weeks. More posts, more chances at front-pages, more sequential momentum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option B:&lt;/strong&gt; Ship all six on the day Polymarket V2 cutover happens, when there's a forced demand spike.&lt;/p&gt;

&lt;p&gt;In hindsight, Option B was the obvious choice. Today, every Polymarket bot operator on the planet is searching for "polymarket v2 migration" or "polymarket bot 503 error" or some variant. They land on my cookbook. From the cookbook, they discover &lt;code&gt;pnl-truthteller&lt;/code&gt; (which solves the slippage problem they didn't know they had). From there, the dataset, the rollout toolkit, the TA library.&lt;/p&gt;

&lt;p&gt;The whole stack reinforces itself. The bot validates the dataset. The dataset validates the trigger logic. The audit tool validates the bot's claimed numbers. The MCP server lets you query the data with Claude. The TA library gives you the indicator stack to do your own analysis. Everything cross-references.&lt;/p&gt;

&lt;p&gt;This is the kind of thing that's only possible because I built it all for myself first. I didn't sit down to "create a portfolio." I had a bot. I needed tools. I built tools. They ended up being independently useful so I'm shipping them.&lt;/p&gt;




&lt;h2&gt;
  
  
  How I built six things in one day
&lt;/h2&gt;

&lt;p&gt;I didn't, exactly. I built them over the past couple months for my own bot. Today was just the day I cleaned them up, wrote tests, made them pip-installable, and pushed them.&lt;/p&gt;

&lt;p&gt;The pattern that worked:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start from a real problem.&lt;/strong&gt; Every one of these tools was built to solve a thing I was hitting in operations. Not "this seems useful," not "this would be a good open-source project." Real-pain-now-fix-it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Write the test first when possible.&lt;/strong&gt; All 6 repos have meaningful test coverage (95 tests total across the stack). The tests caught real bugs during cleanup — RSI returning 100 on constant input (should be 50), kill switch evaluating wrong trade slice, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Verify before claiming.&lt;/strong&gt; I have a rule with myself: if I say "shipped," it means I've:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run the tests&lt;/li&gt;
&lt;li&gt;Built the wheel&lt;/li&gt;
&lt;li&gt;Installed it in a fresh venv&lt;/li&gt;
&lt;li&gt;Imported it&lt;/li&gt;
&lt;li&gt;Run the smoke test from the CLI&lt;/li&gt;
&lt;li&gt;Confirmed the GitHub repo exists with the expected commits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This rule caught a sub-agent's false claim about a missing GitHub repo today. Without it, distribution would have rested on a phantom dependency.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimize for the second user, not the first.&lt;/strong&gt; I'm the first user; I already know how to use these. The README, the CLI, the error messages, the example configs — all of those exist for the second person. If I can't read my own README and figure out how to install the package without context, neither can anyone else.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
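&lt;p&gt;The RSI constant-input bug above is the kind of thing a two-line test catches. A minimal illustrative RSI (my sketch, not the sigil-ta implementation) with the flat-series case handled:&lt;/p&gt;

```python
def rsi(closes, period=14):
    """Minimal Wilder-smoothed RSI; illustrative only."""
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        delta = cur - prev
        gains.append(max(delta, 0.0))
        losses.append(max(-delta, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_gain == avg_loss == 0:
        return 50.0   # flat input: neutral, NOT 100 (the bug the test caught)
    if avg_loss == 0:
        return 100.0  # pure uptrend
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

assert rsi([10.0] * 30) == 50.0  # constant input must read neutral
```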




&lt;h2&gt;
  
  
  The honest accounting
&lt;/h2&gt;

&lt;p&gt;I'm not pretending this is a $1M ARR business. It isn't. It's six tools, a public bot, a free API, and an MCP server index (&lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt;) — all running on roughly $200 of personal capital, all open source, all run by one person.&lt;/p&gt;

&lt;p&gt;What I'm betting on: the surface area of the stack is the moat, not any single tool. If you're building on Polymarket, you need data (api.protodex.io), you need the SDK done right (polymarket-v2-migration cookbook), you need the audit (pnl-truthteller), you need the labeled examples (cross-signal-data), you need the rollout discipline (quant-rollout), you need the TA stack (sigil-ta), and you need an MCP server to wire it all into your AI tools (polymarket-mcp-pro).&lt;/p&gt;

&lt;p&gt;I shipped all seven things. The market will tell me which are right.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;All 6 repos under one org:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge" rel="noopener noreferrer"&gt;github.com/LuciferForge&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bot itself (open source):&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/polymarket-crash-bot" rel="noopener noreferrer"&gt;github.com/LuciferForge/polymarket-crash-bot&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free Polymarket data API:&lt;/strong&gt; &lt;a href="https://api.protodex.io" rel="noopener noreferrer"&gt;api.protodex.io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP server index:&lt;/strong&gt; &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you build something on top of any of these, send me the link. If you find a bug, open an issue.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built in public by &lt;a href="https://github.com/LuciferForge" rel="noopener noreferrer"&gt;LuciferForge&lt;/a&gt;, a solo operator running a Polymarket trading bot and the &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt; infrastructure.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>showdev</category>
      <category>opensource</category>
      <category>trading</category>
    </item>
    <item>
      <title>Polymarket V2 Migration: What I Actually Hit Migrating a Live Bot Today</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Tue, 28 Apr 2026 15:56:05 +0000</pubDate>
      <link>https://dev.to/manja316/polymarket-v2-migration-what-i-actually-hit-migrating-a-live-bot-today-3kb3</link>
      <guid>https://dev.to/manja316/polymarket-v2-migration-what-i-actually-hit-migrating-a-live-bot-today-3kb3</guid>
      <description>&lt;p&gt;If you run a trading bot against Polymarket's CLOB, today's V2 cutover broke it. I migrated my live crash-recovery bot (&lt;a href="https://github.com/LuciferForge/polymarket-crash-bot" rel="noopener noreferrer"&gt;308 closed trades, 80.2% WR&lt;/a&gt;) in 4 hours during the cutover window. This post is exactly what I observed — not what the announcement said, but what actually happened on the wire.&lt;/p&gt;

&lt;p&gt;There's a companion repo with the working code, line-by-line diffs, and a smoke test you can run in 30 seconds: &lt;strong&gt;&lt;a href="https://github.com/LuciferForge/polymarket-v2-migration" rel="noopener noreferrer"&gt;github.com/LuciferForge/polymarket-v2-migration&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reading this without checking the cookbook is fine. Running a live bot through the cutover without the cookbook is how you lose hours.&lt;/p&gt;




&lt;h2&gt;
  
  
  The headline you missed
&lt;/h2&gt;

&lt;p&gt;Polymarket announced a 1-hour V2 cutover. The actual disruption was &lt;strong&gt;~6 hours of mixed states&lt;/strong&gt;, not 1. If your bot was running during the window and you were retrying every 30 seconds, you logged a few hundred 503s for nothing. Here's what those 6 hours looked like, hour by hour:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time (UTC)&lt;/th&gt;
&lt;th&gt;State&lt;/th&gt;
&lt;th&gt;What worked&lt;/th&gt;
&lt;th&gt;What didn't&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;11:00&lt;/td&gt;
&lt;td&gt;Cutover begins&lt;/td&gt;
&lt;td&gt;Reads (orderbook, balance, positions)&lt;/td&gt;
&lt;td&gt;New orders → HTTP 503 cancel-only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:00–13:00&lt;/td&gt;
&lt;td&gt;Pure cancel-only&lt;/td&gt;
&lt;td&gt;Cancels of pre-existing V1 orders&lt;/td&gt;
&lt;td&gt;Anything that posts a new order&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:00–14:30&lt;/td&gt;
&lt;td&gt;V2 contracts active, V1 SDK confused&lt;/td&gt;
&lt;td&gt;Migrated wallets via UI&lt;/td&gt;
&lt;td&gt;V1 SDK paths still 503-ing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14:30–16:00&lt;/td&gt;
&lt;td&gt;Mixed state&lt;/td&gt;
&lt;td&gt;V2 SDK + migrated wallets&lt;/td&gt;
&lt;td&gt;V1 SDK; un-migrated V2 wallets (allowance errors)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16:00 →&lt;/td&gt;
&lt;td&gt;Stable V2&lt;/td&gt;
&lt;td&gt;Everything&lt;/td&gt;
&lt;td&gt;Anyone still on V1 imports&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The bot saw &lt;code&gt;HTTP 503: Trading is currently cancel-only&lt;/code&gt; for the entire window. Nothing about that error tells you it'll last 6 hours. Polling the &lt;code&gt;are-orders-scoring&lt;/code&gt; flag wouldn't have helped either — the right behavior is "pause, walk away, resume when V2 is up".&lt;/p&gt;
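&lt;p&gt;A minimal sketch of that behavior: classify sustained cancel-only 503s as platform maintenance and pause, instead of retry-storming. Names and thresholds here are illustrative, not from any SDK.&lt;/p&gt;

```python
def should_pause(status: int, body: str, consecutive_503s: int,
                 threshold: int = 3) -> bool:
    """Sketch: treat repeated cancel-only 503s as platform maintenance.
    The right move is to pause the bot entirely, not retry every 30 s."""
    return (status == 503
            and "cancel-only" in body.lower()
            and consecutive_503s >= threshold)

print(should_pause(503, "Trading is currently cancel-only", 3))  # True
print(should_pause(503, "Trading is currently cancel-only", 1))  # False: transient blip
print(should_pause(400, "insufficient allowance", 10))           # False: your bug, not theirs
```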




&lt;h2&gt;
  
  
  What actually changed (it's not a version bump)
&lt;/h2&gt;

&lt;p&gt;The V2 migration is &lt;strong&gt;a new SDK package&lt;/strong&gt;, not a &lt;code&gt;pip install --upgrade&lt;/code&gt;. Polymarket shipped V2 as a parallel package so bots could be migrated incrementally rather than via big-bang upgrade.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;py-clob-client-v2   &lt;span class="c"&gt;# V2 SDK&lt;/span&gt;
&lt;span class="c"&gt;# Don't uninstall py-clob-client yet — keep it for rollback&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three changes in your code. That's it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gd"&gt;- from py_clob_client.client import ClobClient
- from py_clob_client.clob_types import (
&lt;/span&gt;&lt;span class="gi"&gt;+ from py_clob_client_v2.client import ClobClient
+ from py_clob_client_v2.clob_types import (
&lt;/span&gt;      ApiCreds, BalanceAllowanceParams, AssetType, MarketOrderArgs,
  )
&lt;span class="gd"&gt;- from py_clob_client.constants import POLYGON
&lt;/span&gt;&lt;span class="gi"&gt;+ from py_clob_client_v2.constants import POLYGON
&lt;/span&gt;  ...
&lt;span class="gd"&gt;- return client.post_order(signed_order, orderType="FOK")
&lt;/span&gt;&lt;span class="gi"&gt;+ return client.post_order(signed_order, order_type="FOK")
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two &lt;code&gt;import&lt;/code&gt; rewrites and one kwarg rename (&lt;code&gt;orderType&lt;/code&gt; → &lt;code&gt;order_type&lt;/code&gt;, snake_case in V2). If you forget the kwarg rename, you get a loud &lt;code&gt;TypeError: post_order() got an unexpected keyword argument 'orderType'&lt;/code&gt;. Loud errors are mercy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Contract addresses (changed) and the only way to migrate your wallet
&lt;/h2&gt;

&lt;p&gt;V2 ships with new contract addresses. The SDK has them baked in, but &lt;strong&gt;your wallet's allowances are not auto-migrated&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Contract&lt;/th&gt;
&lt;th&gt;V1&lt;/th&gt;
&lt;th&gt;V2&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CTF Exchange&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0x4bFb41d5B3570DeFd03C39a9A4D8dE6Bd8B8982E&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0xE111180000d2663C0091e4f400237545B87B996B&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NegRisk Exchange&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0xC5d563A36AE78145C45a50134d48A1215220f80a&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0xe2222d279d744050d28e00520010520000310F59&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Collateral&lt;/td&gt;
&lt;td&gt;USDC.e&lt;/td&gt;
&lt;td&gt;pUSD (1:1 backed by USDC.e)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CTF (token framework)&lt;/td&gt;
&lt;td&gt;unchanged&lt;/td&gt;
&lt;td&gt;unchanged&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Important: the only way to trigger your wallet's V2 migration is to manually trade ≥5 shares in the Polymarket UI.&lt;/strong&gt; I tried each of the SDK methods first because the docs hinted they might work. They don't:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ &lt;code&gt;update_balance_allowance()&lt;/code&gt; from the SDK — only refreshes the SDK's view of existing allowances.&lt;/li&gt;
&lt;li&gt;❌ Manually approving the V2 contracts on Polygonscan — Polymarket also requires the pUSD wrapper signature, which only happens in the UI flow.&lt;/li&gt;
&lt;li&gt;❌ Logging into polymarket.com without trading — login alone does not trigger the migration prompt.&lt;/li&gt;
&lt;li&gt;❌ Trading &amp;lt;5 shares — Polymarket gates the migration prompt behind a minimum.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The only thing that works:&lt;/strong&gt; go to polymarket.com, click Trade on any market, set size to ≥5 shares, sign all 3 wallet signatures (USDC.e → pUSD wrap, V2 CTF allowance, V2 NegRisk allowance). Costs about 0.1 MATIC in gas. Takes 60 seconds.&lt;/p&gt;

&lt;p&gt;After that, your wallet has V2 allowances and the V2 SDK works. Until then, you'll get &lt;code&gt;HTTP 400: insufficient allowance for V2 CTF Exchange&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you operate multiple proxy wallets, &lt;strong&gt;each one has to do its own UI flow&lt;/strong&gt;. There is no batched migration.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I actually did, in order
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time (UTC)&lt;/th&gt;
&lt;th&gt;Action&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;11:05&lt;/td&gt;
&lt;td&gt;Detected first 503. Paused bot via &lt;code&gt;MAX_ENTRY_PRICE = 0.0&lt;/code&gt; config flag.&lt;/td&gt;
&lt;td&gt;Bot stopped firing buys; existing positions kept being monitored.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:30&lt;/td&gt;
&lt;td&gt;Read V2 announcement + SDK release notes.&lt;/td&gt;
&lt;td&gt;Confirmed import + kwarg changes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:00&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;pip install py-clob-client-v2&lt;/code&gt;. Did NOT uninstall V1.&lt;/td&gt;
&lt;td&gt;Both SDKs available; rollback path preserved.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:15&lt;/td&gt;
&lt;td&gt;&lt;code&gt;sed -i '' 's|from py_clob_client.|from py_clob_client_v2.|g' *.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;All V1 imports rewritten to the V2 package.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:20&lt;/td&gt;
&lt;td&gt;&lt;code&gt;sed -i '' 's|orderType="FOK"|order_type="FOK"|g' *.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kwarg renamed at every call site.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:30&lt;/td&gt;
&lt;td&gt;Smoke test: &lt;code&gt;python -c "from py_clob_client_v2.client import ClobClient; print('ok')"&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Pass.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:45&lt;/td&gt;
&lt;td&gt;Opened polymarket.com, traded 5 shares on a small market, signed migration approvals.&lt;/td&gt;
&lt;td&gt;~$0.01 in MATIC gas. Wallet now holds pUSD; V2 allowances appeared in &lt;code&gt;get_balance_allowance&lt;/code&gt; response.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:00&lt;/td&gt;
&lt;td&gt;Manually posted a 1-share order via my updated bot to verify V2 path.&lt;/td&gt;
&lt;td&gt;Order filled on-chain.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14:00&lt;/td&gt;
&lt;td&gt;Resumed bot at half-size (&lt;code&gt;POSITION_SIZE_USD = 5.0&lt;/code&gt; instead of 10).&lt;/td&gt;
&lt;td&gt;First 3 orders went through cleanly.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17:00&lt;/td&gt;
&lt;td&gt;Confirmed 6h of clean operation.&lt;/td&gt;
&lt;td&gt;Returned to normal POSITION_SIZE.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Total wall time: ~4 hours of focus, plus passive waiting for V2 to stabilize.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 12 errors I hit (with fixes)
&lt;/h2&gt;

&lt;p&gt;The cookbook's &lt;a href="https://github.com/LuciferForge/polymarket-v2-migration/blob/main/docs/troubleshooting.md" rel="noopener noreferrer"&gt;troubleshooting guide&lt;/a&gt; has all 12 with exact fixes. The most common ones:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;ModuleNotFoundError: No module named 'py_clob_client_v2'&lt;/code&gt;&lt;/strong&gt; → &lt;code&gt;pip install py-clob-client-v2&lt;/code&gt; into the right venv. If you use launchd or systemd, double-check the interpreter path.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;TypeError: post_order() got an unexpected keyword argument 'orderType'&lt;/code&gt;&lt;/strong&gt; → grep for &lt;code&gt;orderType=&lt;/code&gt; and replace with &lt;code&gt;order_type=&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;HTTP 503: Trading is currently cancel-only&lt;/code&gt;&lt;/strong&gt; → Polymarket-side maintenance. Pause your bot, wait. Don't retry-storm.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;HTTP 400: insufficient allowance for V2 CTF Exchange&lt;/code&gt;&lt;/strong&gt; → wallet not migrated. Go to polymarket.com and trade 5 shares.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;Decimal precision mismatch on amount&lt;/code&gt;&lt;/strong&gt; → V2 is stricter. Wrap your amount in &lt;code&gt;round(amount_usd, 2)&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Order accepted but never settles on-chain&lt;/strong&gt; → your wallet has no pUSD balance. The migration trade wraps USDC.e to pUSD; if you skipped that step, accepts will fail at settlement.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The full list with code is in &lt;code&gt;docs/troubleshooting.md&lt;/code&gt; of the cookbook repo.&lt;/p&gt;




&lt;h2&gt;
  
  
  The slippage problem V2 doesn't fix
&lt;/h2&gt;

&lt;p&gt;If your bot used a price-discount ladder (e.g., 5%, 15%, 25% below ref price) for TIMEOUT exits on thin markets, &lt;strong&gt;V2 doesn't fix that.&lt;/strong&gt; V2 fixes nonce-related failed-fill bugs in the matching layer — it doesn't change order-book depth.&lt;/p&gt;

&lt;p&gt;I ran a separate audit on my bot's lifetime P&amp;amp;L vs. on-chain fills using a tool I open-sourced today: &lt;a href="https://github.com/LuciferForge/pnl-truthteller" rel="noopener noreferrer"&gt;&lt;code&gt;pnl-truthteller&lt;/code&gt;&lt;/a&gt;. Result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Lifetime theoretical P&amp;amp;L:  +$33.49
Lifetime actual P&amp;amp;L:       -$89.01
Total slippage cost:       -$122.50  (-365.8% of theoretical)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
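&lt;p&gt;The slippage line is just the gap between the two P&amp;amp;L figures:&lt;/p&gt;

```python
theoretical_pnl = 33.49  # what the bot's local DB claims it made
actual_pnl = -89.01      # what on-chain fills actually netted
slippage = actual_pnl - theoretical_pnl
pct_of_theoretical = 100 * slippage / theoretical_pnl
print(round(slippage, 2), round(pct_of_theoretical, 1))  # -122.5 -365.8
```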



&lt;p&gt;That's 302 closed trades, 4-5 months of operation, and the difference between "I'm slightly profitable" and "I'm clearly not, and I should change the exit ladder." If you've never run that reconciliation against your bot's records, you don't know what you're really making.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pnl-truthteller
pnl-truthteller &lt;span class="nt"&gt;--wallet&lt;/span&gt; 0xYourProxyAddress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read-only, wallet address only, no API key. Outputs a Markdown report with by-exit-reason breakdown, worst-10 trades, and dust shares stranded on-chain.&lt;/p&gt;




&lt;h2&gt;
  
  
  The deeper lesson: V2 wasn't really about V2
&lt;/h2&gt;

&lt;p&gt;Every DeFi protocol upgrade exposes which bots are well-coupled to the protocol's own SDK abstractions and which ones have hardcoded addresses, hardcoded version assumptions, or naive retry loops.&lt;/p&gt;

&lt;p&gt;What V2 actually tested:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Whether your bot detects cancel-only mode cleanly (most don't)&lt;/li&gt;
&lt;li&gt;Whether you have a kill switch that pauses the bot on first sign of a system-wide error (most don't)&lt;/li&gt;
&lt;li&gt;Whether your slippage cost is measured against on-chain reality or against your own DB's optimism (most operators never check)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If any of those answers were "no", today is the day to fix them. I open-sourced three things that handle exactly these failure modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/LuciferForge/polymarket-v2-migration" rel="noopener noreferrer"&gt;polymarket-v2-migration&lt;/a&gt;&lt;/strong&gt; — the cookbook for this specific cutover. 12 errors with fixes, smoke test, hour-by-hour timeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/LuciferForge/pnl-truthteller" rel="noopener noreferrer"&gt;pnl-truthteller&lt;/a&gt;&lt;/strong&gt; — fill-level P&amp;amp;L audit. Find your hidden slippage. (&lt;code&gt;pip install pnl-truthteller&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/LuciferForge/quant-rollout" rel="noopener noreferrer"&gt;quant-rollout&lt;/a&gt;&lt;/strong&gt; — staged-deployment toolkit with kill switch and veto window. Stops you from shipping bad params into production. (&lt;code&gt;pip install quant-rollout&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All MIT, all zero-dep cores, all PyPI-published. If they save you a few hours over the next week, star them.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bot source:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/polymarket-crash-bot" rel="noopener noreferrer"&gt;github.com/LuciferForge/polymarket-crash-bot&lt;/a&gt; — the same bot that ran through today's migration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;V2 cookbook:&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/polymarket-v2-migration" rel="noopener noreferrer"&gt;github.com/LuciferForge/polymarket-v2-migration&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smoke test:&lt;/strong&gt; clone the cookbook and run &lt;code&gt;python tests/test_v2_imports.py&lt;/code&gt; against your env. Tells you whether your wallet is migrated and your SDK is V2-ready.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free Polymarket data API:&lt;/strong&gt; &lt;a href="https://api.protodex.io" rel="noopener noreferrer"&gt;api.protodex.io&lt;/a&gt; — 9,500+ markets, no key required for basic access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Labeled crash-recovery dataset (free):&lt;/strong&gt; &lt;a href="https://github.com/LuciferForge/cross-signal-data" rel="noopener noreferrer"&gt;github.com/LuciferForge/cross-signal-data&lt;/a&gt; — 308 trades, 80.2% WR, real markets. &lt;code&gt;pip install cross-signal-data&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your bot is still 503-ing right now, the cookbook tells you whether it's the SDK or your wallet allowances. If it's silent, run pnl-truthteller. The chain is the source of truth.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://github.com/LuciferForge" rel="noopener noreferrer"&gt;LuciferForge&lt;/a&gt; is a solo operator running a public-audited Polymarket trading bot. Also runs &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt; (5,800+ MCP servers indexed) and the &lt;a href="https://api.protodex.io" rel="noopener noreferrer"&gt;free Polymarket data API&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>trading</category>
      <category>tutorial</category>
      <category>showdev</category>
    </item>
    <item>
      <title>We Index 2,013 MCP Servers and Security-Score Every One — Here's What We Found</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Sat, 18 Apr 2026 03:32:11 +0000</pubDate>
      <link>https://dev.to/manja316/we-index-2013-mcp-servers-and-security-score-every-one-heres-what-we-found-2k6i</link>
      <guid>https://dev.to/manja316/we-index-2013-mcp-servers-and-security-score-every-one-heres-what-we-found-2k6i</guid>
      <description>&lt;p&gt;There are over 40,000 MCP server repositories on GitHub right now. That number was 5,000 six months ago.&lt;/p&gt;

&lt;p&gt;The Model Context Protocol is eating the AI tooling world — but nobody is checking whether these servers are safe to run. We built &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;Protodex&lt;/a&gt; to fix that.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;When you install an MCP server, you're giving it access to your filesystem, your APIs, your databases, and your shell. The &lt;code&gt;@modelcontextprotocol/sdk&lt;/code&gt; makes it trivially easy to build a server. The result: thousands of servers published by developers who never thought about security.&lt;/p&gt;

&lt;p&gt;We know this because &lt;strong&gt;we've been scanning them&lt;/strong&gt;. Out of the 2,013 servers we've indexed, we found vulnerabilities serious enough to file bounty reports — and &lt;a href="https://protodex.io/security.html" rel="noopener noreferrer"&gt;$4,725 has been confirmed for payout&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Protodex Does
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;Protodex.io&lt;/a&gt; is a searchable directory of 2,013 MCP servers across 13 categories:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Servers&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AI/LLM&lt;/td&gt;
&lt;td&gt;974&lt;/td&gt;
&lt;td&gt;Claude integrations, embedding servers, agent frameworks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code/Dev Tools&lt;/td&gt;
&lt;td&gt;224&lt;/td&gt;
&lt;td&gt;GitHub, GitLab, Jira, IDE integrations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API Integration&lt;/td&gt;
&lt;td&gt;116&lt;/td&gt;
&lt;td&gt;Slack, Discord, Stripe, Twilio connectors&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory/Knowledge&lt;/td&gt;
&lt;td&gt;111&lt;/td&gt;
&lt;td&gt;RAG servers, knowledge graphs, note-taking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Database&lt;/td&gt;
&lt;td&gt;87&lt;/td&gt;
&lt;td&gt;Postgres, MongoDB, Redis, Supabase MCP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security&lt;/td&gt;
&lt;td&gt;67&lt;/td&gt;
&lt;td&gt;Vulnerability scanners, auth servers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Browser/Web&lt;/td&gt;
&lt;td&gt;58&lt;/td&gt;
&lt;td&gt;Playwright, puppeteer, web scrapers&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every server listing shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source repository and stars&lt;/li&gt;
&lt;li&gt;Language and last update&lt;/li&gt;
&lt;li&gt;Category and description&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security indicators&lt;/strong&gt; (this is what makes us different)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How We Built It
&lt;/h2&gt;

&lt;p&gt;The architecture is simple by design:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Scraper&lt;/strong&gt; — 25 GitHub search queries run weekly, catching new MCP servers across Python, TypeScript, Go, Rust, Java, and C#&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Categorizer&lt;/strong&gt; — keyword-based classification into 13 categories&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static site generator&lt;/strong&gt; — builds 2,000+ HTML pages with per-server detail pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-deploy&lt;/strong&gt; — git push to GitHub Pages every Monday at 6 AM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security scanner&lt;/strong&gt; — runs &lt;a href="https://github.com/LuciferForge/mcp-security-audit" rel="noopener noreferrer"&gt;mcp-security-audit&lt;/a&gt; on indexed servers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The entire pipeline runs unattended. Zero manual work after setup.&lt;/p&gt;
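&lt;p&gt;The categorizer step is a few lines of keyword matching. An illustrative subset of the mapping (the real one covers all 13 categories):&lt;/p&gt;

```python
# Illustrative subset of the keyword map; the production categorizer
# covers all 13 categories.
CATEGORY_KEYWORDS = {
    "Database": ["postgres", "mongodb", "redis", "sql", "supabase"],
    "Browser/Web": ["playwright", "puppeteer", "scraper", "browser"],
    "AI/LLM": ["llm", "embedding", "agent", "rag", "claude"],
}

def categorize(description: str) -> str:
    text = description.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return category
    return "Other"  # uncategorized servers fall into a review bucket

print(categorize("MCP server exposing Postgres queries as tools"))   # Database
print(categorize("Playwright-based web automation for coding bots")) # Browser/Web
```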

&lt;h2&gt;
  
  
  What We Found Scanning MCP Servers
&lt;/h2&gt;

&lt;p&gt;When we audited servers from the official Anthropic and community collections, we found:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SSRF vulnerabilities&lt;/strong&gt; in servers that fetch URLs without validation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Path traversal&lt;/strong&gt; in file-serving MCP servers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQL injection&lt;/strong&gt; in database MCP servers that pass user input to queries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Command injection&lt;/strong&gt; in servers that shell out to system commands&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pickle deserialization&lt;/strong&gt; in ML model servers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't theoretical. We filed reports with Huntr and MSRC. $4,725 has been confirmed for the first batch, with 74+ additional findings in the pipeline.&lt;/p&gt;

&lt;p&gt;The typical pattern: a developer builds an MCP server to solve their problem, publishes it to GitHub, and never thinks about what happens when the input comes from an untrusted source. But MCP servers receive prompts that could contain anything — including attack payloads.&lt;/p&gt;
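&lt;p&gt;For a concrete picture, here is the kind of guard whose absence produces the path-traversal findings above. This is a generic sketch, not code from any audited server, and &lt;code&gt;ROOT&lt;/code&gt; is a made-up serving directory:&lt;/p&gt;

```python
import os

# Generic path-traversal guard: resolve the requested path and verify it
# stays inside the served directory. ROOT is a hypothetical example.
ROOT = "/srv/mcp-files"

def is_safe(user_path: str) -> bool:
    full = os.path.realpath(os.path.join(ROOT, user_path))
    return full == ROOT or full.startswith(ROOT + os.sep)

print(is_safe("notes/readme.txt"), is_safe("../../etc/passwd"))
```

&lt;p&gt;The same shape of check applies to the other classes: validate the resolved host before fetching (SSRF), parameterize queries (SQL injection), never pass prompt-derived strings to a shell (command injection).&lt;/p&gt;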

&lt;h2&gt;
  
  
  Why Not Just Use mcp.so or Smithery?
&lt;/h2&gt;

&lt;p&gt;Fair question. Here's the honest comparison:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;mcp.so&lt;/th&gt;
&lt;th&gt;Smithery&lt;/th&gt;
&lt;th&gt;Protodex&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Backed by&lt;/td&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;VC-funded&lt;/td&gt;
&lt;td&gt;Independent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Server count&lt;/td&gt;
&lt;td&gt;~200 curated&lt;/td&gt;
&lt;td&gt;~500&lt;/td&gt;
&lt;td&gt;2,013&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security scores&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open source&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auto-updates&lt;/td&gt;
&lt;td&gt;Unknown&lt;/td&gt;
&lt;td&gt;Unknown&lt;/td&gt;
&lt;td&gt;Weekly (GitHub scraper)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vulnerability research&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$4,725 in bounties&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Our moat is security. Nobody else scans MCP servers for vulnerabilities and publishes the results. We use our own tools (&lt;a href="https://pypi.org/project/ai-injection-guard/" rel="noopener noreferrer"&gt;ai-injection-guard&lt;/a&gt;, &lt;a href="https://pypi.org/project/mcp-security-audit/" rel="noopener noreferrer"&gt;mcp-security-audit&lt;/a&gt;) to do the analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Auto-Refresh Pipeline
&lt;/h2&gt;

&lt;p&gt;Every Monday at 6 AM, a launchd job:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Runs 25 GitHub search queries for new MCP servers&lt;/li&gt;
&lt;li&gt;Indexes new servers into a SQLite database&lt;/li&gt;
&lt;li&gt;Categorizes them by keyword matching&lt;/li&gt;
&lt;li&gt;Exports to JSON&lt;/li&gt;
&lt;li&gt;Builds 2,000+ static HTML pages&lt;/li&gt;
&lt;li&gt;Git pushes to GitHub Pages&lt;/li&gt;
&lt;li&gt;Sends a Telegram notification with the count&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The pipeline has been running since March. We went from 1,629 servers to 2,013 in the last refresh — 384 new servers discovered in one week.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Protodex
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Browse&lt;/strong&gt;: &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt; — search by keyword, filter by category&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contribute&lt;/strong&gt;: &lt;a href="https://github.com/LuciferForge/mcp-directory" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; — open source, PRs welcome&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Found a vulnerable MCP server? Email &lt;a href="mailto:LuciferForge@proton.me"&gt;LuciferForge@proton.me&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also maintain a &lt;a href="https://manja8.gumroad.com/l/polymarket-data" rel="noopener noreferrer"&gt;Polymarket historical dataset&lt;/a&gt; (8.9M data points) and &lt;a href="https://pypi.org/project/ai-injection-guard/" rel="noopener noreferrer"&gt;AI security tools on PyPI&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Protodex is built by &lt;a href="https://github.com/LuciferForge" rel="noopener noreferrer"&gt;LuciferForge&lt;/a&gt; — an independent security research lab focused on AI agent safety.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>security</category>
      <category>ai</category>
      <category>claude</category>
    </item>
    <item>
      <title>84% of Polymarket Traders Lose Money — I Backtested 6,225 Trades to Find What Actually Works</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Sat, 18 Apr 2026 03:32:07 +0000</pubDate>
      <link>https://dev.to/manja316/84-of-polymarket-traders-lose-money-i-backtested-6225-trades-to-find-what-actually-works-3n32</link>
      <guid>https://dev.to/manja316/84-of-polymarket-traders-lose-money-i-backtested-6225-trades-to-find-what-actually-works-3n32</guid>
      <description>&lt;p&gt;A &lt;a href="https://www.casino.org/news/84-of-polymarket-traders-arent-profitable/" rel="noopener noreferrer"&gt;study of 2.5 million wallets&lt;/a&gt; found that 84% of Polymarket traders lose money. But nobody asks the obvious question: &lt;strong&gt;what are the 16% doing differently?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I backtested 6,225 trades on 30 days of real data to find out.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Losing Strategy: Bet on Outcomes
&lt;/h2&gt;

&lt;p&gt;Most Polymarket users bet like this: "I think Trump will win, so I'll buy YES at $0.55."&lt;/p&gt;

&lt;p&gt;This is gambling with extra steps. You're predicting the future — and prediction is hard. The house (the market) usually prices things roughly right, so your edge is near zero.&lt;/p&gt;

&lt;p&gt;The data shows this clearly. Across 9,550 markets I tracked for 30 days, the average price change between snapshots was &lt;strong&gt;-0.002%&lt;/strong&gt;. Essentially random noise. If you buy randomly and hold, you lose to fees.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Winning Strategy: Bet on Price Patterns
&lt;/h2&gt;

&lt;p&gt;The 16% who win aren't smarter about geopolitics or sports. They exploit a structural pattern: &lt;strong&gt;markets overreact to bad news and self-correct within hours.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's the pattern from 5,629 crash events in my dataset:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;After a &amp;gt;20% price drop:
  +15 min:  +6.6% average bounce
  +30 min:  +8.8% average bounce
  +45 min: +10.3% average bounce
  +1 hour: +11.0% average bounce
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't theoretical. It's measured across every category — politics, sports, crypto, geopolitics — over 30 continuous days.&lt;/p&gt;
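&lt;p&gt;The measurement itself is easy to reproduce. A toy version for a single market, assuming a &lt;code&gt;prices&lt;/code&gt; table with one row per 15-minute snapshot (the price path below is made up):&lt;/p&gt;

```python
import pandas as pd

# Toy bounce measurement: flag drops steeper than -20% between consecutive
# 15-minute snapshots, then average the return one snapshot later.
prices = pd.DataFrame({
    "market_id": ["m1"] * 6,
    "price": [0.50, 0.38, 0.41, 0.43, 0.44, 0.45],  # crash, then recovery
})
prices["change"] = prices.groupby("market_id")["price"].pct_change()
crash_idx = prices.index[prices["change"].lt(-0.20)]

# Return +15 min (one snapshot) after each crash
fwd = [prices.loc[i + 1, "price"] / prices.loc[i, "price"] - 1
       for i in crash_idx if i + 1 in prices.index]
print(f"avg +15 min return after crash: {sum(fwd) / len(fwd):+.1%}")
```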

&lt;h2&gt;
  
  
  The Backtest: 6,225 Simulated Trades
&lt;/h2&gt;

&lt;p&gt;Strategy rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Entry&lt;/strong&gt;: Buy when a market drops &amp;gt;15% from its rolling 1-hour high&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exit&lt;/strong&gt;: Sell when price recovers to 90% of pre-crash level&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timeout&lt;/strong&gt;: Force exit after 24 hours if no recovery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Size&lt;/strong&gt;: $1 per trade&lt;/li&gt;
&lt;/ul&gt;
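
&lt;p&gt;A single trade under these rules can be sketched in a few lines (toy price path; the real backtest also enforces the forced-exit clock, which this sketch approximates by falling back to the last snapshot):&lt;/p&gt;

```python
# Minimal single-trade sketch: enter on a 15% drop from the rolling high,
# exit at 90% of the pre-crash level, otherwise exit on the last snapshot.
path = [0.60, 0.48, 0.50, 0.53, 0.55, 0.56]   # 15-minute snapshots
rolling_high = 0.60
entry_i = next(i for i, p in enumerate(path) if rolling_high * 0.85 > p)
entry = path[entry_i]
target = rolling_high * 0.90                   # 90% of pre-crash level

exit_price, reason = path[-1], "timeout"       # forced-exit fallback
for p in path[entry_i + 1:]:
    if p >= target:
        exit_price, reason = p, "recovery"
        break
print(f"entry {entry:.2f}, exit {exit_price:.2f} ({reason}), "
      f"PnL per $1: {exit_price / entry - 1:+.1%}")
```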

&lt;p&gt;Results:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total trades&lt;/td&gt;
&lt;td&gt;6,225&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Win rate&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;75%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Recovery exits&lt;/td&gt;
&lt;td&gt;4,388 (71%) — avg hold 4.1 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Timeout exits&lt;/td&gt;
&lt;td&gt;1,790 (29%) — avg hold 24 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total P&amp;amp;L&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+$135 total&lt;/strong&gt; (at $1 per trade)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key insight: &lt;strong&gt;71% of trades recover within the first few hours.&lt;/strong&gt; The ones that don't are the timeouts — and they're the only source of losses.&lt;/p&gt;

&lt;h2&gt;
  
  
  What The Data Says About Hold Time
&lt;/h2&gt;

&lt;p&gt;This was the most surprising finding. I ran the same strategy with different maximum hold periods:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Max Hold&lt;/th&gt;
&lt;th&gt;Win Rate&lt;/th&gt;
&lt;th&gt;P&amp;amp;L&lt;/th&gt;
&lt;th&gt;Capital Efficiency&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2 hours&lt;/td&gt;
&lt;td&gt;54%&lt;/td&gt;
&lt;td&gt;$87&lt;/td&gt;
&lt;td&gt;Highest throughput&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6 hours&lt;/td&gt;
&lt;td&gt;64%&lt;/td&gt;
&lt;td&gt;$108&lt;/td&gt;
&lt;td&gt;Good balance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;12 hours&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;70%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$121&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Optimal&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24 hours&lt;/td&gt;
&lt;td&gt;75%&lt;/td&gt;
&lt;td&gt;$135&lt;/td&gt;
&lt;td&gt;Diminishing returns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;48 hours&lt;/td&gt;
&lt;td&gt;81%&lt;/td&gt;
&lt;td&gt;$142&lt;/td&gt;
&lt;td&gt;Capital trap&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Going from 12 to 48 hours only adds $21 but locks your capital 4x longer. The smart money exits fast.&lt;/p&gt;
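
&lt;p&gt;A rough way to see this: divide total P&amp;amp;L by the maximum time each dollar can be locked up. This ignores that most trades exit early, so it understates efficiency across the board, but the ranking holds:&lt;/p&gt;

```python
# Capital efficiency: total PnL divided by the maximum hours each dollar
# can be locked up. Figures come from the hold-time table above.
holds = {12: 121, 24: 135, 48: 142}   # max hold in hours -> total PnL in $
for hours, pnl in holds.items():
    print(f"{hours}h max hold: ${pnl} total, {pnl / hours:.2f} $/hour locked")
```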

&lt;h2&gt;
  
  
  Categories Are Not Equal
&lt;/h2&gt;

&lt;p&gt;If you're going to trade crashes, pick your battlefield:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Win Rate&lt;/th&gt;
&lt;th&gt;P&amp;amp;L Per Trade&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Crypto&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;78%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.030&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Sports&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;79%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.027&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Other&lt;/td&gt;
&lt;td&gt;75%&lt;/td&gt;
&lt;td&gt;$0.024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Politics&lt;/td&gt;
&lt;td&gt;76%&lt;/td&gt;
&lt;td&gt;$0.018&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Geopolitics&lt;/td&gt;
&lt;td&gt;71%&lt;/td&gt;
&lt;td&gt;$0.016&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;del&gt;Economics&lt;/del&gt;&lt;/td&gt;
&lt;td&gt;69%&lt;/td&gt;
&lt;td&gt;$0.008&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;del&gt;Weather&lt;/del&gt;&lt;/td&gt;
&lt;td&gt;57%&lt;/td&gt;
&lt;td&gt;Negative&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Crypto and sports markets mean-revert the hardest. Economics and weather markets crash and stay crashed — the "bad news" in those categories usually reflects real fundamental changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Works (And Why Most People Miss It)
&lt;/h2&gt;

&lt;p&gt;Prediction markets have a structural inefficiency: &lt;strong&gt;emotional overreaction followed by rational correction.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When a political scandal breaks, YES holders panic-sell and the price drops 30% in minutes. But most scandals don't actually change the outcome probability by 30%. So the price drifts back.&lt;/p&gt;

&lt;p&gt;Professional market makers know this. They provide liquidity during crashes and profit from the bounce. This strategy is essentially doing the same thing — buying the panic, selling the recovery.&lt;/p&gt;

&lt;p&gt;The 84% who lose are betting on outcomes (Will X happen?). The 16% who win are betting on price behavior (Will this crash recover?). Same market, different game.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reproduce This Yourself
&lt;/h2&gt;

&lt;p&gt;The full dataset is available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://www.kaggle.com/datasets/luciferforge/polymarket-historical-prices" rel="noopener noreferrer"&gt;Kaggle&lt;/a&gt;&lt;/strong&gt; — free, includes full SQLite DB with 8.9M data points&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://huggingface.co/datasets/manja316/polymarket-historical-prices" rel="noopener noreferrer"&gt;HuggingFace&lt;/a&gt;&lt;/strong&gt; — free sample&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/manja316/polymarket-historical-data" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/strong&gt; — browse and star&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Load it in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sqlite3&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;

&lt;span class="n"&gt;conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sqlite3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;market_universe.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;prices&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT * FROM prices WHERE outcome = &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Yes&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; price snapshots loaded&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Find crash events
&lt;/span&gt;&lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prev&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;groupby&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;market_id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;price&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;shift&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;change&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;price&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prev&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prev&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;crashes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;change&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;0.15&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;crashes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; crash events found&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the full dataset with orderbook depth (792K snapshots): &lt;strong&gt;&lt;a href="https://manja8.gumroad.com/l/polymarket-data" rel="noopener noreferrer"&gt;Gumroad ($9)&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Data collected March 18 — April 17, 2026 using an automated pipeline. 9,550 markets, 8.1M price snapshots, 15-minute intervals. Source: &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>polymarket</category>
      <category>trading</category>
      <category>data</category>
      <category>python</category>
    </item>
    <item>
      <title>I Collected 8.9 Million Polymarket Price Points — Here's What I Found About How Markets Really Move</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Fri, 17 Apr 2026 18:21:55 +0000</pubDate>
      <link>https://dev.to/manja316/i-collected-89-million-polymarket-price-points-heres-what-i-found-about-how-markets-really-move-2dil</link>
      <guid>https://dev.to/manja316/i-collected-89-million-polymarket-price-points-heres-what-i-found-about-how-markets-really-move-2dil</guid>
      <description>&lt;p&gt;Everyone says prediction markets are efficient. I spent 30 days collecting data to test that claim.&lt;/p&gt;

&lt;p&gt;The result: &lt;strong&gt;8.9 million data points across 9,550 markets&lt;/strong&gt; — and the data tells a story most traders miss completely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;I built an automated collector that snapshots every active Polymarket market every 15 minutes. Not just BTC or the US election — &lt;em&gt;every&lt;/em&gt; market. Politics, sports, crypto, geopolitics, economics, entertainment, weather, science. All of it.&lt;/p&gt;
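
&lt;p&gt;The storage side is deliberately boring: one SQLite table, one row per outcome per poll. A stripped-down sketch (the schema below is an assumption for illustration, not the exact production one):&lt;/p&gt;

```python
import sqlite3

# Sketch of the collector's snapshot store: one row per market outcome
# per 15-minute poll. Schema and names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE prices (market_id TEXT, outcome TEXT, price REAL, ts INTEGER)")

def record_snapshot(rows, ts):
    """rows: list of (market_id, outcome, price) tuples from one poll."""
    conn.executemany("INSERT INTO prices VALUES (?, ?, ?, ?)",
                     [(m, o, p, ts) for m, o, p in rows])

record_snapshot([("m1", "Yes", 0.55), ("m2", "Yes", 0.91)], ts=0)
record_snapshot([("m1", "Yes", 0.54), ("m2", "Yes", 0.92)], ts=900)
print(conn.execute("SELECT COUNT(*) FROM prices").fetchone()[0], "rows")
```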

&lt;p&gt;After 30 days of continuous collection:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Markets tracked&lt;/td&gt;
&lt;td&gt;9,550&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Price snapshots&lt;/td&gt;
&lt;td&gt;8,158,672&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Orderbook depth snapshots&lt;/td&gt;
&lt;td&gt;792,527&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Categories&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total data points&lt;/td&gt;
&lt;td&gt;8,951,199&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Most Polymarket datasets you'll find cover a single market or a single event. This covers the entire platform simultaneously — which lets you see patterns that single-market analysis can't.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding #1: Markets Are Not Efficient After Crashes
&lt;/h2&gt;

&lt;p&gt;Everyone assumes prediction markets instantly price in new information. The data says otherwise.&lt;/p&gt;

&lt;p&gt;I measured what happens after a price drops more than 20% between consecutive snapshots. Here's what the 5,629 crash events show:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time After Crash&lt;/th&gt;
&lt;th&gt;Average Return&lt;/th&gt;
&lt;th&gt;Events Measured&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;+15 min&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+6.6%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;5,629&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;+30 min&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+8.8%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;5,629&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;+45 min&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+10.3%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;5,629&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;+1 hour&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+11.0%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;5,629&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;After a &amp;gt;20% crash, prices bounce back an average of 6.6% &lt;em&gt;within 15 minutes&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This is classic mean reversion — and it's massive. For comparison, the S&amp;amp;P 500's average &lt;em&gt;annual&lt;/em&gt; return is about 10%. These markets deliver that in an hour after a crash.&lt;/p&gt;

&lt;p&gt;The reverse is also true. After a &amp;gt;10% pump:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time After Pump&lt;/th&gt;
&lt;th&gt;Average Return&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;+15 min&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;-2.9%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;+30 min&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;-3.7%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Prices that spike tend to give the gain back. Markets overreact in both directions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding #2: Hold Time Matters More Than Entry Price
&lt;/h2&gt;

&lt;p&gt;I simulated the obvious strategy — buy the crash, sell the recovery — across all 30 days of data. 6,225 trades. Here's how hold time affects the result:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Max Hold&lt;/th&gt;
&lt;th&gt;Trades&lt;/th&gt;
&lt;th&gt;Win Rate&lt;/th&gt;
&lt;th&gt;Total P&amp;amp;L&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2 hours&lt;/td&gt;
&lt;td&gt;10,204&lt;/td&gt;
&lt;td&gt;54%&lt;/td&gt;
&lt;td&gt;$87&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6 hours&lt;/td&gt;
&lt;td&gt;8,324&lt;/td&gt;
&lt;td&gt;64%&lt;/td&gt;
&lt;td&gt;$108&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;12 hours&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;7,295&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;70%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$121&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24 hours&lt;/td&gt;
&lt;td&gt;6,225&lt;/td&gt;
&lt;td&gt;75%&lt;/td&gt;
&lt;td&gt;$135&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;48 hours&lt;/td&gt;
&lt;td&gt;5,352&lt;/td&gt;
&lt;td&gt;81%&lt;/td&gt;
&lt;td&gt;$142&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The sweet spot is &lt;strong&gt;12 hours&lt;/strong&gt;. Going from 12h to 48h only adds $21 to total P&amp;amp;L but locks your capital 4x longer. Most of the money is made in the first few hours.&lt;/p&gt;

&lt;p&gt;This surprised me. I expected entry price to be the key variable. It's not:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Entry Price Range&lt;/th&gt;
&lt;th&gt;Win Rate&lt;/th&gt;
&lt;th&gt;Avg P&amp;amp;L Per Trade&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Under $0.10&lt;/td&gt;
&lt;td&gt;74%&lt;/td&gt;
&lt;td&gt;$0.014&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;$0.10 — $0.30&lt;/td&gt;
&lt;td&gt;74%&lt;/td&gt;
&lt;td&gt;$0.018&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;$0.30 — $0.50&lt;/td&gt;
&lt;td&gt;79%&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.032&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Above $0.50&lt;/td&gt;
&lt;td&gt;86%&lt;/td&gt;
&lt;td&gt;$0.022&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Higher-priced markets actually have &lt;em&gt;better&lt;/em&gt; win rates. The cheap ones look tempting but they include more dust trades that go nowhere.&lt;/p&gt;
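
&lt;p&gt;If you want to reproduce the bucketing, &lt;code&gt;pd.cut&lt;/code&gt; does the work. Toy trades below; the bucket edges match the table:&lt;/p&gt;

```python
import pandas as pd

# Entry-price bucket analysis on toy trades: group P/L by entry-price
# range and report win rate, average PnL, and trade count per bucket.
trades = pd.DataFrame({
    "entry": [0.05, 0.22, 0.40, 0.47, 0.66, 0.08],
    "pnl":   [0.01, 0.02, 0.04, 0.03, 0.02, -0.01],
})
bucket = pd.cut(trades["entry"], bins=[0, 0.10, 0.30, 0.50, 1.0],
                labels=["under $0.10", "$0.10-0.30", "$0.30-0.50", "above $0.50"])
summary = trades.groupby(bucket, observed=True)["pnl"].agg(
    win_rate=lambda s: s.gt(0).mean(), avg_pnl="mean", n="count")
print(summary)
```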

&lt;h2&gt;
  
  
  Finding #3: Category Is Your Edge Selector
&lt;/h2&gt;

&lt;p&gt;Not all Polymarket categories behave the same:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Trades&lt;/th&gt;
&lt;th&gt;Win Rate&lt;/th&gt;
&lt;th&gt;P&amp;amp;L Per Trade&lt;/th&gt;
&lt;th&gt;Verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Crypto&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;646&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;78%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.030&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Best per-trade&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Sports&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1,050&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;79%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.027&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Most consistent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Other&lt;/td&gt;
&lt;td&gt;1,872&lt;/td&gt;
&lt;td&gt;75%&lt;/td&gt;
&lt;td&gt;$0.024&lt;/td&gt;
&lt;td&gt;Most volume&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Politics&lt;/td&gt;
&lt;td&gt;1,362&lt;/td&gt;
&lt;td&gt;76%&lt;/td&gt;
&lt;td&gt;$0.018&lt;/td&gt;
&lt;td&gt;Decent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Geopolitics&lt;/td&gt;
&lt;td&gt;890&lt;/td&gt;
&lt;td&gt;71%&lt;/td&gt;
&lt;td&gt;$0.016&lt;/td&gt;
&lt;td&gt;Below average&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Economics&lt;/td&gt;
&lt;td&gt;101&lt;/td&gt;
&lt;td&gt;69%&lt;/td&gt;
&lt;td&gt;$0.008&lt;/td&gt;
&lt;td&gt;Avoid&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Weather&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;57%&lt;/td&gt;
&lt;td&gt;Negative&lt;/td&gt;
&lt;td&gt;Avoid&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Crypto and sports markets have the strongest mean reversion. Economics and weather markets are traps — they crash and stay crashed.&lt;/p&gt;

&lt;p&gt;Why? Sports and crypto have event-driven resolution (the game happens, the price snaps to the outcome). Economics markets depend on slow-moving indicators — when they crash, it's often because the fundamentals actually changed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding #4: The "Always Bet No" Strategy Is Overhyped
&lt;/h2&gt;

&lt;p&gt;You may have seen the "Nothing Ever Happens" bot that bets NO on everything. The claim: 73% of Polymarket markets resolve NO.&lt;/p&gt;

&lt;p&gt;I checked with 4,763 resolved binary markets from the API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;All markets: 52.3% resolve NO&lt;/strong&gt; (not 73%)&lt;/li&gt;
&lt;li&gt;Non-sports: 57%&lt;/li&gt;
&lt;li&gt;"Will X happen?" framing: 59.3%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The 73% figure comes from a heavily filtered subset. Across all markets, the NO edge is barely there — and at typical NO prices ($0.65-0.85), the math doesn't work.&lt;/p&gt;
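
&lt;p&gt;A back-of-the-envelope check, treating the measured NO rate as your win probability (fees ignored):&lt;/p&gt;

```python
# Expected value of blindly buying NO at price p when markets resolve NO
# at rate r. A NO share costs p and pays $1 if the market resolves NO.
def ev_per_dollar(no_rate: float, no_price: float) -> float:
    return no_rate / no_price - 1.0

# Measured all-market NO rate (52.3%) at a typical $0.75 NO price:
print(f"{ev_per_dollar(0.523, 0.75):+.1%}")
# Even the friendliest subset (59.3%) stays negative:
print(f"{ev_per_dollar(0.593, 0.75):+.1%}")
```

&lt;p&gt;Blanket NO buying only breaks even when the resolution rate matches the price you pay, and across all markets it doesn't come close.&lt;/p&gt;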

&lt;h2&gt;
  
  
  The Dataset Is Free to Explore
&lt;/h2&gt;

&lt;p&gt;I'm releasing the full dataset across multiple platforms:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Free (sample):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.kaggle.com/datasets/luciferforge/polymarket-historical-prices" rel="noopener noreferrer"&gt;Kaggle&lt;/a&gt; — markets.csv + 100K price preview + full SQLite DB&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://huggingface.co/datasets/manja316/polymarket-historical-prices" rel="noopener noreferrer"&gt;HuggingFace&lt;/a&gt; — same files, HF ecosystem integration&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/manja316/polymarket-historical-data" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; — browse the data, star if useful&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Full dataset ($9):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://manja8.gumroad.com/l/polymarket-data" rel="noopener noreferrer"&gt;Gumroad&lt;/a&gt; — 8.9M data points, orderbook depth, SQLite DB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The data updates weekly from an automated pipeline. If you want to reproduce any of these findings, everything is there.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Build Next
&lt;/h2&gt;

&lt;p&gt;If I were starting a Polymarket quant project today, I'd focus on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Real-time crash detection&lt;/strong&gt; — the 6.6% bounce after crashes is the clearest edge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Category rotation&lt;/strong&gt; — crypto and sports, skip economics and weather&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;12-hour max hold&lt;/strong&gt; — the data is unambiguous on this&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-market signals&lt;/strong&gt; — does a crash in one political market predict crashes in related ones?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The prediction market space is where crypto was in 2017 — growing fast, most participants losing money, and the edge goes to people with data infrastructure.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The data was collected from Polymarket's Gamma API and CLOB API using an automated pipeline. All code and methodology are open source. I also maintain &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt;, an index of 2,013 MCP servers with security scores.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Questions or want custom data cuts? &lt;a href="mailto:LuciferForge@proton.me"&gt;LuciferForge@proton.me&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>polymarket</category>
      <category>data</category>
      <category>trading</category>
      <category>python</category>
    </item>
    <item>
      <title>How to Use Context7 MCP Server — Up-to-Date Docs for Every Library in Your AI Editor</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Thu, 16 Apr 2026 16:16:51 +0000</pubDate>
      <link>https://dev.to/manja316/how-to-use-context7-mcp-server-up-to-date-docs-for-every-library-in-your-ai-editor-4ppe</link>
      <guid>https://dev.to/manja316/how-to-use-context7-mcp-server-up-to-date-docs-for-every-library-in-your-ai-editor-4ppe</guid>
      <description>&lt;p&gt;Every AI coding assistant has the same problem: stale training data. You ask Claude to help with Next.js 15 and it gives you Next.js 13 patterns. You ask about the latest Prisma syntax and get deprecated methods.&lt;/p&gt;

&lt;p&gt;Context7 fixes this. It's an MCP server that pulls &lt;strong&gt;live documentation&lt;/strong&gt; for any library, directly into your AI editor's context window. 49.9K stars on GitHub. Here's how to set it up in under 2 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Context7 Actually Does
&lt;/h2&gt;

&lt;p&gt;When you're coding with Claude Code, Cursor, or Claude Desktop and mention a library, Context7 fetches the &lt;strong&gt;current&lt;/strong&gt; docs and code examples for that library. Not cached training data from 2024 — the actual docs from today.&lt;/p&gt;

&lt;p&gt;It works with any library that has documentation indexed: React, Next.js, Prisma, FastAPI, LangChain, Supabase — thousands of libraries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup: Claude Desktop
&lt;/h2&gt;

&lt;p&gt;Add this to your &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@upstash/context7-mcp@latest"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Claude Desktop. Done.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup: Claude Code
&lt;/h2&gt;

&lt;p&gt;One command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add context7 npx &lt;span class="nt"&gt;-y&lt;/span&gt; @upstash/context7-mcp@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup: Cursor
&lt;/h2&gt;

&lt;p&gt;Add to &lt;code&gt;.cursor/mcp.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@upstash/context7-mcp@latest"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup: Goose and Other MCP Editors
&lt;/h2&gt;

&lt;p&gt;Context7 works with any MCP-compatible editor. For Goose, Windsurf, Zed, and others, add the Context7 server using your editor's MCP configuration. The server command is the same: &lt;code&gt;npx -y @upstash/context7-mcp@latest&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Use It
&lt;/h2&gt;

&lt;p&gt;Once installed, just add "use context7" to your prompt when you need current docs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Create a Next.js 15 server component with streaming, use context7 for the latest patterns"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Context7 will pull the current Next.js docs and feed them into the AI's context. No more outdated patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  When It Saves You
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;New library versions&lt;/strong&gt; — Framework just released a major update? Context7 has it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Migration guides&lt;/strong&gt; — Moving from one version to another? Get the actual migration docs, not guessed changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Niche libraries&lt;/strong&gt; — Working with a library the AI wasn't trained on? Context7 can still fetch its docs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Alternatives
&lt;/h2&gt;

&lt;p&gt;There are other documentation MCP servers worth knowing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Grounded Docs&lt;/strong&gt; — open source alternative focused on offline caching&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zed MCP Context7&lt;/strong&gt; — Context7 integration specifically for Zed editor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Browse all documentation MCP servers at &lt;a href="https://protodex.io/category/memory-knowledge.html" rel="noopener noreferrer"&gt;Protodex — Memory/Knowledge category&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt; — 5,618 MCP servers with one-click install for Claude Desktop, Cursor, Goose, and more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>claude</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Claude Code vs Goose vs nanobot vs pi-mono — Which AI Coding Agent Should You Use?</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Thu, 16 Apr 2026 16:16:19 +0000</pubDate>
      <link>https://dev.to/manja316/claude-code-vs-goose-vs-nanobot-vs-pi-mono-which-ai-coding-agent-should-you-use-38bi</link>
      <guid>https://dev.to/manja316/claude-code-vs-goose-vs-nanobot-vs-pi-mono-which-ai-coding-agent-should-you-use-38bi</guid>
      <description>&lt;p&gt;There are now 7+ popular AI coding agents, most of them open source with 30K+ stars. I've used all of them on real projects. Here's when each one makes sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Quick Answer
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;If you want...&lt;/th&gt;
&lt;th&gt;Use this&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Full-featured + MCP ecosystem&lt;/td&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maximum extensibility&lt;/td&gt;
&lt;td&gt;Goose&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Understand the source code&lt;/td&gt;
&lt;td&gt;nanobot&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-provider + self-host&lt;/td&gt;
&lt;td&gt;pi-mono&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Visual flow building&lt;/td&gt;
&lt;td&gt;Flowise&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Browser automation&lt;/td&gt;
&lt;td&gt;browser-use&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Research experiments&lt;/td&gt;
&lt;td&gt;autoresearch&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Claude Code — The Full-Featured Option
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stars:&lt;/strong&gt; N/A (Anthropic product)&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Daily professional use&lt;/p&gt;

&lt;p&gt;A full-featured experience. Hooks system for automation, MCP for connecting to anything (5,618 servers on &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;Protodex&lt;/a&gt;), skills for extending behavior. It's opinionated — Anthropic chose the defaults so you don't have to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The catch:&lt;/strong&gt; Closed source. You're locked into Anthropic's models. Monthly subscription.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; You want a batteries-included agent with deep MCP integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Goose — The Extensible One
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stars:&lt;/strong&gt; 42.3K&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; People who want plugins for everything&lt;/p&gt;

&lt;p&gt;Block's open-source agent. MCP support, built-in browser automation, and an extensible plugin system. The community is active and there are plugins for most workflows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;goose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The catch:&lt;/strong&gt; Heavier than it needs to be. Plugin system adds complexity. Sometimes slow to start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; You need specific integrations and want community-built extensions.&lt;/p&gt;

&lt;h2&gt;
  
  
  nanobot — The Minimalist
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stars:&lt;/strong&gt; 39.7K&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Learning, research, low-resource environments&lt;/p&gt;

&lt;p&gt;99% less code than Claude Code. The entire agent is readable in an afternoon. Starts in under a second. No plugins, no marketplace, no config — just an agent loop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;nanobot-ai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The catch:&lt;/strong&gt; No MCP, no memory, no ecosystem. It does one thing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; You want to understand how AI agents work, or need something tiny and fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  pi-mono — The Toolkit
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stars:&lt;/strong&gt; 36.3K&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Building your own agent products&lt;/p&gt;

&lt;p&gt;Not just an agent — a full toolkit. Unified LLM API across OpenAI/Anthropic/Google, agent runtime with state management, TUI library, web components for chat interfaces, and vLLM deployment tools.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/badlogic/pi-mono &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The catch:&lt;/strong&gt; More of a framework than a ready-to-use tool. You're building WITH it, not using it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; You're building an AI product and need the infrastructure, not a personal assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  browser-use — The Web Agent
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stars:&lt;/strong&gt; 88.1K&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Automating anything in a browser&lt;/p&gt;

&lt;p&gt;Gives your AI real browser access. Navigate, click, fill forms, extract data. Not a coding agent — a web automation agent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;browser-use
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The catch:&lt;/strong&gt; Browser automation is slow and brittle. Sites change, selectors break.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; Your task requires interacting with websites, not codebases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Flowise — The Visual Builder
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stars:&lt;/strong&gt; 52K&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Non-coders who want AI agents&lt;/p&gt;

&lt;p&gt;Drag-and-drop agent builder. Connect LLMs, vector stores, tools, and APIs visually. Export as API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The catch:&lt;/strong&gt; Visual building has limits. Complex logic gets messy in node graphs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; You want to prototype agent flows without writing code.&lt;/p&gt;

&lt;h2&gt;
  
  
  autoresearch — The Scientist
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stars:&lt;/strong&gt; 73.2K&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; ML research automation&lt;/p&gt;

&lt;p&gt;Karpathy's project. AI agents that run ML experiments — modify training code, run experiments, analyze results, iterate. Not a general coding agent — a research automation tool.&lt;/p&gt;
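
&lt;p&gt;The "modify, run, analyze, iterate" cycle can be sketched as a toy loop. This is purely illustrative — it is not autoresearch's actual API, and &lt;code&gt;run_experiment&lt;/code&gt; is a stand-in for a real training run:&lt;/p&gt;

```python
# Toy sketch of an automated experiment cycle (illustrative only,
# not autoresearch's real API): propose a change, run it, keep the best.
def run_experiment(lr):
    # Stand-in for "train a model and return a validation score".
    return -(lr - 0.01) ** 2  # score peaks at lr = 0.01

best_lr, best_score = None, float("-inf")
for lr in [0.1, 0.03, 0.01, 0.003]:   # agent proposes candidates
    score = run_experiment(lr)         # run the experiment
    if score > best_score:             # analyze results, keep the winner
        best_lr, best_score = lr, score

print(best_lr)  # → 0.01
```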

&lt;p&gt;&lt;strong&gt;The catch:&lt;/strong&gt; Requires GPU infrastructure. Designed for ML training loops specifically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; You're doing ML research and want to automate the experiment cycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision Flowchart
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Do you write code daily?
├── Yes → Do you want to tinker with the agent?
│   ├── Yes → nanobot (learn) or pi-mono (build)
│   └── No → Claude Code, Cursor, or Goose (MCP ecosystem)
│
└── No → What do you need?
    ├── Browser automation → browser-use
    ├── Visual workflow → Flowise
    ├── ML research → autoresearch
    └── Custom agent product → pi-mono
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Ecosystem Matters
&lt;/h2&gt;

&lt;p&gt;The agent itself is just the core. What makes it useful is what it connects to. The MCP ecosystem (supported by Claude Code, Cursor, Goose, and others) includes 5,618+ servers for databases, APIs, browsers, memory, security, and more.&lt;/p&gt;

&lt;p&gt;If you're choosing between agents, check what integrations you need and whether the agent supports them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;→ Browse all MCP servers on Protodex&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;Protodex&lt;/a&gt; — 5,618 MCP servers with security scores and one-click install for Claude Desktop, Cursor, Goose, and more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>productivity</category>
      <category>comparison</category>
    </item>
    <item>
      <title>nanobot: The 39K-Star AI Agent That Does What Claude Code Does in 99% Less Code</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Thu, 16 Apr 2026 16:15:33 +0000</pubDate>
      <link>https://dev.to/manja316/nanobot-the-39k-star-ai-agent-that-does-what-claude-code-does-in-99-less-code-2n4</link>
      <guid>https://dev.to/manja316/nanobot-the-39k-star-ai-agent-that-does-what-claude-code-does-in-99-less-code-2n4</guid>
      <description>&lt;p&gt;nanobot just crossed 39K stars on GitHub. It's from HKU's Data Science lab and it positions itself as the "ultra-lightweight personal AI agent." I spent a day with it to figure out if that claim holds up and when you should use it over Claude Code, Goose, or pi-mono.&lt;/p&gt;

&lt;h2&gt;
  
  
  What nanobot Actually Is
&lt;/h2&gt;

&lt;p&gt;It's a coding agent — like Claude Code or Cursor's agent mode — but built with 99% fewer lines of code. The entire agent core fits in a few hundred lines of Python. It connects to any LLM provider, reads your codebase, runs terminal commands, and edits files.&lt;/p&gt;

&lt;p&gt;The philosophy: strip everything except what matters. No plugin system, no marketplace, no GUI. Just an agent that reads, thinks, and acts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install (30 seconds)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Fastest&lt;/span&gt;
uv tool &lt;span class="nb"&gt;install &lt;/span&gt;nanobot-ai

&lt;span class="c"&gt;# Or pip&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;nanobot-ai

&lt;span class="c"&gt;# Or from source (latest)&lt;/span&gt;
git clone https://github.com/HKUDS/nanobot.git
&lt;span class="nb"&gt;cd &lt;/span&gt;nanobot &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nanobot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No config files. No API key setup wizard. It prompts you for your key on first run.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use nanobot vs Full-Featured Agents
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use nanobot when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want something that starts in &amp;lt;1 second (Claude Code takes 3-5s)&lt;/li&gt;
&lt;li&gt;You want to read and modify the agent's source code (it's small enough to understand in an afternoon)&lt;/li&gt;
&lt;li&gt;You're doing research on AI agents and need a clean base to experiment with&lt;/li&gt;
&lt;li&gt;You want a minimal agent on a low-resource machine (CI server, Raspberry Pi, cheap VPS)&lt;/li&gt;
&lt;li&gt;You don't need MCP servers, skills, or hooks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use a full-featured agent (Claude Code, Cursor, Goose) when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need the MCP ecosystem (5,600+ servers on &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;Protodex&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;You need hooks, plugins, or automated workflows&lt;/li&gt;
&lt;li&gt;You want deep model integration with context management&lt;/li&gt;
&lt;li&gt;You're working on large codebases where tooling matters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Goose when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want extensibility with a plugin system&lt;/li&gt;
&lt;li&gt;You need browser automation built in&lt;/li&gt;
&lt;li&gt;You want community extensions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How nanobot Works Under the Hood
&lt;/h2&gt;

&lt;p&gt;The architecture is dead simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User prompt → LLM call → Tool use (file read/write, shell) → LLM response → Loop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No routing, no planners, no multi-agent orchestration. One loop. The LLM decides what to do, does it, reports back. This is why it's fast — there's nothing between you and the model except the tool execution layer.&lt;/p&gt;
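
&lt;p&gt;To make the loop concrete, here's a toy sketch in Python. This is an illustration of the pattern, not nanobot's actual source; &lt;code&gt;call_llm&lt;/code&gt; is a stub standing in for a real provider client:&lt;/p&gt;

```python
# Minimal agent-loop sketch (illustrative, not nanobot's real code).
import subprocess
from pathlib import Path

# The tool layer: everything the model is allowed to do.
TOOLS = {
    "read_file": lambda path: Path(path).read_text(),
    "write_file": lambda path, text: Path(path).write_text(text),
    "shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}

def call_llm(messages):
    # Stub: a real implementation would call an LLM API here.
    # It returns either a tool request or a final answer.
    if len(messages) == 1:
        return {"tool": "shell", "args": {"cmd": "echo hello"}}
    return {"answer": "done: " + messages[-1]["content"].strip()}

def agent_loop(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = call_llm(messages)
        if "answer" in reply:            # model is finished
            return reply["answer"]
        tool = TOOLS[reply["tool"]]      # model requested a tool
        result = tool(**reply["args"])
        messages.append({"role": "tool", "content": result})

print(agent_loop("say hello"))  # → done: hello
```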

&lt;h2&gt;
  
  
  Limitations (Be Honest)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No MCP support&lt;/strong&gt; — can't connect to databases, APIs, or browsers through MCP. If you need that, check &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;Protodex&lt;/a&gt; for servers that work with any MCP-compatible editor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No memory across sessions&lt;/strong&gt; — each run starts fresh. No persistent context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Small community&lt;/strong&gt; — compared to full-featured agents' ecosystems, there are fewer templates, guides, and pre-built workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Research project&lt;/strong&gt; — it's from a university lab. Updates come when researchers have time, not on a product roadmap.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;nanobot is the right tool if you value simplicity and transparency over features. You can read every line of its code in a sitting. That makes it perfect for learning how AI agents work, for building custom agents on top of it, or for environments where you need something tiny and fast.&lt;/p&gt;

&lt;p&gt;If you need the ecosystem, use a full-featured MCP-compatible agent (Claude Code, Cursor, or Goose) with &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;MCP servers&lt;/a&gt;. If you want to understand what's happening under the hood, nanobot is the best teacher.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building with AI agents? Browse 5,618 MCP servers with one-click install for Claude Desktop, Cursor, Goose, and more at &lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;protodex.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Which Database MCP Server Should You Use? I Compared the Top 6</title>
      <dc:creator>manja316</dc:creator>
      <pubDate>Thu, 16 Apr 2026 16:14:51 +0000</pubDate>
      <link>https://dev.to/manja316/which-database-mcp-server-should-you-use-i-compared-the-top-6-517g</link>
      <guid>https://dev.to/manja316/which-database-mcp-server-should-you-use-i-compared-the-top-6-517g</guid>
      <description>&lt;p&gt;Your AI assistant can now query your database directly. No more copy-pasting SQL results into chat. But there are 183 database MCP servers — which one actually works for YOUR stack?&lt;/p&gt;

&lt;p&gt;I tested the top 6 across Postgres, MySQL, SQLite, MongoDB, Redis, and Supabase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Decision Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Your Database&lt;/th&gt;
&lt;th&gt;Use This&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;PostgreSQL&lt;/td&gt;
&lt;td&gt;&lt;code&gt;@modelcontextprotocol/server-postgres&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Official MCP team. Read-only by default. Stable.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MySQL&lt;/td&gt;
&lt;td&gt;&lt;code&gt;@benborla29/mcp-server-mysql&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Best MySQL support with schema introspection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQLite&lt;/td&gt;
&lt;td&gt;&lt;code&gt;@modelcontextprotocol/server-sqlite&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Official. Great for local dev + prototyping&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MongoDB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;mcp-mongo-server&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full CRUD with aggregation pipeline support&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis&lt;/td&gt;
&lt;td&gt;&lt;code&gt;@modelcontextprotocol/server-redis&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Official. Key-value ops + pub/sub&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Supabase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;@supabase/mcp-server&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Supabase-specific: auth, storage, edge functions, not just SQL&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  PostgreSQL — The Safe Default
&lt;/h2&gt;

&lt;p&gt;If you're on Postgres (and most of you are), use the official one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Claude Code&lt;/span&gt;
claude mcp add postgres &lt;span class="nt"&gt;--&lt;/span&gt; npx &lt;span class="nt"&gt;-y&lt;/span&gt; @modelcontextprotocol/server-postgres postgresql://user:pass@localhost/mydb

&lt;span class="c"&gt;# For Cursor, add to .cursor/mcp.json. For other MCP editors, use your editor's config.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It connects read-only by default. Your AI can explore schemas, run SELECT queries, and analyze data without risking writes. Perfect for "what does my user_sessions table look like?" or "find all orders over $100 from last week."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When to pick something else:&lt;/strong&gt; If you need write access or Postgres-specific features (pgvector, PostGIS), look at the specialized servers on &lt;a href="https://protodex.io/category/database.html" rel="noopener noreferrer"&gt;Protodex's database category&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Supabase — More Than a Database
&lt;/h2&gt;

&lt;p&gt;If you're on Supabase, don't use the generic Postgres server. The Supabase MCP server understands your whole stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Claude Code&lt;/span&gt;
claude mcp add supabase &lt;span class="nt"&gt;--&lt;/span&gt; npx &lt;span class="nt"&gt;-y&lt;/span&gt; @supabase/mcp-server

&lt;span class="c"&gt;# Works with any MCP-compatible editor (Cursor, Goose, Windsurf, etc.)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It can manage auth users, storage buckets, edge functions, and RLS policies — not just SQL. "Create a new table with RLS for authenticated users" works as one prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  SQLite — For Prototyping
&lt;/h2&gt;

&lt;p&gt;Building something locally? SQLite MCP is the fastest way to give your AI a database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Claude Code&lt;/span&gt;
claude mcp add sqlite &lt;span class="nt"&gt;--&lt;/span&gt; npx &lt;span class="nt"&gt;-y&lt;/span&gt; @modelcontextprotocol/server-sqlite /path/to/your.db

&lt;span class="c"&gt;# Works with any MCP-compatible editor&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Point it at any &lt;code&gt;.db&lt;/code&gt; file. Instant data exploration. I use this for analyzing datasets — drop a CSV into SQLite, connect MCP, ask questions.&lt;/p&gt;
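
&lt;p&gt;The CSV-to-SQLite step itself is plain stdlib Python. A minimal sketch — the table and column names here are made-up examples:&lt;/p&gt;

```python
# Load a CSV into SQLite so an MCP server (or anything else) can query it.
# Table and column names are hypothetical examples.
import csv
import io
import sqlite3

csv_text = "country,amount\nUS,120\nDE,80\nUS,40\n"  # stand-in for a real CSV file

conn = sqlite3.connect(":memory:")  # use a path like "data.db" for a real file
rows = list(csv.reader(io.StringIO(csv_text)))
header, data = rows[0], rows[1:]

conn.execute(f"CREATE TABLE orders ({', '.join(header)})")
conn.executemany(
    f"INSERT INTO orders VALUES ({', '.join('?' * len(header))})", data
)

# Now the data is queryable — the kind of question you'd ask through MCP:
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE country = 'US'"
).fetchone()[0]
print(total)  # → 160
```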

&lt;h2&gt;
  
  
  MongoDB — When Your Data Isn't Relational
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Claude Code&lt;/span&gt;
claude mcp add mongo &lt;span class="nt"&gt;--&lt;/span&gt; npx &lt;span class="nt"&gt;-y&lt;/span&gt; mcp-mongo-server mongodb://localhost:27017/mydb

&lt;span class="c"&gt;# Works with any MCP-compatible editor&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Full aggregation pipeline support. "Show me the average order value grouped by country for the last 30 days" translates directly to a Mongo aggregation.&lt;/p&gt;
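
&lt;p&gt;For reference, that prompt maps to a pipeline shaped roughly like this, sketched in pymongo syntax (the collection and field names are hypothetical):&lt;/p&gt;

```python
# The aggregation pipeline the natural-language prompt maps to.
# Collection and field names are hypothetical examples.
from datetime import datetime, timedelta, timezone

since = datetime.now(timezone.utc) - timedelta(days=30)

pipeline = [
    {"$match": {"created_at": {"$gte": since}}},  # last 30 days
    {"$group": {
        "_id": "$country",                        # group by country
        "avg_order_value": {"$avg": "$amount"},   # average order value
    }},
    {"$sort": {"avg_order_value": -1}},           # highest first
]

# With pymongo you would run: db.orders.aggregate(pipeline)
print([key for stage in pipeline for key in stage])  # → ['$match', '$group', '$sort']
```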

&lt;h2&gt;
  
  
  Security Warning
&lt;/h2&gt;

&lt;p&gt;Every database MCP server has your connection string. Before installing any:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Use read-only credentials&lt;/strong&gt; when possible&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Never use production credentials&lt;/strong&gt; in your MCP config (config files are typically plain text)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check the security score&lt;/strong&gt; on &lt;a href="https://protodex.io/category/database.html" rel="noopener noreferrer"&gt;Protodex&lt;/a&gt; before installing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prefer official servers&lt;/strong&gt; (marked ✓ Secure on Protodex) over community ones&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  All 183 Database MCP Servers
&lt;/h2&gt;

&lt;p&gt;These 6 cover most use cases, but there are specialized servers for: TimescaleDB, ClickHouse, DuckDB, Neo4j, Elasticsearch, Pinecone, Qdrant, Weaviate, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://protodex.io/category/database.html" rel="noopener noreferrer"&gt;→ Browse all database MCP servers on Protodex&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://protodex.io" rel="noopener noreferrer"&gt;Protodex&lt;/a&gt; — 5,618 MCP servers with security scores and one-click install for Claude Desktop, Cursor, Goose, and more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>database</category>
      <category>postgres</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
