I run a Polymarket crash-recovery bot (308 closed trades, 80.2% WR, all data public). Polymarket migrated from V1 to V2 contracts today. While migrating my own bot, I extracted six tools that solve problems any prediction-market or trading-bot operator runs into. All MIT-licensed, all on PyPI / GitHub / HuggingFace, all tested, all verified as shipped.
If you're running a Polymarket bot, you probably need 2-3 of these. If you're researching prediction markets, you definitely need the dataset. If you use Claude / Cursor / Cline for analysis, the MCP servers will make your life easier.
Here's the entire stack in one place:
| Project | What it does | Install / Use |
|---|---|---|
| polymarket-mcp-pro | Polymarket data as MCP tools for Claude / Cursor / Cline | pip install polymarket-mcp-pro |
| polymarket-v2-migration | Cookbook + 12 errors from today's V1→V2 cutover | git clone … |
| pnl-truthteller | Audit your bot's actual on-chain P&L vs DB-recorded | pip install pnl-truthteller |
| cross-signal-data | 308-trade labeled crash-recovery dataset + baseline notebook | pip install cross-signal-data |
| quant-rollout | Staged-deployment toolkit (gates, kill switch, veto window) | pip install quant-rollout |
| sigil-ta | MCP-native TA runtime with the unique Polymarket Sentiment Divergence | pip install sigil-ta |
Total tests across all 6: 95, all passing. Every claim in this post is verifiable in 30 seconds via pip install plus a one-liner.
## 1. polymarket-mcp-pro — Polymarket data as Claude/Cursor tools

```shell
pip install polymarket-mcp-pro
```
Add to claude_desktop_config.json:
```json
{
  "mcpServers": {
    "polymarket": {
      "command": "uvx",
      "args": ["--from", "polymarket-mcp-pro", "polymarket-mcp"]
    }
  }
}
```
Then ask Claude:
- "What are the highest-volume Polymarket markets right now?"
- "Which markets crashed >20% in the last 24 hours?"
- "Show me the full order book for [token]."
Seven tools (list_markets, get_market, get_prices, get_crashes, get_categories, get_orderbook, get_stats). Backed by api.protodex.io which indexes 9,500+ markets with a price snapshot every 15 minutes.
This is how I do all my Polymarket research now. The agent calls the tools, I look at the result, ask follow-ups. No manual curl | jq chains.
Repo: github.com/LuciferForge/polymarket-mcp
## 2. polymarket-v2-migration — the cookbook for today's cutover
If you have a bot still 503-ing after today's cutover:
- The fix is two import rewrites and one kwarg rename. The cookbook has the diff.
- The wallet migration HAS to be done in the Polymarket UI with a ≥5-share trade. SDK methods do NOT trigger it. I tested every alternative.
- The "1 hour cutover" announcement was wrong; the actual cutover was roughly 6 hours of mixed states.
The cookbook is documentation-only — examples, timeline, allowance details, 12 specific errors with fixes, and a smoke test that verifies whether your wallet is migrated and your SDK is V2-ready.
Repo: github.com/LuciferForge/polymarket-v2-migration
## 3. pnl-truthteller — find your bot's hidden slippage cost

```shell
pip install pnl-truthteller
pnl-truthteller --wallet 0xYourProxyAddress
```
Read-only. Wallet address only. No private key, no API key.
The story: my bot's DB said +$33 lifetime profit. The chain said -$89. Difference: -$122 of hidden slippage cost across 308 trades that the bot literally couldn't see because it records P&L when orders are placed, not when they fill.
The tool reconciles each closed trade against on-chain fills (deduplicated by orderID — critical, because sweep retries log the same fill multiple times in your local DB). Outputs a Markdown report with by-exit-reason breakdown, worst-10 trades, and dust shares stranded on-chain.
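The orderID dedup is the key mechanic. A minimal sketch of the idea (field names like `orderID` and `pnl` are illustrative here, not pnl-truthteller's actual schema):

```python
def dedupe_fills(fills):
    """Collapse duplicate fill rows by orderID.

    Sweep retries can log the same on-chain fill multiple times in a
    local DB; the first occurrence of each orderID wins.
    """
    seen = set()
    unique = []
    for fill in fills:
        if fill["orderID"] in seen:
            continue  # a retry re-logged the same fill
        seen.add(fill["orderID"])
        unique.append(fill)
    return unique

fills = [
    {"orderID": "a1", "pnl": 1.50},
    {"orderID": "a1", "pnl": 1.50},  # duplicate from a sweep retry
    {"orderID": "b2", "pnl": -0.75},
]
clean = dedupe_fills(fills)
print(len(clean), sum(f["pnl"] for f in clean))  # 2 0.75
```

Without the dedup, the duplicate row would double-count the $1.50 fill and inflate reported P&L by that amount.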
Most Polymarket bot operators have never done this audit. If you're one of them, do it once. Worst case you confirm you're profitable. Best case you find the same gap I did.
Repo: github.com/LuciferForge/pnl-truthteller
## 4. cross-signal-data — the 308-trade labeled dataset

```python
from cross_signal_data import load

df = load()  # pandas DataFrame, 308 rows, 19 cols
print(df["is_profitable"].mean())  # 0.802
```
This is the actual labeled outcomes of every closed trade from my bot — entry features, exit features, timestamps, P&L, exit reason. 19 columns, 308 rows.
Trained a logistic regression and a random forest on it: 79.9% CV accuracy from 7 simple features. Translation: the trigger filter is doing 100% of the work. If you can beat 80% with feature engineering, you've found something the bot doesn't know.
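If you want to reproduce that kind of baseline, the shape of it looks like this (a sketch using synthetic stand-in features, since the real column names aren't listed here; requires numpy and scikit-learn):

```python
# 5-fold CV logistic regression, analogous to the ~80%-accuracy baseline
# described above. X here is synthetic, NOT the real dataset schema.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(308, 7))  # 7 features, 308 trades
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=308)) > 0

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"CV accuracy: {scores.mean():.3f}")
```

Swap the synthetic `X`/`y` for `load()` output and the dataset's real feature columns to run the actual experiment.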
License: MIT for both code and data. Mirrored on HuggingFace at huggingface.co/datasets/LuciferForge/cross-signal-data.
Repo: github.com/LuciferForge/cross-signal-data
## 5. quant-rollout — staged deployment for trading bots

```shell
pip install quant-rollout
```
You changed a bot parameter. Did it actually help, or are you about to lose money?
quant-rollout adds canary → 10% → 50% → 100% rollouts to any bot. Per-stage gates (n trades + win rate + EV/$). Kill switch on losing streaks. Veto window for human override. Persistent state across restarts. Pure stdlib. Zero deps.
Pure decision logic — the library returns RolloutDecision objects, your code applies the config swap. This makes it trivially testable (no Telegram, no real bot, no actual config files in the test path).
26 tests including end-to-end state machine simulation walking through every transition (NOOP → VETO_OPEN → VETO_EXPIRED → KILL_TRIPPED → recovery).
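The pure-decision-logic shape is easy to picture. Here's a sketch of a per-stage gate plus kill switch — the `RolloutDecision` name follows the post, but the fields, thresholds, and function signature are illustrative, not quant-rollout's actual API:

```python
from dataclasses import dataclass

STAGES = ["canary", "10%", "50%", "100%"]

@dataclass
class RolloutDecision:
    action: str  # "hold", "advance", or "kill"
    stage: str

def evaluate_stage(stage, trades, min_trades=20, min_win_rate=0.75,
                   max_losing_streak=5):
    """Pure gate check; the caller applies any config swap.

    `trades` is a list of booleans (True = winning trade), oldest first.
    """
    # Kill switch: too many consecutive losses at the tail ends the rollout.
    streak = 0
    for won in reversed(trades):
        if won:
            break
        streak += 1
    if streak >= max_losing_streak:
        return RolloutDecision("kill", stage)

    # Gate: need enough samples and a high enough win rate to advance.
    if len(trades) < min_trades or sum(trades) / len(trades) < min_win_rate:
        return RolloutDecision("hold", stage)

    nxt = STAGES[min(STAGES.index(stage) + 1, len(STAGES) - 1)]
    return RolloutDecision("advance", nxt)

print(evaluate_stage("canary", [True] * 18 + [False] * 2))
```

Because nothing here touches Telegram, config files, or a live bot, every transition is testable with plain lists of booleans.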
I extracted this from my own bot's stage-tracker after running 3 successful parameter rollouts in 14 days. Drop-in for any trading bot you have.
Repo: github.com/LuciferForge/quant-rollout
## 6. sigil-ta — MCP-native TA library

```shell
pip install sigil-ta                # core, no deps
pip install "sigil-ta[mcp]"         # add Claude/Cursor tools
pip install "sigil-ta[dashboard]"   # add Streamlit dashboard
```
8 core indicators (SMA, EMA, RSI, MACD, BB, ATR, Supertrend, Stochastic) + 2 composite signals (ReversionScore, MomentumComposite) + the unique Polymarket Sentiment Divergence (PSD).
PSD measures divergence between an asset's price action and the resolved sentiment of a related Polymarket prediction market. Pure-TA libraries (ta-lib, pandas-ta, LuxAlgo) cannot compute this because they have no prediction-market data. Sigil includes the Polymarket fetcher so the integration is built in.
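To make the idea concrete, here's an illustrative divergence score comparing an asset's percent move against the change in a related market's implied probability. This is a guess at the general shape of such a signal, not sigil-ta's actual PSD formula:

```python
def divergence(prices, implied_probs):
    """Positive when price action and prediction-market sentiment
    move in opposite directions; negative when they agree."""
    assert len(prices) == len(implied_probs) >= 2
    price_move = (prices[-1] - prices[0]) / prices[0]      # fractional move
    sentiment_move = implied_probs[-1] - implied_probs[0]  # probability delta
    return -price_move * sentiment_move

# Asset up 10% while a related "asset rallies" market drops 0.60 -> 0.40:
print(round(divergence([100, 105, 110], [0.60, 0.50, 0.40]), 3))  # 0.02
```

The point stands either way: you can't compute anything like this from OHLCV alone, which is why pure-TA libraries can't offer it.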
14 MCP tools. Pure stdlib indicator core (no numpy required). Backtest harness with realistic fees and no look-ahead. 47 tests.
Repo: github.com/LuciferForge/sigil
## Why ship all 6 at once
I had two options:
Option A: Ship one project at a time over six weeks. More posts, more chances at front-pages, more sequential momentum.
Option B: Ship all six on the day Polymarket V2 cutover happens, when there's a forced demand spike.
Option B turned out to be obvious in hindsight. Today, every Polymarket bot operator on the planet is searching for "polymarket v2 migration" or "polymarket bot 503 error" or some variant. They land on my cookbook. From the cookbook, they discover pnl-truthteller (which solves the slippage problem they didn't know they had). From there, the dataset, the rollout toolkit, the TA library.
The whole stack reinforces itself. The bot validates the dataset. The dataset validates the trigger logic. The audit tool validates the bot's claimed numbers. The MCP server lets you query the data with Claude. The TA library gives you the indicator stack to do your own analysis. Everything cross-references.
This is the kind of thing that's only possible because I built it all for myself first. I didn't sit down to "create a portfolio." I had a bot. I needed tools. I built tools. They ended up being independently useful so I'm shipping them.
## How I built six things in one day
I didn't, exactly. I built them over the past couple months for my own bot. Today was just the day I cleaned them up, wrote tests, made them pip-installable, and pushed them.
The pattern that worked:
Start from a real problem. Every one of these tools was built to solve a thing I was hitting in operations. Not "this seems useful," not "this would be a good open-source project." Real-pain-now-fix-it.
Write the test first when possible. All 6 repos have meaningful test coverage (95 tests total across the stack). The tests caught real bugs during cleanup — RSI returning 100 on constant input (should be 50), kill switch evaluating wrong trade slice, etc.
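The RSI bug mentioned above comes from the zero-loss edge case: the naive formula gives 100 / (1 + 0) = 100 on a flat series. A minimal RSI sketch that handles it (simple-average variant, not Wilder smoothing, and not sigil-ta's actual implementation):

```python
def rsi(closes, period=14):
    """Simple-average RSI. Returns 50.0 on constant input (no gains,
    no losses) instead of the naive 100."""
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    gains = [d for d in deltas if d > 0]
    losses = [-d for d in deltas if d < 0]
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_gain == 0 and avg_loss == 0:
        return 50.0  # flat series is neutral, not overbought
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

print(rsi([50.0] * 20))  # 50.0
```

A one-line test on constant input is exactly the kind of check that catches this class of bug before release.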
Verify before claiming. I have a rule with myself: if I say "shipped," it means I've:
- Run the tests
- Built the wheel
- Installed it in a fresh venv
- Imported it
- Run the smoke test from the CLI
- Confirmed the GitHub repo exists with the expected commits

This rule caught a sub-agent's false claim about a missing GitHub repo today. Without it, distribution would have rested on a phantom dependency.
Optimize for the second user, not the first. I'm the first user; I already know how to use these. The README, the CLI, the error messages, the example configs — all of those exist for the second person. If I can't read my own README and figure out how to install the package without context, neither can anyone else.
## The honest accounting
I'm not pretending this is a $1M ARR business. It isn't. It's six tools, a public bot, a free API, and an MCP server index (protodex.io) — all running on roughly $200 of personal capital, all open source, all run by one person.
What I'm betting on: the surface area of the stack is the moat, not any single tool. If you're building on Polymarket, you need data (api.protodex.io), you need the SDK done right (polymarket-v2-migration cookbook), you need the audit (pnl-truthteller), you need the labeled examples (cross-signal-data), you need the rollout discipline (quant-rollout), you need the TA stack (sigil-ta), and you need an MCP server to wire it all into your AI tools (polymarket-mcp-pro).
I shipped all seven things. The market will tell me which are right.
## Resources
- All 6 repos under one org: github.com/LuciferForge
- Bot itself (open source): github.com/LuciferForge/polymarket-crash-bot
- Free Polymarket data API: api.protodex.io
- MCP server index: protodex.io
If you build something on top of any of these, send me the link. If you find a bug, open an issue.
Built in public by LuciferForge, a solo operator running a Polymarket trading bot and the protodex.io infrastructure.