Your AI coding agent just spent 3 hours building a DNS propagation checker. You were impressed. The code was clean, tests passed, CLI looked great. Then you searched GitHub: 47 repos doing exactly the same thing. One of them has 2,000+ stars and a published npm package.
The agent never checked. You never asked it to. Nobody does.
This is the most common failure mode of AI-assisted development. Not bad code. Not wrong architecture. Just building something that already exists, because the agent was never told to look first.
## The blind spot
Claude Code, Cursor, Windsurf, GitHub Copilot -- they are all excellent at writing code. Give them a spec and they will produce working software. But they have zero awareness of what already exists in the ecosystem.
They don't search GitHub before scaffolding a new project. They don't check if there's already a popular npm package for what you described. They don't scan Hacker News to see if someone shipped the same idea last week.
The result: you invest hours (or days) into something that already has mature alternatives. Or you ship a clone without knowing, then find out when someone drops a link in your comments section.
## One tool, five sources, real data
idea-reality-mcp is an MCP server that searches five real-time sources -- GitHub, Hacker News, npm, PyPI, and Product Hunt -- and returns a `reality_signal` score from 0 to 100.
It uses live API data, not LLM opinions. Every number in the result traces back to an actual search query you can verify. The scoring formula is documented and intentionally simple: no ML, no black box.
Add one line to your agent's instructions file and it checks automatically before writing any new code. Here is what that looks like in practice.
## What the results look like

### Example 1: "CLI tool for DNS propagation monitoring"

```
reality_signal: 75
top_competitor: dns-propagation-checker (1,847 stars)
github_repos: 47
npm_packages: 12
hn_mentions: 23
verdict: High competition. Multiple mature tools exist.
```
Your agent sees signal 75 and warns you: "There are 47 existing repos in this space, including one with nearly 2K stars. Want me to find a differentiation angle instead?"
### Example 2: "MCP server for pre-build idea validation"

```
reality_signal: 15
top_competitor: none with significant traction
github_repos: 3
npm_packages: 0
hn_mentions: 2
verdict: Low competition. Greenfield opportunity.
```
Signal 15. The agent proceeds with confidence. The space is open.
### Example 3: "React component library for dashboards"

```
reality_signal: 85
top_competitor: tremor (4,200 stars)
github_repos: 312
npm_packages: 89
hn_mentions: 67
verdict: Very crowded. Strong incumbents with active development.
```
Signal 85. The agent stops and says: "This space has 312 repos and multiple well-funded projects. I'd recommend either targeting a specific niche or contributing to an existing library."
## Setup: one line per platform

The MCP tool description already tells your agent when and how to call `idea_check`. You just need a one-line hint in your agent's instruction file:

```
Before starting any new project, run idea_check first.
```
Drop this into the right file for your platform:
| Platform | File |
|---|---|
| Claude Code | CLAUDE.md |
| Cursor | .cursorrules |
| Windsurf | .windsurfrules |
| GitHub Copilot | .github/copilot-instructions.md |
That is it. The tool handles scoring thresholds, competitor analysis, and pivot suggestions on its own. You do not need to spell out the logic in your instruction file -- that is the MCP server's job.
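For Claude Code, for example, the instruction file can be that minimal. Here is an illustrative fragment (your real CLAUDE.md will usually carry other project notes alongside this line):

```markdown
# CLAUDE.md

Before starting any new project, run idea_check first.
```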
## How it works under the hood
The tool connects via MCP (Model Context Protocol), so any MCP-compatible agent can call it natively. When triggered:
- Your idea text goes through a 3-stage keyword extraction pipeline (90+ intent anchors, 80+ synonym expansions).
- Five sources are queried in parallel using async HTTP.
- Results are scored with a weighted formula: GitHub repo count, star concentration, npm/PyPI package density, HN discussion volume, and Product Hunt presence.
- The agent receives a structured response with the signal, evidence list, top competitors, and pivot suggestions.
Total latency: roughly 3 seconds for a deep scan across all five sources.
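The scoring step is the part worth understanding. The actual weights are documented in the repo; as a rough illustration of how five raw counts can collapse into one 0-100 number, here is a hypothetical sketch. The weights, saturation scales, and function names below are my assumptions, not the real implementation:

```python
import math

# Hypothetical sketch of a weighted reality_signal score (0-100).
# Weights and scales are illustrative assumptions, not the documented formula.

def reality_signal(github_repos, top_stars, packages, hn_mentions, ph_products=0):
    """Combine raw source counts into a single 0-100 competition score."""

    def saturate(count, scale):
        # Log curve: the first few hits matter most, later ones add less.
        # Caps at 1.0 once the count passes the chosen scale.
        return min(1.0, math.log1p(count) / math.log1p(scale))

    score = (
        35 * saturate(github_repos, 100)   # how many repos exist
        + 25 * saturate(top_stars, 5000)   # star concentration of the leader
        + 20 * saturate(packages, 50)      # npm/PyPI package density
        + 15 * saturate(hn_mentions, 50)   # HN discussion volume
        + 5 * saturate(ph_products, 10)    # Product Hunt presence
    )
    return round(score)
```

A log curve like this makes the first few competitors move the score far more than the hundredth, which matches the intent of the examples above: a handful of repos reads as greenfield, while dozens of repos plus a 2K-star leader reads as crowded.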
## Install

```shell
# pip
pip install idea-reality-mcp

# uv (recommended)
uvx idea-reality-mcp
```
No API key required. No account. No data storage. Works entirely through live, public API queries.
Set GITHUB_TOKEN for higher rate limits (optional). Set PRODUCTHUNT_TOKEN to include Product Hunt data (optional).
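Before the instruction-file hint can do anything, the server has to be registered with your MCP client. For clients that read a JSON config (Claude Desktop-style), the entry would look roughly like this -- the server name `idea-reality` is my placeholder, and the env block is optional; check the repo's README for the exact invocation:

```json
{
  "mcpServers": {
    "idea-reality": {
      "command": "uvx",
      "args": ["idea-reality-mcp"],
      "env": {
        "GITHUB_TOKEN": "<optional, for higher rate limits>"
      }
    }
  }
}
```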
## Try it now

- GitHub: mnemox-ai/idea-reality-mcp
- Web demo: mnemox.ai/check -- test any idea without installing anything
- Agent instruction templates: examples/agent-instructions.md
- MCP Registry: io.github.mnemox-ai/idea-reality-mcp
Your agent does not need to guess. Make it search.
Built by Sean at Mnemox. 148 tests passing. MIT licensed.