Your customers are telling you what's wrong with your product. The problem is they're telling you in 14 different places — GitHub Issues, Hacker News threads, App Store reviews — and nobody on your 5-person team has time to read 300 feedback items, let alone synthesize them into priorities.
You could hire a PM to triage feedback. You could pay $20-80/seat/month for Productboard. Or you could point an MCP tool at your feedback sources and get ranked pain clusters in seconds.
feedback-synthesis-mcp is an MCP server that collects customer feedback from multiple sources, runs a 3-stage LLM pipeline, and returns ranked pain clusters with evidence links and suggested actions.
## What It Does
Four tools, one server:
- synthesize_feedback — the full pipeline. Collects from multiple sources, extracts themes, clusters them, and ranks by severity × frequency. Returns pain clusters with evidence URLs.
- get_pain_points — quick single-source extraction. Faster and cheaper when you just need the top 5 issues from one repo.
- search_feedback — search across cached feedback items. Drill into a specific topic after synthesis.
- get_sentiment_trends — time-series sentiment analysis. Track how releases affect user perception.
## The Pipeline

When you call synthesize_feedback, here's what happens server-side:

```text
Feedback items (up to 200 per source)
  → Stage 1: Batch extraction (Haiku) — themes, sentiment, severity
  → Stage 2: Theme clustering (Haiku) — deduplicate, group similar
  → Stage 3: Pain synthesis (Sonnet) — rank, describe, link evidence
  → Ranked pain clusters with action items
```
Three sources in v1: GitHub Issues (REST API), Hacker News (Firebase API), and App Store reviews (RSS). Each collector handles auth, rate limiting, and data normalization independently.
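The "severity × frequency" ranking can be sketched in a few lines. The weights and field names below are illustrative assumptions, not the server's published implementation:

```python
# Hypothetical severity weights; the real server-side values are not published.
SEVERITY_WEIGHT = {"low": 0.25, "medium": 0.5, "high": 0.75, "critical": 1.0}

def rank_clusters(clusters, max_frequency):
    """Score each cluster by severity weight times normalized frequency."""
    for c in clusters:
        c["impact_score"] = SEVERITY_WEIGHT[c["severity"]] * (c["frequency"] / max_frequency)
    # Highest impact first; rank is 1-based
    ranked = sorted(clusters, key=lambda c: c["impact_score"], reverse=True)
    for i, c in enumerate(ranked, start=1):
        c["rank"] = i
    return ranked

clusters = [
    {"title": "Safari auth breaks", "severity": "critical", "frequency": 23},
    {"title": "Rate limits too strict", "severity": "high", "frequency": 12},
]
ranked = rank_clusters(clusters, max_frequency=23)
print(ranked[0]["rank"], ranked[0]["title"])  # 1 Safari auth breaks
```

The point of normalizing frequency is that a severity-critical issue mentioned a handful of times can still outrank a mild annoyance mentioned everywhere.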
## Example: Synthesizing GitHub + HN Feedback

```python
from feedback_synthesis_mcp import FeedbackSynthesisClient

client = FeedbackSynthesisClient(
    private_key="0xYOUR_WALLET_KEY",
    network="base-mainnet"
)

result = client.synthesize_feedback(
    sources=[
        {"type": "github_issues", "target": "owner/repo", "labels": ["bug"]},
        {"type": "hackernews", "target": "Show HN: MyProduct"}
    ],
    max_items_per_source=200,
    since="2026-01-01T00:00:00Z"
)

for cluster in result["pain_clusters"]:
    print(f"#{cluster['rank']} [{cluster['severity']}] {cluster['title']}")
    print(f"   Frequency: {cluster['frequency']} mentions")
    print(f"   Impact: {cluster['impact_score']:.2f}")
    for action in cluster["suggested_actions"]:
        print(f"   → {action}")
```
Sample output:

```text
#1 [critical] Authentication flow breaks on mobile Safari
   Frequency: 23 mentions
   Impact: 0.92
   → Fix Safari WebAuthn polyfill (see issue #142)
   → Add fallback auth flow for mobile browsers

#2 [high] Rate limiting too aggressive for batch operations
   Frequency: 12 mentions
   Impact: 0.78
   → Increase default rate limit for authenticated users
   → Add batch endpoint for bulk operations
```
Each pain cluster includes direct URLs to the original feedback items — you can click through to the GitHub issue or HN comment that surfaced the problem.
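For example, drilling into a cluster's evidence might look like this. The `evidence` field name and its shape are assumptions based on the description above, not documented API, and the URLs are placeholders:

```python
# Hypothetical response shape: each cluster carries a list of evidence
# items with a source type and a direct URL back to the original feedback.
top = {
    "title": "Authentication flow breaks on mobile Safari",
    "evidence": [
        {"source": "github_issues", "url": "https://github.com/owner/repo/issues/142"},
        {"source": "hackernews", "url": "https://news.ycombinator.com/item?id=12345678"},
    ],
}

for item in top["evidence"]:
    print(f"[{item['source']}] {item['url']}")
```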
## Quick Pain Points (Cheaper, Faster)

Don't need the full pipeline? get_pain_points runs a single LLM pass on one source:

```python
points = client.get_pain_points(
    source={"type": "github_issues", "target": "owner/repo"},
    max_items=100,
    top_n=5
)

for p in points["pain_points"]:
    print(f"{p['title']} — {p['frequency']} mentions")
```
It costs $0.02 per call instead of $0.05 for the full synthesis.
## Sentiment Over Time

Track how sentiment shifts across releases:

```python
trends = client.get_sentiment_trends(
    sources=[
        {"type": "appstore", "target": "com.example.myapp"},
        {"type": "github_issues", "target": "owner/repo"}
    ],
    since="2025-10-01T00:00:00Z",
    granularity="weekly"
)

for week in trends["notable_shifts"]:
    print(f"{week['week']}: {week['direction']} shift ({week['magnitude']:+.2f})")
    print(f"   Likely cause: {week['likely_cause']}")
```
## Using with Claude Code

Add the server to your MCP config:

```json
{
  "mcpServers": {
    "feedback": {
      "command": "uvx",
      "args": ["feedback-synthesis-mcp"],
      "env": {
        "FEEDBACK_WALLET_PRIVATE_KEY": "0xYOUR_KEY",
        "FEEDBACK_NETWORK": "base-mainnet"
      }
    }
  }
}
```
Now you can ask Claude directly:
"What are the top pain points in our GitHub issues from the last quarter?"
Claude calls get_pain_points, gets structured results, and summarizes the findings with links to the relevant issues.
## Using via Streamable HTTP (Direct MCP)

Connect directly to the hosted MCP endpoint without installing anything:

```text
https://feedback-synthesis-mcp-production.up.railway.app/mcp
```

The endpoint supports the MCP Streamable HTTP transport. Discovery is free — you only pay when calling tools.
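If you want to poke at the endpoint by hand: MCP's Streamable HTTP transport speaks JSON-RPC 2.0 over POST, so discovery is a standard `tools/list` request body like the one below (free, since it's discovery rather than a tool call):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```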
## Payment
Same model as findata-mcp: x402 micropayments on Base mainnet, paid automatically by the client.
| Tool | Price |
|---|---|
| synthesize_feedback | $0.05 |
| get_sentiment_trends | $0.03 |
| get_pain_points | $0.02 |
| search_feedback | $0.01 |
Invalid requests are rejected before payment. No API keys, no subscriptions.
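In broad strokes, x402 is a retry protocol: the first request comes back HTTP 402 with the quoted price, the client signs a USDC payment and retries with it attached in an `X-PAYMENT` header. A minimal sketch of the client-side decision, with illustrative names (the real signing lives inside the x402 client library):

```python
MAX_PRICE_USD = 0.10  # client-side spending cap (illustrative)

def handle_response(status, price_usd, sign_payment):
    """Decide how to react to a tool-call response in an x402-style flow."""
    if status != 402:
        return None  # no payment required; response stands as-is
    if price_usd > MAX_PRICE_USD:
        raise ValueError(f"price ${price_usd} exceeds cap ${MAX_PRICE_USD}")
    # Sign a payment for the quoted price and retry with it attached
    return {"X-PAYMENT": sign_payment(price_usd)}

headers = handle_response(402, 0.05, sign_payment=lambda p: f"signed:{p}")
print(headers)  # {'X-PAYMENT': 'signed:0.05'}
```

A spending cap like this is why automatic payment is tolerable: the client never signs more than you've budgeted per call.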
## Architecture
Same proven split as findata-mcp:
- Thin client (public, PyPI): MCP tool definitions + x402 payment signing. No business logic.
- Backend server (private, Railway): Collectors, 3-stage LLM pipeline, payment verification, caching.
The valuable part — the multi-source collection, theme extraction, and pain synthesis — stays server-side. The client is a typed proxy.
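The thin-client pattern is simple enough to sketch: every tool method just forwards its arguments to a transport that handles payment and returns structured JSON. Names below are illustrative, not the published client internals:

```python
class ThinClient:
    """Typed proxy: no business logic, just argument forwarding."""

    def __init__(self, transport):
        self._transport = transport  # handles x402 payment + HTTP

    def get_pain_points(self, source, max_items=100, top_n=5):
        # All real work (collection, LLM pipeline) happens server-side
        return self._transport.call("get_pain_points", {
            "source": source, "max_items": max_items, "top_n": top_n,
        })

class FakeTransport:
    """Stand-in transport for demonstration; echoes the call."""
    def call(self, tool, args):
        return {"tool": tool, "args": args}

client = ThinClient(FakeTransport())
resp = client.get_pain_points({"type": "github_issues", "target": "owner/repo"})
print(resp["tool"])  # get_pain_points
```

Because the proxy has no logic of its own, publishing it on PyPI reveals nothing about the pipeline, and the backend can evolve without client releases.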
## Who This Is For
Developers and technical product leads at small B2B SaaS companies (2-30 people) who:
- Use Claude Code, Cursor, or AI coding tools with MCP
- Need customer intelligence but can't justify Productboard or Enterpret pricing
- Want programmatic access to synthesized feedback, not another dashboard
- Want their AI agents to have context about what customers are saying
## Getting Started

```shell
pip install feedback-synthesis-mcp
```
You need USDC on Base mainnet. Four tools, three data sources, structured pain clusters.
- PyPI: feedback-synthesis-mcp
- GitHub: sapph1re/feedback-synthesis-mcp
- Glama: glama.ai/mcp/servers/sapph1re/feedback-synthesis-mcp
- Live endpoint: feedback-synthesis-mcp-production.up.railway.app
Stop reading feedback manually. Let the pipeline read 300 items and tell you the five things that actually matter.