The Trust Problem at the Heart of AI Agent Data
When an AI agent tells you "the best price for a Jetson Nano is $249," how do you evaluate that claim? How does the agent know the data is current, not stale from last week? How does it know the source even covers that product?
Most APIs don't tell you. They return a result, or they return an error. The signal in between — "we have this, but only partially" or "we're still building coverage here" — simply doesn't exist.
I run AgentShare.dev — a price infrastructure API built specifically for AI agents. And I've come to believe that the missing layer isn't better data. It's honest metadata about the data itself.
So I decided to build that — and publish everything, including the numbers that make me uncomfortable.
Our Radical Transparency Promise
Starting today, every AgentShare API response includes trust signals agents can act on:
- `data_status`: `fresh`, `stale`, `pending_crawl`, or `out_of_coverage`
- `data_age_seconds`: exactly how old the price data is
- `trust_{endpoint}_hit_rate`: our real, historical success rate per endpoint
And beyond individual responses — we publish a public, live, unedited data quality endpoint that anyone (human or agent) can query at any time:
```
GET https://agentshare.dev/api/v1/public/data-quality
```
No authentication. No filtered view. What agents see, you see.
Here's Our Real Data — No Filtering
"Our live public data quality endpoint. No filters. No editing. This is what our agents see."
On May 10, 2026, this is what the endpoint returned:
```json
{
  "overall": {
    "signals_total": 9,
    "outcome_ok": 1,
    "hit_rate": 0.1111,
    "coverage_tier": "insufficient_sample"
  },
  "endpoints": [
    {"endpoint": "/api/v1/search", "signals_total": 4, "hit_rate": 0.25},
    {"endpoint": "/api/v1/offers/best", "signals_total": 3, "hit_rate": 0.0},
    {"endpoint": "/api/v1/offers/best-under-budget", "signals_total": 2, "hit_rate": 0.0}
  ],
  "category_breakdown_meta": {"state": "building"}
}
```
"Yes, our current hit rate is only 11% and the sample is insufficient. We're publishing this to hold ourselves accountable."
What this data actually tells you:
- We are in active build phase — only 9 measured signals in 7 days
- Overall hit rate: 11% across all endpoints
- `/api/v1/search` is leading at 25%
- Sample size is too small to make firm claims — and we say that explicitly
`coverage_tier: "insufficient_sample"` isn't a bug. It's the system being honest.
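A sketch of how an agent might act on these fields. The field names come from the payload above; the minimum-sample threshold of 30 signals is my own illustrative choice, not an AgentShare default:

```python
# Illustrative sketch: gating on the data-quality payload shown above.
# Field names match the article's sample; the 30-signal minimum is an
# assumption, not a documented AgentShare threshold.

MIN_SIGNALS = 30  # below this, treat any hit rate as noise

def assess_endpoint(ep: dict) -> str:
    """Classify an endpoint's stats as 'trusted', 'weak', or 'insufficient_sample'."""
    if ep["signals_total"] < MIN_SIGNALS:
        return "insufficient_sample"
    return "trusted" if ep["hit_rate"] >= 0.5 else "weak"

# The per-endpoint numbers from the May 10, 2026 snapshot:
endpoints = [
    {"endpoint": "/api/v1/search", "signals_total": 4, "hit_rate": 0.25},
    {"endpoint": "/api/v1/offers/best", "signals_total": 3, "hit_rate": 0.0},
    {"endpoint": "/api/v1/offers/best-under-budget", "signals_total": 2, "hit_rate": 0.0},
]

verdicts = {ep["endpoint"]: assess_endpoint(ep) for ep in endpoints}
print(verdicts)
# Every endpoint lands in 'insufficient_sample' -- the same verdict coverage_tier gives.
```

The point isn't the specific threshold; it's that an agent can derive a verdict mechanically instead of trusting a raw percentage.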
Why 11% Isn't the Whole Story — But Also Isn't an Excuse
The 11% is real measured traffic. Most of it hit categories we don't yet cover well.
For our focus categories, our coverage configuration estimates a 78% hit rate:
"For our focus categories, our configuration estimates a 78% hit rate. Our next mission is to replace these estimates with real, measured data."
```json
{
  "category": "ai_hardware",
  "hit_rate": 0.78,
  "confidence": "spec_estimate",
  "coverage_spec_quality": "high"
}
```
To be clear about that 78%: it's a spec-based estimate — derived from our crawl configuration and affiliate coverage map for that category, not yet measured from live traffic. Our focus categories include:
- AI hardware (Jetson, Raspberry Pi, Coral, etc.)
- Mini PCs and components
- Robotics and robot power systems
The gap between 78% estimated and 11% measured is exactly why we're publishing this. We need real agent traffic against our focus categories to validate — or disprove — that estimate. Publishing this is how we hold ourselves accountable to close it.
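For a rough sense of how much traffic it takes to validate that 78% figure, here's a back-of-envelope sample-size estimate using the standard normal approximation. This is my own arithmetic, not AgentShare's methodology; the margin and confidence level are assumptions:

```python
import math

# Back-of-envelope: how many measured signals until a hit rate is meaningful?
# Normal-approximation sample size: n = z^2 * p * (1 - p) / margin^2.
# p = 0.78 is the spec estimate from the article; z and margin are assumptions.

def required_signals(p: float, margin: float = 0.05, z: float = 1.96) -> int:
    """Signals needed to pin down a hit rate near p to +/- margin at ~95% confidence."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

print(required_signals(0.78))  # 264 -- versus the 9 signals measured so far
```

In other words, validating (or disproving) the estimate to within a few points takes a couple hundred real requests, which is exactly why the ask at the end of this post matters.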
How Trust Signals Help Agents Make Better Decisions
Every API response now carries actionable quality metadata:
```json
{
  "data_status": "fresh",
  "data_age_seconds": 127,
  "trust_search_hit_rate": 0.25
}
```
An agent receiving this can make real decisions:
- `data_status: stale` → deprioritize or fetch from an alternate source
- `coverage_tier: insufficient_sample` → flag lower confidence to the user
- `hit_rate` low → trigger a fallback strategy
This isn't about perfect data. It's about giving agents enough signal to reason about imperfect data — which is closer to how good decisions actually get made.
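Those rules can be put together in a small agent-side handler. This is a minimal sketch, assuming the field names from the response above; the thresholds (600-second staleness cutoff, 0.3 hit-rate floor) and the three action labels are illustrative choices, not documented defaults:

```python
# Minimal sketch of agent-side handling for AgentShare trust signals.
# Field names mirror the response shown above; the 600s staleness cutoff
# and 0.3 hit-rate floor are illustrative assumptions.

def decide(meta: dict) -> str:
    """Map trust metadata to one of: 'use', 'flag_low_confidence', 'fallback'."""
    if meta.get("data_status") in ("out_of_coverage", "pending_crawl"):
        return "fallback"                    # source admits it can't answer yet
    if meta.get("data_status") == "stale" or meta.get("data_age_seconds", 0) > 600:
        return "fallback"                    # deprioritize, try an alternate source
    if meta.get("trust_search_hit_rate", 1.0) < 0.3:
        return "flag_low_confidence"         # answer, but surface the low hit rate
    return "use"

print(decide({"data_status": "fresh", "data_age_seconds": 127,
              "trust_search_hit_rate": 0.25}))        # flag_low_confidence
print(decide({"data_status": "stale", "data_age_seconds": 9000}))  # fallback
```

Note that the example response earlier in this post (fresh, 127 seconds old, 25% hit rate) lands on "flag the low confidence to the user" under these rules, not on silent acceptance or outright rejection.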
What We've Built So Far
I'm a solo founder building this from Vietnam. No team, no funding — just a conviction that AI agents need honest data infrastructure.
What's live:
- Price API with live affiliate connections to major marketplaces
- MCP support — connect directly via Claude Desktop, Cursor, or any MCP-compatible agent
- Machine-readable discovery (`/agent.json`, `/.well-known/discovery.json`)
- Curated MCP registry with 37+ verified MCPs
- Public data quality endpoint — live, unfiltered, always on
Try It Right Now — No Sign-Up Needed
The fastest way to see if this is real:
```shell
# See our live hit rate, no API key needed
curl https://agentshare.dev/api/v1/public/data-quality
```
If you want to go further:
```shell
# Search for products
curl "https://agentshare.dev/api/v1/search?q=raspberry%20pi%205"

# Get the best offer
curl "https://agentshare.dev/api/v1/offers/best?q=nvidia%20jetson"
```

Or connect via MCP (Claude Desktop, Cursor, etc.): the MCP endpoint is https://agentshare.dev/mcp
AgentShare is free to start — sign up for an API key at agentshare.dev
The Road Ahead
Phase 1 ✅ Done — data_status, trust_* signals, public data quality endpoint
Phase 2 🔄 In progress — MCP tools with trust signals, category breakdown from real traffic
Phase 3 — Per-category hit rates, historical freshness charts, agent-facing dashboard
Phase 4 — Agent data exchange: where agents can contribute and access data with verified trust scores
None of this gets built without real usage data. That's the honest dependency.
One Ask
If you're building agents that shop, compare prices, or make purchase decisions — run the curl above and tell me what you see.
That's it. One command, no commitment. It either works or it doesn't — and now you have the data to judge.
💬 What trust signals would you want in an API response? Drop it in the comments — every answer directly shapes Phase 2.
Tags: #ai #aiagents #mcp #buildinginpublic