If you've ever tried to compare LLM pricing across vendors you know how painful it is. One charges per token, another per character, another per request. Cached input discounts exist but good luck finding them. Context window pricing is buried. And by the time you've normalized everything into a spreadsheet something changed on a pricing page and your numbers are stale.
This is the problem ATOM was built to solve. It tracks 2,583 SKUs across 47 vendors, normalizes everything to a common unit, and exposes it all through an MCP server your agents can query directly.
Here's how to set it up and what you can actually do with it.
What MCP gives you here
The Model Context Protocol (MCP) lets AI agents connect to external data sources through a standardized interface. Claude, Cursor, Windsurf, and others support it natively.
Instead of pasting a pricing table into your prompt and hoping it's current, you give your agent a live connection to the source. It queries, reasons, and acts on real numbers.
Setting up the ATOM MCP server
ATOM's server is published on npm, Smithery, and the official MCP registry.
Claude Desktop
Add this to your `claude_desktop_config.json` and restart:

```json
{
  "mcpServers": {
    "atom-pricing": {
      "command": "npx",
      "args": ["-y", "atom-mcp-server"]
    }
  }
}
```
Cursor or Windsurf
Add the server endpoint in your MCP settings:
https://atom-mcp-server-production.up.railway.app/mcp
Any other MCP client
The server supports both HTTP (SSE) and stdio transports. Run it locally via `npx atom-mcp-server` or point at the Railway endpoint above.
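If your client doesn't speak MCP natively, the HTTP transport is plain JSON-RPC 2.0 underneath. Here's a minimal sketch of building a `tools/list` request body; the endpoint URL is the one above, but the exact response shape depends on the server, so treat this as an illustration rather than a client implementation:

```python
import json
from typing import Optional

ATOM_MCP_URL = "https://atom-mcp-server-production.up.railway.app/mcp"

def build_rpc_request(method: str, params: Optional[dict] = None, req_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 request body for an MCP HTTP transport."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# Ask the server which tools it exposes. POST this body to ATOM_MCP_URL
# with Content-Type: application/json and parse the JSON-RPC response.
body = build_rpc_request("tools/list")
print(body)
```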
The tools
The free tier includes 4 tools that give you macro market intelligence with no login required. MCP PRO ($49/mo) unlocks the remaining 4, which give you model-level and vendor-level detail.
Free tier
| Tool | What it does |
|---|---|
| `list_vendors` | All 47 tracked vendors with type and region |
| `get_kpis` | 6 live market KPIs updated weekly |
| `get_index_benchmarks` | 14 AIPI price indexes by modality and tier |
| `get_market_stats` | Aggregate supply and cost structure data |
MCP PRO
| Tool | What it does |
|---|---|
| `search_models` | Filter by context size, tool support, modality, price |
| `get_model_detail` | Full spec and pricing for a specific model |
| `compare_prices` | Cross-vendor comparison for a model family |
| `get_vendor_catalog` | Full SKU list for a specific vendor |
What it looks like in practice
Check what the market looks like right now (free)
```
get_kpis
```
This week's numbers:
- Output tokens cost 3.84x more than input tokens on average
- Cached input saves 69.7% vs standard input pricing
- Open source models run 80% cheaper than closed source equivalents
- Only 20.3% of SKUs in the index offer cached pricing at all
- The price gap between small and large models in the same family is 4.8x
These are median figures across all tracked SKUs, recalculated every Monday.
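To see why those ratios matter, here's a back-of-envelope cost model. Only the multipliers (3.84x output, 69.7% cached savings) come from the KPIs above; the $1.00/1M base rate, token counts, and cache-hit rate are invented for illustration:

```python
# Hypothetical base rate; the 3.84x and 69.7% figures are this week's KPIs.
input_price = 1.00                       # $ per 1M input tokens (example value)
output_price = input_price * 3.84        # output costs 3.84x input
cached_price = input_price * (1 - 0.697) # cached input saves 69.7%

# One request: 10K input tokens (80% cache hits) plus 1K output tokens.
cost = (
    2_000 / 1e6 * input_price    # uncached input
    + 8_000 / 1e6 * cached_price # cached input
    + 1_000 / 1e6 * output_price # output
)
print(f"${cost:.6f} per request")
```

Note that output tokens dominate the bill even at a 10:1 input-to-output ratio, which is exactly the kind of structure the KPI surface makes visible.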
Find the cheapest model with 100K+ context and tool calling (PRO)
```
search_models
  context_window_min: 100000
  tool_calling: true
  sort_by: input_price_asc
```
Returns model-level results with normalized per-token pricing across vendors. The spread between cheapest and most expensive for functionally similar models is typically over 30x. That difference compounds fast at any real usage volume.
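With invented numbers, the compounding is easy to see. The absolute prices below are made up; only the roughly 30x spread is the article's claim:

```python
# Invented per-1M-token prices illustrating a ~30x spread between
# functionally similar models (the ratio is the claim, not these prices).
cheap, expensive = 0.10, 3.00    # $ per 1M input tokens
monthly_tokens = 5_000_000_000   # 5B input tokens per month

cheap_bill = monthly_tokens / 1e6 * cheap
expensive_bill = monthly_tokens / 1e6 * expensive
print(f"${cheap_bill:,.0f} vs ${expensive_bill:,.0f} per month")
```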
Compare vendors for a specific model family (PRO)
```
compare_prices
  model_family: "Llama 3.3 70B"
```
Returns every vendor offering that model with normalized pricing so you can make a direct comparison without doing any unit conversion yourself.
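The unit conversion that normalization saves you looks roughly like this. The 4-characters-per-token ratio and the sample price are assumptions for illustration, not ATOM's actual constants:

```python
CHARS_PER_TOKEN = 4.0  # rough English-text average; an assumption, not ATOM's figure

def per_char_to_per_mtok(price_per_1k_chars: float) -> float:
    """Convert a per-1K-character price to a per-1M-token price."""
    tokens_per_1k_chars = 1_000 / CHARS_PER_TOKEN
    return price_per_1k_chars / tokens_per_1k_chars * 1_000_000

# A vendor charging $0.0005 per 1K characters, expressed per 1M tokens:
print(per_char_to_per_mtok(0.0005))  # approximately $2.00 per 1M tokens
```

Doing this by hand across per-token, per-character, and per-request pricing schemes is exactly the spreadsheet work the server replaces.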
Why this is useful for agent architecture
If you're building anything that makes a lot of LLM calls, model routing based on cost and capability is a real decision you're making, consciously or not. The cheapest model that can handle a task should handle it.
With ATOM connected your agent can check current prices before picking a model, catch when a vendor changes pricing, estimate the cost of a planned workload before running it, and compare vendors for a specific capability requirement. That reasoning used to mean a spreadsheet someone had to maintain. Now it's a tool call.
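A minimal cost-based router over that data might look like this. The model records and capability flags are hypothetical, standing in for `search_models` output:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    input_price: float   # $ per 1M input tokens
    context_window: int  # tokens
    tool_calling: bool

def route(models: list[Model], min_context: int, needs_tools: bool) -> Model:
    """Pick the cheapest model that satisfies the capability requirements."""
    eligible = [
        m for m in models
        if m.context_window >= min_context and (m.tool_calling or not needs_tools)
    ]
    if not eligible:
        raise ValueError("no model meets the requirements")
    return min(eligible, key=lambda m: m.input_price)

# Hypothetical catalog entries, as a search tool might return them:
catalog = [
    Model("small-cheap", 0.10, 32_000, True),
    Model("big-context", 0.60, 200_000, True),
    Model("mid-no-tools", 0.25, 128_000, False),
]
print(route(catalog, min_context=100_000, needs_tools=True).name)  # -> big-context
```

The interesting part isn't the ten lines of filtering; it's that `input_price` can be a live number instead of a constant someone hard-coded six months ago.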
A note on the data
ATOM uses a chained matched-model methodology, the same logic you'd apply to a commodity price index. Every SKU is normalized to a common unit, timestamped, and verified. The point of the methodology is to eliminate composition bias so week-over-week comparisons are actually meaningful and not just reflecting which vendors got added or dropped.
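The chained matched-model logic can be sketched in a few lines: each week-over-week ratio is computed only on SKUs priced in both weeks, then the ratios are chained, so a vendor joining or leaving the sample never moves the index by itself. This is a simplified geometric-mean version under those assumptions, not ATOM's exact formula:

```python
from math import prod

def chained_index(weeks: list[dict[str, float]], base: float = 100.0) -> list[float]:
    """Chain week-over-week price ratios computed on matched SKUs only."""
    index = [base]
    for prev, curr in zip(weeks, weeks[1:]):
        matched = prev.keys() & curr.keys()       # SKUs priced in both weeks
        ratios = [curr[s] / prev[s] for s in matched]
        step = prod(ratios) ** (1 / len(ratios))  # geometric mean of the ratios
        index.append(index[-1] * step)
    return index

# SKU "c" appears only in week 3; it never distorts the week 1 -> 2 link,
# and its arrival alone doesn't move the week 2 -> 3 link either.
weeks = [
    {"a": 1.0, "b": 2.0},
    {"a": 0.9, "b": 2.0},
    {"a": 0.9, "b": 2.0, "c": 10.0},
]
print(chained_index(weeks))
```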
Full methodology at a7om.com/methodology.
Try it
Run `npx atom-mcp-server` or search "ATOM" on Smithery. Free tier covers 4 tools with no login. MCP PRO is at a7om.com/mcp.
The inference market now has a benchmark. Might as well use it.