DEV Community

CloudyBot

I built a free LLM pricing tool that updates itself daily. Here's how

Every time I had to pick an LLM for a project, the pricing
research was painful.

OpenAI updates their page. Anthropic changes their structure.
Google buries the actual numbers behind 3 clicks. DeepSeek
adds a new model. By the time I'd finished comparing, half
my notes were already outdated.

I kept ending up in the same loop. So I built a tool to do
it for me.

What it does

It's at cloudybot.ai/tools/ai-model-pricing — free, no signup.

364 models from 59 providers in one sortable table. USD per
1M tokens for input and output. Cached pricing where vendors
expose it. Context windows, max output, modalities.

You can filter by modality (text, image, audio, multimodal),
sort by cheapest output, or just dump the whole thing as CSV.
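To make "sort by cheapest output" concrete, here's a minimal sketch of what you could do with the CSV export. The column names here are my guesses for illustration, not the tool's actual schema:

```python
import csv
import io

# Hypothetical excerpt of the exported CSV; real column names may differ.
csv_text = """model,provider,input_usd_per_1m,output_usd_per_1m
gpt-sample,openai,2.50,10.00
cheap-sample,deepseek,0.27,1.10
mid-sample,anthropic,3.00,15.00
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
# Sort by cheapest output price per 1M tokens.
rows.sort(key=lambda r: float(r["output_usd_per_1m"]))
print([r["model"] for r in rows])
```

Swap the inline string for the downloaded file and you have a one-off cost ranking without opening a spreadsheet.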

The annoying part — keeping it current

This is the part that took me longer than I expected.

Pricing pages aren't stable. They change layouts. They add
new SKUs. They reprice without warning (looking at you,
DeepSeek). A static scrape would be stale in 2 weeks.

So instead of scraping once, I set up an automation that
runs daily. It opens each provider's pricing page in a real
browser, extracts the rates, validates them against the
previous snapshot, and only publishes when the data passes
sanity checks.

If GPT-5's input price suddenly drops from $5 to $0.05, the
validation flags it as suspicious and the snapshot doesn't
auto-publish. Manual review kicks in.

This catches:

  • Pricing pages that broke their own layout
  • Vendors that A/B test pricing displays
  • Currency conversion glitches
  • Off-by-100 errors when someone updates a CMS field wrong
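The validation idea above can be sketched in a few lines. The 3x threshold and the snapshot shape are illustrative assumptions, not the production logic:

```python
# Sketch of snapshot validation: flag any price that moves more than
# a threshold ratio between daily runs. Threshold is hypothetical.
MAX_RATIO = 3.0  # >3x change in either direction is treated as suspicious

def validate(prev: dict, new: dict, max_ratio: float = MAX_RATIO):
    """Split models into publishable vs. flagged by comparing snapshots."""
    ok, flagged = [], []
    for model, price in new.items():
        baseline = prev.get(model)
        if not baseline:
            # New model or previously $0: no usable baseline, needs a human.
            flagged.append(model)
            continue
        ratio = price / baseline
        target = flagged if (ratio > max_ratio or ratio < 1 / max_ratio) else ok
        target.append(model)
    return ok, flagged

prev = {"gpt-5-input": 5.00, "other-input": 1.00}
new = {"gpt-5-input": 0.05, "other-input": 1.10}  # 100x drop vs. small drift
ok, flagged = validate(prev, new)
```

Here the 100x drop lands in `flagged` while the 10% drift publishes normally, which is exactly the off-by-100 CMS failure mode the list above describes.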

Stack

Nothing fancy. The browser automation is built on top of
Chrome via my main project (CloudyBot — an automation
platform). The pricing tool is essentially eating my own
dog food.

For each provider:

  1. Specialist opens the pricing page in real Chrome
  2. Extracts the table or pricing cards as structured JSON
  3. Diffs against the last snapshot
  4. Publishes if delta is within normal ranges
  5. Flags for review otherwise

Total runtime: about 6 minutes for all 59 providers.
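The five steps above look roughly like this as a loop. CloudyBot's real internals aren't shown here; every function is a hypothetical stand-in (the extraction stub returns a canned snapshot instead of driving Chrome):

```python
def extract(provider):
    # Stand-in for steps 1-2: in the real tool this drives Chrome
    # and returns the pricing table as structured JSON.
    return {"openai": {"model-a": 2.5}}.get(provider, {})

def daily_run(providers, last_snapshots, is_sane):
    published, review_queue = {}, []
    for provider in providers:
        new = extract(provider)                  # steps 1-2: fetch + extract
        prev = last_snapshots.get(provider, {})  # step 3: diff baseline
        if is_sane(prev, new):                   # step 4: publish
            published[provider] = new
        else:                                    # step 5: flag for review
            review_queue.append(provider)
    return published, review_queue

# A trivially permissive sanity check, just for the sketch:
published, review = daily_run(["openai"], {"openai": {"model-a": 2.4}},
                              lambda prev, new: bool(new))
```

In practice `is_sane` would be the delta validation described earlier, and the review queue is what turns a silent scraper failure into a human decision.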

What I learned

A few things that surprised me building this:

Vendor pricing pages are bad. Even big providers like
Anthropic and Google have inconsistent structure across
different model pages. You can't write a generic scraper.
Each provider needs its own extraction logic.

Cached input pricing is buried. OpenAI and Anthropic
both have cached input rates that can be 80-90% cheaper
than regular input. Most comparison tools don't show this
because it's a separate line item or only in docs. I
specifically pull this for the table.
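To show why the cached rate is worth surfacing, here's a back-of-envelope cost comparison. The rates are made up for illustration (check the table for real numbers), but the discount is in line with the 80-90% mentioned above:

```python
# Hypothetical rates (USD per 1M tokens); not any vendor's actual pricing.
regular_input = 3.00
cached_input = 0.30  # 90% cheaper than regular input

# A chatty agent re-sending a 50K-token system prompt 1,000 times a day:
tokens = 50_000 * 1_000
cost_regular = tokens / 1_000_000 * regular_input
cost_cached = tokens / 1_000_000 * cached_input
print(cost_regular, cost_cached)  # roughly $150 vs $15 per day
```

At that kind of volume, ignoring the cached line item means comparing the wrong numbers entirely.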

Free-tier models lie. A bunch of providers list "free"
models that come with hidden rate limits or require approval.
The pricing page says $0, but the reality is more complicated.
The table shows the $0 list price, but I'd recommend checking
before you depend on one.

Context windows are inflated. Several providers claim
1M-token context windows that, in practice, degrade badly
past 200K. The pricing table shows the claimed number, but
benchmark it yourself before you commit to an architecture.

What I'd add next

A few things on the list:

  • Historical price changes (chart pricing trends over time)
  • Quality scores per model (right now it's just cost)
  • Latency benchmarks
  • Throughput/rate limit comparisons

If you have ideas or find pricing that looks wrong, the
contact link is in the footer. Most weeks I get 2-3 bug
reports from devs noticing edge cases I missed.

Try it

cloudybot.ai/tools/ai-model-pricing

Free, no signup, no email gate. Built by a solo dev who
was tired of the spreadsheet.

If you're picking an LLM for a project this week, let me
know what you ended up with. Always curious how people
actually decide.
