I Built an Open-Source CLI to Compare LLM API Costs in Your Terminal (npx, Zero Install)

If you've ever had to compare costs between GPT-4o, Claude Sonnet, and Gemini before committing to a model, you know the pain: browser tabs, manual math, outdated blog posts.

I built llm-costs — a zero-install CLI that does it instantly.

The Problem

Every time I start a new project using LLMs, I go through the same ritual:

  • Open Anthropic pricing page
  • Open OpenAI pricing page
  • Open Google AI pricing page
  • Try to compare apples to oranges with different tokenizers
  • Do math in my head (or a spreadsheet)
  • Realize the blog post I was referencing is 6 months out of date

There had to be a better way.

Demo

npx llm-costs "Build a REST API in Python" --compare

This counts your prompt tokens using the actual tokenizer (tiktoken for OpenAI models, character-based estimation for others), then renders a comparison table across all major providers right in your terminal.
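For non-OpenAI models, the fallback is a character-based heuristic. A minimal sketch of that idea, assuming the common ~4 characters-per-token rule of thumb for English text (the ratio is my assumption, not a figure from llm-costs):

```typescript
// Character-based token estimation (fallback path for models without a
// local tokenizer). The 4 chars-per-token ratio is a rough English-text
// heuristic, not the exact constant llm-costs uses.
const CHARS_PER_TOKEN = 4;

export function estimateTokens(prompt: string): number {
  // Round up partial tokens so we never under-count.
  return Math.ceil(prompt.length / CHARS_PER_TOKEN);
}
```

For OpenAI models the CLI uses tiktoken, so those counts are exact rather than estimated.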

Output looks like:

Model                    Input Cost    Output Cost   Total
─────────────────────────────────────────────────────────
deepseek-chat            $0.00003      $0.00008      $0.00011
gemini-flash-2.0         $0.00005      $0.00020      $0.00025
claude-haiku-3-5         $0.00020      $0.00100      $0.00120
gpt-4o-mini              $0.00027      $0.00108      $0.00135
claude-sonnet-4-5        $0.00150      $0.00750      $0.00900
gpt-4o                   $0.00375      $0.01500      $0.01875
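Each cell in that table is straightforward arithmetic: providers quote prices in dollars per million tokens, so cost is `tokens × (price / 1,000,000)`. A sketch of that calculation, using hypothetical prices rather than any real provider's rates:

```typescript
// Per-model cost math behind the comparison table. Prices are quoted in
// USD per 1M tokens; the numbers used in the test are placeholders, not
// real provider rates.
interface Pricing {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

export function promptCost(
  inputTokens: number,
  outputTokens: number,
  p: Pricing
): { input: number; output: number; total: number } {
  const input = (inputTokens / 1_000_000) * p.inputPerMTok;
  const output = (outputTokens / 1_000_000) * p.outputPerMTok;
  return { input, output, total: input + output };
}
```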

Features

  • Zero install — works with npx or npm i -g llm-costs
  • 17 models, 6 providers — Anthropic, OpenAI, Google, DeepSeek, Mistral, Cohere
  • Auto-updating prices — fetches fresh pricing weekly via GitHub Actions; client checks staleness locally (7-day TTL cache at ~/.llm-costs/pricing.json)
  • Batch processing — pipe a file of prompts, get cost totals: llm-costs batch prompts.txt
  • Budget guard — set a cost ceiling for CI/CD: llm-costs guard --max 0.10
  • Watch mode — live-refresh as you type your prompt
  • MCP server mode — integrate with Claude Desktop or any MCP-compatible tool
  • Price changelog — track when costs changed: llm-costs changelog --since 30d
  • Budget projections — project costs at volume: llm-costs budget --requests 10000
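The budget guard is the feature most useful in automation: fail the CI step when estimated cost exceeds a ceiling. A minimal sketch of the idea, assuming the usual shell convention (exit 0 = pass, non-zero = fail); the exact behavior of `llm-costs guard --max 0.10` may differ:

```typescript
// Budget-guard sketch: compare an estimated cost against a ceiling and
// return a process exit code. The 0/1 convention is an assumption about
// how `llm-costs guard` reports failure, not verified from the source.
export function guardExitCode(estimatedCost: number, maxCost: number): number {
  // 0 = under budget (CI passes), 1 = over budget (CI fails).
  return estimatedCost <= maxCost ? 0 : 1;
}
```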

Auto-Updating Prices — The Key Feature

Most LLM pricing tools go stale within weeks. llm-costs solves this with a two-layer approach:

Client-side: On each run, the CLI checks if your local ~/.llm-costs/pricing.json is older than 7 days. If it is, it fetches fresh data from GitHub in the background (non-blocking, 5s timeout).
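The staleness check itself is simple. A sketch of the 7-day TTL logic as a pure function over timestamps (so it can be tested without touching ~/.llm-costs/pricing.json); the field names are illustrative, not the tool's internals:

```typescript
// Client-side staleness check: the pricing cache is stale when its file
// mtime is more than ttlDays old. Kept as a pure function for testability.
const DAY_MS = 24 * 60 * 60 * 1000;

export function isCacheStale(
  fileMtimeMs: number,
  nowMs: number,
  ttlDays = 7
): boolean {
  return nowMs - fileMtimeMs > ttlDays * DAY_MS;
}
```

In practice the caller would get `fileMtimeMs` from `fs.stat` on the cached pricing.json and trigger the background refresh when this returns true.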

Server-side: A GitHub Actions workflow runs every Monday morning, fetches prices from LiteLLM's aggregate JSON (which tracks provider pricing), diffs the result, and opens a PR if anything changed — including a markdown table showing exactly what went up or down.
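The diff step in that workflow can be sketched as a comparison of two pricing maps: any model whose rates changed (or that is newly added) goes into the PR. The field names below are illustrative, not the actual pricing.json schema:

```typescript
// Weekly diff sketch: compare the committed pricing map against freshly
// fetched data and list models whose prices changed or that are new.
// Schema is a guess for illustration, not llm-costs's real format.
type PriceMap = Record<string, { inputPerMTok: number; outputPerMTok: number }>;

export function diffPrices(oldP: PriceMap, newP: PriceMap): string[] {
  const changed: string[] = [];
  for (const model of Object.keys(newP)) {
    const prev = oldP[model];
    if (
      !prev ||
      prev.inputPerMTok !== newP[model].inputPerMTok ||
      prev.outputPerMTok !== newP[model].outputPerMTok
    ) {
      changed.push(model);
    }
  }
  return changed;
}
```

If `diffPrices` returns a non-empty list, the workflow renders the before/after rates as a markdown table and opens the PR.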

This means the package is always reasonably fresh, and the repo stays transparent about price changes.

Install

# One-shot, no install:
npx llm-costs "your prompt here"

# Or install globally:
npm install -g llm-costs

# Compare across all models:
npx llm-costs "your prompt" --compare

# Check a specific model:
npx llm-costs "your prompt" --model claude-sonnet-4-5

Why Open Source?

LLM pricing is genuinely confusing and changes frequently. I wanted something the community could maintain together — PRs to add new models, fix prices, or add providers are very welcome.

Would love feedback in the comments — what models or features are you missing?
