Developers do not need another “best AI API provider” list.
Most lists share the same problem: a few affiliate links, some vague pricing claims, and no clear way to verify whether a provider actually supports the model, Base URL pattern, payment method, or billing behavior a real project needs.
So I built a more boring thing: an AI API directory.
Not a leaderboard.
Not a recommendation engine.
A structured directory of observed facts.
The idea is simple: before wiring a third-party AI API provider into Codex, Cursor, Claude Code, or your own app, you should be able to compare a few concrete fields (a sketch of one record follows the list):
- supported providers and models
- OpenAI-compatible or Anthropic-compatible API behavior
- custom Base URL support
- pricing notes
- payment methods
- invoice support
- referral or recharge rules
- public verification sources
- traffic and domain signals when available
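To make that concrete, here is a minimal sketch of what one normalized entry could look like in TypeScript. The field names are illustrative, not the directory's actual schema:

```ts
// Illustrative shape for one directory entry.
// Field names are hypothetical; the real schema may differ.
interface ProviderProfile {
  name: string;
  models: string[];              // exact model names as exposed by the API
  openaiCompatible: boolean;     // OpenAI-style request/response shape
  anthropicCompatible: boolean;
  customBaseUrl: string | null;  // e.g. "https://api.example-provider.com/v1"
  pricingPublic: boolean;        // visible before login?
  paymentMethods: string[];      // e.g. ["card", "alipay"]
  invoiceSupport: boolean;
  rechargeNotes?: string;        // referral and recharge rules, free-form
  sources: string[];             // public URLs used to verify the above
  lastChecked: string;           // ISO date of the last check
}
```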
The important shift is this:
“Compatible with OpenAI” describes an API shape. It does not describe trust, uptime, pricing, ownership, or support quality.
That distinction matters.
A provider can expose an OpenAI-style endpoint and still differ wildly in model names, streaming behavior, rate limits, credit expiration, recharge rules, error formats, and whether pricing is even visible before login.
For small experiments, that might be fine.
For developer tools, internal agents, or production workflows, those details are the product.
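As a concrete reference point, most OpenAI-compatible providers are wired in by pointing the official SDK at a custom Base URL. A minimal sketch; the provider URL, environment variable, and model alias are placeholders, not a recommendation:

```ts
import OpenAI from "openai";

// Point the official SDK at a third-party endpoint.
// The endpoint shape alone guarantees none of what follows:
// model aliases, streaming behavior, rate limits, error formats.
const client = new OpenAI({
  baseURL: "https://api.example-provider.com/v1", // hypothetical provider
  apiKey: process.env.PROVIDER_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "gpt-4o", // the exposed alias may differ per provider
  messages: [{ role: "user", content: "ping" }],
});
console.log(completion.choices[0].message.content);
```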
Why a Directory Instead of a Ranking?
Ranking sounds useful, but it hides too much judgment.
If I say “Provider A is better than Provider B,” that may be true for one person who needs Claude access, Alipay recharge, and a cheap test balance. It may be wrong for someone who needs invoices, stable OpenAI-compatible endpoints, or multi-model routing.
So the directory takes a different approach:
- Collect providers.
- Normalize the fields.
- Show what can be verified.
- Let people filter by their actual use case.
This is closer to documentation than marketing.
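In code terms, each use case is just a predicate over normalized records. A sketch, reusing the hypothetical ProviderProfile shape from the first snippet:

```ts
// Hypothetical filter: providers that expose Claude models,
// accept Alipay, and publish pricing without a login wall.
function matchesUseCase(p: ProviderProfile): boolean {
  return (
    p.models.some((m) => m.toLowerCase().startsWith("claude")) &&
    p.paymentMethods.includes("alipay") &&
    p.pricingPublic
  );
}

// `providers` would be loaded from the content collection at build time.
const shortlist = (providers: ProviderProfile[]) =>
  providers.filter(matchesUseCase);
```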
The page currently groups providers around search intents such as:
- AI API directory
- OpenAI API proxy
- Claude API proxy
- cheap OpenAI API
- OpenRouter alternatives
Each intent becomes a landing page, but the underlying data stays structured.
That means the same provider can appear in different contexts without duplicating the source of truth.
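One way to express that single source of truth: each intent page is a named filter over the same records. The slugs and predicates below are illustrative:

```ts
// Each landing page is a slug plus a predicate over the same
// ProviderProfile records, so no provider data is duplicated.
const intentPages: Record<string, (p: ProviderProfile) => boolean> = {
  "openai-api-proxy": (p) => p.openaiCompatible,
  "claude-api-proxy": (p) => p.anthropicCompatible,
  "cheap-openai-api": (p) => p.openaiCompatible && p.pricingPublic,
};

function providersFor(intent: string, all: ProviderProfile[]): ProviderProfile[] {
  const match = intentPages[intent] ?? (() => true);
  return all.filter(match);
}
```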
The Engineering Lesson
The surprisingly hard part was not building the UI.
The hard part was deciding what counts as a useful fact.
“Supports GPT-4” is less useful than concrete answers to a few questions (a small probe sketch follows the list):
- what exact model names are exposed?
- is the endpoint OpenAI-compatible?
- does it require a custom Base URL?
- does streaming work?
- is pricing public or login-gated?
- what payment methods are listed?
- when was the information last checked?
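Answering a couple of those questions comes down to small probes against the endpoint. A minimal sketch, assuming an OpenAI-compatible surface and Node 18+; the Base URL, key, and model alias are placeholders:

```ts
const BASE_URL = "https://api.example-provider.com/v1"; // hypothetical
const headers = { Authorization: `Bearer ${process.env.PROVIDER_API_KEY}` };

// 1. Which exact model names are exposed?
const models = await fetch(`${BASE_URL}/models`, { headers });
console.log((await models.json()).data?.map((m: { id: string }) => m.id));

// 2. Does streaming actually stream (SSE), or arrive as one buffered body?
const res = await fetch(`${BASE_URL}/chat/completions`, {
  method: "POST",
  headers: { ...headers, "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-4o", // substitute an alias from step 1
    stream: true,
    messages: [{ role: "user", content: "ping" }],
  }),
});
// An OpenAI-compatible streaming response should be text/event-stream.
console.log("content-type:", res.headers.get("content-type"));
```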
Once those fields exist, the frontend becomes straightforward: search, filter, compare, and link to deeper provider profiles.
The stack is intentionally simple:
- Astro
- TypeScript
- Content Collections
- React only for interactive islands
- structured JSON content for provider profiles
- static output deployed to Cloudflare Workers
This works well because the data is mostly content, but the user experience needs interactivity.
Astro handles the content pages.
React handles the searchable directory UI.
The content collection schema keeps the provider data from drifting into chaos.
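In Astro, that schema lives in the content config. A trimmed sketch, assuming JSON data collections; the fields mirror the hypothetical record from earlier, not the site's real schema:

```ts
// src/content/config.ts (trimmed sketch)
import { defineCollection, z } from "astro:content";

const providers = defineCollection({
  type: "data", // JSON files under src/content/providers/
  schema: z.object({
    name: z.string(),
    models: z.array(z.string()),
    openaiCompatible: z.boolean(),
    customBaseUrl: z.string().url().nullable(),
    pricingPublic: z.boolean(),
    paymentMethods: z.array(z.string()),
    invoiceSupport: z.boolean(),
    sources: z.array(z.string().url()),
    lastChecked: z.string(), // ISO date of the last check
  }),
});

export const collections = { providers };
```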
The Bigger Point
AI infrastructure is moving fast, but developer decisions still need boring verification.
A nice homepage is not enough.
A low price is not enough.
“OpenAI-compatible” is not enough.
The real question is:
Can this provider be understood, compared, tested, and replaced without turning the whole project into glue code?
That is what the directory is trying to answer.
Here is the page: https://ccnavx.com/directory/ai-api-directory/
Next, I am thinking about adding more explicit test result fields: streaming behavior, error format, model alias mapping, and minimum viable curl examples for each provider.
Because at the end of the day, the best AI API provider is not the one with the loudest claim.
It is the one whose behavior can be verified before it touches production.