Structured outputs are less portable across LLM providers than they look.
A schema change can seem harmless and still break one provider while passing another.
I built Schema Gateway for that exact failure mode.
What it does:
- compile one schema into provider-ready request shapes
- diff a baseline schema against a candidate schema
- lint for portability issues
- normalize payloads against a schema
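To make the diff step concrete, here's a minimal sketch of what a breaking-change diff between a baseline and a candidate schema could flag. This is not Schema Gateway's actual API; `diff_schemas` and its rules are illustrative assumptions.

```python
# Hypothetical sketch of a schema diff that flags changes likely to
# break structured-output callers. NOT Schema Gateway's API.

def diff_schemas(baseline: dict, candidate: dict, path: str = "$") -> list[str]:
    """Compare two JSON Schemas and report potentially breaking changes."""
    changes = []
    if baseline.get("type") != candidate.get("type"):
        changes.append(
            f"{path}: type {baseline.get('type')} -> {candidate.get('type')}"
        )
    base_props = baseline.get("properties", {})
    cand_props = candidate.get("properties", {})
    for name in base_props.keys() - cand_props.keys():
        changes.append(f"{path}.{name}: removed")
    for name in cand_props.keys() - base_props.keys():
        changes.append(f"{path}.{name}: added")
    for name in base_props.keys() & cand_props.keys():
        changes.extend(
            diff_schemas(base_props[name], cand_props[name], f"{path}.{name}")
        )
    # Newly-required fields break consumers that previously omitted them.
    for name in set(candidate.get("required", [])) - set(baseline.get("required", [])):
        changes.append(f"{path}.{name}: now required")
    return changes
```

A diff like this surfaces the "harmless-looking" changes mentioned above (a field quietly becoming required, a type narrowing) before a provider rejects the request at runtime.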
It’s built for people working across providers like OpenAI, Gemini, Anthropic, and Ollama, where “structured output support” sounds similar until the edge cases show up.
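One concrete edge case: some strict structured-output modes (OpenAI's, for example) require `additionalProperties: false` on every object and every property to be listed in `required`, while other providers accept looser schemas. The sketch below shows the kind of portability lint that catches this; the rules here are illustrative assumptions, not Schema Gateway's actual rule set or API.

```python
# Minimal portability-lint sketch. The per-provider rules are
# illustrative assumptions, not a complete compatibility matrix.

def lint_schema(schema: dict) -> list[str]:
    """Walk a JSON Schema and flag constructs that strict
    structured-output modes commonly reject."""
    issues = []

    def walk(node, path="$"):
        if not isinstance(node, dict):
            return
        if node.get("type") == "object":
            # Strict modes may insist on additionalProperties: false.
            if node.get("additionalProperties") is not False:
                issues.append(f"{path}: additionalProperties is not false")
            props = node.get("properties", {})
            required = set(node.get("required", []))
            # Strict modes may require every property to be required.
            for name in props:
                if name not in required:
                    issues.append(f"{path}.{name}: optional property")
            for name, sub in props.items():
                walk(sub, f"{path}.{name}")
        if node.get("type") == "array":
            walk(node.get("items", {}), f"{path}[]")

    walk(schema)
    return issues
```

A schema like `{"type": "object", "properties": {"name": {"type": "string"}}}` validates fine on a lenient provider but trips both checks above, which is exactly the "passes here, breaks there" failure mode.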
There’s an open-source/local path, plus a hosted Starter Access tier if you want to try the hosted API quickly.
Main thing I’m looking for:
feedback from people actually shipping structured outputs or comparing providers in production. I’m especially interested in whether this pain is strong enough to justify a dedicated guardrail in real workflows.
Live here:
https://schema-gateway.sridharsravan.workers.dev/