Introduction
Have you ever deployed an AI app, only to find it suddenly broken because OpenAI or Gemini deprecated a model you were using? 😱
I have, and it cost me hours of debugging, late-night panic, and a ton of lost productivity. Upgrading libraries while prod is down is no fun!
If you're building apps on LLMs like OpenAI, Anthropic, or Gemini, model deprecations aren't just annoying; they're dangerous.
That’s why I created llm-model-deprecation, a lightweight Python library that alerts you before an LLM model disappears.
The Problem
LLM APIs evolve quickly:
OpenAI retires older GPT-3.5 models.
Gemini might tweak endpoint parameters without notice.
Anthropic occasionally removes older Claude versions.
If your production app depends on hardcoded model names, one day your API calls will start failing.
Common consequences:
Broken chatbots
Failed recommendation engines
Nightmarish debugging sessions
How I Solved It
Instead of checking docs manually or waiting for an unexpected failure, I automated the process:
✅ Track model deprecation status for OpenAI, Anthropic, Gemini
✅ Receive early warnings before a model is deprecated
✅ Integrate into CI/CD pipelines so your production app is always safe
GitHub Actions
Run the same check in GitHub Actions:
- name: Check LLM deprecations
  uses: techdevsynergy/llm-model-deprecation@v1.1.0
  with:
    fail-on-deprecated: true
Options: path (project root to scan), fail-on-deprecated, version
CLI
pip install llm-model-deprecation
llm-deprecation scan
llm-deprecation scan /path/to/project
llm-deprecation scan --fail-on-deprecated  # exit 1 if any found (for CI)
Library usage
from llm_deprecation import DeprecationChecker, DeprecationStatus

checker = DeprecationChecker()

# Check by model id (searches all providers)
checker.is_deprecated("gpt-3.5-turbo-0301")  # True
checker.is_retired("gpt-3.5-turbo-0301")     # True
checker.status("gpt-4")                      # DeprecationStatus.ACTIVE

# With provider for exact match
checker.get("claude-2.0", provider="anthropic")
# -> ModelInfo(provider='anthropic', model_id='claude-2.0', status=..., replacement='...', ...)

# List deprecated models
for m in checker.list_deprecated(provider="openai"):
    print(m.model_id, m.status.value, m.replacement)
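To make the guard pattern concrete, here is a minimal, self-contained sketch of the idea. It does not use the library itself: the registry dict, the statuses, and `assert_models_safe` are hypothetical stand-ins for the data and checks the real package provides.

```python
from enum import Enum

class DeprecationStatus(Enum):
    ACTIVE = "active"
    DEPRECATED = "deprecated"  # still works, but scheduled for removal
    RETIRED = "retired"        # API calls will fail

# Hypothetical stand-in for the bundled registry data.
REGISTRY = {
    "gpt-3.5-turbo-0301": DeprecationStatus.RETIRED,
    "claude-2.0": DeprecationStatus.DEPRECATED,
    "gpt-4": DeprecationStatus.ACTIVE,
}

def assert_models_safe(model_ids):
    """Raise if any model the app depends on is deprecated or retired."""
    bad = [m for m in model_ids
           if REGISTRY.get(m, DeprecationStatus.ACTIVE) is not DeprecationStatus.ACTIVE]
    if bad:
        raise RuntimeError(f"Deprecated/retired models in use: {bad}")

# Run at app startup or as a CI step:
assert_models_safe(["gpt-4"])  # passes silently
try:
    assert_models_safe(["gpt-3.5-turbo-0301"])
except RuntimeError as e:
    print(e)
```

Running a check like this at startup (or as a failing CI step) turns a silent production outage into a loud, early error.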
Data Refresh (Weekly)
I wrote web crawlers that run every week to update and add model details. The registry is loaded from a hosted URL; if it's unreachable (e.g., you're offline), the library falls back to its built-in registry. My company, Reps.ai, covers the hosting cost to keep this stable.
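The remote-with-fallback loading described above can be sketched like this. This is a simplified illustration, not the library's actual code; the registry URL and data shape are placeholder assumptions.

```python
import json
import urllib.request

# Hypothetical URL; the real registry is hosted by the project.
REGISTRY_URL = "https://example.com/llm-deprecation/registry.json"

# Snapshot shipped with the package, used when the network is unavailable.
BUILTIN_REGISTRY = {"gpt-3.5-turbo-0301": "retired"}

def load_registry(fetch=None):
    """Try the remote registry first; fall back to the built-in copy."""
    if fetch is None:
        def fetch():
            with urllib.request.urlopen(REGISTRY_URL, timeout=5) as resp:
                return json.load(resp)
    try:
        return fetch()
    except Exception:
        # Offline or the host is down: use the bundled snapshot.
        return BUILTIN_REGISTRY

# Simulate an offline environment with a loader that always fails:
def offline_fetch():
    raise OSError("network unreachable")

print(load_registry(fetch=offline_fetch))  # falls back to BUILTIN_REGISTRY
```

Injecting the `fetch` callable keeps the fallback path easy to exercise in tests without touching the network.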
Call to Action
Try it today and never get caught by a model deprecation again:
🔗 Check out llm-model-deprecation on GitHub
If this helps you, star the repo ⭐ — it motivates me to keep updating the library with new LLMs as they launch.
Author
Sudharsana Viswanathan, Engineering Lead at Reps.ai
Top comments (4)
Model deprecation breaking production is one of the most avoidable incidents and yet it keeps happening. The root issue is usually that the model name is baked directly into prompts or configs without an abstraction layer.
The same problem applies to prompt structure: when your prompt is one big string, a model switch means rewriting everything. When it's decomposed into typed blocks (role, constraints, output format), you can swap the model underneath and just adjust the blocks that need tuning. flompt.dev / github.com/Nyrok/flompt
That's the real challenge — prompt-model coupling. The more your prompt exploits model-specific quirks (verbosity, formatting defaults, reasoning style), the more brittle it is to swaps. One partial solution: structuring prompts as semantic blocks (role, constraints, output format) rather than one monolithic string makes the model-specific parts explicit and easier to isolate when you need to adapt. Still painful, but at least you know exactly what to retune.
Our prompts are very fine-tuned to a particular model, so we can't just swap them without tuning again. We use fast models like gpt5-nano since most of the time the user is waiting on screen, so we put a lot of effort into tuning the prompts and context to get high accuracy out of these small models. It's not easy to swap them at the last minute, even if you follow every trick in the book.