Sudharsana Viswanathan
Production AI Broke Because of a Model Deprecation — So I Built llm-model-deprecation

Introduction

Have you ever deployed an AI app, only to find it suddenly broken because OpenAI or Gemini deprecated a model you were using? 😱

I did, and it cost me hours of debugging, a late-night panic, and a ton of lost productivity. Upgrading libraries while prod is down is no fun!

If you’re building apps on LLMs from OpenAI, Anthropic, or Gemini, model deprecations aren’t just annoying; they’re dangerous.

That’s why I created llm-model-deprecation, a lightweight Python library that alerts you before an LLM model disappears.

The Problem

LLM APIs evolve quickly:

  • OpenAI retires older GPT-3.5 models.

  • Gemini might tweak endpoint parameters without notice.

  • Anthropic occasionally removes older Claude versions.

If your production app depends on hardcoded model names, one day your API calls will start failing.

Common consequences:

  • Broken chatbots

  • Failed recommendation engines

  • Nightmarish debugging sessions
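To make the failure mode concrete, here is a minimal sketch of what happens when a single hardcoded model id is retired, and how a fallback preference list survives it. The "client", error type, and model names here are illustrative stand-ins, not any real provider SDK:

```python
# Illustrative only: a stand-in "client" that rejects retired model ids,
# plus a fallback helper. Real provider SDKs raise their own error types.
RETIRED = {"gpt-3.5-turbo-0301"}  # hypothetical retired-model set


class ModelRetiredError(Exception):
    pass


def call_model(model_id: str, prompt: str) -> str:
    """Pretend API call that fails for retired models."""
    if model_id in RETIRED:
        raise ModelRetiredError(f"{model_id} has been retired")
    return f"response from {model_id}"


def call_with_fallback(prompt: str, preferred: list[str]) -> str:
    """Try each model in order until one still exists."""
    last_err = None
    for model_id in preferred:
        try:
            return call_model(model_id, prompt)
        except ModelRetiredError as err:
            last_err = err  # log this and try the next candidate
    raise last_err


# A hardcoded single model breaks; a preference list keeps serving:
print(call_with_fallback("hi", ["gpt-3.5-turbo-0301", "gpt-4o-mini"]))
```

A fallback list papers over the outage, but you still want to know *before* the first entry disappears, which is what the rest of this post is about.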

How I Solved It

Instead of checking docs manually or waiting for an unexpected failure, I automated the process:
✅ Track model deprecation status for OpenAI, Anthropic, Gemini

✅ Receive early warnings before a model is deprecated

✅ Integrate into CI/CD pipelines so your production app is always safe

GitHub Actions

Run the same check in GitHub Actions:

- name: Check LLM deprecations
  uses: techdevsynergy/llm-model-deprecation@v1.1.0
  with:
    fail-on-deprecated: true

Options: path (project root to scan), fail-on-deprecated, version

CLI

pip install llm-model-deprecation
llm-deprecation scan
llm-deprecation scan /path/to/project
llm-deprecation scan --fail-on-deprecated   # exit 1 if any found (for CI)

Library usage

from llm_deprecation import DeprecationChecker, DeprecationStatus

checker = DeprecationChecker()

# Check by model id (searches all providers)
checker.is_deprecated("gpt-3.5-turbo-0301")   # True
checker.is_retired("gpt-3.5-turbo-0301")     # True
checker.status("gpt-4")                       # DeprecationStatus.ACTIVE

# With provider for exact match
checker.get("claude-2.0", provider="anthropic")
# -> ModelInfo(provider='anthropic', model_id='claude-2.0', status=..., replacement='...', ...)

# List deprecated models
for m in checker.list_deprecated(provider="openai"):
    print(m.model_id, m.status.value, m.replacement)
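One practical pattern on top of this is a fail-fast guard at application startup. The sketch below leans on the `is_deprecated`/`get`/`replacement` API shown above, but is written against any checker-like object so it can be demonstrated here with a stub; the stub's registry entry is made up for illustration:

```python
def assert_models_healthy(checker, model_ids):
    """Raise at startup if any configured model is deprecated,
    suggesting the replacement when the registry knows one."""
    problems = []
    for model_id in model_ids:
        if checker.is_deprecated(model_id):
            info = checker.get(model_id)
            hint = f" (use {info.replacement})" if info and info.replacement else ""
            problems.append(f"{model_id} is deprecated{hint}")
    if problems:
        raise RuntimeError("; ".join(problems))


# Demo with a hypothetical stub instead of the real DeprecationChecker:
from types import SimpleNamespace


class StubChecker:
    _registry = {"gpt-3.5-turbo-0301": SimpleNamespace(replacement="gpt-4o-mini")}

    def is_deprecated(self, model_id):
        return model_id in self._registry

    def get(self, model_id):
        return self._registry.get(model_id)


assert_models_healthy(StubChecker(), ["gpt-4"])  # passes silently
```

Calling this once at boot turns a mid-traffic outage into a loud, actionable error at deploy time.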

Data Refresh (Weekly)

I wrote web crawlers that run every week to update and add model details. The registry is loaded from a hosted URL; if it's unreachable (e.g., you're offline), the built-in registry bundled with the library is used instead. My company, Reps.ai, covers the hosting cost to keep this stable.
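The "remote registry with a bundled fallback" pattern looks roughly like this. This is a generic sketch, not the library's actual loading code; the URL, registry shape, and injected `fetch` callable are all illustrative:

```python
import json

# Stand-in for the registry data shipped inside the package:
BUILTIN_REGISTRY = {"gpt-3.5-turbo-0301": {"status": "retired"}}


def load_registry(fetch, url="https://example.com/registry.json"):
    """Prefer the freshly crawled registry; fall back to the bundled
    copy when the network (or the host) is unavailable."""
    try:
        return json.loads(fetch(url))
    except Exception:
        return BUILTIN_REGISTRY


# `fetch` is injected so the offline path is easy to exercise:
def offline_fetch(url):
    raise OSError("network unreachable")


registry = load_registry(offline_fetch)  # falls back to BUILTIN_REGISTRY
```

Injecting the fetch function keeps the fallback path trivially testable, which matters for a tool whose whole job is being reliable when something upstream disappears.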

Call to Action

Try it today and never get caught by a model deprecation again:

🔗 Check out llm-model-deprecation on GitHub

If this helps you, star the repo ⭐ — it motivates me to keep updating the library with new LLMs as they launch.

Author

Sudharsana Viswanathan, Engineering Lead at Reps.ai
