If your code hits claude-3-haiku-20240307 and started failing yesterday, you're not alone. Anthropic retired Claude Haiku 3 on April 20, 2026. Retired, not deprecated — requests to the old model ID no longer complete. They return an error.
This is a real deprecation cutoff that was announced 60 days ago and landed without much fanfare. If you missed the email, you're finding out now — probably from a production alert, or worse, from a user telling you the product is broken.
## What the failure looks like
Anthropic's deprecation policy is explicit: retired models return errors, not redirects. There is no automatic fallback to Haiku 4.5. The model ID just stops resolving.
In practice that means a 4XX response from the Messages API — not_found_error if the router decides the model identifier is gone, or invalid_request_error if it's treated as a malformed request. Either way, the body looks roughly like this:
```json
{
  "type": "error",
  "error": {
    "type": "not_found_error",
    "message": "The requested resource could not be found."
  },
  "request_id": "req_..."
}
```
What makes this nasty is that nothing about your code changed. The request body is identical to what worked last week. The SDK version is the same. Your API key is the same. The only thing that changed is whether Anthropic's routing layer still recognizes claude-3-haiku-20240307 — and as of April 20, it doesn't.
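If you want your client to surface this failure mode distinctly from ordinary API errors, here is a minimal sketch. The helper name and the exact status/type checks are our assumptions, based on the error envelope shown above; this is not part of Anthropic's SDK.

```python
# Sketch: classify a failed Messages API response as "model gone" versus
# some other error. Field names follow the error envelope shown above;
# the helper itself is hypothetical, not an official SDK function.
def is_unknown_model_error(status: int, body: dict) -> bool:
    if status not in (400, 404):
        return False
    err = body.get("error", {})
    if err.get("type") == "not_found_error":
        return True
    # invalid_request_error covers many failure modes; only treat it as a
    # model problem when the message actually mentions the model.
    return (err.get("type") == "invalid_request_error"
            and "model" in err.get("message", "").lower())
```

Wiring this into your error handler lets you page on "the model is gone" separately from transient failures, which is the distinction that matters during a retirement window.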
## Where the stale model ID is probably hiding
Most teams have more than one reference to a model name. Places to look:
- **Environment variables and config files** — `ANTHROPIC_MODEL`, `CLAUDE_MODEL`, `DEFAULT_MODEL`. Check staging and prod, not just the repo.
- **Framework wrappers** — LangChain, LlamaIndex, Vercel AI SDK, LiteLLM. Older versions may have `claude-3-haiku-20240307` as a default or in documented examples. Check what your package pins to.
- **Hardcoded test fixtures** — VCR cassettes, snapshot tests, and mocked responses often capture the model string.
- **Fine-grained routing logic** — anywhere you pick a model based on task type ("use the cheap one for classification"), check which model the "cheap" branch resolves to.
- **Logs and monitoring dashboards** — not a code issue, but if you filter traces by model ID, those filters break silently.
The five-minute grep:

```shell
git grep -n "claude-3-haiku-20240307"
git grep -n "claude-3-haiku"
```
If either returns hits, you have work to do.
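You can also make that grep permanent by failing CI when a retired ID sneaks back in. A sketch, where the `RETIRED` set and the helper are our own (extend the set as Anthropic retires more models):

```python
# Hypothetical CI guard: detect retired Anthropic model IDs in a blob of
# text, e.g. the output of `git grep -h claude-` or a dumped config.
# The RETIRED set is our assumption based on announced retirements.
RETIRED = {
    "claude-3-haiku-20240307",
    "claude-3-opus-20240229",
    "claude-3-5-sonnet-20240620",
}

def find_retired_ids(text: str) -> set:
    """Return every retired model ID that appears in the given text."""
    return {model_id for model_id in RETIRED if model_id in text}

# Wire into CI: run `git grep -h claude-`, pass the output to
# find_retired_ids(), and fail the build if the result is non-empty.
```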
## What to replace it with
Anthropic's recommended replacement is claude-haiku-4-5-20251001. On paper it's a straight swap. In practice, Haiku 4.5 is a Claude 4-generation model, and the Claude 4 generation introduced a handful of breaking API changes that trip up code written for Haiku 3:
- `temperature` and `top_p` can no longer both be set. Haiku 3 accepted both; Haiku 4.5 rejects the combination. If your code sets both defensively, remove one before switching.
- **New stop reasons.** Claude 4 adds stop reasons that Haiku 3 didn't emit. If you have a `switch` or `match` on `stop_reason` that throws on unknown values, it will start throwing.
- **Trailing newlines in tool parameters are preserved.** Previously they were stripped. If your downstream logic trims on output, fine; if it does exact-match validation, you may get spurious mismatches.
- **Rate limits are tracked separately per model.** Your old Haiku 3 quota doesn't transfer. If you were running near the ceiling, plan for headroom on the new model.
- **Pricing.** Haiku 4.5 is $1.00 input / $5.00 output per million tokens, up from $0.25 / $1.25. That's a 4× increase. Cost-sensitive workloads need to re-forecast.
The migration isn't a one-line change, and pretending otherwise is how bugs ship.
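The first two behavioral changes can be absorbed in a thin shim at the call site. A sketch, assuming your code currently passes both sampling parameters and branches on `stop_reason`; the helper names are ours, and `KNOWN_STOP_REASONS` is an assumption you should check against the current docs:

```python
# Sketch of a migration shim for two of the Claude 4 behavioral changes.
KNOWN_STOP_REASONS = {"end_turn", "max_tokens", "stop_sequence", "tool_use"}

def build_params(temperature=None, top_p=None, **rest):
    """Build Messages API kwargs, never sending temperature and top_p together."""
    params = dict(rest, model="claude-haiku-4-5-20251001")
    if temperature is not None:
        params["temperature"] = temperature  # prefer temperature; drop top_p
    elif top_p is not None:
        params["top_p"] = top_p
    return params

def classify_stop(stop_reason):
    """Degrade unknown stop reasons to a generic bucket instead of raising."""
    return stop_reason if stop_reason in KNOWN_STOP_REASONS else "other"
```

The point of `classify_stop` is defensive: a `match` that throws on unrecognized values turns a new, benign stop reason into a production incident.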
## The pattern this fits
Anthropic's deprecation cadence is now predictable enough to plan against. The recent timeline:
- October 28, 2025 — Claude Sonnet 3.5 retired
- January 5, 2026 — Claude Opus 3 retired
- February 19, 2026 — Claude 3.5 Haiku and Claude Sonnet 3.7 retired
- April 20, 2026 — Claude Haiku 3 retired (yesterday)
- June 15, 2026 — Claude Opus 4 and Claude Sonnet 4 retire (announced April 14, 2026)
If your code pins a Claude 4 model ID today, that pin has a two-month clock on it. The 2024- and early-2025-generation model IDs are all on a one-year glidepath to retirement.
## Monitoring model deprecation as an API change
What's happening here is a narrow case of a broader problem: third-party APIs change between your deploys, and the change is invisible until something breaks.
- The Anthropic deprecation page updates.
- The model ID in `GET /v1/models` disappears.
- The error response from `POST /v1/messages` changes shape for the old ID.
None of those events trigger your CI. None of them change your code. All of them change whether your code works.
The way to catch this before your users do is to treat the provider's API surface — the models list, the error shapes, the documented schemas — as a contract you continuously verify. Poll, diff against a known baseline, alert on breaking diffs. Same pattern as monitoring any other third-party API, applied to the specific surface each vendor exposes.
For Anthropic specifically: GET https://api.anthropic.com/v1/models returns the currently-available model IDs. If claude-haiku-4-5-20251001 disappears from that list, you want to know before your next deploy, not after.
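A minimal version of that check, sketched below. The fetch targets Anthropic's documented models endpoint; the scheduling and alerting wiring is up to you, and the baseline is whatever list you last verified.

```python
import json
import urllib.request

def diff_model_ids(baseline, current):
    """Compare a stored baseline of model IDs against the live list."""
    return {"removed": sorted(set(baseline) - set(current)),
            "added": sorted(set(current) - set(baseline))}

def fetch_model_ids(api_key):
    # GET /v1/models returns {"data": [{"id": ...}, ...]}; headers follow
    # Anthropic's documented auth scheme.
    req = urllib.request.Request(
        "https://api.anthropic.com/v1/models",
        headers={"x-api-key": api_key, "anthropic-version": "2023-06-01"},
    )
    with urllib.request.urlopen(req) as resp:
        return {m["id"] for m in json.load(resp)["data"]}

# In a scheduled job: alert whenever diff_model_ids(baseline, fetch_model_ids(key))
# reports anything under "removed".
```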
## Minimum-viable fix for today
- `git grep claude-3-haiku-20240307` across every repo that talks to Anthropic
- Replace with `claude-haiku-4-5-20251001`
- Remove any call site that sets both `temperature` and `top_p`
- Re-run integration tests against the new model — not just unit tests with mocked responses
- Forecast cost impact of the 4× price change before rolling to production
- Check that your rate-limit dashboards pull the new model's quotas, not the retired one's
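The cost forecast in that list is arithmetic you can do up front. A sketch using the per-million-token prices quoted earlier; the token volumes are made-up examples:

```python
def monthly_cost(in_tokens, out_tokens, in_price_per_m, out_price_per_m):
    """Cost in dollars given token volumes and per-million-token prices."""
    return in_tokens / 1e6 * in_price_per_m + out_tokens / 1e6 * out_price_per_m

# Example: 500M input + 100M output tokens per month.
haiku3 = monthly_cost(500e6, 100e6, 0.25, 1.25)    # $250.00
haiku45 = monthly_cost(500e6, 100e6, 1.00, 5.00)   # $1000.00
# Both prices rose 4x, so the ratio is 4.0 regardless of input/output mix.
```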
Then, if this is the second or third time a provider API change has broken your build without warning, consider whether catching these earlier is worth instrumenting.
FlareCanary monitors REST APIs and MCP servers for schema drift — including model availability endpoints from AI providers. Free tier covers 5 endpoints with daily checks.