Anthropic released Claude Opus 4.7 on April 16, 2026. Three things make this release worth paying attention to if you're on Opus 4.6 and wondering whether it's time to upgrade: a significant jump in image resolution support, a new task budget mechanism for agentic loops, and a measurable step up on the hardest coding benchmarks.
This guide covers what actually changed, how each feature works, and where Opus 4.7 fits into a practical developer workflow — including the cases where a smaller model is still the better call.
## What Shipped in Claude Opus 4.7
The headline changes in the April 16 release are:
- **High-resolution image support** — maximum input resolution raised from 1568px / 1.15MP to 2576px / 3.75MP. This is the first Claude model to reach this resolution ceiling, and it matters for any workflow where image fidelity directly affects output quality: UI screenshot analysis, document parsing, diagram interpretation, or vision-based code generation.
- **Task budgets (beta)** — a new parameter that sets a token ceiling on an entire agentic loop. The model receives a running countdown across thinking, tool calls, tool results, and final output. Rather than cutting off unpredictably when the context window fills, Opus 4.7 sees the budget depleting and wraps up gracefully before hitting the limit. This is especially relevant for multi-step coding agents that can spiral into unexpected token spend.
- **Agentic coding gains** — Anthropic built Opus 4.7 specifically for the tasks where Opus 4.6 needed hand-holding: long-running agentic workflows, production-grade work across large multi-file codebases, and tasks where the model needed to maintain context coherence across dozens of tool calls.
## Benchmark Scores
| Benchmark | Opus 4.6 | Opus 4.7 | Change |
|---|---|---|---|
| SWE-bench Verified | 80.8% | 87.6% | +6.8 pp |
| SWE-bench Pro | 53.4% | 64.3% | +10.9 pp |
| CursorBench | 58% | 70% | +12 pp |
The SWE-bench Pro jump from 53.4% to 64.3% is the number that enterprise partners have flagged most. SWE-bench Pro uses harder, more realistic tasks than the standard benchmark — closer to actual production bug-fixing than curated coding puzzles.
Beyond benchmarks, Anthropic reports approximately 14% better multi-step workflow performance at the same or lower token cost, and around a third fewer tool errors compared to Opus 4.6. These figures are from Anthropic's own analysis and have not been independently replicated at publication time.
## High-Resolution Vision in Practice
The resolution increase from 1568px / 1.15MP to 2576px / 3.75MP unlocks several use cases that were previously degraded or impractical:
- **Dense UI screenshots** — modern dashboards, IDEs, and design tools at full display resolution no longer require downsampling. Asking Claude to identify specific UI elements, flag accessibility issues, or generate code matching a screenshot is more reliable at higher resolution.
- **Document analysis** — scanned PDFs, printed forms, and multi-column layouts had visible compression artifacts at 1568px. At 2576px, fine text in tables and footnotes is legible without preprocessing.
- **Diagram parsing** — architecture diagrams, flowcharts, and network topology images often have small labels that were previously lost. Resolution-sensitive annotations now survive the input stage.
The practical limit is that images above the maximum are still downsampled to fit. For most purposes, the new ceiling covers consumer and retina display screenshots without any manual resizing step.
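If you want to know in advance whether an image will be downsampled, you can compare its dimensions against the new ceiling before uploading. A minimal sketch — the 2576px / 3.75MP limits come from the release notes; the helper name and exact check are ours:

```python
def needs_downsampling(width: int, height: int,
                       max_edge: int = 2576,
                       max_megapixels: float = 3.75) -> bool:
    """Return True if an image exceeds Opus 4.7's input ceiling
    and would be downsampled to fit."""
    over_edge = max(width, height) > max_edge
    over_area = (width * height) / 1_000_000 > max_megapixels
    return over_edge or over_area

# A 4K screenshot (3840x2160, ~8.3MP) still exceeds the ceiling:
print(needs_downsampling(3840, 2160))   # True
# A QHD capture (2560x1440, ~3.7MP) fits without resizing:
print(needs_downsampling(2560, 1440))   # False
```

Resizing client-side before you hit the ceiling keeps control over the resampling algorithm, rather than relying on whatever the API applies.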
## Task Budgets: What They Actually Do
Task budgets are an answer to a specific problem in production agentic systems: token spend is hard to predict. A coding agent asked to refactor a module might call ten tools or two hundred, depending on what it discovers. Without a ceiling, costs and latency can surprise you.
The task budget sets a hard token limit that the model itself is aware of. The key difference from earlier approaches is that Opus 4.7 actively accounts for the countdown in its planning — prioritizing work, deferring lower-priority steps, and producing a coherent partial result rather than abruptly stopping.
```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=8192,
    # Task budget: model sees this as its total budget for the agentic loop
    task_budget_tokens=50000,  # beta parameter
    messages=[{
        "role": "user",
        "content": "Refactor the authentication module to use the new JWT library. Tests should still pass."
    }],
    tools=[...],  # your tool definitions
)
```
Task budgets are in beta. The parameter name and exact behavior may change before general availability. Check the official Anthropic API documentation for current usage.
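While the beta parameter settles, the same discipline can be approximated client-side by summing the token counts each API response already reports and stopping the loop when a self-imposed budget runs low. A sketch of that pattern — the class, the reserve threshold, and the wrap-up rule are our own conventions, not part of the API:

```python
class TokenBudget:
    """Tracks cumulative token spend across an agentic loop."""

    def __init__(self, total: int, reserve: int = 4_000):
        self.total = total
        self.reserve = reserve   # headroom kept for a final wrap-up turn
        self.spent = 0

    def record(self, input_tokens: int, output_tokens: int) -> None:
        # In a real loop these would come from each response's usage fields.
        self.spent += input_tokens + output_tokens

    @property
    def remaining(self) -> int:
        return self.total - self.spent

    def should_wrap_up(self) -> bool:
        return self.remaining <= self.reserve

budget = TokenBudget(total=50_000)
budget.record(input_tokens=12_000, output_tokens=3_000)
print(budget.remaining)          # 35000
budget.record(input_tokens=28_000, output_tokens=4_000)
print(budget.should_wrap_up())   # True
```

The difference from the native feature is that the model itself has no visibility into a client-side budget, so it cannot plan around it — you get the cost ceiling but not the graceful wrap-up.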
## Platform Availability
Claude Opus 4.7 is available on all major platforms as of the April 16 release:
- Anthropic direct API — `claude-opus-4-7`
- Amazon Bedrock — via Bedrock's next-generation inference engine
- Google Cloud Vertex AI — global, multi-region, and regional endpoints
- Microsoft Foundry (Azure) — 1M-token context window confirmed
Pricing is unchanged from Opus 4.6: $5 per million input tokens, $25 per million output tokens. Prompt caching and batch API discounts apply under standard Anthropic pricing.
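At those rates, estimating the cost of a job is a one-liner. A quick sketch using the published list prices (caching and batch discounts deliberately ignored):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_per_m: float = 5.00,
                      output_per_m: float = 25.00) -> float:
    """Estimate cost at Opus 4.7's list prices ($5 / $25 per million tokens).
    Prompt-caching and batch API discounts are not applied here."""
    return (input_tokens / 1e6) * input_per_m + (output_tokens / 1e6) * output_per_m

# A 50k-token agentic loop, roughly 40k in / 10k out:
print(round(estimate_cost_usd(40_000, 10_000), 2))   # 0.45
```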
## Opus 4.7 vs Opus 4.6: When to Upgrade
| Use Case | Opus 4.6 | Opus 4.7 | Recommendation |
|---|---|---|---|
| Simple Q&A and chat | Sufficient | Overkill | Stay on 4.6 or use Sonnet |
| Multi-file agentic coding | ✓ | Better | Upgrade if you hit 4.6 limits |
| Vision: screenshots/diagrams | ≤1568px | ≤2576px | Upgrade if resolution matters |
| Long agentic loops with cost control | No budget feature | Task budget (beta) | Upgrade |
| Hard SWE-bench-style bug fixes | 80.8% | 87.6% | Upgrade for hardest tasks |
| Cost-sensitive batch workloads | $5/$25 /M | $5/$25 /M | Same price, no reason to stay on 4.6 |
The main reason to stay on Opus 4.6 is that your tasks already work well and you do not need the resolution or agentic improvements. Identical pricing means there is no cost penalty to upgrading, but API migration testing still takes engineering time.
## Claude Sonnet 4.6 vs Opus 4.7: The Tier Decision
Sonnet 4.6 remains the right default for most applications. It is significantly cheaper and fast enough for interactive use cases. Opus 4.7 is worth the per-token premium specifically when:
- The task involves extended multi-step reasoning over a large codebase
- Vision quality is critical and images exceed ~1.15MP
- You need cost-predictable agentic loops and task budget control is worth the overhead
- You are benchmarking against GPT-5.5 or DeepSeek V4 on hard coding tasks
For anything interactive, user-facing, or cost-sensitive at volume, Sonnet 4.6 remains the appropriate choice.
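A common way to operationalize that decision is a small routing layer that defaults to Sonnet and escalates to Opus only when a request matches the criteria above. A sketch — the parameter names and the 1.15MP threshold mirror this article's criteria, but the heuristic itself is illustrative, not an official recommendation:

```python
def pick_model(is_agentic: bool = False,
               image_megapixels: float = 0.0,
               needs_task_budget: bool = False) -> str:
    """Route to Opus 4.7 only when a request needs its specific strengths;
    Sonnet 4.6 stays the default for interactive / cost-sensitive work."""
    if is_agentic or needs_task_budget or image_megapixels > 1.15:
        return "claude-opus-4-7"
    return "claude-sonnet-4-6"

print(pick_model())                           # claude-sonnet-4-6
print(pick_model(image_megapixels=3.2))       # claude-opus-4-7
print(pick_model(is_agentic=True))            # claude-opus-4-7
```

Keeping the routing logic in one function also makes tier decisions auditable: you can log which criterion triggered the escalation and revisit the thresholds as pricing or benchmarks change.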
## Migration from Opus 4.6
The model ID change is the main migration step. Opus 4.7 is backward-compatible with existing Opus 4.6 prompts — no prompt rewrites required in most cases.
```python
# Before
model = "claude-opus-4-6"

# After
model = "claude-opus-4-7"
```
If you are using Bedrock, update your model ARN to include `claude-opus-4-7`. On Vertex AI and Foundry, update the model identifier in your deployment configuration.
One thing to watch: the task budget parameter is a new beta field. If you add it to existing integrations, test thoroughly — the graceful-wrap behavior is new and may affect output length expectations in pipelines that parse structured output from the final message.
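If a downstream pipeline parses JSON out of the final message, it is worth making that parse tolerant of a partial result, since a budget-constrained run may wrap up early. A defensive sketch — the fallback shape (`ok`/`raw` keys) is our own convention, not API behavior:

```python
import json

def parse_result(text: str) -> dict:
    """Extract a JSON object from a model's final message, tolerating
    surrounding prose and falling back cleanly when none is found."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        return {"ok": False, "raw": text}   # no parseable object present
    try:
        return {"ok": True, "data": json.loads(text[start:end + 1])}
    except json.JSONDecodeError:
        return {"ok": False, "raw": text}   # truncated or malformed payload

print(parse_result('Done. {"status": "refactored", "tests": "pass"}')["ok"])  # True
print(parse_result("Ran out of budget before producing output")["ok"])        # False
```

Treating the no-JSON case as a first-class outcome, rather than an exception, keeps the pipeline running when a loop wraps up early.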
**Verdict:** Claude Opus 4.7 is a meaningful upgrade if your workflows are running into Opus 4.6's limits on long agentic tasks, need high-resolution vision, or require predictable token spend via task budgets. At the same price point, there is no cost reason to hold back — the migration is a one-line model ID change. For everything else, Sonnet 4.6 remains the cost-effective default.
## FAQ
**Q: What is the model ID for Claude Opus 4.7?**
`claude-opus-4-7`. This is the identifier for Anthropic direct API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry. There is no separate versioned suffix like `-20251001`.
**Q: Has the pricing changed from Opus 4.6?**
No. Pricing remains $5 per million input tokens and $25 per million output tokens. Prompt caching and batch API discounts apply at standard rates.
**Q: What does the task budget parameter do?**
It sets a token ceiling for an entire agentic loop — thinking, tool calls, results, and final output combined. The model sees a countdown and wraps gracefully before hitting the limit, rather than cutting off unexpectedly. The feature is in beta as of April 2026.
**Q: Is Opus 4.7 available on Amazon Bedrock?**
Yes. Anthropic confirmed Opus 4.7 availability on Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry alongside the Anthropic direct API.
**Q: When should I use Opus 4.7 instead of Sonnet 4.6?**
Opus 4.7 is the right choice for extended agentic coding tasks, high-resolution image analysis (above 1.15MP), and workflows where you need hard token budget control. Sonnet 4.6 remains better for cost-sensitive, interactive, or high-volume applications.