
Here is a developer-focused summary of what changed in Claude Opus 4.7, released on April 16, 2026.
Note: This article is a personal summary based on publicly available information, not the official view of any company. This article does not constitute financial or investment advice.
Where Opus 4.7 Sits
Claude Opus 4.7 is Anthropic's most capable generally available model. It sits below Claude Mythos Preview on benchmarks, but Mythos Preview remains restricted to a handful of platform partners through Project Glasswing and is not available for general use.
Pricing is unchanged from Opus 4.6: $5 per million input tokens and $25 per million output tokens. The model ID is claude-opus-4-7. It is available across all Claude products, the Anthropic API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.
Benchmark Results
Key numbers from the release and third-party evaluations:
- SWE-bench Verified: 87.6% (significant improvement over Opus 4.6)
- SWE-bench Pro: 64.3% (Opus 4.6: 53.4%, GPT-5.4: 57.7%)
- CursorBench: 70% (Opus 4.6: 58%)
- MCP-Atlas (multi-tool orchestration): 77.3% (best in class)
- CharXiv visual reasoning: 82.1% (Opus 4.6: 69.1%)
- XBOW visual acuity: 98.5% (Opus 4.6: 54.5%)

Rakuten reported 3x more production tasks resolved compared with Opus 4.6. CodeRabbit noted recall improved by over 10 percent, with the model slightly faster than GPT-5.4 at xhigh effort.
New Features
High-Resolution Image Support
Opus 4.7 is the first Claude model with high-resolution image support. Maximum image resolution increased from 1,568 pixels on the long edge (about 1.15 megapixels) to 2,576 pixels (about 3.75 megapixels), which is roughly 3x the visual capacity of previous Claude models.
For computer use workflows, pixel coordinates now map 1:1 with actual screen pixels, eliminating the scale-factor math that was previously required. Document analysis benefits from the ability to read smaller text and finer details in scanned documents, slides, and diagrams.
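To make the 1:1 coordinate change concrete, here is a sketch of the scale-factor math that earlier models required and that Opus 4.7 reportedly eliminates. The function names and the exact downscaling behavior are illustrative assumptions, not a documented SDK API.

```python
def scale_coords_legacy(x, y, screen_w, screen_h, max_edge=1568):
    """Pre-4.7 pattern: screenshots larger than max_edge on the long side
    were downscaled, so model-reported coordinates had to be scaled back
    up to real screen pixels. (Illustrative, not an official helper.)"""
    long_edge = max(screen_w, screen_h)
    scale = long_edge / max_edge if long_edge > max_edge else 1.0
    return round(x * scale), round(y * scale)

def scale_coords_47(x, y, screen_w, screen_h):
    """Opus 4.7, per the article: screens up to 2,576 px on the long edge
    fit without downscaling, so coordinates pass through unchanged."""
    return x, y

# A 2560x1440 display exceeds the old 1,568 px limit but fits the new one:
legacy = scale_coords_legacy(800, 450, 2560, 1440)  # rescaling required
direct = scale_coords_47(800, 450, 2560, 1440)      # (800, 450), used as-is
```

The practical effect is that click targets no longer drift by a rounding error after the round-trip conversion.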
xhigh Effort Level
The effort parameter now has five levels: low, medium, high, xhigh, and max. The new xhigh level sits between high and max, providing deeper reasoning than high without the full cost of max.
Claude Code defaults to xhigh for all plans. Anthropic recommends starting with high or xhigh for coding and agentic use cases.
Task Budgets (Public Beta)
Task budgets let developers set a token allowance for an entire agentic loop rather than a single turn. The model sees a running countdown and uses it to prioritize work, skip low-value steps, and finish gracefully as the budget runs out. This is useful for preventing runaway costs in long-running agent sessions.
Claude Code /ultrareview Command
A new dedicated code review command that performs a multi-pass review looking for bugs, edge cases, security issues, and logic errors with more depth than a standard review pass.
Breaking API Changes
Three changes that will cause errors if not addressed:
1. Extended Thinking Budgets Removed
Setting thinking: {"type": "enabled", "budget_tokens": N} now returns a 400 error. The only supported thinking mode on Opus 4.7 is thinking: {"type": "adaptive"}. Adaptive thinking is off by default: requests with no thinking field run without thinking, so you must set it explicitly to enable it.
2. Sampling Parameters Removed
Setting temperature, top_p, or top_k to any non-default value returns a 400 error. Use prompting to guide output behavior instead.
3. Thinking Content Hidden by Default
Thinking blocks still appear in the response stream, but their content is empty unless you opt in with "display": "summarized". If your product streams reasoning to users, the new default will appear as a long pause before output begins.
Migration Code Example
```python
# Before (Opus 4.6)
model = "claude-opus-4-6"
thinking = {"type": "enabled", "budget_tokens": 8192}
temperature = 0.7

# After (Opus 4.7)
model = "claude-opus-4-7"
thinking = {"type": "adaptive"}
# Remove temperature entirely -- use prompting instead
# Increase max_tokens for headroom (the new tokenizer uses more tokens)
```
Behavior Changes
These are not API breaking changes but may require prompt adjustments:
- More literal instruction following, particularly at lower effort levels. The model will not silently generalize an instruction from one item to another
- Response length calibrates to perceived task complexity rather than defaulting to a fixed verbosity
- Fewer tool calls by default. Raise effort to increase tool usage
- More direct, opinionated tone with less validation-forward phrasing than Opus 4.6
- More regular progress updates during long agentic traces. If you added scaffolding to force interim status messages, try removing it
- Fewer subagents spawned by default. Steerable through prompting

Tokenizer Change
Opus 4.7 uses a new tokenizer that may produce roughly 1.0 to 1.35x as many tokens for the same input, depending on content type. Per-token prices are unchanged, but the same prompt may cost more in practice. Test your workloads before switching production traffic.
Cybersecurity Safeguards
Opus 4.7 includes automated safeguards that detect and block requests involving prohibited or high-risk cybersecurity uses. Cyber capabilities were deliberately reduced compared to Mythos Preview. Security professionals who want to use the model for legitimate purposes such as vulnerability research and penetration testing can apply through the Cyber Verification Program.
Who Should Migrate and When
- Teams running production coding agents: The SWE-bench gains are large enough that the upgrade likely pays for itself in reduced human review cycles. Pair with task budgets to control costs
- Teams using computer use or image-heavy workflows: The 3.75 megapixel vision support alone justifies the switch
- Simple Q&A or FAQ bots: Haiku 4.5 or Sonnet 4.6 are more cost-effective. No need to move to Opus for these workloads

The safe migration approach is to keep Opus 4.6 as a fallback for one to two weeks while validating Opus 4.7 on your production workloads in parallel.