Solomon Neas

Posted on • Originally published at solomonneas.dev

Claude Opus 4.7: Mixed Early Signal, Real API Breaks, and a Token Usage Story Anthropic Had to Address Fast


Anthropic launched Claude Opus 4.7 on April 16 at the same base price as Opus 4.6, then tucked the real story inside the docs: API breaking changes, new thinking behavior, new effort controls, beta task budgets, sharper Claude Code defaults, and a multimodal bump that looks real once you read past the launch copy.[1]

My first take on this release was probably too upbeat. After digging in more, the honest read is messier. I have not spent enough time with Opus 4.7 myself to write a hard firsthand verdict, so this post is better read as reported analysis and early signal, not a personal field report. Anthropic is clearly pitching Opus 4.7 as the better model for long-running agentic coding, and the docs do point to real changes.[2] But the early public reaction has been mixed at best. A lot of power users on X and Reddit are calling it underwhelming, more expensive in practice, or flat-out worse for their workflows, especially in Claude Code.[3]

The real picture, at least right now, is that Anthropic shipped a release with genuine technical changes, real migration consequences, and enough backlash that the token usage story became part of the launch within hours.

Anthropic Is Selling a Workflow Shift, Not Just a Better Benchmark Card

Anthropic is positioning Opus 4.7 as the recommended starting point for its hardest tasks and calls it a "step-change improvement in agentic coding" over Opus 4.6.[4] That is strong language. Whether it holds up in real use is still getting argued out in public, and I am not going to pretend I have enough seat time yet to settle that myself.

The core pitch is not benchmark chest-thumping. It is behavior under real work. Anthropic says Opus 4.7 handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back.[5] Boris Cherny's launch thread makes the same point from the operator side: auto mode, recaps, focus mode, stronger verification habits, and better effort tuning all matter because the model is being pushed toward longer, more autonomous runs.[6]

That is why this release feels different from a normal model bump. The base capability matters, sure. What stands out more is how explicit Anthropic is getting about the workflow layer around the model.

The Biggest Upgrade Might Be the Workflow Stack Around the Model

Three changes matter most here.

First, Opus 4.7 introduces a new xhigh effort level between high and max, which gives API users finer control over reasoning depth and latency on hard tasks.[7] Anthropic's effort docs say the API default is still high, but recommend starting at xhigh for coding and agentic use cases.[8] That matters because a lot of people will assume they are getting the best version of 4.7 out of the box when they are not.
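
For API users, that difference comes down to one field on the request. Here is a minimal sketch of the gap between the default and the recommended setting; the effort level names come from the docs excerpts above, but the exact payload shape and model ID are placeholders, not confirmed strings:

```python
# Sketch of choosing an effort level on a hypothetical Opus 4.7 request.
# Only the levels named in the article's docs excerpts are listed here;
# the payload shape is an assumption, not copied from any SDK.

EFFORT_LEVELS = {"high", "xhigh", "max"}

def build_request(prompt: str, effort: str = "high") -> dict:
    """Build a request body. The API default is "high"; the effort docs
    reportedly recommend "xhigh" for coding and agentic work, so you
    have to opt in explicitly to get it."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": "claude-opus-4-7",  # hypothetical model ID
        "max_tokens": 4096,
        "effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

# Out of the box you get "high", not the recommended coding setting:
default = build_request("Refactor this module.")
tuned = build_request("Refactor this module.", effort="xhigh")
```

The point of the sketch is the default: unless you set the field yourself, you are not running the configuration Anthropic recommends for hard coding tasks.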

Second, adaptive thinking replaces the old thinking budgets on Opus 4.7. Thinking is not automatically on. You have to opt in with `thinking: { type: "adaptive" }`, and once you do, interleaved thinking between tool calls comes with it.[9] That is a real behavior change, not a cosmetic rename.
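
Based on the `thinking: { type: "adaptive" }` shape quoted from the migration docs, opting in might look like the sketch below. Everything beyond that one field is my assumption:

```python
# Minimal sketch of opting into adaptive thinking on Opus 4.7.
# The {"type": "adaptive"} shape is quoted from the migration docs as
# reported in this post; the rest of the request is illustrative.

def with_adaptive_thinking(request: dict) -> dict:
    """Return a copy of the request with adaptive thinking enabled.

    Thinking is off unless you opt in. Note there is no budget_tokens
    field here: extended thinking budgets are gone on 4.7.
    """
    out = dict(request)
    out["thinking"] = {"type": "adaptive"}
    return out

base = {"model": "claude-opus-4-7", "messages": []}  # hypothetical ID
enabled = with_adaptive_thinking(base)
```

The design difference is who decides when to think: a budget caps how much the model may think per request, while adaptive mode reportedly lets the model decide whether to think at all.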

Third, task budgets look like one of the more practical additions in this whole release. Anthropic frames them as an advisory budget across the full agentic loop, not a hard cap like max_tokens.[10] If that beta feature works the way the docs suggest, it gives teams a more realistic way to control cost and runtime on longer autonomous jobs without forcing the model to slam into a wall mid-task.
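
Since the beta parameter names are not spelled out here, the clearest way to show the idea is a client-side sketch of what "advisory" means in practice, as opposed to a hard max_tokens wall. All names below are illustrative, not the actual API:

```python
# Client-side model of the advisory-budget idea: a soft target the
# agentic loop consults to steer toward a clean finish, rather than a
# hard cap that kills a run mid-task. This sketches the behavior the
# docs describe, not Anthropic's actual beta API.

class TaskBudget:
    """Advisory token budget across a multi-step agentic run."""

    def __init__(self, budget_tokens: int):
        self.budget = budget_tokens
        self.used = 0

    def record(self, tokens: int) -> None:
        """Track tokens spent by one step of the loop."""
        self.used += tokens

    @property
    def remaining(self) -> int:
        return max(self.budget - self.used, 0)

    def should_wrap_up(self) -> bool:
        # Advisory behavior: once ~80% is spent, steer the agent to
        # summarize and land the task instead of starting new subtasks.
        return self.used >= 0.8 * self.budget

budget = TaskBudget(100_000)
budget.record(85_000)
# should_wrap_up() is now True; remaining budget is 15,000 tokens.
```

That is the practical difference from max_tokens: the run degrades gracefully toward a summary instead of being truncated mid-thought when the limit hits.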

Read those three changes together and the direction is obvious: Anthropic wants people to run longer jobs, give the model room to think, and manage the run at the task level instead of babysitting each individual request.

Claude Code Users Get the Clearest Upgrade Path

The Claude Code side of the release is where the product strategy becomes easiest to read.

On the Anthropic API, the opus alias now resolves to Opus 4.7. On Bedrock, Vertex, and Foundry, that alias does not mean the same thing yet, so you need to pin explicitly if you want the new model.[10] Opus 4.7 also requires Claude Code v2.1.111 or later.[10]
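
If you run across multiple providers, the safe pattern is a pinned lookup table rather than trusting the alias. All model ID strings below are hypothetical placeholders to illustrate the shape; check each provider's console for the real ones:

```python
# Pin exact model IDs per provider instead of relying on the `opus`
# alias, which resolves to 4.7 on the Anthropic API but not yet on
# Bedrock, Vertex, or Foundry per the docs. Every ID string here is a
# hypothetical placeholder, not a confirmed identifier.

PINNED_OPUS_47 = {
    "anthropic": "claude-opus-4-7-20260416",    # hypothetical dated ID
    "bedrock": "anthropic.claude-opus-4-7-v1",  # hypothetical
    "vertex": "claude-opus-4-7",                # hypothetical
}

def resolve_model(provider: str) -> str:
    """Fail loudly if a provider has no pinned ID, instead of silently
    falling back to whatever the alias happens to mean there."""
    try:
        return PINNED_OPUS_47[provider]
    except KeyError:
        raise ValueError(f"no pinned Opus 4.7 ID for provider {provider!r}")
```

Failing loudly on an unpinned provider is the whole point: a silent alias fallback is exactly how a team ends up running 4.6 on Bedrock while believing it shipped 4.7.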

More interesting than the versioning detail is the default behavior. Anthropic's Claude Code docs say Opus 4.7 uses xhigh effort by default in Claude Code.[10] Pair that with auto mode for Max users, the new /ultrareview workflow, and Boris's emphasis on focus mode, recaps, and verification, and you get a pretty clear picture of where Claude Code is going.[5][11]

It is moving away from short interactive back-and-forth and toward "hand this thing a chunk of work and come back to a recap." That will be great for people who actually want an agent. It will also expose every weak habit people built around constant supervision, vague prompts, and zero verification.

The API Story Is More Interesting Than the Launch Post

This is the part people should read twice before swapping production workloads over.

Anthropic's release notes explicitly say Opus 4.7 includes API breaking changes relative to Opus 4.6.[1] The migration guide spells out the important ones: extended thinking budgets are gone, adaptive thinking is the supported model-specific path, non-default sampling parameters now error, and thinking content is omitted by default unless you opt back into it.[11]

That is not catastrophic, but it is enough to break wrappers, SDK assumptions, eval harnesses, and whatever cursed internal scripts people built at 2 a.m. six weeks ago.
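
A cheap way to catch those assumptions before they break production is a pre-migration lint over your request payloads. The checks below mirror the migration-guide items as reported above; the parameter names are the standard Messages API sampling fields, and the list should be treated as a starting point, not exhaustive:

```python
# Hedged pre-migration lint for 4.6-era request assumptions that the
# migration guide reportedly breaks on Opus 4.7: thinking budgets are
# removed, and non-default sampling parameters now error. Presence of a
# sampling field is flagged conservatively here; tune to your defaults.

def audit_request_for_47(request: dict) -> list[str]:
    problems = []
    thinking = request.get("thinking", {})
    if thinking.get("type") == "enabled" or "budget_tokens" in thinking:
        problems.append(
            "extended thinking budgets are removed; opt into adaptive thinking"
        )
    for param in ("temperature", "top_p", "top_k"):
        if param in request:
            problems.append(f"non-default sampling parameter {param!r} now errors")
    return problems

# A typical 4.6-era payload trips both checks:
legacy = {
    "model": "claude-opus-4-6",
    "temperature": 0.2,
    "thinking": {"type": "enabled", "budget_tokens": 8_000},
}
```

Running a lint like this over eval harnesses and internal scripts is much cheaper than discovering the same errors one 4xx response at a time after the swap.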

There is also a cost wrinkle here that deserves way more attention than the launch copy gave it. Anthropic kept the base Opus 4.7 price at $5 input and $25 output per million tokens, but both the migration guide and pricing docs say the new tokenizer may use roughly 1x to 1.35x as many tokens for the same text.[11][12] Same sticker price, potentially higher real-world usage.
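
The arithmetic is worth doing explicitly. A quick projection using the published $5/$25 per-million prices and the reported 1x to 1.35x tokenizer range; the workload numbers are invented for illustration:

```python
# Back-of-the-envelope cost projection for the tokenizer change: same
# sticker price, but the same text may tokenize to 1x-1.35x as many
# tokens per the migration and pricing docs. Workload figures below
# are made up to show the math.

INPUT_PRICE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 25.00 / 1_000_000  # dollars per output token

def monthly_cost(input_tokens: int, output_tokens: int,
                 inflation: float = 1.0) -> float:
    """Projected spend after scaling 4.6-era token counts by the
    tokenizer inflation factor (1.0 to ~1.35 per the docs)."""
    return (input_tokens * inflation * INPUT_PRICE
            + output_tokens * inflation * OUTPUT_PRICE)

# A workload measured at 20M input / 8M output tokens on 4.6:
baseline = monthly_cost(20_000_000, 8_000_000)          # $300.00
worst_case = monthly_cost(20_000_000, 8_000_000, 1.35)  # $405.00
```

That 35% gap is the part cost dashboards built on per-million price alone will miss, because the price-per-million line item never moved.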

Anthropic's response matters here. Boris Cherny said Anthropic increased rate limits for all subscribers "to make up for" the higher token usage, and separately said the company tuned limits so users would get the same amount of usage with xhigh.[13] That public acknowledgment tells you Anthropic knew this would land hard.

It still does not settle the practical question. Higher rate limits help subscribers, but they do not erase the operational effect of larger token counts, especially for Claude Code sessions, long contexts, or teams with cost controls built around older token behavior.

The Vision Upgrade Still Looks Legit

Anthropic says Opus 4.7 can see images at more than three times the resolution of earlier Claude models and should produce better interfaces, slides, and docs as a result.[14] Normally I would roll my eyes at that kind of line. In this case, the docs actually give it teeth.

The vision docs say Opus 4.7 supports images up to 2576 pixels on the long edge and 4784 image tokens, versus 1568 and 1568 for prior models.[15] That is a real jump. If you use models for screenshot review, document parsing, UI QA, or anything where detail gets lost in downscaling, this is one of the clearest improvements in the whole release.

It also has cost implications. Anthropic's own example shows a 2000x1500 image landing around 4000 image tokens on Opus 4.7.[15] Better vision is nice. Better vision that quietly burns more tokens is the part teams discover later.
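
You can reproduce that example with the approximation Anthropic's vision docs use, tokens ≈ (width × height) / 750, combined with the long-edge and token caps quoted above. The function below is my reconstruction of that math, not code from the docs:

```python
# Rough reconstruction of the image-token math: tokens are roughly
# (width * height) / 750 after downscaling so the long edge fits the
# model's limit, capped at the model's max image tokens. The new caps
# (2576 px / 4784 tokens) and old caps (1568 / 1568) come from the
# vision docs as quoted in this post.

def image_tokens(width: int, height: int,
                 max_edge: int = 2576, max_tokens: int = 4784) -> int:
    """Estimate image token cost after provider-side downscaling."""
    scale = min(1.0, max_edge / max(width, height))
    w, h = width * scale, height * scale
    return min(int(w * h / 750), max_tokens)

# The docs' 2000x1500 example on Opus 4.7 vs the prior-model caps:
new_cost = image_tokens(2000, 1500)                                # 4000
old_cost = image_tokens(2000, 1500, max_edge=1568, max_tokens=1568)  # 1568
```

So the same screenshot that cost at most 1568 tokens on older models now costs roughly 2.5x that on Opus 4.7, because less detail is thrown away before the model sees it.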

The Safety Story Got Weird Fast

The system card is worth reading because it says two things at once.

Anthropic calls Opus 4.7 its "most capable general-access model to date," then immediately says it does not advance the company's overall capability frontier because Mythos Preview remains stronger.[16] That is a weird but useful distinction. It tells you 4.7 is the best generally available version of Claude, but not the strongest thing Anthropic has internally.

The system card also says Opus 4.7 ships with new classifier-based cyber safeguards for prohibited and high-risk cyber use, and Anthropic's public launch post ties legitimate security research access to the Cyber Verification Program.[16][17] So this is not just a model release. It is also a deployment and policy story.

But the rollout got messy almost immediately. A bunch of users reported that Opus 4.7 was suddenly treating ordinary code and file reads like malware or prompt injection attempts. Anthropic staffer Alex Albert later said this was a bug on Anthropic's side, not the model "being cautious": older Claude builds were applying a stale safety prompt that Opus 4.7 did not need, which led to the false malware warnings. His fix was simple: update Claude or relaunch the app.[18]

That distinction matters. If the issue was a stale prompt or integration-layer bug, then this was not really evidence that Opus 4.7 itself is uniquely paranoid. It was evidence that model upgrades can break in the harness around the model, and that those breakages can look exactly like a model regression when you are the person getting blocked.

That matters because Anthropic is clearly trying to push capability up while tightening the boundary around who gets to use the sharper edges. Day-one safety bugs like this poison trust fast, even when the root cause lives outside the base model.

The Early Public Reaction Is the Real Story Right Now

I have not used Opus 4.7 heavily enough yet to tell you, from personal battle scars, that it is obviously better than 4.6. And the public reaction over the first day does not support pretending otherwise. Reddit threads in r/ClaudeAI, r/Anthropic, and r/ClaudeCode are full of people calling it a regression, saying it burns through usage limits too quickly, or saying the gains are not obvious enough to justify the cost.[3][14]

That does not mean the model is bad. It does mean the rollout landed in a skeptical environment, and Anthropic's own staff had to publicly defend adaptive thinking, acknowledge higher token use, increase subscriber limits, and explain that at least one wave of malware-style warnings came from a stale safety prompt bug in older Claude builds.[13][15][18]

The fair read, at least this early, is probably this: Opus 4.7 may be better on certain hard coding, vision, and long-horizon tasks, but the upgrade does not look clean or universally felt. If you have not noticed a huge difference yet, that is not some failure to appreciate the model. That seems like a pretty normal reaction right now.

What Actually Matters If You Build With This Stuff

If you build products, agents, or internal tooling on top of Claude, the real checklist is a little harsher:

  • Audit any 4.6-specific thinking and sampling assumptions before migrating.[11]
  • Decide whether your default should stay at high or move to xhigh for hard tasks.[8]
  • Test task budgets before you trust them for cost control in production.[10]
  • Pin exact model IDs across providers instead of assuming opus means the same thing everywhere.[16]
  • Re-check your image-heavy workloads because better vision can still mean a bigger bill.[12][17]
  • Do not treat unchanged price-per-million as proof that real usage costs stayed flat.[11][12][13]

That is the divide I keep coming back to. Opus 4.7 might be strong. The teams that get the most out of it are going to be the ones that treat it like a systems change, not just a model swap.

The Bottom Line

I still think Opus 4.7 is a real release. Not fake. Not just a marketing rename. There are too many actual platform and behavior changes for that.

What I do not think, at least not yet, is that the evidence supports writing as if Anthropic obviously nailed it, or as if I have personally validated the upside in heavy daily use.

The safer read is that Anthropic shipped a technically important release with mixed early reception. The model may be better on hard agentic coding, vision, and long-horizon tasks. The rollout also brought token-usage anxiety, migration friction, public skepticism, and enough backlash that Anthropic had to raise subscriber limits almost immediately.[13]

So this post is really two things at once: a news-and-docs roundup, and a cautious read on the early reaction.

Opus 4.7 might prove itself over the next week or two. It might also end up remembered as a release where the docs and benchmark story were cleaner than the first wave of user experience. Right now, pretending certainty would be bullshit.

The interesting part is not whether Anthropic says 4.7 is better. Of course they do. The interesting part is whether builders keep reaching for it once the novelty wears off.

Originally published at solomonneas.dev/blog/claude-opus-47-release. Licensed under CC BY-NC-ND 4.0 - attribution required, no commercial use, no derivatives.

Notes


  1. Anthropic, "Claude Platform," in "Release Notes Overview," Claude API Docs, accessed April 17, 2026, https://platform.claude.com/docs/en/release-notes/overview

  2. Anthropic, "Claude Opus 4.7," Anthropic News, accessed April 16, 2026, https://www.anthropic.com/news/claude-opus-4-7

  3. Henry Chandonnet, "The Claude-lash Is Here: Opus 4.7 Is Burning Through Tokens, and Some People's Patience," Business Insider, April 17, 2026, https://www.businessinsider.com/anthropic-claude-opus-4-7-backlash-tokens-2026-4; Reddit post, "Opus 4.7 Released!," r/ClaudeAI, accessed April 17, 2026, https://www.reddit.com/r/ClaudeAI/comments/1sn585s/opus_47_released/; Reddit post, "Claude Code tip: 10 seconds fix to avoid the Opus 4.7 token burn," r/ClaudeAI, accessed April 17, 2026, https://www.reddit.com/r/ClaudeAI/comments/1snv4yq/claude_code_tip_10_seconds_fix_to_avoid_the_opus/

  4. Anthropic, "Models Overview," Claude API Docs, accessed April 17, 2026, https://platform.claude.com/docs/en/about-claude/models/overview

  5. Claude (@​claudeai), "Introducing Claude Opus 4.7, our most capable Opus model yet. It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back. You can hand off your hardest work with less supervision," X, April 16, 2026, https://x.com/claudeai/status/2044785261393977612

  6. Boris Cherny (@​bcherny), "Dogfooding Opus 4.7 the last few weeks, I've been feeling incredibly productive. Sharing a few tips to get more out of 4.7 🧵," X, April 16, 2026, https://x.com/bcherny/status/2044847848035156457; Boris Cherny (@​bcherny), "1/ Auto mode = no more permission prompts," X, April 16, 2026, https://x.com/bcherny/status/2044847849662505288; Boris Cherny (@​bcherny), "3/ Recaps," X, April 16, 2026, https://x.com/bcherny/status/2044847853030580247; Boris Cherny (@​bcherny), "4/ Focus mode," X, April 16, 2026, https://x.com/bcherny/status/2044847855006024147; Boris Cherny (@​bcherny), "6/ Give Claude a way to verify its work," X, April 16, 2026, https://x.com/bcherny/status/2044847858634064115

  7. Claude (@​claudeai), "On the API, a new xhigh effort level between high and max gives you finer control over reasoning and latency on hard problems. Task budgets (beta) help Claude prioritize work and manage costs across longer runs," X, April 16, 2026, https://x.com/claudeai/status/2044785264313221470

  8. Anthropic, "Effort," Claude API Docs, accessed April 17, 2026, https://platform.claude.com/docs/en/build-with-claude/effort

  9. Anthropic, "Adaptive Thinking," Claude API Docs, accessed April 17, 2026, https://platform.claude.com/docs/en/build-with-claude/adaptive-thinking

  10. Anthropic, "Task Budgets," Claude API Docs, accessed April 17, 2026, https://platform.claude.com/docs/en/build-with-claude/task-budgets

  11. Anthropic, "Migrating to Claude Opus 4.7," in "Migration Guide," Claude API Docs, accessed April 16, 2026, https://platform.claude.com/docs/en/about-claude/models/migration-guide#migrating-to-claude-opus-4-7

  12. Anthropic, "Pricing," Claude API Docs, accessed April 17, 2026, https://platform.claude.com/docs/en/about-claude/pricing

  13. Boris Cherny (@​bcherny), "Opus 4.7 uses more thinking tokens, so we've increased rate limits for all subscribers to make up for it. Enjoy!" X, April 16, 2026, https://x.com/bcherny/status/2044839936235553167; Boris Cherny (@​bcherny), "@​mark_k @​AnthropicAI Not accurate. Adaptive thinking lets the model decide when to think, which performs better. Opus 4.7 also uses more thinking tokens on average than 4.6, which is why we have increased rate limits for all subscribers to make up for it," X, April 16, 2026, https://x.com/bcherny/status/2044836750066151666; Boris Cherny (@​bcherny), "@​a_lamparelli We've tuned rate limits to give you the same amount of usage with xhigh," X, April 16, 2026, https://x.com/bcherny/status/2044805730138804519

  14. Reddit post, "Opus 4.7 Released!," r/ClaudeAI, accessed April 17, 2026, https://www.reddit.com/r/ClaudeAI/comments/1sn585s/opus_47_released/; Reddit post, "I have tested Opus 4.7 and it is worse compared to Opus 4.6," r/Anthropic, accessed April 17, 2026, https://www.reddit.com/r/Anthropic/comments/1snijmr/i_have_tested_opus_47_and_it_is_worse_compared_to/; Reddit post, "Just use Sonnet 4.6 and stay away from Opus 4.7," r/ClaudeCode, accessed April 17, 2026, https://www.reddit.com/r/ClaudeCode/comments/1snwk9v/just_use_sonnet_46_and_stay_away_from_opus_47/

  15. Henry Chandonnet, "The Claude-lash Is Here"; Alex Albert (@​alexalbert_), "A lot of bugs that folks may have hit yesterday when first trying Opus 4.7 are now fixed. Thanks for bearing with us," X, April 17, 2026, https://x.com/alexalbert_/status/2045159041283064095. 

  16. Anthropic, "Model Configuration," Claude Code Docs, accessed April 17, 2026, https://code.claude.com/docs/en/model-config

  17. Anthropic, "Vision," Claude API Docs, accessed April 17, 2026, https://platform.claude.com/docs/en/build-with-claude/vision

  18. Alex Albert (@​alexalbert_), "Some of you ran into Opus 4.7 refusing normal code edits with \"this might be malware\" warnings. That was a bug on our side, not the model being cautious. Older builds applied a stale safety prompt that Opus 4.7 doesn't need. Run claude update or relaunch the app," X, April 17, 2026, https://x.com/alexalbert_/status/2045238786339299431. 
