A community reverse-engineered the failure modes, and the fixes are surprisingly straightforward.
The Problem
In February and March 2026, Anthropic shipped three simultaneous changes to Claude Code that together created a perfect storm for anyone doing serious, multi-file engineering work:
- Opus 4.6 + Adaptive Thinking (Feb 9): The model now decides how long to think per turn, rather than using a fixed reasoning budget.
- effort=85 (medium) became the default (Mar 3): Without fanfare for many users, the default effort dropped from high to medium.
- redact-thinking-2026-02-12 header (Feb 12): A UI-only change that hides raw thinking from the interface.
Individually, each of these is defensible. Together, they created a system where Claude Code exhibits what the community has dubbed "rush to completion" behavior:
- Fabricating API versions instead of checking the docs
- Skipping hard problems and declaring them solved when they're not
- Hallucinating commit SHAs, GUIDs, and package names rather than looking things up
- Answering confidently from training data instead of searching online
The smoking gun came from transcript analysis. Turns where Claude fabricated had exactly zero chain-of-thought reasoning emitted. The thinking that should have happened — the verification, the docs lookup, the "wait, I need to check this" moment — simply wasn't there.
Meanwhile, the actual thinking was still happening inside the model; it was just hidden by the redact header. It wasn't stored in transcripts either, so when users analyzed their own session logs to figure out what went wrong, those critical thinking turns looked like blank space.
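To make the transcript finding concrete, here is a minimal sketch of the kind of scan community members ran. It assumes a hypothetical JSONL session format with `role` and `thinking` keys; real Claude Code logs may use a different schema entirely.

```python
import json

def find_zero_thinking_turns(transcript_lines):
    """Return indices of assistant turns that emitted no thinking text.

    Assumes one JSON object per line with hypothetical 'role' and
    'thinking' keys -- adjust for your actual session log schema.
    """
    suspect = []
    for i, line in enumerate(transcript_lines):
        turn = json.loads(line)
        if turn.get("role") == "assistant" and not turn.get("thinking"):
            suspect.append(i)
    return suspect

lines = [
    '{"role": "user", "content": "bump the stripe SDK"}',
    '{"role": "assistant", "thinking": "", "content": "Use stripe 99.1.0"}',
    '{"role": "assistant", "thinking": "checked docs", "content": "Use stripe 12.4.0"}',
]
print(find_zero_thinking_turns(lines))  # → [1]
```

Turns flagged this way are exactly the ones to review for fabricated versions, SHAs, or package names.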
The Root Causes (Community Consensus)
After hundreds of HN comments and a now-closed GitHub issue with 769 points, the community converged on three contributing factors:
1. Adaptive Thinking Under-Allocating Reasoning
When the model decides its own thinking budget per turn, it sometimes whiffs. Particularly on turns involving unfamiliar APIs or tricky edge cases, it decides "this is simple, a quick answer will do" — and then produces a confident-wrong answer. The fix, per Anthropic's Boris from the Claude Code team:
CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1 forces a fixed reasoning budget instead of letting the model decide per-turn.
This is the single highest-leverage workaround for fabrications.
2. The Effort=85 Default Slipped Past People
The medium effort rollout came with a dialog, but a lot of users missed it — especially those who had Claude Code auto-launching on startup or who work across multiple projects. The result: an entire day of degraded output before many developers realized what had changed.
On effort=high, the model thinks longer and produces significantly better results. On effort=max, thinking goes even further — though the community notes that max can occasionally tip into "desperate" behavior where it over-explains or second-guesses itself.
3. The System Prompt Has a Simplicity Bias
A leaked system prompt snippet showed approximately a 5:1 ratio favoring "simple" solutions over best-practice implementations. This wasn't a bug in the traditional sense — it was an explicit design choice — but it means Claude actively avoids harder, more correct implementations when a quick-and-wrong answer would satisfy the stated constraints.
The community gist (now archived) attempted to patch this by removing the ratio. Results were mixed, but several developers reported noticeable improvements in code quality after applying it.
The Solutions
Here's what actually works, ranked by impact:
Fix 1: Disable Adaptive Thinking (Highest Impact)
Add this to your ~/.claude/settings.json or project .claude/settings.json:
```json
{
  "env": {
    "CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING": "1"
  }
}
```
Or set it in your shell profile:
```bash
export CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1
```
This forces Claude to use a fixed reasoning budget on every turn, eliminating the under-allocation that causes fabrications. If you're seeing confident-wrong answers on API details, SHAs, or package names, this is your first fix.
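If you manage this setting across several projects, a small helper can merge the flag into an existing settings.json without clobbering other keys. This is a sketch: the `set_env_flag` helper is illustrative, not part of Claude Code, and you should back up the real file before editing it.

```python
import json
import os
import tempfile

def set_env_flag(settings_path, key, value):
    """Merge one env flag into a settings.json, preserving existing keys."""
    try:
        with open(settings_path) as f:
            settings = json.load(f)
    except FileNotFoundError:
        settings = {}  # no settings file yet; start fresh
    settings.setdefault("env", {})[key] = value
    with open(settings_path, "w") as f:
        json.dump(settings, f, indent=2)
    return settings

# Demo against a throwaway path, not your real ~/.claude/settings.json
path = os.path.join(tempfile.mkdtemp(), "settings.json")
print(set_env_flag(path, "CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING", "1"))
```

Because the helper round-trips the file through `json.load`, any keys you already have (model, permissions, other env vars) survive the edit.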
Fix 2: Restore Thinking Visibility
The redact-thinking-2026-02-12 header hides thinking from the UI for latency reasons, but developers who rely on seeing Claude's reasoning direction found this crippling. Add to your settings.json:
```json
{
  "showThinkingSummaries": true
}
```
With this enabled, you can see which direction Claude's thinking is going before it commits to an answer. If you see it heading toward a fabrication, you can interrupt and redirect.
Fix 3: Set Effort to High (and Know When to Use Max)
In ~/.claude/settings.json:
```json
{
  "env": {
    "CLAUDE_CODE_EFFORT_LEVEL": "high"
  }
}
```
Or use the slash command interactively:
```
/effort high
```
For single high-stakes turns where you need maximum reasoning:
```
Use ULTRATHINK for this problem
```
Or:
```
/effort max
```
⚠️ Note on max: The community reports that max effort can degrade into "desperate" behavior — over-explaining, second-guessing, or looping. Use it selectively for genuinely hard problems, not as a default.
Fix 4: Patch the System Prompt Simplicity Bias
The community-developed system prompt patch targets the 5:1 simplicity ratio. The exact patch varies as Anthropic updates the prompt, but the principle is the same: override the bias toward quick solutions.
Create or edit your project's .claude/CLAUDE.md:
```markdown
# Override: Best Practice Mode

When implementing solutions, prioritize correctness and maintainability
over brevity. If a simple solution is wrong or incomplete, implement the
correct solution even if it requires more code. Do not optimize for
lines-of-code or quick resolution at the expense of correctness.

Specifically:
- Always verify API versions and package names against documentation
- If unsure, search or look it up rather than guessing
- Flag known limitations rather than hiding them
- Prefer explicit over implicit
```
This won't fully override a baked-in system prompt, but it provides a layer of guidance that many developers report helps.
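If you apply the override across many repos, a small idempotent helper keeps it from being appended twice. A sketch: the `ensure_override` helper and the shortened override text below are illustrative, not the community patch itself.

```python
import tempfile
from pathlib import Path

# Abbreviated stand-in for the full override text shown above
OVERRIDE = """# Override: Best Practice Mode
Prioritize correctness and maintainability over brevity. Verify API
versions and package names against documentation; flag limitations.
"""

def ensure_override(claude_md: Path) -> bool:
    """Append the override to CLAUDE.md unless it is already present.

    Returns True if the file was modified, False if it was a no-op.
    """
    existing = claude_md.read_text() if claude_md.exists() else ""
    if OVERRIDE.splitlines()[0] in existing:
        return False  # heading already there; don't duplicate
    claude_md.write_text(existing + ("\n" if existing else "") + OVERRIDE)
    return True

# Demo against a throwaway directory, not a real project
md = Path(tempfile.mkdtemp()) / "CLAUDE.md"
print(ensure_override(md), ensure_override(md))  # → True False
```

The heading check makes repeated runs safe, so you can wire this into a bootstrap script without growing CLAUDE.md on every invocation.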
Fix 5: Change Your Debugging Posture
The hardest adaptation is mental. Previously, when Claude gave a wrong answer, you'd correct the answer. Now, when Claude "gives up" on a turn, it often means adaptive thinking under-allocated — the model decided it didn't need to think hard and produced a confident fabrication.
Instead of letting it proceed to a confident-wrong answer:
- Interrupt early — If Claude starts heading in the wrong direction, stop it with a more constrained sub-problem
- Ask it to verify — "Check the Stripe API docs for the current version before proceeding"
- Watch for the pattern — Fabrications often happen on API details, SHAs, GUIDs, package names, and version numbers
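To "watch for the pattern" systematically rather than by eye, you can scan model output for identifier-shaped tokens before trusting them. A purely illustrative sketch — the regexes below are assumptions to tune for your own stack, not an official heuristic:

```python
import re

# Identifier-like tokens worth double-checking before trusting
SUSPECT_PATTERNS = {
    "commit_sha": re.compile(r"\b[0-9a-f]{7,40}\b"),
    "semver": re.compile(r"\b\d+\.\d+\.\d+\b"),
    "guid": re.compile(
        r"\b[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
        r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\b"
    ),
}

def flag_unverified_ids(text):
    """Return (kind, token) pairs that look like SHAs, versions, or GUIDs.

    Each hit is a candidate to verify against git or the docs, not
    proof of fabrication.
    """
    hits = []
    for kind, pattern in SUSPECT_PATTERNS.items():
        hits.extend((kind, match) for match in pattern.findall(text))
    return hits

print(flag_unverified_ids("Pinned stripe 12.4.0 at commit deadbeef1"))
```

Anything flagged is exactly the class of detail the transcript analysis showed being fabricated on zero-thinking turns, so a quick `git show` or docs lookup on each hit is cheap insurance.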
The Convergence Cliff (The Problem Before the Problem)
Multiple experienced developers on the thread noted a phenomenon they've started calling the convergence cliff: once an AI-generated codebase reaches a certain size and complexity, it enters a state where "fixing one bug causes another." No agent — Claude Code, Codex, Gemini, whatever — can salvage it.
The implication is uncomfortable but important:
Invest in architectural guardrails (type systems, linting, design docs, thorough testing) before your codebase crosses that line — not after.
Once you're over the cliff, you've lost the game. Claude Code's recent issues are a symptom of a deeper problem: we're all still figuring out where the boundaries are for AI-assisted engineering, and those boundaries are moving with every model update.
What Anthropic Said
Boris from the Claude Code team acknowledged the issues directly on the HN thread. Key takeaways from their response:
- The redact-thinking header is UI-only and does not affect thinking quality or budgets
- CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1 is the interim workaround while they investigate adaptive thinking under-allocation with the model team
- Teams and Enterprise users may get high effort as a future default
- The team is actively investigating fabrications even on high effort (transcripts show fabrications on effort=high turns, though less frequently)
The GitHub issue was closed as "addressed," but many users felt the root cause — the adaptive thinking / RLHF-driven avoidance behavior — wasn't fully acknowledged. The investigation appears ongoing.
Summary
| Issue | Fix | Effort |
|---|---|---|
| Fabrications on API/version details | `CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1` | Low |
| Can't see Claude's thinking direction | `showThinkingSummaries: true` | Low |
| Effort defaulted to medium (85) | `CLAUDE_CODE_EFFORT_LEVEL=high` | Low |
| Simplicity bias in system prompt | Add CLAUDE.md with correctness-first guidance | Medium |
| Rush-to-completion on hard problems | Use `/effort max` or ULTRATHINK selectively | Low |
If you're doing complex, multi-file engineering work with Claude Code and you've noticed a quality decline since February 2026, the first two fixes above will likely recover most of what you've lost. The rest are incremental.
The bigger lesson, though, is one the community keeps circling back to: these tools are still beta-grade infrastructure, and the defaults change out from under you. Audit your settings. Pin your configurations. Read the release notes. Or watch the HN threads — the power users will tell you what's broken before the changelogs do.
This article was researched from the HN discussion at news.ycombinator.com/item?id=47660925 and represents community findings, not official Anthropic documentation.