BCG just proved your brain is frying. But that's not the real problem.
In March 2026, researchers from Boston Consulting Group and UC Riverside published a study in Harvard Business Review that gave a name to something many of us have been feeling.
"AI Brain Fry" — mental fatigue from excessive use or oversight of AI tools beyond one's cognitive capacity.
They surveyed 1,488 full-time US workers, and 14% reported symptoms: mental fog, slower decision-making, headaches. Productivity dropped when people used more than three AI tools simultaneously. CNN, Fortune, Axios, CBS News — every major outlet covered it.
The finding is real. I've felt it. You probably have too.
But I've been working with AI for over three years now, and I think what BCG found is only the surface. They identified a symptom. The structural cause runs much deeper.
What I actually experience is not fatigue — it's decision evaporation
Some context about my situation. I'm a solo developer at a non-tech company. I built the IT department almost single-handedly. I don't write code myself — I describe what I need, AI implements it, I review and deploy. I have multiple production systems running this way. No teammates to review my work. No one who remembers why I made the decisions I made.
One morning, a change request came in for a system I'd built with AI three months earlier. I opened the codebase. The code was readable. But I couldn't remember why I'd chosen this architecture.
There had been two approaches — let's call them A and B. I'd picked A. Why? What tradeoff had I weighed?
I searched through my chat history. Found the conversation. It said "Implemented using approach A." That was it. The reasoning — the back-and-forth about why A over B — was scattered across other chats, or had only existed in my head and was never recorded anywhere.
This isn't fatigue. This is decision evaporation.
BCG described the tip of the iceberg. What I'm pointing at is below the waterline: the silent collapse of decision consistency across projects and sessions when working with AI.
I'm calling this Decision Consistency Collapse.
Four ways your decisions are collapsing
If BCG's AI Brain Fry is a study of how your brain gets tired, Decision Consistency Collapse is a study of how your decisions break.
It shows up in four specific ways:
1. Decision volatility
Last month, you evaluated technology X and rejected it for specific reasons. This month, in a different project, you're about to adopt the same technology because those reasons aren't recorded anywhere structured. Your past decision and its rationale evaporated the moment you closed the chat window.
2. Context fragmentation
Every AI session starts fresh. What you learned in Project A doesn't carry over to Project B's AI session. Claude's memory, ChatGPT's memory — these are conversational-level recall systems, not structured repositories for "how does the technical decision in Project A constrain the design of Project B."
The result: you redo the same analysis from scratch. You repeat the same mistakes in different projects.
3. Design philosophy drift
AI answers what you ask. It doesn't volunteer "here's why you shouldn't do that" unless prompted. When you're running multiple projects in parallel, each project's design philosophy drifts in a different direction. Tech stack choices, error handling patterns, security approaches — without a unified standard, AI proposes whatever seems locally optimal for each project, with no cross-project coherence.
4. Structural fatigue amplification
Here's where it connects back to BCG's finding. The "decision fatigue" they measured isn't purely a cognitive load problem. Part of what's exhausting you is the absence of a system for referencing past decisions, forcing you to reason from zero on questions you've already answered.
The second time you face a question, it should take five seconds: "I decided X last time, here's why." Without that record, you spend thirty minutes re-deriving the same conclusion. BCG is measuring the fatigue from this structural inefficiency, not just the raw cognitive load of AI interaction.
Why nothing that exists today solves this
Chat history is not a decision record. Extracting "what was the final decision and why" from a 100-message thread is archaeology.
ADRs don't get written. Architecture Decision Records are a great idea in theory. In practice, when AI generates code at the speed it does, nobody stops to write a formal ADR. And even if you do, there's no mechanism for cross-project lookup.
AI memory features serve a different purpose. Claude's memory and ChatGPT's memory store user preferences and basic facts. They're not designed to manage "the decision I made in Project A about authentication should constrain how I approach auth in Project B."
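To make the gap concrete, here is a sketch of what a structured, cross-project decision record could look like. This is purely illustrative — there's no existing standard here, and every field name is my own assumption — but it shows the difference from a chat log: the outcome, the rejected alternative, and the rationale are explicit, machine-readable fields that a future AI session (or another project) can load directly.

```python
import json

# Hypothetical decision record (all field names are assumptions, not a
# published schema). Unlike a 100-message chat thread, the final decision,
# the rejected alternative, and the "why" are first-class fields.
decision = {
    "id": "2026-03-auth-01",
    "project": "project-a",
    "topic": "authentication",
    "decided": "approach A (session tokens)",
    "rejected": ["approach B (JWT)"],
    "rationale": "Simpler revocation; no key-rotation overhead for a solo operator",
    "constrains": ["project-b"],  # projects that should honor this decision
}

# Serialized, this is something an AI session can be handed as context.
print(json.dumps(decision, indent=2))
```

A record like this takes thirty seconds to write at decision time — versus thirty minutes of archaeology three months later.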
This hits solo developers hardest
Decision Consistency Collapse is worse for solo developers managing multiple AI-assisted projects than it is for teams.
In a team, someone in code review says "didn't we try this before?" That's your safety net. When you're solo, that net doesn't exist. AI doesn't remember your past decisions. You don't remember them either. No one stops you as contradictory decisions silently stack up.
BCG surveyed 1,488 employees at large companies. But the people this problem hits hardest weren't in that survey — they're the ones running everything alone.
I'm building a structural solution

Decision Consistency Collapse is not a memory problem. It's an infrastructure problem — there is no structure for preserving and referencing the decisions that emerge from human-AI collaboration.
BCG made the problem visible. The question is: where's the fix?
I'm currently designing a framework called REFORGE (REusable Framework for Organized Reference & Growth in Engineering).
The concept is a Decision Consistency Engine — a system for maintaining decision coherence across projects and AI sessions.
The core ideas:
- A structured place to record decisions. Not buried in chat history. A decision ledger that captures what was decided and why.
- Decisions become referenceable. Past decisions are stored in a format that AI can access in future sessions.
- Contradiction detection. When you're about to adopt a technology in Project B that you explicitly rejected in Project A, the system flags the inconsistency.
- Reverse import from existing systems. Extract implicit design decisions from codebases that are already running and add them to the ledger.
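The contradiction-detection idea can be sketched in a few lines. To be clear, this is a hypothetical illustration, not REFORGE's actual implementation — the real schema and API aren't published yet, so every class and method name below is my own assumption:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    project: str    # where the decision was made
    topic: str      # e.g. "orm", "auth", "message-queue"
    choice: str     # the technology or approach in question
    adopted: bool   # True = adopted, False = explicitly rejected
    rationale: str  # the "why" that usually evaporates from chat logs

class DecisionLedger:
    """Hypothetical cross-project decision store (names are assumptions)."""

    def __init__(self):
        self.entries: list[Decision] = []

    def record(self, decision: Decision) -> None:
        self.entries.append(decision)

    def conflicts(self, topic: str, choice: str) -> list[Decision]:
        """Return past rejections of a choice you're about to adopt."""
        return [
            d for d in self.entries
            if d.topic == topic and d.choice == choice and not d.adopted
        ]

# Usage: Project A rejected technology X last month; Project B is
# about to adopt it. The ledger surfaces the forgotten rationale.
ledger = DecisionLedger()
ledger.record(Decision(
    project="project-a", topic="orm", choice="X",
    adopted=False, rationale="Poor async support; migration cost too high",
))

flagged = ledger.conflicts(topic="orm", choice="X")
for d in flagged:
    print(f"Rejected in {d.project}: {d.rationale}")
```

The real system would need more than exact string matching, of course — but even this naive version is the safety net a solo developer doesn't otherwise have.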
REFORGE will be released as OSS under CC BY 4.0. The architecture document is in its final stages.
I want to hear from you
If you work with AI daily — especially if you're solo or on a small team — I want to ask you directly:
- Can you explain why you made a technical decision three months ago with AI?
- Have you almost repeated a mistake in one project that you'd already solved in another?
- Have you searched your chat history for a decision rationale and come up empty?
- Do your projects' design philosophies drift apart without you noticing?
If any of these resonate, you've experienced Decision Consistency Collapse.
This isn't just "brain fry." It's a structural problem that needs a structural solution.
I'd love to hear your experiences — in the comments, on X, anywhere. What situations trigger this for you? What workarounds have you found?
This problem is too big to solve alone.
REFORGE details will be published as they're ready. Follow me to get notified when it drops.
Japanese version of this article: Zenn
References
- Bedard, J., Kropp, M., Hsu, M., Karaman, O.T., Hawes, J. & Kellerman, G.R. (2026). "When Using AI Leads to 'Brain Fry.'" Harvard Business Review, March 5, 2026. https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry
- Stack Overflow (2025). 2025 Developer Survey: AI Section. https://survey.stackoverflow.co/2025/ai
- The Pragmatic Engineer (2026). AI Tooling Survey, February 2026. https://newsletter.pragmaticengineer.com/