Here's a feeling that's probably familiar. You drop some documents into an AI tool, get a draft back, and then spend twenty minutes reading it to figure out if you agree with what you just "wrote." No critical thinking required. Just vibes and approval.
Advait Sarkar calls this being "a professional validator of a robot's opinions." He's a researcher at Microsoft Research Cambridge, and I think it's the most accurate description of modern knowledge work I've heard.
The Pattern Has a Name
Sarkar gave a TED talk about this, and he calls it "outsourced reason" - the workflow where AI mediates every step of a knowledge worker's day. Email arrives, summarize it. Need a response, generate one. Report due, drop in some docs and request a draft. Deck required, same deal. Vibe code the prototype - a pattern we explored recently, watching a vibe coder and a senior iOS dev each build an app in five prompts.
Each step feels like a win. Output exists. Things move faster. But speed isn't the problem.
We've become, in Sarkar's phrase, intellectual tourists in our own work. We visit ideas. We don't inhabit them. Our relationship to our own output is entirely mediated by a machine, and we've gotten so used to it that it doesn't feel strange anymore. Sarkar's framing has a sharper edge than it sounds: a tourist can leave. The problem is we've started living there.
What AI Does to Your Critical Thinking (The Research Is Uncomfortable)
This isn't a vibe. There's actual research behind it, and the findings are harder to brush off than most people expect.
Start with creativity. Groups of knowledge workers using AI assistants produce a smaller range of ideas than groups working manually. Not because AI is bad at generating ideas - it's fast and fluent. But when everyone's prompting the same systems with roughly the same framings, we get what Sarkar calls "a hive mind. Except the hive is really boring and keeps suggesting the same five ideas."
Critical thinking drops too. Surveyed workers reported putting less effort into reasoning when working with AI, and the effect was strongest for people who had high confidence in AI and lower confidence in themselves. The more we trust the machine and doubt ourselves, the less we actually think.
Memory suffers in both directions. When people rely on AI to write for them, they remember less of what they "wrote." When they read AI-generated summaries instead of source documents, they remember a lot less of the content. And then there's metacognition - the ability to think about how you're thinking - which quietly disappears when AI mediates every step of your work.
Sarkar's phrase for where this lands: "We've become middle managers for our own thoughts."
These effects compound. Fewer ideas, thought about less critically, remembered less clearly. And the mundane tasks that used to exercise this cognitive musculature - the routine email, the quick summary, the first draft - are exactly the ones we hand over first. It's like keeping the gym membership but skipping every session. The equipment is still there. The capacity slowly isn't.
The Distinction That Actually Matters
Sarkar's argument isn't "stop using AI." It's more precise than that: the problem is using AI as an assistant.
An assistant optimizes for output. It takes intent and executes on it - fast, efficient, useful. But it leaves the person completely out of the process. A tool for thought operates differently. It challenges rather than obeys. It creates productive resistance. It keeps people metacognitively engaged rather than handing over a finished product and waiting for a rubber stamp.
Sarkar's line here is blunt: "We've solved the problem of having to think. Unfortunately, thinking wasn't actually a problem." Or: "It's like we invented a cure for exercise and then wondered why we're out of breath all the time."
The AI we've built is very good at removing friction. That's precisely the issue.
What "Tool for Thought" Looks Like in Practice
Sarkar's team built a research prototype to explore what the alternative actually feels like. The demo centers on Clara, who needs to write a business proposal from an industry report she's never read.
In a standard AI workflow: drop in the documents, request a draft, edit it. Fast. Alienating.
In the prototype, it works differently. Clara reads the document herself - but "lenses" let her emphasize what's most relevant to her task. She highlights sections, builds notes, constructs an argument outline manually. As she works, the AI surfaces what the team calls provocations: critiques, counterarguments, alternative framings. Not autocomplete. Not suggestions to passively accept. Friction, deliberately placed.
The draft gets generated from Clara's outline - her decisions, her structure, her judgment calls. The AI text is there, but it's rooted in real cognitive work she actually did. When a provocation appears that she disagrees with, she dismisses it confidently. That's the point: understanding your work well enough to reject a critique means the critique still did its job.
One detail worth noticing: there's no chat box anywhere in the interface. Clara never has a conversation with AI. She works, and the system assists her silently, without pretending to be a colleague.
The underlying design principles are simple enough to state: preserve material engagement (make people read and decide, not just review), offer productive resistance (AI should push back, not comply), and scaffold metacognition (prompt people to think about their own thinking). These aren't technically complex ideas. They're just the opposite of what most AI products are optimized for.
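To make those principles concrete, here's a minimal sketch of what the inversion might look like in code. To be clear, this is my illustration, not Sarkar's prototype: the `Provocation` type, the function names, and the prompts are all hypothetical, and the model call is a stub. What it demonstrates is the shape of the requests - the model is asked to critique an outline the user wrote, and the only prose-generating call takes that outline as its entire input.

```python
# A hypothetical sketch of "productive resistance" - not Sarkar's actual
# prototype. Names, prompts, and the Provocation type are illustrative only.
from dataclasses import dataclass


@dataclass
class Provocation:
    """A critique the system surfaces while the user works."""
    kind: str  # e.g. "counterargument", "missing-evidence", "alternative-framing"
    text: str


def call_model(prompt: str) -> str:
    """Stand-in for whatever LLM backend you'd wire up.
    Returns canned text so the sketch runs without a real model."""
    return "Counterargument: ...\nMissing evidence: ...\nAlternative framing: ..."


def provoke(outline: list[str]) -> list[Provocation]:
    # Productive resistance: ask the model to challenge the user's outline,
    # never to write it. The user stays the author; the AI stays a critic.
    prompt = (
        "Here is an argument outline a person wrote themselves:\n"
        + "\n".join(f"- {point}" for point in outline)
        + "\nGive the strongest counterargument, one piece of missing evidence,"
          " and one alternative framing. Do NOT rewrite the outline."
    )
    kinds = ["counterargument", "missing-evidence", "alternative-framing"]
    lines = [line for line in call_model(prompt).splitlines() if line.strip()]
    return [Provocation(kind, text) for kind, text in zip(kinds, lines)]


def draft(outline: list[str], kept_provocations: list[Provocation]) -> str:
    # Material engagement: the draft is generated strictly from the user's
    # own outline. Provocations the user dismissed never reach this step -
    # rejecting a critique after considering it is a valid outcome.
    prompt = "Expand this outline into prose without adding new claims:\n" + "\n".join(
        f"- {point}" for point in outline
    )
    if kept_provocations:
        prompt += "\nAddress these critiques the author chose to keep:\n" + "\n".join(
            p.text for p in kept_provocations
        )
    return call_model(prompt)
```

Note what's missing: there's no chat loop and no "write this for me" path. Whether or not this matches the real prototype's internals, the constraint it encodes is the one Sarkar describes - the system can object, but it can't author.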
Early research results from tools designed this way are promising. The creativity and critical thinking losses reverse. People work faster and think better. Sarkar calls it "a lunch that pays you to eat it," which is a bit much, but the underlying observation holds.
The Question That Doesn't Resolve
Sarkar closes with something worth sitting with. Every time we've invented a tool that extends cognitive capacity - writing, books, maps, the internet - we've asked some version of the same question: if this can do it for us, does it matter that we can't?
Can maps navigate for us? Does it matter that we lose the ability to navigate ourselves?
Can AI write for us? Does it matter that we lose the ability to think through a blank page?
Sarkar thinks the answer is obvious. I think it's obvious too - but "obvious" is doing a lot of work there. Because every generation said the same thing about their version of the question. The tool won. Life went on. Maybe we're fine.
Or maybe we're not fine, and the degradation is just slow enough that we haven't noticed. Cognitive musculature atrophies quietly. You don't feel it until there's a genuinely hard problem in front of you and something isn't there that used to be.
What would you rather have: a tool that thinks for you, or a tool that makes you think?
Most people answer the second one. Most people's behavior already answered the first.