Last week, Anthropic published their most comprehensive analysis yet of how AI is actually being used in the economy. Not projections. Not hype. Real data from over two million Claude conversations, mapped against the entire US occupational database.
The headline finding stopped me mid-scroll:
AI can theoretically handle 94% of tasks in computer and mathematical roles. People are using it for 33%.
That's not a technology gap. That's an adoption gap. And it's the single biggest efficiency blind spot in business today.
Let me walk you through what the data actually says, what it means for your business, and what I'm seeing on the ground as someone who helps companies close exactly this kind of gap.
The Data: Theoretical Capability vs. Actual Usage
Anthropic's research combines two things that are rarely measured together:
Theoretical capability — what percentage of an occupation's tasks could an LLM theoretically speed up or perform
Observed usage — what people are actually using Claude for in real-world professional settings
The gap between these two numbers tells the real story.
By Occupation Category
| Category | Theoretical | Observed |
| --- | --- | --- |
| Computer & Mathematical | 94% | 33% |
| Office & Administrative | ~90% | A fraction |
| Business & Finance | ~85% | Barely scratched |
Computer and math tasks make up roughly one-third of Claude.ai conversations and nearly half of API traffic — yet even that volume barely scratches the surface of what's possible. Office and admin roles, where ~90% of tasks are theoretically automatable, are barely registering.
The 10 Most Exposed Roles
These aren't warehouse workers or truck drivers. Every one of the most exposed roles sits in a corporate office. They're knowledge workers, many of them your highest-paid employees.
The Numbers That Should Worry Every Business Leader
The earnings gap is inverted. Workers in the most AI-exposed occupations earn 47% more on average than workers with zero exposure. They're nearly 4x as likely to hold graduate degrees. The people most affected by AI aren't the ones businesses usually worry about protecting — they're the ones with the biggest salaries.
Young workers are already feeling it. The research found a 14% drop in job-finding rates for 22-to-25-year-olds entering AI-exposed occupations post-ChatGPT compared to 2022. Entry-level positions in knowledge work are quietly contracting before it shows up in unemployment numbers.
30% of workers have zero AI exposure. Cooks, mechanics, bartenders, lifeguards — roles requiring physical presence remain untouched. The divide between "AI-exposed" and "AI-proof" jobs is becoming a fault line in the labour market.
What This Actually Means for Businesses
Here's what I find most striking: this isn't an AI problem. It's a management problem.
The tools already exist. Claude, GPT-4, Gemini — they can handle the vast majority of tasks in knowledge work today. The 94% theoretical coverage in computer and math roles isn't aspirational. It's current capability.
So why is observed usage stuck at 33%?
From what I see working with businesses, four patterns explain the gap:
- No systematic deployment framework. Most companies have a ChatGPT subscription and a vague encouragement to "use AI more." That's it. No mapping of which workflows benefit most. No standardised prompts. No integration into existing toolchains. People experiment individually, hit a wall, and go back to doing things the old way.
- Individual experimentation instead of team-wide integration. One person on the team discovers that AI can draft their reports in 20 minutes instead of 3 hours. They keep doing it quietly. Nobody else on the team knows. There's no mechanism to share what works, standardise it, or scale it.
- The measurement problem. Nobody is tracking time saved. If you asked most managers "How much time does your team save using AI tools?" they'd shrug. Without measurement, there's no business case for expansion. Without a business case, there's no budget for proper deployment. The gap perpetuates itself.
- The "waiting for better" trap. I hear this constantly: "We'll invest properly when AI gets better." Meanwhile, the research shows that 97% of observed Claude tasks already fall into categories where AI is theoretically capable. And 68% involve tasks rated as fully feasible for an LLM to handle alone. The capability is here. The deployment isn't.
What I'm Seeing in the Field
I work as a Fractional Head of AI for SMEs — businesses that know they need to move on AI but don't have the in-house expertise to do it systematically. The Anthropic data matches what I see in every engagement.
The pattern is remarkably consistent:
Before a systematic audit, most businesses estimate they're using AI for maybe 40-50% of what's possible. The actual number, once you map their workflows against what AI can handle today, is usually closer to 15-20%.
The biggest gaps are almost never where leaders expect. Everyone thinks about AI for content generation and coding. The real untapped value is in the mundane: data processing, report summarisation, customer communication drafts, internal knowledge retrieval, meeting preparation, and compliance documentation. Tasks nobody thinks of as "AI tasks" because they've always been done manually.
The fastest wins come from the boring stuff. A customer service team that implements AI-assisted response drafting sees measurable time savings in the first week. A finance team that uses AI for initial report drafting cuts their month-end close by days, not hours.
The companies pulling ahead aren't the ones with the fanciest tools. They're the ones with a framework: audit, deploy, measure, iterate.
A Framework for Closing the Gap
If the Anthropic data has you thinking "we're probably on the wrong side of this gap," here's where to start.
Step 1: Audit Your Workflows
Map your team's actual tasks against AI capabilities. For each role, ask: what does this person spend time on every day, and which of those tasks could AI meaningfully accelerate?
Be specific. "Marketing" isn't a task. "Writing first drafts of product descriptions based on feature specs" is.
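In practice, the audit can be as simple as a task inventory: each task with the hours spent on it and a rough 0-to-1 estimate of how much AI could accelerate it. A minimal sketch in Python (the roles, tasks, and scores below are illustrative placeholders, not figures from the Anthropic research):

```python
# Hypothetical task audit: each entry maps a specific task to hours spent
# per week and a rough 0.0-1.0 estimate of how much AI could accelerate it.
tasks = [
    {"role": "Marketing",  "task": "Draft product descriptions from specs", "hours_per_week": 6,  "ai_accel": 0.7},
    {"role": "Finance",    "task": "Summarise month-end variance reports",  "hours_per_week": 4,  "ai_accel": 0.6},
    {"role": "Support",    "task": "Draft first-pass customer replies",     "hours_per_week": 10, "ai_accel": 0.5},
    {"role": "Operations", "task": "On-site equipment checks",              "hours_per_week": 8,  "ai_accel": 0.0},
]

# Total hours audited vs. hours AI could plausibly hand back.
total = sum(t["hours_per_week"] for t in tasks)
recoverable = sum(t["hours_per_week"] * t["ai_accel"] for t in tasks)

print(f"Hours audited:     {total}")
print(f"Hours recoverable: {recoverable:.1f} ({recoverable / total:.0%})")
```

Even a spreadsheet version of this forces the specificity the audit needs: you can't score "Marketing", but you can score "writing first drafts of product descriptions."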
Step 2: Run a Focused Pilot
Pick the 3 highest-impact workflows from your audit. "Highest impact" means: done frequently, time-consuming, and involving tasks AI handles well — writing, analysis, data processing, summarisation.
Give your team structured prompts and workflows. Not "here's a ChatGPT login, figure it out." Actual documented processes for how AI fits into each workflow. Two weeks is enough to get meaningful data.
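One way to pick those three workflows is to rank the audit output by expected weekly time savings: runs per week, times minutes per run, times an AI-suitability score. A hypothetical sketch (all numbers are illustrative assumptions):

```python
# Hypothetical pilot shortlist: rank audited workflows by expected weekly
# time savings = runs/week * minutes/run * AI suitability (0.0-1.0).
workflows = [
    ("Report first drafts",        3, 120, 0.6),
    ("Meeting prep summaries",     5,  30, 0.7),
    ("Customer reply drafts",     40,  10, 0.5),
    ("Compliance doc formatting",  2,  90, 0.4),
]

def expected_minutes_saved(runs, minutes, suitability):
    return runs * minutes * suitability

# Sort descending by expected savings and keep the top 3 for the pilot.
ranked = sorted(workflows, key=lambda w: expected_minutes_saved(*w[1:]), reverse=True)
for name, runs, minutes, suit in ranked[:3]:
    saved = expected_minutes_saved(runs, minutes, suit) / 60
    print(f"{name}: ~{saved:.1f} hours/week")
```

The point isn't precision; it's that a crude, explicit ranking beats picking pilot workflows by gut feel.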
Step 3: Measure Relentlessly
Track time-to-completion before and after. Track output quality. Track team adoption rates. Build the business case with real numbers from your own organisation, not vendor promises.
The measurement step is where most companies fail and most pilots die. Don't let it.
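The before/after measurement itself is trivial to compute once you collect the data. A minimal sketch, assuming you've timed the same task a handful of times manually and then during the AI-assisted pilot (the numbers are placeholders, not real benchmarks):

```python
# Before/after time-to-completion for one piloted workflow, in minutes.
# All figures below are illustrative placeholders, not real measurements.
before = [180, 165, 200, 175, 190]   # manual
after = [70, 85, 60, 90, 75]         # AI-assisted

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)
saving = 1 - avg_after / avg_before  # fraction of time saved per task

print(f"Avg before: {avg_before:.0f} min")
print(f"Avg after:  {avg_after:.0f} min")
print(f"Time saved: {saving:.0%} per task")
```

Five timed samples per workflow is enough to build the internal business case — which is exactly the number most companies never collect.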
Step 4: Scale What Works
Take the workflows that proved out in the pilot, document them properly, train the full team, and integrate them into your standard operating procedures.
Then go back to Step 1 and audit the next layer of workflows. The gap is big enough that most businesses can run this cycle 3-4 times before they even approach the frontier of what's possible.
The Window Is Open
The Anthropic data makes one thing clear: there is an enormous gap between AI capability and AI adoption. That gap represents real, measurable efficiency sitting on the table right now.
But gaps close. As tools get easier, as competitors catch on, as the next generation of workers arrives expecting AI-native workflows — the advantage of being early narrows.
The companies that build their AI deployment framework now are compounding their advantage every month. The ones waiting for AI to "get better" are falling behind at the same rate.
The technology is ready. The data proves it. The question is whether your organisation has the framework to actually use what's already available.
This analysis is based on Anthropic's Labor Market Impacts research paper (March 2026) and the Anthropic Economic Index (January 2026), which together analysed over 2 million Claude conversations mapped against the US Bureau of Labor Statistics occupational database.
Jarrad Bermingham is the founder of Steadwise AI and works as a Fractional Head of AI, helping businesses close the gap between AI capability and actual adoption. Connect on LinkedIn.