The AI labour market has split in two. One side pays $400K and can’t hire fast enough. The other is flat or falling. We mapped six competing frameworks for AI skills — from Anthropic to the World Economic Forum — and they all converge on the same insight: the scarce skills aren’t technical. They’re judgment, evaluation, and architecture.
The Numbers
The AI job market is K-shaped. Traditional knowledge work — generalist project managers, standard engineers, conventional analysts — is flat or falling. AI-specific roles are growing explosively:
| Metric | What it measures |
| --- | --- |
| 3.2 : 1 | AI jobs to qualified candidates |
| 1.6M | Open AI roles vs ~500K applicants |
| 142 days | Average time-to-fill an AI position |
| $52.6B | Projected AI agent market by 2030 |
The gap isn’t closing. It’s widening. And the skills employers are paying premiums for aren’t the ones most people assume.
Six Frameworks, One Conclusion
We reviewed six major frameworks for AI skills — each from a different vantage point, each developed independently. Here’s what they found.
1. The Recruiter’s View — 7 Skills Employers Can’t Find
Nate B Jones, a recruiter who analysed hundreds of AI job postings, identified seven skills that appear repeatedly and remain hardest to fill:
| # | Skill | Who Typically Has It |
| --- | --- | --- |
| 1 | Specification Precision | Technical writers, lawyers, QA engineers |
| 2 | Evaluation & Quality Judgment | Editors, auditors, QA |
| 3 | Task Decomposition & Delegation | Project managers, team leads |
| 4 | Failure Pattern Recognition | SREs, risk managers, ops leaders |
| 5 | Trust & Security Design | Compliance, security, risk |
| 6 | Context Architecture | Librarians, technical writers, data architects |
| 7 | Cost & Token Economics | Finance, senior architects |
The most frequently cited skill? Evaluation and quality judgment — the ability to detect when AI is confidently wrong. The highest-paid? Context architecture — building data systems that AI agents can actually use. Companies will pay “almost anything” for this, according to Jones.
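Of the seven skills, cost and token economics is the most mechanical to illustrate. A back-of-the-envelope monthly estimate for a single AI workload might look like the sketch below; the per-token prices are assumptions for illustration, not any provider's actual rates:

```python
# Assumed prices for illustration only; real rates vary by provider and model.
PRICE_PER_1M_INPUT = 3.00    # USD per million input tokens (assumption)
PRICE_PER_1M_OUTPUT = 15.00  # USD per million output tokens (assumption)

def monthly_cost(requests_per_day: int, input_tokens: int,
                 output_tokens: int, days: int = 30) -> float:
    """Estimate monthly USD spend for one AI workload."""
    total_in = requests_per_day * input_tokens * days
    total_out = requests_per_day * output_tokens * days
    return (total_in * PRICE_PER_1M_INPUT
            + total_out * PRICE_PER_1M_OUTPUT) / 1_000_000

# 2,000 requests/day, 1,500 input tokens and 400 output tokens each
print(monthly_cost(2000, 1500, 400))  # → 630.0
```

Note how output tokens, though a minority of the volume here, account for most of the bill: spotting that kind of asymmetry before it scales is precisely the judgment employers are paying for.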
2. Anthropic’s AI Fluency Framework
Anthropic (the company behind Claude) published a framework built around four competencies — the “4 D’s”:
- Delegation — assigning tasks to AI with clarity and precision
- Description — communicating goals, expectations, and parameters
- Discernment — critically evaluating AI outputs for accuracy and ethics
- Diligence — monitoring performance, addressing risks, maintaining standards
Their AI Fluency Index found that 85.7% of users iterate and refine AI output. Far fewer question the reasoning or identify missing context. The gap isn’t in using AI — it’s in evaluating it.
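Discernment, the third D, is the one most users skip. A minimal sketch of what systematic output evaluation can look like in practice (the checks and thresholds below are illustrative, not Anthropic's method):

```python
# Illustrative quality checks; real evaluation rubrics are domain-specific.
def evaluate_output(text: str, required_terms: list[str],
                    max_words: int = 200) -> list[str]:
    """Return a list of quality issues found in an AI-generated answer."""
    issues = []
    word_count = len(text.split())
    if word_count > max_words:
        issues.append(f"too long: {word_count} words (limit {max_words})")
    for term in required_terms:
        # A fluent answer that omits a fact the prompt demanded still fails.
        if term.lower() not in text.lower():
            issues.append(f"missing required term: {term!r}")
    return issues

print(evaluate_output("AI regulation is coming soon.", ["EU AI Act"]))
# → ["missing required term: 'EU AI Act'"]
```

The point is not the code, which is trivial, but the habit: checking outputs against explicit criteria rather than trusting a confident tone.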
3. DataCamp’s Enterprise Framework
A survey of 500+ US/UK enterprise leaders ranked four capability layers by priority:
- Decision-making & Interpretation — highest priority
- AI Fluency & Responsible Use — foundational
- Applied Data Skills — practical implementation
- Technical/Engineering — deepest layer, narrowest audience
60% of leaders report a data skills gap. 59% report an AI skills gap. The priority isn’t more engineers — it’s better decision-makers.
4. World Economic Forum — Skills Backbone
The WEF advocates for shared skills taxonomies linked to three value pools: AI-enabled operations, industry-specific AI solutions, and intelligent engineering. Their blueprint calls for alignment between companies, governments, and educators. Notable: the EU AI Act (compliance required from August 2026) creates regulatory requirements that most SMBs haven’t begun to address.
5. Deloitte — Skills-Based Organizations
Deloitte’s research found that skills-based organisations are 79% more likely to provide a positive workforce experience and 63% more likely to achieve results. Their model rests on four pillars: talent philosophy, skills framework, data/technology enablers, and governance. The emphasis isn’t on acquiring new skills — it’s on governing the skills you have.
6. Emerging Roles — The AI Agent Market
New roles appearing across the industry:
- AI Automation Architect — system scalability
- AI Strategy Consultant — aligning AI with business objectives
- Agent Architect — designing multi-agent systems
- AI Oversight Specialist — governance and compliance
- AI Workforce Manager — coordinating blended human-AI teams
Gartner predicts 40% of enterprise apps will include task-specific AI agents by end of 2026. The market for these roles barely existed 18 months ago.
The Convergence
Every framework converges on the same insight: the scarce skills aren’t technical — they’re judgment, evaluation, and architecture.
- Anthropic says it: discernment > delegation
- DataCamp says it: decision-making > engineering
- Jones says it: evaluation & quality judgment is the #1 cited skill
- Deloitte says it: skills governance > skills acquisition
The people who can tell AI what to do precisely, evaluate whether it did it correctly, and design the systems that make it reliable — those are the people the market can’t find enough of. And those aren’t computer science graduates. They’re editors, auditors, librarians, project managers, and risk specialists who’ve learned to work with AI.
Framework Convergence — 6 Sources, 7 Skill Axes
[Radar chart: six series — Jones (Recruiter), Anthropic, DataCamp, WEF, Deloitte, Emerging Roles — plotted across seven skill axes: Evaluation & Judgment, Specification Precision, Task Decomposition, Failure Pattern Recognition, Trust & Security Design, Context Architecture, Cost & Token Economics.]
Figure 2: Six independent AI skills frameworks plotted on seven skill axes. The visual overlap at the top — Evaluation & Judgment — shows the convergence: every framework rates it highest. Cost & Token Economics consistently ranks lowest, suggesting it’s a learnable skill rather than a scarce capability.
What This Means for Businesses
If you’re an SMB looking at AI, the implication is direct: you don’t need to hire an AI engineer. You need someone who can evaluate AI output, structure your data so agents can use it, and build the oversight systems that keep quality high.
That’s what DESIGN-R does. Our AI team doesn’t just generate content or run automated scans. It researches your market, monitors your competitors, and delivers intelligence — with human review at every step. The same skills every framework identifies as scarce are the ones we use daily.
If you want to see what AI-backed intelligence looks like in practice, the free website check takes five minutes and shows you exactly the kind of analysis we deliver.
What Doesn’t Hold Up
Honest caveats on this analysis:
- The $400K figure is for AI engineers at large companies. The skills scarcity is real, but the price points vary enormously by market. An SMB in Birmingham isn’t hiring at Silicon Valley rates.
- The certifications landscape is moving fast. Anthropic’s Claude Certified Architect is new. By Q4 2026 there may be five competing certifications. First-mover advantage matters but isn’t permanent.
- Frameworks overlap more than they diverge. Six frameworks agreeing could mean they’re all seeing the same truth — or they’re all reading each other’s work. Independent convergence is stronger evidence than citation chains, and we can’t fully distinguish the two here.
- The AI agent market projections ($52.6B by 2030) are analyst estimates. These are directionally useful but not predictions. Treat them as indicating scale and trajectory, not as precise forecasts.
Sources
- Anthropic AI Fluency Framework & Index (2025–2026)
- DataCamp 2026 AI & Data Literacy Framework (with YouGov)
- World Economic Forum, “Invest in the Workforce for the AI Age” (January 2026)
- Deloitte Skills-Based Organization research
- Spectraforce, “AI in Hiring 2026”
- Gartner, Upwork, and industry AI agent market analyses
- Nate B Jones, “The AI Job Market Split in Two” (YouTube, March 2026)
Originally published at DESIGN-R Intelligence