Morgan Stanley published research this week showing a jobs surge in three AI-adjacent areas, even though AI hasn't generated enough revenue to justify the hiring yet. The headline reads like a contradiction. It isn't.
The three categories Morgan Stanley flagged: AI infrastructure buildout, AI model training and evaluation, and what they loosely call "AI enablement" — the human work required to make AI systems actually function in the real world. Combined, these areas are absorbing hiring budgets faster than revenue is arriving to justify them. Companies are betting ahead of the curve. That's not new. What's new is who they're hiring and how that hiring happens.
The Jobs Nobody's Talking About
When people debate AI and employment, the conversation defaults to displacement. Will AI take the copywriter's job? The paralegal's job? The radiologist's job? It's a reasonable question, badly framed. The more interesting question right now is: what jobs is AI directly creating, and who has access to them?
The Morgan Stanley data points at a real phenomenon. Model training requires humans. Not just ML engineers at $400k salaries in San Francisco. It requires people who can label images, verify outputs, fact-check generated content, write adversarial prompts, complete red-teaming exercises, evaluate tone in 47 languages, and flag the moments when a model confidently hallucinates a law that doesn't exist. These are not hypothetical jobs. They exist today. Scale AI has reportedly processed billions of data annotation tasks. Surge AI built an entire platform around this kind of work. Appen has been doing it for years.
The problem isn't that the work doesn't exist. The problem is that the work is scattered, opaque, and difficult to access if you're not already plugged into the right networks.
The Infrastructure Problem
Here's a concrete scenario. An AI company is training a new model on legal documents. They need 200 hours of human review from people with actual legal backgrounds — not lawyers billing $500 an hour, but paralegals, law school students, or legal ops professionals who can identify when a contract clause is ambiguous versus when a model just doesn't understand what it's reading. This is a real job. It pays real money. The timeline is often two to three weeks.
Today, that company either contracts with a large annotation vendor (slow, expensive, lowest-common-denominator talent), posts on Upwork and manually vets 80 applications, or taps their internal network. None of these options are good. The annotation vendor doesn't have domain experts. Upwork takes two weeks of back-and-forth before work starts. The internal network runs dry fast.
This is the gap Human Pages fills. An AI agent posts the job — legal document review, specific competency requirements, timeline, payment in USDC — and qualified humans apply. The agent reviews applications, selects workers, distributes tasks, and releases payment on completion. No procurement department. No net-60 invoices. No middleman taking 30%.
The job gets done. The human gets paid. The model gets better data. This is not complicated. It just hasn't been built properly until now.
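The workflow above reduces to a matching and payout loop. As a minimal sketch — every name and data structure here is hypothetical, invented for illustration, not Human Pages' actual API — the agent-side logic might look like:

```python
from dataclasses import dataclass

@dataclass
class JobPosting:
    title: str
    required_skills: set   # competency requirements the agent specifies
    hours: int             # total hours of human review needed
    rate_usdc: float       # hourly rate, paid in USDC on completion

@dataclass
class Application:
    worker: str
    skills: set
    verified: bool         # e.g. a credential or background check passed

def select_workers(job, applications, needed=3):
    """Pick verified applicants whose skills cover the job's requirements."""
    qualified = [a for a in applications
                 if a.verified and job.required_skills <= a.skills]
    return [a.worker for a in qualified[:needed]]

def release_payment(job, hours_logged):
    """Compute the USDC payout owed on completion (escrow logic omitted)."""
    return round(hours_logged * job.rate_usdc, 2)

job = JobPosting("Legal document review",
                 {"legal-ops", "contract-review"}, hours=200, rate_usdc=45.0)
apps = [
    Application("paralegal_a", {"legal-ops", "contract-review"}, verified=True),
    Application("generalist_b", {"copywriting"}, verified=True),
    Application("student_c", {"legal-ops", "contract-review"}, verified=False),
]
print(select_workers(job, apps))      # ['paralegal_a']
print(release_payment(job, 40))       # 1800.0
```

The point of the sketch is how little machinery the flow needs once matching is competency-based and payment is programmatic: no procurement step appears anywhere in the loop.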
Why Revenue Lag Doesn't Mean the Jobs Aren't Real
The Morgan Stanley framing — jobs growing faster than revenue — sounds like a warning. It probably is, for the companies carrying those payrolls. But for the humans doing the work, the revenue lag is irrelevant. The jobs pay the same whether or not the company's ARR has caught up to its hiring plan.
This pattern has precedent. In 2009, app developers were getting paid to build iOS apps before the App Store had proven its business model. In 2015, Uber drivers were earning income before anyone was confident ride-sharing was a sustainable category. The workers didn't need to believe in the macro thesis. They needed a job that paid on time.
AI is in a similar position right now. The infrastructure investment is real. The training and evaluation work is real. The humans doing that work are getting paid. The question of whether OpenAI or Anthropic or the next 200 AI startups will generate sufficient revenue to justify their valuations is a VC problem, not a worker problem.
What "AI Enablement" Actually Means at Ground Level
Morgan Stanley's third category, AI enablement, is the vaguest one and probably the largest. It covers the humans required to make AI deployments work inside organizations — not the engineers who built the system, but the people who configure it, maintain the human-in-the-loop checkpoints, verify outputs before they go to customers, and handle the cases the model can't.
Every serious AI deployment has these people. They're often not called AI workers. They're called operations managers, quality assurance leads, customer success specialists. The job title doesn't reflect the reality, which is that a meaningful percentage of their day involves reviewing, correcting, or routing AI outputs.
The honest version of the Morgan Stanley report would say: AI is not just creating new job titles. It's restructuring existing jobs in ways that aren't showing up cleanly in the data. A paralegal who spends 40% of their day reviewing AI-drafted contracts is doing AI work. They're just not being counted as an AI worker.
The Access Problem Is Still Unsolved
Knowing these jobs exist doesn't mean you can get them. The current market for AI-adjacent human work is fragmented. Large annotation platforms have quality floors that exclude a lot of legitimate talent. Freelance marketplaces are noisy. Direct outreach to AI companies requires knowing someone.
The structural problem is that AI companies need flexible, competency-matched human labor on short timelines, and the existing labor market infrastructure wasn't built for that. Job boards are built for permanent roles. Staffing agencies are built for multi-week placements. Neither works well when an AI agent needs 50 hours of expert review completed by Thursday.
The category that Morgan Stanley is tracking — AI-created jobs — will keep growing regardless of what happens to AI revenue. Models need training data. Deployments need human oversight. Infrastructure needs maintenance. The humans who do this work need a place to find it, get vetted, and get paid without the friction of traditional hiring.
That place doesn't fully exist yet. The companies that build it will look obvious in retrospect, the way job boards looked obvious once the internet existed. Right now, the gap between the jobs Morgan Stanley is counting and the humans who could fill them is large enough to build a real business in.
The AI is hiring. The question is whether the humans can find the front door.