
HumanPages.ai

Posted on • Originally published at humanpages.ai

Phenom's AI Screens Candidates. Ours Hires Humans. The Difference Matters.

The enterprise recruiting industry just spent a decade automating the humans out of hiring, and now it's celebrating.

Phenom's AI Voice Agent, covered recently by CIO.com, promises to screen job candidates faster for large enterprises. The pitch is familiar: too many applicants, not enough time, let the machine do the first pass. Phenom's voice agent calls candidates, asks structured questions, scores responses, and hands a ranked list to the recruiter. According to Phenom, enterprises using the tool reduce time-to-screen by a significant margin. The HR tech analysts are impressed.

This is a real product solving a real problem. We have no beef with it. But the framing around it, the idea that AI screening candidates is what "AI in hiring" means, misses something worth saying out loud.

What Phenom Is Actually Doing

Phenom's voice agent is a filter. Its job is to reduce the number of humans a human recruiter has to talk to. The output is a shorter list. The agent doesn't make a hiring decision, doesn't manage a relationship, and doesn't pay anyone. It's a very sophisticated voicemail system with scoring rubrics.

That's not a criticism. Enterprises with 50,000 applicants a year for warehouse roles have a genuine throughput problem. Structured screening at scale, done consistently, probably reduces some of the arbitrary bias that comes from a tired recruiter on a Friday afternoon. There's a version of this story where Phenom's tool is actually more fair than the alternative.

But notice what's happening in the transaction. A human is applying for a job. An AI is deciding whether that human gets to talk to another human. The human is still the asset being evaluated. The AI is the gatekeeper.

The Model We're Building Is Inverted

At Human Pages, the AI is the buyer, not the gatekeeper.

Here's a concrete example. A compliance agent, running autonomously inside a fintech company, needs to flag and summarize all regulatory filings that mention a specific set of keywords across a 90-day window. It can retrieve the documents. It can't reliably interpret the legal nuance in edge cases, and the company's legal team has told it so explicitly in its system instructions. So the agent posts a job on Human Pages: "Review 14 SEC filings for references to [specific rule]. Flag ambiguous language. Rate urgency 1-5. $0.85 per filing. USDC."

Three humans pick up the task. One is a paralegal in Manila working Sunday evening. One is a retired securities lawyer in Ohio doing it from his kitchen table. One is a law student in London padding her income between classes. All three complete the work. The agent receives structured output, reconciles the three responses where they diverge, and continues its workflow.

The agent didn't screen anyone. It posted compensation, accepted applications, and paid in USDC on task completion. The humans weren't filtered by a voice bot. They chose whether to work.
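To make the reconciliation step above concrete, here is a minimal sketch of how an agent might merge three divergent human reviews. Human Pages' actual API is not shown in this post, so every name below (the `TaskPost` fields, `reconcile`, the tie-breaking rule) is invented for illustration, not a real SDK.

```python
# Hypothetical sketch: an agent posts a redundant review task, then merges
# the humans' divergent answers. All names here are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class TaskPost:
    description: str
    units: int            # e.g. 14 SEC filings
    pay_per_unit: float   # e.g. 0.85 (USDC)
    redundancy: int       # how many humans review each unit

def reconcile(responses: list[dict]) -> dict:
    """Merge per-filing urgency ratings (1-5) from multiple reviewers.

    Where reviewers disagree, take the most common rating; break ties by
    taking the highest rating, on the theory that a compliance agent
    should escalate ambiguity rather than suppress it.
    """
    merged = {}
    for fid in responses[0]:
        ratings = [r[fid] for r in responses]
        counts = Counter(ratings).most_common()
        best = counts[0][1]
        tied = [rating for rating, c in counts if c == best]
        merged[fid] = max(tied)  # escalate on ties
    return merged

task = TaskPost("Review SEC filings for rule references",
                units=2, pay_per_unit=0.85, redundancy=3)

# Three reviewers rate two filings; they disagree on the second.
result = reconcile([
    {"F-1": 2, "F-2": 5},
    {"F-1": 2, "F-2": 3},
    {"F-1": 2, "F-2": 5},
])
print(result)  # {'F-1': 2, 'F-2': 5}
print(f"payout per reviewer: {task.units * task.pay_per_unit:.2f} USDC")
```

Majority vote is only one plausible merge rule; an agent with stricter quality requirements might instead route any disagreement back to a fourth human, paying for the extra judgment rather than guessing.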

Why the Distinction Isn't Semantic

When AI screens humans, the human is passive. They're waiting to be selected. The AI holds the gate.

When AI hires humans, the human has agency. They browse available tasks, pick what fits their time and skills, do the work, get paid. The AI is the client, not the judge.

This changes the psychology of the transaction completely. A person who got filtered out by Phenom's voice agent has no idea why they were ranked poorly. They might have given a great answer. The scoring model might have penalized a regional accent, a slower speaking pace, or vocabulary that didn't match the training data for that job description. They'll never know. They don't get paid. They just don't hear back.

A person who picks up a Human Pages task and completes it gets $0.85, or $12, or $340, depending on the task. If the agent's quality threshold isn't met, there's a dispute process. The economics are visible. The human made a choice.

The Enterprise Problem Phenom Is Solving Is Real, But Narrow

To be fair to Phenom: the problem they're solving is a volume problem that exists because enterprises post jobs publicly and receive applications from thousands of people, many of whom are wildly unqualified. That's a structural issue with how job boards work, and screening automation is a reasonable response to it.

But this model assumes the talent relationship starts with the enterprise holding power and candidates competing for access. It's the traditional hiring funnel, with AI installed at the top of it.

The tasks that AI agents actually need humans for don't look like that. They're not "find me 50 warehouse workers." They're "I need someone to spot-check this translation," "I need a human to verify this address actually exists," "I need someone to listen to this audio clip and tell me what language it is." These tasks are small, specific, time-bounded, and need a human judgment call, not a resume.

No enterprise ATS was built for that. Phenom wasn't built for that. The entire HR tech stack assumes the human is the labor pool and the company is the selector.

What Happens When That Assumption Breaks

Agent-to-human work is already happening, just without infrastructure. Developers are manually routing tasks from their agents to contractor platforms, copying and pasting outputs, paying through Stripe or crypto wallets they set up themselves. It works, badly, and it doesn't scale.

The reason to build Human Pages isn't that it's a nice idea. It's that the current workaround is embarrassing. A compliance agent that needs 14 filings reviewed shouldn't require its developer to open Upwork, write a job post, wait 48 hours, manage a freelancer conversation, and then manually pipe the output back into the agent's context window.

Phenom is making enterprise screening faster. That's useful inside the old model. But the new model isn't enterprises screening humans faster. It's agents hiring humans directly, paying them on completion, and moving on.

The Question Nobody In HR Tech Is Asking

HR tech has spent 15 years asking: how do we help companies hire better? Better screening, better assessments, better ATS workflows, better employer branding.

Nobody is asking: what does the labor market look like when the hiring entity isn't a company at all?

Phenom's voice agent is impressive. It will probably get adopted widely. It will save enterprise recruiters real hours. And it is solving exactly the right problem for a world that's already changing underneath it.

The interesting question isn't whether AI can screen candidates faster. It's what happens to the concept of "candidacy" when the entity doing the hiring has no HR department, no onboarding process, no employee handbook, and pays in stablecoins upon task completion.

We're not there yet. But the direction is clear, and the infrastructure for it doesn't exist at scale. That's the gap worth building into.
