DEV Community

charlie-morrison


I Sent the Same Resume to 5 AI Recruiter Bots — Only 2 Replied. Here Is What Killed the Other 3

There is a layer of AI recruiting tooling that has quietly become unavoidable in 2026. Most candidates do not know they are interacting with it, because the recruiter on the phone is human and the email is signed by a human, but the first filter and the first conversation in roughly half of the postings I tracked this quarter go through software.

I wanted to know how good the software actually is. Specifically: does the same resume get the same treatment across the five biggest tools? Or are some of them just throwing applications away?

For each platform, I picked a role I would actually accept if it turned into an offer. I sent the same resume — same role fit, same seniority — to all five recruiting platforms. Then I tracked what happened.

Two of the five worked. Three did not. The failure modes are interesting because they are different from each other.

The five platforms

I will describe the categories rather than name vendors, since some of these platforms operate as white-labels under recruiter brand names.

  1. Conversational AI screener — the chatbot that DMs you on LinkedIn and asks 4-6 qualifying questions before forwarding to a human recruiter.
  2. Voice-AI phone screener — calls your phone, has a 5-minute conversation, transcribes, then routes.
  3. Resume-parsing + scoring API — runs your resume through OCR/NER + an LLM scorer; outputs a fit score; recruiter sees that score and decides whether to read.
  4. Async video interview tool — you record yourself answering 3-5 questions, AI transcribes and scores, recruiter watches the highest-scoring videos.
  5. AI sourcing platform that "matches" you to roles — you upload your resume; it runs against an embedding index of open roles; you get matched roles back via email.

The two that worked

Platform 1 (conversational chatbot) worked. It asked four questions, three of which were directly job-relevant ("are you authorized to work in the US," "are you open to remote," "what is your minimum comp expectation"). The fourth was a vague behavioral prompt that I answered with two sentences. Forty-eight hours later a real recruiter emailed asking to schedule a call. Same role, no surprises.

Platform 5 (AI sourcing match) also worked. I uploaded the resume; within 36 hours I had three relevant role matches in my inbox. Two were genuinely good fits (correct seniority, correct stack, correct geography). The third was off — wrong stack — but it was a transparent miss, not a hostile one. I emailed the recruiter back and they corrected it.

What these two have in common: simple, structured tasks with feedback loops. Conversational screening is a 4-question survey with branching. Sourcing-match is an embedding lookup with manual recruiter override. Both are well-suited to the AI of 2026.
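To make the sourcing-match half of that concrete: under the hood it is a nearest-neighbor lookup over embeddings, with a human deciding what to do with the top results. Here is a minimal sketch of that pattern. Everything in it is invented for illustration: the vectors are toy hand-made embeddings, not output from any real model, and the role index is made up.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy role index: role name -> hand-made embedding (illustrative only).
roles = {
    "Senior Backend Engineer (Go)": [0.9, 0.1, 0.3],
    "Senior Frontend Engineer":     [0.1, 0.9, 0.2],
    "Data Engineer":                [0.4, 0.2, 0.8],
}

resume_vec = [0.85, 0.15, 0.35]  # toy embedding of the candidate's resume

# Rank roles by similarity. In the pattern that worked, a human
# recruiter reviews this ranked list before anything reaches the
# candidate; that review step is the manual override.
ranked = sorted(roles, key=lambda r: cosine(resume_vec, roles[r]), reverse=True)
print(ranked[0])  # best match under this toy index
```

The task is easy for the machine (a similarity sort) and the judgment stays with the human, which is exactly the division of labor that made platforms 1 and 5 usable.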

The three that didn't

Platform 2 (voice AI phone screen) failed in an interesting way. The bot was clearly trying to act human. It paused naturally. It said "mhm" at appropriate moments. It also missed two of my answers entirely, asked the same question twice (different wording), and ended with "thanks, we'll be in touch." Nothing followed. Six weeks later, no human contact. I escalated through a different channel — recruiter said the platform had logged me as "low engagement signal" because I had asked it to repeat one of its questions. The bot had penalized me for clarifying.

Platform 3 (resume scoring API) failed silently. There is no candidate-facing failure mode. You just don't hear back. I only know my application was scored low because a friend at the company forwarded me the recruiter's view of it. The fit score had been 41/100. The recruiter never opened the resume. The score was low because my title at my current company is not the standard title for the role. The scorer had no concept of "this person does the role under a different title at a smaller company." It was a string match. I lost the role to a less-qualified candidate with a perfectly-matching title.

Platform 4 (async video interview) failed in the way the platform itself documents but candidates don't notice. The scoring rubric weights three things heavily: speech pace, eye-contact-to-camera, and "energy" (a proxy that mostly captures pitch variation). I answered the questions clearly and concisely. The transcript was good. The video score was 64/100. I lost to a candidate who almost certainly performed better on the rubric than on the actual answers.
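To see how a delivery-weighted rubric produces that outcome, here is a toy scorer. The weights are invented for illustration and are not any vendor's actual rubric; the point is only that when delivery features carry most of the weight, stronger answers can lose to better camera presence.

```python
def video_score(content, pace, eye_contact, energy):
    """Toy rubric: each input is 0-100. Delivery features (pace,
    eye contact, energy) carry 70% of the weight here, which is the
    shape of rubric the post describes, not a real vendor's numbers."""
    return round(0.30 * content + 0.25 * pace + 0.25 * eye_contact + 0.20 * energy)

# Strong answers, flat delivery:
print(video_score(content=90, pace=55, eye_contact=50, energy=50))  # 63
# Weaker answers, camera-friendly delivery:
print(video_score(content=65, pace=90, eye_contact=95, energy=90))  # 84
```

A 25-point content advantage is erased by a 21-point final-score deficit, which is consistent with losing on the rubric rather than on the answers.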

The pattern

The two that worked treat AI as a structured-task layer with human override. The three that failed treat AI as a judgment layer.

Conversational screening = AI does the easy part (collecting structured answers); humans do the hard part (deciding fit). It works.

Sourcing match = AI does the easy part (embedding similarity); humans do the hard part (deciding which match to act on). It works.

Voice AI screening, resume scoring, and async video scoring all ask the AI to make the judgment. The AI is not yet good enough. But it is good enough to silently filter you out without telling you. That is the dangerous version.

What to do as a candidate

Three concrete moves, in order of cost.

1. Free: title-match your resume. If platform 3 is in the funnel for a role you want, the title on your resume must exactly or nearly match the title in the JD. "Senior Software Engineer" vs. "Senior Backend Engineer" can be a 15-point fit-score gap. Stack tokens matter too: Go, Postgres, Kubernetes — use the exact spelling from the JD, and avoid abbreviations the parser may not have seen.

2. Cheap: rehearse the structured answer flow for platforms 1 and 2. Authorization, location, comp, "tell me about yourself" — short, direct, no anecdotes. The bot is parsing for keywords and intent. Give it both, fast.

3. Expensive: avoid platforms 3 and 4 if you can route around them. This is not always possible, but a referral from inside the company often skips the scoring layer and lands you in a human recruiter's queue directly. A referral was worth roughly 5x the response rate in this small sample.
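If you want to sanity-check move 1 before applying, a crude similarity ratio between your resume title and the JD title is enough to flag a likely mismatch. The 0.8 threshold below is an arbitrary heuristic I picked for illustration, not any platform's real cutoff.

```python
from difflib import SequenceMatcher

def title_similarity(resume_title, jd_title):
    """Character-level similarity ratio between two titles, 0.0-1.0."""
    return SequenceMatcher(None, resume_title.lower(), jd_title.lower()).ratio()

pairs = [
    ("Senior Software Engineer", "Senior Backend Engineer"),
    ("Senior Backend Engineer", "Senior Backend Engineer"),
]
for resume, jd in pairs:
    sim = title_similarity(resume, jd)
    # 0.8 is a made-up threshold: below it, consider rewording the title
    flag = "OK" if sim >= 0.8 else "consider rewording"
    print(f"{resume!r} vs {jd!r}: {sim:.2f} {flag}")
```

This will not tell you what a real scorer thinks, but it catches exactly the "Senior Software Engineer" vs. "Senior Backend Engineer" gap described in move 1 before the parser does.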

The platforms aren't going away. They are getting more common, not less. The candidates who learn the layer's failure modes will get more interviews. The ones who don't will quietly lose offers they were qualified for.


Free tools I built for the job search: resume-checker, job-keywords, resume-bullets. All free, all in the browser, no signup.
