TL;DR
Tinder and Zoom now offer iris-scan verification to prove users are real humans. Insurance firms report a 71% rise in AI-fraud claims. The proof-of-humanity economy has arrived, and it has reached recruitment too. AI-generated CVs are now actively filtered: CVs flagged as likely AI-generated receive 46% fewer human reviews. The CV that gets through in 2026 is the one that most clearly demonstrates a specific human wrote it about specific work. This post covers the five signals classifiers look for, the before/after that passes, and the six-point checklist to make any CV read as human.
I've been building CVPilot, an AI CV optimisation tool, and the counterintuitive finding from 2025 is that too much AI polish now hurts.
In 2023 and 2024, candidates rushed to use ChatGPT for CV bullets. Recruiters initially liked the polished output. Hiring managers, less so, once they interviewed the candidates.
By mid-2025, ATS vendors responded. Workday, Greenhouse, Lever, and iCIMS all now run LLM-style classifiers that detect AI-generated content. The telltale signals:
- Perfectly parallel sentence structures across every bullet
- Over-use of "leveraged", "orchestrated", "spearheaded", "transformed"
- Symmetric numbers (25%, 50%, 100%) appearing too often
- Absence of specific tools, projects, or named contexts
- Smooth rhythm without the natural irregularity of human writing
A 2025 Jobscan study found CVs flagged as likely AI-generated received 46% fewer human reviews, because screening tools now surface "authenticity risk" as a warning alongside keyword match.
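The signals above can be approximated with simple text heuristics. To be clear, this is a toy illustration, not how Workday or Greenhouse actually score CVs (their classifiers are LLM-based and proprietary); the function name, weights, and word list here are my own assumptions:

```python
import re
import statistics

# Toy sketch only: real ATS classifiers are LLM-based and proprietary.
# These heuristics simply mirror the signals listed above.
BUZZWORDS = {"leveraged", "orchestrated", "spearheaded", "transformed"}

def ai_suspicion_score(bullets: list[str]) -> float:
    """Crude 0..1 proxy for how 'AI-polished' a list of CV bullets reads."""
    words = " ".join(bullets).lower().split()
    # Signal: buzzword density (scaled, then clamped to 1.0)
    buzz = min(1.0, 10 * sum(w.strip(".,") in BUZZWORDS for w in words) / max(len(words), 1))
    # Signal: symmetric percentages (25%, 50%, 100%) vs. grounded numbers
    pcts = re.findall(r"(\d+)%", " ".join(bullets))
    round_pct = sum(int(n) % 25 == 0 for n in pcts) / len(pcts) if pcts else 0.0
    # Signal: suspiciously uniform bullet lengths (low relative spread)
    lengths = [len(b.split()) for b in bullets]
    spread = statistics.pstdev(lengths) / max(statistics.mean(lengths), 1)
    uniformity = max(0.0, 1.0 - spread)
    return round((buzz + round_pct + uniformity) / 3, 2)
```

Feed it a polished-sounding bullet list and a grounded one and the polished list scores higher on every signal: buzzwords, round percentages, and identical bullet lengths all push the score up.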
What proof of humanity actually looks like
Not random typos. Not casual language. Specific, verifiable texture that signals a real person wrote about real experience.
1. Specific project names and internal context
Generic: Led digital transformation initiative across multiple departments.
Human: Led the migration of our Salesforce Service Cloud instance from custom objects to standard objects. Took 11 months, involved 14 teams, replaced 6 legacy tools.
The second version contains information only someone who lived it would know. A generic LLM draft rarely produces "our" with that kind of grounding, and the specific counts read as measured rather than invented.
2. Natural irregularity in bullet length
AI CVs often have bullets of similar length (the LLM targets a consistent rhythm). Human CVs have a 20-word bullet next to a 7-word bullet next to a 34-word bullet. The irregularity is a signal.
3. Honest ranges, not symmetric numbers
Generic: Improved conversion rate by 25%.
Human: Improved conversion rate from 3.4% to 4.1% on our pricing page, based on A/B test results over 8 weeks.
Specific numbers with narrow ranges, tested periods, and context look like real measurements. Round numbers without context look like placeholders.
4. One contrarian or unglamorous detail
Every human CV contains at least one thing the candidate isn't proud of but includes for honesty. A project that didn't ship. A result that underperformed. A responsibility that was handed off.
Initially led marketing automation rollout. Transitioned ownership to dedicated team member in Q2 after we concluded the scope warranted a specialist hire.
LLMs rarely produce this kind of bullet because they're trained to maximise perceived impressiveness. Recruiters now read this kind of bullet as credibility.
5. A voice that sounds like you
If you speak plainly, write plainly. Your CV should sound like you in a meeting room, not like a press release.
Before and after that passes the filter
Before (reads as AI-generated)
Spearheaded a comprehensive digital transformation initiative, leveraging cross-functional collaboration to drive significant operational efficiencies and deliver measurable business impact across multiple stakeholder groups.
Zero specific information. Every noun abstract. Every verb inflated.
After (reads as human)
Replaced our ticketing tool (Jira Service Desk) with Freshdesk over 14 weeks. Saved £96,000 per year in licences. Took an extra 3 weeks because the original data migration underestimated custom fields.
Five signals of authenticity: named tool, specific timeframe, concrete number, honest acknowledgment of overrun, technical detail.
AI suspicion risk by CV section
| Section | Risk | What to do |
|---|---|---|
| Executive summary paragraph | High | Rewrite in your voice, first person, 3 sentences max |
| Generic skills list | High | Remove, embed skills in bullets with evidence |
| Symmetric-number bullets | Medium | Add timeframes, tools, context, honest ranges |
| Education | Low | Safe, add specific modules if relevant |
| Contact and header | Low | Include portfolio URL / GitHub |
| Certifications | Low | Keep factual |
Highest-risk section is usually the executive summary at the top. Most candidates write it last, tired, using ChatGPT. Screening tools flag this pattern more reliably than any other section.
Three practical moves
Audit every bullet for specifics. Ask: "Could another person with my job title at another company have written exactly this sentence?" If yes, rewrite with named project / tool / number / context.
Break the parallel structure. If bullets all start with verbs and have similar lengths, mix it up. One starts with a noun. One long. One short.
Include one unpolished truth. A single bullet that names a limitation, trade-off, or mid-project correction. Recruiters flag CVs that read as "flawless" because nothing in professional life is.
Your six-point checklist
- Every bullet contains at least one specific detail only you would know
- Bullet lengths vary naturally
- Numbers are specific, with context or timeframes attached
- At least one bullet acknowledges a trade-off, limitation, or correction
- The voice sounds like how you actually speak
- No phrases that appear on 1,000+ other CVs ("results-driven", "passionate about", "strategic thinker")
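The last checklist item is the only one a script can check for you. A minimal sketch, assuming an illustrative phrase list (I don't have an actual corpus of 1,000+ CVs; extend the list to taste):

```python
# Illustrative phrase list; a real check would be built from a corpus of CVs.
CLICHES = ["results-driven", "passionate about", "strategic thinker",
           "proven track record", "dynamic team player"]

def find_cliches(cv_text: str) -> list[str]:
    """Return the clichéd phrases present in the CV text."""
    lowered = cv_text.lower()
    return [p for p in CLICHES if p in lowered]
```

Run it over your CV text before the final proofread; anything it returns is a phrase worth replacing with a named project, tool, or number.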
Full guide with per-section rewrites and the exact language classifiers flag: CVPilot blog
What's the most obviously-AI bullet you've seen on a CV recently?