We run AI mock coding interviews on intervu.dev. Each session is scored across five pillars (algorithms, coding, problem solving, verification, and communication) on a calibrated 1-10 rubric.
We looked at the anonymized, aggregate patterns across completed interviews: how candidates spend their time, how much code they write, and where they lose points.
Only 52% of sessions received a Hire or Strong Hire signal. The reason isn't what most candidates expect.
Verification Is the Silent Killer
Here are the average scores across all five pillars:
| Pillar | Avg Score |
|---|---|
| Coding | 6.94 |
| Algorithms | 6.77 |
| Problem Solving | 6.26 |
| Communication | 5.87 |
| Verification | 5.65 |
Verification (testing your own code, walking through examples, catching edge cases) is the weakest pillar across the board.
Most candidates write code that works and then stop. They don't trace through examples. They don't check boundary conditions. They don't walk through their logic before hitting "Run."
The good news? Verification is the easiest pillar to improve. You don't need to learn a new algorithm. Just build the habit of saying "Let me trace through this with a concrete example" before you call it done.
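That habit can be made concrete with a few quick self-checks before hitting "Run." A minimal sketch, using a hypothetical Kadane's-algorithm solution (the problem and code are illustrative, not from the dataset): one normal case, then the boundaries.

```python
# Illustrative only: a hypothetical interview solution plus the quick
# self-checks a candidate might run before calling it done.

def max_subarray(nums):
    """Kadane's algorithm: maximum sum of a contiguous subarray."""
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)       # extend the run, or start fresh at x
        best = max(best, cur)
    return best

# Verification pass: trace one normal case, then check the boundaries.
assert max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) == 6  # normal case
assert max_subarray([5]) == 5                              # single element
assert max_subarray([-3, -1, -2]) == -1                    # all negative
```

Thirty seconds of this is usually enough to catch the off-by-one or the missed edge case before the interviewer does.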
The Biggest Gap Between Hire and No Hire Isn't Coding
When we split the data by outcome, verification isn't just the weakest pillar. It has the widest gap:
| Pillar | Hire Avg | No Hire Avg | Gap |
|---|---|---|---|
| Verification | 7.06 | 4.36 | 2.70 |
| Algorithms | 8.12 | 5.82 | 2.30 |
| Coding | 8.12 | 6.27 | 1.85 |
| Communication | 6.94 | 5.18 | 1.76 |
| Problem Solving | 7.25 | 5.73 | 1.52 |
A 2.7-point gap on a 10-point scale. Candidates who get the Hire signal score more than 60% higher on verification than those who don't.
The single biggest differentiator between a Hire and a No Hire is whether the candidate tests their own work.
Writing correct code is table stakes. Everyone studies LeetCode. Proactively tracing through your solution, spotting edge cases, and catching your own bugs before being asked? That's what most people skip.
No Hires Spend More Time Coding, Less Time Testing
We looked at how candidates actually spend their time during the interview:
| Metric | Difference |
|---|---|
| Time spent coding | No Hires spend 2.4x more than Hires |
| Time spent testing | Hires spend 68% more than No Hires |
No Hire candidates spend 2.4x more time coding and about 40% less time testing.
The pattern is consistent: No Hire candidates jump into code, get stuck, iterate, get stuck again, and run out of time before testing. Hire candidates arrive at working code faster and spend that saved time on verification.
They're not faster coders. They plan better.
More Code ≠ Better Code
No Hire candidates consistently write ~15% more code than Hires. Longer solutions are messier, have more edge cases to miss, and are harder to trace through.
Clean, concise code is itself a form of verification. Less surface area, fewer bugs.
Finishing Is a Skill
A lot of candidates abandon interviews partway through. They get stuck, or the time pressure gets uncomfortable, and they quit.
Finishing a full interview is a skill that requires practice. If you regularly quit mock interviews midway, the interview itself, not just the algorithm, is what you need to work on.
Three Data-Backed Changes to Make
1. Practice the Dry Run
After writing your solution, trace through it with a specific input. Say the variable values out loud. Check the boundaries. This one habit is the biggest differentiator we found.
"Let me walk through this with [2, 7, 11, 15] and target 9..."
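In code, that dry run might look like the following. This is a sketch assuming the standard hash-map approach to Two Sum (the article names only the input, not the solution), with the spoken trace written out as comments:

```python
# A dry run made concrete: hash-map Two Sum (a standard approach, assumed
# here for illustration) traced on nums=[2, 7, 11, 15], target=9.

def two_sum(nums, target):
    seen = {}  # value -> index of where we saw it
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []

# Trace, out loud:
#   i=0, x=2, need 7, not seen yet -> store {2: 0}
#   i=1, x=7, need 2, seen at index 0 -> return [0, 1]
assert two_sum([2, 7, 11, 15], 9) == [0, 1]
```

The point isn't the code; it's narrating the variable values at each step so both you and the interviewer see the logic hold.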
2. Design Before You Code
Hire candidates arrive at working code faster, not because they skip design, but because they're efficient about it. State your approach in 2-3 sentences, confirm the complexity, and start writing. Don't wait for the perfect plan.
3. Write Less, Not More
If your solution is getting long, step back. Long solutions have more bugs, take longer to debug, and are harder to verify. A clean 30-line Python solution is almost always better than a tangled 50-line one.
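A hypothetical side-by-side (invented for illustration, not drawn from the dataset): two correct solutions to the same small problem. Both pass, but the concise one leaves far less to trace and verify.

```python
from collections import Counter

# Two correct ways to find the most frequent element in a list.
# The verbose version has more lines, more state, and more places to slip.

def most_frequent_verbose(nums):
    counts = {}
    for x in nums:
        if x in counts:
            counts[x] += 1
        else:
            counts[x] = 1
    best_val, best_count = None, 0
    for val, count in counts.items():
        if count > best_count:
            best_val, best_count = val, count
    return best_val

def most_frequent_concise(nums):
    # Counter does the bookkeeping; one line to trace instead of twelve.
    return Counter(nums).most_common(1)[0][0]

sample = [3, 1, 3, 2, 3, 1]
assert most_frequent_verbose(sample) == most_frequent_concise(sample) == 3
```

Knowing your language's standard library is what makes the concise version available to you under time pressure.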
Methodology
All data is fully anonymized. No individual sessions, users, or identifying information were used. Analysis was performed on aggregate metrics only.
The thing that separates Hires from No Hires isn't algorithm knowledge or coding speed. It's whether you test your own work. And that's the easiest thing to practice.
If you want to try it yourself, intervu.dev runs AI mock interviews with signal-based feedback across all five pillars. You can start from the Grind 75 pathway or practice any LeetCode problem as a full mock interview.
