Since I started working in the software industry over 11 years ago, there's been one constant in technical hiring: the coding interview. Every company might tweak their process—some add system design rounds, others throw in culture fit sessions—but the coding interview has been universal.
That's changing fast.
Now that an LLM can generate nearly correct code in seconds, the traditional coding interview has become trivially easy to game—and therefore obsolete. This is especially obvious in remote settings, where candidates can—and let's be honest, do—just ChatGPT their way through problems.
But the problem runs deeper than just cheating.
The fundamental flaw that LLMs exposed
Coding interviews were always broken. We just didn't want to admit it.
They weren't testing problem-solving skills. They were testing how well someone had memorized LeetCode problems. After all, interviewers weren't creating original algorithmic challenges—they were pulling from the same pool of problems everyone else uses.
At best, these interviews showed whether someone could convert thoughts into executable logic. But here's the thing: with LLMs handling that exact same task, what skill are we actually measuring?
The bigger issue is that coding interviews were never good predictors of job performance. Real software engineering requires skills that a 45-minute algorithm session doesn't touch:
- Understanding complex existing codebases
- Learning new technologies quickly
- Actually finishing projects (not just starting them or fucking around halfway through)
- Making architectural decisions that won't bite you later
- Working effectively with other humans
Why LLMs killed coding interviews completely
LLMs have exposed two fatal flaws in traditional coding interviews:
Remote interviews are pointless. There's no reliable way to prevent AI usage, and frankly, why would you want to? You can be almost 100% sure candidates are just going to ChatGPT it anyway—can't really blame them.
In-person interviews test the wrong skills. Even if you force candidates into a conference room, you're asking them to manually code solutions they'd use AI for on the actual job.
You could insist on whiteboarding, but then you're testing penmanship and memorization in an era where the job requires AI collaboration.
The obvious solution nobody wants to admit
LLMs aren't the death of technical assessment. They're actually the best thing that's happened to hiring.
Here's why: a software engineer's job is to engineer software. We only settled for coding interviews because you couldn't build real projects in interview timeframes. Coding interviews were a proxy for the actual work—building complete systems.
With LLMs, that constraint is gone. You can now build a working project in hours.
So why use a proxy when you can test the real thing?
My proposal: Scrap coding interviews entirely. Give candidates a full project to complete, with the explicit expectation that they'll use AI tools.
What this actually looks like
I'm not talking about take-home assignments that sit in someone's backlog for weeks. I'm talking about time-boxed project builds that mirror real work.
And when I say "full project," I don't mean building the next Google Maps. I mean something focused but complete—like a simple blogging platform that takes markdown files and hosts them with switchable themes. Or better yet, let the candidate choose.
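To make "focused but complete" concrete, here's roughly the scale I have in mind for that markdown-hosting idea. This is my own toy sketch, not a spec—the function names and the tiny markdown subset (headings and paragraphs only) are illustrative, and a real candidate would reach for a proper markdown library:

```python
import re

# Toy "switchable themes": just inline CSS blocks keyed by name.
THEMES = {
    "light": "body { background: #fff; color: #111; }",
    "dark": "body { background: #111; color: #eee; }",
}

def md_to_html(md: str) -> str:
    """Convert a naive markdown subset (#/## headings, paragraphs) to HTML."""
    html_parts = []
    for block in md.strip().split("\n\n"):
        block = block.strip()
        heading = re.match(r"(#{1,2})\s+(.*)", block)
        if heading:
            level = len(heading.group(1))
            html_parts.append(f"<h{level}>{heading.group(2)}</h{level}>")
        else:
            html_parts.append(f"<p>{block}</p>")
    return "\n".join(html_parts)

def render_post(md: str, theme: str = "light") -> str:
    """Wrap converted markdown in a full page using the selected theme."""
    css = THEMES.get(theme, THEMES["light"])
    return (
        f"<html><head><style>{css}</style></head>"
        f"<body>{md_to_html(md)}</body></html>"
    )
```

A candidate pairing with an LLM can get from this skeleton to a served, themed site in an afternoon—which is exactly what makes the format a realistic work sample rather than a multi-week assignment.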
There's a common product interview question: "What's your favorite product and how would you improve it?" You could adapt this for engineering: "Pick your favorite product and build a barebones version—a proof of concept."
This approach is especially valuable for startups. Most early-stage work is rapid POC development. You need people who can build fast without overthinking and overengineering. This format reveals whether a candidate can make progress in ambiguous situations or if they'll get stuck internally debating which fucking datatype to use for a database column.
Here's what changes:
You see real engineering skills. How do they set up their development environment? How do they structure code for maintainability? How do they handle dependencies, build systems, maybe even basic deployment? These are the messy realities of actual software work that coding interviews completely ignore.
You evaluate AI collaboration. Like it or not, we're not going back. Even if AI only improves productivity by 50%, that's a massive competitive advantage. You want to hire people who are exceptionally good at human-AI collaboration, not people who can solve tree traversal problems from memory.
You get system design for free. A complete project forces architectural decisions. Which database? Why? How do you structure the code for extensibility? How do you handle error cases? These decisions reveal thinking that's impossible to assess in isolated coding problems.
Why this wasn't practical before (and why it is now)
The old constraints were real:
- Time: Non-trivial projects needed weeks, not hours
- Candidate dropout: Working professionals couldn't spend weekends building speculative projects
- Interview throughput: You could do multiple coding interviews per day, but not multiple project builds
LLMs changed the math. What used to take a week now takes hours.
Plus, the time investment isn't actually worse than the current system. Most companies already put candidates through 4-5 coding rounds, and since remote interviews aren't practical anymore (thanks to AI cheating), these end up being in-person.
Do the math: 5 hours of interviews plus 1-2 hours of travel time each way. If you're in Bangalore, make that 3+ hours in traffic. You're already looking at a full day commitment for the traditional process.
A single project-based assessment that takes 4-6 hours? That's actually more efficient than the current marathon of multiple coding rounds.
What I'm seeing in practice
Some companies are already moving in this direction, though most won't admit it publicly. The ones I've talked to report better hiring outcomes—fewer mismatches, better correlation with on-the-job performance, and candidates who actually understand what they're signing up for.
The resistance isn't technical; it's cultural. We've been doing coding interviews for so long that they feel "rigorous" even when they're not predictive.
The companies that adapt first win
Traditional coding interviews are ending whether we acknowledge it or not. Candidates are already using AI in remote interviews. The choice isn't whether to allow AI—it's whether to design assessments that account for it.
Companies that switch to project-based assessments will attract better candidates and make more accurate hiring decisions. Those clinging to LeetCode-style problems will find themselves testing memorization skills while missing the qualities that actually predict engineering success.
The future of technical interviews isn't about fighting AI. It's about embracing it to create better, more realistic assessments of what engineering actually involves.
This isn't just a trend—it's the new reality. The question is whether your hiring process will adapt or become irrelevant.