I LOVE a good technical challenge. There is something satisfying about solving a complex problem, optimizing a data structure, or shaving off a few milliseconds of latency. But this past summer, I found myself back in the job market, and let me tell you… Things have changed.
The industry has split into two distinct realities.
On the one hand, I interviewed with established enterprise giants whose processes felt like second nature to me. I was solving algorithmic puzzles and showcasing my understanding of the DOM and general web development skills. On the other hand, I interviewed with startups where the interviewer essentially handed me the keys to GitHub Copilot and said, "Build this feature. You have 45 minutes. Go."
It made me feel a bit like an "old man yelling at the sky," but it also raised some serious questions about where our industry is headed. Are we testing for engineering skills, or for subscription tiers?
The Great Bifurcation
If you've been interviewing lately, you've probably felt this whiplash. The data backs it up; we are seeing a dramatic bifurcation in how companies hire.
The Enterprise Players
The big players are committed to the LeetCode-style interview. Why? Because they are terrified of AI impostors.
When I sat for these interviews, it was all about "Proof of Work". They wanted to know that I possessed the raw cognitive bandwidth to manipulate logic without a robot whispering the answer in my ear. And honestly? I get it. With tools like ChatGPT, it's easier than ever to fake competence. In fact, 81% of interviewers at Big Tech companies suspect candidates of using AI to cheat during remote sessions.
So, the LeetCode grind isn't going anywhere. It's the safeguard against the "AI-powered poser."
The Startups
Then there are the startups. These folks have embraced our new machine overlords. They aren't looking for someone who can write a linked list from memory; they want "AI Editors" and "Sense-Makers".
In these interviews, the test wasn't syntax or memory; it was my ability to prompt, debug, and integrate. It ended up being fun, but also frantic. I had to get over the mindset of "showing" or "explaining" my work. I wasn't there to show that I knew the native functions and behaviors of the DOM. Instead, there was a massive expectation of speed. Because AI tools are supposed to make us 10x developers, the interview pacing was often set to "warp speed," expecting me to blaze through boilerplate code.
The Hidden Barrier to Entry
Here is the part that really worries me, and it's something we aren't talking about enough: THE COST.
In the traditional interview, all you needed was a brain and a whiteboard (or a laptop). Today, for these startup interviews, you are often expected to bring your own AI tools.
The startups aren't always providing enterprise seats for these AI tools during the interview process. So, are you on ChatGPT's free tier? Or are you shelling out $20/month for Plus? Do you have a personal GitHub Copilot subscription?
It introduces a subtle but real economic barrier. Is your success in an interview dependent on whether you can afford Claude Code Max 20x ($200/month) vs. Cursor Pro ($20/month)? If my model hallucinates because I'm on a lower tier, and I didn't catch it, did I fail the interview?
Startups are essentially asking candidates to pay for the privilege of being efficient enough to get hired.
The "Home Court Advantage" Problem
Some companies, like Meta, are addressing this inequality by introducing standardized interviews that use the same AI tool for everyone. On paper, this sounds fair. It levels the playing field and removes the cost barrier.
But does it?
We all have our "fine-tuned" setups. You may have a CLAUDE.md or AGENTS.md file. We are "pros" with our tools. Throwing a developer into a standardized AI environment, like Meta's CoderPad setup, is like handing a race car driver a rental sedan and asking them to set a lap record. You might know how to drive, but you don't know this car.
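For context, a CLAUDE.md (or AGENTS.md) is just a plain instructions file the coding agent reads before it touches your project. The snippet below is a hypothetical sketch of the kind of context I mean; the stack and conventions are illustrative, not from any real codebase:

```markdown
# CLAUDE.md — project context (hypothetical example)

## Stack
- Next.js (App Router), TypeScript strict mode, Tailwind

## Conventions
- Prefer server components; only add "use client" when state or effects are needed
- All data fetching goes through src/lib/api.ts — never call fetch() directly in components
- Run `npm run lint && npm run test` before declaring a task done

## Things to avoid
- Do not add new dependencies without asking
- Do not touch generated files in src/__generated__/
```

None of that transfers to a locked-down interview environment, which is exactly the point: a lot of the "pro" part of the workflow lives in files like this, not in how fast you can type prompts.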
From my own experience working at Meta, I found that even their internal tool, Metamate, struggled to handle complex tasks on the actual Facebook codebase. It often felt like I was correcting everything it output rather than it making my life easier. Anyway, if you've never used their specific flavor of AI before, are you ready to use it like a pro in a high-stakes, 60-minute interview? Probably not.
Try Before You Buy
Finally, this brings me to the Paid Work Trial.
Startups are increasingly asking candidates to join the team for a few days to work on actual production tickets. In theory, there is some merit to this. It gives you a really good idea of how well you'll click with a team, and it mitigates the risk of a "false positive" hire.
But let's look at the logistics. What if you already have a job?
Can you really take 3 to 5 days off to work somewhere else? That's a massive time commitment that privileges people who are currently unemployed or have incredibly flexible schedules. Hell, does your current employer even allow you to moonlight at another job? Most employment contracts have clauses that prevent us from working for another entity, even for a short "trial." The Paid Work Trial can require us to breach our current contract just to potentially secure a new one.
tl;dr Everything is Awful
I wish I had a magic answer, but the reality is that for the foreseeable future, we're going to have to be bilingual.
- Keep your fundamentals sharp: The "old way" isn't dead. We still need to prove we understand the code well enough to audit the AI's output.
- Master (Generic) Prompts: Don't just rely on your custom configs. Get good at "Prompt Engineering" in a vanilla environment, because you might not get to bring your own configs to the interview.
- Protect your time: If a company asks for a multiday trial, ensure it is paid at market rates. Don't work for free.
It's a weird time to be interviewing, caught between the LeetCode grinders and the vibe-coding speedrunners. But hey, at least it keeps things interesting, right?
I'd love to hear your horror stories (or success stories!) from the 2025 interview circuit. Are you seeing more work trials? More AI? Let me know!
Top comments (3)
As the owner of a company, I've struggled with this too. I ultimately want people to possess both good analytical skills themselves AND the ability to be effective with AI.

In terms of the cost barrier to entry, I think of this like any other craft. If you are a carpenter, you should own a drill. A chef should own their own set of knives. I would expect an engineer to have a good machine of their own and preferences in terms of their tooling. If you don't have a subscription to an AI service, I'm going to question your ability to be effective with it.

I use AI to code every single day, and I can tell you that I've for sure gotten better at it over time, and I'm still learning more and more about how to get the best-quality work out of my tooling in the shortest amount of time. If you were planning to just hop into an interview and vibe code with whatever tooling they decide to let you use, I'd say that's a recipe for disaster. I WANT your opinions and tooling. Especially this early in the AI game, I am expecting new ideas from engineers on the best way to work. I'm not sure how that happens if you're not curious enough to do some serious tire kicking with these tools on your own.
If you value the startup and its potential highly enough, then taking 3-5 days off for a work trial is okay; you'll find a way to do it. Joining a startup requires that sort of problem-solving skill. And if the trial is successful, the candidate must be paid full salary as per the offer letter. If the startup is convinced of the person's skills, they can pay up to 50% for that work trial irrespective of the outcome.
I have to push back on this. Framing this as a "problem-solving skill" ignores the massive risk asymmetry. You're asking candidates to burn PTO and return to a backlog of work for something that isn't guaranteed. Unless there is a signed contingent offer and you're the sole candidate, this is likely just a "bake-off" against others. Companies (startups or enterprise) will always do what's best for them; we need to do the same. Risking stable employment for a 50% paid trial is rarely a smart move. And this doesn't even begin to cover the potential legal risks developers may be taking on by doing this!