The road to hell is paved with good intentions
Just had a technical take-home test for a position I've applied for.
I know many senior folks who, in better days - maybe even today, who knows - refused to take any sort of assignment as part of their candidacy flow.
None whatsoever.
"I'm a senior with 15+ years of experience", they reasoned, "my abilities are proven by my tenure in the industry".
Personally, I don't think years of tenure are a valid metric of skill, but I always encouraged those folks to stick to their beliefs... it meant folks like myself, a little more modest, a little less vain, had the shot those folks forfeited - a win-win situation. 😁
But today's test has me questioning the industry.
Questioning the industry's ability to gauge candidates correctly, its ability to evolve, ability to admit its mistakes.
The industry's willingness to learn and better itself.
We do, after all, consider ourselves the industry of meritocracy, of disrupt, of change, of the future.
Well, the prevalence of tests like the one I had to take today makes me doubt that.
You see, the test I took consisted entirely of LeetCode (often styled “L337C0d3”, and from now on in this post, LC) programming puzzles.
And puzzles they are.
The thing about LC puzzles - the empirically demonstrated thing - is that they are worse than useless as a measure of a candidate's potential to perform well in the day-to-day production setting of a modern software development/engineering department; in fact, they produce noise that pollutes the signal!
How so?
Candidates can train for LC puzzles.
There are literally countless sources dedicated to showing how to tackle these puzzles, all exploiting the fact that LC puzzles rely on some clever trick that, once recognized, allows for a swift, efficient solution.
Once a candidate has memorized the finite list of tricks, learned to recognize each, and memorized the corresponding "trick buster" implementation, all that's left is to "parrot" the solution and they're done.
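To make that concrete, here's a sketch of what such a "trick" looks like. This is the classic two-sum-style puzzle - my own illustration, not taken from the actual test I sat: the "trick buster" is to replace the obvious brute-force double loop with a single pass over a hash map.

```typescript
// Classic "find two indices whose values sum to a target" puzzle.
// The memorizable trick: instead of the O(n^2) nested loop, keep a
// map of value -> index seen so far and look up the complement in O(1).
function twoSum(nums: number[], target: number): [number, number] | null {
  const seen = new Map<number, number>();
  for (let i = 0; i < nums.length; i++) {
    const complement = target - nums[i];
    const j = seen.get(complement);
    if (j !== undefined) return [j, i];
    seen.set(nums[i], i);
  }
  return null; // no such pair exists
}
```

Once you've memorized this pattern, every variation of the puzzle falls in minutes - which is exactly why it measures preparation, not engineering.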
Further, LC puzzles are usually self-contained: one function, one optimal implementation.
There’s no software engineering involved in solving them. No tradeoffs (e.g., the “trick buster” dictates what data structures to use). No ambiguous, dodgy spec document (to the contrary, LC puzzle descriptions are some of the best-specified documents I’ve ever seen, right down to the types and sizes of the input arguments, the expected outputs, and the system’s memory/storage/computation constraints).
And, since those are automated systems that score a candidate’s solution by the number of unit tests it passes - public and private, hidden, edge-case ones - there’s no chance of discussion between candidate and evaluator.
It’s an evaluation performed not even by an LLM, but by a crude pass/fail “bean counter”: the more tests you passed, the better the submitted solution must be.
And the platforms for running those tests?!
"You must have a webcam turned on and facing your screen for the duration of the test - taking your eyes off your own screen is an automatic disqualification”.
“You must have your browser in "Full screen" mode - taking your browser off of full screen mode will result in an automatic disqualification”.
“You can't copy-paste, this functionality has been disabled. Finding a way to circumvent this will result in an automatic disqualification”.
The list of constraints and limitations goes on and on.
Any infringement of those constraints is referred to as “cheating”, and in the end report you are scored, among your professional capabilities, on “honesty”.
Now, maybe it’s me, but I find it degrading and demeaing that I’m even considered a potential cheat, and then given an “attaboy” for not.
I mean… going in to a test, a stressful enough situation as it is, knowing the people administering the test suspect you of cheating, a-priory, and it’s on you to prove them wrong - that is, come to think of it, humiliating!
I also wonder what the test administrators think they’re actually testing for. What “signal” do they think these tests produce?
Because I'll tell you what they do test: not my coding abilities, not my software engineering skills.
Nope.
What LC puzzles actually test, especially being denied access to Google and LLMs, and under strict time pressure, is my memory.
Period!
Ask any software developer, especially a senior one, to implement a simple but not trivial task - every single one of them will turn to the following tools for help, in order:
IntelliSense
LLMs
Software development Q&A sites (you know which one).
Official documentation for the language/framework/technology
Exactly the tools these stupid test platforms deny the candidate - some don't even have IntelliSense turned on: a glorified online notepad!
I wish I was joking, but sadly, I'm not.
The more senior you become, the less you memorize specific APIs and commands; instead you hone your engineering skills: the trade-offs of different data structures, tightening a loop by using collection functions instead of imperative constructs, managing state.
Implementation details are exactly that, details - that’s what all the above mentioned tools are for.
I don’t care to remember the exact syntax of a “forEach” function - or maybe it’s called “map” in this particular programming language?
I just know I’d be better off using it than implementing an imperative “for” loop by hand, but without access to my tools of the trade I’m forced to use what my pressure-blacked-out brain knows by heart - exactly the sub-optimal, error-prone, hand-rolled imperative loop!
And by the way... my code did pass all the unit tests!
I mean, sure, I submitted my code “on the buzzer” (each question has a time limit, ‘cause why not pressure the candidate some more?!), and the submitted code is hand-rolled, unmaintainable, imperative, non-idiomatic, sub-optimally performing, uses obsolete APIs, and wouldn't pass a real-world, production-grade code review.
But it did pass all the unit tests, including the private, hidden, ones.
Guess that means I'm a great software developer and engineer.
I'll be expecting your offer, at premium rates, any day now.
My way
"So, mister smarty-pants, if you're soooo smart, how would you have us evaluate candidates?"
Wow, I'm so happy you asked.
A little unexpected, I didn't prepare anything, but let me see what I can come up with on the spot.
Well, to begin with - ditch those stupid LC puzzles and platforms!
I know, they're easy: you select a bunch of questions from the platform’s literally endless supply, the platform's automated judge gives a pass/fail score to each question, crunches all the numbers for you, and displays the top N percent of candidates (possibly also factoring in speed of completion - ‘cause there’s nothing suspicious about a candidate submitting a perfect solution to a puzzle under 2 minutes after the clock starts. Everything’s kosher).
Easy.
Fast.
Convenient.
Requires you to spend no time whatsoever personally getting to know the candidate.
Wrong!
You want to evaluate a candidate's real software engineering skills - ask them to engineer some software.
I know… what a total 🤯!
Have the candidate implement a small feature you've already had running in production for years - no one likes doing free work, or even feeling like they are!
And I mean small: an optimal solution will have at most 2-3 classes, each with 2-3 methods.
Already you've got a signal about the candidate:
How they organize their code
What naming conventions they use - robust and legible, or the dreaded `int i`
Whether they bothered adding tests, and if so, whether those tests were high-quality or basic, low-hanging fruit
Isn’t this kind of signal already tenfold better than LC puzzles? Of course it is!
But we’re just getting started!
Let the candidate take their time... no time constraint, no pressure.
You keep interviewing candidates all the while, right up until one has signed the contract.
If a candidate is too lax, taking their time - that's their problem. The opportunity will slip right through their fingers.
Also, let the candidate use whatever tools they feel like.
You put it in the assignment spec: "Use whatever tools you want. For all we care, if you can summon the occult to help you with the task - go right ahead, and don't forget to draw a pentagram on the floor first!"
You'd be surprised what a good chuckle where you least expect it will do to a candidate's morale.
By the way, speaking of "the spec", the assignment's actual text, the one the candidate needs to implement - make it vague on purpose. Omit key details. Leave out acceptance criteria for certain test cases.
You know, like the spec your own PM hands you. Every... single... sprint.
And, you know what? You don't have to be super-responsive either when - if - the candidate comes a-knocking on your email box’s door asking for clarifications. Excellent signal if they do; excellent negative signal if they don't.
Is your PM just waiting at your beck and call? No?! What a shocker.
You don’t have to be at the candidate’s either. Just make sure to get back to them eventually.
In essence, you extend the candidate the same respect, and expectations, you will once they sign the contract and come onboard your engineering team.
Once the candidate has submitted their work, have your LLM of choice review it.
What for, what are the review criteria, you decide - only you know what skills and engineering best practices make your team vibe like the pit crew of a premium F1 championship team.
With the candidates who pass the LLM's verdict, have a long discussion regarding their solution.
This is the time to find out whether they truly understand the code they submitted, what made them make those design choices, those data-structure compromises - the engineering.
Again, I can't tell you how to run such a conversation, but experience taught me that when facing another like-minded engineer, the conversation flows of its own accord.
A jittery conversation, especially if the candidate is senior and has been around the block, is in itself an excellent signal. A negative one, but excellent all the same.
After the round of personal, technical, interviews, the only real hard decision left for you to make is which of the candidates who made a positive impression gets the offer.
Hey, that's why you're getting paid the manager's salary... big bucks for big decisions.
Yes, this flow is time consuming.
It's not easy.
Nor is it convenient.
It will eat into your already tight schedule (another deadline was moved up a week?! Geez!)
It will require you to interact with the candidates - plural - for an extended period of time (in the process giving you insight into their personality, not only their coding skills. How about that?!)
It’s definitely not fun.
But… you need to ask yourself what you are optimizing for when recruiting - your own momentary leisure, or your team's future success?
Like I said, big bucks for big decisions.
It's a cultural thang
As I conclude this post I'd like to say to those companies that in 2025, coming on 2026, still use LC puzzles, on restrictive platforms - don't call me, I won't call you.
Yes, the economy is in the gutter right now, and coming by interview opportunities is harder than winning the PowerBall. Still, that's fine; you do your thing, I’ll do mine.
I'm probably not a good cultural fit for you anyway - I like airing my grievances, not just pretending everything is fine, smiling and waving my hand.
I speak up when I feel I've been wronged, and I don't mince my words either.
I also like working with, and for, smart people, people that appreciate advanced, modern, technology, and know how to take advantage of it.
I don't like dinosaurs, never got the fascination.
And I especially don't like being judged unfairly, by lazy folks too complacent to step up their own game, and put in the work.
We’ve already established I am a technical fit - all my solutions to your LC puzzles passed all unit tests.
But culture is as important - and I don’t like yours!
KTHXBYE