O.F.K.
2000 just called, it can't believe you're STILL using L33tC0d3!

The road to hell is paved with good intentions

Just had a technical/take-home test for a position I've applied for.

I know of many senior folks who, in better days (maybe even today, who knows), refused to take any sort of assignment as part of their candidacy flow. None whatsoever. "I'm a senior with 15+ years of experience," they reasoned, "my abilities are proven by my tenure in the industry."

Personally, I don't think years of tenure are a valid metric of abilities, but I always encouraged those folks to stick to their beliefs... it meant folks like myself, a little more modest, a little less vain, had the shot those folks forfeited - I'd say a win-win situation. 😁

But today's test has me questioning the industry's ability!

The industry's ability to gauge candidates correctly. The industry's ability to evolve. The industry's ability to admit its mistakes.

Also, the industry's willingness to learn and better itself.

We do, after all, consider ourselves the industry of meritocracy, of disrupt, of change, of "the future"(™️).

Well, the prevalence of tests like the one I had to take today makes me doubt that. Greatly.

You see, the test I took, in its entirety, was made up of what have long been known as L337C0d3 programming puzzles.

And puzzles they are.

The thing about LC puzzles, as I shall refer to them from now on, is that they are worse than useless as a measure of a candidate's potential to perform well in a day-to-day production setting in a modern software development/engineering department... these puzzles actually introduce noise - they pollute the very signal that testing is supposed to give!

What do I mean by that?

LC puzzles can be trained for: there are literally countless books, websites, and Instagram/TikTok accounts all dedicated to showing the "client" how to tackle these kinds of puzzles.

Since the format of all LC puzzles is quite the same, and the way to approach them is also quite the same, once you recognize which of a finite number of tricks the specific question you're facing belongs to, solving it is a matter of implementing the "trick buster" in code - a "trick buster" being a clever algorithm that cuts down the space and time consumed, the two hard limits in any LC puzzle, while producing the correct response.

I did say clever.

But memorizing the enumeration of trick categories, the "trick buster" for each, and its implementation - isn't.

It's a test of a candidate's memory - sure. Just not of their ability to code, to design a software solution, to understand specs (one thing I will give LC puzzles: their descriptions are, almost always, pristine. I wish I had been handed specification documents as good as the average LC puzzle's description throughout my career. If only...)

And the platforms for running those tests? Ha!

Well, companies are not unaware of the times we live in - Googling is a thing of the past; today we have LLMs, which are getting better and more accurate with every iteration.

We do live in wonderful times to be a software developer!

Only not when looking for a job, taking take-home tests.

"You must have a webcam turned on and facing your screen for the duration of the test - taking your eyes off your own screen is an automatic disqualification.

You must have your browser in "Full screen" mode - taking your browser off of full screen mode will result in an automatic disqualification.

You can't copy-paste, this functionality has been disabled. Finding a way to circumvent this will result in an automatic disqualification."

The list of constraints and limitations goes on and on.

I always wonder, when faced with this type of test, what the company administering it believes it actually tests.

Because I'll tell you what they do test - not my coding abilities, nor my software engineering abilities... oh, no.

What they actually test when they forbid me the use of Google, of LLMs, is my memory of the programming language's APIs.

Period!

Ask any software developer, especially a senior one, to implement a simple but not trivial task, and they will invariably turn to the following tools for help, in order:

  • IntelliSense - if the language can guide us by itself, that's best. Bringing in any external tool will require time and effort.
  • LLMs.
  • Software development Q&A sites (not going to name the one we're all thinking about for obvious reasons, but we are all thinking the same name).
  • Official documentation for the language/framework/technology.

In that order!

Exactly the tools these stupid test platforms deny the candidate (some don't even have IntelliSense turned on - just an online notepad. How I wish I were joking, but sadly, I'm not).

Like I said, denied all the tools I use daily to "Get things done", what I'm actually being tested on by these platforms, taking LC puzzles, is my memory: of the language's APIs, and of the "trick busters".

Way to go recruiting company!

Undoubtedly, the "signal" these platforms give you about my ability to understand the spec document the PO released two days before sprint start, and to turn that spec into functioning, well-structured, maintainable, idiomatic, performant code, is overwhelming.

How's your false positive rate, by the way, since you started using these platforms as your evaluation tool?

High, I'd bet. Probably a cause for concern for your engineering department's mid-level managers... you know, the ones who don't get to decide which tools to use in candidate evaluations, but are stuck with whichever candidates actually make it and get hired to do the actual line-of-business, daily-grind, complex, can't-train-for-it-'cause-it's-unpredictable coding.

Ah, by the way... my code did pass all the unit tests!

I mean, sure, it took me all the allotted time for the question to submit my response, which is a steaming pile of unmaintainable, imperative, non-idiomatic, sub-optimally performing, obsolete-API-using code - code that wouldn't pass a code review even if I bribed the reviewing colleague with... I don't know, something big.

But it did pass.

Guess that means I'm a great software developer.

I'll be expecting your offer, at premium rates, any time now.

My way

"So, mister smarty-pants, if you're soooo smart, how would you have us evaluate candidates?"

Wow, I'm so happy you asked.

I mean kinda unexpected, I didn't prepare anything, but let me see what I can come up with on the spot.

Well, to begin with - ditch those stupid LC puzzles and platforms!

I know, they're easy: you select a bunch of questions from the endless supply, the platform's automated judge gives a pass/fail score to each question, does all the math for you. All you have to do is select * from candidates where test_score >= 90... I mean, you enter the number "90" in the UI textbox, the platform even runs the query for you.

Like I said... easy. Fast.

Convenient.

Wrong!

You want to evaluate a candidate's real software engineering ability - ask them to engineer some software!

I know, what a 🤯

Ask them to implement a small feature you've already had running in production for years (no one likes doing free work, or even feeling like they are!)

And I mean it when I say small: 2-3 classes tops, each with 2-3 methods.

Already you've got a signal about how the candidate organizes their code, what naming conventions they use, and whether they bothered adding tests (for such a small feature, adding unit tests as part of the deliverables seems to me not only plausible, but outright required).
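
To make "small" concrete, here's roughly the scale I have in mind, sketched in Python with invented names (a discount-policy feature is my hypothetical example, not a real assignment): one tiny class, a couple of methods, and the unit tests the candidate ships alongside it.

```python
import unittest


class DiscountPolicy:
    """Applies a percentage discount to orders at or above a threshold."""

    def __init__(self, threshold: float, percent: float):
        self.threshold = threshold
        self.percent = percent

    def qualifies(self, order_total: float) -> bool:
        return order_total >= self.threshold

    def final_price(self, order_total: float) -> float:
        if self.qualifies(order_total):
            return order_total * (1 - self.percent / 100)
        return order_total


class DiscountPolicyTest(unittest.TestCase):
    def test_discount_applied_at_or_above_threshold(self):
        policy = DiscountPolicy(threshold=100, percent=10)
        self.assertAlmostEqual(policy.final_price(200), 180.0)

    def test_no_discount_below_threshold(self):
        policy = DiscountPolicy(threshold=100, percent=10)
        self.assertAlmostEqual(policy.final_price(50), 50.0)
```

That's the whole deliverable - run with `python -m unittest`. Small enough to review in minutes, yet it already exposes naming, structure, and testing habits that no LC puzzle ever will.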

Isn't this signal already tenfold better than LC puzzles? Of course it is, but we're far from done!

Let the candidate take their time... no time constraint, no pressure.

You continue interviewing candidates the whole time, right up until one has signed the contract.

If a candidate is too lax, taking their time - that's their problem. The opportunity will slip right through their fingers.

Also let the candidate use whatever tools they feel like.

You put it in the assignment spec: "Use whatever tools you want. For all we care, if you can summon the occult to help you with the task - go right ahead, and don't forget to draw a pentagram on the floor first!"

You'd be surprised what a good chuckle where you least expect it will do to a candidate's morale.

By the way, speaking of "the spec", the assignment's actual text, the one the candidate needs to implement - make it vague on purpose. Omit key details. Leave out acceptance criteria for certain test cases.

You know, like the spec your own PM hands you. Every... single... sprint.

And, you know what? You don't have to be super-responsive either, when (if) the candidate comes a-knocking on your email box asking for clarifications - an excellent signal if they do, an excellent negative signal if they don't. I mean... is your PM just waiting for your messages, replying before you've even had a chance to blink?! I've never met that kind of unicorn. Guessing you haven't either.

In essence, you extend the candidate the same respect, and expectations, you will once they sign the contract and come onboard your engineering team.

Once the candidate has submitted their work, have your LLM of choice review it.

What to look for, and what the review criteria are, only you know - you know which traits, abilities, and engineering best practices make your team vibe like the pit crew of a premium F1 championship team.

With the candidates who pass the LLM's review, you have a long discussion regarding their solution.

This is the time to find out whether they understand the code the LLM gave them - the design choices, the data-structure compromises, the engineering - or not.

Again, I can't tell you how to run such a conversation, but experience taught me that when facing another like-minded engineer the conversation will flow of its own accord.

A jittery conversation, especially if the candidate is senior and has been around the block, is in itself an excellent signal. A negative one, but an excellent one nonetheless.

The only really hard decision left for you to make is which of the candidates you had a lovely, technical, in-depth conversation with to pick and make an offer to.

Hey, that's why you're getting paid the manager's salary... big bucks for big decisions.

Yes, I know, this flow is time consuming. It's not easy, sure as hell not as easy as using LC puzzle platforms.

You need to ask yourself what you are optimizing for when recruiting: your own leisure (at the future expense of a false-positive hire), or your team's next member's skills?

Like I said, big bucks for big decisions.

It's a cultural thang

As I conclude this post, I'd like to say to those companies that, in 2025, going on 2026, still use LC puzzles on restrictive platforms that deny the candidate the tools they'd be using in their actual work: don't call me, I won't call you.

Yes, the economy is in the gutter right now, and coming by interview opportunities is harder than winning the Powerball... but that's fine.

I'm probably not a good cultural fit for you - I like airing my grievances, not just pretending everything is fine, smiling and waving.

I speak up when I feel I've been wronged, and I don't mince my words either.

I also like working with, and for, smart people - people who appreciate advanced, modern technology, and know how to, and are willing to, take advantage of it.

It, and any other legal edge they can get over the competition.

I don't like dinosaurs, never got the fascination.

And I especially don't like being judged unfairly, by lazy folks too complacent to step up their own game, put in the work and extend me the respect they expect me to extend them.

You're right - I probably am not a good cultural fit for your organization.

I am a technical fit though - after all, all my solutions to your LC puzzles passed all unit tests.

But culture is just as important. I get it.

Been a displeasure. Let's not do this again, ever.

KTHXBYE
