Let’s talk about something many of us have experienced but rarely discuss openly. Have you ever wondered why so many technical interviews focus heavily on data structures and algorithms (DSA), especially when, more often than not, these aren’t skills we use daily in our jobs? It’s like training for a marathon and then only ever needing to sprint!
Many of us spend weeks or even months on platforms like Leetcode, honing our DSA skills, just to clear the technical rounds. Sometimes, it feels like hitting the jackpot when a familiar question pops up. But what if it doesn’t? Is it fair to judge someone’s entire engineering capability on this alone?
It’s a bit of an open secret that even if you ace these questions and land the job, the bulk of what you’ve learned is forgotten within a month. The irony? Most of the time, you won’t need to write a single algorithm from scratch at your job, at least until the next job hunt begins. It’s a cycle that feels endless and, frankly, a bit pointless.
Behind closed doors, many agree that this system isn’t the best measure of a candidate’s ability, yet alternatives seem scarce. Testing technical skills is essential, yes, but does it have to be this way? Especially considering the vast resources, ready-made solutions, and the assistance of AI, reinventing the wheel each time seems… outdated.
So, what’s the alternative? I believe the answer varies by job role but leans towards real-world problem-solving. Imagine being tested on actual challenges you’d face in the job, tailored to front-end, back-end, or full-stack roles.
Wouldn’t that give a clearer picture of how a candidate tackles relevant tasks?
I’m keen to hear your thoughts on this. Do you think the current system needs a revamp?
Have ideas on how to improve the interview process? Feel free to share your opinions in the comments or message me directly. Let’s brainstorm ways to make technical hiring more meaningful and less of a memory test.
Top comments (20)
Attitude and aptitude are what we look for. We do that differently at different levels, but for juniors it's primarily walking us through a project they've built and discussing the challenges and inspirations they had while creating it. We might add a simple language check with escalating complexity.
For more senior roles, we also present code "written by a junior" and ask for a code review - this reveals knowledge, but more importantly, it shows us the applicant's style and attitude.
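For illustration only, here's a rough sketch of the kind of "written by a junior" snippet such a review exercise could use; the endpoint, types, and planted issues are invented for this example, not taken from the commenter's actual material:

```typescript
// Hypothetical review exercise: fetch users and return them without duplicates.
// Deliberate talking points for the candidate: mutable module-level state,
// no error handling on the network call, and a quadratic de-duplication loop.

type User = { id: number; name: string; email: string };

let cache: User[] = []; // module-level mutable state: worth asking about

export async function getUsers(): Promise<User[]> {
  // No status check and no try/catch: what should happen on a 500 or a timeout?
  const res = await fetch("https://example.com/api/users");
  const users: User[] = await res.json();

  // O(n^2) de-duplication: fine for 10 users, painful for 100k.
  const unique: User[] = [];
  for (const u of users) {
    let found = false;
    for (const existing of unique) {
      if (existing.id === u.id) found = true;
    }
    if (!found) unique.push(u);
  }

  cache = unique;
  return unique;
}
```

What a candidate chooses to comment on first (correctness, robustness, performance, or the tone of their feedback) tends to say a lot about how they'd actually review a teammate's code.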
I used to run a business that built analytical databases, which did end up needing us to write and adapt sorting and searching algorithms, but only for a couple of developers, for a few months, every couple of years. In other words, in a team of 130 engineers, more than 99% of the time the job had nothing to do with these kinds of problems, even in a business that sometimes genuinely needed them. No business I've been CTO of since has ever implemented code that low-level.
On the surface, the challenge of such tests is appealing. The problem is that it now skews the field in favour of people who spend their time practising algorithms over people who actually focus on doing their job.
Absolutely agree with this and it's way better written than the rambling I was about to submit. Thanks!
I also thought that attitude and the way a candidate thinks were the reasoning behind these DSA questions, which should be a good approach in theory.
However, in reality, the more you study and practise DSA questions, the higher the chance you'll be given similar questions in an interview and pass them. This means companies aren't testing thinking, but rather how well candidates have learned to pass algorithm questions (more like a robot). And as you mentioned, 99% of the time the job has nothing to do with this kind of problem.
The question is: why should candidates be rejected for a position where they won't write DSA code 99% of the time, just because they didn't pass the DSA interview?
I think leetcode problems can work - if the outcome of the code doesn't matter.
If you're evaluating how someone thinks? Use it, but make it clear that you're giving them a big, tough problem and that you care less about the solution than about how they think as they approach it.
A leetcode-type problem is great for this because it's something you can watch and listen to as they tackle it. The problem is when we put all the emphasis on "getting the optimal solution working in the time allotted".
I also thought so. But let's simulate the situation: Candidate 1 is given a DSA question they have never seen before and struggles to produce an optimal solution in time, while Candidate 2 happens to have solved that exact question before and breezes through it.
Who would you hire based on this?
Not enough information to decide. How was the rest of the interview? Do I pick up any personality red flags from either? How do their years of experience compare? Where did they work previously, so that I can get a picture of their career journey and trajectory?
I think I can stand by my original comment - IF the outcome doesn't matter, leetcode problems can show thought process and how someone will respond given a little bit of (gentle!) pressure.
Don't really care about their code though!
Let's simplify and say there are no red flags, the rest of the interviews were amazing, and they both have more or less the same experience.
Well then, of course I hire the one who performs better. I can only afford to hire one person... and I have to do something to differentiate them. Leetcode shouldn't be the first criterion I divide them by, but if everything else is equal, that doesn't mean I can suddenly, magically hire both. I still only have budget for one person. That's the world we live in, where constraints are... constraining. 🤷🏽♂️
For what it's worth - and I know it still doesn't feel great to be on the receiving end of this because I spent the first half of this year being rejected over and over and over myself - losing to a better-qualified candidate isn't failure. It means you're a contender. Lots of teams in the world cup ⚽️, only one champion. Doesn't mean those other teams don't deserve to be there!
Competition makes sense, however:
That's fair, but how's the alternative any better? I see posts almost daily where people are complaining that they had to do some "real-world" project, didn't get selected, and it leaves them thinking that the company just used the interview process to get them to do some work really cheaply / for free.
I think that as long as there's scarcity, the people who aren't selected will find a reason that the system was unfair. I know I thought of a few along the way myself! But I don't begrudge those who are hiring the use of something like DSA as a way to help them narrow the field. Yeah, I don't have to invert a binary tree every day, but we just need a way to select in a competitive market.
Sometimes the interview will go in my favor, sometimes it won't. I just keep doing my best and looking for ways to improve for next time.
I'd prefer to be rejected after doing real-world problems rather than DSA questions :) Yes, companies can use me to do some work cheaply; that's true, and it's hard to avoid anyway. On the other hand, companies that do DSA check-ups also sometimes waste our time.
I have gotten a lot of DSA interview questions, and they almost always fail to measure the abilities of a developer.
I think an assignment that includes nitty-gritty technical details works better. If you need an algorithm to solve an issue, you can easily search for the right one and implement it; you don't have to solve hard leetcode problems to be a good developer.
Totally agree. I've said this in interviews lots of times, but the response was something like:
"We have our company policies, 2-3 coding challenges, 1 home assignment, 1 system design" :D
I've never once been given a DSA question in an interview for a developer role (over 30 years). I would never give one to an interviewee either.
As Ben says, you want to get an idea of how they think, and if they can solve problems.
As someone who has interviewed at companies such as Amazon, Google, and Shopify, I can say that all of them ask you to solve 3-4 coding challenges based on DSA. I understand that they have to check the way you think; it makes sense. But let's say Candidate 1 is faced with a DSA question they have never studied or solved before the interview. In that case, it is quite hard to solve the question in a very optimised way and cover all the edge cases. On the other hand, if Candidate 2 has solved this question before, they will get it done and pass the interview. Here is the question: does the fact that the second person was lucky enough to receive a question they had studied before, while the first one wasn't, make the second candidate better than the first?
Synthetic real-world problems are super work-intensive to create.
Why would you do the extra work?
Everything is framework-dependent, especially in JS-land, where a new web framework pops up every month.
A framework can be more complex and harder to learn than the programming language it's built upon, because of its own API, assumptions, commonly trodden golden paths, pitfalls, etc.
Combine that with the fact that each web framework releases a major version every year, in which the API gets mostly or completely overhauled.
You have code already; show it to them. Ask them how it works, and ask how they might do it differently.
Yes, it's true, it will take more time.
But:
I’ve always thought something like “identify the code smells” or “this program has a bug in it” would make a lot of sense. It wouldn’t be that much work to use real examples from the team doing the interviewing, which would also be harder to game.
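As a minimal sketch of what a “this program has a bug in it” exercise could look like (the function and numbers below are invented for illustration, not taken from any real interview):

```typescript
// Hypothetical "find the bug" exercise: compute the median of response times in ms.
export function median(times: number[]): number {
  // BUG: sort() without a comparator sorts numbers lexicographically as strings,
  // so [10, 2, 33] stays [10, 2, 33] instead of becoming [2, 10, 33].
  // The fix would be: [...times].sort((a, b) => a - b)
  const sorted = [...times].sort();
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// median([10, 2, 33]) returns 2 instead of the correct 10.
```

A candidate who spots this, explains why it happens, and suggests a test case for it is demonstrating exactly the kind of day-to-day debugging skill the job actually needs.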