My very first article on this site (https://dev.to/bytebodger/your-coding-tests-are-probably-eliminating-some-of-your-best-candidates-de7) was abou...
That last question screams of being written by an academic who thinks they are being clear by writing that way, but who unfortunately has never worked in a real company or learned how good developers actually think.
Nor have they had to actually write useful requirements.
They've resorted to bodgy pseudo-code, but written in the manner they teach... Badly!
How would you rewrite it better? Honest question.
If you look through the rest of the comments, I answered the same question for someone else.
I am glad that a seasoned developer like you finds the same problem with Codility as I did. I am self-taught with very little math background, and the only time I dealt with N and k was in high school - in my native tongue, of course. Reading these ciphers almost made me quit right there, before I took the time to break them down into human language.
Also, hard agree with your point on online coding tests. I am still very sore about that one time I got graded 54% on the thing I was doing daily, live, on production systems.
I was presented with one of these for Swift. And it was INCREDIBLY outdated, to the level that my real question was whether that was the company's actual stack... which, surprisingly, it was not. So there I was, answering UIKit questions when the stack was 100% SwiftUI.
Also, I must add that the compiler doesn't handle some things: if you try to create a method and call it many times, it gives an error, but if you duplicate the method's code everywhere, everything gets a green check.
These are great points. Yes, some of their tools are horribly outdated. And yes, I've also found scenarios where their system would FAIL something that actually worked perfectly.
I'm glad to see I'm not the only one who finds their wording absolute garbage... It sometimes takes me 10 minutes just to get a notion of what is being asked.
However, for me the worst part of these tests is that sometimes you need to start guessing the unit-test implementation! Once at Codility I almost failed a simple React test that asked me to implement a simple Intersection Observer that WORKED in real browsers but somehow was failing their unit test... 30 minutes spent debugging their code, because it didn't reflect a real browser.
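For context, what I wrote was roughly along these lines (a rough sketch from memory in TypeScript, with made-up names; the actual task was different): a hook that observes an element and reports whether it's on screen, which is exactly the kind of thing a jsdom-style test environment doesn't implement the way a real browser does.

```typescript
import { useEffect, useRef, useState } from "react";

// Rough sketch (not the actual Codility task): report whether an element
// is currently visible in the viewport using IntersectionObserver.
export function useIsVisible<T extends Element>() {
  const ref = useRef<T | null>(null);
  const [isVisible, setIsVisible] = useState(false);

  useEffect(() => {
    const node = ref.current;
    // In a real browser IntersectionObserver exists; in many test
    // environments it doesn't, or it's a stub that never fires - which is
    // where the debugging time went.
    if (!node || typeof IntersectionObserver === "undefined") return;

    const observer = new IntersectionObserver(([entry]) => {
      setIsVisible(entry.isIntersecting);
    });
    observer.observe(node);
    return () => observer.disconnect();
  }, []);

  return { ref, isVisible };
}
```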
I agree. It's psychotic to combine (a) a vague problem statement with (b) hidden unit tests for which you can't ever see the pass/fail results.
1,000%
YES. Sooooo much THIS. I had the exact same experience at Hackerrank as well. Code that performs wonderfully on my local NodeJS server. But, bafflingly, fails in their test environment.
You keep mentioning that we can "remove ambiguity" or "provide clarity without being too complex" or do it "without using K-th person", etc.
If you were the problem writer, how would you rewrite the problem you mentioned above using your principles? Give us an example.
It's funny you mention that. Directly after I wrote the article, I thought, "I wonder if I should've provided a literal example of a better way to word it?" I even took the time to word a putative improvement, but I didn't go back in and add it to the article cuz I didn't want it to become a super-long read. But since you've asked, here's my first take at it:
Is that a perfect reimagining of the instructions? Maybe not. It could probably still be clearer. But the point is that it can be wayyyy clearer than that ridiculous word-jumble that they provided.
Some people might look at that description and think, "But this gives away too much in the description about how the solution should be coded." To which I'd say, I don't care. If the whole "challenge" in your coding test is to see if I can simply comprehend the opaque language that you're using to explain the task, then it's a crappy excuse for a coding challenge.
When you're working as a coder, you need to bring a ton of "higher level" thinking to the job if you ever expect to be good at it. But if that "higher level" thinking is required merely to understand what's being asked of me, then that's not a problem with the coder. It's a problem with the way that the organization defines/communicates specs.
I don't care how crazy-overly-complicated your environment is. If the people who need you to submit coding solutions can't even explain, in clear and common language, exactly what it is that they want you to do, then that represents a severe problem in that organization. You don't solve that problem by throwing obtuse explanations at a coding team and simply assuming that they'll be able to decipher your jargon. In fact, I'd argue that if you can't explain the task in layman's terms, you may not have a good understanding yourself of exactly what you're asking someone to do.
Also, I'll reiterate that the clarity of these instructions is actually far more important in an automated test than it is in a typical working environment. In a typical working environment, if I'm even slightly confused by the request, I can always go back to the PM / stakeholder / client / etc. and ask them for greater clarity. But you have no way to do that when you're taking an automated online test. For this reason, it's just downright ridiculous if you're asking someone to complete a coding test with these kinds of hard-to-decipher instructions.
I wouldn't consider it a "word jumble", as IMHO it's perfectly clear and there is really not a single complicated word used, albeit it is indeed dense and thus takes effort to parse. Being much shorter, it still carries more information. For instance, it tells how many people there are (M) and how they are numbered (0..M-1). Your explanation doesn't make it explicit that the arrays assignedLetters and mustPassTo are correlated through an index that represents a person. When you're writing a description for a problem to be solved in any choice of language (including C and possibly even MATLAB), it might be necessary to disambiguate concepts that seemingly "everyone knows implicitly".
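To make that index relationship concrete, here's roughly how I picture it in code (a small TypeScript sketch; the array names follow the problem as quoted in this thread, and the "pass the letter along" traversal is my reading of the mechanic, not the official task):

```typescript
// Sketch of the data shape implied above: index i represents person i.
// assignedLetters[i] is person i's letter; mustPassTo[i] is the person
// they must pass it to. The traversal below is my guess at the mechanic.
function traceLetters(assignedLetters: string[], mustPassTo: number[], start: number): string {
  const visited = new Set<number>();
  let result = "";
  let current = start;

  // Follow the chain person -> mustPassTo[person] until we revisit someone.
  while (!visited.has(current)) {
    visited.add(current);
    result += assignedLetters[current];
    current = mustPassTo[current];
  }
  return result;
}
```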
The common theme here, that I run into all the time, is that if you consider it to be clear, then you can't imagine how anyone else could not also see it as clear. It's like when there's some overly-complex chunk of code. But you wrote it, or, at a minimum, you've already had to work with it, so in your mind, it's clear. And that's great. But just because it's clear to you doesn't mean that the code is written well.
I can agree that it could be written better, or differently. But this seems like a case of bikeshedding. Engineers are a diverse bunch and write in different styles. It seems strange to me to pick on something just because you don't like their style. And the author went as far as to insult the problem's authors for their chosen style. Suggesting a different style and suggesting improvements is always great, I think, but it can be done without insulting the original work. Taken out of context, OK, the first paragraph is hard to understand. Yet they have provided very good examples, at which point it becomes easy to understand. Later, once you have the idea in your head, you can go back to the first paragraph and validate your idea against it, and then the fact that it's short and dense may actually even be helpful.
OK. It's great.
You are so spot on, and here we are more than 1 year later and it is still just as bad. It took more than 10 minutes to evaluate one of my answers. How is that possible? You have no way of writing your own test cases with assertions and the test cases that they give are completely hidden. Do the test cases violate the assumptions? They must. How is it a real world test if you cannot see the test cases? Literally no shop on planet earth would do that.
This has just the right amount of salt in it! hahaha
Jokes aside, I completely agree with you. It feels like the wording is complicated just because.
And, as a non-native English speaker, I can affirm that it's way more difficult to understand.
Also, with ChatGPT being able to solve questions like these in mere seconds (although not always correctly), I feel like we have to change the way we evaluate coding skills.
As a native English speaker with a PhD in computer science, I assure you that the problem isn't with the reader. The writing (at least for some problem statements) is just plain terrible.
This process has been validating. This is one of those tools that starts with an alignment of good intentions but breaks down fairly quickly in practice. First, a large set of the problems are not designed to test for higher-order patterns and language expertise, but rather whether you can solve a puzzle with for loops and array manipulation. Second, they're demoralizing. Imagine having all of that language expertise and clean-code experience, maybe even architectural paradigms or domain experience. Practical experience that could be directly applicable to the role. Only to have your candidacy reduced to a Mensa puzzle in a problem domain you have no context for.
I run enterprise leadership circles, and the general argument is that the FAANGs do this. I would argue that unless you're a FAANG, nobody is pounding down your door for resume clout, and the people behind your organization's technical prowess on any given day couldn't pass these tests without advance notice, yet they still provide tremendous value.
That said, if you're struggling with Codility on brainpower alone, you're not using all of the tools at your disposal on the job. Use the AI-assisted coding tools to solve the problem. I would never fault my engineers for using Copilot or ChatGPT to help them out. TBH, I encourage it, with guide rails.
I created an account just to join this discussion.
For the first time, I just had an interview coding test via Codility. Coding up my solutions (C++20) was pretty easy.
But for my final, highest-weighted Task, the problem-statement write-up was atrocious. Their terminology was inconsistent and unidiomatic. Guessing at what they meant left me with only enough time to sketch out a solution as pseudocode, which I'm hoping the hiring manager will find acceptable.
Seriously, the only justification for that final Task problem statement would be as a prelude to a behavioral interview. E.g., "Tell us about a time when you had to deal with ambiguous requirements and a constrained schedule and the inability to get clarification while being evaluated for your performance."
I fully agree with this. If there is a Codility or similar assessment, I am more likely to decline the opportunity. I have worked with people who are very good at these puzzles but basically can't code, so I don't think they measure anything other than the ability to do Codility puzzles.
If you were a good developer, you wouldn't mind bugs.... or Codility.
😉