In my career, I have found myself on both sides of the hiring process many times. Every company's approach is different, but when it comes to a software developer role, there is almost always some form of take-home coding challenge involved.
For both candidates and companies, these challenges are a necessary evil. It would be hard to find someone who claims to like solving or reviewing coding assignments. However, there are different levels of evil, and making that part of the process suck less should be in everyone's best interest.
This piece ended up much longer than I expected. If you would prefer to skip the background starter and go straight to the main course, here is the link to the rules of a good coding challenge.
Some companies choose to skip this step in their hiring process, sometimes conditionally (e.g. for folks with substantial open source contributions or for referrals) and sometimes completely. While there are alternatives that make it possible to skip the challenge and still hire confidently, and while I think some of them might have advantages, a take-home assignment remains an industry standard.
In this piece, I will focus on how to improve it, and not on questioning its merit or value. I can recommend this post by Michelle Barker if you want to explore that topic.
Most of what I describe in this piece likely applies to any software engineer role. However, my experience is heavily skewed towards frontend development. If you plan to apply these learnings to other roles, your mileage may vary.
A coding assignment usually comes pretty early in the process, right after the first screening. From the perspective of the candidate, it can set a tone for how they perceive the company they are trying to get hired by. A bad experience, which is not rare, can make it hard to remain excited about the role later on. Come to think of it, so far I've only accepted positions from companies where I at least didn't hate the coding assignment ‒ it was never the deciding factor, but an early indicator for which places to avoid.
For the company, a malformed coding challenge can distort the first impression of the candidate. It can accidentally reward things like an abundance of free time while failing to evaluate traits that actually matter for the job at hand.
Ironically, when I joined these companies and got the chance to be at the other end of the same process that I found enjoyable as a candidate, things looked much different. I've often found reviewing those coding challenges to be a painfully long, demanding, and confusing task.
When I recently got a chance to rethink the hiring process for frontenders at my current company, I decided to take a closer look at where that mixed experience comes from. I embarked on a mission to design an assignment that works for both the candidate and the reviewer while still doing a good job of evaluating the match. I started by considering what the incentives are for everyone involved.
As a candidate you want to get hired ‒ that's the main reason people apply for jobs. If the company asks you to solve a take-home coding challenge, you will probably first roll your eyes, but your best chance to get the job is to solve it as well as you can.
A few observations follow from this. First, knowing what the criteria for "well" are is very helpful and welcomed by the candidate. Second, you want the assignment to allow you to showcase what you consider your strongest skills. Depending on your personal situation, optimizing for the least time "wasted" on it can be very important too.
Additionally, from my experience, the challenges I've hated the least were the ones that didn't feel dull, or as if I was asked to do unpaid work for the company, solving a random ticket from their backlog.
I also enjoyed when, as an added benefit, a hiring task was an opportunity to try a library or technique I was planning on learning anyhow, so I could fill two needs with one deed.
And after all is said and done, learning how well you've done and getting feedback about it is much appreciated. There are few things as annoying as submitting an assignment and then never hearing about it again.
Since the coding assignments are technical, the task of reviewing a submission will usually end up on the desk of a developer. As a reviewer, you want to fairly and accurately evaluate how well the candidate solved the challenge and how their solution compares to those of the other applicants. You also want to do so efficiently ‒ you have a whole backlog of things to work on, after all. How difficult all of that is depends to a large extent on how the assignment is designed.
If every hiring submission that lands in your inbox is a unique snowflake that doesn't follow any predictable structure, you're in for a bad time. An unconstrained solution space forces the reviewer to either spend a long time getting familiar with each particular solution or to evaluate it based on a rough skim. In both cases, the company loses: either the reviewer's time, or the quality of the review and with it the confidence that can be placed in its outcome.
A reviewer's best friends are clear guidelines for what a good and a bad submission look like, as well as a familiar review process. Switching processes, tools, and criteria just to evaluate someone's take-home challenge code can be a frustrating ordeal.
The person or people who represent the interest of the company in the hiring process will vary, but usually it will be an internal hiring manager, engineering manager, CTO, or all of these together. The reviewer can be seen in this role too. "The company" in this piece refers to that group.
What the company wants is to hire the best candidate, the one that is a match for the role and will be the optimal contributor. A daunting task, really.
The question we really want to fine-tune our coding challenge to answer is "How good of a technical contributor to our projects would the candidate be?". Note that "technical contributor" can mean more than just coding ‒ a good take-home coding assignment can teach the company about more aspects of a candidate's profile than just their ability to write code.
Ideally, the challenge is calibrated and flexible enough that it allows detecting outstanding candidates. Identifying highly desirable people early in the process can help in ultimately hiring them, e.g. by shortening the process later on.
Finally, all this should be accomplished without damaging the perception of the company that the candidates go into the process with. A poorly designed or poorly executed assignment can easily have a net negative value for the company.
As should be clear by now, there is a lot at stake here. It's also easy to see where the potentially vastly different experiences for candidates and reviewers come from. Their incentives can seem contradictory, and without careful consideration, it's easy to end up with a test that caters to the needs of one of these audiences while ignoring those of the others.
Can we optimize for everyone's needs, or do we need to sacrifice the experience or best interest of the candidate, reviewer, or the company? It's not an easy task and might require significant investment to execute well, but I believe it's possible to find a balance that satisfies everyone involved.
We've finally arrived at the bullet point list of Certified Good Advice, based on my experiences reviewing and solving coding assignments as well as my recent experience designing one myself. Here is how to make your coding test suck less:
- Base the challenge on an existing codebase, if you hire for a role that involves working on one. Once hired, the candidate likely won't have to start new projects from scratch as part of the job, so don't test that skill. It doesn't have to be a big project but needs at least some parts with a non-trivial level of complexity to be realistic. Using an existing "base" with a familiar initial structure makes a world of difference when reviewing as well. Especially when submissions are based on PRs or commits, it's easy to focus on the candidate's contributions and filter out the "glue code".
- Align the submission process with your delivery process. For example, if you use pull requests to propose and describe code changes, ask the candidates to submit their solutions as PRs. You will get a glimpse into how well the candidate adheres to a clearly defined process, how well they communicate and provide context for their work, prepare changes for review, etc. ‒ much more than just coding skills.
- Using your standard contribution process means that you can also use your standard code review process. This means assignment reviewers won't have to switch to a different "mode" when evaluating a coding assignment. It should all end up being easier and fairer.
- Lock the fundamental technology choices. Use the same core frameworks or libraries you use in your codebase, or the ones you'd like to migrate to. The exact choice is less important than the fact that all solutions will follow the same blueprint. This point follows from the "base project" advice earlier, but there are specific benefits to discuss here. Leaving the tech stack open can lead to testing for the wrong skills, but also makes the work of the reviewer tougher. Furthermore, comparing two solutions with completely different tech stacks can be almost impossible to do meaningfully. Reviewing submissions in different technologies can trigger unconscious biases in the reviewer. For example, it's much easier to scrutinize a project in a familiar technology, where you have experiences and opinions than an unfamiliar one. Another form of bias is dismissing a submission just because it's using some tech you don't personally like.
- Replicate common scenarios from a day as a developer in your company. A task could be an interesting bug that needs a bit of investigation, or a loosely defined new feature that is not straightforward to implement. Find problems that have a limited solution space, but more than one way to solve them.
- Make one part of the challenge open-ended, as a call to showcase creativity, initiative, and whatever part of the software development craft the candidate deems their strongest suit. Invite the candidate to impress you. In this part of the assignment, the reviewer can avoid having to dive too deep into the implementation, and instead focus on the big picture. These open-ended tasks are also a great conversation starter for the in-person interview later in the process and they serve well for identifying outstanding candidates.
- Keep the base project basic. Resist the temptation to make the base project too complete. Go with a basic, non-opinionated setup for whatever technology you choose. That leaves the field open for the candidate to improve things as part of the open task.
- Avoid detailed specs and evaluating via a test suite. Assignments that are too restrictive (implement a feature from a very detailed spec, write some code to pass tests, etc.) can make the challenge feel tedious and "like work". It also doesn't replicate the day-to-day realities of a software developer job ‒ and if it does at your company, you likely have a bigger problem than a bad take-home coding assignment.
- Don't base it on your product. Don't make it about adding a new feature to your product, or re-implementing an existing one. Doing so might create a false impression that you're getting some value out of the challenge beyond just hiring. You likely don't pay candidates for the time they spend on the assignment (if you do, kudos!), so might as well make it clear it's just a test.
- Try to make the topic interesting or entertaining. It's easier said than done, but a good starting point is to use data from a public, open API or a public data set instead of going for random data or no data at all. As a bonus, you will get to check how the candidate handles e.g. error cases or data inconsistencies.
- Be clear about what is graded and how. Be transparent about the expectations and what is in scope for the task. Provide clear guidelines for each task and address common questions to avoid misunderstandings.
- Avoid setting "traps" to see how candidates handle unexpected situations. You wouldn't do that to a developer on your team, so why do it in a hiring challenge?
- Don't promote hustling, look for quality. Submissions that took a large amount of work can look very impressive, but be careful about rating them high by default. Try to get a feeling for how long someone spent on the tasks. You can ask the candidates to give an estimate or look at the commit history ‒ but keep in mind that both of these methods can be inaccurate. Having a lot of free time has nothing to do with how good of a technical contributor someone can be, so it's best to factor it out as much as possible. Narrowing the potential solution space by making the tasks well defined and smaller in scope can help to keep hustlers at bay. It might also be a good idea to give an estimated expected time for completion, but be careful with being too precise ‒ the time necessary might vary for different experience levels.
- Whether they pass or not, give them some feedback. They've put the work in to solve the assignment, so it's only fair you put the work in reviewing it thoroughly. And if you do, getting back to them with some feedback should not be too much trouble. Check with your legal team though ‒ there might be some reasons not to do it.
- Talk about their solution in the following step of the hiring process. If you've liked their solution enough to promote them to the next step ‒ let them know how good it was! Most developers will be happy to discuss the details of their solution. It will give you a lot of extra insight into how they talk about their work and how they take constructive feedback. If the next step involves a chat with the team, you can use the coding challenge app as a basis to discuss potential new features, design improvements, etc. to see how they perform in a collaborative environment.
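To make the "public API" advice above concrete: real-world API data is rarely clean, and normalizing it defensively is exactly the kind of handling a reviewer can look for. Here is a hypothetical sketch ‒ the field names are made up, loosely modelled on launch data, and not taken from any specific challenge:

```typescript
// Illustrative shape of a raw record from a public API.
// Fields may be missing, null, or padded with whitespace.
interface RawLaunch {
  name?: string | null;
  date_utc?: string | null;
  success?: boolean | null; // null while the outcome is unknown
}

// The clean shape the UI actually renders.
interface Launch {
  name: string;
  dateUtc: string;
  success: boolean | "unknown";
}

function normalizeLaunches(raw: RawLaunch[]): Launch[] {
  return raw
    // Drop records that are unusable for display (no parseable date)
    .filter((r): r is RawLaunch & { date_utc: string } =>
      typeof r.date_utc === "string" && !Number.isNaN(Date.parse(r.date_utc))
    )
    // Fill sensible defaults for the remaining gaps
    .map((r) => ({
      name: r.name?.trim() || "Unnamed launch",
      dateUtc: r.date_utc,
      success: r.success ?? "unknown",
    }));
}
```

A candidate who handles the nulls and missing fields gracefully tells you more about their production instincts than one who assumes perfect data.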
Trying to stay true to these rules as much as I could, I've designed the new frontend coding challenge for my company.
Using the SpaceX API I've built a simple web app that shows some rocket launches and landing sites. I've used React and SWR, two libraries we're using in our production app. I've opted for a simple setup using CRA, no additional tooling or technologies. The interface is built using Chakra UI to replicate having a design system, which we are in the process of adopting in our codebase.
The challenge has three tasks, all asking the candidate to make changes to that app. The first task is to solve a simple but not trivial bug involving timezones. The second task is about implementing a new feature, based on a rough sketch and a brief written spec. The last task is open ‒ it asks the candidate to improve the app in any way they'd like, giving a few ideas to get their imagination going. The assignment includes detailed guidelines about each task and practicalities around the submission process. We ask candidates to submit solutions to each task as PRs against a cloned repo, and to keep the quality as if it were production code.
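The actual bug in our assignment stays private, but a typical timezone pitfall in this kind of app looks something like the following sketch (the function names are illustrative, not from the real challenge). The SpaceX API returns launch dates as ISO 8601 UTC strings, and a common mistake is reading local-time components from the parsed `Date`, which silently shifts the result by the machine's UTC offset:

```typescript
function pad(n: number): string {
  return String(n).padStart(2, "0");
}

// Buggy variant: getHours()/getDate() return values in the *local*
// timezone, so the output changes depending on where the code runs.
function formatLaunchTimeBuggy(isoDate: string): string {
  const d = new Date(isoDate);
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())} ` +
    `${pad(d.getHours())}:${pad(d.getMinutes())}`;
}

// Fixed variant: read the UTC components explicitly, so the result is
// deterministic and matches the timestamp the API actually returned.
function formatLaunchTimeUTC(isoDate: string): string {
  const d = new Date(isoDate);
  return `${d.getUTCFullYear()}-${pad(d.getUTCMonth() + 1)}-${pad(d.getUTCDate())} ` +
    `${pad(d.getUTCHours())}:${pad(d.getUTCMinutes())} UTC`;
}

// formatLaunchTimeUTC("2006-03-24T22:30:00.000Z") → "2006-03-24 22:30 UTC"
```

Bugs like this are great assignment material precisely because the fix is small, but spotting it requires actually reasoning about the code rather than pattern-matching.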
After using it as part of the process for a few weeks, the results have been positive. Here is a sample of comments from candidates and reviewers:
Loved the “bug + feature + anything-cool” approach, it’s way more appropriate than making a small toy app from scratch. It’s quite like what you’d be actually doing and shows how you’d blend your code into an existing codebase.
Feedback from a colleague who solved the new assignment to get hired
I definitely enjoy reviewing now more than I used to. I get to learn how others approach similar problems in vastly different ways. For grading, I focus on how they deliver, the commits, the description in the PR, and notes they leave in the PR description as well as in code to highlight areas for future improvements.
Feedback from a colleague who reviews the assignments
Just wanted to say that I really appreciate how you set up the challenge - with existing codebase and bug/feature tasks.
Candidate submitting the solution for the new challenge
I've mostly sent out open assignments in the past (start from scratch and build something). What works even better is having a take-home assignment where you see how the candidate actually works in an existing codebase. This takes a lot of time to build... But the best assignment I've personally done was the [My Current Company]'s assignment.
An internal message shared by a friend who went through the process but eventually opted for a role at a different company
The constructive feedback I've received so far was mostly focused on the implementation of the base project - e.g. library choice or coding style. Good thing these details are the easiest to adjust!
All my learnings can be summarised into one key takeaway ‒ when designing a coding challenge, let empathy guide you first. Taking the time to look at it from the perspective of all the people involved in the process will lead you towards the best results.
And remember, no matter how much you tweak and optimise the coding test it will stay just that ‒ a coding test. There are many inherent limitations to what it can tell you about the candidate. But if used responsibly and designed deliberately, it can be a valuable ‒ and bearable ‒ tool in the developer hiring process.