A friend who reviews take-home code submissions for his company sent me a list a couple weeks ago: 40 take-homes he had reviewed in the last quarter, ranked from best to worst, with brief notes on what made him push the candidate through or reject them.
Looking through the list, the rejections clustered into 8 specific anti-patterns, and the pass-throughs avoided most of them. The passes were not about being a brilliant engineer; they were about not making any of the 8 mistakes.
Here they are, ordered from most to least common.
1. The "I Used Every Library I Know" submission
Anti-pattern: the project pulls in 14 npm packages to do what 200 lines of plain code would do: an authentication library, a state management library, an ORM, a validation library, two utility libraries, a logger, a metrics collector, all for a 4-hour take-home with no users.
Why it kills the application: it signals that the candidate cannot evaluate when complexity is appropriate. Production teams need engineers who under-engineer for the constraint, not over-engineer for the resume.
Fix: ship the simplest thing that meets the requirement. If the brief says "build a CRUD API for tasks," your dependency list is HTTP framework + database driver + maybe a test runner. That is it.
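To make "plain code" concrete, here is a hedged sketch: the kind of input validation candidates pull a library in for, done in a dozen lines of vanilla JavaScript. The `validateTask` name and the `title`/`done` fields are invented for illustration, not taken from any actual brief.

```javascript
// Plain-code validation for a hypothetical task payload. A 4-hour
// take-home rarely needs a validation library for this.
function validateTask(input) {
  if (typeof input !== "object" || input === null) {
    return { ok: false, errors: ["body must be a JSON object"] };
  }
  const errors = [];
  if (typeof input.title !== "string" || input.title.trim() === "") {
    errors.push("title must be a non-empty string");
  }
  if (input.done !== undefined && typeof input.done !== "boolean") {
    errors.push("done must be a boolean");
  }
  return { ok: errors.length === 0, errors };
}

// validateTask({ title: "ship it" }) → { ok: true, errors: [] }
// validateTask({ title: "" })        → { ok: false, errors: [...] }
```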
2. The README that explains the wrong things
Anti-pattern: 800-word README that explains the architecture, the design philosophy, the trade-offs of microservices vs monolith — but never explains how to clone, install, and run the thing in 90 seconds.
Why it kills it: the reviewer cannot easily run the project. They will not spend 30 minutes debugging why your Postgres connection string is wrong. They mark it incomplete and move on.
Fix: the top of the README is exactly this, in this order:

```
## Run
git clone <this>
cd <this>
[1-3 commands to install]
[1 command to start]
[1 command to test]
```
Everything else in the README goes after that. Architecture rationale, trade-offs, "what I would do with more time" — all useful, but only after the reviewer can see your code running.
3. Tests that pass but test nothing
Anti-pattern: 30 tests in the repo, all passing, all asserting trivial things ("expect(add(2,2)).toBe(4)", "expect(typeof config).toBe('object')").
Why it kills it: it signals padding. My friend's notes were explicit on this point: 4 tests that meaningfully cover the core logic beat 30 trivial ones.
Fix: write tests for the business logic that would actually break if someone refactored the code. Test the edge cases of the main feature. Test one happy path end to end. Stop there.
4. The "I think the brief was wrong" submission
Anti-pattern: candidate decides the brief had a flaw, redesigns the problem, and ships something different from what was asked. Includes a long README section explaining why the original brief was misguided.
Why it kills it: the brief is sometimes deliberately ambiguous to test whether you ask clarifying questions or whether you barrel ahead. Either path is acceptable; rewriting the problem unilaterally is not.
Fix: if you think the brief is wrong, send one clarifying question to the recruiter before starting. My friend's notes repeatedly rewarded candidates who asked one good question; they never rewarded candidates who silently changed the problem.
5. The pristine main branch with one commit
Anti-pattern: a single commit titled "Initial commit" or "Take-home submission" with the entire project in it.
Why it kills it: the reviewer cannot see how you actually work. Did you start with the data model and build out? Did you do TDD? Did you backtrack? They cannot tell. The commit history is part of the submission.
Fix: commit as you would for a real PR — meaningful, atomic, descriptive. 5-12 commits is normal for a 4-hour take-home. The reviewer will skim them; the skim will inform their evaluation more than you think.
6. The over-polished UI for a backend role
Anti-pattern: backend take-home that comes back with a custom CSS theme, animations, dark mode, and a logo — but the API design has obvious issues.
Why it kills it: signals you allocated time poorly. The reviewer is looking at API design, error handling, data modeling. The UI is incidental.
Fix: ship the simplest UI that demonstrates the backend works. A boring HTML form is fine. Spend the time on the part of the brief the role is actually evaluating.
7. The error handling that swallows errors
Anti-pattern: every function wrapped in try/catch, every catch block doing nothing or just logging "Error happened" with no context.
Why it kills it: in my friend's experience, this is the single biggest technical tell of junior versus senior. Wrapping everything in try/catch and swallowing the errors is what people do when they have never been on-call for a system in production.
Fix: only catch errors at boundaries (API entry points, background job runners). Let the rest bubble up. When you do catch, include enough context to debug in production (the input, the operation, the error itself).
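A minimal sketch of what boundary-only error handling looks like, assuming a hypothetical task API (the `parseTaskId` and `handleGetTask` names are invented; the pattern is the point):

```javascript
// Inner logic: no try/catch. Let failures bubble up with their stack.
function parseTaskId(raw) {
  const id = Number(raw);
  if (!Number.isInteger(id) || id < 1) {
    throw new Error(`invalid task id: ${JSON.stringify(raw)}`);
  }
  return id;
}

// Boundary (the API entry point): catch once, with enough context to
// debug in production, not a bare "Error happened".
function handleGetTask(rawId, store) {
  try {
    const id = parseTaskId(rawId);
    return { status: 200, body: store.get(id) ?? null };
  } catch (err) {
    // Log the operation, the input, and the error itself.
    console.error("GET /tasks failed", { rawId, error: err.message });
    return { status: 400, body: { error: err.message } };
  }
}
```

One catch block at the boundary replaces a dozen scattered ones, and the log line tells the on-call engineer what input broke and where.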
8. The "comprehensive" submission that does not solve the brief
Anti-pattern: the brief asked for a CRUD API for tasks. The submission delivers a CRUD API for tasks plus user authentication plus role-based access plus rate limiting plus deployment scripts plus an SDK.
Why it kills it: scope discipline is one of the things the take-home is testing. Adding scope is not a strength. It is a tell that you cannot resist gold-plating.
Fix: read the brief literally. Ship exactly what it asked for. Use the "what I would do with more time" section of the README to acknowledge the additional ideas without implementing them.
The pattern across all 8
Six of the eight anti-patterns share the same root cause: doing too much. Adding libraries, adding tests, adding scope, adding UI polish, adding architecture commentary. The take-home rewards engineers who do less, better.
The other two (commit history, error handling) are about making your work legible to a reviewer who has never met you and has 20 minutes.
For more on what take-home reviewers are actually scoring (versus what we assume they want), see "I took 12 skills-based hiring assessments and they all fall into 3 flavors," which walks through the specific evaluation criteria for each flavor; its take-home section maps directly to the patterns above.