I spent a full weekend building an end-to-end pipeline for a Series B startup. Ingestion from three sources, data modeling in their warehouse, dbt transformations, tests, documentation, and a 45-minute live presentation to their "data team" (two people). Monday morning I got a three-sentence rejection email. The subject line was "Update on your application." That was the update.
I wish this were unusual. It isn't. This is the data engineering hiring process in 2026, and it's broken in ways that should make every engineer angry.
The Take-Home Ballooned While You Weren't Looking
Here's how it used to work: a company sends you a take-home assignment. Build a small ETL script, write some SQL, maybe model a couple tables. Two hours, tops. You submit it, they review it, you talk about it. Reasonable.
Here's how it works now: a company sends you a take-home that says "should take 2-4 hours." You open the brief and find a multi-source ingestion problem, a data modeling exercise, a transformation layer, unit tests, documentation, a README explaining your design decisions, and oh yeah, a live presentation to the team next week. Recruiters claim 2-4 hours. Candidates actually spend 6-10 hours minimum. For roles they really want? 15-20 hours. I've seen reports of people spending entire weekends on a single application.
62% of companies admit their take-homes are "too long." They keep using them anyway.
The scope creep isn't accidental. It's structural. AI tools made short assignments trivial to complete, so companies stretched them longer to maintain "signal." Instead of fixing the format, they just demanded more of your time. The result is a 20-hour unpaid project that produces a deliverable artifact: a working pipeline, a documented data model, a presentation deck. That's not an assessment. That's consulting work with zero compensation and a templated rejection email as your receipt.
If your take-home produces something the company could deploy, you're not interviewing. You're freelancing for free.
And the numbers back this up. 68% of companies now use take-home coding tests. Only 20% of candidates pass. That means 80% of the people who spend their weekends building pipelines for strangers get a canned "we've decided to move forward with other candidates" email. No feedback. No explanation. Just silence and a wasted weekend.
The Hiring Market Doesn't Care About Your Time
Let's talk about why candidates put up with this. It's not complicated: 100,443 tech workers have been affected by layoffs in 2026. That's 837 job cuts per day through April. Q1 alone saw 52,050 tech-sector job cut announcements, up 40% from Q1 2025. Nearly half of those cuts were explicitly attributed to AI and automation.
When you're eight weeks into a job search and the bills are piling up, you don't push back on a 20-hour take-home. You grind through it at 2am because the alternative is another month of nothing. Companies know this. Three forces collided to make extended unpaid trials feel acceptable to employers: application volume favors employers, AI tools favor employers, and a budget-tight year favors employers. The asymmetry isn't a bug; it's the business model.
Entry-level hiring fell 25% from 2023 to 2024 at the top 15 tech firms. 72% of tech leaders plan further reductions. Early-career engineers with less than three years of experience are the hardest hit. These are the same people being asked to prove themselves with the longest, most grueling take-homes. Fewer positions, longer assessments, and a candidate pool too desperate to say no.
I've been on both sides of the interview table enough times to know what this looks like from the inside. Hiring managers aren't sitting there rubbing their hands together plotting to steal your work. Most of them genuinely believe the take-home is a fair evaluation. They don't think about the fact that they're asking someone to do 15 hours of uncompensated labor with an 80% chance of a form rejection. They've never done the math on what that costs the candidate in aggregate across a job search with 10-15 active applications.
I did somewhere around 20 interview loops in one search. If even half of those had included a 10-hour take-home, that's 100 hours of unpaid work. That's two and a half full work weeks. For one job search. At some point you have to ask: is this an evaluation or an extraction?
AI Killed the Signal; Companies Doubled Down Anyway
Here's the part that makes this whole thing absurd. The take-home format is collapsing under its own logic.
AI-assisted cheating on take-home assignments surged from 15% to 35% in six months across nearly 20,000 interviews analyzed. One in three candidates completing take-homes is using AI assistance. And honestly? Good for them. If a company asks me to spend 20 hours on an unpaid project, I'm using every tool available. The assignment isn't measuring my engineering ability; it's measuring my tolerance for exploitation.
But here's the loop that should concern everyone: AI makes short take-homes trivial, so companies make them longer. Longer take-homes produce more deliverable work product, which looks more like free consulting. Meanwhile, the "signal" companies wanted (can this person actually build things?) is completely destroyed because the output tells you nothing about whether the candidate wrote it, prompted it, or copy-pasted it.
The rational response would be to abandon the format. Instead, companies are doing something worse: keeping the 15-hour take-home AND adding live coding rounds on top. You get the worst of both worlds. The total time burden for a single application now includes a recruiter screen, a take-home project, a live coding round, a system design round, and a behavioral panel. That's 25-30 hours per company. Multiply by the 5-10 active pipelines any serious job seeker maintains, and you're looking at a part-time job just applying for jobs.
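The per-application total is easy to sanity-check. Here's a back-of-envelope sketch; the per-round hour figures are rough assumptions consistent with the estimates in this section, not measured data:

```python
# Rough per-round time estimates (assumptions, in hours) for one
# application pipeline as described above.
rounds = {
    "recruiter screen": 1,
    "take-home project": 15,
    "live coding": 3,
    "system design": 3,
    "behavioral panel": 3,
}

per_company = sum(rounds.values())   # hours for a single application
active_pipelines = 8                 # mid-range of the 5-10 figure

total_hours = per_company * active_pipelines

print(per_company)   # 25
print(total_hours)   # 200
```

At 25 hours per company and eight active pipelines, that's 200 hours, which is five full 40-hour work weeks spent interviewing.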
In-person interview rounds increased from 24% in 2022 to 38% in 2025, driven entirely by AI cheating concerns. The take-home isn't going away; it's just getting a live coding bodyguard that doubles your time commitment.
How to Spot the Extraction
Not every take-home is exploitative. Some are genuinely well-designed, 2-3 hour exercises that test real skills. A good problem forces you to reason about grain before you join, and that's a legitimate thing to evaluate. The difference between a fair assessment and free consulting is usually obvious if you know what to look for.
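The "grain before you join" point is the kind of thing a short exercise can test fairly. A minimal sketch with hypothetical tables: orders at one-row-per-order grain joined to payments at one-row-per-payment grain fans out the order rows and inflates revenue unless you collapse payments to order grain first:

```python
# Sketch with made-up tables: why grain matters before a join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, amount REAL);
CREATE TABLE payments (payment_id INTEGER, order_id INTEGER);
INSERT INTO orders VALUES (1, 100.0), (2, 50.0);
INSERT INTO payments VALUES (10, 1), (11, 1), (12, 2);  -- order 1 paid in two installments
""")

# Naive join: order 1 matches two payment rows, so its amount
# is counted twice and revenue is inflated.
naive = conn.execute(
    "SELECT SUM(o.amount) FROM orders o JOIN payments p USING (order_id)"
).fetchone()[0]

# Fix: collapse payments to order grain first, then join.
correct = conn.execute("""
    SELECT SUM(o.amount)
    FROM orders o
    JOIN (SELECT DISTINCT order_id FROM payments) p USING (order_id)
""").fetchone()[0]

print(naive)    # 250.0 (double-counted)
print(correct)  # 150.0
```

A candidate who spots the fan-out and aggregates to the right grain before joining has demonstrated a real skill, and it took minutes, not a weekend.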
The scope test. If the deliverable could ship as a real feature or inform a real business decision, it's consulting. A good take-home is abstract enough that the output has zero value to the company. The moment they ask you to use their actual data, model their actual domain, or solve their actual problem, you're doing their job for free.
The time test. If the brief says "2-4 hours" but the requirements include ingestion, modeling, transformation, testing, documentation, and a presentation, they're either lying about the time estimate or delusional about what's involved. Either way, red flag.
The feedback test. Ask upfront: "Will I receive detailed feedback on my submission regardless of the outcome?" If they can't commit to that, they're telling you the evaluation is one-directional. You give them 15 hours; they give you a three-sentence email. 65% of candidates never or rarely receive interview feedback. That's not an assessment process; it's a black hole.
The legal test. Under the Fair Labor Standards Act, any work that benefits an employer must be paid. A Nashville dental practice paid $50,000 in back wages after the DOL found that candidates had performed actual patient-facing work during unpaid trial shifts. Take-homes that produce deployable work sit in the same legal gray area. Nobody's enforcing it yet, but the precedent exists.
What I Actually Do Now
I've stopped doing take-homes that exceed four hours of estimated work. Full stop. If a company sends me a brief that's clearly a weekend project, I respond with something like: "I'm happy to do a focused 2-3 hour exercise, or I'm happy to do a paid consulting engagement at my hourly rate. Which works for you?"
Most companies ghost me after that. Some actually appreciate the directness. The ones that appreciate it are invariably better places to work.
I've also started asking a question early in every process: "What does the full interview loop look like, and what's the estimated total time commitment?" If the answer exceeds 10 hours, I factor that into whether the role is worth pursuing. Not every opportunity justifies 20 hours of unpaid labor.
The data engineering market is growing. Roles are projected to increase 18% in the coming years. This isn't a dying field; it's a field with a broken hiring process being exploited by companies that know candidates are desperate. The problem isn't scarcity of roles. It's scarcity of proper hiring infrastructure. Companies expanding rapidly are just copy-pasting broken assessment formats at speed.
61% of job seekers have been ghosted after an interview. 80% won't reapply to companies that ghost them. Companies running 20-hour take-homes with no feedback aren't just burning individual candidates; they're torching their own recruiting pipeline.
So here's my question for the hiring managers still running these loops: if 80% of your candidates fail the take-home, you ghost the failures, and AI has destroyed whatever signal you thought you were getting, what exactly are you paying for with all that candidate time? Because from where I'm sitting, it looks like the only thing you're selecting for is desperation.