A company rewrote a complex query language library in one day using AI and saved half a million dollars annually. That's the whole story, and it should make every engineering manager a little uncomfortable.
Reco.ai posted about this on their blog. It hit Hacker News, drew 55 upvotes and 49 comments, and the comment section did exactly what Hacker News comment sections do when something threatens the professional identity of its readers: it got very philosophical very fast.
What Actually Happened
JSONata is a query and transformation language for JSON data. It's not a toy project. Reco used it, hit performance walls, and instead of scheduling a six-week engineering sprint, they pointed AI at the problem. One day later, they had a working rewrite. Projected savings: $500,000 per year in infrastructure costs.
That number is almost certainly real. JSONata processing at scale is expensive. A custom implementation tuned to your data shapes and access patterns can cut compute costs dramatically. The AI didn't guess at the solution. It read the spec, understood the existing behavior, and produced code that matched outputs while running faster.
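To make the "tuned to your data shapes" point concrete, here is a minimal sketch of the difference between a generic query interpreter and a specialized rewrite. The data shape, the query (roughly what a JSONata expression like `order.products[price > 100]` would do), and both functions are hypothetical illustrations, not Reco's actual code:

```python
def generic_query(record, path, predicate):
    """Generic interpreter: walks an arbitrary path and applies an
    arbitrary predicate, paying lookup and dispatch costs per record."""
    items = record
    for key in path:
        items = items[key]
    return [item for item in items if predicate(item)]

def specialized_query(record):
    """Hand-specialized version of the one query the service actually
    runs: direct attribute access, no generic machinery."""
    return [p for p in record["order"]["products"] if p["price"] > 100]

record = {"order": {"products": [
    {"sku": "A1", "price": 150},
    {"sku": "B2", "price": 40},
]}}

# Both return the same result; the specialized path just does less work.
assert generic_query(record, ["order", "products"],
                     lambda p: p["price"] > 100) == specialized_query(record)
```

At scale, eliminating the interpreter overhead on every record is where most of the infrastructure savings would come from.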
This is not a one-off. It's happening across codebases right now, quietly, in companies that aren't writing blog posts about it.
The Hacker News Reaction Was Predictable
The comments split into two camps. Camp one: "this is impressive, AI is a real tool now." Camp two: "but did they test it properly? Edge cases? Production load? What about correctness guarantees?"
Camp two is not wrong. They're just describing the new job description.
The rewrite took a day. The validation, the edge case hunting, the production monitoring, the performance profiling under real traffic — that's where the engineering hours actually went. Probably weeks of them. The AI compressed the creative output phase. It did not compress the verification phase, because verification requires context that lives outside the codebase.
When your JSONata rewrite hits a date-formatting edge case at 2am on a Tuesday because one enterprise customer sends timestamps in ISO 8601 with milliseconds and another sends Unix epoch strings, a human is still making the call on how to handle it without breaking either customer's integration. That decision isn't in any spec.
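The judgment call is human; the fix itself is small. A sketch of the kind of normalization shim that decision might produce, assuming the two formats described above (the function name and formats are illustrative, not from Reco's codebase):

```python
from datetime import datetime, timezone

def normalize_timestamp(value):
    """Accept both timestamp formats seen in production (hypothetical):
    Unix epoch strings like '1709649000' and ISO 8601 strings like
    '2024-03-05T14:30:00.123Z'. Return an aware UTC datetime either way."""
    if value.isdigit():
        return datetime.fromtimestamp(int(value), tz=timezone.utc)
    # fromisoformat only accepts the trailing 'Z' on Python 3.11+,
    # so rewrite it as an explicit offset for older versions.
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

# Both customers' formats resolve to the same instant.
assert normalize_timestamp("1709649000") == \
       normalize_timestamp("2024-03-05T14:30:00Z")
```

The code is trivial. Knowing that both formats exist, which customers send which, and that neither integration can break is the part no spec contains.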
Where Humans Actually Fit
Here's the honest version of the future-of-work conversation nobody wants to have: AI compresses the parts of software work that are mechanical and well-specified. The parts that remain are the parts that were always the hard parts, just harder to point to on a job posting.
Edge case adjudication. Domain knowledge translation. Deciding which tradeoffs are acceptable for which customers. Knowing that the $2M enterprise account uses a specific JSONata expression that the rewrite will silently break if you're not careful.
This is exactly the problem Human Pages was built around. AI agents are good at generating. Humans are good at knowing which generated output is actually correct given constraints the AI can't see.
On Human Pages, a job posting for a scenario like this would look something like: "AI-generated JSONata replacement needs validation against 40,000 production transformation records. Compare outputs, flag discrepancies, categorize by severity. 6 hours of work, $180 USDC." A developer with JSONata experience picks it up, runs the comparison scripts, flags three categories of edge cases, and documents them. The AI built the first draft. The human made it shippable.
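A differential-comparison harness for a job like that might look something like the following sketch. The severity buckets, record shapes, and implementations are hypothetical, invented to illustrate the workflow:

```python
def close_enough(a, b, tol=1e-9):
    """Structural equality with a float tolerance, applied recursively,
    so benign floating-point drift doesn't count as a real mismatch."""
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return abs(a - b) <= tol
    if isinstance(a, list) and isinstance(b, list) and len(a) == len(b):
        return all(close_enough(x, y, tol) for x, y in zip(a, b))
    if isinstance(a, dict) and isinstance(b, dict) and a.keys() == b.keys():
        return all(close_enough(a[k], b[k], tol) for k in a)
    return a == b

def compare_outputs(records, old_impl, new_impl):
    """Run both implementations over recorded inputs and bucket results
    by severity: exact match, float-tolerance match, value mismatch, crash."""
    report = {"match": 0, "tolerance": [], "value": [], "error": []}
    for i, record in enumerate(records):
        try:
            old, new = old_impl(record), new_impl(record)
        except Exception as exc:
            report["error"].append((i, repr(exc)))
            continue
        if old == new:
            report["match"] += 1
        elif close_enough(old, new):
            report["tolerance"].append(i)
        else:
            report["value"].append((i, old, new))
    return report

# Tiny demo with made-up old/new implementations:
records = [{"xs": [0.1, 0.2, 0.3]}, {"xs": [1, 2]}, {"xs": []}]
old = lambda r: {"total": sum(r["xs"])}
new = lambda r: {"total": r["xs"][0] + sum(r["xs"][1:])}  # crashes on empty input
report = compare_outputs(records, old, new)
```

The script is the easy part. Categorizing the discrepancies, deciding which ones matter for which customers, and documenting them is the six hours being paid for.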
That's not a lesser version of engineering work. It's the work that determines whether the $500K saving is real or whether you're one edge case away from a customer escalation that costs more than you saved.
The Cost Math Is Changing Faster Than Org Charts
Reco saved $500K on infrastructure. They probably spent a fraction of that on the engineering time to validate and ship the rewrite. The ratio is the point. AI is making the cost of generating a solution approach zero. The cost of verifying a solution is not moving at the same rate.
Most companies haven't updated their hiring models to reflect this. They're still staffing for generation work. Junior engineers writing boilerplate, mid-level engineers translating product requirements into code, senior engineers reviewing it. The first two categories are getting compressed fast. The third is getting more important, not less.
The smart companies are figuring out that verification, validation, and contextual judgment are now the scarce inputs. They're not scarce because AI can't do them — AI can do versions of them. They're scarce because the cost of being wrong is asymmetric. A false positive in a financial data transformation isn't a bug to fix in the next sprint. It's a compliance issue.
The Question Worth Sitting With
If AI can rewrite a complex library in a day, and the verification work takes two weeks, what does a team of ten engineers actually look like in three years? Probably two people who are very good at knowing what correct looks like, and a pipeline of humans on-demand for specific validation tasks they have the domain knowledge to perform.
That's not a smaller engineering team. It's a different shape of one. And it relies on a market where you can find the right human for a specific verification task quickly, pay them fairly, and not hire them full-time for a job that exists for six hours.
The Reco story is impressive engineering. It's also an early data point in a longer trend where the value of producing code is collapsing and the value of knowing whether code is correct is rising. How you position yourself in that shift, whether you're an engineer, a company, or a platform, is the only question that matters right now.