There is no answer to that question. Because the question is about the result - a pragmatically calculated, measurable, expected result.
But building is about the process.
It's about that zen. That quiet delight when you finally solve something that's been annoying you for years. Maybe it annoys only you. Maybe you're solving a problem that doesn't exist for anyone else. But now it's solved, and you get your couple minutes of glory.
Now you can ditch all those validation libs - each with its own syntax, its own limitations - and just write TypeScript that gets validated at runtime:
```ts
export default defineRoute(({ POST }) => [
  POST<{
    json: {
      email: TRefine<string, { format: "email" }>;
      age: TRefine<number, { minimum: 18 }>;
    };
  }>(async (ctx) => {
    const { email, age } = ctx.validated.json;
    // validated before reaching here - no lib syntax, just TypeScript
  }),
]);
```
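Under the hood, a refined type like `TRefine<string, { format: "email" }>` has to be backed by an actual runtime check. Here is a minimal sketch of what that runtime side might look like; the `Refinement` type, the `validate` helper, and the email regex are all hypothetical illustrations, not the library's real internals:

```typescript
// Hypothetical sketch: refinement options compiled down to runtime checks.
type Refinement =
  | { format: "email" }
  | { minimum: number };

function validate(value: unknown, refine: Refinement): boolean {
  if ("format" in refine && refine.format === "email") {
    // deliberately simplistic email check, for illustration only
    return typeof value === "string" && /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
  }
  if ("minimum" in refine) {
    return typeof value === "number" && value >= refine.minimum;
  }
  return false;
}

validate("a@b.co", { format: "email" }); // true
validate(17, { minimum: 18 });           // false
```

The point is only that the type-level options and the runtime checks share one source of truth, so nothing has to be declared twice.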
And the breeze seems here to stay. But no - another annoyance starts to rise from the depths.
After your 100th route, a file-based routing tree becomes a nightmare.
What are all these files? Which belongs to what? Which is the master (handler) and which the servant (helper)?
And the answer comes in the form of a question: why not organize routes as directories, with a handler file inside? Wait, but that's a lot of folders!
Ok, let's sketch how it would look compared to the actual mess...
Ten minutes later: holy structure, what order!
```
api/
  users/
    [id]/
      index.ts   ← handler for /api/users/:id
      helpers.ts ← clearly not a route
```
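Mapping such a directory path back to a URL pattern is mechanical. A rough sketch of that mapping, following the convention described above (`filePathToRoute` is a hypothetical helper name, not the framework's actual API):

```typescript
// Hypothetical sketch: derive a URL pattern from a route directory path.
// Convention assumed: folders are segments, [param] folders become :param,
// and index.ts inside a folder is that route's handler.
function filePathToRoute(filePath: string): string {
  return (
    "/" +
    filePath
      .replace(/\/index\.ts$/, "") // drop the handler file name
      .split("/")
      .map((seg) =>
        seg.startsWith("[") && seg.endsWith("]")
          ? ":" + seg.slice(1, -1) // [id] -> :id
          : seg
      )
      .join("/")
  );
}

filePathToRoute("api/users/[id]/index.ts"); // "/api/users/:id"
```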
A couple of weekends of spare time later: another annoyance is behind. Another minute of glory.
So, is the breeze here forever? For a couple of hours perhaps, but...
Why the heck are routes limited to monolithic segments like posts/:id?
How do you cover simple paths like posts.json or posts/1.json in a single route?
Should you create a separate route for each? That's crazy!
Time for a new zen. Turns out path-to-regexp v8 is finally state-of-the-art in routing - so flexible, so delightful to integrate. And here it is, the new Power Syntax for routes:
```
book{-:id}-info           ➜ /book-info or /book-123-info
locale{-:lang{-:country}} ➜ /locale, /locale-en, /locale-en-US
api/{v:version}/users     ➜ /api/users or /api/v2/users
```
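Such optional segments ultimately compile down to regular expressions. A hand-expanded approximation of the first pattern (this is not path-to-regexp's actual compiled output, just a sketch of the idea):

```typescript
// Hand-written regex approximating book{-:id}-info:
// the "-:id" group is optional, and :id captures one path-safe token.
const bookInfo = /^\/book(?:-([^/-]+))?-info$/;

bookInfo.test("/book-info");          // true
bookInfo.exec("/book-123-info")?.[1]; // "123"
bookInfo.test("/books-info");         // false
```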
So far so good.
Now, finally, a comfortable weekend! Or not?
Wait - forgot to wire auth middleware into the latest routes.
A couple of Neovim strokes and we're good.
A couple of Neovim strokes later: wait, how could I forget about auth middleware, it's essential!
And how is it that in the 21st century you still have to manually wire middleware into each route?
That's nonsense, that's anti-progress!
One wasted weekend later: who said stylesheets can be cascading but middleware cannot?
It's that easy - create a use.ts in any folder and all underlying routes automatically wire the exported middleware. No imports. No repetition.
```
api/
  admin/
    use.ts       ← auth runs for every route under /admin
    users/
      index.ts   ← inherits automatically
      [id]/
        index.ts ← inherits automatically
```
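The cascading resolution itself is just a walk from the root down to the route's folder, collecting every `use.ts` along the way. A toy sketch of that walk (the function name and the `Set`-based stand-in for a filesystem scan are assumptions, not the real implementation):

```typescript
// Hypothetical sketch of cascading middleware resolution:
// walk from the root to the route's folder, picking up every use.ts.
function middlewareChain(routeFile: string, foldersWithUse: Set<string>): string[] {
  const segments = routeFile.split("/").slice(0, -1); // drop the file itself
  const chain: string[] = [];
  let dir = "";
  for (const seg of segments) {
    dir = dir ? `${dir}/${seg}` : seg;
    if (foldersWithUse.has(dir)) chain.push(`${dir}/use.ts`);
  }
  return chain;
}

const withUse = new Set(["api/admin"]);
middlewareChain("api/admin/users/[id]/index.ts", withUse);
// -> ["api/admin/use.ts"]
```

Because the chain is built in root-to-leaf order, outer middleware naturally runs before inner middleware, just like cascading stylesheets apply outer rules first.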
No more wasted weekends because someone forgot to wire something.
Now they'll be wasted for far more reasonable reasons.
And one more annoyance has been spoiling the breeze for years.
Whatever tricks or hacks I tried to get a clean, type-safe, validated round-trip from client to server - none fully satisfied.
Then a perfectly legal question hits: once there are typed validation targets on the server, why not use them to generate typed clients? Wait, even better - TypeBox runs perfectly in the browser, so why not use the same validation routines on the client?
```ts
import fetchClients from "_/front/fetch";

// fully typed, validated client-side before the request is even sent
const user = await fetchClients["users/[id]"].GET([123]);
```
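The runtime side of such a client can be tiny; the type information does the heavy lifting at compile time. A rough sketch of the path-building part, mirroring the `users/[id]` route-key convention from the example above (`buildPath` is a hypothetical helper, not the generated client's real internals):

```typescript
// Hypothetical: substitute positional params into a "users/[id]"-style route key.
function buildPath(routeKey: string, params: (string | number)[]): string {
  let i = 0;
  // each [param] placeholder is replaced, left to right, by the next param
  return "/" + routeKey.replace(/\[[^\]]+\]/g, () => String(params[i++]));
}

buildPath("users/[id]", [123]); // "/users/123"
```

From there, the generated client only needs to run the shared validation routines and hand the built path to `fetch`.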
A couple of weekends later: so productive now with type-safe, client-side validated fetch clients!
Time for some rest, finally. Wait - a customer asks for an OpenAPI spec for their API.
Ok, let's quickly wrap a script that gets all routes and generates the spec...
Multiple "quickly wrap a script" iterations later: how can such a simple task need so much manual work?
No way, there has to be an automated solution. And another zen is on the road - taking the AST-parsed routes with their params, payloads, and responses, and gluing them together into an automated OpenAPI 3.1 spec generator.
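The output shape is the standard OpenAPI 3.1 structure; the interesting part is only the mapping from parsed routes onto it. A toy version of that mapping, under assumed names (`ParsedRoute`, `toPathItem`) and a deliberately simplified route shape:

```typescript
// Hypothetical parsed-route shape; a real AST pass would produce richer data
// (payload schemas, response types, refinements, etc.).
interface ParsedRoute {
  path: string;             // e.g. "/api/users/:id"
  method: "get" | "post";
  params: string[];         // e.g. ["id"]
}

function toPathItem(route: ParsedRoute) {
  return {
    // ":id" segments become "{id}" in OpenAPI path templates
    [route.path.replace(/:([^/]+)/g, "{$1}")]: {
      [route.method]: {
        parameters: route.params.map((name) => ({
          name,
          in: "path",
          required: true,
          schema: { type: "string" },
        })),
        responses: { "200": { description: "OK" } },
      },
    },
  };
}

toPathItem({ path: "/api/users/:id", method: "get", params: ["id"] });
// -> { "/api/users/{id}": { get: { parameters: [...], responses: {...} } } }
```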
And that zen doesn't come alone. It's accompanied by the sincere wow of customers who discover they got detailed OpenAPI for free.
And that adds up to a lot - much more than any pragmatically calculated result expected from an apparently pointless effort.
Because the real result is the sum of all those micro-achievements that push you forward.
And no - the breeze isn't supposed to arrive once and stay forever.
It lives in the movement, not the stillness.
Top comments (10)
honestly this is one of those things i keep trying to explain to people who ask "why build your own X when Y exists". the answer is never about the output - it's about what you learn by going through the process. i built a mini validation lib once that i used for exactly one project. was it worth it? probably not in hours. but i understood validation at a level that made me a better dev for years after. you can't buy that with npm install
I enjoyed and related to your stream-of-consciousness telling of this!
It is really hard to say whether or not we should invest time into any more frameworks. For me the answer is: whatever gives structure and guidance, either for humans or for AI, and therefore improves the quality of results, is worth doing. Whether you call it a framework or an architecture is entirely up to you.
How about the dev experience? The utility for a specific case? ...
Fair enough. Do you see any disadvantages to doing things that way compared to the traditional way?
I recently answered a very similar question from a colleague: "Why don't we generate new software directly in Assembly, or at least in C++/Rust - AI doesn't see the difference. It will be faster!"
Think of it from a practical angle: AI was mostly trained on code that already uses frameworks.
Open-source repositories, tutorials, StackOverflow answers, documentation, production examples - the overwhelming majority of them are written using frameworks. Django, Spring, React, Express, Rails, etc.
So when an LLM generates code that uses a framework, it operates in a very dense training space: patterns are well-represented, solutions are repeated thousands of times, and edge cases are documented.
If the model invents its own mini-framework, it immediately leaves that safe zone (and we leave it too).
Right away, several practical problems appear:
Review cost
Reviewers, validation engineers, security auditors - whatever we call them - already understand the framework.
They can focus on business logic instead of reverse-engineering a home-grown AI architecture. With a custom one, they have several times that work.
Debugging cost
If a bug happens in a known framework, thousands of engineers have likely encountered it before.
If the bug is inside a custom framework generated by AI, congratulations - you now own a brand new ecosystem.
Knowledge transfer
Teams change. Engineers leave. New ones join. Framework knowledge is transferable across companies.
Your custom AI-generated framework is not.
Economic reasons
Engineering time is expensive. Spending hours reviewing unfamiliar abstractions is expensive.
If you hire an engineer with React knowledge, there's a high probability you'll find one.
If you hire an engineer to support a mini-framework written by AI, well, that will be a bit more challenging, to put it mildly.
Frameworks are essentially pre-paid cognitive infrastructure.
So in the AI era the question isn't "why use frameworks?"
If anything, it's the opposite.
Why would you ask AI to generate an entirely new framework - when it was trained on millions of projects that already solved the same problems?
The last dog dung I saw on the street deserved more attention than this question. This comment is not meant to offend you @sleewoo, because I know you are not its source. Reading your post, I'm certain that deep down you also know the answer.
You can rephrase that question in many ways:
"in the AI agents era, why ..... do anything at all...."
Well, we true programmers weave software. Yes, we weave those suckers. We create true quality by diving deep like pro scuba divers, not tourist snorkelers who can't see beyond a couple of meters - thinking those few meters are all there is.
What if instead we used AI to help us navigate deeper? To explore boundaries, known limits, and architectures? What if we could use it to build even greater quality software (frameworks included), woven by skilled programmers with their own unique styles and ideas?
Don't stay on the surface. Start diving, my friends.
The frameworks die. The annoyances persist. You're not building a framework; you're building a record of what annoyed you enough to fix. That's the only documentation that matters.
This comment makes no sense to me. If we talk about death, everything dies eventually. Some frameworks last longer than others, true, but that's something else.
He is building a framework. He built a methodology, a way of going about something that will ease and boost whoever chooses to adopt it.