The JavaScript ecosystem has a magic problem.
Not the fun kind. The kind where you stare at your code, everything looks correct, and something sti...
That was perfect 🤣
You had me interested until "When state changes, the whole tree re-renders. But lit-html only touches the DOM nodes that actually changed". This problem has been solved with signals, popularized by SolidJS. Signals are also very simple: a function to get, and a function to set. I liked most of the rest of this, but I feel like this is a step backwards. It's a positive step from React, but that's a horrible pile of kludges because it was never intended to be a framework, just a library, and even Angular uses signals now.
I'm not saying signals are the best solution that exists, but they enable updating only the very specific part of the DOM that changed. They are also separate from the concept of components, and can even be global. They are just data, a getter and a setter, so they can live anywhere, and they're very, very easy to debug if something is weird.
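The get/set idea is small enough to sketch in a few lines. This is a toy illustration of the concept, not SolidJS's actual implementation: reading a signal inside an effect subscribes that effect, and setting the signal re-runs only its subscribers.

```javascript
// Toy signal: a [get, set] pair with automatic dependency tracking.
// Not SolidJS's real code, just the core mechanism.
let currentEffect = null;

function createSignal(value) {
  const subscribers = new Set();
  const get = () => {
    // Whichever effect is currently running becomes a subscriber.
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };
  const set = (next) => {
    value = next;
    // Re-run only the effects that actually read this signal.
    subscribers.forEach((fn) => fn());
  };
  return [get, set];
}

function createEffect(fn) {
  currentEffect = fn;
  fn(); // first run registers which signals were read
  currentEffect = null;
}

// Usage: only code that read `count` re-runs when it changes.
const [count, setCount] = createSignal(0);
const rendered = [];
createEffect(() => rendered.push(`count is ${count()}`));
setCount(1);
setCount(2);
// rendered: ["count is 0", "count is 1", "count is 2"]
```

Note that there is no render loop here at all: if nothing calls `set`, no code runs, which is the point the comment above is making.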
Personally I think frameworks went off the rails at least a decade ago and we're starting to pull out of the nose dive. From this article, I think you grok the problem, mostly at least. If you read the short "The kitchen sink problem" paragraph in nuejs.org/docs/why-nue, it's spelled out in a concise pragmatic way.
After reading that, I'm back to HTML+CSS+JS but sometimes I'm using SolidJS as a light, superfast signals store for reactive data. Funny how everything old is new again. Modern CSS is surprisingly complete and feature-rich now: CSS variables, nested rules, :has, etc. Those using Tailwind or something similar don't get it yet. They probably will eventually. Those still using React definitely don't get it. I think you get it fairly well, but haven't quite taken it all the way yet. Thank you for posting this, it's another experienced developer saying "hold on a minute, let's think about this in the Big Picture." We need a lot more articles like this one to get people to rethink things more.
Thank you for the feedback! I really appreciate it.
You are right: signals are great, and fine-grained reactivity is genuinely faster on paper, at least. Indeed, re-rendering the whole tree to update a single component every once in a while sounds very counter-intuitive.
But I ran benchmarks on whole-tree re-rendering with lit-html and the results were surprising: you get clean 120 FPS just like with signals, while wasting ~100ms of CPU every 10 seconds. For the vast majority of apps, that's not a problem worth solving at the cost of architectural complexity.
Here's the benchmark if you want to dig in.
I'd rather have a simple, predictable rendering model and pay a cost nobody notices than optimise for a problem I don't have. Which, funnily enough, is the same argument as the rest of the post.
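The whole-tree model being defended here can be reduced to a toy sketch (illustrative only, not lit-html itself, which diffs templates against real DOM nodes): one render function re-runs over the entire state on every change, with no dependency tracking anywhere.

```javascript
// Toy whole-tree rendering model: every state change re-runs one render
// function over the whole state. A real library (lit-html) would diff
// the template output against the DOM; here we just rebuild a string.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(event) {
      state = reducer(state, event);
      // Re-render everything on every change; no subscriptions per field.
      listeners.forEach((render) => render(state));
    },
    subscribe(render) {
      listeners.push(render);
      render(state); // initial render
    },
  };
}

const store = createStore(
  (state, event) =>
    event.type === "add" ? { items: [...state.items, event.item] } : state,
  { items: [] }
);

const frames = [];
store.subscribe((state) =>
  frames.push(`<ul>${state.items.map((i) => `<li>${i}</li>`).join("")}</ul>`)
);
store.dispatch({ type: "add", item: "a" });
store.dispatch({ type: "add", item: "b" });
// frames: ["<ul></ul>", "<ul><li>a</li></ul>", "<ul><li>a</li><li>b</li></ul>"]
```

The trade-off is exactly the one discussed in this thread: the model is trivially predictable (one function, one input, one output), at the cost of some redundant render work that a diffing layer then discards.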
Good feedback yourself there! Two things. I don't find signals add architectural complexity in any way. They just behave like variables, except that there are get/set accessors. They can go anywhere, are completely separate from components (one of the key points in the post above?), and thus I find them effectively cost-free. The benefit is that there is no render loop. At all. If nothing changes, no code runs. Only when it does change does the affected code run. It's optimal by default, without something like lit. That 100ms every 10 seconds is still 1% of your CPU being thrown away when it could just be idle. I do agree that in the pragmatic world, none of that matters. The solution in your article is still a good choice and a good combination. I'm just a bit of an idealist when designing something new, especially in 2026 when we no longer need render loops.
The complexity that signals add in my opinion is a hidden dependency graph that could become hard to trace in large-scale systems. If you look at the benchmark, Inglorious Web wastes a bit of CPU time but the JS heap is very low, while Svelte uses less CPU but has a higher heap allocation. So in the end it seems like we're boiling down to the old caching problem: waste time to save space, or waste space to save time.
I've noticed the same pattern: as abstractions get more "magical", debugging cost increases even if developer experience initially improves. Predictability and explicit state transitions are underrated in modern frontend architecture.
Maybe that's why I'm wary of clean architecture: too many abstractions.
Thank you for your feedback! I'm glad you feel the same way.
Interesting take on clean architectures: personally, I think you can achieve a clean architecture without the need for magic. That's what I was trying to do with Inglorious Store: boring is sometimes better than clever. And clean code should not be conflated with over-engineered code.
I see clean code and clean architecture as different things. Clean code isn't overrated, but clean architecture, to me, is, and it sometimes contradicts clean code. Example: building things so that they could be modified later. YAGNI.
Fair distinction! I think the intent behind clean architecture is sound (keep dependencies sane, separate concerns), but the prescribed implementation is often overkill: interfaces and abstraction layers for a CRUD app nobody asked for.
Inglorious Store tries to capture the intent without the ceremony. Structure through convention, not boilerplate.
I'll definitely explore it more and study the approach from that POV.
"Magic" frameworks are a tradeoff: you pay cognitive debt instead of upfront complexity.
When things work, magic is fast. When things break, and they always eventually break, you're debugging through layers you didn't write and don't understand. I've had 3am incidents that would've taken 20 minutes to fix in explicit code and took 3 hours in a "magic" setup because I couldn't see what was actually happening.
The shift you described toward explicit, understandable systems is the right instinct. The question is usually: how much magic can you afford given your team's ability to debug it? That answer varies a lot by context.
Really interesting take. The Vue destructuring trap you mentioned has bitten me more times than I'd like to admit - spent a solid hour once wondering why a composable's return values weren't updating before realizing I'd destructured the reactive object.
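The trap described here can be reduced to plain JavaScript. Vue's `reactive()` wraps an object in a Proxy; destructuring copies a primitive value out of that proxy at that moment, so the copy never sees later mutations. This is a toy `reactive()` for illustration, not Vue's implementation:

```javascript
// Toy stand-in for Vue's reactive(): a Proxy around a plain object.
// (A real framework would notify subscribers inside the set trap.)
function reactive(target) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      return true;
    },
  });
}

const state = reactive({ count: 0 });

// The trap: this copies the *current* value of count out of the proxy.
const { count } = state;

state.count = 5;

// `count` is still 0: the destructured binding is a dead copy.
// `state.count` is 5: only reads through the proxy see the change.
```

In actual Vue, the usual remedy is to keep reading through the reactive object, or to convert its properties with `toRefs()` so each field stays reactive after destructuring.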
The ECS approach is what caught my attention though. I've worked on a couple game-adjacent projects and the entity/component pattern is genuinely underused in web dev. The way you're separating state from behavior from rendering feels like it'd scale really well for dashboards and config-driven UIs where the component tree metaphor starts to feel forced.
My one question: how does this handle cross-entity communication? Like if entity A needs to react to changes in entity B - is that all through the event/notification system? Curious how that looks in practice when you've got 20+ entity types interacting.
Thanks Kai, glad it resonated! The Vue destructuring story is exactly the kind of thing that's hard to explain until it's happened to you.
Cross-entity communication is all through the event system, with three targeting modes:
Handlers can also fire further events via api.notify(), which might sound like it could spiral, but every event goes through a queue that processes them in order, so the flow stays deterministic no matter how many entities are interacting.
If a render function really needs to peek at another entity's state, api.getEntity("entityId") gives you a read-only snapshot.
And when you need to understand why something happened, the store is compatible with Redux DevTools: full event history, state inspection, and time-travel debugging out of the box.
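The queuing behavior described above can be sketched in miniature. This is a hypothetical toy event bus, not Inglorious Store's actual implementation: events fired from inside a handler join a FIFO queue instead of being handled re-entrantly, so processing order stays deterministic.

```javascript
// Toy queued event bus: handlers may fire further events via notify(),
// but nothing runs nested; everything drains from one FIFO queue.
// Hypothetical names, not Inglorious Store's real internals.
function createBus(handlers) {
  const queue = [];
  let processing = false;
  const api = {
    notify(event) {
      queue.push(event);
      if (processing) return; // already draining; event waits its turn
      processing = true;
      while (queue.length > 0) {
        const next = queue.shift();
        (handlers[next.type] || (() => {}))(next, api);
      }
      processing = false;
    },
  };
  return api;
}

const log = [];
const bus = createBus({
  // This handler fires a further event; it is queued, not run nested.
  ping(event, api) {
    log.push("ping");
    api.notify({ type: "pong" });
    log.push("ping done"); // runs BEFORE pong is handled
  },
  pong() {
    log.push("pong");
  },
});

bus.notify({ type: "ping" });
// log: ["ping", "ping done", "pong"]
```

The key property is visible in the log order: the `ping` handler finishes completely before `pong` runs, even though `pong` was fired in the middle of it. That is what keeps cascades of cross-entity events traceable.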