
Discussion on: Components are Pure Overhead

peerreynders

So I don't see the focus on overhead or performance as particularly interesting in general.

I think that this position is informed by past experience on the back end and on desktop.

  • Personal (i.e. client side) computing has shifted to handheld devices.
  • While some flagship devices are still increasing single-thread performance on their fat CPU cores, average devices are opting for a higher number of low-power/small-core CPUs with lower single-thread performance.
  • Moore's law is done.
  • While improved mobile network protocols promise performance gains under ideal conditions, growth in subscriptions and consumption can quickly erode gains on the average connection.

As a result, the single-thread performance and connection quality of the average future device could be trending downwards under many circumstances.

Aside: The Mobile Performance Inequality Gap, 2021.

The headroom necessary to accommodate the overhead of microfrontends may exist over corporate backbones - not so on public mobile wide area networks. So in many ways a lean perspective, much like in the embedded space, is beneficial in web application development.

Most of React's optimizations over the last few years have been geared towards getting the most out of the client's single (main) thread in order to preserve the "perceived developer productivity" of the Lumpers component model where "React is the Application/Architecture". The aim is to forestall the need to adopt an off-the-main-thread architecture which moves application logic and client state to web workers - significantly increasing development effort. Svelte garnered attention because it is often capable of delivering a much better user experience (than React) on hyper-constrained client devices by keeping the JavaScript payload small and the CPU requirements low - while also maintaining a good developer experience.
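For illustration, here is a minimal sketch of that off-the-main-thread idea, assuming hypothetical worker.js and main.js files: application logic and client state live in a web worker, while the main thread is reduced to forwarding events and patching the DOM.

```js
// worker.js - application logic and client state (hypothetical example)
let count = 0;
self.onmessage = (e) => {
  if (e.data.type === 'increment') {
    count += 1;
    // send back only what the UI thread needs to patch the DOM
    self.postMessage({ type: 'render', count });
  }
};

// main.js - a thin layer on the main thread that only touches the DOM
const worker = new Worker('worker.js');
const output = document.querySelector('#count');
const button = document.querySelector('#increment');

button.addEventListener('click', () => worker.postMessage({ type: 'increment' }));
worker.onmessage = (e) => {
  if (e.data.type === 'render') output.textContent = String(e.data.count);
};
```

Every message hop between the worker and the main thread is coordination that the "React is the Application" model never has to think about - which is exactly why that architecture costs more development effort.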

Components are not going anywhere because modularity remains a necessity for any codebase of reasonable size.

The issue is that in many cases components aren't a zero-cost abstraction at runtime. So while component boundaries may be valuable at design time, the cost shouldn't extend beyond compile time. Frameworks/tools should favour abstractions that only impose runtime cost where there is a runtime benefit - all other (design-time) abstractions should ideally evaporate at compile time.
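As a rough, hand-written illustration of an abstraction evaporating at compile time (this is not actual Svelte or Solid compiler output): a counter authored as a component could plausibly compile down to plain DOM instructions plus one targeted update, with no component object left over at runtime.

```js
// Authored (design time): a component with one piece of state, e.g.
//   <script> let count = 0; </script>
//   <button on:click={() => count++}>{count}</button>
//
// Hand-written approximation of possible compiled output (runtime):
let count = 0;
const button = document.createElement('button');
const text = document.createTextNode(String(count));
button.appendChild(text);
button.addEventListener('click', () => {
  count += 1;
  text.data = String(count); // update only the node that depends on `count`
});
document.body.appendChild(button);
```

The component boundary still paid off for the author, but nothing of it survives to be paid for by the user.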

brucou

I think that this position is informed by past experience on the back end and on desktop.

Maybe. But I also think that user experience is the thing we care about. Performance, understood here as CPU-bound, is one of many proxies for it (along with look and feel, network, offline experience, etc.). The idea is that spending time chasing an X% improvement in "performance" that is not noticed by the target user is a waste of engineering resources. Microbenchmarks, being by design not representative of the user experience, are interesting for library makers but not so much for people picking libraries. That is, you would not pick a framework or library based on microbenchmarks. That is why I never find arguing over a limited definition of performance under unrealistic conditions remotely insightful.

The issue is that in many cases components aren't a zero-cost abstraction at runtime. So while component boundaries may be valuable at design time, the cost shouldn't extend beyond compile time. Frameworks/tools should favour abstractions that only impose runtime cost where there is a runtime benefit - all other (design-time) abstractions should ideally evaporate at compile time.

JavaScript is not a zero-cost abstraction either, and you pay most of that cost at runtime. Should we compile JavaScript to binaries and send those to the browser? Compiling is great, inlining is great, anything that makes the code run faster is great - but my point is that it is not free. There are tradeoffs, and I want to look at the full picture before adding yet another layer of complexity to a landscape that is already crowded.

Ryan Carniato

Yet the article is about removing constraints caused by the current abstraction. The foundations here predate React or this current component-centric view and are echoes from a simpler time.

That being said, I'm not suggesting going back there. My argument here has been about removing the cognitive overhead of contending with two competing languages for change on the React side (VDOM vs Hooks), and about liberating non-VDOM approaches from unnecessarily imposed runtime overhead that hurts their ability to scale.

But if that isn't convincing enough, consider the implications for things like partial hydration. This has much larger performance implications.
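For context, a minimal sketch of what partial hydration can look like - the data-island attribute, module path and hydrate export are all made up for illustration, not any particular framework's API. Only the interactive regions of a server-rendered page ever load and run component code:

```js
// Server output (static HTML, no JS shipped for the article body):
//   <article> ...static content... </article>
//   <div data-island="comment-form"></div>

// Client bootstrap: hydrate only the marked islands, lazily.
const islands = {
  'comment-form': () => import('./comment-form.js'), // hypothetical module
};

for (const el of document.querySelectorAll('[data-island]')) {
  const load = islands[el.dataset.island];
  if (load) load().then((mod) => mod.hydrate(el)); // `hydrate` export assumed
}
```

If components only exist at runtime as framework objects, drawing that "ship JS / don't ship JS" line per component is much harder than when they are compile-time artifacts.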

When I step back, this isn't micro-optimizing but adjusting the architecture to ultimately reduce complexity. Nothing like leaky abstractions to add undue complexity. Every once in a while we need to step back and adjust. But like the pool I bought last week that won't stay inflated, it often starts with finding the leak.

peerreynders

The idea is that spending time chasing an X% improvement in "performance" that is not noticed by the target user is a waste of engineering resources.

How can you be sure that it isn't noticed? Squandered runtime performance is an opportunity cost to user experience.


A Quest to Guarantee Responsiveness: Scheduling On and Off the Main Thread (Chrome Dev Summit 2018)

And there are costs to the business as well:

In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.

Marissa Mayer at Web 2.0 (2006)
Google Marissa Mayer speed research

web.dev: Why does speed matter?

JavaScript is not a zero-cost abstraction either, and you pay most of that cost at runtime.

JavaScript is the means for browser automation. Ideally most of the heavy lifting should be done by capabilities within the browser itself, coordinated by a small set of scripts. Unfortunately many JavaScript frameworks and libraries decide to do their "own thing" in pure JavaScript, potentially bypassing features that are already available in the browser.
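As one illustrative example (the class names are made up): revealing elements as they scroll into view with the browser's built-in IntersectionObserver, instead of a hand-rolled scroll handler that measures layout on every scroll event on the main thread.

```js
// Let the browser track visibility off the hot path instead of doing it in JS.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.classList.add('visible');
      observer.unobserve(entry.target); // the browser did the heavy lifting
    }
  }
});

document.querySelectorAll('.reveal-on-scroll').forEach((el) => observer.observe(el));
```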

  • Tree shaking is already used to remove unused JS.
  • Minification, which produces functional but not readable JS, is standard practice.

So tooling which emits the minimum amount of code necessary to get the job done sounds like the logical next step.

And at the risk of repeating myself:

Object-oriented development is good at providing a human oriented representation of the problem in the source code, but bad at providing a machine representation of the solution. It is bad at providing a framework for creating an optimal solution.

Data-Oriented Design: Mapping the problem (2018)

More than a decade ago part of the game industry, constrained by having to deliver optimal user experiences on commodity hardware, abandoned object-orientation as a design-time representation because the consequent runtime inefficiencies were just too great. In that case it led to a different architecture - Entities, Components and Systems (ECS) - aligned with the "machine" rather than the problem domain.
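A toy data-oriented sketch of that idea (illustrative only, not taken from the referenced talk): entities are plain indices, component data lives in flat typed arrays, and a "system" is just a tight loop over that data - no per-entity objects or methods at runtime.

```js
// Structure-of-arrays layout: cache-friendly, no per-entity objects.
const MAX_ENTITIES = 1000;
const posX = new Float32Array(MAX_ENTITIES);
const posY = new Float32Array(MAX_ENTITIES);
const velX = new Float32Array(MAX_ENTITIES);
const velY = new Float32Array(MAX_ENTITIES);
let entityCount = 0;

function createEntity(x, y, vx, vy) {
  const id = entityCount++;
  posX[id] = x; posY[id] = y; velX[id] = vx; velY[id] = vy;
  return id;
}

// "Movement system": one tight loop over the component arrays.
function movementSystem(dt) {
  for (let id = 0; id < entityCount; id++) {
    posX[id] += velX[id] * dt;
    posY[id] += velY[id] * dt;
  }
}
```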

Similarly, in the case of a (web) client application the "machine" is the browser. "Components" neither serve the browser nor the user at runtime - so it makes sense to make them purely a design-time artefact that gets erased by compilation - or perhaps "components" need to be replaced with an entirely different concept.