Zero JavaScript has been the new buzz phrase around JavaScript libraries for the last little while. And I think it's time to address the elephant (or lack of elephant) in the room. Every library is talking about subtly different things which makes it hard at times to discern what is going on. So let's see if we can bring some clarity.
First, to answer the question. Probably not. Not really. We haven't fundamentally changed how things work. JavaScript didn't get to where it is today purely out of reckless abandon, as some critics might claim.
The reasons for having JavaScript on your web pages are good ones. JavaScript can have a really positive effect on user experience. Smoother transitions, faster loading of dynamic content, better interactivity, and even improved accessibility.
So when people are talking about 0kb JavaScript what are they talking about?
Progressive Enhancement
In the past week, I've seen not one but two demos showing how HTML Forms do POST requests without JavaScript on the page. Remix Run and SvelteKit both have the ability to server render a page and then have forms function perfectly fine without loading the JavaScript bundles.
Unsurprisingly, links (<a> anchor tags) work as well under these conditions. This isn't groundbreaking, and every server-rendered library can benefit from this if they design their APIs to handle form posts. But it definitely makes for the jaw-drop demo.
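To make it concrete, here is a minimal sketch in the style of Remix's conventions (the action export and redirect helper reflect its documented patterns; saveTodo is a hypothetical data helper). The same plain form works whether or not a client bundle ever loads, because the browser natively knows how to POST it:

```jsx
// Sketch of a Remix-style route module. With JavaScript disabled, the browser
// performs a normal form POST and the server re-renders the page; with it
// enabled, the framework can intercept the submit and update in place.
import { redirect } from "remix";

export async function action({ request }) {
  const formData = await request.formData();
  await saveTodo(formData.get("title")); // saveTodo: hypothetical data helper
  return redirect("/todos");
}

export default function NewTodo() {
  return (
    <form method="post">
      <input name="title" placeholder="What needs doing?" />
      <button type="submit">Add</button>
    </form>
  );
}
```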
Spoiler Alert - I especially enjoyed the Remix Run demo where they didn't tell the audience they were not sending any JavaScript to the browser for the first 30 minutes. We just assumed they were building a client app.
Rich Harris, creator of Svelte, gave a very similar demo 4 days earlier. I'm not terribly surprised as this is core web fundamentals, and less popular frameworks have been doing the exact same thing for years even with React in tow.
For the majority of us, we might not need to cater to no-JS users. Ryan and Michael remind us repeatedly in their video that while this is really cool, the benefit is that by using the built-in platform mechanisms they can simplify the logic that you, the developer, need to write.
The best part of progressive enhancement is it is available to every framework. It's built into the browser. More meta-frameworks should support this. Ultimately, you are probably still sending that JavaScript. It's cool that you don't have to. But it is a sort of all-or-nothing deal on a per-page basis.
React Server Components
This announcement definitely was groundbreaking. Components that only render on the Server in React. These are being advertised as zero bundle-size components.
What does zero bundle-size actually mean? Well, it means that you aren't shipping these components with your bundle. Keep in mind, the rendered templates are making it to the browser eventually through a serialized format. You do save sending the React code to render it though.
Server components are stateless. Even so, there are big savings in not bundling them for a library like React, whose code scales with template size since it creates each VDOM node one by one regardless. A stateless template in a framework like Lit or Solid is just a one-line DOM template clone, on top of the template HTML itself, which needs to be sent anyway.
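As a rough illustration of that claim, here is about all the runtime work a compiled stateless template needs in that model (a sketch, not any framework's actual compiler output):

```js
// A compiled stateless template reduces to cloning a <template> element:
// no per-node VDOM creation, just one DOM clone per render.
const tmpl = document.createElement("template");
tmpl.innerHTML = "<section><h1>Hello</h1><p>Static content</p></section>";

function render() {
  return tmpl.content.cloneNode(true); // the "one-line clone"
}

document.body.appendChild(render());
```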
A better perspective is to view this as a new type of integrated API. At minimum what you save here is the component-specific data-processing you do after you load some data. React Server components let you naturally create a per-component API that is perfectly tailored for that component's needs. That API just might contain some markup, so to speak.
This means no more Lodash or Moment in the browser. That is a huge win. But the real gain is how much easier it is to avoid performance cliffs. We could have already avoided sending most of this with our APIs, but we'd need to write those APIs.
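As a sketch of what that buys you (written in the async-component form server components later settled on; db and its query are hypothetical), the heavy date library below runs only on the server and never reaches the client bundle:

```jsx
// Sketch of a server-only component: date-fns executes on the server, so it
// adds nothing to the client bundle. Only the rendered output is serialized
// and sent to the browser.
import { format } from "date-fns";
import { db } from "./db"; // hypothetical data layer

export default async function LastUpdated({ postId }) {
  const post = await db.getPost(postId); // hypothetical query
  return <time>{format(post.updatedAt, "MMMM d, yyyy")}</time>;
}
```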
What we get is a different way to do Code Splitting, and write our APIs. We are definitely shaving some weight, but zero bundle size isn't zero size.
Islands
A year or so ago Jason Miller, creator of Preact, was struggling to put a name on an approach to server rendering that had existed for years but that no one was really talking about. He landed on the Islands Architecture.
The idea is relatively simple. Instead of having a single application controlling the rendering of the whole page, as you find commonly in JavaScript frameworks, have multiple entry points. The JavaScript for these islands of interactivity could be shipped to the browser and hydrated independently, leaving the rest of the mostly static page sent as pure HTML.
Hardly a new idea, but finally it had a name. This, as you can imagine, drastically reduces the amount of JavaScript you have on the page.
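A framework-agnostic sketch of the idea (hydrate stands in for whatever your framework provides, and the data-island attribute is an assumed convention): each island is its own entry point, and everything outside them stays inert HTML.

```js
// Each [data-island] element hydrates independently with only its own code;
// the rest of the page remains plain server-rendered HTML.
import { hydrate } from "some-framework"; // placeholder for your framework

for (const el of document.querySelectorAll("[data-island]")) {
  import(`./islands/${el.dataset.island}.js`).then(({ default: Component }) => {
    hydrate(Component, el);
  });
}
```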
Astro is a multi-framework meta-framework built on top of this idea.
What's really cool about this is we are actively reducing the JavaScript sent on a page while keeping interactivity if desired. The tradeoff is these are multi-page (server-routed) apps. Yes, you could build a Single Page App but that would be negating the benefits.
To be fair, any 0kb JS app would have to function as separate pages. And with Astro, 0kb is just a matter of not shipping client components. Progressive enhancement, as described above, is a natural addition.
So Islands are definitely a way to reduce JavaScript and you might actually end up with 0kb of JavaScript where you want it. Where you don't, you don't have to load unnecessary JavaScript. And with a library like Astro you can use tools you are familiar with.
Partial Hydration
Partial Hydration is a lot like the Islands architecture. The end result is islands of interactivity.
The difference is the authoring experience. Instead of authoring a static layer and an islands layer, you write your code as a single app, much as you would with a typical frontend framework. Partial Hydration can automatically create the islands for you, shipping minimal code to the browser.
A lesser-known gem (released back in 2014!), Marko is a JavaScript library that uses its compiler to automate this Partial Hydration process, removing server-only rendered components from the browser bundle.
Beyond the DX benefits of maintaining a single application, this opens up potential coordination between components. One such application is progressive (streaming) rendering.
A loading experience like this can be coordinated between the client and server without sending a JavaScript bundle to the browser. Just because your page has data incrementally loading doesn't mean it needs a JavaScript library. Marko's out-of-order streaming with fallback placeholders needs JavaScript on the page, which gets inlined as it renders. With in-order progressive rendering, however, no JavaScript is needed at all.
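Here is a minimal sketch of in-order progressive rendering with plain Node (no framework; fetchStories is a hypothetical async source): the server flushes HTML as each piece of data resolves, and the browser paints each chunk as it arrives, no client JavaScript required.

```js
// In-order streaming: write HTML chunks as data becomes available.
// The browser renders progressively; no JS ships to the client.
import { createServer } from "node:http";

createServer(async (req, res) => {
  res.writeHead(200, { "Content-Type": "text/html" });
  res.write("<!doctype html><html><body><h1>Stories</h1>");

  for await (const story of fetchStories()) { // hypothetical async iterable
    res.write(`<article>${story.title}</article>`); // flushed as it resolves
  }

  res.end("</body></html>");
}).listen(3000);
```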
Notice the client loading states of this Hacker News Demo, and then open the network tab to see the absence of shipped JavaScript.
What's particularly cool about this is the way the browser holds navigation until the page starts loading. A page can load its static content quickly and have that similar client-side style progress indication with no JavaScript bundle.
In general, Partial Hydration extends the benefits of Islands giving the potential to treat your pages as single coordinated apps.
So 0kb?
Maybe not, but all of these approaches and libraries bring some good benefits. JavaScript brings a lot of value, but we don't need as much of it everywhere. Adding new ways to leverage the server, on top of React or Svelte, can help remove much unneeded bloat without fundamentally changing the developer experience.
Islands approaches allow applications that do want to operate in no/low JavaScript mode to do so incrementally, without the buy-in being all or nothing for each page. We can even accomplish dynamic loading without shipping a JavaScript bundle.
The real winner is the developer. All of these approaches give us the tools to simplify client-server interactions. That has been the real challenge as we attempt to move more to the server. And that is the really exciting part.
Top comments (57)
I can understand this trend from a Blog or E-Commerce POV. But for rich web applications this makes things much more complicated imo.
I was so happy when things moved away from the server to the client, where trivial functionality would not have disastrous security implications. However, it seems people definitely want to get back to those days.
Another thing I dislike about this trend is that, due to increasing load on the server, it will be harder to enter the market with scalable low-cost solutions.
Why don't the frameworks make more use of dynamic import for load performance?
Why? If you render everything on the server, and let the browser do its magic, plus maybe use some HTMX, you end up only having to care about state on the back end, instead of having both frontend and backend state without any benefits whatsoever.
Why do I want to manage state in the backend? This is something at least I want to avoid at any cost.
You have to manage state in the backend anyway. This way, you simply avoid having to deal with it on the frontend, too.
Can you show me an example of state that I definitely have to manage in the backend? In my current understanding there is none.
Precisely: the models here go in opposite directions. The desire stems from wanting to manage state on the frontend, so using a client model lets us keep the server mostly stateless while having a single developer experience. It's basically the complete opposite approach, but it has the same benefits in the opposite direction. The difference, arguably, is that for many apps the state is in the right place (closer to the customer).
I won't say this is the right solution for everything, but what is cool is that if you wanted to scale state to the server, it isn't that much of a jump, whereas given the complexity of the frontend, I'd say the opposite direction is not as true.
This is the fundamental difference about this time around. And why things are not just going back to how they were 10 years ago.
Thank you for this explanation. I engage in these critical discussions mainly to find out which trends are hype and which are solid.
For sure, I think there is a lot of noise here, especially from the primarily server-rendered crowd. They've been waiting for like a decade to say I told you so, and the client approach hasn't been giving them that. There have been some missteps and some learnings, but no one thinks things are suddenly going to go back to how they were.
The things to discern, I think, revolve around architectural nuances. I always say follow the client state. This has been the bane of web dev: a game of musical chairs (hot potato) to see who is left holding the bag. Eventually the client side was like, it's fine, we like state, it's worth the complexity. If where you manage state has changed, you can't view even similar-looking solutions the same.
My hope is with enough perspective of different projects working in this area we can see the emergent trends and better evaluate what is actually going on.
Actually, both approaches are needed; it's what you are building that dictates how much state you should retain on the client side.
Wanting to avoid server-side state at all costs is just being lazy and not wanting to be an actual developer who creates solutions to your customers' problems.
You want to make everything just with front-end tech, come on, that isn't happening, there's always a service layer and a database, always.
There will always be the client/server model as long as we still use the HTTP protocol.
This split is never going away, just deal with it and become a full-stack developer.
"They've been waiting for like a decade to say I told you so"
This goes back even further. I've been doing software development for 14 years; it used to be called 3-tier architecture. There always was a frontend layer, a service layer, and a database layer, since well before the first web bubble.
The client/server division is as old as TCP/IP itself.
Front-end developers not wanting to have the server side is a funny proposition.
Let me propose not having a front-end at all, by allocating all the code on the server side and creating a splitting mechanism to replace the HTML, so applications can actually run truly distributed, automatically moving running binary code between CPUs (even of different architectures), and we can end this insanity once and for all.
It's a matter of moving the boundaries. No one wants to write the interop part, the API, when there is no additional need for it.
The thinking of frontend on the backend is that you write a single app that communicates directly with your service/database layers. It isn't unlike how things were before, when we were mostly server rendering stuff. The difference is that the front of the backend looks more like the frontend than the backend. It's a bit like mobile development.
This is all an abstraction/framework-level consideration. It doesn't change or oust core backend technologies. It's more suggesting that viewing an application as only a bunch of rendered HTML pages is an antiquated model, and we want a unified one. Now, traditional backend technologies could (and have) taken this approach too; it is just that JavaScript frameworks have been working in this ultra-lean zone for years and still have the advantage of running in the browser. So the question is whether they can bridge the gap in a way that hasn't been really successfully realized thus far.
"So the question is if they can bridge the gap in a way that hasn't been really successfully realized thus far."
That's my point, there's lot of space for engineering better things.
The irony is that using remote access gives a seamless "cloud" experience of a native application.
JS started lean, but is now so bloated that native applications outperform it. And ironically, they are smaller in size. I see that big 90MB Electron blob there; it's like a white gorilla in the middle of the class.
Dynamic Imports + Components is sort of where this is coming from. Components lead to co-location of fetching, and once we started lazy loading, client-side waterfalls started to become a problem. There are other ways to solve this, of course. But the naive approach is causing larger performance issues than pretty much any optimization the React team could counter.
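For illustration, the naive pattern looks something like this (standard React lazy/Suspense; the fetch inside Profile is assumed): the data request can't even start until the code chunk has downloaded and executed, serializing two round trips.

```jsx
// The client-side waterfall: request #1 fetches the component chunk; only
// after it runs can the component issue request #2 for its data.
import { lazy, Suspense } from "react";

const Profile = lazy(() => import("./Profile")); // chunk request #1

export default function App() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      {/* Profile's own fetch("/api/profile") is request #2 */}
      <Profile />
    </Suspense>
  );
}
```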
React Server Components are basically a different sort of Lazy Loading. They wrapped the parallelizing dynamic import problem and progressive rendering problems together. This can increase performance considerably, and was most likely motivated by usage at Facebook in their new redesign. Marko's progressive rendering and partial hydration grew from eBay's need when moving to Node almost a decade back.
This whole motion is coinciding with the move to serverless which offers relatively inexpensive metered compute. Both Remix Run and SvelteKit highlight serverless deploy as a starting configuration option. I think this is how they aim to handle scalable low-cost solutions.
Isn't that out-of-the-box functionality with a web browser? I think I'm missing a step, because (without watching a long video) this seems to be talking about websites that work without JavaScript - which is all websites by default. If you use JavaScript and break default functionality in the process, you're taking a step backwards and uninventing the wheel!
It might be funny that I'm coming at this sort of backwards. The reality is that app development has gotten to a point where there is no expectation for it to work without JavaScript turned on. It might be working backwards, but that is completely where things are at. The old 2 steps forward, 1 step back sort of thing.
A thing like a form has always been here to solve the problem, but if you look at highly interactive apps, they moved way beyond these simple interactions and at some point stopped even trying to preserve this, having passed a threshold where the experience without JavaScript would be so poor anyway.
A simple demo like this really makes it obvious, and I think the Remix guys needed to make an example like this to really hit it home for that audience. But even speaking for myself, I haven't been making apps that looked like that for almost a decade. To be fair, it was always the developer's choice to support it, but was it worth the effort without tooling to help you do so? Probably not for a lot of things.
I'm happy to see this coming back into the picture, in terms of being able to have both. But it wasn't a priority for JavaScript frameworks for a long time.
To state it another way: if your goal is to maximize the experience you can deliver with all tools available, you aren't going to focus on arbitrary limitations initially. These could be important limitations, but if they are outside your target, how much time are you going to spend on them? I think we are seeing a shift (a maturing) of these technologies due to their own growth, and due to a growing expectation of them that goes beyond their original goals.
The fact that server-rendered components and pre-rendering in general rely on JavaScript kind of makes any backend app not written in Node.js carry a huge dependency just to deliver that experience. Node is not particularly known for having mature backend frameworks at the level of Laravel (PHP) or Rails (Ruby). Are we just expected to ship our apps with our backend language and Node to pre-render stuff? Jeez, what's next?
Well it's possible that something like WASM bridges the gap a different way. But what we are seeing is JavaScript being the only language of the browser getting pushed into the server to deliver on the idea of a single web. It's not surprising.
People can choose not to use it, just like they haven't been using Node. But there is a desire for universal solutions in this space, so I've been expecting a mad rush. The writing has been on the wall for a while now. I guess it's taken the right time for this to happen. Not sure of all the catalysts, but something has been changing.
Or, you know, you could just render it without JavaScript...
From the section on islands:
And that's not even all bad. One of the ways that SPAs currently have to reinvent the browser poorly is that for accessibility, they have to signal to screen reader users that the page has loaded, by having a hidden ARIA live region with text like "navigated to [page name]". The browser and screen reader can do this better, with a user experience that the screen reader controls, e.g. automatically starting to read the new page and/or playing a screen-reader-supplied sound effect. That's why I continue to develop MPAs.
And ironically, due to these frameworks misguiding you, you believe that is the way to handle SPA navigation.
The answer is actually focus management: once the page has loaded, you move focus to the page's <h1>. The answer below is not perfectly explained, but it should give you an idea of the recommended way to handle page loads in a SPA.

answer re: how to make screen reader read document loading and document loaded?

You will need to see how to do the following in React, but the principles for AJAX page loading carry across to all SPAs. The only difference between this and what you asked for is that you don't announce "document loaded"; instead you focus the <h1> on the page as that…
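In practice, the client-side half of that advice is only a few lines (a generic sketch, not tied to any particular router or framework):

```js
// After a SPA route renders, move focus to the new page's heading so screen
// readers announce it, approximating a native page load.
function focusPageHeading() {
  const heading = document.querySelector("h1");
  if (heading) {
    heading.setAttribute("tabindex", "-1"); // make it programmatically focusable
    heading.focus();
  }
}
// Call focusPageHeading() in your router's after-navigation hook.
```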
Now if we go back to MPAs, here is the recommended way to handle them: it is all done for you, basically! No history API, no focus management, no weird loading-state feedback for slow-loading pages; if the user's connection drops out, they know about it... all amazing stuff!
Yeah some elements of SPA (especially around navigation) require some emulation of browser behavior. Suspense Transitions in Concurrent Mode are basically emulating the way the browser holds on the current page while preparing to load the next in the background. Also considerations around focus and scroll position.
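Roughly, that transition emulation looks like this with React's transition API (a sketch; Router is a placeholder component, not a real import):

```jsx
// Keep the current screen interactive while the next route's code and data
// load in the background, like the browser holding the page during navigation.
import { Suspense, useState, useTransition } from "react";

export default function App() {
  const [page, setPage] = useState("/home");
  const [isPending, startTransition] = useTransition();

  // Wrapping the state update in a transition defers the swap until the
  // next screen is ready, instead of immediately showing a fallback.
  const navigate = (next) => startTransition(() => setPage(next));

  return (
    <Suspense fallback={<p>Loading…</p>}>
      {isPending && <p>Loading next page…</p>}
      <Router page={page} onNavigate={navigate} /> {/* Router: placeholder */}
    </Suspense>
  );
}
```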
I've definitely had some experience with ARIA Live regions, working on a media-heavy site. Slideshows and carousels had to do the same things. You will find people doing this though, since there are some things you can do in a SPA that you don't get in an MPA. But I do think people discount MPAs much too soon.
I am freaking dying here. As a Ruby on Rails fanboy and reluctant JavaScript developer, I am making popcorn and watching us come full circle.
Sort of. It's a bit different this time around. When Rails came about, we didn't fully understand the nuances of client-side state management or the full capabilities of JavaScript in the browser. Rails hasn't really adapted to that, whereas these solutions come from the perspective of having those learnings and are taking another crack at it. I think a lot has been learned in the last decade, which changes how we'd approach certain parts of the problem.
That being said, I am expecting people to borrow some of the ease-of-use tools from Rails. Once you've unified things again, that sort of scaffolding, including stuff like databases right in your solutions via easy CLI commands, becomes a thing again.
I think the difficulty has been that developing an API is like developing a UI. There are a lot of particulars around the design. It's more than just getting something functional up. There is an aesthetic to it. You are building a product.
When these solutions are auto-wiring things up, sort of rebuilding the monolith, we're back to a place where you can bootstrap everything. It's probably obvious to someone doing Rails, but the cost of having a separate API is that you've doubled your product surface. Perfectly fine if you are making an API product, but for many things it's unnecessary, and with that boundary to contend with, things get more complicated.
Nothing says these technologies need to be used monolithically; it's just a matter of understanding what the proposed surface area is.
As mostly a backend developer myself, IMO one of the things modern frontend really got right is the component model. Yes, you could use "includes" or "partials" in traditional server-side frameworks, but you would still need to wire up the JavaScript in separate files, which I have trouble getting my head around after getting used to the component structure.
It's really interesting to see, from one side, traditional backend technologies trying to close the gap to modern client applications with technologies like Hotwire Turbo, Phoenix LiveView, etc., while from the other side client frameworks are getting closer to the server with React Server Components, SvelteKit, etc.
The future looks bright IMO and hope we will reach a middle ground.
I started playing a bit with Svelte and I am really impressed with its bundle size. Svelte is ~2kb gzipped while, for example, Hotwire Turbo is ~13kb gzipped. If you add Stimulus, it's another ~8kb gzipped.
There goes one of their main selling points, and it makes it really hard to think of doing traditional full SSR with templates again and losing all the component-based structure ;)
Yeah, this is a good point. As much as the client side gets flak for size, the libraries and approaches innovated to combat it are actually pretty good at it. Sure, that doesn't help with all the 3rd-party libraries brought in, but compilers like Svelte or Marko or Solid can really aid in tree-shaking, where the minimal bundles can be even smaller than advertised, especially in small demos.
In reality, Svelte components scale differently, so final bundle sizes can actually be a bit larger, but the overall thing is still very small.
Most of the server-focused libraries like Hotwire, Stimulus, and Alpine are larger and in many cases have worse browser performance. But I see the tension there, since client-side libraries want to control the rendering on the server to deliver their experience while optimizing, rather than just being an island in many cases.
Things are definitely converging, but at the same time it's more like passing ships. This ultimately might become a language thing.
Very interesting post. I discovered something new today thanks to this post. While apps are still far from a no-JS policy, documents and blogs can actively implement the islands strategy. Thanks for sharing!
Definitely. There is a whole class of apps where MPA-architected approaches (Islands, Partial Hydration) would be a perfectly fine experience, and would actually improve the experience on lower-powered devices and slower networks.
Blogs, Content Sites, News Sites, eCommerce. Not everything but definitely more than one might expect.
Yes, it will provide a satisfactory user experience for all those people still using low-end devices. Again, thanks for sharing, and do have a good day!
0kb JS was the past, and it seems like it's coming back one way or another. Having a site with 0kb JS is a utopia and is not achievable, but minimizing the JS footprint is very much possible, and that's where we as web devs/frontend devs should focus IMO.
Better known as AHAH or Asynchronous HTML over HTTP with REST and Content Negotiation. "Islands" are called HTML fragments and Partial HTML. Super powerful stuff.
webdevelopment2.com/ahah-asynchron...
xfront.com/microformats/AHAH.html
griffith.wordpress.com/2008/11/12/...
randyfay.com/ahah
medium.com/@ijoeyguerra/content-ne...
Async HTML and HTTP (AHAH) is more like Turbolinks (aka. HOTwire Turbo). It intercepts clicks on links and injects HTML fragments into the current page.
Islands Architecture is more like JS on-demand: it does not front-load all JS, but downloads it when needed (when the user scrolls to a clickable part of the page, or clicks on an interactive part which needs to execute some JS).
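The "JS on-demand" part can be sketched with an IntersectionObserver (the data-island attribute and mount export are assumptions for illustration), which is the spirit behind visibility-triggered island directives:

```js
// Hydrate an island only when it scrolls into view: its code isn't even
// requested until the user can see (and might interact with) it.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    observer.unobserve(entry.target);
    import(`./islands/${entry.target.dataset.island}.js`)
      .then((mod) => mod.mount(entry.target)); // mount(): hypothetical export
  }
});

document.querySelectorAll("[data-island]").forEach((el) => observer.observe(el));
```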
Third-party scripts: how would they fit into the new paradigm? Enterprise apps generally need third-party scripts (monitoring, analytics, etc.); would those functionalities move to the server side as well?
If you haven't seen it I strongly recommend checking out Partytown: partytown.builder.io/
We need things in the browser for analytics but we don't need to compromise main thread performance.
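The core idea, stripped of Partytown's DOM-proxying machinery, is just moving third-party work off the main thread (a conceptual sketch; the worker file and message shape are hypothetical):

```js
// Offload analytics to a Web Worker so it can't block the main thread.
// Partytown automates this (including proxied DOM access) for real scripts.
const analytics = new Worker("/workers/analytics.js"); // hypothetical worker

analytics.postMessage({
  event: "pageview",
  url: location.href,
  ts: Date.now(),
});
```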