I'm not sure if there's ever been a language more loathed, yet so widely used, than JavaScript.
I'm not in that camp. I quite like JavaScript. Its quirks, its flaws. How it somehow built upon Scheme yet was destined to become the most pervasive programming language.
JavaScript was designed to be a companion. A scripting language to handle menial tasks and add small pieces of interactivity to the page. The language of the web.
But the web has grown well beyond what it originally was. It encompasses all manner of experiences and devices. From computers, mobile phones, televisions, and watches to all manner of IoT devices. From the simple content site to the immersive virtual reality video game.
And JavaScript has come along with it.
Phenomenal Cosmic Powers, Itty Bitty Living Space
The one thing the web's foundations have repeatedly shown us is how critical the network is as a resource. Most programming is concerned with memory or disk speed, but the web is always concerned with the network. This, along with the web being a free-for-all of platforms and JavaScript being really the only available option, has led the language to develop most peculiarly.
While JavaScript by any measure is an interpreted, dynamically typed scripting language, it now is a transpiler, a melting pot of DSLs, and a whole toolchain. The machine of JavaScript has long since replaced the soul. It needs to be everything to everyone, yet be imperceptibly small and resource-light.
The starkest thing I see when looking at how we develop applications in JavaScript is that ultimately, no matter how great the potential, catering to the lowest common denominator of device capability and network speed still drives the conversation. It is an inescapable truth. The law of physics we must obey.
The Role of JavaScript Frameworks
It is not uncommon for a language or framework to aid developers in achieving the performance they desire. But what about deleting our own code? The most performance-oriented JavaScript frameworks are obsessed with allowing us to run less JavaScript.
No language is probably more obsessed with producing less of itself than JavaScript. You see this when frameworks like Svelte or Solid are considerably smaller than Stimulus or even Alpine. You see it in all the focus from Marko, Astro, and Qwik on partial hydration. Even things like React Server Components reflect this concern.
Is 0kb of JavaScript in your Future? (Ryan Carniato for This is Learning, May 3 '21)
We lean heavily into bundlers and compilers to strip out every bit of code we don't need. The goal is to optimize every last bit of execution in our templates, creating specific languages to better capture intent and make that all possible. We analyze our apps to break apart code that can only run on the server from code that runs in both places. And we use that information to reduce data serialization costs.
We even leverage server-side rendering to inform how to reduce the cost of booting up the application in the browser, through newer concepts like resumability. Running the application on the server fills in the gaps that compilation can't handle ahead of time.
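To make the partial hydration idea a bit more concrete, here's a rough hand-rolled sketch (the file names and selectors are hypothetical): the page ships as static server-rendered HTML, and the only JavaScript downloaded is the code for one interactive "island", and only once it's actually needed.

```js
// Hand-rolled islands sketch: the page is plain server-rendered HTML,
// and this tiny loader is the only JavaScript that runs up front.
const island = document.querySelector('#add-to-cart');

const observer = new IntersectionObserver(async (entries) => {
  if (entries.some((entry) => entry.isIntersecting)) {
    observer.disconnect();
    // Download and boot the widget only when it scrolls into view;
    // the rest of the page never pays for this code.
    const { hydrateAddToCart } = await import('./add-to-cart.js');
    hydrateAddToCart(island);
  }
});

observer.observe(island);
```

Frameworks like the ones above automate this kind of decision at build time rather than leaving it to hand-written loaders.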
A new JavaScript framework every week, as the saying goes. A constant struggle to innovate and push boundaries. A background hum of never being satisfied with the status quo haunts this space. There is even a term for it: JavaScript Fatigue. Buried in the complexity of learning and of choice. And yet frameworks continue to rise like an unending stream of the undead. Each building upon the remains of the past.
This isn't necessarily a bad thing. It is a sign that there is more work to do. If you shift your perspective and see the status quo as more of a 4 out of 10 than an 8 out of 10, none of this is surprising. JavaScript fatigue is caused by reality falling short of expectations. Let's talk more about those expectations.
The JavaScript Paradox
We created the problems we are solving. Our desire for more interactivity and better user experiences. Not relying as heavily on the network. The wish to use a single toolset to build all manner of site or application for the web.
But it is more than that. We could take a backend language and sprinkle JavaScript over it, and for a time that might be alright. Mechanically that is all we ever wanted. But it is almost impossible to turn back the clock on the developer experience we've seen over the last decade. The ability to author things as a single application, instead of weaving our JavaScript through as a steadily growing, but unwanted, orphan on top of our server application.
If anything, we get more and more benefits from reducing those boundaries between the front and the back. To the point that it isn't even that controversial to suggest that using JavaScript full-stack is the best way to ship less JavaScript.
Another language runtime might offer savings in the tens of milliseconds, but the impact we can make for the end user on their device by leveraging JavaScript on the server can be in the hundreds of milliseconds. It's an order of magnitude more impactful to the end user.
But admittedly it might affect your bottom line. JavaScript's sole purpose for existence was the browser and now we have brought it everywhere.
Are we Stuck?
Well, from where I'm sitting, at least for now, yes. This is a direct extension of JavaScript being the only language of the browser. WASM shows promise in some areas but isn't making a dent on the user interface side of things yet. There are inherent costs it needs to overcome.
If the end user's device and network are on the critical path, optimizing for them may be the most impactful thing we can do. And if the best way to combat JavaScript is using more JavaScript, that's where we are.
I'm sure someone will point out server-driven architectures like LiveView or HTMX, and those are great approaches to reducing costs. They abstract some of the JavaScript away from the developer to maintain a server-centric view. However, when you do want the interactivity in the client (for whatever reason: offline support, etc.), when JavaScript is the only choice, well, JavaScript is the only choice.
That being said, tooling for JavaScript has seen a move to Rust and Go (and Zig). There is a desire for more performance, and ever more creative ways to leverage these languages to allow for an authoring experience that is all JavaScript.
In Search of a Silver Bullet
Don't get me wrong. You can always just build an HTML site and put some JavaScript on it as needed. This whole motivation comes from a place of wanting to scale the development of a single app mentality. This isn't every project's concern.
But I did find it interesting that in my search there is more than one way this problem is being approached for low-end devices and networks. I think for those used to fast networks, with only the occasional interruption of something like the subway, it's easy to think about how to optimize for some base case without changing the equation.
Looking at how big international eCommerce players like Amazon or eBay operate, or how services like Google Search handle things, confirms that. Build small, build light, and smartly leverage the server to get the quickest initial loads and interactions. There are enough studies to show how that impacts revenue.
However, in China and some other regions where the internet isn't so consistent, they've adopted a completely different model: Mini-Programs, which are a bit like PWAs that load into existing mobile apps as pluggable sub-apps. A sort of localized app store.
Instead of optimizing for initial page loads, they optimize for background data loading, to ensure the app can run as well as possible regardless of the network or device resources. Bringing in more JavaScript up front to save future network requests is often seen as extremely beneficial. What we have is a whole ecosystem of web applications in constrained environments not at all interested in leveraging the server.
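As a rough illustration of that trade-off (the file names and endpoints here are hypothetical), an app built with this mindset might spend idle time eagerly pulling in code and data it doesn't need yet:

```js
// Sketch of the mini-program mindset: once the current screen is up,
// use idle time to warm up code and data for likely next screens,
// so later interactions work even on a bad (or absent) network.
const warmUp = async () => {
  // Pre-download the JavaScript for the next screens.
  import('./screens/orders.js');
  import('./screens/profile.js');

  // Cache the data those screens will need.
  const cache = await caches.open('app-data');
  await cache.add('/api/orders?page=1');
};

if ('requestIdleCallback' in window) {
  requestIdleCallback(warmUp);
} else {
  setTimeout(warmUp, 2000);
}
```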
If there is any takeaway, it's that this isn't always so cut and dried. And if there is a way to bridge the gap here, today it probably still means more use of JavaScript.
Conclusion
This topic asks a lot of questions of us. Should JavaScript continue to eat into the backend? Are there better ways we can leverage other languages and platforms with JavaScript? Should we even be chasing after that unified vision of the web?
Or maybe the question we should all be asking is how did we let such a monopoly happen?
While you have infinite choices in how you build your websites and applications, JavaScript has a substantial leg up. So much so, that it is probably the best way to actually ship less of it to your customers. And to me, that's kind of crazy.
Top comments (37)
Obviously, loading more JavaScript means worse startup times (download, bootstrap, etc.).
But rendering everything on the backend also has big implications. You need more infrastructure to handle the load, whereas in a SPA you use your users' compute power. Backend rendering has a cost that not everybody is willing to pay for, and that's why I think SPAs will not die any time soon: file hosting is way cheaper than compute power.
But power consumption will become a more prevalent issue in the future. And both SPAs and SSR consume a lot of energy.
So I hope we will see a trend toward more SSG + (SPA/SSR) hybrid solutions like Fresh's islands to optimise both startup times and power consumption.
Yeah, that's what I was referring to about the bottom line. But even with stuff like Fresh (and Marko, Qwik, and Astro, which did the same before it) we are still talking a JS-on-the-backend mentality. These are JavaScript-authored solutions to rendering a page.
This isn't a SPA vs non-SPA position, or even SSR vs SSG. I expect those lines to blur hard. This is a JS vs non-JS question, and right now that means JS on the server. Everyone from frameworks to infrastructure providers is looking into new caching mechanisms and Edge/CDN hybrids, but JS app authoring is the name of the game. It's how we get both benefits. And to the point that the SPA distinction might matter a lot less on the server side, especially when caching is married to frameworks that are optimized for server-side rendering. I've had conversations with both Guillermo from Vercel and Mathias from Netlify specifically on this topic, and edge caching is very much top of mind. Dynamic is the new static, caching Suspense boundaries or template partials, edge stitching.
I'm just sitting here as the author of SolidJS, an employee of Netlify, and someone who has been building on the web for over two decades, and I find myself in these conversations about how to deliver the best experiences. I started my professional career as a backend developer, afterwards I spent 8 years maintaining a SPA in production, and then I worked for a few years at eBay on their open source framework Marko, which pioneered islands in a JavaScript framework back in 2014. And no matter how we dance around it, no matter who is in the conversation, we get back here again. And all indications are JavaScript.
I'm always a bit wary when all indications point to a single solution. I want at least an opposing argument to push my thinking, and it doesn't exist today and really hasn't existed for years. There are solutions that best handle parts of the problem, but nothing like JavaScript handles the whole spectrum in the best way. Which makes me wonder if the whole endeavor is too much of a moonshot. I tend to benchmark things to challenge my assumptions, and this is a place where we need to build first and reconcile later, because of the size of the undertaking and because we won't really know until we see all the pieces together.
I totally understand, I also have 20 years of experience in this field, but I don't think we are in any technological danger here.
Take C/C++: they had almost full market share before Java was incubated. And today we have more languages than ever to build with for each ecosystem. Some might even say that this profusion is more alarming, because as engineers we strive for standards; we don't want to reinvent the wheel every time (but we still do :-).
From my point of view, I see JavaScript for the web (meaning frontend and backend rendering) as a consolidation. Those who build for the web are those who understand the technology. So it's perfectly normal that JavaScripters have taken the web world into their own hands and are building the best tools for it.
But that does not mean it will stay like that forever. I'm pretty confident that JavaScript only has a head start; other technologies will copy what we do and improve upon it.
Like you pointed out, there is WASM. And there are solutions used in production that rely on WASM (Blazor for C#, Yew or Sycamore for Rust, for the ones I know).
The issue right now is that nothing is faster than JavaScript today for doing the frontend side of things. But native Rust SSR is also way faster.
From where I stand, there will soon (my guess is 5 or 10 years from now) come a moment where the performance difference with WASM will not matter in the face of the advantages of other technologies.
Rust, for example, is a real breakthrough in stability and security. It's already the top loved language according to Stack Overflow, and I'm pretty sure it will soon start to be taught in CS schools. That will mean more manpower behind it.
Will it take some of the gigantic place JavaScript has occupied in the meantime? I don't know. But I hope so.
I think what often goes unnoticed is that one of the most important factors in delivering the best user experience is not the tools the developer has at hand, but whether the developer has attention and time away from fiddling with / upgrading / optimising those tools... to actually spend that precious time focusing on the user's use case.
Perhaps that is more a function of the separation and narrowing of the roles in the industry (front end developers who don't style or design (seriously); Interaction and UX designers who don't code).
With increased specialization there are only a few opportunities for the individual to effectively address the user's use case within their purview and, perhaps more importantly, to perceive the use case in its full end-to-end scope.
Yes, that's one root cause. The "unbundled" and (ideally) de-coupled approach to everything could be another (we're trying to be software engineers, not mere website builders... sometimes to our and our users' detriment). That leads to the point of coordination going from someone else's shoulders (framework devs) onto our (app devs') shoulders. (Making us feel like Atlas, forgetting that Atlas shrugged.)
But even as a solo full-stack developer, you have a choice in how to engage with it. It is a beast of our own making. Oftentimes less (of the desired things) is truly more (of the right things).
Brooks's Law is now almost 50 years old, but the core message still seems to fall on deaf ears with business management: communication overhead resulting from the "division of labour" will negatively impact efficiency and effectiveness every time, unless you are setting up an assembly line.
And while there is truth to the statement that (UX/web) design and development require a different mindset, it's often used as an excuse to leave the communication barrier between them intact.
Effective solutions require end-to-end understanding from those participating in the solution implementation.
But I agree, ultimately that involves focusing less on JavaScript and more on the other things that comprise web-based solutions, especially as solo full stack developers.
(Perhaps you need your own article 😎)
I wholeheartedly agree. I have some hope in full-stack efforts such as Redwood, create-t3-app, and maybe Blitz. But I've come to the point where I can't see a decent full-stack solution that isn't based on JS, since it's a necessity on clients, as @ryansolid mentions.
I would probably use SolidStart for all my projects, if it weren't for the desire to make a cross-platform app that also runs on native (without duplicating the code and effort). So I'm left with React, as it enables React Native Web.
But if SolidStart had a native story, with an integrated setup, something akin to Hotwire Turbo (but maybe more like React Native Turbolinks, so I never have to touch Swift or Java, even when customising the nav bar), that would by default be set up to enable native iOS Safari push notifications, plus maybe built-in shared element transitions, then I would love to choose SolidStart. Just for the fact that I would for the most part avoid the native ecosystem: separate build and release pipelines, App Stores, and whatnot. The only important thing would be that the React Native Turbolinks shell app had enough native features not to get thrown out of the App Store, even though it wrapped a SolidStart PWA.
The more obvious answer for SolidStart's native story would perhaps be CapacitorJS. But the caveat (and dirty little secret) there is that it actually bundles your PWA and ships it with the App Store bundle. Although I've read you could also use CapacitorJS to ship the App Store bundle with a WebView so that the PWA would always be served fresh, with the aforementioned risk. So only as long as the shell app had enough native features (something besides just notifications, I have learned; preferably at least some navigation bar, like React Native Turbolinks and Hotwire Turbo have) to get past the App Store review process.
The thing is: from the commercial perspective I can understand the appeal of the "cross-platform" approach, but from an engineering perspective "cross-platform" is an immediate compromise toward sub-mediocre outcomes, because it leaves you in a position where you cannot optimize for any platform without heavily penalizing the others.
And React/Native demonstrates this beautifully. Look at how much bloat Preact got rid of, compared to React, by not having to support native while preserving "declarative UI" and DX. And look at the rings that Inferno runs around React because it doesn't have to support native.
So my attitude is:
A compromise is just going to leave everybody unhappy. Commit to the approach that is going to generate the most value.
Obligatory Scott Jenson "The App is a Technology Tiller" references.
That's a very personal perspective. I remember tossing Dave Thomas's Programming Ruby (2005) across the room because it made no sense to my C/C++/C#/Java/SQL/PLSQL brain at the time. Similarly, JavaScript didn't jibe with me either, until Clojure/Elixir/Erlang put my brain through the wringer. Once I realized the Scheme-ish tactics that were available in JavaScript, I was off to the races.
For the longest time I resented that the rise of SSR in JavaScript CSR frameworks restricted server-side choice by forcing Node, and therefore JavaScript, onto the server. But once I realized that eBay with Marko deliberately (and successfully) abandoned Java-based web development, I had to reevaluate my position (and then the whole Gen 3 thing happened).
You could well use PWAs everywhere if it wasn't for Apple's greed, and users probably wouldn't notice the difference.
The capabilities were somewhat limited but are growing step by step, with some bonuses.
Thinking about this topic, why don't we just stop building "native" applications and instead add and maintain capabilities in the PWA platform to unify it?
I've tried PWAs on Windows, Android, and iPadOS and I have no complaints. Good experiences overall.
My question is: do we really need yet another platform-specific language and environment just to proceed to limit its capabilities for different concerns?
Starbucks, AliExpress, Trivago, the Android Go platform, The Washington Post, NASA... all did a case study and/or have their PWA online (some available depending on the country), but maybe Tinder and Telegram are the best examples of companies built on top of a PWA.
The second law of thermodynamics states that in an irreversible process, entropy always increases, but thermodynamics doesn't apply to software AFAIK and we still have a chance to spread a bit of love and KISS a bit here and there.
Putting aside the "popular-culture stack" of Next.js/Tailwind CSS, the popularity that tRPC is gaining is more than a little troubling and seems to once again exemplify Dave Thomas's observation:
"I think there is a horizon of maybe 7 years, 8 years that developers just don't look back beyond that, so every 7 or 8 years somebody re-invents something that already existed past that horizon"
Developers have been trying to make RPC work since 1974 and invariably run into limitations, because the goal "to make distributed computing more like local computing" defies the laws of physics.
I guess it's one thing to use it as a more "ergonomic fetch", but I think the more likely outcome is the development of applications that are in complete denial of the constraints that the server-client chasm will always impose on web applications.
Given the recent work on hydration and resumability, the server-client chasm is finally being appropriately addressed in terms of rendering, but it seems that when it comes to post-initial-load server-client communication, the realities of that chasm are still being willfully ignored (if not by the library itself then perhaps by the developers who use it).
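To illustrate the worry (a generic sketch, not tRPC's actual API): the call site reads like ordinary local function calls, and nothing in the syntax reminds you that each line is a network round trip with latency, serialization, and partial failure attached.

```js
// Hypothetical RPC-style client: looks like local code, behaves like the network.
const user   = await api.getUser({ id: 42 });        // round trip #1
const orders = await api.getOrders({ userId: 42 });  // round trip #2, sequential
const totals = await api.getTotals({ userId: 42 });  // round trip #3, sequential

// On a local call this waterfall is nearly free; over the server-client chasm
// it can easily cost hundreds of milliseconds, or fail halfway through.
```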
I wouldn't really say there's a "chasm" between servers and clients; we have an infrastructure, the Internet, which is "just" a network with its own characteristics. And we use that network to send and receive packets from one system to another, that's all.
I agree with the points in that post, but I've never heard an individual state a single one of the fallacies mentioned in it 😅 (probably because none of them are true).
On the other hand, those infrastructure limitations also affect desktop and mobile apps, as interacting with servers is a requirement for most of them nowadays.
I don't think it can beat GraphQL, but this tRPC thingy seems interesting (added to the "must try" list). Thanks for sharing!
It should be said that both Tinder and Telegram have native apps, but in addition they now also have PWAs, called Tinder Online and Telegram Web.
The main reason (I guess) is Apple not allowing full support for PWAs, so you need to pay the yearly store subscription plus a large % of your in-app payments...
@ecyrbe
I'm interested in knowing why you're saying SSR consumes a lot of energy.
IMO it doesn't, but I haven't really seen a lot of benchmarks. Still, I think the user's device would consume more energy than a server with Next.js rendering React Server Components. We would do this to respond with HTML that is actually useful for crawlers (SEO considerations 😄).
I disagree with your argument, but would love to get your thoughts on why you think it consumes more power 👍.
Hi, can I reach out to you? I'm currently going through a learning phase and I would appreciate it if you could help me clarify some concepts 😊
JavaScript was conceived as a "just good enough" light scripting language to be replaced by something more strongly typed w/ a JIT compiler like Java (hence the name & perpetual confusion). As more & more UX has moved to the browser, it's inevitable that we've practically gone full circle to that with TypeScript making the language more robust (and easier for compilers to optimize) and WASM being the new closer-to-the-metal run-time.
(Aside: I think history will show that while Flash/ActionScript helped lead the way toward richer client-side interaction it was a huge, decade-long distraction that delayed adoption of more mature and non-proprietary tool chains that we're only getting now.)
The only reason JavaScript has migrated to the server is to allow developer skills to translate more seamlessly between the two domains.
In any case, the trends are clear: You'll be able to pick your language of choice (TypeScript, Python, C#, etc.) and run that on client-side or server-side. Then, as always, your choice of framework/stack will make the most difference for long term productivity more than the language.
I think this is perhaps the prevailing, oversimplified view that fails to account for the significant difference in the original development environments these languages were designed for.
TypeScript has the advantage that it transforms to the language where the runtime and supporting APIs already exist on the target platform.
Languages like Python and C# do not have that luxury on the front end; they have to bring their runtime and standard library along with them over the constrained network; it's something these languages don't have to deal with on the "install it once" back end.
Even in the upcoming, predicted "WebAssembly Golden Age" the web's deployment model isn't going to change. Sure things can be cached but they can also be just as quickly ejected if the application tends toward the same kind of bloat that is typical of back end and native development.
The mandate to stay lean won't go away even with WebAssembly. Because of that mandate minimal runtime languages like C, C++, Rust, Zig (and perhaps AssemblyScript) will likely be favoured for WebAssembly but typically those languages also come with a slower development pace.
That was the reason usually cited when the hype around "universal" or "isomorphic" JavaScript initially grew. However, the driver behind Gen 3 is to get around the "1 app for the price of 2" problem.
It's for this very reason that the Clojure community tried to keep ClojureScript as close as possible to Clojure, so that they could maximize reuse between server and client (within limits).
In terms of the web the front end client is the most constrained party and JavaScript is the most universally supported runtime, so it makes sense for the server side to simply adopt it from the reuse perspective.
That said the server side has access to options the client side doesn't and those options may lead to "even faster server side JavaScript".
It probably should be like that. And it very well may be someday.
This is a bit more like that sci-fi plot where aliens come to Earth to find what they need to save their failing homeworld. JavaScript on the server might come in under the banner of business, but there is a very specific purpose for it being there. The move in frameworks and tooling is an attempt to save the declarative application model popularized in client-side web dev.
I think the frontend is probably more sensitive to this as we sort of caused this problem. But so much of the intent of going to the server is to solve the issues already happening in the browser. And WASM is still too far out to keep up with the steady rate of growth of JavaScript on websites.
There is always a trade-off between performance and ease of writing. More languages and libraries are created because both creators and developers want faster and easier development.
Just like you said,
And...
We've seen JavaScript and Python take off largely because they are easier to write than statically typed languages. The same goes for the rise of React and other libraries wrapping JavaScript with easier use cases. Then functional components and React Hooks...
Does using React have worse performance than plain JavaScript? Yes. Does WASM beat JavaScript in benchmarks? Yes. But better performance almost always comes at the cost of more complex code.
Python is built upon C. C is built upon assembly. We can write code in assembly language, but no one is doing that.
Beyond being easy to write, the monopoly of JavaScript and its libraries is largely due to its close fit with today's web development use cases.
Libraries became popular because they make handling user interaction, state, and backend calls much simpler. If we were still using Yahoo and other Web 1.0 applications with little user interaction, we probably wouldn't need these libraries.
Yes, JavaScript and Python were invented as scripting languages and weren't originally designed for large-scale applications. But we need to view them through the use cases we have now.
If performance and stability are the priority for a large-scale application, it's definitely helpful to switch to WASM and statically typed languages. The same reasoning applies to TypeScript.
But ease of writing is the priority for the other 90% of small to medium-scale applications. Combined with the MVP and lean product trends among millions of startups, they are definitely choosing JavaScript over the alternatives.
This is why I believe JavaScript and its libraries aren't going anywhere soon.
The thing you have to have (JS) will eventually eat up all the competing things you would like to have (Ruby etc.).
This idea is expressed elsewhere in society as: «the tyranny of the intolerant minority», or «the most intolerant wins».
FYI: web.dev has a whole Mini-Apps series of articles.
This article didn't provide any easy answers, but it made me think. Excellent article - well done!
WASM is still bloated! Wait for the major compilers to support the component model. Most of the compilers we have now only think about producing a single efficient executable, not about chunking, which is crucial in a web app/page. It will take a very~long~time™ for WASM to be usable as a JS replacement. If you need performance, it's better to bet on compute shader use cases in WebGPU rather than SIMD computation in WASM.
As for mini-programs, WASM might be a very viable choice as long as the company defines a standard interface for rendering their custom UI components.
To me: always remember "Everything impossible will be possible" 😎👍🏽
Just saying, JavaScript literally changed my life. Loathed is a strong word; maybe loafed, because it gives me 🍞 bread.
imho
:o)
What (for instance) React does (and that's why React is great):
--- allow us to do the job using a "minimal" amount/subset of JavaScript.
So, effectively, to a certain extent, in a certain sense, what (for instance) React is doing:
--- make JavaScript irrelevant (= it matters much less).
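For instance (a minimal, made-up component), the declarative version leaves almost no hand-written DOM code; you describe the UI for a given state and React does the imperative work:

```jsx
import { useState } from 'react';

// We declare what the UI should be for the current state;
// React handles the imperative DOM updates behind the scenes.
export function LikeButton() {
  const [likes, setLikes] = useState(0);
  return (
    <button onClick={() => setLikes(likes + 1)}>
      {likes} likes
    </button>
  );
}
```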