Return? It never went away. Or at least that is what some smug "told you so" is going to say. But for those who haven't been living under a rock fo...
Just wanted to say this is a nice write-up, pretty extensive, but I don't think it hits the nail on the head.
What everyone seems to forget is that frontend web dev was never meant to be written the way it is today.
With Angular, React, and Vue we allowed hardcore programmers/engineers from companies like Facebook and Google to dictate how frontend devs should build websites.
If you look at the core of each one, it's a beast: hugely overcomplicated thinking for something that's always been as simple as HTML, CSS, and a little JS.
Frontend was never supposed to be this complicated. The frameworks abstracted stuff and essentially wrote their own "way of writing JavaScript", to the point that you have to learn more about the framework than about the languages in order to get things to work.
Has anyone stopped to ask, "How do you build a SPA from the ground up?"
I did. I spent a few weeks on it and created a simple script that lets me build SPA sites: taino.netlify.app. It's 13kb (3kb compressed), one script, requires no NPM install, has no build time, and it just works.
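For anyone wondering what "a SPA from the ground up" looks like at its smallest, here's a minimal hash-routing sketch. This is my own illustration, not Taino's actual code; the route table and the `#app` container are assumptions:

```javascript
// A minimal hash-based SPA router sketch (illustrative only, not Taino's code).
// Each route maps a path to a function that returns the page's HTML.
const routes = {
  "/": () => "<h1>Home</h1>",
  "/about": () => "<h1>About</h1>",
};

// Resolve a hash fragment like "#/about" to a rendering function,
// falling back to a 404 page for unknown paths.
function resolve(hash) {
  const path = hash.replace(/^#/, "") || "/";
  return routes[path] || (() => "<h1>404</h1>");
}

// Browser wiring: re-render the app container on load and on every hash change.
if (typeof window !== "undefined") {
  const render = () => {
    document.getElementById("app").innerHTML = resolve(location.hash)();
  };
  window.addEventListener("hashchange", render);
  window.addEventListener("DOMContentLoaded", render);
}
```

That's the whole idea: the URL fragment picks a render function, and navigation never leaves the page.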
That site is small, you'll say; it's just 7 pages, of course it works. What about a real application? Well, I created indiegameshowcase.org using Taino JS, and it has user accounts, teams, games, and dynamically rendered content pulled from a Node/MySQL server, all of which shows up on Google [though the ranking sucks because of the competition and I'm not doing active SEO content on it].
Yes, it's not perfect. I'm one guy, not a 1000-dev-strong company like FB or Google. But I showed you can easily build 10-page sites that load blazingly fast, are SEO friendly, and fit on a floppy disk, assets included, if you've compressed them well enough.
As for server-side routing, the only reason stuff like that was re-invented for the JS frameworks is that they were all terrible for SEO. Google and other bots couldn't read them, 404s read like 200s, and here we are.
Now, yes, SSR is nice and all, but it's not something we absolutely NEED anymore. Google can read your JS sites just fine. The frameworks just suck at it because their bundles are large, their rendering logic means the bot has to parse a bunch to get to the content, and they shove way too much into the render, so Google just doesn't like it.
Again, what I wrote allows you to render HTML within JS, and it's clean enough for bots to read as long as they support JS rendering [which they're all coming around to anyway for the big frameworks], but I'm just one guy.
Just my two cents.
I'm never sure when to preface it, but I'm the author of SolidJS. I've been working on it for almost 6 years now and spent the first few creating it on my own. So I think I can relate to some of the sentiment here:
That's basically what I've been chronicling for the last few years across dozens of articles on Medium and now on dev.to. The thing is, when I built Solid it was for myself (and maybe the startup I worked at). I reveled in the small size (Solid has grown in features, but the tree-shakeable core for most apps is around 3.9kb compressed) and the performance, topping almost any benchmark I could find. I even claimed my client-side framework could outperform their server-side frameworks on the very metrics they claimed SSR benefitted.
I wasn't wrong.
I wasn't right either.
In fact this sentiment was all too popular about 5 years ago and I feel that was our real failing. We let things progress beyond their original design before they were ready for that use case. I'm not going to say there isn't a bit of a misalignment here. The more popular these solutions get the more likely people are trying to apply them to places they were not designed for. Often simpler solutions fit better.
During the process I was hired by eBay to work on their framework, Marko. And one thing you realize there is not everyone has the same device or network, and that not every organization can deliver products in a way that doesn't require technology infrastructure to manage people as the product.
The one common thing I found suddenly being thrust into that world, and interacting with the maintainers of these libraries is they all care. In some cases the needs of an eCommerce giant or a Meta or Google are going to be different and the work caters to that, but I've never seen any indicator this is a product of that environment in terms of a pattern for excess.
In fact most core teams of frameworks or popular libraries are much smaller than you'd think. The Marko team at eBay was 2 people for 4 years. The React core team, I believe, is around 8. These teams work like small cells in the midst of those 1000s of engineers. Honestly, coming from a startup it felt familiar. And they are the ones trying to encourage best practices, trying to design things in a way that causes the least harm.
And like it or not, people like using these solutions so much they continue to want to use them anywhere they can. It might not be correct but there is huge interest and the tools are shifting to address that reality. Take it or leave it as just one guy to another, but having played on both teams this is bigger than you or me.
I'm glad you gained the experience of creating that framework. I've felt more fulfillment doing that than I have experienced during work over the years. If it is what you believe in share what you've built, get people excited about it. Show people a better way to do things, that's why we are all here. Speaking from experience you never know where that can take you.
I fully agree with all the above. I've been in the industry for some 16 years, and I've seen large corporations fail to implement proper practices, and large dev teams that can't produce what a 2-man team could. I still deal with a lot of that day to day.
I understand the communities support the frameworks we have, and frankly, they're a better solution than the old stacks, then again the old stacks work too and like you said it depends on the size and needs of each company.
I've planned/designed various tools, applications, websites for different sized companies and know how that goes all too well, to the point where personally I feel everything should be custom and catered to what the business needs are, but that's a discussion for another time.
As for my Taino JS, it was a proof of concept. I don't expect people to pick it up and use it, but I hope those who check it out learn a bit from it, so they're not so reliant on the larger frameworks without understanding what's happening at the language level itself, or at the browser level. Frameworks are great until there's a bug and none of the devs can solve it because they don't really understand how the framework works. And this is more common than not; otherwise we wouldn't have thousands of issues on GitHub repos regarding these things.
I also don't understand where people find the time to write such beautiful documentation and community sites for their projects, haha.
I'll have to check out SolidJS and see what it's like. Kudos on that, it looks "solid" :)
Matthew effect... To be honest, SolidJS's hooks mechanism combined with TSX greatly eases the pain of React hooks' manual dependency management, but its ecosystem is too small. IDE support (WebStorm/VS Code), UI libraries, routing, state management, testing, GraphQL integration: there are too many things to deal with. The isolation between front-end framework ecosystems is already so high that it's almost impossible to convince others to use a little-known framework in a production environment.
Definitely. This field also benefitted from the fact that constant newcomers were entering it at an incredible rate, and the range of projects and platforms it can apply to was increasing. Maybe we are past that point now. But technology tends to go in phases. I've been in no particular rush. But yeah, I doubt there will ever be another React without something colossal changing.
I think a large part of the problem is the "works on my machine" mentality which is all too easy for any web dev to slip into.
We sometimes forget that our machines, which are capable of running powerful IDEs, editors, and servers, are a lot more powerful than the average person's own device.
Single page apps can reduce the number of calls being made to a backend and reduce the individual page load data, but sometimes we're focusing too much on the trees and not the forest. Those SPAs are huge in comparison, and with things like CSS-in-JS that further reduce the browser's own capability to cache things, we're introducing larger upfront load times. This all has a big impact on performance and accessibility.
"SSR is nice and all, but it's not something we absolutely NEED anymore."
Are you saying some webapps/apps, going forward, won't need it? Or that we shouldn't use SSR at all, or only use it for a very small number of use cases?
lol, some will ALWAYS use it :)
Some devs don't know any better and will continue using the old stacks that all do SSR but render slower
Some devs will use new frameworks and the SSR tools they provide because of SEO and other reasons
Some devs won't use SSR because you don't need it in order to build a functional website that ranks well on SEO. You just need to understand how Google's bots work.
Some devs who understand it will still use SSR because of specific business needs or because they want to appeal to bots beyond google and bing.
Some devs won't use SSR because they don't know what it means or does.
The real questions are always:
What are you building? Do you care about SEO? Do you care about performance?
Do you understand all three enough to do it the best way for that project?
Also, "best" is relative as you can build a website/app a dozen different ways and achieve the same result.
In the end though, how sure are you that your results are in the top 10% for performance and SEO?
That's what it comes down to.
Thank you for replying! Yeah, I just listened to a podcast about caching:
Easy to implement. Hard to implement efficiently and correctly. Or at least it takes time to understand it in depth enough to implement correctly.
Every app is different. Learning what tech/framework/etc. will work great for you requires dedicated upfront time to research.
An upfront cost that is always worth it
So caching is a funny beast. I agree it's worth implementing for many systems, but you gotta remember: with JS frameworks, much of what was once cached can now simply be served as static sites.
So really you do very little caching, though one might say "static code" == "cached code".
Static sites would still make API calls, meaning you either cache the API results or set up scalable environments for the dynamic data.
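To make the API-result side of that concrete, here's a tiny TTL-cache sketch. This is an assumed illustration, not code from any framework discussed here; `createCache`, `cachedJson`, and the 60-second TTL are all made up for the example:

```javascript
// A tiny TTL cache sketch for API results on a static site (illustrative only).
// The injectable clock (`now`) makes expiry behavior easy to test.
function createCache(ttlMs, now = Date.now) {
  const store = new Map();
  return {
    get(key) {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (now() - entry.at > ttlMs) { // entry has expired; evict it
        store.delete(key);
        return undefined;
      }
      return entry.value;
    },
    set(key, value) {
      store.set(key, { value, at: now() });
    },
  };
}

// Usage sketch: wrap fetch so repeat calls within the TTL skip the network.
const cache = createCache(60_000);
async function cachedJson(url) {
  const hit = cache.get(url);
  if (hit !== undefined) return hit;
  const data = await (await fetch(url)).json();
  cache.set(url, data);
  return data;
}
```

The same idea scales up to HTTP-layer caching (Cache-Control headers, a CDN in front of the API) when an in-memory map isn't enough.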
so damn right
Always enjoy your articles, thanks for sharing. I find the mental model of tools like Astro or Slinkity or even Eleventy + Alpine + vanilla easier to grasp and generally more flexible and more transparent. It's probably because of the clear separation between the client side and the server side code.
Those are all MPAs like the first category. For those used to the power of SPAs it is a bit harder to just give up those benefits. However, the way things are looking we might be able to get the benefit of both soon enough.
I'm not a front-end dev, so some questions might be a bit naive.
1. So, for example, isn't SSR keeping the state of the page on the server, and isn't that a disadvantage for scaling?
2. Isn't SSR a way to avoid JavaScript on the front end and write the client code in your backend language of choice? See Vaadin, where you write the client components in Java.
"Vaadin in its beginnings used GWT under the hood to compile the Java code to JavaScript. Not anymore. Since version 10 it compiles to Web Components and through its client-side JavaScript engine communicates the events taking place browser-side to the server side, which decides how to proceed, in for example modifying the DOM; that is, Vaadin renders the application server side, and this server-side endeavor goes by the name of Vaadin Flow"
Not necessarily. Most SSR in modern frameworks only provides an initial state, and then the browser maintains state after. This article is talking about moving to a more server-driven model, but it doesn't necessarily mean the backend needs to be stateful. I mean, obviously the state needs to go somewhere; the URL can only hold so much. But what is being looked at now in JS land is the ability to do classic server-style navigation/rendering but preserve the state in the browser.
I mean, that is one take on SSR. JS frameworks have been doing SSR for the past several years using Node backends to run the same code on client and server. The benefit is similar to this promise from Vaadin, where you write a single app and it works in both places. The one thing I've observed, though, is that many times using server technology to generate JavaScript doesn't end up with optimal JS. I've used those solutions in the past. I'm sure a backend dev can point out where Node is inefficient, but the browser is a much more resource-constrained environment.
Hmm, not sure what the Vaadin pitch is; Web Components are JavaScript. But following along, it sounds like it's doing HTML partials, which isn't unlike the HTML frames I'm talking about above. It's an interesting approach. The challenge with these approaches is that while they do allow for less JS and a single-app mentality, they rely a lot on the network for things that may not need to go to the server. Generally client-side updates are faster once you are there. It's a cool model, but ideally I think the direction is more hybrid: something to get the advantages of both sides. Some work is better done in the browser without a server round trip, but if you're going to the server anyway, better to offload that work and reduce JS size.
Thanks for the brilliant post!
Multiple page applications don't need React for the single page feeling either. If you just want to prevent reloading the same header, navigation etc. for each page change, that's what AJAX did in Web 2.0 and it still does. Take a small utility like pjax and you get the same page speed improvement while still providing accessible and SEO-friendly pages each of which remains a complete landing page out of the box.
Thank you Ingo, this answered my question: can MPAs provide that SPA native-app feeling w/ navigation? Would you say that utilities like pjax, or what Ryan mentioned (HTML frames), do exactly that? Or is it more complicated, with pros and cons? Personally I think it would be cool if things migrated back toward SSR versus CSR due to performance issues, but IMO the 'native feeling' is quite important...
I find it hard to see the pros of (over)using React (etc.) for everything that just used to work even 20 years ago. Make pages load quickly by optimizing both frontend and backend for page speed (use caching, a reverse proxy, etc., optimize database queries, avoid unnecessary requests). React's new server-side rendering features add a lot of complexity for the sake of what? The pre-rendered pages have to be "hydrated" with the client-side JavaScript application anyway, so we get a good time to first byte but a very bad time-to-interactive score.
But to answer your actual question: there are (and have been) ways to achieve the native feeling with very little JavaScript. Basically you intercept full page/document requests, load partial content, and update the DOM using JavaScript, which makes it easy to use CSS transitions, loading animations and the like.
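That interception-and-swap idea can be sketched in a few lines. This is my own illustrative sketch, not pjax's actual code; it assumes every page is a full document containing a `<main id="content">` element:

```javascript
// Sketch of pjax-style navigation (illustrative, not the pjax library itself):
// intercept same-origin link clicks, fetch the target page, and swap only
// the <main id="content"> region while keeping the URL in sync.

// Pure helper: pull the content fragment out of a full HTML document string.
function extractContent(html) {
  const match = html.match(/<main id="content">([\s\S]*?)<\/main>/);
  return match ? match[1] : html; // fall back to the whole document
}

if (typeof document !== "undefined") {
  document.addEventListener("click", async (event) => {
    const link = event.target.closest("a");
    if (!link || link.origin !== location.origin) return; // let external links through
    event.preventDefault();
    const html = await (await fetch(link.href)).text();
    document.getElementById("content").innerHTML = extractContent(html);
    history.pushState({}, "", link.href); // update the address bar without reloading
  });

  // Handle back/forward by re-fetching the restored URL.
  window.addEventListener("popstate", async () => {
    const html = await (await fetch(location.href)).text();
    document.getElementById("content").innerHTML = extractContent(html);
  });
}
```

Because each URL still serves a complete HTML page, crawlers and no-JS visitors get a full landing page, while JS-enabled visitors get SPA-feeling navigation.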
Yes, pjax seems to be an example of what I was calling HTML frames above. Thanks for sharing.
Great article full of knowledge.
Ruby on Rails had something like this...back in 2006. Behold the marvels of the hotness that was RJS, allowing you to dynamically update the browser with your favorite language, Ruby, without having to deal with that messy JavaScript rubyonrails.org/2006/3/28/rails-1-...
jQuery was not even 1 yet. Blackberry & Nokia were battling to be the king of mobile. And here we are, the seasons of tech progress have come circling back again. This time the code is isomorphic, JavaScript is not something to be avoided, & sites are lighter to work on browsers on a mobile phone (no iPhone back then). Amazing!
I remember pitching to Yehuda Katz & Carl Lerche @ Engine Yard HQ a RPC library based on RJS & jQuery early 2010 to integrate server & client side code. I was quite proud of it. Yehuda was not interested as they were instead working on this client side rendering library...which I believe became EmberJS & were more interested in client/server at the time.
I'm certain there are many stories of similar patterns of evolution playing out in the history of tech. History rhymes...yada yada
Cool article!
I disagree on some points though:
While a simple page CAN be as simple as "HTML, CSS and a little JS" it's usually not that simple, unless you want to write JS like it's 1998 or ignore a big chunk of users (outdated browsers).
If you want to write modern JS with good browser support you will at least need Babel or other ways to use polyfills.
Want to use Sass or Less? You probably need some sort of build pipeline.
Also, SPAs handle bundling very efficiently. A request hits your index.html every time, of course, but on returning visits usually 80-100% of the bundles will already be cached, making for a blazingly fast experience that is unachievable with pure server-side rendering.
There are more cases and it just adds up. You can implement all that yourself, of course, but in the end, when I think about what I'd have to do all by myself to come to a satisfying solution, I end up creating yet another framework.
SPAs are overkill for static content pages for sure, but they are such a time saver for real applications.
Thank you, I appreciate your perspective. Like Ryan said, maybe what Rich Harris termed "transitional apps" will hit a sweet spot that encompasses a big portion of websites.
I really enjoyed reading your post. It's interesting that you mention "Server-Side routing" as a trend and then bring up GraphQL as one of the issues for a large bundle size.
Coincidentally, we've developed a solution (open source soon) to put GraphQL on the server and expose GraphQL Operations dynamically as a REST API. This way, the client is only about 2kb in size and can leverage http layer caching. I'm curious what you think about the concept. Here's a link on the topic, it's a bit over the top but maybe. ;-) wundergraph.com/blog/the_most_powe...