DEV Community

Rich Harris

In defense of the modern web

I expect I'll annoy everyone with this post: the anti-JavaScript crusaders, justly aghast at how much of the stuff we slather onto modern websites; the people arguing that the web is a broken platform for interactive applications anyway and that we should start over; React users; the old guard with their artisanal JS and hand-authored HTML; and Tom MacWright, someone I've admired from afar since I first became aware of his work on Mapbox many years ago. But I guess that's the price of having opinions.

Tom recently posted Second-guessing the modern web, and it took the front end world by storm. You should read it, or at the very least the CliffsNotes. There's a lot of stuff I agree with to varying degrees:

There is a sweet spot of React: in moderately interactive interfaces ... But there’s a lot on either side of that sweet spot.

It's absolutely the case that running React in the client for a largely static site is overkill. It's also true that you have to avoid React if your app is very heavily interactive — it's widely understood that if you want 60fps animation, you will likely have to bypass the React update cycle and do things in a more imperative fashion (indeed, this is what libraries like react-spring do). But while all this is true of React, it's much less true of component frameworks in general.
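To make that concrete, here's a rough sketch of what "bypassing the update cycle" means in practice: the animation writes to the DOM directly on every frame instead of routing each frame through state updates and re-renders. The function names (`lerp`, `animateX`) and the `_x` bookkeeping are illustrative, not react-spring's actual API.

```javascript
// Linear interpolation, clamped so t outside [0, 1] pins to the endpoints.
function lerp(from, to, t) {
  const clamped = Math.min(Math.max(t, 0), 1);
  return from + (to - from) * clamped;
}

// Imperative animation loop: mutate the element's style directly each
// frame via requestAnimationFrame, never triggering a framework re-render.
function animateX(el, to, duration = 300) {
  const from = el._x || 0;
  const start = performance.now();

  function frame(now) {
    const t = (now - start) / duration;
    el._x = lerp(from, to, t);
    el.style.transform = `translateX(${el._x}px)`; // direct DOM write, no diffing
    if (t < 1) requestAnimationFrame(frame);
  }

  requestAnimationFrame(frame);
}
```

Running sixty of these writes per second through a full virtual-DOM diff is exactly the overhead libraries like react-spring sidestep.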

User sessions are surprisingly long: someone might have your website open in a tab for weeks at a time. I’ve seen it happen. So if they open the ‘about page’, keep the tab open for a week, and then request the ‘home page’, then the home page that they request is dictated by the index bundle that they downloaded last week. This is a deeply weird and under-discussed situation.

It's an excellent point that isn't really being addressed, though (as Tom acknowledges) it just exacerbates a problem that was always there. I think there are solutions — we can iterate on the 'index bundle' approach, or we could include the site version in a cookie and use it to show actionable feedback when there's a mismatch — but we do need to spend time on it.
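As an illustration of the cookie idea, here's a hypothetical sketch: the server sets a version cookie on every response, the client compares it with the version baked into the bundle at build time, and a mismatch means the user is running last week's code. The cookie name `app_version` and the version format are assumptions, not an established convention.

```javascript
// Version string injected into the bundle at build time (illustrative).
const BUNDLE_VERSION = '1.4.2';

// Extract the server-reported version from a cookie string,
// e.g. "theme=dark; app_version=1.5.0" -> "1.5.0".
function readVersionCookie(cookieString) {
  const match = cookieString.match(/(?:^|;\s*)app_version=([^;]+)/);
  return match ? match[1] : null;
}

// True when the server has deployed a newer version than the one this
// bundle was built from, i.e. the tab is running stale code.
function isStale(cookieString, bundleVersion = BUNDLE_VERSION) {
  const serverVersion = readVersionCookie(cookieString);
  return serverVersion !== null && serverVersion !== bundleVersion;
}
```

When `isStale(document.cookie)` returns true, the app could show a "new version available, refresh to update" banner instead of silently serving week-old routes.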

It’s your startup’s homepage, and it has a “Sign up” button, but until the JavaScript loads, that button doesn’t do anything. So you need to compensate.

This is indeed very annoying, though it's easy enough to do this sort of thing — we just need to care enough:

<button class="sign-up" disabled={!is_browser}>
  {is_browser ? 'Sign up' : 'Loading'}
</button>

But I'm not sure what this has to do with React-style frameworks — this issue exists whatever form your front end takes, unless you make it work without JS (which you should!).

Your formerly-lightweight application server is now doing quite a bit of labor, running React & making API requests in order to do this pre-rendering.

Again, this is true, but it's more of a React problem than a general one. React's approach to server-side rendering — constructing a component tree, then serialising it — involves overhead that isn't shared by frameworks that instead compile your components (hi!) to functions that just concatenate strings for SSR, which is dramatically faster. And those API requests were going to have to be made anyway, so it makes sense to make them as early as possible, especially if your app server and API server are close to each other (or even the same thing).
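To illustrate the difference, here's a toy sketch of the two SSR strategies; the helper names are made up and real implementations are far more involved, but the shape of the work is the same.

```javascript
// Tree-building approach (React-style): allocate element objects,
// then walk the whole tree to serialise it.
function h(tag, children) {
  return { tag, children };
}

function renderTree(node) {
  if (typeof node === 'string') return node;
  return `<${node.tag}>${node.children.map(renderTree).join('')}</${node.tag}>`;
}

// Compiled approach (Svelte-style): the component IS a function that
// concatenates strings. No intermediate tree, no second walk.
function Greeting(props) {
  return `<h1>Hello ${props.name}!</h1>`;
}

// Both produce identical HTML; the second skips the allocation and traversal.
const viaTree = renderTree(h('h1', ['Hello ', 'world', '!']));
const viaCompile = Greeting({ name: 'world' });
```

The output is the same `<h1>Hello world!</h1>` either way; the cost difference is entirely in the intermediate work.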

The dream of APIs is that you have generic, flexible endpoints upon which you can build any web application. That idea breaks down pretty fast.

Amen. Just go and read the whole 'APIs' section several times.


Minor quibbles aside, Tom identifies some real problems with the state of the art in web development. But I think the article reaches a dangerous conclusion.

Let's start by dissecting this statement:

I can, for example, guarantee that this blog is faster than any Gatsby blog (and much love to the Gatsby team) because there is nothing that a React static site can do that will make it faster than a non-React static site.

With all due respect to those involved, I don't think Gatsby is a particularly relevant benchmark. The gatsby new my-site starter app executes 266kB of minified JavaScript for a completely static page in production mode; for gatsbyjs.org it's 808kB. Honestly, these are not impressive numbers.

[Image: the Lighthouse score for Gatsby's homepage, obtained via webpagetest.org/easy]

Leaving that aside, I disagree with the premise. When I tap on a link on Tom's JS-free website, the browser first waits to confirm that it was a tap and not a brush/swipe, then makes a request, and then we have to wait for the response. With a framework-authored site with client-side routing, we can start to do more interesting things. We can make informed guesses based on analytics about which things the user is likely to interact with and preload the logic and data for them. We can kick off requests as soon as the user first touches (or hovers) the link instead of waiting for confirmation of a tap — worst case scenario, we've loaded some stuff that will be useful later if they do tap on it. We can provide better visual feedback that loading is taking place and a transition is about to occur. And we don't need to load the entire contents of the page — often, we can make do with a small bit of JSON because we already have the JavaScript for the page. This stuff gets fiendishly difficult to do by hand.
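A rough sketch of the hover/touch preloading idea follows. The `prefetch`/`wirePrefetch` names and the `<href>/data.json` URL convention are illustrative assumptions, not any particular router's real API.

```javascript
// Cache of in-flight or completed prefetches, keyed by href, so
// hovering the same link twice doesn't fire a second request.
const prefetched = new Map();

function prefetch(href, fetchFn = fetch) {
  if (!prefetched.has(href)) {
    // Kick off the request immediately; a later click handler can
    // simply await the same promise instead of starting from scratch.
    prefetched.set(href, fetchFn(`${href}/data.json`));
  }
  return prefetched.get(href);
}

function wirePrefetch(anchor) {
  // Worst case: we fetched data for a page the user never visits.
  anchor.addEventListener('mouseover', () => prefetch(anchor.href));
  anchor.addEventListener('touchstart', () => prefetch(anchor.href));
}
```

By the time the click is confirmed (several hundred milliseconds after the initial hover or touch, typically), the response is often already on its way or in hand.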

Beyond that, vanilla static sites are not an ambitious enough goal. Take transitions, for example. Web developers are currently trapped in a mindset of discrete pages with jarring transitions — click a link, see the entire page get replaced whether through client-side routing or a page reload — while native app developers are thinking on another level:

[Embedded video: Ryan Florence demonstrating fluid native app transitions]

It will take more than technological advancement to get the web there; it will take a cultural shift as well. But we certainly can't get there if we abandon our current trajectory. Which is exactly what Tom seems to be suggesting.


I'm not aware of any other platform where you're expected to write the logic for your initial render using a different set of technologies than the logic for subsequent interactions. The very idea sounds daft. But on the web, with its unique history, that was the norm for many years — we'd generate some HTML with PHP or Rails or whatever, and then 'sprinkle some jQuery' on it.

With the advent of Node, that changed. The fact that we can do server-side rendering and communicate with databases and what-have-you using a language native to the web is a wonderful development.

There are problems with this model. Tom identifies some of them. Another major issue he doesn't discuss is that the server-rendered SPA model typically 'hydrates' the entire initial page in a way that requires you to duplicate a ton of data — once in the HTML, once in the JSON blob that's passed to the client version of the app to produce the exact same result — and can block the main thread just as the user is starting to interact with the app.
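A toy sketch of where that duplication comes from (the function and id names are illustrative): the server renders the markup and embeds the same data again as JSON so the client bundle can hydrate to an identical result.

```javascript
// Server-side render of a post: the same `post` object is shipped twice,
// once baked into the HTML, once serialised for the client to hydrate from.
function renderPost(post) {
  const html = `<article><h1>${post.title}</h1><p>${post.body}</p></article>`;
  const blob = `<script type="application/json" id="hydration-data">${JSON.stringify(post)}</script>`;
  return html + blob; // roughly twice the bytes for the same content
}
```

On the client, the app reads the blob back out, re-runs the components against it, and (if all went well) produces exactly the markup that was already sitting in the page.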

But we can fix those problems. Next is innovating around (for example) mixing static and dynamic pages within a single app, so you get the benefits of the purely static model without finding yourself constrained by it. Marko does intelligent component-level hydration, something I expect other frameworks to adopt. Sapper, the companion framework to Svelte, has a stated goal of eventually not sending any JS other than the (tiny) router itself for pages that don't require it.

The future I want — the future I see — is one with tooling that's accessible to the greatest number of people (including designers), that can intelligently move work between server and client as appropriate, that lets us build experiences that compete with native on UX (yes, even for blogs!), and where upgrading part of a site to 'interactive' or from 'static' to 'dynamic' doesn't involve communication across disparate teams using different technologies. We can only get there by committing to the paradigm Tom critiques — the JavaScript-ish component framework server-rendered SPA. (Better names welcomed.)

The modern web has flaws, and we should talk about them. But let's not give up on it.

Top comments (98)

Chad Smith

Honestly my biggest issue with the current state of the web is the current state of how complex the tools or build process is.

I miss the days of some HTML, add a bit of JavaScript to the page and you were done. You didn't worry about making sure you had a complete build or bundling process to get everything working. You spent your time worrying about how to develop your app. Which in my opinion was better for the users. There were fewer bugs. We actually spent more time making sure the application worked rock solid.

Nowadays to get anything cool to be ready for production you have to spend needless time on the configuration. Did you do this? Did you configure it to do that? Oh, you can't do that unless you eject that build tool and use this build tool. That's where the current state of the web is failing.

Sure we have taken steps forward in some areas. But we have to be honest with ourselves. We took major steps back in other areas of the web. Sometimes I wonder if we took too many steps back.

fritz

I don't get it. Nobody's stopping you from firing up an HTML doc in Notepad and FTPing the result to a web server, if that's all you need. But other people have more complex needs (in the "good ole days" there were no smart phones, to mention just one thing) and therefore we need more complex tools. Why do you want to force me to party like it's 1999?

Chad Smith

My issue isn't that applications are too complex, really. I develop complex applications at work too.

My issue is with all the moving parts of a modern web application. Fighting over whether I can use certain language features. Oh, I can't; now I have to bother with setting up and making sure I can use Babel or something to do it properly, or even loading in polyfills (cool, just added another dependency the browser has to load before the application can be used).

That still hasn't covered bundling everything. What's the correct way to bundle this? Well, crap, now I have to go and correctly configure webpack. Unless you're one of the VERY FEW experts on that, it will take time to figure out how to get it right. OK, so I think it bundles right. Well, now how should I lazy load this code for the user? What should be lazy loaded? What should I use to set it up?

That's just the beginning. I could go on and on about the other complex moving parts with a modern web application.

I am not asking anyone to make applications like it is 1999. All I'm asking for is a more modern and simpler process that is a STANDARD.

Maybe it is because I come from a compiled language background, where I have to worry about that one single binary. I only have to worry about whether the compiler supports the language version I'm using, and I can just pass a single flag to the compiler for the optimization level I want.

Yes I will be the first to say that in some areas of modern web application development we have taken steps forward. I just wonder if we have taken steps back in some areas to take those steps forward.

fritz • Edited

All I'm asking for is a more modern and simpler process that is a STANDARD ...
Maybe it is because I come from compiled language background ...

Well, I am sorry, but web development is a completely different kettle of fish. We download all the parts that make up an application asynchronously, on a wide variety of devices, with different specs and rendering capabilities — all things which the app, once downloaded, needs to adapt to. We have progressive rendering and respond to all sorts of sensors. All while ensuring boot-up time for the app is in the microseconds range and security for both you who download the code and the server.

Your expectations are simply unrealistic, sorry.

Also, we always had polyfills, even "back in the day". Except that back then everyone had to bake their own. In fact, we had browser wars and appalling browsers (IE5 for the Mac, anyone??!?) and it was a real pain in the neck.

Adrianus • Edited

Bingo, the complexity today is a must, at least because of growing smartphone usage. The simple reason why "Native Apps didn't kill the web even with all their superior capabilities" (Ryan) is simply UX. As internet usage often starts with searching for something you have in mind, that all-app thinking interrupts exactly this flow, leaving at least 30% of all traffic untouched. For what? For 5-minute-crafts skyrocketing their YouTube traffic? Although not everything needs server-side complexity, plus APIs and webhooks are easily integrated using browser capabilities that didn't exist in the good old days, the architecture underneath is not the real issue.

rachelle palmer

re: sessions --> I keep tabs open for months at a time, not sure about anyone else

Otto Rask

You fight complexity with ... more complexity?

fritz

You are getting it wrong. The aim is not "to fight complexity". The aim is to fulfil complex needs. I am saying for that you often need more complex tools.

Otto Rask

What percentage of the modern web is a "complex needs" target that requires a complex solution such as React or Vue, or perhaps something custom written in WASM? 50%? 25%? 5%? 1%?

fritz

All of it. Unless you have actual data that proves otherwise...

Jacob Ybarra

Uh, what? The web is about content and media, not a software environment.

Mark Saward

Some interesting thoughts, in the original article and your response. Certainly good for generating discussion :)

It will take more than technological advancement to get the web there; it will take a cultural shift as well. But we certainly can't get there if we abandon our current trajectory.

The problem as I see it is that the web -- in terms of client-side executed markup and code (HTML/CSS/JavaScript) -- just wasn't designed with these kinds of interactions in mind (the interactions that Ryan Florence highlighted). It's the wrong foundation for it.

I don't think that "getting there" should be through bending original web technologies, but rather with something else entirely -- perhaps WebAssembly, or a return to actual native applications much like we've come to depend on on our tablets and phones. We already can do nice animations like that iPad demo with native desktop applications. There's many advantages to websites over native desktop apps, with the standout one being that they don't require someone to install an application on their machine. Maybe WebAssembly can give us a compromise -- desktop apps built with technology designed for that job, but available in a way that doesn't require any local installation. Then we can use tools that were purpose built for those kinds of tasks, instead of trying to wrangle the foundations of the web to fit something they were never built for.

When I think back to how we used to build websites back in the day, and how we build sites with technologies in tools like React now, I'm honestly not sure that we've gained anything of great enough value relative to the simplicity we've lost. In short, I'd be happy to abandon our current trajectory and put our hope in alternative solutions for those situations where what we really need is a native application. Leave the web for what it's good at.

Ryan Carniato

That's interesting. I mean, I was there too, and the extent of what I can build with a modern library far exceeds what I could reasonably do before. And the reason for that is JavaScript. When I was building sites in the late 90s, so many things weren't possible and you had no choice but to pay these costs for interactivity. Sure, I view-sourced a few cool tricks (image maps etc), but with my table-based layouts and a little JavaScript I could do very little. Part of it was the available DOM APIs, part my lack of understanding of the language, but I think we look at the past with rose-colored glasses.

It's more interesting to me that native platforms have borrowed concepts from libraries like React in terms of declarative views etc. I'm not saying React invented this stuff but that the trend would suggest the contrary. These trends could be mistakes but popularity has as much stake in the course of events as innate value.

I think this comes down to really this hope of a Silver Bullet. It doesn't exist. Instead we have the momentum of a ubiquitous platform that is constantly aiming to improve. It's not only that the other approaches have already lost; we're getting to a point where any new approach will have to at least acknowledge the current web. At this point a competitor isn't going to change the tide, but something from within. It's more likely that the web addresses those fundamental shortcomings than that people move to a different platform. Native Apps didn't kill the web even with all their superior capabilities. And on a similar note, it is going to take monumental change for this to fundamentally change. Even with React's reach, it wasn't alone in a movement that started years before. This is bigger than any particular library or framework.

Mark Saward • Edited

There is absolutely no doubt that you can do incredible things with modern web, and the power unlocked by having a programming language in the browser has been amazing. However, when I think about a great deal of the websites I interact with, and ones I build, there would not be much lost if they significantly simplified the tech stack they work with to be closer to the core technologies of the web (including a splash of JavaScript).

What I'm trying to convey is that I think our move to make everything a web app has been a mistake, for multiple reasons. You are absolutely right to say that this tide is going to be very hard (perhaps impossible) to turn, and I absolutely agree there is no silver bullet (I've looked), but that doesn't mean we haven't taken a worse path and shouldn't lament for what could have been.

The advantages of web apps are compelling for developers -- they are easy to distribute, update, control, make cross platform, when compared to native apps. It is not hard to understand why and how we have ended up where we are. I just wish we were somewhere else -- but it would take a lot of work to create that 'silver bullet'.

Adrianus • Edited

That whole app environment was basically triggered by one need, and one need only: bandwidth. No idea if anyone remembers when they launched the WAP protocol, making the internet available on phones, which failed for two reasons. The providers made it easy to access their own portals (the AOL strategy from the beginning of the web), leaving the rest of the market untouched; secondly, the site performance was ridiculous. Apple stepped in, put a bunch of intelligence into the app, dramatically reducing load time, did not convert so much traffic to their own apps, and let developers do what they do best. By 2008 more than 50% of all mobile internet traffic was iPhone traffic.

Bandwidth is no longer the breaking point.

Google has for years campaigned about the untapped goldmine of local traffic: experience flows that often start with a search for getting something done in time, monetising micro-moments, as more than half of all mobile searches have a local intent. I posted an example of how getting something done cuts across applications from the perspective of a user. With an app-only scheme this is getting impossible. Sure, it is understandable that nobody pays so much attention. Most search engines embrace change and have heavily invested in AI to optimise traffic at the client's frontend. But even this is not enough when it comes to intents other than something very individualised. I wouldn't care so much about the user, but more about what the user wants to achieve.

Junk

React definitely didn't invent HTML.

Declarative views, indeed.

🦄N B🛡

The problem as I see it is that the web -- in terms of client-side executed markup and code (HTML/CSS/JavaScript) -- just wasn't designed with these kinds of interactions in mind (the interactions that Ryan Florence highlighted). It's the wrong foundation for it.

Yes. Please. Can we please just accept that it's all broken.

fritz • Edited

Not at all.

CSS has media queries and all sorts of crazy and wonderful stuff. HTML has tags for responsive design. JS has (in the browser) access to APIs like MutationObserver and orientation and speech recognition. It's TOTALLY built for that. Enough of the "get off my lawn" negativity.

🦄N B🛡 • Edited

shakes his cane at you

mutters incomprehensibly

holdit

Looking at the criticism made about some recent redesigns (reddit, facebook, etc) it seems that the main complaint (from a user point of view, at least) is how slow it feels compared to the previous version or to a "simple" site with HTML, CSS, and a little bit of javascript.

I haven't checked it recently, but when the new reddit UI was released, scrolling was terrible on my $2500 laptop. It felt slow. Everything was sluggish. It also downloaded a few MB of javascript on pages that essentially had a thumbnail, a title, and some comments (not everyone lives in a big city or is connected to a super fast internet connection!). When the page was loaded, I was seeing more scrolling lag because they lazyload some content. I mean, it's hard to like this "modern web".

As a user, I don't care about how things work behind the scenes. I don't mind some javascript on a page, after all we don't need a page reload just to preview a comment or "like" something. You can even use javascript for everything, I don't mind. But it needs to be fast and it needs to work well... and right now some "modern" sites provide a worse experience than before.

Daniel Schulz • Edited

Tom identified the main weakness of modern web development correctly: it's based on JavaScript. He points out the sluggishness and accessibility problems that stem from that attitude, and he's absolutely right to do so. Static pages (as a paradigm, not as Gatsby in particular) are maybe not the cure for that, but they are definitely a step in the right direction.
I just rewrote my website with Hugo and specifically without a major JS framework, settling instead for a vanilla approach rooted in a progressive enhancement mindset. I didn't even run into the problems that are inherent to React, Vue and Angular.
Sure, you can serve HTML with SSR, but you still have to wait for the bundle in order to have a functional site, plus a truckload of SSR-related complexity. And that is utter rubbish. JS should never be a requirement; it needs to be put in its place as the cherry on top of the cake.
Progressive enhancement needs not only to be a major approach to web development again, it needs to be our default mode of operation.
This is also the core of Tom's article. And I agree with him.

That said, nothing stops you from adding features like nice transitions. But please be sure not to break standard web features while doing so.
Transitions need to add to hyperlinks, not replace them.

Rich Harris

I feel like you didn't quite read the article closely enough ;) Of course progressive enhancement needs to be the default — that's why there's so much focus on SSR in all major meta-frameworks.

Sure, you can serve HTML with SSR, but you still have to wait for the bundle in order to have a functional site

This isn't true! Take sapper.svelte.dev, which is a site built with Sapper and baked out as static pages. It works just fine without JavaScript, you just don't get client-side routing. It's a small site, but the same thing applies to larger ones like Gatsby's homepage — no JS? No problem.

Daniel Schulz

It certainly is true for standard implementations of React-, Vue- and Angular-based sites. There may be a focus on SSR, but it still isn't close enough to deserve to be the default. It needs to come without extra complexity. Only then are we ready to take the step to fully embrace those solutions.

And this is my point. You need to take one step after the other. It was a mistake to build everything with javascript first and then try to look if it still works without afterwards. The damage to the web has been done.

I didn't take a look at Sapper/Svelte. Though, in comparison, those are still a niche product.

Martin Malinda

The damage to the web has been done.

JS certainly caused a lot of damage on many websites. But now — finally — we're getting to the point of having frameworks which can, by default, offer client-side routing with SSR and hydration without shipping large bundles of JS. Up to this point frontend devs should have been more conservative and should have thought twice about whether they should really go all-in on an SPA. But now is the time when it's really starting to be a viable option. Next, Nuxt, Marko and Sapper are finally technologies highly worth pursuing, even for content-based websites.

Ben Halpern

The future I want — the future I see — is one with tooling that's accessible to the greatest number of people (including designers), that can intelligently move work between server and client as appropriate, that lets us build experiences that compete with native on UX (yes, even for blogs!), and where upgrading part of a site to 'interactive' or from 'static' to 'dynamic' doesn't involve communication across disparate teams using different technologies. We can only get there by committing to the paradigm Tom critiques — the JavaScript-ish component framework server-rendered SPA. (Better names welcomed.)

I like this takeaway because it's heavy on acknowledging the process behind the technology, which is often disparate and confusing. Even the most well-intentioned and organized dev team is going to get out of whack if succeeding with the tooling is experts-only. We need to be able to achieve great performance, UX and accessibility even under conditions where designers do some work, devs pop in some hotfixes here and there, old devs leave with some of the knowledge, priorities change, etc.

🦄N B🛡 • Edited

I like this aspect of Mr. Harris' article, too. It's the first principle of the Tao of HashiCorp: Workflows, not Technologies.

The HashiCorp approach is to focus on the end goal and workflow, rather than the underlying technologies. Software and hardware will evolve and improve, and it is our goal to make adoption of new tooling simple, while still providing the most streamlined user experience possible. Product design starts with an envisioned workflow to achieve a set goal. We then identify existing tools that simplify the workflow. If a sufficient tool does not exist, we step in to build it. This leads to a fundamentally technology-agnostic view — we will use the best technology available to solve the problem. As technologies evolve and better tooling emerges, the ideal workflow is just updated to leverage those technologies. Technologies change, end goals stay the same.

And it's why Sacrificial Architecture appeases the dark ones.

Phil Ashby

Good response, opinions are always worth having, as long as you are prepared to discuss/defend/change them! I had a few thoughts on this topic last year in response to your good self and web components that I think is worth referring back to: dev.to/phlash909/comment/cghl

In essence, the DOM and page model of HTML is a constraint around the outside of applications that might be better inverted: I'd like to see browsers that support HTML/DOM content if required, but do not constrain devs to always go through it. Let's stop thinking of them as HTML rendering engines, and start thinking of pre-installed runtimes with excellent web-based application management!

We'll want to retain the benefits of dynamic software running on the client (native UX, easy architecture changes, no user-install step, etc.), while leveraging the effort that goes into making that a portable & safe thing to do (common APIs, sandboxes, browsers as the standard runtime for client-side applications). We are nearly there with PWAs perhaps? WASM then expands the available languages for the runtime, allowing common client/server languages and development processes to ease developer adoption. As/when a document needs rendering, then HTML/DOM/CSS is there to perform its proper function; however, many apps may be better off with a UX library (eg: SDL) or widget set (eg: wxWidgets) atop the runtime bindings.

🦄N B🛡

start thinking of pre-installed runtimes with excellent web-based application management!

If you were to remove the "web-based" part, it'd almost describe the beginning of OSes back in the day.

Which is perhaps what they are: The UI part of a vast "operating system" by which we access the "applications" of the internet.

panesofglass

I believe at the point you fundamentally push to render things other than markup, you start making a different application. And that is perfectly reasonable. I’m frankly surprised how well HTTP has held up to so many use cases. I long ago expected to see far more protocols for more kinds of apps, but HTTP seems to have solved so many problems well enough that it now hosts other protocols.

There are probably more protocols that could be written and hosted on other ports. Those would likely address many of these issues in a far better way. I just wonder when the industry will pivot back to building protocols and leave HTTP well enough alone.

rhymes

I think we're past that as most recent innovations in the web space are either formats or evolutions of HTTP. Why reinvent the wheel if the wheel works well? What do you think?

panesofglass

HTTP/3 with QUIC seems like a great improvement, but it is changing HTTP. I don’t see why HTTP needs changing for most things. Why not update SMTP or FTP, as well? QUIC solves specific problems, and it might be better for those who need QUIC for it to have features better suited to their needs.

In some ways, this also feels like the mash of several formats into HTML5 to avoid name spacing. While we focus on and champion components within our own apps, we continue to avoid and break components in the infrastructure.

Therefore, I think we may see new protocols emerge as the weight of these complexities begin to cause problems. That could still be some way off.

rhymes

I'm not sure it is changing HTTP, it's HTTP over UDP. The protocol is basically the same as HTTP/1 and HTTP/2.

QUIC solves specific problems, and it might be better for those who need QUIC for it to have features better suited to their needs.

QUIC solves a specific problem HTTP has because it sits on top of TCP though, basically pipelining

In some ways, this also feels like the mash of several formats into HTML5 to avoid name spacing. While we focus on and champion components within our own apps, we continue to avoid and break components in the infrastructure.

I'm not sure I follow the analogy with HTML5, sorry :( IIRC HTML5 was created to stop having to revise HTML as a single unit and to let things evolve at their own pace. Same with CSS 3, I guess. I don't think it's the same thing here: HTTP/3 is the result of the entire world using HTTP and needing to improve performance.

Could they have created a new protocol? Sure, but why break compatibility with millions of browsers, proxies, machines, software applications and so on that understand and function through HTTP? I think the decision to rewrite the transport layer was a smart one.

This doesn't mean that other protocols can't emerge, but it's okay not to throw away those that already work.

Greg Fletcher

Great article!

Doesn't a lot of this disagreement stem from --> "keep the web as it is, coz it's fine" vs "let's make it better by making mistakes along the way coz that's how software development works"?

🦄N B🛡

What about making it better by sacrificing it?

We can learn this lesson from biology: the best way to avoid excessive optimization is ultimate flexibility, keeping the option on the table to throw out the code, or well-abstracted parts of it, and rewrite that code or product from scratch.

It's a little known "dark" pattern of Software Design, from the shadows of Agile, called "Sacrificial Architecture."

Exponential growth isn't kind to architectural decisions.

Rich Harris

Is the Web for applications?

I feel like this question sort of answers itself! Have you heard of Amazon? Gmail? Facebook?

Ben Halpern

My perspective: the web is for applications. The web is also for rich documents, where reading and simple linking can take priority.

And there is an uncanny valley when the latter is the sensible choice but the former is the approach taken.

And for many, the web is whatever Google tells them it is. Is AMP the web?

 
fritz

We need to have a similar approach in client side frameworks.

No we don't... if the One True Omniscient God Framework approach that Rails developers are constantly pining for were really the best one, then Rails would still be ruling. It isn't; it peaked in 2012 or thereabouts, knocked off its perch by Node, among other technologies. It's a single point of failure, and it cannot adapt quickly enough to the bewilderingly fast-changing environment in which we operate.

Alternatively, you can pick one of the two frameworks that follow the model you advocate: Ember (actually inspired by Rails) or Angular (more of a C# flavour), both of which strive to be a nanny that removes, as much as possible, the need to (god forbid!) make your own decisions or (the horror!) learn new tools.

If I could have the framework of my dreams ... It would allow me to focus on HTML with a sprinkle of JavaScript.

You may want to consider stimulusjs.org/, which comes from the Rails universe (it was created by none other than David Heinemeier Hansson, the Rails Superstar) and does exactly what you want.
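To illustrate the pattern being recommended here, this is a hand-rolled sketch, not Stimulus's actual API: the HTML declares which behaviour it wants via data attributes, and a small amount of JavaScript wires it up. A plain object stands in for a DOM element so it runs anywhere.

```javascript
// Hypothetical stand-in for a DOM element carrying data attributes,
// e.g. <span data-controller="greeter" data-name="world"></span>
const el = {
  dataset: { controller: 'greeter', name: 'world' },
  textContent: '',
};

// Controllers are plain objects keyed by name; connect() runs when
// the element appears, mirroring the lifecycle-callback idea
const controllers = {
  greeter: {
    connect(element) {
      element.textContent = `Hello, ${element.dataset.name}!`;
    },
  },
};

// The "framework": look up the controller named in the markup and
// hand it the element. The HTML stays the source of truth.
function connect(element) {
  controllers[element.dataset.controller]?.connect(element);
}

connect(el);
console.log(el.textContent); // "Hello, world!"
```

The point of the pattern is that the markup, rendered by any server, drives which behaviour attaches; the JavaScript never owns the HTML.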

Florian Reuschel

Can confirm that the Stimulus framework works great for that exact use case. But if you're used to the more common frameworks, you might quickly miss the declarative "give me data, I give you DOM" approach they provide.

I haven't tried it myself yet, but Alpine.js gained some steam lately and might be a good middle ground between Stimulus and the "top dogs".

rhymes • Edited

It strikes me that the web never was and never will be anything like any other platform for building applications. Whether or not someone wants to turn it into that is their prerogative and any success they have probably will help others build better things. But it ultimately depends on what you want to build and why, and emulating a native application might not always be the best option.

this :-)

 
Matt Welke

writing an Express API requires "old javascript" unless you want to go through the hassle of setting Babel up

Not sure what you mean here. You can use very recent JS features in Node. We have async/await, async iteration, ES module imports, and more now. I've never felt the need to set up Babel in an Express project.
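For instance, this runs as-is on any recent Node with no transpilation (an async generator consumed with for-await-of; nothing here is Express-specific):

```javascript
// Modern JavaScript running natively in Node, no Babel involved

async function* pages() {
  // Simulate paginated responses arriving asynchronously
  yield ['a', 'b'];
  yield ['c'];
}

async function collect() {
  const items = [];
  // for-await-of drains the async iterator in order
  for await (const page of pages()) {
    items.push(...page);
  }
  return items;
}

collect().then((items) => console.log(items.join(',')));
// prints "a,b,c"
```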

Matt Perry

it's widely understood that if you want 60fps animation, you will likely have to bypass the React update cycle and do things in a more imperative fashion (indeed, this is what libraries like react-spring do)

Probably aside from the article, which I largely agree with, but I think this is an odd point to try and make. This is true of any component library that has overhead above imperative JS. But there's nothing stopping you from using CSS in React, which is all Svelte does anyway.

Rich Harris

Wrong, sorry :)

This isn't a CSS demo: twitter.com/Rich_Harris/status/120...

Mike Rispoli

So one thing I wondered after reading Tom's article, and now yours, is how you feel about modern UI being so tied to Node or JavaScript runtimes on the back end specifically. For instance, if you were running something in Go or Phoenix, you could dynamically render HTML on the back end and serve it up quite a bit faster than the current SSR environments based on Next or Nuxt or Sapper. So, essentially, somewhat the way Stimulus JS works: you can send over static HTML, rendered anywhere by any type of server, and the front end could just hydrate that and build components from it.

Rich Harris

This is the 'PHP with jQuery sprinkles' approach I mention in the article, except with shinier things than PHP and jQuery. I'm not a fan, personally.

Phoenix LiveView is an interesting take on the problem, but I suspect you hit a limit as to how ambitious your UI can be when deltas have to come across web sockets at a consistent 60fps.

Mike Rispoli

So do you think it's impossible for a framework to effectively hydrate static HTML not rendered by that framework's specified renderToString method?

I really don’t know what these functions do under the hood to allow rehydration so I’m just curious if it would be possible and achieve a good frame rate.
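For intuition, here's a toy model of what hydration does (hypothetical names throughout; this is not React's or Svelte's internals): the client renders the component to a description of the expected markup, walks the existing server-sent nodes, checks they match, and attaches listeners instead of recreating DOM. That matching step is why hydration usually assumes markup produced by the same framework's renderToString.

```javascript
// Hypothetical minimal "element" standing in for a real DOM node
function makeEl(tag, text) {
  return { tag, text, listeners: {} };
}

// Hypothetical vnode: what the component says the markup should be
function render(state) {
  return {
    tag: 'button',
    text: `count: ${state.count}`,
    onClick: () => state.count++,
  };
}

function hydrate(el, state) {
  const vnode = render(state);
  if (el.tag !== vnode.tag) {
    // Server markup doesn't match what the client would render
    throw new Error('hydration mismatch');
  }
  // Reuse the existing node: no re-creation, just wire up behaviour
  el.listeners.click = vnode.onClick;
  return el;
}

// "Server-rendered" markup arrives as an existing node
const state = { count: 0 };
const btn = hydrate(makeEl('button', 'count: 0'), state);
btn.listeners.click();
console.log(state.count); // 1
```

In this toy model, HTML from any server would hydrate fine as long as it matches what the component renders; in practice the matching rules are framework-specific, which is where the coupling comes from.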

Daniel Lo Nigro • Edited

We can provide better visual feedback that loading is taking place

I'm not so sure about that... IMO, so many single-page apps get this wrong. The native browser loading indicator is still a lot better than what many SPAs show, and unfortunately it can't be triggered from JS (apart from ugly hacks like infinitely loading iframes).

With the advent of Node, that changed. The fact that we can do server-side rendering and communicate with databases and what-have-you using a language native to the web is a wonderful development.

Server-side JS was a thing looooong before Node. I remember using Microsoft's version of JavaScript ("JScript") server-side via Classic ASP in the early 2000s. The issue back then was that JavaScript just wasn't quite that popular yet. ES5 wasn't around yet (it was all ES3), and there was a lack of good third-party libraries. But still, it was absolutely in use long before Node even existed.

What Node did do was bring cross-platform support, introduce a module system (CommonJS) as standard, and make it easy to obtain third-party packages via "npm", taking ideas from similar systems like Perl's CPAN.

Jaime González García

Beautifully written and very inspiring 😀👏👏👏