Entertaining read!
Perhaps time to banish them to a web worker - Partytown style - to preserve your TTI.
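For reference, the basic Partytown wiring is small; a minimal sketch, assuming a self-hosted copy of the library, with a placeholder third-party URL and a placeholder forwarded call:

```html
<head>
  <script>
    /* Calls the third-party script makes against main-thread globals get
       proxied from the worker; 'dataLayer.push' is a common example,
       not a requirement. */
    partytown = { forward: ['dataLayer.push'] };
  </script>
  <!-- Assumes the Partytown library is self-hosted at /~partytown/ -->
  <script src="/~partytown/partytown.js"></script>
  <!-- type="text/partytown" keeps this script off the main thread -->
  <script type="text/partytown" src="https://example.com/heavy-analytics.js"></script>
</head>
```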
Fictional conversation at some unnamed retail location:
Customer: "Excuse me. Do you carry the 'Hot Pepper’s Poblano VLE5' phone?"
Clerk: "I'm sorry. We had to discontinue that model. Customers kept returning it, convinced they had a defective unit after they tried to access our web storefront."
Indeed, later the third parties had grown enough that they exceeded my budget by themselves. I did “solve” it with some code in the same spirit as Partytown, but with a different approach. (I promise I’ll write about it later!)
Good post, I found it entertaining. I would've liked to see mention of other approaches to delivering content quickly, such as edge caching. Having data close to the user can be more impactful than shaving off KBs, but it's great to see such passion for performance, an area that's often sorely neglected.
I agree. I originally had a piece talking about edge caching and rendering, and maybe I should dust it off and post it.
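For the curious, the core of the edge-caching idea is response headers the CDN respects; a minimal sketch, assuming a hypothetical Express origin sitting behind a CDN (header values are illustrative, not recommendations):

```js
// Hypothetical Express origin behind a CDN edge cache.
import express from "express";

const app = express();

app.get("/products/:id", (req, res) => {
  // Let shared caches (the CDN edge) keep this response near users for
  // 5 minutes, and serve a stale copy for up to an hour while revalidating.
  res.set("Cache-Control", "public, s-maxage=300, stale-while-revalidate=3600");
  res.json({ id: req.params.id /* ...product data... */ });
});

app.listen(3000);
```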
I don't know why I laughed at this so much
Back in my day, we would just serve content based on the user's environment.
It seems like the proper solution isn't creating lowest-common-denominator designs, where the same site is served to weak and strong hardware alike, but being able to dynamically serve content based on the specs of the device.
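Something like that is possible today with client hints; a minimal sketch, assuming a hypothetical Express server and the standard Device-Memory and Save-Data hints (the thresholds and file names are made up):

```js
import express from "express";

const app = express();

app.use((req, res, next) => {
  // Ask supporting browsers to send these hints on subsequent requests.
  res.set("Accept-CH", "Device-Memory, Save-Data");
  next();
});

app.get("/", (req, res) => {
  const memory = Number(req.get("Device-Memory") ?? 8); // approximate GiB
  const saveData = req.get("Save-Data") === "on";
  // "lite.html" / "full.html" are hypothetical builds of the same page.
  const page = saveData || memory <= 1 ? "lite.html" : "full.html";
  res.sendFile(page, { root: "public" });
});

app.listen(3000);
```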
"JSON→JavaScript→HTML cannot possibly be faster than skipping straight to the HTML part."
I thought about this a lot, and I think there's a fairly good case for it being faster on most sites: if a page has a lot of repeated HTML elements, those repeated elements cost network bytes to transfer, while your JS can repeat that structure for free on the client. Imagine the simplest example: a 100,000-row HTML table. Transmitting that entire thing would be huge. If you render it via JS, the data can come from JSON, and you at least save on the shell of the table rows and the other extras.
The only downside is that all your JS needs to load first, so it should ideally be inlined and as small as possible... which it probably won't be, because the whole point of doing things this way is that you want application-level rendering control via JS.
It's kind of like a video game engine: you would never save the coordinates of every pixel of every texture of every character and think that this speeds up the initial render; you save just enough character data that everything can be moved into place for the first frame. Or a networking engine: you don't transmit the entire model, only the difference needed to render the correct state. Constantly transmitting the whole HTML structure should, and would, be considered wasteful.
That’s certainly plausible: truly scrutinized JSON can have fewer bytes wrapping the data than HTML with its element wrappers and styling attributes. I’d have to benchmark to find the inflection point; if the HTML cruft is repetitive, compression makes the question more complicated.
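A quick, unscientific way to probe that inflection point; a minimal sketch in Node, with made-up row data and gzip standing in for whatever compression the server actually uses:

```js
// Toy size comparison for the 100,000-row table example above:
// full HTML vs. the JSON payload a client-side renderer would need.
import { gzipSync } from "node:zlib";

const rows = Array.from({ length: 100_000 }, (_, i) => ["Item " + i, i * 2]);

const html =
  "<table>" +
  rows.map(([name, qty]) => `<tr><td>${name}</td><td>${qty}</td></tr>`).join("") +
  "</table>";

const json = JSON.stringify(rows);

for (const [label, s] of [["HTML", html], ["JSON", json]]) {
  console.log(label, "raw:", s.length, "gzipped:", gzipSync(s).length);
}
// On repetitive markup like this, gzip often closes most of the raw-size gap,
// which is exactly why compression muddies the comparison.
```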
I felt this pain
This was a great read and you listed many great resources. Well done 👌
Legendary, love it
"progressive upgrade" is the keyword here
Fascinating to hear your journey from an F20!
Following for more of this very insightful (and entertaining) writing style!
Hey, I'd like to experiment with Partytown and Dynatrace in my project, but unfortunately the official docs have no reference for a Dynatrace integration. Does anyone have an idea how to integrate Dynatrace with Partytown?
In a way, AMP was the solution...