Matt Biilmann

Originally published at netlify.com

Distributed Persistent Rendering: A new Jamstack approach for faster builds

I’d like to share a new concept we’ve been exploring for the Jamstack at Netlify, born out of our work with large enterprise dev teams deploying hundreds of thousands of pages at a time.

We wanted a way to make those deployments faster—without introducing unwanted complexity for Jamstack developers. Solutions have come up before now, but they tend to lock you into a specific framework or break some of the most compelling reasons to use the Jamstack in the first place. What was needed was a Jamstack-focused approach that would work across the ecosystem. After discussions within Netlify and with members of the Jamstack community, we feel we’ve landed on the right concept—something we’ve dubbed “Distributed Persistent Rendering”, or “DPR” for short.

This post is designed as an early introduction to the approach we’re proposing as well as an invitation for the entire community to offer feedback and collaboration. Our hope is to create a new standard that maintains the core principles and benefits of Jamstack and works across a wide variety of site generators and frameworks, bringing an even wider range of websites and use cases to the Jamstack.

The power of atomic deploys

We think the very best tools are built around a simple mental model that helps you easily reason about the state of your application as you build it. Modern frontend libraries (like React and Vue) make a simple, powerful contract with the developer: UI becomes a function of state. Change the state, and the UI will rerender in response.

The Jamstack has thrived because it too centers around an intuitive mental model: every git push runs a build process to create its own atomic deployment. This approach keeps it incredibly easy to reason about the current state of your site or application, even as many changes are committed daily from around your team. It makes both new deploys and rollbacks painless. It keeps you always confident in what any visitor will see at any given URL. And, most importantly, it avoids all the deep layers of caching, complexity, and infrastructure that came out of scaling the legacy web.

The fight against complexity

For any technology, the hardest part is not establishing simplicity, but protecting it over time.

As the Jamstack evolves, new features like dynamic server-side rendering, rehydration, tiered CDN caching, and stale-while-revalidate seem to be pulling us back towards all the complexity we fought to escape. Can you still be confident in a rollback? Do you really know how your site will behave if you push deploy preview #110?

Of course, the motivation to add features to a platform is the desire to solve real challenges. And one of the challenges of the atomic deployment model is the time it takes to rebuild an entire site for each deploy. Larger teams and projects are feeling this impact, especially now that Jamstack sites are scaling to 100,000 pages and beyond.

Speeding up deployments... without weighing down the Jamstack

Netlify would like to work with the entire Jamstack community to help bring a solution for faster large site deployments—but importantly an approach that stays true to the atomic deployment model of the Jamstack. We’re calling our proposal Distributed Persistent Rendering, and today we’ve posted details about the approach in a Request For Comments (RFC). Our hope is that community members and site generator authors will weigh in and help us build something that is easily adoptable by multiple frameworks. Already, we’ve seen early engagement from Nuxt and 11ty and we hope many more will join the collaboration.

Distributed Persistent Rendering

Distributed Persistent Rendering (or DPR) allows developers to defer rendering any given URL or asset until it’s first requested. With DPR, you can (and should) still prerender critical pages at build time—perhaps your homepage or recent blog posts. But that huge archive containing years of older posts? Using DPR, those pages can be rendered only when and if they are requested. And once rendered, they remain as a persistent asset until the next deploy—just as if they had been part of the initial build. It brings the benefit of faster builds without introducing the complexity of scaling and caching server-side rendering. And unlike a caching-based strategy, there’s no risk of confusion from stale assets or fallback pages.
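To make the idea concrete, here is a minimal sketch of what a render-on-first-request page could look like when expressed as a function. It assumes a builder-style wrapper along the lines of Netlify’s On-Demand Builders (`builder` from `@netlify/functions`); the `renderArchivePost` helper is a hypothetical stand-in for whatever your site generator would use to render a single page, and none of this is the definitive API from the RFC.

```typescript
import { builder, Handler } from "@netlify/functions";

// Hypothetical stand-in for your site generator's per-page renderer;
// not part of the RFC.
async function renderArchivePost(path: string): Promise<string> {
  return `<html><body><h1>Archive post for ${path}</h1></body></html>`;
}

const pageHandler: Handler = async (event) => {
  // Render the requested archive page on its first request.
  const html = await renderArchivePost(event.path);

  return {
    statusCode: 200,
    headers: { "content-type": "text/html; charset=utf-8" },
    // Under a DPR-style model, the platform persists this response alongside
    // the deploy's prerendered assets, so later requests are served without
    // re-rendering—until the next deploy replaces it.
    body: html,
  };
};

// Wrapping the handler marks this route as "render on first request,
// then persist until the next deploy".
export const handler = builder(pageHandler);
```

The point of the sketch is the mental model: the critical pages still come out of the build, while the long tail is produced on demand and then treated exactly like any other deploy asset.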

We think DPR strikes the right balance—new capabilities without new complexities. We’re so excited that we’ve built a new service offering to help accelerate us towards DPR, and we’re announcing today that any Netlify customer can gain early access to explore the approach and the simple API behind it.

Please let us know what you think in the RFC and show us what you build!

Top comments (4)

GrahamTheDev • Edited

It is interesting as I have been working on a similar approach but from a different angle.

I am working on “self-optimising websites”, where you deploy a page and a small JS script works out what is “above the fold” at a given screen size and communicates that back to the server, so we can build variations of the page CSS and precache images that are the right size for a given screen (an over-simplification, but you hopefully get the idea).
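Roughly, the client-side half of that idea might look something like the sketch below—an in-viewport check reported back to the server. The `/fold-report` endpoint is hypothetical, just to show the shape of it.

```typescript
// Rough sketch: record which images sit in the initial viewport
// ("above the fold") and report them, plus the viewport size, to the server.
function reportAboveTheFold(): void {
  const aboveTheFold: string[] = [];

  document.querySelectorAll<HTMLImageElement>("img[src]").forEach((img) => {
    const rect = img.getBoundingClientRect();
    const inViewport = rect.top < window.innerHeight && rect.bottom > 0;
    if (inViewport) {
      aboveTheFold.push(img.src);
    }
  });

  const payload = JSON.stringify({
    viewport: { width: window.innerWidth, height: window.innerHeight },
    images: aboveTheFold,
  });

  // sendBeacon keeps the report from blocking navigation.
  navigator.sendBeacon("/fold-report", payload);
}

// Run once layout has settled.
window.addEventListener("load", reportAboveTheFold);
```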

The only difference in principle is I always have a “baseline” experience pre-cached. Then the first visitor at a screen size gets served something quickly and I can process the improved page in the background for the next visitor.

I am not sure if you can take that principle and apply it to this concept, but it’s just an idea. Whether it matters probably depends on how long that rendering process takes.

Also, thinking of Core Web Vitals, could this potentially damage your score? I need to give that more thought though; it’s just an initial half-baked idea that popped into my head.

P.S. - maybe pick a different acronym; “DPR” means “Device Pixel Ratio” to me and is widely written about, so your idea could get “lost in the noise” because of that (especially as DPR comes up a lot in performance discussions about image resizing).

Anyway, it sounds like I am being negative about what seems like a great concept for larger sites. I look forward to seeing where you take this idea! ❤️🦄

swyx

yeah i hope they find a less wordy name as well

Andri

Interesting approach. But if the goal is less complexity, then I think server-side rendering with a caching layer (CDN, Varnish, nginx) is a simpler option.

Phil Hawksworth

Although that seems simpler to describe (server-side rendering with a caching layer), it introduces large amounts of complexity and unknowns, such as:

  • When do you render what, and what is the state of each view at any given time?
  • What do you cache where and when? In Varnish? In the CDN? In the browser?
  • How do we invalidate the cache in all or any of those places?
  • How do you deploy an update so every asset is updated in every location that it might be cached? And can that update happen atomically, so that we don't get mixed-resource deploys being served?
  • What infrastructure do we need to scale in order to handle load? Or is everything served from the CDN? If so, what logic gets which assets there and when?
  • How do we roll back to a previous deploy?

I speak from some bitter experience when I say that this is the tip of the iceberg, particularly on projects which need high availability and high performance.

One of the original intentions of Jamstack as an approach, and of the DPR proposal, is to avoid these sorts of unknowable conditions and make the state of the site far easier to reckon with, and the architecture easier to reason about. Persisting everything to the CDN for serving (whether entirely during a deploy, or additionally after a first request) in the way described in the DPR RFC should allow developers to keep avoiding these issues.

At least, that's the goal. :)