This article was originally published on Dareboost's blog.
Nothing really new has happened around Content Performance Policy since August 2016. Still, given the discussions and statements about AMP over the last few months, e.g. Kill Google AMP before it KILLS the web, I think Content Performance Policy deserves some attention!
You may already know about Content Security Policy. It’s a great feature to add more security to your website, particularly to protect your visitors from the effects of an XSS attack.
The idea behind CSP is to let website owners declare a security policy that the web browser will then apply. For instance, it allows you to explicitly whitelist some JavaScript files, or to ensure that HTTPS is used to request every resource within the page.
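To make this concrete, here is a minimal sketch (mine, not from the original article) of a Node server sending such a header; the whitelisted host is a placeholder.

```typescript
// Minimal sketch: a Node server sending a CSP header that whitelists script
// origins and forces HTTPS for resources. cdn.example.com is a placeholder.
import * as http from "http";

const csp = [
  "default-src https:",                        // every resource must be requested over HTTPS
  "script-src 'self' https://cdn.example.com", // only scripts from these origins may run
].join("; ");

http
  .createServer((req, res) => {
    res.setHeader("Content-Security-Policy", csp);
    res.setHeader("Content-Type", "text/html");
    res.end("<!doctype html><title>CSP demo</title><p>Hello</p>");
  })
  .listen(8080);
```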
Tim Kadlec and Yoav Weiss borrowed the general concept of CSP and applied it to web performance, proposing a new HTTP header (Content Performance Policy) that lets a page declare precisely which web performance best practices it complies with. The user agent would then be responsible for ensuring that the announced best practices are actually effective.
It’s a kind of SLA issued by the website to web browsers, one that the user agent will guarantee even if it means breaking its own default behaviour.
Example: my website announces to the user agent (via the no-blocking-font directive) that the page content can be displayed without waiting for a font file to be downloaded. If that turns out to be untrue (i.e. the directive is inaccurate), the user agent should bypass its default behaviour and display the textual content anyway, before downloading the font file.
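To give an idea of the shape this could take, here is a hedged sketch of such a response header. The directive names (no-blocking-font, resource-limit) appear in the proposal discussed in this article, but the value syntax and the unit assumed for the limit are my own guesses, since the spec is at a very early stage.

```typescript
// Hypothetical sketch only: the CPP proposal is at an early stage, so the
// value syntax and the KB unit assumed for resource-limit may not match it.
const contentPerformancePolicy = [
  "no-blocking-font",   // text can be rendered before web fonts have arrived
  "resource-limit 800", // the page promises to stay under ~800 KB overall (assumed unit)
].join("; ");

// A server would attach it to each HTML response, e.g. with Node's http module:
//   res.setHeader("Content-Performance-Policy", contentPerformancePolicy);
console.log(contentPerformancePolicy);
```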
In this article, I aim to give some context about the motivations behind Content Performance Policy (through a summary of Tim Kadlec's article), to talk about the spec proposal of course (even if it’s at a very early stage), and finally to focus on the current limitations we could face with Content Performance Policy.
Indeed, within the Dareboost team, we work to provide a diagnostic tool that is as reliable as possible for checking compliance with performance best practices (among others), so our point of view is not exactly the usual one.
Why a Content Performance Policy?
We haven't addressed this topic on this blog yet, but if you follow us on Twitter (if not, it’s time to do so!), you surely know about it already: Google recently shifted the lines with the Accelerated Mobile Pages project.
It’s a framework for building mobile web pages, focused on speed and user experience. Google is pushing things forward: for a few days now, it has been highlighting websites using this technology in the search results.
Tim Kadlec had great words about it: AMP is a great technology, and so is the project. But Google promotes ONE technology, whereas AMP is by no means the only way to build a fast experience; it’s just one option among others.
Tim continues his analysis, mentioning that Google has actually done this to make its own work easier.
I raised this matter a few months ago, in this article about the Red Slow Label: it’s difficult to determine for sure whether a website is slow or fast, because you need to take a lot of parameters into account.
Besides, neither the Red Slow Label nor the Slow to Load label has ever reached the SERPs. By highlighting AMP, Google actually simplifies the equation: websites using it are considered fast, so they are promoted. Then there’s the rest of the world…
Here’s a quote from Tim I find particularly clever about this:
So when we look at what AMP offers that you cannot offer yourself already, it’s not a high-performing site–we’re fully capable of doing that already. It’s this verification.
Personally, I’m still very impressed by what Google has achieved and by the adoption of the technology, which has certainly helped widen the circle of people caring about web performance. But perhaps for the wrong reasons?
Tim Kadlec and Yoav Weiss worked on a solution offering a similar verification potential, making it possible to promote fast websites with a good user experience without creating a dependency on a particular technology. Because we need to preserve the openness and diversity of the web.
That’s how Content Performance Policy was born! (Even if it seems the AMP team was already working on a similar idea.)
Since the framework or technology in use would no longer be the guarantee of good performance, as AMP is today, that role would fall to the user agent. A website announces which best practices it complies with; if the site then breaks its promise, the browser has to enforce them.
Content Performance Policy, a good solution?
Let’s summarize:
- Google has imposed web performance on numerous website owners who depend on organic traffic and are therefore required to adopt AMP in order to benefit from the promotion of this technology in the search results
- Content Performance Policy does not go against the idea of promoting speed; it probably even recognizes the benefit of the positive pressure that search engines can bring. However, the CPP idea is to offer an alternative to an AMP-only world.
- AMP was a deal between Google and websites, whereas Content Performance Policy adds user agent vendors to the team, as user agents would vouch for websites’ promises
Of course, within the Dareboost team, we’re very enthusiastic to discover such great ideas! We 100% agree with the approach and with the need for something other than AMP.
Google penalizing slow websites has always been a matter of interest for us, raising a lot of questions. We have automated the detection of many web performance issues, going further than most tools. We have also benchmarked large samples of websites to learn more about performance.
That’s why we also wanted to share our opinion in this article, even if we're more used to talking about finalized recommendations.
Stakeholder interdependencies
The first difficulty worth focusing on is probably the tight interdependency between the 3 parties needed for CPP to be adopted: search engines, user agent vendors and website owners.
Without an incentive from search engines, CPP would probably remain a web-performance-world thing: an interesting tool, but one that would not help push speed to a new level the way AMP seems to.
Without web browser implementations, CPP would not be a verification signal at all, as nothing would force websites to comply with their promises.
Without wide adoption by website owners, it would end up penalizing a lot of fast websites that do not use the mechanism.
Speed or nothing?
Reading the list of directives (as of February 25th), we can assume that understanding Content Performance Policy requires some advanced knowledge of various topics.
That may well hold up the adoption process. Not only do you have to understand what it is about, but also what the stakes are and, above all, what a directive violation would result in.
Using CPP also implies a significant maintenance effort. Today, a mistake slows down your website. You can detect the slowdown, for instance using a web performance monitoring tool like Dareboost ;). With CPP, a mistake may mean your website no longer works at all! (Example: my website promises to weigh less than 800 KB. If a heavy image is added, for instance via a CMS - and therefore directly in production, outside of my staging workflow - web browsers will block data to avoid exceeding the limit, in order to keep my website compliant with its promise. What if the blocked content is an essential JS or CSS file?)
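One way to limit that risk (my own sketch, not part of the proposal) is to keep checking the actual page weight against the promised budget from your deployment or monitoring pipeline, so the browser never has to enforce the promise for you. The HAR file path and the 800 KB budget below are placeholders.

```typescript
// Sketch: compare the total transferred weight recorded in a HAR export
// against the weight budget promised to browsers. Path and budget are
// placeholders for this example.
import { readFileSync } from "fs";

const BUDGET_BYTES = 800 * 1024;

interface HarEntry {
  response: { bodySize: number };
}

const har = JSON.parse(readFileSync("page.har", "utf8"));
const entries: HarEntry[] = har.log.entries;

// bodySize can be -1 when unknown, so treat those entries as zero bytes.
const totalBytes = entries.reduce(
  (sum, entry) => sum + Math.max(entry.response.bodySize, 0),
  0
);

if (totalBytes > BUDGET_BYTES) {
  console.error(
    `Page weighs ${Math.round(totalBytes / 1024)} KB, breaking the ${BUDGET_BYTES / 1024} KB promise`
  );
  process.exit(1); // fail the check before browsers have to enforce the promise
} else {
  console.log(`OK: ${Math.round(totalBytes / 1024)} KB, within the promised budget`);
}
```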
Some of the CPP obstacles are similar to those of Content Security Policy adoption. If you provide A/B testing or tag manager solutions to your team, you expose yourself to breaking the promises at any time in production. And that might result in breaking your website, since the browser's job is to vouch for the promise.
Being able to constrain third-party content is very interesting. Nevertheless, for performance matters, I think it's something you have to do earlier, when choosing your services. Using CPP for this, the risk of seeing some third-party content break at any time due to an update is high.
Yoav's answer:
“I'm not sure how it would look like, but we plan to have mechanisms that will limit the impact to certain hosts.” (The proposal has since been updated.)
Finding a common reference basis
Last but not least, how can we establish the degree of importance of each directive? How can we be sure we have enough directives in the list to reasonably affirm that a page is fast enough? Even if we succeed, establishing thresholds (resource-limit, max-internal-blocking-size) will be hard. Avoiding extra complexity is absolutely necessary if we hope for adoption by website owners. So it would imply a common basis of thresholds for all the players using CPP as a verification mechanism (browsers, ad blockers, search engines, etc.).
As pointed out by Yoav, this matter would be taken into account (see the GitHub issue).
As the AMP team reminds us, we must not forget that things that are slow today might not be slow tomorrow, so this basis should be able to evolve.
The approach remains great, and it’s really nice to see people coming up with this kind of amazing idea.
I think CPP needs to offer more control for each directive, so that a promise can include or exclude specific resources (in order to prevent breaking anything vital if a promise were to be broken).
IMO, it might be nice to make a clear separation between promises whose enforcement by user agents has no major consequences (e.g. no-auto-play, no-blocking-font), those that can be risky, and those that can’t reasonably be enforced by browsers (e.g. max-internal-blocking-size).
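As a purely hypothetical illustration (none of this syntax exists in the draft), a policy offering that kind of control might separate low-risk promises from risky ones and scope the risky ones to non-vital hosts:

```typescript
// Purely hypothetical syntax: the host scoping and the risk split shown here
// are not part of the CPP draft; they only illustrate the control discussed above.
const lowRiskPromises = [
  "no-auto-play",     // enforcing this cannot break the page
  "no-blocking-font", // worst case, text renders with a fallback font
];

const riskyPromises = [
  // Hypothetical scoping: only resources from this placeholder host could be
  // blocked if the weight promise were broken, keeping vital assets safe.
  "resource-limit 800 scope=ads.example.com",
];

console.log([...lowRiskPromises, ...riskyPromises].join("; "));
```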
Maybe it's time to give CPP a new start? Feel free to contribute to the proposal.