CSS cascade layers sound great for projects where CSS may be coming from multiple sources and you want a nice way to control the cascade. I've been looking into them as a way to include the CSS for company-wide shared components while allowing project-level overrides via utility classes, no matter the specificity of the shared component code (a single-class utility like `.leading-snug` often has no chance against higher-specificity library CSS like `.c-acme-component-name .c-acme-component-name__header :is(h2,h3,h4,h5,h6)`).
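With layers, the shared component CSS can sit in a lower-priority layer, so even a low-specificity utility wins. A minimal sketch (the layer names are my own):

```css
/* Later layers beat earlier layers for normal declarations,
   regardless of selector specificity. Layer names are illustrative. */
@layer components, utilities;

@layer components {
  .c-acme-component-name .c-acme-component-name__header :is(h2, h3, h4, h5, h6) {
    line-height: 1.25;
  }
}

@layer utilities {
  /* Wins over the component rule above, despite far lower specificity */
  .leading-snug {
    line-height: 1.375;
  }
}
```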
Unfortunately, there will still be users on older devices that don't have an up-to-date enough browser for cascade layers to work. Browsers that don't support cascade layers will ignore any CSS inside a layer, so these users might see a pretty broken site if you used layers extensively.
Luckily, there is a PostCSS polyfill (`@csstools/postcss-cascade-layers`) that rewrites your CSS, adding selector complexity to match the behaviour of cascade layers. But what if we didn't want to send users of modern browsers the polyfilled CSS?
Is there a nice way to detect cascade layers support? There's no `@supports at-rule(@layer)` in CSS, and no equivalent test in JavaScript.
Well, no nice way, but what if…
Years ago, maybe even decades ago now, I remember a CSS refactoring technique of adding unique transparent-pixel `background-image` URLs to CSS rules that you suspected were no longer necessary, but needed to prove were not in use:
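Something like this (the image path and selector are made up for illustration):

```css
/* Suspected-dead rule: if this unique pixel URL never shows up in the
   server logs, the selector is no longer matching anything. */
.old-promo-banner {
  background-image: url("/pixels/old-promo-banner.gif");
}
```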
After a few months, if you checked your server logs/database and no requests had been made to the unique background image URLs then you could delete the CSS rule as it clearly wasn't in use anymore.
I had a similar idea for feature detecting the cascade layers support. Be warned, it's a little over-the-top, and has quite a few moving parts, but it was a fun Sunday afternoon exercise in 'what if'.
In our site's build process, we generate `dist/styles.css` and a polyfilled version, `dist/styles-polyfilled.css`. The source of our HTML references the polyfilled version, as it's safer to assume no support.
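One real option for that PostCSS polyfill step is the `@csstools/postcss-cascade-layers` plugin; a minimal config sketch for the polyfilled build only (the file layout and the rest of the pipeline are assumptions):

```js
// postcss.config.js — sketch for producing dist/styles-polyfilled.css.
// The plain dist/styles.css build would run without this plugin.
module.exports = {
  plugins: [
    // Rewrites @layer rules into specificity-adjusted plain CSS
    require("@csstools/postcss-cascade-layers")(),
  ],
};
```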
An edge function intercepts all requests to our HTML pages and checks for the presence of a cookie indicating cascade layers support. If that cookie is missing:
A visually hidden HTML element
`<span aria-hidden="true" data-css-cascade-layers-detector></span>` is added to the page.
As well as the `span` element, we inject some HTML to lazy-load a CSS file called `dist/detect.css` that contains some unpolyfilled CSS inside a layer. Browsers that don't support CSS layers will ignore this. The CSS contains a background-image declaration for our `[data-css-cascade-layers-detector]` span. The browser will load the background image, and in doing so make an HTTP request to another edge function that returns a transparent image but also sets the cookie.
```js
// Inject the detector span and the detection stylesheet
// (the insertion point just before </body> is illustrative)
newBody = body.replace(
  "</body>",
  `<span aria-hidden="true" data-css-cascade-layers-detector style="position:absolute;left:-9999px;top:-9999px;"></span>
<link rel="stylesheet" href="/dist/detect.css" />
</body>`
);
```
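The detection stylesheet itself only needs one rule. A sketch of `dist/detect.css`, where the layer name and the image endpoint path are my own:

```css
/* Only parsed by browsers that support cascade layers; everyone else
   ignores the whole block and never requests the image. */
@layer detect {
  [data-css-cascade-layers-detector] {
    /* Hypothetical endpoint: returns a transparent pixel and sets the cookie */
    background-image: url("/detected.gif");
  }
}
```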
If the cookie is present, we replace the link element that loads `dist/styles-polyfilled.css` with one that loads `dist/styles.css`, so for all subsequent page loads the user gets a simpler, less verbose CSS file.
```js
newBody = newBody.replace(
  '<link rel="stylesheet" href="/dist/styles-polyfilled.css" />',
  '<link rel="stylesheet" href="/dist/styles.css" />'
);
```
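The edge function behind the background image request might look something like this. It's a sketch: the cookie name, the `Max-Age`, and the `Response`-based handler shape are all assumptions, and the real code will depend on your edge platform (Netlify, Cloudflare, etc.):

```javascript
// 1x1 transparent GIF, base64-encoded
const TRANSPARENT_GIF_BASE64 =
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7";

function handleDetectImage() {
  // Decode the pixel into raw bytes for the response body
  const body = Uint8Array.from(atob(TRANSPARENT_GIF_BASE64), (c) =>
    c.charCodeAt(0)
  );
  return new Response(body, {
    status: 200,
    headers: {
      "content-type": "image/gif",
      // Hypothetical cookie name; the HTML-intercepting function
      // would check for this same cookie on later requests
      "set-cookie":
        "css-cascade-layers-supported=true; Path=/; Max-Age=31536000; SameSite=Lax",
    },
  });
}
```

Because only layer-supporting browsers ever request the image, the cookie's presence is itself the feature test.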
To recap, the moving parts are:
- The function that intercepts all HTML responses
- The function that returns the transparent image and sets a cookie
- The CSS that requests the transparent image (and thereby triggers the cookie-setting function), but only if the browser supports cascade layers
- It's a lot of moving parts and network-level code for something as traditionally frontend-only as CSS. It's a very non-standard way of doing CSS feature detection. Is it fair to expect colleagues to hold all of this in their head?
- What's the performance impact? We make the user load the polyfilled CSS on their first page load, but then if their browser supports modern CSS we make them load the unpolyfilled CSS file on the subsequent page load, ignoring the perfectly good polyfilled CSS file in their browser cache.