Google calls Lighthouse "an open-source, automated tool for improving the quality of web pages". It is not a performance tool per se, but a promine...
That was a great read indeed.
Giving meaningless numbers is often worse than having no numbers at all,
because you spend time and effort "improving" things that don't matter much.
It's worth pointing out that Google doesn't actually know a single thing about your users.
Only you know your audience.
Google gives advice for the average site.
But no one is the average site.
Load time matters a lot for Amazon, because a 0.1-second improvement means a lot more money.
But for my personal website with a few highly motivated users, it would be an absolute waste of time and effort to write my own static site generator in Rust just to be faster.
For me, what matters is the content and how easy it is to update it.
Thanks, Jean-Michel. It is challenging to maintain a healthy perspective on performance. As you mentioned, it is best to take a holistic approach to UX.
☝️ This
Amazing article.
I've voted it "High Quality". Keep posting such articles!
Thanks, Akash 🙂
I also think it depends on the website's audience.
As an example, I've been working on Web Vitals for e-commerce clients. One client was using BigCommerce, which historically scores better in the USA due to its hosting arrangements, but this client has a UK store with a UK-only audience. Metrics such as TTFB suffered from latency and poor infrastructure, which had a knock-on effect on their LCP and FCP. The lab-driven PageSpeed results reported by Google said they were "OK", but because the audience was located elsewhere, the real-world results failed those metrics.
After moving this client to Shopify, which hosts them in EU data centres, we in fact saw the opposite: lab-driven tests reported "OK" this time around, but real measurements returned a pass thanks to the audience's proximity and a much better TTFB.
For me, if we are going to give Google's PageSpeed metrics and vitals any air time, we should also compare access locations and audience types. As you have suggested, the simulated/lab tests are misleading enough!
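To make audience location visible in the numbers, here is a minimal sketch of real-user TTFB collection with the Navigation Timing API; the `/rum` endpoint and payload shape are hypothetical, not something from this discussion.

```ts
// Measure time to first byte for the current page load and beacon it home, so
// latency as experienced by the real audience (e.g. UK users hitting US
// infrastructure) shows up instead of a single simulated lab location.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

if (nav) {
  const ttfb = nav.responseStart - nav.startTime; // ms until the first response byte
  navigator.sendBeacon(
    '/rum', // hypothetical collection endpoint
    JSON.stringify({ metric: 'TTFB', value: Math.round(ttfb), page: location.pathname })
  );
}
```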
Awesome article indeed. Recently I was working on a project where the client asked us to improve the performance of the mobile site because it was around 25-35. We did A LOT of things to improve performance and managed to get it to 45-55, but we didn't understand why the score varied so much; one time I even got 60 when running the test.
In the end the client decided to leave it at that, especially because we also showed them that their competitors had much worse performance scores 🙂
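Variance like that is normal for single runs. As a rough sketch, one way to get a steadier number is to run Lighthouse from Node several times and take the median; this assumes the `lighthouse` and `chrome-launcher` npm packages, and the exact options are illustrative rather than definitive.

```ts
// Run the Lighthouse performance category several times and report the median
// score, since individual runs fluctuate with network and CPU noise.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function medianPerformanceScore(url: string, runs = 5): Promise<number> {
  const scores: number[] = [];
  for (let i = 0; i < runs; i++) {
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    try {
      const result = await lighthouse(url, {
        port: chrome.port,
        onlyCategories: ['performance'],
        output: 'json',
      });
      // The category score is 0..1; scale it to the familiar 0..100.
      scores.push((result?.lhr.categories.performance.score ?? 0) * 100);
    } finally {
      await chrome.kill();
    }
  }
  scores.sort((a, b) => a - b);
  return scores[Math.floor(scores.length / 2)];
}

medianPerformanceScore('https://example.com').then((score) =>
  console.log(`median performance score: ${score}`)
);
```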
🙂
I agree with most of the points made in the article. But all critique aside, there is one thing that aiming for better ("better than currently") results will always give you: a faster and slimmer website.
Optimizing assets? Yes, please. In a lot of countries networks are still slow and expensive. Oh, and every byte uselessly transferred means more energy consumption. We cannot frown upon Bitcoin for wasting energy and at the same time act as if we had nothing to do with it as web developers.
Getting rid of unneeded scripts and data? Yes, please. It is very easy to achieve a better score by getting rid of GTM, the three dozen ad networks and the six other trackers on your site. Who benefits? First of all, the users of your site.
Most sites I have worked on were easy to improve with those two things. And technically, that's not hard.
Whether your client buys into letting go of all the tracking/advertising cruft might be an entirely different question, though. 🙂
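As a quick way to see what that cruft actually costs before having the conversation with the client, here is a small, purely illustrative sketch using the Resource Timing API in the browser console.

```ts
// List third-party resources on the current page by transfer size, largest first.
// Note: transferSize reports 0 for cross-origin responses that don't send a
// Timing-Allow-Origin header, so some entries may under-report.
const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

const thirdParty = resources
  .filter((entry) => new URL(entry.name).origin !== location.origin)
  .map((entry) => ({ url: entry.name, kB: Math.round(entry.transferSize / 1024) }))
  .sort((a, b) => b.kB - a.kB);

console.table(thirdParty.slice(0, 10));
```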
Field data is the best way of understanding the performance of your website, as Philip Walton has noted before: philipwalton.com/articles/my-chall.... This is why the Core Web Vitals initiative measures Core Web Vitals through the Chrome User Experience Report (CrUX) and not Lighthouse.
Lighthouse is a diagnostic tool to provide recommendations and the score is a broad summary. I agree with the other comment not to focus on absolute values produced by it, but instead to look at the recommendations it surfaces to help improve performance.
There can be big differences between lab and field data: web.dev/lab-and-field-data-differe... At the end of the day, what really matters is how users experience your site, not how a tool like Lighthouse sees it under particular settings and with the simple, cold load it performs.
Responsiveness (measured in Core Web Vitals by FID, soon to be replaced by INP) is particularly difficult for lab-based tools, because it is measured from a user interacting with your site, and Lighthouse doesn't interact with your site (it can be made to with User Flows, but only for the limited interactions you program). This is why Lighthouse cannot measure FID or INP; the best proxy it has for a simple, cold page load is TBT, which indicates a busy main thread and therefore potential responsiveness problems.
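For readers who want to collect that field data themselves, here is a minimal sketch using Google's `web-vitals` package, which reports the same Core Web Vitals that CrUX is built on, including INP, which a cold lab run cannot observe; the `/analytics` endpoint is hypothetical.

```ts
// Report Core Web Vitals as real users experience them, rather than relying on
// a single simulated Lighthouse load.
import { onCLS, onINP, onLCP, onTTFB } from 'web-vitals';

function sendToAnalytics(metric: { name: string; value: number; id: string }) {
  // sendBeacon survives page unloads better than fetch for RUM beacons.
  navigator.sendBeacon('/analytics', JSON.stringify(metric));
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onTTFB(sendToAnalytics);
```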
Thanks for sharing your perspective, Barry.
Could the scoring scale be non-linear due to the distribution of search rankings being non-linear?
For example, the first-ranked post in Google gets 30% of the clicks, the next gets significantly less, and so on.