I'm Addy Osmani, Ask Me Anything!

My name is Addy and I'm an engineering manager on the Chrome team at Google, leading Web Performance. Our team is responsible for trying to get pages on the web to load fast. We do this through web standards, working with sites & frameworks in the ecosystem, and through tools like Lighthouse and Workbox. I give talks about speed and have written books like Learning JavaScript Design Patterns and Essential Image Optimization. I'm married to the wonderful Elle Osmani, who co-runs our side-project TeeJungle at the weekends.

To learn more, you can find me on Twitter at @addyosmani

Comments (117)

Jess Lee

What was the book writing process like? How did you balance something so long form with technologies that are constantly changing/improving? i.e. did you have to go back to the 'first chapter' at the end, and update anything?

Addy Osmani

The approach I take to writing books and articles is embracing "The Ugly First Draft". It forces you to get the key ideas for a draft out of your head and once you've got something on paper you can circle back and start to build on that base. I love this process because you get the short-term satisfaction of having "something" done but still have enough of an outline you can iterate on it.

With my first book, "Learning JavaScript Design Patterns", the first draft was written in about a week. It was pretty awful :) However, it really helped frame the key concepts I wanted the book to capture and gave me something I could share with friends and colleagues for their input. It took a year to shape my first ugly draft of that book into something that could be published.

On writing about technologies that are constantly changing - I think every writer struggles with this. My opinion is books are great for fundamental topics that are likely to still be valuable to readers many years into the future. Sometimes topics like patterns you would use with a JavaScript framework or how to use a particular third-party API might be better served as short-lived blog posts (with less of the editorial process blocking you). You're still spreading your knowledge out there but some mediums are better than others for technologies that change regularly.

This is especially true of the front-end :)

Randall Koutnik

With my first book, "Learning JavaScript Design Patterns", the first draft was written in about a week.

Yikes, it took me nearly 9 months to put together the first draft of Build Reactive Websites with RxJS. What's your secret?

Addy Osmani

My ugly drafts are really, really ugly :)

It'll sound awful, but I have never intentionally written a book or long article. Often, there will be a topic I'm deeply invested in learning more about or writing about and I'll just try to consistently take time out every day to build on the draft.

With the first draft of the patterns book, I wanted to write an article about the topic so I started there and it just grew. I would stay up late and keep writing into the early hours of the morning each day during that week. The first draft wasn't very long - it may have been 60 pages of content.

However, the very early versions are not something I would have felt confident sharing with anyone. There were many parts with half-complete thoughts. It lacked a lot of structure. Many of these are things you have a better chance at getting right when spending 9-12 months on your first draft. I ended up spending that long on rewrites.

rhymes

Apropos of books and long articles, thank you a lot for Images.guide. It was illuminating and also very useful to make clients understand that re-inventing image resizing each time is usually not the best move :D

Liana Felt (she/her)

Hey Addy, Thanks so much for doing this! How much do different teams at Google coordinate?

Addy Osmani

Over on Chrome, we try our best to stay in touch with Google teams that are working on shipping experiences for the web as well as folks building for other platforms like Android or iOS. Sometimes this happens in the form of monthly check-ins to share learnings (there's often a lot we can learn from one another) and other times it's just over mailing lists.

That said, Google is a very large company, and with this comes the challenge of always staying on top of who is working on what. We still have a long way to go in improving our communication across all teams. We do want to keep making progress here :)

Andy Zhao (he/him)

What are the first performance improvements that you look for when going to a web page?

Addy Osmani

The first performance improvement I check for is whether the site could ship less JavaScript while still providing most of its value to the end user. If you're sending down multiple megabytes of JS, that might be completely fine if your target audience is primarily on desktop, but on mobile that JS can often dwarf the cost of other resources because it takes longer to process.

In general, I try to go through the following list and check off if the site could be doing better on one or more of them:

✂️ Send less JavaScript (code-splitting; see the sketch after this list)
😴 Lazy-load non-critical resources
🗜 Compress diligently! (GZip, Brotli)
📦 Cache effectively (HTTP, Service Workers)
⚡️ Minify & optimize everything
🗼 Preresolve DNS for critical origins
💨 Preload critical resources
📲 Respect data plans
🌊 Stream HTML responses
📡 Make fewer HTTP requests
📰 Have a Web Font loading strategy
🛣 Route-based chunking
📒 Library sharding
📱 PRPL pattern
🌴 Tree-shaking (Webpack, RollUp)
🍽 Serve ES2015 to modern browsers (babel-preset-env)
🏋️‍♀️ Scope hoisting (Webpack)
🔧 Don’t ship DEV code to PROD
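
To make the first item concrete, here's a minimal sketch of code-splitting with a dynamic import(), assuming a bundler like webpack that understands the syntax (the './charting' module and '#chart' element are hypothetical):

    // Load the heavy charting module only when the user asks for it,
    // so it becomes a separate chunk instead of bloating the main bundle.
    const button = document.querySelector('#show-chart');
    button.addEventListener('click', async () => {
      const { renderChart } = await import('./charting'); // hypothetical module
      renderChart(document.querySelector('#chart'));
    });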

Andy Zhao (he/him)

Phew, extensive list! Love the emojis :)

rhymes

Great checklist! Thanks!

Karolis Ramanauskas • Edited

Could you clarify what you mean by library sharding? Awesome list by the way, thank you!

Sunny Sharma • Edited

Thank you Addy for sharing the checklist, enough points for my next talk :-)

Peter Kim Frank

Hey Addy, what are your feelings about AMPs?

Addy Osmani

I think what we all want is really fast first-party content delivering great experiences to users.

With my user-hat on, an unfortunate reality is that most sites on the web still provide users a slow, bloated experience that can frustrate them on mobile. If a team has the business buy-in to work on web performance and optimize the experience themselves, I'm more than happy for them to do so. We need as many folks waving the #perfmatters flag as we can get :)

That said, staying on top of performance best practices is not something every engineering team has the time to do. Especially in the publishing space, this is where I see AMP providing the most value. I'm pretty excited about their commitments to the Web Packaging specification for addressing some of the valid critique AMP's had with respect to URLs: amphtml.wordpress.com/2018/01/09/i....

I'm also very keen for us to keep exploring what is possible with the Search announcement that page speed will be used as a ranking signal irrespective of the path you take to get there.

Ben Halpern

This evolution for AMP definitely has me more interested in the project. I've been standing on the sideline hoping some of these URL issues could be resolved.

Ben Halpern

Hey Addy, thanks for this!

What's your favorite programming language besides JavaScript?

Addy Osmani • Edited

I recently enjoyed digging back into Rust and loved it. It has a pretty expressive type system that lets you convey a lot about the problem you're working on. Support for macros and generics is super nice. I also enjoyed using Cargo.

My first interaction with Rust was in this excellent tutorial by Matt Brubeck back in 2014 called "Let's build a browser engine!" (I hope someone tries to update it!). Perhaps a good future post for someone on dev.to? ;)

Andrew Davis

Do you think the recent rise in popularity of single-page applications using React/Angular/Vue has been good for web performance? To me, it seems too easy to create bundles that are very large and difficult to parse on the client (plus, SPAs can be really complicated, but that's a whole other discussion). Do you think the SPA is the future of web development, or is there still a place for server-generated HTML?

Addy Osmani

Great question :) A lot of the sites I profile these days don't perform well on average mobile phones, where a slower CPU can take multiple seconds to parse and compile large JS bundles. To the extent that we care about giving users the best experience possible, I wish our community had better guardrails for keeping developers off the "slow" path.

React/Preact/Vue/Angular (with the work they're doing on Ivy) are not all that costly to fetch over a network on their own. The challenge is that it's far too easy these days to "npm install" a number of additional utility libraries, UI components, routers...everything you need to build a modern app, without keeping performance in check. Each of these pieces has a cost, and it all adds up to larger bundles. I wish our tools could yell at you when you're probably shipping too much script.

I'm hopeful we can embrace performance budgets more strongly in the near future so that teams are able to learn to live within the constraints that can guarantee their users can load and use your sites in a reasonable amount of time.
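
As one hedged example of what a budget guardrail can look like in practice, webpack's built-in performance hints will fail a build that overshoots a size limit (the 170KB figure below is illustrative, not a recommendation):

    // webpack.config.js
    module.exports = {
      performance: {
        maxEntrypointSize: 170 * 1024, // ~170KB of script on the critical path
        maxAssetSize: 170 * 1024,      // ~170KB for any single emitted asset
        hints: 'error',                // break the build when a budget is blown
      },
    };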

SPAs vs. SSR sites: often we're shipping down a ton of JavaScript just to render a list of images. If this can be done more efficiently on the server side by just sending some HTML to your users, go for it! If, however, the site needs a level of interaction powered by JavaScript, I heavily encourage diligent code-splitting and looking to patterns like PRPL to ensure you're not sending down so much code that the main thread stays blocked for seconds.
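
As a rough sketch of the PRPL idea (render the requested route first, then pre-cache the remaining routes when the browser is idle; the route module paths below are hypothetical):

    // Render the route the user actually requested first.
    import('./routes/home').then(({ render }) => render());

    // Then warm up the remaining route chunks once the main thread is idle,
    // so later navigations are fast without taxing the initial load.
    requestIdleCallback(() => {
      import('./routes/search');
      import('./routes/checkout');
    });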

Andrew Davis

Thanks for responding! PRPL is a new pattern to me, hopefully with more awareness we will be able to use it and techniques like it to get better performance.

Nick Taylor • Edited

This is from way back, but I find your origin story quite interesting.

I guess you've always had perf on the brain? 😜

Nick Taylor

And a follow up question. Is the work you did on your "Xwebs megabrowser" what paved the way for all browsers to start serving multiple HTTP connections per domain to load a web page?

Addy Osmani

Haha. Perf always matters :)

For some back-story, when I was growing up in rural Ireland, dial-up internet was pervasive. We spent years on 28.8kbps modems before switching to ISDN, but it was an even longer time before fast cable internet became the norm. There were many times when it could take 2-3 days just to download a music video. Crazy, right?

When it was so easy for a family member to pick up a phone and drop your internet connection, you learned to rely on download managers quite heavily for resuming your network connections.

One idea download managers had was this notion of "chunking" - rather than creating one HTTP connection to a server, what if you created 5 and requested different ranges from the server in parallel? If you were lucky (which seldom happened), you would have a constant speed for just that one connection, but it was often the case that "chunking" led to your file being downloaded just that little bit faster.

I wanted to experiment with applying this notion of "chunking" to web browsers. So if you're fetching an HTML document or an image that was particularly large, would chunking make a difference? As it turns out, there were cases where this could help, but it had a high level of variance. Not all servers want you to create a large number of connections for each resource, but the idea made for a fun science project when I was young and learning :)
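
For the curious, that "chunking" trick maps neatly onto HTTP Range requests. A minimal modern sketch, assuming the server honors the Range header and the total size is known:

    // Request N byte-ranges of a resource in parallel and stitch them
    // back together, roughly what old download managers did.
    async function fetchInChunks(url, totalBytes, chunks = 4) {
      const chunkSize = Math.ceil(totalBytes / chunks);
      const parts = await Promise.all(
        Array.from({ length: chunks }, (_, i) => {
          const start = i * chunkSize;
          const end = Math.min(totalBytes - 1, start + chunkSize - 1);
          // each response comes back as 206 Partial Content
          return fetch(url, { headers: { Range: `bytes=${start}-${end}` } })
            .then((res) => res.arrayBuffer());
        })
      );
      return new Blob(parts); // the reassembled file
    }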

Back to your question about this paving the way for browsers serving multiple HTTP connections per domain: I think if anything, it was happenstance that I was looking at related ideas. Network engineers working on browsers are far more intelligent than I ever could have been at that age and their research into the benefits of making multiple connections to domains is something I credit to them alone :)

Nick Taylor

Thanks for taking the time to reply Addy. Keep up the great work and keep that Canadian @_developit in check. I'm not sure how much he knows about perf 🐢. Better check his bundle sizes. 😉

Randall Koutnik

I'm sure you've encountered some real humdingers when trying to optimize slow pages. Got a favorite story about some ridiculous performance bug you've encountered?

Addy Osmani

Hmmmm. The worst optimized site I've encountered in my career was probably just a few weeks back :) This was a site with a number of verticals where the front-end teams for each vertical were given the autonomy to decide how they were going to ship their part of the site.

As it turns out, this didn't end well.

Rather than each "vertical" working collaboratively on the stack the site would use, they ended up with vaguely similar, yet different things. From the time you entered the site to the time you checked out, you could easily load 6 different versions of React and Redux. Their JavaScript bundles were multiple MBs in size (a combination of utility library choices and not employing enough code-splitting or vendor chunking). It was a disaster.
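
For context, vendor chunking is the sort of fix that helps here: have the bundler pull shared dependencies into a single chunk that every part of the site reuses. A hedged webpack 4 sketch:

    // webpack.config.js: one shared "vendor" chunk for react/redux,
    // so each vertical stops shipping its own copy.
    module.exports = {
      optimization: {
        splitChunks: {
          cacheGroups: {
            vendor: {
              test: /[\\/]node_modules[\\/](react|react-dom|redux)[\\/]/,
              name: 'vendor',
              chunks: 'all',
            },
          },
        },
      },
    };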

One thing we hope can change this is encouraging more teams to adopt performance budgets and stick closely to them. There's no way the web can compete on mobile if we're going to send down so much code that accomplishes so little.

Oh, other stories.

  • Ran into multiple sites shipping 80MB+ of images to their users for one page...on mobile
  • Ran into a site that was Service Worker caching 8GB of video... accidentally

There are so many ridiculous perf stories out there :)

Ben Halpern

Oh my goodness this is making me squirm.

rhymes • Edited

OMG, six different versions of the same library is definitely the result of poor communication. I can't wait for an AI-powered browser that opens alerts saying "please tell the developer fools who built this website to talk to each other" :D

The image thing is very common.

I've seen galleries/grids of images rendered using the original images uploaded by the operator, which obviously were neither checked for size nor resized automatically.

Galdin Raphael

Those stories sound like they're really easy to repeat though.

Mac Siri

Hello Addy! Do you have all the mainstream browsers installed, and how often do you use them to stay ahead of the game?

Addy Osmani

I usually try to have the latest + bleeding-edge versions of most browsers installed for testing. On my Mac at the moment there's:

  1. Chrome: stable, beta, canary, Chromium (tip-of-tree) installed
  2. Firefox: Quantum, Firefox Developer Edition/Nightly
  3. Safari: stable, a version of Safari Tech Preview that's often a few weeks old
  4. On my Android phone: Chrome, Firefox, Brave, Opera

For Edge, I'll usually have a VM setup for testing or take out a Surface from my drawer to test the current stable version.

I otherwise heavily rely on services like BrowserStack or WebPageTest to validate that my projects perform and render correctly on real devices.

Ben Halpern

What's it like being a coding celebrity?

Did you think you'd have a massive following like this before it happened?

How did you get to where you are in this regard? I'm super curious about your mindset along the way.

Addy Osmani

I often don't feel like I deserve any of the attention. Some of the most exceptional coders in the world don't get as much acknowledgement of their work as they should. I wish that we could change this and to the extent platforms like dev.to and social mentions enable a path to this, I'm hopeful more of them will be considered coding celebrities in the future :)

What's it like being a coding celebrity?

What's it like... you learn the importance of being humble. You learn to be careful with what you say and how it can be interpreted when you make a strong statement. When people look up to you (in any situation), you have a responsibility to try to give them a measured response where you've considered the best data and facts available to you. It's far too easy to spend 15s thinking about something and just post it out into the world (think before you speak).

There are tweets and articles about topics that I would love to post, but don't because I'd prefer to take my time to check on the data and consult with others in the community so I can be confident if I suggest something is a best practice, that I truly believe it is. It's very possible I overthink and over-analyze so take this with a grain of salt :)

Did you think you'd have a massive following like this before it happened?

I didn't think I would be fortunate enough to get the following I have. I just constantly hope I'm giving folks some value vs. throwing out nonsense :)

How did you get to where you are in this regard? I'm super curious about your mindset along the way.

I get asked this question a lot and the answer is: by trying to continue delivering value to the community as often as I can. I definitely don't do this every day or every week, but I think we all struggle to stay on top of things on the web. It's challenging knowing what the latest best practices, tools and techniques are. To the extent that we can distill some of this down into a bite-sized form for folks (tweets etc) that they feel comfortable digesting, maybe that's useful enough.

I will say the journey itself to this point, although hard, has been fun and educationally rewarding.

Henry Lim

Um... Totally Tooling Tips?

Addy Osmani • Edited

Totally Tooling Tips will probably not be coming back with its current line-up (sorry!). Matt and I had an amazing time doing the show, but as we've switched to different roles at Google over time, we've had less free time to shoot.

We're evaluating whether we want to keep the show going with a different set of speakers (or do something completely new) so keep watching this space!

That said....Matt and I might come back for one final episode. If folks want it :)

alexparish

Are there any important considerations or gotchas when shipping an ES2015 bundle to modern browsers and a transpiled bundle to legacy browsers? Is this something you recommend?

Addy Osmani • Edited

One caveat off the top of my head: check that the bundle being served to search crawlers can be interpreted correctly. For Google Search, our web rendering service is based on Chrome 41.

I would just check to ensure the legacy (non-ES2015) bundle doesn't also contain anything that requires additional polyfills. If it does, look at ways you can feature detect and serve that additional JS as needed.
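
On the "feature detect and serve that additional JS as needed" point, a minimal sketch (the bundle path and the features probed are hypothetical; the common companion to this is the script type="module"/nomodule split for the ES2015 vs. legacy bundles themselves):

    // Only fetch the extra polyfill bundle when the browser is missing
    // something the legacy bundle depends on.
    if (!('fetch' in window) || !('Promise' in window)) {
      const script = document.createElement('script');
      script.src = '/polyfills.js'; // hypothetical extra bundle
      script.defer = true;
      document.head.appendChild(script);
    }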

alexparish

Hadn’t thought of crawlers! Great shout, thank you.

Antoinebr • Edited

I help medium-sized businesses with mobile UX (webperf / UI) at a big search ads company.

I feel that a lot of businesses are ~5 years behind on implementing new tech (REST, JavaScript frameworks, PWAs).

YouTube is full of training material, but in the real world devs don't have time to learn "exciting tech".

E.g. companies are stuck on old Magento 1.x, afraid to touch anything (because there are no tests)...

What do you think could change/improve this?

Addy Osmani

I've worked in companies where change aversion made it difficult to migrate legacy codebases onto anything more modern or efficient. It's not uncommon to hear stakeholders use the "if it's not broken, why fix it?" rationale. They often don't have your insight into the maintainability or performance issues some of these decisions can cause over time.

One approach I've increasingly found teams use is pitching for an A/B test - e.g "Let us try to migrate over a smaller part of the site. If we can show you it will improve business/conversion metrics by at least X, let us try it out for other parts of the site". This reduces the cost of the exploration in the eyes of the business ("they're just asking to do this for one page or section...") and gives you a target to justify continued investment.

Where it's not quite as straightforward as demonstrating a change will lead to improvements in business metrics, showing data on how many engineering hours would be saved by switching to a more modern Magento versus the maintenance cost of the old one, and what other opportunities doing so unlocks, might also convince the business it's worthwhile letting you explore it.

Fabian Holzer

I'd be interested to hear your thoughts on WebAssembly.

Addy Osmani

I'm hiring for a WebAssembly Developer Advocate at the moment so I definitely believe it has a future :)

I'm excited about the potential WASM will unlock for the types of applications that were heavily bound by JavaScript's compute performance. I think it's going to be huge for certain classes of games, for accelerating how quickly well-known desktop applications and libraries can be ported to the web (I was playing around with a Vim port in WASM just last night!), and potentially for data science. At the same time, I don't think it's going to displace the use-cases for JavaScript directly. JS continues to see strong adoption for UI development and I don't see this changing anytime soon.

Ben Halpern

Are there any specific thresholds in terms of sending a certain number of KBs or MBs where the user experience starts to suffer most? And would that number be considerably different for areas well-served by fast Internet vs underserved areas?

I operate under "less is better" paying some attention to possible packet-specific thresholds, but I can't say I'm all that certain about any of my ways and any insight in this regard would be awesome.

Addy Osmani

I usually try to walk back from my goals when it comes to performance budgets. For example:

"Users on average phones can load and interact with this site in 5s on a 3G or better network".

If we look at the sequence of communication between a browser and a server, a few hundred milliseconds (400-600ms) will already be used up by network overhead and round-trips: a DNS lookup to resolve the hostname (e.g. google.com) to an IP, a network round-trip to do the TCP handshake, and then a full round-trip to send the HTTP request.

This leaves us with ~4+ seconds to transfer data while still keeping the page interactive. On a 3G network, you can probably at best get away with sending 130-170KB of resources while meeting your targets. If your users are on 4G/LTE, you may be able to send more. The variability of mobile networks means that even if you're on a high-end phone over LTE (e.g. iPhone X), your network speeds can effectively be slower than you'd like. This is why developing for a "poorer" baseline is great: it means that even under worse conditions you're still able to deliver a good user experience.
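
Spelling that arithmetic out as a back-of-the-envelope sketch (the throughput, overhead, and processing numbers are rough assumptions, not measurements):

    const budgetMs = 5000;         // goal: interactive in 5s
    const networkOverheadMs = 600; // DNS + TCP handshake + request round-trip
    const processingMs = 1000;     // rough parse/compile cost on a median phone
    const throughputKBps = 50;     // ~400kbps effective 3G is about 50KB/s

    const transferMs = budgetMs - networkOverheadMs - processingMs;
    const resourceBudgetKB = (transferMs / 1000) * throughputKBps;
    console.log(resourceBudgetKB); // 170, i.e. the 130-170KB ballpark above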

On desktop, it's a different ball-game. Your users are likely connected to more reliable WiFi/cable on a CPU-beefy machine. You can probably get away with shipping MBs of code to your users there. That said, many of us have gone through the pain of trying to use WiFi from a coffee shop, at a conference or on a plane. When your effective network connection speed is poor, you can start to feel the pain of those MBs even on a desktop machine.

"Less is better" is always a good mantra to hold regardless of device and network type :)