
Should Frontend Devs Care About Performance??

Adam Nathaniel Davis on February 27, 2022

I was recently talking to an architect at Amazon and he made a very interesting comment to me. We were talking about the complexity of a given alg...
peerreynders • Edited

TL;DR: Often it's less about being performance conscious and more about being explicit about what tradeoffs are being made: for whose benefit and to whose detriment.

The article largely focuses on code produced by the frontend developer but the third party code selected for use on the client side (and thus affecting the client side architecture) imposes overhead even before a single line of code is written (The Cost of Javascript Frameworks, Benchmarking JavaScript Memory Usage).

So perhaps "caring about performance" should be practised by honestly understanding the impact our tools have on end user performance.

These days React is pretty much a bandwagon choice: reportedly popular DX, large ecosystem, ready supply of developers - but is the (performance) cost of adoption fully understood? If React Native isn't needed, perhaps Preact is "good enough" (Etsy). And if it's mostly about JSX, maybe Solid is an option?

Similarly, Next.js is popular right now, but are the end user performance tradeoffs well understood by those who develop with it? There is room for improvement, which is why Remix exists. Astro right now supports multiple frameworks, making it possible to gradually migrate towards more lightweight solutions once Astro becomes SSR capable (it's currently just in the SSG phase). Meanwhile Qwik aims to accomplish things that are impossible with the mainstream frameworks.

it was entirely unexpected coming from someone in the Ivory Tower that is Amazon.

Amazon is a large company with numerous teams.

In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.

Marissa Mayer at Web 2.0 (2006)

So given their business volume a 1% difference can establish a tolerance for a lot of effort, expense, and "a certain lack of maintainability" in the right place.

And even if that browser is running on a mobile device, it probably has loads of unleveraged processing power available for you to use.

That's largely a desktop web perspective that doesn't transfer well to the (mass) mobile web.

It seems everybody is adopting a stance that serves their particular needs best - example: "on a mobile device this can take seconds".

So the truth is likely somewhere in between and "good enough" is highly context sensitive.

But if the code runs in the browser, you're not a crappy dev just because the tweak is not forefront in your mind.

That comes across as "if it doesn't happen in my backyard, I don't care".


Frontend Devs should care about web performance; JavaScript micro-optimizations play only a minor role in that (unless we're dealing with the implementation of frameworks/libraries).

Adam Nathaniel Davis

I pretty much agree with everything you've written. But I may not have made it clear when I said you should "Care... but not too much" that what you should care about are discernible differences in performance. I totally agree that even a 100 millisecond "delay" may be enough to negatively affect conversions. What I'm railing against are those who are fretting over a nested loop, when the array being looped over can only ever hold, say, 10 values. In scenarios like those, fretting about "performance" is rather silly.

peerreynders

But I may not have made it clear when I said you should "Care... but not too much" that what you should care about are discernible differences in performance.

"Care... but not too much" would resonate strongly with the crowd that likes to invoke the "premature optimization" clause to shut down any discussion relating to any kind of performance - typically to justify or even promote "performance ignorance" because "that's the responsibility of the framework/libraries that we're using - so we don't have to care". So it's kind of "in vogue" to downplay performance.

My sense was that you were singling out "pointless JavaScript micro-optimizations" but there was never a counterpoint "what aspects of performance should a front end developer care about?"

when the array being looped over can only ever hold, say, 10 values.

Understood but there has to be the conscious decision "it's OK for 10 values, for 100_000_000 I'd have to do better", i.e. there should be knowledge of potential performance consequences should the code find itself on the hot path.
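A minimal sketch of that conscious decision, with hypothetical data shapes - the nested version is fine (and arguably clearer) for ten records, while the indexed version is what you'd reach for on a hot path:

```js
const users = [{ id: 1, name: 'Ada' }, { id: 2, name: 'Lin' }];
const orders = [{ userId: 2, total: 40 }, { userId: 1, total: 15 }];

// Fine for 10 values: O(n·m) nested lookup, the most readable version.
const joinedSmall = orders.map((o) => ({
  ...o,
  user: users.find((u) => u.id === o.userId),
}));

// Better for 100_000_000 values: build an O(m) index once, then do O(1) lookups.
const usersById = new Map(users.map((u) => [u.id, u]));
const joinedLarge = orders.map((o) => ({ ...o, user: usersById.get(o.userId) }));

console.log(joinedSmall, joinedLarge); // identical output; only the scaling differs
```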

"… but the takeaway I want you to get is that more so than in other systems, you need to measure measure measure measure, and make sure your measurements are as near as possible to the real thing you're trying to build."

That said most code isn't on the hot path but it's easy for people to fixate on JavaScript micro-optimizations because those are relatively easy to spot in code - whether or not they are actually relevant. By extension the real performance issues are: knowing how to measure whether code is performant enough, knowing how to find the code that needs improvement, identifying early decisions that limit performance, and exploiting opportunities that aren't directly related to JavaScript.
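For the "knowing how to measure" part, one lightweight option is the browser's User Timing API; a rough sketch (the mark names and workload are made up):

```js
const items = Array.from({ length: 50_000 }, (_, i) => `item ${i}`);

performance.mark('filter:start');
const hits = items.filter((item) => item.includes('42'));
performance.mark('filter:end');
performance.measure('filter', 'filter:start', 'filter:end');

// The measure shows up in the DevTools Performance panel, or can be read here:
const [entry] = performance.getEntriesByName('filter');
console.log(`filter took ${entry.duration.toFixed(1)}ms for ${hits.length} hits`);
```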

The Three Unattractive Pillars of Web Dev: accessibility, security and performance;

  • "They’re only a problem when they’re missing."
  • "Try and retrofit any of them to your project and you’re going to have a bad time."

Even in React there is a fair amount of judgement involved when deciding to use features like React.memo, useMemo or to "just let things go".
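To illustrate that judgement call, a hedged sketch with a hypothetical component - memoization pays off only when the computation is genuinely expensive relative to how often the inputs change:

```jsx
import { useMemo } from 'react';

function ProductList({ products, filter }) {
  // Worth memoizing if `products` is large and the component re-renders often...
  const visible = useMemo(
    () => products.filter((p) => p.name.includes(filter)),
    [products, filter]
  );

  // ...but for a ten-item array, the plain version is simpler and effectively free:
  // const visible = products.filter((p) => p.name.includes(filter));

  return (
    <ul>
      {visible.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}
```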

A front end development performance mindset isn't about micro-optimizing every piece of JavaScript but caring about end user performance from the beginning of the first request up to the point when the browser page tab finishes closing.


Henry Petroski:

The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.

cubiclesocial

If you run Javascript anywhere, then you already don't care about system performance. Neither your own nor anyone else's.

You probably care more about whether or not the code runs the same in all major web browsers on all OSes. And if you use NodeJS, then you probably care that there is one language that you can use everywhere: You've got a hammer and everything looks like a nail.

If you want to measure performance, then you need to measure clock cycles. A clock cycle is the amount of time it takes to execute a common instruction on the CPU. Most modern CPUs are clocked at around 3-4GHz, or roughly 3-4 billion instructions per second. Clock cycle information is not available to Javascript nor to any current web browser tools.

Measuring how much wall clock time an instruction takes to execute in a loop in Javascript is not actually all that helpful, because many CPUs have pipelining and predictive branching, allowing them to intelligently determine what the next instruction is likely to be and precalculate the result. If the next instruction is actually what was predicted, then it has already obtained the answer and can skip ahead (if not, the pipeline will probably stall).

So doing something in a loop is measuring how long a loop is going to take. It might give you a rough idea of any given instruction, but clock cycles are a more definitive and accurate measurement. Without line-level clock cycle counts, you'll have a very difficult time measuring performance in Javascript.
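Setting CPU counters aside, even plain wall-clock loop timing in JavaScript has an extra wrinkle: the JIT itself. A small sketch - the first runs include interpreter and optimization warm-up, so repeated runs of the same function typically get faster:

```js
function sumTo(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

for (let run = 1; run <= 5; run++) {
  const start = performance.now();
  sumTo(10_000_000);
  console.log(`run ${run}: ${(performance.now() - start).toFixed(2)}ms`);
  // Timings typically drop after the first run or two, once the JIT kicks in.
}
```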

You should write some C or C++ code sometime. You'll suddenly see Javascript as the very sluggish, bloated, extremely abstracted away from the metal language that it actually is. Of course, C++ devs also tend to abstract away from the metal. Javascript and DOM are useful for abstracting and normalizing the GUI but it's not fast by any stretch of the imagination. Nor will it ever be.

peerreynders • Edited

If you run Javascript anywhere, then you already don't care about system performance. Neither your own nor anyone else's.

That attitude simply ignores the realities on the web. The browser already has a runtime for JavaScript so you don't have to ship one.

WebAssembly for Web Developers (Google I/O ’19):

Both JavaScript and WebAssembly have the same peak performance. They are equally fast. But it is much easier to stay on the fast path with WebAssembly than it is with JavaScript. Or the other way around. It is way too easy sometimes to unknowingly and unintentionally end up in a slow path in your JavaScript engine than it is in the WebAssembly engine.

Also Replacing a hot path in your app's JavaScript with WebAssembly.

Using the language du jour on the browser will typically require the download of a massive runtime unless something like C/C++/Rust is used and those tend to inflate development time. So using WebAssembly has to be seen as an optimization once things stabilize.
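For the record, wiring a compiled hot path in is fairly small on the JavaScript side. A hedged sketch, assuming a hypothetical hot-path.wasm (compiled from C/C++/Rust) that exports a convolve function, in a module context where top-level await is available:

```js
// Everything else in the app stays plain JavaScript; only the hot path moves.
const { instance } = await WebAssembly.instantiateStreaming(fetch('/hot-path.wasm'));

// Call the compiled hot path through its wasm export:
const result = instance.exports.convolve(1024);
console.log('hot path result:', result);
```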

In this case performance is about using the available resources to the best effect - JavaScript on the browser is (for some time to come only) part of the whole picture.

Adam Nathaniel Davis

This is a great point. And I wouldn't disagree with you on any level. I will only point out what may not have been clear in my original post: When you're writing JavaScript for the browser, the preeminent measure of "performance" is time. Now of course, that can vary wildly on a machine-by-machine (or browser-by-browser) basis. But the generic end-user's perception of time is what typically dictates whether my code is seen as "performant".

Of course JavaScript is "sluggish". In fact, all interpreted languages are. Because they are, as you've pointed out, "farther from the metal". But when I'm writing web-based apps, in JavaScript, the "metric" by which my code is typically judged is: Does the end-user actually perceive any type of delay? If the page/app seems to load/function in a near-instant fashion, I'm not going to waste time arguing with someone over the CPU benchmark performance of one function versus another.

But again, I totally agree with your points here.

Jay Jeckel

Interesting article and a lot of good points, but I disagree greatly with one aspect:

"But you also need to be realistic about the fact that your code is almost always running in an environment where there are tons of unused resources."

My "unused resources" aren't an excuse for web devs to write less performant and efficient code. You should be no less concerned about using my client resources that cost me money than you are concerned about using your server resources that cost you money.

Trent Haynes

I find it interesting that you did not mention what is probably the biggest predictor of performance in the browser - the size of the download. In general, the less code you send to the browser, the better.

Adam Nathaniel Davis

With regard to initial page load time, yes. After the initial page load, the size of the download has almost nothing to do with performance.

Trent Haynes

That's mostly true when your audience has relatively recent hardware and a good connection to the internet - something about 4 billion people don't have.

 
Adam Nathaniel Davis • Edited

No. I'm sorry. But it doesn't matter whether you have gigabit fiber or a 56k dial-up modem. Once the code has been downloaded, the amount of code makes no difference to performance. I'm not saying - in any way - that you shouldn't care at all about bundle size. But if you're inferring that more code leads to lower performance once the package has been downloaded, then that's simply not accurate.

Trent Haynes • Edited

I'm referring to the fact that a lot of people run on old hardware and/or out of date browsers and more code does affect performance for them.

 
Adam Nathaniel Davis

I guess you're referring to the performance of the code in memory. Because more code can take up more space in RAM. But even on a relatively-ancient system, the "performance" hit needed to process 10,000 lines of JS code versus 1,000 lines of JS code is extremely minimal. If you think that you can improve the runtime performance of your code, on anyone's system, merely by writing fewer lines of code, then your target audience probably can't effectively run ANY React / Angular / jQuery / whatever app.

Trent Haynes

It sounds like you've never encountered an app that will not run well on your old system, but runs fine on your new system.

 
Adam Nathaniel Davis

When an app runs poorly on your old system, but it runs fine on your new system, it's not based on the number of lines of code.

Trent Haynes

I didn't mention lines of code. There is a correlation between the size of the app and the complexity of its function and the demand it places on its running environment.

The size isn't the actual cause (usually). It's just indicative of the likelihood that the app will be more demanding of its execution environment.

 
Adam Nathaniel Davis

I'm sorry, but this is a bit disingenuous. You say that you didn't mention lines of code. But your initial comment was about the size of the download. What do you think makes the download large???

Trent Haynes • Edited

I'm sorry, but your original use of the phrase was disingenuous. It comes across as an attempt to belittle the point. Lines of code is purely a function of formatting (unless you can point me to an accepted standard of how to measure it).

I get that you don't think the number of bytes you send to the browser matters. You've made that perfectly clear. I understand that point of view. The company I work for takes the exact same stance. It's still the wrong stance. Size is not usually the actual cause, but it is certainly a reasonable proxy for judging potential performance requirements. And that is exactly what I pointed out.

The more code you send to the client, the more potential for execution errors, logic errors, or errors indirectly related to the code itself. More code usually means more complexity, which is another vector for more demand placed on the client system.

The code you didn't have to write will never cause a problem. I'm a firm believer that the best code is no code. If you've never heard the phrase before, you might look it up. The idea has been around for quite a while.

 
[Comment deleted]
 
Trent Haynes

Wow. Your sarcasm skills are epic. I hope you can teach me as well.

 
Adam Nathaniel Davis

I could. But you'd have to download the instructions. And I'm sure that your bandwidth/device couldn't handle the bundle size.

Trent Haynes

Now that you've given up refuting my point, you're going to stick to ad hominem attacks instead. I'll keep that in mind.

 
Adam Nathaniel Davis

It sounds really impressive to use Latin words like "ad hominem" - until you use them in a way that doesn't make any sense in the current context.

Trent Haynes

Definition of ad hominem (Entry 1 of 2)
1: appealing to feelings or prejudices rather than intellect

It's appropriate.

Ingo Steinke

Funniest aspect of "performance" is how many different meanings the word can have to different people. I used to be responsible for "web performance optimization" in a company and co-headed a meetup series about the same topic (formerly known as "meet for speed"), and I still care a lot about speeding up websites and avoiding unnecessary loading times, but I also care about a lot of other aspects like usability, accessibility and environmental energy optimization.

So far, so good, but to a business person, "performance optimization" might mean adding more videos to help them sell more products in their online shop, or it might mean making their developer teams more efficient to increase their programming performance.

Care... But Not That Much

That's probably the most important thing that we can learn from business people. Don't strive for perfection! Don't overengineer! Don't micro-optimize! Care for readability, maintenance and a pragmatic effort-to-outcome ratio!

Nitzan Hen

Great article!

I feel that generally, many developers approach web development with the mindset of developing algorithms, and it's critical to understand that coding in different environments and/or for different purposes means that your top priorities as a developer should also be different.
It's similar, in a sense, to different types of writing - when writing a technical document, for example, you put your focus on completely different qualities than when writing a novel or a poem, even though they're both essentially writing!

As you've said, and it can't be stressed enough - in the case of web development the big-O efficiency of your code is usually a secondary priority. It's important to keep an eye out for it, but unless we're talking about really bad code, it typically makes no noticeable difference. Code brevity, maintainability and other similar qualities have a far greater impact on your product.

However, there is a nuance I'd like to shed light on - big-O time (and memory) efficiency are the two most popular aspects of efficiency, but they're by no means the only ones. We web developers can afford to pay less attention to those, but other types of inefficiency can make a huge difference: concurrency & async operations, for example, are cardinal to virtually any modern app, and bad performance in that aspect could lead to terrible results. A similar point goes for network operations, bundle sizes, and more.
Once again, in most cases writing clear and maintainable code is a top priority, and can be achieved without sacrificing any of those, but it's important to keep in mind that inefficiency in those aspects of your logic could significantly harm the overall result.
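To make the async point concrete, a small sketch with hypothetical API helpers - both versions do the same work and have the same big-O, but one pays three network round trips back to back:

```js
// Hypothetical stand-ins for real API calls, each taking ~300ms.
const fakeApi = (data, ms = 300) =>
  new Promise((resolve) => setTimeout(() => resolve(data), ms));
const fetchUser = (id) => fakeApi({ id, name: 'Ada' });
const fetchOrders = (id) => fakeApi([`order-1-of-${id}`]);
const fetchPrefs = (id) => fakeApi({ theme: 'dark' });

async function loadSequential(id) {
  const user = await fetchUser(id);     // ~300ms
  const orders = await fetchOrders(id); // +300ms
  const prefs = await fetchPrefs(id);   // +300ms → ~900ms total
  return { user, orders, prefs };
}

async function loadParallel(id) {
  const [user, orders, prefs] = await Promise.all([
    fetchUser(id),
    fetchOrders(id),
    fetchPrefs(id),
  ]); // all three in flight at once → ~300ms total
  return { user, orders, prefs };
}
```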

And again - great article, well done!

Adam Nathaniel Davis

TOTALLY agree. One of my biggest pet peeves is when someone stresses over tiny details of algorithmic "performance", but when you open the inspector, you can see that their app is making three identical GET calls to the exact same endpoint to retrieve the exact same data.
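One hedged sketch of a fix for exactly that - a hypothetical helper that caches the in-flight promise, so concurrent callers share one request:

```js
const inFlight = new Map();

function fetchJsonOnce(url) {
  if (!inFlight.has(url)) {
    const promise = fetch(url)
      .then((res) => res.json())
      .finally(() => inFlight.delete(url)); // allow a fresh fetch later
    inFlight.set(url, promise);
  }
  return inFlight.get(url);
}

// Three components asking at once still produce a single GET:
fetchJsonOnce('/api/user');
fetchJsonOnce('/api/user');
fetchJsonOnce('/api/user');
```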

Nitzan Hen

Exactly 😂

ecyrbe • Edited

Hi Adam

Nice article again. I'll summarize for the lazy:

  • Do not optimize early, if at all, when there's no issue
  • Focus on maintainability over optimisation

I'll add: if you start having front-end timing issues, you should measure, or add tooling to measure easily (automate Lighthouse reporting, activate flamegraphs), and optimize only the problematic parts of your reports.
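For the automation part, one possible shape using the lighthouse and chrome-launcher npm packages (this mirrors their documented programmatic usage, but check the current docs before relying on it):

```js
import fs from 'node:fs';
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Launch a headless Chrome, run the performance audit, save the report.
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const { lhr, report } = await lighthouse('https://example.com', {
  port: chrome.port,
  output: 'html',
  onlyCategories: ['performance'],
});

fs.writeFileSync('lighthouse-report.html', report);
console.log('Performance score:', lhr.categories.performance.score * 100);
await chrome.kill();
```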

Nowadays, the biggest perf issues I face are not related to algorithms, but to a front-end monolith being so big that webpack can take something like 20 minutes to package all the bundles (I'm working on a really big app). Vite is not an option, as we have too much legacy code that Vite can't even compile. Optimizing this kind of front-end issue is much harder.
So nowadays I'm doing micro front-ends to slim the monster down. Module federation is a really nice piece of technology.
I wrote a small article about it yesterday if you are interested.

Adam Nathaniel Davis

Agreed. And module federation is indeed a wonderful feature.

Alex Lohr

Premature optimization is the root of all evil, they say. I think not caring about performance means we're more interested in pushing MVPs onto the customer than in actually solving problems.

One of the resulting problems is that a lack of performance will needlessly burn CPU cycles and waste energy, while also ensuring that whatever system the code runs on will need to be replaced sooner.

So keep in the back of your mind that you don't want to kill the planet with bad front end performance. Thanks for coding considerately.

Andrea Giammarchi • Edited

Imho, in every part of the stack you need to care about performance when performance is your bottleneck. Yet knowing better algorithms, libraries or solutions (assuming similar DX) to obtain the same result is a plus that removes the "performance isn't great" concern from the equation and reduces the long-term need for refactoring and/or maintenance.

In a few words: if it takes the same time to implement the same solution, but because you care about performance it's faster by design, you'll be a better developer in the long term than one that "didn't care about performance because FE, yolo!".

There are also a lot of people that confuse FE with business logic, PWA or SPA or MTA needs in terms of architecture, and so on and so forth ... saying FE shouldn't care about performance is as short-sighted as one could be in this industry, if by FE you consider how much responsibility JS has these days to make literally anything work on the Web.

Mike Talbot ⭐

A few thoughts on performance: one critical performance indicator these days is the amount of battery that functionality uses, and while it's often hard to determine this for a website, hybrid apps and heavily used web apps that burn through a user's battery have a directly negative impact on that user's day. Not that this is an argument for micro optimisation, but I suggest it should be a consideration around critical functionality.

Imagine a web app that has some type-ahead functionality: too-frequent use of the device's radio to contact the server for suggestions will have a negative impact if this is a commonly used function. Poorly written search functionality in the browser could make the search experience poor and burn battery. Over-eager caching of entire data sets to allow client-side searching could negatively impact both energy usage and startup performance. This simple example shows that we should give proper consideration to the user's objectives and the architecture of solutions where there is some chance that solution will be a core part of the user's journey.
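A minimal sketch of the type-ahead case (the endpoint, delay and UI hook are all made up) - debouncing keeps the radio idle until the user pauses typing:

```js
function debounce(fn, ms) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// One request after a 250ms pause, instead of one request per keystroke.
const suggest = debounce(async (query) => {
  const res = await fetch(`/api/suggest?q=${encodeURIComponent(query)}`);
  renderSuggestions(await res.json()); // hypothetical UI hook
}, 250);

document.querySelector('#search').addEventListener('input', (e) => {
  suggest(e.target.value);
});
```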

The data structures we use frequently dictate performance too: choosing when to trade memory for computation (e.g. building O(1) lookup tables), or utilising our own or 3rd party APIs to request data in the right shape to reduce data transfer, round trips or client-side processing, is also worth considering at the solution architecture stage.

I am totally with you on the pragmatism side. I'd use find over a fancy lookup table for arrays expected to be small too, because there is another cost here: the cost to our business or employer in terms of the amount of time it takes to build and deliver solutions to our customers. This is another practical optimisation, because if we run out of money before the solution is released (perhaps due to one of those 3-month-long linting wars?) we have also failed at our task!

A great article, so good to be back reading your thoughts and the debate that they produce after the hiatus.

David G. Durand

Just a note. You probably know this, but it's still worth being a little precise about terms. In particular you used the term "exponentially" to mean "grows too fast," but exponential time is unbelievably worse than what you showed -- nesting can be bad, but that's actually just Quadratic time: O(n^2).

I hate the thought of people making others feel bad about this kind of knowledge. It's useful, and not that hard to understand if you don't go into every detail of the mathematics. And if someone twits you for using an O(n log n) algorithm, be happy -- they don't even understand it themselves.

In my quick cheat sheet below, O(n^2) is the place where all developers should be cautious, because it's easy to end up doing something quadratic by accident.

I find it easiest to think about it in terms of what happens when n changes:

  • O(1) doesn't change, as the problem grows. Never worry about this time and your problem size.
  • O(log n) is not quite that good, but still pretty great: doubling the size of the problem adds one more unit of time to the work. Never worry about this.
  • For linear time, O(n), doubling the size of the problem doubles the size of the work. You only have to worry about this for very large problems.
  • For O(n log n), there's no great mnemonic special case, but you don't need to worry about O(n log n) algorithms either: doubling the problem from size n to size 2n takes a bit more than twice the time the size-n problem took.
  • For quadratic time, O(n^2), if you double the size of a problem, you have to do 4 times the work. For cubic, O(n^3), you do 8 times the work, etc. This is the most common way for complexity errors to cause intolerable performance. Nested loops may be fine as long as each limit is independent, and some of them have small or fixed bounds (like looping over 3 coordinates in graphics). Even worse, the nested loop may be invisible. It's worth being careful any time you loop calling a function that builds a data structure: if the "add an item" function takes time linear in the size of the stored data, the whole thing is a quadratic loop, and will go bad fast as the problem grows (see the sketch after this list).
  • For exponential time, adding 1 to the size of the problem doubles the size of the work. (Of course, exponentials like 3^n or 10^n are the same for theorists, but practically it makes a big difference whether the n + 1 case is double or 10 times the effort.) In any case you're now in the realm of problems where answers for problem sizes in the double or triple digits may be effectively unknowable.
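A small sketch of the accidental-quadratic trap from the list above - `includes` scans the whole array, so the hidden inner loop makes the first version O(n^2) even though only one loop is visible:

```js
function dedupeQuadratic(values) {
  const out = [];
  for (const v of values) {
    if (!out.includes(v)) out.push(v); // hidden O(n) scan on every iteration
  }
  return out;
}

// Same result in O(n): a Set does each membership check in constant time.
function dedupeLinear(values) {
  return [...new Set(values)];
}

const data = Array.from({ length: 50_000 }, () => Math.floor(Math.random() * 1_000_000));
console.time('quadratic'); dedupeQuadratic(data); console.timeEnd('quadratic');
console.time('linear');    dedupeLinear(data);    console.timeEnd('linear');
```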
Adam Nathaniel Davis

First, thank you for the wonderful explanation.

Second, you're being gracious in implying that I already know this. I make no secret of the fact that Big-O is primarily taught in universities - and I'm a self-taught programmer. In fact, I've only recently begun really paying any attention to it at all.

Finally, I have to say that you may have taught me something. (And thank you for that!) I understand that if you have a nested loop, where the inner loop is dependent upon a variable separate from the outer loop's, this is quadratic time. But I honestly thought that if both loops were dependent upon the same variable, this could reliably be referred to as exponential time. After all, if you have a nested loop, and both loops are looping through the same array, you are looping through it a number of times equal to the square of the length of the array. If you were to nest a third loop, where you're once again going through the same array, you would loop through everything a number of times equal to the cube of the length of the array. Of course, "squares" and "cubes" are... exponentials.

But this isn't meant to argue with you in any way. I'll need to read up on it more myself!

David G. Durand

As to the language, you are right that in n^2 and n^3, the numbers 2 and 3 are exponents. Your interpretation makes perfect sense, but doesn't match the way the words are used. Those functions are called polynomial functions because in a polynomial, all the exponents are constants (however big they may happen to be). The term exponential is reserved for when the exponent is the variable, which is a much faster-growing kettle of fish.

The factorial function n!, which counts how many ways you can arrange n objects in a row, grows even faster than exponentially:
There's one way to order 1 object: pick it, and you're done. For two, you get two ways to pick the first one, and then there's only one way to pick the remaining one. For 3 objects, there are 3 ways to pick the first, multiplied by the two ways to order the remaining two, and so on.

Here's the first 10 values:

1 2 6 24 120 720 5040 40320 362880 3628800

This terminology issue has been bugging me for two years, because of the pandemic. It makes me crazy to know that most people, and many of our leaders, have no idea of what an epidemiologist means when he talks about exponential growth. The ones who think they know are probably thinking about polynomial growth -- already scary, but still nothing on exponential. It's just really hard to understand how fast it is.

And you're right that I wasn't so clear about what has to be different and what has to be the same; your understanding is good. The point I wanted to make is that if there are hidden loops then you have two variables, but one is easy to forget about (often hidden inside some library routine). If the number of times through the loop is the same, you're surely in n^2-land... But you don't have to have n*n in such an obvious way to get O(n^2) growth:

for (let i = 0; i < n; i++) {
    for (let j = 0; j < i; j++) {
        i_do_a_lot_of_this(i, j);
    }
}

As long as the limits on the nested loops grow together, you can still get a quadratic time. Loops that build data up can do this really easily if the inner loop depends on the size of the data being built.

On the original Macs, the system's function to add a menu item to a menu was pretty obvious: it scanned down the menu to the end, then added the new item. When this met Microsoft Word, the result was that graphic designers and font-freaks would have to wait over a minute for Word to start up, because it was adding all the fonts in the system to the font menu one at a time. For 10 fonts that was 45 loops. For 100 fonts that's 4950 loops; for 120 it's over 7000. Of course, if you passed the whole list at one time, it just copied all the items, and even on those old machines you could have hundreds or thousands of items (if you wanted to).

So for a working programmer, it really helps to be aware of O(n^2) as the start of bad growth -- sometimes it is the best that can be done, but then you'll only want to use it if you know the sizes are really limited. Otherwise, it may well involve long runtimes and "big computing" to get the answer.

There are some quadratic implementations that do make it into interfaces. Breaking lines for a paragraph or an editor is often implemented in a quadratic way (you can avoid the slowdown, but it's real data-structure work), so you often see text editors become painfully slow if a very large file has only one line in it. Megabyte-long lines aren't part of the design point of code editors, and the unreasonable slowness with lines of thousands of characters is not worth fixing.

Adam Nathaniel Davis

This is great info and I sincerely appreciate you taking the time to spell this out so clearly. Always learning... Thank you!

Supportic

Due to JavaScript's JIT compiler it's hard to optimize with a predictable increase in performance. You can't know when the compiler flags a function as hot. But you can help the compiler, e.g. by not mixing datatypes too often, or by declaring variables outside of a loop instead of letting allocation happen inside. Still, the performance gains are very marginal.
I would put more effort into DOM manipulation if you use pure JS without React and co.
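A loose illustration of the "don't mix datatypes" hint - engines like V8 specialize functions on the object shapes and types they observe, and feeding one consistent shape keeps them on the optimized path (the payoff is real but, as noted, usually marginal):

```js
function lengthOf(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// Monomorphic: every call sees one shape { x: number, y: number }.
for (let i = 0; i < 1_000_000; i++) {
  lengthOf({ x: i, y: i + 1 });
}

// Mixing shapes and types forces the engine toward slower generic lookups:
lengthOf({ y: 2, x: 1 });              // same keys, different property order
lengthOf({ x: '3', y: '4', z: true }); // mixed types plus an extra property
```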

peerreynders • Edited

Just some side points.

Once the code has been downloaded, the amount of code makes no difference to performance.

That rationale makes sense from an SPA perspective.

But Marko, Astro and Qwik are pursuing partial or progressive hydration to enable next-generation MPA‡; if you can load SSR HTML and go interactive much quicker than an SPA there is no need for client-side routing. So keeping the "critical JavaScript" to an absolute minimum, lazy loading the rest (which may never be needed) is what next-gen MPAs are based on.

‡ at this point Remix is focusing on progressive enhancement. For the time being they are not convinced about the benefits of islands/partial hydration. But ultimately that may simply be a limitation of the React mental model as each island would have to be a separate component tree (and any inter-island communication would have to happen outside of React).

then your target audience probably can't effectively run ANY React / Angular / jQuery / whatever app.

Actually, in 2019 Rich Harris mentioned that Svelte was used for the Brazilian Stone point-of-sale device because React, Vue, etc. simply imposed too much overhead on the hardware. This is just one example of what is often identified as a "resource-constrained device".

Similarly in the mobile space there are cases where the CPU cores are getting smaller ("more power efficient"; though more numerous) which means single thread performance is decreasing (and today's predominant web technologies rely on single thread performance) - this leads to a situation where a $400 iPhone SE has better "web performance" than a $1400 Samsung Galaxy S20 Ultra—because the iPhone has two (of six) cores that are more performant than any one of Samsung's 8 cores.

Finally, performance improvements over time for mobile devices are strongly coupled to the device class.
[Figure: single-core benchmark scores over time, split by device class; source linked in the original comment.]

The graphic shows that budget devices aren't really improving that much—their feature is that they are inexpensive, not performant. Some projections stipulate that most of the future growth in the mobile market will be at the lower end, creating a situation where performance of the "median device" could be going down.

𒎏Wii 🏳️‍⚧️ • Edited

One of the most essential skills related to performance is knowing when it matters.

50ms of expensive loop in a button press that will lead the user to a different view in your application? Probably not a big deal.

50ms of expensive loop in a paint worklet that will be used on several elements in your website? That is probably a huge problem.

Once you've figured out where performance matters, there are many other things to worry about related to identifying performance problems and fixing them. But all of that is wasted time when we've failed to realise that the code we're working on doesn't need to be performant in the first place.
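One hedged way of letting the page itself tell you where it matters is the Long Tasks API (available in Chromium-based browsers): it reports main-thread work over 50ms, roughly the point where interactions start to feel sluggish:

```js
const observer = new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    // Each entry is a stretch of main-thread work that blocked for >50ms.
    console.warn(`Long task: ${task.duration.toFixed(0)}ms`, task.attribution);
  }
});
observer.observe({ entryTypes: ['longtask'] });
```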


With that being said though: Front-end developers should probably care more about performance than back-end developers. These days adding a bit of processing power to your distributed back-end isn't all that expensive anymore, whereas losing paying users to a bad UX due to slow code quickly adds up.

What's more, processing power isn't distributed evenly throughout society, so there is serious risk of unintentionally preventing users who can't afford good-enough hardware from using a service.

And last but not least, wasted processing power does not care whether it happens in the browser or in the server. If anything, it might be more likely that big hosting companies are using green energy to improve their image, making the front end by far the worse place of the two to waste processing power.

Ashley Sheridan

There's an area of front-end performance that hasn't even been touched on here, and it's probably the aspect that we developers are least likely to ever encounter ourselves.

The main problem with the front end is that we don't know what devices or browsers are being used, but it's almost a guarantee that those devices are considerably underpowered compared to what we're using.

For example, consider your daily work machine. It's capable of running IDEs, virtual machines, servers, etc., all without complaint. The average user's PC probably has a tenth of that power. What might be only a 10ms difference on your machine will be much more noticeable on theirs.

Also, mobile phones are often going to be very old and probably out of date. Not every phone is old; some just weren't an amazing spec to begin with. Considering that most users are browsing via phones and not laptops or desktops, it's important to consider performance there too.

I'm not saying that we should look to optimise to the same degree as code running on old mainframes or embedded chips, but we should at least be mindful of performance. Whilst our code should always be as readable as possible, we should avoid the obvious issues (like the nested loops you've highlighted).

Dylan Lacey

I love the nuance here and agree with everything you've said with one exception: There being tonnes of unused resources.

Your code might be lightweight on its own, but most people have multiple tabs. I have enough tabs that I'm not going to count them because it'll give me The Anxiety. Also, 3rd party JS code from ad servers and management tools and whatever-else can be extremely weighty. If I visit any single one of the Gawker sites with an ad-blocker off, for example, my fans spin up like my Mac wishes it was a hoverboat.

This is especially the case if you load un-versioned 3rd party resources directly from the source; Who knows what performance snafus their team might have in any given version?

I can't agree more that it's not worth obsessing over... But the more performance leeway you leave yourself, the less impact external forces have on your product.

John Harding • Edited

This is a great post - and I should read the replies more closely.

I generally agree with all that you say.

A couple of initial thoughts:
1) There's not much point focusing on algorithmic performance and totally ignoring network performance. 90%* of the time it's the network slowing you down (also apply that to a hierarchy of "slow things" and the algorithms are normally near the bottom of the list).
2) When you do look at algorithmic performance you really have to have a good idea what the size of the average and peak data sets will be. 87.3%** of the time your boss's idea of the numbers (especially at startups) is completely overstated.

* a completely made up number
** also completely made up, but stated with far more impressive accuracy
Adam Nathaniel Davis

The biggest "fault" I had in my article is that I apparently didn't make it clear that I was talking primarily about algorithmic performance. But yes, I totally agree with everything you've written.

optimisedu

Hi,

I think that you have presented this really well, it is important to talk about performance. It has so many meanings.

You have neatly raised awareness of Big O: learn it as a rule, so that at the least you are aware when you are breaking it.

Performance does translate directly to bandwidth and indirectly to search best practice and client retention as a result.

Bandwidth has a tangible price. Unreadable code has a performance price. I did a lot of research in SEO back when Google was less coy about their algorithm. I often say that SEO is the payoff from correctly balancing performance with accessibility.

Great article, I am curious what you think.

Adam Nathaniel Davis

I def agree with you about bandwidth. However, I'll add one thought to that (which may eventually be its own article): I've seen so many frontend devs wring their hands over stripping a few K out of their bundle size - only to deploy it to a content site or e-commerce site that automatically bloats the page with MEGABYTES of additional ad/tracking software. When this happens, I find the debate over bandwidth to be a little silly.

Shrihari Mohan

Yes, I too don't care about the micro-optimizations unless the data set is large and the Big O goes through the roof.

These are things I make sure I have done on the frontend:

  • Image optimization - those fewer KBs really matter. (This is a must if the website contains a lot of images. I know this is not JS-related, but since we're talking about frontend, it may help someone who is new to frontend development.)

  • Lazy load pages - Angular / React. (If you are using Next you don't have to care about any optimizations.)
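For the lazy-loading bullet, a minimal React sketch (the page path is hypothetical) - the split-out page's code is only downloaded when it's actually rendered:

```jsx
import { lazy, Suspense } from 'react';

const Reports = lazy(() => import('./pages/Reports')); // emitted as its own chunk

export default function App() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <Reports />
    </Suspense>
  );
}
```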

Mike Reynolds

What a world it would be if our sort algorithms ran in O(log n). I believe you meant O(n log n), as JS's built-in sort is a quicksort.

Adam Nathaniel Davis

Thank you for pointing this out. I've fixed it now.

ChrisMuga

The part about performance is actually hard to read.
It's mad to assume that everybody is using the same powerful device you're using.

The browser itself is a mess. Yet here we are, talking about how performance is not much of a priority. Sad.

Some comments have been hidden by the post's author.