I was recently talking to an architect at Amazon and he made a very interesting comment to me. We were talking about the complexity of a given algorithm (discussed in Big-O notation), and before we even got too far into the explanation, he said:
I mean, it's not like we need to worry too much about this. After all, we're frontend devs!
I found this admission to be extremely refreshing, and it was entirely unexpected coming from someone in the Ivory Tower that is Amazon. It's something that I've always known. But it was still really nice to hear it coming from someone working for the likes of a FAANG company.
You see, performance is one of those subjects that programmers love to obsess about. They use it as a Badge of Honor. They see that you've used JavaScript's native .sort() method, then they turn up their nose and say something like, "Well, you know... That uses O(n log(n)) complexity." Then they walk away with a smug smirk on their face, as though they've banished your code to the dustbin of Failed Algorithms.
Smart Clients vs. Dumb Terminals
The terms "smart client" and "dumb terminal" have fallen somewhat by the wayside in recent decades. But they're still valid definitions, even in our modern computing environments.
Mainframe Computing
Way back in the Dark Ages, nearly all computing was done on massive computers (e.g., mainframes). And you interacted with those computers by using a "terminal". Those terminals were often called "dumb terminals" because the terminal itself had almost no computing power of its own. It only served as a way for you to send commands to the mainframe and then view whatever results were returned from... the mainframe. That's why it was called "dumb". Because the terminal itself couldn't really do much of anything on its own. It only served as a portal that gave you access to the mainframe.
For those who wrote mainframe code, they had to worry greatly about the efficiency of their algorithms. Because even the mainframe had comparatively little computing power (by today's standards). More importantly, the mainframe's resources were shared by anyone with access to one of the dumb terminals. So if 100 people, sitting at 100 dumb terminals, all sent resource-intensive commands at the same time, it was pretty easy to crash the mainframe. (This is also why the allocation of terminals was very strict, and even those who had access to mainframe terminals often had to reserve time on them.)
PC Computing
With the PC explosion in the 80s, suddenly you had a lot of people with a lot of computing power (relatively speaking) sitting on their desktop. And most of the time, that computing power was underutilized. Thus spawned the age of "smart clients".
In a smart client model, every effort is made to allow the client to do its own computing. It only communicates back to the server when existing data must be retrieved from the source, or when new/updated data must be sent back to that source. This offloaded a great deal of work from the mainframe to the clients, and allowed for the creation of much more robust applications.
A Return To Mainframe Computing (Sorta...)
But when the web came around, it knocked many applications back into a server/terminal kinda relationship. That's because those apps appeared to be running in the browser, but the simple fact is that early browser technology was incapable of really doing much on its own. Early browsers were quite analogous to dumb terminals. They could see data that was sent from the server (in the form of HTML/CSS). But if they wanted to interact with that data in any meaningful way, they needed to constantly send their commands back to the server.
This also meant that early web developers needed to be hyper-vigilant about efficiency. Because even a seemingly-innocuous snippet of code could drag your server to its knees if your site suddenly went viral and that code was being run by hundreds (or thousands) of web surfers concurrently.
This could be somewhat alleviated by deploying more robust backend technologies. For example, you could deploy a web farm that shared the load of requests for a single site. Or you could write your code in a compiled language (like Java or C#), which helped (somewhat) because compiled code typically runs faster than interpreted code. But you were still bound by the limits that came from having all of your public users hitting a finite set of server/computing resources.
The Browser AS Smart Client
I'm not going to delve into the many arguments for-or-against Chrome. But one of its greatest contributions to web development is that it was one of the first browsers that was continually optimized specifically for JavaScript performance. When this optimization was combined with powerful new frameworks like jQuery (then Angular, then React, then...), it fostered the rise of the frontend developer.
This didn't just give us new capabilities for frontend functionality, it also meant that we could start thinking, again, in terms of the desktop (browser) being a smart client. In other words, we didn't necessarily have to stay up at night wondering if that one aberrant line of code was going to crash the server. At worst, it might crash someone's browser. (And don't get me wrong, writing code that crashes browsers is still a very bad thing to do. But it's farrrrr less likely to occur when the desktop/browser typically has all those unused CPU cycles just waiting to be harnessed.)
So when you're writing, say, The Next Great React App, how much, exactly, do you even need to care about performance?? After all, the bulk of your app will be running in someone's browser. And even if that browser is running on a mobile device, it probably has loads of unleveraged processing power available for you to use. So how much do you need to be concerned about the nitty-gritty details of your code's performance? IMHO, the answer is simple - yet nuanced.
Care... But Not That Much
Years ago, I was listening to a keynote address from the CEO of a public company. Public companies must always (understandably) have one eye trained on the stock market. During his talk, he posed the question: How much do I care about our company's stock price? And his answer was that he cared... but not that much. In other words, he was always aware of the stock price. And of course, he was cognizant of the things his company could do (or avoid doing) that would potentially influence their stock price. But he was adamant that he could not make every internal corporate decision based upon one simple factor - whether or not it would juice the stock price. He had to care about the stock price, because a tanking stock price can cause all sorts of problems for a public company. But if he allowed himself to focus, with tunnel vision, on that stock price, he could end up making decisions that bump the price by a few pennies - but end up hurting the company in the long run.
Frontend app development is very similar in my eyes. You should always be aware of your code's performance. You certainly don't want to write code that will cause your app to run noticeably badly. But you also don't want to spend half of every sprint trying to micro-optimize every minute detail of your code.
If this all sounds terribly abstract, I'll try to give you some guidance on when you need to care about application performance - and when you shouldn't allow it to bog down your development.
Developer Trials
The first thing you need to keep in mind is that your code will (hopefully) be reviewed by other devs. This happens when you submit new code, or even when someone comes by months later and looks at what you've written. And many devs LOVE to nitpick your code for performance.
You can't avoid these "trials". They happen all the time. The key is not to get sucked into theoretical debates about the benchmark performance of a for loop versus Array.prototype.forEach(). Instead, you should try, whenever possible, to steer the conversation back into the realm of reality.
Benchmarking Based Upon Reality
What do I mean by "reality"? Well, first of all, we now have many tools that allow us to benchmark our apps in the browser. So if someone can point out that I can shave a few seconds of load time off my app by making one-or-two minor changes, I'm all ears. But if their proposed optimization only "saves" me a few microseconds, I'm probably gonna ignore their suggestions.
You should also be cognizant of the fact that a language's built-in functions will almost always outperform any custom code. So if someone claims that their bit of custom code is more performant than, say, Array.prototype.find(), I'm immediately (and incredibly) skeptical. But if they can show me how I can achieve the desired result without even using Array.prototype.find() at all, I'm happy to hear the suggestion.
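To make that concrete, here's a hedged sketch (the data and variable names are invented for illustration, not taken from any real app): instead of trying to out-optimize the native .find(), you can often avoid calling it repeatedly in the first place by building a Map once.

```javascript
// Hypothetical data: orders that each reference a customer by id.
const customers = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Grace' },
];
const orders = [
  { customerId: 2, total: 40 },
  { customerId: 1, total: 15 },
];

// Calling .find() inside the loop re-scans `customers` for every order.
const slow = orders.map(o => ({
  ...o,
  customer: customers.find(c => c.id === o.customerId),
}));

// Building a Map once turns each lookup into constant time.
const byId = new Map(customers.map(c => [c.id, c]));
const fast = orders.map(o => ({ ...o, customer: byId.get(o.customerId) }));

console.log(fast[0].customer.name); // "Grace"
```

Note that this doesn't replace .find() with "faster custom .find() code". It restructures the problem so the repeated scan never happens.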
Your Code's Runtime Environment
"Reality" is also driven by one simple question: Where does the code RUN??? If the code-in-question runs in, say, Node (meaning that it runs on the server), performance tweaks take on a heightened sense of urgency, because that code is shared and is being hit by everyone who uses the app. But if the code runs in the browser, you're not a crappy dev just because the tweak is not forefront in your mind.
Sometimes, the code we're examining isn't even running in an app at all. This happens whenever we decide to do purely academic exercises that are meant to gauge our overall awareness of performance metrics. Code like this may be running in a JSPerf panel, or in a demo app written on StackBlitz. In those scenarios, people are much more likely to be focused on the fine details of performance, simply because that's the whole point of the exercise. As you might imagine, these types of discussions tend to crop up most frequently during... job interviews. So it's dangerous to be downright flippant about performance when the audience really cares about almost nothing but the performance.
The "Weight" Of Data Types
"Reality" should also encompass a thorough understanding of the types of data you're manipulating. For example, if you need to do a wholesale transformation on an array, it's perfectly acceptable to ask yourself: How BIG can this array reasonably become? Or... What TYPES of data can the array typically hold?
If you have an array that only holds integers, and we know that the array will never hold more than, say, a dozen values, then I really don't care much about the exact method(s) you've chosen to transform that data. You can use .reduce() nested inside a .find(), nested inside a .sort(), which is ultimately returned from a .map(). And you know what?? That code will run just fine, in any environment where you choose to run it. But if your array could hold any type of data (e.g., objects that contain nested arrays, that contain more objects, that contain functions), and if that data could conceivably be of nearly any size, then you need to think much more carefully about the deeply-nested logic you're using to transform it.
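As a hedged illustration of that first case (the numbers here are invented): with a dozen integers, even a gratuitously chained transformation is effectively instantaneous in any environment.

```javascript
const scores = [3, 11, 7, 5, 2, 9];

// A deliberately over-engineered chain: sort, then map, then reduce.
// On a handful of integers, none of this matters.
const total = [...scores]
  .sort((a, b) => a - b)            // O(n log n), but n is tiny
  .map(n => n * 2)                  // O(n)
  .reduce((sum, n) => sum + n, 0);  // O(n)

console.log(total); // 74
```

The point isn't that chaining is good style. It's that with small, well-understood data, the performance debate is moot.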
Big-O Notation
One particular sore point (for me) about performance is with Big-O Notation. If you earned a computer science degree, you probably had to become very familiar with Big-O. If you're self-taught (like me), you probably find it to be... onerous. Because it's abstract and it typically provides no value in your day-to-day coding tasks. But if you're trying to get through coding interviews with Big Tech companies, it'll probably come up at some point. So what do you do?
Well, if you're intent upon impressing those interviewers who are obsessed with Big-O Notation, then you may have little choice but to hunker down and force yourself to learn it. But there are some shortcuts you can take to simply make yourself familiar with the concepts.
First, understand the dead-simple basics:
O(1) is the most immediate time complexity you can have. If you simply set a variable, and then at some later point, you access the value in that same variable, this is O(1). It basically means that you have immediate access to the value stored in memory.

O(n) is a loop. n represents the number of times you need to traverse the loop. So if you're just creating a single loop, you are writing something of O(n) complexity. Also, if you have a loop nested inside another loop, and both loops are dependent upon the same variable, your algorithm will typically be O(n²).

Most of the "built-in" sorting mechanisms we use are of O(n log(n)) complexity. There are many different ways to do sorts. But typically, when you're using a language's "native" sort functions, you're employing O(n log(n)) complexity.
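Those first two complexity classes can be sketched in a few lines of code (the array here is invented for illustration):

```javascript
const values = [4, 8, 15, 16, 23, 42];

// O(1): direct access by index — the cost is the same no matter
// how large the array is.
const first = values[0];

// O(n): a single pass over the array.
let sum = 0;
for (const v of values) sum += v;

// O(n²): a nested pass — every element is visited once for
// every other element, so 6 values means 36 iterations.
let pairs = 0;
for (const a of values) {
  for (const b of values) pairs += 1;
}

console.log(first, sum, pairs); // 4 108 36
```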
You can go deeeeeep down a rabbit hole trying to master all of the "edge cases" in Big-O Notation. But if you understand these dead-simple concepts, you're already on your way to at least being able to hold your own in a Big-O conversation.
Second, you don't necessarily need to "know" Big-O Notation in order to understand the concepts. That's because Big-O is basically a shorthand way of explaining "how many hoops will my code need to jump through before it can finish its calculation."
For example:
const myBigHairyArray = [... thousandsUponThousandsOfValues];
const newArray = myBigHairyArray.map(item => {
// transformation logic here
});
This kinda logic is rarely problematic. Because even if myBigHairyArray is incredibly large, you're only looping through the values once. And modern browsers can loop through an array - even a large array - very fast.
But you should immediately start thinking about your approach if you're tempted to write something like this:
const myBigHairyArray = [... thousandsUponThousandsOfValues];
const newArray = myBigHairyArray.map(outerItem => {
return myBigHairyArray.map(innerItem => {
// do inner transformation logic
// comparing outerItem to innerItem
});
});
This is a nested loop. And to be clear, sometimes nested loops are absolutely necessary, but your time complexity grows quadratically when you choose this approach. In the example above, if myBigHairyArray contains "only" 1,000 values, the logic will need to iterate through them one million times (1,000 x 1,000).
Generally speaking, even if you haven't the faintest clue about even the simplest aspects of Big-O Notation, you should always strive to avoid nesting anything. Sure, sometimes it can't be avoided. But you should always be thinking very carefully about whether there's any way to avoid it.
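One common escape hatch, when the nesting exists only to check membership, is to trade a little memory for a Set. This is a sketch with invented data, not a universal fix:

```javascript
const listA = ['red', 'green', 'blue', 'teal'];
const listB = ['blue', 'teal', 'mauve'];

// Nested approach: O(n * m) — every item in listA scans all of listB.
const overlapSlow = listA.filter(color => listB.includes(color));

// Set approach: build the Set once (O(m)), then each .has() is O(1),
// so the whole operation is roughly O(n + m).
const setB = new Set(listB);
const overlapFast = listA.filter(color => setB.has(color));

console.log(overlapFast); // ['blue', 'teal']
```

On four-element arrays the difference is imperceptible. On two arrays of 10,000 items each, it's the difference between ~100 million checks and ~20,000.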
Hidden Loops
You should also be aware of the "gotchas" that can arise when using native functions. Yes, native functions are generally a "good" thing. But when you use a native function, it can be easy to forget that many of those functions are doing their magic with loops under the covers.
For example: imagine in the examples above that you are then utilizing .reduce(). There's nothing inherently "wrong" with using .reduce(). But .reduce() is also a loop. So if your code only appears to use one top-level loop, but you have a .reduce() happening inside every iteration of that loop, you are, in fact, writing logic with a nested loop.
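Here's a minimal sketch of that trap (the data is invented): the first version looks like a single loop, but it isn't.

```javascript
const amounts = [10, 20, 30, 40];

// Looks like one loop, but .reduce() runs inside EVERY .map()
// iteration, so this is really a nested loop: O(n²).
const sharesHidden = amounts.map(a =>
  a / amounts.reduce((sum, x) => sum + x, 0)
);

// Hoisting the .reduce() out runs it exactly once: back to O(n).
const total = amounts.reduce((sum, x) => sum + x, 0);
const shares = amounts.map(a => a / total);

console.log(shares); // [0.1, 0.2, 0.3, 0.4]
```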
Readability / Maintainability
The problem with performance discussions is that they often focus on micro-optimization at the expense of readability / maintainability. And I'm a firm believer that maintainability almost always trumps performance.
I was working for a large health insurance provider in town and I wrote a function that had to do some complex transformations of large data sets. When I finished the first pass of the code, it worked. But it was rather... obtuse. So before committing the code, I refactored it so that, during the interim steps, I was saving the data set into different temp variables. The purpose of this approach was to illustrate, to anyone reading the code, what had happened to the data at that point. In other words, I was writing self-documenting code. By assigning self-explanatory names to each of the temp variables, I was making it painfully clear to all future coders exactly what was happening after each step.
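The shape of that refactor might look something like this. To be clear, the data and names below are hypothetical stand-ins, not the actual insurance code:

```javascript
// Hypothetical claims data, invented for illustration.
const claims = [
  { id: 'a1', amount: 120, status: 'approved' },
  { id: 'b2', amount: 0,   status: 'approved' },
  { id: 'c3', amount: 75,  status: 'denied' },
];

// Each interim step lands in a descriptively named temp variable,
// so any future reader can see what the data looks like at that stage.
const approvedClaims = claims.filter(c => c.status === 'approved');
const claimsWithPositiveAmounts = approvedClaims.filter(c => c.amount > 0);
const totalApprovedAmount = claimsWithPositiveAmounts
  .reduce((sum, c) => sum + c.amount, 0);

console.log(totalApprovedAmount); // 120
```

Yes, those intermediate variables are "extra" allocations. They're also the difference between a pipeline you can debug at a glance and one you have to mentally re-execute.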
When I submitted the pull request, the dev manager (who, BTW, was a complete idiot) told me to yank out all the temp variables. His "logic" was that those temp variables each represented an unnecessary allocation of memory. And you know what?? He wasn't "wrong". But his approach was ignorant. Because the temp variables were going to make absolutely no discernible difference to the user, but they were going to make future maintenance on that code sooooo much easier. You may have already guessed that I didn't stick around that gig for too long.
If your micro-optimization actually makes the code more difficult for other coders to understand, it's almost always a poor choice.
What To Do?
I can confidently tell you that performance is something that you should be thinking about. Almost constantly. Even on frontend apps. But you also need to be realistic about the fact that your code is almost always running in an environment where there are tons of unused resources. You should also remember that the most "efficient" algorithm isn't always the "best" algorithm, especially if it looks like gobbledygook to all future coders.
Thinking about code performance is a valuable exercise. One that any serious programmer should probably have, almost always, in the back of their mind. It's incredibly healthy to continually challenge yourself (and others) about the relative performance of code. In doing so, you can vastly improve your own skills. But performance alone should never be the end-all/be-all of your work. And this is especially true if you're a "frontend developer".
Comments
Hi Adam
Nice article again. I'll summarize for the Lazy:
I'll add: if you start having front-end time issues, you should measure, or add tooling to measure easily (automate Lighthouse reporting, activate flamegraphs), and optimize only the problematic parts of your reports.
Nowadays, the biggest perf issues I face are not related to algorithms, but to a front-end monolith being so big that webpack can take something like 20 minutes to package all the bundles (I'm working on a really big app). Vite is not an option, as we have so much legacy that Vite can't even compile the project. Optimizing this kind of front-end issue is much harder.
So nowadays, I'm doing micro front-ends to slim the monster down. Module Federation is a really nice piece of technology.
I wrote a small article about it yesterday if you are interested.
Agreed. And module federation is indeed a wonderful feature.
Hi,
I think that you have presented this really well, it is important to talk about performance. It has so many meanings.
You have neatly raised awareness of big o as a rule to learn then at least you are aware you are breaking it.
Performance does translate directly to bandwidth and indirectly to search best practice and client retention as a result.
Bandwidth has a tangible price. Unreadable code has a performance price. I have done a lot of research in SEO, back when Google were less coy with their algorithm. I often say in a sentence that SEO is the payoff from correctly balancing performance with accessibility.
Great article, I am curious what you think.
I def agree with you about bandwidth. However, I'll add one thought to that (which may eventually be its own article): I've seen so many frontend devs wring their hands over stripping a few K out of their bundle size - only to deploy it to a content site or e-commerce site that automatically bloats the page with MEGABYTES of additional ad/tracking software. When this happens, I find the debate over bandwidth to be a little silly.
TL;DR: Often it's less about being performance conscious and more about being explicit about what tradeoffs are being made: for whose benefit and to whose detriment.
The article largely focuses on code produced by the frontend developer but the third party code selected for use on the client side (and thus affecting the client side architecture) imposes overhead even before a single line of code is written (The Cost of Javascript Frameworks, Benchmarking JavaScript Memory Usage).
So perhaps "caring about performance" should be practised by honestly understanding the impact our tools have on end user performance.
These days React is pretty much a bandwagon choice; reportedly popular DX, large ecosystem, ready supply of developers - but is the (performance) cost of adoption fully understood? If React Native isn't needed perhaps Preact is "good enough" (Etsy). And if it's mostly about JSX maybe Solid is an option?
Similarly Next.js is popular right now but are the end user performance tradeoffs well understood by those who develop with it? There is room for improvement which is why Remix exists. Astro right now supports multiple frameworks making it possible to gradually migrate towards more lightweight solutions once Astro becomes SSR capable (currently just in the SSG phase). Meanwhile Qwik aims to accomplish things that are impossible with the mainstream frameworks.
Amazon is a large company with numerous teams.
Marissa Mayer at Web 2.0 (2006)
So given their business volume a 1% difference can establish a tolerance for a lot of effort, expense, and "a certain lack of maintainability" in the right place.
That's largely a desktop web perspective that doesn't transfer well to the (mass) mobile web.
It seems everybody is adopting a stance that serves their particular needs best - example: "on a mobile device this can take seconds".
So the truth is likely somewhere in between and "good enough" is highly context sensitive.
That comes across as "if it doesn't happen in my backyard, I don't care".
Frontend Devs should care about web performance; JavaScript micro-optimizations play only a minor role in that (unless we're dealing with the implementation of frameworks/libraries).
I pretty much agree with everything you've written. But I may not have made it clear when I said you should "Care... but not too much" that what you should care about are discernible differences in performance. I totally agree that even a 100 millisecond "delay" may be enough to negatively affect conversions. What I'm railing against are those who are fretting over a nested loop, when the array being looped over can only ever hold, say, 10 values. In scenarios like those, fretting about "performance" is rather silly.
"Care... but not too much" would resonate strongly with the crowd that likes to invoke the "premature optimization" clause to shut down any discussion relating to any kind of performance - typically to justify or even promote "performance ignorance" because "that's the responsibility of the framework/libraries that we're using - so we don't have to care". So it's kind of "in vogue" to downplay performance.
My sense was that you were singling out "pointless JavaScript micro-optimizations" but there was never a counterpoint "what aspects of performance should a front end developer care about?"
Understood but there has to be the conscious decision "it's OK for 10 values, for 100_000_000 I'd have to do better", i.e. there should be knowledge of potential performance consequences should the code find itself on the hot path.
"… but the takeaway I want you to get is that more so than in other systems, you need to measure measure measure measure, and make sure your measurements are as near as possible to the real thing you're trying to build."
That said most code isn't on the hot path but it's easy for people to fixate on JavaScript micro-optimizations because those are relatively easy to spot in code - whether or not they are actually relevant. By extension the real performance issues are: knowing how to measure whether code is performant enough, knowing how to find the code that needs improvement, identifying early decisions that limit performance, and exploiting opportunities that aren't directly related to JavaScript.
The Three Unattractive Pillars of Web Dev: accessibility, security, and performance.
Even in React there is a fair amount of judgement involved when deciding to use features like React.memo, useMemo or to "just let things go".
A front end development performance mindset isn't about micro-optimizing every piece of JavaScript but caring about end user performance from the beginning of the first request up to the point when the browser page tab finishes closing.
Henry Petroski:
Interesting article and a lot of good points, but I disagree greatly with one aspect:
My "unused resources" aren't an excuse for web devs to write less performant and efficient code. You should be no less concerned about using my client resources that cost me money than you are concerned about using your server resources that cost you money.
Great article!
I feel that generally, many developers approach web development with the mindset of developing algorithms, and it's critical to understand that coding in different environments and/or for different purposes means that your top priorities as a developer should also be different.
It's similar, in a sense, to different types of writing - when writing a technical document, for example, you put your focus on completely different qualities than when writing a novel or a poem, even though they're both essentially writing!
As you've said, and it can't be stressed enough - in the case of web development the big-O efficiency of your code is usually a secondary priority. It's important to keep an eye out for it, but unless we're talking about really bad code, it typically makes no noticeable difference. Code brevity, maintainability and other similar qualities have a far greater impact on your product.
However, there is a nuance I'd like to shed light on - big-O time (and memory) efficiency are the two most popular aspects of efficiency, but they're by no means the only aspects of efficiency. Us web developers can afford to pay less attention to those, but other types of inefficiency can make a huge difference: concurrency & async operations, for example, are cardinal to virtually any modern app, and bad performance in that aspect could lead to terrible results. A similar point goes for network operations, bundle sizes, and more.
Once again, in most cases writing clear and maintainable code is a top priority, and can be achieved without sacrificing any of those, but it's important to keep in mind that inefficiency in those aspects of your logic could significantly harm the overall result.
And again - great article, well done!
TOTALLY agree. One of my biggest pet peeves is when someone stresses over tiny details of algorithmic "performance", but when you open the inspector, you can see that their app is making three identical GET calls to the exact same endpoint to retrieve the exact same data.

Exactly 😂
Premature optimization is the root of all evil, they say. I think not caring about performance means we're more interested about pushing MVPs onto the customer than actually solving problems.
One of these is that a lack of performance will needlessly burn CPU cycles and waste energy, while also ensuring that whatever system it runs on needs to be replaced faster.
So keep in the back of your mind that you don't want to kill the planet with bad front end performance. Thanks for coding considerately.
A few thoughts on performance: one critical performance indicator these days is the amount of battery that functionality uses, and while it's often the case that it is hard to determine this for a website; hybrid apps and heavily used web apps that burn through a user's battery have a directly negative impact on that user's day. Not that this is an argument for micro optimisation, but I suggest it should be a consideration around critical functionality.
Imagine a web app that has some type-ahead functionality: too-frequent use of a device's radio to contact the server for suggestions will have a negative impact if this is a commonly used function. Poorly written search functionality in the browser could make the experience of the search functionality poor and burn battery. Over-eager caching of entire data sets to allow client-side searching could negatively impact both energy usage and startup performance. This simple example shows that we should give proper consideration to the user objectives and the architecture of solutions where there is some chance that a solution will be a core part of the user's journey.
The data structures we use frequently dictate performance too, choosing when to trade memory for computation (e.g. building O(1) lookup tables) or utilising our own or 3rd party APIs to request data in the right shape to reduce data transfer, round trips or client side processing are also worth considering at the solution architecture stage too.
I am totally with you on the pragmatism side. I'd use find over a fancy lookup table for arrays expected to be small too, because there is another cost here: the cost to our business or employer in terms of the amount of time it takes to build and deliver solutions to our customers. This is another practical optimisation, because if we run out of money before the solution is released (perhaps due to one of those 3-month-long linting wars?) we have also failed at our task!

A great article, so good to be back reading your thoughts and the debate that they produce after the hiatus.
The part about performance is actually hard to read.
It's mad to assume that everybody is using the same powerful device you're using.
The browser itself is a mess. Yet here we are talking about how performance is not much of a priority. Sad.
One of the most essential skills related to performance is knowing when it matters.
50ms of expensive loop in a button press that will lead the user to a different view in your application? Probably not a big deal.
50ms of expensive loop in a paint worklet that will be used on several elements in your website? That is probably a huge problem.
Once you've figured out where performance matters, there are many other things to worry about related to identifying performance problems and fixing them. But all of that is wasted time when we've failed to realise that the code we're working on doesn't need to be performant in the first place.
With that being said though: Front-end developers should probably care more about performance than back-end developers. These days adding a bit of processing power to your distributed back-end isn't all that expensive anymore, whereas losing paying users to a bad UX due to slow code quickly adds up.
What's more, processing power isn't distributed evenly throughout society, so there is serious risk of unintentionally preventing users who can't afford good-enough hardware from using a service.
And last but not least, wasted processing power does not care whether it happens in the browser or in the server. If anything, it might be more likely that big hosting companies are using green energy to improve their image, making the front end by far the worse place of the two to waste processing power.
What a world if our sort algorithms ran in log n. I believe you meant O(n log n), as JS's built-in sort is a quicksort.
Thank you for pointing this out. I've fixed it now.