TL;DR
If you are merging arrays with thousands of elements, you can shave seconds off the process by using `arr1.push(...arr2)` instead of `arr1 = arr1.concat(arr2)`. If you really want to go faster, you might even want to write your own implementation to merge arrays.
Wait a minute... how long does it take to merge 15,000 arrays with .concat...
Recently, we had a user complaining of a major slowdown in the execution of their UI tests on UI-licious. Each `I.click`, `I.fill`, or `I.see` command, which usually takes ~1 second to complete (including post-processing such as taking screenshots), now took over 40 seconds, so test suites that usually completed under 20 minutes took hours instead, severely limiting their deployment process.
It didn't take long for me to set up timers to narrow down which part of the code was causing the slowdown, but I was pretty surprised when I found the culprit:
arr1 = arr1.concat(arr2)
Array's .concat method.
To allow tests to be written using simple commands like `I.click("Login")` instead of CSS or XPath selectors like `I.click("#login-btn")`, UI-licious uses dynamic code analysis to analyse the DOM tree and determine what and how to test your website, based on semantics, accessibility attributes, and popular but non-standard patterns. The `.concat` operation was being used to flatten the DOM tree for analysis, but it performed very poorly when the DOM tree was very large and very deep, which happened when our user recently pushed an update that bloated their pages significantly (that's a performance issue on their side, but a topic for another day).
It took 6 seconds to merge 15,000 arrays that each had an average size of 5 elements with .concat.
What?
6 seconds...
For 15,000 arrays with the average size of 5 elements?
That's not a lot of data.
Why is it so slow? Are there faster ways to merge arrays?
Benchmark comparisons
.push vs. .concat for 10000 arrays with 10 elements each
So I started researching (by that, I mean googling) benchmarks for .concat compared to other methods to merge arrays in Javascript.
It turns out the fastest method to merge arrays is to use .push which accepts n arguments:
// Push contents of arr2 to arr1
arr1.push(arr2[0], arr2[1], arr2[2], ..., arr2[n])
// Since my arrays are not fixed in size, I used `apply` instead
Array.prototype.push.apply(arr1, arr2)
And it is faster by leaps and bounds in comparison.
How fast?
I ran a few performance benchmarks on my own to see for myself. Lo and behold, here's the difference on Chrome:
To merge arrays of size 10 for 10,000 times, `.concat` performs at 0.40 ops/sec, while `.push` performs at 378 ops/sec. That makes `.push` 945x faster than `.concat`! This difference might not be linear, but it is already significant at this small scale.
And on Firefox, here are the results:
Firefox's SpiderMonkey Javascript engine is generally slower compared to Chrome's V8 engine, but .push still comes out top, at 2260x faster.
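If you want to reproduce the comparison without JsPerf, here's a rough sketch using `console.time`. It is scaled down to 1,000 arrays so it finishes quickly, and the timings are illustrative only — this is not JsPerf's methodology, and the helper names are my own:

```javascript
// Build `count` arrays of `size` sequential integers each
function makeArrays(count, size) {
  var arrays = []
  for (var i = 0; i < count; i++) {
    var arr = []
    for (var j = 0; j < size; j++) arr.push(i * size + j)
    arrays.push(arr)
  }
  return arrays
}

// Merge by repeatedly creating a new array with .concat
function mergeWithConcat(arrays) {
  var result = []
  for (var i = 0; i < arrays.length; i++) {
    result = result.concat(arrays[i])
  }
  return result
}

// Merge by appending into the same array with .push
function mergeWithPush(arrays) {
  var result = []
  for (var i = 0; i < arrays.length; i++) {
    Array.prototype.push.apply(result, arrays[i])
  }
  return result
}

var arrays = makeArrays(1000, 10)

console.time('concat')
var merged1 = mergeWithConcat(arrays)
console.timeEnd('concat')

console.time('push')
var merged2 = mergeWithPush(arrays)
console.timeEnd('push')
// Both produce the same 10,000-element array;
// the .push version should be noticeably faster.
```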
This one change to our code fixed the entire slowdown problem.
.push vs. .concat for 2 arrays with 50,000 elements each
But ok, what if you are not merging 10,000 size-10 arrays, but 2 giant arrays with 50,000 elements each instead?
Here are the results on Chrome:
.push is still faster than .concat, but this time by a factor of 9.
Not as dramatic as 945x, but .concat is still dang slow.
Prettier syntax with rest spread
If you find Array.prototype.push.apply(arr1, arr2) verbose, you can use a simple variant using the rest spread ES6 syntax:
arr1.push(...arr2)
The performance difference between Array.prototype.push.apply(arr1, arr2) and arr1.push(...arr2) is negligible.
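One caveat worth noting: both `.apply` and the spread syntax pass every element of `arr2` as a separate function argument, and JavaScript engines cap how many arguments a single call can take (the exact limit varies by engine, and very large arrays can throw a RangeError). A chunked variant sidesteps this; `pushAll` and the chunk size below are my own naming and choice, not from the article's code:

```javascript
// Append all of arr2 onto arr1 without passing too many
// arguments to a single .push call.
function pushAll(arr1, arr2) {
  var CHUNK = 32768 // conservative; engine argument limits are higher
  for (var i = 0; i < arr2.length; i += CHUNK) {
    Array.prototype.push.apply(arr1, arr2.slice(i, i + CHUNK))
  }
  return arr1
}
```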
But why is Array.concat so slow?
A lot of it has to do with the JavaScript engine, but I didn't know the exact answer, so I asked my buddy @picocreator, the co-creator of GPU.js, as he had spent a fair bit of time digging around the V8 source code before. @picocreator also lent me his sweet gaming PC, which he used to benchmark GPU.js, to run the JsPerf tests, because my MacBook didn't have the memory to even perform .concat with two size-50,000 arrays.
Apparently the answer has a lot to do with the fact that .concat creates a new array while .push modifies the first array. The additional work .concat does to add the elements from the first array to the returned array is the main reason for the slowdown.
Me: "What? Really? That's it? But by that much? No way!"
@picocreator: "Serious, just try writing some naive implementations of .concat vs .push then!"
So I tried writing some naive implementations of .concat and .push. Several in fact, plus a comparison with lodash's _.concat:
Naive implementation 1
Let's talk about the first set of naive implementations:
Naive implementation of .concat
// Cache the lengths of the source arrays
var arr1Length = arr1.length
var arr2Length = arr2.length
// Create result array
var arr3 = []
// Add Array 1
for(var i = 0; i < arr1Length; i++){
arr3[i] = arr1[i]
}
// Add Array 2
for(var i = 0; i < arr2Length; i++){
arr3[arr1Length + i] = arr2[i]
}
Naive implementation of .push
// Cache the lengths of the source arrays
var arr1Length = arr1.length
var arr2Length = arr2.length
for(var i = 0; i < arr2Length; i++){
arr1[arr1Length + i] = arr2[i]
}
As you can see, the only difference between the two is that the .push implementation modifies the first array directly.
Results of vanilla methods:
- .concat: 75 ops/sec
- .push: 793 ops/sec (10x faster)

Results of naive implementation 1:
- .concat: 536 ops/sec
- .push: 11,104 ops/sec (20x faster)
It turns out that my DIY concat and push are faster than the vanilla implementations... And here we can see that simply creating a new result array and copying the contents of the first array over slows down the process significantly.
Naive implementation 2 (Preallocate size of the final array)
We can further improve the naive implementations by preallocating the size of the array before adding the elements, and this makes a huge difference.
Naive implementation of .concat with pre-allocation
// Cache the lengths of the source arrays
var arr1Length = arr1.length
var arr2Length = arr2.length
// Create result array with preallocated size
var arr3 = Array(arr1Length + arr2Length)
// Add Array 1
for(var i = 0; i < arr1Length; i++){
arr3[i] = arr1[i]
}
// Add Array 2
for(var i = 0; i < arr2Length; i++){
arr3[arr1Length + i] = arr2[i]
}
Naive implementation of .push with pre-allocation
// Cache the lengths, then pre-allocate the final size
var arr1Length = arr1.length
var arr2Length = arr2.length
arr1.length = arr1Length + arr2Length
// Add arr2 items to arr1
for(var i = 0; i < arr2Length; i++){
arr1[arr1Length + i] = arr2[i]
}
Results of naive implementation 1:
- .concat: 536 ops/sec
- .push: 11,104 ops/sec (20x faster)

Results of naive implementation 2:
- .concat: 1,578 ops/sec
- .push: 18,996 ops/sec (12x faster)
Preallocating the size of the final array improves the performance by 2-3 times for each method.
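Wrapped up as a reusable helper, the preallocated `.push` variant looks like this (the function name `mergeInPlace` is mine, not from any library):

```javascript
// Append arr2 onto arr1, growing arr1 exactly once up front
function mergeInPlace(arr1, arr2) {
  var arr1Length = arr1.length
  var arr2Length = arr2.length
  // Pre-allocate the final size so the engine doesn't
  // repeatedly resize the backing store as we write
  arr1.length = arr1Length + arr2Length
  for (var i = 0; i < arr2Length; i++) {
    arr1[arr1Length + i] = arr2[i]
  }
  return arr1
}
```

Like `.push`, it mutates the first array: `mergeInPlace([1, 2], [3, 4, 5])` returns `[1, 2, 3, 4, 5]`.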
.push array vs. .push elements individually
Ok, what if we just .push elements individually? Is that faster than Array.prototype.push.apply(arr1, arr2)?
for(var i = 0; i < arr2Length; i++){
arr1.push(arr2[i])
}
Results:
- .push entire array: 793 ops/sec
- .push elements individually: 735 ops/sec (slower)
So doing .push on individual elements is slower than doing .push on the entire array. Makes sense.
Conclusion: Why .push is faster than .concat
In conclusion, it is true that the main reason .concat is so much slower than .push is simply that it creates a new array and does the additional work of copying the first array over.
That said, now there's another mystery to me...
Another mystery
Why are the vanilla implementations so much slower than the naive implementations? 🤔 I asked for @picocreator's help again.
We took a look at lodash's _.concat implementation for hints as to what else vanilla .concat is doing under the hood, as the two are comparable in performance (lodash's is slightly faster).
It turns out that, according to the spec, vanilla .concat is overloaded and supports two signatures:

- Values to append as n number of arguments, e.g. [1,2].concat(3,4,5)
- The array to append itself, e.g. [1,2].concat([3,4,5])

You can even do both, like this: [1,2].concat(3,4,[5,6])
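For illustration, here is what the overloaded signatures actually return. Note that `.concat` only flattens array arguments one level deep:

```javascript
// n values as separate arguments
var a = [1, 2].concat(3, 4, 5)      // [1, 2, 3, 4, 5]
// a single array argument, flattened one level
var b = [1, 2].concat([3, 4, 5])    // [1, 2, 3, 4, 5]
// a mix of both
var c = [1, 2].concat(3, 4, [5, 6]) // [1, 2, 3, 4, 5, 6]
// flattening stops at one level: nested arrays stay nested
var d = [1, 2].concat([3, [4, 5]])  // [1, 2, 3, [4, 5]]
```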
Lodash also handles both overloaded signatures. To do so, it puts all the arguments into an array and flattens it. That makes sense if you are passing in several arrays as arguments. But when passed a single array to append, it doesn't just use the array as it is; it copies it into another array and then flattens it.
... ok...
Definitely could be more optimised. And this is why you might want to DIY your own merge array implementation.
Also, this is just my and @picocreator's theory of how vanilla .concat works under the hood, based on lodash's source code and his slightly outdated knowledge of the V8 source code.
You can read lodash's source code at your leisure here.
Additional Notes
The tests were done with arrays that only contain integers. JavaScript engines are known to perform faster with typed arrays. The results are expected to be slower if the arrays contain objects.
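If your data really is all numbers, typed arrays offer a different route entirely: allocate the result once and block-copy with `TypedArray.prototype.set`. This is a sketch of that idea, not something the article benchmarked:

```javascript
// Merge two typed arrays into a freshly allocated one
function mergeTyped(a, b) {
  var out = new Float64Array(a.length + b.length)
  out.set(a)           // copy a starting at index 0
  out.set(b, a.length) // copy b right after a
  return out
}
```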
Why are we doing such large array operations during UI-licious tests anyway?
Under the hood, the UI-licious test engine scans the DOM tree of the target application, evaluating the semantics, accessibility attributes, and other common patterns to determine what the target element is and how to test it.
This is so that we can make sure tests can be written as simply as this:
// Lets go to dev.to
I.goTo("https://dev.to")
// Fill up search
I.fill("Search", "uilicious")
I.pressEnter()
// I should see myself or my co-founder
I.see("Shi Ling")
I.see("Eugene Cheah")
All without the use of CSS or XPath selectors, so that the tests are more readable, less sensitive to changes in the UI, and easier to maintain.
ATTENTION: Public service announcement - Please keep your DOM count low!
Unfortunately, there's a trend of DOM trees growing excessively large these days, because people are building more and more complex and dynamic applications with modern front-end frameworks. It's a double-edged sword: frameworks let us develop faster, but folks often forget how much bloat they add. I sometimes cringe at the number of elements that exist just to wrap other elements when inspecting the source code of various websites.
If you want to find out whether your website has too many DOM nodes, you can run a Lighthouse audit.
According to Google, the optimal DOM tree is:
- Fewer than 1,500 nodes in total
- A maximum depth of 32 levels
- No parent node with more than 60 children
A quick audit on the Dev.to feed shows that the DOM tree size is pretty good:
- Total count of 941 nodes
- Max. depth of 14
- Max number of child elements at 49
Not bad!


Oldest comments (70)
So from what I remember, array implementations in V8 come in two flavours: fast element and dictionary element (hash table) arrays. The former is used when the array is below 10,000 elements and doesn't have any "holes" in it. The latter, on the other hand, is used when the opposite is true: array.length > 10000 and the array has "holes" (e.g. [1, 2, undefined, 4, 5]).
So concat might have been slow because of this reason. I'm still not certain about this, mainly because .push was still fast.
While not shown in the use case (the above oversimplifies the actual code), you're right that holes have an impact on array mode vs. dictionary mode in V8. Having holes does not guarantee a switch into dictionary mode, but it is one of the contributing factors in how the engine decides the mode.
Probably worth further investigation. My gut feel as of now, from how I understand it, is that the arrays in our use case would not have any holes 99% of the time. And from our internal code monitoring, it's these 99% that have the problem. >_<
I can't recall the talk this is from (but I definitely know it's real), so if you know where it's from, it would be good to add it here for others to learn from.
I replied to this, but I don't see the reply here. Really weird.
Anyways here are two resources that talk about how V8 handles arrays:
Good investigation and article!
Thanks :)
Nice investigation.
The concat loop is akin to O(n^2), whereas the push loop is O(n). By creating a new array you need to do the copy, as you showed. This happens for every iteration of the loop*. An array push, however, is amortized constant time (usually), thus you only copy each element once (well, probably three times).
Part of the issue with increased copying is the increased memory access. Your array probably exceeds the bounds of the L0 cache on the CPU, possibly the L1 cache. This means that when concat copies, it needs to load new memory from a more distant cache, repeatedly. The push version never accesses the data already in the list, thus avoiding this extra memory loading.
(*Before somebody gets pedantic, it's more like a K*N loop, where K is the loop iterations. However, since the size of the resulting array is linearly related to K, it means c * K = N, thus ~ N^2.)

Talking strictly in terms of the performance of `push` versus `concat` (albeit naively, as it relates to VM implementation details), this wouldn't really apply. A meaningful comparison of the two would take two large pre-allocated arrays, and compare the results of a single `concat` operation and a single `push` operation, where I'd expect `concat` to perform better when dealing with sufficiently large arrays, since it's (theoretically) a single allocation and then two memcpys, whereas `push` copies a single element over one at a time, causing multiple reallocations. To be pedantic, the context in which we're talking about the performance characteristics of `push` and `concat` is in terms of merging a large number of arrays, where I'd wager garbage collection is likely the dominating factor in the `concat` solution, even after taking runtime characteristics into account.

I'm not following how this means what I said doesn't apply? `push` is highly unlikely to allocate memory one bit at a time. If it's anything like a C++ vector, or a contiguous array in any other language, it has amortized O(1) `push` time by allocating geometrically rather than linearly. We're also dealing in sizes that are definitely large enough to spill over the CPU buffers, especially in the concat case where we allocate new data.

GC is potentially involved, but the allocation and deallocation of memory, in such tight loops, for an optimized VM (I assume it is), is likely also linear time.

There's no reason a `memcpy` is faster than a loop copying over items one at a time. `memcpy` is not a CPU construct; it gets compiled to a loop that copies a bit of memory at a time.

I should clarify: what you said applies to merging multiple arrays, where the input size is the number of arrays being merged. If we're talking strictly about the performance of `push` vs `concat` for concatenating one array to another (which the headline sort of implies), then the input size is always one if analyzing in terms of merging multiple arrays. As you said, it's going to re-allocate similarly to a C++ vector, which for the purpose of runtime analysis we just disregard. I choose not to disregard it, since I don't believe runtime analysis tells the whole story here. Or perhaps more appropriately, merging multiple arrays is a specific use of `push` and `concat` that doesn't speak to their performance characteristics in a generalized way. The point is, `concat` may perform better in scenarios where one is likely to use it, and it would be misleading to advocate always using `push` instead.

Also known as a Schlemiel the Painter's Algorithm.
Note that your tests may be returning biased results, as, for example, Chrome (and likely other browsers) has special-cased arrays that are known to (not) be sparse.
Doing `Array.from(Array(50000), (x,i) => i + 1)` looks nice but will create a "hole-y" (sparse) array. That is, the initial array will be 50,000 "holes" or empty values. Filling them in later will not upgrade the array to be non-sparse. These things change quickly, but this is what was featured in a session by a Chrome engineer on array implementations from last year: youtube.com/watch?v=m9cTaYI95Zc

I work with large arrays as well (3D model data); these are anywhere from 1KB to 100MB in size, and I sometimes have to append them all together without wasting memory. I use the following method to append them together. It uses `Array.slice`, but browsers (seem to) implement slicing as copy-on-write, so the slice creates a view on the array and doesn't make copies constantly, I've found empirically.

Linked here in my repo, which also has quite a few other array methods, check 'em out ;)
github.com/stardazed/stardazed/blo...
Definitely a biased sample from a browser point of view - not fully intentional though. We tried to get Firefox results for extremely large arrays... and, err, let's say it didn't go well at all for the other browsers when put side by side, so we left them out.
Bigger impact though is probably gonna be on the server side (node.js) of things, which is the case for us.
Despite my involvement in GPU.JS and all things 3D, which is cool in its own right, for most other sites I personally feel that it's usually a bad idea to over-bloat the UI, especially for those with older devices.
Big thanks for the youtube link =)
Yes, my focus is in-browser (desktop) high-end 3D scenes, which is very different from web pages and server-side stuff. I've spent a good amount of time benchmarking various approaches to array-based data manipulation, so certain of my functions have separate paths for different browsers. In general, with my code, Safari (since v11) and Chrome are very close, with my code usually performing best in STP and Firefox generally coming in 3rd place by about -10% to -30%, unless it's faster by a good margin in particular things... JS engines.
This problem also exists for ArrayBuffers, as the changes to JS engines to accommodate ES6 runtime features negatively affected many common functions on those types. New patches have come in since then and they're better now, but certain functions on ArrayBuffer, like `set`, are still very slow on all browsers.

Thanks to Shi Ling for the nice article.
Actually, your time spent on various array benchmarks across browsers would be a cool article on its own - especially the various other browsers: Safari desktop, mobile, Edge, IE11? (haha)
Would actually love to see your browser-specific implementations 🤔 It is admittedly a lot of work though.
Thanks! A lot of this work is in various test files (not in repos) and quite a bit of it was done over the last 3 years so I'd have to retest all permutations etc. I often feel like writing an article here but instead I just write a long comment on someone else's story as that feels less "weighty." But cool that you'd be interested!
As for browsers, given the scope of my project I have the luxury of requiring essentially the latest versions of desktop browsers, so my project outputs ES2018 code and I only need to check a few different engines.
Oh, I get it - you mean those holey arrays will be evaluated as holey arrays forever even if they become packed.
That video is really helpful - I learnt yet another thing about array implementations in V8 today. Gee, I wish more of these good videos were transcribed into searchable online literature.
Hm... I could change the array initialisation so that it's not holey and redo the experiments - concat will still be slower than push, but the results for the vanilla implementations could perform faster.
Huh, that's interesting! I was wondering what the memory footprints of the different implementations to merge arrays are!
Great investigation and data!
Very thorough article.
Considering the fame of lodash, I now wonder if there's a place for an equivalent with performance in mind.
I also wonder how many more examples like this there are. Is it only `concat`, or can most of `Array.prototype` be optimized?

I remember my early struggles with JS performance: replacing `Math.floor` with `<< 0` gained a considerable amount of ops/sec. (Not true today.)

Another example is `array.push` vs `array.unshift`, basically for memory reasons. If I remember correctly, a `push` is simple for most computers because the array already has "extra" memory allocated at the end, and it only has to recreate the array when the allocated memory runs out. `unshift`, however, will always recreate the whole array, because adding an item to the start of an array would require moving every single item one step. Recreating the whole array is simpler than moving each one.

See stackoverflow.com/questions/440315...
Huh, interesting!
What about:
const newArr = [...oldArr, newItem]
🤔
I tried this out in the Babel REPL; the spread uses concat after the ES6 code gets transpiled, so I can assume it's probably the same :O
babeljs.io/repl/#?babili=false&bro...
Doesn't that still create a copy of the first array - which makes it the same as `concat`?

I don't know actually. If so, that's really insightful, thank you, I'll investigate!!
Great stuff. I have so much love for jsperf.com that I remember being very upset when it went down for a while. Glad to have it back up. I recently used it to check the performance of different solutions to the Sock Merchant Challenge - please check the comments section if you have a sec.
My question though: since some have expressed possible bias on certain browsers, how much should we trust or rely on the built-in console.time method running in a browserless environment such as Node to provide a base for such benchmarks?

If this is an objective and efficient way of testing speed, it should be easy to put a wrapper around it for reuse; if not, what do folks recommend as tools to look into for this crucial exercise?
Huh, I didn't know there are `console.time` and `console.timeEnd` methods. Been using new Date() all along :D.

I don't know if JsPerf uses `console.time` and `console.timeEnd` under the hood, but the underlying implementations of the timer methods can't be so sophisticated that they'd make a significant impact on the tests between browsers.

I'm really curious about your use case now. Do the arrays even need to be combined? Would it be possible to have an array of arrays instead, slightly modifying the search algorithm? This could mean no longer needing to modify or duplicate any content at all, making things even faster.
That did occur to me when I was refactoring our code to fix this problem. It is possible in my use case, but it hurts my code readability, and the performance improvement from refactoring `.concat` to `.push` was good enough for me to be satisfied.