
Javascript Array.push is 945x faster than Array.concat 🤯🤔

Shi Ling on May 02, 2019

TL;DR If you are merging arrays with thousands of elements across, you can shave off seconds from the process by using arr1.push(...arr2)...
edA‑qa mort‑ora‑y

Nice investigation.

The concat loop is akin to O(n^2), whereas the push loop is O(n). By creating a new array you need to do the copy, as you showed, and this happens for every iteration of the loop*. An array push, however, is amortized constant time (usually), so you only copy each element once (well, probably three times).

Part of the issue with increased copying is the increased memory access. Your array probably exceeds the bounds of the L0 cache on the CPU, possibly the L1 cache. This means that when concat copies, it needs to load new memory from a more distant cache, repeatedly. The push version never accesses the data already in the list, thus avoiding this extra memory loading.

(*Before somebody gets pedantic, it's more like a K*N loop, where K is the number of loop iterations. However, since the size of the resulting array is linearly related to K, it means c * K = N, thus ~ N^2.)
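
A minimal sketch of the two loop shapes being compared (the array sizes here are made up, just to make the copying visible):

// merging many small arrays into one big one
let arr2 = Array.from({ length: 10 }, (_, i) => i + 1);

// concat in a loop: every pass allocates a new array and re-copies everything
// accumulated so far, so the total work grows roughly quadratically
let merged = [];
for (let k = 0; k < 1000; k++) {
  merged = merged.concat(arr2);
}

// push in a loop: existing elements stay where they are, only the new 10
// elements are appended each pass, so the total work grows roughly linearly
let appended = [];
for (let k = 0; k < 1000; k++) {
  appended.push(...arr2);
}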

W. Brian Gourlie • Edited

Talking strictly in terms of the performance of push versus concat (albeit naively as it relates to VM implementation details), this wouldn't really apply.

A meaningful comparison of the two would take two large pre-allocated arrays and compare the results of a single concat operation and a single push operation. There I'd expect concat to perform better when dealing with sufficiently large arrays, since it's (theoretically) a single allocation and then two memcpys, whereas push copies a single element over at a time, causing multiple reallocations.
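
A rough sketch of that single-operation comparison (sizes are arbitrary and the outcome depends on the engine and machine, so treat it as illustrative only):

const a = Array.from({ length: 100000 }, (_, i) => i);
const b = Array.from({ length: 100000 }, (_, i) => i);

console.time('single concat');
const viaConcat = a.concat(b);   // one new array, every element copied once
console.timeEnd('single concat');

console.time('single push');
const viaPush = a.slice();       // copy a first so the source is not mutated
for (let i = 0; i < b.length; i++) {
  viaPush.push(b[i]);            // append one element at a time
}
console.timeEnd('single push');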

To be pedantic, the context in which we're talking about the performance characteristics of push and concat is in terms of merging a large number of arrays, where I'd wager garbage collection is likely the dominating factor in the concat solution, even after taking runtime characteristics into account.

edA‑qa mort‑ora‑y

I'm not following how this means what I said doesn't apply?

push is highly unlikely to allocate memory one bit at a time. If it's anything like a C++ vector, or a contiguous array in any other language, it has amortized O(1) push time by allocating geometrically rather than linearly.

We're also dealing in sizes that are definitely large enough to spill over the CPU buffers, especially in the concat case where we allocate new data.

GC is potentially involved, but the allocation and deallocation of memory, in such tight loops, for an optimized VM (I assume it is), is likely also linear time.

There's no reason a memcpy is faster than a loop copying over items one at a time. memcpy is not a CPU construct; it gets compiled to a loop that copies a bit of memory at a time.

W. Brian Gourlie • Edited

I'm not following how this means what I said doesn't apply?

I should clarify: What you said applies to merging multiple arrays, where the input size is the number of arrays being merged. If we're talking strictly about the performance of push vs concat for concatenating one array to another (which the headline sort of implies), then the input size is always one if analyzing in terms of merging multiple arrays.

push is highly unlikely to allocate memory one bit at a time

As you said, it's going to re-allocate similarly to a C++ vector, which for the purpose of runtime analysis we just disregard. I choose not to disregard it since I don't believe runtime analysis tells the whole story here. Or perhaps more appropriately, merging multiple arrays is a specific use of push and concat that doesn't speak to their performance characteristics in a generalized way.

The point is, concat may perform better in scenarios where one is likely to use it, and it would be misleading to advocate always using push instead.

zenmumbler • Edited

Note that your tests may be returning biased results, as for example Chrome (and likely other browsers) special-cases arrays that are known to (not) be sparse.

Doing Array.from(Array(50000), (x,i) => i + 1) looks nice but will create a "hole-y" (sparse) Array. That is, the initial array will be 50000 "holes" or empty values. Filling them in later will not upgrade the array to non-sparse. These things change quickly, but this is what was covered in a session by a Chrome engineer on Array implementations from last year: youtube.com/watch?v=m9cTaYI95Zc
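
If you want the test data to start out packed instead, a plain loop of pushes is one option (a sketch - whether the engine keeps the array packed is still engine-dependent):

const packed = [];
for (let i = 0; i < 50000; i++) {
  packed.push(i + 1); // elements are appended in order, so no holes are ever created
}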

I work with large arrays as well (3D model data); these are anywhere from 1KB to 100MB in size, and I sometimes have to append them all together without wasting memory etc. I use the following method to append them. It uses Array.slice, but browsers (seem to) implement slicing as copy-on-write, so the slice creates a view on the array rather than making copies constantly, as I've found empirically.

const MAX_BLOCK_SIZE = 65535; // max parameter array size for use in Webkit

export function appendArrayInPlace<T>(dest: T[], source: T[]) {
    let offset = 0;
    let itemsLeft = source.length;

    if (itemsLeft <= MAX_BLOCK_SIZE) {
        dest.push.apply(dest, source);
    }
    else {
        while (itemsLeft > 0) {
            const pushCount = Math.min(MAX_BLOCK_SIZE, itemsLeft);
            const subSource = source.slice(offset, offset + pushCount);
            dest.push.apply(dest, subSource);
            itemsLeft -= pushCount;
            offset += pushCount;
        }
    }
    return dest;
}

Linked here in my repo, which also has quite a few other array methods, check 'em out ;)
github.com/stardazed/stardazed/blo...

Eugene Cheah • Edited

Definitely a biased sample from a browser point of view - not fully intentional though. We tried to get Firefox results for extremely large arrays... and, err, let's say it didn't go well at all for the other browsers when put side by side, so we left them out.

The bigger impact, though, is probably going to be on the server side (node.js), which is the case for us.

Despite my involvement in GPU.JS and all things 3D, which is cool in its own right, for most other sites I do personally feel that it's usually a bad idea to overbloat the UI, especially for those with older devices.

Big thanks for the youtube link =)

zenmumbler

Yes, my focus is in-browser (desktop) high-end 3D scenes, which is very different from web pages and server-side stuff. I've spent a good amount of time benchmarking various approaches to array-based data manipulation, so certain of my functions have separate paths for different browsers. In general, with my code, Safari (since v11) and Chrome are very close, usually performing best in STP, with Firefox generally coming in 3rd place by about -10% to -30%, unless it's faster by a good margin at particular things… JS engines.

This problem also exists for ArrayBuffers as the changes to JS engines to accommodate ES6 runtime features negatively affected many common functions on those types. New patches have come in since then and they're better now, but certain functions on ArrayBuffer, like set, are still very slow on all browsers.

Thanks to Shi Ling for the nice article.

Eugene Cheah • Edited

Actually, your time spent on various array benchmarks across browsers would make a cool article on its own - especially across the various other browsers: Safari desktop, mobile, Edge, IE11? (haha)

Would actually love to see your browser-specific implementations 🤔 It is admittedly a lot of work though.

zenmumbler • Edited

Thanks! A lot of this work is in various test files (not in repos) and quite a bit of it was done over the last 3 years so I'd have to retest all permutations etc. I often feel like writing an article here but instead I just write a long comment on someone else's story as that feels less "weighty." But cool that you'd be interested!

As for browsers, given the scope of my project I have the luxury of requiring essentially the latest versions of desktop browsers, so my project outputs ES2018 code and I only need to check a few different engines.

Shi Ling

Oh, I get it - you mean those holey arrays will be evaluated as holey arrays forever even if they become packed.

That video is really helpful - I learnt yet another thing about Array implementations in V8 today. Gee, I wish more of these good videos were transcribed into searchable online literature.

Hm... I could change the array initialisation so that it's not holey and redo the experiments - concat will still be slower than push, but the vanilla implementations could perform faster.

Huh, that's interesting! I was wondering what the memory footprints of the different array-merging implementations are!

Vesa Piittinen

A thing I do wonder though: if you do an extra .slice(0) operation after the array is packed, wouldn't that result in a true, fast-to-operate non-holey array? The cost would be an extra allocation and copy, which would still be better than continuously .push()ing into a packed array.

Of course this only holds true as long as there are thousands of items to process and you know the final size before you start working on it.

zenmumbler

This is JavaScript-engine dependent, but please have a look at my initial comment: I observed that Array.slice creates a view on an array instead of making a new one until it is modified (copy-on-write). This behaviour will likely vary, for example based on what the VM thinks is best given the size and contents of the array, and will again vary per VM implementation. Also, my observation is just that - I did not verify this in the source of each VM, so consider it an unconfirmed opinion.

The video may also give a real answer to this Q as the presenter goes into detail about these things, but I haven't watched it recently. Good to have a look.

Vesa Piittinen • Edited

I watched the video and it didn't answer that specific question. However, if slice creates a view, then it still needs to make a new array once any modification is made to the sliced copy. At that point shouldn't it regard it as a new array and be able to optimize it?

It would be awesome though if holey arrays could become non-holey without this kind of VM specific tricking once there are no holes remaining.

Kitanga Nday

So from what I remember, Array implementations in V8 come in two flavours: Fast element and Dictionary element (hash table) arrays. The former is used when the array is below 10000 elements and doesn't have any "holes" in it. The latter, on the other hand, is used when the opposite is true: array.length > 10000, or the array has "holes" (e.g. [1, 2, undefined, 4, 5]).
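
For illustration (a sketch only - the exact thresholds and heuristics are V8 internals and change over time), one way an array can end up holey:

const a = [1, 2, 3];   // small, packed "fast elements" array
a[100000] = 4;         // a huge index gap leaves ~100k holes, which can push the
                       // array into the slower dictionary-like representation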

Eugene Cheah • Edited

While not shown in the use case (as the above oversimplifies the actual code), you're right that holes have an impact on the array mode / dictionary mode decision in V8. Having holes does not guarantee a switch into dictionary mode, but it is one of the contributing factors in how the engine decides the mode.

Probably worth further investigation. My gut feel as of now, from how I understand it, is that the array in our use case would not have any holes 99% of the time. And from our internal code monitoring, it's this 99% that has the problem. >_<

I can't recall the talk reference for this (but I definitely know it's real), so if you know where it's from, it would be good to add it here for others to learn from.

Kitanga Nday

I replied to this, but I don't see the reply here. Really weird.

Anyways here are two resources that talk about how V8 handles arrays:

Kitanga Nday

So concat might have been slow for this reason. I'm still not certain about it, mainly because .push was still fast.

Luisa Pinto

I am sorry, I do not understand what the big fuss is about... concat() and push() have two different applications and were not made to compete with each other. You do not need a study to conclude concat() is the slower option. That is very clear from reading the first two lines of the documentation for each method.

developer.mozilla.org/en-US/docs/W...

developer.mozilla.org/en-US/docs/W...

It is important to understand that you use concat() when you are after immutability, for example, when you want to preserve states. One example where immutability is important is when using React - reactjs.org/tutorial/tutorial.html....

Otherwise, you use push when you don't care about changing the arrays (one of them at least).

I am much more surprised that immutability was not mentioned in this article than by the difference in performance between the two methods.

Shi Ling • Edited

While that is true, and although it just so happened that in my use case immutability doesn't matter, it is still worthwhile to consider using methods other than concat to merge arrays while still preserving the source arrays.

If you need to merge thousands of small arrays in a loop without modifying the source arrays, for example, you can create a new result array and push the source arrays into it instead, to avoid the penalty of copying the ever-growing first array on every iteration as you would with concat.

Naive code to merge 1000 arrays with concat (basically what I did, which in my case is 15,000 arrays):

for(var i = 0; i < 1000; i++){
  arr1 = arr1.concat(arr2)   // arr1 gets bigger and bigger, so this gets more expensive over time
}

Faster code to merge 1000 arrays with push w/o modification to source arrays:

let result = []
result.push(...arr1)
for(var i = 0; i < 1000; i++){
  result.push(...arr2) 
}

Also, given the name, it's not very obvious to people that concat creates a new array instead of modifying the first array. And even though I frequently read the MDN docs, I still sometimes forget that concat creates the result as a new array.

jmc • Edited

it's not very obvious to people that concat creates a new array instead of modifying the first array

I always forget too 😂

Alec Joiner

FWIW, to me it is very intuitive that concat would create a new array since when I hear the word 'concat' I immediately think of strings. Maybe it's misguided, but that leads me to expect other things to behave the same way strings would.

Guillaume Martigny • Edited

Very thorough article.

Considering the fame of lodash, I now wonder if there's a place for an equivalent with performance in mind.

I also wonder how many more examples like this there are. Is it only concat, or can most of Array.prototype be optimized?

I remember my early struggles with JS performance, where replacing Math.floor with << 0 gained a considerable amount of ops/s. (Not true today.)

Adam Gerthel

Another example is array.push vs array.unshift, basically for memory reasons. If I remember correctly, a push is simple for most computers because there is already "extra" memory allocated at the end of the array, and the array only has to be recreated when that allocated memory runs out. unshift, however, will always recreate the whole array, because adding an item to the start of an array would require moving every single item one step. Recreating the whole array is simpler to do than moving each one.

See stackoverflow.com/questions/440315...
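
A tiny illustration of the difference (what the engine does internally varies, so treat this as a sketch):

const arr = [1, 2, 3];

arr.push(4);     // usually just writes into spare capacity at the end, so it's cheap
arr.unshift(0);  // every existing element has to move one slot to the right
                 // (or the whole array is rebuilt), so the cost grows with length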

Shi Ling

Huh, interesting!

mykohsu

There's a major problem that should be mentioned.

push(...arr)

Will stack overflow if your array is larger than the stack size.

Shi Ling

Wait... why? Stack as in execution stack or memory?

jmc

Execution stack. It's the same for apply:

> const a = [], b = new Array(10**6)
> Array.prototype.push.apply(a, b)
Thrown:
RangeError: Maximum call stack size exceeded

The problem is that function call arguments all need to go on the call stack, so that they can be bound to the called function's variables. Put another way, push.apply(a, [1,2,3]) --> a.push(1,2,3), where 1, 2, and 3 are assumed to fit in the call stack.

The max stack size is set by the OS & user, but generally small (a few hundred or thousand kb), since it's only meant to hold function call contexts.

So we can distinguish between cases where the array is actually a list of function arguments, and the push.apply usage, where the array is "just" data and we're only able to use it with push because that function, for convenience, takes a variable number of arguments.

In order to use apply in the latter case, it's good to know up front that the array will be small.

Shi Ling

Oh I see!

Paul-Sebastian Manole

Might want to mention what the spread operator actually does, which is to spread the single array argument in array.push(...arg) into multiple arguments, like so: array.push(arg[0], arg[1], arg[2], ..., arg[n]).
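
For readers who haven't run into it, a tiny illustration of that expansion:

const arr = [1, 2];
const more = [3, 4, 5];

arr.push(...more);   // equivalent to arr.push(3, 4, 5) -> arr is [1, 2, 3, 4, 5]
arr.push(more);      // by contrast, this pushes the whole array as a single element
                     // -> arr is now [1, 2, 3, 4, 5, [3, 4, 5]]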

Shi Ling

You're right, I didn't realise that people might not know what the ES6 spread operator does. Updated the article - thanks!

Dima

Seems the problem is not in concat, but in the = itself.
As far as I can see, a map of concats is even faster than push.

console.time('how long');
var arBig = 
[...Array(10000).keys()].map( function(step) {
  return arr1.concat(arr2);
});
console.log(arBig.length);
console.timeEnd('how long');

Plz check jsperf.com/javascript-array-concat...

Vesa Piittinen

Not comparable: the concat mapper creates 10000 separate 20-item arrays, while the other functions create a single array of 100010 items.

Dima

Ahh, indeed!
Well, with reduce-concat the timing became the same as arr1 = arr1.concat :(

console.time('how long');
var arBig = 
[...Array(10000).keys()].reduce( function(acc, val) {
  return acc.concat(arr2);
}, arr1);
console.log(arBig.length);
console.timeEnd('how long');
zakius

Regarding the DOM count: how do you properly handle the situation when you simply have a lot of items to display? Implementing a fake scroller that renders only the visible elements and removes the hidden ones from the DOM doesn't work that well IMO, especially when all of them are still more or less interactive (multiple selection etc.).

Shi Ling

You mean how should a front-end engineer improve performance on a page with a high DOM count, or how does our test engine handle such huge pages?

zakius

I meant how to handle high item counts while keeping good performance/minimizing DOM count

Shi Ling

Well, the most common solution is pagination. But if your designer fancies the infinite scrolling pattern, then as you said, rendering only visible content is a solution. The Chrome Dev Team proposed the <virtual-scroller> element youtube.com/watch?v=UtD41bn6kJ0 last year, which may be handy if it becomes standard in the future.

Additionally, we can also check if there are redundant wrapper elements on the page that can be removed.

zakius

Thanks for the link.
It's not exactly infinite scrolling or anything like that; it's an app (kind of - a browser extension), an RSS reader to be precise, and just by importing my sources list and downloading the currently available articles it goes to over 3000 articles. Selecting "all feeds" surely renders quite a big list.
It's a legacy project I picked up after it was abandoned and cleaned up a bit. There was even an attempt at handling long lists better, but it actually worked slower than the naive list-everything approach and glitched quite often. Currently the performance isn't terrible, but if there were some easy way to improve it I'd gladly use it; I'll do some more research in this matter to see what can be done.

Ady Ngom

Great stuff. I have so much love for jsperf.com that I remember being very upset when it went down for a while. Glad to have it back up - I recently used it to check the performance of different solutions to the Sock Merchant Challenge; please check the comments section if you have a sec.

My question, though, is: since some have expressed possible bias in certain browsers, how much should we trust or rely on the built-in console.time method, running in a browserless environment such as node, to provide a base for such benchmarks:

console.time('how long');
  plzBenchMarkMe();
console.timeEnd('how long');

If this is an objective and efficient way of testing speed, it should be easy to put a wrapper around it for reuse. If not, what tools do folks recommend looking into for this crucial exercise?
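
For example, a minimal wrapper might look something like this (the benchmark name and signature are just an illustration):

function benchmark(label, fn, iterations = 1) {
  console.time(label);
  for (let i = 0; i < iterations; i++) {
    fn();
  }
  console.timeEnd(label);
}

// usage
const arr1 = [1, 2, 3, 4, 5];
const arr2 = [6, 7, 8, 9, 10];
benchmark('merge with push', () => {
  const result = [];
  result.push(...arr1);
  result.push(...arr2);
}, 10000);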

Shi Ling

Huh, I didn't know there were console.time and console.timeEnd methods.

Been using new Date() all along :D.

I don't know if JsPerf uses console.time and console.timeEnd under the hood, but the underlying implementations of the timer method can't be so sophisticated that it'll make a significant impact on the tests between browsers.

Nader Dabit

What about:

const newArr = [...oldArr, newItem]

🤔

Ian Wilson

I tried this out in the Babel REPL; the spread uses concat after the ES6 code gets transpiled, so I can assume it's probably the same :O

babeljs.io/repl/#?babili=false&bro...
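
Roughly what the transpiled output boils down to (simplified - the exact helpers Babel inserts vary by version and options):

const oldArr = [1, 2, 3];
const newItem = 4;

// const newArr = [...oldArr, newItem] becomes approximately:
const newArr = [].concat(oldArr, [newItem]); // still allocates a new array via concat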

Shi Ling

Doesn't that still create a copy of the first array - which makes it the same as the concat?

Nader Dabit

I don't know actually. If so, that's really insightful thank you, I'll investigate!!

Jishnu Viswanath

I modified the code

var arr1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
var arr2 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
var arr3 = []

var date = Date.now();
for (let i = 10000; i > 0; i--) {
  var arr = [];
  Array.prototype.push.apply(arr, arr1);
  Array.prototype.push.apply(arr, arr2);
  arr1 = arr;
}

Date.now() - date;

This gives the exact same performance as the concat function; basically, creating the new object is what costs the extra perf.
I mean, the concat function actually creates a new array and appends the existing array into it, which is a costly affair considering that it will constantly grow to 10, 20, 30, 40... 99990 entries and be appended again in the next iteration.

Ben Halpern

Great investigation and data!

J • Edited

Your benchmark for 50k-length arrays is busted. You are building up arr1 in each run of the test case, which means it is longer and longer on each pass. This is obviously going to favor push, which only has to copy at each power of two, whereas concat has to copy every time.

If you fix the benchmark, concat is faster: jsperf.com/javascript-array-concat...

Akshay.L.Aradhya • Edited

I just tested this myself and found that concat is much faster than push. Benchmarking was done with Node 15.7.0 using Benchmark.js.

Test Results - Merging 2 10k element arrays
--------------------------------------------------------------------------
Concat                    : 9780.56 ops/sec (+138.97 %)
Spread operator           : 6278.58 ops/sec ( +53.41 %)
Using Push                : 4273.39 ops/sec (  +4.41 %)
Push with spread operator : 4092.72 ops/sec (  +0.00 %)
--------------------------------------------------------------------------


Test Results - Merging 2 1000 element arrays
-------------------------------------------------------------------------------
Concat                    : 329820.18 ops/sec (+574.92 %)
Spread operator           : 112274.04 ops/sec (+129.75 %)
Push with spread operator : 98152.51 ops/sec (+100.85 %)
Using Push                : 48868.39 ops/sec (  +0.00 %)
-------------------------------------------------------------------------------

Test Results - Merging 2 100 elements array
-------------------------------------------------------------------------------
Concat                    : 2325515.17 ops/sec (+381.10 %)
Spread operator           : 649242.79 ops/sec ( +34.31 %)
Push with spread operator : 608852.43 ops/sec ( +25.96 %)
Using Push                : 483378.08 ops/sec (  +0.00 %)
-------------------------------------------------------------------------------
Dmytro Pylypenko

Good investigation and article!
Thanks :)

milahu

"what if we just .push elements individually?"

It depends!
In some cases this is even faster than array1.push(...array2).

Plus, it is not limited by the max call stack size (~100K in Chrome, 500K in Firefox).

Array.prototype._pushArray = function (other) {
  for (var i = 0; i < other.length; i++) this.push(other[i]);
  return this;
};
array1._pushArray(array2)._pushArray(array3)._pushArray(array4);

should be the fastest, simplest and most robust solution

Linas

array[] = $value; is faster than push. 😊 And

$array = array_merge(...$arrayOfArrays); is faster than merging per item inside a loop.

Simon

Just to take some of the mystery out of V8's behavior here: builtins (e.g. Array#push and Array#concat) are mostly written in either a C++ DSL called CodeStubAssembler or a newer DSL on top of it called Torque. One of the Array#push implementations can be found here. These get statically compiled into platform-specific code at build time, NOT at runtime.

Some builtins (like Array#push) have special handling in V8's JIT compiler, Turbofan. If a function or loop that contains such a call becomes hot enough, the call gets "inlined" directly into the JIT-compiled code. This happens at runtime, and the optimizing compiler can take advantage of information like type feedback.

Long story short: if you have a tight loop (often the case in microbenchmarks), Turbofan will throw everything plus the kitchen sink at it to optimize that code. The result is that a builtin without special handling (like Array#concat) might be slower in a microbenchmark (!) than hand-written code. The reason is simply that the builtin was statically compiled, while the hand-written version was heavily optimized for one specific call site.

Vincent Milum Jr • Edited

I'm really curious about your use case now. Do the arrays even need to be combined? Would it be possible to have an array of arrays instead, slightly modifying the search algorithm? This could mean no longer needing to modify or duplicate any content at all, making things even faster.

Shi Ling

That did occur to me when I was refactoring our code to fix this problem. It is possible in my use case, but it hurts my code's readability, and the performance improvement from refactoring .concat to .push was good enough for me to be satisfied.

Indospace.io

I freaking love this site!! This was one hell of a post. Guess what I am doing now? I am submitting a bunch of pull requests to npm packages that use .concat and replacing it with .push. Imagine all the CPU processing time that will be saved with the millions of updated npm packages.

J

Please don't. First of all, at least one of the benchmarks in this post is broken, so I don't know if the conclusions are valid. Second of all, often the performance differences between these two will not matter in practice. Third of all, the behaviors aren't equivalent, and mechanically verifying that you're not breaking anything will be hard.

kyle • Edited

Just thought I'd put this out here as I didn't see any mention of it yet.

Mozilla's JavaScript documentation actually makes mention of how to append the elements of one array to another.

Using apply to append an array to another

We can use push to append an element to an array. And, because push accepts a variable number of arguments, we can also push multiple elements at once.

But, if we pass an array to push, it will actually add that array as a single element, instead of adding the elements individually, so we end up with an array inside an array. What if that is not what we want? concat does have the behaviour we want in this case, but it does not actually append to the existing array but creates and returns a new array.

apply to the rescue!

(Source: developer.mozilla.org/en-US/docs/W...)

... and then they go on to say ...

Merging two arrays

"This example uses apply() to push all elements from a second array."

"Do not use this method if the second array (moreVegs in the example) is very large, because the maximum number of parameters that one function can take is limited in practice. See apply() for more details."

(source: developer.mozilla.org/en-US/docs/W...)
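
A sketch of the pattern MDN is describing (the variable names roughly follow the quoted MDN example):

const vegetables = ['parsnip', 'potato'];
const moreVegs = ['celery', 'beetroot'];

// append the second array onto the first, in place
Array.prototype.push.apply(vegetables, moreVegs);

console.log(vegetables); // ['parsnip', 'potato', 'celery', 'beetroot']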

Manan Tank • Edited

It is important to note that concat is way faster than push if you just have a single array and you want to push the items of another array into it.

jsbench.me/73kzwcxtmw/1

fire

Now I'm curious whether a push-based concat has roughly the same performance as any of the naive implementations.

rhymes

Shi, this article is terrific. You could have just said "here are the results, bye" but you went deep into the how and the why, thank you very much!

Shi Ling

Aww... thanks for the appreciation! Good to know that my annoying habit of asking endless whys until I'm satisfied is productive and helpful. 😁

Bruce Axtens

Wow. I use push almost exclusively. Now, if anyone asks, I can say why.

Sean Behan

0x0.st/zTak.png

I actually got the opposite results. This must only hold true for the V8 engine.

Shi Ling

Actually, those aren't the same tests, because in the concat test in your screenshot, arr1 is not assigned the result of arr1.concat(arr2), so it never increases in size; instead arr3 is assigned the result. It's not merging 10,000 arrays into a single array - it's just running concat on arrays of the same size 10,000 times.

Julio Gonzalez

Thanks a lot!!!

Sandilya Bhamidipati

Nice writeup. I am looking at this and thinking: since Redux requires us to use pure functions to update state, and hence a lot of concats, would that make it slow?

Dwarkesh Patel

this helped me speed up a project from >5min to <2 secs. Thanks!

Gio

What version of Node / V8 were these tests performed on?