
Luis Castillo

Time complexity (Big O) for JavaScript Array methods and examples

Hello everyone! A few weeks ago I started studying some computer science algorithms using JavaScript as the programming language, and after finishing an implementation I usually like to calculate its complexity with Big O notation. That is the reason I wanted to write this post: to understand the time complexity of the most used JS Array methods.

So, let's start with a quick definition of each method, its time complexity, and a small example.

Mutator methods

1. push() - O(1)
Add a new element to the end of the array.

const names = ['Luis','John','Jose'];
names.push('Aaron');
console.log(names); // (4) ["Luis", "John", "Jose", "Aaron"]

2. pop() - O(1)
Delete the last element of the array.

const names = ['Luis','John','Jose','Aaron'];
console.log(names.pop()); //Aaron
console.log(names); // (3) ["Luis", "John", "Jose"]

3. shift() - O(n)
Delete the first element of the array. Every remaining element has to be re-indexed, which is why this is O(n).

const names = ['Luis','John','Jose','Aaron'];
console.log(names.shift()); // Luis
console.log(names); // (3) ["John", "Jose", "Aaron"]

4. unshift() - O(n)
Add one or more elements to the beginning of the array. The existing elements have to be shifted, so this is also O(n).

const names = ['Luis','John','Jose'];
console.log(names.unshift("Aaron")); // 4
console.log(names); // (4) ["Aaron", "Luis", "John", "Jose"]

5. splice() - O(n)
Remove, add, or replace elements starting at a given index.

const names = ['Luis','John','Jose','Aaron'];
console.log(names.splice(0,0,"Fernando")); // [] - adds "Fernando" at the beginning
console.log(names.splice(0,1,"Michelle")); // ["Fernando"] - replaces "Fernando" with "Michelle"
console.log(names.splice(0,1)); // ["Michelle"] - removes "Michelle"

6. sort() - O(n log(n))
Sorts the array in place, ordered by a compare function; if no compare function is provided, the default order is by the Unicode code point values of the elements converted to strings.

const names = ['Luis','Jose','John','Aaron'];
console.log(names.sort()); // (4) ["Aaron", "John", "Jose", "Luis"]

/* complex sorting */
const users = [
    {name:'Luis', age:25},
    {name:'Jose', age:20},
    {name:'Aaron', age:40}
];

const compareFuc = (item1, item2) => {
  return item1.age - item2.age;
};

console.log(users.sort(compareFuc));
// [{name: "Jose", age: 20}, {name: "Luis", age: 25}, {name: "Aaron", age: 40}]

Accessor methods

1. concat() - O(n)
Create a new array that joins the elements of two or more arrays.

const names1 = ["Luis","Jose"];
const names2 = ["John","Aaron"];
const newArray = names1.concat(names2,["Michelle"]);
console.log(newArray); // (5) ["Luis", "Jose", "John", "Aaron", "Michelle"]

2. slice() - O(n)
Return a copy of a sub-array between two indexes, start and end.
Important note: slice makes a shallow copy, so if the array contains objects, modifying one of those objects is visible through both the original array and the copy.

const users = [
  {name:'Luis', age:15},
  {name:'Jose', age:18},
  {name:'Aaron', age:40}
];

const adults = users.slice(1, users.length);
console.log(adults); // (2) [{name: "Jose", age: 18}, {name: "Aaron", age: 40}]
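
To make the shallow-copy note concrete, here is a minimal sketch (the data is made up for illustration): mutating an object that both arrays reference is visible through either array, while replacing an element only affects the array you changed.

const original = [{name:'Luis'}, {name:'Jose'}];
const copy = original.slice(); // new array, but the same object references

copy[0].name = 'Luisa'; // mutate the shared object
console.log(original[0].name); // "Luisa" - visible through both arrays

copy[1] = {name:'Aaron'}; // replacing an element only affects the copy
console.log(original[1].name); // "Jose"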

3. indexOf() - O(n)
Return the first index at which the element is found in the array, or -1 if it is not found.

const names = ['Luis','Jose','John','Aaron'];
console.log(names.indexOf("John")); // 2
console.log(names.indexOf("Michelle")); // -1

Iteration methods

1. forEach() - O(n)
Execute a function once for each element in the array.

const names = ['Luis','Jose','John','Aaron'];

names.forEach(item => {
  console.log(item);
});
/* Prints all the names:
  Luis Jose John Aaron
*/

2. map() - O(n)
Create a new array with the results of the callback function (the callback is executed for each item, same as forEach).

const users = [
    {name:'Luis', age:15},
    {name:'Jose', age:18},
    {name:'Aaron', age:40}
];

const userDescriptions = => {
   return `Hello, my name is ${} and I am ${item.age} years old.`;
});

console.log(userDescriptions);
/* ["Hello, my name is Luis and I am 15 years old.",
 "Hello, my name is Jose and I am 18 years old.",
 "Hello, my name is Aaron and I am 40 years old."] */

3. filter() - O(n)
Create a new array containing the elements for which the given filter condition returns true.

const users = [
  {name:'Luis', admin:true},
  {name:'Jose', admin:true},
  {name:'Aaron', admin:false}
];

const adminUsers = users.filter(item => item.admin);
console.log(adminUsers); // [{name: "Luis", admin: true}, {name: "Jose", admin: true}]

4. reduce() - O(n)
Return a single value after applying the reducer function to each element.

const users = [
  {name:'Luis', age:15},
  {name:'Jose', age:18},
  {name:'Aaron', age:40}
];

const reducer = (accumulator, item) => accumulator + item.age;
const totalAge =  users.reduce(reducer,0);
const ageAverage = totalAge / users.length;
console.log(`Total ${totalAge}, Average ${ageAverage}`); // Total 73, Average 24.333333333333332


1. some() - O(n)
Return true if at least one item satisfies the given condition, and false if not (also false if the array is empty).

const users = [
  {name:'Luis', admin:true},
  {name:'Jose', admin:false}
];

const adminExists = users.some(item => item.admin);
console.log(adminExists); // true

2. every() - O(n)
Return true if all the items satisfy the given condition, and false if not.

const users = [
  {name:'Luis', active:true},
  {name:'Jose', active:true},
  {name:'Aaron', active:false}
];

const isAllUsersActive = users.every(item =>;
console.log(isAllUsersActive); // false


I think it is very important to understand the time complexity of the common Array methods we use when building our algorithms, so that we can calculate the time complexity of the whole structure.
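
As a small illustrative sketch (not one of the examples above), calling an O(n) method such as indexOf inside another O(n) iteration multiplies the costs and gives O(n^2):

const names = ['Luis','Jose','John','Aaron'];

// forEach is O(n) and indexOf is O(n), so the whole loop is O(n^2)
names.forEach(name => {
  console.log(`${name} is at index ${names.indexOf(name)}`);
});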

I hope this information was helpful for you. If you have any questions, please leave them in the comment section. All comments are welcome. 😉

Top comments (17)

Mike Talbot ⭐

Like it - just a point of clarification - a sliced array is a shallow copy and changing the original array won't modify it as you seem to suggest:

    const a = [1,2,3,4]
    const b = a.slice(-2)
    a[3] = 5
    console.log(a) // -> [1,2,3,5]
    console.log(b) // -> [3,4]

If it's an array of objects, clearly it's a shallow copy so changing an object will change the one referenced by both arrays.

Luis Castillo • Edited

You're right! Thank you for sharing this clarification. 👍

Stepan Zubashev

Thx for the article. I'm sure it's very important for the frontend community.

People these days often use .reduce with the spread operator (...), so it becomes O(n^2). There are lots of articles (even on that showcase such an approach. My experience interviewing tells me that people don't understand that there's a problem.

Also, I think the Big O of .splice depends on the arguments. I don't think, for example, that a 1-to-1 replacement would cause O(n), because that's a pretty simple optimization.
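
For reference, this is the kind of 1-to-1 replacement being discussed (a minimal sketch; whether an engine avoids shifting the other elements here is implementation-dependent):

const names = ['Luis','John','Jose','Aaron'];
// Replace exactly one element with exactly one element - no other elements need to move
names.splice(1, 1, 'Michelle');
console.log(names); // ["Luis", "Michelle", "Jose", "Aaron"]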

_mikeusa

people don't understand that there's a problem

I would update this to "can be a problem". Often time/space complexity only matters at scale, and in many cases it will never be a problem; for instance, when populating a select dropdown. In other cases, for instance big data, time/space complexity may always be a consideration, especially for an iterative process like graphing or machine learning.

I think that the Big O of .splice depends on the arguments

Absolutely. That's often the case with Big-O. Big-O is the worst-case scenario, but the worst case may rarely be reached. Big-O means f(n) is bounded above by a constant multiple of g(n). So in the worst case, splice may touch every element, but on average it won't, and in the best case it would touch zero or one element (if 0 isn't practical) at the end of the stack.

For the best case, big-omega is used (e.g., Ω(n)), and for the average case, big-theta is used (e.g., Θ(n)). Notice the internal ligature of the big-theta vs. that of a big O, as it may be easy to mistake one for the other.
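
For readers who want the formal statements behind these notations, the standard textbook definitions (not part of the comment itself) are:

f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0 \quad \text{(upper bound)}
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all}\ n \ge n_0 \quad \text{(lower bound)}
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n)) \quad \text{(tight bound)}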

Victor Cheeney

People these days often use .reduce with the spread operator (...), so it becomes O(n^2). There are lots of articles (even on that showcase such an approach. My experience interviewing tells me that people don't understand that there's a problem.

I'm not sure I understand what you mean, do you have an example that would illustrate this kind of situation? Thanks!

_mikeusa

I think he means something like:

const users = [
  {name: 'foo',  admin: false, dateRegistered: '2022-02-01', },
  {name: 'bar',  admin: false, dateRegistered: '2010-05-30', },
  {name: 'baz',  admin: true,  dateRegistered: '2009-12-25', },
  {name: 'spaz', admin: true,  dateRegistered: '2020-06-30', },
];

const admins = users.reduce((agg, user) => (
  !user.admin ? agg : {
    []: user.dateRegistered,
  }
), {});

Where ...agg is potentially being spread for each iteration over users. Instead of an internal traversal, use any kind of loop for O(n) time instead of O(n^2).

const users = [
  {name: 'foo',  admin: false, registered: '2022-02-01', },
  {name: 'bar',  admin: false, registered: '2010-05-30', },
  {name: 'baz',  admin: true,  registered: '2009-12-25', },
  {name: 'spaz', admin: true,  registered: '2020-06-30', },
];

const admins = {};
users.forEach(user => {
  if (user.admin) { admins[] = user.registered }
});

reduce could be used, but was abandoned because it's absolutely unnecessary. for...of (or any loop structure) could be used instead of .forEach; the latter was used as a matter of preference.

Note: it's actually something closer to O(n(n+1)/2) + O(n), which, once coefficients and lower-order terms are removed, simplifies to O(n^2).
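
A quick worked version of that arithmetic: spreading an accumulator that already holds i elements on iteration i, over n iterations, copies

\sum_{i=1}^{n} i = \frac{n(n+1)}{2} = \frac{n^2 + n}{2} \in O(n^2)

elements in total, which is why the shortcut looks linear but behaves quadratically.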

Magne • Edited

What reference did you lean on to find their time complexity?
If their complexities are assumed or inferred, I'm not sure the assumptions hold, since each engine implements JS arrays differently. See:

PS: I think it would be wise to mention that .some() is only O(n) in the worst case; normally it is O(k), where k is the index at which the match is found, which on average will be the middle (median) index, so k = n/2. While still a factor of n, it is different from other O(n) functions, which necessarily have to go through the entire array and thus amount to the full n.
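
A small sketch of the early exit being described (the counter is only there for illustration):

const names = ['Luis','Jose','John','Aaron'];

let checked = 0;
const hasJose = names.some(name => {
  checked++;
  return name === 'Jose';
});

console.log(hasJose, checked); // true 2 - some() stops as soon as the callback returns true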

Roland Kalmogo

Big O is already the worst case. Otherwise, we have Omega and Theta.

shksa

Big O is not the worst case. Big O tells you the upper-bound growth rate function. That's it. The worst case tells you a specific input that would make the algorithm run the longest. Big O can be used to describe the upper bound of the worst case, the average case, or the best case. The two are orthogonal.

_mikeusa

This may be accurate, but for all practical purposes of conceptual understanding, upper bound ≈ worst case.

It's simpler to describe Big-O vs. Big-Theta vs. Big-Omega and to stay away from little-o. Again, this is in simplistic terms, and anyone other than a data scientist or seasoned engineer should probably stick to the simplified understanding as a colloquial term.

This is targeted at entry-level devs, and I think it does the topic a service in how it describes it:

Saloni Yadav

Crisp and to-the-point post. Well done!

Mirzayev Orzu • Edited

what about .join() method?

thamaraiselvam

It should be O(n).
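
A quick sketch of why: join has to visit every element to build the resulting string, so the work grows linearly with the array length.

const names = ['Luis','Jose','John','Aaron'];
// Every element is visited once while building the string -> O(n)
console.log(names.join(', ')); // "Luis, Jose, John, Aaron"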

Irina Kats

Very useful. I had about the same idea of the complexities, but it's good that you took the time to summarize this data.

Aghiles Lounis

@luiscastillokc for the method "indexOf" I think it's O(log(n)) time, because it uses something like Binary search

Konstantin Kulikov

Binary search can only be executed on a sorted collection

Victor Cheeney

Very nice article. Straight and to the point. Thanks for sharing!