Prosen Ghosh

Posted on

Why is using JavaScript “for loop” for array iteration a bad idea?

Sometimes using a for loop for array iteration can be a bad idea. Let's see how.

Let's create an array with 10 empty slots using the Array constructor.

const array = Array(10);
console.log(array); //[empty × 10]

Now let's iterate over the array using a for loop. This loop will log "Hi" to the console 10 times, which is not what we want.

Our for loop iterates over the empty slots, and that is bad for our software's performance.

const array = Array(10);
const len = array.length;
console.log(array); //[empty × 10]

for(let i = 0; i < len; i++){
   console.log("Hi");
}

Let's see another example.

const array = [1, 2, 3];
array[4] = 10;

const len = array.length;
for(let i = 0; i < len; i ++){
    console.log("Hi");
}

In the above snippet index 3 of the array is an empty slot, but the console.log will execute 5 times.
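
If you want to see the hole for yourself, the in operator reports whether an index was ever assigned (this snippet is just an added illustration, not part of the original example):

const array = [1, 2, 3];
array[4] = 10;

console.log(array.length); // 5
console.log(3 in array);   // false - index 3 is an empty slot (a hole)
console.log(4 in array);   // true  - index 4 holds the value 10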

Now let's try one of the array methods.

The forEach() method invokes the passed callback function for each element.

But in this case all of our array's slots are empty, so the forEach() method is not going to invoke the log function for them.

So, the code below will print nothing to the console.

const array = Array(10);
const log = () => console.log("Hi");

array.forEach(log); // NOT GOING TO INVOKE THE LOG FUNCTION

If we rewrite our second for loop example using the forEach() method, it will execute only 4 times. The passed callback function is not invoked for the empty slot.

const array = [1, 2, 3];
array[4] = 10;

array.forEach(() => {
    console.log("Hi");
})

You can also try the .map() method instead of a for loop.
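
For example (a small added sketch), map() skips the empty slot in the same way, although it keeps the hole in the array it returns:

const array = [1, 2, 3];
array[4] = 10;

// The callback runs for indexes 0, 1, 2 and 4 - never for the hole at index 3.
const doubled = array.map((value) => value * 2);
console.log(doubled); // [2, 4, 6, empty, 20]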

Top comments (38)

Jon Randy 🎖️

I'm actually quite surprised that forEach skips the empty slot. That is odd behaviour if you ask me, and certainly not what I would expect. Also, as far as I am aware, using the traditional for loop is also considerably faster than forEach - so I'm not sure how it would be bad for performance?

Prosen Ghosh • Edited

Surely Jon, the for loop is one of the fastest looping mechanisms. In the example we are using forEach() for its exceptional behavior. Under the hood, forEach() does something like the code below.

// assuming array, callback and len (the actual array length) are in scope, and k starts from 0
let k = 0;
while (k < len) {
  if (k in array) {               // skip the empty slots (holes)
    callback(array[k], k, array); // do the other stuff
  }
  k += 1;
}

Using forEach() can skip some unnecessary heavy calculation. If we accidentally set array[400] = "SOME VALUE" and the other slots are empty, then a for loop unnecessarily executes our heavy calculation function 400 extra times, and that will affect performance.
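
A rough sketch of that scenario (heavyCalculation is a made-up placeholder, not code from the post):

const array = [];
array[400] = "SOME VALUE"; // every other slot stays empty

const heavyCalculation = (value) => { /* imagine expensive work here */ };

// The for loop visits all 401 indexes, holes included.
for (let i = 0; i < array.length; i++) {
  heavyCalculation(array[i]);
}

// forEach() invokes the callback only once, for index 400.
array.forEach(heavyCalculation);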

Jon Randy 🎖️

If you're accidentally setting array[400] then that is your problem, and it needs fixing. The heavy calculation is merely a side effect of your bug

Prosen Ghosh • Edited

Yes, that is the point of my post: how we can write good quality code, and where we can make mistakes. If we make a mistake in our code it can affect performance and create unnecessary bugs.

JasonCubic

It's still a micro optimization. The for loop can loop through 400 empty places in an array in what, 5ms? It's extremely fast.

Jon Randy 🎖️

Precisely - and by addressing the symptom rather than the cause (i.e. by switching from a for loop to forEach) you would actually be making the performance of the optimal (bug-fixed) system worse than before.

Adam Crockett 🌀 • Edited

Our for loop iterates over the empty slots, and that is bad for our software's performance.

This is a code smell; it's not that for loops are bad, they are doing exactly what you would expect.

It's better to sanitise your inputs, for example:

const arr = new Array(10); // even making an array like this is broken in JavaScript

arr.length;      // 10
arr.push('hi');
const cleanArr = arr.filter(Boolean); // drop the empty slots (and any other falsey values)
cleanArr.length; // 1

// Then a loop is fine either way

Or better, don't make 10 empty slots ☺️
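
For example, if you really need 10 initialised elements, you can build a dense (hole-free) array up front; a small sketch:

const zeros = new Array(10).fill(0);                     // [0, 0, ..., 0] - no holes
const indexes = Array.from({ length: 10 }, (_, i) => i); // [0, 1, ..., 9]

zeros.forEach(() => console.log('hi'));                  // logs 'hi' 10 times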

Prosen Ghosh • Edited

You are right Adam,

const array = new Array(10); // JUST FOR EXAMPLE, WE DON'T DO IT IN REAL LIFE

No one is going to create an array with empty slots on purpose; it can happen accidentally/unintentionally, and it can be bad for our application's performance. That is the whole point of this post.

I have updated my post with a more relevant code example. Please take a look.

Thanks for your feedback.

Adam Crockett 🌀

Graciously I admit I did learn more foibles about the forEach method in this post; not iterating over empties is not what I expected at all. Thanks for bringing it up!

peerreynders • Edited

It doesn't matter what function you pass to filter() - it skips holes.

const list = [, , { level: { deep: 3 } }, ,];
const predicate = (o) => o.level.deep > 0;

console.log(list.filter(predicate));
// [{ level: { deep: 3}}]

ECMAScript 6: holes in Arrays.

Adam Crockett 🌀

I imagine you misunderstood my code

peerreynders • Edited

I imagine that you were trying to use

console.log(Boolean(undefined)); // false

to sanitize the array. However, observe:

// sparse array - i.e. has holes
const a = [, , 2, , ,];
// dense array - no holes
const b = [undefined, undefined, 2, undefined, undefined];

console.log(`a: ${a.length}`); // "a: 5" i.e. last comma is treated as a trailing comma
console.log(`b: ${b.length}`); // "b: 5"

function predicate(value, index) {
  console.log(index);
  return Boolean(value);
}

const af = a.filter(predicate); // 2
const bf = b.filter(predicate); // 0, 1, 2, 3, 4

console.log(af); // [2]
console.log(bf); // [2]

i.e. holes are skipped, actual undefined values are not.

So the function passed to filter() never gets to make a decision about the holes.

sparse arrays vs. dense arrays

Adam Crockett 🌀 • Edited

The intent is to remove all falsey values, which I admit is a bit of a hammer. Yes, I can see your point that filter ignores empties, but it still cleans an array, which is my point.

To anyone not aware, this is a well-known trick which removes ALL falsey values.

If you want to handle falsey values such as 0, -n, false, null or undefined, write a better predicate function; my demo is just that, a quick demo of the principle: prepare your inputs.

Comment deleted

Prosen Ghosh

I use a loop or an array method depending on the purpose I'm using it for.

In the code example below, a for loop or forEach() would not serve my purpose in such an easy to read/understand way.

function ListItems({ items }) {
  const listItems = items.map((item) =>
    <li>{item}</li>
  );
  return (
    <ul>{listItems}</ul>
  );
}
lepinekong

for...of works with async/await; forEach doesn't.
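
A small sketch of the difference (fetchItem here is a made-up placeholder for any promise-returning call):

const ids = [1, 2, 3];
const fetchItem = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(id * 10), 100));

async function run() {
  // for...of: each await pauses the loop, so results arrive in order.
  for (const id of ids) {
    console.log(await fetchItem(id)); // 10, 20, 30 - one after another
  }

  // forEach: nothing awaits the async callbacks,
  // so run() moves on before they finish.
  ids.forEach(async (id) => {
    console.log(await fetchItem(id));
  });
  console.log('done'); // logs before the forEach results arrive
}

run();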

Muhammad Hasnain

Higher order functions make code cleaner and easier to understand. forEach may be simple, but filter, map and reduce make tedious operations very easy to implement.
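
For example (a small illustrative sketch), summing the totals of the shipped orders reads almost like the sentence itself:

const orders = [
  { total: 25, shipped: true },
  { total: 0, shipped: false },
  { total: 40, shipped: true },
];

const shippedTotal = orders
  .filter((order) => order.shipped)
  .map((order) => order.total)
  .reduce((sum, total) => sum + total, 0);

console.log(shippedTotal); // 65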

Prosen Ghosh

Yes, the forEach() method is also a higher order function.

But as I mentioned before, I use those methods depending on the context where they fit best.

Ian Wijma

Tip!

You can use for...of to loop over an array and for...in to loop over keys of an object.
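
A quick sketch of both:

const fruits = ['apple', 'banana', 'cherry'];
const prices = { apple: 1, banana: 2 };

for (const fruit of fruits) {
  console.log(fruit);            // 'apple', 'banana', 'cherry' - the values
}

for (const key in prices) {
  console.log(key, prices[key]); // 'apple' 1, 'banana' 2 - the keys and their values
}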

peerreynders • Edited

The idiomatic way to detect holes in arrays is with the in operator:

const a = [1, 2, 3, ,5];

for(let i = 0; i < a.length; i += 1)
  console.log(i in a ? a[i] : 'hole'); // 1, 2, 3, 'hole', 5

If you need to be paranoid about third party libraries modifying prototype objects:

// https://eslint.org/docs/rules/no-prototype-builtins
const hasOwnProperty = Object.prototype.hasOwnProperty;
const a = [1, 2, 3, , 5];

for (let i = 0; i < a.length; i += 1)
  console.log(hasOwnProperty.call(a, i) ? a[i] : 'hole'); // 1, 2, 3, 'hole', 5

For a more detailed discussion of holes see ECMAScript 6: holes in Arrays.

You can also try the .map() method instead of a for loop.

While map() doesn't call the iteratee on holes it does preserve the holes in the array it returns. The point being - the use case for map() (producing a new array of identical length containing transformed values) is much narrower than that of a for…loop. The behaviour of map() is consistent with treating holes as None while values are treated as Some(value). This is problematic when one needs to replace the holes with some default value - in that case it is necessary to use Array.from():

const a = [1, 2, 3, , 5];
const transform = (x) => (typeof x !== 'undefined' ? x * 2 : 0);

console.log(a.map(transform)); // [2, 4, 6, empty, 10]
console.log(Array.from(a, transform)); // [2, 4, 6, 0, 10]

Ultimately the entire for…loop considered harmful attitude is misguided. Iteration is often required without arrays being involved. So it is valuable to be able to write a readable for…loop.

Higher order functions (HOFs) like reduce(), map() and filter() have their uses but forEach() is the oddball as it needs a function with side effects to be useful.

Iteration on iterables doesn't have direct access to the Array HOFs, so either they need to be converted to intermediate arrays (e.g. with Array.from()) or be processed with for…of.

forEach() exists on Maps, Sets, DOMTokenList, NodeList, TypedArray, etc.

But I fail to see the advantage of

let node = document.createElement('div');
let kid1 = document.createElement('p');
let kid2 = document.createTextNode('hey');
let kid3 = document.createElement('span');

node.appendChild(kid1);
node.appendChild(kid2);
node.appendChild(kid3);

let list = node.childNodes;

list.forEach(function (value, index, _list) {
  console.log(`${value}, ${index}, ${this}`);
}, 'myThisArg');

over

let list = node.childNodes;
const context = 'myThisArg';

for (let i = 0; i < list.length; i += 1)
  console.log(`${list[i]}, ${i}, ${context}`);

or perhaps

let list = node.childNodes;
const context = 'myThisArg';

for (const item of list) console.log(`${item}, ${context}`);

Perhaps the real solution is to clean up the loop bodies of for…loop so that they are easier to read.

In functional languages recursion is used as the fundamental means of any iteration:

function fibRec(n1, n, i) {
  return i > 1 ? fibRec(n, n1 + n, i - 1) : n;
}

function fib(lastIndex) {
  return lastIndex > 0 ? fibRec(0, 1, lastIndex) : 0;
}

const a = Array.from({ length: 21 }, (_v, i) => [i, fib(i)]);
console.log(a);

/* [
  [0, 0], [1, 1], [2, 1], [3, 2], [4, 3], 
  [5, 5], [6, 8], [7, 13], [8, 21], [9, 34], 
  [10, 55], [11, 89], [12, 144], [13, 233], [14, 377], 
  [15, 610], [16, 987], [17, 1597], [18, 2584], [19, 4181], 
  [20, 6765]
  ] */

This works in JavaScript but JavaScript isn't a functional language - and most of today's JavaScript engines don't support tail call elimination, so recursion is usually avoided to avoid blowing the call stack (and sometimes to avoid the overhead of the function calls).
The fundamental means of iteration in JavaScript is the for…loop which means that mutation of the looping state is essential but the rest of the values can largely be immutable:

function fib(lastIndex) {
  let n1 = 0;
  if (lastIndex < 1) return n1;

  let n = 1;
  for (let i = lastIndex; i > 1; i -= 1) {
    const next = n1 + n; // [1]
    [n1, n] = [n, next]; // [2]
  }
  return n;
}

const a = Array.from({ length: 21 }, (_v, i) => [i, fib(i)]);
console.log(a);

i.e. for a cleaner loop body:

  • [1] the majority of the loop body sticks to immutable values ...
  • [2] ... to only later mutate the loop state values in preparation for the next cycle

The idea is to organize the top of the loop body so that it could be easily refactored to:

function next(n1, n) {
  return [n, n1 + n];
}

function fib(lastIndex) {
  let n1 = 0;
  if (lastIndex < 1) return n1;

  let n = 1;
  for (let i = lastIndex; i > 1; i -= 1) [n1, n] = next(n1, n);

  return n;
}

const a = Array.from({ length: 21 }, (_v, i) => [i, fib(i)]);
console.log(a);

but to then leave the loop body "as is" because it already is sufficiently cleaned up.

peerreynders • Edited
const length = 1000000;

let start = performance.now();
const a = new Array(length);
let finish = performance.now();
console.log(`Sparse array was created in ${finish - start}ms`); // < 5ms

start = performance.now();
const b = Array.from({ length });
finish = performance.now();
console.log(`Dense array was created in ${finish - start}ms`); // ~60ms

start = performance.now();
const c = [];
c[length - 1] = 1;
finish = performance.now();
console.log(`Element added in ${finish - start}ms`); // < 1ms
console.log(c[length - 1]); // 1
console.log(c.length); // 1000000

Basically, think of arrays as key-value stores that use non-negative integers as keys.
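
A tiny sketch of that mental model:

const a = [];
a[2] = 'x';                  // only the key "2" is created

console.log(Object.keys(a)); // ['2']
console.log(a.length);       // 3 - highest index + 1, not a count of stored values
console.log(0 in a);         // false - keys 0 and 1 never existed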


The problem is that JavaScript engines optimize arrays for performance

v8.dev: Elements kinds in V8:

  • Small Integers: PACKED_SMI_ELEMENTS -> HOLEY_SMI_ELEMENTS
  • Doubles: PACKED_DOUBLE_ELEMENTS -> HOLEY_DOUBLE_ELEMENTS
  • Other: PACKED_ELEMENTS -> HOLEY_ELEMENTS
  • PACKED (dense) arrays can transition to HOLEY (sparse) arrays - but not the other way around
  • PACKED processing is more optimized, though:

the performance difference between accessing holey or packed arrays is usually too small to matter or even be measurable.

That said (as already suggested elsewhere), if holes don't have any particular meaning within the processing context, it's better to sanitize HOLEY arrays into PACKED arrays rather than relying on the "skipping behaviour" of select methods.
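
One way to do that sanitizing step (a sketch; pick whichever default value makes sense for your data):

const holey = [1, , 3, , 5];

// Replace holes (and undefined values) with a default - produces a packed array.
const packed = Array.from(holey, (v) => (v === undefined ? 0 : v));
console.log(packed); // [1, 0, 3, 0, 5]

// Or drop the holes entirely if the positions don't matter.
const dense = holey.filter(() => true);
console.log(dense); // [1, 3, 5]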

Jay Jeckel

In the above snippet index 4 of the array is an empty slot, but the console.log will execute 5 times.

This seems to be a typo. In the example code, you initialize indexes 0, 1, and 2, then you explicitly set index 4 equal to 10, so it is index 3 that would be empty.

Prosen Ghosh

Thanks Jay, fixed the typo.

Maxi Contieri

Great!

I think for 95% of projects readability is more important than performance.
Happily, in this case they agree.
But if you need to choose between the two, you should almost always choose readability.

Muhammad Hasnain

Can you tell us why it is bad for software performance and how much difference does it make? Do you know how the Array.forEach method skips these empty slots?

Prosen Ghosh

Hi Hasnain, under the hood forEach() does something like the code below.

// assuming array, callback and len (the actual array length) are in scope, and k starts from 0
let k = 0;
while (k < len) {
  if (k in array) {               // skip the empty slots (holes)
    callback(array[k], k, array); // do the other stuff
  }
  k += 1;
}

Using forEach() can skip some unnecessary heavy calculation. If we accidentally set array[400] = "SOME VALUE" and the other slots are empty, then a for loop unnecessarily executes our heavy calculation function 400 extra times, and that will affect performance.

Muhammad Hasnain • Edited

That is why we almost never assign to indexes manually and use the push method or array spreading instead. It is also why it's recommended to never use these constructor methods, unless of course it is an absolute requirement, which is quite rare.

This doesn't improve performance either. Both use loops, and there is no evidence that forEach is faster than the for loop.
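
A quick, unscientific way to check that on your own machine (numbers vary a lot between engines, array sizes and runs):

const data = Array.from({ length: 1_000_000 }, (_, i) => i);
let sum = 0;

let start = performance.now();
for (let i = 0; i < data.length; i++) sum += data[i];
console.log(`for loop: ${(performance.now() - start).toFixed(2)}ms`);

sum = 0;
start = performance.now();
data.forEach((n) => { sum += n; });
console.log(`forEach:  ${(performance.now() - start).toFixed(2)}ms`);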

Prosen Ghosh • Edited

No one is saying that forEach() is faster than a for loop. For the same piece of code, a for loop will always be faster than the forEach() method, unless we make a mistake in our code.

The for loop snippets above are written in such a way that we can understand what can happen if we make this kind of mistake in our code.

Muhammad Hasnain

Hmm. Thanks for writing the post.

Prosen Ghosh

Thanks for giving me the feedback.

Larson • Edited

For those who are not sure what forEach/map does: under the hood it is roughly a while loop.
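
Something like this simplified sketch (forEachLike is just an illustrative name; the real spec steps do more):

function forEachLike(array, callback) {
  const len = array.length;
  let k = 0;
  while (k < len) {
    if (k in array) {                // holes are skipped
      callback(array[k], k, array);
    }
    k += 1;
  }
}

forEachLike([1, , 3], (v, i) => console.log(i, v)); // logs "0 1" and "2 3"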

In most cases it's absolutely OK to use a for loop. Writing in the right scope is "easier" with for loops. But it's important to know this behavior.

And the benefit of for loops is not much, so writing with the tools that already exist (map, forEach, every, ...) is much better.

Chad Adams

What happens if you want to break out of the loop? You can’t with forEach. Also I would imagine forEach uses for loop under the hood.

peerreynders

One workaround is to abuse some() instead:

const a = [, 1, , 3, , 5, , 7, , 9, ,];

let result = 0;
const gatherUntil = (value, index) => {
  if (value > 6) return true;

  console.log(index);
  result += value;
  return false;
};

a.some(gatherUntil); // 1, 3, 5
console.log(`Result: ${result}`); // "Result: 9"

Personally I would find a for…loop with break clearer

Larson • Edited

filter and reduce would do that and would be clearer.

const a = [, 1, , 3, , 5, , 7, , 9, ,];

const result = a.filter(n => n < 6).reduce((acc , current) => acc + current)
console.log(result) // 9

Also Array.some gives you a boolean.

peerreynders • Edited

filter and reduce would do that and would be clearer.

I took the original comment to have the objectives of:

  • not traversing the entire array (which forEach and filter will do)
  • not creating an intermediate array (which reduce relies on)

There is nothing wrong with the filter/reduce approach, especially if the array is reasonably short (whatever that means in the context of the executing hardware) - but avoiding the intermediate array does require a bit more code:

const a = [, 1, , 3, , 5, , 7, , 9, ,];

const result = gatherUntil((v) => v > 6, a); // 1, 3, 5
console.log(`Result: ${result}`); // "Result: 9"

function gatherUntil(p, xs) {
  let result = 0;
  for (let i = 0; i < xs.length; i += 1) {
    if (!xs.hasOwnProperty(i)) continue;

    const value = xs[i];
    if (p(value)) break;

    console.log(i);
    result += value;
  }
  return result;
}

But the focus on array HOFs seems to obscure to some people that functions can be composed in a variety of ways.

I also find that in many cases array HOFs make code less readable because the authors insist on using inline, multi-line anonymous functions (forcing a full mental parse of the code) instead of spending a few more lines of code to separate them out and assign them a "Descriptive And Meaningful Name" (related to DAMP) that can be used with the HOF.

Larson • Edited

In my example you could easily write functions for filter and reduce in a declarative way, which is understandable.

const a = [, 1, , 3, , 5, , 7, , 9, ,];

const isSmallerSix = (n) => n < 6;
const sum = (n, m) => n + m;

const result = a.filter(isSmallerSix).reduce(sum);

console.log(result); // 9

Readability:

  • filter and reduce are much more readable than the gatherUntil function.

Performance (because they all talk about performance in JS)

When was the last time you had a performance issue and thought: 'maybe it's because of THIS type of loop' in JS?

Normally when you write JS you have performance issues somewhere else rather than in iterating through arrays.
Yes, for loops are faster. But you write more code and more code means more bugs. As you can see my little example has just a few lines. There is not much space to have a bug here.

Something being faster doesn't make it better for programming; otherwise you wouldn't write JS in the first place.

I prefer a more "functional" style of doing things. And if recursion weren't so slow (and didn't fill the stack) in JS, I would do that.

And I'm not quite sure whether for loops are slower than reversed while loops.

peerreynders

filter and reduce are much more readable than the gatherUntil function.

Because in this particular case gatherUntil is just as utilitarian as filter and reduce - which still focuses more on the "how" rather than the "why" - declarative or not. Ideally the functions' "descriptive and meaningful names" should be more DSL-like, focusing more on the "why" than the "how".

When was the last time you had a performance issue and thought: 'maybe it's because of THIS type of loop' in JS?

Whenever I start to play around with performance comparisons (example based on this) the good 'ol for…loop seems to perform consistently well across multiple JavaScript engines - while the "other" ways tend to be all over the place.

But you write more code and more code means more bugs.

No Code, no bugs. Other than that studies typically focus on total application length when they state "larger code bases contain more bugs" - but complex applications would be expected to be a.) larger and b.) be more prone to bugs. That doesn't necessarily imply down at the lower level that 1 line is always better than 5 unless this is systemic across the entire code base leading to a significantly larger application. Code Golf emphasizes brevity but tends to lead to code that isn't easy to understand. If there is something to "Code inspection is up to 20 times more efficient than testing" then making code understandable to the broadest possible audience of reviewers is important (within reason - because … trade-offs). Even in A Large-Scale Study of Programming Languages and Code Quality in Github:

However, we caution the reader that even these modest effects might quite possibly be due to other, intangible process factors, for example, the preference of certain personality types for functional, static languages that disallow type confusion.

i.e. as always … it's complicated.

Another argument against "more lines" is that more lines take more time to parse. But not all lines of code are created equal - what matters is the actual workload on the interpreter/compiler during runtime after the parse.

There is not much space to have a bug here.

You did notice that people here expressed surprise about forEach, filter, and reduce skipping holes? Given that processing tends to need to skip undefined values there often isn't a problem - until undefined values and/or the difference between an undefined value and a hole becomes significant. So even on a single line there is space for bugs if code authors/reviewers have an incorrect conceptual model.

Something being faster doesn't make it better for programming; otherwise you wouldn't write JS in the first place.

Back in 2009 the video game industry started to abandon OO programming for Data-Oriented Design because OOP imposed too much overhead on commodity hardware. Back then this was largely applied to games written in C/C++ but the trend has led Unity to adopt DOTS (Data-Oriented Tech Stack) for .NET/C#. Data-Oriented Design aims to optimize for the platform rather than developer experience (ECS - again it's a trade-off).

Similarly I'm imagining JS as running on the shittiest phone out there (on the back end you have other options right?) - the whole Responsible JavaScript thing.

The web is still the most hostile development platform in the world, even more so for hyper-constrained devices (Techniques to make a web app load fast, Setting up a static render in 30 minutes). That level of effort is a far cry from "press of a button, comfy chair development".

I prefer a more "functional" style of doing things.

Sure, but at its core JavaScript is imperative and not as effective at appropriating functional practices as Rust. Most fundamentally it doesn't have "immutability by default" through core-supported, optimized persistent data structures - so mutability is par for the course and it falls to the author to use it responsibly and effectively. I view JavaScript as function-oriented.

And if recursion weren't so slow in JS, I would do that.

"Slowness" of recursion in JS would be due the "overhead" of the repeated function invocations. You don't seem to be too concerned about the repeated invocations of the functions that are passed to reduce, map, filter etc. (something that could be addressed by putting the logic into a loop body; on the other hand inlining can be tricky). Bounded recursion is doable but simply not optimized for as the for…loop is the most basic (and performant) mode of iteration in JS.