Discussion on: Use Closures for Memory Optimizations in JavaScript (a case study)

Ahmed Murtaza

@miketalbot, I agree with you that it saves processing time, but without closures, every time we run multiply(), line 2, let x = Math.pow(10,10);, is recreated in memory, whereas with closures, line 2 is not recreated in memory, and it also saves processing time for large jobs (as you have said above).

Mike Talbot ⭐

My point is that a local primitive variable in a function is allocated on the stack and not in the main heap of memory (it's different if it is an object). This memory isn't "being used"; it's just an offset from the current stack pointer.

The moment we "close over" a variable, the function definition is converted internally into a class and the value of that variable becomes a property of the instance of that class. The stack is a super efficient way of handling primitive value storage during the execution of a function. If x were an object it wouldn't be the same story: the reference to x would be stored on the stack, but the contents of x would be allocated in memory and be subject to subsequent garbage collection, leading to additional processing requirements. In neither of these cases would the variable stay "used", in that it wouldn't continue to take up memory after the function exited; however, as JavaScript uses garbage collection, it would need processing for the memory to be made available again.


function multiply(y) {
    let x = 10 ** 10
    return y * x
}

// This function does NOT allocate 10000000 copies
// of x in memory; the same location on the stack is used each time.
// No additional memory will be consumed and
// there will be no garbage collection.
for(let i = 0; i < 10000000; i++) {
    console.log(multiply(i))
}



Consider this:


function multiply(y) {
    let x = {pow: 10 ** 10}
    return y * x.pow
}

// This function DOES allocate 10000000 copies of x in memory;
// however, NO additional memory will be permanently held,
// as garbage collection will free the temporary values once they are
// no longer accessible.
for(let i = 0; i < 10000000; i++) {
    console.log(multiply(i))
}


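And for comparison, the closed-over version would look something like this (a minimal sketch of the pattern being discussed, not code from the article). Here x is allocated once, but it is kept alive on the heap for as long as the returned multiply function is reachable:


function makeMultiply() {
    let x = 10 ** 10                // captured below, so it moves to the heap
    return function multiply(y) {
        return y * x                // the closure keeps x alive between calls
    }
}

const multiply = makeMultiply()

// x is never recreated, but it is also never released
// while multiply is still referenced.
for(let i = 0; i < 10000000; i++) {
    console.log(multiply(i))
}
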
Rasmus Schultz

But Mike is right here - in terms of net memory usage, there is no difference.

The difference in terms of memory here is not the amount of memory used, but when that memory is used and gets released. The other difference is in terms of thrashing the garbage collector - by using a closure, you avoid unnecessary thrashing, but strictly speaking, you do so by keeping this memory allocated and never releasing it.

In that sense, your optimization could be considered "worse" in terms of overall memory usage. It's definitely much better in terms of overall performance, but the explanation for that is not that you're saving memory - you really aren't; on the contrary.

It's a good optimization, but not for the reasons you explained.

Also, with regards to this part of your explanation:

the let x = Math.pow(10,10) is recreated and occupy certain memory, in this case quite a large memory for sure, due to the large numeric value it's generating.

It's a good guess, but that is not how numbers are stored in JavaScript - the number type is a 64-bit floating-point value, no matter which number it holds.

You can learn about number storage here:

2ality.com/2012/04/number-encoding...
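
For instance (an illustrative snippet of my own, not from the linked article), a small number and 10 ** 10 occupy identically sized 8-byte slots:


// Every JavaScript Number is an IEEE-754 double: 8 bytes each,
// whether the value is 1 or 10 ** 10.
const nums = new Float64Array([1, 10 ** 10])
console.log(nums.BYTES_PER_ELEMENT)    // 8
console.log(nums[1] === 10 ** 10)      // true - 10^10 is exactly representable
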

But to begin with, you really shouldn't spend your time "optimizing" something this small. You should trust the language to do what's best. If you lift a value into a variable in a parent scope, it should be because the code makes more sense that way - the person reading the code will understand that this value doesn't change - or because the value is expensive to calculate.

Speculating about saving 8 bytes of memory is not a good use of your time, unless you expect to have millions of instances - and even then, you would need to weigh the fact that those millions of instances can't be deallocated when they're not in use, against the performance overhead of calculating the value on demand.

If you do have a case that calls for memory optimizations, you should learn to use a profiler and get your facts from actual measurements - in your case here, you would have found that the extra closure you use for your "memory optimization" actually requires more memory, not less.
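
For instance, in Node.js a rough back-of-the-envelope check could look like this (just a sketch with a made-up file name, not a real profiler session; the numbers will vary by engine and run):


// Run with: node --expose-gc sketch.js  (the file name is arbitrary)
const before = process.memoryUsage().heapUsed

const closures = []
for(let i = 0; i < 100000; i++) {
    const x = 10 ** 10              // captured, so each closure retains it
    closures.push(y => y * x)
}

global.gc && global.gc()            // gc() is only exposed with --expose-gc
const after = process.memoryUsage().heapUsed
console.log(`~${Math.round((after - before) / 1024)} KiB retained by the closures`)
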

Performance, and sometimes even memory usage, is too complex in JavaScript for you to guess or assume anything - the execution model is extremely complex, and the measurements are often not at all what you might intuitively expect.

Alexander Carter

This is what I came to the comments to add. 👍