I agree with your sentiment, and would not recommend actually using this code in production. It definitely won't be quick! JavaScript performance is less of a concern than it used to be. As for `Array.filter`, the tradeoff has to be made between saving machine performance (both speed and space) and human performance (clarity and maintainability).
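I appreciate your readability concerns, but that article looks rather insane to me.

He starts by benchmarking absolutely bare, asm.js-level code. Something of roughly this shape, in both JS and C (my reconstruction of the gist, showing the JS side; the exact snippet isn't quoted here):

```js
// find the smallest positive number in `numbers`, then never use it:
// `smallest` doesn't escape, so the whole loop is dead code to an optimizer
let smallest = Infinity;
for (let i = 0; i < numbers.length; i++) {
  if (numbers[i] > 0 && numbers[i] < smallest) smallest = numbers[i];
}
```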
Actually, I'm astonished that it didn't optimize to O(1)/constant time/a noop in both languages, since the output is unobservable. Was the C compiled without optimization? 🤔
He then proceeds to rewrite
```js
let smallest = Infinity;
const numbersLength = numbers.length; // avoid property look up in loop
for (let i = 0; i < numbersLength; i++) {
  const number = numbers[i]; // avoid repeated index lookups
  if (number > 0 && number < smallest) smallest = number;
}
```
(again asm.js-spartan code, but both of the things mentioned in the comments actually optimize well in the JIT)
into
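(a reconstruction from the description that follows; the article's exact snippet isn't quoted here)

```js
// the rewrite, roughly: filter allocates a temporary array, sort mutates it
// in place, and we take the head, making the whole thing an O(n log n) pipeline
const smallest = numbers.filter((n) => n > 0).sort((a, b) => a - b)[0];
```

instead of

```js
// a min reduce: one O(n) pass, no sorting (my sketch of the alternative)
const smallest = numbers
  .filter((n) => n > 0)
  .reduce((min, n) => Math.min(min, n), Infinity);
```

which in the future can be written as follows to avoid the intermediate array:

```js
// iterator helpers (a TC39 proposal at the time of writing, hence "in the
// future"): .values() yields a lazy iterator, so no temporary array is built
const smallest = numbers
  .values()
  .filter((n) => n > 0)
  .reduce((min, n) => Math.min(min, n), Infinity);
```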
introducing a whole ass mutable sorting step instead of a min reduce, fundamentally transforming the algorithm from O(n) to O(n log n), on TOP of the temporary memory allocation and slower constant factor.
Up next, he has `delete user.password` vs `user.password = undefined`.
Oh, it takes 1 billion iterations to show a difference between the two? Try benchmarking the whole application that contains this line. Look at memory use and battery power consumption.
How about the fact that the deletion affects the speed of all of your code that handles `user`-type objects, by turning them from template objects into dynamic dictionaries, or at best duplicating the monomorphised JITted code in memory, meaning less of it fits into the CPU cache?
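A minimal sketch of that shape change, assuming V8-style hidden classes (the objects are made-up stand-ins for `user`):

```js
const a = { password: "hunter2", name: "ada" };
const b = { password: "s3cret", name: "bob" };
// a and b share one hidden class here, so JITted code that touches
// them can stay monomorphic

delete a.password;      // a typically drops into slow dictionary mode
b.password = undefined; // b keeps its hidden class; the key stays, its value is undefined
```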
Would love to see a better real-world benchmark! If you can find a codebase where overuse of `Array.filter` has resulted in an application that is unusable, I would love to see it!
And last but not least, see how competitive WASM is with JS, even when supposedly having to run through a JS glue layer, in JS's historical home, the browser.
And this is not about fancy ergonomic end-user-written business logic; this is about web frameworks, worked on for years to improve performance at any cost. You know they aren't using `delete` or `forEach`.
And yet just look at those memory allocation numbers.
How fast JS runtimes are, given the language specification, is a huge achievement and nothing short of a miracle. But when you regard JS as the delivery target, because that's what runs in all browsers, I don't think it's ever right to completely forget how to write code that makes the best use of those efforts.
I'd love to see that too. I'd tongue-in-cheek say facebook.com, but that's rendered unusable (performance-wise) by quite a bit more than `Array` methods :)
(They are used there, and not transpiled, though, which I found rather surprising.)
Haha, truthfully, I've been thinking the same thing about Facebook. There are times I can barely get it to fully render. I think their problem (and the problem with React in general) is that literally everything is replicated in the virtual DOM. I would love to see a JavaScript-to-WebAssembly compiler that turns things like `Array.filter` into faster solutions. Paired with a UI library that does the virtual DOM work in web workers, it could give us the best of both worlds (declarative code that compiles to optimized low-level constructs).
I'm just assmad that `Array.filter` et al. got added to the language specification, with all its awkwardness like always allocating a new array, and passing so, so many arguments to the callable passed in. In addition to thwarting attempts at `.reduce(Math.max)` and `.forEach(console.log)`, it causes an arity mismatch, which once again causes a minuscule deoptimization. Because, honestly, who writes `.reduce((a, x, _, __) =>`? Not many :v
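For anyone who hasn't been bitten by it, here's the thwarting in action (my own minimal example):

```js
// reduce calls its callback as (acc, value, index, array), so Math.max
// also receives the index and the array, which coerces to NaN
[3, 1, 2].reduce(Math.max); // NaN: the first step is Math.max(3, 1, 1, [3, 1, 2])

// forEach calls (value, index, array), so console.log prints all three
["a", "b"].forEach(console.log);
// a 0 [ 'a', 'b' ]
// b 1 [ 'a', 'b' ]
```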
Virtual DOM seems to be fundamentally too expensive for the performance people want, so a different model like reactive signals that bypasses it entirely is needed. I shitposted about it here yesterday.
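Roughly the model I mean, as a toy sketch (the `#count` element is a made-up stand-in, and real signal libraries track dependencies automatically):

```js
// toy signal: subscribers re-run on every write and patch the DOM node
// directly; nothing is diffed against a virtual tree
function signal(initial) {
  let value = initial;
  const subs = new Set();
  return {
    get: () => value,
    set(next) {
      value = next;
      subs.forEach((fn) => fn());
    },
    subscribe(fn) {
      subs.add(fn);
      fn(); // run once to render the initial value
    },
  };
}

const count = signal(0);
const label = document.querySelector("#count"); // hypothetical element
count.subscribe(() => {
  label.textContent = String(count.get());
});
count.set(1); // one write, one text-node update
```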
Writing the business logic (including bringing in performant libraries that you use in the backend or native application) in the same language as the UI helpers and compiling it to WASM makes sense to me. The speed of WASM DOM modification is sure to increase somewhat in the future, but currently the critical performance downside for me is first-time startup performance. Sure, caching compiled modules is fast and efficient, but is shipping a raw HTML loading/landing/login page while the WASM downloads and compiles really enough?
Perhaps the situation will improve as people make sense of dynamically linking multiple pieces of WASM together, with the granularity and enthusiasm they show for JS bundle code splitting?