Hi, thanks for the great post. I'm still learning and I really want to start using transducers soon :) but I have a question that may sound a bit too "old-school": how does this compare to an imperative approach with for loops?
I like and use a functional approach as much as possible, for various reasons (reusability, immutability, readability, etc.), but I always have some doubts when it comes to performance.
How does this transducer function
const benchmarkTransducer = xduce(
  compose(
    map(function(x) {
      return x + 10;
    }),
    map(function(x) {
      return x * 2;
    }),
    filter(function(x) {
      return x % 5 === 0;
    }),
    filter(function(x) {
      return x % 2 === 0;
    })
  )
);
compare to an imperative approach like
const updatedData = [];
for (let i = 0; i < data.length; i++) {
  const value = data[i];
  const updatedValue = (value + 10) * 2;
  if (updatedValue % 5 === 0 && updatedValue % 2 === 0) {
    updatedData.push(updatedValue);
  }
}
For sure it cannot be used for streams, but for a static data array it can outperform the transducer version. Where and when do you think one approach should be used instead of the other?
In the field of animation and interactions, where every millisecond counts, optimized performance is a must. What do you think?
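As a quick way to check this on your own machine, here is a self-contained micro-benchmark sketch. It uses chained array methods as a stand-in for the layered functional version (since `xduce`/`compose` come from an external library), and `runChained`/`runLoop` are names I made up for illustration:

```javascript
// One million integers as test input.
const data = Array.from({ length: 1_000_000 }, (_, i) => i);

// Chained version: four passes over the data, three intermediate arrays.
function runChained(input) {
  return input
    .map(x => x + 10)
    .map(x => x * 2)
    .filter(x => x % 5 === 0)
    .filter(x => x % 2 === 0);
}

// Fused imperative version: one pass, no intermediate arrays.
function runLoop(input) {
  const out = [];
  for (let i = 0; i < input.length; i++) {
    const updated = (input[i] + 10) * 2;
    if (updated % 5 === 0 && updated % 2 === 0) {
      out.push(updated);
    }
  }
  return out;
}

// Rough timing; for serious numbers use a benchmarking library
// with warm-up runs and multiple samples.
for (const [name, fn] of [['chained', runChained], ['loop', runLoop]]) {
  const start = performance.now();
  fn(data);
  console.log(`${name}: ${(performance.now() - start).toFixed(1)} ms`);
}
```

Both functions return the same result; the difference is purely in how many passes and function calls happen per element.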
Hi Marco,
thanks for your comment 😃
The imperative approach you describe is of course the fastest of all the alternatives; it gives you roughly a 50% performance gain. So depending on your collection sizes and performance requirements, the imperative approach is worth considering. I'd recommend reading this article if you want more insight.
Most of the performance penalty comes from function call overhead. I know there are/were some efforts to inline functions at build time (via a Babel plugin or the Closure Compiler), but these optimizations are not very reliable at the moment: Babel needs all functions in the same file for this optimization, and the Closure Compiler only inlines a function when doing so would save space.
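To make the inlining idea concrete, here is a hand-fused sketch of what such a build step aims to produce (the function names are made up for illustration; this is not actual output from Babel or the Closure Compiler):

```javascript
// Two small composed functions: each costs one function call
// per element on top of the work they actually do.
const addTen = x => x + 10;
const double = x => x * 2;

// Composed version: two extra calls per element.
const transformComposed = xs => xs.map(x => double(addTen(x)));

// Hand-inlined version: the calls are fused into one expression,
// which is roughly what a build-time inliner tries to do for you.
const transformInlined = xs => xs.map(x => (x + 10) * 2);

console.log(transformComposed([1, 2, 3])); // [22, 24, 26]
console.log(transformInlined([1, 2, 3]));  // [22, 24, 26]
```

The results are identical; only the per-element call overhead differs, which is exactly what the unreliable build-time optimizations try to eliminate.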
A more reliable alternative is to write your data transforms in ReasonML and let the BuckleScript compiler do the heavy lifting. The optimizations applied by the BuckleScript compiler are more advanced than those of any JS-to-JS compiler out there. So if your transforms tend to be rather complex, but you don't want to sacrifice readability or maintainability, I'd recommend trying ReasonML 😉