Interesting challenge. I've got three very similar solutions. They are all based on the same principle: reduce the array into a key-value structure, then re-create the array from the values only.
Approach 1: Classical Reducer
(reducer maintains immutability)
/**
 * classic reducer
 **/
const uniqByProp = prop => arr =>
  Object.values(
    arr.reduce(
      (acc, item) =>
        item && item[prop]
          ? { ...acc, [item[prop]]: item } // just include items with the prop
          : acc,
      {}
    )
  );

// usage:
const uniqueById = uniqByProp("id");
const unifiedArray = uniqueById(arrayWithDuplicates);
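To make the usage concrete, here is a small made-up input (the ids and values are purely illustrative); note that for duplicate ids the later occurrence wins, because it overwrites the earlier entry in the accumulator:

const arrayWithDuplicates = [
  { id: 1, name: "a" },
  { id: 2, name: "b" },
  { id: 1, name: "a (duplicate)" },
  null, // falsy items and items without the prop are skipped
];

const uniqueById = uniqByProp("id");
console.log(uniqueById(arrayWithDuplicates));
// → [ { id: 1, name: "a (duplicate)" }, { id: 2, name: "b" } ]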
Depending on your array size, this approach can easily become a bottleneck in your app: the object spread creates a brand-new accumulator object on every iteration, which makes the reducer effectively quadratic. It is more performant to mutate your accumulator object directly in the reducer.
Approach 2: Reducer with object-mutation
/**
 * using object mutation
 **/
const uniqByProp = prop => arr =>
  Object.values(
    arr.reduce(
      (acc, item) => (item && item[prop] && (acc[item[prop]] = item), acc), // using object mutation (faster)
      {}
    )
  );

// usage (same as above):
const uniqueById = uniqByProp("id");
const unifiedArray = uniqueById(arrayWithDuplicates);
The larger your input array, the more performance you gain from the second approach. In my benchmark (input array of length 500, duplicate-element probability of 0.5), the second approach is ~440× as fast as the first.
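If you want to reproduce numbers like that locally, a minimal sketch of the test-data generation could look like the following; makeTestArray is a made-up helper for illustration, not the actual jsperf setup (which is linked at the end):

// generate `length` items; each item after the first reuses an earlier
// index as its id with the given probability, producing duplicates
const makeTestArray = (length, duplicateProbability) =>
  Array.from({ length }, (_, i) => ({
    id: i > 0 && Math.random() < duplicateProbability
      ? Math.floor(Math.random() * i) // pick an earlier index → likely duplicate
      : i,
    value: `item-${i}`,
  }));

const arrayWithDuplicates = makeTestArray(500, 0.5);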
Approach 3: Using ES6 Map
My favorite approach uses a Map instead of an object to accumulate the elements. This has the advantage of preserving the original ordering of the array:
/**
 * using ES6 Map
 **/
const uniqByProp_map = prop => arr =>
  Array.from(
    arr
      .reduce(
        (acc, item) => (item && item[prop] && acc.set(item[prop], item), acc),
        new Map() // using a Map (preserves ordering)
      )
      .values()
  );

// usage (still the same):
const uniqueById = uniqByProp_map("id");
const unifiedArray = uniqueById(arrayWithDuplicates);
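A quick illustration of the ordering difference: a plain object iterates integer-like keys in ascending numeric order, so the object-based variants can reorder items whose ids are numbers, while the Map keeps insertion order (the sample data here is made up):

const input = [
  { id: 42, n: "first" },
  { id: 7, n: "second" },
];

uniqByProp("id")(input);     // → [ { id: 7, … }, { id: 42, … } ]  (reordered by numeric key)
uniqByProp_map("id")(input); // → [ { id: 42, … }, { id: 7, … } ]  (original ordering preserved)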
Using the same benchmark conditions as above, this approach is ~2× as fast as the second approach and ~900× as fast as the first.
Conclusion
Even though all three approaches look quite similar, they have surprisingly different performance footprints.
You'll find the benchmarks I used here: jsperf.com/uniq-by-prop