OK, I've had a proper chance to think about this (when I'm not posting from work/on the move), and memoization is definitely not an ideal optimisation technique here. The real goal is to avoid recomputing things, which means keeping partial results around. However, because of the form these partial results take, there's no need to store them in a precomputed array or via memoization at all. It's trivial to use a pair of accumulators instead, as in the following code (Python, because that's what I normally use):
```python
# Initialise the output; each slot will accumulate a product.
output_array = [1] * len(input_array)

# Reverse pass: output_array[i] ends up holding the product of all
# elements *after* position i.
reverse_acc = 1
for index, val in enumerate(reversed(input_array)):
    output_array[len(input_array) - index - 1] = reverse_acc
    reverse_acc *= val

# Forward pass: multiply in the product of all elements *before* position i.
forward_acc = 1
for index, val in enumerate(input_array):
    output_array[index] *= forward_acc
    forward_acc *= val
```
This version is likely as fast as you can get this algorithm, is pretty simple, and uses the bare minimum of additional memory (for clarity the two accumulators are separate; they can be combined, meaning that the overhead needed for this algorithm is a single integer plus the output array).
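To make the "combined" point concrete, here's a minimal sketch of the same two passes reusing a single accumulator variable. The function name `product_except_self` and the example input are my own, not from the thread:

```python
def product_except_self(nums):
    out = [1] * len(nums)

    acc = 1  # single accumulator, reused by both passes
    # Reverse pass: out[i] holds the product of elements after i.
    for i in range(len(nums) - 1, -1, -1):
        out[i] = acc
        acc *= nums[i]

    acc = 1  # reset and reuse for the forward pass
    # Forward pass: multiply in the product of elements before i.
    for i in range(len(nums)):
        out[i] *= acc
        acc *= nums[i]

    return out

print(product_except_self([1, 2, 3, 4]))  # [24, 12, 8, 6]
```

The extra state beyond the output array really is just the one integer `acc` (plus the loop index).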
If you don't object, I might write this up as an object lesson in how to tell whether the optimizations people choose to apply are actually appropriate. As I was saying, if you only ever read a memoized value once, memoization is unlikely to be the correct technique.
Thanks for all your insights, David. This is a really awesome solution. It hadn't occurred to me that I could run two separate accumulator passes (forward and reverse) and combine the results like this.
You're definitely welcome to write this up in a post. That would be super interesting.