DEV Community

[Comment from a deleted post]
 
David Griffin (habilain)

This isn't about a "formal definition" of memoization, or indeed of caching, as such. The basic point being missed is that memoization cannot be applied to a problem where each request is unique. Your second version isn't memoization - it could be counted as pre-computing your arrays, but that's it. Fundamentally, you still only read each element of the cache once.

As I said before, the speedup here comes from the way the problem has been transformed to be O(n) rather than O(n²). But for this particular problem, memoization isn't useful, because each answer is required exactly once. Because each query is unique, you cannot memoize.
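
For context, the O(n²) form is the obvious one where every output recomputes its products from scratch - roughly this (an illustrative sketch, not anyone's actual code):

def naive(input_array):
    # O(n^2): for each index, multiply together every other element.
    result = []
    for i in range(len(input_array)):
        product = 1
        for j, val in enumerate(input_array):
            if j != i:
                product *= val
        result.append(product)
    return result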

Memoization might be of use in a slightly different problem specification, where there's a user of the system who asks for different indices and you cannot precompute the entire array; in that case, as the user may request the same index multiple times, memoization is viable. But for this one, there just isn't anything that can be memoized.
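
Roughly what I have in mind for that variant (an illustrative sketch only - make_query and the query interface are invented for the example):

from functools import lru_cache

def make_query(input_array):
    # Hypothetical interactive variant: the caller asks for one index at a
    # time, possibly repeating indices, so we don't precompute everything.
    @lru_cache(maxsize=None)
    def product_except(index):
        # Computed on the first request for this index; repeat requests
        # are served from the cache.
        product = 1
        for j, val in enumerate(input_array):
            if j != index:
                product *= val
        return product
    return product_except

query = make_query([1, 2, 3, 4, 5])
query(2)  # computed
query(2)  # answered from the cache - this repeat is what makes memoization pay off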

 
rileyflynn (_rileyflynn)

Not sure I follow, to be honest. In the problem, we need to calculate, for example, the product of 1*2*3*4 two separate times, meaning that that particular result is required exactly twice. The second time, we don't have to recalculate it because we've already cached it. Each request is not unique—it happens twice.

It's true that we only read from the cache once per item, but does it make a difference whether we're reading each element from the cache once or 500 times?
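
To be concrete about the caching idea (a simplified illustration, not the exact code from my post):

def prefix_products(input_array):
    # cache[k] holds the product of the first k elements.
    cache = {0: 1}
    for k, val in enumerate(input_array, start=1):
        # Reuses the previously cached product instead of recomputing it.
        cache[k] = cache[k - 1] * val
    return cache

With input [1, 2, 3, 4, 5], building cache[5] reuses cache[4] (i.e. 1*2*3*4) rather than multiplying those four numbers out again.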

 
David Griffin (habilain) • Edited

Well, yes, I think it kind of does, because it governs what's an appropriate technique. Certainly in the case of the forward direction, all you've actually implemented is an accumulator, and in a particularly inefficient way as far as memory is concerned. And perhaps I got a bit muddled because you only presented the forward direction - the backwards direction probably does require memoisation (Edit: No, it doesn't).

The real point to this interview question, I believe, is that the most efficient solution requires different approaches for the forward and backward directions.
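
To spell out what I mean by "just an accumulator" for the forward direction (sketch only):

def forward_products(input_array):
    # Product of everything to the left of each index, kept in a single
    # running accumulator rather than a cache that is read back later.
    output = []
    acc = 1
    for val in input_array:
        output.append(acc)
        acc *= val
    return output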

 
David Griffin (habilain) • Edited

OK, I've had a proper chance to think about this (when I'm not posting from work/on the move), and memoization is definitely not an ideal optimization technique here. The problem is that what you really want is to avoid recomputing things all the time, and so you want partial results. However, because of the form of these partial results, there's no requirement to store them in either a precomputed array or a memoization cache. It's trivial to use a pair of accumulators, as in the following code (Python, because that's what I normally use):

def f(input_array):
    output_array = [0] * len(input_array)
    # Backward pass: output_array[i] starts out as the product of every
    # element to the right of index i.
    reverse_acc = 1
    for index, val in enumerate(reversed(input_array)):
        output_array[len(input_array) - index - 1] = reverse_acc
        reverse_acc *= val
    # Forward pass: multiply in the product of every element to the left.
    forward_acc = 1
    for index, val in enumerate(input_array):
        output_array[index] *= forward_acc
        forward_acc *= val
    return output_array
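
(For example, f([1, 2, 3, 4]) returns [24, 12, 8, 6].)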

This version is likely to be about as fast as you can get this algorithm, is pretty simple, and uses the bare minimum of additional memory (for clarity the two accumulators are separate; they can be combined, meaning that the overhead needed for this algorithm is a single integer + the output array).
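
Combining them just means reusing one accumulator variable for both passes - something like this (the name f_combined is mine, purely for illustration):

def f_combined(input_array):
    output_array = [0] * len(input_array)
    acc = 1
    # Backward pass (right to left): write the suffix products.
    for index in range(len(input_array) - 1, -1, -1):
        output_array[index] = acc
        acc *= input_array[index]
    acc = 1
    # Forward pass (left to right): multiply in the prefix products.
    for index, val in enumerate(input_array):
        output_array[index] *= acc
        acc *= val
    return output_array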

If you don't object, I might write this up as an object lesson on how to tell whether the optimizations people choose to apply are actually appropriate. As I was saying, if you're only reading your memoized value once, it's unlikely to be the correct technique.

 
rileyflynn (_rileyflynn)

Thanks for all your insights, David. This is a really awesome solution. It didn't quite occur to me that I could have two separate accumulator loops (forward and reverse), and then combine the results like this.

You're definitely welcome to write this up in a post. That would be super interesting.