These are not the kinds of calculations where memoization helps. They are not pure functions: they depend on a nested hierarchy of data. The input space at each node is large enough that you'll never get a cache hit after any change, and the cost of comparing inputs will generally exceed the cost of the calculation itself (matrix operations).
Ok, understood: memoization pays off on simple inputs. An inner caching mechanism would indeed be better here. In that case the cache-invalidation key needs to be computed inline as the operation occurs; a "hash of the subtree" could fit. If the function is iterative, it can then skip recalculation on unchanged branches.
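A minimal sketch of that subtree-hash idea (all names here are hypothetical, not from any library): each node's cache key is derived from its own data plus its children's keys, so a recomputation over the tree hits the cache on unchanged branches and skips the work.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    value: float
    children: list["Node"] = field(default_factory=list)

def subtree_key(node: Node) -> int:
    # Cache-invalidation key: combine this node's data with the
    # keys of its children, so any change below invalidates the key.
    return hash((node.value, tuple(subtree_key(c) for c in node.children)))

_cache: dict[int, float] = {}
calls = 0  # counts actual (uncached) computations

def expensive_total(node: Node) -> float:
    global calls
    key = subtree_key(node)
    if key in _cache:
        return _cache[key]  # unchanged branch: skip recalculation
    calls += 1
    # Stand-in for the real per-node work (e.g. matrix operations).
    result = node.value + sum(expensive_total(c) for c in node.children)
    _cache[key] = result
    return result

tree = Node(1.0, [Node(2.0), Node(3.0, [Node(4.0)])])
print(expensive_total(tree))  # computes every node
print(expensive_total(tree))  # root key unchanged: served from cache
```

The caveat from the first comment still applies: this only wins if computing `subtree_key` is much cheaper than the per-node work itself, otherwise the key computation dominates.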