What's hard about Dynamic Programming

As far as coding interviews are concerned, Dynamic Programming has become synonymous with hard. Candidates spend a disproportionate amount of time cracking the code, secretly hoping they won't have to do it in an interview. I totally get it: there is a big jump from the familiar Fibonacci problem to palindrome partitioning. Here is why I think it's hard and what to do about it.

  1. Detection
    When encountering hard problems, the first solutions that come to mind are brute force and maybe a greedy algorithm. Greedy problems look eerily similar to DP problems in that they also try to maximize or minimize a quantity. For instance, a jumping-frog scenario can be a DP problem while a slight variation is a greedy one. One way to distinguish the two is to ask whether it is necessary to search the entire solution space or whether a locally optimal choice suffices, as the sketch below illustrates.
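To make the contrast concrete, here is a minimal sketch in Python (my choice of language; this minimum-cost frog problem is my own illustration, not necessarily the exact variation alluded to above). The frog hops forward one or two stones and pays the height difference per hop; minimizing the total cost requires the full DP search, and a plausible greedy rule can get it wrong.

```python
def min_jump_cost(heights):
    """DP: cheapest total cost for a frog hopping from stone 0 to the last
    stone, moving +1 or +2 stones and paying abs(height difference) per hop."""
    n = len(heights)
    cost = [0] * n  # cost[i] = cheapest way to reach stone i
    for i in range(1, n):
        cost[i] = cost[i - 1] + abs(heights[i] - heights[i - 1])
        if i > 1:
            cost[i] = min(cost[i], cost[i - 2] + abs(heights[i] - heights[i - 2]))
    return cost[-1]


def greedy_jump_cost(heights):
    """Greedy: always take the locally cheaper hop. Looks reasonable,
    but it commits to positions that can ruin the global optimum."""
    i, total, n = 0, 0, len(heights)
    while i < n - 1:
        one = abs(heights[i + 1] - heights[i])
        two = abs(heights[i + 2] - heights[i]) if i + 2 < n else float("inf")
        if two <= one:
            total, i = total + two, i + 2
        else:
            total, i = total + one, i + 1
    return total


heights = [0, 2, 3, 4, 0]
print(min_jump_cost(heights))     # 6: hop 0 -> 3 -> 0
print(greedy_jump_cost(heights))  # 8: greedy creeps one stone at a time
```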

  2. Categorization
    Dynamic Programming is a diverse technique applied to many types of problems. Prominent categories include Longest Increasing Subsequence, Longest Common Substring, Knapsack problems, and Matrix Chain Multiplication, among others. Sometimes it works alongside other techniques like Number Theory and Bit Manipulation. It definitely helps to be aware of these categories.
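As a quick taste of one category, here is a minimal Longest Increasing Subsequence sketch (again in Python, my illustration). The recurrence says: the best subsequence ending at index i extends the best subsequence ending at any earlier, smaller element.

```python
def longest_increasing_subsequence(nums):
    """O(n^2) DP: lis[i] = length of the longest increasing
    subsequence ending exactly at index i."""
    if not nums:
        return 0
    lis = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:
                lis[i] = max(lis[i], lis[j] + 1)
    return max(lis)


print(longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]))  # 4: (2, 3, 7, 18)
```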

  3. Recurrence relation
    The premise of dynamic programming is that big problems can be broken into smaller subproblems. The recurrence relation is the bread and butter of this supposition; a misstep here throws off the whole solution. It is important to get these steps right:

    • Is the problem made up of smaller subproblems?
    • Knowing the solutions to the subproblems, how do they tally up to the bigger problem?
    • Pay close attention to the problem wording even if you have seen the problem before. For instance, given an infinite supply of coins, counting the number of combinations that make change calls for a different formula than minimizing the number of coins needed, as the sketch below shows.
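To make that last bullet concrete, here is a minimal sketch of both coin-change variants in Python (the coin values and amount are arbitrary; the point is that the two recurrences differ even though the problem statements sound almost identical).

```python
def count_combinations(coins, amount):
    """ways[a] = number of coin combinations summing to a.
    Looping coins on the outside counts combinations, not permutations."""
    ways = [0] * (amount + 1)
    ways[0] = 1  # one way to make 0: use no coins
    for coin in coins:
        for a in range(coin, amount + 1):
            ways[a] += ways[a - coin]
    return ways[amount]


def min_coins(coins, amount):
    """fewest[a] = 1 + min(fewest[a - coin]) over coins that fit."""
    INF = float("inf")
    fewest = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a:
                fewest[a] = min(fewest[a], fewest[a - coin] + 1)
    return fewest[amount] if fewest[amount] != INF else -1


print(count_combinations([1, 2, 5], 5))  # 4 combinations
print(min_coins([1, 2, 5], 5))           # 1 coin (a single 5)
```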
  4. Caching order
    Even after successfully deriving the recurrence relation, there is still a chance the solution won't work. This has a lot to do with the order in which the cache is built, which in turn depends on the recurrence relation. For instance, for the formula:

cache[i][j] = 1 + Min(cache[i-1][j], cache[i][j-1])

The cache is filled from top to bottom, left to right, so when computing cache[i][j], the prerequisites are already known. On the other hand, this order wouldn't work for the formula:

cache[i][j] = Max(cache[i+1][j], cache[i][j-1])

The reason is that the value at cache[i][j] depends on cache[i+1][j] in the row below, which has to be computed first. The table has to be filled from the bottom row up, left to right within each row, which also means the base cases at the bottom of the table have to be set beforehand. The sketch below shows this fill order in action.
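For instance (my example, not one from the original text), the longest palindromic subsequence recurrence has exactly this shape: when the endpoint characters differ, cache[i][j] = Max(cache[i+1][j], cache[i][j-1]). A minimal Python sketch with the bottom-up fill order:

```python
def longest_palindromic_subsequence(s):
    """cache[i][j] = length of the longest palindromic subsequence of s[i..j].
    Rows are filled bottom-up and columns left-to-right, so cache[i+1][j],
    cache[i][j-1] and cache[i+1][j-1] all exist before cache[i][j] is needed."""
    n = len(s)
    cache = [[0] * n for _ in range(n)]
    for i in range(n - 1, -1, -1):    # bottom row first
        cache[i][i] = 1               # base case: a single character
        for j in range(i + 1, n):     # then sweep right
            if s[i] == s[j]:
                cache[i][j] = cache[i + 1][j - 1] + 2
            else:
                cache[i][j] = max(cache[i + 1][j], cache[i][j - 1])
    return cache[0][n - 1]


print(longest_palindromic_subsequence("bbbab"))  # 4 ("bbbb")
```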

That said, these are just some of the issues I've seen folks struggle with. After sitting on both sides of multiple interview loops, I have reasonable confidence in the techniques that work, and I have compiled them into a short but comprehensive course, now available on Leanpub. Interviews aside, it is important to master algorithms because solving problems optimally is the essence of software engineering. You are also welcome to check out my other course on bit manipulation, which is completely free.

Till next time, happy coding!
