The original post can be found here on the Whiteboard Hero blog.
Dynamic Programming: it seems to be the arch nemesis of most people who are prepping for coding interviews. While these problems are definitely challenging, I will show you how to attack them in a methodical fashion so you can feel confident in any coding interview!
What is Dynamic Programming?
Dynamic Programming (DP for short) is an algorithmic technique that can be used to solve problems that have a recursive substructure and overlapping subproblems within that substructure. That sounds great, but what does it actually mean?
Recursive Substructure
Recursive substructure means that the solution to a problem can be expressed by some combination of solutions to its subproblems. Subproblems manifest themselves as recursive function calls with different parameters, which converge on some base case where recursion will stop. To better understand what subproblems are, let’s examine an implementation of the classic dynamic programming problem, the Fibonacci sequence:
function fib(n) {
  if (n === 0) return 0;
  if (n === 1) return 1;
  return fib(n - 1) + fib(n - 2);
}
Here, we say the problem we are trying to solve is fib(n), and it can be expressed as the sum of the solutions to the subproblems fib(n - 1) and fib(n - 2). What we have just described is also known as a recurrence relation, which we will discuss in further detail later. We also define base cases for n = 0 and n = 1 so that we don't get trapped in an infinite recursion.
Overlapping Subproblems
What it means for subproblems to be overlapping is that solutions to the same subproblems are needed multiple times. Going back to our Fibonacci example, if we call fib(5), you may notice that the following function invocations happen:

- fib(5) calls fib(4) and fib(3).
- fib(4) calls fib(3) and fib(2).
- fib(3) calls fib(2) and fib(1).
- fib(3) calls fib(2) and fib(1).
- fib(2) calls fib(1) and fib(0).
- fib(2) calls fib(1) and fib(0).
- fib(2) calls fib(1) and fib(0).
- ...
As you can see, we are calling fib with the same n over and over again. This is unnecessary work! This is the primary issue that dynamic programming solves.
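To see the overlap concretely, here is a small sketch (not from the original post) that instruments the naive function to count how many times fib is invoked for each value of n:

```javascript
// Naive Fibonacci, plus a Map that tallies how many times each
// subproblem is computed. For fib(5), fib(2) runs 3 times and
// fib(1) runs 5 times -- exactly the duplicates listed above.
function fibCounted(n, counts) {
  counts.set(n, (counts.get(n) || 0) + 1);
  if (n === 0) return 0;
  if (n === 1) return 1;
  return fibCounted(n - 1, counts) + fibCounted(n - 2, counts);
}

const counts = new Map();
fibCounted(5, counts);
// counts.get(2) is 3, counts.get(1) is 5
```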
How It Works
Dynamic Programming optimizes runtime performance by storing the solutions to subproblems in memory. For example: when we solve fib(3), we will store that solution somewhere in memory so that whenever we call fib(3) again, we can just look up the answer. This saves us a lot of work! Here is how we could implement that in code:
function fib(n) {
  const memo = new Array(n + 1).fill(-1);
  function fibMemoized(x) {
    if (x === 0) return 0;
    if (x === 1) return 1;
    if (memo[x] === -1) {
      memo[x] = fibMemoized(x - 1) + fibMemoized(x - 2);
    }
    return memo[x];
  }
  return fibMemoized(n);
}
This technique of caching the return value of a function call is known as memoization, and is also known as the top-down approach to dynamic programming. It is called this because we solve the problem by starting with the main problem, fib(n), then work our way down to the base cases, fib(0) and fib(1).
There is a second approach to dynamic programming known as the bottom-up approach, where we solve the subproblems in the opposite direction. We start by filling in our base case(s), and then solve all the subproblems: fib(2), fib(3), etc., until we reach the main problem, fib(n). Here is how this would look in code:
function fib(n) {
  const memo = new Array(n + 1).fill(-1);
  memo[0] = 0;
  memo[1] = 1;
  for (let i = 2; i < n + 1; i++) {
    memo[i] = memo[i - 1] + memo[i - 2];
  }
  return memo[n];
}
Bottom-up solutions are generally preferred, as they avoid the overhead of recursion by solving the problem iteratively. However, they are usually trickier to implement because you might have to solve the subproblems in a particular order, as opposed to the top-down approach, where you can simply express your problem as a recurrence relation and add memoization.
The solution we have is already pretty good, but we can actually make it even better! Look closely at the body of the for loop: do you notice anything about which values we’re using from our lookup table? We are only ever using the previous two values, which means that we can forget anything older than that! As a matter of fact, this is a potential optimization of any dynamic programming problem. If we only ever need a constant number of the previous values in the table to solve a subproblem, we can drop the table altogether and replace it with individual variables. Here is what that would look like with the Fibonacci sequence:
function fib(n) {
  if (n === 0) return 0;
  let prev = 0;
  let curr = 1;
  for (let i = 1; i < n; i++) {
    const temp = curr;
    curr += prev;
    prev = temp;
  }
  return curr;
}
How to Solve a Dynamic Programming Problem
Now you know what dynamic programming is, and how to write a function to compute the Fibonacci sequence. But, how do we solve new dynamic programming problems, especially under the stress of a coding interview? You’ll need a strategy, and lots of practice. I’ll help you with the strategy, but you’ll have to practice to actually see results!
Side note: during a real interview, you may want or have to skip some of these intermediate steps. Some interviewers may require you to get through multiple problems in a single interview, in which case you may not have the time to go through all the steps I’ve listed here to get to a solution. However, when practicing, I recommend completing all of these steps so you really internalize the concepts.
Identify the Problem as a Dynamic Programming Problem
The first thing you’ll need to do when given a dynamic programming problem in a coding interview is... to realize that you’ve been given a dynamic programming problem, obviously!
Side note: I realize the phrase “dynamic programming problem” is a bit of a misnomer. Dynamic programming is not a type of problem, it is a technique which can be used to solve a problem. In interviews, you will be given problems where dynamic programming may only be one of a number of possible techniques to solve the problem. However, I will continue using this phrase for brevity.
As mentioned earlier, dynamic programming can be used on any problem with a recursive substructure and overlapping subproblems within that substructure. However, this can be a little tricky to recognize immediately. But, don’t worry! There are some clues that you can look for that may tip you off to a problem being a dynamic programming problem.
Dynamic programming problems are often optimization problems. This means that the problem is asking you for an optimal answer to the set of inputs. What is the longest increasing subsequence in this array? What is the minimal amount of coins to make a certain amount of change? What is the longest palindromic substring of a given string? Keep on the lookout for problems that are asking you to find things like the longest, shortest, maximum, or minimum of something.
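As an illustration of this category, here is a hypothetical sketch (not from the original post) of the "minimum coins to make change" problem mentioned above, solved bottom-up. The recurrence it assumes is minCoins(amount) = 1 + min over each coin of minCoins(amount - coin):

```javascript
// Minimum number of coins from `coins` that sum to `amount`,
// or -1 if no combination works. table[a] holds the optimal
// answer for the subproblem "make change for amount a".
function minCoins(coins, amount) {
  const table = new Array(amount + 1).fill(Infinity);
  table[0] = 0; // base case: zero coins make zero change
  for (let a = 1; a <= amount; a++) {
    for (const coin of coins) {
      if (coin <= a && table[a - coin] + 1 < table[a]) {
        table[a] = table[a - coin] + 1;
      }
    }
  }
  return table[amount] === Infinity ? -1 : table[amount];
}

// minCoins([1, 2, 5], 11) is 3 (5 + 5 + 1)
```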
Dynamic programming problems are also sometimes counting problems. This means that the problem asks you for a count of items or arrangements, given certain conditions. How many paths are there from the top left to the bottom right of a given grid, with obstacles? How many ways are there to get to the top of an N step staircase, if you can take either 1 or 2 steps from each step?
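The staircase question above is itself a tidy counting-problem example. A sketch (my own, not from the original post): the number of ways to reach step n is ways(n - 1) + ways(n - 2), since the final move was either a single or a double step, and only the previous two counts are ever needed:

```javascript
// Count the distinct ways to climb an n-step staircase
// taking 1 or 2 steps at a time.
function countWays(n) {
  let prev = 1; // ways to stand at step 0 (take no steps)
  let curr = 1; // ways to reach step 1
  for (let i = 2; i <= n; i++) {
    const temp = curr;
    curr += prev; // ways(i) = ways(i - 1) + ways(i - 2)
    prev = temp;
  }
  return curr;
}

// countWays(4) is 5: 1111, 112, 121, 211, 22
```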
There are also dynamic programming problems that do not fit into either of these categories. Even our example, the Fibonacci sequence, is not an optimization or a counting problem! However, the vast majority of dynamic programming problems you find in coding interviews will be in one of these two categories.
Figure out the Recurrence Relation
Once you’ve determined that the problem is a dynamic programming problem, the next thing you want to do is formulate a recurrence relation. A recurrence relation is simply how we express the solution to a problem in terms of solutions to its subproblems. Going back to our Fibonacci example, our recurrence relation would be:
fib(n) = fib(n - 1) + fib(n - 2)
I gave you the answer to that one, but how do you actually figure this out in an interview, with a problem that you’ve never seen before? In my opinion, this is the hardest part of the entire process. If you can figure out how to describe the problem as a recurrence relation (and the base case(s)), you can at least implement a top-down solution with almost no extra thinking required. To get this part down you really need practice solving a wide variety of dynamic programming problems. However, to get a glimpse into what this process looks like, we’ll walk through an example.
We’ll be using LeetCode #198 House Robber as our example. I encourage you to read the problem description before continuing, but for the problem, we are given an array of numbers where each number represents the amount of money that the house in that position has stashed away. We want to figure out the maximum amount of money we can rob given that we cannot rob adjacent houses.
If we didn’t have the rule of not being able to rob adjacent houses, we would just be able to take the sum of the array and be done with it! However, since we have the rule that we can’t rob adjacent houses, we now have to make a decision about each house. Do we rob it, or not?
Sometimes when trying to determine a solution to a problem, it can help to think about a simpler version of the problem first, and then generalize from there. We can employ that tactic here:
- What do we do if we have one house? Obviously we rob that one!
- What do we do if we have two houses? Obviously we pick the one with more money!
- What do we do if we have three houses? Good question...
The first two cases are very simple to solve for, but when we get to three houses, the answer gets a bit more tricky... We can either rob the first and the third house, or we can only rob the second house.
What we can do is generalize our problem a bit. Instead of saying we have a function that takes a list of numbers and returns the maximum haul for the entire set of houses, we can say we have a function that takes a list of numbers and the index of one of the houses. This gives us a problem that has subproblems of the same form. We can express this as a function f(h, x), where h is the array of numbers that represents how much money is stashed in each house and x is the index of the house we want to get the maximum haul for. So, what is our recurrence relation for this function? Remember that if we rob a house, we cannot rob the house directly before or after it. Considering that, we can determine that the recurrence relation that solves this problem is:
f(h, x) = max(h[x] + f(h, x - 2), f(h, x - 1))
This makes sense because we can either rob the current house (take the money in the current house and the maximum haul from 2 houses ago) or we can skip it (just take the max haul from the previous house).
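To see the recurrence in action, here is a hand-evaluation on a hypothetical three-house array h = [2, 7, 3], using the answers we intuited above for one and two houses:

f(h, 0) = h[0] = 2
f(h, 1) = max(h[1], h[0]) = max(7, 2) = 7
f(h, 2) = max(h[2] + f(h, 0), f(h, 1)) = max(3 + 2, 7) = 7

We skip the third house entirely, because robbing the first and third (2 + 3 = 5) yields less than robbing the second alone.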
Determine the Base Case(s)
Now that we have our recurrence relation, we need to figure out our base case(s). All recursive functions must have at least one base case, otherwise we will get stuck in an infinite recursion. How do we identify base cases? This is also something that comes with practice, but there are some different ideas that we can consider when confronted with this task.
One type of base case is to stop recursing for an input that exists at the edge of the possible range of inputs. For example, if we have a parameter that can be any integer ≥ 0, we might have a base case for when that integer is 0. This is what we did in our Fibonacci example, except we also had a base case for n = 1, since we call fib(n - 2) within the function and fib(-1) is outside the range of possible values for n.
Another type of base case is to stop recursing when we hit an invalid input. A good example of this is depth-first search on a binary tree:
function dfs(node, value) {
  if (node === null) return false;
  if (node.value === value) return true;
  return dfs(node.left, value) || dfs(node.right, value);
}
Here, we just return false if we encounter a null node, since there’s no way for a subtree that doesn’t exist to contain the value we’re looking for.
For our house robber problem, we already discussed a couple possible base cases when we were trying to figure out the recurrence relation:
- What do we do if we have one house? Obviously we rob that one!
- What do we do if we have two houses? Obviously we pick the one with more money!
These base cases are of the first category that we mentioned, where we stop recursing for some edge input value. Here is how the base cases would change our function definition:
f(h, 0) = h[0]
f(h, 1) = max(h[1], h[0])
f(h, x >= 2) = max(h[x] + f(h, x - 2), f(h, x - 1))
These base cases work fine, but we can simplify. If you think about it a bit more, you will realize we don’t actually need those two specific base cases for 0 or 1. Why? Because we can simply make f(h, x) return 0 whenever x is less than 0. See below how using this new base case and our standard recurrence relation for x = 0 and x = 1 simplifies to the above base cases:
f(h, x < 0) = 0

f(h, 0) = max(h[0] + f(h, 0 - 2), f(h, 0 - 1))
        = max(h[0] + f(h, -2), f(h, -1))
        = max(h[0] + 0, 0)
        = h[0]

f(h, 1) = max(h[1] + f(h, 1 - 2), f(h, 1 - 1))
        = max(h[1] + f(h, -1), f(h, 0))
        = max(h[1] + 0, h[0])
        = max(h[1], h[0])
Our updated base case is a member of the second category, which is to stop recursing when we reach an invalid input. When designing a solution to a dynamic programming problem, it is always a good idea to try and think about how you can simplify your base case(s). This will make the coding portion much simpler.
Write the Recursive Function
Now that we have figured out our recurrence relation and base cases, we can write the code for our naive recursive solution!
function rob(nums) {
  function maxHaulFrom(x) {
    if (x < 0) return 0;
    return Math.max(nums[x] + maxHaulFrom(x - 2), maxHaulFrom(x - 1));
  }
  return maxHaulFrom(nums.length - 1);
}
Notice that instead of making our function take two parameters like we had when we were working out the recurrence relation and base cases, we simply made our recursive function close over the nums variable and take a single parameter for the index. I did it like this so our code would conform to LeetCode’s function signature, and it also simplifies the code by removing a parameter that stays constant anyway.
Add Memoization
We now have code that solves our problem, but unfortunately it performs poorly, as we are calculating the result of the same subproblems multiple times. To illustrate this, let’s look at an example:
rob([2, 4, 3, 9, 1, 12])
The result we would get here is 25 (4 + 9 + 12). However, let’s look at everything we have to compute in order to get this answer:
maxHaulFrom(5) = Math.max(nums[5] + maxHaulFrom(3), maxHaulFrom(4))
maxHaulFrom(4) = Math.max(nums[4] + maxHaulFrom(2), maxHaulFrom(3))
maxHaulFrom(3) = Math.max(nums[3] + maxHaulFrom(1), maxHaulFrom(2))
maxHaulFrom(3) = Math.max(nums[3] + maxHaulFrom(1), maxHaulFrom(2))
maxHaulFrom(2) = Math.max(nums[2] + maxHaulFrom(0), maxHaulFrom(1))
maxHaulFrom(2) = Math.max(nums[2] + maxHaulFrom(0), maxHaulFrom(1))
maxHaulFrom(2) = Math.max(nums[2] + maxHaulFrom(0), maxHaulFrom(1))
maxHaulFrom(1) = Math.max(nums[1] + maxHaulFrom(-1), maxHaulFrom(0))
maxHaulFrom(1) = Math.max(nums[1] + maxHaulFrom(-1), maxHaulFrom(0))
maxHaulFrom(1) = Math.max(nums[1] + maxHaulFrom(-1), maxHaulFrom(0))
maxHaulFrom(1) = Math.max(nums[1] + maxHaulFrom(-1), maxHaulFrom(0))
maxHaulFrom(1) = Math.max(nums[1] + maxHaulFrom(-1), maxHaulFrom(0))
maxHaulFrom(0) = Math.max(nums[0] + maxHaulFrom(-2), maxHaulFrom(-1))
maxHaulFrom(0) = Math.max(nums[0] + maxHaulFrom(-2), maxHaulFrom(-1))
maxHaulFrom(0) = Math.max(nums[0] + maxHaulFrom(-2), maxHaulFrom(-1))
maxHaulFrom(0) = Math.max(nums[0] + maxHaulFrom(-2), maxHaulFrom(-1))
maxHaulFrom(0) = Math.max(nums[0] + maxHaulFrom(-2), maxHaulFrom(-1))
maxHaulFrom(0) = Math.max(nums[0] + maxHaulFrom(-2), maxHaulFrom(-1))
maxHaulFrom(0) = Math.max(nums[0] + maxHaulFrom(-2), maxHaulFrom(-1))
maxHaulFrom(0) = Math.max(nums[0] + maxHaulFrom(-2), maxHaulFrom(-1))
That is a lot of repeated work! You may also notice that the longer our list of houses gets, the amount of work we have to repeat explodes exponentially (O(2^n)). We can overcome this by simply memoizing (caching) the return value of maxHaulFrom for all houses. Here is what that would look like:
function rob(nums) {
  const n = nums.length;
  const memo = new Array(n).fill(-1);
  function maxHaulFrom(x) {
    if (x < 0) return 0;
    if (memo[x] === -1) {
      memo[x] = Math.max(nums[x] + maxHaulFrom(x - 2), maxHaulFrom(x - 1));
    }
    return memo[x];
  }
  return maxHaulFrom(n - 1);
}
This means that we only need to calculate maxHaulFrom once for every house, which brings our time complexity down to linear time (O(n)).
Switch to Bottom-Up
We now have our solution down to linear time. This is great, but we can make it even better by solving the problem iteratively rather than recursively. Recursion usually incurs performance costs due to stack frame allocation; we won’t get into the details, because it’s sufficient to know that most of the time, if all else is equal, iteration beats recursion. Here is what our bottom-up solution would look like:
function rob(nums) {
  const n = nums.length;
  const maxHauls = new Array(n).fill(0);
  const getMaxHaul = (x) => (x >= 0 ? maxHauls[x] : 0);
  for (let i = 0; i < n; i++) {
    maxHauls[i] = Math.max(nums[i] + getMaxHaul(i - 2), getMaxHaul(i - 1));
  }
  return maxHauls[n - 1];
}
Optimize to Constant Memory (If You Can)
One last observation we can apply is that we only ever need to remember the maximum haul from the previous two houses (kind of like we only had to remember the last two numbers in Fibonacci!). This means we can drop our table altogether and substitute it with two variables. Here is what this last optimization looks like:
function rob(nums) {
  let prev = 0;
  let curr = nums[0];
  for (let i = 1; i < nums.length; i++) {
    const temp = curr;
    curr = Math.max(nums[i] + prev, curr);
    prev = temp;
  }
  return curr;
}
Keep in mind that this optimization is not available for all dynamic programming problems. Remember, you can only do this when you need just a constant number of previous values!
Conclusion
To recap, your plan of attack when you get a dynamic programming problem in an interview is:
- Identify that the problem you have been given is a dynamic programming problem.
- Figure out the recurrence relation.
- Figure out the base case(s).
- Write a naive recursive solution.
- Add memoization to your recursive solution.
- Adapt your solution to use the bottom-up approach.
- Optimize your bottom-up approach to constant memory (if you are able to).
If you are studying for a coding interview, you should check out AlgoMonster (affiliate link). It will help you crush the technical interview in less time and with fewer sleepless nights grinding away random problems. You will learn the key patterns necessary to solve any interview question and gain the systematic knowledge you need to prove your expertise. Be more confident as you walk into that interview!
That’s it! I hope this has been a great help to all of you and wish you all the best in your future coding interviews. If you have any critiques of this blog post, please email me at mathewhorner456@gmail.com or DM me on Twitter @mathewhorner_ and let me know your thoughts. Thanks for reading!