Making Change With Dynamic Programming πŸ’°

I have a love-hate relationship with the Coin Change question on LeetCode. I studied it for hours for my algorithms class, then, months later, was asked a variation of this question in an interview and totally blew it. Just goes to show how important consistent practice is!

Back To Back SWE's video helped immensely with my understanding of this problem. I highly recommend watching it; he walks through every iteration of the problem.

Let's dive in!

The Question

You are given coins of different denominations and a total amount of money amount. Write a function to compute the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return -1.

A sample input/output is also provided, and is below:
Input: coins = [1, 2, 5], amount = 11
Output: 3
Explanation: 11 = 5 + 5 + 1

The question is asking for the minimum number of coins necessary to equal the given total. A good clue that this problem has a dynamic programming solution is the use of the word "fewest." Dynamic programming often shows up in optimization problems with a constraint, where you are asked to find the minimum, maximum, shortest number of this or that... etc. I can't recall how I stumbled upon this, but there is a 500+ page PDF from the Yale CS department on data structures and programming techniques. If you scroll to section 5.1.4, you can click on the title to view the section on dynamic programming.

The Approach

My first inclination is the brute force approach: check each coin, and check every combination with the other coins, to see if I reach my total. However, as you likely already guessed, that method is wildly inefficient.

Two important properties of DP algorithms are optimal substructure and overlapping subproblems. We will also be using a memoization table: a data structure that stores values we've already calculated so we don't have to recalculate them later on. For each amount along the way, we are solving the same subproblem: which option uses fewer coins? This is why using a memoization table to store already calculated values is so important to the algorithm.
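To make that concrete, here is the recurrence the rest of this post builds toward, written out in my own words (T is the table we're about to create):

# T[x] = the fewest coins needed to make the amount x
# T[0] = 0 (it takes zero coins to make zero)
# T[x] = min(T[x - coin] + 1) over every coin that fits, i.e. coin <= x

Each entry only depends on smaller entries, which is what lets us fill the table from left to right.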

The Solution

As almost always, the first code I'll think about/write handles the edge cases, like what to return when the amount is 0.

from typing import List

def coin_change(coins: List[int], amount: int) -> int:
    if not amount:
        return 0

So I do feel like using that if not blah syntax is a little weird in this context, since we are talking about a numerical amount, so another option could be if amount == 0:. I feel like the syntax in the code block is the more Pythonic way, but I'm also no Python expert. I'd love to hear others' thoughts/opinions on this!
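For what it's worth, the two checks behave the same here, because 0 is the only falsy integer. A quick check of my own (not from the original solution):

print(not 0)    # True  -> the early return fires only for amount == 0
print(not 11)   # False
print(not -3)   # False (negative amounts are still truthy)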

A commonly used illustration of memoization is the Fibonacci sequence algorithm; there's a quick sketch of that idea just below. In this problem, though, we are going to use a list to store our calculated values, and the bit of code after the sketch builds that memoization table.
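Here is a minimal memoized Fibonacci, purely to show the "compute once, store, reuse" pattern (my own sketch, not part of the coin change solution):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each fib(n) is computed once, then served from the cache on later calls
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55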

    cols = amount + 1
    # create a table with amount + 1 columns
    T = [0 if idx == 0 else float('inf') for idx in range(cols)]
    # set the 0th index to 0; set every other index to infinity

First I am creating a variable, cols, that represents the total number of columns (or indices) in my memoization table. It will soon become clear why we want amount + 1 columns. The next line uses a list comprehension to build the table. There are other ways to do this, such as a plain for loop, but, again, I just felt like this was the more Pythonic way.

In keeping with our example input, after the code above runs, I should have a list that contains one 0 followed by eleven infs.

T = [0, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf]

You don't have to set the other indices in your table to inf. I've seen some solutions use amount + 1 instead; as long as the placeholder is greater than any answer could ever be (and no answer can use more than amount coins), you're good! After our algorithm runs, whatever value is at the last index, T[amount], is going to be our answer. The value at T[11] in this example is the minimum number of coins needed to add up to the value 11, our original given amount.
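If you'd rather avoid inf, the variation mentioned above could look something like this (a sketch of that alternative, not the code used in the rest of this post):

    # amount + 1 works as an "impossible" placeholder, since even with only
    # 1-unit coins the answer can never exceed amount coins
    T = [0] + [amount + 1] * amount
    # ... run the same nested loops as below ...
    # return -1 if T[-1] > amount else T[-1]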


We will want to start the meat of our algorithm next! I was surprised at how relatively rudimentary the solution was. The basic idea is this: for each coin in our given coins list, we check that coin against every entry currently in our memoization table (T in this example) to see which option uses fewer coins (this is our overlapping subproblem).

    for j in range(len(coins)):
        for i in range(1, cols): 

Good ol' nested for-loop πŸ˜… We are iterating through our coins list [1, 2, 5], then accessing each index in our memoization table, starting at index 1. Index 0 is 0 because each value in T represents the number of coins needed to make that amount, and when our target amount is 0, we need 0 coins to achieve it. Similarly, when our target amount is 11, we need ?? coins to achieve that. The ?? is what we are going to find next!

            coin = coins[j]
            if i >= coin:
                T[i] = min(T[i], T[i - coin] + 1)

First, I create a variable called coin and set it to the current coin in the given list; in our case, this is 1 initially. It's not required, but it helps me keep things organized in my mind. I then want to check if i is greater than or equal to the current coin. If that condition is true, we decide what value will ultimately be assigned to T[i]: we take whichever is smaller, the current value T[i] or T[i - coin] + 1. We need the + 1 because we would be adding a new coin to our coin combination. We deduct the value of the current coin from i because we are checking whether the current coin can 'go into' that amount. For example, if i is 2 and our current coin is 5, the value 5 cannot be used to make the amount 2. (Hopefully this isn't too confusing of an explanation. Again, Back To Back SWE explains it so much better.)
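To see the recurrence in action, here is a quick standalone trace I ran for coins = [1, 2, 5] and amount = 11 (my own check, written slightly differently from the solution code), printing T after each coin is processed:

coins, amount = [1, 2, 5], 11
T = [0] + [float('inf')] * amount

for coin in coins:
    for i in range(1, amount + 1):
        if i >= coin:
            T[i] = min(T[i], T[i - coin] + 1)
    print(coin, T)

# 1 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
# 2 [0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6]
# 5 [0, 1, 1, 2, 2, 1, 2, 2, 3, 3, 2, 3]

T[11] settles at 3, which matches the expected output of 5 + 5 + 1.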

And really, that is it! Again, a core property of dynamic programming is that there are overlapping subproblems, so oftentimes the code ends up being only a few lines (that ultimately run many times over). A common hang-up is knowing the difference between a dynamic programming solution and a recursive solution (and when to use one or the other). I still don't 100% understand the full differences, but one thing I've observed very generally is that if your input, n, is large, a naive recursive solution can end up being very inefficient, because it has to push at least n function calls onto the call stack. Overwhelming the stack can lead to a stack overflow, and this is why dynamic programming is kind of like applying optimization to a recursive-like problem.
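For contrast, here is roughly what a top-down (recursive + memoized) version of this same problem could look like. This is my own sketch, not the approach used in the rest of this post:

from functools import lru_cache
from typing import List

def coin_change_top_down(coins: List[int], amount: int) -> int:
    @lru_cache(maxsize=None)
    def fewest(remaining):
        # fewest coins needed to make `remaining`, or inf if it can't be done
        if remaining == 0:
            return 0
        if remaining < 0:
            return float('inf')
        return min(fewest(remaining - coin) + 1 for coin in coins)

    best = fewest(amount)
    return -1 if best == float('inf') else best

print(coin_change_top_down([1, 2, 5], 11))  # 3

Note that this still pushes up to roughly amount stack frames, which is exactly the stack concern above; the bottom-up table in this post sidesteps that entirely.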


At the end of our code, we ultimately need to return the minimum number of coins required to make 11. If there is no possible combination, we return -1. I am getting used to being more savvy with my return statements, but I am also aware that sometimes simpler is better (and more readable/understandable). Below, I am simply saying, "return -1 if the value at the last index in T is infinity; otherwise, return the value at the last index in T." If the value at the last index in T is still infinity, it means there is no valid combination.

    return -1 if (T[-1] == float('inf')) else T[-1]

You could also write:

    if (T[-1] == float('inf')):
        return -1
    else:
        return T[-1]

Go with whichever makes the most sense to you.

Here is the full solution:

from typing import List

class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:
        if amount == 0:
            return 0

        cols = amount + 1
        T = [0 if idx == 0 else float('inf') for idx in range(cols)]

        for j in range(len(coins)):
            for i in range(1, cols):
                coin = coins[j]
                if i >= coin:
                    T[i] = min(T[i], T[i - coin] + 1)

        return -1 if (T[-1] == float('inf')) else T[-1]
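And a quick sanity check with the sample input, plus one case with no valid combination (the second case is my own added example):

print(Solution().coinChange([1, 2, 5], 11))  # 3  -> 5 + 5 + 1
print(Solution().coinChange([2], 3))         # -1 -> no combination of 2s makes 3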

The Complexity

The time complexity is O(amount Γ— number of coins), or O(a Γ— c). You'll more often see this written as O(mn).

The space complexity is O(a), where a is the total amount: we calculate and store one subproblem result for each amount from 0 to a.

Thank You

I know this post was long and very text-heavy and riddled with subpar explanations, but thank you for reading and sticking with it! πŸ™‚
