DEV Community

ShangjieZhou


The 0/1 Knapsack Problem

1. Overall Idea

The sub-problem:
If we're given the optimal solutions for item[0]...item[i] over all bag sizes, then when facing item[i+1] we only have to think about two things:

  1. If we add item[i+1] (of weight weight[i+1] and value value[i+1]), the total value is: (optimal solution for item[0]...item[i] with a bag of size n - weight[i+1]) + value[i+1]
  2. See if skipping item[i+1] yields a higher value; in other words, check the optimal solution for item[0]...item[i] with a bag of size n

Therefore, it's natural to define the sub-problem as a two-dimensional memoization, where:

  • dp[i][j] = the optimal solution for the first i items with a bag of size j
  • dp[i][j] = the higher of dp[i-1][j] and dp[i-1][j - weight[i]] + value[i]
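The recurrence can also be written directly as a top-down memoized function; here is a minimal sketch (the function name and sample inputs are my own, chosen for illustration):

```python
from functools import lru_cache

def knapsack(weights, values, bag_size):
    @lru_cache(maxsize=None)
    def best(i, j):
        # best(i, j) = optimal value using item[0]..item[i] with a bag of size j
        if i < 0 or j == 0:
            return 0
        skip_item = best(i - 1, j)
        if weights[i] > j:  # item too big, forced to skip
            return skip_item
        return max(skip_item, best(i - 1, j - weights[i]) + values[i])
    return best(len(weights) - 1, bag_size)

print(knapsack([1, 3, 4], [15, 20, 30], 4))  # items 0 and 1 fit: 15 + 20 = 35
```

The bottom-up tables in the next sections fill in exactly the values this recursion would compute on demand.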

2. The Classic Approach

We first initialise a 2D array, where row i corresponds to item[i] and column j corresponds to a bag of size j.

Populate the base cases of "empty bag with every item" and "first item with bags of all sizes":

dp = [[0] * (bag_size + 1) for _ in range(item_len)]
for i in range(0, item_len):
    dp[i][0] = 0  # an empty bag holds no value
for i in range(0, bag_size + 1):
    dp[0][i] = values[0] if i >= weights[0] else 0

Now apply the sub-problem recurrence; note that we start from the second row and, within each row, from the second column.

for i in range(1, item_len):
    for j in range(1, bag_size + 1):
        if weights[i] > j: # item too big
            dp[i][j] = dp[i - 1][j]
        else:
            add_item = dp[i - 1][j - weights[i]] + values[i]
            skip_item = dp[i - 1][j]
            dp[i][j] = max(add_item, skip_item)
  • Space Complexity: O(bag_size * item_len)
  • Time Complexity: O(bag_size * item_len)
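For reference, the two snippets above combined into one self-contained function (the name knapsack_2d and the sample data are made up for illustration):

```python
def knapsack_2d(weights, values, bag_size):
    item_len = len(weights)
    # dp[i][j] = optimal value for the first i+1 items with a bag of size j
    dp = [[0] * (bag_size + 1) for _ in range(item_len)]
    for j in range(bag_size + 1):
        dp[0][j] = values[0] if j >= weights[0] else 0
    for i in range(1, item_len):
        for j in range(1, bag_size + 1):
            if weights[i] > j:  # item too big
                dp[i][j] = dp[i - 1][j]
            else:
                add_item = dp[i - 1][j - weights[i]] + values[i]
                skip_item = dp[i - 1][j]
                dp[i][j] = max(add_item, skip_item)
    return dp[item_len - 1][bag_size]

print(knapsack_2d([1, 3, 4], [15, 20, 30], 4))  # → 35
```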

3. Space-Optimised Approach

In the previous approach we used a 2D array to store the computed results, but observe that to compute the row for item[i] we only need the row for item[i-1].

Therefore, we can shrink the DP to a 1D array that stores, for each bag size, the result for the previous item. One catch: the inner loop must now run from right to left, so that dp[j - weights[i]] still holds the previous item's result when we overwrite dp[j].

dp = [0] * (bag_size + 1)
for i in range(0, bag_size + 1):
    if weights[0] <= i: dp[i] = values[0]

for i in range(1, item_len):
    # right to left, so dp[j - weights[i]] is still the result for item i-1
    for j in range(bag_size, weights[i] - 1, -1):
        dp[j] = max(dp[j], dp[j - weights[i]] + values[i])

print(dp[bag_size])
  • Space Complexity: O(bag_size)
  • Time Complexity: O(bag_size * item_len)
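Wrapped into a function (and folding the first item into the main loop, which the all-zero init allows), the 1D version can be sanity-checked against brute-force enumeration; the helper names and sample data below are my own:

```python
from itertools import combinations

def knapsack_1d(weights, values, bag_size):
    dp = [0] * (bag_size + 1)
    for i in range(len(weights)):
        # right to left, so dp[j - weights[i]] still refers to the previous item's row
        for j in range(bag_size, weights[i] - 1, -1):
            dp[j] = max(dp[j], dp[j - weights[i]] + values[i])
    return dp[bag_size]

def brute_force(weights, values, bag_size):
    # try every subset of items; only viable for tiny inputs
    best = 0
    for r in range(len(weights) + 1):
        for combo in combinations(range(len(weights)), r):
            if sum(weights[i] for i in combo) <= bag_size:
                best = max(best, sum(values[i] for i in combo))
    return best

weights, values, bag_size = [1, 3, 4, 5], [15, 20, 30, 40], 7
print(knapsack_1d(weights, values, bag_size))  # → 55
print(brute_force(weights, values, bag_size))  # → 55
```

If the inner loop ran left to right instead, dp[j - weights[i]] might already include item i, silently turning this into the unbounded knapsack.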

4. The "Forward-Thinking" Approach (Intuitive)

In all previous approaches, we think in the way of "if I add the current item (of weight w) into my bag (of size n), I must look back to the optimal solution for a bag of size n - w in order to compute the new total value".

This is a form of "backward thinking": each state reaches back to smaller sub-problems.

Another way to think about it is: "if I add the current item (of weight w) to a bag whose items have total size n, it produces a new solution (not necessarily optimal) for total size n + w, given the optimal solution for total size n".

This is "forward thinking", which is actually the same logic, but we will now use a hashmap instead of an array to store the intermediate results, and each key-value pair now has a slightly different meaning than in the array approach:

  • each key of the hashmap is an integer in range [0, bag_size]
  • each value of key n marks the optimal solution for a bag that has items with total size of exactly n
# an empty bag has maximum value of 0
dp = {0: 0}

for i in range(0, item_len):
    prev = dict(dp)
    for size in prev:
        new_size = size + weights[i]
        if new_size <= bag_size:
            if new_size in prev:
                dp[new_size] = max(prev[new_size], prev[size] + values[i])
            else:
                dp[new_size] = prev[size] + values[i]

print(max(dp.values()))

Note that we cannot simply return dp[bag_size] as the final solution with this approach, since:

  1. There might not be a combination of items whose total weight is exactly bag_size
  2. The optimal solution might therefore sit at a key smaller than bag_size
  • Space Complexity: O(bag_size)
  • Time Complexity: O(bag_size * item_len)

5. The Target Sum Problem

Given an array of positive integers nums, and an integer target, return True if there is a subset of nums such that the sum of its elements is equal to target, return False otherwise.
Example: nums = [1,2,3,3], target = 7 will return True ([1,3,3])

We could solve this problem with the classic knapsack approach, where target is the bag size and each number x in nums is an item of weight x. Since we only care whether a sum is reachable, each dp entry can simply be a boolean, and at the end we return dp[target].

dp = [False] * (target + 1)
dp[0] = True

for num in nums:
    for i in range(target, num - 1, -1):  # backwards: use each number at most once
        dp[i] = dp[i] or dp[i - num]

return dp[target]
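A runnable version of this check (the function name can_reach is my own); note again that the inner loop runs backwards so each number is used at most once:

```python
def can_reach(nums, target):
    dp = [False] * (target + 1)
    dp[0] = True
    for num in nums:
        # right to left, so dp[i - num] never already includes this num
        for i in range(target, num - 1, -1):
            dp[i] = dp[i] or dp[i - num]
    return dp[target]

print(can_reach([1, 2, 3, 3], 7))  # → True  ([1, 3, 3])
print(can_reach([3], 6))           # → False (a single 3 cannot be reused)
```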

However, it's also interesting to follow the "forward thinking" approach:

  • we loop through each number in nums
  • at each number n, we keep the set of all sums achievable so far, and add in the new sums produced by n + s for every achievable sum s

An example using hashmap

possible_sums = {0: True}

for n in nums:
    prev = dict(possible_sums)
    for s in prev:
        if s + n <= target: possible_sums[s + n] = True

return (target in possible_sums)

Note that the only difference here is the order in which we build the solution, from "looking back" to "looking forward". By turning it into "forward thinking", it now feels more linear and straightforward.

I often find it useful to take the "forward-thinking" approach for many other knapsack-like problems.

Conclusion

In this blog, I compared the classic, space-optimised, and forward-thinking approaches to solving the 0/1 knapsack problem, and showed how the forward-thinking approach gives us another subtle way to view and understand the knapsack problem, by solving the similar "Target Sum" problem.

All of these approaches are essentially the same strategy using dynamic programming, but I hope that by giving a side-by-side comparison here, it can help us build a more thorough understanding of the knapsack problem and the idea behind dynamic programming.
