## 1. Overall Idea
The sub-problem: if we're given the optimal solution for `item[0]...item[i]` with a bag of size `n`, then when facing `item[i+1]` we only have to think about two things:
- If we add `item[i+1]` of `weight[i+1]` and `value[i+1]`, then the total value is (the optimal solution for `item[0]...item[i]` with a bag of size `n - weight[i+1]`) + `value[i+1]`.
- See if not including `item[i+1]` yields a higher value; in other words, check the optimal solution for `item[0]...item[i]` with a bag of size `n`.
Therefore, it's clear to see that we can define the sub-problem as a two-dimensional memoization, where:
- `dp[i][j]` = the optimal solution for the first `i` items with a bag of size `j`
- `dp[i][j]` = the higher of `dp[i-1][j]` (skip `item[i]`) and `dp[i-1][j - weight[i]] + value[i]` (take `item[i]`)
## 2. The Classic Approach
We first initialise a 2D array, where row `i` corresponds to `item[i]` and column `j` corresponds to a bag of size `j`, then populate the base cases of "empty bag with all items" and "first item with bags of all sizes":
```python
dp = [[0] * (bag_size + 1) for _ in range(item_len)]
# base case: an empty bag (column 0) is worth 0, covered by the initialisation
# base case: with only the first item, take it whenever it fits
for j in range(0, bag_size + 1):
    dp[0][j] = values[0] if j >= weights[0] else 0
```
We then apply the sub-problem induction logic; note that we need to start from the second row, and from the second column within each row.
```python
for i in range(1, item_len):
    for j in range(1, bag_size + 1):
        if weights[i] > j:  # item too big for this bag
            dp[i][j] = dp[i - 1][j]
        else:
            add_item = dp[i - 1][j - weights[i]] + values[i]
            skip_item = dp[i - 1][j]
            dp[i][j] = max(add_item, skip_item)
```
- Space Complexity: `O(bag_size * item_len)`
- Time Complexity: `O(bag_size * item_len)`
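Putting the classic approach together as a runnable sketch (the function name `knapsack_2d` and the sample data are my own, not from the text above):

```python
def knapsack_2d(weights, values, bag_size):
    """Classic 0/1 knapsack: dp[i][j] = best value using items 0..i with a bag of size j."""
    item_len = len(weights)
    dp = [[0] * (bag_size + 1) for _ in range(item_len)]
    # base case: with only the first item, take it whenever it fits
    for j in range(bag_size + 1):
        dp[0][j] = values[0] if j >= weights[0] else 0
    for i in range(1, item_len):
        for j in range(1, bag_size + 1):
            if weights[i] > j:  # item too big for this bag
                dp[i][j] = dp[i - 1][j]
            else:
                dp[i][j] = max(dp[i - 1][j],
                               dp[i - 1][j - weights[i]] + values[i])
    return dp[item_len - 1][bag_size]

# three items of weights 1, 3, 4 and values 15, 20, 30, with a bag of size 4
print(knapsack_2d([1, 3, 4], [15, 20, 30], 4))  # → 35 (take the items of weight 1 and 3)
```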
## 3. Space-Optimised Approach
In the previous approach we used a 2D array to store the computed results of our DP, but we can observe one thing: to compute the results for `item[i]`, we only need the results for `item[i-1]`. Therefore, we can reduce our DP to 1D by storing only the previous item's results across all bag sizes.
```python
dp = [0] * (bag_size + 1)
# base case: with only the first item, take it whenever it fits
for j in range(0, bag_size + 1):
    if weights[0] <= j:
        dp[j] = values[0]
for i in range(1, item_len):
    # go from large bag sizes to small ones, so dp[j - weights[i]] still
    # holds the result for the previous item (each item is used at most once)
    for j in range(bag_size, weights[i] - 1, -1):
        dp[j] = max(dp[j], dp[j - weights[i]] + values[i])
print(dp[bag_size])
```
- Space Complexity: `O(bag_size)`
- Time Complexity: `O(bag_size * item_len)`
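The same row-reuse idea as a self-contained function (again, the name `knapsack_1d` and the sample data are mine), which should return the same result as the 2D version:

```python
def knapsack_1d(weights, values, bag_size):
    """0/1 knapsack with a single row, iterating bag sizes from large to small."""
    dp = [0] * (bag_size + 1)
    for w, v in zip(weights, values):
        # large-to-small, so dp[j - w] still holds the previous item's result,
        # which keeps each item used at most once
        for j in range(bag_size, w - 1, -1):
            dp[j] = max(dp[j], dp[j - w] + v)
    return dp[bag_size]

print(knapsack_1d([1, 3, 4], [15, 20, 30], 4))  # → 35, matching knapsack_2d
```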
## 4. The "Forward-Thinking" Approach (Intuitive)
In all previous approaches, we think in the way of "if I add the current item (of weight `w`) into my bag (of size `n`), I must trace back to find the optimal solution for a bag of size `n - w` in order to calculate the new total value". This is a form of backtracking, using "backward thinking".

Another way to think about this could be "if I add the current item (of weight `w`) into my bag (of size `n`), it will produce a new solution (not necessarily optimal) for a bag of size `n + w`, given the optimal solution for a bag of size `n`".
This is more like "forward thinking", which is actually just the same logic, but we will now use a hashmap instead of an array to store the intermediate results, and each key-value pair of the hashmap now has a slightly different meaning than in the array approach:
- each key of the hashmap is an integer in the range `[0, bag_size]`
- the value of key `n` marks the optimal solution for a bag holding items with a total weight of exactly `n`
```python
# an empty bag has a maximum value of 0
dp = {0: 0}
for i in range(0, item_len):
    prev = dict(dp)
    for size in prev:
        new_size = size + weights[i]
        if new_size <= bag_size:
            if new_size in prev:
                dp[new_size] = max(prev[new_size], prev[size] + values[i])
            else:
                dp[new_size] = prev[size] + values[i]
print(max(dp.values()))
```
Note that we cannot simply return `dp[bag_size]` as the final solution with this approach, since:
- there might not be a combination of items that gives a total weight of exactly `bag_size`
- the optimal solution might come from a key that is less than `bag_size`
- Space Complexity: `O(bag_size)`
- Time Complexity: `O(bag_size * item_len)`
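Wrapped as a function so it can be compared directly with the earlier approaches (the name `knapsack_forward` and the sample data are my own):

```python
def knapsack_forward(weights, values, bag_size):
    """Forward-thinking 0/1 knapsack: dp maps an exact total weight to its best value."""
    dp = {0: 0}  # an empty bag has a maximum value of 0
    for w, v in zip(weights, values):
        prev = dict(dp)  # snapshot, so each item extends only pre-existing states
        for size in prev:
            new_size = size + w
            if new_size <= bag_size:
                dp[new_size] = max(dp.get(new_size, 0), prev[size] + v)
    # the optimum may sit at any key <= bag_size, not just at bag_size itself
    return max(dp.values())

print(knapsack_forward([1, 3, 4], [15, 20, 30], 4))  # → 35, same as before
```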
## 5. The Target Sum Problem
Given an array of positive integers `nums` and an integer `target`, return `True` if there is a subset of `nums` such that the sum of its elements equals `target`; return `False` otherwise.

Example: `nums = [1,2,3,3]`, `target = 7` will return `True` (`[1,3,3]`).
We could solve this problem with the classic knapsack approach, where `target` is the bag size and each number `x` in `nums` is regarded as an item of weight `x` and value `x`; at the end we simply return `dp[target] == target`. Equivalently, since we only care about reachability, we can keep a boolean DP:
```python
dp = [False] * (target + 1)
dp[0] = True
for num in nums:
    # go from large sums to small ones, so each number is used at most once
    for i in range(target, num - 1, -1):
        dp[i] = dp[i] or dp[i - num]
return dp[target]
```
However, it's also interesting to follow the "forward thinking" approach:
- we loop through each number in `nums`
- at each number `n`, we keep a collection of all possible sums so far, `sums`, and add in the new sums produced by `n + sums[i]`
An example using a hashmap:

```python
possible_sums = {0: True}
for n in nums:
    prev = dict(possible_sums)
    for s in prev:
        if s + n <= target:
            possible_sums[s + n] = True
return target in possible_sums
```
Note that the only difference here is the order in which we find the solution, from "looking back" to "looking forward". However, by turning it into "forward thinking", it now feels more linear and straightforward.
I often find it useful to take the "forward-thinking" approach for many other knapsack-like problems.
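To make the snippet above concrete, here is the same idea wrapped in a function (the name `can_reach_target` is my own) and run on the example from the problem statement:

```python
def can_reach_target(nums, target):
    possible_sums = {0: True}  # the empty subset sums to 0
    for n in nums:
        prev = dict(possible_sums)  # snapshot, so each number is used at most once
        for s in prev:
            if s + n <= target:
                possible_sums[s + n] = True
    return target in possible_sums

print(can_reach_target([1, 2, 3, 3], 7))  # → True, e.g. [1, 3, 3]
print(can_reach_target([2, 4, 6], 5))     # → False, every subset sum is even
```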
## Conclusion

In this blog, I compared the classic approach, the space-optimised approach, and the forward-thinking approach to solving the 0/1 knapsack problem, and showed how the forward-thinking approach can give us another subtle way to view and understand the knapsack problem, by solving the similar "Target Sum" problem.

All of these approaches are essentially the same dynamic-programming strategy, but I hope that this side-by-side comparison helps us build a more thorough understanding of the knapsack problem and the ideas behind dynamic programming.