# Solving the Knapsack Problem with Dynamic Programming

Tim Downey

Graduate Algorithms Journey (8 Part Series)

## What is the Knapsack Problem?

Consider a backpack (or "knapsack") that can hold up to a certain amount of weight. You have a set of items at your disposal, each being worth a different value and having a different weight. You want to fill the backpack with the most valuable combination of items without overburdening it and going over the weight limit. This is the Knapsack Problem. It's one of the most well studied combinatorial optimization problems and a popular introduction to dynamic programming. In this post, we'll explain two variations of the knapsack problem:

- Items can be selected repeatedly (the grocery store variation)
- Items can be selected at most once (the museum variation)

Before we dive in, though, let's first talk briefly about what Dynamic Programming entails.

## Detour: Brief Introduction to Dynamic Programming

You may have heard the term "dynamic programming" come up during interview prep or be familiar with it from an algorithms class you took in the past. At its most basic, Dynamic Programming is an algorithm design technique that involves identifying subproblems within the overall problem and solving them starting with the smallest one. Results of smaller subproblems are memoized, or stored for later use by the subsequent larger subproblems. Consider the following array, *A*:

Say we want to do a prefix sum across the array and we're specifically interested in element 4 (highlighted in red). Simple enough, just loop over and add up the values before it. Now let's say we want to know the prefix sum up to element 5. What about element 2? Do we need to loop over them all again for each one? Using Dynamic Programming we can do this a bit more efficiently using an additional array *T* to memoize intermediate values.

Let *T[i]* be the prefix sum at element *i*. We can then say **T[i] = T[i-1] + A[i]**. Here *T[i-1]* represents a smaller subproblem -- all of the indices prior to the current one. Now look at the array *T* below to help visualize this:
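As a minimal sketch of this recurrence (using an example array, since the specific values don't matter), *T* can be filled in a single left-to-right pass:

```python
# Example array; the specific values are illustrative.
A = [3, 1, 4, 1, 5, 9, 2, 6]

# T[i] = prefix sum of A up to and including index i.
T = [0] * len(A)
T[0] = A[0]  # base case: nothing comes before index 0
for i in range(1, len(A)):
    T[i] = T[i - 1] + A[i]  # reuse the memoized smaller subproblem

print(T)  # → [3, 4, 8, 9, 14, 23, 25, 31]
```

After this single *O(n)* pass, the prefix sum at any element is a constant-time lookup instead of a fresh loop.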

This was a pretty simple example of Dynamic Programming, but we will use these same thought processes and techniques to solve the knapsack problem. Now back to our regularly scheduled programming...

## Knapsack Problem Restated

Let's restate the problem a bit more formally this time. We have the following:

- A knapsack that can hold a total weight *W*
- A collection of *n* items to choose from
- Each of these *n* items has a weight *w*_{i} drawn from the array *w*_{1}...*w*_{n}
- Each of these *n* items has a value *v*_{i} drawn from the array *v*_{1}...*v*_{n}

We want to choose the optimal combination of items such that we maximize the total value of our items without exceeding the maximum weight limit *W*.

### Our Knapsack and Items

For the sake of the problems below, we'll consider the following knapsack and collection of items:

**Knapsack:** *W* = 15

**Items:**

| Item | Weight | Value |
|---|---|---|
| 1 | 2 | 1 |
| 2 | 10 | 20 |
| 3 | 3 | 3 |
| 4 | 6 | 14 |
| 5 | 18 | 100 |

## Repeated Selection (Grocery Store Variant)

The first variation of the knapsack problem allows us to repeatedly select the same item and place it in the bag. I call this the "Grocery Store" variant because I like to think of it as being like Supermarket Sweep where participants race to fill a shopping cart with the highest valued items possible. Since the grocery store has lots of stock available, it's fine to pick the same item multiple times.

### Define our Subproblem

First let's define our subproblem. Let *w* be a weight less than our max weight *W*. Or, in other words, **0 ≤ w ≤ W**. Given that, we can define our subproblem as:

**K(w) = max value attainable with a total weight ≤ w**

So basically, each subproblem will operate on a smaller and smaller weight limit, and we'll try each of our available items against that smaller limit.

### Define our Recurrence

**Base Case:** **K(0) = 0**

**Recurrence:** **K(w) = max{ K(w - w_{i}) + v_{i} : 1 ≤ i ≤ n and w_{i} ≤ w }**

What we're doing here is trying all possibilities for items to add while factoring in the weight capacity reduction incurred by that item. We use the *max()* function to ensure we select the subproblem parameters that yield the highest value. Our base case is **K(0)** yielding a value of 0 because no item has a weight ≤ 0.

### Memoization Table Structure

For this problem we should be able to use a simple 1-dimensional table (array) indexed from 0 to *W*. In each index of this table we'll store the max value obtainable at that sub-weight, and since we are able to pick the same items multiple times we do not need to store any information about the items chosen.

### Python Implementation

Below is a sample implementation in Python. Feel free to tweak the values for the items and *W* to see what happens!
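Here's a minimal sketch of what that implementation can look like. The function name `knapsack_repeated` and the index-0 padding of the item arrays are conventions of this sketch, chosen so the items are 1-indexed as in the recurrence:

```python
# Unbounded ("grocery store") knapsack: items may be picked repeatedly.
# The item arrays are padded at index 0 so items are 1-indexed, matching
# the recurrence K(w) = max{ K(w - w_i) + v_i : w_i <= w }.

def knapsack_repeated(W, item_weights, item_values):
    n = len(item_weights)  # padded length: items live at indices 1..n-1
    K = [0] * (W + 1)      # K[w] = max value with total weight <= w
    for w in range(1, W + 1):
        max_sub_result = 0
        for i in range(1, n):
            wi = item_weights[i]
            vi = item_values[i]
            if wi <= w:
                subproblem_value = K[w - wi] + vi
                if subproblem_value > max_sub_result:
                    max_sub_result = subproblem_value
        K[w] = max_sub_result
    return K[W]

# Items from the table above (index 0 is padding).
item_weights = [0, 2, 10, 3, 6, 18]
item_values = [0, 1, 20, 3, 14, 100]
print(knapsack_repeated(15, item_weights, item_values))  # → 31
```

With repetition allowed, the best haul is 31: two copies of item 4 plus one of item 3 (weight 6 + 6 + 3 = 15, value 14 + 14 + 3 = 31).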

### Runtime Analysis

Let's take a look at the complicated bit of the code above and determine its Big O upper bound.

```python
for w in range(1, W + 1):
    max_sub_result = 0
    for i in range(1, n):
        wi = item_weights[i]
        vi = item_values[i]
        if wi <= w:
            subproblem_value = K[w - wi] + vi
            if subproblem_value > max_sub_result:
                max_sub_result = subproblem_value
    K[w] = max_sub_result
```

Within the outer loop over the *W* weights we have a nested loop over the *n* items. Within these loops the comparisons and lookups from **K[]** take constant time. This means our algorithm is dominated by the nested loops so it is *O(nW)* in time complexity.

## Single Selection (Museum Variant)

The second variation of the knapsack problem allows us to pick an item at most once. I call this the "Museum" variant because you can picture the items as being one-of-a-kind artifacts. Here there is only one of each item, so even if there's an item that weighs 1 lb and is worth the most, we can only place it in our knapsack once.

### Define our Subproblem

Our approach here will be very similar to the "Repeated Selection" variant with the caveat that we now have to keep track of the items that we've used. So let's take that into account when defining our subproblem!

Let *i* be an item from our *n* items such that **0 ≤ i ≤ n**.

Additionally, as before, let *w* be a weight less than our max weight *W*. Or, in other words, **0 ≤ w ≤ W**.

Given these conditions, we can define our subproblem as:

**K(i, w) = max value attainable with a subset of the items 1, ..., i that have a total weight ≤ w**

### Define our Recurrence

**Base Case 1:** **K(0, w) = 0**

**Base Case 2:** **K(i, 0) = 0**

**Recurrence:**

If **w_{i} ≤ w**:

**K(i, w) = max( K(i - 1, w - w_{i}) + v_{i}, K(i - 1, w) )**

Else:

**K(i, w) = K(i - 1, w)**

This recurrence is a bit more complicated than the previous one, so let's take a second to deconstruct it. Our base cases are either when we're at item 0, which represents the empty set of items, or when we're at weight 0, where we can no longer add any item to the knapsack. Since nothing can be added in either of these cases, our maximum value is 0.

Now for the recurrence we first have to check whether or not we have room to add the item in question to the knapsack. If we do have room we then try two possibilities:

- We add the current item: **K(i - 1, w - w_{i}) + v_{i}**
- We do not add the current item: **K(i - 1, w)**

We take the maximum value of these two scenarios via *max()*.

If the item does not fit in the knapsack (i.e. **w_{i} > w**) then there is no point in considering what value we might get from it, and we simply follow the **K(i - 1, w)** path.

### Memoization Table Structure

Since our problem definition **K(i, w)** takes two parameters, a simple 1-dimensional array won't suffice. We will need a 2-dimensional table with dimensions 0...*n* by 0...*W*. In each index of this table we'll store the max value obtainable for each item *i* at sub-weight *w*.

Spoilers, but for the problem above the final version of this table will look like this:

| *i* \ *w* | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 20 | 20 | 21 | 21 | 21 | 21 |
| 3 | 0 | 0 | 1 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 20 | 20 | 21 | 23 | 23 | 24 |
| 4 | 0 | 0 | 1 | 3 | 3 | 4 | 14 | 14 | 15 | 17 | 20 | 20 | 21 | 23 | 23 | 24 |
| 5 | 0 | 0 | 1 | 3 | 3 | 4 | 14 | 14 | 15 | 17 | 20 | 20 | 21 | 23 | 23 | 24 |

### Python Implementation

Below is a sample implementation in Python. Uncomment and run the Pandas code at the bottom to see the 2D table visualized.
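Here's a minimal sketch of what that implementation can look like. As before, the function name `knapsack_single` and the index-0 padding of the item arrays are conventions of this sketch:

```python
# 0/1 ("museum") knapsack: each item may be picked at most once.
# The item arrays are padded at index 0 so items are 1-indexed.

def knapsack_single(W, item_weights, item_values):
    n = len(item_weights)  # padded length: items live at indices 1..n-1
    # K[i][w] = max value using a subset of items 1..i with weight <= w
    K = [[0] * (W + 1) for _ in range(n)]
    for i in range(1, n):
        for w in range(1, W + 1):
            wi = item_weights[i]
            vi = item_values[i]
            if wi <= w:
                K[i][w] = max(K[i - 1][w - wi] + vi, K[i - 1][w])
            else:
                K[i][w] = K[i - 1][w]
    return K[n - 1][W]

# Items from the table above (index 0 is padding).
item_weights = [0, 2, 10, 3, 6, 18]
item_values = [0, 1, 20, 3, 14, 100]
print(knapsack_single(15, item_weights, item_values))  # → 24
```

The answer matches the bottom-right cell of the table above: items 1, 2, and 3 together weigh exactly 15 and are worth 1 + 20 + 3 = 24.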

### Runtime Analysis

Let's take a look at the complicated bit of the code above and determine its Big O upper bound.

```python
for i in range(1, n):
    for w in range(1, W + 1):
        wi = item_weights[i]
        vi = item_values[i]
        if wi <= w:
            K[i][w] = max(K[i - 1][w - wi] + vi, K[i - 1][w])
        else:
            K[i][w] = K[i - 1][w]
```

The analysis for this problem is very similar to what we did earlier. The outer loop over the *n* items contains an inner loop over the *W* weights. Within these loops the comparisons, *max()*, and the lookups from **K[][]** take constant time. This means our algorithm is dominated by the nested loops, so it is *O(nW)* in time complexity.

## Summary

There you have it, two variations of the knapsack problem with table-based Dynamic Programming solutions. Hopefully you found this post helpful. If not, I at least found it helpful for myself to write it!

**Cover Image Photo Credit:**

*"Fjallraven Grid" by Mitchell Griest on Unsplash*

I felt this photo really captured the concepts of knapsacks and memoization tables. 😂
