Rahul Bagal

Dynamic Programming Algorithms Every Programmer Should Know

Introduction

Dynamic programming is a technique that allows us to break down complex problems into smaller, more manageable subproblems. It involves solving each subproblem only once and storing the result in memory for future use, instead of solving it repeatedly. This technique is widely used in computer science and can be applied to a wide range of problems, including optimization, scheduling, string-processing, and graph problems.

In this article, we'll explore some of the essential dynamic programming algorithms that every programmer should know, with examples and code snippets.

What is Dynamic Programming?

Dynamic programming is an algorithmic technique that involves solving a problem by breaking it down into smaller, more manageable subproblems. It's similar to divide-and-conquer, but with a crucial difference: dynamic programming solves subproblems only once and stores their solutions in memory for future use, instead of solving them repeatedly.

Overlapping Subproblems

One of the key characteristics of dynamic programming is overlapping subproblems. This means that the same subproblem can occur multiple times in the course of solving a larger problem. For example, in the Fibonacci sequence, calculating the nth Fibonacci number involves calculating the (n-1)th and (n-2)th Fibonacci numbers, which themselves require calculating the (n-2)th and (n-3)th Fibonacci numbers, and so on.
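To see the overlap concretely, here's a minimal sketch of the naive recursive approach, with no caching at all (fib_naive is just an illustrative name). Calling fib_naive(5) already recomputes fib_naive(2) three times, and the repeated work grows exponentially with n:

def fib_naive(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n <= 1:
        return n
    # Each call spawns two more calls, and the same subproblems
    # (e.g. fib(n-2)) are solved over and over again.
    return fib_naive(n - 1) + fib_naive(n - 2)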

Optimal Substructure

Another key characteristic of dynamic programming is optimal substructure. This means that the optimal solution to a larger problem can be constructed from the optimal solutions to its subproblems. For example, in the knapsack problem, the optimal solution for a knapsack of capacity W and the first i items can be built from the optimal solutions for the first i-1 items: either item i is left out (same capacity), or it is included (capacity reduced by its weight, total value increased by its value).
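Written as a recurrence, the same idea looks like this (a sketch only; best is an illustrative name for the value of the optimal solution over the first i items with capacity w):

def best(i, w, wt, val):
    # Optimal value using the first i items with remaining capacity w
    if i == 0 or w == 0:
        return 0
    if wt[i-1] > w:
        # Item i doesn't fit, so the best we can do is the optimal
        # solution for the first i-1 items at the same capacity
        return best(i - 1, w, wt, val)
    # Otherwise, take the better of skipping or including item i
    return max(best(i - 1, w, wt, val),
               val[i-1] + best(i - 1, w - wt[i-1], wt, val))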

Memoization

Memoization is a technique used to implement dynamic programming top-down: we store the results of expensive function calls and return the cached result when the same inputs occur again. In other words, memoization puts a cache in front of a function so that no subproblem is ever computed more than once.

Here's an example of memoization in action, using the Fibonacci sequence:

# Cache of already-computed Fibonacci numbers
fib_cache = {}

def fib(n):
    # Return the cached result if we've seen this input before
    if n in fib_cache:
        return fib_cache[n]
    # Base cases: fib(0) = 0, fib(1) = 1
    if n <= 1:
        return n
    # Compute once, store, and reuse
    fib_cache[n] = fib(n-1) + fib(n-2)
    return fib_cache[n]
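In practice, Python can handle the caching for you: the standard library's functools.lru_cache decorator wraps a function with exactly this kind of memoization. A minimal sketch (fib_memo is just an illustrative name):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)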

Tabulation

Tabulation is another technique used to implement dynamic programming, which involves filling up a table with the results of subproblems in a bottom-up manner. In other words, instead of recursively solving subproblems and storing the results in memory, we solve them iteratively and store the results in a table.

Here's an example of tabulation in action, using the Fibonacci sequence:

def fib(n):
    if n <= 1:
        return n

    # Initialize the table with the base cases
    fib_table = [0, 1]

    # Fill up the table in a bottom-up manner
    for i in range(2, n+1):
        fib_table.append(fib_table[i-1] + fib_table[i-2])

    # Return the nth Fibonacci number
    return fib_table[n]
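For example, calling the tabulated version:

print(fib(10))  # 55
print(fib(50))  # 12586269025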

Fibonacci Sequence

The Fibonacci sequence is a classic example of a problem that can be solved using dynamic programming. The Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding numbers, starting from 0 and 1. The sequence goes like this: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on.

Here's an example of solving the Fibonacci sequence using dynamic programming, with tabulation:

def fib(n):
    if n <= 1:
        return n

    # Initialize the table
    fib_table = [0, 1]

    # Fill up the table in a bottom-up manner
    for i in range(2, n+1):
        fib_table.append(fib_table[i-1] + fib_table[i-2])

    # Return the nth Fibonacci number
    return fib_table[n]
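As a side note, each table entry depends only on the two entries before it, so the full table can be replaced by two variables. Here's a sketch of that constant-space variant (fib_iterative is just an illustrative name):

def fib_iterative(n):
    if n <= 1:
        return n
    # Keep only the last two Fibonacci numbers as we go
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr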

Knapsack Problem

The knapsack problem is another classic example of a problem that can be solved using dynamic programming. It is a problem in combinatorial optimization: given a set of items, each with a weight and a value, we must determine which items to include so that the total weight is less than or equal to a given capacity and the total value is maximized. The version solved below is the 0/1 knapsack, where each item can be taken at most once.

Here's an example of solving the knapsack problem using dynamic programming, with tabulation:

def knapsack(W, wt, val, n):
    # K[i][w] = best value achievable using the first i items with capacity w
    K = [[0 for _ in range(W+1)] for _ in range(n+1)]

    # Build the table in a bottom-up manner
    for i in range(n+1):
        for w in range(W+1):
            if i == 0 or w == 0:
                K[i][w] = 0
            elif wt[i-1] <= w:
                # Take the better of including or skipping item i
                K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w])
            else:
                K[i][w] = K[i-1][w]

    # Return the maximum value
    return K[n][W]
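For example, with a capacity of 50 and three items weighing 10, 20, and 30 with values 60, 100, and 120, the best choice is the last two items, for a total value of 220:

values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50
print(knapsack(capacity, weights, values, len(values)))  # 220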

Longest Common Subsequence

The longest common subsequence problem is a problem in computer science, where given two sequences, we must find the longest subsequence present in both of them. A subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements.

Here's an example of solving the longest common subsequence problem using dynamic programming, with tabulation:

def lcs(X, Y):
    m = len(X)
    n = len(Y)

    # L[i][j] = length of the LCS of X[:i] and Y[:j]
    L = [[0 for _ in range(n+1)] for _ in range(m+1)]

    # Build the table in a bottom-up manner
    for i in range(m+1):
        for j in range(n+1):
            if i == 0 or j == 0:
                L[i][j] = 0
            elif X[i-1] == Y[j-1]:
                # The characters match: extend the LCS of the two shorter prefixes
                L[i][j] = L[i-1][j-1] + 1
            else:
                # No match: drop a character from X or from Y, whichever is better
                L[i][j] = max(L[i-1][j], L[i][j-1])

    # Return the length of the longest common subsequence
    return L[m][n]
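For example, the longest common subsequence of "ABCBDAB" and "BDCABA" is "BCBA" (among others), so the function returns 4:

print(lcs("ABCBDAB", "BDCABA"))  # 4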

Conclusion

Dynamic programming is a powerful technique that can be used to solve a wide range of problems in computer science and beyond. By breaking down complex problems into smaller subproblems and solving each one only once, we can arrive at optimal solutions that would be prohibitively expensive to compute by brute force.

In this article, we covered three classic examples of dynamic programming algorithms: the Fibonacci sequence, the knapsack problem, and the longest common subsequence problem. We showed how each of these problems can be solved using dynamic programming with tabulation.

Whether you're a beginner or an experienced programmer, understanding dynamic programming can help you become a more efficient problem solver. So the next time you're faced with a challenging problem, try applying dynamic programming to see if you can find an optimal solution.

Writing has always been my passion and it gives me pleasure to help and inspire people. If you have any questions, feel free to reach out!

Connect with me on Twitter, LinkedIn, and GitHub!

Visit my DEV profile for more articles like this.
