
Learn Data Structure and Algorithm in JavaScript | Part 19

Edison Pebojot (👨‍💻) ・ Updated on ・ 22 min read


Prerequisite (โœ‹๐Ÿ˜)

If you're reading this article right now, please consider reading Part 01: Big-O Notation, Part 02: JavaScript Unique Parts, and Part 03: JavaScript Numbers through Part 18: Advance Strings first:

Part Table of Contents Description
01 Big-O Notation By the end of this chapter, you will understand how to analyze an implementation of an algorithm with respect to both time (execution time) and space (memory consumed).
02 JavaScript Unique Parts Big-O is important for analyzing and comparing the efficiencies of algorithms. The analysis of Big-O starts by looking at the code and applying the rules to simplify the notation, because a single linear or quadratic rule is not enough to simplify the Big-O notation.
03 JavaScript Numbers This part 3 will focus on JavaScript number operations, number representation, Number objects, common number algorithms, and random number generation.
04 JavaScript Strings This part 4 will focus on strings, JavaScript String object, and the String objectโ€™s built-in functions. You will learn how to access, compare, decompose, and search strings for commonly used real-life purposes. In addition, the chapter will explore string encoding, decoding, encryption, and decryption.
05 JavaScript Arrays As a JavaScript developer, you will use the array often; it is the most commonly used data structure. Arrays in JavaScript come with a lot of built-in methods. By the end of this part, you will understand arrays and know how to choose the right method.
06 JavaScript Object This part will focus on what JavaScript objects are, how they are declared, and how their properties can be changed. In addition, this part will cover how JavaScript classes are implemented using prototypal inheritance. Also this part will be short.
07 JavaScript Memory Management A variable takes up some memory. In C, the programmer allocates and deallocates memory manually. In contrast, modern JavaScript engines have garbage collectors that delete unused variables. However, there are unexpected pitfalls (❗) that developers can fall into. This part will show these pitfalls and present techniques to help the garbage collector minimize memory problems.
08 Recursion This part 8 introduces the concept of recursion and recursive algorithms(Remember they are different, we will discuss them later(๐Ÿ˜‰)). We will also discuss the definition of recursion and fundamental rules for recursive algorithms.
09 Sets This part focuses on the concepts of sets from both a mathematical definition and on the implementation. Also, Common set operations, as well as their implementations, are covered in great detail (๐Ÿ’ก).
10 Searching and Sorting This part 10 focuses on searching and sorting for arrays. By the end of this part 10, you will understand how to use sorting and searching algorithms for arrays. Also, this part is a bit complicated for beginners, so lean on the visual aids as much as possible; they are your friend (👀). (😃)
11 Hash Tables A hash table is a fixed-sized data structure in which the size is defined at the start. This part 11 explains how hash tables work, and the method of generating a unique key. By the end of this part 11, you will understand various hashing techniques and know how to implement a hash table. (๐Ÿ˜ƒ)
12 Stacks and Queues This part 12 covers stacks and queues(pronounce as kyooz (๐Ÿ”ˆ)) not (kwewe) okay? hehehe (๐Ÿ˜…); both are data structures used in the implementation of complex data structures. You'll learn what the stacks and queues are, how they're used, when they're used, and how to implement them (๐Ÿ˜ƒ) Let's go! (๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ)
13 Linked Lists A linked list is a data structure in which each node (or element) points to another node. Unlike arrays, which have a fixed size, a linked list is a dynamic data structure. By the end of this part 13, you will understand how to implement and work with linked lists. And oh! (๐Ÿ˜ฎ) There are two types of linked lists: singly (โžก๏ธ) and doubly (โ†”๏ธ). Letโ€™s examine the singly linked list first.(๐Ÿ˜ƒ) Let's go! (๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ)
14 Caching Caching is the process of storing data in temporary memory so that it can be easily retrieved for later use if it is required again. As an example, a database keeps data cached to avoid re-reading the hard drive, and a web browser caches web images to avoid re-downloading them. In this part 14, two caching techniques will be discussed: LFU and LRU caching.
15 Trees A general tree data structure is composed of nodes with children nodes. The top node is called the root node. This part 15 will explore different types of trees such as binary trees, binary search trees, and self-balancing binary search trees. First, this part 15 will cover what trees are and how they are structured. Then, it will cover methods of traversing(crossing or taking a zigzag path) the tree data structure in detail. Finally, you will learn about binary search trees and self-balancing binary search trees to understand how to store searchable data. (๐Ÿ˜ƒ)
16 Heaps A heap is an important data structure that returns the highest or lowest element in O(1) time. This part 16 will focus on explaining how heaps are implemented as well as how to work with them. One example is heap sort, which is a sorting algorithm based on heaps.
17 Graphs In this Part 17, you will learn graph basics, including fundamental terminology and graph types. The Part 17 will also cover working with these different graph types and methods of representing graphs in data structures. Finally, algorithms for traversing, searching, and sorting graphs are explored to solve problems such as finding the shortest path between two graph nodes. (๐Ÿ‘)
18 Advance Strings Part 18 will cover more complex string algorithms than covered in the previous section. Now that you have heard of certain other data models or structures, they should be easier to comprehend. Specifically, Part 18 will focus on string searching algorithms. (๐Ÿ˜‰)
19 Dynamic Programming Dynamic programming involves breaking down problems into their subproblems. By solving the subproblems and saving those results in memory to access whenever a repeated subproblem appears, the algorithmic complexity decreases significantly (⬇️). To explain dynamic programming, let's re-examine the Fibonacci sequence that was discussed in Part 8. Then Part 19 will cover the rules of dynamic programming and walk you through some examples to make the concepts more concrete. (😉)
20 Bit Manipulation Bit manipulation is an advanced topic that JavaScript developers typically do not need to know. However, you should learn a bit about bit manipulation if you want to implement high-performance server-side code. Understanding bit manipulation requires some knowledge of digital logic. Any introductory course in discrete math or circuits would be helpful to understand these concepts.

Part 19: Dynamic Programming (๐Ÿ˜ฑ ๐Ÿ”ฅ โšก)


Dynamic programming involves breaking down problems into their subproblems. By solving the subproblems and saving those results in memory to access whenever a repeated subproblem appears, the algorithmic complexity decreases significantly (⬇️). To explain dynamic programming, let's re-examine the Fibonacci sequence that was discussed in Part 8. Then Part 19 will cover the rules of dynamic programming and walk you through some examples to make the concepts more concrete. (😉)

Motivations for Dynamic Programming (๐Ÿ˜ค๐Ÿ”ฅ)


The code for the Fibonacci sequence:

// Recursive Fibonacci Sequence
function getNthFibo(n) {
    if (n <= 1) {
        return n;
    } else {
        return getNthFibo(n - 1) + getNthFibo(n - 2);
    }
}

getNthFibo(3);

Recall that the recursive implementation of this algorithm is O(2^n) (💡). Upon closer examination, you will notice that much of the same computation is repeated:



Figure 19-1. Recursion tree for Fibonacci numbers

As shown in Figure 19-1 (above), when getNthFibo is called for 6, the calculations for 4, 3, 2, and 1 are repeated multiple times. Knowing this, how can you make this algorithm more efficient?

Using a hash table, once a Fibonacci number has been computed, it can be stored and reused, as in the following implementation:

var cache = {};
function fiboBest(n) {
    if (n <= 1) {
        return n;
    }
    if (cache[n]) {
        return cache[n];
    }
    return (cache[n] = fiboBest(n - 1) + fiboBest(n - 2));
}
fiboBest(10); // Prints "55"

Calculating the Fibonacci sequence for 6 requires calculating the Fibonacci sequences for 5 and 4, and calculating the sequence for 5 in turn requires the sequence for 4. Hence, the calculations overlap; this is known as overlapping subproblems (💡). The cached code above takes advantage of this, and it works because the solution to the overall problem is built from the (optimal) solutions to its subproblems.
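
Going one step further, because each Fibonacci number depends only on the two numbers before it, the values can also be built up iteratively without any recursion at all. The following is a minimal bottom-up sketch (the name fiboIterative is illustrative, not from the original code), running in O(n) time and O(1) space:

// Bottom-up Fibonacci (a hypothetical variant): builds values from the smallest
// subproblem upward and keeps only the two most recent results.
function fiboIterative(n) {
    if (n <= 1) {
        return n;
    }
    var previous = 0,
        current = 1;
    for (var i = 2; i <= n; i++) {
        var next = previous + current; // F(i) = F(i-1) + F(i-2)
        previous = current;
        current = next;
    }
    return current;
}

fiboIterative(10); // Returns 55, the same result as fiboBest(10)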

Rules of Dynamic Programming (๐Ÿ˜Œโ˜๏ธ)


Dynamic programming (DP) is the method of storing values that were already calculated and using those values to avoid any recalculations. This method can be applied only to those problems with overlapping subproblems and optimal substructure.

Overlapping Subproblems (๐Ÿ˜ก๐Ÿ™…โŒ)

DP stores subproblem solutions typically in a hash table, an array, or a matrix, and this is referred to as memoization (See on Wikipedia). DP is useful for solving problems in which there are many repeated subproblems.

An example of this can be seen with the recursive Fibonacci method: some numbers, such as 3, are recalculated many times. A hash table can be used to store results and avoid those recalculations. Doing this reduces the time complexity from O(2^n) to just O(n), which is an immense change (😮).
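
To make this memoization pattern reusable across problems, the caching logic can be pulled out into a small helper. Here is a minimal sketch (the memoize helper is illustrative and not part of the original code); it assumes the wrapped function's arguments can be joined into a string key:

// Generic memoization wrapper: caches results in a plain object keyed
// by the stringified arguments, so repeated calls become O(1) lookups.
function memoize(fn) {
    var cache = {};
    return function () {
        var key = Array.prototype.join.call(arguments, "-");
        if (cache[key] === undefined) {
            cache[key] = fn.apply(this, arguments);
        }
        return cache[key];
    };
}

// Usage: wrap the recursive Fibonacci so recursive calls hit the cache.
var fiboMemo = memoize(function (n) {
    return n <= 1 ? n : fiboMemo(n - 1) + fiboMemo(n - 2);
});

fiboMemo(10); // Returns 55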

Optimal Substructure (๐Ÿ“โœ๏ธ๐Ÿ“„)

An optimal substructure is when the optimal solution of a problem can be found by using the optimal solutions of its subproblems.

Note (๐Ÿ“): For example, the shortest path finding algorithms have optimal substructures. Consider finding the shortest path for traveling between cities by car. If the shortest route from Los Angeles to Vancouver passes through San Francisco and then Seattle, then the shortest route from San Francisco to Vancouver must pass through Seattle as well.

Example: Ways to Cover Steps (1๏ธโƒฃ2๏ธโƒฃ3๏ธโƒฃ)

Given a distance n, count the total number of ways to cover n (the number of steps) using steps of one, two, or three at a time. For example, when n = 3, there are four ways, as shown here:

1. 1 Step, 1 Step, 1 Step
2. 1 Step, 2 Steps
3. 2 Steps, 1 Step
4. 3 Steps

Note (๐Ÿ“): Look at the pattern carefully (above)

Hereโ€™s the function for achieving the count:

function waysToCoverSteps(step) {
    if (step < 0) {
        return 0;
    }
    if (step == 0) {
        return 1;
    }
    return waysToCoverSteps(step - 1) + waysToCoverSteps(step - 2) + waysToCoverSteps(step - 3);
}

waysToCoverSteps(12);

Time Complexity: O(3^n)

This recursive method has a large time complexity. To optimize the time complexity, simply cache the result and use it instead of recalculating the values:

// The cache must live outside the function so that it persists across recursive calls
// (declaring it inside would reset it on every call and nothing would be memoized):
var stepCache = {};

function waysToCoverStepsDP(step) {
    if (step < 0) {
        return 0;
    }
    if (step == 0) {
        return 1;
    }
    // Check if the result already exists in the cache:
    if (stepCache[step]) {
        return stepCache[step];
    } else {
        stepCache[step] = waysToCoverStepsDP(step - 1) + waysToCoverStepsDP(step - 2) + waysToCoverStepsDP(step - 3);
        return stepCache[step];
    }
}

waysToCoverStepsDP(12);

Time Complexity: O(n)

This shows the power of dynamic programming: it improves the time complexity immensely.
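
The version above is top-down (memoized recursion). For comparison, the same count can be computed bottom-up by filling a table from 0 up to step, which avoids recursion entirely. A minimal sketch (the name waysToCoverStepsTab is illustrative, not from the original code):

// Bottom-up (tabulation) version: table[i] holds the number of ways to cover
// a distance of i using steps of 1, 2, and 3, built from the base case table[0] = 1.
function waysToCoverStepsTab(step) {
    var table = [1]; // One way to cover a distance of 0: take no steps
    for (var i = 1; i <= step; i++) {
        table[i] = (table[i - 1] || 0) + (table[i - 2] || 0) + (table[i - 3] || 0);
    }
    return table[step];
}

waysToCoverStepsTab(12); // Same result as waysToCoverStepsDP(12)

Time Complexity: O(n)
Space Complexity: O(n)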

Classical Dynamic Programming Examples (๐Ÿ‘ด๐Ÿ‘ต)


This section will explore and solve some of the classical dynamic programming problems. The first one that will be explored is the knapsack problem (See on Wikipedia).

The Knapsack Problem (โŒ๐Ÿ‘œโœ…)

The knapsack problem is as follows: given the weights and values of n items, put these items in a knapsack of a given capacity (w) to get the maximum total value in the knapsack.

Optimal Substructure (๐Ÿ“๐Ÿ“โœ๏ธ๐Ÿ“„)

For every item in the array, one of the following is true:

  • The item is included in the optimal subset.
  • The item is not included in the optimal subset.

Therefore, the maximum value obtainable must be the larger of the following:

  • Including the nth item: the value of the nth item plus the maximum value obtained with the remaining n-1 items and the remaining capacity, w minus the weight of the nth item (this only works if the weight of the nth item does not exceed w).
  • Excluding the nth item: the maximum value obtained with n-1 items and capacity w.

Naive Approach (๐Ÿขโžก๏ธโŒ๐Ÿ˜ต)

The naive approach implements the described optimal substructure as shown here:

function knapsackNaive(index, weights, values, target) {
    var result = 0;
    if (index <= -1 || target <= 0) {
        result = 0;
    } else if (weights[index] > target) {
        result = knapsackNaive(index - 1, weights, values, target);
    } else {
        // Case 1: exclude the current item:
        var current = knapsackNaive(index - 1, weights, values, target);
        // Case 2: include the current item (its value plus the best of the rest):
        var currentPlusOther = values[index] +
            knapsackNaive(index - 1, weights, values, target - weights[index]);
        result = Math.max(current, currentPlusOther);
    }
    return result;
}

var weights = [1, 2, 4, 2, 5],
    values = [5, 3, 5, 3, 2],
    target = 10;
knapsackNaive(4, weights, values, target);

Time Complexity: O(2^n)



Figure 19-2. Recursion tree for knapsack

Figure 19-2 (above) shows the recursion tree for a knapsack capacity of 2 units and 3 items of 1 unit weight. As the figure shows, the function computes the same subproblems repeatedly and has an exponential time complexity. To optimize this, you can store each computed result, keyed by the item index and the target weight (w), and reuse it.

DP Approach (๐Ÿขโžก๏ธโœ…โ˜บ๏ธ)

The DP implementation stores the result of each knapsack subproblem, using the array index and the target as a key into a JavaScript object, for later retrieval. Recursive calls reuse the stored results, and this reduces the time complexity of the algorithm:

function knapsackDP(index, weights, values, target, matrixDP) {
    var result = 0;
    // DP Part:
    if (matrixDP[index + "-" + target]) {
        return matrixDP[index + "-" + target];
    }
    if (index <= -1 || target <= 0) {
        result = 0;
    } else if (weights[index] > target) {
        result = knapsackDP(index - 1, weights, values, target, matrixDP);
    } else {
        var current = knapsackDP(index - 1, weights, values, target, matrixDP),
            currentPlusOther = values[index] + knapsackDP(index - 1, weights, values, target - weights[index], matrixDP);
        result = Math.max(current, currentPlusOther);
    }
    matrixDP[index + "-" + target] = result;
    return result;
}


var weights = [1, 2, 4, 2, 5],
    values = [5, 3, 5, 3, 2],
    target = 10;
knapsackDP(4, weights, values, target, {});

Time Complexity: O(n × w)
Here, n is the number of items, and w is the capacity of the knapsack.

Space Complexity: O(n × w)
This algorithm requires n × w combinations of index and target to store the cached results inside matrixDP.
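
The same result can also be computed bottom-up by filling an (n + 1) × (w + 1) table directly instead of recursing. The following is a minimal sketch under the same inputs (the name knapsackTable is illustrative, not from the original code):

// Bottom-up 0/1 knapsack: table[i][w] is the best value achievable using the
// first i items with a remaining capacity of w.
function knapsackTable(weights, values, capacity) {
    var n = weights.length,
        table = [];
    for (var i = 0; i <= n; i++) {
        table[i] = [];
        for (var w = 0; w <= capacity; w++) {
            if (i == 0 || w == 0) {
                // No items or no capacity: value is 0
                table[i][w] = 0;
            } else if (weights[i - 1] > w) {
                // Item i does not fit: exclude it
                table[i][w] = table[i - 1][w];
            } else {
                // Max of excluding item i and including it
                // (its value plus the best value for the remaining capacity)
                table[i][w] = Math.max(
                    table[i - 1][w],
                    values[i - 1] + table[i - 1][w - weights[i - 1]]
                );
            }
        }
    }
    return table[n][capacity];
}

knapsackTable([1, 2, 4, 2, 5], [5, 3, 5, 3, 2], 10); // Same result as knapsackDP above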

Longest Common Subsequence (๐Ÿ‘œ๐Ÿ”€๐Ÿ”ข)

Given two sequences, find the length of the longest subsequence common to both, where a subsequence is defined as a sequence that appears in the same relative order but is not necessarily contiguous. For example, Edi, dis, iso, and so forth, are subsequences of Edison. A string of length n has 2^n possible subsequences.

Note (📝): As a real-world example, consider a generalized computer science problem that appears in many domains, such as bioinformatics (DNA sequencing). This algorithm is also how the diff (See on Wikipedia) functionality is implemented in version control systems and operating systems. Typically, diff is used to show the changes between two versions of the same file.

Naive Approach (๐Ÿขโžก๏ธโŒ๐Ÿ˜ต)

Letting str1 be the first string of length m, str2 be the second string of length n, and LCS be the function, the naive approach can first consider the following pseudocode:

// 1. If the last characters of both sequences match (i.e., str1[m-1] == str2[n-1]):
// 2.     result = 1 + LCS(str1[0:m-2], str2[0:n-2])
// 3. If the last characters of both sequences DO NOT match (i.e., str1[m-1] != str2[n-1]):
// 4.     result = Math.max(LCS(str1[0:m-2], str2[0:n-1]), LCS(str1[0:m-1], str2[0:n-2]))

With this structure in mind, the following can be implemented:

function LCSNaive(str1, str2, str1Length, str2Length) {
    if (str1Length == 0 || str2Length == 0) {
        return 0;
    }
    if (str1[str1Length - 1] == str2[str2Length - 1]) {
        return 1 + LCSNaive(str1, str2, str1Length - 1, str2Length - 1);
    } else {
        return Math.max(LCSNaive(str1, str2, str1Length, str2Length - 1),
            LCSNaive(str1, str2, str1Length - 1, str2Length));
    }
}

function LCSNaiveWrapper(str1, str2) {
    return LCSNaive(str1, str2, str1.length, str2.length);
}

LCSNaiveWrapper("AGGTAB", "GXTXAYB"); // Prints "4"



Figure 19-3. Recursion tree for longest common string length

Time Complexity: O(2^n)

Figure 19-3 shows the recursion tree for EDI and SON. As you can see, (ED, SON) is repeated.

DP Approach (๐Ÿขโžก๏ธโœ…โ˜บ๏ธ)

The recursive structure described can be translated into a table in which each row represents a character in str1 and each column represents a character in str2. Each cell of the matrix at row i and column j represents LCS(str1[0:i], str2[0:j]):

function longestCommonSequenceLength(str1, str2) {
    var rowLength = str1.length + 1,
        colLength = str2.length + 1,
        matrix = [];
    // Build the matrix row by row. Each row must be its own array;
    // filling with a single shared array would make every row alias the same reference.
    for (var row = 0; row < rowLength; row++) {
        matrix[row] = Array(colLength).fill(0);
    }
    for (var row = 1; row < rowLength; row++) {
        for (var col = 1; col < colLength; col++) {
            var str1Char = str1.charAt(row - 1),
                str2Char = str2.charAt(col - 1);
            if (str1Char == str2Char) {
                // Characters match: extend the LCS of both prefixes by one
                matrix[row][col] = matrix[row - 1][col - 1] + 1;
            } else {
                // Characters differ: take the better of dropping one character from either string
                matrix[row][col] = Math.max(matrix[row - 1][col], matrix[row][col - 1]);
            }
        }
    }
    return matrix[rowLength - 1][colLength - 1];
}

longestCommonSequenceLength("abcd", "bc"); // Prints "2"

Time Complexity: O(m × n)
Space Complexity: O(m × n)

Note (📝): Here, m is the length of str1, and n is the length of str2.
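
The matrix holds more than the length: by walking it backward from the bottom-right corner, the actual common subsequence can be recovered. Here is a hedged sketch that assumes access to a fully filled LCS matrix like the one built inside longestCommonSequenceLength (the helper name backtrackLCS is illustrative):

// Recover the longest common subsequence itself by walking a filled
// DP matrix from matrix[str1.length][str2.length] back toward matrix[0][0].
function backtrackLCS(matrix, str1, str2) {
    var result = "",
        row = str1.length,
        col = str2.length;
    while (row > 0 && col > 0) {
        if (str1[row - 1] == str2[col - 1]) {
            result = str1[row - 1] + result; // Character is part of the LCS
            row--;
            col--;
        } else if (matrix[row - 1][col] >= matrix[row][col - 1]) {
            row--; // Move toward the subproblem that produced the larger value
        } else {
            col--;
        }
    }
    return result;
}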

Coin Change (๐Ÿ’ฐ๐Ÿ”€๐Ÿ”ข)

Given a value N and an unlimited supply of coins of M different values, S = {S1, S2, ..., Sm}, how many ways can the change be made? For example, given N=4, M=3, and S = {1,2,3}, the answer is 4:

1. 1, 1, 1, 1
2. 1, 1, 2
3. 2, 2
4. 1, 3

Optimal Substructure (๐Ÿ“๐Ÿ“โœ๏ธ๐Ÿ“„)

You can observe the following about the ways to make change:
1. Solutions that do not contain the Mth coin
2. Solutions that contain at least one Mth coin

Given that coinChange(S, M, N) is a function to count the number of coin changes, mathematically it can be rewritten as follows by using the two observations:

coinChange(S, M, N) = coinChange(S, M-1, N) + coinChange(S, M, N-Sm)

Naive Approach (๐Ÿขโžก๏ธโŒ๐Ÿ˜ต)

The naive approach can implement the described algorithm using recursion, as shown here:

// RETURN THE COUNT OF WAYS we can sum "coinArr" which have...
// ... index like: [0, ..., numCoins]
function countCoinWays(coinArr, numCoins, coinValue) {
    if (coinValue == 0) {
        // If the value reached zero, then only solution...
        // ...is to not include any coin
        return 1;
    }
    if (coinValue < 0 || (numCoins <= 0 && coinValue >= 1)) {
        // Value is less than 0 means no solution
        // No coins left but coinValue left also means no solution
        return 0;
    }
    return countCoinWays(coinArr, numCoins - 1, coinValue) +
        countCoinWays(coinArr, numCoins, coinValue - coinArr[numCoins - 1])
}

function countCoinWaysWrapper(coinArr, coinValue) {
    return countCoinWays(coinArr, coinArr.length, coinValue);
}

countCoinWaysWrapper([1, 2, 3], 4); // Prints "4"

Time Complexity: O(n^m)
Space Complexity: O(n)

Note (๐Ÿ“): Here, m is the number of available types of coins, and n is the desired currency to convert into change.

Overlapping Subproblems (๐Ÿ˜ก๐Ÿ”€๐Ÿ™…โŒ)

You can see from the recursion tree below that there are lots of overlapping subproblems:


Figure 19-4. Recursion tree for coin change

Note (๐Ÿ“): To solve this, a table (matrix) can be used to store already computed results.

DP Approach (๐Ÿขโžก๏ธโœ…โ˜บ๏ธ)

The matrix has coinValue + 1 rows and numCoins columns. The cell at row i and column j represents the number of ways to make a value of i using the first j + 1 coins:

function countCoinWaysDP(coinArr, numCoins, coinValue) {
    // Create the matrix:
    var dpMatrix = [];
    for (var i = 0; i <= coinValue; i++) {
        dpMatrix[i] = [];
        for (var j = 0; j < numCoins; j++) {
            dpMatrix[i][j] = undefined;
        }
    }
    for (var i = 0; i < numCoins; i++) {
        dpMatrix[0][i] = 1; // Base case: one way to make a value of 0 (use no coins)
    }
    for (var i = 1; i < coinValue + 1; i++) {
        for (var j = 0; j < numCoins; j++) {
            var temp1 = 0,
                temp2 = 0;
            if (i - coinArr[j] >= 0) {
                // Solutions including coinArr[j]
                temp1 = dpMatrix[i - coinArr[j]][j];
            }
            if (j >= 1) {
                // Solutions excluding coinArr[j]
                temp2 = dpMatrix[i][j - 1];
            }
            dpMatrix[i][j] = temp1 + temp2;
        }
    }
    return dpMatrix[coinValue][numCoins - 1];
}

function countCoinWaysDPWrapper(coinArr, coinValue) {
    return countCoinWaysDP(coinArr, coinArr.length, coinValue);
}

countCoinWaysDPWrapper([1, 2, 3], 4); // Prints "4"

Time Complexity: O(m × n)
Space Complexity: O(m × n)

Note (📝): Here, m is the number of available types of coins, and n is the desired currency to convert into change.
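
Because each cell of the matrix depends only on the same column at a smaller value and the previous column at the same value, the whole table can be collapsed into a single array of size N + 1, reducing the space to O(n). A minimal sketch with the same inputs (the name countCoinWays1D is illustrative, not from the original code):

// Space-optimized coin change: ways[v] counts the ways to make the value v.
// Processing one coin denomination at a time keeps combinations unordered,
// so nothing is double counted.
function countCoinWays1D(coinArr, coinValue) {
    var ways = Array(coinValue + 1).fill(0);
    ways[0] = 1; // One way to make a value of 0: use no coins
    for (var i = 0; i < coinArr.length; i++) {
        for (var v = coinArr[i]; v <= coinValue; v++) {
            ways[v] += ways[v - coinArr[i]];
        }
    }
    return ways[coinValue];
}

countCoinWays1D([1, 2, 3], 4); // Returns 4, same as countCoinWaysDPWrapper([1, 2, 3], 4)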

Edit (Levenshtein) Distance (๐Ÿ‘œ๐Ÿ”€๐Ÿ”ข)

The edit distance problem considers the following: Given a string (str1) of length m and another string (str2) of length n, what is the minimum number of edits to convert str1 into str2?

The valid operations are the following: Insert, Remove, Replace.

Optimal Substructure (๐Ÿ“๐Ÿ“โœ๏ธ๐Ÿ“„)

If the characters of str1 and str2 are processed one at a time from the end of each string, the following cases are possible:

1. The characters are the same: do nothing.
2. The characters are different: consider the cases recursively: Insert (for m and n-1), Remove (for m-1 and n), Replace (for m-1 and n-1).

Naive Approach (๐Ÿขโžก๏ธโŒ๐Ÿ˜ต)

The naive approach can implement the described substructure recursively, as shown here:

function editDistanceRecursive(str1, str2, length1, length2) {
    // "str1" is empty. Only option is insert all of "str2"
    if (length1 == 0) {
        return length2;
    }
    // "str2" is empty. Only option is insert all of "str1"
    if (length2 == 0) {
        return length1;
    }
    // Last chars are same: 
    // "Ignore last chars and count remaining"
    if (str1[length1 - 1] == str2[length2 - 1]) {
        return editDistanceRecursive(str1, str2, length1 - 1, length2 - 1);
    }
    // Last chars are not the same:
    // "There are three operations: insert, remove, replace"
    return 1 + Math.min(
        // Insert:
        editDistanceRecursive(str1, str2, length1, length2 - 1),
        // Remove:
        editDistanceRecursive(str1, str2, length1 - 1, length2),
        // Replace:
        editDistanceRecursive(str1, str2, length1 - 1, length2 - 1)
    );
}

function editDistanceRecursiveWrapper(str1, str2) {
    return editDistanceRecursive(str1, str2, str1.length, str2.length);
}

editDistanceRecursiveWrapper("Edison", "Pebojot"); // Prints "6"

Time Complexity: O(3^m)

Note (📝): The time complexity of the naive solution is exponential, and the worst case occurs when no characters in the two strings match. This makes sense because each call spawns three more calls (insert, remove, replace).


Figure 19-5. Recursion tree for edit distance

Again, you can see that the same problems are solved over and over again. This can be optimized by constructing a matrix that stores the already-computed results of subproblems.

DP Approach (๐Ÿขโžก๏ธโœ…โ˜บ๏ธ)

The dynamic programming approach constructs a matrix with dimensions (str1.length + 1) × (str2.length + 1). The base cases are when i or j is equal to 0:

function editDistanceDP(str1, str2, length1, length2) {
    // Creating the matrix:
    var dpMatrix = [];
    for (var i = 0; i < length1 + 1; i++) {
        dpMatrix[i] = [];
        for (var j = 0; j < length2 + 1; j++) {
            dpMatrix[i][j] = undefined;
        }
    }
    for (var i = 0; i < length1 + 1; i++) {
        for (var j = 0; j < length2 + 1; j++) {
            // If str1 is empty:
            // "Have to insert all the chars of str2"
            if (i == 0) {
                dpMatrix[i][j] = j;
            } else if (j == 0) {
                // If str2 is empty:
                // "Have to remove all the chars of str1"
                dpMatrix[i][j] = i;
            } else if (str1[i - 1] == str2[j - 1]) {
                // If the same:
                // "No additional cost"
                dpMatrix[i][j] = dpMatrix[i - 1][j - 1];
            } else {
                var insertCost = dpMatrix[i][j - 1],
                    removeCost = dpMatrix[i - 1][j],
                    replaceCost = dpMatrix[i - 1][j - 1];
                dpMatrix[i][j] = 1 + Math.min(insertCost, removeCost, replaceCost);
            }
        }
    }
    return dpMatrix[length1][length2];
}

function editDistanceDPWrapper(str1, str2) {
    return editDistanceDP(str1, str2, str1.length, str2.length);
}

editDistanceDPWrapper("Edison", "Pebojot"); // Prints "6"

Time Complexity: O(m × n)
Space Complexity: O(m × n)

Note (๐Ÿ“): Here, m is the length of str1, and n is the length of str2.
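
Since each row of the matrix is computed only from the row directly above it, the space can be reduced to two rows, i.e., O(n). A minimal sketch under that assumption (the name editDistanceTwoRows is illustrative, not from the original code):

// Edit distance with two rolling rows: previousRow plays the role of dpMatrix[i - 1]
// and currentRow the role of dpMatrix[i], so space is O(str2.length) instead of O(m * n).
function editDistanceTwoRows(str1, str2) {
    var previousRow = [],
        currentRow = [];
    for (var j = 0; j <= str2.length; j++) {
        previousRow[j] = j; // From an empty str1 prefix: insert j characters
    }
    for (var i = 1; i <= str1.length; i++) {
        currentRow = [i]; // To match an empty str2 prefix: remove i characters
        for (var j = 1; j <= str2.length; j++) {
            if (str1[i - 1] == str2[j - 1]) {
                currentRow[j] = previousRow[j - 1]; // Same character: no cost
            } else {
                currentRow[j] = 1 + Math.min(
                    currentRow[j - 1],  // Insert
                    previousRow[j],     // Remove
                    previousRow[j - 1]  // Replace
                );
            }
        }
        previousRow = currentRow;
    }
    return previousRow[str2.length];
}

editDistanceTwoRows("Edison", "Pebojot"); // Returns 6, same as editDistanceDPWrapper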

Summary (๐Ÿ“š๐Ÿ“š)

Dynamic programming can be utilized to optimize an algorithm if the following conditions are satisfied:

  • Optimal substructure: The optimal solution to the problem contains optimal solutions to its subproblems.
  • Overlapping subproblems: The solutions for subproblems are needed multiple times.

To store the already computed solutions to subproblems, a matrix or a hash table is typically used; this is because both provide O(1) lookup time. Doing this, the time complexity can be improved from exponential (e.g., O(2^n)) to polynomial time (e.g., O(n^2)).


Up Next๐Ÿ‘‰ Part 20: Bit Manipulation (๐Ÿ”ฅ๐Ÿ”ฅ) (October 03-04, 2020)

