Learn Data Structure and Algorithm in JavaScript | Part 08

Edison Pebojot(👨‍💻) Updated on ・19 min read

Prerequisite (✋😐)

If you're reading this article right now, please consider reading our Part 01: Big-O Notation, Part 02: JavaScript Unique Parts, Part 03: JavaScript Numbers through Part 07: JavaScript Memory Management first.

01 Big-O Notation By the end of this chapter, you will understand how to analyze an implementation of an algorithm with respect to both time (execution time) and space (memory consumed).
02 JavaScript Unique Parts Big-O is important for analyzing and comparing the efficiencies of algorithms. The analysis of Big-O starts by looking at the code and applying simplification rules, because a raw linear or quadratic count on its own is not enough to describe most algorithms.
03 JavaScript Numbers This part 3 will focus on JavaScript number operations, number representation, Number objects, common number algorithms, and random number generation.
04 JavaScript Strings This part 4 will focus on strings, JavaScript String object, and the String object’s built-in functions. You will learn how to access, compare, decompose, and search strings for commonly used real-life purposes. In addition, the chapter will explore string encoding, decoding, encryption, and decryption.
05 JavaScript Arrays As a JavaScript developer, you will use the array often; it is the most commonly used data structure. Arrays in JavaScript come with a lot of built-in methods. By the end of this part, you will understand arrays and how to choose the right method for the task.
06 JavaScript Object This part will focus on what JavaScript objects are, how they are declared, and how their properties can be changed. In addition, this part will cover how JavaScript classes are implemented using prototypal inheritance. Also this part will be short.
07 JavaScript Memory Management A variable takes up some memory. In C, the programmer allocates and deallocates memory manually. In contrast, modern JavaScript engines have garbage collectors that delete unused variables. However, there are unexpected pitfalls that developers can fall into (❗) This part will show these pitfalls and present techniques to help the garbage collector minimize memory problems.
08 Recursion This part 8 introduces the concept of recursion and recursive algorithms (remember, they are different; we will discuss this later (😉)). We will also discuss the definition of recursion and the fundamental rules for recursive algorithms.
09 Sets This part focuses on the concept of sets, from both the mathematical definition and the implementation. Also, common set operations, as well as their implementations, are covered in great detail. (💡)
10 Searching and Sorting This part 10 focuses on searching and sorting for arrays. By the end of this part 10, you will understand how to use sorting and searching algorithms for arrays. Also, this article is a bit complicated for beginners, so as much as possible the visual aids are your friend. (👀) (😃)
11 Hash Tables A hash table is a fixed-sized data structure in which the size is defined at the start. This part 11 explains how hash tables work, and the method of generating a unique key. By the end of this part 11, you will understand various hashing techniques and know how to implement a hash table. (😃)
12 Stacks and Queues This part 12 covers stacks and queues (pronounced as kyooz (🔈), not kwewe, okay? hehehe (😅)); both are data structures used in the implementation of complex data structures. You'll learn what stacks and queues are, how they're used, when they're used, and how to implement them. (😃) Let's go! (🔥🔥🔥)
13 Linked Lists A linked list is a data structure in which each node (or element) points to another node. Unlike arrays, which have a fixed size, a linked list is a dynamic data structure. By the end of this part 13, you will understand how to implement and work with linked lists. And oh! (😮) There are two types of linked lists: singly (➡️) and doubly (↔️). Let’s examine the singly linked list first.(😃) Let's go! (🔥🔥🔥)
14 Caching Caching is the process of storing data in temporary memory so that it can be easily retrieved for later use if it is required again. As an example, a database keeps data cached to avoid re-reading the hard drive, and a web browser caches web images to avoid re-downloading. In this part 14, two caching techniques will be discussed: LFU and LRU caching.
15 Trees A general tree data structure is composed of nodes with children nodes. The top node is called the root node. This part 15 will explore different types of trees such as binary trees, binary search trees, and self-balancing binary search trees. First, this part 15 will cover what trees are and how they are structured. Then, it will cover methods of traversing (crossing or taking a zigzag path through) the tree data structure in detail. Finally, you will learn about binary search trees and self-balancing binary search trees to understand how to store searchable data. (😃)
16 Heaps A heap is an important data structure that returns the highest or lowest element in O(1) time. This part 16 will focus on explaining how heaps are implemented as well as how to work with them. One example is heap sort, which is a sorting algorithm based on heaps.
17 Graphs In this Part 17, you will learn graph basics, including fundamental terminology and graph types. The Part 17 will also cover working with these different graph types and methods of representing graphs in data structures. Finally, algorithms for traversing, searching, and sorting graphs are explored to solve problems such as finding the shortest path between two graph nodes. (👍)
18 Advanced Strings Part 18 will cover more complex string algorithms than those covered in the previous parts. Now that you have learned about certain other data structures, they should be easier to comprehend. Specifically, Part 18 will focus on string searching algorithms. (😉)
19 Dynamic Programming Dynamic programming involves breaking down problems into their subproblems. By solving the subproblems and saving those results in memory to access whenever a repeated subproblem needs to be solved, the algorithmic complexity decreases significantly (⬇️). To explain dynamic programming, let’s re-examine the Fibonacci sequence that was discussed in Part 8. Then Part 19 will cover the rules of dynamic programming and walk you through some examples to make the concepts more concrete. (😉)

Part 08: Recursion (😱 🔥 🌀)

This part 8 introduces the concept of recursion and recursive algorithms (remember, they are different; we will discuss this later (😉)). We will also discuss the definition of recursion and the fundamental rules for recursive algorithms. In addition, methods of analyzing the efficiency of recursive functions will be covered using mathematical notations. Finally, the part 8 exercises will help solidify this information and your understanding, so make sure you really read all sections of this article. (😀)

Introducing Recursion (🌀🌀)

In math, recursion refers to something defined in terms of itself. In computer science, a recursive function is a function that calls itself (🌀). Recursive functions solve complex problems through the divide-and-conquer method. Recursion is important because you will see it again and again in data structures. (😱)

Figure 8-1 shows a visual illustration of recursion

Rules of Recursion (📜🌀)

When recursive functions are implemented incorrectly, they cause programs to get stuck and never terminate. Infinite recursion results in a stack overflow (😐). A stack overflow occurs when the number of function calls exceeds the limited amount of call stack space (memory).

Note(📝): Remember, it's not the Stackoverflow you're thinking about right now (😐)

To implement recursion correctly, recursive functions must follow certain rules so that stack overflow is avoided. These rules are covered next:

• Base Case
• Divide-and-Conquer Method
• Classic Fibonacci Sequence
• Iterative Solution
• Recursive Solution
• Fibonacci Sequence: Tail Recursion
• Pascal's Triangle

Base Case (0️⃣◀️1️⃣)

In recursion, there must be a base case (also referred to as terminating case). Because recursive methods call themselves, they will never stop unless this base case is reached.

Stack overflow from recursion is most likely the result of not having a proper base case. In the base case, there are no recursive function calls.

Let’s examine the following function, which prints numbers counting down from $n$ to $0$ as an example:

Example:


// Learn Data Structures and Algorithms in JavaScript | Part 08
function countDownToZero(n) {
  // base case: stop when n goes below 0
  if (n < 0) {
    return; // stop the function
  } else {
    console.log(n);
    countDownToZero(n - 1); // count down by 1
  }
}
countDownToZero(5);




The base case for this function is when $n$ is smaller than $0$ . This is because the desired outcome was to stop counting at $0$ . If a negative number is given as input, the function will not print that number because of the base case.

Divide-and-Conquer Method

In computer science, the divide-and-conquer method is when a problem is solved by solving all of its smaller components. With the countdown example, counting down from $2$ can be solved by printing $2$ and then counting down from $1$ . It is necessary to make the problem smaller to reach the base case. Otherwise, if the recursive call does not converge toward a base case, a stack overflow occurs. Uh-oh (😮)
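To see what happens when a recursive call never converges, here is a minimal sketch (the function name `countUp` is just for illustration) that omits the base case entirely and crashes the call stack:

```javascript
// No base case: countUp(1) calls countUp(2), then countUp(3), and so on,
// until the call stack runs out of space.
function countUp(n) {
  return countUp(n + 1); // never reaches a stopping point
}

try {
  countUp(1);
} catch (e) {
  // In Node.js/V8 this is a RangeError: "Maximum call stack size exceeded"
  console.log(e instanceof RangeError); // true
}
```

The exact error message varies by engine, but the lesson is the same: without a base case that the input converges toward, every recursive function eventually overflows the stack.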

Classic Example: Fibonacci Sequence (➿➿➿)

The Fibonacci sequence is an infinite list of numbers, each of which is the sum of the previous two terms (starting with 1):

$1, 1, 2, 3, 5, 8, 13, 21 …$

So, how do you print the Fibonacci sequence? (❓) Let's see in the next section. (😉)

Iterative Solution: Fibonacci Sequence (📦➡️🐖➡️)

An iterative solution using a for loop may look something like this:

Example:

function getNthFibo(n) {
  if (n <= 1) return n;
  var sum = 0,
    last = 1,
    lastlast = 0;

  for (var i = 1; i < n; i++) {
    sum = lastlast + last;
    lastlast = last;
    last = sum;
  }
  return sum;
}


Execution:

getNthFibo(5); // returns 5

A for loop can be used to keep track of the last two elements (last and lastlast), and their sum yields the Fibonacci number. Now, how might this be done recursively? (❓) Let's see in the next section. (😉)

Recursive Solution: Fibonacci (🌀🌀🌀🌀)

The following shows the recursive solution:

Example:

function getNthFibo(n) {
  if (n <= 1) {
    return n;
  } else {
    return getNthFibo(n - 1) + getNthFibo(n - 2);
  }
}


Execution:

getNthFibo(5); // returns 5

• Base case: The base case for the Fibonacci sequence is reached when $n \le 1$ .
• Divide and conquer: By definition, the $n$ th Fibonacci number is the sum of the $(n-1)th$ and $(n-2)th$ . However, this implementation has a time complexity of $O\lparen 2^n\rparen$ .
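To see that exponential growth concretely, here is a rough sketch (the `calls` counter and the renamed function are added only for illustration) that counts how many times the naive recursive function is invoked:

```javascript
// Instrument the naive recursive Fibonacci with a call counter
// to observe its exponential number of calls.
var calls = 0;
function getNthFiboCounted(n) {
  calls++;
  if (n <= 1) return n;
  return getNthFiboCounted(n - 1) + getNthFiboCounted(n - 2);
}

getNthFiboCounted(10);
console.log(calls); // 177 calls just to compute the 10th Fibonacci number
```

Doubling the input to $n = 20$ pushes the count past 21,000 calls, which is the hallmark of $O\lparen 2^n\rparen$ growth.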

We will explore a more efficient recursive algorithm for the Fibonacci sequence using tail recursion in the next section. Let's go! (🔥🔥🔥)

Fibonacci Sequence: Tail Recursion (🔢➡️1️⃣)

A tail recursive function is a recursive function in which the recursive call is the last thing executed in the function. First, let’s look at the iterative solution again:

Example:

function getNthFibo(n) {
  if (n <= 1) return n;
  var sum = 0,
    last = 1,
    lastlast = 0;

  for (var i = 1; i < n; i++) {
    sum = lastlast + last;
    lastlast = last;
    last = sum;
  }
  return sum;
}


At each iteration, the following update happens:

$\lparen lastlast, last\rparen = \lparen last, lastlast + last\rparen$

Starting from $\lparen 0, 1\rparen$ , this produces the following sequence of updates:

$\lparen 0, 1\rparen = \lparen 1, 0 + 1\rparen$
$\lparen 1, 1\rparen = \lparen 1, 1 + 1\rparen$
$\lparen 1, 2\rparen = \lparen 2, 1 + 2\rparen$
$\lparen 2, 3\rparen = \lparen 3, 2 + 3\rparen$
$\lparen 3, 5\rparen = \lparen 5, 3 + 5\rparen$

With this structure, the following recursive function can be formed:

Example:

function getNthFiboBetter(n, lastlast, last) {
  if (n == 0) {
    return lastlast;
  }
  if (n == 1) {
    return last;
  }
  return getNthFiboBetter(n - 1, last, lastlast + last);
}


Execution:

getNthFiboBetter(5, 0, 1); // returns 5

Time Complexity: $O\lparen n\rparen$

This function executes $n$ times because $n$ is decremented by 1 on each call, with only a single recursive call per invocation.

Space Complexity: $O\lparen n\rparen$

The space complexity is also $O(n)$ because of the call stack used by this function. This will be explained further in the Recursive Call Stack Memory section later in this part.

Let’s examine another example, which is more complex: Pascal's Triangle

Pascal’s Triangle (📐📐📐)

Pascal’s triangle is a triangle in which each value is the sum of the two values above it (top left and top right), as shown below:

Base case: The base case for Pascal’s triangle is 1.

Divide and conquer: By the mathematical definition, a Pascal’s triangle is defined as the sum of its upper terms. Therefore, this can be expressed as the following:

$pascalTriangle\lparen row - 1, col\rparen + pascalTriangle\lparen row - 1, col - 1\rparen$

Example:

function pascalTriangle(row, col) {
  if (col == 0) {
    return 1;
  } else if (row == 0) {
    return 0;
  } else {
    return pascalTriangle(row - 1, col) + pascalTriangle(row - 1, col - 1);
  }
}


Execution:

pascalTriangle(5, 2); // returns 10

Big-O for Recursion (🌀🎯)

Recursive algorithms are much harder to analyze. To perform Big-O analysis on recursive algorithms, their recurrence relations must be analyzed. Let's go! (🔥🔥🔥)

Recurrence Relations (🌀↔️🌀)

For algorithms implemented iteratively, Big-O analysis is much simpler because loops clearly define when to stop and how much to increment in each iteration. For analyzing recursive algorithms, recurrence relations are used. A recurrence relation consists of a two-part analysis: the base case and the recursive case.

Example:

function getNthFibo(n) {
  if (n <= 1) {
    return n;
  } else {
    return getNthFibo(n - 1) + getNthFibo(n - 2);
  }
}


Execution:

getNthFibo(5); // returns 5

The base case has a time complexity of O(1). The recursive case calls itself twice. Let’s represent this as:

$T (n) = T (n − 1) + T (n − 2) + O(1)$
• Base case: $T (n) = O(1)$
• Recursive case: $T (n) = T (n − 1) + T (n − 2) + O(1)$

Now, this relation means that since $T (n) = T (n − 1) + T (n − 2) + O(1)$ , then (by replacing $n$ with $n−1$ ), $T (n − 1) = T (n − 2) + T (n − 3) + O(1)$ . Replacing $n−1$ with $n−2$ yields $T (n − 2) = T (n − 3) + T (n − 4) + O(1)$ . Therefore, you can see that for every call, there are two more calls. In other words, this has a time complexity of $O\lparen 2^n\rparen$ .

It helps to visualize it as such:

Calculating Big-O this way is difficult. Thankfully (😇), there is a concept known as the master theorem to help. The master theorem helps programmers easily analyze the time and space complexities of recursive algorithms. (😊)

Master Theorem (🔠🔡🔤)

The master theorem states the following:

• Given a recurrence relation of the form $T \lparen n\rparen = aT \lparen {n\over b}\rparen + O\lparen n^c\rparen$
• where $a \ge 1$ and $b > 1$ , there are three cases, which we will discuss next.

$a$ is the coefficient that is multiplied by the recursive call. $b$ is the logarithmic term, which is the term that divides the $n$ during the recursive call. Finally, $c$ is the polynomial term on the nonrecursive component of the equation.

The first case is when the polynomial exponent $c$ is less than $log_{b}(a)$ :

Case 1: If $c < log_{b}(a)$ then $T (n) = O(n^{log_{b}(a)})$

For example: $T (n) = 8T ({n\over 2}) + 1000n^2$

Identify(a, b, and c): $a = 8, b = 2, c = 2$

Evaluate: $log_2(8) = 3$ and $c = 2$ , so $c < 3$ is satisfied.

Result: $T (n) = O(n^{3})$

The second case is when $c$ is equal to $log_b(a)$ :

Case 2: If $c = log_{b}(a)$ then $T (n) = O(n^{c}log(n))$

For example: $T (n) = 2T ({n\over 2}) + 10n$

Identify(a, b, c): $a = 2, b = 2, c = 1$

Evaluate: $log_{2}(2) = 1$ , so $c = 1$ is satisfied.

Result: $T (n) = O(n^{c}log(n)) = O(n^{1}log(n)) = O(nlog(n))$

The third case is when $c$ is greater than $log_{b}(a)$ :

Case 3: If $c > log_{b}(a)$ then $T (n) = O(n^{c})$

For example: $T (n) = 2T ({n\over 2}) + n^2$

Identify(a,b,c): $a = 2, b = 2, c = 2$

Evaluate: $log_{2}(2) = 1$ and $c = 2$ , so $c > 1$ is satisfied.

Result: $T (n) = O(n^{2})$
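The three cases above can be sketched as a small helper function (illustrative only, not part of the master theorem itself); it compares $c$ against $log_b(a)$ with a small tolerance to cope with floating-point rounding:

```javascript
// Classify a recurrence T(n) = a*T(n/b) + O(n^c) into its master theorem case.
function masterTheoremCase(a, b, c) {
  var EPS = 1e-9;
  var critical = Math.log(a) / Math.log(b); // log_b(a)
  if (critical - c > EPS) return 1;            // Case 1: T(n) = O(n^(log_b(a)))
  if (Math.abs(critical - c) <= EPS) return 2; // Case 2: T(n) = O(n^c log n)
  return 3;                                    // Case 3: T(n) = O(n^c)
}

console.log(masterTheoremCase(8, 2, 2)); // 1 -> O(n^3)
console.log(masterTheoremCase(2, 2, 1)); // 2 -> O(n log n)
console.log(masterTheoremCase(2, 2, 2)); // 3 -> O(n^2)
```

The three calls correspond exactly to the three worked examples above.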

This section covered a lot about analyzing the time complexity of recursive algorithms. The memory used by recursive function calls should also be noted and analyzed for space complexity, which we will discuss next. Let's go! (🔥🔥🔥)

Recursive Call Stack Memory (🔁💾)

When a recursive function calls itself, that takes up memory, and this is really important in Big-O space complexity analysis.

For example, this simple function for printing from $n$ to $1$ recursively takes $O(n)$ in space:

Example:

function printNRecursive(n) {
  console.log(n);
  if (n > 1) {
    printNRecursive(n - 1);
  }
}
printNRecursive(10);



A developer can run this in a browser and see the calls accumulate in the call stack, as shown below:

As shown in Figures 8-3 and 8-4, each recursive call must be stored in memory until the base case is resolved. Recursive algorithms take extra memory because of the call stack.

Recursive functions have an additional space complexity cost that comes from the recursive calls that need to be stored on the call stack. The stack accumulates calls until the base case is resolved.

This is why an iterative solution may be preferred over the recursive solution. In the worst case, if the base case is implemented incorrectly, the recursive function will cause the program to crash because of a stack overflow error. (⚠️)
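As a sketch of that trade-off, the same countdown can be written iteratively with a loop, using $O(1)$ extra space instead of $O(n)$ stack frames (the name `printNIterative` is just for illustration):

```javascript
// Iterative equivalent of printNRecursive: no recursive calls,
// so the call stack never grows, no matter how large n is.
function printNIterative(n) {
  for (var i = n; i >= 1; i--) {
    console.log(i);
  }
}
printNIterative(10);
```

Unlike the recursive version, `printNIterative(100000)` will not risk a stack overflow.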

Summary (📚)

Recursive functions consist of two parts: the base case and the divide-and-conquer method. Analyzing the Big-O of these recursive algorithms can be done by using the master theorem. The master theorem needs the recurrence relation in the following form: $T (n) = aT ({n\over b}) +O(n^{c})$ . When using the master theorem, identify $a, b$ , and $c$ to determine which of the three cases of the master theorem it belongs to.

Finally, when implementing recursive algorithms, consider the additional memory caused by the call stack of the recursive function calls. Each recursive call requires a place in the call stack. When the call stack accumulates $n$ calls, the space complexity of the function is $O(n)$ .

Challenge (😤🔥👊)

These exercises on recursion cover varying problems to help solidify the knowledge gained from this part. The focus should be to identify the correct base case first, before solving the entire problem. (😃)

CONVERT DECIMAL (BASE 10) TO BINARY NUMBER

Problem: Convert a decimal (base 10) number to its binary representation. To do this, keep dividing the number by 2, each time recording the quotient and the remainder (modulus).
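For example, converting $232$ by repeated division works like this:

```latex
\begin{aligned}
232 \div 2 &= 116 && \text{remainder } 0 \\
116 \div 2 &= 58  && \text{remainder } 0 \\
58  \div 2 &= 29  && \text{remainder } 0 \\
29  \div 2 &= 14  && \text{remainder } 1 \\
14  \div 2 &= 7   && \text{remainder } 0 \\
7   \div 2 &= 3   && \text{remainder } 1 \\
3   \div 2 &= 1   && \text{remainder } 1 \\
1   \div 2 &= 0   && \text{remainder } 1
\end{aligned}
```

Reading the remainders from bottom to top gives $11101000$ , which is $232$ in binary.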

Sample Jumble Code:

var binaryString = "";
function base10ToString(n) {

function base10ToStringHelper(n) {
base10ToStringHelper(n);
return binaryString;

}
if (n < 2) {
binaryString += n;
return;
} else {
base10ToStringHelper(Math.floor(n / 2));
base10ToStringHelper(n % 2);
}

console.log(base10ToString(232)); // 11101000
}


Note(📝): Remember to fix the jumble code

Base case: The base case for this problem is when $n$ is less than 2. When it is less than 2, it can only be 0 or 1.

Time Complexity: $O(log_{2}(n))$

Time complexity is logarithmic because the recursive call divides $n$ by 2, which makes the algorithm fast. For example, for n = 8, it executes only three times; for n = 1024, it executes 10 times.

Space Complexity: $O(log_{2}(n))$

PRINT ALL PERMUTATIONS OF AN ARRAY

Problem: This is a classical recursion problem and one that is pretty hard to solve. The premise of the problem is to swap elements of the array in every possible position. First, let’s draw the recursion tree for this problem below:

Sample Jumble Code:

var temp = strArr[index1];
function swap(strArr, index1, index2) {

permute(strArr, 0, strArr.length - 1);
}
function permute(strArr, begin, end) {
if (begin == end) {
for (var i = begin; i < end + 1; i++) {
swap(strArr, begin, i);
permute(strArr, begin + 1, end);
swap(strArr, begin, i);
}
} else {

console.log(strArr);
}
}

function permuteArray(strArr) {

strArr[index1] = strArr[index2];
strArr[index2] = temp;
}

permuteArray(["A", "C", "D"]);
// ["A", "C", "D"]
// ["A", "D", "C"]
// ["C", "A", "D"]
// ["C", "D", "A"]
// ["D", "C", "A"]
// ["D", "A", "C"]


Base case: beginIndex is equal to endIndex

Time Complexity: $O(n!)$

Space Complexity: $O(n!)$

FLATTEN AN OBJECT

Problem: Given a JavaScript object like this:

var dictionary = {
  'Key1': '1',
  'Key2': {
    'a': '2',
    'b': '3',
    'c': {
      'd': '3',
      'e': '1'
    }
  }
};


Flatten it into: {'Key1': '1', 'Key2.a': '2', 'Key2.b': '3', 'Key2.c.d': '3', 'Key2.c.e': '1'}, where nesting is denoted by a . between the parent and child keys. See below:

To do this, iterate over each property and recursively check it for child properties, passing in the concatenated string name:

Sample Jumble Code:

var flattenedDictionary = {};
function flattenDitionaryHelper(dictionary, propName) {

flattenedDictionary[propName] = dictionary;
if (typeof dictionary != 'object') {
return;
}
for (var prop in dictionary) {
if (propName == "") {
flattenDitionaryHelper(dictionary[prop], propName + '.' + prop);
} else {

flattenDitionaryHelper(dictionary[prop], propName + prop);
}
}
}
function flattenDictionary(dictionary) {

flattenDitionaryHelper(dictionary, "");
return flattenedDictionary;
}


Base case: The base case for this problem is when the input is not an object.

Time Complexity: $O(n)$

Space Complexity: $O(n)$

WRITE A PROGRAM THAT RECURSIVELY DETERMINES IF A STRING IS A PALINDROME

Problem: A palindrome is a word spelled the same backward and forward such as deified, racecar, testset, and aibohphobia (the fear of palindromes):

Sample Jumble Code:

function isPalindromeHelper(word, beginPos, endPos) {
function isPalindromeRecursive(word) {
return isPalindromeHelper(word, 0, word.length - 1);
}
if (word.charAt(beginPos) != word.charAt(endPos)) {
return false;
}
if (beginPos >= endPos) {
return isPalindromeHelper(word, beginPos + 1, endPos - 1);
} else {
return true;
}
}

isPalindromeRecursive('hi'); // false
isPalindromeRecursive('iii'); // true
isPalindromeRecursive('ii'); // true
isPalindromeRecursive('aibohphobia'); // true
isPalindromeRecursive('racecar'); // true


The idea behind this one is that, with two indexes (one at the front and one at the back), you compare the characters at each step, moving inward until the front and back meet. (Go find the fix!)

Time Complexity: $O(n)$

Space Complexity: $O(n)$

Space complexity here is still $O(n)$ because of the recursive call stack. Remember that the call stack takes up memory even when no variables are declared or stored inside a data structure.