
Learn Data Structure & Algorithm in JavaScript | Part 01

Edison Pebojot (👨‍💻) ・ Updated on ・ 14 min read


Part Table of Contents Description
01 Big-O Notation By the end of this chapter, you will understand how to analyze an implementation of an algorithm with respect to both time (execution time) and space (memory consumed).
02 JavaScript Unique Parts Big-O is important for analyzing and comparing the efficiencies of algorithms. The analysis of Big-O starts by looking at the code and applying the rules that simplify the notation, because recognizing a loop as merely linear or quadratic is not enough on its own.
03 JavaScript Numbers Part 3 will focus on JavaScript number operations, number representation, the Number object, common number algorithms, and random number generation.
04 JavaScript Strings This part 4 will focus on strings, JavaScript String object, and the String objectโ€™s built-in functions. You will learn how to access, compare, decompose, and search strings for commonly used real-life purposes. In addition, the chapter will explore string encoding, decoding, encryption, and decryption.
05 JavaScript Arrays As a JavaScript developer, you will use the array often; it is the most commonly used data structure. Arrays in JavaScript come with a lot of built-in methods. By the end of this part, you will understand arrays and know how to choose the right method.
06 JavaScript Object This part will focus on what JavaScript objects are, how they are declared, and how their properties can be changed. In addition, this part will cover how JavaScript classes are implemented using prototypal inheritance. Also, this part will be short.
07 JavaScript Memory Management A variable takes up some memory. In C, the programmer allocates and deallocates memory manually. In contrast, modern JavaScript engines have garbage collectors that delete unused variables. However, there are pitfalls (unexpected behaviors) that developers can fall into (❗). This part will show these pitfalls and present techniques to help the garbage collector minimize memory problems.
08 Recursion Part 8 introduces the concept of recursion and recursive algorithms (remember, they are different; we will discuss them later (😉)). We will also discuss the definition of recursion and the fundamental rules for recursive algorithms.
09 Sets This part focuses on the concept of sets, from both the mathematical definition and the implementation perspective. Common set operations, as well as their implementations, are covered in great detail (💡).
10 Searching and Sorting Part 10 focuses on searching and sorting arrays. By the end of this part, you will understand how to use sorting and searching algorithms for arrays. This part is a bit complicated for beginners, so as much as possible, the visual aids are your friends (👀). (😃)
11 Hash Tables A hash table is a fixed-size data structure in which the size is defined at the start. Part 11 explains how hash tables work and the method of generating a unique key. By the end of this part, you will understand various hashing techniques and know how to implement a hash table. (😃)
12 Stacks and Queues Part 12 covers stacks and queues (pronounced as kyooz (🔈), not kwewe, okay? hehehe (😅)); both are data structures used in the implementation of complex data structures. You'll learn what stacks and queues are, how they're used, when they're used, and how to implement them. (😃) Let's go! (🔥🔥🔥)
13 Linked Lists A linked list is a data structure in which each node (or element) points to another node. Unlike arrays, which have a fixed size, a linked list is a dynamic data structure. By the end of Part 13, you will understand how to implement and work with linked lists. And oh! (😮) There are two types of linked lists: singly (➡️) and doubly (↔️). We'll examine the singly linked list first. (😃) Let's go! (🔥🔥🔥)
14 Caching Caching is the process of storing data in temporary memory so that it can be easily retrieved for later use if it is required again. As an example, a database keeps data cached to avoid re-reading the hard drive, and a web browser caches web images to avoid re-downloading them. In Part 14, two caching techniques will be discussed: LFU and LRU caching.
15 Trees A general tree data structure is composed of nodes with children nodes. The top node is called the root node. Part 15 will explore different types of trees, such as binary trees, binary search trees, and self-balancing binary search trees. First, it will cover what trees are and how they are structured. Then, it will cover methods of traversing (crossing or taking a zigzag path through) the tree data structure in detail. Finally, you will learn about binary search trees and self-balancing binary search trees to understand how to store searchable data. (😃)
16 Heaps A heap is an important data structure that returns the highest or lowest element in O(1) time. This part 16 will focus on explaining how heaps are implemented as well as how to work with them. One example is heap sort, which is a sorting algorithm based on heaps.
17 Graphs In Part 17, you will learn graph basics, including fundamental terminology and graph types. Part 17 will also cover working with these different graph types and methods of representing graphs in data structures. Finally, algorithms for traversing, searching, and sorting graphs are explored to solve problems such as finding the shortest path between two graph nodes. (👍)
18 Advanced Strings Part 18 will cover more complex string algorithms than those covered in the previous sections. Now that you have learned about certain other data models and structures, these algorithms should be easier to comprehend. Specifically, Part 18 will focus on string searching algorithms. (😉)
19 Dynamic Programming Dynamic programming involves breaking down problems into their subproblems. By solving the subproblems and saving those results in memory so they can be reused whenever a repeated subproblem appears, the algorithmic complexity decreases significantly (⬇️). To explain dynamic programming, let's re-examine the Fibonacci sequence that was discussed in Part 8. Then Part 19 will cover the rules of dynamic programming and walk you through some examples to make the concepts more concrete. (😉)
20 Bit Manipulation Bit manipulation is an advanced topic that JavaScript developers typically do not need to know. However, you should learn a bit about bit manipulation if you want to implement high-performance server-side code. Understanding bit manipulation requires some knowledge of digital logic. Any introductory course in discrete math or circuits would be helpful to understand these concepts.

Introduction (☝️)


The motivation (🔥🔥) for writing this was the lack of available information about data structures and algorithms written in JavaScript; as you can see, most data structures and algorithms books are written in Java or C++.

I found this strange (😑) because many of the job opportunities for software engineering and development today require JavaScript knowledge; it is the only language that can be used to write the entire stack, including the front end and back end. (👌)

JavaScript follows the prototypal inheritance pattern, unlike Java and C++, which follow the classical inheritance pattern (sorry, Java and C/C++ 😅), so there are some differences when writing data structures in JavaScript. The classical inheritance pattern allows inheritance by creating a blueprint-like form that objects follow during inheritance. Here is an example of inheritance in Java:

👉 Inheritance in Java:

class Vehicle {
  protected String brand = "Ford";
  // every Vehicle can honk
  public void honk() {
    System.out.println("Tuut, tuut!");
  }
}

class Car extends Vehicle {
  // Car inherits brand and honk() from Vehicle
  private String modelName = "Mustang";
  public static void main(String[] args) {
    Car myFastCar = new Car();
    myFastCar.honk();
    System.out.println(myFastCar.brand + " " + myFastCar.modelName);
  }
}

👉 Output:

Tuut, tuut!
Ford Mustang

However, the prototypal inheritance pattern means linking objects together (📝📝) through a prototype and extending their properties, rather than copying from a class blueprint. Here is an example of prototypal inheritance in JavaScript:

👉 Inheritance in JavaScript:

function Sandwich(bread, ham, butter, cheese, tomato, cucumber) {
    this.bread = bread;
    this.ham = ham;
    this.butter = butter;
    this.cheese = cheese;
    this.tomato = tomato;
    this.cucumber = cucumber;
}

// log() is defined on the prototype, so all Sandwich instances share it
Sandwich.prototype.log = function () {
    return "Let's add some " + this.bread + " and " + this.cheese;
}

var sandwich = new Sandwich("bread", "ham", "butter", "cheese", "tomato", "cucumber");

console.log(sandwich.log());


👉 Output:

Let's add some bread and cheese

Part 01: Big-O Notation (😱 🔥 ⚡)

By the end of this chapter, you will understand how to analyze an implementation of an algorithm with respect to both time (execution time) and space (memory consumed).

Big-O Notation Primer (😮)

The Big-O notation measures the complexity of an algorithm. In Big-O notation, n represents the number of inputs. The question is: "What will happen as n approaches infinity? (❓❓)" Big-O notation tells you how efficient the algorithm is.


Figure 1-0. Common Big-O Complexities (Live)


Figure 1-1. Common Big-O Complexities (Static)

Figures 1-0 and 1-1 illustrate these common time complexities, as a live demo and as a static image.

Common Examples (🤘)

See Figure 1-1: O(1) does not change with the input size; hence, O(1) is referred to as being constant time. As a minimal sketch of my own (the function name exampleConstant is just for illustration), here is what constant time looks like in code:
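Constant Time: (🕐)

function exampleConstant(n) {
  // constant time: a single operation, no matter how large n is
  console.log(n);
}
exampleConstant(10);

O(n), on the other hand, is linear time, and an example of an O(n) algorithm is printing numbers from 0 to n - 1, as shown here: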

Linear Time: (🕙)

function exampleLinear(n) {
  // linear time: the loop body runs n times
  for (var i = 0; i < n; i++) {
    console.log(i);
  }
}
exampleLinear(10);

Similarly, O(n^2) is quadratic time, and O(n^3) is cubic time. Examples of these complexities are shown here:

Quadratic Time: (🕔)

function exampleQuadratic(n) {
  // quadratic time: two nested loops over the input
  for (var i = 0; i < n; i++) {
    console.log(i);
    for (var j = i; j < n; j++) {
      console.log(j);
    }
  }
}
exampleQuadratic(10);

Cubic Time: (🕣)

function exampleCubic(n) {
  // cubic time: three nested loops over the input
  for (var i = 0; i < n; i++) {
    console.log(i);
    for (var j = i; j < n; j++) {
      console.log(j);
      for (var k = j; k < n; k++) {
        console.log(k);
      }
    }
  }
}
exampleCubic(10);

Finally, another example, with logarithmic time complexity, is printing the powers of 2 between 1 and n. For example, exampleLogarithmic(100) will print the following:

1, 2, 4, 8, 16, 32, 64
function exampleLogarithmic(n) {
  // i doubles each iteration, so the loop runs about log2(n) times
  for (var i = 1; i < n; i = i * 2) {
    console.log(i);
  }
}
exampleLogarithmic(100);

Rules of Big-O Notation

Let's represent an algorithm's complexity as f(n). n represents the number of inputs, f(n)_time represents the time needed, and f(n)_space represents the space (additional memory) needed for the algorithm. It can be challenging to calculate f(n), but Big-O notation provides some fundamental rules that help developers compute it.

  • Coefficient rule: If f(n) is O(g(n)), then kf(n) is O(g(n)), for any constant k > 0.
  • Sum rule: If f(n) is O(h(n)) and g(n) is O(p(n)), then f(n) + g(n) is O(h(n) + p(n)).
  • Product rule: If f(n) is O(h(n)) and g(n) is O(p(n)), then f(n)g(n) is O(h(n)p(n)).
  • Transitive rule: If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).
  • Polynomial rule: If f(n) is a polynomial of degree k, then f(n) is O(n^k).
  • Log of a power rule: log(n^k) is O(log(n)) for any constant k > 0 (see the quick check below).
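To see why the log of a power rule holds, note that log(n^k) = k * log(n), and the constant k is then dropped by the coefficient rule, leaving O(log(n)). Here is a quick numeric check of my own:

var n = 1000, k = 3;
// log(n^k) equals k * log(n); the constant factor k is negligible in Big-O
console.log(Math.log(Math.pow(n, k))); // 20.7232...
console.log(k * Math.log(n));          // 20.7232...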


Note: Don't try to understand all of them for now. Just give your special attention to the first three rules and the polynomial rule, because they are the most commonly used. I'll discuss each of those rules in the following sections.

Coefficient Rule: "Get Rid of Constants" (💯)

This rule simply requires you to ignore any constants that are not related to the input size. Coefficients (or constants) in Big-O are negligible with large input sizes. This is the most important rule of Big-O notation.

If f(n) is O(g(n)), then kf(n) is O(g(n)), for any constant k > 0.

This means that both kf(n) and f(n) have the same Big-O notation of O(g(n)) as long as k is a constant. Here is an example code block for f(n):

function a(n) {
  var count = 0;
  // the loop adds to count exactly n times
  for (var i = 0; i < n; i++) {
    count += 1;
  }
  return count;
}

This block of code has f(n) = n, because it adds to count n times. Therefore, this function is O(n). Here's another example code block, this time for kf(n):

function a(n) {
  var count = 0;
  // the loop now runs 5 * n times
  for (var i = 0; i < 5 * n; i++) {
    count += 1;
  }
  return count;
}

This block has kf(n) = 5n. By the coefficient rule, the first two examples both have a Big-O notation of O(n), that is, O(g(n)). The following code block demonstrates another function with linear time complexity, but with an additional operation (count += 1) after the loop:

function a(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    count += 1;
  }
  // one extra operation that does not depend on n
  count += 1;
  return count;
}

This block of code has f(n) = n + 1 (a constant offset such as -1 works the same way). The +1 comes from the last operation (count += 1). This still has a Big-O notation of O(n), because that one operation does not depend on the input n, as we said earlier.

Sum Rule: "Add Big-Os Up" (➕)

The sum rule is easy to understand: time complexities can be added. Imagine a master algorithm (the function a(n)) that involves two other algorithms (the two for loops). The Big-O notation of that master algorithm is simply the sum of the other two Big-O notations.

If f(n) is O(h(n)) and g(n) is O(p(n)), then f(n) + g(n) is O(h(n) + p(n))

Note: It is important to remember to apply the coefficient rule after applying this rule.

The following code block demonstrates a function with two main loops whose time complexities must be considered independently:

function a(n) {
  var count = 0;
  // first loop: n operations
  for (var i = 0; i < n; i++) {
    count += 1;
  }
  // second loop: 5 * n operations
  for (var i = 0; i < 5 * n; i++) {
    count += 1;
  }
  return count;
}

In this example, f(n) = 1n (or simply n) and g(n) = 5n. Adding them gives f(n) + g(n) = 6n, which corresponds to h(n) + p(n) = 6n. However, when applying the coefficient rule, the final result is O(n). This is because both loops abide by the coefficient rule: if f(n) is O(g(n)), then kf(n) is O(g(n)).
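The sum rule also applies when the two parts have different complexities; the bigger term simply dominates. Here is a small sketch of my own (the function exampleSum is hypothetical, not from the original) that combines a linear loop with a pair of nested loops, giving O(n + n^2), which simplifies to O(n^2):

function exampleSum(n) {
  var count = 0;
  // first part: O(n)
  for (var i = 0; i < n; i++) {
    count += 1;
  }
  // second part: O(n^2), which dominates the sum
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      count += 1;
    }
  }
  return count;
}
console.log(exampleSum(10)); // 110 = 10 + 10^2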

Product Rule: "Multiply Big-Os" (❌)

The product rule simply states how Big-Os can be multiplied.

If f(n) is O(h(n)) and g(n) is O(p(n)), then f(n)g(n) is O(h(n)p(n))

The following code block demonstrates a function with two nested for loops (remember, nesting loops is what makes this quadratic via the product rule):

function a(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    count += 1;
    // the inner loop must use its own variable (j), not i
    for (var j = 0; j < 5 * n; j++) {
      count += 1;
    }
  }
  return count;
}

In this example, f(n) = 5n × n, because the inner loop runs 5n times and is executed n times by the outer loop. Therefore, this results in a total of 5n^2 operations. Applying the coefficient rule, the result is O(n^2).
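Strictly speaking, the function performs 5n^2 + n increments (the extra n comes from the outer loop's own count += 1); the lower-order term is dropped and the coefficient rule removes the 5, leaving O(n^2). As a quick check of my own for n = 10:

console.log(a(10)); // 510 = 10 + 5 * 10^2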

Polynomial Rule: "Big-O to the Power of k" (📈)

The polynomial rule states that polynomial time complexities have a Big-O notation of the same polynomial degree (recall the degree of a polynomial from basic algebra).

If f(n) is a polynomial of degree k, then f(n) is O(n^k)

The following code block has only one for loop, but it has quadratic time complexity, because the loop runs n*n times, which is equivalent to two nested loops:

function a(n) {
  var count = 0;
  // a single loop that runs n * n times
  for (var i = 0; i < n * n; i++) {
    count += 1;
  }
  return count;
}

In this example, f(n) = n^2 because the loop runs n*n iterations, which is n^2. This follows the polynomial rule: if f(n) is a polynomial of degree k, then f(n) is O(n^k), so this function is O(n^2).
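The same rule covers higher degrees. As a sketch of my own (the function exampleCubicPolynomial is just for illustration), a single loop that runs n*n*n times is a polynomial of degree 3, so it is O(n^3):

function exampleCubicPolynomial(n) {
  var count = 0;
  // n * n * n iterations: a polynomial of degree 3, hence O(n^3)
  for (var i = 0; i < n * n * n; i++) {
    count += 1;
  }
  return count;
}
console.log(exampleCubicPolynomial(10)); // 1000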

Summary (📚)

Big-O is important for analyzing and comparing the efficiencies of algorithms. The analysis of Big-O starts by looking at the code and applying the rules that simplify the notation, because recognizing a loop as merely linear or quadratic is not enough on its own. The following are the most often used rules:

  • Eliminating constants
  • Adding up Big-O Notations
  • Multiplying Big-O Notations
  • Determining the polynomial of the Big-O notation

Up Next 👉 Part 02: JavaScript Unique Parts 🔥🔥 (July 01, 2020)


Posted on by:

Edison Pebojot (👨‍💻)

@edisonnpebojot

I started using computers and writing software around 2008 and 2009, and I taught myself programming. However, I wasn't a typical "nerd-klutz". 😅

Discussion


Great organizing and visuals, but the example of inheritance isn't right.
JavaScript will link the objects together. Sorry, my English is not good, but I will try to explain: it's like pointers in C. When one object is assigned to another, they are the same in memory, so if you change one, the other will be affected by it.

var animal = {
  name: "",
  canFly: false,
  quantity: 0
};

// Object(animal) returns the same object reference, not a copy
var cheetah = Object(animal);
cheetah.name = "jaafar";
cheetah.quantity = 200;

console.log(animal);  // { name: 'jaafar', canFly: false, quantity: 200 }
console.log(cheetah); // { name: 'jaafar', canFly: false, quantity: 200 }
 

Yeah, you're right. This is one of the examples of inheritance from the Mozilla Developer Network without prototypes: (developer.mozilla.org/en-US/docs/W...)
But I will fix it using the prototype. The example was actually bad. This is based on the Mozilla Developer Network: (developer.mozilla.org/en-US/docs/W...)

function Sandwich(bread, ham, butter, cheese, tomato, cucumber) {
    this.bread = bread;
    this.ham = ham;
    this.butter = butter;
    this.cheese = cheese;
    this.tomato = tomato;
    this.cucumber = cucumber;
}

Sandwich.prototype.log = function () {
    return "Let's add some " + this.bread + " and " + this.cheese;
}

var sandwich = new Sandwich("bread", "ham", "butter", "cheese", "tomato", "cucumber");

sandwich.log();

 
 
[deleted]
 

Did you mean the logarithmic power rule?

 

Can you please clarify so that I can fix any of my typos? Thank you in advance! (😃)

 

Great post! The JS community really needs more of this. Looking forward to reading the rest of your posts. I would like to add that JS isn't the only full stack lang tho. Definitely one of the most popular, but there are others.

 
 

What a terrible example of "inheritance pattern".

 
 

I will always post 3 times a week! 😃